SREcon17 Americas Program Selection Process
The following post was collaboratively written by the co-chairs for the SREcon17 Americas conference (Liz Fong-Jones and Kurt Andersen) and several of the program committee members (Murali Suriar and Betsy Beyer). It is intended to provide greater insight into the selection process we used for this conference, which may not entirely match the strategies that other conference committees employ. The program committee participants are volunteers who have experience in the field and the time (and support from their companies) to invest in the betterment of the profession through their work on the conference.
After much effort by many people, the program for SREcon17 Americas has been finalized. We received 224 proposals for talks (lightning or otherwise) from 167 distinct speakers. Collectively, the program committee spent hundreds of hours reviewing the content of these proposals before settling on the agenda, which is now published.
Several submitters asked for feedback about why a particular talk was declined. While we’re unable to provide specific feedback on each talk, we felt that a blog post on the specifics of our methodology might be useful and instructive. So, here we go.
Goals
Our goal when setting the agenda for the conference was to come up with a program that was balanced across the different themes while remaining focused and accessible, as well as welcoming to new speakers. We wanted to ensure that most of the content from speakers and plenary sessions would be relevant and useful to people performing SRE work, without presupposing access to proprietary technologies. The conference is also intended to help people new to the discipline, and those attempting to bring SRE practices and principles to organizations that have historically operated in very different ways.
While not explicitly mentioned in the Call for Participation (CfP), two sub-themes emerged as we selected talks: SRE practices across the spectrum of small environments to large enterprises, and SRE practices in not-primarily-digital enterprises (organizations whose primary business function or product is not digital).
Constraints
Going into the review process, we knew (a) how many rooms we had available and (b) the rough timing for sessions, which gave us an upper bound on talk-hours. In addition, we added self-imposed constraints around some aspects of the conference after discussion amongst the program committee.
We limited both plenary sessions and panel discussions. The limit on plenary sessions is pragmatic: we want to give people as much choice as possible in the talks they can attend, so we restricted ourselves to four plenary slots over the two days. Our hope is that the talks for these slots, which are exclusively from invited speakers, will be of general interest and value to all attendees.
The limit on panel discussions was self-imposed because of the challenges of running productive panel sessions. The success of a panel hinges on having an effective moderator and engaging speakers who have planned and coordinated in order to make the most of the allocated time. In this case, we selected a single panel proposal, for which the organizer had coordinated with the panelists and the conference co-chairs prior to the ranking and scoring cycle. A secondary concern is that it is difficult for people not at the conference to benefit from panel discussions. We’re aware that not everyone who would like to attend SREcon is able to; as part of USENIX’s Open Access Policy, we’re committed to publishing videos and slides from all presentations. Panels often have no slide materials for later reference because of the dynamic interchange among the participants.
The committee also considered talk duration: how long should individual talk slots be? We chose to run the schedule on a 60-minute cadence, with five minutes between sessions for people to circulate in and out of the rooms. The program committee discussed balancing “full length” and “half length” sessions. Shorter talks allow us to cover more topics, tend to be more focused, and are better suited to people with shorter attention spans. They also allow for better interleaving with the “hallway track”: if you want to chat with someone over a coffee, having more, shorter sessions gives you flexibility in what to skip. On the other hand, some topics are difficult to cover sufficiently in half an hour. After scoring the talks without regard for length, it turned out that we could split the conference time almost equally between 30-minute and 60-minute talks (meaning there are about twice as many 30-minute talks). There are also two one-hour lightning talk sessions for very concise speakers who can fit their talks into 5- or 10-minute slots. Even within the lightning talk selection, an organic balance between the 5-minute and 10-minute talks emerged.
The Review Process
Previous SREcon conferences have subsisted on a pastiche of ad hoc Google forms and derived spreadsheets. This year, USENIX evaluated several CfP systems and settled on Submittable, which gave the program chairs a fair amount of flexibility in structuring the CfP. Additionally, for the first time at an SREcon event, the program committee could undertake blind review. All program committee members were involved in voting on all talk proposals (full length and lightning).
Blind Review / Initial Scoring
In order to decrease instances of bias (conscious and otherwise), the initial review of talk proposals was done blindly. Program committee members scored talks based only on the title, short and long descriptions, and “notes to committee” fields. Only the program committee chairs could view speaker names, affiliations, and biographies. Since this blind review process was new this year, a few submitters included identifying information in the fields that were exposed for scoring.
Scoring could begin while the CfP was still open, although most of the committee chose to batch their scoring once the submission window had closed. After the CfP submission deadline passed, we set a date for everyone to complete their initial review. For the first-pass ranking, scoring was done on a simple -1 (No), 0 (Maybe), +1 (Yes) basis. Talks that were perceived as being “pitchy” or a bad fit (more details below) received “No” votes, while talks perceived as “probably interesting to a significant group of attendees” received “Yes” votes. Reviewers could also post comments with questions or additional observations about a session.
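To make the mechanics concrete, here is a minimal sketch of a first-pass tally under these rules. The talk names and votes are invented, and this is not the actual Submittable workflow, just an illustration of the scoring scheme:

```python
# Each reviewer casts -1 (No), 0 (Maybe), or +1 (Yes) per proposal;
# a proposal's first-pass score is the sum of its votes.
# Talk names and ballots below are hypothetical.
votes = {
    "Talk A": [+1, +1, 0, -1, +1],
    "Talk B": [0, -1, -1, 0, +1],
}

scores = {talk: sum(ballots) for talk, ballots in votes.items()}
print(scores)  # {'Talk A': 2, 'Talk B': -1}
```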
Scoring criteria
As mentioned previously, our goals in building the agenda were to make the conference as focused and accessible as possible. Different reviewers came to the process with different biases, which led to very interesting discussions in the review phase. However, a common set of criteria emerged fairly early on in the process, which we’ve attempted to summarize below.
In general, we preferred talks which provided actionable advice or takeaways for attendees. Pointers to documentation, how-tos, and open source software were positive indicators here; sales pitches for commercial products, or descriptions of proprietary internal solutions without at least an associated research paper were viewed less favorably.
We also looked at applicability: discussions of very niche bugs/features/problems (“one weird trick to fix clustered widgets when you’re using Acme foobars made in February”) were generally passed over in favor of more broadly useful information.
Final Selections
After the initial scoring by the committee, we had a list of talks with numeric scores ranging from -12 to +12. Based on our rough target for the number of talks, we picked a score threshold that yielded approximately that many, and then deanonymized the proposals to the reviewers to allow more detailed discussion.
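As a rough sketch of this step (the scores and target count are invented, and the real process involved human judgment around the threshold, not just arithmetic):

```python
# Hypothetical sketch: choose the cut-off score such that accepting every
# proposal scoring at or above it yields at least `target` talks.
def pick_threshold(scores, target):
    ranked = sorted(scores.values(), reverse=True)
    return ranked[min(target, len(ranked)) - 1]

scores = {"Talk A": 9, "Talk B": 4, "Talk C": 4, "Talk D": -2}
threshold = pick_threshold(scores, target=2)  # -> 4
shortlist = [t for t, s in scores.items() if s >= threshold]
# Ties at the threshold ("Talk B" and "Talk C") can push the count
# slightly past the target, which is why it's only approximate.
```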
We looked carefully at situations where a single presenter submitted multiple proposals and narrowed the selection to the “best” single proposal by a given presenter. Past experience has shown that having a presenter responsible for more than one session ends up degrading that participant's overall conference experience. SREcon thrives on active engagement by all the attendees and we did not want to impair that, even for prolific contributors.
We had a number of cases where the title conveyed a significantly different “tone” than the description. Some proposals lacked a logical flow, while others didn’t quite connect with the conference themes and topics. These characteristics counted against a proposal, though we tried to give the benefit of the doubt in cases where it appeared that the presenter’s native language was not English.
There were a few cases where multiple presenters proposed talks on highly similar topics (so we de-duped in favor of the more highly scored proposal), or where a single theme appeared in an imbalanced number of talks.
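Taken together, the one-talk-per-presenter rule and this topic-level de-duplication amount to a “keep the highest-scoring proposal per key” filter. A hypothetical sketch (the proposals and field names are invented, not the real submission schema):

```python
# Keep only the highest-scoring proposal for each value of `key`
# (e.g. per presenter, then per topic). All data below is invented.
def dedupe(proposals, key):
    best = {}
    for p in proposals:
        k = p[key]
        if k not in best or p["score"] > best[k]["score"]:
            best[k] = p
    return list(best.values())

proposals = [
    {"title": "Taming Pagers", "presenter": "alice", "topic": "on-call", "score": 7},
    {"title": "Pager Zen",     "presenter": "alice", "topic": "on-call", "score": 4},
    {"title": "SLOs at Scale", "presenter": "bob",   "topic": "slos",    "score": 6},
]

shortlist = dedupe(dedupe(proposals, "presenter"), "topic")
# -> "Taming Pagers" and "SLOs at Scale"
```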
We also had some talks that seemed better suited to the shorter lightning talk format. In such cases, we asked the submitter to resubmit their content as a focused lightning talk (the deadline for lightning talks was considerably later than for regular sessions).
To finalize the selections near the cut-off threshold, we encouraged reviewers to take another look at the talks they personally had scored highly but which had not surpassed the threshold. This led to discussion in comment threads on each talk, with various reviewers justifying their high or low scores. It also provided more data; many comments were along the lines of, “This talk could be really interesting, but it depends on the speaker.”
Another input to the filtering process was an attempt to branch out beyond the companies traditionally associated with SRE. We paid particular attention to proposals from outside the usual SRE/devops community: small companies with one or two SREs, large enterprises moving away from a traditional operations model, tech companies from outside the US, and so on. We gave some preference to talks from these contributors when they were near the acceptance threshold.
This gave us the initial acceptance list. We also retained about 12 talks on a waitlist in case any accepted speakers could not follow through, and we declined the rest of the proposals.
A separate, later cycle for lightning talks followed a similar process, but a subset of the committee performed the final selections.
Late Program Addition
With the most recent eruption in the blogosphere pertaining to hostile workplace environments, the program chairs made a late change to the program in order to confront this problem head-on, rather than letting it fester as an undercurrent. We are delighted that Ashe Dryden will be facilitating a session on Tuesday about what we can all do as a profession to address this topic: “SRE Isn't for Everyone, but It Could Be.”
Outcome
This process, from the opening of the CfP at the beginning of November through the final selection of lightning talks in February, has resulted in a conference program that we hope will prove informative and entertaining to all attendees. We also hope that publicizing this process will prove interesting and useful, both to potential future speakers and to organizers of other conferences.