Julia Evans

Hacker School alumna


Open sourced talks!

The wonderful Sumana Harihareswara recently tweeted that she released her talk A few Python Tips as CC-BY. I thought this was a super cool idea!

After all, if you’ve put in a ton of work to put a talk or workshop together, it’s wonderful if other people can benefit from that as much as possible. And none of us have an unlimited amount of time to give talks.

Stephanie Sy, a developer in the Philippines, emailed me recently to tell me that she used parts of my pandas cookbook to run a workshop. IN THE PHILIPPINES. How cool is that? She put her materials online, too!

So if you want to give a talk about how to do data analysis with Python, you too can reuse these materials in any way you see fit! You can get materials for talks I’ve given on this page of talks. Just attribute me, and maybe tell me about it because THAT WOULD BE COOL :)

In other open source talks news, Software Carpentry also has MIT-licensed lesson materials! Want to give a novice introduction to git? Go to the SWC bootcamp repository and look in novice/git! They even take pull requests.

Ruby Rogues podcast: systems programming tricks!

If you listen to the Ruby Rogues podcast this week, you will find me! We talked about using systems programming tools (like strace) to debug your regular pedestrian code and about building an operating system in Rust, but also about other things I didn’t expect, like how asking stupid questions is an amazing way to learn.

Ruby Rogues also has a transcript of the entire episode, an index, and links to everything anyone referenced during the episode, including apparently 13 posts from this blog (!). I don’t even understand how this is possible, but apparently it is! It was a fun time, and apparently it is totally okay to spend a Ruby podcast discussing Rust, statistics, strace, and, well… not Ruby :)

Questions for a senior engineer genie

The amazing Julia Grace asked on twitter:

If you are early in your eng career, what types of topics (tech/career/other) would you be most interested in discussing w/ senior engs?

If you’re reading this, you should also know about her Tips for Women: Finding Software Engineering Jobs. Almost all of it applies to all people looking for tech jobs, not just women.

So! This got me thinking! If I had a super wise senior engineer friend who lived next door and I talked to all the time, what would I ask them questions about?

I realized that these questions sound like “uh julia is your job terrible” (“how do I handle being condescended to?” =D). But I’m largely just extremely interested in proactively avoiding terrible job situations! For instance, I really hate being condescended to, so even if it happens very rarely I would like to learn how to make it never happen :) And some of these are things that don’t happen to me at all, but that I’ve seen friends dealing with.

Another thing about these questions is that I’m not actually interested in Twitter responses to them or comments – if you know me IRL and you find them interesting, talk to me about them!

I am also interested in more questions! I love questions.

Most of these are not really about tech at all! A lot are general “how to deal with weird social situations at work” discussions.

Okay cool now we’ve made it past “JULIA ARE YOU OKAY” (yes), “ARE ALL THESE THINGS HAPPENING TO YOU RIGHT NOW” (no), “ARE ANY OF THEM” (sure! everyone has to deal with technical disagreements!), “I KNOW ALL THE ANSWERS WHERE CAN I SUBMIT THEM” (no), and “THESE AREN’T EVEN QUESTIONS ABOUT TECH” (yes).

Here are the questions / topics. Maybe you will find them useful and have conversations about them!

  • How to find mentors? How do you know if you’re doing well? How to become a senior eng :-)
  • Strategies for building consensus on technical decisions among a bunch of smart people with very different opinions
  • How do you know when you should quit your job?
  • How to do good work and also go home and go biking
  • How to handle your organization being subtly sexist / racist / classist in a way that you can’t specifically call out because it’s not a specific incident. (see The Ping-Pong theory of Tech Sexism)
  • Strategies (when interviewing for a job) for avoiding toxic work environments so you don’t have to quit
  • How to handle being condescended to by people who know more (or even who don’t!).
  • How to decide when to agree with someone more senior and when to push back. (because it’s not “just because you really think you’re right”)
  • What to do if people don’t take you seriously as a developer
  • Making sure you learn new things while employed.
  • Deciding what language to write a new project in
  • Processes for avoiding security holes in your code (though this is really an org-level thing)
  • MEETINGS. How to be an effective awesome person in meetings so that you contribute something useful and people listen to you
  • When to escalate issues you’re having and when… not to
  • How to know when you should ask for a raise (?!!)
  • How to choose which projects to work on (assuming you have choice about this)
  • How to build a good relationship with your manager.
  • How to handle having a bad manager, or a manager who you don’t work well with.
  • How to have technical disagreements constructively without making it personal
  • How to be an awesome coworker who everyone wants to work with! (be helpful? be super reliable? give good estimates? great code review? build things quickly? all of the above? SO MANY AXES)

Fun with stats: How big of a sample size do I need?

[There’s a version of this post with calculations on nbviewer!]

I asked some people on Twitter what they wanted to understand about statistics, and someone asked:

“How do I decide how big of a sample size I need for an experiment?”

Flipping a coin

I’ll do my best to answer, but first let’s do an experiment! Let’s flip a coin ten times.
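Here’s one way you could write a flip_coin like this (a minimal sketch using numpy and pandas – the actual code behind this post isn’t shown, so this is a guess at it):

    import numpy as np
    import pandas as pd

    def flip_coin(n):
        # Flip a fair coin n times and tally heads vs tails.
        flips = np.random.choice(["heads", "tails"], size=n)
        return pd.Series(flips).value_counts()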

> flip_coin(10)
heads    7
tails    3

Oh man! 70% were heads! That’s a big difference.

NOPE. This was a random result! 10 as a sample size is way too small to decide that. What about 20?

> flip_coin(20)
heads    13
tails     7

65% were heads! That is still a pretty big difference! NOPE. What about 10000?

> flip_coin(10000)
heads    5018
tails    4982

That’s very close to 50%.

So what we’ve learned already, without even doing any statistics, is that if you’re doing an experiment with two possible outcomes, and you’re doing 10 trials, that’s terrible. If you do 10,000 trials, that’s pretty good, and if you see a big difference, like 80% / 20%, you can almost certainly rely on it.

But if you’re trying to detect a small difference like 50.3% / 49.7%, that’s not a big enough difference to detect with only 10,000 trials.

So far this has all been totally handwavy. There are a couple of ways to formalize our claims about sample size. One really common way is by doing hypothesis testing. So let’s do that!

Let’s imagine that our experiment is that we’re asking people whether they like mustard or not. We need to make a decision now about our experiment.

Step 1: make a null hypothesis

Let’s say that we’ve talked to 10 people, and 7/10 of them like mustard. We are not fooled by small sample sizes and we ALREADY KNOW that we can’t trust this information. But your brother is arguing “7/10 seems like a lot! I like mustard! I totally believe this!”. You need to argue with him with MATH.

So we’re going to make what’s called a “null hypothesis”, and try to disprove it. In this case, let’s make the null hypothesis “there’s a 50/50 chance that a given person likes mustard”.

So! What’s the probability of seeing an outcome like 7/10 if the null hypothesis is true? We could calculate this, but we have a computer and I think it’s more fun to use the computer.

So let’s pretend we ran this experiment 10,000 times, and the null hypothesis was true. We’d expect to sometimes get 10/10 mustard likers, sometimes 0/10, but mostly something in between. Since we can program, let’s run the asking-10-people experiment 10,000 times!
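Here’s a minimal sketch of one way to run that simulation with numpy (not necessarily how the numbers below were generated):

    import numpy as np

    # Under the null hypothesis, each of 10 people likes mustard with
    # probability 0.5. Run that 10-person survey 10,000 times.
    n_likers = np.random.binomial(n=10, p=0.5, size=10_000)

    # Tally how many surveys had 0, 1, ..., 10 mustard-likers
    for k, count in enumerate(np.bincount(n_likers, minlength=11)):
        print(k, count)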

I programmed it, and here are the results:

0        7
1      102
2      444
3     1158
4     2002
5     2425
6     2094
7     1176
8      454
9      127
10      11

Or, on a pretty graph:

Okay, amazing. The next step is:

Step 2: Find out the probability of seeing an outcome this unlikely or more if the null hypothesis is true

The “this unlikely or more” part is key: we don’t want to know the probability of seeing exactly 7/10 mustard-likers, we want to know the probability of seeing 7/10 or 8/10 or 9/10 or 10/10.

So if we add up all the times when 7/10 or more people liked mustard by looking at our table, that’s about 1700 times, or 17% of the time.

We could also calculate the exact probabilities, but this is pretty close so we won’t. The way this kind of hypothesis testing works is that you only reject the null hypothesis if the probability of seeing your data, assuming the null hypothesis is true, is really low. Here that probability is 17%. 17% is pretty high (about 1 in 6!), so we won’t reject it. This value (0.17) is called a p-value by statisticians. We won’t say that word again here though. Usually you want this to be more like 1% or 5%.
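Continuing the simulation sketch above, that “7/10 or more” fraction is one line of numpy, and scipy can give the exact probability if you’re curious:

    # Fraction of simulated surveys with 7 or more mustard-likers
    print((n_likers >= 7).mean())       # roughly 0.17

    # The exact probability under the null hypothesis, for comparison
    from scipy.stats import binom
    print(binom.sf(6, 10, 0.5))         # P(7 or more out of 10) ~= 0.172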

We’ve really quickly arrived at

Step 3: Decide whether or not to reject the null hypothesis

If we see that 7/10 people like mustard, we can’t reject it! If we’d instead seen that 10/10 of our survey respondents liked mustard, that would be a totally different story! The probability of seeing that is only about 10/10000, or 0.1%. So it would be actually very reasonable to reject the null hypothesis.

What if we’d used a bigger sample size?

So asking 10 people wasn’t good enough. What if we asked 10,000 people? Well, we have a computer, so we can simulate that!

Let’s flip a coin 10,000 times and count the number of heads. We’ll get a number (like 5,001). Then we’ll repeat that experiment 10,000 times and graph the results. This is like running 10,000 surveys of 10,000 people each.
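Here’s a sketch of that simulation, again assuming numpy:

    import numpy as np

    # 10,000 surveys of 10,000 coin flips each, under the null hypothesis
    heads = np.random.binomial(n=10_000, p=0.5, size=10_000)

    # The counts cluster very tightly around 5,000:
    print(heads.min(), heads.max())
    print(((heads < 4800) | (heads > 5200)).mean())   # essentially 0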

That’s pretty narrow, so let’s zoom in to see better.

So in this graph we ran 10,000 surveys of 10,000 people each, and in about 100 of them exactly 5,000 people said they liked mustard.

There are two neat things about this graph. The first neat thing is that it looks like a normal distribution, or “bell curve”. That’s not a coincidence! It’s because of the central limit theorem! MATH IS AMAZING.

The second is how tightly centred it is around 5,000. You can see that the probability of seeing more than 52% or less than 48% is really low. This is because we’ve done a lot of samples.

This also helps us understand how people could have calculated these probabilities back when we did not have computers but still needed to do statistics – if you know that your distribution is going to be approximately the normal distribution (because of the central limit theorem), you can use normal distribution tables to do your calculations.

In this case, “the number of heads you get when flipping a coin 10,000 times” is approximately normally distributed, with mean 5000.

So how big of a sample size do I need?

Here’s a way to think about it:

  1. Pick a null hypothesis (people are equally likely to like mustard or not)
  2. Pick a sample size (10000)
  3. Pick a test (do at least 5200 people say they like mustard?)
  4. What would the probability of your test passing be if the null hypothesis were true? (less than 1% – there’s a quick check of this below!)
  5. If that probability is low, it means that you can reject your null hypothesis! And your less-mathematically-savvy brother is wrong, and you have PROOF.
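Here’s a quick check of step 4 with scipy, using both the exact binomial distribution and the normal approximation from the previous section (the standard deviation works out to sqrt(10000 * 0.5 * 0.5) = 50):

    from scipy.stats import binom, norm

    # If the null hypothesis is true, how likely is it that at least
    # 5,200 of 10,000 people say they like mustard?
    print(binom.sf(5199, 10_000, 0.5))          # about 3e-05

    # Same check with the normal approximation: mean 5000, sd 50
    print(norm.sf(5200, loc=5000, scale=50))    # also about 3e-05, well under 1%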

Some things that we didn’t discuss here, but could have:

  • independence (we’re implicitly assuming all the samples are independent)
  • trying to prove an alternate hypothesis as well as trying to disprove the null hypothesis

I was also going to do a Bayesian analysis of this same data but I’m going to go biking instead. That will have to wait for another day. Later!

(Thanks very much to the fantastic Alyssa Frazee for proofreading this and fixing my terrible stats mistakes. And Kamal for making it much more understandable. Any remaining mistakes are mine.)

How I did Hacker School: ignoring things I understand and doing the impossible

Hacker School is a 12 week workshop where you work on becoming a better programmer. But when you have 12 weeks of uninterrupted time to spend on whatever you want, what do you actually do? I wrote down what I worked on every day of Hacker School, but I always have trouble articulating advice about what to work on. So this isn’t advice, it’s what I did.

One huge part of the way I ended up approaching Hacker School was to ignore a ton of stuff that goes on there. For example! I find all these things kind of interesting:

  • machine learning
  • web development
  • hardware projects
  • games
  • new programming languages

But I’d been working as a web developer / in machine learning for a couple of years, and I wasn’t scared by those topics. And right now I don’t feel like learning more programming languages is going to make me a better programmer.

And there were tons of interesting-sounding workshops where Mary would live code a space invaders game in Javascript (!!!), or Zach would give an intermediate Clojure workshop, or people would work together on a fun hardware project. People were building neural networks, which looked fun!

I mostly did not go to these workshops. It turned out that I was interested in all those things, but more interested in something else:

I wanted to work on things that seemed impossible to me, and writing an operating system seemed impossible. I didn’t know anything about operating systems. This was amazing.

This meant sometimes saying no to requests to pair on things that weren’t on my roadmap, even if they seemed super interesting! I also learned that if I wanted something to exist, I could just make it.

I ran a kernel development workshop for a while in my first two weeks. Jari and Pierre and Brian came, and they answered “what is a kernel? what are its responsibilities?”. This was hugely helpful to me, and I learned a ton of the basics of kernel programming. Nobody I talked to had built an operating system from scratch, so I learned how! Filippo answered a lot of my security questions and helped when I was confused about assembly. Daphne was working on a shell and I paired with her and learned a ton.

People at Hacker School know an amazing amount of stuff. There is so much to learn from them.

So I don’t have advice, but for me some of the most important things to remember about Hacker School were that other people have different interests than me, and that’s okay, and that I can make Hacker School what I want it to be.

!!Con talks are up

The talk recordings and transcripts for the amazing talks at !!Con have been posted! Go learn about EEG machines, how to stay in love with programming, type theory, dancing robots, hacking poetry, and more!

Here they are!!

Erty Seidel did pretty much 100% of the work for the talk recordings. Super pleased with the results.

Machine learning isn’t Kaggle competitions

I write about strace and kernel programming on this blog, but at work I actually mostly work on machine learning, and it’s about time I started writing about it! Disclaimer: I work on a data analysis / engineering team at a tech company, so that’s where I’m coming from.

When I started trying to get better at machine learning, I went to Kaggle (a site where you compete to solve machine learning problems) and tried out one of the classification problems. I used an out-of-the-box algorithm, messed around a bit, and definitely did not make the leaderboard. I felt sad and demoralized – what if I was really bad at this and never got to do math at work?! I still don’t think I could win a Kaggle competition. But I have a job where I do (among other things) machine learning! What gives?

To back up from Kaggle for a second, let’s imagine that you have an awesome startup idea. You’re going to predict flight arrival times for people! There are a ton of decisions you’ll need to make before you even start thinking about support vector machines:

Understand the business problem

If you want to predict flight arrival times, what are you really trying to do? Some possible options:

  • Help the airline understand which flights are likely to be delayed, so they can fix it.
  • Help people buy flights that are less likely to be delayed.
  • Warn people if their flight tomorrow is going to be delayed

I’ve spent time on projects where I didn’t understand at all how the model was going to fit into business plans. If this is you, it doesn’t matter how good your model is. At all.

Understanding the business problem will also help you decide:

  • How accurate does my model really need to be? What kind of false positive rate is acceptable?
  • What data can I use? If you’re predicting flight delays tomorrow, you can look at weather data, but if someone is buying a flight a month from now then you’ll have no clue.

Choose a metric to optimize

Let’s take our flight delays example. We first have to decide whether to do classification (“will this flight be delayed for at least an hour?”) or regression (“how long will this flight be delayed for?”). Let’s say we pick regression.

People often optimize the sum of squares because it has nice statistical properties. But mispredicting a flight arrival time by 10 hours and mispredicting it by 20 hours are pretty much equally bad. Is the sum of squares really appropriate here?
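As a toy illustration (the numbers here are completely made up), compare what squared error and a capped absolute error say about the same set of mispredictions:

    import numpy as np

    # Hypothetical prediction errors, in hours
    errors = np.array([0.1, 0.5, 1.0, 10.0, 20.0])

    # Squared error treats the 20-hour miss as 4x worse than the 10-hour miss
    print(np.mean(errors ** 2))

    # A capped absolute error (a made-up metric for illustration) treats any
    # miss over 3 hours as equally bad, which may match the business better
    print(np.mean(np.minimum(np.abs(errors), 3.0)))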

Decide what data to use

Let’s say I already have the airline, the flight number, departure airport, plane model, and the departure and arrival times.

Should I try to buy more specific information about the different plane models (age, what parts are in them..)? Really accurate weather data? The amount of information available to you isn’t fixed! You can get more!

Clean up your data

Once you have data, your data will be a mess. In this flight search example, there will likely be

  • airports that are inconsistently named
  • missing delay information all over the place
  • weird date formats
  • trouble reconciling weather data and airport location

Cleaning up data to the point where you can work with it is a huge amount of work. If you’re trying to reconcile a lot of sources of data that you don’t control like in this flight search example, it can take 80% of your time.
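To make that concrete, here’s a sketch of what some of that cleanup might look like in pandas (the file and column names are made up):

    import pandas as pd

    # Hypothetical raw flight data with the kinds of messes listed above
    flights = pd.read_csv("flights.csv")

    # Inconsistently named airports: normalize to one spelling
    flights["origin"] = (
        flights["origin"].str.strip().str.upper()
        .replace({"NEW YORK JFK": "JFK", "KENNEDY INTL": "JFK"})
    )

    # Weird date formats: parse what we can, mark the rest as missing
    flights["departure_time"] = pd.to_datetime(
        flights["departure_time"], errors="coerce")

    # Missing delay information: decide explicitly what to drop or impute
    flights = flights.dropna(subset=["delay_minutes"])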

Build a model!

This is the fun Kaggle part. Training! Cross-validation! Yay!
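A minimal sketch of this step with scikit-learn, continuing the hypothetical flights DataFrame from the cleanup sketch above (the feature names are made up):

    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Hypothetical numeric features and the delay we want to predict
    X = flights[["departure_hour", "airline_id", "origin_id", "month"]]
    y = flights["delay_minutes"]

    model = RandomForestRegressor(n_estimators=100)

    # Cross-validate using the metric you actually chose earlier,
    # not whatever the library defaults to
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_absolute_error")
    print(scores.mean())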

Now that we’ve built what we think is a great model, we actually have to use it:

Put your model into production

Netflix didn’t actually implement the model that won the Netflix competition because it was too complicated.

If you trained your model in Python, can you run it in production in Python? How fast does it need to be able to return results? Are you running a model that bids on advertising spots / does high frequency trading?

If we’re predicting flight delays, it’s probably okay for our model to run somewhat slowly.

Another surprisingly difficult thing is gathering the data to evaluate your model – getting historical weather data is one thing, but getting that same data in real time to predict flight delays right now is totally different.

Measure your model’s performance

Now that we’re running the model on live data, how do I measure its real-life performance? Where do I log the scores it’s producing? If there’s a huge change in the inputs my model is getting after 6 months, how will I find out?
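One small piece of this is just recording every prediction your model makes, along with the inputs that produced it, so you can compare against reality later. Here’s a sketch (a made-up helper, not any particular library’s API):

    import json
    import time

    def log_prediction(features, prediction, model_version):
        # Append each prediction and its inputs to a log, so real-life
        # performance can be measured once the actual delays are known
        record = {
            "timestamp": time.time(),
            "model_version": model_version,
            "features": features,
            "prediction": prediction,
        }
        with open("predictions.log", "a") as f:
            f.write(json.dumps(record) + "\n")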

Kaggle solves all of this for you.

With Kaggle, almost all of these problems are already solved for you: you don’t need to worry about the engineering aspects of running a model on live data, the underlying business problem, choosing a metric, or collecting and cleaning up data.

You won’t go through all these steps just once – maybe you’ll build a model and it won’t perform well so you’ll try to add some additional features and see if you can build a better model. Or maybe how useful the model is to your business depends on how good the results are.

Doing Kaggle problems is fun! It means you can focus on machine learning algorithm nerdery and get better at that. But it’s pretty far removed from my job, where I work on a team (hiring!) that thinks about all of these problems. Right now I’m looking at measuring models’ performance once they’re in production, for instance!

So if you look at Kaggle leaderboards and think that you’re bad at machine learning because you’re not doing well, don’t. It’s a fun but artificial problem that doesn’t reflect real machine learning work.

(to be clear: I don’t think that Kaggle misrepresents itself, or does a bad job – it specializes in a particular thing and that’s fine. But when I was starting out, I thought that machine learning work would be like Kaggle competitions, and it’s not.)

(thanks to the fantastic Alyssa Frazee for helping with drafts of this!)

Asking questions is a superpower

There are all kinds of things that I think I “should” know and don’t. A few things that I don’t understand as well as I’d like to:

  • Database replication and sharding (seriously how does replication even work)
  • How fast a computer can process data (should I expect more or less than 6GB/s if it’s a simple CPU-bound program where the data is already in RAM?)
  • How do system calls work, reeeeally? (I do not understand context switching nearly as well as I could!)
  • A truly embarrassing amount of basic statistics, even though I have a math degree.

There are lots of much more embarrassing things that I just can’t think of right now.

I’ve started trying to ask questions any time I don’t understand something, instead of worrying about whether people will think I’m dumb for not knowing it. This is magical, because it means I can then learn those things!

One of my very favorite examples of this is how I started learning about operating systems. At the beginning of Hacker School, I realized that I legitimately did not know what a kernel was or did more than “er, operating system stuff”.

This was super embarrassing! I’d been using Linux for 10 years, and I didn’t really understand at all what the basic responsibilities of the Linux kernel were. Oh no! Instead of hiding under a rock, I asked. And then people told me, and I wrote What does the Linux kernel even do?.

I don’t know how I would have learned without asking. Now I have given talks about getting started with understanding the Linux kernel! So fun!

One surprising thing about asking questions is that when I start digging into a problem, people who I respect and who know a lot will sometimes not know the answers at all! For instance, I’ll think that someone totally knows about the Linux kernel, but of course they don’t know everything, and if I’m trying to do something specific like write a rootkit they might not know all the details of how to do it.

aphyr is a really good example of someone who asks basic questions and gets unexpected answers. He does research into whether distributed systems are reliable (linearizable? consistent? available?). The results he finds are things like “RabbitMQ might lose 40% of your data”. Ooooops. If you don’t start asking questions about how RabbitMQ works from the beginning (in his case, by writing a program called Jepsen that automates this kind of reliability testing), then you’ll never find that out. (Be skeptical! Don’t believe what people say even if they’re using fancy words!)

“I don’t understand.”

Another hard thing is admitting that I don’t understand. I try to not be too judgemental about this – if someone is explaining something to me and it doesn’t make sense, it’s possible that they’re explaining it badly! Or that I’m tired! Or any other number of reasons. But if I don’t tell them I don’t understand, I’m never going to understand the damn thing.

So I try to take a deep breath and say cheerfully “Nope!”, figure out exactly which aspect of the thing I don’t understand, and ask a clarifying question.

As a side effect, I’ve acquired much less patience and respect for people who give talks which sound really smart but are difficult to understand, and somewhat more willingness to ask questions like “so what IS <basic concept that you did not explain>?”.

Avoiding mansplaining

A difficult thing about asking questions is that I have to be pretty careful about asking the right questions and making it clear which parts I know already. This is just good hygiene, and makes sure nobody’s time gets wasted.

For instance, I have sometimes said things like “I don’t know anything about statistics”, which is actually false and sometimes results in people trying to explain basic probability theory to me, or what an estimator is, or maybe the difference between a biased and unbiased estimator. It turns out these are actually things I know! So I need to be more specific, like “can we walk through some basic survival analysis?” (actually a thing I would like to understand!)

HUGE SUCCESS

So! Understanding and learning are more important than feeling smart. Probably the most important thing I learned at Hacker School was how to ask questions and admit when I don’t understand something. I know way more things now as a result! (see: this entire blog of things I have learned)

Working remote, 3 months in

I’ve been working remotely for Stripe for 3 months now.

I decided to do this because I interviewed at this place, and the people were thoughtful and friendly and interesting and knew things that I did not know! But they were all in San Francisco, and I didn’t want to move there at all. They convinced me that if I worked remote it might not be a disaster.

I was still pretty scared about working remote, though! So far it’s been hard, but I’m learning how to do it better. I’m somewhat extroverted, so it’s possible for me to go a bit stir-crazy sitting alone by myself all day.

I live on the east coast. The people I work with are mostly in San Francisco, three timezones away. So when I start work it’s usually around 6am in SF.

Let’s start with some things I have trouble with:

Hard things

  • Timezones are hard. If I start working at 8, there aren’t many people I can talk to BECAUSE IT’S 5AM. (however: it’s a really good time to focus! And I can be a wizard and finish tasks before everyone wakes up in the morning!)
  • I don’t know how to meet new people without visiting the physical office. A lot of people are just names on IRC to me. I do not know of any upside to this, or how to fix it.
  • I’m worried about the winter.
  • I didn’t realize how much I depended on synchronous communication (talking face-to-face!) to do things until it was taken away from me. This is thankfully getting easier.
  • It seems pretty difficult for me to know very much about the office culture.
  • I find building consensus about technical decisions hard to do remotely. (see: depending on synchronous communication)
  • A/V is hard. I often don’t try to participate in talks because I don’t expect the experience to be good.

Good things:

  • I get to work with people who I like and live where I want to live. And I’m learning a lot. This is why I decided to do this in the first place =)
  • I can work in my backyard in the sun.
  • I have more flexibility about when and where to work. I appreciate this more than I thought I would.
  • Thinking about working remote as “a cool possibility with some ups and downs” instead of “this enemy that means I HAVE TO SEE LESS PEOPLE OH NO” helps me be happy instead of grumpy.
  • My happiness seems to be proportional to the amount of time I spend talking to people. This is something I can measure and optimize!
  • I’m getting better at asynchronous communication.
  • If I ask someone to do something when I finish work, they’ll be working for 3 hours after me! It might be already done when I start the next day.
  • 2 people on my team are remote! (colin and avi). This is a huge deal. If I were the only one it would probably be a disaster and I would be way more sad. As far as I can tell Avi’s been working remote approximately forever and he has a lot of good things to say.
  • I like that Stripe actually changes things to accommodate remotes (for instance: the all-hands meeting switched times so that it’s not at 7:30pm on Friday on the east coast)
  • Basically all of the discussion on my team happens over IRC/email. This means that there is a lot of IRC to keep up with. This is harder than I expected.

Strategies

  • I changed my work computer’s clock to be the time in San Francisco. This helps more than I expected.
  • I made a short URL (http://go/julia) that links to a Google Hangout with me
  • Deciding to be happy this summer. There is no reason to be sad in the summer.
  • Talking to other people who work remote sometimes and learning about things they do!

That’s all! Maybe there will be further updates.

Should my conference do anonymous review?

I recently wrote a post called Anonymous review is amazing, talking about our experience with anonymous review at !!Con (it was excellent! I was surprised and delighted!). There was a discussion on the PyCon organizers list today about whether PyCon should do anonymous review, and I started thinking about this a little more carefully.

I’m going to make a few assumptions up front: our goal as conference organizers is to have

  • a process that is as unbiased as possible
  • speakers who will be engaging
  • who come from diverse backgrounds
  • some of whom are new speakers, and some more experienced

Let’s talk about whether anonymous review will help us with these things!

Is anonymous review less biased?

Yes.

Firstly, people believe that anonymous review is less biased.

One of our !!Con speakers, Katherine Ye, told us:

Thank you so much for [anonymizing everything]! It’s a relief to know that I wasn’t picked for gender, race, age, or anything like that.

Kenneth Hoxworth said of RailsConf’s anonymous review process:

It gave me courage that I wasn’t going up against big names.

It’s really important for people to have confidence in a conference’s review process. Nobody wants to put time into a proposal if they’re going to be dismissed because of their gender or age or race, or just because they’re not famous enough. People also worry about not being accepted on their own merit.

Anonymous review helps us build confidence, and that’s really valuable.

Anonymous review is also actually less biased. This study by Kathryn McKinley shows that, in peer-reviewed scientific articles, both men and women express systemic bias against women, and that double-blind reviewing removes that bias. (thanks to Lindsey Kuper for the link!)

They found nepotism and gender bias were significant factors in the evaluation process. To be judged as good as their male counterparts, female applicants had to be 2.5 times more productive.

Will anonymous review help my conference’s diversity?

Maybe. EuroPython has an anonymous review process, and recently very few of their announced speakers were women. This is because very few women applied to give talks. You can’t accept talks that don’t exist!

A more effective way to diversify your speaker pool is through active outreach. I don’t know of any evidence to show that anonymous review helps you attract a more diverse range of speakers. (is there some? I would love to know.)

Will anonymous review help me get inexperienced speakers?

Maybe.

On one hand, we have

It gave me courage that I wasn’t going up against big names.

On the other hand, Douglas Napoleone pointed out:

An anonymous system has an inherent bias towards very well written proposals. Those people whom have given the most talks are those whom are best at writing proposals which are best at getting through selection committees. It becomes a feedback loop which cuts out the very speakers we want most. Knowing that a person is a new speaker with a decent proposal is key when comparing them against a proposal by someone whom has given a talk at the last 8 python conferences.

PyCon’s approach is to actively encourage new speakers to apply and work with them to write better proposals, and that’s been successful.

Florian Gilcher wrote about eurucamp’s experience with anonymous review here:

We found that newcomers don’t write worse proposals than seasoned speakers. Quite the contrary, we found that many proposals that are submitted to many conferences are unspecific and dull and would only fly by having a big name attached. Anonymous CFPs are very good at weeding out copy-pasta. We didn’t accept quite a few people that would have been really shiny on the program.

and

Every year, we have at least one person we take huge bets on and get very good talks out of that. Most of the time, it’s someone who would [lose out] in a direct and open battle.

But will my speakers be good?!

This is probably the scariest part. We did anonymous review for !!Con, and our speakers were very good. Our main hope was that if somebody wrote a proposal about an interesting topic, then they could give an engaging 10-minute talk. This worked. It’s relevant here that our talks were all lightning talks.

We also had an anonymizer, who did an amazing job reviewing videos and telling us his impressions. This meant that we had to trust his judgement (which I do! and our speakers were great!), but having only one person watching talks introduces bias.

I’d be worried about doing anonymous review if I was organizing a conference where the talks were longer. (though it’s been done successfully!)

So should you do anonymous review?

Anonymous review takes extra time. You should think about what benefits you hope that it’ll bring, and what your alternatives are. There’s some excellent discussion in the comments on a draft of this post. Go read the whole thing.

Some other things you can spend time on:

  • doing outreach to get more applications from under-represented communities
  • giving new speakers feedback on their proposals and helping them do a better job
  • writing up a really good call for speakers (see JSConf EU’s!)
  • running brainstorming sessions to help people come up with ideas

I would do it again for !!Con, since the response to it was super positive and the talks were good. I find the bias-reduction argument pretty compelling. Nepotism and accepting your friends’ talks are really hard to fight against. Judging speaker quality still worries me!