Monthly Archives: April 2012

Valve’s New Employee Handbook: “What is Valve *Not* Good At?”

[Caption: “Methods to find out what’s going on.” Valve admits that one of its weaknesses is internal communication, so its new employee guide provides a helpful illustration of how to stay in the loop.]

A copy of gaming company Valve’s new employee guide made the rounds on Hacker News this morning (read the discussion here). Of all such company manifestos, Valve’s ranks among the best-designed, most brightly written, and most astonishingly honest.

Google has its 20-percent-time policy; Valve’s is 100 percent:

We’ve heard that other companies have people allocate a percentage of their time to self-directed projects. At Valve, that percentage is 100.

Since Valve is flat, people don’t join projects because they’re told to. Instead, you’ll decide what to work on after asking yourself the right questions (more on that later). Employees vote on projects with their feet (or desk wheels).

Strong projects are ones in which people can see demonstrated value; they staff up easily. This means there are any number of internal recruiting efforts constantly under way.

To be fair, Google’s policy ostensibly allows that 20 percent time to be directed at non-company-boosting projects. And it’s likely that Valve has some internal mechanism or dynamic that keeps malcontents from wandering too far off the ranch.

Given the attention Valve puts into just its guide, the company is obviously betting that its hiring process finds talent with the right attitude. It describes the model employee as “T-shaped”: skilled across a broad variety of disciplines and peerless in one narrow specialty.

One of the best sections comes at the end, under the heading “What is Valve Not Good At?” This is the classic opportunity for a humblebrag, the kind that comes up during hiring interviews (“My greatest weakness is that I’m too passionate about my work!”). But Valve’s list of weaknesses is neither harsh nor odious; if you like what the guide has opined so far, these weaknesses logically follow:

  • Helping new people find their way. We wrote this book to help, but as we said above, a book can only go so far. [My reading between the lines: the people we seek to hire are intelligent and experienced enough to navigate unknown territory]
  • Mentoring people. Not just helping new people figure things out, but proactively helping people to grow in areas where they need help is something we’re organizationally not great at. Peer reviews help, but they can only go so far. [our “T”-shaped employees were hired because they are good at a lot of things and especially good at one thing. Presumably, they have enough of a “big picture” mindset to realize how they became an expert in one area, why they chose to become good at it, what it takes to get there, and a reasonable judgment of cost versus benefit]
  • Disseminating information internally [Since we’re a flat organization, it is incumbent on each team member to proactively keep themselves in the loop].
  • Finding and hiring people in completely new disciplines (e.g., economists! industrial designers!) [what can you say: we started out primarily as a gaming company and were so good at making games that we apparently could thrive on that alone].
  • Making predictions longer than a few months out [team members and group leaders don’t fill out enough TPS reports for us to keep reliable Gantt charts. Also, having set-in-stone deadlines and guidelines can restrict mobility].
  • We miss out on hiring talented people who prefer to work within a more traditional structure. Again, this comes with the territory and isn’t something we should change, but it’s worth recognizing as a self-imposed limitation.

All of Valve’s weaknesses can be spun positively, but they would be legitimate, critical weaknesses in a company with a different mindset. For anyone who has read through the entire guide, these bullet points are redundant, but they make for an excellent concluding summary/tl;dr (in fact, the section reminds me of the pre-mortem tactic: asking team members before a project’s launch to write a future-dated report describing why the project became a disaster. It surfaces problems that should’ve been caught in the project’s planning phases, and in a fashion that rewards employees for being critical rather than branding them as negative nancies).

Read the Valve guide here. And check out the Hacker News discussion, which ponders how well this model scales.

Dummy Data, Drugs, and Check-lists

Using dummy data — and forgetting to remove it — is a pretty common and unfortunate occurrence in software development…and in journalism (check out this headline). If you haven’t made yourself a pre-publish/pre-production checklist that covers even the most basic things (“do a full-text search for George Carlin’s seven dirty words”), now’s a good time to start.
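Here’s a minimal sketch of what an automated version of that check might look like, in Python. The word list, file layout, and function name are my own stand-ins, not anyone’s production code:

```python
import re
import sys

# Markers that should never survive to publication. Extend to taste;
# Carlin's actual seven words are left as an exercise for the reader.
FORBIDDEN = ["lorem ipsum", "dummy", "placeholder", "TK", "FIXME", "XXX"]

def pre_publish_check(path):
    """Return (line_number, marker) pairs for every forbidden marker found."""
    hits = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for marker in FORBIDDEN:
                # Word boundaries keep "TK" from matching inside ordinary words.
                if re.search(r"\b" + re.escape(marker) + r"\b", line, re.IGNORECASE):
                    hits.append((lineno, marker))
    return hits

if __name__ == "__main__":
    problems = pre_publish_check(sys.argv[1])
    for lineno, marker in problems:
        print(f"line {lineno}: found forbidden marker {marker!r}")
    sys.exit(1 if problems else 0)  # a nonzero exit can block an automated publish step
```

The point isn’t the particular word list; it’s that the check runs mechanically, every single time, instead of relying on someone remembering to eyeball the page.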

These catastrophic mistakes can happen even when billions of dollars are on the line.

In his book New Drugs: An Insider’s Guide to the FDA’s New Drug Approval Process, Lawrence Friedhoff says he’s seen this kind of thing happen “many” times in the drug research phase. He describes an incident in which his company was awaiting the statistical review of two major studies that would determine whether a drug could move into the approval phase:

The computer programs to do the analysis were all written. To be sure that the programs worked properly, the statisticians had tested the programs by making up treatment assignments for each patient without regard to what the patients had actually received, and verifying that the programs worked properly with these “dummy” treatment codes.

The statisticians told us it would take about half an hour to do the analysis of each study and that they would be done sequentially. We waited in a conference room. The air was electric. Tens of millions of dollars of investment hung in the balance. The treatment options of millions of patients would or would not be expanded. Perhaps, billions of dollars of revenue and the future reputation of the development team would be made or lost based on the results of the statistical analyses.

The minutes ticked by. About 20 minutes after we authorized the code break, the door opened and the statisticians walked in. I knew immediately by the looks on their faces that the news was good…One down, one to go (since both studies had to be successful to support marketing approval).

The statisticians left the room to analyze the second study…which could still theoretically fail just by chance, so nothing was guaranteed. Finally, after 45 minutes, the door swung open, and the statisticians walked in. I could tell by the looks on their faces that there was a problem. They started presenting the results of the second study, trying to put a good face on a devastatingly bad outcome. The drug looked a little better than control here but worse there… I couldn’t believe it. How could one study have worked so well and the other be a complete disaster? The people in the room later told me I looked so horrified that they thought I would just have a heart attack and die on the spot.

The positive results of the first study were very strong, making it exceedingly unlikely that they occurred because of a mistake, and there was no scientific reason why the two studies should have given such disparate results.

After about a minute, I decided it was not possible for the studies to be so inconsistent, and that the statisticians must have made a mistake with the analysis of the second study…Ultimately they said they would check again, but I knew by their tone of voice that they viewed me with pity, a clinical development person who just couldn’t accept the reality of his failure.

An eternity later, the statisticians re-entered the room with hangdog looks on their faces. They had used the “dummy” treatment randomization for the analysis of the second study. The one they used had been made up to test the analysis programs, and had nothing to do with the actual treatments the patients had received during the study.

From: Friedhoff, Lawrence T. (2009). New Drugs: An Insider’s Guide to the FDA’s New Drug Approval Process for Scientists, Investors and Patients. PSPG Publishing. Kindle edition, locations 2112–2118.

So basically, Friedhoff’s team did the equivalent of what a newspaper does when laying out the page before the articles have been written: put in some filler text to be replaced later. Except that the filler text didn’t get replaced at publication time…again, follow the Romenesko link above for the disastrous/hilarious results.

Here’s an example of it happening in the tech startup world.

What’s interesting about Friedhoff’s case, though, is that validation of study results is a relatively rare (and expensive) occurrence…whereas publishing a newspaper happens every day, as does pushing out code and test emails. But Friedhoff says the described incident is “only one of many similar ones I could write about”…which goes to show that neither the rarity nor the magnitude of a task will stop you from making easy-to-prevent yet devastating mistakes.
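One cheap defense is to make dummy data incapable of masquerading as the real thing. Here’s a hypothetical sketch in Python; the file format and the is_dummy_randomization flag are inventions for illustration, not anything from Friedhoff’s account. The idea is that every made-up randomization carries an explicit flag, and the analysis refuses to run if the flag is set (or missing entirely):

```python
import json

def load_treatment_codes(path):
    """Load patient treatment assignments, rejecting any file flagged as dummy data."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Default to True: a file that never declares itself real is treated as dummy.
    if data.get("is_dummy_randomization", True):
        raise ValueError(
            f"{path} is flagged as dummy randomization data (or lacks the flag); "
            "refusing to run the real analysis on it."
        )
    return data["treatment_codes"]
```

The key design choice is that the default is “dummy”: forgetting the flag fails loudly before the analysis starts, rather than silently producing a devastatingly wrong result.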

Relevant: Atul Gawande’s article about the check-list: how a simple list of five steps, as basic as “Wash your hands”, prevented thousands of surgical disasters.

ProPublica at Netexplo

A few weeks ago, I had the honor of joining my colleagues Charlie Ornstein and Tracy Weber in Paris to receive a Netexplo award for our work on Dollars for Docs. Check out the presentation video the organizers prepared for the awards ceremony (held at UNESCO), featuring us as bobbleheads.

The easiest way to explain Netexplo is to repeat what one of the organizers told me: it hopes to be the South by Southwest of Paris. Check out the quirky trophy we got:

[Photo: the Netexplo trophy]

Check out the other great entries in this year’s ceremony.

This was my first trip to Paris so of course I took photos like a shutterbug tourist. You can view them on my Flickr account:

[Photos: the Eiffel Tower, shot with a Sony Alpha NEX-7; the Centre Pompidou, home of the Musée National d'Art Moderne; the Tuileries Garden; and the Eiffel Tower as seen from the Trocadéro.]