Author Archives: Dan Nguyen

Valve’s New Employees Handbook: “What is Valve *Not* Good At?”

Valve's New Employee Guide: Methods to find out what's going on

Valve admits that one of its weaknesses is internal communication. So its new employee guide provides a helpful illustration of how to stay in the loop.

A copy of gaming company Valve’s new employee guide made the rounds on Hacker News this morning (read the discussion here). Of all such company manifestos, Valve’s ranks as one of the most well-designed, brightly written, and astonishingly honest.

Google has its 20-percent-time policy, Valve’s is 100 percent:

We’ve heard that other companies have people allocate a percentage of their time to self-directed projects. At Valve, that percentage is 100.

Since Valve is flat, people don’t join projects because they’re told to. Instead, you’ll decide what to work on after asking yourself the right questions (more on that later). Employees vote on projects with their feet (or desk wheels).

Strong projects are ones in which people can see demonstrated value; they staff up easily. This means there are any number of internal recruiting efforts constantly under way.

To be fair, Google’s policy ostensibly allows that 20 percent time to be directed at non-company-boosting projects. It’s likely there is some internal mechanism/dynamic that prevents Valve malcontents from going too far off the ranch.

With the attention that Valve puts into its guide alone, it’s obviously betting that its hiring process finds the talent with the right attitude. The guide describes the model employee as being “T-shaped”: skilled in a broad variety of areas and peerless in one narrow discipline.

One of the best sections comes at the end, under the heading “What is Valve Not Good At?” This is the classic opportunity for a humblebrag, as when the question comes up in hiring interviews (“My greatest weakness is that I’m too passionate about my work!”). But Valve’s list of weaknesses is not harsh or odious – if you like what they’ve opined in the guide, then these weaknesses logically follow:

  • Helping new people find their way. We wrote this book to help, but as we said above, a book can only go so far. [My reading between the lines: the people we seek to hire are intelligent and experienced enough to navigate unknown territory]
  • Mentoring people. Not just helping new people figure things out, but proactively helping people to grow in areas where they need help is something we’re organizationally not great at. Peer reviews help, but they can only go so far. [our “T” shaped employees were hired because they are good at a lot of things and especially good at one thing. Presumably, they have enough of a “big picture” mindset to realize how they became an expert in one area, why they chose to become good at it, what it takes to get there, and a reasonable judgment of cost versus benefit ]
  • Disseminating information internally [Since we’re a flat organization, it is incumbent on each team member to proactively keep themselves in the loop].
  • Finding and hiring people in completely new disciplines (e.g., economists! industrial designers!)[what can you say, we started out primarily as a gaming company and were so good at making games that we apparently could thrive on that alone].
  • Making predictions longer than a few months out [team members and group leaders don’t fill out enough TPS reports for us to keep reliable Gantt charts. Also, having set-in-stone deadlines and guidelines can restrict mobility].
  • We miss out on hiring talented people who prefer to work within a more traditional structure. Again, this comes with the territory and isn’t something we should change, but it’s worth recognizing as a self-imposed limitation.

All of Valve’s weaknesses can be spun positively, but they would legitimately be critical weaknesses in a company with a differing mindset. For anyone who has read through the entire guide, these bullet points are redundant. But it’s an excellent approach for doing a concluding summary/tl;dr version (in fact, it reminds me of the pre-mortem tactic: asking team members before a project’s launch to write a future-dated report describing why the project became a disaster. It reveals problems that should’ve been discovered during the project’s planning phases, but in a fashion that rewards employees for being critical, rather than seeing them as negative-nancies).

Read the Valve guide here. And check out the Hacker News discussion, which ponders how well this scales.

Dummy Data, Drugs, and Check-lists

Using dummy data — and forgetting to remove it — is a pretty common and unfortunate occurrence in software development…and in journalism (check out this headline). If you haven’t made yourself a pre-publish/produce checklist that covers even the most basic things (“do a full-text search for George Carlin’s seven dirty words”), now’s a good time to start.
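A checklist item like that can even be automated. Here’s a minimal sketch in Python of a pre-publish check that scans a draft for placeholder text; the forbidden-word list and file path are hypothetical stand-ins, not anything from an actual newsroom workflow.

# Minimal sketch of an automated pre-publish check: scan a draft for
# placeholder text that should never reach readers. The FORBIDDEN list
# and the file path are hypothetical stand-ins.
import sys

FORBIDDEN = ["lorem ipsum", "tktk", "dummy headline", "placeholder"]

def check_draft(path):
    text = open(path, encoding="utf-8").read().lower()
    return [word for word in FORBIDDEN if word in text]

if __name__ == "__main__":
    problems = check_draft(sys.argv[1])
    if problems:
        sys.exit("Do not publish. Found: " + ", ".join(problems))
    print("No placeholder text found.")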

These catastrophic mistakes can happen even when billions of dollars are on the line.

In his book, New Drugs: An Insider’s Guide to the FDA’s New Drug Approval Process, author Lawrence Friedhoff says he’s seen this kind of thing happen “many” times in the drug research phase. He describes an incident in which his company was awaiting the statistical review of two major studies that would determine if a drug could move into the approval phase:

The computer programs to do the analysis were all written. To be sure that the programs worked properly, the statisticians had tested the programs by making up treatment assignments for each patient without regard to what the patients had actually received, and verifying that the programs worked properly with these “dummy” treatment codes.

The statisticians told us it would take about half an hour to do the analysis of each study and that they would be done sequentially. We waited in a conference room. The air was electric. Tens of millions of dollars of investment hung in the balance. The treatment options of millions of patients would or would not be expanded. Perhaps, billions of dollars of revenue and the future reputation of the development team would be made or lost based on the results of the statistical analyses.

The minutes ticked by. About 20 minutes after we authorized the code break, the door opened and the statisticians walked in. I knew immediately by the looks on their faces that the news was good…One down, one to go (since both studies had to be successful to support marketing approval).

The statisticians left the room to analyze the second study…which could still theoretically fail just by chance, so nothing was guaranteed. Finally, after 45 minutes, the door swung open, and the statisticians walked in. I could tell by the looks on their faces that there was a problem. They started presenting the results of the second study, trying to put a good face on a devastatingly bad outcome. The drug looked a little better than control here but worse there… I couldn’t believe it. How could one study have worked so well and the other be a complete disaster? The people in the room later told me I looked so horrified that they thought I would just have a heart attack and die on the spot.

The positive results of the first study were very strong, making it exceedingly unlikely that they occurred because of a mistake, and there was no scientific reason why the two studies should have given such disparate results.

After about a minute, I decided it was not possible for the studies to be so inconsistent, and that the statisticians must have made a mistake with the analysis of the second study…Ultimately they said they would check again, but I knew by their tone of voice that they viewed me with pity, a clinical development person who just couldn’t accept the reality of his failure.

An eternity later, the statisticians re-entered the room with hangdog looks on their faces. They had used the “dummy” treatment randomization for the analysis of the second study. The one they used had been made up to test the analysis programs, and had nothing to do with the actual treatments the patients had received during the study.

From: Friedhoff, Lawrence T. (2009-06-04). New Drugs: An Insider’s Guide to the FDA’s New Drug Approval Process for Scientists, Investors and Patients (Kindle Locations 2112-2118). PSPG Publishing. Kindle Edition.

So basically, Friedhoff’s team did the equivalent of what a newspaper does when laying out the page before the articles have been written: put in some filler text to be replaced later. Except that, sometimes, the filler text doesn’t get replaced at publication time…again, see this Romenesko link for the disastrous/hilarious results.

Here’s an example of it happening in the tech startup world.

What’s interesting about Friedhoff’s case, though, is that validation of study results is a relatively rare – and expensive – occurrence…whereas publishing a newspaper happens every day, as does pushing out code and test emails. But Friedhoff says the described incident is “only one of many similar ones I could write about”…which goes to show that the rarity and magnitude of a task won’t stop you from making easy-to-prevent, yet devastating mistakes.

Relevant: Atul Gawande’s article about the check-list: how a simple list of five steps, as basic as “Wash your hands”, prevented thousands of surgical disasters.

ProPublica at Netexplo

A few weeks ago, I had the honor of joining my colleagues Charlie Ornstein and Tracy Weber in Paris to receive a Netexplo award for our work with Dollars for Docs. Check out the presentation video they prepared for the awards ceremony (held at UNESCO), featuring us as bobbleheads.

The easiest way to explain Netexplo is to borrow what one of the organizers told me: it hopes to be a South by Southwest for Paris. Check out the quirky trophy we got:

Netexplo trophy

Check out the other great entries in this year’s ceremony.

This was my first trip to Paris so of course I took photos like a shutterbug tourist. You can view them on my Flickr account:

Sony Alpha NEX-7: Paris - Eiffel Tower

Centre Pompidou, Musée National d'Art Moderne

Tuileries Garden

The Eiffel Tower, as seen from the Trocadéro.

Because of a typo, the government needs to keep your private data 10 times longer?

Yesterday the Obama administration approved new rules to greatly extend the time – from 180 days to 1,826 days (5 years) – that domestic intelligence services can retain American citizens’ private information. Citizens are eligible to be part of this federal data warehouse even when “there is no suspicion that they are tied to terrorism.”

As Charlie Savage in the New York Times reports:

Intelligence officials on Thursday said the new rules have been under development for about 18 months, and grew out of reviews launched after the failure to connect the dots about Umar Farouk Abdulmutallab, the “underwear bomber,” before his Dec. 25, 2009, attempt to bomb a Detroit-bound airliner.

After the failed attack, government agencies discovered they had intercepted communications by Al Qaeda in the Arabian Peninsula and received a report from a United States Consulate in Nigeria that could have identified the attacker, if the information had been compiled ahead of time.

The case of the “underwear bomber” is a strange justification for this expansion of data storage, because the 2009 Christmas terror attempt nearly succeeded thanks to a series of what seem like common human errors, not an information drought.

Shortly after the underwear bomber incident, the White House released a report examining how our vast intelligence network failed to prevent Abdulmutallab, the bomber, from boarding a flight from Amsterdam to Detroit.

One of the critical failures? Someone at the State Department, when sending information about Abdulmutallab to the National Counterterrorism Center, misspelled his name. Even though his father alerted American intelligence officials a full month before the attempted attack, our sophisticated surveillance system was partially stymied by a single misplaced letter.

As Foreign Policy reported in 2010:

State called an impromptu press briefing late Thursday evening to address the issue. The tone of the briefing was combative, as reporters pressed the “senior administration official” for details about the misspelling that he seemed not to want to give up. But here’s what we learned.

Someone (they won’t say who) at the State Department (presumably at the U.S. Embassy in Nigeria) did check to see if Abdulmutallab had a visa (they won’t say exactly when). That person was working off the Visas Viper cable originally sent from the embassy to the NCTC, which had the name wrong.

“There was a dropped letter in that — there was a misspelling,” the official said. “They checked the system. It didn’t come back positive. And so for a while, no one knew that this person had a visa.” (They won’t say for how long)

The chain of failures is more complicated than that, but the fact that a typo was a big enough wrench to warrant special mention in the White House review is an indication that the government’s surveillance systems, despite the work of its data architects, engineers and scientists, were compromised by some pretty banal problems, like not having spell-check capability.

In fact, the White House report goes out of its way to assert that the information-sharing problems that failed to prevent the 9/11 attacks “have now, 8 years later, largely been overcome.” Information about Abdulmutallab (again, his own father met with U.S. officials to warn them of his son a month ahead of the attack), his association with Al Qaeda, and Al Qaeda’s attack planning, “was available to all-source analysts at the CIA and the NCTC prior to the attempted attack.”

In other words, the 9/11 attack was possible because government agencies wouldn’t share information with each other. Now they are happily sharing information with each other; they just aren’t diligently looking at it.

So the best solution is to enact a ten-fold increase in the legal time limit for storing American citizens’ data?

It sounds like the government’s ability to detect terrorists would be greatly improved with better, user-friendlier software and adherence to data-handling standards. The ability to catch slight misspellings and do fuzzy data matches is something that Facebook and Google users have enjoyed for years; hell, the basic concept and a consumer-friendly implementation have been in Microsoft Word for about 20 years. Have software overhauls been enacted before deciding that the government needs more of its citizens’ private information? Or does the review of such technical details and policies seem too unsexy and pedantic for our intelligence bureaucracy?
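For what it’s worth, even Python’s standard library can do the kind of rough fuzzy matching described above. In this sketch, a name with a dropped letter still comes back as a close match; the other watchlist entries are invented placeholders.

# Sketch of fuzzy name matching with Python's standard library:
# a dropped letter still yields a close match. The other watchlist
# entries are invented placeholders.
from difflib import get_close_matches

watchlist = ["Umar Farouk Abdulmutallab", "John Doe", "Jane Roe"]

query = "Umar Farouk Abdulmutalab"   # note the missing "l"
print(get_close_matches(query, watchlist, n=1, cutoff=0.8))
# ['Umar Farouk Abdulmutallab']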

The Times article also mentions that the guidelines call for more duplication of entire databases…which is a bit confusing. I’m assuming that this doesn’t refer to making backup copies (in case of a hard drive failure), but to a method of data-sharing between analysts. This is how the Times describes it:

The guidelines are also expected to result in the center making more copies of entire databases and “data mining them” using complex algorithms to search for patterns that could indicate a threat.

Hopefully, this doesn’t mean that database files are being copied and passed around so that each department can have its own copy of another department’s data. This would seem to introduce a few major logistical issues: namely, how do you know the copy you have contains the latest data? Remember that the typo in Abdulmutallab’s name was one mistake that helped spawn a series of snafus. Are we going to have an incident in which a terrorist slips through because an analyst forgot to update his/her copy of a database before mining it? Also, there’s the possibility that some of these data copies might end up lying around long after their 5-year limit.

There have been several reports of how intelligence agencies now suffer from too much data, to the point where analysts are “drowning in the data.” If that is ever cited as the reason a future attack went unprevented, I hope the proposed reform is not “more data.”

Tools to get to the precipice of programming

I’m not a master programmer, but it’s been so long since I did my first “Hello World” that I don’t remember how people first grok the point of programming (for me, it was to get a good grade in programming class).

So when teaching non-programmers the value of code, I’m hoping there’s an even friendlier, shallower first step than the many zero-to-coder references out there, including Zed Shaw’s excellent Learn Code the Hard Way series.

Not only should this first step be “easy”, it should be nearly ubiquitous, free to use and, most importantly, immediately beneficial to both beginners and experts. The point here is not to teach coding, per se, but to get them to a precipice of great things. So that when they stand at the edge, they can at least see something to program towards, even if the end goal is simply labor-aversion, i.e. “I don’t want to copy-and-paste 100 web page tables by hand.”

Here are a few tools I’ve tried:

Inspecting a cat photo

1. Using the web inspector – I’ve never seen the point of taking an in-depth HTML class (unless you want to become a full-time web designer/developer, and even then…), because so few non-techies even grasp that webpages are (largely) text, external multimedia assets (such as photos and videos), and the text that describes where those assets come from. To them, editing a webpage is as arcane as compiling a binary.

Nothing breaks that illusion better than the web inspector. Its basic element inspector and network panel immediately illustrate the “magic” behind the web. As a bonus, with regular, casual use, the inspector can teach you the HTML and CSS vocabulary if you do intend to be a developer. It’s hard to think of another tool that is as ubiquitous and easy to use as the web inspector, yet as immensely useful to beginner and expert alike.

Its uses are immediate, especially for anyone who’s ever wanted to download a video from YouTube. I’ve shown journalists how this simple-to-use tool helped my investigative reporting when I needed to find an XML file that was obfuscated behind a Flash object.

In a hands-on class I taught, a student asked “So how do I get that XML into Excel?” – and that’s when you can begin to describe the joy of a basic for loop.
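The answer looks roughly like the sketch below: loop over the XML, pull out the fields you care about, and write them to a CSV that Excel can open. The element names and file names here are hypothetical.

# Rough sketch of that "basic for loop": walk an XML file and write
# selected fields to a CSV for Excel. Element and file names are hypothetical.
import csv
import xml.etree.ElementTree as ET

tree = ET.parse("videos.xml")                      # hypothetical file
with open("videos.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "url"])
    for item in tree.getroot().findall("item"):    # hypothetical element name
        writer.writerow([item.findtext("title"), item.findtext("url")])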

Here’s an overview of a hands-on web session I taught at NICAR12. Here’s the guide I wrote for my ProPublica project. And here’s the first of a multi-part introduction to the web inspector.

Refine WH Visitors

2. Google Refine – Refine is spreadsheet-like software that allows you to easily explore and clean data: the most common example is resolving varied entries (“JOHN F KENNEDY”, “John F. Kennedy”, “Jack Kennedy”, “John Fitzgerald Kennedy”) into one (“John F. Kennedy”). Given that so many great investigative stories and data projects start with “How many times does this person’s name appear in this messy database?”, its uses are immediate and obvious.

Refine is an open-source tool that runs in the web browser, and yet its point-and-click interface is so powerful that I’m happy to take my data out of my scripted workflow in order to use Refine’s features on it. Not only can you use regular expressions to help filter and clean your data, you can write full-on scripts, making Refine a pretty good environment for showing some basic concepts of code (such as variables and functions).
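Refine’s clustering is all point-and-click, but the idea behind its “fingerprint” method can be sketched in a few lines. This is a rough approximation of the concept, not Refine’s actual code.

# Rough approximation of the fingerprint-clustering idea Refine popularized:
# normalize each entry to a key, then group entries that share a key.
from collections import defaultdict
import string

def fingerprint(name):
    # lowercase, strip punctuation, then sort and dedupe the tokens
    cleaned = name.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(sorted(set(cleaned.split())))

names = ["JOHN F KENNEDY", "John F. Kennedy", "Kennedy, John F."]
clusters = defaultdict(list)
for name in names:
    clusters[fingerprint(name)].append(name)

print(dict(clusters))
# {'f john kennedy': ['JOHN F KENNEDY', 'John F. Kennedy', 'Kennedy, John F.']}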

I wrote a guide showing how Refine was essential for one of my investigative data projects. Refine’s official video tutorial is also a great place to start.

3. Regular expressions – Maybe it’s because my own comp-sci curriculum skipped regexes, leaving me to figure out their worth much later than I should have, but I really try to push learning regexes every time the following questions are asked:

  • In Excel, how do I split this “last_name, first_name middle_name” column into three different columns?
  • In Excel, how do I get all these date formats to be the same?
  • In Excel, how do I extract the zip code from this address field?

…and so on. The LEFT, TRIM, RIGHT, etc. functions always seem to be much more convoluted than the regex needed to do this kind of simple parsing. And while regexes aren’t the answer to every parsing problem, they sure deliver a lot of return for the investment (which can start from a simple cheat sheet next to your computer).
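Here’s what a couple of those Excel questions look like as regexes in Python; the sample strings are invented.

# Two of the Excel questions above, answered with regexes.
# The sample strings are invented.
import re

# Split "last_name, first_name middle_name" into three fields
name = "Kennedy, John Fitzgerald"
print(re.match(r"^\s*([^,]+),\s*(\S+)(?:\s+(\S+))?\s*$", name).groups())
# ('Kennedy', 'John', 'Fitzgerald')

# Extract the ZIP code (5 digits, optional +4) from an address field
address = "1600 Pennsylvania Ave NW, Washington, DC 20500"
print(re.search(r"\b\d{5}(?:-\d{4})?\b", address).group())
# 20500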

Regular-expressions.info has always been one of my favorite references. Zed Shaw is also writing a book on regexes. I’ve also written a lengthy tutorial on regexes.

So none of these tools or concepts involve programming…yet. But they’re immediately useful on their own, opening new doors to useful data just enough to interest beginners in going further. In that sense, I think these tools make for an inviting introduction to learning programming.

Code, Don’t Tell: Programming as an Essential Journalism Skill

(tl;dr: this started out as a short post about how all of journalism can benefit from learning to code. It is now a massive rant that maybe I’ll split up later.)

This post is inspired by a recent discussion on the NICAR (National Institute of Computer Assisted Reporting) mailing list, in which a journalism professor asked how her students should position themselves for a newspaper’s web developer job. The answer I suggested was: have them learn programming and have them publish projects online, on their own, that they can later show an employer.

But I’m becoming more convinced that programming – a decent grasp of it, not make-the-next-Facebook level – is an essential skill for all journalists, even ones who never intend to produce a webpage in their career. And for students, or any aspiring journalist, I think I can make the case that programming is absolutely the most important skill to learn in school (along with honing your interviewing, research and writing skills at the school paper/radio/TV station) if you want to improve your chances of a serious journalism career.

#

Hersh and Bamford

A few years ago, I attended a panel on investigative reporting that featured Seymour Hersh – the Pulitzer Prize-winning reporter who exposed the My Lai massacre – and James Bamford, a former Navy intelligence analyst who is well-respected for writing books that managed to penetrate the workings of even the super-secret NSA (affectionately known as the No Such Agency).

Seymour Hersh

The discussion turned to the use of the Freedom of Information Act, a law that reporters wield to get sensitive, unpublished documents from the federal government. Given that the NSA isn’t known for being chatty, Bamford explained how his stories were put together through exhaustive uses of FOIA.

When an audience member asked Hersh how often he used FOIA, his response – and I’m quoting from memory here – was:

“Why the fuck would I FOIA documents?”

I can’t read Hersh’s mind, but I’m guessing that he wasn’t wholesale dismissing the importance of FOIA, which has been essential in countless investigative stories.

He probably meant this: he’s Seymour Hersh. He exposed the My Lai massacre. He’s a regular contributor to the New Yorker. The kind of stories he writes for the New Yorker involve the type of people who wouldn’t be caught dead making a statement that would ever be reprinted on a document subject to a FOIA request.

And even if they were FOIAble, those requests take time (sometimes many years) and involve countless lawyers and legal wrangling. In the meantime, he’s up to his eyeballs in secret officials who, for some reason or other, are eager to spill their secrets to him, because he’s Seymour Hersh.

So, why the fuck would he FOIA documents?

Bamford doesn’t have quite that brand power and his targets are likely more reticent. But he’s learned – possibly through his Navy days – that there are plenty of important secrets in the stacks of documents that have been deemed fit for public consumption. It’s not always obvious, even among intelligence officials, to see how a mass of innocuous information inadvertently reveals big secrets.

So, back to the subject of aspiring journalists: to them, the already-employed journalists are the Seymour Hershes. These journalists have established themselves and their beat, which they can focus full-time on because they’re earning a salary to do so. Their phone contains the cell numbers of all the important officials who won’t ignore their 9 p.m. Sunday night call. When they write something, a large number of trees, barrels of ink, and/or corporate-purchased bandwidth are readily expended to make it known.

On the other end of the spectrum, the aspiring journalists are the Bamfords. Between work shifts, they have the same right as Joe Public to attend meetings and leave inquiries with contact@yourcity.gov. But for them, even the local police department might as well be the NSA. They aren’t going to be privy to hush-hush phone calls or let past the murder scene tape.

If this is your current vantage point, even if you intend to be a Hersh-style reporter, you’re going to have to Bamford your way into a field that has an increasing amount of noise and a corresponding shrinkage in paid, established positions.

Given this situation, I can think of no better strategy than to learn programming. This is a skill that not only makes every other journalism skill (writing, researching, publishing) more efficient but can, like Bamford’s relentless FOIAs, reveal stories that non-programming journalists will never be able to tell – and in an unfortunate number of cases, never even conceive of.

Learning the Hard Way

Zed Shaw, who isn’t a journalist but is renowned for both his code contributions and his widely-read (and free!) how-to-program books, puts it this way:

“Programming as a profession is only moderately interesting. It can be a good job, but you could make about the same money and be happier running a fast food joint. You’re much better off using code as your secret weapon in another profession.

People who can code in the world of technology companies are a dime a dozen and get no respect. People who can code in biology, medicine, government, sociology, physics, history, and mathematics are respected and can do amazing things to advance those disciplines.”

Note that he doesn’t have many romantic notions about programming as a profession. However, programming is something bigger than just a job: it’s an essential, game-changing skill.

Code, don’t tell

“Show, don’t tell” is how my high school journalism teacher taught us to write. Instead of telling the reader something:

James Smith is one of the toughest football players on the team.

show it, through observed evidence:

When the halftime whistle blew, James Smith walked to the sidelines and collapsed. He later was told that the neck pain he played with through the second quarter was caused by a fracture in his neck.

I guess “Code, don’t tell” doesn’t really make sense; it’s just my made-up way of saying that we have fantastically more ways than ever to tell – blog posts, retweets, status updates, auto-aggregations and other forms of repurposing – but we’re little better equipped to find and develop the actual stories. Programming is a skill that cuts through the noise, allows for analysis and reporting on substantive new information sources, and even provides a way to create innovative story-telling forms (i.e. the web developer’s role).

So to follow my high school teacher’s advice, here’s an overview of my two most successful journalism projects so far, both done at ProPublica. As I explain later, both more or less originated from me sitting on my couch, being annoyed by what I saw as a lack of transparency. The first one, SOPA Opera, was initially self-published and probably could have been done entirely from the couch. The other, Dollars for Docs, was an all-out effort by my colleagues and me. But it was programming-driven at every phase.

#

SOPA Opera

I won’t rehash the debate over this now-dead Internet regulation law, but the inspiration for the SOPA Opera news app was simply: I had read plenty of debate about SOPA for months. But when I wanted to see just which legislators actually supported it and their reasons for doing so, there wasn’t yet a great resource for that.

If you know about the official legislative site, THOMAS, and are familiar with its navigation, you could at least find the list of sponsors. But good luck trawling the Congressional committee sites to find transcripts and testimony related to the law. It goes without saying that a list of opponents doesn’t exist and is beyond the official scope of THOMAS anyway.

So SOPA Opera, boiled down, is a pretty pedantic concept: “Hey, here’s a list of Congressmembers and what I’ve found out so far about their positions on SOPA.”

In other words, changing this:

SOPA sponsors, on THOMAS

To this:

The gist of SOPA Opera could be done without any programming whatsoever. You could even build a static page in Photoshop and upload the image onto the Internet. So what role did programming play in this? It made it very easy to gather the already-available information, which included: the official list of sponsors, the boilerplate biographical and district information on every Congressmember (including their mug shots), and contribution data from the Center for Responsive Politics.

No exaggeration: a decent programmer could build a nice site from this data in about half an hour. The jazzy part of the site – the dynamic sorting of the list – was already built and offered as a free plugin (courtesy of David DeSandro). It’s entirely possible to create SOPA Opera by hand, given a few days and an infinite amount of patience.

So programming allowed me to save my time and energy for the actual reporting. I thought about building a scraper to go through each Congressmember’s Facebook and Twitter page to search for the term “SOPA.” But until the blackout, most lawmakers had nothing to say on the topic. So at first, my “research” largely involved typing “SOPA [some congressmember’s name]” into Google News and usually finding nothing.

When the SOPA issue blew up during the Jan. 18 blackout, I didn’t have to do any searching, as Congressmembers pretty much rushed to make known their opposition to SOPA. SOPA Opera was designed to make it easy for constituents to look up their representative and, if I had no information about him or her, to tell me what they found out after talking to their representative. Or, in a few cases, Congressional staffers contacted me directly.

SOPA Opera easily broke the single-day traffic record at ProPublica. This was mostly due to blackout-participating sites like Craigslist that directed their traffic to us as a reference. Clearly, what caused the seismic shift on the SOPA debate were the mega-sites that coordinated the millions of emails and phone calls to lawmakers, and SOPA Opera was an indirect beneficiary of the increased public interest.

But I believe that SOPA Opera made at least one important contribution to the debate: it made very clear the level and characteristics of support enjoyed by SOPA. One thing that the THOMAS listing of sponsors fails to do is note the political parties of the lawmakers. I felt that this was a critical piece and it was easy to get and display. The result: many visitors to SOPA Opera who had believed that SOPA was a diabolical scheme by [whatever-party-they-oppose] were shocked at how SOPA’s support was so bi-partisan and broad.
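That party breakdown is the kind of join that takes only a few lines of code. Here’s a hedged sketch with hypothetical file and column names; this is not the actual SOPA Opera code.

# Hedged sketch of the kind of join described above: match a sponsor list
# against a lawmaker roster to count support by party. File and column
# names are hypothetical; this is not the actual SOPA Opera code.
import csv
from collections import Counter

with open("legislators.csv") as f:        # hypothetical roster: name, party
    party_by_name = {row["name"]: row["party"] for row in csv.DictReader(f)}

with open("sopa_sponsors.csv") as f:      # hypothetical sponsor list
    sponsors = [row["name"] for row in csv.DictReader(f)]

print(Counter(party_by_name.get(name, "Unknown") for name in sponsors))
# prints something like Counter({'R': ..., 'D': ...})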

I heard from a number of people who had been highly energized by the anti-SOPA debate yet were completely shocked that Sen. Al Franken – automatically assumed to be on the side of “Internet freedom” – was, in fact, a sponsor of SOPA’s counterpart in the Senate. This was no state secret: Sen. Franken has been passionate and outspoken in his support and was one of the few who didn’t back away after political support collapsed post-blackout.

SOPA Opera’s success probably owes less to my skill than to the dismal state of accessibility in our legislative process. This goes to show that even when you arrive extremely late to the game, it’s possible to make a significant impact by simply having an idea of how things can be better. This applies in just about any situation and profession. Programming just makes it much easier to push your creation forward.

Some technical details on self-publishing

I don’t want to dwell too much on the web-side of things, as that is just one specific use of programming. But to get back on the topic of how someone can position themselves for a web-related media job, SOPA Opera is a really excellent example of the potential in self-publishing.

SOPA Opera spent about a week on a domain (sopaopera.org) that I purchased for $10. It didn’t bear the ProPublica brand then, and I didn’t have time to promote it beyond a few tweets and submitting it to Hacker News and Reddit.

But in just a week, sopaopera.org had racked up about 150,000 pageviews before we migrated it to ProPublica:

That’s not a huge number in itself, and traffic to it increased exponentially under ProPublica’s umbrella. But it had gained enough notice that prominent sites were linking to it. The problem I had at the start – Googling a random lawmaker’s name and the term “SOPA” and finding absolutely nothing – was solved, as Google indexed all of the auto-generated lawmaker pages on SOPA Opera highly. At least one Congressmember’s office emailed me to update his page.

Not bad for a holiday break project and $10, using free resources and tools that are available to anyone with a computer. Back when I applied for newspaper jobs, I had to carry a portfolio of cut-out newspaper articles to show editors that at least a few people (my college paper and the one newspaper I interned at) had been willing to waste trees and ink on me. You were out of luck if all you had were a bunch of links to blog posts.

The mindset is different today: articles published on a traditional publication’s website, or at an online-only organization like Huffington Post, can count as legit clips. But I’d like to think that showing a full-blown website that includes not only traditional reporting content, but examples of how visual and interface design can tell a new story, as well as being able to provide concrete metrics (pageviews, referring links) of impact, would be even more impressive to today’s news editors.

#

Dollars for Docs

Like SOPA Opera, Dollars for Docs (aka “D4D”) was late to its respective topic. It is a long-accepted practice for medical companies to pay doctors to promote their products, not much different from a notable athlete who endorses a shoe that she considers to be the best for her sport. But in recent years, lawmakers and regulators have called for more transparency of these financial ties, to prevent cases in which doctors are unduly influenced by their benefactors.

Data on company-to-doctor payments is at least two decades old: Minnesota enacted a law in 1993 requiring companies to disclose their payments. However, that “data” came in the form of paper records that had to be hand-entered into a computer – after, of course, you visited the records’ actual storage location and photocopied each page at 25 cents a pop. For that reason, the records were collected but went unexamined for at least a decade, until Dr. Joseph Ross (now at Yale University) and the Public Citizen advocacy group gathered and analyzed them.

In 2007, they published their findings in the Journal of the American Medical Association, with the conclusion that the payment records were “compromised by incomplete disclosure as well as insufficient access”:

  • In Vermont (which enacted a public disclosure law in 2001), most disclosures were redacted for “trade secrets” reasons. Of the publicly disclosed payments, 75% lacked information identifying the recipient.
  • In Minnesota, many of the companies had years in which they reported nothing.
  • The “public” disclosures were pretty much inaccessible to the public. Dr. Ross and Public Citizen had to go to court to get the Vermont records.

Dr. Ross told me that after his study was published, not only was it apparent that the public was in the dark, but doctors themselves had no idea that the data were even being collected. The Minnesota pharmacy board was subsequently so swamped by requests from other researchers, hospitals and litigants that it began publishing the disclosures online.

At around the same time, the New York Times published its own investigation using the Minnesota records. They analyzed both the company disclosures and Medicaid payments to Minnesota psychiatrists and found that, during a period in which company payments to Minnesota psychiatrists increased, antipsychotic drug prescriptions for children jumped more than 900 percent.

The Times investigation (also in 2007) sparked a large political fight in which U.S. Senate investigators targeted prominent psychiatrists whose work had expanded the use of antipsychotic drugs as treatment for children.

The end result of this was a proposed federal law to mandate these payment disclosures nationwide. This law was later folded into the 2010 health care reform package. By 2013, the federal government will publish a database of these disclosures.

Couch database

The idea for D4D was sparked from something I wrote for my blog one evening. I was writing some programming tutorials to show journalists, well, how programming could be used for everyday reporting, and I needed a current example. I came across this Times article, Pfizer Gives Details on Payments to Doctors, which reported that Pfizer was fulfilling the terms of a legal settlement by publishing a searchable database of the health professionals it paid. At that point, it was the fourth such drug company that had disclosed its payments in advance of the 2013 law – most of the others had done so also as part of settling their own lawsuits.

At that point, I knew virtually nothing about the issue. But what I did see was that Pfizer’s site seemed unnecessarily cumbersome. Though the disclosures were mandated, Pfizer’s site made it difficult to do simple analyses such as finding the professionals who had received substantial amounts or even the sum of the database’s payments. So I wrote a scraper and published the code and data for others to use.

Data-scraping and pharmaceutical payments isn’t a high-traffic topic, but the blog post caught the eyes of a few important people. My colleagues Charles Ornstein and Tracy Weber, both Pulitzer Prize-winning health reporters, were well-informed about the issue but didn’t know how feasible it would be to do a broad analysis of the data. I think I would eventually have written a scraper and published the delimited data for every drug company that had so far been required to disclose (this article in the Times, Data on Fees to Doctors Is Called Hard to Parse, was a particular inspiration), but Charlie and Tracy knew how to turn it into a strong investigation.

Another side-benefit of self-publishing: a reporter at PBS had also been working to collect and parse the data. He noticed my Pfizer post, though, and rather than becoming competitors, PBS teamed up with ProPublica to conduct the investigation.

#

Done stories are never dead. They aren’t even done.

Even with the groundbreaking work already done by Dr. Ross, Public Citizen and the Times in 2007, the subsequent Senate investigations, and the impending official database in 2013, there was still room for D4D to become a valuable, innovative investigation. It had considerable impact on the debate, prompting companies and medical institutions to change their disclosure and conflict-of-interest policies. D4D is currently the most-viewed resource at ProPublica.

You can read our series coverage here.

The most obvious way that D4D differed from previous investigations is that it looked at the available nationwide data set, not just Minnesota’s, which opened up a number of data-driven angles.

So even on a well-trodden story, there are still countless angles when you combine a keen reporter’s instinct with an ability to collect data. A prevalent theme in the journalism industry – especially in the age of Twitter – is the drive to be first. I understand that it’s important in a ratings-sweep context, and it certainly gets the blood going when you’re competing for a story, but I’ve never thought that it was good for journalism or particularly useful to news consumers.

This is why I enjoy data-driven journalism. On any given topic, there are so many valid and substantial ways to cross-examine the evidence behind a story and produce meaningful stories. And as time passes, the analysis only becomes more interesting, not less, because more and more data is added to the picture. Data-driven journalism can be done through simple use of Excel. But programming, as I explain in the next section, can vastly increase the opportunities and depth of these analyses.

A sidenote: the hardest part of D4D was the logistics, which were only manageable through programming. It’s too boring to go into detail, but D4D required a collaborative reporting process more disciplined than “just send that Word doc as an attachment.” I don’t foresee a project of D4D’s scale being attempted by many other organizations, because of the difficulty in managing all the moving parts.

Never attribute to malice that which…

The most interesting reaction I got from D4D came not from doctors, but from researchers, compliance officers, and even federal investigators whose job it is to monitor these disclosures. They were thrilled that D4D made it so easy for them to check up on things. What surprised me was that I had just assumed that everyone with a professional stake in overseeing these disclosures had already collected the data themselves. The scraping-and-collecting of the company reports was by far the easiest part of D4D, and even if you couldn’t program, you could at least hack together a system of copying-and-pasting from the various data sources, if such information was vital to your job.

The truth is that the concept of “user interface” is as critical to an investigation as it is in separating successful tech startups from their clunky, failed competitors. I occasionally get asked for advice by researchers on their own projects. What stalls a surprising number of interesting investigative projects and analyses is not something as malicious as a shady CEO or the threat of a lawsuit, but problems as benign as: “A company has hundreds of data files accumulated over the years, all zipped and scattered across many webpages. Is it possible to somehow download them all, unzip them, and put them into a database (or Excel) just so I can find out whether someone’s name is in there?” Or: “If I could only fix the few places where the agency screwed up in outputting this comma-delimited data file, I could analyze it in Excel.”

If you can’t program, this isn’t a trivial problem: working with ten such files is an inconvenience. When there are 100 files, the momentum for an inquiry might just stop dead, especially if the inquiry arises from curiosity instead of certainty (think of how many great stories and investigations have come out of such casual inquiries). However, with some basic programming, the difference between organizing 10 files and 10,000 files is a matter of milliseconds. A programmer thus has the power not only to work with already-normalized datasets and produce interesting stories, but to (efficiently) create datasets that otherwise would never have been examined.
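As a hedged illustration of that kind of grunt work, the “10 files versus 10,000 files” problem collapses into a couple of loops. Every URL, file name, and the searched-for name below is hypothetical.

# Hedged sketch of the grunt work described above: download a batch of
# zipped CSV files, unpack them in memory, and search every row for a name.
# All URLs, file names, and the searched-for name are hypothetical.
import csv
import io
import urllib.request
import zipfile

urls = [f"https://example.gov/data/report_{year}.zip" for year in range(2000, 2012)]

for url in urls:
    with urllib.request.urlopen(url) as resp:
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    for member in archive.namelist():
        if not member.endswith(".csv"):
            continue
        rows = csv.reader(io.TextIOWrapper(archive.open(member), encoding="utf-8", errors="replace"))
        for row in rows:
            if any("Kennedy" in field for field in row):   # hypothetical name
                print(url, member, row)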

To reiterate: there are an astonishing number of stories and inquiries that are derailed by what are trivial technical issues to any half-competent programmer. This is both alarming from a civic perspective, yet extremely exciting if you’re someone with the right skills at the right time, as you’ll never want for ideas.

#

A practical road to programming

OK, enough abstract talk. The amazing promise of programming is that there are so many opportunities. This leads to its biggest problem for anyone trying to learn it: there are too many places to start.

This section contains some advice. It may not be the best advice for everyone, but at least everything I mention below is absolutely free to use and to learn from.

The Basics

If you haven’t already, create a Twitter account. Stop kvetching about how “no one wants to read what I ate for breakfast,” because that casually implies that people would want to read your 100,000-word opus, as soon as you finish it. They won’t. But having a Twitter account provides one avenue to spread your work and, just as importantly, a channel to learn from people who aren’t just tweeting about their breakfasts.

Get a Dropbox. Get used to putting stuff on the cloud. Not your sensitive documents, but things like e-reference books and datasets and code. This is much better (and in some ways, more secure) than emailing things to yourself.

Create a Google account. Even if you don’t use it for email, Google Documents is extremely useful. And you may find use for the other parts of Google’s ecosystem.

Get a second or third browser: If you’re paranoid about Twitter/Facebook/Google cookies tracking you, then use one browser to handle those accounts and another browser to do all your other web-browsing.

Data stuff

If you don’t have Excel, you can download OpenOffice’s capable suite. That said, Google Docs is probably the easiest way to get into keeping spreadsheets, with the added bonus of being in the cloud, which makes it easy to collaborate and to hook your programming into. Again, be cautious about putting very sensitive data there. But I’d argue that the cloud is still safer than keeping everything on a stealable MacBook.

Google Refine: this is the project formerly known as Gridworks. It runs in the browser, but unlike Google Docs, you don’t need a Google account or an Internet connection to use it. It’s similar to a spreadsheet except that you won’t use it to calculate an average/sum of a column or to make charts. It’s for cleaning data: to quickly determine that “John F. Kennedy”, “Jack F. Kennedy”, “John Fitzgerald Kennedy” and “J.F. Kennedy” are all the same person. Some investigative data work would not have been possible without this tool. Check out the video introduction here; I’ve also written a tutorial at ProPublica.

Given the number of important stories that basically boil down to finding someone’s name several times in a database, it’s a little amazing to me that every serious reporter hasn’t at least tried Google Refine.

Programming

Don’t get stalled by trying to figure out which is the best language. The three most popular at the moment – Ruby, Python and JavaScript – will serve your needs well, and you’ll find it relatively easy to pick up the other two after learning one.

That said, there’s one main big difference: Ruby and Python are more general-purpose scripting languages. You can use them to sort your files, process (and build) a database, and even build a full-fledged website (you may have heard of Ruby on Rails and Django).

JavaScript is most typically used for web interactivity in the browser, everything from animating buttons to full-fledged applications. Because it’s in every browser, it takes no work to try it out and to produce interactive bits. It takes a little more work to set up JS to do things like web-scraping or local file processing.

JS has an additional advantage in that there are many interactive tutorials that you can access through your browser. Codecademy is one of the best-known ones.

Programming resources

Zed Shaw’s “Learn Python the Hard Way” is one of the most popular beginner-level (and free) ebooks. There is also a Ruby version.

A little self-promotion: for people who best learn through practical projects, I’ve been working on my own Ruby beginner’s guide, tentatively titled the Bastards Book of Ruby. It’s a work in progress but you’ll find some ideas on starter projects to work towards (a good start is writing a script to download and store all your tweets).

HTML

Don’t learn HTML. That is, don’t take a course in HTML. Learn enough to know that the HTML behind a webpage is just plain text. And learn enough to understand how:

<a target="_blank" href="http://en.wikipedia.org/wiki/HTML">Wikipedia's entry on HTML</a>

creates a link that takes you to Wikipedia in a new window, like this: Wikipedia’s entry on HTML

That’s basically enough to get the concept of HTML (and the idea of meta-information) and to begin scraping webpages. One of the fastest ways to learn as you go is to get acquainted with your web browser’s inspector.
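To show how little HTML knowledge “enough” really is, here’s a minimal scraping sketch that uses only Python’s standard library to print every link on that Wikipedia page; the User-Agent header is just a generic browser string.

# Minimal sketch of "just enough HTML to scrape": print the href of every
# <a> tag on a page, using only the standard library.
from html.parser import HTMLParser
import urllib.request

class LinkCollector(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                print(href)

req = urllib.request.Request("http://en.wikipedia.org/wiki/HTML",
                             headers={"User-Agent": "Mozilla/5.0"})
page = urllib.request.urlopen(req).read().decode("utf-8")
LinkCollector().feed(page)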

#

People to learn from

I’m not a particularly inspiring example of a journo-coder: I took up computer engineering because I was afraid there wouldn’t be many journalism jobs, and I kind of half-stumbled into combining reporting with code, since college is an easy time to learn programming fundamentals. This is why, if you’re a college student now, I strongly advise you to pick up programming at a time when learning is your main job in life.

Much more impressive to me are people who were doing well in their day jobs but decided to pick up programming in their spare hours – and then returned to do their day jobs with newfound inspiration and possibilities.

John Keefe (WNYC) – About a year-and-a-half ago, I remember John coming to Hacks/Hackers events to watch people code and to continually apologize for having to ask what he thought were dumb questions. In an incredibly short time, Keefe learned enough hacking to produce some great, creative apps and now heads WNYC’s data team, and also leads the discussion among news orgs on how to modernize the way we do things like election coverage.

Zach Sims – Sims is a co-founder of Codecademy. He was a poli-sci major who ventured into tech entrepreneurship but was frustrated that his lack of technical skills hindered his work. He learned programming on his own and with a co-founder, created Codecademy to teach others how to program. Codecademy itself is one of the hottest recent startups.

Neil Saunders – I stumbled across Neil Saunders’ blog while looking for R + Ruby examples. His blog is titled “What You’re Doing is Rather Desperate“, inspired by the reaction of a colleague who was apparently unimpressed with his use of programming in his bioscience job.
It’s a misconception that scientifically-minded professionals also know how to program. In fact, some don’t even have basic computer skills. Saunders not only publishes his code, but shows how others in his field can greatly improve their research with programming skills.

Kaitlyn Trigger
As this TechCrunch article puts it, Kaitlyn Trigger was a poli-sci major who “never took any computer classes.” She has been together with Instagram co-founder Mike Krieger but was frustrated that she didn’t understand his work. So she picked up Learn Python the Hard Way, learned the Python-based web framework Django, and created Lovestagram as a Valentine’s Day present.

It’s not just a cute story – learning Python and Django and making something within 2 months in your spare time is a pretty incredible achievement. It’s an awesome example of how having a project in mind can really help you learn code.

Matt Waite – was an award-winning newspaper reporter before becoming a web developer. He went on, as a web developer, to win the most prestigious of journalism prizes: a Pulitzer for PolitiFact. He now teaches at the University of Nebraska-Lincoln and keeps a blog related to his work with journalism students.

Photos from outside of NYFW 2012 (Fall/Winter)

Fashion Week

Another Fashion Week come and gone. I didn’t have the time to go to any actual events but I did take part in a couple of related things:

I did portraits and some scenery photos during Correll Correll’s casting call. I didn’t really get time to set much up but it wasn’t too hectic of a shoot.

To make it easier for the casting director to keep track of them, the models were asked to hold up their cards while their portraits were taken. I think it’s really interesting how similar (or different) they look compared to their cards, without fancy makeup, lights or post-processing:

models and cards

The light changed drastically throughout the day and afternoon:

Fashion Week

NYFW 2012 Casting Call

So this wasn’t officially part of Fashion Week in any way, but it was still the coolest fashion-related (and celebrity-sighting) experience I’ve had in the city. While my co-worker and I were exploring the ticker-tape apocalypse after the Giants’ Super Bowl parade, she spotted NYT fashion photog Bill Cunningham making his way down Broadway through the crowds and paper piles:

Bill Cunningham, on the street at the Super Bowl Giants Parade

Bill Cunningham, on the street, after the Super Bowl Giants Parade

I hadn’t yet finished watching his documentary, so I was too intimidated to say “Hello.” Afterwards, I went home, watched it, and wished I had at least given him a thumbs up. One of my favorite parts of the documentary is when Cunningham describes how he maintains his outsider status when invited to glam events. He’ll eat a modest meal beforehand and won’t even accept a glass of water while doing his work. Just like any other standup journalist. I hope I get to properly meet him one of these days.

You can see my other Fashion Week-related photos here.

Big Faces – a mashup of The Big Picture and Face.com

Big Faces

Just put up another quick side project: Big Faces, which aggregates the excellent Big Picture Blog (by the Boston Globe and its many contributors) and just shows the faces. I used the Face.com API to crop the faces before uploading them.

I have in mind a multi-level photo-exploration app but because it’s been so long since I looked at using Backbone.js, I needed some practice. As it is, the fine work of the photographers and curators associated with The Big Picture stands on its own power. I probably should rethink it so it doesn’t require a 700K JSON download…

The process is similar to the Congressmiles demo I did last week.

On the front end, this uses the excellent Isotope jQuery plugin. It also includes Backbone.js, though I simplified the project so much that I pretty much ditched using any of Backbone’s useful features. The Backbone boilerplate was extremely helpful, though, in organizing the project.

Best subway music ever: Super Mario Bros. theme

Musician Gypsy Joe Trane played the classics, including the theme to the Nintendo classic Super Mario Bros., on the R train heading uptown in Manhattan the other night. I can’t believe that no one else was (or at least looked) as amused as I was. I generally like subway performers, and these guys had the best approach I’ve seen: just stand and jam for many stops. I don’t know if they made more or less money than those who do a quick set before hopping into another car, but I gave them a few bucks for the great accompaniment.

Woody Allen: Every step is part of the writing process

Woody Allen (2006), photo by Colin Swan

One of the best books I’ve picked up recently is Eric Lax’s Conversations with Woody Allen: His Films, the Movies, and Moviemaking, which is basically a 400+ page interview, spanning decades, between the author and Allen. I’m a fair-weather fan myself – I’ve only seen a few of his movies – but I’ve always admired his relentless pursuit of his art, even when some of it seems to be just screwball comedy.

The book is divided into eight parts covering different facets of Allen’s work, including “Writing It”, “Shooting, Sets, Locations” and “Directing.” The following excerpt comes from the “Editing” part; in it, Allen talks about how he sees every step of filmmaking as part of the writing process (emphasis added):

[Eric Lax]: You’re involved with the details of every step of a film, and I’ve noticed that you do not delegate any part of its creation, even assembling a first cut from takes you’ve already selected.

[Woody Allen]: To me the movie is a handmade product. I was watching a documentary on editing on television the other day and many wonderful filmmakers were on and wonderful editors and everyone was talking briefly about how they edit. Years ago, they would turn it over to an editor. Or there are people I know who finish shooting and go away for a vacation and let the editor do a draft; then they come back and they check it out and do their changes.

I can’t do that. It would be unthinkable for me not to be in on every inch of movie – and this is not out of any sort of ego or sense of having to control; I just can’t imagine it any other way. How could I not be in on the editing, on the scoring, because I feel that the whole project is one big writing project?

You may not be writing with a typewriter once you get past the script phase, but when you’re picking locations and casting and on the set, you’re really writing. You’re writing with film, and you’re writing with film when you edit it together and you put some music in. This is all part of the writing process for me.

Lax, Eric (2009-08-12). Conversations with Woody Allen (p. 284). Random House, Inc.. Kindle Edition.

I feel the exact same way about any kind of modern storytelling. Whether it’s done as a photo essay, movie, or news application/website, each step of the process can profoundly affect and be affected by your editorial vision. Back in the day of traditional journalism, it’s possible that you could have one person do just the interviewing and research and another person put it into story form. But the feedback in that process – an unexpectedly emotional interview that alters what you previously thought the story arc should be – would be almost entirely lost.