
Because of a typo, the government needs to keep your private data 10 times longer?

Yesterday the Obama administration approved new rules to greatly extend the time – from 180 days to 1,826 days (5 years) – that domestic intelligence services can retain American citizens’ private information. Citizens are eligible to be part of this federal data warehouse even when “there is no suspicion that they are tied to terrorism.”

As Charlie Savage reports in the New York Times:

Intelligence officials on Thursday said the new rules have been under development for about 18 months, and grew out of reviews launched after the failure to connect the dots about Umar Farouk Abdulmutallab, the “underwear bomber,” before his Dec. 25, 2009, attempt to bomb a Detroit-bound airliner.

After the failed attack, government agencies discovered they had intercepted communications by Al Qaeda in the Arabian Peninsula and received a report from a United States Consulate in Nigeria that could have identified the attacker, if the information had been compiled ahead of time.

The case of the “underwear bomber” is a strange justification for this expansion of data storage, because the 2009 Christmas terror attempt nearly succeeded thanks to a series of what seem like common human errors, not an information drought.

Shortly after the underwear bomber incident, the White House released a report examining how our vast intelligence network failed to prevent Abdulmutallab, the bomber, from boarding a flight from Amsterdam to Detroit.

One of the critical failures? Someone at the State Department, when sending information about Abdulmutallab to the National Counterterrorism Center, misspelled his name. Even though his father alerted American intelligence officials a full month before the attempted attack, our sophisticated surveillance system was partially stymied by a single misplaced letter.

As Foreign Policy reported in 2010:

State called an impromptu press briefing late Thursday evening to address the issue. The tone of the briefing was combative, as reporters pressed the “senior administration official” for details about the misspelling that he seemed not to want to give up. But here’s what we learned.

Someone (they won’t say who) at the State Department (presumably at the U.S. Embassy in Nigeria) did check to see if Abdulmutallab had a visa (they won’t say exactly when). That person was working off the Visas Viper cable originally sent from the embassy to the NCTC, which had the name wrong.

“There was a dropped letter in that — there was a misspelling,” the official said. “They checked the system. It didn’t come back positive. And so for a while, no one knew that this person had a visa.” (They won’t say for how long)

The chain of failures is more complicated than that, but the fact that a typo was a big enough wrench to warrant special mention in the White House review is an indication that the government’s surveillance systems, despite the work of its data architects, engineers and scientists, were compromised by some pretty banal problems, like not having spell-check capability.

In fact, the White House report goes out of its way to assert that the information-sharing problems that failed to prevent the 9/11 attacks “have now, 8 years later, largely been overcome.” Information about Abdulmutallab (again, his own father met with U.S. officials to warn them of his son a month ahead of the attack), his association with Al Qaeda, and Al Qaeda’s attack planning, “was available to all-source analysts at the CIA and the NCTC prior to the attempted attack.”

In other words, the 9/11 attacks were possible because government agencies wouldn’t share information with each other. Now they are happily sharing information with each other; they just aren’t diligently looking at it.

So the best solution is to enact a ten-fold increase in the legal time limit for storing American citizens’ data?

It sounds like the government’s ability to detect terrorists would be greatly improved with more user-friendly software and adherence to data-handling standards. The ability to catch slight misspellings and do fuzzy data matches is something that Facebook and Google users have enjoyed for years; hell, the basic concept and a consumer-friendly implementation have been in Microsoft Word for about 20 years. Were software overhauls enacted before deciding that the government needs more of its citizens’ private information? Or does the review of such technical details and policies seem too unsexy and pedantic for our intelligence bureaucracy?
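
For what it’s worth, the kind of fuzzy matching that would catch a dropped letter doesn’t require anything exotic. Here’s a minimal sketch using nothing but Python’s built-in difflib module – the records and the misspelled query are made-up stand-ins, not anything from a real system:

    # A minimal sketch of fuzzy name matching using only Python's standard library.
    # The "visa_records" list and the misspelled query are illustrative, not real data.
    from difflib import get_close_matches

    visa_records = ["Umar Farouk Abdulmutallab", "John Q. Public", "Jane Doe"]

    # A query with a dropped letter -- the kind of error described in the Visas Viper cable
    query = "Umar Farouk Abdulmutalab"

    # An exact lookup fails...
    print(query in visa_records)
    # False

    # ...but a fuzzy match still surfaces the intended record.
    print(get_close_matches(query, visa_records, n=1, cutoff=0.8))
    # ['Umar Farouk Abdulmutallab']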

The Times article also mentions that the guidelines call for more duplication of entire databases…which is a bit confusing. I’m assuming that this doesn’t refer to making backup copies (in case of a hard drive failure), but to a method of data-sharing between analysts. This is how the Times describes it:

The guidelines are also expected to result in the center making more copies of entire databases and “data mining them” using complex algorithms to search for patterns that could indicate a threat.

Hopefully, this doesn’t mean that database files are being copied and passed around so that each department can have their own copy of another department’s data. This would seem to introduce a few major logistical issues: namely, how do you know the copy you have contains the latest data? Remember that the typo in Abdulmutallab’s name was one mistake that helped spawn a series of snafus. Are we going to have an incident in which a terrorist slips through because an analyst forgot to update his/her copy of a database before mining it? Also, there’s the possibility that some of these data copies might end up lying around long after their 5-year limit.

There have been several reports of how intelligence agencies now suffer from too much data, to the point where analysts are “drowning in the data.” If “drowning in data” is ever cited as the reason a future attack wasn’t prevented, I hope the proposed reform is not “more data.”

Tools to get to the precipice of programming

I’m not a master programmer, but it’s been so long since I did my first “Hello World” that I don’t remember how people first grok the point of programming (for me, it was to get a good grade in programming class).

So when teaching non-programmers the value of code, I’m hoping there’s an even friendlier, shallower first step than the many zero-to-coder references out there, including Zed Shaw’s excellent Learn Code the Hard Way series.

Not only should this first step be “easy”; it should be nearly ubiquitous, free to use, and, most importantly, immediately beneficial to both beginners and experts. The point here is not to teach coding, per se, but to get beginners to a precipice of great things, so that when they stand at the edge, they can at least see something to program towards, even if the end goal is simply labor-aversion, i.e. “I don’t want to copy-and-paste 100 web page tables by hand.”

Here are a few tools I’ve tried:

Inspecting a cat photo

1. Using the web inspector – I’ve never seen the point of taking an in-depth HTML class (unless you want to become a full-time web designer/developer, and even then…), yet so few non-techies even grasp that webpages are (largely) text, external multimedia assets (such as photos and videos), and the text that describes where those assets come from. To them, editing a webpage is as arcane as compiling a binary.

Nothing breaks that illusion better than the web inspector. Its basic element inspector and network panel immediately illustrate the “magic” behind the web. As a bonus, with regular, casual use, the inspector can teach you the HTML and CSS vocabulary if you do intend to become a developer. It’s hard to think of another tool that is as ubiquitous and easy to use as the web inspector, yet as immensely useful to beginner and expert alike.

Its uses are immediate, especially for anyone who’s ever wanted to download a video from YouTube. I’ve shown journalists how this simple-to-use tool has helped my investigative reporting, such as when I needed to find an XML file that was obfuscated through a Flash object.

In a hands-on class I taught, a student asked “So how do I get that XML into Excel?” – and that’s when you can begin to describe the joy of a basic for loop.
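
That question is easy to sketch out, too. Here’s roughly what that first for loop can look like in Python – the file name and the tag names (“record”, “name”, “date”, “amount”) are hypothetical stand-ins for whatever the XML actually contains:

    # A minimal sketch: loop over a hypothetical XML file of records and write
    # them out as a CSV that Excel can open. The file and tag names are made up.
    import csv
    import xml.etree.ElementTree as ET

    tree = ET.parse("records.xml")   # e.g. the XML file found via the network panel
    root = tree.getroot()

    with open("records.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "date", "amount"])       # header row
        for record in root.iter("record"):                # the basic for loop
            writer.writerow([
                record.findtext("name"),
                record.findtext("date"),
                record.findtext("amount"),
            ])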

Here’s an overview of a hands-on web session I taught at NICAR12. Here’s the guide I wrote for my ProPublica project. And here’s the first of a multi-part introduction to the web inspector.

Refine WH Visitors

2. Google Refine – Refine is spreadsheet-like software that allows you to easily explore and clean data: the most common example is resolving varied entries (“JOHN F KENNEDY”, “John F. Kennedy”, “Jack Kennedy”, “John Fitzgerald Kennedy”) into one (“John F. Kennedy”). Given that so many great investigative stories and data projects start with “How many times does this person’s name appear in this messy database?”, its uses are immediate and obvious.

Refine is an open-source tool that runs out of the web browser, yet its point-and-click interface is so powerful that I’m happy to take my data out of my scripted workflow in order to use Refine’s features on it. Not only can you use regular expressions to help filter/clean your data, you can write full-on scripts, making Refine a pretty good environment for showing some basic concepts of code (such as variables and functions).
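
For a flavor of what’s going on under the hood, here’s a rough Python sketch of the “key collision” idea behind Refine’s clustering: reduce each entry to a normalized fingerprint and group the entries whose fingerprints collide. This is a simplification of Refine’s actual methods, and the entries are just the Kennedy variations from above:

    # A rough sketch of "key collision" clustering: normalize each entry to a
    # fingerprint and group entries whose fingerprints match. Refine's real
    # clustering methods are more sophisticated than this.
    import re
    from collections import defaultdict

    def fingerprint(name):
        tokens = re.findall(r"[a-z]+", name.lower())   # lowercase, drop punctuation
        return " ".join(sorted(set(tokens)))           # dedupe and sort the tokens

    entries = ["JOHN F KENNEDY", "John F. Kennedy", "Kennedy, John F",
               "John Fitzgerald Kennedy", "Jack Kennedy"]

    clusters = defaultdict(list)
    for entry in entries:
        clusters[fingerprint(entry)].append(entry)

    for key, group in clusters.items():
        print(key, "->", group)
    # The "f john kennedy" fingerprint collects the first three variants; the
    # other two would still need a fuzzier method (or a human) to merge.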

I wrote a guide showing how Refine was essential for one of my investigative data projects. Refine’s official video tutorial is also a great place to start.

3. Regular Expressions – maybe it’s because my own comp-sci curriculum skipped regexes, leaving me to figure out their worth much, much later than I should have, but I really try to push learning regexes every time the following questions are asked:

  • In Excel, how do I split this “last_name, first_name middle_name” column into three different columns?
  • In Excel, how do I get all these date formats to be the same?
  • In Excel, how do I extract the zip code from this address field?

…and so on. The use of LEFT, TRIM, RIGHT, etc. functions always seems to be much more convoluted than the regex needed to do this kind of simple parsing (see the sketch below). And while regexes aren’t the answer to every parsing problem, they sure deliver a lot of return for the investment (which can start from a simple cheat sheet next to your computer).
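
For instance, here’s a minimal Python sketch of the first and third questions above, using the built-in re module – the sample strings and patterns are just illustrations, not one-size-fits-all answers:

    # Minimal regex sketches for two of the questions above. The sample
    # strings are made up for illustration.
    import re

    # Split a "last_name, first_name middle_name" field into three parts
    name = "Kennedy, John Fitzgerald"
    match = re.match(r"^(.+?),\s*(\S+)\s*(.*)$", name)
    if match:
        last, first, middle = match.groups()
        print(last, "|", first, "|", middle)    # Kennedy | John | Fitzgerald

    # Extract the zip code from an address field
    address = "1600 Pennsylvania Ave NW, Washington, DC 20500-0003"
    zip_match = re.search(r"\b\d{5}(?:-\d{4})?\b", address)
    if zip_match:
        print(zip_match.group())                # 20500-0003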

Regular-expressions.info has always been one of my favorite references. Zed Shaw is also writing a book on regexes. I’ve also written a lengthy tutorial on regexes.

So none of these tools or concepts involve programming…yet. But they’re immediately useful on their own, opening new doors to useful data just enough to entice beginners to go further. In that sense, I think these tools make for an inviting introduction to learning programming.