The smiles of your U.S. Senate from most smiley-est to least, according to Face.com's algorithm
Who’s got the biggest smile among our U.S. senators? Let’s find out while exercising our Ruby coding and civic skills. This article consists of a quick coding-strategy overview (the full code is at my Github). Or jump here to see the results, as sorted by Face.com’s algorithm.
About this tutorial
This is a Ruby coding lesson to demonstrate the basic features of Face.com’s face-detection API for a superficial use case. We’ll mash with the New York Times Congress API and data from the Sunlight Foundation.
The code is relatively simple and is intended for learning programmers who are comfortable with RubyGems, hashes, loops and variables.
If you’re a non-programmer: the use case may be a bit silly here, but I hope you can view it from a big-picture level and see how programming can: 1) make quick work of menial tasks and 2) create and analyze datapoints where none existed before.
On to the lesson!
—
The problem with portraits
For the SOPA Opera app I built a few weeks ago, I wanted to use the Congressional mugshots to illustrate
the front page. The Sunlight Foundation provides a convenient zip file download of every sitting Congressmember’s face. The problem is that the portraits were a bit inconsistent in composition (and quality). For example, here’s a usable, classic head-and-shoulders portrait of Senator Rand Paul:
Sen. Rand Paul
But some of the portraits don’t have quite that face-to-photo ratio. Here’s Sen. Jeanne Shaheen’s portrait:
Sen. Jeanne Shaheen
It’s not a terrible Congressional portrait. It’s just out of proportion compared to Sen. Paul’s. What we need is a closeup crop of Sen. Shaheen’s face:
Sen. Jeanne Shaheen's face cropped
How do we do that for a set of dozens (or even hundreds) of portraits without manually opening each image and cropping each head by hand – a carpal-tunnel-syndrome-inducing slog?
Easy face detection with Face.com’s Developer API
Face detection uses an algorithm that scans an image for shapes proportional to the average human face, with inner shapes such as eyes, a nose and a mouth in the expected places. The algorithm doesn’t need to know exactly what an eye looks like; two light-ish shapes about halfway down a head-like shape might be good enough.
You could write your own image analyzer to do this, but we just want to crop faces right now. Luckily, Face.com provides a generous API: send it an image, and it sends back a JSON response in this format:
{
"photos": [{
"url": "http:\/\/face.com\/images\/ph\/12f6926d3e909b88294ceade2b668bf5.jpg",
"pid": "F@e9a7cd9f2a52954b84ab24beace23046_1243fff1a01078f7c339ce8c1eecba44",
"width": 200,
"height": 250,
"tags": [{
"tid": "TEMP_F@e9a7cd9f2a52954b84ab24beace23046_1243fff1a01078f7c339ce8c1eecba44_46.00_52.40_0_0",
"recognizable": true,
"threshold": null,
"uids": [],
"gid": null,
"label": "",
"confirmed": false,
"manual": false,
"tagger_id": null,
"width": 43,
"height": 34.4,
"center": {
"x": 46,
"y": 52.4
},
"eye_left": {
"x": 35.66,
"y": 44.91
},
"eye_right": {
"x": 58.65,
"y": 43.77
},
"mouth_left": {
"x": 37.76,
"y": 61.83
},
"mouth_center": {
"x": 49.35,
"y": 62.79
},
"mouth_right": {
"x": 57.69,
"y": 59.75
},
"nose": {
"x": 51.58,
"y": 56.15
},
"ear_left": null,
"ear_right": null,
"chin": null,
"yaw": 22.37,
"roll": -3.55,
"pitch": -8.23,
"attributes": {
"glasses": {
"value": "false",
"confidence": 16
},
"smiling": {
"value": "true",
"confidence": 92
},
"face": {
"value": "true",
"confidence": 79
},
"gender": {
"value": "male",
"confidence": 50
},
"mood": {
"value": "happy",
"confidence": 75
},
"lips": {
"value": "parted",
"confidence": 39
}
}
}]
}],
"status": "success",
"usage": {
"used": 42,
"remaining": 4958,
"limit": 5000,
"reset_time_text": "Tue, 24 Jan 2012 05:23:21 +0000",
"reset_time": 1327382601
}
}
The JSON includes an array of photos (if you sent more than one to be analyzed), each with an array of tags – one tag for each detected face. The important attributes for cropping purposes are height, width, and center:
"width": 43,
"height": 34.4,
"center": {
"x": 46,
"y": 52.4
},
These numbers represent percentage values from 0-100. So the width of the face is 43% of the image’s total width. If the image is 200 pixels wide, then the face spans 86 pixels.
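To turn those percentages into a pixel crop box, multiply by the image’s dimensions and center the box on the face’s center point. Here’s a small sketch – the `face_crop_box` helper and its padding factor are my own inventions, not part of either API:

```ruby
# Convert a Face.com tag's percentage-based face box into pixel
# coordinates: [x, y, width, height], the argument order RMagick's
# Image#crop expects. The padding factor enlarges the box to leave
# some margin around the face.
def face_crop_box(img_width, img_height, tag, padding = 1.5)
  w = (tag['width']  / 100.0 * img_width  * padding).round
  h = (tag['height'] / 100.0 * img_height * padding).round
  x = (tag['center']['x'] / 100.0 * img_width  - w / 2.0).round
  y = (tag['center']['y'] / 100.0 * img_height - h / 2.0).round
  [[x, 0].max, [y, 0].max, w, h]  # clamp to the image's top-left edges
end

# The sample tag above, on a 200x250 image, with no padding:
tag = { 'width' => 43, 'height' => 34.4,
        'center' => { 'x' => 46, 'y' => 52.4 } }
face_crop_box(200, 250, tag, 1.0)  # => [49, 88, 86, 86]
```

Note that the 86-pixel face width matches the arithmetic above.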
Using your favorite HTTP-calling library (I like the RestClient gem), you can simply ping the Face.com API’s detect feature to get these coordinates for any image you please.
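For illustration, here’s roughly what that detect call might look like. I’ve used the standard library’s Net::HTTP so the sketch has no dependencies (with RestClient the GET is a one-liner); the endpoint URL, parameter names, and the key/secret placeholders are based on Face.com’s docs, so double-check them against the current documentation:

```ruby
require 'json'
require 'net/http'

# Hypothetical placeholders for your Face.com developer credentials
FACE_API_KEY    = 'YOUR_API_KEY'
FACE_API_SECRET = 'YOUR_API_SECRET'

# Ask the detect endpoint for face coordinates in a remote image;
# returns the raw JSON response body
def detect_faces(image_url)
  uri = URI('http://api.face.com/faces/detect.json')
  uri.query = URI.encode_www_form(api_key: FACE_API_KEY,
                                  api_secret: FACE_API_SECRET,
                                  urls: image_url)
  Net::HTTP.get(uri)
end

# Pull the per-face tags out of a detect response
def face_tags(json_str)
  JSON.parse(json_str)['photos'].flat_map { |photo| photo['tags'] }
end
```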
Image manipulation with RMagick
So how do we do the actual cropping? With the RMagick gem (a Ruby wrapper for the ImageMagick graphics library), which lets us do crops with commands as simple as these:
require 'rmagick'

# Image.read returns an array (for multi-frame formats), so take the first frame
img = Magick::Image.read("somefile.jpg")[0]
# crop a 100x100 square starting from the top-left corner
img = img.crop(0, 0, 100, 100)
The RMagick documentation page is a great place to start. I’ve also written an image-manipulation chapter for The Bastards Book of Ruby.
The Process
The code for all of this is stored at my Github account.
I’ve divided this into two parts/scripts. You could combine them into one script, but to make things easier to comprehend (and to lessen the amount of best-practices error-handling code I have to write), I’ve split the work into a “fetch” stage and a “process” stage.
In the fetch.rb stage, we essentially download all the remote files we need to do our work:
- Download a zip file of images from Sunlight Labs and unzip it at the command line
- Use NYT’s Congress API to get latest list of Senators
- Use Face.com API to download face-coordinates as JSON files
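As one small sketch of the fetch stage, here’s how the NYT request URL might be assembled. The `senate_members_url` helper and the key placeholder are my own; the v3 endpoint path is based on the NYT Congress API docs, so verify it against the current version:

```ruby
NYT_API_KEY = 'YOUR_NYT_KEY'  # hypothetical placeholder for your key

# Build the NYT Congress API request URL for a Senate member list
def senate_members_url(congress = 112)
  'http://api.nytimes.com/svc/politics/v3/us/legislative/congress/' \
  "#{congress}/senate/members.json?api-key=#{NYT_API_KEY}"
end

senate_members_url(112)
# => "http://api.nytimes.com/svc/politics/v3/us/legislative/congress/112/senate/members.json?api-key=YOUR_NYT_KEY"
```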
In the process.rb stage, we use RMagick to crop the photos based on the metadata we downloaded from the NYT and Face.com. As a bonus, I’ve thrown in a script that programmatically creates a crude webpage ranking the Congressmembers’ faces by smile, glasses-wearingness, and androgenicity. How? The Face.com API handily provides these numbers in its response:
"attributes": {
"glasses": {
"value": "false",
"confidence": 16
},
"smiling": {
"value": "true",
"confidence": 92
},
"face": {
"value": "true",
"confidence": 79
},
"gender": {
"value": "male",
"confidence": 50
},
"mood": {
"value": "happy",
"confidence": 75
},
"lips": {
"value": "parted",
"confidence": 39
}
}
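Ranking by smile is then just a sort on those confidence values. In this sketch (the senator hashes and the `smile_score` helper are made up for illustration), a non-smiling verdict counts as a negative score so the non-smilers sink to the bottom:

```ruby
# Score a face's attributes hash: smiling confidence if the algorithm
# says "true", the negated confidence if it says "false"
def smile_score(attrs)
  smiling = attrs['smiling']
  smiling['value'] == 'true' ? smiling['confidence'] : -smiling['confidence']
end

senators = [
  { name: 'Sen. Wicker',  attrs: { 'smiling' => { 'value' => 'true',  'confidence' => 100 } } },
  { name: 'Sen. Franken', attrs: { 'smiling' => { 'value' => 'false', 'confidence' => 37 } } },
  { name: 'Sen. Inouye',  attrs: { 'smiling' => { 'value' => 'true',  'confidence' => 40 } } }
]

# Sort from biggest smile to least
ranked = senators.sort_by { |s| -smile_score(s[:attrs]) }
ranked.map { |s| s[:name] }
# => ["Sen. Wicker", "Sen. Inouye", "Sen. Franken"]
```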
I’m not going to reprint the code from my Github account; you can see the scripts yourself there:
https://github.com/dannguyen/Congressmiles
First things first: sign up for API keys at the NYT and Face.com
I also use the following gems: rest-client (for the API calls) and rmagick (for the cropping).
The Results
Here’s what you should see after you run the process.rb script (all judgments are made by Face.com’s algorithm… I don’t think everyone will agree about the quality of the smiles):
10 Biggest Smiles
Sen. Wicker (R-MS) [100]
Sen. Reid (D-NV) [100]
Sen. Shaheen (D-NH) [99]
Sen. Hagan (D-NC) [99]
Sen. Snowe (R-ME) [98]
Sen. Kyl (R-AZ) [98]
Sen. Klobuchar (D-MN) [98]
Sen. Crapo (R-ID) [98]
Sen. Johanns (R-NE) [98]
Sen. Hutchison (R-TX) [98]
10 Most Ambiguous Smiles
Sen. Inouye (D-HI) [40]
Sen. Kohl (D-WI) [43]
Sen. McCain (R-AZ) [47]
Sen. Durbin (D-IL) [49]
Sen. Roberts (R-KS) [50]
Sen. Whitehouse (D-RI) [52]
Sen. Hoeven (R-ND) [54]
Sen. Alexander (R-TN) [54]
Sen. Shelby (R-AL) [62]
Sen. Johnson (D-SD) [63]
The Non-Smilers
Sen. Bingaman (D-NM) [79]
Sen. Coons (D-DE) [77]
Sen. Burr (R-NC) [72]
Sen. Hatch (R-UT) [72]
Sen. Reed (D-RI) [71]
Sen. Paul (R-KY) [71]
Sen. Lieberman (I-CT) [59]
Sen. Bennet (D-CO) [55]
Sen. Udall (D-NM) [51]
Sen. Levin (D-MI) [50]
Sen. Boozman (R-AR) [48]
Sen. Isakson (R-GA) [41]
Sen. Franken (D-MN) [37]
10 Most Bespectacled Senators
Sen. Franken (D-MN) [99]
Sen. Sanders (I-VT) [98]
Sen. McConnell (R-KY) [98]
Sen. Grassley (R-IA) [96]
Sen. Coburn (R-OK) [93]
Sen. Mikulski (D-MD) [93]
Sen. Roberts (R-KS) [93]
Sen. Inouye (D-HI) [91]
Sen. Akaka (D-HI) [88]
Sen. Conrad (D-ND) [86]
10 Most Masculine-Featured Senators
Sen. Bingaman (D-NM) [94]
Sen. Boozman (R-AR) [92]
Sen. Bennet (D-CO) [92]
Sen. McConnell (R-KY) [91]
Sen. Nelson (D-FL) [91]
Sen. Rockefeller IV (D-WV) [90]
Sen. Carper (D-DE) [90]
Sen. Casey (D-PA) [90]
Sen. Blunt (R-MO) [89]
Sen. Toomey (R-PA) [88]
10 Most Feminine-Featured Senators
Sen. McCaskill (D-MO) [95]
Sen. Boxer (D-CA) [93]
Sen. Shaheen (D-NH) [93]
Sen. Gillibrand (D-NY) [92]
Sen. Hutchison (R-TX) [91]
Sen. Collins (R-ME) [90]
Sen. Stabenow (D-MI) [86]
Sen. Hagan (D-NC) [81]
Sen. Ayotte (R-NH) [79]
Sen. Klobuchar (D-MN) [79]
—
For the partisan data-geeks, here’s some faux analysis with averages:
| Party | Smiles | Non-smiles | Avg. Smile Confidence |
|-------|--------|------------|-----------------------|
| D     | 44     | 7          | 85                    |
| R     | 42     | 5          | 86                    |
| I     | 1      | 1          | 85                    |
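The averages in that table come from grouping the senators by party and averaging the smiling confidences. Here’s a sketch with made-up records (in the real scripts, the party comes from the NYT data and the smile values from Face.com):

```ruby
# Hypothetical records: each senator's party and smile verdict
senators = [
  { party: 'D', smiling: true,  confidence: 90 },
  { party: 'D', smiling: true,  confidence: 80 },
  { party: 'R', smiling: false, confidence: 60 },
  { party: 'R', smiling: true,  confidence: 86 }
]

# Tally smilers/non-smilers per party and average the smilers' confidence
stats = senators.group_by { |s| s[:party] }.map do |party, members|
  smilers = members.select { |s| s[:smiling] }
  avg = smilers.sum { |s| s[:confidence] } / smilers.size.to_f
  { party: party, smiles: smilers.size,
    non_smiles: members.size - smilers.size, avg_confidence: avg.round }
end
# stats => [{:party=>"D", :smiles=>2, :non_smiles=>0, :avg_confidence=>85},
#           {:party=>"R", :smiles=>1, :non_smiles=>1, :avg_confidence=>86}]
```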
There you have it, the Republicans are the smiley-est party of them all.
Further discussion
This is an exercise to show off the very cool Face.com API and to demonstrate the value of a little programming knowledge. Writing the script didn’t take too long, though I spent more time than I’d like on idiotic bugs of my own making. But it was far preferable to cropping photos by hand. And once I had the gist of things, I not only had a set of cropped files, I had the ability to whip up any kind of visualization I needed with just a minute’s more work.
And it wasn’t just face-detection that I was using, but face-detection in combination with deep data-sources like the Times’s Congress API and the Sunlight Foundation. For the SOPA Opera app, it didn’t take long at all to populate the site with legislator data and faces. (I didn’t get around to using this face-detection technique to clean up the images, but hey, I get lazy too…)
Please don’t judge the value of programming by my silly example here – an easy-to-use service like the Face.com API (mind the usage terms, of course) gives you a lot of great possibilities if you’re creative. Off the top of my head, I can think of a few:
- As a photographer, I’ve accumulated thousands of photos but have been quite lazy in tagging them. I could conceivably use Face.com’s API to quickly find photos without faces for stock photo purposes. Or maybe a client needs to see male/female portraits. The Face.com API gives me an ad-hoc way to retrieve those without menial browsing.
- Data on government hearing webcasts are hard to come by. I’m sure there’s a programmatic way to split up a video into thousands of frames. Want to know at which points Sen. Harry Reid shows up? Train Face.com’s API to recognize his face and set it loose on those still frames to find when he speaks.
- Speaking of breaking up video…use the Face API to detect the eyes of someone being interviewed and use RMagick to detect when the eyes are closed (the pixels at those positions differ in color from the second before) to do that college-level psych experiment of correlating blinks-per-minute to truthiness.
Thanks for reading. This was a quick post and I’ll probably go back to clean it up. At some point, I’ll probably add this to the Bastards Book.