<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>danwin.com &#187; data</title>
	<atom:link href="https://danwin.com/tag/data/feed/" rel="self" type="application/rss+xml" />
	<link>https://danwin.com</link>
	<description>Words, photos, and code by Dan Nguyen. The &#039;g&#039; is mostly silent.</description>
	<lastBuildDate>Thu, 21 Nov 2019 12:29:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.2.39</generator>
	<item>
		<title>Peter Norvig on cleverness and having enough data</title>
		<link>https://danwin.com/2013/12/peter-norvig-on-cleverness-and-having-enough-data/</link>
		<comments>https://danwin.com/2013/12/peter-norvig-on-cleverness-and-having-enough-data/#comments</comments>
		<pubDate>Thu, 05 Dec 2013 11:19:15 +0000</pubDate>
		<dc:creator><![CDATA[Dan]]></dc:creator>
				<category><![CDATA[thoughts]]></category>
		<category><![CDATA[data]]></category>

		<guid isPermaLink="false">https://danwin.com/?p=2646</guid>
		<description><![CDATA[<p>I&#8217;m reposting this entry, near verbatim, from The Blog of Justice, which picks out a keen quote from Google/Stanford genius, Peter Norvig, from his Google Tech Talk on August 9, 2011. &#8220;And it was fun looking at the comments, because you&#8217;d see things like &#8216;well, I&#8217;m throwing in this naive Bayes now, but I&#8217;m gonna [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://danwin.com/2013/12/peter-norvig-on-cleverness-and-having-enough-data/">Peter Norvig on cleverness and having enough data</a> appeared first on <a rel="nofollow" href="https://danwin.com">danwin.com</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>I&#8217;m reposting this entry, near verbatim, from <a href="http://blog.strafenet.com/2011/12/17/when-you-have-enough-data-sometimes-you-dont-have-to-be-too-clever/#sthash.ogLRy3AR.dpuf">The Blog of Justice</a>, which picks out a keen quote from Google/Stanford genius, Peter Norvig, from his <a href="http://www.youtube.com/watch?v=ql623nyCdKE#t=340">Google Tech Talk on August 9, 2011</a>.</p>
<p><iframe width="640" height="360" src="//www.youtube.com/embed/ql623nyCdKE?rel=0" frameborder="0" allowfullscreen></iframe></p>
<blockquote><p>&#8220;And it was fun looking at the comments, because you&#8217;d see things like &#8216;well, I&#8217;m throwing in this naive Bayes now, but I&#8217;m gonna come back and fix it up and come up with something better later.&#8217; And the comment would be from 2006. [laughter] And I think what that says is, when you have enough data, sometimes, you don&#8217;t have to be too clever about coming up with the best algorithm.&#8221;</p></blockquote>
<p>I think about this insight more times than I&#8217;d like to admit, in those frequent situations where you end up spending more time on a clever, graceful solution because you look down on the banal work of finding and gathering data (or, in the classical pre-computer world, fact-finding and research).</p>
<p>But I also think about it in the context of people who <em>are clever</em>, but don&#8217;t have enough data to justify a &#8220;big data&#8221; solution. There&#8217;s an unfortunate tendency among these non-tech-savvy types to think that, once someone tells them how to use a magical computer program, they&#8217;ll be able to finish their work. </p>
<p>The flaw here is, well, if you don&#8217;t have enough data (i.e. more than a few thousand data-points or observations), then no computer program will help you find any worthwhile insight. But what&#8217;s more of a tragedy is that, since the datasets involved here are small, these clever people could&#8217;ve done their work just fine without waiting for the computerized solution.</p>
<p>So yes, having lots of data can make up for a lack of cleverness, because computers are great at data processing. But if you&#8217;re in the opposite situation &#8211; a clever person with not a lot of data &#8211; don&#8217;t overlook your cleverness.</p>
<p>The post <a rel="nofollow" href="https://danwin.com/2013/12/peter-norvig-on-cleverness-and-having-enough-data/">Peter Norvig on cleverness and having enough data</a> appeared first on <a rel="nofollow" href="https://danwin.com">danwin.com</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://danwin.com/2013/12/peter-norvig-on-cleverness-and-having-enough-data/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Because of a typo, the government needs to keep your private data 10 times longer?</title>
		<link>https://danwin.com/2012/03/because-of-a-typo-the-government-needs-to-keep-your-private-data-10-times-longer/</link>
		<comments>https://danwin.com/2012/03/because-of-a-typo-the-government-needs-to-keep-your-private-data-10-times-longer/#comments</comments>
		<pubDate>Fri, 23 Mar 2012 16:47:33 +0000</pubDate>
		<dc:creator><![CDATA[Dan Nguyen]]></dc:creator>
				<category><![CDATA[thoughts]]></category>
		<category><![CDATA[data]]></category>

		<guid isPermaLink="false">https://danwin.com/?p=1938</guid>
		<description><![CDATA[<p>Yesterday the Obama administration approved new rules to greatly extend the time &#8211; from 180 days to 1,826 days (5 years) &#8211; that domestic intelligence services can retain American citizens&#8217; private information. Citizens are eligible to be part of this federal data warehouse even when &#8220;there is no suspicion that they are tied to terrorism.&#8221; [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://danwin.com/2012/03/because-of-a-typo-the-government-needs-to-keep-your-private-data-10-times-longer/">Because of a typo, the government needs to keep your private data 10 times longer?</a> appeared first on <a rel="nofollow" href="https://danwin.com">danwin.com</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Yesterday the Obama administration <a href="http://www.nytimes.com/2012/03/23/us/politics/us-moves-to-relax-some-restrictions-for-counterterrorism-analysis.html" title="U.S. Relaxes Some Restrictions for Counterterrorism Analysis - NYTimes.com">approved new rules to greatly extend the time</a> &ndash; from 180 days to 1,826 days (5 years) &ndash; that domestic intelligence services can retain American citizens&#8217; private information. Citizens are eligible to be part of this federal data warehouse even when &#8220;there is no suspicion that they are tied to terrorism.&#8221;
</p>
<p>As <a href="http://www.nytimes.com/2012/03/23/us/politics/us-moves-to-relax-some-restrictions-for-counterterrorism-analysis.html?_r=1&amp;ref=politics" title="U.S. Relaxes Some Restrictions for Counterterrorism Analysis - NYTimes.com">Charlie Savage in the New York Times reports</a>:
</p>
<blockquote><p>Intelligence officials on Thursday said the new rules have been under development for about 18 months, and grew out of reviews launched after the failure to connect the dots about Umar Farouk Abdulmutallab, the &#8220;underwear bomber,&#8221; before his Dec. 25, 2009, attempt to bomb a Detroit-bound airliner.
</p>
<p>After the failed attack, government agencies discovered they had intercepted communications by Al Qaeda in the Arabian Peninsula and received a report from a United States Consulate in Nigeria that could have identified the attacker, if the information had been compiled ahead of time.</p>
</blockquote>
<p>The case of the &#8220;underwear bomber&#8221; is a strange justification for this expansion of data storage: the 2009 Christmas terror attempt nearly succeeded thanks to a series of what seem like common human errors, not an information drought.
</p>
<p>Shortly after the underwear bomber incident, the White House released a report examining how our vast intelligence network failed to prevent Abdulmutallab, the bomber, from boarding a flight from Amsterdam to Detroit.
</p>
<p>One of the critical failures? Someone at the State Department, when sending information about Abdulmutallab to the National Counterterrorism Center, <strong>misspelled his name</strong>. Even though his father alerted American intelligence officials a full month before the attempted attack, our sophisticated surveillance system was partially stymied by a single misplaced letter.
</p>
<p>As <a href="http://thecable.foreignpolicy.com/posts/2010/01/08/more_on_misspelling_the_underwear_bomber_s_name" title="More on misspelling the underwear bomberâ€™s name | The Cable">Foreign Policy reported in 2010</a>:</p>
<blockquote>
<p>State called an impromptu press briefing late Thursday evening to address the issue. The tone of the briefing was combative, as reporters pressed the &#8220;senior administration official&#8221; for details about the misspelling that he seemed not to want to give up. But here&#8217;s what we learned.
</p>
<p>Someone (they won&#8217;t say who) at the State Department (presumably at the U.S. Embassy in Nigeria) did check to see if Abdulmutallab had a visa (they won&#8217;t say exactly when). That person was working off the Visas Viper cable originally sent from the embassy to the NCTC, which had the name wrong.
</p>
<p>&#8220;There was a dropped letter in that &#8212; there was a misspelling,&#8221; the official said. &#8220;They checked the system. It didn&#8217;t come back positive. And so for a while, no one knew that this person had a visa.&#8221; (They won&#8217;t say for how long)
</p>
</blockquote>
<p>The chain of failures <a href="http://thecable.foreignpolicy.com/posts/2010/01/07/how_much_did_misspelling_abdulmutallabs_name_matter" title="How much did misspelling Abdulmutallab&#039;s name matter? | The Cable">is more complicated than that</a>, but the fact that a typo was a big enough wrench to <a href="http://www.whitehouse.gov/sites/default/files/summary_of_wh_review_12-25-09.pdf" title="">warrant special mention in the White House review</a> is an indication that the government&#8217;s surveillance systems, despite the work of its data architects, engineers and scientists, were compromised by some pretty banal problems, like not having spell-check capability.</p>
<p>In fact, the White House report goes out of its way to assert that the information-sharing problems that failed to prevent the 9/11 attacks &#8220;have now, 8 years later, <a href="http://www.whitehouse.gov/sites/default/files/summary_of_wh_review_12-25-09.pdf" title="">largely been overcome</a>.&#8221; Information about Abdulmutallab (again, <em>his own father</em> met with U.S. officials to warn them of his son a month ahead of the attack), his association with Al Qaeda, and Al Qaeda&#8217;s attack planning, &#8220;was available to all-source analysts at the CIA and the NCTC prior to the attempted attack.&#8221;</p>
<p>In other words, the 9/11 attack was possible because government agencies wouldn&#8217;t share information with each other. Now they are happily sharing information with each other; they just aren&#8217;t diligently looking at it.</p>
<p>So the best solution is to enact a ten-fold increase in the legal time limit for storing American citizens&#8217; data?</p>
<p>It sounds like the government&#8217;s ability to detect terrorists would be greatly improved with better user-friendly software and adherence to data-handling standards. The ability to catch slight misspellings and do fuzzy data matches is something that Facebook and Google users have enjoyed for years; hell, the basic concept and consumer-friendly implementation has been around <a href="http://en.wikipedia.org/wiki/Microsoft_Word" title="Microsoft Word - Wikipedia, the free encyclopedia">in Microsoft Word for about 20 years</a>. Have software overhauls been enacted before deciding that the government needs more of its citizens&#8217; private information? Or does the review of such technical details and policies seem too unsexy and pedantic for our intelligence bureaucracy?</p>
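<p>For illustration only, here&#8217;s a sketch (a few lines of Ruby, and nothing resembling an actual watchlist system) of how basic fuzzy matching works: the Levenshtein edit distance counts the single-character changes between two strings, so a dropped letter like the one in Abdulmutallab&#8217;s cable comes out as a distance of 1, which is trivially easy to flag for human review.</p>
<pre name="code" class="ruby">
# A minimal sketch of fuzzy name-matching via Levenshtein edit distance.
# Purely illustrative; real record-matching systems are far more involved.
def levenshtein(a, b)
  a = a.downcase
  b = b.downcase
  costs = (0..b.length).to_a
  (1..a.length).each do |i|
    prev = costs[0]
    costs[0] = i
    (1..b.length).each do |j|
      tmp = costs[j]
      costs[j] = [
        costs[j] + 1,                            # deletion
        costs[j - 1] + 1,                        # insertion
        prev + (a[i - 1] == b[j - 1] ? 0 : 1)    # substitution (or match)
      ].min
      prev = tmp
    end
  end
  costs[b.length]
end

puts levenshtein("Abdulmutallab", "Abdulmutalab")   # => 1 (a single dropped letter)
puts levenshtein("Abdulmutallab", "Smith")          # a much larger distance; clearly not a match
</pre>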
<p>The Times <a href="http://www.nytimes.com/2012/03/23/us/politics/us-moves-to-relax-some-restrictions-for-counterterrorism-analysis.html?_r=1&amp;ref=politics" title="U.S. Relaxes Some Restrictions for Counterterrorism Analysis - NYTimes.com">article also mentions</a> that the guidelines call for more duplication of entire databases&#8230;which is a bit confusing. I&#8217;m assuming that this doesn&#8217;t refer to making backup copies (in case of a hard drive failure), but to a method of data-sharing between analysts. This is how the Times describes it:</p>
<blockquote>
<p>The guidelines are also expected to result in the center making more copies of entire databases and &#8220;data mining them&#8221; using complex algorithms to search for patterns that could indicate a threat.
</p>
</blockquote>
<p>Hopefully, this doesn&#8217;t mean that database files are being copied and passed around so that each department can have its own copy of another department&#8217;s data. That would seem to introduce a few major logistical issues: namely, how do you know the copy you have contains the latest data? Remember that the typo in Abdulmutallab&#8217;s name was one mistake that helped spawn a series of snafus. Are we going to have an incident in which a terrorist slips through because an analyst forgot to update his/her copy of a database before mining it? Also, there&#8217;s the possibility that some of these data copies might end up lying around long after their 5-year limit. </p>
<p>There have been several reports of how intelligence agencies now suffer from <em>too much data</em>, to the point where analysts are &#8220;<a href="http://www.npr.org/2012/01/11/144322791/why-americas-spies-struggle-to-keep-up" title="Why America's Spies Struggle To Keep Up : NPR">drowning in the data</a>.&#8221; If that is ever cited as the reason a future attack went unprevented, I hope the proposed reform is not &#8220;more data.&#8221;</p>
<p>The post <a rel="nofollow" href="https://danwin.com/2012/03/because-of-a-typo-the-government-needs-to-keep-your-private-data-10-times-longer/">Because of a typo, the government needs to keep your private data 10 times longer?</a> appeared first on <a rel="nofollow" href="https://danwin.com">danwin.com</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://danwin.com/2012/03/because-of-a-typo-the-government-needs-to-keep-your-private-data-10-times-longer/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Coding for Journalists 104: Pfizer&#8217;s Doctor Payments; Making a Better List</title>
		<link>https://danwin.com/2010/04/pfizer-web-scraping-for-journalists-part-4-pfizers-doctor-payments/</link>
		<comments>https://danwin.com/2010/04/pfizer-web-scraping-for-journalists-part-4-pfizers-doctor-payments/#comments</comments>
		<pubDate>Tue, 06 Apr 2010 13:50:19 +0000</pubDate>
		<dc:creator><![CDATA[Dan Nguyen]]></dc:creator>
				<category><![CDATA[works]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[journalism]]></category>
		<category><![CDATA[pfizer]]></category>
		<category><![CDATA[scraper]]></category>
		<category><![CDATA[tutorial]]></category>

		<guid isPermaLink="false">https://danwin.com/?p=643</guid>
		<description><![CDATA[<p>Update (12/30): So about an eon later, I&#8217;ve updated this by writing a guide for ProPublica. Heed that one. This one will remain in its obsolete state. Update (4/28): Replaced the code and result files. Still haven&#8217;t written out a thorough explainer of what&#8217;s going on here. Update (4/19): After revisiting this script, I see [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://danwin.com/2010/04/pfizer-web-scraping-for-journalists-part-4-pfizers-doctor-payments/">Coding for Journalists 104: Pfizer&#8217;s Doctor Payments; Making a Better List</a> appeared first on <a rel="nofollow" href="https://danwin.com">danwin.com</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p><strong>Update (12/30): So about an eon later, <a href="http://www.propublica.org/nerds/item/scraping-websites">I&#8217;ve updated this by writing a guide for ProPublica</a>. Heed that one. This one will remain in its obsolete state.</strong></p>
<p><strong>Update (4/28): Replaced the code and result files. Still haven&#8217;t written out a thorough explainer of what&#8217;s going on here.</strong></p>
<p><strong>Update (4/19): After revisiting this script, I see that it fails to capture some of the payments to doctors associated with entities. I&#8217;m going to rework this script and post and update soon.</strong></p>
<p>So the world&#8217;s largest drug maker, <strong>Pfizer</strong>, decided to tell everyone which doctors it has been paying to speak and consult on its behalf in the latter half of 2009. These doctors are the same ones who, from time to time, recommend the use of Pfizer products.</p>
<p> <a href="http://www.nytimes.com/2010/04/01/business/01payments.html">From the NYT</a>:</p>
<blockquote><p>
				Pfizer, the world&#8217;s largest drug maker, said Wednesday that it paid about $20 million to 4,500 doctors and other medical professionals for consulting and speaking on its behalf in the last six months of 2009, its first public accounting of payments to the people who decide which drugs to recommend. Pfizer also paid $15.3 million to 250 academic medical centers and other research groups for clinical trials in the same period.</p>
<p> A spokeswoman for Pfizer, Kristen E. Neese, said <strong>most of the disclosures were required by an integrity agreement that the company signed in August to settle a federal investigation into the illegal promotion of drugs for off-label uses</strong>.
			</p></blockquote>
<p>
So, not an entirely altruistic release of information. But it&#8217;s out there nonetheless. You can <a href="http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp">view their list here</a>. <strong>Jump to <a href="#results">my results here</a>.</strong><br />
<br />
<a href="http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp"><img src="https://danwin.com/words/wp-content/uploads/2010/04/pfizer-list.gif" alt="" title="pfizer-list" width="917"  class="aligncenter size-full wp-image-677"></a> Not bad at first glance. However, on further examination, it&#8217;s clear that the list is nearly useless unless you intend to click through all 480 pages manually, or, if you have a doctor in mind and you only care about that one doctor&#8217;s relationship. As a journalist, you probably have other questions. Such as:</p>
<ul>
<li>Which doctor received the most?
				</li>
<li>What was the largest kind of expenditure?
				</li>
<li>Were there any unusually large single-item payments?
				</li>
</ul>
<p>None of these questions are answerable unless you have the list in a spreadsheet. As I mentioned in earlier lessons&#8230;there are cases when the information is freely available, but the provider hasn&#8217;t made it easy to analyze. Technically, they are fulfilling their requirement to be &#8220;transparent.&#8221; </p>
<p>I&#8217;ll give them the benefit of the doubt that they truly want this list to be as accessible and visible as possible&#8230;I tried emailing them to ask for the list as a single spreadsheet, but the email function was broken. So, let&#8217;s just write some code to save them some work and to get our answers a little quicker.<br />
<span id="more-643"></span></p>
<link rel='stylesheet' href='https://danwin.com/css/code.css' type='text/css' media='all'>
<div class="code-doc">
<div class='over-note' style='font-size: 12pt; color: #a44; border: 1px solid black; margin: 20px; padding: 20px;'><p>This is part of a <a href="https://danwin.com/works/coding-for-journalists-101-a-four-part-series/">four-part series on web-scraping for journalists</a>. As of <strong>Apr. 5, 2010</strong>, it was published a bit incomplete because I wanted to post a timely solution to the <a href="https://danwin.com/works/pfizer-web-scraping-for-journalists-part-4-pfizers-doctor-payments/">recent Pfizer doctor payments list release</a>, but the code at the bottom of each tutorial should execute properly. The code examples are meant for reference and I make no claims to the accuracy of the results. Contact <a href="mailto:dan@danwin.com">dan@danwin.com</a> if you have any questions, or leave a comment below.</p>
<p><strong>DISCLAIMER:</strong> <em>The code, data files, and results are meant for reference and example only. You use it at your own risk.</em></p>
</div>
<div class="sec">
<h2>The Code</h2>
<p>The following code uses the same Nokogiri strategies as the past three lessons. But here are the specific considerations we have to make for Pfizer&#8217;s list:</p>
<ul>
<li>The base url is: <a href="http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp?enPdNm=All&amp;iPageNo=1">http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp?enPdNm=All&amp;<strong>iPageNo=1</strong></a> The most interesting parameter, <strong>iPageNo</strong>, is bolded. If you replace &#8216;1&#8217; with any other number, you can page through the list. There appear to be <a href="http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp?enPdNm=All&amp;iPageNo=486">486 pages</a>.
					</li>
<li>Each page has a table of data with id <strong>#hcpPayments</strong>. The rows of data aren&#8217;t very normalized. For example, each &#8220;Entity Paid&#8221; can have many services/activities listed, with each of those items having another name attached to it. Then there are &#8220;cash&#8221; and &#8220;non-cash&#8221; values, which may or may not be numeric (&#8220;&#8212;&#8221; apparently means 0). There&#8217;s no easy CSS selector to grab each entity, but it seems we can safely assume that if the first table column has a name (and the second and third contain city and state), then that row starts a new entity.
					</li>
</ul>
<p>
						These are the steps we&#8217;ll take:</p>
<ul>
<li>Download pages 1 to 486 of the list (each page has 10 entries)</li>
<li>Run a method that gathers all the doctor names from the pages we just downloaded onto our hard drive</li>
<li>From that list of doctors, query the Pfizer site and gather the individual payments to every doctor.</li>
</ul>
<div class='sec'>
<p>At the top, I&#8217;ve written a few convenience methods to deal with strings. Also included: <strong>get_doc_query</strong>, a function we call to extract the doctor name from the links on the site.
					</p>
<p><strong>puts_error</strong> is a quick function to log any errors we might have.</p>
<pre name="code" class="ruby">
						# Some general functions to deal with strings
					class String

					  alias_method :old_strip, :strip

					  def strip
						  self.old_strip.gsub(/^[\302\240|\s]*|[\302\240|\s]*$/, '').gsub(/[\r\n]/, " ")
					  end

					  def strip_for_num
					    self.strip.gsub(/[^0-9]/, '')
					  end

					  def blank?
						respond_to?(:empty?) ? empty? : !self
					  end
					end
					
					
					END_PAGE=486
					BASE_URL=''
					DOC_QUERY_URL='http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp?hcpdisplayName='


					def get_doc_query(str)
					  str.match(/hcpdisplayName\=(.+)/)[1]
					end

					def puts_error(str)
					  err = "#{Time.now}: #{str}"
					  puts err
					  File.open("pfizer_error_log.txt", 'a+'){|f| f.puts(err)}
					end
					
					
						</pre>
</div>
<div class='sec'>
<p>I found it easiest to download all the pages onto the hard drive first, using something like <a href='http://en.wikipedia.org/wiki/CURL'>curl</a>, and then run the following code on them.</p>
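<p>If you&#8217;d rather do the downloading in Ruby itself, here&#8217;s a rough sketch (my own assumption of a reasonable approach, not part of the original scraper) using <strong>open-uri</strong>, saving each page under the numbered filename that <strong>process_local_pages</strong> expects:</p>
<pre name="code" class="ruby">
# A sketch for saving all 486 pages of the list to disk, as an
# alternative to curl. Writes each page out as 1.html, 2.html, etc.
require 'rubygems'
require 'open-uri'

LIST_URL = 'http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp?enPdNm=All&iPageNo='

(1..486).each do |i|
  File.open("#{i}.html", 'w') do |f|
    f.write(open("#{LIST_URL}#{i}").read)
  end
  sleep 2   # pause between requests to be polite to their server
end
</pre>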
<p><strong>process_local_pages</strong> is a method that will iterate through every page (you can point BASE_URL either to your hard drive, if you&#8217;ve downloaded all the pages yourself, or to the Pfizer page), run <strong>process_row</strong> on each row, and store all the doctor names and payees into separate files, as well as hold all the total amounts.</p>
<p> The three resulting files that you get are:</p>
<ul>
<li><strong>pfizer_doctors.txt</strong> &#8211; Every doctor name listed. We will use this in the next step to query each doctor individually on Pfizer&#8217;s site</li>
<li><strong>pfizer_entities.txt</strong> &#8211; A list of every payment made to Entities</li>
<li><strong>pfizer_entity_totals.txt</strong> &#8211; A list of the total payments made to Entities</li>
</ul>
<pre name="code" class="ruby">


						def process_row(row, i, current_entity, arrays)  

						  tds = row.css('td').collect{|r| r.text.strip}

						   if !tds[3].blank? 
						     if !tds[1].blank?
						     # new entity
						     puts tds[0]
							     current_entity = {:name=>tds[0],:city=>tds[1], :state=>tds[2], :page=>i, :services=>[]} 
							     arrays[:entities].push(current_entity) if arrays[:entities]
						  	   current_class = row['class']
							   end

						     if tds[3].match(/Total/)
						       arrays[:totals].push([current_entity[:name], tds[4].strip_for_num, tds[5].strip_for_num].join("\t")) if arrays[:totals]

						     else
						        # new service
						   	   services_td = row.css('td')[3]
						   	   service_name = services_td.css("ul li a")[0].text.strip 
						   	   puts "#{current_entity[:name]}\t#{service_name}" 
						   	   current_entity[:services].push([service_name, tds[4].strip_for_num, tds[5].strip_for_num]) 

						   	   arrays[:doctors].push(services_td.css("ul li ul li a").map{|a| get_doc_query(a['href']) }.uniq) if arrays[:doctors]
						     end
						   elsif tds.reject{|t| t.blank?}.length == 0
						     #blank row
						   else
						     puts_error "Page #{i}: Encountered a row and didn't know what to do with it: #{tds.join("\t")}"
						   end

						   return current_entity
						end





						def process_local_pages

						  doctors_arr = []
						  entities_arr = []
						  totals_arr =[]

						  for i in 1..END_PAGE
						    begin
						  	   page = Nokogiri::HTML(open("#{BASE_URL}#{i}.html"))

						    	 count1, count2 = page.css('#pagination td.alignRight').last.text.match(/([0-9]{1,}) - ([0-9]{1,})/)[1..2].map{|c| c.to_i}
						    	 count = count2-count1+1

						    	 puts_error("Page #{i} WARNING: Pagination count is bad") if count < 0
						    	 puts("Page #{i}: #{count1} to #{count2}")

						    	 rows = page.css('#hcpPayments tbody tr')

						    	 current_entity=nil

						    	 rows.each do |row|  	   
						    	   current_entity= process_row(row, i, current_entity, {:doctors=>doctors_arr, :entities=>entities_arr, :totals=>totals_arr})
						       end

						     rescue Exception=>e
						  	   puts_error "Oops, had a problem getting the #{i}-page: #{[e.to_str, e.backtrace.map{|b| "\n\t#{b}"}].join("\n")}"
						     else


						     end
						  end

						  File.open("pfizer_doctors.txt", 'w'){|f|
						    doctors_arr.uniq.each do |d|
						        f.puts(d)
						    end
						  }

						  File.open("pfizer_entities.txt", 'w'){|f|
						    entities_arr.each do |e|
						      e[:services].each do |s|
						        f.puts("#{e[:name]}\t#{e[:page]}\t#{e[:city]}\t#{e[:state]}\t#{s[0]}\t#{s[1]}\t#{s[2]}")
						      end  
						    end
						  }


						  File.open("pfizer_entity_totals.txt", 'w'){|f|
						    totals_arr.uniq.each do |d|
						        f.puts(d)
						    end
						  }
						end

					</pre>
</div>
<div class='sec'>
<p><strong>process_doctor</strong> is what we run after we&#8217;ve compiled the list of doctor names that show up on the Pfizer list. Each doctor has his/her own page with detailed spending. The data rows are roughly in the same format as the main list, so we reuse <strong>process_row</strong>.</p>
<pre name="code" class="ruby">

						def process_doctor(r, time='')
						  begin
						    url = "#{DOC_QUERY_URL}#{r}"
						    page = Nokogiri::HTML(open("#{url}"))
						  rescue
							   puts_error "Oops, had a problem getting the #{r}-entry: #{[e.to_str, e.backtrace.map{|b| "\n\t#{b}"}].join("\n")}"
						  end

						  rows = page.css('#hcpPayments tbody tr')
						  entities_arr = []
						  current_entity=nil

						   rows.each do |row|  	   
						     current_entity= process_row(row, '', current_entity, {:entities=>entities_arr})
						   end


						   name = r.split('+')
						   puts_error("Should've been a last name at #{r}") if !name[0].match(/,$/)
						   name = "#{name[0].gsub(/,$/, '')}\t#{name[1..-1].join(' ')}"

						   vals=[]
						   entities_arr.each do |e| 
						     e[:services].each do |s|
						       vals.push("#{name}\t#{e[:name]}\t#{e[:page]}\t#{e[:city]}\t#{e[:state]}\t#{s[0]}\t#{s[1]}\t#{s[2]}\t#{url}\t#{time}")
						    end
						   end

						  vals.each{|val| File.open("pfizer_doctor_details.txt", "a"){ |f| 
						    f.puts val
						  }}

						  puts vals
						  return vals
						end


					</pre>
</div>
<div class='sec'>
<p><strong>process_doctor_pages</strong> is just a function that calls <strong>process_doctor</strong> for each name in the <strong>pfizer_doctors.txt</strong> file we previously gathered.</p>
<p>The final result is <strong>pfizer_doctor_details.txt</strong>, which contains a line for every payment to every doctor.</p>
<pre name="code" class="ruby">
						def process_doctor_pages
						  time = Time.now

						  File.open("pfizer_doctors.txt", 'r'){|f|
						     f.readlines.each do |r|
						        vals = process_doctor(r, time)
						     end 
						  }
						end		

					</pre>
</div>
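<div class='sec'>
<p>Once you have <strong>pfizer_doctor_details.txt</strong>, questions like &#8220;which doctor received the most?&#8221; take just a few more lines. Here&#8217;s a rough sketch (again, my own addition, not part of the original scraper) that totals the cash and non-cash columns per doctor, assuming the tab-delimited layout written out by <strong>process_doctor</strong> above:</p>
<pre name="code" class="ruby">
# A sketch that aggregates pfizer_doctor_details.txt to find the
# highest-paid doctors. Column layout (tab-delimited), per process_doctor:
# last, first, entity, page, city, state, service, cash, non-cash, url, time
totals = Hash.new(0)

File.open("pfizer_doctor_details.txt", 'r') do |f|
  f.readlines.each do |line|
    cols = line.chomp.split("\t")
    name = "#{cols[1]} #{cols[0]}"
    totals[name] += cols[7].to_i + cols[8].to_i   # cash plus non-cash
  end
end

# Print the ten highest-paid doctors
totals.sort_by{ |name, amt| -amt }[0...10].each do |name, amt|
  puts "#{name}\t#{amt}"
end
</pre>
</div>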
</div>
<div class='sec'>
<h2><a name="results"></a><br />
					The Results</h2>
<p>After Googling the top Pfizer-paid doctor on the list (<a href="http://www.pfizer.com/responsibility/working_with_hcp/payments_report.jsp?hcpdisplayName=SACKS,+GERALD+MICHAEL">Gerald Michael Sacks for ~$150K</a>), I came across the <a href='http://blog.pharmaconduct.org/'>Pharma Conduct</a> blog, which had <a href='http://blog.pharmaconduct.org/2010/04/who-were-top-5-recipients-of-money-from.html?src=PharmaConduct+20100403'>already posted partial aggregations of the list</a>, including the <a href='http://blog.pharmaconduct.org/2010/04/which-doctors-received-highest.html?src=PharmaConduct+20100405'>top 5 doctors</a>, complete with profiles and pics.</p>
<p>As Pharma Conduct has already been on the ball, I&#8217;ll defer to its analysis. It has some good background on how lame pharma companies have been in <a href='http://blog.pharmaconduct.org/2010/02/pharma-gets-failing-grades-for-initial.html'>past releases of data</a>. Overall, Pharma Conduct is <a href='http://blog.pharmaconduct.org/2010/03/pfizer-releases-payments-to-physicians.html'>less than impressed</a> with Pfizer:</p>
<blockquote><p>
				Despite reporting more information than some its peers, Pfizer&#8217;s interface is still very limited.  For one, to use the search filtering, you must know a physician&#8217;s first name and last name, as well as the state where the payment was made.  Also, the data cannot be sorted by payment amount, which is a big limitation.  Pfizer should be given credit for releasing the information and being so thorough.  However, by releasing it in a format that is not really amenable to data analysis and is more suited to simply looking up results one physician at a time, I echo John Mack&#8217;s sentiment, namely, that this data is translucent, but not transparent.	</p></blockquote></div>
<p>The post <a rel="nofollow" href="https://danwin.com/2010/04/pfizer-web-scraping-for-journalists-part-4-pfizers-doctor-payments/">Coding for Journalists 104: Pfizer&#8217;s Doctor Payments; Making a Better List</a> appeared first on <a rel="nofollow" href="https://danwin.com">danwin.com</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://danwin.com/2010/04/pfizer-web-scraping-for-journalists-part-4-pfizers-doctor-payments/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
	</channel>
</rss>
