
Why did infinite scroll fail at Etsy?

Descending down a Manhattan Subway

“My point is not that infinite scroll is stupid. It may be great on your website. But we should have done a better job of understanding the people using our website” – Dan McKinley, Principal Engineer at Etsy

A few weeks ago, Etsy engineer Dan McKinley gave a talk on “Design for Continuous Experimentation.” It’s an interesting, humorous presentation on large scale A/B testing and I fully recommend you check it out (slides and video here).

The moral was that A/B testing – much like the code it tests – must be done in a modular fashion. The “fail” case he gave was when Etsy spent months developing and testing the addition of infinite scroll to their search listings, only to find that it had a negative impact on engagement.

McKinley said Etsy’s main lesson from this was that its A/B testing strategy was too monolithic – the plan amounted to:

  1. Spend a ton of time building the infinite scroll feature and release it.
  2. Verify that people love it, then find a Brooklyn warehouse to throw a celebration party.

“Seeing more items faster is presumed to be a better experience,” McKinley said. But the A/B tests showed various negative effects of the feature, including fewer clicks on the results and fewer items “favorited” from the infinite results page. And curiously, while users didn’t buy fewer items overall, “they just stopped using search to find these items.”

So basically infinite scroll failed in every major way. But not only was Etsy’s team wrong in assuming that users would benefit from infinite scroll, McKinley said, they were wrong in automatically accepting the two underlying assumptions behind infinite scroll:

  1. Users want more results per page.
  2. Users want faster results.

(Assumption #2 was tested by artificially slowing down search for a cohort of users. These users did not necessarily react positively to slower results, but their engagement was not lessened to a statistically significant degree.)
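That “statistically significant” qualifier matters: with cohorts this large, even a small real dip in engagement would register. A minimal sketch of the kind of check involved – a two-proportion z-test, with hypothetical numbers, not Etsy’s actual data or tooling:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is cohort B's engagement rate
    significantly different from cohort A's?"""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: control vs. artificially slowed cohort
z = two_proportion_z(5200, 50000, 5100, 50000)
print(abs(z) < 1.96)  # True -> not significant at the 5% level
```

Here a 0.2-point drop in engagement rate across 50,000 users per cohort still falls inside the noise, which is the shape of the result McKinley describes.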

The point of McKinley’s talk was that instead of having the goal of “test infinite scroll,” Etsy realized it needed to test each assumption separately, and that is its game plan going forward (the success case McKinley gives is the revamp of Etsy’s search box).
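Testing each assumption separately depends on variant assignments being independent across experiments. Hash-based bucketing is a common way to get that; a minimal sketch of the idea (the function and experiment names here are hypothetical, not Etsy’s actual framework):

```python
import hashlib

def bucket(user_id: str, experiment: str, variants: int) -> int:
    """Deterministically assign a user to a variant. Salting the
    hash with the experiment name keeps assignments independent
    across experiments, so each assumption is tested on its own."""
    h = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(h, 16) % variants

# Each underlying assumption gets its own experiment:
page_size = [16, 32][bucket("user_42", "results_per_page", 2)]
delay_ms = [0, 200][bucket("user_42", "added_latency", 2)]
```

Because the two assignments are uncorrelated, the effect of page size and the effect of latency can be measured independently, rather than bundled into one monolithic “infinite scroll” treatment.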

As I said, a great talk worth checking out. However, McKinley didn’t have an answer for this:

Even if user engagement isn’t positively influenced by more results per page or by faster results, why does the combination of both have a decidedly negative impact?

The decision to try infinite scroll was partially influenced by Google’s apparent success with “instant search results,” according to McKinley. And it’s a feature that is prominent among popular Tumblr themes, Pinterest, and of course, at Facebook and Twitter, so presumably their A/B testing has yielded good results.

But why not at Etsy? Or at Amazon, which sticks with 16 results per search page? Users are notoriously fickle about interface changes. But if the search algorithm still brings the best results to the top, then a user who sees only 16 options before having to click through is no better off than one who gets those same 16 plus an “infinite” tail of lesser results, as long as the latter doesn’t have to do any additional work to get them.

McKinley said he didn’t know why infinite scroll didn’t succeed for Etsy. There wasn’t, as far as they could tell, a technical fault (e.g. infinite scroll breaking in a specific browser). It simply performed worse, and the reason why wasn’t clear.

Now, infinite scroll itself is still a controversial feature – even setting aside technical issues, which there almost always are. But in Etsy’s use case, it seems that at worst there should have been no effect, not a negative one. The most jarring problem with infinite scroll is that there’s no reachable footer, which presumably the average Etsy user doesn’t need when making viewing/purchasing decisions anyway.

Maybe it’s not that users consciously dislike infinite scroll. But in practice, maybe they lose a sense of orientation? Some users may have the habit of going further and further into search results – “playing the field” so to speak – before they realize that pages 1 and 2 are the best options they have. With pagination, it’s fairly easy to get back to those pages.

But if these users don’t have the habit of bookmarking/favoriting/writing-down-the-names of items as they scroll through, perhaps the deluge of infinite scroll makes it more likely for users to forget where they once were and what once caught their eye? Or maybe they never developed the habits needed to actually act: when users are forced to deal with the loading time of each page, they learn to do their favoriting/click-throughs/impulse-purchasing on those pages before moving on. These users, when allowed to scroll past more items than they were used to, might simply lose the habit of click-to-favorite/browse/buy – but never develop the habit of scrolling back up. Or they’re just overwhelmed by the information overload and don’t feel like taking action any longer.

Infinite scroll may be pleasant for browsing, but does it lead to inaction? It’s an interface issue that is likely less a technical question than a psychological one. I wonder what other online retail interfaces have found?

Here’s a short Etsy forum discussion where users wonder where the infinite-scroll went. McKinley’s talk and slides are here.

On Hacker News, user oconnore makes a great observation here: at least some users, when returning to the search page after backing out of a product page, would find that they had lost their place in the infinite search stream (probably the biggest problem with infinite scroll implementations). I didn’t mention this in the original version of this post, but McKinley opened his talk with an anecdote about another poor assumption: power users (who worked at Etsy and made the suggestion) often opened search results in new windows because they wanted to do side-by-side comparisons. But when Etsy made that the default behavior, testing found that most users did not appreciate it.

So putting 2 and 2 together: Perhaps many Etsy users have the habit of clicking through a search result and then backing up to the search page. When the infinite scroll didn’t properly mark their place, they *really* got lost, and the search experience would obviously not be a good one. Seems as good an explanation as any, so maybe it really is a technical issue after all.

Another update: HN user gfodor, who worked on the project, said the back-button problem may have applied to some IE users. But as McKinley says in his talk, the negative search effect occurred across all browsers, including ones that handled the back button correctly. McKinley does go into good detail about how Etsy determines control group composition (sellers vs. non-sellers, for example, are a much different user type), but as this blog post is near-infinitely long, I suggest again that you check out his excellent talk on his blog.

The Tea Party takes on Microsoft

Tea Party Review anniversary issue

There is now a new magazine for the Tea Party political wing: the Tea Party Review.

Say what you will about the Tea Party, but it takes a lot of cojones – more than most of the world’s webmasters and web designers have – to launch a website that puts users of Safari, Firefox, and the “Googlechrome” ahead of Internet Explorer users.

Hard to get more progressive than this:

Screenshot of Tea Party Review's Homepage

Been away for the redesign

Convincing the editors to use this as the lede image for the redesign was my most visible contribution

Been stuck at the office for the past couple of weeks, but it’s been worth it. Helped with ProPublica’s redesign, which we did in conjunction with Mule Design Studio. Our journalism has always been top notch, but the site didn’t quite look the part. Now it does. As my boss writes, “Our goal is simple: For the design of our site to match the sophistication of our reporting.”

ProPublica tracks the bailout, a year or so later

Today, my ProPublica colleague Paul Kiel and I put out some graphical revisions to PP’s bank bailout tracking site, including our master list of companies to get taxpayer bailout money:

Graphic: The Status of the Bailout

Bailout List Page

Nothing fancy, mostly made the numbers easier to find and compare. The site itself was far from fancy from its inception, since it was my first project after taking a crash course on Ruby on Rails. Back when the bailout was first announced in Q4 2008, the Treasury declined to name the banks it was doling out taxpayer money to, for fear that non-listed banks would take a hit in reputation. Paul was one of the first few people to comb through banks’ press releases and enter them into a spreadsheet. His list of the first 26 – put into a simple HTML table – was a pretty big hit.

As the list grew into the dozens and then hundreds, it became more cumbersome to maintain the static list, which was nothing more than each bank’s name, date of announcement, and bailout amount. Plus, it was no longer just one bailout per company; Citigroup and Bank of America were beneficiaries of billions of dollars through a couple of other programs.

So, I proposed a bailout site that would allow Paul to record the data at a more discrete level. Up to that point, for example, most online lists showed that AIG had several dozen billion dollars committed to it, but not the various programs, reasons, and dates on which those allocations were made. A little anal maybe, but it gave the site the flexibility to adapt when the bailout grew to include all varieties of disbursements, including to auto parts manufacturers and mortgage servicers, as well as the money flowing in the opposite direction, in the form of refunds and dividends.
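The change described above amounts to replacing one running total per company with a ledger of individual transactions. A minimal sketch of the idea – in Python rather than the Rails models the site actually used, and with illustrative figures, not real Treasury data:

```python
from collections import defaultdict

# Each row is one discrete transaction rather than a single
# per-company total: (company, program, date, amount, direction).
transactions = [
    ("AIG",       "Systemically Significant Failing Institutions",
     "2008-11-25", 40_000_000_000, "disbursement"),
    ("Citigroup", "Capital Purchase Program",
     "2008-10-28", 25_000_000_000, "disbursement"),
    ("Citigroup", "Targeted Investment Program",
     "2008-12-31", 20_000_000_000, "disbursement"),
    ("Citigroup", "Capital Purchase Program",
     "2009-06-09", 1_000_000_000, "refund"),
]

# Refunds and dividends flow the other way, so the net outstanding
# amount per company falls out of a simple signed sum.
net = defaultdict(int)
for company, program, date, amount, direction in transactions:
    net[company] += amount if direction == "disbursement" else -amount

print(net["Citigroup"])  # 44000000000
```

Because each row carries its own program, date, and direction, new programs (or money flowing back) become new rows rather than schema changes – which is the flexibility the paragraph above is describing.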

I saw the site more as a place for Paul to base his bailout coverage on (he’s been doing an excellent job covering the progress of the mortgage modification program), as I assumed that in the near future, Treasury would have its own easy-to-use site for the data. Unfortunately, that is not quite the case, nearly a year and a half later. Besides some questionable UI decisions (such as having the front-facing page consist of a Flash map), the data is not put forth in an easily accessible form. It could be that I need to take an Excel refresher course here, but trying to sort the columns in these Excel spreadsheets just to find the biggest bailout amount, for example, throws an error.

Only in the past couple of months did Treasury finally release dividends in non-PDF form, and even then, it’s still a pain to work with (there’s no way, for example, to link the bank names in the dividends sheet to the master spreadsheet of bailouts). I would’ve thought that’d be the set of bailout data Treasury would be most eager to send out, because it’s the taxpayers’ return on investment. But, as it turns out, there is a glass-half-empty side to this data (such as banks not having enough reserves to pay dividends in a timely fashion), one that would’ve been immediately obvious if the data were in a more sortable form.

ProPublica’s bailout tracking site doesn’t have much data other than the official Treasury bailout numbers; there are all kinds of other unofficial numbers, such as how much each bank is giving out in bonuses, that people are more interested in. American University has gathered all kinds of financial health indicators for each bailout bank, too. There’s definitely much more data that PP and other bailout trackers need to collect to provide a bigger picture of the bailout situation. But for now, I guess it’s a small victory to be one of the top starting points for finding out just exactly where hundreds of billions of our tax dollars went. And the why, too; Paul’s done a great job writing translations of the Treasury’s official-speak on each program.

ProPublica’s Eye on the Bailout