We specialise in online usability testing.

3 Case Studies that make the UX argument

March 26th 2014 | Posted In: Case Studies



We often get asked for case studies about Website Funnel Optimisation.

A funnel is a way of visualising how users move through a sequence of steps, such as a checkout, and where they drop off. As this data is often a competitive advantage, companies are not keen to publish it.
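To make the idea concrete, here is a minimal sketch of how funnel data is typically summarised. The step names and counts are entirely hypothetical, not figures from any of the case studies below:

```python
# Hypothetical checkout funnel: number of visitors remaining at each step.
# These names and counts are illustrative only.
steps = [
    ("Product page", 10000),
    ("Add to basket", 3200),
    ("Checkout form", 1400),
    ("Payment", 900),
    ("Confirmation", 720),
]

def funnel_report(steps):
    """Return (step name, conversion from previous step, overall conversion)."""
    report = []
    top = steps[0][1]
    for (_, prev_n), (name, n) in zip(steps, steps[1:]):
        report.append((name, n / prev_n, n / top))
    return report

for name, step_rate, overall in funnel_report(steps):
    print(f"{name}: {step_rate:.0%} of previous step, {overall:.1%} overall")
```

The step-to-step rates show exactly where users abandon the process, which is what the case studies below exploited by simplifying the weakest step.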

In this post, we share three case studies. These are the published studies that we are aware of. If you know of any more, please add them in the comments.

The case studies show that by simplifying forms and reducing the number of steps, each website was able to massively increase its conversion rate (and sales).

HSBC case study

This is a powerful case study on how HSBC Hong Kong went from an average of 9 inquiries per month to 800 inquiries per month. They achieved this simply by cutting the number of form fields from 14 down to just 3.

Source: Bank 2.0: How Customer Behaviour and Technology Will Change the Future of Financial Services. See pages 28 to 30.


The 300 million dollar button

The “Three hundred million dollar button” is a great example of a form that was super simple: it asked only for an email address and a password. Yet customers still did not buy. Usability testing helped identify why.

The form was meant to help repeat customers complete a purchase on the site. First-time users were very resistant to registering, as they felt they would be giving away information to be used for marketing purposes. Even repeat users, who had previously registered, had trouble recalling their login details and either used the password-recovery link or resorted to multiple registrations.

And the solution?

They took away the Register button. In its place, they put a Continue button with a simple message: “You do not need to create an account to make purchases on our site. Simply click Continue to proceed to checkout. To make your future purchases even faster, you can create an account during checkout.”

The results: The number of customers purchasing went up by 45%. The extra purchases resulted in an extra $15 million in the first month. Over the first year, the site saw an additional $300,000,000.

Source: UIE


Expedia increased revenue by $12m

In a similar case, Expedia used analytics to find out why their customers were not completing a purchase, even after clicking the buy button and entering their billing information.

By analysing what was going wrong, they worked out that one data field, “Company”, was confusing customers and causing transactions to fail. Expedia simply deleted the confusing field and saw an immediate jump in successful bookings.

Source: Lexology and Singlehop.com


The case studies show that the small details matter, from the choice of words to the way buttons are laid out. With a solid research methodology it is easier to pinpoint where these changes need to happen. With remote usability tests and surveys we are able to get the viewpoint of the user.

In the past we used our remote usability testing tool with Travelport for their new product launch. The project involved travel agents in 5 different countries and helped provide insights into the usability and user experience of the new product.
Photo credits: Je.T – Flickr.


Events at Webnographer in Lisbon

January 16th 2014 | Posted In: Events


Going forward, nearly every week there will be a meetup at Webnographer’s office in Lisbon. Ideas are created and spread through people meeting. Today we are lending our office for an event on JavaScript called require(lx); next week we have a restarted Lisbon UX Cocktail Hour; and the week after, a Hacker News meetup on technology and entrepreneurship. We also plan an event on Growth Hacking (technology meets marketing).

London, San Francisco, Paris, Boston and Berlin have hundreds of meetups happening every week. Not everybody knows everybody, and a themed event can help forge connections. Meetups also act as a port for visitors to a city.


Today, January 16th: require(lx)

A talk on the JavaScript framework Hapi.js by core developers of the software: Wyatt Lyon Preul and Ben Acker. RSVP here.


Thursday, January 23rd: Lisbon UX Cocktail Hour.

Google’s Tomer Sharon talking on start-ups and UX research. RSVP here.


Wednesday, January 29th: HNLisbon.

Talks on Technology and entrepreneurship to be announced. Join the group here.


Growth Hacking

[Exact date still to be confirmed]

Ricardo Nunes, Head of Social Media at Mindshare Lisbon on Adaptive Marketing.


I have to thank João Pinto Jerónimo, Bruno Barreto, and David Dias, the organisers of require(lx), for making it so easy.


Some pictures from previous events we organised…

Why Santa uses Webnographer

December 17th 2013 | Posted In: Testimonials


Santa using Webnographer

Santa’s challenge is that he wants to deliver the right present to the right person, but he is constrained by being far far away from his customers. What the children in Lapland want is very different from what the children in Brazil or Ireland want. Only by using Remote Usability Testing can he reach the far corners of the world. Santa’s challenges are similar to those of most Website Product Managers.

His customers have a broad variety of needs. Some children want the latest video game, others the latest Lego kit, and some want that patchwork doll. So Santa needs to sample thousands of children to be able to find the requirements of millions. Only by using a service like Webnographer can you easily research hundreds of users instead of 10.

Up to the last moment, new toys are being released and launch dates are changing. Only with a service like Webnographer can the testing be moved forward or back easily. There is no participant recruitment cost to reschedule.

Santa tried A/B testing last year, but then 50% of the children got the wrong present. He has realised that analytics is like accounting: it only tells you after the fact. With Webnographer you can test before.

To add Webnographer to your New Year’s wish list contact Sabrina or James at Webnographer.

Illustration by Bruno Miguel Fernandes Maltez


Discovering WHY from numbers

November 7th 2013 | Posted In: Events, Research methods and approaches

A couple of weeks ago James and I gave a talk at the UXPA in London. The theme of the October event was “The power of quantitative data”. The title of our talk was “Discovering WHY from numbers”.

Conventional UX thinking is that, in a UX test, numbers can only tell you what happened and cannot explain the ‘why’ of a UX issue. In our talk, James and I showed how one can discover ‘why’ an issue is happening, and how one can go beyond knowing simple completion rates.

@ElviaVasc created a great sketchnote of our talk. And there are many great pictures of the event on the UXPA Pinterest board.


Related articles

» “Why You Only Need to Test with 5 Users” is no longer relevant
» 23 Remote Usability Methods
» CAPTCHAs: How bad are they really for User Experience?
» 3 Case Studies that prove the value of UX



23 Remote Usability Methods

October 31st 2013 | Posted In: Research methods and approaches


It is easy to think that there are only one or two remote usability methods out there: moderated and un-moderated. But there are many more methods than just a couple.

Last year, before the Denver Lean UX conference, Sabrina and I came up with a list of the different remote research methods. In total we came up with 23. Some of them we use at Webnographer, and others we know that other people use. I am sure we are missing one or two; please post any missing ones in the comments.

This post is the first in a series, giving a top-level overview of all the online user research methods. Over the next few weeks, we will post detailed articles on some of the more popular methods, explaining how and when to use them.

The 23 methods fall into 2 main categories: synchronous, where the moderator or researcher interacts with the respondent; and asynchronous, where the researcher does not interact with the respondent directly.

The methods listed with a (w) are ones that we either use or have used at Webnographer.


Synchronous remote methods


Moderated Test

The participant and the researcher meet via web/video conferencing. The participant is asked to think aloud while the researcher notes his/her observations.

Provides a remote version of a lab test and enables a researcher to test users anywhere in the world.


Online Ethnography

This method researches communities facilitated through online spaces such as YouTube, Twitter, or Pinterest. The research is carried out by participating in those online spaces.

Online Ethnography helps increase empathy and reduce context collapse. By partaking in the experience, the designer understands the participant’s perspective.


Telephone Survey

Participants are called and asked questions on the phone.

A quick way of gathering responses from users. It can be useful for getting feedback from users who will not take part in other forms of survey.


Remote Eye Tracking

This method provides information about where the participant is looking on the page. It identifies what attracts their attention and what passes unseen.

It shows which areas of the webpage users look at first and which areas they ignore.


Asynchronous remote methods


True Intent

Users are intercepted when they arrive on the website. They are asked the reason for their visit before being redirected back to the site to complete the task using a remote usability testing tool. While they complete the task, their interaction with the site is recorded.

Records real user behaviour and identifies users’ real goals. Identifies why people are visiting and whether they succeed or not. Helps to prioritise and quantify issues.


Set Task

Participants are set a task on a website to find a piece of information. While they navigate through the site, their interactions are recorded with a remote usability testing tool.

Evaluates whether users can complete a task successfully, how they navigate the site, and the amount of time spent. It provides insights into where and why customers are having problems.


Race Test

Participants are asked to perform a task on the website within a given time.

Tests how people perform under stress. Behaviour is often different in stressful situations.


Click to Finish Test

Users are asked to complete a task on a single page. Once participants click on the page, the task is finished. The test can be done with static mock-ups or real sites.

Provides a fast way of gathering early feedback. This method evaluates whether users can find a piece of information within a page. Metrics include success ratings, time on task, and satisfaction ratings.


5 Second Test

Participants are shown a webpage for five seconds. They are then asked to write down everything they remember about the page.

Shows what users remember about the webpage, and evaluates which items are the most prominent. This method uses recall memory.


Recognition Test

Participants are shown a webpage. The page then “disappears” and participants are shown a list of items. The participants must identify which items were on the webpage and which were not.

Shows what users remember about the webpage, and evaluates which items are the most prominent. This method uses recognition memory.


Critical Incident Report

Participants are asked to give feedback via email, postal mail, or a web form about their experience after visiting a site.

A fast and relatively inexpensive method of collecting data about users’ experiences.


A vs B

This method is used to test alternative designs or versions of the same webpage. It can use a web stats package or a remote usability testing tool like Webnographer.

With a large enough sample size, this method identifies which design route works best.
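“Large enough” can be checked with a standard two-proportion z-test. The sketch below is a generic statistical illustration with made-up visitor numbers, not a feature of Webnographer or any particular analytics package:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates.
    conv_*: conversions observed; n_*: visitors shown each variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: B converts at 6% vs A's 5%, 5,000 visitors each.
z = two_proportion_z(250, 5000, 300, 5000)
print(round(z, 2))  # |z| > 1.96 means significant at roughly the 95% level
```

With these made-up numbers the difference just clears the 95% threshold; with only 500 visitors per variant the same 1-point lift would not, which is why sample size matters for A vs B tests.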


Web Analytics

Web analytics collects your website’s visitor log data, and allows the analysis and reporting of Internet data for the purpose of understanding and optimising your website.

Discover where your visitors are going, which pages they are visiting, and how long they are staying. This method does not identify why users are visiting.


Open Card Sort

Participants are given a set of cards. They need to come up with appropriate categories and group similar cards.

A card sort can help navigation design and information architecture. It can also be an effective way of determining which labels and wording work best.


Closed Card Sort

Participants are given a set of cards and categories. They need to group similar cards under each appropriate category.

A card sort can help navigation design and information architecture. It can also be an effective way of determining which labels and wording work best.


Click Sort

Participants are given a set of cards and categories. They need to click on the appropriate category for each card.

This is similar to a card sort but, due to its faster interaction, users can complete far more items than in a card sort.


Tree Test

Participants are asked to perform a task. They are shown a list of items, such as links, and need to select the one item they think is the most effective for completing the task.

A tree test is useful for testing menu structures.


Diary Study

Participants are asked to keep a log/diary of their activities over a period of time. The study can have a focus/theme, in which case participants need to track everything related to the given topic.

Helps to understand how users’ experience changes over time. It can help with capturing a broader experience, especially with a service that has multiple touch points.


Participant Review

Participants are sent a link to a website and give their design suggestions via instant messaging.

Co-design goes remote. This allows people with diverse backgrounds to contribute to the design.


Video and Send

Users film their experience and then email the video back to the researchers.

Video reduces context collapse. It makes the designer feel closer to the research participant.


Web Survey

Participants respond to a survey online. They can either be intercepted on a website or emailed an invitation to take part in the research.

One of the most cost-effective ways of gathering a large amount of user feedback.


Longitudinal Survey

The study is conducted over a period of time with the same participants. The aim is to observe the changes that occur between the different sessions.

Useful for seeing how users’ experience changes over time.


Delayed Survey

Users are intercepted by a pop-up window when they arrive on the website. They are asked for their email address and about their goal. The following day, the user is emailed a follow-up survey asking about the success of their visit.

A delayed survey captures the whole customer experience.


Exit Survey

Real users are intercepted by a pop-up window and asked if they want to take part in the research; if yes, a new window opens as a pop-under. When they end the session and close the browser, the survey window reappears and they are asked a couple of questions.

Captures users’ experience after their interaction with the site.


This post gives a top-level overview; over the next few weeks, we will be posting a detailed article for each method, explaining in detail how and when to use it.


Feel free to post in the comments if you feel that we are missing a remote research method.