A Penny for Your Thoughts

In this section I provide an overview of the available research and evaluation methods, offer examples as downloadable guides, and end with a brief look at the effectiveness and best use of each method.

Introduction

The choice of research method depends on the goals of the research, the data you want to obtain and the development stage of the design. Essentially, the methods fall into five groups. Although much of the research around the user (discovery research) is conducted through interviews (and possibly diary studies, analytics and data mining), when it comes to design evaluation there are several applicable approaches.

User Testing

Users are invited to interact with a system, an interactive or paper prototype, in the research lab, at their home or office, or remotely. Sessions include some form of observation, which is good both for identifying issues and for collecting quantitative measures. This is the gold standard in user research and a method that most UX professionals use on a regular, often daily, basis.
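
The quantitative side of a session often reduces to a handful of simple metrics. Here is a minimal sketch of how they might be summarised; the success flags and task times are hypothetical placeholders, not data from any real study:

```python
# Summarise quantitative measures from a (hypothetical) user-testing study.
import statistics

task_success = [True, True, False, True, True, False, True, True]
task_times_s = [52.4, 47.9, 88.0, 50.3, 61.7, 95.2, 49.0, 55.6]  # seconds

success_rate = sum(task_success) / len(task_success)
print(f"Task success rate: {success_rate:.0%}")                         # -> 75%
print(f"Median time on task: {statistics.median(task_times_s):.1f} s")  # -> 54.0 s
```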

Expert Evaluations

Here UX and/or domain experts apply their judgement to a design and identify potential usability problems, contributing to the incremental improvement of the design. There’s no point in using this approach if your aim is to learn about the broader user experience. E.g. heuristic evaluation and cognitive walkthrough.

Analytical Evaluations

This is another expert evaluation method – there are no users involved. The expert makes detailed performance predictions based on models of the user and the design rather than on their own judgement. E.g. the keystroke-level model.
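
To make this concrete, below is a minimal keystroke-level model sketch in Python. The operator times are the classic Card, Moran and Newell estimates and the task sequence is invented for illustration; a real analysis would calibrate both for its own users and hardware:

```python
# Keystroke-level model (KLM) sketch: predict expert task time by summing
# per-operator time estimates (classic published values, in seconds).
OPERATOR_TIMES = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
    "B": 0.10,  # press or release a mouse button
}

def klm_predict(sequence: str) -> float:
    """Predict task time for a string of KLM operators, e.g. 'MHPBB'."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Invented task: think, move hand to the mouse, point at an icon, click it.
print(f"{klm_predict('MHPBB'):.2f} s")  # -> 3.05 s
```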

Query-based Evaluations

Direct enquiries are posed to the user to discover preferences, attitudes and experiences. There is no point in this approach if your goal is to find out how long it takes to complete a task. E.g. interviews, questionnaires and focus groups.

Experimental Evaluations

These are focused, controlled experiments to investigate a specific hypothesis. They are not suitable for evaluating a complete design, and when used they are usually combined with another method at a different stage in the design process. E.g. hypothesis testing.
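
As an illustrative sketch, suppose the hypothesis is that design B lets users complete a task faster than design A. With SciPy the comparison is a two-sample t-test; the timings below are invented for illustration:

```python
# Controlled-experiment analysis sketch: compare task times for two designs.
from scipy import stats

times_a = [41.2, 38.5, 45.0, 39.9, 44.1, 42.3]  # seconds, design A participants
times_b = [35.1, 33.8, 37.2, 36.5, 34.0, 38.4]  # seconds, design B participants

# Welch's t-test: does not assume equal variances between the two groups.
t, p = stats.ttest_ind(times_a, times_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p-value supports the hypothesis
```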

When choosing a method, you should always consider the following:

  • Purpose – Why are you doing the evaluation? Is it to test a new design? Is it to improve an existing product? Benchmarking the current product for comparison with a new product?
  • Data – what information are you trying to obtain? Is it usability problems (qualitative) and/or metrics (quantitative)? Are you seeking opinion on the design?
  • Users or Experts – who will take part, which phase of the design process are you in, and is there any commercial sensitivity around your design? Do you need the general public or users well versed in the domain? Do you have to look at specific age groups, gender, computer competency, physical ability etc.?
  • Stage of the Design – Are you testing concepts, a prototype or a final product?
  • Location – Where would you conduct the research? In a usability lab or at the user’s place of work or home, or remotely via a web conference?
  • Novelty of the System – a first-of-its-kind design will probably require more research, and therefore more time, as there is a good chance of additional design iterations.
  • Criticality – what is it you are testing? The extent of the research is likely to vary if you are testing a control panel for hospital use versus a museum information kiosk.
  • Resources – how much time do you have? What’s your budget and do you have the equipment or intend to hire it?

Plan your research before scheduling any sessions, and let the plan be driven by the purpose of the research and the data you wish to obtain. Once those are established, consider the other practical issues raised above.

Research Methods

Below are guides to some of the available research and evaluation techniques.

  • Accessibility Audit

    An accessibility audit, like a heuristic evaluation, involves an accessibility expert reviewing your website or web application.

    PDF download
  • Analytical Evaluation

    An analytical evaluation is performed by experts using models and formulae to make predictions on performance time.

    PDF download
  • Card Sort

    A card sort is a great way to get your users to help you organise your navigation and content.

    PDF download
  • Cognitive Walkthrough

    The cognitive walkthrough involves going through tasks on an interface and looking for problems users may encounter.

    PDF download
  • Competitor Analysis

    Compare the product you are working on against its competitors to understand where your product is better or worse, or where it is unique and something marketing should promote.

    PDF download
  • Experimental Evaluation

    The purpose of an experimental evaluation is to concentrate on a single design-related issue.

    PDF download
  • Expert Review

    In these reviews, experts assess and/or score the usability of applications.

    PDF download
  • Heuristic Evaluation

    The heuristic evaluation is a method proposed by Jakob Nielsen for reviewing user interfaces for potential usability issues.

    PDF download
  • Pluralistic Walkthrough

    The pluralistic walkthrough is not a strict expert review but an inspection method in which a team walks through an application.

    PDF download
  • Query-Based Evaluation

    Essentially this is the collection of feedback by a query method rather than observation.

    PDF download
  • Stakeholder Analysis

    Not so much a research method as an important process you should undertake to better understand who you are working with, where they provide the most value and how to communicate with them effectively.

    PDF download
  • User Testing

    User testing is the gold standard empirical approach to which other methods are compared.

    PDF download
  • User Experience Audit

    A UX audit is not too dissimilar to a competitor analysis except that in the audit you focus on your site and provide more depth to your analysis.

    PDF download
  • Alternative Evaluation Methods

    A few less often used research and evaluation methods to complement the more traditional staples of user testing and interviews.

    PDF download

Effectiveness of Evaluation Methods

Knowing the strengths and weaknesses of the various evaluation techniques can help you to choose the appropriate approach, whilst appreciating its limitations.

|              | Heuristic Evaluation | Cognitive Walkthrough | User Testing | Query Method | Analytic Method | Experiment |
| ------------ | -------------------- | --------------------- | ------------ | ------------ | --------------- | ---------- |
| Purpose      | Formative | Formative | Both | Both | Formative | Summative |
| Data         | Qual. | Qual. | Both | Both | Quant. | Quant. |
| Cost         | Low | Medium | High | Medium | Medium | High |
| Participant  | Expert | Expert | Users | Users | Expert | Users |
| Location     | Lab | Lab | Lab / Field | Lab / Field | Lab | Lab |
| Good for ... | Issue prediction | Learnability | Real problems | Opinion and unanticipated issues | Time prediction | Key issues / concerns |

Depending on where your organisation is in the product development cycle, some research methods are more suitable than others; see the table below (Christian Rohrer, 2014).

| Product Development Phase | Discovery / Strategy | Execution | Assessment |
| ------------------------- | -------------------- | --------- | ---------- |
| Goal | Inspire, explore and choose new directions and opportunities | Inform and optimise designs to de-risk and improve usability | Measure against itself or competition |
| Approach | Qualitative and Quantitative | Mainly Qualitative (formative) | Mainly Quantitative (summative) |
| Examples | Interviews, questionnaires, diary studies, data mining, analytics | Card sorting, user testing, questionnaires, participatory design | Usability benchmarking, online assessments, surveys, A/B testing |

Generally, early on you are looking for both qualitative (why and how) and quantitative (how many and how often) data; the emphasis then shifts to qualitative data during the execution phase. The assessment of your final product tends to call for more quantitative data.
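
The A/B testing listed under the assessment phase is a typical source of such quantitative data, and usually comes down to a two-proportion hypothesis test. A minimal sketch using the statsmodels library, with conversion counts invented for illustration:

```python
# A/B test sketch: do variants A and B convert at different rates?
from statsmodels.stats.proportion import proportions_ztest

conversions = [130, 165]  # successful outcomes for variants A and B
visitors = [2000, 2000]   # sample sizes for variants A and B

z, p = proportions_ztest(conversions, visitors)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real difference
```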

Reference

Rohrer, C. (2014). When to Use Which User-Experience Research Methods. Nielsen Norman Group. https://www.nngroup.com/articles/which-ux-research-methods/