A Penny for Your Thoughts
In this section I provide an overview of the available research and evaluation methods, offer examples (as downloadable guides) and end with a brief look at the effectiveness and best use of each method.
Introduction
The choice of research method depends on the goals of the research, the data you want to obtain and the development stage of the design. Essentially, there are five groups of research method. Although much of the research around the user (discovery research) is conducted through interviews (and possibly diary studies, analytics and data mining), when it comes to design evaluation there are several applicable approaches.
User Testing
Users are invited to interact with a system, interactive or paper prototype, in the research lab, at their home or office, or remotely. Sessions include some form of observation, which is good for identifying usability issues and for gathering quantitative measures. This is the gold standard in user research and a method that most UX professionals use on a regular, often daily, basis.
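To give a flavour of the quantitative side, here is a minimal sketch (in Python, with invented session data) of the kind of simple measures an observed test typically yields: task success rate and time on task.

```python
# A minimal sketch of summarising quantitative measures from a user test.
# The session data below is hypothetical, for illustration only.
from statistics import mean, stdev

# One record per participant: did they complete the task, and how long did it take?
sessions = [
    {"completed": True,  "seconds": 74},
    {"completed": True,  "seconds": 92},
    {"completed": False, "seconds": 180},
    {"completed": True,  "seconds": 65},
    {"completed": True,  "seconds": 88},
]

success_rate = sum(s["completed"] for s in sessions) / len(sessions)
times = [s["seconds"] for s in sessions if s["completed"]]

print(f"Task success rate: {success_rate:.0%}")
print(f"Time on task (successful attempts): mean {mean(times):.0f}s, sd {stdev(times):.0f}s")
```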
Expert Evaluations
Here UX and/or domain experts apply their judgement to a design and identify potential usability problems, contributing to the incremental improvement of a design. There’s no point in using this approach if your aim is to learn about the broader user experience. E.g. heuristic evaluation and cognitive walkthrough.
Analytical Evaluations
This is another expert evaluation method – there are no users involved. The expert makes detailed performance predictions based on models of the user and the design, rather than on their own judgement. E.g. the keystroke-level model.
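To make this concrete, here is a minimal sketch of a keystroke-level model prediction: each low-level operator (keystroke, pointing, mental preparation and so on) has an empirically derived time, and the predicted task time is simply their sum. The operator values below are commonly cited estimates; the task breakdown itself is invented for the example.

```python
# A minimal sketch of a keystroke-level model (KLM) prediction.
# Operator times (in seconds) are commonly cited estimates; the task
# breakdown is hypothetical.
OPERATORS = {
    "K": 0.20,  # keystroke (skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # mouse button press or release
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(sequence: str) -> float:
    """Sum the operator times for a sequence such as 'MPBB'."""
    return sum(OPERATORS[op] for op in sequence)

# Example: think, point at a field and click it, move hands to the
# keyboard, think, then type a five-character code.
task = "MPBB" + "H" + "M" + "KKKKK"
print(f"Predicted task time: {klm_time(task):.2f}s")  # -> 5.40s
```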
Query-based Evaluations
Direct enquiries posed to the user to discover preferences, attitudes and experiences. There is no point in this approach if your goal is to find out how long it takes to complete a task. E.g. interviews, questionnaires and focus groups.
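Questionnaires often come with standard scoring rules. As an illustration, the sketch below scores the widely used ten-item System Usability Scale (SUS); the example responses are invented.

```python
# A minimal sketch of scoring the System Usability Scale (SUS), a common
# ten-item questionnaire used in query-based evaluation. Responses are
# on a 1-5 scale; the example answers below are made up.
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring: odd items score (r - 1), even items (5 - r),
    and the sum is scaled to 0-100 by multiplying by 2.5."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```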
Experimental Evaluations
These are focused, controlled experiments to investigate specific hypotheses. They are not suitable for evaluating a complete design and, when used, are usually combined with other methods at different stages in the design process. E.g. hypothesis testing.
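To make this concrete, here is a minimal sketch of a simple hypothesis test: comparing task-completion times between two design variants with a two-sample t-test. The data and significance threshold are illustrative only, and in practice you would check the test’s assumptions before relying on the p-value.

```python
# A minimal sketch of hypothesis testing for a controlled experiment:
# do participants complete a task faster with design B than design A?
# The timing data is hypothetical.
from scipy.stats import ttest_ind

design_a = [74, 92, 88, 81, 95, 79, 86]   # task times in seconds
design_b = [61, 70, 66, 75, 58, 69, 72]

t_stat, p_value = ttest_ind(design_a, design_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the designs likely differ.")
```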
When choosing a method, you should always consider the following:
- Purpose – Why are you doing the evaluation? Is it to test a new design? Is it to improve an existing product? Are you benchmarking the current product for comparison with a new one?
- Data – What information are you trying to obtain? Is it usability problems (qualitative) and/or metrics (quantitative)? Are you seeking opinion on the design?
- Users or Experts – What are you testing? Which phase of the design process are you in? Is there any commercial sensitivity around your design? Do you need the general public or users well versed in the domain? Do you need to look at specific age groups, genders, levels of computer competency, physical abilities, etc.?
- Stage of the Design – Are you testing concepts, a prototype or a final product?
- Location – Where would you conduct the research? In a usability lab or at the user’s place of work or home, or remotely via a web conference?
- Novelty of the System – A first-of-its-kind design will probably require more research, and therefore more time, as there is a good chance of additional design iterations.
- Criticality – How critical is the system you are testing? The extent of the research is likely to vary between, say, a control panel for hospital use and a museum information kiosk.
- Resources – How much time do you have? What’s your budget, and do you have the equipment or will you need to hire it?
Plan your research before scheduling any sessions, and make sure the plan is driven by the purpose of the research and the data you wish to obtain. Once these are established, consider the other practical issues raised above.
Research Methods
Below are guides to some of the available research and evaluation techniques.