E-Lecture 7: Lab Studies
Learning objectives
- Describe the role of observation in interaction design
- Describe the key concepts in evaluation
- Outline different types of evaluation method and how they might be applied
- Outline the process of usability testing
- Discuss the use of experiments
a. Data gathering
- any stage in development
- early, late
- evaluation
- Observation is a method of gathering data
- It can be used at any stage of the development lifecycle
- It can be used early on to collect information about users and their tasks and goals
- Or it might be used later in development, for example at the evaluation stage, to see how well a prototype supports those tasks and goals
b. Types of Observation
Types of Environment
- Field studies
- lab studies (controlled)
- There are two types of observation: direct and indirect
- Direct observation is used by the investigator to study users as they perform their activities
- Indirect observation involves studying the records of an activity after it has been performed
- There are also two types of environment where observation might take place
- In field studies, users are observed performing their day-to-day tasks in their actual setting
- In lab studies, individuals are observed performing specific tasks within a controlled environment
c. Controlled Environment (What am I thinking?)
- Usability Labs
- Portable Labs
- Observing users in a controlled environment may occur within a purpose-built usability laboratory, but it is also possible to create a portable lab which is taken to the users' normal environment
- A portable lab creates less disruption and is cheaper than maintaining a purpose-built laboratory
- One of the problems with lab studies is that you can see what the user is doing, but you don't know what they are thinking
d. Think Aloud
i. Think Aloud (I wonder what this paper is about?)
- One of the attempts to overcome this problem is known as the Think Aloud technique or the Think Aloud protocol
- And this idea was first introduced by Ericsson and Simon in 1980
- The idea is simply to get users to say what is going on in their heads, to think aloud
ii. Think Aloud Problems
One of the problems with the Think Aloud technique is that users typically are not think-aloud experts, and often there will be periods of silence where they don't say anything
Here are some questions that you can use as an investigator to try and prompt them to start talking about what is going on inside their heads.
- Silence
- What are you looking for?
- What do you think you need to do?
- How do you think you’ll do it?
- Can you see what you expect?
- What can you not find?
iii. Think Aloud Tips:
- Tip One:
Take notes throughout - what they did, what they did not do, what questions they asked
- Tip Two:
Let the user make errors, let them get stuck
- Here are two tips if you think you might be using the Think Aloud technique.
- Tip 1: take notes on everything throughout the session, what they say, what they do, and what they don't do.
- Tip 2: don't be afraid to let users make mistakes or get stuck; you are going to learn something from it.
iv. Think Aloud After the Event
- Video interaction
- Playback
- Think aloud in review
- A variation on the Think Aloud technique is to perform the thinking aloud after the event.
- You get the users to interact with the system and video it
- Then play the video back to the users and get them to tell you what they were thinking at each step of the interaction
i. Evaluation
- Evaluation
- Usability
- User Experience
- Evaluation is integral to the design process
- Evaluators collect information about the user experience with the prototype or system in order to improve its design
- Evaluation focuses both on the usability of the system, for example how easy it is to learn and use, and on the user experience when interacting with it, for example how satisfying, enjoyable or motivating the interaction is
ii. Why, What, Where and When of Evaluation
- Conducting evaluation involves understanding not only why evaluation is important but also what aspects to evaluate, where evaluation should take place and when to evaluate
Why evaluate?
- Usability
- User experience
- Design sells
- Real problems
- Users now expect more than just a usable system
- They also look for pleasing and engaging experience
- From a business and marketing perspective, well-designed products sell, hence there are good reasons for companies to invest in evaluation
- Designers can focus on real problems and the needs of different groups of users, rather than debating what each of them likes or dislikes
Types of Evaluation Method
Types of Lab Evaluation
Controlled settings
1- With users
2- Without users
- There are two types of evaluation that can be performed in the lab
- Evaluations that involve users and evaluations that don't
Settings without User Involvement
1- Heuristic Evaluation
2- Cognitive Walkthrough
3- Analytics
- Evaluation without users involves the researcher imagining how the interface is likely to be used
- There are three common evaluation techniques: heuristic evaluation, cognitive walkthrough and analytics
- Heuristic Evaluation
- Heuristics (“rules of thumb”)
- In heuristic evaluation, the investigator has a set of heuristics or rules of thumb
- These are usability principles against which the system is judged.
- The extent to which the system conforms to or violates each of the heuristics is usually given some kind of rating
- The next video presents the 10 usability heuristics of Jakob Nielsen, a pioneer of the use of heuristic evaluation
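The rating step can be sketched as a simple severity table. The heuristic names below are from Nielsen's well-known set, the 0-4 severity scale (0 = no problem, 4 = usability catastrophe) follows his severity-rating convention, and the scores themselves are invented for illustration:

```python
# Hypothetical severity ratings from a heuristic evaluation session.
# Scale: 0 = no problem ... 4 = usability catastrophe (Nielsen's convention).
ratings = {
    "Visibility of system status": 1,
    "Match between system and the real world": 0,
    "User control and freedom": 3,
}

# Flag heuristics whose violations are serious enough to need fixing.
serious = [name for name, severity in ratings.items() if severity >= 3]
print(serious)  # ['User control and freedom']
```

In practice each evaluator produces their own ratings independently before the results are merged.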
- Cognitive Walkthrough (Simulating the process & Ease of learning)
- A cognitive walkthrough involves simulating a user’s problem solving process at each step in the human computer dialogue, and checking to see how users progress from step to step in this interaction
- A key feature of the cognitive walkthrough is that it focuses on evaluating a design for ease of learning
- The video is a sample cognitive walkthrough of a paper prototype of an iPhone app
- Analytics (Discovery of patterns & Web Analytics: “The measurement, collection, analysis and reporting of Internet data”)
- Analytics is the discovery and communication of meaningful patterns in data
- The most common form of analytics relates to users interacting with websites; this is known as web analytics
- Web analytics is defined by Arikan as the measurement, collection, analysis and reporting of Internet data
- The next video is from a company selling web analytics software
- They give a rigorous explanation of what web analytics is about
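The "measurement" part of Arikan's definition can be sketched in a few lines: counting page views and unique visitors from a list of log records. All visitor IDs, page paths and data below are invented for illustration:

```python
from collections import Counter

# Hypothetical web log: (visitor_id, page) pairs, one per page view.
log = [
    ("u1", "/home"), ("u2", "/home"), ("u1", "/products"),
    ("u3", "/home"), ("u2", "/checkout"), ("u1", "/home"),
]

page_views = Counter(page for _, page in log)       # views per page
unique_visitors = len({visitor for visitor, _ in log})

print(page_views["/home"])   # 4 views of the home page
print(unique_visitors)       # 3 distinct visitors
```

Real web-analytics packages do the same kind of aggregation at scale, adding sessions, referrers, conversion funnels and so on.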
Experiments (Hypothesis Testing & Statistical Analysis)
- Controlled experiments usually involve the testing of a hypothesis
- A hypothesis is a prediction about the way that users will perform with the interface.
- The interpretation of results usually involves some kind of statistical analysis
i. Variables (Independent & Dependent)
- Typically a hypothesis involves a relationship between two things called variables
- The independent variable and the dependent variable
- The video introduces the concept of the independent variable
- In the video, the independent variable was the frequency of watering the plants
- The independent variable is the variable which you manipulate
- By contrast, the dependent variable is the variable that you measure
It is called dependent because by hypothesis, its value depends on the value of the independent variable
- In the video, it was the healthiness of the plant
- An example of the dependent variable in interaction design might be how long it takes the user to complete a given task
ii. Hypotheses
H1: plant health depends on watering frequency
- The unstated hypothesis in the video then is that the plant health depends on watering frequency
- Plant health is the dependent variable and watering frequency is the independent variable
Null and Alternative Hypotheses
- H1(alternative): plant health depends on watering frequency
- H0(null): plant health does not depend on watering frequency
- The hypothesis that we have considered so far is known as the alternative or alternate or experimental hypothesis
- It’s the hypothesis that one thing depends on another
- But hypotheses always come in pairs
- There is another hypothesis: that we are wrong, and that plant health does not depend on watering frequency
- This is known as the null hypothesis
- If we find enough evidence against the null hypothesis, we reject it and accept the alternative hypothesis
iii. Design (Between subjects & Within subjects)
- There are two kinds of experimental design: between subjects and within subjects
- A between-subjects design, also known as independent groups, divides the users into two or more groups and gives each group a different system or interface to use
- You then compare the performance of the two groups, and hence of the two systems
- In a within-subjects design, also known as repeated measures, the same users are exposed to both systems
- You then simply compare each user's performance with each one
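The two data layouts can be sketched as follows; the task-completion times (in seconds) and group sizes are made up for illustration:

```python
import statistics

# Between subjects (independent groups): each participant uses ONE interface,
# so the comparison is between the two groups as a whole.
group_a = [34, 41, 29, 38]      # participants who used interface A
group_b = [45, 39, 48, 50]      # a DIFFERENT set of participants, interface B
between_diff = statistics.mean(group_b) - statistics.mean(group_a)

# Within subjects (repeated measures): the SAME participants use both
# interfaces, so the scores can be paired up per participant.
interface_a = [34, 41, 29, 38]  # times for participants 1-4 on interface A
interface_b = [36, 45, 31, 40]  # times for the same participants on B
per_person = [b - a for a, b in zip(interface_a, interface_b)]

print(between_diff)   # 10.0: interface B was 10 s slower on average
print(per_person)     # [2, 4, 2, 2]: every participant was slower on B
```

The pairing in the within-subjects layout removes individual differences between participants, which is why repeated measures usually needs fewer people.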
iv. Statistics
- Data from hypothesis testing is usually analysed using statistics
- The idea of using statistics is simply to find out whether a difference that you might find between two interfaces could have nothing to do with the design, but be just chance
- You can think of statistics software as a kind of chance calculator
- You feed in the data from the performance of each of the two interfaces, and out of the chance calculator comes a value called "p", the probability value or p-value for short.
- The p-value is a probability, between 0 and 1, that a difference as large as the one you found between the two interfaces would happen purely by chance
- If the p-value is very small, close to zero, it means that the difference between the two systems is unlikely to be down to chance.
- That means that the design makes a difference
- How close to zero does it have to be? Most researchers in Interaction Design take a probability value of less than 0.05
- That means less than 5%
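The "chance calculator" idea can be made concrete with a permutation test, one simple way of computing a p-value: pool all the measurements, enumerate every way of splitting them into two groups, and count how often a random split produces a difference at least as big as the one actually observed. The task-completion times below are invented for illustration:

```python
import itertools
import statistics

# Hypothetical task-completion times (seconds) for two interfaces,
# from a between-subjects study with five participants per group.
interface_a = [34, 41, 29, 38, 33]
interface_b = [45, 39, 48, 50, 46]

observed = abs(statistics.mean(interface_a) - statistics.mean(interface_b))

pooled = interface_a + interface_b
n = len(interface_a)
count = total = 0
# Every way of splitting the ten pooled times into two groups of five.
for combo in itertools.combinations(range(len(pooled)), n):
    group1 = [pooled[i] for i in combo]
    group2 = [pooled[i] for i in range(len(pooled)) if i not in combo]
    diff = abs(statistics.mean(group1) - statistics.mean(group2))
    total += 1
    if diff >= observed:
        count += 1

p_value = count / total
print(p_value)  # ~0.016, below 0.05: reject the null hypothesis
```

A commercial statistics package would typically report a p-value from a t-test instead, but the interpretation of the resulting p is the same.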
v. Reliability and Validity
- One of the considerations in evaluating your data is its reliability and validity.
- Reliability is about whether the same results would be replicated if you repeated the evaluation
- Different evaluation methods have different levels of reliability
- Lab studies involving carefully controlled experiments will have high reliability
- Whereas field studies are less reliable
- An unstructured interview has low reliability because it's difficult, if not impossible, to repeat exactly the same discussion
- Validity concerns whether the evaluation method measures what it's intended to measure
- For example, if the goal is to find out how users use a new product in their homes, a laboratory experiment would be less valid than an ethnographic study