Lecture 10: Evaluation / Testing
Heuristic Evaluation
use a pre-defined set of rules (heuristics) to assess the usability of a product's interface
Two well-known sets of heuristics
Nielsen Heuristic Evaluation
the name comes from Nielsen's 10 heuristics (10H)
Nielsen suggested that a group of external evaluators should evaluate the UIs, guided by a set of heuristics
local experts are still acceptable, but they may be biased because they worked on the UI design
Shneiderman's Eight Golden Rules (8GR)
Process of Heuristic Evaluation
Evaluation team is introduced to and trained on the domain (brief intro to the app)
some background knowledge may be needed by the evaluation team
a light introduction
Each individual carries out an evaluation independently, based on a set of heuristics
any well-established set of heuristics is acceptable, not Nielsen's specifically
but evaluators need to understand the heuristics well
evaluators go through the app multiple times to get a feel for it, then evaluate the defined tasks
for each breach of a heuristic there is a severity rating from 0 to 4 (relative to its urgency)
0 - don't agree that this is a usability problem at all
1 - cosmetic problem: need not be fixed unless extra time is available
2 - minor usability problem: fixing this should be given low priority
3 - major usability problem: important to fix; should be given high priority
4 - usability catastrophe: imperative to fix before the product can be released
Factors to assess severity of the problem
Frequency
how many users will be affected? common/rare problem
Impact
the impact on the task and the user
Persistence
how many times will a user experience the problem
Group of evaluators gets back together and combines/merges their lists
severity ratings are averaged
a report with the findings is given back to the dev team
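The merge-and-average step can be sketched in Python. The problem descriptions and ratings below are made-up examples, not from the lecture; each evaluator's list maps a problem to their 0-4 severity rating.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical findings: one dict per evaluator, mapping a
# problem description to that evaluator's severity rating (0-4).
evaluator_findings = [
    {"no undo on delete": 4, "inconsistent button labels": 2},
    {"no undo on delete": 3, "low-contrast error text": 1},
    {"no undo on delete": 4, "inconsistent button labels": 3},
]

# Merge the individual lists into one combined list per problem.
merged = defaultdict(list)
for findings in evaluator_findings:
    for problem, severity in findings.items():
        merged[problem].append(severity)

# Average the ratings and sort, most severe first, for the report.
report = sorted(
    ((problem, mean(ratings)) for problem, ratings in merged.items()),
    key=lambda item: item[1],
    reverse=True,
)
for problem, avg in report:
    print(f"{avg:.1f}  {problem}")
```

Problems reported by several evaluators accumulate multiple ratings, so the average reflects the group's view rather than one person's.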
ADV and DISADV
ADV
inexpensive
can be repeated multiple times
DISADV
Results depend on evaluator skills, experiences, and biases.
Platform dependent
Experts struggle to predict what users will actually do.
NOT A replacement FOR USABILITY/USER TESTING
Need multiple experts, around 5-6
Nielsen's answer was that after 5-6 experts you get diminishing returns in the percentage of usability problems found
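The diminishing-returns claim follows Nielsen and Landauer's model: the proportion of usability problems found by n evaluators is 1 - (1 - L)^n, where L is the proportion a single evaluator finds (about 0.31 in their data; the exact value varies by project). A quick sketch:

```python
# Nielsen & Landauer's model of problems found by n evaluators.
# L ~ 0.31 is the commonly cited single-evaluator proportion;
# it is an empirical average, not a universal constant.
L = 0.31

def proportion_found(n: int, single: float = L) -> float:
    """Proportion of usability problems found by n evaluators."""
    return 1 - (1 - single) ** n

for n in range(1, 9):
    print(f"{n} evaluators: {proportion_found(n):.0%}")
```

Running this shows each added evaluator contributes less than the previous one, which is why roughly 5-6 experts is the usual recommendation.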
Evaluation ("evaluation == testing")
Why
check whether a product meets the users' needs
whether or not users like the product
extensive evaluation supports a successful design
leads to fewer errors
improves UX and user satisfaction
to compare similar products
to find problems early: it is cheaper to fix a problem early in development than later
Who
Users and experts
Users
Experts
go through
cognitive walkthrough (not covered)
Heuristic eval
Usability- user testing
think aloud
Field Studies- not covered in unit
What
often done for new products, but can also be done on existing products
method
DECIDE Usability Evaluation framework
Determine overall goals
Explore specific questions
Choose evaluation approach and methods
Identify practical issues
Decide how to deal with ethical issues
Evaluate, analyse, interpret and present the data
Usability Tests
think aloud technique
observe participants and take notes on various aspects
define common tasks and invite participants to complete them
Factors on what method to choose
cost of the product and the budget allocated for testing
time and resources available
criticality of Interface
experience of the design and evaluation team
number of expected users
novelty of project
Stage of Design or dev
When
Evaluation should be done from the start
iteratively
Nielsen's 10H
Error prevention
Recognition rather than recall
Consistency and Standards
Flexibility and efficiency of use
User control and freedom
Aesthetic and minimalist design
Match between system and the real world
Help users recognise, diagnose, and recover from errors
Visibility of System Status
Help and Documentation