Human-Computer Interaction - INF4820
Chapter 1
What is user experience
user involved
Interface
Interested in experience, to be measured
what are metrics
Measure or evaluation
task success, satisfaction, errors
Observable, quantifiable
UX measures effectiveness, efficiency or satisfaction
enables informed decisions
Value of UX metrics
key in calculating ROI
Chapter 2
Independent and dependent
independent we manipulate
dependent are measured - success rate, errors, satisfaction, completion
Types of data
nominal
unordered groups / categories
requires coding (assigning codes)
ordinal
ordered groups/categories
distance between ranks not meaningful
e.g. a user rates a website as bad, good, or great
interval
continuous such as temperature
distance between 20 and 30 is meaningful
allows descriptive statistics
ask whether a point halfway between two values makes sense
ratio
same as interval, but has a true zero
age, height, weight
zero indicates absence
also allows descriptive statistics
but additionally allows the geometric mean
descriptive statistics
central tendency
mean, median, mode
variability
how data is spread out
range, variance and standard deviation
confidence intervals
95% confidence interval
Mean ± 1.96 * [standard deviation / sqrt(sample size)] (sketch below)
describes the data; makes no inferences
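A minimal sketch of the 95% confidence interval formula above; the task-time values (in seconds) are made up purely for illustration.

```python
# Minimal sketch of the 95% CI formula: Mean ± 1.96 * SD / sqrt(n).
# The task-time values (seconds) are hypothetical.
import math

times = [34, 41, 29, 52, 38, 45, 33, 47]
n = len(times)
mean = sum(times) / n
sd = math.sqrt(sum((t - mean) ** 2 for t in times) / (n - 1))  # sample standard deviation

margin = 1.96 * sd / math.sqrt(n)
print(f"mean = {mean:.1f}s, 95% CI = [{mean - margin:.1f}, {mean + margin:.1f}]")
```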
Comparing means
independent or paired samples
independent samples
overlapping confidence intervals suggest the means are similar
t-test to determine whether the means differ
paired samples
A/B testing
needs an equal number of values so the pairs match up
multiple samples compared using ANOVA (see sketch below)
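A hedged sketch of these comparisons using SciPy's `ttest_ind`, `ttest_rel`, and `f_oneway`; the satisfaction ratings and design groups are invented for illustration.

```python
# Sketch of independent, paired, and multi-sample comparisons with SciPy.
# Ratings and design labels are hypothetical.
from scipy import stats

design_a = [4.1, 3.8, 4.5, 3.9, 4.2]
design_b = [3.2, 3.5, 3.0, 3.6, 3.4]

# Independent samples: different participants saw each design.
t_ind, p_ind = stats.ttest_ind(design_a, design_b)

# Paired samples: the same participants rated both designs (equal-length lists).
t_rel, p_rel = stats.ttest_rel(design_a, design_b)

# More than two samples: one-way ANOVA.
design_c = [4.6, 4.4, 4.8, 4.3, 4.7]
f_val, p_anova = stats.f_oneway(design_a, design_b, design_c)

print(p_ind, p_rel, p_anova)
```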
Relationships between variables
correlation
negative relation, as one increases the other decreases
nonparametric tests
analyse nominal and ordinal data
chi-square (sketch below)
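An illustrative sketch of a correlation and a chi-square test with SciPy; the task times, frustration ratings, and success/failure counts are hypothetical.

```python
# Sketch of the relationship tests above with SciPy (hypothetical data).
from scipy import stats

# Correlation between two interval/ratio variables.
task_time = [30, 45, 60, 75, 90]
frustration = [1, 2, 2, 4, 5]
r, p_corr = stats.pearsonr(task_time, frustration)  # r < 0 would indicate a negative relation

# Chi-square on nominal data: success/failure counts per design.
#            success  failure
observed = [[40,      10],   # design A
            [25,      25]]   # design B
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

print(r, p_corr, chi2, p_chi)
```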
Graphs
column or bar chart
line graphs
scatterplot
pie or donut chart
stacked bar chart
Chapter 3
3.1 STUDY GOALS
3.1.1 Formative Usability
changes as product is being developed
iterative
questions
what are most significant usability issues
What is good, what is frustrating
Most common mistakes users make
Do iterations result in improvements
What issues will remain after launch
Identify improvements
3.1.2 Summative Usability
measure how well a product meets expectations
evaluate against criteria
questions
did we meet goals
Overall usability?
compare against the competition
improvements from one release to next
3.2 USER GOALS
3.2.1 Performance
what the user does
how successfully a task is completed
how much effort
how many errors
amount of time it takes
3.2.2 Satisfaction
what user says or thinks
was it easy or confusing
performance and satisfaction not always correlated
3.3 CHOOSING THE RIGHT METRICS: TEN TYPES OF USABILITY STUDIES
3.3.1 Completing a Transaction
well defined start and end
% of completions
drop-off rate and where users drop off
self reporting metrics
likelihood of returning
user expectations
efficiency is completion rate
3.3.2 Comparing Products
compare to competition or previous releases
is the product improving
use success measure, efficiency and satisfaction
3.3.3 Evaluating Frequent Use of the Same Product
task time
learnability
self reporting on awareness and usefulness
3.3.4 Evaluating Navigation and/or Information Architecture
makes use of mockups or wireframes
task success best measure
lostness
card sort study
3.3.5 Increasing Awareness
advertising
underused part
number of interactions with element
self reporting
memory test
eye tracking
A/B testing
3.3.6 Problem Discovery
periodic checkup
more open ended tests
use actual site if possible
unique per user
hard to compare
assign priority
quick improvements
3.3.7 Maximizing Usability for a Critical Product
completing the task is of utmost importance
large number of participants required
user error
success rate
may tie to efficiency measure too
3.3.8 Creating an Overall Positive User Experience
subjective but still measurable
satisfaction most important
Many self reports
on satisfaction
and expectation
likelihood of future use
physiological measures of engagement
pupil dilation
heart rate
skin conductance
3.3.9 Evaluating the Impact of Subtle Changes
live site metrics from A/B testing
email or surveys are less effective here
3.3.10 Comparing Alternative Designs
usually early on
ask participant opinion on different designs
they can also rate prototypes
what tech, money, time do we have available
3.4 EVALUATION METHODS
3.4.1 Traditional (Moderated) Usability Tests
5-10 users
lab test one-on-one
formative studies
record issues, frequency, type and severity
self reporting possible
beware: small sample sizes can cause overgeneralization
3.4.2 Online (Unmoderated) Usability Tests
collect lots of data in a short amount of time
automatically collected data
can collect quantitative and qualitative data
less useful when deeper insights are required
3.4.3 Online Surveys
how many participants and what metrics are required
3.5 OTHER STUDY DETAILS
3.5.1 Budgets and Timelines
3.5.2 Participants
3.5.3 Data Collection
3.5.4 Data Cleanup
Chapter 4 - Performance Metrics
4.4 EFFICIENCY
4.4.1 Collecting and Measuring Efficiency
identify action to be taken
define start and end
count the actions
must be meaningful action
automated capture much easier
4.4.2 Analyzing and Presenting Efficiency Data
actions per task
which tasks take the most effort
lostness
N = number of different pages visited
S = total number of page visits (revisits counted)
R = optimal (minimum) number of pages for the task
L = sqrt[(N/S - 1)^2 + (R/N - 1)^2]
scores above 0.5 indicate the user appears lost
report average lostness, or the count of lost users (sketch below)
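A small Python sketch of the lostness formula above; the example participant's page counts are hypothetical.

```python
# Sketch of the lostness formula: L = sqrt[(N/S - 1)^2 + (R/N - 1)^2].
import math

def lostness(n_unique, s_total, r_optimal):
    """N = unique pages visited, S = total page views (revisits counted),
    R = minimum number of pages needed for the task."""
    return math.sqrt((n_unique / s_total - 1) ** 2 + (r_optimal / n_unique - 1) ** 2)

# Hypothetical participant: 7 distinct pages across 12 page views; optimal path is 4 pages.
score = lostness(7, 12, 4)
print(f"lostness = {score:.2f}", "(appears lost)" if score > 0.5 else "")
```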
4.4.3 Efficiency as a Combination of Task Success and Time
ratio of task completion rate to mean time per task
alternative: total task successes divided by total time, per participant
reflects cognitive or physical effort (sketch below)
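A short sketch of both efficiency calculations above; the completion flags and task times (in minutes) are invented for illustration.

```python
# Sketch of efficiency as success per unit time (hypothetical data).
completions = [1, 1, 0, 1, 1, 1, 0, 1]   # 1 = task success for that participant
times_min   = [2.0, 1.5, 3.0, 2.5, 1.8, 2.2, 3.5, 2.1]

completion_rate = sum(completions) / len(completions)
mean_time = sum(times_min) / len(times_min)

efficiency = completion_rate / mean_time             # completion rate over mean time per task
efficiency_alt = sum(completions) / sum(times_min)   # total successes over total time

print(f"{efficiency:.2f} vs. {efficiency_alt:.2f} successes per minute")
```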
4.2 TIME ON TASK
4.2.1 Importance of Measuring Time on Task
the more frequently a task is performed, the more important it is that it takes little time
4.2.2 How to Collect and Measure Time on Task
be diligent when doing it manually
if recording use timestamp
4.2.3 Analyzing and Presenting Time-on-Task Data
bar chart with confidence interval
show the number of users per time range (frequency distribution)
report the % of users above/below a time threshold (sketch below)
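A brief sketch of the "users per time range" and threshold summaries above; the 60-second target and the task times are assumptions made only for illustration.

```python
# Sketch of a time-on-task frequency distribution and a threshold summary.
# Times (seconds) and the 60-second target are hypothetical.
times = [48, 62, 55, 90, 41, 73, 66, 58, 102, 47]

# Number of users per time range.
bins = [(0, 60), (60, 90), (90, 120)]
counts = {f"{lo}-{hi}s": sum(lo <= t < hi for t in times) for lo, hi in bins}

# Percentage of users at or under the assumed 60-second target.
pct_within = 100 * sum(t <= 60 for t in times) / len(times)

print(counts, f"{pct_within:.0f}% within 60s")
```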
4.2.4 Issues to Consider When Using Time Data
decide whether to include only successful tasks, or failures too
think-aloud could add time
retrospective think-aloud is an alternative
should participants know they are being timed
4.3 ERRORS
4.3.1 When to Measure Errors
useful in determining cause of failure
useful when
error will result in big loss
result in significant cost
when error results in task failure
4.3.2 What Constitutes an Error?
sometimes a lack of action constitutes an error
4.3.3 Collecting and Measuring Errors
you need to know what the correct action should be
single or multiple possible errors
capture the number of errors that occurred
or 1/0 per defined error
4.3.4 Analyzing and Presenting Errors
could present the average error rate (with a confidence interval)
capture frequency
average number of errors per user, to reduce bias
4.3.5 Issues to Consider When Using Error Metrics
avoid double counting
code errors by type
repeated errors
an error is an action that leads to failure
4.1 TASK SUCCESS
intro
needs well defined tasks
start and end state
well defined success criteria
in the lab, users can verbalise how they completed the task
or some other structured way to test user afterwards
proxy measures
4.1.2 Levels of Success
how close to success user came
user experience of success
optimal success, or took a longer route to success
3 levels, success, partial, failure
can add assistance to break levels down even further
can use 4 point scoring system
can aggregate into binary
remember it is ordinal
represent with stacked bar chart
4.1.3 Issues in Measuring Success
dealing with unexpected situation
when to stop a participant
4.1.1 Binary Success
success or not
bar chart to represent success rates
larger sample sizes give more accurate estimates and tighter confidence intervals (sketch below)
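A sketch of a confidence interval around a binary success rate, using the simple normal approximation (other interval methods, such as adjusted Wald, are often preferred for small samples); the counts are hypothetical.

```python
# Sketch of a 95% CI around a binary success rate (normal approximation).
# The success counts are hypothetical.
import math

successes, n = 8, 12                     # 8 of 12 participants completed the task
p = successes / n
margin = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"success rate {p:.0%}, 95% CI roughly [{p - margin:.0%}, {p + margin:.0%}]")
```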
4.5 LEARNABILITY
4.5.1 Collecting and Measuring Learnability Data
multiple trials over different periods
when learning occurs, efficiency improves
decide on time between trials
4.5.2 Analyzing and Presenting Learnability Data
show specific metric per trial
can aggregate tasks or show tasks individually
difference between highest and lowest point is amount of learning required
4.5.3 Issues to Consider When Measuring Learnability
what is a trial
how many trials
ideally 3-4
time and effort required to become proficient