Point and click
Fitts' law
- predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target
- used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device.
MT = a + b · log2(D/W + 1)
- MT = average time to complete the movement
- a and b = constants that depend on the choice of input device and are usually determined empirically by regression analysis
- D = distance from the starting point to the center of the target
- W = width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within ± W⁄2 of the target's center.
- Since shorter movement times are desirable for a given task, the value of the b parameter can be used as a metric when comparing computer pointing devices against one another.
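The equation above can be sketched directly in code; the a and b values here are illustrative placeholders, not fitted constants:

```python
import math

def fitts_mt(d, w, a=0.1, b=0.15):
    """Predicted movement time (s) via the Shannon formulation of
    Fitts' law: MT = a + b * log2(D/W + 1).

    a and b are illustrative; in practice they are fitted per device
    and user by linear regression of observed MT against ID."""
    return a + b * math.log2(d / w + 1)
```

Doubling the distance or halving the width raises MT, but only logarithmically.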
Index of difficulty:
- ID = log2 (D/W + 1)
- determines movement time and will be equal for targets with the same ID
- it is scale invariant (all relative) and so works across a range of devices and individuals
- D/W + 1 is a linear function, log2(D/W + 1) is logarithmic --> the curve reflects acceleration in human movement (when a target is twice as far it doesn't take twice as long to reach, because the movement accelerates)
- MT is a linear function of the ID = to minimise MT, the distance to travel should be short and the target should be big
- Regardless of specific width and distance, movement times will always be positively skewed, because there's a physical limit to how fast you can be but no limit to how slow you can be
- there is however variance in movement time and in end point location
- and despite its predictive power, there is variability in individuals' movements (e.g. older people tend to be slower)
- can apply under a variety of conditions, with many different limbs (hands, feet, the lower lip, eye gaze), physical environments (including underwater), and user populations (young, old, special educational needs, and drugged p's)
- Many experiments testing Fitts' law apply the model to a dataset in which either distance or width, but not both, is varied --> the model's predictive power deteriorates when both are varied over a significant range (Evan, 1996)
- Because the ID term depends only on the ratio of distance to width, the model implies that a distance-width combination can be rescaled arbitrarily without affecting movement time, which cannot hold in practice
- Despite its flaws, this form of the model does possess remarkable predictive power across a range of computer interface modalities and motor tasks, and has provided many insights into user interface design principles
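The scale invariance noted above is easy to check numerically: scaling D and W by the same factor leaves ID, and hence the predicted MT, unchanged.

```python
import math

def index_of_difficulty(d, w):
    """Shannon-form index of difficulty, in bits: ID = log2(D/W + 1)."""
    return math.log2(d / w + 1)

# Doubling both distance and width leaves the ratio D/W, and so the ID,
# unchanged -> Fitts' law predicts identical movement times for both.
small = index_of_difficulty(100, 10)
large = index_of_difficulty(200, 20)
```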
Errors
Wobbrock et al.'s (2008) error model:
- Fitts' law builds in a nominal 4% error rate, but there is no equivalent “error law” that predicts the probability of a user hitting or missing a target from Fitts' law parameters --> Wobbrock et al.'s model extends Fitts' law to predict exactly that probability
- the faster you are the more errors you are likely to make --> see equation
- expt to test: p's did a conventional Fitts' reciprocal pointing task to elicit personal models. Then a visual and auditory metronome was used to manipulate p's movement times --> instructed to click when the lines around the target joined up (a clicking noise sounded at the same moment). Told to be as accurate as possible.
- used each p's a and b coefficients from the first phase to determine MT, then set the nominal metronome time, MTm: when MT% < 1.00, participants moved faster than Fitts' law predicts; when MT% > 1.00, they moved slower than Fitts' law predicts
- Regression analysis: the error model for pointing provides good error-rate predictions
- Confirms the logarithmic speed-accuracy tradeoff
- Future work needed to tease out sensitivities of error model to its parameters (a and b in regression model). Should test model 1) in diff experimental situations, 2) for discrete movements and 3) with a stylus, where arrival at a target and selection of that target are bound.
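The notes above refer to Wobbrock et al.'s equation without stating it. A sketch can be reconstructed from the standard effective-width assumptions (endpoints normally distributed around the target centre, We = 4.133σ, matching the nominal 4% error rate); the published form and constants should be checked against the paper itself:

```python
import math

def predicted_error_rate(d, w, mt_actual, a=0.1, b=0.15):
    """Sketch of a Fitts-based pointing error model in the spirit of
    Wobbrock et al. (2008), reconstructed from effective-width
    assumptions rather than quoted from the paper.

    Inverting Fitts' law at the actual movement time gives the
    effective width the user is operating at:
        We = D / (2 ** ((MT - a) / b) - 1)
    With endpoints ~ N(0, sigma) and We = 4.133 * sigma, the miss
    probability is the normal tail mass outside +/- W/2."""
    we = d / (2 ** ((mt_actual - a) / b) - 1)
    sigma = we / 4.133
    p_hit = math.erf((w / 2) / (sigma * math.sqrt(2)))
    return 1.0 - p_hit
```

At the movement time Fitts' law itself predicts, this returns roughly the nominal 4% rate; forcing a faster MT (as the metronome did) drives the predicted error rate up, matching the logarithmic speed-accuracy tradeoff.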
Why predict error?
1 - Error prediction = as useful as time prediction given the diametric relationship of these two entities (one increases -> other decreases). Thus, the theory requires a predictive model for errors.
2 - if a Fitts-based error model is shown to hold, it supports the law itself (if not, the theory needs further research)
3 - allows us to estimate text entry error rates given different tapping speeds on a keyboard, or to ensure that buttons are big enough in a safety-critical system where speed is crucial.
Fitts' law experiments traditionally studied target pointing tasks without considering the cost of error when a user misses a target --> in real life, when a user makes an error/misses a target there is a cost associated with it that the user has to spend time recovering from, e.g. selecting the wrong item in an application menu means the user has to undo the effect of the wrong command and then navigate back to the correct menu item
- as the cost of error increases, the users will change their behaviour in order to reduce the error rate.
- Speed accuracy trade-off = user will move slower to reduce error rate, which increases task completion time —> plenty of research on speed-accuracy tradeoff but little on how cost of error impacts user behaviour/performance in target-directed pointing tasks
- Banovic et al. (2013) cost-based completion time model: based on the assumption that users will change their performance characteristics in favour of strategies that maximise their expected utility --> given a time-based cost of making an error, users will change their speed and accuracy in order to minimise their task completion time
- Assumes that 1) users should be able to make a rough estimate of optimal utility in target-directed pointing tasks due to their simplicity, and 2) even though the probability of incurring an error is not fixed, users should be able to learn to optimise their accuracy, allowing them to use this info to estimate expected utility
- A = the user attempts to acquire a target of width W at a distance D; Fitts' law predicts MT, but if the user misses the target on the first attempt they incur some cost, C (includes the time to recover from the error and attempt to select the target again)
- Define completion time (CT) as the time until the target is successfully selected --> MT is the time until the first selection attempt. If the user is successful on their first attempt, CT = MT (in B) --> if the user makes an error, CT is the sum of MT and C (in C). SEE DIAGRAM
- Task to test: p’s had to successfully click inside a goal target, if they missed the target they had to continue the trial from where they missed until they correctly selected the target. Different conditions gave different time costs e.g. 0 cost condition = could continue immediately, higher cost conditions = longer to wait before could continue
- users changed the characteristics of their target-directed pointing given different time-based penalties -> P’s change their performance according to the expected completion time utility function (based on movement time and its associated error rate (Wobbrock et al’s P(E)) and the time based error cost)
- Optimal performance computed using the model predicts task completion times well = users do tend towards optimal performance (performing the speed-accuracy tradeoff which minimises their expected completion time)
= the model can predict task completion times for tasks that involve the cost of making an error
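Under the notes' definition (CT = MT on a hit, MT + C on a miss), the expected completion time is E[CT] = MT + P(error) * C, and the model says users settle near the MT that minimises it. A toy grid search using a Fitts-based effective-width error estimate with illustrative constants (not Banovic et al.'s actual fitting procedure):

```python
import math

def expected_completion_time(mt, d, w, cost, a=0.1, b=0.15):
    """E[CT] = MT + P(error) * C: a miss adds the recovery cost C.
    P(error) uses an effective-width estimate obtained by inverting
    Fitts' law at the chosen movement time (illustrative sketch, not
    the authors' fitted model)."""
    we = d / (2 ** ((mt - a) / b) - 1)
    sigma = we / 4.133
    p_err = 1.0 - math.erf((w / 2) / (sigma * math.sqrt(2)))
    return mt + p_err * cost

def optimal_mt(d, w, cost, a=0.1, b=0.15):
    """Grid-search the speed-accuracy tradeoff: a higher error cost
    pushes the utility-maximising movement time toward slower, more
    accurate movements."""
    candidates = [a + 0.001 * i for i in range(1, 3001)]
    return min(candidates,
               key=lambda mt: expected_completion_time(mt, d, w, cost, a, b))
```

Raising the cost parameter shifts the optimum toward slower movement times, which is the behaviour change Banovic et al. observed.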
What about errors in temporal pointing?
= a target is about to appear within a limited time window for selection. Unlike in spatial pointing, there is no movement to control in the temporal domain --> user can only determine when to launch the response (e.g. like in flappy bird game)
- Lee & Oulasvirta (2016) devised a novel model to predict error rates in temporal pointing --> assumes users have an implicit point of aim, but their ability to elicit the input event at that time is hampered by variability in:
1) an internal time-keeping process --> an internal estimate, uncorrectable once the response is launched
2) a response-execution stage -->variability due to users’ estimate of when the input is registered, variability in finger travel distance (e.g., if the finger hovers over a virtual button), and variability in muscle activation during motion.
3) input processing in the computer --> sensor data (e.g. touch down) registered and processed to determine the input event
- derived a mathematical model which showed high fit for user performance with two task types, including a rapidly paced game (Flappy Bird) --> can explain previous findings that touchscreens are much worse for temporal pointing than physical input devices (the timing of the sensor event is uncertain and users can't precisely control how high they hold their finger) & could be used analytically to tune the difficulty of temporal pointing in interactive tasks, e.g. to design game levels
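A sketch of the three-source idea (not Lee & Oulasvirta's exact formulation): treat the registered input time as Gaussian around the user's aim point, with independent variance contributions from the clock, motor, and input-processing stages; the selection succeeds if the input lands inside the target window.

```python
import math

def temporal_hit_probability(window, sigma_clock, sigma_motor,
                             sigma_input, aim_offset=0.0):
    """P(success) for temporal pointing: the input event time is
    modelled as N(aim_offset, sigma_total) relative to the centre of
    a target window of the given width; the variances of the three
    independent stages add."""
    sigma = math.sqrt(sigma_clock**2 + sigma_motor**2 + sigma_input**2)

    def cdf(x):  # normal CDF with standard deviation sigma
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2))))

    lo, hi = -window / 2, window / 2
    return cdf(hi - aim_offset) - cdf(lo - aim_offset)
```

Inflating sigma_input (e.g. uncertain touchscreen sensing) lowers the hit probability, consistent with the finding that touchscreens fare worse than physical input devices.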
Accessibility
Users with motor impairments often find it difficult to use common software applications. Many argue that these needs are met by specialised assistive technologies, but these have 2 main limitations:
- 1) Often abandoned due to cost, complexity, limited availability and constant maintenance —> estimated that only 60% of users who need assistive technologies use them (Fichten et al. 2000)
- 2) Designed assuming that the user interface is unchangeable, so users with motor impairments have to adapt themselves to these interfaces using specialised devices --> given the variability in capabilities of those with motor impairments, it would be better to adapt user interfaces to the abilities of individual users, but manual design of this is impractical = interfaces need to adapt themselves automatically
Gajos et al. (2008) compare 2 systems that automatically generate user interfaces:
- 1) SUPPLE: adapts to users’ capabilities indirectly by first using the ARNAULD preference elicitation engine to model a user’s preferences on how they like the interface to be created
- inputs: device-specific constraints e.g. screen size, a cost function (to guide search toward the lowest estimated cost), a typical usage trace, and a functional specification of the interface (the types of info that need to be communicated between the application and the user)
- the ARNAULD system (Gajos and Weld, 2005) captures the user's preferences, which allows SUPPLE to generate appropriate interfaces
- 2) SUPPLE++: models motor abilities directly from a set of one-time motor performance tests (relies on a built in Ability Modeler)
- builds an explicit model of actual motor capabilities through testing
- uses the ability model as a cost function to generate the appropriate interface = lets the user accomplish tasks in the least amount of time
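A toy stand-in for that idea (the real SUPPLE++ searches a full space of interface renderings against a learned ability model; the layouts, constants, and scoring below are hypothetical):

```python
import math

def personal_mt(d, w, a, b):
    """Fitts-style predicted pointing time using a user's own fitted a, b."""
    return a + b * math.log2(d / w + 1)

def best_layout(layouts, a, b):
    """Pick the candidate layout minimising total predicted pointing time.
    Each layout is a list of (distance, width) pairs for its targets."""
    return min(layouts,
               key=lambda layout: sum(personal_mt(d, w, a, b)
                                      for d, w in layout))

# Hypothetical candidates: tightly packed small targets vs. larger
# targets placed further away. Larger targets lower the summed index of
# difficulty, and the time saved grows with the user's b coefficient.
compact = [(80, 10), (90, 10), (100, 10)]
spacious = [(120, 40), (150, 40), (180, 40)]
```

For a user with a steep b (slow, effortful pointing), the spacious layout's lower total difficulty translates into a larger absolute time saving, which is the intuition behind generating different interfaces for different ability models.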
Tested on 11 motor-impaired and 6 able-bodied control p's:
- P’s with motor impairments sig. faster, made fewer errors and preferred automatically-generated personalised interfaces generated by SUPPLE++ over the baselines
- P’s were 26.4% faster with SUPPLE++ than baselines and found least tiring and most efficient
- SUPPLE++ narrowed gap between motor-impaired and able-bodied users by 62%, and individual gains ranged from 32% to 103%
- = current performance differences between motor-impaired and able-bodied users are partly due to user interface designs that are poorly matched to motor-impaired users' abilities
- Speed and accuracy of motor impaired users can be improved, even with conventional input devices (e.g. mice, trackballs) if ability-based interfaces are provided
- Even able-bodied p's were faster and made fewer errors with ability-based interfaces = these must be sig. easier to use than the alternatives, but p's found them uglier and so rated them no more preferable than the baselines