Smart Heating :explode:
Literature
Trust
Bussone:
Role of Explanations on Trust in Diagnostic Algorithms
With explanations, people tended to rely more on the system. With less explanation, people didn’t trust the system and worked the problem out themselves. Overall, people wanted (1) an explanation of the system’s level of confidence and (2) an explanation of how the system reached its conclusion.
USEFUL: People's desire for (1) levels of confidence, (2) explanations :star:
Diagnosis Algorithm Study
2015 Health conf
Research Qs
How does explanation completeness impact clinical decision-making?
How does explanation completeness impact a clinician’s confidence in their ability to diagnose patients?
How does explanation completeness affect a clinical user’s trust in a CDSS?
What is the impact of explanation completeness on a clinical user’s workload?
What information do clinicians desire from a CDSS explanation when making a diagnostic decision?
Stumpf:
Explanations Harmful
The more explanation that the doctors were given, the more they trusted the system, even when it was wrong. Gasp! This is harmful in clinical decision support systems, because they are not always right.
Doesn't really apply to smart heating, because these systems are usually right, and better off when not adjusted / messed with :no_entry:
Diagnosis Algorithm Study
???
Kulesza:
Explanatory Debugging to Help Machine Learning
Presents a theory called Explanatory Debugging and builds a prototype with it. People were able to understand the system better and correct mistakes made by the system 2x as efficiently.
Good principles. Smart heating owners really shouldn’t be debugging the heaters too much. But the principles listed here would be useful for any corrections that the user does make. :star:
Email sorting machine learning study
2015 Intelligent UI conf
Holliday:
User Trust in Intelligent Systems
User trust in smart systems fluctuates over time. Explanations only increased trust temporarily; after a while, people felt the same as before.
If people aren’t told how a sys works, they don’t trust it. If they are told how it works, they then determine their trust based on the activity of the system.
USEFUL: both explanations + correct behaviour -> TRUST :star:
Case study: qualitative data coding machine
2016 Intelligent UI conf
Lee:
Trust in Automation
Detailed discussion of trust in automated systems, particularly when the automation can make significant mistakes (aviation, machinery, factories, etc.)
2004 Human Factors
Not useful yet. Too detailed for now, but may be useful when actually trying to measure trust :no_entry:
Kizilcec:
Peer grading
People submitted their own grades, then received peer grades, which were adjusted for grader bias (a rough sketch of one possible adjustment below). People who had received lower grades than expected varied in their trust in the system depending on how transparent its algorithms were. No explanation? Distrust. Medium explanation? Much improved trust. Full explanation? Less trust than medium.
CHI 2016
Online study of peer grading in MOOC
It's only when a system is performing unexpectedly that explanations are crucial, and they generally do help
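The notes say the peer grades were “adjusted for bias” but not how. A minimal, hypothetical sketch of one common approach (shifting each grader’s scores by their mean deviation from the per-submission consensus); the function name and data shapes are illustrative, not from the paper:

```python
# Hypothetical sketch only: the paper's actual bias-adjustment
# procedure is not described in these notes.
from statistics import mean

def adjust_for_bias(peer_grades):
    """peer_grades: {submission: {grader: grade}} -> adjusted grades."""
    # Consensus grade per submission = mean of its peer grades.
    consensus = {s: mean(g.values()) for s, g in peer_grades.items()}

    # A grader's bias = how far they grade above/below consensus on average.
    deviations = {}
    for s, grades in peer_grades.items():
        for grader, grade in grades.items():
            deviations.setdefault(grader, []).append(grade - consensus[s])
    bias = {grader: mean(ds) for grader, ds in deviations.items()}

    # Adjusted grade = raw grade minus that grader's bias.
    return {s: {grader: grade - bias[grader] for grader, grade in grades.items()}
            for s, grades in peer_grades.items()}

grades = {"essay1": {"a": 70, "b": 90}, "essay2": {"a": 60, "b": 80}}
print(adjust_for_bias(grades))  # grader a shifted up, grader b shifted down
```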
Heating
Fischer:
Working with IoT data in Home
This experiment developed a kit to be used by “energy advisors”, ppl making heating recommendations for low-income households. They found that the value came not from the data itself, but from the conversations it sparked between homeowners and advisors.
Interesting to see attitudes toward managing heating, particularly from low-income families (diff. from tech enthusiasts). General distrust of the value of set heating schedules. :star:
Temp sensor kit given to energy advisors
CHI 2016
Wall:
User Testing of Thermostats
Tasks were a temporary temperature increase and setting the weekly heating schedule. No Wizard-of-Oz; just tested people on simple UI tasks
Too simple to emulate. Not smart heating (no automated systems) and the tasks were basic UI tests. :no_entry:
User testing on 5 heating UIs
Gov report, 2013
Yang:
Learning from a Learning Thermostat
Found complaints that the system was purposefully opaque or simply dumb. Resulted in several principles for revising the Nest: exception flagging, succinct clarity, and limited fiddle needs
Super helpful. Incorporate these principles into another UI? :star:
Diary / interview study of Nest owners.
UbiComp’13
Rodden:
At Home with Agents
Focus groups exploring attitudes toward smart meters
Interesting for people's comments. Overall distrust of energy companies, fear of loss of control (“you'll tell me when to wash my clothes? I think not!”), and a lack of interest in engaging with the details.
CHI 2013
Costanza:
Doing the Laundry with Agents
CHI 2014
In the wild study of laundry scheduling
People were asked to schedule their laundry loads based on flexible pricing, anticipating a future in which renewable energy causes electricity supply to fluctuate greatly.
People displayed willingness to work with the system in order to save money, but ppl with more flexible lives also showed frustration.
Didn't explore trust, but did look at energy engagement :star:
Yang:
Living with the Nest
Diary / interview study of ppl living with Nest
Web / mobile app UIs changed the interactions ppl have with these devices. Energy savings were limited because of (1) convenient control and (2) technology limitations. Found that the most effective use of the technology came when ppl were thoughtfully engaged. Calls for continued improvement in intelligibility and user input.
UbiComp’12
Intelligibility
Lim:
Explanations Intelligibility
Lab study using an abstract exercise machine-learning model
Compared diff. explanation types: why, why not, vs. none (toy sketch of the why / why-not distinction below)
Explanations led to increased intelligibility of the system.
CHI 2009
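A toy, hypothetical illustration of the why vs. why-not distinction for a simple rule-based classifier; the rules, feature names, and thresholds are invented here, not from Lim's study:

```python
# Toy sketch only: Lim's actual system and rules are not in these notes.
RULES = [  # (feature, threshold) -- all must hold to predict "exercising"
    ("heart_rate", 120),
    ("movement", 0.5),
]

def classify(reading):
    return all(reading[f] >= t for f, t in RULES)

def why(reading):
    # "Why" explanation: the conditions that fired for this prediction.
    return [f"{f} = {reading[f]} >= {t}" for f, t in RULES if reading[f] >= t]

def why_not(reading):
    # "Why not" explanation: the conditions that blocked the prediction.
    return [f"{f} = {reading[f]} < {t}" for f, t in RULES if reading[f] < t]

reading = {"heart_rate": 130, "movement": 0.2}
print(classify(reading))  # False
print(why(reading))       # ['heart_rate = 130 >= 120']
print(why_not(reading))   # ['movement = 0.2 < 0.5']
```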
Alan:
Real-time Pricing
In the wild, heating prototype study of real-time pricing
CHI 2016
Ppl overall liked the idea of the system responding to pricing for them, and felt in control. Liked remote control of the heating. Mixture of mental models on the learning: was the thermostat learning occupancy / temperature schedule, or learning price / comfort tolerance?
Bellotti:
Intelligibility of Context-Aware Systems
Discussion on context-aware systems (sensors + machine learning, 'smart')
HCI, 2001
There are aspects of context that machines have no way of sensing, so smart systems can likely never be fully autonomous. The trick is deferring to users as unobtrusively as possible.
Proposes a design framework consisting of principles such as providing control and informing the user of the system's understanding and possible actions.