Smart Heating
Literature
Trust
Heating
Bussone: Role of Explanations on Trust in Dr Algs
Stumpf: Explanations Harmful
With explanations, people tended to rely more on the system. With less explanation, people didn't trust the system and worked through the problem themselves. Overall, people wanted (1) an explanation of the system's level of confidence and (2) an explanation of how the system reached that conclusion.
The more explanation that the doctors were given, the more they trusted the system, even when it was wrong. Gasp! This is harmful in clinical decision support systems, because they are not always right.
Doesn't really apply to smart heating, because these systems are usually right and are better off when not adjusted / messed with.
USEFUL: People's desire for (1) levels of confidence and (2) explanations of how the system reached its conclusion.
Fischer: Working with IoT data in Home
This study developed a sensor kit to be used by "energy advisors", people making heating recommendations for low-income households. They found that the value came not from the data itself but from the conversations it sparked between homeowners and advisors.
Interesting to see attitudes toward managing heating, particularly among low-income families (different from tech enthusiasts). General distrust of the value of set heating schedules.
Wall: User Testing of Thermostats
Tasks were a temporary temperature increase and setting the weekly heating schedule. No Wizard-of-Oz; just tested people on simple UI tasks.
Too simple to emulate. Not smart heating (no automated systems) and the tasks were basic UI tests.
Kulesza: Explanatory Debugging to Help Machine Learning
Presents a theory called Explanatory Debugging and builds a prototype with it. People were able to understand the system better and correct mistakes made by the system 2x as efficiently.
Good principles. Smart heating owners really shouldn't be debugging the heaters too much, but the principles listed here would be useful for any corrections that the user does make (sketch below).
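A minimal sketch of the Explanatory Debugging loop, assuming a toy linear text classifier; the class, words, and weights below are invented for illustration and are not taken from Kulesza's EluciDebug prototype. The point is only the cycle: the system explains a prediction as per-word weights, and the user's correction feeds straight back into the model.

```python
from collections import defaultdict

# Toy illustration of Explanatory Debugging: explain a prediction as per-word
# weights, let the user fix a weight, and re-predict. (Hypothetical model,
# not the paper's actual implementation.)
class ToyTextClassifier:
    def __init__(self):
        # word -> weight; positive pushes toward "work", negative toward "personal"
        self.weights = defaultdict(float, {"meeting": 1.2, "deadline": 1.0, "pizza": -0.8})

    def predict(self, words):
        score = sum(self.weights[w] for w in words)
        return ("work" if score >= 0 else "personal"), score

    def explain(self, words):
        # The explanation: how much each word contributed to the decision.
        return {w: self.weights[w] for w in words}

    def correct(self, word, new_weight):
        # The user's feedback goes directly back into the model.
        self.weights[word] = new_weight


clf = ToyTextClassifier()
email = ["pizza", "deadline"]
print(clf.predict(email))   # ('work', 0.2) -- user disagrees
print(clf.explain(email))   # user sees 'pizza' barely counts toward 'personal'
clf.correct("pizza", -2.0)  # user strengthens the feature the explanation exposed
print(clf.predict(email))   # ('personal', -1.0)
```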
Holliday: User Trust in Intelligent Systems
User trust in smart systems fluctuates over time. Explanations apparently only increased trust temporarily, with trust settling back to its earlier level after a while.
If people aren't told how a system works, they don't trust it. If they are told how it works, they then base their trust on how the system actually behaves.
USEFUL: both explanations + correct behaviour -> TRUST
Yang: Learning from a Learning Thermostat
Found complaints that the system was purposefully opaque or considered dumb. Resulted in several principles for revising the Nest: exception flagging, succinct clarity, and a limited need to fiddle.
Super helpful. Incorporate these principles into another UI? (exception-flagging sketch below)
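A hypothetical illustration of the "exception flagging" principle, assuming a thermostat that has learned a weekly occupancy schedule; the schedule and sensor data below are made up. Rather than silently adapting, it surfaces the days where what it sensed contradicts what it learned and asks the user.

```python
# Hypothetical "exception flagging": flag contradictions between the learned
# schedule and sensed occupancy instead of changing the schedule silently.
LEARNED_SCHEDULE = {"Mon": "home_evening", "Tue": "home_evening", "Wed": "away"}
SENSED = {"Mon": "home_evening", "Tue": "away", "Wed": "away"}

def exceptions(learned, sensed):
    """Return the days where sensed occupancy contradicts the learned schedule."""
    return [day for day in learned if sensed.get(day) != learned[day]]

for day in exceptions(LEARNED_SCHEDULE, SENSED):
    # Succinct, user-facing flag rather than an opaque automatic change.
    print(f"{day}: sensed {SENSED[day]} but the schedule expected "
          f"{LEARNED_SCHEDULE[day]} - keep the schedule or update it?")
```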
Study types and venues for the Trust entries above:
Bussone: Diagnosis Algorithm Study (2015 Health conf)
Stumpf: Diagnosis Algorithm Study (venue ???)
Fischer: Temp sensor kit given to energy advisors (CHI 2016)
Wall: User testing on 5 heating UIs (Gov report, 2013)
Kulesza: Email sorting machine learning study (2015 Intelligent UI conf)
Holliday: Case study: qualitative data coding machine (2016 Intelligent UI conf)
Yang: Diary / interview study of Nest owners (UbiComp'13)
Lee: Trust in Automation
Detailed discussion of trust in automated systems, particularly when the automation can make significant mistakes (aviation, machinery, factories, etc.).
2004 Human Factors
Not useful yet. Too detailed for now, but may be useful when actually trying to measure trust.
Rodden: At Home with Agents
Focus groups exploring attitudes toward smart meters
Interesting for people's comments. Overall distrust of energy companies, fear of loss of control ("you'll tell me when to wash my clothes? I think not!"), and a lack of interest in engaging with the details.
CHI 2013
Costanza: Doing the Laundry with Agents
CHI 2014
In-the-wild study of laundry scheduling
People were asked to schedule their laundry loads based on flexible pricing, anticipating a future in which renewable energy causes electricity supply to fluctuate greatly.
People showed willingness to work with the system in order to save money, but also frustration among people with more flexible lives.
Didn't explore trust, but did look at energy engagement (scheduling sketch below).
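A toy version of the scheduling task the participants faced, assuming an hourly price forecast and the hours the person is around to load/unload; all numbers are invented. It picks the cheapest start hour for a two-hour wash that fits the availability window.

```python
# Toy flexible-pricing scheduler: cheapest start hour for a 2-hour wash,
# constrained to the hours the person is actually available. Invented data.
prices = {7: 0.21, 8: 0.19, 9: 0.14, 10: 0.12, 11: 0.16, 18: 0.25, 19: 0.23}
available_hours = {7, 8, 9, 10, 11}
duration = 2

def cheapest_start(prices, available, duration):
    candidates = []
    for start in available:
        hours = range(start, start + duration)
        # Every hour of the run must be both priced and within availability.
        if all(h in available and h in prices for h in hours):
            candidates.append((sum(prices[h] for h in hours), start))
    return min(candidates)[1] if candidates else None

print(cheapest_start(prices, available_hours, duration))  # 9 (0.14 + 0.12)
```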
Research Qs
How does explanation completeness impact clinical decision-making?
How does explanation completeness impact a clinician's confidence in their ability to diagnose patients?
How does explanation completeness affect a clinical user's trust in a CDSS?
What is the impact of explanation completeness on a clinical user's workload?
What information do clinicians desire from a CDSS explanation when making a diagnostic decision?
Intelligibility
Lim: Explanations Intelligibility
Alan: real-time pricing
Yang: Living with the Nest
Kizilcec: Peer grading
Yang (Living with the Nest): diary / interview study of ppl living with the Nest
Web / mobile app UIs changed the interactions people have with these devices. Energy savings were limited because of (1) convenient control and (2) technology limitations. Found that the most effective use of the technology came when people were thoughtfully engaged. Call for continued improvement in intelligibility and user input.
UbiComp'12
Bellotti: Intelligibility of context-aware
Discussion on context-aware systems (sensors + machine learning, 'smart')
HCI, 2001
There are aspects of context that machines have no way of sensing, so smart systems can likely never be fully autonomous. The trick is deferring to users as unobtrusively as possible.
Proposes a design framework consisting of principles such as providing control and informing the user of the system's understanding and possible actions.
Lim: lab study using an abstract exercise machine learning model
Compared different explanation types: why, why not, and none
Explanations led to increased intelligibility of the system (toy why / why-not example below)
CHI 2009
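A hypothetical transfer of the why / why-not explanation types to a simple rule-based heating decision; Lim's study used an abstract exercise app, not heating, and the rule and thresholds here are invented.

```python
# Toy "why" / "why not" explanations for a rule-based heating decision.
# (Invented rule; illustrates the explanation types, not Lim's system.)
def decide_heating(occupied, indoor_temp, setpoint=20.0):
    heat_on = occupied and indoor_temp < setpoint
    if heat_on:
        why = (f"Heating is ON because the house is occupied and the indoor "
               f"temperature ({indoor_temp}C) is below the setpoint ({setpoint}C).")
        return heat_on, why
    reasons = []
    if not occupied:
        reasons.append("the house appears unoccupied")
    if indoor_temp >= setpoint:
        reasons.append(f"the indoor temperature ({indoor_temp}C) already meets "
                       f"the setpoint ({setpoint}C)")
    why_not = "Heating is OFF because " + " and ".join(reasons) + "."
    return heat_on, why_not

print(decide_heating(occupied=False, indoor_temp=18.5)[1])
```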
Alan: in-the-wild heating prototype study of real-time pricing
CHI 2016
Ppl overall liked the idea of the system responding to pricing for them, and felt in control. Liked remote control of the heating. Mixture of mental models on the learning: was the thermostat learning occupancy / temperature schedule, or learning price / comfort tolerance?
Kizilcec: People submitted their own grades, then received peer grades, which were adjusted for bias (sketch below). People who had received lower grades than they expected varied in their trust in the system depending on how transparent its algorithm was. No explanation? Distrust. Medium explanation? Much improved trust. Full explanation? Less trust than the medium.
CHI 2016
Online study of peer grading in a MOOC
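One plausible form of the bias adjustment mentioned above, not necessarily the algorithm the study actually used: estimate each grader's bias as their average deviation from the median grade on the essays they marked, then subtract it from the grades they give. Grader names and numbers are invented.

```python
from statistics import median, mean

# Plausible bias adjustment (assumption, not the study's stated method):
# a grader's bias = their average deviation from the median grade on the
# essays they marked; subtract that bias from every grade they give.
grades = {  # grader -> {essay: raw peer grade}, invented numbers
    "g1": {"essay_a": 9, "essay_b": 7},
    "g2": {"essay_a": 6, "essay_b": 4},   # g2 grades harshly
    "g3": {"essay_a": 7, "essay_b": 6},
}

essays = {e for marks in grades.values() for e in marks}
consensus = {e: median(marks[e] for marks in grades.values()) for e in essays}
bias = {grader: mean(marks[e] - consensus[e] for e in marks)
        for grader, marks in grades.items()}
adjusted = {grader: {e: raw - bias[grader] for e, raw in marks.items()}
            for grader, marks in grades.items()}

print(bias)      # g2's estimated bias is -1.5 (harsh)
print(adjusted)  # g2's grades are raised by 1.5 after adjustment
```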
It's only when a system is performing unexpectedly that explanations are crucial, and they generally do help