PhD Proposal Brainstorming - Coggle Diagram
PhD Proposal Brainstorming
Risk assessment V1:
RQ: What risk assessment techniques can be effective in industry?
Method: co-develop risk assessment techniques, assess risk, implement certain interventions, re-assess risk in 6 months or so.
Expected results/contribution - a new risk assessment method that is empirically tested
Feedback V1
Scoping can be challenging
Uniquely qualified
Would need to work with someone from operational research
Can be spun as engineering thesis
Trust, robotics and AI V1
RQ: How does trust in a decision maker or a collaborator change as we shift along the human-machine scale for different tasks?
There are many dimensions we could explore for this question: non-embodied vs. embodied AI, or different human populations (e.g., child, adult, elderly).
Feedback V1:
A lot of actors
Whether a robot is replacing a human or assisting one has been studied -- the field has settled on assisting, so we have somewhat closed the lid on this question. But it could be looked at in more granular ways.
Collaborator vs. decision maker, OR active engagement vs. being passively subjected to an action.
Leyla Takayama has done research on robots in different roles - look into this paper
We have trust at the interaction level and trust at the societal level (a.k.a. public trust) -- what is in between?
Vulnerability with AIS V1
RQ: How vulnerable can people be beside an AIS? How does interaction design impact openness to vulnerability? Is it okay to be vulnerable around AIS?
Vulnerable robot
Few studies examine the effects of a vulnerable robot on humans
Gaze, touch
Vulnerable human
How vulnerable can people be beside AIS?
What vulnerabilities are people experiencing when in the presence of AIS? This is more related to the risk discussion.
Feedback on V1
Trendy topic
Different dimension of trust
Novel framing of the work
Historic example of people open to being vulnerable around automation
ELIZA - early chatbot that simulated a therapist
HRI policy workshop in 2015
Could relate this to risk
Empathy-sympathy spectrum -- connection to vulnerability and trust
Moral psychology perspective on AI ethics V1
RQ: What moral psychological frameworks can we use to understand the ethics of AIS-human interaction? How do these frameworks relate to the predominant ways in which we talk about AIS ethics?
Look at the framework that Jimin came up with - can we use this framing and align it with ML development process?
Feedback V1:
Needs more scoping