The problem with AI is us
What makes us human?
Turing said that if you can't tell the difference between the computer and the person, it must be intelligent
Contesting view says understanding is important
Ethical decision making
Is there such a thing as a "moral machine"?
What if computers learn to be creative in the same way we are?
What does this mean for art?
We just copy each other anyway - what's the difference between us copying each other and robots copying us?
The identity of the individual vs. the identity of the whole
The gap is closing: at some point we might not be able to tell the difference between tech and us
The most efficient way to attack people is to attack their humanity - their weaknesses are often what make them human (and in a psychological respect are what make us strong when we use them to connect with others) - but we are physically weak. e.g. If you want to kill lots of people, attack them with disease. Is this possible through technology?
From a psychological standpoint: many would agree, our weakness. This is how we connect with others and find strength
Didn't I say it before to lashentax? If he was truly intelligent, he would be more humble... it's often the opposite of what you think.
What justifies human imperfection and not AI? We let ourselves make errors and it's ok (e.g. juries aren't always right) but if a robot has a lower error rate than us it's still a problem
Uncertainty/Predicting the future/Lack of definition
What will the future of AI look like?
Some people think it will spell the end of the human race
Lots of excitement about the practical applications of AI
AI can produce unexpected results
Lack of definition - could be literal and metaphorical. There is a lack of definition about the future of AI and what it means to be intelligent and human - but AI predictions are fuzzy too. They're based on past experience, and it's impossible to know for sure. There is always error
What do we want AI to be?
Many practical applications
Many see fully sentient robots as the pinnacle of AI - but do we want to create sentient robots? Is it even possible?
If we talk about the ultimate AI as being indistinguishable from humans... none of us are ever in agreement with each other, so why would we want that? We can never agree with what we want as a whole species - so we will never agree about what AI should be... omg, we will never agree on what we want AI to be...fuuuuck. this isn't a question that can be answered hahahaha
So this isn't the problem... we will never agree... the problem will be whether people will create robots that we disagree about how they should be... and that is inevitable... this is an age-old problem which isn't at all new and will not end with robots (if we live past them). If someone had asked if we could live past atomic bombs would we expect it?
Is creating destructive AI powers more accessible than atomic bombs?
Tech and the general public
Technology in general is becoming more integrated into the lives of everyday people
AI specifically is also being integrated into the lives of everyday people, not always with their knowledge (e.g. home systems)
Not always well-educated about how technology works, how their data is being used, etc.
The general public don't usually understand tech but they are the ones who will be affected by it
Ethics and Trust
Businesses and services
We don't always know how tech works or how our data is being used, so we have to trust businesses to be ethical
Feels like monetary wealth is tipping in the favour of businesses - money runs the world, so doesn't that mean they have all the power? Biggest names are e.g. owners of businesses like Amazon, Microsoft, Apple, etc.
And these are the people working with data!!!!
What happens if we can predict crimes before they happen with 99% accuracy?
What happens if technology can reliably detect liars better than we can? Our whole jury structure goes to shit
What happens if technology can generate footage that is indistinguishable from real footage? That's a major part of legal trials.... what happens when we don't know it exists!? Who gets convicted on pure bs?
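The "99% accuracy" crime-prediction worry has a concrete statistical wrinkle worth noting: the base-rate problem. A back-of-envelope sketch, assuming hypothetical numbers (99% sensitivity and specificity, and a made-up base rate of 1 in 1,000 people actually committing a crime):

```python
# Hypothetical worked example of the base-rate problem.
# All numbers below are illustrative assumptions, not real statistics.

population = 1_000_000
base_rate = 0.001      # assumed: 1 in 1,000 people will actually commit a crime
accuracy = 0.99        # assumed: 99% sensitivity AND 99% specificity

actual_criminals = population * base_rate           # 1,000 people
innocents = population - actual_criminals           # 999,000 people

true_positives = actual_criminals * accuracy        # 990 correctly flagged
false_positives = innocents * (1 - accuracy)        # 9,990 wrongly flagged

# Of everyone the system flags, what fraction is actually guilty?
precision = true_positives / (true_positives + false_positives)
print(f"Chance a flagged person is actually guilty: {precision:.1%}")
```

Under these assumed numbers, only about 9% of the people a "99% accurate" system flags would actually be guilty - most flagged people would be innocent, which sharpens the jury question above.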
Businesses are within a society that we understand and have some control over - we trust the law to protect us from these things... what of other forces? Truly destructive ones that have no interest in obeying our laws
People create unpredictable robots to manipulate other people - is this the robot, or the person? At what point does it become the robot? More blurring lines! Links to responsibility of the creator.
The difficulties in modelling human behaviour
Over simplification of human behaviour
Data treats us as a number of data points, not as people
...leads to fault. Can we be simplified? Simplification leads to generalisation and prejudice which exists in human nature too. Synthesis is achieved by celebrating variance and accepting difference. But how can data be treated this way? We are also creatures of collective habit... I'm confused!
Variation in behaviours across different cultures
Variation in ethics
Roots in philosophy
What is sentience?
What are feelings?
Data isn't just used in tech, it's used across many fields to understand people as a whole
Data use is like... this is you! We presented this just for you... but it isn't you, actually. It's based on lots of other people somewhat like you, but it doesn't celebrate difference or variance. It focuses on similarity
Make something that celebrates variance!? Would this be cooler!?
Was going to focus on variance being a bad thing: how data giants treat us, but would it be cooler to do something different? Something that makes differences amazing?
Do something different and something cool happens
Maybe a reflection of someone doing something different... an echo of someone more different when you're doing something normal... but then if you're doing something new that it records as a new variable of difference it does something fun like sparkly lights spraying out from where you are. YOU are an individual. You're doing something original. You're not just a data point
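A minimal sketch of the sparkles-for-originality idea: flag a new observation as "different" when it sits far from everything recorded so far. The names, the 2D points, and the distance threshold here are all hypothetical choices, not part of any real installation:

```python
# Sketch: trigger something ("sparkly lights") when someone does
# something unlike anything recorded before. Threshold is an
# arbitrary assumed value.
import math

history = []          # past observations, e.g. (x, y) positions in the space
THRESHOLD = 2.0       # assumed: how far away counts as "doing something new"

def distance(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_original(point):
    """True if `point` is far from every past observation; records it either way."""
    novel = all(distance(point, past) > THRESHOLD for past in history)
    history.append(point)
    return novel      # in the installation idea: novel -> sparkles

print(is_original((0.0, 0.0)))   # nothing recorded yet -> True
print(is_original((0.1, 0.1)))   # close to the first point -> False
print(is_original((9.0, 9.0)))   # far from everything -> True
```

Note this simple version treats the *first* visitor as original by default, and "original" decays over time as the history fills up - which itself echoes the similarity-vs-difference tension above.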
Human vs AI
Scary vs cool
Does AI naturally reflect us because it is kind of an extension of us? We make it in our image - imperfect, its flaws are our flaws, e.g. we treat people as data, and that is the problem with computers treating people as data. People are prejudiced and so are the programs they make
Human nature in AI - what can AI do for us? Power over people through technology, but what if it invalidates us? Wealth gap between people with technology and people without - those who make it get rich, but those at the bottom of the chain lose their jobs
Everyday people vs. the makers of tech
We want to make AI like us - but we're scared of it being too much like us. If it's flawed like we're flawed - WW3! Are we scared of AI because we're scared of ourselves? YES.
Fear and safety. With advancements there is always the safety that comes in having a powerful force, and the fear that comes from someone else having it - think about atomic bombs. Responsibility - it being in the "right" hands, which is subjective? i.e. one person's terrorist is another person's freedom fighter.
People are cool, and boring, and the same and different. Can I capture this? Can I say the problem with AI in a scary and cool and positive and negative way? Can I make it general and personal?
Want it to apply to a lot of people - be relatable by everyone, but also be personal. My personal recent experiences are going into this - I am thinking a lot about human psychology and what it means and tech is close to me so I'm using it as a medium to display that. Fuck I'm so fucking goldsmiths.
Main thing: people are the same AND different. Is this a major problem with AI? We need to be able to take into account similarities and differences. Computers are typically bad at doing this at the same time (didn't I read that when I was researching something recently? Oh was that short term vs. long term... maybe) computers typically do one thing at a time. Hmmmm...
Wouldn't it then be wrong to penalise someone for being the same? I guess that idea works for this... do something original and fun for people that are the same, and do something cool also for people that are different
Is this the whole problem? Omg freaky links directly into my personal life right now. I am polarised - people are naturally polarised - this whole issue with AI is polarised because we are... wtf........... ummmmm. Of course we would have polarised views on it because we are naturally polarised, so why would this be any different? We worry about what will happen while feeling confident - and it's all about us predicting the future but we never know until it actually happens
This is, though, a strangely unique problem in that we really have never faced anything like this before. Is that because we're creating life? Creating sentience? Something that can make independent decisions with genuine power? Because we're creating something with more power than we have? We're tipping the power balance? Hahahahha we're idiots, we think we're so smart cos we can make smart things and we're going to make something smarter than us that can destroy us. We're transferring the power balance in nature to something else after millions of years of being the dominant force on the planet - what idiots!
Interestingly this is a product of the human condition. We are power hungry - we want more, even though it often destroys us. Links back to weapons of mass destruction again - atomic bombs. We're scared of each other... wait does this link back to how everything is basic human nature since cave man days? Waving your dick around? Biggest man of the pack? Race to the moon? Are we all so concerned with being the best of the best that we don't realise what a threat we pose to ourselves?
Product but also the inherently self-destructive nature we have... the things that drive us also destroy us.
Individuals vs. groups
general public/makers of tech
birth/creation - death/destruction
this is a reflection of data use, we learn from data so this is a reflection of everything we do - economy etc.
There's no such thing as perfect tech because there's no such thing as perfect us: what we think is right for us is variable. We are born imperfect: but what of something that we make imperfect?
The confusing loop of birth and responsibility!
Does this have something to do with determinism? How responsible are people for their own actions, or does everything that happen to us lead us down an inevitable path? Where humanity is concerned wouldn't you always want to have the chance at life? But what of robots? Is it life? What if we could create life? Should we? How is creation different from birth?
Birth and death is such a major part of our psychology and what we are that it's hard to create true AI without it. What are we without the drive to live? Isn't that why philosophers and religion, etc. exist? Because we want to know why were here, what we are, etc? Also links to how AI can help us... weird loop. AI can help us exist more - for longer, create more, do more... and that is humanity perhaps... but it could also be the end of it. Wtf.
I'm totally taking my personal experiences recently in therapy to uni with me. Celebrating imperfection and the contradictory notion of being open for authenticity... so what are we trying to achieve with AI? Certainly not humanity? We are so complex and flawed! But our flaws make us perfect!? I'm confused! My ideas are way too deep!
We are flawed and that makes us perfect because social opinion is important and human connection is important and being open and flawed makes us more able to connect if we're honest about it - being vulnerable only matters in a social context and being vulnerable is worth something and you can only do it if you are imperfect sooo... could robots be imperfectly vulnerable? If they could do that, maybe they would be intelligent. Fuck my life.
If being social is part of what makes us human, don't robots need to be social to be true AI? They need a drive to be social like we are. How could they ever properly understand social conventions without our complex biological psychology based on social norms? This indicates that social norms are a VERY interesting point to take into consideration
We regulate each other... our understanding of what's right and wrong only makes sense in a social context and is dependent upon lots of experiences of other people around us that we learn over a lifetime... robots are only built on one set of laws? If you could build them to change depending on where they are placed they would make more sense.
The more variability you introduce the more human they will become - and as being human is being variable and unpredictable - the more variable and unpredictable they will become. This is why AI is unpredictable - because it's modelled after us, and so are we. If we could build programs to unpick robot logic, we could also make programs to unpick our own logic - to some extent! We're more complicated!
We have concern for other beings that are like us - that's how we make connections - how can we relate to robots, or have them relate to us, when we're not alike? Should we even be trying to create things like us when they won't be? Is that what we're trying to do?
We have respect for people that are able to overcome difficulty - this is biological. What of robots? They don't have the difficulties we have.
Even therapy is kind of a personal experience but the therapist learns from past experiences with other clients to know how to deal with you... at what point are they going out on a limb and not basing it on past experience? Is that even possible?
Age old argument of nature vs. nurture. Robots only have one of these. Keep asking: what makes us human? So many things
Control and power
We make robots to have control, but fear the control they have over us - or is that the control that the people who control them have over us? At what point do we fear the tech itself? Robot wars!
Adam and god!
People and robots!
Contrived simplified versions of the same thing: like how we are made in God's image but less awesome, robots are us but less awesome... but what if they surpass us? Like parents fear their children surpassing them, we fear robots surpassing us... but it is inevitable! how scary!
The creation of robots is reflected in human psychology in the way we create our children??? Wtf is this true???? They're like our babies... we think what they could do for the world, we're proud of them, and yet we fear what they could become. We feel responsible for them, and understandably. But with AI... we don't know what it might become? Wtf...
Don't get too focused on the outcome before you know what it is - doesn't really matter?
Have a drawn shape of the prediction vs. the actual reflection and highlight the difference like a Venn diagram
Getting bogged down with the conceptual details, could just be interesting to display the data without knowing what it means exactly - can discuss this is the report
Could this be an interesting reflection of my own mental state, which is probably in turn its own reflection of the whole of humanity's mental state? We don't always know. We don't know if it's good or bad - we don't know what's going to happen and we will inevitably get it wrong. Our history is both long and short, time is both fast and slow - this is relating to my own recent personal thoughts again.... A stretch of time is immeasurable except by those golden moments that stand out and are special for good and bad reasons, and they shape us and our future...
Is this not what the future of AI will be? Some big moments and break throughs resulting in huge changes that are never forgotten and echo down the future of humanity.... they could destroy us, as moments can destroy individuals. There will always be error in prediction.
Because people are unpredictable!
Like this idea of echoes - echoes are past events that reiterate over and over into the future, becoming distorted over time. Is this not what AI does? Except it takes lots of things and combines them into one I suppose... Echoes of the past. Hmmmm
Don't have light or dark or any connotations! Could just highlight the difference and let that be enough?
Me literally losing my mind over this project >
This is a reflection of my mental state right now, and I didn't even realise it, and we are trying to reflect our mental state as a whole in AI, and if AI created life we could create an endless loop of creation. WTF
Didn't occur to me before but something that has long been coveted by people is eternal life... what if we create robots that can live so much longer than us??? weird...
Nature vs. nurture
Some things are inherently programmed into us for an arguably good reason???
We are a combination of nature and nurture
If you want good robot you have to make it both too?
Pop culture is often about humanity rising up against AI evil - like what makes us different and resilient, and our connection to our people, is what makes us able to do this - but is it true? Could this really happen? If it could, surely we would have built robots smart enough to spot it
What happens when we win against robots in pop culture would never happen - we celebrate our resilient humanity but we're smart enough and dumb enough to create robots that can defend against it
Is a lot of this about power and control? We're scared of what other people might do with technology, and we're scared of what technology might do if we can't control it properly, and we're trusting the makers of tech with a lot of power over us