Difficulties surrounding AI advancement
Problems with creating AI (production)
Reproducibility Issue
I’ll talk about the difficulty of reproducing work between labs: not only why it’s happening, but how it’s holding up progress in this field and what scientists are doing to create a better framework for communication.
He outlines the issues with reproducing another team of researchers’ machine learning methods and outcomes, the main one being that reports of the original experiment usually aren’t thorough enough for the work to be reproduced.
The problem of communication and reproducibility in AI seems like one that only matters to those working on these machines, but it affects the public too.
Gregory Barber talked about the difficulty of reproducing artificial intelligence systems and how it keeps labs from working together and improving upon each other’s work.
With better communication frameworks, scientists could actually build on each other's work without the troubleshooting that takes up so much time today.
By creating a “reproducibility checklist” for AI scientists to work through, the mystery of how one lab’s code behaves when another lab tries to build on it would go away. This would create a shared learning environment.
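As a rough sketch of how such a checklist could work in practice, a lab might refuse to share a report until every item is present. The field names and the check_report helper below are hypothetical examples, not items from any published checklist.

```python
# A minimal sketch of a "reproducibility checklist": a report is only
# considered shareable once every item is filled in. The field names here
# are hypothetical examples, not an official standard.

REQUIRED_FIELDS = [
    "code_version",      # exact commit or release of the training code
    "dataset_source",    # where (and which version of) the data came from
    "hyperparameters",   # every value needed to rerun training
    "random_seeds",      # seeds used, so runs can be repeated exactly
    "hardware",          # GPUs/CPUs used, since results can vary by device
    "failed_attempts",   # iterations tried before the final model was chosen
]

def check_report(report: dict) -> list[str]:
    """Return the checklist items missing from an experiment report."""
    return [field for field in REQUIRED_FIELDS if not report.get(field)]

report = {"code_version": "a1b2c3d", "dataset_source": "ImageNet-2012"}
missing = check_report(report)
if missing:
    print("Not reproducible yet, missing:", ", ".join(missing))
```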
A framework called MANTiD outlines benchmarks for visiting researchers at these facilities to follow. This keeps the data produced uniform, so deep learning networks can easily analyze it to produce an appropriate model.
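To illustrate why uniform data matters (this is a generic sketch, not Mantid’s actual data model or API), a shared record schema lets one piece of analysis code consume any lab’s output without per-lab glue code.

```python
# A toy illustration of a uniform record format: once every lab emits the
# same fields, downstream analysis code works for all of them unchanged.

from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    facility: str              # which lab or instrument produced the data
    sample_id: str             # identifier for the material measured
    measurements: list[float]  # raw readings, in an agreed-upon unit
    units: str                 # e.g. "counts/sec", stated explicitly

def mean_reading(record: ExperimentRecord) -> float:
    # Works identically for every lab, because the schema is shared.
    return sum(record.measurements) / len(record.measurements)

r = ExperimentRecord("Lab A", "sample-42", [1.0, 2.0, 3.0], "counts/sec")
print(mean_reading(r))  # 2.0
```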
It outlines everything a team of researchers should include when sharing their findings so other labs can build upon them. This echoed a similar point in Tessella’s article about machine learning analytics: black boxes, or blocks of information omitted from an explanation of how a system works, need to go away in today’s world of machine learning.
Machine Bias
I’ve also learned how machine learning picks up bias from the dataset it’s pulling from rather than from the coder (in most cases), and about some of the different ways scientists have structured machine learning.
How are you supposed to develop impartial programs if the data they’re pulling from is even the littlest bit biased?
He cites the problem as being that algorithms become flooded with so much information that they adapt to the biases of the input information.
This comes from the wage gap; the algorithm learns of its existence and draws on it to produce the ‘socially appropriate’ advertisement for the user.
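Here is a toy sketch of that mechanism. The historical ad-click numbers are fabricated, and the frequency-based “model” is a stand-in for a real targeting algorithm; the point is only that a model fit to skewed history reproduces the skew.

```python
# A toy sketch of how an ad-targeting model can absorb the wage gap from its
# training data: high-salary job ads were historically shown to (and clicked
# by) men more often, so the learned behavior encodes gender even though no
# one programmed that rule. All numbers below are fabricated.

from collections import Counter

# (gender, clicked_high_salary_ad) pairs reflecting a skewed history
history = [("M", 1)] * 60 + [("M", 0)] * 40 + [("F", 1)] * 10 + [("F", 0)] * 90

def click_rate(gender: str) -> float:
    counts = Counter((g, c) for g, c in history if g == gender)
    clicks = counts[(gender, 1)]
    total = clicks + counts[(gender, 0)]
    return clicks / total

# A frequency-based model simply reproduces the historical skew:
print(click_rate("M"))  # 0.6 -> keeps showing men the high-salary ad
print(click_rate("F"))  # 0.1 -> withholds it from women
```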
Bias is another important aspect because it can affect a machine’s entire decision-making process. I’ve learned that bias comes from the dataset the machine learning program is pulling from rather than from a biased programmer.
If the technology we’re using produces skewed results, there’s a problem that needs to be fixed.
The production side of AI also includes dealing with biased code and finding methods for teaching machines in a way that keeps them from becoming biased.
Deciding the best machine learning method to employ
I also want to look at the issues affecting how users experience AI, such as how AI reacts to different human actions and how applying different methods of machine learning produces different outputs from these programs.
The researchers came to the conclusion that the majority of difficulties reside in three categories: “difficulty following an iterative and exploratory process, difficulty understanding the relationship between data and models and difficulty of evaluating the performance of models in the context of an application.”
Refining AI and machine learning methods to select a well-fitting model more quickly would speed up the process of reaching conclusions in these experiments.
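A minimal sketch of quicker model selection, using scikit-learn’s cross-validation to score several candidate models on the same data and keep the best fit; the candidate list and toy dataset are illustrative choices, not a prescribed workflow.

```python
# Score several candidate models with cross-validation and keep the best,
# instead of hand-tuning one model at a time.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
}

# Evaluate each model on held-out folds and report the best fit for this data.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} ({scores[best]:.3f} mean accuracy)")
```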
Different Machine learning methods
Another method is to give the robot a defined set of rules and conditions that must all equate to ‘yes’ for the program to execute. The benefit of this is that the user knows exactly why the robot made the decision it did, but the issue comes with the immense number of rules a programmer would have to input for a robot to function in the complex reality we live in.
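A small sketch of that rule-based approach, with hypothetical rules: every condition must come back ‘yes’ (True) before the robot acts, so each decision is fully explainable, but every rule has to be written by hand.

```python
# Rule-based decision making: the robot only acts when every condition is
# True, and a failed rule tells us exactly why it refused. The rules below
# are hypothetical examples.

RULES = [
    ("path is clear",         lambda s: s["obstacle_distance_m"] > 1.0),
    ("battery is sufficient", lambda s: s["battery_pct"] >= 20),
    ("no human too close",    lambda s: s["nearest_human_m"] > 0.5),
]

def decide_to_move(state: dict) -> bool:
    for name, rule in RULES:
        if not rule(state):
            # The benefit of this method: we know exactly which rule blocked
            # the action, so the decision is never a black box.
            print(f"not moving: failed rule '{name}'")
            return False
    return True

state = {"obstacle_distance_m": 2.0, "battery_pct": 15, "nearest_human_m": 1.2}
print(decide_to_move(state))  # False, and it tells us why
```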
The most abstract but perhaps most useful method was to allow the machine to learn for itself. The more situations you put the machine through and expose it to for analysis, the more it will learn how to behave by deriving a pattern of behavior throughout the various scenarios.
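A toy sketch of the learn-for-itself approach, using a scikit-learn decision tree on fabricated scenario data: no rules are written by hand, and the behavior pattern is derived entirely from the labeled examples.

```python
# Learning behavior from exposure: expose the machine to labeled scenarios
# and let it derive the pattern itself. The scenario features and labels
# below are fabricated examples.

from sklearn.tree import DecisionTreeClassifier

# Each scenario: [obstacle_distance_m, battery_pct, nearest_human_m]
scenarios = [
    [2.0, 90, 3.0],
    [0.3, 90, 3.0],
    [2.0, 10, 3.0],
    [2.0, 90, 0.2],
    [1.5, 50, 1.0],
    [0.5, 50, 1.0],
]
safe_to_move = [1, 0, 0, 0, 1, 0]  # what a supervisor judged in each case

model = DecisionTreeClassifier(random_state=0).fit(scenarios, safe_to_move)

# The more scenarios it sees, the better the derived pattern generalizes:
print(model.predict([[1.8, 80, 2.5]]))  # learned behavior, no explicit rules
```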
One approach was to program the robot as simply as possible and give it only one rule, but when more complex situations arose, the simple code broke the robot.
Problems with implementing AI
Examples of AI failure
Face detection programs have been shown to detect white male faces better than any others, and darker faces less accurately than lighter faces.
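A minimal sketch of how that kind of disparity gets measured: compute detection accuracy separately for each demographic group rather than as one overall number. The group names and results below are fabricated for illustration.

```python
# Per-group accuracy evaluation: a single averaged score would hide the gap
# these per-group numbers expose. The test results are fabricated.

from collections import defaultdict

# (group, detected_correctly) pairs from a hypothetical face-detection test
results = [("lighter male", True)] * 98 + [("lighter male", False)] * 2 + \
          [("darker female", True)] * 65 + [("darker female", False)] * 35

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy")
```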
As machine learning algorithms learned the English language, they also quickly developed a racial bias against people of African descent. This error arose from continually running into word associations such as “white male name” and “CEO.”
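A toy illustration of how such an association is detected in word embeddings. The 2-D vectors below are fabricated to show the measurement (cosine similarity), not real learned embeddings.

```python
# In embeddings learned from biased text, some names end up closer to "CEO"
# than others. These tiny vectors are invented to demonstrate the measurement.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "CEO":   np.array([0.9, 0.1]),
    "Greg":  np.array([0.8, 0.2]),   # name the corpus tied to executive roles
    "Jamal": np.array([0.2, 0.9]),   # name the corpus tied to other contexts
}

print(cosine(embeddings["Greg"], embeddings["CEO"]))   # high similarity
print(cosine(embeddings["Jamal"], embeddings["CEO"]))  # much lower
# The model never saw a rule about race; the gap comes purely from the data.
```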
These outcomes include photo labeling programs that labeled black men as apes, and advertisement generation programs that displayed ads for CEO and upper management positions to male users six times as often as to female users.
Ethical Dilemmas
AI has been depicted in science fiction as going off the rails and overcoming humanity faster than scientists can manage, and that fear has certainly carried over into reality. Is it responsible to produce a machine that could be what saves us or what dooms us? And who’s at fault in incidents where AI hurts human beings?
Unless the scientific community can assure the public that it has created a robot or program capable of acting in a reliably ethical and culturally acceptable manner, there will be many doubts about the capability of machine learning.
If scientists are able to create a program that behaves in a manner we can accept and find ethically appropriate, AI would be more widely accepted and the process of development would move faster.
It turns out that when teams of researchers create an AI program that uses machine learning, they often leave out of their reports large swaths of information critical to getting the program to execute, such as all of the iterations of the code attempted before the final one was chosen, or the number of models a statistical AI analysis program needs to try before it finds one that fits a large set of data. That leaves other scientists spending lots of time troubleshooting just to get the program to function as intended, and then even more time altering it to work for their own purposes.
Solutions to these problems (difficulty solving these problems)
The machine picks up on any and every bias and then applies it to its own thinking, so we would have to go in and manually write exceptions for the biased decisions the bot is making, which is impossible in algorithms like deep learning that are essentially black boxes.
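A sketch of what manually writing exceptions looks like, with a hypothetical black-box model and hand-written override rules; it also shows why the approach doesn't scale, since every bias has to be anticipated one rule at a time.

```python
# Wrap the black-box model and override specific outputs known to be biased.
# The model and the override rules are hypothetical.

def black_box_model(applicant: dict) -> str:
    # Stand-in for an opaque learned model we cannot inspect or edit directly.
    return "reject" if applicant["zip_code"] in {"60624"} else "approve"

OVERRIDES = [
    # (condition, corrected_output): hand-written patches over the model
    (lambda a: a["zip_code"] == "60624" and a["credit_score"] > 700, "approve"),
]

def patched_decision(applicant: dict) -> str:
    for condition, corrected in OVERRIDES:
        if condition(applicant):
            return corrected  # exception fires before the black box answers
    return black_box_model(applicant)

print(patched_decision({"zip_code": "60624", "credit_score": 750}))  # approve
# Each new bias discovered means another hand-written rule; the model itself
# stays a black box, so the underlying problem never actually goes away.
```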
Programmers removed racist tags, but in an algorithm that complex it was difficult to see exactly where the machine learned to be racist.
The engineers and product directors need to communicate better about the unintended consequences.
Miscellany
I think this field is really cool and needs to be explored further, but we have to overcome the many little issues facing us before we can even worry about moving too far forward too fast.
It’s really cool that we can teach machines to solve incomprehensibly complex analysis problems and then turn around and use the same technology to predict when you’ll be home and adjust your AC system to save power accordingly.
It exists in our digital assistants, our Google searches and so many other processes we don’t think about that use machine learning. Pretty soon machine learning will be implemented in even more ways, so it’s only going to become more ingrained in our lives as time goes on. I want to help people understand what scientists are doing to make safer and better AI and machine learning algorithms so that this technology can be more widely trusted. The use of computer thinking, computation that is far beyond human ability, could solve so many issues we face today, like food and medicine shortages, adaptive diseases, and natural disaster prediction and damage reduction.
These kinds of questions affect everyone who exists in a society with AI, so we have to become comfortable with the answers to further AI and machine learning’s production and prevalence.
Using AI and machine learning can help us better understand massive amounts of raw data very quickly.
So if the problem is the data that’s input into the algorithm, that really means there might be some problems in society itself.