Artificial Intelligence

Impacts

Artificial Neural Network

Human factors

Opportunities

Structure

The structure is what the different parts of the ANN are and how they are connected.
[image: ai_structure]

Regression

Regression is predicting a numerical outcome, e.g. a house price.

💬 vocab: Training

Training is the process of making the ANN useful at doing something, either predicting (regression) or identifying (classification)

⚠ Layers

Layers are groups of neurons. Each neuron in a layer is connected to every neuron in the previous layer, and every neuron in the next layer

⚠ Neurons

Neurons are individual nodes in the graph of the neural network. Each one does a small calculation based on the information from the neurons in the previous layer, and passes its findings onto all the neurons in the next layer

Trust

Classification

Classification is identifying or grouping the input.

Sustainability

Future proofing

Bias shifts the activation of a neuron. Adding a positive bias means the neuron will activate more easily, and so more often.

Predicting diabetes/other medical issues based on lifestyle

Weights & Biases is a company trying to avoid pointless re-training by saving trained ANNs on its website.

Ethical issues

Humans shouldn't dismiss AIs completely.

The sigmoid function takes the sum of all the weights times activations (plus the bias) and turns it into a number between 0 and 1:
σ(w1a1 + w2a2 + w3a3 + ... + wnan + b)


The weights that leave the neuron are multiplied by this number — if it is close to zero then this neuron is effectively ‘removed’ from the network
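A rough sketch of this calculation in Python (the helper names here are my own, not from any particular library; the example inputs are made up):

```python
import math

def sigmoid(z):
    # Squash any real number into the range (0, 1)
    return 1 / (1 + math.exp(-z))

def neuron_activation(weights, activations, bias):
    # sigma(w1*a1 + w2*a2 + ... + wn*an + b)
    z = sum(w * a for w, a in zip(weights, activations)) + bias
    return sigmoid(z)

# Example: a neuron with three connections to the previous layer
print(neuron_activation([0.5, -1.2, 0.8], [1.0, 0.5, 0.25], bias=0.1))
```

A large negative weighted sum gives an activation near 0, which is what effectively 'removes' the neuron from the network.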

Social impacts

Housing prices

Humans shouldn't trust AI blindly.

Retraining a natural language processing ANN like Google Translate can cost up to $3 million. Retraining it every year just to add a few new slang words may not be financially sustainable.
Link

💬 vocab: Cost

The cost is the difference between the output you wanted and the output the ANN gave (how good or bad the ANN is).


The cost function (or cost surface) is the mathematical graph of all the possible weight configurations and the cost of each one; lower points on the surface are better configurations.


Important: The aim of training is to reduce the cost.
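One common way to measure this difference is mean squared error; a toy sketch (assuming the ANN's output and the wanted output are lists of numbers):

```python
def cost(predicted, wanted):
    # Mean squared error: average of (difference)^2 over all output neurons
    return sum((p - w) ** 2 for p, w in zip(predicted, wanted)) / len(predicted)

print(cost([1.0, 0.0], [1.0, 0.0]))  # perfect output: cost is 0.0
print(cost([0.9, 0.1], [1.0, 0.0]))  # slightly wrong output: small cost
```

Squaring the differences means bigger mistakes are punished much more than small ones, and the cost is never negative.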


Cost Surface
[image: ai_cost_surface]

Output is the last layer and represents the decision the ANN has made.

ANN algorithms

The algorithms and computer science that ANNs are based on are well developed and will be effective for a long time.

Job losses

Automation will cause many people to lose their jobs and become not just unemployed, but unemployable, e.g. truckers and legal assistants.

Interpretability

⚠ Weights

Weights are the connections between neurons. Each weight is a value representing how important that connection is to the neuron receiving the information.
Important: the weights (and biases) are the only things you can change to train a network.

AIs saved 250,000 tonnes of CO2 in 12 months by optimising cargo ships.
Link

The raw output from an AI is often hard to understand and needs to be interpreted and presented to users in a way they can understand.

Input is the first layer and has one neuron for every input, e.g. a 25px by 25px picture will have 625 input neurons (one for every pixel).
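The arithmetic can be checked with a short sketch (the image here is a dummy grid of pixel brightnesses, invented for illustration):

```python
# Each pixel becomes one input neuron: a 25px by 25px image gives 625 inputs
width, height = 25, 25
image = [[0.0] * width for _ in range(height)]  # dummy greyscale pixels in 0..1
inputs = [pixel for row in image for pixel in row]
print(len(inputs))  # 625
```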

Data

Trained ANNs may become out of date as the data that they were trained on becomes less relevant. They may need to be retired or re-trained periodically to remain useful.

YouTube suggestions

Gradient Descent
To make an ANN better, you want to choose a weight configuration that gets close to the local minimum of the cost surface. This means your weight configuration has as little error as possible.


Gradient descent is the process of moving down the cost surface in small steps towards the local minimum. Each time the ANN is adjusted using backpropagation the aim is to reduce the cost (move down the cost surface) a little more.


Important: gradient descent is the process of improving an ANN.
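A minimal one-weight sketch of the process (the cost surface here is a toy function, not a real ANN's):

```python
def toy_cost(w):
    # A toy cost surface with its lowest point at w = 3
    return (w - 3) ** 2

def gradient(w):
    # Slope of the cost surface: tells us which direction is downhill
    return 2 * (w - 3)

w = 0.0              # starting weight configuration
learning_rate = 0.1  # size of each step down the surface
for _ in range(100):
    w -= learning_rate * gradient(w)  # take a small step downhill

print(w)  # very close to 3, the bottom of the dip
```

Each step moves the weight a little way downhill; after enough steps it settles at the bottom of the dip, where the cost is smallest.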

💬 vocab: Generalisation
The aim of training is to get an ANN that is general enough that it can apply what it learnt from the training examples to examples it has never seen before.

💬 vocab: Local minimum
The local minimum is the lowest point of the cost function (cost surface) in the area around the current weight configuration. You can think of it as a dip in the cost surface. It's the lowest point in the area, but there's no way to know if it's the lowest in the whole cost surface.

Face unlock

A negative weight means the information counts against the neuron activating (it inhibits the neuron).

Spotting concerning things in X-rays/MRIs

A positive weight means the information is important to the neuron and encourages it to activate.

Backpropagation
When an ANN makes a mistake in training, it means the weights aren't adjusted correctly and we need to update them.


Backpropagation is an algorithm that assigns blame to the specific weights that caused the mistake, and then adjusts those weights to try to make the ANN better.


Important: Backpropagation is an algorithm that helps the ANN figure out which ‘direction’ to go across the cost surface


Important: Backpropagation is efficient.
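A sketch of the blame-assignment idea on the smallest possible network, a single sigmoid neuron with one input (the starting values are arbitrary, chosen for illustration):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# One neuron: output = sigmoid(w * x + b), and we want it to output y
x, y = 1.0, 0.0  # training example: input and wanted output
w, b = 2.0, 0.5  # badly adjusted starting weight and bias
lr = 0.5         # learning rate (step size)

for _ in range(200):
    a = sigmoid(w * x + b)
    # Chain rule: d(cost)/dw = d(cost)/da * da/dz * dz/dw
    dcost_da = 2 * (a - y)  # from cost = (a - y)^2
    da_dz = a * (1 - a)     # derivative of the sigmoid
    blame = dcost_da * da_dz
    w -= lr * blame * x     # adjust the weight by its share of the blame
    b -= lr * blame         # the bias is adjusted too

print(sigmoid(w * x + b))  # now much closer to the target 0
```

In a real ANN the same chain rule is applied layer by layer, working backwards from the output, which is where the name comes from.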

💬 vocab: Overfitting
If the ANN only works on the training data - and not on data it's never seen - then the ANN has overfit the training data. It's not generalised enough to be useful in the real world.

Genetic Algorithm


Important: Genetic algorithms are a different method for improving the weights in a neural network
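A toy sketch of the idea, evolving a single weight towards an ideal value (the fitness function is invented for illustration):

```python
import random

def fitness(w):
    # Toy fitness: higher is better, best at w = 3
    return -(w - 3) ** 2

random.seed(0)  # make the run repeatable
population = [random.uniform(-10, 10) for _ in range(20)]

for _ in range(50):
    # Selection: keep the fittest half of the population
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: each survivor produces a slightly altered child
    children = [w + random.gauss(0, 0.5) for w in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(best)  # close to the ideal value 3
```

Unlike gradient descent, a genetic algorithm never needs the slope of the cost surface: it just keeps the best weight configurations and randomly mutates them.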

FYP on TikTok

Siri

Hidden layers are between the input and output. It's hard to know exactly what they do, but ideally they look for components and trends in the input and build up complexity as you get further into the ANN.

Stock Market changes

Human biases

AIs are trained on data that needs to be sorted and made usable. This is done by humans, who need to be careful about what they include and train the AI on.

AIs cannot explain themselves, and AI researchers can't explain how an AI came to its outcome.

Training a single large natural language processing ANN can produce up to 284 tonnes of CO2 - five times the CO2 produced by an average car during its entire lifetime.
Link

Data biases

Crime prediction AIs in the United States were trained to predict where crimes would happen, to help send police patrols to the right place at the right time. Unfortunately, the AIs were trained on arrest data instead of conviction data, so they were racially biased because the data they were based on was racially biased.