AI Week 9/10
Simulated Annealing
This is an alternative to hill-climbing as an optimisation algorithm. It attempts to avoid getting stuck in local optima by sometimes accepting a downhill move
The general method is similar to hill-climbing but the acceptance condition differs: we pick a random neighbour rather than the best-quality one, and we will sometimes accept a solution of worse quality than the current solution, governed by the "probability of accepting a worse-quality solution"
Setting the probability: we want a high probability of accepting a worse solution at the beginning (so we don't get stuck in a local optimum and can explore the search space), lowering slowly over time (so we can exploit a hill)
PROBABILITY FUNCTION: e^(deltaE/T)
deltaE is quality(rand_neighbour) - quality(current_solution); this is always less than or equal to 0, because we only evaluate the probability when deciding whether to accept a worse or equal solution
T is the temperature
e is Euler's constant, 2.718...
The function is exponential but always <=1, since deltaE/T is always <=0. As deltaE/T increases (gets closer to 0), the probability of acceptance rises towards 1; as deltaE/T decreases, the probability falls towards 0 but never reaches it (an asymptote)
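A quick numeric illustration of the probability function above (the function name and values are just for illustration): the same worsening deltaE is very likely to be accepted at a high temperature and very unlikely at a low one.

```python
import math

def acceptance_probability(delta_e, temperature):
    """P(accept) = e^(deltaE/T), for a worse-or-equal neighbour (deltaE <= 0)."""
    return math.exp(delta_e / temperature)

# Early in the run (high T): even a clearly worse neighbour is usually accepted.
print(acceptance_probability(-5.0, 100.0))  # ~0.95
# Late in the run (low T): the same worsening is almost never accepted.
print(acceptance_probability(-5.0, 1.0))    # ~0.0067
# deltaE = 0 (equal quality) is always accepted, at any temperature.
print(acceptance_probability(0.0, 1.0))     # 1.0
```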
This means that if we reduce the temperature over time, we reduce the probability of accepting a bad neighbour over time. We can use different rules for updating T, such as the geometric schedule T = alpha*T, where alpha is a number less than, but close to, 1 (e.g. 0.95), applied each iteration
It isn't guaranteed to find the optimum in a reasonable amount of time (though it frequently obtains near-optimal solutions), but it will find it if left to run indefinitely (which may take even longer than brute force)
Time Complexity: varies with the problem; the number of iterations depends on the cooling schedule and the minimum temperature, or on when the current solution stabilises
Space Complexity: depends on how the design variable is represented
Constraint Handling
In order to find candidate solutions which are feasible, we have to remove, or at least dramatically lower, the chance of choosing infeasible ones. One way is to add a penalty when a solution breaks constraint functions
This approach is easier to design, lets us rank infeasible solutions (obviously below feasible ones), and lets us rank the neighbours of an infeasible solution until we find a feasible one
Generalisation: minimise f(x) + Q(x), where Q(x) = 0 if x is feasible, and Q(x) = P * (sum of the violated constraints) otherwise
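The generalisation above can be sketched like this. The constraint representation (a list of predicates) and the toy problem are assumptions for illustration; here "sum of the violated constraints" is read as the *count* of violated constraints, though summing violation magnitudes is another common reading.

```python
def penalised_objective(f, constraints, penalty, x):
    """f(x) + Q(x): Q(x) = 0 if x satisfies every constraint,
    otherwise P times the number of violated constraints."""
    violated = sum(1 for satisfied in constraints if not satisfied(x))
    return f(x) + penalty * violated

# Hypothetical problem: minimise f(x) = x^2 subject to x >= 2,
# written as the inequality constraint g(x) = 2 - x <= 0.
f = lambda x: x ** 2
constraints = [lambda x: 2 - x <= 0]
print(penalised_objective(f, constraints, penalty=1000, x=1.0))  # infeasible: 1.0 + 1000
print(penalised_objective(f, constraints, penalty=1000, x=2.5))  # feasible: 6.25
```

The infeasible point x = 1.0 now ranks far below any feasible point, so a minimiser is steered back into the feasible region.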
Another approach is to
implement the constraints within the neighbourhood operator, representation and initialisation
This means no infeasible solutions are generated
This may be difficult to design and is heavily problem-dependent; it may also restrict the search space too much, making it difficult to find the optimum
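A classic instance of this idea (a TSP-style example, assumed here for illustration): represent a tour as a permutation, initialise with a random shuffle, and use a swap as the neighbourhood operator. Every solution these produce is a valid permutation, so no infeasible tour can ever be generated.

```python
import random

def random_init(n):
    """Initialisation that is feasible by construction: a random permutation."""
    tour = list(range(n))
    random.shuffle(tour)
    return tour

def swap_neighbour(tour):
    """Neighbourhood operator that preserves feasibility: swapping two
    positions of a permutation always yields another valid permutation."""
    i, j = random.sample(range(len(tour)), 2)
    neighbour = list(tour)
    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
    return neighbour
```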
We want to minimise the objective function f(x). This may be multi-objective, fk(x), k = 1, ..., p, where some objectives are minimised and others maximised
We do this by inputting different candidate solutions, represented by the design variables x, which are found within the search space
We have inequality constraints, gi(x) <=0, i=1,...,m and equality constraints, hj(x) = 0, j=1,...,n
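The formulation above can be made concrete with a small feasibility check. The function name, the tolerance for equalities, and the toy constraints are all assumptions for illustration.

```python
def is_feasible(x, inequality_constraints, equality_constraints, tol=1e-9):
    """x is feasible when every g_i(x) <= 0 and every h_j(x) = 0
    (equalities are checked to a numerical tolerance)."""
    return (all(g(x) <= 0 for g in inequality_constraints)
            and all(abs(h(x)) <= tol for h in equality_constraints))

# Toy design variable x = (a, b) with:
#   g1(x) = a + b - 10 <= 0   (inequality: a + b at most 10)
#   h1(x) = a - b = 0         (equality: a equals b)
g = [lambda x: x[0] + x[1] - 10]
h = [lambda x: x[0] - x[1]]
print(is_feasible((5, 5), g, h))  # True: both constraints hold
print(is_feasible((7, 5), g, h))  # False: violates the equality
print(is_feasible((6, 6), g, h))  # False: violates the inequality
```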
If the penalty P is too low or too high, the problem becomes GA-hard, but it's difficult to know where the boundary lies