Thursday, November 6, 2014

4 R's genetic problem solving

The problem I developed this technique for is to fill a square with 10 points so that the minimum distance between any two of them is maximized, with wrap-around at the edges. It produced the following solution after 10000 generations.
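Wrap-around here means distances are measured as if the square's opposite edges were glued together into a torus. A minimal sketch of such a distance (the helper name `torus_distance` is my own, not from the code below):

```python
def torus_distance(p1, p2):
    # On the unit torus each axis separation wraps around:
    # the gap between 0.95 and 0.05 is 0.1, not 0.9.
    dx = abs(p1[0] - p2[0])
    dy = abs(p1[1] - p2[1])
    dx = min(dx, 1.0 - dx)
    dy = min(dy, 1.0 - dy)
    return (dx**2 + dy**2) ** 0.5

print(torus_distance((0.95, 0.5), (0.05, 0.5)))  # close to 0.1, not 0.9
```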
The 4 R's mnemonic stands for:
1. Randomly produce a population of solutions.
2. Reduce the population to its best-performing subset.
3. Reintroduce some new random solutions.
4. Reproduce the solutions with one another to make offspring solutions.
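The four steps above can be sketched as a generic loop. This is my own outline of the idea, where `random_solution`, `fitness`, and `crossover` stand in for the problem-specific pieces:

```python
def evolve(random_solution, fitness, crossover,
           pop_size=120, keep=10, reintroduce=10, generations=100):
    # 1. Randomly produce an initial population of candidate solutions.
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Reduce: keep only the best-performing solutions.
        pop = sorted(pop, key=fitness, reverse=True)[:keep]
        # 3. Reintroduce: fresh random solutions keep the population
        #    from stagnating.
        pop += [random_solution() for _ in range(reintroduce)]
        # 4. Reproduce: every pair of solutions breeds one offspring.
        pop += [crossover(a, b)
                for i, a in enumerate(pop)
                for b in pop[i + 1:]]
    return max(pop, key=fitness)
```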

For the example above, the first step was to produce 120 random solutions, each consisting of 10 random points in the square. The top 10 solutions were then kept and the rest discarded. Then 10 more random solutions were added; this step turned out to be very important, as without it the population stagnates and stops improving. The reproduction step took every pairwise combination of solutions and averaged the locations of each of the 10 corresponding points to produce an offspring, which was added to the population. Steps 2 through 4 are then repeated indefinitely.
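For illustration, the averaging in the reproduction step might look like this (a sketch using two-point solutions for brevity; the name `crossover` is mine):

```python
def crossover(parent1, parent2):
    # Offspring point k is the midpoint of point k in each parent.
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(parent1, parent2)]

a = [(0.25, 0.5), (0.75, 1.0)]
b = [(0.75, 0.5), (0.25, 0.0)]
print(crossover(a, b))  # [(0.5, 0.5), (0.5, 0.5)]
```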
It actually converges quite quickly to nearly the best solution it will ever find: 100 generations takes only a couple of seconds, while 10000 takes around 10 minutes, yet the difference between the best solutions they found was very small. The great improvement comes in the first handful of generations, with only gradual improvement after that.

As for performance: after 100 generations a total of around 1000 random solutions have been introduced, but a different approach that simply took the best of 1000 random solutions was nowhere near as good. So the improvement comes from the process, not simply from the number of solutions considered. The technique even did better than evaluating 100000 random solutions, which took considerably longer to generate.
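For comparison, the pure random-search baseline described above amounts to something like this (a sketch; the names are my own, and `random_solution` and `fitness` stand in for the problem-specific pieces):

```python
def random_search(random_solution, fitness, tries):
    # Baseline: just take the best of `tries` independent random
    # solutions, with no selection or breeding at all.
    return max((random_solution() for _ in range(tries)), key=fitness)
```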

Python source code:
import random

def populate(n):
    # Randomly produce one solution: n random points in the unit square.
    p = []
    for i in range(0, n):
        p.append((random.random(), random.random()))
    return p

def distance(p1, p2):
    # Euclidean distance between two points.
    return ((p1[0]-p2[0])**2.0 + (p1[1]-p2[1])**2.0)**0.5

def fitness(p):
    # Fitness of a solution: the minimum wrap-around distance over every
    # pair of points.  Each axis wraps independently; checking only the
    # left/right/up/down shifted copies would miss diagonal wraps.
    mindistance = 2.0
    for i in range(0, len(p)-1):
        for j in range(i+1, len(p)):
            dx = abs(p[i][0] - p[j][0])
            dy = abs(p[i][1] - p[j][1])
            wrapped = (min(dx, 1.0-dx), min(dy, 1.0-dy))
            d = distance((0.0, 0.0), wrapped)
            if d < mindistance:
                mindistance = d
    return mindistance

def breed(pops):
    pops2 = []
    # Keep the surviving solutions.
    for p in pops:
        pops2.append(p)
    # Reintroduce: add as many fresh random solutions as survivors.
    l = len(pops2)
    for i in range(0, l):
        p = populate(len(pops[0][1]))
        pops2.append((fitness(p), p))
    # Reproduce: every pair of survivors makes one offspring whose
    # points are the averages of the parents' corresponding points.
    for i in range(0, len(pops)-1):
        for j in range(i+1, len(pops)):
            p = []
            for k in range(0, len(pops[i][1])):
                p.append(((pops[i][1][k][0] + pops[j][1][k][0]) / 2.0,
                          (pops[i][1][k][1] + pops[j][1][k][1]) / 2.0))
            pops2.append((fitness(p), p))
    return pops2

def main():
    pops = []
    n = 10
    # Randomly produce the initial population of (fitness, solution) pairs.
    for i in range(0, n*n+1):
        p = populate(n)
        pops.append((fitness(p), p))
    pops = sorted(pops)
    for g in range(0, 10000):
        # Reduce: keep only the 10 best solutions.
        pops2 = []
        for i in range(len(pops)-10, len(pops)):
            pops2.append(pops[i])
        # Reintroduce and reproduce, then rank by fitness.
        pops2 = breed(pops2)
        pops2 = sorted(pops2)
        pops = pops2
    # Print the best solution found.
    print(pops[len(pops)-1])

main()
