## Friday, November 28, 2014

### Branching poisson disk algorithm

My program outputs a distribution in which every dot is at least a distance of 10 pixels from any other.
It works by starting at the middle and branching out in 3 evenly spaced directions at a random angle from the center dot; then from each of those 3 points it branches in the same way, generating more points, and continues in that fashion for a set number of iterations (the run above used 25 generations). I think this picture helps explain it a lot.
The speed is fairly quick: generating these graphs in Python took only about 1.5 seconds for around 1300 points. I think it could be sped up considerably using parallel processing, for example on a GPU with a faster language.
**Python source code**
```python
import math
import random
from PIL import Image

def check(d, pointlist, point, gridsize):
    # reject points within 10 pixels of the border
    if point[0] < 10 or point[0] > gridsize[0] - 10 \
       or point[1] < 10 or point[1] > gridsize[1] - 10:
        return True
    # reject points closer than d to any existing point
    for p in pointlist:
        if ((point[0] - p[0])**2.0 + (point[1] - p[1])**2.0)**.5 < d - 1:
            return True
    return False

def findspot(pointlist, d, gridsize):
    newlist = []
    for point in pointlist:
        newlist.append(point)
        tx, ty = point[0], point[1]
        theta = random.random() * 2 * math.pi
        # branch out in 3 evenly spaced directions from a random base angle
        for i in range(0, 3):
            px = tx + d * math.cos(theta + i * (2 * math.pi) / 3)
            py = ty + d * math.sin(theta + i * (2 * math.pi) / 3)
            if not check(d, newlist, [px, py], gridsize):
                newlist.append([px, py])
    return newlist

class graph():
    def __init__(this, d, gridsize):
        this.pointlist = [[400, 400]]   # start at the middle
        this.d = d
        this.gridsize = gridsize
    def tree(this, limit):
        if limit < 25:                  # 25 generations
            this.pointlist = findspot(this.pointlist, this.d, this.gridsize)
            this.tree(limit + 1)

def main():
    gridsize = [790, 790]
    d = 10.0
    plot = Image.new("RGB", (800, 800))
    G = graph(d, gridsize)
    G.tree(0)
    # draw each point as a 3x3 white square
    for point in G.pointlist:
        for i in range(-1, 2):
            for j in range(-1, 2):
                plot.putpixel((int(point[0]) + i, int(point[1]) + j), (255, 255, 255))
    plot.save("plotdot.png")

main()
```

## Sunday, November 23, 2014

### recursively moving around the unit circle with constant angle to the origin

Start with solving this general system for two vectors [x,y] and [a,b]: the dot product between them equals c (x·a + y·b = c) and the magnitude of the cross product equals d, counterclockwise (x·b − y·a = d). Solving for a and b gives a = (c·x − d·y)/(x² + y²) and b = (c·y + d·x)/(x² + y²), which on the unit circle is just a = c·x − d·y and b = c·y + d·x.
So we begin by letting [x,y] start as the vector [1,0]. Then we can find possible values for c and d by picking any value as close to or as far from 1 as we'd like for x and solving y = √(1 − x²); for example x = .99 gives y ≈ .141. Plug those in for c and d respectively, or alternatively use a rational value from the rational parameterization of the circle. These are the values of the dot and cross product between the generated point and the vector [1,0], because the dot product will just be the x coordinate of the point we generated and the cross product will just be the y. Let's call this Formula 1.
Now [c,d] is also our first vector after [1,0], so we can plug it in for x and y as well and get the next point [a,b] = [c² − d², 2cd].
This vector [a,b] is at the same angle to [.99,.141] as [.99,.141] was to [1,0], because we made the dot and cross products stay the same.

Now we can plug this [a,b] in for x and y in Formula 1 and get the next point around the circle.
In this way we work our way around the unit circle by that same small angle (which we never actually compute) without ever evaluating a cosine or sine! An approximation of the angle can be found by counting how many steps this process takes to get close to a known value like [0,1].
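Here is a small sketch of that recursion, assuming the solved form a = c·x − d·y, b = d·x + c·y, with c picked freely and d chosen so that c² + d² = 1:

```python
import math

c = 0.99                   # chosen dot product (x-coordinate of the first step)
d = math.sqrt(1.0 - c*c)   # cross product, about .141, keeping the point on the unit circle

x, y = 1.0, 0.0            # start at [1, 0]
steps = 0
while x > 0:               # stop once we cross the y-axis, i.e. near [0, 1]
    # keep the dot (c) and cross (d) products to the previous vector constant
    x, y = c*x - d*y, d*x + c*y
    steps += 1

# the unknown angle per step is roughly a quarter turn divided by the step count
approx_angle = (math.pi / 2) / steps
```

No sine or cosine is evaluated inside the loop; each step is just two multiplies and an add per coordinate, and the estimate sharpens if you aim for a smaller angle (c closer to 1).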

## Friday, November 21, 2014

### wave interpolation (with natural number harmonics of different amplitudes)

I noticed that it's possible to find solutions for waves that are 0 at the endpoints of the interval and that pass through a certain number of given points.
For instance, say you want to know the waves that pass through these points:
[[.5, 1], [1, 0], [1.5, -1]]
Since that is three points, I start by considering waves of the form a·sin(πt) + b·sin(2πt) + c·sin(3πt), which is 0 at the endpoints t = 0 and t = 2. Maple can solve the resulting system of three equations.
You can see that there are many waves satisfying these conditions: b and c can take any value, and a just has to be 1 plus whatever c is chosen to be; for example a = 5.5, b = 3, c = 4.5.
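A quick numeric check, assuming waves of the form a·sin(πt) + b·sin(2πt) + c·sin(3πt) on [0, 2], that the whole family a = 1 + c passes through all three points (the second (a, b, c) triple below is just another hypothetical member of the family):

```python
import math

def wave(a, b, c, t):
    # assumed form: a*sin(pi t) + b*sin(2 pi t) + c*sin(3 pi t), zero at t = 0 and t = 2
    return a*math.sin(math.pi*t) + b*math.sin(2*math.pi*t) + c*math.sin(3*math.pi*t)

points = [(.5, 1), (1, 0), (1.5, -1)]
# two members of the solution family a = 1 + c, with b and c free
solutions = [(5.5, 3, 4.5), (1.0, -2.0, 0.0)]
for (a, b, c) in solutions:
    for (t, h) in points:
        assert abs(wave(a, b, c, t) - h) < 1e-9
```

The freedom comes from the sample spacing: at t = .5, 1, 1.5 the second harmonic sin(2πt) is zero at every sample, so b never appears in the equations at all.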
The thing I noticed is that there are two degrees of freedom in the solutions for this wave, because the interpolated t values .5, 1, 1.5 are all multiples of the first value, .5. We can remove a degree of freedom by interpolating through .5, 1.2, 1.5 instead; here 1.2 is not a multiple of .5, but 1.5 still is.
The solutions now have only one degree of freedom, so that c can be chosen as anything. Interestingly, the solutions involve the golden ratio, 1.618033..., usually known as phi. I suspect this is because t = 1.2 makes the sine arguments multiples of 36° (π/5), whose exact sines and cosines involve √5, and cos 36° is exactly half the golden ratio.

So the next step, of course, is to check that a case where the t values are all not multiples of each other yields only one solution. Let's try [.7, 1.2, 1.6]: there is only one solution for those heights through those t values.

**Finding unique solutions for an unknown wave through a number of points**
The trick, as you can see above, is to sample your unknown wave so that none of the t values are multiples of each other. There might be many ways to do this; it's kind of an interesting problem in itself.
One way is to use prime numbers: if you have n points, use fractions of the nth prime. For example, with six points you could use the t values:
[1/13, 2/13, 3/13, 5/13, 7/13, 11/13]
For example, if our 6 heights at those t values are [1, 1.5, .5, 0, -.65, -.8]
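Assuming the same harmonic basis sin(kπt) as above, the six amplitudes follow from a 6×6 linear system; here is a sketch with a small Gaussian-elimination helper:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting, for small dense systems
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

ts = [1/13, 2/13, 3/13, 5/13, 7/13, 11/13]
heights = [1, 1.5, .5, 0, -.65, -.8]
# matrix entry: the k-th harmonic sin(k*pi*t) evaluated at the i-th sample
A = [[math.sin((k + 1) * math.pi * t) for k in range(6)] for t in ts]
amps = solve(A, heights)

# the recovered wave passes through all six points
for t, h in zip(ts, heights):
    w = sum(amps[k] * math.sin((k + 1) * math.pi * t) for k in range(6))
    assert abs(w - h) < 1e-8
```

The system here is genuinely invertible: a combination of the first six harmonics factors as sin(πt) times a degree-5 polynomial in cos(πt), which can't vanish at six distinct sample points unless it's identically zero.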

## Sunday, November 9, 2014

### fanless turbo compressor

An idea I had for an alternative to a turbo compressor with fan blades like they usually have. I thought that because the center of gravity is so close to the spindle in this design, you could easily spin it really quickly and get better performance.

### yellow on an RGB computer screen vs from the spectrum

The bottom graph shows an overlay of the yellow you get from mixing pure red and green light and the yellow you would get from a light source at the right frequency. As you can probably see, it's off not just by a vertical scaling factor: the second half of the wave is closer than the first half. This phenomenon would persist with different colors even if a yellow LED were added, and after that an orange, etc., though it would improve with each addition. Maybe one day they'll make an LED that can actually produce any color from the spectrum by altering its voltage source, so different colors of light don't have to be added together to try to make all the colors.

### Symbol gas cryptography

First, two computers are initialized to the same configuration of the symbol gas.
Every symbol is given an initial position and an initial motion vector, above I've shown the motion vector for just the k.
The algorithm for running the simulation is made so that, no matter how long it runs in simulated time, the final position of the symbols is deterministic given the starting values. Still, where every symbol ends up after a fair amount of time is very pseudorandom, because they are all bouncing off of each other and off the walls. One computer has a message it wants to send to the other computer, such as...
"this is a secret message..."
It first sends the position of the symbol "t" rounded to some accuracy, then the simulation is run for some predetermined amount of time such as 1000 seconds of simulation time, then it sends the new position of the symbol "h"  and repeats for the whole message.
To decode the other computer looks for which symbol is at the first position sent from the other computer, then runs the simulation for the predetermined amount of time then which symbol is at the second position sent, etc..
I think that because of the simulated entropy, a third party listening in on the message, but not knowing the initial parameters of the simulation, will never be able to decode it. Even if they have an encoded message and a plaintext decoding of the same message, only positions are given, not motion vectors or the predetermined time scale, so it would be impossible to generalize to decoding other messages.
I think it's better than other methods because those all use some sort of formula with a key; the formula might be very complex, but then there's always the possibility of finding its inverse. Here there is no formula for where a symbol will be after a certain amount of time: the simulation simply has to be run to find out, so there is no inverse either.
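A minimal sketch of the protocol, with toy values throughout: a made-up linear-congruential setup for the shared initial state, wall bounces only, and no symbol-symbol collisions, so this illustrates the encode/decode flow rather than the real (or a secure) simulation:

```python
import string

SIZE = 100.0  # box side length (arbitrary)

def make_gas(seed=12345):
    # both computers build the identical starting configuration from a shared seed
    state = seed
    def rnd():
        nonlocal state
        state = (state * 1103515245 + 12345) % (2**31)
        return state / float(2**31)
    gas = {}
    for ch in string.ascii_lowercase + " .":
        # position (x, y) and motion vector (vx, vy) for each symbol
        gas[ch] = [rnd()*SIZE, rnd()*SIZE, rnd()*2 - 1, rnd()*2 - 1]
    return gas

def step(gas, ticks=1000):
    # advance the deterministic simulation for a predetermined amount of time
    for _ in range(ticks):
        for s in gas.values():
            for k in (0, 1):
                s[k] += s[k+2]
                if s[k] < 0 or s[k] > SIZE:   # bounce off the walls
                    s[k+2] = -s[k+2]
                    s[k] = min(max(s[k], 0), SIZE)

def encode(gas, message):
    positions = []
    for ch in message:
        # send the symbol's current position, rounded to some accuracy
        positions.append((round(gas[ch][0], 2), round(gas[ch][1], 2)))
        step(gas)
    return positions

def decode(gas, positions):
    out = []
    for (x, y) in positions:
        # the symbol nearest the transmitted position is the next character
        out.append(min(gas, key=lambda c: (gas[c][0]-x)**2 + (gas[c][1]-y)**2))
        step(gas)
    return "".join(out)
```

Two gases built from the same seed stay in lockstep, so the receiver's copy lands each symbol at the transmitted position while an eavesdropper sees only a stream of rounded coordinates.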

## Thursday, November 6, 2014

### 4r's genetic problem solving

The problem I developed this technique for is to fill a square with 10 points so that the minimum distance between any two points is maximized, with wraparound. It produced the following solution after 10000 generations.
The 4 r's mnemonic stands for:
1. Randomly produce a population of solutions
2. Reduce the population to the best performing subset of the population.
3. Reintroduce some new random solutions
4. Reproduce the solutions with one another to make offspring solutions.

For the example above, the first step was to produce 120 random solutions, each consisting of 10 random points in the square. Then the top 10 solutions were kept and the rest discarded. Then 10 more random solutions are added; this step turned out to be very important, because without it the population stagnates and stops improving. The reproduction step was to take every pairwise combination of solutions, average the locations of their 10 corresponding points to produce an offspring, and add that to the population. Then steps 2 through 4 are repeated indefinitely.
**Note**
It actually converges quite quickly to nearly the best solution it will be able to find: 100 generations takes only a couple of seconds while 10000 takes around 10 minutes, but the difference between the top solutions they found was very small. The great improvement is in the first handful of generations, with gradual improvement after that. As for performance: after 100 generations a total of around 1000 random solutions have been added in, yet a different approach that simply took the best of 1000 random solutions was nowhere near as good, so the improvement comes from the process, not simply the number of solutions considered. It even did better than looking at 100000 random solutions, which took considerably longer to generate.

**Python source code**
```python
import random

def populate(n):
    # a candidate solution: n random points in the unit square
    p = []
    for i in range(0, n):
        p.append((random.random(), random.random()))
    return p

def distance(p1, p2):
    return ((p1[0] - p2[0])**2.0 + (p1[1] - p2[1])**2.0)**.5

def fitness(p):
    # minimum pairwise distance, with wraparound in x and y
    mindistance = 2.0
    for i in range(0, len(p) - 1):
        for j in range(i + 1, len(p)):
            m = distance(p[i], p[j])
            l = distance(p[i], (p[j][0] - 1, p[j][1]))
            r = distance(p[i], (p[j][0] + 1, p[j][1]))
            u = distance(p[i], (p[j][0], p[j][1] + 1))
            d = distance(p[i], (p[j][0], p[j][1] - 1))
            mindistance = min(mindistance, m, l, r, u, d)
    return mindistance

def breed(pops):
    # pops is a list of (fitness, solution) pairs
    pops2 = list(pops)
    # reintroduce some new random solutions
    for i in range(0, len(pops)):
        p = populate(len(pops[0][1]))
        pops2.append((fitness(p), p))
    # reproduce: average the points of every pairwise combination of solutions
    for i in range(0, len(pops) - 1):
        for j in range(i, len(pops)):
            p = []
            for k in range(0, len(pops[i][1])):
                p.append(((pops[i][1][k][0] + pops[j][1][k][0]) / 2.0,
                          (pops[i][1][k][1] + pops[j][1][k][1]) / 2.0))
            pops2.append((fitness(p), p))
    return pops2

def main():
    n = 10
    # start from a pool of random solutions, sorted worst to best
    pops = []
    for i in range(0, n * n + 1):
        p = populate(n)
        pops.append((fitness(p), p))
    pops = sorted(pops)
    for g in range(0, 10000):
        # reduce to the 10 best, breed, and repeat
        pops2 = sorted(breed(pops[-10:]))
        pops = sorted(breed(pops2[-10:]))
        print(pops2[-1])

main()
```