analytics

Thursday, June 25, 2015

Comparison of a 16-note scale to the 12-note scale

I noticed that Western music's 12-note scale (in black) is pretty close to a 16-note scale built off of a power of 2, shifted about 5 Hz left or right, with 4 notes left out...
Also notice how many nice rational harmonies there are, like C = 256, E = 320, G = 384 in the standard notation with the new frequencies: 320/256 = 1.25 and 384/320 = 1.2... A at 432 over G at 384 gives 1.125 for the ratio, etc...

And I noticed that in standard notation with the new frequencies, E:C is 1 + 1/4, G:E is 1 + 1/5, and the highest C:G is 1 + 1/3, making the nice 4-5-3 ratios for that chord.
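Here's a quick Python check of those ratios, using only the frequencies quoted above:

from fractions import Fraction

# Frequencies quoted above: C = 256, E = 320, G = 384, A = 432, and the
# octave C' = 512 (all in Hz).
freqs = {"C": 256, "E": 320, "G": 384, "A": 432, "C'": 512}

for hi, lo in [("E", "C"), ("G", "E"), ("A", "G"), ("C'", "G")]:
    r = Fraction(freqs[hi], freqs[lo])
    print(hi + ":" + lo, "=", r)
# prints E:C = 5/4, G:E = 6/5, A:G = 9/8, C':G = 4/3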

Thursday, June 18, 2015

archimedean spiral bluish noise

I noticed that if you make an Archimedean spiral in polar coordinates, starting at the origin and reaching [1, 0] at theta = 2*pi, you can solve for the next point on the curve that is a distance 1 from the last point, and the resulting pattern of dots has a sort of blue noise look...
In fact it's easy to prove that any two points are at least distance 1 apart, so it has that quality that blue noise has...
The Maple commands to solve for the next point...
If you add a couple more points at the very middle of the spiral and crop to an offset square, it starts to look very blue-noisy...
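The Maple commands aren't reproduced here, but a minimal Python sketch of the same construction, assuming the spiral r = theta/(2*pi) so that r = 1 at theta = 2*pi, might look like:

import math

def point(theta):
    # Archimedean spiral r = theta/(2*pi): starts at the origin and
    # reaches (1, 0) at theta = 2*pi.
    r = theta / (2 * math.pi)
    return (r * math.cos(theta), r * math.sin(theta))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def next_theta(theta0):
    # Bisect for the first theta past theta0 whose point is distance 1
    # from point(theta0). One full turn out the distance is exactly 1
    # again, so the crossing always lies in (theta0, theta0 + 2*pi].
    p0 = point(theta0)
    lo, hi = theta0, theta0 + 2 * math.pi
    for _ in range(60):
        mid = (lo + hi) / 2
        if dist(point(mid), p0) < 1:
            lo = mid
        else:
            hi = mid
    return hi

theta = 0.0
pts = [point(theta)]
for _ in range(500):
    theta = next_theta(theta)
    pts.append(point(theta))
# pts now holds the unit-separated points; scatter-plot them to see the
# blue-noise-like pattern.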

Maybe it would look even better if you randomly vary the spacing along the curve, using lengths a little greater than or equal to 1 instead of exactly 1...


Sunday, June 14, 2015

"Thresher": Sorting networks for 2^n inputs

As an example, here is the "Thresher" sorting network for n=4, i.e. 16 inputs; it has 99 comparators and a depth of 15...

The pattern might be more obvious from the code listed below, but it is completely procedurally generated for any number of inputs that is a power of 2, and it's the same pattern every time. The 32-input network, for example, gets harder to draw, but I found that it sorts random input every time I've tried... The depth for X inputs is always X-1...

The source code below lets you change, in main, the power of 2 that sets the size of the network; it designs the Thresher network and tests it on a randomly ordered list of that size... The code runs really fast; it designed and tested the 32-input Thresher network almost instantly...
**Go Source Code**
package main

import (
    "fmt"
    "math/rand"
    "time"
)

// swap compares positions x and y of l and exchanges them if out of order.
func swap(l []int, x int, y int) {
    if l[x] > l[y] {
        l[x], l[y] = l[y], l[x]
    }
}

// y runs comparator passes at distances a, 2a, 4a, ... up to b, starting
// at offset y; each pass compares elements i apart within consecutive
// blocks of width 2i, printing each comparator as it is applied.
func y(l []int, y int, a int, b int) {
    for i := a; i <= b; i *= 2 {
        x := y
        j := 0
        for x+i+j <= len(l)-1 {
            fmt.Println(x+j, x+j+i)
            swap(l, x+j, x+j+i)
            j++
            if j == i {
                x = x + j + i
                j = 0
            }
        }
    }
}

// z runs comparator passes at decreasing distances from a down to b+1,
// shifting the starting offset right by one on each pass.
func z(l []int, y int, a int, b int) {
    c := 0
    for i := a; i > b; i-- {
        x := y + c
        c++
        j := 0
        for x+j+i <= len(l)-1 {
            fmt.Println(x+j, x+j+i)
            swap(l, x+j, x+j+i)
            j++
            if j == i {
                x = x + j + i
                j = 0
            }
        }
    }
}

// sortn designs and applies the Thresher network for len(l) a power of 2,
// alternating doubling-distance (y) and shrinking-distance (z) passes.
func sortn(l []int) {
    c := 0
    for i := 2; i <= len(l)/2; i *= 2 {
        y(l, c, 1, len(l)/i)
        c++
        b := len(l) / (2 * i)
        if b > 1 {
            z(l, c, len(l)/i-1, len(l)/(2*i))
        } else {
            z(l, c, len(l)/i-1, 0)
        }
    }
}

func main() {
    rand.Seed(time.Now().UTC().UnixNano())
    l := make([]int, 32)
    for i := 0; i < 32; i++ {
        l[i] = rand.Intn(100)
    }
    sortn(l)
    fmt.Println(l)
}

Monday, June 8, 2015

Symmetrical sorting network

This is a sorting network of 67 comparators and depth 10, and the obvious symmetry suggests that similar designs might be found for any power of 2... The lighter-colored part is how a lot of the best designs start; the rest is what I came up with to finish it...
Notes:
To make the pattern more complete but less optimal, a group of comparisons at distance 5 apart could be added between the groups at distance 6 and distance 4. So I'll conjecture that it's always possible to extend the pattern above into a network of depth log[2](n) + n/2 for n a power of 2. I'm not sure about the total number of comparators, though, because as you can see above, they decrease in number until the comparisons are 3 apart, at which point they suddenly go all the way across... and the same thing happens again when they are 1 apart... Maybe that always happens when the comparators are n/(2^i) - 1 apart for i greater than or equal to 2? I'm not sure; a 32-item network would be really hard to test exhaustively because of the 2^32 zero-one inputs, so maybe some proofs could be constructed instead... Or alternatively, just build one according to the plan described for 1024 items and try a great number of random arrangements of that many items; if it sorts them all, there's a good probability it always works...
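As a sketch of that kind of testing (not from the post; it assumes each comparator [i, j] puts the smaller value on wire i and that the finished network sorts ascending):

import itertools
import random

def apply_network(network, items):
    # Apply each comparator (i, j) in order; the smaller value goes to wire i.
    l = list(items)
    for i, j in network:
        if l[i] > l[j]:
            l[i], l[j] = l[j], l[i]
    return l

def test_network(network, n, random_trials=100000):
    # Zero-one principle: a network sorts every length-n input iff it sorts
    # all 2^n inputs of 0's and 1's, so test exhaustively when that's
    # feasible and fall back to random shuffles otherwise.
    if n <= 16:
        return all(apply_network(network, bits) == sorted(bits)
                   for bits in itertools.product((0, 1), repeat=n))
    for _ in range(random_trials):
        l = list(range(n))
        random.shuffle(l)
        if apply_network(network, l) != sorted(l):
            return False
    return True  # passed random testing only; this is evidence, not a proof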



Friday, June 5, 2015

Code for finding close to optimal sorting networks

The code listed below finds sorting networks as described here:
http://en.wikipedia.org/wiki/Sorting_network

The sorting network it found for 12 items is close to what's been proven optimal: it took 43 comparators, versus the proven optimum of between 37 and 39...
[[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11], [0, 2], [1, 3], [1, 2], [4, 6], [5, 7], [5, 6], [8, 10], [9, 11], [9, 10], [0, 4], [1, 5], [2, 6], [3, 7], [1, 4], [2, 4], [3, 5], [3, 4], [5, 6], [0, 8], [1, 9], [2, 10], [3, 11], [1, 8], [2, 8], [3, 9], [4, 10], [5, 11], [3, 8], [4, 8], [5, 9], [6, 10], [7, 11], [5, 8], [6, 8], [7, 9], [7, 8], [9, 10]]

A slight modification of the code found one with one more step (44) but one less depth, for a depth of 9:
[[10, 11], [8, 9], [6, 7], [4, 5], [2, 3], [0, 1], [9, 11], [8, 10], [9, 10], [5, 7], [4, 6], [5, 6], [1, 3], [0, 2], [1, 2], [7, 11], [6, 10], [5, 9], [4, 8], [7, 10], [7, 9], [6, 8], [7, 8], [5, 6], [3, 11], [2, 10], [1, 9], [0, 8], [3, 10], [3, 9], [2, 8], [1, 7], [0, 6], [3, 8], [3, 7], [2, 6], [1, 5], [0, 4], [3, 6], [3, 5], [2, 4], [3, 4], [1, 2], [10, 11]]

I found one with 64 comparators and a depth of 10 by "seeding" the process with these starting values, whose pattern I had noticed in the results above:
Seed Values: [[15, 14], [13, 12], [11, 10], [9, 8], [7, 6], [5, 4], [3, 2], [1, 0], [15, 13], [14, 12], [11, 9], [10, 8], [7, 5], [6, 4], [3, 1], [2, 0]]
And the whole network:
[[15, 14], [13, 12], [11, 10], [9, 8], [7, 6], [5, 4], [3, 2], [1, 0], [15, 13], [14, 12], [11, 9], [10, 8], [7, 5], [6, 4], [3, 1], [2, 0], [13, 14], [9, 10], [5, 6], [1, 2], [11, 15], [9, 13], [10, 14], [8, 12], [10, 15], [8, 13], [13, 14], [9, 10], [8, 15], [3, 7], [1, 5], [2, 6], [0, 4], [2, 7], [0, 5], [5, 6], [1, 2], [0, 7], [7, 15], [5, 13], [6, 14], [4, 12], [0, 8], [2, 10], [1, 9], [3, 11], [7, 11], [5, 9], [6, 10], [4, 8], [10, 15], [8, 13], [4, 9], [6, 11], [0, 5], [2, 7], [13, 14], [9, 10], [8, 15], [5, 6], [4, 11], [1, 2], [0, 7], [14, 15]]

The concept is that you start with a list A of every unique string of a certain length of 0's and 1's. You provisionally try every possible pairwise swap and find the one that, after it is applied to every element of A (producing a list B of strings of 1's and 0's), gives the greatest intersection of A and B, meaning the most strings are shared between the two. The greatest intersection also means the smallest union, so the union, which becomes the new A for the next iteration, generally shrinks. I found that there is a non-zero intersection (and a smaller union!) all the way down until you end up with one string for each possible weight (where each 1 in the initial string adds one to the weight). This means every possible string has been mapped by the swaps to one of those sorted representatives, and thus all of them can be sorted with those swaps. By the zero-one principle, every list of L sortable items can then be sorted just as well with the same swaps...
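For example, with strings of length 2, A = [00, 01, 10, 11]; the swap [0, 1] sends 10 to 01, which is already present, so the intersection has three elements and the new A is the union [00, 01, 11]: one string for each weight 0, 1, 2. So that single swap is already a sorting network for 2 items.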

I know this isn't explained well in words, but the code is fairly simple and runs fairly quickly even written in single-threaded Python; finding the network above only took a couple of minutes...

**Source Code**
def copy(a):
    # Return a shallow copy of list a.
    a1 = []
    for e in a:
        a1.append(e)
    return a1

def cullswap(s, sw):
    # Apply comparator sw to every string in s; a string that becomes a
    # duplicate of another entry is dropped, shrinking the list.
    t = []
    for n in range(0, len(s)):
        a = copy(s[n])
        if s[n][sw[0]] > s[n][sw[1]]:
            swap(a, sw[0], sw[1])
            if finddupe(a, n, s) == False:
                t.append(a)
        else:
            t.append(s[n])
    return t
def swap(a, i, j):
    # Exchange elements i and j of a in place.
    temp = a[i]
    a[i] = a[j]
    a[j] = temp
def finddupe(a, n, s):
    # Return True if string a already occurs in s at an index other than n.
    for e in range(0, len(s)):
        if s[e] == a and e != n:
            return True
    return False
def bestswap(s):
    # Try every comparator [i, j] and return the one that maps the most
    # strings of s onto strings already present (greatest intersection).
    best = 0
    bestij = []
    for i in range(0, len(s[0])-1):
        for j in range(i+1, len(s[0])):
            count = 0
            for n in range(0, len(s)):
                if s[n][i] > s[n][j]:
                    a = copy(s[n])
                    swap(a, i, j)
                    if finddupe(a, n, s) == True:
                        count +=1
            if count > best:
                best = count
                bestij = [i,j]
    return bestij, best
def addelement(s, a, listlength):
    # Recursively build every binary string of length listlength into s.
    if len(a) < listlength:
        a1 = copy(a)
        a1.append(0)
        addelement(s, a1, listlength)
        a2 = copy(a)
        a2.append(1)
        addelement(s, a2, listlength)
    else:
        s.append(a)
def main():
    listlength = 12
    s = []
    addelement(s, [], listlength)
    swaps = []
    i = 0
    while len(s) > 0:
        sw, best = bestswap(s)
        if best == 0:
            break  # no comparator merges any strings; the network is finished
        i += 1
        print(i, sw)
        swaps.append(sw)
        s = cullswap(s, sw)
    print(swaps, len(swaps))


main()

**Slower, but depth-optimizing source code**
def copy(a):
    # Return a shallow copy of list a.
    a1 = []
    for e in a:
        a1.append(e)
    return a1

def cullswap(s, sw):
    t = []
    for n in range(0, len(s)):
        a = copy(s[n])
        if s[n][sw[0]] > s[n][sw[1]]:
            swap(a, sw[0], sw[1])
            if finddupe(a, n, s) == False:
                t.append(a)
        else:
            t.append(s[n])
    return t
def swap(a, i, j):
    temp = a[i]
    a[i] = a[j]
    a[j] = temp
def finddupe(a, n, s):
    for e in range(0, len(s)):
        if s[e] == a and e != n:
            return True
    return False
def finddepth(swaps, newswap):
    # Distance back to the most recent earlier comparator that shares a
    # wire with newswap (the scan stops before index 0); larger values
    # mean newswap can sit in an earlier layer of the network.
    for k in range(len(swaps)-1, 0, -1):
        if swaps[k][0] in newswap or swaps[k][1] in newswap:
            return len(swaps)-k
    return len(swaps)
def bestswap(s, swaps):
    # As above, but among comparators tied on count, prefer the one whose
    # nearest conflicting comparator is furthest back, to reduce depth.
    best = 0
    bestij = []
    d = -1
    for i in range(0, len(s[0])-1):
        for j in range(i+1, len(s[0])):
            count = 0
            for n in range(0, len(s)):
                if s[n][i] > s[n][j]:
                    a = copy(s[n])
                    swap(a, i, j)
                    if finddupe(a, n, s) == True:
                        count += 1
            if count > best or (count == best and finddepth(swaps, [i, j]) > d):
                d = finddepth(swaps, [i, j])
                best = count
                bestij = [i, j]
    return bestij, best
def addelement(s,a, listlength):
    if len(a) < listlength:
        a1 = copy(a)
        a1.append(0)
        addelement(s, a1, listlength)
        a2 = copy(a)
        a2.append(1)
        addelement(s, a2, listlength)
    else:
        s.append(a)
def main():
    listlength = 14
    s = []
    addelement(s, [], listlength)
    swaps = []
    i = 0
    while len(s) > 0:
        sw, best = bestswap(s, swaps)
        if best == 0:
            break  # no comparator merges any strings; the network is finished
        i += 1
        print(i, sw)
        swaps.append(sw)
        s = cullswap(s, sw)
    print(swaps, len(swaps))

main()


**Code for starting with a certain "seeding" network**
def copy(a):
    # Return a shallow copy of list a.
    a1 = []
    for e in a:
        a1.append(e)
    return a1

def cullswap(s, sw):
    t = []
    for n in range(0, len(s)):
        a = copy(s[n])
        if s[n][sw[0]] > s[n][sw[1]]:
            swap(a, sw[0], sw[1])
            if finddupe(a, n, s) == False:
                t.append(a)
        else:
            t.append(s[n])
    return t
def swap(a, i, j):
    temp = a[i]
    a[i] = a[j]
    a[j] = temp
def finddupe(a, n, s):
    for e in range(0, len(s)):
        if s[e] == a and e != n:
            return True
    return False
def finddepth(swaps, newswap):
    # Distance back to the most recent earlier comparator that shares a
    # wire with newswap (the scan stops before index 0); larger values
    # mean newswap can sit in an earlier layer of the network.
    for k in range(len(swaps)-1, 0, -1):
        if swaps[k][0] in newswap or swaps[k][1] in newswap:
            return len(swaps)-k
    return len(swaps)
def bestswap(s, swaps, starter):
    # Emit the comparators of the seed network first; once the seed is
    # exhausted, fall back to the greedy depth-aware search above.
    best = 0
    bestij = []
    if len(swaps) < len(starter):
        return starter[len(swaps)], 1
    d = -1
    for i in range(0, len(s[0])-1):
        for j in range(i+1, len(s[0])):
            count = 0
            for n in range(0, len(s)):
                if s[n][i] > s[n][j]:
                    a = copy(s[n])
                    swap(a, i, j)
                    if finddupe(a, n, s) == True:
                        count += 1
            if count > best or (count == best and finddepth(swaps, [i, j]) > d):
                d = finddepth(swaps, [i, j])
                best = count
                bestij = [i, j]
    return bestij, best
def addelement(s,a, listlength):
    if len(a) < listlength:
        a1 = copy(a)
        a1.append(0)
        addelement(s, a1, listlength)
        a2 = copy(a)
        a2.append(1)
        addelement(s, a2, listlength)
    else:
        s.append(a)
def main():
    listlength = 16
    s = []
    addelement(s, [], listlength)
    starter = [[15, 14], [13, 12], [11, 10], [9, 8], [7, 6], [5, 4], [3, 2], [1, 0], [15, 13], [11, 9], [7, 5], [3, 1], [4, 0], [5, 1], [6, 2], [7, 3], [8, 12], [9, 13], [10, 14], [11, 15]]
    swaps = []
    i = 0
    while len(s) > 0:
        sw, best = bestswap(s, swaps, starter)
        if best == 0:
            break  # no comparator merges any strings; the network is finished
        i += 1
        print(i, sw)
        swaps.append(sw)
        s = cullswap(s, sw)
    print(swaps, len(swaps))

main()

Wednesday, June 3, 2015

Prime sort

My idea: suppose you have a list L of N items, and a list of primes P1...Q, including 1, listed in reverse order like so:

Here Q is 89...
[89, 83, 79, 73, 71, 67, 63, 61, 59, 53, 47, 43, 41, 37, 31, 29, 23, 19, 17, 13, 11, 7, 5, 3, 2, 1]

My algorithm first compares the number at position i in L to the number at position i+P1, for every i from 0 up to N-P1-1; so in the example above it would compare the item at position 0 to the one at position 89, position 1 to position 90, and so on, up to the pair at positions N-90 and N-1.

This is then done for every P in the list, and I found that with Q = 89 this sorted every random list of 1000 items that I tried.

Q varies with N; I found that:

N       Q
32      13
100     23
1000    89

It seems to me that the relationship might be that the number of primes in the list should be around the square root of N...
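Under that guess, a gap list for a given N could be generated like the sketch below (a hypothetical helper, not code from the post; note it gives Q = 29 for N = 100 rather than the 23 in the table, so the relationship is only approximate):

def gap_list(n):
    # Conjectured rule: take about sqrt(n) primes, largest first, then 1.
    target = max(1, round(n ** 0.5))
    primes, cand = [], 2
    while len(primes) < target:
        # cand is prime iff no smaller prime divides it.
        if all(cand % p != 0 for p in primes):
            primes.append(cand)
        cand += 1
    return primes[::-1] + [1]

# gap_list(100) -> [29, 23, 19, 17, 13, 11, 7, 5, 3, 2, 1]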

The single-threaded source code listed below is very simple, but it's actually possible to make a concurrent version, where the loop for prime P(j) starts as soon as the i counter of the P(j-1) loop is P(j-1)+1 positions into the list... So potentially many "waves" of comparisons could be moving through the list simultaneously...

**For tournaments**

I think it might be good for a tournament in some game where the total number of matches each person plays should be minimized rather than the number of concurrent matches, and where the most important thing is to make as many good matches as possible. Each round, everyone has a chance to climb up the ladder as far as they can go, and the people they play are generally closer and closer to their own skill level as the rounds progress, making for better and better matches. Some concurrency is possible, as the jth round of matches can start as soon as the prior round's i counter is P(j-1) positions up the list, and so on, until many rounds are travelling like waves through the list... Mathematically, someone could work their way up the ladder one step at a time from the bottom to the top, but statistically that's improbable: anyone good enough to win that many consecutive matches against increasingly tough opponents wouldn't be that far down the ladder by the last round. The nature of this algorithm is that losing doesn't punish anyone too much; they could still win the whole tournament from anywhere in the randomized list in the final round, though it's easier to climb early on... And losses matter less and less to a person's final ranking the later they happen... Adding a couple of rounds beyond what's strictly necessary for the ideal case accentuates the fairness of the final ranking...
Source Code:
import random
def swap(l, i, ip):
    # Compare-and-exchange: put the smaller of the pair first.
    if l[i] > l[ip]:
        l[i], l[ip] = l[ip], l[i]
def main():
    l = []
    for i in range(0, 1000):
        l.append(i)
    random.shuffle(l)
    # The gap sequence from the text, largest to smallest, ending in 1.
    p = [89, 83, 79, 73, 71, 67, 63, 61, 59, 53, 47, 43, 41, 37, 31, 29, 23, 19, 17, 13, 11, 7, 5, 3, 2, 1]
    print(l)
    # One compare-exchange pass over the list for each gap.
    for prime in p:
        for i in range(0, 1000 - prime):
            swap(l, i, i + prime)
    print(l)
main()

Tuesday, June 2, 2015

±40% curve

I found a curve that smoothly piece-wise interpolates a set of a wave's values along the integers, and proved something interesting about it...



Given final and initial y values for each interval between integers, the curve smoothly connects those points such that the integral of the curve over the interval is either very close to 40% greater than the integral of the straight line connecting the two points, or very close to 40% less, depending on whether the curve is concave up or down and on which side of the x-axis it lies... The two formulas trade concavity as follows: one is concave up when the final point is greater than or equal to the initial point, in which case the other is concave down, and vice versa... I've chosen the above example to alternate concavities by alternating formulas; it doesn't work for always-increasing or always-decreasing functions...
**Proofs**
The integral of the curve written above is very nearly 0.6 times the sum of the endpoint values, which is about 60% of what the integral would be for the straight line connecting the two points, by the average-value formula...
Note the average value is multiplied by 2, giving F + I, because the interval runs from -1 to 1, which is two units wide...
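In symbols, the straight line from (-1, I) to (1, F) integrates to exactly F + I:

\int_{-1}^{1}\left(\frac{F+I}{2}+\frac{F-I}{2}\,x\right)dx \;=\; 2\cdot\frac{F+I}{2} \;=\; F+I

so a curve whose integral is 0.6(F + I) sits 40% below the chord's integral, and one whose integral is 1.4(F + I) sits 40% above.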
And similarly for the other concavity...
**Notes**
It's interesting to compare this to other types of interpolation: with polynomial interpolation, for example, you need 3 points for a second-order curve, whereas this uses only the two endpoints and has a closed form. I think that might make it computationally simpler if one has a fast way to calculate the ln and exponential functions involved...

**Future investigation**
I'd like to know whether, in terms of arclength, this curve is somehow optimal in terms of smoothness...