It's known that the optimal sorting network for 16 entries, as described here:
http://en.wikipedia.org/wiki/Sorting_network
is between 53 and 60 compares long. The approach I describe here could only find a sorting network of 121 compares, which I would call somewhat suboptimal: it would require roughly twice as many physical components to implement in hardware. But my code can find it quickly; it only takes a couple of minutes (in single-threaded Python) to find the 121-compare network for 16 entries, which came out to:
[(0, 15), (1, 15), (0, 14), (2, 15), (1, 14), (0, 13), (3, 15), (2, 14), (1, 13), (0, 12), (4, 15), (3, 14), (2, 13), (1, 12), (0, 11), (5, 15), (4, 14), (3, 13), (2, 12), (1, 11), (0, 10), (6, 15), (5, 14), (4, 13), (3, 12), (2, 11), (1, 10), (0, 9), (7, 15), (6, 14), (5, 13), (4, 12), (3, 11), (2, 10), (1, 9), (0, 8), (8, 15), (7, 14), (6, 13), (5, 12), (4, 11), (3, 10), (2, 9), (1, 8), (0, 7), (9, 15), (8, 14), (7, 13), (6, 12), (5, 11), (4, 10), (3, 9), (2, 8), (1, 7), (0, 6), (10, 15), (9, 14), (8, 13), (7, 12), (6, 11), (5, 10), (4, 9), (3, 8), (2, 7), (1, 6), (0, 5), (11, 15), (10, 14), (9, 13), (8, 12), (7, 11), (6, 10), (5, 9), (4, 8), (3, 7), (2, 6), (1, 5), (0, 4), (12, 15), (11, 14), (10, 13), (9, 12), (8, 11), (7, 10), (6, 9), (5, 8), (4, 7), (3, 6), (2, 5), (1, 4), (0, 3), (13, 15), (12, 14), (11, 13), (10, 12), (9, 11), (8, 10), (7, 9), (6, 8), (5, 7), (4, 6), (3, 5), (2, 4), (1, 3), (0, 2), (14, 15), (13, 14), (12, 13), (11, 12), (10, 11), (9, 10), (8, 9), (7, 8), (6, 7), (5, 6), (4, 5), (3, 4), (2, 3), (1, 2), (0, 1)]
My approach was to compile a list of every combination of 0's and 1's of length 16, find the longest-distance swap necessary across all of those lists, make that swap in every list, then find the next longest swap necessary, and so on until all the lists are sorted. It's known by the zero-one theorem that a network that sorts all of these 0/1 lists will also sort any list of arbitrary elements...
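That zero-one theorem also gives a direct way to check any candidate network. Here is a minimal sketch (my addition), exercised on a small, well-known 5-compare network for 4 inputs rather than the 121-compare one below:

from itertools import product

def is_sorting_network(pairs, n):
    # Zero-one principle: a comparator network sorts every input of length n
    # if and only if it sorts every list of 0s and 1s of length n.
    for bits in product([0, 1], repeat=n):
        t = list(bits)
        for (i, j) in pairs:
            if t[i] > t[j]:
                t[i], t[j] = t[j], t[i]
        if t != sorted(t):
            return False
    return True

# The standard optimal 5-compare network for 4 inputs, just to exercise the checker.
print(is_sorting_network([(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)], 4))  # True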
**Notes**
If you shuffle the list of lists, it finds a different sorting network, but I haven't yet found one better than 121 compares that way. These different 121-compare networks might be good starting candidates for a genetic approach, though... Also, thanks to John Owens, who recognized that my rough idea had to do with sorting networks and pointed me to some good references to continue...
**Source Code**
def copy(l):
    # Return a shallow copy of a list.
    l2 = []
    for i in l:
        l2.append(i)
    return l2

def compare(t, i, j):
    # True if positions i and j are out of order.
    if t[i] > t[j]:
        return True
    return False

def swap(t, i, j):
    # Compare-and-swap: put the smaller of t[i], t[j] first.
    if t[i] > t[j]:
        temp = t[j]
        t[j] = t[i]
        t[i] = temp
        return True
    return False

def addelement(s, a, weight, listlength):
    # Recursively build every 0/1 list of the given length,
    # storing each one alongside its sorted version.
    if len(a) < listlength:
        a1 = copy(a)
        a1.append(0)
        addelement(s, a1, weight + 1, listlength)
        a2 = copy(a)
        a2.append(1)
        addelement(s, a2, weight, listlength)
    else:
        s.append([a, sorted(a)])

def bestswap(s, listlength):
    # Find the longest-distance swap still needed by any list,
    # or return [-1, -1] if every list is already sorted.
    bestr = 0
    bestl = listlength
    best = 0
    allsorted = True
    for n in range(0, len(s)):
        if s[n][0] != s[n][1]:
            allsorted = False
    if allsorted == True:
        return [-1, -1]
    for n in range(0, len(s)):
        for l in range(0, listlength - best):
            if s[n][0][l] == 1:
                for r in range(listlength - 1, l, -1):
                    if s[n][0][r] == 0:
                        if r - l > best:
                            best = r - l
                            bestl = l
                            bestr = r
    return bestl, bestr

def main():
    listlength = 16
    s = []
    a = []
    addelement(s, a, 0, listlength)
    print(len(s))
    pair = [0, 0]
    i = 0
    pairs = []
    while pair != [-1, -1]:
        pair = bestswap(s, listlength)
        print(pair)
        if pair != [-1, -1]:
            pairs.append(pair)  # only record real compares, not the [-1, -1] sentinel
            for j in range(0, len(s)):
                swap(s[j][0], pair[0], pair[1])
            i += 1
    print(i)
    print(pairs)

main()
Wednesday, May 27, 2015
Centroid Drift Point in Polygon algorithm
Suppose you have a polygon, possibly concave, like so:
I found an algorithm that seems to work for telling whether a point is inside the polygon or not... The first step in the reasoning is to consider the normalized vectors from the point P to the set of vertices V. Those all lie on a unit circle, but the picture looks very different depending on whether P is inside the polygon or not...
So my idea was to consider what happens when every connected pair of vertices on the polygon is replaced with the normalized midpoint of those 2 vertices...
You can see that if the point is inside, the normalized polygon gets closer to a regular polygon, and if the point is outside, the vertices bunch together. So a way to measure that is to ask whether the centroid of the transformed polygon has moved closer to P or farther away after the midpoint operation...
A test of the algorithm...
In every case shown, and some others I've tried, it could tell whether the point was inside or outside!
**Note**
I think I've since thought of a counterexample, for cases where the winding number around the point exceeds 1 over part of the traversal.
**Source Code**
def distance(point):
    # Euclidean distance from the origin.
    return (point[0]**2.0 + point[1]**2.0)**.5

def normal(point):
    # Normalize a vector to unit length (or return [0, 0] for the zero vector).
    d = distance(point)
    if d == 0:
        return [0, 0]
    return [point[0]/d, point[1]/d]

def centroid(v):
    # Centroid (average) of a list of 2D points.
    sumx = 0
    sumy = 0
    for i in range(0, len(v)):
        sumx += v[i][0]
        sumy += v[i][1]
    return [1.0*sumx/len(v), 1.0*sumy/len(v)]

def halves(v):
    # Replace each connected pair of vertices with the normalized midpoint.
    v2 = []
    for i in range(0, len(v)-1):
        v2.append(normal([(v[i][0]+v[i+1][0])/2.0, (v[i][1]+v[i+1][1])/2.0]))
    v2.append(normal([(v[len(v)-1][0]+v[0][0])/2.0, (v[len(v)-1][1]+v[0][1])/2.0]))
    return v2

def pointinpoly(polygon, point):
    # Project the vertices onto the unit circle around the point, apply the
    # midpoint operation, and compare how far the centroid sits from the
    # point before and after.
    v = []
    for i in range(0, len(polygon)):
        v.append(normal([polygon[i][0]-point[0], polygon[i][1]-point[1]]))
    c1 = centroid(v)
    v2 = halves(v)
    c2 = centroid(v2)
    d1 = distance(c1)
    d2 = distance(c2)
    if d1 - d2 >= 0:
        return True
    else:
        return False

def main():
    polygon = [[3.22, 2.18], [4.24, 4.7], [7, 5], [9.82, 3.4], [7.58, .52], [6.14, 2.52]]
    points = [[6, 4.68], [8.4, 4.28], [4.1, 4.52], [6.97, 1.04], [8.7, 3.07], [4.16, 2.12]]
    for i in range(0, len(points)):
        point = points[i]
        print(point, pointinpoly(polygon, point))

main()
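Given the note above about a possible counterexample, one way to hunt for one (my addition, not part of the original method) is to compare pointinpoly against the standard even-odd ray-crossing test, which is exact for simple polygons, over many random points; the sampling ranges below are just chosen to bracket the example polygon, and points that land almost exactly on an edge may disagree for purely numerical reasons:

import random

def crossing_number(polygon, point):
    # Classic even-odd ray-casting test: count how many edges a horizontal ray
    # from the point crosses; an odd count means the point is inside.
    inside = False
    n = len(polygon)
    j = n - 1
    for i in range(n):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > point[1]) != (yj > point[1]) and \
           point[0] < (xj - xi) * (point[1] - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def hunt_for_disagreement(polygon, tries=10000):
    # Sample random points around the polygon and report any point where the
    # centroid-drift test and the ray-crossing test disagree.
    for _ in range(tries):
        p = [random.uniform(0, 12), random.uniform(-1, 7)]
        if pointinpoly(polygon, p) != crossing_number(polygon, p):
            return p
    return None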
Tuesday, May 26, 2015
Spearhead sort
This sort starts with the list of items:
[13, 8, 10, 18, 0, 4, 2, 11, 12, 14, 7, 19, 1, 3, 17, 6, 15, 5, 9, 16]
To start, it takes the last three items, 16, 9 and 5, sorts them, and makes a 3-item "spearhead" like below, along with two empty piles L and M:
Once the whole list has been processed this way, the same notion is used recursively on the two smaller piles until everything is in order; the steps are walked through below...
It then looks at the next item from the right, 15, and compares it to the items on the spearhead to find where it falls: greater than 16, less than 5, between 9 and 16, or between 5 and 9... Since it is between 9 and 16, it pops 16 off the spearhead, puts 16 in the M pile, and replaces it with 15...
It iterates the above step all the way across the list. The next number is 6, which falls between 5 and 9, so 6 replaces 5 and 5 goes in the L pile...
The next is 17, which is greater than 15, so it goes directly into the M pile:
Then 3, which is less than 6, goes in the L pile; then 1 goes in the L pile; 19 goes in the M pile; then 7 is between 6 and 9, so it replaces 6 and 6 goes in the L pile... The only other addendum is that you try to keep the two piles the same size, which might change the lead number, though in this case it didn't... In that case you would replace the appropriate trailing number with the item that would make the pile too big, move what was there to the lead, move the lead to the other trailing spot, and put the number that was there into the other pile, keeping the two piles balanced...
And so on until every item has been placed and the whole list is partitioned around the spearhead...
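Here is a minimal sketch of the idea in Python (my addition); it processes the items right to left as described and recurses on the two piles, but it leaves out the pile-balancing refinement described above:

def spearhead_sort(items):
    # Fewer than 4 items: nothing to partition, just sort directly.
    if len(items) < 4:
        return sorted(items)
    rest, tail = items[:-3], items[-3:]
    low, lead, high = sorted(tail)            # the 3-item spearhead
    L, M = [], []                             # the two piles
    for x in reversed(rest):                  # take items from the right
        if x < low:
            L.append(x)
        elif x < lead:                        # between low and lead:
            L.append(low)                     # the old low trailing item goes to L
            low = x                           # x becomes the new low trailing item
        elif x <= high:                       # between lead and high:
            M.append(high)                    # the old high trailing item goes to M
            high = x
        else:                                 # greater than high
            M.append(x)
    # Everything in L (plus low) is below the lead and everything in M (plus high)
    # is above it, so recurse on the two piles.
    return spearhead_sort(L + [low]) + [lead] + spearhead_sort([high] + M)

print(spearhead_sort([13, 8, 10, 18, 0, 4, 2, 11, 12, 14, 7, 19, 1, 3, 17, 6, 15, 5, 9, 16]))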
**Notes**
The right-to-left order I used might be replaced with picking 3 random items to start and drawing the next item at random from the unsorted portion, to prevent bad performance when the list is already sorted, for example...
Friday, May 22, 2015
A G-sort of 20 items and example code
Supposing we have a list of 20 items to be sorted, here I'll just use the numbers 0-19:
[13, 8, 10, 18, 0, 4, 2, 11, 12, 14, 7, 19, 1, 3, 17, 6, 15, 5, 9, 16]
Consider a number M for each pairwise comparison: if a swap should be made for that pair (that is, the rightmost number is the smaller one), M is the distance between the two positions; otherwise M is 0.
Now I thought: what swap could be made on 10000 such random lists like the one above so that the sum of the M's for that swap is the greatest over all the lists? Perhaps unsurprisingly this ended up being:
[0, 19], indicating that the first and last items of the list should be compared and swapped to maximize the sum of the M's over all the lists. In the example above, 13 would be compared to 16 and no swap would be made, because 13 is already smaller than 16, but in many lists a swap would be made....
So if we have our 10000 random lists and we've already swapped [0, 19], we find what the next swap should be to give the greatest sum of M's over all the lists, and iterate like that until all the lists are completely sorted. I ended up with:
[[0, 19], [1, 18], [2, 17], [3, 16], [4, 15], [5, 19], [0, 14], [1, 13], [6, 18], [7, 17], [2, 12], [5, 14], [0, 11], [8, 19], [9, 16], [1, 9], [3, 13], [6, 15], [0, 10], [10, 18], [4, 10], [5, 11], [11, 17], [8, 14], [2, 8], [12, 19], [7, 12], [0, 7], [10, 16], [3, 11], [8, 13], [13, 18], [1, 7], [9, 15], [15, 19], [3, 9], [6, 11], [11, 15], [5, 10], [2, 6], [1, 5], [14, 18], [4, 8], [0, 4], [10, 14], [6, 10], [16, 19], [12, 16], [8, 12], [14, 17], [7, 13], [3, 7], [9, 14], [5, 9], [0, 2], [2, 5], [10, 13], [7, 10], [1, 4], [13, 15], [5, 8], [12, 14], [6, 9], [9, 12], [4, 6], [8, 11], [15, 18], [17, 19], [15, 17], [6, 8], [11, 13], [3, 5], [7, 9], [10, 12], [9, 11], [16, 18], [0, 3], [14, 16], [5, 7], [2, 4], [1, 3], [13, 14], [14, 15], [12, 13], [8, 10], [3, 4], [11, 12], [18, 19], [5, 6], [4, 5], [7, 8], [6, 7], [16, 17], [15, 16], [8, 9], [10, 11], [1, 2], [2, 3], [0, 1], [17, 18], [9, 10], [13, 14], [12, 13], [5, 6], [14, 15], [7, 8], [11, 12], [3, 4], [13, 14], [1, 2], [6, 7], [8, 9], [12, 13], [10, 11], [14, 15], [9, 10]]
After these 116 compares and swaps, all 10000 lists were in order. As you can see, it's hard to discern a pattern after the first few swaps, other than that the items swapped generally get closer together. The code below is general enough that it can be used for more than 10000 lists, or for more or fewer than 20 items...
**Python Source Code**
import random

def copy(l):
    # Return a shallow copy of a list.
    l2 = []
    for i in l:
        l2.append(i)
    return l2

def compare(t, i, j):
    # True if positions i and j are out of order.
    if t[i] > t[j]:
        return True
    return False

def swap(t, i, j):
    # Compare-and-swap: put the smaller of t[i], t[j] first.
    if t[i] > t[j]:
        temp = t[j]
        t[j] = t[i]
        t[i] = temp
        return True
    return False

def bestswap(s, trials, listlength, sortd):
    # Return the pair (i, j) whose compare-and-swap gives the largest total M
    # (sum of swap distances) over all the lists, or [-1, -1] if every list
    # is already sorted.
    pair = [0, 0]
    best = 0
    allsorted = True
    for n in range(0, trials):
        if s[n] != sortd:
            allsorted = False
    if allsorted == True:
        return [-1, -1]
    for i in range(0, listlength - 1):
        for j in range(i + 1, listlength):
            total = 0
            for n in range(0, trials):
                t = copy(s[n])
                if compare(t, i, j):
                    total += j - i
            if total > best:
                best = total
                pair = [i, j]
    return pair

def main():
    s = []
    trials = 10000
    listlength = 20
    sortd = []
    for j in range(0, listlength):
        sortd.append(j)
    for j in range(0, trials):
        l = []
        for i in range(0, listlength):
            l.append(i)
        random.shuffle(l)
        s.append(l)
    bestpair = [0, listlength - 1]
    i = 0
    al = []
    while bestpair != [-1, -1]:
        i += 1
        al.append(bestpair)
        for j in range(0, trials):
            swap(s[j], bestpair[0], bestpair[1])
        bestpair = bestswap(s, trials, listlength, sortd)
        print(bestpair)
    print(i)
    print(al)

main()
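The program only guarantees that the printed pairs sort the 10000 training lists, so here is a small sketch (my addition) for checking a swap sequence against fresh random permutations that were not part of the run:

import random

def check_on_fresh_lists(pairs, listlength, samples=10000):
    # Apply the compare-and-swap sequence to brand new random permutations
    # and report whether every one of them came out sorted.
    for _ in range(samples):
        l = list(range(listlength))
        random.shuffle(l)
        for (i, j) in pairs:
            if l[i] > l[j]:
                l[i], l[j] = l[j], l[i]
        if l != sorted(l):
            return False
    return True

# e.g. check_on_fresh_lists(al, 20), using the pair list al printed by main() above.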
**Update**
I let the program go over 100,000 trials and it found these 120 swaps to work over that entire assortment of lists. It took a couple of hours for the Python code to run; it could be much faster in other languages. So for 10,000 lists it took 116 swaps, and for 100,000 it took 120, so I figure it must not take many more swaps to handle all possible lists...
[[0, 19], [1, 18], [2, 17], [3, 16], [4, 15], [5, 19], [0, 14], [1, 13], [6, 18], [2, 12], [7, 17], [5, 14], [8, 19], [0, 11], [9, 16], [1, 9], [3, 10], [10, 18], [6, 13], [4, 12], [12, 19], [8, 15], [0, 8], [5, 11], [11, 17], [7, 14], [2, 8], [1, 7], [0, 6], [6, 12], [8, 13], [13, 19], [4, 9], [9, 14], [14, 18], [3, 11], [0, 4], [10, 15], [5, 10], [12, 16], [7, 12], [1, 5], [15, 19], [3, 8], [13, 17], [2, 6], [6, 11], [10, 13], [11, 15], [4, 7], [7, 10], [8, 12], [6, 9], [16, 18], [13, 16], [0, 3], [2, 4], [3, 6], [9, 11], [11, 14], [5, 8], [17, 19], [15, 17], [12, 15], [11, 13], [8, 11], [3, 5], [5, 7], [7, 9], [1, 3], [10, 12], [12, 14], [14, 16], [8, 10], [4, 6], [6, 8], [0, 2], [18, 19], [4, 5], [6, 7], [13, 15], [16, 17], [17, 18], [15, 16], [8, 9], [3, 4], [2, 3], [5, 6], [1, 2], [0, 1], [11, 12], [9, 11], [10, 11], [12, 13], [9, 10], [14, 15], [13, 14], [7, 8], [16, 17], [11, 12], [15, 16], [3, 4], [4, 5], [2, 3], [6, 7], [8, 9], [1, 2], [16, 17], [12, 13], [5, 6], [14, 15], [10, 11], [9, 10], [7, 8], [6, 7], [13, 14], [8, 9], [10, 11], [11, 12], [5, 6]]
**UPDATE**
A professor of computer science, John Owens, pointed out that there are different measures of sortedness that I could use to try to find some improvements, and also that these are covered under the name "sorting networks" in Donald Knuth's The Art of Computer Programming books, including how to tell whether one is optimal or not, so I'll be reading that and adding more to this idea later...
Sunday, May 10, 2015
Artificial writing system
I was wondering whether it would be possible to make a single simple character for every word in the English language, which is around 100,000 words not counting really esoteric ones... It's a similar idea to Chinese, but easier to write and with simpler basic elements...
Consider the following chart:
Each numbered line or curve is a stroke, divided into six basic categories. When writing a character, one would learn to always write strokes from a lower-numbered category first, and within a category to write the strokes in their order by number, as seen above...
For example, one would write character 1 below as follows: the vertical line first, because that's category 1; then the diagonal line, because that's the next lowest category, 3; then the two halves of the circle, left half first, since both are category 6 but the left half is the lower number, 1, within that category, while the right half is number 4 within that category and so goes last...
Character 2 would start with the horizontal line, because that's category 2, then the two diagonals, rightmost first, then the bottom c shape.
So the fact that every character can be written by stroke order also makes it possible to alphabetize all the possible characters, like so: all characters that start with category 1 strokes go first; within that group, characters whose first stroke has a lower number go before those with a higher number; a tie is broken by comparing second strokes, and so on to the last stroke; and for characters sharing exactly the same strokes where one has more strokes in total, the shorter goes first...
The fact that they can be alphabetized makes it possible to make a dictionary for the words...
So the number of possible characters is hard to say exactly, because you generally don't want any character with disconnected parts and it can get hard to say when that might be, but choosing 2, 3, 4, or 5 strokes from 26 possible strokes gives somewhere north of 80,000 characters, so 99.99% of the words that people use could be written simply with one character... Next I might write a computer program that generates a sample sheet of text written in this system; I think it will look pretty interesting...
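To make the counting and the alphabetizing concrete, here is a small sketch (my addition). It assumes a character is an unordered set of 2 to 5 distinct strokes drawn from the 26 in the chart, with each stroke encoded as a hypothetical (category, number) pair; Python's tuple ordering then compares stroke by stroke and puts a shorter character before a longer one that shares its opening strokes:

import math

# Count of characters made from 2-5 distinct strokes out of 26, ignoring the
# connectedness restriction mentioned above.
print(sum(math.comb(26, k) for k in range(2, 6)))   # 83655

# "Alphabetize" a few hypothetical characters written as sorted stroke tuples.
characters = [
    ((1, 1), (3, 2), (6, 1), (6, 4)),   # roughly character 1 from the post (illustrative codes)
    ((1, 1), (3, 2), (6, 1)),           # same opening strokes, one fewer stroke: sorts first
    ((2, 1), (5, 3), (6, 2)),
]
for ch in sorted(characters):
    print(ch)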
**Division of meanings**
I was thinking that the way the word is written could show the part of speech: for example, starting with a stroke from category 1 could make the word a noun, which most words in the English language are; category 2 could make the word an adjective, category 3 a verb, and category 4 an adverb. Anything composed solely from strokes in categories 5, 6, and 7 would be prepositions, pronouns, and other minor parts of speech, of which there are around 1000 in the English language, which matches up with the number of ways to create a symbol from those categories.
And then, even beyond parts of speech, words could be categorized by how they are written: for example, the last word above is a noun because it has horizontal lines, but the next stroke is category 5, which could theoretically indicate the noun is a place of some kind... Maybe further divisions are even possible, like the third stroke indicating a noun is a type of animal...
**Pronunciation**
It might be possible to map every word in English to a one-syllable sound, since so many possible syllables, like "scrug" and "footh", aren't used; one count estimates there are about 100,000 possible. The trick would be to map those possibilities onto the writing system...
Friday, May 8, 2015
Component algebra and simple connected graphs
I wanted to go more in depth on the previous idea on the blog, but I have to coin some terms to be able to write about it, as I don't know what it would normally be called if it's already something that's studied. So I call it component algebra...
Each component is a variable a, b, c, ... and it is related to other components through multiplication... Each component is equal to the product of one or more of the other components, with a couple more rules that I'll show below...
So there might be a list of these component relationships like below, and a corresponding simple and connected graph, which is drawn so that components are equal to the multiplication of components that they are connected to...
We are also interested in the solutions, which can be found by variable elimination: for example, plugging what one component is equal to into each of the rest of the equations and proceeding until there is only one variable in one equation, then backsolving. It is easy to see there must always be at least two solutions, every component equal to 1 or every component equal to 0, but in the case above there are even more...
In the above, e = e is Maple's way of saying that e can be any value, so in this case there are infinitely many solutions. If e = 2, say, the non-zero solution would be a = 1/4, b = 2, c = 1/4, d = 1/4, e = 2, f = 4; those values we can say "satisfy" the graph... See, for instance, that c = 1/4 = a*b*d*e = (1/4)*2*(1/4)*2 = 1/4, and all the other components work out as well.
The fact that the equations come from a simple and connected graph ensures that every variable is defined, all exponents are 1, and a variable can occur on the left or right side of an equation but not both at the same time; in other words, there are no loops in the graph. These all make the system relatively easy to solve.
The interesting thing to study is when there are more than the two trivial solutions, but I'm working on how to know when that will be... Sometimes the solutions are complex numbers as well...
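As a concrete illustration of non-trivial solutions (my addition, using a small hypothetical triangle graph rather than the one in the figure): with components a, b, c each connected to the other two, each equals the product of its neighbours, and solving the system in SymPy turns up sign-flipped solutions in addition to all-zeros and all-ones:

from sympy import symbols, solve

a, b, c = symbols('a b c')
# Triangle graph: each component equals the product of the components it touches.
equations = [a - b*c, b - a*c, c - a*b]
print(solve(equations, [a, b, c]))
# Along with (0, 0, 0) and (1, 1, 1), assignments like (1, -1, -1) satisfy the
# graph, since 1 = (-1)*(-1), -1 = 1*(-1), and -1 = 1*(-1).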
**As sums**
Dr. Rose suggested that using sums, as linear equations, might be simpler, for example...
Sometimes that is easier and nicer to think about, but other times it misses solutions, at least in Maple:
In the example above, using linear sums only shows a trivial solution, but the product form shows more solutions...
Wednesday, May 6, 2015
Algebraically coloring a map
I noticed you can take an uncolored map like:
You can set up a set of equations to solve, as follows: if region X is next to regions Y, Z, ..., one equation is X = Y*Z*...
For the above:
Note that C is not considered adjacent to D...
And there are a few solutions to the above, but we can add extra conditions such as A != B (because A and B are adjacent), and so on, until we get down to one solution...
Now we can use a map of these complex numbers to color values by putting a color wheel on the complex plane:
And use that as a key to fill in the colors:
In this case it gave 5 colors, not the 4 of the famous four color theorem, but I still think it's an interesting process...
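To show the kind of complex solutions involved, here is a small sketch (my addition) for a hypothetical ring of four regions A, B, C, D, where each region touches only its two neighbours around the ring; the non-trivial solutions come out as complex cube roots of unity, and adjacent regions automatically get different values, which is exactly what the coloring needs:

from sympy import symbols, solve

A, B, C, D = symbols('A B C D')
# Ring of four regions: each one equals the product of its two neighbours.
equations = [A - B*D, B - A*C, C - B*D, D - A*C]
print(solve(equations, [A, B, C, D]))
# Besides all-zeros and all-ones, assignments like A = C = w and B = D = w**2,
# where w is a primitive cube root of unity, satisfy the system: w = w**2 * w**2.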
Sunday, May 3, 2015
Point in Convex polygon using cross products
Suppose you have a convex polygon like the one below, labeling vertices clockwise from A, and we want to know whether some point Z is inside or outside the polygon...
Clearly, if the vector v from point B to point Z makes an angle greater than 180 degrees with the vector u from point A to point B, the point is outside of the polygon: since the polygon is convex, it would have to be concave for Z to be inside while that angle was greater than 180 degrees...
This is equivalent to saying that if the cross product u x v is positive, point Z is definitely outside the polygon! A simple calculation...
Now consider u(2) to be vector BC and v(2) to be vector CZ; the same reasoning holds, and we see that if u(2) x v(2) is positive, the point Z must be outside the polygon. In fact, going all the way around, if any such cross product is positive, the point Z must be outside the polygon...
So it's necessary that none of those cross products are positive, and as it turns out it is evidently also sufficient, as determined by drawing many such polygons in a program like Geogebra, dragging the point Z around, and noting the angles when the point is inside and when it is outside, but I haven't thought of a good proof.
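Here is a minimal sketch of the test in Python (my addition), assuming the vertices are listed clockwise as in the figure; the "cross product" of two 2D vectors here means the scalar z-component u_x*v_y - u_y*v_x:

def cross(u, v):
    # z-component of the 2D cross product
    return u[0]*v[1] - u[1]*v[0]

def inside_convex_clockwise(polygon, z):
    n = len(polygon)
    for i in range(n):
        a = polygon[i]
        b = polygon[(i + 1) % n]          # the next vertex, wrapping around
        u = (b[0] - a[0], b[1] - a[1])    # edge vector, e.g. A to B
        v = (z[0] - b[0], z[1] - b[1])    # vector from B to the query point Z
        if cross(u, v) > 0:               # a positive cross product means Z is outside
            return False
    return True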