The first lemma will be: "The second of a pair of twin primes does not have a remainder of 2 when divided by any prime up to its square root, and a number is the second of a pair of twin primes if it has a remainder of neither 2 nor 0 when divided by any prime up to its square root."

Proof: Suppose P, the second of a pair of twin primes, had a remainder of 2 when divided by some prime p up to its square root. Then p would divide P-2 exactly, so P-2 would not be prime, which contradicts the fact that P is the second of a pair of twin primes. Conversely, if P has a nonzero remainder when divided by every prime up to its square root then it is prime, and if none of those remainders is 2 then none of those primes divides P-2 either, so P-2 is also prime. Hence, P is the second of a pair of twin primes.
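This characterization is easy to check by brute force; a minimal sketch (the helper names are mine):

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def lemma_condition(n):
    # Remainder is neither 0 nor 2 for every prime up to sqrt(n).
    primes = [p for p in range(2, int(n ** 0.5) + 1) if is_prime(p)]
    return all(n % p not in (0, 2) for p in primes)

def is_second_twin(n):
    return is_prime(n) and is_prime(n - 2)

# The two sides agree for every n from 4 upward.
assert all(lemma_condition(n) == is_second_twin(n) for n in range(4, 2000))
```

The check starts at 4 because the tiny cases 1, 2, and 3 have no primes at or below their square roots.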

Theorem 1: Let N = (p+1)^2 - 1. A way to roughly count the number of primes less than N is:

N*(1/2)*(2/3)*(4/5)...*(p-1)/p

Proof: Roughly 1/2 of all the numbers up to N are not divisible by 2, 2/3 are not divisible by 3, and so on up to the factor (p-1)/p for the prime p. Divisibility by one prime and divisibility by another happen roughly independently of one another, so the fractions are multiplied. (p+1)^2 is also the first number where primes larger than p need to be considered, since any composite number below that has a prime factor of at most p.

Theorem 2: Let N = (p+1)^2 - 1. A way to roughly count one of each pair of twin primes less than N is:

N*(1/2)*(1/3)*(3/5)*...*((p-2)/p): the same as the formula from Theorem 1, but the numerator of every term after the 1/2 term is decreased by 1.

Proof: Because of the lemma, we know that a number whose remainder is nonzero when divided by every prime p up to its square root, and also never 2, is the second of a pair of twin primes. For every prime but 2, this is one less allowed remainder, so the numerator of every term other than the 1/2 is reduced by 1. For example, there are 3 allowed remainders when dividing by 5 (namely 1, 3, and 4), so 3/5 appears in the product. This formula thus roughly counts the number of twin prime pairs less than N.
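Both products are easy to compare against actual counts; a sketch (code mine). With p = 7, so N = 63, Theorem 1 gives 14.4 against 18 actual primes, and Theorem 2 gives 4.5 against 7 actual twin pairs, rough in the way the theorems state:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def estimates(p):
    # Theorem 1 (primes) and Theorem 2 (twin pairs) products for N = (p+1)^2 - 1.
    N = (p + 1) ** 2 - 1
    prime_est = twin_est = float(N)
    for q in (q for q in range(2, p + 1) if is_prime(q)):
        prime_est *= (q - 1) / q
        twin_est *= (q - 1) / q if q == 2 else (q - 2) / q
    return N, prime_est, twin_est

N, prime_est, twin_est = estimates(7)   # N = 63: estimates 14.4 and 4.5
actual_primes = sum(is_prime(n) for n in range(2, N + 1))                     # 18
actual_twins = sum(is_prime(n) and is_prime(n - 2) for n in range(5, N + 1))  # 7
```

Both products undercount, which is consistent with "roughly" in the theorem statements.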

Theorem 3: There are an infinite number of twin primes, as estimated by the formula in Theorem 2:

Proof: Suppose applying Theorem 2's formula with primes up to p counts P pairs of twin primes up to p^2 - 1:

(p^2-1)*(1/2)*(1/3)*(3/5)*...*((p-2)/p)

For now we can write that as:

(p^2-1)*X

We can show that if we replace p with the next bigger prime q, the estimated number of twin primes always increases, because the ratio of the new estimate to the old one is:

((q^2-1)/(p^2-1)) * ((q-2)/q)

This ratio is always greater than 1 as p gets larger, because q^2 - 1 is larger than p^2 - 1 by a greater factor than (q-2)/q is smaller than 1. This means there must be a positive number of estimated pairs of twin primes between p^2 - 1 and q^2 - 1, and since there are an infinite number of primes to choose p and q from, there must be an infinite number of pairs of twin primes...
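The key inequality, that this ratio exceeds 1 for every pair of consecutive primes from 3 onward, can be checked numerically (code mine; the step from p = 2 behaves differently because the 1/2 term is not reduced):

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

primes = [p for p in range(3, 1000) if is_prime(p)]

# Ratio of Theorem 2's estimate at q to the estimate at p,
# for consecutive primes p < q.
ratios = [((q * q - 1) / (p * p - 1)) * ((q - 2) / q)
          for p, q in zip(primes, primes[1:])]
assert min(ratios) > 1
```

The smallest ratios occur at twin prime gaps q = p + 2, where ((p+2)^2 - 1)*p still exceeds (p^2 - 1)*(p+2) by 2p^2 + 4p + 2, so the product stays above 1.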

## Saturday, November 23, 2013

### Idea for a simple proof of twin prime conjecture?

Suppose you do a similar operation to the sieve of Eratosthenes, but mark off the numbers of the form p*n + 2 for n greater than 1, instead of the multiples p*n. You would start with the list of numbers to infinity...

1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28...

Mark off all the 2*n+2 numbers...

1,2,3,4,5,x,7,x,9,x,11,x,13,x,15,x,17,x,19,x,21,x,23,x,25,x,27,x...

Of those mark off the 3*n+2 numbers with n greater than 1...

1,2,3,4,5,x,7,x,9,x,x,x,13,x,15,x,x,x,19,x,21,x,x,x,25,x,27,x..

and so on until you get...

1,2,3,4,5,x,7,x,9,x,x,x,13,x,15,x,x,x,19,x,21,x,x,x,25,x,x,x...

Now there are 3 things we know about this list...

1. The total number of entries up to n is roughly n/log(n), because one number is marked off here for every number marked off in the usual sieve of Eratosthenes, and the prime number theorem states that the usual sieve leaves about n/log(n) numbers.

2. The list of these numbers continues indefinitely, because n/log(n) increases to infinity.

3. Every number on this list is 2 more than a prime, except for 1, 2, and 3. This follows because every number we've marked off is two to the right of a number marked off in the normal sieve of Eratosthenes, so where the normal sieve leaves the primes, this sieve leaves the numbers two to the right of them.

Now we can deduce that there will always be primes left on this modified sieve of Eratosthenes: the remainders when a prime is divided by the primes up to its square root are roughly equally likely, so infinitely many primes will have a remainder different from 2 when divided by each prime up to their square root. But each of those primes is the second prime of a pair of twin primes by fact 3 above, so there are an infinite number of twin primes.
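The modified sieve above is easy to run; a short sketch (code mine) that reproduces the list and confirms fact 3:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def modified_sieve(limit):
    # Mark off every number of the form p*n + 2 with p prime and n >= 2.
    alive = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if not is_prime(p):
            continue
        n = 2
        while p * n + 2 <= limit:
            alive[p * n + 2] = False
            n += 1
    return [q for q in range(1, limit + 1) if alive[q]]

survivors = modified_sieve(28)
# Matches the list above: 1,2,3,4,5,7,9,13,15,19,21,25 survive up to 28,
# and every survivor above 4 is 2 more than a prime (fact 3).
```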

## Thursday, November 21, 2013

### Josephus Crypto

So suppose you have a message M that you want to encrypt. I'll just show a short one:

watson come here i need you

i mean if you have a minute

The basic idea I had was that you take the length of message M, n, which in this case is 54, and generate a key K, which can be any number significantly greater than n. For this example it will be 386. Then to find the first letter of the new message, find the 386th letter of the message modulo n: 386 mod 54 is 8. The eighth letter of the message (the first letter is letter 0) is the o in "come", which becomes the first letter of the new message S.

o

Now the letter that was chosen is removed from the original message to generate M2.

watson cme here i need you

i mean if you have a minute

To get the second letter of the new message, start with the next letter after the one removed and count to the 386th letter after that, this time modulo 53 because there are now 53 letters in the message. Mathematically that is (7+386) mod 53 = 22 (it is a 7 instead of an 8 because the first letter is letter 0). The 22nd letter of the message is the space before the first y, so that becomes the second letter of the new message S, and this whole process iterates until all the letters from M are in S.

o eenuwham su nem i euia

i eea v aefdmiyoronynhtoct

python code:

def main():
    M = "watson come here i need you i mean if you have a minute"
    S = ""
    K = 386
    p = 0
    while len(M) > 0:
        i = (p + K) % len(M)
        S += M[i]
        M = M[:i] + M[(i + 1):]
        p = i  # resume counting from the removal point
    print(S)

main()

To decode, start with an empty string and build it up in reverse.
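One concrete way to invert the process (not necessarily the reverse-building approach; the function names and refactoring are mine): replay the identical index sequence on a list of the original positions, writing each letter of S back where it came from.

```python
def encode(M, K):
    # The removal process described above, written as a function.
    S, p = "", 0
    while M:
        i = (p + K) % len(M)
        S += M[i]
        M = M[:i] + M[i + 1:]
        p = i  # resume counting from the removal point
    return S

def decode(S, K):
    # Replay the same index sequence on a list of original positions,
    # writing each letter of S back to the position it was taken from.
    positions = list(range(len(S)))
    out = [""] * len(S)
    p = 0
    for ch in S:
        i = (p + K) % len(positions)
        out[positions.pop(i)] = ch
        p = i
    return "".join(out)
```

With this pairing, decode(encode(M, K), K) returns the original M for any key K.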

The name:

It's related to the Josephus problem which is based on a historical occurrence:

http://en.wikipedia.org/wiki/Josephus_problem

It's sort of a different dimension to cryptography: instead of scrambling the values of the characters as the starting point, you scramble the order in which they appear. But you could also combine both...

## Wednesday, November 6, 2013

### Good approximation for Gauss's circle problem

I found this formula as an approximation to Gauss's circle problem that I described on the last blog entry:

4*r + 4*(sum for a = 1 to r of (sqrt(r^2 - a^2) - 1/2))

A comparison of the output of the formula above to the actual values:

r = 0: 0 ~ 1

r = 1: 2 ~ 5

...

r = 10: 310.45 ~ 317

r = 20: 1251.38 ~ 1257

r = 30: 2820.99 ~ 2821

r = 40: 5019.11 ~ 5025

r = 45: 6353.84 ~ 6361

(within 0.1% on that last one)

I only found one example where it overestimates:

r = 46: 6639.63 ~ 6625

proof: Actually I'm not sure why, but this expression is exact:

4*r + 1 + 4*(sum for a = 1 to r of floor(sqrt(r^2 - a^2)))

because sqrt(r^2 - a^2) is the height of the circle in one quadrant as a increases, and the floor of that height counts the number of grid points up to that height. Then you multiply that by 4, add the points on the axes with the 4*r term, and add one for the point in the very center.

and then I used the fact that the average amount lost to the floor is probably .5 to get the approximation
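The two expressions side by side (code mine; the approximation form is reconstructed to match the table above):

```python
from math import floor, sqrt

def exact_count(r):
    # Column heights floor(sqrt(r^2 - a^2)) in one quadrant, times 4,
    # plus the 4r points on the axes, plus the center point.
    return 1 + 4 * r + 4 * sum(floor(sqrt(r * r - a * a)) for a in range(1, r + 1))

def approx_count(r):
    # Replace each floored height by (height - 1/2), the average loss to flooring.
    return 4 * r + 4 * sum(sqrt(r * r - a * a) - 0.5 for a in range(1, r + 1))

# exact_count(10) -> 317; approx_count(10) -> about 310.45, matching the table.
```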

### Gauss's circle problem calculation without using the floor function

Gauss's circle problem asks: given a circle of a certain radius centered at the origin, how many grid points are on the circle or inside of it? For example:

The graphic shows the answers for radius = 1, 2, 3 are 5, 13, 29 respectively.

One way to calculate how many there are is to check every grid point and add 1 for every point whose distance to the center is at most the radius, counting how many times that happens. But that's really more of an algorithm...

I found this other way:

My reasoning was: if you look at the bottom-right quadrant of the circle in question (disregarding the points on the axes) and imagine, for each grid point, the vector from that point to the point on the circle along the line through the origin, you see vectors like this:

It turns out you can always tell from the sign of the y component of this vector whether the point in question is inside or outside the circle. The X above is the expression for that y component, derived by computing the angle to the point from the origin, finding the polar coordinates of the point at that same angle but at distance r (the radius of the circle), converting back to Cartesian coordinates, and taking the y component. I divide this y value by its absolute value to get 1 if the point is inside the circle, 0 if it is on the circle, or -1 if it is outside.

The final expression totals these values and solves, from that sum, for how many points must be inside the whole circle.

Unfortunately it's harder to get rid of the absolute value and it didn't happen that there was a closed form sum for the double sum over the grid, but I still think it's an interesting approach.

For example, this sum totals 113 for r=6 which is the correct value.
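The expression X isn't reproduced here, but the sign idea can be sketched with an equivalent classifier: the sign of r minus the distance to the origin is +1 inside, -1 outside, and 0 on the circle. Summing it over one quadrant and solving recovers the count (the code and the solving step are mine):

```python
from math import sqrt

def sign_count(r):
    # Classify each grid point (a, b) with a, b >= 1 by the sign of r - distance:
    # +1 strictly inside the circle, -1 outside, 0 exactly on it.
    sign_total = 0.0
    on_circle = 0
    for a in range(1, r + 1):
        for b in range(1, r + 1):
            d2 = a * a + b * b
            if d2 == r * r:
                on_circle += 1
            else:
                y = r - sqrt(d2)          # stand-in for the y component's sign
                sign_total += y / abs(y)  # +1 or -1
    # inside - outside = sign_total and inside + outside + on_circle = r^2,
    # so the quadrant holds (sign_total + r^2 + on_circle) / 2 points in the disk.
    quadrant = (sign_total + r * r + on_circle) / 2
    return int(round(4 * quadrant + 4 * r + 1))
```

This reproduces 113 for r = 6 and agrees with the exact floor-based count.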

## Friday, November 1, 2013

### Sharing network conjecture

My experiment was to start with a graph with every vertex having a value of 1, with the vertices numbered as follows:

The iterative step was to pick a vertex at random and transfer a random amount of its value to a randomly chosen vertex it is connected to. So the first step might take .753 from the bottom vertex and give it to the vertex to the top left, where .753 is just a random fraction of the bottom vertex's value. Then this step repeats on the new values for the vertices.

So I did 3 trials of repeating this step 10,000,000 times on the computer and calculating the average amount that each vertex had throughout the experiment.

Trial one, vertices #1 through #6:

[1.1518971995357796,

1.1570109780324331,

0.924463843491593,

0.6901779251616574,

0.923239128197964,

1.1532115255805495]

Trial two:

[1.1521431216043976,

1.1542216510411125,

0.9243974762519348,

0.6943663654748421,

0.9220785246649209,

1.1527934609628454]

Trial three:

[1.1496366397190037,

1.1542629725537692,

0.9252708248471094,

0.694030320080955,

0.9233941167266151,

1.1534057260733457]

My conjecture is that the expected value for each vertex V(n) after many iterations is:

V(n) = n*(1+E(n))/(S+n)

Where E(n) is the number of edges connected to V(n), S is the sum of E(n) over all the vertices (twice the number of edges), and n is the number of vertices.

So the expected value for the vertices under this formula would be:

[1.1538461538461538,

1.1538461538461538,

0.9230769230769230,

0.6923076923076923,

0.9230769230769230,

1.1538461538461538]

This compares well to the experimental values.

I thought it was interesting that the expected value for a vertex only seems to depend on the number of vertices, the total number of edges, and the number of edges connecting to that vertex, and not on the way that the vertices are connected in the graph.
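The experiment can be sketched in code. The actual 6-vertex graph is in a figure not reproduced here, so the wiring below is my assumption, built to match the degree sequence (4, 4, 3, 2, 3, 4) implied by the predicted values:

```python
import random

# A sample 6-vertex graph with degree sequence (4, 4, 3, 2, 3, 4); the post's
# actual graph is in a figure, so this particular wiring is an assumption.
edges = [(0, 1), (0, 2), (0, 4), (0, 5), (1, 2),
         (1, 4), (1, 5), (2, 5), (3, 4), (3, 5)]
neighbors = {v: [] for v in range(6)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

n = 6
S = sum(len(neighbors[v]) for v in range(6))  # sum of degrees = 20

# Conjectured long-run average for each vertex: n*(1 + E(v))/(S + n).
predicted = [n * (1 + len(neighbors[v])) / (S + n) for v in range(6)]

# Simulate: pick a random vertex, move a random share of its value to a neighbor.
random.seed(1)
value = [1.0] * 6
totals = [0.0] * 6
steps = 200_000
for _ in range(steps):
    u = random.randrange(6)
    w = random.choice(neighbors[u])
    amount = random.uniform(0.0, value[u])
    value[u] -= amount
    value[w] += amount
    for v in range(6):
        totals[v] += value[v]

averages = [t / steps for t in totals]  # should track predicted closely
```

The total value is conserved at every step, which is why the predicted values sum to n.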
