Primes, Randomness and the Prime Twin Proof

Martin C. Winer


This article explains randomness and uses recursively self-complicating algorithms to prove the Prime Twin conjecture.

Code Samples:

Some Java code has been provided to illustrate some of the concepts on this website.  It can be found here:

and the output can be viewed here:


A ‘C’ version for those who just can’t stand Java :)

What is Random?

Randomness is one of those things that is hard to pin down.  As applied to patterns, a random pattern is one which has an infinite supply of complexity.  Complexity, in turn, is given by the number of non-reducible, non-recursive (non-reflexive) statements needed to categorize a pattern.  As an example, suppose you are asked to categorize yoghurt.  You could say that yoghurt is yoghurt and milk.  This is a recursive (reflexive) definition which does not categorize yoghurt.  Sure, you can make more yoghurt now, but you need to know that yoghurt is milk and bacterial culture in order to make yoghurt without having yoghurt on hand.  Similarly, recursive statements give us (potentially) some information about the pattern at hand; however, to fully categorize the pattern, we need non-recursive definitions which tell us how to build the pattern out of elemental (deterministic) statements.  Each non-reducible, non-recursive statement contributes some measure of complexity.  As a corollary, the complexity of a pattern can be thought of as the collection of all essential non-reducible information about that pattern.

Each statement about the random pattern is deterministic; that is, it is finite and elemental.  However, when you take these deterministic, elemental patterns and combine them together without bound, you get a random pattern.

Formal Definition

Randomness = recursive self-complication × determinism

How do we get a random pattern?

Random patterns, as explained above, are produced by recursive self-complication using deterministic elementals.  We will consider only binary patterns (patterns of ones and zeros).  The general form of the algorithm follows:


Pat(f(n)) = Pat(f(n-1)) merge Elemental(f(f0)), where

Pat(0) is the empty pattern,

f0 is the location (1-based) of the first zero in the lhs pattern, and

f(n) is some function.

Merge operation

The merge operation takes the left-hand-side (lhs) pattern and finds its first zero.  It then composes the Elemental(f(f0)) pattern and aligns the leading one of the elemental with the first zero of the lhs pattern.  It then repeats both patterns until length(lhs pattern) × length(Elemental(f(f0))) bits of each are produced.  It then ORs the two together, producing the new pattern.  Further, the merge of any pattern with the empty pattern produces that same pattern.  The first zero of the empty pattern is defined to exist at position 1.  (The empty pattern can be thought of as an infinite string of only zeros, or 0….)


The Elemental(f(f0)) function produces the pattern of a one followed by f(f0)-1 zeros, repeated.  (Recall: f0 is the location of the first zero in the lhs pattern.)  An example of an elemental would be 10… = 10101010 and so on.  This is an irreducible pattern: it cannot be described any more simply than 10…

When will Pat(f(n)) produce random patterns?

Pat(f(n)) will produce random patterns IFF Pat(1) has at least one 1 and one 0, and f(n+1) > f(n) for all n.  (This is true due to pattern combinatorics: if Pat(1) has ones and zeros, and Elemental(f(n+1)) has more zeros than Elemental(f(n)), we are always guaranteed to get more zeros as we iterate, which are later matched with the ones of the elementals in subsequent merge operations.)

Example: Pat(2n+1)

Let’s work through an example to see how this goes: consider Pat(2n+1).

Pat(0) = emptyPattern

Pat(2*0+1) = Pat(0) merge Elemental(2*f0+1)

Pat(1) = emptyPattern merge Elemental (2*f0+1); f0 of the empty pattern is defined to be 1

Pat(1)  = emptyPattern merge Elemental(3)

Pat(1) = emptyPattern merge 100…  (the ‘…’ means repeat everything to the left over and over again, not just the last entry over and over, but everything to the left, over and over)

Pat(1) = 100…

Pat(2) = 100… merge Elemental(2*f0 + 1); f0 of the lhs pattern occurs at position 2

Pat(2) = 100… merge Elemental(5)


Pat(2) = 100… merge 10000…

100… merge 10000…

<=>  100100100100100
     010000100001000   OR
     ---------------
     110100100101100… = Pat(2)

Note length(Pat(2)) = length(Pat(1)) × length(Elemental(5)) = 3 × 5 = 15
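The worked example can be reproduced in a few lines of Java.  This is a minimal sketch of the merge rule as described above; the class and method names are illustrative, not taken from the code samples linked earlier:

```java
// Minimal sketch of the merge operation for Pat(2n+1).
public class MergeDemo {
    // Elemental(m): a one followed by m-1 zeros, repeated out to len bits.
    static String elemental(int m, int len) {
        StringBuilder sb = new StringBuilder();
        for (int j = 0; j < len; j++) sb.append(j % m == 0 ? '1' : '0');
        return sb.toString();
    }

    // merge: align the elemental's leading one with the lhs pattern's first
    // zero, repeat both to length(lhs) * m bits, then OR them together.
    static String merge(String lhs, int m) {
        int f0 = lhs.indexOf('0');          // 0-based index of the first zero
        int len = lhs.length() * m;
        StringBuilder sb = new StringBuilder();
        for (int j = 0; j < len; j++) {
            boolean a = lhs.charAt(j % lhs.length()) == '1';
            boolean b = j >= f0 && (j - f0) % m == 0;  // elemental's ones
            sb.append(a || b ? '1' : '0');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String pat1 = elemental(3, 3);          // Pat(1) = 100...
        int f0 = pat1.indexOf('0') + 1;         // first zero at position 2 (1-based)
        String pat2 = merge(pat1, 2 * f0 + 1);  // merge in Elemental(5)
        System.out.println(pat1);               // 100
        System.out.println(pat2);               // 110100100101100
    }
}
```

Running it prints 100 and then 110100100101100, matching the OR worked out above.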

Examining the descriptions of Pat(2n+1)

Describe Pat(1) = 100…

This is the pattern of one followed by 2 zeros repeated over and over again

Describe: Pat(2) = 110100100101100

This is the pattern of one followed by 2 zeros repeated over and over again AND the pattern of 1 followed by 4 zeros, with the first 1 of the pattern at position 2, repeated over and over again.

Note: as we iterate, we get more and more irreducible, non-recursive statements. 

Stateless entities vs Stateful Algorithms

The number 1 is stateless.  It’s 1, it will always be 1, and it will never change.  Algorithms, however, describe things which can exist in certain states.  Suppose I wanted to construct an algorithm that defined the number 1.  First of all, it’s important to realize that 1 is actually 0.999999999999 and so on.

Proof:  1/3 = 0.33333333 and so on

Multiply both sides by 3:

1 = 0.99999999 and so on.


Now let’s construct an algorithm to make 0.99999 and so on:  One(1) = 9/10 = 0.9,  One(n) = 9/10^n + One(n-1).


If we allow this algorithm to run indefinitely, it will produce 0.999999 and so on and hence produce 1.  However, One(n) for any finite n does NOT EVER equal 1.  One(n) for any finite n represents a state of the algorithm, but with unbounded n it describes the number 1.
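As a sketch (the class and method names are illustrative), the One(n) algorithm looks like:

```java
// One(n) = 9/10 + 9/100 + ... + 9/10^n: every finite state falls short of 1;
// only the unbounded limit is 1.
public class OneDemo {
    static double one(int n) {
        double sum = 0.0, term = 0.9;
        for (int k = 1; k <= n; k++) {
            sum += term;       // One(k) = One(k-1) + 9/10^k
            term /= 10.0;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 5; n++)
            System.out.println("One(" + n + ") = " + one(n));
        // each printed state is strictly less than 1
    }
}
```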

Examining Pat(f(n)), we see that, provided f(n) meets the requirements (Pat(1) has at least one 1 and one 0, and f(n+1) > f(n) for all n), Pat(f(n)) will produce more and more complicated patterns as it iterates.  For no n will Pat(f(n)) produce a fully random pattern; there will only be a finite (yet possibly great, and certainly increasing) amount of complexity.  However, if we allow Pat(f(n)) to iterate unboundedly, we get a truly random pattern.

Importance of the set of first zeros

Note that as we iterate, the description of Pat(f(n)) goes something like: the pattern of a 1 followed by x zeros starting at position y, repeated over and over, AND the pattern of a 1 followed by p zeros starting at position q, repeated over and over again, AND … and so on.  The position of the first zero in the lhs pattern becomes the position of the first 1 in the rhs pattern.  As such, the set of first zeros is an important one, because it contains necessary information we need to categorize Pat(f(n)).  Let’s call this set firstZeros.  The firstZeros set for Pat(2n+1) is {1, 2, 3, 5, 6, 8, 9, …}
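Assuming the construction above, firstZeros for Pat(2n+1) can be generated without materializing the (multiplicatively growing) patterns, by recording each merged elemental as a (start position, period) pair.  A sketch, with illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

// Track each merged elemental of Pat(2n+1) as {start, period} and locate
// successive first zeros without building the full bit strings.
public class FirstZeros {
    static List<Long> firstZeros(int count) {
        List<Long> result = new ArrayList<>();
        List<long[]> elementals = new ArrayList<>(); // {start f0, period 2*f0+1}
        long f0 = 1; // first zero of the empty pattern is defined to be 1
        for (int i = 0; i < count; i++) {
            result.add(f0);
            elementals.add(new long[]{f0, 2 * f0 + 1});
            long j = f0 + 1;                 // positions up to f0 are now ones
            while (isOne(j, elementals)) j++;
            f0 = j;                          // new first zero
        }
        return result;
    }

    static boolean isOne(long j, List<long[]> elementals) {
        for (long[] e : elementals)
            if ((j - e[0]) % e[1] == 0) return true; // an elemental's 1 lands on j
        return false;
    }

    public static void main(String[] args) {
        System.out.println(firstZeros(7)); // [1, 2, 3, 5, 6, 8, 9]
    }
}
```

The output matches the firstZeros set quoted above.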

Allowable constellations in firstZeros

Notice in the firstZeros set for Pat(2n+1) the entries 1, 2, 3.  Note that in the rest of the entries you never get 3 numbers 1 apart ever again.  This marks an unallowable constellation.  Allowable constellations are ones whose candidates for creation occur infinitely often.  When allowable constellations first appear, the elemental patterns which exist at that point can be arranged such that their zeros align to produce a candidate for the pattern.  A candidate for a constellation is the same pattern with the bits inverted (a run of zeros where the constellation’s entries would go).  For example, a candidate for the constellation c, c+1, c+2 is 000 in Pat(2n+1).

Consider 1,2,3 (c, c+1, c+2).  The elemental patterns in play here are:

100…

10000…

1000000…

Try as you may, you can’t shift these patterns such that 3 zeros align in all 3 of them.  (Simply put, the pattern 100… blocks any such attempt, because it always puts a 1 in any 3 consecutive positions.)  Thus all constellation candidates for c, c+1, c+2 are destroyed at once, by the first instance of the constellation no less.  Thus the constellation 1,2,3 occurs only once, and c, c+1, c+2 isn’t allowable.
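This blocking argument can be checked mechanically: any three consecutive positions cover all three residues mod 3, so the 100… elemental always puts a 1 somewhere in the window.  A small sketch (names are ours):

```java
// The period-3 elemental 100... alone blocks three aligned zeros.
public class ConstellationDemo {
    // true if, for some alignment of the period-3 elemental 100...,
    // some window of three consecutive positions is all zeros
    static boolean threeZerosPossible() {
        for (int shift = 0; shift < 3; shift++)      // every alignment of 100...
            for (int j = 0; j < 3; j++)              // every window start (mod 3)
                if ((j - shift + 3) % 3 != 0
                        && (j + 1 - shift + 3) % 3 != 0
                        && (j + 2 - shift + 3) % 3 != 0)
                    return true;
        return false;
    }

    public static void main(String[] args) {
        // three consecutive positions cover all residues mod 3, so the
        // elemental's one always lands in the window: never three zeros
        System.out.println("c, c+1, c+2 possible? " + threeZerosPossible()); // false
    }
}
```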


How about just 1,2?  Sure, that’s allowable.  The elementals are:

100…

10000…

where two zeros can be made to align in both elementals.


How about the simplest constellation, just 1?  I.e., just one entry.  This first occurs with the elemental 100…, and sure enough there are two zeros (we need only one) which can be aligned to produce this constellation.

All allowable constellations in firstZeros occur infinitely

Any allowable constellation is a building block upon which the future complexity of Pat(f(n)) is built.  Recall that Pat(f(n)) recursively takes the complexity of the lhs pattern and merges in the complexity of some elemental.  The key here is that any complexity (details) that occurred in the lhs is preserved in the new pattern.  Simply put, this means that if you had two consecutive zeros in the lhs pattern, the new pattern will have two consecutive zeros somewhere in it.

Look at the constellation 1,2,3, or c, c+1, c+2: once created, it destroys itself in Pat(2n+1) forever.  Now suppose an allowable constellation such as c, c+1 occurred only finitely often in firstZeros.  This would mean that its constellation candidates were destroyed either all at once or iteratively.  If the candidates were destroyed all at once, the constellation was not allowable to begin with: the elementals would forbid the construction of such a candidate anywhere in Pat(2n+1), and hence it couldn’t have been an allowable constellation in the first place.

Next, suppose the constellation is destroyed iteratively during the construction of Pat(2n+1).  This means that the constellation candidates (in this case, two adjacent zeros) are not destroyed all at once but simply, magically, never manage to make it to the head of a new Pat(2n+1), and hence never make another entry of this constellation into firstZeros.  The more you avoid allowing a certain constellation to be at the head of Pat(2n+1), the less normally, or randomly, distributed Pat(2n+1) is.  However, this constellation is itself a building block of complexity which is iteratively being used to make Pat(2n+1) more and more complicated and more and more random.  We can’t suck and blow from the same pipe: if we’re using a constellation candidate to build complexity, and complexity in turn produces randomness and normal distribution, then it is impossible to disallow that constellation candidate access to the head of Pat(2n+1) ad infinitum.

Hence it has been shown that neither immediate nor iterative prevention of allowable constellation candidates reaching the head of Pat(2n+1) is possible, making all allowable constellations infinite in firstZeros.

How does all of this apply to prime constellations?

Examine the firstZeros set for Pat(2n+1) = {1, 2, 3, 5, 6, 8, 9, …}.  Note the magic that occurs when you apply f(n) = 2n+1 to every element, yielding {3, 5, 7, 11, 13, 17, 19, …}: you get the odd primes.  Why?  The elementals of Pat(2n+1) together form the pattern of multiples of all the primes.  The constellation c, c+1 is allowable in Pat(2n+1), and all allowable constellations occur infinitely.  If this is true, then primes that are two apart (prime twins) must occur infinitely, because they correspond to an allowable constellation in Pat(2n+1).  Moreover, ALL constellations of primes which correspond to allowable constellations in Pat(2n+1) are infinite.  In case that last part missed your attention: prime triplets, quadruplets, etc. of allowable form are all infinite too.
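That correspondence is easy to spot-check; the trial-division isPrime below is an illustrative helper, not part of the original code samples:

```java
// Map each entry k of firstZeros for Pat(2n+1) to 2k+1 and confirm primality.
public class PrimesDemo {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] firstZeros = {1, 2, 3, 5, 6, 8, 9};
        for (int k : firstZeros) {
            int p = 2 * k + 1;
            System.out.println(k + " -> " + p + ", prime: " + isPrime(p));
        }
        // 1 -> 3, 2 -> 5, 3 -> 7, 5 -> 11, 6 -> 13, 8 -> 17, 9 -> 19
    }
}
```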

More Information

This is a shortened summary of a larger work.  Please see the full work for more information.

© Martin C. Winer, 2004-2007