# God's Algorithm out to 14f*

Here are the results for positions at exactly that distance:

```
d   mod M + inv          mod M       positions
-- -------------- --------------- ----------------
9      183339529       366611212      17596479795
10     2419418798      4838564147     232248063316
11    31909900767     63818720716    3063288809012
12   420569653153    841134600018   40374425656248
13  5538068321152  11076115427897  531653418284628
14 72805484795034 145610854487909 6989320578825358
```
I had the good fortune to run a few test calculations on a brand new Cray XT6m with over 4000 Opteron cores. As I have done in the past, I tried my own program for Rubik's Cube calculations. Results looked promising, and after some optimisations I was able to complete the calculation out to 14 moves in the full turn metric in about nine days. If I had been able to use the entire machine alone, it should have taken a bit over a day. It verifies all previous calculations out to 13f* and of course adds 14f*. As has been done before, I used all edge cube positions as cosets. After symmetry reduction this leaves 934778 cosets, which can be calculated independently. The single largest of these cosets has 1273353199 different positions; incidentally, it is the same coset that was largest at distance 13.

### Antisymmetric positions

That is a huge computation. Another huge computation would be solving all antisymmetric cube positions. I think it can be done in a reasonable amount of time on your computer with 32 GB of memory; a rough guess is a month instead of nine days. See here for some ideas for that computation.

### Wow!

Congratulations! This is an excellent result. Thank you so much
for confirming my results, too.

So the actual fraction of distance-14 positions is 0.000161595303100046.
My one million optimal cube exploration put it at about 0.000172. I'll have to dig out the old statistics books to see if that's within a reasonable margin of error, but I suspect it is.
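
The quoted fraction can be reproduced from the distance-14 count in the table above and the well-known total number of cube positions; a quick check in Python:

```python
# Reproduce the distance-14 fraction: positions at exactly 14f* divided by
# the total number of cube positions (43,252,003,274,489,856,000).
from fractions import Fraction

TOTAL = 43252003274489856000       # size of the cube group
D14 = 6989320578825358             # positions at exactly 14f* (table above)
frac = Fraction(D14, TOTAL)
print(float(frac))                 # about 0.000161595303
```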

It's nice to see the branching factor decrease, however slowly.

Can I ask for more information on your precise technique? Did you use a hash table, a binary tree of some sort, or just put all the found positions in an array and then sort and uniquify? Did you write them to disk, or do it all in memory?

### Technique

First of all, I of course used fixed corner cube positions as cosets (a bad case of me translating Eck-Würfel as edge cubes).

About the technique: I tabulated all symmetry-reduced positions up to eight moves. I could quite easily tabulate them up to ten moves, but the resulting tables are too large to fit into memory. I then used a combination technique, starting with all positions at depth one and "combining" each with a position at depth eight to reach the desired corner cube position, then going on to depth two and so on until reaching the desired depth, discarding all duplicate positions along the way. I experimented with pruning tables after my first version was about a factor of 100 too slow compared to your calculation up to 13 moves. During that I found the flaw in my program (doing a full symmetry analysis of every position, though most of the time the corner cube position is already completely unsymmetric). After correcting that, it turned out that the combination method was slower up to depth twelve, but starting at depth 13 it was faster, so I used that method for the calculation out to 14 moves.
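
As a sanity check on the idea (not the author's code), here is a toy version of the combination technique on a small permutation group with made-up generators: every position at depth m+n is a product of a depth-m and a depth-n position, so combining the two layers and discarding anything already reachable in fewer moves yields exactly the depth-(m+n) layer.

```python
# Toy combination technique on a small permutation group. The generators are
# invented for illustration; the real calculation uses cube moves instead.
from itertools import product

def compose(p, q):                 # apply permutation p, then q
    return tuple(q[i] for i in p)

GENS = [(1, 2, 3, 0, 4, 5),        # a 4-cycle
        (0, 1, 3, 4, 2, 5),        # a 3-cycle
        (5, 1, 2, 3, 4, 0)]        # a transposition
IDENT = (0, 1, 2, 3, 4, 5)

def bfs_layers(max_depth):
    """layers[d] = set of elements at depth exactly d."""
    layers, seen = [{IDENT}], {IDENT}
    for _ in range(max_depth):
        frontier = {compose(p, g) for p in layers[-1] for g in GENS} - seen
        seen |= frontier
        layers.append(frontier)
    return layers

layers = bfs_layers(4)
m = n = 2
# Combine a depth-m element with a depth-n element; discard shallower hits.
combined = {compose(s, t) for s, t in product(layers[m], layers[n])}
shallow = set().union(*layers[:m + n])      # everything at depth < m+n
assert combined - shallow == layers[m + n]  # exactly the depth-4 layer
```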

As for storing the positions, I simply used an ordered list, which makes it quite easy to check whether a position is already in the list. Entering a new position then of course means moving all later positions in memory. For performance reasons I didn't use a single list but actually 131072 lists, using the edge cube flip and six bits from the edge cube position as an index into which list to use. Since the edge cube flip is already in the index, all I had to store was the edge cube position. The necessary 29 bits fit quite easily into 4 bytes.
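
A minimal sketch of this storage scheme (illustrative Python, not the original program): 131072 = 2^11 × 2^6 sorted buckets indexed by the 11-bit edge flip plus six bits of the edge permutation, each bucket holding edge positions in 4-byte slots and kept sorted on insert.

```python
# Bucketed ordered-list storage: the bucket index encodes the flip and six
# bits of the permutation, so only the 29-bit position needs to be stored.
from array import array
from bisect import bisect_left

NUM_BUCKETS = 2048 * 64                 # edge flip (2^11) x 6 permutation bits
buckets = [array("I") for _ in range(NUM_BUCKETS)]   # 4-byte unsigned slots

def insert_position(flip, perm):
    """Insert edge coordinate (flip < 2^11, perm < 2^29); True if it was new."""
    bucket = buckets[(flip << 6) | (perm >> 23)]
    i = bisect_left(bucket, perm)       # binary search in the sorted bucket
    if i < len(bucket) and bucket[i] == perm:
        return False                    # duplicate position
    bucket.insert(i, perm)              # shifts all later entries, as described
    return True
```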

So even for the largest cosets, all positions fit in about 5 GBytes of memory. The Cray has 32 GBytes of memory for every 24 cores. Over 99% of all cosets have less than 250 million positions, resulting in less than 1 GByte of memory usage, so I could use all 24 cores in a node. For the largest cosets I used only 4 cores of a node, which means I had 8 GBytes of available memory per coset, taking into account a bit of overhead from dynamic memory allocation. As an estimate for the required memory I used the number of positions the coset had at depth 13, times ten, which worked out quite nicely.
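
These memory figures follow directly from the 4 bytes per stored position described above (ignoring bucket overhead):

```python
# Reproduce the per-coset memory figures at 4 bytes per position.
largest = 1273353199            # positions in the largest coset
print(largest * 4 / 1e9)        # about 5.1 GB, "about 5 GBytes"
print(250_000_000 * 4 / 1e9)    # 1.0 GB for a coset at the 99% threshold
```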

I post this as a comment since it might be of interest to other people. I will contact you via mail if you have more questions.

### Symmetry of the Corners vs. the Whole Cube

You stated the following: "During that I found the flaw in my program (doing a full symmetry analysis of every position, though most of the time the corner cube position is already completely unsymmetric)."

Here is how I think about this issue. For a position x, we let Symm(x) be the set of symmetries that preserve x. Symm(x) will be a subgroup of M, where M is the group of 48 symmetries of the cube. For most positions, it will be the case that Symm(x) contains a single element, namely the identity symmetry.

Also, we may write x as (x_c, x_e), where x_c is the position of the corners and x_e is the position of the edges. It is the case that Symm(x) = Symm(x_c) ∩ Symm(x_e). Because Symm(x_c) is trivial most of the time, there is usually nothing to calculate: if Symm(x_c) contains only the identity, then so does Symm(x). This can save a tremendous amount of time in calculating symmetry reduction.

The same thing is true in the other direction, of course.  It is usually the case that the edges have the identity as their only symmetry, so it's usually not necessary to calculate the symmetry of the corners if we know the symmetry of the edges. But it's much easier to store all the corner positions than all the edge positions, so it's much easier to pre-calculate the symmetry of all the corner positions than to pre-calculate the symmetry of all the edge positions.
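
A toy model of this short-circuit, with plain permutations standing in for cube coordinates and a made-up four-element symmetry group acting by conjugation: when the corner part already has only the identity symmetry, the edge part is never examined.

```python
# Symm(x) = Symm(xc) ∩ Symm(xe), computed corner-first. Permutations and the
# Klein four-group below are illustrative, not real cube symmetries.
def compose(p, q):                     # apply p, then q
    return tuple(q[i] for i in p)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugate(m, x):                   # action of symmetry m on position x
    return compose(compose(inverse(m), x), m)

IDENT = (0, 1, 2, 3)
SYMS = [IDENT, (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)]  # Klein four-group

def symm(x, candidates=SYMS):
    return [m for m in candidates if conjugate(m, x) == x]

def symm_pair(xc, xe):
    corner = symm(xc)
    if len(corner) == 1:               # identity only: skip the edges entirely
        return corner
    return [m for m in corner if conjugate(m, xe) == xe]
```

Here `symm_pair((1, 2, 0, 3), xe)` returns just the identity without ever looking at `xe`, while a symmetric corner part such as `(1, 2, 3, 0)` forces the edge check.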

Jerry Bryan

### Exactly

I hadn't thought about that, so initially I did the easiest thing.

I needed to know how many "real" positions a symmetry reduced position would represent. So I used all 96 operations (including inversion) and counted how many different positions I would get.

Later I started using cosets, but still, for every position generated, I tried out all 96 operations. Since my cosets are fixed corner cube positions and twists, I can do the analysis once, and most of the time I don't have to look at the edge cubes to know that a position represents 96 real positions. And even if the corner position has a symmetry, it reduces the number of symmetry operations I have to look at.

There is only one coset where the corner cubes have the full symmetry and this is the one coset which takes the most time to calculate.

The flaw was only that I did not realise that this took over 90% of the CPU time and could, at least most of the time, be avoided.

### More Congratulations and Questions

I add my congratulations. This is an outstanding achievement. Let me request that follow-up questions and comments be kept on this forum rather than in e-mail, because I think a number of us are interested.

Even though the terminology is quite a bit different, it seems to me that there is a certain amount of kinship between your technique and the algorithm I used several years ago to calculate out to 11f from Start. (Tom Rokicki has long since improved on this result, of course!).

My algorithm produces permutations in lexicographic order. On its face, lexicographic order doesn't sound like it has anything to do with processing cosets. But processing permutations in lexicographic order is in fact producing the permutations in order by coset. I can elaborate further if anybody is interested, but I think it's pretty clear why this is the case. So let me start with just one question.

You state the following: "About the technique, I tabulated all symmetry reduced positions up to eight moves." For the set of permutations S and the set of permutations T, my algorithm produces the product ST in lexicographic order. In order to produce positions out to a distance of m+n from Start, my program tabulates all positions out to a distance of m moves from Start in the set S and out to n moves from Start in the set T. More typically, the sets S and T are the same set containing all the positions out to n moves from Start, and my program produces the positions out to 2n from Start in lexicographic order.

The difference is that I had to tabulate all positions out to a distance n from Start, not just the symmetry reduced positions. The product ST was symmetry reduced, but not the sets S and T themselves. How did you manage to produce all the longer positions if you only tabulated the symmetry reduced positions out to length 8?

Jerry Bryan

### My positions are not ordered

You are of course correct: to reach all positions I have to consider all positions, not just the symmetry-reduced ones. But there is a shortcut.

For the positions S I started with depth one, then two, and so on up to six. For that I needed all positions, so I took the symmetry-reduced positions and applied all possible symmetry operations to generate the rest. For speed, and to avoid duplicate work, I stored along with each position which symmetry operations would generate distinct positions; actually I only stored an index, since I found only 76 different sets of symmetry operations.

For the positions T I only used positions at depth eight. And to reach all positions, T too has to contain all possible positions at that depth.

But here comes the shortcut. I only need to produce positions which are in a specific coset, meaning I know exactly which symmetry-reduced corner cube position and twist I want to reach. Looking at a specific position S, I can therefore tell exactly what corner cube position and twist the position T needs to have. Using a lookup table I could then find the corresponding symmetry-reduced corner cube position, and by trying out all symmetry operations I could create a list of the symmetry operations which would generate the desired corner cube position for T. Again, most of the time there is only one.

I had all positions indexed by the symmetry-reduced corner cube position and twist. So just by looking at the index I could tell whether there were any fitting positions at depth eight. If there were, I could then combine them with the position S, after applying all of the symmetry operations on the list that generate distinct positions. Since I already know that this reaches exactly the wanted corner cube position and twist, all I had to do was look at the edge cubes.

The rest was to check whether I had generated a duplicate and, if not, to increment the counters.

### Further Question

I think I understand about 90% of your algorithm.  Let me ask a further question about what I think is the key element of the algorithm.  To that end, I'm going to describe a simpler algorithm which I believe faithfully replicates the spirit of your algorithm while omitting a certain amount of the necessary but messy bookkeeping.

Let's suppose we have stored two sets S and T, with S being the set of all positions that are 2 moves from Start and T being the set of all positions that are 8 moves from Start.  And let's suppose that we want to calculate the set of all positions that are 10 moves from Start.  Because we are not going all the way out to 14 moves from Start, I will omit the "combining" process of letting S have the positions 2 moves from Start, then letting S have the positions 3 moves from Start, etc.

Let's further assume that the sets S and T are not reduced by symmetry.  It seems to me that reduction by symmetry is essential to the efficiency of your algorithm, but that reduction by symmetry is not essential to the basic correctness of your algorithm.  So I will choose to assume there is no reduction by symmetry in S and T.

So your algorithm is going to form products of the form st.  Some such products will be 10 moves from Start and some such products will be less than 10 moves from Start.  Eliminating those products that are less than 10 moves from Start is a detail that is pretty simple to take care of, so I will omit it.

We are left with the quintessential problem that some of the products that are 10 moves from Start will be duplicates.  s_i t_j may or may not equal s_k t_m when (i, j) ≠ (k, m), and we only want to count each distinct product once.

Your algorithm does not produce the products st in a totally random order, nor does it produce them in lexicographic order.  Rather, it produces the products in sort of a targeted order.  We let z = st, and we write z as z = (z_c, z_e), where z_c is the position of the corners and z_e is the position of the edges.  For the purposes of your algorithm, a coset is the set of all positions with a fixed z_c.  So it seems to me that you need to find the set of all solutions to the equation z_c = s_c t_c for a fixed z_c.  Some (s_c, t_c) pairs will solve the equation, and most of them won't.  How do you quickly find all the (s_c, t_c) pairs that will work?

The following occurs to me, but you are probably doing something a little more clever than this.  For the case where S contains those positions 2 moves from Start, S only contains 114 elements and there are only 114 possible values for s_c.  When S contains positions more than 2 moves from Start, there will be more than 114 elements, of course.  Also, further from Start than 2 moves, there will be fewer distinct corner positions in S than there are elements in S.  But let's just stick with the simple case.  Anyway, for each specific s_c in S and for each fixed z_c as described above, we solve z_c = s_c t_c for t_c.  Namely, t_c = s_c^-1 z_c.  It's then pretty straightforward to determine whether s_c^-1 z_c is in T.  But as I said, you may be doing something more clever than this.
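
A toy version of exactly this lookup, with small permutations standing in for corner coordinates and made-up sets S and T (not real cube data): index T by its corner part and, for each s in S, probe the index at t_c = s_c^-1 z_c.

```python
# Solve zc = sc*tc for tc and look it up in an index of T by corner part.
from collections import defaultdict

def compose(p, q):                 # apply p, then q
    return tuple(q[i] for i in p)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Hypothetical (corner, edge) pairs; the edge parts are just labels here.
S = [((1, 2, 3, 0), "s-edges-1"), ((0, 1, 2, 3), "s-edges-2")]
T = [((1, 0, 3, 2), "t-edges-1"), ((2, 3, 0, 1), "t-edges-2")]

t_by_corner = defaultdict(list)    # the index of T by corner part
for tc, te in T:
    t_by_corner[tc].append((tc, te))

zc = (2, 3, 0, 1)                  # corner coordinate fixed by the coset
pairs = []
for sc, se in S:
    needed = compose(inverse(sc), zc)       # tc = sc^-1 * zc
    for t in t_by_corner.get(needed, []):   # usually empty or very small
        pairs.append(((sc, se), t))

# every pair found really lands in the target coset
assert all(compose(sc, tc) == zc for (sc, _), (tc, _) in pairs)
```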

### Correct

Symmetry reduction is not essential for my algorithm; it is merely a way to reduce memory consumption and calculation time. I need 12 bytes to store a position, so my depth-eight table is 166 MBytes in size. Without symmetry reduction it would be nearly 16 GBytes. I have to add a lookup table to convert easily between non-symmetry-reduced and symmetry-reduced positions, which is 350 MBytes in size (4 bytes for each of the 88179840 possible corner cube positions), but it still saves over 15 GBytes.
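
These table sizes check out, assuming the factor 96 (48 symmetries times inversion) accounts for the unreduced count:

```python
# Reproduce the quoted table sizes (decimal MB/GB, 12 bytes per position).
reduced = 13890036      # symmetry-reduced positions at depth eight
corners = 88179840      # possible corner cube positions
print(reduced * 12 / 1e6)        # about 166.7 MB for the depth-eight table
print(corners * 4 / 1e6)         # about 352.7 MB for the lookup table
print(reduced * 96 * 12 / 1e9)   # about 16.0 GB without symmetry reduction
```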

About avoiding duplicates: it is just not a big enough win to be worth the trouble. Even at depth 14 I use all non-symmetry-reduced positions at depth six (there are 7618438) and combine each with essentially every symmetry-reduced position at depth eight once (there are 13890036). This is not entirely correct if the depth-eight position is symmetric, but that is almost never the case. All in all my algorithm produces 7618438 * 13890036, which is about 105.8 trillion positions. Of these, 72.8 trillion were not duplicates of depth-14 or shorter positions. This means at best I could avoid generating about one out of three positions. At depth ten it is only about one out of four positions.
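
The arithmetic here is easy to reproduce, using the mod M + inv count at depth 14 from the table at the top as the number of non-duplicates:

```python
# Products formed versus products that survive as new depth-14 positions.
s6 = 7618438            # non-symmetry-reduced positions at depth six
t8 = 13890036           # symmetry-reduced positions at depth eight
produced = s6 * t8
print(produced / 1e12)  # about 105.8 trillion products formed
survivors = 72805484795034          # "72.8 trillion were not duplicates"
print(survivors / produced)         # about 0.69: roughly one in three is a duplicate
```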

About the last part: t_c = s_c^-1 z_c is exactly what I am using in my program. It is so straightforward I couldn't think of anything more clever. And I have a table which indexes the positions at depth eight by their corner cube position, so I just have to check whether my index says there are any.

At depth two for S I simply use all 243 non-symmetry-reduced positions, even though there are only 114 different corner cube positions. Since calculating t_c is so easy, I am not sure I could save much by doing it only once instead of twice. At depth six it would be about once instead of five times, which still is not much, since it is in an outer loop. If I had stored the non-symmetry-reduced positions, this would be quite easy to do. But I have a list of symmetry-reduced positions, and two positions with the same corner cube position might need to be treated differently because of different edge cube positions.