26f now claimed proven sufficient

It is now claimed that 26 face turns have been proven sufficient to solve Rubik's Cube. I heard about this on the Yahoo speedsolving group forum. There is an article about it at the following link, which also contains a link for downloading the paper. The paper was written by two people at Northeastern University.



I found this video

I found this video. Not sure if it was already posted.

Some comments on the paper

I invested some time to study the paper in more detail. It's a nice result to know that it takes at most 16 moves to go from any cube position to the square group H. The methods used seem similar to those that Silviu Radu, Tom Rokicki, Bruce Norskog... and I use to do computations for related problems. The perfect hash functions applied to symmetrized group elements or symmetrized cosets are what I call sym-coordinates, for example.

There is a serious gap in the proof that 26 moves suffice, though. I already told this to Daniel Kunkle and he agreed. In the paper they want to show that all cubes which have a distance <=3 from the square group H can be solved in at most 14 moves. For this they compute, for all 23 cosets at level 3, the 624 cases which take 12 or 13 moves once they have reached H, and show with the aid of Cube Explorer that the optimal solution is <=14 moves.

The problem is that *both* the 23 cosets and the 624 H-elements are symmetry-reduced. But if you take the symmetry-reduced coset elements, you must combine them with *all* H-elements at depth 12 and 13. So you have to analyse 48 times more cubes in the worst case. If any of these cubes has a 15-move optimal solution, the upper bound no longer holds with this argumentation.

Symmetry Problem Fixed for Proof of 26

As Herbert mentioned, there was an error in our original ISSAC '07 publication announcing our proof that 26f suffice for the cube. Specifically, we reduced both levels 12 and 13 of the square subgroup and the level 3 cosets by symmetry, when in fact only one can be reduced for a correct result.

We have since gone back and redone the computation, this time using the full subgroup (without symmetry reduction). Unfortunately, a few configurations were found that required 15 moves, hence allowing a reduction of only 1 move at coset level 3. Through further analysis, we performed the same computation at further levels, finally removing all remaining cases at coset level 9. This required significant additional computation, and the use of the "brute-forcing" method we mention at the end of the paper.

This fix was presented this week at ISSAC '07, and will appear in a later journal version of the paper. We will likely also provide a fixed version of the paper online in the meantime.

Thanks very much to Herbert for his careful reading of our paper, and for bringing this mistake to our attention (among his other very useful comments). Also, thanks to all of the other researchers in this area (some of whom Herbert mentioned). Our result would not have been possible without the methods and results that came before.



fixed proof

Congrats on the successful proof!
I would love to read the final paper.
(Unfortunately, I'm not a member of ACM)

26 move proof fixed

Good news that you were able to fix the problem! How much additional CPU time did you need for the necessary new computations?

large God's algorithm calculations

I thought I would remark that the Kunkle/Cooperman paper has established what I believe is a new "record" for the largest coset space of the cube for which a full God's algorithm calculation has been carried out.

On October 1, 2005, I published on this forum a summary of my God's Algorithm calculation of the permutations of the corners and edges (with respect to fixed centers) in the quarter-turn metric. I believe this is the largest subgroup of the cube for which a God's algorithm calculation has been completed, with 9,656,672,256,000 positions.

The coset space analyzed by Kunkle/Cooperman has 65,182,537,728,000 positions, and thus is 6.75 times larger than the subgroup in my analysis. Interestingly, the paper by Kunkle/Cooperman only lists the number of equivalence classes (symmetrized cosets) at each distance from the trivial coset, but doesn't give a distribution in terms of the actual number of cosets of the squares subgroup.

So to my knowledge, the largest God's algorithm calculations of Rubik's cube sub-spaces in terms of the number of elements are given below. This table does not include analyses of coset spaces that are not closed under a set of basic cube moves, such as those done by Rokicki, Radu, and Kociemba. In the table, the numbers under "Symmetries" in parentheses are for symmetry and antisymmetry, regardless of whether or not antisymmetry was used in the calculation.

Sub-space             Size                Symmetries  Type         Metric  Person(s)  Year
--------------------  ------------------  ----------  -----------  ------  ---------  ----
Cosets of squares     65,182,537,728,000  48          coset space  FTM     Kunkle/    2007
subgroup                                                                   Cooperman

Permutations of the    9,656,672,256,000  48 (96)     subgroup     QTM     Norskog    2005
corners and edges

Permutations of the    9,656,672,256,000  48 (96)     subgroup     FTM     Norskog    2006
corners and edges

Edges (with centers)     980,995,276,800  48 (96)     subgroup     QTM     Rokicki    2004

Edges (with centers)     980,995,276,800  48 (96)     subgroup     FTM     Rokicki    2004

My congratulations to Kunkle/Cooperman for achieving this new "record."

I also congratulate Silviu Radu for his 34q upper bound proof.

- Bruce Norskog


In my previous post, I called the permutations of the corners and edges a subgroup of the cube group. The definition of a subgroup (call it H) of another group (call it G) more or less says that H must consist of a subset of elements of G, and that H itself is a group under the group operation of G. However, I view the permutations of the corners and edges "subgroup" as partitioning the elements of the cube group into a set of equivalence classes of size 2048*2187 as opposed to a subset of elements of the cube group. With that viewpoint, it seems to me that it isn't technically a subgroup.

However, I can define the following subgroup of the cube group:
H = < U, D, U' L2 U L2 U2 B' R' U' F U2 F' R B L' U', U' R2 U R2 U2 F' L' U' B U2 B' L F R' U', L B2 R' F2 D' R B2 L' U' F2 R' F2 R U', R F2 L' B2 D' L F2 R' U' B2 L' B2 L U' >
The generators of H are equivalent to U, D, L, R, F, and B except that the orientations of all the corners and edges are preserved (using a particular commonly-used convention for corner orientation as well as edge orientation). Thus, each of my equivalence classes contains one element of the above subgroup, and that element can be considered a representative element of the equivalence class. So what I am calling a subgroup of the cube group is at least isomorphic to a subgroup of the cube group, and I think it is reasonable to talk of that "subgroup" as a subgroup. Similar logic can be applied to the edges (with centers) "subgroup."

One of Herbert's comments

One of Herbert's comments that I find most interesting has to do with whether or not items (permutations or cosets) that have been reduced by symmetry have to be expanded at any point in the calculations. I have a similar situation in the program I have used to calculate God's algorithm out to 13q and to 11f. Let's take the 13q case as an example, although the 11f case is essentially the same.

In my case, I'm not dealing with cosets. To calculate out to 13q, I store the set of all permutations out to 7q (call it T7) and the set of all the permutations out to 6q (call it T6). Then, I form the product T7T6, with the products being produced in lexicographical order so as to detect duplicate permutations. (I have posted all this before.)

What I really wanted to do was to deal with T7 reduced by symmetry and T6 reduced by symmetry, and I tried mightily to find a way to get it to work. But it just doesn't work, and I think for the same reasons that Herbert cited in the last paragraph of his message.

What I ended up doing was storing T7 and T6 reduced by symmetry, and then dynamically expanding each of them roughly 48 times as I was forming the products. (And of course, I really don't have to store T6 separately from T7 because T6 is a subset of T7.)

I wondered if maybe I could expand T7 but not T6 or vice versa. Perhaps I could have done it that way. But what I did instead was to expand both. Then, I found a way to create cutoff points so that not all products are really produced. In fact, the only products that are produced out to 13q are those known to be reduced by symmetry. So I do get a 48 times speedup.

Here is another example of the same principle. Consider the way breadth-first searches are usually done. Let Vn be the set of permutations that are exactly n moves from Start. In this terminology, Tn is the union of Vi for i from 0 to n. The basic breadth-first algorithm that anybody would use is Vi+1=ViQ in the quarter-turn metric, or Vi+1=Vi(Q union H) in the face-turn metric. There's some bookkeeping you have to do to avoid counting duplicate positions, but that's the basic algorithm.
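To make the scheme concrete, here is a toy version of Vi+1=ViQ (my own sketch, not anyone's actual cube program): the group is S4 and Q is a set of three adjacent transpositions standing in for the quarter-turn moves.

```python
# Plain breadth-first search V_{i+1} = V_i Q on a toy puzzle:
# the group S4, generated by Q = three adjacent transpositions
# (a stand-in for the cube's quarter-turn move set).

def compose(p, q):                  # apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

Q = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]

start = (0, 1, 2, 3)
seen = {start}
V = [start]                         # V_0 = {Start}
counts = [1]
while V:
    nxt = []                        # V_{i+1} = V_i Q, minus duplicates
    for v in V:
        for q in Q:
            w = compose(v, q)
            if w not in seen:       # the duplicate-position bookkeeping
                seen.add(w)
                nxt.append(w)
    if nxt:
        counts.append(len(nxt))
    V = nxt

print(counts)   # → [1, 3, 5, 6, 5, 3, 1]: all 24 positions by distance
```

The distance distribution here is just the inversion-number distribution on S4, since each move changes the inversion count by one.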

Well, what I have always done instead is Wi+1=WiQ or Wi+1=Wi(Q union H), where Wi is Vi reduced by symmetry. Working with W in this fashion, it doesn't have to be re-expanded back to V. It can be used as is. But you can't reduce Q or H by symmetry. They are already expanded, and you just leave them expanded. And with this algorithm also, you get about a 48 times speedup.
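A matching sketch of the symmetry-reduced variant Wi+1=WiQ on the same toy puzzle (again my own illustration): the "symmetry group" here is just conjugation by the order-reversing permutation, which maps Q to itself, so Q stays expanded while the frontier is stored as canonical representatives.

```python
# Symmetry-reduced BFS W_{i+1} = W_i Q on the toy puzzle (S4 with
# adjacent transpositions as Q).  The symmetry is conjugation by
# w0 = (3,2,1,0), which permutes Q, so Q itself is left expanded.

def compose(p, q):                  # apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

Q = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]
w0 = (3, 2, 1, 0)                   # an involution: w0 == w0^-1

def mirror(p):                      # conjugate p by w0
    return compose(compose(w0, p), w0)

def canon(p):                       # canonical class representative
    return min(p, mirror(p))

def class_size(p):
    return 1 if mirror(p) == p else 2

start = (0, 1, 2, 3)
seen = {start}
W = [start]                         # W_0, already canonical
counts = [1]                        # true position counts per level
while W:
    nxt = []
    for w in W:
        for q in Q:                 # Q is NOT symmetry-reduced
            c = canon(compose(w, q))
            if c not in seen:
                seen.add(c)
                nxt.append(c)
    if nxt:                         # expand class sizes for counting
        counts.append(sum(class_size(p) for p in nxt))
    W = nxt

print(counts)   # → [1, 3, 5, 6, 5, 3, 1], matching the full search
```

Note that W is never re-expanded: only the class sizes are summed to recover the true counts, while the search itself touches roughly half the positions (1/48 for the cube's full symmetry group).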

The difference between my two algorithms is that in the Wi+1=WiQ case, nothing has to be expanded, but the products have to be reduced by symmetry after they are formed. And when the products are reduced by symmetry after they are formed, I can't figure out a way to make them come out in lexicographic order. So this algorithm is only suitable for problems that are small enough to allow storing all results in a table in memory. In the T7T6 case, both T7 and T6 have to be expanded. But the products are produced in lexicographic order, and hence do not have to be stored at all. Cutoff points are used so that the only products produced are the ones that are already reduced by symmetry (about 1/48 of the total, which is why you get about a 48 times speedup).

The bottom line is that if you are forming the product of two sets of items (permutations or cosets) X and Y, then you can reduce X by symmetry or you can reduce Y by symmetry, but you cannot reduce both of them by symmetry.
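This rule can be checked mechanically on a small example. The sketch below (my own construction, not from the paper) uses S4 as a toy group, with conjugation by a 4-cycle standing in for the cube's 48 symmetries; X and Y are conjugation-closed subsets standing in for the two factor sets. Reducing one factor preserves every symmetry class of the products, while reducing both factors loses classes here.

```python
from itertools import permutations

# Toy check of the rule: when forming products X*Y, you may reduce
# X by symmetry OR Y by symmetry, but not both at once.

def compose(p, q):                      # apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(s, p):                         # s * p * s^-1
    return compose(compose(s, p), inverse(s))

G = list(permutations(range(4)))
r = (1, 2, 3, 0)                        # the 4-cycle (0 1 2 3)
S = [G[0]]                              # symmetry group {e, r, r^2, r^3}
for _ in range(3):
    S.append(compose(r, S[-1]))

def canon(p):                           # symmetry-class representative
    return min(conj(s, p) for s in S)

# Conjugation-closed factor sets (stand-ins for cosets / H-elements):
X = [p for p in G if canon(p) == canon((1, 0, 2, 3))]   # one S-orbit
Y = [p for p in G if all(p[i] != i for i in range(4))]  # derangements

reps_X = {canon(x) for x in X}          # X reduced by symmetry
reps_Y = {canon(y) for y in Y}          # Y reduced by symmetry

full     = {canon(compose(x, y)) for x in X      for y in Y}
one_side = {canon(compose(x, y)) for x in reps_X for y in Y}
both     = {canon(compose(x, y)) for x in reps_X for y in reps_Y}

print(len(full), len(one_side), len(both))
# one_side covers every class of full; both misses some classes
```

The identity behind this is canon(x*y) = canon(canon(x) * (s y s^-1)) for the symmetry s that canonicalizes x, which needs the *other* factor left fully expanded (closed under conjugation).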