# 26f now claimed proven sufficient

http://www.physorg.com/news99843195.html

### Some comments on the paper

There is a serious gap, though, in the proof that 26 moves suffice. I already told Daniel Kunkle about this, and he agreed. In the paper they want to show that all cubes which have a distance <=3 from the square group H can be solved in at most 14 moves. For this they take, for all 23 cosets at level 3, the 624 cases which take 12 or 13 moves once they have reached H, and show with the aid of Cube Explorer that the optimal solution is <=14 moves.

The problem is that *both* the 23 cosets and the 624 H-elements are symmetry-reduced. But if you take the symmetry-reduced coset elements, you must combine them with *all* H-elements at depth 12 and 13. So you have to analyse 48 times more cubes in the worst case. If any of these cubes has a 15-move optimal solution, the upper bound no longer holds with this argumentation.
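Herbert's objection can be reproduced in miniature. The sketch below is a toy in S4, not cube code: X plays the role of the symmetry-reduced cosets, Y the role of the H-elements at depth 12 and 13, and a Klein four-group of conjugations stands in for the 48 cube symmetries. Reducing *one* factor by symmetry still covers every product up to symmetry, but reducing *both* factors misses entire classes.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)[i] = p[q[i]]: apply q first, then p
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def cycle_lengths(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                n += 1
            lengths.append(n)
    return sorted(lengths)

S4 = list(permutations(range(4)))
X = {p for p in S4 if cycle_lengths(p) == [4]}        # all 4-cycles ("cosets")
Y = {p for p in S4 if cycle_lengths(p) == [1, 1, 2]}  # all transpositions ("H-elements")

# Stand-in for the cube's 48 symmetries: the Klein four-group acting by conjugation.
V = [(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)]

def classes(gs):
    # one canonical representative (lexicographic minimum) per symmetry class
    return {min(compose(compose(s, g), inverse(s)) for s in V) for g in gs}

full       = {compose(x, y) for x in X for y in Y}
one_sided  = {compose(x, y) for x in classes(X) for y in Y}           # reduce X only
both_sided = {compose(x, y) for x in classes(X) for y in classes(Y)}  # reduce both

assert classes(one_sided) == classes(full)  # reducing one factor loses nothing
assert classes(both_sided) < classes(full)  # reducing both misses whole classes
```

Here Y is closed under the symmetry conjugations, which is exactly why reducing only one factor is safe; reducing both drops whole product classes, just as Herbert describes.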

### Symmetry Problem Fixed for Proof of 26

We have since gone back and redone the computation, this time using the full subgroup (without symmetry reduction). Unfortunately, a few configurations were found that required 15 moves, hence allowing a reduction of only 1 move at coset level 3. Through further analysis, we performed the same computation at successive levels, finally removing all remaining cases at coset level 9. This required significant additional computation, and the use of the "brute-forcing" method we mention at the end of the paper.

This fix was presented this week at ISSAC '07, and will appear in a later journal version of the paper. We will likely also provide a fixed version of the paper online in the meantime.

Thanks very much to Herbert for his careful reading of our paper, and for bringing this mistake to our attention (among his other very useful comments). Also, thanks to all of the other researchers in this area (some of whom Herbert mentioned). Our result would not have been possible without the methods and results that came before.

Best,

-Dan

### fixed proof

I would love to read the final paper.

(Unfortunately, I'm not a member of ACM)

### 26 move proof fixed

### large God's algorithm calculations

I thought I would remark that the Kunkle/Cooperman paper has established what I believe is a new "record" for the largest coset space of the cube for which a full God's algorithm calculation has been carried out.

On October 1, 2005, I published on this forum a summary of my God's Algorithm calculation of the permutations of the corners and edges (with respect to fixed centers) in the quarter-turn metric. I believe this is the largest subgroup of the cube for which a God's algorithm calculation has been completed, with 9,656,672,256,000 positions.

The coset space analyzed by Kunkle/Cooperman has 65,182,537,728,000 positions, and thus is 6.75 times larger than the subgroup in my analysis. Interestingly, the paper by Kunkle/Cooperman only lists the number of equivalence classes (symmetrized cosets) at each distance from the trivial coset, but doesn't give a distribution in terms of the actual number of cosets of the squares subgroup.
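The sizes quoted here can be checked from the standard counting formulas (a quick sketch; the 663,552 order of the squares subgroup is taken as a known value):

```python
from math import factorial

# Order of the full cube group: 8! corner placements x 3^7 corner twists
# x 12! edge placements x 2^11 edge flips, halved for permutation parity.
cube = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
assert cube == 43_252_003_274_489_856_000

# The squares subgroup has 663,552 elements (assumed known), giving the
# size of the Kunkle/Cooperman coset space:
cosets = cube // 663_552
assert cosets == 65_182_537_728_000

# Corner-and-edge permutations (orientations ignored), parity-linked:
perms = factorial(8) * factorial(12) // 2
assert perms == 9_656_672_256_000

# The coset space is 6.75 times larger than the permutations subgroup:
assert cosets / perms == 6.75
```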

So to my knowledge, the largest God's algorithm calculations of Rubik's cube sub-spaces, in terms of the number of elements, are given below. This table does not include analyses of coset spaces that are not closed under a set of basic cube moves, such as those done by Rokicki, Radu, and Kociemba. In the table, the numbers under "Symmetries" in parentheses are for symmetry and antisymmetry, regardless of whether or not antisymmetry was used in the calculation.

| Sub-space | Size | Symmetries | Type | Metric | Person(s) | Year |
| --- | --- | --- | --- | --- | --- | --- |
| Cosets of squares subgroup | 65,182,537,728,000 | 48 | coset space | FTM | Kunkle/Cooperman | 2007 |
| Permutations of the corners and edges | 9,656,672,256,000 | 48 (96) | subgroup | QTM | Norskog | 2005 |
| Permutations of the corners and edges | 9,656,672,256,000 | 48 (96) | subgroup | FTM | Norskog | 2006 |
| Edges (with centers) subgroup | 980,995,276,800 | 48 (96) | subgroup | QTM | Rokicki | 2004 |
| Edges (with centers) subgroup | 980,995,276,800 | 48 (96) | subgroup | FTM | Rokicki | 2004 |

My congratulations to Kunkle/Cooperman for achieving this new "record."

I also congratulate Silviu Radu for his 34q upper bound proof.

- Bruce Norskog

### "subgroups"

In my previous post, I called the permutations of the corners and edges a subgroup of the cube group. The definition of a subgroup (call it H) of another group (call it G) more or less says that H must consist of a subset of elements of G, and that H itself is a group under the group operation of G. However, I view the permutations of the corners and edges "subgroup" as partitioning the elements of the cube group into a set of equivalence classes of size 2048*2187 as opposed to a subset of elements of the cube group. With that viewpoint, it seems to me that it isn't technically a subgroup.

However, I can define the following subgroup of the cube group:

H = < U, D, U' L^{2} U L^{2} U^{2} B' R' U' F U^{2} F' R B L' U',
U' R^{2} U R^{2} U^{2} F' L' U' B U^{2} B' L F R' U',
L B^{2} R' F^{2} D' R B^{2} L' U' F^{2} R' F^{2} R U',
R F^{2} L' B^{2} D' L F^{2} R' U' B^{2} L' B^{2} L U' >

The set of generators for H is equivalent to U, D, L, R, F, and B, except that the orientations of all the corners and edges are preserved (using particular commonly-used conventions for corner and edge orientation). Thus, each of my equivalence classes contains one element of the above subgroup, and that element can be considered a representative of the class. What I am calling a subgroup of the cube group is therefore at least isomorphic to a subgroup of the cube group, so I think it is reasonable to talk of that "subgroup" as a subgroup. Similar logic can be applied to the edges (with centers) "subgroup."
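The argument can be illustrated with a toy group (a simplified analogue only: the real cube group is not a direct product of permutations and orientations, but the idea of a transversal that happens to be a subgroup is the same):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)[i] = p[q[i]]
    return tuple(p[i] for i in q)

# Toy analogue: G = S3 x Z2, with the S3 part playing the role of the
# corner/edge permutations and the Z2 part the orientations.
S3 = list(permutations(range(3)))
G = [(p, o) for p in S3 for o in (0, 1)]

def mul(a, b):
    # group operation of the direct product
    return (compose(a[0], b[0]), (a[1] + b[1]) % 2)

# Equivalence classes: elements with the same permutation part.
# H picks the orientation-preserving element of each class...
H = [(p, 0) for p in S3]

# ...and H is closed under the group operation, hence a subgroup of G
# that meets every equivalence class exactly once.
assert all(mul(a, b) in H for a in H for b in H)
assert len({h[0] for h in H}) == len(S3)
```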

### One of Herbert's comments tha

In my case, I'm not dealing with cosets. To calculate out to 13q, I store the set of all permutations out to 7q (call it T_{7}) and the set of all permutations out to 6q (call it T_{6}). Then, I form the product T_{7}T_{6}, with the products being produced in lexicographical order so as to detect duplicate permutations. (I have posted all this before.)

What I really wanted to do was to deal with T_{7} reduced by symmetry and T_{6} reduced by symmetry, and I tried mightily to find a way to get it to work. But it just doesn't work, and I **think** for the same reasons that Herbert cited in the last paragraph of his message.

What I ended up doing was storing T_{7} and T_{6} reduced by symmetry, and then dynamically expanding each of them roughly 48 times as I was forming the products. (And of course, I really don't have to store T_{6} separately from T_{7}, because T_{6} is a subset of T_{7}.)

I wondered if maybe I could expand T_{7} but not T_{6}, or vice versa. Perhaps I could have done it that way. But what I did instead was to expand both. Then, I found a way to create cutoff points so that not all products are really produced. In fact, the only products that are produced out to 13q are those known to be reduced by symmetry. So I do get a 48 times speedup.

Here is another example of the same principle. Consider the way breadth-first searches are usually done. Let V_{n} be the set of permutations that are exactly n moves from Start. In this terminology, T_{n} is the union of V_{i} for i from 0 to n. The basic breadth-first algorithm that anybody would use is V_{i+1} = V_{i}Q in the quarter-turn metric, or V_{i+1} = V_{i}(Q union H) in the face-turn metric. There's some bookkeeping you have to do to avoid counting duplicate positions, but that's the basic algorithm.

Well, what I have always done instead is W_{i+1} = W_{i}Q or W_{i+1} = W_{i}(Q union H), where W_{i} is V_{i} reduced by symmetry. Working with W in this fashion, it doesn't have to be re-expanded back to V. It can be used as is. But you can't reduce Q or H by symmetry. They are already expanded, and you just leave them expanded. And with this algorithm also, you get about a 48 times speedup.

The difference between my two algorithms is that in the W_{i+1} = W_{i}Q case, nothing has to be expanded, but the products have to be reduced by symmetry after they are formed. And when you reduce the products by symmetry after they are formed, I can't figure out a way to make the products come out in lexicographic order. So this algorithm is only suitable for problems that are small enough to allow storing all results in a table in memory. In the T_{7}T_{6} case, both T_{7} and T_{6} have to be expanded. But the products are produced in lexicographic order, and hence do not have to be stored at all. Cutoff points are used so that the only products that are produced are the ones that are already reduced by symmetry (about 1/48 of the total, which is why you get about a 48 times speedup).

The bottom line is that if you are forming the product of two sets of items (permutations or cosets) X and Y, then you can reduce X by symmetry or you can reduce Y by symmetry, but you cannot reduce both of them by symmetry.
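The symmetry-reduced breadth-first search described above can be sketched on a toy puzzle. Assuming a small stand-in (S4 generated by all transpositions, with conjugation by the full group playing the role of the cube's 48 symmetries; the names and sizes are illustrative only), the W-style search keeps only one representative per symmetry class at each level, leaves the move set Q fully expanded, and still recovers the exact level sets of the plain search:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)[i] = p[q[i]]: apply q first, then p
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

# Toy puzzle: S4 generated by Q = all transpositions. Q is closed under
# conjugation, so conjugation by all of S4 can stand in for the cube's
# 48 symmetries.
S4 = list(permutations(range(4)))
Q = [p for p in S4 if sum(i != x for i, x in enumerate(p)) == 2]
SYM = S4
identity = tuple(range(4))

def canon(g):
    # canonical representative of g's symmetry class
    return min(compose(compose(s, g), inverse(s)) for s in SYM)

def bfs(reduce_by_symmetry):
    # V_{i+1} = V_i Q (plain), or W_{i+1} = canon(W_i Q) (symmetry-reduced).
    # Q itself is never reduced; it stays fully expanded.
    levels, seen, frontier = [], {identity}, {identity}
    while frontier:
        levels.append(frontier)
        nxt = {compose(g, q) for g in frontier for q in Q}
        if reduce_by_symmetry:
            nxt = {canon(h) for h in nxt}
        frontier = nxt - seen
        seen |= frontier
    return levels

def expand(ws):
    # re-expand a symmetry-reduced level back to the full level
    return {compose(compose(s, g), inverse(s)) for g in ws for s in SYM}

plain = bfs(False)  # level sizes 1, 6, 11, 6
sym = bfs(True)     # level sizes 1, 1, 2, 1 -- roughly a |SYM|-fold saving
assert all(expand(w) == v for v, w in zip(plain, sym))
```

The key design point matches the discussion above: the frontier W can stay reduced from level to level, but the move set Q is left expanded, and re-expansion is only needed if you want the full V levels back.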

## I found this video. Not sure