In [1]:
%run ../python-libs/inpututils.py
%run ../python-libs/graycodes.py
%run ../python-libs/bits.py
The theory described here is given by Frank Ruskey and adapted from his book Combinatorial Generation. We use the following arguments to implement some Python functions, available here. In this introduction we report some concepts and definitions from Ruskey that we believe are interesting and necessary for what follows, starting with the meaning of Combinatorial Generation:
Let's look at all the possibilities. This phrase sums up the outlook of this book. In computer science, mathematics, and in other fields it is often necessary to examine all of a finite number of possibilities in order to solve a problem or gain insight into the solution of a problem. These possibilities often have an underlying combinatorial structure which can be exploited in order to obtain an efficient algorithm for generating some appropriate representation of them.
About complexity:
Not much attention has been paid to the model of computation in the area of combinatorial generation. For the most part authors use the random access machine model. This is more or less like counting operations in a Pascal program, with the understanding that integers can be arbitrarily large and the assumption that standard arithmetic operations can be done in constant time. For large integers this is an unrealistic assumption. It is generally the case that integers are small in generation algorithms, but that they are large in ranking and unranking algorithms. We will therefore measure ranking and unranking algorithms in terms of the number of arithmetic operations used.
About analysis:
There are two terms that are used throughout the book. We strive to develop algorithms that have these properties.
- CAT algorithms: the holy grail of generating combinatorial objects is to find an algorithm that runs in Constant Amortized Time. This means that the amount of computation, after a small amount of preprocessing, is proportional to the number of objects that are listed. We do not count the time to actually output or process the objects; we are only concerned with the amount of data structure change that occurs as the objects are being generated. Suppose that the input to our generation algorithm is $n$, and that this will produce $N$ objects, each of "size" $n$. A CAT algorithm has running time $\mathcal{O}(N)$. In the typical application of combinatorial generation, each object, as it is produced, is processed somehow. If this processing takes time $\mathcal{O}(n)$, then having a CAT algorithm has no advantage over having one with running time $\mathcal{O}(nN)$, since the total running time is $\mathcal{O}(nN)$ in either case.
- BEST algorithms: this means Backtracking Ensuring Success at Terminals. In other words, the algorithm is of the backtracking type, but every leaf of the backtracking tree is an object of the desired type; it is a success.
Don't count the output principle:
There are many papers about generating combinatorial objects that describe an $\mathcal{O}(nN)$ running time as optimal, because this is the amount of output produced by the algorithm — each object has size $n$ and there are $N$ objects in total. This is a misguided notion that has its roots in the way lower bounds are discussed in the traditional introduction to complexity theory course. There the trivial lower bound says that the amount of output gives a lower bound on the amount of computation — of course it says the same thing in our setting, but the amount of output is the wrong thing to measure.
Don't Count the Output Principle: In combinatorial generation it is the amount of data structure change that should be measured in determining the complexity of an algorithm; the time required to output (and process) each object should be ignored.
The trivial lower bound for combinatorial generation is $\Theta(N)$; it is independent of $n$. Now back to our typical application, where processing each object took time $\mathcal{O}(n)$ even though our generation algorithm was CAT. Wouldn't it be nice if the $\mathcal{O}(nN)$ could be brought down to $\mathcal{O}(N)$? This is frequently possible! If you go back and observe how successive objects are processed, it is often the case that the amount of processing required is proportional to the amount of change that successive objects undergo.
A last concept, ordering:
Lexicographic order is based on the familiar idea of the ordering of words in dictionaries. The only requirement is that the letters that make up the alphabet of the language be ordered. In the definitions below we use $\prec$ to denote the assumed underlying ordering of the symbols of the alphabet and $<$ to denote orderings of strings (and, of course, integers). We remark again that in most instances the alphabet is the set of natural numbers under the usual numeric ordering $0 \prec 1 \prec 2 \prec \ldots$.
We report definitions of three orderings:
Here we introduce combinatorial Gray codes, which are lists of combinatorial objects in which successive objects are close to each other. Quoting Ruskey:
Suppose that we wish to list all $2^n$ bitstrings of length $n$, in such a way that each bitstring differs from its predecessor by a single bit. Such lists are called binary Gray codes, named after Frank Gray who used them in a patent he obtained for "pulse code communication".
Reflection yields a simple recursive definition:
There is a very simple recursive algorithm for generating one particular Gray code, known as the binary reflected Gray code (BRGC). The binary reflected Gray code on $n$ elements, $\textbf{G}_n$, is defined recursively by letting $\textbf{G}_0$ be the single element sequence containing the empty string $\epsilon$, and letting $\textbf{G}_n$ be the list obtained by prefixing each element of $\textbf{G}_{n-1}$ with a 0 and then prefixing each element of $\textbf{G}_{n-1}$ in reversed order with a 1. For example: $\textbf{G}_1 = (0,1), \textbf{G}_2 = (00, 01, 11, 10), \textbf{G}_3 = (000, 001, 011, 010, 110, 111, 101, 100)$. The $n$-dimensional hypercube, or $n$-cube, $Q_n$, may be considered as the graph whose vertices consist of all bitstrings of length $n$ and where two vertices are joined by an edge if they differ by a single bit. The problem of finding a Gray code corresponds to finding a Hamilton path in $Q_n$: observe that the binary reflected Gray code actually defines a Hamiltonian cycle on the $n$-cube since the first and last bitstrings only differ by a single bit.
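The recursive definition translates into Python almost literally; the following is a minimal sketch of our own (not the binary_reflected_graycodes function of the accompanying library, which appears later), working on plain strings:

def brgc(n):
    # G_0 contains only the empty string; G_n prefixes 0 to G_{n-1}
    # and 1 to the reversal of G_{n-1}
    if n == 0:
        return ['']
    previous = brgc(n - 1)
    return (['0' + g for g in previous] +
            ['1' + g for g in reversed(previous)])

print(brgc(3))  # ['000', '001', '011', '010', '110', '111', '101', '100']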
There is an interesting connection between the Towers of Hanoi problem, the binary reflected Gray code, and ordinary counting in binary:
Now suppose that the disks are numbered $0, 1, \ldots , n-1$ where 0 is the smallest disk and $n-1$ is the largest disk. Let $T_n$ denote the sequence of disks that are moved in solving the $n$ disk Towers of Hanoi problem. Looking at the following table, $T_3 = 0, 1, 0, 2, 0, 1, 0$. It turns out that the number at position $k$ in $T_n$, zero-based, is the index of the bit that changes between the bitstrings of $\textbf{G}_n$ in positions $k$ and $k+1$, and that this equals the position of the rightmost 0, counting from the right, in the binary representation of $k$. The sequence $T_n$ is called the transition sequence of the BRGC. Every Hamilton path on the $n$-cube has a transition sequence and these paths are often specified by their transition sequences. $$ \begin{array}{ccc} \text{binary} & \text{Gray code} & T_n\\ \hline 000 & 000 & 0 \\ 001 & 001 & 1 \\ 010 & 011 & 0 \\ 011 & 010 & 2 \\ 100 & 110 & 0 \\ 101 & 111 & 1 \\ 110 & 101 & 0 \\ 111 & 100 & \\ \end{array} $$
We will first derive two functions, for ranking and unranking Gray codes and, later, a function to efficiently compute the position of the rightmost 0 when counting in binary.
As a notational convenience we will now assume that the Gray code bitstrings are indexed $g_{n} \ldots g_1$, in contrast to the usual 0-based indexing.
Let $r$, standing for rank, be a function defined inductively as follows: $$ r()=0 \\ r(0 g_{n-1} \ldots g_{1} ) = r(g_{n-1} \ldots g_{1} ) \\ r(1 g_{n-1} \ldots g_{1} ) = 2^{n-1} + \bar{r}(g_{n-1} \ldots g_{1} ) $$ where auxiliary function $\bar{r}$ is defined as: $$ \bar{r}()=0 \\ \bar{r}(0 g_{n-1} \ldots g_{1} ) = 2^{n-1} + \bar{r}(g_{n-1} \ldots g_{1} ) \\ \bar{r}(1 g_{n-1} \ldots g_{1} ) = r(g_{n-1} \ldots g_{1} ) $$
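As a quick check of the definition, the mutual recursion can be transcribed directly into Python; this is a sketch of ours working on bitstrings, not the ranking function of the library, which is derived below in a more efficient form:

def r(g):
    # rank of a Gray code given as a bitstring g_n ... g_1
    if not g:
        return 0
    head, rest = g[0], g[1:]
    return r(rest) if head == '0' else 2 ** len(rest) + r_bar(rest)

def r_bar(g):
    # the auxiliary function, ranking over the reflected sublist
    if not g:
        return 0
    head, rest = g[0], g[1:]
    return 2 ** len(rest) + r_bar(rest) if head == '0' else r(rest)

assert r('111010101') == 0b101100110  # the example worked out below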
In Ruskey's words, slightly modified:

Let $\textbf{g}=g_{n}\ldots g_{1}$ be a Gray code and assume $r(g_{n}\ldots g_{1})=(b_{n-1}\ldots b_{0})_{2}$. This mutually recursive definition may be interpreted as setting $b_{n-1}$ to 1 or 0, according to whether the term $2^{n-1}$ is present in the binary expansion or not. The repeated application of these recurrence relations to a bitstring may then be thought of as sweeping the functions $r$ and $\bar{r}$ from left to right, as illustrated in the example below, where $\textbf{g}=g_{9}\ldots g_{1} = 111010101$:

$$ \begin{array}{cccccccccc} & g_{9} & g_{8} & g_{7} & g_{6} & g_{5} & g_{4} & g_{3} & g_{2} & g_{1} \\ r & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & \bar{r} & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & r & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & \bar{r} & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & \bar{r} & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 & r & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 & 0 & r & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & \bar{r} & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & \bar{r} & 1 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & r \\ b_{8} & b_{7} & b_{6} & b_{5} & b_{4} & b_{3} & b_{2} & b_{1} & b_{0} \\ \end{array} $$
With $g_{n+1} = 0$, note that an $r$ always has a 0 to its left and a $\bar{r}$ has a 1 to its left. Thus for each sub-sequence $b_{i}\,\sigma\,g_{\hat{i}}$, where $\sigma\in\lbrace r, \bar{r}\rbrace$, there are four possibilities $\lbrace 0\,r\,0, 0\,r\,1, 1\,\bar{r}\,0, 1\,\bar{r}\,1\rbrace$. The resulting values of $b_{i-1}$ are $\lbrace 0, 1, 1, 0 \rbrace$ in those four cases, respectively:

$$ \begin{array}{ccc} b_{i} & \sigma & g_{\hat{i}} \\ & \downarrow & \\ b_{i} & b_{i-1} & \sigma' \\ \end{array} $$

where $\downarrow$ is the application of function $\sigma \in \lbrace r, \bar{r}\rbrace$, which sets $b_{i-1}$ and the next function $\sigma'$ according to the mutual definition. From this we observe that $b_{i-1} = b_i \oplus g_{\hat{i}}$, where $\oplus$ denotes exclusive-or: the bit below an $r$ or $\bar{r}$ is the exclusive-or of the bits to the left and right of the $r$ or $\bar{r}$. Therefore the following holds:

$$ r(g_{n}\ldots g_{1})=(b_{n-1}\ldots b_{0})_{2} \rightarrow b_{i-1} = b_i \oplus g_{\hat{i}} $$

where $g_{\hat{i}}$ denotes $g_{i-1}$ once the Gray code is re-indexed zero-based.
We can use the above argument to produce the Gray code with generic symbol $g_{i-1}$, zero-based indexing, at rank or position $k$ (named $b$ above), using the relation $k_i \oplus k_{i-1} = g_{i-1}$ and visualized as:

$$ \begin{array}{ccccc} b_{i} & & b_{i-1} & & b_{i-2} \\ \oplus & \searrow & \oplus & \searrow & \oplus \\ b_{i+1} & & b_{i} & & b_{i-1} \\ = & & = & & = \\ g_{i} & & g_{i-1} & & g_{i-2} \\ \end{array} $$observe that the first and last rows stay the same; only the middle one is shifted to the right.
In [2]:
python_code(graycode_unrank)
Out[2]:
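The listing displayed by python_code is not reproduced in this rendering; as a sketch, an unranking function consistent with the relation $g_{i-1} = k_i \oplus k_{i-1}$ boils down to a single expression (the actual signature of graycode_unrank in the library may differ):

def graycode_unrank(k):
    # shifting k right by one aligns k_{i+1} with k_i,
    # so a single xor produces every Gray symbol at once
    return k ^ (k >> 1)

assert graycode_unrank(0b101100110) == 0b111010101  # the example above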
The following is an iterable of the Gray codes we saw in the example above, namely all those of length 3, pretty-printed:
In [3]:
print('\n'.join(list(binary_reflected_graycodes(3, justified=True))))
Moreover, we have implemented an operator which allows us to process an iterable of codes pairwise, possibly plugging in custom behavior via lambda expressions. The previous table with the position of the changing bit can be rebuilt as follows:
In [11]:
from itertools import count
example_graycodes = binary_reflected_graycodes(length=3)
for c, (o, s, p, r) in zip(count(), binary_reflected_graycodes_reduce(example_graycodes)):
    print('{:>5}\t{:>5}\t{}'.format(bin(c), bin(o), p))
In [12]:
python_code(graycodes_direct)
Out[12]:
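Again the listing is not reproduced here; a possible sketch of graycodes_direct, pairing each code with the position of the bit that will change to produce the next one (an assumption about its actual behavior), reads:

def graycodes_direct(length):
    for i in range(2 ** length):
        code = i ^ (i >> 1)
        # the bit changing toward the next code sits where i has its
        # rightmost 0, namely where i+1 has its lowest 1; for the last
        # code this reports the would-be overflow position, length
        position = ((i + 1) & -(i + 1)).bit_length() - 1
        yield code, position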
and can be used to build the same example table; observe that the changing-bit positions sequence is shifted forward by one:
In [10]:
for i, (code, changingbit_position) in zip(count(), graycodes_direct(3)):
    print('{:>5}\t{:>5}\t{}'.format(bin(i), bin(code), changingbit_position))
In this section we fulfil the last correspondence between binary counting and Gray code generation. Here we develop a loopless algorithm that builds Gray codes by keeping track of the rightmost 0 (in the least significant part): such tracking avoids scanning for that bit and removes the worst cases in which the 0-bit lies in the most significant part of the number. We begin by defining an array $\tau$, a representation of the counting number $b_{n-1}b_{n-2}\ldots b_{1}b_{0}$ that records the positions of 0-bits, defined as follows:
$$ \tau_{i} = \left\lbrace \begin{array}{ll} i & \text{if } b_{i}=0 \vee b_{i-1}=1\\ \min_{k\in\lbrace i+1,\ldots,n\rbrace}{\lbrace k: b_{k}=0 \rbrace} & \text{if } b_{i}=1 \wedge b_{i-1}=0 \end{array} \right. $$In words, if $b_{i}$ is a 0-bit then $\tau_{i}$ is its own position $i$; the same happens when the previous bit $b_{i-1}$ is a 1-bit, because then $b_{i}$ lies inside a run of 1s. On the other hand, if bit $b_{i}$ is the last 1-bit of a run of ones, so $b_{i-1}=0$, then $\tau_{i}$ is the closest position above $i$ holding a 0-bit (here $b_{-1}=0$ and $b_{n}=0$ act as sentinels). Four examples follow, with $n-1=10$:
$$ \begin{array}{ccccccccccc||ccccccccccc} b_{10} & b_9 & b_8 & b_7 & b_6 & b_5 & b_4 & b_3 & b_2 & b_1 & b_0 & \tau_{10} & \tau_9 & \tau_8 & \tau_7 & \tau_6 & \tau_5 & \tau_4 & \tau_3 & \tau_2 & \tau_1 & \tau_0 \\ \hline 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 10 & \textbf{11} & 8 & \textbf{8} & 6 & 5 & 4 & 3 & 2 & 1 & \textbf{4} \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 10 & \textbf{11} & 8 & \textbf{8} & 6 & 5 & \textbf{5} & 3 & 2 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 10 & \textbf{11} & 8 & \textbf{8} & 6 & 5 & \textbf{5} & 3 & 2 & 1 & \textbf{1} \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 10 & \textbf{11} & 8 & \textbf{8} & 6 & 5 & \textbf{5} & 3 & 2 & \textbf{2} & 0 \\ \end{array} $$where the boldface values in $\tau$ are those set by the second clause of the definition; the changing bit in the Gray code lying on the same row as the binary counter has position $\tau_{0}$, since that is the minimum position at which a 0-bit appears in the counter.
In order to derive an efficient update rule for the $\tau$ array, we study four cases, namely the combinations of the binary values of $b_{i+1}$ and $b_{0}$, where $\tau_{0}=i$:
$b_{i+1}=0 \wedge b_{0}=1$: $$ \begin{array}{cccccc|cccccc} b_{i+1} & b_{i} & b_{i-1} & \ldots & b_1 & b_0 & \tau_{i+1} & \tau_i & \tau_{i-1} & \ldots & \tau_1 & \tau_0 \\ \hline 0 & 0 & 1 & ones & 1 & 1 & i+1 & i & i-1 & \ldots & 1 & i \\ 0 & 1 & 0 & zeroes & 0 & 0 & i+1 & i+1 & i-1 & \ldots & 1 & 0 \\ \end{array} $$ hence the following updates take place: $\tau_{0}\leftarrow 0$, $\tau_{i}\leftarrow i+1 = \tau_{i+1}$ and $\tau_{i+1}\leftarrow i+1$;
$b_{i+1}=1 \wedge b_{0}=1$: $$ \begin{array}{cccccc|cccccc} b_{i+1} & b_{i} & b_{i-1} & \ldots & b_1 & b_0 & \tau_{i+1} & \tau_i & \tau_{i-1} & \ldots & \tau_1 & \tau_0 \\ \hline 1 & 0 & 1 & ones & 1 & 1 & j & i & i-1 & \ldots & 1 & i \\ 1 & 1 & 0 & zeroes & 0 & 0 & i+1 & j & i-1 & \ldots & 1 & 0 \\ \end{array} $$ where $j = \min_{k\in\lbrace i+2,\ldots, n\rbrace}{\lbrace k: b_{k}=0\rbrace}$; hence the following updates take place: $\tau_{0}\leftarrow 0$, $\tau_{i}\leftarrow j = \tau_{i+1}$ and $\tau_{i+1}\leftarrow i+1$;
$b_{0}=0 \wedge b_{1}=0$, so $\tau_{0}=i=0$, hence $b_{i+1}=b_{1}$: $$ \begin{array}{cc|cc} b_{i+1} = b_1 & b_{i} = b_0 & \tau_{i+1} = \tau_1 & \tau_{i}=\tau_0 \\ \hline 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 \\ \end{array} $$ hence the following updates take place: $\tau_{0}\leftarrow 0$ (implicitly), $\tau_{i}\leftarrow 1 = \tau_{i+1}$ and $\tau_{i+1}\leftarrow 1 = i+1$;
$b_{0}=0 \wedge b_{1}=1$, so $\tau_{0}=i=0$, hence $b_{i+1}=b_{1}$: $$ \begin{array}{cc|cc} b_{i+1} = b_1 & b_{i} = b_0 & \tau_{i+1} = \tau_1 & \tau_{i}=\tau_0 \\ \hline 1 & 0 & 2 & 0 \\ 1 & 1 & 1 & 2 \\ \end{array} $$ hence the following updates take place: $\tau_{0}\leftarrow 0$ (implicitly), $\tau_{i}\leftarrow 2 = \tau_{i+1}$ and $\tau_{i+1}\leftarrow 1 = i+1$.
So in every case, letting $i=\tau_{0}$ be read first, the following assignments occur sequentially: $$ \tau_{0}\leftarrow 0 \\ \tau_{i}\leftarrow \tau_{i+1} \\ \tau_{i+1}\leftarrow i+1 \\ $$
The previous argument yields the following functions quite naturally:
In [13]:
python_code(rightmost_zeroes_in_binary_counting, graycodes_by_transition_sequence)
Out[13]:
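Since the listings are not reproduced here, the following sketch shows one way the pair of functions could look, under the assumption that the first yields the transition sequence and the second consumes it (the names are the library's, the bodies are ours):

def rightmost_zeroes_in_binary_counting(length):
    # tau[0] always names the rightmost 0 of the current counter;
    # the three assignments are exactly those derived above
    tau = list(range(length + 1))
    while True:
        i = tau[0]
        if i == length:          # the counter reached 2**length - 1
            return
        yield i
        tau[0] = 0
        tau[i] = tau[i + 1]
        tau[i + 1] = i + 1

def graycodes_by_transition_sequence(transitions):
    code = 0
    for p in transitions:
        yield code, p            # current code and the bit about to change
        code ^= 1 << p           # flip it to reach the successor
    yield code, None             # the last code has no successor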
In [14]:
list(rightmost_zeroes_in_binary_counting(3))
Out[14]:
In [15]:
graycodes_with_positions = graycodes_by_transition_sequence(rightmost_zeroes_in_binary_counting(3))
for b, (code, p) in zip(count(), graycodes_with_positions):
    print('{:>5}\t{:>5}\t{}'.format(bin(b), bin(code), p))
In [16]:
python_code(turn_on_last_zero)
Out[16]:
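A sketch of turn_on_last_zero consistent with its use below, returning the number with its rightmost 0 turned on together with that bit's position (the library's actual implementation may differ):

def turn_on_last_zero(i):
    # adding 1 carries through the trailing ones and stops exactly
    # at the rightmost 0, so i+1 has its lowest set bit there
    lowest = (i + 1) & -(i + 1)
    z = lowest.bit_length() - 1      # position of the rightmost 0 of i
    return i | lowest, z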
and can be used directly with binary counting:
In [17]:
for i in range(2**3-1):
    I, z = turn_on_last_zero(i)
    print(I, z)
Although not required, we can work the relation $k_{i-1} = k_i \oplus g_{i-1}$ backwards, namely given a Gray code $g$, find the corresponding position or rank $k$. To succeed, we think inductively as follows: rewrite the same schema, starting from the very left, namely the most significant part, where the row in the middle starts with 0 because it is shifted to the right by one. Doing the leftmost $\oplus$ and copying it according to the arrow -- after all, the middle row is the first one, just shifted -- we get:
$$ \begin{array}{ccccc} g_{n-1} & & b_{n-2} & & b_{n-3} \\ = & \searrow & = & \searrow & = \\ 0 & & g_{n-1} & & b_{n-2} \\ \oplus & & \oplus & & \oplus \\ g_{n-1} & & g_{n-2} & & g_{n-3} \\ \end{array} $$We can complete the induction steps, obtaining:
$$ \begin{array}{ccccc} g_{n-1} & & g_{n-2}\oplus g_{n-1} & & g_{n-3}\oplus g_{n-2}\oplus g_{n-1} \\ = & \searrow & = & \searrow & = \\ 0 & & g_{n-1} & & g_{n-2}\oplus g_{n-1} \\ \oplus & & \oplus & & \oplus \\ g_{n-1} & & g_{n-2} & & g_{n-3} \\ \end{array} $$it seems that we have to accumulate the given Gray code with repeated $\oplus$. In order to derive working Python code, we can split the above schema according to the following equivalent one:
$$ \begin{array}{ccc} b_{n-1} & b_{n-2} & b_{n-3} \\ = & = & = \\ g_{n-1} & g_{n-1} & g_{n-1} \\ & \oplus & \oplus \\ & g_{n-2} & g_{n-2} \\ & & \oplus \\ & & g_{n-3} \\ \end{array} $$This argument is implemented in the function graycode_rank, where we code the repetition of each symbol $g_{i}$ by building a sequence of ones of the correct length at each iteration, multiplied by a check that ensures that such symbol is actually 1:
In [18]:
python_code(graycode_rank)
Out[18]:
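The listing is not shown in this rendering; a self-contained sketch matching the description (runs of ones, gated by each symbol) could be:

def graycode_rank(g, length):
    rank = 0
    for i in range(length):
        # (2**(i+1) - 1) is a run of i+1 ones; it survives only when
        # symbol g_i is 1, so bit j of rank accumulates the xor of
        # g_j, g_{j+1}, ..., g_{length-1}
        rank ^= ((1 << (i + 1)) - 1) * ((g >> i) & 1)
    return rank

assert graycode_rank(0b111010101, 9) == 0b101100110  # the example above

Presumably the library version produces the run of ones and the gate via the helpers set_all and is_on, shown next.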
where auxiliary functions are defined as follows:
In [19]:
python_code(is_on, set_all)
Out[19]:
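Their bodies are one-liners; a plausible sketch:

def is_on(n, i):
    # 1 if the i-th bit (0-based) of n is set, 0 otherwise
    return (n >> i) & 1

def set_all(n):
    # the integer 1^n, a run of n ones: 2**n - 1
    return (1 << n) - 1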
Quoting the introduction from Ruskey, section 5.3, directly:
In this section we develop several Gray codes for combinations; that is, for $k$-subsets taken from the set $[n] = \lbrace 1, 2, \ldots, n\rbrace$. For the most part, we identify each subset with a bitstring of length $n$ with exactly $k$ ones; i.e., with an element of $\textbf{B}(n, k)$. The algorithms developed can be easily modified to generate the elements of the other common representation, $\textbf{A}(n, k)$ (recall that it is the set of monotonically increasing sequences of length $k$ with elements taken from $[n]$; see (4.2) of Combinatorial Generation). There are many closeness conditions (or relations) that can be applied to bitstrings with a fixed number of ones. In this section we consider five natural closeness relations. These relations can be defined more generally on any set of strings in which the number of occurrences of each symbol is the same in each string.
- the transposition condition: The two permutations differ in exactly two positions: call the two positions $i$ and $j$. Example: $\underline{0}021\underline{1}$ and $\underline{1}021\underline{0}$. In this example the positions that change are $i = 1$ and $j = 5$, and are underlined, as they will be in each of the examples that follow.
- the homogeneous transposition condition: The permutations, say $\pi$ and $\phi$, differ in exactly two positions $i$ and $j$, and only symbols less than or equal to $\min(\pi_i, \phi_j)$ occur between those two positions; example: $\underline{0}000\underline{1}12$ and $\underline{1}000\underline{0}12$.
Again in Ruskey words:
Algorithms satisfying the transposition criterion are sometimes said to be revolving door algorithms. Imagine two rooms $A$ and $B$ connected by a revolving door that operates only if occupied by two persons, one from each room. As a person leaves one room, they enter the other, so the door acts to swap two persons. Given $k$ persons in the first room, the problem is to sequence through all possible combinations of persons in the first room so that each such combination occurs exactly once. This is precisely the same as the question of generating combinations by transpositions. The transposition closeness condition is handled in a manner similar to the way subsets were done. That is, we rely on the classic recurrence relation for binomial coefficients, but reverse the list corresponding to one of the terms.
This yields the following recurrence:
$$ C(n,k) = \left\lbrace \begin{array}{ll} 0^{n} & \text{if } k = 0 \\ 1^{n} & \text{if } k = n \\ C(n-1,k)\cdot 0 \circ \bar{C}(n-1,k-1)\cdot 1 & \text{otherwise} \end{array} \right. $$where $\bar{C}$ denotes the reversed list. In order to understand the previous recurrence we draw a simple schema:
$$ \left. \begin{array}{ccccc} 1^{k-1} & 1 & 0 & 0^{n-k-2} & 0 \\ 1^{k-1} & 0 & 1 & 0^{n-k-2} & 0 \\ & & \vdots & & \\ 1^{k-1} & 0^{n-k-2} & 1 & 0 & 0 \\ 1^{k-1} & 0^{n-k-2} & 0 & 1 & 0 \\ & & = & & \\ 1^{k-2} & 1 & 0^{n-k-1} & 1 & 0 \\ \end{array} \right\rbrace = C(n-1,k)\cdot 0 \\ $$on the other hand, for the second part:
$$ \left. \begin{array}{ccccc} 1^{k-2} & 0 & 0^{n-k-1} & 1 & 1 \\ & & = & & \\ 1^{k-2} & 0^{n-k-1} & 0 & 1 & 1 \\ 1^{k-2} & 0^{n-k-1} & 1 & 0 & 1 \\ & & \vdots & & \\ 1^{k-2} & 0 & 1 & 0^{n-k-1} & 1 \\ 1^{k-2} & 1 & 0 & 0^{n-k-1} & 1 \\ \end{array} \right\rbrace = \bar{C}(n-1,k-1)\cdot 1 $$where the rows joined by $=$ signs show how the boundary combinations of the two blocks can be rewritten so as to expose the single transposition connecting them.
The following is a Pythonic implementation of the above relation, inspired by Ruskey again:
In [20]:
python_code(graycodes_combinations)
Out[20]:
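As before, the listing is missing from this rendering; a minimal self-contained sketch of the recurrence over integers (appending a bit on the right is a left shift), yielding each code with the two swapped positions, might read as follows; the triple yielded for the very first code is our assumption, since it has no predecessor:

def graycodes_combinations(n, k):
    def C(n, k):
        # C(n,k) = C(n-1,k)·0 followed by the reversal of C(n-1,k-1)·1
        if k == 0:
            return [0]
        if k == n:
            return [(1 << n) - 1]
        return ([c << 1 for c in C(n - 1, k)] +
                [(c << 1) | 1 for c in reversed(C(n - 1, k - 1))])

    prev = None
    for code in C(n, k):
        if prev is None:
            yield code, None, None
        else:
            diff = prev ^ code                   # exactly two bits differ
            i = (diff & -diff).bit_length() - 1  # lower swapped position
            j = diff.bit_length() - 1            # higher swapped position
            yield code, i, j
        prev = code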
it can be used as follows, showing the changing-bit positions too:
In [21]:
for c, i, j in graycodes_combinations(6,3):
print("{:>8}\t{}\t{}".format(bin(c), i, j))
In this section we would like to produce a sequence of codes that pairwise satisfy the homogeneous transposition relation. In order to accomplish that, we unfold the binomial term ${{n-1}\choose{k-1}}$ once more and write the following recurrence: $$ E(n, k) = \left\lbrace \begin{array}{cc} 0^{n} & \text{if } k = 0\\ 1^{n} & \text{if } k = n\\ 10^{n-1}\circ 010^{n-2}\circ\ldots\circ 0^{n-2}10\circ 0^{n-1}1 & \text{if } k = 1 \\ E(n-1,k)\cdot 0 \circ \bar{E}(n-2,k-1)\cdot 01 \circ E(n-2, k-2)\cdot 11 & \text{otherwise} \end{array} \right. $$
In order to understand the previous recurrence we sketch the first and last elements of its three blocks, underlining the positions transposed at each junction; note that every string of the second block ends with the appended suffix $01$, consistently with the recurrence above:

$$ \left. \begin{array}{cccc} 1^{k-1} & 1 & 0^{n-k-1} & 0 \\ & \vdots & & \\ 0^{n-k-1} & 1^{k-1} & \underline{1} & \underline{0} \\ \end{array} \right\rbrace = E(n-1,k)\cdot 0 $$for the second part, whose first element swaps the two underlined adjacent symbols of the element above:

$$ \left. \begin{array}{cccc} 0^{n-k-1} & 1^{k-1} & \underline{0} & \underline{1} \\ & \vdots & & \\ 1^{k-2} & \underline{1} & 0^{n-k-1} & 01 \\ \end{array} \right\rbrace = \bar{E}(n-2,k-1)\cdot 01 $$for the third part, whose first element moves the underlined 1 across 0s only, as homogeneity requires:

$$ \left. \begin{array}{cccc} 1^{k-2} & \underline{0} & 0^{n-k-1} & \underline{1}1 \\ & \vdots & & \\ 0^{n-k} & 1^{k-2} & 1 & 1 \\ \end{array} \right\rbrace = E(n-2,k-2)\cdot 11 $$
In [4]:
python_code(graycodes_combinations_homogeneous)
Out[4]:
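The listing is again not reproduced here; a self-contained sketch of the recurrence over integers, yielding each code with the two changed positions (the first triple is our assumption, as before):

def graycodes_combinations_homogeneous(n, k):
    def E(n, k):
        # E(n,k) = E(n-1,k)·0 ∘ reversed E(n-2,k-1)·01 ∘ E(n-2,k-2)·11
        if k == 0:
            return [0]
        if k == n:
            return [(1 << n) - 1]
        if k == 1:
            return [1 << i for i in reversed(range(n))]
        return ([c << 1 for c in E(n - 1, k)] +
                [(c << 2) | 0b01 for c in reversed(E(n - 2, k - 1))] +
                [(c << 2) | 0b11 for c in E(n - 2, k - 2)])

    prev = None
    for code in E(n, k):
        if prev is None:
            yield code, None, None
        else:
            diff = prev ^ code
            i = (diff & -diff).bit_length() - 1
            j = diff.bit_length() - 1
            yield code, i, j
        prev = code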
In [3]:
for c, i, j in graycodes_combinations_homogeneous(8,4):
    print('{}\t{}\t{:>8}'.format(i, j, bin(c)))
Gray codes tutorial by Massimo Nocentini is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Based on a work at https://github.com/massimo-nocentini/competitive-programming/blob/master/tutorials/graycodes.ipynb.