15/02/2017

Proof: the symmetry group of a complete graph is vertex transitive

Let K_n be a complete graph and \mbox{Sym}(K_n) its symmetry group. It will be shown that K_n is vertex transitive, i.e. that for any two vertices v_i\in K_n and v_j\in K_n there exists a symmetry a\in\mbox{Sym}(K_n) such that a(v_i)=v_j.

The proof is constructive. Given any two vertices v_i\in K_n and v_j\in K_n, it suffices to provide a symmetry a_{ij}:V(K_n)\bigcup E(K_n)\rightarrow V(K_n)\bigcup E(K_n), a_{ij}\in\mbox{Sym}(K_n), such that a_{ij}(v_i)=v_j. Although the suggested symmetry differs for every pair v_i and v_j, the simpler notation a is preferred in place of a_{ij}.

Given v_i\in K_n and v_j\in K_n, a suitable symmetry can be constructed by swapping v_i with v_j, by redirecting all edges with one end in v_i to edges with one end in v_j and vice versa, and by leaving all other vertices and edges intact.

To formalize the construction, given v_i\in K_n and v_j\in K_n, the symmetry a is set to satisfy the following seven properties:

  • a(v_i)=v_j,
  • a(v_j)=v_i,
  • a(v_k)=v_k for all vertices v_k other than v_i,~v_j,
  • a(e_{ij})=e_{ij}, where e_{ij} denotes the edge connecting v_i with v_j,
  • a(e_{ik})=e_{jk} for all k other than i,~j, where e_{ik} denotes the edge connecting v_i with v_k and e_{jk} the edge connecting v_j with v_k,
  • a(e_{jk})=e_{ik} for all k other than i,~j, and
  • a(e_{kl})=e_{kl} for all k,~l other than i,~j, where e_{kl} denotes the edge connecting v_k with v_l.

Note that all edges exist in the above construction since K_n is complete, so the symmetry is well-defined.

It is easy to check that the proposed symmetry maps vertices to vertices, edges to edges, and preserves the connectivity of K_n according to the definition of symmetry given in the post of 11/02/2017 below. For instance, \mbox{ends}(e_{ik})=\{v_i,v_k\}, a(e_{ik})=e_{jk}, a(v_i)=v_j and a(v_k)=v_k yield \mbox{ends}(a(e_{ik}))=\mbox{ends}(e_{jk})=\{v_j,v_k\}=\{a(v_i),a(v_k)\}, which is an edge in K_n.
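
As a quick computational sanity check (not part of the original proof), the following Python sketch builds K_n with vertices 0,\dots,n-1, identifies each edge e_{kl} with the set \{k,l\} (unambiguous since K_n is simple), implements the construction above case by case, and verifies that it maps vertices to vertices, edges to edges, and preserves ends. The helper name swap_symmetry_check is hypothetical, introduced only for this illustration.

```python
from itertools import combinations

def swap_symmetry_check(n, i, j):
    """Check that swapping v_i and v_j yields a symmetry of K_n.

    Vertices are 0, ..., n-1; the edge e_{kl} is identified with the
    frozenset {k, l}, which is unambiguous because K_n is simple.
    """
    vertices = range(n)
    edges = [frozenset(p) for p in combinations(vertices, 2)]

    def a_vertex(v):
        # swap v_i and v_j, fix every other vertex
        return j if v == i else i if v == j else v

    def a_edge(e):
        # edge part of the construction, case by case
        if e == frozenset({i, j}):
            return e                              # a(e_ij) = e_ij
        if i in e:
            return frozenset({j}) | (e - {i})     # a(e_ik) = e_jk
        if j in e:
            return frozenset({i}) | (e - {j})     # a(e_jk) = e_ik
        return e                                  # a(e_kl) = e_kl

    assert all(a_vertex(v) in vertices for v in vertices)   # vertices -> vertices
    assert all(a_edge(e) in edges for e in edges)           # edges -> edges
    # ends are preserved: ends(a(e)) = {a(v) : v in ends(e)}
    assert all(a_edge(e) == frozenset(a_vertex(v) for v in e) for e in edges)
    return True

print(swap_symmetry_check(5, 1, 3))  # True
```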

11/02/2017

Clarifying the concept of symmetry group of a graph

The symmetry group of a graph, also referred to as the automorphism group of a graph, is informally the set of all possible rearrangements of the graph that preserve its edge-vertex connectivity.

A practical way of understanding what rearrangements are permitted by the symmetry group of a graph is to assign labels to the vertices and edges of the graph. Consider moving around the vertices and edges of the graph in such a way that each graph resulting from the rearrangement connects the same labelled vertices via the same labelled edges.

For example, the figure below displays a graph \Gamma with labelled vertices \{v_1, v_2, v_3, v_4\} and edges \{e_1, e_2, e_3, e_4, e_5, e_6\}:

[Figure: the graph \Gamma with labelled vertices and edges]

Six symmetries come from permuting the edges \{e_4, e_5, e_6\} connecting the vertices \{v_1, v_4\} and one symmetry comes from reflecting through a horizontal axis. The symmetry arising from a reflection through the horizontal axis is shown below:

[Figure: the symmetry of \Gamma arising from reflection through the horizontal axis]

On the other hand, the graph below does not belong to the symmetry group of the initial graph \Gamma because it connects v_2 with v_4 through e_2, introducing a connection not present in \Gamma:

[Figure: a rearrangement that is not a symmetry of \Gamma, since e_2 connects v_2 with v_4]

Let’s count the graphs in the symmetry group of \Gamma. Six graphs symmetric to \Gamma arise from permuting the edges \{e_4, e_5, e_6\}, and this set of six graphs is isomorphic to the symmetric group \mbox{Sym}_3. The original graph \Gamma along with its reflection through the horizontal axis gives a set of two graphs isomorphic to \mbox{Sym}_2. So the symmetry group of \Gamma contains 6\times 2 =12 graphs, and is isomorphic to the direct product \mbox{Sym}_3\bigoplus\mbox{Sym}_2.
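
Since the original figure is not reproduced here, the incidence data in the following Python sketch is a guessed reconstruction of \Gamma (a path through v_1, v_2, v_3, v_4 via e_1, e_2, e_3, plus the three parallel edges e_4, e_5, e_6 between v_1 and v_4); it is an assumption made only for illustration. Under this assumption, a brute-force search over all vertex bijections f_1 and edge bijections f_2 that preserve ends returns exactly 12 symmetries, in agreement with the count above.

```python
from itertools import permutations

# Hypothetical reconstruction of the figure (the original image is not shown):
# a path v1 - e1 - v2 - e2 - v3 - e3 - v4, plus three parallel edges e4, e5, e6
# between v1 and v4.
ends = {
    "e1": {"v1", "v2"}, "e2": {"v2", "v3"}, "e3": {"v3", "v4"},
    "e4": {"v1", "v4"}, "e5": {"v1", "v4"}, "e6": {"v1", "v4"},
}
V = ["v1", "v2", "v3", "v4"]
E = list(ends)

count = 0
for perm_v in permutations(V):          # candidate vertex bijections f1
    f1 = dict(zip(V, perm_v))
    for perm_e in permutations(E):      # candidate edge bijections f2
        f2 = dict(zip(E, perm_e))
        # keep the pair if ends(f2(e)) = {f1(v) : v in ends(e)} for every edge
        if all(ends[f2[e]] == {f1[v] for v in ends[e]} for e in E):
            count += 1

print(count)  # 12, matching Sym_3 x Sym_2
```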

The main aim of this blog post is to provide a mathematically rigorous definition of the symmetry group of a graph, starting from the concept of graph symmetry. A symmetry of a graph \Gamma=(V,E) with vertex set V and edge set E is a permutation f of the vertex set V such that the pair of vertices \{v_1,v_2\} forms an edge in \Gamma if and only if the pair \{f(v_1), f(v_2)\} also forms an edge in \Gamma.

Unfortunately, there is an error in this first definition. In the previous example, if the edges e_4 and e_5 are permuted, a new, distinct element of the symmetry group of \Gamma arises, which is not accounted for by the definition, since the edges of \Gamma are permuted but its vertices are not.

A more mathematically accurate definition is provided in John Meier’s book “Groups, Graphs and Trees: An Introduction to the Geometry of Infinite Groups”. According to Meier’s definition, a symmetry of a graph \Gamma with vertices V(\Gamma) and edges E(\Gamma) is a bijection f taking vertices to vertices and edges to edges such that if \mbox{ends}(e)=\{v_1,v_2\} in \Gamma, then \mbox{ends}(f(e))=\{f(v_1),f(v_2)\} in \Gamma, where \mbox{ends}(e)=\{v_1,v_2\} denotes the set of two vertices v_1 and v_2 connected by the edge e.

Meier’s definition corrects the issue of the previous definition, but does not specify the domain and range of f. In fact, defining the domain and range of f does not seem obvious, due to the ambiguous notation f(e) and f(v_1).

One possible way around this problem is to define a graph symmetry to be a set of two separate bijections, one on the vertices and one on the edges of the graph. This third definition would take the following form: a symmetry of a graph \Gamma with vertices V(\Gamma) and edges E(\Gamma) is a set \{f_1,f_2\} of bijections f_1:V(\Gamma)\rightarrow V(\Gamma) and f_2:E(\Gamma)\rightarrow E(\Gamma) such that if \mbox{ends}(e)=\{v_1,v_2\} in \Gamma, then \mbox{ends}(f_2(e))=\{f_1(v_1),f_1(v_2)\} in \Gamma.

Note that the converse holds in the above definition, i.e. \mbox{ends}(f_2(e))=\{f_1(v_1),f_1(v_2)\} in \Gamma\Rightarrow\mbox{ends}(e)=\{v_1,v_2\} in \Gamma. To prove this, assume that \mbox{ends}(e)=\{q,r\}\ne \{v_1,v_2\}. From the definition, \mbox{ends}(e)=\{q,r\}\Rightarrow\mbox{ends}(f_2(e))=\{f_1(q),f_1(r)\}. Given that \{q,r\}\ne \{v_1,v_2\}, assume without loss of generality that q\ne v_1 and q\ne v_2. Since f_1 is a bijection, it follows that f_1(q)\ne f_1(v_1) and f_1(q)\ne f_1(v_2), which is a contradiction, since f_1(q)\in\mbox{ends}(f_2(e))=\{f_1(v_1),f_1(v_2)\}.

In light of the validity of the converse, one can define graph symmetry using the equivalence \mbox{ends}(e)=\{v_1,v_2\} in \Gamma\iff \mbox{ends}(f_2(e))=\{f_1(v_1),f_1(v_2)\} in \Gamma.

However, this third definition raises one more question. Having two separate bijections, one for vertices and one for edges, does not allow one to define graph symmetry as a single function. The solution is to define a symmetry f of a graph \Gamma by f(x)= \begin{cases} f_1(x) \mbox{ if } x\in V(\Gamma)\\ f_2(x)\mbox{ if } x\in E(\Gamma)\end{cases}.

So the third definition can now be corrected. A symmetry of a graph \Gamma with vertices V(\Gamma) and edges E(\Gamma) is a bijection f:V(\Gamma)\bigcup E(\Gamma)\rightarrow V(\Gamma)\bigcup E(\Gamma) defined by f(x)= \begin{cases} f_1(x) \mbox{ if } x\in V(\Gamma)\\ f_2(x)\mbox{ if } x\in E(\Gamma)\end{cases}, where f_1:V(\Gamma)\rightarrow V(\Gamma) and f_2:E(\Gamma)\rightarrow E(\Gamma) are two bijections such that if \mbox{ends}(e)=\{v_1,v_2\} in \Gamma, then \mbox{ends}(f_2(e))=\{f_1(v_1),f_1(v_2)\} in \Gamma.

This corrected third definition coincides with Meier’s definition if the latter is formally clarified as follows: a symmetry of a graph \Gamma with vertices V(\Gamma) and edges E(\Gamma) is a bijection f:V(\Gamma)\bigcup E(\Gamma)\rightarrow V(\Gamma)\bigcup E(\Gamma) with the following three properties:

  • v\in V(\Gamma)\Rightarrow f(v)\in V(\Gamma),
  • e\in E(\Gamma)\Rightarrow f(e)\in E(\Gamma),
  • \mbox{ends}(e)=\{v_1,v_2\} in \Gamma\Rightarrow\mbox{ends}(f(e))=\{f(v_1),f(v_2)\} in \Gamma.

Having defined graph symmetry, the symmetry group of a graph becomes simply the collection of all graph symmetries. Formally, the symmetry group \mbox{Sym}(\Gamma) of a graph \Gamma is the set of symmetries of \Gamma equipped with the operation of composition of graph symmetries.

Note that the composition f\circ g of two graph symmetries f:V(\Gamma)\bigcup E(\Gamma)\rightarrow V(\Gamma)\bigcup E(\Gamma) and g:V(\Gamma)\bigcup E(\Gamma)\rightarrow V(\Gamma)\bigcup E(\Gamma) is essentially the composition of the underlying vertex permutations and of the underlying edge permutations.
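
As a minimal sketch of this remark, a symmetry can be stored as a single Python dict defined on all vertex and edge labels, and composition is then ordinary composition of mappings. The two symmetries below refer to the guessed reconstruction of \Gamma used earlier in this post, so they are illustrative assumptions rather than a transcription of the figure.

```python
def compose(f, g):
    """Composition f o g of two symmetries, each stored as a dict
    whose keys are all vertex and edge labels of the graph."""
    return {x: f[g[x]] for x in g}

# Two symmetries of the hypothetical graph above: the reflection that swaps
# v1 with v4 (and e1 with e3), and the symmetry that swaps the parallel
# edges e4 and e5.
reflection = {"v1": "v4", "v2": "v3", "v3": "v2", "v4": "v1",
              "e1": "e3", "e2": "e2", "e3": "e1",
              "e4": "e4", "e5": "e5", "e6": "e6"}
edge_swap = {"v1": "v1", "v2": "v2", "v3": "v3", "v4": "v4",
             "e1": "e1", "e2": "e2", "e3": "e3",
             "e4": "e5", "e5": "e4", "e6": "e6"}

h = compose(reflection, edge_swap)
print(h["e4"], h["v1"])  # e5 v4 -- the composition is again a symmetry
```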

26/01/2017

Understanding the concept of group action via an example

A rigorous definition of group action and of its group homomorphism representation was provided in a previous post. This post presents the group action on the symmetries of the square to exemplify the concept.

Consider a square with numbered vertices, as shown in the figure below. The set of vertices of the square is X=\left\{1,2,3,4\right\}.

[Figure: the square with its vertices numbered 1 to 4]

A symmetry of the square can be informally thought of as a way of moving the square so that it coincides with its former position. Every such move is fully described by its effect on the vertices, in the sense that every new vertex position coincides with a distinct former vertex position.

There are exactly 8 symmetric moves of the square, each of which can be described by a permutation of the square’s vertices. Here is the list of the 8 symmetries of the square:

  • The identity R_0=\left(\begin{matrix}1 & 2 & 3 & 4\\  1 & 2 & 3 & 4 \end{matrix}\right), which does not move the square.
  • Clockwise rotation of the square about its centre P by an angle of 90^{\circ}: R_1=\left(\begin{matrix}1 & 2 & 3 & 4\\  2 & 3 & 4 & 1\end{matrix}\right).
  • Clockwise rotation of the square about its centre P by an angle of 180^{\circ}: R_2=\left(\begin{matrix}1 & 2 & 3 & 4\\  3 & 4 & 1 & 2\end{matrix}\right).
  • Clockwise rotation of the square about its centre P by an angle of 270^{\circ}: R_3=\left(\begin{matrix}1 & 2 & 3 & 4\\  4 & 1 & 2 & 3\end{matrix}\right).
  • Flip of the square about its diagonal A (see figure below): R_4=\left(\begin{matrix}1 & 2 & 3 & 4\\  1 & 4 & 3 & 2\end{matrix}\right).
  • Flip of the square about its diagonal C: R_5=\left(\begin{matrix}1 & 2 & 3 & 4\\  3 & 2 & 1 & 4\end{matrix}\right).
  • Flip of the square about its vertical axis B: R_6=\left(\begin{matrix}1 & 2 & 3 & 4\\  2 & 1 & 4 & 3\end{matrix}\right).
  • Flip of the square about its horizontal axis D: R_7=\left(\begin{matrix}1 & 2 & 3 & 4\\  4 & 3 & 2 & 1\end{matrix}\right).

[Figure: the square with its centre P, diagonals A and C, vertical axis B and horizontal axis D]

The set of all 8 symmetries of the square is denoted by D_4=\left\{R_0,R_1,R_2,R_3,R_4,R_5,R_6,R_7\right\}. Define the operation \circ : D_4\times D_4\rightarrow D_4 to be the function composition R_i\circ R_j(x)=R_i(R_j(x)) for x\in X=\left\{1,2,3,4\right\}. For example,

R_1\circ R_6 is the result of first flipping the square about its vertical axis B and then rotating it clockwise by 90^{\circ}:

R_1\circ R_6 =\left(\begin{matrix}1 & 2 & 3 & 4\\  2 & 3 & 4 & 1\end{matrix}\right)\circ\left(\begin{matrix}1 & 2 & 3 & 4\\  2 & 1 & 4 & 3\end{matrix}\right)=\left(\begin{matrix}1 & 2 & 3 & 4\\  3 & 2 & 1 & 4\end{matrix}\right)=R_5.

R_1\circ R_6 =R_5 means that first flipping the square about its vertical axis B and then rotating it clockwise by 90^{\circ} is the same as flipping the square about its diagonal C.
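
The same computation can be checked mechanically. In the short Python sketch below, each permutation is stored as a dict mapping a vertex of X=\left\{1,2,3,4\right\} to its image, and the composition R_1\circ R_6 is confirmed to equal R_5.

```python
# Permutations of {1, 2, 3, 4} written as dicts: R[x] is the image of vertex x.
R1 = {1: 2, 2: 3, 3: 4, 4: 1}   # clockwise rotation by 90 degrees
R5 = {1: 3, 2: 2, 3: 1, 4: 4}   # flip about the diagonal C
R6 = {1: 2, 2: 1, 3: 4, 4: 3}   # flip about the vertical axis B

def compose(p, q):
    """(p o q)(x) = p(q(x)): apply q first, then p."""
    return {x: p[q[x]] for x in q}

print(compose(R1, R6) == R5)  # True
```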

The set of symmetric permutations D_4 of the square, together with the operation of permutation composition, forms the so-called dihedral group (D_4, \circ) of the symmetries of the square. |D_4| denotes the order of the group (D_4, \circ), which is the number of elements of D_4. Obviously, |D_4|=8.

The symmetric group (\mbox{Sym}(X), \circ) of X=\left\{1,2,3,4\right\} is the set of all the permutations of X, i.e. the set of all the bijective functions from X to X. Since \mbox{Sym}(X) has 4! elements, the order of \mbox{Sym}(X) is |\mbox{Sym}(X)|=24.

Note that D_4\subseteq \mbox{Sym}(X) and that (D_4, \circ) is a subgroup of (\mbox{Sym}(X), \circ).

Any function \phi:D_4\times X\rightarrow X is a group action as long as it satisfies

  • \phi(R_0, x)=x for all x \in X=\left\{1,2,3,4\right\} (identity property) and
  • \phi(R_i\circ R_j, x)=\phi(R_i, \phi(R_j, x)) for all i, j=0,1,2,3,4,5,6,7 and for all x \in X=\left\{1,2,3,4\right\} (compatibility property).

One way of picking a specific group action \phi relies on defining the associated group homomorphism h:D_4\rightarrow\mbox{Sym}(X) in a way that respects the identity and compatibility properties of \phi:D_4\times X\rightarrow X.

The simplest possible example would be to choose the group homomorphism h:D_4\rightarrow\mbox{Sym}(X) to be the identity function h(R_i)=R_i,~i=0,1,2,3,4,5,6,7, in which case the group action \phi takes the form \phi(R_i, x)=R_i(x).

It is easy to check that the group action \phi(R_i, x)=R_i(x), which arises by setting the group homomorphism h to be the identity function, satisfies the properties of identity and compatibility:

  • \phi(R_0, x)=R_0(x)=x,
  • \phi(R_i\circ R_j, x)=R_i\circ R_j(x)=R_i(R_j(x))=R_i(\phi(R_j,x))=\phi (R_i,\phi (R_j, x)).

It is also easy to see for instance that the group action \phi(R_i, x)=R_i(x) maps (R_1, 2)\in D_4\times X to 3\in X, since \phi(R_1, 2)=R_1(2)=3.
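
The checks above can also be carried out exhaustively by a short Python sketch, which lists the eight permutations of D_4, defines \phi(R_i,x)=R_i(x), and verifies the identity property, the compatibility property, and the value \phi(R_1,2)=3.

```python
# All eight symmetries of the square from the list above, as dicts on X = {1,2,3,4}.
D4 = {
    "R0": {1: 1, 2: 2, 3: 3, 4: 4}, "R1": {1: 2, 2: 3, 3: 4, 4: 1},
    "R2": {1: 3, 2: 4, 3: 1, 4: 2}, "R3": {1: 4, 2: 1, 3: 2, 4: 3},
    "R4": {1: 1, 2: 4, 3: 3, 4: 2}, "R5": {1: 3, 2: 2, 3: 1, 4: 4},
    "R6": {1: 2, 2: 1, 3: 4, 4: 3}, "R7": {1: 4, 2: 3, 3: 2, 4: 1},
}
X = [1, 2, 3, 4]

def compose(p, q):
    return {x: p[q[x]] for x in q}

def phi(R, x):
    """Group action phi(R, x) = R(x), i.e. h is the identity homomorphism."""
    return R[x]

# Identity property: phi(R0, x) = x for every x.
assert all(phi(D4["R0"], x) == x for x in X)
# Compatibility: phi(Ri o Rj, x) = phi(Ri, phi(Rj, x)) for all i, j, x.
assert all(phi(compose(Ri, Rj), x) == phi(Ri, phi(Rj, x))
           for Ri in D4.values() for Rj in D4.values() for x in X)
print(phi(D4["R1"], 2))  # 3
```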

The group action \phi(R_i, x)=R_i(x) is interpreted as the function that maps every symmetric move (permutation) R_i of the square and every square vertex x to the square vertex R_i(x).

The group homomorphism h(R_i)=R_i is the identity, so it is an injective function. As elaborated in this previous post, since h with h(R_i)=R_i is injective, the group action \phi with \phi(R_i, x)=R_i(x) is faithful.

25/01/2017

A note on faithful group actions

Let G be a group, X a set and \mbox{Sym}(X) the symmetric group of X. As it is known and as it has been elaborated in a previous post, a function \phi : G \times X \rightarrow X is a group action if and only if the function h:G\rightarrow \mbox{Sym}(X),~h(g)=f_g, with f_g(x)=\phi(g, x)=:gx, is a group homomorphism. The purpose of the present post is to clarify the concept of faithful group action.

It is easy to show that (\forall g\in G, g\ne e )(\exists x\in X)gx\ne x if and only if (\forall g,s\in G, g\ne s )(\exists x\in X)gx\ne sx. Either of these two equivalent statements defines a faithful group action \phi.

The faithfulness of a group action \phi is equivalent to properties of the associated group homomorphism h. More concretely, it is easy to show that a group action \phi is faithful if and only if the associated group homomorphism h is injective, which in turn holds if and only if h has a trivial kernel. In short, \mbox{Ker}(h)=\left\{e\right\}\Leftrightarrow \phi~\mbox{faithful}\Leftrightarrow h~\mbox{injective}, where \mbox{Ker}(h) is the kernel of h and e the neutral element of G.

It can also be shown that if \phi is injective, then \phi is faithful; however, the converse does not hold. So the well-known equivalence between injectivity and faithfulness refers to the associated homomorphism: a group action \phi is faithful if and only if its associated group homomorphism h is injective, whereas \phi~\mbox{faithful}\Leftrightarrow\phi~\mbox{injective} does not hold.
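
For a finite group acting on a finite set, faithfulness can be tested directly by checking that the neutral element is the only group element acting as the identity on X, i.e. that the kernel of h is trivial. The Python sketch below illustrates this with two toy actions of \mathbb{Z}_4 chosen only for this illustration; the helper is_faithful is hypothetical.

```python
def is_faithful(group, X, act):
    """A finite action is faithful iff no non-identity g fixes every x in X,
    i.e. the kernel of h: g -> f_g is trivial."""
    trivial_actors = [g for g in group if all(act(g, x) == x for x in X)]
    return len(trivial_actors) == 1  # only the neutral element acts trivially

# Example: Z_4 = {0,1,2,3} acting on X = {0,1,2,3} by rotation k.x = (x + k) % 4.
Z4 = [0, 1, 2, 3]
print(is_faithful(Z4, Z4, lambda k, x: (x + k) % 4))        # True
# Non-faithful example: Z_4 acting on {0, 1} through the parity of k.
print(is_faithful(Z4, [0, 1], lambda k, x: (x + k) % 2))    # False (k = 2 acts trivially)
```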

22/01/2017

An example of how to select ideals to obtain quotient rings with desirable properties

The fundamental homomorphism theorem for rings employs the kernel K of a homomorphism f:A\rightarrow B to construct a quotient ring A/K isomorphic to ring B. Conversely, the homomorphic image A/J of a ring A is useful for factoring out unwanted features of A to obtain a quotient ring A/J with the desirable properties. Selecting an appropriate ideal J is the key for acquiring the desired quotient ring A/J.

In some cases, picking a suitable ideal J can be done methodically by tracing the desired property of A/J back to an analogous property of J. To demonstrate the process of selecting J, the simple example of obtaining a quotient ring A/J that doesn’t contain any divisors of zero will be considered.

The example under consideration is stated as follows. Let A be a commutative ring with unity. An ideal J of A is prime if and only if A/J is an integral domain.

Briefly recall the associated definitions. An integral domain is a commutative ring with unity that doesn’t have any divisors of zero. An ideal  J of a commutative ring A is prime if \forall a,b\in A with ab\in J it follows that a\in J or b \in J.

It is straightforward to prove the equivalence stated in this example. To prove that A/J has no divisors of zero, it suffices to observe that a\in J \Leftrightarrow J+a=J. Emphasis will instead be placed on understanding why one would think of using a prime ideal to retain the cancellation property in A/J, i.e. to ensure that there are no divisors of zero in A/J.

Think first how the lack of divisors of zero is stated for A/J. The product of two cosets J+a and J+b in A/J is (J+a)(J+b)=J+ab. Moreover, the zero element of A/J is the coset J=J+0, so saying that A/J has no divisors of zero means that J+ab=J\Rightarrow J+a=J or J+b=J.

Recall also that for any x\in A, x\in J \Leftrightarrow J+x=J. It then becomes obvious that the property J+ab=J\Rightarrow J+a=J or J+b=J in A/J corresponds to the property ab\in J\Rightarrow a\in J or b\in J, which is the defining property of a prime ideal J.
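
A concrete instance of this correspondence is A=\mathbb{Z} with J=n\mathbb{Z}: the cosets of J are the residues modulo n, J is a prime ideal exactly when n is prime, and this is exactly when \mathbb{Z}/n\mathbb{Z} has no divisors of zero. The Python sketch below checks both conditions on a few small values of n and confirms that they agree.

```python
def has_zero_divisors(n):
    """True if the quotient ring Z/nZ contains divisors of zero."""
    return any((a * b) % n == 0
               for a in range(1, n) for b in range(1, n))

def is_prime_ideal(n):
    """For A = Z and J = nZ: J is prime iff ab in J implies a in J or b in J.
    Checking the representatives 0, ..., n-1 suffices."""
    return all(not ((a * b) % n == 0) or a % n == 0 or b % n == 0
               for a in range(n) for b in range(n))

for n in (5, 6, 7, 12):
    print(n, is_prime_ideal(n), not has_zero_divisors(n))  # the two columns agree
```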

So it appears that in some cases it is easy to choose an ideal J whose properties are conveyed to a quotient ring A/J.

18/01/2017

Group actions as group homomorphisms

I have found the interpretation of the definition of group action as a group homomorphism to be verbose. This blog post provides a formalistic exposition of the equivalence between a group action and its induced homomorphism.

Let G be a group and X a set. A group action \phi of G on X is a function \phi : G \times X \rightarrow X that satisfies the properties of

  • identity, i.e. (\forall x\in X)\phi(e, x)=x, where e is the identity element of G,
  • and compatibility, i.e. (\forall g, s\in G)(\forall x\in X)\phi(gs, x)=\phi(g, \phi(s, x)).

For convenience, the shorthands ex=x and (gs)x=g(sx) are used in place of \phi(e, x)=x and \phi(gs, x)=\phi(g, \phi(s, x)), respectively.

Let \mbox{Sym}(X) denote the symmetric group of X, that is the group of bijective functions from X to X. Consider the family of bijective functions \left\{f_g:g\in G\right\}\subseteq\mbox{Sym}(X), where f_g:X\rightarrow X and f_g(x)=\phi(g, x)=gx.

It is now possible to define the function h:G\rightarrow \mbox{Sym}(X) as h(g)=f_g. In words, h maps an element g\in G to the bijective function f_g:X\rightarrow X defined by f_g(x)=\phi(g, x)=gx.

The common representation of a group action as a homomorphism relies on the following equivalence: a function \phi : G \times X \rightarrow X is a group action if and only if the function h:G\rightarrow \mbox{Sym}(X),~h(g)=f_g, with f_g(x)=\phi(g, x)=gx, is a group homomorphism. The main point of this blog post has been to formalize this statement. For the sake of completeness, its proof is provided below, despite being simple.

Firstly assume that \phi is a group action. It will be shown that h is a homomorphism. By definition, h(gs)=f_{gs}. Moreover, h(g)h(s)=f_gf_s, where the product f_gf_s denotes function composition, the operation of the symmetric group \mbox{Sym}(X). Since \phi is a group action, it follows that (\forall x\in X) f_{gs}(x)=(gs)x=g(sx)=(f_gf_s)(x), therefore h(gs)=h(g)h(s), so h is a group homomorphism.

Conversely, assume that h is a homomorphism. It will be shown that \phi is a group action.

The identity e\in G belongs to the kernel of h: indeed, h(e)=h(ee)=h(e)h(e), whence h(e)=\varepsilon by cancellation in the group \mbox{Sym}(X), with \varepsilon(x)=x being the identity function in \mbox{Sym}(X). Furthermore, h(e)=f_e, so f_e=\varepsilon, which means that (\forall x\in X)ex=f_e(x)=\varepsilon(x)=x. The property of identity is thus satisfied for \phi.

Since h is a homomorphism, it follows that h(gs)=h(g)h(s), so f_{gs}=f_g f_s. Finally, (\forall x\in X)(gs)x=f_{gs}(x)=(f_gf_s)(x)=g(sx), hence the compatibility property of \phi is also satisfied, and \phi is a group action.
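
As an illustration of the equivalence (not part of the proof), the Python sketch below takes the concrete action of G=\mathbb{Z}_3 on X=\{0,1,2\} by rotation, where the group operation is addition modulo 3, builds h(g)=f_g as a tuple of images, and confirms that h(g+s)=h(g)h(s) for all g,s\in G.

```python
from itertools import product

# Concrete example: G = Z_3 = {0, 1, 2} acting on X = {0, 1, 2} by rotation.
G = [0, 1, 2]
X = [0, 1, 2]
phi = lambda g, x: (x + g) % 3

def h(g):
    """h(g) = f_g, the permutation of X induced by g, stored as a tuple of images."""
    return tuple(phi(g, x) for x in X)

def compose(f, s):
    """Composition of two permutations given as tuples of images."""
    return tuple(f[s[x]] for x in X)

# h is a homomorphism: h(g + s) = h(g) o h(s) for all g, s in G.
print(all(h((g + s) % 3) == compose(h(g), h(s)) for g, s in product(G, G)))  # True
```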

09/01/2017

Proof: any element of a group of prime order generates the group

Let G be a group of prime order |G|=p. It will be shown that for any a\in G with a\ne e it holds that \langle a\rangle=G, where \langle a\rangle is the set generated by a and e is the identity element of G.

Since the order of group G is |G|=p<\infty, the order \mbox{ord}(a) of element a is also finite, i.e. \mbox{ord}(a)=n<\infty for some positive integer n.

Note that the set \langle a\rangle=\left\{a^0,a^1,\ldots,a^{n-1}\right\} generated by a is a subgroup of G, since \langle a\rangle is closed with respect to multiplication and with respect to inverses. Indeed, for any a^i,a^j\in\langle a\rangle, the division algorithm for integers ensures that there exist integers q,r with 0\le r <n such that i+j=nq+r, so a^i a^j = a^{i+j}=a^{nq+r}=(a^n)^q a^r=e a^r=a^r\in \langle a \rangle. Moreover, the inverse of any a^i\in \langle a \rangle is (a^i)^{-1}=a^{n-i}\in \langle a \rangle.

The order |\langle a \rangle| of the cyclic subgroup \langle a \rangle of G is equal to the order of the generator a, that is |\langle a \rangle|=\mbox{ord}(a)=n.

By Lagrange’s theorem, n is a factor of p. Since p is a prime, n=1 or n=p. It follows from a\ne e that n\ne 1, so n=p, which means that |\langle a \rangle|=|G|.

Finally, since G is finite, \langle a \rangle\subseteq G and |\langle a \rangle|=|G| imply \langle a \rangle=G.
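
To see the statement in action for a concrete group, the Python sketch below checks that in the additive group \mathbb{Z}_p every non-identity element generates the whole group. This merely exemplifies the theorem for \mathbb{Z}_p; it does not replace the general argument above.

```python
def generates(a, p):
    """Return True if a generates the additive group Z_p = {0, 1, ..., p-1}."""
    seen, x = set(), 0
    for _ in range(p):      # collect 0, a, 2a, ..., (p-1)a modulo p
        seen.add(x)
        x = (x + a) % p
    return len(seen) == p

p = 11  # any prime
print(all(generates(a, p) for a in range(1, p)))  # True: every non-identity element generates
```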

07/01/2017

Clarification on the proof of isomorphism of finite cyclic groups

In order to show that any two cyclic groups of finite order n are isomorphic to each other, it suffices to show that every cyclic group \langle a\rangle=\left\{a^0,a^1,a^2,\ldots,a^{n-1}\right\} of finite order n is isomorphic to the group \mathbb{Z}_n of integers modulo n with the operation of addition modulo n, where \mathbb{Z}_n=\left\{0,1,2,\ldots,n-1\right\}.

Towards this end, it can be shown that the function f:\mathbb{Z}_n\rightarrow\langle a\rangle defined as f(i)=a^i is a group isomorphism. Recalling the definition of group isomorphism, it suffices to show that f is a bijective homomorphism. It is straightforward to check that f is a bijective function. The fact that f is a homomorphism is also proven easily using the laws of exponents for groups, as follows: f(i+j)=a^{i+j}=a^i a^j=f(i)f(j).

Proving the homomorphic property of f as above works well if f is taken to be f:\mathbb{Z}\rightarrow\langle a\rangle,~f(i)=a^i, that is from the additive group of integers to the cyclic group \langle a\rangle=\left\{\ldots,a^{-2},a^{-1},a^0,a^1,a^2,\ldots\right\} of infinite order generated by a. However, in the case of f:\mathbb{Z}_n\rightarrow\langle a\rangle with \langle a\rangle=\left\{a^0,a^1,a^2,\ldots,a^{n-1}\right\}, there is a slight oversight: the addition implied by f(i+j) is addition modulo n.

Let’s denote addition modulo n by \oplus. So the proof of the homomorphic property takes the form f(i\oplus j)=a^{i\oplus j}=a^i a^j=f(i)f(j). The identity a^{i+j}=a^i a^j is not the same as a^{i\oplus j}=a^i a^j; the former is one of the common exponent laws for groups, while the latter is not.

What needs to be shown is that exponentiation by i+j and by i\oplus j gives the same result, i.e. a^{i+j}=a^{i\oplus j}. The proof of this identity is simple, yet it is useful to be aware of such a potential oversight.

According to the division algorithm for integers, there exist unique integers q and r such that i+j = nq+r and 0\le r < n. Thus, a^{i+j}=a^{nq+r}=(a^n)^q a^r. Since the cyclic group \langle a\rangle is of order n, it holds that a^n=e, where e is the identity element of \langle a\rangle. So, a^{i+j}=a^r.

Notice that r coincides with the result of the addition modulo n, that is r=i\oplus j. It follows that a^{i+j}=a^{i\oplus j}, which completes the proof.
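
As a numerical illustration, take \langle a\rangle to be the cyclic group of order n generated by the complex n-th root of unity a=e^{2\pi i/n}, so that a^n=e=1. The Python sketch below confirms f(i\oplus j)=f(i)f(j) up to floating-point tolerance.

```python
import cmath

n = 6
a = cmath.exp(2j * cmath.pi / n)          # generator of a cyclic group of order n
f = lambda i: a ** i                      # f : Z_n -> <a>, f(i) = a^i

# Homomorphism with respect to addition modulo n: f(i (+) j) = f(i) f(j).
ok = all(cmath.isclose(f((i + j) % n), f(i) * f(j))
         for i in range(n) for j in range(n))
print(ok)  # True
```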

04/01/2017

Proof: every cycle expresses as a product of transpositions

To show that a cycle (a_1~a_2~\dots~a_{r-1}~a_r) can be written as a product of one or more transpositions, it suffices to prove the validity of the identity (a_1~a_2~\dots~a_{r-1}~a_r)=(a_r~a_{r-1})\dots (a_r~a_2)(a_r~a_1), which will be done by mathematical induction on the cycle length r.

For the base case r=2, the identity holds trivially, since the cycle (a_1~a_2) equals the transposition (a_2~a_1). Assume now that the identity holds for cycles of length k. It will be shown that it also holds for k+1, as in (a_1~a_2~\dots~a_{k-1}~a_k~a_{k+1})=(a_{k+1}~a_k)(a_{k+1}~a_{k-1})\dots (a_{k+1}~a_2)(a_{k+1}~a_1).

Start by noticing that (a_1~a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1})=(a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1})(a_{k+1}~a_1).

Due to the induction assumption, which says that a k-length cycle can be decomposed into a product of transpositions, (a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1}) can be written as (a_2~a_3~\dots~a_{k-1}~a_k~a_{k+1})=(a_{k+1}~a_k)(a_{k+1}~a_{k-1})\dots (a_{k+1}~a_3)(a_{k+1}~a_2).

By combining the last two equations, the conclusion follows for k+1.
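
The identity is also easy to test computationally. The Python sketch below builds the list of transpositions (a_r~a_{r-1})\dots(a_r~a_1) for a given cycle and checks that applying them from right to left agrees with applying the cycle itself; the helper names are made up for this illustration.

```python
def cycle_to_transpositions(cycle):
    """(a1 a2 ... ar) = (ar a_{r-1}) ... (ar a2)(ar a1), returned left to right."""
    ar = cycle[-1]
    return [(ar, ai) for ai in reversed(cycle[:-1])]

def apply_cycle(cycle, x):
    if x in cycle:
        return cycle[(cycle.index(x) + 1) % len(cycle)]
    return x

def apply_transpositions(transpositions, x):
    # the rightmost transposition acts first
    for (u, v) in reversed(transpositions):
        if x == u:
            x = v
        elif x == v:
            x = u
    return x

cycle = (1, 2, 3, 4, 5)
ts = cycle_to_transpositions(cycle)
print(ts)  # [(5, 4), (5, 3), (5, 2), (5, 1)]
print(all(apply_cycle(cycle, x) == apply_transpositions(ts, x) for x in range(1, 7)))  # True
```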

04/01/2017

Proof: the product of disjoint cycles of a permutation is unique

Theorem 1, page 82, in the second edition of “A Book of Abstract Algebra” by Charles C. Pinter states that “Every permutation is either the identity, a single cycle, or a product of disjoint cycles”. The theorem is proved in the book. After the proof, it is stated that it is easy to see that the product of cycles is unique, except for the order of factors.

I have seen online proofs of this claim of uniqueness based on the concept of orbit. In what follows, an alternative proof is offered without relying on orbits.

Assume that there exists a permutation \pi which can be written as a product of disjoint cycles in two different ways. So, there are two collections of disjoint cycles, each cycle of length at least two, C=\left\{c_1,c_2,\dots,c_n\right\} and S=\left\{s_1,s_2,\dots,s_m\right\}, such that C\ne S and \pi=\prod_{i=1}^{n}c_i=\prod_{i=1}^{m}s_i.

C\ne S\Rightarrow C\not\subseteq S~\mbox{or}~S\not\subseteq C. Without loss of generality, assume that C\not\subseteq S. Thus, (\exists~\mbox{cycle}~c_i \in C)(\forall~\mbox{cycles}~s_k\in S) c_i\ne s_k.

Let a_q be some element in the cycle c_i. Since a_q\in c_i, the cycle c_i moves a_q, and because the cycles in C are disjoint, a_q is moved by \pi. Therefore, there exists a cycle s_l\in S such that a_q\in s_l.

The cycles c_i and s_l are disjoint from the rest of cycles in C and S, respectively. Thus, c_i(a_q)=s_l(a_q)=\pi(a_q).

Since c_i\ne s_l, applying \pi successively starting from a_q must eventually reach an element a_p, contained in both c_i and s_l, at which the two cycles disagree. Thereby, there exists an element a_p in c_i and in s_l with c_i(a_p)\ne s_l(a_p). This is a contradiction, since the permutation \pi is a function, so a_p should be mapped to a unique element \pi(a_p), whereas disjointness forces both c_i(a_p)=\pi(a_p) and s_l(a_p)=\pi(a_p).
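
For concreteness, the following Python sketch computes the disjoint cycle decomposition of a permutation by tracing successive images of each starting element, which is exactly the process of following \pi used in the argument above; the decomposition it returns is unique up to the order of the cycles and the starting point of each cycle.

```python
def disjoint_cycles(perm):
    """Disjoint cycle decomposition of a permutation given as a dict x -> perm(x).
    Fixed points are omitted; each cycle starts at its smallest element."""
    seen, cycles = set(), []
    for start in sorted(perm):
        if start in seen or perm[start] == start:
            continue
        cycle, x = [], start
        while x not in seen:      # follow pi until the cycle closes
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        cycles.append(tuple(cycle))
    return cycles

pi = {1: 3, 2: 2, 3: 5, 4: 6, 5: 1, 6: 4}
print(disjoint_cycles(pi))  # [(1, 3, 5), (4, 6)]
```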