Greatest Common Divisor


For two integers $a$ and $b$, their greatest common divisor $d = \gcd(a, b)$ is the greatest number such that $d \mid a$ and $d \mid b$. Otherwise we could take some $d' > d$ such that $d'$ is a common divisor of $a$ and $b$ but is greater than $d$, which is a contradiction to our previous definition.

When evaluating the greatest common divisor between $0$ and another number $n$, let $\gcd(0, n) = n$, since every divisor of $n$ also divides $0$; and when $n = 0$, it makes sense to define $\gcd(0, 0) = 0$ as a fallback signifying this case.

Obviously there's $\gcd(a, b) = \gcd(|a|, |b|)$. To simplify, we constrain the discussion of $\gcd$ to non-negative numbers in this text.

Euclidean algorithm

All of us learned how to calculate the greatest common divisor of two integers in elementary school. And long ago, the Greek mathematician Euclid invented an algorithm to calculate the greatest common divisor of two integers $a$ and $b$:

  1. If $b = 0$ then we return $a$ and vice versa; otherwise start with the pair $(a, b)$, where $a, b$ are the integers whose greatest common divisor we want to calculate.
  2. Calculate $r = a \bmod b$, substitute $(a, b) \leftarrow (b, r)$.
  3. Repeat step 2 until $b = 0$, then $a$ is the greatest common divisor evaluated.
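The steps above can be sketched in Python (a minimal sketch; the function name is mine):

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of non-negative a and b (Euclidean algorithm)."""
    while b != 0:
        # Step 2: replace (a, b) with (b, a mod b).
        a, b = b, a % b
    # Step 3: b == 0, so a now holds the greatest common divisor.
    return a

print(gcd(252, 105))  # 21
```

Note that `gcd(0, n)` returns `n` and `gcd(0, 0)` returns `0`, matching the fallback definition above.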

Proof

When the input integers are the same, the value itself is the greatest common divisor and could be evaluated by the algorithm above, trivially. The same holds when either of them is zero. So let's focus on the case of evaluating two different non-zero integers.

If the first step encounters $a < b$, then by $a \bmod b = a$, the two numbers just swap their places, so we can trivially assume $a \ge b$ on entering the algorithm.

Define $d = \gcd(a, b)$ for the original inputs. On each step of the algorithm, we calculate $r = a \bmod b$ and substitute $(a, b) \leftarrow (b, r)$; since $r = a - \lfloor a / b \rfloor \cdot b$, the invariant $d \mid a$ and $d \mid b$ holds after each step.

If the algorithm terminates at $(g, 0)$ with $g < d$, then by the invariant, $d$ also divides $g$, which is a contradiction.

Since $b$ is non-negative and decreasing after each step, we will finally reach a point where $b \mid a$ and $r = 0$, leaving $(b, 0)$ and halting the algorithm.

Complexity

To analyze the complexity of the algorithm, let's denote the $a$ and $b$ in the form of a two-dimensional vector $\begin{pmatrix} a \\ b \end{pmatrix}$ [1]. In each step of the algorithm, we calculate $(b, a \bmod b)$ from $(a, b)$, which can be written as:

$$\begin{pmatrix} b \\ a \bmod b \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & -q \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}, \quad q = \left\lfloor \frac{a}{b} \right\rfloor$$

Assume it takes $n$ steps for the algorithm to halt; the algorithm itself generates the series of products:

$$\begin{pmatrix} \gcd(a, b) \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & -q_n \end{pmatrix} \cdots \begin{pmatrix} 0 & 1 \\ 1 & -q_1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}$$

Intuitively, it is the algorithm's worst case that every quotient $q_i = 1$, where we will greedily get as many steps as possible. The expression above can be rewritten as:

$$\begin{pmatrix} \gcd(a, b) \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix}^n \begin{pmatrix} a \\ b \end{pmatrix}$$

The matrix is denoted as $M = \begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix}$. Let $\varphi = \frac{1 + \sqrt{5}}{2}$ be the largest absolute eigenvalue of $M^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, and it is obvious that $b = \Theta(\varphi^n)$ in this worst case, so the Euclidean algorithm's complexity is at most $O(\log_{\varphi} b) = O(\log b)$ steps, and is very likely to be sublogarithmic [2].
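Consecutive Fibonacci numbers realize exactly this worst case, since each quotient is $1$; a quick experiment (the helper name is mine):

```python
def gcd_steps(a: int, b: int) -> int:
    """Count how many modulo steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Build Fibonacci numbers; on consecutive ones every quotient equals 1.
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

# Step count grows by one per Fibonacci index: linear in n, logarithmic in b.
print([gcd_steps(fib[i + 1], fib[i]) for i in (5, 10, 15, 18)])  # [5, 10, 15, 18]
```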

Least common multiple

Just like the way we define the greatest common divisor, we define the least common multiple $\mathrm{lcm}(a, b)$ as the smallest positive number divisible by both $a$ and $b$.

Obviously, a number divisible by both $a$ and $b$ could be $\frac{ab}{\gcd(a, b)}$, and it smells like the least common multiple intuitively. But proof is still needed.

Proof by construction

To be divisible by both $a$ and $b$, it's undoubted for $\mathrm{lcm}(a, b)$ to have a factor of $\gcd(a, b)$. Let's assume $a = m \cdot \gcd(a, b)$ and $b = n \cdot \gcd(a, b)$, where $\gcd(m, n) = 1$, and take $\mathrm{lcm}(a, b) = m \cdot n \cdot \gcd(a, b)$ so that it is minimal.

As we know $a \cdot b = m \cdot n \cdot \gcd(a, b)^2$, so trivially $\mathrm{lcm}(a, b) = \frac{a \cdot b}{\gcd(a, b)}$, which concludes our proof.

This shows us the relationship between the product of two numbers, their greatest common divisor, and their least common multiple. So the least common multiple can be evaluated by first evaluating the greatest common divisor, using the Euclidean algorithm we've described above.
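That relationship translates directly into code, a small sketch building on the standard library's `math.gcd` (non-negative inputs assumed, as in the text):

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple via the identity lcm(a, b) = a * b / gcd(a, b)."""
    if a == 0 or b == 0:
        return 0  # fallback, mirroring gcd(0, 0) = 0
    # Divide before multiplying to keep the intermediate value small.
    return a // gcd(a, b) * b

print(lcm(12, 18))  # 36
```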

Proof by prime factorization

If you feel like the proof above is not sound enough, there's an alternative proof done by prime factorizing $a$ and $b$ as:

$$a = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_k^{\alpha_k}, \quad b = p_1^{\beta_1} p_2^{\beta_2} \cdots p_k^{\beta_k}$$

where $p_1, p_2, \dots, p_k$ are the primes comprised in either $a$ or $b$, with exponents allowed to be zero. It won't be hard to show:

$$\gcd(a, b) = \prod_{i=1}^{k} p_i^{\min(\alpha_i, \beta_i)}, \quad \mathrm{lcm}(a, b) = \prod_{i=1}^{k} p_i^{\max(\alpha_i, \beta_i)}$$

And since $\min(\alpha_i, \beta_i) + \max(\alpha_i, \beta_i) = \alpha_i + \beta_i$, we've effectively proved that:

$$\gcd(a, b) \cdot \mathrm{lcm}(a, b) = a \cdot b$$

Linear Diophantine equations

Structure of solutions

A linear Diophantine equation is an equation in the form of $ax + by = c$, asking for integral solutions $x$ and $y$.

Let's do a congruence modulo-$\gcd(a, b)$ operation on both sides of the equation; we have $c = ax + by \equiv 0 \pmod{\gcd(a, b)}$. Obviously it's impossible for the equation to have an integral solution if $\gcd(a, b)$ does not divide $c$.

In the text above, we've analyzed that the Euclidean algorithm is fundamentally a method of finding some linear combination $ax_0 + by_0 = \gcd(a, b)$. Multiplying both sides by $\frac{c}{\gcd(a, b)}$ we have:

$$a \cdot \frac{c x_0}{\gcd(a, b)} + b \cdot \frac{c y_0}{\gcd(a, b)} = c$$

So $(x, y) = \left(\frac{c x_0}{\gcd(a, b)}, \frac{c y_0}{\gcd(a, b)}\right)$ must be among the solutions of the linear Diophantine equation.

On the other hand, for another solution $(x', y')$ of $ax' + by' = c$, by subtracting these two equations we have $a(x' - x) + b(y' - y) = 0$. So all solutions of $ax + by = c$ can be represented as $(x + \Delta x, y + \Delta y)$, where $(\Delta x, \Delta y)$ are the solutions of $a \Delta x + b \Delta y = 0$.

To find the solutions of $a \Delta x + b \Delta y = 0$, let $a = m \cdot \gcd(a, b)$ and $b = n \cdot \gcd(a, b)$, so we have $m \Delta x = -n \Delta y$. Since $\gcd(m, n) = 1$ and $n \mid m \Delta x$, $\Delta x$ is divisible by $n$; likewise $\Delta y$ is divisible by $m$. Given that, it won't be hard to see that $(\Delta x, \Delta y) = (kn, -km)$, $k \in \mathbb{Z}$, gives the solutions, sufficiently and necessarily.

Finally, the structure of solutions of the linear Diophantine equation is:

$$(x, y) = \left(\frac{c x_0}{\gcd(a, b)} + k \cdot \frac{b}{\gcd(a, b)},\ \frac{c y_0}{\gcd(a, b)} - k \cdot \frac{a}{\gcd(a, b)}\right), \quad k \in \mathbb{Z}$$

The equation in the form of $ax_0 + by_0 = \gcd(a, b)$ is also called Bezout's identity, and $(x_0, y_0)$ are called the Bezout coefficients. Obviously the Euclidean algorithm guarantees us to find at least one such pair, after some tuning which we are about to show in the next section.
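To make the structure concrete, here is a small sketch that enumerates solutions from one particular solution; the equation $6x + 10y = 8$ and the particular solution $(8, -4)$ are my own example, not from the text:

```python
from math import gcd

def solutions(a: int, b: int, x0: int, y0: int, ks):
    """Given one particular solution (x0, y0) of a*x + b*y = c, return the
    solutions (x0 + k*b/g, y0 - k*a/g) for each k in ks, where g = gcd(a, b)."""
    g = gcd(a, b)
    return [(x0 + k * (b // g), y0 - k * (a // g)) for k in ks]

# 6x + 10y = 8: gcd(6, 10) = 2 divides 8, and (8, -4) is one solution.
for x, y in solutions(6, 10, 8, -4, range(-2, 3)):
    assert 6 * x + 10 * y == 8
print(solutions(6, 10, 8, -4, [1]))  # [(13, -7)]
```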

Extended Euclidean algorithm

The objective is simple: to recover the Bezout coefficients during the process of the Euclidean algorithm. The analysis is clear: it is just the consecutive linear transforms shown in the complexity analysis of the Euclidean algorithm. All we need to do is to take a deeper look at:

$$\begin{pmatrix} \gcd(a, b) \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & -q_n \end{pmatrix} \cdots \begin{pmatrix} 0 & 1 \\ 1 & -q_1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} x_n & y_n \\ x'_n & y'_n \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}$$

Obviously, $(x_n, y_n)$ are the coefficients we want as our final result, since $a x_n + b y_n = \gcd(a, b)$; but the other elements in the matrix might be involved in our calculation and must be studied.

Given the matrix of step $i$, the next matrix could be derived by:

$$\begin{pmatrix} x_{i+1} & y_{i+1} \\ x'_{i+1} & y'_{i+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & -q_{i+1} \end{pmatrix} \begin{pmatrix} x_i & y_i \\ x'_i & y'_i \end{pmatrix} = \begin{pmatrix} x'_i & y'_i \\ x_i - q_{i+1} x'_i & y_i - q_{i+1} y'_i \end{pmatrix}$$

Let's substitute $x_{i+1} = x'_i$ and $y_{i+1} = y'_i$, so that $x'_{i+1} = x_i - q_{i+1} x'_i$ and $y'_{i+1} = y_i - q_{i+1} y'_i$, which means we need to do some Fibonacci-style bookkeeping, and at each step we calculate the new $(x'_{i+1}, y'_{i+1})$. The algorithm initializes the matrix with $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, which is the identity matrix.

Besides finding the Bezout coefficients, we've also solved the equation $a x' + b y' = 0$ with the solution placed in $(x'_n, y'_n)$: by substituting in some examples, you could find $x'_n = \pm \frac{b}{\gcd(a, b)}$ and $y'_n = \mp \frac{a}{\gcd(a, b)}$. Is this a coincidence? It would be even more useful if it could yield $\frac{a}{\gcd(a, b)}$ and $\frac{b}{\gcd(a, b)}$ at the same time, but some proof is still required before we could use it fearlessly.

The key to the proof is hidden to some extent: it is embedded in the determinants of the matrices. Each matrix $\begin{pmatrix} 0 & 1 \\ 1 & -q_i \end{pmatrix}$ is of determinant $-1$, then by the multiplicative property of the matrix determinant:

$$x_n y'_n - y_n x'_n = \det \begin{pmatrix} x_n & y_n \\ x'_n & y'_n \end{pmatrix} = (-1)^n$$

It has effectively found an integral solution of the linear Diophantine equation $x'_n u + y'_n v = \pm 1$, so $x'_n$ and $y'_n$ are coprime [3], according to Bezout's identity.

In the meantime, $(x'_n, y'_n)$ is a solution to the Diophantine equation $a x' + b y' = 0$. Divide the equation by the greatest common divisor, and we get $m x'_n = -n y'_n$, where $a = m \cdot \gcd(a, b)$ and $b = n \cdot \gcd(a, b)$. Based on the fact that $m$ and $n$ are coprime, and $x'_n$ and $y'_n$ are coprime, the only possible solutions are either $(x'_n, y'_n) = (n, -m)$ or $(x'_n, y'_n) = (-n, m)$, which concludes the proof.

This should be sufficient for implementing the extended Euclidean algorithm. The algorithm should yield the greatest common divisor, the Bezout's coefficients, and the quotients after dividing the greatest common divisor.
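A minimal iterative sketch carrying the two matrix rows (the function name and return convention are mine):

```python
def extended_gcd(a: int, b: int):
    """For non-negative a, b, return (g, x, y, b_g, a_g) where
    g == gcd(a, b), a*x + b*y == g, and (b_g, a_g) == (b//g, a//g)."""
    x, y, xp, yp = 1, 0, 0, 1  # start from the identity matrix
    while b != 0:
        q = a // b
        a, b = b, a - q * b
        # Fibonacci-style bookkeeping: shift rows, compute the new bottom row.
        x, xp = xp, x - q * xp
        y, yp = yp, y - q * yp
    # The bottom row ends as the pair (b/g, a/g) up to sign, by the
    # determinant argument, so taking absolute values recovers the quotients.
    return a, x, y, abs(xp), abs(yp)

g, x, y, b_g, a_g = extended_gcd(240, 46)
print(g, 240 * x + 46 * y, b_g, a_g)  # 2 2 23 120
```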

Chinese remainder theorem

The Sunzi Suanjing states a problem: there are some objects whose number is unknown; counting them 3 by 3 leaves 2 remaining, counting them 5 by 5 leaves 3 remaining, and counting them 7 by 7 leaves 2 remaining. How many objects are there?

The problem could be formalized as a system of linear congruences:

$$\begin{cases} x \equiv 2 \pmod{3} \\ x \equiv 3 \pmod{5} \\ x \equiv 2 \pmod{7} \end{cases}$$

Let's start with the case of only two rows of congruences to solve, $x \equiv r_1 \pmod{m_1}$ and $x \equiv r_2 \pmod{m_2}$. Obviously they are equivalent to the system of linear Diophantine equations:

$$\begin{cases} x = r_1 + k_1 m_1 \\ x = r_2 + k_2 m_2 \end{cases}$$

Subtract them and move the terms so that:

$$k_1 m_1 - k_2 m_2 = r_2 - r_1$$

This is a linear Diophantine equation about $(k_1, k_2)$. Since the choice of $r_1$ and $r_2$ is arbitrary, if you want the equation to be solvable regardless of them, $m_1$ and $m_2$ must be coprime.

And when $m_1$ and $m_2$ are coprime, we must be capable of finding a Bezout's identity $u m_1 + v m_2 = 1$, and a solution of the linear Diophantine equation is $(k_1, k_2) = ((r_2 - r_1) u, -(r_2 - r_1) v)$. By substituting $k_1$ into $x = r_1 + k_1 m_1$ we get $x = r_1 + (r_2 - r_1) u m_1$. The same thing happens when you substitute $k_2$ into $x = r_2 + k_2 m_2$ instead, which can be verified yourself.

The analysis on linear congruences with two rows shows us that they can be equivalently collapsed into $x \equiv r_1 + (r_2 - r_1) u m_1 \pmod{m_1 m_2}$ when $\gcd(m_1, m_2) = 1$: from left to right, we've shown why the linear congruences are solvable despite the choice of $r_1, r_2$ when the moduli are coprime; from right to left, just by doing congruence modulo-$m_1$ and modulo-$m_2$ operations on the collapsed row you will get the original rows back.

For the case of more rows of linear congruences, by collapsing the first two rows iteratively you will finally reach a point where all linear congruences are collapsed into a single row of congruence. To ensure the collapsed one and the original ones are equivalent, the moduli $m_1, m_2, \dots$ must be pairwise coprime, by mathematical induction on the collapsed modulus $m_1 m_2 \cdots m_i$ at each $i$-th step of collapsing.

Let's conclude by solving the problem from Sunzi Suanjing; first write it as linear congruences:

$$\begin{cases} x \equiv 2 \pmod{3} \\ x \equiv 3 \pmod{5} \\ x \equiv 2 \pmod{7} \end{cases}$$

For the first two rows, by Bezout's identity $2 \cdot 3 - 1 \cdot 5 = 1$, we have $x \equiv 2 + (3 - 2) \cdot 2 \cdot 3 \equiv 8 \pmod{15}$. Collapse it with the third row, by Bezout's identity $1 \cdot 15 - 2 \cdot 7 = 1$, we have $x \equiv 8 + (2 - 8) \cdot 1 \cdot 15 \equiv -82 \equiv 23 \pmod{105}$, and $23, 128, 233, \dots$ are the answers to the question in Sunzi Suanjing.
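The collapsing procedure can be sketched as follows; `pow(m, -1, n)` (Python 3.8+) supplies the coefficient $u$ from Bezout's identity, and pairwise-coprime moduli are assumed:

```python
def crt(congruences):
    """Collapse rows x ≡ r_i (mod m_i), with pairwise-coprime m_i, two at a
    time; return (r, m) such that the system is equivalent to x ≡ r (mod m)."""
    r1, m1 = congruences[0]
    for r2, m2 in congruences[1:]:
        u = pow(m1, -1, m2)  # Bezout: u*m1 + v*m2 == 1, i.e. u*m1 ≡ 1 (mod m2)
        r1 = (r1 + (r2 - r1) * u * m1) % (m1 * m2)
        m1 *= m2
    return r1, m1

# Sunzi Suanjing: remainders 2, 3, 2 modulo 3, 5, 7.
print(crt([(2, 3), (3, 5), (2, 7)]))  # (23, 105)
```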

Conclusion

Even the concept of the greatest common divisor, which most of us learned in elementary school, has profound properties imbued in it, and by itself can be used to solve linear Diophantine equations, to a complete extent.

Mastering number theory will certainly aid us a lot in the study of group theory and even more advanced algebra. But this requires us to comprehend and view mathematical structures in an interconnected manner.


  1. Don't panic or be matrix-phobic: some of you might be astonished at using linear algebra on such elementary-school material, but soon you will see the charm of matrices not only in analyzing the complexity, but also in number-theoretic proofs. [return]
  2. In fact, usually very large numbers will be taken into the calculation. So the division's cost must be taken into consideration, which is also logarithmic. So strictly speaking, the total complexity will be $O(\log^2 b)$ instead. [return]
  3. This is also an important application of Bezout's identity: if you've found integers $x, y$ such that $ax + by = 1$, then each pair taking one of $a, x$ and one of $b, y$ will be coprime. [return]
January 1, 2023