Fundamentals of Asymptotic Analysis
When estimating the size of certain problems and the time complexity of certain algorithms, especially those closely related to number theory, an encounter with number-theoretic functions and their prefix sums may be inevitable. These functions are staircase-shaped; some are hard to visualize or compute directly, yet precise computation is unnecessary when we only want to estimate a size or a complexity.
This is where the natural idea of asymptotic analysis comes in. Thanks to the extensive work of numerous mathematicians, we now have a theory that provides smooth or even analytic alternatives to these staircase functions, together with reasonable error terms. The theory offers basic asymptotic results as building blocks, along with sound principles for composing them and applying them in more complex scenarios.
In this text, we are going to introduce some fundamental principles and techniques of asymptotic analysis. They form the foundation for deeper material in analytic number theory.
Asymptotic notations
We may already be familiar with the analysis of algorithm complexity, where little-o and big-O notations are used. Such notations are also commonly used in asymptotic analysis in analytic number theory, with slight variations that adapt them to the definition of number-theoretic functions:
Let there be functions
Proof. Here are just some comments on how these translate into limits.
In the case of
In the case of
For convenience, we elide certain terms with asymptotic notations in appropriate circumstances: for functions
In order to be useful, we may always
require that
Meanwhile,
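Before moving on, here is a tiny numeric illustration of the idea behind these notations (the functions below are illustrative choices, not taken from the definitions above): the ratio of x^2 + 100x to x^2 stays bounded and in fact tends to 1, so the former is both big-O of and asymptotically equivalent to the latter.
# A small numeric illustration (illustrative functions, not from the text):
# f(x) = x^2 + 100x is O(x^2), and moreover f(x) ~ x^2, since the ratio
# f(x)/x^2 stays bounded and tends to 1 as x grows.
def f(x):
    return x * x + 100 * x

def g(x):
    return x * x

for x in (10, 100, 1000, 10**4, 10**5, 10**6):
    print(x, f(x) / g(x))   # the ratio approaches 1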
Euler's summation formula
Euler's summation formula gives us a handy way to carry out asymptotic analysis for summations over a range, of the form
For integers
Proof. First, for any integer
Which can be rearranged into
Then, we know there's
Now we are done with the proof.
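As a quick sanity check, the sketch below numerically tests this integer-endpoint version, assuming it takes the standard form in which the sum of f(n) over a < n <= b equals the integral of f over [a, b] plus the integral of the fractional part {t} times f'(t); the choice f(t) = 1/t and the endpoints are purely illustrative.
# Numeric sketch of Euler's summation formula with integer endpoints,
# assuming the standard form
#   sum_{n=a+1}^{b} f(n) = int_a^b f(t) dt + int_a^b {t} f'(t) dt,
# where {t} = t - floor(t).  f(t) = 1/t is an illustrative choice.
from math import floor

def frac(t):
    return t - floor(t)

def riemann(h_fn, a, b, steps=400000):
    # plain midpoint Riemann sum; crude, but enough for a sanity check
    h = (b - a) / steps
    return h * sum(h_fn(a + (i + 0.5) * h) for i in range(steps))

f = lambda t: 1.0 / t
fp = lambda t: -1.0 / t ** 2

a, b = 1, 1000
lhs = sum(f(n) for n in range(a + 1, b + 1))
rhs = riemann(f, a, b) + riemann(lambda t: frac(t) * fp(t), a, b)
print(lhs, rhs)   # should agree up to the discretization error of the Riemann sum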
With a little modification, we can get a more general form that extends to ranges with real endpoints:
For real numbers
Proof. Let
We would like to extend the range of integration from
With these, we can extend the integral, and finally we have:
Which can be rearranged into the formula we want to prove.
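The version with real endpoints can be probed the same way; the sketch below assumes the standard (Apostol-style) statement, in which two boundary terms involving the fractional parts of the endpoints appear, and uses f(t) = 1/t with arbitrary non-integer endpoints as an illustration.
# Numeric sketch of the real-endpoint version, assuming the standard form
#   sum_{y<n<=x} f(n) = int_y^x f dt + int_y^x {t} f'(t) dt
#                        + f(x)*(floor(x) - x) - f(y)*(floor(y) - y).
# The endpoints y = 1.5, x = 10.7 and f(t) = 1/t are illustrative choices.
from math import floor

def frac(t):
    return t - floor(t)

def riemann(h_fn, a, b, steps=200000):
    h = (b - a) / steps
    return h * sum(h_fn(a + (i + 0.5) * h) for i in range(steps))

f = lambda t: 1.0 / t
fp = lambda t: -1.0 / t ** 2

y, x = 1.5, 10.7
lhs = sum(f(n) for n in range(floor(y) + 1, floor(x) + 1) if n > y)
rhs = (riemann(f, y, x)
       + riemann(lambda t: frac(t) * fp(t), y, x)
       + f(x) * (floor(x) - x)
       - f(y) * (floor(y) - y))
print(lhs, rhs)   # should agree up to the discretization error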
Here's a direct application of Euler's summation formula:
We all know the harmonic series
Where
Proof. By Euler's summation formula we have:
The integral in the middle can be split by
Where the replacement of
All that remains is to show the limit
Let
On the other hand, we can see
Since
And we are done with the proof.
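The asymptotic behaviour just proved is easy to watch numerically: the snippet below (an illustrative check, not part of the proof) prints the difference between the partial harmonic sum and the logarithm, which settles toward the Euler–Mascheroni constant, roughly 0.5772.
# Numeric check (illustrative, not part of the proof): H_x - ln(x)
# approaches the Euler-Mascheroni constant gamma ~ 0.57722 as x grows.
from math import log

H = 0.0
for x in range(1, 10**6 + 1):
    H += 1.0 / x
    if x in (10, 10**2, 10**3, 10**4, 10**5, 10**6):
        print(x, H - log(x))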
The harmonic series occurs very often in analytic number theory; recall that
Please notice that
Here's another example where Euler's summation formula is applicable:
Proof. By Euler's summation formula we have:
Again, by reasoning analogous to how we dealt with the intermediate integral in the asymptotic expansion of
Where the replacement of
Let
On the other hand, the graph of
Substituting this result back for the intermediate integral
Riemann zeta function
Besides the harmonic series, we may also encounter its variants of the form
Proof. First, all of them are
monotonically increasing as
Which means when
When
Proof. Just like what we've
done before, let
Note that the graph of
Then, consider
So in order for
Which means
Since these series converge, mathematicians have defined a function that records their limits, namely the Riemann zeta function:
With the Riemann zeta function, we can write down the asymptotic expansion of
Proof. First, inspired by the
form of Euler's summation formula,
for each positive integer
For the interval of integration
And we also have
Then, it must be mentioned that
- When , we have and ;
- When , although and , we have already defined and proved .
Finally, the asymptotic expansion can be obtained in the way as below:
And we are done with the proof.
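For a quick numeric check of this expansion, one can compare the partial sums against the smooth main terms built from the zeta function; the sketch below assumes the standard form of the expansion for a fixed exponent greater than 1 (the exponent 1.5 and the cut-off points are illustrative) and uses mpmath for the zeta values.
# Numeric check, assuming the standard expansion
#   sum_{n<=x} n^(-s) = zeta(s) + x^(1-s)/(1-s) + O(x^(-s))   for s > 1.
# The difference should shrink roughly like x^(-s).
from mpmath import mp, zeta

mp.dps = 30                       # working precision (decimal digits)
s = mp.mpf('1.5')                 # an illustrative exponent with s > 1
partial = mp.mpf(0)
checkpoints = {10**2, 10**3, 10**4, 10**5}
for n in range(1, 10**5 + 1):
    partial += mp.mpf(n) ** (-s)
    if n in checkpoints:
        smooth = zeta(s) + mp.mpf(n) ** (1 - s) / (1 - s)
        print(n, partial - smooth)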
With the Riemann zeta function and the asymptotic expansion of
Abel's identity
Abel's identity, also known as Abel's summation formula, is a common summation technique that appears in both elementary real analysis and asymptotic analysis; it comes in both a formal and an analytic version:
For real numbers
For real numbers
Proof. Utilize the feature of
Substituting them back, we have:
Which is exactly the form we set out to prove.
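To see the identity in action numerically, here is a sketch assuming the usual form, in which the weighted sum equals a boundary term minus the integral of the partial-sum function A(t) against f'(t); since A is a step function, for integer endpoints that integral can be evaluated exactly interval by interval. The weights a(n) = n and the function f(t) = 1/t^2 are illustrative choices.
# Numeric sketch of Abel's identity, assuming the usual form
#   sum_{y<n<=x} a(n) f(n) = A(x) f(x) - A(y) f(y) - int_y^x A(t) f'(t) dt,
# where A(t) = sum_{n<=t} a(n).  With integer endpoints the integral splits
# exactly over unit intervals, on each of which A is constant.
def abel_check(a, f, y, x):
    A = [0.0] * (x + 1)                        # A[k] = a(1) + ... + a(k)
    for n in range(1, x + 1):
        A[n] = A[n - 1] + a(n)
    lhs = sum(a(n) * f(n) for n in range(y + 1, x + 1))
    # int_y^x A(t) f'(t) dt = sum_k A(k) * (f(k+1) - f(k)), exactly,
    # since A(t) = A(k) on [k, k+1)
    integral = sum(A[k] * (f(k + 1) - f(k)) for k in range(y, x))
    rhs = A[x] * f(x) - A[y] * f(y) - integral
    return lhs, rhs

# illustrative choices: a(n) = n and f(t) = 1/t^2
print(abel_check(lambda n: float(n), lambda t: 1.0 / t ** 2, 1, 100))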
It is sometimes considered more general than Euler's summation formula, since Euler's summation formula can be recovered by letting
However, as we can see, Euler's summation formula does provide us with a uniform way to accommodate each term in a simple asymptotic expansion.
We will see more of its common uses in the following sections; let's leave it here for now.
Faulhaber's formula and Bernoulli numbers
We have shown the asymptotic
complexity of the sum of
For convenience, we may let
Proof. First, we apply Abel's identity by viewing the summation as
Then, move the term of
Replace
Proof. Just expand the term
Although the expansion carried out this way requires
The theorem and corollary enable us to write code to compute the sum of
from typing import Any, List

# compute A_0 up to A_m in Faulhaber's formula
def faulhaber(N: Any, m: int, idiv: bool = False) -> List[Any]:
    A: List[Any] = [None] * (m + 1)
    A[0] = N                              # A_0 = N
    binom = [0] * (m + 1)                 # space up to m
    binom[0] = 1
    Np = [N ** k for k in range(m + 2)]   # N^0 to N^(m+1)
    for k in range(1, m + 1):             # 1 to m
        # update binom for subscript k first
        prev = 1
        for i in range(1, k):
            prev, binom[i] = binom[i], prev + binom[i]
        binom[k] = 1
        # evaluate A_k now
        kp1Ak = Np[k + 1] + k * Np[k] + sum(
            binom[l - 1] * (Np[l] - A[l]) for l in range(1, k))
        A[k] = kp1Ak // (k + 1) if idiv else kp1Ak / (k + 1)
    return A

# normal usage with python integers
#print(faulhaber(10**7, 100, idiv=True))

# use sympy to export the result
import sympy
N = sympy.var('N')
A = faulhaber(N, 17)                      # A_0 up to A_17
for i, expr in enumerate(A):
    print(f'A_{i} = {expr.expand()}')
Which generates the following result:
With a little bit of mathematical
induction, one can easily see the
closed form of
The coefficient of
Where
Other terms in
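Before proving anything, we can already probe this connection with the code above: assuming the standard closed form, the coefficient of the linear term in each A_k should be the Bernoulli number B_k in the convention where B_1 = +1/2 (since our sums start at n = 1). The snippet below, meant to be run after the faulhaber definition above, compares those coefficients against SymPy's bernoulli; note that SymPy's sign convention for bernoulli(1) has differed between versions, so only the k = 1 line may differ in sign.
# Illustrative check (run after the faulhaber definition above), assuming
# the standard closed form: the coefficient of N in A_k should equal the
# Bernoulli number B_k, in the convention with B_1 = +1/2.  SymPy's own
# convention for bernoulli(1) has differed between versions, so the k = 1
# line may differ in sign.
import sympy

N = sympy.var('N')
A = faulhaber(N, 10)
for k in range(1, 11):
    coeff = sympy.expand(A[k]).coeff(N, 1)   # coefficient of the linear term
    print(k, coeff, sympy.bernoulli(k))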
To show this, we first need to get acquainted with a technique called binomial inversion:
Proof. Let's denote:
Clearly there's
Such that
While one can directly check whether
Along with:
It's easy to check
Which completes the proof.
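Binomial inversion is also easy to test numerically; the sketch below uses one standard form of the inversion pair (which may differ from the exact statement above in where the signs are placed) and checks that applying the forward transform and then the inverse recovers an arbitrary sequence.
# One standard form of binomial inversion (possibly differing from the
# statement above in sign placement):
#   b_n = sum_{k<=n} C(n, k) a_k   <=>   a_n = sum_{k<=n} (-1)^(n-k) C(n, k) b_k
from math import comb

def forward(a):
    return [sum(comb(n, k) * a[k] for k in range(n + 1)) for n in range(len(a))]

def inverse(b):
    return [sum((-1) ** (n - k) * comb(n, k) * b[k] for k in range(n + 1))
            for n in range(len(b))]

a = [3, 1, 4, 1, 5, 9, 2, 6]       # arbitrary test sequence
print(inverse(forward(a)) == a)     # expect True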
The transformation applied in our proof is much like binomial inversion, and can be proved with analogous reasoning:
Proof. Just like what we've done in the theorem of binomial inversion:
(Be wary of the matrix size; this time there are
And we need to prove:
In this case, the recursion of
And thus we have:
Consider the first term of
Please note that the expression above is not compatible with the case of
Now consider the
And now we are done with the proof.
Proof. When
Be wary of the range of
Proof. It is not hard to verify that the right-hand side holds when
Which completes the proof.
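If these constants are the Bernoulli numbers, as the closed form below suggests, they can be generated by the classical recursion in which the binomially weighted partial sums vanish; the sketch below uses exact fractions and produces the convention with B_1 = -1/2, which may differ from the convention adopted in this text by the sign of B_1.
# A minimal sketch generating Bernoulli numbers from the classical recursion
#   sum_{j=0}^{m} C(m+1, j) B_j = 0   for m >= 1,   with B_0 = 1,
# which yields the B_1 = -1/2 convention; the text's convention may flip
# the sign of B_1.
from fractions import Fraction
from math import comb

def bernoulli_numbers(m_max):
    B = [Fraction(1)]
    for m in range(1, m_max + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli_numbers(10))   # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, ...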
The constants
By now, we are finally able to prove the closed form of Faulhaber's formula:
Proof. First, for each
By viewing
And thus we have:
Where the swap of the summations is justified by the fact that the summands of
And when we replace
Which is exactly what we are to prove.
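As a final cross-check, the sketch below tests the closed form symbolically, assuming it is the standard statement in which A_k(N) equals 1/(k+1) times the binomially weighted combination of Bernoulli numbers and powers of N, with the B_1 = +1/2 convention; the Bernoulli numbers are built explicitly in that convention, and the result is compared against SymPy's independent evaluation of the power sum.
# Cross-check of the closed form, assuming the standard statement
#   sum_{n<=N} n^k = 1/(k+1) * sum_{j=0}^{k} C(k+1, j) B_j N^(k+1-j)
# with the B_1 = +1/2 convention, against SymPy's own evaluation.
import sympy
from math import comb

N, n = sympy.symbols('N n')

def bernoulli_plus(m_max):
    # classical recursion, then flip B_1 to +1/2; the even-index values
    # are unaffected by the convention choice
    B = [sympy.Integer(1)]
    for m in range(1, m_max + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    B[1] = sympy.Rational(1, 2)
    return B

m = 8
B = bernoulli_plus(m)
for k in range(m + 1):
    closed = sum(comb(k + 1, j) * B[j] * N ** (k + 1 - j)
                 for j in range(k + 1)) / (k + 1)
    direct = sympy.summation(n**k, (n, 1, N))
    print(k, sympy.expand(closed - direct) == 0)   # expect True for each k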