The matroid, a combinatorial object first introduced by Whitney in 1935 and studied extensively by mathematicians ever since, is one of the most important topics and glorious triumphs of combinatorics.
Matroids can be thought of as what happens when linear algebra meets graph theory. Initially, the matroid was an abstraction and generalization of linear independence in linear algebra; this is also how it got its name, "something similar to a matrix". However, it was then found to be strongly connected to graph theory: many graph-theoretic objects turn out to be matroids, while graph theory provides metaphors and algorithms for interpreting matroids and performing matroid operations.
However, despite the fact that linear algebra and graph theory establish the language of matroid theory, there's no isolated island in mathematics, and matroids are also related to other fields of combinatorics, as well as algebra, geometry, and topology. On the other hand, matroids have a wealth of published results in combinatorial optimization algorithms, so studying matroids can also improve one's problem-solving in computer science.
Independent Sets and Bases
As said, matroids abstract linear independence, so let's recall what linear independence can do. First, a linearly independent set is a set of vectors in which no vector can be represented as a linear combination of the others. It won't be hard to see that any subset of a linearly independent set is also linearly independent: otherwise, the witnessing linear combination in the subset would carry over to the superset, rendering the superset linearly dependent. A property of a set is said to be hereditary if it is preserved when passing to any of its subsets; linear independence is an example of a hereditary property.
The next one is not so obvious but still simple for those familiar with linear algebra: for two linearly independent sets $A, B$ (of the same vector space), if $|A| < |B|$, then there exists a vector $b \in B \setminus A$ such that $A \cup \{b\}$ is linearly independent. To prove this, it suffices to show some vector in $B$ can't be represented as a linear combination of $A$. This can be shown by Gaussian elimination. If not so, let $A = \{a_1, a_2, \dots, a_m\}$ and $B = \{b_1, b_2, \dots, b_n\}$ with $m < n$, and represent every $b_i$ as a linear combination of $A$:

$$b_i = \sum_{j=1}^{m} c_{ij} a_j.$$

This can be viewed as a system of $n$ linear equations over the $m$ "variables" $a_1, \dots, a_m$. Since $n > m$, the coefficient rows $(c_{i1}, \dots, c_{im})$ are linearly dependent, so we may use Gaussian elimination to wipe the $n$-th equation out, to $0$. But this is accomplished by subtracting scalar multiples of the first $n - 1$ rows from the $n$-th row, that is,

$$b_n = \sum_{i=1}^{n-1} \lambda_i b_i$$

for some coefficients $\lambda_1, \dots, \lambda_{n-1}$. So now $b_n$ is a linear combination of the first $n - 1$ vectors in $B$, which contradicts $B$ being a linearly independent set.
The property that a larger linearly independent set can provide some vector to a smaller independent set is called the independent set exchange property (aka the augmentation property). A matroid is a combinatorial object that captures and generalizes the hereditary property and the independent set exchange property:
Definition (Hereditary Systems and Matroids)
A hereditary system $H = (E, \mathcal{I})$ on $E$ is a combinatorial object, where $E$ is a finite set called the ground set, and $\mathcal{I} \subseteq 2^E$ is a set of subsets of $E$ called the independent sets of $H$, such that:
- $\emptyset \in \mathcal{I}$.
- $\forall I \in \mathcal{I}, \forall I' \subseteq I : I' \in \mathcal{I}$.
A matroid $M = (E, \mathcal{I})$ on $E$ is a combinatorial object such that:
- (Hereditary Property) $(E, \mathcal{I})$ is a hereditary system on $E$.
- (Independent Set Exchange Property) $\forall I, J \in \mathcal{I} : |I| < |J| \Rightarrow \exists x \in J \setminus I : I \cup \{x\} \in \mathcal{I}$.
In a hereditary system or a matroid $(E, \mathcal{I})$ on $E$, a subset $S \subseteq E$ is said to be independent when $S \in \mathcal{I}$, and dependent when $S \notin \mathcal{I}$. For convenience, we may define $\mathcal{I}(M) = \mathcal{I}$ and use it to retrieve the independent sets from a matroid $M = (E, \mathcal{I})$.
Convention (Set Operation Convention in Matroid Theory)
In matroid theory, we will from time to time take an element $x$ from the ground set $E$, form the singleton set $\{x\}$, and perform set operations with some subset $S$ of $E$. For convenience, we may write:

$$S + x = S \cup \{x\}, \qquad S - x = S \setminus \{x\}.$$

So the independent set exchange property above may also be written as $\forall I, J \in \mathcal{I} : |I| < |J| \Rightarrow \exists x \in J \setminus I : I + x \in \mathcal{I}$.
The axioms above are called the independent set axioms of matroids, and we will see there are many ways to define a matroid axiomatically and equivalently. Here, we define hereditary systems prior to matroids, since we want to reuse them in other axioms of matroids later.
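To make the axioms concrete, here is a minimal brute-force sketch (our own illustration, not part of the original text) that checks whether a family of subsets satisfies the independent set axioms. It is only feasible for tiny ground sets, since it enumerates subsets exhaustively; the names `is_matroid`, `E`, and `I` are ours.

```python
# A minimal sketch: brute-force check of the independent set axioms.
from itertools import combinations

def is_matroid(E, I):
    """E: ground set; I: family of independent sets (iterable of frozensets)."""
    I = {frozenset(s) for s in I}
    if frozenset() not in I:                      # the empty set must be independent
        return False
    for S in I:                                   # hereditary property
        if any(frozenset(T) not in I
               for k in range(len(S))
               for T in combinations(S, k)):
            return False
    for A in I:                                   # independent set exchange property
        for B in I:
            if len(A) < len(B) and not any(A | {x} in I for x in B - A):
                return False
    return True

# The uniform matroid U_{2,3}: independent sets are all subsets of size <= 2.
E = {1, 2, 3}
I = [frozenset(c) for k in range(3) for c in combinations(E, k)]
print(is_matroid(E, I))  # True
```

The example verifies the uniform matroid $U_{2,3}$, whose independent sets are all subsets of size at most two.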
In the case of linearly independent sets, it's okay to let the empty set $\emptyset$ be linearly independent. However, we will see that requiring $\emptyset \in \mathcal{I}$ is indispensable if we want the hereditary system to exhibit some natural behaviors, which makes our life easier. In combinatorics, we are discussing finite objects most of the time, and that's why we require the ground set $E$ to be a finite set.
The linearly independent sets that motivated us obviously define a matroid:
Example (Vectorial Matroids)
Let $E$ be any finite set of vectors from a vector space $V$, and let $\mathcal{I}$ be the set of all linearly independent subsets of $E$. Clearly $(E, \mathcal{I})$ fulfills the independent set axioms of matroids, and is called a vectorial matroid of $V$.
For example, by letting:
we define a vectorial matroid in a real vector space.
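As a hedged sketch of this example (the concrete vectors below are our own placeholders, not the ones from the text), the independence oracle of a vectorial matroid can be implemented by comparing a matrix rank against the number of vectors:

```python
# A sketch: independence oracle for a vectorial matroid via matrix rank.
import numpy as np

def is_independent(vectors):
    """A set of vectors is linearly independent iff the rank of the matrix
    they form equals the number of vectors."""
    if not vectors:
        return True                      # the empty set is independent
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

S = [(1, 0, 0), (0, 1, 0), (1, 1, 0)]    # placeholder vectors in R^3
print(is_independent(S[:2]))             # True: e1, e2
print(is_independent(S))                 # False: third = first + second
```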
However, a matroid over a ground set of vectors with independent sets being linearly independent subsets need not be a vectorial matroid:
Example (Matroid over Vectors but not Vectorial)
In the previous example, one of the linearly independent subsets is dispensable:
And the result is still a matroid, distinct from the vectorial one. To prove this, it suffices to show the independent set exchange property in it:
- For , we can see
.
- For , we can see
.
For comparison, we also spell out the independent set exchange property of the original vectorial matroid, so that we can see what is happening here:
- For , we can see
.
- For , we can see
.
- For , we can see
.
As linearly independent sets are the initial motivation for matroids, playing with them is not very illuminating. So let's see an object from graph theory that also turns out to exhibit abstract linear independence:
Example (Cycle Matroids)
Let $G = (V, E)$ be a graph, let the ground set be the edge set $E$, and let $\mathcal{I}$ be the set of edge sets such that for any $F \in \mathcal{I}$, the spanning subgraph $(V, F)$ is a spanning forest of $G$. It can be proved that $(E, \mathcal{I})$ forms a matroid, which is called the cycle matroid of $G$ and denoted $M(G)$.
Proof. We need to show the hereditary property and the independent set exchange property of $(E, \mathcal{I})$.
For any $F' \subseteq F \in \mathcal{I}$, it won't be hard to see that $(V, F')$ is obtained by deleting some edges from $(V, F)$, and merely deleting edges will not introduce a cycle. So the spanning subgraph $(V, F')$ is also a spanning forest of $G$, and the hereditary property holds.
To prove the independent set exchange property, we need to consider the connected components of the spanning forests. That is, if we add the edges of $F$ to $(V, \emptyset)$ one by one, every edge connects two distinct components when added. So for $F_1, F_2 \in \mathcal{I}$ with $|F_1| < |F_2|$, it won't be hard to see that the spanning forest $(V, F_1)$ contains more connected components than $(V, F_2)$. By the pigeonhole principle, there must be one component of $(V, F_2)$ containing vertices from two or more components of $(V, F_1)$. Categorize the vertices of this component by which component of $(V, F_1)$ they belong to, and we must find an edge $e = uv \in F_2$ such that $u \in V_1$ and $v \in V_2$ for two distinct components $V_1, V_2$ of $(V, F_1)$. As adding $e$ to $F_1$ connects $V_1$ and $V_2$ and will not introduce a cycle, there's $F_1 + e \in \mathcal{I}$, and therefore the independent set exchange property also holds.
In this way, $(E, \mathcal{I})$ has a finite ground set, and both the hereditary property and the independent set exchange property hold in $\mathcal{I}$; it is thus a matroid.
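For illustration, here is a minimal sketch of the cycle matroid's independence oracle, assuming the usual union-find technique (the function names are ours): an edge set is independent exactly when it is acyclic.

```python
# A sketch: the independence oracle of a cycle matroid M(G) via union-find.
def is_forest(num_vertices, edges):
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # this edge would close a cycle
        parent[ru] = rv
    return True

# Triangle on vertices 0,1,2: any two edges are independent, all three are not.
print(is_forest(3, [(0, 1), (1, 2)]))          # True
print(is_forest(3, [(0, 1), (1, 2), (2, 0)]))  # False
```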
So abstract linear independence occurs in a seemingly completely unrelated field, graph theory. Interesting as this is, the purpose of defining matroids is still fuzzy: why would we want to unify linearly independent sets and spanning forests? To answer this question, we will have to reveal more properties of matroids.
Bases of Matroids
Again, let's do a quick review of bases in linear algebra. For a finite set of vectors $S$, we may use it to span a vector space $\operatorname{span}(S)$ by taking linear combinations of vectors in $S$. However, by the theory of linear algebra, some vectors in $S$ might be dispensable: for a vector $v \in S$, if $v$ is expressible by $S - v$, then $v$ is dispensable and $S - v$ spans the same vector space. In this way, the set $S$ might be reduced to a basis $B$, a subset of linearly independent vectors, such that $\operatorname{span}(B) = \operatorname{span}(S)$.
In the example above, the way of eliminating linearly dependent vectors in $S$ is not unique, and in general we should expect more than one basis of $\operatorname{span}(S)$ to be obtainable from eliminations in $S$.
For a finite set of vectors $S$ and the vector space it spans, a linearly independent set $B \subseteq S$ is a basis iff it is maximal, that is, no proper superset of it (within $S$) is linearly independent. To show the necessity: for a basis $B$ and a proper linearly independent superset $B' \supsetneq B$, there's an element $v \in B' \setminus B$ that is not expressible as a linear combination of vectors in $B$, but $v \in \operatorname{span}(B) = \operatorname{span}(S)$ is expected, which is a contradiction. To show the sufficiency: if $B$ is a maximal linearly independent subset of $S$ but does not span $\operatorname{span}(S)$, there must be a vector $v \in S$ that is not spanned by $B$, and thus $B + v$ is linearly independent and larger than $B$, which contradicts the maximality of $B$.
And in the case of matroids:
Definition (Bases of Matroids)
A basis $B$ of a matroid $M = (E, \mathcal{I})$ is a maximal independent set of $M$ under set inclusion. That is, $B \in \mathcal{I}$ and $\forall x \in E \setminus B : B + x \notin \mathcal{I}$.
We shall denote the set of bases of matroid $M$ as $\mathcal{B}(M)$.
So a basis of a matroid captures the maximality property of a basis in linear algebra. However, we will show that capturing this property suffices for a basis of a matroid to exhibit many behaviors similar to a basis in linear algebra. We are familiar with the conclusion that for an $n$-dimensional vector space spanned by $S$, every basis has $n$ vectors. The bases of a matroid also possess this property: all bases of a matroid have the same size.
Proof. Assume the contrary, and without loss of generality, let $|B_1| < |B_2|$ for bases $B_1, B_2$. By the independent set exchange property, we have $B_1 + x \in \mathcal{I}$ for some $x \in B_2 \setminus B_1$, while $B_1 + x \supsetneq B_1$, which contradicts the maximality of $B_1$.
In fact, a basis is not merely maximal but maximum: no independent set is larger than a basis.
Proof. If $|B| < |I|$ for a basis $B$ and an independent set $I$, then $B \neq I$. Since $B, I \in \mathcal{I}$, by the independent set exchange property we have $B + x \in \mathcal{I}$ for some $x \in I \setminus B$, while $B + x \supsetneq B$, which contradicts the maximality of $B$.
Simple as they are, if we put the proofs above alongside the proof that every basis of an $n$-dimensional vector space has $n$ vectors, we will soon find the proofs above are just abstracted versions of the one in linear algebra.
The bases in linear algebra also have the basis exchange property: for two distinct bases $B_1, B_2$ of the same vector space and any $x \in B_1 \setminus B_2$, there exists $y \in B_2 \setminus B_1$ such that $(B_1 - x) + y$ is also a basis. This property also holds for bases of matroids:
Lemma (Weak Basis Exchange Property)
For any bases $B_1, B_2 \in \mathcal{B}(M)$ and any $x \in B_1 \setminus B_2$, there exists $y \in B_2 \setminus B_1$ such that $(B_1 - x) + y \in \mathcal{B}(M)$.
Proof. Since $|B_1| = |B_2|$ and $B_1 \neq B_2$, the intersection $B_1 \cap B_2$ must be a proper subset of them, while both $B_1 \setminus B_2$ and $B_2 \setminus B_1$ are not empty. By the independent set exchange property for $|B_1 - x| < |B_2|$, we obtain a set $(B_1 - x) + y \in \mathcal{I}$ of size $|B_1|$ for some $y \in B_2 \setminus (B_1 - x)$, thus $(B_1 - x) + y \in \mathcal{B}(M)$. It won't be hard to find $y \neq x$ (as $x \notin B_2$) and so $y \in B_2 \setminus B_1$, and we are done with the proof.
The matroid version of the basis exchange property is called the weak basis exchange property, in comparison with the strong basis exchange property, in which the exchanged element $x$ is also required to complete the other basis: $\forall B_1, B_2 \in \mathcal{B}(M), \forall x \in B_1 \setminus B_2, \exists y \in B_2 \setminus B_1 : (B_2 - y) + x \in \mathcal{B}(M)$. However, the latter can't be proved using only the rudimentary properties of matroids we've seen (in this text) so far. We will revisit the proof of the strong basis exchange property once we have developed enough properties of matroids to prove it.
Matroids and Greedy Algorithms
Matroids are famous even among computer scientists due to their connection with greedy algorithms, especially the fact that it's possible to generalize the classical Kruskal's MST algorithm to matroids. Kruskal's MST algorithm finds the minimum spanning tree of a connected graph: with weights assigned to every edge of the graph, Kruskal's algorithm finds a spanning tree minimizing the total weight.
On the other hand, it won't be hard to see that the bases of a cycle matroid $M(G)$ are the spanning forests such that each component is a spanning tree of a connected component of the original graph $G$. When there's only one component, the bases are the spanning trees of the graph. And when we assign weights to the edges via a weight function $w : E \to \mathbb{R}$, the MST problem can also be viewed as finding a minimum-weight basis.
In this section, we would like to show that for a general matroid $M = (E, \mathcal{I})$ with weights assigned to the elements, a greedy algorithm suffices to solve the problem of finding $\arg\min_{B \in \mathcal{B}(M)} \sum_{x \in B} w(x)$.
For convenience, we would first like to show that the following process ends up finding a basis of a matroid:
Algorithm
The input is a matroid $M = (E, \mathcal{I})$; the output is a basis $B \in \mathcal{B}(M)$:
- Initialize the empty set $B = \emptyset$.
- Iterate $x \in E$:
  - If $B + x \in \mathcal{I}$, update $B \leftarrow B + x$.
- Halt with $B$ being a basis.
Proof. It won't be hard to see that $B$ stays independent all along the way, and if $B \notin \mathcal{B}(M)$, there must be a basis $B'$ such that $|B| < |B'|$. By the independent set exchange property, there must be $x \in B' \setminus B$ such that $B + x \in \mathcal{I}$. Let's rewind back to the moment when $x$ is visited, and let $B_0 \subseteq B$ be the set constructed so far in the algorithm. Since $x \notin B$, there must be $B_0 + x \notin \mathcal{I}$ so that $x$ is discarded. But $B_0 + x \subseteq B + x \in \mathcal{I}$ is independent by the hereditary property, which is a contradiction.
Please notice the order of iterating $E$ is arbitrary, and different orders of iteration might end up with different bases constructed.
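Here is a short sketch of the basis-construction process above, written against an abstract independence oracle; the helper names and the uniform-matroid oracle used in the demonstration are illustrative assumptions of ours:

```python
# A sketch: grow a basis by scanning the ground set once.
def find_basis(ground_set, independent):
    """Keep every element that preserves independence; the result is a basis."""
    B = set()
    for x in ground_set:
        if independent(B | {x}):
            B.add(x)
    return B

# Example: the uniform matroid U_{2,4} (independent = size at most 2).
basis = find_basis(range(4), lambda S: len(S) <= 2)
print(basis)  # a 2-element subset, e.g. {0, 1}
```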
And the algorithm for finding a basis $B \in \mathcal{B}(M)$ with minimum weight is shown below:
Algorithm (Greedy Algorithm for Minimum Weighted Basis of Matroid)
The input is a matroid $M = (E, \mathcal{I})$ and a weight function $w : E \to \mathbb{R}$; the output is a minimum weighted basis $B \in \mathcal{B}(M)$:
- Initialize the empty set $B = \emptyset$, and sort the elements of $E$ into a list $x_1, x_2, \dots, x_{|E|}$ such that $w(x_1) \le w(x_2) \le \dots \le w(x_{|E|})$.
- Iterate $x_i$ in this order:
  - If $B + x_i \in \mathcal{I}$, update $B \leftarrow B + x_i$.
- Halt with $B$ being a minimum weighted basis.
Proof. By the previous algorithm we know that when the algorithm halts, the result $B \in \mathcal{B}(M)$; the only thing that remains is to show $B$ minimizes $w$.
For convenience, let's sort the elements of $B$ by the order of being added to $B$, so $B = \{b_1, b_2, \dots, b_r\}$ and $b_j$ is added after $b_i$ if $i < j$. Assume the contrary that $B$ does not minimize $w$, and let $B^* = \{b_1^*, b_2^*, \dots, b_r^*\}$, sorted by non-decreasing weight, be a basis minimizing $w$ instead. So we have $w(B^*) < w(B)$ strictly. Let $b_k$ be the first element such that $w(b_k^*) < w(b_k)$. Such an element must exist, since otherwise $w(B) \le w(B^*)$. And we would like to show there's a way to "hybridize" $B$ and $B^*$ into another basis $B'$:
That is, first we initialize $B'$ with the elements $b_1, \dots, b_{k-1}$ from $B$, then we extend $B'$ with elements from $B^*$ (possible by the independent set exchange property) until $|B'| = r$. For convenience, for the $k$-th element to the $r$-th element of $B'$, we sort them by their order in $B^*$, so that $B' = \{b_1, \dots, b_{k-1}, b'_k, \dots, b'_r\}$ with each $b'_i$ taken from $B^*$.
Since $b'_k$ to $b'_r$ in $B'$ are just some elements taken from $B^*$ and sorted by their weights afterward, they can positionally be at most $b^*_k$ to $b^*_r$, and thus $w(b'_i) \le w(b^*_i)$ for $k \le i \le r$.
The key to the proof is the position $k$, at which we have $w(b'_k) \le w(b^*_k) < w(b_k)$. Therefore $b'_k$ is iterated before $b_k$ in the greedy algorithm, and $b'_k \notin B$ (any $b_j$ with $j \ge k$ has $w(b_j) \ge w(b_k) > w(b'_k)$), so $b'_k$ is discarded: the set $B_0 + b'_k$ inspected at that moment, with $B_0 \subseteq \{b_1, \dots, b_{k-1}\}$, is dependent. But $B_0 + b'_k \subseteq \{b_1, \dots, b_{k-1}\} + b'_k \subseteq B'$ is independent, since $B'$ is a basis, which is a contradiction.
Those who are familiar with the non-matroid proof of Kruskal's MST algorithm will quickly notice how similar these two proofs are, so it's no wonder that the algorithm above is considered a generalization of Kruskal's MST algorithm to matroids.
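To make the correspondence tangible, here is a hedged sketch of the greedy algorithm above; when fed the cycle matroid oracle from earlier, it behaves exactly like Kruskal's algorithm. The graph and weights below are our own toy data.

```python
# A sketch: minimum weighted basis of a matroid via the greedy algorithm.
def min_weight_basis(ground_set, independent, w):
    B = set()
    for x in sorted(ground_set, key=w):   # scan by non-decreasing weight
        if independent(B | {x}):
            B.add(x)
    return B

# Kruskal on a 4-cycle with a chord, using the union-find forest oracle
# from the cycle matroid example (reproduced here for self-containment).
def is_forest(n, edges):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

edges = {(0, 1): 1, (1, 2): 2, (2, 3): 3, (3, 0): 4, (0, 2): 5}
mst = min_weight_basis(edges, lambda S: is_forest(4, list(S)), edges.get)
print(mst)  # {(0, 1), (1, 2), (2, 3)}: total weight 6
```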
For every matroid, the greedy algorithm above solves the minimum weighted basis problem. However, it's a common misconception to think that every greedy algorithm necessarily has an underlying matroid. Prim's MST algorithm, often presented as a twin of Kruskal's MST algorithm, has no underlying matroid in this sense.
Base System Axioms of Matroids
For every matroid $M = (E, \mathcal{I})$, we can find the bases, a set of maximal and equal-sized independent sets, via $\mathcal{B}(M)$. However, is it possible to specify a set $\mathcal{B}$ of equal-sized subsets of a ground set $E$ and use it as the set of bases, defining a matroid?
From the previous sections, we know the bases of a matroid have the weak basis exchange property, so if a set $\mathcal{B}$ of equal-sized subsets would like to define a matroid, it must fulfill this property in the first place. Thinking of extremality, it won't be hard to find that the tuple $(E, \{\emptyset\})$ defines a matroid minimally, and its set of bases is $\{\emptyset\}$, so the set of bases can never be empty. But is possessing these properties sufficient? Let's give it a try:
Definition (Base Systems)
A base system $(E, \mathcal{B})$ on $E$ is a combinatorial object, where $E$ is a finite set called the ground set, and $\mathcal{B} \subseteq 2^E$ is the set of bases such that:
- $\mathcal{B}$ is non-empty.
- $\forall B_1, B_2 \in \mathcal{B} : |B_1| = |B_2|$.
- (Weak Basis Exchange Property) $\forall B_1, B_2 \in \mathcal{B}, \forall x \in B_1 \setminus B_2, \exists y \in B_2 \setminus B_1 : (B_1 - x) + y \in \mathcal{B}$.
Intuitively speaking, in a matroid, every independent set must be a subset of a basis. This inspires us to define:
Definition (Independent Sets Defined by a Set of Subsets)
For a set of subsets $\mathcal{S} \subseteq 2^E$, let $\mathcal{I}(\mathcal{S}) = \{I \subseteq E : \exists S \in \mathcal{S}, I \subseteq S\}$.
Lemma
For any non-empty set $\mathcal{S} \subseteq 2^E$, $\mathcal{I}(\mathcal{S})$ defines the independent sets of a hereditary system on $E$.
Proof. Since $\mathcal{S}$ is not empty, there must be an element $S \in \mathcal{S}$, and $\emptyset \subseteq S$ implies $\emptyset \in \mathcal{I}(\mathcal{S})$. And for every $I' \subseteq I \in \mathcal{I}(\mathcal{S})$, by construction there's $S \in \mathcal{S}$ with $I \subseteq S$, so $I' \subseteq S$ and $I' \in \mathcal{I}(\mathcal{S})$. In this way, $\mathcal{I}(\mathcal{S})$ defines the independent sets of a hereditary system on $E$ when $\mathcal{S}$ is non-empty.
So we may retrieve the independent sets from a base system $(E, \mathcal{B})$ through $\mathcal{I}(\mathcal{B})$.
The only thing left to check is whether the hereditary system also possesses the independent set exchange property, and how this is connected to the way we defined base systems above:
Theorem (Base System Axioms of Matroids)
For any base system $(E, \mathcal{B})$ on ground set $E$, $(E, \mathcal{I}(\mathcal{B}))$ is a matroid on $E$.
Proof. We just need to show the independent set exchange property in $\mathcal{I}(\mathcal{B})$.
For any $I, J \in \mathcal{I}(\mathcal{B})$ such that $|I| < |J|$, by construction there are $B_I, B_J \in \mathcal{B}$ such that $I \subseteq B_I$ and $J \subseteq B_J$.
If $B_I = B_J$, then we are done with any $x \in J \setminus I$, since $I + x \subseteq B_I$.
If $B_I \neq B_J$, we assert it's the case that $B_J \subseteq J \cup B_I$ and $B_I \subseteq I \cup B_J$; otherwise there's a way to fix them into the case we asserted. If there's some $y \in B_J \setminus (J \cup B_I)$, then $y \in B_J \setminus B_I$, and by the weak basis exchange property there's $z \in B_I \setminus B_J$, so we may exchange it with $y$ and use $(B_J - y) + z$ as the new $B_J$ containing $J$. Note that since $y \notin B_I$ is replaced with $z \in B_I$ now, such a process can't last forever, and there's a point at which $B_J \subseteq J \cup B_I$; the symmetric fixing applies to $B_I$ without undoing this progress, since the element it brings into $B_I$ comes from $B_J$.
In this way, we may pair the elements of $B_I$ and $B_J$ by where they live and count. (Consider: is it ever possible for all elements of $J \setminus I$ to avoid $B_I$, given that $|I| < |J|$? Have a try!)
We argue that $(J \setminus I) \cap B_I \neq \emptyset$: if $J \setminus I$ and $B_I$ would like to be disjoint as subsets of $E$, then from $B_I \subseteq I \cup B_J$ we get $B_I \subseteq I \cup (B_J \setminus J)$, so
$$|B_I| \le |I| + |B_J \setminus J| = |I| + |B_J| - |J| < |B_J|,$$
but $|B_I| = |B_J|$ is expected. Therefore we have some $x \in (J \setminus I) \cap B_I$, and $I + x \subseteq B_I$ gives $I + x \in \mathcal{I}(\mathcal{B})$.
In this way, to define a matroid, it suffices to come up with a base system $(E, \mathcal{B})$. That is, it suffices to say there's a matroid $M$ such that $\mathcal{B}(M) = \mathcal{B}$, in order to define the matroid $M = (E, \mathcal{I}(\mathcal{B}))$.
The theorem above also shows us an alternative way to define a matroid equivalently. In fact, there are other ways of defining matroids. Such a phenomenon, of defining matroids equivalently in many seemingly unrelated ways (that are later shown to be tightly connected), is sometimes referred to as the cryptomorphism of matroids by mathematicians. The terminology of cryptomorphism is also used to describe different but equivalent sets of axioms defining the same mathematical object.
Rank Function
We are familiar with the rank of a matrix, that is, the maximum number of linearly independent row or column vectors in a matrix. It won't be hard to see that the order of rows and columns does not matter, and deduplication does not affect the rank, so we may assume the rank to be defined on a set of vectors, by evaluating the maximum number of linearly independent vectors in the set.
Analogously, for a given subset of the ground set of a matroid, we would also want to ask for the maximum number of "independent" elements; this is equivalent to asking for the maximum size of an independent subset of the given set. Note that asking for the rank of a matrix is equivalent to asking for the maximum size of an independent subset in a vectorial matroid. To fulfill this purpose, we first define the restriction of a matroid to a set:
Definition (Restrictions of Matroids)
For a matroid $M = (E, \mathcal{I})$ and $S \subseteq E$, the restriction of $M$ to $S$ is the matroid $M|_S = (S, \mathcal{I}|_S)$, where $\mathcal{I}|_S = \{I \in \mathcal{I} : I \subseteq S\}$.
Proof. To prove $M|_S$ is a matroid, it suffices to show that the hereditary property and the independent set exchange property are preserved in $\mathcal{I}|_S$.
It won't be hard to find $\emptyset \in \mathcal{I}|_S$, and for $I' \subseteq I \in \mathcal{I}|_S$ there's $I' \in \mathcal{I}$ and $I' \subseteq S$, so the hereditary property is preserved in $\mathcal{I}|_S$.
To show the independent set exchange property, for any $I, J \in \mathcal{I}|_S$ with $|I| < |J|$, we have some $x \in J \setminus I$ with $I + x \in \mathcal{I}$ in the first place. And since both $I, J$ are subsets of $S$, the element $x \in J \setminus I$ taken is also an element of $S$, thus $I + x$ is a subset of $S$, and the independent set exchange property is preserved in $\mathcal{I}|_S$.
Obviously the maximum size of an independent subset of a given set is the size of a basis of the restriction of the matroid to that set. And we may define:
Definition (Rank Function of a Matroid)
For a matroid $M = (E, \mathcal{I})$, we define the rank function $r_M : 2^E \to \mathbb{Z}_{\ge 0}$ of matroid $M$, so that for any input subset $S \subseteq E$, $r_M(S)$ outputs the size of the bases of $M|_S$.
When the context of the matroid is clear, we may write $r(S)$ directly for an input subset $S \subseteq E$. When under the context of multiple matroids numbered as $M_1, M_2, \dots$, the corresponding rank functions may also be numbered as $r_1, r_2, \dots$.
By doing so, given that the size of the bases of a matroid is unique, we obtain the well-definedness of the rank function of a matroid for free. Meanwhile, such a definition is compatible with the rank of a matrix, when viewing it as a vectorial matroid.
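Since a basis of the restriction can be grown greedily (the basis-construction algorithm works just as well inside $M|_S$), the rank function admits the following minimal sketch; the uniform-matroid oracle in the demonstration is an assumption of ours:

```python
# A sketch: rank of a subset = size of a greedily grown maximal
# independent subset, i.e. a basis of the restriction.
def rank(subset, independent):
    I = set()
    for x in subset:
        if independent(I | {x}):
            I.add(x)
    return len(I)

# Uniform matroid U_{2,5}: every subset of size >= 2 has rank exactly 2.
oracle = lambda S: len(S) <= 2
print(rank({0, 1, 2, 3}, oracle))  # 2
print(rank({4}, oracle))           # 1
```

This is well-defined regardless of the scan order precisely because all bases of a restriction have the same size, as proved above.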
Absorption Rules
We can see metaphors play an important role in studying matroids. In a matrix, when extending it by a vector, its rank remains unchanged or increments by $1$, depending on whether the introduced row is independent of the others. And in a matroid, the case is similar: for any $S \subseteq E$ and $x \in E$, $r(S) \le r(S + x) \le r(S) + 1$.
Proof. For a basis $B$ of $M|_S$, it won't be hard to see $B \in \mathcal{I}|_{S+x}$, which implies $r(S + x) \ge |B| = r(S)$. Meanwhile, consider a basis $B'$ of $M|_{S+x}$, such that $|B'| = r(S + x)$:
If $x \notin B'$, then $B' \in \mathcal{I}|_S$ and $r(S) \ge |B'| = r(S + x)$.
If $x \in B'$, then $B' - x \in \mathcal{I}|_S$ and $r(S) \ge |B'| - 1 = r(S + x) - 1$.
No matter which case, $r(S + x) \le r(S) + 1$ is always true. By combining these two inequalities we have $r(S) \le r(S + x) \le r(S) + 1$, and we are done with the proof.
In a matrix $A$, assume augmenting $A$ with a vector $u$ does not increase its rank, and neither does augmenting with $v$; then augmenting $A$ with $u$ and $v$ simultaneously will not affect the rank either. When it comes to the case of a matroid $M$, we also have:
Theorem (Weak Absorption Rule)
For any $S \subseteq E$ and $x, y \in E$, if $r(S + x) = r(S + y) = r(S)$, then $r(S + x + y) = r(S)$.
Proof. Assume $r(S + x + y) > r(S)$, and consider a basis $B$ of $M|_S$ and a basis $B'$ of $M|_{S+x+y}$; there's $|B| < |B'|$, thus $B$ can be augmented by some $z \in B' \setminus B$ with $B + z \in \mathcal{I}$, by the independent set exchange property when viewed from $M|_{S+x+y}$. Note that $z \in \{x, y\}$, since otherwise $B + z$ is to be found in $\mathcal{I}|_S$, forcing $r(S) \ge |B| + 1$.
If $z = x$, then $B + x \in \mathcal{I}|_{S+x}$ and $r(S + x) \ge |B| + 1 > r(S)$, which is a contradiction; and if $z = y$, then $r(S + y) > r(S)$, which is also a contradiction by the similar argument.
In this way, it's impossible for $r(S + x + y) > r(S)$ to be true, and since $r(S + x + y) \ge r(S)$, it's only $r(S + x + y) = r(S)$.
Corollary (Strong Absorption Rule)
For any $S, T \subseteq E$, if $r(S + y) = r(S)$ for every $y \in T$, then $r(S \cup T) = r(S)$.
Proof. This is done by arguing $r(S \cup T') = r(S)$ for any $k$-combination $T' \subseteq T$ (that is, any set of $k$ distinct elements chosen from $T$), by mathematical induction on $k$ until $T' = T$.
When $k = 1$, it's the presumption; when $k = 2$, it's the weak absorption rule. And we would like to show the case of $k + 1$ to be true when the induction hypothesis for sizes up to $k$ has been true. To show it for a combination $T' + y$ with $|T'| = k$ and $y \in T \setminus T'$, pick any $t \in T'$; by the induction hypothesis for $T' - t$, $T'$, and $(T' - t) + y$ we have:
$$r(S \cup (T' - t)) = r(S \cup T') = r(S \cup (T' - t) + y) = r(S).$$
Note that these are in the form ready for applying the weak absorption rule. Consider augmenting $S \cup (T' - t)$ with $t$ and $y$: we have $r(S \cup T' + y) = r(S \cup (T' - t)) = r(S)$; now the induction hypothesis holds for $k + 1$, and we are done with the proof.
Submodularity
First, we would like to introduce the concept of a submodular function:
Definition (Submodular Functions)
For a finite set $E$ used as the ground set, a function $f : 2^E \to \mathbb{R}$ is said to be submodular when:
$$\forall S, T \subseteq E : f(S \cup T) + f(S \cap T) \le f(S) + f(T).$$
Then, we would like to show the rank function of a matroid is a submodular function:
Theorem (Submodularity of Rank Function)
For any $S, T \subseteq E$, $r(S \cup T) + r(S \cap T) \le r(S) + r(T)$.
Proof. First, we find a basis $B_{\cap}$ of $M|_{S \cap T}$, and then extend it to a basis $B_{\cup}$ of $M|_{S \cup T}$. Extending to $B_{\cup}$ may not use elements from $(S \cap T) \setminus B_{\cap}$, or it contradicts the maximality of $B_{\cap}$ in $M|_{S \cap T}$. So we may categorize the elements of $B_{\cup}$ by whether they are from $S \cap T$, $S \setminus T$, or $T \setminus S$.
When viewing from the matroid $M$, since $B_{\cup}$ is independent, both $B_{\cup} \cap S$ and $B_{\cup} \cap T$ are independent subsets of $S$ and $T$ respectively, to be found in $\mathcal{I}|_S$ and $\mathcal{I}|_T$. So we have:
$$r(S) + r(T) \ge |B_{\cup} \cap S| + |B_{\cup} \cap T|.$$
While
$$|B_{\cup} \cap S| + |B_{\cup} \cap T| = |B_{\cup}| + |B_{\cup} \cap S \cap T| = r(S \cup T) + r(S \cap T)$$
is clear by the inclusion-exclusion principle, since $B_{\cup} \cap (S \cup T) = B_{\cup}$ and $B_{\cup} \cap S \cap T = B_{\cap}$. So we are done with the proof.
The submodularity of the rank function is useful for proving properties of matroids, as we will see later in this text.
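As a quick sanity check (our own illustration, not from the text), the submodularity inequality can be verified numerically by brute force on a small matroid:

```python
# A sketch: numerically verify r(S ∪ T) + r(S ∩ T) <= r(S) + r(T) on U_{2,4}.
from itertools import combinations

E = range(4)
oracle = lambda S: len(S) <= 2          # uniform matroid U_{2,4}

def rank(subset):
    I = set()
    for x in subset:
        if oracle(I | {x}):
            I.add(x)
    return len(I)

subsets = [set(c) for k in range(5) for c in combinations(E, k)]
assert all(rank(S | T) + rank(S & T) <= rank(S) + rank(T)
           for S in subsets for T in subsets)
print("submodularity holds on U_{2,4}")
```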
Rank Function Axioms of Matroids
There's another cryptomorphism of matroids: if a set is equipped with a rank function, then we are able to extract a hereditary system with the independent set exchange property out of it, using it as the independent sets to define a matroid. First, we would like to equip a set with a rank function:
Definition (Rank Functions on Finite Sets)
A rank function on $E$ is a function of the form $r : 2^E \to \mathbb{Z}_{\ge 0}$, where $E$ is a finite set called the ground set, such that:
- $r(\emptyset) = 0$.
- (Monotonicity) $\forall S \subseteq E, \forall x \in E : r(S) \le r(S + x) \le r(S) + 1$; note that the first inequality implies $S \subseteq T \Rightarrow r(S) \le r(T)$.
- (Weak Absorption Rule) $\forall S \subseteq E, \forall x, y \in E : r(S + x) = r(S + y) = r(S) \Rightarrow r(S + x + y) = r(S)$.
A rank function on a finite set also has the strong absorption rule, which can be directly implied from the weak absorption rule:
Lemma (Strong Absorption Rule of Rank Functions)
For any $S, T \subseteq E$, if $r(S + y) = r(S)$ for every $y \in T$, then $r(S \cup T) = r(S)$.
Proof. Recall how we proved the strong absorption rule of the rank function of a matroid: we argue on $k$-combinations $T' \subseteq T$ fulfilling $r(S \cup T') = r(S)$, by mathematical induction on $k$. When $k = 1$ it's the presumption. When $k = 2$ it's the weak absorption rule of the rank function. When it has been true up to $k$, by arguing it has been true for $T'$ and $(T' - t) + y$ of size $k$, and for $T' - t$ of size $k - 1$, we apply the weak absorption rule, viewing it as the case of extending $S \cup (T' - t)$ with $t$ and $y$, to imply $r(S \cup T' + y) = r(S)$, which is the case of $k + 1$.
Intuitively speaking, in a matroid, every independent set $I$ fulfills $r(I) = |I|$, and this inspires us to define:
Definition
For a rank function $r$ on $E$, we may let $\mathcal{I}(r) = \{I \subseteq E : r(I) = |I|\}$.
And let's see whether $\mathcal{I}(r)$ defines a hereditary system on $E$ with the independent set exchange property.
We will begin with showing
the hereditary property:
Lemma
The set $\mathcal{I}(r)$ defines the independent sets of a hereditary system on $E$.
Proof. First, given that $r(\emptyset) = 0 = |\emptyset|$, it won't be hard to find $\emptyset \in \mathcal{I}(r)$; and since $r(S) \le |S|$ for any $S$ (building $S$ up from $\emptyset$ one element at a time, each addition increases $r$ by at most one), the membership test is well-posed.
Then, to show $\forall I \in \mathcal{I}(r), \forall I' \subseteq I : I' \in \mathcal{I}(r)$, first, for $I' \subseteq I$, there's a chain of elements to be removed from $I$, that is, $I = I_0 \supsetneq I_1 \supsetneq \dots \supsetneq I_m = I'$ with $|I_j| = |I_{j-1}| - 1$.
Our plan is to prove $r(I_j) \ge r(I_{j-1}) - 1$ at each step, so that $r(I') \ge r(I) - (|I| - |I'|) = |I'|$, and therefore $r(I') = |I'|$.
To fulfill our plan, we need to utilize the monotonicity, that $r(I_j + x) \le r(I_j) + 1$ for the removed element $x$ with $I_{j-1} = I_j + x$; in our case, it's $r(I_{j-1}) \le r(I_j) + 1$, so $r(I_j) \ge r(I_{j-1}) - 1$. By combining with $r(I') \le |I'|$, we are done with $r(I') = |I'|$, and therefore the proof.
So $r(\emptyset) = 0$ and the monotonicity property suffice to ensure the hereditary property of $\mathcal{I}(r)$. But to prove the independent set exchange property, we still need the absorption rules:
Theorem (Rank Function Axioms of Matroids)
For any rank function $r$ on ground set $E$, $(E, \mathcal{I}(r))$ is a matroid on $E$.
Proof. Given that $(E, \mathcal{I}(r))$ has already been shown to be a hereditary system on $E$, we just need to show the independent set exchange property inside.
First, pick any $I, J \in \mathcal{I}(r)$ such that $|I| < |J|$, and assume there's no element in $J \setminus I$ that can be used to augment $I$. That is, $\forall x \in J \setminus I : I + x \notin \mathcal{I}(r)$; by the monotonicity, $|I| = r(I) \le r(I + x) \le |I| + 1$, so it must only be $r(I + x) = r(I)$ if $I + x \notin \mathcal{I}(r)$ would like to be true. By the strong absorption rule, there's $r(I \cup J) = r(I) = |I|$. By monotonicity, there's $r(I \cup J) \ge r(J) = |J|$. So they combine into $|J| \le |I|$, which is a contradiction.
In this way, the independent set exchange property must be present in $\mathcal{I}(r)$, and therefore $(E, \mathcal{I}(r))$ defines a matroid.
The submodularity of a rank function on ground set $E$, that $r(S \cup T) + r(S \cap T) \le r(S) + r(T)$, follows immediately by viewing it from the underlying matroid $(E, \mathcal{I}(r))$.
On the other hand, it's also possible to substitute the weak absorption rule with submodularity in the axioms of a rank function. For if $r(S + x) = r(S + y) = r(S)$, then given that $(S + x) \cup (S + y) = S + x + y$ and $(S + x) \cap (S + y) = S$, we have $r(S + x + y) + r(S) \le r(S + x) + r(S + y) = 2 r(S)$, so $r(S + x + y) \le r(S)$, and by combining with monotonicity we have $r(S + x + y) = r(S)$.
Dependent Sets and Circuits
While the bases and rank function we've discussed are related to independent sets, it's also possible to study matroids from the dependent sets. The most important kind of dependent sets might be the circuits:
Definition (Circuits)
In a matroid $M = (E, \mathcal{I})$, a subset $C \subseteq E$ is said to be a circuit if it's a minimal dependent set under set inclusion. That is, $C \notin \mathcal{I}$ and $\forall C' \subsetneq C : C' \in \mathcal{I}$. It can also be reworded as $C \notin \mathcal{I}$ and $\forall x \in C : C - x \in \mathcal{I}$.
Similarly, we may denote the set of circuits of matroid $M$ as $\mathcal{C}(M)$.
The circuits arise from the cycle matroids:
Example (Circuits in Cycle Matroids)
In a cycle matroid $M(G)$, a dependent edge set $C$ is a circuit iff $C$ is (the edge set of) a cycle.
Proof. The sufficiency is obvious. For the necessity, if the subgraph induced by $C$ contains a cycle $C'$ but is not itself a cycle, then $C \setminus C'$ is not empty, and removing any edge $e \in C \setminus C'$ still yields a dependent set $C - e$ with the cycle inside, so $C$ is not minimal.
But the definition above is applicable to all matroids, not just cycle matroids. So what do circuits look like in vectorial matroids?
Example (Circuits in Vectorial Matroids)
In a vectorial matroid $(E, \mathcal{I})$ such that $E$ is a finite set of vectors, a linearly dependent set $C$ is a circuit iff any vector in $C$ can be represented by the remaining vectors and the representation is unique. That is:
$$\forall v \in C : v = \sum_{u \in C - v} c_u u,$$
and such a representation is unique.
Proof. For the sufficiency, recall what "uniqueness" means. If $C - v$ were linearly dependent for some $v \in C$, we could add a non-trivial vanishing combination of $C - v$ to the representation of $v$ and obtain another representation of $v$, contradicting uniqueness. And thus every $C - v$ is linearly independent, making $C$ a minimal dependent set, i.e., a circuit.
For the necessity, since $C$ is a circuit, every vector $v \in C$ can be removed so that $C - v$ is linearly independent. So if the vector $v$ can be represented by a linear combination of $C - v$, then since the only solution to
$$\sum_{u \in C - v} c_u u = 0$$
is $c_u = 0$ for all $u$, by the linear independence of $C - v$, the representation of $v$ by $C - v$ is unique. Meanwhile, if there were a vector $v \in C$ that can't be represented by a linear combination of $C - v$, then, since $C$ has to be linearly dependent, there has to be a vanishing combination of $C$ such that not all of the coefficients are zero, and the coefficient of $v$ in it must be zero (otherwise $v$ would be representable). However, this makes $C - v$ linearly dependent, which contradicts $C$ being a circuit.
Circuits draw special attention, since there's:
Lemma
Every dependent set contains a circuit. That is, $\forall D \notin \mathcal{I}, \exists C \in \mathcal{C}(M) : C \subseteq D$.
Proof. Assume the contrary that $D$ does not contain a circuit; then for any dependent subset $D' \subseteq D$, there must be $x \in D'$ with $D' - x \notin \mathcal{I}$, to avoid $D'$ being a circuit. In this way, given that $D$ itself is not a circuit (otherwise it trivially contains a circuit), starting with $D' = D$, we find an element $x$ such that $D' - x \notin \mathcal{I}$, and use it as the new $D'$ that is dependent. So we obtain a proper descending chain of dependent sets:
$$D = D_0 \supsetneq D_1 \supsetneq D_2 \supsetneq \cdots$$
Since $D$ is finite and every time one element is removed from the current set, we will inevitably reach a dependent set $D_k$ with $|D_k| = 1$. But the only proper subset of $D_k$ is $\emptyset$, which is independent. So $D_k \notin \mathcal{I}$ while every proper subset of $D_k$ is independent, making $D_k \subseteq D$ a circuit, which contradicts the assumption that $D$ does not contain a circuit.
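The proof above is effectively an algorithm: keep deleting elements while the set stays dependent, and a circuit remains. A minimal sketch under an assumed independence oracle:

```python
# A sketch: extract a circuit from a dependent set by repeated shrinking.
def find_circuit(dependent_set, independent):
    C = set(dependent_set)
    assert not independent(C), "input must be dependent"
    changed = True
    while changed:
        changed = False
        for x in list(C):
            if not independent(C - {x}):   # still dependent without x
                C.remove(x)
                changed = True
                break
    return C  # every single removal now yields independence: a circuit

# In U_{2,4} every 3-element set is a circuit.
print(find_circuit({0, 1, 2, 3}, lambda S: len(S) <= 2))  # a 3-element set
```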
Definition (Loops and Parallel Elements)
Conventionally, if a singleton set $\{x\}$ is a circuit, then $x$ is said to be a loop; and if a set of two elements $\{x, y\}$ is a circuit, then $x, y$ are said to be parallel elements. A matroid is said to be simple if it has no loops and no parallel elements.
Proof. It remains to show the necessity: if $x$ is a loop, then no set containing $x$ can be independent, since otherwise by the hereditary property $\{x\} \in \mathcal{I}$, which is a contradiction.
Proof. Any set containing both of a pair of parallel elements $x, y$ also contains the circuit $\{x, y\}$, and is therefore dependent.
It's good to take care of the following common misconception:
Warning (Comments on the Size of Circuits)
The definitions of circuits and bases might seem similar: one is a minimal dependent set and the other is a maximal independent set. Given that all bases have the same size, one might start to think all circuits have equal size, but this is wrong.
The simplest example might come from the vectorial matroids. For instance, in the vectorial matroid on $\{(1,0), (2,0), (0,1), (1,1)\} \subseteq \mathbb{R}^2$, it won't be hard to verify that both $\{(1,0), (2,0)\}$ and $\{(1,0), (0,1), (1,1)\}$ are circuits, but they have different sizes.
Weak Elimination Rule
Consider two cycles $C_1, C_2$ in a cycle matroid and their union $C_1 \cup C_2$: if they intersect at a path $P$, then after removing any edge $e$ of the path, $(C_1 \cup C_2) - e$ will still contain a cycle:
Interestingly, it turns out that such a property is general and applicable to any matroid. A small observation first: for any circuit $C$, $r(C) = |C| - 1$.
Proof. Just consider that $C - x \in \mathcal{I}$ for any $x \in C$ while $C \notin \mathcal{I}$, so $|C| - 1 \le r(C) \le |C| - 1$. And we are done with the proof immediately.
Theorem (Weak Elimination Rule)
For any two distinct circuits $C_1, C_2 \in \mathcal{C}(M)$ and any $x \in C_1 \cap C_2$, the set $(C_1 \cup C_2) - x$ is dependent, i.e., it contains a circuit.
Proof. Assume the contrary that $(C_1 \cup C_2) - x \in \mathcal{I}$.
On one hand, $C_1 \cup C_2$ must be dependent for containing circuits $C_1, C_2$, so $r(C_1 \cup C_2) \le |C_1 \cup C_2| - 1$; on the other hand, $(C_1 \cup C_2) - x$ is independent, so $r(C_1 \cup C_2) \ge |C_1 \cup C_2| - 1$. So we can only have $r(C_1 \cup C_2) = |C_1 \cup C_2| - 1$. And by the inclusion-exclusion principle, $|C_1 \cup C_2| = |C_1| + |C_2| - |C_1 \cap C_2|$.
On the other hand, by the submodularity of the rank function, we have:
$$r(C_1 \cup C_2) + r(C_1 \cap C_2) \le r(C_1) + r(C_2) = |C_1| + |C_2| - 2.$$
Since $C_1 \neq C_2$, their intersection $C_1 \cap C_2$ must be a proper subset of either $C_1$ or $C_2$ (or both), and thus independent, so $r(C_1 \cap C_2) = |C_1 \cap C_2|$. The inequality above will then become:
$$\left(|C_1| + |C_2| - |C_1 \cap C_2| - 1\right) + |C_1 \cap C_2| \le |C_1| + |C_2| - 2.$$
In this way, $-1 \le -2$, which is a contradiction, so there can only be $(C_1 \cup C_2) - x \notin \mathcal{I}$.
We name such a property of matroids the weak elimination rule. There's also a strong elimination rule, which states that for every pair of distinct circuits $C_1, C_2$, every $x \in C_1 \cap C_2$, and every $y \in C_1 \setminus C_2$, there's a circuit $C_3 \subseteq (C_1 \cup C_2) - x$ such that $y \in C_3$. Intuitively, those $y$ correspond to the edges marked purple in our cycle matroid example. We will prove this property later, as what we've learnt about matroids so far is insufficient for proving it.
Consider a spanning forest in a cycle matroid: if by adding one edge $e$ we introduce a cycle, then only one cycle can be introduced. Otherwise, the two introduced cycles must both contain $e$, since whichever avoids $e$ is to be found in the original forest. But then the edge set $(C_1 \cup C_2) - e$ contains another cycle, yet it can be found in the original spanning forest. By utilizing the weak elimination rule, such a phenomenon in cycle matroids can also be generalized to any matroid:
Lemma
For any $I \in \mathcal{I}$ and any $x \in E$, if $I + x \notin \mathcal{I}$, then $I + x$ contains a unique circuit.
Proof. Assume $I + x$ does not contain a unique circuit, that is, there are two distinct circuits $C_1, C_2 \subseteq I + x$. We argue that $C_1$ and $C_2$ intersect at $x$, since whichever does not contain $x$ is a subset of $I$ and thus independent. Therefore, by the weak elimination rule, $(C_1 \cup C_2) - x$ is dependent. However, by $(C_1 \cup C_2) - x \subseteq I$, it's expected to be independent, which is a contradiction.
The process of this proof is literally identical to what we've argued for the cycle matroid above. So it's no wonder graph theory also plays an important role in studying matroids.
Greedy Algorithm Axioms of Matroids
This section is not related to circuits directly; however, it's essential to the later proof showing that we may define matroids by circuits cryptomorphically.
It's said that there's not necessarily a matroid behind a greedy algorithm; however, there's something special about a greedy algorithm running over a hereditary system. If we go over the proof of correctness of the greedy algorithm for finding the minimum weighted basis, we will notice that the independent set exchange property plays the major role in ensuring correctness, regardless of what weight function is used as input. And the special thing is: if a greedy algorithm running in a hereditary system is correct regardless of the input weight function, then we may excavate the independent set exchange property from the hereditary system, showing it to have an underlying matroid.
The proof is based on the classical work of Rado and Gale. They first came up with the following greedy algorithm:
Algorithm (Greedy Algorithm for Maximum Weighted Independent Set)
The input is a hereditary system $(E, \mathcal{I})$ and a non-negative weight function $w : E \to \mathbb{R}_{\ge 0}$; the output is a maximum weighted independent set $I \in \mathcal{I}$:
- Initialize the empty set $I = \emptyset$, and sort the elements of $E$ into a list $x_1, x_2, \dots, x_{|E|}$ such that $w(x_1) \ge w(x_2) \ge \dots \ge w(x_{|E|})$.
- Iterate $x_i$ in this order:
  - If $I + x_i \in \mathcal{I}$, update $I \leftarrow I + x_i$.
- Halt with $I$ being a maximum weighted independent set.
And then they asserted that:
Assertion (Rado-Gale)
The Greedy Algorithm for Maximum Weighted Independent Set is correct for every non-negative weight function iff the input hereditary system is a matroid.
Let's begin with the sufficiency part. When the hereditary system is a matroid, it won't be hard to find that the algorithm above is almost identical to the greedy algorithm we've used for finding the minimum weighted basis, except for forcing the weight function to be non-negative, ordering the list in descending order, and not requiring the output to be a basis. However, if our previous greedy algorithm is correct, then this algorithm must also be correct:
Lemma
In a matroid with a non-negative weight function, some maximum weighted independent set is a basis.
Proof. Let a maximum weighted independent set be $I$. If $I$ is not a basis, then there's a basis $B$ with $|I| < |B|$, and by the independent set exchange property there's $x \in B \setminus I$ such that $I + x \in \mathcal{I}$, and $w(I + x) \ge w(I)$ given that $w$ is non-negative. Repeating this, a maximum weighted independent set extends into a basis of no smaller, hence maximum, weight.
Theorem (Rado-Gale)
If the hereditary system $(E, \mathcal{I})$ is a matroid, then the Greedy Algorithm for Maximum Weighted Independent Set is correct.
Proof. Let's do a little translation: define the weight $w'$ as $w'(x) = -w(x)$, and run the Greedy Algorithm for Minimum Weighted Basis with the matroid and weight function $w'$; we've proved the algorithm must correctly yield a basis minimizing $w'$.
Due to the way we wrote the Greedy Algorithm for Maximum Weighted Independent Set, it scans the elements in exactly the same order and must yield the same basis $B$, claiming it to be maximizing $w$. This is correct, since by the lemma above there must be a basis among the maximum weighted independent sets, while minimizing $w'$ over bases is equivalent to maximizing $w$ over bases.
So in the case of a matroid, by requiring the weight function to be non-negative, we have forced some maximum weighted independent set to be a basis, and the greedy algorithm will find such a basis with maximum weight.
Now let's focus on the necessity part. Please notice that a general hereditary system does not necessarily have the independent set exchange property. However, if such a property is absent, we can show the greedy algorithm will fail for a deliberately designed input weight function:
Theorem (Rado-Gale)
If the Greedy Algorithm for Maximum Weighted Independent Set is correct for every non-negative weight function, then the input hereditary system is a matroid.
Proof. Suppose not, and let there be $I, J \in \mathcal{I}$ such that $|I| < |J|$ while no $x \in J \setminus I$ satisfies $I + x \in \mathcal{I}$. Writing $p = |I|$, we tailor a weight function for them, so that:
$$w(x) = \begin{cases} p + 2 & x \in I, \\ p + 1 & x \in J \setminus I, \\ 0 & \text{otherwise.} \end{cases}$$
Now let's run the greedy algorithm with this $w$ as input. The algorithm will first encounter the elements of $I$, since they are of maximum weight $p + 2$. All of them will be added to the working set, since $I$ has been known to be independent, and so are its subsets.
Then the algorithm has to face the elements of $J \setminus I$; by the assumption, it will have to dismiss every one of them, ending up yielding some set $I'$ such that
$$w(I') = w(I) = p(p + 2) < (p + 1)^2 \le (p + 1)\,|J| \le w(J),$$
given that every element of $J$ weighs at least $p + 1$ and $|J| \ge p + 1$. Note that the algorithm yields a wrong answer, since $J$ is independent with strictly greater weight than $I'$, which contradicts the correctness presumption of the greedy algorithm.
In this way, there must be $x \in J \setminus I$ with $I + x \in \mathcal{I}$; the independent set exchange property is to be found in $(E, \mathcal{I})$, and the hereditary system is a matroid.
Corollary (Rado-Gale)
The Greedy Algorithm for Maximum Weighted Independent Set is correct for every non-negative weight function iff the input hereditary system is a matroid.
In this way, a hereditary system $(E, \mathcal{I})$, paired with the guarantee that the Greedy Algorithm for Maximum Weighted Independent Set yields the correct answer for whichever non-negative weight function, defines a matroid cryptomorphically. We shall call this the greedy algorithm axioms of matroids.
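To see the necessity direction in action, here is a small sketch (with our own toy hereditary system and weights, in the spirit of the tailored weight function above) where the exchange property fails and the greedy algorithm indeed returns a sub-optimal set:

```python
# A sketch: greedy fails on a hereditary system that is not a matroid.
def greedy_max(ground_set, independent, w):
    B = set()
    for x in sorted(ground_set, key=w, reverse=True):
        if independent(B | {x}):
            B.add(x)
    return B

# Hereditary system on {a, b, c} with maximal sets {a} and {b, c}.
# Exchange fails: |{a}| < |{b, c}| but neither b nor c extends {a}.
feasible = [set(), {"a"}, {"b"}, {"c"}, {"b", "c"}]
independent = lambda S: any(S <= F for F in feasible)

w = {"a": 3, "b": 2, "c": 2}
B = greedy_max({"a", "b", "c"}, independent, w.get)
print(B, sum(w[x] for x in B))  # {'a'}, weight 3 -- but {b, c} has weight 4
```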
Circuit System Axioms of Matroids
So let's see whether we can define a matroid by circuits cryptomorphically. Just like what we've done before, we begin by extracting properties of the circuits of a matroid:
Definition (Circuit System)
A circuit system $(E, \mathcal{C})$ on $E$ is a combinatorial object, where $E$ is a finite set called the ground set, and $\mathcal{C} \subseteq 2^E$ is the set of circuits such that:
- $\emptyset \notin \mathcal{C}$.
- (Minimality) $\forall C_1, C_2 \in \mathcal{C} : C_1 \subseteq C_2 \Rightarrow C_1 = C_2$.
- (Weak Elimination Rule) $\forall C_1, C_2 \in \mathcal{C}, C_1 \neq C_2, \forall x \in C_1 \cap C_2, \exists C_3 \in \mathcal{C} : C_3 \subseteq (C_1 \cup C_2) - x$.
And since the dependent sets are exactly those containing a circuit, it's natural to define:
Definition (Dependent Sets Defined by a Set of Subsets)
For a set of subsets $\mathcal{S} \subseteq 2^E$, let $\mathcal{D}(\mathcal{S}) = \{D \subseteq E : \exists S \in \mathcal{S}, S \subseteq D\}$.
So we retrieve the "dependent sets" $\mathcal{D}(\mathcal{C})$ defined by a circuit system, and therefore the "independent sets" $\mathcal{I}(\mathcal{C}) = 2^E \setminus \mathcal{D}(\mathcal{C})$. Therefore, for a subset $I \subseteq E$, it suffices that no circuit is a subset of $I$ to show $I$ is independent. So if a matroid is hopefully defined in such a way, it should be in the form of $(E, \mathcal{I}(\mathcal{C}))$. Let's begin with showing the hereditary property:
Lemma
For any circuit system $(E, \mathcal{C})$ on $E$, the set $\mathcal{I}(\mathcal{C})$ defines the independent sets of a hereditary system on $E$.
Proof. Since $\emptyset \notin \mathcal{C}$, and $\emptyset$ is a superset of no non-empty set, there's $\emptyset \in \mathcal{I}(\mathcal{C})$. For any $I' \subseteq I \in \mathcal{I}(\mathcal{C})$, assume the contrary that $I' \notin \mathcal{I}(\mathcal{C})$; then there's a circuit $C \in \mathcal{C}$ such that $C \subseteq I' \subseteq I$, which is a contradiction.
Then let's show some familiar properties in the circuit system. First, every circuit of the system is a minimal dependent set, i.e., $C - x \in \mathcal{I}(\mathcal{C})$ for every $C \in \mathcal{C}$ and $x \in C$:
Proof. If $C - x \notin \mathcal{I}(\mathcal{C})$, then there's a circuit $C' \in \mathcal{C}$ such that $C' \subseteq C - x \subsetneq C$; however, $C' \subsetneq C$ is a contradiction to the minimality of $\mathcal{C}$.
Lemma
For any independent set $I \in \mathcal{I}(\mathcal{C})$ and $x \in E$, if $I + x$ is dependent, then $I + x$ contains a unique circuit.
Proof. Assume $I + x$ contains two distinct circuits $C_1, C_2 \in \mathcal{C}$; these circuits must contain $x$, since otherwise they are subsets of $I$ and thus absent. By the weak elimination rule, there's a circuit $C_3 \subseteq (C_1 \cup C_2) - x \subseteq I$, which is a contradiction to the presumption $I \in \mathcal{I}(\mathcal{C})$.
Finally, we are ready to show a circuit system defines a matroid:
Lemma
For any independent set $I \in \mathcal{I}(\mathcal{C})$ and $x \in E$, if $I + x$ is dependent and contains the unique circuit $C$, then for any $y \in C$, $(I + x) - y$ is independent.
Proof. The circuit $C$ must contain $x$ if it won't like to be a subset of $I$. When $y = x$ is chosen, clearly $(I + x) - x = I$ is independent.
Assume $y$ is chosen from $C - x$, but $(I + x) - y$ is dependent; then it contains a circuit $C' \in \mathcal{C}$. And since $y \in C$ but $y \notin C'$, $C'$ is a circuit other than $C$. However, given that $C' \subseteq I + x$, the set $I + x$ contains two distinct circuits $C, C'$ now, which is a contradiction.
Theorem
For any circuit system $(E, \mathcal{C})$ on $E$, the Greedy Algorithm for Maximum Weighted Independent Set is correct for the hereditary system $(E, \mathcal{I}(\mathcal{C}))$ and whichever non-negative weight function $w : E \to \mathbb{R}_{\ge 0}$.
Proof. Let the output of the greedy algorithm be $I$, and let $J$ be a maximum weighted independent set maximizing $w$; among all such sets, choose $J$ to maximize $|I \cap J|$.
If $I = J$, then we are done. If $I \neq J$, then by considering how $I$ and $J$ overlap, we have:
- If $J \subsetneq I$, then $w(I) \ge w(J)$ by non-negativity, and $I$ is itself a maximum weighted independent set; however, it's a contradiction, since $J$ is then not maximizing $|I \cap J|$, due to $|I \cap I| = |I| > |J| = |I \cap J|$.
- If $I \subsetneq J$, then consider any $x \in J \setminus I$: it means the greedy algorithm discards $x$ when some $I' \subseteq I$ is constructed, so $I' + x$ is inspected and found dependent; however, it's expected that $I' + x \subseteq J$ is independent, which is a contradiction.
- Otherwise $I \not\subseteq J$ and $J \not\subseteq I$, so $I \cap J$ is a proper subset of both; the remaining proof will be discussing this case.
Let's sort the elements of $I$ and $J$ by the order in which the greedy algorithm scans them (the one with greater weight appears first), into lists $a_1, a_2, \dots$ and $b_1, b_2, \dots$ respectively, and let the two lists first diverge at position $k$, as is shown in the figure:
We would like to show $a_k$ is scanned before $b_k$: since position $k$ is the first divergent point, $b_k \notin I$, so $b_k$ is discarded by the greedy algorithm; if $b_k$ were scanned before $a_k$, the set constructed at that moment is contained in $\{a_1, \dots, a_{k-1}\} = \{b_1, \dots, b_{k-1}\}$, and its discarding of $b_k$ implies a subset of $\{b_1, \dots, b_k\} \subseteq J$ is dependent, which is a contradiction.
We would also like to show $J + a_k$ is dependent. Note $a_k \notin J$: the elements of $J$ scanned before $b_k$ are exactly $b_1, \dots, b_{k-1}$, which differ from $a_k$, while $a_k$ is scanned before $b_k$. So if $J + a_k$ were independent, then $w(J + a_k) \ge w(J)$ while $|I \cap (J + a_k)| > |I \cap J|$, and $J$ is not the right choice of a maximum weighted independent set, which is a contradiction.
So $J + a_k$ contains a circuit $C$, and $a_k \in C$ if $C$ won't like to be a subset of $J$. Meantime, $C \setminus I$ is not empty if $C$ won't like to be a subset of $I$. So we may always choose $y \in C \setminus I$, and there's $y \neq a_k$ and $y \in J$. By the previous lemma, we know $J' = (J + a_k) - y$ is independent. Again, we've "hybridized" $J$ into another independent set $J'$:
Since $J$ is a maximum weighted independent set, we have $w(J') \le w(J)$, that is, $w(y) \ge w(a_k)$. However, contradictions are still inevitable:
- If $w(y) > w(a_k)$, then the greedy algorithm scans $y$ before $a_k$, when the constructed set is contained in $\{a_1, \dots, a_{k-1}\} = \{b_1, \dots, b_{k-1}\}$. It discards $y$ (as $y \notin I$) because the inspected set is dependent; however, that set is a subset of $\{b_1, \dots, b_{k-1}\} + y \subseteq J$, which is a contradiction.
- If $w(y) = w(a_k)$, then $w(J') = w(J)$, while $|I \cap J'| = |I \cap J| + 1$, since we've replaced the element $y \notin I$ with $a_k \in I$. But this will imply $J$ is not maximizing $|I \cap J|$, which is a contradiction.
In this way, there's no way for $I \neq J$ to be true, so $w(I) = w(J)$ and the greedy algorithm yields the correct answer for whichever non-negative weight function.
Corollary (Circuit System Axioms of Matroids)
For any circuit system $(E, \mathcal{C})$ on ground set $E$, $(E, \mathcal{I}(\mathcal{C}))$ is a matroid on $E$.
In this way, it turns out that we may define a matroid $M$ on $E$ by simply saying $\mathcal{C}(M) = \mathcal{C}$ for a circuit system $(E, \mathcal{C})$ on $E$.
Span Function
We are familiar with the vector space spanned by a set of vectors. When it comes to the case of a vectorial matroid $(E, \mathcal{I})$, the most direct way of defining the span of $S \subseteq E$ is to consider those vectors representable by linear combinations of the vectors in $S$.
However, a matroid deliberately hides the underlying detail of "linear combination" from us, so we must rely only on the language and characterization of independent sets and dependent sets. However, $S + x$ being dependent does not necessarily imply that $x$ can be represented by a linear combination of $S$: as long as $S$ itself is dependent, any superset of $S$ will also be dependent. This would ridiculously lead us to conclude that every element of $E$ is "representable" by a "linear combination" of any dependent $S$.
But the previous section also provides us a solution: for $x$ that would like to be expressible by the remaining vectors, there's a circuit $C$ with $x \in C \subseteq S + x$ iff $x$ equals a linear combination of $C - x$ with non-zero coefficients (by eliminating those with zero coefficients).
In this way, the closest characterization of span may rely on the language of circuits:
Definition (Span Function of a Matroid)
For a matroid $M = (E, \mathcal{I})$, we define the span function of matroid $M$ as mapping the input set to those elements "representable" by it, that is:
$$\operatorname{span}(S) = S \cup \{x \in E : \exists C \in \mathcal{C}(M), x \in C \subseteq S + x\}.$$
The elements $x \in S$ are considered representable, at least by $x$ itself. However, this is incompatible with the language of circuits, since a set disallows duplicate elements and thus $\{x, x\} = \{x\}$ is generally not a circuit. In fact, when $S$ is independent, there's no way we can construct a circuit $C$ with $x \in C \subseteq S + x$ for $x \in S$. In this way, $S$ itself has to be unioned in.
When the context of the matroid is clear, we may write $\operatorname{span}(S)$ directly for the input $S \subseteq E$. When $x \in \operatorname{span}(S)$, we say $S$ spans $x$.
Does defining span in such a way eliminate the problem of dependent sets' "spanning" every element of $E$? Let's give it a try: in a matroid on $E$ with a loop $\ell$, while no other singleton set is a loop, we evaluate $\operatorname{span}(\{\ell\})$. For any $x \neq \ell$, it won't be hard to find that none of the sets $C$ with $x \in C \subseteq \{\ell\} + x$ is a circuit, for either being the non-loop $\{x\}$ or properly containing the circuit $\{\ell\}$. So $\{\ell\}$ does not span $x$, and $\operatorname{span}(\{\ell\}) = \{\ell\}$.
So there's no longer the problem of dependent sets' "spanning" every element by defining span so.
Let's see what the span function looks like in a cycle matroid:
Example (Span Function in Cycle Matroids)
In a cycle matroid $M(G)$, for $F \subseteq E$, an edge $uv \in \operatorname{span}(F)$ iff $uv \in F$ or $u, v$ are connected in the spanning subgraph $(V, F)$.
Proof. If $uv \in F$ then $uv \in \operatorname{span}(F)$ is trivial, so we may assume $uv \notin F$. And if $u, v$ are connected in $(V, F)$, there's a $u$-$v$ path $P \subseteq F$ such that $P + uv$ is a cycle, and thus a circuit; so $uv \in \operatorname{span}(F)$ in this case.
Conversely, if $uv \in \operatorname{span}(F) \setminus F$, then there's a subset $C \subseteq F + uv$ containing $uv$ that is a circuit, so there's a cycle with $uv$ inside, and removing $uv$ from this cycle yields a $u$-$v$ path in $F$, thus $u, v$ are connected in $(V, F)$.
The independent sets and dependent sets of a matroid can also be characterized by the span function: a set $S$ is independent iff no $x \in S$ lies in $\operatorname{span}(S - x)$.
Proof. For the sufficiency, if $S$ is independent yet there's $x \in S$ such that $x \in \operatorname{span}(S - x)$, then a subset of $S - x$ completes a circuit $C \subseteq S$ with $x$, but $C \subseteq S \in \mathcal{I}$, which is a contradiction.
For the necessity, we prove the contrapositive: if $S$ is dependent, then some $x \in S$ is spanned by $S - x$. If $S \notin \mathcal{I}$, then $S$ contains a circuit $C$, and since $C$ is not empty, any $x \in C$ satisfies $x \in C \subseteq (S - x) + x$ while $C - x \subseteq S - x$, so $x \in \operatorname{span}(S - x)$.
The lemma above can be intuitively interpreted in a cycle matroid: for an edge set $F$, the spanning subgraph $(V, F)$ is a spanning forest iff every $e \in F$ is a cut edge of $(V, F)$.
For any vector space, we know it suffices for a basis to span it. And this property also holds for general matroids: for any basis $B \in \mathcal{B}(M)$, $\operatorname{span}(B) = E$.
Proof. If $\operatorname{span}(B) \neq E$, consider any $x \in E \setminus \operatorname{span}(B)$; we claim that $B + x \in \mathcal{I}$. Otherwise $B + x$ contains a circuit $C$, where $x$ must be present if the circuit won't like to be a subset of $B$. By the presence of $x \in C \subseteq B + x$, we have $x \in \operatorname{span}(B)$, which is a contradiction. However, $B + x \in \mathcal{I}$ is still a contradiction to the maximality of $B$, so only $\operatorname{span}(B) = E$ is possible.
Span Function Defined by Rank Function
Let's see another, seemingly "careless", way of defining the span function. In a vectorial matroid, if we want to test whether a vector $v$ is expressible by linear combinations of $S$, we may put the vectors of $S$ into a matrix $A$ (assume they are column vectors) and evaluate $\operatorname{rank}(A)$, which equals $r(S)$. Then we augment $A$ with $v$, resulting in $A' = (A \mid v)$, evaluate $\operatorname{rank}(A')$, and see whether $\operatorname{rank}(A') = \operatorname{rank}(A)$. If so, then $v$ can be eliminated by Gaussian elimination, which requires representing $v$ as a linear combination of $S$.
And we would like to show that expressing the elements spanned by a set using the rank function is as effective as the one using circuits: $x \in \operatorname{span}(S)$ iff $r(S + x) = r(S)$.
Proof. For the sufficiency, suppose $r(S + x) = r(S)$ with $x \notin S$ (the case $x \in S$ is trivial), and consider any basis $B$ of $M|_S$. There must be $B + x \notin \mathcal{I}$, since otherwise $r(S + x) \ge |B| + 1 > r(S)$. So $B + x$ contains a circuit $C$, and $x \in C$ if $C$ won't like to be a subset of $B$. And $x \in C \subseteq B + x \subseteq S + x$ witnesses $x \in \operatorname{span}(S)$.
For the necessity, first if $x \in S$, then $S + x = S$ and $r(S + x) = r(S)$ trivially. So we may assume $x \in \operatorname{span}(S) \setminus S$. If so, then there's a circuit $C$ with $x \in C \subseteq S + x$. Note that $C - x$ is independent, since it's formed by removing $x$ from the circuit $C$. Note that $C - x$ is also independent in $M|_S$, thus we may extend $C - x$ to a basis $B$ of $M|_S$. Clearly $B + x \notin \mathcal{I}$, otherwise its subset $C$ would be independent; therefore $B$ is also a maximal independent subset of $S + x$, i.e., a basis of $M|_{S+x}$, and thus:
$$r(S + x) = |B| = r(S).$$
In this way, we are done with the proof.
So is there any intuitive way to interpret why it works by doing so? Well, evaluating $r(S)$ can be seen as reducing the matrix of elements of $S$ to a maximal independent set. If $r(S + x) = r(S)$, then the maximal independent set can represent $x$, and thus $x$ forms a circuit with elements from this maximal independent set, and thus $x \in \operatorname{span}(S)$.
Conversely, if $x \in \operatorname{span}(S)$ by forming a circuit with some elements of $S$, that is, $x$ is representable by some elements of $S$ with non-zero coefficients, then move these elements to the first few rows and reduce the matrix to its maximal independent set. The first few rows are independent and thus intact during elimination. Adding $x$ to the matrix will not increment the rank, since it is eliminated by the first few rows, and thus $r(S + x) = r(S)$. So the rank function has an intrinsic connection to the circuits.
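Here is a hedged sketch of the rank-based characterization, computing $\operatorname{span}(S)$ as $\{x \in E : r(S + x) = r(S)\}$ with a greedy rank oracle; the triangle cycle matroid below is our own example:

```python
# A sketch: span via the rank characterization x ∈ span(S) iff r(S+x) = r(S).
def rank(subset, independent):
    I = set()
    for x in subset:
        if independent(I | {x}):
            I.add(x)
    return len(I)

def span(S, E, independent):
    rS = rank(S, independent)
    return {x for x in E if rank(set(S) | {x}, independent) == rS}

# Cycle matroid of a triangle 0-1-2 (forest oracle reproduced inline).
def is_forest(n, edges):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

E = {(0, 1), (1, 2), (2, 0)}
# Two edges of the triangle already span the third.
print(span({(0, 1), (1, 2)}, E, lambda S: is_forest(3, list(S))))  # all of E
```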
Defining span using the rank function easily leads to:
Lemma (Incorporation Property)
For any $S \subseteq E$, $r(\operatorname{span}(S)) = r(S)$.
Proof. Since $S \subseteq \operatorname{span}(S)$ and $r(S + x) = r(S)$ for every $x \in \operatorname{span}(S)$, by the strong absorption rule we have $r(\operatorname{span}(S)) = r(S \cup \operatorname{span}(S)) = r(S)$.
Closure Operator Properties
Let's first introduce the definition of a closure operator:
Definition (Closure Operators)
A mapping $\sigma : 2^E \to 2^E$ is said to be a closure operator on $E$ if it fulfills:
- (Extensive) $\forall S \subseteq E : S \subseteq \sigma(S)$.
- (Increasing) $\forall S \subseteq T \subseteq E : \sigma(S) \subseteq \sigma(T)$.
- (Idempotent) $\forall S \subseteq E : \sigma(\sigma(S)) = \sigma(S)$.
And we would like to show:
Theorem
For a matroid $M = (E, \mathcal{I})$, the span function $\operatorname{span}$ defines a closure operator on $E$.
Proof. The extensive property is obvious, since $\operatorname{span}(S) \supseteq S$ by definition. The increasing property is true by considering that for $S \subseteq T$, any circuit $C$ with $x \in C \subseteq S + x$ also satisfies $x \in C \subseteq T + x$.
To prove the idempotence, since by the extensive property there has already been $\operatorname{span}(S) \subseteq \operatorname{span}(\operatorname{span}(S))$, it suffices to show $\operatorname{span}(\operatorname{span}(S)) \subseteq \operatorname{span}(S)$, which leads to mutual inclusion. This is easy when using the rank function definition of span: for $x \in \operatorname{span}(\operatorname{span}(S))$, there's $r(\operatorname{span}(S) + x) = r(\operatorname{span}(S))$, while $r(\operatorname{span}(S)) = r(S)$ by the incorporation property. And since $S + x \subseteq \operatorname{span}(S) + x$, we have
$$r(S) \le r(S + x) \le r(\operatorname{span}(S) + x) = r(S),$$
so $r(S + x) = r(S)$, and thus $x \in \operatorname{span}(S)$.
In this way, $\operatorname{span}$ is said to define a closure operator in a matroid. The following concepts are closely related to the closure operator aspect of matroids:
Definition (Spanning Sets, Closed Sets and Hyperplanes)
For a matroid $M$ on $E$ and a subset $S \subseteq E$: if $\operatorname{span}(S) = E$, then $S$ is said to be a spanning set of $M$. If $\operatorname{span}(S) = S$, then $S$ is said to be a closed set or flat; clearly a closed set is the spanning set of itself. The hyperplanes are the closed sets that are maximal proper subsets of $E$, that is, any proper closed superset of a hyperplane is the ground set $E$ itself.
For convenience, we may denote the set of spanning sets as $\mathcal{S}(M)$, and the set of hyperplanes as $\mathcal{H}(M)$.
When combining with the property $\operatorname{span}(B) = E$ for bases, one can easily see:
Lemma
A set is spanning iff it contains a basis.
Proof. The sufficiency is obvious by the increasing property. For the necessity, just consider $r(S) = r(\operatorname{span}(S)) = r(E)$: the set $S$ contains an independent set of size $r(E)$, which must be a basis of $M$.
Lemma
A set is a hyperplane iff it is a maximal non-spanning subset of $E$.
Proof. For the sufficiency, let such a maximal non-spanning subset of $E$ be $H$; first we prove $H$ to be closed. Assume it's not closed, then $\operatorname{span}(H) \supsetneq H$; however, $\operatorname{span}(\operatorname{span}(H)) = \operatorname{span}(H) \neq E$ implies $\operatorname{span}(H)$ is also non-spanning, which is a contradiction to the maximality of $H$. Now let $F$ be any proper closed superset of $H$; then by the maximality of $H$ it's spanning, so $F = \operatorname{span}(F) = E$. In this way, $H$ matches the definition of a hyperplane.
For the necessity, let such a hyperplane be $H$; since $H = \operatorname{span}(H) \neq E$ has already been non-spanning, we just need to prove its maximality. Assume it's not maximal, that some $H' \supsetneq H$ is also non-spanning; then by the extensive and increasing properties $H \subsetneq H' \subseteq \operatorname{span}(H')$. However, since $\operatorname{span}(H')$ is a closed proper superset of $H$, there must be $\operatorname{span}(H') = E$, and $\operatorname{span}(H') \neq E$ is a contradiction.
Corollary
A set is a hyperplane iff it is a maximal subset of $E$ containing no basis.
Proof. Being non-spanning is equivalent to containing no basis.
Moreover, the hyperplanes are exactly the closed sets of rank $r(E) - 1$. On one hand, a hyperplane $H$ has $r(H) = r(E) - 1$:
Proof. Fix any choice of $x \in E \setminus H$. By definition $\operatorname{span}(H + x)$ is a proper closed superset of $H$, hence $H + x$ is spanning and contains a basis $B$. Clearly $x \in B$ if $H$ won't like to contain the basis $B$. Note that $B - x \subseteq H$; by the hereditary property $B - x \in \mathcal{I}$, so $r(H) \ge |B| - 1 = r(E) - 1$. And by combining with $r(H) < r(E)$ ($H$ contains no basis), there can only be $r(H) = r(E) - 1$.
On the other hand, a closed set $H$ with $r(H) = r(E) - 1$ is a hyperplane:
Proof. Take any such closed set $H$ and inspect any $x \in E \setminus H$. Since $x \notin \operatorname{span}(H)$ while $H$ is closed, there must be $r(H + x) \neq r(H)$. And since the rank function is monotonic and increases by at most one, it must only be $r(H + x) = r(H) + 1 = r(E)$, and therefore any $S \supsetneq H$ contains some $H + x$, which implies $r(S) = r(E)$. Consider any such proper superset $S$ of $H$: we have $r(S) = r(E)$ and therefore $S$ is spanning. So $H$ is a maximal non-spanning subset of $E$, i.e., a hyperplane.
Steinitz Exchange Property and Closure Operator Axioms of Matroids
In linear algebra, the Steinitz exchange lemma states that if a set of vectors $S$ plus a vector $x$ spans a vector $y$, then either $y \in \operatorname{span}(S)$ or $x \in \operatorname{span}(S + y)$.
The proof is simple. When $y \in \operatorname{span}(S)$, the proposition is trivially true. When $y \notin \operatorname{span}(S)$, in any representation
$$y = c x + \sum_{u \in S} c_u u,$$
we must have $c \neq 0$, since otherwise $y \in \operatorname{span}(S)$. In this way, we must have
$$x = \frac{1}{c} \Big( y - \sum_{u \in S} c_u u \Big),$$
and thus $x \in \operatorname{span}(S + y)$.
This phenomenon can also be generalized to matroids:
Lemma (Steinitz Exchange Property for Matroids)
For any $S \subseteq E$ and $x, y \in E$: if $y \in \operatorname{span}(S + x)$, then $y \in \operatorname{span}(S)$ or $x \in \operatorname{span}(S + y)$.
Proof. When $y \in \operatorname{span}(S)$, the proposition is trivially true, so we assume $y \notin \operatorname{span}(S)$; in particular $y \notin S$ and $y \neq x$. The proposition is also trivially true when $x \in S + y$, so we should assume $x \notin S + y$. In this way, by definition there's a circuit $C$ such that $y \in C \subseteq (S + x) + y$. And there must be $x \in C$, since otherwise $C \subseteq S + y$ witnesses $y \in \operatorname{span}(S)$. Note that $x \in C \subseteq (S + y) + x$, and thus the circuit $C$ implies $x \in \operatorname{span}(S + y)$. By combining with the presumption, we are done with the proof.
It won't be hard to see the proof is just a rephrasing of the proof of the Steinitz exchange lemma in linear algebra.
So the span function of a matroid is a closure operator with the Steinitz exchange property. Conversely, we would like to show that a closure operator with the Steinitz exchange property suffices to define a matroid cryptomorphically. Recall that the independent sets of a matroid can also be retrieved by $\mathcal{I} = \{I \subseteq E : \forall x \in I, x \notin \operatorname{span}(I - x)\}$, so we would like to show a hereditary system may be defined in such a way first:
Definition
For a closure operator $\sigma$ on $E$, we may let $\mathcal{I}(\sigma) = \{I \subseteq E : \forall x \in I, x \notin \sigma(I - x)\}$.
Lemma
The set $\mathcal{I}(\sigma)$ defines the independent sets of a hereditary system on $E$.
Proof. First, for the empty set $\emptyset$, there's no element to check, so we have $\emptyset \in \mathcal{I}(\sigma)$ trivially.
Then, just like what we've done in showing $\mathcal{I}(r)$ defines independent sets before, we would like to show $\forall I \in \mathcal{I}(\sigma), \forall I' \subseteq I : I' \in \mathcal{I}(\sigma)$. Assume there's $I' \subseteq I$ such that $x \in \sigma(I' - x)$ for some $x \in I'$; then by the increasing property $\sigma(I' - x) \subseteq \sigma(I - x)$, and we have $x \in \sigma(I - x)$, which is a contradiction.
Then we would like to show that the Steinitz exchange property enables a closure operator to define a matroid. The role that the Steinitz exchange property plays in the proof is indispensable and must be emphasized:
Theorem
For a closure operator $\sigma$ on $E$, if it fulfills the Steinitz exchange property $\forall S \subseteq E, \forall x, y \in E : y \in \sigma(S + x) \setminus \sigma(S) \Rightarrow x \in \sigma(S + y)$, then $(E, \mathcal{I}(\sigma))$ is a matroid.
Proof. We have shown $(E, \mathcal{I}(\sigma))$ to be a hereditary system on $E$, so it remains to show $\mathcal{I}(\sigma)$ has the independent set exchange property, that $\forall I, J \in \mathcal{I}(\sigma) : |I| < |J| \Rightarrow \exists x \in J \setminus I : I + x \in \mathcal{I}(\sigma)$.
Recall that in a matroid, the independent set exchange property lets a larger independent set augment a smaller one; this will require some element of $J$ to escape $\sigma(I)$. So if the hereditary system would like to be a matroid, the case $J \subseteq \sigma(I)$ must never happen when $|I| < |J|$.
First, we would like to show that $I \in \mathcal{I}(\sigma)$ and $x \notin \sigma(I)$ imply $I + x \in \mathcal{I}(\sigma)$. Such a property is intuitive in a matroid: fix the precondition that $I$ is independent; if $I + x$ were dependent, then $I + x$ must contain a circuit with $x$ inside, and thus $x \in \operatorname{span}(I)$. By taking the contrapositive and adding the precondition, we have the claimed statement.
The proof is done by showing $y \notin \sigma((I + x) - y)$ for every $y \in I + x$. For $y = x$ this is the assumption $x \notin \sigma(I)$. For $y \in I$, assume the contrary that $y \in \sigma((I - y) + x)$; by $I \in \mathcal{I}(\sigma)$ we have $y \notin \sigma(I - y)$. However, by the Steinitz exchange property (with $S = I - y$) we then have $x \in \sigma((I - y) + y) = \sigma(I)$, but $x \notin \sigma(I)$ has been expected, which is a contradiction.
In this way, once there's an element $x \in J \setminus \sigma(I)$, it will lead to $I + x \in \mathcal{I}(\sigma)$ sufficiently. Then, it remains to show that $J \subseteq \sigma(I)$ leads to a contradiction, since it won't happen in a matroid. By the increasing and idempotent properties, we have $\sigma(J) \subseteq \sigma(\sigma(I)) = \sigma(I)$.
Interestingly, we are able to show that whenever $J \subseteq \sigma(I)$ with $I \in \mathcal{I}(\sigma)$ and $|I| < |J|$, there's a way to "hybridize" $I$ into another independent set $I'$, so that there's $J \subseteq \sigma(I')$ again.
First, note that $I \subseteq J$ is immediately contradictory: since $|I| < |J|$, some $u \in J \setminus I$ undoubtedly exists, and we'd have $u \in J \subseteq \sigma(I) \subseteq \sigma(J - u)$ by the increasing property, violating $J \in \mathcal{I}(\sigma)$. So fix any $z \in I \setminus J$, and consider two cases.
If every $u \in J \setminus I$ has $u \in \sigma(I - z)$, then together with $J \cap I \subseteq I - z \subseteq \sigma(I - z)$ (note $z \notin J$), we have $J \subseteq \sigma(I - z)$, and we may simply shrink $I$ to $I' = I - z$, which is independent by the hereditary property.
Otherwise, fix a choice of $u \in J \setminus I$ with $u \notin \sigma(I - z)$. By the claim above, $I' = (I - z) + u \in \mathcal{I}(\sigma)$, given that $I - z$ is independent and there has already been $u \notin \sigma(I - z)$. Finally, we would like to show $J \subseteq \sigma(I')$: by $u \in \sigma(I) \setminus \sigma(I - z)$ and the Steinitz exchange property, we have $z \in \sigma((I - z) + u) = \sigma(I')$; by the extensive property we have $I - z \subseteq \sigma(I')$, therefore $I \subseteq \sigma(I')$. And then, applying the increasing and idempotent properties, we have $\sigma(I) \subseteq \sigma(\sigma(I')) = \sigma(I')$, so $J \subseteq \sigma(I')$.
Note that each step of the "hybridizing" process either removes an element of $I \setminus J$ or replaces one with some $u \in J \setminus I$, while maintaining $J \subseteq \sigma(I')$, $I' \in \mathcal{I}(\sigma)$, and $|I'| \le |I| < |J|$, so the process persists until it reaches a point where $I' \subseteq J$. At this point there's $J \subseteq \sigma(I')$, and there's also $\sigma(I') \subseteq \sigma(J)$ by the increasing property, so $\sigma(I') = \sigma(J)$ by mutual inclusion. However, consider any $y \in J \setminus I'$ (which exists since $|I'| < |J|$): there's $y \in J \subseteq \sigma(I') \subseteq \sigma(J - y)$, which is a contradiction to $J \in \mathcal{I}(\sigma)$. Thus the case of $J \subseteq \sigma(I)$ is impossible.
In this way, when $|I| < |J|$ there's always some $x \in J \setminus \sigma(I)$, which sufficiently leads to $I + x \in \mathcal{I}(\sigma)$, and the independent set exchange property is to be found in $\mathcal{I}(\sigma)$.
Corollary (Closure Operator Axioms of Matroids)
For a closure operator $\sigma$ on $E$, it fulfills the Steinitz exchange property iff $(E, \mathcal{I}(\sigma))$ is a matroid.
So closure operators with the Steinitz exchange property are equivalent to matroids.
Strong Elimination Property
With the language of the span function, we may now prove the strong elimination property we mentioned when showing the weak elimination property of matroids.
Theorem (Strong Elimination Property)
For any distinct circuits $C_1, C_2 \in \mathcal{C}(M)$, any $x \in C_1 \cap C_2$, and any $y \in C_1 \setminus C_2$, there's a circuit $C_3$ such that $y \in C_3 \subseteq (C_1 \cup C_2) - x$.
Proof. If there's such a circuit $C_3$, then it must be evidence of $y \in \operatorname{span}((C_1 \cup C_2) - x - y)$, and conversely such a membership yields the desired circuit. For convenience, we may let $A = (C_1 \cup C_2) - x - y$, and our goal is to show $y \in \operatorname{span}(A)$.
First, we have $x \in \operatorname{span}(C_2 - x)$, with $C_2$ as the evidence. There is also $C_2 - x \subseteq A$, given that $y \notin C_2$, so by the increasing property $x \in \operatorname{span}(A)$. It won't be hard to find there has already been $A \subseteq \operatorname{span}(A)$, so now we have $(C_1 \cup C_2) - y = A + x \subseteq \operatorname{span}(A)$, and in particular $C_1 - y \subseteq \operatorname{span}(A)$.
The key is at the opposite side $y$, where we have $y \in \operatorname{span}(C_1 - y)$, with $C_1$ as the evidence. Then, by applying the increasing and idempotent properties, we have
$$y \in \operatorname{span}(C_1 - y) \subseteq \operatorname{span}(\operatorname{span}(A)) = \operatorname{span}(A),$$
and we are done with the proof.
Dual Matroids
By utilizing the properties of circuits in a matroid, we are ready to show the strong basis exchange property of a matroid:
Theorem (Strong Basis Exchange Property)
For any bases $B_1, B_2 \in \mathcal{B}(M)$ and any $x \in B_1 \setminus B_2$, there exists $y \in B_2 \setminus B_1$ such that $(B_2 - y) + x \in \mathcal{B}(M)$.
Proof. First, since $x \notin B_2$ and $B_2$ is a basis, $B_2 + x$ contains a unique circuit $C$. Such a circuit must contain $x$ if it won't like to be a subset of $B_2$. Meantime, such a circuit must contain elements from $B_2 \setminus B_1$ if it won't like to be a subset of $B_1$.
Fix the choice of any element $y \in C \cap (B_2 \setminus B_1)$; we would like to show $(B_2 - y) + x = (B_2 + x) - y$ is independent: otherwise $(B_2 + x) - y$ is dependent and contains a circuit $C'$. Again $x \in C'$ if it won't like to be a subset of $B_2$. Note that $y \in C$ and $y \notin C'$, which means $C$ and $C'$ are distinct circuits. And since both of them contain $x$, by the weak elimination rule $(C \cup C') - x$ is dependent; however $(C \cup C') - x \subseteq B_2$, which is a contradiction. Being independent with $|B_2|$ elements, $(B_2 - y) + x$ is a basis.
Generally speaking, a choice of $y \in B_2 \setminus B_1$ that enables $(B_2 - y) + x \in \mathcal{B}(M)$ does not necessarily guarantee $(B_1 - x) + y \in \mathcal{B}(M)$, and vice versa. However, by utilizing the properties of the span function, we are able to prove there does exist a choice of $y$ that enables both simultaneously:
Theorem (Even Stronger Basis Exchange Property)
For any bases $B_1, B_2 \in \mathcal{B}(M)$ and any $x \in B_1 \setminus B_2$, there exists $y \in B_2 \setminus B_1$ such that both $(B_1 - x) + y \in \mathcal{B}(M)$ and $(B_2 - y) + x \in \mathcal{B}(M)$.
Proof. Fix some $x \in B_1 \setminus B_2$; from the proof of the strong basis exchange property, we know $B_2 + x$ contains a unique circuit $C$ such that every $y \in C \cap (B_2 \setminus B_1)$ makes $(B_2 - y) + x$ a basis. If we are able to further show some such $y$ also makes $(B_1 - x) + y$ a basis, then we will be done with the proof.
Consider $(B_1 - x) \cup (C - x)$: we know $x \in \operatorname{span}(C - x)$, due to $x$'s completing the circuit $C$. Since $C - x \subseteq (B_1 - x) \cup (C - x)$, we have $x \in \operatorname{span}((B_1 - x) \cup (C - x))$.
Since $(B_1 - x) + x = B_1$ contains the basis $B_1$, it's spanning; therefore $\operatorname{span}((B_1 - x) \cup (C - x)) \supseteq \operatorname{span}(B_1) = E$, so $(B_1 - x) \cup (C - x)$ is also spanning, and contains a basis $B$.
Since $B \subseteq (B_1 - x) \cup (C - x)$ while $x \notin B$, it must contain elements from $C - x \subseteq B_2$. Meantime there's $|B_1 - x| < |B|$, so $B$ must offer some $y \in B \setminus (B_1 - x) \subseteq C - x$ to augment $B_1 - x$ by the independent set exchange property, into the basis $(B_1 - x) + y$, with $y \in C \cap (B_2 \setminus B_1)$, and we are done with the proof.
The even stronger version of the basis exchange property is not required by the following text, but is required by the later topic of the matroid intersection theorem.
The strong basis exchange property sheds light on some sort of duality in a matroid. That is, we may retrieve a dual base system from the base system of a matroid:
Definition (Dual Base System)
For a matroid $M$ on $E$, the dual base system $(E, \mathcal{B}^*(M))$ is a combinatorial object such that:
$$\mathcal{B}^*(M) = \{E \setminus B : B \in \mathcal{B}(M)\},$$
where the complement $E \setminus B$ is considered in the power set lattice of $E$.
And then we show it to be a base system, which defines the dual matroids:
Theorem
For a matroid $M$ on $E$, the dual base system $(E, \mathcal{B}^*(M))$ is a base system.
Proof. It won't be hard to find that the elements of $\mathcal{B}^*(M)$ are obtained by taking the map $B \mapsto E \setminus B$ over $\mathcal{B}(M)$. So if $\mathcal{B}(M)$ is not empty, then $\mathcal{B}^*(M)$ is not empty; if the elements of $\mathcal{B}(M)$ are of size $r(E)$, then the elements of $\mathcal{B}^*(M)$ are of size $|E| - r(E)$. And we still need to show the basis exchange property in it.
Consider distinct bases $B_1^*, B_2^* \in \mathcal{B}^*(M)$; they correspond to $B_1 = E \setminus B_1^*$ and $B_2 = E \setminus B_2^*$ in $\mathcal{B}(M)$, and we want to show $\forall x \in B_1^* \setminus B_2^*, \exists y \in B_2^* \setminus B_1^* : (B_1^* - x) + y \in \mathcal{B}^*(M)$. Since $B_1^* \setminus B_2^* = B_2 \setminus B_1$, and similarly $B_2^* \setminus B_1^* = B_1 \setminus B_2$, we need to show $\forall x \in B_2 \setminus B_1, \exists y \in B_1 \setminus B_2 : (B_1^* - x) + y \in \mathcal{B}^*(M)$, which is equivalent to showing $(B_1 - y) + x \in \mathcal{B}(M)$.
The basis $B_1$ must accept the element $x$ from $B_2 \setminus B_1$ and then choose an element $y \in B_1 \setminus B_2$ to discard, and the strong basis exchange property of $\mathcal{B}(M)$ enables it to do so. In this way, the basis exchange property is present in $\mathcal{B}^*(M)$.
Definition (Dual Matroids)
For a matroid
on , the dual matroid
of is a matroid on
such that
.
It won't be hard to find
, which is why
it's said to be dual.
Conventionally, we may add the
co- prefix for those objects
from the dual matroid, e.g.,
an independent set in
may be called a coindependent set
in , a basis in is called a
cobasis in , and so on.
For convenience, we may add asterisk
() to a matroid theoretic
symbol, in order to signify it's
exactly the same entity from the
dual matroid. For example, we may write
,
,
,
and so on.
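To make the complementation concrete, here is a minimal Python sketch (all names hypothetical) that produces the dual bases of a matroid given extensionally by its bases, and confirms that dualizing twice returns the original base system:

# Dual bases are exactly the complements of bases (per the definition).
def dual_bases(ground, bases):
    return {frozenset(ground - b) for b in bases}

# Example: U_{1,3} on {a, b, c}; its bases are the singletons.
E = frozenset({"a", "b", "c"})
B = {frozenset({"a"}), frozenset({"b"}), frozenset({"c"})}
B_star = dual_bases(E, B)          # the three 2-element subsets
assert dual_bases(E, B_star) == B  # (M*)* = M at the level of bases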
By now, we can clearly see how
the strong exchange property of a
matroid is connected to its duality.
The dual matroid arises from
the dual graph of a planar graph,
and the topological arguments of
planar graphs can also be
generalized to matroids, offering
an alternative characterization
of planar graphs by matroids, the
Whitney's Planarity Criterion.
Matroid duality and topology is
a later topic; in this text we
will focus on the non-topological
part of dual matroids.
Coindependent Sets and Corank Function
We begin by arguing that a set is
coindependent iff it's the
complement of a spanning set:
Proof. For every
,
it can be extended to a
cobasis
,
which corresponds to basis
,
and
.
So is a spanning
set, and we have
.
By taking the complement of the
sets pointwise, we have
.
For every ,
it contains a basis
. Since
is a cobasis, and
,
is coindependent,
and thus we have
.
By mutual inclusion, we have
. By taking the dual we have
,
and by taking the pointwise
complement we have
.
This property can easily lead
to the dual augmentation property:
Theorem (Dual Augmentation Property)
Proof. First, we know
,
which implies
,
and .
Then we may use to augment
into a basis .
Given that both and
are disjoint from , the
augmented basis is also
disjoint from , therefore
.
In this way, we may always
permute the elements of ,
so that for disjoint
independent set
and
coindependent set
,
the following setup exists:
By the definition of the dual
matroid, one can easily see
, but
the relation between the rank
and the corank
for cannot
be seen so easily. In fact, by
the construction of the matroid
and the subset , the
values of
can be arbitrary:
Example
Consider the following matroid:
We have
while
.
Therefore one must not
expect there to be any
direct connection between
and .
However, there is still a
connection between
and :
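For reference, the connection here is what is standardly known as the corank formula, $r_{M^*}(S) = |S| + r_M(E \setminus S) - r_M(E)$; the counting at the end of the proof below is exactly this identity rearranged.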
Proof. First we find
a maximal coindependent set
.
The coindependent set
can be extended to a cobasis
,
and the extension must not use
elements in if we
are not to contradict
the maximality of . So the
rest of the elements are
contained in the basis
, and
therefore
is independent. On the other
hand, we find another
independent set
.
Permute the elements in
so that we can visualize
our setup easily:
Since ,
and ,
there must be
.
And by the independent set
exchange property, if
, then
the maximal independent
subset of
has more elements than ,
and can be used to extend
. In other words, if we
are able to show can't
be extended by elements in
, there
must be .
In the figure above, note
that we may permute elements
of freely, without
affecting the setup it
represents, and so may the other
elements in
.
Without loss of generality,
we may assume can be
used to augment into
.
Then we can use to augment
into another
basis of , using only
elements from . Again,
without loss of generality,
assume all elements of
but are used
to augment .
As we can see, now
is extended
into basis
,
corresponding to the cobasis
.
Note that
,
so by the hereditary property
while ,
which is a contradiction
to being the maximal
coindependent subset of .
Finally, by some counting
as simple as
,
we are done with the proof.
Cocircuits
Just like the connection
between coindependent sets
and spanning sets, we have a
connection between cocircuits
and hyperplanes. We state
that a set is a cocircuit
iff it's the complement
of a hyperplane:
Lemma
A set is a circuit iff it
is a minimal set contained
in no basis.
Proof. A set is
independent iff it's
contained in a basis: the
hereditary property implies
the sufficiency; and by
fixing any basis, an
independent set may be
augmented to a basis due
to the independent set
exchange property, which
implies the necessity.
The statement is equivalent
to: a set is dependent iff
it's contained in no basis.
By definition, a dependent
set is a circuit iff it's
minimal, therefore a set
is a circuit iff it's a
minimal set contained
in no basis.
Proof. The previous
lemma can be translated to
. The
key point is to utilize
the one-to-one
correspondence of a basis
and a cobasis
,
and vice versa.
For the first statement,
we first transform into
.
Then we notice that
is actually an
arbitrary cobasis, by letting
for any desired
.
In this way,
contains no cobasis.
For the second statement,
by taking the complement
we can rewrite it as
,
which is further equivalent to
.
Therefore adding any
elements outside
forces
to contain
a cobasis.
In this way, we can see
iff is
a maximal set that
contains no cobasis,
and thus iff
is a co-hyperplane. That is,
.
By taking complement on
both sides again, we have
;
by taking it to the
dual matroid we have
,
which is exactly what we
are to prove.
Corollary
A set is dependent iff it
intersects every cobasis. A set
is a circuit iff it is a
minimal set intersecting
every cobasis.
Proof. In the proof
of the lemma above,
for the first statement,
if we transform
into
,
then since is an
arbitrary cobasis, the
first statement is
restated as:
intersects every cobasis.
Note that the first
statement actually
characterizes the dependent
sets: a set is dependent
iff it intersects every cobasis.
For the second statement,
can be
rewritten as
,
and again
.
So is disjoint from the
cobasis , and
no longer intersects
every cobasis.
Corollary
A basis intersects every cocircuit,
and a cobasis intersects every circuit.
Proof. If a basis does not
intersect a cocircuit , then
is the very basis that does
not intersect, which is a
contradiction to the corollary above.
An analogous argument applies to
cobases versus circuits.
In this way, we can see
the independent sets and
circuits are actually
connected and interchangeable,
thanks to the strings the
dual matroid pulls for us.
In fact, due to
the corollary,
we are now able to interpret
the dual matroid of a cycle
matroid intuitively, in terms
of bonds, the minimal edge cuts:
Lemma
An edge set
of a graph is
an edge cut iff it
intersects every maximal
spanning forest.
Proof. We will first
show the necessity so that
one can gain some intuition.
Let the edge set
be an edge cut, splitting
into for
component , so that
vertices in and
are disconnected. If
does not intersect a
maximal spanning forest
of , which contains a
maximal spanning tree
for the component , then
is to be found in
, and vertices in
are still
connected by taking paths
in , which
contradicts being an
edge cut.
Now consider the sufficiency.
If is not an edge cut,
then consider any connected
component : it's still
connected in , so
there's a maximal spanning
tree in for
component such that
,
and such a tree is also
to be found in . By
iterating over every component,
evaluating the maximal
spanning trees in
and taking their union, we
obtain a maximal
spanning forest to be found
in both and ,
which is the very one that
does not intersect.
Theorem
An edge set
is a cocircuit in the cycle
matroid of graph
iff it's a
bond, a minimal edge cut.
Proof. A set is an edge cut
iff it intersects every
maximal spanning forest, while
the maximal spanning forests are
the bases of
, so by
this corollary,
a set is an edge cut
iff
it's co-dependent in
.
A bond is a minimal edge cut,
and a minimal co-dependent
set is a cocircuit, where both
minimalities are taken under
set inclusion. So, just by
chaining these with "iff", we
are done with the proof.
Definition (Bond Matroids)
For consistency, just as we
name the cycle matroid
after the cycles that form
the circuits of the matroid,
we may name the dual
matroid the
bond matroid of .
We will discuss more about
the topological aspect of
the bond matroids in the
later topic.
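To see the lemma and theorem above in action, here is a small brute-force Python check (over a hypothetical toy graph) that an edge set is an edge cut iff it intersects every maximal spanning forest:

from itertools import combinations

# Hypothetical toy graph: a triangle with a pendant edge.
V = {1, 2, 3, 4}
Edges = [(1, 2), (2, 3), (1, 3), (3, 4)]

def components(vertices, edges):
    # Count connected components with a tiny union-find.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

c = components(V, Edges)
# Maximal spanning forests: acyclic edge sets of size |V| - c that
# keep the same number of components (size forces acyclicity).
forests = [set(F) for F in combinations(Edges, len(V) - c)
           if components(V, F) == c]

for k in range(1, len(Edges) + 1):
    for D in map(set, combinations(Edges, k)):
        rest = [e for e in Edges if e not in D]
        is_cut = components(V, rest) > c
        assert is_cut == all(D & F for F in forests)  # the lemma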
Finally, we want to
characterize the circuits
and cocircuits by their
intersections:
Proof. Assume to the
contrary that
.
Now we obtain a disjoint independent
set and coindependent
set , so the dual augmentation
property is applicable. Let
them be augmented to a basis
and a cobasis
.
But where does go? If
then
;
and if then
.
Either case is a contradiction.
Proof. First augment
to a cobasis
.
Since is non-empty, pick
any ; since
,
is
dependent and contains a
circuit , such that
, and thus
.
Theorem
A set is a cocircuit iff
it's minimal among the
non-empty sets such that
for every
circuit .
Proof. We begin with
the necessity to ensure we
are on the correct trail. If
is a cocircuit, then
every non-empty proper subset
is coindependent,
and all of them fulfill
by the second lemma above. So
is minimal
among the non-empty
sets such that
.
Be wary that the minimality is
taken among the non-empty
sets, since there's
.
Now we prove the sufficiency:
given a set that is
minimal among the non-empty
sets that
while
.
Clearly is co-dependent,
since otherwise it would fulfill
;
the problem remains whether
is a cocircuit. Assume
it's not, then must
contain a proper subset
that is a cocircuit, and that
cocircuit must fulfill
.
However, is non-empty
for being a cocircuit, and
it's a proper subset of ,
so there is
,
which is a contradiction.
In this way, must be
a cocircuit, and we are done
with the proof.
Matroid Isomorphisms
It won't be hard to see that it's
the independent sets and their
set-inclusion relationships that
really matter. So we may define
isomorphisms in the following way:
Definition (Matroid Isomorphisms)
For matroid
and matroid ,
given a bijection , we
may define
for any subset . So we
may also view as
, obviously
it's also a bijection from
to .
Such a bijection is said
to be a matroid isomorphism, iff
.
Matroid and are said to
be isomorphic iff there's a
matroid isomorphism between them.
We may denote such a relation
as , and denote
a matroid isomorphism
as , with
and
defined in
the way shown above.
Proof. It suffices to show
is a matroid. Then given that
,
the matroid
coincides with .
First, it won't be hard to find
that is a hereditary
system, since
and for
,
just consider
:
we have
.
Then the independent set exchange
property may also be shown in an
analogous way: for
,
since
and
,
it won't be hard to see
.
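For small matroids given extensionally, this definition can be checked by brute force; here is a minimal Python sketch (hypothetical helper names) that searches all bijections between the ground sets for a matroid isomorphism:

from itertools import permutations

def find_isomorphism(E1, I1, E2, I2):
    # I1, I2 are the families of independent sets, as frozensets.
    E1, E2 = sorted(E1), sorted(E2)
    if len(E1) != len(E2):
        return None
    for image in permutations(E2):
        f = dict(zip(E1, image))
        # f is an isomorphism iff it maps I1 exactly onto I2.
        if {frozenset(f[x] for x in S) for S in I1} == set(I2):
            return f
    return None

# Example: two copies of U_{1,2} on different ground sets.
I1 = {frozenset(), frozenset({"a"}), frozenset({"b"})}
I2 = {frozenset(), frozenset({0}), frozenset({1})}
print(find_isomorphism({"a", "b"}, I1, {0, 1}, I2))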
Just like graph isomorphisms,
the matroid isomorphisms
also categorize matroids into
isomorphism classes. We will
be going over some important
isomorphism classes of
matroids in the remaining text
of this section.
On the other hand, since the
ground set of a matroid is
always finite, we may define
matroids over letter set
,
and use them as the representatives
of the isomorphism classes.
The simplest form of matroids
may be the uniform matroids:
Definition (Uniform Matroids)
Given a ground set and a
parameter , we
may define a matroid ,
such that
.
Such a matroid is called a
uniform matroid
of rank
, given that .
In particular, when ,
the uniform matroid becomes
,
and we then call it the
free matroid.
To categorize them, we may
define matroid
over letter set such that
.
Obviously there's always
.
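An independence oracle for a uniform matroid is a one-liner; a minimal Python sketch:

def uniform_independent(S, k):
    # In U_{k,n}, a set is independent iff it has at most k elements.
    return len(S) <= k

E = {"a", "b", "c", "d"}
assert uniform_independent({"a", "b"}, 2)          # independent in U_{2,4}
assert not uniform_independent({"a", "b", "c"}, 2)
assert uniform_independent(E, len(E))              # the free matroid case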
By putting some uniform
matroids over disjoint
ground sets together via
their direct sum, we obtain
the partition matroids:
Definition (Direct Sum of Matroids)
Given a matroid
and a matroid
over disjoint
ground sets , we
may define their direct sum
, which is a matroid
over with
.
Isomorphisms are preserved by
direct sums, that is,
,
where for
,
an isomorphism between the
direct sums may be
defined as
and
.
Direct sums of three or more
matroids over disjoint ground
sets can obviously be
defined in the same way,
so that there are
and
,
and thus they are well defined.
Isomorphisms are also
obviously preserved.
Proof. Obviously
and the hereditary property
is preserved. For the
independent set exchange
property, if
,
then either
or
is true. Without loss of generality,
let it be the former case, and thus
.
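The definition (and the proof above) translates directly into an independence oracle for the direct sum; a minimal Python sketch, assuming independence oracles for the two summands:

def direct_sum_independent(S, E1, indep1, E2, indep2):
    # A set is independent in the direct sum iff its traces on E1
    # and E2 are independent in the respective summands.
    return indep1(S & E1) and indep2(S & E2)

# Example: U_{1,2} on {a, b} plus U_{2,3} on {x, y, z}.
E1, E2 = {"a", "b"}, {"x", "y", "z"}
i1 = lambda S: len(S) <= 1
i2 = lambda S: len(S) <= 2
assert direct_sum_independent({"a", "x", "y"}, E1, i1, E2, i2)
assert not direct_sum_independent({"a", "b", "x"}, E1, i1, E2, i2)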
Convention (Direct Sum of Matroids over Letter Sets)
When doing direct sums of
two or more matroids over letter
sets, we automatically add
subscripts to make the ground
sets disjoint and identifiable
by which matroid they come from.
For example, we may define
and analogously.
Then, for matroids
and
,
when doing the direct sum ,
they will be viewed as over
and
instead. Assuming
are taken from the independent sets,
they will first become
,
and then
.
Definition (Partition Matroids)
Given a series of uniform
matroids
over disjoint ground sets
,
their direct sum
is called a partition matroid.
A uniform matroid can be
viewed as a partition matroid
with a single summand. And
there's always
.
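Spelled out, independence in a partition matroid just bounds the trace of a set on each block; a minimal Python sketch:

def partition_independent(S, blocks):
    # blocks maps each block E_i (a frozenset) to its capacity k_i;
    # S is independent iff |S ∩ E_i| <= k_i for every block.
    return all(len(S & E_i) <= k_i for E_i, k_i in blocks.items())

blocks = {frozenset({"a", "b", "c"}): 1, frozenset({"x", "y"}): 2}
assert partition_independent({"a", "x", "y"}, blocks)
assert not partition_independent({"a", "b"}, blocks)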
The problem of matching in a
bipartite graph is expressible
in the language of the
intersection of the independent
sets of two partition matroids:
Example (Matchings in Bipartite Graphs in the Language of Matroids)
Consider the following
-bigraph
;
we may define two
partition matroids
and over :
Then an edge set is a
matching iff it is a common
independent set, lying in
.
Finding a maximal matching is
equivalent to finding a
common independent set such
that no common independent
set is a proper superset of
it. Finding a
maximum matching is equivalent
to finding a common
independent set of
maximum cardinality.
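As a Python sketch over a hypothetical toy bigraph, the two partition matroids bound the degrees on the two sides, and a matching is exactly a common independent set:

# Edges of a hypothetical (X, Y)-bigraph, as (x, y) pairs.
edges = [("x1", "y1"), ("x1", "y2"), ("x2", "y2"), ("x3", "y1")]

def left_independent(S):
    # Partition matroid on the left side: at most one edge per x.
    xs = [x for x, _ in S]
    return len(xs) == len(set(xs))

def right_independent(S):
    # Partition matroid on the right side: at most one edge per y.
    ys = [y for _, y in S]
    return len(ys) == len(set(ys))

def is_matching(S):
    return left_independent(S) and right_independent(S)

assert is_matching({("x1", "y1"), ("x2", "y2")})
assert not is_matching({("x1", "y1"), ("x3", "y1")})  # shares y1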
While the connection above
seems like nothing more than
an uninteresting coincidence,
mathematicians grasped
this clue and discovered and proved
its "converse", which is truly
mind-blowing: it turns out
that finding the maximum
(cardinality) common
independent set of any two
matroids over the same
ground set, not limited
to partition matroids, is
reducible to multiple
iterations of finding and
augmenting matchings in
delicately constructed
bipartite graphs, one
per iteration.
Meanwhile, there is a
min-max duality, similar to
the König-Egerváry Theorem,
between the maximum
cardinality of the common
independent sets and the
rank functions of the two
matroids, due to the way
we find the maximum
common independent set.
We will discuss these in
detail in the later topic
of matroid intersection theorem.
Linear Matroids and Graphic Matroids
Let's introduce the concept of
column matroids first:
Definition (Column Matroids)
Given matrix
,
where
are column vectors, we may
define the column matroid
of ,
by using the column vectors
as the ground set
,
and letting a subset
be independent
iff it is a linearly
independent subset.
While a vectorial matroid deduplicates
vectors as soon as the ground set
is defined, defining the column matroid
allows us to identify elements by
their column identity rather than
their underlying vector value.
We will soon see why defining it
this way is useful.
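Over the reals, an independence oracle for a column matroid is just a rank computation on the chosen columns; a minimal numpy sketch (note how the two equal columns are distinct ground-set elements yet form a dependent pair):

import numpy as np

A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 1.0]])

def columns_independent(A, cols):
    # The columns indexed by `cols` are linearly independent iff
    # the submatrix they form has full column rank.
    sub = A[:, sorted(cols)]
    return np.linalg.matrix_rank(sub) == len(cols)

assert columns_independent(A, {0, 1})
assert not columns_independent(A, {2, 3})  # equal columns: dependent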
By now, we may unify matroids that
can be represented by certain kinds
of vector spaces:
Definition (Linear Matroids)
A matroid is said to be
linear or representable
if it's isomorphic to a
vectorial matroid or column matroid.
More specifically, if matroid
is linear, and elements of
its ground set are taken from
-vector space , then
is said to be
representable over field .
When , is
said to be real representable;
when ,
is said to be binary;
and when ,
is said to be ternary.
Whether is finite dimensional or
not doesn't matter: given that the
ground set is always finite,
we may take ;
then the elements of can be seen
as taken from , which
is always finite dimensional.
On the other hand, we may also
unify matroids that can be
represented by certain kinds of
cycle matroids:
Definition (Graphic Matroids)
A matroid is said to be
graphic if it's isomorphic
to a cycle matroid .
In particular, we claim that:
Assertion
A graphic matroid is linear and
representable over any field.
To prove this statement, it
clearly suffices to show that
every cycle matroid has some
linear matroid corresponding
to it.
Our construction is: for every
cycle matroid , we orient
the graph arbitrarily into a
digraph , and take its incidence
matrix . Here,
we allow loops to be present by
requiring the corresponding
column of that loop in
to be the zero vector.
Here's an example of our proposal:
Example
For some randomly taken graph ,
we orient it arbitrarily into ,
and evaluate its incidence matrix:
Clearly only can be
the elements of ,
while for
any field .
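The construction is mechanical; here is a Python sketch that builds the incidence matrix of an arbitrary orientation under one common sign convention, with loops producing zero columns as required:

import numpy as np

def incidence_matrix(vertices, arcs):
    # arcs are oriented edges (u, v); column e gets -1 at row u and
    # +1 at row v, and stays all-zero when u == v (a loop).
    idx = {v: i for i, v in enumerate(vertices)}
    D = np.zeros((len(vertices), len(arcs)))
    for e, (u, v) in enumerate(arcs):
        if u != v:
            D[idx[u], e] -= 1.0
            D[idx[v], e] += 1.0
    return D

# A hypothetical orientation of a triangle, plus a loop at vertex 3.
print(incidence_matrix([1, 2, 3], [(1, 2), (2, 3), (3, 1), (3, 3)]))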
We want to show
is the very linear matroid
isomorphic to . This will
require us to show an edge subset
is dependent in iff
its corresponding column vectors
in are linearly
dependent, regardless of which
field the matrix
is viewed over.
Note that for an oriented edge
corresponding
to column vector , if
it were oriented into
instead,
the only difference is that its
corresponding column vector in
would become
, which doesn't affect the
linear independence of the column
vectors of . So
the way we orient
doesn't matter.
Let's begin by showing the
necessity:
Lemma
For graph and its arbitrary
orientation , if an edge
subset is
dependent in , then the
corresponding column vectors
of in
are linearly dependent.
Proof. If the edge subset
is dependent,
then it contains a cycle
in . Let the corresponding column
vectors of
in be
.
Without loss of generality, we
may assume the cycle is not a
loop, otherwise it's trivially
.
We construct an identity
such that or
with the following steps:
Follow the cycle: if
in is oriented to
in , then we assign ;
otherwise we assign .
Consider the path component
and the corresponding prefix sum
;
it won't be hard to find that the
row corresponding to
is set to while the row
corresponding to is
set to . And
is obtained by adding
to
, where
,
corresponding to
,
provides to cancel
the at the row
corresponding to ,
and then introduces
at the row corresponding
to . In this way,
when we close the cycle,
,
so the corresponding
column vectors of are
linearly dependent if
is dependent.
And since the coefficients
of the identity
can only be or ,
such an identity should
reveal linear dependence
over any field .
Those familiar with Kirchhoff's
Matrix Tree Theorem will be
very familiar with the
construction above.
Now let's show the sufficiency,
that given a set of linearly
dependent column vectors
corresponding to edges
,
a cycle is to be found,
and therefore they are
dependent in . Again
we may assume no edge in
is a loop; otherwise it's just
the trivial case.
Consider the identity
revealing the linear dependence of
.
Without loss of generality,
we assume ,
otherwise we eliminate those
terms with zero coefficients.
The subset of column vectors
with non-zero coefficients
is still linearly dependent,
and for the corresponding
edge subset, once we are able
to show there's a cycle inside,
any superset of it contains
such a cycle.
Our argument of sufficiency is
based on the following algorithm:
Algorithm
The input is the edge set
corresponding to column vectors
,
fulfilling the identity
such that the coefficients
are
non-zero. The output is the
cycle formed by edges in
:
- Initialize empty path
, zero vector
,
and counter .
- Concatenate
, update
,
and increment .
- Now
for some vertex , and
the row corresponding to
in is
. We assert there's some
edge incident to in
. That is, either
oriented edge
or
is to be found in .
- If , then
contains a segment
,
and
is a cycle; we halt
with as the cycle found.
- Otherwise update
,
such that if
,
and if
,
and increment .
Then go to step 3.
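Here is a faithful Python rendition of the walk (a sketch; it relies on the guarantee, proved below, that every endpoint reached has another unused incident edge until the path closes into a cycle):

def find_cycle(edges):
    # edges: undirected edges (u, v) whose columns carry all-nonzero
    # coefficients in the dependence identity; per the proof below,
    # the walk never gets stuck before a vertex repeats.
    unused = list(edges)
    u, v = unused.pop()
    walk = [u, v]                      # the growing path P
    while walk[-1] not in walk[:-1]:   # step 4's halting test
        w = walk[-1]
        for i, (a, b) in enumerate(unused):
            if w in (a, b):            # step 3: unused incident edge
                unused.pop(i)
                walk.append(b if a == w else a)
                break
        else:
            raise AssertionError("impossible for a valid input")
    # The segment between the two visits of the repeated vertex
    # is the cycle we halt with.
    return walk[walk.index(walk[-1]):]

print(find_cycle([(1, 2), (2, 3), (3, 1)]))  # e.g. [3, 1, 2, 3]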
Proof. Obviously, if the
algorithm halts at all,
it must halt at step 4.
We've asserted that every time
step 3 is executed, there's an
edge in incident to
; this is because the
corresponding row of is
in . If
not, there must be only one
edge incident to
in , and such an edge
is used only
once, since is preserved
as a path in which each vertex
and edge occurs only
once (otherwise the algorithm
would have halted at step 4).
But if so, looking at the row
corresponding to in
the identity
,
it's only
,
as there's no zero divisor in
the field , which is a contradiction
to our input constraint
. In this way,
as long as is
non-empty, there's such
an edge incident to
to be selected at step 3.
Now we show it must halt at
step 4 before exhausting
every edge in .
If not, then the
algorithm concatenates all
edges in into a path
,
and is the only edge
incident to . Again, if
we look at the row corresponding
to in the identity
,
it's only
,
which is again a contradiction
to our input constraint
.
Lemma
For graph and its arbitrary
orientation , if a subset
of column vectors in
are linearly dependent, then
its corresponding edge subset
is dependent in .
Proof. Just eliminate
the zero coefficients
in the identity
revealing the linear dependence,
then execute the algorithm above;
we will find a cycle formed by
the corresponding edge subset.
Theorem
A graphic matroid is linear and
representable over any field.
Proof. We have shown:
for graph and its
arbitrary orientation ,
an edge subset
is dependent
in iff the
corresponding column vectors
of in
are linearly dependent.
So there's
,
and any graphic matroid
isomorphic to is
isomorphic to ,
for the
matrix over any field .
While every graphic matroid
is linear, the converse is
not true. The simplest
example might be the uniform
matroid :
Example
The uniform matroid
is linear and ternary. Consider
the matrix
over .
It's trivial to verify that
any two column vectors are
linearly independent, but:
So clearly
due to matroid isomorphism
.
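The claim is finite to check; a Python sketch over GF(3), with the standard representing columns as an assumed choice:

from itertools import combinations

# Columns of a 2x4 matrix over GF(3); every pair should be independent.
cols = [(1, 0), (0, 1), (1, 1), (1, 2)]

def dependent_pair(u, v):
    # Two nonzero vectors over GF(3) are dependent iff one is a
    # scalar multiple (by 1 or 2) of the other.
    return any(tuple((c * x) % 3 for x in u) == v for c in (1, 2))

assert all(not dependent_pair(u, v) for u, v in combinations(cols, 2))
# Any three of four columns in a 2-dimensional space are dependent,
# so this column matroid is U_{2,4}: representable over GF(3).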
However, is never
binary; there's no way for
to be represented
over . Assume
to the contrary that it's
isomorphic to some linear
matroid over ground set
,
taken from an
-vector space .
Then any two column vectors
are linearly independent, and
any three column vectors are
linearly dependent.
Take
and
. The identities
revealing their linear
dependence can only be:
All coefficients must be ,
otherwise it would render some
combination of one or two
vectors dependent, which
is a contradiction. But
once we add these identities
together, there will soon be:
So
must be linearly dependent,
which is a contradiction to
the fact that any two column
vectors are linearly independent.
So there's no way for the uniform
matroid , which is
linear and ternary, to be
isomorphic to a graphic matroid:
a graphic matroid must be
representable over every field,
while is never
representable over .