Graph: a graph G is simply a way of encoding pairwise relationships among a set of objects. It consists of a collection V of nodes and a collection E of edges, each of which “joins” two of the nodes. We thus represent an edge e ∈ E as a two-element subset of V: e = {u, v} for some u, v ∈ V, where we call u and v the ends of e.
One of the fundamental operations in a graph is that of traversing a sequence of nodes connected by edges.
An undirected graph is a tree if it is connected and does not contain a cycle. Rooted trees are fundamental objects in computer science, because they encode the notion of a hierarchy. Every n-node tree has exactly n − 1 edges.
Let G be an undirected graph on n nodes. Any two of the following statements imply the third.
(i) G is connected.
(ii) G does not contain a cycle.
(iii) G has n − 1 edges.
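As a small illustration of how this characterization can be used, here is a minimal Python sketch (the name is_tree and the 0..n−1 node labeling are assumptions for illustration). It checks property (iii) directly and then tests connectivity with a breadth-first search; by the statement above, a connected graph with n − 1 edges cannot contain a cycle, so the two checks together certify a tree.

```python
from collections import deque

def is_tree(n, edges):
    """Check whether an undirected graph on nodes 0..n-1 is a tree,
    using the fact that connectivity plus exactly n - 1 edges
    already rules out cycles."""
    if len(edges) != n - 1:
        return False
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # BFS from node 0 to test connectivity.
    seen = [False] * n
    seen[0] = True
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return all(seen)

print(is_tree(4, [(0, 1), (1, 2), (1, 3)]))  # True
print(is_tree(4, [(0, 1), (1, 2), (2, 0)]))  # False: cycle among 0,1,2 and node 3 unreachable
```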
Properties of DFS: For a given recursive call DFS(u), all nodes that are marked “Explored” between the invocation and end of this recursive call are descendants of u in T. Let T be a depth-first search tree, let x and y be nodes in T, and let (x, y) be an edge of G that is not an edge of T. Then one of x or y is an ancestor of the other.
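A minimal recursive DFS sketch, assuming a dict-of-lists adjacency structure adj and hypothetical names explored and parent: the parent map records the tree edges of T, and every node marked explored during the call dfs(u) ends up as a descendant of u in that tree.

```python
def dfs(u, adj, explored, parent):
    """Recursive DFS from u; tree edges (parent[v] = u) form the DFS tree T."""
    explored[u] = True
    for v in adj[u]:
        if not explored[v]:
            parent[v] = u
            dfs(v, adj, explored, parent)

# Small undirected example (hypothetical data).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
explored = {u: False for u in adj}
parent = {}
dfs(0, adj, explored, parent)
print(parent)  # {1: 0, 2: 1, 3: 2}: each non-root node and its DFS-tree parent
```

Note that the non-tree edge (0, 2) of this example joins 2 to its ancestor 0, as the statement above predicts.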
Connected Components: For any two nodes s and t in a graph, their connected components are either identical or disjoint.
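One way to see this property in action is to label components by running one BFS per not-yet-labeled node; since components are identical or disjoint, each node is labeled exactly once. The sketch below (the helper name connected_components and the dict-of-lists input are assumptions) returns a map from node to component index.

```python
from collections import deque

def connected_components(adj):
    """Label every node with the index of its connected component."""
    comp = {}
    current = 0
    for s in adj:
        if s in comp:
            continue               # s already belongs to an earlier component
        comp[s] = current
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp[v] = current
                    queue.append(v)
        current += 1
    return comp

adj = {0: [1], 1: [0], 2: [3], 3: [2], 4: []}
print(connected_components(adj))  # {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}
```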
The adjacency matrix representation of a graph requires O(n²) space, while the adjacency list representation requires only O(m + n) space. The adjacency list data structure is ideal for implementing breadth-first search.
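A possible way to set up that representation, assuming nodes are numbered 0..n−1 and edges are given as pairs (the helper build_adjacency_list is illustrative, not from the text):

```python
def build_adjacency_list(n, edges):
    """Adjacency-list representation: one list per node holding its neighbors.
    Total space is O(m + n), versus O(n^2) for an adjacency matrix."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)   # undirected: record the edge at both ends
    return adj

adj = build_adjacency_list(4, [(0, 1), (0, 2), (2, 3)])
print(adj)  # [[1, 2], [0], [0, 3], [2]]
```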
The algorithm examines the edges leaving a given node one by one. When we are scanning the edges leaving u and come to an edge (u, v), we need to know whether or not node v has been previously discovered by the search. To make this simple, we maintain an array Discovered of length n and set Discovered[v] = true as soon as our search first sees v.
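Putting these pieces together, a compact BFS sketch might look as follows; the discovered list plays the role of the Discovered array, and the adjacency-list input is assumed to use nodes 0..n−1.

```python
from collections import deque

def bfs(adj, s):
    """BFS from s. discovered[v] is set the moment the search first sees v,
    so each edge (u, v) is handled in O(1) extra time."""
    n = len(adj)
    discovered = [False] * n
    discovered[s] = True
    order = [s]
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:              # examine the edges leaving u one by one
            if not discovered[v]:
                discovered[v] = True
                order.append(v)
                queue.append(v)
    return order

adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
print(bfs(adj, 0))  # [0, 1, 2, 3]
```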
Similarly for DFS: we first add all of the nodes adjacent to u to our list of nodes to be considered, but after doing this we proceed to explore a new neighbor v of u. As we explore v, in turn, we add the neighbors of v to the list we’re maintaining, but we do so in stack order, so that these neighbors will be explored before we return to explore the other neighbors of u. We only come back to the other nodes adjacent to u when there are no new nodes left to explore from v.
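An iterative DFS sketch that makes the stack order explicit (the name dfs_stack and the dict-of-lists input format are assumptions for illustration):

```python
def dfs_stack(adj, s):
    """Iterative DFS from s. Neighbors are pushed onto a stack, so the
    neighbors of the most recently explored node are popped (and explored)
    before we return to the remaining neighbors of earlier nodes."""
    explored = set()
    order = []
    stack = [s]
    while stack:
        u = stack.pop()
        if u in explored:
            continue
        explored.add(u)
        order.append(u)
        for v in adj[u]:
            if v not in explored:
                stack.append(v)
    return order

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(dfs_stack(adj, 0))  # [0, 2, 3, 1]
```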
If a graph G is bipartite, then it cannot contain an odd cycle.
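A common way to test bipartiteness in practice (a sketch, not from the text) is to attempt a 2-coloring with BFS, one component at a time; if some edge ever joins two nodes of the same color, the graph contains an odd cycle and is not bipartite.

```python
from collections import deque

def is_bipartite(adj):
    """Try to 2-color the graph with BFS; a same-color edge
    witnesses an odd cycle, so the graph is not bipartite."""
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

print(is_bipartite({0: [1], 1: [0, 2], 2: [1]}))        # True (a path)
print(is_bipartite({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # False (a triangle, an odd cycle)
```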
If u and v are mutually reachable, and v and w are mutually reachable, then u and w are mutually reachable.
For any two nodes s and t in a directed graph, their strong components are either identical or disjoint.
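A sketch of one consequence: the strong component containing a node s is the set of nodes reachable from s that can also reach s, i.e. the intersection of a forward BFS in G and a BFS in the reversed graph (the helper names reachable and strong_component are mine, not from the text).

```python
from collections import deque

def reachable(adj, s):
    """Set of nodes reachable from s by BFS over directed edges."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def strong_component(adj, s):
    """Nodes reachable from s that can also reach s
    (i.e. reachable from s in the reversed graph)."""
    rev = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)
    return reachable(adj, s) & reachable(rev, s)

adj = {0: [1], 1: [2], 2: [0, 3], 3: []}
print(strong_component(adj, 0))  # {0, 1, 2}; node 3 is reachable but cannot reach 0
```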
Given a set of tasks with dependencies, it would be natural to seek a valid order in which the tasks could be performed, so that all dependencies are respected. Specifically, for a directed graph G, we say that a topological ordering of G is an ordering of its nodes as v1, v2, . . . , vn so that for every edge (vi, vj), we have i < j. In other words, all edges point “forward” in the ordering. A topological ordering on tasks provides an order in which they can be safely performed; when we come to the task vj, all the tasks that are required to precede it have already been done.

If G has a topological ordering, then G is a DAG. In every DAG G, there is a node v with no incoming edges. If G is a DAG, then G has a topological ordering.
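The last two facts suggest an algorithm: repeatedly peel off a node with no incoming edges and append it to the ordering. A Python sketch along those lines (essentially Kahn-style; the name topological_order and the dict-of-lists input are assumptions), returning None when the input is not a DAG:

```python
from collections import deque

def topological_order(adj):
    """Repeatedly remove a node with no incoming edges.
    Returns a topological ordering, or None if the graph is not a DAG."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(u for u in adj if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == len(adj) else None

# Tasks with dependencies: an edge (u, v) means u must precede v.
adj = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(topological_order(adj))  # ['a', 'b', 'c', 'd']
```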