A greedy algorithm is an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage, with the hope of finding a global optimum. For many problems, a greedy strategy does not in general produce an optimal solution, but a greedy heuristic may nonetheless yield locally optimal choices that approximate a globally optimal solution in a reasonable amount of time.
For example, a greedy strategy for the traveling salesman problem (which has high computational complexity) is the following heuristic: "At each stage visit the unvisited city nearest to the current city." This heuristic does not guarantee a best solution, but it terminates in a reasonable number of steps; finding an optimal solution typically requires unreasonably many steps. In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
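As a concrete illustration of the nearest-neighbour heuristic above, here is a minimal sketch in Python; the distance-matrix input format and the function name are choices made here purely for illustration.

def nearest_neighbour_tour(dist, start=0):
    """Greedily visit the closest unvisited city until every city is seen."""
    unvisited = set(range(len(dist))) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        # Greedy step: move to the unvisited city nearest to the current one.
        next_city = min(unvisited, key=lambda city: dist[current][city])
        tour.append(next_city)
        unvisited.remove(next_city)
    return tour

# Example: four cities and their pairwise distances.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(nearest_neighbour_tour(dist))  # [0, 1, 3, 2]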
In general, greedy algorithms have five components (a generic sketch in code follows the list):
· A candidate set, from which a solution is created
· A selection function, which chooses the best candidate to be added to the solution
· A feasibility function, that is used to determine if a candidate can be used to contribute to a solution
· An objective function, which assigns a value to a solution, or a partial solution, and
· A solution function, which will indicate when we have discovered a complete solution
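Putting the five components together, here is a generic greedy skeleton; the callable-based interface below is a hypothetical choice made purely for illustration.

def greedy(candidates, select, feasible, objective, is_solution):
    """Repeatedly add the best feasible candidate until a solution is found."""
    solution = []
    candidates = list(candidates)
    while candidates and not is_solution(solution):
        best = select(candidates)            # selection function
        candidates.remove(best)
        if feasible(solution + [best]):      # feasibility function
            solution.append(best)            # extend the partial solution
    return solution, objective(solution)     # objective function scores the result

# Example use: greedily pick numbers so that their sum reaches (at most) 10.
picked, total = greedy(
    candidates=[7, 5, 4, 3, 1],
    select=max,                              # "best" candidate = largest number
    feasible=lambda part: sum(part) <= 10,
    objective=sum,
    is_solution=lambda part: sum(part) == 10,
)
print(picked, total)  # [7, 3] 10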
Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties:
Greedy choice property
We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or on all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage and may reconsider the previous stage's algorithmic path to the solution.
Optimal substructure
"A problem exhibits optimal substructure if an optimal solution to
the problem contains optimal solutions to the sub-problems."[2]
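A classic problem exhibiting both properties is activity selection: choosing the maximum number of pairwise non-overlapping activities. The sketch below (an illustrative example, with an input format chosen here) always takes the activity that finishes earliest; what remains afterwards is a smaller instance of the same problem.

def select_activities(activities):
    """activities: list of (start, finish) pairs.
    Returns a maximum-size subset of non-overlapping activities."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:   # greedy choice: earliest finish that still fits
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]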
Types
Greedy algorithms can be characterized as being 'short-sighted' and 'non-recoverable'. They are ideal only for problems which have 'optimal substructure'. Despite this, greedy algorithms are best suited for simple problems (e.g. giving change; a change-making sketch follows the list of variations below). It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations of the greedy algorithm:
· Pure greedy algorithms
· Orthogonal greedy algorithms
· Relaxed greedy algorithms
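To make the change-giving remark above concrete, here is a minimal greedy change-making sketch (the function name and denominations are illustrative). With a canonical coin system such as 1, 5, 10, 25 the greedy result happens to be optimal; with other denominations it may not be.

def greedy_change(amount, denominations):
    """Hand out coins largest-first; assumes a 1-unit coin exists,
    so the change can always be completed."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1] -- optimal here
print(greedy_change(6, [4, 3, 1]))        # [4, 1, 1] -- but [3, 3] would be better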
Applications
Greedy algorithms mostly (but not always)
fail to find the globally optimal solution, because they usually do not operate
exhaustively on all the data. They can make commitments to certain choices too
early which prevent them from finding the best overall solution later.
If a greedy algorithm can be proven to
yield the global optimum for a given problem class, it typically becomes the
method of choice because it is faster than other optimization methods like
dynamic programming.
Greedy algorithms appear in network routing
as well. Using greedy routing, a message is forwarded to the neighboring node
which is "closest" to the destination. The notion of a node's
location (and hence "closeness") may be determined by its physical
location, as in geographic routing used by ad hoc networks. Location may also be an entirely artificial construct, as in small-world routing and distributed hash tables.
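A minimal sketch of greedy geographic forwarding, assuming each node knows its own coordinates and those of its neighbours (the node names and data structures here are hypothetical):

import math

def greedy_route(positions, neighbours, source, destination):
    """Forward hop by hop to the neighbour closest to the destination.
    Returns the path, or None if no neighbour is closer than the current
    node (the 'local minimum' where plain greedy routing gets stuck)."""
    def dist(a, b):
        (ax, ay), (bx, by) = positions[a], positions[b]
        return math.hypot(ax - bx, ay - by)

    path = [source]
    current = source
    while current != destination:
        # Greedy step: hand the message to the neighbour nearest the destination.
        nxt = min(neighbours[current], key=lambda n: dist(n, destination))
        if dist(nxt, destination) >= dist(current, destination):
            return None  # stuck: no neighbour makes progress
        path.append(nxt)
        current = nxt
    return path

positions = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(greedy_route(positions, neighbours, "A", "D"))  # ['A', 'B', 'C', 'D']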
Example of failure
The 0-1 knapsack problem is posed as follows. A thief robbing a store finds n items; the ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack, for some integer W.
To see how the greedy strategy behaves, consider the following problem instance: there are 3 items and the knapsack can hold 50 pounds.
Item 1 weighs 10 pounds and is worth 60 dollars.
Item 2 weighs 20 pounds and is worth 100 dollars.
Item 3 weighs 30 pounds and is worth 120 dollars.
Thus the value per pound of item 1 is 6 dollars, which is greater than the value per pound of either item 2 (5 dollars per pound) or item 3 (4 dollars per pound).
The greedy strategy based on value per pound takes item 1 first and then item 2; at that point the knapsack holds 30 pounds worth 160 dollars, and item 3 no longer fits. However, the optimal solution takes items 2 and 3 (exactly 50 pounds, worth 220 dollars), leaving item 1 behind. Hence the greedy strategy fails to find the optimal solution.
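A short sketch of this instance, assuming items are given as (name, value, weight) tuples (a format chosen here for illustration), confirms the numbers above:

def greedy_knapsack(items, capacity):
    """Take items in decreasing value-per-pound order while they still fit."""
    chosen, total_value = [], 0
    for name, value, weight in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        if weight <= capacity:
            chosen.append(name)
            total_value += value
            capacity -= weight
    return chosen, total_value

items = [("item 1", 60, 10), ("item 2", 100, 20), ("item 3", 120, 30)]
print(greedy_knapsack(items, 50))  # (['item 1', 'item 2'], 160)
# The optimal load is item 2 + item 3: exactly 50 pounds, worth 220 dollars.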
Thank you Wiki and Algorithms book.