Thursday, September 26, 2013
Greedy algorithms.
A greedy algorithm is an algorithm that
follows the problem solving heuristic of making the locally optimal choice at
each stage, with the hope of finding a global optimum. In many problems, a
greedy strategy does not in general produce an optimal solution, but
nonetheless a greedy heuristic may yield locally optimal solutions that
approximate a global optimal solution in a reasonable time.
For example, a greedy strategy for the
traveling salesman problem (which is computationally hard) is the
following heuristic: "At each stage visit an unvisited city nearest to the
current city". This heuristic need not find a best solution but terminates
in a reasonable number of steps; finding an optimal solution typically requires
unreasonably many steps. In mathematical optimization, greedy algorithms solve combinatorial
problems having the properties of matroids.
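As a rough illustration of that heuristic, here is a minimal sketch in Python (the city coordinates and the Euclidean distance function are invented for the example; the heuristic itself only assumes some distance measure between cities):

```python
import math

def nearest_neighbour_tour(cities, start):
    """Greedy TSP heuristic: repeatedly visit the nearest unvisited city.

    `cities` maps a city name to (x, y) coordinates. The tour returned is
    cheap to compute but is not guaranteed to be optimal.
    """
    def dist(a, b):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        return math.hypot(x1 - x2, y1 - y2)

    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        # Locally optimal choice: the closest city not yet visited.
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Example with made-up coordinates:
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}
print(nearest_neighbour_tour(cities, "A"))   # ['A', 'B', 'C', 'D']
```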
In general, greedy algorithms have five components:
· A candidate set, from which a solution is created
· A selection function, which chooses the best candidate to be added to the solution
· A feasibility function, that is used to determine if a candidate can be used to contribute to a solution
· An objective function, which assigns a value to a solution, or a partial solution, and
· A solution function, which will indicate when we have discovered a complete solution
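To make these five components concrete, below is a small sketch using the change-making problem mentioned later in this post (the denominations and the target amount are made up for illustration); each component in the list above maps to one piece of the code:

```python
def greedy_change(amount, denominations):
    """Greedy change-making, structured around the five components above."""
    candidates = sorted(denominations, reverse=True)   # candidate set
    solution = []                                      # partial solution
    remaining = amount

    def feasible(coin):                                # feasibility function
        return coin <= remaining

    def complete():                                    # solution function
        return remaining == 0

    while not complete():
        # selection function: the best candidate is the largest feasible coin
        usable = [c for c in candidates if feasible(c)]
        if not usable:
            return None          # no feasible candidate left; greedy failed
        coin = usable[0]
        solution.append(coin)
        remaining -= coin        # objective: minimise the number of coins used

    return solution

# With "canonical" coin systems the greedy answer happens to be optimal:
print(greedy_change(63, [25, 10, 5, 1]))   # [25, 25, 10, 1, 1, 1]
```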
Greedy algorithms produce good solutions on
some mathematical problems, but not on others. Most problems for which they
work will have two properties:
Greedy choice property
We can make whatever choice seems best at
the moment and then solve the sub problems that arise later. The choice made by
a greedy algorithm may depend on choices made so far but not on future choices
or all the solutions to the sub problem. It iteratively makes one greedy choice
after another, reducing each given problem into a smaller one. In other words,
a greedy algorithm never reconsiders its choices. This is the main difference
from dynamic programming, which is exhaustive and is guaranteed to find the
solution. After every stage, dynamic programming makes decisions based on all
the decisions made in the previous stage, and may reconsider the previous
stage's algorithmic path to solution.
Optimal substructure
"A problem exhibits optimal substructure if an optimal solution to
the problem contains optimal solutions to the sub-problems."[2]
Types
Greedy algorithms can be characterized as
being 'short sighted', and as 'non-recoverable'. They are ideal only for
problems that have 'optimal substructure'. Even so, greedy algorithms are often
the best fit for simple problems (e.g. giving change). It is important, however,
to note that a greedy algorithm can be used as a selection strategy to
prioritize options within a search or branch-and-bound algorithm. There are a
few variations to the greedy algorithm:
Pure greedy algorithms
Orthogonal greedy algorithms
Relaxed greedy algorithms
Applications
Greedy algorithms mostly (but not always)
fail to find the globally optimal solution, because they usually do not operate
exhaustively on all the data. They can make commitments to certain choices too
early which prevent them from finding the best overall solution later.
If a greedy algorithm can be proven to
yield the global optimum for a given problem class, it typically becomes the
method of choice because it is faster than other optimization methods like
dynamic programming.
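Minimum spanning trees are the textbook case of a greedy method that is provably optimal. The sketch below is a bare-bones version of Kruskal's algorithm with a tiny made-up edge list, included only to illustrate the point:

```python
def kruskal_mst(num_vertices, edges):
    """Kruskal's algorithm: greedily add the cheapest edge that creates no cycle."""
    parent = list(range(num_vertices))

    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):      # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # feasibility check: adding the edge makes no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Edges given as (weight, u, v); the graph is an arbitrary example.
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal_mst(4, edges))           # [(1, 2, 1), (2, 3, 2), (0, 2, 3)]
```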
Greedy algorithms appear in network routing
as well. Using greedy routing, a message is forwarded to the neighboring node
which is "closest" to the destination. The notion of a node's
location (and hence "closeness") may be determined by its physical
location, as in geographic routing used by ad hoc networks. Location may also
be an entirely artificial construct, as in small-world routing and distributed
hash tables.
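A toy version of greedy geographic routing might look like the sketch below (the node positions and neighbour lists are invented; a real protocol also needs a recovery strategy when no neighbour is closer to the destination, which this sketch only reports):

```python
import math

def greedy_route(positions, neighbours, source, destination):
    """Forward a message hop by hop to the neighbour closest to the destination."""
    def dist(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        return math.hypot(x1 - x2, y1 - y2)

    path = [source]
    current = source
    while current != destination:
        # Greedy step: pick the neighbour geographically closest to the target.
        nxt = min(neighbours[current], key=lambda n: dist(n, destination))
        if dist(nxt, destination) >= dist(current, destination):
            return path, "stuck in a local minimum"   # greedy forwarding failed
        path.append(nxt)
        current = nxt
    return path, "delivered"

positions = {"A": (0, 0), "B": (2, 1), "C": (4, 0), "D": (6, 1)}
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(greedy_route(positions, neighbours, "A", "D"))
```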
Example of failure.
The 0-1 knapsack problem is posed as follows. A thief robbing a store finds n items; the ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack, for some integer W.
For the greedy strategy, consider the problem instance with 3 items and a knapsack that can hold 50 pounds.
Item 1 weighs 10 pounds and is worth 60 dollars.
Item 2 weighs 20 pounds and is worth 100 dollars.
Item 3 weighs 30 pounds and is worth 120 dollars.
Thus, the value per pound of item 1 is 6 dollars per pound, which is greater than the value per pound of either item 2 (5 dollars per pound) or item 3 (4 dollars per pound).
The greedy strategy therefore takes item 1 first and then item 2; item 3 no longer fits, so the load is worth 160 dollars.
However, the optimal solution takes items 2 and 3, leaving item 1 behind, for a total of 220 dollars.
Hence the greedy strategy fails.
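The failure is easy to check by brute force. The sketch below encodes the instance above and compares the greedy value-per-pound load with the true optimum:

```python
from itertools import combinations

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) for items 1..3
capacity = 50

# Greedy: take items in decreasing value-per-pound order while they still fit.
greedy_value, remaining = 0, capacity
for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
    if weight <= remaining:
        greedy_value += value
        remaining -= weight

# Optimal: brute force over all subsets (fine for 3 items).
best_value = max(
    sum(v for v, w in subset)
    for r in range(len(items) + 1)
    for subset in combinations(items, r)
    if sum(w for v, w in subset) <= capacity
)

print(greedy_value)   # 160: items 1 and 2
print(best_value)     # 220: items 2 and 3
```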
Thank you Wiki and Algorithms book.
Monday, August 26, 2013
Dynamic programming part 2.
Examples of dynamic programming:
1.) Dijkstra's algorithm for the shortest path problem
2.) Fibonacci sequence (see the memoized sketch below)
3.) Tower of Hanoi
4.) Duckworth–Lewis method for resolving the problem when a game of cricket is interrupted by rain
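The Fibonacci sequence is the simplest of these to show in code; a memoized (top-down dynamic programming) version in Python might look like this:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """nth Fibonacci number; each subproblem fib(k) is computed only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))   # 102334155, using about 40 subproblems instead of the
                 # hundreds of millions of calls made by the naive recursion
```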
Thanks to the different books and Wiki for helping me understand the dynamic programming concept.
Tuesday, August 20, 2013
Dynamic programming.
Here is a short text on dynamic programming.
In general, to solve a given problem, we need to solve different parts of the problem (sub problems), then combine the solutions of the sub problems to reach an overall solution. Often when using a more naive method, many of the sub problems are generated and solved many times. The dynamic programming approach seeks to solve each sub problem only once, thus reducing the number of computations: once the solution to a given sub problem has been computed, it is stored or "memoized", so the next time the same solution is needed, it is simply looked up.
Dynamic programming algorithms are used for optimization.
A dynamic programming algorithm will examine all possible ways to solve the problem and will pick the best solution. Therefore, we can roughly think of dynamic programming as an intelligent, brute-force method that enables us to go through all possible solutions to pick the best one. If the scope of the problem is such that going through all possible solutions is possible and fast enough, dynamic programming guarantees finding the optimal solution. The alternatives are many, such as using a greedy algorithm, which picks the best possible choice "at any possible branch in the road". While a greedy algorithm does not guarantee the optimal solution, it is faster. Fortunately, some greedy algorithms (such as minimum spanning trees) are proven to lead to the optimal solution.
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub problems. If a problem can be solved by combining optimal solutions to non-overlapping sub problems, the strategy is called "divide and conquer" instead. This is why merge sort and quick sort are not classified as dynamic programming problems.
Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub problems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits such optimal substructure. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then it can be split into subpaths p1 from u to w and p2 from w to v such that these, in turn, are indeed the shortest paths between the corresponding vertices (thanks to Introduction to Algorithms, MIT Press).
Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman–Ford algorithm or the Floyd–Warshall algorithm does.
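A compact version of the Bellman–Ford idea, written directly from that recursive formulation, might look like this (the graph is given as a made-up list of weighted edges; negative-cycle detection is omitted):

```python
def bellman_ford(num_vertices, edges, source):
    """Shortest-path distances from `source`; edges are (u, v, weight) tuples.

    After i rounds, dist[v] is no worse than the best path from source to v
    that uses at most i edges -- the optimal-substructure recursion above.
    """
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):        # at most |V|-1 edges on any shortest path
        for u, v, w in edges:
            if dist[u] + w < dist[v]:        # relax: extend a shorter subpath by one edge
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(4, edges, 0))             # [0, 3, 1, 4]
```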
Overlapping sub problems means that the space of sub problems must be small, that is, any recursive algorithm solving the problem should solve the same sub problems over and over, rather than generating new sub problems.
This can be achieved in either of two ways:
Top-down approach: This is the direct fall-out of the recursive formulation of any problem. If the solution to any problem can be formulated recursively using the solution to its sub problems, and if its sub problems are overlapping, then one can easily store the solutions to the sub problems in a table. Whenever we attempt to solve a new sub problem, we first check the table to see if it is already solved. If a solution has been recorded, we can use it directly, otherwise we solve the sub problem and add its solution to the table.
Bottom-up approach: Once we formulate the solution to a problem recursively in terms of its sub problems, we can try reformulating the problem in a bottom-up fashion: try solving the sub problems first and use their solutions to build on and arrive at solutions to bigger sub problems. This is also usually done in tabular form by iteratively generating solutions to bigger and bigger sub problems by using the solutions to small sub problems.
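Both approaches can be seen side by side on the rod-cutting problem from Introduction to Algorithms (the price table below is just an illustrative example): the top-down version memoizes the natural recursion, while the bottom-up version fills a table of answers for increasing lengths.

```python
from functools import lru_cache

# Price of a rod piece of length i is prices[i]; prices[0] = 0 is a sentinel.
# (The numbers are illustrative; any price table works.)
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]

# Top-down: recurse on the natural formulation, memoizing each subproblem.
@lru_cache(maxsize=None)
def cut_rod_top_down(n):
    if n == 0:
        return 0
    return max(prices[i] + cut_rod_top_down(n - i) for i in range(1, n + 1))

# Bottom-up: fill a table of answers for lengths 0..n in increasing order.
def cut_rod_bottom_up(n):
    best = [0] * (n + 1)
    for length in range(1, n + 1):
        best[length] = max(prices[i] + best[length - i] for i in range(1, length + 1))
    return best[n]

print(cut_rod_top_down(10), cut_rod_bottom_up(10))   # both print 30
```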
to be continued...