Tuesday, December 10, 2013

The this Keyword in Java.


Using the this Keyword

Within an instance method or a constructor, this is a reference to the current object — the object whose method or constructor is being called. You can refer to any member of the current object from within an instance method or a constructor by using this.

Using this with a Field

The most common reason for using the this keyword is that a field is shadowed by a method or constructor parameter.
For example, the Point class was written like this:
public class Point {
    public int x = 0;
    public int y = 0;
        
    //constructor
    public Point(int a, int b) {
        x = a;
        y = b;
    }
}
but it could have been written like this:
public class Point {
    public int x = 0;
    public int y = 0;
        
    //constructor
    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
Each argument to the constructor shadows one of the object's fields — inside the constructor x is a local copy of the constructor's first argument. To refer to the Point field x, the constructor must use this.x.

Using this with a Constructor

From within a constructor, you can also use the this keyword to call another constructor in the same class. Doing so is called an explicit constructor invocation. Here's another Rectangle class, with a different implementation from the one in the Objects section.
public class Rectangle {
    private int x, y;
    private int width, height;
        
    public Rectangle() {
        this(0, 0, 0, 0);
    }
    public Rectangle(int width, int height) {
        this(0, 0, width, height);
    }
    public Rectangle(int x, int y, int width, int height) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }
    ...
}
This class contains a set of constructors. Each constructor initializes some or all of the rectangle's member variables. The constructors provide a default value for any member variable whose initial value is not provided by an argument. For example, the no-argument constructor calls the four-argument constructor with four 0 values and the two-argument constructor calls the four-argument constructor with two 0 values. As before, the compiler determines which constructor to call, based on the number and the type of arguments.
If present, the invocation of another constructor must be the first line in the constructor.
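
For example (an illustrative sketch, not from the original post), here is how the constructors above chain when rectangles are created:

public class RectangleDemo {
    public static void main(String[] args) {
        // new Rectangle() invokes this(0, 0, 0, 0), so all four
        // fields are initialized to 0 by the four-argument constructor.
        Rectangle r1 = new Rectangle();

        // new Rectangle(100, 200) invokes this(0, 0, 100, 200): x and y
        // default to 0, width and height come from the arguments.
        Rectangle r2 = new Rectangle(100, 200);

        // Calls the four-argument constructor directly.
        Rectangle r3 = new Rectangle(10, 20, 100, 200);
    }
}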

Thursday, November 7, 2013

Some small notes related to graphs.



A graph G is simply a way of encoding pairwise relationships among a set of objects: it consists of a collection V of nodes and a collection E of edges, each of which “joins” two of the nodes. We thus represent an edge e ∈ E as a two-element subset of V: e = {u, v} for some u, v ∈ V, where we call u and v the ends of e.
One of the fundamental operations in a graph is that of traversing a sequence of nodes connected by edges.
An undirected graph is a tree if it is connected and does not contain a cycle.
Rooted trees are fundamental objects in computer science, because they
encode the notion of a hierarchy.
Every n-node tree has exactly n − 1 edges.
Let G be an undirected graph on n nodes. Any two of the following statements imply the third:
(i) G is connected.
(ii) G does not contain a cycle.
(iii) G has n − 1 edges.
Properties of DFS:
For a given recursive call DFS(u), all nodes that are marked “Explored”
between the invocation and end of this recursive call are descendants of u
in T.
Let T be a depth-first search tree, let x and y be nodes in T, and let
(x, y) be an edge of G that is not an edge of T. Then one of x or y is an ancestor
of the other.

Connected Components: For any two nodes s and t in a graph, their connected components are either identical or disjoint.
The adjacency matrix representation of a graph requires O(n²) space, while the adjacency list representation requires only O(m + n) space.

The adjacency list data structure is ideal for implementing breadth-first search.
The algorithm examines the edges leaving a given node one by one. When we
are scanning the edges leaving u and come to an edge (u, v), we need to
know whether or not node v has been previously discovered by the search.
To make this simple, we maintain an array Discovered of length n and set
Discovered[v] = true as soon as our search first sees v.
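
For illustration, here is a minimal Java sketch of BFS over an adjacency list (names such as bfs and adj are my own, not from the original text):

import java.util.*;

public class BFSDemo {
    // adj.get(u) lists the neighbors of node u; nodes are numbered 0..n-1.
    static boolean[] bfs(List<List<Integer>> adj, int s) {
        int n = adj.size();
        boolean[] discovered = new boolean[n];
        Queue<Integer> queue = new ArrayDeque<>();
        discovered[s] = true;              // set Discovered[s] as soon as we see s
        queue.add(s);
        while (!queue.isEmpty()) {
            int u = queue.remove();
            for (int v : adj.get(u)) {     // examine the edges leaving u one by one
                if (!discovered[v]) {      // constant-time check via the array
                    discovered[v] = true;
                    queue.add(v);
                }
            }
        }
        return discovered;
    }
}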

Similarly for DFS : we first add all of the nodes
adjacent to u to our list of nodes to be considered, but after doing this we
proceed to explore a new neighbor v of u. As we explore v, in turn, we add
the neighbors of v to the list we’re maintaining, but we do so in stack order,
so that these neighbors will be explored before we return to explore the other
neighbors of u. We only come back to other nodes adjacent to u when there
are no other nodes left.
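
As a matching sketch (same assumptions as the BFS example above), DFS with an explicit stack:

import java.util.*;

public class DFSDemo {
    static boolean[] dfs(List<List<Integer>> adj, int s) {
        boolean[] explored = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(s);
        while (!stack.isEmpty()) {
            int u = stack.pop();
            if (explored[u]) continue;            // a node may be pushed more than once
            explored[u] = true;
            for (int v : adj.get(u))              // neighbors go on the stack, so they are
                if (!explored[v]) stack.push(v);  // explored before u's other neighbors
        }
        return explored;
    }
}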

If a graph G is bipartite, then it cannot contain an odd cycle.
If u and v are mutually reachable, and v and w are mutually reachable,
then u and w are mutually reachable.
For any two nodes s and t in a directed graph, their strong components
are either identical or disjoint.

Given a set of tasks with dependencies, it would be natural to seek
a valid order in which the tasks could be performed, so that all dependencies
are respected. Specifically, for a directed graph G, we say that a topological
ordering of G is an ordering of its nodes as v1, v2, . . . , vn so that for every edge
(vi , vj), we have i < j. In other words, all edges point “forward” in the ordering.
A topological ordering on tasks provides an order in which they can be safely performed; when we come to the task vj, all the tasks that are required to precede it have already been done.
If G has a topological ordering, then G is a DAG.
In every DAG G, there is a node v with no incoming edges.
If G is a DAG, then G has a topological ordering.
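
These facts suggest an algorithm for computing a topological ordering: repeatedly find a node with no incoming edges, output it, and delete it from the graph. A minimal Java sketch (Kahn's algorithm; the names are my own):

import java.util.*;

public class TopoSort {
    // Returns a topological ordering of a DAG given as an adjacency list.
    static List<Integer> topologicalOrder(List<List<Integer>> adj) {
        int n = adj.size();
        int[] indegree = new int[n];
        for (List<Integer> neighbors : adj)
            for (int v : neighbors) indegree[v]++;
        Queue<Integer> ready = new ArrayDeque<>();   // nodes with no incoming edges
        for (int u = 0; u < n; u++)
            if (indegree[u] == 0) ready.add(u);
        List<Integer> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            int u = ready.remove();
            order.add(u);                            // everything required before u is done
            for (int v : adj.get(u))
                if (--indegree[v] == 0) ready.add(v);
        }
        return order;   // contains all n nodes exactly when the graph is a DAG
    }
}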

Friday, October 11, 2013

Algorithm for Maze problem.


We'll solve the problem of finding and marking a solution path using recursion.

Remember that a recursive algorithm has at least 2 parts:

    Base case(s) that determine when to stop.

    Recursive part(s) that call the same algorithm (i.e., itself) to assist in solving the problem.

Recursive parts
Because our algorithm must be recursive, we need to view the problem in terms of similar subproblems. In this case, that means we need to "find a path" in terms of "finding paths."

Let's start by assuming there is already some algorithm that finds a path from some point in a maze to the goal, call it FIND-PATH(x, y).

Also suppose that we got from the start to position x=1, y=2 in the maze (by some method):

+#####
++...#
#+####
#.####
...#.G
##...#

    What we now want to know is whether there is a path from x=1, y=2 to the goal. If there is a path to the goal from x=1, y=2, then there is a path from the start to the goal (since we already got to x=1, y=2).

To find a path from position x=1, y=2 to the goal, we can just ask FIND-PATH to try to find a path from the North, East, South, and West of x=1, y=2:

    FIND-PATH(x=1, y=1) North
    FIND-PATH(x=2, y=2) East
    FIND-PATH(x=1, y=3) South
    FIND-PATH(x=0, y=2) West

Generalizing this, we can call FIND-PATH recursively to move from any location in the maze to adjacent locations. In this way, we move through the maze.
Base cases
It's not enough to know how to use FIND-PATH recursively to advance through the maze. We also need to determine when FIND-PATH must stop.

One such base case is to stop when it reaches the goal.

The other base cases have to do with moving to invalid positions. For example, we have mentioned how to search North of the current position, but disregarded whether that North position is legal. In other words, we must ask:

    Is the position in the maze (...or did we just go outside its bounds)?
    Is the position open (...or is it blocked with an obstacle)?

Now, to our base cases and recursive parts, we must add some steps to mark positions we are trying, and to unmark positions that we tried, but from which we failed to reach the goal:

FIND-PATH(x, y)

    if (x,y outside maze) return false
    if (x,y is goal) return true
    if (x,y not open) return false
    mark x,y as part of solution path
    if (FIND-PATH(North of x,y) == true) return true
    if (FIND-PATH(East of x,y) == true) return true
    if (FIND-PATH(South of x,y) == true) return true
    if (FIND-PATH(West of x,y) == true) return true
    unmark x,y as part of solution path
    return false

All these steps together complete a basic algorithm that finds and marks a path to the goal (if any exists) and tells us whether a path was found or not (i.e., returns true or false). This is just one such algorithm; other variations are possible.

Note: FIND-PATH will be called at least once for each position in the maze that is tried as part of a path.

Also, after going to another position (e.g., North):

    if (FIND-PATH(North of x,y)¹ == true) return true²

if a path to the goal was found, it is important that the algorithm stops. That is, if going North of x,y finds a path (i.e., returns true¹), then from the current position (i.e., the current call of FIND-PATH) there is no need to check East, South, or West. Instead, FIND-PATH just needs to return true² to the previous call.

Path marking will be done with the '+' symbol and unmarking with the 'x' symbol.
Using Algorithm
To use FIND-PATH to find and mark a path from the start to the goal with our given representation of mazes, we just need to:

    Locate the start position (call it startx, starty).
    Call FIND-PATH(startx, starty).
    Re-mark* the start position with 'S'.

*In the algorithm, the start position (marked 'S') needs to be considered an open position and must be marked as part of the path for FIND-PATH to work correctly. That is why we re-mark it at the end.
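
Here is one possible Java translation of FIND-PATH (a sketch under assumptions not in the original text: the maze is a char[][] with '#' for walls, '.' for open cells, 'S' for the start, and 'G' for the goal; the method names are my own):

public class MazeSolver {
    static boolean findPath(char[][] maze, int x, int y) {
        if (y < 0 || y >= maze.length || x < 0 || x >= maze[y].length)
            return false;                            // x,y outside maze
        if (maze[y][x] == 'G') return true;          // x,y is goal
        if (maze[y][x] != '.' && maze[y][x] != 'S')
            return false;                            // x,y not open (or already tried)
        maze[y][x] = '+';                            // mark x,y as part of solution path
        if (findPath(maze, x, y - 1)) return true;   // North
        if (findPath(maze, x + 1, y)) return true;   // East
        if (findPath(maze, x, y + 1)) return true;   // South
        if (findPath(maze, x - 1, y)) return true;   // West
        maze[y][x] = 'x';                            // unmark: no path through here
        return false;
    }

    static boolean solve(char[][] maze, int startX, int startY) {
        boolean found = findPath(maze, startX, startY);
        maze[startY][startX] = 'S';                  // re-mark the start position
        return found;
    }
}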


To be continued...

Wednesday, October 9, 2013

Time complexity.

For Linear Time
An algorithm that runs in O(n), or linear, time has a very natural property: its running time is at most a constant factor times the size of the input. One basic way to get an algorithm with this running time is to process the input in a single pass, spending a constant amount of time on each item of input encountered. Other algorithms achieve a linear time bound for more subtle reasons.

For O(n log n) Time
It is the running time of any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time.

For Quadratic Time
Quadratic time arises naturally from a pair of nested loops: an algorithm consists of a loop with O(n) iterations, and each iteration of the loop launches an internal loop that takes O(n) time. Multiplying these two factors of n together gives the running time.
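
For instance (an illustrative sketch, not from the original text), a single pass over the input gives linear time, while a pair of nested loops gives quadratic time; merge sort is the classic example of the O(n log n) split-recurse-combine pattern:

public class RunningTimes {
    // O(n): a single pass, constant work per item (assumes a is non-empty).
    static int max(int[] a) {
        int best = a[0];
        for (int x : a) if (x > best) best = x;
        return best;
    }

    // O(n^2): an outer loop of O(n) iterations, each launching an
    // inner loop that takes O(n) time, e.g. testing all pairs.
    static boolean hasDuplicate(int[] a) {
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] == a[j]) return true;
        return false;
    }
}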

Tuesday, October 8, 2013

Maze to solve.

Solve the following maze by eliminating dead ends. Draw the maze with the dead ends marked out and the path going from the start to the end shown.

Monday, October 7, 2013

Solution for stable marriage problem.

Everyone gets married:
    Once a woman becomes engaged, she is always engaged to someone. So, at the end, there cannot be a man and a woman both unengaged, as he must have proposed to her at some point (since a man will eventually propose to everyone, if necessary) and, being unengaged, she would have had to have said yes.
   
The marriages are stable:
    Let Alice be a woman and Bob be a man who are both engaged, but not to each other. Upon completion of the algorithm, it is not possible for both Alice and Bob to prefer each other over their current partners. If Bob prefers Alice to his current partner, he must have proposed to Alice before he proposed to his current partner. If Alice accepted his proposal, yet is not married to him at the end, she must have dumped him for someone she likes more, and therefore doesn't like Bob more than her current partner. If Alice rejected his proposal, she was already with someone she liked more than Bob.

Algorithm

function stableMatching {
    Initialize all m ∈ M and w ∈ W to free
    while ∃ free man m who still has a woman w to propose to {
       w = m's highest ranked such woman to whom he has not yet proposed
       if w is free
         (m, w) become engaged
       else some pair (m', w) already exists
         if w prefers m to m'
           (m, w) become engaged
           m' becomes free
         else
           (m', w) remain engaged
    }
}
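
A minimal Java sketch of the pseudocode above (assuming preferences are given as rank arrays; the method name mirrors the pseudocode, the rest of the names are my own):

import java.util.*;

public class GaleShapley {
    // menPref[m] lists women in m's order of preference;
    // womenRank[w][m] is w's rank of man m (lower means preferred).
    static int[] stableMatching(int[][] menPref, int[][] womenRank) {
        int n = menPref.length;
        int[] next = new int[n];           // index of the next woman m will propose to
        int[] fiance = new int[n];         // fiance[w] = w's current partner, -1 if free
        Arrays.fill(fiance, -1);
        Deque<Integer> freeMen = new ArrayDeque<>();
        for (int m = 0; m < n; m++) freeMen.push(m);
        while (!freeMen.isEmpty()) {
            int m = freeMen.pop();
            int w = menPref[m][next[m]++]; // highest-ranked woman not yet proposed to
            int m2 = fiance[w];
            if (m2 == -1) {
                fiance[w] = m;             // w is free: (m, w) become engaged
            } else if (womenRank[w][m] < womenRank[w][m2]) {
                fiance[w] = m;             // w prefers m to m': m' becomes free
                freeMen.push(m2);
            } else {
                freeMen.push(m);           // (m', w) remain engaged; m proposes again later
            }
        }
        return fiance;                     // fiance[w] = man matched with woman w
    }
}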

Optimality of the solution

While the solution is stable, it is not necessarily optimal from all individuals' points of view. The traditional form of the algorithm is optimal for the initiator of the proposals: the stable, suitor-optimal solution may or may not be optimal for the reviewer of the proposals. An example is as follows:

There are three suitors (A, B, C) and three reviewers (X, Y, Z) whose preferences are:

    A: YXZ   B: ZYX   C: XZY   X: BAC   Y: CBA   Z: ACB

There are 3 stable solutions to this matching arrangement:

    suitors get their first choice and reviewers their third (AY, BZ, CX)
    all participants get their second choice (AX, BY, CZ)
    reviewers get their first choice and suitors their third (AZ, BX, CY)

All three are stable because instability requires both participants to be happier with an alternative match. Giving one group their first choices ensures that the matches are stable because they would be unhappy with any other proposed match. Giving everyone their second choice ensures that any other match would be disliked by one of the parties. The algorithm converges in a single round on the suitor-optimal solution because each reviewer receives exactly one proposal, and therefore selects that proposal as its best choice, ensuring that each suitor has an accepted offer, ending the match. This asymmetry of optimality is driven by the fact that the suitors have the entire set to choose from, but reviewers choose between a limited subset of the suitors at any one time.

Similar problems

The weighted matching problem seeks to find a matching in a weighted bipartite graph that has maximum weight. Maximum weighted matchings do not have to be stable, but in some applications a maximum weighted matching is better than a stable one.

The stable roommates problem is similar to the stable marriage problem, but differs in that all participants belong to a single pool (instead of being divided into equal numbers of "men" and "women").

The hospitals/residents problem — also known as the college admissions problem — differs from the stable marriage problem in that the "women" can accept "proposals" from more than one "man" (e.g., a hospital can take multiple residents, or a college can take an incoming class of more than one student). Algorithms to solve the hospitals/residents problem can be hospital-oriented (female-optimal) or resident-oriented (male-optimal).

The hospitals/residents problem with couples allows the set of residents to include couples who must be assigned together, either to the same hospital or to a specific pair of hospitals chosen by the couple (e.g., a married couple want to ensure that they will stay together and not be stuck in programs that are far away from each other). The addition of couples to the hospitals/residents problem renders the problem NP-complete.

The matching with contracts problem is a generalization of the matching problem, in which participants can be matched with different terms of contracts. An important special case of contracts is matching with flexible wages.

Thank you Wiki.

Friday, October 4, 2013

Stable marriage problem.

The stable marriage problem (SMP) is the problem of finding a stable matching between two sets of elements given a set of preferences for each element. A matching is a mapping from the elements of one set to the elements of the other set. A matching is stable whenever it is not the case that both:

    some given element A of the first matched set prefers some given element B of the second matched set over the element to which A is already matched, and
    B also prefers A over the element to which B is already matched

In other words, a matching is stable when there does not exist any alternative pairing (A, B) in which both A and B are individually better off than they would be with the element to which they are currently matched.

The stable marriage problem is commonly stated as:

    Given n men and n women, where each person has ranked all members of the opposite sex with a unique number between 1 and n in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. If there are no such people, all the marriages are "stable".

Algorithms for finding solutions to the stable marriage problem have applications in a variety of real-world situations, perhaps the best known of these being in the assignment of graduating medical students to their first hospital appointments.

Wait for the solution.

Thursday, October 3, 2013

Arrange given numbers to form the biggest number.


Given an array of numbers, arrange them in a way that yields the largest value. 

For example, if the given numbers are {54, 546, 548, 60}, the arrangement 6054854654 gives the largest value. And if the given numbers are {1, 34, 3, 98, 9, 76, 45, 4}, then the arrangement 998764543431 gives the largest value.

A simple solution that comes to mind is to sort all numbers in descending order, but simply sorting doesn’t work. For example, 548 is greater than 60, but in the output 60 comes before 548. As a second example, 98 is greater than 9, but 9 comes before 98 in the output.

So how do we go about it? The idea is to use any comparison-based sorting algorithm.

In the sorting algorithm, instead of using the default comparison, write a comparison function myCompare() and use it to sort the numbers. Given two numbers X and Y, how should myCompare() decide which number to put first? We compare the two concatenations XY (Y appended at the end of X) and YX (X appended at the end of Y). If XY is larger, then X should come before Y in the output; otherwise Y should come first. For example, let X and Y be 542 and 60.

To compare X and Y, we compare 54260 and 60542. Since 60542 is greater than 54260, we put Y first.
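
A minimal Java sketch of this comparator idea (the numbers are handled as strings so they can be concatenated; the names are my own):

import java.util.*;

public class LargestNumber {
    static String largestNumber(int[] nums) {
        String[] s = new String[nums.length];
        for (int i = 0; i < nums.length; i++) s[i] = String.valueOf(nums[i]);
        // myCompare: X goes before Y exactly when the concatenation XY beats YX.
        Arrays.sort(s, (x, y) -> (y + x).compareTo(x + y));
        return String.join("", s);
    }

    public static void main(String[] args) {
        System.out.println(largestNumber(new int[]{54, 546, 548, 60}));           // 6054854654
        System.out.println(largestNumber(new int[]{1, 34, 3, 98, 9, 76, 45, 4})); // 998764543431
    }
}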

Thursday, September 26, 2013

Greedy algorithms.



A greedy algorithm is an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage, with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time.
For example, a greedy strategy for the traveling salesman problem (which is of a high computational complexity) is the following heuristic: "At each stage visit an unvisited city nearest to the current city". This heuristic need not find a best solution but terminates in a reasonable number of steps; finding an optimal solution typically requires unreasonably many steps. In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
In general, greedy algorithms have five components:

    A candidate set, from which a solution is created
    A selection function, which chooses the best candidate to be added to the solution
    A feasibility function, which determines whether a candidate can contribute to a solution
    An objective function, which assigns a value to a solution, or a partial solution, and
    A solution function, which indicates when we have discovered a complete solution

Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work have two properties:
Greedy choice property
We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or on all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to the solution.

Optimal substructure
    "A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to the sub-problems."[2]

Types
Greedy algorithms can be characterized as 'short-sighted' and as 'non-recoverable'. They are ideal only for problems which have 'optimal substructure'. Despite this, greedy algorithms are best suited for simple problems (e.g., giving change). It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations of the greedy algorithm:
    Pure greedy algorithms
    Orthogonal greedy algorithms
    Relaxed greedy algorithms

Applications
Greedy algorithms mostly (but not always) fail to find the globally optimal solution, because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, which prevents them from finding the best overall solution later.
If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like dynamic programming.

Greedy algorithms appear in network routing as well. Using greedy routing, a message is forwarded to the neighboring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in geographic routing used by ad hoc networks. Location may also be an entirely artificial construct as in small world routing and distributed hash table.
Example of failure.
The 0-1 knapsack problem is posed as follows. A thief robbing a store finds n items; the ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack, for some integer W.
To see the greedy strategy fail, consider a problem instance with 3 items and a knapsack that can hold 50 pounds.
Item 1 weighs 10 pounds and is worth 60 dollars.
Item 2 weighs 20 pounds and is worth 100 dollars.
Item 3 weighs 30 pounds and is worth 120 dollars.
Thus, the value per pound of item 1 is 6 dollars per pound, which is greater than the value per pound of either item 2 (5 dollars per pound) or item 3 (4 dollars per pound).
The greedy strategy would take item 1 first and then item 2, for a total value of 160 dollars, with no room left for item 3. However, the optimal solution takes items 2 and 3, worth 220 dollars, leaving behind item 1.
Hence the greedy strategy fails.
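
A small sketch that plays out this instance (illustrative only; the names are my own):

import java.util.*;

public class GreedyKnapsack {
    public static void main(String[] args) {
        int capacity = 50;
        int[] weight = {10, 20, 30};
        int[] value  = {60, 100, 120};
        Integer[] order = {0, 1, 2};
        // Greedy: consider items by value per pound, in descending order.
        Arrays.sort(order, (a, b) ->
            Double.compare((double) value[b] / weight[b],
                           (double) value[a] / weight[a]));
        int load = 0, worth = 0;
        for (int i : order) {
            if (load + weight[i] <= capacity) {
                load += weight[i];
                worth += value[i];
            }
        }
        System.out.println("Greedy value: " + worth);   // prints 160
        // The optimal load takes items 2 and 3: 100 + 120 = 220 dollars.
    }
}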

Thank you Wiki and Algorithms book.