Tuesday, March 18, 2008

PageRank By ashesh deep

Introduction

PageRank is a numeric value that represents how important a page is on the web. Google figures that when one page links to another page, it is effectively casting a vote for the other page. The more votes that are cast for a page, the more important the page must be. Also, the importance of the page that is casting the vote determines how important the vote itself is. Google calculates a page's importance from the votes cast for it, and the importance of each vote is taken into account when a page's PageRank is calculated. PageRank is Google's way of deciding a page's importance. It matters because it is one of the factors that determine a page's ranking in the search results. It isn't the only factor that Google uses to rank pages, but it is an important one.

From here onwards, we'll occasionally refer to PageRank as "PR".



Notes

Not all links are counted by Google. For instance, they filter out links from known link farms. Some links can cause a site to be penalized by Google. They rightly figure that webmasters cannot control which sites link to their sites, but they can control which sites they link out to. For this reason, links into a site cannot harm the site, but links from a site can be harmful if they link to penalized sites. So be careful which sites you link to. If a site has PR0, it is usually a penalty, and it would be unwise to link to it.

How is PageRank calculated?

To calculate the PageRank for a page, all of its inbound links are taken into account. These are links from within the site and links from outside the site.

PR(A) = (1-d) + d(PR(t1)/C(t1) + ... + PR(tn)/C(tn))

That's the equation that calculates a page's PageRank. It's the original one that was published when PageRank was being developed, and it is probable that Google uses a variation of it but they aren't telling us what it is. It doesn't matter though, as this equation is good enough.
In the equation 't1 - tn' are pages linking to page A, 'C' is the number of outbound links that a page has and 'd' is a damping factor, usually set to 0.85.
We can think of it in a simpler way:-
a page's PageRank = 0.15 + 0.85 * (a "share" of the PageRank of every page that links to it)
"share" = the linking page's PageRank divided by the number of outbound links on the page.
A page "votes" an amount of PageRank onto each page that it links to. The amount of PageRank that it has to vote with is a little less than its own PageRank value (its own value * 0.85). This value is shared equally between all the pages that it links to.
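As a sketch (not Google's actual code), the published equation translates directly into Python. The damping factor d = 0.85 comes from the text above; the inbound-link figures below are made up purely for illustration:

```python
# PR(A) = (1-d) + d(PR(t1)/C(t1) + ... + PR(tn)/C(tn))
def page_rank(inbound, d=0.85):
    """inbound: list of (linking page's PageRank, that page's outbound link count)."""
    return (1 - d) + d * sum(pr / c for pr, c in inbound)

# A hypothetical page with two inbound links: one from a PR4 page with 5
# outbound links, and one from a PR8 page with 100 outbound links.
print(page_rank([(4, 5), (8, 100)]))  # ~0.898
```

Note that the PR4 link contributes a "share" of 0.8 while the PR8 link contributes only 0.08, because the PR8 page splits its vote a hundred ways.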
From this, we could conclude that a link from a page with PR4 and 5 outbound links is worth more than a link from a page with PR8 and 100 outbound links. The PageRank of a page that links to yours is important, but the number of links on that page is also important. The more links there are on a page, the less PageRank value your page will receive from it.

If the PageRank value differences between PR1, PR2,.....PR10 were equal then that conclusion would hold up, but many people believe that the values between PR1 and PR10 (the maximum) are set on a logarithmic scale, and there is very good reason for believing it. Nobody outside Google knows for sure one way or the other, but the chances are high that the scale is logarithmic, or similar. If so, it means that it takes a lot more additional PageRank for a page to move up to the next PageRank level than it did to move up from the previous PageRank level. The result is that it reverses the previous conclusion, so that a link from a PR8 page that has lots of outbound links is worth more than a link from a PR4 page that has only a few outbound links.
Whichever scale Google uses, we can be sure of one thing. A link from another site increases our site's PageRank. Just remember to avoid links from link farms.
Note that when a page votes its PageRank value to other pages, its own PageRank is not reduced by the value that it is voting. The page doing the voting doesn't give away its PageRank and end up with nothing. It isn't a transfer of PageRank. It is simply a vote according to the page's PageRank value. It's like a shareholders meeting where each shareholder votes according to the number of shares held, but the shares themselves aren't given away. Even so, pages do lose some PageRank indirectly, as we'll see later.

Ok so far? Good. Now we'll look at how the calculations are actually done.
For a page's calculation, its existing PageRank (if it has any) is abandoned completely and a fresh calculation is done where the page relies solely on the PageRank "voted" for it by its current inbound links, which may have changed since the last time the page's PageRank was calculated. The equation shows clearly how a page's PageRank is arrived at. But what isn't immediately obvious is that it can't work if the calculation is done just once. Suppose we have 2 pages, A and B, which link to each other, and neither has any other links of any kind.
This is what happens:-
Step 1: Calculate page A's PageRank from the value of its inbound links
Page A now has a new PageRank value. The calculation used the value of the inbound link from page B. But page B has an inbound link (from page A) and its new PageRank value hasn't been worked out yet, so page A's new PageRank value is based on inaccurate data and can't be accurate.
Step 2: Calculate page B's PageRank from the value of its inbound links
Page B now has a new PageRank value, but it can't be accurate because the calculation used the new PageRank value of the inbound link from page A, which is inaccurate.

It's a Catch 22 situation. We can't work out A's PageRank until we know B's PageRank, and we can't work out B's PageRank until we know A's PageRank.
Now that both pages have newly calculated PageRank values, can't we just run the calculations again to arrive at accurate values? No. We can run the calculations again using the new values and the results will be more accurate, but we will always be using inaccurate values for the calculations, so the results will always be inaccurate.
The problem is overcome by repeating the calculations many times. Each time produces slightly more accurate values. In fact, total accuracy can never be achieved because the calculations are always based on inaccurate values. 40 to 50 iterations are sufficient to reach a point where any further iteration wouldn't produce enough of a change to the values to matter. This is precisely what Google does at each update, and it's the reason why the updates take so long.
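To see the iteration settle the Catch 22, here is a minimal sketch in Python of the two-page A/B example (d = 0.85 as above; an illustration of the published equation, not Google's implementation):

```python
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pr = {page: 0.0 for page in links}  # the starting value doesn't affect the end result
    for _ in range(iterations):
        prev = dict(pr)  # each pass works from the previous pass's inaccurate values
        for page in pr:
            pr[page] = (1 - d) + d * sum(
                prev[t] / len(links[t]) for t in links if page in links[t])
    return pr

# Pages A and B link only to each other.
pr = pagerank({'A': ['B'], 'B': ['A']})
# Both values creep towards 1.0; after 40-50 iterations the change per
# iteration is too small to matter.
```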
One thing to bear in mind is that the results we get from the calculations are proportions. The figures must then be set against a scale (known only to Google) to arrive at each page's actual PageRank. Even so, we can use the calculations to channel the PageRank within a site around its pages so that certain pages receive a higher proportion of it than others.



NOTES

You may come across explanations of PageRank where the same equation is stated but the result of each iteration of the calculation is added to the page's existing PageRank. The new value (result + existing PageRank) is then used when sharing PageRank with other pages.
These explanations are wrong for the following reasons:-
1. They quote the same, published equation - but then change it from

PR(A) = (1-d) + d(...) to PR(A) = PR(A) + (1-d) + d(...)

It isn't correct, and it isn't necessary.

2. We will be looking at how to organize links so that certain pages end up with a larger proportion of the PageRank than others. Adding to the page's existing PageRank through the iterations produces different proportions than when the equation is used as published. Since the addition is not a part of the published equation, the results are wrong and the proportioning isn't accurate.

According to the published equation, the page being calculated starts from scratch at each iteration. It relies solely on its inbound links. The 'add to the existing PageRank' idea doesn't do that, so its results are necessarily wrong.

Internal linking

Fact: A website has a maximum amount of PageRank that is distributed between its pages by internal links.
The maximum PageRank in a site equals the number of pages in the site * 1. The maximum is increased by inbound links from other sites and decreased by outbound links to other sites. We are talking about the overall PageRank in the site and not the PageRank of any individual page. You don't have to take my word for it. You can reach the same conclusion by using a pencil and paper and the equation.
Fact: The maximum amount of PageRank in a site increases as the number of pages in the site increases.

The more pages that a site has, the more PageRank it has. Again, by using a pencil and paper and the equation, you can come to the same conclusion. Bear in mind that the only pages that count are the ones that Google knows about.
Fact: By linking poorly, it is possible to fail to reach the site's maximum PageRank, but it is not possible to exceed it.
Poor internal linkages can cause a site to fall short of its maximum but no kind of internal link structure can cause a site to exceed it.
The only way to increase the maximum is to add more inbound links and/or increase the number of pages in the site.

Cautions: Whilst I thoroughly recommend creating and adding new pages to increase a site's total PageRank so that it can be channeled to specific pages, there are certain types of pages that should not be added.


These are pages that are all identical or very nearly identical and are known as cookie-cutters. Google considers them to be spam and they can trigger an alarm that causes the pages, and possibly the entire site, to be penalized. Pages full of good content are a must.


What can we do with this 'overall' PageRank?

We are going to look at some example calculations to see how a site's PageRank can be manipulated, but before doing that, I need to point out that a page will be included in the Google index only if one or more pages on the web link to it. That's according to Google. If a page is not in the Google index, any links from it can't be included in the calculations.
For the examples, we are going to ignore that fact, mainly because other 'PageRank Explained' type documents ignore it in the calculations, and it might be confusing when comparing documents. The calculator operates in two modes:- Simple and Real. In Simple mode, the calculations assume that all pages are in the Google index, whether or not any other pages link to them. In Real mode the calculations disregard unlinked-to pages. These examples show the results as calculated in Simple mode.

Let's consider a 3 page site (pages A, B and C) with no links coming in from the outside. We will allocate each page an initial PageRank of 1, although it makes no difference whether we start each page with 1, 0 or 99. Apart from a few millionths of a PageRank point, after many iterations the end result is always the same. Starting with 1 requires fewer iterations for the PageRank to converge to a suitable result than starting with 0 or any other number. You may want to use a pencil and paper to follow this, or you can follow it with the calculator.
The site's maximum PageRank equals the number of pages in the site. In this case, we have 3 pages so the site's maximum is 3.
At the moment, none of the pages link to any other pages and none link to them. If you make the calculation once for each page, you'll find that each of them ends up with a PageRank of 0.15. No matter how many iterations you run, each page's PageRank remains at 0.15. The total PageRank in the site = 0.45, whereas it could be 3. The site is seriously wasting most of its potential PageRank.
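Those figures are easy to check with a few lines of Python - a sketch of the calculator's Simple mode, with d = 0.85 as before:

```python
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pr = {page: 1.0 for page in links}  # allocate each page an initial PageRank of 1
    for _ in range(iterations):
        prev = dict(pr)
        for page in pr:
            pr[page] = (1 - d) + d * sum(
                prev[t] / len(links[t]) for t in links if page in links[t])
    return pr

# Three pages, no links at all.
pr = pagerank({'A': [], 'B': [], 'C': []})
# Every page settles at 0.15; the total is 0.45, far short of the maximum of 3.
```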


Example 1

Now begin again with each page being allocated PR1. Link page A to page B and run the calculations for each page. We end up with:- Page A = 0.15 Page B = 1 Page C = 0.15
Page A has "voted" for page B and, as a result, page B's PageRank has increased. This is looking good for page B, but it's only 1 iteration - we haven't taken account of the Catch 22 situations. Look at what happens to the figures after more iterations:-
After 100 iterations the figures are: - Page A = 0.15 Page B = 0.2775 Page C = 0.15
It still looks good for page B but nowhere near as good as it did. These figures are more realistic. The total PageRank in the site is now 0.5775 - slightly better but still only a fraction of what it could be.
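Example 1's figures can be reproduced with the same kind of sketch (d = 0.85; Simple mode assumed):

```python
def pagerank(links, d=0.85, iterations=100):
    """links maps each page to the list of pages it links to."""
    pr = {page: 1.0 for page in links}
    for _ in range(iterations):
        prev = dict(pr)
        for page in pr:
            pr[page] = (1 - d) + d * sum(
                prev[t] / len(links[t]) for t in links if page in links[t])
    return pr

# Page A links to page B; nothing links to A or C.
pr = pagerank({'A': ['B'], 'B': [], 'C': []})
# A ~0.15, B ~0.2775, C ~0.15; the site total is ~0.5775.
```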


Example 2

Try this linkage. Link all pages to all pages. Each page starts with PR1 again. This produces:- Page A = 1 Page B = 1 Page C = 1
Now we've achieved the maximum. No matter how many iterations are run, each page always ends up with PR1. The same results occur by linking in a loop. E.g. A to B, B to C and C to A. View this in the calculator.
This has demonstrated that, by poor linking, it is quite easy to waste PageRank and by good linking, we can achieve a site's full potential. But we don't particularly want all the site's pages to have an equal share. We want one or more pages to have a larger share at the expense of others.
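Both the fully interlinked layout and the loop can be checked with a sketch (d = 0.85, Simple mode):

```python
def pagerank(links, d=0.85, iterations=100):
    """links maps each page to the list of pages it links to."""
    pr = {page: 1.0 for page in links}
    for _ in range(iterations):
        prev = dict(pr)
        for page in pr:
            pr[page] = (1 - d) + d * sum(
                prev[t] / len(links[t]) for t in links if page in links[t])
    return pr

# All pages link to all pages.
full = pagerank({'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']})
# A loop: A to B, B to C, C to A.
loop = pagerank({'A': ['B'], 'B': ['C'], 'C': ['A']})
# In both layouts every page holds at PR1, so each site totals 3 - the maximum.
```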


The kinds of pages that we might want to have the larger shares are the index page, hub pages, and pages that are optimized for certain search terms. We have only 3 pages, so we'll channel the PageRank to the index page - page A. It will serve to show the idea of channeling.

Example 3

Now try this. Link page A to both B and C. Also link pages B and C to A. Starting with PR1 all round, after 1 iteration the results are: -
Page A = 1.85 Page B = 0.575 Page C = 0.575
and after 100 iterations, the results are: - Page A = 1.459459 Page B = 0.7702703 Page C = 0.7702703
In both cases the total PageRank in the site is 3 (the maximum) so none is being wasted. Also in both cases you can see that page A has a much larger proportion of the PageRank than the other 2 pages.

This is because pages B and C are passing PageRank to A and not to any other pages. We have channeled a large proportion of the site's PageRank to where we wanted it.
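Example 3's channeling can be verified with the same sort of sketch (d = 0.85, Simple mode):

```python
def pagerank(links, d=0.85, iterations=100):
    """links maps each page to the list of pages it links to."""
    pr = {page: 1.0 for page in links}
    for _ in range(iterations):
        prev = dict(pr)
        for page in pr:
            pr[page] = (1 - d) + d * sum(
                prev[t] / len(links[t]) for t in links if page in links[t])
    return pr

# A links to B and C; B and C each link back to A only.
pr = pagerank({'A': ['B', 'C'], 'B': ['A'], 'C': ['A']})
# A ~1.459459, B and C ~0.770270; the total is 3 and A holds the largest share.
```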

Example 4

Finally, keep the previous links and add a link from page C to page B. Start again with PR1 all round. After 1 iteration:-
Page A = 1.425 Page B = 1 Page C = 0.575
By comparison to the 1 iteration figures in the previous example, page A has lost some PageRank, page B has gained some and page C stayed the same. Page C now shares its "vote" between A and B.

Previously, A received all of it. That's why page A has lost out and why page B has gained. And after 100 iterations:-
Page A = 1.298245 Page B = 0.9999999 Page C = 0.7017543
When the dust has settled, page C has lost a little PageRank because, having now shared its vote between A and B, instead of giving it all to A, A has less to give to C in the A------>C link. So adding an extra link from a page causes the page to lose PageRank indirectly if any of the pages that it links to return the link. If the pages that it links to don't return the link, then no PageRank loss would have occurred. To make it more complicated, if the link is returned even indirectly (via a page that links to a page that links to a page etc), the page will lose a little PageRank. This isn't really important with internal links, but it does matter when linking to pages outside the site.
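Example 4's end figures can also be checked with a short Python sketch (d = 0.85, Simple mode):

```python
def pagerank(links, d=0.85, iterations=100):
    """links maps each page to the list of pages it links to."""
    pr = {page: 1.0 for page in links}
    for _ in range(iterations):
        prev = dict(pr)
        for page in pr:
            pr[page] = (1 - d) + d * sum(
                prev[t] / len(links[t]) for t in links if page in links[t])
    return pr

# As Example 3, plus the extra link from C to B.
pr = pagerank({'A': ['B', 'C'], 'B': ['A'], 'C': ['A', 'B']})
# A ~1.298245, B ~1.0, C ~0.701754 - A and C both end up lower than in Example 3.
```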


Example 5: new pages

Adding new pages to a site is an important way of increasing a site's total PageRank because each new page will add an average of 1 to the total. Once the new pages have been added, their new PageRank can be channeled to the important pages.
We'll use the calculator to demonstrate these.
Let's add 3 new pages to Example 3. Three new pages but they don't do anything for us yet. The small increase in the Total, and the new pages' 0.15, are unrealistic as we shall see. So let's link them into the site.
Link each of the new pages to the important page, page A. Notice that the Total PageRank has doubled, from 3 (without the new pages) to 6. Notice also that page A's PageRank has almost doubled.

There is one thing wrong with this model. The new pages are orphans. They wouldn't get into Google's index, so they wouldn't add any PageRank to the site and they wouldn't pass any PageRank to page A. They each need to be linked to from at least one other page.
If page A is the important page, the best page to put the links on is, surprisingly, page A. You can play around with the links but, from page A's point of view, there isn't a better place for them.
It is not a good idea for one page to link to a large number of pages so, if you are adding many new pages, spread the links around.
The chances are that there is more than one important page in a site, so it is usually suitable to spread the links to and from the new pages.
You can use the calculator to experiment with mini-models of a site to find the best links that produce the best results for its important pages.

Examples summary

You can see that, by organizing the internal links, it is possible to channel a site's PageRank to selected pages. Internal links can be arranged to suit a site's PageRank needs, but it is only useful if Google knows about the pages, so do try to ensure that Google spiders them.

Questions

Q: When a page has several links to another page, are all the links counted?
E.g. if page A links once to page B and 3 times to page C, does page C receive 3/4 of page A's shareable PageRank?
The PageRank concept is that a page casts votes for one or more other pages. Nothing is said in the original PageRank document about a page casting more than one vote for a single page. The idea seems to be against the PageRank concept and would certainly be open to manipulation by unrealistically proportioning votes for target pages. E.g. if an outbound link, or a link to an unimportant page, is necessary, add a bunch of links to an important page to minimize the effect.
Since we are unlikely to get a definitive answer from Google, it is reasonable to assume that a page can cast only one vote for another page, and that additional votes for the same page are not counted.

Q: When a page links to itself, is the link counted?

Again, the concept is that pages cast votes for other pages. Nothing is said in the original document about pages casting votes for themselves. The idea seems to be against the concept and, also, it would be another way to manipulate the results. So, for those reasons, it is reasonable to assume that a page can't vote for itself, and that such links are not counted.


Dangling links

"Dangling links are simply links that point to any page with no outgoing links. They affect the model because it is not clear where their weight should be distributed, and there are a large number of them. Often these dangling links are simply pages that we have not downloaded yet... Because dangling links do not affect the ranking of any other page directly, we simply remove them from the system until all the PageRanks are calculated. After all the PageRanks are calculated they can be added back in without affecting things significantly." - extract from the original PageRank paper by Google’s founders, Sergey Brin and Larry Page.


A dangling link is a link to a page that has no links going from it, or a link to a page that Google hasn't indexed. In both cases Google removes the links shortly after the start of the calculations and reinstates them shortly before the calculations are finished. In this way, their effect on the PageRank of other pages is minimal.
The results shown in Example 1 (right diagram) are wrong because page B has no links going from it, and so the link from page A to page B is dangling and would be removed from the calculations. The results of the calculations would show all three pages as having 0.15.
It may suit site functionality to link to pages that have no links going from them without losing any PageRank from the other pages, but it would be a waste of potential PageRank. Take a look at this example. The site's potential is 5 because it has 5 pages, but without page E linked in, the site only has 4.15.

Link page A to page E and click calculate. Notice that the site's total has gone down very significantly. But, because the new link is dangling and would be removed from the calculations, we can ignore the new total and assume the previous 4.15 to be true. That's the effect of functionally useful, dangling links in the site. There's no overall PageRank loss.

However, some of the site's potential total is still being wasted, so link Page E back to Page A and click Calculate. Now we have the maximum PageRank that is possible with 5 pages. Nothing is being wasted.

Although it may be functionally good to link to pages within the site without those pages linking out again, it is bad for PageRank. It is pointless wasting PageRank unnecessarily, so always make sure that every page in the site links out to at least one other page in the site.
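The five-page figures can be sketched in Python as well. The four-page loop below is an assumption about the example's layout (the original diagram isn't shown here), but the totals match the text (d = 0.85):

```python
def pagerank(links, d=0.85, iterations=100):
    """links maps each page to the list of pages it links to."""
    pr = {page: 1.0 for page in links}
    for _ in range(iterations):
        prev = dict(pr)
        for page in pr:
            pr[page] = (1 - d) + d * sum(
                prev[t] / len(links[t]) for t in links if page in links[t])
    return pr

# Pages A-D in a loop; the dangling A-to-E link is removed from the calculations,
# as the paper describes, so E keeps only the minimum 0.15.
without_e = pagerank({'A': ['B'], 'B': ['C'], 'C': ['D'], 'D': ['A'], 'E': []})
# total ~4.15

# Link E back to A: the link to E is no longer dangling and counts fully.
with_e = pagerank({'A': ['B', 'E'], 'B': ['C'], 'C': ['D'], 'D': ['A'], 'E': ['A']})
# total ~5.0 - the maximum possible with 5 pages
```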


Inbound links

Inbound links (links into the site from the outside) are one way to increase a site's total PageRank. The other is to add more pages. Where the links come from doesn't matter. Google recognizes that a webmaster has no control over other sites linking into a site, and so sites are not penalized because of where the links come from. There is an exception to this rule but it is rare and doesn't concern this article. It isn't something that a webmaster can accidentally do.
The linking page's PageRank is important, but so is the number of links going from that page. For instance, if you are the only link from a page that has a lowly PR2, you will receive an injection of 0.15 + 0.85(2/1) = 1.85 into your site, whereas a link from a PR8 page that has another 99 links from it will increase your site's PageRank by 0.15 + 0.85(8/100) = 0.218. Clearly, the PR2 link is much better - or is it? See here for a probable reason why this is not the case.
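The injection arithmetic comes straight from the equation and is easy to check (d = 0.85; a sketch, not an exact model of Google's sums):

```python
def injection(pr, outbound_links, d=0.85):
    """PageRank that one link passes to the page it points to."""
    return (1 - d) + d * (pr / outbound_links)

print(injection(2, 1))    # the only link from a PR2 page: ~1.85
print(injection(8, 100))  # one of 100 links from a PR8 page: ~0.218
```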
Once the PageRank is injected into your site, the calculations are done again and each page's PageRank is changed. Depending on the internal link structure, some pages' PageRank is increased and some are unchanged, but no page loses any PageRank.
It is beneficial to have the inbound links coming to the pages to which you are channeling your PageRank. A PageRank injection to any other page will be spread around the site through the internal links. The important pages will receive an increase, but not as much of an increase as when they are linked to directly. The page that receives the inbound link makes the biggest gain.
It is easy to think of our site as being a small, self-contained network of pages. When we do the PageRank calculations we are dealing with our small network. If we make a link to another site, we lose some of our network's PageRank, and if we receive a link, our network's PageRank is added to. But it isn't like that. For the PageRank calculations, there is only one network - every page that Google has in its index. Each iteration of the calculation is done on the entire network and not on individual websites.
Because the entire network is interlinked, and every link and every page plays its part in each iteration of the calculations, it is impossible for us to calculate the effect of inbound links to our site with any realistic accuracy.



Outbound links

Outbound links are a drain on a site's total PageRank. They leak PageRank. To counter the drain, try to ensure that the links are reciprocated. Depending on the PageRank of the pages at each end of an external link, and the number of links out from those pages, reciprocal links can produce a net gain or a net loss of PageRank, so you need to take care when choosing where to exchange links.

When PageRank leaks from a site via a link to another site, all the pages in the internal link structure are affected. (This doesn't always show after just 1 iteration).

The page that you link out from makes a difference to which pages suffer the most loss. Without a program to perform the calculations on specific link structures, it is difficult to decide on the right page to link out from, but the generalization is to link from the one with the lowest PageRank.

Many websites need to contain some outbound links that are nothing to do with PageRank. Unfortunately, all 'normal' outbound links leak PageRank. But there are 'abnormal' ways of linking to other sites that don't result in leaks. PageRank is leaked when Google recognizes a link to another site.

The answer is to use links that Google doesn't recognize or count. These include form actions and links contained in JavaScript code.

Form actions

A form's 'action' attribute does not need to be the URL of a form parsing script. It can point to any html page on any site. Try it.

Example (the domain and page name are purely illustrative):

<form action="http://www.domain.com/somepage.html">
<input type="submit" value="Click here">
</form>

To be really sneaky, the action attribute could be in some JavaScript code rather than in the form tag, and the JavaScript code could be loaded from a 'js' file stored in a directory that is barred to Google's spider by the robots.txt file.

JavaScript

Example (again, the URL is illustrative - any JavaScript-driven link of this kind will do):

<a href="javascript:location.href='http://www.domain.com/somepage.html'">Click here</a>


Like the form action, it is sneaky to load the JavaScript code, which contains the URLs, from a separate 'js' file, and sneakier still if the file is stored in a directory that is barred to googlebot by the robots.txt file.


The "rel" attribute

As of 18th January 2005, Google, together with other search engines, recognizes a new attribute of the anchor tag. The attribute is "rel", with the value "nofollow", and it is used as follows:-

<a href="http://www.domain.com/somepage.html" rel="nofollow">link text</a>

The attribute tells Google to ignore the link completely. The link won't help the target page's PageRank, and it won't help its rankings. It is as though the link doesn't exist. With this attribute, there is no longer any need for JavaScript, forms, or any other method of hiding links from Google.


So how much additional PageRank do we need to move up the toolbar?

First, let me explain in more detail why the values shown in the Google toolbar are not the actual PageRank figures. According to the equation, and to the creators of Google, the billions of pages on the web average out to a PageRank of 1.0 per page. So the total PageRank on the web is equal to the number of pages on the web * 1, which equals a lot of PageRank spread around the web.
The Google toolbar range is from 1 to 10. (They sometimes show 0, but that figure isn't believed to be a PageRank calculation result). What Google does is divide the full range of actual PageRanks on the web into 10 parts - each part is represented by a value as shown in the toolbar. So the toolbar values only show what part of the overall range a page's PageRank is in, and not the actual PageRank itself. The numbers in the toolbar are just labels.

Whether or not the overall range is divided into 10 equal parts is a matter for debate - Google aren't saying. But because it is much harder to move up a toolbar point at the higher end than it is at the lower end, many people (including me) believe that the divisions are based on a logarithmic scale, or something very similar, rather than the equal divisions of a linear scale.
Let's assume that it is a logarithmic, base 10 scale, and that it takes 10 properly linked new pages to move a site's important page up 1 toolbar point. It will take 100 new pages to move it up another point, 1000 new pages to move it up one more, 10,000 to the next, and so on. That's why moving up at the lower end is much easier than at the higher end.
In reality, the base is unlikely to be 10. Some people think it is around the 5 or 6 mark, and maybe even less. Even so, it still gets progressively harder to move up a toolbar point at the higher end of the scale.
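As a sketch of that arithmetic - the base of 10 and the 10-page first step are assumptions from the discussion above, not known figures:

```python
base = 10          # assumed; the real base is unknown outside Google
first_step = 10    # properly linked new pages assumed to earn the first extra toolbar point

# Hypothetical cost, in new pages, of each successive toolbar point.
pages_needed = {point: first_step * base ** (point - 1) for point in range(1, 6)}
print(pages_needed)  # {1: 10, 2: 100, 3: 1000, 4: 10000, 5: 100000}
```

With a base of 5 or 6 the steps are smaller, but the pattern - each point costing several times the last - is the same.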
Note that as the number of pages on the web increases, so does the total PageRank on the web, and as the total PageRank increases, the positions of the divisions in the overall scale must change. As a result, some pages drop a toolbar point for no 'apparent' reason.
If the page's actual PageRank was only just above a division in the scale, the addition of new pages to the web would cause the division to move up slightly and the page would end up just below the division. Google's index is always increasing and they re-evaluate each of the pages on more or less a monthly basis. It's known as the "Google dance". When the dance is over, some pages will have dropped a toolbar point. A number of new pages might be all that is needed to get the point back after the next dance.


The toolbar value is a good indicator of a page's PageRank but it only indicates that a page is in a certain range of the overall scale. One PR5 page could be just above the PR5 division and another PR5 page could be just below the PR6 division - almost a whole division (toolbar point) between them.

Conclusion & Tips

Domain names and Filenames

To a spider, www.domain.com/, domain.com/, www.domain.com/index.html and domain.com/index.html are different URLs and, therefore, different pages. Surfers arrive at the site's home page whichever of the URLs is used, but spiders see them as individual URLs, and it makes a difference when working out the PageRank. It is better to standardize the URL you use for the site's home page. Otherwise each URL can end up with a different PageRank, whereas all of the PageRank should have gone to just one URL.

If you think about it, how can a spider know the filename of the page that it gets back when requesting www.domain.com/? It can't. The filename could be index.html, index.htm, index.php, default.html, etc. The spider doesn't know. If you link to index.html within the site, the spider could compare the 2 pages but that seems unlikely. So they are 2 URLs and each receives PageRank from inbound links. Standardizing the home page's URL ensures that the PageRank it is due isn't shared with ghost URLs.


Adding new pages

There is a possible negative effect of adding new pages. Take a perfectly normal site. It has some inbound links from other sites and its pages have some PageRank. Then a new page is added to the site and is linked to from one or more of the existing pages. The new page will, of course, acquire PageRank from the site's existing pages. The effect is that, whilst the total PageRank in the site is increased, one or more of the existing pages will suffer a PageRank loss due to the new page making gains. Up to a point, the more new pages that are added, the greater is the loss to the existing pages. With large sites, this effect is unlikely to be noticed but, with smaller ones, it probably would be.

So, although adding new pages does increase the total PageRank within the site, some of the site's pages will lose PageRank as a result. The answer is to link new pages in such a way within the site that the important pages don't suffer, or to add sufficient new pages to make up for the effect (that can sometimes mean adding a large number of new pages), or better still, to get some more inbound links.


Miscellaneous

The Google toolbar

If you have the Google toolbar installed in your browser, you will be used to seeing each page's PageRank as you browse the web. But all isn't always as it seems. Many pages that Google displays the PageRank for haven't been indexed in Google and certainly don't have any PageRank in their own right. What is happening is that one or more pages on the site have been indexed and a PageRank has been calculated. The PageRank figure for the site's pages that haven't been indexed is allocated on the fly - just for your toolbar. The PageRank itself doesn't exist.
It's important to know this so that you can avoid exchanging links with pages that really don't have any PageRank of their own. Before making exchanges, search for the page on Google to make sure that it is indexed.


Sub-directories

Some people believe that Google drops a page's PageRank by a value of 1 for each sub-directory level below the root directory. E.g. if the value of pages in the root directory is generally around 4, then pages in the next directory level down will be generally around 3, and so on down the levels. Other people (including me) don't accept that at all. Either way, because some spiders tend to avoid deep sub-directories, it is generally considered to be beneficial to keep directory structures shallow (directories one or two levels below the root).

Bibliography
The Google Story by David A. Vise
PageRank Explained by Phil Craven

URL:
www.webworkshop.net
www.Google.co.in

Saturday, January 26, 2008

Sorry for being Offline....

Hi fans....!!!

I am very sorry to say that I will be offline for a few weeks due to some unavoidable circumstances.

Hope to be right back with some exceptional stuff.....

Just follow me....!!!Yo!!!.....

Regards,

ashesh deep

Wednesday, December 5, 2007

THE GOOGLE STORY ---- by ashesh deep

"Here is the story behind one of the most remarkable Internet successes of our time. Based on scrupulous research and extraordinary access to Google, this article takes you inside the creation and growth of a company whose name is a favorite brand and a standard verb recognized around the world. Its stock is worth more than General Motors’ and Ford’s combined, its staff eats for free in a dining room that used to be run by the Grateful Dead’s former chef, and its employees traverse the firm’s colorful Silicon Valley campus on scooters and inline skates.



THE GOOGLE STORY is the definitive account of the populist media company powered by the world’s most advanced technology that in a few short years has revolutionized access to information about everything for everybody everywhere. In 1998, Moscow-born Sergey Brin and Midwest-born Larry Page dropped out of graduate school at Stanford University to, in their own words, “change the world” through a search engine that would organize every bit of information on the Web for free. While the company has done exactly that in more than one hundred languages, Google’s quest continues as it seeks to add millions of library books, television broadcasts, and more to its searchable database.



You will learn about the amazing business acumen and computer wizardry that started the company on its astonishing course; the secret network of computers delivering lightning-fast search results; the unorthodox approach that has enabled it to challenge Microsoft’s dominance and shake up Wall Street. Even as it rides high, Google wrestles with difficult choices that will enable it to continue expanding while sustaining the guiding vision of its founders’ mantra: DO NO EVIL."



Back Before it was a GOOGLE......


According to Google lore, company founders Larry Page and Sergey Brin were not terribly fond of each other when they first met as Stanford University graduate students in computer science in 1995. Larry was a 24-year-old University of Michigan alumnus on a weekend visit; Sergey, 23, was among a group of students assigned to show him around. They argued about every topic they discussed. Their strong opinions and divergent viewpoints would eventually find common ground in a unique approach to solving one of computing's biggest challenges: retrieving relevant information from a massive set of data.
By January of 1996, Larry and Sergey had begun collaboration on a search engine called BackRub, named for its unique ability to analyze the "back links" pointing to a given website. Larry, who had always enjoyed tinkering with machinery and had gained some notoriety for building a working printer out of Lego™ bricks, took on the task of creating a new kind of server environment that used low-end PCs instead of big expensive machines. Afflicted by the perennial shortage of cash common to graduate students everywhere, the pair took to haunting the department's loading docks in hopes of tracking down newly arrived computers that they could borrow for their network.
A year later, their unique approach to link analysis was earning BackRub a growing reputation among those who had seen it. Buzz about the new search technology began to build as word spread around campus.




The hard work continued...


Larry and Sergey continued working to perfect their technology through the first half of 1998. Following a path that would become a key tenet of the Google way, they bought a terabyte of disks at bargain prices and built their own computer housings in Larry's dorm room, which became Google's first data center. Meanwhile Sergey set up a business office, and the two began calling on potential partners who might want to license a search technology better than any then available. Despite the dotcom fever of the day, they had little interest in building a company of their own around the technology they had developed.
Among those they called on was friend and Yahoo! founder David Filo. Filo agreed that their technology was solid, but encouraged Larry and Sergey to grow the service themselves by starting a search engine company. "When it's fully developed and scalable," he told them, "let's talk again." Others were less interested in Google, as it was now known. One portal CEO told them, "As long as we're 80 percent as good as our competitors, that's good enough. Our users don't really care about search."



One Lucky Click away.....


Andy Bechtolsheim, one of the founders of Sun Microsystems, was used to taking the long view. One look at their demo and he knew Google had potential – a lot of potential. But though his interest had been piqued, he was pressed for time. As Sergey tells it, "We met him very early one morning on the porch of a Stanford faculty member's home in Palo Alto. We gave him a quick demo. He had to run off somewhere, so he said, 'Instead of us discussing all the details, why don't I just write you a check?' It was made out to Google Inc. and was for $100,000."
The investment created a small dilemma. Since there was no legal entity known as "Google Inc.," there was no way to deposit the check. It sat in Larry's desk drawer for a couple of weeks while he and Sergey scrambled to set up a corporation and locate other funders among family, friends, and acquaintances. Ultimately they brought in a total initial investment of almost $1 million.
Everyone's favorite garage band
In September 1998, Google Inc. opened its door in Menlo Park, California. The door came with a remote control, as it was attached to the garage of a friend who sublet space to the new corporation's staff of three. The office offered several big advantages, including a washer and dryer and a hot tub. It also provided a parking space for the first employee hired by the new company: Craig Silverstein, now Google's director of technology.
Already Google.com, still in beta, was answering 10,000 search queries each day. The press began to take notice of the upstart website with the relevant search results, and articles extolling Google appeared in USA TODAY and Le Monde. That December, PC Magazine named Google one of its Top 100 Web Sites and Search Engines for 1998. Google was moving up in the world.





A Real life Instance of Sergey & Larry by an Expert...


Sergey Brin and Larry Page cruised onto the stage to the kind of roars and excitement that teenagers normally reserve for rock stars. They had entered the auditorium through a rear door, leaving behind photographers, sunglasses, a pair of hired cars with drivers, and an attractive young woman who was traveling with Sergey. Dressed casually, they sat down and cracked smiles, pleased at their heroes’ welcome. They were near the birthplace of civilization, thousands of miles and an ocean away from the place where their work together had begun. It seemed as good a place as any for a pair of young superstars, whose shared ambition revolved around changing the world, to talk about what they had done, how they had done it, and what their dreams were for the future.

“Do you guys know the story of Google?” Page asked. “Do you want me to tell it?”

“Yes!” the crowd shouted.

It was September 2003, and the hundreds of students and faculty at this Israeli high school geared toward the brightest young minds in mathematics wanted to hear everything the youthful inventors had to say. Many of them identified with Brin because, like him, they had escaped with their families from Mother Russia in search of freedom. And they related to Page just as eagerly, since he was part of the duo that had created the most powerful and accessible information tool of their time–a tool sparking change that was already sweeping the world. Like kids playing basketball and dreaming of being the next Michael Jordan, the students wanted to be like Sergey Brin and Larry Page.

“Google was started when Sergey and I were Ph.D. students at Stanford University in computer science,” Page began, “and we didn’t know exactly what we wanted to do. I got this crazy idea that I was going to download the entire Web onto my computer. I told my advisor that it would only take a week. After about a year or so, I had some portion of it.” The students laughed. “So optimism is important,” he went on.

“You have to be a little silly about the goals you are going to set. There is a phrase I learned in college called, ‘Having a healthy disregard for the impossible,’” Page said. “That is a really good phrase. You should try to do things that most people would not.”

As proponents of tackling important problems and seeking transformative solutions, Brin and Page were certainly armed with a healthy disregard for the impossible. And while not much older than the throng of high school students who packed the jammed auditorium, they were truly in a class by themselves. In the rich and storied history of American invention and capitalism, there had never been a meteoric rise comparable to theirs. It had taken Thomas Edison a quarter century to invent the lightbulb; Alexander Graham Bell had spent many years developing the telephone; Henry Ford created the modern assembly line and turned it into the mass production and consumption of automobiles only after decades of work; and Thomas Watson Jr. labored long and hard before IBM rolled out the modern computer. But Brin and Page, in just five years, had taken a graduate school research project and turned it into a multibillion-dollar enterprise with global reach. They were in Tel Aviv, but had it been Tokyo, Toronto, or Taipei, the Google Guys would have received the same raucous reception. The youthful pair had changed the lives of millions of people by giving them free, instant access to information about any subject. And by being devilishly clever in the Internet age, they had created the best-known new brand in the world without advertising to promote the name. The two were astute businessmen, and knew that to succeed over time it was imperative that they remain in complete control of their privately owned business and its quirky culture. It saddened Page that many inventors die without ever seeing the fruits of their labors.

Determined to avoid a similar fate, he and Brin understood how to use the right connections, access to money and brilliant minds, raw computing power, and a culture of limitless possibilities to make Google a beacon and a magnet. In the click of a mouse, it had replaced Microsoft as the place for the world’s top technologists to work. Yet they knew that maintaining the pace of innovation and the mantle of leadership would be no easy feat, since they faced a deeper-pocketed competitor in Microsoft, and a ruthless combatant in its chief, the billionaire Bill Gates.

Supremely confident about their achievements and vision, Brin and Page had been on a roll ever since they started working together. They wanted no one–neither competitors nor outside investors–to come between them or interfere in any way. That combination of dependence on each other, and independence from everyone else, had contributed immeasurably to their astounding success.

“So I started downloading the Web, and Sergey started helping me because he was interested in data mining and making sense of the information,” Page went on, continuing with the pair’s history. “When we first met each other, we thought the other was really obnoxious. Then we hit it off and became really good friends. That was about eight years ago. And we started working really, really hard on it.” He stressed this critical point: inspiration still required plenty of perspiration. “There was an important lesson for us. We worked through holidays, and worked many, many hours a day. It ended up working out, but it is hard because it takes a lot of effort.”

Page said that as they told friends about Google, more and more people started using it. “Pretty soon we had 10,000 searches a day at Stanford. And we sat around the room and looked at the machines and said, ‘This is about how many searches we can do, and we need more computers.’ Our whole history has been like that. We always need more computers.”

It was a sentiment the students and teachers at the school could relate to.

“So, we started a company. Being in Silicon Valley at this time, it was relatively easy to do. You have a number of very excellent companies here as well,” he added, alluding to the growing technology sector in Israel, “so this is also a good environment to do things like that. We started up the company, and it grew and grew and grew–and that is why we are here. So that,” he concluded, “is ‘The Google Story.’”

But there was something more he wanted to convey: closing words of inspiration.

“Let me explain to you guys why I am so excited about being here,” Page said. “And it is really that there is so much leverage in science and technology. I think most people don’t really realize that. There is so much that can be done with these new technologies. We are an example of that.

“Two kind of crazy kids have had a big impact on the world because of the power of the Internet, the power of the distribution, and the power of software and computers. And there are so many things like that out there. There are so many opportunities where you can have a huge impact on the world by using the leverage of science and technology. All of you are uniquely positioned, and you should be excited about that.”

Brin interjected that the pair’s overseas travels included not only Israel but also several European countries. They were on the prowl for talent, and they were considering opening new offices. For Sergey, who has a sharp sense of humor, the search was ongoing. “We spend most of our time trying to get Internet access,” he quipped. “We surf every day. I was on until 4 a.m. last night. And then I got on again earlier this morning. It is an invaluable tool. It is kind of like a respirator now.”

Having fled Russia with his family for freedom from anti-Semitism and discrimination, Brin had something powerful in common with the experience of many of the Russian-born students at the Israeli high school. His father, Michael Brin, had captured the essence of why he and his wife and young son had left Russia when he said that one’s love of country is not always reciprocated. Sergey recognized that he had a special opportunity to motivate these students, so he took the wireless microphone in hand, stood up, and connected. He had been advised before they came to the high school that this group truly was extraordinary, the best of breed, and the recent recipients of all but three of the top mathematics prizes in the country.

“Ladies and gentlemen, girls and boys . . .” he said, before speaking in Russian, to the delight of the students, who broke out in spontaneous applause.

“I came and emigrated from Russia when I was six. I went to the United States. Similar to here, I have standard Russian-Jewish parents. My dad is a math professor. They have a certain attitude about studies. And I think I can relate that here, because I was told that your school recently got seven out of the top ten places in a math competition throughout all Israel.”

The students, unaware of what was coming next, applauded their seven-out-of-ten achievement and the recognition from Sergey.

“What I have to say,” Sergey continued, “is in the words of my father, ‘What about the other three?’”

A ripple of laughter went through the crowd. “You have several things here that I didn’t have when I was going through high school. The first one is the beautiful weather and the windows. My school in Maryland, which was built during the ’70s energy crisis, has three-foot-thick walls and no windows. You are very fortunate to be in such a beautiful setting. The other thing we didn’t have back then was Internet access.

“Let me get a show of hands. How many of you used the Internet yesterday?”

Virtually every hand in the auditorium shot up.






What's next from Google?

It's hard to say. We don't talk much about what lies ahead, because we believe one of our chief competitive advantages is surprise. And then there's innovation, and an almost fanatical devotion to our users. These are the things that fuel us, and, we hope, fuel our dreams.


Courtesy: 1. http://www.google.com/corporate/history.html
2. The Google Story by David A. Vise & Mark Malseed

Thursday, November 29, 2007

How to Reset the Password of Windows XP, if Forgotten Accidentally? ---by ashesh deep


To solve the above problem, we must follow these steps:

Download the file "cd060213.zip" from

Extract the zip file and burn this image to a CD; it is a bootable Linux kernel image, so burn it as an image of a CD. Set your computer to boot from the CD/DVD drive, insert the CD in the drive, and reboot.

Then follow the easy on-screen instructions to reset the Windows XP password.

Friday, November 23, 2007

Bad news for IT employees...

IT companies, faced with a free-falling dollar and receding bottom lines, are cutting the wages of their staff, reducing increments and freezing recruitment. They are also forcing techies to work for more hours so that every dollar they earn is worth it. Industry analysts say Indian software companies, labelled globally as cyber sweat shops, spend up to 45 per cent of their dollar earnings on staff salaries. And every dollar counts at a time when the greenback has plunged to an all-time low of Rs 39.39. Of course, the latest developments have been worrying those techies who earn part-dollar wages.

Wednesday, November 14, 2007

India's SUPERCOMPUTER is really Superb...

Hi!
ashesh deep is back with a technical but interesting post. Being an Indian, I really feel proud to have a software background. I am going to tell you about a new national craze. I saw this article in the Times news and so wanted to throw some light on SUPERCOMPUTERS. So, again the repeated lines----"just follow me"----yo........ a beautiful mind ashesh deep is back.....
India's Supercomputer is in the Top 4
India has surprisingly broken into the Top Ten in a much-fancied twice-yearly list of the fastest supercomputers in the world, marking a giant leap in its push towards becoming a global IT power. A cluster platform at Pune's Computational Research Laboratories (CRL), a Tata subsidiary, has been ranked fourth in the widely anticipated TOP500 list released at an international conference on high performance computing in Reno, Nevada.
It is the first time that India has figured in the Top 100, let alone the Top Ten, of the supercomputing list. The list, which is usually dominated by the United States, is also notable this time because it has five new entrants in the Top Ten, with supercomputers in Germany and Sweden up there with the one in India. The fourth-ranking Tata supercomputer, named EKA after the Sanskrit term for one, is a Hewlett-Packard Cluster Platform 3000 BL460c system. CRL has integrated this system with its own innovative routing technology to achieve 117.9 teraflops, or trillions of calculations per second.

The No. 1 position was again claimed by the BlueGene/L System, a joint development of IBM and the US Department of Energy's (DOE) National Nuclear Security Administration (NNSA) and installed at DOE's Lawrence Livermore National Laboratory in California.
Although BlueGene/L has occupied the No. 1 position since November 2004, the current system is much faster at 478.2 TFlop/s compared to 280.6 TFlop/s six months ago before its upgrade. At No. 2 is a BlueGene/P system installed in Germany at the Forschungszentrum Juelich (FZJ), and it achieved performance of 167.3 TFlop/s. The No. 3 system at the New Mexico Computing Applications Center (NMCAC) in Rio Rancho, N.M., posted a speed of 126.9 TFlop/s.

Ashwin Nanda, who heads the CRL, told the conference that its supercomputer had been built with HP servers using Intel chips with a total of 14,240 processor cores. The system went operational last month and achieved a performance of 117.9 teraflops. The system is slated for use in government scientific research and product development for Tata, as well as to provide services to US customers, Nanda said. In a statement in India, the Tata Group said "EKA marks a milestone in the Tata group's effort to build an indigenous high performance computing solution." CRL, it disclosed, built the supercomputer facility using dense data centre layout and novel network routing and parallel processing library technologies developed by its scientists.

While the US is clearly the leading consumer of high power computing systems with 284 of the 500 systems, Europe follows with 149 systems and Asia has 58 systems. In Asia, Japan leads with 20 systems, Taiwan has 11, China 10 and India 9. The second ranked supercomputer in India, rated 58th in the Top500 list, is at the Indian Institute of Science in Bangalore. Others are ranked 152, 158, 179, 336, 339, 340 and 371. Horst Simon, associate laboratory director, computing sciences, at the Lawrence Berkeley National Laboratory in Berkeley, and one of the Top500 list authors, told Computerworld that it was exciting to see India's entrance into the Top 10 because the country has "huge potential" as a supercomputing nation.
"India is very well known for having great software engineers and great mathematicians, and having a (HPC) center there is a catalyst for doing more in the high performance computing field," Simon told the industry publication, adding that it brings "a whole new set of players into the supercomputing world." India has made steady progress in the field of supercomputing from the time it first bought two from the US pioneer Cray Research in 1988, at a time of a tough technology control regime. US strictures on the scope of its use and demand for intrusive monitoring and compliance led India to devise its own supercomputers using clusters of multiprocessors.
Supercomputers are typically used for calculation-intensive problem solving in quantum mechanical physics, molecular modeling, weather forecasting and climate research, and physical simulation including that of nuclear tests. The term supercomputer is quite relative. It was first used in 1929 to refer to large custom-built tabulators IBM made for Columbia University. The supercomputers of the 1970s are today's desktops.
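For a sense of scale, the TFlop/s figures quoted above can be turned into plain numbers with a few lines of arithmetic. This is only a back-of-the-envelope sketch using the article's own figures; the per-core estimate assumes the 14,240-core count mentioned for EKA.

```python
# Back-of-the-envelope arithmetic for the TFlop/s figures quoted above.
# 1 teraflop/s = 10**12 floating-point operations per second.
TFLOP = 10**12

eka = 117.9 * TFLOP           # CRL's EKA, ranked No. 4
bluegene_l = 478.2 * TFLOP    # BlueGene/L, ranked No. 1 after its upgrade
bluegene_l_old = 280.6 * TFLOP

# How much faster did BlueGene/L get with its upgrade?
speedup = bluegene_l / bluegene_l_old
print(f"BlueGene/L upgrade speedup: {speedup:.2f}x")

# EKA's throughput spread over its 14,240 processor cores
per_core = eka / 14240
print(f"~{per_core / 1e9:.1f} GFlop/s per core")
```

The upgrade works out to roughly a 1.7x speedup, and EKA's aggregate figure corresponds to about 8 GFlop/s per core, which is the kind of per-processor throughput commodity Intel chips of that era delivered.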
Post some comments on it.
Courtesy: Times of India

Hyde Act ----by ashesh deep

1) Full co-operation in civilian nuclear energy has been denied to India:

a) U.S. unwillingness to co-operate in the areas of spent-fuel reprocessing and uranium enrichment related to the full nuclear fuel cycle.
b) Denial of the nuclear fuel supply assurances and alternate supply arrangements mutually agreed upon earlier.
c) Limits co-operation in the GNEP programme. India will not be permitted to join as a technology developer but as a recipient state.





2) India asked to participate in the international effort on nuclear non-proliferation, with a policy congruent to that of the United States.
The Hyde Act envisages (Section-109) India to jointly participate with the U.S. in a programme involving the U.S. National Nuclear Security Administration to further nuclear non-proliferation goals. This goes much beyond the IAEA norms and has been unilaterally introduced apparently without the knowledge of the Indian government. In addition, the U.S. President is required to annually report to the congress whether India is fully and actively participating in U.S. and international efforts to dissuade, isolate and if necessary sanction and contain Iran for its pursuit of indigenous efforts to develop nuclear capabilities. These stipulations in the Act and others pertaining to the Proliferation Security Initiative (PSI), the Wassenaar Arrangement, and the Australia Group etc. are totally outside the scope of the July 18th Agreement and they constitute intrusion into India's independent decision making and policy matters. India's adherence to MTCR is also unnecessarily brought in.




3) Impact on our Strategic Defence Programme

In responding to the concerns earlier expressed by us, the Prime Minister stated in the Rajya Sabha on August 17, 2006 that "we are fully conscious of the changing complexity of the international political system. Nuclear weapons are an integral part of our national security and will remain so, pending the elimination of all nuclear weapons and universal non-discriminatory nuclear disarmament. Our freedom of action with regard to our strategic programmes remains unrestricted. The nuclear agreement will not be allowed to be used as a backdoor method of introducing NPT type restrictions on India." And yet, this Act totally negates the above assurance of the PM.
In view of the uncertain strategic situation around the globe, we are of the view that India must not directly or indirectly concede our right to conduct future nuclear weapon tests, if these are found necessary to strengthen our minimum deterrence. In this regard, the Act makes it explicit that if India conducts such tests, the nuclear cooperation will be terminated and we will be required to return all equipment and materials we might have received under this deal. To avoid any abrupt stoppage of nuclear fuel for reactors which we may import, India and the U.S. had mutually agreed to certain alternative fuel supply options which this Act has totally eliminated out of consideration. Thus, any future nuclear test will automatically result in a heavy economic loss to the country because of the inability to continue the operation of all such imported reactors.
Furthermore, the PM had assured the nation that "India is willing to join any non-discriminatory, multilaterally negotiated and internationally verifiable Fissile Material Cut-off Treaty (FMCT), as and when it is concluded in the Conference on Disarmament." But, the Act requires the U.S. to "encourage India to identify and declare a date by which India would be willing to stop production of fissile material for nuclear weapons unilaterally or pursuant to a multilateral moratorium or treaty."
In his Rajya Sabha address, the PM had said, "Our commitment towards non-discriminatory global nuclear disarmament remains unwavering, in line with the Rajiv Gandhi Action Plan. There is no dilution on this count." Unfortunately, the Act is totally silent on the U.S. working with India to move towards universal nuclear disarmament, but it eloquently covers all aspects of non-proliferation controls of U.S. priority, into which they want to draw India into committing.
In summary, it is obvious that the Hyde Act still retains many of the objectionable clauses in the earlier House and Senate bills on which the Prime Minister had clearly put forth his objections and clarified the Indian position in both Houses of Parliament. Once this Act is signed into law, all further bilateral agreements with the U.S. will be required to be consistent with this law.
As such, the Government of India may convey these views formally to the U.S. Administration and they should be reflected in the 123 Agreement.

Wednesday, November 7, 2007

"INDO-US Nuclear Deal and 123 Agreement" ---an exclusive explanatory article

Hi!
We all have heard a lot about the "Indo-US nuclear deal controversy", or the 123 Agreement. I want to throw some light on this topic as well so that, in short, everyone can understand what this nuclear deal is all about. In brief, I will describe the aspects of the so-called deal and the controversies following it up.
So, as usual, follow me up..........
The Indo-US nuclear deal is being regarded as the biggest breakthrough in years.
The 123 agreement that will make the deal operational, was finally made public earlier on Aug.03, 2007. This, after months of tough negotiations between Indian and American diplomats that took place, even as the deal battled its way through Parliament in Delhi and the US Congress.
Many have consistently raised concerns that this make-or-break deal might be bad for India, but here is a look at more of the fine print of the deal, which has a life span of 40 years.
About:
The 123 agreement is a civil nuclear deal; therefore, it will have no bearing on India's strategic and military programme, and India can make a bomb. That is completely out of the ambit of the deal. In the text of the deal there is a clause that says the agreement will in no way be a hindrance to India's strategic programme.
Therefore, India can continue to make a bomb with its own fuel. What is clear from the draft of the 123 agreement is that there is no legally binding commitment on India to never test again. India, if it wants to, can choose to conduct a nuclear test. If India does conduct a nuclear test, it will not be violating any international treaty or agreement, because there is no mention of testing or detonation in this bilateral agreement.
Essentially, what the controversy has been over is whether if India conducts a test the Americans under their own laws would have the right to take back all the fuel that they give us.
Controversies & "Right of Return" :
The deal interestingly says that the right of return that the Americans have does not automatically come into effect. It is something the US administration chooses to do. They would have to stop cooperation with India. But whether or not they take back fuel is something they would have to choose to do.
Even after the US chooses to do that, there are about seven to eight barriers before the right of return actually comes into play. What the agreement says is that it will take into account the circumstances in which India conducts a nuclear test. These include a ''changed security environment'' or action which could impact national security.
Essentially, what it boils down to is that the right of return may not be invoked if Pakistan or China conduct nuclear tests and India responds to that by conducting a test of its own. In a way, this is the first international agreement which would justify the circumstances in which a nuclear test is conducted.
So India is not giving up its right to test, and the right of return of nuclear fuel does not automatically come into play. Apart from this, there are certain assurances given by US President George W Bush to the Indian side.
Those have been verbatim repeated in the text of the agreement. These assurances are that the US would ensure that there is a lifetime supply of fuel for India's nuclear reactors and that they help India build its strategic fuel reserve.
If the US is unable to fulfill this commitment, it will convene a group of countries like Russia, France and the UK to ensure supply. Even if, for some reason, they were to take back nuclear fuel, India retains the right to seek alternate sources of fuel for itself. India will have to build a strategic reserve so that it does not run out of nuclear fuel.
Any kind of feedback will be heartily welcomed.

Saturday, November 3, 2007

Visvesvaraya Technological University


Visvesvaraya Technological University, also spelt Visveswaraiah Technological University, (VTU), is a University in Karnataka, India.
At present, 141 engineering colleges all over Karnataka are affiliated to VTU. The intake every year for engineering under this university inclusive of post-graduate courses is around 38,000. The university has 25 branches of Bachelor of Engineering and 54 branches of Masters in Technology courses. B. E. (Bio-Technology) was an under-graduate branch introduced in 2002-03 in 21 colleges. In addition, the university offers MBA and MCA courses. The university also offers M. Sc. (Engg.) through Research and Ph.D. programmes. Currently, the University has 249 registered candidates for M.Sc. (Engg.) by Research and Ph.D. in 80 Research Centres in 24 affiliated colleges.
The University has accomplished the considerable task of bringing colleges previously affiliated to different universities, with different syllabi, different procedures and different traditions, under one umbrella.
The university is also known for its record number of placement activities. Every year thousands of students are recruited by various companies through campus placement programs.
The first batch, consisting of approximately 13,000 under-graduate students (engineering and technology fields) admitted in the academic year 1998-99, graduated from this University in July 2002. Four batches of M. Tech., three batches of MBA and two batches of MCA students have so far graduated from the university. The total number of post-graduates who have completed their studies in this university is around 5000.

Sunday, October 28, 2007

STOCK MARKET


A stock market is a market for the trading of company stock and derivatives thereof; these include securities listed on a stock exchange as well as those traded only privately.



The Definition


The term 'the stock market' is a concept for the mechanism that enables the trading of company stocks (collective shares), other securities, and derivatives. Bonds are still traditionally traded in an informal, over-the-counter market known as the bond market. Commodities are traded in commodities markets, and derivatives are traded in a variety of markets (but, like bonds, mostly 'over-the-counter').
The size of the worldwide 'bond market' is estimated at $45 trillion. The size of the 'stock market' is estimated at about $51 trillion. The world derivatives market has been estimated at about $480 trillion 'face' or nominal value, 30 times the size of the U.S. economy…and 12 times the size of the entire world economy.[1] The major U.S. Banks alone are said to account for well over $200 trillion. It must be noted though that the value of the derivatives market, because it is stated in terms of notional values, cannot be directly compared to a stock or a fixed income security, which traditionally refers to an actual value. (Many such relatively illiquid securities are valued as marked to model, rather than an actual market price.)
The stocks are listed and traded on stock exchanges which are entities (a corporation or mutual organization) specialized in the business of bringing buyers and sellers of stocks and securities together. The stock market in the United States includes the trading of all securities listed on the NYSE, the NASDAQ, the Amex, as well as on the many regional exchanges, the OTCBB, and Pink Sheets. European examples of stock exchanges include the Paris Bourse (now part of Euronext), the London Stock Exchange and the Deutsche Börse.





Trading


Participants in the stock market range from small individual stock investors to large hedge fund traders, who can be based anywhere. Their orders usually end up with a professional at a stock exchange, who executes the order.
Some exchanges are physical locations where transactions are carried out on a trading floor, by a method known as open outcry. This type of auction is used in stock exchanges and commodity exchanges where traders may enter "verbal" bids and offers simultaneously. The other type of exchange is a virtual kind, composed of a network of computers where trades are made electronically via traders at computer terminals.
Actual trades are based on an auction market paradigm where a potential buyer bids a specific price for a stock and a potential seller asks a specific price for the stock. (Buying or selling at market means you will accept any bid price or ask price for the stock.) When the bid and ask prices match, a sale takes place on a first come first served basis if there are multiple bidders or askers at a given price.
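The matching rule described above can be sketched in code. The following is a minimal, illustrative model (not any exchange's actual algorithm): orders at a price are held in arrival order, and a trade occurs whenever the best bid meets or crosses the best ask, with partially filled orders returning to the front of their queue.

```python
from collections import deque

def match_orders(bids, asks):
    """Match buy and sell orders on a first-come-first-served basis.

    bids/asks: lists of (price, size) tuples in arrival order,
    best-priced orders first. Returns a list of (price, size)
    trades, executed at the seller's asking price.
    """
    bid_q, ask_q = deque(bids), deque(asks)
    trades = []
    # Trade while the best bid meets or exceeds the best ask.
    while bid_q and ask_q and bid_q[0][0] >= ask_q[0][0]:
        bid_price, bid_size = bid_q.popleft()
        ask_price, ask_size = ask_q.popleft()
        size = min(bid_size, ask_size)
        trades.append((ask_price, size))
        # A partially filled order keeps its place at the front.
        if bid_size > size:
            bid_q.appendleft((bid_price, bid_size - size))
        if ask_size > size:
            ask_q.appendleft((ask_price, ask_size - size))
    return trades
```

For example, a bid for 10 shares at 100 against two asks of 6 and 4 shares at 100 fills the earlier ask first; a bid at 99 against an ask at 100 leaves a spread and no trade takes place.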
The purpose of a stock exchange is to facilitate the exchange of securities between buyers and sellers, thus providing a marketplace (virtual or real). The exchanges provide real-time trading information on the listed securities, facilitating price discovery.
The New York Stock Exchange is a physical exchange. This is also referred to as a "listed" exchange (because only stocks listed with the exchange may be traded). Orders enter by way of brokerage firms that are members of the exchange and flow down to floor brokers who go to a specific spot on the floor where the stock trades. At this location, known as the trading post, there is a specific person known as the specialist whose job is to match buy orders and sell orders. Prices are determined using an auction method known as "open outcry": the current bid price is the highest amount any buyer is willing to pay and the current ask price is the lowest price at which someone is willing to sell; if there is a spread, no trade takes place. For a trade to take place, there must be a matching bid and ask price. (If a spread exists, the specialist is supposed to use his own resources of money or stock to close the difference, after some time.) Once a trade has been made, the details are reported on the "tape" and sent back to the brokerage firm, who then notifies the investor who placed the order. Although there is a significant amount of direct human contact in this process, computers do play a huge role in the process, especially for so-called "program trading".
The Nasdaq is a virtual (listed) exchange, where all of the trading is done over a computer network. The process is similar to the above, in that the seller provides an asking price and the buyer provides a bidding price. However, buyers and sellers are electronically matched. One or more Nasdaq market makers will always provide a bid and ask price at which they will always purchase or sell 'their' stock.[2]
The Paris Bourse, now part of Euronext, is an order-driven, electronic stock exchange. It was automated in the late 1980s. Before that, it was an open outcry exchange, with stockbrokers meeting on the trading floor of the Palais Brongniart. In 1986, the CATS trading system was introduced, and the order matching process was fully automated.
From time to time, active trading (especially in large blocks of securities) has moved away from the 'active' exchanges. Securities firms, led by UBS AG, Goldman Sachs Group Inc. and Credit Suisse Group, already steer 12 percent of U.S. security trades away from the exchanges to their internal systems. That share probably will increase to 18 percent by 2010 as more investment banks bypass the NYSE and Nasdaq and pair buyers and sellers of securities themselves, according to data compiled by Boston-based Aite Group LLC, a brokerage-industry consultant.
Now that computers have eliminated the need for trading floors like the Big Board's, the balance of power in equity markets is shifting. By bringing more orders in-house, where clients can move big blocks of stock anonymously, brokers pay the exchanges less in fees and capture a bigger share of the $11 billion a year that institutional investors pay in trading commissions.




Market Participants


Many years ago, worldwide, buyers and sellers were individual investors, such as wealthy businessmen, with long family histories (and emotional ties) to particular corporations. Over time, markets have become more "institutionalized"; buyers and sellers are largely institutions (e.g., pension funds, insurance companies, mutual funds, hedge funds, investor groups, and banks). The rise of the institutional investor has brought with it some improvements in market operations. Thus, the government was responsible for "fixed" (and exorbitant) fees being markedly reduced for the 'small' investor, but only after the large institutions had managed to break the brokers' solid front on fees (they then went to 'negotiated' fees, but only for large institutions).
However, corporate governance (at least in the West) has been very much adversely affected by the rise of (largely 'absentee') institutional 'owners'.



History




Historian Fernand Braudel suggests that in Cairo in the 11th century Muslim and Jewish merchants had already set up every form of trade association and had knowledge of every method of credit and payment, disproving the belief that these were invented later by Italians. In 12th century France the courratiers de change were concerned with managing and regulating the debts of agricultural communities on behalf of the banks. Because these men also traded with debts, they could be called the first brokers. In late 13th century Bruges commodity traders gathered inside the house of a man called Van der Beurse, and in 1309 they became the "Brugse Beurse", institutionalizing what had been, until then, an informal meeting. The idea quickly spread around Flanders and neighboring counties and "Beurzen" soon opened in Ghent and Amsterdam.
In the middle of the 13th century Venetian bankers began to trade in government securities. In 1351 the Venetian government outlawed spreading rumors intended to lower the price of government funds. Bankers in Pisa, Verona, Genoa and Florence also began trading in government securities during the 14th century. This was only possible because these were independent city states not ruled by a duke but a council of influential citizens. The Dutch later started joint stock companies, which let shareholders invest in business ventures and get a share of their profits - or losses. In 1602, the Dutch East India Company issued the first shares on the Amsterdam Stock Exchange. It was the first company to issue stocks and bonds.
The Amsterdam Stock Exchange (or Amsterdam Beurs) is also said to have been the first stock exchange to introduce continuous trade in the early 17th century. The Dutch "pioneered short selling, option trading, debt-equity swaps, merchant banking, unit trusts and other speculative instruments, much as we know them" (Murray Sayle, "Japan Goes Dutch", London Review of Books XXIII.7, April 5, 2001). There are now stock markets in virtually every developed and most developing economies, with the world's biggest markets being in the United States, Canada, China (Hong Kong), India, the UK, Germany, France and Japan.



Investment Strategies


One of the many things people always want to know about the stock market is, "How do I make money investing?" There are many different approaches; two basic methods are classified as either fundamental analysis or technical analysis. Fundamental analysis refers to analyzing companies by their financial statements found in SEC Filings, business trends, general economic conditions, etc. Technical analysis studies price actions in markets through the use of charts and quantitative techniques to attempt to forecast price trends regardless of the company's financial prospects. One example of a technical strategy is the Trend following method, used by John W. Henry and Ed Seykota, which uses price patterns, utilizes strict money management and is also rooted in risk control and diversification.
Additionally, many choose to invest via the index method. In this method, one holds a weighted or unweighted portfolio consisting of the entire stock market or some segment of the stock market (such as the S&P 500 or Wilshire 5000). The principal aim of this strategy is to maximize diversification, minimize taxes from too frequent trading, and ride the general trend of the stock market (which, in the U.S., has averaged nearly 10%/year, compounded annually, since World War II).
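The compounding effect behind the index method's appeal can be illustrated with a short calculation. The ~10%/year figure is the historical average quoted above, not a guarantee, and the amounts here are purely illustrative:

```python
def compound(principal, annual_rate, years):
    """Future value with annual compounding: P * (1 + r) ** n."""
    return principal * (1 + annual_rate) ** years

# At a 10% average annual return, money roughly doubles every
# 7.3 years, since ln(2) / ln(1.1) is about 7.27.
print(round(compound(1000, 0.10, 10), 2))   # 1000 grows to 2593.74 over 10 years
print(round(compound(1000, 0.10, 30), 2))   # and to 17449.40 over 30 years
```

This is why the strategy emphasizes riding the long-term trend rather than frequent trading: the bulk of the gain comes from returns compounding on earlier returns.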
Finally, one may trade based on inside information, which is known as insider trading.


The behavior of the stock market


From experience we know that investors may temporarily pull financial prices away from their long term trend level. Over-reactions may occur— so that excessive optimism (euphoria) may drive prices unduly high or excessive pessimism may drive prices unduly low. New theoretical and empirical arguments have been put forward against the notion that financial markets are efficient.
According to the efficient market hypothesis (EMH), only changes in fundamental factors, such as profits or dividends, ought to affect share prices. (But this largely theoretic academic viewpoint also predicts that little or no trading should take place— contrary to fact— since prices are already at or near equilibrium, having priced in all public knowledge.) But the efficient-market hypothesis is sorely tested by such events as the stock market crash in 1987, when the Dow Jones index plummeted 22.6 percent — the largest-ever one-day fall in the United States. This event demonstrated that share prices can fall dramatically even though, to this day, it is impossible to fix a definite cause: a thorough search failed to detect any specific or unexpected development that might account for the crash. It also seems to be the case more generally that many price movements are not occasioned by new information; a study of the fifty largest one-day share price movements in the United States in the post-war period confirms this.[2] Moreover, while the EMH predicts that all price movement (in the absence of change in fundamental information) is random (i.e., non-trending), many studies have shown a marked tendency for the stock market to trend over time periods of weeks or longer.
Various explanations for large price movements have been promulgated. For instance, some research has shown that changes in estimated risk, and the use of certain strategies, such as stop-loss limits and Value at Risk limits, theoretically could cause financial markets to overreact.
Other research has shown that psychological factors may result in exaggerated stock price movements. Psychological research has demonstrated that people are predisposed to 'seeing' patterns, and often will perceive a pattern in what is, in fact, just noise. (Something like seeing familiar shapes in clouds or ink blots.) In the present context this means that a succession of good news items about a company may lead investors to overreact positively (unjustifiably driving the price up). A period of good returns also boosts the investor's self-confidence, reducing his (psychological) risk threshold.[3]
Another phenomenon— also from psychology— that works against an objective assessment is group thinking. As social animals, it is not easy to stick to an opinion that differs markedly from that of a majority of the group. An example with which one may be familiar is the reluctance to enter a restaurant that is empty; people generally prefer to have their opinion validated by those of others in the group.
In one paper the authors draw an analogy with gambling.[4] In normal times the market behaves like a game of roulette; the probabilities are known and largely independent of the investment decisions of the different players. In times of market stress, however, the game becomes more like poker (herding behavior takes over). The players now must give heavy weight to the psychology of other investors and how they are likely to react psychologically.
The stock market, like any other business, is quite unforgiving of amateurs. Inexperienced investors rarely get the assistance and support they need. In the period running up to the recent Nasdaq crash, less than 1 per cent of analysts' recommendations had been to sell (and even during the 2000 - 2002 crash, the average did not rise above 5%). The media amplified the general euphoria, with reports of rapidly rising share prices and the notion that large sums of money could be quickly earned in the so-called new economy stock market. (And later amplified the gloom which descended during the 2000 - 2002 crash, so that by summer of 2002, predictions of a Dow average below 5000 were quite common.)


Courtesy: Wikipedia