Google PageRank-like algorithm dates back to 1941

Feb 19, 2010 by Lisa Zyga
Since the 1940s, PageRank-like iterative algorithms have been used to rank industries, journals, and people.

(PhysOrg.com) -- When Sergey Brin and Larry Page developed their PageRank algorithm for ranking webpages in 1998, they certainly knew that the seeds of the algorithm had been sown long before that time, as is evident from their paper's references. But the Google founders may not have known just how far back PageRank's predecessors reach: nearly 70 years, according to Massimo Franceschet, who dug up a 1941 paper with a similar ranking method, as well as several other pre-Google papers with algorithms that show remarkable similarities to PageRank. Yet Brin and Page may have expected as much; after all, as Franceschet notes, the motto of Google Scholar is "Stand on the shoulders of giants."

In a recent study, Franceschet, a computer scientist at the University of Udine in Italy, has presented a brief history of iterative ranking methods that predate PageRank. He also explains how the circular PageRank concept of determining the importance of a webpage based on the number of links it receives from important webpages, rather than by subjective expert evaluation, has provided an alternative way to define the quality of an item.
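The circularity Franceschet describes can be made concrete: the "important pages link to important pages" definition resolves to a fixed point that simple iteration finds. A minimal sketch, assuming a hypothetical four-page link graph and the conventional 0.85 damping factor (neither is from the article):

```python
import numpy as np

# Hypothetical link graph: links[i] lists the pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4
d = 0.85  # damping factor, the conventional value from Brin and Page's paper

# Column-stochastic matrix: M[j, i] = 1/outdegree(i) if page i links to page j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)  # start from uniform importance
for _ in range(100):     # iterate until the circular definition converges
    r = (1 - d) / n + d * M @ r

print(r)  # pages linked from important pages end up with higher scores
```

Each pass redistributes every page's current score across its out-links, which is exactly the circularity described above: the scores feed the very computation that produces them.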

The 1941 predecessor of PageRank is a paper by the economist Wassily W. Leontief, who developed a method for ranking the values of a nation's various industrial sectors. Each industrial sector relies on the others, both buying materials (inputs) from other industries to manufacture its own products and selling its finished products (outputs) to other industries so they can manufacture theirs. Leontief developed an iterative method of valuing each industry based on the importance of the industries with which it is connected through inputs and outputs (similar to web links in PageRank). In 1973, Leontief earned the Nobel Prize in economics for his work in this area.

Other more recent PageRank-like algorithms have been used for ranking items in areas such as sociology and bibliometrics. In 1965, 33 years before Page and Brin developed PageRank, the sociologist Charles Hubbell published a method for ranking individuals. His premise was that “a person is important if it is endorsed by important people.” Like PageRank and Leontief’s algorithm, Hubbell’s method is also iterative, with its outputs influencing its inputs, ad infinitum.

Later, in 1976, Gabriel Pinski and Francis Narin developed a journal ranking method in the field of bibliometrics. Here, the premise is that the importance of a journal is determined by the importance of the journals that cite it, which again uses the same circular reasoning as PageRank.

Most recently, the computer scientist Jon Kleinberg of Cornell University developed a ranking approach very similar to PageRank, published around the same time as Brin and Page's paper (which cites Kleinberg's work). Kleinberg's method was also aimed at improving Web information retrieval. The algorithm, called Hyperlink-Induced Topic Search (HITS), classifies webpages as "hubs" and "authorities." These definitions are purely functional: hub pages point to authority pages, and authority pages are pointed to by hub pages. Mathematically, HITS is strikingly similar to PageRank, even though the two were developed independently. Since publication, both papers have received widespread recognition and thousands of citations.
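The mutual reinforcement between hubs and authorities is also an iterative fixed-point computation. A minimal sketch, using an illustrative four-page adjacency matrix (not from Kleinberg's paper):

```python
import numpy as np

# Illustrative adjacency matrix: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
], dtype=float)

hubs = np.ones(4)
auths = np.ones(4)
for _ in range(50):
    auths = A.T @ hubs               # a good authority is pointed to by good hubs
    hubs = A @ auths                 # a good hub points to good authorities
    auths /= np.linalg.norm(auths)   # normalize so the scores converge
    hubs /= np.linalg.norm(hubs)

print(auths.argmax())  # page 2, linked by three other pages, is the top authority
```

Note the symmetry with PageRank: both methods converge to a leading eigenvector of a matrix built from the link graph (here, of A.T @ A for authorities and A @ A.T for hubs).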

While PageRank has made for a very powerful search engine, it had to radically reformulate the concept of quality to do so. The algorithm must constantly reevaluate each page as the importance of other pages varies, making quality seem fleeting rather than permanent.

“Expert evaluation, the judgment given by peer experts, is intrinsic, subjective, deep, slow and expensive,” Franceschet writes. “By contrast, network evaluation, the assessment gauged [by] exploiting network topology, is extrinsic, democratic, superficial, fast and low-cost.”

As Franceschet has shown, the new concept of value goes beyond webpages. Today, this “popularity contest” style of determining quality is stirring debate in academic circles in the area of research quality evaluation. Traditionally, evaluation of academic papers is done through expert peer review; the alternative is to use the PageRank-inspired Eigenfactor metric, which uses bibliometric indicators to evaluate research quality. Most likely, there will be other areas that see the use of PageRank-inspired methods redefining the concept of value.


More information: Massimo Franceschet. "PageRank: Stand on the shoulders of giants." arxiv.org
via: Technology Review


User comments

Husky
Feb 19, 2010
It's worth digging into old science, as it often contains solutions to specific problems of the time that carry a much broader scope than foreseen.

Take the famous fast Fourier transform: Fourier probably did not foresee the intensive reliance on FFT math in the present digital age. Without the FFT, the internet, mobile phone networks, and consumer electronics would not have taken such flight.

It might be worth teaching an old dog new tricks, but a new dog and some old tricks might work as well.
Husky
Feb 20, 2010
Some lawyer food: are these old algorithms similar enough in their basic methodology to count as prior art? In other words, if one were to build a new search engine based on these old papers and Google were to sue for suspected patent infringement, are parts of Google's search patents weakened to the extent that a judge could conclude "been there, done that; these search algorithms can no longer be regarded as exclusive Google inventions/implementations — they were done so long ago, covering such a broad scope of areas, that the algorithm has become part of the public domain"?
vigna
Mar 10, 2010
Well, actually Leontief's theory just shows how to define a sustainable economy through the fixed point of a linear operator. I see no connection with PageRank, except that PageRank is a linear fixed point, too. But then Markov's work (1906) predates PageRank even more, as does any work about fixed points of linear operators. Rather, Seeley (1949) introduced reputation as a recursive concept using exactly PageRank's formulation, and is the oldest currently known instance of the idea. Katz (1953) and Wei (1952) did essentially the same using different starting points. They are psychologists and sociologists, but they can all be found in Wasserman and Faust's book about social networks (1994), which makes this news IMHO quite a (late) red herring. A rather detailed mathematical account can be found here: http://arxiv.org/abs/0912.0238
franceschet
Mar 14, 2010
In fact, the connections are striking:

a) Leontief's closed system is essentially Pinski and Narin's bibliometric method, which is endorsed by Larry Page in the PageRank patent;
b) the stochastic reformulation of Leontief's closed system is a weighted, teleportation-free version of PageRank;
c) the solution to this reformulation is the leading eigenvector of a stochastic matrix, in analogy with the solution of the PageRank problem (the leading eigenvector of the Google matrix);
d) individual solution scores correspond to total revenues of industries. In particular, the revenue of an industry B depends on the revenues of the industries A that produce products for B, weighted by the proportion of product that each A produces for B. Highly remunerated sectors are those that receive inputs from other highly remunerated industries with low propensity to differentiate their outputs among the other industries. Sounds familiar? It's PageRank logic!

Details on: http://arxiv.org/abs/1002.2858

Massimo Franceschet
vigna
Apr 14, 2010
These connections are very vague, and mostly related to the mathematical tools involved, which are quite common. They are no stronger, say, than those with Markov's work (1906!), as observed also by Massimo Marchiori (of HyperSearch fame). Seeley's paper, on the other hand, has the specific purpose of estimating reputation recursively starting from directed endorsement.