New program color-codes text in Wikipedia entries to indicate trustworthiness

Aug 03, 2007

The online reference site Wikipedia enjoys immense popularity despite nagging doubts about the reliability of entries written by its all-volunteer team. A new program developed at the University of California, Santa Cruz, aims to help with the problem by color-coding an entry's individual phrases based on contributors' past performance.

The program analyzes Wikipedia's entire editing history--nearly two million pages and some 40 million edits for the English-language site alone--to estimate the trustworthiness of each page. It then shades the text in deepening hues of orange to signal dubious content. A 1,000-page demonstration version is already available on a web page operated by the program's creator, Luca de Alfaro, associate professor of computer engineering at UCSC.
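The shading itself is simple to picture: the lower a phrase's trust score, the deeper the orange behind it. A minimal sketch of such a mapping follows; the function name, the linear color blend, and the example threshold are illustrative assumptions, not details of the UCSC program.

```python
def trust_to_color(trust):
    """Map a trust score in [0, 1] to an orange background shade.

    trust = 1.0 -> white (fully trusted text)
    trust = 0.0 -> deep orange (most dubious text)
    Illustrative linear blend; the actual shading scheme is not specified.
    """
    trust = max(0.0, min(1.0, trust))
    deep_orange = (255, 140, 0)
    white = (255, 255, 255)
    r, g, b = (round(w * trust + o * (1 - trust))
               for w, o in zip(white, deep_orange))
    return f"#{r:02x}{g:02x}{b:02x}"

# Example: a dubious phrase gets a strong orange tint.
print(trust_to_color(0.2))  # '#ffa333'
```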

Other sites already employ user ratings as a measure of reliability, but they typically depend on users' feedback about one another, which leaves the ratings vulnerable to grudges and subjectivity. The new program takes a radically different approach, using the longevity of the content itself to learn which information is useful and which contributors are most reliable.

"The idea is very simple," de Alfaro said. "If your contribution lasts, you gain reputation. If your contribution is reverted [to the previous version], your reputation falls." De Alfaro will speak about his new program this Saturday, August 4, at the Wikimania conference in Taipei, Taiwan.

The program works from a user's history of edits to calculate his or her reputation score. The trustworthiness of newly inserted text is computed as a function of its author's reputation. As subsequent contributors vet the text, their own reputations feed into the text's trustworthiness score. So an entry created by an unknown author can quickly gain (or lose) trust after a few known users have reviewed the page.
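In code, that flow might look roughly like the sketch below. The normalization and the blending weight are assumptions made for illustration; the article says only that text trust starts from the author's reputation and rises as higher-reputation contributors leave the text in place.

```python
def initial_trust(author_reputation, max_reputation):
    """New text inherits a trust score from its author's (normalized) reputation."""
    return author_reputation / max_reputation


def vetted_trust(current_trust, reviewer_reputation, max_reputation):
    """When a later contributor edits the page and leaves this text intact,
    nudge its trust toward that reviewer's normalized reputation.
    The blending weight of 0.3 is an illustrative assumption."""
    reviewer_trust = reviewer_reputation / max_reputation
    weight = 0.3
    return current_trust + weight * max(reviewer_trust - current_trust, 0.0)


# Example: text by an unknown author gains trust after review by known users.
trust = initial_trust(author_reputation=0.5, max_reputation=10.0)
trust = vetted_trust(trust, reviewer_reputation=9.0, max_reputation=10.0)
trust = vetted_trust(trust, reviewer_reputation=8.0, max_reputation=10.0)
print(round(trust, 2))
```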

A benefit of calculating author reputation in this way is that de Alfaro can test how well his reliability scores work. He does so by comparing users' reliability scores with how long their subsequent edits last on the site. So far, the program flags as suspect more than 80 percent of edits that turn out to be poor. It's not overly accusatory, either: 60 to 70 percent of the edits it flags do end up being quickly corrected by the Wikipedia community.
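Those two percentages correspond to recall and precision over flagged edits. A minimal sketch of how such a check could be run against edit outcomes (the field names and flagging threshold are assumptions):

```python
def evaluate_flagging(edits, flag_threshold=0.3):
    """Compare low-trust flags against whether each edit was later reverted.

    Each edit is a dict with a 'trust' score and a 'was_reverted' outcome
    (illustrative field names). Returns (recall, precision):
      recall    - fraction of bad (reverted) edits that were flagged
      precision - fraction of flagged edits that turned out to be bad
    """
    flagged = [e for e in edits if e["trust"] < flag_threshold]
    bad = [e for e in edits if e["was_reverted"]]
    flagged_bad = [e for e in flagged if e["was_reverted"]]
    recall = len(flagged_bad) / len(bad) if bad else 0.0
    precision = len(flagged_bad) / len(flagged) if flagged else 0.0
    return recall, precision
```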

The exhaustive analysis of Wikipedia's seven-year edit history takes de Alfaro's desktop PC about a week to complete. At present he is working from copies of the site that Wikipedia periodically distributes. Once the initial backlog of edits has been processed, however, de Alfaro said that updating reliability scores in real time should be fairly simple.

While the program prominently displays text trustworthiness, de Alfaro favors keeping hidden the reputation ratings of individual users. Displaying reputations could lead to competitiveness that would detract from Wikipedia's collaborative culture, he said, and could demoralize knowledgeable contributors whose scores remain low simply because they post infrequently and on few topics.

"We didn't want to modify the experience of a user going in to Wikipedia," de Alfaro said. "It is very relaxing right now and we didn't want to modify what has worked so well and is so welcoming to the new user."

Source: UC Santa Cruz
