Making crowdsourcing more reliable

Oct 10, 2012

Researchers from the University of Southampton are designing incentives for collection and verification of information to make crowdsourcing more reliable.

Crowdsourcing is a process of outsourcing tasks to the public, rather than to employees or contractors. In recent years, crowdsourcing has provided an unprecedented ability to accomplish tasks that require the involvement of a large number of people, often spread across wide geographies, areas of expertise, or interests.

The world's largest encyclopaedia, Wikipedia, is an example of a task that can only be achieved through crowd participation. Crowdsourcing is not limited to volunteer efforts. For example, Amazon Mechanical Turk (AMT) and CrowdFlower are 'labour on demand' markets that allow people to get paid for micro-tasks, as simple as labelling an image or translating a piece of text.

Recently, crowdsourcing has demonstrated effectiveness in large-scale, information-gathering tasks, across very wide geographies. For example, the Ushahidi platform allowed volunteers to perform rapid crisis mapping in real-time in the aftermath of disasters such as the Haiti earthquake.

One of the main obstacles in crowdsourcing information gathering is reliability of collected reports. Now Dr Victor Naroditskiy and Professor Nick Jennings from the University of Southampton, together with Masdar Institute's Professor Iyad Rahwan and Dr Manuel Cebrian, Research Scientist at the University of California, San Diego (UCSD), have developed novel methods for solving this problem through crowdsourcing. The work, which is published in the academic journal PLOS ONE, shows how to crowdsource not just gathering, but also verification of information.

Dr Victor Naroditskiy of the Agents, Interaction and Complexity group at the University of Southampton, and lead author of the paper, says: "The success of an information gathering task relies on the ability to identify trustworthy information reports, while false reports are bound to appear either due to honest mistakes or sabotage attempts. This information verification problem is a difficult task, which, just like the information-gathering task, requires the involvement of a large number of people."

Sites like Wikipedia have existing mechanisms for quality assurance and information verification. However, those mechanisms rely partly on reputation, as more experienced editors can check whether an article conforms to Wikipedia's objectivity criteria, has sufficient citations, and so on. In addition, Wikipedia has policies for resolving conflicts between editors in cases of disagreement.

However, in time-critical tasks, there is no established hierarchy of participants, and little basis for judging credibility of volunteers who are recruited on the fly. In this kind of scenario, special incentives are needed to carry out verification. The research presented in the PLOS ONE paper provides such incentives.

Professor Iyad Rahwan of Masdar Institute in Abu Dhabi, a co-author of the paper, explains: "We showed how to combine incentives to recruit participants with incentives to verify the information they report. When a participant submits a report, the participant's recruiter becomes responsible for verifying its correctness.

Compensations to the recruiter and to the reporting participant for submitting the correct report, as well as penalties for incorrect reports, ensure that the recruiter will perform verification."
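The logic of the quote above can be sketched as a small payoff calculation. The reward and penalty amounts below are illustrative assumptions, not values from the PLOS ONE paper; the point is only that, with a penalty for passing on incorrect reports, a recruiter is always at least as well off verifying as not.

```python
# Hypothetical sketch of the verification incentive described above.
# The amounts are illustrative assumptions, not the paper's values.

REWARD_RECRUITER = 50   # paid to the recruiter for a correct report
PENALTY_RECRUITER = 75  # charged to the recruiter for an incorrect report

def recruiter_payoff(report_is_correct: bool, verified: bool) -> int:
    """Payoff to the recruiter, who decides whether to verify a report.

    If the recruiter verifies, an incorrect report is caught and never
    submitted, so the penalty is avoided (payoff 0 instead of a loss).
    """
    if verified:
        return REWARD_RECRUITER if report_is_correct else 0
    # Without verification the recruiter gambles on the report's accuracy.
    return REWARD_RECRUITER if report_is_correct else -PENALTY_RECRUITER

# Verifying weakly dominates not verifying: same reward when the report
# is correct, and no penalty when it is not.
print(recruiter_payoff(True, verified=True))    # 50
print(recruiter_payoff(False, verified=True))   # 0
print(recruiter_payoff(False, verified=False))  # -75
```

Under any positive penalty, checking the report before vouching for it is the recruiter's safer strategy, which is the intuition behind the quoted mechanism.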

Incentives to recruit participants had previously been proposed by Dr Manuel Cebrian of UCSD, a co-author of the paper, to win the DARPA Red Balloon Challenge, in which teams had to locate 10 weather balloons placed at random locations across the United States. In that scheme, the person who found a balloon received a pre-determined compensation, for example $1,000; his recruiter received $500, and the recruiter's recruiter received $250. Dr Cebrian says: "The results on incentives to encourage verification provide theoretical justification for the incentives used to win the Red Balloon Challenge."

More information: dx.plos.org/10.1371/journal.pone.0045924

User comments : 1

Caliban, Oct 10, 2012

Careful selection of tasks is what makes crowdsourcing effective.

Unfortunately, that effectiveness will be usurped when this concept is inevitably broadened for use by corporate and other special interests to misrepresent the opinions and interests of the public.

Talk about "skewed poll results".

In this manner, crowdsourced surveying/marketing will be able to be manipulated --both internally and externally-- to distort reality:

"Compensations to the recruiter and to the reporting participant for submitting the correct report, as well as penalties for incorrect reports, ensure that the recruiter will perform verification."

--and who is making this compensation? And does it preclude another entity making a Better Offer?

Therefore, any and all "crowdsourced" information --beyond the raw number-crunching power of distributed computing-- should be viewed with extreme suspicion.