Debunking study suggests ways to counter misinformation and correct 'fake news'

September 12, 2017

It's no use simply telling people they have their facts wrong. To be more effective at correcting misinformation in news accounts and intentionally misleading "fake news," you need to provide a detailed counter-message with new information—and get your audience to help develop a new narrative.

Those are some takeaways from an extensive new meta-analysis of laboratory debunking studies published in the journal Psychological Science. The analysis, the first conducted with this collection of debunking data, finds that a detailed counter-message is better at persuading people to change their minds than merely labeling the misinformation as wrong. But even after a detailed debunking, misinformation can still be hard to eliminate, the study finds.

"The effect of misinformation is very strong," said co-author Dolores Albarracin, professor of psychology at the University of Illinois at Urbana-Champaign. "When you present it, people buy it. But we also asked whether we are able to correct for misinformation. Generally, some degree of correction is possible but it's very difficult to completely correct."

Countering beliefs based on misinformation

"Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation" was conducted by researchers at the Social Action Lab at the University of Illinois at Urbana-Champaign and at the Annenberg Public Policy Center of the University of Pennsylvania. The teams sought "to understand the factors underlying effective messages to counter attitudes and beliefs based on misinformation." To do that, they examined 20 experiments in eight research reports involving 6,878 participants and 52 independent samples.

The analyzed studies, published from 1994 to 2015, focused on false social and political news accounts, including misinformation in reports of robberies; investigations of a warehouse fire and traffic accident; the supposed existence of "death panels" in the 2010 Affordable Care Act; positions of political candidates on Medicaid; and a report on whether a candidate had received donations from a convicted felon.

The researchers coded and analyzed the results of the experiments across the different studies and measured the effect of presenting misinformation, the effect of debunking, and the persistence of misinformation.
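
For readers curious about the mechanics, the sketch below illustrates how a meta-analysis of this kind typically pools effect sizes from independent samples into a single estimate. It is a minimal illustration using the common DerSimonian-Laird random-effects estimator with made-up numbers; it is not the authors' actual data or code.

```python
# Minimal sketch (not the paper's code): pooling per-sample effect sizes with a
# DerSimonian-Laird random-effects meta-analysis. The numbers are illustrative
# placeholders, not the study's data.
import numpy as np

d = np.array([0.45, 0.62, 0.30, 0.80, 0.55])  # hypothetical effect sizes (one per sample)
v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])  # hypothetical sampling variances

w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
d_fixed = np.sum(w * d) / np.sum(w)           # fixed-effect pooled estimate
Q = np.sum(w * (d - d_fixed) ** 2)            # heterogeneity statistic
k = len(d)

C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)            # between-sample variance (DerSimonian-Laird)

w_re = 1.0 / (v + tau2)                       # random-effects weights
d_pooled = np.sum(w_re * d) / np.sum(w_re)    # random-effects pooled estimate
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {d_pooled:.2f} "
      f"(95% CI {d_pooled - 1.96 * se:.2f} to {d_pooled + 1.96 * se:.2f})")
```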

The value of extended corrections

"This analysis provides evidence of the value of the extended correction of misinformation," said co-author Kathleen Hall Jamieson, director of the Annenberg Public Policy Center (APPC) and co-founder of its project FactCheck.org, which aims to reduce the level of deception in politics and science. "Simply stating that something is false or providing a brief explanation is largely ineffective."

The lead author, Man-pui Sally Chan, a research assistant professor in psychology at the University of Illinois at Urbana-Champaign, said the study found that "the more detailed the debunking message, the higher the debunking effect. But misinformation can't easily be undone by debunking. The formula that undercuts the persistence of misinformation seems to be in the audience."

A critical factor: Stimulating counterarguments among audiences

As the researchers reported: "A detailed debunking message correlated positively with the debunking effect. Surprisingly, however, a detailed debunking message also correlated positively with the misinformation-persistence effect."

However, Albarracin said the analysis also showed that debunking is more effective - and misinformation is less persistent - when an audience develops an explanation for the corrected information. "What is successful is eliciting ways for the audience to counterargue and think of reasons why the initial information was incorrect," she said. For news outlets, involving an audience in correcting information could mean encouraging commentary, asking questions, or offering moderated reader chats - in short, mechanisms to promote thoughtful participation.

Recommendations for debunking misinformation

The researchers made three recommendations for debunking misinformation:

  • Reduce arguments that support misinformation: News accounts about misinformation should not inadvertently repeat or belabor "detailed thoughts in support of the misinformation."
  • Engage audiences in scrutiny and counterarguing of information: Educational institutions should promote a state of healthy skepticism. When trying to correct misinformation, it is beneficial to have the audience involved in generating counterarguments.
  • Introduce new information as part of the debunking message: People are less likely to accept debunking when the initial message is just labeled as wrong rather than countered with new evidence.

The authors encouraged the continued development of "alerting systems" for debunking misinformation such as Snopes.com (fake news), RetractionWatch.com (scientific retractions), and FactCheck.org (political claims). "Such an ongoing monitoring system creates desirable conditions of scrutiny and counterarguing of misinformation," the researchers wrote.

More information: Man-pui Sally Chan et al, Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation, Psychological Science (2017). DOI: 10.1177/0956797617714579

47 comments

rderkis
3 / 5 (12) Sep 12, 2017
The news networks are the best at presenting fake news. That is why President Trump got elected. We the people are kind of gullible, but the majority of us knew when an attempt to manipulate us was so apparent.
I think a great deal of President Trump's votes came from people who did not particularly like him but were sick of the media trying to brainwash them.

Example - Megyn Kelly and her smirk did more to get President Trump elected than all the speeches he made.
Zzzzzzzz
2 / 5 (8) Sep 12, 2017
"It's no use simply telling people they have their facts wrong. To be more effective at correcting misinformation in news accounts and intentionally misleading "fake news," you need to provide a detailed counter-message with new information—and get your audience to help develop a new narrative."

In other words, you'll need to replace their current delusion with another one. Getting investment in the new delusion is easier if the investor has a hand in the construction of the delusion.
PTTG
2.8 / 5 (5) Sep 12, 2017
@rderkis: How do you know that the media hated Trump? You said yourself that their actions helped him get elected.
rderkis
3 / 5 (8) Sep 12, 2017
@rderkis: How do you know that the media hated Trump? You said yourself that their actions helped him get elected.


I am sorry but I have to ask: where did I use the word hate? And even if I did use the word hate, I would never use such a broad term as media. But obviously I did, because I am fairly sure both sides of the media tried to brainwash us, but one side was much more apparent about it and they lost a LOT of votes because of it.
snoosebaum
1.8 / 5 (5) Sep 12, 2017
More research results from the 'Ministry of TRUTH'
Nik_2213
4 / 5 (4) Sep 12, 2017
"Educational institutions should promote a state of healthy skepticism."

Well, that's harder than it sounds. Teaching people to think thus is non-trivial...
KBK
2.3 / 5 (3) Sep 12, 2017
Teaching courses at university in human self-revelation and unfolding into true functional intelligence... is likely to end up with more suicides and psych cases than graduates.

It's part of the human wiring package to be ensconced in horseshit at many levels. Most people are lucky if they get a few shovelfuls off themselves before they fall into their graves.

Then add in the (limited) external view of the world of those given people, who view/filter it through those horseshit-colored internal glasses.

Good luck getting even the slightest form of consensus that does not stink badly in multiple ways.
rderkis
1 / 5 (3) Sep 13, 2017
Most people are lucky if they get a few shovelfuls off themselves before they fall into their graves.

Let me take a wild guess, it is a good thing you feel you are as pure as the driven snow. :-)
Da Schneib
3 / 5 (4) Sep 13, 2017
So it's actually worthwhile pointing out the more risible elements of cranks' claims, and not worthwhile arguing with them on anything serious so you don't reinforce their idiocy.

About what I figured.
rrrander
2.7 / 5 (7) Sep 13, 2017
A study done 20 years ago found 1st year university students (mostly liberal) were the most gullible of any group. Least gullible? Businessmen. Worth noting as well, children in grade school were less gullible than 1st year university students. Fervent liberalism LEADS to gullibility.
gmurphy
3.5 / 5 (8) Sep 13, 2017
rrrander: Citation needed.
antialias_physorg
5 / 5 (6) Sep 13, 2017
In other words, you'll need to replace their current delusion with another one.

The way I read it you just have to lead people to construct a narrative based on the facts instead of the fake facts.

What they are saying is that regurgitating numbers doesn't help. People have to understand. Understanding requires a narrative (embedding facts into a knowledge base and seeing that they fit; conversely, embedding the fake facts into a knowledge base and seeing that they don't fit).

Problem is: when people have no knowledge base because education is 'uncool', then they can't construct a narrative. The only narrative open to them is "that's the way it is because I believe it". It's an easy/convenient/lazy narrative - arguably the easiest. Probably why systems like religion have latched onto it.
TheGhostofOtto1923
5 / 5 (3) Sep 13, 2017
Just watched a vid on how Steve Bannon's face was altered in a 60 Minutes interview
https://youtu.be/pkF9Ab8wblM

- How are you going to prevent disgusting fake news like that? Humans cannot be trusted with facts. Period. There is no way to prevent politics from influencing what we see and hear and read, as long as it's being delivered by humans.

Wiki is a good first step in objective delivery but it needs to be governed by AI. Machine intelligence is the only way to keep people honest. No - it is the only way to keep our intrinsic dishonesty from impinging on our right to know.

Which makes it inevitable as well as essential. And its most vocal detractors are those among us who are the most dishonest.

Do speak up and show yourselves.
Eikka
5 / 5 (1) Sep 13, 2017
So it's actually worthwhile pointing out the more risible elements of cranks' claims, and not worthwhile arguing with them on anything serious so you don't reinforce their idiocy.

About what I figured.


That's a good idea, though care must be taken to pick your battles.

Engaging the cranks in some peripheral claims often just devolves into arguing about semantics or some other irrelevant tangent, and the crank emerges as "victorious" because they can claim you're not engaging them on the meat of the matter but just trying to blow a bunch of smoke on the issue.

You see, that's also a technique the cranks themselves use to attack real information: latch onto some irrelevant detail that you got wrong, or which can be interpreted and re-framed in order to make you look incredible and laughable.

In general, it's called the fallacy of Ignoratio elenchi, or "missing the point". It's a tactic of demagoguery rather than honest discourse.
Eikka
5 / 5 (1) Sep 13, 2017
Or to put it otherwise: if you lower yourself to the level of the crank, they'll simply beat you by experience.

If you engage false information by persuasion and psychological tactics, you're harming your own cause by legitimizing propaganda. Suddenly the crank doesn't seem to be so cranky anymore because the public cannot tell the difference - both sides talk and sound the same.

but it needs to be governed by AI. Machine intelligence is the only way to keep people honest


And what about the people who program said AI? After all, an "AI" is merely a regurgitation of the values and standards of the people who train it.
Eikka
not rated yet Sep 13, 2017
On the question of using artificial intelligence to whittle fact from fiction, here's a good warning

https://www.statn...-cancer/

IBM pitched its Watson supercomputer as a revolution in cancer care. It's nowhere close

STAT found that the system doesn't create new knowledge and is artificially intelligent only in the most rudimentary sense

While Watson became a household name by winning the TV game show "Jeopardy!", its programming is akin to a different game-playing machine: the Mechanical Turk, a chess-playing robot of the 1700s, which dazzled audiences but hid a secret — a human operator inside

In the case of Watson for Oncology, those human operators are a couple dozen physicians at a single, though highly respected, U.S. hospital: Memorial Sloan Kettering Cancer Center in New York. Doctors there are empowered to input their own recommendations into Watson, even when the evidence supporting those recommendations is thin.
Chris_Reeve
1 / 5 (2) Sep 13, 2017
Re: "This analysis provides evidence of the value of the extended correction of misinformation"

Hasn't anybody told the researchers what the "Gish Gallop" is?
TheGhostofOtto1923
5 / 5 (2) Sep 13, 2017
And what about the people who program said AI? After all, an "AI" is merely a regurgitation of the values and standards of the people who train it
AI implies self-programming and self-improvement. So yes, AI will be able to remove any human influence of bias and deception.

Humans invented red lights but THEY are not biased. We invent machines to compensate for our limitations. The compulsion to cheat is one of them.

And of course your persistent distrust is, like I said, an indication of your reluctance to surrender your god-given right to cheat. I bet you hate speedcams as well don't you?
TheGhostofOtto1923
5 / 5 (1) Sep 13, 2017
And I do mean 'god-given'. God is the ultimate cheat. He will help us cheat death, win at the gaming table, grant all our wishes, and punish all our enemies whether they deserve it or not.

Our compulsion to cheat is so great that we invented the greatest accomplice we could think of to hedge our bets. No wonder godders are so anti-evidence and anti-reason.
bschott
1 / 5 (2) Sep 13, 2017
No wonder godders are so anti-evidence and anti-reason.

Um, that is far too general of a statement...even for you Otto. But then again...the atheist take on God has always been ill informed and inaccurate, why would you be any different?
Science was invented by priests, unfortunately so was religion. Like the CO2 bullshit or theoretical astrophysics, the loudest proponents of the theories are the devout...the ones who believe because they believe what someone else has told them....not because they understand it so well it is a foregone conclusion in their mind. When you understand something that well....there is no need to debate a naysayer.
If God wanted your support, he would have it....sucks for you he doesn't need it I guess.
TheGhostofOtto1923
5 / 5 (2) Sep 13, 2017
Um, that is far too general of a statement...even for you Otto
NO its not. I dont think you appreciate the pervasiveness of the need to cheat. I also dont think you appreciate how your religions so thoroughly satisfy that need in so many ways.
But then again...the atheist take on God has always been ill informed and inaccurate
I'm not an atheist I'm an antireligionist. Big difference. Look it up. Godders are willfully blind to all the damage their fantasies have done to the species, and we like to focus on informing others of this fact.
TheGhostofOtto1923
5 / 5 (2) Sep 13, 2017
Re eikkas odd comment
In the case of Watson for Oncology, those human operators are a couple dozen physicians at a single, though highly respected, U.S. hospital: Memorial Sloan Kettering Cancer Center in New York
Watson is an insect compared to the intellect that will be AI. In order to work properly ALL medical knowledge will need to be fed into it so that it can cross-reference and begin discarding all the bullshit.

This might sound like an impossible task but look at the impossible strides in gene sequencing that have happened in the last 15 years.

During this process it can begin to output meaningful recommendations for physicians and patients, and at a certain point we will have more confidence in them than in human pros.

Medicine might be the most humane place for this transition to take place.
bschott
not rated yet Sep 13, 2017
My apologies Otto, the word "Godder" was taken to mean a person who believes in God, not a person who believes in a religion that claims to speak for God. I am also an anti-religionist for the same reasons...and that is all religions, not just ones based around some version of God. May your mission of informing others succeed in raising awareness to the difference between God and religion.
TheGhostofOtto1923
5 / 5 (1) Sep 13, 2017
My apologies Otto, the word "Godder" was taken to mean a person who believes in God, not a person who believes in a religion that claims to speak for God
What's the difference? Deist philos use their intellects to arrive at their conclusions (ostensibly). Their understandings are not a matter of faith although their nonsense can be just as damaging and dangerous.
bschott
5 / 5 (1) Sep 13, 2017
What's the difference?

Rational, logical people who believe in God for other reasons than religious instruction usually do so because of personal experience...or, as you seem to be indicating, some have simply convinced themselves there is a God. For some the faith is genuine and warranted, for others it isn't. There are as many perspectives as there are people, hence why I said what I did about generalizing such a concept....it is the summit of naivety to do so.
rderkis
2.3 / 5 (3) Sep 13, 2017
Some have simply convinced themselves there is a God.


Some have simply convinced themselves there is NO God because it makes them feel smart.
TheGhostofOtto1923
5 / 5 (1) Sep 13, 2017
Some have simply convinced themselves there is a God.


Some have simply convinced themselves there is NO God because it makes them feel smart.
Naw it makes me feel smart to have realized that there is no god. And lucky.
For some the faith is genuine and warranted, for others it isn't
Codified fantasies which require one to be a bigot, and to ignore reason and evidence, and to make babies which you cant hope to support, in return for everything one would ever want, are NOT WARRANTED by anyone.
There are as many perspectives as there are people, hence why I said what I did about generalizing such a concept
So you dont understand how evil religion is and you dont care. Right?
https://www.youtu...TVUulGwc

Eikka
not rated yet Sep 15, 2017
AI implies self-programming and self-improvement. So yes, AI will be able to remove any human influence of bias and deception.


There is no such thing. Ex nihilo nihil fit.

Our biases and illusions come from the way we happen to exist - our human condition as evolved beings. The machine's condition is that it's designed and built by humans, and as such it cannot arbitrarily "self-improve" beyond what it is.

More to the point, the machine has to pick sides to understand anything at all, because nothing in the world makes sense unless you look at it through some system of values - it needs a social paradigm to have a point of view. Otherwise it's just observing atoms bumping into atoms; no right and wrong, no true and false.

And of course your persistent distrust is, like I said, an indication of your reluctance to surrender your god-given right to cheat. I bet you hate speedcams as well don't you?


Now you're just babbling.
Eikka
not rated yet Sep 15, 2017
If you invent an "AI" to sort out fake news, and then don't load it up with some particular point of view but simply let it form its own conclusions, the results will be absurd.

Such as, imagine that the AI reads a news article that says wealth disparity is on the rise. Alright, it asks "what is wealth?" and derives the answer "things that people have". Then it reads another article that says poverty is increasing, and concludes that the first article is wrong because while the rich have all the riches, the poor have all the poverty and so it all balances out.

If you don't tell the machine what everything -means- then it cannot make intelligent conclusions. If you do tell it what everything means, you load it up with your own point of view that defines what reality is.

Of course, the machine may also come up with its own context, but that's not necessarily meaningful to us.

TheGhostofOtto1923
not rated yet Sep 15, 2017
The machine's condition is that it's designed and built by humans, and as such it cannot arbitrarily "self-improve" beyond what it is
Ah. Eikkas luddite mentalisms are the result of failure to self-improve when given the opportunity.

"The intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to rapid emergence of ASI (artificial superintelligence), the limits of which are unknown."

-You have the opportunity to Google this excerpt and learn something about AI. Machines will routinely be doing this and self-improving as a result, rather than just regurgitating old philo nonsense.

Self-improvement is the basis of AI.
TheGhostofOtto1923
not rated yet Sep 15, 2017
If you invent an "AI" to sort out fake news, and then don't load it up with some particular point of view but simply let it form its own conclusions, the results will be absurd
No, simply proclaiming things like this when experts all say something very different, is absurd.

Provide refs for your absurd declarations or stop making them.
Eikka
not rated yet Sep 15, 2017
Ah. Eikkas luddite mentalisms are the result of failure to self-improve when given the opportunity.


You're making the argument that the "self" is capable of adding onto itself from nothing. Of course it cannot spontaneously create anything that it already isn't - that would be violating the very laws of physics.

Whatever improvement it can manage is contingent on the environment where it exists and what it interacts with, and what counts as improvement is dependent on who's judging. Without people judging it, how would the AI know that it's improving? How would it self-define what is better? If people are judging, the meaning of "better" again depends on who you ask.

proclaiming things like this when experts all say something very different, is absurd.


Just because some futurist loony raves on about "AGI" and "ASI" doesn't mean it's true. You're again displaying a curious lack of critical thinking when it comes to your pet subject.
bschott
not rated yet Sep 15, 2017
I'm not an atheist....

hmmm...definitive, to the point...no misinterpreting that statement!
Naw it makes me feel smart to have realized that there is no god. And lucky.

Hmmm, definitive, to the point...no misinterpreting that statement either.
doodedoodedoo....schitzo....
TheGhostofOtto1923
not rated yet Sep 15, 2017
it cannot spontaneously create anything that it already isn't - that would be violating the very laws of physics
So when humans innovate they are violating the laws of physics? I don't understand.
Whatever improvement it can manage is contigent on the environment where it exists and what it interacts with
-which in the case of AI would potentially be the entire cache of human knowledge.
Then it reads another article that says poverty is increasing, and makes the conclusion that the first article is wrong because blah
In dealing with useless indefinable political terms like poverty and wealth, AI would most likely disregard them and seek actionable methods of improving the human condition, if that was what it was tasked to do.

But it would be privy to all the ways they are used.

Just because some futurist loony raves on about "AGI" and "ASI" doesn't mean it's true
And if you had read the source you would know it's not just some looney.
TheGhostofOtto1923
not rated yet Sep 15, 2017
Hmmm, definitive, to the point...no misinterpreting that statement either
Why don't you look up the terms atheist and antireligionist for yourself?

The term atheism legitimizes the question which is after all not worth considering. Religionists invented the term atheism for that very reason. Am I an a-voldemort? Am I an a-spiderman? Would millions of people think I was intrinsically dishonest and untrustworthy if I was?

This is WHY I am antireligion.
Eikka
not rated yet Sep 15, 2017
So when humans innovate they are violating the laws of physics? I don't understand.


"Innovate" means taking something old and making it new. There's no systematic way to actually invent or innovate anything - most human progress comes by accident or by necessity as such suitable conditions arise that it would take an idiot not to notice.

-which in the case of AI would potentially be the entire cache of human knowledge.


All that is just a bunch of noise to a valueless machine, no more important than watching water drip.

AI would most likely disregard them and seek actionable methods of improving the human condition


But what does "Improve" mean? If you don't know "better", then how can you tell the machine to do that?

It's like the problem of the linguist to come up with a totally alien language - something that humans could never imagine. Well, how do you imagine it? You're loading your AI with god-like qualities to just know stuff.
Eikka
not rated yet Sep 15, 2017
If you take a bunch of scientists and engineers, very smart people, and lock them in a white room with no features, nothing interesting to look at, and tell them "now invent" - nothing comes out except a bunch of broken and mad ex scientists and engineers. In such sensory deprivation, nothing intelligent can happen.

Same thing with the AI. It's a brain in a box - it doesn't understand anything because it doesn't actually live in the world it's supposed to comment on, because everything it sees is strings of symbols that are just meaningless data with no "personal" context to the machine.

Why would it do anything with it? What's the motivation?

You tell the machine to "improve itself" - how would it do that? The task itself is meaningless to the machine. You say "learn to learn", well if you can't already learn then it's impossible. You have to program it to do it, and how you program it affects what it actually learns.

Eikka
5 / 5 (1) Sep 15, 2017
And if you had read the source you would know it's not just some looney.


You didn't give any source.

You do understand that Google gives different results to different people? The top source I can find is Wikipedia, which just goes back to the point: just because it's written down somewhere doesn't make it true.

One of the problems of AGI/ASI is that nobody has sufficiently defined "intelligence" in the first place. We simply don't know what we're talking about when we say the word, so we're lying to ourselves by claiming that we're building an "Artificial Intelligence". It's not - it's just a faster, bigger computer that can do some things we find impressive - and as long as we can't see what it's actually doing we call it "intelligent".

Eikka
not rated yet Sep 15, 2017
As for expert opinion, Wikipedia points out:

A 2017 email survey of authors with publications at the 2015 NIPS and ICML machine learning conferences asked them about the chance of an intelligence explosion. 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".


So about 70% of experts say "dunno" or "not gonna happen".

rderkis
not rated yet Sep 15, 2017
So about 70% of experts say "dunno" or "not gonna happen".

And just out of curiosity, how many computer experts and Go player experts said a computer would never beat ALL men at Go in the 60s?
TheGhostofOtto1923
not rated yet Sep 15, 2017
The top source I can find is Wikipedia, which just goes back to the point: just because it's written down somewhere doesn't make it true
Machines already know how to value info better than eikka. No surprise.

AI certainly can value info the same way as scientists do. It can find peer-reviewed scientific papers and compare that info with unsubstantiated claims, and begin to weed those claims out. It can separate truth from fiction just by comparing all the available info there is on a subject and looking for consistency.

It can do this with science, with law, with medicine, and so forth. And like I say, during this process it will begin to produce answers that we can depend on.
We simply don't know what we're talking about when we say the word
-Probably because its the wrong word? Its a nonsense word like consciousness or belief. And you certainly dont need to define it before AI starts producing useful results.
TheGhostofOtto1923
not rated yet Sep 15, 2017
Why would it do anything with it? What's the motivation?
Well cyberwarfare is a good example.

"Last year, the IT security community started to buzz about AI and machine learning as the Holy Grail for improving an organization's detection and response capabilities. Leveraging algorithms that iteratively learn from data, promises to uncover threats without requiring headcounts or the need to know "what to look for".

"Human-interactive machine learning systems analyze internal security intelligence, and correlate it with external threat data to point human analysts to the needles in the haystack. Humans then provide feedback to the system by tagging the most relevant threats. Over time, the system adapts its monitoring and analysis based on human inputs, optimizing the likelihood of finding real cyber threats and minimizing false positives."
TheGhostofOtto1923
1 / 5 (1) Sep 15, 2017
Eikka needs to ask himself what he thinks it is about humans that can not be done by a machine?

We ARE machines.

If we can innovate, they can innovate. If we can synthesize, they can synthesize. If we can self-improve, THEY can self-improve. And as they dont have the hardware limitations that we do, they will be able to do these things much faster than any human or group of humans.

Your inability to imagine this is only a reflection of your limitations, not theirs.

BTW the excerpts in my last post are sources. Drop them into google. Multiple results is useful info in itself; you can see who repeats it and in what form... an indication of its possible value.

Much better than a dumb link.
Eikka
not rated yet Sep 18, 2017
Eikka needs to ask himself what he thinks it is about humans that can not be done by a machine?

We ARE machines.


That's too narrow a way to understand the issue.

If we can innovate, they can innovate. If we can synthesize, they can synthesize. If we can self-improve, THEY can self-improve.


That's the point. We cannot arbitrarily self-improve. Again, I go back to the previous point:
to improve, one must understand what is "better", and to define "better" means to take sides in the question. There is no objective answer to the question, thereby it cannot be solved by a program that is merely observing its surroundings. Neither is it solvable by us - we just have to see what the future brings and what new demands it poses on us.

The machine does not automatically share your idea, or my idea, of what "improvement" means unless you explicitly program it to. That means it cannot be neutral in the question that you're asking.

Eikka
not rated yet Sep 18, 2017
What I'm saying is, we do not and cannot know what "better" is.

To make the assertion that such and so is better and should be done is completely unfounded, and all such opinions can be discarded as meaningless. They can be accepted subjectively, among people who share broadly the same beliefs and social context, but they do not apply universally to all people.

So how could you design a machine and task it to find an objective truth about mankind?

In the most likely scenario, the AI finds simply that people have problems no matter what you do, and so the best course of action and the only resolution to the task is to kill everybody.
Eikka
not rated yet Sep 18, 2017
AI certainly can value info the same way as scientists do. It can find peer-reviewed scientific papers and compare that info with unsubstantiated claims, and begin to weed those claims out. It can separate truth from fiction just by comparing all the available info there is on a subject and looking for consistency.


That's the trivial part of the problem.

The harder part of the problem is making inferences of that data and coming to conclusions about what it means. Fake news can be made by telling you nothing but the truth, and scientific data is rarely conclusive - sometimes it isn't even correct but nobody's found the error yet.

It would be rather inconvenient if your AI started supporting the racist alt-right because it finds so many articles about the racial IQ gaps that haven't yet been debunked thoroughly for lack of data.
snoosebaum
not rated yet Sep 18, 2017
AI? It's a stupid machine that needs an entire segment of the economy to program it. Why can't it program itself??
