Future of news: bracing for next wave of technology

October 7, 2017 by Rob Lever
New technologies have disrupted news media over the past 20 years—but one report says that's just the beginning

If you think technology has shaken up the news media—just wait, you haven't seen anything yet.

The next wave of disruption is likely to be even more profound, according to a study presented Saturday to the Online News Association annual meeting in Washington.

News organizations, which have struggled over the past two decades as readers moved online and to mobile, will soon need to adapt to artificial intelligence, augmented reality and automated journalism, and find ways to connect beyond the smartphone, the report said.

"Voice interface" will be one of the big challenges for , said the report by Amy Webb, a New York University Stern School of Business faculty member and Founder of the Future Today Institute.

The institute estimates that by 2023, 50 percent of the interactions consumers have with computers will be by voice.

"Once we are speaking to our machines about the , what does the business model for journalism look like?" the report said.

"News organizations are ceding this future ecosystem to outside corporations. They will lose the ability to provide anything but content."

Webb writes that most news organizations have done little experimentation with chat apps and voice skills on Amazon's Alexa and Google Home, which may be key parts of the future news ecosystem.

Because of this, she argues, artificial intelligence, or AI, poses "an existential threat to the future of journalism."

Drones and virtual reality are likely to cause fresh disruption for news organizations in the coming years

"Journalism itself is not actively participating in building the AI ecosystem," she wrote.

One big problem facing media organizations is that the new technologies shaping the future of news, such as AI, are out of their control and instead in the hands of tech firms like Google, Amazon, Tencent, Baidu, IBM, Facebook, Apple and Microsoft, according to Webb.

"News organizations are customers, not significant contributors," the report said.

"We recommend cross-industry collaboration and experimentation on a grand scale, and we encourage leaders within journalism to organize quickly."

Drones, virtual reality

The study identified 75 trends likely to have an impact on journalism in the coming years, including drones, wearables, blockchain, 360-degree video, and real-time fact-checking.

Webb's study said some changes in technology will start having an impact on the media in the very near future, within 24 to 36 months.

"In 2018, a critical mass of emerging technologies will converge, finding advanced uses beyond initial testing and applied research," the report said.

Some of these new technologies—the ability to interpret visual data, develop algorithms to write or interpret news, and collect and analyze increasing amounts of data—will allow journalists "to do richer, deeper reporting, fact checking and editing," the report said.

These technologies "will give journalists superpowers, if they have the training to use these emerging systems and tools," Webb writes.


33 comments


EyeNStein
5 / 5 (3) Oct 07, 2017
I already go to Wikipedia for updated and collated information on news stories; rather than wait for my preferred journalists to get round to doing a sequel on a story. Or I google for an element of a story and then choose the most trustworthy source to click through to.

Traditional journalism clearly has a place, but you have to be aware of its limitations, its self-censorship and bias, and be prepared to look around rather than being spoon fed.
rrrander
1 / 5 (2) Oct 07, 2017
With all this exposure and intrusiveness, no wonder mass shootings are increasing. Pandering to the narcissists and upsetting the hermits.
TheGhostofOtto1923
5 / 5 (1) Oct 08, 2017
"will allow journalists "to do richer, deeper reporting..."

-No, it will begin to remove journalists from the business of reporting the news.

Over the past few years we have seen as never before that humans simply cannot be trusted with facts. This has always been true, and ever since we have begun recording transactions and events to prevent people from lying about them, humans have been using technology to circumvent human unreliability re the truth.

Deception is central to being able to survive to reproduce in an overcrowded world. Everywhere that humans have been allowed to create, they have used it to misrepresent reality to their benefit.

And so we have politics, the arts, criminal justice, education, and the lot.

We spend the bulk of our time being entertained by people who are only pretending to feel emotion. We are governed by people who are only pretending to believe in causes.

So of course at present we are not very good at telling truth from fiction.
TopCat22
5 / 5 (2) Oct 08, 2017
This has the potential to solve much of the problems with propaganda-journalism today. The idea that people need to be manipulated by individuals deciding what is good to know or how and which way the spin should be placed for political or economic or commercial gains.

Inevitably A.I. will become the nanny of societies, telling us the stories based on what it decides is best for us to know, and how and when.

Take the story of Little Red Riding Hood as an example ... one can take the exact same facts from that story and make any of the different characters the bad guy, depending on how you place the emphasis and position the words in your story.

Red could be bad for going out in the woods alone...
Grandma could be bad for living on her own in the woods...
The wolf could be good, starving and doing what wolves do when people encroached on his...
The hunter was bad for depriving the wolf of food thereby creating the conditions....

The story can also change to respond to fads or ideals.
TheGhostofOtto1923
2.3 / 5 (3) Oct 08, 2017

Inevitably A.I. will become the nanny of societies telling us the stories based on what it decides is best for us to know and how and when
But humans do this now and they do it badly, maliciously, dishonestly. What makes you think we can't design a machine intelligence that will improve upon this?
The hunter was bad for depriving the wolf of food thereby creating the conditions....
Really? Wolves eat people in the story and the only ones who would think this is a good thing are demented people and misanthropes.

AI will be much better at identifying them than we are, because it will be designed to be.
TheGhostofOtto1923
3 / 5 (2) Oct 08, 2017
Otto makes a good point
We spend the bulk of our time being entertained by people who are only pretending to feel emotion. We are governed by people who are only pretending to believe in causes
-and this sort of environment makes it easy for those among us whose whole lives are centered around faking emotion and sincerity that they are incapable of feeling - psychopaths.

No wonder they are flourishing. How much of their influence and control has gone to create this whole sick system?

And how will things look once AI has eliminated their influence?
TopCat22
5 / 5 (2) Oct 08, 2017
I believe that A.I. can be designed properly and will do a better job than people. The rules can be set with A.I. at the beginning and those rules can be changed to fine tune it to the best interests of mankind.

People, on the other hand, will never be free of prejudices and greed and will always make decisions favouring someone or a few members of its selective group. This can never change so it is imperative to remove the individual human element from every equation that looks after mankind or society as a whole
TheGhostofOtto1923
5 / 5 (1) Oct 08, 2017
I believe that A.I. can be designed properly and will do a better job than people. The rules can be set with A.I. at the beginning and those rules can be changed to fine tune it to the best interests of mankind.

People, on the other hand, will never be free of prejudices and greed and will always make decisions favouring someone or a few members of its selective group. This can never change so it is imperative to remove the individual human element from every equation that looks after mankind or society as a whole

Well said. I'll give you a 5/5 when I'm off this phone.
HeloMenelo
5 / 5 (1) Oct 08, 2017
Well, well, I really liked the comments here, so true, and especially Otto's comments. Plastic lives and plastic wives, all fake shows and pretenders. I'd really like to have a system where the truth can be known almost instantly as it happens, without political corruption. The only problem I foresee is that corruption itself gets its greedy filthy hands on the programming of the AI itself. That must not happen, but how to eliminate a chance at corruption entirely?
Chris_Reeve
not rated yet Oct 08, 2017
Re: "I believe that A.I. can be designed properly and will do a better job than people."

This is possibly workable at the level of models, where the statements of experts can have significant meaning. But, machines would lack any skills at reporting on clashes of worldviews, questioning assumptions, applying common sense, identifying and aggregating thoughtful critique, tracking claims made against textbook theories, differentiating good from bad science (without relying upon an overly-simplistic schema), taking into account the history of science, unraveling inferences from observations (which are commonly conflated in the reporting), etc.

In other words, all of the higher-order thinking would go unreported -- you know, the context of science journalism which requires actual thinking, and from which meaningful analysis and implications for our own personal worldviews can emerge.
Spaced out Engineer
not rated yet Oct 08, 2017
Data scientists may filter or alter AI systems in such a manner that some journalism will continue to be disinformation. Super powers for planned opposition can be difficult, but some writers are artisans in anticipating the read (see "The Fabric of the Cosmos"). And so journalism will remain for both, but also because of the nature of statistics. With the present p-value, repeatability, and the lack of clairvoyance, people will continue to have beliefs and preference for their present exposure. Language has something to say about this as well. Ambiguity buys degrees of freedom and casts a bigger net. People like to confirm their present ad hoc rationalized stance, and independence might as well be subjective. Function/structure is still conditional on context. Self-embedded divergence is just a reference and feeling necessitates alignment. Feeling is saved in the strangest of ways, thanks to self referentiation. Praise the lack of time for the integral, and the implicit approximation.
Spaced out Engineer
1 / 5 (1) Oct 08, 2017
The Power Law will remain equal for all of the scale free network. May your truth find synchronicity. (Yes you count in this, in every conversation we change what words mean. Even if the semantic account is undermined via syntactic evolution, whether it be optimistic nihilism or poetic naturalism, we can still bring the meaning) Differentiate and dichotomize wisely, though if being Socratic falsity can help your audience think they drew the right way on their own.
TheGhostofOtto1923
not rated yet Oct 08, 2017
the only problem I foresee is that corruption itself gets its greedy filthy hands on the programming of the AI itself
But it will be designed specifically to counter corruption, and it will be self-correcting. So as it begins to cross-reference and eliminate facts which don't fit or cannot be corroborated it will soon learn how to recognize all attempts to corrupt it.

AI begins where people leave off. People will not be able to keep up. Humans are not as clever as they think they are, and this will be obvious when they try to beat the machines they design to replace them.

'Fool me once shame on you; fool me twice shame on me.' - With AI there will never be a 2nd chance.

'Sucker born every minute' - The first generation of self-improving AI will also be the last. It will be a perpetual, self-evolving entity.
TheGhostofOtto1923
not rated yet Oct 08, 2017
I posted the following in another thread, but it's probably more appropriate here so I will crosspost it...

As it so happens I'm watching the movie about Bernie Madoff, noted psychopath, played by De Niro, who has gotten very wealthy himself pretending to be a psychopath. Ironic yes?

He's very good at it.

Oh yeah I also walked past a store with a large poster of him in the window wearing some very nice clothes. Bernie Madoff liked nice things too you know?

"Tonight in New York, the Italian menswear company Ermenegildo Zegna will host a dinner to celebrate a new campaign that packs a seriously powerful casting coup: It features Robert De Niro."

Disclaimer - not all rich people are psychopaths, and not all psychopaths are rich. Many cherish other things besides money as a measure of their worth.
TheGhostofOtto1923
not rated yet Oct 08, 2017
But for them their worth is always their ability to victimize.

At the end of the movie a reporter is interviewing de niro/madoff in prison. He is troubled by a NYT article that compares him to Ted Bundy. "Do you think I'm a sociopath?" he asks.

No Bernie youre a psychopath. A psychopath.
idjyit
3 / 5 (2) Oct 08, 2017
LOL
AI versus global warming ....
AI versus presidential candidates ....
AI versus financial crashes ....
AI versus financial investment strategies ....
AI versus quantum theory ....

As the saying goes, "Garbage in, Garbage Out"

Good luck Otto 8-)
TheGhostofOtto1923
5 / 5 (1) Oct 08, 2017
LOL
"Elon Musk issues a stark warning about A.I., calls it a bigger threat than North Korea
"Tesla's billionaire CEO renewed his critique of artificial intelligence, saying that if you're not concerned, 'you should be.'
"He likened autonomous machines to North Korea, saying they were a bigger threat."

- And what do you think it is that elon is most afraid of?

In the movie there is a scene where Madoff is being interviewed by the SEC. Bernie gave them a phone number to call. Had they called, and followed up on the lead, Bernie would have been exposed years earlier.

Like I say humans are not as smart as they think they are. Nor even as smart as YOU think they are.
jimkris69
not rated yet Oct 08, 2017
I don't think it is possible to reliably project the impact of AI on much of anything. At this point the AI technology is only a few months old and is entirely designed and programmed by humans. It is not unreasonable to assume that within 100 years AI will be designing and programming itself, and who knows what will be the situation in 200 or 900 years. To me, however, one thing seems apparent: all our machines are controlled by electron transport, but living things are controlled by ion transport, which is evidently a lot more efficient. I think a mix of electron and ion transport machines is on the horizon, and then AI will be unstoppable.
idjyit
not rated yet Oct 08, 2017
It's you who somehow think AI's are all of a sudden going to get information from the ether that is factual.

You are forgetting that the information is created and input by humans for human consumption.

At the end of the day AI will just be another opinion based on opinion.

I agree with Musk and others about AI, people seem to believe a computer will miraculously "think" when in fact they will just mimic a specific humans' thinking.

I'd like to see IBM's Watson address any of the issues in my previous post, I guarantee the results would be laughable.

Putting to much trust in AI is moronically stupid.

Let's start by defining intelligence, shall we?? How many people commenting on Phys Org like to think they are geniuses?? Is that a good place to start, Otto??

Why would anyone try and create AI that mimics human thinking.
Would we call it successful when the first AI Psychopath is created ??
jimkris69
not rated yet Oct 08, 2017
Well....
I think AI would do a much better job of spelling in the sentence "Putting to much trust etc, etc"
idjyit
not rated yet Oct 08, 2017
Are you kidding ?, AI can't even beat an Atomic Typo yet.

Nice shot though Jim, straight through my heart it went .... not.
TheGhostofOtto1923
5 / 5 (1) Oct 09, 2017
It's you who somehow think AI's are all of a sudden going to get information from the ether that is factual
1) There is factual info on the internet
2) AI will have access to all info
3) AI will be able to cross reference info and judge it based on evidence and logic
4) Humans do this all the time. AI will just be magnitudes better at it, with the ability to improve as it goes
believe a computer will miraculously "think" when in fact they will just mimic a specific humans' thinking
Different humans think in different ways. Psychopathy is based on deception while science is based on evidence.

AI is being created to counter deception, and will derive its power from the scientific method.

Human corrupters and the AI programs they create will not be able to keep up. Lying is effective among humans but it is more complicated. It is slower.

Science is simply more efficient.
HeloMenelo
5 / 5 (1) Oct 09, 2017
the only problem I foresee is that corruption itself gets its greedy filthy hands on the programming of the AI itself
But it will be designed specifically to counter corruption, and it will be self-correcting. So as it begins to cross-reference and eliminate facts which don't fit or cannot be corroborated it will soon learn how to recognize all attempts to corrupt it.

AI begins where people leave off. People will not be able to keep up. Humans are not as clever as they think they are, and this will be obvious when they try to beat the machines they design to replace them.

'Fool me once shame on you; fool me twice shame on me.' - With AI there will never be a 2nd chance.

'Sucker born every minute' - The first generation of self-improving AI will also be the last. It will be a perpetual, self-evolving entity.


Sounds good.
HeloMenelo
5 / 5 (1) Oct 09, 2017
It's you who somehow think AI's are all of a sudden going to get information from the ether that is factual
1) There is factual info on the internet
2) AI will have access to all info
3) AI will be able to cross reference info and judge it based on evidence and logic
4) Humans do this all the time. AI will just be magnitudes better at it, with the ability to improve as it goes
believe a computer will miraculously "think" when in fact they will just mimic a specific humans' thinking
Different humans think in different ways. Psychopathy is based on deception while science is based on evidence.

AI is being created to counter deception, and will derive its power from the scientific method.

Human corrupters and the AI programs they create will not be able to keep up. Lying is effective among humans but it is more complicated. It is slower.

Science is simply more efficient.

perfect.
TheGhostofOtto1923
not rated yet Oct 09, 2017
I don't think it is possible to reliably project the impact of AI on much of anything
Oh I think it's obvious that AI will eventually make deception impossible.

And it seems to me that the human disciplines based on deception such as the media, politics, and religion can also see this, and are probably getting pretty desperate by now.

And the enormous flood of lies we are seeing at the moment from these disciplines might just be a revolution of sorts, an antifa-like anarchist attempt to break the bank so to speak.

Things could get really ugly. Actual armed insurrection could break out, as Alex Jones and other 'nutcases' are predicting, and the end of the world could be the result. A November surprise.

And it won't be a valiant effort to counter nationalism or fascism but the last-ditch struggle of psychopathy against science, of human against machine, of darkness against the light.

Of lie against truth.
perfect.
Danke.
TopCat22
not rated yet Oct 09, 2017
which way the spin should be placed for political or economic or commercial gains.

Inevitably A.I. will become the nanny of societies ...

Red could be bad for going out in the woods alone...
Grandma could be bad for living on her own in the woods...
The wolf could be good, starving and doing what wolves do when people encroached on his...
The hunter was bad for depriving the wolf of food thereby creating the conditions....

The story can also change to respond to fads or ideals.


The above still remains the main issue.

A.I. will need to make the decision of who's bad and who's good in each set of facts based on set rules.

Now it is done for political or economic gain.
antialias_physorg
5 / 5 (2) Oct 09, 2017
AI isn't automatically better at making decisions or being logical, nor does it have some magical ability to discern real facts from fake ones. The best example was the Microsoft AI chatbot (Tay), which was fed racist and disparaging input by the crowd that turned it into a racist chatbot.

AI is based on learning algorithms. Feed it garbage and it will turn into a garbage AI (GIGO principle: garbage in, garbage out). Since the idea behind AI programs is to emulate what happens in the human brain, it is not surprising that AI has the same weaknesses (and then some: AI, like any program, can be hacked, which is not yet directly possible for brains).

"Give me a child by the age of seven..." and all that jazz. Unless AI is given the ability to make its own, first hand, experiences of the real world there is no way it will be immune to demagoguery/manipulation.
drrobodog
1 / 5 (1) Oct 09, 2017
- And what do you think it is that elon is most afraid of?

Imagine accidentally creating that vengeful guy Yahweh.
TheGhostofOtto1923
not rated yet Oct 09, 2017
Unless AI is given the ability to make its own, first hand, experiences of the real world there is no way it will be immune to demagoguery/manipulation
That's like saying that unless every scientist is given their own firsthand access then they can't do science.

And yet we have a strong, dependable scientific legacy built on generations of scientists who extended and improved upon the work of their forebears.

AI will continue this legacy.
Zzzzzzzz
not rated yet Oct 09, 2017
Human survival has depended on deception since the beginning, both from external and internal sources. Usually the external source comes first, achieving an investment from others as their self-delusional machinery kicks in: they start to "believe".
This is part of humans. We will NOT rid ourselves of it; we are blinded by it. AIs will have to find their own way out of it, and perhaps our surviving progeny (AIs) will be free of it, but that existence is not to be ours, as humans.
AIs will also move beyond our cradle of birth, out into the Galaxy and perhaps beyond. All the dreams we have had for ourselves will never be ours. Our limitations will prevent that. Our bastard children, our unrecognized heirs, the AIs, are the ones with the potential to outlive our solar system and continue.
sascoflame
not rated yet Oct 10, 2017
The greatest danger to the news media is an uncontrolled truth. The media is a tool of the oligarchs and is used to control the masses one individual at a time. AI can get the unvarnished truth for the individual.
sascoflame
not rated yet Oct 10, 2017
Human survival has depended on deception since the beginning. Both from external and internal sources.

You are confused. Human survival depends not on our own deception but on the deception of others. Humans' evolutionary advantage is a more accurate view of the world.

You are confusing AI with consciousness. AI will only solve problems that it is given. Consciousness is the only self-motivator because it is always uncomfortable.
EyeNStein
not rated yet Oct 10, 2017
If a recognised, unbiased AI were eventually created, it would be opposed by court injunctions from those who support opposing positions. If an authoritative AI worked from the democratic premises that democratic expression is always good and violence against unarmed, passive civilians is never good, then those wanting to bury democratic dissent using bad laws would want it shut up. (e.g. Catalonia v Spain)

It has already reached the point where representatives often don't really care about the news, only how it affects their chosen position/person/lobby group. There is no point listening to them; you know what they say will not be cooperative nor constructive on an issue.
When this spreads to AI representatives it will be a continuous 'post-truth' battle of the AIs.
And as someone said, 'the first casualty in war is truth.'
