IBM reveals five innovations that will change our lives within five years

December 17, 2012, IBM
Physical Analytics Research Manager Hendrik Hamann examines an array of wireless sensors used to detect environmental conditions such as temperature, humidity, gases and chemicals at IBM Research headquarters in Yorktown Heights, NY, Monday, December 17, 2012. In five years, technology advancements could enable sensors to analyze odors or the molecules in a person’s breath to help diagnose diseases. This innovation is part of IBM’s 5 in 5, a set of IBM annual predictions that have the potential to change the way people work, live and interact during the next five years. Credit: Jon Simon/Feature Photo Service for IBM

Today IBM unveiled the seventh annual "IBM 5 in 5" (#ibm5in5) – a list of innovations that have the potential to change the way people work, live and interact during the next five years.

The IBM 5 in 5 is based on market and societal trends, as well as on emerging technologies from IBM's R&D labs around the world that can make these transformations possible.

This year's IBM 5 in 5 explores innovations that will be the underpinnings of the next era of computing, which IBM describes as the era of cognitive systems. This new generation of machines will learn, adapt, sense and begin to experience the world as it really is. This year's predictions focus on one element of the new era: the ability of computers to mimic the human senses, in their own way, to see, smell, touch, taste and hear.

These sensing capabilities will help us become more aware and productive, and will help us think, but not think for us. Cognitive computing systems will help us see through complexity, keep up with the speed of information, make more informed decisions, improve our health and standard of living, enrich our lives and break down all kinds of barriers, including geographic distance, language, cost and inaccessibility.

"IBM scientists around the world are collaborating on advances that will help computers make sense of the world around them," said Bernie Meyerson, IBM Fellow and VP of Innovation. "Just as the human brain relies on interacting with the world using multiple senses, by bringing combinations of these breakthroughs together, cognitive systems will bring even greater value and insights, helping us solve some of the most complicated challenges."

Here are five predictions that will define the future:

Touch: You will be able to touch through your phone

Imagine using your smartphone to shop for your wedding dress and being able to feel the satin or silk of the gown, or the lace on the veil, all from the surface of the screen. Or feeling the beading and weave of a blanket made by a local artisan halfway around the world. In five years, industries such as retail will be transformed by the ability to "touch" a product through your mobile device.

IBM scientists are developing applications for retail, healthcare and other sectors using haptic, infrared and pressure-sensitive technologies to simulate touch, such as the texture and weave of a fabric, as a shopper brushes her finger over the image of the item on a device screen. Using the phone's vibration capabilities, every object will be assigned a unique set of vibration patterns that represents the touch experience: short, fast patterns, or longer and stronger strings of vibrations. The vibration pattern will differentiate silk from linen or cotton, helping simulate the physical sensation of actually touching the material.
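IBM has not published the details of these encodings, but the idea of assigning each material a distinct vibration pattern can be sketched in a few lines. Everything below, from the fabric names to the pulse values, is an invented illustration, not IBM's actual scheme:

```python
# Hypothetical sketch: each fabric is encoded as a sequence of
# (duration_ms, intensity) pulses for the phone's vibration motor.
# The fabric names and pulse values are invented for illustration.

VIBRATION_PATTERNS = {
    "silk":   [(20, 0.2)] * 6,             # short, fast, gentle pulses
    "linen":  [(80, 0.8), (40, 0.0)] * 3,  # longer, stronger bursts with gaps
    "cotton": [(50, 0.5), (30, 0.2)] * 3,  # somewhere in between
}

def haptic_feedback(fabric):
    """Return the pulse sequence to play as a finger crosses the image."""
    if fabric not in VIBRATION_PATTERNS:
        raise ValueError("no haptic profile for %r" % fabric)
    return VIBRATION_PATTERNS[fabric]
```

A shopping app would hand such a sequence to the platform's vibration API (typically a waveform of timings and amplitudes) while tracking the shopper's finger position over the fabric image.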

Current uses of haptic and graphic technology in the gaming industry take the end user into a simulated environment. The opportunity, and the challenge, is to weave the technology so deeply into everyday experiences that it brings greater context to our lives, turning mobile phones into tools for natural and intuitive interaction with the world around us.

Sight: A pixel will be worth a thousand words

We take 500 billion photos a year[1]. 72 hours of video is uploaded to YouTube every minute[2]. The global medical diagnostic imaging market is expected to grow to $26.6 billion by 2016[3].

Computers today only understand pictures by the text we use to tag or title them; the majority of the information—the actual content of the image—is a mystery.

In the next five years, systems will not only be able to look at and recognize the contents of images and visual data; they will turn the pixels into meaning, beginning to make sense of them much as a human views and interprets a photograph. In the future, "brain-like" capabilities will let computers analyze features such as color, texture patterns or edge information and extract insights from visual media. This will have a profound impact on industries such as healthcare, retail and agriculture.

Within five years, these capabilities will be put to work in healthcare by making sense out of massive volumes of medical information such as MRIs, CT scans, X-Rays and ultrasound images to capture information tailored to particular anatomy or pathologies. What is critical in these images can be subtle or invisible to the human eye and requires careful measurement. By being trained to discriminate what to look for in images—such as differentiating healthy from diseased tissue—and correlating that with patient records and scientific literature, systems that can "see" will help doctors detect medical problems with far greater speed and accuracy.
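As a rough illustration of the "color, texture patterns or edge information" idea, here is a toy sketch in pure Python. The feature (average edge strength) and the threshold are invented for illustration; a real diagnostic system would learn its features and decision rules from training data:

```python
# Toy sketch of pixel-level feature extraction. A grayscale image is
# represented as a list of rows of intensities 0-255. The feature and
# threshold below are illustrative only, not a real medical model.

def edge_strength(img):
    """Average absolute horizontal intensity difference: a crude
    stand-in for the 'edge information' feature."""
    diffs = [abs(row[i + 1] - row[i])
             for row in img for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def classify_tissue(img, edge_threshold=40.0):
    """Label a patch 'irregular' if its edges are unusually strong."""
    return "irregular" if edge_strength(img) > edge_threshold else "smooth"

# Two tiny example patches: one nearly uniform, one highly varied.
smooth_patch = [[100, 102, 101, 99], [101, 100, 102, 100]]
noisy_patch  = [[0, 255, 0, 255], [255, 0, 255, 0]]
```

The real systems the article describes would combine many such features with patient records and literature, but the basic pipeline, extract measurable features from pixels and then classify, is the same shape.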

Hearing: Computers will hear what matters

Ever wish you could make sense of the sounds all around you and be able to understand what's not being said?

Within five years, a distributed system of clever sensors will detect elements of sound such as sound pressure, vibrations and sound waves at different frequencies. It will interpret these inputs to predict when trees will fall in a forest or when a landslide is imminent. Such a system will "listen" to our surroundings and measure movements, or the stress in a material, to warn us if danger lies ahead.

Sensors will detect raw sounds, and a system receiving this data will, much like the human brain, take into account other "modalities," such as visual or tactile information, and classify and interpret the sounds based on what it has learned. When new sounds are detected, the system will form conclusions based on previous knowledge and its ability to recognize patterns.
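The "classify new sounds against learned examples" step can be sketched as a nearest-neighbour lookup over feature vectors. The feature names and example values here are assumptions made for illustration; a real system would extract them from raw sensor streams and fuse many more modalities:

```python
# Sketch: classify a new sound by finding the closest previously
# learned example in feature space. Feature vectors are
# (sound_pressure, dominant_freq_hz, ground_vibration); all values
# are invented for illustration.
import math

LEARNED = [
    ((0.9, 40.0, 0.8),   "landslide_precursor"),
    ((0.3, 300.0, 0.1),  "falling_branch"),
    ((0.1, 2000.0, 0.0), "birdsong"),
]

def classify(sample):
    """Return the label of the nearest learned example (Euclidean)."""
    return min(LEARNED, key=lambda ex: math.dist(ex[0], sample))[1]
```

A deployed system would keep growing the `LEARNED` set as it encounters new sounds, which is the pattern-recognition behaviour the article describes.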

For example, "baby talk" will be understood as a language, telling parents or doctors what infants are trying to communicate. Sounds can be a trigger for interpreting a baby's behavior or needs. By being taught what baby sounds mean – whether fussing indicates a baby is hungry, hot, tired or in pain – a sophisticated speech recognition system would correlate sounds and babbles with other sensory or physiological information such as heart rate, pulse and temperature.

In the next five years, by learning about emotion and being able to sense mood, systems will pinpoint aspects of a conversation and analyze pitch, tone and hesitancy to help us have more productive dialogues that could improve customer call center interactions, or allow us to seamlessly interact with different cultures.

Today, IBM scientists are beginning to capture underwater noise levels in Galway Bay, Ireland, using underwater sensors that capture sound waves and transmit them to a receiving system for analysis. The goal is to understand the sounds and vibrations of wave-energy conversion machines, and their impact on sea life.

Taste: Digital taste buds will help you to eat smarter

What if we could make healthy foods taste delicious using a different kind of computing system that is built for creativity?

IBM researchers are developing a computing system that actually experiences flavor, to be used alongside chefs to create the tastiest and most novel recipes. It will break down ingredients to their molecular level and blend the chemistry of food compounds with the psychology behind the flavors and smells humans prefer. By comparing this with millions of recipes, the system will be able to create new flavor combinations that pair, for example, roasted chestnuts with other foods such as cooked beetroot, fresh caviar, and dry-cured ham.

A system like this can also be used to help us eat healthier, creating novel flavor combinations that will make us crave a vegetable casserole instead of potato chips.

The computer will use algorithms to determine the precise chemical structure of food and why people like certain tastes. These algorithms will examine how chemicals interact with each other, the molecular complexity of flavor compounds and their bonding structure, and use that information, together with models of perception, to predict the taste appeal of flavors.
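One well-known way to frame the pairing idea is that ingredients sharing many flavor compounds tend to pair well. A minimal sketch, with simplified invented compound sets standing in for a real chemical database:

```python
# Sketch of compound-based food pairing: score a pair of ingredients
# by the overlap of their flavor compounds. The compound sets below
# are simplified invented stand-ins, not real chemistry data.

FLAVOR_COMPOUNDS = {
    "roasted_chestnut": {"furaneol", "pyrazine", "vanillin"},
    "cooked_beetroot":  {"geosmin", "furaneol", "pyrazine"},
    "caviar":           {"trimethylamine", "furaneol"},
    "potato_chip":      {"pyrazine", "methional"},
}

def pairing_score(a, b):
    """Jaccard similarity of shared flavor compounds (0.0 to 1.0)."""
    ca, cb = FLAVOR_COMPOUNDS[a], FLAVOR_COMPOUNDS[b]
    return len(ca & cb) / len(ca | cb)

def best_pairing(ingredient):
    """Return the other ingredient with the highest overlap score."""
    others = [k for k in FLAVOR_COMPOUNDS if k != ingredient]
    return max(others, key=lambda o: pairing_score(ingredient, o))
```

A creativity-oriented system would go further, searching for novel high-scoring combinations and weighting them by perception models, but compound overlap is the core signal.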

Not only will it make healthy foods more palatable; it will also surprise us with unusual pairings of foods designed to maximize our experience of taste and flavor. For people with special dietary needs, such as individuals with diabetes, it could develop flavors and recipes that keep their blood sugar regulated while still satisfying their sweet tooth.

Smell: Computers will have a sense of smell

During the next five years, tiny sensors embedded in your computer or cell phone will detect if you're coming down with a cold or other illness. By analyzing odors, biomarkers and thousands of molecules in someone's breath, doctors will have help diagnosing and monitoring the onset of ailments such as liver and kidney disorders, asthma, diabetes and epilepsy by detecting which odors are normal and which are not.

Today IBM scientists are already sensing environmental conditions and gases to preserve works of art. This innovation is beginning to be applied to clinical hygiene, one of the biggest challenges in healthcare today. For example, antibiotic-resistant bacteria such as Methicillin-resistant Staphylococcus aureus (MRSA), which in 2005 was associated with almost 19,000 hospital stay-related deaths in the United States, are commonly found on the skin and can be easily transmitted wherever people are in close contact. One way of fighting MRSA exposure in healthcare institutions is by ensuring medical staff follow clinical hygiene guidelines. In the next five years, IBM technology will "smell" surfaces for disinfectants to determine whether rooms have been sanitized. Using novel wireless "mesh" networks, sensors will gather and measure data on various chemicals, and the system will continuously learn and adapt to new smells over time.
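The "continuously learn and adapt to new smells" behaviour can be sketched as a sensor node that tracks a running baseline and flags readings that depart from it. The smoothing factor and threshold below are invented for illustration:

```python
# Sketch: a chemical sensor node that learns a baseline reading with
# an exponential moving average and flags departures from it, roughly
# how a hygiene monitor might 'learn' what a sanitized room smells
# like. The alpha and threshold values are invented for illustration.

class SmellSensor:
    def __init__(self, alpha=0.1, threshold=0.5):
        self.baseline = None
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # deviation that triggers an alert

    def reading(self, value):
        """Return True if the reading deviates from the learned norm."""
        if self.baseline is None:
            self.baseline = value   # first reading seeds the baseline
            return False
        anomalous = abs(value - self.baseline) > self.threshold
        # adapt the baseline toward the new reading either way
        self.baseline += self.alpha * (value - self.baseline)
        return anomalous
```

In a mesh deployment, many such nodes would forward readings and alerts to a central system, which could also push updated baselines back out as conditions change.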

Thanks to advances in sensor and communication technologies, combined with deep learning systems, sensors can measure data in places never thought possible. For example, computer systems can be used in agriculture to "smell" or analyze the soil condition of crops. In urban environments, this technology will be used to monitor issues with refuse, sanitation and pollution, helping city agencies spot potential problems before they get out of hand.


More information:

[1] Digital Media Analysis, Search and Management workshop, Feb. 27-28, 2012

[2] YouTube

[3] MarketsAndMarkets, Diagnostic Imaging Market: Competitive Landscape & Global Forecasts 2010-2016, … ging-market-411.html




4 / 5 (4) Dec 17, 2012
You can find their past predictions on their website.

I wouldn't say all of them are particularly realistic or happening in the "promised" timeframe... but at least there have been some simple and realistic ones in each year's set.
This year's set however seems completely bonkers and unrealistic to me.
1 / 5 (2) Dec 17, 2012
Seems to me that someone is building an android. Put together all these new, upcoming gadgets and you have one super robot.
2.8 / 5 (4) Dec 17, 2012
This year's set however seems completely bonkers and unrealistic to me.

You don't know, do you?

Nanotechnology can accomplish smell and taste.

Machine-vision systems able to spot and classify defects, and facial recognition software, have been around for a long time now. Making software that can fully understand what it's seeing is conceivable, since you have the options of 3-D camera technology and multi-spectral imaging.

With all of that data, an adaptive A.I. would begin to approach animal or even near-human intelligence.

Consider a "mouse" robot that has a maze-solving algorithm. Now add sensory data and that guy's video-game-mechanics A.I., which can select "mechanics" to solve a problem... and guess what? You have an A.I. that can recognize a maze as a maze, call an algorithm that can solve the maze, and use all the sensory input to understand its environment...

Sensory input is a requirement for intelligence.

This development is not necessarily a good thing.
5 / 5 (1) Dec 17, 2012
I read this like a job advertisement from IBM intended to attract talent.
4 / 5 (1) Dec 17, 2012
Eulerian Video Magnification can sense all sorts of stuff from video imagery alone: heartbeat, emotion, truthfulness, micro-mechanical movement and other structural behaviors, and much more. By applying temporal filtering and amplification to decomposed original video imagery, minute changes in the videoed scene can be exaggerated to reveal things normally not seen. No special cameras are required, either.
2 / 5 (4) Dec 17, 2012
Lurker... I am sure anything can be done. But I do not see it happening so soon.

Looking back on the last 5 years, I'm actually quite disappointed by the slow progress in some fields.

Common everyday things:
- HDD capacity improvements have nearly stopped,
- Memory, CPU and GPU clock speeds aren't growing like before; Intel promised 10 GHz chips back in the NetBurst days. Now advances have refocused on core count and power draw.
- TV resolutions aren't increasing either. 4K TVs have been hyped for years, but they're not here yet.
- OLED screens still don't make economic sense. Every year at tech expos you have manufacturers promising they'll have it nailed next year.
1 / 5 (2) Dec 17, 2012
Fancier "sciency" examples that I was excited about:
- non-invasive brain-computer interfaces died off. The OCZ NIA was an exciting first step, but there's been no follow-up 3-4 years later
- invasive BCI implants, more exciting and with greater potential, but still very far away. Braingate really rocked my world with Matt Nagle's implant, but now 8 years later there's not much new to show off... I know the researchers are most certainly hard at work, and that improvements to the field are being made, it just isn't all too visible to the public, and is not happening as fast as such optimistic prognoses may suggest.
2.8 / 5 (4) Dec 17, 2012
Cochlear implants are a good example where there has been potential for a lot of improvement, but for some reason it is not happening. The number of electrodes they use, and therefore the sound resolution and quality they produce, has not changed since they were commercialized. There was some hype about "hi-fi" versions with double the electrode count nearly 5 years ago, with no results that I know of. I fully expect that one day they will be so advanced that actual healthy-hearing people will want to have them implanted just for the awesomeness they could provide... Imagine the comfort of hi-fi headphones built into your ears, for listening to music or phone calls or any other audio source, or even enhancing the sounds surrounding you, etc. I'll be the first to sign up for one.

It kinda seems like once a tech reaches certain "good enough" level of development, there isn't enough incentive to drive it further... Somehow, people are just not complaining, keep paying for the cu
1 / 5 (1) Dec 17, 2012
Speaking as a crank, I would probably put all five senses on a chip and manage each input with a dedicated CPU. The interface that has everything working together to make a bot respond with motion logically is good bait for competing OS apps. It receives and analyzes video, audio, tactile and olfactory input (including taste), and continuously polls altitude, gravity, and attitude in order to optimize its responses. This decision-making app develops character, which includes the degree of intelligence to apply generally, and determines emotional responses, initiative, speech, and reflexes to stimuli, based on sensory input. A certain degree of stupidity built in will make it seem all the more human.
1 / 5 (2) Dec 18, 2012
Are we still trying to make an artificial animal?

I say more genetics and biological studies could revolutionize the world more than all this fancy tech
not rated yet Dec 18, 2012
Kudos to IBM for focusing on longer term goals than the next shareholders' meeting. One of a small number of companies in the U.S. that still does R&D.
