It's Saturday night and you've just finished watching the last episode of a Swedish crime drama that you somehow stumbled upon, although you can't quite remember how.
It's late and probably time for bed, but—without prompting—your Netflix screen fills with promotional shots for more shows. There's one about a female detective in Denmark and another about a British inspector who weaves between both sides of the law.
It's a familiar scenario to any Netflix watcher—when the service seems to magically suggest programs that fit your latest pop-culture craze.
And it's an instance of artificial intelligence at work.
These days, the computer algorithms that allow Netflix or Amazon to make purchasing suggestions are a normal part of life. But sometimes, it's hard not to feel a sense of awe when a machine—that square and immobile box in the room—connects with you on an intimate level. A nagging question lingers: "How did it KNOW that?"
In the coming years, people will be asking that question of their cars, phones, banking systems, and virtually every piece of technology with which they interact every day. Artificial intelligence is expected to reshape all of those facets of life, and untold others.
Companies such as Google, Amazon and Facebook are spending billions of dollars to develop the AI capabilities of their products and services. Meanwhile, some of the biggest breakthroughs and most consistent research in the field have been happening at a public university in Edmonton.
The University of Alberta has been growing its AI team with some of the discipline's best minds for more than a decade. Its bona fides as a top-tier school were cemented last year when Google's DeepMind announced its first satellite campus outside of London in partnership with the U of A.
It was a huge coup for the university, but not entirely surprising for the faculty who have spent years building, testing and proving concepts. Those research findings are already transforming fields from health-care research to investment banking. And the ripple effects are just starting.
"Artificial intelligence has the potential to impact any industry," says Jonathan Schaeffer, a computing scientist and dean of the university's Faculty of Science.
"Within 10 years, artificial intelligence will be extremely disruptive."
The many faces of AI
For some, it can be hard not to see artificial intelligence as a science so advanced and complicated that it exists only in the realm of movies.
For Patrick Pilarski, artificial intelligence exists on the same technological continuum as a stick.
At one time, a stick was an essential tool to poke things, to lean on, and so forth. In the same way, he says, artificial intelligence is a tool that can be used to amplify a person's abilities and allow them to accomplish tasks needed for day-to-day living.
"We've been slowly improving our ability to interact with the world through technology since the early days of our species. We might be doing it faster than we used to … but interacting with machines remains something prominent in our daily lives," says Pilarski, an assistant professor at the university, who holds the Canada Research Chair in Machine Intelligence for Rehabilitation.
"Those machines make us smarter, they make us better able to see the world, they make us better able to interact with or change the world, and they make us better able to think about the world."
Artificial intelligence is the branch of computing science that develops machines that can act with the intelligence we normally associate with humans. It allows machines to sift through massive amounts of data to find patterns and even develop intuition through trial and error.
The discipline has been developing since the 1950s, but its popularity and promise—in scientific circles and beyond—have ebbed and flowed through the years.
"I've been working on machine learning since the 1980s," says Russ Greiner, a computing science professor at the university. "We'd work on problems that we made up and, deservedly, the rest of the world said, 'Who cares?' But when we started deploying it, people paid attention."
People might pay even more attention if they understood the extent to which AI influences their lives already. In the past eight years, the capacity for computation has increased exponentially, allowing researchers to apply their findings in more practical ways. At the same time, the world's biggest technology companies have started investing heavily in the field.
The result is changing how we live our lives.
Artificial intelligence allows for credit card fraud detection (computer programs know when a purchase appears to be out of the normal range for a particular client); it's the basis for our familiar friend, Siri (voice recognition and natural language processing are major branches of AI); it powers your smartphone's ability to identify faces on your camera roll (likewise, image recognition is a common use); and it's used to encourage people to make Internet purchases, based on their past purchasing and browsing habits.
"We're a data-rich society and we have been for years," says Schaeffer. "What AI does is allow us to take data and turn it into knowledge. To have a billion pieces of information is useless unless you can distil it into something meaningful."
Teaching machines to learn
If artificial intelligence allows machines to act like humans, there's no more explicit example of that than reinforcement learning.
U of A professor Rich Sutton is the world-leading pioneer on the subject. More than 20 years ago, he used his psychology background to take a learning approach to artificial intelligence—but his work is perhaps having its biggest impact today.
Reinforcement learning is behind the Internet advertisements that automatically appear on your computer screen, and it powers stock market trading. The basic tenets of reinforcement learning likely fuel billions of dollars in economic activity every year.
At its core, the approach mimics how humans learn through trial and error.
"If good things happen, you keep doing those things. If bad things happen, you stop and go on to something else. It's that simple, that obvious, that plain," says Sutton.
"If you're on your bike and you're about to fall over, you turn your steering wheel and you recover … you should learn three things from that. One, that at first you thought you were fine. Two, that you hit the stone and you weren't as safe. And three, that you did some moves and you felt safe again."
Similarly, a reinforcement learning program must decide what's "good" or "bad" based on a final, desired outcome. It will run through millions or tens of millions of scenarios to figure out for itself what puts it in a "good" position or a "bad" position, vis-à-vis its desired outcome. It will then adjust its actions to achieve that outcome.
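The trial-and-error loop Sutton describes can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration (not code from the U of A researchers): an agent repeatedly chooses between two actions with invented reward probabilities, nudging its estimate of each action's value toward whatever outcome it observes, and gradually settling on the action that tends to produce "good" results.

```python
import random

# Minimal reinforcement-learning sketch: an epsilon-greedy agent learns
# by trial and error which of two actions yields more reward.
# The reward probabilities are invented for illustration and hidden
# from the agent, which only sees the outcomes of its own choices.

random.seed(0)

REWARD_PROB = {"A": 0.2, "B": 0.8}   # the environment (unknown to the agent)
values = {"A": 0.0, "B": 0.0}        # the agent's running value estimates
counts = {"A": 0, "B": 0}
EPSILON = 0.1                        # chance of exploring a random action

for _ in range(10_000):
    # Mostly exploit the action currently believed best; sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)

    # A "good" outcome yields reward 1, a "bad" one reward 0.
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0

    # Nudge the estimate toward the observed reward (incremental average).
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)
```

After thousands of trials the estimate for action "B" converges toward its true payoff of 0.8, so the agent keeps choosing it: good things happened, so it keeps doing them. Real systems such as ad placement or trading engines face vastly larger action spaces, but the underlying update is the same.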
Sutton's work is about as close as we get to having machines that think like humans. But computers still have limitations.
Schaeffer, the computing scientist and dean, describes a person's brain as being "general purpose." We might not know how to fix a burst pipe in our home, but we know how to deal with the problem. A computer, on the other hand, will only do what a person tells it to do.
"We're building lots of idiot savants. The driverless car knows how to drive but it doesn't know how to spell-check a document," he says. "We're sentient. There's independence of thought. We do what we want. Computers only do what I tell them to do."