Researcher uses 'big data' algorithm to customize video game difficulty

Apr 19, 2013
Mark Riedl, Assistant Professor, School of Interactive Computing

Georgia Tech researchers have developed a computational model that can predict video game players' in-game performance and provide a corresponding challenge they can beat, leading to quicker mastery of new skills. The advance could improve not only user experiences with video games but also applications beyond the gaming world.

Digital gaming has surged in recent years and is being adopted almost as fast as the mobile devices that are enabling its growth. The Georgia Tech researchers developed a simple turn-based game, then used participant scores to apply algorithms that predict how others with similar skill sets would perform and adjust the difficulty accordingly.

"People come in with different skills, abilities, interests and even desires, which is very contrary to the way video games are built now with a 'one size fits many approach,'" said Mark Riedl, co-creator of the model and assistant professor in the School of Interactive Computing.

The researchers used a method called collaborative filtering, a popular technique employed by Netflix and Amazon for product ratings and recommendations. Where those services recommend movies or products, the gaming model recommends the next challenge for players, adjusting game difficulty by computationally forecasting in-game performance. Riedl said the approach can scale to tens of thousands of users.
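
To make the analogy concrete, the sketch below applies user-based collaborative filtering to a small player-by-challenge score matrix: a missing score is forecast from the scores of similar players. The data, the similarity measure and the neighborhood size are illustrative assumptions, not the researchers' actual model or code.

```python
import numpy as np

# Hypothetical player-by-challenge score matrix (rows: players, columns:
# challenges); np.nan marks challenges a player has not attempted yet.
scores = np.array([
    [0.9, 0.8, 0.4, np.nan],
    [0.7, 0.6, 0.3, 0.2],
    [0.9, 0.9, 0.5, 0.4],
    [0.3, 0.2, np.nan, 0.1],
])

def predict(scores, player, challenge, k=2):
    """Forecast a missing score by averaging, weighted by similarity,
    what the k most similar players scored on that challenge."""
    target = scores[player]
    neighbors = []
    for other in range(scores.shape[0]):
        if other == player or np.isnan(scores[other, challenge]):
            continue
        # Compare only on challenges both players have attempted.
        both = ~np.isnan(target) & ~np.isnan(scores[other])
        if not both.any():
            continue
        a, b = target[both], scores[other][both]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        neighbors.append((sim, scores[other, challenge]))
    neighbors.sort(reverse=True)
    sims = np.array([s for s, _ in neighbors[:k]])
    vals = np.array([v for _, v in neighbors[:k]])
    return float(sims @ vals / (sims.sum() + 1e-9))

# Forecast how player 0 would fare on challenge 3 before serving it.
print(predict(scores, player=0, challenge=3))
```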

The data-driven gaming model outperforms other current techniques specifically because it models player improvement over time, said Riedl. It uses an off-the-shelf algorithm, called tensor factorization, for the first time in gaming research to tailor challenges to individual players.
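
As a rough illustration of what tensor factorization adds over a flat player-by-challenge matrix, the sketch below fits a rank-2 CP decomposition to a hypothetical players x challenges x time performance tensor by gradient descent, so the time factors can capture improvement. The tensor, the rank, and the training loop are assumptions for demonstration; they do not reproduce the algorithm reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance tensor: players x challenges x time steps,
# with scores in [0, 1]; np.nan marks cells that were never observed.
T = rng.random((5, 4, 3))
observed = rng.random(T.shape) < 0.7   # pretend ~70% of cells were observed
T[~observed] = np.nan

rank = 2
P = rng.random((T.shape[0], rank))     # player factors
C = rng.random((T.shape[1], rank))     # challenge factors
S = rng.random((T.shape[2], rank))     # time (skill-progression) factors

lr = 0.02
for _ in range(3000):
    # Reconstruct the tensor from the rank-2 CP factors.
    approx = np.einsum('ir,jr,kr->ijk', P, C, S)
    err = np.where(observed, approx - T, 0.0)   # ignore unobserved cells
    # Gradient steps on the squared error for each factor matrix.
    P -= lr * np.einsum('ijk,jr,kr->ir', err, C, S)
    C -= lr * np.einsum('ijk,ir,kr->jr', err, P, S)
    S -= lr * np.einsum('ijk,ir,jr->kr', err, P, C)

# Forecast player 0's performance on challenge 1 at the latest time step.
forecast = np.einsum('r,r,r->', P[0], C[1], S[-1])
print(round(float(forecast), 3))
```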

The gaming model also includes a performance arc: an algorithm selects in-game events that bring predicted player performance in line with the developer's specifications for target performance (i.e., completing the game). Current games instead use player progress to make small adjustments to what's going on in the game, a practice sometimes called "rubberbanding." The classic example: fall behind in a racing game and the other cars slow down; blow away the field with a large lead and the cars speed up.
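
One way to picture such a performance arc, under the assumption that the system greedily matches forecasts to a target curve (the authors' actual selection algorithm may differ), is to pick at each step the candidate event whose predicted score lies closest to the developer's target. Everything below, including the stand-in predictor, is hypothetical.

```python
# `predict_performance` is a hypothetical stand-in for the trained
# collaborative-filtering forecast described above.
def predict_performance(history, event):
    """Crude forecast of the player's score (0..1) on `event`."""
    skill = sum(history) / max(len(history), 1)
    return max(0.0, min(1.0, skill - event["difficulty"] + 0.5))

def choose_next_event(history, candidates, target):
    """Pick the candidate event whose forecast score is closest to the
    developer-specified target for this point in the performance arc."""
    return min(candidates,
               key=lambda e: abs(predict_performance(history, e) - target))

# Example: suppose the developer's arc asks for roughly 70% success here.
history = [0.9, 0.8, 0.6]                       # hypothetical past scores
events = [{"name": "easy", "difficulty": 0.2},
          {"name": "medium", "difficulty": 0.5},
          {"name": "hard", "difficulty": 0.8}]
print(choose_next_event(history, events, target=0.7)["name"])  # -> "medium"
```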

"This is very reactionary," said Riedl, who directs the Georgia Tech Entertainment Intelligence Lab. "You have to wait for things to fall apart, and then the game tries to correct it in this ad-hoc way."

Riedl said that the new gaming model, which grows alongside the learner, has significant potential for educational and training applications as well. Students struggling with math concepts, for example, could use the model to master arithmetic and mitigate the chances of falling behind in a course, said Riedl.

"We've also done some work with the U.S. Army," he said, "to generate virtual missions where we choose and tailor the types of things that have to happen in the mission so that we don't overwhelm the novices or that we can really challenge the experts."

"Our approach could allow novices to progress slowly and prevent them from abandoning a challenge right away," said Riedl. "For those good at certain skills, the game can be tuned to their particular talents to provide the right challenge at the right time."

Alex Zook, a Ph.D. candidate in human-centered computing, said the team was able to predict, with up to 93 percent accuracy, how players would perform in-game by modeling the changes in a player's skills and applying the recommendation algorithm.

Zook was the primary author of the paper, which he and Riedl presented at the 8th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment in Palo Alto, Calif.

More information: The paper is available at www.cc.gatech.edu/~riedl/pubs/aiide12.pdf.
