Can standardized testing capture learning potential?

Years of standardized testing have resulted in a rich pool of data to help determine a student's learning curve. Credit: Colorado Department of Education 

However much they are dreaded and bemoaned, standardized tests remain a big part of the education landscape. And for everyone concerned—test takers, educators and even the nation's employers—that's both boon and bane.

"Standardized tests have actually gotten pretty good at testing knowledge," says University of Denver assistant professor Denis Dumas, an educational psychologist and statistician in the Department of Research Methods and Information Science at the Morgridge College of Education.

But beneficial as testing knowledge may be, he adds, "knowledge and potential are not the same."

In fact, a single test taken on a given day captures only what the test taker knows at that moment. And that information may not provide a fair depiction of what Dumas calls "learning capacity."

Along with fellow researcher Daniel McNeish, a psychology professor at Arizona State University, Dumas aims to make better use of testing results. Partnering with a small team of other data enthusiasts, the two are developing—and yes, testing—a model that captures the potential to acquire, master and deploy knowledge. In other words, the model offers insight into the test taker's learning curve.

"We study the shape of learning curves," Dumas explains, noting that this provides insight into the pressing questions that educators never stop pondering. "How do people learn? And when do they learn faster?"

To find out, Dumas and McNeish have developed what they call a "dynamic measurement model"—so called because it doesn't rely on a single high-stakes test but instead harvests and analyzes years of examination data on individuals. Fortunately, the nation's schools have long been administering standardized tests to children from grade school through high school, giving Dumas and McNeish plenty of data to work with. That vast store of information, they say, makes the model "three times more predictive than a single standardized assessment."
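
To make the idea concrete, here is a minimal sketch of the general approach, under the assumption that "learning capacity" can be read off as the asymptote of a saturating growth curve. This is a toy illustration, not the authors' published dynamic measurement model, and every score and parameter name below is hypothetical.

```python
# A toy sketch of trajectory-based measurement (hypothetical data):
# fit a saturating learning curve to one student's repeated test
# scores and treat the estimated asymptote as a stand-in for capacity.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(grade, capacity, rate, start):
    """Score rises from `start` toward the asymptote `capacity`
    at speed `rate` as grade increases."""
    return capacity - (capacity - start) * np.exp(-rate * grade)

# Hypothetical yearly scaled scores for one student, grades 3 through 11.
grades = np.arange(3, 12)
scores = np.array([412, 455, 490, 517, 540, 556, 568, 577, 583], dtype=float)

# Fit the three curve parameters to the observed trajectory.
(capacity, rate, start), _ = curve_fit(
    learning_curve, grades, scores, p0=[600.0, 0.2, 300.0]
)

print(f"Estimated capacity (asymptote): {capacity:.0f}")
print(f"Latest observed score:          {scores[-1]:.0f}")
```

The point of the sketch is the shift in what gets reported: not the most recent score, but where the fitted curve appears to be heading.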

Their claims regarding the model's effectiveness have been supported in a series of 11 articles published over the last five years, with the latest piece appearing in a recent issue of Multivariate Behavioral Research. And the education community is beginning to take notice.

"This work is central to understanding growth and change," says Karen Riley, dean of the Morgridge College. "Outcome measures and their limitations have long been the challenge for accurately assessing the effectiveness of all types of interventions. Addressing these challenges opens the door to transformational change in learning."

In developing their model, Dumas says, the researchers focused on a key question: "How do we take the data that students give us on tests and get the most meaningful information?"

They began work by drawing on datasets from the University of California, Berkeley's Institute of Human Development. Among this rich stash of information were test scores and career reports from participants who had been tracked for four to five decades, from grade school until they were in their 50s, 60s and even 70s. Some of the tests in question had been administered in the 1920s and 1930s to participants who were as young as 3 years old, giving the researchers the ability to connect early results with subsequent results and even lifetime career choices and achievements. Using this data, Dumas and McNeish, along with co-author Kevin Grimm, also of Arizona State, were able to study learning curves, deduce potential and then correlate those findings with academic and professional outcomes.
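
As a rough illustration of that validation step, the simulation below (all numbers invented, not drawn from the Berkeley data) compares how well a noisy capacity estimate and a single test snapshot each track a later-life outcome.

```python
# Toy simulation (hypothetical data throughout): does an estimated
# capacity predict a later outcome better than one test score does?
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated "true" capacities and later-life outcomes tied to them.
true_capacity = rng.normal(600, 40, n)
adult_outcome = 0.8 * true_capacity + rng.normal(0, 30, n)

# A trajectory-based estimate recovers capacity with modest noise;
# a single test, taken far from the asymptote, is a weaker signal.
estimated_capacity = true_capacity + rng.normal(0, 20, n)
single_test = 0.6 * true_capacity + rng.normal(0, 45, n)

r_cap = np.corrcoef(estimated_capacity, adult_outcome)[0, 1]
r_one = np.corrcoef(single_test, adult_outcome)[0, 1]
print(f"outcome ~ estimated capacity: r = {r_cap:.2f}")
print(f"outcome ~ single test score:  r = {r_one:.2f}")
```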

How well did their model's predictions coincide with actual outcomes? Much of the time, Dumas says, "We were pretty darn close."

Close enough that Dumas is beginning to think about where and when the model might best be used. It's applicable to any organization, such as the military, that needs to funnel labor and talent into occupational and career paths, he says. The education community would undoubtedly welcome a data-analysis approach that accounts for learning capacity. And students and potential employees might also cheer this innovation, if only because it reduces the stakes for any one test—say, the SAT or GRE.

For the time being, Dumas says, the methodology remains in development. "The problem is that it is far and away more complicated than previous methods," he explains. For example, expediting the computations requires technology—think supercomputers—seldom directed to the educational arena. Dynamic measurement also requires lots of data that, while technically available, isn't always accessible: states don't always want to release or share their data, Dumas explains.

This isn't the only assessment project occupying Dumas' time. Along with another Morgridge College professor, Peter Organisciak, he has been involved in launching a free website to score creativity assessments. Not only could it change how school psychologists approach such testing; it should also make it easier for school districts with limited resources to offer this option to their students.

As with that project, the dynamic measurement model focuses on addressing inequities in education and on escaping what Dumas calls "the trap" of standardized testing as it currently exists.

"This model is meant to get us out of that trap," he says. "We want to create a that quantifies not just knowledge but how much potential somebody has to grow."

