Research reveals software improving in ability to analyze, score writing

A Penn State College of Education faculty member's research showed that software for evaluating human writing is improving and expanding in use. Roy Clariana, a professor of education in the college's Learning and Performance Systems (LPS) department, said that software such as this could have important technological implications.

The research, which was done along with Michael B. Wolfe from Grand Valley State University and Kyung Kim, a doctoral student in LPS, compared a software analysis of 90 essays to human-rated scores of those same essays. The team aimed to use these findings to help improve the software, called ALA-Reader, which was developed by Clariana in 2002.

"Software, such as ALA-Reader, would be handy in an online course, or even a large enrollment MOOC (massive open online course), to be able to automatically score students' written submissions, including comments in a discussion board," said Clariana.

According to Clariana, all text has a structure, so different texts can be compared as patterns.

"Previously, we have only considered the patterns in expository texts, such as comparing students' essays about management theories to an expert's essay," said Clariana. "The better essays 'look' more like the expert."

But, said Clariana, narrative text structure is fundamentally different because it is fairly "linear" and is often chronological.

"This investigation allowed us to test our pattern matching software with narrative text for the first time," he said.

Clariana said other software for scoring writing already exists, but it is costly and complicated; ALA-Reader, by comparison, is less expensive, easier to use and faster.

"In a small study in 2004, ALA-Reader did well against the leading for-profit system, which is used to score the GRE and GMAT and other tests," said Clariana. "Unlike that system, ours requires no training, can be set up by a teacher or researcher in about an hour and can mark an essay in fractions of a second on an ordinary laptop."

Clariana said that the team is using the software to think about "knowing" in new ways, in what he has dubbed Knowledge Structure (KS) theory.

"We recently used the tool to measure the Chinese and English knowledge structure of bilinguals in order to see how the different languages are related," said Clariana. "We have also had promising data from Dutch-English and from Korean-English bilinguals that suggest that we are on a good track."

He added that KS theory has direct application to reading comprehension and to reading-comprehension research, a centrally important area of education research.

Clariana said he could see the software supporting a wide range of future applications, such as adding its capabilities to online text-visualization tools. The critical next step, however, is to take the tool online.

"We hope to find funding to hire a programmer to program our tool to run in a web browser online," said Clariana. "That will open a large potential for further research for us and for practical application to essentially making the tool available to anyone, including researchers at universities and teachers in classrooms."

The article, which is titled "The influence of narrative and expository lesson text structures on knowledge structures: alternate measures of knowledge structure," was published in October in the journal "Educational Technology Research and Development."

