The evolution process of object detection: (A) image classification; (B) object localization; (C) semantic segmentation; and (D) instance segmentation. Credit: The Plant Phenome Journal (2023). DOI: 10.1002/ppj2.20065

A new University of Illinois project is using advanced object recognition technology to keep toxin-contaminated wheat kernels out of the food supply and to help researchers make wheat more resistant to fusarium head blight, or scab disease, the crop's top nemesis.

"Fusarium head blight causes a lot of economic losses in wheat, and the associated toxin, deoxynivalenol (DON), can cause issues for human and animal health. The disease has been a big deterrent for people growing wheat in the Eastern U.S. because they could grow a perfectly nice crop, and then take it to the elevator only to have it get docked or rejected. That's been painful for people. So it's a big priority to try to increase resistance and reduce DON risk as much as possible," says Jessica Rutkoski, assistant professor in the Department of Crop Sciences, part of the College of Agricultural, Consumer and Environmental Sciences (ACES) at Illinois. Rutkoski is a co-author on the new paper in the Plant Phenome Journal.

Increasing resistance to any disease traditionally means growing many genotypes of the crop, infecting them with the disease, and looking for symptoms. The process, known in plant breeding as phenotyping, is successful when it identifies resistant genotypes: those that don't develop symptoms, or that develop less severe ones. When that happens, researchers try to identify the genes related to resistance and then move those genes into high-performing hybrids of the crop.

It's a long, repetitive process, but Rutkoski hoped one step—phenotyping for disease symptoms—could be accelerated. She looked for help from AI experts Junzhe Wu, doctoral student in the Department of Agricultural and Biological Engineering (ABE), and Girish Chowdhary, associate professor in ABE and the Department of Computer Science (CS). ABE is part of ACES and the Grainger College of Engineering, which also houses CS.

"We wanted to test whether we could quantify damage using simple cell phone images of grains. Normally, we look at a petri dish of kernels and then give it a subjective rating. It's very mind-numbing work. You have to have people specifically trained and it's slow, difficult, and subjective. A system that could automatically score kernels for damage seemed doable because the symptoms are pretty clear," Rutkoski says.

(A) The training set that was used to train Mask R-CNN (Region-based Convolutional Neural Network), as well as train genomic selection models. For training of Mask R-CNN, kernels were manually labeled as diseased (blue boundary) or healthy (gold boundary), creating the FDKL dataset. Next, a subset of 49 images was designated as a validation set, and Mask R-CNN hyperparameters were adjusted to maximize the ability of the neural network to predict labels in the validation set. (B) The test set consisted of new samples of new breeding lines that the trained Mask R-CNN was tested on to predict the diseased (blue boundary) or healthy (red boundary) state of kernels, creating the FDKL dataset. Additionally, this set was used as the test set to determine genomic selection (GS) accuracy for deoxynivalenol (DON). FDK, Fusarium-damaged kernel. Credit: The Plant Phenome Journal (2023). DOI: 10.1002/ppj2.20065
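For readers who want a concrete picture of the setup the caption describes, the following is a minimal sketch, in Python with PyTorch/torchvision, of how labeled kernel images might be organized and split so that a subset (the 49-image validation set mentioned above) can be held out for hyperparameter tuning. The `KernelDataset` class, file layout, and class codes are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only: organizing manually labeled kernel images for
# Mask R-CNN training, with a held-out validation split for tuning.
import torch
from torch.utils.data import Dataset, random_split
from torchvision.io import read_image

CLASSES = {"background": 0, "healthy": 1, "diseased": 2}  # assumed class codes

class KernelDataset(Dataset):
    """Pairs each petri-dish photo with its per-kernel masks, boxes, and labels."""
    def __init__(self, image_paths, annotations):
        self.image_paths = image_paths      # list of image file paths
        self.annotations = annotations      # per-image dicts: "masks", "boxes", "labels"

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = read_image(self.image_paths[idx]).float() / 255.0
        ann = self.annotations[idx]
        target = {
            "boxes": torch.as_tensor(ann["boxes"], dtype=torch.float32),
            "labels": torch.as_tensor(ann["labels"], dtype=torch.int64),
            "masks": torch.as_tensor(ann["masks"], dtype=torch.uint8),
        }
        return img, target

def split_train_val(dataset, n_val=49, seed=0):
    """Hold out n_val labeled images for hyperparameter tuning, as in the caption."""
    gen = torch.Generator().manual_seed(seed)
    return random_split(dataset, [len(dataset) - n_val, n_val], generator=gen)
```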

Wu and Chowdhary agreed it was possible. They started with algorithms similar to those used by tech giants for object detection and classification. But discerning minute differences in diseased and healthy wheat kernels from cell phone images required Wu and Chowdhary to advance the technology further.

"One of the unique things about this advance is that we trained our network to detect minutely damaged kernels with good enough accuracy using just a few images. We made this possible through meticulous pre-processing of data, transfer learning, and bootstrapping of labeling activities," Chowdhary says. "This is another nice win for machine learning and AI for agriculture and society."

He adds, "This project builds on the AIFARMS National AI Institute and the Center for Digital Agriculture here at Illinois to leverage the strength of AI for agriculture."

Successfully detecting fusarium damage—small, shriveled, gray, or chalky kernels—meant the technology could also foretell the grain's toxin load; the more external signs of damage, the greater the DON content.
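To make that link concrete, here is a minimal sketch, assumed rather than taken from the paper, that turns per-image Mask R-CNN detections into a Fusarium-damaged kernel (FDK) fraction and fits a simple line relating that fraction to measured DON. The confidence threshold and the linear model are illustrative choices.

```python
# Illustrative sketch: from kernel detections to an FDK score, then to DON.
import numpy as np

HEALTHY, DISEASED = 1, 2       # assumed class codes, matching the sketch above
SCORE_THRESHOLD = 0.5          # assumed confidence cutoff for counting a detection

def fdk_fraction(prediction):
    """Fraction of detected kernels classified as diseased in one image."""
    keep = prediction["scores"] >= SCORE_THRESHOLD
    labels = prediction["labels"][keep]
    n_diseased = int((labels == DISEASED).sum())
    n_total = n_diseased + int((labels == HEALTHY).sum())
    return n_diseased / n_total if n_total else 0.0

def fit_don_from_fdk(fdk_scores, don_ppm):
    """Least-squares line predicting measured DON (ppm) from the FDK fraction."""
    slope, intercept = np.polyfit(np.asarray(fdk_scores), np.asarray(don_ppm), deg=1)
    return slope, intercept
```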

On its own, the technology predicted DON levels better than in-field ratings of disease symptoms, which breeders often rely on instead of kernel phenotyping to save time and resources. But compared with trained humans rating kernel damage in the lab, the technology was only 60% as accurate.

The researchers are still encouraged, though, as their initial tests didn't use a large number of samples to train the model. They're currently adding samples and expect to achieve greater accuracy with additional tweaking.

"While further training is needed to improve the capabilities of our model, initial testing shows promising results and demonstrates the possibility of providing an automated and objective phenotyping method for fusarium damaged kernels that could be widely deployed to support resistance breeding efforts," Wu says.

Rutkoski says the ultimate goal is to create an online portal where breeders like her could upload cell phone photos of wheat kernels for automatic scoring of fusarium damage.

"A tool like this could save weeks of time in a lab, and that time is critical when you're trying to analyze the data and prepare the next trial. And ultimately, the more efficiency we can bring to the process, the faster we can improve resistance to the point where scab can be eliminated as a problem," she says.

Study authors include Junzhe Wu, Arlyn Ackerman, Rupesh Gaire, Girish Chowdhary, and Jessica Rutkoski.

More information: Junzhe Wu et al, A neural network for phenotyping Fusarium-damaged kernels (FDKs) in wheat and its impact on genomic selection accuracy, The Plant Phenome Journal (2023). DOI: 10.1002/ppj2.20065