
Novel 'registration' method identifies plant traits in close-up photos

(A) Schematic diagram of the raw image acquisition system for the hemisphere references. (B) Schematic diagram of the raw image acquisition system for the plant. (C) Simulation of the plant leaf and the hemisphere reference tangent to each other, with the relevant 3D light field features. In this case, the plant and the reference at the tangent point share the same 3D light field features. Here, dᵢ denotes the distance from the point to the light source, dᵥ the distance from the point to the camera optical center, θᵢ the angle between the incident light direction vector and the normal vector of the surface containing the point, θᵥ the angle between the observation direction vector and that normal vector, and θ the angle between the projections of the incident light direction vector and the observation direction vector onto the surface containing the point. Credit: Plant Phenomics
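For readers who want the caption's geometry in concrete form, the following is a minimal numpy sketch of how these five features could be computed for a single surface point. The function name and its arguments are illustrative assumptions, not code from the paper, which may compute the features differently:

```python
import numpy as np

def light_field_features(point, normal, light_pos, cam_pos):
    """Compute the five light field features for one surface point.
    All arguments are 3-element arrays in the same world coordinates."""
    n = normal / np.linalg.norm(normal)
    to_light = light_pos - point              # vector toward the light source
    to_cam = cam_pos - point                  # vector toward the camera optical center
    d_i = np.linalg.norm(to_light)            # distance to the light source
    d_v = np.linalg.norm(to_cam)              # distance to the camera optical center
    u_i, u_v = to_light / d_i, to_cam / d_v   # unit direction vectors
    theta_i = np.arccos(np.clip(u_i @ n, -1.0, 1.0))  # incidence angle
    theta_v = np.arccos(np.clip(u_v @ n, -1.0, 1.0))  # viewing angle
    # Project both directions onto the tangent plane of the surface and take
    # the angle between the projections (undefined at exact normal incidence).
    p_i = u_i - (u_i @ n) * n
    p_v = u_v - (u_v @ n) * n
    cos_t = (p_i @ p_v) / (np.linalg.norm(p_i) * np.linalg.norm(p_v))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return d_i, d_v, theta_i, theta_v, theta
```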

Modern cameras and sensors, together with image processing algorithms and artificial intelligence (AI), are ushering in a new era of precision agriculture and plant breeding. In the near future, farmers and scientists will be able to quantify various plant traits by simply pointing special imaging devices at plants.

However, some obstacles must be overcome before this vision becomes a reality. A major issue in image sensing is the difficulty of combining data on the same plant gathered from multiple sensors, an approach known as 'multispectral' or 'multimodal' imaging. Different sensors are optimized for different frequency ranges, and each provides useful information about the plant. Unfortunately, the process of aligning and combining plant images acquired with multiple sensors, called 'registration,' can be notoriously complex.

Registration is even more complex when it involves three-dimensional (3D) multispectral images of plants taken at close range. Properly aligning close-up images from different cameras requires computational algorithms that can effectively handle geometric distortions. Moreover, registration algorithms for close-range images are more susceptible to errors caused by uneven illumination, a situation commonly encountered with leaf shadows and with light reflection and scattering in dense canopies.

Against this backdrop, a research team including Professor Haiyan Cen from Zhejiang University, China, recently proposed a new approach for generating high-quality point clouds of plants by fusing depth images and snapshot spectral images. As explained in their paper, published in Plant Phenomics, the researchers employed a three-step image registration process combined with a novel AI-based technique to correct for illumination effects.

Prof. Cen explains, "Our study shows that it is promising to use stereo references to correct plant spectra and generate high-precision, 3D, multispectral point clouds of plants."

The imaging setup consisted of a lifting platform holding a rotating stage at a preset distance from two cameras on a tripod: an RGB (red, green, and blue)-depth camera and a snapshot multispectral camera. In each experiment, the researchers placed a plant on the stage, rotated it, and photographed it from 15 different angles.

(A) Flow chart of generating the plant multispectral point cloud. At the beginning of the procedure, the raw images, namely the depth image and the multispectral image, were registered, and the multispectral image was reshaped into a multichannel image. Point cloud generation then relies on the transformation from the depth image coordinate system to the world coordinate system under the constraints of the camera intrinsic parameters. Finally, with the fusion of multiview point clouds and the mapping of corrected multispectral textures, the 3D multispectral point cloud model was constructed. (B) Flow chart of calculating the spatial distribution of the DN values of the references and correcting the plant spectral reflectance using an ANN. In the model training stage, the 3D light field features of the references were extracted from the depth image as independent variables, with the spectral DN values as dependent variables. In the model application stage, the 3D light field features of the plant were used as input to obtain predictions of the corresponding DN values of the reference. Finally, the reflectance image is corrected pixel by pixel based on this method to generate a mappable texture. Credit: Plant Phenomics
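The point cloud generation step in panel (A) is, in essence, the standard pinhole back-projection from depth image coordinates to 3D coordinates. As a rough sketch (assuming a metric depth map and intrinsic parameters fx, fy, cx, cy, which are placeholders, not values from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-frame 3D points
    using the pinhole intrinsics: focal lengths fx, fy, principal point cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop invalid zero-depth pixels

# Each view's cloud is then moved into the shared world frame with the
# extrinsic rotation R and translation t before multiview fusion:
#   world_pts = pts @ R.T + t
```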

They also took images of a flat surface containing Teflon hemispheres at various positions. The images of these hemispheres served as reference data for a reflectance correction method, which the team implemented using an artificial neural network.
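Conceptually, the network learns the mapping from a pixel's 3D light field features to the digital number (DN) the reference would record under that geometry; the plant's DN can then be ratioed against that prediction. Below is a hedged sketch of the idea using scikit-learn's MLPRegressor. The paper's exact network architecture, training data, and correction formula may differ, and all variable names and values here are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: rows of the five light field features sampled on
# the Teflon hemispheres (X_ref) and the raw DN values they produced in one
# spectral band (y_ref). Real data would come from the reference images.
rng = np.random.default_rng(0)
X_ref = rng.random((5000, 5))
y_ref = rng.random(5000)

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X_ref, y_ref)

# Application: predict the DN the reference *would* show under each plant
# pixel's geometry, then correct the plant DN by that prediction, scaled by
# the reference material's (assumed known) reflectance rho_ref.
X_plant = rng.random((1000, 5))
dn_plant = rng.random(1000)
rho_ref = 0.99
reflectance = rho_ref * dn_plant / ann.predict(X_plant)
```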

For registration, the team first used image processing to extract the plant structure from the overall images, remove noise, and balance brightness. Then, they performed coarse registration using Speeded-Up Robust Features (SURF)—a method that can identify important image features that are mostly unaffected by changes in scale, illumination, and rotation.
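As an illustration of what SURF-based coarse registration typically looks like in practice, here is a generic OpenCV sketch, not the authors' code. SURF lives in opencv-contrib and may require a build with the non-free modules enabled, and the file names are placeholders:

```python
import cv2
import numpy as np

# Placeholder inputs: one multispectral band and a grayscale rendering of the
# depth image, to be aligned with each other.
img_ms = cv2.imread("multispectral_band.png", cv2.IMREAD_GRAYSCALE)
img_depth = cv2.imread("depth_gray.png", cv2.IMREAD_GRAYSCALE)

# Detect scale- and rotation-robust keypoints in both images.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img_ms, None)
kp2, des2 = surf.detectAndCompute(img_depth, None)

# Match descriptors and keep the strongest correspondences.
matches = sorted(cv2.BFMatcher(cv2.NORM_L2).match(des1, des2),
                 key=lambda m: m.distance)[:100]
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# A RANSAC-fitted homography gives the coarse alignment; fine registration
# then cleans up the remaining local distortions.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
coarse = cv2.warpPerspective(img_ms, H, img_depth.shape[::-1])
```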

Finally, the researchers performed fine registration using a method known as "Demons." This approach is based on finding mathematical operators that can optimally 'deform' one image to match it with another.
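SimpleITK ships a demons filter, so a generic version of this fine registration step might look like the sketch below. This is not the authors' implementation; the iteration count, smoothing, and file names are placeholder assumptions:

```python
import SimpleITK as sitk

# Placeholder inputs: the fixed reference image and the coarsely registered
# moving image from the previous step, both read as float images.
fixed = sitk.ReadImage("depth_gray.png", sitk.sitkFloat32)
moving = sitk.ReadImage("coarse_registered_band.png", sitk.sitkFloat32)

# Demons estimates a dense displacement field that deforms 'moving' onto 'fixed'.
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(200)
demons.SetStandardDeviations(1.0)  # Gaussian smoothing of the displacement field
displacement = demons.Execute(fixed, moving)

# Wrap the field in a transform and resample the moving image through it.
displacement = sitk.Cast(displacement, sitk.sitkVectorFloat64)
transform = sitk.DisplacementFieldTransform(displacement)
fine = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```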

These experiments showed that the proposed registration method significantly outperformed conventional approaches. Moreover, the proposed reflectance correction technique produced remarkable results, as Prof. Cen highlights: "We recommended using our correction method for plants in growth stages with low canopy structural complexity and flattened and broad leaves." The study also highlighted a few potential areas of improvement to make the proposed approach even more powerful.

Satisfied with the results, Prof. Cen concludes, "Overall, our method can be used to obtain accurate, 3D, multispectral point cloud models of plants in a controlled environment. The models can be generated successively without varying the illumination condition."

In the future, techniques such as this one will help scientists, farmers, and plant breeders easily integrate data from different cameras into one consistent format. This could not only help them visualize important plant traits, but also feed these data to emerging AI-based software to simplify or even fully automate analyses.

More information: Pengyao Xie et al, Generating 3D Multispectral Point Clouds of Plants with Fusion of Snapshot Spectral and RGB-D Images, Plant Phenomics (2023). DOI: 10.34133/plantphenomics.0040
