
Physicists from the University of Luxembourg have presented a new material that could become a key component of an infrastructure designed to help robots understand their surroundings. The team shows that the material can be used to embed tailor-made graphical information in the environment that is invisible to humans but easily readable by robots. The new material and the procedure by which it is made were recently published in Advanced Functional Materials, one of the world's top journals in the field of materials science.

Reign of automation

Widespread automation is a key component of the ongoing fourth industrial revolution. The current interest in automation envisages an enormous expansion of the concept, often involving machines that are not only automatic but also autonomous and mobile, such as self-driving cars or drones. In contrast to what the term "Industry 4.0" might suggest, these machines are also likely to interact directly with humans, even in places outside industrial production, such as our homes or non-industrial workplaces.

"As beneficial as this transition to ubiquitous automation could be, it also comes with significant challenges of many types. One of the most important thresholds is caused by safety concerns: as demonstrated by recurring tragic fatalities involving autonomous vehicles, they currently have an insufficient understanding of their environment despite state-of-the-art on-board sensor and computation technology. It is simply not easy to make sense of the busy, complex and messy world that we humans create and live in, full of signals, some important, some only distracting, and others yet being pure noise," explains Jan Lagerwall, Professor in the Department of Physics and Materials Science (DPhyMS) at the University of Luxembourg and principal investigator of the study.

New approach using liquid crystals

While most attempts to give robots access to human-populated environments focus on equipping the robots with a combination of multiple sensory inputs and massive computational power, a different approach is now proposed by Prof. Jan Lagerwall and his two team members Yong Geng and Rijeesh Kizhakidathazhath from the University of Luxembourg, in collaboration with Prof. Mathew Schwartz, an expert in automation and design of the built environment at the New Jersey Institute of Technology.

The key breakthrough presented in the article is the realization of retroreflective spheres made from cholesteric liquid crystals, which are turned into a solid state by a process called polymerisation. In one respect, these spheres resemble the retroreflectors found in safety vests, on our cars, in road signs and in certain clothing, because they send light back to the source regardless of the direction from which they are illuminated. But two important differences make these Cholesteric Spherical Reflectors (CSRs) especially useful. First, the reflection is limited to a narrow wavelength range, which can lie outside the visible spectrum, explaining why the human eye does not see them. Second, the reflection is circularly polarized, in the same way that the two movies shown simultaneously in a 3D cinema are circularly polarized in opposite directions.
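For readers who want the polarization argument made concrete, the handedness filtering that underpins the readout can be illustrated with a short Jones-calculus sketch. The sign convention and the ideal projector matrices below are textbook optics rather than details taken from the paper, and the script is purely illustrative.

```python
import numpy as np

# Jones vectors for right- and left-handed circularly polarized light
# (convention: |R> = (1, -i)/sqrt(2), |L> = (1, +i)/sqrt(2)).
R = np.array([1, -1j]) / np.sqrt(2)
L = np.array([1, +1j]) / np.sqrt(2)

# Projection matrices for ideal right- and left-handed circular polarizers.
P_right = 0.5 * np.array([[1, 1j], [-1j, 1]])
P_left = 0.5 * np.array([[1, -1j], [1j, 1]])

def transmitted_intensity(polarizer, field):
    """Fraction of the incoming intensity that passes the polarizer."""
    out = polarizer @ field
    return float(np.vdot(out, out).real)

print(transmitted_intensity(P_right, R))  # ~1.0: matching handedness passes
print(transmitted_intensity(P_left, R))   # ~0.0: opposite handedness is blocked
print(transmitted_intensity(P_right, L))  # ~0.0
print(transmitted_intensity(P_left, L))   # ~1.0
```

This is exactly why the two cameras described below see different things: a CSR reflection of one handedness reaches only the camera behind the matching circular polarizer.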

"If you ever took off your goggles while at a 3D cinema you will have noticed that the human eye cannot distinguish different polarisations, as both our eyes then see both movies, and we simply experience a strange "shadow" effect. The goggles contain circular polarisers, one right-handed and the other left-handed, ensuring that our right eye sees only the movie for the right eye, the left only the movie for the left eye. Outside a movie theater, the world is very rarely circularly polarized and this means that the circular polarization of CSRs is quite unique. A designed to read out CSR-encoded information will have two cameras, both operating in the ultraviolet and/or infrared regions in which the CSRs reflect, and each will have a circular polariser of different type, just like 3D cinema glasses. The robot subtracts one image from the other, meaning that all visual information that is not circularly polarized, which is all content except the CSRs, is canceled out, because this information appears identical to the two cameras. But the CSRs remain, as they are visible only to one camera but not to the other. This allows the robot to identify the CSR-encoded information extremely rapidly, with minimum computing power, and without risk of false positives," the scientists explain.

More information: Yong Geng et al. Encoding Hidden Information onto Surfaces Using Polymerized Cholesteric Spherical Reflectors, Advanced Functional Materials (2021). DOI: 10.1002/adfm.202100399
