Page 3: Research news on Artificial neural networks

Artificial neural networks, as physical systems, are implemented through hardware substrates that realize interconnected processing units and weighted connections using electronic, optical, or neuromorphic architectures. In conventional digital hardware, they are instantiated as configurations of logic gates, memory arrays, and interconnects on CPUs, GPUs, or specialized accelerators (e.g., TPUs), where weights reside in volatile or non-volatile memory and computation is executed via parallel multiply-accumulate operations. Emerging neuromorphic and analog implementations encode synaptic weights in device conductances (e.g., memristors, phase-change materials) and exploit device physics for in-memory computation, enabling high spatiotemporal parallelism, low-latency signal propagation, and energy-efficient approximation of neural operations in a physically embedded network topology.
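The analog in-memory scheme described above can be illustrated with a minimal sketch (all names and values here are hypothetical, for illustration only): in an idealized memristor crossbar, weights are stored as device conductances, input activations are applied as row voltages, each device passes a current given by Ohm's law (I = G·V), and Kirchhoff's current law sums the currents along each column, so a full matrix-vector multiply-accumulate happens in a single physical step.

```python
# Idealized memristor-crossbar sketch (assumed model, not a device simulation):
# weights live in conductances G, inputs arrive as voltages V, and each
# output column's summed current is one pre-activation value.

def crossbar_mac(conductances, voltages):
    """Analog multiply-accumulate of an ideal crossbar.

    conductances: rows x cols matrix of device conductances (the weights).
    voltages: one input voltage per row (the activations).
    Returns one summed current per column (the pre-activation outputs).
    """
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for g_row, v in zip(conductances, voltages):
        for j, g in enumerate(g_row):
            currents[j] += g * v  # Ohm's law per device, Kirchhoff's law per column
    return currents

# A 2x3 crossbar: two inputs, three outputs.
G = [[0.5, 1.0, 0.0],
     [2.0, 0.5, 1.0]]
V = [1.0, 2.0]
print(crossbar_mac(G, V))  # -> [4.5, 2.0, 2.0]
```

The loop above is only the digital reference computation; in the physical device, all multiplications and summations occur simultaneously in the analog domain, which is the source of the parallelism and energy efficiency noted in the paragraph.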

Shape complementarity enables precise protein binder design

Recent advances in computational protein design have relied mainly on neural networks and machine learning to generate binders. However, the complexity of protein–protein interactions and the limitations of data-driven ...

Multisynapse optical network outperforms digital AI models

For decades, scientists have looked to light as a way to speed up computing. Photonic neural networks—systems that use light instead of electricity to process information—promise faster speeds and lower energy use than ...
