Page 4: Research news on Artificial neural networks

Artificial neural networks, as physical systems, are implemented through hardware substrates that realize interconnected processing units and weighted connections using electronic, optical, or neuromorphic architectures. In conventional digital hardware, they are instantiated as configurations of logic gates, memory arrays, and interconnects on CPUs, GPUs, or specialized accelerators (e.g., TPUs), where weights reside in volatile or non-volatile memory and computation is executed via parallel multiply-accumulate operations. Emerging neuromorphic and analog implementations encode synaptic weights in device conductances (e.g., memristors, phase-change materials) and exploit device physics for in-memory computation, enabling high spatiotemporal parallelism, low-latency signal propagation, and energy-efficient approximation of neural operations in a physically embedded network topology.
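The multiply-accumulate (MAC) operations mentioned above are the basic primitive of neural-network execution on digital hardware. As a minimal illustrative sketch (not tied to any particular accelerator), a single dense layer reduces to one MAC per weight, with accelerators simply executing many of these in parallel:

```python
# Illustrative sketch: a dense neural-network layer expressed as
# multiply-accumulate (MAC) operations. Real hardware (GPUs, TPUs,
# analog in-memory arrays) performs many such MACs in parallel; this
# serial version only shows the arithmetic structure.

def dense_layer(x, weights, biases):
    """Compute y[j] = sum_i x[i] * weights[j][i] + biases[j]."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b  # accumulator starts at the bias value
        for xi, wi in zip(x, w_row):
            acc += xi * wi  # one multiply-accumulate step
        outputs.append(acc)
    return outputs

# Example: 3 inputs mapped to 2 outputs
x = [1.0, 2.0, 3.0]
W = [[0.5, -1.0, 0.25],   # weights feeding output 0
     [1.0,  0.0, -0.5]]   # weights feeding output 1
b = [0.1, -0.2]
print(dense_layer(x, W, b))
```

In an analog in-memory implementation, each `weights[j][i]` would instead be stored as a device conductance, and the accumulation would happen physically as summed currents rather than as an explicit loop.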

Researchers develop energy-efficient optical neural networks

EPFL researchers have published a programmable framework that overcomes a key computational bottleneck of optics-based artificial intelligence systems. In a series of image classification experiments, they used scattered ...

Classical optical neural network exhibits 'quantum speedup'

In recent years, artificial intelligence technologies, especially machine learning algorithms, have made great strides. These technologies have enabled unprecedented efficiency in tasks such as image recognition, natural ...
