Page 2: Research news on Artificial neural networks

As physical systems, artificial neural networks are implemented on hardware substrates that realize interconnected processing units and weighted connections, whether electronic, optical, or neuromorphic. On conventional digital hardware they are instantiated as configurations of logic gates, memory arrays, and interconnects on CPUs, GPUs, or specialized accelerators (e.g., TPUs): weights reside in volatile or non-volatile memory, and computation proceeds as massively parallel multiply-accumulate operations. Emerging neuromorphic and analog implementations instead encode synaptic weights in device conductances (e.g., memristors or phase-change materials) and exploit device physics for in-memory computation, enabling high spatiotemporal parallelism, low-latency signal propagation, and energy-efficient approximation of neural operations within a physically embedded network topology.
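The multiply-accumulate (MAC) pattern mentioned above is the basic operation that digital accelerators parallelize. A minimal sketch in plain Python makes it concrete; the layer sizes, weight values, and ReLU activation here are illustrative assumptions, not taken from any particular hardware implementation.

```python
def dense_layer(inputs, weights, biases):
    """Compute one fully connected layer via explicit multiply-accumulate loops.

    Hardware accelerators execute many of these accumulations in parallel;
    this loop form just makes the MAC structure visible.
    """
    outputs = []
    for j in range(len(biases)):
        acc = biases[j]                     # accumulator starts at the bias
        for i, x in enumerate(inputs):
            acc += x * weights[i][j]        # one multiply-accumulate step
        outputs.append(max(0.0, acc))       # ReLU activation (illustrative choice)
    return outputs

# Toy example: 2 inputs, 2 output neurons (all values made up for illustration)
x = [1.0, 2.0]
W = [[0.5, -1.0],
     [0.25, 0.75]]
b = [0.0, 0.1]
print(dense_layer(x, W, b))
```

In a GPU or TPU, the inner loop becomes a single fused matrix-multiply instruction over many neurons at once; in analog in-memory designs, the same accumulation happens physically as currents summing along a wire.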

Equipping artificial intelligence with the lens of evolution

Artificial intelligence is now better than humans at identifying many patterns, but evolutionary relationships have always been difficult for the technology to decipher. A team from the Bioinformatics Department at Ruhr University ...

DNA-based neural network learns from examples to solve problems

Neural networks are computing systems designed to mimic both the structure and function of the human brain. Caltech researchers have been developing a neural network made out of strands of DNA instead of electronic parts ...

Page 2 of 5