Optalysys will launch prototype optical processor

Aug 09, 2014 by Nancy Owano
Credit: Optalysys

UK-based startup Optalysys is moving ahead to deliver exascale levels of processing power on a standard-sized desktop computer within the next few years, HPCwire reported earlier this week. The company itself announced on August 1 that it is "only months away from launching a prototype optical processor with the potential to deliver exascale levels of processing power on a standard-sized desktop computer." The company will demo its prototype, which meets NASA Technology Readiness Level 4, in January next year. Though the January date represents only a proof-of-concept stage, the processor is expected to run at over 340 gigaflops, enabling it to analyze large data sets and produce complex model simulations in a laboratory environment. Engadget commented that those numbers were not bad for a proof of concept. HPCwire pointed to the potential significance of this work in its article's headline, "Is This the Exascale Breakthrough We've Been Waiting For?" Optalysys' technology uses light to perform compute-intensive mathematical functions at speeds that exceed what can be achieved with electronics, at a fraction of the cost and power consumption.

The company CEO, Dr. Nick New, said: "Using low power lasers and high resolution liquid crystal micro-displays, calculations are performed in parallel at the speed of light." Engadget contributing editor Steve Dent wrote how "low-intensity lasers are beamed through layers of liquid crystal grids, which change the light intensity based on user-inputted data. The resulting interference patterns can be used to solve mathematical equations and perform other tasks. By splitting the beam through multiple grids, the system can compute in parallel much more efficiently than standard multi-processing supercomputers."
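The classic architecture behind this kind of optical computing is the 4f correlator: a lens physically performs a 2D Fourier transform of the laser-illuminated input in a single pass, a filter in the Fourier plane multiplies the spectrum, and a second lens transforms back. As a rough sketch of the mathematics involved (a digital simulation only, with NumPy's FFT standing in for the lenses; the sizes and function names here are illustrative, not Optalysys's design):

```python
import numpy as np

def correlate_4f(scene, template):
    """Cross-correlate scene with template as a 4f optical system would:
    first lens (Fourier transform) -> matched filter in the Fourier plane
    -> second lens (inverse transform)."""
    scene_ft = np.fft.fft2(scene)                              # first lens
    filter_ft = np.conj(np.fft.fft2(template, s=scene.shape))  # filter plane
    return np.real(np.fft.ifft2(scene_ft * filter_ft))         # second lens

# Toy example: locate a bright 3x3 patch hidden in a 32x32 scene.
scene = np.zeros((32, 32))
scene[10:13, 20:23] = 1.0
template = np.ones((3, 3))

corr = correlate_4f(scene, template)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # -> (10, 20): the correlation peak marks the patch location
```

The point of the optical version is that the Fourier transform step, which dominates the cost digitally, happens passively as the light propagates, which is where the claimed speed and power advantages come from.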

Optalysys technology can provide a big improvement over traditional computing, the company noted. "Electronic processing is fundamentally serial, processing one calculation after the other. This results in big data management problems when the resolutions are increased, with improvements in processor speed only resulting in incremental improvements in performance. The Optalysys technology, however, is truly parallel – once the data is loaded into the grids, the processing is done at the speed of light, regardless of the resolution. The Optalysys technology we are developing is highly innovative and we have filed several patents in the process."

The company's target audience for its technology? "We hope the technology will be used by everyone, but to begin with we envisage the first users to be the users of high power Computational Fluid Dynamics (CFD) simulations and Big Data sets."
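The CFD connection is natural: spectral CFD methods lean heavily on Fourier transforms, which is exactly the operation a lens-based processor performs in one pass. A minimal digital illustration of the idea (assumed example, not Optalysys code): differentiating a velocity profile spectrally by transforming, multiplying by ik, and transforming back.

```python
import numpy as np

# Spectral differentiation, the building block of Fourier-based CFD solvers.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)                        # sample velocity field

ik = 1j * np.fft.fftfreq(n, d=1.0 / n)   # wavenumbers as i*k
dudx = np.real(np.fft.ifft(ik * np.fft.fft(u)))

# The exact derivative of sin(3x) is 3*cos(3x); spectral methods
# recover it to machine precision on smooth periodic data.
err = np.max(np.abs(dudx - 3 * np.cos(3 * x)))
print(err)
```

Done electronically, each transform costs O(n log n); done optically, the transform time is set by light propagation and is independent of the grid resolution, which is the parallelism the company describes.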


Optalysys chairman James Duez said that in applications such as aerofoil design, weather forecasting, MRI data analysis and quantum computing, "it is becoming increasingly common for traditional computing methods to fall short of delivering the performance needed." CFD, noted Duez, can be used to predict the weather, design cars and model airflow, "but the speed of processing needed to create models is constrained by current electrical capabilities."

Dr. New said "We are currently developing two products, a 'Big Data' analysis system and an Optical Solver Supercomputer, both of which are expected to be launched in 2017."


Optalysys was founded in 2013 by New and Duez. The team includes specialists in software development, free-space optics, optical engineering and production engineering.


More information: www.optalysys.com/faq/

www.optalysys.com/blog/light-s… ow-only-months-away/



User comments : 4


5 / 5 (2) Aug 09, 2014
Some more uses - modelling the universe, modelling viruses and bacteria completely, and maybe artificial intelligence
5 / 5 (1) Aug 09, 2014
The way it's described makes it an optical math co-processor. Very impressive for implementing highly parallel mathematical calculations, but it doesn't appear to be useful for other big data tasks like searching, sorting, and moving data - unless I'm missing something here. I don't think anyone will be implementing database engines with this. Still, it should be a boon for highly parallel mathematical tasks, saving a lot of space and energy while delivering extremely high performance levels.
5 / 5 (3) Aug 10, 2014
What have they achieved so far? How can it be programmed? Is it open for developers community?
not rated yet Aug 11, 2014
It'll be a stone bitch to program, I'm thinking - at least until software matures for controlling the thing. That'll take a long time, if history is any guide.

I bet they'll have reliability problems with their microlasers, too.

But even so, I hope they succeed. It seems worth trying.
