Here come the quantum dot TVs and wallpaper

Dec 13, 2011 by Nancy Owano
Quantum dot OLED prototype. Image credit: Nanoco Group

(PhysOrg.com) -- A British firm's quantum dot technology will be used for flat screen TVs and flexible screens, according to the company’s chief executive.

The quantum dots will appear in ultra-thin, lightweight flat-screen TVs by the end of next year and, within another three years, in flexible screens that can be rolled up like paper or used as wall coverings.

The company, Nanoco Group, is reportedly working with Asian electronics companies to bring this technology to market.

“The first products we are expecting to come to market using quantum dots will be the next generation of flat-screen televisions,” Nanoco chief executive Michael Edelman has stated.

Nanoco describes itself as the “world leader in the development and manufacture of cadmium-free quantum dots.” While quantum dot technology is not new, Nanoco's scientists are making real progress toward mass production. Earlier this year, the company, which was founded in 2001 and is based in Manchester, announced it had successfully produced a 1 kg batch of red cadmium-free quantum dots specified by a major Japanese corporation.

The ability to mass-produce consistently high-quality quantum dots, says the company's site, lets product designers envisage their use in consumer products and other applications for the first time, and then bring those products to market.

Quantum dots are nanomaterials with a semiconductor core and an organic shell. This structure can be modified and built upon so that the dots work in applications using different carrier systems, including, but not limited to, printing inks (ink-jet inks among them), silicone, polycarbonate, polymethyl methacrylate based polymers, alcohols and water.

Nanoco’s team says it can manipulate the organic surfaces of the quantum dots to work in applications like electroluminescent displays, solid state lighting and biological imaging.

To be sure, flexible displays that can be used as wall coverings have been of interest. Individual light-emitting quantum dot crystals are 100,000 times smaller than the width of a human hair. Used together in large numbers, they could potentially create room-sized screens that double as wallpaper.

The ability to precisely control the size of a quantum dot lets the manufacturer determine the wavelength of its emission, which in turn determines the color of light the eye perceives. During production the dots can be tuned to emit any desired color of light, and even beyond visible light, into the infrared or ultraviolet.
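
As a rough, hedged illustration of that size-to-color relationship, the sketch below uses the Brus (effective-mass) approximation to turn an assumed dot radius into an emission wavelength. The material parameters are generic InP-like values chosen purely for illustration, not Nanoco's actual chemistry, so only the trend (smaller dot, bluer light) should be read from the numbers.

# Hedged sketch: dot radius -> approximate emission wavelength via the
# Brus (effective-mass) approximation. All material parameters below are
# assumed, roughly InP-like values, chosen only to show the trend.
import numpy as np
from scipy.constants import h, hbar, c, e, epsilon_0, m_e

E_GAP   = 1.34 * e      # bulk band gap in joules (assumed, ~InP)
M_EFF_E = 0.08 * m_e    # effective electron mass (assumed)
M_EFF_H = 0.60 * m_e    # effective hole mass (assumed)
EPS_R   = 12.5          # relative permittivity (assumed)

def emission_wavelength_nm(radius_nm):
    """Approximate emission wavelength for a dot of the given radius."""
    r = radius_nm * 1e-9
    confinement = (hbar**2 * np.pi**2) / (2 * r**2) * (1 / M_EFF_E + 1 / M_EFF_H)
    coulomb = 1.786 * e**2 / (4 * np.pi * epsilon_0 * EPS_R * r)
    energy = E_GAP + confinement - coulomb   # quantum-confined gap
    return h * c / energy * 1e9

for radius in (1.8, 2.2, 2.6, 3.0):
    print(f"radius {radius:.1f} nm -> ~{emission_wavelength_nm(radius):.0f} nm emission")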

Nanoco defines itself as “a world leader in the development and manufacture of cadmium-free quantum dots” at a time when being cadmium-free carries a special advantage. Cadmium is generally used in LEDs for lighting and displays. The European Union has, however, exempted it from its Restriction of Hazardous Substances (RoHS) directive because there is not yet a practical substitute, according to eWEEK Europe. That exemption is due to end in July 2014.

“Our research and development department is also constantly engaged in the creation of new quantum dots with additional properties sought by the market, such as our RoHS-compliant heavy metal-free quantum dots,” says the company.

User comments : 25


ScienceFreak86
5 / 5 (3) Dec 13, 2011
Thanks to TV wallpapers and a lot faster graphics cards, maybe virtual reality will finally come in 2014. I am waiting for this impatiently.
DaffyDuck
5 / 5 (3) Dec 13, 2011
Virtual reality won't really come into its own until we have brain-computer interfaces. I want these as replacements for windows so I can pretend I live in Bora Bora.
Nikola
not rated yet Dec 13, 2011
What is the resolution of a display with such small pixels?
ScienceFreak86
5 / 5 (2) Dec 13, 2011
Virtual reality won't really come into its own until we have brain-computer interfaces. I want these as replacements for windows so I can pretend I live in Bora Bora.


Agree, but I am talking about first generation which should be quite convincing, not about ultimate VR :)
antialias_physorg
5 / 5 (3) Dec 13, 2011
thanks to TV wallpapers and a lot faster graphics cards,

I can do without TV - but an OLED wallpaper that you can change according to mood (or play live scenery on) would be awesome.
Eikka
4 / 5 (4) Dec 13, 2011
A picture the size of a wall, with pixels small enough that you can't see them from 3 ft away and enough processing power to run modern videogame-quality graphics at the highest settings, would need an ordinary desktop computer for every square foot of wall space. Even then, the memory interface to such a system would really be the limiting factor, because the computers would all have to share data in order to compute the simulation in parallel, so you're limited to fairly simple scenery.

Or you could cheat with the visuals like they do in games, so you get things like a beach that looks like a beach but where you don't make footprints in the sand, because that change to the model would have to be communicated somehow to all the processors. That's where the problems start: you don't have enough memory with each CPU to cache all the necessary data unless the model is very simple, and getting it in and out of a central memory is a major logistics problem.

ScienceFreak86
not rated yet Dec 13, 2011
Eikka, maybe borrowing power from the Cloud should help users of such systems?
Eikka
5 / 5 (1) Dec 13, 2011
It's interesting to note that, assuming you have a 6 foot sphere around your head, you only need about 115 megapixels over the entire sphere for a total surround view of sufficient resolution that you'd be hard pressed to tell the difference between it and reality with one eye closed.

With both eyes open, you then have to inflate the sphere to 60 feet in diameter and not show anything closer than that, because your stereoscopic vision would ruin the illusion - but the amount of data is still the same 115 megapixels.

That only requires about 40-50 ordinary computers to process if all the data is known to all computers in advance and they're just playing it back in synchrony, like a video or something. Simple interactive simulation is fine as well, as long as you don't introduce much new detail.

In fact, that's exactly what they're already doing with flight training simulators.
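
A quick, hedged check of that megapixel figure: the pixel count for a full surround view depends only on the angular resolution you grant the eye, not on the size of the sphere, and a commonly quoted acuity of about one arcminute lands in the same 100-150 megapixel ballpark.

# Hedged back-of-envelope: pixels needed to tile the full 4*pi steradian
# sphere at an assumed visual acuity (the ~1 arcminute figure is an
# assumption; the sphere's physical radius drops out entirely).
import math

ARCMIN = math.pi / (180 * 60)   # one arcminute in radians

def surround_view_megapixels(acuity_arcmin=1.0):
    pixel_solid_angle = (acuity_arcmin * ARCMIN) ** 2   # steradians per pixel
    return 4 * math.pi / pixel_solid_angle / 1e6

print(f"~{surround_view_megapixels(1.0):.0f} Mpix at 1.0 arcmin")   # ~149
print(f"~{surround_view_megapixels(1.2):.0f} Mpix at 1.2 arcmin")   # ~103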
Eikka
1 / 5 (1) Dec 13, 2011
Eikka, maybe borrowing power from the Cloud should help users of such systems?


Sure, if you can cope with the enormous time delays.

Which is kinda the whole problem, just 1000x worse when you're sending data over the internet.
antialias_physorg
4 / 5 (2) Dec 13, 2011
A picture the size of a wall with pixels small enough that you can't see them from 3ft away, with enough processing power to run modern videogame quality graphics at the highest settings would need an ordinary desktop computer

Just the graphics card. And graphics cards can be run in parallel. All current graphics cards can run (at least) two displays at the same time. As long as you don't interact much with what is on the screen it's no more difficult than running a movie. Desktop PCs can run multiple movies at the same time already. Using 4 beamers you can flood the walls of a room with anything you like (done that) without any obvious pixellation. Takes all of 2 low end PCs to do. Cave systems have been around for more than a decade. The power needed back then (high end SGI workstation) you can get today in a high end PC.

I don't think you'd want this stuff on the floor, though. That takes expensive covering and soft slippers at all times (and no furniture!).
antialias_physorg
5 / 5 (6) Dec 13, 2011
For example, we were running Quake at good framerates 10 years ago on a 5-sided cave with 3D shutter glasses at work, on an SGI Onyx system with 8 processors and 4 graphics pipes (yes, there is a Quake-for-cave port...awesome). We got into some, but not serious, trouble for doing that in our off-work hours (because of using up beamer lamp lifetimes).

But it was totally worth it just for the geek factor.

Eikka
3.5 / 5 (2) Dec 13, 2011
Just the graphics card. And graphics cards can be run in parallel. All current graphics cards can run (at least) two displays at the same time.


Yes they can, but they don't necessarily have the power to run at the same level of detail. It's trivial to push out pixels, but non-trivial when you have to actually calculate what they should contain.

Modern graphics cards don't really parallel all that well in processing power either, due to the need to share data through the system bus. If they can work on independent sets of data, then that's fine, but for games you see less than 1.5x increase in actual processing power from doubling the number of GPUs. Four GPUs per machine is just about twice as powerful as one.
ScienceFreak86
5 / 5 (1) Dec 13, 2011
Imagine watching porn in such a 5-wall system ;), playing FPS games, flying through the Universe in some highly realistic visualization, visiting the most beautiful places in the world, Google Earth 3.0, Second Life 3.0, meetings with family, friends and the guys from PhysOrg for discussions about the newest amazing breakthroughs and discoveries.
Eikka
2 / 5 (2) Dec 13, 2011
All in all, what simulation is really about is doing small changes to little bits of data over and over again, multiplied billions of times in parallel. You don't need very complicated processors for that.

But because of the structure of our computers, we need a great deal of speed and power because there's relatively few massive CPUs processing big chunks of data at a time, and you have to do so without making the CPUs wait for too long, so we have to implement all sorts of ways to skirt around the problem of having to move a ton of data just to grab one bit.

Essentially, you have a factory with the tools and the products in different buildings, and there's just one guy shuttling between the two. He may carry stuff with a wheelbarrow, or with a dumptruck, but he will always take the same time going back and forth, so if you ask him to bring you a single bolt, he's going to drive his truck over, load it with a single bolt, and then drive it to you.
Foolish1
5 / 5 (3) Dec 13, 2011
It's interesting to note that, assuming you have a 6 foot sphere around your head, you only need about 115 megapixels for the entire globe to have a total surround view of sufficient resolution that you'd be hard pressed to tell the difference between it and reality with one eye closed.

All you need is an eye tracker to know where your eye is looking. You can't read what's behind you or even what's within your peripheral vision. There is a surprisingly small sphere in which resolution matters at all. Try writing something on a piece of paper and reading it without moving your eyes to look directly at it.
Eikka
1 / 5 (1) Dec 13, 2011
All you need is an eye tracker to know where your eye is looking. You can't read what's behind you or even what's within your peripheral vision. There is a surprisingly small sphere in which resolution matters at all. Try writing something on a piece of paper and reading it without moving your eyes to look directly at it.


I saw someone do that by strapping a mini projector on a plastic gun and playing Call of Duty in a darkened room.

Though all in all, dynamic resolution rendering would probably take just as much power to calculate where you need the resolution and how much, and to apply blur to the image, since you don't want things to turn blocky - you'd notice that.
that_guy
not rated yet Dec 13, 2011
What is the resolution of a display with such small pixels?

I'm not trying to be mean, but that's a pointless question.

It didn't say how small the pixels are, but we can assume that the limit on pixel size is directly related to our limit in creating electronic circuits: ~22 nm or some multiple thereof.

The size of the quantum dots has nothing to do with the resolution.

That said, it would also be pointless to make a resolution so fine for a consumer product.

So, realistically, the resolution is whatever we want it to be, as long as it is cost-effective - that is determined by the process for making the quantum dots, how small the electronics are, and how much computing power we want to throw at it.

If you're going to make a large screen that you can throw up on the wall in 5 or 10 years, you could probably expect a maximum around 4k-8k (as opposed to 1080p, which is ~1k).

Higher resolutions would hit diminishing returns, as the extra detail would fall only within peripheral vision.
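
A similar hedged sketch of where those diminishing returns set in: assume roughly one-arcminute acuity, a 3 ft viewing distance and, purely for illustration, a 4 m by 2.5 m wall. The pixel pitch below is the point past which a viewer that close can no longer resolve individual pixels; sit further back and the useful pixel count drops with the square of the distance.

# Hedged sketch: acuity-limited pixel pitch and pixel count for a wall
# screen. Viewing distance, acuity and wall size are all assumptions.
import math

ARCMIN = math.pi / (180 * 60)   # one arcminute in radians

def acuity_limit(wall_w_m, wall_h_m, viewing_dist_m, acuity_arcmin=1.0):
    pitch_m = viewing_dist_m * acuity_arcmin * ARCMIN   # finest useful pitch
    return pitch_m * 1e3, round(wall_w_m / pitch_m), round(wall_h_m / pitch_m)

pitch_mm, px_w, px_h = acuity_limit(4.0, 2.5, 0.914)    # 0.914 m ~= 3 ft
print(f"pitch ~{pitch_mm:.2f} mm -> {px_w} x {px_h} px (~{px_w * px_h / 1e6:.0f} Mpix)")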
antialias_physorg
5 / 5 (1) Dec 13, 2011
Modern graphics cards don't really parallel all that well in processing power either, due to the need to share data through the system bus.

Not my experience.

It all depends on what you want to do: Watch a movie? Display some looping images/effects? Or play full HD Crysis 2 on 6 walls?

But because of the structure of our computers, we need a great deal of speed and power because there's relatively few massive CPUs processing big chunks of data at a time

Not so - because today you push most stuff to the graphics card, which has many parallel pixel pipes. The speedup is enormous ... as we just found out in our current project for an algorithmically intense graphics filter on an 800 MB image dataset:
CPU, single thread: 3.5 hours
CPU, Quadcore OpenCL: 600 seconds
GPU 16 pixel pipes: 25 seconds

As long as you're just reading there is no bus bottleneck.
Nanobanano
not rated yet Dec 13, 2011
That only requires about 40-50 ordinary computers


Classical computers are going to get down to around 11nm in 4 years or so anyway, so they should be 4 times as powerful.

Also, most video cards tend to be about a generation or two behind in the transistor miniaturization process. So there is a bit more potential for the video cards going out 10 years than there is for the CPU.

We may see early spintronic or photonic computers by then as well.

The real issue for gaming is the development time.

Making a game to modern standards, i.e. Starcraft 2, already takes a decade.

Starcraft 2 has really failed its goal, because the vast majority of gamers who bought the game aren't smart enough to play it, but that's another matter.

Since they've damn near got weather forecasting perfected, it seems mostly what we'll be doing with these computers is entertainment and maybe design of new products. Maybe model materials research...
Nanobanano
not rated yet Dec 13, 2011
CPU, single thread: 3.5 hours
CPU, Quadcore OpenCL: 600 seconds
GPU 16 pixel pipes: 25 seconds


Yes, many weather models are run on graphics cards now.

NHC's position and timing for the forecast track are almost perfect now. In fact, they're more limited by the resolution of the input data than by computer processing power.
that_guy
not rated yet Dec 13, 2011
modern cards don't parallel well

Why would this matter? Because you are trying to get stand alone systems to work in parallel.

A modern card has hundreds or thousands of cores that parallel very efficiently. If it is 100% expected to run things in parallel, then it will be designed well. Pixel display and shading are all things that parallel very efficiently. The hard part is designing something to work stand-alone AND in parallel.

you would need 40 or 50 computers

Ridiculous. A modern discrete card can generally display up to 4k video. There is a huge difference between displaying a video and building up each element piece by piece. In 5 or 10 years, a card capable of 8 or 16k will be nothing.
dnatwork
5 / 5 (1) Dec 13, 2011
I find it funny that you are talking about hooking this up to a desktop PC and worrying about the graphics card running two screens. A new paradigm is called for in a room-sized application. Put the computing power in the panel, and have each pixel respond to its neighbors, some physics model, and a single user input.

"Are you a grain of sand? So am I, and all our neighbors, we are a beach. The user just pressed on me, and you and you. We now show a footprint."

Somewhere else on another section of screen, there is another user input that forms a footprint; those pixels process that, and the first bunch know nothing about it. There is no central processor; there are 115 million processors, each doing its thing in concert.
wealthychef
5 / 5 (2) Dec 13, 2011

Since they've damn near got weather forecasting perfected ...


Um .... no. We're not even close to getting locally accurate climate simulations, much less weather.
HealingMindN
5 / 5 (1) Dec 13, 2011
The CIA has been using this in their LSD room for decades.
Nanobanano
not rated yet Dec 14, 2011
dnatwork:

You're talking about mimicking a neural net or a "sensory" style of input with sort of a "cascading" effect.

That may be possible in the next several years, but most forms of entertainment, even interactive entertainment are highly linear, even multi-player games.

If you've ever written a dialogue tree for an RPG or other custom game, you know how complicated it can be.

Manually writing scripts for that sort of interaction would take decades, so "somebody" would need to write software that can generate life-like outputs on the fly.

You're talking about not just plot decisions in entertainment, or dialogue trees, but interaction with the very physics of the virtual universe.

You basically need a processor (maybe more than one) for every pixel.
