Hello DARKNESS: Physicists team up with astronomers to commission the most advanced camera in the world

April 17, 2018 by Julie Cohen, University of California - Santa Barbara
The world’s most advanced camera can detect planets around the nearest stars. Credit: University of California - Santa Barbara

Somewhere in the vastness of the universe another habitable planet likely exists. And it may not be that far—astronomically speaking—from our own solar system.

Distinguishing that planet's light from its star, however, can be problematic. But an international team led by UC Santa Barbara physicist Benjamin Mazin has developed a new instrument to detect planets around the nearest stars. It is the world's largest and most advanced superconducting camera. The team's work appears in the journal Publications of the Astronomical Society of the Pacific.

The group, which includes Dimitri Mawet of the California Institute of Technology and Eugene Serabyn of the Jet Propulsion Laboratory in Pasadena, California, created a device named DARKNESS (the DARK-speckle Near-infrared Energy-resolved Superconducting Spectrophotometer), the first 10,000-pixel integral field spectrograph designed to overcome the limitations of traditional semiconductor detectors. It employs Microwave Kinetic Inductance Detectors that, in conjunction with a large telescope and an adaptive optics system, enable direct imaging of planets around nearby stars.

"Taking a picture of an exoplanet is extremely challenging because the star is much brighter than the planet, and the planet is very close to the star," said Mazin, who holds the Worster Chair in Experimental Physics at UCSB.

Funded by the National Science Foundation, DARKNESS is an attempt to overcome some of the technical barriers to detecting planets. It can take the equivalent of thousands of frames per second without any read noise or dark current, which are among the primary sources of error in other instruments. It also has the ability to determine the wavelength and arrival time of every photon. This time domain information is important for distinguishing a planet from scattered or refracted light called speckles.
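As a rough illustration of why per-photon timing matters (purely synthetic numbers, not DARKNESS data or its pipeline), a steady planet signal and a rapidly fluctuating speckle can have the same average brightness but very different statistics, so counts binned from a time-tagged photon list can tell them apart:

```python
# Illustrative only: toy statistics showing how time-tagged photon counts can
# distinguish a steady planet signal from rapidly fluctuating speckles.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 20_000          # short exposures, e.g. ~ms each (hypothetical)
mean_rate = 5.0            # mean photons per frame in one pixel (hypothetical)

# Planet: constant intensity -> pure Poisson counts.
planet = rng.poisson(mean_rate, n_frames)

# Speckle: the underlying intensity itself fluctuates frame to frame (an
# exponential distribution is the classic fully developed speckle case),
# with photon noise on top.
speckle_intensity = rng.exponential(mean_rate, n_frames)
speckle = rng.poisson(speckle_intensity)

# Same mean, very different variance: the excess variance flags the speckle.
for name, counts in [("planet-like", planet), ("speckle-like", speckle)]:
    print(f"{name}: mean={counts.mean():.2f}, var/mean={counts.var()/counts.mean():.2f}")
```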

"This technology will lower the contrast floor so that we can detect fainter planets," Mazin explained. "We hope to approach the photon noise limit, which will give us contrast ratios close to 10-8, allowing us to see planets 100 million times fainter than the star. At those contrast levels, we can see some planets in reflected light, which opens up a whole new domain of planets to explore. The really exciting thing is that this is a technology pathfinder for the next generation of telescopes."

Designed for the 200-inch Hale telescope at the Palomar Observatory near San Diego, California, DARKNESS acts as both the science camera and a focal-plane wave-front sensor, quickly measuring the light and then sending a signal back to a deformable "rubber" mirror that can take on a new shape 2,000 times a second. This process cleans up the atmospheric distortion that causes stars to twinkle, suppressing the starlight and enabling higher contrast between the star and the planet.
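The closed loop described here can be caricatured in a few lines. This is only a toy integrator with made-up sizes and gains, not the actual Palomar control system, but it shows why repeatedly measuring the residual light and feeding a correction back to a deformable mirror drives the distortion down:

```python
# Highly simplified closed-loop sketch (not the real DARKNESS control law):
# the camera measures residual speckle each cycle and a deformable mirror (DM)
# removes a fraction of it, running at ~2 kHz as described in the article.
import numpy as np

rng = np.random.default_rng(1)
n_actuators = 64           # hypothetical DM size
gain = 0.3                 # hypothetical integrator gain
dm_command = np.zeros(n_actuators)
turbulence = rng.normal(0.0, 1.0, n_actuators)   # frozen aberration for the toy

for cycle in range(10):                          # real loop: ~2,000 cycles/second
    residual = turbulence - dm_command           # what the focal-plane sensor "sees"
    dm_command += gain * residual                # integrator-style update
    print(f"cycle {cycle}: RMS residual = {np.std(residual):.3f}")
```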

During the past year and a half, the team has employed DARKNESS on four runs at Palomar to work out bugs. The researchers will return in May to take more data on certain planets and to demonstrate their progress in improving the contrast ratio.

"Our hope is that one day we will be able to build an instrument for the Thirty Meter Telescope planned for Mauna Kea on the island of Hawaii or La Palma," Mazin said. "With that, we'll be able to take pictures of planets in the habitable zones of nearby low mass stars and look for life in their atmospheres. That's the long-term goal and this is an important step toward that."

14 comments

TopCat22
5 / 5 (1) Apr 17, 2018
To me it sounds like the problem is more one of computation.

If we make a sensor that can count and distinguish more photons per pixel and have enough computing behind it, the images could be rendered with enough detail, including exoplanet surfaces.

The problem of the starlight blurring the image is not unlike trying to view a 4K image file on Commodore 64 hardware.
IMP-9
5 / 5 (4) Apr 17, 2018
"If we make a sensor that can count and distinguish more photons per pixel and have enough computing behind it, the images could be rendered with enough detail, including exoplanet surfaces."

That's not the case. You cannot beat the diffraction limit; current telescopes are a few orders of magnitude too small to possibly resolve the surface of an exoplanet. No amount of post-processing will overcome that.

Even when not limited by the diffraction limit, post-processing can only make rather modest improvements to the sharpness of an image. In the case of exoplanets, the light from the host star and planet needs to be separated before detection, because even if you can subtract the much brighter star in post-processing, it will leave behind irreducible noise.
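To put rough numbers on this point (values assumed for illustration, none from the comment or the paper), the diffraction limit of even the 5.1 m Hale telescope is hundreds of times larger than the angular size of a planet's disk:

```python
# Quick numbers behind the "can't resolve the surface" point (assumed values).
wavelength = 1.6e-6            # m, near-infrared (assumed)
aperture = 5.1                 # m, the Hale telescope
theta_diff = 1.22 * wavelength / aperture          # Rayleigh criterion, radians

planet_diameter = 1.4e8        # m, roughly Jupiter-sized (assumed)
distance = 10 * 9.46e15        # m, 10 light-years (assumed)
theta_planet = planet_diameter / distance

rad_to_mas = 206265e3          # radians -> milliarcseconds
print(f"diffraction limit : {theta_diff * rad_to_mas:8.1f} mas")
print(f"planet's disk     : {theta_planet * rad_to_mas:8.3f} mas")
print(f"shortfall factor  : {theta_diff / theta_planet:8.0f}x")
```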
Da Schneib
5 / 5 (3) Apr 17, 2018
That's a great explanation @IMP-9. The only part I think you didn't make clear is that the diffraction limit, based on the size of the telescope's aperture, convolves the bright star's light with the planet's light, making it harder to eliminate the starlight before the imaging plane of the sensor.

Also I spotted another flaw in @TopCat's reasoning. The actual criterion is not more photons per pixel; one can achieve that quite simply by increasing the pixel size, but of course that reduces the resolution of the imaging instrument. The necessary criterion is to match the pixel size with the telescope's inherent resolution based on the diffraction limit. If one goes above the limit, one sacrifices resolution; if one goes below the limit, the light is spread across more pixels and one sacrifices contrast.
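A short sketch of that matching rule with assumed numbers (not taken from the comment): the usual choice is about two pixels across the lambda/D beam, i.e. Nyquist sampling of the telescope's resolution.

```python
# Sketch of the sampling trade-off described above (assumed values):
# the detector's angular pixel size is usually chosen near lambda / (2 D),
# i.e. two pixels across the telescope's diffraction-limited beam.
wavelength = 1.1e-6       # m, DARKNESS works roughly in the 0.8-1.4 micron band (approx.)
aperture = 5.1            # m, Hale telescope
rad_to_mas = 206265e3

beam = wavelength / aperture * rad_to_mas          # diffraction beam, mas
nyquist_pixel = beam / 2.0                         # target pixel size on sky

print(f"lambda/D beam       : {beam:.1f} mas")
print(f"Nyquist pixel scale : {nyquist_pixel:.1f} mas/pixel")
# Bigger pixels waste resolution; much smaller pixels spread the same light
# over more pixels and cost sensitivity, as the comment explains.
```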
TopCat22
5 / 5 (2) Apr 19, 2018
Great comments guys... thanks.

Now how about getting rid of the lens and whatnot and just making a sensor that can capture every photon individually and measure the properties of each photon? Then arguably we could simply calculate where each photon came from and reconstruct the image photon by photon.

It may be that the sensor is the size of the moon and has to point towards the object for a few years to get enough data points, but given enough computing and storage power it may not be impossible.
Da Schneib
5 / 5 (2) Apr 19, 2018
@TopCat, the size of the pixel on the sky is the question here; you must make that as small as possible, and for that you need magnification. I'll stick to mirror telescopes in describing this rather than refractors, since I work with one, but the principle is the same either way.

Regarding the sensors, the pixel sizes are limited by the physics of photolithography; that controls how close together we can make the cells, and thus how small. In fact, since there's more to it than just a bunch of cells, for example connecting them together so you can read them out, they have to be considerably bigger than the diffraction limit in the photolitho process. So right there you're limited in how small an area you can image.

To get an image of something as small and far away as a planet orbiting another star, you have to apply magnification, and a lot of it. That's what the telescope is for: with enough magnification, you can make the image scale small enough to take advantage of those small pixels.
[contd]
Da Schneib
5 / 5 (2) Apr 19, 2018
[contd]
I explained the next part in my other post, but here's another way of looking at it:

Now, to make the most of the magnification, you want to make your cells small; but there's no point in making them smaller than the diffraction limit of the telescope, because then you won't be getting as much light on each pixel as you could, thus limiting your ability to see dim objects.

On the other hand, you don't want the cells to be too big, because then you're wasting magnification, by combining images the telescope can resolve onto the same pixel.

Remember that for a single cell, you get one value for the whole cell; ideally, this value is the diffraction limit of the telescope, so you get the most light you can in each cell for the smallest image the telescope can resolve.

Everything I'm talking about so far is before we start blocking the star's light so we can see the much dimmer planet, and before we start image processing.
[contd]
Da Schneib
5 / 5 (2) Apr 19, 2018
[contd]
To get an image that resolves the planet from the star, you need to have one pixel that sees the star alone, and one pixel that sees the planet alone, and one dark pixel between them. But at or near the diffraction limit, another effect becomes important: the light from the star "bleeds over" into adjacent pixels. This happens both as a result of the diffraction rings around the star's image, and as a result of electrons freed by the charge in one cell jumping to an adjacent cell. This can happen both as a result of quantum tunneling and as a result of the insulation between cells not being perfect.

Thus, by blocking out the light of the star, without affecting the light of the planet, you can get an image of the planet that won't be washed out by the light of the star. You can only do this in the optics because of the leakage between cells.
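A companion calculation with assumed values (not from the comment): the star-planet separation, unlike the planet's disk, does span several resolution elements, which is why blocking the starlight lets the planet show up at all.

```python
# Companion to the disk-size numbers above (assumed values): the planet's
# *separation* from its star is large enough to resolve, even though its
# surface is not.
wavelength = 1.1e-6                     # m, near-infrared (assumed)
aperture = 5.1                          # m, Hale telescope
rad_to_mas = 206265e3

beam = 1.22 * wavelength / aperture * rad_to_mas     # diffraction beam, mas
au = 1.496e11                                        # m
separation = au / (10 * 9.46e15) * rad_to_mas        # 1 AU orbit seen from 10 ly

print(f"diffraction beam       : {beam:6.1f} mas")
print(f"star-planet separation : {separation:6.1f} mas")
print(f"-> separation spans ~{separation / beam:.0f} resolution elements")
```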
[contd]
Da Schneib
5 / 5 (2) Apr 19, 2018
[contd]
@IMP-9's point is that the telescopes we can practically make are orders of magnitude too small to resolve details on the planet. The best we can do is get a spectrum of the light reflected from the planet.

And that's not all; that spectrum will be starlight that has traversed the planet's atmosphere twice, with a reflection off the surface in between, so we have to account for the star's spectral lines before we can tell which parts of the spectrum come from the planet's atmosphere and which from reflection off its surface. To get this, the general idea is that we "subtract" the spectrum of the star from the spectrum of the planet, looking for spectral lines that aren't present in the star's spectrum.
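A toy illustration of that "subtract the star" idea, using made-up synthetic spectra rather than any real reduction pipeline: reflected light is the stellar spectrum multiplied by whatever the planet imprints on it, so dividing by the stellar spectrum leaves only the planet's features.

```python
# Synthetic example only: dividing out the stellar spectrum to isolate a
# feature imprinted by the planet.
import numpy as np

wavelengths = np.linspace(1.0, 1.4, 400)                  # microns (arbitrary grid)

def gaussian_line(center, depth, width=0.005):
    """Continuum of 1 with a Gaussian absorption line of the given depth."""
    return 1.0 - depth * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

star = gaussian_line(1.10, 0.6) * gaussian_line(1.28, 0.4)        # stellar lines
planet_atmosphere = gaussian_line(1.35, 0.3)                      # planet-only line
observed_planet = star * planet_atmosphere                        # reflected light

recovered = observed_planet / star      # stellar lines divide out
deepest = wavelengths[np.argmin(recovered)]
print(f"Residual feature at {deepest:.2f} um -> attributed to the planet")
```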

So we still need the light from the star, but we also have to have it in a separate image. This is what the technique described in this paper accomplishes.
[contd]
Da Schneib
5 / 5 (2) Apr 19, 2018
[contd]
To make @IMP's point more forceful, this means that even with the smallest cell size and the largest magnification we can practically get, the planet will fall on only one or a few pixels. We can get more data from a spectrogram than from staring at a handful of pixels or a single one, because at the highest practical magnification (set by the smallest possible diffraction ring, which is a function of the telescope's aperture) we can't see details smaller than the two bright pixels separated by a dark pixel described above, but with spectroscopy we can tell things about the composition of the planet's atmosphere and surface.

No amount of image processing can find that dark pixel between two bright ones (or bright pixel between two dark ones, as the case may be) if it doesn't exist. That's the resolution limitation here.

I hope that helps you understand the reasons for these limitations. Please ask any questions you might have.
Da Schneib
5 / 5 (2) Apr 19, 2018
And that, of course, makes your question a good one; it's not simple at all. So 5s for you!
TopCat22
5 / 5 (4) Apr 20, 2018
Thank you for the excellent explanation. I now understand perfectly why it is not possible given the current technology and physical limits of using light.

434a
5 / 5 (3) Apr 20, 2018
@Da Schneib Do you think optical wavelength interferometry can have a role to play in exo-planet observations?

Would an array of space based telescopes be a feasible way of overcoming some of the problems of ground based optical interferometry?
Da Schneib
5 / 5 (3) Apr 20, 2018
@434a, eventually, yes, optical interferometry will play a part, and as you foresee, once we start putting enough telescopes in orbit or on airless moons and asteroids, we'll get more and better results faster than we can with ground-based instruments. Eventually we'll launch or build telescope arrays that span the orbits of planets in our solar system, which will give us the distance to make interferometers with equivalent apertures tens of millions of miles across. This will give us the resolution to image exoplanets at a usable image scale, likely well before we are ready to launch expeditions to them.
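For a rough sense of scale (all values assumed, ignoring photon starvation and other practical limits): angular resolution goes as wavelength over baseline, so an interplanetary baseline is, in principle, enormous overkill for resolving an exoplanet's disk.

```python
# Rough numbers for the interferometry point (illustrative values only).
wavelength = 550e-9                      # m, visible light (assumed)
baseline = 5e10                          # m, ~30 million miles between elements (assumed)
distance = 10 * 9.46e15                  # m, a planet 10 light-years away (assumed)

theta = wavelength / baseline            # resolution element, radians
resolved_size = theta * distance         # physical size resolved at the planet
print(f"Resolution element at 10 light-years: about {resolved_size:.1f} m")
```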

As computer processing becomes less and less expensive, using such large interferometers will require less and less time for data analysis, and having watched CCDs take over from film over what seems to me but a few scant decades, I expect the confluence of these trends to continue.
RealityCheck
1 / 5 (1) Apr 28, 2018
@IMP-9
"Even when not limited by the diffraction limit, post-processing can only make rather modest improvements to the sharpness of an image. In the case of exoplanets, the light from the host star and planet needs to be separated before detection, because even if you can subtract the much brighter star in post-processing, it will leave behind irreducible noise."
Thanks, mate, for unwittingly confirming me correct when I pointed out that photons from far-distant sources cannot reliably be discerned from in-line-of-sight photons from intervening sources, or from gravity-redirected photons from 'side-sources', whose radiation has been put into trajectories that coincide at our detectors, which 'build up' an 'image' from individual photons accumulated over long exposure/collection times. See, @IMP-9, how much worse (for 'imaging' extreme-distance sources) would be the same problems you just admitted bedevil even nearby 'photonic image' contamination/overwhelming situations? :)
