Are the Colors in Astronomical Images ‘Real’?

In colorful photographs of galaxies, stars, planets, and more, what you see isn’t necessarily what you get

A colorful view of Lynds 483, an hourglass-shaped system of jets and outflows from two central protostars mid-formation. Captured by the James Webb Space Telescope (JWST), this image uses false colors to highlight certain structural details. Lynds 483 is too sprawling to fit within JWST’s field of view, and so is partially cut off in this image.

When I give a public talk about space and show off the latest dazzling images from the Hubble Space Telescope or James Webb Space Telescope (JWST), one of the most common questions I get is, “Is that what these objects really look like?” What that usually means is: If you were witnessing them with your own eyes, is this what you’d see?

The answer is almost always no. But it’s not that astronomers are faking the photographs! It’s just that cameras (especially on telescopes) and eyeballs work in very different ways. No photograph, including ones you take with your smartphone, can perfectly replicate what your eye sees. The best our tech can do is approximate what you see—and sometimes we don’t want to do even that.

Two kinds of cells in your retinas, called rods and cones, are the basis of human vision. Rods can’t detect color but are good at registering low levels of light (which is why the faint light from most stars looks white to the unaided eye). Cones are the cells that decipher color, and they come in three kinds, each most sensitive to red, green or blue light. The colors we perceive when we look at an object come from the mix of light detected by the cones. This is of course a phenomenally more complicated process than what I’ve just described, but that’s the gist.


Some digital cameras can mimic this approach. Instead of light-sensitive biological cells, they have tiny pixels that essentially count every photon that hits them and store that number electronically. In this system, a brighter object, which emits more light, registers a higher photon count than a dimmer one.

These pixels aren’t intrinsically able to differentiate color, though. They just see a photon and record it. To get color information, each pixel can be covered by a filter that lets through only a range of colors in the red, green or blue parts of the spectrum. A color image emerges after the raw per-pixel photon counts are tallied and then sorted and summed according to the color each filter admits.

This is called a three-color image, and it’s close to what the eye sees. Astronomical cameras typically use larger red, green and blue filters over the entire field of view rather than the tiny filters that a smartphone camera uses for individual pixels, but the end result is much the same. In either case, however, the color filters generally can’t exactly match the color response of your eyes, so the image is not precisely what you see. Still, it can be very close.
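
To make that concrete, here’s a minimal sketch in Python with NumPy of how three separately filtered exposures become one color image. The Poisson-noise frames and the simple percentile stretch are illustrative stand-ins for real telescope data and processing, not any observatory’s actual pipeline:

```python
import numpy as np

def stretch(counts, lo_pct=1, hi_pct=99):
    """Rescale raw photon counts to the 0-1 display range.

    Real pipelines use fancier stretches (log, asinh); a simple
    percentile clip is enough to show the idea.
    """
    lo, hi = np.percentile(counts, [lo_pct, hi_pct])
    return np.clip((counts - lo) / (hi - lo), 0.0, 1.0)

# Synthetic stand-ins for photon counts from three exposures of the
# same field, taken through red, green and blue filters.
rng = np.random.default_rng(42)
red   = rng.poisson(lam=120, size=(256, 256)).astype(float)
green = rng.poisson(lam=100, size=(256, 256)).astype(float)
blue  = rng.poisson(lam=80,  size=(256, 256)).astype(float)

# Stack the stretched channels into a (height, width, 3) RGB image:
# each pixel's displayed color is just the mix of its three counts.
rgb = np.dstack([stretch(red), stretch(green), stretch(blue)])
```

Handing the resulting rgb array to an image display routine (matplotlib’s imshow, for instance, accepts floating-point RGB values between 0 and 1) yields an approximate true-color view.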

It’s good enough to create a nice photograph—that is, if we’re trying to take an image that matches what the eye would see. We call images like that “true color,” though technically that’s a misnomer because it’s really only an approximation.

Such images of cosmic objects are lovely (and much loved by the public) but of limited use for actual scientific research. For that, astronomers generally prefer to analyze any color-filtered images individually rather than combine them to make a three-color photograph.

That’s because there’s much more to “color” than making pretty pictures. The sun emits light across a wide range of wavelengths—what we call a continuous spectrum—and when we look at, say, a flower, it reflects a mix of those wavelengths of light back to us, which we perceive as color. Most stars emit a continuous spectrum as well, but not all astronomical objects do.

Hydrogen in a gas cloud, for example, emits light at very specific wavelengths, though generally most is at 656 nanometers (in the red part of the spectrum). Emission like this creates what is called a line spectrum. If astronomers want to know where the hydrogen is in that nebula, they use “narrow-band” filters that only let that specific wavelength of light pass through to reach a detector. These filters can be tuned to isolate the light from any of a huge variety of atoms and molecules that might be in a gas cloud, allowing a cloud’s composition, temperature, density, structure and other properties to be measured.
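
As a sketch of how those narrow-band frames might then be assigned colors, the snippet below (synthetic NumPy frames again standing in for real exposures) uses the well-known “Hubble palette,” which maps sulfur to red, hydrogen to green and oxygen to blue:

```python
import numpy as np

# Synthetic stand-ins for narrow-band exposures. Approximate line
# wavelengths: S II 672 nm, H-alpha 656 nm, O III 501 nm.
rng = np.random.default_rng(0)
shape = (256, 256)
s_ii    = rng.poisson(40,  shape).astype(float)
h_alpha = rng.poisson(150, shape).astype(float)
o_iii   = rng.poisson(60,  shape).astype(float)

def norm(frame):
    """Scale a frame to 0-1 by its own maximum (illustrative only)."""
    return frame / frame.max()

# "Hubble palette" (SHO): S II -> red, H-alpha -> green, O III -> blue.
sho = np.dstack([norm(s_ii), norm(h_alpha), norm(o_iii)])
```

The point of a palette like this isn’t realism: S II and H-alpha both glow nearly the same deep red to the eye, and remapping them to separate channels is what lets the two gases be told apart in the final picture.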

Most nebula photographs you see use combinations of these narrow filters, so how these images look is very different from any unaided first-person view you’d get if you were physically floating nearby. The imaging process is different, so the images look different. And that’s fine! Astronomers aren’t trying to fool you or anyone else. It’s just that these objects emit light differently than the continuous spectrum our eyes evolved to perceive, but we still want to see them. So we create these kinds of images to do that.

I’ve never run across a good name for this process, though. “False color” was popular for a while but has fallen out of favor because it implies fakery. “Unnatural color” is worse. The technique is well worth any naming woes, however, because it allows us to turn a wider variety of types of light into images. Some camera detectors are sensitive to infrared light—not just detectors in JWST but also those in newer smartphones. Others can pick up ultraviolet light, x-rays and other types of nonoptical light.

This allows the creation of images with a mix of light from across the electromagnetic spectrum. For example, you could make a picture in which ultraviolet light is shown as blue, visible light as green and infrared as red. Satellite images of Earth often use this scheme; vegetation is an excellent reflector of infrared light, so it looks bright red in such images instead of the verdant green your eye sees. Astronomical images use it, too, often with even more colors: many Hubble images, for instance, combine five or more filters, each assigned its own color. This makes the resulting final images especially vibrant, though they’re not, at least as far as your eyes are concerned, in true color.
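
Here’s one way such a multifilter assignment could work in code. The five wavelengths and the hue table are invented for illustration and don’t correspond to any particular mission’s filter set; the only convention preserved is wavelength order, shortest mapped toward blue and longest toward red:

```python
import numpy as np

# Five hypothetical filters, ordered short to long wavelength
# (roughly 300, 450, 550, 800 and 2,000 nm); synthetic frames
# stand in for real exposures.
rng = np.random.default_rng(1)
shape = (128, 128)
frames = [rng.poisson(50 + 30 * i, shape).astype(float) for i in range(5)]

# Assign each filter a hue in blue-to-red wavelength order, then sum
# the weighted contributions into the three RGB display channels.
hues = np.array([
    [0.0, 0.0, 1.0],   # ~300 nm  -> blue
    [0.0, 0.5, 1.0],   # ~450 nm  -> cyan-blue
    [0.0, 1.0, 0.0],   # ~550 nm  -> green
    [1.0, 0.5, 0.0],   # ~800 nm  -> orange
    [1.0, 0.0, 0.0],   # ~2000 nm -> red
])

rgb = np.zeros(shape + (3,))
for frame, hue in zip(frames, hues):
    rgb += (frame / frame.max())[:, :, None] * hue
rgb /= rgb.max()  # rescale the blended image into the 0-1 display range
```

None of the displayed hues is literally what the detectors saw, but redder on screen still means longer wavelength on the sky.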

In the end, the way a photograph is created depends on its use. Sometimes astronomers use single filters, multiple filters or none at all, depending on what they’re measuring. And the images you see from telescopes across—and above—the world can be created in any number of ways, then balanced and delicately processed to enhance their natural beauty.

You can make the case that none of them are in true color. But then again, if they were, they’d be unable to reveal the true nature of objects emitting or reflecting various kinds of otherwise invisible light. So in that sense, they’re all true!


