Aptina says Clarity+ technology will mean better smartphone cameras

The use of clear pixels allows small-pixel sensors to capture more of the available light.

Sensor maker Aptina has given more details of its Clarity+ system, which it says will allow 13MP smartphone cameras to match the performance of 8MP models.

The company uses clear pixels and some advanced image processing to offer improved sensitivity in both low-light and daylight situations.

Sensors using clear pixels have been proposed before (including Sony’s announcement and then retraction of its plans to use clear pixels in its current generation of smartphone sensors), but Aptina says its combination of a new color filter pattern, redesigned microlenses and novel image processing allows its system to make full use of a 2X increase in light capture, compared with existing sensors.

Rather than adding clear pixels into a red, green and blue array, Clarity+ uses clear pixels in the place of green ones. It is then able to calculate green values by subtracting red and blue from the data captured at the clear pixels. The company says it can match the color accuracy of existing sensors.
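
As a minimal sketch of that calculation (not Aptina's actual algorithm; the function name and the assumption that a clear pixel's response is roughly R + G + B are purely illustrative):

```python
import numpy as np

# Minimal sketch: if a clear pixel's response is roughly the sum of the red,
# green and blue responses, C ≈ R + G + B, then a green value can be estimated
# at a clear site by subtracting red and blue values interpolated from
# neighbouring pixels.
def estimate_green(clear, red_interp, blue_interp):
    """Estimate green at clear-pixel sites: G ≈ C - R - B."""
    green = clear - red_interp - blue_interp
    return np.clip(green, 0.0, None)  # negative estimates are clipped to zero

# Toy example: a clear pixel reads 0.9, neighbours suggest R ≈ 0.3 and B ≈ 0.2,
# giving an estimated green of about 0.4.
print(estimate_green(np.array([0.9]), np.array([0.3]), np.array([0.2])))
```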

The additional light capture from the clear pixels offers improved low-light performance but also allows the use of shorter exposures the rest of the time, reducing the risk of motion blur. It also promises better HDR capture: the multiple frames needed can be taken in less time, so they differ less from one another and require less correction for motion.
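
A rough illustration of that timing argument, assuming the claimed 2x light capture and purely illustrative exposure values:

```python
# Illustrative arithmetic only, assuming the claimed 2x increase in light capture.
# To collect the same amount of light, each exposure can be roughly halved,
# which also shortens a multi-frame HDR bracket and the motion between frames.
def exposure_needed(base_exposure_s: float, light_capture_gain: float) -> float:
    return base_exposure_s / light_capture_gain

bayer_exposure = 1 / 30                                  # e.g. 1/30 s on a conventional sensor
clarity_exposure = exposure_needed(bayer_exposure, 2.0)  # ~1/60 s for the same light

frames = 3  # a typical HDR bracket
print(frames * bayer_exposure, frames * clarity_exposure)  # ~0.100 s vs ~0.050 s
```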

Aptina says replacing all the green pixels with clear ones is more effective than inserting some clear pixels into a red, green, blue array: it allows full advantage to be taken of the additional light capture without an undue impact on resolution or an increase in color errors.

The company says its color filter pattern avoids the reduced resolution and color errors shown by existing, more elaborate clear pixel patterns. The similarity of its layout to conventional Bayer designs also means it can easily be used with the highly refined noise reduction, sharpening and edge detection algorithms that have been developed over the decades that the Bayer pattern has dominated the industry.

Although no products have yet been announced, the company was able to show us a fully working (and apparently near-production standard) sample of a Clarity+ chip, which performed very well when compared with a Bayer version of a similar sensor.

The working sample we were shown demonstrated significant improvements in noise performance in low light.
(Comparison crops: Bayer image | Clarity+ image)

The technology is built on a combination of hardware and software modifications, but the key steps are the use of the clear channel to reduce noise before and during conversion to RGB, and the re-introduction of luminance data from the clear pixels at the end of the processing pipeline.

“Effectively the color comes from the [calculated] red, green and blue signal and the [brightness] comes from the clear channel,” explains President and CTO, Dr Robert Gove.

The biggest single breakthrough came from re-introducing brightness (luma) information from the clear pixels, late in the processing chain. This represents the most significant change to the image processing pipeline, compared with a conventional sensor design.
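
A hedged sketch of how that description could map onto a processing chain, using hypothetical helper functions (denoise_with_clear, demosaic_rcb) rather than Aptina's real pipeline:

```python
import numpy as np

# Illustrative sketch only: it mirrors the flow described above rather than
# Aptina's code. The denoise_with_clear() and demosaic_rcb() callables are
# hypothetical stand-ins supplied by the caller.
def clarity_style_pipeline(raw, clear_luma, denoise_with_clear, demosaic_rcb):
    raw = denoise_with_clear(raw, clear_luma)  # clear channel guides noise reduction
    rgb = demosaic_rcb(raw)                    # R, C, B mosaic -> full RGB (G calculated)

    # Split into luma/chroma using standard Rec.601-style weights.
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    chroma = rgb - luma[..., None]

    # Late re-injection: brightness comes from the clear channel,
    # colour from the calculated RGB signal.
    return np.clip(clear_luma[..., None] + chroma, 0.0, 1.0)
```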

This modified processing pipeline could put increased pressure on smartphone processors but the company will offer its own image processor and says it will make it easy for other processor makers to adopt its technologies. “We’ve generated a lot of intellectual property but that IP has no value if it slows the adoption of this technology,” says CEO Philip Carmack.

Comments

Total comments: 43
ZC Lee
By ZC Lee (Jul 30, 2013)

Ok, HTC, are you reading this?
Contact Aptina to improve your UltraPixel.
(This time you can get the same light with an 8MP sensor)

1 upvote
zycamaniac
By zycamaniac (Jul 22, 2013)

Quite a self-contradictory comparison table: either their math has gone haywire, or they are trying to spin this up to be something much bigger than it actually is.

How could a Bayer pattern with half of the green pixels replaced have worse spatial color artifacts than the reference sample, or than one with ALL the green pixels replaced?

This is just absurd...

Comment edited 2 minutes after posting
0 upvotes
Der Steppenwolf
By Der Steppenwolf (Jul 18, 2013)

Less noise in the Clarity+ picture, but less detail as well... But it's a start, and in a few generations it will get there. Right now it's not really THAT much better; it almost looks like one could get similar results from the Bayer pic with some intelligent noise removal.

0 upvotes
Deleted78792
By Deleted78792 (Jul 18, 2013)

A welcome development, although in the samples the new imaging pipeline's output seems just a bit softer, just as applying a little NR does. I wonder how much this translates to an actual improvement in sensitivity. Maybe somewhat less than 1EV.

0 upvotes
peevee1
By peevee1 (Jul 17, 2013)

I am afraid you cannot approximate the color response of the human eye without a green-sensitive sensor, but for a target audience with notoriously color-inaccurate displays (like AMOLED) it does not matter much.

0 upvotes
ovatab
By ovatab (Jul 17, 2013)

Next non-Bayer array would be 2X2 R-C-N-B,
N for ND2 (grey) filter.

0 upvotes
Deleted78792
By Deleted78792 (Jul 18, 2013)

An interesting comment. Would you care to elaborate how the N cell would be useful, since the primary challenge seems to be colour interpolation?

0 upvotes
ovatab
By ovatab (Jul 18, 2013)

#how the N cell would be useful

The N cell will capture luminance values in highlights and extend the physical dynamic range.
Color interpolation is just a calculation - it could be fixed in firmware or raw development software.

0 upvotes
AstroStan
By AstroStan (Jul 17, 2013)

One problem with this approach is that on average (full spectrum) the clear pixels will fill (and also attain full well saturation) much faster than the filtered pixels. This is of course what gives the sensor its sensitivity boost but it also means that the filtered signals will generally be low and noisy (shot noise and camera noise are more significant for the low signal). Thus improving luminance S/N while degrading color integrity. And interpolating the green from noisy red and blue makes the color noise even worse.

So this sensor should be inferior in well-lit situations but could have a real advantage in dark conditions, e.g. it would be superior to a traditional Bayer sensor for astro-imaging.

4 upvotes
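
A small worked example of the shot-noise point raised above, assuming pure Poisson statistics (SNR ≈ √N) and illustrative photon counts rather than measured figures:

```python
import math

# Shot-noise-limited SNR grows with the square root of the photon count.
def shot_noise_snr(photons):
    return math.sqrt(photons)

bayer_green = 1000             # photons a green pixel might collect (illustrative)
clear_pixel = 2 * bayer_green  # clear pixel captures roughly 2x the light

# If exposure is halved so the clear pixels don't clip, the filtered
# red/blue pixels collect half as many photons as they otherwise would.
filtered_half_exposure = bayer_green / 2

print(shot_noise_snr(clear_pixel))             # ~44.7 (luminance SNR improves)
print(shot_noise_snr(bayer_green))             # ~31.6 (reference)
print(shot_noise_snr(filtered_half_exposure))  # ~22.4 (colour channels get noisier)
```
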
engbert
By engbert (Jul 17, 2013)

There is a lot less detail in the Clarity+. The Bayer with a little noise reduction would produce a better end result.

But it is good to see they are trying, and this is an early glimpse.

2 upvotes
Julian
By Julian (Jul 17, 2013)

Just to ask a dumb question - Won't having a clear pixel risk more blown highlights?

Comment edited 7 minutes after posting
1 upvote
AstroStan
By AstroStan (Jul 17, 2013)

No, because the ISO speed of this sensor is set much higher than that of an identical sensor with the traditional Bayer pattern. But you are onto something, because it does mean that the filtered pixels will usually be under-exposed and noisy.

2 upvotes
Jimbo420
By Jimbo420 (Jul 17, 2013)

Easy to see that colors are very different between the two sensors.

2 upvotes
tkbslc
By tkbslc (Jul 17, 2013)

Oh it's for phones. I was hoping for a 1" sensor breakthrough.

2 upvotes
Gesture
By Gesture (Jul 17, 2013)

They don't attach to the visceral senses in the same way (anyone who has developed black and white film negatives understands), but digitally generated images are certainly as magical as those generated by silver halide chemistry. I can't judge this work, but I welcome those, like Fuji or Foveon, who are trying to look at these arrays in a different way. Why can't the array be non-symmetrical, or non-homogeneous, for example?

0 upvotes
Mescalamba
By Mescalamba (Jul 16, 2013)

Well, matching 6D color accuracy shouldn't be hard.. :D

But I doubt it can match a camera with seriously good color accuracy (like the Sony A900 or Canon 1Ds MkIII). That's something almost every camera today fails at... not to mention if you try some freaky CFA.

Sure, the idea is nice, but I doubt it can work as well in all aspects as Bayer. Maybe the Bayer CFA is an old thing, but so far it seems it's still the best compromise...

0 upvotes
falconeyes
By falconeyes (Jul 16, 2013)

Unfortunately, this technology is old and has no theoretical advantage.

It is a variant of the class of possible Bayer filter spectra which may be wider or narrower. Clarity+ just uses an extremely wide green filter, the one with 100% transmission for all wavelengths.

However, current Bayer filter spectra are ALREADY optimized to be in a sweet spot: Make it narrower and luminance noise will increase. Make it wider and color noise will increase.

You actually see it in the Clarity+ sample image if you look at the white text's color artefacts.

In a DxO test, this sensor would score better on the landscape and sports measures, but worse on the portrait score. DxO anticipates vendors playing tricks with the color matrix, which is why their portrait score has a relatively high overall weight.

Other tests such as DPR's may be fooled though.

Nevertheless, this technology brings no progress whatsoever.

To promise +1EV better sensitivity is a false news statement. Sorry, DPReview ...

4 upvotes
bluevaping
By bluevaping (Jul 16, 2013)

It's all about algorithms! If you think that DxO can measure every aspect of every camera sensor technology and image property, you'd be just wrong. For example, they can't measure X-Trans sensors yet. Their testing is based on their algorithms, not the sensor makers' or even the camera makers'. It's OK for getting a rough idea of image quality, but I would say it shows more about which cameras work best with their software. As for your statement: it does bring progress, but sorry, it's not what you're looking for and not good enough for what you think DxO measures.

Comment edited 32 seconds after posting
0 upvotes
falconeyes
By falconeyes (Jul 16, 2013)

You seem to be biased wrt DxO. So, forget my comments regarding DxO.

The Clarity+ approach increases chroma noise and reduces luminance noise (leaving aside the additional problem of a higher native ISO to avoid clipping of whites).

This is, and always was, possible by using a wider transmission spectrum for the Bayer filter colors (which is what Clarity+ does, and what Canon does to some extent too). It can't even be licensed, because the original patent does not restrict the spectra one may apply.

Some may not care about chroma noise. However, I think chroma noise is much worse because it is ugly and does not look like analog grain, which can be artistically pleasant. Chroma noise destroys portraits, not the grain.

The current RGGB Bayer filter is an optimized balance between luminance (resolution/noise) and color (resolution, which is only half, and noise). The Clarity+ idea is old. I imagine it would be a step back for any color camera.

OTOH, a monochrome camera removes the Bayer filter.

3 upvotes
Richard Butler
By Richard Butler (Jul 17, 2013)

@falconeyes - our headline simply says what is promised, not whether it does or doesn't achieve it. It's a completely legitimate headline.

4 upvotes
JKP
By JKP (Jul 16, 2013)

I see reduced contrast, for example on the two exhaust pipes.

2 upvotes
Franka T.L.
By Franka T.L. (Jul 16, 2013)

Makes sense on its own, but I don't buy their claim that Sony's RGB-W pattern is inferior in overall sharpness or shows more color artifacts. All of that has to be borne out by its own image processing and its own algorithms. At the most basic level (in theory, less so in practice), the only way to get more data is to have more photosites (data points collected) per unit of area.

Nonetheless, an interesting development; in the long term, though, we still need to see a true full-spectrum image sensor.

0 upvotes
GURL
By GURL (Jul 16, 2013)

"The additional light capture from the clear pixels offers improved low-light performance but also allows the use of shorter exposures the rest of the time"

Said shorter exposures should result in a deteriorated SNR for the red and blue signals. As I suspect that color accuracy in the darkest parts of an image doesn't really matter, this should not be a problem (our eyes don't see the difference?).

Comment edited 9 minutes after posting
1 upvote
Franka T.L.
By Franka T.L. (Jul 16, 2013)

The disparity in sensitivity between the filtered and unfiltered (R/B vs clear) photosites might present a problem: by default, those R and B photosites will receive less exposure, and thus be less accurate, while the clear pixels can easily lead to overexposure. The question is how an exposure with only one set value can cater to two different exposure needs. It seems like either the shadows have to go or the highlights have to go. Not particularly enlightening from a photographic POV.

1 upvote
micahmedia
By micahmedia (Jul 17, 2013)

What about making the R&B photosites larger to compensate? That could be a win/win.

1 upvote
Fred Briggs
By Fred Briggs (Jul 16, 2013)

It occurs to me that this filter array would lend itself to a system which did two samples per exposure, with the colour filter displaced by one pixel for the second sample.

This would give you 100% luminance resolution, equivalent to a monochrome sensor, and twice the colour resolution currently available for a given array. I suspect the better SNR would more than compensate for the reduced exposure time for each sample.

Just need to find a way to implement the filter switch - either mechanically or electronically!

3 upvotes
yabokkie
By yabokkie (Jul 16, 2013)

we will get rid of color filters altogether by measuring the photons one by one with a pure, all-digital sensor.

> give you 100% luminance resolution
17% + 50% = 67% luminance? (more than 17% in real)

Comment edited 3 minutes after posting
0 upvotes
Fred Briggs
By Fred Briggs (Jul 16, 2013)

TV/video systems from way back gave greater signal bandwidth to luminance than to chrominance, on the basis that this provided the best match to the human visual system.

Until now, Bayer-type sensors have been throwing away two-thirds of the critical luminance signal. I think this is a very clever system and shows great promise for the future.

I just hope they license this widely to other companies and don't just use it with their own sensors.

Comment edited 22 seconds after posting
2 upvotes
yabokkie
By yabokkie (Jul 16, 2013)

I'm willing to trade color for better SNR. After all, our eyes are not very good at colors (but quite good at chromatic adaptation and tolerance).

1 upvote
Peiasdf
By Peiasdf (Jul 16, 2013)

The color on the car is off vs. the Bayer picture. I guess phone users don't really care about color accuracy.

5 upvotes
Richard Butler
By Richard Butler (Jul 16, 2013)

The white balance is also slightly different - the samples we were shown (of Gretag color patches) were very close, and the company seems confident it can match the color accuracy of Bayer.

2 upvotes
Barrie Davis
By Barrie Davis (Jul 16, 2013)

Why do you assume that it is the new colour array that is wrong? That seems to be a groundless assumption, to me.

Comment edited 26 seconds after posting
0 upvotes
Peiasdf
By Peiasdf (Jul 16, 2013)

@Barrie Davis
Bayer is the control for now.

2 upvotes
Richard Butler
By Richard Butler (Jul 16, 2013)

I've added a link to some crops taken from a GretagMacbeth/X-Rite color chart which should give a clearer idea of how it performs.

4 upvotes
SunnyFlorida
By SunnyFlorida (Jul 16, 2013)

Nikon 1 users don't really care about color accuracy, noise levels or even IQ for that matter; that's just a horrible sensor.

0 upvotes
bgbs
By bgbs (Jul 16, 2013)

I don't think the Bayer color is correct either.

0 upvotes
chaos215bar2
By chaos215bar2 (Jul 16, 2013)

What I wonder is, how do they manage exposure in colored vs. clear pixels? If the clear pixels receive roughly three times the light that the colored ones do, wouldn't they tend to overexpose and destroy any chance of accurately recovering the green channel in highlights?

Comment edited 52 seconds after posting
0 upvotes
Richard Butler
By Richard Butler (Jul 16, 2013)

The design includes some hardware and processing adjustments to help balance the exposure between the filtered and clear pixels.

0 upvotes
Kevin Purcell
By Kevin Purcell (Jul 16, 2013)

Hmm, R Butler says the "color balance is off" but this is a test setup with a known (D65) illuminant. There should be no color balance problems. It's 6500K. The only issues are in the processing pipeline.

The issue is color resolution in the green (as that's the channel they need to synthesize by subtraction).

The car (as Brits like yourself might know) is a model of the 1928 Le Mans winning Supercharged 4½ Litre Bentley in British racing green. A dark (i.e. low intensity) green. In the Clarity+ image it's closer to British racing grey.

Weird that Aptina should release an image that is clearly off in color.

Interesting idea. I wonder if they'll use this with their DRpix ideas (as seen and misunderstood in the Nikon 1).

0 upvotes
Kevin Purcell
By Kevin Purcell (Jul 16, 2013)

On "over exposing" remeber that R G abd B are not equally represented in white: 70% G, 20% R and 10% B.

It's 130% more photons or about +1/3 stop more "white" light than green light.

It's no difference in Bayer. Green has a lot more photons (in daylight 6500K) than R or B.

0 upvotes
Brian Steele
By Brian Steele (Jul 17, 2013)

This might open up another subject entirely, but I wonder why hexagonal image arrays didn't take off? This seems to be the best way to get an equal number of R, G, and B sensors into the array, which should produce the best results.

0 upvotes
peevee1
By peevee1 (Jul 17, 2013)

It should not. The human eye has about 50-70% green-sensitive cones (depending on the person), about 5% blue-sensitive cones, and the rest are red-sensitive. You know, to stay alive, our ancestors needed better resolution in the foliage than in the sky, but the color of blood had to be detected promptly. :)

0 upvotes
Brian Steele
By Brian Steele (Jul 19, 2013)

Admittedly I was looking more at noise than at colour resolution. Wouldn't an equal size and number of pixels result in equal noise across R, G and B for every exposure?

0 upvotes