Convergence, equivalence and the future of sensors

[Image: Sony sensor size comparison chart. Image credit: CNET]

I’m sure you’ve all seen this Sony sensor size comparison chart at various fairs, on various sites, or in the simulated displays (in which no sensors were harmed) at their various retail outlets. The implication, of course, is that bigger is better: look how much bigger a sensor you can get from us! This is of course true: all other things being equal, the more light you can collect, the more information is recorded, and the better the image you’ll be able to output for a given field of view. However, I’m going to make a few predictions today about the way future digital sensor development is going to go – and with it, the development of the camera itself. Revisit this page in about five years; in the meantime, go back to making images after reading…

[Image: differential size comparison – small MF vs big M4/3]

1. Underlying sensor technology is converging.
Larger sensors used to be CCDs; smaller ones were CMOS. Now they’re all CMOS, and they’re slowly all moving towards putting the supporting circuitry on the rear so as to maximise light collection ability – the so-called ‘BSI’ architecture. They’re all not just sporting microlens arrays, but microlens arrays and filter stacks that are designed to be part of the overall optical formula, so that both resolving power and light collection are maximised. Look closely at the white papers that get put out with every subsequent generation, and you’ll find that the same feature set that’s in the larger sensors is also in the smaller ones – and vice versa. The upshot is that the inequality in some areas of performance that previously existed (e.g. small-sensor high ISO performance being superior to MF because of the generational gap) will pretty much be eliminated.

2. The monopoly will want to maximise efficiency.
I suspect part of the reason we’re finally seeing this convergence is that the underlying designs are more scalable; not only does this simplify production and maximise the R&D dollar (especially since camera sales have been shrinking over the last few years) – it means that you can offer the same improvements to a much larger potential range of customers. Semiconductor fabrication is an expensive business: it’s highly capital-intensive because of the required production hardware, and it also requires a high degree of supporting infrastructure and expertise. Not many companies can afford to do this, and with Sony slowly cleaning up the board, you can bet that the squeeze to increase profitability is going to start very, very soon. Using one underlying pixel-level architecture and scaling it is one way to do this.

3. Sensor size is likely to once again be directly proportional to raw output, processing aside.
Here’s what I think will happen in the long run: we’ll have one or two pixel sizes, and simply tile those across whatever overall area is desired. At the pixel level, readout and processing limitations aside, I think performance will be identical. More light collection area over the same angle of view will once again mean both more spatial and luminance information at finer gradations. In other words, digital will, in a way, be like film again: for any given area of film, you can expect a certain amount of resolution, since the underlying emulsion is the same. Thus, more performance requires a larger format – and the same will be true of digital sensors.

However, the differentiators in final performance are likely to come from various technologies that can’t be implemented on all sensor sizes, or that have bottlenecks/limitations common to all sizes (e.g. maximum data processing rates). For example, the magnetic sensor suspension used in M4/3 cameras is significantly more effective than on FF; this has to do with the mass that must be moved/accelerated and the associated power consumption, plus the increase in angular resolution requiring finer control at the same time. We might see optical IS on medium format – Pentax has already been doing it, though I find IS results in general are somewhat hit and miss beyond a certain resolution – but suspending a 54x40mm sensor plus mount and ancillaries just isn’t going to happen with the same efficiency as an M4/3 one. Similarly, whilst we may reach extremely high data read rates – the E-M1.2 can already manage 60fps at 20MP, which means 1200MP/s of data being read, processed and stored – the bottlenecks are likely to be common across all cameras. This is not an exact comparison, but scaling the same pixel density to 54×40 yields a 202MP sensor, and 1/10th the frame rate (see the rough sketch below). That extra data may well be processed in creative ways (pixel shift, noise averaging, etc.) to make up the single-capture gap – more on this later. I wouldn’t be surprised if in practice, under less than ideal conditions, the gap between large and small sensors is far smaller (and less linear) than the numbers themselves would suggest.
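To make that scaling arithmetic explicit, here’s a rough back-of-envelope sketch in Python. The sensor dimensions, pixel count and the assumption that total readout throughput is the shared, fixed budget are mine for illustration; the exact megapixel figure shifts a little depending on what you plug in, but it lands in the same ballpark as the numbers above.

```python
# Back-of-envelope scaling of pixel density and readout rate.
# Assumed dimensions: Micro Four Thirds ~17.3 x 13.0 mm, medium format ~54 x 40 mm.

M43_AREA_MM2 = 17.3 * 13.0          # ~225 mm^2
MF_AREA_MM2 = 54.0 * 40.0           # ~2160 mm^2

m43_megapixels = 20.0               # E-M1.2-class sensor
m43_fps = 60.0                      # electronic-shutter burst rate

# Data throughput of the small sensor: 20 MP x 60 fps = 1200 MP/s
throughput_mp_s = m43_megapixels * m43_fps

# Same pixel pitch spread over the larger area
area_ratio = MF_AREA_MM2 / M43_AREA_MM2          # ~9.6x
mf_megapixels = m43_megapixels * area_ratio      # ~190-200 MP

# If readout throughput is the common bottleneck, frame rate falls by the same factor
mf_fps = throughput_mp_s / mf_megapixels         # ~6 fps, i.e. roughly 1/10th

print(f"{mf_megapixels:.0f} MP at ~{mf_fps:.1f} fps for {throughput_mp_s:.0f} MP/s")
```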

4. Unconventional sensor layouts are unlikely to become mainstream.
Whilst the various Foveon, X-Trans etc. options have the potential to extract more performance in various ways than an equivalent Bayer sensor, there are a few things that will eventually end up limiting their potential. Firstly, simple economics: the performance differential simply isn’t big enough to support the R&D required to develop those alternative sensor architectures to the same level, which means not enough cameras get sold, and so on. Even though the Foveon designs may excel in color and spatial resolution over their Bayer counterparts, the tradeoffs in high ISO performance and speed have not proven acceptable to consumers. X-Trans has fewer tradeoffs, but until recently the post-processing workflow has not been ideal, and significant processing happens between sensor and even camera JPEG output – leading to raw files that actually have much less latitude than you’d expect (especially in shadow recovery). The only thing we’re likely to see is some form of pixel binning for when massive output sizes are not required, since that can still use the underlying Bayer architecture (sketched below).
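To illustrate the kind of binning meant here, a minimal NumPy sketch follows. It assumes an RGGB layout and averages only same-colour photosites, so the half-resolution output is still an ordinary Bayer mosaic; real cameras would do this on-chip or in the raw converter, and I’m not claiming to reproduce those details.

```python
import numpy as np

def bin_bayer_2x2(raw: np.ndarray) -> np.ndarray:
    """Bin an RGGB Bayer mosaic 2x2 without mixing colour channels.

    Each of the four colour planes is averaged in 2x2 blocks and re-interleaved,
    giving a half-resolution mosaic with the same RGGB layout and roughly twice
    the per-site signal-to-noise ratio (illustrative sketch only).
    """
    h, w = raw.shape
    h, w = h - h % 4, w - w % 4                    # crop so every bin is complete
    raw = raw[:h, :w].astype(np.float64)
    out = np.empty((h // 2, w // 2), dtype=np.float64)
    for dy in range(2):                            # the four Bayer offsets (R, G, G, B)
        for dx in range(2):
            plane = raw[dy::2, dx::2]              # one colour plane
            out[dy::2, dx::2] = plane.reshape(h // 4, 2, w // 4, 2).mean(axis=(1, 3))
    return out

# Example: a ~20MP mosaic becomes a ~5MP mosaic
mosaic = np.random.poisson(200, size=(3888, 5184)).astype(np.float64)
print(bin_bayer_2x2(mosaic).shape)                 # (1944, 2592)
```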

5. Computational photography will provide the Next Big Leap.
We passed sufficiency at the high end long ago; we’ve since passed it at the middle price point, and now at the consumer level, too. Even taking state-of-the-art displays into account – those high-density 4K smartphones and iPads (didn’t I predict display media would be the next advance many years ago, with Ultraprinting?) – we still have enough pixels to go around. The consumer world at least now appreciates that more information density looks better – even if it doesn’t precisely understand why. However, physical limits to our vision mean that we may not need much more information for the majority of uses, simply because we lack the ability to absorb it all. What we can and do appreciate, however, is hardware that gets us to the same point – or provides more options like cropping to simulate longer lenses, or pixel binning in very low light – without the current weight penalties.
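One flavour of that computational route – the noise averaging mentioned under point 3 – is easy to sketch. The example below is deliberately simplistic: it assumes a static scene and frames that are already aligned, whereas a real pipeline (smartphone or otherwise) would add alignment, motion rejection and tone mapping on top.

```python
import numpy as np

def average_burst(frames):
    """Average an aligned burst of frames to trade frame count for noise.

    For shot-noise-limited captures, averaging N frames improves SNR by roughly
    sqrt(N) - one way a fast small sensor can close part of the single-capture
    gap to a larger one. Frames are assumed to be registered already.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Example: 16 noisy captures of the same static scene
rng = np.random.default_rng(0)
scene = rng.uniform(0.1, 1.0, size=(480, 640))
burst = [rng.poisson(scene * 100) / 100.0 for _ in range(16)]
merged = average_burst(burst)
print(np.std(burst[0] - scene), np.std(merged - scene))   # noise drops ~4x
```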

Whilst companies like Lytro and Light have tried, with somewhat mixed results, I’d say that the approach simply hasn’t been consumer-friendly. Beyond the extremely passionate, the technical execution under the hood does not matter; only the ease of use of the chosen implementation and the results do. Trying to force-fit the capabilities of the new technology to the existing photographic framework doesn’t make sense, either; we may well need to come to accept a much simpler terminology, at least at the consumer end – e.g. ‘background: more blur, less blur’ and a slider. Even for the serious, I’ve come to realise I don’t really care what the numbers say so long as a) I can get the exposure I want, and b) the visual look I want. If sliders simplify the UI, and the rest of the information is available on demand if you need to calculate flash power etc. – then what’s wrong with that? The proof is the number of serious, knowledgeable photographers who do just fine with an iPhone (myself included) and have no clue what the exposure parameters actually are, other than that we can focus and meter on what we want, and make the capture brighter or darker.

I actually feel encouraged by this: less focus on the how, more focus on the why, and the image. It’s the modern photographic revolution, redux: instead of showing us the source code and making the boxes available to all, it’s the Apple-ization and film point-and-shoot rejuvenation of photography. I’m about as far from being a hipster as you can get, and I don’t like Instax, but I suspect that this will actually stick, because we’ll be spending more brain power on making images and less on buying gear. And that is a Very Good Thing. MT

__________________

Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography. You can also support the site by purchasing from B&H and Amazon – thanks!

We are also on Facebook and there is a curated reader Flickr pool.

Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved

