For sale – Hasselblad CFV-50c


I’m helping a client move his Hasselblad CFV-50c: mint in box, under warranty until December 2017, with two batteries, the charger, and all box contents (including the dedicated focusing screen). The back is the 50MP CMOS version with live view. It is physically with me at the moment; I have tested it fully on my camera, and cosmetically it really is indistinguishable from new. It works with all V series cameras and has an activation count under 2,000. Photos are of the actual item. Alas, the film components of the system aren’t for sale (I asked).

I still believe this back represents the most versatile medium format solution (self-powered, high ISO capable, V series, tech cameras, Flexbody compatible etc.) – as well as the lowest overall cost of MF system entry given the price of V series bodies and lenses. It is also state of the art in image quality, second only to the 100MP sensors.

My review of the back (same internals as the H5D) is here, and my guide to the V system is here.

Reason for sale: he blames the arrival of the X1D system on me…

New pricing is $12k at B&H; second hand, anywhere between $7,500 and $9,500 on the forums. The backs are in somewhat limited supply at the moment, as all available sensors are going into X1Ds. Asking $8,000 wire transfer price including shipping via DHL; add 2% for PayPal. Insurance is extra (it depends on location).

Full disclosure: I will be acting as proxy for the sale and will receive a small commission in return. Please send me an email if you’re interested. Thanks! MT


Convergence, equivalence and the future of sensors

Image credit: Cnet

I’m sure you’ve all seen this Sony sensor size comparison chart at various fairs, on various sites, or in the simulated display (in which no sensors were harmed) at their various retail outlets. The implication, of course, is that bigger is better; look how much bigger a sensor you can get from us! This is of course true: all other things being equal, the more light you can collect, the more information is recorded, and the better the image you’ll be able to output for a given field of view. However, I’m going to make a few predictions today about the way future digital sensor development is going to go – and with it, the development of the camera itself. Revisit this page in about five years; in the meantime, go back to making images after reading…

Differential size comparisons: small MF vs big M4/3.

1. Underlying sensor technology is converging.
Larger sensors used to be CCDs; smaller ones were CMOS. Now they’re all CMOS, and they’re all slowly moving towards putting the supporting circuitry on the rear so as to maximise light collection ability – the so-called ‘BSI’ architecture. They’re not just sporting microlens arrays, but microlens arrays and filter stacks designed to be part of the overall optical formula, so that both resolving power and light collection are maximised. Look closely into the white papers that get put out with every subsequent generation, and you’ll find that the same feature set that’s in the larger sensors is also in the smaller ones – and vice versa. The upshot is that the inequality in some areas of performance that previously existed (e.g. small-sensor high ISO performance being superior to MF because of the generational gap) will pretty much be eliminated.

2. The monopoly will want to maximise efficiency.
I suspect part of the reason we’re finally seeing this convergence is that the underlying designs are more scalable; not only does this simplify production and maximise the R&D dollar (especially since camera sales have been shrinking over the last few years) – it means you can offer the same improvements to a much larger potential range of customers. Semiconductor fabrication is an expensive business: it’s not only highly capital-intensive because of the required production hardware, but also requires a high degree of supporting infrastructure and expertise. Not many companies can afford to do this, and with Sony slowly cleaning up the board, you can bet that the squeeze to increase profitability is going to start very, very soon. Using one underlying pixel-level architecture and scaling it is one way to do this.

3. Sensor size is likely to once again be directly proportional to raw output, processing aside.
Here’s what I think will happen in the long run: we’ll have one or two pixel sizes, and simply fill those across whatever overall area is desired. At the pixel level, readout and processing limitations aside, I think performance will be identical. More light collection area over the same angle of view will once again mean both more spatial and luminance information at finer gradations. In other words: in a way, digital will be like film again. For any given area of film, you can expect a certain amount of resolution since the underlying emulsion is the same. Thus, more performance requires a larger format – and the same will be true of digital sensors.

However, the differentiators in final performance are likely to come from various technologies that can’t be implemented on all sensor sizes, or that have bottlenecks/limitations common to all sizes (e.g. maximum data processing rates etc.). For example, the magnetic sensor suspension used in M4/3 cameras is significantly more effective than in FF ones; this has to do with the mass that must be moved/accelerated and the associated power consumption, plus the increase in angular resolution requiring finer control at the same time. We might see optical IS on medium format – Pentax has already been doing it, though I find IS results in general are somewhat hit and miss beyond a certain resolution – but suspending a 54x40mm sensor plus mount and ancillaries just isn’t going to happen with the same efficiency as for a M4/3 one. Similarly, whilst we may reach extremely high data read rates – the E-M1.2 can already manage 60fps at 20MP, which means 1200MP/s of data being read, processed and stored – the bottlenecks are likely to be common across all cameras. This is not an exact comparison, but scaling the same pixel density to 54×40mm yields a roughly 200MP sensor, and about 1/10th the frame rate. That extra data may well be processed in creative ways (pixel shift, noise averaging, etc.) to make up the single-capture gap – more on this later. I wouldn’t be surprised if in practice, under less than ideal conditions, the gap between large and small sensors is far smaller (and less linear) than the numbers themselves would suggest.
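That scaling is easy to sanity-check as a back-of-the-envelope calculation. The sketch below is mine, not Sony’s or Olympus’s arithmetic: the 17.3×13.0mm M4/3 dimensions are approximate, and it assumes an identical pixel pitch and a fixed total readout rate.

```python
# Scale the E-M1.2's pixel density (20MP on M4/3) up to 54x40mm,
# assuming the same pixel pitch and a capped total readout rate.
m43_area = 17.3 * 13.0   # approximate M4/3 sensor area, mm^2
mf_area = 54.0 * 40.0    # 54x40mm medium format sensor area, mm^2

scaled_mp = 20.0 * mf_area / m43_area
print(f"{scaled_mp:.0f}MP")   # ~192MP at the same pixel pitch

# If readout stays capped at the E-M1.2's ~1200MP/s, frame rate
# falls in proportion to pixel count:
fps = 1200.0 / scaled_mp
print(f"{fps:.1f}fps")        # roughly a tenth of 60fps
```

Depending on the exact sensor dimensions assumed, this lands in the same ballpark as the ~200MP and tenth-of-the-frame-rate figures above.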

4. Unconventional sensor layouts are unlikely to become mainstream.
Whilst the various Foveon, X-trans etc. options have the potential to extract more performance in various ways than an equivalent Bayer sensor, there are a few things that will eventually end up limiting their potential. Firstly, simple economics: the performance differential simply isn’t big enough to support the R&D required to develop those alternative sensor architectures to the same level, which means not enough cameras get sold, and so on. Even though the Foveon designs may excel in color and spatial resolution over their Bayer counterparts, the tradeoffs in high ISO performance and speed have proven not really acceptable to consumers. X-trans has fewer tradeoffs, but until recently the post-processing workflow has not been ideal, and significant processing happens between sensor and even camera JPEG output – leading to raw files that actually have much less latitude than you’d expect (especially in shadow recovery). The only thing we’re likely to see is some form of pixel binning in cases where massive output sizes are not required, since that can still use the underlying Bayer architecture.
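At its simplest, binning is just block averaging: trade resolution for noise. The sketch below is a deliberately simplified single-channel illustration of the idea (real Bayer binning combines same-color photosites, which sit two pixels apart, but the principle is the same).

```python
import numpy as np

def bin_2x2(raw):
    """Average each 2x2 block of a single-channel readout into one
    output pixel. Assumes even dimensions; halves linear resolution
    and averages down per-pixel noise."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 'sensor'
print(bin_2x2(raw))  # 2x2 output; each value is the mean of one block
```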

5. Computational photography will provide the Next Big Leap.
We passed sufficiency at the high end long ago; we’ve passed it at the middle price point, and we’ve now passed it at the consumer level, too. Even taking state-of-the-art displays into account, like high-density 4K smartphones and iPads (didn’t I predict many years ago, with Ultraprinting, that display media would be the next advance?) – we still have enough pixels to go around. The consumer world at least now appreciates that more information density looks better – even if it doesn’t precisely understand why. However, the physical limits of our vision mean that we may not need much more information for the majority of uses, simply because we lack the ability to absorb it all. What we can and do appreciate, however, is hardware that gets us to the same point – or provides more options, like cropping to simulate longer lenses, or pixel binning in very low light – without the current weight penalties.

Whilst companies like Lytro and Light have tried, with somewhat mixed results, I’d say their approach simply hasn’t been consumer-friendly. Beyond the extremely passionate, the technical execution under the hood does not matter; only the ease of use of the chosen implementation and the results. Trying to force-fit the capabilities of the new technology into the existing photographic framework doesn’t make sense, either; we may well need to come to accept a much simpler terminology, at least at the consumer end – e.g. ‘background: more blur, less blur’ and a slider. Even for the serious, I’ve come to realise I don’t really care what the numbers say so long as a) I can get the exposure I want, and b) the visual look I want. If sliders simplify the UI, and the rest of the information is available on demand if you need to calculate flash power etc. – then what’s wrong with that? The proof is the number of serious, knowledgeable photographers who do just fine with an iPhone (myself included) and have no clue what the exposure parameters actually are, other than that we can focus and meter on what we want, and make the capture brighter or darker.

I actually feel encouraged by this: less focus on the how, more focus on the why – and the image. It’s the modern photographic revolution, redux: instead of showing us the source code and making the boxes available to all, it’s the Apple-ization and film-point-and-shoot rejuvenation of photography. I’m about as far from being a hipster as you can get, and I don’t like Instax, but I suspect that this will actually stick, because we’ll be spending more brain power on making images and less on buying gear. And that is a Very Good Thing. MT


Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography. You can also support the site by purchasing from B&H and Amazon – thanks!

We are also on Facebook and there is a curated reader Flickr pool.

Images and content copyright Ming Thein | 2012 onwards. All rights reserved

Filed under: Articles, On Photography

Meg Baird: January 22, 2017 Park Church Co-op

Finally capping off our coverage of the epic Ranaldo-Gunn-Baird tour of 2017—last but certainly not least—here’s Meg Baird’s opening set. We’ve long been fans of everything Meg Baird, from Espers to her solo work to the Baird Sisters to Heron Oblivion, but we haven’t yet had occasion to post a solo set until now. This recording captures Baird on the last date of her two-week tour with Lee Ranaldo and Steve Gunn (those sets posted here and here, respectively) and finds her playing a couple songs off her latest solo record for Drag City, Don’t Weigh Down the Light, along with a few covers: “Old Man on the Mountain” from fellow traveler P.G. Six, “Beatles and the Stones” by The House of Love, and the traditional Scottish folk song “Willie O’Winsbury.” It’s been an emotional couple of weeks for all, and Baird provides a touching introduction to “Willie O’Winsbury,” describing the unique ability of traditional songs to communicate with us through time. Only a little over a week old, this set nonetheless feels like a transmission from the distant past. Our work continues… But in the meantime, feel free to accept into your life those small joys, like this gorgeous acoustic guitar set from Meg Baird.

I recorded this set from my improvised tapers section in front of the board, combined with a feed from Park Church’s engineer Jasno Swarez. Despite some ambient noise from the church, the sound is very good. Enjoy—and resist!

Download: MP3/FLAC


Meg Baird
Park Church Co-op
Brooklyn, NY

Recorded and produced by Eric PH for

Soundboard [engineer: Jasno Swarez] + AKG C480B/CK61 > Roland R-26 > 2xWAV (24/48) > Adobe Audition CC (align, compression, mixdown, normalize, fades) + Izotope Ozone 5 (EQ) > Audacity 2.0.5 (downsample, dither, tracking, tagging) > FLAC (16/44.1, level 8)

Tracks [31:23]
01. Old Man on the Mountain [P.G. Six]
02. Back to You
03. Don’t Weigh Down the Light
04. Beatles and the Stones [The House of Love]
05. [Willie O’Winsbury intro]
06. Willie O’Winsbury [traditional]

Buy Meg Baird records via Drag City

And then, please consider donating to local organizations that support freedom and fair treatment for all, such as the New York Civil Liberties Union and the New York Immigration Coalition.

The Supreme Court’s Return To Nine

On this day, February 1, 1790, the United States Supreme Court met for the first time, presided over by Chief Justice John Jay.

Article Three of the US Constitution took effect in March 1789, giving a Supreme Court ultimate jurisdiction over all laws in the US. President George Washington appointed Jay and five other justices, who were confirmed by the US Senate a couple of days later.

Supreme Court of the United States

On this day in 2017, the United States begins evaluating Judge Neil Gorsuch’s nomination to the Supreme Court by President Donald Trump. Gorsuch is a 49-year-old federal appellate judge from Colorado who attended Columbia and Harvard, and earned a doctorate in legal philosophy at Oxford.

Trump’s Supreme Court nominee has been announced!

The court has had only eight of nine justices for nearly a year, after the Senate’s refusal to hold hearings on Merrick Garland, President Barack Obama’s Supreme Court nominee and chief judge of the US Court of Appeals for the District of Columbia Circuit. Garland’s nomination followed the sudden death last February of Justice Antonin Scalia, one of the most well-known justices in the history of the court.

U.S. Supreme Court Justice Antonin Scalia -- Ellis Island (NY/NJ) September 1990

This is the longest period without nine justices since the Judiciary Act of 1869, which officially set the number. Previously there were as few as six and as many as ten Supreme Court justices.

Take a look at some Supreme Court photos from the archives. It has indeed changed over the years.

The nine members of the Supreme Court of the United States (LOC)
supreme court 1932
Supreme Court Justices Sotomayor, Ginsburg, Kagan

New Highlight Video From Grand Cayman Island!

I spent a crazy five days taking photos and hanging out with some of the world’s top chefs, including Anthony Bourdain, Éric Ripert, José Andrés, and Emeril Lagasse, on Grand Cayman for the famous Cayman Cookout! Here’s a highlight video!

You can also see my personal challenge to lose a little weight in my nonstop up-and-down struggles. How did I fare? Well, you’ll see…

Playing on the Beach in Fort Lauderdale

Thanks again Pascha!

So Pascha brought his gal (pictured below) over to the Ritz-Carlton in Fort Lauderdale for a fun day of photography. Pascha is also the guy running our big charity event for my wife’s cancer situation. You can download the Android version or the iOS version right now. After you get it, find me there in the app and join in! The winner gets a 30-minute FaceTime with me to talk about whatever you like!

Daily Photo – Playing on the Beach in Fort Lauderdale

Here’s one of my favorite shots from my 2-3 hours shooting with Katie Ann, and there were a lot of good ones to choose from. She was really embarrassed and thought she would look stupid jumping because she hadn’t done gymnastics since high school. I asked her if that was last week.

Playing on the Beach in Fort Lauderdale

Photo Information

  • Date Taken: 2017-01-11 02:16:58
  • Camera: ILCE-7RM2
  • Camera Make: Sony
  • Exposure Time: 1/4000
  • Aperture: f/6.3
  • ISO: 800
  • Focal Length: 24.0 mm
  • Flash: Off, did not fire
  • Exposure Program: Manual
  • Exposure Bias: -0.7

New fonts from Type Network for Typekit Marketplace

We feel fortunate to play a role in supporting a worldwide network of type designers and typographers. Few groups embrace this collective sense of agency as completely as Type Network, which we welcomed as one of our founding Typekit Marketplace partners last fall.

Type Network is an alliance of independent foundries and type designers from all over the world. Your Typekit Marketplace purchases support these independent designers, and we’re delighted to share even more fonts from these foundries with you today.

Lipton Letter Design

Meno from Lipton Letter Design

Richard Lipton originally designed Meno back in 1994, and this updated version takes advantage of the intervening 20-plus years of type developments and adds in OpenType features, three optical sizes, alternate styles, and plenty more — and Meno was already versatile to begin with! The italics are truly gorgeous and a testament to Lipton’s background as a calligrapher, and don’t miss the chance to play with the small and petite cap styles.

See Meno Text, Banner, & Display on Typekit Marketplace.

More about Meno on Type Network.

David Jonathan Ross

Fit from David Jonathan Ross

Type Network describes Fit from David Jonathan Ross as a “wildly imaginative all-caps constructivist juggernaut,” and we couldn’t think of a more apt description for this one-of-a-kind typeface. The shapes are truly experimental and make for bold, arresting works of design. With ten weights from Skyline to Ultra Extended, there’s a lot of room to play with this one — and we can’t wait to see what people do with it.

Get Fit on Typekit Marketplace. (Sorry, does not count as physical exercise.)

More about Fit on Type Network.


New Hero, New Rubrik, and New Herman from Newlyn

Miles Newlyn has decades of practice designing typefaces for commercial and corporate use, and the families we’re adding to the Marketplace show the range of his talent; each possesses a distinctive personality that is easy to fall in love with. Miles worked with Elena Schneider on New Herman, a revival of an Art Nouveau style whose proportions are similar to blackletter but completely modernized. New Rubrik’s rounded forms add plenty of softness to a page (if too soft, also try New Rubrik Edge), and New Hero was built to travel far — it was originally designed for Citibank, with all the weights and styles a bank (or you!) might need to build a complete typographic hierarchy.

Get New Hero, New Rubrik, New Rubrik Edge, & New Herman on Typekit Marketplace.

More about New Hero on Type Network.

Font Bureau

Stereo, Bodoni FB, Apres, and Belizio from Font Bureau

Founded in 1989 by Roger Black and David Berlow, Font Bureau’s friendship with Typekit predates Type Network. The foundry has developed countless custom designs for several major American publications, and their fonts are among the most trusted in the industry. Look no further than Apres for a workhorse sans, for example — perhaps even paired with Belizio for a well-balanced serif contrast… or with Stereo, a digital revival of a 1968 design, for something a little more fun and offbeat. And we’re delighted to add a new Bodoni to our collection with Richard Lipton’s extremely sharp-contrasted Bodoni FB.

Californian and Benton Modern Text from Font Bureau

David Berlow’s high-contrast Californian has a delightful personality, with details like the very-slightly-quirky angle on the i tittle and a funky reverse contrast on the z. And Benton Modern Text, expanded from Tobias Frere-Jones’s 1997 design for The Boston Globe and Detroit Free Press, is simply gorgeous and should be a strong contender for any long-form publication.

Find Stereo, Bodoni FB, Belizio, Apres, Californian Text and Display, & Benton Modern Text on Typekit Marketplace.

About Typekit Marketplace

The fonts you purchase from Typekit Marketplace are yours to use as long as you have a Creative Cloud login — even if you end your paid subscription.

Questions about getting started? Any other fonts you’re dying to see? Let us know on Twitter or drop an email to

Kyoto Café Reimagines Rocks & Minerals as Beautiful Food

an edible rock and mineral specimen meal that was offered last year

Usaginonedoko is a Kyoto shop, café and lodge whose purpose is to convey the sculptural beauty of mother nature. Their shop is like a little museum where you’ll find natural artifacts and specimens scattered among their signature sola cubes of plants and minerals encapsulated in resin.

inside the usaginonedoko cafe

Next door in their café they share an equal attraction to nature, and offer a continuously rotating seasonal menu inspired often by rocks, minerals and other objects of nature that have occupied this planet far longer than we have. Their limited specials are particularly playful and seem almost too beautiful to eat.

the “Meteorite Curry” is a black rice and curry inspired by Tektite, natural glass formed from terrestrial debris ejected during meteorite impacts.

“kantenseki” was a wagashi take on crystals that was served in 2016 (no longer available)

the rock & mineral specimens meal that is currently being served

Right now the café is offering edible rock & mineral specimens that include pork stone, cavansite, jade and amethyst. But the meals are only offered for 2-3 days at the end of each month and must be reserved in advance. There’s also a 15-person limit for each day. In addition, an upcoming exhibition at the shop takes the opposite approach: it will showcase stones from around the world that look delicious but are not edible.

an upcoming exhibition will showcase delicious looking real stones

If you decide to go, know that the cafe’s menu is constantly rotating and items you see here might not be available. Even if you can’t make it to their shop in Kyoto you can still enjoy some of their objects of nature encapsulated in resin.

the Usaginonedoko shop in Kyoto

The Story Behind Comic Sans

Designers love to say that they hate Comic Sans. It makes them feel sophisticated and discerning. To admit any fondness for Comic Sans is the equivalent of saying you like to eat canned string beans and fried Spam sandwiches. You might, but you don’t tell anyone. So, it is refreshing to learn that designer Vincent Connare drew Comic Sans while he was at Microsoft in 1995 and is proud that it has become an iconic symbol familiar to designers everywhere. Indeed, there are thousands of unknown type designers around who produce respectable fonts that no one uses, can recognize on sight, or can name. Comic Sans will live on, just like Helvetica and Bodoni.

Monthly Theme: Radio and Podcasts

(click to view source)

The Buggles said that video would kill the radio star. But despite the turbulent history of audio/visual entertainment innovation, here in 2017 – amidst 3D films, worldwide online video games and virtual reality – radio remains. From AM/FM on your morning commute through to the 1s and 0s of digital radio, streams and podcasts on your phone, this medium has long been a sonic playground for storytellers.

So join us, bring your ears and leave your eyes behind (only in spirit, you’ll be needing your eyes for most of our content) as we explore the history, creativity within and meaning that Radio and Podcasts hold for us today.

There is a feeling, when you listen to radio, that it’s one person, and they’re talking to you, and you really feel their presence as one person.

Want to join in the conversation? Comment below, ask a question in the Designing Sound Exchange, post to Facebook, or start up a conversation on Twitter!

Please email richard [at] this site to contribute an article for this month’s topic. And as always, please feel free to go “off-topic” if there’s something else you’re burning to share with the community.