The Wild West: Reps from CW Sonderoptic, Duclos Lenses and Panavision discuss optics and sensor coverage at NAB 2018.
On April 11, moderator Gary Adcock addressed a number of questions and concerns from cinematographers regarding the impending class of full-frame motion picture cameras. With Matt Duclos, Chief Operating Officer at Duclos Lenses; Seth Emmons, Marketing Director, Cine Products for CW Sonderoptic; and Guy McVicker, Manager Panavision Technical Optics Division, the hour-long informal discussion addressed the many challenges that manufacturers, sellers and rental houses are facing.
Gary Adcock: Welcome everybody. Full-frame lenses! We are incredibly lucky today. I reached out to a few people that we have here. Matt Duclos from Duclos Lenses. For those of you that don’t know, Duclos, on the West Coast, sells, refurbishes, manufactures and customizes everything there is. Is there anything you don’t do in the cine world?
Matt Duclos: Rentals.
Gary Adcock: Next we have Seth Emmons from CW Sonderoptic — that’s Leica’s! So we’ll share some Leica information. And on the end is Guy McVicker from Panavision. We decided if we’re going to talk full frame, we’d get the people who actually know what they’re talking about. Tell us a little bit about yourselves and your opinion on full frame, not that it’s good or bad, but why you decided to invest the time and effort into full frame?
Guy McVicker: One of the big attributes of full frame is that it better creates perspective. That’s a big draw. If you want to create this visual space like we see it, one of the best ways to do that is with a 75mm spherical prime on a 55mm imager, or slightly wider on the [Panavision] DXL or [Red] Monstro, and then even wider on the [Sony] Venice and Canon C700FF. It just puts the visual cues, the depth perception, back into the perspective that ‘we’ would normally see. We can do it in Super 35, and get the frame, but the perspective is off just a little bit.
Seth Emmons: As a manufacturer, we’re focused on full frame and large format right now because the industry is, to be honest. Our job is to serve the industry and to present tools that people want to use, and be able to create new looks. Large format offers a lot of new options for people to explore. And historically there haven’t been a ton of lens options outside of Panavision, in that space.
Matt Duclos: My thoughts on the full-frame format for motion pictures are a bit backwards, because we’ve been modifying and working with lenses that were originally designed for full frame for a long time. So for a very long time it was us trying to explain to people how a lens was designed for full frame but can still be used on Super 35. Then this whole shift happened, things just sort of naturally progressed, and we had to tell people “now it’s the opposite,” to relearn it again. What we told you 5 or 10 years ago is now backwards. For us, having always been very ‘brand’ and ‘camera’ and ‘lens’ agnostic, it was a very welcome transition.
Gary Adcock: Before we go any farther [asking audience], how many here are working in large format now? Full frame? Nobody working on a DSLR? A [Sony] A7S or a [Canon] 5D? Those are full frame. So you are working on those lenses already. Some of you are working in this stuff already and maybe you don’t even realize it.
Super 35 was developed because the acetate base of film would only run so fast through the projector, and then you had to do certain things. That’s why Super 35 at 24 frames is a specific format that’s been kind of stabilized in our industry… In the early days of film, the formats were all over. You saw stuff as wide as 100mm in some instances. Lots of 65mm, 75mm, that ran at 12 or 10 frames, because they couldn’t pass it through the projectors.
We’re talking about the turn of the 1900s, when they started doing some of these processes, because it was easier to make stuff larger! But as it got more and more efficient, the industry went from 60mm and 70mm film down to 35mm. Instead of running the film horizontally, across, like it would in a VistaVision camera, they rotated it 90 degrees so the film runs vertically and the frame fills the space between the sprockets. So instead of a frame with the sprockets along its long edge, you get the sprockets along the short edge.
Then you start adding audio and everything else, so [the image area] is just getting smaller and smaller. But there’s a reason why it got to that point: after nearly 30 years of film production, from roughly the 1900s to the 1930s, as we went from black-and-white into talkies, it pretty much stabilized on 35mm.
Seth, as a manufacturer, what are some of the problems that people keep coming to you with? I mean Leica as a brand has been around forever. How long does it take you to design something new for a new sensor size? How long do you have to work in advance for that?
Seth Emmons: It’s a bit funny, because Leica as a company has been around for just over 100 years. Our first camera introduced the 24mm-by-36mm film frame, in 1913, and so now we’re kind of going back to what we have always done. Leica has always made full-frame photography lenses, in a few different lines: the M lenses, and the Leica R lenses, which are used a lot in cinema, too.
[CW Sonderoptic’s] process was to start developing for Cinema 35 digital and film, with the Summilux, and then Summicron, and the industry started to shift to full frame. In general, from concept to design to production, anywhere from 3 to 5 years is generally our path in putting something into the market. We have to do a lot of anticipation.
We chose not to jump into virtual reality, and we chose not to ‘really’ jump into 3D, but large format, full frame, is going to become a major growth factor in our industry… So we started off this a couple of years ago and anticipate that it will do nothing but grow.
Gary Adcock: What do you think about this, Guy? For the people that don’t understand what Panavision is, Panavision doesn’t sell anything. They’re a rental-only facility, encompassing the world, with some of the best people in the industry and the technology, but you guys don’t sell anything! What would be the philosophy that you came to embrace for full frame on the new Panavision line of DXL cameras?
Guy McVicker: One of the nice things about being at Panavision is that we have our own camera, but we also have access to all of these other cameras. So we’re going to get to use, if we haven’t already used, most of what’s out there. We have the full line of Panavision optics, but we also carry non-Panavision lenses. We carry Canon, Zeiss, Kowa, Baltar. We have a long line of various optics from various manufacturers. It’s fun guiding the client through the creative process to pick the format that they want, whether it’s large, 35mm or somewhere in between with all these multiple systems behind it.
Gary Adcock: Explain your inventory in glass for a second — the formats that you support at the higher end, let’s say “just” above Super 35 and up.
Guy McVicker: As we’ve come to discover with the rise of larger-format capture, a lot of the vintage optics we conceived back then just for Academy capture cover Super 35. And some, it turns out, cover full frame, too. There are also ways, with some of the tools that Mr. Duclos has, that we can enhance older optics to cover a larger format. In terms of lens sets that we have to cover large format, the newest would be the Primo 70. Below that would be the Sphero 65. Older than that would be System 65, and then there’s a new line we’re putting out, which is PVintage 65. That’s a collection of elder optics that we’ve discovered cover, and that have a pretty unique aesthetic. They’re on a couple of shows right now.
Gary Adcock: And you’ve got your own optics division to do this. The optics that Guy is talking about are literally the ones that shot the movies you hold most dear: Ben-Hur, the first films in CinemaScope, and the rest of those.
Guy McVicker: We didn’t even get to anamorphic yet! Anamorphically, there’s the Ultra Panavision, and there are two generations of them. There are the older Ben-Hur lenses that we used on The Hateful Eight, and there’s a newer rendition with more cylindrical glass versus the prismatic design of some of the older optics. Shooting with VistaVision-size imagers opens the door to many of our 2x-squeeze anamorphics, new and vintage, that will cover that image area. On the DXL sensor, for instance, that’s a 21 percent increase in field of view over traditional Super 35 or Alexa Open Gate capture. So you can get bigger than 2x as well.
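The squeeze arithmetic behind these anamorphic formats reduces to a one-liner. A minimal sketch, where the sensor dimensions are illustrative approximations of a roughly 4:3 capture area, not figures quoted on the panel:

```python
# Sketch of anamorphic desqueeze arithmetic. The sensor dimensions used
# in the example are illustrative approximations, not panel figures.

def desqueezed_aspect(sensor_w_mm: float, sensor_h_mm: float, squeeze: float) -> float:
    """Projected aspect ratio after the horizontal squeeze is expanded back out."""
    return (sensor_w_mm * squeeze) / sensor_h_mm

# A roughly 4:3 capture area with a 2x squeeze yields a widescreen image:
print(round(desqueezed_aspect(24.9, 18.7, 2.0), 2))  # 2.66
```

The same function shows why milder squeezes (1.25x, 1.3x) suit wider native sensors: less horizontal stretch is needed to land on a widescreen delivery ratio.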
Gary Adcock: For those of you that don’t quite understand how anamorphics work: basically, the lens squeezes the scene horizontally into the frame so that you’re getting the most resolution out of the area the imager gives you. Then, on projection, it’s expanded back. In the old days, the projection lens was rotated the other way to stretch it back out. It’s basically a way to get the maximum amount of information in the smallest amount of space. It happened originally because film was 3:2, and when you’re capturing a relatively square image and trying to put it on a wide screen, you had to do something to make that work. That’s how anamorphics came into being, or at least that’s a simple enough explanation. Matt, talk about the complexity of covering a circle that large?
Matt Duclos: Covering all these new formats — and as you saw (in the slides), they’re all different. You’ve got the Venice, the C700FF, the Monstro. The DXL and the Monstro are the same [size]. Then the Alexa LF. They’re all large format, all VistaVision, full frame. They are all bigger than Super 35, and yet they all have different image circle requirements. There’s really not a standard. VistaVision is not defined, at least not these days. And ‘full frame’ kind of is ‘24 x 36’ — or it’s supposed to be. When people use the term ‘full frame,’ it could still be slightly different. Here’s one specific example: Leica’s lineup of cinema lenses. You can see that the Summilux has the smallest image circle, followed by the Summicrons — very, very similar, but different image circles. So not only are sensor formats slightly different camera-to-camera, brand-to-brand, but your lenses are, too, so it’s kind of this Wild West. Obviously you jump way up with the [Leica] M’s, which were originally designed for full frame. So they work well, and then you go way beyond that if you’re on the Thalias, because those were designed for medium format, right?
Gary Adcock: They were designed for 6 x 4.5, or something like that.
Seth Emmons: We made our own format just to make it more complex. [laughs]
Gary Adcock: We’re up to the point with the LF where you’re literally in Hasselblad-size imagers… You get up to that level, you start talking 65mm across. That’s a two-and-a-quarter camera… Seth, you said, you know, 3 to 5 years, in advance, to work on this? That’s a long lead time and a lot of money to invest!
Seth Emmons: Yes to both of those things. [laughs] You kind of have to forecast, and at a certain point there’s only so much you can change in a design. Optical design comes first; that’s the first thing that you’re supposed to lock down. And if suddenly there’s a new format coming, and the image circle is not quite big enough, you kind of have to go back to the beginning. Maybe you can make some changes here and there. You really have to decide on format, and coverage, and ‘look’ first. The mechanics come after that.
Gary Adcock: I’m going to make a comment under the assumption that we’re basically discussing PL as our deliverable at this point. What do we do when we start changing mount?
Seth Emmons: It can be kind of confusing. There are optical advantages to a shallower depth, to a more narrow mount. These guys can probably discuss that in more detail than I can, but shifting mounts is a tricky thing for us, as a manufacturer, without our own camera, because we need to be able to cover the ‘least common denominator’. Not having the opportunity to create a pairing like Canon does with their cameras and lenses, where you can really optimize a system, and Panavision with their PV mount, and the cameras that you guys make, optimizing the systems together, there is an advantage that we have to kind of ‘give up’ to be more generalist.
Guy McVicker: There are a lot of flange depths to work with. Our solution was to come up with a 40mm flange depth, which is the shallowest right now for motion-picture cinema. That allows us to make adapters to step up to 52mm for PL, or cheat your 35mm flange depth. And now LPL has come out, which is 44mm. So that’s another hurdle we’ll have to go over.
Gary Adcock: Do you guys know what we mean by flange depth? Flange depth is the distance from the lens mount to the sensor, and that depth varies a lot. In PL, it’s 52mm, and it varies based on whether you’re working with still cameras or cinema cameras, or anything else. The shallower the flange depth, the more you can do with the lens, but the harder it is to actually control it.
Matt Duclos: It’s a compromise. There’s a tradeoff. With a really shallow flange depth, you can maintain more light transmission closer to the sensor. Arri’s Signature Primes, because they’re designed for the LPL mount, can keep the glass very close to the sensor and maintain pretty good light transmission, which is how they’re getting [T1.7] or [T1.8] on such a large image circle. The tradeoff there is telecentricity. The closer you get to the sensor, the more you have to bend the light coming out of the back of the lens, unless you enlarge the rear glass. So if you want to keep your image as clean as possible, especially in this day of digital cinematography, when you’re dumping your light into, we’ll call it a ‘bin,’ on the sensor, light hitting at a really steep angle wreaks havoc on image quality. So the closer you get, yes, you can maintain light, but then you are really concerned with how far you’re pushing the limits of your design.
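The mount adaptation Guy and Matt describe comes down to simple subtraction: a shallow native flange can host a deeper-flange lens through a spacer that makes up the difference. A sketch using only the flange depths quoted in the discussion (40mm for Panavision’s system, 44mm for LPL, 52mm for PL):

```python
# Flange depths (mm) quoted in the discussion above.
FLANGE_DEPTH_MM = {
    "Panavision 40mm system": 40.0,
    "LPL": 44.0,
    "PL": 52.0,
}

def adapter_thickness_mm(camera_mount: str, lens_mount: str) -> float:
    """Thickness of the spacer adapter needed to mount a deeper-flange lens
    on a shallower-flange camera. A negative result means the combination
    is impossible without optics inside the adapter."""
    return FLANGE_DEPTH_MM[lens_mount] - FLANGE_DEPTH_MM[camera_mount]

# A 40mm-flange camera takes PL lenses via a 12mm spacer:
print(adapter_thickness_mm("Panavision 40mm system", "PL"))  # 12.0
```

This is why the shallowest mount in a fleet is the most flexible one: it can adapt up to everything deeper, while the reverse direction needs glass, not just metal.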
Guy McVicker: To back up, it all kind of relates. In the world we’re in now, we see some manufacturers on the floor, and some of the work we’ve done, striving for the perfect lens: an even field of illumination, as clean and crisp as possible, and aberration-free. Then there are other lenses that are less than perfect, but they’re less than perfect on purpose. That’s one of the reasons why our vintage glass, and the stuff we’ve accumulated from other manufacturers, has played a big role in modern cinematography. The fact that a lens covers the image area is one thing, but how well it covers will define its aesthetic. For instance, in the Panavision line, we have the Sphero optics, which are vintage, and we have the System 65 lenses, which are vintage.
The Sphero optics have better coverage, or better field illumination, across the frame. So your bokeh has a truer round effect. The System 65s cover, but they don’t cover as well. There’s no hard vignette… but the bokeh is compressed in the corners, and it creates a swirl around the images. It’s called the ‘cat’s-eye effect,’ or some refer to it as the Petzval effect. If you Google it, you’ll see hundreds of images of pretty ladies standing in front of backlit trees. If you look at Hasselblad’s V series [for example], they have a lot of compression in the bokeh, which is very desirable.
If you look at the Vintage 765s — they’re not new, they’re from the ’80s — those are being rebranded by other companies on the floor [at NAB]. Those actually cover quite a bit more evenly, and don’t have as much compression. A cinematographer on a project right now has been using two sets of lenses: one set of vintage optics, which have a lot of compression in the bokeh, for flashback sequences, and a new set of Primo 70s, which don’t have that.
They have round bokeh almost all the way to the edge. He’s had to detune them so much that the vintage lenses are actually more critical in skin tone than the modern lenses, but he wanted that even field of illumination, and he wanted the present-day people to be as pretty as possible.
Gary Adcock: Did you notice he said ‘detune the lens’? That’s the level of detail you get working with Panavision and Duclos and companies like them. They can actually customize the optics for you for a specific job. That’s an extreme level of capability, better than anyone else’s, and it’s a rarity in the industry.
I know Matt has done some stuff with Leicas, and also coatings. Coatings, in general, have a lot to do with that [customizing]. That actually brings up the question of why we’re rehousing old lenses. You guys [Panavision] have an exclusionary clause: because you don’t sell anything, you can still work with glass containing some rarer elements.
A lot of the glass we’re talking about, the Hasselblad glass and the old stuff, cannot be reproduced anymore because of the levels of lead, arsenic, and other materials that prohibit manufacturing it nowadays… So let’s talk about multi-coatings, coatings and non-coatings, and how those things come into play as you work with larger and larger sensors.
Guy McVicker: Coating technologies grow leaps and bounds every decade. It’s really about what a modern multi-layer coating achieves for you, versus something older and more vintage. Anything pre-nineties from any manufacturer is going to ghost. Ghosting means ‘two pairs’ of headlights, or if you have an open bowl, or flame, you’re going to see an inverse of that light source somewhere else in the frame. That’s ghosting. If you shoot with circa-’80s or below optics, ghosting is a trap before you even put a filter on the camera.
Matt Duclos: One of the benefits of a larger format camera is that you just have more data, everything is cleaner, you’ve spaced your pixels out a little bit more. Like Gary said, everything starts to become a little more clinical, a little more boring. I think you were asking what we do with, like, the Leica Summicrons, and I’d be lying if I said I wasn’t inspired by Dan Sasaki [VP of Optical Engineering] and Panavision with their de-tuning process.
I’m basically taking a nice clean lens — those 3 to 5 years of very hard work by [CW Sonderoptic’s] engineers — and ‘undoing’ it, to a degree. With the Summicrons, and we call them the ‘classic Crons,’ we took brand new Summicron-Cs, which are designed for Super 35, very nice lenses, but somebody wanted something that was a little more ‘character rich.’
So we tinkered with the coatings, whether that was removing the coating from an element altogether, or from one surface of an element, or replacing it with a different type of coating, or changing air spacing to achieve different aberrations. There are a bunch of little things that we can tweak and de-tune, which Panavision should trademark [laughs], but it’s a huge trend because of how clean and, again, how clinical these larger formats are. Everyone wants to take the edge off.
A quick example of that: people think about polishing or ‘uncoating’ a lens. A lot of people think that means removing every coating from every element, which is just not possible. If you did that, you would not have any image; it’d be a terrible mess. So when we undertake a project like that, it’s anywhere from one surface on one element to, I think the most we’ve ever done, five surfaces on three elements.
Even on that we started to see diminishing returns, where image quality suffers too much or light transmission begins to suffer. So it’s not just polishing the front surface, or the front element, or the back lens, or back element. It’s a specific recipe for every single lens to get a very specific look.
Seth Emmons: To add to that from a design standpoint, when we started making cinema lenses, we started with the Summilux-Cs, designed for Super 35, film first, and then digital, because it was 10 years ago. At that time, what the market said people wanted was really fast, really sharp, edge-to-edge performance, well-corrected lenses, because film has motion in it, and digital sensors were still coming online.
Now when we designed the Thalias, we went in a very different direction, because that larger format does not want that. If it’s overly sharp, if it’s overly contrast-y, on a larger sensor, the bigger it gets, the less realistic the image feels. It just isn’t natural if it’s really sharp. So the format does determine a little bit of that. Some of that is done in coating, some of that’s done in ‘where’ you allow aberrations, and where you allow air gaps and all these other things. In general, larger formats don’t benefit from an overly clear, overly corrected lens.
Gary Adcock: A simple explanation of the physical frame size that we’re dealing with?
Guy McVicker: As the last three generations of cinematographers have known it, 35mm film is four-perf, vertical-pulldown. If you flip it sideways, you get the eight-perf horizontal negative, which is what we know as VistaVision: 36mm x 24mm. That’s the Alexa LF, the Sony Venice, and so is the Canon C700. They’re all a little different, but they’re all in that same realm. Then the DXL and the Monstro are a little bit wider…
Large format doesn’t require the same optical performance to yield the same quality. Smaller formats are going to get enlarged. So the magnification difference [crop factor] here is 1.5x for VistaVision, and it’s 1.73x to go to the DXL sensor. So as you cascade up, you can technically cascade down in lens performance, to yield the same quality.
Gary Adcock: I’ve always looked at it this way: resolution and sharpness matter more in a smaller format than in a larger one, because you’re not enlarging the image as much to get the information. You’re not doing the magnification from a 35mm frame. I learned this because I started on an 8 x 10 view camera. That was large format in my world, where I was physically working on an 8 x 10 piece of film, but the rules apply the same way.
Guy McVicker: When you shoot in large format, it gives it a more three-dimensional look. Film is obviously a 2D capture, but we don’t see in 2D, so this helps give some depth to the image. If the goal is to create the perspective, the large format certainly makes that a lot easier on the cinematographer.
Anamorphically, on our chip [the DXL’s], which is great, you can do a true 2x anamorphic squeeze… It’s 21% larger than the traditional film frame, or Open Gate. If you use our 1.25x-squeeze anamorphics, you can utilize much more [of the sensor] than others.
Gary Adcock: Let’s talk about apertures and depth of field. What’s the result of going to these larger formats? Now we’re covering a larger sensor area. To cover the larger area a lot of the time, the lenses are slower, instead of T1.5, there’s T2 or T2.9. What does that mean? We’re working with a larger sensor. It’s forcing a much shallower depth of field. Let’s talk about how this magnification changes the depth of field process in all of this because that’s a big part of where people mess up.
Seth Emmons: If you remember that first image of the scholarly bearded gentlemen that we had when we pulled out, and we saw all the different ‘angles of view.’ That’s all shot on one lens. Same focal length, right? But as the format changes, the angle of view changes. We have to have this conversation a lot. Not as much when it was just Super 35, because most people that are in the cinema space understand framing and composition for that.
Whenever you move to a wider format, your angle of view changes, but your focal length doesn’t change. What is tied to focal length is depth of field. A more-telephoto lens has a shallower depth of field at the same focus distance than a wide-angle lens; wide-angle lenses natively have a much deeper depth of field. So if you take that same 50mm lens to these wider sensors, all of a sudden you have an equivalent angle of view — rough calculation — of a 35mm, but with the depth of field of a 50mm.
That’s where, to me, the interesting part of large format comes in: having that compression and magnification and depth of field, but in a wider space. So, to Gary’s point of why everything was so shallow: even though you’re framing wide, you have that shallower depth of field. It doesn’t change how you light. You still light to the T-stop on the lens.
Guy McVicker: Like you said, a 50mm prime on full frame, on VistaVision, gives you the field of view of a 35mm [on Super 35], because of that crop factor. So if your image area has grown 1.5x, your depth of field has shrunk 1.5x, with the same resulting field of view. So you’re going to shoot with lenses 1.5x longer to yield the same frame.
Like Seth said, for today’s cinematographers, Super 35 is their reference. Whether they’re shooting 16mm or 65mm, they’re composing in their heads in 35mm focal lengths, because that’s the majority of the photography that we’ve created.
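Guy’s 1.5x arithmetic can be sketched directly. In this sketch, 36mm is the full-frame width from the panel, while the Super 35 width is an approximate figure, not one quoted here:

```python
# Equivalent-focal-length arithmetic behind "a 50mm frames like a 35mm".
FULL_FRAME_W_MM = 36.0   # full-frame / VistaVision width, per the panel
SUPER35_W_MM = 24.9      # approximate Super 35 width (an assumption)

def equivalent_focal(focal_mm: float, from_width_mm: float, to_width_mm: float) -> float:
    """Focal length that gives, on a sensor of width `to_width_mm`, the same
    horizontal angle of view that `focal_mm` gives on `from_width_mm`."""
    return focal_mm * (to_width_mm / from_width_mm)

# A 50mm on full frame frames like a ~35mm on Super 35,
# while keeping the 50mm's shallower depth of field at the same stop:
print(round(equivalent_focal(50, FULL_FRAME_W_MM, SUPER35_W_MM), 1))  # 34.6
```

The ratio of the two widths (36 / 24.9 ≈ 1.45) is the crop factor the panelists round to 1.5x.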
Matt Duclos: We have a lot of customers who, thanks to companies like Red, and Canon with the 5D, got into cinematography from still photography. If they were professional photographers with a 5D or a Nikon, some full-frame camera they were familiar with, they knew what a 50mm was on a still camera.
Then they made the transition into cinematography, and all of a sudden they had to adjust the way they thought and keep this crop factor in their heads. So they said, ‘Oh, I know what my 50mm looks like; I know the field of view I get.’ Then, going down to Super 35, they had to say, ‘Well, now I need something like a 35mm,’ and do that math and figure it out.
Now, like I said, it’s backwards all over again, because you’ve got cinematographers who spent their entire careers knowing Super 35, which has been the standard for 85 years. Every ASC member, other than the ones shooting IMAX, has known Super 35 their entire career. Now all of a sudden, getting to the LF, Venice and Monstro, they have to do the opposite math: I grew up knowing 50mm, and now I’m on a larger format and it’s wider, so I need to go tighter. It started spinning one direction, and now it’s gone back the other.
The whole concept of crop factor, I think, escapes a lot of people. There are a lot of people who are actually a little shy to admit that they don’t understand it. Even today, there is so, so much content on the Internet where people attempt to explain crop factor, and so many explain it incorrectly, that it’s mind-boggling. If we could just get rid of millimeters and think in field of view, everything would be fine! But that’s never going to happen. A 50mm does not become a 35mm. It’s still a 50, and you’re just looking at a different field of view.
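Matt’s ‘think in field of view’ suggestion is easy to make concrete. A sketch assuming a simple rectilinear thin-lens model and an approximate Super 35 width (neither figure is from the panel):

```python
import math

def horizontal_aov_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view of a rectilinear lens focused at infinity."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# The same 50mm lens, two different fields of view:
print(round(horizontal_aov_deg(50, 24.9), 1))  # ~28 degrees on Super 35 (approx. width)
print(round(horizontal_aov_deg(50, 36.0), 1))  # ~39.6 degrees on full frame
```

The focal length never changes; only the window the sensor cuts out of the image circle does, which is exactly the point Matt is making.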
Gary Adcock: We’re also talking about changes in aperture. When you get to a larger format, you have to work at different apertures than you’re used to. All of a sudden you start working in 65mm, and at T2 the depth of field is infinitesimal, if you can find a lens for it. If you’re used to working at T2 and you move to the Alexa 65, now you’re working at T5.6 to get the same kind of look.
Matt Duclos: Take somebody who’s used to shooting wide open on Super 35, let’s say at T1.3, just for example. Wide open, you will always have a perfect circle, so your bokeh is very good: very dreamy, perfectly circular. Then if you want to keep the same look on a different format, you maybe have to stop down to, we’ll say, T2.8, whatever the case may be. Depending on the shape of the iris, once you’ve stopped down, you could introduce that hexagon, octagon, whatever the shape is, depending on your blades.
Now, even though you’re matching your field of view, and the setup and everything else are the same, you had dreamy, beautiful bokeh before because you were wide open, but now you have bokeh with a shape, and maybe it’s not as clean as it was. It’s almost never the same; you can almost never make a perfect switch from that perfect circle. It all comes down to the lens.
Seth Emmons: For the Thalias, we designed an iris that stays round through all the stops. So if you do stop down, you’re still getting round bokeh.
Matt Duclos: But that is not common!
Guy McVicker: [Canon] K35s have a round bokeh, as do the Panavision Ultra Speeds. To my knowledge, there are only three sets that we’ve played with [with round bokeh].
Gary Adcock: We haven’t really talked about how the average aperture changes as the frame size gets bigger. When I was working in 8 x 10, wide open was like f/16. The frame size was 150x larger than Super 35. As people start moving up from DSLR technology, they move away from single-manufacturer camera-and-lens pairings and start embracing what this industry is about, which is the ability to achieve whatever you want by using all of these different kinds of tools…
Matt Duclos: That’s another thing — don’t get too caught up in the terminology. Arri is calling the Alexa ‘LF’ for ‘large format,’ but it’s not the same thing. What is medium format? These days, it’s pretty much anything bigger than full frame, and that’s medium format, which could be a Leica S system, which is a little bigger. It could be Hasselblad, which is even a little bigger. It could be Phase One.
Gary Adcock: For a long time, medium format in stills was 6 x 6. It was Rollei, it was Hasselblad, because that’s all you got. Then you got the [Mamiya] RB67, which gave you a 6 x 7 image, or you got the Olympia 6 x 4.5, which gave you a 6 x 4.5 image. We classified those all as medium format. But, just like what we’ve accepted with Super 35, this started changing.
I mean, even in Super 35, we’ve got two-perf, three-perf and some of the other variants. This was done to save cost. You go to two-perf and it’s 15% less film. Those things make a difference. There was a cost factor.
While we’re talking about these imagers, people forget to mention the fact that they did this in film all the time and we kind of ignored it. Panavision, in particular, was known for having multiple formats to fit very specific projects. It wasn’t uncommon at all. They would even cut special gates for the cameras in some cases.
Guy McVicker: Super 35 is mentioned a lot as a reference, and camera manufacturers will brand their cameras Super 35. Our first goal at Panavision is to pin them down on the actual size of the sensor. There are two manufacturers with cameras branded as having Super 35 sensors that are really something else.
The Helium 8K, Red’s sensor, is actually quite a bit larger than Super 35, and is only about 6% smaller than the Dragon 6K. So most spherical, wide-angle motion picture lenses won’t cover it. And then the new Panasonic EVA1 is just a little bit smaller than that, and is actually the exact same width as the Arri Open Gate 3.4K, which we all know is about 20% bigger than Super 35.
Gary Adcock: So we’re talking about a $7,000 camera [the Panasonic EVA1] that has an imager that big?
Guy McVicker: Which is bigger than the VariCam [imager]. The VariCam is closer to Super 35 — very, very close.
Matt Duclos: That’s what I was saying earlier. It’s the Wild West right now. Nobody is conforming [to a standard].
Gary Adcock: There are no rules. When DSLRs came into the mix, a lot of us thought that rules went out the window.
Matt Duclos: [referencing Duclos Lenses’ test charts] There’s a ton of talk right now about ‘coverage.’ Does a lens ‘cover’? Does it cover Vista? Does it cover full frame? And, inevitably, we run into the term ‘image circle.’ What is the ‘image circle’ of the lens? I really want to start driving home, with everybody that I can, especially because we’re in this Wild West of formats, that there needs to be a clear difference between ‘image circle’ and ‘illumination circle.’
So this lens (referencing Duclos Lenses’ test charts) illuminates out to the corners, but the resolution cracks there (the sharpness, not the lens). So we’d say this lens has a beautiful illumination circle, but a terrible image circle. It’s not covering the entire sensor. So yes, it’s casting light out to the corners, but the corners just aren’t acceptable, unless that is a look you are going for.
Gary Adcock: Notice the difference in the corners here? The corners are a little softer, but the targets on the edges are still sharp. The targets at the actual edges of the frame, horizontally and vertically, hold up; it’s only the corners that are out.
Matt Duclos: [referencing Duclos Lenses’ test charts] This would be a lens that has a good image circle but a poor illumination circle. So the corners are still acceptable, you can see resolution in the corners, but you have a massive vignette. So this lens was obviously not designed to cover this format at all.
Audience Member: This is wide open, though?
Matt Duclos: In terms of coverage, in general, when you stop your lens down, I shouldn’t say it will get worse, but it will change. With some lenses we see the vignette become a bit bigger, but it will fall off faster; it will darken over a shorter span. Other lenses, as you stop them down, will creep in even more.
The same thing happens with focus: that vignette can change depending on your focus distance. Same thing with zoom, where the image circle and the illumination can change dramatically. Almost any adjustment a lens has will change all of this.
So the point of this whole ‘tirade’ [laughs] is that, because it’s such a Wild West, everyone’s trying to find lenses that ‘cover’ these larger sensors. I’ll use Arri as an example because they’re not here [at the panel]. They have their Prime DNA lenses, one of which, I believe the 85mm, is a vintage Super Speed that just happens to cover their larger format. This is actually pretty indicative of what you get: it happens to cast light out to their large sensor, but you’ll have very, very soft corners…
Gary Adcock: Do you see lenses like this that people would still find acceptable?
Guy McVicker: Every day! They’ll ask, ‘Can we make one that’s softer?’
Gary Adcock: Anamorphics are really sharp, but how many times do you have people come in and chase this kind of look because it forces them into an antique feel? They want something ethereal or ephemeral. They’re looking for something that’s disjointedly unsharp on the edges, but razor sharp at the center.
Matt Duclos: Most lens manufacturers list the image circle, though, again, what we’ve been talking about is illumination. But they’ll list that image circle. I’m sure Leica does that, right?
Seth Emmons: We do it mathematically, not by coverage, but by performance.
Matt Duclos: There you go!
Gary Adcock: Which is a totally different way to do it, right?
Seth Emmons: The way we determine our image circle is that ’80% performance’ is our threshold. Even if it illuminates beyond a certain diameter, when contrast and resolution drop below 80%, we call that the image circle.
Matt Duclos: That’s a perfect example, because I may have a number, let’s say, just for example, a 36mm image circle. Let’s say that’s the safe point they’re giving, the maximum the lens can do. We still get customers who will ask, ‘But does it illuminate? Is there a hard cutoff?’ Even with a conservative, very honest spec, it may still work very well for a particular project.
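[Editor’s note: The arithmetic behind a basic coverage check is straightforward: a sensor’s corners are reached only if the lens’s image circle is at least as large as the sensor’s diagonal. A minimal sketch in Python, using illustrative numbers rather than official specifications (a nominal 24.9mm × 18.7mm Super 35 area, a 36mm × 24mm full-frame area, and the hypothetical 36mm image circle mentioned above):]

```python
import math

def sensor_diagonal(width_mm: float, height_mm: float) -> float:
    """Diagonal of the sensor's active area: the minimum image circle it needs."""
    return math.hypot(width_mm, height_mm)

def covers(image_circle_mm: float, width_mm: float, height_mm: float) -> bool:
    """True if the lens's stated image circle reaches the sensor's corners."""
    return image_circle_mm >= sensor_diagonal(width_mm, height_mm)

# Illustrative dimensions only, not manufacturer specifications.
print(round(sensor_diagonal(24.9, 18.7), 1))  # nominal Super 35 diagonal, ~31.1
print(covers(36.0, 24.9, 18.7))               # 36mm circle over Super 35: True
print(covers(36.0, 36.0, 24.0))               # 36mm circle over full frame: False
```

[As the panelists note, this only answers the geometric question; whether the corners are *acceptable* is the separate image-circle-versus-illumination-circle question.]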
Seth Emmons: The Alexa LF is a great example, because its sensor is slightly larger than the image circle of the Summicrons by our mathematical calculations. But we’ve tested all the lenses, even the 15mm, the widest one, and it illuminates acceptably, and the performance is really close. If you just look at the numbers, you’d say this number is smaller than that one, so it doesn’t work. But put it on. Show me where you can see it go from that 80% to 70% falloff. You probably won’t, because it’s not a hard vignette.
Matt Duclos: It’s really subjective, yeah, exactly. A lot of lenses will work, depending on your requirements.