Clarifying Lens Compression

Today’s Question: Can you amplify regarding telephoto “compression” in an image such as a row of telephone poles that look like they’re only inches apart but are in fact many yards apart when shot with a long lens? I have always heard this referred to as “telephoto compression”.

Tim’s Quick Answer: The compression effect we experience when using a telephoto lens is really caused by being farther from the scene we are photographing when using a lens with a long focal length. The longer lens is effectively cropping the scene, but the perspective change is caused by a change in position, not by the lens itself.

More Detail: I should hasten to point out that the discussion of “telephoto compression” is really an issue of semantics. When you change two variables at once, it is perhaps inevitable that some people will attribute the result to the first change and others to the second.

Those who suggest it is the longer lens causing the compression point out that to retain the same framing when you move farther away from a scene, you must use a lens with a longer focal length. That is true, but it relates to the field of view, or the “cropping” of the scene.

The actual change in perspective that we refer to as “compression” of the scene is a result of changing your position relative to the scene.

Let’s assume I am recording a video of a person standing on the New Jersey side of the Hudson River, with the New York skyline in the background. I can move the camera closer to or farther away from the person, and their size within the frame will change rather significantly. The size of the New York skyline, however, won’t appear to change at all, because the relative change in distance is so much less.

It is that movement closer to or farther away from the scene that causes the change in perspective, based on changes in the relative distance to different objects within the scene. The long lens is simply cropping the scene to a particular field of view.
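
For readers who like to see the numbers, here is a minimal Python sketch of the Hudson River example (the subject distances and heights are made-up values purely for illustration). It computes the apparent (angular) size of a nearby person and a distant skyline before and after moving the camera back two meters:

```python
import math

def apparent_size_deg(object_height_m, distance_m):
    """Angular size of an object, in degrees, as seen from a given distance."""
    return math.degrees(2 * math.atan(object_height_m / (2 * distance_m)))

# Hypothetical values: a person 3 m from the camera, a skyline about 2,000 m away.
person_height = 1.8      # meters
skyline_height = 300.0   # meters (rough height of the tallest buildings)

for camera_shift in (0.0, 2.0):  # move the camera 2 m farther back
    person = apparent_size_deg(person_height, 3.0 + camera_shift)
    skyline = apparent_size_deg(skyline_height, 2000.0 + camera_shift)
    print(f"shift {camera_shift} m: person {person:.1f} deg, skyline {skyline:.2f} deg")

# Moving back 2 m shrinks the person's apparent size by roughly 40%,
# while the skyline's apparent size changes by only about 0.1%.
```

That is the perspective change: the relative distances to near and far objects change by very different proportions when you move, regardless of which lens is mounted.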

Put another way, if you move closer to or farther away from a scene, the perspective will change. If you stay in one position and use lenses of different focal lengths, the perspective won’t change (ignoring distortion caused by extremely wide-angle lenses, of course); only the cropping of the scene will change.

You can see a visual demonstration of the change in perspective as you change distance relative to a scene in my “Lens Compression Myth” video, which you can find on the Tim Grey TV channel on YouTube here:

https://youtu.be/wFqukptuwmg

Right Software for Raw Processing

Today’s Question: I recently was told that RAW conversion should be done through the manufacturer’s software rather than from a third party like Adobe. The reason being that Adobe has to reverse engineer the RAW conversion and data is lost while the manufacturer’s software is optimized for their RAW format. How big an advantage is this?

Tim’s Quick Answer: With the advances in the latest software for processing raw captures, I would say there is no advantage in terms of image quality when it comes to software from the camera manufacturer versus third-party software developers such as Adobe. In my mind the only reason to use the software from the manufacturer of your camera to process raw captures is to take advantage of special camera features that third-party software applications don’t support.

More Detail: In theory there are some advantages to using the software from the manufacturer of your camera to process your raw captures. In reality, the benefits are mostly quite minimal.

The most important aspect of raw processing is creating an image with optimal quality, with relatively accurate tone and color, and a pleasing look. The top software tools available today all provide very good image quality, and the baseline interpretation of tone and color is generally good. And if the tone and color aren’t optimal, you have a variety of adjustment tools available to optimize the appearance of the image.

Therefore, in my mind the only real reason that would clearly favor the software from the manufacturer of your camera is to take advantage of special camera features you aren’t able to access with third-party raw-processing software. For example, some cameras include a dust removal feature, where dust and other blemishes are detected on the sensor, and that information can be used in post-processing to automatically remove the blemishes from the image.

It is true that a raw capture will contain proprietary information that only the camera manufacturer can interpret. However, in general the lack of access to that information won’t create significant problems. For example, in-camera settings that alter the appearance of a photo won’t affect a raw capture, but much of that information may be available in the special metadata that can only be interpreted by the software from your camera’s manufacturer. Because of this issue, the initial interpretation of the raw capture may have higher fidelity with the manufacturer’s software. But you could still achieve an equally good result with adjustments using other software.

Ultimately, I feel you should make a decision about raw-processing software based on both image quality and workflow efficiency. In my experience the best results don’t require the software from the manufacturer of your camera, and in fact you can achieve excellent results with Adobe’s raw processing tools (such as Lightroom and Camera Raw).

Crop and Resize

Today’s Question: I have an image that’s 19.2 x 12.8 inches at 300 ppi [pixels per inch] and I would like to crop it to be 12.25 inches square at 300 ppi. However, when I attempt to do this using the Crop tool in Photoshop it seems that the image is only cropped in the long direction and simply resized in the short direction. When I do this using the crop tool in Canon’s Digital Photo Professional (DPP) software it works perfectly. Is there something I can do to get the Crop tool in Photoshop to behave like the crop tool in DPP?

Tim’s Quick Answer: You can crop and resize in one step in Photoshop by setting values for Width, Height, and Resolution on the Options bar for the Crop tool. Then set the crop box to the intended area of the image, which in this case could mean matching the crop box to the full short side of the image if you only want to trim the long side to achieve a square crop.

More Detail: There are two basic options when it comes to how you crop an image in Photoshop. The first option is to simply crop, which involves trimming away portions of the image outside a crop box you can adjust. The second option is to crop and resize the image to specific output dimensions in a single step.

If you are working on your master image, you generally would not want to resize as part of the cropping process. However, you might still want to crop to a specific aspect ratio, such as cropping to create a square image. To crop at a specific aspect ratio without resizing the image, you can enter values for Width and Height on the Options bar for the Crop tool, without setting a resolution.

First, make sure the popup toward the left end of the Options bar (just to the right of the Crop tool presets popup) is set to “Ratio”. You can then enter values for Width and Height (the next two fields to the right of the popup). For example, to crop to a square image you can enter “1” for both Width and Height, leaving the Resolution field blank. Note that the first text box after the popup is the Width value, and the second text box is the Height value. You can then set the crop box as desired based on how you want to trim the image, including the option to have that crop box include the full short side of the image.

If you want to resize as part of the cropping process, you will still want to enter values for the Width and Height fields (such as in inches or centimeters), but then also set a value for Resolution. This would be a common workflow if you were working with a copy of the master image (not the master image itself) and you are preparing that copy to be printed at a specific output size.

First, instead of using the Ratio option for the popup toward the left end of the Options bar, select “W x H x Resolution” from the popup. You can then enter values for the dimensions you want to crop and resize the image to.

So, with the example in today’s question you would set both the Width and Height to “12.25 in” to set a square crop (since the two values are the same). However, the image will only actually be resized to 12.25 inches on each side if you also set a value for Resolution. In this case the intended output resolution is 300 pixels per inch (ppi), so you could enter “300” in the Resolution field.

You can then adjust the actual crop box on the image to trim away any portion of the image you want to exclude. You could of course also have the crop box go all the way to the edges on the short side of the image if you only want to crop the long edge. When you apply the crop, the image will be trimmed based on the positioning of the crop box, and also resized to the exact dimensions you specified on the Options bar.
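
If you prefer to script this kind of crop-and-resize outside of Photoshop, here is a minimal Python sketch using the Pillow library (the file names are hypothetical examples). It crops a centered square using the full short side of the image and resamples to 12.25 inches at 300 ppi, which works out to 3,675 pixels per side:

```python
from PIL import Image  # Pillow

TARGET_INCHES = 12.25
PPI = 300
target_px = round(TARGET_INCHES * PPI)  # 12.25 in x 300 ppi = 3675 pixels

img = Image.open("master.tif")  # hypothetical file name
w, h = img.size

# Crop a centered square that uses the full short side of the image.
side = min(w, h)
left = (w - side) // 2
top = (h - side) // 2
square = img.crop((left, top, left + side, top + side))

# Resample to the exact output dimensions and record the print resolution.
out = square.resize((target_px, target_px), Image.LANCZOS)
out.save("print_12x12.tif", dpi=(PPI, PPI))
```

The point is the same as with the Crop tool: the crop determines which pixels are kept, and the resize (with the resolution value) determines the final pixel dimensions for output.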

Cleaner Panels in Lightroom

Today’s Question: In one of your recent Lightroom [Classic] presentations I noticed that when you were using one of the sections of the right panel in the Develop module, only one section was open at a time. When you opened another section, the section you were previously using would close. How can I get Lightroom to behave this way for me? Right now all of the panel sections are open all the time, and the only way I’ve figured out to have only one section open is to manually close all of the others.

Tim’s Quick Answer: The option you’re referring to is called “Solo Mode”, which can be enabled by right-clicking on one of the headings on the applicable panel and choosing “Solo Mode” from the popup menu. Note that this feature can be enabled or disabled individually for the left and right panels in each module of Lightroom Classic.

More Detail: The individual sections of all panels within Lightroom can be expanded and collapsed as needed. So you could collapse all sections to reduce clutter, and only expand those you are actively working with. To collapse or expand a given section of a panel, simply click on the heading (title) for that section. For example, clicking the “Keyword List” heading on the right panel in the Library module will expand or collapse that section.

With multiple panel sections expanded at a given time, you may need to scroll down a bit to get to the particular control you are looking for. This can also cause a bit of visual clutter, making it a little more difficult to find the specific option you’re looking for. Using Solo Mode can help in this regard.

When Solo Mode is enabled for a panel, only one section of that panel will be expanded at a time. When you click on the heading for a section that is collapsed, it will expand, and the section that had been expanded will collapse. So each time you click on the heading for a section you want to work in, that section will be the only one that is expanded.

To enable Solo Mode for a given panel, simply right-click on one of the headings on the panel for which you want to enable Solo Mode. Then choose “Solo Mode” from the popup menu that appears.

Note that the Solo Mode setting is independent for each panel (left and right) in each module (Library, Develop, Map, etc.). So, for example, you could enable Solo Mode for the right panel in the Develop module, but leave Solo Mode turned off for the left panel. This would enable you to have multiple sections open at the same time on the left panel, while only one section would be expanded at a time on the right panel.

Output Resolution at Capture

Today’s Question: How do I shoot to guarantee I am getting a 300 ppi resolution? I have a Sony a7R III.

Tim’s Quick Answer: You don’t actually establish an output resolution in the camera, since at that stage of your workflow the output resolution doesn’t matter. All you really care about is making sure that in general you are capturing the maximum resolution so you have the most flexibility in terms of final output.

More Detail: I’ve been teaching about photography and digital imaging in various ways for about two decades now, and the subject of resolution continues to be one that many photographers are confused by. That is absolutely understandable, considering that resolution is a factor in a variety of contexts.

First off, we have the capture resolution, which basically means how many pixels the image sensor is capturing information for. This is generally described as the number of megapixels for the sensor, or the millions of pixels being captured.

Another type of resolution is essentially the density of information. In other words, it describes how much information you need in order to produce output of a particular size with good quality. A monitor display requires far fewer pixels than a high-quality print, for example.

With digital displays, you can generally refer simply to the number of pixels rather than a pixels-per-inch (ppi) resolution value. So the only time the ppi value really comes into play is when you are printing.

You can change the output resolution to any value based on how the image is being printed. Ultimately, all that really matters is that you are providing enough pixels to produce output at the intended print size. That either means having a camera with an adequate resolution, or using software to enlarge the image by adding pixels through a process referred to as “interpolation”.

The appropriate output resolution (ppi) will be based on the specific printer being used to produce the print. In most cases an output resolution of 300 ppi will produce excellent results, though a higher ppi resolution may be helpful in some cases. But the bottom line is that you aren’t really able to alter the final output resolution at the time of capture.
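
To make the arithmetic concrete, here is a minimal Python sketch (the print sizes are arbitrary examples) showing how many pixels a given print size requires at 300 ppi, which you can compare against your camera’s sensor resolution:

```python
def pixels_needed(width_in, height_in, ppi=300):
    """Pixels (and megapixels) needed to print at a given size and resolution."""
    w_px = round(width_in * ppi)
    h_px = round(height_in * ppi)
    return w_px, h_px, (w_px * h_px) / 1_000_000

for size in [(8, 10), (12, 18), (16, 24)]:
    w_px, h_px, mp = pixels_needed(*size)
    print(f"{size[0]}x{size[1]} in at 300 ppi -> {w_px}x{h_px} px ({mp:.1f} MP)")

# A 42-megapixel sensor (such as the Sony a7R III's 7952x5304 capture)
# comfortably covers a 16x24-inch print at 300 ppi (4800x7200 = about 34.6 MP).
```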

When you’re capturing a photo, the sensor resolution determines how much information is being captured, which in turn determines how large a print you’re able to produce. In other words, the only thing you can really do to ensure the best capture in terms of final output size is to buy a camera with a relatively high resolution, and to make sure you’re using the full-resolution setting for the camera when capturing photos.

Negative Texture

Today’s Question: After seeing your video about the new Texture adjustment in Lightroom [and Camera Raw], I’ve played around with it a bit. I see that like Clarity it is possible to use a negative value for Texture. Is there ever a situation where you would actually want to use a negative value for this adjustment?

Tim’s Quick Answer: You can use a negative value for the new Texture adjustment anytime you want to reduce the appearance of fine texture in a photo. The most common scenario would probably be reducing texture for a portrait of a person, for example. But there may be other types of images where fine texture serves as more of a distraction than a benefit to the image.

More Detail: While many photographers (myself included) will be inclined to only use the new Texture adjustment in Lightroom and Camera Raw to enhance texture, it can also be used very effectively to reduce texture. In fact, that is the reason the adjustment was originally created.

In some cases you may find that significant fine detail in an image can be something of a distraction. This is certainly true for closeup photos of people, but the same can be true of other images as well. Until the latest update, if you wanted to reduce the appearance of texture in an image you could use a negative value for the Clarity adjustment in Lightroom or Camera Raw, but that didn’t provide quite the same effect. Also, in some cases you might want to enhance midtone contrast while also toning down fine detail.

Now that Lightroom and Camera Raw include a Texture adjustment, there is a solution. As noted in my video covering the differences between Texture and Clarity (among other adjustments), one of the key differences between the Texture adjustment and the Clarity adjustment is the scale at which they operate. Texture operates at a very fine scale, and Clarity operates at a larger scale.

So, if you want to tone down the very fine textures in an image, you can use a negative value for Texture. At the same time, you might want to enhance midtone contrast at a larger scale, and so you could use a positive value for Clarity.

In many respects, you can think of the Texture, Clarity, and Dehaze sliders as all providing options to enhance or reduce the appearance of texture in an image. The difference is that Texture operates at a very small scale, Clarity operates at a “medium” scale, and Dehaze operates at a relatively large scale.

So, depending on whether you want to enhance or tone down texture, detail, and contrast at various scales in an image, you can use these various controls with positive or negative values.

Note that you can see these adjustments compared in the context of enhancing detail in a video published on my Tim Grey TV channel on YouTube here:

https://youtu.be/H6TMlwYTZd8

Cropped Sensor Changes Image

Today’s Question: A 105mm lens on a full frame camera in FX mode has the same field of view as a 70mm lens on the same camera in DX mode. However, images won’t be the same. The 105mm lens image would have more apparent compression of distance than the 70mm lens because of the real difference in focal length, and at the same aperture, the 105mm lens would have a shallower depth of field than the 70mm lens.

Tim’s Quick Answer: The compression of the scene is only a factor if the photographer changes position. From the same position the perspective of the full-frame versus cropped-sensor image will be the same. Only if you back away when using the cropped sensor will the perspective of the image change. And there will be a reduction of depth of field when a longer focal length lens is used, regardless of in-camera cropping.

More Detail: Today’s question is a follow-up to one about the option to crop in the camera on some full-frame digital SLR cameras. These cameras essentially let you choose between capturing the full frame, or cropping to only capture a portion of the image circle, matching what you would have achieved with a smaller image sensor using the same lens.

When your camera provides an option to crop the image in-camera to produce a smaller field of view, or you are using a camera that has a sensor smaller than a full-frame sensor, the images you capture will reflect a narrower field of view than you would achieve with the same lens on a full frame camera.

If you do not change your position or adjust the camera settings, you are literally just cropping the image. In other words, comparing a full-frame capture to an in-camera crop (or a smaller sensor) with the same lens from the same position, everything is the same except that the image is cropped.

Note that cropping the image circle doesn’t alter depth of field in the way that changing the lens focal length would. Factors such as aperture, focal length, and camera-to-subject distance affect depth of field. But if the same focal length lens is used on the same camera, and you are merely cropping the image in the camera (such as with the FX versus DX option on some Nikon cameras), the depth of field will not be altered.
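
For those who want to see the effect in numbers, here is a rough Python sketch using the standard hyperfocal-distance approximation for depth of field (the aperture, subject distance, and circle of confusion are assumed example values, not something specific to today’s question). It illustrates the point that a 105mm lens at the same aperture and subject distance has a shallower depth of field than a 70mm lens:

```python
def depth_of_field_m(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Approximate total depth of field (meters) using the hyperfocal-distance formulas."""
    f = focal_mm / 1000.0   # focal length in meters
    c = coc_mm / 1000.0     # circle of confusion in meters (0.03 mm is a common full-frame value)
    s = subject_m
    H = f * f / (f_number * c) + f               # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return far - near

for focal in (70, 105):
    dof = depth_of_field_m(focal, f_number=4, subject_m=10)
    print(f"{focal}mm at f/4, subject at 10 m: total DOF ~ {dof:.2f} m")

# The 105mm lens yields less than half the depth of field of the 70mm lens
# at the same aperture and distance (DOF scales roughly with 1/f^2).
```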

The perspective change referred to in today’s question is not really caused by the focal length of the lens, but rather by the change in position by the photographer. If you stay in the same position, the perspective will not change. But, of course, if you are getting the result of a longer focal length in terms of field of view, you would need to back up if you wanted to maintain the same framing of the scene. That would result in a change of perspective.

You can see an example of the effect of your position versus the focal length of the lens in an episode of Tim Grey TV on my YouTube channel here:

https://youtu.be/wFqukptuwmg

“Raw” Capture on Mobile

Today’s Question: I use Lightroom Classic on my laptop. I downloaded the Lightroom app on my iPhone in order to use the Lightroom camera because it can create DNG files. Is there any working relationship between the Lightroom Mobile app and Lightroom Classic? What is your opinion about the Lightroom Mobile camera versus the iPhone camera app?

Tim’s Quick Answer: Using the Lightroom mobile app for photography on your mobile device can help improve overall quality, since the images can be captured in the Adobe DNG format (rather than JPEG capture, for example). Photos captured within the Lightroom mobile app will automatically synchronize to Lightroom Classic, as long as you’ve enabled that synchronization.

More Detail: The Lightroom mobile app works with both Lightroom Classic and the cloud-based Lightroom CC. With the cloud-based version of Lightroom all photos in your catalog are synchronized to the cloud so they are available on all of your devices or through a web browser. With Lightroom Classic only photos you add to collections with synchronization enabled will be synchronized to the cloud.

As long as you have enabled synchronization in Lightroom Classic, the photos you capture with the Lightroom mobile app will appear in your catalog as soon as they are synchronized. The device (your iPhone in this case) will essentially appear as a separate storage location, much like a hard drive. For example, if you have all of your photos on an external hard drive, you’ll see that drive listed as a header above all of the folders on that drive in the Folders section of the left panel in the Library module.

Once you have synchronized photos from the Lightroom app on a mobile device, you’ll see a header for that device, with a folder called “Imported Photos”. That folder will contain all images captured with the Lightroom Mobile app, and of course you could move those photos to a different storage location within Lightroom Classic.

The result is that using the Lightroom mobile app to capture photos on your mobile device can provide better image quality as well as a more streamlined workflow.

Lenses in Carry-on Bags

Today’s Question: Do you pack your 150-600mm lens into your carry-on bag for flights? Or do you put larger lenses in checked luggage?

Tim’s Quick Answer: Yes, so far I have always carried my 150-600mm lens in a carry-on bag for all flights. As a general rule I don’t put any cameras or lenses into a checked bag. I often do, however, put a tripod and various camera accessories into a checked bag.

More Detail: Like most photographers, I’m a bit nervous about putting my important (and sometimes expensive) photography gear in checked luggage for flights. I’m concerned that the luggage will get lost or stolen, or that the gear will get damaged in transit. So I’d much rather have all of my photography (and computer) gear with me in a carry-on bag.

That can certainly add up to a lot of gear and a heavy bag. For example, the lens in question (https://timgrey.me/150600) weighs 4.4 pounds. But I prefer to keep my important gear with me rather than putting it in a checked bag.

Because I generally want to have a bag that works when I’m on the go once I reach my destination, my carry-on is also my camera bag. I prefer to use a backpack as my carry-on, and so I need a bag that strikes a good balance between providing lots of storage and still being small enough to use as a carry-on bag.

I generally use the Lowepro Fastpack BP 250 AW II, which you can find here:

https://timgrey.me/fastpack250

Note that today’s question was a follow-up to my recent webinar on “Which Lenses Do You Bring?”. If you missed the live presentation, a recording of the full webinar can be found on my Tim Grey TV channel on YouTube here:

https://youtu.be/d1NFXqL__fw

Lenses for Cropped Sensors

Today’s Question: I see some lenses being promoted as having been designed for cameras with “cropped” sensors, often to provide a wider field of view. But can these lenses still be used with a full-frame camera?

Tim’s Quick Answer: No, lenses that are designed for cameras with “cropped” sensors can’t really be used on full-frame cameras, since the image circle projected by these lenses is not large enough to cover a full-frame sensor.

More Detail: Lenses are obviously designed for a specific camera system, which generally just means a lens will be compatible with the mount for that camera. So a lens might be designed to mount on a Canon versus a Nikon digital SLR, for example, or for a specific mirrorless camera system mount.

With some camera systems, however, different bodies might use the same lens mount but have different sensor sizes. A common example would be various digital SLR cameras that essentially evolved from 35mm film cameras. Some of these digital SLRs have a sensor that is the same size as a frame of 35mm film, which are generally referred to as “full frame” models. Others have a smaller sensor, which are often referred to as having “cropped” sensors because the smaller sensor is cropping a smaller area of the image circle.

Because a “cropped” sensor is capturing a smaller area of the image circle projected by the lens compared to a full-frame sensor, the photos captured with a cropped sensor have a narrower field of view than the same lens would have provided on a full-frame camera. For example, a 100mm lens on a camera with a 1.6X cropping factor due to a smaller sensor would provide a field of view equivalent to a 160mm lens on a full-frame camera.

So, cropped sensors enable our long lenses to provide the field of view of a longer lens. But those sensors also mean you are missing out on the capabilities of a wide-angle lens. For example, a 16mm wide-angle lens on a cropped sensor camera might give you the field of view comparable to a lens with about a 26mm focal length. That can be a big shortcoming.

To compensate for this issue, many lens manufacturers have started designing lenses specifically for cropped sensors. For example, a 10-24mm lens would make up for the loss of wide-angle capabilities, providing an effective range of 16mm to 38mm compared to a full-frame setup.
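
As a quick illustration of the crop-factor arithmetic behind these equivalents, here is a minimal Python sketch (assuming a 1.6x factor, as with Canon-style APS-C sensors; Nikon DX is roughly 1.5x):

```python
CROP_FACTOR = 1.6  # Canon-style APS-C; Nikon DX is about 1.5

def full_frame_equivalent(focal_mm, crop_factor=CROP_FACTOR):
    """Full-frame-equivalent focal length for a lens used on a cropped sensor."""
    return focal_mm * crop_factor

for focal in (10, 16, 24, 100):
    print(f"{focal}mm on a {CROP_FACTOR}x crop sensor ~ "
          f"{full_frame_equivalent(focal):.0f}mm full-frame equivalent")

# 10mm -> 16mm, 16mm -> 26mm, 24mm -> 38mm, 100mm -> 160mm,
# matching the examples mentioned above.
```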

However, these specialty lenses project an image circle that is smaller than a “normal” lens designed for a full-frame sensor. That means if you used such a lens on a full-frame camera, the edges of the photo would be dark (as in basically black) because the image circle would not cover the full size of the sensor. Therefore, it is important to be aware of not only the lens mount a lens is compatible with, but also the sensor size that is supported.