Smart Object Issues


Today’s Question: When you send an image from Lightroom to Photoshop for editing, do you choose the option to “Edit in Adobe Photoshop” or “Open as Smart Object in Photoshop”? What is the practical impact of each choice?

Tim’s Quick Answer: My preference is to use the “Edit In” command, and to not open the image as a Smart Object in Photoshop. While there are some very nice benefits to the use of Smart Objects, there are also some challenges related to a layer-based workflow.

More Detail: Smart Objects in Photoshop provide for some very interesting and potentially helpful features. In the context of applying a filter effect, for example, adding that filter as a Smart Filter (the variation on a Smart Object used for filters) enables you to refine the settings for the filter effect after that filter has been applied, with no degradation in overall image quality.

When sending a RAW capture from Lightroom to Photoshop (or opening a RAW capture as a Smart Object separately in Photoshop if you’re not a Lightroom user), you are essentially embedding the RAW capture into the file you’re creating in Photoshop. That allows you to simply double-click on the Smart Object layer to bring up the Adobe Camera Raw dialog so you can make changes to the original adjustments applied to the RAW capture.

That capability can certainly be very helpful in a variety of situations. However, it also creates some potential challenges related to a layer-based workflow.

As just one simple example, let’s assume a workflow that involves some image cleanup work with an image opened as a Smart Object. You make use of the powerful Content-Aware technology with the Spot Healing Brush tool to clean up some dust spots and other blemishes in tricky areas of the photo. You apply this cleanup work on a separate image layer to maintain flexibility with a non-destructive workflow.

Later, you decide that the color isn’t quite right in the image, and you decide to refine the adjustments you applied to the original RAW capture. So you double-click on the Smart Object layer, and apply color changes via the Adobe Camera Raw dialog. The color in the image is improved, but now the color in the areas you cleaned up no longer matches the surrounding photo.

Ultimately, I think Smart Objects are an incredibly powerful feature in Photoshop. Unfortunately, for my purposes they aren’t quite “smart” enough, creating challenges for my preferred layer-based workflow. So until Smart Objects get a bit smarter, my preference is to not use Smart Objects in most cases. And therefore I simply use the “Edit In” command when sending a photo from Lightroom to Photoshop, rather than the option to open the image as a Smart Object.

Stabilization with Fast Shutter


Today’s Question: I photograph hummingbirds, and try to keep my shutter speed at 1/4000 of a second or above. I would like to know if the image stabilizer is doing any good at that speed. Normally I will just turn it off to save battery at that shutter speed, but I’m not sure if that is the right thing to do. If there is no need for the IS at that speed, what is the maximum shutter speed at which image stabilization is still effective?

Tim’s Quick Answer: In general, image stabilization won’t provide much (if any) benefit with particularly fast shutter speeds, and it could potentially cause problems. As a basic rule of thumb I would say that at shutter speeds above around 1/1000th of a second it generally makes sense to turn off image stabilization.

More Detail: That said, some photographers still prefer to keep image stabilization turned on, even at fast shutter speeds. Put simply, they worry they’ll forget to turn image stabilization back on when they need it, which could potentially be a bigger problem than having the stabilization turned on when it isn’t really needed.

It is possible for stabilization to create problems with sharpness in an image when it is used in the wrong circumstances. Essentially, the compensation that is intended to reduce motion blur can instead introduce motion blur of its own.

In some situations where you will be imparting significant movement to the camera, it is possible you would achieve a benefit from image stabilization even at very fast shutter speeds. But in general I would say that the fast shutter speed itself will provide the greatest benefit.

It is also worth remembering that image stabilization technology is generally focused on compensating for movement of the camera caused by the photographer, not movement in the frame caused by the subject moving. Assuming you are on a tripod photographing the hummingbirds, and especially considering the fast shutter speeds you’ll be using, my recommendation would be to keep image stabilization turned off for this type of photography.

As a brief aside, I’m reminded of the basic rule of thumb for minimum shutter speed required when hand-holding a lens based on focal length. When you combine that with the notion that image stabilization is probably not going to provide much (if any) benefit at those relatively fast shutter speeds, it seems reasonable to wonder how often we really need to employ image stabilization with long telephoto lenses. Perhaps it isn’t so important to spend the extra money on stabilization with a super telephoto lens. But I suppose it is nice to have in any event.
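That rule of thumb (a minimum handheld shutter speed of roughly one over the effective focal length) is easy to sketch as a quick calculation. This is only an illustration of the guideline, not a camera-specific formula, and the crop-factor handling is an assumption for the example:

```python
from fractions import Fraction

def min_handheld_shutter(focal_length_mm: float, crop_factor: float = 1.0) -> Fraction:
    """Rule-of-thumb minimum handheld shutter speed: 1 / effective focal length.

    The effective focal length is the lens focal length multiplied by the
    sensor crop factor (1.0 for full frame, roughly 1.5 for APS-C).
    """
    effective = focal_length_mm * crop_factor
    return Fraction(1, round(effective))

# A 400mm lens on a full-frame body suggests roughly 1/400 second or faster,
# already in the range where stabilization offers little extra benefit.
print(min_handheld_shutter(400))        # 1/400
print(min_handheld_shutter(400, 1.5))   # 1/600
```

With a super telephoto lens the rule alone already calls for shutter speeds fast enough that stabilization has little left to contribute.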

ISO Invariance


Today’s Question: What about the new ISO-invariant cameras? Does your answer [about optimal night exposures in yesterday’s edition of the Ask Tim Grey eNewsletter] apply to them too?

Tim’s Quick Answer: In general I would still say that increasing ISO in the camera is preferred over adjusting in post-processing, even with a sensor that has been labeled as being “ISO Invariant”.

More Detail: The term “ISO Invariant” in this context refers to a sensor for which you can achieve the same results by increasing the ISO in the camera or by under-exposing the image and then brightening it in post-processing. That doesn’t necessarily mean the result will have low noise levels, but rather that the two approaches will produce the same results.

It is important to keep in mind that, as I pointed out in the article “ISO Illustrated” in the December 2013 issue of Pixology magazine, raising the ISO setting really represents underexposing a photo (perhaps severely) and amplifying the signal recorded by the image sensor in an effort to compensate.

In other words, raising the ISO setting can be thought of as brightening a photo in much the same way that dragging the Exposure slider in Adobe Camera Raw or Lightroom will brighten the photo.

Put another way, in the context of ISO the real question is whether the camera can do a better job of amplifying the signal recorded by the sensor, or whether software in post-processing can.

In general I have found that the camera does a better job of amplifying the signal compared to using software after the capture. This makes sense considering the camera has the benefit of analog data to work with from the image sensor, rather than digital values in the RAW capture file after the capture.
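To illustrate why amplifying before quantization helps, here is a simplified numeric sketch. The sensor model is hypothetical (uniform “analog” values and simple rounding standing in for the analog-to-digital converter); it only demonstrates the quantization argument, not real camera behavior:

```python
import random

random.seed(0)
# Simulated analog sensor values for a dim scene (arbitrary units, 0-60).
analog = [random.uniform(0, 60) for _ in range(100_000)]

# In-camera ISO gain: amplify the analog signal before the ADC quantizes it.
in_camera = [round(v * 4) for v in analog]

# Post-capture brightening: quantize first, then multiply the digital values.
post = [round(v) * 4 for v in analog]

print(len(set(in_camera)))  # amplification before quantization keeps ~241 levels
print(len(set(post)))       # brightening afterwards leaves only ~61, with gaps of 4
```

The digital-brightening version can only produce multiples of four, leaving gaps between tonal values, while the analog-first version retains finer gradations.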

Some cameras perform better than others, of course, both in terms of baseline noise thresholds as well as amplification quality. What I would say in general though is that based on what I’ve seen and have been able to test, there isn’t a clear advantage to ignoring the ISO setting in the camera, even with an “ISO Invariant” sensor.

As such, my recommendation is still to expose properly in the camera, even if that involves increasing the ISO setting to achieve the desired overall settings. That still represents underexposing the photo and using ISO to brighten in the camera, but I have found that this provides a superior result in most cases.

Optimal Exposure at Night


Today’s Question: When shooting at night, is it better to shoot for a “proper” exposure at a high ISO, say 3200, or underexpose a couple stops with a lower ISO, then compensate in processing with increased exposure?

Tim’s Quick Answer: You will get the best results by achieving the brightest exposure possible without clipping highlight detail while at the same time using the minimum possible ISO setting. If you need to achieve a shorter exposure duration, generally speaking you are better off increasing the ISO setting rather than creating an underexposure.

More Detail: When you raise the ISO setting, in a way you can think of your result as being underexposed based on, for example, a shorter exposure duration. The resulting image is then brightened up through the use of amplification of the signals recorded by the image sensor.

With the various cameras I have had the opportunity to test, the results consistently show that it is better to let the camera brighten the image through a higher ISO setting than to apply brightening to the image after the capture. In other words, the in-camera amplification of the signal recorded by the image sensor yields higher quality than applying the same change in brightness with software after the capture.

So, if at all possible I would use the lowest ISO setting available to minimize noise, and create an exposure that is as bright as possible without clipping highlight detail (or only clipping the brightest areas, such as illuminated lights). If I needed a faster shutter speed (shorter exposure duration) for any reason, I would raise the ISO setting in order to achieve that goal, because this will generally provide the best final image quality compared to underexposing the scene and then brightening the image later.
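The trade described above is simple exposure arithmetic: each doubling of ISO buys one stop of brightness, which can be spent on halving the exposure duration. A minimal sketch (aperture assumed constant):

```python
def equivalent_shutter(base_shutter_s: float, base_iso: int, new_iso: int) -> float:
    """Shutter duration that keeps the same image brightness after an ISO change.

    Each doubling of ISO brightens by one stop, allowing the exposure
    duration to be halved (aperture held constant).
    """
    return base_shutter_s * base_iso / new_iso

# A 30-second exposure at ISO 100 can be shortened to just under 1 second
# (0.9375 s) at ISO 3200 -- five stops traded from shutter to ISO.
print(equivalent_shutter(30, 100, 3200))
```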

Black and White JPEG


Today’s Question: Is there any way to optimize a JPEG for a black and white photo? Since there’s no color information, can more gray tonalities be squeezed into fewer megabytes?

Tim’s Quick Answer: While it is certainly possible to produce a black and white JPEG image, this is not something I recommend due to the relatively high risk of posterization (the loss of smooth gradations) in such an image.

More Detail: JPEG images do not support high-bit data, meaning only 8 bits per channel of information are available for a JPEG image. For full-color images that translates to more than 16.7 million possible color values. For a black and white (grayscale) image, however, having only 8 bits for what is then a single channel means a maximum of only 256 shades of gray is available.

With only 256 shades of gray available, it can be very difficult to have (or maintain) smooth gradations of tonal value. For example, it is very common to see a banded appearance in a sky rather than a smooth gradation with a black and white image in the 8-bit per channel mode.

When strong adjustments are applied to an 8-bit per channel black and white image, the loss of smooth gradations is compounded. Note that the limitations of 8-bit per channel black and white images apply even if you are working with a color original. Even with a Black & White adjustment layer in Photoshop, working on an RGB image, for example, the final image can only contain up to 256 shades of gray, even though there is more information available in the source image.
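To see how a strong adjustment eats into those 256 levels, here is a simplified sketch using a basic contrast stretch. The formula is an illustrative stand-in, not Photoshop’s actual adjustment math:

```python
# Start with all 256 tonal levels an 8-bit grayscale image can hold.
levels = list(range(256))

def apply_contrast(values, factor=1.5, pivot=128):
    """Simple contrast adjustment around a mid-tone pivot, clipped to 8-bit."""
    return [min(255, max(0, round(pivot + (v - pivot) * factor))) for v in values]

adjusted = apply_contrast(levels)

# The stretch clips the extremes to pure black and white, so fewer distinct
# gray levels survive -- the resulting gaps appear as banding in gradations.
print(len(set(levels)))    # 256
print(len(set(adjusted)))  # fewer than 256
```

In a 16-bit per channel image the same adjustment has 65,536 levels per channel to draw on, so the surviving gradation remains visually smooth.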

Because of these factors, I highly recommend working only in the 16-bit per channel mode for black and white images. That, in turn, means JPEG images should generally be avoided in terms of the source image you optimize for a black and white photo. Instead, only a 16-bit per channel source image should be used as the basis of a black and white interpretation of a photo. You can then certainly save the final result (after all adjustments have been applied) as a JPEG image for purposes of sharing the photo, and still retain relatively high image quality for that final output.

Image Sizing Targets


Today’s Question: I want to enter a photo contest that asks for a 5MB image saved as a JPEG, at 300 ppi resolution. Isn’t this a strange setting? I have the photo as a 5MB TIFF now, but can’t seem to get it converted to a JPEG without losing the 5MB!

Tim’s Quick Answer: The settings you’ve provided are indeed strange, and in large part completely useless for this purpose. I would suggest first saving the image at the original pixel dimensions as a JPEG image at a Quality setting of 80 if you are using Lightroom or 8 if you are using Photoshop. Unless you’re working with a photo with extreme resolution, this will produce a file of under 5MB that you can submit for the contest.

More Detail: It amazes me how often I see submission guidelines for images that don’t provide an adequate amount of information, and that make it clear that the person writing the guidelines doesn’t know much about resolution. In many cases the primary motivation seems to be to ensure that the submitted image files aren’t too large, in an effort to prevent an overload of the server (or an excessive cost for online storage).

The 5MB size is most certainly an upper limit intended to prevent huge image files from being submitted. But with a JPEG file there is always compression applied, so the chances of ending up with a file over 5MB are pretty slim. Even at the maximum Quality setting for a JPEG image, you would need to have pixel dimensions of around 5,000 or so pixels on the long side to produce an image file of around 5MB. Plus, there’s no real advantage to that file size in this context.
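That rough size arithmetic can be sketched as follows. The bits-per-pixel figure is an assumption for illustration, since actual JPEG sizes vary considerably with image content and the Quality setting:

```python
def estimated_jpeg_mb(width_px: int, height_px: int, bits_per_pixel: float = 2.5) -> float:
    """Rough JPEG file-size estimate from pixel dimensions.

    At a high Quality setting a photographic JPEG often lands somewhere
    around 2-6 bits per pixel; 2.5 is an assumed middle-ground figure.
    """
    return width_px * height_px * bits_per_pixel / 8 / 1_000_000

# An image around 5,000 pixels on the long side comes out in the ballpark
# of 5MB at this assumed rate, so most captures will fall well under the limit.
print(round(estimated_jpeg_mb(5000, 3333), 1))  # ~5.2
```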

Instead I would submit images at either full resolution or a bit smaller if you’re concerned about producing a file that is too large. For a photo contest I generally want to have an image that is large enough to be evaluated effectively, enabling the judge to zoom in on the image, for example.

Frankly, I would ignore the pixels-per-inch resolution altogether in this case. This setting only applies when the photo is being printed, so it isn’t critical for an online photo contest submission.

Of course, it perhaps goes without saying that it is a good idea to confirm that the photo contest is being run by a reputable organization, and to check the submission guidelines carefully. If you have any doubts about these issues it may not be worthwhile to submit your photos for the contest, especially if you are sending relatively high-resolution image files.

White Balance Challenges


Today’s Question: I have adopted your Auto White Balance approach and in most cases I am pleased with my results. I do occasionally have issues and want clarification of the Lightroom white balance dropper. Should I set it on something that I think is close to 18% gray, or should I set it on a white patch in the image? One of my most difficult issues is images of my grandson, who has an olive complexion, especially in the winter. I struggle to find a white balance that satisfies me or his mother (my daughter).

Tim’s Quick Answer: The core function of the White Balance Selector tool (the eyedropper) in Lightroom or Adobe Camera Raw is to adjust the values for Temp and Tint so that the pixel you click on in the image becomes neutral gray. As such, you generally want to click on an area of the photo that should be perfectly neutral when using this tool. But of course in the real world things are a little more complicated than that.

More Detail: If your scene includes an object or area that should be absolutely neutral gray without any color cast, then you can use the White Balance Selector to quickly neutralize the overall color in your photo. Simply choose the White Balance Selector and click on the area of the photo that would be perfectly neutral. Note that the area you click on can be of any tonal value from black all the way to white. When we say “neutral gray” in this context we simply mean a shade of gray of any tonality.

Of course, in many photographic situations we don’t actually want a gray object to appear gray. Very often, for example, we specifically seek out early morning or late afternoon light in pursuit of the color cast provided by that light. During “golden hour” an object that would appear perfectly neutral gray under white lighting should most certainly not appear gray in our final image.

What to do? There are a couple of approaches I can recommend.

First, it is worth noting that having a neutral starting point can be very helpful, even if that starting point doesn’t match your final intent for the image. You could, for example, position a gray card in the frame under the same lighting as your key subject, and use that gray card as the basis of a white balance adjustment.

For situations where it isn’t practical to use a gray card during your photography, you can instead click on an area of the photo that you feel should most likely appear as a neutral gray. For a portrait this could include the white of the eye, for example. In a landscape a cloud will often provide a good area to click on.

Of course, once you’ve clicked on an area of the photo that was actually gray, you’ll likely want to apply a correction to add a bit of color to the image. If you photograph a scene under late afternoon light and employ this gray card approach, for example, you’ll then want to adjust the Temp slider toward yellow to add back the golden light.

This brings us to the real challenge of the situation. If we want to retain the color influence of the light illuminating a scene, how do we achieve accurate color without losing that color influence? This is the challenging part.

What I recommend is that you develop your eye for accurate color. Start by calibrating your monitor display to make sure the display is accurate. Then skip the White Balance Selector tool or the preset options and go right to the Temp and Tint sliders. Drag through the extremes, and gradually “settle down” the slider movement as you zero in on optimal appearance for the image. With practice this will become easier (I promise!).

Note that it can also be very helpful to apply an exaggerated increase to the Saturation value for the image while you’re working, so you can more easily see the colors that are present in the photo.

As an aside, there are specific techniques for targeting accurate skin tones in photos of people. But again, keep in mind that the color of an object is not necessarily the color we want in our final photo, such as with situations where part of the reason we captured the image was the color of the light illuminating the scene. Therefore, targeting “accurate” RGB values for specific skin tones photographed in the middle of the day won’t work well for photos captured during “golden hour”.

In other words, at the end of the day your best approach is generally going to be to use your eyes to evaluate the color for each individual photo. There are “shortcuts” to helping you get a neutral starting point that can be helpful, but ultimately the best results will come from practicing the art of fine-tuning the color in your photos.

Crop Aspect Ratio Preview


Today’s Question: When I’m trying to decide how to crop a photo in Lightroom, I’m looking for an easy way to decide if I want it 8×10, or 8×12, etc. I’ve been using the crop tool and then setting a custom crop setting then going back and looking at the photo, but it seems like there should be an easier, faster way.

Tim’s Quick Answer: It sounds like the “Crop Guide Overlays” feature in Lightroom will provide a good solution for what you’re looking for. You can choose the aspect ratios you want to compare in the Choose Aspect Ratios dialog, and then enable the crop overlay so you can compare different aspect ratios at once simply by adjusting the crop.

More Detail: To get started, go to the Develop module and select the Crop tool using the button below the Histogram display on the right panel (or by pressing “R” on the keyboard). Then choose Tools > Crop Guide Overlay > Choose Aspect Ratios from the menu. This will bring up the Choose Aspect Ratios dialog, where you can turn on the checkbox for the specific aspect ratios you’d like to compare. Then click OK to close the Choose Aspect Ratios dialog.

Next, choose Tools > Crop Guide Overlay > Aspect Ratios from the menu. This activates the option, as indicated by a checkmark icon to the left of the menu option. Initially, however, nothing will seem to have changed for the image.

To actually see the crop tool overlay and therefore compare different crop aspect ratios, you simply need to drag one of the edges or corners of the crop box on the image. When you have the mouse button down, in addition to the outer crop boundary you will see an overlay indicating the various aspect ratios you selected. This enables you to get a reasonable preview of different aspect ratios for the crop in real time.

One minor challenge with this feature is that a certain amount of translation is required if you are seeking to preview specific print sizes. For example, there isn’t an overlay option for “8×12”. Instead there is an option for “2×3 4×6”, which of course reflects the same aspect ratio as an 8×12 crop.
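That translation from print size to aspect ratio is just a matter of reducing the two dimensions by their greatest common divisor, for example:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> tuple:
    """Reduce a print size to its simplest aspect ratio, e.g. 8x12 -> 2x3."""
    d = gcd(width, height)
    return (width // d, height // d)

# Translating common print sizes into the overlay's aspect-ratio labels:
for size in [(8, 10), (8, 12), (4, 6), (5, 7)]:
    print(size, "->", aspect_ratio(*size))

# (8, 10) -> (4, 5)
# (8, 12) -> (2, 3)   the same ratio as the "2x3 4x6" overlay option
```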

If the crop tool overlay causes you to decide on a particular crop aspect ratio, you can then select that aspect ratio for the Crop tool and apply the desired crop to your photo.

White Balance with Neutral Density


Today’s Question: In Tim Grey TV Episode 23 (https://youtu.be/qZ3HDJmOuVs) you mentioned you set your white balance to Sunny. When using a neutral density filter do you always set your white balance for the conditions or do you use auto white balance often?

Tim’s Quick Answer: In this specific example, I was actually only using a white balance preset in order to test the relative behavior of different neutral density filters. In most cases I tend to employ the “Auto” setting for white balance provided I am capturing photos in the RAW capture mode.

More Detail: When testing gear for a particular behavior, it is important to isolate as many variables as possible. In the episode of Tim Grey TV referenced I was testing a couple of neutral density filters to get a sense of just how much variability there was in the neutrality of different filters. Therefore, I wanted all camera settings “locked in” to fixed settings, so that the only variable was the actual filter being attached to the lens.

Under normal circumstances (with or without the use of a neutral density filter), my personal preference is to simply employ the “Auto” setting for white balance when shooting RAW. This is based in large part on the fact that the white balance setting doesn’t actually affect capture data when you are using the RAW capture mode.

It is important to keep in mind, of course, that by using Auto white balance you are introducing potential variability in the appearance of one photo to the next. You are also potentially creating additional work for yourself in post-processing. Of course, it is worth noting that it is also very easy to synchronize the white balance setting (and other adjustments) for multiple photos at once with software tools such as Adobe Camera Raw and Lightroom.

That said, in general I find that I don’t need to synchronize the white balance setting for multiple photos all that often. I also prefer to fine-tune the overall white balance adjustments for my photos after the capture almost without exception. Based on my preferred workflow, choosing a particular white balance setting in the camera wouldn’t provide any real benefit, unless I was lucky enough to guess the perfect setting for every photo.

Degradation with Adjustments


Today’s Question: I generally make basic adjustments in Lightroom, such as Whites, Blacks, Highlights and Clarity. I then take the image into Photoshop where I make additional adjustments such as Tonal Contrast using Nik plug-ins, which I’ve used for years and love. Is making adjustments like that in both Lightroom and Photoshop (using 16-bit images) likely to have an adverse effect on the image quality? To my eye they look better and I haven’t noticed any gapping in the histogram.

Tim’s Quick Answer: The approach you describe will not cause any significant degradation in image quality. There is a theoretical disadvantage to applying multiple passes of adjustments to an image, but as long as those adjustments are relatively modest and you are working with a 16-bit per channel image, there won’t be a visible degradation in image quality.

More Detail: The core issue here is that adjustments can cause a certain degree of image degradation. Obviously the adjustment is aimed at improving the overall appearance of the image, but some degradation will occur. For example, many adjustments will reduce the smoothness of transitions of tone and color in an image. Increasing contrast or saturation will tend to have the strongest impact in this regard.

When you use multiple adjustment steps rather than a single step, there can be a compounding effect, where the final image has degraded more than if the final result had been achieved with fewer adjustments. So, for example, if you can produce the same final appearance in the photo with one adjustment rather than three, the image will exhibit better quality.

It is important to keep in mind that the differences here are generally going to be extremely minor, unless the adjustments are especially strong. In other words, with typical adjustments you wouldn’t be able to see a visual difference between two versions of an image processed with more versus fewer adjustments. It would require very detailed analysis to find any variation in pixel values under typical circumstances.
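To make the compounding effect visible, here is a deliberately exaggerated sketch in 8-bit: a darkening pass followed by a brightening pass that should cancel out, but doesn’t once the values are rounded back to integers between passes. This is a simplified model, not Photoshop’s actual adjustment math:

```python
def scale(values, factor):
    """Brightness scaling, rounded back to 8-bit integers after the pass."""
    return [min(255, max(0, round(v * factor))) for v in values]

levels = list(range(256))

# Two opposing passes that should cancel out: darken by half, then double.
round_trip = scale(scale(levels, 0.5), 2.0)

# A single pass with the combined factor (0.5 * 2.0 = 1.0) is a no-op.
single = scale(levels, 1.0)

print(len(set(single)))      # 256 -- nothing lost
print(len(set(round_trip)))  # 129 -- roughly half the levels merged, leaving gaps
```

In a 16-bit per channel workflow the same round trip has 65,536 levels to absorb the rounding, which is why the losses remain invisible in practice.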

In addition, keep in mind that in many cases when you are applying adjustments in multiple steps, you aren’t actually incurring the same cumulative degradation in image quality. For example, in Lightroom the adjustments you apply don’t actually alter pixel values until you export or otherwise share the photo. In other words, no matter how many times you move an individual slider in Lightroom’s Develop module, the result is as though you only moved the slider once to its final position.

Especially when you are making use of specialized tools (such as a plug-in as described in today’s question), I wouldn’t hesitate at all to employ multiple adjustment tools in my workflow. When using a 16-bit per channel workflow, you don’t need to have any real concern about image degradation using a workflow such as this. That said, for 8-bit per channel images, these concerns can be very real indeed, especially for black and white photos.