Eyeglass Coating


Today’s Question: I recently got a new pair of eyeglasses for computer use. The optician suggested a coating that reduces or eliminates the blue cast from computer monitors. The sample lens did make things appear warmer. Is this a good option? My monitor is calibrated, but if I get the coating should I recalibrate to compensate?

Tim’s Quick Answer: My personal preference would be not to use eyeglasses with this type of coating when making color corrections based on a monitor display. Alternatively, you could attempt to calibrate the display to compensate for the color shift your eyeglasses produce.

More Detail: I suspect your optician was thinking more about the overall viewing experience and perhaps eye strain, rather than color accuracy. Naturally, if the photos on your display look warmer (more yellow) than they actually are, you’re going to have a tendency to shift the color balance to a value that is too cool (blue).

You could, of course, simply remove the eyeglasses when you need to make a critical decision about color, if you prefer to have the effect of the coating at other times. Another option would be to calibrate to a different white point target for your display. The idea would be to make the display appear cooler (more blue) with a target value that offsets the effect of your eyeglasses.

There are two challenges involved with that. First, it will take some trial and error to figure out what target value will provide a good result. Second, the more entry-level display calibration tools don’t allow you to customize the white point target you’ll calibrate to.

If you prefer to calibrate to compensate for the effect of your eyeglasses, I would recommend the X-Rite i1Display Pro (http://timgrey.me/i1prodis), which provides you with the ability to set a custom white point target, among other advanced controls.

sRGB for Digital Sharing


Today’s Question: I saw your recommendation to convert photos to the sRGB color space for digital sharing such as slideshows. How do I do that in Lightroom?

Tim’s Quick Answer: In Lightroom you don’t actually convert an existing image to the sRGB color space, but rather apply that conversion when exporting an image, in the process creating a copy of the image to be used for sharing.

More Detail: Converting to the sRGB color space is a strategy employed to help ensure the most accurate appearance of your photos in situations where color management isn’t being employed. The idea is that because the sRGB color space was originally designed to encompass the color range of a typical monitor display, it provides a good foundation for colors that will be shared digitally.

In applications such as Photoshop you can open a photo, convert it to the sRGB color space, and re-save the image, thus converting your master file to the sRGB color space. In Lightroom that option doesn’t exist, so instead you convert to sRGB when exporting a photo for sharing.

So, for example, you could select the photo (or photos) you want to share digitally, and click the Export button at the bottom of the left panel to bring up the Export dialog. In the File Settings section of the Export dialog you can choose the file type (such as a JPEG image for many types of digital sharing), and then choose sRGB as the color space from the Color Space popup. When you perform the export, new copies of the selected photos will be created, having been converted to the sRGB color space in the process.

Removing Duplicates


Today’s Question: I recently bought software (Duplicate Cleaner Pro) for checking for duplicate images, and found an abundance on several of my external drives. The software gives me the option of getting rid of all the duplicates outside of Lightroom, but my question is, how can I do this in a way that lets LR know what’s being deleted? Any other tips you might have for deleting duplicate images safely?

Tim’s Quick Answer: My recommendation is to first make sure there are no missing photos in your Lightroom catalog. Then remove duplicates from your hard drive, and remove the resulting missing photos from Lightroom.

More Detail: Lightroom unfortunately does not have a built-in feature for locating and removing duplicate images. Therefore, a third-party utility is necessary for this task. However, you certainly want to coordinate that task to keep your photos properly organized in Lightroom.

The first step is to make sure that you don’t have any missing photos in your Lightroom catalog, so you can use the “missing” status later to synchronize the removal of duplicates. Start by going to the Library module in Lightroom, and choosing Library > Find All Missing Photos from the menu. If there are any missing photos, they will be presented in a “Missing Photographs” collection under the Catalog heading on the left panel. Resolve those missing photos by reconnecting them to the source files, or by removing them from your Lightroom catalog.

Once you no longer have any missing photos in your Lightroom catalog, you can employ software to search for and remove duplicate photos on your hard drive. Once those duplicates are removed, you then want to update your Lightroom catalog to reflect the changes.
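As an aside, the core technique such utilities rely on is straightforward: group files by a hash of their contents, and flag any group with more than one member. Here is a minimal sketch in Python (a generic illustration of the approach, not how Duplicate Cleaner Pro specifically works):

```python
import hashlib
import os

def find_duplicates(root):
    """Return groups of file paths whose contents are byte-identical."""
    by_hash = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Hash the full file contents; identical files share a digest.
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            by_hash.setdefault(digest, []).append(path)
    # Only groups with more than one path represent duplicates.
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Note that this sketch only reports duplicates rather than deleting them, which is the safer default: you would review each group and decide which copy to keep before removing anything.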

Because you know there were no photos missing from your Lightroom catalog when you initiated this process, you can now use that “missing” status to remove the duplicates that had been deleted outside of Lightroom. First, choose the Library > Find All Missing Photos command once again. Then, once you are in the “Missing Photographs” collection in the Catalog section of the left panel, select all of the missing photos by choosing Edit > Select All from the menu. Finally, choose Photo > Remove Photos from the menu.

At this point, all of the duplicate photos that had been removed from your hard drive will also be removed from your Lightroom catalog.

Multiple Drives


Today’s Question: When I started in Lightroom a few years ago, I put my photos on an external (4TB) hard drive. That is where my catalogue resides as well. The drive is close to full, so I need to switch to a fresh external drive. What do I do about the catalogue? If I copy it to the new drive and work from it, when I plug it into Lightroom most of the photos won’t be available if I don’t have the first drive hooked up. Is there a real downside to starting a new catalogue on the new drive, for the new photos going forward? Then I would only need that external drive attached. If I want to work on the other photos, I can hook up the other drive. Am I missing something key?

Tim’s Quick Answer: I would actually recommend putting the Lightroom catalog onto the internal hard drive of your computer, both for improved performance and the ability to access your catalog regardless of whether the drives with your photos are connected. If possible I also recommend using a single hard drive for storing all of your photos.

More Detail: My preference is to use a single Lightroom catalog to manage all of my photos. With this approach, you don’t have to think about which catalog you need to open in Lightroom in order to locate a particular image. Instead you simply open your only Lightroom catalog, knowing that all of your photos are being managed by that catalog.

Whenever possible, I also prefer to have the Lightroom catalog stored on the internal hard drive of your computer. This enables you to open your catalog and even apply updates to the metadata of your photos, for example, without needing to connect the external hard drive that contains your photos. This approach also generally provides improved performance in Lightroom, since an internal hard drive will typically be faster than an external hard drive (though not always, of course).

If at all possible, I also prefer to have all photos stored on a single external hard drive, simply to keep the overall folder structure on a single drive. In the context of Lightroom this approach also makes it a little easier to find a particular folder, since you’ll only have one list of folders in the Folders section on the left panel in the Library module.

RAW Support Workaround


Today’s Question: I have Adobe Photoshop CS5. It does not support the ARW files from my Sony Alpha a6000. I want to start my editing of a photo in Photoshop rather than Lightroom. Can the files be changed to DNG by Photoshop instead of taking them into Lightroom?

Tim’s Quick Answer: Yes, you can convert your RAW captures to the Adobe DNG file format, and then open those DNG files directly in Photoshop via Adobe Camera Raw.

More Detail: For photographers who have chosen not to sign up for an Adobe Creative Cloud subscription or to use Adobe Lightroom as the central tool in their workflow, new or recently updated RAW capture formats may not be supported. The Adobe DNG Converter is a free utility that provides a workaround for this situation.

You can simply use the Adobe DNG Converter to convert your photos to the DNG format and then open those in Photoshop via Adobe Camera Raw. You can find a download link for the Adobe DNG Converter on the Adobe website here:

https://helpx.adobe.com/photoshop/digital-negative.html

The only caution I would add is to make sure that you are either always using Lightroom as the starting point for your workflow, or that you are never using Lightroom. A mixed approach to using Lightroom can create some serious organizational challenges.

But again, for photographers who have not upgraded to the latest version of Lightroom or signed up for an Adobe Creative Cloud subscription, the free Adobe DNG Converter does provide a workaround for updated RAW format support.

Smart Object Issues


Today’s Question: When you send an image from Lightroom to Photoshop for editing, do you choose the option to “Edit in Adobe Photoshop” or “Open as Smart Object in Photoshop”? What is the practical impact of each choice?

Tim’s Quick Answer: My preference is to use the “Edit In” command, and to not open the image as a Smart Object in Photoshop. While there are some very nice benefits to the use of Smart Objects, there are also some challenges related to a layer-based workflow.

More Detail: Smart Objects in Photoshop provide for some very interesting and potentially helpful features. In the context of applying a filter effect, for example, adding that filter as a Smart Filter (the variation on a Smart Object used for filters) enables you to refine the settings for the filter effect after that filter has been applied, with no degradation in overall image quality.

When sending a RAW capture from Lightroom to Photoshop (or opening a RAW capture as a Smart Object separately in Photoshop if you’re not a Lightroom user), you are essentially embedding the RAW capture into the file you’re creating in Photoshop. That allows you to simply double-click on the Smart Object layer to bring up the Adobe Camera Raw dialog so you can make changes to the original adjustments applied to the RAW capture.

That capability can certainly be very helpful in a variety of situations. However, it also creates some potential challenges related to a layer-based workflow.

As just one simple example, let’s assume a workflow that involves some image cleanup work with an image opened as a Smart Object. You make use of the powerful Content-Aware technology with the Spot Healing Brush tool to clean up some dust spots and other blemishes in tricky areas of the photo. You apply this cleanup work on a separate image layer to maintain flexibility with a non-destructive workflow.

Later, you decide that the color isn’t quite right in the image, and you decide to refine the adjustments you applied to the original RAW capture. So you double-click on the Smart Object layer, and apply color changes via the Adobe Camera Raw dialog. The color in the image is improved, but now the color in the areas you cleaned up no longer matches the surrounding photo.

Ultimately, I think Smart Objects are an incredibly powerful feature in Photoshop. Unfortunately, for my purposes they aren’t quite “smart” enough, creating challenges for my preferred layer-based workflow. So until Smart Objects get a bit smarter, my preference is to not use Smart Objects in most cases. And therefore I simply use the “Edit In” command when sending a photo from Lightroom to Photoshop, rather than the option to open the image as a Smart Object.

Stabilization with Fast Shutter


Today’s Question: I photograph hummingbirds, and try to keep my shutter speed at 1/4000 second or faster. I would like to know if the image stabilizer is doing any good at that speed. Normally I just turn it off to save battery at that shutter speed, but I’m not sure that is the right thing to do. If there is no need for IS at that speed, up to what shutter speed is image stabilization effective?

Tim’s Quick Answer: In general, image stabilization won’t provide much (if any) benefit with particularly fast shutter speeds, and it could potentially cause problems. As a basic rule of thumb I would say that at shutter speeds above around 1/1000th of a second it generally makes sense to turn off image stabilization.

More Detail: That said, some photographers still prefer to keep image stabilization turned on, even at fast shutter speeds. Put simply, they worry they’ll forget to turn image stabilization back on when they need it, which could potentially be a bigger problem than having the stabilization turned on when it isn’t really needed.

It is possible for stabilization to create problems with sharpness when it is used in the wrong circumstances. Essentially, the compensation that is intended to reduce motion blur can instead create motion blur.

In some situations where you will be imparting significant movement to the camera, it is possible you would achieve a benefit from image stabilization even at very fast shutter speeds. But in general I would say that the fast shutter speed itself will provide the greatest benefit.

It is also worth remembering that image stabilization technology is generally focused on compensating for movement of the camera caused by the photographer, not movement in the frame caused by the subject moving. Assuming you are on a tripod photographing the hummingbirds, and especially considering the fast shutter speeds you’ll be using, my recommendation would be to keep image stabilization turned off for this type of photography.

As a brief aside, I’m reminded of the basic rule of thumb for minimum shutter speed required when hand-holding a lens based on focal length. When you combine that with the notion that image stabilization is probably not going to provide much (if any) benefit at those relatively fast shutter speeds, it seems reasonable to wonder how often we really need to employ image stabilization with long telephoto lenses. Perhaps it isn’t so important to spend the extra money on stabilization with a super telephoto lens. But I suppose it is nice to have in any event.
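The rule of thumb mentioned above is simple arithmetic: the slowest recommended handheld shutter speed is roughly one over the effective focal length. A quick sketch (the crop_factor parameter is my own addition to account for crop-sensor cameras):

```python
def min_handheld_shutter(focal_length_mm, crop_factor=1.0):
    """Slowest recommended handheld shutter speed, in seconds,
    per the classic 1/(focal length) rule of thumb."""
    return 1.0 / (focal_length_mm * crop_factor)

print(min_handheld_shutter(500))       # 0.002, i.e. 1/500 second
print(min_handheld_shutter(300, 1.5))  # roughly 1/450 second on an APS-C body
```

At those speeds, camera shake is already largely frozen by the short exposure itself, which is the point of the aside above.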

ISO Invariance


Today’s Question: What about the new ISO-invariant cameras? Does your answer [about optimal night exposures in yesterday’s edition of the Ask Tim Grey eNewsletter] apply to them too?

Tim’s Quick Answer: In general I would still say that increasing ISO in the camera is preferred over adjusting in post-processing, even with a sensor that has been labeled as being “ISO Invariant”.

More Detail: The term “ISO invariant” in this context refers to a sensor where you can achieve the same results by increasing the ISO in the camera or by under-exposing the image and then brightening it in post-processing. That doesn’t necessarily mean the result will have low noise, but rather that the results of the two approaches will be the same.

It is important to keep in mind that, as I pointed out in the article “ISO Illustrated” in the December 2013 issue of Pixology magazine, raising the ISO setting really represents underexposing a photo (perhaps severely) and amplifying the signal recorded by the image sensor in an effort to compensate.

In other words, raising the ISO setting can be thought of as brightening a photo in much the same way that dragging the Exposure slider in Adobe Camera Raw or Lightroom will brighten the photo.

Put another way, in the context of ISO the real question is whether the camera or post-processing software can do a better job of amplifying the signal recorded by the sensor.

In general I have found that the camera does a better job of amplifying the signal compared to using software after the capture. This makes sense considering the camera has the benefit of analog data to work with from the image sensor, rather than digital values in the RAW capture file after the capture.
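A toy numeric model helps illustrate why amplifying before quantization can preserve more tonal information than multiplying the quantized values afterward. This is a deliberately simplified model, not actual sensor behavior:

```python
def quantize(x, levels=256):
    """Clip and round a linear signal to an integer level, like an ADC."""
    return max(0, min(levels - 1, round(x)))

def in_camera(signal, gain):
    # Analog-style gain: amplify before the signal is quantized.
    return quantize(signal * gain)

def in_post(signal, gain):
    # Post-processing: the signal is quantized first, then multiplied.
    return min(255, quantize(signal) * gain)

# Two slightly different dim signals, each boosted two stops (4x):
print(in_camera(10.3, 4), in_camera(10.6, 4))  # 41 42 - distinct tones survive
print(in_post(10.3, 4), in_post(10.6, 4))      # 40 44 - only coarse steps remain
```

In the post-processing case every output is a multiple of the gain, leaving gaps between tonal levels, which is one simplified way to picture why in-camera amplification can yield smoother results.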

Some cameras perform better than others, of course, both in terms of baseline noise thresholds as well as amplification quality. What I would say in general though is that based on what I’ve seen and have been able to test, there isn’t a clear advantage to ignoring the ISO setting in the camera, even with an “ISO Invariant” sensor.

As such, my recommendation is still to expose properly in the camera, even if that involves increasing the ISO setting to achieve the desired overall settings. That still represents underexposing the photo and using ISO to brighten in the camera, but I have found that this provides a superior result in most cases.

Optimal Exposure at Night


Today’s Question: When shooting at night, is it better to shoot for a “proper” exposure at a high ISO, say 3200, or underexpose a couple stops with a lower ISO, then compensate in processing with increased exposure?

Tim’s Quick Answer: You will get the best results by achieving the brightest exposure possible without clipping highlight detail while at the same time using the minimum possible ISO setting. If you need to achieve a shorter exposure duration, generally speaking you are better off increasing the ISO setting rather than creating an underexposure.

More Detail: When you raise the ISO setting, you can think of the result as an underexposure (based on a shorter exposure duration, for example) that is then brightened through amplification of the signal recorded by the image sensor.

With the various cameras I have had the opportunity to test, the results consistently show that it is better to let the camera brighten the image through a higher ISO setting than to apply brightening to the image after the capture. In other words, the in-camera amplification of the signal recorded by the image sensor yields higher quality than applying the same change in brightness with software after the capture.

So, if at all possible I would use the lowest ISO setting available to minimize noise, and create an exposure that is as bright as possible without clipping highlight detail (or only clipping the brightest areas, such as illuminated lights). If I needed a faster shutter speed (shorter exposure duration) for any reason, I would raise the ISO setting in order to achieve that goal, because this will generally provide the best final image quality compared to underexposing the scene and then brightening the image later.

Black and White JPEG


Today’s Question: Is there any way to optimize a JPEG for a black and white photo? Since there’s no color information, can more gray tonalities be squeezed into fewer megabytes?

Tim’s Quick Answer: While it is certainly possible to produce a black and white JPEG image, this is not something I recommend due to the relatively high risk of posterization (the loss of smooth gradations) in such an image.

More Detail: JPEG images do not support high-bit data, meaning you can only have 8-bit per channel information in a JPEG image. For a full-color image that translates to more than 16.7 million possible color values. For a black and white (grayscale) image, however, having only 8 bits for what is then a single channel means a maximum of only 256 shades of gray.

With only 256 shades of gray available, it can be very difficult to have (or maintain) smooth gradations of tonal value. For example, it is very common to see a banded appearance in a sky rather than a smooth gradation with a black and white image in the 8-bit per channel mode.

When strong adjustments are applied to an 8-bit per channel black and white image, the loss of smooth gradations is compounded. Note that the limitations of 8-bit per channel black and white images apply even if you are working with a color original. Even with a Black & White adjustment layer in Photoshop, working on an RGB image, for example, the final image can only contain up to 256 shades of gray, even though there is more information available in the source image.
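You can see the arithmetic behind this posterization with a short sketch: applying a strong levels-style stretch to all 256 possible 8-bit gray values leaves gaps in the tonal scale (the specific black and white points here are arbitrary, chosen just to illustrate a roughly two-stop stretch):

```python
def stretch(value, black_point=64, white_point=192):
    """Remap [black_point, white_point] to [0, 255], clipping outside,
    like an aggressive Levels adjustment on 8-bit data."""
    scaled = (value - black_point) * 255 / (white_point - black_point)
    return max(0, min(255, round(scaled)))

original = range(256)                      # every tone an 8-bit file can hold
adjusted = {stretch(v) for v in original}  # tones that survive the adjustment
print(len(adjusted))                       # 129 - nearly half the levels are now unused
```

The gaps between the surviving levels are what appear as visible banding in smooth gradations such as skies, whereas a 16-bit source has enough intermediate values to absorb the same adjustment smoothly.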

Because of these factors, I highly recommend working only in the 16-bit per channel mode for black and white images. That, in turn, means JPEG images should generally be avoided in terms of the source image you optimize for a black and white photo. Instead, only a 16-bit per channel source image should be used as the basis of a black and white interpretation of a photo. You can then certainly save the final result (after all adjustments have been applied) as a JPEG image for purposes of sharing the photo, and still retain relatively high image quality for that final output.