Unexpected Bit Depth

Today’s Question: Based on your discussion about bit depth, my JPEGs, as shot in the camera, show up as 24 bit depth when looked at in my Windows [operating system] folder. I’m not sure if that information is available on Photoshop or Lightroom, but I am confused by this bit depth. How do I change that number to a 16 bit as you recommend, or change the number in general?

Tim’s Quick Answer: The reference to “24-bit” is actually the same thing as “8-bit per channel”. The operating system is simply describing the total bit depth rather than the per-channel bit depth.

More Detail: Bit depth refers to the total number of potential tonal and color values in an image. In digital photography we generally refer to the per channel bit depth, such as 8-bits per channel or 16-bits per channel.

In other contexts the total number of bits is used instead. This is often the case with film scanners, for example. With an RGB image you have three channels (red, green, and blue). So, if the image is 8-bit per channel there is a total of 24 bits (eight bits multiplied by three channels). For a 16-bit per channel image the total would be 48 bits (sixteen bits multiplied by three channels).
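As a quick sketch, the per-channel versus total relationship can be expressed in a few lines of Python:

```python
# Convert per-channel bit depth to the total bit depth for an RGB image,
# which is the figure an operating system typically reports.
CHANNELS = 3  # red, green, and blue

def total_bits(bits_per_channel: int) -> int:
    """Total bit depth across all three channels."""
    return bits_per_channel * CHANNELS

print(total_bits(8))   # 24 -- the "24-bit" JPEG from the question
print(total_bits(16))  # 48 -- typical of high-bit film scanners
```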

It is worth noting, by the way, that JPEG images can only be in the 8-bit per channel mode. Furthermore, if you have an 8-bit per channel image I don’t recommend converting it to 16-bits per channel. Doing so would double the base file size of the image with no real benefit in terms of image quality or flexibility in optimizing the image.

Smart Previews for Develop

Today’s Question: In Lightroom > Preferences > Performance, do you recommend enabling “Use Smart Previews Instead of Originals for Image Editing” in order to speed up performance?

Tim’s Quick Answer: Yes, having this option turned on can improve performance in the Develop module in Lightroom, especially if the source images are of a particularly large size in terms of resolution. If you have adequate storage space for the Smart Previews and your photos have a very high resolution, I would recommend turning on this option.

More Detail: The option to use Smart Previews in the Develop module rather than the original source image enables you to potentially streamline your workflow. Put simply, enabling this option can speed up performance when applying adjustments in the Develop module.

My testing has demonstrated that the performance improvement is generally rather modest. The benefit is more pronounced with images that have a very high resolution. In other words, if you’re using a camera that has a 40-megapixel sensor you can expect a more significant benefit compared to images captured with a 20-megapixel sensor. That is because the difference in the amount of data will be more significant with higher resolution captures.

Of course, you also need to consider the amount of additional storage space required for those Smart Previews. You can generally expect to consume around 2MB of hard drive space for each Smart Preview. That isn’t a tremendous amount of space, but it can add up.

It is also possible that the preview in the Develop module based on a Smart Preview won’t be completely accurate, since the original source image is not being taken into account when working exclusively with Smart Previews.

If you are seeing slow performance in the Develop module, with adjustments taking time to be reflected in your preview images, I would most certainly recommend turning on the option to use Smart Previews in order to help improve that performance.

Camera Bit Depth

Today’s Question: Do all cameras have approximately the same bit depth, or do they differ significantly? If so, what is the difference?

Tim’s Quick Answer: Most cameras today provide 14-bit per channel analog-to-digital conversion. A small number of higher-end cameras offer 16-bit per channel support, and some (mostly older) cameras are limited to 12-bit per channel. Cameras with higher bit depth have the potential for greater detail with smoother gradations.

More Detail: Light is an analog signal, so in theory its brightness could be divided into an infinite number of values. Digital images, however, are described with discrete numeric values, and so a limit on how many values are available must be defined.

You can think of this limit as being a limit to how many digits can be used for a number. If you are limited to a two-digit number, the maximum value is 99. For a three-digit number the maximum value would be 999.

In the context of digital images, bit depth defines the limit in terms of how many possible values are available, and therefore how many tonal and color values are possible. Cameras that only offered 12-bit per channel analog-to-digital (A/D) conversion were limited to a total of 4,096 tonal values per channel, or more than 68 billion possible tonal and color values for a full-color image.

Most cameras employ 14-bit per channel A/D conversion, providing 16,384 tonal values per channel, or more than 4 trillion possible tonal and color values overall. And those few cameras that offer 16-bit per channel A/D conversion offer 65,536 tonal values per channel, or over 281 trillion possible tonal and color values.

Of course, you only really need about 8-bit per channel information to produce a photographic image of excellent quality. But having more information helps ensure you retain smooth gradations and optimal overall quality, even after strong adjustments are applied. So there is an advantage to higher bit depth, but that advantage comes with diminishing returns.

When processing your images after the capture, most software only provides support for 8-bit per channel and 16-bit per channel modes. So when your camera “only” offers 14-bit (or 12-bit) A/D conversion, you would still generally be working with that image in the 16-bit per channel mode. You simply don’t have full 16-bit information in that scenario.

I wouldn’t recommend choosing a specific camera based only on the bit depth of the A/D conversion for that camera. Many other factors are far more important both in terms of image quality and overall feature set. All things considered, I would say that most cameras today are about equal in terms of the net effect of their bit depth, in large part because the vast majority of cameras today offer the same 14-bit per channel A/D conversion.

Bit Depth Importance

Today’s Question: How important is it really to work at a high bit depth? Will I even be able to see the difference in my photos?

Tim’s Quick Answer: That depends. When a photo requires strong adjustments or will be presented as a monochrome image, working at a high bit depth can be critical. When working with a full-color photo that only requires minor adjustments, the bit depth isn’t likely to be a significant factor. I simply prefer a conservative approach that involves always using 16-bit per channel mode when optimizing photos.

More Detail: The bit depth used when applying adjustments to your images affects the total number of tonal and color values available for the image. That, in turn, determines the degree to which smooth gradations of tone and color can be maintained, even as you apply strong adjustments.

A monochrome (such as black and white) image at a bit depth of 8-bits per channel will only have 256 shades of gray available, while a 16-bit image will have 65,536 shades of gray. That can translate into a tremendous risk of posterization (the loss of smooth gradations) for an 8-bit monochromatic image, even with modest adjustments.

A color image at 8-bits per channel will have more than 16.7 million possible tonal and color values available. At 16-bits per channel that number jumps to over 281 trillion tonal and color values.

While 16.7 million possible tonal and color values is generally adequate for ensuring smooth gradations within the photo, strong adjustments can result in a degree of posterization. It will usually take a very strong adjustment (perhaps combined with an image that had been under-exposed initially) to create visible posterization with a color image, but the point is that there is a degree of risk.
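As an illustrative sketch (not any particular software's algorithm), stretching a narrow tonal range of an underexposed 8-bit image shows how posterization arises: the number of distinct values doesn't increase, so gaps open up between them.

```python
# Illustrative sketch: stretch a narrow, underexposed 8-bit tonal range to
# the full 0..255 range and count the distinct values that survive. The
# gaps between surviving values appear as posterization (banding).
def stretch(levels, low, high):
    """Map [low, high] to [0, 255], clipping and rounding to 8-bit values."""
    return sorted({min(255, max(0, round((v - low) * 255 / (high - low))))
                   for v in levels})

dark_image = range(20, 60)             # only 40 distinct input levels
result = stretch(dark_image, 20, 59)
print(result[0], result[-1])           # 0 255 -- now spans the full range
print(len(result))                     # 40 -- still only 40 levels, with gaps
```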

For many photographers the difference may be virtually non-existent between an 8-bit per channel and 16-bit per channel image for a color photograph that doesn’t require strong adjustments. However, my personal preference is to always work in the 16-bit per channel mode when possible, just to ensure I am always producing an image with the highest potential quality.

It is important to note, however, that if the original capture does not provide high-bit data, there is no real advantage to converting an 8-bit image to the 16-bit per channel mode. This is one of the key reasons I prefer RAW capture rather than JPEG capture (along with the risk of compression artifacts with JPEG captures).

Blurry Print

Today’s Question: I captured a photo in RAW and loaded it into Lightroom CC. I converted to black and white and exported to my hard drive at a resolution of 300ppi with pixel dimensions of 4608×3456. I sent the image to a photo lab to have a 12×18 print made. I have a 27” monitor (iMac) and the image looks fantastic on it, sharp as a tack and rich in contrast. When I got the print back from the lab it looked blurry and dull. This has happened with two different labs. Am I seeing it wrong on my monitor?

Tim’s Quick Answer: It sounds like you are using an appropriate workflow here, so either two labs did a bad job of printing the image, there was a problem in the file you provided, or you’ve not gotten a clear view of the actual image quality.

More Detail: While a typical monitor display without calibration is about a full stop too bright, this issue won’t affect the relative appearance of sharpness and detail in an image. The typical complaint I hear about prints is that they are too dark, which is often attributable to the lack of calibration. That, in turn, leads to the application of improper adjustments to the image.

However, this won’t cause problems with the appearance of sharpness and detail in the image. That said, it is important to zoom in to a 100% view to get an accurate sense of the sharpness of the image. If you’re not zooming in to evaluate the image it is possible you’re simply not getting an accurate sense of how sharp the image should be and therefore what you can expect in the print.

You might also confirm your export settings for the image. I assume the pixel dimensions stated in the question are the native pixel dimensions for the original capture. When possible you want to provide the printer with a file that has as much data as possible, up to the intended output size. In this case the file is large enough that good output quality could be reasonably expected.

However, you haven’t prepared a file sized to the final output dimensions. Whenever possible I recommend sending the printer an image sized to the exact output size, typically based on a pixel per inch (ppi) resolution of 300 ppi. So in this case you would want to provide a file of around 5,400 pixels by 3,600 pixels.
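The arithmetic is simply the print dimensions multiplied by the output resolution. A minimal sketch:

```python
# Pixel dimensions needed for a print at a given size and resolution.
def print_pixels(width_in: float, height_in: float, ppi: int = 300):
    """Return (width, height) in pixels for the requested print size."""
    return (round(width_in * ppi), round(height_in * ppi))

print(print_pixels(18, 12))  # (5400, 3600) for a 12x18-inch print at 300 ppi
```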

I also highly recommend having a conversation with the print lab you are using. They should be able to confirm that the file you sent was prepared properly, and provide you with a print that matches the source image. One printer I have been recommending for a long time is Fine Print Imaging, which you can learn more about here:

http://www.fineprintimaging.com

It is worth keeping in mind that a print will never have the same luminance and depth that a monitor display is capable of presenting. Therefore, it is also important to have realistic expectations based on what is possible in a print. But in this case it does indeed sound like there is an issue causing a print that is not matching the potential of the source image.

Manual Focus for ND

Today’s Question: I have a question of clarification on your answer to focusing with a strong neutral density filter. After you set the camera settings without the filter on, do you then have to switch to manual focus on the lens, or does it not matter?

Tim’s Quick Answer: Yes, if autofocus is enabled for your shutter release button, then you should disable autofocus on the lens after adding a neutral density filter for a long exposure.

More Detail: As mentioned in yesterday’s Ask Tim Grey eNewsletter, I recommend configuring all of the camera settings for your photo with the neutral density filter detached. You can then add the neutral density filter back to the lens, and adjust the shutter speed to increase the exposure duration based on the strength of the filter.

In other words, you should refine your composition, establish exposure settings, and set the focus with the neutral density filter removed from the lens. Make sure the exposure settings are established in manual mode, and then add the neutral density filter and adjust the shutter speed setting.

If your shutter release button is configured to activate autofocus, then pressing that button to capture the image will cause the camera to attempt to refocus on the scene. This can result in inaccurate focus, in part because of the presence of the neutral density filter.

By turning off autofocus on the lens, you’ll ensure that the camera isn’t able to autofocus when you capture the image. Of course, if you’re using back-button focus and have disabled autofocus for the shutter release button then this additional step is not necessary.

Also, be sure to re-enable autofocus when you’re finished capturing the photo.

Focusing with Neutral Density

Today’s Question: Is there any reason not to use autofocus when using a solid neutral density filter for a long exposure?

Tim’s Quick Answer: In some cases autofocus may be difficult or impossible to achieve when a strong solid neutral density filter is attached to the lens. Therefore, as a general rule I recommend a workflow that involves establishing focus before attaching the neutral density filter to the lens.

More Detail: Whether or not you will be able to use autofocus when a solid neutral density filter is attached to the lens depends on a variety of factors. That includes the strength (or overall density) of the filter, the type of autofocus your camera employs, and other factors.

In my experience it is often possible to achieve autofocus with a relatively strong neutral density filter. However, I’ve also found that in some cases the performance can be slow or the results can be inaccurate. In addition, it can be difficult to otherwise establish the overall composition and capture settings when a strong neutral density filter is attached to the lens.

In some cases the Live View display combined with exposure simulation can provide an adequate solution, but this too can be challenging. For example, the exposure simulation feature may result in a very noisy preview image, making it difficult to confirm accurate focus.

For these and various other reasons, I recommend configuring your shot without the neutral density filter attached to the lens, and then attach the filter and adjust the exposure settings.

The general process here involves first configuring the overall composition with your camera firmly mounted on a tripod. You can then use whatever method you prefer to establish exposure settings based on a photo without the use of the neutral density filter. If you used one of the semi-automatic modes (such as Aperture Priority mode) to determine the exposure settings without the neutral density filter attached, then you’ll want to switch to the manual exposure mode and dial in those same settings.

After everything is configured in manual mode, you can add the neutral density filter to the lens and adjust the shutter speed (increasing the exposure duration) based on the number of stops of light the neutral density filter will reduce your exposure by.
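Since each stop of neutral density doubles the required exposure time, the adjustment can be sketched as:

```python
# Adjust exposure time for an ND filter: each stop doubles the duration.
def nd_exposure_seconds(base_seconds: float, stops: int) -> float:
    """Exposure time needed once the ND filter is attached."""
    return base_seconds * (2 ** stops)

# e.g. a metered 1/125-second exposure behind a 10-stop ND filter:
print(nd_exposure_seconds(1 / 125, 10))  # 8.192 seconds
```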

This approach makes it easier to configure the overall shot: establish the camera settings without the neutral density filter, then add the filter and adjust the exposure time accordingly.

Balancing Shutter Speed

Today’s Question: I saw your recent photo of a crop duster on Instagram, and wonder how you managed to get a blurred propeller without having the airplane blurred too.

Tim’s Quick Answer: The photo you refer to (https://www.instagram.com/p/BVFQzi0AjPw/) was captured with a shutter speed of 1/350th of a second, which provides a good balance for providing an image that is sharp overall while retaining a blur for the propeller.

More Detail: When photographing a propeller-driven aircraft in flight, it is important to have a degree of blur visible in the propeller. Otherwise, the result makes it look like the propeller isn’t turning at all, which obviously would suggest an entirely different tone for the photo.

As a very general rule, a shutter speed of around 1/250th of a second will provide a degree of blur for the propeller, but will still provide an adequately fast shutter speed to sharply render the overall airplane, even when you are hand-holding the camera (as was the case with the photo linked above).

The specific shutter speed that will provide the best result will vary based on the rotational speed of the propeller for the aircraft you are photographing. In addition, you’ll want to take into consideration other factors such as whether you’re hand-holding the camera and the lens focal length you are using to capture the images. But in general I find that a shutter speed of around 1/250th of a second provides a good balance between propeller blur and a relatively high percentage of images that are otherwise sharp overall.

You can view more of my images from the Palouse (and elsewhere) by finding me (and following me!) under the user name “timgreyphoto” on Instagram on your mobile devices, or by pointing your web browser here:

https://www.instagram.com/timgreyphoto/

Portable Backup

Today’s Question: Back in 2013 and 2014 you were not enthusiastic about any of the available backup storage options that did not need to be connected to a laptop for power.  Have the options gotten any better since then? I’d like to travel without a laptop and would like to copy my CompactFlash cards directly to a storage device, probably two of them, to have duplicate copies. Any new advice?

Tim’s Quick Answer: I have to admit I’m not terribly enthusiastic about any of the portable backup options that are currently available. About the best option I can recommend would be the HyperDrive products from Sanho.

More Detail: For whatever reason, there were a variety of portable backup options available a number of years ago, and there aren’t many options available now. Perhaps because laptops have gotten smaller and other mobile devices can be used to download photos from media cards, there simply isn’t as large a market for standalone portable storage devices capable of backing up your digital media cards when traveling.

You can find the 500GB model of the HyperDrive here:

http://timgrey.me/BH-HyperDrive500

This is the product I would probably recommend most at this point. It enables you to download images from your media cards to the portable hard drive storage, and you could obviously use two matching devices to create two download copies of your photos.

There was recently a Kickstarter campaign for a new portable storage device, which sounds somewhat promising. There seem to have been some delays in production, but the overall specifications look good. I’ve not had a chance to test this device personally, but you can learn more here:

https://www.kickstarter.com/projects/dfigear/flash-porter-backup-and-protect-photos-and-video-f

For most of my travel I need the utility of a laptop in addition to storage (and backup storage). As a result, portable storage devices such as those above have not been a priority for me personally. That said, I certainly appreciate the value these devices can provide, and I hope there will be improved solutions available soon. And perhaps the Flash Porter linked above will prove to be that solution.

Non-Destructive RAW Workflow

Today’s Question: Isn’t the RAW file already processed once it’s imported into Lightroom? Or is it just processed in the Lightroom catalog so the original captured file is truly untouched?

Tim’s Quick Answer: When you import a RAW capture into Lightroom, it truly is untouched, other than the actual process of copying that RAW capture file from one location to another.

More Detail: Lightroom employs a non-destructive workflow, which ensures that your original image files are not altered (or potentially damaged) by Lightroom. With a normal workflow Lightroom won’t apply any changes to your original RAW capture, in order to help protect that important original capture data.

The only situation where you might actually enable Lightroom to alter an original RAW capture is if you have enabled the option to update proprietary RAW capture files when you change the capture time for specific images.

By default, all changes you make within Lightroom are only updated within the Lightroom catalog, not with the original image files on your hard drive. There is an option to save metadata to your images, but in the case of RAW captures that will cause an XMP “sidecar” file to be created or updated, without actually altering the original RAW capture file.

However, you can choose to apply changes to the capture time to the original RAW capture files on your hard drive if you prefer. This is the only scenario where you would actually be altering the original RAW capture using Lightroom. This option (along with the option to actually save changes directly to your files) can be found in the Catalog Settings dialog.

But again, importing RAW captures into Lightroom will not cause those original captures to be modified. They are simply copied to the destination you specify, and added to the Lightroom catalog so you can use Lightroom to manage, optimize, and share the photos.