Camera Bit Depth

Today’s Question: Do all cameras have approximately the same bit depth, or do they differ significantly? If so, what is the difference?

Tim’s Quick Answer: Most cameras today provide 14-bit per channel analog-to-digital conversion. A small number of higher-end cameras offer 16-bit per channel support, and some (mostly older) cameras are limited to 12-bit per channel. Cameras with higher bit depth have the potential for greater detail with smoother gradations.

More Detail: Light represents an analog signal, and so you could say that light could theoretically be divided into an infinite number of brightness values. However, digital images are described with discrete numeric values, and so a limit to how many values are available must be defined.

You can think of this limit as being a limit to how many digits can be used for a number. If you are limited to a two-digit number, the maximum value is 99. For a three-digit number the maximum value would be 999.

In the context of digital images, bit depth defines the limit in terms of how many possible values are available, and therefore how many tonal and color values are possible. Cameras that only offered 12-bit per channel analog-to-digital (A/D) conversion were limited to a total of 4,096 tonal values per channel, or more than 68 billion possible tonal and color values for a full-color image.

Most cameras employ 14-bit per channel A/D conversion, providing 16,384 tonal values per channel, or more than 4 trillion possible tonal and color values overall. And those few cameras that offer 16-bit per channel A/D conversion offer 65,536 tonal values per channel, or over 281 trillion possible tonal and color values.
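
If it helps to see where those figures come from, here is a quick sketch of the arithmetic (written as a small Python snippet purely for illustration):

    # Tonal values per channel, and total RGB combinations, for a given bit depth.
    for bits in (8, 12, 14, 16):
        per_channel = 2 ** bits          # e.g. 2 to the 14th power = 16,384
        full_color = per_channel ** 3    # three channels: red, green, and blue
        print(f"{bits}-bit: {per_channel:,} per channel, {full_color:,} in full color")

Running this confirms the figures above: 4,096 values per channel works out to roughly 68.7 billion full-color combinations, 16,384 to about 4.4 trillion, and 65,536 to about 281 trillion.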

Of course, you only really need about 8-bit per channel information to provide a photographic image of excellent quality. But having more information can ensure you retain smooth gradations and optimal overall quality, even after strong adjustments are applied. So there is an advantage to higher bit depth, but that advantage has a diminishing return.

When processing your images after the capture, most software only provides support for 8-bit per channel and 16-bit per channel modes. So when your camera “only” offers 14-bit (or 12-bit) A/D conversion, you would still generally be working with that image in the 16-bit per channel mode. You simply don’t have full 16-bit information in that scenario.

I wouldn’t recommend choosing a specific camera based only on the bit depth of the A/D conversion for that camera. Many other factors are far more important both in terms of image quality and overall feature set. All things considered, I would say that most cameras today are about equal in terms of the net effect of their bit depth, in large part because the vast majority of cameras today offer the same 14-bit per channel A/D conversion.

Bit Depth Importance

Today’s Question: How important is it really to work at a high bit depth? Will I even be able to see the difference in my photos?

Tim’s Quick Answer: That depends. When a photo requires strong adjustments or will be presented as a monochrome image, working at a high bit depth can be critical. When working with a full-color photo that only requires minor adjustments, the bit depth isn’t likely to be a significant factor. I simply prefer a conservative approach that involves always using 16-bit per channel mode when optimizing photos.

More Detail: The bit depth used when applying adjustments to your images affects the total number of tonal and color values available for the image. That, in turn, determines the degree to which smooth gradations of tone and color can be maintained, even as you apply strong adjustments.

A monochrome (such as black and white) image at a bit depth of 8-bits per channel will only have 256 shades of gray available, while a 16-bit image will have 65,536 shades of gray. That can translate into a tremendous risk of posterization (the loss of smooth gradations) for an 8-bit monochromatic image, even with modest adjustments.

A color image at 8-bits per channel will have more than 16.7 million possible tonal and color values available. At 16-bits per channel that number jumps to over 281 trillion tonal and color values.

While 16.7 million possible tonal and color values is generally adequate for ensuring smooth gradations within the photo, strong adjustments can result in a degree of posterization. It will usually take a very strong adjustment (perhaps combined with an image that had been under-exposed initially) to create visible posterization with a color image, but the point is that there is a degree of risk.
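
If you want to see this effect for yourself, a small experiment along the following lines can demonstrate it. This is just a rough sketch (using Python with NumPy, with a hypothetical "strong shadow boost" standing in for real adjustments) that counts how many distinct tonal levels survive the same adjustment at each bit depth:

    import numpy as np

    def surviving_levels(bit_depth):
        """Apply a strong brightening curve and count the distinct levels that remain."""
        max_val = 2 ** bit_depth - 1
        ramp = np.arange(max_val + 1, dtype=np.float64) / max_val  # smooth gradient from 0 to 1
        adjusted = ramp ** 0.25                                    # aggressive brightening of the shadows
        requantized = np.round(adjusted * max_val).astype(np.int64)
        return np.unique(requantized).size

    print("8-bit levels remaining:", surviving_levels(8))    # far fewer than the original 256
    print("16-bit levels remaining:", surviving_levels(16))  # still tens of thousands

The 8-bit gradient comes out of the adjustment with noticeably fewer distinct levels (and large gaps between them in the brightened shadows), which is exactly what shows up as visible banding, while the 16-bit gradient retains tens of thousands of levels and stays smooth.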

For many photographers the difference between an 8-bit per channel and a 16-bit per channel image may be virtually non-existent for a color photograph that doesn’t require strong adjustments. However, my personal preference is to always work in the 16-bit per channel mode when possible, just to ensure I am always producing an image with the highest potential quality.

It is important to note, however, that if the original capture does not provide high-bit data, there is no real advantage to converting an 8-bit image to the 16-bit per channel mode. This is one of the key reasons I prefer RAW capture rather than JPEG capture (along with the risk of compression artifacts with JPEG captures).

Blurry Print

Today’s Question: I captured a photo in RAW and loaded it into Lightroom CC. I converted it to black and white and exported it to my hard drive at a resolution of 300 ppi with pixel dimensions of 4608×3456. I sent the image to a photo lab to have a 12×18 print made. I have a 27” monitor (iMac) and the image looks fantastic on it, sharp as a tack and rich in contrast. When I got the print back from the lab it looked blurry and dull. This has happened with two different labs. Am I seeing it wrong on my monitor?

Tim’s Quick Answer: It sounds like you are using an appropriate workflow here, so either two labs did a bad job of printing the image, there was a problem in the file you provided, or you’ve not gotten a clear view of the actual image quality.

More Detail: While a typical monitor display without calibration is about a full stop too bright, this issue won’t affect the relative appearance of sharpness and detail in an image. The typical complaint I hear about prints is that they are too dark, which is often attributable to the lack of calibration. That, in turn, leads to the application of improper adjustments to the image.

Again, though, the brightness issue won’t cause problems with the appearance of sharpness and detail in the image. That said, it is important to zoom in to a 100% view to get an accurate sense of the sharpness of the image. If you’re not zooming in to evaluate the image, it is possible you’re simply not getting an accurate sense of how sharp the image actually is, and therefore of what you can expect in the print.

You might also confirm your export settings for the image. I assume the pixel dimensions stated in the question are the native pixel dimensions for the original capture. When possible you want to provide the printer with a file that has as much data as possible, up to the intended output size. In this case the file is large enough that good output quality could be reasonably expected.

However, you haven’t prepared a file sized to the final output dimensions. Whenever possible I recommend sending the printer an image sized to the exact output size, typically based on a pixel per inch (ppi) resolution of 300 ppi. So in this case you would want to provide a file of around 5,400 pixels by 3,600 pixels.
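
The arithmetic behind those numbers is simple: multiply each print dimension in inches by the target resolution in pixels per inch. As a quick sketch:

    # Pixels needed for a 12 x 18 inch print at 300 pixels per inch.
    width_px = 18 * 300     # 5,400 pixels on the long edge
    height_px = 12 * 300    # 3,600 pixels on the short edge

    # Effective resolution if the native 4608 x 3456 pixel file is sent instead.
    effective_ppi = 4608 / 18   # about 256 pixels per inch

In other words, the native capture would print at about 256 ppi at this size, which is why reasonably good quality could still be expected, but resampling the file to 5,400 by 3,600 pixels gives the lab a file at exactly the recommended 300 ppi output resolution.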

I also highly recommend having a conversation with the print lab you are using. They should be able to confirm that the file you sent was prepared properly, and provide you with a print that matches the source image. One printer I have been recommending for a long time is Fine Print Imaging, which you can learn more about here:

http://www.fineprintimaging.com

It is worth keeping in mind that a print will never have the same luminance and depth that a monitor display is capable of presenting. Therefore, it is also important to have realistic expectations based on what is possible in a print. But in this case it does indeed sound like there is an issue causing a print that is not matching the potential of the source image.

Manual Focus for ND

Today’s Question: I have a question of clarification on your answer to focusing with a strong neutral density filter. Can you please tell me if after you set the camera settings without the filter on do you then have to switch to manual focus on the lens or does it not matter?

Tim’s Quick Answer: Yes, if autofocus is enabled for your shutter release button, then you should disable autofocus on the lens after adding a neutral density filter for a long exposure.

More Detail: As mentioned in yesterday’s Ask Tim Grey eNewsletter, I recommend configuring all of the camera settings for your photo with the neutral density filter detached. You can then add the neutral density filter back to the lens, and adjust the shutter speed to increase the exposure duration based on the strength of the neutral density filter.

In other words, you should refine your composition, establish exposure settings, and set the focus with the neutral density filter removed from the lens. Make sure the exposure settings are established in manual mode, and then add the neutral density filter and adjust the shutter speed setting.

If your shutter release button is configured to activate autofocus, then pressing that button to capture the image will cause the camera to attempt to refocus on the scene. This can result in inaccurate focus, in part because of the presence of the neutral density filter.

By turning off autofocus on the lens, you’ll ensure that the camera isn’t able to autofocus when you capture the image. Of course, if you’re using back-button focus and have disabled autofocus for the shutter release button then this additional step is not necessary.

Also, be sure to re-enable autofocus when you’re finished capturing the photo.

Focusing with Neutral Density

Today’s Question: Is there any reason not to use autofocus when using a solid neutral density for a long exposure?

Tim’s Quick Answer: In some cases autofocus may be difficult or impossible to achieve when a strong solid neutral density filter is attached to the lens. Therefore, as a general rule I recommend a workflow that involves establishing focus before attaching the neutral density filter to the lens.

More Detail: Whether or not you will be able to use autofocus when a solid neutral density filter is attached to the lens depends on a variety of factors. That includes the strength (or overall density) of the filter, the type of autofocus your camera employs, and other factors.

In my experience it is often possible to achieve autofocus with a relatively strong neutral density filter. However, I’ve also found that in some cases the performance can be slow or the results can be inaccurate. In addition, it can be difficult to otherwise establish the overall composition and capture settings when a strong neutral density filter is attached to the lens.

In some cases the Live View display combined with exposure simulation can provide an adequate solution, but this too can be challenging. For example, the exposure simulation feature may result in a very noisy preview image, making it difficult to confirm accurate focus.

For these and various other reasons, I recommend configuring your shot without the neutral density filter attached to the lens, and then attaching the filter and adjusting the exposure settings.

The general process here involves first configuring the overall composition with your camera firmly mounted on a tripod. You can then use whatever method you prefer to establish exposure settings based on a photo without the use of the neutral density filter. If you used one of the semi-automatic modes (such as Aperture Priority mode) to determine the exposure settings without the neutral density filter attached, then you’ll want to switch to the manual exposure mode and dial in those same settings.

After everything is configured in manual mode, you can add the neutral density filter to the lens and adjust the shutter speed (increasing the exposure duration) based on the number of stops of light the neutral density filter will reduce your exposure by.
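
The adjustment itself is simple doubling: each stop of density doubles the required exposure time. As a rough sketch (the base exposure of 1/60 second and the 10-stop filter are just example values):

    # Each stop of neutral density doubles the required exposure time.
    base_shutter = 1 / 60              # metered exposure without the filter, in seconds
    nd_stops = 10                      # strength of the neutral density filter, in stops
    long_exposure = base_shutter * (2 ** nd_stops)
    print(f"{long_exposure:.1f} seconds")   # about 17 seconds

So a scene metered at 1/60 of a second without the filter calls for an exposure of roughly 17 seconds once a 10-stop neutral density filter is attached.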

Overall, this workflow makes it much easier to configure the shot. Once you have established the camera settings without the neutral density filter attached, you can add the filter and adjust the exposure time accordingly.

Balancing Shutter Speed

Today’s Question: I saw your recent photo of a crop duster on Instagram, and wonder how you managed to get a blurred propeller without having the airplane blurred too.

Tim’s Quick Answer: The photo you refer to (https://www.instagram.com/p/BVFQzi0AjPw/) was captured with a shutter speed of 1/350th of a second, which provides a good balance for providing an image that is sharp overall while retaining a blur for the propeller.

More Detail: When photographing a propeller-driven aircraft in flight, it is important to have a degree of blur visible in the propeller. Otherwise, the result makes it look like the propeller isn’t turning at all, which obviously would suggest an entirely different tone for the photo.

As a very general rule, a shutter speed of around 1/250th of a second will produce a degree of blur for the propeller while still being fast enough to render the overall airplane sharply, even when you are hand-holding the camera (as was the case with the photo linked above).

The specific shutter speed that will provide the best result will vary based on the rotational speed of the propeller for the aircraft you are photographing. In addition, you’ll want to take into consideration other factors such as whether you’re hand-holding the camera and the lens focal length you are using to capture the images. But in general I find that a shutter speed of around 1/250th of a second provides a good balance between propeller blur and a relatively high percentage of images that are otherwise sharp overall.
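
As a rough way to think about this, the amount of blade travel captured in the frame is simply the propeller’s rotational speed multiplied by the shutter duration. A quick sketch (the 2,400 RPM figure is an assumed cruise speed, purely for illustration):

    # Degrees of propeller rotation captured during the exposure.
    rpm = 2400                          # assumed propeller speed; varies by aircraft
    degrees_per_second = rpm / 60 * 360
    for shutter in (1 / 250, 1 / 350, 1 / 1000):
        blur = degrees_per_second * shutter
        print(f"1/{round(1 / shutter)} second: about {blur:.0f} degrees of blade travel")

At 1/250 or 1/350 of a second the blades sweep through a substantial arc and read as a pleasing blur, while at 1/1000 of a second the sweep is small enough that the propeller can appear nearly frozen.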

You can view more of my images from the Palouse (and elsewhere) by finding me (and following me!) under the username “timgreyphoto” on Instagram on your mobile device, or by pointing your web browser here:

https://www.instagram.com/timgreyphoto/

Portable Backup

Today’s Question: Back in 2013 and 2014 you were not enthusiastic about any of the available backup storage options that did not need to be connected to a laptop for power.  Have the options gotten any better since then? I’d like to travel without a laptop and would like to copy my CompactFlash cards directly to a storage device, probably two of them, to have duplicate copies. Any new advice?

Tim’s Quick Answer: I have to admit I’m not terribly enthusiastic about any of the portable backup options that are currently available. About the best option I can recommend would be the HyperDrive products from Sanho.

More Detail: For whatever reason, there were a variety of portable backup options available a number of years ago, and there aren’t many options available now. Perhaps because laptops have gotten smaller and other mobile devices can be used to download photos from media cards, there simply isn’t as large a market for standalone portable storage devices capable of backing up your digital media cards when traveling.

You can find the 500GB model of the HyperDrive here:

http://timgrey.me/BH-HyperDrive500

This is the product I would probably recommend most at this point. It enables you to download images from your media cards to the portable hard drive storage, and you could obviously use two matching devices to create two download copies of your photos.

There was recently a Kickstarter campaign for a new portable storage device, which sounds somewhat promising. There seem to have been some delays in production, but the overall specifications look good. I’ve not had a chance to test this device personally, but you can learn more here:

https://www.kickstarter.com/projects/dfigear/flash-porter-backup-and-protect-photos-and-video-f

For most of my travel I need the utility of a laptop in addition to storage (and backup storage). As a result, portable storage devices such as those above have not been a priority for me personally. That said, I certainly appreciate the value these devices can provide, and hope there will be improved solutions available soon. And perhaps the Flash Porter linked above will prove to be that solution.

Non-Destructive RAW Workflow

Today’s Question: Isn’t the RAW file already processed once it’s imported into Lightroom? Or is it just processed in the Lightroom catalog so the original captured file is truly untouched?

Tim’s Quick Answer: When you import a RAW capture into Lightroom, it truly is untouched, other than the actual process of copying that RAW capture file from one location to another.

More Detail: Lightroom employs a non-destructive workflow, which ensures that your original image files are not altered (or potentially damaged) by Lightroom. With a normal workflow Lightroom won’t apply any changes to your original RAW capture, in order to help protect that important original capture data.

The only situation where you might actually enable Lightroom to alter an original RAW capture is if you have enabled the option to update proprietary RAW capture files when you change the capture time for specific images.

By default, all changes you make within Lightroom are only updated within the Lightroom catalog, not with the original image files on your hard drive. There is an option to save metadata to your images, but in the case of RAW captures that will cause an XMP “sidecar” file to be created or updated, without actually altering the original RAW capture file.

However, you can choose to apply changes to the capture time to the original RAW capture files on your hard drive if you prefer. This is the only scenario where you would actually be altering the original RAW capture using Lightroom. This option (along with the option to actually save changes directly to your files) can be found in the Catalog Settings dialog.

But again, importing RAW captures into Lightroom will not cause those original captures to be modified. They are simply copied to the destination you specify, and added to the Lightroom catalog so you can use Lightroom to manage, optimize, and share the photos.

Unwanted Flattening

Today’s Question: Why did a photo with many layers flatten when I needed to Convert to Profile in Photoshop’s latest version? The original image was a slide scan from a Nikon scanner profile. I needed to convert it to ProPhoto RGB and when I did, all the layers disappeared leaving only the base layer.

Tim’s Quick Answer: The image was flattened because you left the “Flatten Image to Preserve Appearance” checkbox turned on in the Convert to Profile dialog. You can keep this checkbox turned off if you want to retain the layers when converting the image to a different color profile.

More Detail: When you convert an image from one color profile to another, there is a risk that you will lose some color fidelity in the process. This is in large part because some colors that are available in your source color profile may not be available in the destination color profile.

In addition, the process of converting an image from one profile to another can cause a small degree of change in the color appearance of the photo. This is generally not significant, but it is possible. Turning on the “Flatten Image to Preserve Appearance” checkbox will help reduce these issues, by removing any adjustment layers and image layers that might cause variability in the final effect, based on how those layers impact the underlying image data.

In most cases I am perfectly comfortable leaving the “Flatten Image to Preserve Appearance” checkbox turned off. The resulting change in color will generally be extremely minor, and typically not visible within the image.

If you are concerned about a color shift when converting an image to a different color profile, I generally recommend creating a flattened copy of the image rather than flattening your master image. To do so you can choose Image > Duplicate from the menu. In the Duplicate Image dialog you can turn on the “Duplicate Merged Layers Only” checkbox so that the duplicate you’re creating will be flattened. You can then convert that duplicate image to the desired color space and continue working with that copy of the image as needed, preserving the original master image with all layers intact.

Folder Strategy Challenge

Today’s Question: I saw your video on your workflow for importing photos. That works great for a single place or date or subject/trip. But what is your recommended workflow for importing a batch of photos taken over time at various locations/months and varied unique subjects/events. This happens when I periodically import my iPhone/iPad photos or take a while to import photos from my camera.

Tim’s Quick Answer: As a general rule, I recommend using a folder structure that reflects the way you think about your photos with an individual folder for each photo trip or outing. In cases where that approach doesn’t work, I recommend a hybrid approach that may include a handful of general folders. In some cases a date-based folder structure might even be appropriate.

More Detail: In my opinion it is critically important to define a folder structure that can serve as a foundation of your overall image-management workflow. In general I find that most photographers (including myself) use the folder structure as a first step in locating a specific photo. While metadata values such as keywords, star ratings, and other details might also prove very helpful in locating a particular photo, navigating to a specific folder is often the first significant step toward locating an image.

That said, there are certainly situations where this approach doesn’t quite fit the needs when it comes to organizing your photos. For some photographers this folder strategy doesn’t work at all, and for others (as reflected in today’s question) the approach doesn’t work in certain situations.

In this type of situation I recommend first considering whether a hybrid approach would provide a good solution. For example, you could organize most of your photos using a folder structure where the folders are named based on the way you think about a given photo shoot or trip. For those photos that don’t really fit well into this approach, you could create a separate folder structure.

For example, if you’re also managing more “casual” captures made with a smartphone alongside your master collection of photos, you might create a folder called “Phone Captures”. Any captures from your phone that were part of an overall photo trip or outing could still be placed in the folder for that trip along with the other photos, while images that don’t fit into your existing folder structure can be placed into the “Phone Captures” folder.

You might also consider a date-based folder structure as part of this strategy for photos that don’t fit into your normal folder structure. In general I prefer to avoid the use of date-based folders, since they often lead to confusion in the context of an image-management workflow. However, in some cases it may be the only option that really makes sense.

For example, consider a street photographer who lives in a big city and goes out just about every day to explore on foot and capture images. Those images are always captured in the same city, and there may not be a theme that ties together the photos from a given day or week. The only real way to divide those images into manageable segments may be to create a date-based folder structure.

To me the most important thing is to have a strategy that makes sense for your folder structure. To the extent possible, try to be very consistent about the approach you use. When there are exceptions to your normal structure, try to define a specific strategy for those exceptions, such as having folders for the categories of photos that don’t fit into your normal folder structure strategy.

As for the actual process of importing those “exception” photos into Lightroom, if the photos you’re importing will end up being placed into a variety of different folders, I recommend first downloading the images into a “download” folder so that you are getting the images into your workflow as quickly as possible. You can then sort through those images and move them into different folders as needed.

When creating that “download” folder, I recommend using a folder name that will ensure the folder appears at the very top of the alphabetical list of folders. For example, you could precede the folder name with an underscore (_) character to ensure this “temporary” folder will always appear at the top of your list of folders.
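
As a simple illustration (all of the folder names here are hypothetical), the result might look something like this in an alphabetical folder list, with the download folder pinned to the top:

    _Download
    Iceland 2016
    Palouse 2017
    Phone Captures
    Yellowstone 2015

Once the images in “_Download” have been reviewed, they can be moved into the appropriate trip folders (or into a catch-all folder such as “Phone Captures”), leaving the download folder empty until the next import.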