PNG File Size

Today’s Question: Occasionally, I attach a screen grab to an email, usually PNGs directly from my Mac’s screen-grab commands. In one example the PNG’s file size is over 860KB versus the JPG version at 208KB. Are PNGs always so big, at about four times the size of equivalent JPGs?

Tim’s Quick Answer: PNG (Portable Network Graphics) files will generally be considerably larger than JPEG (Joint Photographic Experts Group) images, because PNG uses lossless compression while JPEG uses lossy compression. The specific results will vary depending on the image in question, but when file size is the priority the JPEG format is preferred over PNG.

More Detail: When sharing images, the JPEG file format is a popular option because it enables you to achieve relatively small file sizes. This small file size is achieved due to lossy compression, meaning some degree of image quality may be lost even at a high quality setting.

By comparison, the PNG file format uses lossless compression. As a result, image quality is preserved at the cost of a larger file size.

Of course, at a high quality setting the JPEG image will generally appear to have quality that is nearly the same as the same image saved as a PNG file. Therefore, the JPEG file format is generally superior for sharing images where file size is a concern, such as for online sharing.
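
If you want to see the difference for yourself, here is a minimal sketch using Python with the Pillow library (the filename is just a hypothetical placeholder for any screen grab you have on hand):

    import os
    from PIL import Image

    # Open a source image ("screenshot.png" is a hypothetical filename).
    img = Image.open("screenshot.png").convert("RGB")

    # Save the same pixels with lossless (PNG) and lossy (JPEG) compression.
    img.save("out.png", format="PNG")
    img.save("out.jpg", format="JPEG", quality=90)  # a high quality setting

    # Compare the resulting file sizes on disk.
    for path in ("out.png", "out.jpg"):
        print(path, os.path.getsize(path) // 1024, "KB")

The exact ratio between the two files will vary from image to image, as noted above.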

In theory the PNG format would be preferred when image quality is the priority. However, in those cases it might also make sense to instead use the TIFF file format. The main reason I consider the PNG format to be important is that it supports transparency in the image. This isn’t generally a necessity for a photographic workflow, but for certain uses (such as in digital slideshows, on web pages, or in videos) that feature can be helpful.

Curve Direction

Today’s Question: When a Curves adjustment is used to increase or decrease brightness in an image, some photographers drag the curve center point at roughly a 45-degree angle toward the top-left to increase brightness or toward the bottom-right to decrease brightness. Other photographers drag the center point vertically toward the top to increase brightness or vertically toward the bottom to decrease brightness. The arrow keys also facilitate fine adjustment with this approach. The resulting tone curves are plainly different with each approach. Which is the best approach for overall image brightness adjustment?

Tim’s Quick Answer: There isn’t a single “correct” direction to drag an anchor point with the Curves adjustment. Rather, what is important is the specific relationship you’re defining between the “before” and “after” value for tonality. The optimal result will vary based on the specific image you’re working on.

More Detail: If you drag an anchor point directly upward (or downward) at the precise center of the curve, you will be applying an equal adjustment to the highlights and shadows in the image. If the anchor point is dragged out at a 45-degree angle toward the top-left, you will have a stronger effect on the highlights than on the shadows.

What ultimately matters here is the shape of the curve and the effect you want to have on the image. Keep in mind that what you’re altering with the curve is the relationship between the “before” and “after” values within the image. How you adjust the shape of the curve depends on how you want to adjust the tonal values in bright versus dark areas of the image.
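
As a rough illustration of that “before” and “after” relationship, a curve is essentially a lookup table that maps each input tonal value to an output value. The sketch below (Python with NumPy, using simple linear interpolation, whereas actual Curves tools typically use smoother spline interpolation) shows a center anchor point dragged straight upward:

    import numpy as np

    # Anchor points as (input, output) pairs on an 8-bit tonal scale.
    anchors_in = [0, 128, 255]
    anchors_out = [0, 160, 255]  # center point dragged upward to brighten

    # Build a 256-entry lookup table by interpolating between the anchors.
    lut = np.interp(np.arange(256), anchors_in, anchors_out).astype(np.uint8)

    # Applying the curve is just passing every pixel through the table.
    image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
    adjusted = lut[image]

    print(lut[128])  # the "before" value 128 now maps to an "after" value of 160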

It is also worth keeping in mind that in many (if not most) cases you will want to add more than one anchor point for the Curves adjustment. This enables you to have a different effect in highlight areas versus shadow areas of the image, for example. In some cases you might want to lighten the shadows and darken the highlights, while in other cases you might want to darken the shadows and lighten the highlights. And of course with some images you might want to only lighten or darken, though with an emphasis on specific tonal ranges.

So again, it is important to consider the shape of the curve for a Curves adjustment based on how you want to alter the appearance of the specific photo you’re working on.

Hard Drive Life Expectancy

Today’s Question: What is the reasonable life of a hard drive? In other words, how often should I replace them?

Tim’s Quick Answer: In theory a typical hard drive should operate normally for hundreds of thousands of hours of use. A solid-state drive (SSD) can generally endure thousands of write cycles per memory cell, which would generally translate into multiple years (and potentially decades) of reliable use. That said, I do think replacing hard drives every few years is a good idea.

More Detail: There are a wide variety of factors that impact the lifespan of a hard drive or SSD. For traditional hard drives one of the biggest risk factors in my experience tends to be heat, for example. Physical damage, manufacturing defects, and other factors can also play a significant role.

Even if a given hard drive model could be expected to operate normally for one million hours of use, it is also possible that the drive could fail after a single hour of use. This is why a frequent and consistent backup is critically important for protecting your data.

While today’s storage devices tend to be very reliable overall, I do think it can be a good idea to replace your storage about every three to five years. Of course, for many photographers this approach is something that happens somewhat “automatically”, by virtue of filling up a hard drive and needing to replace that drive with a higher capacity drive.

Predicting the failure of a storage device is quite difficult. This is why a good backup strategy is so critical, in conjunction with making sure to keep your storage devices physically safe and protected from extreme conditions.

Unexpected Bit Depth

Today’s Question: Based on your discussion about bit depth, my JPEGs, as shot in the camera, show up as 24-bit when viewed in my Windows [operating system] folder. I’m not sure if that information is available in Photoshop or Lightroom, but I am confused by this bit depth. How do I change that number to 16-bit as you recommend, or change the number in general?

Tim’s Quick Answer: The reference to “24-bit” is actually the same thing as “8-bit per channel”. The operating system is simply describing the total bit depth rather than the per-channel bit depth.

More Detail: Bit depth refers to the total number of potential tonal and color values in an image. In digital photography we generally refer to the per-channel bit depth, such as 8-bit per channel or 16-bit per channel.

In other contexts the total number of bits is used instead. This is often the case with film scanners, for example. With an RGB image you have three channels (red, green, and blue). So, if the image is 8-bit per channel there is a total of 24 bits (eight bits multiplied by three channels). For a 16-bit per channel image the total would be 48 bits (sixteen bits multiplied by three channels).
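
The arithmetic is easy to verify, as in this small Python sketch:

    # Per-channel versus total bit depth for a three-channel RGB image.
    channels = 3
    for per_channel_bits in (8, 16):
        total_bits = per_channel_bits * channels
        values = 2 ** per_channel_bits
        print(f"{per_channel_bits}-bit per channel = {total_bits}-bit total "
              f"({values:,} values per channel)")

    # Output:
    # 8-bit per channel = 24-bit total (256 values per channel)
    # 16-bit per channel = 48-bit total (65,536 values per channel)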

It is worth noting, by the way, that JPEG images can only be in the 8-bit per channel mode. Furthermore, if you have an 8-bit per channel image I don’t recommend converting it to 16-bits per channel. Doing so would double the base file size of the image with no real benefit in terms of image quality or flexibility in optimizing the image.

Smart Previews for Develop

Today’s Question: In Lightroom > Preferences > Performance, do you recommend enabling “Use Smart Previews Instead of Originals for Image Editing” in order to speed up performance?

Tim’s Quick Answer: Yes, having this option turned on can improve performance in the Develop module in Lightroom, especially if the source images have a particularly high resolution. If you have adequate storage space for the Smart Previews and your photos have a very high resolution, I would recommend turning on this option.

More Detail: The option to use Smart Previews in the Develop module rather than the original source image enables you to potentially streamline your workflow. Put simply, enabling this option can speed up performance when applying adjustments in the Develop module.

My testing has demonstrated that the performance improvement is generally rather modest. The benefit is more pronounced with images that have a very high resolution. In other words, if you’re using a camera that has a 40-megapixel sensor you can expect a more significant benefit compared to images captured with a 20-megapixel sensor. That is because the difference in the amount of data will be more significant with higher resolution captures.

Of course, you also need to consider the amount of additional storage space required for those Smart Previews. You can generally expect to consume around 2MB of hard drive space for each Smart Preview. That isn’t a tremendous amount of space, but it can add up.
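
As a quick back-of-the-envelope estimate (the catalog size here is just a hypothetical example):

    # Rough Smart Preview storage estimate at about 2MB per preview.
    photos = 50_000                # hypothetical catalog size
    total_gb = photos * 2 / 1024   # 2MB each, converted to GB
    print(f"about {total_gb:.0f} GB")  # about 98 GB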

It is also possible that the preview in the Develop module based on a Smart Preview won’t be completely accurate, since the original source image is not being taken into account when working exclusively with Smart Previews.

If you are seeing slow performance in the Develop module, with adjustments taking time to be reflected in your preview images, I would most certainly recommend turning on the option to use Smart Previews in order to help improve that performance.

Camera Bit Depth

Today’s Question: Do all cameras have approximately the same bit depth or do they differ significantly? If so what is the difference?

Tim’s Quick Answer: Most cameras today provide 14-bit per channel analog to digital conversion. A small number of higher-end cameras offer 16-bit per channel support, and some (mostly older) cameras are limited to 12-bit per channel. Cameras with higher bit depth have the potential for greater detail with smoother gradations.

More Detail: Light is an analog signal, so in theory it could be divided into an infinite number of brightness values. However, digital images are described with discrete numeric values, and so a limit to how many values are available must be defined.

You can think of this limit as being a limit to how many digits can be used for a number. If you are limited to a two-digit number, the maximum value is 99. For a three-digit number the maximum value would be 999.

In the context of digital images, bit depth defines the limit in terms of how many possible values are available, and therefore how many tonal and color values are possible. Cameras that only offered 12-bit per channel analog-to-digital (A/D) conversion were limited to a total of 4,096 tonal values per channel, or more than 68 billion possible tonal and color values for a full-color image.

Most cameras employ 14-bit per channel A/D conversion, providing 16,384 tonal values per channel, or more than 4 trillion possible tonal and color values overall. And those few cameras that offer 16-bit per channel A/D conversion offer 65,536 tonal values per channel, or over 281 trillion possible tonal and color values.
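
These figures follow directly from the bit depth: an n-bit channel offers 2^n values, and a full-color image multiplies the three channels together. A quick Python check:

    # Tonal values per channel and total colors for common A/D bit depths.
    for bits in (12, 14, 16):
        per_channel = 2 ** bits
        full_color = per_channel ** 3  # red x green x blue
        print(f"{bits}-bit: {per_channel:,} per channel, {full_color:,} total")

    # 12-bit: 4,096 per channel, 68,719,476,736 total (about 68 billion)
    # 14-bit: 16,384 per channel, 4,398,046,511,104 total (about 4 trillion)
    # 16-bit: 65,536 per channel, 281,474,976,710,656 total (about 281 trillion)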

Of course, you only really need about 8-bit per channel information to produce a photographic image of excellent quality. But having more information can ensure you retain smooth gradations and optimal overall quality, even after strong adjustments are applied. So there is an advantage to higher bit depth, but that advantage has diminishing returns.

When processing your images after the capture, most software only provides support for 8-bit per channel and 16-bit per channel modes. So when your camera “only” offers 14-bit (or 12-bit) A/D conversion, you would still generally be working with that image in the 16-bit per channel mode. You simply don’t have full 16-bit information in that scenario.

I wouldn’t recommend choosing a specific camera based only on the bit depth of the A/D conversion for that camera. Many other factors are far more important both in terms of image quality and overall feature set. All things considered, I would say that most cameras today are about equal in terms of the net effect of their bit depth, in large part because the vast majority of cameras today offer the same 14-bit per channel A/D conversion.

Bit Depth Importance

Today’s Question: How important is it really to work at a high bit depth? Will I even be able to see the difference in my photos?

Tim’s Quick Answer: That depends. When a photo requires strong adjustments or will be presented as a monochrome image, working at a high bit depth can be critical. When working with a full-color photo that only requires minor adjustments, the bit depth isn’t likely to be a significant factor. I simply prefer a conservative approach that involves always using 16-bit per channel mode when optimizing photos.

More Detail: The bit depth used when applying adjustments to your images affects the total number of tonal and color values available for the image. That, in turn, determines the degree to which smooth gradations of tone and color can be maintained, even as you apply strong adjustments.

A monochrome (such as black and white) image at a bit depth of 8-bits per channel will only have 256 shades of gray available, while a 16-bit image will have 65,536 shades of gray. That can translate into a tremendous risk of posterization (the loss of smooth gradations) for an 8-bit monochromatic image, even with modest adjustments.

A color image at 8-bits per channel will have more than 16.7 million possible tonal and color values available. At 16-bits per channel that number jumps to over 281 trillion tonal and color values.

While 16.7 million possible tonal and color values is generally adequate for ensuring smooth gradations within the photo, strong adjustments can result in a degree of posterization. It will usually take a very strong adjustment (perhaps combined with an image that was underexposed at capture) to create visible posterization in a color image, but the point is that there is a degree of risk.
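
You can simulate that posterization risk with a simple experiment, sketched here in Python with NumPy (the aggressive brightening is just an arbitrary example of a strong adjustment):

    import numpy as np

    # A smooth gradient of tonal values between black (0.0) and white (1.0).
    gradient = np.linspace(0.0, 1.0, 1 << 16)

    def strong_adjustment(x):
        # An aggressive brightening, e.g. rescuing an underexposed image.
        return np.clip(x * 4.0, 0.0, 1.0)

    # 8-bit workflow: quantize to 256 levels before adjusting.
    low_bit = strong_adjustment(np.round(gradient * 255) / 255)
    # High-bit workflow: adjust at full precision, quantize only at the end.
    high_bit = strong_adjustment(gradient)

    print(len(np.unique(np.round(low_bit * 255))))   # 65 distinct levels: posterized
    print(len(np.unique(np.round(high_bit * 255))))  # 256 distinct levels: smooth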

For many photographers the difference between an 8-bit per channel and a 16-bit per channel image may be virtually non-existent for a color photograph that doesn’t require strong adjustments. However, my personal preference is to always work in the 16-bit per channel mode when possible, just to ensure I am always producing an image with the highest potential quality.

It is important to note, however, that if the original capture does not provide high-bit data, there is no real advantage to converting an 8-bit image to the 16-bit per channel mode. This is one of the key reasons I prefer RAW capture rather than JPEG capture (along with the risk of compression artifacts with JPEG captures).

Blurry Print

Today’s Question: I captured a photo in RAW and loaded it into Lightroom CC. I converted it to black and white and exported it to my hard drive at a resolution of 300 ppi with pixel dimensions of 4608×3456. I sent the image to a photo lab to have a 12×18 print made. I have a 27” monitor (iMac) and the image looks fantastic on it, sharp as a tack and rich in contrast. When I got the print back from the lab it looked blurry and dull. This has happened with two different labs. Am I seeing it wrong on my monitor?

Tim’s Quick Answer: It sounds like you are using an appropriate workflow here, so either two labs did a bad job of printing the image, there was a problem in the file you provided, or you’ve not gotten a clear view of the actual image quality.

More Detail: While a typical monitor display without calibration is about a full stop too bright, this issue won’t affect the relative appearance of sharpness and detail in an image. The typical complaint I hear about prints is that they are too dark, which is often attributable to the lack of calibration. That, in turn, leads to the application of improper adjustments to the image.

However, this won’t cause problems with the appearance of sharpness and detail in the image. That said, it is important to zoom in to a 100% view to get an accurate sense of the sharpness of the image. If you’re not zooming in to evaluate the image it is possible you’re simply not getting an accurate sense of how sharp the image should be and therefore what you can expect in the print.

You might also confirm your export settings for the image. I assume the pixel dimensions stated in the question are the native pixel dimensions for the original capture. When possible you want to provide the printer with a file that has as much data as possible, up to the intended output size. In this case the file is large enough that good output quality could be reasonably expected.

However, you haven’t prepared a file sized to the final output dimensions. Whenever possible I recommend sending the printer an image sized to the exact output size, typically based on a pixel per inch (ppi) resolution of 300 ppi. So in this case you would want to provide a file of around 5,400 pixels by 3,600 pixels.
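
The target dimensions follow from simply multiplying the print size in inches by the output resolution:

    # Pixel dimensions for a 12x18-inch print at 300 pixels per inch.
    ppi = 300
    width_inches, height_inches = 18, 12
    print(width_inches * ppi, "x", height_inches * ppi)  # 5400 x 3600 pixels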

I also highly recommend having a conversation with the print lab you are using. They should be able to confirm that the file you sent was prepared properly, and provide you with a print that matches the source image. One printer I have been recommending for a long time is Fine Print Imaging, which you can learn more about here:

http://www.fineprintimaging.com

It is worth keeping in mind that a print will never have the same luminance and depth that a monitor display is capable of presenting. Therefore, it is also important to have realistic expectations based on what is possible in a print. But in this case it does indeed sound like there is an issue causing a print that is not matching the potential of the source image.

Manual Focus for ND

Today’s Question: I have a question of clarification on your answer about focusing with a strong neutral density filter. After setting the camera settings without the filter attached, do you then have to switch to manual focus on the lens, or does it not matter?

Tim’s Quick Answer: Yes, if autofocus is enabled for your shutter release button, then you should disable autofocus on the lens after adding a neutral density filter for a long exposure.

More Detail: As mentioned in yesterday’s Ask Tim Grey eNewsletter, I recommend configuring all of the camera settings for your photo with the neutral density filter detached. You can then add the neutral density filter back to the lens, and adjust the shutter speed to increase the exposure duration based on the strength of the filter.

In other words, you should refine your composition, establish exposure settings, and set the focus with the neutral density filter removed from the lens. Make sure the exposure settings are established in manual mode, and then add the neutral density filter and adjust the shutter speed setting.

If your shutter release button is configured to activate autofocus, then pressing that button to capture the image will cause the camera to attempt to refocus on the scene. This can result in inaccurate focus, in part because of the presence of the neutral density filter.

By turning off autofocus on the lens, you’ll ensure that the camera isn’t able to autofocus when you capture the image. Of course, if you’re using back-button focus and have disabled autofocus for the shutter release button then this additional step is not necessary.

Also, be sure to re-enable autofocus when you’re finished capturing the photo.

Focusing with Neutral Density

Today’s Question: Is there any reason not to use autofocus when using a solid neutral density filter for a long exposure?

Tim’s Quick Answer: In some cases autofocus may be difficult or impossible to achieve when a strong solid neutral density filter is attached to the lens. Therefore, as a general rule I recommend a workflow that involves establishing focus before attaching the neutral density filter to the lens.

More Detail: Whether or not you will be able to use autofocus when a solid neutral density filter is attached to the lens depends on a variety of factors. That includes the strength (or overall density) of the filter, the type of autofocus your camera employs, and other factors.

In my experience it is often possible to achieve autofocus with a relatively strong neutral density filter. However, I’ve also found that in some cases the performance can be slow or the results can be inaccurate. In addition, it can be difficult to otherwise establish the overall composition and capture settings when a strong neutral density filter is attached to the lens.

In some cases the Live View display combined with exposure simulation can provide an adequate solution, but this too can be challenging. For example, the exposure simulation feature may result in a very noisy preview image, making it difficult to confirm accurate focus.

For these and various other reasons, I recommend configuring your shot without the neutral density filter attached to the lens, and then attaching the filter and adjusting the exposure settings.

The general process here involves first configuring the overall composition with your camera firmly mounted on a tripod. You can then use whatever method you prefer to establish exposure settings based on a photo without the use of the neutral density filter. If you used one of the semi-automatic modes (such as Aperture Priority mode) to determine the exposure settings without the neutral density filter attached, then you’ll want to switch to the manual exposure mode and dial in those same settings.

After everything is configured in manual mode, you can add the neutral density filter to the lens and adjust the shutter speed (increasing the exposure duration) based on the number of stops by which the neutral density filter reduces the light.
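
Since each stop of neutral density doubles the required exposure time, the arithmetic looks like this (the metered shutter speed and filter strength are just example values):

    # Adjusting shutter speed for a neutral density filter:
    # each stop of density doubles the required exposure time.
    metered_shutter = 1 / 60       # metered without the filter, in seconds
    nd_stops = 10                  # example: a 10-stop ND filter
    long_exposure = metered_shutter * (2 ** nd_stops)
    print(f"{long_exposure:.1f} seconds")  # about 17.1 seconds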

This approach makes it easier to configure the overall shot. Once you have established the camera settings without the neutral density filter, you can add the filter and adjust the exposure time accordingly.