Display Calibration Options

Today’s Question: Two quick (I hope) questions about monitor calibration, based on an answer you provided recently: 1) You seem to suggest that I should disable the feature for my calibration software (ColorMunki) that will cause the display to adjust based on the current lighting conditions at my computer. Is that true? And 2) How often should I repeat the calibration for my display? The software seems to suggest every two weeks is necessary.

Tim’s Quick Answer: I do indeed prefer to disable the automatic ambient lighting adjustment for the display, and calibrating about every three months is more than adequate for a digital display, provided nothing has changed about your overall computer configuration.

More Detail: I fully appreciate that in theory it is a good thing to have your monitor display updated automatically to reflect the current ambient lighting conditions. For example, when the lighting in the room gets brighter because of the sun shining through the windows, the display will get brighter to compensate. The same holds true for the color of the display, again to compensate for the color of the ambient lighting conditions.

However, there are two basic reasons I prefer to disable this automatic adjustment. First, it introduces a degree of variability, and I very much prefer consistency in my color management workflow. Second, in my opinion it is much (much!) better to work in an environment with consistent lighting conditions, preferably in a relatively dark environment where outside light sources aren’t interfering with your monitor display. In other words, I prefer to make sure my working environment remains as stable as possible, rather than having my display calibration tool apply an automatic adjustment based on changing conditions.

As for the frequency of the calibration and profiling, this is something that changed dramatically when we transitioned from analog displays (such as CRT monitors) to digital displays (such as LCD monitors). Digital displays are much more consistent over time, and so you don’t need to calibrate and profile the display as frequently.

With analog displays I used to recommend calibration every one to two weeks, but I also knew photographers who would calibrate their display every single day. With digital displays, frankly it is perfectly reasonable to only calibrate every few months or so, provided there hasn’t been any change in the hardware or software configuration that would impact the appearance of the display.

The primary issue with digital displays is that they fade in brightness over time (although not very rapidly under normal circumstances). The color doesn’t tend to shift much at all, at least until the display starts to get toward the end of its useful life. Therefore, in most cases, calibrating every few months will ensure an accurate and consistent display.

Need a tool for display calibration? Check out the X-Rite ColorMunki Display: http://timgrey.me/colormunkidisplay

Metadata for Video

Today’s Question: Nowhere in Photoshop, Bridge, or Lightroom can I find my camera settings data for video that I shoot (e.g., ISO, aperture, shutter speed). Is there something I need to change in camera to have this data show up the way it does for still shots? Or does the camera not record this data when shooting video?

Tim’s Quick Answer: Video captures don’t generally contain the same metadata you’re accustomed to having for your still captures. But for Canon digital SLRs (and some other cameras) it is possible to view your capture settings in metadata.

More Detail: With most (or perhaps all) Canon digital SLR cameras, the capture metadata is saved in a THM (thumbnail) file that shares the same base filename as the video file that is created (for example, a file with an extension of MOV). However, even that THM file requires an “extra” step for you to be able to view the capture metadata. There are actually two approaches you could take.

The first is to use Canon’s Digital Photo Professional software to browse your video captures. As you might expect, since this software is from Canon, you can view the metadata captured by Canon digital SLRs using Digital Photo Professional.

The other approach is to simply change the filename extension for the THM file to JPG, so that other software tools (such as Adobe Bridge, for example) will be able to find the metadata. Yes, the THM file is really just a JPEG image with a different filename extension. So if you change the filename extension to JPG, you can then browse that JPEG image with any software that allows you to view metadata for images. Among that metadata, you’ll find all of the capture settings you are accustomed to seeing for your still images.
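If you have many video clips, this extension change can even be scripted. Here’s a quick sketch in Python (the folder and filenames are hypothetical, and I copy rather than rename so the original THM files stay intact for Canon’s own software):

```python
import shutil
from pathlib import Path

def expose_thm_metadata(folder):
    """Copy each .THM sidecar to a .JPG so metadata browsers can read it.

    Copying (rather than renaming) preserves the original THM file,
    which Canon's Digital Photo Professional may still expect to find.
    """
    created = []
    for thm in Path(folder).glob("*.THM"):
        jpg = thm.with_suffix(".JPG")
        if not jpg.exists():  # don't clobber an existing JPEG
            shutil.copy2(thm, jpg)
            created.append(jpg.name)
    return created
```

After running this, a tool such as Adobe Bridge should see the JPG copies (and their capture metadata) alongside your MOV files.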

I’m not familiar with the options that might be available for other cameras, but in general you’ll find that there is much less support for metadata among video file formats. For example, Lightroom will not create an XMP sidecar file (nor write to the video directly) to add metadata to video clips the way it is able to for still images. The metadata for videos will only be retained in the Lightroom catalog.

So, in general, there isn’t as much flexibility when it comes to metadata for video files, at least compared to what we have become accustomed to for still captures. But, at least for video captured with a Canon digital SLR, that information can be found.

Display Brightness

Today’s Question: With today’s monitors, laptops, tablets, and smartphones auto-adjusting brightness according to ambient light as well as power conservation, how do you recommend brightness be set so that your work in Lightroom and Photoshop is consistent? I find on our gloomy Midwest days with breaks of bright sun here and there that I want to adjust my MacBook’s brightness up and down to compensate. But will that mean any exposure-related changes I might make while developing could look drastically incorrect when printed or viewed by someone else? Any suggestions?

Tim’s Quick Answer: Consistency is important when it comes to your display, especially when applying adjustments to your photos. I therefore highly recommend calibrating and profiling your display, working in a consistent environment that is relatively dark, and avoiding the urge to change the display brightness based on changing lighting conditions.

More Detail: Most computer displays with their default settings are about twice as bright as they should be from a color management perspective. This is a big part of the reason that calibrating and profiling your display is so important. If your display is twice as bright as it should be, you’re seeing your images one full stop of light brighter than they really are. Calibrating the display ensures that the display is adjusted to a more appropriate brightness level, and profiling ensures color accuracy as well.

Once you have made those adjustments in the process of calibrating your display, you should not make changes to the display settings. Doing so would work against the calibration.

It is also important to ensure that your environment is not significantly impacting the computer display. Ideally, you should work in a slightly darkened environment, so that the display “overpowers” the ambient lighting conditions. Of course, I realize this isn’t always possible, but it is an ideal to strive for.

Many of the newer tools for calibrating and profiling your display include an option to automatically compensate for changes in the luminance or color of the ambient lighting conditions. However, I prefer not to make use of these options. While they can be very helpful in concept, their use suggests that you are working in an environment that has changing lighting conditions, and thus is not an ideal working space.

My approach is to try to always work in a relatively dim room, even if that means closing the blinds and turning off lights. This isn’t always practical, but I make an effort to ensure the environment is consistent (and relatively dark) whenever I’m making critical decisions about adjustments for my photos.

In general I do prefer a very bright display when performing general work (not adjusting photos) on my computer. When I increase the brightness for those other tasks, I make sure that when I’m finished I return the brightness to the setting that was established during the calibration process.

Resolution Revisited

Today’s Question: When I export from Lightroom trying to use a size that will look good on an iPad I understand that 2048 pixels on the long dimension is good to use. I am still unsure what dimension I should put in the resolution box. Say I want to use every available pixel in the image so it will look really good. It seems that there should be a difference between 50 and 500? I usually use 200, but it is only a guess.

Tim’s Answer: This is obviously a bit of a follow-up to Wednesday’s question. I thought that since resolution tends to be a subject that many photographers struggle with, it made sense to expand on the issues raised in that answer by addressing this question today.

In short, the pixel per inch (ppi) resolution for an image does not matter unless that image is being printed. What that means is that for an image you intend to share electronically, it doesn’t matter what you set the ppi resolution to. All that matters is how many pixels are in the image.

So, for example, if you are preparing an image to be shared on an iPad, it makes sense to size the image to match the pixel dimensions of the display. In the case of the latest iPad tablets, the resolution is 2048×1536 pixels. So you can size your images to 2048 pixels on the long edge to get great results, and you can set the ppi resolution to anything you like. Regardless of the ppi resolution you set, the image will still be 2048 pixels on the long side, and so will be sized to match the number of pixels (at least on the long edge) for the display.

In other words, for a digital display you’ll get the best results when there is one pixel in the image for each pixel on the display. That way, you have an image that matches the “size” of the display in terms of the amount of information, and you can expect the best display quality for the image.

I should add that some software does actually look at the ppi resolution value when you add an image to a document, even if the aim is not printing. As far as I’m concerned that shouldn’t be the case, but in some cases software will adjust the apparent size of the image based on the ppi resolution value. But that doesn’t change the number of pixels in the image, and thus has no impact on the quality of the image if it is sized based on the actual pixel dimensions.

So, for printing you should absolutely set the ppi resolution based on the type of printed output you’re producing. But for sharing images electronically, you don’t have to worry about the ppi value at all. In those cases, I recommend setting the ppi resolution value to your lucky number, just for fun.
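To make the distinction concrete, here’s a small sketch in Python (the display dimensions match the iPad figures above; the helper names are mine): whether an image fits a display pixel-for-pixel depends only on its pixel dimensions, while the ppi value only matters once you compute a printed size.

```python
def fits_display(img_w, img_h, disp_w=2048, disp_h=1536):
    """Pixel-for-pixel display fit depends only on pixel dimensions."""
    return img_w <= disp_w and img_h <= disp_h

def print_width_inches(pixels, ppi):
    """The ppi value only comes into play for printed output size."""
    return pixels / ppi

# The same 2048-pixel-wide image tagged with wildly different ppi values:
for ppi in (50, 200, 500):
    assert fits_display(2048, 1365)            # screen fit never changes
    print(ppi, print_width_inches(2048, ppi))  # printed width does
```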

Transparent Canvas

Today’s Question: I suddenly have a problem [in Photoshop] that I haven’t experienced before when adding extra canvas to an image. When adding extra canvas to a flattened image I’m getting the transparency chequered board instead of a pure white border. The ‘Canvas Extension Color’ at the bottom of the dialogue box is greyed-out (not active). I’m sure there is a very simple solution that I’m overlooking here.

Tim’s Answer: In a way this is two questions in one. Sometimes you might actually want transparent pixels for the additional canvas around an image, and sometimes you might want actual pixels to fill in that area, so it is helpful to understand how this option works. The key is the presence (or lack thereof) of a Background image layer.

If the image includes a Background image layer, the Canvas Size dialog (accessible by choosing Image > Canvas Size from the menu) will include the option to set a color to be used for the new canvas area that will be added to the image. The default option is to use the current background color (white by default) for that canvas area, but you can choose a different option from the “Canvas extension color” popup if you prefer.

If the image does not include a Background image layer, the canvas you add to the image will be transparent, and the “Canvas extension color” popup will be disabled.

This, of course, leads to the question of how to change an image layer into a Background image layer, and how to change a Background image layer to a “normal” image layer. You can make this change by first making sure the layer you want to convert is the active layer on the Layer panel. You can make a layer active by simply clicking on the thumbnail for the layer on the Layers panel.

To change the status of the active layer to or from a Background image layer, you can then first choose Layer > New from the menu. On the submenu that appears you can choose “Background from Layer” if the active layer is not currently a Background image layer, or “Layer from Background” if the layer is currently a Background image layer. These two options are actually one item on the menu, with the specific option shown dependent upon whether the current image contains a Background image layer.

In the case of the example cited in today’s question, the image layer (even though the result of flattening) is not a Background image layer. Therefore, you could choose Layer > New > Background from Layer to convert the layer to a Background image layer, so that you can set the color of the canvas area you add to the image. If you wanted to add transparent canvas area for a different image, you would simply want to make sure the Background image layer is converted to a “normal” layer by choosing Layer > New > Layer from Background from the menu.

Resolution and File Size

Today’s Question: The same image exported twice from Lightroom with same pixel dimensions but one at 72 ppi and one at 300 ppi are the same file size. I would have thought that the 300 ppi file would have been larger. Why isn’t it?

Tim’s Answer: The issue of resolution continues to be one of the more common sources of confusion in photography, in part because information about an image is often presented in a way that can be a little confusing.

When it comes to the pixel per inch (ppi) resolution for an image, I think it is best to think about this as simply being a metadata value. It has absolutely no bearing whatsoever on the “real” information in your photographic image.

The overall size of an image is determined by how many pixels are in that image. Adding confusion, that total volume of pixel information is often referred to as resolution as well. In other words, sometimes the term “resolution” refers to the total volume of information (pixel dimensions, megapixels, etc.), and sometimes it refers to the density of information (how many pixels per inch, for example).

Forget about printing for the moment, and think about the image size as it appears at a 100% scale in Photoshop (for example) as well as the size of the file saved on your hard drive. The number of pixels is the primary factor here.

This makes sense when you consider how the appearance of an image in Photoshop at a 100% zoom setting changes based on how many pixels are in the image. If we have a square image that is 10 pixels on each side, that image will look very small in Photoshop even at a 100% zoom setting. A square image that is 10,000 pixels on each side will look very different, with the image being so big that we can only see a small portion of the image when viewed at a 100% zoom setting.

If the pixel dimensions remain the same, with only the pixel-per-inch resolution changing, the file size will not change. The ppi resolution only affects (for the most part) how the image is printed. In other words, when you send that 10,000 pixel-per-side image to the printer, how do you want the pixels spread out on the page? If you spread them out really far (perhaps only 72 pixels per inch) you’ll be able to make a very big print, but the quality won’t be very good. If you keep the pixels pretty close together (perhaps 360 ppi) you’ll have a smaller print, but that print will have great image quality.

In both examples above, the number of pixels didn’t change, so the file size would be the same (all other things being equal). All that changed is a simple metadata value that provided information on how the pixels should be distributed on the page when printed.

There are, of course, other factors that impact file size. These include (among other things) the bit depth of the image, compression applied to the image, and whether layers are included with the image. However, all other things being equal, changing the ppi resolution for an image will have absolutely no impact on file size. The number of pixels in an image is the key factor in overall file size, as well as for the potential output size for the photo.

It is worth noting, by the way, that if you had specified the output size in inches (for example) instead of pixels, this would have made a difference. For example, to create a square image that is one inch on each side at 300 pixels per inch, the resulting pixel dimensions will be 300×300 pixels. At ten inches on a side at 300 pixels per inch, the resulting pixel dimensions would be 3,000×3,000. And the ten inch image at 72 pixels per inch would be 720×720 pixels. So describing image dimensions in inches at a given pixel per inch resolution may result in different pixel dimensions. But if pixel dimensions are fixed, the file size is fixed (all other things being equal, of course).
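The arithmetic in those examples is simply inches multiplied by ppi in each dimension. A minimal sketch reproducing the three cases above:

```python
def pixel_dimensions(width_in, height_in, ppi):
    """Pixel dimensions follow directly from physical size times ppi."""
    return round(width_in * ppi), round(height_in * ppi)

print(pixel_dimensions(1, 1, 300))    # (300, 300)
print(pixel_dimensions(10, 10, 300))  # (3000, 3000)
print(pixel_dimensions(10, 10, 72))   # (720, 720)
```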

Exposure and ISO

Today’s Question: In an earlier Q&A [from January 23rd] you refer to underexposing by raising the ISO.

There is some confusion here somewhere. If one keeps all other parameters constant and raises the ISO, isn’t this equivalent to using a faster film, and therefore one would, relative to the earlier ISO, be overexposing? What am I missing?

Tim’s Answer: I would be more than happy to clarify.

When I was referring to the notion of raising the ISO, resulting in an under-exposed image, I wasn’t trying to suggest that raising the ISO setting actually caused the image to be darkened. Rather, I was referring to the impact of ISO on overall exposure and image quality.

I think it will be helpful to talk about specific exposure settings in order to help clarify. So, let’s assume a “sunny 16” exposure with an aperture of f/16, a shutter speed of 1/125th of a second, and an ISO setting of 100.

If I raise the ISO setting by two stops (to 400) and adjust other settings to compensate, I might end up with an aperture still set to f/16 but a shutter speed of 1/500th of a second. So, you could reasonably suggest that the faster shutter speed (the shorter exposure duration) would cause the image to be darkened, but that the higher ISO setting caused the image to be brightened to the same degree, resulting in an exposure that is exactly the same as would be achieved with the prior settings.

The key thing that I think photographers need to understand is how each setting affects the final image, and that is why I refer to the “underexposure” issue when you raise the ISO setting. More on that in just a moment.

The aperture primarily affects, of course, the depth of field in the scene. The shutter speed has primary control over the degree to which motion is frozen (or not) in the photo. And the ISO setting determines (in many respects) the amount of noise in the photo.

When you raise the ISO setting you are making a change that will have a brightening effect on the photo, all other things being equal. But you aren’t doing so by “magically” increasing the sensitivity of the image sensor.

So, to my point about raising the ISO resulting in a reduced exposure, let’s take a look at the exposure settings referenced above. At an ISO setting of 100 I referenced an aperture of f/16 and a shutter speed of 1/125th of a second. Raising the ISO to 400 resulted in a change to a shutter speed of 1/500th of a second at f/16.

But going from a shutter speed of 1/125th of a second to a shutter speed of 1/500th of a second represents two stops of exposure reduction. We’ve caused two stops less light to actually reach our image sensor. The image sensor can’t magically collect more light, or be more sensitive to the light. The result is that we’re actually taking a photo that is two-stops under-exposed, and the camera is then applying amplification to the signal information that was recorded to create the effect of a brighter exposure. In the process, noise will result.

To be fair, today’s digital cameras do a remarkable job of applying amplification through higher ISO settings without creating excessive noise. And there are a variety of ways you can mitigate the noise after the fact. But if you think of a higher ISO setting as representing an underexposed image that needs to later be brightened considerably, I think (and hope) it will provide a useful way for you to evaluate the ISO setting relative to other exposure settings. In other words, I hope this information helps encourage you to avoid raising the ISO setting on your camera unless it is necessary for your other exposure goals, in order to minimize the amount of noise in the final image.

Calibrating a Projector

Today’s Question: Our club has a good quality digital projector but some people are not always happy with how their images look when projected. Would it make a substantial difference if we took the extra step of calibrating it or will we always see a difference between the projected image and the display on a calibrated monitor?

Tim’s Answer: Calibrating and profiling your digital projector will indeed have a tremendous impact on the accuracy and consistency of the display of projected images.

Put simply, if all club members calibrate the monitor display they use for reviewing and optimizing their photos, and you calibrate the projector being used to display the images at your club meetings, you can expect a very good match between what the photographer created on their own computer and what is being displayed by your projector.

The key is to make sure everyone is calibrating to the same target values. For example, you could specify that everyone should calibrate to a color target of 6500 Kelvin and a luminance of 100 candelas per square meter (cd/m²). If everyone uses the same values, you will achieve a high degree of consistency across multiple displays.

To actually calibrate the digital projector, you’ll need a monitor calibration package that supports digital projectors as well as standard displays. One such package that works very well is the X-Rite ColorMunki Display package, which you can find here:

http://timgrey.me/colormunkidisplay

Low-Light Options

Today’s Question: When one is confronted with extreme low-light conditions, could you discuss the pros and cons of shooting at high ISO with in-camera high ISO noise reduction engaged versus purposely underexposing a photograph at lower ISO and subsequently correcting for the underexposure and reducing the noise during post-processing?

Tim’s Answer: When you raise the ISO setting in your digital camera, you are effectively under-exposing the photo, possibly to an extreme degree. Therefore, it is worth considering (as suggested in today’s question) how to minimize the risk of noise associated with that under-exposed photo.

The basic choice here is how to compensate for an under-exposed photo. Your two options are to either increase the ISO setting in the camera, or to leave the ISO setting at a low value (with a photo that is therefore underexposed) and save all adjustments for your image optimization workflow after the capture.

Put simply, you will generally get better results (often much better results) by raising the ISO setting as compared to simply under-exposing. To be sure, it is best to use the lowest ISO setting possible for the conditions in order to minimize the amount of noise in a given photo, because raising the ISO setting translates into amplification of the signal being gathered by the image sensor. That amplification translates to increased noise in the image.

However, while a high ISO setting increases the amount of noise in a photo, severely underexposing the image will produce (in most cases) far worse results. This was actually the subject of an article called “ISO Illustrated” that I published in the December 2013 issue of Pixology magazine.

I most certainly recommend keeping the ISO setting as low as possible to minimize noise. However, that doesn’t mean using a shutter speed that is too slow, or underexposing the image. When the situation requires a higher ISO setting to achieve a proper exposure, by all means raise the ISO setting. You may need to mitigate the noise in post-processing, and even with noise reduction the photo may not exhibit optimal quality. But the quality will still be better than if you had simply kept the ISO setting at a low value and under-exposed the photo.

Batch HDR

Today’s Question: Is there any way I can batch process multiple image sets through HDR Pro in Photoshop, or am I restricted to one set at a time?

Tim’s Answer: In theory there are a few possible ways you might batch process your high dynamic range (HDR) captures using Photoshop or other HDR-processing software. However, I don’t recommend this approach, for two main reasons.

First, HDR processing can be rather labor-intensive. Especially when merging the data to a 32-bit per channel source HDR image (which then gets tone-mapped to produce the final result, or saved to be tone-mapped with other software), there is a tremendous amount of data being processed. In other words, it would be relatively easy to overwhelm even a very powerful computer system.

Second, when it comes to HDR processing there is tremendous variability in terms of the specific settings used for each set of photos. That is especially true when it comes to the tone-mapping phase of processing, where you’re applying adjustments and creative effects to the HDR image. However, it can also be an issue for the initial captures, especially as it relates to alignment, chromatic aberration adjustments, and ghosting removal.

Photoshop does not offer a batch-processing feature as part of the HDR Pro tool. I have seen some scripts that enable batch processing for HDR in Photoshop, and it is possible to batch-process images with Photomatix software as well (among other solutions, I’m sure).

But the bottom line in my mind is that you’re not missing anything, and that it is worth it (in my opinion) to “manually” process each set of photos for an HDR capture. I’ll add, by the way, that while HDR Pro in Photoshop does a good job of creating the initial HDR image, I consider other tools to be better at the tone-mapping portion of the process. For example, you can perform the tone-mapping work in Lightroom for a 32-bit per channel HDR image, or you might look at third-party products such as HDR Efex Pro (part of the Nik Collection from Google) for processing your RAW captures.