Custom White Balance

Today’s Question: You made reference to setting a custom white balance in the camera based on a gray card. How can this be done?

Tim’s Quick Answer: The specific process will vary from one camera to the next, but the general approach is the same. You simply capture a photo of a gray card (or similar neutral object) and then use that photo as the basis of a custom white balance setting in the camera.

More Detail: The concept here is that if you capture a photo of a neutral object (such as a gray card) under the same lighting conditions as the subject you will be photographing, the color of that object as it is captured can be used as the basis of an automatic white balance adjustment for subsequent photos captured under the same conditions.

While a gray card is certainly a great tool to use in this process, it is also possible to employ any other neutral (non-colored) object, such as a bright white sheet of paper. Perhaps the most important part of this process is to be sure you are photographing the gray card (or alternative) under the same lighting conditions that will illuminate your subject.

In other words, this process works best when the lighting on your subject is relatively consistent and uniform. You can then place the gray card in that light, and fill the camera’s frame with the gray card. Once you have captured a photo of the gray card under the lighting conditions for your subject, you can set that photo as the basis of a custom white balance adjustment.

Again, the specific process will vary by camera model, but in general you can simply choose the custom white balance option within your camera’s menu system and then select the appropriate photo as the basis of that custom white balance.

I should hasten to add that this custom white balance option is different from the similarly named option that enables you to dial in a specific Kelvin value for the white balance compensation. The process described above offers a relatively automated approach to compensating for the color of light on your subject through the use of a “sample” photograph captured under the same conditions.

Color Accuracy

Today’s Question: If you are using Lightroom and you set up your camera’s white balance using a gray card, is it still necessary to use the ColorChecker Passport to build a color profile to get accurate colors?

Tim’s Quick Answer: That depends on the degree of accuracy you require. For situations (such as product photography) where color accuracy can be critical, I would recommend building a profile based on the X-Rite ColorChecker Passport (http://timgrey.me/checkerpassport). For situations where the color accuracy in the original capture isn’t as critical, using a gray card is generally adequate.

More Detail: When using a gray card (or other color-neutral object) as the basis of a white balance adjustment either in the camera or in post-processing, you’re only applying a simple compensation to the color in the image that will produce a neutral value for that gray card. The X-Rite ColorChecker Passport includes a series of color swatches that can be used to apply a more accurate color adjustment to your photos.

In many cases, especially when you are exercising a degree of artistic interpretation for the colors in a photo, the accuracy provided by the ColorChecker Passport is generally not necessary. By using a gray card to compensate for the color of the light illuminating the scene you’ll be able to get a reasonably accurate result.

What you’ll find is that a gray card enables you to compensate rather effectively for the color of the light source. But the individual color values may still not be quite perfect. Because of the multiple color swatches on the ColorChecker Passport, individual colors will appear more accurate after applying the profile. This will produce a subtle (but sometimes important) shift in some of the individual colors within a photo.

It is worth pointing out, of course, that in some cases you don’t really need to use a gray card or other approach to compensate for the color of light. After all, using this type of approach is focused on removing the color element of the light illuminating the scene, and in many cases that color is a big part of the reason you captured the image in the first place.

Hiding the Grid

Today’s Question: Somehow I’ve accidentally activated a ‘Grid’ that is overlaying every image that I open in Photoshop. I have gone into ‘View’ but the ‘Clear Guides’ command is not active (greyed out).

Tim’s Quick Answer: In this case it sounds like it is actually the Grid display option that is an issue, not the Guides. So all you need to do is choose View > Show > Grid from the menu to disable the Grid display.

More Detail: Photoshop provides you with a variety of non-printing display options, primarily aimed at helping you align various objects within a document. Of course, when you enable one of these options by mistake it can become a bit distracting.

For example, one of the other grid options is the Pixel Grid, which can also be found on the View > Show submenu. You’ll find other options as well, some of which you may find helpful from time to time.

There is also an “Extras” option on the View menu, which you can think of as something of a “master switch” for the various display options found on the View > Show submenu. In other words, if you like to enable some of the options found under the View > Show menu, you can temporarily disable (or re-enable) all of them by choosing View > Extras from the menu.

It is worth noting that these various display options can be toggled on or off. When an option is enabled, you will see a checkmark icon next to the name of that option on the menu. When an option is disabled you will not see that checkmark.

Also note that some of these view options have keyboard shortcuts associated with them, which you can see to the right of the applicable options on the menu. Those keyboard shortcuts provide an additional option for quickly enabling or disabling specific options. I suspect that in most cases when one of these options is activated by accident, it happens by pressing a keyboard shortcut unintentionally.

Image Stabilization Benefit

Today’s Question: With reference to your answer about a rule for minimum shutter speed relative to lens focal length, how does the presence of image stabilization in the lens impact this? In other words, can’t I use a slower than recommended shutter speed when employing image stabilization?

Tim’s Quick Answer: Image stabilization technology does provide a benefit that could enable you to employ a slower shutter speed (longer exposure duration) than would otherwise be possible when shooting hand-held (and even when shooting on a tripod in many cases). However, my preference is to treat this benefit as a “bonus” and to continue following the rule of thumb about minimum shutter speed relative to focal length.

More Detail: As a reminder, the rule of thumb about minimum shutter speed relates to lens focal length. More accurately, this rule relates to field of view. With a narrower field of view, any movement of the camera will essentially be magnified, requiring a faster shutter speed to ensure a sharp image. As a general guideline, it is recommended that the lens focal length be used as a minimum value for the denominator in the shutter speed. So, a 100mm lens would call for a 1/100th of a second or faster shutter speed, and a 300mm lens would call for a 1/300th of a second or faster shutter speed.
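The arithmetic behind this rule of thumb can be sketched in a few lines of Python (the function name here is my own, purely for illustration):

```python
from fractions import Fraction

def min_shutter_speed(focal_length_mm):
    """Rule of thumb: the minimum hand-held shutter speed is
    1/(focal length) seconds, so a 100mm lens calls for 1/100
    second or faster."""
    return Fraction(1, focal_length_mm)

# The examples from the text:
print(min_shutter_speed(100))  # 1/100
print(min_shutter_speed(300))  # 1/300
```

Note that the rule is really about field of view, so on a camera with a cropped sensor you would apply it to the effective (crop-adjusted) focal length.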

Image stabilization technology is generally promoted as providing a benefit expressed as a number of stops of light. You might achieve a benefit of anywhere from one stop to about five stops, at least according to marketing materials from various manufacturers. For illustrative purposes, let’s assume a two-stop benefit from a given image stabilization technology.

With a two-stop benefit you could use a slower shutter speed than would otherwise be possible. So with a 100mm lens you could use a 1/25th of a second shutter speed rather than 1/100th of a second. With a 300mm lens you could use a 1/75th of a second shutter speed rather than 1/300th.
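This stop-based arithmetic, where each stop of stabilization doubles the usable exposure time, can be sketched as follows (again, a hypothetical helper purely for illustration):

```python
from fractions import Fraction

def stabilized_shutter_speed(focal_length_mm, stops):
    """Slowest usable hand-held shutter speed given a claimed
    stabilization benefit in stops. Each stop doubles the usable
    exposure time, so a two-stop benefit allows a 4x longer
    exposure than the 1/(focal length) rule of thumb."""
    return Fraction(1, focal_length_mm) * 2 ** stops

# The two-stop examples from the text:
print(stabilized_shutter_speed(100, 2))  # 1/25
print(stabilized_shutter_speed(300, 2))  # 1/75
```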

While I certainly appreciate the benefit of image stabilization technology, I also realize there are limitations and a variety of other real-world issues that may affect the sharpness of my photos. Therefore, I prefer to follow the rule of thumb about shutter speeds relative to focal length without taking image stabilization into account.

As a result, any benefit caused by image stabilization becomes a “bonus” benefit, further increasing the chances of capturing a sharp photo when shooting hand-held.

Rotate in Develop?

Today’s Question: When I’m reviewing my photos in the Library module in Lightroom I have rotate buttons below the image preview. But when I go to the Develop module those rotate buttons disappear. Is there no way to rotate photos within the Develop module?

Tim’s Quick Answer: While the rotate buttons are not available on the toolbar when working in the Develop module in Lightroom, you can still rotate images by using a menu command or a keyboard shortcut.

More Detail: In most cases, of course, you aren’t likely to need to rotate your photos in 90-degree increments, because your camera will have set a rotation flag automatically for your photos. But in some cases you may find that the rotation wasn’t applied correctly, or that you want to alter the rotation for an abstract photo.

When reviewing your photos in the Library module you can have rotation control buttons on the toolbar below the image display (or on the thumbnails themselves). You can hide or reveal these rotation buttons when working in the Library module. Simply click the downward-pointing triangle at the far right of the toolbar below the image preview area and choose the “Rotate” options.

When you are working in the Develop module, however, these rotation buttons are not available. However, the menu commands for rotating your photos are still available. You can, for example, right-click on the thumbnail for a photo on the filmstrip and choose the “Rotate Left” or “Rotate Right” command. You can also go to the Photo menu on the main menu bar and choose the same options.

In addition, you can use keyboard shortcuts to rotate your images while in the Develop module. To rotate the image left (in 90-degree increments) hold the Ctrl key on Windows or the Command key on Macintosh and press the left square bracket key ([) on the keyboard. To rotate the image right, hold the Ctrl/Command key and press the right square bracket key (]).

Basic Panel Missing

Today’s Question: I have upgraded to Lightroom CC and am confused as to why I am not seeing the Basic panel in the Develop module [screenshot included].

Tim’s Quick Answer: In this case it sounds like you have hidden a panel section without realizing it. You can re-enable the Basic section by right-clicking on one of the headers of the right panel in the Develop module and choosing “Basic” from the popup menu that appears.

More Detail: Lightroom enables you to hide individual sections of the left and right panels in the various modules. This can be helpful if you want to reduce clutter by removing sections you never make use of. However, it can also be confusing if you manage to hide a section by accident.

Fortunately, it is very easy to reveal a hidden panel section once you know the “trick”. To hide or reveal sections on a given panel, right-click on the header of any section. For example, you could right-click in the area where you see the “Basic” header or the “Tone Curve” header when looking at the right panel in the Develop module.

When you right-click on a header on one of the panels, you will see a popup menu with a list of all of the available sections for that panel. In the case of the right panel in the Develop module, for example, you will see Basic, Tone Curve, Adjustments, Split Toning, Detail, Lens Corrections, Effects, and Camera Calibration.

A checkmark icon to the left of these section names indicates that the section is enabled. If there is no checkmark it indicates the section is hidden. You can toggle any section between being hidden or revealed by clicking the applicable option from the popup menu. So, in this example with the Basic section missing you could simply choose “Basic” from that popup menu to reveal the Basic section on the right panel in the Develop module once again.

In-Camera Noise Options

Today’s Question: As a follow-up to the May 12 question about in-camera versus post processing long exposure noise reduction, do you have the same opinion on high ISO noise reduction?

Tim’s Quick Answer: No, there is an important distinction between long exposure noise reduction and high ISO noise reduction in the context of noise reduction applied in the camera. The high ISO noise reduction in the camera does not directly affect RAW capture data. Results will vary, but in general I find that I prefer to apply the high ISO noise reduction in post-processing rather than in the camera.

More Detail: The long exposure noise reduction I referred to in the May 12th edition of the Ask Tim Grey eNewsletter does apply to RAW captures. The second “dark frame” exposure is used to subtract noise from the image in the camera, changing the information in the RAW capture.

High ISO noise reduction operates differently. For JPEG captures this noise reduction will of course affect the original JPEG capture. For RAW captures, however, the high ISO noise reduction will not affect the actual data recorded by the image sensor. Instead, the setting is recorded as special metadata within the image, and in general the only way to make use of that metadata is to process your RAW captures with the software provided by your camera’s manufacturer.

In other words, if you are using software such as Lightroom or Adobe Camera Raw to process your RAW captures, the high ISO noise reduction settings in your camera will not apply to those RAW captures at all.

As noted above, for the limited number of cameras I have tested for in-camera high ISO noise reduction, I have found that I am happier with the results I can achieve with post-processing noise reduction. For example, Lightroom and Adobe Camera Raw now provide excellent noise reduction, which I have generally found to be better than the in-camera noise reduction that is available.

So, while I do recommend making use of long exposure noise reduction in the camera, I generally don’t recommend using in-camera high ISO noise reduction. Note that even when you have applied long exposure noise reduction in the camera you will likely want to apply additional noise reduction when processing your photos after the capture.

Double Exposure Time

Today’s Question: In your answer about in-camera noise reduction you explained that a second “black frame” is captured. Doesn’t that mean you are essentially doubling the time it takes to capture a photo? Wouldn’t a 30-second exposure then take a full minute to actually capture? And isn’t that a good reason not to use in-camera noise reduction?

Tim’s Quick Answer: Yes, in-camera noise reduction does double the total time required to capture a photograph, but in my mind it is well worth the extra time in terms of the improvement in quality (relative to noise levels) in the final capture.

More Detail: As noted in the previous edition of the Ask Tim Grey eNewsletter, in-camera noise reduction generally operates by capturing a “black frame”, which is essentially a duplication of the “real” capture with the shutter closed to prevent light from reaching the image sensor.

This black frame provides information about how the image sensor is performing in terms of noise in the same overall conditions as the photo you are capturing. That information is then used by the camera to subtract the noise from the capture.

To be sure, there are situations where it is more important to be able to work as quickly as possible, or to be able to capture more photos within a given timeframe. But when it comes to overall image quality I consider in-camera noise reduction to be tremendously helpful for long exposures.

You will generally find that in-camera noise reduction is not employed until you reach exposure times of around thirty seconds or more. As a result, the impact of the second black frame capture is somewhat significant in terms of total time. It is also significant, however, in terms of the reduction of noise in the capture.

So, recognizing that using in-camera noise reduction essentially doubles the amount of time required for each long exposure capture, I do recommend making use of this feature whenever time allows.

Noise Reduction Options

Today’s Question: I would like to know the difference between the long exposure noise reduction option in camera and the noise reduction with Lightroom or Photoshop. What is the best option for long exposure with a 10-stop ND filter?

Tim’s Quick Answer: These two types of noise reduction are actually fundamentally different, with the in-camera option relating to how the camera actually behaves while the post-processing option can only analyze pixel values in the photo. For long exposures I recommend employing both in-camera and post-processing noise reduction.

More Detail: The in-camera noise reduction option provides a unique advantage, in that it is able to compensate for the actual behavior of the image sensor at that specific time under the current conditions.

As you may be aware, there are a variety of factors that can impact overall noise, including the design of the image sensor, the duration of the exposure, and the amount of heat buildup on the sensor. In-camera noise reduction provides the best opportunity to compensate for these various factors. In most cases this function operates by essentially capturing two exposures. First, your actual exposure is captured. Then a “black frame” is captured, where a photo is captured with the same exposure duration, but with the shutter closed to prevent light from reaching the sensor.

This black frame exposure can then be used to determine the noise behavior of the image sensor under the current conditions, so that the camera can then process the actual capture to subtract out the noise. This is, of course, a rather sophisticated operation, and it can be very effective at reducing the noise in the initial capture.
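As a simplified illustration of the idea (the actual in-camera processing is far more sophisticated and operates on the raw sensor data), dark-frame subtraction can be sketched in Python as a per-pixel subtraction:

```python
def subtract_dark_frame(image, dark_frame):
    """Subtract a dark-frame capture from the real exposure,
    pixel by pixel, clamping at zero. Both arguments are flat
    lists of sensor values."""
    return [max(pixel - noise, 0) for pixel, noise in zip(image, dark_frame)]

# A hot pixel shows up in the dark frame (shutter closed), so it
# can be subtracted from the matching position in the real exposure.
exposure = [120, 118, 250, 122]   # third value inflated by a hot pixel
dark     = [2, 1, 130, 3]         # same hot pixel with the shutter closed
print(subtract_dark_frame(exposure, dark))  # [118, 117, 120, 119]
```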

When you are applying noise reduction in post-processing, you only really have the pixel information to work with. Thus, noise reduction software uses a variety of techniques to evaluate and reduce the appearance of noise in the photo.

Since both in-camera and post-processing noise reduction employ a different approach to reducing noise, and since they compensate for different limitations, I recommend using both of them. I generally consider in-camera noise reduction to be the more important of the two, but there will likely be some degree of problematic noise remaining in the image even after in-camera noise reduction has been applied. The careful application of additional noise reduction in post-processing can help ensure the best image possible from the perspective of noise.

Maximum Frame Rate

Today’s Question: Can you explain why a digital SLR that is limited to somewhere around ten frames per second for photos is able to shoot sixty frames per second for video?

Tim’s Quick Answer: There are two key factors here, which I’ll over-simplify in an effort to keep things as clear as possible. First, video is typically shot at a lower resolution than still photos, so there is less data to process per frame. Second, for still photos there is generally a lot more work being done by the camera, such as moving the mirror and potentially establishing autofocus for each frame.

More Detail: Let’s assume a “typical” digital SLR that has a 20-megapixel image sensor. That means that each photo you capture will contain 20 million pixels. If we assume a frame rate of ten frames per second, that translates into 200 million pixels recorded per second.

If we then assume full high definition (HD) video at 1080p, we would be recording frames that are 1920 by 1080 pixels in overall size. That translates into just over 2 million pixels per frame. Even if we assume a higher-than-typical frame rate of 60 frames per second, the total number of pixels being recorded per second with video is “only” about 124 million pixels. At 30 frames per second that value goes down to about 62 million pixels per second. So, fewer pixels being processed per second with video.

Of course, things do get a bit more complicated. For example, the new Canon EOS-1D X Mark II (http://timgrey.me/atg1dx2) supports 4K video up to 60 frames per second, while capturing still images with a 20.2 megapixel image sensor. The resolution of the 4K video in this case is 4096 by 2160 pixels. That is “only” about 8.8 megapixels. However, when shooting that video at 60 frames per second you are processing more pixels per second (about 530 million) than you would for still photos captured at the maximum frame rate of 16 frames per second (about 323 million pixels per second).
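The throughput comparisons above come down to simple arithmetic, which can be checked with a short Python sketch using the figures from the text:

```python
def pixels_per_second(width, height, fps):
    """Total pixels the camera must process per second at a given
    frame size and frame rate."""
    return width * height * fps

# Full HD (1080p) video at 60 fps: ~124 million pixels per second
hd_60 = pixels_per_second(1920, 1080, 60)

# 4K video at 60 fps: ~531 million pixels per second
four_k_60 = pixels_per_second(4096, 2160, 60)

# 20.2 MP stills at 16 fps: ~323 million pixels per second
stills = 20_200_000 * 16

print(round(hd_60 / 1e6), round(four_k_60 / 1e6), round(stills / 1e6))
```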

However, there are additional complexities here that make this possible. As noted above, there is generally more work being done by the camera when capturing individual still images in sequence. In addition, the “more” data being recorded for video doesn’t always translate into more data being stored on a memory card.

In the case of the Canon EOS-1D X Mark II, 4K video is captured in the Motion JPEG format, which employs compression to reduce overall file size as well as the overall amount of data being handled. With HD (1080p) the result is an MPEG-4 video.

So again, in general the answer here is relatively simple, because in most cases video represents less information being processed compared to still photos with a typical digital SLR. But, of course, there are more complicated issues involved in some cases, especially when it relates to how the video frames are processed and what specific tasks the camera must perform for still photos compared to video.