High Pass Filters


Today’s Question: Canon has introduced a new 5DS R, a camera without a high pass filter over the sensor. This is supposed to make the image sharper. In Photoshop we use the high pass filter to increase sharpness. Please explain.

Tim’s Quick Answer: The filter Canon has removed the effect of in the 5DS R is actually a low-pass (anti-aliasing) filter, not a high pass filter, and it works in the opposite direction from Photoshop’s High Pass filter. The low-pass filter over a camera’s image sensor removes some of the finest high-frequency detail projected by the lens in order to avoid artifacts such as moiré, softening the image slightly in the process. The High Pass filter in Photoshop, by contrast, doesn’t truly sharpen a photo. Rather, it removes low-detail areas of an image and retains high-detail areas, creating contrast in the process. That contrast can then be used to sharpen with the help of a blend mode.

More Detail: In other words, both filters work by separating low-frequency from high-frequency information in an image, but they keep opposite halves and are used in different ways. The low-pass filter over an image sensor removes high-frequency detail near the limit of what the sensor can resolve, which is helpful for preventing moiré patterns and other visible artifacts in a photo. However, because some fine detail is being filtered out, a degree of softness is imparted to the image. There is a tradeoff here, and many photographers would choose a higher degree of sharpness and detail over the reduced risk of certain artifacts in the image.

The High Pass filter in Photoshop also filters out low frequency (low detail) information in a photo, preserving (and enhancing) the high frequency (high detail) areas. The result is something of an embossed effect. When combined with one of the “contrast” blend modes (such as Overlay), this can create a sharpening effect. The light areas of the embossed copy of the photo will brighten one side of a contrast edge, and the dark areas of the embossed copy will darken the other side, resulting in the appearance of greater detail and sharpness.
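For readers who like to see the mechanics, here is a minimal Python sketch of that high-pass-plus-Overlay technique. It is my own illustration, not Photoshop’s actual implementation: it works on a one-dimensional row of grayscale pixel values for simplicity, and the function name and blur kernel are assumptions of the sketch.

```python
import numpy as np

def high_pass_sharpen(image, sigma=2.0):
    """Sharpen a 1-D row of pixel values (floats in 0..1) via a
    high-pass layer combined with an Overlay blend.

    sigma: blur radius, analogous to the Radius setting in
    Photoshop's High Pass filter.
    """
    # Build a normalized Gaussian kernel for the low-pass (blur) step.
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()

    # Low-frequency copy of the image (edge-padded so the borders
    # aren't darkened by the convolution).
    padded = np.pad(image, half, mode="edge")
    blurred = np.convolve(padded, kernel, mode="valid")

    # High pass = original minus blurred, recentered on middle gray:
    # flat areas land on 0.5, while edges swing above and below it.
    hp = np.clip(image - blurred + 0.5, 0.0, 1.0)

    # Overlay blend: middle gray leaves the base untouched, lighter
    # high-pass values brighten one side of an edge, darker values
    # darken the other side, boosting edge contrast.
    result = np.where(image < 0.5,
                      2.0 * image * hp,
                      1.0 - 2.0 * (1.0 - image) * (1.0 - hp))
    return np.clip(result, 0.0, 1.0)
```

Running this on a simple step edge (dark pixels next to light pixels) darkens the last dark pixel and brightens the first light pixel, which is exactly the halo of extra edge contrast that reads as sharpness.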

So, in both cases an image is being separated into low-frequency and high-frequency information, but which portion is kept, and to what end, is what distinguishes the two filters.

Resolution for Printing


Today’s Question: I read on a blog post that they thought 150 ppi is enough to print. Wouldn’t a resolution of at least 300+ produce a better image? Or is the print quality and ppi based solely on the size of the final output of the image – can a smaller print be printed at a lower ppi without sacrificing print quality?

Tim’s Quick Answer: If we use the pixel per inch (ppi) resolution as a form of “shorthand” for describing how much information is in a photo compared to the final print size, then we can use that ppi figure to talk about potential output size. To that end, I would consider 150 ppi to be too low a resolution value for most photographic prints. I consider values of around 200 to 250 ppi to be a good threshold for ideal output, and higher values are generally better. However, in most cases there is no need to go beyond about 360 to 400 ppi for output resolution.

More Detail: Part of the reason I think the general topic of resolution causes so much confusion for photographers is that we use so many different ways of explaining the same basic concepts. In addition, we often mean two different things when we use the term “resolution”.

Resolution can refer to both the total quantity of information as well as the density of information. When printing a photo, both of these concepts intersect.

To produce a print of a given size at optimal quality, you need a specific amount of information. If you’ll excuse the analogy, imagine a photo inkjet print as being composed of individual pixels on the printed page, something along the lines of tiny grains of colored sand you might otherwise glue to the paper to produce an image. It should be noted that photo inkjet printers use ink droplets of variable sizes, so this analogy doesn’t quite fit the reality of the situation. But hopefully it is a helpful analogy in any event.

Based on this analogy, we could say that a photographic print of a given size requires a specific number of grains of colored sand. While there are some added elements of complexity involved with a photo inkjet print, you can think of a print at a given size as requiring a specific amount of information. For a typical photo inkjet printer, that might require 360 pixels per inch. In other words, for a print at 8-inches by 10-inches, you would need an image that is 2,880 pixels by 3,600 pixels.

If you have less information than this, the printer (or the software being used to send the photo to the printer) will need to add information to the image. There is complex software at work behind the scenes to accomplish this task, but the bottom line is that if a photo doesn’t contain enough information to print at a given size, that information needs to be added in some way. Similarly, if a photo contains more information than is needed to produce a print at a given size, the “extra” information will be discarded.

Once you understand that a print at a given size requires (in a general way, at least) a given amount of information, you can choose to describe that information in a variety of ways. One way is to describe the amount of information in the image based on a pixel per inch (ppi) value.

For example, if your image was only 1,440 pixels by 1,800 pixels, you have much less information than you really need to produce the 8×10 print referenced above. If you do the math here, you’ll discover that instead of an image that represents 360 pixels per inch for the intended output size, you only have 180 pixels per inch.
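The arithmetic in both examples is simple enough to express directly. Here is a small Python sketch (the function names are just my own labels for the two directions of the calculation):

```python
def required_pixels(print_inches, ppi):
    """Pixels needed along one dimension for a print at a given ppi."""
    return round(print_inches * ppi)

def effective_ppi(pixels, print_inches):
    """The ppi you actually have for a given pixel count and print size."""
    return pixels / print_inches

# The 8x10-inch print at 360 ppi from the example above:
required_pixels(8, 360)   # 2,880 pixels on the short side
required_pixels(10, 360)  # 3,600 pixels on the long side

# A 1,440 x 1,800 pixel image printed at 8x10 inches:
effective_ppi(1440, 8)    # 180 ppi -- half the 360 ppi target
```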

The extent to which you can “get away with” having less information in your photo than is required to print depends on a variety of factors. That includes the degree to which your image is coming up short in terms of the amount of information (the number of pixels), the type of paper you’re printing to, the distance from which the final print will be viewed, the quality of the image you’re starting from, and other factors.

The bottom line is that your results may vary based on a variety of factors. However, in general I aim for having enough information in my image that I will at most need to enlarge by double the width and height (four times the total surface area). You can often produce even larger output, but the more you push the limits, the more likely you’ll be dissatisfied with the quality of the final print.

Print Brightness


Today’s Question: I still need to calibrate my monitor, but I have also read that simply bumping up the brightness of an image by 20% to 30% would be good enough to compensate for monitor brightness. Is that true?

Tim’s Quick Answer: As far as I’m concerned, increasing the brightness of an image for the sole purpose of correcting for a print that is too dark is something that should be considered a last resort. Such a step should only be taken after troubleshooting the real reason the print is too dark and not being able to properly resolve that issue. While making an adjustment can certainly result in a print that better matches what you see on your monitor, I would generally regard this approach as representing a bad color management workflow. Also, the monitor should absolutely be calibrated before even trying to consider whether a print is too dark in the first place.

More Detail: When it comes to producing an accurate print, the desired result is often described as having a print that matches the monitor display. That is certainly our goal when printing, but it doesn’t quite explain what’s really going on. And I think that additional detail can be helpful.

What we’re really doing when printing within the context of a color-managed workflow could better be described in two steps. First, we want to make sure our monitor is presenting an accurate display of the information contained within our image, so that the adjustments we apply are based on an accurate view of the photo. Second, we want to make sure that the printer is producing an accurate print based on the information contained in the photo.

When the print doesn’t match what we see on the monitor display, the first thing to do is figure out why. If your print is too dark, it is quite possibly the result of the display being too bright. And if your monitor display is inaccurate, resulting in a print that is inaccurate, it makes no sense to me to change the print settings so that the printer is no longer producing a print that matches the information contained in the photo.

In other words, if your monitor is too bright, the solution is to set the monitor to an appropriate brightness level, not to make the printer produce lighter prints.

The first step then, is to calibrate and profile the monitor display so that what you’re seeing is an accurate reflection of the information in your photos. Most monitors with their default settings are about twice as bright as they should be. That is a full stop of light too bright. Calibrating the display will provide compensation for this issue, so that the brightness of the display is appropriate. That, in turn, will help ensure that the adjustments you’re applying are appropriate to the actual photo you’re working on.

You also want to make sure, of course, that you’re using a good profile for the printer, ink, and paper combination you’re using for printing, and that you’ve established correct settings in the software you’re using to print and in the printer properties dialog.

The bottom line is that you want to be sure that any work you’re doing in the context of color management is helping to make all of your devices (monitors and printers, for example) more accurate. Taking this approach will help ensure predictable and consistent results both when making prints yourself, and when sending photos to someone else to be printed.

Display Calibration Options


Today’s Question: Two quick (I hope) questions about monitor calibration, based on an answer you provided recently: 1) You seem to suggest that I should disable the feature for my calibration software (ColorMunki) that will cause the display to adjust based on the current lighting conditions at my computer. Is that true? And 2) How often should I repeat the calibration for my display? The software seems to suggest every two weeks is necessary.

Tim’s Quick Answer: I do indeed prefer to disable the automatic ambient lighting adjustment for the display, and calibrating about every three months is more than adequate for a digital display, provided nothing has changed about your overall computer configuration.

More Detail: I fully appreciate that in theory it is a good thing to have your monitor display updated automatically to reflect the current ambient lighting conditions. For example, when the lighting in the room gets brighter because of the sun shining through the windows, the display will get brighter to compensate. The same holds true for the color of the display, again to compensate for the color of the ambient lighting conditions.

However, there are two basic reasons I prefer to disable this automatic adjustment. First, it introduces a degree of variability, and I very much prefer consistency in my color management workflow. Second, in my opinion it is much (much!) better to work in an environment with consistent lighting conditions, preferably in a relatively dark environment where outside light sources aren’t interfering with your monitor display. In other words, I prefer to make sure my working environment remains as stable as possible, rather than having my display calibration tool apply an automatic adjustment based on changing conditions.

As for the frequency of the calibration and profiling, this is something that changed dramatically when we transitioned from analog displays (such as CRT monitors) to digital displays (such as LCD monitors). Digital displays are much more consistent over time, and so you don’t need to calibrate and profile the display as frequently.

With analog displays I used to recommend calibration every one to two weeks, but I also knew photographers who would calibrate their display every single day. With digital displays, frankly it is perfectly reasonable to only calibrate every few months or so, provided there hasn’t been any change in the hardware or software configuration that would impact the appearance of the display.

The primary issue with digital displays is that they fade in brightness over time (although not very rapidly under normal circumstances). The color doesn’t tend to shift much at all, at least until the display starts to get toward the end of its useful life. Therefore, in most cases, calibrating every few months will ensure an accurate and consistent display.

Need a tool for display calibration? Check out the X-Rite ColorMunki Display: http://timgrey.me/colormunkidisplay

Metadata for Video


Today’s Question: Nowhere in Photoshop, Bridge, or Lightroom can I find my camera settings data for video that I shoot (e.g. ISO, aperture, shutter speed). Is there something I need to change in camera to have this data show up the way it does for still shots? Or does the camera not record this data when shooting video?

Tim’s Quick Answer: Video captures don’t generally contain the same metadata you’re accustomed to having for your still captures. But for Canon digital SLRs (and some other cameras) it is possible to view your capture settings in metadata.

More Detail: With most (or perhaps all) Canon digital SLR cameras, the capture metadata is saved in a THM (thumbnail) file that shares the same base filename as the video file that is created (for example, a file with an extension of MOV). However, even that THM file requires an “extra” step for you to be able to view the capture metadata. There are actually two approaches you could take.

The first is to use Canon’s Digital Photo Professional software to browse your video captures. As you might expect, since this software is from Canon, you can view the metadata captured by Canon digital SLRs using Digital Photo Professional.

The other approach is to simply change the filename extension for the THM file to JPG, so that other software tools (such as Adobe Bridge, for example) will be able to find the metadata. Yes, the THM file is really just a JPEG image with a different filename extension. So if you change the filename extension to JPG, you can then browse that JPEG image with any software that allows you to view metadata for images. Among that metadata, you’ll find all of the capture settings you are accustomed to seeing for your still images.
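If you have a folder full of video captures, this small Python sketch automates that step (the folder path and filenames shown are hypothetical). It copies rather than renames, so the original THM sidecar files stay in place alongside their MOV files:

```python
from pathlib import Path
import shutil

def copy_thm_as_jpg(folder):
    """Copy each Canon .THM sidecar to a .JPG so metadata browsers
    (Adobe Bridge, for example) will pick it up.

    Copying, rather than renaming, leaves the original THM file
    intact next to its matching video file.
    """
    folder = Path(folder)
    created = []
    for thm in sorted(folder.glob("*.THM")):
        jpg = thm.with_suffix(".JPG")
        if not jpg.exists():           # don't clobber an existing JPEG
            shutil.copy2(thm, jpg)     # copy2 preserves file timestamps
            created.append(jpg.name)
    return created

# Hypothetical usage:
# copy_thm_as_jpg("/Volumes/CARD/DCIM/100CANON")
```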

I’m not familiar with the options that might be available for other cameras, but in general you’ll find that there is much less support for metadata among video file formats. For example, Lightroom will not create an XMP sidecar file (nor write to the video directly) to add metadata to video clips the way it is able to for still images. The metadata for videos will only be retained in the Lightroom catalog.

So, in general, there isn’t as much flexibility when it comes to metadata for video files, at least compared to what we have become accustomed to for still captures. But, at least for video captured with a Canon digital SLR, that information can be found.

Display Brightness


Today’s Question: With today’s monitors, laptops, tablets and smartphones auto-adjusting brightness according to ambient light as well as power conservation, how do you recommend brightness be set so your work in Lightroom and Photoshop is consistent? I find on our gloomy Midwest days with breaks of bright sun here and there that I want to adjust my MacBook’s brightness up and down to compensate. But will that mean any exposure-related changes I might make while developing could look drastically incorrect when printed or viewed by someone else? Any suggestions?

Tim’s Quick Answer: Consistency is important when it comes to your display, especially when applying adjustments to your photos. I therefore highly recommend calibrating and profiling your display, working in a consistent environment that is relatively dark, and avoiding the urge to change the display brightness based on changing lighting conditions.

More Detail: Most computer displays with their default settings are about twice as bright as they should be from a color management perspective. This is a big part of the reason that calibrating and profiling your display is so important. If your display is twice as bright as it should be, you’re seeing your images one full stop of light brighter than they really are. Calibrating the display ensures that display is adjusted to a more appropriate brightness level, and profiling ensures color accuracy as well.

Once you have made those adjustments in the process of calibrating your display, you should not make changes to the display settings. Doing so would work against the calibration.

It is also important to ensure that your environment is not significantly impacting the computer display. Ideally, you should work in a slightly darkened environment, so that the display “overpowers” the ambient lighting conditions. Of course, I realize this isn’t always possible, but it is an ideal to strive for.

Many of the newer tools for calibrating and profiling your display include an option to automatically compensate for changes in the luminance or color of the ambient lighting conditions. However, I prefer not to make use of these options. While they can be very helpful in concept, their use suggests that you are working in an environment that has changing lighting conditions, and thus is not an ideal working space.

My approach is to try to always work in a relatively dim room, even if that means closing the blinds and turning off lights. This isn’t always practical, but I make an effort to ensure the environment is consistent (and relatively dark) whenever I’m making critical decisions about adjustments for my photos.

I do in general prefer a very bright display when performing general work (not adjusting photos) on my computer. When I increase the brightness for those other tasks, I make sure that when I’m finished I return the brightness setting to the setting that was established during the calibration process.

Resolution Revisited


Today’s Question: When I export from Lightroom trying to use a size that will look good on an iPad I understand that 2048 pixels on the long dimension is good to use. I am still unsure what dimension I should put in the resolution box. Say I want to use every available pixel in the image so it will look really good. It seems that there should be a difference between 50 and 500? I usually use 200, but it is only a guess.

Tim’s Answer: This is obviously a bit of a follow-up to Wednesday’s question. I thought that since resolution tends to be a subject that many photographers struggle with, it made sense to amplify the issues raised in yesterday’s email by addressing this question today.

In short, the pixel per inch (ppi) resolution for an image does not matter unless that image is being printed. What that means is that for an image you intend to share electronically, it doesn’t matter what you set the ppi resolution to. All that matters is how many pixels are in the image.

So, for example, if you are preparing an image to be shared on an iPad, it makes sense to size the image to match the pixel dimensions of the display. In the case of the latest iPad tablets, the resolution is 2048×1536 pixels. So you can size your images to 2048 pixels on the long edge to get great results, and you can set the ppi resolution to anything you like. Regardless of the ppi resolution you set, the image will still be 2048 pixels on the long side, and so will be sized to match the number of pixels (at least on the long edge) for the display.

In other words, for a digital display you’ll get the best results when there is one pixel in the image for each pixel on the display. That way, you have an image that matches the “size” of the display in terms of the amount of information, and you can expect the best display quality for the image.
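As a quick illustration, here is a small Python helper (the function name and the 6000×4000 capture size are my own examples) that sizes an image’s pixel dimensions to a display’s long edge. Note that no ppi value appears anywhere in the calculation:

```python
def fit_long_edge(width, height, target_long=2048):
    """Scale pixel dimensions so the long edge matches the display.

    The ppi value is deliberately absent: for on-screen viewing,
    only the pixel dimensions matter.
    """
    long_edge = max(width, height)
    scale = target_long / long_edge
    return round(width * scale), round(height * scale)

# A hypothetical 6000 x 4000 capture sized for a 2048 x 1536 iPad display:
fit_long_edge(6000, 4000)  # -> (2048, 1365)
```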

I should add that some software does actually look at the ppi resolution value when you add an image to a document, even if the aim is not printing. As far as I’m concerned that shouldn’t be the case, but in some cases software will adjust the apparent size of the image based on the ppi resolution value. But that doesn’t change the number of pixels in the image, and thus has no impact on the quality of the image if it is sized based on the actual pixel dimensions.

So, for printing you should absolutely set the ppi resolution based on the type of printed output you’re producing. But for sharing images electronically, you don’t have to worry about the ppi value at all. In those cases, I recommend setting the ppi resolution value to your lucky number, just for fun.

Transparent Canvas


Today’s Question: I suddenly have a problem [in Photoshop] that I haven’t experienced before when adding extra canvas to an image. When adding extra canvas to a flattened image I’m getting the transparency chequerboard instead of a pure white border. The ‘Canvas Extension Color’ at the bottom of the dialogue box is greyed-out (not active). I’m sure there is a very simple solution that I’m overlooking here.

Tim’s Answer: In a way this is two questions in one. Sometimes you might actually want transparent pixels for the additional canvas around an image, and sometimes you might want actual pixels to fill in that area, so it is helpful to understand how this option works. The key is the presence (or lack thereof) of a Background image layer.

If the image includes a Background image layer, the Canvas Size dialog (accessible by choosing Image > Canvas Size from the menu) will include the option to set a color to be used for the new canvas area that will be added to the image. The default option is to use the current background color (white by default) for that canvas area, but you can choose a different option from the “Canvas extension color” popup if you prefer.

If the image does not include a Background image layer, the canvas you add to the image will be transparent, and the “Canvas extension color” popup will be disabled.

This, of course, leads to the question of how to change an image layer into a Background image layer, and how to change a Background image layer to a “normal” image layer. You can make this change by first making sure the layer you want to convert is the active layer on the Layer panel. You can make a layer active by simply clicking on the thumbnail for the layer on the Layers panel.

To change the status of the active layer to or from a Background image layer, you can then first choose Layer > New from the menu. On the submenu that appears you can choose “Background from Layer” if the active layer is not currently a Background image layer, or “Layer from Background” if the layer is currently a Background image layer. These two options are actually one item on the menu, with the specific option shown dependent upon whether the current image contains a Background image layer.

In the case of the example cited in today’s question, the image layer (even though the result of flattening) is not a Background image layer. Therefore, you could choose Layer > New > Background from Layer to convert the layer to a Background image layer, so that you can set the color of the canvas area you add to the image. If you wanted to add transparent canvas area for a different image, you would simply want to make sure the Background image layer is converted to a “normal” layer by choosing Layer > New > Layer from Background from the menu.

Resolution and File Size


Today’s Question: The same image exported twice from Lightroom with same pixel dimensions but one at 72 ppi and one at 300 ppi are the same file size. I would have thought that the 300 ppi file would have been larger. Why isn’t it?

Tim’s Answer: The issue of resolution continues to be one of the more common sources of confusion in photography, in part because information about an image is often presented in a way that can be a little confusing.

When it comes to the pixel per inch (ppi) resolution for an image, I think it is best to think about this as simply being a metadata value. It has absolutely no bearing whatsoever on the “real” information in your photographic image.

The overall size of an image is determined by how many pixels are in that image. Adding confusion, that total volume of pixel information is often referred to as resolution as well. In other words, sometimes the term “resolution” refers to the total volume of information (pixel dimensions, megapixels, etc.), and sometimes it refers to the density of information (how many pixels per inch, for example).

Forget about printing for the moment, and think about the image size as it appears at a 100% scale in Photoshop (for example) as well as the size of the file saved on your hard drive. The number of pixels is the primary factor here.

This makes sense when you consider how the appearance of an image in Photoshop at a 100% zoom setting changes based on how many pixels are in the image. If we have a square image that is 10 pixels on each side, that image will look very small in Photoshop even at a 100% zoom setting. A square image that is 10,000 pixels on each side will look very different, with the image being so big that we can only see a small portion of the image when viewed at a 100% zoom setting.

If the pixel dimensions remain the same, with only the pixel-per-inch resolution changing, the file size will not change. The ppi resolution only affects (for the most part) how the image is printed. In other words, when you send that 10,000 pixel-per-side image to the printer, how do you want the pixels spread out on the page? If you spread them out really far (perhaps only 72 pixels per inch) you’ll be able to make a very big print, but the quality won’t be very good. If you keep the pixels pretty close together (perhaps 360 ppi) you’ll have a smaller print, but that print will have great image quality.
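That tradeoff is easy to express in code. A small Python sketch using the 10,000-pixel example above (the function name is just for illustration):

```python
def print_size_inches(pixels_wide, pixels_tall, ppi):
    """Print dimensions when spreading a fixed pixel grid at a given ppi.

    The pixel count never changes; ppi only determines how far apart
    those pixels land on the page.
    """
    return pixels_wide / ppi, pixels_tall / ppi

# The same 10,000-pixel-square image, spread out two different ways:
print_size_inches(10000, 10000, 72)   # ~138.9 x 138.9 inches, low quality
print_size_inches(10000, 10000, 360)  # ~27.8 x 27.8 inches, high quality
```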

In both examples above, the number of pixels didn’t change, so the file size would be the same (all other things being equal). All that changed is a simple metadata value that provided information on how the pixels should be distributed on the page when printed.

There are, of course, other factors that impact file size. These include (among other things) the bit depth of the image, compression applied to the image, and whether layers are included with the image. However, all other things being equal, changing the ppi resolution for an image will have absolutely no impact on file size. The number of pixels in an image is the key factor in overall file size, as well as for the potential output size for the photo.

It is worth noting, by the way, that if you had specified the output size in inches (for example) instead of pixels, this would have made a difference. For example, to create a square image that is one inch on each side at 300 pixels per inch, the resulting pixel dimensions will be 300×300 pixels. At ten inches on a side at 300 pixels per inch, the resulting pixel dimensions would be 3,000×3,000. And the ten inch image at 72 pixels per inch would be 720×720 pixels. So describing image dimensions in inches at a given pixel per inch resolution may result in different pixel dimensions. But if pixel dimensions are fixed, the file size is fixed (all other things being equal, of course).

Exposure and ISO


Today’s Question: In an earlier Q&A [from January 23rd] you refer to underexposing by raising the ISO.

There is some confusion here somewhere. If one keeps all other parameters constant and raises the ISO, isn’t this equivalent to using a faster film, and therefore one would, relative to the earlier ISO, be overexposing? What am I missing?

Tim’s Answer: I would be more than happy to clarify.

When I was referring to the notion of raising the ISO, resulting in an under-exposed image, I wasn’t trying to suggest that raising the ISO setting actually caused the image to be darkened. Rather, I was referring to the impact of ISO on overall exposure and image quality.

I think it will be helpful to talk about specific exposure settings in order to help clarify. So, let’s assume a “sunny 16” exposure with an aperture of f/16, a shutter speed of 1/125th of a second, and an ISO setting of 100.

If I raise the ISO setting by two stops (to 400) and adjust other settings to compensate, I might end up with an aperture still set to f/16 but a shutter speed of 1/500th of a second. So, you could reasonably suggest that the faster shutter speed (the shorter exposure duration) would cause the image to be darkened, but that the higher ISO setting caused the image to be brightened to the same degree, resulting in an exposure that is exactly the same as would be achieved with the prior settings.
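To make that bookkeeping concrete, here is a small Python sketch (my own illustration, not part of the original answer) expressing the two setting changes in stops:

```python
import math

def stops_between(a, b):
    """Number of photographic stops between two linear values.

    Positive means b represents more (amplification or light)
    than a; each stop is a factor of two.
    """
    return math.log2(b / a)

# Raising ISO 100 -> 400 is +2 stops of amplification...
iso_stops = stops_between(100, 400)          # 2.0

# ...compensated by shortening the exposure from 1/125 s to
# 1/500 s, which is -2 stops of actual light reaching the sensor.
shutter_stops = stops_between(1/125, 1/500)  # -2.0

# The two cancel, so the final image brightness is unchanged,
# even though less light was captured.
net = iso_stops + shutter_stops              # 0.0
```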

The key thing that I think photographers need to understand is how each setting affects the final image, and that is why I refer to the “underexposure” issue when you raise the ISO setting. More on that in just a moment.

The aperture primarily affects, of course, the depth of field in the scene. The shutter speed has primary control over the degree to which motion is frozen (or not) in the photo. And the ISO setting determines (in many respects) the amount of noise in the photo.

When you raise the ISO setting you are making a change that will have a brightening effect on the photo, all other things being equal. But you aren’t doing so by “magically” increasing the sensitivity of the image sensor.

So, to my point about raising the ISO resulting in a reduced exposure, let’s take a look at the exposure settings referenced above. At an ISO setting of 100 I referenced an aperture of f/16 and a shutter speed of 1/125th of a second. Raising the ISO to 400 resulted in a change to a shutter speed of 1/500th of a second at f/16.

But going from a shutter speed of 1/125th of a second to a shutter speed of 1/500th of a second represents two stops of exposure reduction. We’ve caused two stops less light to actually reach our image sensor. The image sensor can’t magically collect more light, or be more sensitive to the light. The result is that we’re actually taking a photo that is two-stops under-exposed, and the camera is then applying amplification to the signal information that was recorded to create the effect of a brighter exposure. In the process, noise will result.

To be fair, today’s digital cameras do a remarkable job of applying amplification through higher ISO settings without creating excessive noise. And there are a variety of ways you can mitigate the noise after the fact. But if you think of a higher ISO setting as representing an underexposed image that needs to later be brightened considerably, I think (and hope) it will provide a useful way for you to evaluate the ISO setting relative to other exposure settings. In other words, I hope this information helps encourage you to avoid raising the ISO setting on your camera unless it is necessary for your other exposure goals, in order to minimize the amount of noise in the final image.