Relative Display Sharpness

Today’s Question: I know there are many variables, but generally speaking would a MacBook Pro laptop monitor be sharper than a 27-inch external monitor? Common sense would dictate that the laptop monitor will be sharper, but I just wanted to get your thoughts.

Tim’s Quick Answer: There are indeed many variables, but given similar display resolution and assuming similar overall quality, a smaller display will appear sharper than a larger display. In this case, that means the laptop display would appear sharper than the external monitor.

More Detail: The apparent sharpness of a monitor display is primarily a result of the overall pixel dimensions and the physical size of the display. There are certainly other factors, such as the quality of components, but effective pixel per inch resolution is the top factor.

To start with, I highly recommend opting for a monitor with 4K resolution or better. For a given display size, the greater number of pixels translates into greater pixel density, and therefore a sharper display, all other things being equal.

Monitor size also affects the sharpness of the display, especially if we’re assuming the same (or similar) overall resolution. If you compare a 27-inch and 32-inch display, both of which have 4K resolution, the smaller display will look sharper because of the greater pixel density.

The MacBook Pro mentioned in today’s question is a bit of a special case, in that while the laptop is available in two sizes (14-inch and 16-inch), those models also have different display resolutions (3,024 pixels across versus 3,456 pixels across, respectively). The result is that while the displays are different physical sizes with different pixel dimensions, the pixel density is the same at 254 pixels per inch.

A 27-inch monitor will have a display width of around 23.5 inches, though the exact width varies with the aspect ratio of the display. Assuming a width of 23.5 inches and full 4K resolution (3,840 pixels across), this display would have a pixel density of about 163 pixels per inch. Note, by the way, that for a typical 32-inch display with 4K resolution the pixel density drops to about 138 pixels per inch.
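
To make the pixel density arithmetic easy to check, here is a minimal Python sketch. The horizontal resolutions and 254 ppi figure come from the discussion above; the vertical pixel counts and exact diagonal sizes are the nominal panel specifications, which I'm supplying here as reference values.

```python
import math

def ppi(horizontal_pixels, vertical_pixels, diagonal_inches):
    """Pixel density in pixels per inch for a display with the given
    pixel dimensions and diagonal size."""
    diagonal_pixels = math.hypot(horizontal_pixels, vertical_pixels)
    return diagonal_pixels / diagonal_inches

print(round(ppi(3024, 1964, 14.2)))  # 14-inch MacBook Pro -> ~254 ppi
print(round(ppi(3456, 2234, 16.2)))  # 16-inch MacBook Pro -> ~254 ppi
print(round(ppi(3840, 2160, 27)))    # 27-inch 4K display  -> ~163 ppi
print(round(ppi(3840, 2160, 32)))    # 32-inch 4K display  -> ~138 ppi
```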

The real issue here has nothing to do with a built-in laptop display versus an external display. It really comes down to pixel density impacting the visual sharpness of the display. That means a higher resolution display will appear sharper, all other things being equal. And a smaller display will appear sharper, all other things being equal. The key is to strike the right balance between display size and display resolution, taking into account other factors such as the price of the different displays you’re considering.

And of course, I should hasten to add that pixel per inch resolution isn’t necessarily the most important factor either. There is also the ergonomic question of which display size is most comfortable for you, along with many other considerations.

Benefit of “Upgrading” Bit Depth

Today’s Question: Continuing with the discussion of 8-bit and 16-bit, for images captured at 8-bit, is there any advantage to converting them to 16-bit TIFF files for editing and then printing to photo paper?

Tim’s Quick Answer: While the potential benefits of converting an 8-bit per channel capture to 16-bit per channel mode would be minimal, I still consider it a best practice, especially in the context of converting a JPEG image to a non-compressed format for editing.

More Detail: There are theoretical benefits to converting an 8-bit per channel image, but from a practical standpoint the benefits are relatively minimal. However, considering there’s a good chance that an 8-bit capture is also a capture that used JPEG or other image compression, converting to another file format (such as TIFF) can help preserve image quality. I consider it a best practice to convert to 16-bits per channel as part of that process. Just keep in mind that the file size for a 16-bit per channel image will be double the file size for the same image in an 8-bit per channel file.

If you consider the numbers involved, converting an image from 8-bits per channel to 16-bits per channel sure sounds like a good thing. After all, you go from an image with just under 16.8 million possible colors to an image that has the potential to contain more than 281 trillion possible colors! But the image still only contains fewer than 16.8 million colors, even if it has greater potential.
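
Those color counts come straight from the bit depth: each channel can hold 2 raised to the bit depth values, and an RGB image has three channels. A quick Python check:

```python
channels = 3  # R, G, B
print(f"{(2 ** 8) ** channels:,}")   # 8-bit per channel: 16,777,216 (~16.8 million)
print(f"{(2 ** 16) ** channels:,}")  # 16-bit per channel: 281,474,976,710,656 (~281 trillion)
```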

As you apply adjustments to an image, the changes in pixel values can cause the total number of colors represented in the image to increase. That, in turn, can help lead to slightly smoother gradations of tone and color. But what you’re really gaining is a reduced risk of posterization, which is the loss of smooth gradations of tone and color.
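
To illustrate the posterization risk, here is a minimal NumPy sketch (my own illustration, not how any particular editor implements its adjustments). It applies the same strong contrast stretch to a smooth gradient quantized at 8 bits versus 16 bits, then counts the distinct tones that survive in the final 8-bit output:

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 100_000)  # a smooth tonal ramp

def adjusted_tone_count(levels):
    # Quantize the gradient to the given bit depth...
    quantized = np.round(gradient * (levels - 1)) / (levels - 1)
    # ...apply a strong contrast stretch (a steep "curve")...
    boosted = np.clip((quantized - 0.25) / 0.5, 0.0, 1.0)
    # ...and count distinct tones in the final 8-bit output.
    return len(np.unique(np.round(boosted * 255)))

print(adjusted_tone_count(2 ** 8))   # 8-bit source: roughly half the tones survive (banding)
print(adjusted_tone_count(2 ** 16))  # 16-bit source: all 256 output tones present (smooth)
```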

As noted above, there is also a slight benefit to converting a JPEG capture to a non-compressed format (such as TIFF, even with ZIP or LZW compression, since those are lossless). The primary benefit here is that you avoid the additional degradation in image quality caused by repeatedly saving (and therefore re-compressing) the image as additional changes are applied.
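
That generational loss can be demonstrated with a few lines of Pillow. This is a hypothetical sketch: the filenames, quality setting, and the small edit between saves are arbitrary choices of mine, meant only to mirror a typical edit-and-resave workflow.

```python
from PIL import Image, ImageEnhance

image = Image.open("original.jpg")  # hypothetical input file
for generation in range(10):
    # A small edit between saves, mirroring a workflow where changes
    # are applied and the JPEG is re-saved each time.
    image = ImageEnhance.Brightness(image).enhance(1.01)
    image.save("resaved.jpg", quality=75)  # lossy compression re-applied
    image = Image.open("resaved.jpg")
# Comparing resaved.jpg against original.jpg at the pixel level reveals
# the compounding compression artifacts described above.
```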

To be sure, converting an image from 8-bit to 16-bit per channel mode is not going to provide such a significant benefit that you would be able to tell the difference. Similarly, the compounding effect of JPEG compression being applied multiple times is not something you’d likely be able to make out without zooming in to the pixel level. But I still consider it a best practice to save images in a format without lossy compression, in the 16-bit per channel mode, and in a format that supports layers if you’ll be working in Photoshop.

Download with Drive Mismatch

Today’s Question: I went to import photos into Lightroom Classic without realizing that the drive letter had changed from F: to D: [this would be the equivalent of the volume label for Macintosh users]. Having caught the drive letter issue, I changed it to F:. That caused Lightroom Classic to recognize the 200,000 photos in my catalog, but the 600 photos I had just imported appear missing because Lightroom Classic expects them on the D: drive. How can I fix this?

Tim’s Quick Answer: You can resolve this issue very easily by using the “Find Missing Folder” command to reconnect the folder that Lightroom Classic thinks is on the D: drive so that it maps to the F: drive.

More Detail: If the drive letter (Windows) or volume label (Macintosh) for a hard drive is changed, this can create issues for Lightroom Classic, since the catalog tracks photos based on the hard drive and folder structure. This can be a particular challenge for Windows users, because if you connect external hard drives in a different order the drive letter assignments can change. You can generally resolve that by using the Disk Management utility to assign the intended drive letter as the “permanent” drive letter for the drive.

Fortunately, this issue is easy to fix in Lightroom Classic. In this particular example one folder is missing because Lightroom Classic thinks it is on the D: drive when it is actually on the F: drive with all the other photos. So, you just need to let Lightroom Classic know where the folder really is.

Start by right-clicking on the missing folder and choosing “Find Missing Folder” from the popup menu. In the dialog that appears, navigate to the correct hard drive (the F: drive in this example) and then to the applicable folder. Click the Choose button and Lightroom Classic will reconnect the folder that was expected on the D: drive so it appears on the F: drive, with the folder and the photos within no longer missing.

Setting Brightness for Online Sharing

Today’s Question: How can I determine the level of exposure for images to be posted online? I’ve been told to set exposure until the L [luminance] value in the histogram in Lightroom Classic is between 89 and 93 for white areas in the image. But many images lack white entirely. And of course, the online viewer’s device settings will affect how bright or dim the image appears.

Tim’s Quick Answer: My recommendation is to optimize photos to perfection based on your calibrated display and not apply adjustments for online sharing (or other digital sharing). This will provide an optimal experience for those who have also calibrated their displays, rather than trying to compensate for the tremendous variation among non-calibrated displays.

More Detail: When preparing a photo to be printed, it can be very helpful to apply adjustments to compensate for the output behavior of the printer, such as by lightening the black point and slightly darkening the white point. In my view these adjustments should be applied only based on specific testing of output conditions, not based on arbitrary rules of thumb.

For digital sharing I don’t recommend applying these types of compensating adjustments. Instead, I recommend optimizing based on an evaluation of the image on your calibrated display, based on your vision and intent for the image.

It is, of course, true that many photographers, and probably the vast majority of non-photographers, do not calibrate their displays. It is therefore quite common for their displays to be inaccurate, in many cases being about one exposure stop brighter than your calibrated display. However, in my view it doesn’t make sense to try to compensate for non-calibrated displays.

The way I see it, I’d rather have my images look accurate (based on my intent) for those with calibrated displays. While that means the images won’t look their best for those with non-calibrated displays, my assumption is that this simply puts your images in line with what they’re accustomed to seeing anyway.

So, my recommendation is to optimize your photos based on an accurate digital display, and not to degrade the images to accommodate those who will view them without calibrating their displays.

Adobe Price Increase

Today’s Question: I’ve heard that Adobe is increasing their Creative Cloud subscription prices, but I haven’t seen an official announcement on the subject. Do you know anything about this?

Tim’s Quick Answer: Yes, Adobe is increasing the pricing for their Lightroom plan and one of their Photography plans. However, the increase will only affect monthly subscriptions, not pre-paid annual subscriptions.

More Detail: It has been more than a decade since Adobe increased the cost for their Photography plan subscription, but some of the prices are going up effective January 15, 2025.

First off, I want to emphasize that pre-paid annual plans will not be affected by the price increase. Prices do vary by region, and here I’ll only reference prices in the United States in US Dollars.

There are three subscription plans in this overall category, two of which are affected by a price increase. The Photography plan with 20GB of cloud-based storage will increase from $9.99 per month to $14.99 per month, but the pre-paid annual price will remain at $119.88 per year (equivalent to $9.99 per month).

The price for the Photography plan with 1TB of cloud-based storage will not change. It remains at $19.99 per month or $239.88 for the pre-paid annual option (which is the equivalent of $19.99 per month, just paid upfront).

Both of the above Photography plans include all versions of Lightroom, including both the Lightroom and Lightroom Classic desktop applications, the Lightroom mobile app, and Lightroom for web browsers. They also include Photoshop for both desktop and iPad, and Adobe Portfolio for building websites to share your photos.

There is also a Lightroom plan that includes the Lightroom desktop application, Lightroom mobile and web, and Adobe Portfolio. With the new changes, Lightroom Classic will now be included in this plan as well. However, Photoshop is NOT included in this Lightroom plan. The pricing for this plan is increasing from $9.99 per month to $11.99 per month, but the pre-paid annual plan will not change, remaining at $119.88 per year (the equivalent of $9.99 per month).

You can get details on the various Adobe Photography plan options on the Adobe website here:

https://bit.ly/adobeprice2025

Avoid Importing to Wrong Location

Today’s Question: Sometimes I import pictures in Lightroom Classic to the wrong destination, like the internal hard drive instead of an external hard drive. Is there a way to create a list of preferred import targets so I won’t import to the wrong place if I don’t notice the settings?

Tim’s Quick Answer: There isn’t an option to prevent importing to specific storage locations, but you may find it helpful to use a preset for the Import feature to quickly establish your preferred settings.

More Detail: Most of the settings in the Import dialog are “sticky”, meaning when you bring up the Import dialog most of the settings will be the same as they were the last time you imported photos (or the last time you clicked the Done button without importing, rather than clicking Cancel). Exceptions are the Keywords field (which will be blank) and the Start Number field for renaming photos (which will reset to 1), as well as the Import Preset popup mentioned below.

However, you can preserve any and all settings for quick access by saving a preset in the Import dialog. Start by configuring the settings you’d like to preserve as your most common settings, such as specifying your preferred storage location for photos via the popup at the top-right of the Import dialog.

Once you’ve configured your preferred settings, click the Import Preset popup on the dark bar at the bottom-center of the Import dialog. Choose “Save Current Settings as New Preset” from the popup, enter a meaningful name in the Preset Name field of the New Preset dialog, and click the Create button.

With that preset created, whenever you bring up the Import dialog you can select the saved preset from the Import Preset popup. Granted, this still means you need to remember to check settings (at least the Import Preset popup) whenever you bring up the Import dialog, but if you get in that habit, you’ll help ensure you have a better starting point for all settings for each import.

Criteria for Importing Duplicates

Today’s Question: I find that Lightroom Classic will import duplicates if their names differ even if they’re identical, like from a checksum perspective. Sounds like I’m doing something wrong. Thoughts?

Tim’s Quick Answer: When you enable the option to not import suspected duplicates in Lightroom Classic, the images are only evaluated based on the original filename (if available), the capture date and time from metadata, and the file size. In this example the issue is probably that the files were renamed without retaining the original filename in metadata.

More Detail: Lightroom Classic does not use a checksum to compare images being imported to determine if they are duplicates of any other images already in the catalog. A checksum is a value based on an algorithm to describe the contents of a file, which can provide a more accurate way of determining whether two files are identical. Instead of a checksum, Lightroom Classic uses the original filename, the capture time, and the file size.
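
The difference between the two approaches can be sketched in a few lines of Python. This is purely illustrative; Lightroom Classic’s actual implementation isn’t public, and the function names here are my own.

```python
import hashlib
from pathlib import Path

def checksum_key(path: Path) -> str:
    """Content-based identity: byte-identical files always match,
    regardless of filename or metadata."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def metadata_key(original_filename: str, capture_time: str, file_size: int) -> tuple:
    """Identity based on the criteria described above. If the original
    filename wasn't preserved when the file was renamed, two identical
    images produce different keys and the duplicate goes undetected."""
    return (original_filename, capture_time, file_size)
```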

In this case I suspect the issue is that the filename was changed without preserving the original filename in metadata. If you use Lightroom Classic or Adobe Bridge to rename files, the original filename will be retained in metadata. This original filename can be found in the Preserved Filename field in the File Properties section of the Metadata panel in Bridge, or in the Original Filename field on the Metadata panel if you select “EXIF and IPTC” from the popup to the left of the Metadata heading.

If you rename with other software or through the operating system, the file may be renamed without preserving the original filename. In that case, Lightroom Classic will not identify files with different names as duplicates even if they are, because the filenames don’t match and the original filename isn’t preserved in metadata.

But as long as you rename photos only with software that preserves the original filename and don’t change the capture time, true duplicates should be detected by Lightroom Classic upon import, provided you keep the “Don’t Import Suspected Duplicates” checkbox turned on.

Preset Folder Confusion

Today’s Question: I have presets for Lightroom Classic in both the Camera Raw folder (Imported Settings) and Lightroom (Develop Presets). Can you explain for me why they are kept in two separate places in the file system?

Tim’s Quick Answer: This is an artifact of an older version of Lightroom Classic, as presets for both Lightroom Classic and Camera Raw are now stored in the same location, within the Camera Raw application support folder.

More Detail: When you save a preset in the Develop module in Lightroom Classic or in Camera Raw, a file containing the preset settings is saved in a Settings folder. The same folder is used for both Lightroom Classic and Camera Raw, and can be found here by default:

Windows: C:\Users\[Username]\AppData\Roaming\Adobe\CameraRaw\Settings

Macintosh: [Username] > Library > Application Support > Adobe > Camera Raw > Settings

Note, by the way, that presets you import rather than save are placed in the “Imported Settings” folder rather than the “Settings” folder.
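
If you’d like to see what’s stored in these folders, here is a small Python sketch based on the default paths above. It assumes presets are saved as .xmp files (the current preset format) and that the folder names match the paths listed; both could vary by version.

```python
import os
import sys
from pathlib import Path

# Default Camera Raw settings folder, per the paths listed above
if sys.platform == "win32":
    settings = Path(os.environ["APPDATA"]) / "Adobe" / "CameraRaw" / "Settings"
else:  # macOS
    settings = Path.home() / "Library" / "Application Support" / "Adobe" / "Camera Raw" / "Settings"

# List saved presets (imported presets live in "Imported Settings" instead)
for preset in sorted(settings.rglob("*.xmp")):
    print(preset.relative_to(settings))
```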

In early versions of Lightroom Classic, saved presets were stored in a separate folder for Lightroom, not within the Camera Raw folder. However, this was changed some time ago so that presets for both applications are stored in the same folder. If you have presets in the “old” location, those would have been left behind from an earlier version of Lightroom Classic, before the location was updated. That original folder location was similar to the path above, but instead of being in the Camera Raw folder the presets were stored in a Lightroom > Develop Settings folder.

If the presets in the old location are not reflected in the new location, you can move them to the correct location, so they’ll be available in both applications.

JPEG and Dynamic Range

Today’s Question: How does JPEG compression affect the dynamic range available in the overall image, and how does the display being used affect the characteristics of the final image?

Tim’s Quick Answer: JPEG compression doesn’t technically affect the dynamic range potential of the image, but does affect overall image fidelity and detail. The characteristics of the display being used have a potentially significant impact, though with modern displays this isn’t generally a major issue.

More Detail: In photography, dynamic range is most applicable in the context of capturing an image in the first place. In other words, dynamic range mostly relates to the range the camera is able to capture from the darkest shadows to the brightest highlights.

Once the image data is captured, the dynamic range potential of the digital image is somewhat fixed based on how the image is processed and the bit depth. And even the bit depth is more applicable to the smoothness of gradations rather than a strict dynamic range at this point. After all, a digital image can always have a dynamic range that extends all the way from pure black to pure white. What’s more important is how pixel values were mapped to the digital image based on capture data.

One of the significant issues with saving an image as a JPEG is that the image will only support an 8-bit per channel bit depth. That translates to only 256 shades of gray for a black and white image, and almost 16.8 million colors for an RGB image. This compares to 65,536 shades of gray for a 16-bit black and white image, and over 281 trillion possible colors for a 16-bit RGB image.

The reduced bit depth reduces the potential for detail in the image and increases the risk that gradations of tone and color won’t be smooth. Furthermore, JPEG compression reduces detail and perceived sharpness in the image.

When it comes to the digital display of an image (whether JPEG or something else), the attributes of that display play a key role. Display resolution affects the perceived quality of the image; for example, a 4K display will generally provide a crisper view of an image than a lower-resolution display. The dynamic range and color space capabilities of the display can also affect perceived image quality. For example, just because an image contains excellent shadow detail does not guarantee that the shadow detail will be visible on the monitor display, depending on the specifications and configuration of that display.

Dynamic Range versus Bit Depth

Today’s Question: Please explain the difference between “sensor dynamic range” and “image bit depth”. I hear a lot of confusion about those two measures.

Tim’s Quick Answer: In some respects, these two factors describe the same (or at least similar) attribute, just based on a different context for each. They both relate to the total range of information (such as tonal range or color range) a photo could potentially contain.

More Detail: The dynamic range of the image sensor on a camera determines the maximum tonal range the camera is able to record in a single capture. It relates to the difference between “empty” and “full” for each of the photodiodes that ultimately represent the pixels in the final image. Empty in this context means no electrical charge based on the amount of light detected, and full means the maximum charge.

Dynamic range is a measure of the difference between the darkest value (empty) and the brightest value (full) that the image sensor can capture. I think a reasonable (though abstract) analogy is to think of the image sensor as being comprised of buckets that are capturing light. An empty bucket is black, and a full bucket is the brightest value that can be recorded (theoretically white). Cameras with larger buckets can capture a greater dynamic range. For example, with relatively small buckets the sun might be blown out in a photo, while with relatively large buckets detail might be retained in the sun.
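
The bucket analogy maps directly to how dynamic range is often quoted in stops: each stop is a doubling of light, so the dynamic range is the base-2 logarithm of the ratio between the largest and smallest usable signal. The sensor figures below are illustrative values I’ve chosen, not specifications for any particular camera.

```python
import math

def dynamic_range_stops(full_well: float, noise_floor: float) -> float:
    """Dynamic range in stops: each stop doubles the light, so this is
    log2 of the ratio between a 'full' and a 'nearly empty' bucket."""
    return math.log2(full_well / noise_floor)

# Illustrative electron counts only: a bigger "bucket" (higher full-well
# capacity) with the same noise floor yields more stops of dynamic range.
print(round(dynamic_range_stops(50_000, 5), 1))   # ~13.3 stops
print(round(dynamic_range_stops(100_000, 5), 1))  # ~14.3 stops
```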

Ultimately, the dynamic range of the camera relates to the maximum range of tonal values you can capture with your camera, without blocking up the shadows or blowing out the highlights. In other words, the camera’s dynamic range determines the total potential for tonal range in the original captures for your photos.

The bit depth, while similar, plays a somewhat different role. The bit depth relates to the total number of tonal or color values that are possible for an image. For an image sensor, the conversion from an analog signal (light) to discrete digital values can be performed at varying bit depths. In this context, the bit depth determines how many different values can be recorded between black and white, which in turn determines things like how smooth the gradations from dark to bright areas can be.

Once an image has been digitized, the bit depth determines the total number of tonal or color values that are possible. For example, an 8-bit per channel grayscale image can contain a maximum of 256 shades of gray, while a 16-bit per channel grayscale image can contain up to 65,536 shades of gray. For RGB color images the total number of possible colors is almost 16.8 million for 8-bit per channel images, and over 281 trillion for 16-bit per channel images.

In terms of image processing, for optimal quality you’ll want to ensure you’re working in the 16-bit per channel bit depth. For the camera, dynamic range is a product of the image sensor, so you’ll want to choose a camera based on maximum dynamic range if that is important to you. Furthermore, for optimal image quality with smooth gradations of tone and color, a higher bit depth for the analog-to-digital conversion (ADC) is preferred. Some cameras offer 16-bit per channel in-camera processing, while many others only support 14-bit or even 12-bit processing.
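
For reference, the number of discrete levels per channel each ADC bit depth can record, which is simply 2 raised to the bit depth:

```python
for bits in (12, 14, 16):
    print(f"{bits}-bit ADC: {2 ** bits:,} levels per channel")
# 12-bit: 4,096    14-bit: 16,384    16-bit: 65,536
```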