Working Offline

Today’s Question: If I store my photos on an external hard drive, do I need to always have that hard drive connected to my computer in order to work with my images in Lightroom [Classic CC]?

Tim’s Quick Answer: No, as long as the Lightroom catalog is available, you don’t need to have your source image files available to perform many tasks in Lightroom Classic CC. However, some tasks require that you first build Smart Previews, and some tasks require that the source images be available.

More Detail: Lightroom Classic CC stores the information about your photos in the Lightroom catalog. That catalog, in turn, references the actual photos being managed by your catalog, wherever those photos might be stored. Because the information about your photos is contained within the catalog, you can perform a variety of tasks in Lightroom even if the source files aren’t currently available (such as when you have disconnected an external hard drive that contains your photos).

As long as your Lightroom catalog is available (such as when it is stored on your computer's internal hard drive) you can perform a variety of tasks without access to the source photos. For example, you can browse your photos based on the previews Lightroom has generated, and update keywords and other metadata, even if the external hard drive containing your photos is not currently connected to your computer. If you build Smart Previews for your photos, you can even work in the Develop module when the source image files are not available.

Some tasks do require the source images. For example, if you want to export a copy of a photo in the original capture format, the source file must be available. But there are quite a number of tasks you can perform in Lightroom even when the source photos aren’t currently available, thanks to the catalog Lightroom uses to manage the information about your photos.

Note that the GreyLearning library includes a course on “Understanding Lightroom”, to help photographers better understand how Lightroom Classic CC works. The aim of this course is to help ensure you don’t run into challenges based on a lack of understanding of Lightroom. You can get this course for just $10 if you use the coupon code “understand” during checkout. Or, simply follow this link to get started with the discount applied automatically:

https://www.greylearning.com/courses/understanding-lightroom?coupon=understand

Byte Order for TIFF Images

Today’s Question: In Photoshop CC, when I choose “Save As” to save an image as a TIFF file, the TIFF Options box displays the choice of Byte Order (IBM PC or Macintosh) pre-selected for Macintosh. Is there a way to always make it pre-selected for IBM PC?

Tim’s Quick Answer: There is not a way to change the default Byte Order setting in Photoshop, at least as far as I know. However, for the vast majority of users, there’s actually no benefit to choosing one option over the other.

More Detail: There was a time long ago (ten years or so) when selecting the correct Byte Order option for TIFF images was important. This was due to software compatibility issues, specifically as it related to the way image file data was stored on a Windows versus Macintosh computer.

Today’s software (with very few exceptions) is able to read TIFF images regardless of which Byte Order option is selected. Frankly, I would prefer it if Adobe removed the Byte Order setting from the TIFF Options dialog altogether, and simply provided a default setting option in Preferences.
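
If you're ever curious which Byte Order a particular TIFF file was actually saved with, the answer is right at the start of the file: every TIFF begins with either the characters "II" (little-endian, the IBM PC option) or "MM" (big-endian, the Macintosh option). Below is a minimal Python sketch that checks those two bytes; the filename is just a placeholder.

    def tiff_byte_order(path):
        # The first two bytes of a TIFF identify its byte order:
        # b"II" = little-endian ("IBM PC"), b"MM" = big-endian ("Macintosh").
        with open(path, "rb") as f:
            header = f.read(2)
        if header == b"II":
            return "little-endian (IBM PC)"
        if header == b"MM":
            return "big-endian (Macintosh)"
        return "not a TIFF file"

    print(tiff_byte_order("example.tif"))  # "example.tif" is a placeholder filename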

Windows users can most certainly read and write TIFF files that are saved with the Macintosh setting for Byte Order using the vast majority of software available today. In fact, there isn’t a single imaging or video software application I use that doesn’t support TIFF files saved with either Byte Order setting.

So, if it makes you feel better to choose one option over the other, by all means feel free. But I recommend simply ignoring the fact that this setting even exists in the TIFF Options dialog, as it is extremely unlikely you will run into any compatibility issues with any software or operating system regardless of which setting is selected.

Old PSD Compatibility

Today’s Question: I have some Photoshop files that I think are from 2005 (or earlier). I can’t get them to open and am receiving the message “Could not complete your request because it is not a valid Photoshop document.” I am sure that I clicked the button to maintain compatibility along the way. Is there any way you can suggest that I might be able to retrieve these files?

Tim’s Quick Answer: The message you’re seeing indicates that the file you’re opening is either corrupted or isn’t actually a Photoshop PSD file. If you suspect the file is actually in a format other than PSD, you could try opening it as that format. You could also try a recovery tool to see if it can salvage a corrupted file.

More Detail: When Photoshop presents the error message “Could not complete your request because it is not a valid Photoshop document”, it indicates that the file does not appear to be the type of file expected. In other words, the PSD file you are trying to open doesn’t appear to Photoshop to actually be a PSD file.

It is possible that the file you’re trying to open isn’t actually a PSD file. For example, it could be a JPEG image that somehow had the filename extension changed to PSD. If that were the case, renaming the image so that the filename extension matches the actual file type would solve the issue. I suspect that is not the issue in this case.
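
One quick way to test that theory is to look at the first few bytes of the file, which identify the actual format regardless of the filename extension. The Python sketch below (with a placeholder filename) checks for a handful of common signatures; a genuine PSD file always begins with the characters "8BPS".

    SIGNATURES = [
        (b"8BPS", "Photoshop PSD"),
        (b"\xff\xd8\xff", "JPEG"),
        (b"II*\x00", "TIFF (little-endian)"),
        (b"MM\x00*", "TIFF (big-endian)"),
        (b"\x89PNG", "PNG"),
    ]

    def identify(path):
        # Read the first few bytes and compare against known "magic numbers".
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, name in SIGNATURES:
            if header.startswith(magic):
                return name
        return "unknown format (possibly corrupted)"

    print(identify("old_image.psd"))  # "old_image.psd" is a placeholder filename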

Unfortunately, I suspect that in this case the PSD files are corrupted. Newer versions of Photoshop are still able to open PSD files created with older versions of Photoshop. Certain features might not be supported in some cases, and the appearance of the image may not match what was created originally in an older version of Photoshop. But the files can be opened with newer versions of Photoshop.

Note that the “Maximize Compatibility” option for PSD files is not required in order to simply open an older PSD file in Photoshop.

Therefore, it is reasonable to assume that the files in this case are corrupted. You might check to see if you have backup copies of the files that are not corrupted. In addition, you could attempt to use a recovery tool to salvage the corrupted PSD files. For example, one such tool is the Photoshop file extract/recover tool found here:

http://www.telegraphics.com.au/sw/product/PSDRecover

Why the Red Pod?

Today’s Question: You mentioned the Red Pod bean bag you use for your camera support when without a tripod. I noticed there is also a Green Pod bean bag. The Red is supposed to be for compact cameras, with the ¼-20 threads in the center, and the Green is supposed to be for DSLRs, with the ¼-20 stud off center. They appear to weigh the same, so I am curious why you use the Red one?

Tim’s Quick Answer: I prefer the Red Pod (https://timgrey.me/redpod) over the Green Pod (https://timgrey.me/greenpod) because I’ve found the Red Pod provides better support and flexibility for my camera, as long as I’m not using a particularly long lens.

More Detail: When I use a beanbag, I’m generally not using a particularly heavy lens. With the Green Pod you can have the camera mounted at the edge of the beanbag, with a strap around the lens. This works well with a long lens, because it puts the overall support of the beanbag more at the center of the overall mass of the camera and lens setup. However, I’ve also found this provides a little less flexibility in terms of the positions you can place the camera, since there isn’t as much beanbag “behind” the camera.

With the Red Pod mounting at the center of the beanbag, you can “squish” the beanbag a little more in any direction to adjust the tilt of the camera. I’ve found this to be a better solution, as long as I’m not using a particularly big/heavy lens that would extend far beyond the edge of the Red Pod.

You can view photos and get more information about the Red Pod and Green Pod by following these links:

The Red Pod: https://timgrey.me/redpod
The Green Pod: https://timgrey.me/greenpod

Editing Environment

Today’s Question: What would you consider to be an ideal editing environment with regard to factors such as the type and amount of ambient lighting, screen brightness, screen type (reflective or not) and any other important considerations?

Tim’s Quick Answer: There are two things that I consider to be critical “ingredients” for an optimal viewing environment when optimizing your photos. The first is to be sure that the display is properly calibrated, including an adjustment to the overall luminance of the display. The second is to work in a darkened environment where there is no outside influence impacting your perception of the image on your monitor.

More Detail: I think it is easy to understand how important it is that the images displayed on your monitor are accurate in terms of color and tonality, since you will be using the monitor display to make decisions about how to adjust the appearance of a photo. While today’s displays are quite stable and relatively accurate by default, it is still important to calibrate the display to ensure the color and luminance are as accurate as possible.

I strongly recommend using a calibration package that includes a colorimeter, which is a device that measures the light emitted by the monitor display. For example, one great option is the ColorMunki Display from X-Rite, which you can find here:

http://timgrey.me/colormunkidisplay

Note that with a calibration tool such as the ColorMunki Display, you will be guided through the process of adjusting your display’s brightness to match a target value. I recommend using a target luminance of about 120 candelas per square meter, which is a bit darker than most displays will appear by default.

Once your display is properly calibrated, you’ll want to be sure you are getting an accurate view of that display. In other words, you don’t want there to be any interference in terms of the brightness or color of the display. The optimal approach here is to work in a darkened environment if at all possible, so that the appearance of the display is not affected by the ambient conditions.

While it is best to work in a relatively dark environment, the key is to avoid strong color and tonal influences on your view of the display. Moderate lighting won’t generally cause a significant problem, as long as it isn’t creating glare and isn’t positioned in a way that affects your view of the display. Relatively neutral colors are best near the monitor. That’s not to say you need to paint all of the walls in the room neutral gray, but vibrant colors near your monitor display can alter your perception of that display.

As for a glossy versus a matte finish for the monitor display itself, you’ll actually get better color and contrast in most cases with a glossy display. You just need to be extra careful to ensure colors or lights are not being reflected by the display, since those would have a greater impact with the highly reflective surface of such a monitor.

No More Composite Panoramas?

Today’s Question: Do you think the “automatic” panoramic images you can capture with smartphones and other cameras have gotten good enough that you don’t need to create composite panoramas with a DSLR?

Tim’s Quick Answer: Not quite. Automatic in-camera panoramas can produce very good results, but generally not as good as you can achieve by capturing a series of images with careful technique, and then assembling those photos into a composite panorama.

More Detail: There are two basic types of “automatic” panoramas you can capture with certain cameras, and that don’t require any post-capture assembly. The first type involves the assisted capture of multiple frames that are then assembled in the camera to create a panoramic image. The second type is a “scanning” panorama, such as you can find with the iPhone’s Camera app. With this option you scan across the scene during the capture, and a panorama is created from that scanning view.

There are advanced cameras that create automated panoramic images, including scanning cameras that create 360-degree panoramic images. Those are capable of excellent image quality. But the more basic options available with smartphones and compact cameras do not offer quite the level of quality you could achieve with a composite capture.

For example, with a scanning approach to panoramas it can be very difficult to achieve proper alignment relative to the scene. This can result in an image where the horizon curves up and down throughout the panorama.

Cameras that assist you in capturing multiple frames and then assemble those frames in the camera into a panorama generally work better than the scanning approach, but I’ve found there can sometimes be errors or distortions in these captures as well.

In general you’ll find that many of the smartphone and compact camera options for creating panoramas provide results that are of high enough quality to share online. However, these captures are often not quite of the quality and technical accuracy needed for producing a large print.

So, for more casual online sharing, automatic panoramas provide a good solution. But for the highest quality results (especially if your intent is to print the panorama) I still recommend capturing a series of images with a high-quality camera, and assembling those frames into a composite panorama using software such as Photoshop or Lightroom Classic CC.

Function Keys Not Functioning

Today’s Question: I was trying out some software. The F2 key was supposed to perform some action. When I pressed and held F2 on my Mac, the screen brightened. I’m not sure what was supposed to happen. My question is whether I have to recalibrate my display, or whether the proper calibration will be restored if I turn the computer off and then on?

Tim’s Quick Answer: On many computers (especially laptops), the function keys perform computer-specific tasks unless you hold a separate key on the keyboard. In the case of changing the display brightness, if you return it to the original setting you’ll be fine. When in doubt, simply re-calibrating the display will resolve the issue.

More Detail: On most computer keyboards you will find a set of “function” keys across the top of the keyboard, typically labeled “F1”, “F2”, and so on. However, in many cases you will find that these function keys perform computer-specific tasks by default, rather than acting as the function keys certain software applications might be expecting.

For example, on Macintosh laptop computers the F1 key serves to darken the display, and the F2 key serves to brighten the display.

In this type of situation, to access the actual function keys you need to hold a special key on the keyboard. With a Macintosh laptop, for example, that key is labeled “fn”. On other computers the key may be similarly labeled.

So, to access the F2 key for specific software functions, for example, you would hold the “fn” key while pressing the “F2” key.

Of course, having unintentionally adjusted the brightness of your display, you will want to make sure you return the brightness to the original value. When calibrating the display it is a good idea to make a note of the specific brightness setting used on your computer. If you’re not sure what setting was used, I do recommend repeating the process of calibrating your display. This will ensure you have an optimal brightness setting for the display, and that the color is as accurate as possible as well.

Once the display is calibrated, I recommend not changing the brightness setting for the display, unless you later return that setting to the value achieved during the calibration process.

Exposure versus HDR Bracketing

Today’s Question: I do very little bracketing and HDR [high dynamic range imaging]. I’m expecting to do some, and I seem to be brain dead about the difference between the two. Am I understanding correctly that taking the series of exposures is done exactly the same for bracketing and HDR, and that the only difference is what you do with the exposures later?

Tim’s Quick Answer: Yes, in general exposure bracketing and capturing a sequence for a high dynamic range (HDR) image would involve the same basic process in terms of the original captures. The difference is primarily your motivation and how you process the photos after the capture.

More Detail: Put simply, when capturing a sequence of images for an HDR result you will be using exposure bracketing. The key difference between HDR and “simple” exposure bracketing is that with exposure bracketing the aim is generally to achieve a single good exposure. When bracketing for HDR, the aim is to blend the exposures into a final image that presents greater dynamic range than the camera could capture with a single exposure. The HDR image can be assembled from the bracketed exposures using software such as Aurora HDR (https://timgrey.me/aurora2019), Photoshop, or Lightroom Classic CC.

Exposure bracketing is generally motivated by tricky lighting conditions, or a lack of confidence on the part of the photographer that they will be able to achieve an optimal exposure. You can capture bracketed exposures that are brighter and darker than what the camera’s meter suggests, with the idea being that if the “middle” exposure isn’t optimal, the brighter or darker image from the bracketed set will provide a good alternative.

With exposure bracketing you just need to be sure to bracket enough that one of the captures will provide a good exposure. For HDR you need to make sure you cover the full range of tonal values in the scene you are photographing.

Because of the differences in terms of motivation and final result for exposure bracketing versus HDR bracketing, in concept you could bracket with a narrower range of exposures for exposure bracketing than might be necessary for HDR bracketing. In reality, however, I would tend to take a conservative approach in either situation, so you are always confident you’ll have all of the exposures you need to achieve your photographic goal. You could always delete the “extra” exposures that result from such an approach.

When creating bracketed exposures for an HDR image, I will generally separate the exposures by two full stops, and capture five or seven exposures to ensure I cover the full range of the scene I am photographing. The number of exposures could vary depending on the specific conditions in terms of overall dynamic range.

When using exposure bracketing to compensate for a lack of confidence in exposure settings, I would generally separate the exposures by a single stop rather than two stops, and only capture three (or maybe five) total exposures.
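
To make the arithmetic concrete, each one-stop change doubles or halves the exposure time. The Python sketch below (using a hypothetical metered shutter speed of 1/125 second) generates both a five-frame, two-stop HDR bracket and a three-frame, one-stop safety bracket; in practice the camera’s AEB feature performs this calculation for you.

    def bracket(base_shutter, stop_spacing, frames):
        # Each stop doubles (or halves) the exposure time, so an offset in EV
        # translates to multiplying the base shutter speed by 2 ** offset.
        half = frames // 2
        offsets = [stop_spacing * i for i in range(-half, half + 1)]
        return [(ev, base_shutter * 2 ** ev) for ev in offsets]

    metered = 1 / 125  # hypothetical metered shutter speed, in seconds

    # Five frames, two stops apart (typical HDR bracket)
    for ev, seconds in bracket(metered, 2, 5):
        print(f"{ev:+d} EV -> {seconds:.5f} s")

    # Three frames, one stop apart (simple exposure "insurance")
    for ev, seconds in bracket(metered, 1, 3):
        print(f"{ev:+d} EV -> {seconds:.5f} s")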

So, there are some differences in how you might approach exposure bracketing versus HDR bracketing, but the overall concepts are the same. In both cases I would generally use your camera’s automatic exposure bracketing (AEB) feature to streamline your workflow, simply changing the settings based on the requirements of the current photographic situation.

Beanbag Camera Support

Today’s Question: In a recent answer you mentioned using a beanbag when capturing a photo of the total lunar eclipse. Is there a specific beanbag you recommend as an alternative to a tripod?

Tim’s Quick Answer: I have been using The Red Pod from The POD (https://timgrey.me/redpod), and have been very happy with it. In particular, I like this beanbag because it attaches directly to the camera for better support.

More Detail: To be sure, a tripod provides a very stable platform for your camera, and is generally the best option for helping ensure the sharpest captures possible. In many situations, however, you may find that a tripod is not practical, or is simply not allowed at a given location.

For example, I am currently on an extended trip for which I wanted to pack as light as possible. I will also have only limited opportunities for night photography, so I made the decision to leave my tripod behind. The Red Pod has provided a great solution, enabling me to capture photos that would have otherwise been difficult or impossible.

For example, I used The Red Pod to capture a photo of the total lunar eclipse last month, which you can see on my Instagram feed here:

https://www.instagram.com/p/Bs6CmblA4cV/

What I appreciate most about The Red Pod is that it features a 1/4″ screw that mounts directly to the camera body, much as a tripod plate would. It is therefore possible to simply screw the beanbag directly onto the camera, and then set the complete assembly on a stable surface to capture an image.

Because the beanbag is somewhat pliable, you can shift the shape around a bit to get the camera at just the right angle. As much as a tripod would provide better results in many cases, I’ve found The Red Pod to be a great tool when a tripod can’t be used. You can get more details about this beanbag camera support here:

https://timgrey.me/redpod

JPEG to RAW

Today’s Question: I got an email about software from Topaz Labs that claims to convert JPEG images to raw captures. Is that even possible, and if so will it provide all of the benefits of raw?

Tim’s Quick Answer: The “JPEG to RAW AI” software from Topaz Labs does not provide the same benefits as a raw capture, and frankly I feel that their marketing around this software is misleading.

More Detail: “JPEG to RAW AI” from Topaz Labs enables you to batch process JPEG images and convert them to a DNG or TIFF image with a 16-bit per channel bit depth. As part of the processing, various enhancements are applied to the image. The claim is that the result will be greater dynamic range, a larger color space, higher bit depth, reduced artifacts, and increased detail.

To begin with, converting a JPEG image to a DNG or TIFF file format with a different color space and higher bit-depth setting does not provide any quality benefit for the image all by itself. The only real benefit from these changes would be the potential for better image quality after applying strong adjustments. The exact same results could be achieved by changing the color space and bit depth for an image in Photoshop, for example, with no visible change in appearance for the photo.
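
You can see why with a few lines of Python. The sketch below (using Pillow, NumPy, and the tifffile package, with a placeholder filename) performs the same kind of 8-bit to 16-bit conversion: every 8-bit value simply maps to a fixed 16-bit value, so the resulting TIFF still contains only 256 distinct tonal levels per channel.

    import numpy as np
    from PIL import Image
    import tifffile  # third-party package for writing 16-bit TIFF files

    # "photo.jpg" is a placeholder filename.
    arr8 = np.asarray(Image.open("photo.jpg"))  # 8-bit values, 0-255
    arr16 = arr8.astype(np.uint16) * 257        # scale to 0-65535 (255 * 257 = 65535)
    tifffile.imwrite("photo_16bit.tif", arr16)  # bigger file, same 256 tonal levels

The extra bit depth only pays off once you begin applying strong adjustments, because there is more headroom before rounding errors become visible; it does not recover any detail the JPEG never contained.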

After testing a variety of images with JPEG to RAW AI, I did not find that there was any significant improvement in the level of detail in the photos. Some photos showed evidence of contrast enhancement and sharpening in certain areas, which obviously could also be applied using other software.

While JPEG to RAW AI did reduce some of the visible artifacts in the JPEG images I tested, in those same areas overall sharpness and detail were also reduced. In some cases detail enhancement in certain areas of an image actually increased the visibility of artifacts in the image.

Overall I was not impressed with the results I achieved with the JPEG images I processed with JPEG to RAW AI. More worrisome to me, however, is that I feel the way the product is being marketed is misleading. While I do feel that some of the software products from Topaz Labs are very good, I would not recommend JPEG to RAW AI.

If you’d like to check out JPEG to RAW AI for yourself, you can get more info on the Topaz Labs website here:

https://topazlabs.com/jpeg-to-raw-ai/ref/273/