How to find the focal length used by looking at image contents without examining metadata?
By looking at a photo, how can one tell what focal length was used without looking at the photo metadata?
@scottbb wow that's awesome! Love the math in that question! Thank you
– user4127551
Aug 27 at 14:49
4 Answers
It would be more accurate to say that you can estimate the angle of view demonstrated by a photo. Angle of view is a result of both focal length and sensor size. In the case of a cropped photo, only the part of the sensor that has information included in the cropped photo should be considered as the 'sensor size'.
Even then, one must have context clues to estimate the angle of view at which a photo was shot.
If the photo is of a flat test chart perpendicular to the optical axis of the lens and parallel to the imaging sensor, then we can't really tell, even if we know the exact size of the chart. In such a case we can only determine field of view. But the same field of view can be obtained by shooting from different distances with differing angles of view.
For example, assuming the same 35mm/full frame camera/sensor size, we could get the same field of view of a flat subject by using a 50mm lens from 10 feet, a 100mm lens from 20 feet, or a 600mm lens from 120 feet. In the first case the 40° horizontal angle of view from 10 feet with the 50mm lens shows the same field of view (i.e. the same width and height) of the flat test chart as the 20° horizontal angle of view from 20 feet with the 100mm lens and as the 3.5° horizontal angle of view from 120 feet with the 600mm lens.
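To make the numbers above concrete, here is a small sketch (assuming a rectilinear lens and the 36mm-wide full-frame sensor used in the example; the function names are mine) that computes the horizontal angle of view and the field width at a given distance:

```python
import math

def horizontal_aov_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view of a rectilinear lens (full-frame width by default)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def field_width(focal_mm, distance, sensor_width_mm=36.0):
    """Width of scene captured at a given subject distance (same units as distance)."""
    return distance * sensor_width_mm / focal_mm

for f, d in [(50, 10), (100, 20), (600, 120)]:
    print(f"{f}mm at {d} ft: AOV = {horizontal_aov_deg(f):.1f} deg, "
          f"field width = {field_width(f, d):.1f} ft")
```

Running this reproduces the figures in the answer: roughly 40, 20 and 3.5 degrees of horizontal angle of view, and the same 7.2 ft field width in all three cases.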
Depends on the photo.
If there are no clear references, for example a photo of a plain wall, then no.
If you have some references, you can generally estimate the broad focal-length category: wide angle, normal, telephoto.
With exact measurements, for example two people of known height standing at different distances, the estimate becomes more accurate.
With known parallel lines, for example, a corridor you can use a 3D program to recreate the framing and then set the focal length.
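The known-reference idea can be sketched with a simple pinhole model. This assumes an uncropped image, a known sensor height, and a reference object roughly perpendicular to the optical axis; all function names and numbers here are illustrative:

```python
def estimate_focal_mm(obj_height_px, image_height_px, sensor_height_mm,
                      obj_real_height_m, obj_distance_m):
    """Pinhole model: f = distance * (projected size on sensor) / (real size).

    Assumes an uncropped image and a reference object of known height
    at a known distance, roughly perpendicular to the optical axis.
    """
    projected_mm = obj_height_px / image_height_px * sensor_height_mm
    return obj_distance_m * 1000 * projected_mm / (obj_real_height_m * 1000)

# A 1.75 m person, 5 m away, spanning 1400 of 4000 pixels
# on a 24 mm-tall (full-frame) sensor:
f = estimate_focal_mm(1400, 4000, 24.0, 1.75, 5.0)  # -> 24.0 mm
```

Note that this only recovers the focal length up to the accuracy of the distance and size estimates, which is why only broad categories are usually attainable in practice.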
It's very difficult to nail the exact focal length. However, since wide, normal, telephoto, and super-telephoto lenses all produce different levels of distortion and perspective compression, one can make an educated guess. For example, an image taken with a fisheye will have a completely unique look with severely curved lines. Programs like Lightroom can correct for this type of lens distortion, but not for other kinds of distortion to the perspective.
Telephoto lenses will have a narrow depth of field, especially when shot at large apertures. The background will often look blurred out and appear closer and larger relative to the elements nearer the camera. Because of the narrow angle of view, telephoto lenses also show less of the background than wide-angle lenses. All of these factors combine to give telephoto lenses a very distinctive look relative to normal or wide lenses.
With this in mind, it's easy to tell the difference between a shot taken with a wide-angle (short focal length) lens and one taken with a telephoto. Narrowing down the exact focal length, say 24mm versus 28mm, is much, much more difficult, if possible at all.
As already pointed out in the previous answers, if you know the sensor size (assuming no additional cropping) and have a reasonably accurate sense of scale for the scene, the best you can do in the general case is to estimate the field of view and infer the focal length from it.
In theory, it might be possible to do better and to infer the focal length independently of sensor size and additional cropping. You would still need an accurate sense of scale for the scene.
A high-level description of the method: using these models for the near and far DoF limits, if you have the patience to work through the system of equations, you can eliminate the f-number N and obtain an expression for the focal length f that depends only on the focus distance (s), the near DoF limit (Dn) and the far DoF limit (Df). If all three values s, Dn and Df can be measured on the picture, their values can be used to calculate the focal length f.
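For reference, a standard simplified depth-of-field model (an assumption on my part, since the answer does not spell out which formulas it uses; valid when the focus distance s is large relative to f):

```latex
H \approx \frac{f^2}{N c}, \qquad
D_n \approx \frac{H s}{H + s}, \qquad
D_f \approx \frac{H s}{H - s}
```

Solving the third relation for H gives H = sDf/(Df − s), which is exactly the quantity f²/(Nc) that the edit below says is the most a single capture can determine; this is consistent with scottbb's observation that f cannot be separated from N and c.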
Note that this is purely theoretical and very unlikely to lead to a good approximation in most practical cases. It is unlikely that the scene will provide an accurate scale, and even if it did, it is very unlikely that s, Dn and Df can be measured accurately on the picture.
Edit: as scottbb correctly pointed out in his first comment, there is a dependency between the expressions for Dn and Df, and therefore it is impossible to solve for both f and N from these expressions with a single capture. The best that can be done with a single capture is determining f²/(Nc). For instance, assuming that f is much smaller than H and s, the expression for H can be approximated as H = f²/(Nc) = sDf/(Df − s).
To address scottbb's second comment: the circle of confusion is an arbitrary value that is chosen for the image, typically around 2 pixels, but it could be anything. The value of s is the distance to the focus plane, and the focus plane can often be identified as the part of the scene that maximizes contrast. If any point in the focus plane is at a known distance, this gives the value of s. Both Dn and Df depend on the arbitrary selection of the circle of confusion: the planes at Dn and Df are those where a sharp edge is blurred over an area equal to c. This implies that if there is a blurred sharp edge at a known distance, outside of the focus plane, it is possible to estimate the circle of confusion at that distance and obtain the corresponding depth-of-field limit (Dn or Df) for the chosen value of c.
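The surviving part of the method can be sketched in a few lines. This is a hypothetical illustration (the function names and numbers are mine): from a measured focus distance and far DoF limit you get H, but recovering f requires assuming an f-number N and a circle of confusion c, as the edit above concedes.

```python
import math

def hyperfocal_from_scene(s, Df):
    """H = s*Df / (Df - s), from the far-DoF approximation Df = H*s / (H - s).

    s  : focus distance (any unit, same as Df)
    Df : far depth-of-field limit measured in the scene
    A single capture only pins down H = f^2/(N*c), not f itself.
    """
    return s * Df / (Df - s)

def focal_for_assumed_N_c(H_m, N, c_mm):
    """Recover f (in mm) only by *assuming* an f-number N and CoC c."""
    return math.sqrt(H_m * 1000 * N * c_mm)

H = hyperfocal_from_scene(s=3.0, Df=4.5)        # metres -> H = 9.0 m
f = focal_for_assumed_N_c(H, N=2.8, c_mm=0.03)  # ~27.5 mm, if N and c were right
```

Changing the assumed N or c changes the recovered f, which is precisely why the method cannot determine the focal length on its own.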
-1, for several reasons. First, bad algebra. Following your method, if you reduce the expression for ƒ you get from step 4 to its simplest form, you get ƒ = s. This should be a red alert that something is up. How can 3 equations describing 7 variables (ƒ, H, N, c, s, Dn, Df) reduce down to 1 equation of 2 variables? It can't. What you've essentially done is solve y = Ax + b for b (b = y − Ax), then plugged that back into the equation for y to try to eliminate b. All you're left with is y = y.
– scottbb
Aug 27 at 13:29
But aside from algebra, let's think of this from a photography standpoint. There's no way to measure Df and Dn merely by inspecting the contents of a photo. They are subjective values based on the concept of acceptable circle of confusion, not measurable properties of a photograph, nor even of measurable properties of the Z-axis placement of objects in the scene. You can't "algebra away" the aperture and CoC — those are inherent inputs that determine the DoF limits.
– scottbb
Aug 27 at 13:42
@scottbb agreed with your first comment, but I somewhat disagree with your second comment: yes, the circle of confusion is arbitrarily chosen, but the blur certainly can be observed and measured on the actual photo. Therefore, for an arbitrarily selected value of the circle of confusion, it is possible to identify the parts of the image that are in the planes of Dn and Df (those where the blur is equal to c). Edited my post to reflect all of that.
– Come Raczy
Aug 27 at 18:33
This question has an example of what you're asking about: What focal length is used in photos that both model is zoomed in and the background is open
– scottbb
Aug 25 at 20:48