We had an interesting discussion about this a few nights ago at a Photojournalism talk.
In that field, digital edits are seriously banned, to the point that multiple very well-known photojournalists have been fired for one little use of the clone tool [1] and other minor edits.
It's interesting to think I can throw an f/1.8 lens on my DSLR and take a very shallow depth of field photo, which is OK, even though it's not very representative of what my eyes saw. If I take the photo at f/18 then use an app like the one linked, producing extremely similar results, that's banned. Fascinating what's allowed and what's not.
What I find even more interesting is the allowance of changing color photos to B/W, or of almost anything that "came straight off the camera", no matter how far it strays from what your eyes saw.
When you say "digital edits are seriously banned," I think that's overreaching. I'm a former newspaper editor, and "digital edits" in the form of adjusting levels, color correction, etc., are performed on every single photo.
What's not allowed, as you allude to, is retouching a photo.
So does introducing blur after the photo was taken count as retouching, or does it fall into the same category as color correction? It's an interesting question. On the one hand, it has the potential to obscure elements of the picture, which seems like retouching, but on the other hand, you could just as easily achieve the same effect with a DSLR and there would be no outcry.
Those permitted edits you mention seem to all preserve the pixel-level integrity of the image. The fact that intentional blurring does not do this seems to be a distinction to me. I think that the fact that the blur layer must be synthesized heuristically indicates that not all of the information of the final image was really captured from the real world.
I suppose that resizing (resampling) an image might be said to not preserve the integrity of the original pixels, but I think it does if you consider the original pixels to be a reflection of the continuous field at the sensor.
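The "pixel-level integrity" distinction can be made concrete. A minimal sketch (a toy 1D grayscale row, not any publication's actual policy test): a levels-style adjustment maps each pixel independently of its neighbors, while even the simplest blur mixes data across regions.

```python
# Sketch: a levels adjustment is per-pixel -- each output depends only on
# the corresponding input -- while a box blur is a neighborhood operation
# that moves image data from one region into another.
def levels(px, gain=1.2):
    """Per-pixel edit: output depends only on the corresponding input."""
    return [min(255, int(v * gain)) for v in px]

def box_blur(px, radius=1):
    """Neighborhood edit: each output averages the surrounding inputs."""
    out = []
    for i in range(len(px)):
        lo, hi = max(0, i - radius), min(len(px), i + radius + 1)
        window = px[lo:hi]
        out.append(sum(window) // len(window))
    return out

row = [0, 0, 255, 0, 0]   # a single bright pixel
print(levels(row))        # [0, 0, 255, 0, 0] -- brightness stays localized
print(box_blur(row))      # [0, 85, 85, 85, 0] -- brightness leaks into neighbors
```

The blurred row can no longer be traced pixel-for-pixel back to the sensor data, which is the sense in which blurring breaks integrity while toning does not.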
Question for professionals -- how are noise reduction, masks (unsharp), etc. treated?
>Those permitted edits you mention seem to all preserve the pixel-level integrity of the image.
Curiously, you're allowed to convert color to black and white, which in my opinion is not preserving the pixel-level integrity. An algorithm is making a guess at what level of black to convert a pixel to.
> Question for professionals -- how are noise reduction, masks (unsharp), etc. treated?
I'm not a pro yet, but my understanding is it's a big no-no.
> Curiously, you're allowed to convert color to black and white, which in my opinion is not preserving the pixel-level integrity. An algorithm is making a guess at what level of black to convert a pixel to.
In the sense of pixel integrity I had in mind, a b&w conversion wouldn't be a violation. Each output pixel would be directly affected by the corresponding input pixel (Bayer interpolation notwithstanding). Image data wouldn't be moved around from one region to another.
>That's why at the start of this whole thread I reference photojournalism.
I saw it, but it's not that clear cut.
What you write, "an artistic shot to fill space" implies to me generic illustration pictures, which the above isn't an example of.
I think the restrictions on editing mostly apply to photos about stuff like politics, world affairs, crime, etc. -- stuff that is presented as 100% dry news.
But the term photojournalism covers other stuff too, right? Isn't, say, a travel article written by a journalist with a photographer photojournalism too? Or the images taken by a photojournalist for a piece on dance culture, Burning Man, stuff like that. Or for a sports feature.
> But the term photojournalism covers other stuff too, right? Isn't, say, a travel article written by a journalist with a photographer photojournalism too? Or the images taken by a photojournalist for a piece on dance culture, Burning Man, stuff like that. Or for a sports feature.
I agree, and as I understand it, anything beyond some basic level/color adjustments and cropping is a no-no in those areas if you want to keep your integrity.
>algorithm is making a guess at what level of black to convert a pixel to
Can you not just average the RGB values? Perhaps adjusting a bit for the relative intensities of those values (green contributes the most to perceived brightness, and blue the least). It's not really a "guess", is it, unless it's a sophisticated algorithm. It's more akin to rotating or skewing an image.
Or you mean it's a guess compared to how non-colour film would actually record the light?
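The intuition above is close to how it's usually done: a common color-to-grayscale conversion is a weighted average of R, G, and B, with the weights chosen to match the eye's sensitivity. A minimal sketch using the ITU-R BT.601 luma coefficients (one standard choice among several):

```python
# Standard luma-weighted grayscale conversion (ITU-R BT.601 coefficients).
# Green is weighted most heavily and blue least; a plain average of the
# three channels would render greens too dark and blues too bright.
def to_gray(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_gray(255, 0, 0))   # pure red   -> 76
print(to_gray(0, 255, 0))   # pure green -> 150
print(to_gray(0, 0, 255))   # pure blue  -> 29
```

So the conversion is deterministic rather than a guess, though different weight sets (and film-emulation curves) will produce different blacks from the same color pixel.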
> the same effect with a DSLR and there would be no outcry.
I think at some point the metadata would have to set the precedent. If the image capture device were forensically examined and showed the image was taken as is, then it's untouched. If it was downloaded and blurred, then it's not untouched.
I think a similar, albeit reverse, comparison would be the iPhone's panorama function. If your cousin jumped from one end of the frame to the other, the camera's not lying. It's the person.
If the Google Camera app took the picture and the metadata prove it, the camera's not lying.
Blurring seems almost a form of cropping; in both processes image information is removed rather than added. But blurring could be more powerful and subtle since it's less constrained by the need to maintain a rectangular shape.
Removing image information can certainly be an editorial choice. Take the all-too common picture of someone being beaten in a street brawl and imagine that the photographer or editor has used cropping or blurring to remove the fact that this scene is taking place directly in front of a police station.
> I think that's overreaching. I'm a former newspaper editor, and "digital edits" in the form of adjusting levels, color correction, etc., are performed on every single photo.
Yep, sorry. I was a bit rushed when I wrote my comment. I should have said "digital edits beyond basic color correction".
Which is massively interesting, as I can walk forward, effectively cropping the scene to remove (let's say) a car from the very bottom corner. I could use software to crop the image to remove the car, but if I use the clone tool to get rid of it, that's a big no-no.
The difference, in the PJ ethics sense of things, is that while the car is gone in both cases, cropping is a "lie of omission" while cloning is a "lie of commission". It's the difference between not mentioning the years 2001 through 2003 on your CV at all on the one hand and claiming you were the CEO of a now-defunct foreign company (when you were actually incarcerated) during that period on the other. (That's a bit hyperbolic, of course, but such exaggerations are differences of degree, not kind.)

All journalism, no matter how "fair and balanced" you wish to paint it, is editorial and bias, but there is a big difference between editorial and fabrication.

(And you don't have to zoom or "zoom with your feet" normally; cropping in post/printing is ordinarily allowed, though the editor/publisher will want to see the full frame as well. The shot you intend is not necessarily possible with the lens you have mounted - think of all of the PJ that has been done with wide primes over the years, as often as not so that "f/8 and be there" is all you need to think about. As long as the full frame is not showing your crop to be a blatant lie, like turning a dozen people in an otherwise empty city square into a massive popular demonstration, the publication will usually run with the image you intended rather than the one you captured.)
Yea, I think it's OK to edit as long as it doesn't produce significant logical inconsistencies with the original (A=>B) -- e.g. two missiles are turned into three, or a late-afternoon scene is made to look like nighttime via color balance (when that's significant).
Procedures like blurring shouldn't be able to cause those because, as you mentioned, they could have been done in situ with a camera, and they usually just lower the amount of information in the picture. That itself can change the interpretation of the scene (B=/>A), but to some extent this is inevitable -- and so acceptable if not overdone.
The edit doesn't really introduce any "logical inconsistencies"; it just acts to "remove" the photographer from the scene (by way of removing his colleague's camera), and yet it was ultimately a fireable offense.
The photographer, Narciso Contreras had manipulated a photo of a Syrian rebel by using a common Photoshop technique called 'cloning' in order to remove a fellow reporter's camera out of the picture, before sending it to an AP photo desk.
He removed another photographer's camera, essentially exaggerating his own ability to get pictures that other photographers cannot.
It seems to me that photojournalists in general ought to be disclosing any digital editing (especially if it is near the boundary), and it ought to be an editorial decision on the part of the publication on using the photos. The only thing that ever ought to be a firing offense (in this area!) for a photojournalist is undisclosed edits which interfere with ability to make informed editorial decisions.
On the other hand, while I agree that firing might be an extreme response, the edit in question does seriously affect the implied context of the picture. On the third hand, it does so in a way that photojournalists often seek to achieve through composition, so as you say, the ethics are a bit blurry -- which gets back to why I think disclosure, and editorial decisions on whether and how to use photos, are more important than blanket policies on edits (digital or otherwise).
Radiologists are the same... You take a Computed Tomography series of images, doing all kinds of back-projection (duh) and volumetric anisotropic edge-preserving smoothing, and then you apply segmentation and false-color, semi-transparent transfer functions and lighting...
And then you apply one more post-processing effect to try to highlight something, and they freak out that you're no longer showing them the raw data!
On a slightly related tangent, every time I see someone post a "#nofilter" photo on facebook I have to bite my tongue to stop from asking them how their camera managed to separate the RGB channels without a Bayer (or similar, eg Foveon) filter.
I mean, I sort of get what they are saying and I hate the overuse of Instagram filters probably much more than the next guy, but the odd relevance people place towards getting an image "directly out of the camera" is bizarre considering the incredible number of decisions (sometimes correctly, often not) the average digital camera has made for you in getting the measured light into a jpeg.
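That point about in-camera decisions can be made concrete: with a Bayer sensor, each photosite records only one of R, G, or B, and the other two values at every pixel are interpolated (demosaiced) from neighbors. A toy sketch of the simplest bilinear approach for one missing value (the mosaic values here are invented, and real pipelines use far more sophisticated, edge-aware algorithms):

```python
# Toy demosaic step on an RGGB Bayer mosaic: a red photosite never measures
# green, so its green value is synthesized by averaging its four green
# neighbors (bilinear interpolation). Two-thirds of every pixel's final
# RGB values are estimated this way rather than measured.
mosaic = [
    [200, 90, 210, 80],   # R G R G
    [100, 60, 110, 50],   # G B G B
    [190, 85, 205, 95],   # R G R G
    [105, 55, 120, 45],   # G B G B
]

def green_at_red(m, y, x):
    """Estimate green at an interior red site from its 4 green neighbors."""
    return (m[y-1][x] + m[y+1][x] + m[y][x-1] + m[y][x+1]) // 4

# The green value at the red site (2, 2) is never measured -- it's synthesized:
print(green_at_red(mosaic, 2, 2))
```

In that sense, every "straight out of camera" JPEG is already mostly interpolation.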
Personally I much prefer to shoot in RAW and then post-process because cameras, as amazing as they are in some ways, are still incredibly dumb when it comes to context and intent and I'm going to do a better job at getting the white balance, dynamic range, contrast, etc right (for how the shot was intended) than the camera is.
Regarding no filters, I've been shooting on 35mm film for the last year and been really enjoying it, almost never have to color correct, and the photos look so damn good! (especially skin tones)
I just do develop + scan at the drug store, no prints lol.
Doesn't work for everyone though, as waiting a day for photos might not be ideal (but I enjoy the anticipation heh).
That's actually massively filtered most of the time. The colour corrections are done at scan time (just as they were at print time with standard photofinishing in times past). If you were developing/printing in a home darkroom in ye olde days, you'd expect a good print on the third sheet of paper (well, the 2.25th sheet, since you'd only use a quarter-sheet for the rougher adjustments) after fiddling with filter packs or the settings on a dichroic enlarger head. (Slide shooters needed to use CC filters at shooting time.)
Digital can give you pretty much the same skin tones as you'd get with a given film (Fuji's out-of-camera JPEGs are very close to the films they're named after); it's just a matter of matching the response curve of the film you want to emulate, and that takes some fiddling that most people don't take the time to do. (Capture One has much better default conversion curves than most raw conversion software, especially where skin tones are concerned. But you can profile your camera and create your own defaults in most software.) Film does have some advantages, especially when pulled to increase its dynamic range, but it's not fundamentally better than digital, just different.
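Curve-matching like this boils down to applying a 1D lookup curve per channel: sample the target film's response at a few control points and interpolate between them. A minimal sketch (the control points here are invented for illustration, not measured from any real film stock):

```python
# Applying a film-emulation tone curve: each input level 0-255 is mapped
# through a curve defined by a few (input, output) control points, with
# linear interpolation in between. These points are made up, not taken
# from any actual film's measured response.
CONTROL = [(0, 10), (64, 70), (128, 140), (192, 210), (255, 245)]

def apply_curve(v):
    """Map one channel value through the piecewise-linear curve."""
    for (x0, y0), (x1, y1) in zip(CONTROL, CONTROL[1:]):
        if x0 <= v <= x1:
            t = (v - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return v

print(apply_curve(0))    # lifted blacks -> 10
print(apply_curve(255))  # rolled-off highlights -> 245
```

Profiling a camera against a reference target is essentially fitting curves like this (usually per-channel, in a wider gamut) so the defaults land where you want them.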
Yes, regarding the scanning, this is true... I will say though that I've tried photographing (digitally) my own negatives to see how similar it was to a drug-store scan, and it was pretty close color-wise.
However, I would have to disagree about skin tones unless you can provide some examples to sway me. I've googled a lot of comparisons and have yet to see one that looked as good. I find that the way digital captures just makes the skin look quite harsh, and reveals flaws. I personally think digital needs to make more changes to the sensor itself to really step up the quality. Foveon is an awesome example, but not fully there yet in my opinion.
It's almost certainly the other way around, especially given your description of 'harsh' skin. Likely something lossy in the film process gives the effect you like (if it is even real... it's easy to imagine all sorts of visual differences that aren't real once you know what to look for).
Is the store's scan resolution decent? The last time I asked about that sort of service (a few years ago), they only offered scans at some completely awful resolution, e.g. ~1k pixels across...
I appreciate #nofilter. Color correction and enhancement is one thing, but not every selfie or food pic needs to be antiqued. Most filter usage seems to be an attempt to cover up crap lighting and careless composition with "irony".
I don't think anybody's intending for their filtered photos to be ironic. It's just aesthetics... old photos are low-fi in a way that's pleasing, while grainy blue-tinged cell phone pics are not. With one tap you can hide the latter with a filter that makes it look like the former. It's a simple thing and it's no big deal.
On the one hand, you have dodging and burning, which were often used in actual darkrooms and are still used by respected photojournalists to increase the impact of their photos. [0]
On the other hand, clumsy and obvious use of the clone tool damaged the reputation of an entire news organization.
The AP's standards and practices strike an interesting balance [1]:
AP pictures must always tell the truth. We do not alter or digitally
manipulate the content of a photograph in any way.
The content of a photograph must not be altered in Photoshop or by
any other means. No element should be digitally added to or subtracted
from any photograph. The faces or identities of individuals must not
be obscured by Photoshop or any other editing tool. Only retouching
or the use of the cloning tool to eliminate dust on camera sensors
and scratches on scanned negatives or scanned prints are acceptable.
Minor adjustments in Photoshop are acceptable. These include cropping,
dodging and burning, conversion into grayscale, and normal toning and
color adjustments that should be limited to those minimally necessary
for clear and accurate reproduction (analogous to the burning and
dodging previously used in darkroom processing of images) and that
restore the authentic nature of the photograph. Changes in density,
contrast, color and saturation levels that substantially alter the
original scene are not acceptable. Backgrounds should not be digitally
blurred or eliminated by burning down or by aggressive toning. The
removal of “red eye” from photographs is not permissible.
I'm curious to know why the AP don't allow red-eye removal--if the only manipulations allowed are those that help restore the clarity and accuracy to reality of the photographs, I'm not quite sure why red-eye removal wouldn't be permitted--most people's eyes, after all, are not red.
Sure they are red - in the brief instant the flash lights them up. You can see it with a spotlight too, try following someone around a dark stage with a spotlight.
> It's interesting to think I can throw an f/1.8 lens on my DSLR and take a very shallow depth of field photo, which is OK, even though it's not very representative of what my eyes saw
I beg to differ. Pictures with a shallow depth of field feel more real because that is how the eyes work naturally. Hold up your hand at full arm's length and focus on it with your eyes. Everything else around it is blurred.
You’re both right, to some degree. While you are correct that the eyes focus at a certain depth, much like a camera, the difference is that we can choose to focus at any depth (assuming good eyesight). This is not true of a photograph, which has no depth. (It’s also true that, if the photograph has both fore and background in focus, edges may be harder to pick out and depth harder to discern, even though we would have no trouble with our own eyes. That’s the basis of much trick photography, after all!)
All photos are unnatural. To quote the artist (and, dare I say, photographer) David Hockney: “I mean, photography is all right if you don’t mind looking at the world from the point of view of a paralyzed cyclops—for a split second.”
At risk of this becoming too long-winded, allow me to point out another “normal” variable of photography that is wholly unnatural (beyond the issue of focus and the fact that one moment is extended to infinity): shutter speeds faster than 1/150s or slower than 0.5s. Our own eyes will never see the individual blades of a helicopter so clearly as a simple iPhone camera will when shot against a bright sky. Nor will the naked eye see a waterfall as a blurry, peaceful average the way a long exposure portrays it.
Dynamic range too. Modern digital sensors are just now approaching something like the "instantaneous" dynamic range of human eyes, but as you mentioned, we don't really see images instantaneously the way a camera does; what we "see" (when you include brain processing) is essentially a stack of everything we've taken in over as much as the previous 15 seconds. When you take that into account, even the best cameras are still way off on DR.
The data that the photo conveys should not be edited (i.e. the people in it, the objects in it; the framing shouldn't be used to intentionally remove data relevant to the subject, etc.), but the mood or style of the photo may be edited: colors, contrast, some stylistic effects, lighting, depth of field, etc.
It's fairly obvious what's over the line and what isn't in 99% of cases.
Photojournalism is a weird cross over of history/evidence collection and drifting in to art. They fit together like oil and water, for all the reasons you pointed out.
on the other hand, i know photographers in art/advertising
-- working for some of the biggest companies, like apple --
who brag that "not one pixel goes out without being touched".
Could you speak to how familiar photojournalists are with the math and transformations that take place inside a digital camera?
For example, almost certainly your cameras have dead pixels, which are processed away during the demosaicing stage, but showing them would be a "straight off the camera image" that I doubt any photojournalist would desire.
Additionally, many scene-wide processing steps (like lens shading map estimation) can be changed in post-processing if the camera's automatic algorithms (3A, etc.) "decided" wrong.
It sounds like the problem isn't image processing, the problem is image processing by the photojournalist, because with Photoshop in hand they can change the meaning of the photo. The best way to avoid that, is to avoid the use of Photoshop.
It’s an age old topic really, what you are doing with the camera is representing what you see, you are cropping a small part of what’s around you, and you can give something completely different meanings by just pointing the camera in a different direction.
They should make an exception for photos appearing online, since you can always link to the original one. Maybe a stipulation that the edited photos can't be used in print, unless accompanied by the original.
[1] http://www.toledoblade.com/frontpage/2007/04/15/A-basic-rule...