Although the underlying technology is similar, AI upscaling is not a valuable propaganda tool, so it is not in any way, shape, or form the same as AI “fakes”.
The intention with AI upscaling is to enhance existing detail and remove artefacts while increasing size and scale, not to create a completely new or false image that differs from the input source or changes its narrative. It’s closer to this than it is to deepfakes or propaganda.
You’re not wrong. But that’s done by inferring what should be there. So it’s still going to appear to be faked, because in a very real sense it is faked. It’s faked within a narrow band of expectations, but it is faked. A better way to publish photos like this is to include both the original and the enhanced version, to remove doubt.
True. Upscaling is likely to always trigger a positive with any AI-analysis tool unless the tool has been calibrated to detect upscaling, which would probably require some reference to, or pre-processing of, the original image.
So yes… Honestly, including a visible disclaimer, and providing a reference to the original, should be a requirement for ANY digital image adjustment, in ANY work of non-fiction; including adjustments made in Photoshop, like making a model skinnier or removing stretch marks. You shouldn’t be able to misrepresent reality to consumers without explicitly telling them it’s a misrepresentation.
Technically true, but I don’t think most people would consider upscaling to be “faking” an image. However, given a poor-quality source (and parameters), I’m sure some terrible “AI” upscalers could produce some kind of massively modified abomination.
Edit: Taking a closer look, I think this image qualifies. Terribly done at every level, with extreme over-sharpening and hideous artifacts. SMFH.
You’re playing with words to make it seem as if upscaling takes the Hamas guy from a photo where he’s pulling the injured out of a crumbled building and places him in a private jet.
So even if upscaling “recalculates” most pixels, it’s based on what is already in the picture and doesn’t change the content or context of the pic. By that I mean the information given to the viewer stays the same.
If we rely on computers and algorithms to maintain the intended interpretation of data, we are setting ourselves up for confusion.
Showing the regular, unenhanced data is key to credibility. “Enhancing” data is just a fancy way of saying guessing what the data should look like, and there’s a range of valid guesses. That puts it right up there with artistic interpretations and courtroom sketches, which have their place. And they should be labeled as such; they shouldn’t be called a photograph.
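The “range of valid guesses” point is easy to demonstrate without any AI at all. Here’s a hypothetical toy sketch (not any real upscaler’s code): two perfectly reasonable interpolation rules give different answers for a pixel position that was never actually measured.

```python
# Toy example: reconstructing an in-between pixel from measured samples.
# Two reasonable rules, two different "enhanced" values.

samples = [0, 100, 0]  # three measured pixel values along one row

def nearest(samples, t):
    # Nearest-neighbour: copy the closest measured sample.
    return samples[round(t)]

def linear(samples, t):
    # Linear interpolation: blend the two surrounding samples.
    i = min(int(t), len(samples) - 2)
    f = t - i
    return samples[i] * (1 - f) + samples[i + 1] * f

t = 0.6  # a position that was never measured
print(nearest(samples, t), linear(samples, t))  # 100 vs 60.0
```

Both outputs are “valid” guesses, and neither one is the data — which is exactly why the guess should be labeled.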
As much as I agree that a link to the original is better and that verification is always required, there are a few points that make this debate meaningless:
This case was verified. We know where it came from and how it looked before.
Verification is required regardless of apparent enhancement.
AI can make a picture the average Joe won’t be able to tell apart from a “real” one, so there’s no point in making it obvious like this.
Let me also point out that the idea of not relying on computers for image processing is naive.
Everyone would have to take pictures with a classic mirror camera on film and develop photos the old-fashioned way in the lab, and the photos could still be doubted.
The moment any photo gets digitised, all the analogue information in the picture goes through computing algorithms.
Scanned photos get distorted to fit binary representations and to compensate for scanner “flaws”.
Most of today’s smartphones and cameras apply some sort of upscaling, sharpening, and other filters to make their photos more attractive.
Don’t get me started on the compression and upscaling needed to render the web fast enough over thousands of people’s bad connections.
Upscaling, especially AI-assisted upscaling, is a form of faking data that isn’t there.
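To be concrete about “data that isn’t there”: here’s a toy, non-AI bilinear upscaler — purely an illustrative sketch, not how any particular product works. Every output value that never appeared in the source grid was inferred from its neighbours rather than measured.

```python
# Toy bilinear upscaler for a 2D grayscale image (list of lists).
# Illustrative only: shows that upscaling emits values absent from the source.

def bilinear_upscale(img, factor):
    """Upscale img by an integer factor using bilinear interpolation."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * factor, w * factor
    out = []
    for y in range(out_h):
        # Map the output row back into source coordinates.
        sy = y * (h - 1) / (out_h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(out_w):
            sx = x * (w - 1) / (out_w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Blend the four neighbouring source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

src = [[0, 100],
       [100, 0]]  # the source only ever contains 0 and 100
big = bilinear_upscale(src, 2)

# Values that appear in the output but were never in the source:
invented = {round(v, 1) for row in big for v in row} - {0.0, 100.0}
print(sorted(invented))  # → [33.3, 44.4, 55.6, 66.7]
```

An AI upscaler does the same thing with a far more opinionated guessing rule learned from other images, which is exactly why its output can look plausible while containing invented detail.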