Google’s AI is getting better at the things AI is terrible at. Which raises the question: how can we tell which images are AI?

A sample graphic generated by Google Imagen 4 of a person in a futuristic environment
(Image credit: AI Generated / Google Imagen 4)

Generative AI is typically terrible at things like rendering texture and spelling out actual words inside images. But Google’s latest update to its image generator, Imagen, is getting far better at those traditional weak spots.

Which raises the question: as AI becomes more capable of photorealistic generations, how can viewers tell what’s AI and what’s a real photograph?

This week Google announced Imagen 4, the latest generation of its AI image generator, alongside bringing audio capabilities to its video generator, Veo 3, and launching AI video tool Flow.

The list of improvements to Imagen 4, however, reads a bit like the list of signs that an image is AI-generated.

Google claims that the generator can now render more realistic textures such as animal fur and water droplets, illustrating that claim with a generated image of a golden retriever with his head hanging out of a car window, fur blowing in the wind, and a droplet of drool hanging from his mouth.

(Image credit: AI Generated / Google Imagen 4)

The sample image handles texture and even reflections better than earlier AI generations. There’s some texture to the dog’s fur, although something still feels off: the image looks over-edited rather than having the texture-less smoothness that once screamed AI-generated.

Imagen 4 is also better at rendering text inside images, including things like labels and street signs, which also makes the AI a more useful tool for tasks like creating greeting cards or comics.

Those two upgrades chip away at the shortening list of things that generative AI programs are terrible at, a list that still includes keeping details such as earrings symmetrical and generating realistic renderings of complex movement.

But with Imagen 4, Google is also launching a way to help identify whether something is a photograph or a generation. SynthID Detector is a new tool that helps determine if an image is AI-generated or real.

(Image credit: AI Generated / Google Imagen)

The software, currently in beta testing, isn't yet widely available. It relies on Google’s SynthID, an invisible watermark introduced in 2023 that is embedded into generated images (as well as other formats, like video and audio) to identify them as AI-made.

Google has been embedding SynthID into AI graphics for years; SynthID Detector is a new platform for reading that embedded watermark.

Google says that SynthID is designed to remain intact even when an image goes through multiple edits. SynthID Detector, meanwhile, is designed not just to flag when AI generation is used, but also to indicate which sections of an image are most likely to be watermarked.

That means the tool can detect both fully generated graphics and partially AI-generated edits.

(Image credit: AI Generated / Google Imagen 4)

SynthID Detector isn’t the only software made to detect AI. The Content Authenticity Initiative, for example, has a tool to read a file’s Content Credentials, and other third-party programs, such as the Hive AI detector, also work to identify what is AI and what is not.

While I’m glad that Google has launched an AI detector alongside more realistic AI software, the question remains how often viewers will take the time to check whether an image is AI-generated or real. Many people still share fake news articles without checking fact-checking tools like Snopes, and I worry the same fate will befall AI-generated deepfakes.

Internet users have notoriously short attention spans, so I think built-in tools that flag AI generations will be more widely used than separate ones. Google already does this in the form of About This Image, though it takes a few clicks to get there.

Launching more AI detection tools alongside better AI is certainly a step in the right direction. But as AI generators become more realistic, internet users will need to be even more skeptical of how truthful an image is – and use AI detectors to help clarify what’s a real photograph and what’s not.


Hillary K. Grigonis
US Editor

With more than a decade of experience writing about cameras and technology, Hillary K. Grigonis leads the US coverage for Digital Camera World. Her work has appeared in Business Insider, Digital Trends, Pocket-lint, Rangefinder, The Phoblographer, and more. Her wedding and portrait photography favors a journalistic style. She’s a former Nikon shooter and a current Fujifilm user, but has tested a wide range of cameras and lenses across multiple brands. Hillary is also a licensed drone pilot.
