An image originally said to show a large explosion near the Pentagon complex appears to be AI-generated.
WASHINGTON — Officials say there was no explosion at the Pentagon Monday morning, following the spread of a viral image showing a large column of black smoke. Arlington firefighters and Pentagon Force Protection Agency officers said there was no danger to the public.
The image was originally shared on Twitter just after 9 a.m. by the user OSINTdefender, a profile described as an open-source intelligence monitor. The image quickly spread across Twitter before the original poster removed the initial tweet about an hour later.
Arlington County Fire Department officials made it clear that there was no need to be alarmed by the image.
"There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public," the Arlington County Fire Department tweeted.
@PFPAOfficial and the ACFD are aware of a social media report circulating online about an explosion near the Pentagon. There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public. pic.twitter.com/uznY0s7deL
— Arlington Fire & EMS (@ArlingtonVaFD) May 22, 2023

The image appears to be generated by artificial intelligence, specifically a form of AI called generative artificial intelligence, or Gen-AI. Gen-AI requires someone to prompt, or command, the program to create something completely new.
For the time being I have Deleted my Post regarding the Explosion near The Pentagon in Washington D.C, even though there are Tons of Accounts reporting on it there seems to be No Evidence of it taking place besides 1 Picture and these “Initial Reports.” pic.twitter.com/H4dqZZWGJB
— OSINTdefender (@sentdefender) May 22, 2023

Siwei Lyu, the director of the UB Media Forensic Lab at SUNY Buffalo and a deepfake expert, said that while Gen-AI can be helpful, it can also do harm in the wrong hands. It's being used to "recreate or produce realistic types of media," including images, video, audio and some text, Lyu said. That includes impersonations and fake content.
“You can use AI models to recreate human faces, making them look like real people, but [is actually] nobody alive,” Lyu said. “You can also use an algorithm to transfer or convert voices using text to voice … You can have videos where the subject is generated by, again, generative AI models to impersonate a particular person.”
Lyu offered some advice to avoid getting duped by Gen-AI videos, text and audio.
When it comes to images, use a reverse image search tool like TinEye to learn where an image came from. You can also examine a photo's metadata, which, along with other information, can help verify whether the image is real.
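For readers comfortable with a little code, here is a minimal sketch of checking metadata yourself, assuming Python with the Pillow imaging library installed. The file name "photo.jpg" is a placeholder, and keep in mind that AI-generated or re-shared images often carry little or no metadata at all, which can itself be a warning sign.

from PIL import Image
from PIL.ExifTags import TAGS

# Open the image and read its EXIF metadata, if any is present.
image = Image.open("photo.jpg")
exif = image.getexif()

if not exif:
    print("No EXIF metadata found (common for AI-generated or stripped images).")
else:
    for tag_id, value in exif.items():
        # Translate numeric tag IDs into human-readable names where known.
        tag = TAGS.get(tag_id, tag_id)
        print(f"{tag}: {value}")

Fields such as the camera model and the date the photo was taken, when present, give investigators additional clues to weigh against other evidence.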