Fake, AI-generated satellite images can pose a threat to nations and agencies worldwide, a team of researchers warns. These bogus images could be used to create hoaxes about natural disasters, to prop up other fake news, or even to mislead governments into international conflict.
A deepfake, a portmanteau of “deep learning” and “fake,” is synthetic media: photo or video content generated by artificial intelligence, often created with the intention of fooling the viewer. Although the content can be presented as a lighthearted joke in some situations, for example, when a TikTok user impersonated Tom Cruise, deepfakes can also cause issues of varying severity when used maliciously.
The Guardian reports that this type of false visual content is predominantly used for adult content, for example, mapping a female celebrity’s face onto an adult actor. It is also used to spread false news or to scam individuals and businesses. Beyond falsifying existing material, deepfakes can fabricate the profile of a person who does not exist, which can then be used for spying or other deceitful or illegal ends.
In August 2020, PetaPixel reported on the negative impact this type of manipulated media can have on both celebrities and businesses that are impersonated, pointing out that detecting and keeping up with deepfake technology is a costly and difficult process for any research group prepared to tackle it.
However, deepfakes now also pose a threat to nations and security agencies in the form of false and misleading satellite imagery, as first reported by The Verge. Bogus satellite images could be used to create hoaxes about natural disasters or to back up false news; they could also “be a national security issue, as geopolitical adversaries use fake satellite imagery to mislead foes.”
A recent study, led by University of Washington researchers, examined this concern and “its potentials in transforming the human perception of the geographic world.” The study points out that, although deepfake detection in general has made some progress, there are no methods designed specifically for spotting false satellite images.
The team simulated their own deepfakes using Tacoma, Washington, as the base map, superimposing onto it features extracted from Seattle, Washington, and Beijing, China. In the resulting fake satellite image, high-rises from Beijing cast shadows, while the low-rise buildings and greenery came from Seattle’s urban landscape.
The team explains that anyone unfamiliar with this type of technology would struggle to tell real imagery from fake, especially because any odd details or colors can be attributed to the poor image quality often found in satellite photos. Instead, the researchers note that fakes can be identified by examining an image’s color histogram and its spatial and frequency domains.
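To make the color-histogram idea concrete, here is a minimal sketch, not the study’s actual method: it compares the per-channel color histograms of two image tiles with a chi-square distance, where a large distance suggests their color statistics differ. The function names and the synthetic tiles are illustrative assumptions.

```python
import numpy as np

def color_histogram(image, bins=32):
    """Concatenated, normalized per-channel histograms of an RGB uint8 image."""
    hists = []
    for channel in range(3):
        h, _ = np.histogram(image[..., channel], bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    return np.concatenate(hists)

def histogram_distance(img_a, img_b, bins=32):
    """Chi-square distance between two images' color histograms.

    0.0 means identical color statistics; larger values mean the
    color distributions diverge, which can flag a suspect image.
    """
    ha = color_histogram(img_a, bins)
    hb = color_histogram(img_b, bins)
    eps = 1e-10  # avoid division by zero in empty bins
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + eps))

# Synthetic stand-ins for a reference tile and a suspect tile
rng = np.random.default_rng(0)
real_tile = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
suspect_tile = np.clip(real_tile.astype(int) + 80, 0, 255).astype(np.uint8)

print(histogram_distance(real_tile, real_tile))     # identical images: 0.0
print(histogram_distance(real_tile, suspect_tile))  # shifted colors: clearly > 0
```

In practice the comparison would run against reference imagery of the same region, and the study’s approach also inspects spatial and frequency characteristics, which this sketch does not cover.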
Bo Zhao, the study’s lead author, explains that its goal was to raise public awareness of technology that can be used to misinform and to encourage precautions, in the hope that the study will spur the development of systems that can flag fake satellite images among real ones.
“As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information so that we can demystify the question of absolute reliability of satellite images or other geospatial data,” Zhao tells UW News.
Although AI-generated images could create chaos and losses for security agencies and strategists, the researchers also point out that AI-generated satellite imagery can serve positive purposes. For example, the technology can help simulate locations from the past to study climate change, unrestricted growth in urban areas (known as urban sprawl), or how a region may develop in the future.
Image credits: Header photo licensed via Depositphotos.