In the current geopolitical landscape, disinformation has found a new and sophisticated vector: satellite images manipulated with artificial intelligence. These forgeries, which simulate damage to critical infrastructure to shape the public narrative, proliferate on social media. Their danger lies in the perceived objectivity of the view from space, which exploits widespread unfamiliarity with real remote sensing. Analyzing and auditing these geospatial deepfakes has become a crucial technical task. 🛰️
Generation techniques and keys to forensic auditing 🔍
These forgeries are generated with AI tools such as image generators or advanced editors, which can alter a real photograph or synthesize a scene from scratch. Forensic auditing rests on identifying inconsistencies. The example of the Qatar fields in flames was detectable by its AI watermark, but there are more clues. Analysts look for visual artifacts such as repetitive textures, shadows inconsistent with the supposed sun position, and impossible geometries in structures. The resolution and quality of a generated image are often unnaturally homogeneous. The image's origin and metadata are also verified, although these too can be falsified. Comparative analysis against historical imagery of the same location is fundamental.
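As an illustration of what "identifying inconsistencies" can mean at the pixel level, one widely used forensic check is Error Level Analysis (ELA): a JPEG is recompressed at a known quality and the per-pixel difference is inspected, since regions pasted or generated after the original save often recompress with a different error level than the rest of the scene. Below is a minimal sketch in Python using the Pillow library; the function name and the quality setting are illustrative assumptions, not part of any standard auditing tool, and ELA is only one signal among many rather than a definitive test.

```python
import io
from PIL import Image, ImageChops  # Pillow imaging library


def error_level_analysis(path, quality=90):
    """Recompress the image at a fixed JPEG quality and return the
    per-pixel difference map plus the largest channel difference.

    Regions edited or generated after the original save often show a
    visibly different error level in the returned diff image.
    """
    original = Image.open(path).convert("RGB")

    # Re-save at a known quality into an in-memory buffer.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")

    # Per-pixel absolute difference between the two versions.
    diff = ImageChops.difference(original, recompressed)

    # getextrema() yields (min, max) per channel; take the overall max.
    max_diff = max(channel_max for _, channel_max in diff.getextrema())
    return diff, max_diff
```

In practice an analyst would visualize the amplified `diff` image and look for patches whose error level stands out from their surroundings, then corroborate any suspect region with the other cues described above (shadows, geometry, historical imagery).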
Beyond the pixel: credibility in the space age 🧠
This phenomenon shows that no image source is inherently truthful. Restricted access to high-resolution satellite imagery during conflicts creates an informational vacuum that bad actors exploit. The defense is therefore not only technical but also cognitive: the public must develop healthy skepticism even toward apparently objective formats. The battle against this disinformation is fought by combining the expert eye of the geospatial analyst, specialized forensic software, and citizens' critical digital literacy.
How can forensic auditors distinguish a satellite deepfake from a genuine image when the manipulation is done at the pixel level with generative AI?
(P.S.: Detecting deepfakes is like playing Where's Waldo? but with suspicious pixels.)