NSA, FBI and CISA publish report on deepfakes and how to deal with this threat
I still vividly remember the impression one of the first synthetic videos made on me. It was back in 1994, when Forrest Gump mixed footage of historical events with Tom Hanks in the role of the title character.
At least for me, it was shocking to see something with such a semblance of reality while my brain kept insisting it was impossible.
Deepfakes are a threat that falls under content manipulation techniques: video, audio, images or text created synthetically using Artificial Intelligence/Machine Learning technologies that try to make recipients believe in the veracity of something false.
Many of us have been adding authenticity-checking techniques to our personal arsenal little by little, particularly for the news that circulates through our social networks: asking for original sources, contrasting the story across several outlets, trying to maintain a skeptical attitude in the face of a media bombshell, and so on.
Reality inevitably pushes us to look for tools to protect ourselves from this new threat, increasingly subtle and advanced, which calls into question the wisdom of popular sayings such as "if I don't see it, I don't believe it" or "a picture is worth a thousand words".
The emergence and democratization of tools for manipulating multimedia content increases the risk considerably.
What used to be available only to a very few (for example, the Hollywood film industry), and was very expensive to produce, is now within reach of almost anyone, including those with no scruples.
The question seems unavoidable: What can we do?
The NSA, FBI and CISA report
Last September, several US security agencies (NSA, FBI, CISA) published a report, Contextualizing Deepfake Threats to Organizations, offering a detailed characterization of the threat associated with deepfakes and listing some of the most relevant dangers: damage to an organization's image, impersonation of executives and finance personnel, and falsification of communications to gain access to private networks.
Understanding these techniques and their possible mitigations is of great interest to organizations of any size.
The report reviews the advances in protection techniques against this threat, classifying them into two main groups: detection and authentication.
- Detection: forensic techniques which, under the assumption that manipulation leaves statistically significant traces in the final content, look for those traces in order to flag a possibly fraudulent origin. This is an area of continuous evolution, with attackers and detection techniques advancing in tandem.
- Authentication: embedding information in the multimedia content that legitimizes its origin, for example watermarks, proofs of life, or a hash generated by the device that creates/edits the content for later validation (a minimal sketch of this idea follows the list).
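To make the authentication idea concrete, here is a minimal sketch in Python of hash-based provenance, assuming a shared secret for simplicity: the device tags a digest of the file at creation time, and a recipient can later verify that not a single byte has changed. The sign_media/verify_media helpers and the key handling are illustrative assumptions, not part of the report; real provenance schemes (such as the C2PA standard) use asymmetric signatures held in secure hardware.

```python
import hashlib
import hmac

# Illustrative only: a real device would use an asymmetric signature
# (private key in secure hardware), not a shared HMAC secret.
DEVICE_SECRET = b"device-unique-secret"  # hypothetical key material

def sign_media(path: str) -> str:
    """Compute an HMAC-SHA256 tag over the media file at creation time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return hmac.new(DEVICE_SECRET, digest, hashlib.sha256).hexdigest()

def verify_media(path: str, tag: str) -> bool:
    """Re-compute the tag on the received file and compare in constant time."""
    return hmac.compare_digest(sign_media(path), tag)

# Usage: tag = sign_media("clip.mp4") at capture time; later,
# verify_media("clip.mp4", tag) returns False if anything was altered.
```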
Recommendations from the NSA, FBI and CISA
These are the main recommendations made by these agencies in the aforementioned report:
- Select and implement technologies that detect deepfakes and verify the provenance of the organization's multimedia content.
  - For example, the report mentions incorporating proofs of life during live communications that end in a financial transaction (a toy sketch of the idea follows this list).
- Protect and limit publicly available information about VIPs.
- Put response and training plans in place for the organization's staff, and run frequent simulations to check that the measures taken remain adequate and that the organization stays trained and resilient.
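As a toy illustration of the proof-of-life idea mentioned above, the sketch below issues a random challenge phrase that the remote party must repeat on camera within a short window; a pre-recorded or pre-generated video is unlikely to contain an arbitrary fresh phrase. The word list, timeout and function names are assumptions for illustration only.

```python
import secrets
import time

WORDS = ["amber", "falcon", "orbit", "cedar", "quartz", "lantern"]

def issue_challenge(n_words: int = 3) -> tuple[str, float]:
    """Generate a fresh random phrase and record when it was issued."""
    phrase = " ".join(secrets.choice(WORDS) for _ in range(n_words))
    return phrase, time.monotonic()

def challenge_still_valid(issued_at: float, timeout_s: float = 15.0) -> bool:
    """A slow response suggests the far end may be synthesizing a reply."""
    return time.monotonic() - issued_at <= timeout_s

# Usage: read the phrase to the caller, ask them to repeat it on camera,
# and only proceed with the transaction if the phrase matches and
# challenge_still_valid(issued_at) is still True.
phrase, issued_at = issue_challenge()
print("Ask the caller to say:", phrase)
```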
The report also lists both proprietary and open-source tools that can help organizations move forward in mitigating this new threat.
But what happens at a lower level, where we are not talking about organizations with budgets earmarked for cyber security, but about ordinary citizens? Are there measures we can take?
End users: What to do?
As end users or small organizations, we are still potential victims, or intermediaries (through the viralization/distribution of fake content), of this new threat. Some basic recommendations can help us mitigate the risk, depending on the type of content we are facing.
Images
- Notice the background of the image: it is often pixelated or blurry, with errors or inconsistencies in lighting and shadows (a small automated blur check is sketched after this list).
- Look at the details of accessories such as glasses or necklaces, where the joins with the rest of the image are usually not entirely coherent.
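For the background check, one signal can even be automated: the variance of the Laplacian is a classic blur metric. The sketch below is a generic heuristic, not a deepfake detector; comparing the score of a suspect region against the rest of the image, and the rule-of-thumb threshold, are assumptions about how one might use it.

```python
import cv2  # pip install opencv-python

def blur_score(image_path: str) -> float:
    """Variance of the Laplacian: low values indicate a blurry image.

    A generic blur heuristic, not a deepfake detector: an unusually
    blurry background is only one weak signal among many.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Usage: crop the background and the subject into separate files and
# compare their scores; values below ~100 are often considered blurry,
# though that threshold is an assumption and depends on the image.
print(blur_score("suspect.jpg"))
```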
Video
- Notice the consistency of the skin across the face: does the area around the eyes look the same age as the rest of the face?
- Notice the cadence of the blinks: is it rhythmic? Too slow? Too fast? (A simple cadence check is sketched after this list.)
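The blink cadence can also be quantified. The sketch below assumes a per-frame eye-aspect-ratio (EAR) series has already been extracted with a landmark tracker (e.g. MediaPipe Face Mesh); the 0.2 threshold is a commonly used rule of thumb, and the toy data is made up.

```python
import numpy as np

def blink_rate(ear: np.ndarray, fps: float, threshold: float = 0.2) -> float:
    """Blinks per minute from a per-frame eye-aspect-ratio (EAR) series.

    A blink is counted on each open -> closed transition, i.e. each
    time the EAR drops below the threshold.
    """
    closed = ear < threshold
    blinks = np.count_nonzero(~closed[:-1] & closed[1:])
    minutes = len(ear) / fps / 60.0
    return blinks / minutes

# Humans typically blink roughly 15-20 times per minute at rest; a rate
# far outside that range, or perfectly regular intervals, is suspicious.
ear_series = np.array([0.30, 0.31, 0.15, 0.29, 0.30, 0.12, 0.30])  # toy data
print(blink_rate(ear_series, fps=30))
```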
Audio and voice
- Without a spectrogram handy... we can check whether the tone is too monotonous or emotionless, whether the intonation is odd, or whether background noise is completely absent (a rough monotony check is sketched below).
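A rough monotony heuristic can be computed without a spectrogram: estimate the pitch of each frame by autocorrelation and look at how much it varies. This sketch assumes clean, normalized mono audio; the 75-300 Hz search band for typical speech and the amplitude threshold are assumptions, and real detectors are far more sophisticated.

```python
import numpy as np

def frame_pitch(frame: np.ndarray, sr: int,
                fmin: float = 75.0, fmax: float = 300.0) -> float:
    """Rough fundamental-frequency estimate via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # lag range for speech pitch
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def pitch_variation(signal: np.ndarray, sr: int, frame_ms: int = 40) -> float:
    """Std. deviation of pitch across frames: natural speech varies;
    a near-zero value suggests monotone, possibly synthetic audio."""
    n = int(sr * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    # The 0.01 amplitude floor is a crude voiced-frame proxy (assumes
    # float audio normalized to [-1, 1]).
    pitches = [frame_pitch(f, sr) for f in frames if np.abs(f).max() > 0.01]
    return float(np.std(pitches)) if pitches else 0.0

# Usage (assumes audio loaded as a mono float array, e.g. with soundfile):
# import soundfile as sf; y, sr = sf.read("voice.wav")
# print(pitch_variation(y, sr))
```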
Conclusions
Deepfakes are here to stay, and their presence will become increasingly relevant as image generation tools complemented with generative AI techniques become more democratized, cheaper, more popular and easier to use.
As end users, we should ask the companies that run our social networks to make a continuous effort to build tools for the automatic detection of fake content, so that we avoid becoming part of its viralization.
And while these tools are being developed and reach our devices and our digital lives, the main recommendation is calm and caution. Let's try to apply to audiovisual content the same mechanisms we have already incorporated for the news that reaches us:
- Original source?
- Has any reliable media outlet echoed the content?
- Does it press you to act urgently?
When faced with controversial, overwhelming or strange content, it is advisable to wait a reasonable time before taking part in its distribution.
Image from Rawpixel on Freepik.