Advances in deepfakes: a threat to business
Brad Smith, Microsoft's vice chair and president, told U.S. lawmakers a few months ago that one of his biggest concerns is the proliferation of deepfakes, and urged them to create new laws to protect national security, as well as to take measures so that people can recognize this type of forgery.

Deepfake is a method of impersonation in which advanced AI techniques are used to collect data on a person's physical movements, facial features and even voice, and to process that data to create fake audiovisual, graphic or voice content with a hyper-realistic result. Several types of deepfakes can be identified, used together or separately, for example:

◾ Deepvoice: fragments of a person's voice are replicated to deliver a different message or content.
◾ Deepface: multimedia content in which a person appears is used to impersonate their face and gestures and deliver different content.

The first time this technique was used by a private individual was in 2017, by a Reddit user whose profile name was Deepfakes. Since then, and especially over the last few years, deepfakes have gained great relevance and accessibility, and are now far more available to the general public through commercial applications, which increases the risk of this type of technology being used with fraudulent or malicious intent.

◾ A recent case is that of some thirty minors in Almendralejo (Badajoz, Spain), who reported that AI-generated photographic montages of themselves, created and disseminated by other minors, were circulating.
◾ Authorities have also warned of an increase in scam calls impersonating family members, in which money is urgently requested to resolve a supposed emergency.

Deepfake in the corporate environment

This is directly relevant to security in the corporate environment: by modifying the voice and/or image of a member of a company's board of directors, cybercriminals could impersonate them in calls, or even video calls, and push through decisions that are harmful or fraudulent for the company. We have previously discussed the use of deepfakes for this purpose, for example in CEO fraud scams.

As already mentioned, over the last few years deepfake methods have become much more accessible and, with the rise of artificial intelligence, have advanced at a dizzying rate, becoming ever more realistic. Focusing on voice deepfakes, in recent months synthetic voices have achieved a much more natural sound, increasingly resembling the human voice and making it difficult to discern whether a recording is a simulation or a real person.

An example of this is VALL-E, the language-modeling tool Microsoft announced earlier this year, which can synthesize high-quality personalized speech from just a three-second recording, even reproducing the speaker's cadence, pitch and acoustic environment. While this tool is not yet in circulation, there are many others that can be used today, although they require longer voice recordings, such as Resemble.ai or CereVoice Me, among others.

A recent fraudulent application of this method occurred during the spring of this year, when a Florida investor contacted his local Bank of America representative to arrange a large money transfer.
However, during the process a second call was made in which the investor's identity was impersonated through voice cloning, with the goal of tricking the bank representative into transferring the money to a different recipient. In this case the fraud was quickly detected and never completed.

Another recent case, in which money actually changed hands, occurred in Baotou (Inner Mongolia, China). This time the technology was used to convince a man to transfer 4.3 million yuan to a supposed friend who needed it as a deposit during a bidding process. In reality, cybercriminals had impersonated the friend in order to have the money transferred to them.

Ways to Prevent and Detect Deepfakes

Some emerging technologies are helping to make deepfakes detectable:

◾ Cryptographic algorithms can be used to insert hash values at set intervals in a video; if the video is modified, the hash values change (a minimal sketch of this idea follows these lists).
◾ AI and blockchain can record a tamper-proof fingerprint for videos.
◾ Another way to neutralize deepfake attempts is to use a program that inserts specially designed digital "artifacts" into videos to hide the pixel patterns that facial-detection software relies on; these slow down deepfake algorithms and degrade their results.
◾ Biometric voice recognition: this uses unique characteristics of a person's voice, such as pitch, cadence and rhythm, to verify their identity.
◾ Spectrogram analysis: voice spectrograms can reveal signs of tampering, such as overlays or edits (a heuristic example is sketched below).
◾ Blockchain: some solutions use blockchain technology to track and verify the authenticity of images from their origin (a toy hash chain is sketched below).

Technology, however, is not the only way to protect against these forgery techniques, whether they target image, video or voice. Here are some useful practices to detect and prevent deepfake fraud attempts:

◾ Multifactor identity verification: combine voice recognition with other verification methods, such as password authentication or facial recognition, to increase security (illustrated in the last sketch below). It is also very relevant for any business organization to have automatic controls integrated into every process that involves the disbursement of funds.
◾ Education and awareness: ensure that both employees and other users understand how a deepfake works and the challenges it can pose, make good use of the media, and rely on good-quality sources of information.
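As a minimal sketch of the interval-hashing idea from the first bullet: hash a video file in fixed-size intervals when it is produced, store the resulting manifest, and re-hash later to detect any modification. The file names and interval size are illustrative choices, and a real system would sign or notarize the manifest rather than keep it as plain JSON.

```python
import hashlib
import json

CHUNK_BYTES = 1 << 20  # hash the file in 1 MiB intervals (illustrative choice)

def fingerprint(path: str) -> list[str]:
    """SHA-256 hash of each fixed-size interval of the file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_BYTES):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def verify(path: str, manifest_path: str) -> bool:
    """Re-hash the file and compare against the stored manifest.

    Any edit to the video changes at least one interval hash.
    """
    with open(manifest_path) as f:
        expected = json.load(f)
    return fingerprint(path) == expected

if __name__ == "__main__":
    # Record a manifest for the original video, then verify it later.
    with open("video.mp4.manifest.json", "w") as f:
        json.dump(fingerprint("video.mp4"), f)
    print("intact:", verify("video.mp4", "video.mp4.manifest.json"))
```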
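For spectrogram analysis, one simple heuristic is to look for abrupt frame-to-frame spectral jumps, which can betray a splice or overlay. The sketch below uses NumPy and SciPy; the z-score threshold is an arbitrary illustrative value, and real forensic tools rely on far more sophisticated models.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def spectral_flux_peaks(path: str, z_thresh: float = 4.0):
    """Flag spectrogram frames whose spectral change is anomalously large.

    Sudden jumps in the spectrum can indicate a splice or overlay;
    the z-score threshold is an arbitrary illustrative choice.
    """
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    freqs, times, spec = spectrogram(audio, fs=rate, nperseg=1024)
    # Frame-to-frame spectral change ("flux") across the whole spectrum
    flux = np.sqrt((np.diff(spec, axis=1) ** 2).sum(axis=0))
    z = (flux - flux.mean()) / (flux.std() + 1e-9)
    return [times[i + 1] for i in np.flatnonzero(z > z_thresh)]

if __name__ == "__main__":
    suspects = spectral_flux_peaks("call_recording.wav")
    print("possible edit points (seconds):", [round(t, 2) for t in suspects])
```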
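The blockchain bullet boils down to chaining content hashes so that no record of an image can be altered without invalidating every later record. The following is a toy, in-memory hash chain for illustration, not a distributed ledger.

```python
import hashlib
import json
import time

class ProvenanceChain:
    """Toy hash chain: each record commits to an image's hash and to the
    previous record, so altering any record invalidates all later ones."""

    def __init__(self):
        genesis = {"image": None, "sha256": None, "time": 0.0, "prev": "0" * 64}
        self.records = [genesis]

    @staticmethod
    def _digest(record: dict) -> str:
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def register(self, image_path: str) -> None:
        """Append the image's content hash, chained to the previous record."""
        with open(image_path, "rb") as f:
            sha = hashlib.sha256(f.read()).hexdigest()
        self.records.append({
            "image": image_path,
            "sha256": sha,
            "time": time.time(),
            "prev": self._digest(self.records[-1]),
        })

    def verify(self) -> bool:
        """True only if every record still points at its predecessor."""
        return all(
            rec["prev"] == self._digest(prev)
            for prev, rec in zip(self.records, self.records[1:])
        )

if __name__ == "__main__":
    chain = ProvenanceChain()
    chain.register("photo.jpg")   # hypothetical image file
    print("chain intact:", chain.verify())
```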
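Finally, the essence of multifactor verification is that no single signal, least of all a voice on a phone line, should authorize a disbursement on its own. The function and threshold below are hypothetical placeholders for whatever voice-biometric and one-time-code services an organization actually uses.

```python
import hmac

# Hypothetical inputs: a similarity score from a voice-biometric service
# and a one-time code delivered out of band (e.g. an authenticator app).
VOICE_MATCH_THRESHOLD = 0.90  # illustrative value; tune per vendor guidance

def approve_transfer(voice_score: float, submitted_otp: str, expected_otp: str) -> bool:
    """Require BOTH a strong voice match AND a valid one-time code.

    A cloned voice alone fails the OTP check; a stolen code alone
    fails the biometric check.
    """
    voice_ok = voice_score >= VOICE_MATCH_THRESHOLD
    otp_ok = hmac.compare_digest(submitted_otp, expected_otp)  # constant-time compare
    return voice_ok and otp_ok

if __name__ == "__main__":
    # A convincing deepvoice (score 0.97) still fails without the second factor.
    print(approve_transfer(0.97, "123456", "654321"))  # False
    print(approve_transfer(0.97, "654321", "654321"))  # True
```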
It is important to note that, given the constant advance of this technology, detection and prevention methods are also constantly evolving. It is therefore very important for business organizations to keep up to date with the latest techniques and tools available and to adapt their security strategies accordingly.

AUTHORS
CARLA MARTÍN RAMÍREZ, Intelligence Analyst at Telefónica Tech
DANIEL SANDMEIER, Analyst at Telefónica Tech

October 16, 2023