Marta Mallavibarrena

Graduate in Psychology and holder of a Master's in Economic Intelligence and International Relations, specialised in cybersecurity. Currently a Cyber Threat Intelligence Analyst in the Digital Risk Protection team at Telefónica Tech. Data analysis enthusiast, always looking for new ways to bring the social sciences into the technology sector.
Cyber Security
Artificial Intelligence, ChatGPT, and Cyber Security
Artificial Intelligence (AI) has become a frequent topic on this blog. Almost every prediction of technological trends for the coming years includes it as one of the key advances. In my previous article, we looked at the role these technologies can play in the creation and dissemination of disinformation and fake news. On that occasion, the protagonists were tools such as DALL-E or GauGAN2 for generating images and, although we already mentioned some text tools, at the end of the year a new tool appeared on the scene that has been making headlines ever since: ChatGPT.

A few weeks ago, our colleague Mercedes Blanco introduced us to how ChatGPT works and some of its applications in the business world. This time, however, we will focus on what this tool, and others like it, can mean for cyber security. As with any technological advance, its consequences can benefit both security teams and those who exploit it for more questionable purposes.

ChatGPT in security research

The tool itself tells us about the many ways in which it can be of use to threat intelligence services, which can be summarised as follows:

- Providing information and acting as an advanced search tool.
- Supporting the automation of tasks, reducing the time spent on work that is more mechanical and requires less detailed analysis.

Image 1: screenshot of a conversation with ChatGPT on the topic of this article

Artificial intelligence has been making its way into cyber security tools for some time now. Some examples could be seen at our Trending Techies event last November, such as the project presented by Álvaro García-Recuero for classifying sensitive content on the internet.

In the case of ChatGPT, Microsoft seems to be leading the integration effort across its services, such as its Bing search engine and Azure OpenAI Service or, closer to cyber security, Microsoft Sentinel, where it could help streamline and simplify incident management. Other researchers are exploring its use for writing rules that detect suspicious behaviour, such as YARA rules. Google, for its part, has chosen to launch its own tool, Bard, which will be built into its search engine in the near future.

ChatGPT in cybercrime

On the opposite side of cyber security, we can also find multiple applications of tools such as ChatGPT, even though they are designed from the outset to prevent their use for illicit purposes. In early January 2023, Check Point researchers reported underground forum posts discussing ways to bypass ChatGPT's restrictions in order to create malware, encryption tools or trading platforms on the deep web.

As for malware creation, researchers who have attempted proofs of concept have reached the same conclusion: ChatGPT detects when a request asks directly for malicious code, but rephrasing the request more creatively makes it possible to evade these defences and produce polymorphic malware or, with some caveats, keyloggers. The generated code is neither perfect nor complete, and it will always be limited by the material the model was trained on, but it opens the door to models that can develop this type of malware.

Image 2: ChatGPT response on malware creation via AI
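On the defensive side mentioned earlier, the detection angle lends itself to a concrete illustration. Below is a minimal, hypothetical sketch of the kind of YARA rule an analyst might ask ChatGPT to draft, compiled and evaluated here with the yara-python bindings. The rule name, strings and file path are invented for this example and are not taken from any real detection content.

```python
# Hypothetical example: a YARA rule of the kind an analyst might ask an
# AI assistant to draft, compiled and run with the yara-python bindings.
# Rule name, strings and file path are invented for illustration only.
import yara

RULE_SOURCE = r"""
rule suspicious_keylogger_strings
{
    meta:
        description = "Toy rule: flags binaries containing keylogger-like strings"
    strings:
        $a = "GetAsyncKeyState" ascii
        $b = "SetWindowsHookEx" ascii
        $c = "keystrokes.log" ascii nocase
    condition:
        2 of ($a, $b, $c)
}
"""

def scan_file(path: str) -> list:
    """Compile the toy rule and return any matches found in the given file."""
    rules = yara.compile(source=RULE_SOURCE)
    return rules.match(filepath=path)

if __name__ == "__main__":
    for match in scan_file("sample.bin"):  # placeholder path
        print(match.rule)
```

A rule drafted this way would still need review by an analyst: as with the generated malware discussed above, the output is a starting point rather than a finished detection.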
Another possible illicit use that has been raised around ChatGPT is fraud and social engineering. Among the content these tools can generate are phishing emails designed to trick victims into downloading infected files or following links where they end up compromising personal data, banking information and so on. The author of a campaign no longer needs to master the languages used in it, or to write any of the messages by hand, and new pretexts on which to base the fraud can be generated automatically.

Overall, whether or not the tool delivers complete, ready-to-use code or content, what is certain is that the accessibility of programmes such as ChatGPT can lower the level of sophistication needed to carry out attacks that, until now, required more extensive technical knowledge or more developed skills. Threat actors who were previously limited to launching denial-of-service attacks could thus move on to developing their own malware and distributing it in phishing email campaigns.

Conclusions

New AI models such as ChatGPT, like any other technological advance, can be used both to support progress and improve security and to attack it. Actual cases of these tools being used to commit crimes in cyberspace are still anecdotal, but they give us a glimpse of the cyber security landscape of the years ahead. Constantly updating one's knowledge becomes, once again, essential for researchers and professionals in the technology field.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
Eliezer Yudkowsky

Featured photo: Jonathan Kemper / Unsplash
February 15, 2023
Cyber Security
Human factor key in cyber security
Dozens of vulnerabilities are discovered every day (an average of 50 a day in 2021), and attackers keep finding new and ingenious ways to exploit them. The cyber security sector clearly needs to keep up its efforts to prevent these attacks from succeeding. This technological race has led to countless advances in the technological infrastructure of companies and institutions, but we cannot forget one critical factor: people, systems with hundreds of known vulnerabilities since the beginning of time, the vast majority of which remain unpatched.

According to data collected by Proofpoint, 20% of users interacted with emails containing malicious files, and another 12% followed links provided in such emails. Various sources put the proportion of data leaks caused by employees at between 88% and 95%. Ignoring this human factor in cyber security poses a huge risk to organisations.

Why does it happen?

There are countless causes and motivations for a human action to trigger a security incident, from an insider intentionally sharing company information to an accidental mistake that leaves data exposed. This article focuses on those cases where there is intent on the part of the attacker, but not on the part of the victim. Common examples are phishing campaigns, vishing (by phone) and smishing (by SMS).

The techniques used in these attacks have not changed much over time. The same methods used by Frank Abagnale Jr. in the 1960s, or Kevin Mitnick in the 1980s and 1990s, to carry out the frauds that made them famous are just as effective today. Some of them, such as the principles proposed by Cialdini, are still used in marketing and communication, and we have even discussed them previously on this blog.

“If you think technology can solve your security problems, then you don't understand the problems and you don't understand the technology.”
Bruce Schneier

The set of techniques and procedures used to try to push users into doing something that benefits cybercriminals is known as social engineering. Although it also goes by more artistic names, such as "mental manipulation" or "human hacking", it is nothing more than another example of persuasion and attitude change. In this context, psychology offers the Elaboration Likelihood Model (ELM): a person's level of elaboration depends on two factors, their ability to understand the message and their motivation to do so. Honestly, when reading emails on a Monday morning before our first coffee, we have neither.

Attitude changes produced in a subject with a high level of elaboration travel the so-called "central route"; they are deeper and longer-lasting, but require stronger arguments to take effect. Fortunately for cybercriminals, a change only needs to last the few seconds it takes the victim to follow a link or enter their credentials, so the victim does not need to be paying much attention. This combination of factors makes an employee who is tired, stressed or short of sleep the perfect victim of social engineering. That does not mean we cannot fall for the same techniques when we are in perfect condition, but those factors make us enormously more vulnerable.

What can we do about it?
Leaving aside the purely technological component and focusing on the human one, both companies and users can take measures to reduce the success rate of these social engineering techniques. These include awareness campaigns, training in spotting fraudulent messages and activity (a toy illustration of the kind of red flags such training covers is sketched below), and reporting channels that let users raise the alarm when they detect them. As users, and also on a personal level, it is important to be aware of our digital footprint: the information available about us in cyberspace can be used to target social engineering attacks more accurately.
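As a closing illustration of what "spotting fraudulent messages" can mean in practice, here is a deliberately simple sketch of the kind of red-flag checklist awareness training teaches. The keywords, domains and scoring below are invented for the example; this is not a real filter and no single indicator proves an email is malicious.

```python
# Toy illustration (not a real detection product): surface-level checks of
# the kind awareness training encourages users to apply to a suspicious
# e-mail. Keywords, domains and thresholds are invented for this example.
import re

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "password"}

def suspicion_score(sender: str, subject: str, body: str) -> int:
    """Count simple red flags in an e-mail; a higher score means more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()
    # 1. Urgency or credential-related wording in subject or body
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # 2. Plain, unencrypted http:// links in the body
    score += len(re.findall(r"http://", body))
    # 3. Free-mail sender while the message talks about banking services
    if sender.endswith(("@gmail.com", "@outlook.com")) and "bank" in text:
        score += 2
    return score

print(suspicion_score("support@gmail.com",
                      "Urgent: verify your bank password",
                      "Click http://example.com immediately"))
```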
September 28, 2022
AI & Data
Artificial intelligence: making fake news more real
Fake news: the Collins Dictionary word of the year 2017, repeated endlessly both in the media and on social networks. We have even dedicated several posts to it on this blog, pointing out the risks it brings and the role of technology in detecting it. This time the intention is to look at the problem from the other side: how technological development, including that of the very systems used to identify fake news, is actually helping to make it more and more realistic every day. The aim is to show, through examples, the process of creating a completely fake news story from scratch with as little human effort as possible, letting technology do the rest, without going into the technical details of how these algorithms work.

Creating our main character

Every news story needs a protagonist, and every protagonist a context. Thanks to platforms of the "this X does not exist" family, in a couple of clicks we can have her face, her pet or her CV. None of the generated images existed until we clicked, and they cease to exist when we refresh the page.

Cassidy T. Pettway, 57, from Brighton, Colorado. Image automatically generated through thispersondoesnotexist.com

Sundae, one of Cassidy's cats. Image automatically generated through thiscatdoesnotexist.com

If we lack the imagination to add details such as name, nationality or place of residence, we can also turn to free resources such as fakepersongenerator.com or fauxid.com. Yes, for the cat too.

The limitation of this approach is that we cannot build a complete identity from a single photograph and, given that Cassidy does not really exist, we cannot ask her to pose for more pictures. To overcome this drawback, morphing techniques allow us to obtain different angles of the same photograph, change its expression, increase or decrease its age, and so on. These technologies are similar to those used by applications such as FaceApp, which a few years ago had thousands of users on social networks showing "what they would look like at 80". They are also the cause of many headaches for border agents around the world, as the generated images are close enough to the original for the human eye to identify them as the same person, yet they can evade biometric systems.

Image: example of images modified by SofGAN. Source: apchenstu.github.io/sofgan/

Now that we have enough photos of our main character, we can also add a background, a context. If we do not want to worry about someone recognising the original image used in our montage, we can describe the landscape to DALL-E (a mini version is available on its website) or, if we prefer to bring out our artistic side, draw it in Nvidia's GauGAN2.

Image: input and output of GauGAN2, generating realistic images from simple drawings. Source: gaugan.org/gaugan2/

Special mention goes to video game engines such as Unreal Engine 5 which, although they allow the creation of scenarios and environments capable of fooling anyone, require much more effort from the creator than the examples presented in this post. A recent example is the recreation of a train station in Toyama, Japan, created by the artist Lorenzo Drago.
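To make the "couple of clicks" point concrete, the face-generation step can even be scripted. A minimal sketch, assuming thispersondoesnotexist.com still serves a freshly generated face as an image at its root URL (behaviour that may change at any time); the file name is a placeholder chosen for this example.

```python
# Minimal sketch: fetch one GAN-generated face, assuming the site serves a
# freshly generated image at its root URL (this behaviour may change).
import requests

def fetch_synthetic_face(path: str = "cassidy.jpg") -> str:
    """Download a GAN-generated face and save it to a local file."""
    response = requests.get(
        "https://thispersondoesnotexist.com",
        headers={"User-Agent": "Mozilla/5.0"},  # some servers reject bare clients
        timeout=10,
    )
    response.raise_for_status()
    with open(path, "wb") as fh:
        fh.write(response.content)
    return path

if __name__ == "__main__":
    print("Saved", fetch_synthetic_face())
```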
Developing and sharing the news

Now that we have given Cassidy a face, it is time for her to fulfil her role as creator, disseminator or even protagonist of false content. If we are not in the literary mood to write it ourselves, there are algorithms for that too. Platforms such as Smodin.io can generate articles or essays of considerable length and quality simply from a title. I may or may not have asked for its help in writing this post.

If our disinformation strategy relied on impersonating someone else rather than inventing a person from scratch, there are also systems trained to mimic writing styles. In 2017, the Harry Potter chapter generated by Botnik Studios in the style of the original author went viral.

If, instead of a full article, we want to run a disinformation campaign on social media, we can create short text snippets with the Inferkit.com demo: perfect for a tweet or a Facebook comment. What if Cassidy were to deny that man ever landed on the Moon? (A minimal sketch of this text-generation step, using an openly available model, appears at the end of this post.)

Image: text generated by Inferkit.com. In grey: user-generated text. In green: text added through artificial intelligence. Source: app.inferkit.com/demo

In many cases it is not even necessary to create an account on the networks to actually post the content; a screenshot claiming that we have done so is enough. It could be a WhatsApp conversation, a Facebook comment or even a Tinder profile.

Going the extra mile

After generating static images and text, if we wanted to go one step further in our creation of fake news, we could turn to video and sound. The well-known deep fakes are a very useful tool in both cases. This blog has previously discussed how they are used in film shoots, to impersonate someone's identity or to carry out "CEO fraud". Beyond these techniques, which focus on impersonating or imitating an existing image or voice, there are platforms capable of creating entirely new voices: some from scratch, such as This Voice Does Not Exist; others let us adjust previously created voices, such as Listnr.tech; and others create new voices from our own, such as Resemble.ai.

Conclusion

Although the threat of misinformation and fake news has been around for centuries, thanks to technological development we can now generate a person's image in one click, give them a pet, a job and a hobby in a few more, instil certain ideas in them and finally give them a voice. Tasks that used to require a great deal of manual effort from whoever wanted to create and spread the information can now be automated and carried out en masse. This also means that such campaigns are within anyone's reach, no longer limited to governments and large corporations. As long as technology cannot keep up with detecting what it creates, the only possible defence is awareness and critical thinking on the part of users, and that starts with knowing the threats they face.

“Our technological powers increase, but the side effects and potential hazards also escalate.”
Alvin Toffler, Future Shock (1970)
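As referenced above, here is a minimal sketch of the text-generation step, using the openly available GPT-2 model from Hugging Face as a stand-in for hosted demos such as Inferkit. The prompt and generation settings are invented for illustration, and the output will be generic continuation text rather than a polished snippet.

```python
# Sketch of the text-generation step using the open GPT-2 model from
# Hugging Face as a stand-in for hosted demos such as Inferkit.
# Prompt and settings are invented for illustration only.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the toy example reproducible

prompt = "Cassidy T. Pettway claims that"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```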
July 11, 2022