Diego Samuel Espitia

Electronic Engineer and Cyber Security expert with over 15 years of experience in digital security. I started my career configuring enterprise protection systems for Internet services and later worked on pentesting engagements across various industries. I am currently a senior consultant at Telefónica Tech Colombia, where I support commercial and operational teams. I have been a speaker at international events such as BlackHat US Arsenal and AtHack Arabia, and I teach cybersecurity at universities in Colombia and on Udemy.

Cyber Security
From technical data to strategic action: effective vulnerability prioritization
Continuing with vulnerability management, one of the key challenges for organizations, regardless of their size, is how to prioritize the remediation of reports and how to link them directly to the reported level of risk.

To do this, it's important to understand that the risk ratings provided in pentesting or vulnerability analysis reports, as discussed in our previous article, are based purely on a technical Cyber Security analysis using the CVSS (Common Vulnerability Scoring System). This tool classifies vulnerabilities as low, medium, high or critical based on their impact on the Cyber Security pillars and the complexity of their exploitation. Reports are therefore usually accompanied by CVEs (Common Vulnerabilities and Exposures), which identify known vulnerabilities. However, this rating is generic and does not take into account factors like the likelihood of exploitation, the existence of an exploit, or business-specific data on the asset's importance.

There are, however, two other tools that are often overlooked in prioritization analyses and can be extremely helpful in making risk-based decisions:

- EPSS (Exploit Prediction Scoring System), which aims to score the probability that a vulnerability will be exploited.
- KEV (Known Exploited Vulnerabilities), a catalog that documents known exploits, who developed them, and which vulnerabilities they target.

How do these tools interact?

Reports always support findings with CVEs. From this baseline, information should be organized using the CVSS score, which helps us identify the technical severity of each vulnerability and determine the attack vectors that should be analyzed. Once that initial categorization is complete, two key elements must be considered: first, whether the vulnerability is currently being exploited, regardless of its CVSS rating; second, the likelihood of future exploitation, based on threat intelligence analyses. These capabilities are provided by KEV and EPSS, respectively.

So, using the vulnerability identifier, we search KEV to identify the highest-priority items, regardless of the CVSS risk level. Then, with the remaining vulnerabilities, we sort them according to CVSS and check their EPSS score to estimate the probability of future exploitation and plan a proactive remediation strategy.

Practical example

Let's suppose we're using a business messaging service such as TeleMessage, and our automated vulnerability management system detects and reports CVE-2025-48927, which has a CVSS v3.1 score of 5.3, i.e. a medium severity level.

CVSS provides the technical data on the attack vectors, which we can review using the basic calculator from version 4.0, the most up-to-date version. It shows that the attack is conducted over the network and requires neither privileges nor user interaction, but that its impact is limited to confidentiality and is low. Based on this data alone, remediation might be considered a low priority.

However, this decision should be guided by threat intelligence data, starting with KEV reports and, if the vulnerability is not listed there, EPSS. When we search KEV, we find that this vulnerability is actively being exploited in conjunction with two other vulnerabilities that have not been patched by the vendor, so additional control measures must be implemented. If it hadn't been listed in KEV, we would refer to the EPSS score provided by the tool, where we can see the scoring history and observe how the probability of exploitation has increased over time.
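To make this workflow concrete, here is a minimal sketch of the prioritization logic in Python. It assumes the public CISA KEV JSON feed and the FIRST EPSS API are reachable at the URLs shown; the helper functions and the simple sorting rule are illustrative choices for this example, not part of any report format.

```python
import requests

# Public feeds assumed for this sketch (both are published services).
KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
EPSS_API = "https://api.first.org/data/v1/epss"

def load_kev_ids():
    """Download the CISA KEV catalog and return the set of CVE IDs it lists."""
    catalog = requests.get(KEV_FEED, timeout=30).json()
    return {item["cveID"] for item in catalog.get("vulnerabilities", [])}

def epss_score(cve_id):
    """Return the EPSS exploitation probability (0..1) for a CVE, or 0.0 if unscored."""
    data = requests.get(EPSS_API, params={"cve": cve_id}, timeout=30).json()
    rows = data.get("data", [])
    return float(rows[0]["epss"]) if rows else 0.0

def prioritize(findings):
    """findings: list of dicts like {"cve": "CVE-2025-48927", "cvss": 5.3}.
    KEV-listed items go first regardless of CVSS; the rest are ordered by
    CVSS severity and then by EPSS probability of future exploitation."""
    kev = load_kev_ids()
    exploited = [f for f in findings if f["cve"] in kev]
    remaining = [f for f in findings if f["cve"] not in kev]
    for f in remaining:
        f["epss"] = epss_score(f["cve"])
    remaining.sort(key=lambda f: (f["cvss"], f["epss"]), reverse=True)
    return exploited + remaining

if __name__ == "__main__":
    report = [{"cve": "CVE-2025-48927", "cvss": 5.3}]
    for item in prioritize(report):
        print(item)
```

The logic mirrors the process described above: anything listed in KEV is handled first regardless of CVSS, and everything else is ordered by CVSS severity and then by EPSS probability of future exploitation.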
This management process is not simple, but there are online tools that make it easier, such as CVEDetails, where the information from the report we examined can be found consolidated in one place.

On the other hand, it's always important to remember that this is a technical analysis; the final prioritization should also be based on:

- The criticality of the analyzed asset or service.
- Its exposure to external networks or third parties.
- The effort required to remediate or mitigate the impact.
September 2, 2025
Cyber Security
Automation, Pentesting and Red Teaming: the triad for vulnerability management
The data on vulnerability disclosure and exposure growth paints a chilling picture of the threat landscape. In 2015, only 6,494 vulnerabilities were reported; by mid-2025, at the time of writing this article, we have already reached 22,717 CVEs, an average of 110 new CVEs created per day. This makes the risk management associated with these vulnerabilities an increasingly complex priority, due to the speed required to run detection tests and apply the necessary mitigations for each threat, as well as the varying risk levels in each report. As with everything in Cyber Security, this calls for a process of ongoing evolution that involves technology, procedures and people as the foundation for continuous improvement.

One common action repeated across many organisations, yet rarely providing significant value, is performing one or two Ethical Hacking or Pentest exercises per year, targeting only a sample of the organisation's systems or servers.

With an average of 110 new CVEs created daily in 2025, the threat landscape demands faster and more intelligent risk management.

This practice is rooted in certification standards introduced back in 2012, which required one or two vulnerability scans per year, along with evidence of mitigation efforts. At that time, this approach was valid, considering the volume of reported threats and the technology available. But today, it falls well short of what's needed.

Carrying out such an exercise requires a seasoned offensive security specialist and at least three days to assess a single asset. This resource should therefore be deployed strategically, focused on assets identified through more automated analysis as having high or critical threats. In organisations with critical assets or more robust Cyber Security systems, these tests should go a step further, helping validate the full protection framework.

Let's start with the basics

If your organisation is just beginning to implement Cyber Security processes, or only conducts Pentests for compliance, you'll likely find that most reports reveal a high volume of medium and low-level vulnerabilities. These often go unresolved by the development or IT teams responsible for remediation and, in some cases, remain unaddressed for years. What's more concerning is that only a fraction of the infrastructure is tested due to the cost of a comprehensive Pentest, allowing threats across the network to grow and spread unchecked.

For this reason, our recommendation for over a decade has been to carry out persistent, automated security testing. An automated offensive service should be the first step any organisation considers in its vulnerability management process.

Performing one or two Pentests per year is no longer enough to contain the scale and speed of modern cyber threats.

This automation can be applied across the entire network, exposed web services and even the attack surface, delivering a more complete risk and exposure map to the Cyber Security team. This happens in much less time than a semi-annual Pentest, almost in real time, and through a console that enables continuous tracking of mitigation actions.

So, what should I do with the high and critical risks?

With that map in place and after at least a year of monitoring, not only will mitigation measures have improved, but monitoring will be better aligned to detect exploitation attempts, not just the general alerts typically configured in SOCs. This approach also enables leadership to make decisions based on actual risk.
One such decision could be to carry out a much deeper Penetration Test or Ethical Hacking exercise than what automated monitoring offers. The goal is for skilled professionals not only to detect vulnerabilities, but to actively attempt to exploit them, demonstrating the real threat level and uncovering gaps that only expert insight can reveal.

An automated offensive security service should be the first step any organisation takes in its vulnerability management process.

Additionally, this testing helps validate monitoring alerts and the mitigation actions taken, strengthening defensive procedures or highlighting the need for new technologies, process changes or team training.

And it doesn't end there…

In Cyber Security, there's always room to go a step further or make an extra effort. An organisation with the above procedures well established and validated may be considered to have a high level of Cyber Security maturity. But there's still one crucial test left: the one that evaluates the entire defence system.

A red team exercise simulates real attacks to measure an organisation's true ability to detect, respond and defend.

Known as a red team exercise, this test puts Cyber Security teams, and the entire organisation, through the motions of a real incident, without the actual consequences. Its goal is to validate an organisation's defence capabilities against a simulated attack that mirrors the tactics and techniques used by known cybercriminal groups.

This type of testing evaluates the response of defence and monitoring teams and brings a practical, not just theoretical, dimension to vulnerability management. It helps assess detection levels, monitoring effectiveness and readiness to respond to incidents.

Skilled professionals must go beyond detection, exploiting vulnerabilities to reveal real threats that only human expertise can uncover.
July 1, 2025
Cyber Security
Detecting the insider before the damage is done
As we discussed in the previous article, there are several types of insiders: some act with clear intent, while others unknowingly open the door to external attackers. In either case, the challenge in detecting them lies in distinguishing which behaviours come from the attacker and which from the legitimate user, since it's not possible to tell who's who at a glance.

A notable case appeared in a May 2022 Yahoo report, when a data scientist stole the organisation's intellectual property, more than 500,000 pages of research, and used this information to stand out in his new job at a direct competitor of Yahoo.

The insider, whether negligent or malicious, represents one of the greatest challenges in cybersecurity.

Forensic techniques used in the investigation helped confirm the data scientist's intent and actions, showing that the documents had been transferred using his credentials to one of his personal devices, registered under the company's BYOD policy.

Undoubtedly, a company like Yahoo takes Cyber Security seriously. However, intent in a user's behaviour is extremely difficult to detect and control, especially when organisational tools and policies are not designed with a proactive cybersecurity approach in mind.

A common question from our clients is: "What can we do to detect an insider before they carry out a malicious action?" The answer isn't simple or universal; it depends on the organisation and its security policies. However, here are some controls that can help build a perimeter to prevent and detect these types of threats in time.

Traditional tools like firewalls, IPS or antimalware are not enough against insiders, especially those who know internal systems inside out.

One of the first things to consider is that traditional malicious activity detection tools, like firewalls, IPS, WAFs or antimalware, are not designed to detect internal attackers. This is mainly because these systems "trust" that the user is who they claim to be, but do not analyse whether the behaviour is "normal".

Understanding the types of insiders

With this in mind, it's important to understand the risks associated with each of the three types of insiders. Actions performed by a compromised or impersonated insider often display unusual behaviours that differ from the legitimate user's, because someone is impersonating them and trying to explore the organisation. The negligent insider leaves doors open, allowing attackers in, not to impersonate their profile, but to use that entry point to explore systems and networks. The malicious insider is by far the hardest to detect and contain: they know the organisation, will avoid incriminating behaviour, and have time on their side.

The malicious insider is, without a doubt, the most difficult to detect and contain.

Given this context, there are measures that can help detect an internal attacker early by analysing factors such as:

- Access to systems and assets. When an engineering employee attempts to access the finance server, it's highly suspicious, even if no malicious action is taken. Behaviour analytics capabilities in EDR or MDR systems can flag these anomalies.
- Changes in user behaviour. For instance, activity spikes, whether during unusual hours or in the number of logins and processes executed in a short timeframe. Threat hunting teams play a key role in detecting these patterns by investigating processes and services on endpoints.
- Data movement. Often we focus on outbound or inbound data transfers over the network, but insiders may move data to removable media or even corporate cloud storage. Policy controls are critical, but detecting behavioural changes via pattern analytics is what makes the difference; this is where properly configured XDR and MDR tools are essential.
- External devices. These are usually used by compromised insiders, but sometimes negligent insiders allow attackers to connect devices to the corporate network or to their own computers, exposing company data. In such cases, technical controls like UBA are essential to detect suspicious behaviour across devices and networks.
- Failed login attempts. One of the clearest signs of an attacker is when they try to escalate privileges, causing a spike in failed logins from a user on a machine. This may be the insider's only big mistake before any malicious activity takes place. To detect this effectively, authentication logs are vital, monitored via SIEM alerts or the behaviour anomalies detected by UEBA.

Detecting an insider before a malicious act is committed requires a contextual, behaviour-based approach, supported by advanced detection tools.

Monitoring access or traffic isn't enough: you need to understand actions, intentions and deviations from normal patterns. Investing in observability, behavioural analytics and properly configured tools like UEBA, EDR or XDR is now essential to prevent one of the hardest types of attack to detect: the one that comes from within. Understanding the different types of insiders and the risks they pose is essential for developing proactive defence strategies.
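To make the failed-login signal above a little more concrete, here is a minimal hypothetical sketch of the kind of windowed rule a SIEM or UEBA applies to authentication logs. The event format, window size and threshold are assumptions chosen for this example, not values from any specific product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # illustrative sliding window
THRESHOLD = 15                   # failed attempts per user per window treated as anomalous

def failed_login_spikes(events):
    """events: iterable of (timestamp: datetime, user: str, success: bool).
    Returns (user, first_failure, last_failure) for users whose failed logins
    exceed THRESHOLD inside any WINDOW-sized interval."""
    failures = defaultdict(list)
    for ts, user, success in events:
        if not success:
            failures[user].append(ts)

    alerts = []
    for user, times in failures.items():
        times.sort()
        start = 0
        for end, ts in enumerate(times):
            # Shrink the window from the left until it spans at most WINDOW.
            while ts - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= THRESHOLD:
                alerts.append((user, times[start], ts))
                break  # one alert per user is enough for this sketch
    return alerts

if __name__ == "__main__":
    now = datetime.now()
    sample = [(now + timedelta(seconds=i * 20), "jdoe", False) for i in range(20)]
    for user, first, last in failed_login_spikes(sample):
        print(f"Possible privilege-escalation attempt: {user} had "
              f"{THRESHOLD}+ failed logins between {first} and {last}")
```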
June 3, 2025
Telefónica Tech
From legitimate access to chaos: the new face of ransomware driven by insiders
Not long ago, we handled yet another ransomware case, one of the hundreds that occur daily and among the many we respond to each month. However, this case was particularly unusual: when gathering evidence, we discovered that the attacker had been inside the network for just two days, and all the actions had been executed from an administrative account via VPN. They seemed to know the infrastructure and services inside out. And indeed, they did, because the compromised account belonged to one of the IT department leaders, who was also the person who contacted us to activate the incident response protocol.

This kind of scenario has become increasingly common in recent months, revealing a growing trend in threats to organizations: insiders are becoming one of the preferred tools for cybercriminal groups to achieve their financial goals.

The link between insiders and ransomware is becoming more evident.

Statistics show that in 2024, 47% of ransomware incidents originated from legitimate credentials already in the attackers' possession. These credentials are often obtained through recruiting campaigns targeting employees to facilitate access to corporate networks from the inside. Pulse documented such a tactic in 2022, when they interviewed 100 IT directors. The study revealed that 57% of employees had been approached by criminal groups to grant access to corporate networks, and several of these attempts ultimately led to ransomware attacks.

Why is it so dangerous?

Unlike traditional ransomware attacks, which are noisy and typically trigger automatic alerts, insider-driven attacks can be completely silent. They may lie dormant for months, even years, without being detected by monitoring tools and without the need to deploy malware directly.

It's key to understand that there are three types of insiders, each with specific characteristics. And while not all are disloyal employees or contractors, their activity often goes unnoticed because it doesn't raise immediate red flags:

- Compromised insiders: users whose credentials have been stolen due to risky behaviors, such as using insecure networks or accessing malicious websites. These accounts are used to steal sensitive data or conduct network reconnaissance. Since the credentials are legitimate, these actions often pass undetected.
- Negligent insiders: users who, out of ignorance or carelessness, perform actions that compromise an organization's security. For instance, clicking on a malicious link that triggers fileless malware, which runs only in the computer's RAM and evades most traditional security controls.
- Malicious insiders: the most dangerous and technically skilled of all. These are individuals who intentionally download malware or collaborate with attackers by granting them access to the network.

Insiders pose one of the biggest threats when it comes to information leaks and ransomware attacks.

What tools do they use?

One of the reasons these attackers can remain undetected for so long, or even exit without leaving a trace, is that they use the same software already present in the organization. This technique, known as "Living off the Land", involves leveraging legitimate operating system or IT administration tools to carry out malicious activities without raising suspicion (a simple detection sketch is included at the end of this article).

For this reason, traditional defense and monitoring tools alone are no longer sufficient. A more robust strategy is required, including:

- User Behavior Analytics (UBA/UEBA).
- Identity-based controls.
- Continuous review of authentication logs.
- Monitoring of data exfiltration techniques.
- Integration with threat intelligence.
- Other tools focused on detecting behavioral anomalies.

Conclusion

In short, the threat posed by insiders, whether malicious, negligent or compromised, is exceptionally dangerous due to the inherent trust placed in individuals with legitimate access. Their knowledge of internal systems and the challenge of distinguishing their actions from normal usage make detection extremely difficult. This type of attack allows for stealthy operations with the potential to cause severe damage, both direct and indirect. For this reason, it stands as one of the greatest threats in today's landscape of data breaches and ransomware attacks, and organizations must recognize this reality and treat insider threats as a top priority.

Network monitoring solutions are key tools for detecting anomalous behavior that may indicate the presence of an insider or a compromised device. Likewise, enforcing the principle of least privilege, implementing robust password management policies, and ensuring continuous Cyber Security training for all employees are essential steps to mitigate risk.
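As a rough illustration of how "Living off the Land" activity can be surfaced from endpoint telemetry, here is a minimal hypothetical sketch that flags process-creation events in which common administration binaries are invoked with command-line patterns frequently abused by attackers. The binaries and patterns listed are only examples; real detections live in EDR or SIEM rule sets tuned to each environment.

```python
import re

# Illustrative patterns only: legitimate admin tools whose command lines are
# frequently abused in "Living off the Land" activity.
SUSPICIOUS_PATTERNS = {
    "powershell.exe": [r"-enc(odedcommand)?\s", r"downloadstring", r"-nop\b"],
    "certutil.exe": [r"-urlcache", r"-decode"],
    "wmic.exe": [r"process\s+call\s+create"],
    "bitsadmin.exe": [r"/transfer"],
}

def flag_lotl(process_events):
    """process_events: iterable of dicts with 'image' and 'command_line' keys,
    e.g. parsed from Sysmon process-creation events or an EDR process feed."""
    hits = []
    for event in process_events:
        image = event["image"].lower().rsplit("\\", 1)[-1]
        cmdline = event["command_line"].lower()
        for pattern in SUSPICIOUS_PATTERNS.get(image, []):
            if re.search(pattern, cmdline):
                hits.append((image, pattern, event["command_line"]))
    return hits

if __name__ == "__main__":
    sample = [{"image": r"C:\Windows\System32\certutil.exe",
               "command_line": "certutil.exe -urlcache -split -f http://example.org/a.txt a.txt"}]
    for image, pattern, cmd in flag_lotl(sample):
        print(f"LotL indicator: {image} matched '{pattern}' in: {cmd}")
```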
May 13, 2025
Cyber Security
Observability in Cyber Security: see more, react better
The digital complexity of the services implemented in organizations today, and their relationship with the business, is a challenge that traditional methods are finding difficult to manage. As a result, organizations have faced obstacles in ensuring data visibility across processes, proactively managing security and, at the same time, delivering a satisfactory experience to both their customers and their employees.

The hope of managing this enormous amount of information has turned to AI which, while here to stay, still lacks maturity in some critical aspects of the analytics and automation strategies needed for its implementation to meet all observability requirements.

A couple of years ago we talked about what observability is and how that ability to understand each fragment of a digital process, through telemetry, gives us the ability to observe normal behaviors and detect anomalies in the different components. Translated to the field of cyber security, it provides an improved context for more effective incident response.

Observability in cyber security allows organizations to detect and react effectively to anomalies in digital processes.

Security observability

This is a relatively novel approach that leverages today's ability to nimbly handle large volumes of data to detect anomalous processes in every component of a digital process. This includes not only traditional network or service components, but also newer components such as containers, cloud managers, code segments, DevSecOps pipelines and user behavior, among others.

This approach goes beyond traditional monitoring and helps information security teams to capture, through relational analytics, the impact of security event detections on the quality of the services offered, and thus on the achievement of business objectives. Some of the main characteristics of this approach are:

- Through the traces and metrics of each fragment of a digital process it is possible to understand not only what has happened, as reported by traditional monitoring, but why it happened and how systems have interacted, facilitating the detection of known and unknown threats.
- More comprehensive collection of events, transformed into near real-time telemetry of IT infrastructure components, both in traditional networks and in the cloud, as well as microservices and application data.
- The ability to give context to incidents and to the resources associated with the threat, by relating digital processes within the deployed and monitored technologies. The key lies in understanding the interaction of service topologies and their dependencies.
- The ability to develop contextual security plans automatically, reflecting the actual operation of applications and their APIs and detailing the attack surface, the efficiency of defense mechanisms, the use of vulnerable elements in developments and other important aspects.

Consequently, the deployment of observability-oriented technologies adds a differential element to information security, making it possible to understand what incidents are occurring and why, and enabling incident detection through observation.

The hope of managing this enormous amount of information has turned to AI, which still lacks maturity in some critical aspects.

Incident detection through observability

As we have seen, the observability capabilities offered by current technologies, together with the possibility of analyzing large volumes of data in near real time thanks to AI, make it possible to identify the root cause of a cyber incident. This involves not only alerting to a threat but also analyzing the internal state of the process and identifying the specific point of unusual activity.

This capability significantly improves threat detection by analyzing patterns and slight variations from normal behavior, using comprehensive data from each of the components of the affected or threatened process. The approach is not limited to alerts triggered by exceeding a predefined threshold or detecting a known signature; it takes a proactive stance based on the context provided by the events, metrics and trace of each process, facilitating the identification of anomalies and potential threats.

Technology must therefore integrate a series of components that help to deploy this approach:

- Real-time analysis and correlation that identifies patterns and deviations from normal behavior, linking seemingly unrelated data. These capabilities make it possible to identify and alert on anomalies against the historical data set through machine learning (a minimal example follows at the end of this article).
- Extensive collection of telemetry data from all parts of the IT infrastructure, because only comprehensive data collection guarantees a holistic view of the digital environment.
- A focus on why a threat has occurred, using tools that provide context, such as the route traces of a request across the affected applications or the relationships between applications in different digital processes.
- The ability to develop a contextual security plan that offers detailed knowledge of how systems really work and allows possible vulnerabilities to be identified, for example by detecting specific libraries used in a microservice that may be associated with known threats, according to intelligence reports.

Observability provides an enhanced context for more effective incident response.

Conclusion

Incident detection through observability transforms the reactive approach to known threats. It allows for a proactive understanding of system behavior and the detection of known and, more importantly, unknown threats. By providing detailed information and context in real time, this capability enables security teams to detect, analyze and respond more effectively, with the ultimate goal of reducing the mean time to containment.
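As a very small illustration of the "deviation from normal behavior" idea described above, here is a hypothetical sketch that flags points in a telemetry series, for example requests per minute to a service, that deviate strongly from a rolling baseline. Real observability platforms use far richer models and correlate many signals at once; this only shows the basic principle.

```python
import statistics

def anomalies(series, window=60, z_threshold=4.0):
    """series: list of numeric telemetry samples (e.g. requests per minute).
    Returns (index, value, z_score) for samples that deviate strongly from
    the rolling baseline built over the previous `window` samples."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid division by zero
        z = (series[i] - mean) / stdev
        if abs(z) >= z_threshold:
            flagged.append((i, series[i], round(z, 1)))
    return flagged

if __name__ == "__main__":
    traffic = [100 + (i % 5) for i in range(120)]  # stable baseline
    traffic[100] = 900  # sudden spike, e.g. a possible exfiltration burst
    for idx, value, z in anomalies(traffic):
        print(f"Sample {idx}: value={value}, z={z} -> investigate")
```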
April 1, 2025
Cyber Security
From reaction to prevention: The evolution of cyber security monitoring
Cyber Security has evolved significantly over the last 20 years, since a solution to monitor actions within networks and systems was first mentioned in a Gartner report, which named it SIEM (Security Information and Event Management) and framed it as an IT tool to improve vulnerability management. A few years later, SIEM became the heart of the SOC (Security Operations Center), from where specialized personnel perform all the actions necessary to mitigate threats or anomalous behavior before they become incidents. To do this, they take all reported events, correlate them and detect possible threats or anomalies.

However, despite technological advances and the specialized devices that report these behaviors, cyber security incidents continue to increase day after day. This has generated a growing interest in monitoring that is not based exclusively on events or actions that have already occurred in the network, but that incorporates other parameters or perspectives to make detections more efficient and, above all, preventive.

One of the examples that has advanced the most, and has given rise to new disciplines around cyber security and preventive protection, is EDR (Endpoint Detection and Response). This concept emerged in 2010 and materialized as a product from 2013, becoming today a vital tool for organizations seeking to adopt a security model that is more proactive than reactive.

EDR (Endpoint Detection and Response) is an essential tool for companies today.

The reason for its importance lies in its ability to perform investigation directly on devices and execute predetermined actions on those detections. However, for this capability to reach its full potential, it requires a set of specific actions and knowledge, which we discuss below.

Knowledge of the operation

As security management service providers, one of the main challenges we face on a daily basis is our customers' lack of knowledge about the functional details of their technological infrastructure. A clear example is the difficulty in identifying which devices or services are critical to their operation.

Threat monitoring and detection is based on priorities, since it is impossible to analyze the thousands of events per second that can be generated in each of the systems. However, these priorities can only be established through a proper risk analysis and a security plan that defines clear governance of actions and priorities. These definitions can be achieved through a specific consultancy to identify the "crown jewels", the priorities and the critical information paths.

With this data it is possible to implement or improve monitoring mechanisms, as well as to use it as input to determine the EDR's automatic response actions, the SOC analysts' responses or, in the case of a more serious detection, the activation of DFIR (Digital Forensics and Incident Response) teams.

One of Telefónica Tech's global SOCs

Coordination of actions

It is critical that IT and security teams are highly coordinated in their actions, as the EDR monitors and responds directly on each device; this avoids mutual blocking or false positives. Most current EDRs build a baseline of habitual behavior, allowing automatic responses to focus on actions that deviate from that baseline, immediately alerting on, monitoring or blocking any anomaly. Therefore, if the IT department needs to deploy new software or perform remote access to a server, it must be coordinated in advance to prevent the EDR from generating unnecessary alerts or blockages.
These actions are refined over time and depend directly on the previous point, where clear guidelines have been established on how to execute cyber security procedures and each asset has a priority and specific policies to comply with.

Suspect analysis

We began this article by recounting how monitoring technologies emerged and how they have not been enough to control the growth of incidents. In response to this reality, Threat Hunting was born as a pillar of proactive security, a practice that began as a theory in 2011 and that, since 2017, has become one of the most recommended and widely used, not only in monitoring but also in incident response.

Its application is largely based on the capabilities provided by EDR within the organization, but it depends above all on the knowledge and skills of the analysts who formulate hypotheses and perform searches. This discipline arises from the need to detect anomalous behavior beyond what is identified by the SIEM or by configured alerts, which require constant analysis and improvement but never advance at the same rate as threats.

SIEM has become a critical component of the SOC (Security Operations Center), where specialized personnel mitigate threats before they become incidents.

An alert may, for example, identify a connection with a high data flow to an external service that is outside the usual behavior of the network. That connection, however, could be the attacker's last step in exfiltrating information. In contrast, with Threat Hunting, all the previous steps the attacker had to execute could be detected through periodic searches or by following up on suspicious actions raised from knowledge of criminal behavior.

Using the knowledge of others

As mentioned in the previous paragraph, knowledge of criminal behavior is an invaluable database. Organizations should value and apply this knowledge in their searches, both in SIEM and EDR, to increase not only detection capability but also to improve response times. This knowledge base is known as CTI (Cyber Threat Intelligence).

Most EDR vendors and security companies have implemented this capability, but not all organizations use or value it as a source of strategic and operational cybersecurity information. It is vital to know the steps of the different threat actors in order to plan an effective defense. This involves coordinating the security plan mentioned at the beginning with Threat Hunting searches and automatic responses. All of this is provided by CTI, completing the set of capabilities that support proactive monitoring.

Coordinating actions and understanding the operation are essential for establishing priorities and improving monitoring mechanisms.

Conclusions

In an environment where cyber threats are becoming increasingly sophisticated and frequent, traditional monitoring based on reacting to events that have already occurred has proven insufficient. Organizations must adopt a proactive approach that allows them to anticipate attacks rather than simply respond to them. Tools such as EDR (Endpoint Detection and Response), practices such as Threat Hunting and information sources such as CTI (Cyber Threat Intelligence) have become fundamental pillars for building an effective cybersecurity strategy. However, their implementation alone is not enough. To reach their full potential, it is necessary to:

- Have a thorough knowledge of the technological infrastructure.
- Coordinate teams and processes.
- Invest in training and talent.
- Integrate threat intelligence.
In short, modern cyber security requires a balance of technology, knowledge and strategy. Organizations that manage to align these capabilities with their business objectives will not only be better prepared to face today's threats, but will also be one step ahead in protecting their most valuable assets. Cyber security is no longer an IT-only issue. It is a strategic priority that must be integrated at all levels of the organization. Is your company ready to make the leap to proactive cyber security?
March 11, 2025
Cyber Security
Attacking the risk of false friends
When a company or individual acquires software or an application, they not only get the functionality of the required service or development, but also integrate a new partner or supplier into their ecosystem. This adds a new element to their supply chain or network of allies. From a Cyber Security point of view, it is not even necessary to maintain an ongoing business relationship with the software or hardware supplier that joins the network: by integrating these elements, they become part of the supply chain and therefore a critical factor to consider in security strategies.

Third-party attacks or supply chain attacks are the third most costly attack and in the TOP 5 of the most common.

Third-party or "supply chain" attacks appear in most statistical security studies; in 2022, for example, IBM indicated that they were the third most costly type of attack and the fourth most frequent. This is mainly due to two key reasons.

Trusting our "friends"

The risks associated with a piece of software or hardware are often not analyzed before integrating it into the network. It is often assumed that if others use it, or if a business relationship has been stable for years, everything will remain the same. This happens, for example, when solutions from industrial device manufacturers and traffic protection software providers converge. In this case, traffic protection systems from one manufacturer embedded in industrial devices from another manufacturer generate an interdependency with software developed by third parties. As a consequence, any reported vulnerability in one part of the software puts the industrial systems using those devices at risk. In these cases, if a critical software vulnerability appears in one of the parts, the manufacturers of the industrial devices are forced to issue a special report, in addition to their regular one, to notify the criticality of this risk in all the hardened control systems that used that software.

The risks associated with software or hardware are often not analyzed before integrating it into the enterprise network.

In practice, there are hundreds or thousands of similar cases in all types of industry, some of which we have covered on our blog, and something similar may be happening in your company. Therefore, each piece of software or hardware should be analyzed before being acquired, to know whether its development and design took Cyber Security into account, and to consider how every IP camera or IP sensor we acquire can be monitored, validating its impact.

Lack of knowledge of what we are acquiring

Many companies and individuals acquire technology or software because they need to cover an immediate need, without thoroughly analyzing its characteristics. An everyday example is downloading applications to our phones simply because they are fashionable or because we need to cover a need, without ever checking the permissions or the developer and, in some cases, without even validating the legitimacy of the development. In enterprise environments, this is reflected in the incorporation of software or hardware into data center services, where companies install their applications on a provider's servers (also known as infrastructure as a service) but rarely validate whether the equipment exposes management services from the manufacturer that remain enabled and public. As a reference example, we can take the management service that comes preconfigured on IBM servers, called WebSphere. More than 400 vulnerabilities have been reported for it, and one of the latest allows remote command execution on the server, as indicated in the IBM report. A search for this service on the Internet using Shodan shows more than 7,300 exposed servers worldwide, with more than 5 public vulnerabilities exposed, as shown in the following image (a small example of this kind of query is sketched at the end of this article).

This is why many companies are vulnerable, not only because of the third party but also because of the supplier's supplier, which is not new and something we have talked about before on our blog.

Less trust and more strategy

Controlling this risk is very complex and requires several strategies working in conjunction with a clear Cyber Security management policy within the organization. Each organization should develop these strategies and incorporate them into its information security controls; based on our experience, however, we can outline some strategies for containing this threat.

Define a process for incorporating software, hardware or suppliers

Perform a Cyber Security analysis of the supplier, looking for threat reports, analyzing response times in correcting reported vulnerabilities, asking for details of the attack surface and any other data the organization considers a risk to the management of its information.

Every piece of software or hardware must be analyzed before being purchased to find out if it meets the premise of 'Cyber Security by design'.

Nothing goes in without confirmation and assurance

Sometimes the rush is to cover the need, but each element should be validated with a security test and with the execution of an assurance process that confirms the second cause described above is not present.

Strengthening authentication

Undoubtedly the most critical risk is that attackers can impersonate any user on the network: any hardware or software that is introduced will learn the authentication mechanisms in use and can often be abused to impersonate users. However, if authentication and authorization are constantly reinforced and rely on multiple confirmation factors, the possibility of exploiting this threat is greatly reduced.

Monitoring to determine behavior

It is almost impossible to monitor every bit or every asset in depth, but it is essential to determine the typical behavior of each element of the network. Therefore, once a new element is incorporated, it must be monitored constantly and in depth in order to establish that behavior, so that within a short period the monitoring can be reduced and valid alerts can be generated when something is abnormal.

Minimize access to data

Software, hardware and third-party personnel must all be subject to very clear authorization controls and traceability. Being able to determine in detail which provider has the ability to access a service or server is fundamental during incident response to determine the access points. In addition, this control allows effective alerts to be placed on misuse of these permissions or on attempts at lateral movement with these users, increasing the level of prevention and detection of threats.

Cyber Security in the supply chain is a challenge, but with a solid strategy and appropriate preventive measures, companies can significantly reduce their exposure to risk.
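As a rough illustration of the exposure check mentioned above, here is a minimal sketch using the shodan Python library to count Internet-exposed hosts matching a product name. The query string is only an example, an API key is required, and in practice you would scope the search to your own organisation's assets.

```python
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder: use your own key

def count_exposed(query):
    """Return the total number of hosts Shodan reports for a query,
    plus a few sample results (IP, port, organisation)."""
    api = shodan.Shodan(API_KEY)
    results = api.search(query)
    samples = [(m.get("ip_str"), m.get("port"), m.get("org"))
               for m in results["matches"][:5]]
    return results["total"], samples

if __name__ == "__main__":
    # Illustrative query; in practice scope it, e.g. 'WebSphere org:"My Company"'.
    total, samples = count_exposed("WebSphere")
    print(f"Exposed hosts reported by Shodan: {total}")
    for ip, port, org in samples:
        print(f"  {ip}:{port} ({org})")
```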
February 4, 2025
Cyber Security
Lessons learned from the Cyber Security battlefield
In our experience handling cyber incidents, the process always concludes with a meeting to analyze the lessons learned. Undoubtedly, 2024 saw a notable increase in both the volume and complexity of these incidents. For this reason, this article aims to compile the key lessons organizations should consider to face the challenges of 2025.

Ransomware takes center stage

The predominant attack type in 2024 was ransomware, particularly in its double and triple extortion forms. The main lessons learned in this context revolve around four key aspects: the attack vector, persistence methods, lateral movements, and command and control.

1. How the attack begins

Analysis shows that phishing remains the primary attack vector. This technique enables attackers to deceive employees or third parties with network access into executing malicious actions, such as installing malware or establishing remote connections. However, 2024 also saw an increase in the exploitation of common business tools like VPNs and remote access services. Attackers leverage exposed credentials, the lack of regular password changes, the absence of multifactor authentication, or known vulnerabilities on these platforms. Once inside, attackers are difficult to detect as they use legitimate credentials and exhibit seemingly normal behavior.

2. How they stay in the network

Attackers use persistence techniques to ensure continuous access to compromised systems, even after reboots or defensive measures. Common methods include scheduling automated tasks, enabling malicious services, and creating administrative users. A key lesson from 2024 is the importance of managing identities within the network. Quickly detecting the creation of suspicious users or unusual activities associated with privileged accounts can make a critical difference in preventing attackers from maintaining a foothold.

3. How they move undetected

Lateral movement and privilege escalation are two essential techniques for attackers. In lateral movement, they compromise valid user accounts to access multiple systems, mimicking normal behavior. In privilege escalation, they exploit misconfigurations or excessive permissions to gain administrative rights. It is crucial to review configurations to ensure that standard user accounts do not have unnecessary privileges and to monitor the use of tools like remote desktops and IT services.

4. External control: command and control

Command and control refers to the mechanism through which attackers manage compromised machines from outside the network. For example, they can issue commands to encrypt data on all infected devices via signals to external servers, often camouflaged within web traffic or even messages from applications like WhatsApp or Telegram. Detecting traffic to suspicious external IP addresses, particularly low-volume but frequent communications, should be a priority in monitoring strategies.

Lessons for 2025

Based on the learnings from 2024, here are the key actions organizations should implement:

- Minimize privileges: no device should routinely be used with high-privilege accounts. This hinders malicious activities and reduces breach impact.
- Monitor user creation: even when an approved procedure is followed, the creation of new users should always be treated as a critical activity to monitor.
- Understand internal network services: knowing the normal behavior of services allows the detection of anomalies, such as off-hours connections or unusual uses of remote desktops.
- Detect anomalous traffic to external IPs: set up alerts to identify suspicious patterns in network communications (a minimal sketch of this check follows at the end of this article).
- Manage and monitor privileged users: identifying the regular activities of high-privilege accounts and monitoring unusual changes is essential for effective prevention and response.

Finally, it is imperative to emphasize that no monitoring system is infallible. Therefore, organizations must be prepared to respond to incidents by defining clear roles and responsibilities within the response team. As Sun Tzu wrote in The Art of War: "Know yourself and know your enemy, and you will win a thousand battles."
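As a concrete illustration of the "low-volume but frequent" command-and-control pattern described in the fourth lesson, here is a minimal hypothetical sketch that scans flow records for external destinations an internal host contacts many times with small, regularly spaced transfers. The record format and thresholds are assumptions for this example; in practice this logic belongs in the SIEM or network analytics layer.

```python
from collections import defaultdict
from statistics import fmean, pstdev

# Flow record: (src_ip, dst_ip, bytes_sent, timestamp_in_seconds)
MIN_CONNECTIONS = 50        # many repeated contacts...
MAX_AVG_BYTES = 2_000       # ...each transferring little data
MAX_INTERVAL_JITTER = 5.0   # near-constant spacing suggests automated beaconing

def beacon_candidates(flows, internal_prefix="10."):
    """Return (src, dst, connection_count, avg_interval) for internal hosts that
    repeatedly contact the same external IP with small, evenly spaced transfers."""
    per_pair = defaultdict(list)
    for src, dst, nbytes, ts in flows:
        if src.startswith(internal_prefix) and not dst.startswith(internal_prefix):
            per_pair[(src, dst)].append((ts, nbytes))

    suspects = []
    for (src, dst), records in per_pair.items():
        if len(records) < MIN_CONNECTIONS:
            continue
        records.sort()
        sizes = [b for _, b in records]
        gaps = [t2 - t1 for (t1, _), (t2, _) in zip(records, records[1:])]
        if fmean(sizes) <= MAX_AVG_BYTES and pstdev(gaps) <= MAX_INTERVAL_JITTER:
            suspects.append((src, dst, len(records), round(fmean(gaps), 1)))
    return suspects

if __name__ == "__main__":
    sample = [("10.0.0.5", "203.0.113.9", 512, 60 * i) for i in range(100)]
    for src, dst, count, interval in beacon_candidates(sample):
        print(f"{src} -> {dst}: {count} small connections every ~{interval}s (possible C2 beacon)")
```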
January 7, 2025
Cyber Security
Investigating is the most important task in Cyber Security
We have seen how organized crime has changed its mode of operation, finding in extortion attacks a quick and lucrative source of income. Latin America, like other regions of the world, has become an attractive market for those who wish to exploit security breaches in companies. This has generated changes in the way cyber security is approached globally, focusing on early detection as the only option to mitigate impacts.

The cyber security industry has developed tools that enable automated and rapid response to incidents. While this ecosystem has significantly improved incident detection and response, it still relies heavily on how the solutions are configured. To be clear on this, it is necessary to understand the behavior of the threats and malicious actors that use these mechanisms to generate attacks.

Early detection is the only option for mitigating impacts.

As a result, investigation becomes a primary task for cyber security analyst teams and an integral part of operations. The best example is during an incident, where recovery depends entirely on the investigation that the incident response analysts initiate.

Cyber Security investigators can, practically speaking, take many approaches. Some are dedicated to detecting potential breaches in protocols or services without these being associated with active incidents. Others consider how services or features could be exploited to generate attacks. We will focus on what an investigator should do during an incident.

What to investigate during a security incident

During an incident, investigators must determine who, what, when, where and why the attack is taking place. To do so, they must be especially observant, know what to ask and how to validate the information found or sought, in order to understand what each piece of information means and how to turn it into valuable input for the investigation.

Latin America, like other regions, has become an attractive market for those who wish to exploit security breaches in organizations.

Incident response teams follow methodologies and share information under teamwork schemes that help establish the hypotheses to be followed in the investigation to answer these questions. For example, when one investigator states that a piece of evidence is an error, another may find valuable information in that same evidence. To avoid duplication of effort, all of this is recorded in the investigation log.

Now, imagine your organization facing an incident alert. Security teams should initiate the investigation before activating the DFIR (Digital Forensics and Incident Response) team, who should come in only when the initial analysis confirms five specifics of the attack:

1. Is it a real attack?

The first step for investigators is to confirm that the alert is real. To do this, information must be collected from the tools that generated the alert and compared with the devices involved to verify its authenticity. The other important piece of information that should emerge from the initial analysis is provided by the tool that generated the alert, which associates the situation that triggered it, and its respective risk assessment against the organization's most critical assets, with a tactic or technique.

2. Did the attack cause any impact?

Sometimes, alerts are configured to detect the attackers' earlier steps, generating proactive detection of the threat and giving incident response teams time to control the impact. It is therefore very important that the initial investigation determines the impact on services or devices.

Investigation becomes a primary task for Cyber Security analyst teams.

During an incident, the DFIR team does not react and respond in the same way if the threat is detected before ransomware is executed as it does if all devices have already been encrypted. This response must be quick and clear, in order to determine whether to activate an in-depth investigation or, in some cases, to close the incident.

3. What assets are compromised?

In parallel with the previous answer, it is important for the investigation team to determine the number of compromised assets and validate their level of importance in relation to the risk matrix and the organization's definition of critical assets. This analysis allows the response to focus on containment actions within the affected assets, protecting other important assets and initiating threat hunting processes to detect other affected assets using the detected indicators of compromise.

Determine who, what, when, where and why the attack is taking place.

4. What activities did the actor perform?

It is not always possible to fully answer this question in the initial review, but having clarity on why the alert qualified as a real incident provides characteristics of what the attackers did. This data is valuable in determining whether to activate a crisis room and at what level of criticality the incident is rated, which should be directly associated with the response procedure. This process may seem laborious and time-consuming, but it should be carried out quickly by the investigators, using the monitoring tools to determine the actual actions that triggered the alert. This allows for an initial assessment of the attacker's activities, so that the intelligence and threat hunting teams can begin their work.

5. How should one respond to this attack?

Only after it is clear what the attacker did is it possible to propose initial containment actions. This step should only be performed by experienced investigators who have had time to build a complete picture of the actions taken by the attacker. Acting without this investigative foundation generates more damage than solutions.

Security teams should initiate the investigation before activating the DFIR team.

In a first response, the investigators, with the information described above, can propose some initial actions, such as controlling the attacker's possible movements in the network through changes in network segments that isolate the compromised equipment, although this depends on the type of incident. Another possible measure is to activate automatic response processes in the EDR (Endpoint Detection and Response) using the detected IoCs (Indicators of Compromise) and IoAs (Indicators of Attack), which mitigates the risk that non-compromised devices will be affected by the already known actions.

Conclusion

The Cyber Security investigation process is vital in all fields, but the contribution of having a first-response investigation team, or clear procedures on how to act in the event of an alert that could become a major incident, is invaluable. All companies, regardless of their size or type of business, are exposed to suffering a major incident.
The only way to survive is to have an initial investigation team that is prepared to answer these five questions in the shortest possible time, so that management or those involved in the response plan can make the right decisions to respond and contain the threat.
December 3, 2024
Cyber Security
Cybersecurity in OT: a need with differences
Cyber incidents in industrial environments have been increasing significantly since 2010, but it is undoubtedly in the 2020s that these incidents have affected the general population and made the news. There are several examples:

- 2015: Ukraine's power grid was shut down after a worker opened a hoax (phishing) email.
- 2017: criminals were apparently able to override the entire protection system of a petrochemical plant in the Middle East.
- 2021: the USA's largest oil pipeline had to stop the flow of fuel for 8 days after a worker's password was compromised and used to hijack the control system.

These are just a few examples of incidents that have occurred in recent years, many of them, as can be seen, caused by employees' actions.

Differences in approaches to cybersecurity

This is due to several circumstances, but the main one is a difference in the mentality associated with security. In industrial environments, terms such as Anti-DDoS, two-factor authentication and other expressions that are common in IT security are practically unknown.

There is a difference in mindset and training related to cybersecurity between corporate and industrial environments.

It is these differences that generate many of the drawbacks in implementing or enforcing security measures at the convergence between IT and OT (operational technology) environments. This provides a great lesson for security teams: the security approach in IT and OT cannot be the same, so it is necessary to clearly understand the root cause of these differences.

The vision of priorities in IT and OT

We have always said that cyber security is based on three pillars: confidentiality, integrity and availability. These are the same in any system that handles information, but the priority we give to them is different. Decision-making in corporate environments (IT) is fundamentally based on data, so the reliability of that data is the priority objective. Operational environments (OT) are very different: because they interact with the physical world, data is required in real time in order to keep control of the operation, which orients cyber security towards prioritising availability.

This change of focus means that processes such as automatic updates, micro-segmentation or any action that delays signals or shuts down the operation are not so simple to implement, because keeping the operation running, and avoiding the problems such measures can generate, is prioritised over the implementation of a patch or a security requirement.

Calculating cyber risk

One of the first steps in cyber security is the calculation of cyber risk, which is why all standards and best practices show how to perform risk calculations and stress the importance of putting controls or mitigation measures in place to reduce it. Risk is usually expressed as the probability of the attack multiplied by the impact it generates on the operation, but in operational environments these two are not the only factors to consider: because of the aforementioned interaction with the physical world, it is essential to place the parameter of consequence within the equation.
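A minimal sketch of that idea, using made-up 1-to-5 scales and a simple multiplicative weighting (the scales and the weighting are illustrative assumptions, since the article does not define a concrete formula):

```python
def it_risk(probability, impact):
    """Classic IT-style risk score: likelihood x impact, both on a 1-5 scale."""
    return probability * impact

def ot_risk(probability, impact, consequence):
    """OT-style risk score adding a consequence factor (1-5) for physical or
    safety effects, e.g. harm to people or the environment."""
    return probability * impact * consequence

if __name__ == "__main__":
    # Same vulnerability, same likelihood and operational impact...
    p, i = 3, 4
    # ...but in OT a failure could endanger lives (consequence = 5).
    print("IT view:", it_risk(p, i))       # 12
    print("OT view:", ot_risk(p, i, 5))    # 60 -> a very different prioritisation
```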
This additional parameter drastically changes the risk assessment and adds details that are valuable for operators, whose main security focus is on life and on impacts on their environment, aspects that are never taken into account in IT.

The importance of devices

In addition to the risk analysis, operational environments have clearly identified devices that are essential to the operation, often considered the "crown jewels", which emerge from the process analysis. Many of these "jewels" are very old pieces of equipment which, from an IT point of view, are obsolete, but whose age is normal within the operation and may even still be within the equipment's warranty, which shows that security and change move at different speeds in the two environments.

Cyber security concepts

Cyber security terms are new to the world of operations, which until less than five years ago (and even today) relied on the assumption that, because systems are not connected to the Internet, cyber threats do not affect them. That certainly no longer applies, but it brings with it the need to understand and manage concepts that are new.

Even the most common Cyber Security concepts have not yet permeated industrial companies with sufficient force.

As we said at the beginning, concepts such as Anti-DDoS are not only unknown but in some cases inapplicable, and standards such as IEC 62443, models such as Purdue or frameworks such as NIST have not permeated strongly enough in industrial companies, so they remain concepts that are either not known or not fully applied. This is a challenge for Industry 4.0, which is gradually being worked on, but which opens a window for cyber-attacks that affect many areas of society, since the interaction with physical elements in systems such as water treatment can affect millions of people.
December 19, 2022
Cyber Security
How to protect your social media accounts
Companies and individuals use social networks today to generate new revenue or to sell their services and products, far beyond simply communicating with other people or posting likes and dislikes. However, few people know how to secure the social networks they use, and when they are attacked they lose control of the account and find themselves in serious trouble trying to regain it. Let's look at a few tips to be prepared in case we become victims of a cyber-attack on our social network accounts. As it is difficult to cover every social network, we will take some of the most common in the world and give the most generic advice possible.

Understanding what is on offer

All social networks work every day to ensure the identification and authentication of their users, providing multiple ways to authenticate and mechanisms to guarantee the user's identity. However, most users only set a password and are unaware of what the network offers to help recover the account in case of loss.

In the case of Facebook, there is a page with advice on how to set up security, divided into three fundamental steps. These steps allow you to set up the minimum access control that any user should have, but in addition you need to know what is requested if you lose control of your account. In the specific case of Facebook, the system asks you to validate your information with a series of photos, including one of your ID card or passport. This is done for identity validation, comparing the names on the social network with those registered on the document, and may be requested by the social network at any time simply to run an identity check. Considering the above, it is vital to keep names and images that are useful to the network for recovery: in the case of an account theft, no matter what changes the criminals have made, this allows Facebook to confirm the identity against its historical records.

This same procedure is valid for almost all other social networks, such as Instagram, YouTube, LinkedIn and Twitter. However, it does not work for TikTok, where it is not even possible to set up two-factor authentication.

TikTok has become one of the most used platforms by companies, entrepreneurs and individuals, but little has been done to analyse the security provided by this platform, where the only configurable parameter is whether the account is private or not. If you forget the password or change it, the phone number is requested and a 6-digit PIN is sent, but there is no procedure for when you lose control of the account; the only documented procedure is for recovering an account that has been deleted, and it only works within 30 days of deletion.

Knowing who discloses your data

Another big problem with social networks is that we end up flooded with advertising in our email, or juggling several mailboxes just to manage the networks. Public email systems can help us here with a relatively simple trick: adding the name of the social network to the username of our email address, without needing a separate mailbox for each account. So, if the account you registered on Instagram with is yourusername@gmail.com, change it to yourusername+instagram@gmail.com; it also works with outlook.com and hotmail.com accounts. This change means that advertising or data sent from these platforms will reach your email with this identifier, giving you evidence of who disclosed your data. A small sketch below shows how to generate one alias per platform.
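As a minimal illustration of the trick, assuming a Gmail or Outlook.com mailbox that supports plus addressing, the following sketch generates one alias per platform:

```python
# Illustrative sketch: build one "plus-addressed" alias per platform so you can
# later tell which service leaked or sold your address. Plus addressing is
# supported by Gmail and Outlook.com/Hotmail; other providers may differ.

def plus_alias(address: str, service: str) -> str:
    local, domain = address.split("@", 1)
    return f"{local}+{service}@{domain}"

base = "yourusername@gmail.com"   # placeholder: use your own mailbox
for service in ("instagram", "facebook", "tiktok", "linkedin"):
    print(service, "->", plus_alias(base, service))
# instagram -> yourusername+instagram@gmail.com, and so on
```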
Additionally, many session-theft attacks are carried out by automated systems that take databases from information leaks and run password-cracking or credential-stuffing attempts with them; an address registered this way does not match the one that appears in those leaks, so it is not valid for opening the social network account.

Conclusion

Remember that it is always better to be safe than sorry and that criminals are constantly looking for weaknesses to hijack social media accounts, especially now that they have become a popular buying and selling channel. Understanding the controls and protections the networks provide, and knowing what to do in the event of an incident, is vital to ensuring the security of your information and your environment.
October 11, 2022
Cyber Security
Wireless attacks on OT
Wireless networks are now present in all types of industries. They are undoubtedly one of the most notable changes brought about by smart industry, because they have increased productivity and reduced costs. However, several scenarios have shown that wireless networks do not provide security conditions that can be considered optimal. To change this, two wireless transmission protocols have been developed that strive to improve cyber security levels: Ultra-WideBand (UWB) connectivity and the UWB variant of Real Time Location Systems (RTLS). Nonetheless, researchers specialised in OT security at Nozomi Networks (a Telefónica Tech partner) ran a series of security tests on these protocols and found several 0-day vulnerabilities that make it possible to access sensitive information exchanged in the transmission.

Security test results

To focus the research, specific equipment models and their use in the industrial and hospital sectors were chosen: Sewio Indoor and Avalue Renity, two UWB RTLS packages that deliver location and protection functionalities and are used in maintenance operations, among others. With these elements in place, the researchers analysed the communications and data of a traditional operational infrastructure built with them, composed of locators, anchors, and the UWB and RTLS processing server. Using this network architecture, the researchers carried out reverse engineering and analysis in various scenarios, fully documented in the research team's final report. In that report you can see the tactics, techniques and procedures used to simulate how an actor can gain access to the information by executing a Man-in-the-Middle (MitM) attack and gaining access to the communication network.

Possible consequences and options for mitigating this attack

When an attacker applies these methods in real life, he or she can easily learn the position of people or assets in factories, information that is used for rescuing people in remote jobs or in emergencies within an operating plant. In hospitals it is widely used in emergencies and to attend to serious medical symptoms. Therefore, in a passive attack, criminals could access information on staff behaviour and habits or learn the location of valuable assets.

One of the most common RTLS functionalities is the creation of geofences, used for personnel and asset protection: entries to or exits from specific areas can generate an alert for proximity to dangerous equipment, among other alerts. In testing attacks on these configurations, it was possible to modify the monitored areas or the actions taken by the geofence, generating manufacturing stoppages, allowing access to dangerous areas or disabling anti-theft monitoring.

Conclusion

All of these analyses and results were presented at the BlackHat USA 2022 edition, where they were very well received. The demonstrations showed that all types of industries can be victims of this kind of attack and that the consequences can be not only operational but also life-threatening.
It is important for the industry to assess how to remediate or mitigate the possible impact through network segmentation and the use of industrial firewalls, through intrusion detection in operational environments capable of spotting anomalous behaviour or unexpected movements in the network, and finally through encrypted data transmission. A minimal sketch of the kind of plausibility check such monitoring can apply to location data follows.
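By way of illustration only (this is not Nozomi's detection logic), the sketch below flags RTLS position updates in which a tag "jumps" faster than anything in the plant could physically move, a simple sign that location traffic may have been spoofed or tampered with; the speed threshold is an assumption:

```python
# Illustrative sketch: flag implausible jumps between consecutive RTLS position
# updates. The maximum speed is an assumption chosen for the example.

import math

MAX_SPEED_M_S = 10.0   # assumption: nothing tracked in the plant moves faster

def implausible_jump(prev, curr) -> bool:
    """prev/curr: tuples of (x_metres, y_metres, timestamp_seconds)."""
    (x1, y1, t1), (x2, y2, t2) = prev, curr
    dt = max(t2 - t1, 1e-6)
    speed = math.hypot(x2 - x1, y2 - y1) / dt
    return speed > MAX_SPEED_M_S

updates = [(0.0, 0.0, 0.0), (3.0, 4.0, 1.0), (250.0, 4.0, 2.0)]
for prev, curr in zip(updates, updates[1:]):
    if implausible_jump(prev, curr):
        print("ALERT: implausible movement between", prev, "and", curr)
```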
September 5, 2022
Cyber Security
Vulnerabilities, threats and cyber-attacks on industrial systems
We have been monitoring security in industrial environments for some years and have seen how these infrastructures have become a target for cybercriminal groups. Our innovation area has developed a system for capturing threats in industrial environments that allows us to carry out a detailed analysis of the attack techniques and tactics used in this area.

Threat detection in industrial systems

With this honeypotting tool, called Aristeo, we have seen exponential growth in attacks, reaching figures of around 7 million detections in 24 hours and 35 million in 7 days. The data from these samples shows that the IT components of OT infrastructures are the main attack vector. In our tool they are called Engineering Bay and HMI, systems that are usually supported by operating systems and protocols common in IT networks.

Telefónica Tech's cybersecurity tool Aristeo

Most of these detected attacks can be mapped to the techniques that make up the initial access tactics in the ATT&CK matrix for ICS, which was updated on 21 April. Additionally, these detections often turn into ransomware attacks, as indicated by Nozomi Networks in its 2H2021 security report.

Timeline of notable ransomware and supply chain attacks in the second half of 2021. Source: Nozomi Networks

The biggest concern about this increase is the physical repercussions these attacks can generate, which have grown over the last two years. Cases such as the JBS Foods hijacking, which caused meat shortages in several countries around the world, added up to 10 impact cases in 2020, surpassed 20 in 2021, and are projected to reach 50 in 2022, according to the OT incident report by Waterfall Security and ICSSTRIVE.

Improvements for threat detection in industrial systems

This trend has led to more in-depth and detailed research into potential vulnerabilities in industrial systems equipment, with 651 reports covering 47 manufacturers and 144 products in the second half of 2021 alone. Companies in the industrial sector have also improved their own threat detection and testing systems. The best example is Siemens, which created a CERT and reports once a month all new vulnerabilities, or updates to known vulnerabilities, in its products (a process similar to Microsoft's). By May 2022 they had reported 27 alerts, of which 12 were reports of new detections. This initiative has been followed by other companies in the sector, such as Schneider Electric, which, like Siemens, opted for monthly reporting of threats revealed by its research teams or by external researchers; in May it reported 6 alerts.

Cybersecurity, a critical need for industry

These changes in industrial environments undoubtedly make cybersecurity a burning need. As we have said on previous occasions, they require a change in the approach of the operations teams and an integration of these networks into the security governance of the organisation. One of the common points in specialised OT cybersecurity analyses is the lack of visibility of events within operations networks, which means that incidents cannot be detected in their early stages. The reason is that you can't protect what you can't see, and industry studies have confirmed that fewer than 62% of companies have complete visibility of network events.
In terms of personnel, it is critical that operational environments put in place ongoing cybersecurity training processes, just as staff are constantly trained in occupational safety, operational risk and occupational health. It is essential that operators understand why safeguarding information matters and which procedures they must follow to do so.
May 24, 2022
Cyber Security
Where is your company on the cybersecurity journey?
Although the cybersecurity path is not linear and each company has its own characteristics, experience has allowed us to classify companies into five levels of cybersecurity evolution. The existence of these levels does not imply that all companies must reach the maximum (this depends a lot on the characteristics and size of the organisation), but they must reach an optimal level that reduces the probability of an incident. In this article we try to give companies a tool to identify where they are, what the challenges are and what they need to do to raise their level. The aim is to enable them to create their own improvement action plan. It is not a definitive guide, but a useful aid that simplifies some of the steps indicated by norms or standards that often lack context. We will analyse each level in detail, taking into account the security posture of the network, devices, services and file management.

Unaware

This kind of organisation makes information management decisions based on recommendations or market best practices. They usually see the acquisition of cybersecurity equipment as an expense or as compliance with an industry standard. This means that the acquisition of cybersecurity elements is not coherent and is done with the sole objective of having minimal control or ticking a compliance box. On the other hand, there is no security or information management policy that employees or third parties must comply with, exposing their own and their clients' information.

The corporate network usually has perimeter protection systems and browsing controls, managed by IT staff to meet business rather than cybersecurity requirements. There is no segmentation and no device access controls. Remote access to equipment on the network is enabled with a username and password as the only control, usually shared by several workers, to connect to internal equipment or services from home.

The organisation's computers often run a non-enterprise anti-virus system that cannot be monitored or controlled centrally. Operating systems are often not managed for proper updates or configuration, so it is common for computers to coexist with malicious software, undetected. Information in these organisations is neither controlled nor classified, so any user on the network can access all of it without restriction. Managers often make uncontrolled copies of information, and work is handled independently on users' devices rather than in teams with traceability over access to data. Cloud storage systems do not have access controls enabled, nor are they encrypted; they are often mounted as an additional directory of the user's operating system, so their main function is as a backup of information.

Reactive

These organisations start the process of integrating information security into the company's organisational areas, understanding that in today's world everything depends on the management of information and that cybersecurity is therefore essential for the company's growth. Their main characteristic is that they have a security operations centre (SOC) service, either external or internal, allowing correlation and threat detection to be done reactively in the network, based on detection configurations.
Such organisations have many cloud services and multiple security devices in the network that send events to the operations centre for threat detection. In some cases the threats that are monitored and alerted on originate from external networks, but internal threats are rarely monitored with equal rigour.

Security management is usually the responsibility of the technology area, where network administration teams and core security teams take reactive action on SOC notifications. Users have VPN access for remote connections, controlled through centralised identification systems such as Active Directory and monitored from the SOC. However, the networks are not segmented and VPN connections have the same privileges and access as the organisation's internal network. User devices are managed from a central administration that deploys control policies and access permissions based on user classification, but there are usually local administrators on the machines and administrative users for network management. Personal devices are allowed to connect to the corporate network, opening the door to malicious software or the extraction of sensitive information; given the lack of file controls, this is one of the main causes of information leakage. Non-enterprise backup systems, such as external drives or shared folders in the cloud, offer no guarantee of data recovery and are susceptible to data-hijacking attacks.

Proactive

These companies have systems and infrastructures that allow them to apply anticipatory controls, which enables them to base all information security decisions on data and on the timely detection of threats; to this end they have a security architecture oriented to the challenges involved in information management. Not only do they have a SOC, they also analyse the internal and external threats detected by these systems in order to improve controls and corporate information management policies. These organisations use identity management systems to initiate information classification processes and improve access controls. They control not only access to data but also use multiple authentication factors to guarantee a user's identity, mitigating the most common phishing attacks.

For this to work properly, corporate controls over network devices and users are in place, which makes it possible not only to detect existing threats but also, based on the knowledge and behaviours observed on networks and devices, to generate alerts and controls on suspicious situations. These implementations use indicators of attack, rather than indicators of compromise, to apply control proactively. Another important feature is the level of staff awareness: employees are trained in how to detect threats and which tools to use for business communications, always taking into account the categorisation of documents.
All of the above is managed by a dedicated cybersecurity team with a management level that allows it to weigh in on and analyse corporate decisions from a data-protection perspective, and with specialised teams for monitoring, incident response, identity management and security architecture, among others.

Anticipatory

In these organisations, the platforms, network architecture and corporate procedures are aimed at protecting information and responding in advance to possible threats from the cyber world, protecting information wherever it is located and whatever the way of communicating or connecting to it. The company's executive management is aware of the importance of information security, so every decision on suppliers, equipment, network deployment, use of cloud services and other matters is analysed beforehand by the information security area, which in turn ensures that policies and controls are aligned with business objectives.

Threat Hunting and Incident Response teams are essential in these organisations. In close collaboration with the company's defence, monitoring and attack teams, they not only analyse alerts from the various detection systems but also, using the attack techniques and tactics disclosed by companies specialising in information security, build mechanisms for detecting and analysing possible anomalous behaviour.

Document management and classification systems are closely integrated with identity management systems, allowing traceability of events on each corporate file and identity-based access control, covering not only employees but also computers or autonomous systems within the network that have programmatic access to company files. All of this is orchestrated by the security team, which reports directly to the presidency or board of directors and comprises personnel trained in detection, monitoring, threat hunting, attack and defence, supported by specialised tools for each field and by advanced protection on user and network devices that controls access and allows the network architecture to be modified.

Automated

This is the highest level of corporate information security management. Its main characteristic is that, on top of a solid structure and architecture, it is integrated with intelligent automation platforms that orchestrate the various monitoring, detection and threat hunting systems, using deep learning technology and generating automatic reactions to the threats or behaviours detected. These companies base their information security operation on Zero Trust, which extends controls to all levels and instances where data is handled, managed, generated or manipulated, regardless of whether the actor is an employee, supplier, third party, automated device or anyone else with access to data. Managing these orchestration and automation systems requires specialised cybersecurity personnel and aware employees, as well as clear security policies closely aligned with the business to avoid the friction that the application of controls can generate.
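As a rough, purely illustrative way of locating yourself on this scale, the toy self-assessment below maps a handful of yes/no answers to the five levels; the questions and the scoring are made up for this article and are not a formal maturity model:

```python
# Illustrative sketch: a toy self-assessment against the five levels described
# above. The questions and the one-point-per-answer scoring are assumptions.

LEVELS = ["Unaware", "Reactive", "Proactive", "Anticipatory", "Automated"]

QUESTIONS = [
    "Is there a SOC (internal or external) receiving security events?",
    "Are internal threats analysed, not just external alerts?",
    "Are identities controlled with MFA and is information classified?",
    "Are there dedicated threat hunting and incident response teams?",
    "Are detection and response orchestrated and automated (Zero Trust)?",
]

def maturity(answers):
    """answers: one boolean per question above, in the same order."""
    score = sum(bool(a) for a in answers)
    return LEVELS[min(score, len(LEVELS) - 1)]

print(maturity([True, False, False, False, False]))  # -> "Reactive"
```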
April 20, 2022
Cyber Security
A practical approach to integrating MITRE's ATT&CK and D3FEND
Businesses have become aware of the need to have mechanisms in place to ensure the protection of their information, and of how important it is to understand their weaknesses in order to improve their resilience in the event of a cyber incident. Although many managers continue to see security as a set of elements designed to protect and minimise the possibility of an attack, this is no longer enough. Cybersecurity is an ongoing process that requires understanding the adversaries and the risks in the environment.

MITRE's ATT&CK, which we have discussed on previous occasions on our blog, was born with this philosophy in 2013, seeking to compile in a matrix the techniques, tactics and procedures used by attackers in real actions against business, mobile and industrial environments. Its evolution has led to the creation of a matrix of defensive capabilities and countermeasures called D3FEND. MITRE states that "cyber threat intelligence is about knowing what adversaries are doing and then using that information to improve decision making", so regardless of the size of the cybersecurity team, this tool is vital in the process of ensuring information security. With it, it is possible to associate techniques with the main criminal groups, review iconic incidents in different industries, validate which adversaries are common in a sector and know the software used in each phase of an attack, among many other things.

Companies that are just starting out and have few resources in this area can begin by understanding the usual behaviour of the adversaries in their industry, and with this data validate whether the defences they have implemented detect and mitigate the actions of these groups. To understand how this analysis is done, let's take the example of a logistics company, a sector that has recently been the victim of several ransomware attacks around the world.

1. Find your sector. Determine the industry sector the business is focused on. For this purpose, the website provides a search engine at the top; here we will enter "logistics" for the example.

Figure 1: Search result in https://attack.mitre.org/groups/

For the analysis we will take the Cuba ransomware, marked in the illustration. It is one of the most widely used against medium-sized companies in Latin America.

2. Adversary information. Once the software or group to be analysed is selected, you gain access to the information provided by the system, such as basic data on the platforms attacked, when it was detected, who detected it and the victim industries.

Figure 2: Adversary information

3. Know the techniques. The same adversary page shows the techniques detected in attacks where this malware has been used, in a list that enumerates the techniques and sub-techniques.

Figure 3: Techniques used by Cuba in an attack.

Right there, the "Navigator Layers" option makes it possible to see within the matrix which tactics and techniques are involved.

Figure 4: Visualisation of tactics and techniques used by Cuba.

In this case, it can be seen that the techniques used by the adversary groups to initiate the attack are unknown or not reflected, which the matrix calls pre-attack. This usually indicates that the techniques used are too varied to establish a specific one.

4. Know the defences. Each of the techniques has a section listing the possible forms of detection that should be implemented to mitigate this action.
For the example we will look at a sub-technique used in the execution tactic, which is usually the first step detected by incident response investigators in ransomware attacks.

Figure 5: Technique to be analysed, because it shows us a command.

Adversaries use the Windows command console to execute programs inside the victim machine. In the specific case of Cuba, the cmd.exe /c command has been detected in several of the activities analysed. By accessing the information on the technique, we get the basic data collected on how it has been used, some of the procedures where it has been detected, possible mitigations and ways of detecting its execution. For our example we will look directly at the information and the possible ways to detect it.

Figure 6: T1059.003 technique data

Figure 7: Detection recommendations for T1059.003

With this information, the cybersecurity team can decide how to act to prevent an incident in which this software is used against their industry sector. They can even use the technique reference to search the defence matrix for more information on how to protect themselves: go to https://d3fend.mitre.org/ and, in the search box called ATT&CK lookup, enter the technique, in our example T1059.003.

Figure 8: Relation with the defence matrix.

This shows the map of the forms of defence and detection, which for our example is as follows.

Figure 9: Defence map for T1059.003

In short, this tool is invaluable for all types of businesses and cybersecurity teams, providing information and data to make decisions in the pursuit of better cyber resilience.
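If you prefer to script this lookup rather than browse the website, the sketch below downloads the public enterprise ATT&CK STIX bundle from MITRE's cti repository on GitHub and lists the techniques related to a named group or software such as "Cuba". The URL and the STIX field names reflect the public repository layout at the time of writing and should be treated as assumptions to verify:

```python
# Illustrative sketch, not an official MITRE client: list the ATT&CK techniques
# related ("uses" relationships) to a named group or software in the public
# enterprise STIX bundle published in the mitre/cti GitHub repository.

import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

def techniques_used_by(name: str):
    objects = requests.get(ATTACK_URL, timeout=120).json()["objects"]
    by_id = {o["id"]: o for o in objects if "id" in o}

    # The group or software object: intrusion-set, malware or tool.
    actor_ids = {o["id"] for o in objects
                 if o.get("type") in ("intrusion-set", "malware", "tool")
                 and o.get("name", "").lower() == name.lower()}

    techniques = set()
    for rel in objects:
        if (rel.get("type") == "relationship"
                and rel.get("relationship_type") == "uses"
                and rel.get("source_ref") in actor_ids):
            target = by_id.get(rel.get("target_ref"), {})
            if target.get("type") == "attack-pattern":
                ext = next((r for r in target.get("external_references", [])
                            if r.get("source_name") == "mitre-attack"), {})
                techniques.add((ext.get("external_id", "?"), target.get("name", "?")))
    return sorted(techniques)

for tid, tname in techniques_used_by("Cuba"):
    print(tid, tname)
```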
February 24, 2022
Cyber Security
AI & Data
TCP/IP Stack Gruyere
In May 2020, during the most complicated phase of the global pandemic, we were told that the internet was broken as a result of bugs (called Ripple20) affecting millions of IoT devices. But this was just one in a series of problems detected in TCP/IP stacks that have been brought together in research called Project Memoria. This project reports vulnerabilities in the implementation of 14 TCP/IP stacks, detected after 18 months of research. The result is the disclosure of 97 vulnerabilities, grouped into 6 reports, that by their very nature are rated with a very high risk level and impact millions of devices and hundreds of manufacturers.

The first striking feature of the report is the initial release date of the fourteen TCP/IP stacks, which are at least 7 years old and at most 28 years old: evidence that, as on previous occasions with other base protocols, unknown vulnerabilities have been carried over for decades.

Year of initial release of each TCP/IP stack analysed

This does not imply that all stacks or protocols are vulnerable just because they are old, but it does show that in many cases the processes of correction and improvement in this type of basic building block of the internet are somewhat slow. The study also indicates that one of the main problems is the lack of response from many manufacturers when they are notified of vulnerabilities, or the slow adoption of patches, as in the case of Schneider Electric, which took 308 days to publish the patches correcting the vulnerabilities known as AMNESIA:33.

The other very important point is the impact of these vulnerabilities, as most of the implementations are in IoT, IIoT and OT devices, which are the basis of the operation of critical infrastructures and industries around the world. Manufacturers of devices such as gas turbines, electrical transmission elements and Siemens RTUs have confirmed through their own CERTs, in the last two months, the existence of vulnerabilities in their devices (advisories SSA-044112 and SSA-316383, confirming NUCLEUS:13 and NUMBER:JACK respectively). Industry is not the only sector affected, however: government and medical services have also been severely impacted and are in fact the most affected sectors reported, together accounting for around 60% of all affected devices.

Figure 2: Vulnerable devices per sector

As in previous cases, this highlights the need for greater scrutiny of how vendors and developers are creating, or making use of, the different TCP/IP stacks in their implementations. The good news is that responsibly reported bugs like these not only show the importance of such analysis, but also how vital early warning is for the world's organisations, raising awareness of the as yet undiscovered vulnerabilities that may exist in critical environments.
November 23, 2021
Cyber Security
PackageDNA: Our Development Package Analysis Framework That Made Its Debut at BlackHat
After several months of research and development, we presented our deep analysis tool for development packages, called PackageDNA, at the BlackHat USA 2021 Arsenal event in the talk "Scanning DNA to detect malicious packages in your code". Its goal was to showcase the library analysis framework we built to help developers and companies validate the security of the packages used in their code.

This tool came about when the innovation team set out to analyse the malware that hides inside libraries. From time to time it becomes public that some libraries are supplanting the original ones, as in this example from late 2018 in which a couple of PyPI packages were flagged. The story has repeated itself often since then, but how do you do the research without a tool that makes the search easier? Our initial idea was to cover PyPI packages only, but we set ourselves a bigger challenge and the idea evolved to cover the libraries of the main programming languages. So it became a framework that, for each package it parses in PyPI, RubyGems, NPM or Go, shows the following data:

Metadata of the package.
Hash of every file it contains.
Detection of possible IoCs, such as IPs, hashes, URLs and emails.
Static analysis of the code, with an open-source tool for each language.
Analysis using AppInspector, Microsoft's open-source tool for identifying malicious components.
Validation of suspicious files against VirusTotal.
Validation of CVE reports on GitHub, taking into account the specific version of the package.
Validation of packages generated by the same user within the library and in other programming languages.
Checking for possible typosquatting of the package in the same library.

The result is a powerful framework that allows a deep analysis of the libraries used in the code being analysed or created, and also gives security analysts a static view of the security of the code, a view of the attacker's behaviour, and data for threat intelligence.

How to use PackageDNA?

The framework is developed in Python 3 with an interactive console that lets the user simply select what they want to do; the first screen the user sees is as follows:

You must start with option 7, the configuration of all the external tools associated with the framework (all of them free to use or open source); as you can see in the following image, it is just a matter of loading each value correctly. Once everything is configured, the user can do the following with the PyPI, RubyGems, NPM and Go libraries:

Analyse the latest version of a package.
Analyse all versions of a package and compare results between versions.
Load a list of packages with specific versions.
Upload a local package for analysis.

For threat intelligence analysis, you should select option 4 in the initial panel, which opens another panel where you can:

Search for the packages generated in each of the libraries, and the developments uploaded to GitHub, by the username you want to investigate.
Analyse the typosquatting and brandsquatting found for a specific package in a library.
Search for code segments within a specific package.

Although the tool is designed without a database to store all searches, there is an option to review the results of analyses performed previously and stored locally on the machine. The information is shown initially in the console, with the option of viewing it in the browser through Flask, as shown in the following images.
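To give an idea of the kind of data the framework automates the collection of, here is a minimal, standalone sketch (not PackageDNA code) that pulls a package's metadata from the public PyPI JSON API and prints the SHA-256 digests that PyPI publishes for the files of its latest release; the endpoint shown is the documented https://pypi.org/pypi/<name>/json:

```python
# Illustrative sketch (not part of PackageDNA): fetch a package's metadata from
# the public PyPI JSON API and list the published SHA-256 digests of the files
# in its latest release.

import requests

def pypi_release_hashes(package: str):
    data = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=30).json()
    info = data["info"]
    print(f"{info['name']} {info['version']} - author: {info.get('author') or 'unknown'}")
    for release_file in data.get("urls", []):   # files of the latest release
        print(f"  {release_file['filename']}: sha256={release_file['digests']['sha256']}")

pypi_release_hashes("jellyfish")
```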
Attacks on the software supply chain

During development, attacks on the software supply chain were gaining prominence around the world, with reports of several packages detected as malicious in many of the libraries within our scope, so we couldn't have had a better testing scenario. In fact, we were able to analyse the versions of maratlib, a PyPI package deployed for malicious cryptocurrency mining that spoofed a package commonly used in mathematics called matplotlib. When running the tool and comparing the two versions, we could clearly see the malicious code segment detected by AppInspector, present in only one of the versions uploaded to the library. We can also look at the other packages in the report that were generated using typosquatting techniques.

With this framework we hope to provide the community of developers and code security analysts with a simple but powerful mechanism to achieve their goals. You can download it for free at https://github.com/telefonica/packagedna and we are open to your comments and contributions to improve the tool.
August 30, 2021
Cyber Security
New Threat, Old Techniques
For some years now, the techniques used by malware developers have focused on evading detection mechanisms, and obfuscated macros and the use of Windows' own tools have proven to be an effective way to accomplish their goals, even when they rely on old Office document formats. One of the malware campaigns that has most exploited the technique of old and obfuscated (or simply hidden) macros is Emotet, named after an ancient Egyptian king. Since 2014 it has become the most feared banking trojan, with a very strong peak of incidents in 2019.

Figure 1: https://any.run/malware-trends/emotet

But this week McAfee Labs published a new infection technique that not only uses Office macros as its main tool at the start of the attack, but complements them with the download of malicious DLLs. So far in 2021 it has mainly affected Spain, Canada and the United States, and it is considered the return of a variant of the Zloader banking malware, which first appeared in 2006 as a variant of the Zeus banking trojan.

Figure 2: https://www.mcafee.com/blogs/other-blogs/mcafee-labs/zloader-with-a-new-infection-technique

The initial attack vector is an email with an attached Microsoft Word file, which downloads a password-protected Microsoft Excel file from a remote server. With the two documents on the machine, the VBA macros in both files interact on a schedule and modify some registry policies to avoid alerts when running the dynamic macros from Excel, finally downloading the Zloader executable containing a malicious DLL. This is the typical behaviour we have described in fileless attacks, and it is what the innovation area has sought to prevent with the creation of DIARIO: detecting the malicious content of these macros in the first step of the threat flow while respecting the privacy of the documents.

This case is no exception. The main IoC is the hash of the Word document with which the attack starts, 210f12d1282e90aadb532e7e7e891cbe4f089ef4f4f3ec0568dc459fb5d546c95eaf, and, as can be seen in the web response, we already detect it as malicious because all 5 macros contain malicious processes.

Figure 3: https://diario.elevenpaths.com/

When checking them in our tool, you can clearly see how one of the macros loads the moment when the Excel file is opened with the password from Word, and how the other macro builds the download URL for that same file.

DIARIO was designed in the innovation area on the premise that the most effective way to detect malicious code of this type is machine learning, so any other document containing a variant of the malicious process behind this detection will be immediately recognised and marked as malicious. If users were trained and aware enough to analyse every attachment that arrives in their mail, having a tool like DIARIO within the Outlook client would allow them to mitigate the risk of this type of attack and counter the threat from the first step of the attack flow.
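For readers who want to take a first look at suspicious macros themselves (DIARIO does this with machine learning and without exposing the document's content), the sketch below uses the open-source oletools package to dump the VBA macro source from an Office file for manual review; the olevba API shown is the one documented by the oletools project, so check the current documentation before relying on it:

```python
# Illustrative sketch: extract VBA macro source from an Office document using
# the open-source oletools package (pip install oletools). This only dumps the
# macros for manual review; it is not DIARIO and gives no verdict by itself.

from oletools.olevba import VBA_Parser

def dump_macros(path: str):
    parser = VBA_Parser(path)
    try:
        if not parser.detect_vba_macros():
            print("No VBA macros found in", path)
            return
        for _filename, stream_path, vba_name, code in parser.extract_macros():
            print(f"--- macro {vba_name} (stream {stream_path}) ---")
            print(code)
    finally:
        parser.close()

dump_macros("suspicious_invoice.doc")   # hypothetical file name
```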
July 27, 2021
Cyber Security
Using Development Libraries to Deploy Malware
Cybercriminals seek strategies to achieve their objectives: in some cases it is users' information; in others, connections; sometimes they build networks of computers under their control (botnets), and so on. Any user is a potential victim, but if, in addition, they can get others to distribute their malicious code without knowing it, that is an invaluable gain for criminals. They have therefore realised that slipping malicious code into the packages that developers use in their projects is a very effective way of spreading it to as many victims as possible, while also benefiting from anonymity. Every time a developer, anywhere in the world, uses the corrupt package that was planted in the library, in any kind of code, they distribute the malicious segment, and tracing it becomes almost impossible, since some libraries have been downloaded millions of times.

In the last year several samples of this practice have been found, mainly in NPM and Python library packages. Criminals used different techniques to hide their actions and bypass the controls in these libraries; let's see which ones.

What Are the Techniques Used by Cybercriminals?

Although the techniques are diverse, we are going to focus on those that, once detected, were shown to have been available in the libraries for a long time:

Typosquatting: as we have mentioned before, this technique is used in various types of computer attacks and is based on modifications to package names that confuse users, or that load one of these malicious packages after a typing error. The clearest example of this method was seen in Python's PyPI library, where two malicious packages were detected that used name mutations for their propagation, as in the case of jeIlyfish versus jellyfish (a capital "I" in place of the first "l"). This name mutation was intended to obtain the SSH authentication keys on the servers or computers where any development using the package was installed. These packages were available for over a year in the PyPI library, where they were downloaded more than a hundred thousand times, which gives the attacker a wide impact and dispersion in terms of possible targets, as this code may still be used in business or home developments that are not properly maintained or monitored.

Brandjacking: this type of attack takes advantage of the importance of a package to create a mutation or imitation of it. The main difference with the previous technique is that it does not rely on a developer's possible typing error when writing the requirement into the code, but creates a package with exactly the same name that usually adds the name of the language being worked on. This technique has been detected several times in the NPM registry, using packages like twilio, which has about 500 thousand downloads, to create a malicious package that exploits its recognition: the twilio-npm package, which achieved 371 downloads with only 3 days online.

These two basic examples (a simple detection sketch follows below) show that criminals are always looking to deploy their malicious code using various mechanisms, and that they can put at risk any user, with or without computing or information management knowledge. This also confirms that it is vital that development companies look for mechanisms to detect these strategies and follow methodologies that guarantee secure development, in order to minimise the chance that this type of threat is exploited and endangers users.
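As a purely illustrative exercise, the sketch below flags package names that are suspiciously close to a well-known package, either by homoglyph substitution (the jeIlyfish / jellyfish case) or by small differences and added suffixes (the twilio-npm case); the popular-package list, the homoglyph map and the similarity threshold are assumptions for the example:

```python
# Illustrative sketch: flag package names that look like mutations of popular
# packages. The reference list, homoglyph map and 0.9 threshold are assumptions.

from difflib import SequenceMatcher

POPULAR = ["jellyfish", "requests", "matplotlib", "twilio"]
HOMOGLYPHS = str.maketrans({"I": "l", "1": "l", "0": "o"})  # tiny example map

def looks_like(candidate: str):
    normalised = candidate.translate(HOMOGLYPHS).lower()
    for known in POPULAR:
        if candidate == known:
            return None                      # it is the real package
        similar = SequenceMatcher(None, normalised, known).ratio() > 0.9
        if normalised == known or normalised.startswith(known + "-") or similar:
            return known
    return None

for name in ["jeIlyfish", "requestss", "twilio-npm", "numpy"]:
    hit = looks_like(name)
    if hit:
        print(f"{name!r} looks like a mutation of {hit!r}")
```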
As for the companies behind these development languages, internal and community efforts are being made to detect these threats in the shortest possible time. An example of these alliances is the OSSF (Open Source Security Foundation), of which we are an active part, which seeks to develop tools and communication channels aimed at improving the security of open-source developments and giving software companies references and elements to validate the life cycle of their own developments.
November 19, 2020
Cyber Security
When Preventing a Cyberattack Becomes a Vital Decision
In recent years, the number of incidents in critical infrastructure networks and industrial systems has increased significantly. There have been attacks with a high degree of complexity and knowledge of the elements affected, taking advantage of the historical deficiency in the security implementations of these types of networks. This creates a high risk for the lives of the people who work in these industries or depend on them, as well as for countries' critical infrastructures.

In previous articles we have talked about how industrial networks base their safety on keeping industrial systems isolated. This is what we know as the air gap, but it is increasingly unlikely and inefficient. The false sense of security generated by this isolation has allowed cyberattackers to take advantage of remote access tools (RATs) to infiltrate IT networks and reach OT networks, from where they exploit the vulnerabilities of industrial systems without being detected. Security measures have been slow to reach these environments due to a lack of knowledge in OT cybersecurity, to the isolation that exists in companies between IT and OT teams, or simply because of the erroneous assumption that these devices cannot be reached by criminals.

However, earlier this year MITRE published the version of its ATT&CK framework (Adversarial Tactics, Techniques and Common Knowledge) that specialises in industrial control systems.

Source: https://collaborate.mitre.org/attackics/index.php/Main_Page

This matrix has been very important in the investigation of incidents that have occurred in the last six months, as our partner Nozomi Networks indicates in its report for the first half of 2020. This report points out how the COVID-19 pandemic is being used to carry out ransomware and botnet expansion attacks on OT and IoT systems, and analyses the tactics and techniques used for this purpose.

Case Study with MITRE ATT&CK Step by Step

To understand how this matrix is applied, it is best to analyse an attack with it. In this case we will take an advanced persistent threat (APT) called GreyEnergy, which was made public in November 2018 but whose first detections are in incidents on Poland's electricity grid in 2015 and later in incidents in the financial sector during 2018.

The initial attack used a technique that is well known to all of us who work in security and to which all Internet users are permanently exposed: phishing. It is also a technique whose use has increased significantly in this time of pandemic. Therefore, the initial access on the ATT&CK map is Spearphishing Attachment, as the attack begins with a Word document containing a malicious macro with the commands needed for the subsequent execution, evasion and persistence phases. Since the malicious payload is in a macro, which requires user interaction, the User Execution technique on the ATT&CK map must also be marked.

To achieve persistence, the malware searches for vulnerable web servers in which to hide, managing to camouflage itself in the network. Therefore, Hooking is marked under Persistence and Masquerading under Evasion on the ATT&CK map, the latter due to the packer it uses to hide the real malicious code. To detect targets within the affected network, the malware uses several widely known tools that can be grouped under Discovery on the ATT&CK map, such as Network Service Scanning and Network Sniffing.
In this way it manages to find the vulnerable services mentioned above and use them for lateral movement, which on the ATT&CK map would be Exploitation of Remote Services. For command execution it uses a technique well known among C&C systems: deploying a proxy within the network to redirect requests to equipment on external networks, hiding the traffic from the network security monitoring systems among the internal traffic. Therefore, in the Inhibit Response Function phase, Program Download and Alarm Suppression are marked on the ATT&CK map, since the malware uses an external program such as a proxy and suppresses alarms by hiding in internal traffic.

The last two phases of the ATT&CK map are more complex to analyse because, as this is modular malware, the control process it seeks to damage, and therefore its final impact, can change from case to case. However, in the samples collected it was found that the attackers sought to stop services by wiping the hard disks of the human-machine interface (HMI), so the final impact would be damage to property or denial of control. Thus, what we should mark is Service Stop and Damage to Property. In industrial networks this impact is very critical: when control or visibility of the operation is lost, there is no way out other than an emergency stop of the service to mitigate the possibility of loss of life, environmental damage or physical damage, which usually generates very serious economic and reputational losses for the companies affected.

Conclusions

As can be seen, MITRE ATT&CK makes it possible to clearly identify the tactics and techniques used by cybercriminals in cyberattacks aimed at industrial environments, as well as providing common information gathered from other incidents that helps in the deployment of specialised monitoring systems and the application of threat intelligence to minimise the impact of an incident. In each of the phases there are possible indicators of compromise, such as the hash of the file used in the phishing (f50ee030224bf617ba71d88422c25d7e489571bc1aba9e65dc122a45122c9321) where, as seen below, the macro contains the malware. This would have been detected with our DIARIO tool, and the control systems would have made it possible to stop the incident before it started.

This methodology makes it possible to secure the three stages of industrial system control, as we explained in the articles introducing industrial systems a few years ago: correct measurement of the data must be ensured so that its evaluation and processing guarantee compliance with safe operating standards. Given the severity that an incident in industrial environments can cause, it is essential that these security frameworks are adopted in such environments, so that the monitoring of and response to cyber incidents, as well as remote control systems, handle safety requirements more successfully and avoid literally putting lives at risk.
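One practical way to keep this kind of mapping is as an ATT&CK Navigator layer, so it can be overlaid on the ICS matrix and shared with the monitoring team. The sketch below builds such a layer from the techniques discussed in this case study; the layer fields follow the Navigator's published layer format as I understand it, the "ics-attack" domain string is an assumption, and the technique IDs are deliberately left as placeholders to be looked up on attack.mitre.org:

```python
# Illustrative sketch: write the techniques mapped in this case study as an
# ATT&CK Navigator layer file. Field names follow the public layer format;
# technique IDs are placeholders (T0XXX) to be replaced with the real ICS IDs.

import json

mapped = [
    ("initial-access", "Spearphishing Attachment"),
    ("execution", "User Execution"),
    ("persistence", "Hooking"),
    ("evasion", "Masquerading"),
    ("discovery", "Network Service Scanning"),
    ("discovery", "Network Sniffing"),
    ("lateral-movement", "Exploitation of Remote Services"),
    ("inhibit-response-function", "Program Download"),
    ("inhibit-response-function", "Alarm Suppression"),
    ("impact", "Service Stop"),
    ("impact", "Damage to Property"),
]

layer = {
    "name": "GreyEnergy case study",
    "domain": "ics-attack",              # assumption: ICS domain identifier
    "techniques": [
        {"techniqueID": "T0XXX",         # placeholder: look up the real ICS ID
         "tactic": tactic,
         "comment": technique,
         "color": "#ff6666"}
        for tactic, technique in mapped
    ],
}

with open("greyenergy_layer.json", "w") as fh:
    json.dump(layer, fh, indent=2)
print("Layer written to greyenergy_layer.json")
```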
August 25, 2020
Cyber Security
How to Protect Yourself from Pandemic Cyberattacks Using Free Tools
There is no doubt that the COVID-19 pandemic has changed the daily life of humanity, not only while the pandemic lasts, but forever. Many companies are seeking to make teleworking a permanent arrangement for their employees. This is increasing the time we stay connected, as observed in the connectivity statistics from the months of confinement and beyond, and it opens multiple possibilities for users and employers who need technology for everyday things such as food shopping. However, it is also a great opportunity for cybercriminals to carry out scam-based attacks, which we have discussed in previous articles and which Microsoft has reported as very serious on its security blog. Many articles describe the consequences of these risks materialising, but in many cases we do not know which tools we should set up or how to mitigate them. In this article we will see which free tools we can use and what they protect us from.

Attacks While Surfing the Internet

When using the browser we are exposed to many different threats. A typing mistake or a DNS attack may land you on a fraudulent website, and at home, without a business protection system, such sites are very difficult to detect. To avoid this, it is necessary to set up a system that controls DNS (Domain Name System) spoofing attacks, preventing the URL requested in the browser from being manipulated and leading to a fraudulent site after a typing error or a malicious link. In Firefox, all you have to do is install our EasyDoH extension, recently updated to simplify the configuration of the DNS server the user wants to use. With a simple configuration in the extension, we can see in the following image how it protects us from malicious sites:

The second threat is when a malicious executable on a website runs a process without "touching the disk". This means that, without us downloading or directly executing anything, it performs actions from the browser's memory. This is a very critical threat because, as nothing reaches the disk, protection systems such as antivirus or endpoint response tools cannot detect it. For this we have recently developed an extension that, like the previous one, just needs to be installed for the browser to start controlling this threat. The extension is called AMSIext and is available for Chrome and Firefox. Once installed, it connects the browser to the Windows component called AMSI, which validates programs held in memory before they are executed.

File-Based Attacks

There is no doubt that file-based scams are one of the techniques most widely used by cybercriminals, and they have increased significantly in recent times. Criminals use two mechanisms that, although they seem simple, are very effective in bypassing some of our PC's controls. The first technique we are going to focus on is the change of file extensions. Windows trusts file extensions too much: if the extension is .docx, for example, it opens the file with MS Word regardless of the content. To avoid this risk, we have developed a program that validates that the extension matches the magic numbers (a forensic technique for full file identification). This program, called MEC, only needs to be installed on your computer; automatically, every time the user tries to open a file, the system compares the magic numbers with the extension and, if they do not match, tells the user that the file cannot be opened with the program the extension suggests. The idea behind this check is sketched below.
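As a minimal illustration of that idea (this is not the MEC tool, and the signature table only covers a few formats as an example), the following sketch compares a file's extension with its leading magic-number bytes:

```python
# Illustrative sketch: compare a file's extension with its leading "magic
# number" bytes. Only a handful of signatures are included as an example.

from pathlib import Path

SIGNATURES = {
    ".pdf":  [b"%PDF"],
    ".png":  [b"\x89PNG\r\n\x1a\n"],
    ".zip":  [b"PK\x03\x04"],
    ".docx": [b"PK\x03\x04"],   # modern Office files are ZIP containers
    ".exe":  [b"MZ"],
}

def extension_matches_content(path: str) -> bool:
    ext = Path(path).suffix.lower()
    expected = SIGNATURES.get(ext)
    if expected is None:
        return True            # unknown extension: nothing to compare against
    with open(path, "rb") as fh:
        header = fh.read(16)
    return any(header.startswith(sig) for sig in expected)

print(extension_matches_content("report.docx"))   # hypothetical file
```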
The second file-based threat that has increased exponentially in recent months is malware hidden within macros and JavaScript in MS Word, MS Excel and PDF documents. In this case, if the user opens the files and grants execution permissions, they are actually opening the door for cybercriminals to execute actions on, or connect to, the machine. To combat this type of threat we have developed DIARIO, a free tool that lets users check all documents they receive by email or download from the Internet before opening them, and thus validate whether or not they contain malicious macros. To protect users' privacy, DIARIO's artificial intelligence only uses the macro for analysis, protecting the sensitive information the file may contain. The tool can be used directly on the website, or you can download the installer for your machine's operating system. The suspicious file is uploaded and the tool then reports whether it contains any executable processes and whether they are malicious, as can be seen in the image below:

As we can see, we have several free and simple tools to significantly increase our security levels, closing the door to the most common attacks currently being executed. Nowadays we are all searching for information, and cybercriminals take advantage of the circumstances to make a profit and attack us. This is why it is necessary to be more protected than ever.
July 2, 2020
Cyber Security
TypoSquatting: Using Your Brain to Trick You
Our brain is amazing and has evolved over thousands of years to make our lives simpler and to minimise processing time on things it considers unnecessary. One of them is reading each letter in a written text; this can be checked in several ways, for example with scrambled or altered words that we still read correctly.

Why Does That Happen?

This is due to the way we learn to read: initially we only see images, and it is not until after we understand them that we begin to associate sounds with words. Once we have been reading the same words for a long time, our brain places words where they are not, immediately replaces numbers with their corresponding letters, or can read text written backwards, among many other things. Without a doubt, this brain capacity is incredibly powerful, but it also poses some cybersecurity risks because of how easily it can be used to deceive us. For example, if you get a message saying "www.gooogle.com" you may not notice that "gooogle" has three "o"s instead of the two in the real website.

What TypoSquatting Is

For many years now, criminals have realised that it is possible to use this capacity against us. Phishing campaigns use these small text alterations to trick users, and they are very effective when associated with feelings of fear or financial distress. This type of threat has been called TypoSquatting, and due to the current health crisis caused by COVID-19 it is being used more and more. One of the institutions most targeted by these hoaxes is the World Health Organization, which had to publish a cybersecurity communication intended to mitigate the damage they cause.

One of the thousands of examples can be found in one of the existing pandemic tracking systems, coronatracker.com, which is used as the basis for different typosquatting-style mutations, as we can see below:

To summarise the analysis, we will take only the second domain detected: coronatracker.info. This domain uses the technique of changing the top-level domain (from .com to .info) so that the victim, focusing on the site name, does not notice the detail. In the following example, an SMS tries to trick the user by using the domain of a bank, changing the top-level domain from .com to .one.

When analysing coronatracker.info with our TheTHE tool, it can be seen that this TypoSquatting hoax hides a dedicated phishing site and that the domain was created during the first week of the pandemic, like thousands of others that have emerged. Using the IP, we see in the first image that it has already been reported in AbuseIPDB as a suspicious IP; in the second, we see how the analysis with Maltiverse flags it as malicious. Using the domain, it can be seen that it has already been reported in VirusTotal and resolves to 9 different IP addresses.

As you can see, criminals do not miss any opportunity to spread malware. This storm of events triggered by the pandemic is the perfect moment to use every mechanism at their disposal to access personal and financial data, or simply to access machines and reach more victims. These techniques are applied not only to domains, but also to mobile applications, software development packages, emails, instant messaging, SMS and any other means that can make victims click on the link. The sketch below gives an idea of how such look-alike domains can be generated, and therefore anticipated.
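As a purely illustrative exercise (this is a toy generator, not TheTHE or any of the analysis tools mentioned above), the following sketch produces a few typosquatting-style mutations of a domain, by repeating letters and by swapping the top-level domain, so defenders can monitor or pre-register them:

```python
# Illustrative sketch: generate simple typosquatting candidates for a domain
# (letter repetition and TLD swap). The TLD list is an assumption.

def mutations(domain: str, tlds=("info", "one", "net", "co")):
    name, _, tld = domain.rpartition(".")
    candidates = set()
    # Repeat each letter once: google.com -> ggoogle.com, gooogle.com, ...
    for i in range(len(name)):
        candidates.add(f"{name[:i]}{name[i]}{name[i:]}.{tld}")
    # Swap the top-level domain: coronatracker.com -> coronatracker.info
    for alt in tlds:
        candidates.add(f"{name}.{alt}")
    candidates.discard(domain)
    return sorted(candidates)

for candidate in mutations("coronatracker.com"):
    print(candidate)
```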
May 7, 2020
Cyber Security
Business Continuity Plan: From Paper to Action
Medium and large companies that must comply with industry or national standards and controls have had to develop what is known as a BCP (Business Continuity Plan). Through it, experts in the company's operations or specialised consultants define the course of action to be taken in different scenarios where business continuity is threatened. Many small companies, in turn, have had to implement one in order to do business with companies that are required by law to demand it. This practice emerged after the attacks of September 11th, 2001, when it became clear that many companies did not know how to react if their headquarters became inaccessible. Disaster scenarios were therefore drawn up for a single business area or for the whole business, looking for alternatives to bridge that gap for a period of time. Some of these plans considered earthquakes, tsunamis and access closures due to social unrest, among other events. But how many of them included a pandemic among the potential causes of a business shutdown? Not many companies took it into account. And yet that is the simplest part of the problem: even if some of the measures designed for natural disasters or blocked headquarters were followed, we cannot know exactly when it will be possible to go back to work.

The Technology and Security that a BCP Should Include When Facing a Pandemic

Let's start by explaining what should have been done beforehand to be prepared. It is essential to run a pilot of how our services and employees would respond to teleworking. Why? Because even if we use a VPN that lets the worker appear to be directly connected to the company's network, the services and the network are not necessarily ready to receive requests from that connection. Looking at behaviour on the Internet, validation of exposed services shows growth of more than 40% in the use of RDP, as reported by Shodan on its blog. A simple search finds machines with known vulnerabilities (a sketch of such a search is shown below). In reality, not all companies have the technology required to deploy enough VPN capacity to connect the entire workforce remotely. However, this should have been taken into account in order to avoid exposing vulnerable services, and there are many comparisons and guides on the Internet to help make secure decisions that fit the budget. Secondly, companies must know what they expose to the Internet and what the normal use of those services looks like. With just this basic data it is possible to identify when use from external networks exceeds each service's capacity, or when we are under attack.

So, What's the Next Step?

Once the exposed services are clear, information security measures can be taken. These should be in place from the moment the continuity plan is activated; in other words, by that time they should be fully operational and under review. These measures must be oriented towards the full identification of users: since everyone is working remotely, local identification factors such as the network, the computer's MAC address and its configuration are not available. In most cases only the username and password are checked, and this has proven not to be a mechanism that guarantees identification. Once that control is in place, you must monitor events across all services and have fine-tuned alerts to detect external threats, since at this point all connections will be made from outside the company network.
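As a rough illustration of the exposed-services check mentioned above, here is a sketch using the official Shodan Python library. The API key and the organisation filter are hypothetical placeholders, and the query is only one example (counting hosts that expose RDP on port 3389); it is not a substitute for a proper external attack-surface review.

```python
# Sketch of a basic exposed-services check with the official Shodan library
# (pip install shodan). The API key and the organisation name are hypothetical
# placeholders; the query counts hosts exposing RDP (port 3389).
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # hypothetical placeholder
api = shodan.Shodan(API_KEY)

try:
    results = api.search('port:3389 org:"Example Corp"')
    print(f"Hosts exposing RDP: {results['total']}")
    for match in results["matches"][:5]:
        print(f"  {match['ip_str']}:{match['port']} - {match.get('org', 'unknown')}")
except shodan.APIError as exc:
    print(f"Shodan query failed: {exc}")
```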
Because all connections now originate outside the corporate network, all perimeter security controls must be scaled to the capacity calculated in the continuity plan.

What to Do Next?

The last measure that this continuity plan must cover is the technological tooling that will be used to manage the operations and work of the different groups within the company. This must include training for staff, and to that end it is essential to have strategic allies in the technology world, given the endless number of tools available on the Internet today; not all of them meet the information protection requirements needed to ensure business continuity. One of the main examples is cloud services. In recent years, cloud-based tools have grown exponentially in both options and implementations, but this growth has not always come with sufficient security measures. This is critical, considering that the cloud is close to being the cornerstone of digital transformation and of a well-executed continuity plan, which today must be operating at full capacity.

Conclusions

After the first month of measures at a global level, it has been possible to verify that the business continuity plans of some companies have met their essential objective: keeping employees performing their functions and able to access information. Nevertheless, the growth of services exposed to the Internet and the vulnerabilities detected in them show that information security was not taken into account when these plans were designed. This is evidenced by the monitoring reports produced by our SOC (Security Operations Centre), which have been widely analysed in different media by our ElevenPaths experts and published in a guide: Risk Guide and Recommendations on Cyber Security in times of COVID-19. For this reason, companies must begin to align their plans with the new circumstances and implement controls and mechanisms that allow their employees not only to carry out their tasks, but also to guarantee the security of the information on which, in the near future, the continuity of the company will depend.
April 23, 2020