Remedios Moreno

Cybersecurity Product Manager, Application Security Services at Telefónica Tech

Cyber Security
Microsegmentation: the decisive response against attackers’ lateral movement
Security neither begins nor ends at access points. Threats infiltrate, adapt, and move with alarming ease in increasingly distributed, dynamic, and complex technology environments. The adoption of hybrid and multicloud architectures, along with the rise of microservices and containers, has multiplied the internal attack surface, making control and visibility far more difficult.

In this scenario, one of the most critical risks is the lateral movement of attackers: their ability to advance silently within the network once they have compromised a vulnerable server, endpoint, or application. Instead of stopping at that initial target, they leverage internal connections and excessive permissions to escalate towards sensitive data, critical applications, or high-value business systems. This dynamic has been a determining factor in ransomware campaigns, industrial espionage, and advanced persistent threats.

Lateral movement is an attacker’s silent weapon, and microsegmentation is the line in the sand that stops it.

Microsegmentation has become one of the most effective measures to curb this type of threat. It allows organizations to control traffic between workloads, limit the spread of incidents, and contain their impact without compromising operational continuity.

■ At Telefónica Tech, we see this approach as a fundamental pillar for implementing a Zero Trust architecture, where every communication must be verified before it is allowed.

Thanks to the combination of granular visibility, adaptive control, and immediate response, microsegmentation not only strengthens resilience but also helps organizations trust their digital capabilities, maintain business continuity, and comply with increasingly demanding regulations such as the DORA regulation or the NIS2 directive.

Visibility: the foundation of microsegmentation

The first step in protecting a digital environment is to fully understand it.
Microsegmentation provides complete and continuous visibility into communications between users, applications, processes, servers, and cloud environments. Through a centralized dashboard, it is possible to build a dynamic map of all interactions within the infrastructure, showing which applications communicate with each other, how data flows to databases, and what external connections exist. This precise picture helps uncover hidden dependencies, unnecessary or misconfigured flows, and potential security gaps.

You can’t protect what you can’t see: microsegmentation delivers a full X-ray of internal communications.

This visibility is not only a technical advantage but also a key requirement in today’s regulatory landscape. Having clear, verifiable evidence of how internal communications are managed is essential to demonstrate control and governance to auditors and regulators.

Granular policies to block unauthorized movement

Once the organization has a detailed understanding of its digital ecosystem, microsegmentation enables the definition of highly precise security policies. Instead of relying on static rules based on IP addresses or network perimeters, logical controls are designed and aligned with the function of each application or service. This means that each workload can communicate only with the resources it truly needs to operate.

For example, an application server may access its associated database, but not another database or a different backup system. In this way, routes that an attacker could exploit for lateral movement are effectively closed.

Policies can be adapted quickly and scaled across hybrid and multicloud environments without redesigning the existing architecture. This makes adoption progressive and minimizes impact on critical operations.

With microsegmentation every workload speaks only to what it truly needs, shutting down the paths attackers rely on.
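The least-privilege model described above (an application server may reach its own database, but not another database or the backup system) can be sketched as a default-deny allow-list. This is a minimal illustration only: the workload names, ports, and policy structure are hypothetical, not the API of any real microsegmentation product.

```python
# Minimal sketch of default-deny flow evaluation, the core idea behind
# microsegmentation policies. All names and ports are illustrative.

ALLOWED_FLOWS = {
    ("app-server", "orders-db", 5432),    # app reaches its own database
    ("app-server", "payments-api", 443),  # and the payment service it needs
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Permit a flow only if it is explicitly allow-listed (default deny)."""
    return (src, dst, port) in ALLOWED_FLOWS

# The app server can reach its associated database...
assert is_allowed("app-server", "orders-db", 5432)
# ...but a lateral move towards the backup system is denied by default.
assert not is_allowed("app-server", "backup-server", 22)
```

Anything not explicitly allow-listed is denied, which is exactly how such policies close the routes an attacker would use for lateral movement.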
Immediate response and regulatory compliance

Microsegmentation not only prevents incidents but also improves detection and response capabilities. By maintaining control over every communication flow, any unauthorized access attempt can be identified and blocked in real time. This drastically reduces containment time in the event of an incident and prevents an attack from escalating into a major crisis.

In addition, the ability to generate clear, exportable reports facilitates compliance with regulations such as DORA or NIS2, which require organizations to demonstrate operational resilience, incident traceability, and the application of least privilege principles. With microsegmentation, security teams can provide verifiable evidence that risks are being effectively managed.

By controlling every communication flow, organizations can block intrusions on the spot and prove resilience to regulators.

Use cases for microsegmentation

The value of microsegmentation becomes tangible in multiple business scenarios. In the financial sector, for example, banks and insurers rely on core systems that require maximum protection. With microsegmentation, critical applications such as payment engines can be isolated from less sensitive environments, preventing an intrusion in a peripheral system from compromising the business core. Moreover, this granular control helps meet DORA requirements, which demand evidence of operational resilience and risk governance.

In the field of digital healthcare, hospitals operate connected medical devices and electronic health record applications, often supported by legacy systems that are difficult to patch. Through microsegmentation, these devices can be isolated and restricted to communicate only with strictly authorized servers, preventing them from becoming entry points for attackers and ensuring the availability of critical medical services.
In the manufacturing industry, where industrial control systems (OT) coexist with IT networks, microsegmentation makes it possible to clearly separate the two domains. Thus, an attack on the corporate network cannot spread to the production plant, ensuring the continuity of industrial processes even in the event of a cyber incident.

In multicloud environments, microsegmentation provides a unified layer of control over workloads distributed across hyperscalers or on-premise datacenters. This prevents security inconsistencies between platforms and provides visibility into hidden dependencies between distributed services, optimizing both protection and architecture.

In ransomware containment scenarios, microsegmentation is critical. If a server is compromised, the attack is isolated because it cannot spread to other systems or to backups. The impact is drastically reduced, and recovery is much faster.

From banking to healthcare and manufacturing, microsegmentation keeps critical environments safe and stops a breach from snowballing into a crisis.

Microsegmentation as a strategic investment

Beyond its role as a security control, microsegmentation is consolidating as a strategic investment for organizations aiming to strengthen their digital capabilities. It reinforces business continuity, enables agile adaptation to regulatory requirements, and fosters trust in an increasingly distributed and complex environment.

Microsegmentation isn’t just a technical safeguard: it’s a strategic investment aligned with Zero Trust and business continuity.

This approach naturally aligns with Zero Trust principles, by establishing granular controls that ensure no communication or traffic flow is trusted without prior verification. Its implementation makes it possible to stop lateral movement within the network, block attacks before they escalate, and act proactively against increasingly sophisticated threats such as ransomware.
■ Microsegmentation is not a passing trend but a structural component of infrastructure and application cyber resilience. At Telefónica Tech, we accompany companies on this journey with solutions that turn security into a sustainable competitive advantage and reinforce digital trust in the long term.

Learn more →
September 22, 2025
Cyber Security
Protection and resilience of applications and infrastructure against cyber threats
Digitalisation has transformed every industry. In the banking and insurance sector, applications enable instant transactions, mobile operations and 24/7 services, but they require robust security to prevent fraud and API attacks. In healthcare, electronic health records and telemedicine systems offer faster patient care but require the protection of highly sensitive data and compliance with strict privacy regulations. In retail and e-commerce, applications have become the face of the business, handling massive traffic peaks that must be managed without downtime, while ensuring the security of customers' payment data. In the public sector, digital transformation is bringing services closer to citizens—but it also exposes critical infrastructure to constant threats.

Across all these sectors, applications have become the core of operations. They are a driver of innovation, competitiveness and growth, but also a prime target for attackers seeking to compromise the supply chain, exploit weak configurations or breach underlying infrastructure. Securing them is a strategic necessity.

Securing applications and infrastructure is not optional—it's a strategic necessity.

Visibility: the starting point

One of the biggest challenges in modern security is the lack of real visibility. Many companies and organisations don’t even know how many applications are exposed to the internet, which open-source dependencies contain vulnerabilities, or what network configurations and permissions are active in their cloud environments. This lack of awareness creates blind spots—“shadow IT”—where unmanaged services become open doors for attackers.

Visibility is not just about creating an inventory; it also means being able to link assets to their criticality, understand dependencies and assess risk in real time. Without a complete map of the environment, any security strategy ends up being reactive—always one step behind the threat.
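As a rough illustration of this mapping idea, the sketch below builds a communication graph from observed flows and flags any asset that communicates but is missing from the inventory, the "shadow IT" blind spot described above. All asset names are hypothetical and the data is hard-coded purely for demonstration.

```python
# Sketch: derive a communication map from observed flows and flag
# assets absent from the official inventory. Names are illustrative.
from collections import defaultdict

INVENTORY = {"web-frontend", "orders-db", "auth-service"}

observed_flows = [
    ("web-frontend", "orders-db"),
    ("web-frontend", "auth-service"),
    ("legacy-ftp", "orders-db"),  # talks to production but is uninventoried
]

def build_map(flows):
    """Adjacency map: which asset talks to which."""
    graph = defaultdict(set)
    for src, dst in flows:
        graph[src].add(dst)
    return graph

def shadow_assets(flows, inventory):
    """Assets seen on the wire but missing from the inventory."""
    seen = {node for flow in flows for node in flow}
    return seen - inventory

graph = build_map(observed_flows)
assert "orders-db" in graph["web-frontend"]
assert shadow_assets(observed_flows, INVENTORY) == {"legacy-ftp"}
```

In practice the flow data would come from network telemetry rather than a hard-coded list, but the principle is the same: the map is only useful if it is compared against what the organisation believes it is running.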
You can’t protect what you can’t see: visibility is the starting point.

Continuous scanning: beyond static snapshots

For years, security relied on one-off audits or occasional code reviews. But in today’s dynamic environments, where changes occur hourly, that’s no longer enough. Applications constantly integrate new third-party dependencies, cloud configurations change daily, and new vulnerabilities (CVEs) emerge at a relentless pace. An environment that was secure yesterday could be exploitable tomorrow.

This is why the trend today is toward continuous scanning. Static and dynamic analysis tools (SAST/DAST), dependency vulnerability scanners, and infrastructure-as-code (IaC) reviews are being integrated directly into CI/CD pipelines. This allows issues to be detected before deployment—and ensures continued monitoring of what’s already in production. Security stops being a static snapshot and becomes a continuous surveillance flow.

What was secure yesterday may be exploitable tomorrow.

Hardening: strengthen from the ground up

Hardening is the art of reducing the attack surface to a minimum. It’s not about installing more tools, but about properly configuring what already exists. At the application level, this means applying the principle of least privilege, protecting secrets with dedicated managers, encrypting data both in transit and at rest, and disabling any function or port that is not strictly necessary.

At the infrastructure level, hardening involves securing operating systems, containers and network services. It means strictly configuring access policies in Kubernetes, segmenting production and development environments, and ensuring that deployed software meets recognised benchmarks such as CIS.

Hardening isn’t about installing more—it’s about configuring better.

The main challenge here is cultural: in the race to deliver faster, many organisations overlook basic security, leaving gaps that become much more costly to fix later.
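The kind of automated hardening check that fits into a CI/CD pipeline can be sketched as a simple comparison of a service's configuration against a minimal baseline. The baseline keys, values, and finding messages below are invented for illustration; a real check would follow a recognised benchmark such as CIS rather than this toy ruleset.

```python
# Sketch of an automated hardening check: compare a service configuration
# against a least-privilege baseline. All keys and rules are illustrative.

BASELINE_ALLOWED_PORTS = {443}  # expose only what is strictly needed

def hardening_findings(config: dict) -> list:
    """Return a list of human-readable findings; empty means compliant."""
    findings = []
    if config.get("debug_enabled", False):
        findings.append("debug endpoints enabled")
    if not config.get("tls_required", True):
        findings.append("TLS not enforced")
    extra = set(config.get("open_ports", [])) - BASELINE_ALLOWED_PORTS
    if extra:
        findings.append(f"unnecessary open ports: {sorted(extra)}")
    return findings

# A production config that drifted from the baseline is flagged...
drifted = {"debug_enabled": True, "tls_required": True, "open_ports": [443, 8080]}
assert hardening_findings(drifted) == [
    "debug endpoints enabled",
    "unnecessary open ports: [8080]",
]
# ...while a compliant one produces no findings.
assert hardening_findings({"open_ports": [443]}) == []
```

Run on every deployment, a check like this turns hardening from a one-off exercise into part of the continuous surveillance flow described above.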
Runtime security: defending the unexpected

Even with good hardening practices and continuous scanning in place, there will always be uncertainty about what happens at runtime. That’s where runtime security comes in: detecting and stopping malicious behaviour in real time, before the impact becomes critical. Runtime security spans multiple technologies:

- A WAAP (Web Application & API Protection) can block real-time attacks on applications and APIs, mitigating injection attempts, bot abuse or unauthorised access.
- A CNAPP (Cloud Native Application Protection Platform) combines CSPM, workload protection and compliance capabilities to provide continuous visibility and defence for cloud-native applications.
- Runtime vulnerability management helps identify which vulnerabilities are actually exploitable in a specific environment and prioritise them based on real-world criticality.
- Microsegmentation enables granular control over network traffic, preventing attackers from moving laterally between systems.

The challenge is not just having these technologies, but integrating them in a way that generates useful and actionable alerts. Security teams can’t handle thousands of false positives—they need contextual intelligence to distinguish noise from real danger.

The key isn’t generating more alerts—it’s generating more contextual intelligence.

Regulatory compliance: from checklist to continuous practice

Regulatory pressure is not uniform: it varies by sector and the type of data each organisation handles. In the financial sector, regulations such as DORA in Europe require digital resilience, incident recovery capabilities and strict third-party governance. Standards like PCI-DSS are essential for protecting cardholder data in banking and retail. In healthcare, regulations like HIPAA in the US or GDPR in Europe focus on the confidentiality and traceability of medical data.
The public sector in Spain and the EU must comply with the National Security Framework (ENS) or directives such as NIS2, aimed at securing essential services and critical infrastructure.

Compliance is no longer a checklist—it’s a continuous practice.

The challenge is not just to pass audits by preparing paperwork and showing reports, but to make compliance a continuous operational practice. This means automating evidence collection, embedding security controls into development and operational processes, and generating real-time reports for auditors and business stakeholders. An organisation managing thousands of financial transactions can’t rely on quarterly reviews—it needs live security that can demonstrate compliance with applicable regulations at any time. Only then can compliance become a trust enabler for customers and partners, rather than a burden.

DevSecOps culture: security as a shared responsibility

No technical solution will succeed without a cultural shift. In many organisations, security is still handled by an isolated team that acts as an auditor at the end of the development cycle. In a world of continuous deployments, this is unworkable—it becomes a bottleneck and stifles innovation.

Security is a shared responsibility—not the job of an isolated team.

The DevSecOps approach integrates security from the start as a natural part of the software development lifecycle. Developers should have simple tools to identify flaws in their own code, SRE and DevOps teams should have visibility into the infrastructure, and security analysts should act as collaborators—not gatekeepers. The key is to stop seeing security as a blocker and start seeing it as an enabler: the earlier a flaw is detected and fixed, the lower the cost and risk.

Conclusion

Securing modern applications and infrastructure is no longer about perimeters—it’s about end-to-end resilience.
Organisations that want to stay protected must invest in full visibility, continuous scanning, hardening at both application and infrastructure level, real-time protection through advanced technologies, built-in compliance, and a mature DevSecOps culture.

In a landscape where cyberattacks are inevitable, the difference between a vulnerable and a resilient organisation lies not in whether it will be attacked, but in its ability to detect, contain and respond in time.

■ Want to learn more about how to protect not only your applications but also the infrastructure that supports them?

More information →
September 9, 2025