We are a mid-sized startup, and our code base is a little over a million lines of code. If we include all the packages we import into our code base, it is tens of millions of lines. If we include all the tools we use in our development process, it is a few hundred million lines. If we include the cloud infrastructure where we deploy the code, it is a few billion lines. If we include all the third-party software services we use, it is tens of billions of lines of code.
Any software vulnerability in these billions of lines of code affects the security of our own systems, and especially our data. We will be held liable for any breach of our systems or data. This is a bit alarming, because assuming these billions of lines of code are vulnerability-free is the same as assuming they are all bug-free. Every software engineer knows that every thousand lines of code contains a certain number of bugs, and any one of them can become an exploitable vulnerability. Often attackers don't even need to exploit a vulnerability; a misconfiguration will do. Our current approach to cybersecurity relies on the entire software ecosystem being vulnerability-free and misconfiguration-free. A better approach might be to control who can exploit a vulnerability, instead of assuming we can achieve vulnerability-free software.
A good analogy: the security of the real world does not depend on unbreakable locks or doors. It depends on the fact that we can identify thieves and punish them. Doors and locks are there to prevent casual intrusion[1]. Without the ability to identify and punish thieves, we would have no law and order in the real world. In a similar vein, unless we can replicate these measures in the realm of cybersecurity, our overall security will remain elusive.
A good example of where this has worked is Apple's App Store ecosystem. It is a walled garden where every app must be signed by its developer and notarized by Apple. If someone tries to get intentionally malicious software into the App Store, Apple can detect it and take action against the publisher. To be clear, Apple has done a number of other things to make its OS secure by design, but one of the biggest reasons for its security is signed binaries.
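To make the signed-binary idea concrete, here is a minimal sketch in Go of the general pattern: a publisher signs a binary with a private key, and the device refuses to run anything whose signature does not verify against a trusted public key. This is only an illustration of the concept, not Apple's actual signing or notarization flow; the file names and key handling are hypothetical.

```go
// verify_binary.go: a minimal sketch of publisher-signed binaries.
// This illustrates the general idea only; it is not Apple's actual
// notarization flow, and file names and key handling are hypothetical.
package main

import (
	"crypto/ed25519"
	"fmt"
	"log"
	"os"
)

// verifyBinary checks that a binary was signed by a publisher whose public
// key we already trust. If the signature does not match, the binary is
// refused before it ever runs.
func verifyBinary(pubKey ed25519.PublicKey, binaryPath, sigPath string) error {
	binary, err := os.ReadFile(binaryPath)
	if err != nil {
		return fmt.Errorf("read binary: %w", err)
	}
	sig, err := os.ReadFile(sigPath)
	if err != nil {
		return fmt.Errorf("read signature: %w", err)
	}
	if !ed25519.Verify(pubKey, binary, sig) {
		return fmt.Errorf("signature check failed: refusing to run %s", binaryPath)
	}
	return nil
}

func main() {
	// In a real ecosystem the publisher's key would come from a store-managed
	// registry; here we generate one so the example is self-contained.
	pub, priv, _ := ed25519.GenerateKey(nil)

	// Publisher side: sign the binary and ship the signature alongside it.
	binary := []byte("pretend this is an executable")
	os.WriteFile("app.bin", binary, 0o644)
	os.WriteFile("app.sig", ed25519.Sign(priv, binary), 0o644)

	// Device side: verify before executing, so the binary is attributable
	// to the publisher's key.
	if err := verifyBinary(pub, "app.bin", "app.sig"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("signature OK: binary is attributable to the publisher's key")
}
```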
To proactively identify and thwart malicious actors within our software ecosystem, we must establish a comprehensive identity barrier around the entire ecosystem. This barrier lets us identify users before they can access the software and exploit it, and it allows us to attribute every action to the responsible individual. Unknown or unidentifiable users must be denied any access to our software. True cybersecurity, as we envision it, remains an aspiration until this level of security is achieved. Fortunately, this transition is already in progress. We call this evolution of cybersecurity "Security 2.0".
The classical cybersecurity stack of Security 1.0 is what we are all familiar with. It focuses mainly on detecting vulnerabilities and malware: anti-malware tools, vulnerability scanners, threat detection tools, posture management tools, and various traffic deep-inspection tools such as firewalls, intrusion detection systems, and secure web gateways. These have served us well over the years, but it is time to move beyond this approach for two reasons:
- The first reason is the sheer size and complexity of our interconnected software ecosystem. Most of it is outside our own administrative control. Attackers can easily move across this complex ecosystem, and across organizational boundaries, by exploiting its weak links. Some examples of such attacks:
  - The SolarWinds attack famously affected 18,000 organizations. The attackers infiltrated a huge number of government agencies, critical infrastructure operators, and financial institutions using just one compromised piece of software.
  - When North Korean hackers compromised 3CX software to conduct a supply chain attack that affected 600,000 enterprise users, they performed a two-level supply chain attack. First they compromised the X_TRADER application and used it to infiltrate the 3CX network. From there they compromised the 3CX build environment and inserted malicious code into the 3CX software.
  - When the Lapsus$ gang breached Okta, they first phished an employee of a contractor used by Okta. From there they gained super-admin permissions to Okta's internal tools and were able to breach Okta customer identities belonging to 366 different organizations.
  - In the recent MGM attack, the attackers compromised a user through social engineering. This user had super-admin access to the Okta instance. From there the attackers became admins on the Microsoft Azure AD instance, gained access to the VMware vCenter admin console, obtained on-prem AD domain admin access, and launched ransomware attacks. Notice the ease with which the attackers navigated the complex software ecosystem.
- The second reason is the nature of modern threats themselves. According to the latest CrowdStrike global threat report, 62% of attacks were malware-free. These are called "living off the land" attacks: the attacker does not exploit any vulnerability in the system, but instead uses built-in tools and legitimate credentials. Similarly, the Verizon data breach report has said for many years that more than 80% of data breaches involve compromised credentials. Our classical Security 1.0 stack, which focuses heavily on known vulnerabilities, is ill suited to preventing these kinds of attacks.
When we talk to security practitioners about this transition, the response we get is not surprise or denial. Most of them say this is the direction they are already going in. The goal of this blog post is not to present this as a new idea; it is to formally acknowledge the transition from the classical Security 1.0 approach to the Security 2.0 approach and to provide a framework for evolving our security stack.
Ongoing transition
We will briefly examine some concrete examples of this ongoing transition. In each of them, we have seen an order-of-magnitude improvement in security over the state of the art that existed before.
- The Apple App Store ecosystem is a great example of how attribution improves cybersecurity. Apple requires that all applications that run on iPhones and other iDevices be signed by their developers. This has made the Apple ecosystem at least 10x more secure than the PC ecosystem.
- The Xbox and PlayStation ecosystems. The Xbox ecosystem is an interesting example because it is built on the same Windows OS we all consider very insecure. But by requiring that software running on the Xbox be signed by its publisher, Microsoft has made it almost malware-free.
- Signed container images and SBOMs. Signed images ensure that only trusted images run on your infrastructure; similarly, signed SBOMs with package integrity protection improve supply chain security.
- Service mesh and workload identity federation efforts protect workload-to-workload communication from attackers and prevent them from moving laterally. They do this by assigning strong workload identities and controlling who can talk to whom (a minimal sketch follows this list).
- Google BeyondCorp and the FIDO2 standard: Google famously rolled out a "zero trust" solution for its employees called BeyondCorp. Google has also rolled out FIDO2-compliant hardware keys to its employees and says that none of its employees have been phished since the rollout.
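As referenced in the service mesh item above, here is a minimal sketch in Go of identity-based workload-to-workload access control using mutual TLS: the server only accepts callers that present a certificate from a trusted CA and whose SPIFFE-style identity is on an allow list. The CA file path and the identity string are hypothetical placeholders; real deployments would typically get these from a service mesh or an identity provider such as SPIRE.

```go
// workload_mtls.go: a minimal sketch of identity-based workload-to-workload
// access control via mutual TLS. "ca.pem" and the SPIFFE-style identity are
// hypothetical placeholders for illustration only.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

// serverTLSConfig returns a TLS config that (a) requires every client to
// present a certificate signed by our trusted CA and (b) checks the client's
// workload identity (URI SAN) before granting access.
func serverTLSConfig(caPath, allowedID string) (*tls.Config, error) {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return nil, fmt.Errorf("read CA bundle: %w", err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no usable CA certificates in %s", caPath)
	}
	return &tls.Config{
		ClientCAs:  pool,
		ClientAuth: tls.RequireAndVerifyClientCert, // no certificate, no connection
		VerifyPeerCertificate: func(_ [][]byte, chains [][]*x509.Certificate) error {
			// chains is non-empty because the chain was already validated;
			// here we additionally pin the caller's workload identity.
			leaf := chains[0][0]
			for _, uri := range leaf.URIs {
				if uri.String() == allowedID {
					return nil // known, attributable caller
				}
			}
			return fmt.Errorf("caller identity not allowed: %v", leaf.URIs)
		},
	}, nil
}

func main() {
	cfg, err := serverTLSConfig("ca.pem", "spiffe://example.org/payments-service")
	if err != nil {
		log.Fatal(err) // expected if the placeholder ca.pem does not exist
	}
	fmt.Println("require client certs:", cfg.ClientAuth == tls.RequireAndVerifyClientCert)
}
```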
IDWall
We are all familiar with paywalls on the internet. A content or streaming provider puts paid content behind a paywall: you have to pay for a subscription, and the provider needs to know you are a paid subscriber before you can access the content. An IDWall is very similar in concept. With an IDWall, you put your protected software ecosystem behind a barrier where users have to prove an identity before they can access it. If we know their identity before they can exploit a vulnerability in the software ecosystem, we can attribute malicious actions back to them.
The idea is very similar to putting security cameras around our house: we want to know who is entering our property and stealing the packages. As long as we can identify users reliably, meaning an attacker cannot impersonate a trusted user (more on this later), this is a significant deterrent to exploiting vulnerabilities or misconfigurations. The goal of the IDWall is to associate every action with a real-world person: an identity, and a device that belongs to a specific user. It goes without saying that if we cannot identify the user, we should deny all access to the software ecosystem.
The assumption here is that the IDWall is a simpler code base that is easier to defend than the entire software ecosystem. It is patched and updated more frequently, built using memory-safe languages like Golang or Rust, and subjected to much more rigorous pen-testing and bug bounty programs. IDWalls themselves must not be easy to breach or tamper with, and they need to maintain an audit log of all user activity so that we can go back and attribute any malicious activity to the responsible user.
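Here is a minimal sketch of the IDWall idea, written as a Go reverse proxy: unidentified callers are denied before they can reach the protected application, and every decision is written to an audit log for attribution. The identity check is deliberately stubbed out, and the upstream address is hypothetical; a real IDWall would verify a phishing-resistant credential such as a client certificate or a passkey-backed session.

```go
// idwall.go: a minimal sketch of an IDWall as an HTTP reverse proxy.
// Identity verification is stubbed out and the upstream address is a
// hypothetical placeholder; this is an illustration, not a product design.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// identityFromRequest returns the verified identity of the caller, or "" if
// the caller cannot be identified. Here it only looks at a TLS client
// certificate; a real IDWall would support stronger, device-bound credentials.
func identityFromRequest(r *http.Request) string {
	if r.TLS != nil && len(r.TLS.PeerCertificates) > 0 {
		return r.TLS.PeerCertificates[0].Subject.CommonName
	}
	return ""
}

// idwall wraps the protected application: unidentified users are denied
// before they can reach (and potentially exploit) the software behind it,
// and every allowed request is written to an audit log for attribution.
func idwall(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := identityFromRequest(r)
		if id == "" {
			log.Printf("AUDIT deny method=%s path=%s from=%s reason=unidentified",
				r.Method, r.URL.Path, r.RemoteAddr)
			http.Error(w, "identity required", http.StatusUnauthorized)
			return
		}
		log.Printf("AUDIT allow user=%q method=%s path=%s from=%s",
			id, r.Method, r.URL.Path, r.RemoteAddr)
		next.ServeHTTP(w, r)
	})
}

func main() {
	// The protected "software ecosystem" sits behind the wall; here it is a
	// single upstream service at a hypothetical address.
	upstream, err := url.Parse("http://127.0.0.1:8081")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// In practice this listener would use TLS with client certificates so the
	// identity in the audit log is cryptographically bound to a device/user.
	log.Fatal(http.ListenAndServe(":8443", idwall(proxy)))
}
```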
Comparison of the two approaches
Let us compare the traditional Security 1.0 stack with the newer Security 2.0 stack and see how they differ. We will compare them across different areas of the cybersecurity industry, and we will see that each area of the stack has subtle differences.
| Area | Security 1.0 | Security 2.0 |
|---|---|---|
| Primary focus | Threat management | Trust management |
| Primary approach | Try to identify vulnerabilities and threats and minimize the attack surface | Manage who can exploit the vulnerabilities and how much we trust them |
| Endpoint and workload security | File-based malware scanning, known-threat detection | Allow only signed binaries alongside endpoint detection tools; attribute malicious actions to the publisher |
| Network security | Traffic deep inspection and threat detection | Identity-defined security, with identity embedded into the networking or transport layer, aka "zero trust" |
| Service-to-service communication | Compromisable secrets and bearer tokens | Non-compromisable workload identities and proof-of-possession techniques (see the sketch after this table) |
| Blast radius of a compromise | Large blast radius due to static policies and broad privileges | Limited blast radius due to just-in-time access using dynamic trust management |
| Software integrity | Assume the integrity of the software ecosystem and detect malicious activity from software | Verifiable images and verifiable deployments |
| Software supply chain security | Identify known vulnerable packages | Allow only trusted software packages and attribute malicious software back to its author |
| Breach liability | Victim organization is liable for the breach | Malicious actor is held responsible for the breach |
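To illustrate the proof-of-possession row in the table above, here is a minimal Go sketch of how it differs from a bearer token: instead of attaching a reusable secret to the request, the client signs the request with a private key that never leaves it, and the server verifies the signature and freshness. The canonical string format below is made up for illustration; real systems use standards such as DPoP or mutual-TLS-bound tokens rather than this ad-hoc scheme.

```go
// pop_request.go: a minimal sketch contrasting a bearer token with a
// proof-of-possession request signature. The canonical string format is
// invented for illustration; real systems use standards such as DPoP or
// mutual-TLS-bound tokens.
package main

import (
	"crypto/ed25519"
	"fmt"
	"time"
)

// canonical builds the string the client signs. A captured request is of
// little use to an attacker: the timestamp is covered by the signature and
// the private key never leaves the client.
func canonical(method, path string, ts time.Time) []byte {
	return []byte(fmt.Sprintf("%s\n%s\n%d", method, path, ts.Unix()))
}

func main() {
	// Key pair registered for this workload; only the public half is shared.
	pub, priv, _ := ed25519.GenerateKey(nil)

	// Client side: sign the request instead of attaching a reusable secret.
	ts := time.Now()
	sig := ed25519.Sign(priv, canonical("GET", "/payments/42", ts))

	// Server side: verify the signature and reject stale timestamps, so a
	// captured request cannot simply be replayed later.
	fresh := time.Since(ts) < 2*time.Minute
	ok := fresh && ed25519.Verify(pub, canonical("GET", "/payments/42", ts), sig)
	fmt.Println("request accepted:", ok)
}
```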
Conclusion
We have outlined how our security stack is evolving from a Security 1.0 stack to a Security 2.0 stack. This is a highly complex problem, and we have not yet discussed the details of how it all comes together. In this blog series we will outline how to put it all together and how to evolve our cybersecurity stack.
Addendum
Privacy and Digital Private Property Rights
If we are required to provide our identity to access everything on the internet, it raises the question: are we compromising our privacy rights? This is an important consideration. To be clear, we are not advocating putting the entire internet behind an identity wall, and we are not calling for an internet-wide KYC process; that would be a bureaucratic mess. We are advocating the use of identity only for what you would consider digital private property.
One of the basic tests of private property rights is that you have the right to control access to your property. In the online world, we need to honor the equivalent private property rights: the owner of a digital property should be able to control who can access it. In the real world, we distinguish between private and public property, and we expect to have privacy rights when we are in public places. We distinguish between a public place like a mall and a private place like an office, even if both are owned by the same corporation, and we don't mind a security guard checking our identity card before we enter an office building.
We need a similar distinction between digital public places and digital private property. The requirement to provide your identity should apply only to digital private property; people should not be required to provide any identity to access today's public internet.
Do you have thoughts or comments about how to evolve the cybersecurity stack? Please get in touch with us by emailing me at sukhesh at procyon dot ai.
Notes:
[1]: This concept was first articulated by Turing award winning researcher Butler Lampson in his 2006 research paper titled “Practical Principles for Computer Security”.
[2]: Ken Thompson's 1984 Turing Award acceptance speech, "Reflections on Trusting Trust".
Please click here to read the next post in the blog series.
Here are the links to other posts in this series:
- Security 2.0 – This post
- Passkeys and Security 2.0 concepts
- Privileged users in Security 2.0 Stack
- Workload Identity in Security 2.0 Stack
- Code to cloud: verifiable deployments