In the mid-19th century, mortality rates for women who delivered babies at the best maternity hospitals in America and Europe were three times higher than for women who delivered with midwives. The medical community explored several ways to detect and stop the infections and save lives, but to no avail. Finally, a brilliant doctor at the Vienna General Hospital, Dr. Ignaz Semmelweis, discovered and implemented one simple policy – “wash your hands”! In this pre-Louis Pasteur world, the role of bacteria in transmitting disease was unknown, and doctors would go from autopsies to deliveries without washing their hands. This one change in policy brought the hospital’s childbed-mortality rate down by 90%. Today, of course, we know several other benefits of the simple act of washing your hands and have, since Pasteur, developed many other forms of preventative medicine.
While the cyberworld pales in comparison to the life-or-death situation faced by these women, we can certainly learn from our counterparts in the world of healthcare who did not accept infection as an acceptable risk.
Cybersecurity Ventures’ Annual Cybercrime Report predicts that the cost of cybercrime will rise to $6 trillion annually by 2021, up from $3 trillion in 2015, with the average breach costing $3.86 million. Mega breaches, such as the 2017 Equifax breach, cost several hundred million dollars. Meanwhile, organizations are also plagued by a shortage of skilled professionals – predicted to reach approximately 3 million people globally.
So this raises the question – how can we get better? Certainly, doing more of the same and expecting different results isn’t the answer.
When you consider our current end-user computing model, we are no different from the doctors of the mid-19th century. We access the internet, filled with potentially harmful elements, and then use the same endpoints to access sensitive assets and information. This unsanitized access is exactly what attackers exploit: they leverage the end-user’s trust to infiltrate organizations, steal credentials, exfiltrate sensitive assets, and encrypt data, using OS, application, or policy vulnerabilities as pathways for transmission.
Our current policy-based controls and detection-based security have unfortunately not been able to stop the contagion. Deployment complexity and staffing needs also put strong cybersecurity out of reach for many organizations, which introduces supply-chain vulnerabilities into the mix. The impact of this unsanitized and uneven cybersecurity is well documented – damaged brand equity, lost revenue, customer loss, and even business failure.
If the internet itself is contaminated, and remains unsanitized, it stands to reason that breaches will continue to occur, just like infections in an unsanitary hospital environment. What if we could follow a similar approach to Dr. Semmelweis’ model? Could it help solve some of our cybersecurity problems?
The key requirements that any solution needs to focus on include:
- Speed: Attackers are getting smarter, and the business and regulatory impact of breaches more expensive
- Simplicity: Cybersecurity must be simple enough to be within reach of every business and individual
- Scale: Businesses must be able to scale cybersecurity as more processes are digitized, while maximizing the impact of a limited pool of professionals
Isolation-based security is a fairly recent innovation that can help on many of these fronts. Isolation safeguards endpoints and users from threats by transforming all internet content – code, media, scripts, files, etc. – into harmless pixel streams delivered to the endpoint. The web represents one of the most vulnerable threat surfaces in any organization: we allow unknown, unchecked, unsanitary code to run freely on our endpoints, offering attackers a much sought-after opening.
Operating under the principles of zero-trust security, isolation-based security does not try to detect or classify content as good or bad; instead, it assumes all content could be suspect and transforms everything into pixel streams. By removing the (exploited) trust from the endpoint, isolation ensures a sterile computing environment – a cyber clean-room – which means a massive reduction in infections and compromises.
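To make the model concrete, here is a minimal, purely illustrative sketch of the isolation idea. All names here are hypothetical (no real isolation product exposes this API); the point is the architecture: untrusted content is rendered in a disposable remote environment, and only an inert pixel frame ever crosses to the endpoint.

```python
# Conceptual sketch of isolation-based (remote browser) security.
# Hypothetical names throughout -- this is not a real product API.
from dataclasses import dataclass

@dataclass(frozen=True)
class PixelFrame:
    """What the endpoint receives: inert pixels, no executable content."""
    width: int
    height: int
    pixels: bytes  # raw RGB data; cannot carry scripts or macros

def render_remotely(untrusted_html: str, width: int = 4, height: int = 2) -> PixelFrame:
    """Stand-in for a remote isolation container: it 'runs' the untrusted
    content out-of-band and emits only a rasterized frame."""
    # A real deployment would run a full browser in the container;
    # here we just derive deterministic dummy pixels from the input.
    seed = sum(untrusted_html.encode()) % 256
    pixels = bytes((seed + i) % 256 for i in range(width * height * 3))
    return PixelFrame(width, height, pixels)

# Endpoint side: whatever the page contained, the endpoint only ever
# handles a PixelFrame -- the original markup never reaches it in
# executable form.
frame = render_remotely("<script>stealCredentials()</script>")
```

The key design point is that the trust boundary sits at the pixel stream: exploits embedded in the page run (and die) in the disposable container, not on the endpoint.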
Gartner identifies remote browser isolation as a technology that, if adopted correctly, could result in a 70% reduction in attacks that compromise end-user systems. And as more applications become internet-enabled or cloud-aware, that number only goes higher.
Another benefit of the isolation-based security model is that it doesn’t rely on an alert-driven approach to protection, which greatly reduces the load on security teams, making them more efficient, more productive, and more forward-looking – able to get ahead of attacks. That efficiency can help democratize security, making it feasible for large and small companies alike to adopt cybersecurity.
Through deep visibility into web sessions, isolation platforms support several core use cases such as:
- Web-based threats such as drive-by downloads, malvertising, zero-day attacks, etc.
- Document-based threats such as steganography attacks, rootkits, etc.
- Email-based threats such as phishing, spear phishing, etc.
- Ransomware or crypto-mining attacks
- URL-based blocking
- Enforcement of & reporting on acceptable use policies (AUP)
- Visibility & monitoring of web applications
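Several of the use cases above come down to a per-session policy decision. As a purely illustrative sketch (the category names and actions are hypothetical, not any vendor's schema), an isolation platform's decision logic for URL blocking and AUP enforcement might look like this:

```python
# Hypothetical per-session policy table for an isolation platform.
# Category names and actions are illustrative only.
POLICY = {
    "gambling":      "block",    # AUP violation: deny outright
    "uncategorized": "isolate",  # unknown sites: render as pixels only
    "corporate":     "allow",    # trusted internal apps: native access
}

def decide(url_category: str) -> str:
    """Default to isolation: under zero trust, content that cannot be
    classified is assumed suspect rather than judged good or bad."""
    return POLICY.get(url_category, "isolate")
```

Note the default: anything the policy table does not explicitly trust or block falls through to isolation, which is what distinguishes this model from classic allow/deny filtering.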
The internet took off because of an open-trust model, but that openness has also been its key weakness from a security standpoint. As we move toward a new generation of the internet-of-everything, cybersecurity could be a key enabler or a key obstacle – CSIS estimates we lose nearly 1% of global GDP annually to cybercrime. The only way forward is to bake security into the digital transformation strategy itself – moving from a reactive-security model to security by design.
It’s time we started washing our hands in the cyberworld as well.