You think you have it covered. You have a patching plan. You may even have a patch management solution. The problem is that these only cover part of the picture – the part you can see. In reality, it’s much bigger than you may realize.
Let me explain why.
Right now, cybersecurity threats are coming thick and fast, with every organization constantly under attack. Last year was the worst on record for data breaches and this year is shaping up to be even worse. Traditional front-line defenses are simply no longer adequate. For example, estimates suggest only 43% of attacks are blocked by conventional antivirus tools. That’s a sobering statistic. The bottom line? No one can afford to bury their heads in the sand and ignore the situation.
But let’s get back to the topic of patch gaps. What are they and why should we be concerned?
Let’s start by acknowledging a universal truth: all software has vulnerabilities. These are unintentional flaws that can be exploited by hackers probing for security weaknesses. When these flaws are discovered, they are listed and published as CVEs (Common Vulnerabilities and Exposures). There are thousands of them every year. 2018 holds the record so far, with 16,556 CVEs. As soon as a vulnerability becomes known to the software owner or vendor, the race is on to create a patch to fix the issue.
Not surprisingly, the majority of data breaches each year involve unpatched systems. Most organizations remain exposed to these attacks because of what cybersecurity professionals call “patch gaps”.
A patch gap is generally understood to be the period from when a vulnerability is discovered to the point when a patch is deployed to fix it. However, there’s plenty of evidence that the window of attack is open for a considerable time before the point of formal discovery. This is the ideal scenario for hackers: if they find the weakness first, they can work under the radar and exploit it while no one is any the wiser.
The wider patch gap – the true duration of exposure – is from when a vulnerability is born to the point when a patch is deployed to fix it. And it’s a lot longer than you might think.
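To make the two definitions concrete, here’s a minimal sketch of how the narrow and wider gaps compare for a single vulnerability. The milestone dates are entirely hypothetical, chosen only for illustration:

```python
from datetime import date

# Hypothetical milestone dates for one vulnerability (illustrative only)
introduced = date(2020, 1, 1)   # flaw ships in a software release
disclosed = date(2020, 8, 15)   # vulnerability is discovered and a CVE is published
deployed = date(2021, 2, 10)    # organization finally applies the patch

# The "narrow" patch gap: discovery to deployment
narrow_gap = (deployed - disclosed).days

# The wider patch gap: the true duration of exposure, birth to deployment
wider_gap = (deployed - introduced).days

print(narrow_gap, wider_gap)  # the wider gap is more than double the narrow one
```

The point of the sketch is simply that measuring from disclosure rather than from introduction can dramatically understate how long systems were actually exposed.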
The diagram below shows what this wider patch gap timeline looks like.
As you can see, systems can remain unprotected for up to 442 days. That’s a shocking statistic. Let’s take a closer look at how the timeline is broken down.
First comes the pre-discovery phase – the period when we genuinely don’t know what we don’t know.
This is when any specific vulnerability is a total mystery to those responsible for the software or its users. But that doesn’t mean that it hasn’t been found by a hacker (or hackers).
It’s quite possible flaws are being exploited in the wild during this phase – and for an extended time. But is 270 days – or nine months – realistic? Absolutely. Zero-day vulnerabilities affecting Chrome and Internet Explorer were recently in the news; they had been exploited in the wild over the past year. And who can forget the famous Stuxnet malware? It was under development in 2005 and actively in use by 2007, but it wasn’t discovered until 2010.
“Zero-day” describes the point in time when a flaw is discovered but no defense, workaround or fix yet exists. Zero-day vulnerabilities are regularly discovered by specialist cybersecurity researchers and organizations, and there is a whole raft of them.
This is the starting gun for the software vendor or owner to start work on fixing the issue. But despite the frenetic effort involved, it can still take 70 days to release a patch. That’s over two months during which a vulnerability is widely known and users are exposed.
Is 70 days realistic? Yes, it is. This is simply down to how many flaws and patches are being worked on. Often they are bundled and released together as a batch, such as Microsoft’s Patch Tuesday releases. This makes them easier to manage but can add an extra delay.
Once a patch has been made available, the average delay in getting it applied is approximately 120 days (four months). Given that a large proportion of data breaches are down to poor patch management, this period is vital.
So why the delay?
Some organizations are simply struggling with the sheer volume of patches they’re trying to manage. Others take longer to test and roll out new software builds to avoid issues or performance impacts. That’s not unreasonable, given the complexity and interdependencies inherent in today’s software environments.
At this point, let me throw in a personal gripe. Patch naming conventions don’t explain what the patch is for or its severity and threat level. Organizations want to prioritize critical vulnerabilities to get them applied fast, but the industry is not helping them much.
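As an illustration of the kind of triage that better patch metadata would enable, here’s a minimal sketch of ordering a patch backlog by severity. The patch IDs and CVSS scores are invented for the example:

```python
# Hypothetical patch backlog; IDs and CVSS severity scores are made up
patches = [
    {"id": "PATCH-101", "cvss": 4.3},   # medium severity
    {"id": "PATCH-102", "cvss": 9.8},   # critical severity
    {"id": "PATCH-103", "cvss": 7.5},   # high severity
]

# Deploy fixes for the most severe vulnerabilities first
priority_order = sorted(patches, key=lambda p: p["cvss"], reverse=True)

print([p["id"] for p in priority_order])  # critical patch comes first
```

Simple as it is, this kind of ranking is only possible when severity information is attached to the patch in the first place – which is exactly what today’s naming conventions fail to provide.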
Wow! 442 days from the birth of a vulnerability to the point when it’s finally patched. That’s over fourteen months of potential exposure and risk. What can be done about it?
A radically different approach is needed for protection throughout this entire patch gap period. And that’s our entire raison d’être here at Polyverse. Our ground-breaking cybersecurity portfolio, including Polymorphing for Linux and Polyscripting, stops zero-day memory attacks in their tracks and neutralizes code injection exploits. That’s virtually the entire hacker’s toolkit dealt with right there.
The result: you are safeguarded even when you’re unaware you need to be, and the reality of the total patch gap is no longer an issue.