No one wakes up in the morning and thinks: “today would be a great day to patch some code”. Certainly not if you work in IT operations.
Let’s be honest: software patching is a pain in the neck for everyone. It’s tedious, complicated, time-consuming, and expensive. But even though it’s a chore, we all know that effective patch management is essential if organizations want to avoid being hacked, compromised, or breached.
Right now, it feels like we’re being hit by a tsunami of patches, with a high proportion of them addressing serious security bugs. This week we woke up to news of a severe buffer overflow vulnerability in the GRUB2 bootloader (CVE-2020-10713), also known as ‘BootHole’. Last week it was the Exim Mail Transfer Agent (CVE-2019-10149), which Russian cyber actors have been exploiting since at least August 2019.
If you work in IT ops, you’re probably feeling a little nervous at this point.
Why? Because when it comes to patching, the buck normally stops with you. If anything goes wrong, and especially if there is any downtime for production systems, you know where the finger is going to get pointed first, right?
Trust me, I understand this situation well. I’ve been that guy!
The CISO and the wider security team usually decide which security patches should be applied. But after that, it’s all down to the IT and business ops teams to roll them out. It’s their job to work out where the patches should be applied. Then they have to thoroughly test all the software stacks and solution environments for compatibility and interoperability issues. After that, the patches must be smoothly and proficiently deployed across the entire infrastructure.
There’s no doubt the sheer volume of patches is making it difficult to get fixes applied as quickly as most of us would consider ideal. To help get a clearer picture of how large enterprises are dealing with the situation, we recently interviewed ten senior IT professionals and executives, asking them to share their insights, opinions, and experience.
Here are the top things we learned:
Complexity is forcing an uncomfortable trade-off between risk, resources, and cost, putting the ideal schedule of monthly or weekly patching out of reach for many. A routine patching cadence of 90 or even 180 days appears to be more common.
Even though vendors regularly release ad hoc emergency patches, large organizations are only carrying out panic (or non-routine) patching between two and four times a year.
Large enterprises can easily spend between $6 million and $9 million on patching-related activities for their core IT infrastructure and systems every year. This is not surprising, given that patching sometimes involves a small army of administrators, technicians, security specialists, business stakeholders, and outsourced experts.
It turns out that patching is a headache for most large organizations. That’s because very few of them have the luxury of starting with a clean slate. Their IT infrastructures have evolved over time, often through mergers and acquisitions. Today, that leaves them with hundreds of application stacks to manage, spanning a mix of legacy, modernized, and more innovative systems.
What if there were an extra layer of cybersecurity protection that could take the stress out of all of these scenarios?
If your core infrastructure is largely on Linux, then you’re in luck.
Polymorphing for Linux has you covered even if your organization’s patching schedule is less than ideal. It also safeguards you from zero-day vulnerabilities and patch gap attacks.
Better still, the business case and financial justification almost write themselves: it can cost less than 5% of a large enterprise’s annual patch management spend, pay for itself in under two months, and demonstrate a return on security investment (ROSI) of up to 468 percent.
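The cost claims above are easy to sanity-check with back-of-the-envelope arithmetic. This sketch uses only the figures quoted in this article (the $6–9 million annual patch spend and the sub-5% cost ceiling); the helper function name and the payback logic are illustrative assumptions, not an official calculator.

```python
# Back-of-the-envelope check of the cost claims above.
# Figures ($6-9M annual spend, <5% solution cost) come from the article;
# everything else here is an illustrative assumption.

def solution_cost_ceiling(annual_patch_spend: float, pct: float = 0.05) -> float:
    """Upper bound on the solution's annual cost at the quoted <5% figure."""
    return annual_patch_spend * pct

low_spend, high_spend = 6_000_000, 9_000_000

# At less than 5% of spend, the solution costs at most $300K-$450K per year.
print(solution_cost_ceiling(low_spend))   # 300000.0
print(solution_cost_ceiling(high_spend))  # 450000.0

# "Pays for itself in under two months" implies two months of avoided
# patching costs (2/12 of annual spend) exceed the solution's annual cost.
two_month_spend = low_spend * 2 / 12      # $1,000,000 even at the low end
print(two_month_spend >= solution_cost_ceiling(low_spend))  # True
```

Even using the conservative end of the quoted spend range, two months of patching costs comfortably exceed the solution's annual price ceiling, which is consistent with the payback claim.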
To find out how, please take a look at the white paper: