I.T. Best Practices
In the world of I.T., keeping systems patched is a necessity from both a security and an operational standpoint. There was a time when keeping systems properly patched wasn't nearly as complicated as it is today. The reasons for this increased complication are diverse and outside the scope of this article.
Best practice says that patches should be tested before being deployed, and that testing and deployment should happen as soon as possible after release.
Patches should be tested for compatibility with the applications in use within the organization, as well as for their effects on current system configurations. And all of this should be done in a test environment.
I.T. best practices are the right way to do things, and should be followed if at all possible. But the reality is somewhat different. Like, night and day different.
I.T. spending is, and always has been, a low priority for most organizations. No modern organization can function without I.T., yet when budget time comes around, it sits very far down the list of priorities. In fact, some organizations will buy new furniture before they buy new I.T. equipment. Sad, but true. This affects software and operating system licensing, hardware, upgrades, and staffing. This is where the ideals of best practices run into the hard wall of reality. It's something we have had to deal with for many years, and it is unlikely to change anytime soon.
Most I.T. departments are under-staffed and over-worked. There are barely enough people to attend to the immediate needs of the users, let alone keep the systems running and secure. Add to that expansion projects and the "new & improved" app that user X just has to have to do her job, and suddenly there aren't enough hours in the day. Not only that, but you have to somehow make all of it work within a very limited budget, including paying for that new app user X just has to have.
Now, given that I.T. spending is such a low priority, meaning we have to do the best we can with what we have, the recommended best practices fall short in a few areas.
First, the lab environment. Most of us don't have the luxury of a lab environment in which to test patches. While we could use the free version of VMware or something similar (virtualization on Linux, VirtualBox, etc.), there is the matter of hardware to put it on. Do we have enough spare hardware to run this stuff? Probably not, or if we do have some extra hardware, it's probably too old to be useful.
But what about running a lab environment in an existing virtualization infrastructure? While it is certainly doable, it may cause licensing issues. Are our licenses robust enough to cover the additional VMs? What about future expansion? I’m sure there are a host of other questions that could be asked about that scenario, but if you can do it, then by all means do it.
So, if we can't or don't have a lab environment, what then? It means testing on a production system. Definitely not the ideal situation, but we have limits we must work within.
So that brings us to the second part of the problem: testing. Because we are under-staffed, making time to test patches may not be possible. We can't dedicate a person or a team to testing patches; we simply don't have the manpower, because those higher up the chain won't, can't justify, or just can't afford to spend money in this area, just as with the lab environment. That means you test when you can. And for most I.T. shops, that just doesn't happen.
Now, add to that the sheer number of patches being released. A quick check of our patching system shows over 700 patches released since the first of the year (nine days ago) for software that we use. That's nearly 80 patches a day. Without a dedicated team and a lab environment, we just can't test that many patches.
All of this leaves us with two options. First, just don't patch. Not a good option, to be sure, but an option nonetheless. The second option is to "spray and pray": release the patches into our production network, without testing, and hope that none of them break any of our systems. The second option is what most of us end up doing; after all, it's better than not patching at all. And if something breaks, well, we'll just put out that fire if and when it springs up.
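One small hedge against a fully blind rollout, my own suggestion rather than anything prescribed above, is to stagger the deployment: push untested patches to a small canary batch of machines first, watch for breakage, then release to the rest. A minimal sketch, with a purely hypothetical host inventory and batch size:

```python
def staged_rollout(hosts, canary_size=2):
    """Split a host inventory into a small canary batch and the remainder.

    The canary machines get the untested patches first; the rest wait
    until the canaries have run cleanly for a while. Host names and the
    default batch size here are illustrative assumptions.
    """
    canary = hosts[:canary_size]
    remainder = hosts[canary_size:]
    return canary, remainder


if __name__ == "__main__":
    hosts = [f"srv{n:02d}" for n in range(1, 11)]  # hypothetical inventory
    canary, rest = staged_rollout(hosts)
    print("patch first:", canary)        # watch these for problems
    print("patch later:", len(rest), "hosts")
```

It's still "spray and pray", but at least the first spray only hits a couple of machines.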
That's the ideal of best practices versus the reality of I.T. If you're fortunate enough to be able to test patches, good for you, but don't judge the rest of us. We're doing the best we can with what we have.