Should Vendors Close All Security Holes?

When someone asks “Should vendors close all security holes?” you’d think the answer would be obvious…but is it? Apparently some companies don’t patch low-risk exploits until they are reported publicly. Do you agree? Disagree? Hit the comments link below and share your thoughts.

“Our company spends significantly to root out security issues,” says the reader. “We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don’t patch the problem.”
 
I can see why they wouldn't worry about it... patching security holes probably eats up a large amount of resources and time, and if a hole is low risk it probably isn't worth the cost.
 
I can totally understand some of the reasoning for this, as it can be costly to fix these issues. In a perfect world we wouldn't have them to begin with, but most companies don't have the monetary resources to devote a team to nothing but patching vulnerabilities in a piece of software.
 
Reminds me of the Ford Pinto fiasco: they knew about the problem, but decided that settling any lawsuits that came up would be cheaper than recalling all the cars. Except software now always comes with a licensing agreement that you automatically accept the moment you look at the box it came in, so there's no threat of a lawsuit. Why should they care, except for P.R. reasons?
 
I'm glad doctors don't think like this. "No, we won't heal you... there have only been 30 cases of this illness. We're going to wait for an epidemic to find a cure. Sorry, you're on your own!" :rolleyes:
 
Ideally, I think they should fix the holes. But realistically, I know it takes a lot of development work to fix them, and a support-based unit of a software company is generally not profitable. Most companies probably have the resources to fix the holes, but it isn't a priority because it doesn't make them money. It only costs them money when it's a critical issue, and they generally devote resources to that. For low-level exploits, I doubt they really care. They would rather put their development resources toward new products that can show a profit.
 
The points made are very valid. The problem really starts with writing bulletproof code: it is very time consuming, and therefore expensive. If we consumers were willing to pay for that, there wouldn't be so much of a problem.
 
Reminds me of the Ford Pinto fiasco: they knew about the problem, but decided that settling any lawsuits that came up would be cheaper than recalling all the cars. Except software now always comes with a licensing agreement that you automatically accept the moment you look at the box it came in, so there's no threat of a lawsuit. Why should they care, except for P.R. reasons?


Somehow I don't see the exploding Pinto that kills people as a reasonable comparison for minor software flaws.

If we're going with bad car analogies, it's more like knowing that 100,000 cars were released with a minor computer glitch that caused them to run at 98% of normal fuel efficiency and not issuing a recall. You COULD fix the problem, but is it reasonable?
 
I think the first thing we need to establish is that 'low risk' is a completely relative term, and we never really know. In other words, some jackass can label a buffer overflow that allows code execution as the system user as 'low risk' and we would never know. However unlikely that is, the possibility is there and we have to be mindful of it. Fixing bugs is a process that slowly brings you toward perfection. If you keep adding features to a program while you know the features you already have are buggy, you should stop. Make the program work properly first, then add your features.
 
I think the first thing we need to establish is that 'low risk' is a completely relative term, and we never really know. In other words, some jackass can label a buffer overflow that allows code execution as the system user as 'low risk' and we would never know. However unlikely that is, the possibility is there and we have to be mindful of it. Fixing bugs is a process that slowly brings you toward perfection. If you keep adding features to a program while you know the features you already have are buggy, you should stop. Make the program work properly first, then add your features.



It's nice to know you're fully willing to pay far more for software than you currently do. I, on the other hand, like having money for other purchases.
 
I think the best argument for not fixing a minor security bug is that there's a non-zero chance of your "fix" introducing an unintended security issue that's actually exploitable, or a usability issue.

If the original security bug isn't exploitable, it's not worth fixing.

I'm glad doctors don't think like this. "No, we won't heal you... there have only been 30 cases of this illness. We're going to wait for an epidemic to find a cure. Sorry, you're on your own!"

A better analogy would be a doctor saying, "No, there's nothing obviously wrong with your appendix, but appendixes do have a .01% chance of becoming inflamed, so let's remove it. Oh, and there's a .5% chance you'll react badly to the anesthesia and die."
 
The bottom line is that as long as the world is willing to put up with the results of security holes, things will stay the way they are. Remember the whole Windows XP Service Pack 2 thing? Microsoft had been sitting on a ton of the holes fixed in XP SP2 since Windows 2000, and they all got patched in one fell swoop. Blaster, anyone?

99.99% of consumers don't understand security to the point that they have a good reason to demand it, other than they don't want some Romanian hacker running up their credit card. Good marketing passes for security to most folks.

Would people be willing to pay more for more secure software? The real question is why they need to pay more at all. Simply because most developers don't know how to write it? There are "safe" languages out there (Java, not to be confused with JavaScript, plus .NET and so on) that provide a runtime environment acting as a single point of security concern, which removes much of the need for application developers to focus on security themselves. But most code out there is written in C and C++, which carry the inherent possibility of security holes due to unchecked data structures.
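To make that concrete, here is a tiny made-up C sketch (purely illustrative, not taken from any real product) of the kind of unchecked copy being described, next to a bounded version. A managed runtime like the JVM or the .NET CLR effectively performs that bounds check on every array access for you.

    #include <stdio.h>
    #include <string.h>

    /* Classic unchecked copy: if 'name' is longer than 15 characters,
       strcpy() writes past the end of 'buf' and corrupts the stack. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);                      /* no length check at all */
        printf("Hello, %s\n", buf);
    }

    /* Bounded copy: the write is limited to the buffer's size, which is
       the discipline a managed runtime enforces automatically. */
    void greet_safe(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);  /* truncates instead of overflowing */
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_safe("a name long enough to have overflowed the unsafe version");
        return 0;
    }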

The bigger problem we face is the loss of a real apprentice-journeyman-master environment in the modern workforce, which the software development world would benefit from in a big way. Too often interns and junior programmers are simply thrown into projects in a sink-or-swim situation, with the more experienced programmers available only to hold their hand when they get in over their head. What should be happening is that experienced developers get paired up with junior developers to work side by side on a single project: pair programming. This doesn't happen because it would be viewed as wasting two human resources on a single task. That is the wrong way to view it, because the practice pays off immensely in what a junior developer learns coding side by side with an experienced one. That, and many coding shops underpay and overwork their employees, who treat low-level coding jobs as a way to gain experience while looking for something more stable and reliable. Once they have enough experience, they take it elsewhere, and take their bad coding habits with them.

I'm not saying all junior developers are like this, but it is a common scenario. I know from the people I have met and worked with that it happens too often. In this world of shareholders and publicly traded companies, the pressure to show results at all costs genuinely hampers the quality of people's work, particularly when you know there's a 50-50 shot your project will get canceled on the decision of some business analyst who doesn't really understand the technology you're building for them in the first place.

Anywho, that's my little rant on the subject. Security is only part of the picture. Developers need to know how to write not only secure code but testable code as well. Until both of those subjects become cornerstones of computer science and software engineering education, we will continue to see more of the same, and the excuse (and believe me, it is an excuse) that secure coding is too expensive will continue to be treated as a valid reason for security holes to go unpatched.
 
On one hand, it seems obvious -- of course they should patch them! But there is a cost to releasing a patch: you give away the vulnerability.

Whenever Microsoft, or any other major software vendor, puts out a patch for a previously-unknown security vulnerability, there is invariably an exploit for that vulnerability published within days or even hours. There is no way to avoid this -- putting out the patch makes it trivial for someone with a bindiff tool to identify the vulnerability. The act of fixing it gives away its nature.
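To illustrate why (a hypothetical sketch, not taken from any real patch): imagine the pre-patch and post-patch builds contain these two versions of a parsing routine. A binary diff lights up the one added bounds check, which tells an attacker exactly which field to overflow on systems that haven't installed the update yet.

    #include <stddef.h>
    #include <string.h>

    /* Pre-patch: trusts the attacker-supplied length byte completely. */
    int parse_record_v1(const unsigned char *pkt, size_t pkt_len) {
        unsigned char record[64];
        size_t len = pkt[0];                 /* length field read off the wire */
        (void)pkt_len;                       /* never consulted */
        memcpy(record, pkt + 1, len);        /* overflows 'record' if len > 64 */
        return (int)len;
    }

    /* Post-patch: identical except for one added check. Diffing the two
       binaries points straight at this routine and the missing validation. */
    int parse_record_v2(const unsigned char *pkt, size_t pkt_len) {
        unsigned char record[64];
        size_t len = pkt[0];
        if (len > sizeof record || len + 1 > pkt_len)
            return -1;                       /* reject oversized records */
        memcpy(record, pkt + 1, len);
        return (int)len;
    }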

It takes time to patch systems. Consumers tend to patch every week... if they patch at all (so you have two extremes: up-to-date systems and systems that haven't been patched in months). IT environments, on the other hand, often patch only every month or two, due to the downtime and risk involved.

So now put yourself in a software vendor's shoes. You've found a vulnerability that's quite obscure -- someone will probably find it eventually, but maybe not for a very long time -- and maybe not ever. No one is suffering from it now, but they might later. If you put out a patch, then a few people will get it installed before the exploit or worm gets out... but many won't. By putting out a patch, you ensure that many of your customers will be compromised. Do you do it?

If the issue is something like SQL Slammer, where there's the potential to bring down essentially the entire Internet, or if you have something that's a remotely-exploitable unauthenticated buffer overflow, of course you do -- the risk is too great, someone will find it eventually, you have to stop it. But if it's something small that may never be noticed? If you wait for the next service pack, instead of releasing an easily-reverse-engineered patch, you could save a lot of people a lot of grief... though you take a major PR hit if the vulnerability is discovered before then.

It's not as cut-and-dried as people like to think it is. Patch management is hard.
 