• What we wish MS had learned in 2005



    Or why I think that patching creates far more bugs and holes than it fixes. Each patch adds code that with time may prove to be either a bug or a hole. The suggested way to keep your computer “secure” is to patch. But each patch can break several things. They NEVER should have put all patches in one update. Then when something broke you could remove only the bad part. Now it is all or nothing. And if you get a bad patch, you often fix it by removing it, wasting time in the process.

    Well, I found this article, which dates back to 2005.


    It was titled “The Six Dumbest Ideas in Computer Security”. And when it comes to bad ideas, look at #3 (https://www.ranum.com/security/computer_security/editorials/dumb/)

    #3) Penetrate and Patch

    There’s an old saying, “You cannot make a silk purse out of a sow’s ear.” It’s pretty much true, unless you wind up using so much silk to patch the sow’s ear that eventually the sow’s ear is completely replaced with silk. Unfortunately, when buggy software is fixed it is almost always fixed through the addition of new code, rather than the removal of old bits of sow’s ear.

    “Penetrate and Patch” is a dumb idea best expressed in the BASIC programming language:

    10 GOSUB LOOK_FOR_HOLES
    20 IF HOLE_FOUND = FALSE THEN GOTO 50
    30 GOSUB FIX_HOLE
    40 GOTO 10
    50 GOSUB CONGRATULATE_SELF
    60 GOSUB GET_HACKED_EVENTUALLY
    70 GOTO 10

    In other words, you attack your firewall/software/website/whatever from the outside, identify a flaw in it, fix the flaw, and then go back to looking. One of my programmer buddies refers to this process as “turd polishing” because, as he says, it doesn’t make your code any less smelly in the long run but management might enjoy its improved, shiny, appearance in the short term. In other words, the problem with “Penetrate and Patch” is not that it makes your code/implementation/system better by design, rather it merely makes it toughened by trial and error. Richard Feynman’s “Personal Observations on the Reliability of the Space Shuttle” used to be required reading for the software engineers that I hired. It contains some profound thoughts on expectation of reliability and how it is achieved in complex systems. In a nutshell its meaning to programmers is: “Unless your system was supposed to be hackable then it shouldn’t be hackable.”

    “Penetrate and Patch” crops up all over the place, and is the primary dumb idea behind the current fad (which has been going on for about 10 years) of vulnerability disclosure and patch updates. The premise of the “vulnerability researchers” is that they are helping the community by finding holes in software and getting them fixed before the hackers find them and exploit them. The premise of the vendors is that they are doing the right thing by pushing out patches to fix the bugs before the hackers and worm-writers can act upon them. Both parties, in this scenario, are being dumb because if the vendors were writing code that had been designed to be secure and reliable then vulnerability discovery would be a tedious and unrewarding game, indeed!

    Let me put it to you in different terms: if “Penetrate and Patch” was effective, we would have run out of security bugs in Internet Explorer by now. What has it been? 2 or 3 a month for 10 years? If you look at major internet applications you’ll find that there are a number that consistently have problems with security vulnerabilities. There are also a handful, like PostFix, Qmail, etc, that were engineered to be compartmented against themselves, with modularized permissions and processing, and – not surprisingly – they have histories of amazingly few bugs. The same logic applies to “penetration testing.” There are networks that I know of which have been “penetration tested” any number of times and are continually getting hacked to pieces. That’s because their design (or their security practices) are so fundamentally flawed that no amount of turd polish is going to keep the hackers out. It just keeps managers and auditors off of the network administrator’s backs. I know other networks that it is, literally, pointless to “penetration test” because they were designed from the ground up to be permeable only in certain directions and only to certain traffic destined to carefully configured servers running carefully secured software. Running a “penetration test” for Apache bugs is completely pointless against a server that is running a custom piece of C code that is running in a locked-down portion of an embedded system. So, “Penetrate and Patch” is pointless either because you know you’re going to find an endless litany of bugs, or because you know you’re not going to find anything comprehensible. Pointless is dumb.
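The compartmentalized design Ranum credits for Postfix and qmail’s clean track records can be sketched in a few lines. This is a hedged illustration only, not the actual architecture of either program: the worker script and the `handle_message` function are invented names. The point is simply that the coordinating process never parses untrusted bytes itself, so a bug in the parser can at worst crash an isolated, easily restarted worker.

```python
import subprocess
import sys

# Untrusted-input parser, run as a separate process. A crash or hang
# here is contained: it cannot corrupt the coordinator's memory.
WORKER = r"""
import sys
raw = sys.stdin.buffer.read()
try:
    text = raw.decode("utf-8")
    sys.stdout.write(text.strip())
except UnicodeDecodeError:
    sys.exit(1)   # reject malformed input instead of limping on
"""

def handle_message(raw: bytes):
    """Coordinator: hands raw bytes to an isolated worker and only
    inspects the worker's exit status and output."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=raw,
        capture_output=True,
        timeout=5,           # a hung parser can't hang the coordinator
    )
    if proc.returncode != 0:
        return ("reject", None)
    return ("ok", proc.stdout.decode("utf-8"))

if __name__ == "__main__":
    print(handle_message(b"hello world\n"))   # ('ok', 'hello world')
    print(handle_message(b"\xff\xfe"))        # ('reject', None)
```

The design choice is the same one Ranum describes: instead of polishing the parser after each exploit, the system is arranged so that the component touching hostile input holds no privileges and no shared state worth stealing.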

    One clear symptom that you’ve got a case of “Penetrate and Patch ” is when you find that your system is always vulnerable to the “bug of the week.” It means that you’ve put yourself in a situation where every time the hackers invent a new weapon, it works against you. Doesn’t that sound dumb? Your software and systems should be secure by design and should have been designed with flaw-handling in mind.


    This is why you “ought to” patch every time: because instead of fixing the code, they add a bunch of extra unneeded code when the real fix might be a very few lines. Also check out the graph at #2 and ask yourself: what changed in 1994?


    • #2391204

      MacroHard is the Sears of this millennium.

      You can “rebrand!!!”, slap on new “Features and Benefits!!!”, and put lipstick on the pig all you want, but long-overdue-needs-disrupting is what it is.

      They really need to break up MacroHard and—like any nightmare—make it just go away…
