t3knomanser's Fustian Deposits

Morons oppose security

Today, ImageShack was hacked by some anonymous children seeking to get their manifesto out there. If nothing else, it does a good job of showing off that the differently-abled can still master basic computer security tools.

Actually, I'm pretty sure this is a prank. They ramble on and on about the evils of "full disclosure". But nobody in the security industry advocates full disclosure. To the contrary, responsible disclosure is the most common approach security researchers take. When a researcher identifies a vulnerability in a software product, the first thing they do is approach the vendor to alert them to it. The vendor is given a reasonable period of time to work out a patch before the details are revealed to the public. In some cases, the only people who ever get full details are the people responsible for the software- everyone else just gets told that a vulnerability exists.

But in any case, the objection is: why disclose vulnerabilities at all? Why not keep them a secret, thus keeping the bad guys from exploiting them?

Because many of the bad guys are pretty smart. If I'm smart enough to find a security vulnerability, there are good odds that a bad guy is also smart enough to find it. So when a vulnerability is discovered, it's safest to assume that the bad guys already know about it too. Maybe they haven't exploited it yet, but they could be working on it. You certainly don't know what they know or what they're doing, and you can't control it either.

Keeping it a secret doesn't keep the bad guys from finding it, but it does keep the good guys ignorant. If the good guys don't know about the vulnerability, they don't know what they can do to defend themselves. Until the vulnerability is patched, they're sitting there exposed- and without disclosure, in blissful ignorance.

Okay, but why full disclosure? Isn't it enough to say, "Hey, I found a vulnerability"? Nope, because the software vendor's response is: "No you didn't. Prove it." And this is where the "science" aspect of computer science kicks in. The researcher has just published the results of an experiment: "I performed test X, and got result Y, which means I just pwned this system." Merely publishing the results is not enough to prove you've done it- you also need to show your work. You have to distribute your methodology so other people can replicate your results.
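
To make the "show your work" point concrete, here's a toy sketch of what a proof-of-concept might look like- a deliberately vulnerable little C program and the test that breaks it. (Purely illustrative; it has nothing to do with ImageShack or any real product.)

    /* toy_overflow.c - a deliberately vulnerable example.
     * "Test X": run it with an argument longer than 16 bytes.
     * "Result Y": the unchecked copy overruns the stack buffer and the
     * program crashes (or trips the stack protector) - a result anyone
     * can reproduce for themselves. */
    #include <stdio.h>
    #include <string.h>

    static void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);          /* no bounds check: this is the bug */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        greet(argc > 1 ? argv[1] : "world");
        return 0;
    }

That's the whole methodology: compile it, feed it a couple hundred characters, and watch it fall over. Publishing that, and not just the claim "I found a bug," is what lets other people replicate the result.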

The sucky thing, of course, is that if the bad guys hadn't found the vulnerability before, they certainly have now. Which brings us back to responsible disclosure: a researcher should give a vendor a window in which to resolve the problem. The fact that this window closes is vital to keeping vendors honest. There have been plenty of occasions in which researchers have alerted a software vendor to a vulnerability, and the vendor has ignored them.

Fixing bugs costs money, and fixing security vulnerabilities looks bad- like the vendor is admitting weakness. A company, acting in its short-term best interests, may choose to ignore reports from the security community. But if the researcher discloses the vulnerability to the public- well, now the vendor has to act.
  • speaking of vulnerabilities, did you hear about the possible ssh vulnerability?
    • I didn't, but after a quick Googling, it doesn't look likely. Scary thought, though.
      • To contribute to the rumor mill, the one that got the buzz started appears to have been the result of a very stupid admin leaving a list of his passwords in his Google account.

        However, a friend of mine says that his buddy (who I know well enough to know that he's actually a fairly decent hacker and knows enough about kernels to do this) says that they know of an exploit. From what he said (and honestly it was a bit over my head) and the research I did, it seems to be based on those buffer overflow bugs from 2002 and 2003, and it's probable that the only vulnerable versions are based on the Red Hat ssh, since they've been backporting security fixes instead of updating the version in their releases. This means CentOS is also vulnerable, which is what a lot of web hosts use.

        He did say that in order to actually use the pointer returned by the exploit to do anything useful, you would practically have to have a clone of the system you're attacking set up. Otherwise, the connection just drops. On the other hand, for a farm with a few thousand servers set up identically, you could theoretically brute-force it by iterating through the servers. A vulnerability in cPanel that gives you a list of all accounts on the machine (and their shells) brings this into the realm of highly unlikely but possible.

        On the plus side, he said that the solution is simply to drop the vulnerable cipher, which (probably not coincidentally) is not used in the latest OpenSSH (5.2).
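
        For what it's worth, that mitigation would look something like this in sshd_config- just a sketch, with an illustrative cipher allow-list, since he didn't say which cipher is supposedly the broken one:

            # /etc/ssh/sshd_config
            # Restrict the server to an explicit allow-list of ciphers,
            # leaving the suspect one out. (These names are examples,
            # not the actual rumored cipher.)
            Ciphers aes256-ctr,aes192-ctr,aes128-ctr

        Restart sshd after editing for the change to take effect.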


        • Fortunately, that's a nasty one to pull off. Sadly, any vulnerability in SSH is going to pose a serious problem for Internet security.

          I do remember reading about those buffer overrun bugs, but I thought they were properly patched. I'm not surprised that Red Hat is slow on the uptake, though.
          • I got the impression that the cipher that's broken was just silently dropped and not really officially noted as broken.

            I also just realized that it doesn't matter if you have a valid user/shell, because the exploit appears to occur before authentication.
