Actually, I'm pretty sure this is a prank. They ramble on and on about the evils of "full disclosure". But nobody in the security industry advocates full disclosure as a first resort; to the contrary, responsible disclosure is the most common approach security researchers take. When a researcher identifies a vulnerability in a software product, the first thing they do is approach the vendor and alert them to it. The vendor is given a reasonable period of time to work out a patch before the details are revealed to the public. In some cases, the only people who ever get full details are the people responsible for the software; everyone else is simply alerted that the vulnerability exists.
But in any case, the objection is: why disclose vulnerabilities at all? Why not keep them a secret, thus keeping the bad guys from exploiting them?
Because many of the bad guys are pretty smart. If I'm smart enough to find a security vulnerability, odds are good that a bad guy is smart enough to find it too. So when a vulnerability is discovered, it's safest to assume that the bad guys already know about it. Maybe they haven't exploited it yet, but they could be working on it. You certainly don't know what they know or what they're doing, and you can't control it either.
Keeping it a secret doesn't keep the bad guys from finding it, but it does keep the good guys ignorant. If the good guys don't know about the vulnerability, they don't know what they can do to defend themselves. Until the vulnerability is patched, they're sitting there exposed, and without disclosure, blissfully ignorant.
Okay, but why full disclosure? Isn't it enough to say, "Hey, I found a vulnerability"? Nope, because the software vendor's response is: "No you didn't. Prove it." And this is where the "science" aspect of computer science kicks in. The researcher has just published the results of an experiment: "I performed test X, and got result Y, which means I just pwned this system." Merely publishing the results is not enough to prove you've done it; you also need to show your work. You have to distribute your methodology so other people can replicate it.
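That "show your work" step is essentially a reproducible test case. As a minimal sketch (the parser, its bug, and the inputs are entirely hypothetical, invented for illustration), here is the shape of a proof-of-concept: a crafted input (test X) plus an observable misbehavior (result Y) that anyone can rerun to confirm the finding.

```python
# Hypothetical example: a toy parser with a trusted-length bug, and the
# reproducible proof-of-concept a researcher might publish alongside the
# report. Nothing here refers to a real product.

def parse_record(data: bytes) -> bytes:
    """Toy wire format: first byte declares the payload length,
    the rest is the payload. Bug: the declared length is trusted
    and never checked against the actual data size."""
    declared_len = data[0]
    # Missing validation: declared_len may exceed len(data) - 1.
    return data[1 : 1 + declared_len]

# Test X: an honest record, then a record whose header lies.
honest = bytes([4]) + b"ABCD"     # declares 4 bytes, carries 4 bytes
lying = bytes([200]) + b"ABCD"    # declares 200 bytes, carries only 4

# Result Y: the parser accepts the lying header without complaint and
# silently returns a short payload, proving the length is unvalidated.
assert parse_record(honest) == b"ABCD"
assert len(parse_record(lying)) == 4  # not the 200 bytes it promised
```

Publishing something like this, rather than just the claim "the parser mishandles length fields", is what lets other researchers (and the vendor) replicate the result instead of taking the reporter's word for it.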
The sucky thing, of course, is that if the bad guys hadn't found the vulnerability before, they certainly know about it now. Which brings us back to responsible disclosure: a researcher should give the vendor a window in which to resolve the problem. The fact that this window closes is vital to keeping vendors honest. There have been plenty of occasions where researchers have alerted a software vendor to a vulnerability, and the vendor has simply ignored them.
Fixing bugs costs money, and fixing security vulnerabilities looks bad, like an admission of weakness. A company, acting in its short-term best interests, may choose to ignore reports from the security community. But if the researcher discloses the vulnerability to the public, well, now the vendor has to act.