
When Censorship Backfires: How Blue Coat Silenced A Security Researcher


When security researchers are silenced by governments or private companies, it’s often to the detriment of technology users of every ilk. Ignorance is certainly not bliss when it comes to digital vulnerabilities: it leaves systems open to attack and, consequently, people’s data open to theft.

But on a number of occasions, where utilitarianism has been neutered by bad capitalism, the needs of the few have outweighed the needs of the many (Spock would not be best pleased). Blue Coat’s successful attempt to stop Airbus security researcher Raphaël Rigo from talking about the firm’s technology at the Syscan conference in Singapore marks another chapter in the history of silencing security researchers. Whilst not the most horrific example of censorship, it was censorship nonetheless. A man was told he could not talk.

Emails between the various parties involved, obtained by FORBES, show the Sunnyvale firm initially contacted Syscan conference organiser Thomas Lim, of security services provider Coseinc, on 13 January, asking for Rigo’s contact details. A Blue Coat blog post published after FORBES’ report on the situation in Singapore claimed the company had learned about Rigo’s presentation “shortly before the conference”.

In that first email, Blue Coat senior security architect Tammy Green, who also heads up the company’s vulnerability response team, told Lim: “As his talk is about one of our products, ProxySG, I would like to contact him prior to the talk to ensure we have time to address any vulnerabilities that he may have discovered.” Lim, believing in the value of close working relationships between researchers and vendors, passed on Green’s email address to Rigo, whilst telling Blue Coat it was down to Rigo if he wanted to get in touch.

He did. In later emails to Lim, Rigo said he told Blue Coat his research revealed no specific vulnerabilities in ProxySG, which intercepts and scans traffic for security threats and bypasses of IT policy, and promised to pass on slides for the talk. He noted he would be detailing the workings of ProxySG’s filesystem, operating system internals and its security, or lack thereof.

Just over two months later, Rigo was preparing to head to Singapore to give his talk. On Friday 20 March, he contacted Lim to say that the discussions with Blue Coat “may have consequences on either the content of my talk or even the possibility that I give it”. It was only on 23 March, three days before he was due on stage, that he received final confirmation from his employer, Airbus, that he wouldn’t be allowed to present.

The emails show the frustration felt by Rigo as a result of Blue Coat’s move. According to one of his emails to Lim, he expressed bemusement at a Blue Coat request to remove a slide that contained “information you can find in their public documentation”. He said Blue Coat was “scared” of the presentation, and that he was “very sorry of [sic] this mess which is unfortunately way beyond my control”.

Rigo declined to comment on the matter further during email conversations with FORBES.

A brief history of censorship of security research

Once the world found out about what had gone down, the security community was irate, slamming Blue Coat and claiming it had “bullied” Rigo out of his talk. Even Yahoo’s head of security, Alex Stamos, took to Twitter, calling on others to shun Blue Coat (he declined to comment further on the matter).

Blue Coat said it had worked, and continues to work, with Airbus to learn about Rigo's findings. "Blue Coat did not bully or otherwise threaten Airbus into withdrawing its presentation at the security conference. Following responsible disclosure practices, Blue Coat requested more time from Airbus to review and validate the research we received in mid-March, and to mitigate any risks to our customers associated with the public disclosure of the presentation. Airbus agreed to postpone disclosure of their presentation."

For those outside the echo chamber of the security community, this might seem like much ado about nothing. But security research not only puts the world’s digital defenders on the front foot against unceasing attacks, by alerting them to potential weaknesses in their systems so they can patch; it also inspires others to openly share their findings, thereby increasing knowledge amongst those fighting off snoops and digital criminals. From there, technologies can be improved, or built anew, to better protect people’s privacy.

Yet over the last 15 years there have been a significant number of cases of private companies and governments pressuring researchers into cancelling their talks. Security-focused website attrition.org keeps a list of them. Just last year, French researcher Jean-Marie Bourbon thought he’d lost his job at a consultancy after he posted information on vulnerabilities in security technology from FireEye, though the firm later credited him for his research. Noted researcher Charlie Miller claimed that in 2008 Google attempted to stop him talking about weaknesses he’d found in an Android-based T-Mobile phone. In 2005, Michael Lynn resigned from his job at Internet Security Systems just so he could disclose security vulnerabilities in Cisco routers, saying he believed he needed to go against the networking giant’s wishes for the good of the internet at large; Cisco’s hardware is vital to many networks across the world.

Sometimes it’s the government that steps in and shuts researchers down. In 2001, Russian cryptography expert Dmitry Sklyarov was arrested and accused of creating tools designed to bypass copyright protections on ebooks, a day after he’d presented on problems with such technology and related Adobe PDFs. Though he was later charged with offences under the Digital Millennium Copyright Act (DMCA), those charges were eventually dropped.

Daniel Cuthbert, chief operating officer at security consultancy Sensepost, also knows what it’s like to have the government on his back. In 2005, he was prosecuted and fined for testing security on a donation site for the devastating Boxing Day tsunami of 2004. Cuthbert lost his job as a result of the case.

Cuthbert believes bullying tactics will deter researchers from being responsible and push them into the darker corners of the digital world. “What Blue Coat did was not new and it has been a tried and tested technique used by many companies to silence researchers. The danger of this approach is that you end up attacking the person that's trying to help you,” he told FORBES.

“Rigo could have gone down the road of finding [possible security weaknesses] and then rather than going public, selling them to criminals who wished to use them for nefarious needs. There's no evidence that this was his approach and he took to a public, well-respected, security conference to share his findings with the infosec community.

“There has to be a responsible disclosure approach between the researcher and the company being researched. This gives both parties a clear understanding of the process that should be followed, such as time lines for the flaws to be confirmed, a patch to be created that resolves it in a suitable way so as to not introduce new vulnerabilities and a time suitable for both for the research and them to go public.”

Trey Ford, global security strategist at Rapid7, told FORBES current laws already make life difficult and complex for researchers. When he was general manager for the global Black Hat security conferences, there were numerous cases where lawyers stepped in, concerned about the application of the law. "A couple years ago at Black Hat we saw several talks on cellular research - four different legal readings landed at four very different decisions on what the researcher could and could not do - or what precautions must be taken before a demonstration or proof-of-concept exploit was done on stage," Ford added.

"This lack of clarity is further exacerbated by venue – the country, state and county the research was done in, the laws where the vendor is located, and where the talk is being done. Even when discussing the conference location – lawyers can’t agree on how best to protect the researcher presenting the work.

"This confusion creates a level of hesitation when the Security Researchers community looks into contacting vendors about a discovered vulnerability. It’s a conditioned response, like the beat pup afraid of men with low voices."

Life may be about to get even more constricted for researchers. The Obama administration’s attempts to update the Computer Fraud and Abuse Act (CFAA) have caused consternation amongst many in the security community, with one researcher labelling them a salvo in the president’s “war on hackers”. They believe the proposed provisions would allow the government to prosecute researchers for simple acts, such as visiting links to data stolen by another party or accessing and sharing passwords belonging to others.

It’s hoped the government lives up to its promise that it is listening to researchers.