Sunday, December 11, 2011

RFC 6454 and RFC 6455

Today, the IETF published two documents: RFC 6454, The Web Origin Concept, and RFC 6455, The WebSocket Protocol.  Both of these documents started out as sections in the HTML5 specification, which has been a hotbed of standards activity over the past few years, but they took somewhat different paths through the standards process.

RFC 6454's path through the IETF process was mostly smooth sailing.  The document defines the same-origin policy, which is widely implemented and fairly cut-and-dried.  In addition to the comparison and serialization algorithms we inherited from the WHATWG, the websec working group added a definition of the Origin HTTP header, which is used by CORS, and a broad description of the principles behind the same-origin policy.
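
As a rough, non-normative sketch of what those algorithms boil down to (ignoring details like globally unique origins and internationalized host names), here is the origin model in TypeScript:
// RFC 6454 models an origin as a (scheme, host, port) triple.
interface Origin {
  scheme: string;  // e.g. "https"
  host: string;    // e.g. "example.com"
  port: number;    // e.g. 443
}

// Two origins are "the same" only when all three components match exactly.
function sameOrigin(a: Origin, b: Origin): boolean {
  return a.scheme === b.scheme && a.host === b.host && a.port === b.port;
}

// ASCII serialization, roughly as it appears in the Origin header:
// omit the port when it is the default for the scheme.
function serializeOrigin(o: Origin, defaultPort: number): string {
  const portPart = o.port === defaultPort ? "" : ":" + o.port;
  return o.scheme + "://" + o.host + portPart;
}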

RFC 6455's path was less smooth.  The protocol underwent several major revisions in the WHATWG before reaching the IETF.  The protocol was fairly mature by the time it reached the hybi working group and was already implemented in WebKit and Firefox.  Unfortunately, some details of the protocol offended HTTP purists, who wanted the protocol handshake to comply with HTTP.  The working group polished up these details, leading to churn in the protocol.

Around this time, some colleagues and I were studying the interaction between DNS rebinding and transparent proxies.  It occurred to us that folks had analyzed the end-to-end security properties of WebSockets, but less effort had been expended analyzing how WebSockets interact with transparent proxies.  We studied these issues and found an interesting vulnerability.  We presented our findings to the working group, which updated the protocol to fix the issue.
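
The fix that ended up in RFC 6455 is masking: clients XOR each frame's payload with a fresh random 32-bit key, so an attacker can't control the exact bytes that an intermediary such as a transparent proxy sees on the wire.  Here's a minimal sketch of that step (the helper name is mine, not the RFC's):
import { randomBytes } from "crypto";

// Sketch of RFC 6455 client-to-server masking: the payload is XORed with a
// random 4-byte key chosen per frame, which keeps attacker-supplied data from
// appearing verbatim on the wire.
function maskPayload(payload: Uint8Array): { key: Uint8Array; masked: Uint8Array } {
  const key = randomBytes(4);  // fresh masking key for every frame
  const masked = new Uint8Array(payload.length);
  for (let i = 0; i < payload.length; i++) {
    masked[i] = payload[i] ^ key[i % 4];
  }
  return { key, masked };
}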

One perspective on these events is that they were a success: we found and fixed a protocol-level vulnerability before the protocol was widely deployed.  Another perspective is that we annoyed early adopters by polishing unimportant protocol details.  My view is that this debate boils down to whether you really believe that worse is better.  For my part, I believe we had a net positive impact, but I hope we can be less disruptive to early adopters when we improve security in the future.

Saturday, December 3, 2011

Timing Attacks on CSS Shaders

CSS Shaders is a new feature folks from Adobe, Apple, and Opera have proposed to the W3C CSS-SVG Effects Task Force.  Rather than being limited to pre-canned effects, such as gradients and drop shadows, CSS Shaders would let web developers apply arbitrary OpenGL shaders to their content.  That makes for some really impressive demos.  Unfortunately, CSS Shaders has a security problem.

To understand the security problem with CSS Shaders, it's helpful to recall a recent security issue with WebGL.  Similar to CSS Shaders, WebGL lets developers use OpenGL shaders in their web applications.  Originally, WebGL let these shaders operate on arbitrary textures, including textures fetched from other origins.  Unfortunately, this design was vulnerable to a timing attack because the runtime of OpenGL shaders can depend on their inputs.

Using the shader code below, James Forshaw built a compelling WebGL proof-of-concept attack that extracted pixel values from a cross-origin image:
for (int i = 0; i <= 1024; i += 1) {
  // Exit loop early depending on pixel brightness
  currCol.r -= 1.0;
  if (currCol.r <= 0.0) {
    currCol.r = 0.0;
    break;
  }
}
Timing attacks are difficult to mitigate because once the sensitive data is present in the timing channel it's very difficult to remove.  Using techniques like bucketing, we can limit the number of bits an attacker can extract per second, but, given enough time, the attacker can still steal the sensitive data.  The best solution is the one WebGL adopted: prevent sensitive data from entering the timing channel.  WebGL accomplished this by requiring cross-origin textures to be authorized via Cross-Origin Resource Sharing.
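
For concreteness, here's roughly what that opt-in looks like from the page's side when using a cross-origin image as a WebGL texture (the URL below is a placeholder):
// The page requests the image with CORS; the server must grant access via
// Access-Control-Allow-Origin before the image may be used as a texture.
const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const gl = canvas.getContext("webgl") as WebGLRenderingContext;

const image = new Image();
image.crossOrigin = "anonymous";  // ask for a CORS-approved copy of the image
image.src = "https://images.example/photo.png";
image.onload = () => {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Without CORS approval, this call fails with a SecurityError rather than
  // handing the shader pixel data the page isn't allowed to read.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
};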

There's a direct application of this attack to CSS Shaders.  Because web sites are allowed to display content that they are not allowed to read, an attacker can use a Forshaw-style CSS shader to read confidential information via the timing channel.  For example, a web site could use CSS shaders to extract your identity from an embedded Facebook Like button.  More subtly, a web site could extract your browsing history, bypassing David Baron's defense against history sniffing.

The authors of the CSS Shaders proposal are aware of these issues.  In the Security Considerations section of their proposal, they write:
However, it seems difficult to mount such an attack with CSS shaders because the means to measure the time taken by a cross-domain shader are limited.
Now, I don't have a proof-of-concept attack, but this claim is fairly dubious.  The history of timing attacks, including other web timing attacks, teaches us that even subtle leaks into the timing channel can lead to practical attacks.  Given that we've seen practical applications of the WebGL version of this attack, it seems quite likely that CSS Shaders are vulnerable to timing attacks.

Specifically, there are a number of mechanisms for timing rendering.  For example, MozBeforePaint and MozAfterPaint let a page measure paint times directly.  Also, the behavior of requestAnimationFrame contains information about rendering times.
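
As a rough sketch of the requestAnimationFrame approach (the sample count is arbitrary, and a real attack would compare timings with and without the suspect shader applied):
// Frame-to-frame deltas from requestAnimationFrame approximate how long the
// browser spends rendering; a shader whose runtime depends on secret pixel
// values would show up as longer deltas.
const deltas: number[] = [];
let last = performance.now();

function tick(now: number): void {
  deltas.push(now - last);  // includes the cost of rendering the last frame
  last = now;
  if (deltas.length < 120) {
    requestAnimationFrame(tick);
  } else {
    deltas.sort((a, b) => a - b);
    console.log("median frame time (ms):", deltas[60]);
  }
}
requestAnimationFrame(tick);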

Without a proof-of-concept attack we cannot be completely certain that these attacks on CSS Shaders are practical, but waiting for proof-of-concept attacks before addressing security concerns isn't a path that leads to security.