What is XSS and why is it restricted?
By Jonathan Marsh
- 10 Mar, 2010
In this article, Jonathan Marsh, VP of Business Development at WSO2 Inc., explains some important security aspects of cross-site scripting (XSS) restrictions.
Disclaimer: I'm not a security guru, so what follows is my opinion, observation, and experience. Please feel free to comment and correct!
However, it is possible that even a trustworthy site, through poor construction or through compromised delivery mechanisms, could be "hacked" by a third party. For instance, accessing the site through an open (but malicious) wireless network may allow the page to be subtly changed during transmission. This change might be to insert a bit of script code that records the interactions the user has with the page, including information he enters such as a password, and also information that is provided by the website to the user. The inserted script could collect this private information, and then "phone home" to the attacker. HTTPS can mitigate such attacks by securing the communication channel, but interactions with plain HTTP sites may still disclose user secrets of various levels of sensitivity.
Attacks can also come, and generally do come, over a trusted internet connection, even possibly through HTTPS. Anytime user-generated content appears in a page (e.g. comments on a blog, etc.) there is a possibility that third party, and thus untrustworthy, content is piggybacked on a trusted site. Plain text third-party content is benign (what you see is what you get), but if the content can be submitted in html, it is possible that such html can include malicious scripts. For this reason, a trustworthy site that allows user-generated content must scrub any user-generated content provided to it, removing anything that could be executed as script.
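The scrubbing described above can be as simple as escaping the characters that let the browser interpret user content as markup. Here is a minimal sketch (my own illustration, not code from any particular framework):

```javascript
// Neutralize user-submitted content by escaping the characters that
// would let the browser treat it as HTML markup or script.
function escapeHtml(userInput) {
  return String(userInput)
    .replace(/&/g, "&amp;")   // must come first, or it re-escapes the others
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A <script> payload submitted as a blog comment comes out inert text:
escapeHtml("<script>phoneHome(document.cookie)</script>");
// → "&lt;script&gt;phoneHome(document.cookie)&lt;/script&gt;"
```

Real sites that want to allow *some* HTML in comments need a proper whitelist-based sanitizer rather than blanket escaping, but the principle is the same: nothing user-supplied may reach the page in a form the browser will execute.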
From the perspective of browser vendors, there are a lot of sites out there, and not all of them adequately consider the security implications of user-generated content. To help protect the user, XSS counter-measures in the browser attempt to limit the ability of scripts within the page to "phone home". This is accomplished by preventing HTTP POST (the method used to submit forms and upload data) to any web site domain other than the one from which the main page originated. For instance, a page from https://wso2.com can access "safe" content (images, stylesheets, even script libraries) from other domains, but it won't be able to post a form containing user input to anywhere but https://wso2.com. The XMLHttpRequest object, which provides a way to POST from script, is likewise prevented from posting to any domain other than the page's own.
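The check the browser performs before allowing such scripted access can be sketched roughly as follows. The helper name and the exact rules are my simplification; real browsers compare the scheme, host, and port of the two URLs:

```javascript
// Rough sketch of a same-origin comparison: two URLs share an origin
// only if their scheme, host, and port all match.
function sameOrigin(urlA, urlB) {
  const a = new URL(urlA);
  const b = new URL(urlB);
  return a.protocol === b.protocol &&
         a.hostname === b.hostname &&
         a.port === b.port;
}

sameOrigin("https://wso2.com/page", "https://wso2.com/service");   // true: POST allowed
sameOrigin("https://wso2.com/page", "https://evil.example/steal"); // false: POST blocked
```

Note that the path plays no role: any page on the site may post anywhere on the same site, but a different subdomain, port, or scheme counts as a different origin.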
So as a user, having a browser watch out for these types of attack and prevent them seems useful. But let's consider situations where they get in the way of useful, trustworthy work.
For a web developer (especially a mashup developer) XSS restrictions can be quite a pain, as they limit your ability to write a page that spans domains. They limit your ability to host AJAX and Web Service interactions (powered by the XMLHttpRequest object) anywhere other than your primary domain. For instance, you can't host a Web service on a remote Mashup Server and use it within your own application (at least not directly from the browser). Even though you, as the web site author, may trust both sites, the browser enforces a blanket restriction on this access. (Each browser has mechanisms that may loosen this in some circumstances, but none of them are zero-config or cross-browser.)
This restriction limits applications such as gadget pages (e.g. iGoogle.com) that aggregate information from a large number of sources. The Google Gadget framework, for instance, provides a way to GET information through a proxy on the trusted server, but currently disallows similar capabilities for POSTing.
Don't start feeling too secure as a user, or too disappointed as a developer trying to do legitimate work - there are some loopholes that can be exploited.
As described above, an HTTP GET operation is assumed to be safe across domains, while HTTP POST is not. If one could masquerade a POST as a GET, one could circumvent the security restrictions. In particular, script can be fetched regardless of domain; this powers third-party script libraries, an important feature supporting simplified development, analytics, and advertising. Basically, one translates the body of the POST into URL parameters on a GET (recognizing there are length and encoding issues to deal with), dynamically inserts a <SCRIPT> tag into the page that issues the GET, and the server on the external domain can access the "posted" information. It can even send a response back in the form of a block of script (essentially a callback). Of course, you need to insert script into the page initially to get the ball rolling, which can be pretty difficult over a secure connection or on sites that properly sanitize user-generated content. But if you're the owner of the original site, it's not terribly difficult once the technique has been mastered.
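The technique above can be sketched in a few lines. The function names and the example endpoint are mine, purely for illustration:

```javascript
// Fold what would have been a POST body into query parameters on a GET.
// encodeURIComponent handles the encoding issues mentioned above;
// URL-length limits still apply and are not handled here.
function buildGetUrl(endpoint, data) {
  const params = Object.keys(data)
    .map(k => encodeURIComponent(k) + "=" + encodeURIComponent(data[k]))
    .join("&");
  return endpoint + "?" + params;
}

// Browser-only half: inserting the <SCRIPT> tag triggers the cross-domain
// GET, and the external server can answer with a block of script that
// calls back into the page (the callback described above).
function postViaScriptTag(doc, url) {
  const tag = doc.createElement("script");
  tag.src = url; // e.g. buildGetUrl("https://other.example/svc", {...})
  doc.getElementsByTagName("head")[0].appendChild(tag);
}

buildGetUrl("https://other.example/svc", { user: "jm", msg: "hi there" });
// → "https://other.example/svc?user=jm&msg=hi%20there"
```

This is essentially the pattern that later became known as JSONP: the "response" arrives as executable script, so the external domain is trusted completely, which is exactly the security trade-off under discussion.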
So, my question is: if XSS restrictions are so painful, yet can be circumvented with a modest bit of work (hey, I'm no genius at this stuff and I did it), why are they in place at all? Instead of trading off convenience for security, we're imposing inconvenience without actually making a meaningful contribution to the user's security. The additional security gained by making cross-domain access merely obscure rather than truly prohibited doesn't seem worth it. Is it time to dump XSS restrictions? Or do we need to add a new (and further inconvenient) restriction against inserting <SCRIPT> tags into a page dynamically? As long as there is any cross-domain access, I don't think I'll be completely secure, and truly closing that door would rule out advertisement insertion, which I don't think is going to happen anytime soon!
Jonathan Marsh, Director Mashup Technologies, jonathan at wso2 dot com