Cross-site scripting is the name given to web site vulnerabilities arising from the embedding of malicious HTML tags in an HTML document that is generated dynamically on the server. By entering HTML tags into an input field of a web site (via an HTTP POST or GET, i.e. as a form post or through the query string), an attacker can cause those tags to be included in an HTML page served from the web site.
Web pages contain both text and HTML markup that is generated by the server and interpreted by the client browser. Servers that generate static pages have full control over how the client will interpret the pages sent by the server. However, servers that generate dynamic pages do not have complete control over how their output is interpreted by the client.
These attacks do not affect the security of the server itself, as they only create malicious tags or code that is sent to the client. However, these tags can be used to annoy users with large graphics and/or sound files, change the expected look of a web page, change the functionality of the web page and thus capture sensitive user information, etc.
The heart of the issue comes back to the age-old knowledge that you cannot trust the client for anything.
Sites that host discussion groups with web interfaces have long guarded against a vulnerability where one client embeds malicious HTML tags in a message intended for another client. For example, an attacker might post a message like:
Hello message board. This is a message.
<script>malicious code</script> This is the end of my message.
When a client with scripts enabled in their browser reads this message, the malicious code may be executed unexpectedly. Scripting tags that can be embedded in this way include <script>, <object>, <applet>, and <embed>.
This type of vulnerability looks easy to stop by simply filtering or encoding user input for the offending tags; however, as we will learn, the issue is a little more complicated than this.
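As a sketch of the simple approach (in Python, purely illustrative), the naive filter might look like the function below. As the rest of this article shows, this kind of blacklist filtering is not sufficient on its own:

```python
import re

def strip_script_tags(text):
    """Naive blacklist filter: remove <script>...</script> blocks.

    This is the simple filtering described above; it does NOT catch
    other dangerous tags, attribute-based scripts, or alternate
    character encodings of "<".
    """
    return re.sub(r"(?is)<script\b.*?</script\s*>", "", text)

print(strip_script_tags("Hello message board. <script>malicious code</script> Bye."))
```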
Many Internet web sites overlook the possibility that a client may send malicious data intended to be used only by itself. This is an easy mistake to make. After all, why would a user enter malicious code that only the user will see?
However, this situation may occur when the client relies on an untrustworthy source of information when submitting a request. For example, an attacker may construct a malicious link such as:
<a href="http://example.com/comment.cgi?mycomment=<script>malicious code</script>">Click here</a>
When an unsuspecting user clicks on this link, the URL sent to example.com includes the malicious code. If the web server sends a page back to the user including the value of mycomment, the malicious code may be executed unexpectedly on the client.
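A minimal sketch of how a handler like comment.cgi could defend itself (the handler and parameter names are taken from the example above; the fix shown here, HTML-encoding the reflected value, is discussed in more detail later in this article):

```python
import html
from urllib.parse import parse_qs

def render_comment(query_string):
    """Hypothetical handler for comment.cgi: reflect 'mycomment' safely.

    html.escape() replaces the HTML-special characters (&, <, >, quotes)
    with entity references, so the browser displays them as text rather
    than interpreting them as markup.
    """
    params = parse_qs(query_string)
    comment = params.get("mycomment", [""])[0]
    return "<p>Your comment: %s</p>" % html.escape(comment)
```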
The inline frame tag <iframe>, added to Microsoft IE in version 4, allows the inclusion of a remote document within a parent document: for example, <iframe src="http://example.com/page.html"></iframe>
The script tag can also be used to include a remote script block in the parent document: for example, <script src="http://example.com/script.js"></script>
In addition to scripting tags, other HTML tags such as the <form> tag have the potential to be abused by an attacker. For example, by embedding malicious <form> tags at the right place, an intruder can trick users into revealing sensitive information by modifying the behavior of an existing form.
So far the issue seems pretty simple: filter the input to remove any tags that could be used for a malicious purpose. However, the problem is a little deeper than that, as I will explain below.
Any data inserted into an output stream by a server is presented to the client as originating from that server, even if it actually originated from a user. Web developers must evaluate whether their sites will send untrusted data as part of an output stream, or whether they will validate all data before it is included.
Untrusted input can come from, but is not limited to, query string parameters, form fields, cookies, and data previously stored from any of these sources.
A combination of steps must be taken to mitigate this vulnerability. These steps include explicitly specifying the character encoding of each page, encoding dynamic output, and filtering/validating untrusted input.
If the web server does not specify which character encoding is in use, the client cannot tell which characters are special. Web pages with unspecified character encoding work most of the time because most character sets assign the same characters to byte values below 128. The question arises with byte values above 128: which of these are special?
Some 16-bit character-encoding schemes have additional multi-byte representations for special characters such as "<". Some browsers recognize this alternative encoding and act on it. This is correct behaviour, but it makes attacks using malicious scripts much harder to prevent. The server simply doesn't know which byte sequences represent the special characters.
Web servers should set the character set, then make sure that the data they insert is free from byte sequences that are special in the specified encoding. For example:
<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" /> <title>Using ISO-8859-1 Charset</title> </head> ....
The <meta> tag in the <head> section of this sample HTML forces the page to use the ISO-8859-1 character set encoding.
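The charset can also be declared in the HTTP Content-Type header itself, which takes precedence over the <meta> tag. A minimal sketch as a Python WSGI application (the framework and page content are illustrative):

```python
def app(environ, start_response):
    """Serve a page whose charset is declared in the HTTP response header."""
    body = "<html><body>Using ISO-8859-1 Charset</body></html>"
    start_response("200 OK",
                   [("Content-Type", "text/html; charset=ISO-8859-1")])
    # Encode the body in the same charset we declared to the client.
    return [body.encode("iso-8859-1")]
```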
The next two steps, encoding and filtering, first require an understanding of "special characters". The HTML specification determines which characters are "special", because they have an effect on how the page is displayed. However, many web browsers try to correct common errors in HTML. As a result, they sometimes treat characters as special when, according to the specification, they are not.
In body content, the "<" character is special because it introduces a tag, and the "&" character is special because it introduces a character entity reference. Each character in the ISO-8859-1 specification can be encoded using its numeric entity value.
The following example uses the copyright mark in an HTML document:
&#169; 2001 Mauldeth Int.
The copyright character has the numeric value 169, and using the &# syntax allows the author to insert encoded characters that will be interpreted by the browser.
Encoding untrusted data has benefits over filtering untrusted data, including the preservation of visual appearance in the browser. This is important when special characters are considered acceptable.
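In Python, for example, the standard library's html.escape() performs this kind of entity encoding (a sketch; other platforms have equivalent routines):

```python
import html

comment = '<script>malicious code</script> & a "quoted" remark'
# quote=True also encodes quote characters, which matters when the
# value is inserted inside an HTML attribute.
encoded = html.escape(comment, quote=True)
print(encoded)
```

The encoded text still displays as the user wrote it, but the browser no longer interprets it as markup.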
Unfortunately, it is unclear whether there are any other characters or character combinations that can be used to expose other vulnerabilities. The recommended method is to select the set of characters that is known to be safe rather than excluding the set of characters that might be bad.
For example, a form element that is expecting a person's age can be limited to only accept numeric characters between a certain range (1 to 99 would probably do). There is no reason for this age element to accept any letters or other special characters. Using this positive approach of selecting the characters that are acceptable will help to reduce the ability to exploit other yet unknown vulnerabilities.
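As a sketch of this positive (allowlist) approach in Python, using the age example above (the function name and range are illustrative):

```python
def validate_age(raw):
    """Accept only a string of digits whose value is between 1 and 99."""
    if raw.isdigit() and 1 <= int(raw) <= 99:
        return int(raw)
    return None  # anything else is rejected outright
```

There is no need to enumerate every dangerous character: anything outside the known-good set is simply rejected.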
The filtering process can be done as part of the data input process, the data output process, or both. I would recommend filtering the data during the input process: it only needs to be done once, and (if the input is going to be stored in a database) it can be included as part of your database input validation. To continue the example above, an age would be expected to be held in a numeric database field, so trying to insert a string into this field would cause an input error anyway. The only disadvantage of filtering on input is that you must make sure that all input is filtered at every entry point to the storage device.
Cookies are often overlooked. Since cookies are just text held on the client's machine, it is easy for a client to alter a cookie to include malicious content or to send bogus information in their HTTP requests. Like anything else from the client, cookies should not be trusted, and the information within them should be verified before being used.
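A sketch of verifying a cookie value before use, assuming a hypothetical session identifier format of 32 hexadecimal characters:

```python
import re

# Hypothetical expected format: exactly 32 lowercase hex characters.
SESSION_ID = re.compile(r"[0-9a-f]{32}")

def session_from_cookie(value):
    """Return the session id only if it matches the expected format."""
    if SESSION_ID.fullmatch(value):
        return value
    return None  # tampered or malformed cookie value
```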
Since the issue isn't just about scripting, and there isn't necessarily anything cross-site about it, the name Cross Site Scripting is a little ambiguous. It was coined early on, when the problem was less understood, and it has stuck; it's as simple as that.