Disclaimer: I am not a subject matter expert in DOM-based XSS (D-XSS). In fact, I have yet to see an exploitable D-XSS flaw in all my years of application security testing. However, I have a curious mind and love code, so I am always looking to learn more about web application flaws and uncover new ways to approach finding and exploiting them. That being said, if you have experience dealing with D-XSS and would like to contribute to this topic, whether to correct an inaccuracy in this article or provide insight, please send me an email or tweet. I welcome and appreciate all input.
The best way to learn about a web application flaw is to experience the flaw from the position of the developer and the attacker. This can be done by conducting the following exercises.
- Write an application that intentionally implements the flaw in a realistic scenario.
- Practice exploiting the application through modern day browsers.
- Modify the application to successfully mitigate the flaw.
I make a habit of doing this for every type of flaw that gets discovered and have found that it helps to truly understand the flaw and how to prevent it through writing secure code. Repeating this exercise routinely sharpens my understanding of the flaw and provides insight into how exploit payloads are handled by modern browsers.
A few days ago I decided to revisit DOM-based XSS, as things have changed considerably from a browser perspective since I last played with the flaw. For those who are not familiar with D-XSS, it is a flaw that occurs when a developer creates dynamic content from pieces of the DOM that can be easily manipulated by the user. D-XSS differs from other types of XSS in the following ways:
- Reflected and Stored XSS
- The payload is sent to the server, processed, and used by the application in a response.
- The flaw exists in the server-side code.
- DOM-based XSS
- The payload doesn't have to be sent to the server to exploit the flaw.
- The flaw exists in the client-side code.
You are here: <span id="location"></span> <script> var loc = document.location.href; document.getElementById("location").innerHTML = loc; </script>
In this example, the developer reads the document.location.href DOM attribute and assigns it to a variable named loc. The developer then modifies the DOM by setting the innerHTML value of a span element to the value of the loc variable. The URL can now be used to inject malicious client-side code into the page via D-XSS.
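The standard mitigation is to encode user-controlled data before writing it into the page (or to assign it via textContent rather than innerHTML). Below is a minimal sketch of such an encoder, using a hypothetical helper named escapeHtml that is not part of the original example:

```javascript
// Hypothetical helper: HTML-encode an untrusted string so it cannot
// break out of its markup context when written into the page.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// The vulnerable assignment from the example,
//   document.getElementById("location").innerHTML = loc;
// would instead become:
//   document.getElementById("location").innerHTML = escapeHtml(loc);
console.log(escapeHtml("http://host/page#<script>alert(42)</script>"));
```

Note that the order of replacements matters: the ampersand must be escaped first so it does not double-encode the entities produced by the later replacements.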
Similar techniques are often used to parse parameter values from the URL and update the client UI in applications that make few synchronous requests to the server due to the use of AJAX. In years past, this behavior made it quite simple to set a parameter value to valid HTML content that would be parsed and added to the page, leading to exploitable D-XSS flaws. Exploiting these D-XSS flaws was as easy as injecting a <script> HTML element into the parsed parameter's value. Consider the following example code and exploit.
Hello <script> var name = document.URL.substring(document.URL.indexOf("name=")+5); document.write(name + "!"); </script>
In this example, the developer parses the value of the "name" parameter from the document.URL DOM attribute. The developer then writes the parameter value directly to the page as part of a greeting. The "name" parameter, and anything appearing after it in the URL, can now be used to inject malicious client-side code into the page via D-XSS.
Modern browsers have begun protecting users and developers from this type of vulnerability by encoding DOM objects that contain input from the client. However, developers still want the ability to parse DOM objects to create dynamic client-side content. Given the current browser controls, if a parameter value includes anything but valid URL characters, the value is URL encoded, giving developers URL-encoded strings to work with rather than unencoded plain text. To compensate, developers decode these values with client-side decoding functions such as decodeURIComponent() before using them.
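The risk introduced by that decode step can be demonstrated outside the browser: URL encoding neutralizes the markup characters, and the decode restores them. A quick sketch (the payload string is illustrative):

```javascript
// A URL-encoded payload, as the browser's encoding would deliver it
// to the client-side parser.
var encoded = "%3Cscript%3Ealert(42)%3C%2Fscript%3E";

// The encoding has neutralized the markup characters...
console.log(encoded.indexOf("<")); // → -1 (no raw '<' present)

// ...but a developer-side decode restores the original payload, which
// becomes dangerous again if it is later written into the DOM.
var decoded = decodeURIComponent(encoded);
console.log(decoded); // → <script>alert(42)</script>
```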
The hash (#) character in a URI denotes the beginning of a URI fragment. According to RFC 3986, clients are not supposed to send URI fragments to the server, as the client should recognize that they reference a resource secondary to the current, or primary, resource. What does this mean for D-XSS? First, the fragment is stored in the DOM as a part of the document.location object, as well as in the document.URL attribute. If a developer parses either of these elements, the fragment will be included. Depending on how the developer parses the URL to extract parameter values, the use of a hash may have no effect on the parser, allowing an attacker to use a hash to inject the payload into the URL while preventing the payload from being sent to the server, where it might be scrutinized. Below is the same example as before, but the exploit is changed by introducing the hash character.
Hello <script> var name = document.URL.substring(document.URL.indexOf("name=")+5); document.write(name + "!"); </script>
In this example, the parameter value of Tim is still sent to the server, but Tim#<script>alert(42)</script> is parsed from the document.URL DOM attribute and added to the HTML of the page, exposing the target to the payload. This exploit bypasses any server-side mitigation of D-XSS.
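The example's parsing logic can be reproduced as a standalone function to show exactly what the client extracts. This is a sketch of the same substring/indexOf approach, run against a hypothetical URL:

```javascript
// Reproduces the example's parsing logic against a full URL string,
// the way document.URL would present it to the client-side code.
function parseName(url) {
  return url.substring(url.indexOf("name=") + 5);
}

var url = "http://example.com/page?name=Tim#<script>alert(42)</script>";

// The server only ever receives "?name=Tim" -- the fragment stays
// client-side -- but the naive parser grabs everything after "name=":
console.log(parseName(url)); // → Tim#<script>alert(42)</script>
```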
The second impact the hash character has on D-XSS is that not all browsers treat URIs and URI fragments the same way. I tested Internet Explorer 11, Chrome v33, and Firefox v27 by using the above vulnerable code snippets and the following exploit payload: ?<b>Tim</b>=<b>Tim</b>#<b>Tim</b>. This payload tests for encoding in the parameter name, parameter value, and URI fragment sections of the URL. My testing yielded the following results:
- Internet Explorer 11 does not encode anything.
- Chrome v33 does not encode the URI fragment portions of the URL.
- Firefox v27 encodes everything.
Therefore, if our target is using Chrome or Internet Explorer, we can use the hash character to inject D-XSS payloads without requiring the developer to decode the injectable parameter value prior to updating the DOM, all while bypassing server-side mitigations.
The most obvious challenge in preventing username harvesting on registration systems is that the application must ask for a unique piece of information with which to identify the applicant. In most cases, this piece of information is the username. If we enforce this uniqueness during the traditional registration process and provide visual feedback, then we create the possibility of username harvesting.
The typical user account registration system asks the applicant to provide all of the information required to create an account on a registration page. When the registration page is submitted, the application validates the uniqueness of the username. The application then responds with one of the following messages:
- An account with matching data already exists.
- The account is created.
- An activation link has been sent to the email address provided in the registration data.
This behavior can be leveraged to harvest valid users of the application by attempting to register accounts with suspected usernames and analyzing the responses. There are several traditional defenses to this type of attack on registration pages:
- CAPTCHAs. CAPTCHAs can be used to slow automated attacks on this behavior. However, an attacker can still leverage this vulnerability over time, attempt to bypass the CAPTCHA system, or script through the CAPTCHA restriction using a third party CAPTCHA answering service.
- Blocking. Blocking at a lower level of the OSI model can also be used to prevent automated attacks on this behavior. However, if the blocking system is not implemented correctly, it can lead to an unintentional Denial-of-Service vulnerability. In addition, blocks that target a source IP address are easily circumvented by spreading requests across open proxies.
- Approval. A system requiring the manual approval of new accounts by a system administrator is another way to mitigate attacks on this behavior. However, this adds the element of human interaction which has administrative ramifications in terms of time required to monitor and manage the system, as well as possible exploitation of the approving authority.
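The harvesting technique itself is simple enough to sketch. The register function below is a hypothetical stand-in for a registration endpoint that leaks account existence through its response message; the attacker only has to compare responses:

```javascript
// Hypothetical stand-in for a registration endpoint that leaks
// account existence through its response message.
function register(username, existingUsers) {
  return existingUsers.has(username)
    ? "An account with matching data already exists."
    : "An activation link has been sent to the email address provided.";
}

// Harvest valid usernames by probing the endpoint and keeping every
// candidate whose response indicates an existing account.
function harvest(candidates, existingUsers) {
  return candidates.filter(function (name) {
    return register(name, existingUsers).indexOf("already exists") !== -1;
  });
}

var existing = new Set(["alice", "bob"]);
console.log(harvest(["alice", "bob", "carol"], existing)); // → [ 'alice', 'bob' ]
```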
A quick solution to this problem is to discard custom usernames and enforce the use of an email address as the unique ID for all accounts. Then, respond to registration requests with a generic message, such as "An email regarding the steps remaining to register has been sent to the provided email address.", regardless of whether the information provided matches an existing account. If an account matching the email address already exists, a notice is sent. If a matching account does not exist, a one-time-use account activation link is sent. The account should not be created until activation has occurred.
A variation of the previous solution changes the order of events. Instead of gathering applicant information in the registration form, it requires only an email address. When the email address is submitted, the application responds with a similar generic message, regardless of whether the address matches an existing account. If an account matching the email address already exists, a notice is sent. If a matching account does not exist, a one-time-use registration link is sent to the address so the user can complete the registration process.
The above solutions are very similar, with the main difference being when the email is sent. In the first solution, the email is sent after the applicant's information has been given, so the email contains an activation link. In the second solution, the email is sent before the applicant's information has been given, so the email contains a registration link. Either solution solves the problem, but depending on the current registration system, one solution may be easier to implement than the other. The bottom line is, there are two keys to making a registration system impervious to harvesting attacks:
- Force the use of an email address as the username, or unique ID, for user accounts.
- Provide a consistent response to registration requests.
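Those two keys can be sketched as a single handler. The mail helpers (sendNotice, sendActivationLink) are hypothetical; the point is that the branch happens out-of-band via email while the caller-visible response stays constant:

```javascript
// Hypothetical mail helpers -- here they just record what was sent.
var outbox = [];
function sendNotice(email) { outbox.push({ to: email, kind: "notice" }); }
function sendActivationLink(email) { outbox.push({ to: email, kind: "activation" }); }

// Registration handler: the existing/new branch happens out-of-band
// (via email), while the HTTP response is identical for every request.
function register(email, existingAccounts) {
  if (existingAccounts.has(email)) {
    sendNotice(email);
  } else {
    sendActivationLink(email);
  }
  return "An email regarding the steps remaining to register " +
         "has been sent to the provided email address.";
}

var accounts = new Set(["alice@example.com"]);
// Same response whether or not the account exists:
console.log(register("alice@example.com", accounts) === register("new@example.com", accounts)); // → true
```

Because the response is identical in both branches, probing the endpoint tells an attacker nothing about which email addresses already have accounts.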
Enforcing the use of an email address as the username provides benefits in other areas as well. Developers won't be required to maintain reversible versions of passwords, as resetting a password becomes as simple as sending a password reset link to the registered email address. The need for a password could even be removed entirely by implementing a login system where users authenticate via a one-time-use link sent to them after submitting their email address to a login form. This places the burden of authentication on the email system, which in most enterprises is managed internally or by a trusted third party. There's also the administrative benefit that an email address is much easier for users to remember across multiple applications than custom usernames.
Just something to consider the next time someone asks for a secure way to handle user registration.
Some users may have noticed an unusually high number of bugs over the past couple of weeks. We've been making some sweeping changes to the guts of the framework during that time, and since the entire user population is the beta test community, you have been instrumental in helping spot and fix issues. For that I thank you. We've been trying to fix bugs as fast as they've been reported, so at this point, I believe most issues have been identified and resolved. However, please continue to report any strange behavior or bugs.
Also taking place over the past several weeks was voting for the 2013 Toolsmith Tool of the Year and the ToolsWatch 2013 Top Security Tools competitions. Users voted the Recon-ng framework the #1 2013 Toolsmith Tool of the Year and #7 in the ToolsWatch Top Security Tools of 2013 (ahead of Metasploit, WOW!). This acknowledgment of the Recon-ng framework as a popular and useful addition to professionals' toolsets across the industry validates the time that's been poured into its development. After all, the good of the industry and making an overall positive impact on security is the reason I do this.
In appreciation of your votes, and because I just generally enjoy working on Recon-ng, quite a few new features have been added to the framework. Below is a quick round-up of the new features.
Browser Emulation via Mechanize
Many users may have noticed the obvious absence of harvesting modules for resources like Facebook and Pastebin. This has been due to the way these web sites require true browser functionality to render the desired content. None of the built-in web request modules (urllib, urllib2, httplib) has the ability to do this natively. Therefore, the popular Mechanize browser emulator package has been added to the framework. Now, any resource that requires true browser-like functionality to access data can be leveraged by the framework.
Persistent Module Options (Migration Required)
We've been receiving requests for quite some time to make module options persist across sessions like the global options always have. One of the sweeping changes made over the past several weeks was an overhaul of the options management system. Now, options in all contexts are stored and loaded dynamically. Therefore, if a module inherits a global option but you want it set to something else, you won't have to reset it every time you return to that module. This also makes debugging much easier for developers working with specific test scenarios. All options are still stored per workspace, so there is no danger of information leakage between engagements. In order for this new feature to work properly in existing workspaces, remove the old "config.dat" file in the workspace and allow it to be dynamically regenerated the next time the workspace is loaded. A huge thanks to Ethan Robish for making this feature a reality.
Since the inception of the framework, I've wanted the ability to spool output to a local file for data retention, proof of performance, and general CYA reasons. However, we tried multiple implementations in testing and never liked any of them enough to push to the master branch. We also tried all of the built-in OS tools, like "tee" and "script", but they break functionality such as tab completion and muck with output formatting. A couple of weeks ago, a brilliant contributor, Quentin Kaiser, put me on to a technique that looked promising. A few nights later, a solution was pushed to the master branch that accomplishes spooling quite well. Spooling has been implemented as the "spool" command, which works very similarly to the "record" command by giving users the ability to start and stop spooling, or check the current spooling status. The destination file for the spooled data is set as a parameter of the "spool start" command: spool start <filename>.
JSON Support for Requests
It wasn't until I recently attempted to send a POST request with a JSON payload that I realized the custom requests method built for Recon-ng didn't support anything but standard POST content subtypes. Therefore, support for JSON content subtypes was implemented by adding a "content" parameter to the "request" method that accepts a string identifying the content subtype. While only JSON is currently supported, this implementation allows for other content subtypes to be easily added at a later time. In addition to this, the custom requests method was separated from the framework and placed into a module called "dragons.py" (as in, "here they lie"). This was done so that I, or anyone else, can leverage its functionality in other projects, as it does such a good job of simplifying the many things that can be done with web requests and urllib2.
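The idea can be illustrated with a small request-body builder. Recon-ng's actual method is Python and lives in the framework itself; the function name and shape below are hypothetical and only show how a content parameter selects the serialization and Content-Type:

```javascript
// Hypothetical sketch: select the body serialization by content
// subtype, mirroring the "content" parameter described above.
function buildBody(payload, content) {
  if (content === "json") {
    return {
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    };
  }
  // default: standard urlencoded POST content
  return {
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams(payload).toString(),
  };
}

console.log(buildBody({ q: "test" }, "json").body); // → {"q":"test"}
console.log(buildBody({ q: "test" }, "").body);     // → q=test
```

Keeping the subtype a simple string parameter is what makes it easy to add further content subtypes later without changing the method's signature.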
Revamped HTML Report
If you're interested in contributing to the framework, please see the issues page for module ideas and feature requests. All contributions are welcome from anyone with any level of Python experience, including no experience. I am in this to teach as much as I am to develop, and I thoroughly enjoy helping those new to Python. Thanks again, and enjoy the framework.
I've been working on some really nice features for the Recon-ng framework that I was finally able to push up to the master branch of the repo last night. Below is a quick round-up of the new features, migration requirements, and information about how the changes will affect user experience.
Home Folder Migration
To this point, all user-generated data has been saved within the Recon-ng directory structure. While this worked fine in situations where users have root privileges, the framework was unusable in restricted user environments. Therefore, I decided to standardize the framework according to best practices and make use of "home" folders. Using the "home" folder provides several key advantages: it avoids write errors in restricted user environments and allows for segregated multi-user environments. I began the "home" folder migration several weeks ago by adding the ability to build a separate module tree underneath a user's "home" directory for custom modules (see the wiki for details). As of today, the migration is complete.
After pulling down the new version of the framework, users will notice that none of their workspaces or API key data is available. Don't worry. It's still there. It just needs to be migrated to the new location by following these steps.
- Launch the framework. The framework will detect whether or not migration has occurred. If it has not, the framework will build the necessary directory structure in the "home" (~) folder.
- Exit the framework.
- Move all workspaces from the "recon-ng/workspaces/" directory to the "~/.recon-ng/workspaces/" directory.
- Move "recon-ng/data/keys.dat" to "~/.recon-ng/keys.dat".
Record Command Changes
I wanted to give users more flexibility over where commands are recorded by the "record" command without having to set a global framework option. Therefore, I modified the "record" command to require an additional resource filename parameter for the "record start" command: record start <filename>. Now users can specify the resource file at runtime rather than having to set a global option.
Something didn't feel right about having the workspace as a global framework option. Therefore, I separated workspace control from the global options by implementing a new "workspace" command to the global context. Not only does this provide segregation, but it also allows for flexibility of workspace control through future expansion of the "workspace" command.
Both the "rec_file" and "workspace" global options were removed from the global options list to support the above changes. As a result, the saved "config.dat" files in each workspace must be changed to remove these options or the framework will behave unpredictably. This can be done in one of two ways.
- Remove the "config.dat" file from all workspaces. A new "config.dat" file will be recreated the next time the workspace is loaded.
- Edit the "config.dat" file in all workspaces and remove the "rec_file" and "workspace" options from the stored JSON string.
I conducted a Twitter poll asking users of the framework to choose which they preferred between two prompt formats: the current recon-ng > or a proposed [workspace] recon-ng >. Users of the framework unanimously chose the proposed prompt. However, after seeing what the prompt looked like when a module was loaded, [workspace] recon-ng [module] >, I elected to make it [recon-ng][workspace][module] >. I tried many variations, but this one seemed the most aesthetically pleasing. Thanks to all those who provided feedback.
Testing of the new features has been limited. Please report any bugs so that I can promptly address them. Thank you, and enjoy.
Anyone who has been doing penetration tests for a reasonable amount of time has at some point encountered a restricted user environment. A restricted user environment is a locked-down, and usually shared, environment that restricts users to very limited functionality. These configurations are commonly seen in public kiosks and shared terminal servers.
The first instinct for achieving a shell in one of these environments is to simply run "cmd.exe". In most cases, it's not that easy. Finding a way to run "cmd.exe" can be challenging: the typical routes, such as the "Run" command, Windows Explorer, and the "Programs" menu, are usually disabled. But there are ways to do it. Below I cover one technique that I have been using for several years and have not seen documented elsewhere. It leverages the Internet Explorer Developer Tools. Let me show you how it works.
Most restricted user environments exist solely to provide functionality that is accessed via a web browser. Therefore, Internet Explorer is authorized in just about every restricted Windows environment. While not guaranteed, it has been available in every such environment that I have encountered to date. Built into Internet Explorer is the feature that we are going to leverage, a feature named Developer Tools.
The Internet Explorer Developer Tools provide similar functionality to that of Chrome and Firefox. However, there is some additional functionality that becomes quite beneficial in solving our current predicament. Once the Developer Tools panel is loaded by pressing the "F12" key or clicking on "Developer Tools" in the "Tools" menu, a click on the "File" menu of the Developer Tools panel reveals an option named "Customize Internet Explorer view source".
This menu option allows the user to select which program on the local system is used to load the HTML source of a web page in Internet Explorer when the "View Source" menu item is selected from the "Page" menu. The first instinct of any penetration tester should be to browse to "cmd.exe", select it as the program, click "OK", then view the source of any web page. While this sounds like a decent plan, there are two issues that must be addressed before we can achieve shell this way.
The first issue is that in restricted user environments, direct access to the contents of the system drive is usually disallowed. The solution to this problem is very simple. By typing the drive letter of the system drive in the "File name" box and hitting the "Enter" key, we are greeted with the contents of the drive.
At this point, we browse to the "C:\Windows\System32" folder, select "cmd.exe", and view the source of any web page. We are promptly greeted with the following result.
This is the second issue. Administrators have become savvy to the use of the command prompt by those looking to conduct nefarious activities on their tightly controlled system, and have leveraged local security policy to disable it. Fortunately, solving this issue is almost as easy as the first, but with a little twist.
PowerShell fans everywhere should be screaming at me through their computer screens right about now. The partial answer here is to try to execute PowerShell rather than "cmd.exe", as it is often forgotten by administrators and is not restricted by the security policy setting that explicitly disables the command prompt.
So we use the "Customize Internet Explorer view source" approach from above to browse to "C:\Windows\System32\WindowsPowerShell\v1.0", select "powershell.exe", and again view the source of any web page. This time around, we are greeted with the following result.
This image was difficult to capture because, unfortunately, PowerShell doesn't understand the use of a cached HTML file name as syntactically correct input, fails, and exits without providing access to the shell. Bummer. However, there is still another option. Look back 3 images and notice the "powershell_ise.exe" file. The "powershell_ise.exe" program is the PowerShell Integrated Scripting Environment (ISE). It just so happens that by using this as our program to view the source of web pages in Internet Explorer, we are greeted with the following result.
A friendly PowerShell IDE! We see our HTML loaded into the script editor and an interactive PowerShell prompt at the bottom of the window. The output from our commands populates the middle pane. This should be sufficient to move forward, but if you would rather have a raw PowerShell prompt, simply click the PowerShell button at the top of the page and you have your wish.
At this point, we have accomplished our goal of gaining shell access in the restricted user environment. We can now use PowerSploit to conduct all kinds of nastiness on the target machine and take measures to elevate privilege.
From the defensive perspective, how do we prevent this type of attack? I am no Active Directory expert, but I am intimately familiar with the concepts of whitelisting and blacklisting. There are security policy rules that allow explicit filtering of the programs accessible in restricted user environments.
I recommend using one of these security policy rules, preferably the whitelist rule, to ensure that binary executables which can result in a shell are inaccessible to the user.