I've been working on some really nice features for the Recon-ng framework that I was finally able to push up to the master branch of the repo last night. Below is a quick round-up of the new features, migration requirements, and information about how the changes will affect the user experience.
Home Folder Migration
To this point, all user-generated data has been saved within the Recon-ng directory structure. While this worked fine in situations where users have root privileges, the framework was unusable in restricted user environments. Therefore, I decided to standardize the framework according to best practices and make use of "home" folders. Using the "home" folder provides several key advantages: it avoids write errors in restricted user environments and allows for segregated multi-user environments. I began the "home" folder migration several weeks ago by adding the ability to build a separate module tree underneath a user's "home" directory for custom modules (see the wiki for details). As of today, the migration is complete.
After pulling down the new version of the framework, users will notice that none of their workspaces or API key data is available. Don't worry. It's still there. It just needs to be migrated to the new location by following these steps (a shell sketch of the moves follows the list).
- Launch the framework. The framework will detect whether or not migration has occurred. If it has not, the framework will build the necessary directory structure in the "home" (~) folder.
- Exit the framework.
- Move all workspaces from the "recon-ng/workspaces/" directory to the "~/.recon-ng/workspaces/" directory.
- Move "recon-ng/data/keys.dat" to "~/.recon-ng/keys.dat".
Record Command Changes
I wanted to give users more flexibility over where commands are recorded by the "record" command without having to set a global framework option. Therefore, I modified the "record" command to require an additional resource filename parameter: record start <filename>. Now users can specify the resource file at runtime rather than having to set a global option.
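For example, assuming the "record stop" counterpart behaves as before, a session might look like this (the resource file path is arbitrary):
recon-ng > record start /tmp/session.rc
recon-ng > record stop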
Something didn't feel right about having the workspace as a global framework option. Therefore, I separated workspace control from the global options by adding a new "workspace" command to the global context. Not only does this provide segregation, but it also allows for flexibility of workspace control through future expansion of the "workspace" command.
Both the "rec_file" and "workspace" global options were removed from the global options list to support the above changes. As a result, the saved "config.dat" files in each workspace must be changed to remove these options or the framework will behave unpredictably. This can be done in one of two ways.
- Remove the "config.dat" file from all workspaces. A new "config.dat" file will be recreated the next time the workspace is loaded.
- Edit the "config.dat" file in all workspaces and remove the "rec_file" and "workspace" options from the stored JSON string.
I conducted a Twitter poll asking users of the framework to choose between two prompt formats: the current recon-ng > or a proposed [workspace] recon-ng >. Users of the framework unanimously chose the proposed prompt. However, after seeing what the prompt looked like with a module loaded, [workspace] recon-ng [module] >, I elected to make it [recon-ng][workspace][module] >. I tried many variations, but this one seemed to be the most aesthetically pleasing. Thanks to all those who provided feedback.
Testing of the new features has been limited. Please report any bugs so that I can promptly address them. Thank you, and enjoy.
Anyone who has been doing penetration tests for a reasonable amount of time has at some point encountered a restricted user environment. A restricted user environment is a locked-down, and usually shared, environment which restricts users to very limited functionality. These configurations are commonly seen in public kiosks and shared terminal servers.
The first instinct to achieve shell in one of these environments is to simply run "cmd.exe". In most cases, it's not that easy. Finding a means to run "cmd.exe" can be challenging. The typical routes such as the "Run" command, Windows Explorer, and "Programs" menu are usually disabled. But there are ways to do it. Below I cover one such technique I have been using for several years and have not seen documented elsewhere. It leverages Internet Explorer Developer Tools. Let me show you how it works.
Most restricted user environments exist solely to provide functionality that is accessed via a web browser. Therefore, Internet Explorer is authorized in just about every restricted Windows environment. While not guaranteed, it has been available in every such environment that I have encountered to date. Built into Internet Explorer is the feature we are going to leverage: Developer Tools.
The Internet Explorer Developer Tools provide functionality similar to that of Chrome and Firefox. However, there is some additional functionality that becomes quite beneficial in solving our current predicament. Once the Developer Tools panel is loaded by pressing the "F12" key or clicking on "Developer Tools" in the "Tools" menu, a click on the "File" menu of the Developer Tools panel reveals an option named "Customize Internet Explorer view source".
This menu option allows the user to select which program on the local system is used to load the HTML source of a web page in Internet Explorer when the "View Source" menu item is selected on the "Page" menu. The first instinct of any penetration tester should be to browse to "cmd.exe", select it as the program, click "OK", then view the source of any web page. While this sounds like a decent plan, there are two issues that must be addressed before we can achieve shell this way.
The first issue is that in restricted user environments, direct access to the contents of the system drive is usually disallowed. The solution to this problem is very simple. By typing the drive letter of the system drive in the "File name" box and hitting the "Enter" key, we are greeted with the contents of the drive.
At this point, we browse to the "C:\Windows\System32" folder, select "cmd.exe", and view the source of any web page. We are promptly greeted with the following result.
This is the second issue. Administrators have become savvy to the use of the command prompt by those looking to conduct nefarious activities on their tightly controlled systems, and have leveraged local security policy to disable it. Fortunately, solving this issue is almost as easy as the first, but with a little twist.
PowerShell fans everywhere should be screaming at me through their computer screens right about now. The partial answer here is to try to execute PowerShell rather than "cmd.exe", as it is often forgotten by administrators and is not restricted by the security policy setting that explicitly disables the command prompt.
So we use the "Customize Internet Explorer view source" approach from above to browse to "C:\Windows\System32\WindowsPowerShell\v1.0", select "powershell.exe", and again view the source of any web page. This time around, we are greeted with the following result.
This image was difficult to capture because, unfortunately, PowerShell doesn't accept the cached HTML file name as syntactically correct input; it fails and exits without providing access to the shell. Bummer. However, there is still another option. Look back three images and notice the "powershell_ise.exe" file. The "powershell_ise.exe" program is the PowerShell Integrated Scripting Environment (ISE). It just so happens that by using it as our program to view the source of web pages in Internet Explorer, we are greeted with the following result.
A friendly PowerShell IDE! We see our HTML loaded into the script editor and an interactive PowerShell prompt at the bottom of the window. The output from our commands populates the middle pane. This should be sufficient to move forward, but if you would rather have a raw PowerShell prompt, simply click the PowerShell button at the top of the page and you have your wish.
At this point, we have accomplished our goal of gaining shell access in the restricted user environment. We can now use PowerSploit to conduct all kinds of nastiness on the target machine and take measures to elevate privilege.
From the defensive perspective, how do we prevent this type of attack? I am no Active Directory expert, but I am intimately familiar with the concepts of whitelisting and blacklisting. There are security policy rules that allow for explicit filtering of accessible programs in restricted user environments.
I recommend using one of these security policy rules, preferably the whitelist rule, to ensure that binary executables which can result in a shell are inaccessible to the user.
A co-worker of mine, Ethan Robish, and I encountered several complicated CSRF situations for which he came up with a brilliant solution, one worthy of recording here for future reference.
Let's say you encounter a situation where an attack requires multiple CSRFs in order to conduct some sort of undesirable action, e.g. transferring funds between accounts or manipulating a forgot-password system. This is easily accomplished if the target accepts GET requests. The attacker can set up a couple of dummy images and launch multiple CSRF requests with ease. However, what if the target application only accepts POST requests? While this complicates things, the attack can still be accomplished as long as the attacker doesn't mind engaging the target user once for each POST request. But what if the attacker has only one opportunity to engage the target user? This is the situation that Ethan and I were faced with.
Rather than blindly explain the technique, let's consider the following code that Ethan provides as a template for the attack:
Let's break it down.
And that, my friends, is how we do multi-POST CSRF at Black Hills Information Security. Enjoy the template and please share your success stories and improvements with us.
This is not the first disclosure of multi-POST CSRF. Below is a list of links to similar articles and tools which assist in executing the above attack. We will continue to update this list as we come across additional resources. Enjoy!
Nothing impacts a penetration tester's ability to replicate real-world threats more than a time restriction. However, time restriction is something that penetration testers almost always have to deal with, as most organizations aren't willing to fund open-ended black box testing. Sadly, in most cases, the maturity of network defenses is lacking, reducing the impact of the time restriction. But in other cases, defenders do things right, compounding the issues brought about by time restrictions through the implementation of time-consuming defensive countermeasures. In these cases, penetration testers must find interesting ways to use traditional tools while avoiding detection.
On a recent penetration test, I ran head-on into an Intrusion Prevention System (IPS) that was actively preventing port scanning. The IPS was performing temporary blocks on IP addresses that appeared to be scanning the network from external locations. While it was possible to scan slowly to enumerate ports and services, given the size of the IP space to be scanned, doing so would have taken all of the time allotted for the test without scratching the surface. Therefore, I came up with a methodology for approaching environments like this in the future.
When anti-port scanning countermeasures are in place, options are limited. Slowing down or getting the information from a third party that was able to bypass the countermeasure are typically the only choices. The aforementioned time restriction usually rules out the possibility of slowing down, so port scan data from a third party resource is preferred.
The first place most people go for third party scan data is Shodan. While a fantastic resource for more than just service availability, Shodan limits the number of ports that it scans, leaving out many critical services. While this is acceptable for raw reconnaissance, thoroughness is critical during discovery.
Another great resource for third party scan data is exfiltrated.com. Exfiltrated.com is a search front end to the Internet Census 2012, which leveraged vulnerable embedded devices to create a distributed port scanner and scan the entire Internet for the most popular 1024 ports as designated by Nmap. Using this search engine, users are able to retrieve port scan results for all public-facing IPv4 addresses on the Internet, as seen in 2012. The accuracy of the results may vary depending on the target, but for most corporate networks, the Census is likely to produce reliable results, as external resources typically don't change often and the distributed nature of the Census scanner allowed it to remain undetected by port scanning countermeasures.
Once port scan data is in hand, it is usually safe to begin using Nmap in a very controlled fashion to fingerprint services. In my experience, fingerprinting individual services at T2 (the "polite" timing option) with Nmap is sufficient to avoid detection. The next natural step is to conduct research on the fingerprinted services to identify vulnerabilities and possible exploit vectors. Once again, time becomes an issue. While manual research and exploitation is the stealthiest way to exploit a target, the process can be sped up by restricting a traditional vulnerability scanner and using it to determine the exploitability of identified services.
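To put the Nmap fingerprinting step in concrete terms, a single-service version scan at polite timing might look like the following (the target and port are placeholders):
# version-detect one known-open port, skip host discovery, keep the timing polite
nmap -Pn -sV -T2 -p 443 <target>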
Restricting a vulnerability scanner to only assess the security of a few services seems like something all scanners should be equipped to do. This is not the case. Take Nessus, for instance. Nessus does not allow explicit port restrictions for the plugin scanner. Nessus allows port scanning to be restricted to specific ports, but when the plugin scanner kicks off, tcpdump shows that Nessus still generates traffic on ports that are not designated for port scanning. After exchanging emails and tweets with multiple people from Tenable, it appears that while port scanning can be restricted, the Nessus plugin scanner will continue to scan ports and services that are associated with enabled plugins. This doesn't make much sense to me and seems awfully inefficient. If the Nessus port scanner reports a port as closed, why would the plugin scanner not cross-reference the port scan results and launch plugins only for available services? In any case, we have options to enforce this kind of restriction ourselves.
The first option is a technique proposed by my good friend Jake Williams. He recommends setting up a virtual machine, preferably local to the Nessus instance, with port forwarding to remote targets for only the desired ports. Then, scan the port forwarding host with Nessus. While a viable technique, this is difficult to implement using a third party hosted Nessus server. This option is good for local Nessus servers, where it is trivial to establish systems and traffic flow around an existing Nessus server.
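As a sketch of one way to build such a forwarder on a Linux host (socat is my own choice here, and the target address and port are placeholders, not anything from the original technique):
# listen locally on 443 and relay each connection to the real target's port 443
socat TCP-LISTEN:443,fork,reuseaddr TCP:203.0.113.10:443
Nessus is then pointed at the forwarding host, so only the forwarded ports are ever reachable.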
The second option is a technique proposed by John Strand. He recommends using iptables to restrict outbound traffic to remote services. This option is good for remote Nessus servers, as everything is self contained and doesn't require control of any additional remote resources.
While using iptables to restrict outbound traffic is a simple idea, its implementation can be a little tricky, as remote Nessus servers require communication over port 22 for SSH and port 8834 for the Nessus daemon. With a little tinkering, iptables flexes its muscles and provides us with a decent solution. Here is an iptables configuration that works well.
iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 8834 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
iptables -A OUTPUT -j DROP
Let's look at the configuration line by line. Notice that all of the rules are applied to the OUTPUT chain. This ensures that the restrictions apply to traffic originating from the machine itself, i.e. the scanner's own traffic.
iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 8834 -m state --state ESTABLISHED,RELATED -j ACCEPT
These two lines ensure that established TCP connections and TCP handshakes originating from remote hosts for services on ports 22 and 8834 are allowed to communicate, without allowing Nessus to initiate sessions over port 22 or 8834. This is required for continued remote administration of the server and access to the Nessus server web interface.
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
This line specifies which remote port traffic is being restricted to. In this example, Nessus is only able to scan remote port 443. Everything else will be restricted by the following lines.
iptables -A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
This line replies to all TCP connection requests not explicitly allowed by the previous rule(s) with a TCP reset. This prevents long waits for TCP timeouts resulting from dropped packets, speeding up the scanner.
iptables -A OUTPUT -j DROP
This line drops all other traffic that was not rejected by the above statement. This typically applies to UDP and ICMP traffic.
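Before kicking off the scan, it's worth confirming that the chain looks the way it should:
# list the OUTPUT chain with rule numbers and packet counters
iptables -nvL OUTPUT --line-numbers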
Once iptables is configured, the scanner must be set up to use this configuration efficiently. Nessus allows users to establish scan policies under the "Policies" tab. A copy of an original policy should be created and the "Port Scanning" settings under "Policy General Settings" should be set similar to the following. These settings configure Nessus to conduct minimal port scanning of only port 443.
At this point, all that is left is creating a scan, configuring it to use the customized policy, and launching it. Then, rinse and repeat for each interesting service.
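When repeating the process for a different service, only the ACCEPT rule needs to change, and the restrictions can be flushed entirely once testing is done. For example (port 25 is just an illustrative choice):
# swap the allowed destination port so only SMTP is reachable for the next scan
iptables -R OUTPUT 3 -p tcp --dport 25 -j ACCEPT
# clear the OUTPUT chain when finished
iptables -F OUTPUT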
Carlos Perez proposes another option for surgical scanning with Nessus. He recommends using the nessus.rules file to create rules which restrict Nessus to only certain IP addresses, ports, and plugins. The nessus.rules file would look something like the following.
# Nessus rules
#
# Syntax: accept|reject address/netmask
#
reject 10.42.123.0/24
reject 192.168.0.1
reject 192.168.0.2
# You can also deny/allow certain ports:
# Forbid connecting to port 80 for 10.0.0.1:
reject 10.0.0.1:80
# Forbid connecting to ports 8000 - 10000 for any host in the 192.168.0.0/24 subnet:
reject 192.168.0.0/24:8000-10000
# You can also deny/allow the use of certain plugin IDs:
plugin-reject 10335
plugin-accept 10000-40000
# Accept to test anything:
default accept
This article is more for future reference than anything else, but here's the deal. While doing an assessment, I encountered a public facing LDAP server. Not a huge deal, except that this LDAP server allowed empty base objects and NULL BINDs. Basically, this means that any anonymous Internet user could extract information from the LDAP server. This LDAP server was also tied directly into the internal Windows Active Directory infrastructure. Oops.
I tried a bunch of tools to assist me in enumerating information from the server: LdapMiner, LDAP Explorer, ldapsearch, and JXplorer, to name a few. The only tool that properly leveraged the empty base object and NULL BIND vulnerabilities to produce useful results was JXplorer.
The LDAP server administrator did do one thing right. He limited the responses to all LDAP queries to 25 results. Whether or not it was intentional, I don't know, but it made it painful to extract large chunks of data. Basically, it forced attackers to use many alphabetical queries with wildcards to enumerate all entries, much like exploiting a blind SQL Injection vulnerability.
ldapsearch -h <ldap_host> -p 389 -x -b "O=<known_dn>" "cn=aa*"
ldapsearch -h <ldap_host> -p 389 -x -b "O=<known_dn>" "cn=ab*"
ldapsearch -h <ldap_host> -p 389 -x -b "O=<known_dn>" "cn=ac*"
ldapsearch -h <ldap_host> -p 389 -x -b "O=<known_dn>" "cn=ad*"
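A simple loop takes the tedium out of issuing those prefix queries by hand (a sketch using the same placeholders as above):
# walk every two-letter cn prefix and collect whatever the 25-result cap allows
for a in {a..z}; do
  for b in {a..z}; do
    ldapsearch -h <ldap_host> -p 389 -x -b "O=<known_dn>" "cn=$a$b*"
  done
done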
Not even JXplorer could manage that kind of enumeration; it was restricted to extracting only the first 25 nodes under each identified node throughout the directory tree. The thing that set JXplorer apart was that while some of the other tools pulled only the first 25 nodes from the directory using the empty base object and NULL BIND, JXplorer crawled the tree and continued to pull the first 25 nodes from each of the child nodes it discovered. This was a good start, but I would have liked to dump the entire directory, and getting the data into a useful form was cumbersome. I didn't have time to write a tool (it's on my list of things to do), so instead of dumping the directory, I used the empty base object and NULL BIND vulnerabilities to validate email addresses harvested with Recon-ng. Here are the commands I used to do that with the ldapsearch utility.
Verify single email address:
ldapsearch -h <ldap_host> -p 389 -x -b "O=<known_dn>" "mail=<email_address>"
Verify list of email addresses:
for line in $(cat list.txt); do ldapsearch -h <ldap_host> -p 389 -x -b "O=<known_dn>" "mail=$line" | grep mail: | cut -d" " -f2; done
The danger of an Internet-facing LDAP server configured like this should be fairly obvious. Spammers and attackers have access to the full name and email address of every person in your environment that has an account in Active Directory. This will drastically increase the amount of spam your organization receives and the likelihood of phishing attacks. In addition, if you have web-facing VPNs or web applications, you are giving attackers part of what is required to authenticate. This is a very bad idea.