When you first release software online you don't put too much thought into the software license (I didn't, at least). You have no idea if the project will take off. If your intention is for your peers to use it freely, your first thought may be Open Source. The most popular Open Source license is the GNU GPL, so why not use that!?
I released WPScan on the 16th of June 2011 along with the GNU GPL license. After a while I built up a team, The WPScan Team, people who shared my goal of making an awesome black box WordPress scanning tool. The WPScan Team (3 other awesome people) and I have been working on WPScan in our spare time, as volunteers, for almost 4 years. We have put countless hours, days, weeks and months of work into WPScan and, more recently, the WPScan Vulnerability Database.
It's almost the end of the year, so I thought I would take the opportunity to reflect on my achievements during 2014 before the holidays start. It is a good way for me to put the year into perspective and focus on my goals for 2015.
Last week I came across a service on the Internet running on TCP port 11211, Memcached's default port. I had heard of Memcached before but I probably only knew it was some kind of database system, that was the extent of my familiarity with it.
I quickly learnt that connecting to Memcached does not require authentication. Authentication can be implemented, but even then Memcached's own documentation says it should not be fully trusted.
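To illustrate, here is a minimal Python sketch of talking to Memcached's text protocol over a raw TCP socket — no credentials are exchanged at any point. The host address is hypothetical, and `parse_stats` only handles the simple `STAT <name> <value>` response lines.

```python
import socket

def parse_stats(raw: str) -> dict:
    """Parse a Memcached text-protocol 'stats' response into a dict.

    Response lines look like 'STAT <name> <value>\r\n', terminated by 'END\r\n'.
    """
    stats = {}
    for line in raw.split("\r\n"):
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

def memcached_stats(host: str, port: int = 11211) -> dict:
    """Connect with no authentication and issue the 'stats' command."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"stats\r\n")
        raw = b""
        while not raw.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            raw += chunk
    return parse_stats(raw.decode())

# memcached_stats("203.0.113.10")  # hypothetical host; no login required
```

Anyone who can reach the port can read (and write) whatever the cache holds, which is exactly what makes an Internet-exposed instance interesting.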
I am pleased to announce that I launched the WPScan Vulnerability Database, a WordPress Vulnerability Database, last week during the BruCON security conference in Ghent, Belgium. The WPScan Vulnerability Database's development was funded by BruCON's 5by5 project, which I talked about in a previous post.
For those of you who have been living under a rock, BruCON is a security conference held every year in Belgium (originally Brussels, now Ghent). I have attended every BruCON conference since the second. Last year was the 5th time the conference had been held (correct me if I'm wrong), and so the year before (2012) they set up what they called 5by5. This allowed BruCON, as it's a not-for-profit, to share its leftover cash by supporting community projects.
Last year, they allocated up to 5,000 euros to 4 different community projects. These projects were:
1. OWASP OWTF (Abraham Aranguren)
2. The Cloudbug Project (Carlos Garcia Prado)
3. A tool a month (Robin Wood)
4. Eccentric Authentication (Guido Witmond)
As last year was such a success, they're doing it again this year! And this year I put in a proposal!
GitHub was recently the target of a large weak-password brute force attack involving 40k unique IP addresses. One of the many security measures GitHub has since taken is to ban users from registering with 'commonly-used weak passwords'.
To find out what GitHub considers 'commonly-used weak passwords', I decided to compile a list of GitHub-valid passwords from a few password lists found online and one of my own.
GitHub's password policy is reasonable (at least 7 characters, 1 number and 1 letter), so of all the wordlists used only 331 passwords were found to conform to it.
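As a rough sketch of the filtering step (not my original script), a candidate password can be checked against the policy as stated above with a couple of regular expressions:

```python
import re

def meets_github_policy(password: str) -> bool:
    """Policy as stated in the post: at least 7 characters,
    at least one letter and at least one digit."""
    return (len(password) >= 7
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"\d", password) is not None)

def filter_wordlist(words):
    """Keep only the passwords a GitHub signup form would accept."""
    return [w for w in words if meets_github_policy(w)]
```

For example, `password` and `1234567` are rejected while `passw0rd` passes, which is why so few entries from typical wordlists survive the filter.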
1. *Advisory Information*
Title: SimpleRisk v.20130915-01 CSRF-XSS Account Compromise
Advisory ID: RS-2013-0001
Date Published: 2013-09-30
2. *Vulnerability Information*
Type: Cross-Site Request Forgery (CSRF) [CWE-352, OWASP-A8], Cross-Site Scripting (XSS) [CWE-79, OWASP-A3]
Impact: Full Account Compromise
Remotely Exploitable: Yes
Locally Exploitable: Yes
CVE-ID: CVE-2013-5748 (CSRF) and CVE-2013-5749 (non-httponly cookie)
3. *Software Description*
SimpleRisk is a simple and free tool to perform risk management activities. Based entirely on open source technologies and sporting a Mozilla Public License 2.0, a SimpleRisk instance can be stood up in minutes and instantly provides the security professional with the ability to submit risks, plan mitigations, facilitate management reviews, prioritize for project planning, and track regular reviews. It is highly configurable and includes dynamic reporting and the ability to tweak risk formulas on the fly. It is under active development with new features being added all the time. SimpleRisk is truly Enterprise Risk Management simplified.
Recently I was faced with my first Web Application Security Assessment that relied heavily on HTML5 WebSockets.
The initial WebSocket handshake is carried out over HTTP using an 'upgrade request'. After this initial exchange, all further communication is carried out over the same TCP connection using the WebSocket protocol. On the application I was testing, the WebSocket handshake over HTTP looked like this in Wireshark:
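As background on what that exchange proves: per RFC 6455, the server derives the `Sec-WebSocket-Accept` response header from the client's `Sec-WebSocket-Key` by appending a fixed GUID, SHA-1 hashing, and Base64 encoding. This is easy to reproduce in a few lines of Python:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket handshake
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value a server must return:
    base64(SHA-1(key + GUID))."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# RFC 6455's own example key:
# websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
# -> "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

The accept value only proves the server speaks WebSockets; it is not authentication, which is worth remembering when assessing these applications.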
In part 1 of this blog post I conducted a DNS Zone Transfer (axfr) against the top 2000 sites of the Alexa Top 1 Million. I did this to create a better subdomain brute forcing wordlist. At the time, conducting the Zone Transfer against the top 2000 sites took about 12 hours using a single-threaded bash script. I was pretty proud of this achievement at the time and thought that doing the same for the whole top 1 million sites was beyond the time and resources that I had.
After creating a multithreaded and parallelised PoC in Ruby to do the Zone Transfers, the top 2000 sites took about 5 minutes, compared to the 12 hours with the single-threaded script. I decided it was feasible to do a Zone Transfer against the whole top 1 million sites.
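The original PoC was Ruby; as an illustration of the same fan-out pattern, here is a hedged Python sketch using a thread pool, where `transfer` is a caller-supplied function that would wrap a real AXFR call (for example dnspython's `dns.query.xfr`) and raise or return `None` on failure:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def parallel_axfr(domains, transfer, workers=50):
    """Run `transfer(domain)` across a thread pool, collecting successes.

    DNS Zone Transfers are I/O-bound, so threads give a large speedup
    over a sequential loop even in the presence of a GIL.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(transfer, d): d for d in domains}
        for fut in as_completed(futures):
            domain = futures[fut]
            try:
                zone = fut.result()
            except Exception:
                continue  # refused/failed transfers are the common case
            if zone is not None:
                results[domain] = zone
    return results
```

Because most name servers refuse AXFR, the failure path dominates; swallowing per-domain exceptions keeps one slow or broken server from stalling the whole run.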
There were 60,472 successful Zone Transfers (6%) out of the Alexa Top 1 Million, which equates to 566MB of raw data on disk. This amount of data brings its own challenges when attempting to manipulate it.
At work, as part of every assessment, we do some reconnaissance, which includes attempting a DNS Zone Transfer (axfr) and conducting a subdomain brute force on the target domain/s. The subdomain brute force is only as good as your wordlist; the Zone Transfer is a matter of luck.
Alexa releases a list of the top 1 million sites, updated on a daily basis. To create a better wordlist for subdomain brute forcing, I attempted a DNS Zone Transfer against the first 2000 sites in the Alexa Top 1 Million list. For every successful Zone Transfer, the DNS A records were stored in a CSV file.
This was all done using Carlos Perez's dnsrecon DNS enumeration tool. Dnsrecon was ever so slightly modified to only save A records; apart from that, I just used a bash script to iterate over the Top 1 Million list and run dnsrecon's axfr option for each site with CSV output enabled.
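The CSV-saving step itself is straightforward; a minimal Python equivalent (the post used a patched dnsrecon rather than this code, and the column layout here is illustrative) might look like:

```python
import csv

def write_a_records(records, path):
    """Write (domain, hostname, ip) tuples to a CSV file.

    `records` would come from a successful AXFR -- e.g. the A rdatas
    of a transferred zone. Column names are an assumption for this sketch.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["domain", "hostname", "ip"])
        writer.writerows(records)
```

Keeping only the hostnames from such a CSV is what turns raw Zone Transfer output into a reusable subdomain wordlist.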
A client emailed to say they had forgotten a password for their Microsoft Excel .xls document and asked if it was possible to recover it. After searching on Google it was clear that there was plenty of shi...bloatware, which may have worked if you were willing to go through a few of them and pay a few dollars. It wasn't that important of a document according to the client but nevertheless a challenge is a challenge.
The document was encrypted using 'save as'; according to various sources online, the encryption algorithm is 40-bit RC4. As it is encrypted, nothing could be gleaned by opening the document in a hex editor.
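Some back-of-envelope arithmetic shows why 40-bit RC4 is considered weak: the entire keyspace can be searched in days on a single machine. The keys-per-second rate below is an assumption for illustration, not a benchmark:

```python
# Back-of-envelope estimate of brute forcing a 40-bit key.
keyspace = 2 ** 40                      # 1,099,511,627,776 possible keys
rate = 10_000_000                       # assumed keys/second on one machine
worst_case_days = keyspace / rate / 86400

print(f"{keyspace:,} keys, ~{worst_case_days:.1f} days worst case "
      f"at {rate:,} keys/s")
```

At that assumed rate an exhaustive search finishes in roughly a day, which is why commercial tools can credibly offer guaranteed decryption of these documents.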
As always, when Google turns up nothing useful I turn to Twitter. A few people recommended Elcomsoft, who make Windows software to both recover and obtain the password of a Microsoft Excel document. This looked like a good bet, and they offer free trials! The recovery software, which seems to do a brute force attack, looked like it could have worked (especially now I know how weak the password was), but I was running the software in a Virtual Machine. Unfortunately, the recovery tool didn't reveal the password; the paid-for version may have, I don't know.
The new OWASP Top 10 2013 was released not so long ago, while reading over it I noticed this:
"Attackers can trick victims into performing any state changing operation the victim is authorized to perform, e.g., updating account details, making purchases, logout and even login." - https://www.owasp.org/index.php/Top_10_2013-A8-Cross-Site_Request_Forgery_(CSRF)
This must be a mistake, I thought. Why would you ever want to CSRF a user to log them into their own account? If you already had their login credentials, it would be utterly pointless.
Today I came across an academic paper which gives three examples of why Login CSRF can be an issue, and shows how wrong I was.
"Search History. Many search engines, including Yahoo! and Google, allow their users to opt-in to saving their search history and provide an interface for a user to review his or her personal search history. Search queries contain sensitive details about the user’s interests and activities [41, 4] and could be used by an attacker to embarrass the user, to steal the user’s identity, or to spy on the user. An attacker can spy on a user’s search history by logging the user into the search engine as the attacker; see Figure 1. The user’s search queries are then stored in the attacker’s search history, and the attacker can retrieve the queries by logging into his or her own account."