Certificate Pinning is an extra layer of security used by applications to ensure that the certificate presented by the remote server is the one expected.
By including the remote server's X.509 certificate or public key within the application, it is possible to compare the locally stored certificate or key with the one presented by the server.
If you have been unable to intercept (Man-in-the-Middle) an application's HTTPS traffic after taking the usual steps, this is probably because the application uses Certificate Pinning.
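The comparison itself can be as simple as checking a fingerprint of the server's certificate against one shipped with the application. A minimal Ruby sketch of that idea (the hostname and fingerprint in the usage comment are hypothetical):

```ruby
require 'openssl'
require 'digest'

# SHA-256 fingerprint of a certificate's DER encoding.
def cert_fingerprint(cert)
  Digest::SHA256.hexdigest(cert.to_der)
end

# True only if the server-supplied certificate matches the pinned fingerprint.
def pinned?(cert, pinned_fingerprint)
  # Constant-time comparison avoids leaking how many characters matched.
  OpenSSL.secure_compare(cert_fingerprint(cert), pinned_fingerprint)
end

# Usage sketch (hostname and pinned value are hypothetical):
# PINNED = 'd4b4...'  # fingerprint baked into the application at build time
# tcp = TCPSocket.new('api.example.com', 443)
# tls = OpenSSL::SSL::SSLSocket.new(tcp).tap(&:connect)
# abort 'certificate mismatch!' unless pinned?(tls.peer_cert, PINNED)
```

Pinning the public key instead of the whole certificate works the same way, just fingerprint `cert.public_key.to_der` so the pin survives certificate renewal with the same key.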
A few days ago (October 2015) the OWASP Application Security Verification Standard (ASVS) version 3.0 was released, which I had the opportunity to contribute to in a small way by helping review some of the draft documents before the official release.
I became aware of the OWASP ASVS as a growing number of my clients started asking if I used it. At the time I didn't, but I decided to invest some time into finding out what it was. This blog post's intention is to bring some attention to the OWASP ASVS project for those unaware of it.
Today PortSwigger announced the release of a new Burp Suite feature called Burp Collaborator, which is enabled by default.
By default this new feature makes use of a third-party server hosted by PortSwigger to detect vulnerabilities which are not easily detectable with the usual 'request<->response' method most vulnerability scanners use.
The legitimate concern some people have raised online is that client information may now be leaked to PortSwigger, or worse, leaked to the Internet if the third-party server is ever compromised.
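The underlying out-of-band idea is simple: each scan payload embeds a unique subdomain of a server the tester controls, so any later DNS or HTTP interaction with that subdomain proves the payload was processed, even when nothing shows up in the response. A hedged sketch of the payload-generation side (the domain is hypothetical, not Burp's actual Collaborator domain):

```ruby
require 'securerandom'

# Generate a unique out-of-band payload host. If the target later resolves
# or requests this host, the interaction can be matched back to the token.
def oob_payload(collaborator_domain)
  token = SecureRandom.hex(8)
  ["#{token}.#{collaborator_domain}", token]
end

host, token = oob_payload('collab.example.net')
# Inject `host` into the target, then poll the collaborator server's
# DNS/HTTP logs for `token` to confirm an out-of-band interaction occurred.
```

Running your own collaborator-style server addresses the third-party concern: the interactions then only ever reach infrastructure you control.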
When you first release software online you don't put too much thought into the software license (I didn't, at least). You have no idea if the project will take off. If your intention is for your peers to use it freely, your first thought may be Open Source. The most popular Open Source license is the GNU GPL, so why not use that!?
I released WPScan on the 16th of June 2011 under the GNU GPL license. After a while I built up a team, The WPScan Team, people who shared my goal of making an awesome black box WordPress scanning tool. The WPScan Team (3 other awesome people) and I have been working on WPScan in our spare time, as volunteers, for almost 4 years. Countless hours have been put into WPScan, and more recently the WPScan Vulnerability Database, by us.
It's almost the end of the year so I thought I would take the opportunity to reflect on my achievements during 2014 before the holidays start. It is a good way for me to put the year into perspective and focus on my goals for 2015.
Last week I came across a service on the Internet running on TCP port 11211, Memcached's default port. I had heard of Memcached before, but I probably only knew it was some kind of database system; that was the extent of my familiarity with it.
I quickly learnt that connecting to Memcached does not require authentication. Authentication can be implemented, but even then Memcached's own documentation says it should not be fully trusted.
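That lack of authentication is easy to demonstrate: Memcached speaks a plain-text protocol, so a raw TCP connection is enough to issue commands. A minimal Ruby sketch (the host in the usage comment is hypothetical; `version` and `stats` are standard Memcached text-protocol commands):

```ruby
require 'socket'

# Send one command over Memcached's plain-text protocol and return the
# first line of the response. Works on any IO-like object with write/gets.
def memcached_command(io, command)
  io.write("#{command}\r\n")
  io.gets("\r\n").to_s.chomp("\r\n")
end

# Usage sketch, no credentials required (host is hypothetical):
# sock = TCPSocket.new('cache.example.internal', 11211)
# puts memcached_command(sock, 'version')  # e.g. "VERSION 1.6.9"
# puts memcached_command(sock, 'stats')    # first line of server statistics
```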
For those of you who have been living under a rock, BruCON is a security conference held every year in Belgium (originally Brussels, now Ghent). I have attended every BruCON conference since the second. Last year was the 5th time the conference had been held (correct me if I'm wrong), and so the year before (2012) they set up what they called 5by5. This allowed BruCON, as it's a not-for-profit, to share its leftover cash by supporting community projects.
Last year, they allocated up to 5,000 euros to 4 different community projects. These projects were:
1. OWASP OWTF (Abraham Aranguren)
2. The Cloudbug Project (Carlos Garcia Prado)
3. A tool a month (Robin Wood)
4. Eccentric Authentication (Guido Witmond)
As last year was such a success, they're doing it again this year! And this year I put in a proposal!
GitHub was recently the target of a large brute force attack against weak passwords which involved 40k unique IP addresses. One of the many security measures GitHub has since taken is to prevent users from registering with 'commonly-used weak passwords'.
To find out what GitHub considers 'commonly-used weak passwords', I decided to compile a list of GitHub-valid passwords from a few password lists found online and one of my own.
GitHub's password policy is reasonable (at least 7 chars, 1 number and 1 letter), so from all of the wordlists used only 331 passwords were found to conform to it.
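Filtering a wordlist down to policy-conforming entries is a simple check per candidate. A minimal Ruby sketch of the policy as I read it (at least 7 characters, at least one letter and one digit; the sample words are illustrative):

```ruby
# True if a password meets GitHub's stated minimum policy:
# 7+ characters with at least one letter and at least one digit.
def github_policy?(password)
  password.length >= 7 &&
    password.match?(/[A-Za-z]/) &&
    password.match?(/\d/)
end

words = %w[password password1 letmein abc1234 12345678 trustno1]
puts words.select { |w| github_policy?(w) }  # password1, abc1234, trustno1
```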
1. *Advisory Information*
Title: SimpleRisk v.20130915-01 CSRF-XSS Account Compromise
Advisory ID: RS-2013-0001
Date Published: 2013-09-30
2. *Vulnerability Information*
Type: Cross-Site Request Forgery (CSRF) [CWE-352, OWASP-A8], Cross-Site Scripting (XSS) [CWE-79, OWASP-A3]
Impact: Full Account Compromise
Remotely Exploitable: Yes
Locally Exploitable: Yes
CVE-ID: CVE-2013-5748 (CSRF) and CVE-2013-5749 (non-httponly cookie)
3. *Software Description*
SimpleRisk is a simple and free tool to perform risk management activities. Based entirely on open source technologies and sporting a Mozilla Public License 2.0, a SimpleRisk instance can be stood up in minutes and instantly provides the security professional with the ability to submit risks, plan mitigations, facilitate management reviews, prioritize for project planning, and track regular reviews. It is highly configurable and includes dynamic reporting and the ability to tweak risk formulas on the fly. It is under active development with new features being added all the time. SimpleRisk is truly Enterprise Risk Management simplified.
Recently I was faced with my first Web Application Security Assessment which relied heavily on HTML5's WebSockets.
The initial WebSocket handshake is carried out over HTTP using an 'Upgrade' request. After that initial exchange the connection is upgraded, and all further communication uses the WebSocket protocol over the same underlying TCP connection. On the application I was testing, the WebSocket handshake over HTTP looked like this in Wireshark:
In part 1 of this blog post I conducted a DNS Zone Transfer (axfr) against the top 2000 sites of the Alexa Top 1 Million. I did this to create a better subdomain brute forcing word list. At the time, conducting the Zone Transfer against the top 2000 sites took about 12 hours using a single-threaded bash script. I was pretty proud of this achievement at the time, and thought that doing the same for the whole top 1 million sites was beyond the time and resources I had.
After creating a multithreaded and parallelised PoC in Ruby to do the Zone Transfers, it took about 5 minutes against the top 2000, compared to the 12 hours the single-threaded script had taken. I decided it was feasible to attempt a Zone Transfer against the whole top 1 million sites.
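The speed-up comes purely from running many transfers at once. A minimal sketch of the worker-pool pattern in Ruby (the AXFR itself, e.g. shelling out to `dig axfr` or using the dnsruby gem, goes in the block; 20 workers is an arbitrary choice, not the PoC's actual figure):

```ruby
# Run a block over a list of items with a fixed pool of worker threads,
# collecting the results. Order of results is not guaranteed.
def parallel_map(items, workers: 20)
  queue = Queue.new
  items.each { |item| queue << item }
  results = Queue.new
  threads = Array.new(workers) do
    Thread.new do
      loop do
        item = begin
          queue.pop(true)  # non-blocking pop; raises ThreadError when empty
        rescue ThreadError
          break
        end
        results << yield(item)
      end
    end
  end
  threads.each(&:join)
  Array.new(results.size) { results.pop }
end

# Usage sketch (hypothetical):
# zones = parallel_map(domains) { |d| `dig axfr #{d} +noall +answer` }
```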
There were 60,472 successful Zone Transfers (6%) out of the Alexa Top 1 Million, which equates to 566MB of raw data on disk. That amount of data brings its own challenges when attempting to manipulate it.
At work, as part of every assessment, we do some reconnaissance which includes attempting a DNS Zone Transfer (axfr) and conducting a subdomain brute force on the target domain(s). The subdomain brute force is only as good as your wordlist; the Zone Transfer is a matter of luck.
Alexa releases a list of the top 1 million sites which is updated on a daily basis. To create a better wordlist for subdomain brute forcing, I attempted a DNS Zone Transfer against the first 2000 sites in the Alexa Top 1 Million list. For every successful Zone Transfer, the DNS A records were stored in a CSV file.
This was all done using Carlos Perez's dnsrecon DNS enumeration tool. dnsrecon was ever so slightly modified to only save A records; apart from that, I just used a bash script to iterate over the Top 1 Million list and run dnsrecon's axfr option for each site with CSV output enabled.
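For illustration, the loop reduces to building one dnsrecon invocation per domain. A hedged Ruby sketch (the `-t axfr`, `-d` and `--csv` flags follow dnsrecon's documented options, but treat the exact invocation and file paths as assumptions rather than the script actually used):

```ruby
# Build the dnsrecon command line for one domain's Zone Transfer attempt.
def dnsrecon_cmd(domain, outdir: 'axfr-results')
  ['dnsrecon.py', '-t', 'axfr', '-d', domain,
   '--csv', File.join(outdir, "#{domain}.csv")]
end

# Iterating the Alexa list (a "rank,domain" CSV) would then look like:
# File.foreach('top-1m.csv').first(2000).each do |line|
#   system(*dnsrecon_cmd(line.chomp.split(',').last))
# end
```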