28 August 2015

In search of golden fleece

A key activity when looking for reflected XSS is checking which parameters provided in a request are echoed back in the response. Doing that manually is tedious, and that time can be spent in a more productive way - for example, writing a Burp extension that will do it for you. So, I present Argonaut.

The extension works in a very simple way - it parses the captured request to extract all parameters (cookies included) and then searches the response body to see if the value in question has been echoed back. If it has, a short snippet around the match is presented to the user.
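Argonaut itself is a Burp extension, but the core matching idea can be sketched in plain Python. This is a minimal illustration, not Argonaut's actual code - the function name and snippet size are my own inventions here:

```python
from urllib.parse import parse_qsl

def find_echoes(query_string, response_body, context=20):
    """Return (name, value, snippet) for every parameter value
    that is reflected verbatim in the response body."""
    hits = []
    for name, value in parse_qsl(query_string):
        if len(value) < 3:  # skip short values like '1' to avoid noise
            continue
        pos = response_body.find(value)
        if pos != -1:
            start = max(0, pos - context)
            end = pos + len(value) + context
            hits.append((name, value, response_body[start:end]))
    return hits
```

For example, `find_echoes("q=widgets&page=1", "<p>Results for widgets</p>")` flags the `q` parameter with its surrounding snippet, while `page=1` is skipped by the length filter.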

Currently the parameter parsing is done in quite a dumb way - it works well with standard GET and POST parameters, but it is unable to extract values from JSON or XML bodies and instead tries to match the whole payload verbatim. That is not very effective, but it is on my TODO list. One more thing to remember - parameter values shorter than 3 characters are ignored (you don't want 300 matches of '1' in the result table).

Hey, but what about escaping, you ask? No worries, I've got this covered. Let's say you are testing a web application written on top of Django and using the Jinja2 template engine, which applies escaping. Argonaut will search the response body for the plain parameter value (say, test">), but it will also apply the defined transformations/escaping to see if the application returned an escaped form of it instead.
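The transformation idea can be sketched with the standard library. Note the hedge: `html.escape` only approximates Jinja2's autoescaping - Jinja2's MarkupSafe emits `&#34;` for a double quote where `html.escape` emits `&quot;`, so a real implementation should try both forms:

```python
import html

def match_with_transforms(value, body):
    """Check whether a parameter value appears in the response
    either verbatim or in an HTML-escaped form. html.escape()
    approximates Jinja2 autoescaping (MarkupSafe differs slightly,
    e.g. &#34; vs &quot; for double quotes)."""
    transforms = {
        "plain": value,
        "html-escaped": html.escape(value, quote=True),
    }
    return {name: t in body for name, t in transforms.items()}
```

So for the payload `test">` reflected as `test&quot;&gt;`, the plain match fails but the escaped match fires.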

I've chosen the Jinja2 example for a reason - truth be told, Jinja2 is the only transformation implemented so far, but the mechanism is in place and I'm planning to add new ones very soon.

There is still work to be done. Some simple tasks will be completed soon - for example, new transformations and some UI work. Harder ones - like support for contextual autoescaping libraries and type-dependent parameter extraction - will have to wait a bit. Anyway, stay tuned and let me know what you think.

11 February 2013

Jar full of cookies

A few posts back I gave some tips on how to organize web fuzzing - you remember that part: color highlights, marking stuff for later. But one person (I think that was my only semi-active reader) asked me: "But those requests are gonna expire, the session will die". That is true - very often you can no longer reuse that request, unless of course you are planning to copy and paste all the cookies from a more recent one. There is, however, a faster method.

Set things up

Burp Suite has this nifty feature called the cookie jar - basically, Burp is able to parse every Set-Cookie header and store the cookies in a database. The good thing is that other tools are able to use the same jar: while issuing a request, Burp will replace every matching cookie header with the most recent value obtained from the jar.
In the Options/Sessions tab you can set which tools' traffic should be monitored to update the jar. To configure which tools should use the cookie jar, you have to edit the default session handling rule - take a look at the scope tab. Now, before you start fuzzing (or just replaying some stored requests), you only have to log in to the application through the proxy and the newest cookies will be placed in the jar.

How about a magic trick

This is just the beginning - the cookie jar/session management options are even richer. In the Options/Sessions tab you can set up a lot of possible actions. First - macros. You can define automatic sequences of requests that retrieve parameters like an anti-CSRF token, or simply log you into the application automatically. In session handling rules you can configure behaviours making use of previously defined macros (but not only). For example, in Intruder you may want to issue a separate request before every attack request to obtain a valid anti-CSRF token, and then use it while issuing the one with tampered parameters. Of course the details will differ between the applications you are testing, but I encourage you to try it yourself. Remember - what sometimes seems overly complicated can in fact save you a lot of manual and mindless copy-and-paste work.
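The token-refresh step of such a macro boils down to fetching a form and pulling the fresh token out of it. A rough sketch, assuming a hypothetical `csrf_token` field name (real applications will use their own):

```python
import re

# Hypothetical token field name - adjust the pattern to the real form markup.
TOKEN_RE = re.compile(r'name="csrf_token"\s+value="([^"]+)"')

def extract_csrf_token(html_page):
    """Mimic Burp's 'derive parameter from response' macro step:
    pull the current anti-CSRF token out of a freshly fetched form."""
    m = TOKEN_RE.search(html_page)
    return m.group(1) if m else None

# In a Burp macro, this token would then be substituted into the
# tampered request before Intruder sends it.
```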

As always, some additional information can be found on the BurpSuite Blog.

17 October 2012

Using Burp in a smart way

If I got a penny for every BurpSuite tutorial I've seen on the internet, I would be rich. No, not really - I would just have 3.5 pennies. Well, let's face it - I suck at constructing metaphors. Back to BurpSuite then. As I said, I've seen BurpSuite tutorials - they are good at explaining how to use certain tools in terms of 'this button does this and you can click here to do that'. Very often those tutorials do not explain why you would use a given function and what the effective way of doing certain tasks is. I'm hoping to fill that gap in the following post.

Setup

The most important advice I can give you at the beginning is to set up your workspace and tools correctly to avoid problems at later stages.

First - lay out your windows to get a better overview. In my case (two 23-inch monitors) the left monitor is used for the Burp window and the right for the browser and a firebug/terminal window (two panels, each occupying half of the screen - courtesy of the unity wm). It's quite important to be able to move your attention between windows and to see more than one window at any given moment - you won't waste time on context switching.

The default settings are quite reasonable, but there are some things you can tweak. First - it's a Java app, so give it at least 1GB (2 would be optimal) of RAM via -Xmx.
For evidence retention you might want to configure Automatic Backup (options/misc) - it will save a copy of the Burp state periodically and on exit. A BurpSuite crash has never happened to me, but better safe than sorry (and you might click that 'Install updates and reboot your computer' button at 3am and waste a whole evening of work).
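For reference, launching Burp with a bigger heap looks something like this (the jar filename depends on your edition and version):

```shell
# 2 GB heap; adjust the jar name to match your download
java -Xmx2g -jar burpsuite_free.jar
```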

Another important task is to configure your SSL certificates. Because Burp acts as an intercepting proxy, you are not really connecting to a site but to Burp, and then Burp makes the connection to the chosen site. As a result, your browser warns you that the SSL certificate presented by the page is not trusted. It is not, because every Burp instance generates its own SSL certificate. To avoid annoying SSL alerts you need to install the Burp CA certificate into your browser - instructions are available here. As suggested by my friend, in Firefox you can create a separate profile for pentesting with all the security options disabled and the Burp CA certificate installed (Firefox has a separate certificate store per profile).

Another thing you might want to do in terms of setup (again, thanks go to Daniel F.) is to move the folder where Burp stores requests/responses. By default it's in the /tmp directory and is world-readable - meaning that by default all your credentials would be visible to everyone with access to your computer.

First look

Very rarely would you want to just look at the traffic being sent by your browser to all the pages in the dozens of tabs you have open (and we call people who do paranoid). Actually, if you are using the same browser for pentesting and for casual internet browsing, please stop - for pentesting it's better to use a browser with all the security features disabled, and of course you don't want to browse the internet with it.
So, as I was saying - most likely you want to focus your attention on one particular domain (with maybe some additional subdomains or somehow related domains), and you do that in Burp by setting a scope. That way you declutter the history and target views by removing unnecessary entries.

Now it's time for your first run over the application - it should be clean; behave like a model citizen. Don't try to look for vulnerabilities yet - you will have plenty of time later on. I call the first run a 'pattern' upon which you will work in the next stages. It's important to hit the most important and most frequently used functions of the application. Any experience from UAT scenarios might come in handy. Do it for every role in the application.

Now you have to run through the history you've just accumulated. Personally, I mark every candidate for data input validation testing (parameters being passed) with a green highlight, vertical and horizontal authorization bypass candidates with blue, and other suspicious requests and responses with yellow. Also, if the site you are testing has some complex authentication mechanism, I add comments like 'auth stage 1' etc.

Personally I don't use the active scanner, but the passive scanner is quite capable of spotting some obvious vulnerabilities like missing cookie flags, mixed content or clear-text password submission.

Dirbusting

Now it's time to discover some hidden content, aka dirbusting. For this task you can employ Intruder, but it has one limitation - it cannot do recursive scanning automatically. After every directory found you have to reconfigure Intruder to follow it deeper.

A better option is to use skipfish or DirBuster for that task, until Dafydd decides to build this capability into the Suite.

On the other hand, maybe you just need a quick look at the directory structure (I had to kill my last DirBuster run after 13 hours).

Mashing the inputs

Remember the tedious task of highlighting requests in the history? Now it's time to look for some vulnerabilities. Grab the first green request and send it to Intruder - we will do some fuzzing (and repeat it for every highlighted request).

So - a short guide to fuzzing with Intruder begins. It's really easy - first you need to set up the payload positions and attack type (they are well explained in the help), and then you need to choose a payload. You can of course pick a pre-set payload list like fuzzing-quick or even, remain calm, fuzzing-full, but this does not bring you even close to proper coverage. Don't try to create your own fuzz list - save yourself the hassle and use fuzzdb.

Here is what I usually do for every field at the beginning - I pick the list named URIhex.fuzz.txt, set up a payload processing rule 'Add suffix: xxx' and run it against every field. This way you get some understanding of which characters are allowed in which field, and which ones are filtered or encoded.
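The idea behind the hex list plus marker suffix can be sketched like this. It is a rough approximation - the payload format mimics what a URL-encoded byte list with an 'Add suffix' rule produces, and the classification labels are mine:

```python
from urllib.parse import quote

def hex_payloads(marker="xxx"):
    """One payload per byte value, URL-encoded with a marker suffix
    (roughly what a URIhex-style list plus 'Add suffix: xxx' gives)."""
    return [("%%%02x" % b) + marker for b in range(256)]

def classify(char, marker, body):
    """Rough classification of how the server handled one character."""
    if char + marker in body:
        return "reflected"   # character came back verbatim
    if quote(char) + marker in body:
        return "encoded"     # character came back URL-encoded
    if marker in body:
        return "stripped"    # marker survived but the character did not
    return "blocked"         # whole payload was filtered or rejected
```

Running `classify` over each of the 256 responses gives exactly the per-character picture described above: allowed, encoded, or filtered.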

Of course this is just scratching the surface - you never know what kind of filtering mechanism sits behind the data input routine you are fuzzing - maybe some characters are allowed, but certain combinations are not? There might be a strip_tags function or some really weird regexp in the way. In that case fuzzdb is your friend - just pick the right list and off we go.

There is one difficult thing in fuzzing - choosing the right payload/payload generation method. There is also one tedious thing in fuzzing - browsing fuzzing results. You can however save some time by setting up Intruder properly.

Let's get back to our first example - checking which characters are allowed. After doing this you've ended up with 256 results for every input field. Browse this by hand? Thank you, but no. So, what to do? Fortunately Intruder has some tools to help you extract meaningful information from server responses.

We start by looking for SQL injection. Let's assume you are testing a simple search function - one field only. The payload position is set, as the payload you've picked the pre-set list called Fuzzing - SQL Injection, no weird payload processing is needed and you are ready to hit the big red button. Not so fast - before running the scan you need to make sure that your baseline request is legitimate and guarantees valid results. Remember those green patterns we established a couple of paragraphs back? You should be using them now.
A short moment after running Intruder you should have a nice 134 (+ baseline) pairs of requests/responses. Now, a couple of important tips. First, look at the response length - any significant deviation (especially a decrease) can indicate that something went wrong. Look also for responses with a status code different than 200.

Intruder options might also come in handy - set a grep-match to look for any keyword that might indicate SQL server problems - mysql, ORA, error, ODBC and such. A search engine will probably print the number of retrieved results - you can capture it with grep-extract and show it in the attack result table. This way you will have all the important information summarised in one place.
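What grep-match and grep-extract do per response can be mimicked in a few lines. The keyword list matches the ones named above; the result-count pattern is a hypothetical example, and a real target's markup will differ:

```python
import re

SQL_ERROR_KEYWORDS = ("mysql", "ora-", "odbc", "syntax error")
# Hypothetical result-count line - adjust to the target's actual output.
RESULT_RE = re.compile(r"(\d+)\s+results? found", re.IGNORECASE)

def triage(body):
    """Per-response summary: which SQL error keywords appeared
    (grep-match) and how many results were reported (grep-extract)?"""
    lowered = body.lower()
    errors = [k for k in SQL_ERROR_KEYWORDS if k in lowered]
    m = RESULT_RE.search(body)
    return {"errors": errors,
            "result_count": int(m.group(1)) if m else None}
```

Applied across the whole attack table, this is the same one-place summary Intruder gives you: keyword hits in one column, extracted counts in another.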

Now let's hunt for some XSSes. It's somewhat more complicated than looking for SQLi - after fuzzing with, let's say, xss-rsnake.txt you will end up with 74 results. Status code and response length won't let you distinguish between successful and unsuccessful attacks. We can however help ourselves with two Intruder options.

The first is grep-extract. If you have a baseline request you can see where your inputs get echoed back. Set a proper pattern and you will see all the outputs in the attack table. This still forces us to review hundreds of results (if we combine a couple of fuzz lists) looking for stripped characters or differences in character encoding. It's a good method for up to 50-70 results, but surely we can do better than that.

That brings us to grep-payload - a very nifty tool for reviewing fuzzing results. The most important option is 'Search responses for payload string' - this will flag every request whose payload is echoed exactly in the response, and that is a strong indication of a potential XSS vulnerability.
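The flagging logic itself is a one-liner over the attack table. A minimal sketch, treating the results as (payload, response body) pairs:

```python
def flag_echoed(results):
    """results: list of (payload, response_body) pairs, like an
    Intruder attack table. Flag entries whose payload is echoed
    verbatim - the same signal as grep-payload's
    'Search responses for payload string' option."""
    return [payload for payload, body in results if payload in body]
```

Here an HTML-encoded reflection (e.g. `&lt;svg&gt;`) is correctly not flagged, while a verbatim `<script>` echo is - exactly the distinction you want when hunting for XSS candidates.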

Closing word

I don't want to turn this post into a long list of vulnerabilities and how to look for them - you are smart, so you can figure out the rest on your own. There are of course more complicated attacks and obstacles you can hit during fuzzing (like CSRF-protected forms), but I hope to cover them in the future.

I had been thinking about writing such a guide for some time, hoping to be the first. In one way I've succeeded, but in another I've lost the race - this guy is writing a whole book about Burp. Maybe I can get a draft?