21 February 2017

Annotate all the things

I don't do reverse engineering for a living, but I still like to peek under the hood of binaries from time to time - for testing, for bug hunting, or just for fun. The problem is that IDA Pro, the de facto standard tool for any reverse engineer, is prohibitively expensive for most people. On top of that, its licensing policy is annoying and illogical. But enough about IDA Pro - let's talk about the new contender in this field - Binary Ninja.

I'm not going to repeat all the praise this tool is receiving. Instead, you may for example read how you can use it to automatically reverse 2000 binaries, or how the underlying Low Level Intermediate Language works. All in all, the platform looks very promising and I couldn't wait to try it after seeing it for the first time. A couple of months ago I was playing with the beta, and I pretty much bought it the first day it was released.

There is one tiny problem with Binary Ninja, however - IDA Pro has been around for years, so it is both feature-rich and surrounded by a pretty robust ecosystem. Binja still has a long way to go in this department - there are not that many useful plugins and some features are missing. One thing I've noticed, for example, is that while reversing, basic libc functions and system calls are not annotated in any way. There is no prototype for them and the arguments are not marked. So instead of complaining, I've decided to use the available API and just fix that.

Let's start by defining the problem. For example, we have a listing like this:

Not terribly descriptive, right? Well, at least for strcpy() we roughly remember the prototype, so we can quickly find where the arguments are pushed onto the stack. But what about fchmodat() or sigaction()? Yeah, you need to go back to the man page. How cool would it be to open a binary and get this:

This is exactly what the Annotator plugin does - it iterates through all instructions in the code, building a virtual stack as it goes, but instead of values it tracks the instructions that pushed a given value onto the stack. Upon encountering a call to a known function, it uses this virtual stack to annotate the arguments with the proper prototype.
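To illustrate the general idea, here is a simplified, hypothetical sketch - not the plugin's actual code; the instruction stream and the prototype table are stubbed out:

# Hypothetical sketch of the "virtual stack" idea: remember which instruction
# pushed each value, then annotate call sites of known functions.

PROTOTYPES = {
    # function name -> list of argument descriptions (stub; the real plugin has many more)
    "strcpy": ["char *dest", "const char *src"],
    "fchmodat": ["int dirfd", "const char *pathname", "mode_t mode", "int flags"],
}

def annotate(instructions):
    """instructions: list of (address, mnemonic, operand) tuples, e.g. from a disassembler."""
    stack = []          # virtual stack: (address_of_push, operand)
    comments = {}       # address -> comment text
    for addr, mnemonic, operand in instructions:
        if mnemonic == "push":
            stack.append((addr, operand))
        elif mnemonic == "call" and operand in PROTOTYPES:
            # Pop the virtual stack from the top and annotate each push site
            for arg in PROTOTYPES[operand]:
                if not stack:
                    break
                push_addr, _ = stack.pop()
                comments[push_addr] = arg
            comments[addr] = "{}({})".format(operand, ", ".join(PROTOTYPES[operand]))
    return comments

# Example usage with a fake instruction stream:
listing = [
    (0x1000, "push", "eax"),   # src
    (0x1001, "push", "ebx"),   # dest
    (0x1002, "call", "strcpy"),
]
for addr, text in sorted(annotate(listing).items()):
    print("{:#x}: {}".format(addr, text))

In the real plugin the instruction stream comes from Binary Ninja itself, of course, and the annotations end up as comments in the listing.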

This is the very first release, so it is probably riddled with bugs, not to mention that some features are missing. Right now not all glibc function prototypes are present, because I haven't found a good and reliable way to extract them from headers - instead I'm using a combination of grep, regex and cut with some manual cleanup, which unfortunately takes time. The same goes for system calls, but I should be able to add all the Linux 32-bit ones today. Ah, and you have to run the plugin manually in every function you view - right now there is no way to apply it to all functions automatically. I'm contemplating writing a method that lets the user apply it to the whole underlying call graph, but we will see about that.

Another thing is the rather naive virtual stack implementation - it certainly requires more work to track stack growth more accurately and, for example, to track the number of arguments for functions with va_arg-style argument lists. Right now I'm also scanning blocks of code in a linear manner, but in a future version I will probably switch to a recursive mode with stack isolation for each path (so far I haven't encountered a situation where a function's arguments are set up in a different basic block than the call itself, but better safe than sorry). The last thing to improve is the number of supported virtual stacks - first for x64 platforms and later for the ARM architecture.

Please let me know what you think about the extension, and report any bugs you find.

28 August 2015

In search of golden fleece

A key activity when looking for reflected XSS is checking which parameters supplied in the request are echoed back in the response. Doing that manually is tedious, and that time can be spent in a more productive way. For example, you can write a Burp extension that will do it for you. So, I present Argonaut.

The extension works in a very simple way - it parses the captured request to extract all parameters (cookies included) and then searches through the response body to see whether the value in question has been echoed back. If so, a short snippet around the match is presented to the user.
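A rough sketch of this approach against the Burp Extender API (Jython) could look like the following - a minimal illustration with my own naming, not Argonaut's actual source:

# Minimal sketch: pull parameters out of a request and look for their
# reflections in the response body. Class and variable names are illustrative.
from burp import IBurpExtender, IHttpListener

class BurpExtender(IBurpExtender, IHttpListener):
    def registerExtenderCallbacks(self, callbacks):
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Reflection sketch")
        callbacks.registerHttpListener(self)

    def processHttpMessage(self, toolFlag, messageIsRequest, messageInfo):
        if messageIsRequest or messageInfo.getResponse() is None:
            return
        request_info = self._helpers.analyzeRequest(messageInfo)
        raw_response = messageInfo.getResponse()
        body_offset = self._helpers.analyzeResponse(raw_response).getBodyOffset()
        body = self._helpers.bytesToString(raw_response)[body_offset:]
        for param in request_info.getParameters():
            value = param.getValue()
            if len(value) < 3:          # skip short values to avoid noise
                continue
            offset = body.find(value)
            if offset != -1:
                snippet = body[max(0, offset - 20):offset + len(value) + 20]
                print("%s=%s reflected: ...%s..." % (param.getName(), value, snippet))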

Currently, parameter parsing is done in quite a dumb way - it works well with standard GET and POST parameters, but it is unable to extract parameter values from JSON or XML bodies and instead tries to match the whole payload exactly. That is not very effective, but it is on my TODO list. One more thing to remember - parameter values shorter than 3 characters are ignored (you don't want 300 matches of '1' in the result table).

Hey, but what about escaping, you ask? No worries, I've got this covered. Let's say you are testing a web application written on top of Django and, most likely, using the Jinja2 template engine, which applies escaping. Argonaut will search the response body for the plain parameter value (let's say test">), but it will also apply various defined transformations/escapings to see whether, for example, the application returned the escaped form of test"> instead.
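Such a transformation can be as simple as mapping the characters Jinja2 escapes by default and checking for both variants - roughly along these lines (a hypothetical sketch, not Argonaut's code; the entity table is an approximation of Jinja2's autoescaping):

# Approximate Jinja2-style HTML escaping used as a search transformation.
def jinja2_escape(value):
    replacements = [
        ("&", "&amp;"),   # must come first so we don't double-escape
        ("<", "&lt;"),
        (">", "&gt;"),
        ('"', "&#34;"),
        ("'", "&#39;"),
    ]
    for char, entity in replacements:
        value = value.replace(char, entity)
    return value

def find_reflections(value, body):
    """Return which variants of the value appear in the response body."""
    variants = {"plain": value, "jinja2": jinja2_escape(value)}
    return [name for name, v in variants.items() if v in body]

print(find_reflections('test">', '<input value="test&#34;&gt;">'))  # ['jinja2']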

I've chosen the Jinja2 example for a reason - truth be told, Jinja2 is the only transformation implemented so far, but the mechanism is in place and I'm planning to add new ones very soon.

There is still work to be done. Some simple tasks will be completed soon - for example new transformations and some UI work. Others, harder ones - like support for contextual autoescaping libraries and type-dependent parameter extraction - will have to wait a bit. Anyway, stay tuned and let me know what you think.

27 July 2015

Migrating repository

Because code.google.com will finally be shut down really soon, I've moved all my projects to GitHub. That includes JSONDecoder.

14 August 2013

MutProxy

Recently I've had very little time to write anything meaningful. New posts are coming, slowly but steadily. In the meantime I stumbled upon a short piece of code on Gynvael's page. It reminded me of a project I wrote some years ago for an assessment.
When I finally found it, the code wasn't in a state where I'd like to show it to anyone. I've spent the past few days cleaning it up and expanding it a bit. Today I pushed the code to GitHub. Here, take a look.

So, what does MutProxy do? (Yep, I know the name is neither very original nor brilliant, but come on, I'm not a Junior Creative Director at D'Arcy, I'm just a plain pentester.) It's just a simple proxy/tunnel with the ability to attach functions that alter or log traffic in different ways. A README does not exist at the moment, so you will have to read the code to determine the functionality. There is some documentation in the code comments :).
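The core idea - a TCP forwarder that runs every chunk of traffic through pluggable mutator functions - can be sketched in a few dozen lines. This is an illustrative toy, not MutProxy's actual code, and the host/port values are made up:

# Toy TCP proxy with pluggable "mutators": each chunk of traffic passes
# through a list of functions that may log or modify it before forwarding.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8888)   # where the client connects (made-up values)
TARGET_ADDR = ("127.0.0.1", 80)     # the real server we tunnel to

def log_mutator(data, direction):
    print("[%s] %d bytes" % (direction, len(data)))
    return data

def downgrade_mutator(data, direction):
    # Example tampering: force HTTP/1.0 on client->server traffic
    return data.replace(b"HTTP/1.1", b"HTTP/1.0") if direction == "client->server" else data

MUTATORS = [log_mutator, downgrade_mutator]

def pump(src, dst, direction):
    """Copy data from src to dst, passing it through all mutators."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            for mutate in MUTATORS:
                data = mutate(data, direction)
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    server = socket.create_connection(TARGET_ADDR)
    threading.Thread(target=pump, args=(client, server, "client->server")).start()
    threading.Thread(target=pump, args=(server, client, "server->client")).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(LISTEN_ADDR)
listener.listen(5)
while True:
    conn, _ = listener.accept()
    handle(conn)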

There is still a lot of work to be done - the mutators are very basic and act more as examples than the real deal, the logger is very plain and documentation does not exist. Waiting for more free time. I was also planning to write more about how to force applications to go through your proxy.

18 June 2013

Small update

This is going to be a very short (let's call it a warmup) post.
Just wanted to let you know that I've made a small update to JSONDecoder. The changes are mostly cosmetic:

  • The content type check is case-insensitive now
  • The decoder now removes garbage from the JSON payload (like }]);) - see the sketch below
  • Another Content-Type is now checked: text/javascript (Twitter uses that)
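Stripping such wrappers before pretty-printing can be done fairly bluntly - something like this standalone sketch (not the extension's code; it naively keeps only the outermost {...} or [...] span):

# Trim anti-JSON-hijacking wrappers (e.g. "while(1);" prefixes or trailing
# garbage) by keeping only the outermost JSON object/array, then pretty-print.
import json

def strip_garbage(payload):
    starts = [i for i in (payload.find("{"), payload.find("[")) if i != -1]
    if not starts:
        return payload
    start = min(starts)
    closing = "}" if payload[start] == "{" else "]"
    return payload[start:payload.rfind(closing) + 1]

raw = 'while(1);{"user": "test", "admin": false}]);'
print(json.dumps(json.loads(strip_garbage(raw)), indent=4))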
More stuff soon.

11 February 2013

Jar full of cookies

A few posts back I gave some tips on how to organize web fuzzing - you remember that part: color highlights, marking stuff for later. But one person (I think that was my only semi-active reader) asked me: "But those requests are gonna expire, the session will die." That is true - very often you can no longer reuse that request, unless of course you are planning to copy and paste all the cookies from a more recent one. There is, however, a faster method.

Set things up

Burp Suite has this nifty feature called the cookie jar - basically, Burp is able to parse every Set-Cookie header and store the cookies in a database. The good thing is that other tools are able to use the same jar. While issuing a request, Burp will replace every matching cookie header with the most recent value obtained from the jar.
In the Options/Sessions tab you can set which tools' traffic should be monitored to update the jar. To configure which tool should use the cookie jar, you have to edit the default session handling rule - take a look at the Scope tab. Now, before you start fuzzing (or just replaying some stored requests), you only have to log in to the application through the proxy and the newest cookies will be placed in the jar.
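As a side note, if memory serves, the Extender API exposes the jar as well, so an extension can peek at it too - a tiny hedged sketch, assuming getCookieJarContents() is available in your Burp version:

# Tiny sketch: dump the Burp cookie jar from an extension.
# Assumes IBurpExtenderCallbacks.getCookieJarContents() exists in your Burp version.
from burp import IBurpExtender

class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        callbacks.setExtensionName("Cookie jar dump")
        for cookie in callbacks.getCookieJarContents():
            print("%s: %s=%s" % (cookie.getDomain(), cookie.getName(), cookie.getValue()))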

How about a magic trick

This is just the beginning - the cookie jar/session management options are even richer. In the Options/Sessions tab you can set up a lot of possible actions. First - macros. You can set up automatic sequences of requests that retrieve parameters like anti-CSRF tokens, or simply log you in to the application automatically. In the session handling rules you can configure behaviours that make use of previously set up macros (but not only those). For example, in Intruder, before every request you may want to issue a different request to obtain a valid anti-CSRF token and then use it while issuing the one with tampered parameters. Of course, the details will differ between the applications you are testing, but I encourage you to try it yourself. Remember - what sometimes seems overly complicated can in fact save you a lot of manual and mindless copy-and-paste work.

As always, some additional information can be found on the Burp Suite blog.

6 February 2013

JSON Decoder

Long time no see. Usually people start such notes with an oh-so-cliche quote from Mark Twain, but I've already done that on numerous occasions, so no. Anyway, despite the hidden motto of this blog ("no promises, it will be released when it's done"), I wrote something. Finally, yesterday I overcame my pathological laziness and finished version 1 of a very small Burp extension - JSON Decoder. The code itself is not very impressive, nor is the functionality, but it's a start - now, knowing the basics, I can move on to more impressive stuff.

The Extension

Since version 1.5.01, Burp Suite Pro comes with a new API for writing extensions. No longer do you need to write them in Java, bundle them into a JAR and do some mojo magic to make them run. The new API also gives you access to much more of Burp's internals. I'm not going to give you a tutorial on how to write them, but I encourage you to read some of the official tutorials on the PortSwigger blog. If I count correctly, there are eleven tutorials covering quite a wide selection of topics.

So, what does my extension do? Not that much (at least in this version) - it's just an additional tab with a pretty-printed JSON payload. I have other plans for it, but I need to find time (and I've started flying BMS 4.32 again, so no rest for the wicked). I have some other extensions as works in progress, but they are not in a ready-to-show state.
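For the curious, the skeleton of such a tab in the Extender API looks roughly like this - a simplified sketch, not the released JSONDecoder source; error handling and the garbage stripping are omitted:

# Skeleton of a read-only message editor tab that pretty-prints JSON bodies.
# Written against the Burp Extender API (Jython); simplified for illustration.
import json
from burp import IBurpExtender, IMessageEditorTabFactory, IMessageEditorTab

class BurpExtender(IBurpExtender, IMessageEditorTabFactory):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("JSON pretty-print sketch")
        callbacks.registerMessageEditorTabFactory(self)

    def createNewInstance(self, controller, editable):
        return JSONTab(self._callbacks, self._helpers)

class JSONTab(IMessageEditorTab):
    def __init__(self, callbacks, helpers):
        self._helpers = helpers
        self._editor = callbacks.createTextEditor()
        self._editor.setEditable(False)

    def getTabCaption(self):
        return "JSON"

    def getUiComponent(self):
        return self._editor.getComponent()

    def isEnabled(self, content, isRequest):
        # Enable the tab only when the message looks like it carries JSON.
        if content is None:
            return False
        return "json" in self._helpers.bytesToString(content).lower()

    def setMessage(self, content, isRequest):
        if content is None:
            self._editor.setText(None)
            return
        info = self._helpers.analyzeRequest(content) if isRequest \
            else self._helpers.analyzeResponse(content)
        body = self._helpers.bytesToString(content)[info.getBodyOffset():]
        try:
            pretty = json.dumps(json.loads(body), indent=4)
        except ValueError:
            pretty = body
        self._editor.setText(self._helpers.stringToBytes(pretty))

    def getMessage(self):
        return None

    def isModified(self):
        return False

    def getSelectedData(self):
        return None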

Debugging

Debugging a Burp extension is a bit of a "Why? Because fuck you, that's why" experience. You made a typo, mixed up an expected type or declared too many parameters in a function definition? All you get is a Java RuntimeException. You think you won't make those mistakes? Let me show you what kind of mistakes I made while coding this extension.

Typos - I spent 30 minutes failing to spot the difference between CreateTextEditor() and createTextEditor(). While writing an extension, make sure that every API call follows the camelCase convention (it can be tricky, because Python names are usually flat). For example, you can convert a byte[] data variable to a string in two ways - helpers.bytesToString(data) or data.tostring().

The difference between Java String and byte[] - some functions accept byte[], some String - always check which type a function expects and what it returns. It will save you time otherwise spent inserting countless println() lines.
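A couple of the helper conversions worth remembering (a minimal fragment, assuming a helpers object already obtained from callbacks.getHelpers()):

# helpers is an IExtensionHelpers instance from callbacks.getHelpers().
# Most Burp API calls take or return byte[]; convert explicitly at the edges.
request_bytes = helpers.stringToBytes("GET / HTTP/1.1\r\nHost: example.org\r\n\r\n")
request_text = helpers.bytesToString(request_bytes)   # back to a string
assert request_text.startswith("GET /")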

Given the low complexity of my code, I was able to use the oldest "print everything" technique of debugging, but if you are writing something more complex, please read this blog entry.

A bit more about Burp stuff

If you are new to Burp, I can recommend a book written by my friend - grab it here. You can read it yourself or give it to that new junior pentester who just joined.