How fuzzing helped us find a nasty bug

Posted by user on 08 Jul 2013

> The issue described in this post has been fixed in quasardb 1.0.1 released today. We strongly encourage our customers to upgrade their clusters. This issue can cause a denial of service if an attacker has access to the network where your quasardb cluster is installed.

We take security very seriously at Bureau 14, even if quasardb – our flagship product – is generally used behind closed walls in “safe environments”.

Security issues are entangled with reliability issues in perverse ways, and our customers expect us to solve the very difficult problem of scalable data management from end to end.

All our network communications are low-level socket operations, and we wrote our own high-performance, zero-overhead marshalling library to make sure that nothing gets in the way of performance.

We also designed this library to be safe thanks to the use of template meta-programming and a heavy dose of Boost.Fusion. What that means is that a lot of mundane errors are detected at compile time.

On top of that we have a very large number of unit and integration tests, because we’re not at a point where compile-time checks can catch everything – and even if they did, checks must be checked!

Nevertheless, one day I woke up and thought it would be nice to fuzz quasardb to see what would happen.

Fuzzing consists of injecting random data into software. It’s generally a humbling moment when you realize your code isn’t as strong as you thought it was.

Fuzzing can help you find at least two kinds of errors:

  • Errors generated by a malicious attacker in order to compromise a machine or cause a denial of service
  • Errors triggered by a software bug (in the operating system, a driver, or of course the server itself) or by a hardware fault

One of the key principles to writing robust software is that whatever input you get you must fail gracefully. Don’t be the gateway to Hell.

How we fuzzed

We set the first bytes to be a valid header – fuzzing everything would have been unproductive – and generated thousands of random packets of up to 20 bytes.

In the case of our lightweight protocol, it wouldn't be very interesting to generate very large amounts of data: as soon as the server detects garbage, it drops the connection altogether. Messages of up to 20 bytes already give us very good coverage.

We used a cryptographically strong random number generator, but to be honest this was just for convenience (we already had one available within quasardb). A linear congruential generator or a Mersenne twister would have been just as fine.
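To give an idea, here is a minimal sketch of such a generator. The two-byte header and the names (valid_header, make_fuzz_packet) are placeholders for illustration, not quasardb's actual wire format:

#include <cstddef>
#include <random>
#include <vector>

// Hypothetical two-byte header -- quasardb's real wire format differs.
static const unsigned char valid_header[] = { 0x0d, 0x00 };

// Build one fuzz packet: a fixed valid header followed by up to
// max_tail random bytes. A Mersenne twister is plenty for this.
std::vector<unsigned char> make_fuzz_packet(std::mt19937 & gen,
                                            std::size_t max_tail = 18)
{
    std::uniform_int_distribution<unsigned> byte_dist(0, 255);
    std::uniform_int_distribution<std::size_t> len_dist(0, max_tail);

    std::vector<unsigned char> packet(valid_header,
                                      valid_header + sizeof(valid_header));

    const std::size_t tail_length = len_dist(gen);
    for (std::size_t i = 0; i < tail_length; ++i)
    {
        packet.push_back(static_cast<unsigned char>(byte_dist(gen)));
    }

    return packet;
}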

The part we wanted to fuzz was the unmarshalling part, where the server receives a packet and decodes it. This is the most sensitive part, because this is where memory allocation and structure construction take place.

The server is already hardened against other kinds of attacks through the use of timeouts and heuristics to detect suspicious behaviors from the client. Fuzzing isn't very useful in that case.

Therefore our fuzzing is very low level and injects random data directly into the marshalling code. In other words, we didn't write (or use) a client that injects random TCP bytes into the server; we wrote specific code that crafts valid packets with randomized parts where it makes sense.

As I said, purely random input only helps find obvious defects, and if your protocol includes a checksum, the only thing you’re testing is the checksum code (which nevertheless ought to be tested).
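Put together, the harness is little more than a loop. A minimal sketch, assuming the hypothetical make_fuzz_packet() from above and the unmarshal() entry point that appears in the test further down:

#include <boost/asio/buffer.hpp>
#include <random>
#include <vector>

// Sketch of the fuzzing loop: however random the payload is,
// unmarshal() must fail gracefully -- never crash, never hang.
int main()
{
    std::mt19937 gen(std::random_device{}());

    for (int i = 0; i < 100000; ++i)
    {
        const std::vector<unsigned char> packet = make_fuzz_packet(gen);

        boost::asio::const_buffer in(packet.data(), packet.size());

        message m;
        qdb::network::unmarshal(in, m);
    }
}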

Houston?

To my surprise, fuzzing didn’t trigger any error.

I was pretty happy with the results, but somehow I told myself: “this is the first time I’ve fuzzed software without finding any bug”. I figured it was because the fuzzing wasn't appropriate enough and because the software had already been extensively tested.

Nevertheless, this was good because it meant that quasardb was resilient enough to absorb random garbage.

All was good and well and I could go back and finish Bioshock Infinite. What else could a man ask for?

Until it crashed on the FreeBSD build. Well, it didn’t really crash: it just got killed by the operating system.

We had two options:

  • Solution 1: Run the fuzzing program again, knowing the error will probably not be triggered twice, look the other way, and forget about it
  • Solution 2: Track down the error and fix it

As great as Bioshock Infinite is, writing correct software comes first.

First reproduce

Fortunately we had enough experience with fuzzing to know that you should always print the tested sequence before actually testing it. Good luck finding which combination triggered the error otherwise...
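A trivial helper is enough: something along these lines (a hypothetical sketch – the point is simply a flushed hex dump before the call):

#include <cstddef>
#include <iomanip>
#include <iostream>
#include <vector>

// Dump the packet as hex before testing it, so that even if the
// process is killed we still have the exact failing sequence.
void print_sequence(const std::vector<unsigned char> & packet)
{
    std::cerr << "testing " << packet.size() << " bytes: { ";
    for (std::size_t i = 0; i < packet.size(); ++i)
    {
        std::cerr << "0x" << std::hex << std::setw(2) << std::setfill('0')
                  << static_cast<unsigned>(packet[i]) << ", ";
    }
    std::cerr << "}" << std::dec << std::endl; // std::endl flushes
}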

We took the byte sequence, wrote a test to reproduce the problem, and checked that the error was still there.

static const size_t buf_size = 12;
const unsigned char buf[buf_size] = { 0x0d, 0x00, 0xbb, 0xce, 0x44,
    0x13, 0x8f, 0x82, 0xde, 0xc9, 0x5b, 0x31 };

boost::asio::const_buffer in(buf, buf_size);

message m;
BOOST_CHECK_NO_THROW(qdb::network::unmarshal(in, m));

There’s a lot of template meta-programming magic going on behind those few lines of code: it’s not for nothing that they take 15 seconds to build on my desktop machine (an insanely over-powered Xeon computer with enough RAM to store the whole Internet in it).

Anyway, enough bragging. I ran the code on my Windows computer and got no error.

Oh noes! This is one of those nasty platform-specific bugs!

Well, not really. It didn't crash on Windows, but the debugger caught a couple of "std::bad_alloc" exceptions.

The bug was a classic case of denial of service. Some packets could form a valid request that caused the server to allocate very large amounts of memory. To be more precise, when the server received a packet containing a collection (a list or a vector), it would accept very large size values and try to allocate as many elements as requested before deserializing the rest.

What happened on FreeBSD is that the server instance attempted to allocate the requested amount of memory and got killed by the OS in the process.
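To make the flaw concrete, the vulnerable pattern looked roughly like this. This is a simplified sketch, where entry, read_integer() and unmarshal_entry() are made-up names standing in for quasardb's actual deserialization routines:

#include <boost/asio/buffer.hpp>
#include <boost/cstdint.hpp>
#include <boost/system/error_code.hpp>
#include <vector>

boost::system::error_code unmarshal_vector(boost::asio::const_buffer & in,
                                           std::vector<entry> & out)
{
    boost::uint64_t size = 0;
    read_integer(in, size); // size is attacker-controlled

    // BUG: a 12-byte packet can claim billions of elements, and the
    // allocation happens before we notice the stream is truncated.
    out.resize(size);

    for (boost::uint64_t i = 0; i < size; ++i)
    {
        unmarshal_entry(in, out[i]); // fails on truncation, but too late
    }

    return boost::system::error_code();
}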

Then fix

We were a little bit ashamed, because we had taken extra care to protect the serialization code against exactly those issues. For example, when the server receives a buffer, it reads the size first: if the size is greater than the remaining bytes, the packet is dropped.

Want the server to allocate 20 GiB to cause a denial of service? You’ll have to send those bytes first! That mitigates the attack without impacting performance or imposing any limit. We want to avoid asymmetric situations where an attacker can crash the server with little or no effort, but we don't want to cripple our capabilities.
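In code, that existing guard for buffers looks roughly like this – a sketch, where read_integer() is again a made-up name, while buffer_size() and make_error_code() are the same calls used in the fix below:

// The claimed buffer size cannot exceed the bytes actually received.
boost::uint64_t size = 0;
read_integer(in, size); // hypothetical helper: reads the size field

if (size > boost::asio::buffer_size(in))
{
    // the sender claims more data than it sent: drop the packet
    return make_error_code(unexpected_eof);
}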

In this very case, however, there was simply no check, except for a catch of std::bad_alloc exceptions that proved to be useless on FreeBSD.

We could have set a hard limit, refusing, for example, collections of more than 1,000,000 elements.

Generally speaking, I’m not a big fan of hard-coded limits as they tend to be forgotten and can cause nasty bugs.

In addition, the limit would have to be very high (several million elements, if not billions), meaning that an asymmetric denial of service would still be possible by sending multiple packets at the same time.

Fortunately, there is a way to be clever and do something similar to what we do for buffers.

The marshalled data can be much smaller than the unmarshalled data, but only up to a bounded ratio. We can therefore compare the claimed collection size against the remaining bytes: if the size is no more than ten times the number of remaining bytes, it is potentially valid. Otherwise, this is an error and we drop the packet.

This gives us:

static const boost::uint64_t max_ratio = 10;

if (size > (boost::asio::buffer_size(in) * max_ratio))
{
    return make_error_code(unexpected_eof);
}

> Yes, as you can see, we directly operate on the incoming network buffer. Our zero-copy feature is not a marketing ploy...

Finally, make sure it never happens again

What matters is that we make sure this specific bug never happens again, because errare humanum est, perseverare diabolicum (to err is human, to persist in error is diabolical).

All we have to do is add this test to our regression test suite, and every build will be checked against the error.

We could add more tests, but this is where I think you shouldn't write too many: you can quickly end up spending your life writing tests instead of production code.

Closing words

We took extra care to protect against attacks and denial of service, and yet through fuzzing we quickly discovered an issue. Fortunately, the issue is mitigated by the way quasardb clusters are configured and installed (another case for defense in depth), but it is still a serious one.

What this tells us is that writing reliable software is a long, hard, painful and extremely expensive process. However good your engineers may be, however thorough your process might be, it will not be enough: errors will get through, and some of them will be very serious.

Like I said, fuzzing is often a humbling moment.

Topics: c++, fuzzing, quasardb, testing