The same principle needs to be applied to parsing network protocols. Incoming network packets are exposed raw to hackers on the Internet. Hackers break into machines because inexperienced programmers make common mistakes, like buffer overflows.
This is the problem with Silent Circle, a company that makes a product for encrypting phone calls. Their crypto algorithms are designed by crypto experts, such as Phil Zimmerman (of PGP fame). Their network parsers, though, are designed by inexperienced coders.
Silent Circle released their code yesterday. I haven’t found an exploitable flaw yet, but the first file I looked at demonstrated common parser mistakes. I thought I’d document them in this post. Specifically, I’m referring to their SDP parser. SDP is part of SIP, which is used to establish phone calls (call, hangup, etc.), but doesn’t itself carry the data.
char vs. unsigned char
Their code refers to the packet as “char *buf”. This is wrong. The packet contains “bytes” not “characters”. It should be “unsigned char *” or “uint8_t *” instead.
SDP is a text-based protocol. That implies to programmers that it contains “characters”. But that’s not a requirement, just a convention. Hackers don’t have to follow that convention, and can insert any sort of bytes they want into the packet.
The problem is that “char” is not defined to be “signed” or “unsigned”. When “char” is signed, it sign extends. This causes 32-bit values to appear when the code is only expecting 8-bit values. A good example is the call to “islower(*buf)” on line 196 of their code. If “*buf” is 0x80, then it sign extends to 0xffffff80. This can cause weird things to happen.
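Here's a minimal sketch (my code, not theirs) showing what happens when a 0x80 byte goes through a signed "char":

#include <ctype.h>
#include <stdio.h>

int main(void)
{
    char buf[1] = { (char)0x80 };  /* a byte a hacker can put on the wire */
    int widened = buf[0];          /* where char is signed, this sign-extends
                                      to -128, i.e. 0xffffff80 */
    printf("widened = 0x%08x\n", (unsigned)widened);

    /* islower() is only defined for unsigned-char values and EOF, so
       islower(widened) is undefined behavior; debug C runtimes assert on it. */

    unsigned char safe = (unsigned char)buf[0];  /* the safe widening */
    printf("islower(0x%02x) = %d\n", safe, islower(safe));
    return 0;
}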
Some static code analyzers check for this. Some dynamic checks catch it, too: if you run this SDP parser under Microsoft Visual Studio and fuzz the protocol, it will throw an assertion on the call to “islower(0xffffff80)”.
A lot of programmers will claim that I’m being pedantic, that this isn’t really a problem. Compilers can be set to force “char” to be unsigned anyway, so it’s “good enough”. That’s the same thinking pedantic cryptographers have been fighting for years; it’s the thinking behind most crypto failures. Likewise, protocol experts have a lot of experience with cases where “char *” isn’t good enough.
Const correctness
The “const” keyword is rarely used in their code. This is wrong. Most pointer parameters should have it. Functions should avoid changing their input. The “const” keyword enforces this.
Non-const inputs lead to code instability. Over time, a programmer may decide to change an input inside a function, while a caller depends upon the parameters not changing. Thus, as a general principle of good code, more “const” is needed.
This is especially true for packets. Packet data read from the network should be immutable. Parsers should not change the data as they parse it. Again, this is one of those pedantic things: in the past, lots of known problems arose because packet data was changed unexpectedly, so we’ve developed the principle that it shouldn’t change.
Thus, the correct type of packet bytes is "const unsigned char *", or equivalent things like "const uint8_t *". Everything else is incorrect.
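A minimal sketch of what I mean (the function name is made up, not from their code):

#include <stddef.h>
#include <stdint.h>

/* Packet bytes are unsigned and immutable; the length travels with the pointer. */
static int parse_sdp_line(const uint8_t *buf, size_t length)
{
    for (size_t i = 0; i < length; i++) {
        /* buf[i] = 0;  <-- the compiler rejects this, which is the point */
        if (buf[i] == '\n')
            return (int)i;
    }
    return -1;
}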
Conditional compiles
Protocol parsing is CPU independent. There is never a reason to do something different depending upon the CPU.
Consider line 96 in their code that says the following:
#if !defined(_WIN32) && !defined(__SYMBIAN32__) //if not arm
The code is trying for an optimization here: comparing 4 bytes at once rather than doing 4 separate single-byte compares. The problem is that some RISC processors crash when given unaligned data. Hence the conditional compile: either the fast way for some CPUs or the slow but safe way for others.
This is monumentally stupid. I don’t want to sound harsh, but this optimization is so tiny it makes no measurable difference ever. All it does is destabilize the code for no good reason. It makes testing the code a lot harder. It allows subtle bugs to creep through testing as things behave differently on different platforms.
Per-OS, per-compiler, per-CPU conditional compiles are NEVER needed for protocol parsing. If you use them, then you are doing something very wrong.
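For what it’s worth, the CPU-independent way to do their 4-byte check is just a plain byte compare; here’s a sketch (not their code). The compiler will turn it into a single 4-byte compare on platforms where that’s safe, so nothing is lost:

#include <string.h>

/* The caller must still guarantee at least 4 readable bytes at p. */
static int at_connection_line(const unsigned char *p)
{
    return memcmp(p, "\r\nc=", 4) == 0;
}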
Casting and byte-swapping
The mistake of doing an #if conditional above comes from trying to fix an underlying mistake of casting between “char *” and “int *”. This is a common technique taught in your “UNIX Network Programming” class. It’s also wrong. You should never do it when parsing packets.
There are two reasons to avoid this. The first is that (as mentioned above) some CPUs, such as SPARC and some versions of ARM, crash when referencing unaligned integers. This makes network code unstable on RISC systems, because most integers happen to be aligned anyway, meaning a lot of alignment issues escape undetected into shipping code. The only way to make stable code is to stop casting integers in network (or file) parsers.
The second problem is that it causes confusion with byte-order/endianness that doesn’t happen if you just don’t cast integers. Consider the IP address “10.1.2.3”. There are only two forms for this number: either an integer with the value 0x0a010203, or an array of bytes with the value 0a 01 02 03. The problem is that little-endian machines are weird. The integer 0x0a010203 is represented internally as 03 02 01 0a on x86 processors, with the order of bytes “swapped”.
But this is just an internal detail that YOU NEVER NEED TO WORRY ABOUT. As long as you never cross the streams and cast from a “char*” to an “int*” (or the reverse), then the byte-order/endianness never matters.
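For example, here’s a sketch (my code) of pulling a 4-byte value out of a packet with no casting at all. It produces 0x0a010203 from the bytes 0a 01 02 03 on every CPU, big-endian or little-endian, with no alignment hazard:

#include <stdint.h>

static uint32_t read_u32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |
            (uint32_t)p[3];
}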
If your first language was anything other than C/C++, then you are probably scratching your head wondering what the fuss is about. Since your language doesn’t allow you to cast between “char *” and “int *”, you’ve never been confused by byte-order/endianness.
On line 119 their code calls ‘reverseIP’ to byte swap the IP address. That’s a clear sign that things aren’t getting coded right.
The only time to byte-swap is right at the moment of using the Sockets interface. That’s because Sockets itself was written wrong back in the early 1980s. Thus, the only time you byte-swap IP addresses is when you do something like “sin.sin_addr.s_addr = htonl(my_ip_address)”.
Follow the spec
Early Internet doctrine was to be liberal in what you accept, writing code that followed the spec only “close enough”. This was good for developing the protocols themselves, but bad for long-term interoperability, as popular products proliferated using wrong techniques that all other products then had to unofficially support.
Being liberal allows hackers to inject a lot of bad packets into the system that will nonetheless work. This lets them reach vulnerable code that would otherwise have been rejected. Thus, critical software (like telephone encryption) needs to be conservative about what it accepts, not liberal.
Consider an SDP line like the following:
c=IN IP4 224.2.17.12/127
The SilentCircle parser checks for the ‘c’, then skips the next character, whatever it is. Thus, the following line is allowed:
crIN IP4 224.2.17.12/127
Silent Circle needs to more formally follow the SDP spec at RFC 4566, rather than coding "whatever works".
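The conservative check is trivial to write. Here’s a sketch (mine, not theirs) that accepts a connection line only if it literally starts with “c=”:

#include <stddef.h>

static int is_connection_line(const unsigned char *line, size_t len)
{
    return len >= 2 && line[0] == 'c' && line[1] == '=';
}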
Update: In the comments to this post, somebody proves I'm wrong on this point: while '=' isn't checked in the expected location in the code, it's checked elsewhere. Thus, my "crIN" example won't actually work.
Some good stuff
The SilentCircle code isn’t all bad. They do some things right.
One thing they get right is the “concreteness” of their parser. Protocol specifications are ugly, and code should be written to match the specification. That means protocol parsing code should be ugly. There’s nothing more annoying than debugging protocol parsers that have been prettied up, where I can't match up things in the spec and the packets with what I see in the code.
In school, your professors stressed that code should have “no magic numbers”. They were wrong: protocol parsers should be full of magic numbers. Consider line 255 in the SDP code, where it references the letter ‘c’. What does ‘c’ mean? Shouldn’t we instead use a #define somewhere, like:
#define SIP_SDP_CONNECTION_INFO 'c'
Wouldn’t that help make the code more self-documenting? No! It wouldn’t! The spec says “c” so the code should too. Otherwise, I have to hunt all over the place to match things up.
My point is this: others might criticize the code for being ugly, but it’s not. It’s as pretty as it should be.
The other thing the code gets right is its coupling and cohesion. This module only parses the data; it doesn’t act on the data. That’s the right design for a protocol parser. Parsers are the external attack surface exposed to hackers. You want to keep that bit as independent from the rest of the system as possible. The SDP file has only two includes. That’s awesome.
Conclusion
The Silent Circle protocol parsers aren’t all bad. Nor have I found a bug. It’s just that the first file I looked at, the SDP parser, demonstrates inexperience on the part of the coder. Good crypto needs experienced cryptographers; likewise, good network parsing code needs experienced coders.
6 comments:
c=IN IP4 224.2.17.12/127
vs
crIN IP4 224.2.17.12/127
if(!(
*(buf-2)=='\r' &&
*(buf-1)=='\n' &&
*(buf+1)=='=' // cr will fail, c= will not
))
{
buf++;
continue;
}
Very appreciative for this insight. We are already evaluating this with our team. A well-balanced and insightful review. It's rare that we find security teams that understand why the code is "ugly" - it shows your experience with parser code. We often hear the so-called experts say "it's not pretty" - so it's good to actually see a review that "gets the why". We appreciate all helpful security checks - this is a good one.
Respectfully,
M.Janke
CEO -Silent Circle
Mike, parsers shouldn't have to be ugly -- even in C. It's not feature-complete yet, but you might have a look at Hammer, a parser combinator library my team is writing in C to address this exact problem. (The missing features have to do with additional backends; the packrat parsing backend, which is fully implemented, is sufficient to handle the grammar of SDP.)
It's great to hear that your parsing is decoupled from processing, though. Way too many products (that we still use every day...) make that mistake, and I'm glad to hear Silent Circle got that part right.
Code that compares successive locations in memory to byte constants is an error-prone attempt at an optimization. Instead of:
if(!(
*(buf-2)=='\r' &&
*(buf-1)=='\n' &&
*(buf+1)=='=' // cr will fail, c= will not
))
how about:
if (memcmp(buf - 2, "\r\nc=", 3))
Modern compilers will expand that into whatever code is fastest for the particular target they are producing code for (so you actually are likely to get the optimization shown above that compares 4 bytes at a time - except that the compiler will actually get it right).
Besides, this code is more readable: It's quite obvious at a glance what it's looking for.
But ... there's a much more immediate concern. The code makes a basic, common mistake: it assumes that the buffer actually contains the bytes it's looking for. Look at that comparison again: what if the "c" that got us here was the last character in the buffer? Then either comparison will check the character after the end of the buffer against "=". Not good. (This is slightly funky code because of the way it looks *back* for the \r\n. It might not be vulnerable in context.) But the code has constructs like:
while(*buf>=' '){buf++;}
which can walk through arbitrary amounts of memory. My basic rule for parsers: Everything that takes a buffer pointer takes a length (in practice, a pointer to the end of the buffer is better), and everything that manipulates the buffer pointer checks that the new value is actually within the buffer. (If you use negative addressing, like the example code above, you have to check that you don't go "before the beginning" as well as that you don't go "after the end".)
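For instance, a sketch of that rule (the names are made up):

/* Every advance checks against the end of the buffer before dereferencing. */
static const unsigned char *skip_printable(const unsigned char *buf,
                                           const unsigned char *end)
{
    while (buf < end && *buf >= ' ')
        buf++;
    return buf;   /* never walks past end */
}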
Oops, the memcmp should have had a length of 4, not 3 - I initially missed the fact that the original code skips over the byte at *buf. See the hazards of micro-optimization?
I would likely use a macro for the particular operation of comparing to a constant string and let the compiler get the length right.
Thanks for sharing your ideas. Can I copy and use the exact code for date encryptions and security?