Coding practices are out of date, laughably so. If you learn how to program in C in a university today, your textbook and your professor will teach you how to write code as if it were 1984 and not 2014. They will teach you to use "strcpy()", a function prone to buffer-overflows that is widely banned in modern projects. There are fifty other issues with C that are just as important.
In this post, I'm going to focus on one of those out-of-date practices called "pointer-arithmetic". It's a feature essentially unique to C (C++ merely inherited it). No other mainstream language allows it -- for good reason. Pointer-arithmetic leads to unstable, hard-to-maintain code.
In normal languages, if you want to enumerate all the elements in an array, you'd do so with an expression like the following:
p[i++]
The above code works in a wide variety of programming languages. It works in C, too, and indeed, most languages got it by copying C syntax. However, in C, you may optionally use a different expression:
*p++
This is pointer-arithmetic. Instead of a fixed pointer and a variable index, the pointer is variable, moving through the array.
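To make the difference concrete, here's a minimal sketch of the two styles side by side (the function names are mine, invented for illustration only):

#include <stddef.h>

/* Integer-indexed: the pointer stays fixed, only the index moves. */
int sum_indexed(const unsigned char *p, size_t length)
{
    int sum = 0;
    size_t i = 0;
    while (i < length)
        sum += p[i++];
    return sum;
}

/* Pointer-arithmetic: the pointer itself walks through the array. */
int sum_pointer(const unsigned char *p, size_t length)
{
    int sum = 0;
    const unsigned char *end = p + length;
    while (p < end)
        sum += *p++;
    return sum;
}

Both functions compute the same thing; the difference is whether the pointer is a moving target.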
To demonstrate how this gets you into trouble, I present the following bit of code from "openssl/ssl/s3_srvr.c":
{
    s2n(strlen(s->ctx->psk_identity_hint), p);
    strncpy((char *)p, s->ctx->psk_identity_hint, strlen(s->ctx->psk_identity_hint));
    p+=strlen(s->ctx->psk_identity_hint);
}
The first thing to notice is the strncpy() call in the middle. That line contains the old programming joke:
strncpy(dst,src,strlen(src));
The purpose of strncpy() is to guard against buffer-overflows by limiting the copy to the size of the destination buffer. The joke version limits it to the size of the source -- defeating the purpose and causing the same buffer-overflow as if the programmer had just used the original strcpy() in the first place.
This is a funny bit of code, but it turns out it's not stupid. In C, text strings are nul terminated, meaning that a byte with the value 0 is appended to the end of every string. The intent of the code above is to prevent the nul termination, not to prevent buffer-overflows. In other words, the true intent of the programmer can be expressed by changing the above call from "strncpy()" to "memcpy()".
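A minimal illustration of that rewrite, using the same variables as the snippet above:

/* Copies the bytes of the identity hint without appending a nul terminator. */
memcpy(p, s->ctx->psk_identity_hint, strlen(s->ctx->psk_identity_hint));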
The reason the programmer wants to avoid nul termination is that they are building a protocol buffer where the string will be prefixed by its length. That's the effect of the macro "s2n()" in the first line of code, which inserts a 2-byte length field and invisibly moves the pointer 'p' forward two bytes. (By the way, macros that invisibly alter variables are likewise bad programming practice.)
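For readers unfamiliar with OpenSSL internals, the macro works roughly like this (a simplified sketch of the idea, not the exact OpenSSL definition):

/* Roughly what s2n(s, c) does: write the value 's' as a 2-byte
 * big-endian length field, then silently advance the pointer 'c'. */
#define s2n(s, c)  ((c)[0] = (unsigned char)(((s) >> 8) & 0xff), \
                    (c)[1] = (unsigned char)((s) & 0xff),        \
                    (c) += 2)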
The correct fix for the above code is to change from a pointer-arithmetic paradigm to an integer-indexed paradigm. The code would look like the following:
append_short(p, &offset, max, strlen(s->ctx->psk_identity_hint));
append_string(p, &offset, max, s->ctx->psk_identity_hint);
The value 'p' remains fixed; we increment the "offset" as we append fields, and we track the buffer's maximum size with the variable "max". This both untangles the code and makes it inherently safe, preventing buffer-overflows.
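The snippet above doesn't show the bodies of those helpers; here is one way they might look, a minimal sketch in the integer-indexed style (the function names come from the snippet, the bodies are my own illustration):

#include <stddef.h>
#include <string.h>

/* Append a 2-byte big-endian length field at 'offset', never writing
 * past 'max' bytes of the buffer. Returns 0 on failure. */
static int append_short(unsigned char *buf, size_t *offset, size_t max, size_t value)
{
    if (*offset + 2 > max)
        return 0;                        /* not enough room left */
    buf[*offset]     = (unsigned char)((value >> 8) & 0xff);
    buf[*offset + 1] = (unsigned char)(value & 0xff);
    *offset += 2;
    return 1;
}

/* Append the bytes of a string (no nul terminator), bounds-checked. */
static int append_string(unsigned char *buf, size_t *offset, size_t max, const char *str)
{
    size_t len = strlen(str);
    if (*offset + len > max)
        return 0;                        /* would overflow the buffer */
    memcpy(buf + *offset, str, len);
    *offset += len;
    return 1;
}

In real code the caller would of course check the return values rather than ignore them.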
Last year, college professor John Regehr held a little contest to write a simple function that parses integers. Most solutions to the contest used the pointer-arithmetic approach; only a few (like my solution) used the integer-index paradigm. I urge you to click on those links and compare the other solutions to mine.
[Code screenshot: My solution, using integer indexes]
[Code screenshot: Typical other solution, using pointer-arithmetic]
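For flavor, here is a rough, illustrative sketch of what a parser in the integer-indexed style might look like; it is not my actual contest entry, and the error-handling convention is invented for the example:

#include <stddef.h>
#include <limits.h>

/* A rough sketch of an integer-indexed parser: the buffer pointer stays
 * fixed and only the index 'i' moves. For simplicity it rejects LONG_MIN
 * itself and reports errors through the 'error' out-parameter. */
long str2long(const char *str, int *error)
{
    long result = 0;
    int negative = 0;
    size_t i = 0;

    *error = 0;
    if (str[i] == '-') {
        negative = 1;
        i++;
    }
    if (str[i] == '\0') {                /* empty string or a lone '-' */
        *error = 1;
        return 0;
    }
    for (; str[i] != '\0'; i++) {
        if (str[i] < '0' || str[i] > '9') {
            *error = 1;                  /* non-digit character */
            return 0;
        }
        if (result > (LONG_MAX - (str[i] - '0')) / 10) {
            *error = 1;                  /* would overflow */
            return 0;
        }
        result = result * 10 + (str[i] - '0');
    }
    return negative ? -result : result;
}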
Many justify pointer-arithmetic by claiming it's faster. This isn't really true. In the above contest, my solution was among the fastest. Indeed, I'm famous for the fact that my code is usually an order of magnitude faster than other people's. Sure, you can show with micro-benchmarks that pointer-arithmetic is faster in some cases, but that difference rarely matters. The simplest rule is never to use it -- and if you ever do, write a big comment block explaining why you are doing something so ugly, and include the benchmarks proving it's faster.
Others justify pointer-arithmetic out of language bigotry. We are taught to look down on people who program in one language as if it were another. If you program in C the way you'd program in Java, then (according to this theory) you should just stick with Java. That my snippet of code above works equally well in Java and C is considered a bad thing.
This bigotry is wrong. Yes, when a language gives you desirable constructs, you should use them. But pointer-arithmetic isn't desirable. We use C not because it's a good language, but because it's low-level and the lingua franca of libraries. We can write a library in C for use with Java, but not the reverse. We use C because we have to. We shouldn't be programming in the C paradigm -- we should be adopting the paradigms of other languages. For example, C should be "object oriented", where complex structures have clear constructors, destructors, and accessor member functions. C is hostile to that paradigm of programming -- but it's still the right way to program.
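A tiny sketch of what that "object oriented" style looks like in practice (the 'buffer' type and its functions are invented for this example):

#include <stdlib.h>

/* In the header, callers would see only "struct buffer;" -- the fields
 * below would live in the .c file, hidden behind the functions. */
struct buffer {
    unsigned char *data;
    size_t length;
    size_t max;
};

/* "Constructor" */
struct buffer *buffer_create(size_t max)
{
    struct buffer *buf = calloc(1, sizeof(*buf));
    if (buf == NULL)
        return NULL;
    buf->data = malloc(max);
    if (buf->data == NULL) {
        free(buf);
        return NULL;
    }
    buf->max = max;
    return buf;
}

/* "Destructor" */
void buffer_destroy(struct buffer *buf)
{
    if (buf != NULL) {
        free(buf->data);
        free(buf);
    }
}

/* Accessor */
size_t buffer_length(const struct buffer *buf)
{
    return buf->length;
}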
Pointer-arithmetic is just one of many issues affecting the OpenSSL source base. I point it out here because of the lulz of the strncpy() function. Perhaps in later posts I'll describe some of its other flaws.
Update: Another good style is "functional" programming, where functions avoid "side effects". Again, C is hostile to the idea, but when coders can avoid side-effects, they should.
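A trivial, made-up illustration of the difference:

#include <stddef.h>

/* Has a side effect: reads and modifies hidden global state. */
static int counter;
int next_id(void)
{
    return ++counter;
}

/* Side-effect-free: the result depends only on the arguments,
 * and nothing outside the function is changed. */
int checksum(const unsigned char *data, size_t length)
{
    int sum = 0;
    size_t i;
    for (i = 0; i < length; i++)
        sum = (sum + data[i]) & 0xff;
    return sum;
}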
2 comments:
At first I was very happy that my pointy solution to str2long was nearly twice as fast as your subscript version.
Then I noticed that the code in robert_2.c keeps on scanning the string until the end even after it has noticed an error.
Free source code on request.