Neat trick: Flexible Macro vs. Function Definitions

I was looking over musl's ctype.h header while working on libc earlier today. In the header, I noticed a curious set of definitions:

#define isalpha(a) (0 ? isalpha(a) : (((unsigned)(a)|32)-'a') < 26)
#define isdigit(a) (0 ? isdigit(a) : ((unsigned)(a)-'0') < 10)
#define islower(a) (0 ? islower(a) : ((unsigned)(a)-'a') < 26)
#define isupper(a) (0 ? isupper(a) : ((unsigned)(a)-'A') < 26)
#define isprint(a) (0 ? isprint(a) : ((unsigned)(a)-0x20) < 0x5f)
#define isgraph(a) (0 ? isgraph(a) : ((unsigned)(a)-0x21) < 0x5e)

It took me a few moments to figure out what was going on. Notice the conditional expression that is used:

(0 ? isalpha(a) : (((unsigned)(a)|32)-'a') < 26)

Since the condition of the ternary is the constant 0, the false branch — the "inline" version of isalpha — is always selected, and the compiler optimizes the dead branch away. (The inner isalpha(a) does not expand recursively; the C preprocessor never re-expands a macro name inside its own definition, so that branch refers to the real function.) If you want to use the function instead, you simply change the 0 to 1. This is a clever way of controlling whether a piece of code is inlined or compiled as a function call.

But why do we care about selecting between the two versions in the first place?

When a function is implemented as a macro, its body is expanded at every invocation. That leaves multiple copies of the same code scattered throughout the binary, increasing your overall binary size through duplication.

The binary size increase may be worth it for any number of reasons. However, in many constrained embedded environments this bloat is not acceptable. A function needs only one copy of the code, reducing storage, but every call pays call-and-return overhead.

Optimizing for speed and binary size can be tricky. If you find yourself needing to weigh macros against functions, this may be a helpful paradigm.

Happy hacking!