How important is it to check for malloc failures?

Is always protecting mallocs important? By protecting I mean:

char *test_malloc = malloc(sizeof(char) * 10000);

if (!test_malloc)
  exit(EXIT_FAILURE);

I mean, in electronic devices I don't doubt it's essential. But what about programs running on my own machine, where I'm sure the allocation size will be positive and not astronomical? Some people say, "Ah, but imagine there's not enough memory in your computer at that moment."



Solution 1:[1]

There's always the possibility that the system can run out of memory and not have any more to allocate, so it's always a good idea to check.

Given that there's almost no way to recover from a failed malloc, I prefer to use a wrapper function to do the allocation and checking. That makes the code more readable.

#include <stdio.h>
#include <stdlib.h>

void *safe_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        perror("malloc");   /* perror appends the reason from errno */
        exit(EXIT_FAILURE);
    }
    return p;
}

Solution 2:[2]

The other reason it's a good idea to check for malloc failures (and why I always do) is to catch programming mistakes.

It's true, memory is effectively infinite on many machines today, so malloc almost never fails because it ran out of memory. But it often fails (for me, at least) because I screwed up and overwrote memory somewhere, corrupting the heap, and the next call to malloc often returns NULL to let me know it noticed the corruption. And that's something I really want to know about, and fix.

Over the years, probably 10% of the time malloc has returned NULL on me was because it was out of memory, and the other 90% was because of those other problems, my problems, that I really needed to know about.

So, to this day, I still maintain a pretty religious habit of always checking malloc.
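One way to keep that habit manageable when a function makes several allocations is the common goto-cleanup idiom. This sketch is not from the answer above; the function and names are illustrative:

```c
#include <stdlib.h>

/* Sketch of the goto-cleanup idiom: every malloc is checked, and on
 * failure everything allocated so far is released before returning. */
int make_buffers(char **a, char **b, size_t n)
{
    *a = malloc(n);
    if (*a == NULL)
        goto fail_a;

    *b = malloc(n);
    if (*b == NULL)
        goto fail_b;

    return 0;               /* success: caller owns *a and *b */

fail_b:
    free(*a);
    *a = NULL;
fail_a:
    *b = NULL;
    return -1;              /* caller decides how to recover */
}
```

On the error path nothing leaks, and the caller gets a clean -1 instead of half-initialized pointers.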

Solution 3:[3]

Is it important to check for error conditions? Absolutely!

Now when it comes to malloc, you might find that some implementations of the C standard library – glibc, for example – will never return NULL and will instead abort, on the assumption that if allocating memory fails, a program won't be able to recover from that. IMHO that assumption is ill-founded. But it places malloc in that weird group of functions where, even if you properly implement out-of-memory handling that works without requesting any more memory, your program still gets crashed.

That being said: you should definitely check for error conditions, for the sole reason that you might run your program in an environment that gives you a chance to properly react to them.
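What "properly reacting" can look like in practice: instead of dying, degrade gracefully, for example by retrying with a smaller working buffer. This is a sketch under stated assumptions; the function name and sizes are invented for illustration, not any library API:

```c
#include <stdlib.h>

/* Ask for a large working buffer and fall back to progressively
 * smaller sizes instead of exiting.  Returns NULL only if even the
 * minimum size cannot be allocated. */
void *alloc_shrinking(size_t want, size_t min, size_t *chosen)
{
    if (min == 0)
        min = 1;            /* avoid malloc(0) ambiguity */

    for (size_t n = want; n >= min; n /= 2) {
        void *p = malloc(n);
        if (p != NULL) {
            *chosen = n;    /* tell the caller how much we actually got */
            return p;
        }
    }
    return NULL;            /* even the minimum failed */
}
```

A program using this pattern might process its input in smaller chunks when memory is tight, rather than aborting outright.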

Solution 4:[4]

The Standard makes no distinction among several kinds of implementations:

  1. Those where a program could allocate as much memory as malloc() will supply without affecting system stability, and where a program that could behave usefully without the storage from the failed allocation may continue to run normally.

  2. Those where system memory exhaustion may cause programs to terminate unexpectedly even without malloc() ever indicating a failure, but where attempting to dereference a pointer returned via malloc() will either access a valid memory region or force program termination.

  3. Those where malloc() might return null, and dereferencing a null pointer might have unwanted effects beyond abnormal program termination, but where programs might be unexpectedly terminated due to a lack of storage even when malloc() reports success.

If there's no way a program would be able to usefully continue operation if malloc() fails, it's probably best to allocate memory via a wrapper function that calls malloc() and forcibly terminates the program if it returns null.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: dbush
Solution 2: Steve Summit
Solution 3: datenwolf
Solution 4: supercat