This instance is integer-overflowable, but we cannot keep hand-auditing
every instance (or apathetically ignoring these issues) when the simple
calloc idiom is better in the presence of a good calloc(). It is simply
infeasible to always write correct range checks before the aggregate
size calculation: go find some 4000 lines of code, REPAIR THEM ALL,
then come back and tell me I am wrong.
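
To illustrate the pattern being discussed, here is a minimal sketch (the
struct name and function names are made up) of the multiply-then-malloc
idiom versus the calloc idiom; only the latter can catch the overflow,
and only if calloc() actually checks for it:

#include <stdlib.h>
#include <string.h>

struct widget { char buf[64]; };	/* hypothetical element type */

/*
 * Multiply-then-malloc: if n is attacker-influenced, the product
 * n * sizeof(struct widget) can wrap around, and malloc() happily
 * returns a buffer much smaller than the caller believes it has.
 */
struct widget *
alloc_widgets_unsafe(size_t n)
{
	struct widget *p = malloc(n * sizeof(struct widget));

	if (p != NULL)
		memset(p, 0, n * sizeof(struct widget));
	return p;
}

/*
 * The calloc idiom: a good calloc() refuses the request (ENOMEM)
 * when n * sizeof(struct widget) would overflow, and zeroes the
 * memory as well, making a separate memset() unnecessary.
 */
struct widget *
alloc_widgets(size_t n)
{
	return calloc(n, sizeof(struct widget));
}
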
This only works on systems where calloc() does the integer-overflow
check; if your system doesn't do this, you need to ask your vendor
WHY THEY ARE 10 YEARS BEHIND IN BEST PRACTICE. This is the kind of
problem that needs to be solved at the right layer.
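
For reference, the check in question is cheap. The sketch below is
modelled on the overflow test used by OpenBSD's calloc()/reallocarray();
the function name is made up and this is not any particular libc's
actual implementation:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/*
 * If both operands are below 2^(half the bits of size_t), the product
 * cannot overflow, so the division is only needed for large inputs.
 */
#define MUL_NO_OVERFLOW	((size_t)1 << (sizeof(size_t) * 4))

static void *
checked_calloc(size_t nmemb, size_t size)
{
	void *p;

	if ((nmemb >= MUL_NO_OVERFLOW || size >= MUL_NO_OVERFLOW) &&
	    nmemb > 0 && SIZE_MAX / nmemb < size) {
		errno = ENOMEM;
		return NULL;		/* refuse, don't wrap around */
	}
	p = malloc(nmemb * size);
	if (p != NULL)
		memset(p, 0, nmemb * size);
	return p;
}
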
A malloc() integer overflow was implicated in the 2002 OpenSSH hole.
OpenSSH and much other code is now written to use calloc(); OpenSSH
alone has 103 calls to it. We feel safer with our use of calloc(), and
it is a natural approach for us. How safe do you feel on systems whose
calloc() lacks that range check?
Good writeup from 2006: http://undeadly.org/cgi?action=article&sid=20060330071917
curr = curr->next;
}
- number_uses = malloc((max_strength_bits + 1) * sizeof(int));
+ number_uses = calloc((max_strength_bits + 1), sizeof(int));
if (!number_uses) {
SSLerr(SSL_F_SSL_CIPHER_STRENGTH_SORT, ERR_R_MALLOC_FAILURE);
return (0);
}
- memset(number_uses, 0, (max_strength_bits + 1) * sizeof(int));
/*
* Now find the strength_bits values actually used