I’m implementing an API-key-based authentication scheme and I’m caching valid API key entries (hash, scope, etc.) in a memory cache. For the cache key, I had been using the first 8 characters of the base64 representation of the key hash. I did this because there are 64^8 (281,474,976,710,656) possible truncated keys, so I thought that would be more than enough.
Then, when a key arrived, I hashed it, encoded the hash in base64, grabbed the first 8 characters, then pulled the auth details from the cache and returned them to the API logic.
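To make the scheme concrete, here is a minimal sketch of what I’m doing. The names (`cache_key`, `lookup`), the plain `dict` standing in for the memory cache, and the choice of SHA-256 as the hash are all illustrative assumptions, not my exact code:

```python
import base64
import hashlib

# Stand-in for the memory cache: truncated key -> cached auth entry.
# (In the real service this is a proper cache with expiry.)
cache: dict[str, dict] = {}

def cache_key(api_key: str) -> str:
    """Hash the API key (SHA-256 assumed here), base64-encode the digest,
    and keep only the first 8 characters as the cache key."""
    digest = hashlib.sha256(api_key.encode()).digest()
    return base64.b64encode(digest).decode()[:8]

def lookup(api_key: str):
    # Returns the cached auth entry, or None on a cache miss.
    return cache.get(cache_key(api_key))
```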
But considering it further, I significantly reduce collision resistance by doing this. It still has a fair amount of collision resistance, but haven’t I just reduced the effective key space to 48 bits (8 base64 characters × 6 bits each) instead of the full 128 bits? So an attacker only has to match the first 48 bits of the hash to pass.
I have a couple of options:

1. Leave it as it is, but do an equality check between the full-length hashes before returning the matching auth entry. (If it doesn’t match, log the attempt and fail the auth request.)
2. Use the full-length hash for the cache key.
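For option 1, the lookup would look roughly like this sketch. The entry layout (a `"full_hash"` field alongside the auth details) and the function name are assumptions for illustration; I use a constant-time comparison on the full digest before trusting the truncated match:

```python
import base64
import hashlib
import hmac

# Stand-in for the memory cache: truncated key -> cached auth entry,
# where each entry also stores the full digest it was cached under.
cache: dict[str, dict] = {}

def lookup_with_verify(api_key: str):
    digest = hashlib.sha256(api_key.encode()).digest()
    short = base64.b64encode(digest).decode()[:8]
    entry = cache.get(short)
    if entry is None:
        return None  # cache miss: fall through to the slow auth path
    # Option 1: verify the full hash so a 48-bit prefix collision
    # can't impersonate the cached key. compare_digest avoids timing leaks.
    if not hmac.compare_digest(entry["full_hash"], digest):
        # A prefix match with a different full hash: log it and fail auth.
        return None
    return entry
```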
Approach 1 seems a bit awkward: if I’m doing a full equality check immediately afterwards anyway, the cache key might as well be the full hash. Approach 2 seems better.
I expect looking up a full 128-bit value as the cache key is probably slightly slower than a 48-bit lookup plus an equality check, but I suspect the difference is neither here nor there.
Is there any other advantage to either approach (or a different approach I haven’t considered)?