I have about 2 billion 18-digit numbers to store in memory for a computation, and some of them are repeated. I want to load each new number into memory and ignore it if it already exists. Right now I append the number to the list only if it is not already present, which means every insertion searches the entire list for a potential duplicate.
This linear scan of the already-existing numbers is taking a long time. What can I do to get around this performance problem?
Linear search of 2 billion numbers is going to be painful. On average, you’ll scan a billion numbers before finding a match, and an unsuccessful search (the common case when checking for duplicates) scans all 2 billion. Even if the numbers are in memory, that’s got to take some time.
A better approach is to sort those numbers as you load them into memory and then use a Binary Search algorithm to quickly find the number you want in O(log n) time.
I’m not going to repeat Wikipedia’s excellent write up of the algorithm here, but the general idea is to sample the middle of the list and compare it to the number you’re looking for. If your number is higher, you can immediately discard the smaller half of the list and try again with what’s left.
- Your first sample would either find the number or eliminate 1 billion numbers.
- Your second sample would either find the number or eliminate 500 million numbers.
- Your third sample would either find the number or reduce the search set to 250 million numbers.
… and so on. As you can see, this converges on a solution (either the number is found or you know it’s not there) very quickly. There’s the overhead of initially sorting the numbers, but that’s peanuts compared to the time you’ll save on the search.
If you can store the numbers on disk sorted, you’ll be even that much more ahead of the game.
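To make the idea concrete, here is a minimal sketch in Python (the question doesn’t name a language, so this is just for illustration) that keeps the list sorted and uses the standard library’s `bisect` module for the O(log n) lookup:

```python
import bisect

def add_if_absent(sorted_nums, value):
    """Insert value into the sorted list unless it is already present."""
    i = bisect.bisect_left(sorted_nums, value)
    if i < len(sorted_nums) and sorted_nums[i] == value:
        return False  # duplicate found via binary search: ignore it
    sorted_nums.insert(i, value)  # note: the insert itself shifts elements, O(n)
    return True

nums = []
for v in [5, 3, 5, 9, 3]:
    add_if_absent(nums, v)
print(nums)  # [3, 5, 9]
```

One caveat: while the *search* is O(log n), inserting into the middle of a contiguous list still shifts elements, so for 2 billion entries you’d typically load everything, sort once, and then only search.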
You could use a hash table using the number itself as the key.
This means lookups and insertions would both be constant in time. If you know you have approximately two billion records you can pre-allocate that much space ahead of time so resizing is not an issue.
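For illustration, assuming a Python environment (the post doesn’t specify one), the built-in `set` is exactly such a hash table keyed by the value itself; it doesn’t expose pre-allocation, but lookups and insertions are still constant time on average:

```python
seen = set()  # hash table keyed by the number itself

def add_number(n):
    """Return True if n is new; membership test and insert are O(1) on average."""
    if n in seen:
        return False  # duplicate: ignore
    seen.add(n)
    return True

print(add_number(123456789012345678))  # True  (first time seen)
print(add_number(123456789012345678))  # False (duplicate ignored)
```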
Use a set.
It has many of the semantic properties of a list, but only stores unique values. To be honest, I am unsure whether it will perform reasonably for billions of elements, but it is backed by a hash table, and I have successfully used it to take unions and intersections of sets with over 10 million entries.
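A quick Python sketch of the operations mentioned above (the sizes here are tiny placeholders, not the poster’s actual data):

```python
# Deduplicate on load: a set silently ignores repeated values.
numbers = {5, 3, 5, 9, 3}
print(sorted(numbers))  # [3, 5, 9]

# Set algebra is built in:
a = set(range(0, 10))
b = set(range(5, 15))
print(sorted(a & b))  # intersection: [5, 6, 7, 8, 9]
print(len(a | b))     # union has 15 elements
```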