I am writing a utility in Go that attempts to crack passwords given a salted hash and the salt. That part works fine, but now I’d like to take advantage of goroutines and crack concurrently. The issue is that the word list is shared: all the goroutines would iterate line by line over the same word list. Should each goroutine get its own copy of the word list, or can they share the same memory-mapped file? My thinking is that it would be slower if they are all trying to iterate over the same area of memory concurrently. Is there a better design pattern?
No, concurrent reads of the same (read-only) memory do not slow things down, so sharing one memory-mapped word list across goroutines is a perfectly fine way to manage your data flow.
However, concurrently using some other resource, such as a plain file handle, may be problematic. In that case, having a single reader goroutine that writes into a buffered channel for the worker goroutines can be an efficient and elegant architecture.
Note that CPU cracking is fine when you just want to experiment with cracking concepts or are cracking extremely fast, weak hash algorithms like MD5. For realistic workloads, there’s no way around massively parallel GPU cracking. For a compute-bound workload such as cracking, your “hashes per second” is limited primarily by the number of cores, and using more goroutines than CPU cores will be detrimental. But whereas a CPU has perhaps 4 to 32 cores, a GPU has hundreds or thousands.