What’s the basis behind SHA-1 or SHA-2 or other Checksum algorithms?
I read about it here http://en.wikipedia.org/wiki/SHA-1#Data_Integrity
But I am still wondering about an answer in a layman’s language.
Can I understand it as a very, very compressed code that can be translated back into original data?
Let’s say I have a letter written in Notepad. Then the whole of my one A4 page of data can be converted into something like this: “9b90417b6a186b6f314f0b679f439c89a3b0cdf5”. So whenever I want my original data back, I can convert this back into the original data?
I am very sure that I am wrong here, because it seems impossible that data which is itself a combination of letters and numbers could be represented by a much smaller set of letters and numbers. Illogical!
Then, what’s the basic?
A hash is a one-way function that digests an arbitrary amount of data into a fixed-size result. The function has the property that, for a particular input, it always generates the same output.
You could consider addition or multiplication a very poor hash function. Given a sum or product, you cannot uniquely determine the numbers that were added or multiplied to produce that result, but given a candidate set of numbers you can always re-add or re-multiply them to test whether your set is either probably right or definitely wrong.
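As a rough sketch of that idea (the numbers here are made up for illustration), addition as a "hash" lets you verify a candidate input set but never recover it:

```python
# A deliberately terrible "hash": just sum the inputs.
# You cannot recover the inputs from the sum, but you can
# re-run the function to check whether a candidate set of
# inputs is probably right or definitely wrong.
def bad_hash(numbers):
    return sum(numbers)

claimed = 549

# Many different input sets produce 549, so a match only
# means "probably right" ...
assert bad_hash([100, 200, 249]) == claimed

# ... but a mismatch means "definitely wrong".
assert bad_hash([1, 2, 3]) != claimed
```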
A good hash function scrambles the structure of the source data so that the resulting hash bears very little resemblance to the original data, and has the property that small changes in the input cause large changes in the result.
Are you familiar with the concept of a simple checksum? e.g. adding up the ASCII values of all the letters in a string?
Take your name:
V I S H W A S
86 + 73 + 83 + 72 + 87 + 65 + 83 = 549
The problem with using something so simple to verify the correctness of a transmission is that many simple errors will leave the checksum unchanged.
e.g. if two letters get swapped around, so VISHWAS becomes VIHSWAS, the checksum will be the same.
Or if one letter is wrong and another is wrong in the opposite direction: VISGWBS
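The name example above can be checked directly. A minimal Python version of that naive checksum shows that all three strings collide:

```python
def checksum(s):
    """Naive checksum: sum of the ASCII codes of each character."""
    return sum(ord(c) for c in s)

print(checksum("VISHWAS"))  # 549

# Swapping two letters leaves the sum unchanged:
print(checksum("VIHSWAS"))  # 549

# Two one-off errors in opposite directions (H -> G, A -> B)
# also cancel out:
print(checksum("VISGWBS"))  # 549
```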
With a good hashing function like SHA-1, a small error in the transmission will result in a completely different hash. And it is incredibly unlikely that any two errors will combine to give the same hash result.
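You can see the contrast using Python's standard `hashlib` module, hashing the same two strings from the checksum example above:

```python
import hashlib

# The inputs differ only in two swapped letters, yet the
# SHA-1 digests are completely different.
h1 = hashlib.sha1(b"VISHWAS").hexdigest()
h2 = hashlib.sha1(b"VIHSWAS").hexdigest()

print(h1)
print(h2)
```

Unlike the naive checksum, there is no practical way to construct a second input that produces the same SHA-1 digest by combining small errors.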
SHA-1/SHA-2 and many others are hash functions, or cryptographic hashes. The key point of a hash function is that it computes a short “fingerprint” from arbitrarily large data, and it should not be feasible to:
- Find data that has a specific hash
- Find two different pieces of data that have the same hash
- Modify data so that it still has the same hash
Hashes are commonly used for data-integrity checks: e.g. the sender sends the data and its hash; the receiver gets the data and computes the hash from it. If the hashes are equal, the data was received correctly. Hashes are also used for digital signatures, where they allow us to apply awfully slow asymmetric cryptography to a short, non-forgeable “fingerprint” of the data instead of to the whole message.
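The sender/receiver integrity check described above can be sketched in a few lines of Python. This uses SHA-256 (one of the SHA-2 family); the message bytes are made up for illustration:

```python
import hashlib

def send(message: bytes):
    # Sender transmits the data together with its hash.
    return message, hashlib.sha256(message).hexdigest()

def receive(message: bytes, digest: str) -> bool:
    # Receiver recomputes the hash and compares.
    return hashlib.sha256(message).hexdigest() == digest

data, tag = send(b"an A4 page of text")
assert receive(data, tag)                        # intact transmission
assert not receive(b"an A4 page of test", tag)   # one corrupted byte detected
```

Note that a bare hash only detects accidental corruption; to detect deliberate tampering the hash itself must be protected, e.g. by a signature or an HMAC.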
This is what I would tell someone without a computer science or programming background.
A hash is like the handwritten signature of a person. But the signature is unreadable. So unreadable that you cannot tell the person's name just by looking at it. But you can still compare two signatures and decide whether the same person signed two different documents.
In this analogy the data is the person, the computed hash is the signature and computing the hash is like asking the person to sign something. If you have computed the hash, you can compare it to another hash. But you have no way to know the data (from which the hash was computed) just by looking at the hash.