I am learning git.
I am wondering why the git spec is so explicit about how commit SHAs are calculated. It is my understanding that the delta blob (the unit of work) goes into this SHA, along with information about the committing author.
Why is this? Wouldn’t a GUID suffice?
Does the git architecture allow two different commits made at two different times to produce the same SHA?
Does the SHA serve as a checksum to verify that the data payload is not corrupted?
Can somebody also confirm that the SHA calculation only considers the delta and not the complete resulting code structure?
SHA vs. GUID
From The Git Object Model, emphasis is mine:
- Git can quickly determine whether two objects are identical or not, just by comparing names.
- Since object names are computed the same way in every repository, the same content stored in two repositories will always be stored under the same name.
- Git can detect errors when it reads an object, by checking that the object’s name is still the SHA1 hash of its contents.
So, yes, it is a protection against corruption. And that is why a GUID won't work: it is random and does not depend on the contents.
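To make the contrast concrete, here is a minimal Python sketch (not git's exact object format, which the next section gets to) showing why a content-derived hash can do what a random GUID cannot:

```python
import hashlib
import uuid

data = b"some file contents"

# The name is a pure function of the bytes: anyone who has the data can
# recompute it, so identical content always gets the same name and a
# single flipped bit is immediately visible.
name = hashlib.sha1(data).hexdigest()
corrupted = b"Some file contents"
assert hashlib.sha1(corrupted).hexdigest() != name  # corruption detected

# A GUID is just random: it says nothing about the bytes it labels, so it
# can neither detect corruption nor recognize identical content.
print(uuid.uuid4())  # different on every run, regardless of the data
```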
Checksums of what?
Also from the same article:
It is important to note that this is very different from most SCM systems that you may be familiar with. Subversion, CVS, Perforce, Mercurial and the like all use Delta Storage systems – they store the differences between one commit and the next. Git does not do this – it stores a snapshot of what all the files in your project look like in this tree structure each time you commit. This is a very important concept to understand when using Git.
Git stores binary snapshots, not deltas, in blob objects. Directory structure and file names are stored in tree objects. A good explanation can be found here: Git Internals – Git Objects.
Git calculates SHA checksums of these objects, not of deltas or of the original files. Blobs contain only the contents of the original files, while the names of those files go into trees.
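As a rough illustration, here is a small Python sketch of how git names a blob in the default SHA-1 object format: it hashes a short header ("blob", the content length, a NUL byte) followed by the raw file bytes, which is why identical content gets the same name in every repository:

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    # Git prepends the header "blob <size>\0" to the raw bytes and takes
    # the SHA-1 of the whole thing; that hash is the object name.
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Should print the same name that git itself reports for the same bytes,
# e.g.: printf 'hello\n' | git hash-object --stdin
print(git_blob_sha1(b"hello\n"))
```

Tree and commit objects are hashed the same way, just with "tree" or "commit" as the type in the header, so a commit's SHA indirectly covers the entire snapshot it points to.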
Learning Git
If you want to learn git as a user, you don't have to know about the internals. I've been using Git for over 3 years now and haven't needed any of this information in practice even once. It's good to know, but not required.
If you’re interested in general architecture of Git, read this: Git in The Architecture of Open Source Applications.
Linus Torvalds explains the reason for using a SHA hash in his git presentation at Google (by the way: I recommend that anyone who wants to understand what git is all about watch it in full):
(video | transcript)
Having a good hash is good for being able to trust your data, it happens to have some other good features, too, it means when we hash objects, we know the hash is well distributed and we do not have to worry about certain distribution issues. Internally it means from the implementation standpoint, we can trust that the hash is so good that we can use hashing algorithms and know there are no bad cases. So there are some reasons to like the cryptographic side too, but it’s really about the ability to trust your data. I guarantee you, if you put your data in git, you can trust the fact that five years later, after it is converted from your harddisc to DVD to whatever new technology and you copied it along, five years later you can verify the data you get back out is the exact same data you put in. And that is something you really should look for in a source code management system.
One of the reasons I care is we actually had for the kernel a break-in on one of the BitKeeper sites, where people tried to corrupt the kernel source code repository, and BitKeeper actually caught it. BitKeeper did not have a really fancy hash at all, I think it is only 16-bit CRC, something like that. But it was good enough that you could actually see clumsy attempt, it was not cryptographically secure but it was hard enough in practice to overcome that it was caught immediately. But when that happens once to you, you got burned once, you do not ever want to get burned again. Maybe your projects aren’t that important, my projects, they are important. There is a reason I care.
[…]
So maybe I am a cuckoo, maybe I am a bit crazy, and I care about security more than most people do. But the whole notion that I would give the master copy of source code that I trust and I care about so much I would give it to a third party is ludicrous. Not even Google. Not a way in Hell would I do that. I allow Google to have a copy of it, but I want to have something I know that nobody touched it. By the way I am not a great MIS person so disc corruption issue is definitely a case that I might worry about because I do not do backups, so it’s Ok if I can then download it again from multiple trusted parties I can verify them against each other that part is really easy, I can verify them against hopefully that 20 bytes that I really really cared about, hopefully I have that in a few places. 20-byte is easier to track than 180MB. And corruption is less likely to hit those 20 bytes. If I have those 20 bytes, I can download a git repository from a completely untrusted source and I can guarantee that they did not do anything bad to it. That’s a huge thing and that is something when you do hosted repositories for other people if you use subversion you are just not doing it right. You are not allowing them to sleep well at night. Of course, if you do it for 70… how many, 75,000 projects? Most of them are pretty small and not that important so it’s Ok. That should make people feel better.
When you host your repository on a 3rd-party site, you can use the cryptographic hash of every revision to make sure that nobody tampered with it. If they change even a single byte, the hashes won't match anymore. That means writing down the hash of your HEAD revision from time to time protects you from malicious manipulation of your codebase, even when you don't host your code yourself.
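For example, here is a minimal Python sketch of that check (the recorded hash below is a made-up placeholder; in practice you'd note down the real output of git rev-parse HEAD somewhere you trust):

```python
import subprocess

# Placeholder: the HEAD hash you wrote down earlier from a copy you trust.
TRUSTED_HEAD = "0123456789abcdef0123456789abcdef01234567"

# Ask the (possibly untrusted) clone what its HEAD commit is.
head = subprocess.run(
    ["git", "rev-parse", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if head == TRUSTED_HEAD:
    print("HEAD matches the hash you recorded; this is the history you trusted.")
else:
    print("HEAD does not match your recorded hash; treat this clone as suspect.")
```

Running git fsck in the clone additionally re-hashes every object and reports any whose contents no longer match their names.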