We have to develop our own file management system as part of a Java web application. We need to sync files between our main server and client servers and find out whether every client server has the latest version of each file.
Our files are in PDF, DOC and XLS format, and they change every now and then, as and when required.
What we are thinking is to use an MD5 checksum to compute a hash of each file on the main server and store it in the database. The same would be stored in each client server's database. By comparing the records in the two databases we would know whether the client servers are in sync or not.
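For illustration, roughly the hashing step we have in mind (a minimal sketch; it streams the file so large files are not read into memory at once):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class FileChecksum {
    // Computes the MD5 checksum of a file as a hex string,
    // suitable for storing in a database column.
    static String md5Of(Path file) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                md5.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}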
Please suggest if there is a better way to do this.
Yes, MD5 is almost guaranteed to detect any change in files you produce. Collision attacks (methods to create different files with identical hash sums) are possible, but that is only a concern when you are fighting an attacker who is actively trying to produce them. In normal operation this is not a concern; your hardware failing is much, much more likely than an accidental collision.
So, mathematically, using MD5 for sync algorithms is fine. But as others have pointed out, there may be ready-made solutions that make more sense for you, or it may be too expensive to scan the full content of every file regularly – that depends on your particular situation.
About your new wheel design concept – it’s been done: rsync

    utility software and network protocol for Unix-like systems (with ports to Windows) that synchronizes files and directories from one location to another while minimizing data transfer by using delta encoding when appropriate…

    The recipient splits its copy of the file into fixed-size non-overlapping chunks and computes two checksums for each chunk: the MD5 hash, and a weaker ‘rolling checksum’… It sends these checksums to the sender…
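Just to illustrate the idea (this is not rsync's actual implementation – its rolling checksum can be slid over the data one byte at a time, which the Adler-32 stand-in used here cannot), the recipient-side chunk checksums could look roughly like this in Java:

import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.Adler32;

public class ChunkChecksums {
    static final int CHUNK_SIZE = 4096; // fixed-size, non-overlapping chunks

    // For each chunk: a cheap weak checksum plus a strong MD5 hash.
    // The sender uses the weak sums to find candidate matches and the
    // strong hashes to confirm them.
    static List<String> checksums(byte[] data) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        List<String> sums = new ArrayList<>();
        for (int off = 0; off < data.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, data.length - off);
            Adler32 weak = new Adler32();
            weak.update(data, off, len);
            md5.update(data, off, len);
            sums.add(weak.getValue() + ":" + toHex(md5.digest()));
        }
        return sums;
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}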
Yes. You know all client files are synced from the server. Therefore, if you keep sufficient history on the server, the client only needs to send its file version. Expensive MD5 calculations are unnecessary.
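A minimal sketch of that check on the server side (the maps stand in for whatever the two databases hold; the names are made up):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class VersionCheck {
    // Returns the files a client still needs, given the versions it
    // reports and the authoritative versions on the main server.
    static List<String> staleFiles(Map<String, Long> clientVersions,
                                   Map<String, Long> serverVersions) {
        List<String> stale = new ArrayList<>();
        for (Map.Entry<String, Long> e : serverVersions.entrySet()) {
            Long clientVersion = clientVersions.get(e.getKey());
            if (clientVersion == null || clientVersion < e.getValue()) {
                stale.add(e.getKey()); // missing or outdated on the client
            }
        }
        return stale;
    }
}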
In the (hypothetical) case that a client can update its copy, you have a much more complicated problem anyway. You cannot support concurrent modifications (not with Excel or PDF) so you would need a checkout-modify-checkin system. At that point you’re re-inventing a VCS, so you’d just choose SVN instead.
It might be easier to just remember when the files were last synced. Calculating a hash for a large file could be expensive. If the modification date is after the sync date on either machine, the file needs to be synced again. Comparing dates is cheap and doesn’t depend on file size.
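A rough sketch, assuming the last sync time is recorded somewhere per file (the parameter is hypothetical):

import java.io.File;

public class SyncCheck {
    // True if the file was modified after the last successful sync.
    // lastSyncMillis would come from wherever the sync time is stored.
    static boolean needsSync(File file, long lastSyncMillis) {
        return file.lastModified() > lastSyncMillis;
    }
}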
If you do compare hashes, I would recommend comparing the size in bytes prior to the MD5 (or other) hash.
If the size is different between the two machines, you know the file is different. No need to waste time calculating a hash. And for most file types – certainly including those that you mentioned – it is extremely unlikely that a change will leave you with exactly the same file size.
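A sketch of that short-circuit, comparing two local copies for simplicity (the CRC-32 here is just a stand-in for whatever hash you settle on):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public class QuickCompare {
    // Compare sizes first; only fall back to the expensive hash
    // comparison when the sizes match.
    static boolean probablySame(Path a, Path b) throws IOException {
        if (Files.size(a) != Files.size(b)) {
            return false; // different size => definitely different
        }
        return checksumOf(a) == checksumOf(b);
    }

    private static long checksumOf(Path p) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(Files.readAllBytes(p));
        return crc.getValue();
    }
}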
MD5 is said to be quite slow (but see the benchmark below), and an MD5 digest is rather long. I’d recommend modification time, size and a CRC-32 checksum for file comparisons. As its name implies, CRC-32 produces 32-bit hash values. A Java implementation is available in java.util.zip.CRC32.
Edit:
The speed advantage of CRC-32 over MD5 is smaller than I expected: CRC-32 needs about 20% less time than MD5.
I used the following Java code to find the difference (and demo the usage of both methods):
import java.security.MessageDigest;
import java.util.Random;
import java.util.zip.CRC32;

public class HashBench {

    @SuppressWarnings("unused")
    public static void main(String[] args) throws Exception {
        int noOfLoopIterations = 100 * 1000;
        int bytesInMessageBuffer = 100 * 1024;
        byte[] randomByteBuffer = new byte[bytesInMessageBuffer];
        byte[] md5Digest;
        long crcValue;
        long startTime;

        // Random in-memory data, so disk caching and other I/O effects play no role.
        new Random().nextBytes(randomByteBuffer);

        // MD5 benchmark
        o("Starting MD5 benchmark ...(" + bytesInMessageBuffer / 1024 + "KByte messages)");
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        startTime = System.nanoTime();
        for (int i = 0; i < noOfLoopIterations; i++) {
            md5Digest = md5.digest(randomByteBuffer);
        }
        showElapsed(noOfLoopIterations, startTime);

        // CRC-32 benchmark
        o("Starting CRC-32 benchmark ... (" + bytesInMessageBuffer / 1024 + "KByte messages)");
        CRC32 crc = new CRC32();
        startTime = System.nanoTime();
        for (int i = 0; i < noOfLoopIterations; i++) {
            crc.reset();
            crc.update(randomByteBuffer);
            crcValue = crc.getValue();
        }
        showElapsed(noOfLoopIterations, startTime);

        o("Ciao!");
    }

    private static void showElapsed(int noOfLoopIterations, long startTime) {
        long elapsedTime = System.nanoTime() - startTime;
        o("Elapsed time: " + num(elapsedTime / 1000000000.0) + "s for "
                + String.format("%1$,.0f", 1.0 * noOfLoopIterations) + " loops");
        o("Time per digest: " + num(elapsedTime / (1000000.0 * noOfLoopIterations)) + "ms");
        o("");
    }

    private static void o(String s) {
        System.out.println(s);
    }

    private static String num(double x) {
        return String.format("%1$,.2f", x);
    }
}
The result:
Starting MD5 benchmark ...(100KByte messages)
Elapsed time: 28,94s for 100.000 loops
Time per digest: 0,29ms
Starting CRC-32 benchmark ... (100KByte messages)
Elapsed time: 23,89s for 100.000 loops
Time per digest: 0,24ms
To avoid the influence of disk caching and other external effects, I just fill a byte array with random values. The benchmark executes the hash/checksum calculation repeatedly.
Conclusion: Calculation speed is not a convincing reason in this case.