I ran a simple test on JDK 1.7.0_45 (Windows 7, 64-bit):
Test 1:
long start = System.nanoTime();
for (int i = 0; i < 1000000; i++) {
    System.currentTimeMillis();
}
long elapsed = System.nanoTime() - start;
Versus Test 2:
long start = System.nanoTime();
long adjust = 313231;
for (int i = 0; i < 1000000; i++) {
    long result = System.currentTimeMillis() + adjust;
}
long elapsed = System.nanoTime() - start;
On my system, the first test ran at around 28 ns per call and the second at around 1250 ns per call. That is a whopping 44x overhead. Can anyone explain such a huge difference?
Look at the generated byte code. One obvious explanation would be that your first call to currentTimeMillis() is so obviously useless that the optimizer removed it altogether, while the second was not removed. There are countless similar and less similar possible reasons, and speculating about them without looking at what is actually going on is pretty useless.
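If dead-code elimination is the explanation, a quick way to check without reading byte code is to make both loops produce a value the program actually uses. Below is a minimal sketch of that idea (my own code, not the asker's; the class name TimeBench is arbitrary): each loop sums its results into a variable that is later printed, so the JIT cannot discard either call. A hand-rolled loop like this is still a naive micro-benchmark with no warm-up; a harness such as JMH with its Blackhole is the more reliable tool.

// Sketch: force both loops to produce an observable result so neither
// body can be eliminated as dead code.
public class TimeBench {
    public static void main(String[] args) {
        long adjust = 313231;

        // Variant 1: consume the bare currentTimeMillis() result.
        long sink1 = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            sink1 += System.currentTimeMillis();
        }
        long elapsed1 = System.nanoTime() - start;

        // Variant 2: consume the adjusted result.
        long sink2 = 0;
        start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            sink2 += System.currentTimeMillis() + adjust;
        }
        long elapsed2 = System.nanoTime() - start;

        // Printing the sinks keeps them live; if dead-code elimination was the
        // cause, the two per-call timings should now be much closer.
        System.out.println(elapsed1 / 1_000_000.0 + " ns/call vs "
                + elapsed2 / 1_000_000.0 + " ns/call (sinks: " + sink1 + ", " + sink2 + ")");
    }
}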