I have a quick question about the following C# code:
decimal _sampleCounter = 2100m;
decimal wrong = _sampleCounter % (30m / 2.3m);
decimal correct = 2100m % (30m / 2.3m);
Console.WriteLine(wrong); // -> 13.043478260869565217391304320
Console.WriteLine(correct); // -> 0.0000000000000000000000000
The “wrong” result matches what I get when using double instead of decimal.
The mathematically correct result is exactly 0, since 2100 / (30 / 2.3) = 161 with no remainder.
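For comparison, this is the double version I used for that check (a minimal sketch; the variable names are just for illustration):
// double version of the same expression, for comparison
double sampleCounterD = 2100.0;
double wrongD = sampleCounterD % (30.0 / 2.3);
Console.WriteLine(wrongD); // also a nonzero value around 13.0434…, not 0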
So what is happening here? Why isn’t the result 0.0000… in both cases?
I could imagine that the compiler evaluates the constant expression for “correct” at compile time and substitutes its final value, but that alone doesn’t explain why the two results differ.
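For illustration, I also considered inspecting the intermediate divisor at runtime (a small sketch; the variable name divisor is just for this example):
// store the intermediate quotient in a variable so it is definitely computed at runtime
decimal divisor = 30m / 2.3m;        // decimal division rounds this to 28-29 significant digits
Console.WriteLine(divisor);          // shows the rounded value actually used as the divisor
Console.WriteLine(2100m % divisor);  // remainder against that rounded divisor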
Best regards,
Philipp