Got a strange one. I've been trying to test all the bits in a 16-bit unsigned integer (ushort / UInt16).
For some reason the test on bit 14 fails (16384, i.e. 0x4000), and even checking it in the variable watch window doesn't make sense. Strangely, the bit test works fine if I use hexadecimal 0x4000 instead of decimal 16384.
Example code:
UInt16 get_ADC_Fault_bits(byte index)
{
    // Ignore the ADC index for now, just return 65535
    return 65535; // No faults
}
void someFunction()
{
    // bits becomes 65535 (all bits set, meaning no faults)
    ushort bits = get_ADC_Fault_bits(ADC_index);

    ledFault14.On = (bits & 8192) != 8192;     // On = false, OK
    ledFault15.On = (bits & 16384) != 16834;   // On = true --- WTF? Doesn't work in decimal
    ledFault15.On = (bits & 0x4000) != 0x4000; // Off = false --- OK, works in hexadecimal
    ledFault16.On = (bits & 32768) != 32768;   // On = false, OK
}
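For reference, here is a minimal standalone sketch of the same kind of test, written with a shifted mask so each bit constant only appears once per line (names like GetAdcFaultBits are hypothetical stand-ins for the code above, which I don't have in full):

```csharp
using System;

class BitTestSketch
{
    // Stand-in for get_ADC_Fault_bits(): all bits set = no faults.
    static ushort GetAdcFaultBits(byte index) => 0xFFFF;

    static void Main()
    {
        ushort bits = GetAdcFaultBits(0);

        // Testing bit n via (1 << n) means the large constant is never
        // typed out twice, so it cannot differ between the AND and the compare.
        for (int n = 13; n <= 15; n++)
        {
            int mask = 1 << n;
            bool ledOn = (bits & mask) != mask; // false while all bits are set
            Console.WriteLine($"bit {n}: mask=0x{mask:X4}, LED on = {ledOn}");
        }
    }
}
```

With all 16 bits set, every test in the loop evaluates to false (no fault LEDs lit).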
Variable Watch Window:
(bits & 16384) == 16834 false bool
(bits & 16384) 16384 int
(bits & 0x4000) 16384 int
(bits & 16384) != 16834 true bool
(bits & 0x4000) != 0x4000 false bool
(bits & 16384) == 16834 false bool
(bits & 0x4000) == 0x4000 true bool
The question is why. Any thoughts? Am I doing something stupid, or is .NET 8 bugged?