I’ve got a data set that is in binary, and I’m trying to convert it to decimal/float.
For now, I am just using Python to get an understanding of the data.
Typically, I would achieve this in one of two ways: either using Python’s struct module, or bit shifting and OR’ing.
For instance, I have a two-byte array of [67, 1] (little endian).
I can do this:
print(struct.unpack('<H', bytes(byte_array))[0])
or this:
byte_array[1] << 8 | byte_array[0]
Both give me the same value, 323, which is correct.
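
For completeness, here is a self-contained version of both approaches (nothing new, just the snippets above made runnable):

import struct

# the two raw bytes, little endian
byte_array = [67, 1]

# approach 1: struct.unpack with '<H' (little-endian unsigned 16-bit)
print(struct.unpack('<H', bytes(byte_array))[0])  # 323

# approach 2: bit shifting and OR'ing the individual bytes
print(byte_array[1] << 8 | byte_array[0])  # 323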
Now, I also have a vendor-provided application that reads the same raw data file that I am reading, but it represents the values differently:
What are they doing to take a 16-bit value and make a float out of it with that kind of precision?
Likewise, the second value they show is -4.56, but when I run the above code on those bytes I just get -5.
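
For the negative value I am reading it as a signed 16-bit integer. A quick sketch of what I mean (the byte pair here is hypothetical, since I don't have the exact bytes in front of me; the point is that the signed '<h' format only ever gives me a whole number):

import struct

# hypothetical byte pair whose signed 16-bit little-endian value is -5
byte_array = [251, 255]

# '<h' = little-endian *signed* 16-bit; still an integer, never -4.56
print(struct.unpack('<h', bytes(byte_array))[0])  # -5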