I’m working with an embedded device that uses a unique protocol to send messages to other devices, and I’m writing an application that parses the sent packets. Each packet carries 8 bytes; the protocol defines the first byte as a header and the remaining 7 bytes as data.
They are trying to pass a particular ID string but the ID string is 8 characters long (ASCII) so it won’t fit in 7 bytes.
What my colleague told me is that they’re going to turn the 8 ASCII bytes of the original string into an integer (decimal) and send me 4 bytes of it. They told me I should be able to get the original string back from those 4 bytes. I’m having a hard time wrapping my head around this.
So if you have an ID string like “IO123456”, that’s 0x49 0x4f 0x31 0x32 0x33 0x34 0x35 0x36 in ASCII. How on earth can you compress that into 4 bytes by turning it into an integer and still recover the original string? Am I missing something, or is my colleague mistaken? I understand this is a really bizarre question, but it seriously does not make any sense to me.
Is the ID always in the form IO123456? What your colleague could mean is that he only sends the numeric part, which easily fits in 4 bytes once the “IO” part is omitted.
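For instance, a minimal sketch in C, assuming the prefix really is a constant “IO” and the numeric part always fits in 32 bits (the pack_numeric/unpack_numeric names are just made up for illustration):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: only the numeric part is transmitted; "IO" is re-attached on the
   receiving side. Assumes the prefix is constant and the number fits in 32 bits. */
static void pack_numeric(const char *id, uint8_t out[4])
{
    uint32_t n = (uint32_t)strtoul(id + 2, NULL, 10);   /* skip "IO" */
    out[0] = (uint8_t)(n >> 24);
    out[1] = (uint8_t)(n >> 16);
    out[2] = (uint8_t)(n >> 8);
    out[3] = (uint8_t)n;
}

static void unpack_numeric(const uint8_t in[4], char id[9])
{
    uint32_t n = ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
               | ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
    snprintf(id, 9, "IO%06u", (unsigned)n);             /* re-attach the constant prefix */
}
```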
If the first two characters are not constant (but are always letters) and the remaining six characters are always numbers, a string like “IO123456” can be packed into 5 bytes by converting the numbers into binary-coded decimal (BCD) format:
IO123456 -> 0x49 0x4f 0x31 0x32 0x33 0x34 0x35 0x36
         -> 0x49 0x4f 0x12 0x34 0x56
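A rough sketch of that packing step in C (it assumes the ID is always two letters followed by exactly six ASCII digits):

```c
#include <stdint.h>

/* Sketch: keep the two letters as-is and pack the six digits as BCD,
   two digits per byte -> 5 bytes total. */
static void pack_bcd5(const char id[8], uint8_t out[5])
{
    out[0] = (uint8_t)id[0];                      /* 'I' -> 0x49 */
    out[1] = (uint8_t)id[1];                      /* 'O' -> 0x4f */
    for (int i = 0; i < 3; i++) {
        uint8_t hi = (uint8_t)(id[2 + 2 * i] - '0');
        uint8_t lo = (uint8_t)(id[3 + 2 * i] - '0');
        out[2 + i] = (uint8_t)((hi << 4) | lo);   /* '1','2' -> 0x12 */
    }
}
```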
If there is a limited set of possible identifiers (the first two letters), you can encode these into a number and send that instead (as long as there are no more than 256 combinations), e.g.:
IO -> 0x00
RD -> 0x01
WT -> 0x02
...
AB -> 0x10
...
ZZ -> 0xff
so that the original string is packed into 4 bytes without any loss of information:
IO123456 -> 0x49 0x4f 0x31 0x32 0x33 0x34 0x35 0x36
         -> 0x00 0x12 0x34 0x56
Of course this process can also be reversed to get the original ID string.
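Put together, the 4-byte scheme could look roughly like this in C; the contents of the prefix table are invented here, and the real mapping would have to be agreed on by both ends:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the 4-byte scheme: one byte for the two-letter prefix (via a
   lookup table) plus three BCD bytes for the six digits. */
static const char *prefixes[] = { "IO", "RD", "WT" };   /* index == code byte */
#define NUM_PREFIXES (sizeof prefixes / sizeof prefixes[0])

static int pack_id(const char id[8], uint8_t out[4])
{
    int code = -1;
    for (size_t i = 0; i < NUM_PREFIXES; i++)
        if (memcmp(id, prefixes[i], 2) == 0) { code = (int)i; break; }
    if (code < 0)
        return -1;                                      /* unknown prefix */
    out[0] = (uint8_t)code;
    for (int i = 0; i < 3; i++)                         /* six digits -> 3 BCD bytes */
        out[1 + i] = (uint8_t)(((id[2 + 2 * i] - '0') << 4)
                              | (id[3 + 2 * i] - '0'));
    return 0;
}

static void unpack_id(const uint8_t in[4], char id[9])
{
    memcpy(id, prefixes[in[0]], 2);                     /* code byte -> two letters */
    for (int i = 0; i < 3; i++) {
        id[2 + 2 * i] = (char)('0' + (in[1 + i] >> 4));
        id[3 + 2 * i] = (char)('0' + (in[1 + i] & 0x0F));
    }
    id[8] = '\0';
}
```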
If the string can be any sequence of characters:
- If you can be sure your strings don’t use the most-significant bit in each byte, you can chop each one down to seven bits and use bitwise operations to shift the remaining 56 bits into the 56 bits you have available (see the sketch after this list).
- If the strings are only letters and digits, come up with a 6-bit representation of just that set and make a 48-bit string of your identifier.
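Here is what the 7-bit variant could look like in C (a sketch only; it assumes plain 7-bit ASCII and fills all 7 available data bytes):

```c
#include <stdint.h>

/* Sketch: if every character is 7-bit ASCII, the top bit of each byte is
   always 0, so eight characters squeeze into 7 bytes (56 bits). */
static void pack7(const char id[8], uint8_t out[7])
{
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
        bits = (bits << 7) | (uint64_t)(id[i] & 0x7F);  /* append 7 bits */
    for (int i = 6; i >= 0; i--) {
        out[i] = (uint8_t)(bits & 0xFF);                /* low byte last */
        bits >>= 8;
    }
}

static void unpack7(const uint8_t in[7], char id[9])
{
    uint64_t bits = 0;
    for (int i = 0; i < 7; i++)
        bits = (bits << 8) | in[i];
    for (int i = 7; i >= 0; i--) {
        id[i] = (char)(bits & 0x7F);                    /* peel off 7 bits */
        bits >>= 7;
    }
    id[8] = '\0';
}
```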
If the format is always two letters followed by a string of digits:
- Leave the first two bytes alone and encode the number into a six-byte integer: IO123456 becomes 0x49 0x4f 0x01E240 (see the sketch after this list).
- Leave the first two bytes alone and pack the digits as binary-coded decimal: IO123456 becomes 0x49 0x4f 0x12 0x34 0x56.
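The first variant might be sketched like this in C; it assumes the numeric part is at most six decimal digits, so the value shown actually fits in three trailing bytes (a larger field could use more of the remaining space):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the "two letters + binary number" variant: the letters are kept
   as ASCII and the six digits are sent as a plain binary integer
   (123456 == 0x01E240), 5 bytes in total here. */
static void pack_letters_plus_int(const char id[8], uint8_t out[5])
{
    char digits[7];
    memcpy(digits, id + 2, 6);
    digits[6] = '\0';
    uint32_t n = (uint32_t)strtoul(digits, NULL, 10);

    out[0] = (uint8_t)id[0];          /* 'I' */
    out[1] = (uint8_t)id[1];          /* 'O' */
    out[2] = (uint8_t)(n >> 16);      /* 0x01 */
    out[3] = (uint8_t)(n >> 8);       /* 0xE2 */
    out[4] = (uint8_t)n;              /* 0x40 */
}
```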
From the context of the question posted here, it points to an industrial protocol called HART. This protocol has a unique way of wrapping ASCII characters, called Packed-ASCII. But even that doesn’t pack 8 characters into 4: according to Packed-ASCII, 8 ASCII bytes are converted to 6, 4 to 3, and so on.
In this protocol the length of the parameters in a given request is always fixed, so any remaining positions need to be padded with space characters. Still, all of this is HART-specific. If you confirm that you are working with HART, I will post the exact procedure for packing and unpacking.
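For reference, the general Packed-ASCII idea (not necessarily the exact HART procedure or bit order referred to above) is usually described as taking the 6 low bits of each character in the range 0x20–0x5F and packing four of them into three bytes, roughly like this:

```c
#include <stdint.h>

/* Rough sketch of 6-bit packed ASCII: only characters 0x20-0x5F (digits,
   upper-case letters, space, some punctuation) are allowed, each reduced to
   its 6 low bits, four characters per three bytes. Illustration only. */
static void pack_ascii4(const char in[4], uint8_t out[3])
{
    uint8_t c[4];
    for (int i = 0; i < 4; i++)
        c[i] = (uint8_t)in[i] & 0x3F;                     /* keep the 6 low bits */
    out[0] = (uint8_t)((c[0] << 2) | (c[1] >> 4));
    out[1] = (uint8_t)((c[1] << 4) | (c[2] >> 2));
    out[2] = (uint8_t)((c[2] << 6) | c[3]);
}

static void unpack_ascii4(const uint8_t in[3], char out[4])
{
    uint8_t c[4];
    c[0] = (uint8_t)(in[0] >> 2);
    c[1] = (uint8_t)(((in[0] & 0x03) << 4) | (in[1] >> 4));
    c[2] = (uint8_t)(((in[1] & 0x0F) << 2) | (in[2] >> 6));
    c[3] = (uint8_t)(in[2] & 0x3F);
    for (int i = 0; i < 4; i++)                           /* restore bit 6 for 0x40-0x5F */
        out[i] = (char)(c[i] < 0x20 ? c[i] | 0x40 : c[i]);
}
```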
Possibly by converting ‘0123456’ to a long integer.
But this would only work for numeric IDs.
Another possible scheme would be to convert the 7-bit characters to a 6-bit encoding such as ECMA-1, which would give you a six-byte string, but you would be limited to a character set of digits, upper-case letters, and a small set of punctuation characters.