Was following along with a coding tutorial, and stumbled upon this:
```cpp
uint8_t m_saved_data = static_cast<uint8_t>(data & 0xff); // save bottom byte
```

with `data` being of type `long`.
I'm unsure why the `& 0xff` is needed here. I thought the `static_cast<uint8_t>` truncates to the bottom byte anyway, so why use what I assume is an unnecessary bitmask? Is it just for clarity, or does removing the mask actually change the behavior?
I've done a bit of testing myself, but I can't seem to find any situation where removing the bitmask changes the result.
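For reference, this is roughly the kind of test I ran (the sample values are just ones I picked, nothing from the tutorial):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Arbitrary sample values, including negatives, to compare
    // casting with and without the mask.
    long samples[] = {0, 255, 256, 0x12345678L, -1, -42};

    for (long data : samples) {
        uint8_t with_mask    = static_cast<uint8_t>(data & 0xff);
        uint8_t without_mask = static_cast<uint8_t>(data);
        std::cout << data << ": "
                  << static_cast<int>(with_mask) << " vs "
                  << static_cast<int>(without_mask) << '\n';
    }
}
```

On my machine the two values come out identical for every input I tried, even the negative ones.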
Thanks in advance