I am writing a parser for a LIN Description File (LDF). An LDF file may contain floats. Currently I have a lexer that produces the following question-relevant tokens:
- Number: any character sequence composed of the digits 0-9, stored in a ReadOnlySequence (leading and trailing zeroes are preserved)
- Dot: represents a ‘.’ character and carries no value
Whenever a floating-point number is expected, one of the following token sequences must appear:
- Number -> Dot -> Number
- Number
With the information from these tokens I can build a Span of chars containing the digits of both Number tokens separated by a ‘.’, or simply a Span of chars containing the digits of the single Number token, and then call double.TryParse on that Span.
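Roughly what I have in mind for that step, as a sketch only — it assumes the lexer hands me each Number token's digits as a ReadOnlySequence<char>, and the method name and the 64-char cap are arbitrary:

```csharp
using System;
using System.Buffers;
using System.Globalization;

static bool TryParseLdfFloat(
    in ReadOnlySequence<char> integerDigits,   // digits of the first Number token
    in ReadOnlySequence<char> fractionDigits,  // digits of the second Number token; empty if there was no Dot
    out double value)
{
    const int MaxChars = 64;                   // arbitrary cap so the buffer can stay on the stack
    long length = integerDigits.Length + fractionDigits.Length + 1;
    if (length > MaxChars)
    {
        value = default;
        return false;
    }

    Span<char> buffer = stackalloc char[MaxChars];
    integerDigits.CopyTo(buffer);
    int pos = (int)integerDigits.Length;
    if (!fractionDigits.IsEmpty)
    {
        buffer[pos++] = '.';
        fractionDigits.CopyTo(buffer.Slice(pos));
        pos += (int)fractionDigits.Length;
    }

    return double.TryParse(buffer.Slice(0, pos), NumberStyles.Float,
                           CultureInfo.InvariantCulture, out value);
}
```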
I am concerned about the precision-truncating nature of double.TryParse. It returns Infinity if the parsed number is too large for a double, which I can check for and handle accordingly. What I would like to do, however, is detect when the parsed number cannot be represented “precisely enough” (i.e. significant digits were truncated), inform the user that a floating-point number of this precision is not supported, and preferably fail this parsing instance.
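The magnitude overflow is easy to handle after the fact, something like (continuing the sketch above):

```csharp
// Overflow in magnitude: the value no longer fits in a double at all.
if (TryParseLdfFloat(integerDigits, fractionDigits, out double value) && double.IsInfinity(value))
{
    // Report "number too large for a double" and fail this parsing instance.
}
```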
What I mean by precisely enough is:
- 0.1 cannot be represented exactly as a double (or a float, for that matter), but it gets close enough. The loss of precision happens because this fraction is recurring in base 2. This should be allowed, because it is the best any IEEE-754 number can do.
- 1.123456789123456789 parses to 1.1234567891234568, where the last two digits are lost. I had hoped that TryParse would fail in this case, since the input is too precise, but it does not. I am looking for a workaround that detects a loss of precision caused by an over-precise input value rather than by the inherent limitations of floating-point numbers themselves.
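To make both cases concrete (the printed values are what I observe with the shortest round-trip formatting of .NET Core 3.0+):

```csharp
using System;
using System.Globalization;

double ok = double.Parse("0.1", CultureInfo.InvariantCulture);
Console.WriteLine(ok);     // "0.1" - not exact in binary, but round-trips to the same text

double lossy = double.Parse("1.123456789123456789", CultureInfo.InvariantCulture);
Console.WriteLine(lossy);  // "1.1234567891234568" - the last two digits are silently dropped
```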
Is there a way to detect such an overflow in significant digits? I need to do this without any managed heap allocations.
The rationale behind this is that whoever wrote the LDF file meant something by including that many significant digits. If some of those digits were silently truncated, the program might not behave correctly with respect to the input file, which is why I want to treat this as an error. Potentially I could limit the number of decimal digits (the sum of the ReadOnlySequence lengths) to something that definitely cannot overflow, but then very small numbers with many zeroes before the significant digits would become false positives.
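For illustration, that digit-count guard would be something along these lines (a hypothetical helper, reusing the parameter names from the sketch above); the comment notes the false positive I am worried about:

```csharp
using System.Buffers;

// A double round-trips at most 17 significant decimal digits, so inputs with more
// digits than that usually cannot all be preserved. The problem: an input such as
// 0.00000000000000000001 fails this length check even though it carries only one
// significant digit.
static bool WithinDigitBudget(in ReadOnlySequence<char> integerDigits,
                              in ReadOnlySequence<char> fractionDigits,
                              int maxDigits = 17)
    => integerDigits.Length + fractionDigits.Length <= maxDigits;
```

I hope someone can share some insights into this.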