I’ve recently tried to parse an array with a fixed size in ANTLR4, using a recursive rule guarded by a semantic predicate:
array[int size]
    : {$size > 0}? thing array[$size - 1]
    |
    ;
This lets the array size be determined by the data itself, e.g.
foo: number array[$number.value]; // array with $number.value elements
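(For reference, number here is a rule that exposes its parsed value as an int return, something like the sketch below; the INT token name and the Integer.parseInt conversion are my assumptions:)
// Assumed shape of the number rule referenced above: it must expose
// the parsed value as an int so that array[$number.value] compiles.
number returns [int value]
    : INT { $value = Integer.parseInt($INT.text); }
    ;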
My problem is that the recursion depth grows linearly with the array size, so large arrays overflow the Java call stack (StackOverflowError). Is there a good way to linearize this approach, i.e. parse the elements iteratively instead of recursively?
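To make the goal concrete, something loop-shaped like the following sketch is what I’m after; this is just my guess at the shape, and the locals counter i is my own invention, not tested:
// Rough sketch of a "linear" version: bound a loop with a semantic
// predicate on a local counter instead of recursing.
array[int size]
locals [int i = 0]
    : ( {$i < $size}? thing {$i++;} )*
    ;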
I’ve tried to “optimize” it by hand (target: Java):
array[int length] returns [ThingContext[] objects]
    : {
        // pre-allocate the result array
        $objects = new ThingContext[$length];
      }
      ( {$length > 0}?
        firstObject=thing
        {
            // let ANTLR parse the first element, then invoke thing()
            // directly for the remaining length - 1 elements
            $objects[0] = $firstObject.ctx;
            for (int i = 1; i < $length; i++) {
                $objects[i] = thing();
            }
        }
      |
      )
    ;
This seems to interfere with adaptive prediction, though: I suspect that calling thing() directly from an action bypasses the generated control flow, and I sometimes end up in an infinite loop.
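For what it’s worth, the only clean workaround I can picture is to not loop inside the grammar at all and instead drive the generated parser from plain Java, roughly like this (MyParser and ThingContext are placeholders for the real generated names):
// Hypothetical driver: call the generated rule method once per element
// in an ordinary Java loop instead of inside a grammar action.
static MyParser.ThingContext[] parseArray(MyParser parser, int length) {
    MyParser.ThingContext[] objects = new MyParser.ThingContext[length];
    for (int i = 0; i < length; i++) {
        objects[i] = parser.thing(); // one rule invocation per element
    }
    return objects;
}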
Any ideas?