I am trying to find a method for piecewise linear curve fitting in which the breakpoints are determined by a purely local criterion (i.e., breakpoint identification would not be affected by truncating the data set far from the breakpoint).
The idea is that if a breakpoint in a time-series data set is real, its identification should not be affected by something that happens far in the future or happened far in the past.
Right now I am using some standard Python libraries, and I am finding that breakpoint identification is sensitive to truncation of the data series.
I am thinking of something like the following: take a moving window of a specified bandwidth, fit both a parabola and a straight line within it, use some criterion to decide when the parabola is sufficiently different from the straight line to mark the location of a breakpoint, and then perhaps filter the breakpoints produced this way.
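To make that concrete, here is a minimal sketch of the windowed line-versus-parabola comparison I have in mind (the window half-width, the improvement threshold, and the relative-residual criterion are placeholders I made up for illustration, not a published method):

```python
import numpy as np

def candidate_breakpoints(t, y, half_width=10, improvement_ratio=0.5):
    """Score each interior point by how much better a parabola fits the
    surrounding window than a straight line; return indices whose score
    exceeds the threshold and is a local maximum."""
    n = len(t)
    scores = np.zeros(n)
    for i in range(half_width, n - half_width):
        tw = t[i - half_width : i + half_width + 1]
        yw = y[i - half_width : i + half_width + 1]
        # Residual sum of squares for degree-1 and degree-2 fits on the window.
        _, rss1, *_ = np.polyfit(tw, yw, 1, full=True)
        _, rss2, *_ = np.polyfit(tw, yw, 2, full=True)
        rss1 = rss1[0] if len(rss1) else 0.0
        rss2 = rss2[0] if len(rss2) else 0.0
        # Relative improvement of the parabola over the straight line.
        scores[i] = (rss1 - rss2) / rss1 if rss1 > 0 else 0.0
    # Crude "filter" step: keep points whose improvement is above the
    # threshold and is the largest score within one window width.
    candidates = []
    for i in range(half_width, n - half_width):
        window = scores[i - half_width : i + half_width + 1]
        if scores[i] > improvement_ratio and scores[i] == window.max():
            candidates.append(i)
    return candidates
```

Because each score uses only the points inside its window, truncating the series outside that window cannot change it, which is the locality property I am after. But I would rather not have to defend an ad-hoc criterion like this in a paper.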
However, I am doing this for an academic study, and I would ideally like to cite someone else's method rather than invent my own. Any suggestions? [I am working in Python.]