Software packages usually get a label for stability, like ‘alpha’, ‘beta’, or ‘stable’, next to their version string.
It seems the most commonly used metric for deciding which stage a package belongs to is the number of issues its users report per unit of time (be it a week, a month, or a quarter).
But this strikes me as ill-defined when a package has only just been developed and released: there are no users yet, and it is difficult to win users' confidence if the package has a big 'alpha status' written all over it.
Reported issues may be a good measure for open source packages with a (potentially) wide audience, but what about small projects developed in-house for internal use within a company? Or projects for niche audiences, like audio processing or astronomy simulations? If a project never attracts enough users, should it be considered forever 'alpha', or is it enough that its authors have used it for their intended purposes and it did what it was supposed to do?
In summary, I would like to ask whether there is an alternative metric for software stability that I am not aware of: preferably one that is widely used and that avoids the pitfalls described above.