I am trying to find a formula to check whether the sample size (10) I am using to measure the runtime of an algorithm is sufficient for the current system and system load.
I am collecting 10 samples of the algorithm's run time and claiming that their mean is the runtime of the algorithm. But I want to use the sample mean and standard deviation to reverse-calculate how many samples I need to take on a particular system so that the sample mean is within a 95% confidence interval of the population mean.
The problem is that we don't know the population size.
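For context, this is roughly what I have in mind, a minimal Python sketch of the reverse calculation (the names `measure` and `required_samples`, and the target `margin_of_error`, are just placeholders I made up; the step `n = (t * s / E)^2` from the pilot sample is the kind of formula I am asking about):

```python
import math
import statistics
import time

from scipy import stats


def measure(algorithm, n_samples=10):
    """Collect n_samples wall-clock timings of `algorithm` (a no-arg callable)."""
    timings = []
    for _ in range(n_samples):
        start = time.perf_counter()
        algorithm()
        timings.append(time.perf_counter() - start)
    return timings


def required_samples(timings, margin_of_error, confidence=0.95):
    """Estimate how many samples are needed so that the half-width of the
    confidence interval for the mean is at most `margin_of_error`,
    using the pilot sample's standard deviation: n = (t * s / E)^2."""
    s = statistics.stdev(timings)                       # pilot sample standard deviation
    df = len(timings) - 1                               # degrees of freedom for the t critical value
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df)  # two-sided critical value
    return math.ceil((t_crit * s / margin_of_error) ** 2)
```

For example, if I wanted the mean to be within 5% of the pilot mean, I would call `required_samples(timings, 0.05 * statistics.mean(timings))`. Is this the right approach, and does it hold up when the population size is unknown?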