I am trying to find optimal parameters for an ML model using Bayesian optimization. The optimization seems to be working well, but I have some doubts about its convergence. I have attached a figure showing the target values across iterations. Generally, I would consider the optimization converged once the target value becomes almost constant. In this case, however, most of the later iterations vary between 30 and 35, while for a few iterations the value drops to around 0.5. Would you consider this converged? If yes, are there other ways to check convergence? If not, what do you suggest to achieve convergence? I am using the bayes_opt library.
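Roughly how I produce the figure (a minimal sketch, with a placeholder objective and placeholder bounds standing in for my real model; the `optimizer.res` history and `maximize()` call are standard bayes_opt):

```python
# Sketch: run the optimizer and plot the target history per iteration.
# The objective and pbounds here are placeholders, not my actual setup.
import matplotlib.pyplot as plt
from bayes_opt import BayesianOptimization

def objective(x, y):
    # placeholder black-box function standing in for my real train/evaluate step
    return -(x - 2) ** 2 - (y + 1) ** 2

optimizer = BayesianOptimization(
    f=objective,
    pbounds={"x": (-5, 5), "y": (-5, 5)},  # placeholder parameter limits
    random_state=42,
)
optimizer.maximize(init_points=5, n_iter=50)

# optimizer.res is a list of dicts, one per evaluation, each holding
# "target" and "params"; plotting "target" vs. iteration gives the figure I attached
targets = [res["target"] for res in optimizer.res]
plt.plot(targets, marker="o")
plt.xlabel("Iteration")
plt.ylabel("Target value")
plt.show()
```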
One issue is that it is taking a lot of iterations. I tried different utility functions and changed the kappa and xi values to see whether anything changed; with a higher kappa, the algorithm seems to converge a little faster. I also adjusted the limits of the different parameters based on the target values they produced. After doing all of this, the figure looks more or less the same. A sketch of how I vary the utility function is below.
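This is roughly what I mean by changing the utility function and kappa/xi (a sketch following the bayes_opt 1.x suggest/register pattern; the exact API differs between versions, and the objective and bounds are again placeholders):

```python
# Sketch: manually drive the optimizer with an explicit acquisition function
# so kappa/xi can be varied. Placeholder objective and bounds, as above.
from bayes_opt import BayesianOptimization, UtilityFunction

def objective(x, y):
    return -(x - 2) ** 2 - (y + 1) ** 2  # placeholder black box

optimizer = BayesianOptimization(
    f=None,  # None because points are evaluated and registered manually below
    pbounds={"x": (-5, 5), "y": (-5, 5)},
    random_state=42,
)

# UCB with a larger kappa explores more aggressively; "ei"/"poi" use xi instead
utility = UtilityFunction(kind="ucb", kappa=5.0, xi=0.0)

for _ in range(60):
    next_point = optimizer.suggest(utility)       # acquisition-maximizing candidate
    target = objective(**next_point)              # evaluate the black box
    optimizer.register(params=next_point, target=target)

print(optimizer.max)  # best target and parameters found so far
```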