When we test a new optimization algorithm, what process should we follow? For example, do we run the algorithm several times and pick the best performance (e.g. in terms of accuracy, F1 score, etc.), doing the same for the old optimization algorithm, or do we compute the average performance (i.e. the average accuracy or F1 score over those runs) to show that the new algorithm is better than the old one?

I ask because, when I read papers on a new optimization algorithm, I cannot tell how they calculate the performance or draw the training-loss vs. iterations curves: there are random effects, so different runs may give different performance and different curves.
- Hi and welcome to AI SE. Could you please link any paper that you talk about in your question? – naive Oct 05 '19 at 16:21
1 Answer
See here for a potential way to do it:
http://infinity77.net/global_optimization/#motivation-motivation
http://infinity77.net/global_optimization/#rules-the-rules
You basically test the two (or more) optimization algorithms against known objective functions, with several random (but repeatable) starting points and then analyze the outcome.
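As an illustration only (not taken from those pages), here is a minimal Python sketch of that protocol using SciPy's `minimize` and the standard Rosenbrock test function; the choice of methods, dimensionality, and number of starting points is arbitrary:

```python
# Minimal sketch: compare two optimizers on a known test function
# (Rosenbrock, global minimum 0 at [1, 1, ..., 1]) from the same
# seeded, hence repeatable, set of starting points.
import numpy as np
from scipy.optimize import minimize, rosen

rng = np.random.default_rng(42)                    # fixed seed -> repeatable runs
starts = rng.uniform(-2.0, 2.0, size=(20, 5))      # 20 starting points in 5-D

results = {"Nelder-Mead": [], "BFGS": []}
for x0 in starts:
    for method in results:
        res = minimize(rosen, x0, method=method)
        results[method].append(res.fun)            # best objective value reached

for method, finals in results.items():
    finals = np.array(finals)
    print(f"{method:12s} mean final f = {finals.mean():.3e}, "
          f"solved (f < 1e-6): {(finals < 1e-6).mean():.0%}")
```

The key point is that the starting points are generated once, from a fixed seed, and reused for every algorithm, so any difference in the aggregated results comes from the algorithms rather than from the random draws.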

Infinity77
- Thanks! So, do you mean we choose several random seeds (initial values) and run the algorithm once for each of them? And then do we pick the best performance among these runs, or just the average performance, and use that to compare the different optimization algorithms? – abc Sep 23 '19 at 01:32
- Normally you would use the average of the results. In my tests I have run multiple optimization algorithms with 100 random starting points (the same points for all algorithms) and then created a few graphics to compare performance: http://infinity77.net/global_optimization/multidimensional.html . Please don't be fooled by papers or websites that use CPU time or elapsed time as a performance measure; those are useless metrics. – Infinity77 Sep 23 '19 at 06:17
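For context, a rough sketch of the protocol described in that comment (the 100 shared starting points and the averaging; the benchmark function and the set of methods are placeholders of my own), reporting function evaluations instead of CPU time:

```python
# Rough sketch: same 100 seeded starting points for every algorithm,
# average the final objective values, and report the average number of
# objective-function evaluations rather than CPU or wall-clock time.
import numpy as np
from scipy.optimize import minimize, rosen

rng = np.random.default_rng(0)
starts = rng.uniform(-2.0, 2.0, size=(100, 5))     # identical points for all methods

for method in ("Nelder-Mead", "Powell", "BFGS"):
    finals, nfevs = [], []
    for x0 in starts:
        res = minimize(rosen, x0, method=method)
        finals.append(res.fun)                     # final objective value
        nfevs.append(res.nfev)                     # number of function evaluations
    print(f"{method:12s} avg final f = {np.mean(finals):.3e}, "
          f"avg evaluations = {np.mean(nfevs):.0f}")
```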