Multiple identical CvAnn_MLPs yield different training results (Feature #1411)
When training two separate ANNs with identical training sets and parameters, they yield completely different results. Here is what I do:
//layout is a random working layout...
//snip... fill training_hists,training_outputs,weights
CvANN_MLP ann(layout), ann2(layout);
qDebug()<<"train iterations:"<< ann.train(training_hists, training_outputs, weights);
qDebug()<<"test train iterations:"<< ann2.train(training_hists, training_outputs, weights);
//run a simplistic test: predict on the same inputs with both nets
Now, here is what I get:
train iterations: 83
test train iterations: 52
1.12094 -1.1355 -1.15789
0.997264 -1.02149 -1.15789
These results are reproducible: every new run of the program produces exactly the same numbers. But if I trigger a new training within the same run, again with the same data set and parameters, I get a different result, even though I create new CvANN_MLP objects and destroy the old ones.
Interestingly, all later runs of train() have a lower iteration count than the first one.
After further investigation, I guess this is acceptable behaviour, as the ANN weights are initialized with the Nguyen-Widrow method. Still, some problems remain:
-The RNG is NOT random at all by default: it is seeded with the same fixed value on every program run, so the "random" initial weights are identical across runs.
-Initialization of the weights should be made user-accessible.
-Why do all later runs of train() terminate after fewer iterations than the first one?
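On the second point: OpenCV does not expose the weight initialization, but the Nguyen-Widrow method itself is simple enough to sketch. The following stdlib-only version (function name and layout are hypothetical, not OpenCV API) draws uniform weights and rescales each unit's weight vector to the magnitude beta = 0.7 * h^(1/n):

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Nguyen-Widrow initialization for one fully connected layer with
// n_in inputs and n_hidden units: draw weights uniformly in
// [-0.5, 0.5], then rescale each unit's weight vector so that its
// Euclidean norm equals beta = 0.7 * n_hidden^(1/n_in).
std::vector<std::vector<double>> nguyen_widrow(int n_in, int n_hidden,
                                               std::mt19937& rng) {
    const double beta = 0.7 * std::pow((double)n_hidden, 1.0 / n_in);
    std::uniform_real_distribution<double> dist(-0.5, 0.5);
    std::vector<std::vector<double>> w(n_hidden, std::vector<double>(n_in));
    for (auto& unit : w) {
        double norm = 0.0;
        for (double& x : unit) { x = dist(rng); norm += x * x; }
        norm = std::sqrt(norm);
        for (double& x : unit) x *= beta / norm;  // ||unit|| == beta now
    }
    return w;
}
```

Taking the engine as a parameter is precisely the user-accessible hook asked for: the caller can re-seed it (or substitute a fixed weight matrix) to get either reproducible or genuinely varied starting points.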
I don't mind this "bug" any more, as I have switched to FANN for my neural networks.