This method implements the weights update procedure for the output neurons.
It calculates the delta/error and calls updateNeuronWeights to update the weights
of each output neuron.
Returns the output vector size of the training elements in this training set. This
method is an implementation of the EngineIndexableSet interface, and it is added
to provide compatibility with Encog data sets and FlatNetwork.
Decimal scaling normalization method, which normalizes data by moving the decimal point
with regard to the max element in the training set (by columns).
Normalization is done according to formula:
normalizedVector[i] = vector[i] / scaleFactor[i]
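The formula above can be sketched as follows. This is a minimal standalone illustration, not the actual Neuroph DecimalScaleNormalizer: the class name, the `normalize` helper, and the power-of-ten scale-factor derivation are assumptions for the demo.

```java
// Sketch of decimal-scaling normalization over a double[][] training set.
// For each column, scaleFactor is the smallest power of ten >= the column's
// max absolute value, so every normalized element falls in [-1, 1].
// (An all-zero column would yield NaN; a real implementation must guard that.)
public class DecimalScaleDemo {
    static double[][] normalize(double[][] data) {
        int cols = data[0].length;
        double[] scaleFactor = new double[cols];
        for (int c = 0; c < cols; c++) {
            double max = 0;
            for (double[] row : data) max = Math.max(max, Math.abs(row[c]));
            scaleFactor[c] = Math.pow(10, Math.ceil(Math.log10(max)));
        }
        double[][] out = new double[data.length][cols];
        for (int r = 0; r < data.length; r++)
            for (int c = 0; c < cols; c++)
                out[r][c] = data[r][c] / scaleFactor[c];   // vector[i] / scaleFactor[i]
        return out;
    }

    public static void main(String[] args) {
        double[][] r = normalize(new double[][]{{250, 3}, {45, 7}});
        System.out.println(r[0][0] + " " + r[0][1]); // 0.25 0.3
    }
}
```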
This method updates network weights in batch mode, using the accumulated weight changes stored in Weight.deltaWeight.
It is executed after each learning epoch, only if learning is done in batch mode.
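The accumulate-then-apply cycle described above can be sketched like this. The `Weight` class here is a hypothetical stand-in (the real Neuroph Weight carries more state); only the batch-mode bookkeeping is shown.

```java
// Sketch of batch-mode weight updating: per-sample changes accumulate in
// deltaWeight during the epoch and are applied to the weight value once,
// after the epoch ends.
public class BatchUpdateDemo {
    static class Weight {
        double value;
        double deltaWeight;               // accumulated change for current epoch
        Weight(double value) { this.value = value; }
    }

    // Called once per training sample: accumulate only, do not apply.
    static void accumulate(Weight w, double learningRate, double error, double input) {
        w.deltaWeight += learningRate * error * input;
    }

    // Called once at the end of the epoch, only in batch mode.
    static void applyBatch(Weight w) {
        w.value += w.deltaWeight;
        w.deltaWeight = 0;                // reset for the next epoch
    }

    public static void main(String[] args) {
        Weight w = new Weight(0.5);
        accumulate(w, 0.1, 1.0, 2.0);     // contributes +0.2
        accumulate(w, 0.1, -0.5, 1.0);    // contributes -0.05
        applyBatch(w);
        System.out.println(w.value);
    }
}
```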
Creates full connectivity within the layer - connects each neuron to all other
neurons within the same layer, with the specified weight and delay values for all
connections.
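A minimal sketch of that connectivity pattern, with simplified Neuron/Connection classes invented for the demo (the real Neuroph types are richer, and the delay value is omitted here):

```java
import java.util.ArrayList;
import java.util.List;

// Connect every neuron to every other neuron in the same layer
// (no self-connections), all with the same initial weight.
public class IntraLayerConnectDemo {
    static class Neuron {
        final List<Connection> inputs = new ArrayList<>();
    }

    static class Connection {
        final Neuron from;
        double weight;
        Connection(Neuron from, double weight) { this.from = from; this.weight = weight; }
    }

    static void fullConnectWithinLayer(List<Neuron> layer, double weight) {
        for (Neuron from : layer)
            for (Neuron to : layer)
                if (from != to) to.inputs.add(new Connection(from, weight));
    }

    public static void main(String[] args) {
        List<Neuron> layer = new ArrayList<>();
        for (int i = 0; i < 4; i++) layer.add(new Neuron());
        fullConnectWithinLayer(layer, 0.1);
        System.out.println(layer.get(0).inputs.size()); // 3: every neuron but itself
    }
}
```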
Returns the input vector size of the training elements in this training set. This
method is an implementation of the EngineIndexableSet interface, and it is added
to provide compatibility with Encog data sets and FlatNetwork.
This interface is implemented by classes that listen for learning events (iterations, error etc.).
The LearningEvent class holds the information about the event.
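A sketch of that listener pattern, using hypothetical, stripped-down LearningEvent and LearningEventListener shapes (the real Neuroph types live in the library's events package and carry more state):

```java
// Observer-style listener: a learning rule fires a LearningEvent after each
// iteration, and registered listeners react to it (here, by logging).
public class ListenerDemo {
    static class LearningEvent {
        final int iteration;
        final double error;
        LearningEvent(int iteration, double error) {
            this.iteration = iteration;
            this.error = error;
        }
    }

    interface LearningEventListener {
        void handleLearningEvent(LearningEvent event);
    }

    public static void main(String[] args) {
        LearningEventListener logger =
            e -> System.out.println("iteration " + e.iteration + ", error " + e.error);
        // A learning rule would invoke this after each iteration:
        logger.handleLearningEvent(new LearningEvent(1, 0.25));
    }
}
```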
Starts learning with the specified learning rule in a new thread to learn the
specified training set, and immediately returns control to the
calling thread.
LMS() -
Constructor for class org.neuroph.nnet.learning.LMS
Creates a new LMS learning rule
This learning rule is used to train the Adaline neural network,
and this class is the base for all LMS-based learning rules such as
PerceptronLearning, DeltaRule, SigmoidDeltaRule, Backpropagation etc.
Max training iterations (when to stop the training).
TODO: this field should be private, to force use of setMaxIterations from derived classes, so
the iterationsLimited flag is also set at the same time. Will that break backward compatibility with serialized networks?
MaxMin normalization method, which normalizes data with regard to the min and max elements in the training set (by columns).
Normalization is done according to formula:
normalizedVector[i] = (vector[i] - min[i]) / (max[i] - min[i])
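The min-max formula above, applied to a single column, can be sketched as follows (an assumed helper for illustration, not the Neuroph normalizer itself):

```java
// Min-max normalization of one column: rescales values into [0, 1].
// If max == min the denominator is zero and the result is NaN; a real
// implementation must handle that constant-column case explicitly.
public class MaxMinDemo {
    static double[] normalize(double[] column) {
        double min = column[0], max = column[0];
        for (double v : column) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double[] out = new double[column.length];
        for (int i = 0; i < column.length; i++)
            out[i] = (column[i] - min) / (max - min);  // (vector[i]-min)/(max-min)
        return out;
    }

    public static void main(String[] args) {
        double[] r = normalize(new double[]{10, 20, 30});
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // 0.0 0.5 1.0
    }
}
```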
Max normalization method, which normalizes data with regard to the max element in the training set (by columns).
Normalization is done according to formula:
normalizedVector[i] = vector[i] / max[i]
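For contrast with min-max, the max-normalization formula divides by the column's max absolute value only, preserving zero. A one-column sketch (assumed helper, not the library's class):

```java
// Max normalization of one column: divide every value by the column's
// max absolute value, mapping the data into [-1, 1]. An all-zero column
// would divide by zero, so a real implementation must guard that.
public class MaxNormDemo {
    static double[] normalize(double[] column) {
        double max = 0;
        for (double v : column) max = Math.max(max, Math.abs(v));
        double[] out = new double[column.length];
        for (int i = 0; i < column.length; i++)
            out[i] = column[i] / max;                 // vector[i] / max[i]
        return out;
    }

    public static void main(String[] args) {
        double[] r = normalize(new double[]{2, 4, 8});
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // 0.25 0.5 1.0
    }
}
```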
This class is an example of a custom benchmarking task for a Multi Layer Perceptron network.
Note that this benchmark only measures the speed at implementation level - the
speed of data flow forward and backward through the network.
This class provides the Nguyen-Widrow randomization technique, which gives very good results
for Multi Layer Perceptrons trained with the back propagation family of learning rules.
This method implements the weights update procedure for a single neuron.
It iterates through all the neuron's input connections, and calculates/sets the weight change for each weight
using the formula
deltaWeight = learningRate * neuronError * input
where neuronError is the difference between the desired and actual output for the specific neuron:
neuronError = desiredOutput[i] - actualOutput[i] (see method SupervisedLearning.calculateOutputError)
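The per-neuron update above can be sketched with plain arrays standing in for the neuron's input connections (a hypothetical helper, not the Neuroph method itself):

```java
// Per-neuron LMS weight update: each input connection's weight moves by
// deltaWeight = learningRate * neuronError * input, where neuronError is
// desiredOutput - actualOutput for this neuron.
public class NeuronUpdateDemo {
    static void updateNeuronWeights(double[] weights, double[] inputs,
                                    double neuronError, double learningRate) {
        for (int i = 0; i < weights.length; i++)
            weights[i] += learningRate * neuronError * inputs[i];
    }

    public static void main(String[] args) {
        double[] w = {0.5, -0.5};
        // e.g. desiredOutput = 1.0, actualOutput = 0.6 -> neuronError = 0.4
        updateNeuronWeights(w, new double[]{1.0, 2.0}, 0.4, 0.1);
        System.out.println(w[0] + " " + w[1]);
    }
}
```

Note that the sign of neuronError steers each weight toward reducing the output error, which is the core of all the LMS-derived rules listed earlier.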