int main(int argc, char* argv[])
{
using vect_t = nu::MlpNN::FpVector;
nu::MlpNN::Topology topology = {
2, // input layer takes a two-dimensional vector
2, // hidden layer size
1 // output
};
Step 3: Construct the network object, specifying topology, learning rate, and momentum.
To construct the network object, specify its topology, learning rate, and momentum, as follows:
try
{
nu::MlpNN nn {
topology,
0.4, // learning rate
0.9, // momentum
};
Step 4: Create the training set needed to train the net.
The training set is a collection of input-output pairs, where each pair describes the desired behavior of the network: the input vector holds the input values, and the output vector holds the expected output for those inputs. For the XOR function, you can create the training set as follows:
// Create a training set
using training_set_t = std::map< std::vector<double>, std::vector<double> >;
training_set_t training_set = {
{ { 0, 0 },{ 0 } },
{ { 0, 1 },{ 1 } },
{ { 1, 0 },{ 1 } },
{ { 1, 1 },{ 0 } }
};
Step 5: Train the net using a trainer object.
The trainer object iterates over the training set until one of two conditions is met: the maximum number of epochs (20000) is reached, or the error computed by the provided error function falls below the minimum error threshold (0.01). In this example the error function computes the mean squared error (MSE) between the network's output vector and the target vector.
Here is how the trainer is created and run over the training set:
nu::MlpNNTrainer trainer(
nn,
20000, // Max number of epochs
0.01 // Min error
);
std::cout
<< "XOR training start ( Max epochs count=" << trainer.get_epochs()
<< " Minimum error=" << trainer.get_min_err() << " )"
<< std::endl;
trainer.train<training_set_t>(
training_set,
// Progress/error function: prints '>' every 200 invocations and
// returns the mean squared error of the net output vs. the target
[](
nu::MlpNN& net,
const nu::MlpNN::FpVector& target) -> double
{
static size_t i = 0;
if (i++ % 200 == 0)
std::cout << ">";
return net.calcMSE(target);
}
);
Step 6: Test if the network has learned the XOR function.
After training the network, you can test its performance on the XOR function to see if it has learned the desired behavior. You can provide input values to the network and compare the output with the expected output.
Here is an example of how to test the network on the XOR function:
auto step_f = [](double x) {
return x < 0.5 ? 0 : 1;
};
std::cout << std::endl << " XOR Test " << std::endl;
for (int a = 0; a < 2; ++a) {
for (int b = 0; b < 2; ++b) {
vect_t output_vec{ 0.0 };
vect_t input_vec{ double(a), double(b) };
nn.setInputVector(input_vec);
nn.feedForward();
nn.getOutputVector(output_vec);
// Dump the network status
std::cout << nn;
std::cout << "-------------------------------" << std::endl;
auto net_res = step_f(output_vec[0]);
std::cout << a << " xor " << b << " = " << net_res << std::endl;
auto xor_res = a ^ b;
if (xor_res != net_res) {
std::cerr
<< "ERROR!: xor(" << a << "," << b << ") !="
<< xor_res
<< std::endl;
return 1;
}
std::cout << "-------------------------------" << std::endl;
}
}
std::cout << "Test completed successfully" << std::endl;
}
catch (nu::MlpNN::Exception & e)
{
std::cerr << "nu::MlpNN::exception_t n# " << int(e) << std::endl;
std::cerr << "Check for configuration parameters and retry" << std::endl;
return 1;
}
catch (...)
{
std::cerr
<< "Fatal error. Check the configuration parameters and retry" << std::endl;
return 1;
}
return 0;
}
XOR training start ( Max epochs count=20000 Minimum error=0.01 )
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
XOR Test
Net Inputs
[0] = 0
[1] = 0
Neuron layer 0 Hidden
Neuron 0
Input [0] = 0
Weight [0] = 0.941384
Input [1] = 0
Weight [1] = 0.94404
Bias = 0.0307751
Output = 0.507693
Error = 0.0707432
Neuron 1
Input [0] = 0
Weight [0] = 6.19317
Input [1] = 0
Weight [1] = 6.49756
Bias = -0.0227467
Output = 0.494314
Error = -0.0568667
Neuron layer 1 Output
Neuron 0
Input [0] = 0.507693
Weight [0] = -16.4831
Input [1] = 0.494314
Weight [1] = 13.2566
Bias = -0.00652012
Output = 0.139202
Error = -0.0171672
-------------------------------
0 xor 0 = 0
-------------------------------
Net Inputs
[0] = 0
[1] = 1
Neuron layer 0 Hidden
Neuron 0
Input [0] = 0
Weight [0] = 0.941384
Input [1] = 1
Weight [1] = 0.94404
Bias = 0.0307751
Output = 0.726078
Error = 0.0707432
Neuron 1
Input [0] = 0
Weight [0] = 6.19317
Input [1] = 1
Weight [1] = 6.49756
Bias = -0.0227467
Output = 0.998461
Error = -0.0568667
Neuron layer 1 Output
Neuron 0
Input [0] = 0.726078
Weight [0] = -16.4831
Input [1] = 0.998461
Weight [1] = 13.2566
Bias = -0.00652012
Output = 0.779318
Error = -0.0171672
-------------------------------
0 xor 1 = 1
-------------------------------
Net Inputs
[0] = 1
[1] = 0
Neuron layer 0 Hidden
Neuron 0
Input [0] = 1
Weight [0] = 0.941384
Input [1] = 0
Weight [1] = 0.94404
Bias = 0.0307751
Output = 0.72555
Error = 0.0707432
Neuron 1
Input [0] = 1
Weight [0] = 6.19317
Input [1] = 0
Weight [1] = 6.49756
Bias = -0.0227467
Output = 0.997914
Error = -0.0568667
Neuron layer 1 Output
Neuron 0
Input [0] = 0.72555
Weight [0] = -16.4831
Input [1] = 0.997914
Weight [1] = 13.2566
Bias = -0.00652012
Output = 0.77957
Error = -0.0171672
-------------------------------
1 xor 0 = 1
-------------------------------
Net Inputs
[0] = 1
[1] = 1
Neuron layer 0 Hidden
Neuron 0
Input [0] = 1
Weight [0] = 0.941384
Input [1] = 1
Weight [1] = 0.94404
Bias = 0.0307751
Output = 0.871714
Error = 0.0707432
Neuron 1
Input [0] = 1
Weight [0] = 6.19317
Input [1] = 1
Weight [1] = 6.49756
Bias = -0.0227467
Output = 0.999997
Error = -0.0568667
Neuron layer 1 Output
Neuron 0
Input [0] = 0.871714
Weight [0] = -16.4831
Input [1] = 0.999997
Weight [1] = 13.2566
Bias = -0.00652012
Output = 0.246297
Error = -0.0171672
-------------------------------
1 xor 1 = 0
-------------------------------
Test completed successfully
Perceptron AND sample (and_test)
The AND function is a typical example of a linearly separable function, so it can be learned by a single perceptron, the simplest kind of neural network: one neuron with a step activation. The AND function computes the logical-AND operation, which yields 1 if and only if both inputs have the value 1. A standalone sketch of perceptron learning applied to AND is shown below.
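For illustration, here is a minimal, self-contained sketch of the classic perceptron learning rule applied to AND. This is a toy implementation, not the nunn API; the learning rate, epoch count, and zero-initialized weights are arbitrary assumptions.

#include <iostream>

int main()
{
    // Illustrative standalone perceptron (not the nunn API).
    // Weights, learning rate, and epoch count are arbitrary choices.
    double w[2] = { 0.0, 0.0 };
    double bias = 0.0;
    const double learning_rate = 0.1;

    const int inputs[4][2] = { { 0, 0 }, { 0, 1 }, { 1, 0 }, { 1, 1 } };
    const int targets[4] = { 0, 0, 0, 1 }; // logical AND

    // Classic perceptron learning rule: w += lr * (target - output) * input
    for (int epoch = 0; epoch < 100; ++epoch) {
        for (int s = 0; s < 4; ++s) {
            const double sum =
                w[0] * inputs[s][0] + w[1] * inputs[s][1] + bias;
            const int output = sum >= 0 ? 1 : 0; // step activation
            const double err = targets[s] - output;
            w[0] += learning_rate * err * inputs[s][0];
            w[1] += learning_rate * err * inputs[s][1];
            bias += learning_rate * err;
        }
    }

    // Verify the learned weights on all four input combinations
    for (int s = 0; s < 4; ++s) {
        const double sum = w[0] * inputs[s][0] + w[1] * inputs[s][1] + bias;
        std::cout << inputs[s][0] << " and " << inputs[s][1] << " = "
                  << (sum >= 0 ? 1 : 0) << std::endl;
    }
    return 0;
}

Because AND is linearly separable, the perceptron convergence theorem guarantees that this training loop reaches weights that classify all four input combinations correctly.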
Hopfield Test (hopfield_test)
The Hopfield network is a type of recurrent artificial neural network that can recall a pre-learned pattern from a noisy or partial input cue. It serves as a content-addressable (auto-associative) memory system with binary threshold nodes.
In this test, we demonstrate the use of a Hopfield network as an auto-associative memory. The goal is to recognize a 100-pixel picture using a 100-neuron neural network.
The Hopfield network stores the pre-learned patterns in its weight matrix so that each pattern becomes a stable state (attractor) of the network dynamics. Given an input pattern, the network iteratively updates the neuron activations until it converges to the stable state closest to the cue. A minimal sketch of this mechanism follows.
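To make the mechanism concrete, here is a minimal, self-contained sketch of a Hopfield network. Again, this is an illustrative toy rather than the nunn API: it uses a 16-neuron (4x4) grid instead of the 100-neuron test for brevity, and the stored pattern and flipped-pixel positions are arbitrary assumptions.

#include <array>
#include <iostream>

int main()
{
    // Illustrative standalone Hopfield network (not the nunn API).
    constexpr int N = 16; // a 4x4 "picture", one neuron per pixel
    using Pattern = std::array<int, N>;

    // Bipolar (+1/-1) pattern to memorize: a vertical bar
    const Pattern stored = { -1, 1, 1, -1,
                             -1, 1, 1, -1,
                             -1, 1, 1, -1,
                             -1, 1, 1, -1 };

    // Hebbian storage: w[i][j] = s_i * s_j, no self-connections
    double w[N][N] = {};
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            if (i != j)
                w[i][j] = stored[i] * stored[j];

    // Build a noisy cue by flipping a few pixels of the stored pattern
    Pattern state = stored;
    state[0] = -state[0];
    state[5] = -state[5];
    state[10] = -state[10];

    // Update neurons until the state stops changing (a stable state)
    bool changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i < N; ++i) {
            double sum = 0;
            for (int j = 0; j < N; ++j)
                sum += w[i][j] * state[j];
            const int next = sum >= 0 ? 1 : -1; // threshold activation
            if (next != state[i]) {
                state[i] = next;
                changed = true;
            }
        }
    }

    std::cout << (state == stored ? "Pattern recalled" : "Recall failed")
              << std::endl;
    return 0;
}

With a single stored pattern and only three corrupted pixels, one sweep of updates restores the original; with multiple stored patterns, the network converges to whichever attractor is closest to the cue.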