nuNN Library 2.0

nunnLib is a Free Open Source Machine Learning Library, distributed under the GPLv2 License and written in C++17.

Repository: https://github.com/eantcal/nunn

Nunn Topology -> Graphviz format converter (nunn_topo)

This tool exports neural network topologies so they can be rendered with Graphviz dot. dot draws directed graphs: it reads graph text files with attributes and produces drawings in formats such as GIF, PNG, SVG, or PostScript, which can in turn be converted to PDF. This makes it easy to visualize and share the structure of your neural networks as professional-looking diagrams.

MNIST Test Demo (mnist_test)

The MNIST Test Demo (mnist_test) trains and tests an (R)MLP neural network on the MNIST dataset, which consists of 60,000 + 10,000 scanned images of handwritten digits together with their correct classifications. The images are grayscale and 28 by 28 pixels in size.

The first part of the dataset, 60,000 images, is used to train the neural network; the second part, 10,000 images, is used to test its performance. The test data comes from a different set of writers than the training data, making the evaluation more meaningful.

During training, the input images are treated as 784-dimensional vectors, where each entry represents the grayscale value of a single pixel in the image. The desired output is a 10-dimensional vector, representing the correct classification of the digit.
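
As an illustration of this encoding, here is a minimal sketch in plain C++ (hypothetical helper functions operating on raw pixel and label buffers, not the actual loader used by mnist_test):

#include <cstdint>
#include <vector>

// Map a 28x28 grayscale image (784 bytes) to a 784-dimensional
// input vector with pixel values scaled to [0, 1].
std::vector<double> encodeImage(const std::vector<uint8_t>& pixels)
{
    std::vector<double> input(pixels.size());
    for (size_t i = 0; i < pixels.size(); ++i)
        input[i] = pixels[i] / 255.0;
    return input;
}

// Map a digit label (0-9) to a 10-dimensional one-hot target vector.
std::vector<double> encodeLabel(int digit)
{
    std::vector<double> target(10, 0.0);
    target[digit] = 1.0;
    return target;
}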

To learn more about the MNIST dataset and its characteristics, visit http://yann.lecun.com/exdb/mnist/.

Handwritten Digit OCR Demo (ocr_test)

The Handwritten Digit OCR Demo (ocr_test) is an interactive program built on an MNIST-trained neural network created with the nunn library. The network is trained by the mnist_test application, which produces nunn status files (.net).

Users can input handwritten digits, and the network processes each image and returns a classification result, so the effectiveness of the trained model can be observed in real time. Because the demo loads the previously generated nunn status files, it reuses the weights learned during training and can recognize and classify handwritten digits with high accuracy.

TicTacToe Demo (tictactoe/winttt)

The TicTacToe Demo (tictactoe) is a basic implementation of the Tic Tac Toe game in which you play against an AI opponent powered by neural networks. The networks have been trained to make sensible moves and provide a challenging game.

Winttt (winttt) is a Windows-specific version with an enhanced user interface and additional features: it can train its networks dynamically during gameplay or load pre-trained networks generated by the tictactoe program. Both versions combine the traditional game mechanics with neural-network decision-making.

XOR/AND Problem Examples

The XOR function is the classic example of a non-linearly separable function. Implementing it has long been a benchmark problem in the field of neural networks, because it highlights their ability to handle non-linear relationships: logistic regression and other linear models cannot solve the XOR problem, while neural networks can.

The XOR function ("exclusive or") takes two binary inputs and returns 1 if the inputs differ and 0 otherwise. Despite its simplicity, its non-linear nature puts it out of reach of linear models.

A neural network solves the XOR problem by using hidden layers and non-linear activation functions, which let it capture and learn the non-linear relationship the function encodes. This is exactly what makes neural networks effective on non-linearly separable problems where linear models fail.

Solving the XOR Problem with Nunn Library

The XOR function takes two input values in the range [0, 1] and produces a single output value, also in the range [0, 1]. Its behavior can be described by the following table:

 x1 | x2 | y
 ---+----+---
  0 |  0 | 0
  0 |  1 | 1
  1 |  0 | 1
  1 |  1 | 0

In this table the inputs are x1 and x2, and the corresponding output is listed in the y column: the output is 0 if both inputs are the same (both 0 or both 1) and 1 if they differ.

The XOR function is non-linear: no straight line can separate the inputs into the two output classes, so this classification problem cannot be solved by linear separation.

Solving the XOR function is straightforward for a Multi-Layer Perceptron (MLP) neural network: by employing multiple layers and non-linear activation functions, an MLP can learn the non-linear decision boundary required to classify the XOR inputs and produce the corresponding outputs.

XOR function implementation step by step

The test trains an MLP network. During training you present the algorithm with examples of what you want the network to do, and it adjusts the network's weights; when training is finished, the network produces the required output for a given input.

Step 1: Include the MLP NN header

#include "nu_mlpnn.h"

#include <iostream>

#include <map>

Step 2: Define the net topology

The network topology is defined by a vector of positive integers. The first integer is the size of the input layer, the last is the size of the output layer, and any integers in between are the sizes of the hidden layers, ordered from input to output. Specifying the topology this way fixes the structure of the network: the number of layers and the number of nodes in each layer.

int main(int argc, char* argv[])
{
  using vect_t = nu::MlpNN::FpVector;

  nu::MlpNN::Topology topology = {
      2, // input layer takes a two dimensional vector
      2, // hidden layer size
      1  // output
  };

Step 3: Construct the network object

To construct the network object, specify the network's topology, the learning rate, and the momentum:

  try {
      nu::MlpNN nn {
         topology,
         0.4, // learning rate
         0.9, // momentum
      };

Step 4: Create the training set needed to train the net

The training set is a collection of input-output pairs, each representing the desired behavior of the network: the input vector holds the input values and the output vector holds the expected output values. You can create it as follows:

      // Create a training set
      using training_set_t = std::map<std::vector<double>, std::vector<double>>;

      training_set_t training_set = {
         { { 0, 0 }, { 0 } },
         { { 0, 1 }, { 1 } },
         { { 1, 0 }, { 1 } },
         { { 1, 1 }, { 0 } }
      };

Step 5: Train the net using a trainer object

The trainer object iterates over each element of the training set until either the maximum number of epochs (20000) is reached or the error computed by the provided error function falls below the minimum error threshold (0.01). The trainer is created and run as follows:

      nu::MlpNNTrainer trainer(
         nn,
         20000, // Max number of epochs
         0.01   // Min error
      );

      std::cout
         << "XOR training start ( Max epochs count=" << trainer.get_epochs()
         << " Minimum error=" << trainer.get_min_err() << " )"
         << std::endl;

      trainer.train<training_set_t>(
         training_set,
         [](nu::MlpNN& net, const nu::MlpNN::FpVector& target) -> double
         {
            static size_t i = 0;
            if (i++ % 200 == 0)
               std::cout << ">";
            return net.calcMSE(target);
         }
      );

Step 6: Test if the network has learned the XOR function.

After training the network, you can test its performance on the XOR function to see if it has learned the desired behavior. You can provide input values to the network and compare the output with the expected output.

Here is an example of how to test the network on the XOR function:

     auto step_f = [](double x) {
         return x < 0.5 ? 0 : 1;
     };


     std::cout << std::endl << " XOR Test " << std::endl;

     for (int a = 0; a < 2; ++a) {
        for (int b = 0; b < 2; ++b) {
            vect_t output_vec{ 0.0 };
            vect_t input_vec{ double(a), double(b) };

            nn.setInputVector(input_vec);
            nn.feedForward();
            nn.getOutputVector(output_vec);

            // Dump the network status
            std::cout << nn;
            std::cout << "-------------------------------" << std::endl;

            auto net_res = step_f(output_vec[0]);
            std::cout << a << " xor " << b << " = " << net_res << std::endl;

            auto xor_res = a ^ b;

            if (xor_res != net_res) {
               std::cerr
                  << "ERROR!: xor(" << a << "," << b << ") !="
                  << xor_res
                  << std::endl;
               return 1;
            }

            std::cout << "-------------------------------" << std::endl;
        }
     }

     std::cout << "Test completed successfully" << std::endl;
   }

   catch (nu::MlpNN::Exception& e)
   {
      std::cerr << "nu::MlpNN::exception_t n# " << int(e) << std::endl;
      std::cerr << "Check for configuration parameters and retry" << std::endl;
      return 1;
   }
   catch (...)
   {
      std::cerr
         << "Fatal error. Check for configuration parameters and retry"
         << std::endl;
      return 1;
   }

   return 0;
}


Program output

XOR training start ( Max epochs count=20000 Minimum error=0.01)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

XOR Test
Net Inputs
        [0] = 0
        [1] = 0

Neuron layer 0 Hidden
        Neuron 0
                Input  [0] = 0
                Weight [0] = 0.941384
                Input  [1] = 0
                Weight [1] = 0.94404
                Bias =       0.0307751
                Ouput = 0.507693
                Error = 0.0707432
        Neuron 1
                Input  [0] = 0
                Weight [0] = 6.19317
                Input  [1] = 0
                Weight [1] = 6.49756
                Bias =       -0.0227467
                Ouput = 0.494314
                Error = -0.0568667

Neuron layer 1 Output
        Neuron 0
                Input  [0] = 0.507693
                Weight [0] = -16.4831
                Input  [1] = 0.494314
                Weight [1] = 13.2566
                Bias =       -0.00652012
                Ouput = 0.139202
                Error = -0.0171672
-------------------------------
0 xor 0 = 0
-------------------------------
Net Inputs
        [0] = 0
        [1] = 1

Neuron layer 0 Hidden
        Neuron 0
                Input  [0] = 0
                Weight [0] = 0.941384
                Input  [1] = 1
                Weight [1] = 0.94404
                Bias =       0.0307751
                Ouput = 0.726078
                Error = 0.0707432
        Neuron 1
                Input  [0] = 0
                Weight [0] = 6.19317
                Input  [1] = 1
                Weight [1] = 6.49756
                Bias =       -0.0227467
                Ouput = 0.998461
                Error = -0.0568667

Neuron layer 1 Output
        Neuron 0
                Input  [0] = 0.726078
                Weight [0] = -16.4831
                Input  [1] = 0.998461
                Weight [1] = 13.2566
                Bias =       -0.00652012
                Ouput = 0.779318
                Error = -0.0171672
-------------------------------
0 xor 1 = 1
-------------------------------
Net Inputs
        [0] = 1
        [1] = 0

Neuron layer 0 Hidden
        Neuron 0
                Input  [0] = 1
                Weight [0] = 0.941384
                Input  [1] = 0
                Weight [1] = 0.94404
                Bias =       0.0307751
                Ouput = 0.72555
                Error = 0.0707432
        Neuron 1
                Input  [0] = 1
                Weight [0] = 6.19317
                Input  [1] = 0
                Weight [1] = 6.49756
                Bias =       -0.0227467
                Ouput = 0.997914
                Error = -0.0568667

Neuron layer 1 Output
        Neuron 0
                Input  [0] = 0.72555
                Weight [0] = -16.4831
                Input  [1] = 0.997914
                Weight [1] = 13.2566
                Bias =       -0.00652012
                Ouput = 0.77957
                Error = -0.0171672
-------------------------------
1 xor 0 = 1
-------------------------------
Net Inputs
        [0] = 1
        [1] = 1

Neuron layer 0 Hidden
        Neuron 0
                Input  [0] = 1
                Weight [0] = 0.941384
                Input  [1] = 1
                Weight [1] = 0.94404
                Bias =       0.0307751
                Ouput = 0.871714
                Error = 0.0707432
        Neuron 1
                Input  [0] = 1
                Weight [0] = 6.19317
                Input  [1] = 1
                Weight [1] = 6.49756
                Bias =       -0.0227467
                Ouput = 0.999997
                Error = -0.0568667

Neuron layer 1 Output
        Neuron 0
                Input  [0] = 0.871714
                Weight [0] = -16.4831
                Input  [1] = 0.999997
                Weight [1] = 13.2566
                Bias =       -0.00652012
                Ouput = 0.246297
                Error = -0.0171672
-------------------------------
1 xor 1 = 0
-------------------------------
Test completed successfully

Perceptron AND sample (and_test)

The AND function is a typical example of a linearly separable function, so unlike XOR it can be learned by a single perceptron, the simplest type of neural network.

The AND function computes the logical-AND operation, which yields 1 if and only if both inputs have the value 1.
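
For reference, the perceptron learning rule itself fits in a few lines. The following is a minimal sketch in plain C++ (illustrative only, not the nunn API used by and_test): it trains a single threshold unit on the AND truth table by nudging each weight by learning_rate * error * input.

#include <array>
#include <iostream>

int main()
{
    double w[2] = {0.0, 0.0}, bias = 0.0;
    const double lr = 0.1; // learning rate

    // Each sample is { x1, x2, AND(x1, x2) }
    const std::array<std::array<double, 3>, 4> samples {{
        {0, 0, 0}, {0, 1, 0}, {1, 0, 0}, {1, 1, 1}
    }};

    // Perceptron learning rule: w += lr * (target - output) * input
    for (int epoch = 0; epoch < 100; ++epoch) {
        for (const auto& s : samples) {
            const int out = (w[0]*s[0] + w[1]*s[1] + bias > 0) ? 1 : 0;
            const double err = s[2] - out;
            w[0] += lr * err * s[0];
            w[1] += lr * err * s[1];
            bias += lr * err;
        }
    }

    for (const auto& s : samples)
        std::cout << s[0] << " and " << s[1] << " = "
                  << ((w[0]*s[0] + w[1]*s[1] + bias > 0) ? 1 : 0) << '\n';
}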

Hopfield Test (hopfield_test)

The Hopfield network is a recurrent artificial neural network that solves the recall problem: matching a cue, a possibly corrupted input pattern, to an associated pre-learned pattern. It serves as a content-addressable memory system with binary threshold nodes.

In this test, we demonstrate the use of a Hopfield network as an auto-associative memory. The goal is to recognize a 100-pixel picture using a 100-neuron neural network.

The Hopfield network works by storing the pre-learned patterns as stable states of the network's neuron activations. Given an input pattern, the network iteratively updates the neuron activations until it converges to the stable state matching the closest learned pattern.
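
As a toy illustration of this process (plain C++, illustrative only; hopfield_test uses the library's own Hopfield implementation and 100 neurons), the sketch below stores one bipolar (+1/-1) pattern with the Hebbian rule and recovers it from a corrupted cue:

#include <array>
#include <iostream>

constexpr int N = 4; // 4 neurons for brevity; hopfield_test uses 100

int main()
{
    const std::array<int, N> pattern {1, -1, 1, -1};

    // Hebbian storage: W[i][j] accumulates pattern[i]*pattern[j], zero diagonal
    double W[N][N] = {};
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            if (i != j)
                W[i][j] += pattern[i] * pattern[j];

    // Recall from a corrupted cue: threshold-update neurons until stable
    std::array<int, N> s {1, 1, 1, -1}; // second bit flipped
    for (bool changed = true; changed; ) {
        changed = false;
        for (int i = 0; i < N; ++i) {
            double h = 0;
            for (int j = 0; j < N; ++j)
                h += W[i][j] * s[j];
            const int v = (h >= 0) ? 1 : -1;
            if (v != s[i]) { s[i] = v; changed = true; }
        }
    }

    for (int v : s)
        std::cout << v << ' '; // converges back to the stored pattern
    std::cout << '\n';
}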

Reinforcement learning

This library provides algorithms supporting reinforcement learning (https://en.wikipedia.org/wiki/Reinforcement_learning), in particular Q-learning (https://en.wikipedia.org/wiki/Q-learning) and State–action–reward–state–action, SARSA (https://en.wikipedia.org/wiki/State%E2%80%93action%E2%80%93reward%E2%80%93state%E2%80%93action).
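
The core updates behind these algorithms are compact. The sketch below shows the tabular Q-learning update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) and its SARSA counterpart in plain C++ (illustrative only; the function names and table sizes are hypothetical, not the nunn API):

#include <algorithm>
#include <array>

constexpr int STATES = 16, ACTIONS = 4;
using QTable = std::array<std::array<double, ACTIONS>, STATES>;

// Q-learning (off-policy): bootstrap from the best action in the next state
void qLearningStep(QTable& Q, int s, int a, double r, int s2,
                   double alpha = 0.1, double gamma = 0.9)
{
    const double bestNext = *std::max_element(Q[s2].begin(), Q[s2].end());
    Q[s][a] += alpha * (r + gamma * bestNext - Q[s][a]);
}

// SARSA (on-policy): bootstrap from the action a2 actually taken in s2
void sarsaStep(QTable& Q, int s, int a, double r, int s2, int a2,
               double alpha = 0.1, double gamma = 0.9)
{
    Q[s][a] += alpha * (r + gamma * Q[s2][a2] - Q[s][a]);
}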

See the maze (https://github.com/eantcal/nunn/blob/master/examples/maze/maze.cc) and path finder (https://github.com/eantcal/nunn/blob/master/examples/path_finder/path_finder.cc) examples.