
Artificial Neural Networks (ANN)

A good way to start learning concepts related to AI and neural networks is to work with a Multilayer Perceptron (MLP). This code implements a simple MLP for beginners and supports networks of any size.

The source code file mlp.py defines a class called MLP. The public methods simple_inequality() and max_average() serve as examples of how to define problems for the MLP to solve, together with their training routines.
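Judging from the examples below, the network layout is described by an integer array whose entries are the layer sizes, from the input layer to the output layer. A minimal sketch of such a configuration (the interpretation of the array is an assumption based on those examples):

```python
import numpy as np

# Layer sizes: 2 inputs, four hidden layers of 60 neurons each, 1 output.
mycfg = np.array([2, 60, 60, 60, 60, 1], dtype=int)

print(mycfg.size)   # total number of layers, input and output included -> 6
print(mycfg[0])     # input layer size -> 2
print(mycfg[-1])    # output layer size -> 1
```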

Check which number is greater

Suppose we want to train our MLP to decide whether one real number (between 0 and 1) is greater than another. This example lets us train the MLP with just a few lines of code. To train it, you can try a pytest function like this:

import numpy as np
from mlp import MLP, MLPException  # adjust the import path to your project layout

def test_simple_inequality():
    try:
        mycfg = np.array([2, 60, 60, 60, 60, 1], dtype=int)  # Here we define our ANN
        mymlp = MLP(mycfg, mtype='random')
        print("Starting training...")
        epochs = 50000
        err_radius = 0.01
        acc_rate = mymlp.simple_inequality(epochs, err_radius)

        print(f"\nEpochs={epochs}, Accuracy = {100 * acc_rate / epochs:3.2f}%, with error radius = {err_radius}")
        print("Training finished!")

        # Here we test our trained MLP
        test_input = np.array([0.87, 0.23])
        mymlp.ldata_output(test_input)
        print(f"Is {test_input[0]} > {test_input[1]}? Obtained answer = {100 * mymlp.out_data[-1][0]:3.2f}%, "
              f"Expected answer = {100 if test_input[0] > test_input[1] else 0}%")

    except MLPException as e:
        print(e)
        assert False

    assert True
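For intuition, the mapping the MLP is asked to learn in this example can be written as a plain function. This is only a sketch of the target; the actual training logic lives inside MLP.simple_inequality():

```python
import numpy as np

def inequality_target(x: np.ndarray) -> np.ndarray:
    """Expected output for the 'greater than' task: 1.0 if x[0] > x[1], else 0.0."""
    return np.array([1.0 if x[0] > x[1] else 0.0])

# A well-trained network should output a value close to this target:
print(inequality_target(np.array([0.87, 0.23])))  # -> [1.]
print(inequality_target(np.array([0.20, 0.50])))  # -> [0.]
```

The test above compares the network's raw output against exactly this 0/100% expectation, within the chosen error radius.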

Learn to calculate the average and the maximum

Training an MLP to learn several things at the same time is also possible. Suppose we want to train an MLP to compute the average and the maximum of three input numbers (between 0 and 1). You can try a pytest function like this:

import numpy as np
from mlp import MLP, MLPException  # adjust the import path to your project layout

def test_max_average():
    try:
        mycfg = np.array([3, 60, 60, 60, 60, 2], dtype=int)
        mymlp = MLP(mycfg, mtype='random')
        print("Starting training...")
        epochs = 50000
        err_radius = 0.01
        acc_rate = mymlp.max_average(epochs, err_radius)

        print(f"\nEpochs={epochs}, Accuracy = {100 * acc_rate / epochs:3.2f}%, with error radius = {err_radius}")
        print("Training finished!")

        test_input = np.array([0.87, 0.23, 0.27])
        mymlp.ldata_output(test_input)
        print(f"What is the maximum of {test_input}? Obtained answer = {mymlp.out_data[-1][0]:1.3f}, "
              f"Expected answer = {np.max(test_input):1.3f}")
        print(f"What is the average of {test_input}? Obtained answer = {mymlp.out_data[-1][1]:1.3f}, "
              f"Expected answer = {np.average(test_input):1.3f}")

    except MLPException as e:
        print(e)
        assert False

    assert True
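As before, the two-component target this network is trained toward can be sketched as a plain function (an illustration only; MLP.max_average() contains the actual training routine):

```python
import numpy as np

def max_average_target(x: np.ndarray) -> np.ndarray:
    """Expected two-component output: [maximum, average] of the inputs."""
    return np.array([np.max(x), np.average(x)])

x = np.array([0.87, 0.23, 0.27])
print(max_average_target(x))  # first component: 0.87; second: the mean of x
```

Note that the output layer has size 2 in this example precisely because the network must produce both values at once.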