
Comparing changes

Choose two branches to see what's changed or to start a new pull request.

base repository: yabata/pyrenn
base: v0.1
head repository: yabata/pyrenn
compare: master

Commits on Jan 20, 2016

  1. DOI and documentation link

    yabata committed Jan 20, 2016 · 9d62f5e

  2. update doc

    yabata committed Jan 20, 2016 · 25d39e1

  3. Update index.rst

    yabata committed Jan 20, 2016 · 7028250

Commits on Jan 21, 2016

  1. update example_compair.py

    yabata committed Jan 21, 2016 · ece8223

  2. fix save and load function

    yabata committed Jan 21, 2016 · 3be5d8c

  3. Update index.rst

    yabata committed Jan 21, 2016 · bac2110

  4. Update README.md

    yabata committed Jan 21, 2016 · 2a13916

Commits on Feb 15, 2016

  1. 63bec92
  2. 2cce666
  3. Delete loadNN.m

    yabata committed Feb 15, 2016 · c39607f

Commits on Feb 16, 2016

  1. replace xlsread in loadNN.m

    yabata committed Feb 16, 2016 · 49bf0c5

Commits on May 3, 2016

  1. ad2d226

Commits on Nov 2, 2016

  1. matlab and python:

    modify initialization of a in prepare_data for given P0 and Y0. There was an error when using P0 and Y0 if the NN had internal delays.

    python:
    using int() to avoid numpy warnings while indexing
    yabata committed Nov 2, 2016 · 59179f5
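
    The int() fix refers to numpy's handling of non-integer indices; a minimal illustration (not the actual pyrenn code):

        import numpy as np

        a = np.zeros(3)
        i = np.floor(4 / 2)   # arithmetic on sizes often yields a float (2.0)
        a[int(i)] = 1.0       # int() avoids the warning numpy raises for float indices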

Commits on Nov 3, 2016

  1. In pyrenn.py in the RTRL function:

    the sensitivity matrix S is now reset every time;
    in dA_dw (the derivative of the layer outputs a with respect to the weight vector w), entries older than q-max_delay are deleted.

    As suggested by jjboltz1234 here #2 (comment):
    yabata committed Nov 3, 2016 · dbdf1fe
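
    A sketch of the pruning idea with hypothetical names (not the actual pyrenn internals): entries whose time step lies more than max_delay steps behind the current step q can no longer influence the gradient, so they are dropped.

        def prune_dA_dw(dA_dw, q, max_delay):
            # drop derivative entries older than q - max_delay
            for t in list(dA_dw.keys()):
                if t < q - max_delay:
                    del dA_dw[t]
            return dA_dw

        # toy usage: at step q=10 with max_delay=3, steps 0..6 are dropped
        store = {t: None for t in range(10)}
        prune_dA_dw(store, q=10, max_delay=3)   # keeps steps 7, 8, 9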

Commits on Nov 14, 2016

  1. ec4db9e

Commits on May 22, 2017

  1. Update train.rst

    training functions are called train_LM and train_BFGS, not trainLM and trainBFGS. Corrected this in the documentation.
    yabata authored May 22, 2017 · d00ea51

Commits on May 23, 2017

  1. Update pyrenn.py

    using np instead of numpy
    yabata authored May 23, 2017 · 49e6608

Commits on Jun 16, 2017

  1. Incorrect delay on dIn

    The NN wasn't loaded correctly when the internal delay was set
    DiMiGi authored Jun 16, 2017 · 707068f

Commits on Jun 20, 2017

  1. Merge pull request #4 from DiMiGi/patch-1

    used wrong index when loading internal delay from saved NN in function loadNN
    yabata authored Jun 20, 2017 · a3445fd

Commits on Aug 9, 2017

  1. Update train.rst

    yabata authored Aug 9, 2017 · 09eadc4

  2. Update create.rst

    yabata authored Aug 9, 2017 · ff784f2

Commits on Aug 17, 2017

  1. add classification example

    yabata committed Aug 17, 2017 · dd2b47d

Commits on Sep 25, 2017

  1. Add setup file (#1)

    akbargumbira authored Sep 25, 2017 · 1cf0f77

  2. 5311198

Commits on Sep 27, 2017

  1. Merge pull request #5 from akbargumbira/master

    Add setup file for easier installation
    yabata authored Sep 27, 2017 · 2cce9c8

  2. 2a159e5

Commits on Jun 25, 2018

  1. update docs

    dennisatabay committed Jun 25, 2018 · e0abef4

Commits on Jun 30, 2018

  1. fix doc errors

    yabata committed Jun 30, 2018 · 1bc7fb1

  2. update docs

    yabata committed Jun 30, 2018 · c332c39

Commits on Feb 18, 2019

  1. Update pyrenn.py

    The proposed changes help in two ways:
    1. Prevent training from running forever when the error improves only slightly
    2. Stop training early, before k_max is reached, if the error doesn't improve any more
    mc10011 authored Feb 18, 2019 · 32a6198
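
    A hedged sketch of such a stopping rule (illustrative names only, not the actual patch):

        def train_until_stalled(step, E, k_max=100, E_stop=1e-10, min_E_step=1e-9):
            # run step() (one LM iteration returning the new error) until
            # k_max iterations, the error goal E_stop, or stagnation is reached
            for k in range(k_max):
                E_new = step()
                if E_new <= E_stop:          # error goal reached
                    return E_new
                if E - E_new < min_E_step:   # too little improvement: stop early
                    return E_new
                E = E_new
            return E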

Commits on Apr 1, 2019

  1. Refine early stopping criterion

    Refine the early stopping criterion so it resets when a step is successful and verbose is chosen
    mc10011 authored Apr 1, 2019 · 8e46bf3

  2. Minor change

    deleted a redundant comment
    mc10011 authored Apr 1, 2019 · dbdba59

Commits on Apr 22, 2019

  1. Merge pull request #8 from mc10011/patch-2

    Update Train_LM with minimum error step criterion
    yabata authored Apr 22, 2019 · 708be9b

Commits on Jan 31, 2020

  1. add article

    VKdennis committed Jan 31, 2020 · 83ef2f1

  2. fdf48ca
5 changes: 4 additions & 1 deletion .gitignore
@@ -1,3 +1,6 @@
*~
*.pyc
python/__pycache__\*
python/__pycache__\*
.idea/*
dist/
python/pyrenn.egg-info
23 changes: 19 additions & 4 deletions README.md
@@ -2,25 +2,40 @@

pyrenn is a [recurrent neural network](https://en.wikipedia.org/wiki/Recurrent_neural_network) toolbox for Python and Matlab.

[![Documentation Status](https://readthedocs.org/projects/pyrenn/badge/?version=latest)](https://pyrenn.readthedocs.org/en/latest/) [![DOI](https://zenodo.org/badge/18757/yabata/pyrenn.svg)](https://zenodo.org/badge/latestdoi/18757/yabata/pyrenn)

## Features

* pyrenn allows creating a wide range of (recurrent) neural network topologies
* pyrenn allows creating a wide range of (recurrent) neural network configurations
* It is very easy to create, train and use neural networks
* It uses the [Levenberg–Marquardt algorithm](https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm) (a second-order Quasi-Newton optimization method) for training, which is much faster than first-order methods like [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent). In the matlab version, the [Broyden–Fletcher–Goldfarb–Shanno algorithm](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm) is additionally implemented
* The python version is written in pure python and numpy, the matlab version in pure matlab (no toolboxes needed)
* The [Real-Time Recurrent Learning (RTRL) algorithm](http://www.mitpressjournals.org/doi/abs/10.1162/neco.1989.1.2.270#.VpDullJ1F3Q) and the [Backpropagation Through Time (BPTT) algorithm](https://en.wikipedia.org/wiki/Backpropagation_through_time) are implemented and can be used to implement further training algorithms
* It comes with various examples which show how to create, train and use the neural network

## Articles

## Get Started
* [Pyrenn Levenberg-Marquardt (LM) Neural Network Training Algorithm as an Alternative to Matlab's LM Training Algorithm](https://mikescodeprojects.com/2020/01/12/pyrenn-vs-matlab/)

## Installation

### Install with pip (python only)

From your command line, run:

```bash
pip install pyrenn
```

### Install manually
1. [download](https://github.com/yabata/pyrenn/archive/master.zip) or clone (with [git](http://git-scm.com/)) this repository to a directory of your choice.
2.
* Python: Copy the `pyrenn.py` file in the `python` folder to a directory which is already in python's search path or add the `python` folder to python's search path (sys.path) ([how to](http://stackoverflow.com/questions/17806673/where-shall-i-put-my-self-written-python-packages/17811151#17811151))
* Matlab: Add the `matlab` folder to Matlab's search path ([how to](http://www.mathworks.com/help/matlab/matlab_env/add-remove-or-reorder-folders-on-the-search-path.html))
3. Run the given examples in the `examples` folder.
4. Read the [documentation](http://pyrenn.readthedocs.org) and create your own neural network.

## Get Started
1. Run the given examples in the `examples` folder.
2. Read the [documentation](http://pyrenn.readthedocs.org) and create your own neural network.
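
As a rough sketch of the typical workflow (toy data; `CreateNN`, `train_LM` and `NNOut` are the functions used in the examples and documentation):

```python
import numpy as np
import pyrenn as prn

# toy data: learn y = sin(x) from 100 samples
P = np.linspace(0, 2 * np.pi, 100).reshape(1, -1)  # inputs, shape (1, Q)
Y = np.sin(P)                                      # targets, shape (1, Q)

net = prn.CreateNN([1, 5, 5, 1])   # 1 input, two hidden layers, 1 output
net = prn.train_LM(P, Y, net, k_max=100, E_stop=1e-5)
y = prn.NNOut(P, net)              # outputs of the trained network
```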


## Copyright
9 changes: 4 additions & 5 deletions doc/create.rst
@@ -2,8 +2,8 @@

.. _create:

Creating a Neural Network
============
Create a neural network
========================

This chapter describes how to create a feed forward or recurrent neural network in pyrenn.

@@ -18,7 +18,7 @@ pyrenn allows to create `multilayer perceptron (MLP)`_ neural networks. A MLP is
* :math:`M-1` hidden layers, where each layer :math:`m` has an arbitrary number of neurons :math:`S^\text{m}`
* and an output layer with :math:`S^\text{M}` neurons, which corresponds to the number of outputs of the neural network

The following notation allows n short description of an MLP which gives the number of inputs :math:`R`, the number of layers :math:`M` and the number of neurons :math:`S^\text{m}` in each layer :math:`m`:
The following notation allows a short description of a MLP which gives the number of inputs :math:`R`, the number of layers :math:`M` and the number of neurons :math:`S^\text{m}` in each layer :math:`m`:

.. math::
@@ -112,8 +112,7 @@ The connection weights :math:`w` are represented by the matrix :math:`\widetilde
w^2_{2,1} & w^2_{2,2}
\end{bmatrix}\;
\widetilde{LW}^{3,2}= \begin{bmatrix}
w^3_{1,1} & w^3_{1,2} \\
w^3_{2,1} & w^3_{2,2}
w^3_{1,1} & w^3_{1,2}
\end{bmatrix}

`figure 3` shows the array-matrix illustration of the MLP of `figure 1` :
100 changes: 99 additions & 1 deletion doc/examples.rst
@@ -670,7 +670,7 @@ Then the neural network is created. Since we have a system with 3 inputs and 2 o

.. code-block:: matlab
nn = [3 4 4 2];
nn = [3 5 5 2];
dIn = [0];
dIntern=[];
dOut=[1];
@@ -994,3 +994,101 @@ The function ``prn.BPTT()`` uses the Back Propagation Through Time algorithm and
.. code-block:: matlab
g_bptt = BPTT(net,data);
Classification (MNIST Data)
---------------------------

In this example a neural network is used to learn to recognize handwritten digits.
For this, the MNIST dataset hosted on `Yann LeCun's website`_ is used.
The data set consists of 60,000 data points for training and 10,000 data points for testing. To reduce the size of the data file, only 25,000 data points for training and 5,000 for testing are used here.
Each data point is defined by a 28x28 pixel image (784 numbers) and the
corresponding number represented by a 10 element vector
(one element for each digit 0,1,2,3,4,5,6,7,8,9). For the number n,
only the n-th element is 1, all others are zero. So the vector [0 0 0 0 0 1 0 0 0 0] represents the number 5.
A more detailed explanation of the MNIST data can be found in the `Tensorflow tutorial`_.

.. _Yann LeCun's website: http://yann.lecun.com/exdb/mnist/
.. _Tensorflow tutorial: https://www.tensorflow.org/get_started/mnist/beginners
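
The target encoding can be illustrated with a small helper (hypothetical, not part of the example code):

::

    import numpy as np

    def one_hot(n, num_classes=10):
        #build the 10 element target vector for digit n
        v = np.zeros(num_classes)
        v[n] = 1
        return v

    one_hot(5)  #array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])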

Python
^^^^^^^^^^^

First, the needed packages are imported: pickle for reading the data, matplotlib for plotting the results, numpy for its random function and pyrenn for the neural network.

::

import matplotlib as mpl
import matplotlib.pyplot as plt
import pickle
import numpy as np
import pyrenn as prn

Then the training input data P and output (target) data Y as well as the test input data Ptest and output data Ytest are read from the given pickle file. Each image is defined by the values of its 784 pixels, so P is a 2d array of size (784,Q), where Q is the number of training samples (25,000). Y contains the 10 element target vectors, which gives a 2d array of size (10,Q). Ptest and Ytest have the same structure, with Q=5,000 test samples.


::

mnist = pickle.load( open( "MNIST_data.pkl", "rb" ) )
P = mnist['P']
Y = mnist['Y']
Ptest = mnist['Ptest']
Ytest = mnist['Ytest']
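
A quick sanity check of the array shapes, assuming the file contents described above:

::

    print(P.shape, Y.shape)          #(784, 25000) (10, 25000)
    print(Ptest.shape, Ytest.shape)  #(784, 5000) (10, 5000)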

Then the neural network is created. Since we have a system with 28*28 inputs and 10 outputs, we need a neural network with the same number of inputs and outputs. For this system we choose a neural network with one hidden layer of 10 neurons. Since there is no interconnection between the images, we need neither a recurrent network nor delayed inputs, so the default delay settings are kept.

::

net = prn.CreateNN([28*28,10,10])

Because training the network with all of the available training data would need a lot of memory and time, we randomly extract a batch of 1000 data samples and use it to train the network. Because we want to use as much information from our data as possible, we only perform one iteration (k_max=1) with each batch and then extract a new one. In this example we do this 20 times, so we train the net for 20 iterations, but each iteration uses new training data.
``verbose=True`` activates displaying the error during training.

::

batch_size = 1000
number_of_batches=20

    for i in range(number_of_batches):
        r = np.random.randint(0,25000-batch_size)
        Ptrain = P[:,r:r+batch_size]
        Ytrain = Y[:,r:r+batch_size]

        #Train NN with training data Ptrain=input and Ytrain=target
        #Set maximum number of iterations k_max
        #Set termination condition for Error E_stop
        #The training will stop after k_max iterations or when the Error <= E_stop
        net = prn.train_LM(Ptrain,Ytrain,net,
                           verbose=True,k_max=1,E_stop=1e-5)
        print('Batch No. ',i,' of ',number_of_batches)

After the training is finished, we can use the neural network. To do so, we choose 9 random samples from the test data set and use their inputs to calculate the NN outputs.
Then we can plot the results, comparing the output of the neural network (the number above each image) with the test input (the image itself).

::

idx = np.random.randint(0,5000-9)
P_ = Ptest[:,idx:idx+9]
Y_ = prn.NNOut(P_,net)

fig = plt.figure(figsize=[11,7])
gs = mpl.gridspec.GridSpec(3,3)

    for i in range(9):
        ax = fig.add_subplot(gs[i])
        y_ = np.argmax(Y_[:,i]) #find index with highest value in NN output
        p_ = P_[:,i].reshape(28,28) #convert input data to a 28x28 image for plotting
        ax.imshow(p_) #plot input data
        ax.set_xticks([])
        ax.set_yticks([])
        ax.set_title(str(y_), fontsize=18)
    plt.show()

.. figure:: img/example_python_classification.png
:width: 95%
:align: center

Binary file modified doc/img/MLP2221_detailed.png
Binary file added doc/img/example_python_classification.png
Binary file modified doc/img/recurrent_nn.png
12 changes: 10 additions & 2 deletions doc/index.rst
@@ -20,6 +20,9 @@ pyrenn: A recurrent neural network toolbox for python and matlab
.. __: http://creativecommons.org/licenses/by/4.0/


.. image:: https://zenodo.org/badge/18757/yabata/pyrenn.svg
:target: https://zenodo.org/badge/latestdoi/18757/yabata/pyrenn

Contents
--------

@@ -37,13 +40,18 @@ This documentation contains the following pages:

Features
--------
* pyrenn allows creating a wide range of (recurrent) neural network topologies
* pyrenn allows creating a wide range of (recurrent) neural network configurations
* It is very easy to create, train and use neural networks
* It uses the `Levenberg–Marquardt algorithm`_ (a second-order Quasi-Newton optimization method) for training, which is much faster than first-order methods like `gradient descent`_. In the matlab version, the `Broyden–Fletcher–Goldfarb–Shanno algorithm`_ is additionally implemented
* The python version is written in pure python and numpy, the matlab version in pure matlab (no toolboxes needed)
* The `Real-Time Recurrent Learning (RTRL) algorithm`_ and the `Backpropagation Through Time (BPTT) algorithm`_ are implemented and can be used to implement further training algorithms
* It comes with various examples which show how to create, train and use the neural network

Articles
--------

* `Pyrenn Levenberg-Marquardt (LM) Neural Network Training Algorithm as an Alternative to Matlab's LM Training Algorithm <https://mikescodeprojects.com/2020/01/12/pyrenn-vs-matlab>`_


Get Started
-----------
@@ -77,4 +85,4 @@ Dependencies (Python)
.. _Real-Time Recurrent Learning (RTRL) algorithm: http://www.mitpressjournals.org/doi/abs/10.1162/neco.1989.1.2.270#.VpDullJ1F3Q
.. _Backpropagation Through Time (BPTT) algorithm: https://en.wikipedia.org/wiki/Backpropagation_through_time
.. _download: https://github.com/yabata/pyrenn/archive/master.zip
.. _git: http://git-scm.com/
.. _git: http://git-scm.com/
14 changes: 7 additions & 7 deletions doc/train.rst
@@ -47,10 +47,10 @@ The training repeats adapting the weights of the weight vector :math:`\underline
* the maximal number of iterations (epochs) :math:`k_{max}` is reached
* the Error is minimized to the goal :math:`E \leq E_{stop}`

``trainLM()``: train with Levenberg-Marquardt Algorithm
``train_LM()``: train with Levenberg-Marquardt Algorithm
--------------------------------------------------------

The function ``trainLM()`` is an implementation of the `Levenberg–Marquardt algorithm`_ (LM) based on:
The function ``train_LM()`` is an implementation of the `Levenberg–Marquardt algorithm`_ (LM) based on:

Levenberg, K.: A Method for the Solution of Certain Problems in Least Squares. Quarterly of Applied Mathematics, 2:164-168, 1944.

@@ -65,7 +65,7 @@ Williams, Ronald J.; Zipser, David: A Learning Algorithm for Continually Running
Python
^^^^^^^^^^^

.. py:function:: pyrenn.trainLM(P, Y, net ,[k_max=100, E_stop=1e-10, dampfac=3.0, dampconst=10.0, verbose = False])
.. py:function:: pyrenn.train_LM(P, Y, net ,[k_max=100, E_stop=1e-10, dampfac=3.0, dampconst=10.0, verbose = False])
Trains the given neural network ``net`` with the training data inputs ``P`` and outputs (targets) ``Y`` using the Levenberg–Marquardt algorithm.
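
For illustration, a minimal call with toy data (the data and parameter values here are only examples):

::

    import numpy as np
    import pyrenn as prn

    P = np.random.rand(2, 50)              #2 inputs, 50 training samples
    Y = np.sum(P, axis=0, keepdims=True)   #1 target output: the sum of the inputs

    net = prn.CreateNN([2, 4, 1])
    #training stops after at most 100 iterations or when the error E <= 1e-10
    net = prn.train_LM(P, Y, net, k_max=100, E_stop=1e-10,
                       dampfac=3.0, dampconst=10.0, verbose=False)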

@@ -84,7 +84,7 @@ Python
Matlab
^^^^^^^^^^^

.. c:function:: trainLM(P, Y, net ,[k_max=100, E_stop=1e-10])
.. c:function:: train_LM(P, Y, net ,[k_max=100, E_stop=1e-10])
Trains the given neural network ``net`` with the training data inputs ``P`` and outputs (targets) ``Y`` using the Levenberg–Marquardt algorithm.

@@ -98,18 +98,18 @@ Matlab
:rtype: struct


``trainBFGS()``: train with Broyden–Fletcher–Goldfarb–Shanno Algorithm (Matlab only)
``train_BFGS()``: train with Broyden–Fletcher–Goldfarb–Shanno Algorithm (Matlab only)
-------------------------------------------------------------------------------------

The function ``trainBFGS()`` is an implementation of the `Broyden–Fletcher–Goldfarb–Shanno algorithm`_ (BFGS). The BFGS algorithm is a second order optimization method that uses rank-one updates specified by evaluations of the gradient :math:`\underline{g}` to approximate the Hessian matrix :math:`H`. In pyrenn the gradient :math:`\underline{g}` for BFGS is calculated using the `Backpropagation Through Time (BPTT) algorithm`_ based on:
The function ``train_BFGS()`` is an implementation of the `Broyden–Fletcher–Goldfarb–Shanno algorithm`_ (BFGS). The BFGS algorithm is a second order optimization method that uses rank-one updates specified by evaluations of the gradient :math:`\underline{g}` to approximate the Hessian matrix :math:`H`. In pyrenn the gradient :math:`\underline{g}` for BFGS is calculated using the `Backpropagation Through Time (BPTT) algorithm`_ based on:

Werbos, Paul: Backpropagation through time: what it does and how to do it. In: Proceedings of the IEEE, No. 10, Vol. 78 (1990), pp. 1550-1560.


Matlab
^^^^^^^^^^^

.. c:function:: trainBFGS(P, Y, net ,[k_max=100, E_stop=1e-10])
.. c:function:: train_BFGS(P, Y, net ,[k_max=100, E_stop=1e-10])
Trains the given neural network ``net`` with the training data inputs ``P`` and outputs (targets) ``Y`` using the Broyden–Fletcher–Goldfarb–Shanno algorithm.

8 changes: 4 additions & 4 deletions matlab/examples/example_compair.m
@@ -12,16 +12,16 @@
%%
%Create NN

%create recurrent neural network with 1 input, 2 hidden layers with
%3 neurons each and 1 output
%create recurrent neural network with 3 inputs, 2 hidden layers with
%5 neurons each and 2 outputs
%the NN uses the input data at timestep t (no input delay)
%The NN has a recurrent connection with delay of 1 timestep from the output
% to the first layer (and no recurrent connection of the hidden layers)
nn = [3 4 4 2];
nn = [3 5 5 2];
dIn = [0];
dIntern=[];
dOut=[1];
net = CreateNN(nn,dIn,dIntern,dOut); %alternative: net = CreateNN([3,4,4,2],[0],[],[1]);
net = CreateNN(nn,dIn,dIntern,dOut); %alternative: net = CreateNN([3,5,5,2],[0],[],[1]);

%%
%Train with LM-Algorithm
4 changes: 2 additions & 2 deletions matlab/examples/example_narendra4.m
@@ -1,4 +1,4 @@
%%
u%%
%Read Example Data
file = 'example_data.xlsx';
num = xlsread(file,'narendra4');
@@ -17,7 +17,7 @@
%the NN uses the input data at timestep t-1 and t-2
%The NN has a recurrent connection with delay of 1,2 and 3 timesteps from the output
% to the first layer (and no recurrent connection of the hidden layers)
nn = [1 2 2 1];
nn = [1 3 3 1];
dIn = [1,2];
dIntern=[];
dOut=[1,2,3];
6 changes: 3 additions & 3 deletions matlab/examples/example_using_P0Y0_compair.m
@@ -20,16 +20,16 @@
%%
%Create NN

%create recurrent neural network with 1 input, 2 hidden layers with
%3 neurons each and 1 output
%create recurrent neural network with 3 inputs, 2 hidden layers with
%5 neurons each and 2 outputs
%the NN uses the input data at timestep t (no input delay)
%The NN has a recurrent connection with delay of 1 timestep from the output
% to the first layer (and no recurrent connection of the hidden layers)
nn = [3 5 5 2];
dIn = [0];
dIntern=[];
dOut=[1];
net = CreateNN(nn,dIn,dIntern,dOut); %alternative: net = CreateNN([3,4,4,2],[0],[],[1]);
net = CreateNN(nn,dIn,dIntern,dOut); %alternative: net = CreateNN([3,5,5,2],[0],[],[1]);

%%
%Train with LM-Algorithm
2 changes: 1 addition & 1 deletion matlab/examples/example_using_P0Y0_narendra4.m
@@ -25,7 +25,7 @@
%the NN uses the input data at timestep t-1 and t-2
%The NN has a recurrent connection with delay of 1,2 and 3 timesteps from the output
% to the first layer (and no recurrent connection of the hidden layers)
nn = [1 2 2 1];
nn = [1 3 3 1];
dIn = [1,2];
dIntern=[];
dOut=[1,2,3];