This is a toy PyTorch-like framework for educational purposes.
The central idea of the project is to implement the autograd (automatic differentiation) mechanics and show how they work.
macOS or Linux:
chmod +x install.sh
./install.sh
To avoid dependency conflicts, development is kept inside a Python virtual environment:
source venv/bin/activate
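The `venv` directory is presumably created by install.sh; if it is not (an assumption), a minimal manual setup looks like this:

```shell
python3 -m venv venv          # create the virtual environment
source venv/bin/activate      # activate it for the current shell
pip install numpy             # the only dependency used in the examples below
```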
Let's take a look at how the simplest function can be implemented within our framework and how to compute its gradient:
import numpy as np
from mini_torch.tensor import Tensor as T
# trainable parameters
w1 = T.from_numpy('w1', np.array([-0.91]).astype(float))
w0 = T.from_numpy('w0', np.array([1.5]).astype(float))
# input data: no gradient needed
x = T.from_numpy('x', np.array([2.]).astype(float), required_grad=False)
y = w0 + w1*x
print(y)
> <class 'mini_torch.tensor.Tensor'>
> [-0.32]
> shape: (1,)
y.backward()
print('grad w0', w0.grad)
print('grad w1', w1.grad)
> {'w0': array([1.])}
> {'w1': array([2.])}
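These gradients can be checked by hand: since y = w0 + w1*x, the chain rule gives dy/dw0 = 1 and dy/dw1 = x = 2. The same reverse-mode idea can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the mini_torch implementation: each node stores its parents together with the local gradient of the operation, and backward() propagates the upstream gradient through them.

```python
# Minimal scalar reverse-mode autograd sketch (illustrative, not the mini_torch API).
class Scalar:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # pairs of (parent node, local gradient)

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Scalar(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Scalar(self.value * other.value,
                      ((self, other.value), (other, self.value)))

    def backward(self, upstream=1.0):
        # Accumulate the upstream gradient, then apply the chain rule recursively.
        self.grad += upstream
        for parent, local in self._parents:
            parent.backward(upstream * local)

w1 = Scalar(-0.91)
w0 = Scalar(1.5)
x = Scalar(2.0)
y = w0 + w1 * x
y.backward()
print(y.value)   # ~ -0.32
print(w0.grad)   # 1.0
print(w1.grad)   # 2.0
```

The recursive backward() walk is exactly what y.backward() does in the framework above, just generalized to NumPy-backed tensors.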
denistr16 – @denistr16 on GitHub
Distributed under the MIT license. See LICENSE for more information.
https://github.com/denistr16/miniPyTorch
- Fork it (https://github.com/denistr16/miniPyTorch/fork)
- Create your feature branch (git checkout -b feature/myNewFeature)
- Commit your changes (git commit -am 'Add some myNewFeature')
- Push to the branch (git push origin feature/myNewFeature)
- Create a new Pull Request