Commit

fixed typo
moralapablo committed Oct 30, 2023
1 parent 6eacf72 commit 0ad1efb
Showing 2 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion vignettes/nn2poly-04-torch-regression.Rmd
@@ -7,7 +7,7 @@ output:
fig_width: 6
date: "2023-10-30"
vignette: >
-%\VignetteIndexEntry{02 - Regression example using tensorflow}
+%\VignetteIndexEntry{04 - Regression example using torch}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
8 changes: 4 additions & 4 deletions vignettes/source/nn2poly-04-torch-regression.Rmd.orig
@@ -7,7 +7,7 @@ output:
fig_width: 6
date: "`r Sys.Date()`"
vignette: >
-%\VignetteIndexEntry{02 - Regression example using tensorflow}
+%\VignetteIndexEntry{04 - Regression example using torch}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
@@ -146,7 +146,7 @@ Then, the NNs will be built using a sequential model with `luz` and `torch`. As w

luz_nn1 <- function() {
  torch::torch_manual_seed(42)

  luz_model_sequential(
    torch::nn_linear(p,100),
    torch::nn_tanh(),
@@ -160,7 +160,7 @@ luz_nn1 <- function() {

luz_nn2 <- function() {
  torch::torch_manual_seed(42)

  luz_model_sequential(
    torch::nn_linear(p,100),
    torch::nn_tanh(),
@@ -190,7 +190,7 @@ fitted_1 <- luz_nn1() %>%
  luz::fit(torch_data$train, epochs = 50, valid_data = torch_data$valid)
```
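For context, the diff elides the `luz::setup()` step between `luz_nn1()` and `luz::fit()`. Below is a minimal sketch of what such a pipeline typically looks like in `luz`; the loss and optimizer shown are illustrative assumptions, not taken from this commit.

```r
library(magrittr)  # for %>%, as used in the vignette

# Illustrative sketch only: the setup step between luz_nn1() and
# luz::fit() is not shown in this diff, so these choices are assumed.
fitted_1 <- luz_nn1() %>%
  luz::setup(
    loss = torch::nn_mse_loss(),   # assumed: MSE loss for a regression task
    optimizer = torch::optim_adam  # assumed: Adam optimizer
  ) %>%
  luz::fit(torch_data$train, epochs = 50, valid_data = torch_data$valid)
```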

In order to implement the desired constraints, we provide the `add_constraints()` function, which allows setting them as callbacks in the torch training setup.
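As a rough usage sketch: the exact position of `add_constraints()` in the pipeline and the `"l1_norm"` option name are inferred from the surrounding text, not shown in this diff, so treat the details below as assumptions.

```r
# Hedged sketch: same pipeline as above, with the constraint callback
# added. Where add_constraints() slots into the luz pipeline, and its
# argument form, are assumptions here.
fitted_1_constrained <- luz_nn1() %>%
  luz::setup(
    loss = torch::nn_mse_loss(),
    optimizer = torch::optim_adam
  ) %>%
  nn2poly::add_constraints("l1_norm") %>%  # or "l2_norm"
  luz::fit(torch_data$train, epochs = 50, valid_data = torch_data$valid)
```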

Our constraints implementation includes the bias of each neuron in the weight vector incident on that neuron: if the previous layer has $h$ neurons, the weight vector at a given neuron, including the bias, has dimension $h+1$, with the bias as its first element. Currently, the implemented options constrain either the L1 norm or the L2 norm to equal 1.
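A tiny numeric illustration of that layout, using hypothetical values not taken from the package:

```r
# Hypothetical example: a neuron fed by h = 3 neurons from the previous
# layer. The constrained vector stacks the bias first, then the h weights.
h <- 3
bias <- 0.5
weights <- c(-1.2, 0.8, 2.0)

v <- c(bias, weights)       # dimension h + 1 = 4, bias as first element
v_l1 <- v / sum(abs(v))     # rescaled so the L1 norm equals 1
sum(abs(v_l1))              # check: 1
v_l2 <- v / sqrt(sum(v^2))  # alternative: L2 norm equal to 1
```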

