Update solve.jl docstring to remove mention of extra return and minor… #908

Merged: 1 commit, Jan 16, 2025
15 changes: 8 additions & 7 deletions src/solve.jl
@@ -32,27 +32,28 @@ keyword arguments for the `local_method` of a global optimizer are passed as a

Over time, we hope to cover more of these keyword arguments under the common interface.

-If a common argument is not implemented for a optimizer, a warning will be shown.
+A warning will be shown if a common argument is not implemented for an optimizer.

## Callback Functions

-The callback function `callback` is a function which is called after every optimizer
+The callback function `callback` is a function that is called after every optimizer
step. Its signature is:

```julia
callback = (state, loss_val) -> false
```

-where `state` is a `OptimizationState` and stores information for the current
+where `state` is an `OptimizationState` and stores information for the current
iteration of the solver and `loss_val` is loss/objective value. For more
information about the fields of the `state` look at the `OptimizationState`
documentation. The callback should return a Boolean value, and the default
-should be `false`, such that the optimization gets stopped if it returns `true`.
+should be `false`, so the optimization stops if it returns `true`.
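
For illustration (not part of the file change itself), a minimal callback sketch that logs progress and stops once the objective falls below an assumed tolerance `tol` could look like the following; the `iter` field used here is the one described in the `OptimizationState` documentation:

```julia
# Minimal sketch; `tol` is an assumed tolerance, `state.iter` the documented iteration counter.
tol = 1e-6

callback = function (state, loss_val)
    println("iteration $(state.iter): loss = $loss_val")
    return loss_val < tol  # returning `true` stops the optimization early
end
```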

### Callback Example

-Here we show an example a callback function that plots the prediction at the current value of the optimization variables.
-The loss function here returns the loss and the prediction i.e. the solution of the `ODEProblem` `prob`, so we can use the prediction in the callback.
+Here we show an example of a callback function that plots the prediction at the current value of the optimization variables.
+For a visualization callback, we would need the prediction at the current parameters i.e. the solution of the `ODEProblem` `prob`.
[JuliaFormatter] reported by reviewdog 🐶

Suggested change:
For a visualization callback, we would need the prediction at the current parameters i.e. the solution of the `ODEProblem` `prob`.

+So we call the `predict` function within the callback again.
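
For illustration (not part of the diff excerpt below), here is a hedged sketch of such a plotting callback and how it might be passed to `solve`, assuming the `predict` function and `batch` data from this example plus an assumed time grid `t` and the Plots.jl backend:

```julia
using Plots  # assumed plotting backend

callback = function (state, l; doplot = false)  # observe training progress
    display(l)
    if doplot
        # `state.u` holds the current optimization variables; recompute the prediction there
        pred = predict(state.u)
        plt = scatter(t, batch[1, :]; label = "data")
        scatter!(plt, t, pred[1, :]; label = "prediction")
        display(plot(plt))
    end
    return false  # return `true` to stop the optimization early
end

# e.g. sol = solve(optprob, optimizer; callback = callback)
```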

```julia
function predict(u)
@@ -61,7 +62,7 @@ end

function loss(u, p)
    pred = predict(u)
-    sum(abs2, batch .- pred), pred
+    sum(abs2, batch .- pred)
end

callback = function (state, l; doplot = false) #callback function to observe training