Migration note: v0.3.0 to v0.4.0
PyTorch-Ignite v0.4.0 has several backward-compatibility-breaking changes. Here we provide some information on how to adapt your code to v0.4.0.
Engine in v0.3.0 automatically synchronized the dataflow during the run in order to make training reproducible. However, this approach was reported to have several important drawbacks, so we decided to remove this behaviour from Engine in v0.4.0 (https://github.com/pytorch/ignite/pull/940):
- no more internal patching of torch DataLoader
- the seed argument of Engine.run is deprecated
Deprecated:

```python
trainer = ...
trainer.run(data, max_epochs=N, seed=12)
```

New:

```python
from ignite.utils import manual_seed

trainer = ...
manual_seed(12)
trainer.run(data, max_epochs=N)
```
Similar behaviour to v0.3.0 can be achieved with DeterministicEngine.
In v0.4.0, create_supervised_trainer and create_supervised_evaluator no longer move the model to the device (https://github.com/pytorch/ignite/pull/910). The reason is that the user should move the model to the device before constructing the optimizer, not after:
Deprecated (the helper moved the model to the device):

```python
model = ...
optimizer = SGD(model.parameters(), ...)
criterion = ...
trainer = create_supervised_trainer(model, optimizer, criterion, device="cuda")
```

New (move the model to the device yourself, before creating the optimizer):

```python
model = ...
model.to("cuda")
optimizer = SGD(model.parameters(), ...)
criterion = ...
trainer = create_supervised_trainer(model, optimizer, criterion, device="cuda")
```
In v0.4.0 all built-in and custom events inherit from EventEnum. To define a custom event class, inheriting from EventEnum is now obligatory:
```python
from ignite.engine import Engine, EventEnum

class BackpropEvents(EventEnum):
    BACKWARD_STARTED = 'backward_started'
    BACKWARD_COMPLETED = 'backward_completed'
    OPTIM_STEP_COMPLETED = 'optim_step_completed'

trainer = Engine(update)
trainer.register_events(*BackpropEvents)