From 53ab44f995be14480147be80dde9958877064d00 Mon Sep 17 00:00:00 2001
From: Jason Andrews
Date: Tue, 31 Dec 2024 13:08:30 -0600
Subject: [PATCH] spelling

---
 .../pytorch-digit-classification-arch-training/model-opt.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md
index 33c998290..ec4cc9c61 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md
@@ -57,7 +57,7 @@ class NeuralNetwork(nn.Module):
         return x # Outputs raw logits
 ```
 
-This code defines a neural network in PyTorch for digit classification, consisting of three linear layers with ReLU activations and optional dropout layers for regularization. The network first flattens the input, that is a 28x28 image, and passes it through two linear layers, each followed by a ReLU activation and if enbaled, a dropout layer. The final layer produces raw logits as the output. Notably, the softmax layer has been removed to enable quantization and layer fusion during model optimization, allowing better performance when deploying the model on mobile or edge devices.
+This code defines a neural network in PyTorch for digit classification, consisting of three linear layers with ReLU activations and optional dropout layers for regularization. The network first flattens the input, that is a 28x28 image, and passes it through two linear layers, each followed by a ReLU activation and if enabled, a dropout layer. The final layer produces raw logits as the output. Notably, the softmax layer has been removed to enable quantization and layer fusion during model optimization, allowing better performance when deploying the model on mobile or edge devices.
 
 The output is left as logits, and the softmax function can be applied during post-processing, particularly during inference.
 
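
For context beyond the hunk above, here is a minimal, hypothetical sketch of the network the corrected paragraph describes. The hidden sizes (128, 64) and the dropout probability (0.2) are assumptions, since the actual values in model-opt.md fall outside this hunk; only the 28x28 input, the three linear layers with ReLU activations, the optional dropout, and the raw-logit output come from the paragraph itself.

```python
import torch
import torch.nn as nn

class NeuralNetwork(nn.Module):
    """Sketch of the described model: three linear layers, ReLU, optional dropout."""
    def __init__(self, use_dropout=True):
        super().__init__()
        self.flatten = nn.Flatten()                    # 28x28 image -> 784-vector
        layers = [nn.Linear(28 * 28, 128), nn.ReLU()]  # hidden size 128: assumption
        if use_dropout:
            layers.append(nn.Dropout(0.2))             # dropout p=0.2: assumption
        layers += [nn.Linear(128, 64), nn.ReLU()]      # hidden size 64: assumption
        if use_dropout:
            layers.append(nn.Dropout(0.2))
        layers.append(nn.Linear(64, 10))               # 10 digit classes, raw logits
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(self.flatten(x))             # no softmax here, by design

# Softmax is applied only as post-processing at inference:
model = NeuralNetwork().eval()
with torch.no_grad():
    logits = model(torch.rand(1, 28, 28))
    probs = torch.softmax(logits, dim=1)               # class probabilities
```

Keeping softmax out of forward() is what lets the linear layers be fused and quantized during model optimization, as the paragraph notes; applying torch.softmax to the logits at inference recovers class probabilities.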