From d589ae9b73ca521a5866565ddc28e903c62e50b4 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 15:18:15 +0000 Subject: [PATCH 01/96] Editorial. --- content/install-guides/skopeo.md | 58 ++++++++++++++++---------------- 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/content/install-guides/skopeo.md b/content/install-guides/skopeo.md index d298cb311..8463a440d 100644 --- a/content/install-guides/skopeo.md +++ b/content/install-guides/skopeo.md @@ -24,7 +24,7 @@ Skopeo is a command-line utility that performs various operations on container i This article explains how to install Skopeo for Ubuntu on Arm. -Skopeo is available for Windows, macOS, and Linux and supports the Arm architecture. Refer to [Installing Skopeo](https://github.com/containers/skopeo/blob/main/install.md) for information about other operating systems and architectures. +Skopeo is available for Windows, macOS, and Linux and supports the Arm architecture. See [Installing Skopeo](https://github.com/containers/skopeo/blob/main/install.md) for further information about other operating systems and architectures. ## What should I consider before installing Skopeo on Arm? @@ -56,13 +56,13 @@ Confirm the installation by checking the version: skopeo --version ``` -To see the help message: +To see the help message use this command: ```bash skopeo --help ``` -The output is: +The output that you will see should be: ```output Various operations with container images and container image registries @@ -72,44 +72,44 @@ Usage: skopeo [command] Available Commands: - copy Copy an IMAGE-NAME from one location to another - delete Delete image IMAGE-NAME - generate-sigstore-key Generate a sigstore public/private key pair - help Help about any command - inspect Inspect image IMAGE-NAME - list-tags List tags in the transport/repository specified by the SOURCE-IMAGE - login Login to a container registry - logout Logout of a container registry - manifest-digest Compute a manifest digest of a file - standalone-sign Create a signature using local files - standalone-verify Verify a signature using local files - sync Synchronize one or more images from one location to another + copy Copy an IMAGE-NAME from one location to another. + delete Delete image IMAGE-NAME. + generate-sigstore-key Generate a sigstore public/private key pair. + help Help about any command. + inspect Inspect image IMAGE-NAME. + list-tags List tags in the transport/repository specified by the SOURCE-IMAGE. + login Log in to a container registry. + logout Log out of a container registry. + manifest-digest Compute a manifest digest of a file. + standalone-sign Create a signature using local files. + standalone-verify Verify a signature using local files. + sync Synchronize one or more images from one location to another. Flags: - --command-timeout duration timeout for the command execution - --debug enable debug output - -h, --help help for skopeo - --insecure-policy run the tool without any policy check - --override-arch ARCH use ARCH instead of the architecture of the machine for choosing images - --override-os OS use OS instead of the running OS for choosing images - --override-variant VARIANT use VARIANT instead of the running architecture variant for choosing images - --policy string Path to a trust policy file - --registries.d DIR use registry configuration files in DIR (e.g. 
for container signature storage) - --tmpdir string directory used to store temporary files - -v, --version Version for Skopeo + --command-timeout duration Timeout for the command execution. + --debug Enable debug output. + -h, --help Help for skopeo. + --insecure-policy Run the tool without any policy check. + --override-arch ARCH Use ARCH instead of the architecture of the machine for choosing images. + --override-os OS Use OS instead of the running OS for choosing images. + --override-variant VARIANT Use VARIANT instead of the running architecture variant for choosing images. + --policy string Path to a trust policy file. + --registries.d DIR Use registry configuration files in DIR (for example, for container signature storage). + --tmpdir string Directory used to store temporary files. + -v, --version Version for Skopeo. Use "skopeo [command] --help" for more information about a command. ``` ## How do I get started with Skopeo? -Some commands to get you started with Skopeo are demonstrated below. +You can use the commands listed below to get you started with Skopeo. ### How can I check if a container image supports Arm? To find out if an image is multi-architecture, including Arm, you can inspect the image's manifest. -For example, to check if the dev container available for creating Arm Learning Paths supports the Arm architecture run: +For example, to check if the dev container available for creating Arm Learning Paths supports the Arm architecture, run: ```bash skopeo inspect --raw docker://docker.io/armswdev/learn-dev-container:latest | jq '.manifests[] | select(.platform.architecture == "arm64")' @@ -166,7 +166,7 @@ The output confirms that both `arm64` and `amd64` are supported architectures as ## What are some other uses for Skopeo? -Copy an image from a registry to a local directory. This command is similar to `docker pull` and will copy the image from the remote registry to your local directory. +Copy an image from a registry to a local directory. This command is similar to `docker pull` and copies the image from the remote registry to your local directory. ```bash skopeo copy docker://docker.io/armswdev/uname:latest dir:./uname From 71882491d95d3b0050fb7d67468991ce06dad76c Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 17:29:01 +0000 Subject: [PATCH 02/96] Tweaked index file. Changed title, improved target audience statement. --- .../snort3-multithreading/_index.md | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md index 7cddca063..75422d95e 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md @@ -1,16 +1,12 @@ --- -title: Scaling Snort3 - use multithreading for improved performance - -draft: true -cascade: - draft: true +title: Optimize performance of Snort 3 using multithreading minutes_to_complete: 45 -who_is_this_for: This blog is for engineers familiar with Snort who want to enhance its performance by leveraging the benefits of multithreading. +who_is_this_for: This Learning Path is for software developers familiar with Snort who want to optimize performance by leveraging the benefits of multithreading. learning_objectives: - - Install Snort with all of its dependencies. 
+ - Install Snort and all of its dependencies. - Configure Snort Lua files to enable multithreading. - Use multithreading to process capture files and measure performance. From 462d3ddc13f63e2e1dcd538685968d48479f2909 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 17:38:28 +0000 Subject: [PATCH 03/96] Some tweaks --- .../build-and-install.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md index 502f43755..3b9fdc2cd 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md @@ -1,5 +1,5 @@ --- -title: Install Snort3 and the required dependencies +title: Install Snort 3 and dependencies weight: 2 ### FIXED, DO NOT MODIFY @@ -10,15 +10,15 @@ Snort is an Open Source Intrusion Prevention System (IPS). Snort uses a series o Multithreading in Snort 3 refers to the ability to associate multiple threads with a single Snort instance enabling the concurrent processing of multiple packet files. This optimization frees up additional memory for further packet processing. -In order to enable multithreading in Snort3, specify the quantity of threads designated for processing network traffic using either the '--max-packet-threads' or '-z' option. +In order to enable multithreading in Snort 3, specify the quantity of threads designated for processing network traffic using either the '--max-packet-threads' or '-z' option. {{%notice Note%}} - The instructions provided have been tested on AWS EC2 Graviton4 instance, based on Neoverse V2. The examples are easiest to use if you have at least 16 cores in the system. + The instructions provided have been tested on AWS EC2 Graviton4 instance, based on Arm Neoverse V2. The examples work best if you have at least 16 cores in your system. {{%/notice%}} -## Compile and build Snort3 +## Compile and build Snort 3 -To install Snort3, use a text editor to save the script below on your Arm server in a file named `install-snort.sh`. +To install Snort 3, use a text editor to save the script below on your Arm server in a file named `install-snort.sh`. ``` bash @@ -193,21 +193,21 @@ echo 'make sure to source ~/.bashrc or set LD_LIBRARY_PATH using:"' echo ' export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"' ``` -The script takes 2 arguments: -- the directory used to build Snort3 and its dependencies -- the number of processors to use for the build +The script takes two arguments: +* The directory used to build Snort 3 and its dependencies. +* The number of processors to use for the build. -To build in a new directory named `build` with the number of processors in your system, run the script: +To create a new directory named `build` with the number of processors in your system listed, run the script: ```bash bash ./install-snort.sh build `nproc` ``` -You don't need to run the script as `root` but it assumes you are on Ubuntu 20.04 or 22.04 and have sudo permission. +You do not need to run the script as `root`, but it assumes you are on Ubuntu 20.04 or 22.04 and have sudo permission. -When the build completes you have the snort3 directory with all compiled software, and the `snort` executable is located in `/usr/local/bin`. 
+When the build completes, you will have the snort 3 directory with all compiled software, and the `snort` executable is located in `/usr/local/bin`. -To verify the installation is complete, run the command below and see the version printed: +To verify the installation is complete, run the command below and observe the version printed: ```bash { output_lines = "2-20" } snort -V @@ -228,6 +228,6 @@ To verify the installation is complete, run the command below and see the versio ``` -Don't delete the `build` directory as it will be used in the next step. +Do not delete the `build` directory as you will use it in the next step. -Proceed to learn how to test Snort3 multithreading. \ No newline at end of file +Now you can move on to learn how to test Snort 3 multithreading. From 5e971f99de42aa971dd33658694f1c05c1d2a55e Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 17:46:53 +0000 Subject: [PATCH 04/96] Update usecase.md --- .../snort3-multithreading/usecase.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md index 8d7507138..0ec4e83e4 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md @@ -1,17 +1,17 @@ --- -title: Test Snort3 multithreading +title: Test Snort 3 multithreading weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- -Before testing multithreading performance, perform the following steps to configure your system: +Before testing the Snort 3 multi-threading, configure your system by following these steps: -1. Configure Grub settings -2. Set up the Snort3 rule set -3. Download the PCAP files -4. Adjust Lua configurations +1. Configure Grub settings. +2. Set up the Snort3 rule set. +3. Download the PCAP files. +4. Adjust Lua configurations. ## Configure Grub settings @@ -39,8 +39,7 @@ After making this change, execute update-grub to apply the configuration: sudo update-grub ``` -Reboot the system to activate the settings. - +Reboot the system to activate the settings: ```bash sudo reboot ``` @@ -71,9 +70,9 @@ The output shows the isolated processors: 0-9 ``` -## Set up the Snort3 rule set +## Set up the Snort 3 rule set -Download the rule set from https://www.snort.org/ and extract it into your working directory. You should start in the `build` directory you used to build snort. +Download the rule set from https://www.snort.org/ and extract it into your working directory. Start in the `build` directory you used to build snort. ```bash cd $HOME/build @@ -95,7 +94,7 @@ Copy the `lua` folder from the `snort3` source directory into the rules director cp -r snort3/lua/ Test/snortrules/ ``` -## Download the packet capture (PCAP) files +## Download the Packet Capture (PCAP) files You can use any PCAP files that are relevant to your test scenario. 
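Once you have a capture file in place, you can sanity-check it before handing it to Snort. This is only an illustration and not part of the original guide; it assumes the capture file name used later in this section:

```bash
# Install tcpdump if it is not already available (assumes Ubuntu).
sudo apt-get install -y tcpdump

# Read the first five packets from the capture file without touching any network interface.
tcpdump -r maccdc2010_00000_20100310205651.pcap -c 5
```

A valid capture prints one summary line per packet; a corrupt or truncated download fails immediately with an error.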
@@ -115,10 +114,11 @@ cp maccdc2010_00000_20100310205651.pcap Test/Pcap/ ## Adjust Lua configurations There are two modifications to the Lau configurations: -- Pin each Snort thread to a unique core, ensuring that the cores match those isolated in the GRUB configuration -- Enable the desired ruleset and enabling profiling -### Pin snort threads to unique cpu core +* Pin each Snort thread to a unique core, ensuring that the cores match those isolated in the GRUB configuration. +* Enable the desired ruleset and enabling profiling. + +### Pin Snort Threads to Unique CPU Core Navigate to the `Test/snortrules/lua` directory. @@ -290,7 +290,7 @@ The output is similar to: 22:52:28 9 97.50 0.00 2.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ``` -## Test Snort3 multi-threading to process single pcap file +## Test Snort 3 multi-threading to process a single PCAP file The example usage demonstrates how multithreading increases the number of packets processed per second. @@ -307,4 +307,4 @@ Performance results | 1 | 940960 | 91.777964 | | 10 | 9406134 | 9.181182 | -The results demonstrate how increasing the thread count by ten times results in a ten times increase in packets processed per second, while reducing the execution time by ten times. \ No newline at end of file +The results demonstrate how increasing the thread count by ten times results in a ten times increase in packets processed per second, while reducing the execution time by ten times. From 28748febb2f4403bee99d32c4160adfde818df62 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 17:53:05 +0000 Subject: [PATCH 05/96] Update _review.md --- .../snort3-multithreading/_review.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md index c439b6749..571a8df09 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md @@ -2,11 +2,11 @@ review: - questions: question: > - Which of the following is a key benefit of Snort3's multithreading support? + Which of the following is a key benefit of Snort 3's multithreading support? answers: - It allows Snort to detect encrypted traffic. - - It improves packet processing performance - - It enables Snort to be run on legacy hardware + - It improves packet processing performance. + - It enables Snort to be run on legacy hardware. - It support multiple rule sets at the same time. correct_answer: 2 explanation: > @@ -14,27 +14,27 @@ review: - questions: question: > - Which parameter is used to enable multithreading in Snort3? + Which parameter is used to enable multithreading in Snort 3? answers: - - --max-packet-threads - - --enable-threads - - --enable-multithreading - - --packet-loop + - --max-packet-threads. + - --enable-threads. + - --enable-multithreading. + - --packet-loop. correct_answer: 1 explanation: > --max-packet-threads parameter is used to enable and configure multithreading. - questions: question: > - In Snort 3, which DAQ (Data Acquisition) module is used to read capture files for packet processing? + In Snort 3, which Data Acquisition (DAQ) module is used to read capture files for packet processing? answers: - - afpacket - - vpp - - dump - - pcap + - afpacket. + - vpp. + - dump. + - pcap. 
correct_answer: 3 explanation: > - The dump module in Snort3 is used to read capture files (such as .pcap or .pcapng files) for offline packet analysis. + The dump module in Snort 3 is used to read capture files (such as .pcap or .pcapng files) for offline packet analysis. From 19a7b41330650d7ac5a34fe52a047fa34476e0c5 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 17:57:22 +0000 Subject: [PATCH 06/96] Update usecase.md --- .../snort3-multithreading/usecase.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md index 0ec4e83e4..45d5baab3 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md @@ -9,7 +9,7 @@ layout: learningpathall Before testing the Snort 3 multi-threading, configure your system by following these steps: 1. Configure Grub settings. -2. Set up the Snort3 rule set. +2. Set up the Snort 3 rule set. 3. Download the PCAP files. 4. Adjust Lua configurations. @@ -72,7 +72,7 @@ The output shows the isolated processors: ## Set up the Snort 3 rule set -Download the rule set from https://www.snort.org/ and extract it into your working directory. Start in the `build` directory you used to build snort. +Download the rule set from https://www.snort.org/ and extract it into your working directory. Start in the `build` directory you used to build Snort. ```bash cd $HOME/build @@ -176,9 +176,9 @@ Continue to edit `snort.lua` and comment out the `profiler` and `latency` lines ### Modify the IPS policy -Snort3 allows you to fine-tune setups with the `--tweaks` parameter. This feature allows you to use one of Snort's policy files to enhance the detection engine for improved performance or increased security. +Snort 3 allows you to fine-tune setups with the `--tweaks` parameter. This feature allows you to use one of Snort's policy files to enhance the detection engine for improved performance or increased security. -Snort3 includes four preset policy files: max_detect, security, balanced, and connectivity. +Snort 3 includes four preset policy files: max_detect, security, balanced, and connectivity. The max_detect policy favors maximum security, whereas the connectivity policy focuses on performance and uptime, which may come at the expense of security. @@ -186,7 +186,7 @@ The max_detect policy favors maximum security, whereas the connectivity policy f Snort supports DAQ modules which serves as an abstraction layer for interfacing with data source such as network interface. -To see list of DAQ modules supported by snort use `--daq-list` command. +To see list of DAQ modules supported by Snort use `--daq-list` command. Return to the `build` directory: @@ -250,9 +250,9 @@ trace(v1): inline unpriv wrapper For testing, you can use `--daq dump` to analyze PCAP files. -## Spawn Snort3 process with multithreading +## Spawn Snort 3 process with multithreading -To run Snort3 with multithreading start from the `Test` directory. +To run Snort 3 with multithreading start from the `Test` directory. 
```bash cd $HOME/build/Test From b640564baefc337e8eed7e70eb1a107329a942ee Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 18:00:26 +0000 Subject: [PATCH 07/96] Update build-and-install.md --- .../snort3-multithreading/build-and-install.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md index 3b9fdc2cd..737396f49 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md @@ -205,7 +205,7 @@ bash ./install-snort.sh build `nproc` You do not need to run the script as `root`, but it assumes you are on Ubuntu 20.04 or 22.04 and have sudo permission. -When the build completes, you will have the snort 3 directory with all compiled software, and the `snort` executable is located in `/usr/local/bin`. +When the build completes, you will have the Snort 3 directory with all compiled software, and the `snort` executable is located in `/usr/local/bin`. To verify the installation is complete, run the command below and observe the version printed: From 7397ba967b27ae985e4d3fa4f90721da1eb19063 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 22:03:57 +0000 Subject: [PATCH 08/96] Added missing definite article in the title. --- .../snort3-multithreading/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md index 75422d95e..d0d007d26 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md @@ -1,5 +1,5 @@ --- -title: Optimize performance of Snort 3 using multithreading +title: Optimize the performance of Snort 3 using multithreading minutes_to_complete: 45 @@ -11,7 +11,7 @@ learning_objectives: - Use multithreading to process capture files and measure performance. prerequisites: - - An Arm-based instance from a cloud provider or an Arm server running Ubuntu 20.04 or 22.04. + - An Arm-based instance from a cloud provider, or an Arm server running Ubuntu 20.04 or 22.04. - A basic understanding of Snort's operation and configuration. 
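The prerequisites above can be verified directly on the target machine. A minimal check, shown here only as a convenience, is:

```bash
# Confirm the CPU architecture; an Arm-based instance reports aarch64.
uname -m

# Confirm the Ubuntu release; 20.04 or 22.04 is expected.
grep VERSION_ID /etc/os-release

# Count the available cores; the multithreading examples work best with 16 or more.
nproc
```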
From fde91ddc15fd8fd27ee31e4ee0a1ece33e553902 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 22:06:14 +0000 Subject: [PATCH 09/96] Update _next-steps.md --- .../snort3-multithreading/_next-steps.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md index 5d7e1d691..311f62435 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md @@ -1,11 +1,11 @@ --- -next_step_guidance: To continue learning about enabling hyperscan on arm,please refer to the learning path provided below. +next_step_guidance: You can now try the Learning Path about enabling hyperscan on arm. See the link below. recommended_path: /learning-paths/servers-and-cloud-computing/vectorscan/ further_reading: - resource: - title: Snort3 Documentation + title: Snort 3 Documentation link: https://docs.snort.org/start/ type: documentation - resource: From b46c7285b585774681019181f74f3d5869db37e7 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 22:07:59 +0000 Subject: [PATCH 10/96] Update _review.md --- .../snort3-multithreading/_review.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md index 571a8df09..9dcd37ae6 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_review.md @@ -7,7 +7,7 @@ review: - It allows Snort to detect encrypted traffic. - It improves packet processing performance. - It enables Snort to be run on legacy hardware. - - It support multiple rule sets at the same time. + - It supports multiple rule sets at the same time. correct_answer: 2 explanation: > It improves packet processing performance by parallelizing tasks. @@ -34,7 +34,7 @@ review: - pcap. correct_answer: 3 explanation: > - The dump module in Snort 3 is used to read capture files (such as .pcap or .pcapng files) for offline packet analysis. + The dump module in Snort 3 is used to read capture files, such as .pcap or .pcapng files, for offline packet analysis. 
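As a concrete illustration of that last answer, a minimal offline run using the dump DAQ module can be sketched as follows. The configuration path, DAQ directory, and `Pcap` folder are the ones set up elsewhere in this Learning Path, and every option shown appears in the full multithreaded command used later; this stripped-down form simply omits the thread count and tuning flags:

```bash
# Process the captures in the Pcap directory offline with the dump DAQ module.
snort -c ./snortrules/lua/snort.lua \
  --daq dump --daq-dir /usr/local/lib/daq --daq-var output=none \
  --pcap-dir Pcap -Q
```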
From 70b073f3115273255688a1719704f90a7c2166a0 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 22:14:54 +0000 Subject: [PATCH 11/96] Update build-and-install.md --- .../snort3-multithreading/build-and-install.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md index 737396f49..c05024491 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md @@ -1,5 +1,5 @@ --- -title: Install Snort 3 and dependencies +title: Install Snort 3 and its Dependencies weight: 2 ### FIXED, DO NOT MODIFY @@ -8,7 +8,7 @@ layout: learningpathall Snort is an Open Source Intrusion Prevention System (IPS). Snort uses a series of rules to define malicious network activity. If malicious activity is found, Snort generates alerts. -Multithreading in Snort 3 refers to the ability to associate multiple threads with a single Snort instance enabling the concurrent processing of multiple packet files. This optimization frees up additional memory for further packet processing. +Multithreading in Snort 3 refers to the ability to associate multiple threads with a single Snort instance, which enables the concurrent processing of multiple packet files. This optimization frees up additional memory for further packet processing. In order to enable multithreading in Snort 3, specify the quantity of threads designated for processing network traffic using either the '--max-packet-threads' or '-z' option. @@ -18,7 +18,7 @@ In order to enable multithreading in Snort 3, specify the quantity of threads de ## Compile and build Snort 3 -To install Snort 3, use a text editor to save the script below on your Arm server in a file named `install-snort.sh`. +To install Snort 3, use a text editor to copy the text below and save the script on your Arm server in a file named `install-snort.sh`. ``` bash From 88b8060e87f69dd50e876378069d66d263bda938 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Wed, 18 Dec 2024 22:29:24 +0000 Subject: [PATCH 12/96] Update usecase.md --- .../snort3-multithreading/usecase.md | 37 +++++++++++-------- 1 file changed, 21 insertions(+), 16 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md index 45d5baab3..97c4b74ff 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md @@ -6,7 +6,7 @@ weight: 3 layout: learningpathall --- -Before testing the Snort 3 multi-threading, configure your system by following these steps: +Before testing the Snort 3 multithreading, configure your system by following these steps: 1. Configure Grub settings. 2. Set up the Snort 3 rule set. @@ -19,7 +19,8 @@ To enable Transparent HugePages (THP) and configure CPU isolation and affinity, For the total available online CPUs ranging from 0 to 95, with CPUs 0 to 9 pinned to Snort, the grubfile configuration is shown below. -Feel free to modify the CPU numbers as needed. 
+You can modify the CPU numbers as needed: + ```bash CMDLINE="cma=128" HUGEPAGES="default_hugepagesz=1G hugepagesz=1G hugepages=300" @@ -98,12 +99,11 @@ cp -r snort3/lua/ Test/snortrules/ You can use any PCAP files that are relevant to your test scenario. -One place to get PCAP files is: -https://www.netresec.com/?page=MACCDC +You can obtain PCAP files at: https://www.netresec.com/?page=MACCDC. Visit https://share.netresec.com/s/wC4mqF2HNso4Ten and download a PCAP file. -Copy the file to your working directory and extract it, adjust the file name as needed if you downloaded a different PCAP file. +Copy the file to your working directory, and extract it. If you downloaded a different PCAP file, you might want to change the file name. ```bash gunzip maccdc2010_00000_20100310205651.pcap.gz @@ -113,7 +113,7 @@ cp maccdc2010_00000_20100310205651.pcap Test/Pcap/ ## Adjust Lua configurations -There are two modifications to the Lau configurations: +Now make two modifications to the Lau configurations: * Pin each Snort thread to a unique core, ensuring that the cores match those isolated in the GRUB configuration. * Enable the desired ruleset and enabling profiling. @@ -126,7 +126,7 @@ Navigate to the `Test/snortrules/lua` directory. cd Test/snortrules/lua ```` -Use an editor to create a file named `common.lua` with the contents below. +Use an editor to create a file named `common.lua`, and copy-and-paste in the contents below: ```bash ------------------------------------------------------------------------------- @@ -151,7 +151,7 @@ search_engine = { } snort_whitelist_append("threads") ``` -Include the above file in `snort.lua` by editing the file and adding the line below to the end of the file. +Edit `snort.lua` to include the contents above, and then add in the line below to the end of the file: ``` bash include('common.lua') @@ -178,15 +178,20 @@ Continue to edit `snort.lua` and comment out the `profiler` and `latency` lines Snort 3 allows you to fine-tune setups with the `--tweaks` parameter. This feature allows you to use one of Snort's policy files to enhance the detection engine for improved performance or increased security. -Snort 3 includes four preset policy files: max_detect, security, balanced, and connectivity. +Snort 3 includes four preset policy files: + +* Max_detect. +* Security. +* Balanced. +* Connectivity. -The max_detect policy favors maximum security, whereas the connectivity policy focuses on performance and uptime, which may come at the expense of security. +The max_detect policy focuses on maximum security, and the connectivity policy focuses on performance and uptime, which might come at the expense of security. ### Specify the data acquisition module Snort supports DAQ modules which serves as an abstraction layer for interfacing with data source such as network interface. -To see list of DAQ modules supported by Snort use `--daq-list` command. +To see list of DAQ modules supported by Snort use the `--daq-list` command. Return to the `build` directory: @@ -200,7 +205,7 @@ Run using the command: snort --daq-dir ./snort3/dependencies/libdaq/install/lib/daq --daq-list ``` -The output is: +The output should look like: ```output Available DAQ modules: @@ -248,17 +253,17 @@ trace(v1): inline unpriv wrapper file - Filename to write text traces to (default: inline-out.txt) ``` -For testing, you can use `--daq dump` to analyze PCAP files. +For testing, you can use `--daq dump` to analyze Pthe CAP files. 
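For example, a single-threaded baseline over one capture can be written as the sketch below. It reuses the rule path and capture file from the earlier steps and is intended only as a reference point for the multithreaded run that follows; depending on your rule set you may also need the `--lua detection.allow_missing_so_rules=true` option used in the full command below:

```bash
# Baseline: one packet thread, offline analysis of a single capture via the dump DAQ.
snort -c ./snortrules/lua/snort.lua \
  --daq dump --daq-dir /usr/local/lib/daq --daq-var output=none \
  --pcap-filter maccdc2010_00000_20100310205651.pcap --pcap-dir Pcap \
  --max-packet-threads 1 -Q
```

Recording the packets-per-second figure from this run gives you the comparison point for the multithreaded results in the next section.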
## Spawn Snort 3 process with multithreading -To run Snort 3 with multithreading start from the `Test` directory. +To run Snort 3 with multithreading, start from the `Test` directory. ```bash cd $HOME/build/Test ``` -The following example shows how to use multiple Snort threads to analyze PCAP files. +The following example shows you how to use multiple Snort threads to analyze PCAP files. ``` bash MPSE=hyperscan POLICY=./snortrules/lua/snort.lua TCMALLOC_MEMFS_MALLOC_PATH=/dev/hugepages/test snort -c ./snortrules/lua/snort.lua --lua detection.allow_missing_so_rules=true --pcap-filter maccdc2010_00000_20100310205651.pcap --pcap-loop 10 --snaplen 0 --max-packet-threads 10 --daq dump --daq-dir /usr/local/lib/daq --daq-var output=none -H --pcap-dir Pcap -Q --warn-conf-strict --tweaks security @@ -290,7 +295,7 @@ The output is similar to: 22:52:28 9 97.50 0.00 2.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ``` -## Test Snort 3 multi-threading to process a single PCAP file +## Test Snort 3 multithreading to process a single PCAP file The example usage demonstrates how multithreading increases the number of packets processed per second. From e1329726e1837c5d69bf3eb620629283e6a4d4e9 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Thu, 19 Dec 2024 05:17:00 +0000 Subject: [PATCH 13/96] Improvements to Snort 3 LP --- .../build-and-install.md | 31 ++++++++++------ .../snort3-multithreading/usecase.md | 35 ++++++++++--------- 2 files changed, 39 insertions(+), 27 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md index c05024491..8224ab15a 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md @@ -1,24 +1,31 @@ --- -title: Install Snort 3 and its Dependencies +title: Install Snort 3 and Dependencies weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- -Snort is an Open Source Intrusion Prevention System (IPS). Snort uses a series of rules to define malicious network activity. If malicious activity is found, Snort generates alerts. +#### Snort 3 -Multithreading in Snort 3 refers to the ability to associate multiple threads with a single Snort instance, which enables the concurrent processing of multiple packet files. This optimization frees up additional memory for further packet processing. +Snort is an Open Source Intrusion Prevention System (IPS). Snort uses a series of rules to define malicious network activity. If malicious activity is detected, Snort generates alerts. -In order to enable multithreading in Snort 3, specify the quantity of threads designated for processing network traffic using either the '--max-packet-threads' or '-z' option. +Snort 3 benefits from multithreading, which means that it enables the concurrent processing of multiple packet processing threads with a single Snort instance. This optimization frees up additional memory for further packet processing. + +#### Enable multithreading + +In order to enable multithreading in Snort 3, specify the quantity of threads designated for processing network traffic using either of these two options: + +* `--max-packet-threads` +* `-z` {{%notice Note%}} - The instructions provided have been tested on AWS EC2 Graviton4 instance, based on Arm Neoverse V2. The examples work best if you have at least 16 cores in your system. 
+ These instructions have been tested on an AWS EC2 Graviton4 instance, based on Arm Neoverse V2. The examples work best if you have at least 16 cores in your system. {{%/notice%}} ## Compile and build Snort 3 -To install Snort 3, use a text editor to copy the text below and save the script on your Arm server in a file named `install-snort.sh`. +To install Snort 3, use a text editor to copy-and-paste the text below and save the script on your Arm server in a file named `install-snort.sh`. ``` bash @@ -197,17 +204,17 @@ The script takes two arguments: * The directory used to build Snort 3 and its dependencies. * The number of processors to use for the build. -To create a new directory named `build` with the number of processors in your system listed, run the script: +To create a new directory named `build` which lists the number of processors in your system, run the script: ```bash bash ./install-snort.sh build `nproc` ``` -You do not need to run the script as `root`, but it assumes you are on Ubuntu 20.04 or 22.04 and have sudo permission. +You do not need to run the script as `root`, but you do need to be running Ubuntu 20.04 or 22.04, and have sudo permission. -When the build completes, you will have the Snort 3 directory with all compiled software, and the `snort` executable is located in `/usr/local/bin`. +When the build completes, you will have the Snort 3 directory with all compiled software, and the `snort` executable will be located in `/usr/local/bin`. -To verify the installation is complete, run the command below and observe the version printed: +To verify completed installation, run the command below and look at the version that it prints to screen: ```bash { output_lines = "2-20" } snort -V @@ -228,6 +235,8 @@ To verify the installation is complete, run the command below and observe the ve ``` +{{% notice Note %}} Do not delete the `build` directory as you will use it in the next step. +{{% /notice %}} -Now you can move on to learn how to test Snort 3 multithreading. +Now you can move on to learn about how to test Snort 3 multithreading. diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md index 97c4b74ff..0f3791dfb 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md @@ -5,21 +5,23 @@ weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- +## System Configuration Before testing the Snort 3 multithreading, configure your system by following these steps: -1. Configure Grub settings. -2. Set up the Snort 3 rule set. -3. Download the PCAP files. -4. Adjust Lua configurations. +* Configure Grub settings. +* Set up the Snort 3 rule set. +* Download the PCAP files. +* Adjust Lua configurations. -## Configure Grub settings +### Configure Grub settings -To enable Transparent HugePages (THP) and configure CPU isolation and affinity, append the following line to the /etc/default/grub file: +To enable Transparent HugePages (THP) and configure CPU isolation and affinity, append the following line to the `/etc/default/grub file`, modifying the CPU numbers as required: +{{% notice Note %}} For the total available online CPUs ranging from 0 to 95, with CPUs 0 to 9 pinned to Snort, the grubfile configuration is shown below. 
+{{% /notice %}} -You can modify the CPU numbers as needed: ```bash CMDLINE="cma=128" @@ -34,13 +36,14 @@ THP="transparent_hugepage=madvise" GRUB_CMDLINE_LINUX="${CMDLINE} ${HUGEPAGES} ${ISOLCPUS} ${IRQAFFINITY} ${NOHZ} ${RCU} ${MAXCPUS} ${IOMMU} ${THP}" ``` -After making this change, execute update-grub to apply the configuration: +After making this change, execute `update-grub` to apply the configuration: ```bash sudo update-grub ``` Reboot the system to activate the settings: + ```bash sudo reboot ``` @@ -51,9 +54,7 @@ Confirm the new command line was used for the last boot: cat /proc/cmdline ``` -The output shows the additions to the kernel command line. - -It is similar to: +The output shows the additions to the kernel command line, and will look something like this: ```output BOOT_IMAGE=/boot/vmlinuz-6.5.0-1020-aws root=PARTUUID=2ca5cb77-b92b-4112-a3e0-eb8bd3cee2a2 ro cma=128 default_hugepagesz=1G hugepagesz=1G hugepages=300 isolcpus=nohz,domain,0-9 irqaffinity=10-95 nohz_full=0-9 rcu_nocbs=0-9 iommu.passthrough=1 transparent_hugepage=madvise console=tty1 console=ttyS0 nvme_core.io_timeout=4294967295 panic=-1 @@ -73,7 +74,9 @@ The output shows the isolated processors: ## Set up the Snort 3 rule set -Download the rule set from https://www.snort.org/ and extract it into your working directory. Start in the `build` directory you used to build Snort. +Download the rule set from https://www.snort.org/ and extract it into your working directory. + +Start in the `build` directory you used to build Snort: ```bash cd $HOME/build @@ -103,7 +106,7 @@ You can obtain PCAP files at: https://www.netresec.com/?page=MACCDC. Visit https://share.netresec.com/s/wC4mqF2HNso4Ten and download a PCAP file. -Copy the file to your working directory, and extract it. If you downloaded a different PCAP file, you might want to change the file name. +Copy the file to your working directory, and extract it. If you downloaded a different PCAP file, you can change the file name. ```bash gunzip maccdc2010_00000_20100310205651.pcap.gz @@ -205,7 +208,7 @@ Run using the command: snort --daq-dir ./snort3/dependencies/libdaq/install/lib/daq --daq-list ``` -The output should look like: +The output should look like this: ```output Available DAQ modules: @@ -269,7 +272,7 @@ The following example shows you how to use multiple Snort threads to analyze PCA MPSE=hyperscan POLICY=./snortrules/lua/snort.lua TCMALLOC_MEMFS_MALLOC_PATH=/dev/hugepages/test snort -c ./snortrules/lua/snort.lua --lua detection.allow_missing_so_rules=true --pcap-filter maccdc2010_00000_20100310205651.pcap --pcap-loop 10 --snaplen 0 --max-packet-threads 10 --daq dump --daq-dir /usr/local/lib/daq --daq-var output=none -H --pcap-dir Pcap -Q --warn-conf-strict --tweaks security ``` -Use `--pcap-loop` to loop PCAP files a number of time, 10 in this example. +Use `--pcap-loop` to loop PCAP files a number of times, 10 in this example. Use `--max-packet-threads` to specify the number of threads, 10 in this example. @@ -297,7 +300,7 @@ The output is similar to: ## Test Snort 3 multithreading to process a single PCAP file -The example usage demonstrates how multithreading increases the number of packets processed per second. +The example demonstrates how multithreading increases the number of packets processed per second. PCAP File Description From acfdafe9a44fdeed4aa67c78a5e78088cf2936d8 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Thu, 19 Dec 2024 05:37:06 +0000 Subject: [PATCH 14/96] Further improvements. 
--- .../snort3-multithreading/_index.md | 2 +- .../build-and-install.md | 6 ++--- .../snort3-multithreading/usecase.md | 22 +++++++++---------- 3 files changed, 14 insertions(+), 16 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md index d0d007d26..e484d97bf 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_index.md @@ -6,7 +6,7 @@ minutes_to_complete: 45 who_is_this_for: This Learning Path is for software developers familiar with Snort who want to optimize performance by leveraging the benefits of multithreading. learning_objectives: - - Install Snort and all of its dependencies. + - Install Snort and dependencies. - Configure Snort Lua files to enable multithreading. - Use multithreading to process capture files and measure performance. diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md index 8224ab15a..6e611b64a 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md @@ -6,7 +6,7 @@ weight: 2 layout: learningpathall --- -#### Snort 3 +## Snort 3 Snort is an Open Source Intrusion Prevention System (IPS). Snort uses a series of rules to define malicious network activity. If malicious activity is detected, Snort generates alerts. @@ -23,7 +23,7 @@ In order to enable multithreading in Snort 3, specify the quantity of threads de These instructions have been tested on an AWS EC2 Graviton4 instance, based on Arm Neoverse V2. The examples work best if you have at least 16 cores in your system. {{%/notice%}} -## Compile and build Snort 3 +### How do I compile and build Snort 3? To install Snort 3, use a text editor to copy-and-paste the text below and save the script on your Arm server in a file named `install-snort.sh`. @@ -47,7 +47,7 @@ declare -a PACKAGE_URLS=( "https://github.com/gperftools/gperftools/releases/download/gperftools-2.13/gperftools-2.13.tar.gz" ) -downlaodPackages() +downloadPackages() { for url in "${PACKAGE_URLS[@]}"; do # Extract the file name from the URL diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md index 0f3791dfb..605f71f5d 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md @@ -14,7 +14,7 @@ Before testing the Snort 3 multithreading, configure your system by following th * Download the PCAP files. * Adjust Lua configurations. -### Configure Grub settings +#### Configure Grub settings To enable Transparent HugePages (THP) and configure CPU isolation and affinity, append the following line to the `/etc/default/grub file`, modifying the CPU numbers as required: @@ -72,7 +72,7 @@ The output shows the isolated processors: 0-9 ``` -## Set up the Snort 3 rule set +#### Set up the Snort 3 rule set Download the rule set from https://www.snort.org/ and extract it into your working directory. 
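If you are not sure the archive downloaded cleanly, you can list its contents before extracting it. The snapshot file name below is only a placeholder, so substitute the name of the file you actually downloaded:

```bash
# List the first entries in the rules archive without extracting anything.
tar -tzf snortrules-snapshot-XXXX.tar.gz | head
```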
@@ -98,7 +98,7 @@ Copy the `lua` folder from the `snort3` source directory into the rules director cp -r snort3/lua/ Test/snortrules/ ``` -## Download the Packet Capture (PCAP) files +#### Download the Packet Capture (PCAP) files You can use any PCAP files that are relevant to your test scenario. @@ -114,14 +114,14 @@ mkdir Test/Pcap cp maccdc2010_00000_20100310205651.pcap Test/Pcap/ ``` -## Adjust Lua configurations +#### Adjust Lua configurations Now make two modifications to the Lau configurations: * Pin each Snort thread to a unique core, ensuring that the cores match those isolated in the GRUB configuration. * Enable the desired ruleset and enabling profiling. -### Pin Snort Threads to Unique CPU Core +#### Pin Snort Threads to Unique CPU Core Navigate to the `Test/snortrules/lua` directory. @@ -160,7 +160,7 @@ Edit `snort.lua` to include the contents above, and then add in the line below t include('common.lua') ``` -### Modify the snort.lua file to enable rules and profiling +#### Modify the snort.lua file to enable rules and profiling Use an editor to modify the `snort.lua` file. @@ -175,9 +175,7 @@ rules = [[ Continue to edit `snort.lua` and comment out the `profiler` and `latency` lines to enable profiling and packet statistics. -## Review the Snort parameters - -### Modify the IPS policy +#### Review the Snort parameters: modify the IPS policy Snort 3 allows you to fine-tune setups with the `--tweaks` parameter. This feature allows you to use one of Snort's policy files to enhance the detection engine for improved performance or increased security. @@ -190,7 +188,7 @@ Snort 3 includes four preset policy files: The max_detect policy focuses on maximum security, and the connectivity policy focuses on performance and uptime, which might come at the expense of security. -### Specify the data acquisition module +#### Specify the data acquisition module Snort supports DAQ modules which serves as an abstraction layer for interfacing with data source such as network interface. @@ -258,7 +256,7 @@ trace(v1): inline unpriv wrapper For testing, you can use `--daq dump` to analyze Pthe CAP files. -## Spawn Snort 3 process with multithreading +#### Spawn Snort 3 process with multithreading To run Snort 3 with multithreading, start from the `Test` directory. @@ -298,7 +296,7 @@ The output is similar to: 22:52:28 9 97.50 0.00 2.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ``` -## Test Snort 3 multithreading to process a single PCAP file +#### Test Snort 3 multithreading to process a single PCAP file The example demonstrates how multithreading increases the number of packets processed per second. From fda161427af24549482563ee4a815294113eaf8d Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Thu, 19 Dec 2024 05:50:30 +0000 Subject: [PATCH 15/96] Fixing Next Steps --- .../snort3-multithreading/_next-steps.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md index 311f62435..2e7b32886 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md @@ -1,11 +1,11 @@ --- -next_step_guidance: You can now try the Learning Path about enabling hyperscan on arm. See the link below. 
+next_step_guidance: To continue learning about enabling hyperscan on arm,please refer to the learning path provided below. recommended_path: /learning-paths/servers-and-cloud-computing/vectorscan/ further_reading: - resource: - title: Snort 3 Documentation + title: Snort3 Documentation link: https://docs.snort.org/start/ type: documentation - resource: @@ -20,3 +20,5 @@ weight: 21 # set to always be larger than the content in this p title: "Next Steps" # Always the same layout: "learningpathall" # All files under learning paths have this same wrapper --- + + From 58de3b399297c6b0aea5338da418ed1662f72549 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Thu, 19 Dec 2024 05:58:03 +0000 Subject: [PATCH 16/96] Final fix of Next Steps. --- .../snort3-multithreading/_next-steps.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md index 2e7b32886..52ddbd993 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/_next-steps.md @@ -1,5 +1,5 @@ --- -next_step_guidance: To continue learning about enabling hyperscan on arm,please refer to the learning path provided below. +next_step_guidance: To continue learning, try this next Learning Path about enabling hyperscan on Arm. recommended_path: /learning-paths/servers-and-cloud-computing/vectorscan/ From c7eb602b24b73488b69d7df03f8b3b7e4584f402 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Thu, 19 Dec 2024 14:25:30 +0000 Subject: [PATCH 17/96] Correct download typo in code. --- .../snort3-multithreading/build-and-install.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md index 6e611b64a..c63057445 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/build-and-install.md @@ -96,7 +96,7 @@ installPackages() sudo apt-get install -y $LIST_OF_APPS # required to get optimized result from Snort3 - downlaodPackages + downloadPackages mkdir -p ${ROOT_DIR}/snort3 tar -xzf 3.3.5.0.tar.gz --directory ${ROOT_DIR}/snort3 --strip-components=1 echo "@@@@@@@@@@@@@@@@@@ Installing Snort3 Dependencies ... 
@@@@@@@@@@@@@@@@@@@@" From 92e1c3544c7dd76195f1dafb362cbf47c3a2ed33 Mon Sep 17 00:00:00 2001 From: GitHub Actions Stats Bot <> Date: Mon, 23 Dec 2024 01:28:36 +0000 Subject: [PATCH 18/96] automatic update of stats files --- data/stats_current_test_info.yml | 10 ++-- data/stats_weekly_data.yml | 89 ++++++++++++++++++++++++++++++++ 2 files changed, 92 insertions(+), 7 deletions(-) diff --git a/data/stats_current_test_info.yml b/data/stats_current_test_info.yml index ec608c337..bbb19d51e 100644 --- a/data/stats_current_test_info.yml +++ b/data/stats_current_test_info.yml @@ -1,7 +1,7 @@ summary: - content_total: 308 - content_with_all_tests_passing: 32 - content_with_tests_enabled: 34 + content_total: 312 + content_with_all_tests_passing: 31 + content_with_tests_enabled: 33 sw_categories: cross-platform: dynamic-memory-allocator: @@ -117,10 +117,6 @@ sw_categories: readable_title: Measure Machine Learning Inference Performance on Arm servers tests_and_status: - ubuntu:latest: passed - mongodb: - readable_title: Test the performance of MongoDB on Arm servers - tests_and_status: - - mongo:latest: passed mysql_tune: readable_title: Learn how to Tune MySQL tests_and_status: diff --git a/data/stats_weekly_data.yml b/data/stats_weekly_data.yml index a1b3279f9..44d5a4394 100644 --- a/data/stats_weekly_data.yml +++ b/data/stats_weekly_data.yml @@ -4258,3 +4258,92 @@ avg_close_time_hrs: 0 num_issues: 14 percent_closed_vs_total: 0.0 +- a_date: '2024-12-23' + content: + cross-platform: 26 + embedded-systems: 19 + install-guides: 90 + laptops-and-desktops: 34 + microcontrollers: 25 + servers-and-cloud-computing: 93 + smartphones-and-mobile: 25 + total: 312 + contributions: + external: 45 + internal: 362 + github_engagement: + num_forks: 30 + num_prs: 8 + individual_authors: + alaaeddine-chakroun: 2 + alexandros-lamprineas: 1 + annie-tallund: 1 + arm: 3 + arnaud-de-grandmaison: 1 + arnaud-de-grandmaison,-paul-howard,-and-pareena-verma: 1 + basma-el-gaabouri: 1 + ben-clark: 1 + bolt-liu: 2 + brenda-strech: 1 + chaodong-gong,-alex-su,-kieran-hejmadi: 1 + chen-zhang: 1 + christopher-seidl: 7 + cyril-rohr: 1 + daniel-gubay: 1 + daniel-nguyen: 1 + david-spickett: 2 + dawid-borycki: 31 + diego-russo: 1 + diego-russo-and-leandro-nunes: 1 + elham-harirpoush: 2 + florent-lebeau: 5 + "fr\xE9d\xE9ric--lefred--descamps": 2 + gabriel-peterson: 5 + gayathri-narayana-yegna-narayanan: 1 + georgios-mermigkis-and-konstantinos-margaritis,-vectorcamp: 1 + graham-woodward: 1 + iago-calvo-lista,-arm: 1 + james-whitaker,-arm: 1 + jason-andrews: 91 + joe-stech: 1 + johanna-skinnider: 2 + jonathan-davies: 2 + jose-emilio-munoz-lopez,-arm: 1 + julie-gaskin: 4 + julio-suarez: 5 + kasper-mecklenburg: 1 + kieran-hejmadi: 1 + koki-mitsunami: 2 + konstantinos-margaritis: 7 + kristof-beyls: 1 + liliya-wu: 1 + mathias-brossard: 1 + michael-hall: 5 + nikhil-gupta,-pareena-verma,-nobel-chowdary-mandepudi,-ravi-malhotra: 1 + odin-shen: 1 + owen-wu,-arm: 2 + pareena-verma: 34 + pareena-verma,-annie-tallund: 1 + pareena-verma,-jason-andrews,-and-zach-lasiuk: 1 + pareena-verma,-joe-stech,-adnan-alsinan: 1 + paul-howard: 1 + pranay-bakre: 4 + przemyslaw-wirkus: 1 + rin-dobrescu: 1 + roberto-lopez-mendez: 2 + ronan-synnott: 45 + thirdai: 1 + tianyu-li: 1 + tom-pilar: 1 + uma-ramalingam: 1 + varun-chari,-albin-bernhardsson: 1 + varun-chari,-pareena-verma: 1 + visualsilicon: 1 + ying-yu: 1 + ying-yu,-arm: 1 + zach-lasiuk: 1 + zhengjun-xing: 2 + issues: + avg_close_time_hrs: 0 + num_issues: 12 + percent_closed_vs_total: 0.0 From 
88f0719208ea90e3a03162dd8b1d55cc585e9f48 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Mon, 23 Dec 2024 10:23:18 +0000 Subject: [PATCH 19/96] Post-production clean-up. --- .../cca-veraison/_index.md | 6 +++--- .../cca-veraison/attestation-token.md | 12 +++++++----- .../cca-veraison/attestation-verification.md | 4 ++-- .../cca-veraison/cca-attestation.md | 2 +- .../cca-veraison/evaluate-result.md | 4 ++-- .../cca-veraison/how-to-use.md | 8 ++++---- .../cca-veraison/veraison.md | 4 +++- 7 files changed, 22 insertions(+), 18 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/_index.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/_index.md index 5f7f30a4b..01d6058a3 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/_index.md @@ -7,15 +7,15 @@ minutes_to_complete: 30 who_is_this_for: This Learning Path is for developers who would like to learn about attestation in confidential computing, using Arm’s Confidential Computing Architecture (CCA). learning_objectives: - - Describe the importance of attestation for confidential computing. + - Describe the importance of attestation in confidential computing. - Understand what a CCA attestation token is, and describe its format. - Inspect the contents of a CCA attestation token using command-line tools. - Use an attestation verification service to evaluate a CCA attestation token. - - Understand the purpose of the Open source Veraison project. + - Understand the purpose of the Open-Source Veraison project. prerequisites: - - An Arm-based or x86 computer running Ubuntu. You can use a server instance from the cloud service provider of your choice. + - An Arm-based or x86 computer running Ubuntu. You can use a server instance from a cloud service provider of your choice. author_primary: Paul Howard diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-token.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-token.md index 08e625f92..911f3b4c4 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-token.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-token.md @@ -19,7 +19,7 @@ wget https://go.dev/dl/go1.23.3.linux-$(dpkg --print-architecture).tar.gz tar -C /usr/local -xzf go1.23.3.linux-$(dpkg --print-architecture).tar.gz ``` -Export the installation path and add it to your `$PATH environment` variable. +Export the installation path and add it to your `$PATH environment` variable: ```bash export PATH=$PATH:/usr/local/go/bin @@ -56,17 +56,19 @@ Use GitHub’s download button, located on the right of the upper toolbar, to do ![download_raw.png](./download_raw.png) -Place this file in the `$HOME` folder, while retaining the file name. The rest of this Learning Path uses the notation `$HOME/cca_example_token.cbor` as the file path. +Place this file in the `$HOME` folder, while retaining the file name. + +The rest of this Learning Path uses the notation `$HOME/cca_example_token.cbor` as the file path. {{% notice Note %}} You will notice that the filename extension on the example token is `.cbor`, which also denotes the format of the data. CBOR is an acronym for Concise Binary Object Representation. You might already be familiar with JSON (the JavaScript Object Notation). JSON provides a standard way of conveying the nested structures of key-value pairs. 
CBOR is conceptually the same as JSON. The difference is that CBOR is a binary format, rather than a text-based format like JSON. CBOR is designed for compactness and machine-readability, but at the expense of human-readability. You can learn more about CBOR [here](https://cbor.io/). {{% /notice %}} -## Build the EVCLI Tool +## Build the evcli tool -Now that you have downloaded the example CCA attestation token, the next step is to look inside the token and learn about the data that it contains. As the token is a binary file, you will need to use a tool to parse the file and display its contents. The tool that you will use is a command-line tool called `evcli`. Evcli is an acronym for EVidence Command Line Interface, linking with the idea that attestation tokens are used to convey evidence about realms and the platforms on which they are hosted. +Now that you have downloaded the example CCA attestation token, the next step is to look inside the token and learn about the data that it contains. As the token is a binary file, you will need to use a tool to parse the file and display its contents. The tool that you will use is a command-line tool called `evcli`. -The `evcli` tool is part of the Veraison Open-Source project, which was covered in the previous section. +`evcli` is an acronym for EVidence Command Line Interface, which goes back to the idea that attestation tokens are used to convey evidence about realms and the platforms on which they are hosted. The `evcli` tool is part of the Veraison Open-Source project, which was covered in the previous section. Clone the source code using git as follows: diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md index 3b33f116b..6630f24d7 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md @@ -11,7 +11,7 @@ layout: learningpathall Linaro’s verification service is implemented using components from the open source [Veraison](https://github.com/veraison) project. -The URL for reaching this experimental verifier service is http://veraison.test.linaro.org:8080 +The URL for reaching this experimental verifier service is http://veraison.test.linaro.org:8080. To check that you can reach the Linaro attestation verifier service, run the following command: @@ -89,4 +89,4 @@ The `| tr -d \"` is used to remove the double quotes in capturing the output fro {{% /notice %}} The verification service has now evaluated the token and returned a result, which you have saved. -The last two steps in this learning path will be about understanding the result data that came back from the verification service. +The last two steps in this Learning Path are about understanding the resultant data that came back from the verification service. diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/cca-attestation.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/cca-attestation.md index f949a3fd6..0365e3c2f 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/cca-attestation.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/cca-attestation.md @@ -8,7 +8,7 @@ layout: learningpathall ## Overview Confidential computing is about protecting data in use. 
This protection comes from the creation of a security boundary around the computation being performed. This security boundary creates what is called a Trusted Execution Environment (TEE). The data and code that executes within the TEE is protected from the outside world. Different technologies exist for creating this secure boundary. In the case of Arm CCA, the Realm Management Extension (RME), which is part of the Armv9 Architecture for A-profile CPUs, provides the secure boundary. -A secure boundary is necessary for confidential computing, but it is not sufficient alone. There must also be a way to establish trust with the TEE, the target compute environment, that the boundary is protecting. Trusting the environment implicitly does not meet the strict definition of confidential computing. Instead, trust needs to be built by a process that is both explicit and transparent. This process is known as attestation. The role of attestation is described in the Figure 1. +A secure boundary is necessary for confidential computing, but it is not sufficient alone. There must also be a way to establish trust with the TEE, the target compute environment, that the boundary is protecting. Trusting the environment implicitly does not meet the strict definition of confidential computing. Instead, trust needs to be built by a process that is both explicit and transparent. This process is known as attestation. The role of attestation is described in Figure 1. ![Attestation role alt-text#center](./attestation-role.png "Figure 1: The Role of Attestation") diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/evaluate-result.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/evaluate-result.md index af6ad86da..f86747886 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/evaluate-result.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/evaluate-result.md @@ -6,9 +6,9 @@ weight: 7 layout: learningpathall --- -## Build the ARC Tool +## Build the arc tool -You are already familiar with the evcli tool, which you can use to process attestation tokens. There is a very similar tool called `arc`, which you can use to process attestation results. The arc tool is also part of the Veraison project. +You are already familiar with the evcli tool, which you can use to process attestation tokens. There is a very similar tool called `arc`, which you can use to process attestation results. The `arc` tool is also part of the Veraison project. Clone its repository as follows: diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/how-to-use.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/how-to-use.md index fc76620f4..a88cb4e49 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/how-to-use.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/how-to-use.md @@ -8,10 +8,10 @@ layout: learningpathall ## Highlights -Some highlights of using this Learning Path are the following: +These are some highlights of using this Learning Path: -* Practical, hands-on experience with the data formats and workflows associated with attestation, which in turn will help to provide you with a joined-up understanding of the many separate documents and specifications that exist on this topic. +* Code examples that demonstrate some of the common concepts in attestation. 
-* An opportunity to learn about the common concepts in attestation, supported by code examples as a demonstration. +* Practical, hands-on experience with the data formats and workflows associated with attestation, which help to provide you with a joined-up understanding of the many separate documents and specifications that exist on this topic. -* In advance of the practical sections, a chance to read theoretical overviews of both CCA Attestation and Veraison to help you grasp the basic concepts before progressing to the practical sections. \ No newline at end of file +* Theoretical overviews of both CCA Attestation and Veraison to help you grasp the basic concepts before progressing to the practical sections. \ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/veraison.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/veraison.md index 3490c8c1c..22da58e39 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/veraison.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/veraison.md @@ -8,7 +8,9 @@ layout: learningpathall ## Veraison -The tools and services that you will use in this Learning Path derive from an Open-Source project called [Veraison](https://github.com/veraison). Veraison is a project that was founded within Arm but has since been donated to the [Confidential Computing Consortium](https://confidentialcomputing.io/) as an ongoing community project with a growing number of contributors from other organizations. Veraison addresses the verification aspect of attestation. It provides reusable tools and components that can be deployed in the construction of verification services or libraries. +The tools and services that you will use in this Learning Path derive from an Open-Source project called [Veraison](https://github.com/veraison). + +Veraison is a project that was founded within Arm but has since been donated to the [Confidential Computing Consortium](https://confidentialcomputing.io/) as an ongoing community project with a growing number of contributors from other organizations. Veraison addresses the verification aspect of attestation. It provides reusable tools and components that can be deployed in the construction of verification services or libraries. Confidential computing is a new, and fast-growing industry. There are many stakeholders, including: From ab52d1eac092fc22134400d5659412efd9997e7b Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Mon, 23 Dec 2024 11:20:36 +0000 Subject: [PATCH 20/96] Cleaned up reference to prerequisite reading that had a title change, and some other editorial tweaks. --- .../cca-essentials/_index.md | 2 +- .../cca-essentials/cca-essentials.md | 4 +-- .../cca-essentials/example.md | 28 +++++++++---------- 3 files changed, 17 insertions(+), 17 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/cca-essentials/_index.md b/content/learning-paths/servers-and-cloud-computing/cca-essentials/_index.md index 8fec5dc3b..39ff6e389 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-essentials/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-essentials/_index.md @@ -12,7 +12,7 @@ learning_objectives: prerequisites: - An AArch64 or x86_64 computer running Linux. You can use cloud instances, see this list of [Arm cloud service providers](/learning-paths/servers-and-cloud-computing/csp/). 
- - Completion of the [Introduction to CCA Attestation with Veraison](/learning-paths/servers-and-cloud-computing/cca-veraison) Learning Path.
+  - Completion of the [Get Started with CCA Attestation and Veraison](/learning-paths/servers-and-cloud-computing/cca-veraison) Learning Path.
   - Completion of the [Run an application in a Realm using the Arm Confidential Computing Architecture (CCA)](/learning-paths/servers-and-cloud-computing/cca-container/) Learning Path.

 author_primary: Arnaud de Grandmaison, Paul Howard, and Pareena Verma

diff --git a/content/learning-paths/servers-and-cloud-computing/cca-essentials/cca-essentials.md b/content/learning-paths/servers-and-cloud-computing/cca-essentials/cca-essentials.md
index f583ba760..4a4b232e2 100644
--- a/content/learning-paths/servers-and-cloud-computing/cca-essentials/cca-essentials.md
+++ b/content/learning-paths/servers-and-cloud-computing/cca-essentials/cca-essentials.md
@@ -20,7 +20,7 @@ The role of the KBS is to be a repository for encryption keys or other confident

 The workload that runs inside the realm is a client of the KBS. It calls the KBS to request a secret, but the KBS does not return the secret immediately. Instead, it issues an attestation challenge back to the client. The client must respond with evidence in the form of a [CCA attestation token](/learning-paths/servers-and-cloud-computing/cca-container/cca-container/#obtain-a-cca-attestation-token-from-the-virtual-guest-in-a-realm).

-When the KBS receives an attestation token from the realm, it needs to call a verification service that checks the token's cryptographic signature and verifies that it denotes a confidential computing platform. As you saw in the prerequisite reading [Introduction to CCA Attestation with Veraison Learning Path](/learning-paths/servers-and-cloud-computing/cca-veraison), Linaro provides such an attestation verifier for use with pre-silicon CCA platforms. This verifier is built from the open-source [Veraison project](https://github.com/veraison). The KBS calls this verifier to obtain an attestation result. The KBS then uses this result to decide whether to release the secrets into the realm for processing.
+When the KBS receives an attestation token from the realm, it needs to call a verification service that checks the token's cryptographic signature and verifies that it denotes a confidential computing platform. As you saw in the prerequisite Learning Path [Get Started with CCA Attestation and Veraison](/learning-paths/servers-and-cloud-computing/cca-veraison), Linaro provides such an attestation verifier for use with pre-silicon CCA platforms. This verifier is built from the Open-Source [Veraison project](https://github.com/veraison). The KBS calls this verifier to obtain an attestation result. The KBS then uses this result to decide whether to release the secrets into the realm for processing.

 For additional security, the KBS does not release any secrets in clear text, even after a successful verification of the attestation token. Instead, the realm provides an additional public encryption key to the KBS. This is known as a wrapping key. The KBS uses this public key to wrap, which here means encrypt, the secrets. The client workload inside the realm is then able to use its own private key to unwrap the secrets and use them.

@@ -32,6 +32,6 @@ The attestation verification service is hosted by Linaro, so it is not necessar

 Figure 1 demonstrates the software architecture that you will construct to run the attestation example.
-![cca-essentials](cca-essentials.png "Figure 1: Software architecture for running attestation") +![cca-essentials](cca-essentials.png "Figure 1: Software architecture for running attestation.") You can now proceed to the next section to run the end-to-end attestation example with the software components and architecture as described here. diff --git a/content/learning-paths/servers-and-cloud-computing/cca-essentials/example.md b/content/learning-paths/servers-and-cloud-computing/cca-essentials/example.md index 6424081f4..276279ced 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-essentials/example.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-essentials/example.md @@ -59,11 +59,11 @@ INFO Actix runtime found; starting in Actix runtime INFO starting service: "actix-web-service-172.17.0.2:8088", workers: 16, listening on: 172.17.0.2:8088 ``` -With the key broker server running in one terminal, open up a new terminal in which you will run the key broker client in the next step. +With the Key Broker Server running in one terminal, open up a new terminal in which you will run the Key Broker Client in the next step. ## Run the Key Broker Client -In the new terminal that you have just opened, pull the docker container image that contains the FVP and pre-built software binaries to run the key broker client in a realm. +In the new terminal that you have just opened, pull the docker container image that contains the FVP and pre-built software binaries to run the Key Broker Client in a realm. ```bash docker pull armswdev/cca-learning-path:cca-simulation-v1 @@ -134,15 +134,15 @@ realm login: root (realm) # ``` -Now run the key broker client application in the realm. +Now run the Key Broker Client application in the realm. -Use the endpoint address that the key broker server is listening in on the other terminal: +Use the endpoint address that the Key Broker Server is listening in on the other terminal: ```bash cd /cca ./keybroker-app -v --endpoint http://172.17.0.2:8088 skywalker ``` -In the command above, `skywalker` is the key name that is requested from the key broker server. +In the command above, `skywalker` is the key name that is requested from the Key Broker Server. After some time, you should see the following output: ``` @@ -151,11 +151,11 @@ INFO Challenge (64 bytes) = [0f, ea, c4, e2, 24, 4e, fa, dc, 1d, ea, ea, 3d, 60, INFO Submitting evidence to URL http://172.17.0.2:8088/keys/v1/evidence/3974368321 INFO Attestation failure :-( ! AttestationFailure: No attestation result was obtained. No known-good reference values. ``` -You can see from the key broker client application output that the `skywalker` key is requested from the key broker server, which did send a challenge. +You can see from the Key Broker client application output that the `skywalker` key is requested from the Key Broker Server, which did send a challenge. -The key broker client application uses the challenge to submit its evidence back to the key broker server, but it receives an attestation failure. This is because the server does not have any known good reference values. +The Key Broker Client application uses the challenge to submit its evidence back to the Key Broker Server, but it receives an attestation failure. This is because the server does not have any known good reference values. -Now look at the key broker server output on the terminal where the server is running. It will look like this: +Now look at the Key Broker Server output on the terminal where the server is running. 
It will look like this:

```output
INFO Known-good RIM values are missing. If you trust the client that submitted
command-line option to populate it with known-good RIM values:
--reference-values <(echo '{ "reference-values": [ "tiA66VOokO071FfsCHr7es02vUbtVH5FpLLqTzT7jps=" ] }')
INFO Evidence submitted for challenge 1302147796: no attestation result was obtained. No known-good reference values.
```
-From the server output, you can see that it did create the challenge for the key broker application, but it reports that it has no known good reference values.
+From the server output, you can see that it did create the challenge for the Key Broker application, but it reports that it has no known good reference values.

-It does however provide a way to provision the key broker server with known good values if the client is trusted.
+It does, however, provide a way to provision the Key Broker Server with known good values if the client is trusted.

-In a production environment, the known good reference value is generated using a deployment- specific process, but for demonstration purposes and simplification, you will use the value proposed by the key broker server.
+In a production environment, the known good reference value is generated using a deployment-specific process, but for demonstration purposes and simplification, you will use the value proposed by the Key Broker Server.

-Now go ahead and terminate the running instance of the key broker server using Ctrl+C and restart it with the known good reference value.
+Now go ahead and terminate the running instance of the Key Broker Server using Ctrl+C and restart it with the known good reference value.

-Notice here that you need to copy the `--reference-values` argument directly from the previous error message reported by the key broker.
+Notice here that you need to copy the `--reference-values` argument directly from the previous error message reported by the Key Broker.
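+
+If you would rather not retype the value by hand, you can pull the suggested fragment out of the saved server output with a standard shell tool. The following one-liner is only a sketch: it assumes you redirected the Key Broker Server output to a file named `server.log`, which is a hypothetical name and not part of this Learning Path.
+
+```bash
+# Extract the suggested reference-values JSON fragment from the saved log (sketch).
+grep -o '{ "reference-values": \[ "[^"]*" \] }' server.log
+```
+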
When running the next command, ensure that you are copying the exact value reported, for example:

```bash
./keybroker-server -v --addr 172.17.0.2 --reference-values <(echo '{ "reference-values": [ "tiA66VOokO071FfsCHr7es02vUbtVH5FpLLqTzT7jps=" ] }')
```

-On the terminal with the running realm, rerun the key broker client application with the exact same command line parameters as before:
+On the terminal with the running realm, rerun the Key Broker Client application with the exact same command line parameters as before:

```bash
./keybroker-app -v --endpoint http://172.17.0.2:8088 skywalker
From 8147bfb055435ac2b281e3b889c384bc87597f5c Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Mon, 23 Dec 2024 18:12:43 +0000
Subject: [PATCH 21/96] Selfie Android LP review

---
 .../2-app-scaffolding.md   |   2 +-
 .../3-camera-permission.md | 118 ------------------
 .../4-introduce-mediapipe.md | 40 +++---
 .../_index.md              |   6 +-
 4 files changed, 24 insertions(+), 142 deletions(-)
 delete mode 100644 content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md
index f64377100..02dd95ffc 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md
@@ -12,7 +12,7 @@ This learning path will teach you to architect an app following [modern Android

 Download and install the latest version of [Android Studio](https://developer.android.com/studio/) on your host machine.

-This learning path's instructions and screenshots are taken on macOS with Apple Silicon, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
+The instructions for this learning path were tested on an Apple Silicon host machine running macOS, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).

 Upon first installation, open Android Studio and proceed with the default or recommended settings. Accept license agreements and let Android Studio download all the required assets.

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md
deleted file mode 100644
index 121436976..000000000
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md
+++ /dev/null
@@ -1,118 +0,0 @@
----
-title: Handle camera permission
-weight: 3
-
-### FIXED, DO NOT MODIFY
-layout: learningpathall
----
-
-## Run the app on your device
-
-1. Connect your Android device to your computer via a USB **data** cable. If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double check this checklist:
-
-    1.
You have enabled **USB debugging** on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging). - - 2. You have confirmed by tapping "OK" on your Android device when an **"Allow USB debugging"** dialog pops up, and checked "Always allow from this computer". - - ![Allow USB debugging dialog](https://ftc-docs.firstinspires.org/en/latest/_images/AllowUSBDebugging.jpg) - - -2. Make sure your device model name and SDK version correctly show up on the top right toolbar. Click the **"Run"** button to build and run, as described [here](https://developer.android.com/studio/run). - -3. After waiting for a while, you should be seeing success notification in Android Studio and the app showing up on your Android device. - -4. However, the app shows only a black screen while printing error messages in your [Logcat](https://developer.android.com/tools/logcat) which looks like this: - -``` -2024-11-20 11:15:00.398 18782-18818 Camera2CameraImpl com.example.holisticselfiedemo E Camera reopening attempted for 10000ms without success. -2024-11-20 11:30:13.560 667-707 BufferQueueProducer pid-667 E [SurfaceView - com.example.holisticselfiedemo/com.example.holisticselfiedemo.MainActivity#0](id:29b00000283,api:4,p:2657,c:667) queueBuffer: BufferQueue has been abandoned -2024-11-20 11:36:13.100 20487-20499 isticselfiedem com.example.holisticselfiedemo E Failed to read message from agent control socket! Retrying: Bad file descriptor -2024-11-20 11:43:03.408 2709-3807 PackageManager pid-2709 E Permission android.permission.CAMERA isn't requested by package com.example.holisticselfiedemo -``` - -5. Worry not. This is expected behavior because we haven't correctly configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview) yet, therefore Android OS restricts this app's access to camera features due to privacy reasons. - -## Request camera permission at runtime - -1. Navigate to `manifest.xml` in your `app` subproject's `src/main` path. Declare camera hardware and permission by inserting the following lines into the `` element. Make sure it's **outside** and **above** `` element. - -```xml - - -``` - -2. Navigate to `strings.xml` in your `app` subproject's `src/main/res/values` path. Insert the following lines of text resources, which will be used later. - -```xml - Camera permission is required to recognize face and hands - To grant Camera permission to this app, please go to system settings -``` - -3. Navigate to `MainActivity.kt` and add the following permission related values to companion object: - -```kotlin - // Permissions - private val PERMISSIONS_REQUIRED = arrayOf(Manifest.permission.CAMERA) - private const val REQUEST_CODE_CAMERA_PERMISSION = 233 -``` - -4. Add a new method named `hasPermissions()` to check on runtime whether camera permission has been granted: - -```kotlin - private fun hasPermissions(context: Context) = PERMISSIONS_REQUIRED.all { - ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED - } -``` - -5. Add a condition check in `onCreate()` wrapping `setupCamera()` method, to request camera permission on runtime. - -```kotlin - if (!hasPermissions(baseContext)) { - requestPermissions( - arrayOf(Manifest.permission.CAMERA), - REQUEST_CODE_CAMERA_PERMISSION - ) - } else { - setupCamera() - } -``` - -6. 
Override `onRequestPermissionsResult` method to handle permission request results:
-
-```kotlin
-    override fun onRequestPermissionsResult(
-        requestCode: Int,
-        permissions: Array<String>,
-        grantResults: IntArray
-    ) {
-        when (requestCode) {
-            REQUEST_CODE_CAMERA_PERMISSION -> {
-                if (PackageManager.PERMISSION_GRANTED == grantResults.getOrNull(0)) {
-                    setupCamera()
-                } else {
-                    val messageResId =
-                        if (shouldShowRequestPermissionRationale(Manifest.permission.CAMERA))
-                            R.string.permission_request_camera_rationale
-                        else
-                            R.string.permission_request_camera_message
-                    Toast.makeText(baseContext, getString(messageResId), Toast.LENGTH_LONG).show()
-                }
-            }
-            else -> super.onRequestPermissionsResult(requestCode, permissions, grantResults)
-        }
-    }
-```
-
-## Verify camera permission
-
-1. Rebuild and run the app. Now you should be seeing a dialog pops up requesting camera permissions!
-
-2. Tap `Allow` or `While using the app` (depending on your Android OS versions), then you should be seeing your own face in the camera preview. Good job!
-
-{{% notice Tip %}}
-Sometimes you might need to restart the app to observe the permission change take effect.
-{{% /notice %}}
-
-In the next chapter, we will introduce MediaPipe vision solutions.
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md
index 0c743ef94..c5f14c073 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md
@@ -8,9 +8,9 @@ layout: learningpathall

 [MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide) provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications.

-MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web / JavaScript, Python, etc.
+MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web, JavaScript, Python, etc.

-## Introduce MediaPipe dependencies
+## Add MediaPipe dependencies

 1. Navigate to `libs.versions.toml` and append the following line to the end of `[versions]` section. This defines the version of MediaPipe library we will be using.

```toml
mediapipe-vision = "0.10.15"
```

 {{% notice Note %}}
-Please stick with this version and do not use newer versions due to bugs and unexpected behaviors.
+Please use this version and do not use newer versions, as newer versions introduce bugs and unexpected behavior.
 {{% /notice %}}

-2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency.
+2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency:

```toml
mediapipe-vision = { group = "com.google.mediapipe", name = "tasks-vision", version.ref = "mediapipe-vision" }
```

-3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, ideally between `implementation` and `testImplementation`.
+3.
Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, between `implementation` and `testImplementation`.

```kotlin
implementation(libs.mediapipe.vision)
```

## Prepare model asset bundles

-In this app, we will be using MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which requires their model asset bundle files to initialize.
+In this app, you will use MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which require their model asset bundle files to initialize.

Choose one of the two options below that aligns best with your learning needs.

-### Basic approach: manual downloading
+### Basic approach: manual download

-Simply download the following two files, then move them into the default asset directory: `app/src/main/assets`.
+Download the following two files, then move them into the default asset directory: `app/src/main/assets`.

```console
https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task
https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task
```

{{% notice Tip %}}
-You might need to create the `assets` directory if not exist.
+You might need to create the `assets` directory if it does not exist.
{{% /notice %}}

### Advanced approach: configure prebuild download tasks

-Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, therefore we will introduce [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
+Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, so you will use the [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.

-1. Again, navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section.
+1. Navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section.

-2. Again, navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the aforementioned _Download_ task plugin in this `app` subproject.
+2. Navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the _Download_ task plugin in this `app` subproject.

-4. Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models.
+3.
Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models.

```kotlin
val assetDir = "$projectDir/src/main/assets"
val gestureTaskUrl = "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task"
val faceTaskUrl = "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
```

-5. Insert `import de.undercouch.gradle.tasks.download.Download` into **the top of this file**, then append the following code to **the end of this file**, which hooks two _Download_ tasks to be executed before `preBuild`:
+4. Insert `import de.undercouch.gradle.tasks.download.Download` at the top of this file, then append the following code to the end of this file, which hooks two _Download_ tasks to be executed before `preBuild`:

```kotlin
tasks.register<Download>("downloadGestureTaskAsset") {
    src(gestureTaskUrl)
    dest(assetDir)
    overwrite(false)
}

tasks.register<Download>("downloadFaceTaskAsset") {
    src(faceTaskUrl)
    dest(assetDir)
    overwrite(false)
}

tasks.named("preBuild") {
    dependsOn("downloadGestureTaskAsset", "downloadFaceTaskAsset")
}
```

{{% notice Tip %}}
Refer to [this section](2-app-scaffolding.md#enable-view-binding) if you need help.
{{% /notice %}}

-2. Now you should be seeing both model asset bundles in your `assets` directory, as shown below:
+2. Now you should see both model asset bundles in your `assets` directory, as shown below:

![model asset bundles](images/4/model%20asset%20bundles.png)

-3. Now you are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Actually, we have already implemented the code below for you based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Simply create a new file `HolisticRecognizerHelper.kt` placed in the source directory along with `MainActivity.kt`, then copy paste the code below into it.
+3. You are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Example code, based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples), is already implemented for ease of use. Simply create a new file `HolisticRecognizerHelper.kt` placed in the source directory along with `MainActivity.kt`, then copy and paste the code below into it.

```kotlin
package com.example.holisticselfiedemo
@@ -426,9 +426,9 @@ data class GestureResultBundle(
```

{{% notice Info %}}
-In this learning path we are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
+In this learning path you are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.

-If you'd like to experiment with more people, simply change the `FACES_COUNT` constant to be your desired value.
+If you'd like to experiment with more people, change the `FACES_COUNT` constant to be your desired value.
{{% /notice %}}

-In the next chapter, we will connect the dots from this helper class to the UI layer via a ViewModel.
+In the next section, you will connect the dots from this helper class to the UI layer via a ViewModel.
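+
+As a preview of that wiring, here is a minimal sketch of a ViewModel that forwards the helper's results to the UI layer. It is illustrative only: the `UiEvent` wrapper and the `onGestureResults` callback name are assumptions made for this sketch (it also assumes `GestureResultBundle` is in the same package), not part of the MediaPipe API or of the finished app's exact code.
+
+```kotlin
+import androidx.lifecycle.ViewModel
+import androidx.lifecycle.viewModelScope
+import kotlinx.coroutines.flow.MutableSharedFlow
+import kotlinx.coroutines.flow.SharedFlow
+import kotlinx.coroutines.launch
+
+// Hypothetical UI event wrapper; the real app defines its own variants.
+sealed interface UiEvent {
+    data class Gesture(val bundle: GestureResultBundle) : UiEvent
+}
+
+class MainViewModel : ViewModel() {
+    // Replay size 1 so a late collector still receives the most recent event.
+    private val _uiEvents = MutableSharedFlow<UiEvent>(replay = 1)
+    val uiEvents: SharedFlow<UiEvent> = _uiEvents
+
+    // Invoked from the recognizer helper's result listener.
+    fun onGestureResults(bundle: GestureResultBundle) {
+        viewModelScope.launch { _uiEvents.emit(UiEvent.Gesture(bundle)) }
+    }
+}
+```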
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
index 19dd54f7b..40dfdf129 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
@@ -28,9 +28,9 @@ author_primary: Han Yin
 skilllevels: Beginner
 subjects: ML
 armips:
-  - ARM Cortex-A
-  - ARM Cortex-X
-  - ARM Mali GPU
+  - Cortex-A
+  - Cortex-X
+  - Mali GPU
 tools_software_languages:
   - mobile
   - Android Studio
From 57808b142967c218ab5cc0abc68641451022c7d8 Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Mon, 23 Dec 2024 20:01:28 +0000
Subject: [PATCH 22/96] Android Selfie App LP review

---
 .../6-flow-data-to-view-1.md | 14 +++----
 .../7-flow-data-to-view-2.md | 12 +++---
 .../8-mediate-flows.md | 39 ++++---------------
 .../9-avoid-redundant-requests.md | 12 +++---
 4 files changed, 27 insertions(+), 50 deletions(-)

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md
index 76029594e..f1e0c7daf 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md
@@ -8,7 +8,7 @@ layout: learningpathall

 [SharedFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#sharedflow) and [StateFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#stateflow) are [Kotlin Flow](https://developer.android.com/kotlin/flow) APIs that enable Flows to optimally emit state updates and emit values to multiple consumers.

-In this learning path, you will have the opportunity to experiment with both `SharedFlow` and `StateFlow`. This chapter will focus on SharedFlow while the next chapter will focus on StateFlow.
+In this learning path, you will experiment with both `SharedFlow` and `StateFlow`. This section will focus on SharedFlow, while the next section will focus on StateFlow.

 `SharedFlow` is a general-purpose, hot flow that can emit values to multiple subscribers. It is highly configurable, allowing you to set the replay cache size, buffer capacity, etc.
@@ -54,9 +54,9 @@ This `SharedFlow` is initialized with a replay size of `1`. This retains the mos

 ## Visualize face and gesture results

-To visualize the results of Face Landmark Detection and Gesture Recognition tasks, we have prepared the following code for you based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).
+To visualize the results of the Face Landmark Detection and Gesture Recognition tasks, follow the instructions in this section, which are based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).

-1. Create a new file named `FaceLandmarkerOverlayView.kt` and fill in the content below:
+1. Create a new file named `FaceLandmarkerOverlayView.kt` and copy the content below:

```kotlin
/*
 * Copyright 2023 The TensorFlow Authors. All Rights Reserved.
@@ -180,7 +180,7 @@ class FaceLandmarkerOverlayView(context: Context?, attrs: AttributeSet?) :
```

-2. Create a new file named `GestureOverlayView.kt` and fill in the content below:
+2.
Create a new file named `GestureOverlayView.kt` and fill in the content below: +2. Create a new file named `GestureOverlayView.kt` and copy the content below: ```kotlin /* @@ -302,7 +302,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) : ## Update UI in the view controller -1. Add the above two overlay views to `activity_main.xml` layout file: +1. Add the two overlay views to `activity_main.xml` layout file: ```xml ``` -2. Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of `onCreate` method, **below** `setupCamera()` method call. +2. Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of `onCreate` method, below `setupCamera()` method call. ```kotlin lifecycleScope.launch { @@ -363,7 +363,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) : } ``` -4. Build and run the app again. Now you should be seeing face and gesture overlays on top of the camera preview as shown below. Good job! +4. Build and run the app again. Now you should see face and gesture overlays on top of the camera preview as shown below. Good job! ![overlay views](images/6/overlay%20views.png) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md index ca61e998b..e2dd74cf5 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md @@ -25,7 +25,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s val gestureOk: StateFlow = _gestureOk ``` -2. Append the following constant values to `MainViewModel`'s companion object. In this demo app, we are only focusing on smiling faces and thumb-up gestures. +2. Append the following constant values to `MainViewModel`'s companion object. In this demo app, you will focus on smiling faces and thumb-up gestures. ```kotlin private const val FACE_CATEGORY_MOUTH_SMILE = "mouthSmile" @@ -75,7 +75,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s Gesture ``` -2. In the same directory, create a new resource file named `dimens.xml` if not exist, which is used to define layout related dimension values: +2. In the same directory, create a new resource file named `dimens.xml` if it does not exist. This file is used to define layout related dimension values: ```xml @@ -85,7 +85,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s ``` -3. Navigate to `activity_main.xml` layout file and add the following code to the root `ConstraintLayout`, **below** the two overlay views which you just added in the previous chapter. +3. Navigate to `activity_main.xml` layout file and add the following code to the root `ConstraintLayout`. Add this code after the two overlay views which you just added in the previous section. ```xml ``` -4. Finally, navigate to `MainActivity.kt` and append the following code inside `repeatOnLifecycle(Lifecycle.State.RESUMED)` block, **below** the `launch` block you just added in the previous chapter. This makes sure each of the **three** parallel `launch` runs in its own Coroutine concurrently without blocking each other. +4. 
Finally, navigate to `MainActivity.kt` and append the following code inside `repeatOnLifecycle(Lifecycle.State.RESUMED)` block, after the `launch` block you just added in the previous section. This makes sure each of the three parallel `launch` run in its own co-routine concurrently without blocking each other. ```kotlin launch { @@ -127,7 +127,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s } ``` -5. Build and run the app again. Now you should be seeing two switches on the bottom of the screen as shown below, which turns on and off while you smile and show thumb-up gestures. Good job! +5. Build and run the app again. Now you should see two switches on the bottom of the screen as shown below, which turn on and off while you smile and show thumb-up gestures. Good job! ![indicator UI](images/7/indicator%20ui.png) @@ -135,7 +135,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s This app uses `SharedFlow` for dispatching overlay views' UI events without mandating a specific stateful model, which avoids redundant computation. Meanwhile, it uses `StateFlow` for dispatching condition switches' UI states, which prevents duplicated emission and consequent UI updates. -Here's a breakdown of the differences between `SharedFlow` and `StateFlow`: +Here's a overview of the differences between `SharedFlow` and `StateFlow`: | | SharedFlow | StateFlow | | --- | --- | --- | diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md index 4798ccadd..9bbfff018 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md @@ -6,7 +6,7 @@ weight: 8 layout: learningpathall --- -Now you have two independent Flows indicating the conditions of face landmark detection and gesture recognition. The simplest multimodality strategy is to combine multiple source Flows into a single output Flow, which emits consolidated values as the [single source of truth](https://en.wikipedia.org/wiki/Single_source_of_truth) for its observers (collectors) to carry out corresponding actions. +Now you have two independent Flows indicating the conditions of face landmark detection and gesture recognition. The simplest multimodality strategy is to combine multiple source Flows into a single output Flow, which emits consolidated values as the single source of truth for its observers (collectors) to carry out corresponding actions. ## Combine two Flows into a single Flow @@ -33,9 +33,9 @@ Now you have two independent Flows indicating the conditions of face landmark de ``` {{% notice Note %}} -Kotlin Flow's [`combine`](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/combine.html) transformation is equivalent to ReactiveX's [`combineLatest`](https://reactivex.io/documentation/operators/combinelatest.html). It combines emissions from multiple observables, so that each time **any** observable emits, the combinator function is called with the latest values from all sources. 
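+
+As a concrete illustration of these `combine` semantics (a simplified sketch, not the app's exact code), combining the two condition flows introduced earlier might look like this:
+
+```kotlin
+import kotlinx.coroutines.flow.Flow
+import kotlinx.coroutines.flow.MutableStateFlow
+import kotlinx.coroutines.flow.combine
+
+val faceOk = MutableStateFlow(false)
+val gestureOk = MutableStateFlow(false)
+
+// Re-evaluates whenever either input emits, using the latest value of the other.
+val bothOk: Flow<Boolean> = combine(faceOk, gestureOk) { face, gesture -> face && gesture }
+```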
+Kotlin Flow's [`combine`](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/combine.html) transformation is equivalent to ReactiveX's [`combineLatest`](https://reactivex.io/documentation/operators/combinelatest.html). It combines emissions from multiple observables, so that each time any observable emits, the combinator function is called with the latest values from all sources. -You might need to add `@OptIn(FlowPreview::class)` annotation since `sample` is still in preview. For more information on similar transformations, please refer to [this blog](https://kt.academy/article/cc-flow-combine). +You might need to add `@OptIn(FlowPreview::class)` annotation since `sample` is still in preview. {{% /notice %}} @@ -49,30 +49,7 @@ You might need to add `@OptIn(FlowPreview::class)` annotation since `sample` is .shareIn(viewModelScope, SharingStarted.WhileSubscribed()) ``` -If this code looks confusing to you, please see the explanations below for Kotlin beginners. - -{{% notice Info %}} - -###### Keyword "it" - -The operation `filter { it }` is simplified from `filter { bothOk -> bothOk == true }`. - -Since Kotlin allows for implictly calling the single parameter in a lambda `it`, `{ bothOk -> bothOk == true }` is equivalent to `{ it == true }`, and again `{ it }`. - -See [this doc](https://kotlinlang.org/docs/lambdas.html#it-implicit-name-of-a-single-parameter) for more details. - -{{% /notice %}} - -{{% notice Info %}} - -###### "Unit" type -This `SharedFlow` has a generic type `Unit`, which doesn't contain any value. You may think of it as a "pulse" signal. - -The operation `map { }` simply maps the upstream `Boolean` value emitted from `_bothOk` to `Unit` regardless their values are true or false. It's simplified from `map { bothOk -> Unit }`, which becomes `map { Unit } ` where the keyword `it` is not used at all. Since an empty block already returns `Unit` implicitly, we don't need to explicitly return it. - -{{% /notice %}} - -If this still looks confusing, you may also opt to use `SharedFlow` and remove the `map { }` operation. Just note that when you collect this Flow, it doesn't matter whether the emitted `Boolean` values are true or false. In fact, they are always `true` due to the `filter` operation. +You may also opt to use `SharedFlow` and remove the `map { }` operation. Just note that when you collect this Flow, it doesn't matter whether the emitted `Boolean` values are true or false. In fact, they are always `true` due to the `filter` operation. ## Configure ImageCapture use case @@ -92,7 +69,7 @@ If this still looks confusing, you may also opt to use `SharedFlow` and .build() ``` -3. Again, don't forget to append this use case to `bindToLifecycle`. +3. Append this use case to `bindToLifecycle`. ```kotlin camera = cameraProvider.bindToLifecycle( @@ -102,7 +79,7 @@ If this still looks confusing, you may also opt to use `SharedFlow` and ## Execute photo capture with ImageCapture -1. Append the following constant values to `MainActivity`'s companion object. They define the file name format and the [MIME type](https://en.wikipedia.org/wiki/Media_type). +1. Append the following constant values to `MainActivity`'s companion object. They define the file name format and the media type: ```kotlin // Image capture @@ -165,7 +142,7 @@ If this still looks confusing, you may also opt to use `SharedFlow` and ## Add a flash effect upon capturing photo -1. 
Navigate to `activity_main.xml` layout file and insert the following `View` element **between** the two overlay views and two `SwitchCompat` views. This is essentially just a white blank view covering the whole surface.
+1. Navigate to `activity_main.xml` layout file and insert the following `View` element between the two overlay views and two `SwitchCompat` views. This is essentially just a white blank view covering the whole surface.

```
` and 
}
```

-3. Invoke `showFlashEffect()` method in `executeCapturePhoto()` method, **before** invoking `imageCapture.takePicture()`
+3. Invoke `showFlashEffect()` method in `executeCapturePhoto()` method, before invoking `imageCapture.takePicture()`

 4. Build and run the app. Try keeping up a smiling face while presenting thumb-up gestures. When you see both switches turn on and stay stable for approximately half a second, the screen should flash white and then a photo should be captured and shows up in your album, which may take a few seconds depending on your Android device's hardware. Good job!

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
index 99608ce13..b4b58ed8b 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
@@ -1,16 +1,16 @@
 ---
-title: Avoid duplicated photo capture requests
+title: Avoid duplicate photo capture requests
 weight: 9

 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---

-So far, we have implemented the core logic for mediating MediaPipe's face and gesture task results and executing photo captures. However, the view controller does not communicate its execution results back to the view model. This introduces risks such as photo capture failures, frequent or duplicate requests, and other potential issues.
+So far, you have implemented the core logic for handling MediaPipe's face and gesture task results and executing photo captures. However, the view controller does not communicate its execution results back to the view model. This introduces risks such as photo capture failures, frequent or duplicate requests, and other potential issues.

 ## Introduce camera readiness state

-It is a best practice to complete the data flow cycle by providing callbacks for the view controller's states. This ensures that the view model does not emit values in undesired states, such as when the camera is busy or unavailable.
+It is best practice to complete the data flow cycle by providing callbacks for the view controller's states. This ensures that the view model does not emit values in undesired states, such as when the camera is busy or unavailable.

 1. Navigate to `MainViewModel` and add a `MutableStateFlow` named `_isCameraReady` as a private member variable. This keeps track of whether the camera is busy or unavailable.

@@ -58,7 +58,7 @@ The duration of image capture can vary across Android devices due to hardware di

 To address this, implementing a simple cooldown mechanism after each photo capture can enhance the user experience while conserving computing resources.

-1. Add the following constant value to `MainViewModel`'s companion object.
This defines a `3` sec cooldown before marking the camera available again.
+1. Add the following constant value to `MainViewModel`'s companion object. This defines a 3-second cooldown before making the camera available again.

```kotlin
    private const val IMAGE_CAPTURE_DEFAULT_COUNTDOWN = 3000L
```

@@ -91,6 +91,6 @@ However, silently failing without notifying the user is not a good practice for

 ## Completed sample code on GitHub

-If you run into any difficulties completing this learning path, feel free to check out the [completed sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio.
+If you run into any difficulties completing this learning path, you can check out the [complete sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio.

-If you discover a bug, encounter an issue, or have suggestions for improvement, we’d love to hear from you! Please feel free to [open an issue](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality/issues/new) with detailed information.
+If you discover a bug, encounter an issue, or have suggestions for improvement, please feel free to [open an issue](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality/issues/new) with detailed information.
From 01327d0b1fce85e3abc361b8d96359c226e5e908 Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Mon, 23 Dec 2024 20:28:03 +0000
Subject: [PATCH 23/96] Android Selfie LP review

---
 .../_index.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
index 40dfdf129..ba2a32c66 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
@@ -1,3 +1,4 @@
+more :q!
 ---
 title: Build a Hands-Free Selfie app with Modern Android Development and MediaPipe Multimodal AI
 draft: true
From 6819c157c06b2880534e5c0b499a0322f914c49f Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Mon, 23 Dec 2024 20:35:09 +0000
Subject: [PATCH 24/96] Android selfie App review

---
 .../3-camera-permission.md | 118 ++++++++++++++++++
 1 file changed, 118 insertions(+)
 create mode 100644 content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md
new file mode 100644
index 000000000..80262dfae
--- /dev/null
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md
@@ -0,0 +1,118 @@
+---
+title: Handle camera permissions
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Run the app on your device
+
+1. Connect your Android device to your computer via a USB data cable.
If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double-check this checklist:

    1. You have enabled USB debugging on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging).

    2. You have confirmed by tapping "OK" on your Android device when an "Allow USB debugging" dialog pops up, and checked "Always allow from this computer".

    ![Allow USB debugging dialog](https://ftc-docs.firstinspires.org/en/latest/_images/AllowUSBDebugging.jpg)


2. Make sure your device model name and SDK version correctly show up on the top right toolbar. Click the "Run" button to build and run the app.

3. After a while, you should see a success notification in Android Studio and the app showing up on your Android device.

4. However, the app shows only a black screen while printing error messages in your [Logcat](https://developer.android.com/tools/logcat) that look like this:

```
2024-11-20 11:15:00.398 18782-18818 Camera2CameraImpl com.example.holisticselfiedemo E Camera reopening attempted for 10000ms without success.
2024-11-20 11:30:13.560 667-707 BufferQueueProducer pid-667 E [SurfaceView - com.example.holisticselfiedemo/com.example.holisticselfiedemo.MainActivity#0](id:29b00000283,api:4,p:2657,c:667) queueBuffer: BufferQueue has been abandoned
2024-11-20 11:36:13.100 20487-20499 isticselfiedem com.example.holisticselfiedemo E Failed to read message from agent control socket! Retrying: Bad file descriptor
2024-11-20 11:43:03.408 2709-3807 PackageManager pid-2709 E Permission android.permission.CAMERA isn't requested by package com.example.holisticselfiedemo
```

5. Do not worry. This is expected behavior because you haven't correctly configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview) yet.

## Request camera permission at runtime

1. Navigate to `manifest.xml` in your `app` subproject's `src/main` path. Declare camera hardware and permission by inserting the following lines into the `<manifest>` element. Make sure they are declared outside of, and above, the `<application>` element.

```xml
    <uses-feature android:name="android.hardware.camera.any" />
    <uses-permission android:name="android.permission.CAMERA" />
```

2. Navigate to `strings.xml` in your `app` subproject's `src/main/res/values` path. Insert the following lines of text resources, which will be used later.

```xml
    <string name="permission_request_camera_rationale">Camera permission is required to recognize face and hands</string>
    <string name="permission_request_camera_message">To grant Camera permission to this app, please go to system settings</string>
```

3. Navigate to `MainActivity.kt` and add the following permission-related values to the companion object:

```kotlin
    // Permissions
    private val PERMISSIONS_REQUIRED = arrayOf(Manifest.permission.CAMERA)
    private const val REQUEST_CODE_CAMERA_PERMISSION = 233
```

4. Add a new method named `hasPermissions()` to check at runtime whether camera permission has been granted:

```kotlin
    private fun hasPermissions(context: Context) = PERMISSIONS_REQUIRED.all {
        ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED
    }
```

5. Add a condition check in `onCreate()` wrapping the `setupCamera()` method, to request camera permission at runtime.

```kotlin
    if (!hasPermissions(baseContext)) {
        requestPermissions(
            arrayOf(Manifest.permission.CAMERA),
            REQUEST_CODE_CAMERA_PERMISSION
        )
    } else {
        setupCamera()
    }
```

6. 
Override the `onRequestPermissionsResult()` method to handle permission request results:

```kotlin
    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        when (requestCode) {
            REQUEST_CODE_CAMERA_PERMISSION -> {
                if (PackageManager.PERMISSION_GRANTED == grantResults.getOrNull(0)) {
                    setupCamera()
                } else {
                    val messageResId =
                        if (shouldShowRequestPermissionRationale(Manifest.permission.CAMERA))
                            R.string.permission_request_camera_rationale
                        else
                            R.string.permission_request_camera_message
                    Toast.makeText(baseContext, getString(messageResId), Toast.LENGTH_LONG).show()
                }
            }
            else -> super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        }
    }
```

## Verify camera permission

1. Rebuild and run the app. Now you should see a dialog pop up requesting camera permissions!

2. Tap `Allow` or `While using the app` (depending on your Android OS version). Then you should see your own face in the camera preview. Good job!

{{% notice Tip %}}
Sometimes you might need to restart the app to observe the permission change take effect.
{{% /notice %}}

In the next section, you will learn how to integrate MediaPipe vision solutions.
From 453cdda978523916f355a905eade812501bb6e08 Mon Sep 17 00:00:00 2001
From: pareenaverma <pareena.verma@arm.com>
Date: Mon, 23 Dec 2024 20:39:04 +0000
Subject: [PATCH 25/96] Android selfie app review

---
 .../_index.md | 25 +++++++++++--------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
index ba2a32c66..56a2cf596 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md
@@ -1,15 +1,16 @@
 ---
-title: Build a Hands-Free Selfie app with Modern Android Development and MediaPipe Multimodal AI
+title: Build a Hands-Free Selfie Android application with MediaPipe

 draft: true
 cascade:
   draft: true

 minutes_to_complete: 120
-who_is_this_for: This is an introductory topic for mobile application developers interested in learning how to build an Android selfie app with MediaPipe, Kotlin flows and CameraX, following the modern Android architecture design.
+who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Androi
+d selfie app with MediaPipe, Kotlin flows and CameraX.

 learning_objectives:
   - Architect a modern Android app with a focus on the UI layer.
   - Leverage lifecycle-aware components within the MVVM architecture.
   - Combine MediaPipe's face landmark detection and gesture recognition for a multimodel selfie solution.
   - Understand the basics of camera application development using CameraX.
   - Use Kotlin Flow APIs to handle multiple asynchronous data streams.

 prerequisites:
-  - A development machine compatible with [**Android Studio**](https://developer.android.com/studio).
-  - A recent **physical** Android device (with **front camera**) and a USB **data** cable.
+  - A development machine with [**Android Studio**](https://developer.android.com/studio) installed.
+  - A recent Arm powered Android phone (with **front camera**) and a USB data cable.
   - Familiarity with Android development concepts. 
- - Basic knowledge of modern Android architecture. - - Basic knowledge of Kotlin programming language, such as [coroutines](https://kotlinlang.org/docs/coroutines-overview.html) and [flows](https://kotlinlang.org/docs/flow.html). + - Basic knowledge of Kotlin programming language, such as [coroutines](https://kotlinlang.org/docs/coroutines-overview +.html) and [flows](https://kotlinlang.org/docs/flow.html). author_primary: Han Yin ### Tags -skilllevels: Beginner +skilllevels: Advanced subjects: ML armips: - Cortex-A @@ -45,5 +46,7 @@ operatingsystems: # ================================================================================ weight: 1 # _index.md always has weight of 1 to order correctly layout: "learningpathall" # All files under learning paths have this same wrapper -learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of lear +ning path content. --- + From 3686393e165026a7cbdb71467ec66266e6cf86e0 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:51:12 +0000 Subject: [PATCH 26/96] Android selfie LP review --- .../2-app-scaffolding.md | 10 +++++----- .../9-avoid-redundant-requests.md | 2 +- .../_index.md | 13 +++---------- 3 files changed, 9 insertions(+), 16 deletions(-) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md index 02dd95ffc..999a8cbdb 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md @@ -1,5 +1,5 @@ --- -title: Scaffold a new Android project +title: Create a new Android project weight: 2 ### FIXED, DO NOT MODIFY @@ -26,12 +26,12 @@ Before you proceed to coding, here are some tips that might come handy: ## Create a new Android project -1. Navigate to **File > New > New Project...**. +1. Navigate to File > New > New Project.... -2. Select **Empty Views Activity** in **Phone and Tablet** galary as shown below, then click **Next**. +2. Select Empty Views Activity in the Phone and Tablet gallery as shown below, then click Next. ![Empty Views Activity](images/2/empty%20project.png) -3. Proceed with a cool project name and default configurations as shown below. Make sure that **Language** is set to **Kotlin**, and that **Build configuration language** is set to **Kotlin DSL**. +3. Enter a project name and use the default configurations as shown below. Make sure that Language is set to Kotlin, and that Build configuration language is set to Kotlin DSL. ![Project configuration](images/2/project%20config.png) ### Introduce CameraX dependencies @@ -194,4 +194,4 @@ private fun bindCameraUseCases() { } ``` -In the next chapter, we will build and run the app to make sure the camera works well. +In the next section, you will build and run the app to make sure the camera works well. 
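Editor's aside: the `2-app-scaffolding.md` hunk above only shows the tail of `bindCameraUseCases()`. For readers skimming the patch, here is a minimal sketch of what such a CameraX binding typically contains. It is an illustration under stated assumptions, not code quoted from the repository: the name `previewView` and the choice of the front camera are inferred from the selfie use case described in these patches.

```kotlin
// Illustrative sketch only, assuming a PreviewView named previewView in activity_main.xml.
private fun bindCameraUseCases() {
    val providerFuture = ProcessCameraProvider.getInstance(this)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()

        // Route camera frames into the on-screen PreviewView.
        val preview = Preview.Builder().build().apply {
            setSurfaceProvider(previewView.surfaceProvider)
        }

        // Rebind from a clean state; a selfie app wants the front camera.
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            this, CameraSelector.DEFAULT_FRONT_CAMERA, preview
        )
    }, ContextCompat.getMainExecutor(this))
}
```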
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md index b4b58ed8b..13d998eb4 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md @@ -89,7 +89,7 @@ However, silently failing without notifying the user is not a good practice for {{% /notice %}} -## Completed sample code on GitHub +## Entire sample code on GitHub If you run into any difficulties completing this learning path, you can check out the [complete sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio. diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md index 56a2cf596..3ae0f79bf 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md @@ -1,14 +1,9 @@ --- title: Build a Hands-Free Selfie Android application with MediaPipe -draft: true -cascade: - draft: true - minutes_to_complete: 120 -who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Androi -d selfie app with MediaPipe, Kotlin flows and CameraX. +who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Android selfie application with MediaPipe, Kotlin flows and CameraX. learning_objectives: - Architect a modern Android app with a focus on the UI layer. @@ -21,8 +16,7 @@ prerequisites: - A development machine with [**Android Studio**](https://developer.android.com/studio) installed. - A recent Arm powered Android phone (with **front camera**) and a USB data cable. - Familiarity with Android development concepts. - - Basic knowledge of Kotlin programming language, such as [coroutines](https://kotlinlang.org/docs/coroutines-overview -.html) and [flows](https://kotlinlang.org/docs/flow.html). + - Basic knowledge of Kotlin programming language. author_primary: Han Yin @@ -46,7 +40,6 @@ operatingsystems: # ================================================================================ weight: 1 # _index.md always has weight of 1 to order correctly layout: "learningpathall" # All files under learning paths have this same wrapper -learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of lear -ning path content. +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. 
--- From 72039b85d6c9fa855c301bacfc6654d6b0722af4 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:51:50 +0000 Subject: [PATCH 27/96] Android selfie LP review --- .../_index.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md index 3ae0f79bf..bbaad71db 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md @@ -1,6 +1,10 @@ --- title: Build a Hands-Free Selfie Android application with MediaPipe +draft: true +cascade: + draft: true + minutes_to_complete: 120 who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Android selfie application with MediaPipe, Kotlin flows and CameraX. From 576e8d416eaddb9d988483aa115541cecfcfcecc Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Tue, 24 Dec 2024 05:24:43 +0000 Subject: [PATCH 28/96] Starting editorial. --- .../net-aspire/_index.md | 10 +++++----- .../net-aspire/background.md | 20 +++++++++++++++---- 2 files changed, 21 insertions(+), 9 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 1ee31d81c..2bd7cdc84 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -3,15 +3,15 @@ title: Run .NET Aspire applications on Arm-based Virtual Machines in AWS and GCP minutes_to_complete: 60 -who_is_this_for: This is an introductory learning path for software developers interested in learning how to deploy .NET Aspire applications in AWS and GCP +who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications in Amazon Web Services (AWS) and Google Cloud Platform (GCP). learning_objectives: - - Learn about .NET Aspire. - - Create a .NET Aspire project and deploy it to the Arm-powered Virtual Machines in the Cloud. + - Describe .NET Aspire. + - Create a .NET Aspire project and deploy it to Arm-powered Virtual Machines in the Cloud. prerequisites: - - A Windows on Arm computer such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. - - An [Arm based instance](/learning-paths/servers-and-cloud-computing/csp/) from AWS or GCP to deploy the application. + - A Windows on Arm machine such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. + - An [Arm based instance](/learning-paths/servers-and-cloud-computing/csp/) from AWS or GCP. - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable. 
author_primary: Dawid Borycki diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md index 3f8efcbcc..e4099cca3 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md @@ -6,12 +6,24 @@ weight: 2 layout: learningpathall --- -### What is the .NET Aspire -.NET Aspire is a comprehensive suite of powerful tools, templates, and packages designed to simplify the development of cloud-native applications using the .NET platform. Delivered through a collection of NuGet packages, .NET Aspire addresses specific cloud-native concerns, enabling developers to build observable and production-ready apps efficiently. +### What is .NET Aspire? +.NET Aspire is a comprehensive suite of powerful tools, templates, and packages designed to simplify the development of cloud-native applications using the .NET platform. Delivered through a collection of NuGet packages, .NET Aspire provides solutions for building cloud-native apps, enabling developers to build observable and production-ready projects efficiently. -Cloud-native applications are typically composed of small, interconnected services or microservices rather than a single monolithic codebase. These applications often consume a variety of services such as databases, messaging systems, and caching mechanisms. With .NET Aspire you get a consistent set of tools and patterns that help you build and run distributed applications, taking full advantage of the scalability, resilience, and manageability of cloud infrastructures. +Cloud-native applications are typically composed of small, interconnected services or microservices, rather than a single monolithic codebase. These applications often consume a variety of services such as: -.NET Aspire enhances the local development experience by simplifying the management of your application's configuration and interconnections. It abstracts low-level implementation details, streamlining the setup of service discovery, environment variables, and container configurations. Specifically, with a few helper method calls, you can create local resources (like a Redis container), wait for them to become available, and configure appropriate connection strings in your projects. +* Databases. +* Messaging systems. +* Caching mechanisms. + +.NET Aspire gives you a consistent set of tools and patterns that help you to build and run distributed applications, taking full advantage of the scalability, resilience, and manageability of cloud infrastructures. + +.NET Aspire enhances the local development experience by simplifying the management of your application's configuration and interconnections. It abstracts low-level implementation details, and streamlines the following: + +* The setup of service discovery. +* Environment variables. +* Container configurations. + +Specifically, with a few helper method calls, you can create local resources, such as a Redis container, wait for them to become available, and configure appropriate connection strings in your projects. .NET Aspire offers integrations for popular services like Redis and PostgreSQL, ensuring standardized interfaces and seamless connections with your app. These integrations handle cloud-native concerns such as health checks and telemetry through consistent configuration patterns. 
By referencing named resources, configurations are injected automatically, simplifying the process of connecting services. From 793af56093191f33b4ea45d7fb6d448e08b928fe Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Tue, 24 Dec 2024 11:48:16 +0000 Subject: [PATCH 29/96] Marking up formatting of step-by-step instructions. --- .../net-aspire/_index.md | 8 ++-- .../net-aspire/aws.md | 48 +++++++++++-------- .../net-aspire/background.md | 8 ++-- .../net-aspire/project.md | 2 +- 4 files changed, 36 insertions(+), 30 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 2bd7cdc84..7390e297e 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -3,15 +3,15 @@ title: Run .NET Aspire applications on Arm-based Virtual Machines in AWS and GCP minutes_to_complete: 60 -who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications in Amazon Web Services (AWS) and Google Cloud Platform (GCP). +who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Amazon Web Services (AWS) and Google Cloud Platform (GCP). learning_objectives: - Describe .NET Aspire. - - Create a .NET Aspire project and deploy it to Arm-powered Virtual Machines in the Cloud. + - Create a .NET Aspire project and deploy it to Arm-powered virtual machines in the Cloud. prerequisites: - - A Windows on Arm machine such as [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. - - An [Arm based instance](/learning-paths/servers-and-cloud-computing/csp/) from AWS or GCP. + - A Windows on Arm machine, for example [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), or a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. + - An [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp/) from AWS or GCP. - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable. author_primary: Dawid Borycki diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index a15228605..67855cb8a 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -7,30 +7,36 @@ layout: learningpathall --- ### Objective -In this section you will learn how to deploy the .NET Aspire application onto an AWS EC2 Virtual Machine powered by Arm-based processors, such as AWS Graviton. This involves leveraging the cost and performance benefits of Arm architecture while demonstrating the seamless deployment of cloud-native applications on modern infrastructure. +In this section, you will learn how to deploy the .NET Aspire application on to an AWS EC2 (Elastic Compute Cloud) Virtual Machine powered by Arm-based processors, such as AWS Graviton. This involves leveraging the cost and performance benefits of Arm architecture while demonstrating the seamless deployment of cloud-native applications on modern infrastructure. 
-### Setup your AWS EC2 Instance -Follow these steps to deploy an app to an Arm-powered EC2 instance:: -1. Log in to AWS Management Console [here](http://console.aws.amazon.com) -2. Navigate to EC2 Service. In the search box type "EC2". Then, click EC2: +### Set up your AWS EC2 Instance +Follow these steps to deploy an app on to an Arm-powered EC2 instance: +1. Log in to the [AWS Management Console](http://console.aws.amazon.com). +2. Navigate to the EC2 Service. -![fig5](figures/05.png) + As shown in Figure 5, in the search box, type "EC2". + + Then, click on **EC2** in the search results: -3. In the EC2 Dashboard, click “Launch Instance” and fill out the following details: -* Name: type arm-server -* AMI: Select Arm-compatible Amazon Machine Image, Ubuntu 22.04 LTS for Arm64. -* Architecture: Select 64-bit (Arm). -* Instance Type: Select t4g.small. +![Figure 5 alt-text#center](figures/05.png "Figure 5: Search for EC2 Service in the AWS Management Console.") -The configuration should look as follows: +3. In the EC2 Dashboard, click **Launch Instance** and fill out the following details: +* Name: type "arm-server". +* AMI: select **Arm-compatible Amazon Machine Image, Ubuntu 22.04 LTS for Arm64**. +* Architecture: select **64-bit (Arm)**. +* Instance Type: select **t4g.small**. + +The configuration should look like the configuration fields shown in Figure 6: -![fig6](figures/06.png) +![Figure 6 alt-text#center](figures/06.png "Figure 6: Configuration.") -4. Scroll down to "Key pair (login)", and click "Create new key pair". This will display the "Create key pair" window, in which you configure the following: -* Key pair name: arm-key-pair -* Key pair type: RSA -* Private key format: .pem -* Click the Create key pair button, and download the key pair to your computer +4. Scroll down to **Key pair** (login), and click **Create new key pair**. + This displays the "Create key pair" window. + Now configure the following fields: +* Key pair name: **arm-key-pair**. +* Key pair type: **RSA**. +* Private key format: **.pem**. +* Click the **Create key pair** button, and download the key pair to your computer. ![fig7](figures/07.png) @@ -106,7 +112,7 @@ Trust the development certificate: ```console dotnet dev-certs https --trust ``` - Build and run the project + Build and run the project: ```console dotnet restore dotnet run --project NetAspire.Arm.AppHost @@ -116,10 +122,10 @@ The application will run the same way as locally. You should see the following: ![fig12](figures/12.png) -Finally, open the application in the web browser (using the EC2's public IP): +Finally, open the application in the web browser, using the EC2's public IP: ![fig13](figures/13.png) ### Summary -You have successfully deployed the Aspire app onto an Arm-powered AWS EC2 instance. This demonstrates the compatibility of .NET applications with Arm architecture and AWS Graviton instances, offering high performance and cost-efficiency. +You have successfully deployed the Aspire app on to an Arm-powered AWS EC2 instance. This demonstrates the compatibility of .NET applications with Arm architecture and AWS Graviton instances, offering high performance and cost-efficiency. 
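Editor's aside on the EC2 deployment steps revised above: the whole flow assumes the instance really is Arm64, so it can be worth verifying that before running the install commands. A quick check using only standard tools; the expected outputs are assumptions based on the t4g and Ubuntu 22.04 choices made in this guide:

```console
uname -m         # expect "aarch64" on a Graviton-backed t4g instance
lsb_release -ds  # expect "Ubuntu 22.04.x LTS"
dotnet --info    # after installing the SDK, the RID should end in "arm64"
```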
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md index e4099cca3..e146dd30f 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md @@ -23,12 +23,12 @@ Cloud-native applications are typically composed of small, interconnected servic * Environment variables. * Container configurations. -Specifically, with a few helper method calls, you can create local resources, such as a Redis container, wait for them to become available, and configure appropriate connection strings in your projects. +Specifically, with a few helper method calls, you can create local resources, such as a Redis container, wait for the resources to become available, and then configure appropriate connection strings in your projects. -.NET Aspire offers integrations for popular services like Redis and PostgreSQL, ensuring standardized interfaces and seamless connections with your app. These integrations handle cloud-native concerns such as health checks and telemetry through consistent configuration patterns. By referencing named resources, configurations are injected automatically, simplifying the process of connecting services. +.NET Aspire offers integrations for popular services like Redis and PostgreSQL, ensuring standardized interfaces and seamless connections with your app. These integrations handle specific cloud-native requirements such as health checks and telemetry through consistent configuration patterns. By referencing named resources, configurations are injected automatically, simplifying the process of connecting services. -.NET Aspire provides project templates that include boilerplate code and configurations common to cloud-native apps, such as telemetry, health checks, and service discovery. It offers tooling experiences for Visual Studio, Visual Studio Code, and the .NET CLI to help you create and interact with .NET Aspire projects. The templates come with defaults to help you get started quickly, reducing setup time and increasing productivity. +.NET Aspire provides project templates that include boilerplate code and configurations common to cloud-native apps, such as the aforementioned health checks and telemetry, as well as service discovery. It offers tooling experiences for Visual Studio, Visual Studio Code, and .NET CLI to help you create and interact with .NET Aspire projects. The templates come with default settings that you can use to get started quickly, which reduces setup time and increases productivity. -By providing a consistent set of tools and patterns, .NET Aspire streamlines the development process of cloud-native applications. It manages complex applications during the development phase without dealing with low-level implementation details. .NET Aspire easily connects to commonly used services with standardized interfaces and configurations. There are also various templates and tooling to accelerate project setup and development cycles. Finally, with .NET Aspire, you can create applications that are ready for production with built-in support for telemetry, health checks, and service discovery. +By providing a consistent set of tools and patterns, .NET Aspire streamlines the development process of cloud-native applications. It manages complex applications during the development phase without dealing with low-level implementation details. 
.NET Aspire easily connects to commonly-used services with standardized interfaces and configurations. There are also various templates and tooling to accelerate project setup and development cycles. In this Learning Path, you will learn how to create a .NET Aspire application, describe the project, and modify the code on a Windows on Arm development machine. You will then deploy the application to AWS and GCP Arm-powered virtual machines. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md index 7d170687f..c37a4bb85 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md @@ -6,7 +6,7 @@ weight: 3 layout: learningpathall --- -In this section, you will set up the project. This involves several steps, including installing the Aspire workload. Then, you will learn about the project structure and launch it locally. Finally, you will modify the project to add additional computations to mimic computationally intensive work. +In this section, you will set up the project. This involves several steps, including installing the Aspire workload. Then, you will learn about the project structure and launch it locally. Finally, you will modify the project to add additional computations to mimic computationally-intensive work. ## Create a Project To create a .NET Aspire application, first ensure that you have [.NET 8.0 or later installed](https://dotnet.microsoft.com/en-us/download/dotnet) on your Windows on Arm development machine. From bfb815d69a34687498b9846ce527e484978b2016 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Wed, 25 Dec 2024 04:29:34 +0000 Subject: [PATCH 30/96] Marking up and clarifying instructions on setup. --- .../net-aspire/_index.md | 2 +- .../net-aspire/aws.md | 38 +++++++++---------- .../net-aspire/gcp.md | 32 ++++++++-------- 3 files changed, 36 insertions(+), 36 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 7390e297e..941c0f319 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -1,5 +1,5 @@ --- -title: Run .NET Aspire applications on Arm-based Virtual Machines in AWS and GCP +title: Run .NET Aspire applications on Arm-based virtual machines on AWS and GCP minutes_to_complete: 60 diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index 67855cb8a..67cc7afd4 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -7,10 +7,10 @@ layout: learningpathall --- ### Objective -In this section, you will learn how to deploy the .NET Aspire application on to an AWS EC2 (Elastic Compute Cloud) Virtual Machine powered by Arm-based processors, such as AWS Graviton. This involves leveraging the cost and performance benefits of Arm architecture while demonstrating the seamless deployment of cloud-native applications on modern infrastructure. +In this section, you will learn how to deploy the .NET Aspire application on to an AWS Elastic Compute Cloud (EC2) Virtual Machine powered by Arm-based processors, such as AWS Graviton. 
This leverages the cost and performance benefits of Arm architecture while demonstrating the seamless deployment of cloud-native applications on modern infrastructure. ### Set up your AWS EC2 Instance -Follow these steps to deploy an app on to an Arm-powered EC2 instance: +Follow these steps to set up an Arm-powered EC2 instance: 1. Log in to the [AWS Management Console](http://console.aws.amazon.com). 2. Navigate to the EC2 Service. @@ -20,13 +20,13 @@ Follow these steps to deploy an app on to an Arm-powered EC2 instance: ![Figure 5 alt-text#center](figures/05.png "Figure 5: Search for EC2 Service in the AWS Management Console.") -3. In the EC2 Dashboard, click **Launch Instance** and fill out the following details: +3. In the EC2 Dashboard, click **Launch Instance** and add this information to configure your setup: * Name: type "arm-server". * AMI: select **Arm-compatible Amazon Machine Image, Ubuntu 22.04 LTS for Arm64**. * Architecture: select **64-bit (Arm)**. * Instance Type: select **t4g.small**. -The configuration should look like the configuration fields shown in Figure 6: +The configuration should look like the configuration fields that are shown in Figure 6: ![Figure 6 alt-text#center](figures/06.png "Figure 6: Configuration.") @@ -40,29 +40,29 @@ The configuration should look like the configuration fields shown in Figure 6: ![fig7](figures/07.png) -5. Scroll down to "Network Settings", where: -* VPC: use default -* Subnet: select no preference -* Auto-assign public IP: Enable -* Firewall: Check Create security group -* Security group name: arm-security-group -* Description: arm-security-group -* Inbound security groups +5. Scroll down to "Network Settings", and confgure the settings in this way: +* VPC: select the default. +* Subnet: select no preference. +* Auto-assign public IP: Enable. +* Firewall: Check Create security group. +* Security group name: arm-security-group. +* Description: arm-security-group. +* Inbound security groups. ![fig8](figures/08.png) -6. Configure "Inbound Security Group Rules". Specifically, click "Add Rule" and set the following details: -* Type: Custom TCP -* Protocol: TCP +6. Configure "Inbound Security Group Rules" by clicking "Add Rule" and then setting the following details: +* Type: Custom TCP. +* Protocol: TCP. * Port Range: 7133. -* Source: Select Anywhere (0.0.0.0/0) for public access or restrict access to your specific IP for better security. -* Repeat this step for all three ports the application is using. Here I have 7133, 7511, 17222. These must match the values we had, when we run the app locally. +* Source: Select "Anywhere (0.0.0.0/0)" for public access or restrict access to your specific IP for better security. +* Repeat this step for all three ports the application is using. This example demonstrates setup using ports 7133, 7511, and 17222. These must match the values that you have when you run the app locally. -The configuration should look as follows: +The configuration should look like: ![fig9](figures/09.png) -7. Launch an instance by clicking "Launch instance" button. You should see the green box with the Success label. This box also contains a link to the EC2 instance. Click it. It will take you to the instance dashboard, which looks like the one below: +7. Launch an instance by clicking the **Launch instance** button. You should see the green box with the Success label. This box also contains a link to the EC2 instance. 
Click it, and it will take you to the instance dashboard, which looks like Figure 10: ![fig10](figures/10.png) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md index 3a12230e0..68c422e89 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md @@ -10,19 +10,19 @@ layout: learningpathall In this section, you will learn how to deploy a .NET Aspire application onto an Arm-based instance running on Google Cloud Platform (GCP). You will start by creating an instance of an Arm64 virtual machine on GCP. You will then connect to it, install the required software, and run the application. -### Create an Arm64 Virtual Machine +### Create an Arm64 virtual machine Follow these steps to create an Arm64 VM: 1. Create a Google Cloud Account. If you don’t already have an account, sign up for Google Cloud. -2. Open the Google Cloud Console [here](https://console.cloud.google.com) +2. Open the Google Cloud Console [here](https://console.cloud.google.com). 3. Navigate to Compute Engine. In the Google Cloud Console, open the Navigation menu and go to Compute Engine > VM Instances. Enable any relevant APIs if prompted. 4. Click “Create Instance”. 5. Configure the VM Instance as follows: -* Name: arm-server -* Region/Zone: Choose a region and zone where Arm64 processors are available (e.g., us-central1). -* Machine Family: Select General-purpose. -* Series: T2A -* Machine Type: Select t2a-standard-1. -The configuration should resemble the following: +* Name: **arm-server** +* Region/Zone: choose a region and zone where Arm64 processors are available, for example us-central1. +* Machine Family: select **General-purpose**. +* Series: T2A. +* Machine Type: select **t2a-standard-1**. +The configuration setup should resemble the following: ![fig14](figures/14.png) @@ -38,7 +38,7 @@ The configuration should resemble the following: ### Connecting to VM After creating the VM, connect to it as follows: -1. In Compute Engine, click the SSH dropdown next to your VM, and select “Open in browser window”: +1. In Compute Engine, click the SSH drop-down menu next to your VM, and select **Open in browser window**: ![fig16](figures/16.png) @@ -51,7 +51,7 @@ After creating the VM, connect to it as follows: ![fig18](figures/18.png) ### Installing dependencies and deploying an app -Once the connection is established, you can install the required dependencies (.NET SDK, Aspire workload, Git), fetch the application code, and deploy it: +Once the connection is established, you can install the required dependencies (.NET SDK, Aspire workload, and Git), fetch the application code, and deploy it: Update the Package List: ```console sudo apt update && sudo apt upgrade -y @@ -87,7 +87,7 @@ Trust the development certificate: ```console dotnet dev-certs https --trust ``` -Build and run the project +Build and run the project: ```console dotnet restore dotnet run --project NetAspire.Arm.AppHost @@ -96,17 +96,17 @@ dotnet run --project NetAspire.Arm.AppHost You will see output similar to this: ![fig19](figures/19.png) -### Exposing the Application to the Public -To make your application accessible publicly, configure firewall rules: -1. In the Google Cloud Console, go to VPC Network > Firewall. +### Exposing the application to the Public +To make your application publicly-accessible, configure the firewall rules: +1. 
In the Google Cloud Console, navigate to **VPC Network** > **Firewall**. 2. Click “Create Firewall Rule” and configure the following: * Name: allow-dotnet-ports * Target Tags: dotnet-app * Source IP Ranges: 0.0.0.0/0 (for public access). * Protocols and Ports: allow TCP on ports 7133, 7511, and 17222. -* Click the Create button. +* Click the **Create** button. 3. Go back to your VM instance. -4. Click Edit, and under Networking find Network Tags, add the tag dotnet-app. +4. Click **Edit**, and under Networking find Network Tags, add the tag dotnet-app. 5. Click the Save button. ### Summary From 9c9955a2bd83ba3d16c5502a41073eeca555a0bc Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Wed, 25 Dec 2024 04:46:47 +0000 Subject: [PATCH 31/96] Tweaked title. --- .../servers-and-cloud-computing/net-aspire/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 941c0f319..f7dd45d76 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -1,5 +1,5 @@ --- -title: Run .NET Aspire applications on Arm-based virtual machines on AWS and GCP +title: Run a .NET Aspire application on Arm-based virtual machines on AWS and GCP minutes_to_complete: 60 From a6ad5ff6bc693f7270e36dae1f5dd8ff8737ce19 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 05:24:36 +0000 Subject: [PATCH 32/96] Enhancing index file. --- .../servers-and-cloud-computing/net-aspire/_index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index f7dd45d76..e9f28ddb5 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -1,13 +1,13 @@ --- -title: Run a .NET Aspire application on Arm-based virtual machines on AWS and GCP +title: Run a .NET Aspire application on Arm-based VMs on AWS and GCP minutes_to_complete: 60 -who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Amazon Web Services (AWS) and Google Cloud Platform (GCP). +who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based Virtual Machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP). learning_objectives: - - Describe .NET Aspire. - - Create a .NET Aspire project and deploy it to Arm-powered virtual machines in the Cloud. + - Describe .NET Aspire, and what it can achieve. + - Create a .NET Aspire project, and deploy it to Arm-powered virtual machines in the Cloud. prerequisites: - A Windows on Arm machine, for example [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), or a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. From 6cbee6e1268dcfbd0b772720753e53d1c453bf2c Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 05:36:18 +0000 Subject: [PATCH 33/96] Reviewed questions. 
--- .../net-aspire/_index.md | 2 +- .../net-aspire/_review.md | 22 +++++++++---------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index e9f28ddb5..27d57c361 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -3,7 +3,7 @@ title: Run a .NET Aspire application on Arm-based VMs on AWS and GCP minutes_to_complete: 60 -who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based Virtual Machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP). +who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based virtual machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP). learning_objectives: - Describe .NET Aspire, and what it can achieve. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md index 64b1d38ef..c38f28e6d 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md @@ -4,9 +4,9 @@ review: question: > Which command do you use to install the Aspire workload on an Arm-powered VM? answers: - - sudo apt install aspire - - dotnet workload install aspire - - dotnet install aspire --arm64 + - sudo apt install aspire. + - dotnet workload install aspire. + - dotnet install aspire --arm64. correct_answer: 2 explanation: > The correct command to install the Aspire workload is `dotnet workload install aspire`, as it uses the .NET CLI to manage workloads. @@ -15,23 +15,23 @@ review: question: > When creating an AWS EC2 instance, which step ensures secure remote access to the VM? answers: - - Creating a new key pair in the "Key pair (login)" section - - Selecting the appropriate security group for the instance - - Allowing HTTP and HTTPS traffic in the network settings + - Creating a new key pair in the "Key pair (login)" section. + - Selecting the appropriate security group for the instance. + - Allowing HTTP and HTTPS traffic in the network settings. correct_answer: 1 explanation: > Creating a new key pair in the "Key pair (login)" section generates a private key file that is essential for secure SSH access to the EC2 instance. - questions: question: > - In Google Cloud Platform, what series should you select to use an Arm64 processor for your VM? + In Google Cloud Platform, which series should you select to use an Arm64 processor for your VM? answers: - - T2A (Ampere Altra Arm) - - E2 (General Purpose) - - N2D (Compute Optimized) + - T2A (Ampere Altra Arm). + - E2 (General Purpose). + - N2D (Compute Optimized). correct_answer: 1 explanation: > - The T2A series (Ampere Altra Arm) is designed specifically for Arm64 processors and provides cost-effective, high-performance computing in GCP. + The T2A series (Ampere Altra Arm) is designed specifically for Arm64 processors and provides cost-effective, high-performance computing in the Google Cloud Platform. 
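Editor's aside on the T2A answer above: the claim can be checked from the command line. A sketch, assuming the gcloud CLI is installed and authenticated; the zone is an example, not a requirement:

```console
# List the Arm-based T2A machine types offered in a given zone:
gcloud compute machine-types list --zones=us-central1-a --filter="name~t2a"
```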
# ================================================================================
# FIXED, DO NOT MODIFY
From 6ce3a0c18ee45f88f695740397e98a2513b051ef Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 27 Dec 2024 05:50:04 +0000
Subject: [PATCH 34/96] Review aws.md.

---
 .../net-aspire/aws.md | 45 ++++++++++---------
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md
index 67cc7afd4..db098b81f 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md
@@ -7,10 +7,10 @@ layout: learningpathall
 ---

 ### Objective
-In this section, you will learn how to deploy the .NET Aspire application on to an AWS Elastic Compute Cloud (EC2) Virtual Machine powered by Arm-based processors, such as AWS Graviton. This leverages the cost and performance benefits of Arm architecture while demonstrating the seamless deployment of cloud-native applications on modern infrastructure.
+In this section, you will learn how to deploy the .NET Aspire application on to an AWS Elastic Compute Cloud (EC2) Virtual Machine powered by Arm-based processors, such as AWS Graviton. This allows you to leverage the cost and performance benefits of Arm architecture while benefiting from the seamless deployment of cloud-native applications on modern infrastructure.

 ### Set up your AWS EC2 Instance
-Follow these steps to set up an Arm-powered EC2 instance:
+To set up an Arm-powered EC2 instance, follow these steps:
 1. Log in to the [AWS Management Console](http://console.aws.amazon.com).
 2. Navigate to the EC2 Service.

    As shown in Figure 5, in the search box, type "EC2".

    Then, click on **EC2** in the search results:

-![Figure 5 alt-text#center](figures/05.png "Figure 5: Search for EC2 Service in the AWS Management Console.")
+![Figure 5 alt-text#center](figures/05.png "Figure 5: Search for the EC2 Service in the AWS Management Console.")

-3. In the EC2 Dashboard, click **Launch Instance** and add this information to configure your setup:
+3. In the EC2 Dashboard, click **Launch Instance** and add the following information in the corresponding data fields to configure your setup:
 * Name: type "arm-server".
 * AMI: select **Arm-compatible Amazon Machine Image, Ubuntu 22.04 LTS for Arm64**.
 * Architecture: select **64-bit (Arm)**.
 * Instance Type: select **t4g.small**.

-The configuration should look like the configuration fields shown in Figure 6:
+The configuration should look like the configuration fields that are shown in Figure 6:

-![Figure 6 alt-text#center](figures/06.png "Figure 6: Configuration.")
+![Figure 6 alt-text#center](figures/06.png "Figure 6: Configuration Fields.")

 4. Scroll down to **Key pair** (login), and click **Create new key pair**.
    This displays the "Create key pair" window.
    Now configure the following fields:
 * Key pair name: **arm-key-pair**.
 * Key pair type: **RSA**.
 * Private key format: **.pem**.
 * Click the **Create key pair** button, and download the key pair to your computer.

 ![fig7](figures/07.png)

-5. Scroll down to "Network Settings", and confgure the settings in this way:
-* Subnet: select no preference.
-* Auto-assign public IP: Enable.
-* Firewall: Check Create security group.
+5. Scroll down to "Network Settings", and configure the settings:
 * VPC: select the default.
+* Subnet: select **No preference**.
+* Auto-assign public IP: **Enable**.
+* Firewall: Check **Create security group**.
 * Security group name: arm-security-group.
 * Description: arm-security-group.
* Inbound security groups. ![fig8](figures/08.png) -6. Configure "Inbound Security Group Rules" by clicking "Add Rule" and then setting the following details: +6. Configure "Inbound Security Group Rules" by selecting **Add Rule** and then setting the following details: * Type: Custom TCP. * Protocol: TCP. * Port Range: 7133. * Source: Select "Anywhere (0.0.0.0/0)" for public access or restrict access to your specific IP for better security. -* Repeat this step for all three ports the application is using. This example demonstrates setup using ports 7133, 7511, and 17222. These must match the values that you have when you run the app locally. + +Repeat this step for all three ports that the application is using. This example demonstrates setup using ports 7133, 7511, and 17222. These must match the values that you have when you run the app locally. The configuration should look like: ![fig9](figures/09.png) -7. Launch an instance by clicking the **Launch instance** button. You should see the green box with the Success label. This box also contains a link to the EC2 instance. Click it, and it will take you to the instance dashboard, which looks like Figure 10: +7. Launch an instance by clicking the **Launch instance** button. You should see the green box with the Success label. This box also contains a link to the EC2 instance. Click it, and it takes you to the instance dashboard, which looks like Figure 10: ![fig10](figures/10.png) ### Deploy the application -Once the EC2 instance is ready, you can connect to it and deploy the application. Follow these steps to connect: -1. Locate the instance public IP (e.g. 98.83.137.101 in this case). +Once the EC2 instance is ready, you can connect to it, and deploy the application. Follow these steps to connect: +1. Locate the instance public IP (here this is 98.83.137.101). 2. Use an SSH client to connect: -* Open the terminal -* Set appropriate permissions for the key pair file (remember to use your IP address) +* Open the terminal. +* Set the appropriate permissions for the key pair file, using your own IP address: ```console chmod 400 arm-key-pair.pem ssh -i arm-key-pair.pem ubuntu@98.83.137.101 @@ -79,13 +80,13 @@ ssh -i arm-key-pair.pem ubuntu@98.83.137.101 ![fig11](figures/11.png) -You can now install required components, pull the application code from git, and launch the app: +You can now install the required components, pull the application code from git, and launch the app: In the EC2 terminal run: ```console sudo apt update && sudo apt upgrade -y ``` -This will update the package list and upgrade the installed packages. +This updates the package list and upgrades the installed packages. Install .NET SDK using the following commands: ```console @@ -99,11 +100,11 @@ Verify the installation: ```console dotnet --version ``` -Install the Aspire workload using the dotnet CLI +Install the Aspire workload using the dotnet CLI: ```console dotnet workload install aspire ``` -Clone the repository which contains the application you created in the previous section: +Clone the repository that contains the application that you created in the previous section: ```console git clone https://github.com/dawidborycki/NetAspire.Arm.git cd NetAspire.Arm/ @@ -112,13 +113,13 @@ Trust the development certificate: ```console dotnet dev-certs https --trust ``` - Build and run the project: +Build and run the project: ```console dotnet restore dotnet run --project NetAspire.Arm.AppHost ``` -The application will run the same way as locally. 
You should see the following:
+The application runs the same way as it does locally. You should see the following:

 ![fig12](figures/12.png)

-Finally, open the application in the web browser (using the EC2's public IP):
+Finally, open the application in the web browser, using the EC2's public IP:

 ![fig13](figures/13.png)

 ### Summary
-You have successfully deployed the Aspire app onto an Arm-powered AWS EC2 instance. This demonstrates the compatibility of .NET applications with Arm architecture and AWS Graviton instances, offering high performance and cost-efficiency.
+You have successfully deployed the Aspire app on to an Arm-powered AWS EC2 instance. This demonstrates the compatibility of .NET applications with Arm architecture and AWS Graviton instances, offering high performance and cost-efficiency.
From 90bdd5b9162f55652d25e56736cb75bb66f9ccf3 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 27 Dec 2024 08:57:12 +0000
Subject: [PATCH 35/96] Split second learning objective into two objectives.

---
 .../servers-and-cloud-computing/net-aspire/_index.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
index 27d57c361..2ea4e98db 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
@@ -6,9 +6,9 @@ minutes_to_complete: 60
 who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based virtual machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP).

 learning_objectives:
-  - Describe .NET Aspire, and what it can achieve.
-  - Create a .NET Aspire project, and deploy it to Arm-powered virtual machines in the Cloud.
-
+  - Describe .NET Aspire, including what it can achieve.
+  - Create a .NET Aspire application.
+  - Deploy a .NET Aspire application to Arm-powered virtual machines in the Cloud.
 prerequisites:

From 780b6cb2b401f36c299b5a6b0457fa202f455f7f Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Fri, 27 Dec 2024 09:22:03 +0000
Subject: [PATCH 36/96] Added a fourth LO.

---
 .../servers-and-cloud-computing/net-aspire/_index.md | 3 ++-
 .../servers-and-cloud-computing/net-aspire/background.md | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
index 2ea4e98db..049a500cb 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md
@@ -3,11 +3,12 @@ title: Run a .NET Aspire application on Arm-based VMs on AWS and GCP

 minutes_to_complete: 60

-who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based virtual machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP).
+who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based Virtual Machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP).

 learning_objectives:
   - Describe .NET Aspire, including what it can achieve.
   - Create a .NET Aspire application.
+  - Modify code on a Windows on Arm development machine.
   - Deploy a .NET Aspire application to Arm-powered virtual machines in the Cloud.
 prerequisites:
   - A Windows on Arm machine, for example [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), or a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. 
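Editor's aside: the objectives above now separate creating an Aspire application from deploying it, so a concrete sense of what the created AppHost looks like may help readers of this series. Below is a minimal sketch of an AppHost `Program.cs`, assuming the standard Aspire starter layout; the `Projects.NetAspire_Arm_Web` name is illustrative and inferred from the `NetAspire.Arm` solution used in these patches, not taken from the author's repository.

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// A local Redis container resource; Aspire wires up its connection string.
var cache = builder.AddRedis("cache");

// Reference the cache from the web front end so configuration is injected.
builder.AddProject<Projects.NetAspire_Arm_Web>("webfrontend")
       .WithReference(cache);

builder.Build().Run();
```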
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md index e146dd30f..fa99c8b9a 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md @@ -23,11 +23,11 @@ Cloud-native applications are typically composed of small, interconnected servic * Environment variables. * Container configurations. -Specifically, with a few helper method calls, you can create local resources, such as a Redis container, wait for the resources to become available, and then configure appropriate connection strings in your projects. +With a few helper method calls, you can create local resources, wait for the resources to become available, and then configure appropriate connection strings in your projects. -.NET Aspire offers integrations for popular services like Redis and PostgreSQL, ensuring standardized interfaces and seamless connections with your app. These integrations handle specific cloud-native requirements such as health checks and telemetry through consistent configuration patterns. By referencing named resources, configurations are injected automatically, simplifying the process of connecting services. +.NET Aspire offers integrations for popular services like Redis and PostgreSQL, ensuring standardized interfaces and seamless connections with your app. These integrations handle specific cloud-native requirements through consistent configuration patterns. By referencing named resources, configurations are injected automatically, simplifying the process of connecting services. -.NET Aspire provides project templates that include boilerplate code and configurations common to cloud-native apps, such as the aforementioned health checks and telemetry, as well as service discovery. It offers tooling experiences for Visual Studio, Visual Studio Code, and .NET CLI to help you create and interact with .NET Aspire projects. The templates come with default settings that you can use to get started quickly, which reduces setup time and increases productivity. +.NET Aspire provides project templates that include boilerplate code and configurations common to cloud-native apps, such as health checks and telemetry, as well as service discovery. It offers tooling experiences for Visual Studio, Visual Studio Code, and .NET CLI to help you create and interact with .NET Aspire projects. The templates come with default settings that you can use to get started quickly, which reduces setup time and increases productivity. By providing a consistent set of tools and patterns, .NET Aspire streamlines the development process of cloud-native applications. It manages complex applications during the development phase without dealing with low-level implementation details. .NET Aspire easily connects to commonly-used services with standardized interfaces and configurations. There are also various templates and tooling to accelerate project setup and development cycles. From fdb099b1e28fe78da93a86fe86e4feb789dc2258 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 09:39:58 +0000 Subject: [PATCH 37/96] Created new page on running the project. 
---
 .../net-aspire/aws.md        |   2 +-
 .../net-aspire/background.md |   2 +-
 .../net-aspire/gcp.md        |   2 +-
 .../net-aspire/project.md    |  14 +-
 .../net-aspire/run_app.md    | 123 ++++++++++++++++++
 5 files changed, 133 insertions(+), 10 deletions(-)
 create mode 100644 content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md

diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md
index db098b81f..6a14b4142 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md
@@ -1,6 +1,6 @@
 ---
 title: Deploy to AWS EC2
-weight: 4
+weight: 5
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md
index fa99c8b9a..fdfb39b96 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md
@@ -31,4 +31,4 @@ With a few helper method calls, you can create local resources, wait for the res
 
 By providing a consistent set of tools and patterns, .NET Aspire streamlines the development process of cloud-native applications. It manages complex applications during the development phase without dealing with low-level implementation details. .NET Aspire easily connects to commonly-used services with standardized interfaces and configurations. There are also various templates and tooling to accelerate project setup and development cycles.
 
-In this Learning Path, you will learn how to create a .NET Aspire application, describe the project, and modify the code on a Windows on Arm development machine. You will then deploy the application to AWS and GCP Arm-powered virtual machines.
+In this Learning Path, you will learn how to create a .NET Aspire application, describe the project, and modify the code on a Windows on Arm development machine. You will then deploy the application first to an AWS Arm-powered virtual machine, and then to a GCP Arm-powered virtual machine.
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md
index 68c422e89..a0612cba3 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md
@@ -1,6 +1,6 @@
 ---
 title: Deploy to GCP
-weight: 5
+weight: 6
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md
index c37a4bb85..3a7fd5713 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md
@@ -1,5 +1,5 @@
 ---
-title: Application
+title: Create an application
 weight: 3
 
 ### FIXED, DO NOT MODIFY
@@ -11,11 +11,11 @@ In this section, you will set up the project. This involves several steps, inclu
 ## Create a Project
 To create a .NET Aspire application, first ensure that you have [.NET 8.0 or later installed](https://dotnet.microsoft.com/en-us/download/dotnet) on your Windows on Arm development machine. 
-Open a Powershell terminal and run: +To find out which version you have, open a Powershell terminal and run: ```console dotnet --version ``` -The output should return the version of .NET SDK installed on your machine. +The output should tell you which version of .NET SDK you have installed on your machine. Next, install the Aspire workload: @@ -39,13 +39,13 @@ dotnet new aspire-starter -o NetAspire.Arm ``` This command generates a solution with the following structure: -* NetAspire.Arm.AppHost - the orchestrator, or coordinator, project serves as the backbone of your distributed application. Its primary responsibilities include defining how services connect to one another, configuring ports and endpoints to ensure seamless communication, managing service discovery to enable efficient interactions between components, and handling container orchestration to streamline the deployment and operation of services within your application. +* **NetAspire.Arm.AppHost** - the orchestrator, or coordinator, project serves as the backbone of your distributed application. Its primary responsibilities include defining how services connect to one another, configuring ports and endpoints to ensure seamless communication, managing service discovery to enable efficient interactions between components, and handling container orchestration to streamline the deployment and operation of services within your application. -* NetAspire.Arm.ApiService - the sample REST API service, built with ASP.NET Core, acts as a core component of your application by implementing business logic and managing data access. The default implementation comes preconfigured with essential features, including a WeatherForecast endpoint for demonstration purposes, built-in health checks to monitor the service’s status, and telemetry setup to track performance and usage metrics. +* **NetAspire.Arm.ApiService** - the sample REST API service, built with ASP.NET Core, acts as a core component of your application by implementing business logic and managing data access. The default implementation comes preconfigured with essential features, including a WeatherForecast endpoint for demonstration purposes, built-in health checks to monitor the service’s status, and telemetry setup to track performance and usage metrics. -* NetAspire.Arm.Web - the web frontend application, implemented with Blazor, serves as the user-facing layer of your application. It communicates with the API service to provide an interactive experience. This application includes a user interface for presenting data, client-side logic for handling interactions, and preconfigured patterns for consuming services. +* **NetAspire.Arm.Web** - the web frontend application, implemented with Blazor, serves as the user-facing layer of your application. It communicates with the API service to provide an interactive experience. This application includes a user interface for presenting data, client-side logic for handling interactions, and preconfigured patterns for consuming services. -* NetAspire.Arm.ServiceDefaults - the shared library provides a centralized foundation for common service configurations across your application. It includes a default middleware setup, preconfigured telemetry settings for tracking performance, standard health check implementations, and logging configurations to ensure consistent and efficient monitoring and debugging. 
+* **NetAspire.Arm.ServiceDefaults** - the shared library provides a centralized foundation for common service configurations across your application. It includes a default middleware setup, preconfigured telemetry settings for tracking performance, standard health check implementations, and logging configurations to ensure consistent and efficient monitoring and debugging. The structure of this project is designed to enhance efficiency and simplify the development of cloud-native applications. At its core, it incorporates features to ensure seamless service interactions, robust monitoring, and an exceptional development experience. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md new file mode 100644 index 000000000..d3c86867c --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md @@ -0,0 +1,123 @@ +--- +title: Run the Project +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- +## Run the Project +The application will issue a certificate. Before you run the application, add support to trust the HTTPS development certificate by running: + +```console +dotnet dev-certs https --trust +``` + +Now run the project: +```console +cd .\NetAspire.Arm\ +dotnet run --project NetAspire.Arm.AppHost +``` + +The output will look like below: +```output +Building... +info: Aspire.Hosting.DistributedApplication[0] + Aspire version: 8.2.2+5fa9337a84a52e9bd185d04d156eccbdcf592f74 +info: Aspire.Hosting.DistributedApplication[0] + Distributed application starting. +info: Aspire.Hosting.DistributedApplication[0] + Application host directory is: /Users/db/Repos/NetAspire.Arm/NetAspire.Arm.AppHost +info: Aspire.Hosting.DistributedApplication[0] + Now listening on: https://localhost:17222 +info: Aspire.Hosting.DistributedApplication[0] + Login to the dashboard at https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0 +``` + +Click on the link generated for the dashboard. In this case it is: https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0. This will direct you to the application dashboard, as shown below: + +![fig1](figures/01.png) + +On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. This will take you to the Blazor based web application. In the Blazor app, navigate to the Weather section to access and display data retrieved from the WeatherForecast API: + +![fig2](figures/02.png) + +Return to the dashboard and select the Traces option. This section provides detailed telemetry tracing, allowing you to view the flow of requests, track service dependencies, and analyze performance metrics for your application: + +![fig3](figures/03.png) + +By following these steps, you will explore the key components of the .NET Aspire application, including its dashboard, data interaction through APIs, and telemetry tracing capabilities. + +## Modify the Project +You will now include additional code for the purpose of demonstrating computation intense work. Go to the `NetAspire.Arm.ApiService` project, and create a new file `ComputationService.cs`. 
Add the code shown below to this file: + +```cs +static class ComputationService +{ + public static void PerformIntensiveCalculations(int matrixSize) + { + var matrix1 = GenerateMatrix(matrixSize); + var matrix2 = GenerateMatrix(matrixSize); + + // Matrix multiplication + var matrixResult = Enumerable.Range(0, matrixSize) + .SelectMany(i => Enumerable.Range(0, matrixSize) + .Select(j => + { + double sum = 0; + for (int k = 0; k < matrixSize; k++) + { + sum += matrix1[i * matrixSize + k] * matrix2[k * matrixSize + j]; + } + return sum; + })) + .ToArray(); + } + + private static double[] GenerateMatrix(int matrixSize) { + return Enumerable.Range(1, matrixSize * matrixSize) + .Select(x => Random.Shared.NextDouble()) + .ToArray(); + } +} +``` + +This code defines a static class, ComputationService, designed to perform computationally intensive tasks, specifically matrix multiplication. It contains a public method, PerformIntensiveCalculations, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. + +The private method GenerateMatrix creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double value generated using Random.Shared.NextDouble(). + +The public method PerformIntensiveCalculations multiplies two matrices (matrix1 and matrix2) element by element using nested loops and LINQ. It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. The result of the multiplication is stored in a flattened one-dimensional array, matrixResult. + +This code is provided for demonstrating heavy computational operations, such as large matrix manipulations, and can simulate workloads in scenarios that mimic intensive data processing or scientific calculations. + +Then, open the `Program.cs` file in the `NetAspire.Arm.ApiService` directory and add modify the `MapGet` function of the app as shown: + +```cs +app.MapGet("/weatherforecast", () => +{ + ComputationService.PerformIntensiveCalculations(matrixSize: 800); + + var forecast = Enumerable.Range(1, 5).Select(index => + new WeatherForecast + ( + DateOnly.FromDateTime(DateTime.Now.AddDays(index)), + Random.Shared.Next(-20, 55), + summaries[Random.Shared.Next(summaries.Length)] + )) + .ToArray(); + return forecast; +}); +``` + +This will trigger matrix multiplications when you click Weather in the web frontend application. + +To test the code, re-run the application using the following command: + +```console +dotnet run --project NetAspire.Arm.AppHost +``` + +Next, navigate to the web frontend, click Weather, and then return to the dashboard. Click Traces to observe that the operation now takes significantly longer to complete—approximately 4 seconds in the example below: + +![fig4](figures/04.png) + +You are now ready to deploy the application to the cloud. 
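One possible variation on the computation shown above, for readers who want to see the instance's cores being used: because each row of the result matrix is independent, the same multiplication can be spread across cores with `Parallel.For`. The sketch below is illustrative only, not part of the Learning Path's code, and assumes .NET 8 implicit usings:

```cs
// Illustrative sketch: a parallel variant of the matrix multiplication.
// Rows of the result are independent, so they can be computed concurrently
// without locking, letting the work scale across the cores of an Arm VM.
static class ParallelComputationService
{
    public static double[] PerformIntensiveCalculations(int matrixSize)
    {
        var matrix1 = GenerateMatrix(matrixSize);
        var matrix2 = GenerateMatrix(matrixSize);
        var result = new double[matrixSize * matrixSize];

        Parallel.For(0, matrixSize, i =>
        {
            for (int j = 0; j < matrixSize; j++)
            {
                double sum = 0;
                for (int k = 0; k < matrixSize; k++)
                {
                    sum += matrix1[i * matrixSize + k] * matrix2[k * matrixSize + j];
                }
                result[i * matrixSize + j] = sum;
            }
        });

        return result;
    }

    private static double[] GenerateMatrix(int matrixSize) =>
        Enumerable.Range(1, matrixSize * matrixSize)
                  .Select(_ => Random.Shared.NextDouble())
                  .ToArray();
}
```

Comparing the `/weatherforecast` trace duration in the dashboard before and after such a change is a quick way to observe multicore scaling on the instance.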
From e453acace5e60b6bac40fe2038ad0a8f22552683 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 12:18:41 +0000 Subject: [PATCH 38/96] Update --- .../net-aspire/_index.md | 2 +- .../net-aspire/aws.md | 2 +- .../net-aspire/gcp.md | 2 +- .../net-aspire/modify_project.md | 82 +++++++++++++ .../net-aspire/project.md | 116 ------------------ .../net-aspire/run_app.md | 78 +----------- 6 files changed, 87 insertions(+), 195 deletions(-) create mode 100644 content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 049a500cb..0c7618057 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -20,7 +20,7 @@ author_primary: Dawid Borycki ### Tags skilllevels: Introductory subjects: Containers and Virtualization -cloud_service_providers: AWS, Google Cloud +cloud_service_providers: AWS, Google Cloud armips: - Neoverse diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index 6a14b4142..6763edbcf 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -1,6 +1,6 @@ --- title: Deploy to AWS EC2 -weight: 5 +weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md index a0612cba3..f791fb39c 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md @@ -1,6 +1,6 @@ --- title: Deploy to GCP -weight: 6 +weight: 7 ### FIXED, DO NOT MODIFY layout: learningpathall diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md new file mode 100644 index 000000000..411452b5c --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md @@ -0,0 +1,82 @@ +--- +title: Modify the Project +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Modify the Project +You will now include additional code for the purpose of demonstrating computation intense work. Go to the `NetAspire.Arm.ApiService` project, and create a new file `ComputationService.cs`. 
Add the code shown below to this file: + +```cs +static class ComputationService +{ + public static void PerformIntensiveCalculations(int matrixSize) + { + var matrix1 = GenerateMatrix(matrixSize); + var matrix2 = GenerateMatrix(matrixSize); + + // Matrix multiplication + var matrixResult = Enumerable.Range(0, matrixSize) + .SelectMany(i => Enumerable.Range(0, matrixSize) + .Select(j => + { + double sum = 0; + for (int k = 0; k < matrixSize; k++) + { + sum += matrix1[i * matrixSize + k] * matrix2[k * matrixSize + j]; + } + return sum; + })) + .ToArray(); + } + + private static double[] GenerateMatrix(int matrixSize) { + return Enumerable.Range(1, matrixSize * matrixSize) + .Select(x => Random.Shared.NextDouble()) + .ToArray(); + } +} +``` + +This code defines a static class, ComputationService, designed to perform computationally intensive tasks, specifically matrix multiplication. It contains a public method, PerformIntensiveCalculations, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. + +The private method GenerateMatrix creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double value generated using Random.Shared.NextDouble(). + +The public method PerformIntensiveCalculations multiplies two matrices (matrix1 and matrix2) element by element using nested loops and LINQ. It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. The result of the multiplication is stored in a flattened one-dimensional array, matrixResult. + +This code is provided for demonstrating heavy computational operations, such as large matrix manipulations, and can simulate workloads in scenarios that mimic intensive data processing or scientific calculations. + +Then, open the `Program.cs` file in the `NetAspire.Arm.ApiService` directory and add modify the `MapGet` function of the app as shown: + +```cs +app.MapGet("/weatherforecast", () => +{ + ComputationService.PerformIntensiveCalculations(matrixSize: 800); + + var forecast = Enumerable.Range(1, 5).Select(index => + new WeatherForecast + ( + DateOnly.FromDateTime(DateTime.Now.AddDays(index)), + Random.Shared.Next(-20, 55), + summaries[Random.Shared.Next(summaries.Length)] + )) + .ToArray(); + return forecast; +}); +``` + +This will trigger matrix multiplications when you click Weather in the web frontend application. + +To test the code, re-run the application using the following command: + +```console +dotnet run --project NetAspire.Arm.AppHost +``` + +Next, navigate to the web frontend, click Weather, and then return to the dashboard. Click Traces to observe that the operation now takes significantly longer to complete—approximately 4 seconds in the example below: + +![fig4](figures/04.png) + +You are now ready to deploy the application to the cloud. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md index 3a7fd5713..dcb0e7e6c 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md @@ -59,119 +59,3 @@ The architecture is also tailored to improve the development experience. 
Develop This thoughtfully crafted architecture embodies microservices best practices, promoting scalability, maintainability, and service isolation. It not only simplifies deployment and monitoring but also fosters developer productivity by streamlining workflows and providing intuitive tools for building modern, distributed applications. -## Run the Project -The application will issue a certificate. Before you run the application, add support to trust the HTTPS development certificate by running: - -```console -dotnet dev-certs https --trust -``` - -Now run the project: -```console -cd .\NetAspire.Arm\ -dotnet run --project NetAspire.Arm.AppHost -``` - -The output will look like below: -```output -Building... -info: Aspire.Hosting.DistributedApplication[0] - Aspire version: 8.2.2+5fa9337a84a52e9bd185d04d156eccbdcf592f74 -info: Aspire.Hosting.DistributedApplication[0] - Distributed application starting. -info: Aspire.Hosting.DistributedApplication[0] - Application host directory is: /Users/db/Repos/NetAspire.Arm/NetAspire.Arm.AppHost -info: Aspire.Hosting.DistributedApplication[0] - Now listening on: https://localhost:17222 -info: Aspire.Hosting.DistributedApplication[0] - Login to the dashboard at https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0 -``` - -Click on the link generated for the dashboard. In this case it is: https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0. This will direct you to the application dashboard, as shown below: - -![fig1](figures/01.png) - -On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. This will take you to the Blazor based web application. In the Blazor app, navigate to the Weather section to access and display data retrieved from the WeatherForecast API: - -![fig2](figures/02.png) - -Return to the dashboard and select the Traces option. This section provides detailed telemetry tracing, allowing you to view the flow of requests, track service dependencies, and analyze performance metrics for your application: - -![fig3](figures/03.png) - -By following these steps, you will explore the key components of the .NET Aspire application, including its dashboard, data interaction through APIs, and telemetry tracing capabilities. - -## Modify the Project -You will now include additional code for the purpose of demonstrating computation intense work. Go to the `NetAspire.Arm.ApiService` project, and create a new file `ComputationService.cs`. Add the code shown below to this file: - -```cs -static class ComputationService -{ - public static void PerformIntensiveCalculations(int matrixSize) - { - var matrix1 = GenerateMatrix(matrixSize); - var matrix2 = GenerateMatrix(matrixSize); - - // Matrix multiplication - var matrixResult = Enumerable.Range(0, matrixSize) - .SelectMany(i => Enumerable.Range(0, matrixSize) - .Select(j => - { - double sum = 0; - for (int k = 0; k < matrixSize; k++) - { - sum += matrix1[i * matrixSize + k] * matrix2[k * matrixSize + j]; - } - return sum; - })) - .ToArray(); - } - - private static double[] GenerateMatrix(int matrixSize) { - return Enumerable.Range(1, matrixSize * matrixSize) - .Select(x => Random.Shared.NextDouble()) - .ToArray(); - } -} -``` - -This code defines a static class, ComputationService, designed to perform computationally intensive tasks, specifically matrix multiplication. It contains a public method, PerformIntensiveCalculations, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. 
- -The private method GenerateMatrix creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double value generated using Random.Shared.NextDouble(). - -The public method PerformIntensiveCalculations multiplies two matrices (matrix1 and matrix2) element by element using nested loops and LINQ. It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. The result of the multiplication is stored in a flattened one-dimensional array, matrixResult. - -This code is provided for demonstrating heavy computational operations, such as large matrix manipulations, and can simulate workloads in scenarios that mimic intensive data processing or scientific calculations. - -Then, open the `Program.cs` file in the `NetAspire.Arm.ApiService` directory and add modify the `MapGet` function of the app as shown: - -```cs -app.MapGet("/weatherforecast", () => -{ - ComputationService.PerformIntensiveCalculations(matrixSize: 800); - - var forecast = Enumerable.Range(1, 5).Select(index => - new WeatherForecast - ( - DateOnly.FromDateTime(DateTime.Now.AddDays(index)), - Random.Shared.Next(-20, 55), - summaries[Random.Shared.Next(summaries.Length)] - )) - .ToArray(); - return forecast; -}); -``` - -This will trigger matrix multiplications when you click Weather in the web frontend application. - -To test the code, re-run the application using the following command: - -```console -dotnet run --project NetAspire.Arm.AppHost -``` - -Next, navigate to the web frontend, click Weather, and then return to the dashboard. Click Traces to observe that the operation now takes significantly longer to complete—approximately 4 seconds in the example below: - -![fig4](figures/04.png) - -You are now ready to deploy the application to the cloud. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md index d3c86867c..9f76b704e 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md @@ -1,10 +1,11 @@ --- -title: Run the Project +title: Run the application weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- + ## Run the Project The application will issue a certificate. Before you run the application, add support to trust the HTTPS development certificate by running: @@ -46,78 +47,3 @@ Return to the dashboard and select the Traces option. This section provides deta ![fig3](figures/03.png) By following these steps, you will explore the key components of the .NET Aspire application, including its dashboard, data interaction through APIs, and telemetry tracing capabilities. - -## Modify the Project -You will now include additional code for the purpose of demonstrating computation intense work. Go to the `NetAspire.Arm.ApiService` project, and create a new file `ComputationService.cs`. 
Add the code shown below to this file: - -```cs -static class ComputationService -{ - public static void PerformIntensiveCalculations(int matrixSize) - { - var matrix1 = GenerateMatrix(matrixSize); - var matrix2 = GenerateMatrix(matrixSize); - - // Matrix multiplication - var matrixResult = Enumerable.Range(0, matrixSize) - .SelectMany(i => Enumerable.Range(0, matrixSize) - .Select(j => - { - double sum = 0; - for (int k = 0; k < matrixSize; k++) - { - sum += matrix1[i * matrixSize + k] * matrix2[k * matrixSize + j]; - } - return sum; - })) - .ToArray(); - } - - private static double[] GenerateMatrix(int matrixSize) { - return Enumerable.Range(1, matrixSize * matrixSize) - .Select(x => Random.Shared.NextDouble()) - .ToArray(); - } -} -``` - -This code defines a static class, ComputationService, designed to perform computationally intensive tasks, specifically matrix multiplication. It contains a public method, PerformIntensiveCalculations, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. - -The private method GenerateMatrix creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double value generated using Random.Shared.NextDouble(). - -The public method PerformIntensiveCalculations multiplies two matrices (matrix1 and matrix2) element by element using nested loops and LINQ. It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. The result of the multiplication is stored in a flattened one-dimensional array, matrixResult. - -This code is provided for demonstrating heavy computational operations, such as large matrix manipulations, and can simulate workloads in scenarios that mimic intensive data processing or scientific calculations. - -Then, open the `Program.cs` file in the `NetAspire.Arm.ApiService` directory and add modify the `MapGet` function of the app as shown: - -```cs -app.MapGet("/weatherforecast", () => -{ - ComputationService.PerformIntensiveCalculations(matrixSize: 800); - - var forecast = Enumerable.Range(1, 5).Select(index => - new WeatherForecast - ( - DateOnly.FromDateTime(DateTime.Now.AddDays(index)), - Random.Shared.Next(-20, 55), - summaries[Random.Shared.Next(summaries.Length)] - )) - .ToArray(); - return forecast; -}); -``` - -This will trigger matrix multiplications when you click Weather in the web frontend application. - -To test the code, re-run the application using the following command: - -```console -dotnet run --project NetAspire.Arm.AppHost -``` - -Next, navigate to the web frontend, click Weather, and then return to the dashboard. Click Traces to observe that the operation now takes significantly longer to complete—approximately 4 seconds in the example below: - -![fig4](figures/04.png) - -You are now ready to deploy the application to the cloud. From 3a586be94703a4560cb64e4c94f34613f1fcf2cc Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 12:47:30 +0000 Subject: [PATCH 39/96] Structural changes. 
--- .../servers-and-cloud-computing/net-aspire/background.md | 2 +- .../net-aspire/modify_project.md | 3 +++ .../servers-and-cloud-computing/net-aspire/project.md | 7 ++++--- .../servers-and-cloud-computing/net-aspire/run_app.md | 1 - 4 files changed, 8 insertions(+), 5 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md index fdfb39b96..ce828bbf3 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md @@ -1,5 +1,5 @@ --- -title: Background +title: .NET Aspire weight: 2 ### FIXED, DO NOT MODIFY diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md index 411452b5c..ad9347936 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md @@ -7,6 +7,9 @@ layout: learningpathall --- ## Modify the Project + +Now modify the project to add additional computations to mimic computationally-intensive work. + You will now include additional code for the purpose of demonstrating computation intense work. Go to the `NetAspire.Arm.ApiService` project, and create a new file `ComputationService.cs`. Add the code shown below to this file: ```cs diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md index dcb0e7e6c..1a790de5c 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md @@ -6,9 +6,11 @@ weight: 3 layout: learningpathall --- -In this section, you will set up the project. This involves several steps, including installing the Aspire workload. Then, you will learn about the project structure and launch it locally. Finally, you will modify the project to add additional computations to mimic computationally-intensive work. ## Create a Project + +In this section, you will set up the project, which involves installing the Aspire workload. + To create a .NET Aspire application, first ensure that you have [.NET 8.0 or later installed](https://dotnet.microsoft.com/en-us/download/dotnet) on your Windows on Arm development machine. To find out which version you have, open a Powershell terminal and run: @@ -37,7 +39,6 @@ Once the Aspire workload is installed, you can create a new application by execu ```console dotnet new aspire-starter -o NetAspire.Arm ``` - This command generates a solution with the following structure: * **NetAspire.Arm.AppHost** - the orchestrator, or coordinator, project serves as the backbone of your distributed application. Its primary responsibilities include defining how services connect to one another, configuring ports and endpoints to ensure seamless communication, managing service discovery to enable efficient interactions between components, and handling container orchestration to streamline the deployment and operation of services within your application. @@ -57,5 +58,5 @@ Configuration management offers environment-based settings that make deploying a The architecture is also tailored to improve the development experience. 
Developers can benefit from local debugging support and a powerful monitoring dashboard. This dashboard provides a detailed view of service health, logs, metrics, trace information, resource usage, and service dependencies. Additionally, hot reload capability allows real-time updates during development, and container support ensures consistency across local and production environments. -This thoughtfully crafted architecture embodies microservices best practices, promoting scalability, maintainability, and service isolation. It not only simplifies deployment and monitoring but also fosters developer productivity by streamlining workflows and providing intuitive tools for building modern, distributed applications. +This thoughtfully-crafted architecture embodies microservices best practices, promoting scalability, maintainability, and service isolation. It not only simplifies deployment and monitoring, but also fosters developer productivity by streamlining workflows and providing intuitive tools for building modern, distributed applications. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md index 9f76b704e..e1d3ef5c3 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md @@ -5,7 +5,6 @@ weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- - ## Run the Project The application will issue a certificate. Before you run the application, add support to trust the HTTPS development certificate by running: From adc43970136944eaaf5de7cdd1fcdd488a937b63 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 14:32:13 +0000 Subject: [PATCH 40/96] More updates, including Figure titles. --- .../net-aspire/_index.md | 4 +- .../net-aspire/project.md | 44 +++++++++++++++---- .../net-aspire/run_app.md | 12 ++--- 3 files changed, 43 insertions(+), 17 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 0c7618057..127d1c097 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -11,9 +11,9 @@ learning_objectives: - Modify code on a Windows on Arm development machine. - Deploy a .NET Aspire application to Arm-powered virtual machines in the Cloud. prerequisites: - - A Windows on Arm machine, for example [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), or a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. + - A Windows on Arm machine, for example the [Windows Dev Kit 2023](https://learn.microsoft.com/en-us/windows/arm/dev-kit), or a Lenovo Thinkpad X13s running Windows 11 to build the .NET Aspire project. - An [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp/) from AWS or GCP. - - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable. + - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is an example of a suitable editor. 
author_primary: Dawid Borycki
 
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md
index 1a790de5c..adbaae797 100644
--- a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md
+++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md
@@ -1,5 +1,5 @@
 ---
-title: Create an application
+title: Create a project and then an application
 weight: 3
 
 ### FIXED, DO NOT MODIFY
@@ -7,7 +7,7 @@ layout: learningpathall
 ---
 
 
-## Create a Project
+## Create a project
 
 In this section, you will set up the project, which involves installing the Aspire workload.
 
@@ -34,21 +34,41 @@ Installing Aspire.ProjectTemplates.Msi.arm64 ..... Done
 Successfully installed workload(s) aspire.
 ```
+## Create an application
+
 Once the Aspire workload is installed, you can create a new application by executing:
 
 ```console
 dotnet new aspire-starter -o NetAspire.Arm
 ```
 This command generates a solution with the following structure:
-* **NetAspire.Arm.AppHost** - the orchestrator, or coordinator, project serves as the backbone of your distributed application. Its primary responsibilities include defining how services connect to one another, configuring ports and endpoints to ensure seamless communication, managing service discovery to enable efficient interactions between components, and handling container orchestration to streamline the deployment and operation of services within your application.
+* **NetAspire.Arm.AppHost** - the orchestrator, or coordinator, project serves as the backbone of your distributed application. Its primary responsibilities include:
+
+  - Defining how services connect to one another.
+  - Configuring ports and endpoints to ensure seamless communication.
+  - Managing service discovery to enable efficient interactions between components.
+  - Handling container orchestration to streamline the deployment and operation of services within your application.
+
+* **NetAspire.Arm.ApiService** - the sample REST API service, built with ASP.NET Core, acts as a core component of your application by implementing business logic and managing data access. The default implementation comes preconfigured with essential features that include:
 
-* **NetAspire.Arm.ApiService** - the sample REST API service, built with ASP.NET Core, acts as a core component of your application by implementing business logic and managing data access. The default implementation comes preconfigured with essential features, including a WeatherForecast endpoint for demonstration purposes, built-in health checks to monitor the service’s status, and telemetry setup to track performance and usage metrics.
+   * A WeatherForecast endpoint for demonstration purposes.
+   * Built-in health checks to monitor the service’s status.
+   * Telemetry setup to track performance and usage metrics.
 
-* **NetAspire.Arm.Web** - the web frontend application, implemented with Blazor, serves as the user-facing layer of your application. It communicates with the API service to provide an interactive experience. This application includes a user interface for presenting data, client-side logic for handling interactions, and preconfigured patterns for consuming services.
+* **NetAspire.Arm.Web** - the web frontend application, implemented with Blazor, serves as the user-facing layer of your application. It communicates with the API service to provide an interactive experience. 
This application includes: -* **NetAspire.Arm.ServiceDefaults** - the shared library provides a centralized foundation for common service configurations across your application. It includes a default middleware setup, preconfigured telemetry settings for tracking performance, standard health check implementations, and logging configurations to ensure consistent and efficient monitoring and debugging. + * A user interface for presenting data. + * Client-side logic for handling interactions. + * Preconfigured patterns for consuming services. -The structure of this project is designed to enhance efficiency and simplify the development of cloud-native applications. At its core, it incorporates features to ensure seamless service interactions, robust monitoring, and an exceptional development experience. +* **NetAspire.Arm.ServiceDefaults** - the shared library provides a centralized foundation for common service configurations across your application. It includes: + + * A default middleware setup. + * Preconfigured telemetry settings for tracking performance. + * Standard health check implementations. + * Logging configurations to ensure consistent and efficient monitoring and debugging. + +The structure of this project is designed to enhance efficiency, and simplify the development of cloud-native applications. At its core, it incorporates features to ensure seamless service interactions, robust monitoring, and an exceptional development experience. One of the foundational elements is service discovery, which enables automatic service registration, dynamic endpoint resolution, and load balancing. These features ensure that services communicate effectively and handle traffic efficiently, even in complex, distributed environments. @@ -56,7 +76,13 @@ For monitoring and telemetry, the architecture integrates tools like built-in he Configuration management offers environment-based settings that make deploying applications across different stages straightforward. Secure secrets management safeguards sensitive information, while standardized service-to-service communication simplifies interactions between microservices. -The architecture is also tailored to improve the development experience. Developers can benefit from local debugging support and a powerful monitoring dashboard. This dashboard provides a detailed view of service health, logs, metrics, trace information, resource usage, and service dependencies. Additionally, hot reload capability allows real-time updates during development, and container support ensures consistency across local and production environments. +The architecture is also tailored to improve the development experience. Developers can benefit from local debugging support and a powerful monitoring dashboard. This dashboard provides a detailed view of the following: + +* Service health. +* Logs. +* Metrics. +* Trace information. +* Resource usage. -This thoughtfully-crafted architecture embodies microservices best practices, promoting scalability, maintainability, and service isolation. It not only simplifies deployment and monitoring, but also fosters developer productivity by streamlining workflows and providing intuitive tools for building modern, distributed applications. +This thoughtfully-crafted architecture embodies best practices for microservices, and promotes scalability, maintainability, and service isolation. 
It not only simplifies deployment and monitoring, but also fosters developer productivity by streamlining workflows and providing intuitive tools for building modern, distributed applications. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md index e1d3ef5c3..eee1b7fc7 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md @@ -18,7 +18,7 @@ cd .\NetAspire.Arm\ dotnet run --project NetAspire.Arm.AppHost ``` -The output will look like below: +The output should look like the text below: ```output Building... info: Aspire.Hosting.DistributedApplication[0] @@ -33,15 +33,15 @@ info: Aspire.Hosting.DistributedApplication[0] Login to the dashboard at https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0 ``` -Click on the link generated for the dashboard. In this case it is: https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0. This will direct you to the application dashboard, as shown below: +Click on the link generated for the dashboard. In this case it is: [https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0](https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0). This directs you to the application dashboard, as shown in Figure 1: -![fig1](figures/01.png) +![fig1 alt-text#center](figures/01.png "Figure 1: Application Dashboard.") -On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. This will take you to the Blazor based web application. In the Blazor app, navigate to the Weather section to access and display data retrieved from the WeatherForecast API: +On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. This takes you to the Blazor-based web application. In the Blazor app, navigate to the Weather section to access and display data retrieved from the WeatherForecast API: -![fig2](figures/02.png) +![fig2 alt-text#center](figures/02.png "Figure 2: Data Displayed from WeatherForecast API.") -Return to the dashboard and select the Traces option. This section provides detailed telemetry tracing, allowing you to view the flow of requests, track service dependencies, and analyze performance metrics for your application: +Now return to the dashboard, and select the **Traces** option. This section provides detailed telemetry tracing, allowing you to view the flow of requests, track service dependencies, and analyze performance metrics for your application: ![fig3](figures/03.png) From 6fe1f1338fdaea64df7e33e9263593248957d44a Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 14:44:25 +0000 Subject: [PATCH 41/96] Added more Figure labels. --- .../servers-and-cloud-computing/net-aspire/run_app.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md index eee1b7fc7..8cb73885a 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md @@ -43,6 +43,6 @@ On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. Th Now return to the dashboard, and select the **Traces** option. 
This section provides detailed telemetry tracing, allowing you to view the flow of requests, track service dependencies, and analyze performance metrics for your application: -![fig3](figures/03.png) +![fig3 alt-text#center](figures/03.png "Figure 3: Traces Option.") -By following these steps, you will explore the key components of the .NET Aspire application, including its dashboard, data interaction through APIs, and telemetry tracing capabilities. +By following these steps, you can explore the key components of the .NET Aspire application, including its dashboard, data interaction through APIs, and telemetry tracing capabilities. From 758cb79de584156d764db9735caf2b9f74807f8f Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 14:59:46 +0000 Subject: [PATCH 42/96] More tweaks. --- .../net-aspire/aws.md | 4 ++-- .../net-aspire/gcp.md | 12 +++++------ .../net-aspire/modify_project.md | 20 ++++++++++--------- .../net-aspire/run_app.md | 2 +- 4 files changed, 20 insertions(+), 18 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index 6763edbcf..4a53b6f27 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -14,7 +14,7 @@ To set up an Arm-powered EC2 instance, follow these steps: 1. Log in to the [AWS Management Console](http://console.aws.amazon.com). 2. Navigate to the EC2 Service. - As shown in Figure 5, in the search box, type "EC2". + As Figure 5 shows, in the search box, type "EC2". Then, click on **EC2** in the search results: @@ -26,7 +26,7 @@ To set up an Arm-powered EC2 instance, follow these steps: * Architecture: select **64-bit (Arm)**. * Instance Type: select **t4g.small**. -The configuration should look like the configuration fields that are shown in Figure 6: +The configuration should look like the configuration fields that Figure 6 shows: ![Figure 6 alt-text#center](figures/06.png "Figure 6: Configuration Fields.") diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md index f791fb39c..d76689af2 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md @@ -26,15 +26,15 @@ The configuration setup should resemble the following: ![fig14](figures/14.png) -6. Configure the Remaining Settings. +6. Configure the Remaining Settings: * Availability Policies: Standard. -* Boot Disk: Click Change, then select Ubuntu as the operating system. +* Boot Disk: Click **Change**, then select **Ubuntu** as the operating system. * Identity and API Access: Keep the default settings. -* Firewall Settings: Check Allow HTTP traffic and Allow HTTPS traffic. +* Firewall Settings: Check **Allow HTTP traffic** and **Allow HTTPS traffic**. ![fig15](figures/15.png) -7. Click the Create Button and wait for the VM to be created. +7. Click the **Create** Button and wait for the VM to be created. ### Connecting to VM After creating the VM, connect to it as follows: @@ -42,7 +42,7 @@ After creating the VM, connect to it as follows: ![fig16](figures/16.png) -2. This will open a browser window. First, click the Authorize button: +2. This opens a browser window. 
First, click the **Authorize** button: ![fig17](figures/17.png) @@ -99,7 +99,7 @@ You will see output similar to this: ### Exposing the application to the Public To make your application publicly-accessible, configure the firewall rules: 1. In the Google Cloud Console, navigate to **VPC Network** > **Firewall**. -2. Click “Create Firewall Rule” and configure the following: +2. Click **Create Firewall Rule** and configure the following: * Name: allow-dotnet-ports * Target Tags: dotnet-app * Source IP Ranges: 0.0.0.0/0 (for public access). diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md index ad9347936..82c16d9c2 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md @@ -8,9 +8,11 @@ layout: learningpathall ## Modify the Project -Now modify the project to add additional computations to mimic computationally-intensive work. +Now you can move on to add additional computations to mimic computationally-intensive work. -You will now include additional code for the purpose of demonstrating computation intense work. Go to the `NetAspire.Arm.ApiService` project, and create a new file `ComputationService.cs`. Add the code shown below to this file: +Go to the `NetAspire.Arm.ApiService` project, and create a new file, and name it `ComputationService.cs`. + +Add the code shown below to this file: ```cs static class ComputationService @@ -43,15 +45,15 @@ static class ComputationService } ``` -This code defines a static class, ComputationService, designed to perform computationally intensive tasks, specifically matrix multiplication. It contains a public method, PerformIntensiveCalculations, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. +This code defines a static class, ComputationService, designed to perform computationally-intensive tasks; in particular, matrix multiplication. It contains a public method, PerformIntensiveCalculations, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. -The private method GenerateMatrix creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double value generated using Random.Shared.NextDouble(). +* The private method, GenerateMatrix, creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double-value generated using Random.Shared.NextDouble(). -The public method PerformIntensiveCalculations multiplies two matrices (matrix1 and matrix2) element by element using nested loops and LINQ. It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. The result of the multiplication is stored in a flattened one-dimensional array, matrixResult. +* The public method, PerformIntensiveCalculations, multiplies two matrices (matrix1 and matrix2) element-by-element using nested loops and LINQ. It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. 
The result of the multiplication is stored in a flattened one-dimensional array, called matrixResult. This code is provided for demonstrating heavy computational operations, such as large matrix manipulations, and can simulate workloads in scenarios that mimic intensive data processing or scientific calculations. -Then, open the `Program.cs` file in the `NetAspire.Arm.ApiService` directory and add modify the `MapGet` function of the app as shown: +Now open the `Program.cs` file in the `NetAspire.Arm.ApiService` directory, and modify the `MapGet` function of the app as shown: ```cs app.MapGet("/weatherforecast", () => @@ -70,7 +72,7 @@ app.MapGet("/weatherforecast", () => }); ``` -This will trigger matrix multiplications when you click Weather in the web frontend application. +This triggers matrix multiplications when you select **Weather** in the web-frontend application. To test the code, re-run the application using the following command: @@ -78,8 +80,8 @@ To test the code, re-run the application using the following command: dotnet run --project NetAspire.Arm.AppHost ``` -Next, navigate to the web frontend, click Weather, and then return to the dashboard. Click Traces to observe that the operation now takes significantly longer to complete—approximately 4 seconds in the example below: +Next, navigate to the web frontend, select **Weather**, and then return to the dashboard. Click **Traces** and note that the operation now takes significantly longer to complete — approximately four seconds in the example below: -![fig4](figures/04.png) +![fig4 alt-text#center](figures/04.png "Figure 4: Traces Example.") You are now ready to deploy the application to the cloud. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md index 8cb73885a..f11dfcc60 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md @@ -43,6 +43,6 @@ On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. Th Now return to the dashboard, and select the **Traces** option. This section provides detailed telemetry tracing, allowing you to view the flow of requests, track service dependencies, and analyze performance metrics for your application: -![fig3 alt-text#center](figures/03.png "Figure 3: Traces Option.") +![fig3 alt-text#center](figures/03.png "Figure 3: Traces.") By following these steps, you can explore the key components of the .NET Aspire application, including its dashboard, data interaction through APIs, and telemetry tracing capabilities. From 211a13a67433984fb3f1bca2fb9387b239ddc19a Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 16:03:53 +0000 Subject: [PATCH 43/96] Further formatting tweaks. 
--- .../net-aspire/aws.md | 2 +- .../net-aspire/gcp.md | 39 +++++++++---------- 2 files changed, 20 insertions(+), 21 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index 4a53b6f27..b219faa1c 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -7,7 +7,7 @@ layout: learningpathall --- ### Objective -In this section, you will learn how to deploy the .NET Aspire application on to an AWS Elastic Compute Cloud (EC2) Virtual Machine powered by Arm-based processors, such as AWS Graviton. This allows you to leverage the cost and performance benefits of Arm architecture while benefiting from the seamless deployment of cloud-native applications on modern infrastructure. +In this section, you will learn how to deploy the .NET Aspire application you created on to an AWS Elastic Compute Cloud (EC2) Virtual Machine powered by Arm-based processors, such as AWS Graviton. This allows you to leverage the cost and performance benefits of Arm architecture while benefiting from the seamless deployment of cloud-native applications on modern infrastructure. ### Set up your AWS EC2 Instance To set up an Arm-powered EC2 instance, follow these steps: diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md index d76689af2..ccd67e151 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md @@ -7,30 +7,29 @@ layout: learningpathall --- ### Objective -In this section, you will learn how to deploy a .NET Aspire application onto an Arm-based instance running on Google Cloud Platform (GCP). -You will start by creating an instance of an Arm64 virtual machine on GCP. You will then connect to it, install the required software, and run the application. +In this section, you will learn how to deploy the .NET Aspire application you created onto an Arm-based instance running on Google Cloud Platform (GCP). You will start by creating an instance of an Arm64 virtual machine on GCP. You will then connect to it, install the required software, and run the application. ### Create an Arm64 virtual machine -Follow these steps to create an Arm64 VM: +To create an Arm64 VM, follow these steps: 1. Create a Google Cloud Account. If you don’t already have an account, sign up for Google Cloud. 2. Open the Google Cloud Console [here](https://console.cloud.google.com). -3. Navigate to Compute Engine. In the Google Cloud Console, open the Navigation menu and go to Compute Engine > VM Instances. Enable any relevant APIs if prompted. -4. Click “Create Instance”. +3. Navigate to Compute Engine. In the Google Cloud Console, open the Navigation menu, and go to Compute Engine > VM Instances. Enable any relevant APIs if prompted. +4. Click **Create Instance**. 5. Configure the VM Instance as follows: -* Name: **arm-server** -* Region/Zone: choose a region and zone where Arm64 processors are available, for example us-central1. -* Machine Family: select **General-purpose**. -* Series: T2A. -* Machine Type: select **t2a-standard-1**. + * Name: **arm-server** + * Region/Zone: choose a region and zone where Arm64 processors are available, for example us-central1. + * Machine Family: select **General-purpose**. + * Series: T2A. 
+ * Machine Type: select **t2a-standard-1**. The configuration setup should resemble the following: ![fig14](figures/14.png) 6. Configure the Remaining Settings: -* Availability Policies: Standard. -* Boot Disk: Click **Change**, then select **Ubuntu** as the operating system. -* Identity and API Access: Keep the default settings. -* Firewall Settings: Check **Allow HTTP traffic** and **Allow HTTPS traffic**. + * Availability Policies: **Standard**. + * Boot Disk: Click **Change**, then select **Ubuntu** as the operating system. + * Identity and API Access: keep the default settings. + * Firewall Settings: Check **Allow HTTP traffic** and **Allow HTTPS traffic**. ![fig15](figures/15.png) @@ -100,14 +99,14 @@ You will see output similar to this: To make your application publicly-accessible, configure the firewall rules: 1. In the Google Cloud Console, navigate to **VPC Network** > **Firewall**. 2. Click **Create Firewall Rule** and configure the following: -* Name: allow-dotnet-ports -* Target Tags: dotnet-app -* Source IP Ranges: 0.0.0.0/0 (for public access). -* Protocols and Ports: allow TCP on ports 7133, 7511, and 17222. -* Click the **Create** button. + * Name: allow-dotnet-ports. + * Target Tags: dotnet-app. + * Source IP Ranges: 0.0.0.0/0 (for public access). + * Protocols and Ports: allow TCP on ports 7133, 7511, and 17222. + * Click the **Create** button. 3. Go back to your VM instance. 4. Click **Edit**, and under Networking find Network Tags, add the tag dotnet-app. -5. Click the Save button. +5. Click the **Save** button. ### Summary You have successfully deployed the Aspire app onto an Arm-powered GCP Virtual Machine. This deployment demonstrates the compatibility of .NET applications with Arm architecture and GCP, offering high performance and cost-efficiency. From 77c30f2a73b477f6069bfb2bedeae91ee5b39c07 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 16:38:30 +0000 Subject: [PATCH 44/96] Added more formatting. --- .../net-aspire/_index.md | 2 +- .../net-aspire/_review.md | 2 +- .../net-aspire/aws.md | 2 +- .../net-aspire/background.md | 2 +- .../net-aspire/gcp.md | 34 ++++++++++++------- 5 files changed, 25 insertions(+), 17 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 127d1c097..4752581ff 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -3,7 +3,7 @@ title: Run a .NET Aspire application on Arm-based VMs on AWS and GCP minutes_to_complete: 60 -who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based Virtual Machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP). +who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based virtual machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP). learning_objectives: - Describe .NET Aspire, including what it can achieve. 
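The firewall rule that the gcp.md steps above create through the Cloud Console can also be created from the command line. The sketch below is an illustration using the same rule name, tag, and ports defined on that page; the zone flag is an assumption, so substitute the zone your `arm-server` instance actually runs in:

```console
# Create the rule described above (name, ports, and tag taken from the gcp.md steps).
gcloud compute firewall-rules create allow-dotnet-ports \
  --allow=tcp:7133,tcp:7511,tcp:17222 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=dotnet-app

# Attach the network tag to the VM; the zone here is an assumed example.
gcloud compute instances add-tags arm-server --tags=dotnet-app --zone=us-central1-a
```

Both routes produce the same result; the CLI version is simply easier to repeat across instances.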
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md index c38f28e6d..caa6a0161 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_review.md @@ -24,7 +24,7 @@ review: - questions: question: > - In Google Cloud Platform, which series should you select to use an Arm64 processor for your VM? + In Google Cloud Platform, which series should you select to use an Arm64 processor for the VM? answers: - T2A (Ampere Altra Arm). - E2 (General Purpose). diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index b219faa1c..546d07d01 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -7,7 +7,7 @@ layout: learningpathall --- ### Objective -In this section, you will learn how to deploy the .NET Aspire application you created on to an AWS Elastic Compute Cloud (EC2) Virtual Machine powered by Arm-based processors, such as AWS Graviton. This allows you to leverage the cost and performance benefits of Arm architecture while benefiting from the seamless deployment of cloud-native applications on modern infrastructure. +In this section, you will learn how to deploy the .NET Aspire application you created on to an AWS Elastic Compute Cloud (EC2) virtual machine powered by Arm-based processors, such as AWS Graviton. This allows you to leverage the cost and performance benefits of Arm architecture while benefiting from the seamless deployment of cloud-native applications on modern infrastructure. ### Set up your AWS EC2 Instance To set up an Arm-powered EC2 instance, follow these steps: diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md index ce828bbf3..d2c6a58d0 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md @@ -31,4 +31,4 @@ With a few helper method calls, you can create local resources, wait for the res By providing a consistent set of tools and patterns, .NET Aspire streamlines the development process of cloud-native applications. It manages complex applications during the development phase without dealing with low-level implementation details. .NET Aspire easily connects to commonly-used services with standardized interfaces and configurations. There are also various templates and tooling to accelerate project setup and development cycles. -In this Learning Path, you will learn how to create a .NET Aspire application, describe the project, and modify the code on a Windows on Arm development machine. You will then deploy the application firstly, to an AWS Arm-powered virtual machines, and secondly, to a GCP Arm-powered virtual machine. +In this Learning Path, you will learn how to create a .NET Aspire application, describe the project, and modify the code on a Windows on Arm development machine. You will then deploy the application firstly, to an AWS Arm-powered virtual machine, and secondly, to a GCP Arm-powered virtual machine. 
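Whichever cloud you target, it is worth confirming that the instance you connect to really is Arm64 before installing the .NET toolchain. A minimal check, run on the VM itself, is:

```console
uname -m
```

On the Arm-based instances used in this Learning Path, such as AWS Graviton and GCP T2A, this prints `aarch64`.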
diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md index ccd67e151..3bca94965 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md @@ -12,20 +12,26 @@ In this section, you will learn how to deploy the .NET Aspire application you cr ### Create an Arm64 virtual machine To create an Arm64 VM, follow these steps: 1. Create a Google Cloud Account. If you don’t already have an account, sign up for Google Cloud. + 2. Open the Google Cloud Console [here](https://console.cloud.google.com). -3. Navigate to Compute Engine. In the Google Cloud Console, open the Navigation menu, and go to Compute Engine > VM Instances. Enable any relevant APIs if prompted. + +3. Navigate to Compute Engine. In the Google Cloud Console, open the Navigation menu, and go to **Compute Engine** > **VM Instances**. Enable any relevant APIs if prompted. + 4. Click **Create Instance**. + 5. Configure the VM Instance as follows: - * Name: **arm-server** - * Region/Zone: choose a region and zone where Arm64 processors are available, for example us-central1. + * Name: **arm-server**. + * Region/Zone: choose a region and zone where Arm64 processors are available, for example, **us-central1**. * Machine Family: select **General-purpose**. - * Series: T2A. + * Series: **T2A**. * Machine Type: select **t2a-standard-1**. + The configuration setup should resemble the following: ![fig14](figures/14.png) 6. Configure the Remaining Settings: + * Availability Policies: **Standard**. * Boot Disk: Click **Change**, then select **Ubuntu** as the operating system. * Identity and API Access: keep the default settings. @@ -37,7 +43,7 @@ The configuration setup should resemble the following: ### Connecting to VM After creating the VM, connect to it as follows: -1. In Compute Engine, click the SSH drop-down menu next to your VM, and select **Open in browser window**: +1. In **Compute Engine**, click the SSH drop-down menu next to your VM, and select **Open in browser window**: ![fig16](figures/16.png) @@ -50,7 +56,8 @@ After creating the VM, connect to it as follows: ![fig18](figures/18.png) ### Installing dependencies and deploying an app -Once the connection is established, you can install the required dependencies (.NET SDK, Aspire workload, and Git), fetch the application code, and deploy it: +Once the connection is established, you can install the required dependencies (.NET SDK, Aspire workload, and Git), fetch the application code, and deploy it. + Update the Package List: ```console sudo apt update && sudo apt upgrade -y @@ -95,18 +102,19 @@ dotnet run --project NetAspire.Arm.AppHost You will see output similar to this: ![fig19](figures/19.png) -### Exposing the application to the Public +### Making your application public + To make your application publicly-accessible, configure the firewall rules: 1. In the Google Cloud Console, navigate to **VPC Network** > **Firewall**. 2. Click **Create Firewall Rule** and configure the following: - * Name: allow-dotnet-ports. - * Target Tags: dotnet-app. - * Source IP Ranges: 0.0.0.0/0 (for public access). - * Protocols and Ports: allow TCP on ports 7133, 7511, and 17222. + * Name: **allow-dotnet-ports**. + * Target Tags: **dotnet-app**. + * Source IP Ranges: **0.0.0.0/0** (for public access). + * Protocols and Ports: **allow TCP on ports 7133, 7511, and 17222**. * Click the **Create** button. 3. 
Go back to your VM instance. -4. Click **Edit**, and under Networking find Network Tags, add the tag dotnet-app. +4. Click **Edit**, and under **Networking** find **Network Tags**, add the tag **dotnet-app**. 5. Click the **Save** button. ### Summary -You have successfully deployed the Aspire app onto an Arm-powered GCP Virtual Machine. This deployment demonstrates the compatibility of .NET applications with Arm architecture and GCP, offering high performance and cost-efficiency. +You have successfully deployed the Aspire app onto an Arm-powered GCP virtual machine. This deployment demonstrates the compatibility of .NET applications with Arm architecture and GCP, offering high performance and cost-efficiency. From 1b72e8bc83027531c0f13e94c29dfefcec53fbd5 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 17:01:05 +0000 Subject: [PATCH 45/96] More improvements. --- .../servers-and-cloud-computing/net-aspire/aws.md | 4 +++- .../net-aspire/background.md | 5 ++++- .../servers-and-cloud-computing/net-aspire/gcp.md | 4 ++++ .../servers-and-cloud-computing/net-aspire/project.md | 10 +++++++++- 4 files changed, 20 insertions(+), 3 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index 546d07d01..75643b804 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -5,13 +5,13 @@ weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall --- - ### Objective In this section, you will learn how to deploy the .NET Aspire application you created on to an AWS Elastic Compute Cloud (EC2) virtual machine powered by Arm-based processors, such as AWS Graviton. This allows you to leverage the cost and performance benefits of Arm architecture while benefiting from the seamless deployment of cloud-native applications on modern infrastructure. ### Set up your AWS EC2 Instance To set up an Arm-powered EC2 instance, follow these steps: 1. Log in to the [AWS Management Console](http://console.aws.amazon.com). + 2. Navigate to the EC2 Service. As Figure 5 shows, in the search box, type "EC2". @@ -69,7 +69,9 @@ The configuration should look like: ### Deploy the application Once the EC2 instance is ready, you can connect to it, and deploy the application. Follow these steps to connect: + 1. Locate the instance public IP (here this is 98.83.137.101). + 2. Use an SSH client to connect: * Open the terminal. * Set the appropriate permissions for the key pair file, using your own IP address: diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md index d2c6a58d0..9eb8f4c78 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/background.md @@ -31,4 +31,7 @@ With a few helper method calls, you can create local resources, wait for the res By providing a consistent set of tools and patterns, .NET Aspire streamlines the development process of cloud-native applications. It manages complex applications during the development phase without dealing with low-level implementation details. .NET Aspire easily connects to commonly-used services with standardized interfaces and configurations. There are also various templates and tooling to accelerate project setup and development cycles. 
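As a concrete sketch of that tooling, the whole solution used in these pages can be scaffolded with a single template command; `aspire-starter` is the starter template that ships with the .NET Aspire workload, and the output name below simply mirrors the solution referenced throughout this Learning Path:

```console
dotnet new aspire-starter --output NetAspire.Arm
```

This generates the `NetAspire.Arm.AppHost`, `NetAspire.Arm.ApiService`, and `NetAspire.Arm.Web` projects that the following pages build on.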
-In this Learning Path, you will learn how to create a .NET Aspire application, describe the project, and modify the code on a Windows on Arm development machine. You will then deploy the application firstly, to an AWS Arm-powered virtual machine, and secondly, to a GCP Arm-powered virtual machine. +In this Learning Path, you will learn how to create a .NET Aspire application, describe the project, and modify the code on a Windows on Arm development machine. You will then deploy the application: + +* Firstly, to an AWS Arm-powered virtual machine. +* Secondly, to a GCP Arm-powered virtual machine. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md index 3bca94965..716b06455 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md @@ -106,14 +106,18 @@ You will see output similar to this: To make your application publicly-accessible, configure the firewall rules: 1. In the Google Cloud Console, navigate to **VPC Network** > **Firewall**. + 2. Click **Create Firewall Rule** and configure the following: * Name: **allow-dotnet-ports**. * Target Tags: **dotnet-app**. * Source IP Ranges: **0.0.0.0/0** (for public access). * Protocols and Ports: **allow TCP on ports 7133, 7511, and 17222**. * Click the **Create** button. + 3. Go back to your VM instance. + 4. Click **Edit**, and under **Networking** find **Network Tags**, add the tag **dotnet-app**. + 5. Click the **Save** button. ### Summary diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md index adbaae797..3e97658b8 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/project.md @@ -68,14 +68,22 @@ This command generates a solution with the following structure: * Standard health check implementations. * Logging configurations to ensure consistent and efficient monitoring and debugging. -The structure of this project is designed to enhance efficiency, and simplify the development of cloud-native applications. At its core, it incorporates features to ensure seamless service interactions, robust monitoring, and an exceptional development experience. +The structure of this project is designed to enhance efficiency and simplify the development of cloud-native applications. At its core, it incorporates features to ensure seamless service interactions, robust monitoring, and an exceptional development experience. + +#### Service discovery One of the foundational elements is service discovery, which enables automatic service registration, dynamic endpoint resolution, and load balancing. These features ensure that services communicate effectively and handle traffic efficiently, even in complex, distributed environments. +#### Monitoring and telemetry + For monitoring and telemetry, the architecture integrates tools like built-in health checks, OpenTelemetry for monitoring, and metrics collection with distributed tracing. These features provide developers with deep insights into application performance, helping to maintain reliability and optimize system operations. +#### Configuration management + Configuration management offers environment-based settings that make deploying applications across different stages straightforward. 
Secure secrets management safeguards sensitive information, while standardized service-to-service communication simplifies interactions between microservices. +#### Improved development experience + The architecture is also tailored to improve the development experience. Developers can benefit from local debugging support and a powerful monitoring dashboard. This dashboard provides a detailed view of the following: * Service health. From dc73b3e292acf96d426dcaf70238ffd3f5d4d646 Mon Sep 17 00:00:00 2001 From: Maddy Underwood Date: Fri, 27 Dec 2024 17:18:52 +0000 Subject: [PATCH 46/96] Final checks. --- .../net-aspire/aws.md | 24 ++++++++++--------- .../net-aspire/gcp.md | 2 +- .../net-aspire/modify_project.md | 8 +++---- .../net-aspire/run_app.md | 12 +++++++--- 4 files changed, 27 insertions(+), 19 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md index 75643b804..c8d3ed659 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/aws.md @@ -21,7 +21,7 @@ To set up an Arm-powered EC2 instance, follow these steps: ![Figure 5 alt-text#center](figures/05.png "Figure 5: Search for the EC2 Service in the AWS Management Console.") 3. In the EC2 Dashboard, click **Launch Instance** and add the following information in these corresponding data fields to configure your setup: -* Name: type "arm-server". +* Name: enter **arm-server**. * AMI: select **Arm-compatible Amazon Machine Image, Ubuntu 22.04 LTS for Arm64**. * Architecture: select **64-bit (Arm)**. * Instance Type: select **t4g.small**. @@ -31,7 +31,7 @@ The configuration should look like the configuration fields that Figure 6 shows: ![Figure 6 alt-text#center](figures/06.png "Figure 6: Configuration Fields.") 4. Scroll down to **Key pair** (login), and click **Create new key pair**. - This displays the "Create key pair" window. + This displays the **Create key pair** window. Now configure the following fields: * Key pair name: **arm-key-pair**. * Key pair type: **RSA**. @@ -40,22 +40,22 @@ The configuration should look like the configuration fields that Figure 6 shows: ![fig7](figures/07.png) -5. Scroll down to "Network Settings", and confgure the settings: +5. Scroll down to **Network Settings**, and configure the settings: * VPC: select the default. * Subnet: select **No preference**. * Auto-assign public IP: **Enable**. * Firewall: Check **Create security group**. -* Security group name: arm-security-group. -* Description: arm-security-group. +* Security group name: **arm-security-group**. +* Description: **arm-security-group**. * Inbound security groups. ![fig8](figures/08.png) -6. Configure "Inbound Security Group Rules" by selecting **Add Rule** and then setting the following details: -* Type: Custom TCP. -* Protocol: TCP. -* Port Range: 7133. -* Source: Select "Anywhere (0.0.0.0/0)" for public access or restrict access to your specific IP for better security. +6. Configure **Inbound Security Group Rules** by selecting **Add Rule** and then setting the following details: +* Type: **Custom TCP**. +* Protocol: **TCP**. +* Port Range: **7133**. +* Source: Select **Anywhere (0.0.0.0/0)** for public access or restrict access to your specific IP for better security. Repeat this step for all three ports that the application is using. This example demonstrates setup using ports 7133, 7511, and 17222. 
These must match the values that you have when you run the app locally. @@ -63,7 +63,9 @@ The configuration should look like: ![fig9](figures/09.png) -7. Launch an instance by clicking the **Launch instance** button. You should see the green box with the Success label. This box also contains a link to the EC2 instance. Click it, and it takes you to the instance dashboard, which looks like Figure 10: +7. Launch an instance by clicking the **Launch instance** button. You should see the green box with the **Success** label. + +This box also contains a link to the EC2 instance. Click on it, and it takes you to the instance dashboard, as Figure 10 shows: ![fig10](figures/10.png) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md index 716b06455..191ec0d5f 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/gcp.md @@ -121,4 +121,4 @@ To make your application publicly-accessible, configure the firewall rules: 5. Click the **Save** button. ### Summary -You have successfully deployed the Aspire app onto an Arm-powered GCP virtual machine. This deployment demonstrates the compatibility of .NET applications with Arm architecture and GCP, offering high performance and cost-efficiency. +You have successfully deployed the Aspire app onto an Arm-powered GCP virtual machine. This deployment demonstrates the compatibility of .NET applications with Arm architecture and GCP, offering high performance and cost efficiency. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md index 82c16d9c2..6e56a43a4 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/modify_project.md @@ -6,7 +6,7 @@ weight: 5 layout: learningpathall --- -## Modify the Project +## Add additional computations Now you can move on to add additional computations to mimic computationally-intensive work. @@ -45,11 +45,11 @@ static class ComputationService } ``` -This code defines a static class, ComputationService, designed to perform computationally-intensive tasks; in particular, matrix multiplication. It contains a public method, PerformIntensiveCalculations, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. +This code defines a static class, **ComputationService**, designed to perform computationally-intensive tasks; in particular, matrix multiplication. It contains a public method, **PerformIntensiveCalculations**, which generates two matrices of a specified size, multiplies them, and stores the resulting matrix. -* The private method, GenerateMatrix, creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double-value generated using Random.Shared.NextDouble(). +* The private method, **GenerateMatrix**, creates a one-dimensional array representing a matrix of the given size (matrixSize x matrixSize). Each element in the matrix is initialized with a random double-value generated using **Random.Shared.NextDouble()**. -* The public method, PerformIntensiveCalculations, multiplies two matrices (matrix1 and matrix2) element-by-element using nested loops and LINQ. 
It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. The result of the multiplication is stored in a flattened one-dimensional array, called matrixResult. +* The public method, **PerformIntensiveCalculations**, multiplies two matrices (matrix1 and matrix2) element-by-element using nested loops and LINQ. It iterates through each row of the first matrix and each column of the second matrix, calculating the dot product for each element in the resulting matrix. The result of the multiplication is stored in a flattened one-dimensional array, called **matrixResult**. This code is provided for demonstrating heavy computational operations, such as large matrix manipulations, and can simulate workloads in scenarios that mimic intensive data processing or scientific calculations. diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md index f11dfcc60..2bc850baa 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/run_app.md @@ -5,7 +5,7 @@ weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Run the Project +## Using the dashboard The application will issue a certificate. Before you run the application, add support to trust the HTTPS development certificate by running: ```console @@ -33,11 +33,17 @@ info: Aspire.Hosting.DistributedApplication[0] Login to the dashboard at https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0 ``` -Click on the link generated for the dashboard. In this case it is: [https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0](https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0). This directs you to the application dashboard, as shown in Figure 1: +Click on the link generated for the dashboard. + +In this case, it is: [https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0](https://localhost:17222/login?t=81f99566c9ec462e66f5eab5aa9307b0). + +This directs you to the application dashboard, as Figure 1 shows: ![fig1 alt-text#center](figures/01.png "Figure 1: Application Dashboard.") -On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. This takes you to the Blazor-based web application. In the Blazor app, navigate to the Weather section to access and display data retrieved from the WeatherForecast API: +On the dashboard, locate and click the endpoint link for `NetAspire.Arm.Web`. + +This takes you to the Blazor-based web application. In the Blazor app, navigate to the Weather section to access and display data retrieved from the WeatherForecast API: ![fig2 alt-text#center](figures/02.png "Figure 2: Data Displayed from WeatherForecast API.") From 435a3e47f1f74e710575dabdad834f6d2c9e7fae Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Sat, 28 Dec 2024 05:46:18 +0000 Subject: [PATCH 47/96] Update intro.md Tweaked language; added Figure labels. 
--- .../intro.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md index 536228ccc..fdb573d9e 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md @@ -9,7 +9,7 @@ layout: "learningpathall" ## Introduction to PyTorch -PyTorch is an open-source deep learning framework that is developed by Meta AI and is now part of the Linux Foundation. +PyTorch is an open source deep learning framework that is developed by Meta AI and is now part of the Linux Foundation. PyTorch is designed to provide a flexible and efficient platform for building and training neural networks. It is widely used due to its dynamic computational graph, which allows users to modify the architecture during runtime, making debugging and experimentation easier. @@ -28,31 +28,31 @@ In this Learning Path, you will explore how to use PyTorch to create and train a ## Before you begin -Before you begin make sure Python3 is installed on your system. You can check this by running: +Before you begin, make sure Python3 is installed on your system. You can check this by running: ```console python3 --version ``` -The expected output is the Python version, for example: +You should then see the Python version output, for example: ```output Python 3.11.2 ``` -If Python3 is not installed, download and install it from [python.org](https://www.python.org/downloads/). +If Python3 is not installed, you can download and install it from [python.org](https://www.python.org/downloads/). Alternatively, you can also install Python3 using package managers such as Homebrew or APT. -If you are using Windows on Arm you can refer to the [Python install guide](https://learn.arm.com/install-guides/py-woa/). +If you are using Windows on Arm, see the [Python install guide](https://learn.arm.com/install-guides/py-woa/). -Next, download and install [Visual Studio Code](https://code.visualstudio.com/download). +Next, if you do not already have it, download and install [Visual Studio Code](https://code.visualstudio.com/download). ## Install PyTorch and additional Python packages To prepare a virtual Python environment, install PyTorch, and the additional tools you will need for this Learning Path: -1. Open a terminal or command prompt and navigate to your project directory. +1. Open a terminal or command prompt, and navigate to your project directory. 2. Create a virtual environment by running: @@ -60,7 +60,7 @@ To prepare a virtual Python environment, install PyTorch, and the additional too python -m venv pytorch-env ``` -This will create a virtual environment named pytorch-env. +This will create a virtual environment named `pytorch-env`. 3. Activate the virtual environment: @@ -74,7 +74,7 @@ pytorch-env\Scripts\activate source pytorch-env/bin/activate ``` -Once activated, you see the virtual environment name `(pytorch-env)` before your terminal prompt. +Once activated, you can see the virtual environment name `(pytorch-env)` before your terminal prompt. 3. Install PyTorch using Pip: @@ -98,20 +98,20 @@ python3 -m ipykernel install --user --name=pytorch-env 6. Install the Jupyter Extension in VS Code: -* Open VS Code and go to the Extensions view (click on the Extensions icon or press Ctrl+Shift+X). 
+* Open VS Code and go to the **Extensions** view, by clicking on the **Extensions** icon or pressing Ctrl+Shift+X. * Search for “Jupyter” and install the official Jupyter extension. -* Optionally, also install the Python extension if you haven’t already, as it improves Python language support in VS Code. +* Optionally, also install the Python extension if you have not already, as it improves Python language support in VS Code. -To ensure everything is set up correctly: +To ensure everything is set up correctly, follow these next steps: 1. Open Visual Studio Code. -2. Click New file, and select `Jupyter Notebook .ipynb Support`. +2. Click **New file**, and select `Jupyter Notebook .ipynb Support`. 3. Save the file as `pytorch-digits.ipynb`. -4. Select the Python kernel you created earlier (pytorch-env). To do so, click Kernels in the top right corner. Then, click Jupyter Kernel..., and you will see the Python kernel as shown below: +4. Select the Python kernel you created earlier, `pytorch-env`. To do so, click **Kernels** in the top right-hand corner. Then, click **Jupyter Kernel...**, and you will see the Python kernel as shown below: -![img1](Figures/1.png) +![img1 alt-text#center](Figures/1.png "Figure 1: Python kernel.") 5. In your Jupyter notebook, run the following code to verify PyTorch is working correctly: @@ -121,6 +121,6 @@ print(torch.__version__) ``` It will look as follows: -![img2](Figures/2.png) +![img2 alt-text#center](Figures/2.png "Figure 2: Jupyter Notebook.") With your development environment created, you can proceed to creating a PyTorch model. From 529c71b1782251ba68eda3881a83f731dcd1abfa Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Sat, 28 Dec 2024 06:01:46 +0000 Subject: [PATCH 48/96] Editorial Review. --- .../model.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md index 1c03a3e1f..ceabd50d8 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md @@ -7,15 +7,15 @@ weight: 3 layout: "learningpathall" --- -You can create and train a feedforward neural network to classify handwritten digits from the MNIST dataset. This dataset contains 70,000 images, comprised of 60,000 training images and 10,000 testing images, of handwritten numerals (0-9), each with dimensions of 28x28 pixels. Some representative MNIST digits with their corresponding labels are shown below. +You can create and train a feedforward neural network to classify handwritten digits from the MNIST dataset (Modified National Institute of Standards and Technology database). This dataset contains 70,000 images, comprised of 60,000 training images and 10,000 testing images, of handwritten numerals (0-9), each with dimensions of 28x28 pixels. Some representative MNIST digits with their corresponding labels are shown below. -![img3](Figures/3.png) +![img3 alt-text#center](Figures/3.png "Figure 3: MNIST Digits and Labels.") -The neural network begins with an input layer containing 28x28 = 784 input nodes, with each node accepting a single pixel from an MNIST image. 
+The neural network begins with an input layer containing 28x28 = 784 input nodes, with each node accepting a single pixel from an MNIST image.

You will add a linear hidden layer with 96 nodes, using the hyperbolic tangent (tanh) activation function. To prevent overfitting, a dropout layer is applied, randomly setting 20% of the nodes to zero.

-You will then include another hidden layer with 256 nodes, followed by a second dropout layer that again removes 20% of the nodes. Finally, the output layer consists of ten nodes, each representing the probability of recognizing one of the digits (0-9).
+You will then include another hidden layer with 256 nodes, followed by a second dropout layer that again removes 20% of the nodes. Finally, you reach the output layer, which consists of ten nodes, each representing the probability of recognizing one of the digits (0-9).

The total number of trainable parameters for this network is calculated as follows:

* First hidden layer: 784 x 96 + 96 = 75,360 parameters.
* Second hidden layer: 96 x 256 + 256 = 24,832 parameters.
* Output layer: 256 x 10 + 10 = 2,570 parameters.

-In total, the network will have 102,762 trainable parameters.
+So in total, the network has 102,762 trainable parameters.

# Implementation

@@ -75,7 +75,7 @@ The network consists of:
* Another Dropout layer, that removes 20% of the nodes.
* A final Linear layer, with 10 nodes (matching the number of classes in the dataset), followed by a Softmax activation function that outputs class probabilities.

-2. forward method
+2. Forward method

This method defines the forward pass of the network. It takes an input tensor x, flattens it using self.flatten, and then passes it through the defined sequential stack of layers (self.linear_stack).

@@ -91,7 +91,7 @@ summary(model, (1, 28, 28))
After running the notebook, you will see the following output:

-![img4](Figures/4.png)
+![img4 alt-text#center](Figures/4.png "Figure 4: Notebook Output.")

You will see a detailed summary of the NeuralNetwork model’s architecture, including the following information:

From 635fe0440dc620b346162ebcb000867dfebf8218 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Sun, 29 Dec 2024 03:06:48 +0000
Subject: [PATCH 49/96] Typo

---
 .../profiling-ml-on-arm/nn-profiling-executenetwork.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md
index 323d72367..d200e0227 100644
--- a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md
+++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/nn-profiling-executenetwork.md
@@ -1,5 +1,5 @@
---
-title: ML profiling of a LiteRT model with ExecuteNetwork
+title: ML Profiling of a LiteRT model with ExecuteNetwork
weight: 6

### FIXED, DO NOT MODIFY
From af6d65e017a909368b12843fc2fb0b4862637096 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Sun, 29 Dec 2024 03:42:25 +0000
Subject: [PATCH 50/96] Improvements to the phrasing of the language.
---
 .../intro.md                              | 33 ++++++++++---------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md
index fdb573d9e..6ec1d118f 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md
@@ -9,16 +9,19 @@ layout: "learningpathall"

## Introduction to PyTorch

-PyTorch is an open source deep learning framework that is developed by Meta AI and is now part of the Linux Foundation.
+Meta AI designed PyTorch, an open-source deep learning framework that is now part of the Linux Foundation.

-PyTorch is designed to provide a flexible and efficient platform for building and training neural networks. It is widely used due to its dynamic computational graph, which allows users to modify the architecture during runtime, making debugging and experimentation easier.
+PyTorch provides a flexible and efficient platform for building and training neural networks. It has a dynamic computational graph that allows users to modify the architecture during runtime, making debugging and experimentation easier, which makes it popular amongst developers.

-PyTorch's objective is to provide a more flexible, user-friendly deep learning framework that addresses the limitations of static computational graphs found in earlier tools like TensorFlow.
+PyTorch offers a more flexible, user-friendly deep learning framework that addresses the limitations of static computational graphs found in earlier tools, such as TensorFlow.

-Prior to PyTorch, many frameworks used static computation graphs that require the entire model structure to be defined before training, making experimentation and debugging cumbersome. PyTorch introduced dynamic computational graphs, also known as “define-by-run”, that allow the graph to be constructed dynamically as operations are executed. This flexibility significantly improves ease of use for researchers and developers, enabling faster prototyping, easier debugging, and more intuitive code.
+Prior to PyTorch, many frameworks used static computational graphs that require the entire model structure to be defined before training, which makes experimentation and debugging cumbersome. PyTorch introduced dynamic computational graphs, also known as “define-by-run”, that allow the graph to be constructed dynamically as operations are executed. This flexibility significantly improves ease of use for researchers and developers, enabling:
+* Faster prototyping.
+* Easier debugging.
+* More intuitive code.

-Additionally, PyTorch seamlessly integrates with Python, encouraging a native coding experience. Its deep integration with GPU acceleration also makes it a powerful tool for both research and production environments. This combination of flexibility, usability, and performance has contributed to PyTorch’s rapid adoption, especially in academic research, where experimentation and iteration are crucial.
+PyTorch also seamlessly integrates with Python, which creates a native coding experience. Its deep integration with GPU acceleration also makes it a powerful tool for both research and production environments. 
This combination of flexibility, usability, and performance has ensured PyTorch’s rapid adoption, particularly in academic research, where experimentation and iteration are crucial activities.

A typical process for creating a feedforward neural network in PyTorch involves defining a sequential stack of fully-connected layers, which are also known as linear layers. Each layer transforms the input by applying a set of weights and biases, followed by an activation function like ReLU. PyTorch supports this process using the torch.nn module, where layers are easily defined and composed.

@@ -34,7 +37,7 @@ Before you begin, make sure Python3 is installed on your system. You can check t
python3 --version
```

-You should then see the Python version output, for example:
+You should then see the Python version printed in the output, for example:

```output
Python 3.11.2
@@ -50,7 +53,7 @@ Next, if you do not already have it, download and install [Visual Studio Code](h

## Install PyTorch and additional Python packages

-To prepare a virtual Python environment, install PyTorch, and the additional tools you will need for this Learning Path:
+To prepare a virtual Python environment, first you need to install PyTorch, and then move on to installing the additional tools that you will need for this Learning Path.

1. Open a terminal or command prompt, and navigate to your project directory.

@@ -60,29 +63,29 @@ To prepare a virtual Python environment, install PyTorch, and the additional too
python -m venv pytorch-env
```

-This will create a virtual environment named `pytorch-env`.
+This creates a virtual environment called `pytorch-env`.

3. Activate the virtual environment:

-* On Windows:
+* On Windows, run the following command:

```console
pytorch-env\Scripts\activate
```

-* On macOS or Linux:
+* On macOS or Linux, run this command:

```console
source pytorch-env/bin/activate
```

Once activated, you can see the virtual environment name `(pytorch-env)` before your terminal prompt.

-3. Install PyTorch using Pip:
+4. Install PyTorch using Pip:

```console
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
```

-4. Install torchsummary, Jupyter and IPython Kernel:
+5. Install torchsummary, Jupyter, and IPython Kernel:

```console
pip install torchsummary
pip install jupyter
pip install ipykernel
```

-5. Register your virtual environment as a new kernel:
+6. Register your virtual environment as a new kernel:

```console
python3 -m ipykernel install --user --name=pytorch-env
```

-6. Install the Jupyter Extension in VS Code:
+7. Install the Jupyter Extension in VS Code:

* Open VS Code and go to the **Extensions** view, by clicking on the **Extensions** icon or pressing Ctrl+Shift+X.

@@ -123,4 +126,4 @@ print(torch.__version__)
```
It will look as follows:
![img2 alt-text#center](Figures/2.png "Figure 2: Jupyter Notebook.")

-With your development environment created, you can proceed to creating a PyTorch model.
+Now that you have set up your development environment, you can move on to creating a PyTorch model.
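As a slightly fuller check than the two-line snippet above, the sketch below verifies that each package installed in this section imports cleanly; run it in the same `pytorch-env` kernel, and note that it assumes nothing beyond the installs already performed:

```python
# Sanity check for the pytorch-env environment created above.
import torch
import torchvision
import torchsummary  # imported only to confirm the installation succeeded

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CPU tensor test:", torch.ones(2, 2).sum().item())  # expected output: 4.0
```

If any of these imports fails, re-run the corresponding `pip install` command from the steps above inside the activated environment.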
From 1dfabd6beaf036f426cf56bafdc6708858403c59 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Sun, 29 Dec 2024 07:41:28 +0000 Subject: [PATCH 51/96] Update inference.md --- .../inference.md | 28 +++++++++++-------- 1 file changed, 17 insertions(+), 11 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md index 9aed5754e..4a1bf25e3 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -7,18 +7,18 @@ weight: 6 layout: "learningpathall" --- -The inference process involves using a trained model to make predictions on new, unseen data. It typically follows these steps: +You can use a trained model to make predictions on new, unseen data. This is called the inference process, and it typically follows these steps: -1. **Load the Trained Model**: the model, along with its learned parameters - weights and biases - is loaded from a saved file. -2. **Prepare the Input Data**: the input data is pre-processed in the same way as during training, for example, normalization and tensor conversion, to ensure compatibility with the model. -3. **Make Predictions**: the pre-processed data is fed into the model, which computes the output based on its trained parameters. The output is often a probability distribution over possible classes. -4. **Interpret the Results**: the predicted class is usually the one with the highest probability. The results can then be used for further analysis or decision-making. +1. **Load the Trained Model**: firstly, you load the model with its learned weights and biases, the parameters, from a saved file. +2. **Prepare the Input Data**: next, the input data is pre-processed in the same way as during training, for example, normalization and tensor conversion, to ensure compatibility with the model. +3. **Make Predictions**: then the pre-processed data is fed into the model, which computes the output based on its trained parameters. The output is often a probability distribution over possible classes. +4. **Interpret the Results**: finally, you can interpret the results. The predicted class is usually the one with the highest probability. You can then use the results for further analysis or decision-making. This process allows the model to generalize its learned knowledge to make accurate predictions on new data. # Running inference in PyTorch -You can inference in PyTorch using the previously saved model. To display results, you can use matplotlib. +You can run inference in PyTorch using the previously-saved model. You can then use matplotlib to display the results. Start by installing matplotlib package: @@ -26,7 +26,7 @@ Start by installing matplotlib package: pip install matplotlib ``` -Use Visual Studio Code to create a new file named `pytorch-digits-inference.ipynb` and modify the file to include the code below: +Use Visual Studio Code to create a new file named `pytorch-digits-inference.ipynb`, and modify the file to include the code below: ```python import torch @@ -83,11 +83,17 @@ plt.tight_layout() plt.show() ``` -The above code performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset and displays them along with their actual and predicted labels. 
+This code detailed above performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset, and then displays them alongside their actual and predicted labels. -As before, start by importing the necessary Python libraries: torch, datasets, transforms, matplotlib.pyplot, and random. Torch is used for loading the model and performing tensor operations. Datasets and transforms from torchvision are used for loading and transforming the MNIST dataset. Use matplotlib.pyplot for plotting and displaying images, and random is used for selecting random images from the dataset. +As before, start by importing the necessary Python libraries: -Next, load the MNIST test dataset using datasets.MNIST() with train=False to specify that it’s the test data. The dataset is automatically downloaded if it’s not available locally. +* Torch - used for loading the model and performing tensor operations. +* Datasets - used for loading the MNIST dataset. +* Transforms - used for transforming the MNIST dataset. +* Matplotlib.pyplot - used for plotting and displaying images. +* Random - used for selecting random images from the dataset. + +Next, load the MNIST test dataset using datasets.MNIST() with train=False to specify that it is the test data. The dataset is automatically downloaded if it is not available locally. Load the saved model using torch.jit.load("model.pth") and set the model to evaluation mode using model.eval(). This ensures that layers like dropout and batch normalization behave appropriately during inference. @@ -111,4 +117,4 @@ Next, you performed inference. You loaded the saved model and set it to evaluati This comprehensive process, from model training and saving to inference and visualization, illustrates the end-to-end workflow for building and deploying a machine learning model in PyTorch. It demonstrates how to train a model, save it in a portable format, and then use it to make predictions on new data. -In the next step, you will learn how to use the model in an Android application. \ No newline at end of file +In the next step, you will learn how to use the model in an Android application. From 2e720fd1220cdc95f9b9733a91365ade86cf5ed0 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Sun, 29 Dec 2024 08:18:14 +0000 Subject: [PATCH 52/96] Update inference.md --- .../inference.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md index 4a1bf25e3..f82f2d0ca 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -83,23 +83,23 @@ plt.tight_layout() plt.show() ``` -This code detailed above performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset, and then displays them alongside their actual and predicted labels. +The code detailed above performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset, and then displays them alongside their actual and predicted labels. As before, start by importing the necessary Python libraries: -* Torch - used for loading the model and performing tensor operations. 
-* Datasets - used for loading the MNIST dataset. -* Transforms - used for transforming the MNIST dataset. -* Matplotlib.pyplot - used for plotting and displaying images. -* Random - used for selecting random images from the dataset. +* Torch - for loading the model and performing tensor operations. +* Datasets - for loading the MNIST dataset. +* Transforms - for transforming the MNIST dataset. +* Matplotlib.pyplot - for plotting and displaying images. +* Random - for selecting random images from the dataset. -Next, load the MNIST test dataset using datasets.MNIST() with train=False to specify that it is the test data. The dataset is automatically downloaded if it is not available locally. +Next, load the MNIST test dataset using `datasets.MNIST()` with `train=False` to specify that it is the test data. The dataset is automatically downloaded if it is not available locally. -Load the saved model using torch.jit.load("model.pth") and set the model to evaluation mode using model.eval(). This ensures that layers like dropout and batch normalization behave appropriately during inference. +Load the saved model using `torch.jit.load("model.pth")` and set the model to evaluation mode using `model.eval()`. This ensures that layers like dropout and batch normalization behave appropriately during inference. Subsequently, select 16 random images and create a 4x4 grid of subplots using plt.subplots(4, 4, figsize=(12, 12)) for displaying the images. -Afterwards, perform inference and display the images in a loop. Specifically, for each of the 16 selected images, the image and its label are retrieved from the dataset using the random index. The image tensor is expanded to include a batch dimension (image.unsqueeze(0)) because the model expects a batch of images. Inference is performed with model(image_batch) to get the prediction. The predicted label is determined using torch.argmax() to find the index of the maximum probability in the output. Each image is displayed in its respective subplot with the actual and predicted labels. We use plt.tight_layout() to ensure that the layout is adjusted nicely, and plt.show() to display the 16 images with their actual and predicted labels. +Afterwards, perform inference and display the images in a loop. Specifically, for each of the 16 selected images, the image and its label are retrieved from the dataset using the random index. The image tensor is expanded to include a batch dimension (image.unsqueeze(0)) because the model expects a batch of images. Inference is performed with model(image_batch) to get the prediction. The predicted label is determined using torch.argmax() to find the index of the maximum probability in the output. Each image is displayed in its respective subplot with the actual and predicted labels. You can use plt.tight_layout() to ensure that the layout is adjusted nicely, and plt.show() to display the 16 images with their actual and predicted labels. This code demonstrates how to use a saved PyTorch model for inference and visualization of predictions on a subset of the MNIST test dataset. 
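For reference, the sketch below condenses the inference flow described above to a single image, without the plotting; it assumes only the `model.pth` file saved earlier in this Learning Path:

```python
# Condensed single-image inference, following the steps explained above.
import torch
from torchvision import datasets, transforms

test_data = datasets.MNIST(root="data", train=False, download=True,
                           transform=transforms.ToTensor())

model = torch.jit.load("model.pth")
model.eval()  # put dropout layers into inference mode

image, label = test_data[0]
with torch.no_grad():                    # gradients are not needed for inference
    output = model(image.unsqueeze(0))   # add the batch dimension the model expects
predicted = torch.argmax(output, dim=1).item()

print(f"Actual: {label}, Predicted: {predicted}")
```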
From 63d9a89e65b7630129e43f2393625ca1dbee28de Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Sun, 29 Dec 2024 15:43:36 +0000 Subject: [PATCH 53/96] Update inference.md --- .../pytorch-digit-classification-arch-training/inference.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md index f82f2d0ca..c459c1ef0 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -7,10 +7,10 @@ weight: 6 layout: "learningpathall" --- -You can use a trained model to make predictions on new, unseen data. This is called the inference process, and it typically follows these steps: +You can use a trained model to make predictions on new, unseen data. The model uses a process called inference, and it typically follows these steps: -1. **Load the Trained Model**: firstly, you load the model with its learned weights and biases, the parameters, from a saved file. -2. **Prepare the Input Data**: next, the input data is pre-processed in the same way as during training, for example, normalization and tensor conversion, to ensure compatibility with the model. +1. **Load the Trained Model**: firstly, you load the model with its parameters that consist of learned weights and biases, from a saved file. +2. **Prepare the Input Data**: next, the input data is pre-processed in the same way as during training, for example, undergoing normalization and tensor conversion, to ensure compatibility with the model. 3. **Make Predictions**: then the pre-processed data is fed into the model, which computes the output based on its trained parameters. The output is often a probability distribution over possible classes. 4. **Interpret the Results**: finally, you can interpret the results. The predicted class is usually the one with the highest probability. You can then use the results for further analysis or decision-making. From 2d250272b00ee991673d354c88e16a758a3ea212 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Sun, 29 Dec 2024 16:01:31 +0000 Subject: [PATCH 54/96] Update inference.md --- .../inference.md | 31 ++++++++++--------- 1 file changed, 17 insertions(+), 14 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md index c459c1ef0..d6e05d604 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -10,23 +10,26 @@ layout: "learningpathall" You can use a trained model to make predictions on new, unseen data. The model uses a process called inference, and it typically follows these steps: 1. **Load the Trained Model**: firstly, you load the model with its parameters that consist of learned weights and biases, from a saved file. + 2. **Prepare the Input Data**: next, the input data is pre-processed in the same way as during training, for example, undergoing normalization and tensor conversion, to ensure compatibility with the model. -3. 
**Make Predictions**: then the pre-processed data is fed into the model, which computes the output based on its trained parameters. The output is often a probability distribution over possible classes. + +3. **Feed Pre-Processed Data into the Model to compute predictions**: then the pre-processed data is fed into the model, which computes the output based on its trained parameters. The output is often a probability distribution over possible classes. + 4. **Interpret the Results**: finally, you can interpret the results. The predicted class is usually the one with the highest probability. You can then use the results for further analysis or decision-making. This process allows the model to generalize its learned knowledge to make accurate predictions on new data. # Running inference in PyTorch -You can run inference in PyTorch using the previously-saved model. You can then use matplotlib to display the results. +You can run inference in PyTorch using the previously-saved model. You can then use `matplotlib` to display the results. -Start by installing matplotlib package: +Start by installing the `matplotlib` package: ```console pip install matplotlib ``` -Use Visual Studio Code to create a new file named `pytorch-digits-inference.ipynb`, and modify the file to include the code below: +Use Visual Studio Code to create a new file named `pytorch-digits-inference.ipynb`, and modify the file to include the code: ```python import torch @@ -83,23 +86,23 @@ plt.tight_layout() plt.show() ``` -The code detailed above performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset, and then displays them alongside their actual and predicted labels. +The code detailed above performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset, and then displays them alongside their predicted and actual labels. As before, start by importing the necessary Python libraries: -* Torch - for loading the model and performing tensor operations. -* Datasets - for loading the MNIST dataset. -* Transforms - for transforming the MNIST dataset. -* Matplotlib.pyplot - for plotting and displaying images. -* Random - for selecting random images from the dataset. +* `Torch` - for loading the model and performing tensor operations. +* `Datasets` - for loading the MNIST dataset. +* `Transforms` - for transforming the MNIST dataset. +* `Matplotlib.pyplot` - for plotting and displaying images. +* `Random` - for selecting random images from the dataset. Next, load the MNIST test dataset using `datasets.MNIST()` with `train=False` to specify that it is the test data. The dataset is automatically downloaded if it is not available locally. Load the saved model using `torch.jit.load("model.pth")` and set the model to evaluation mode using `model.eval()`. This ensures that layers like dropout and batch normalization behave appropriately during inference. -Subsequently, select 16 random images and create a 4x4 grid of subplots using plt.subplots(4, 4, figsize=(12, 12)) for displaying the images. +Then select 16 random images and create a 4x4 grid of subplots using `plt.subplots(4, 4, figsize=(12, 12))` for displaying the images. -Afterwards, perform inference and display the images in a loop. Specifically, for each of the 16 selected images, the image and its label are retrieved from the dataset using the random index. The image tensor is expanded to include a batch dimension (image.unsqueeze(0)) because the model expects a batch of images. 
Inference is performed with `model(image_batch)` to get the prediction. The predicted label is determined using torch.argmax() to find the index of the maximum probability in the output. Each image is displayed in its respective subplot with the actual and predicted labels. You can use plt.tight_layout() to ensure that the layout is well-adjusted, and plt.show() to display the 16 images with their predicted and actual labels.
 
 This code demonstrates how to use a saved PyTorch model for inference and visualization of predictions on a subset of the MNIST test dataset.
 
@@ -109,9 +112,9 @@ After running the code, you should see results similar to the following figure:
 
 # What have you learned?
 
-You have completed the process of training and using a PyTorch model for digit classification on the MNIST dataset. Using the training dataset, you optimized the model’s weights and biases over multiple epochs. You employed the CrossEntropyLoss function and the Adam optimizer to minimize prediction errors and improve accuracy. You periodically evaluated the model on the test dataset to monitor its performance, ensuring it was learning effectively without overfitting.
+You have completed the process of training and using a PyTorch model for digit classification on the MNIST dataset. Using the training dataset, you optimized the model’s weights and biases over multiple epochs. You employed the `CrossEntropyLoss` function and the `Adam` optimizer to minimize prediction errors and improve accuracy. You periodically evaluated the model on the test dataset to monitor its performance, ensuring it was learning effectively without overfitting.
 
-After training, you saved the model using TorchScript, which captures both the model’s architecture and its learned parameters. This made the model portable and independent of the original class definition, simplifying deployment.
+After training, you saved the model using `TorchScript`, which captures both the model’s architecture and its learned parameters. This improved the flexibility of the model, making it portable and able to function independently of the original class definition, which simplifies deployment.
 
 Next, you performed inference. You loaded the saved model and set it to evaluation mode to ensure that layers like dropout and batch normalization behaved correctly during inference. You randomly selected 16 images from the MNIST test dataset to evaluate the model’s performance on unseen data. For each selected image, you used the model to predict the digit, comparing the predicted labels with the actual ones. You displayed the images alongside their actual and predicted labels in a 4x4 grid, visually assessing the model’s accuracy and performance.
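
One detail worth spelling out from the summary above: the raw outputs of the network are unnormalized scores (logits), and `torch.argmax()` simply picks the largest one. The following is a short, hedged sketch of turning logits into the probability distribution mentioned in the text; the tensor values below are invented purely for illustration:

```python
import torch

# Hypothetical logits for one image across the ten digit classes
logits = torch.tensor([[0.1, 2.3, -1.0, 0.5, 4.2, 0.0, -0.7, 1.1, 0.2, -0.3]])

# Softmax converts the logits into a probability distribution over classes
probabilities = torch.softmax(logits, dim=1)

# The predicted class is the one with the highest probability
predicted_class = torch.argmax(probabilities, dim=1).item()
confidence = probabilities[0, predicted_class].item()

print(f"Predicted digit: {predicted_class} with probability {confidence:.3f}")
# argmax over the raw logits gives the same class, because softmax is monotonic
```
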
From ebe6bbf90c50d3fbd7be67d7bc7f513c9b4b4645 Mon Sep 17 00:00:00 2001 From: GitHub Actions Stats Bot <> Date: Mon, 30 Dec 2024 01:28:39 +0000 Subject: [PATCH 55/96] automatic update of stats files --- data/stats_current_test_info.yml | 2 +- data/stats_weekly_data.yml | 90 ++++++++++++++++++++++++++++++++ 2 files changed, 91 insertions(+), 1 deletion(-) diff --git a/data/stats_current_test_info.yml b/data/stats_current_test_info.yml index bbb19d51e..a7894b33d 100644 --- a/data/stats_current_test_info.yml +++ b/data/stats_current_test_info.yml @@ -1,5 +1,5 @@ summary: - content_total: 312 + content_total: 313 content_with_all_tests_passing: 31 content_with_tests_enabled: 33 sw_categories: diff --git a/data/stats_weekly_data.yml b/data/stats_weekly_data.yml index 44d5a4394..6136a7222 100644 --- a/data/stats_weekly_data.yml +++ b/data/stats_weekly_data.yml @@ -4347,3 +4347,93 @@ avg_close_time_hrs: 0 num_issues: 12 percent_closed_vs_total: 0.0 +- a_date: '2024-12-30' + content: + cross-platform: 26 + embedded-systems: 19 + install-guides: 90 + laptops-and-desktops: 34 + microcontrollers: 25 + servers-and-cloud-computing: 94 + smartphones-and-mobile: 25 + total: 313 + contributions: + external: 45 + internal: 362 + github_engagement: + num_forks: 30 + num_prs: 9 + individual_authors: + alaaeddine-chakroun: 2 + alexandros-lamprineas: 1 + annie-tallund: 1 + arm: 3 + arnaud-de-grandmaison: 1 + arnaud-de-grandmaison,-paul-howard,-and-pareena-verma: 1 + basma-el-gaabouri: 1 + ben-clark: 1 + bolt-liu: 2 + brenda-strech: 1 + chaodong-gong,-alex-su,-kieran-hejmadi: 1 + chen-zhang: 1 + christopher-seidl: 7 + cyril-rohr: 1 + daniel-gubay: 1 + daniel-nguyen: 1 + david-spickett: 2 + dawid-borycki: 31 + diego-russo: 1 + diego-russo-and-leandro-nunes: 1 + elham-harirpoush: 2 + florent-lebeau: 5 + "fr\xE9d\xE9ric--lefred--descamps": 2 + gabriel-peterson: 5 + gayathri-narayana-yegna-narayanan: 1 + georgios-mermigkis-and-konstantinos-margaritis,-vectorcamp: 1 + graham-woodward: 1 + iago-calvo-lista,-arm: 1 + james-whitaker,-arm: 1 + jason-andrews: 91 + joe-stech: 1 + johanna-skinnider: 2 + jonathan-davies: 2 + jose-emilio-munoz-lopez,-arm: 1 + julie-gaskin: 4 + julio-suarez: 5 + kasper-mecklenburg: 1 + kieran-hejmadi: 1 + koki-mitsunami: 2 + konstantinos-margaritis: 7 + kristof-beyls: 1 + liliya-wu: 1 + mathias-brossard: 1 + michael-hall: 5 + nikhil-gupta,-pareena-verma,-nobel-chowdary-mandepudi,-ravi-malhotra: 1 + odin-shen: 1 + owen-wu,-arm: 2 + pareena-verma: 34 + pareena-verma,-annie-tallund: 1 + pareena-verma,-jason-andrews,-and-zach-lasiuk: 1 + pareena-verma,-joe-stech,-adnan-alsinan: 1 + paul-howard: 1 + pranay-bakre: 4 + preema-merlin-dsouza: 1 + przemyslaw-wirkus: 1 + rin-dobrescu: 1 + roberto-lopez-mendez: 2 + ronan-synnott: 45 + thirdai: 1 + tianyu-li: 1 + tom-pilar: 1 + uma-ramalingam: 1 + varun-chari,-albin-bernhardsson: 1 + varun-chari,-pareena-verma: 1 + visualsilicon: 1 + ying-yu: 1 + ying-yu,-arm: 1 + zach-lasiuk: 1 + zhengjun-xing: 2 + issues: + avg_close_time_hrs: 0 + num_issues: 13 + percent_closed_vs_total: 0.0 From 9a864686ad8c241046c1e195cf179ce12b06e57c Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Mon, 30 Dec 2024 06:07:51 +0000 Subject: [PATCH 56/96] Update inference.md --- .../inference.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md 
b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md
index d6e05d604..3dd292ad8 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md
@@ -9,13 +9,13 @@ layout: "learningpathall"
 
 You can use a trained model to make predictions on new, unseen data. The model uses a process called inference, and it typically follows these steps:
 
-1. **Load the Trained Model**: firstly, you load the model with its parameters that consist of learned weights and biases, from a saved file.
+1. **Load the Trained Model**: load the trained model, with its parameters consisting of the learned weights and biases, from a saved file.
 
-2. **Prepare the Input Data**: next, the input data is pre-processed in the same way as during training, for example, undergoing normalization and tensor conversion, to ensure compatibility with the model.
+2. **Prepare the Input Data**: prepare the input data by pre-processing it in the same way as during training, for example through normalization and tensor conversion, to ensure compatibility with the model.
 
-3. **Feed Pre-Processed Data into the Model to compute predictions**: then the pre-processed data is fed into the model, which computes the output based on its trained parameters. The output is often a probability distribution over possible classes.
+3. **Feed Pre-Processed Data into the Model to Compute Predictions**: feed the pre-processed data into the model, which then computes the output based on its trained parameters. The output is often a probability distribution over possible classes.
 
-4. **Interpret the Results**: finally, you can interpret the results. The predicted class is usually the one with the highest probability. You can then use the results for further analysis or decision-making.
+4. **Interpret the Results**: finally, you can interpret the results. The predicted class is usually the one with the highest probability. You can also use the results for further analysis or decision-making.
 
 This process allows the model to generalize its learned knowledge to make accurate predictions on new data.
 
 # Running inference in PyTorch
 
-You can run inference in PyTorch using the previously-saved model. You can then use matplotlib to display the results.
+You can run inference in PyTorch using the previously-saved model. You can then use `matplotlib` to display the results.
 
-Start by installing matplotlib package:
+Start by installing the `matplotlib` package:
 
 ```console
 pip install matplotlib
 ```
 
-Use Visual Studio Code to create a new file named `pytorch-digits-inference.ipynb`, and modify the file to include the code below:
+Use Visual Studio Code to create a new file named `pytorch-digits-inference.ipynb`, and modify the file to include the code:
 
 ```python
 import torch
@@ -83,23 +86,23 @@ plt.tight_layout()
 plt.show()
 ```
 
-The code detailed above performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset, and then displays them alongside their predicted and actual labels.
+This code performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset, and then displays them alongside their predicted and actual labels.
 As before, start by importing the necessary Python libraries:
 
From 1a609ab6f2e71cf5d15e1d532aaa97a984c0271c Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 30 Dec 2024 06:12:29 +0000
Subject: [PATCH 57/96] Update _index.md

---
 .../_index.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md
index f76e3568d..e8d80e248 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md
@@ -8,16 +8,16 @@ who_is_this_for: This is an advanced topic for software developers interested in
 learning_objectives: 
   - Prepare a PyTorch development environment.
   - Download and prepare the MNIST dataset.
-  - Create a neural network architecture using PyTorch.
-  - Train a neural network using PyTorch.
-  - Create an Android app and loading the pre-trained model.
+  - Create and train a neural network architecture using PyTorch.
+  - Create an Android app and load the pre-trained model.
   - Prepare an input dataset.
   - Measure the inference time.
   - Optimize a neural network architecture using quantization and fusing.
-  - Use an optimized model in the Android application.
+  - Deploy an optimized model in an Android application.
 
 prerequisites:
-  - A computer that can run Python3, Visual Studio Code, and Android Studio. The OS can be Windows, Linux, or macOS.
+  - A computer that can run Python3, Visual Studio Code, and Android Studio.
+  - For the OS, you can use Windows, Linux, or macOS.
 
 
 author_primary: Dawid Borycki

From 0eb090e38bbc7ad213c7478e6b7abd07760c363c Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 30 Dec 2024 06:27:26 +0000
Subject: [PATCH 58/96] Update _next-steps.md

---
 .../pytorch-digit-classification-arch-training/_next-steps.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md
index 82cf1f985..4c12b745e 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md
@@ -4,7 +4,7 @@
 # ================================================================================
 
 next_step_guidance: >
-    Proceed to Use Keras Core with TensorFlow, PyTorch, and JAX backends to continue exploring Machine Learning.
+    To continue exploring Machine Learning, you can now learn about using Keras Core with TensorFlow, PyTorch, and JAX backends.
 
 # 1-3 sentence recommendation outlining how the reader can generally keep learning about these topics, and a specific explanation of why the next step is being recommended.
 
From 5b3025d021a0f3fb7cb1c2cfd01e9373d6d85931 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 30 Dec 2024 06:40:59 +0000
Subject: [PATCH 59/96] Restructured PyTorch model training process outline.
--- .../intro2.md | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md index c1e3a1021..770a7c9bf 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md @@ -9,15 +9,18 @@ layout: "learningpathall" ## PyTorch model training -In the previous section, you created a feedforward neural network for digit classification using the MNIST dataset. The network was left untrained and lacks the ability to make accurate predictions. +In the previous section, you created a feedforward neural network for digit classification using the MNIST dataset. To enable the network to recognize handwritten digits effectively and make accurate predictions, training is needed. -To enable the network to recognize handwritten digits effectively, training is needed. Training in PyTorch involves configuring the network's parameters, such as weights and biases, by exposing the model to labeled data and iteratively adjusting these parameters to minimize prediction errors. This process allows the model to learn the patterns in the data, enabling it to make accurate classifications on new, unseen inputs. +Training in PyTorch involves exposing the model to labeled data and iteratively configuring the network's parameters, such as the weights and biases, to reduce the number of prediction errors. This process allows the model to learn the patterns in the data, enabling it to make accurate classifications on new, unseen inputs. -The typical approach to training a neural network in PyTorch involves several key steps. +The typical approach to training a neural network in PyTorch involves several key steps: -First, obtain and preprocess the dataset, which usually includes normalizing the data and converting it into a format suitable for the model. - -Next, the dataset is split into training and testing subsets. Training data is used to update the model's parameters, while testing data evaluates its performance. During training, feed batches of input data through the network, calculate the prediction error or loss using a loss function (such as cross-entropy for classification tasks), and optimize the model's weights and biases using backpropagation. Backpropagation involves computing the gradient of the loss with respect to each parameter and then updating the parameters using an optimizer, like Stochastic Gradient Descent (SGD) or Adam. This process is repeated for multiple epochs until the model achieves satisfactory performance, balancing accuracy and generalization. +* Firstly, preprocess the dataset, which often involves normalizing the data and converting it into a format suitable for the model. +* Next, split the dataset into training and testing subsets. Training data is used to update the model's parameters, while testing data evaluates its performance. +* Feed batches of input data through the network. +* Calculate the prediction error or loss using a loss function (such as cross-entropy for classification tasks). +* Optimize the model's weights and biases using backpropagation. Backpropagation involves computing the gradient of the loss with respect to each parameter and then updating the parameters using an optimizer, like Stochastic Gradient Descent (SGD) or Adam. 
+* Repeat the process for multiple epochs until the model achieves satisfactory performance, balancing accuracy and generalization.
 
 ### Loss, gradients, epoch and backpropagation
 
From b551f10dc1100ac52ad8ba50d78ead06b0fb6e60 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 30 Dec 2024 07:02:49 +0000
Subject: [PATCH 60/96] Update intro2.md

---
 .../intro2.md | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
index 770a7c9bf..efc27a6da 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
@@ -9,22 +9,27 @@ layout: "learningpathall"
 
 ## PyTorch model training
 
-In the previous section, you created a feedforward neural network for digit classification using the MNIST dataset. To enable the network to recognize handwritten digits effectively and make accurate predictions, training is needed.
+Now that you have created a feedforward neural network for digit classification using the MNIST dataset, training is needed to enable the network to recognize handwritten digits effectively and make accurate predictions.
 
-Training in PyTorch involves exposing the model to labeled data and iteratively configuring the network's parameters, such as the weights and biases, to reduce the number of prediction errors. This process allows the model to learn the patterns in the data, enabling it to make accurate classifications on new, unseen inputs.
+Training in PyTorch involves exposing the model to labeled data and iteratively configuring the network's parameters. These parameters, such as the weights and biases, can be adjusted to reduce the number of prediction errors. This process allows the model to learn the patterns in the data, enabling it to make accurate classifications on new, unseen inputs.
 
 The typical approach to training a neural network in PyTorch involves several key steps:
 
-* Firstly, preprocess the dataset, which often involves normalizing the data and converting it into a format suitable for the model.
-* Next, split the dataset into training and testing subsets. Training data is used to update the model's parameters, while testing data evaluates its performance.
+* Pre-process the dataset, for example normalize the data and convert it into a suitable format.
+
+* Divide the dataset into training and testing subsets. You can use training data to update the model's parameters, and testing data to evaluate its performance.
+
 * Feed batches of input data through the network.
+
-* Calculate the prediction error or loss using a loss function (such as cross-entropy for classification tasks).
+* Calculate the prediction error or loss using a loss function, such as Cross-Entropy for classification tasks.
+
 * Optimize the model's weights and biases using backpropagation. Backpropagation involves computing the gradient of the loss with respect to each parameter and then updating the parameters using an optimizer, like Stochastic Gradient Descent (SGD) or Adam.
+
 * Repeat the process for multiple epochs until the model achieves satisfactory performance, balancing accuracy and generalization.
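+
+A single pass through this loop can be sketched in a few lines of PyTorch. This is a hedged, illustrative example rather than code from this Learning Path; the stand-in network and the random batch are assumptions that take the place of the real model and one batch from the training DataLoader:
+
+```python
+import torch
+import torch.nn as nn
+
+# Stand-in network and a random batch; in practice these come from the
+# model definition and the training DataLoader
+model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
+images = torch.rand(32, 1, 28, 28)    # one batch of 32 MNIST-sized images
+labels = torch.randint(0, 10, (32,))  # matching (random) digit labels
+
+loss_fn = nn.CrossEntropyLoss()                   # loss for multi-class classification
+optimizer = torch.optim.Adam(model.parameters())  # optimizer for weights and biases
+
+outputs = model(images)          # forward pass: compute predictions for the batch
+loss = loss_fn(outputs, labels)  # measure the prediction error against the labels
+
+optimizer.zero_grad()            # clear gradients from the previous iteration
+loss.backward()                  # backpropagation: compute gradients of the loss
+optimizer.step()                 # update the parameters using the gradients
+```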
 ### Loss, gradients, epoch and backpropagation
 
-Loss is a measure of how well a model's predictions match the true labels of the data. It quantifies the difference between the predicted output and the actual output. The lower the loss, the better the model's performance. In classification tasks, a common loss function is Cross-Entropy Loss, while Mean Squared Error (MSE) is often used for regression tasks. The goal of training is to minimize the loss, which indicates that the model's predictions are getting closer to the actual labels.
+Loss is a measure of how well a model's predictions match the true labels of the data. It quantifies the difference between the predicted output and the actual output. The lower the loss, the better the model's performance. In classification tasks, a common loss function is Cross-Entropy Loss, while Mean Squared Error (MSE) is often used for regression tasks. The goal of training is to minimize the loss, bringing the model's predictions closer to the actual labels.
 
 Gradients represent the rate of change of the loss with respect to each of the model's parameters (weights and biases). They are used to update the model's parameters in the direction that reduces the loss. Gradients are calculated during the backpropagation step, where the loss is propagated backward through the network to compute how each parameter contributes to the overall loss. Optimizers like SGD or Adam use these gradients to adjust the parameters, effectively “teaching” the model to improve its predictions.
 
From eca6ff59c8fe7e17cde17359af7060ac2ea9dd15 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 30 Dec 2024 11:30:01 +0000
Subject: [PATCH 61/96] Update intro2.md

---
 .../pytorch-digit-classification-arch-training/intro2.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
index efc27a6da..1972b6724 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
@@ -15,7 +15,7 @@ The typical approach to training a neural network in PyTorch involves several k
 
-* Pre-process the dataset, for example normalize the data and convert it into a suitable format.
+* Preprocess the dataset, for example, normalize the data and convert it into a suitable format.
 
 * Divide the dataset into training and testing subsets. You can use training data to update the model's parameters, and testing data to evaluate its performance.
 
From e62d510495d1c555c7deac7274e4d6afa831784d Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Mon, 30 Dec 2024 11:47:05 +0000
Subject: [PATCH 62/96] Suggestion from KB.
--- .../servers-and-cloud-computing/net-aspire/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md index 4752581ff..ea073fc72 100644 --- a/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/net-aspire/_index.md @@ -6,7 +6,7 @@ minutes_to_complete: 60 who_is_this_for: This is an introductory topic for software developers interested in learning how to deploy .NET Aspire applications on Arm-based virtual machines (VMs) on Amazon Web Services (AWS) and Google Cloud Platform (GCP). learning_objectives: - - Describe .NET Aspire, including what it can achieve. + - Demonstrate knowledge and understanding of .NET Aspire developer tools. - Create a .NET Aspire application. - Modify code on a Windows on Arm development machine. - Deploy a .NET Aspire application to Arm-powered virtual machines in the Cloud. From cb687acd060cd2f1434a4b9cb8cbc84c068d3bdb Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Mon, 30 Dec 2024 13:57:06 +0000 Subject: [PATCH 63/96] Update intro-opt.md --- .../intro-opt.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md index 870aa445d..ac34d805f 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md @@ -11,19 +11,19 @@ layout: "learningpathall" Optimizing models is crucial to achieving efficient performance while minimizing resource consumption. -Because mobile and edge devices can have limited computational power, memory, and energy availability, various strategies are used to ensure that ML models can run effectively in these constrained environments. +As mobile and edge devices can have limited computational power, memory, and energy availability, various strategies can be deployed to ensure that ML models can run effectively in these constrained environments. ### Quantization -Quantization is one of the most widely used techniques, which reduces the precision of the model's weights and activations from floating-point to lower-bit representations, such as int8 or float16. This not only reduces the model size but also accelerates inference speed on hardware that supports lower precision arithmetic. +Quantization is one of the most widely used techniques, which reduces the precision of the model's weights and activations from floating-point to lower-bit representations, such as int8 or float16. This not only reduces the model size but also accelerates inference speed on hardware that supports low-precision arithmetic. ### Layer fusion -Another key optimization strategy is layer fusion, where multiple operations, such as combining linear layers with their subsequent activation functions (like ReLU), into a single layer. This reduces the number of operations that need to be executed during inference, minimizing latency and improving throughput. +Another key optimization strategy is layer fusion. 
Layer fusion involves combining linear layers with their subsequent activation functions, such as ReLU, into a single layer. This reduces the number of operations that need to be executed during inference, minimizing latency and improving throughput. ### Pruning -In addition to these techniques, pruning, which involves removing less important weights or neurons from the model, can help in creating a leaner model that requires fewer resources without significantly affecting accuracy. +In addition to these techniques, pruning, which involves removing less significant weights or neurons from the model, can help in creating a leaner model that requires fewer resources without markedly affecting accuracy. ### Android NNAPI @@ -62,4 +62,4 @@ After adjusting the training pipeline to produce an optimized version of the mod Once these changes are made, you will modify the Android application to load either the original or the optimized model based on user input, allowing you to switch between them dynamically. -This setup enables you to compare the inference speed of both models on the device, providing valuable insights into the performance benefits of model optimization techniques in real-world scenarios. \ No newline at end of file +This setup enables you to compare the inference speed of both models on the device, providing valuable insights into the performance benefits of model optimization techniques in real-world scenarios. From d34732911152f8f8b06df4176401f07bd282c636 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Mon, 30 Dec 2024 19:08:00 +0000 Subject: [PATCH 64/96] Tweaked audience statement. --- .../pytorch-digit-classification-arch-training/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md index e8d80e248..0046ef12e 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md @@ -3,7 +3,7 @@ title: Create and train a PyTorch model for digit classification minutes_to_complete: 160 -who_is_this_for: This is an advanced topic for software developers interested in learning how to use PyTorch to create and train a feedforward neural network for digit classification. You will also learn how to use the trained model in an Android application. Finally, you will apply model optimizations. +who_is_this_for: This is an advanced topic for software developers interested in learning how to use PyTorch to create and train a feedforward neural network for digit classification, and also software developers interested in learning how to use and apply optimizations to the trained model in an Android application. learning_objectives: - Prepare a PyTorch development environment. From bc9f1aa933a993ce9ca1f658449a2cb90b384453 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Mon, 30 Dec 2024 19:18:52 +0000 Subject: [PATCH 65/96] Added UI formatting and corrected spelling error. 
--- .../app.md | 28 ++++++++++--------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md index d591afe57..89c7747ee 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md @@ -7,29 +7,31 @@ weight: 10 layout: "learningpathall" --- -You are now ready to run the Android application. You can use an emulator or a physical device. - -The screenshots below show an emulator. +You are now ready to run the Android application. The screenshots below show an emulator, but you can also use a physical device. To run the app in Android Studio using an emulator, follow these steps: 1. Configure the Emulator: -* Go to Tools > Device Manager (or click the Device Manager icon on the toolbar). -* Click Create Device to set up a new virtual device (if you haven’t done so already). -* Choose a device model, such as Pixel 4, and click Next. -* Select a system image, such as Android 11, API level 30, and click Next. -* Review the settings and click Finish to create the emulator. + +* Go to **Tools** > **Device Manager**, or click the Device Manager icon on the toolbar. +* Click **Create Device** to set up a new virtual device, if you haven’t done so already. +* Choose a device model, such as the Pixel 4, and click **Next**. +* Select a system image, such as Android 11, API level 30, and click **Next**. +* Review the settings, and click **Finish** to create the emulator. 2. Run the App: -* Make sure the emulator is selected in the device dropdown menu in the toolbar (next to the “Run” button). -* Click the Run button (a green triangle). Android Studio will build the app, install it on the emulator, and launch it. -3. View the App on the Emulator: Once the app is installed, it will automatically open on the emulator screen, allowing you to interact with it as if it were running on a real device. +* Make sure the emulator is selected in the device drop-down menu in the toolbar, next to the **Run** button. +* Click the **Run** button, which is a green triangle. Android Studio builds the app, installs it on the emulator, and then launches it. + +3. View the App on the Emulator: + +* Once the app is installed, it automatically opens on the emulator screen, allowing you to interact with it as if it were running on a real device. -Once the application is started, click the Load Image button. It will load a randomly selected image. Then, click Run Inference to recognize the digit. The application will display the predicted label and the inference time as shown below: +Once the application starts, click the **Load Image** button. It loads a randomly-selected image. Then, click **Run Inference** to recognize the digit. The application displays the predicted label and the inference time as shown below: ![img](Figures/05.png) ![img](Figures/06.png) -In the next step you will learn how to further optimize the model. +In the next step of this Learning Path, you will learn how to further optimize the model. 
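
The quantization and layer-fusion techniques described in the optimization overview earlier in this series lend themselves to a short sketch. This is a hedged illustration, not the Learning Path's actual optimization pipeline; the `TinyNet` model and its layer names are assumptions invented for the example:

```python
import torch
import torch.nn as nn

# A small stand-in model with named Linear and ReLU layers that can be fused
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(784, 128)
        self.relu1 = nn.ReLU()
        self.linear2 = nn.Linear(128, 10)

    def forward(self, x):
        return self.linear2(self.relu1(self.linear1(x)))

model = TinyNet().eval()  # fusion expects the model in evaluation mode

# Dynamic quantization: store Linear weights as int8, shrinking the model
# and speeding up inference on hardware with low-precision support
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Layer fusion: combine Linear + ReLU into one module, so inference
# executes fewer separate operations (shown here on the float model)
fused = torch.ao.quantization.fuse_modules(model, [["linear1", "relu1"]])

print(quantized)
print(fused)
```

In a full pipeline these steps are usually combined, with fusion applied before quantization; the sketch keeps them separate only to show each transformation in isolation. On older PyTorch versions the same functions live under `torch.quantization` instead of `torch.ao.quantization`.
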
From ce412da8e7373365f23551621ce7f298a0f488ac Mon Sep 17 00:00:00 2001 From: Jason Andrews Date: Mon, 30 Dec 2024 17:53:15 -0600 Subject: [PATCH 66/96] spelling updates --- .wordlist.txt | 8 +++----- .../learning-paths/servers-and-cloud-computing/_index.md | 3 +-- .../servers-and-cloud-computing/flink/_index.md | 2 +- .../5-render-a-simple-3D-object-part-1.md | 2 +- .../2-app-scaffolding.md | 2 +- 5 files changed, 7 insertions(+), 10 deletions(-) diff --git a/.wordlist.txt b/.wordlist.txt index b38472514..ee88c9c91 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -3312,7 +3312,6 @@ WGSL WPR WebGL WebGPU -WebGPU’s Xperf andc andnot @@ -3352,7 +3351,6 @@ LastWriteTime LiteRT OV Seeed -WebGPU’s WiseEye Yolov blp @@ -3400,7 +3398,6 @@ Preema Roesch Sourcefire TPACKET -WebGPU’s Whitepaper YGCT axion @@ -3456,7 +3453,7 @@ TestOpenCV TrustedFirmware Veraison WeatherForecast -WebGPU’s +WebGPU's Wiredtiger androidml ar @@ -3510,4 +3507,5 @@ unutilized vLLM veraison verifier -vllm \ No newline at end of file +vllm +observables \ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/_index.md b/content/learning-paths/servers-and-cloud-computing/_index.md index 7351be0c9..0e8ca7ef9 100644 --- a/content/learning-paths/servers-and-cloud-computing/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/_index.md @@ -83,8 +83,7 @@ tools_software_languages_filter: - HammerDB: 1 - InnoDB: 1 - Intrinsics: 1 -- JAVA: 1 -- Java: 2 +- Java: 3 - JAX: 1 - Kafka: 1 - Keras: 1 diff --git a/content/learning-paths/servers-and-cloud-computing/flink/_index.md b/content/learning-paths/servers-and-cloud-computing/flink/_index.md index 177718ad5..503524688 100644 --- a/content/learning-paths/servers-and-cloud-computing/flink/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/flink/_index.md @@ -32,7 +32,7 @@ operatingsystems: tools_software_languages: - Flink -- JAVA +- Java - Nexmark diff --git a/content/learning-paths/smartphones-and-mobile/android_webgpu_dawn/5-render-a-simple-3D-object-part-1.md b/content/learning-paths/smartphones-and-mobile/android_webgpu_dawn/5-render-a-simple-3D-object-part-1.md index cd8d77ac1..c6e3f9f54 100644 --- a/content/learning-paths/smartphones-and-mobile/android_webgpu_dawn/5-render-a-simple-3D-object-part-1.md +++ b/content/learning-paths/smartphones-and-mobile/android_webgpu_dawn/5-render-a-simple-3D-object-part-1.md @@ -113,7 +113,7 @@ ShaderModule shaderModule = device.createShaderModule(shaderDesc); By default the `nextInChain` member of `ShaderModuleDescriptor` is a `nullptr`. -The `nextInChain` pointer is the entry point of WebGPU’s extension mechanism. It is either null or pointing to a structure of type `WGPUChainedStruct`. +The `nextInChain` pointer is the entry point of WebGPU's extension mechanism. It is either null or pointing to a structure of type `WGPUChainedStruct`. It may recursively have a next element (again, either null or pointing to a `WGPUChainedStruct`). 
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md index 999a8cbdb..d85541cc2 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md @@ -89,7 +89,7 @@ camera-view = { group = "androidx.camera", name = "camera-view", version.ref = " {{% notice Tip %}} -You may also click the __"Sync Project with Gradle Files"__ button in the toolbar or pressing the corresponding shorcut to start a sync. +You may also click the __"Sync Project with Gradle Files"__ button in the toolbar or pressing the corresponding shortcut to start a sync. ![Sync Project with Gradle Files](images/2/sync%20project%20with%20gradle%20files.png) {{% /notice %}} From 80197b93e7d746844e5541d4d1f489a1577da2a4 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 02:13:42 +0000 Subject: [PATCH 67/96] Adjusted title --- .../pytorch-digit-classification-arch-training/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md index 0046ef12e..50f1b1fe9 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md @@ -1,5 +1,5 @@ --- -title: Create and train a PyTorch model for digit classification +title: Create and train a PyTorch model for digit classification using the MNIST dataset minutes_to_complete: 160 @@ -16,7 +16,7 @@ learning_objectives: - Deploy an optimized model in an Android application. prerequisites: - - A computer that can run Python3, Visual Studio Code, and Android Studio. + - A machine that can run Python3, Visual Studio Code, and Android Studio. - For the OS, you can use Windows, Linux, or macOS. From 1a32dd62668b9e0e1f7f91ec96292052c965506e Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 02:26:02 +0000 Subject: [PATCH 68/96] Reviewed questions. --- .../_review.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_review.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_review.md index 8347d010f..c25b83c56 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_review.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_review.md @@ -15,31 +15,31 @@ review: question: > Does the input layer of the model flatten the 28x28 pixel image into a 1D array of 784 elements? answers: - - "Yes" - - "No" + - "Yes." + - "No." correct_answer: 1 explanation: > Yes, the model uses nn.Flatten() to reshape the 28x28 pixel image into a 1D array of 784 elements for processing by the fully connected layers. - questions: question: > - Will the model make random predictions if it’s run before training? 
+ Will the model make random predictions if it is run before training? answers: - - "Yes" - - "No" + - "Yes." + - "No." correct_answer: 1 explanation: > - Yes, however in such the case the model will produce random outputs, as the network has not been trained to recognize any patterns from the data. + Yes, however in this scenario the model will produce random outputs, as the network has not been trained to recognize any patterns from the data. - questions: question: > - Which loss function was used to train the PyTorch model on the MNIST dataset? + Which loss function did you use to train the PyTorch model on the MNIST dataset in this Learning Path? answers: - - Mean Squared Error Loss - - Cross Entropy Loss - - Hinge Loss + - Mean Squared Error Loss. + - Cross-Entropy Loss. + - Hinge Loss. - Binary Cross-Entropy Loss correct_answer: 2 explanation: > - Cross Entropy Loss was used to train the model because it is suitable for multi-class classification tasks like digit classification. It measures the difference between the predicted probabilities and the true class labels, helping the model learn to make accurate predictions. + Cross-Entropy Loss was used to train the model as it is suitable for multi-class classification such as digit classification. It measures the difference between the predicted probabilities and the true class labels, helping the model to learn to make accurate predictions. # ================================================================================ # FIXED, DO NOT MODIFY From 72a88b7fe0acdab3e43f391c969e4a6b38c33c85 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 02:58:59 +0000 Subject: [PATCH 69/96] Review. --- .../datasets-and-training.md | 52 ++++++++++++------- 1 file changed, 33 insertions(+), 19 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md index d1e499113..77b669133 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md @@ -9,9 +9,9 @@ layout: "learningpathall" ## Prepare the MNIST data -Start by downloading the MNIST dataset. Proceed as follows: +Start by downloading the MNIST dataset. -1. Open the pytorch-digits.ipynb you created earlier. +1. Open the `pytorch-digits.ipynb` you created earlier. 2. Add the following statements: @@ -42,9 +42,15 @@ train_dataloader = DataLoader(training_data, batch_size=batch_size) test_dataloader = DataLoader(test_data, batch_size=batch_size) ``` -The above code snippet downloads the MNIST dataset, transforms the images into tensors, and sets up data loaders for training and testing. Specifically, the `datasets.MNIST` function is used to download the MNIST dataset, with `train=True` indicating training data and `train=False` indicating test data. The `transform=transforms.ToTensor()` argument converts each image in the dataset into a PyTorch tensor, which is necessary for model training and evaluation. +Using this code enables you to: -The DataLoader wraps the datasets and allows efficient loading of data in batches. It handles data shuffling, batching, and parallel loading. 
Here, the train_dataloader and test_dataloader are created with a batch_size of 32, meaning they will load 32 images per batch during training and testing. +* Download the MNIST dataset. +* Transform the images into tensors. +* Set up data loaders for training and testing. + +Specifically, the `datasets.MNIST` function downloads the MNIST dataset, with `train=True` indicating training data and `train=False` indicating test data. The `transform=transforms.ToTensor()` argument converts each image in the dataset into a PyTorch tensor, which is necessary for model training and evaluation. + +The DataLoader wraps the datasets and enables efficient loading of data in batches. It handles data shuffling, batching, and parallel loading. Here, the train_dataloader and test_dataloader are created with a batch_size of 32, meaning they will load 32 images per batch during training and testing. This setup prepares the training and test datasets for use in a machine learning model, enabling efficient data handling and model training in PyTorch. @@ -54,17 +60,19 @@ To run the above code, you will need to install certifi package: pip install certifi ``` -The certifi Python package provides the Mozilla root certificates, which are essential for ensuring the SSL connections are secure. If you’re using macOS, you may also need to install the certificates by running: +The certifi Python package provides the Mozilla root certificates, which are essential for ensuring the SSL connections are secure. If you’re using macOS, you might also need to install the certificates by running: ```console /Applications/Python\ 3.x/Install\ Certificates.command ``` -Make sure to replace `x` with the number of Python version you have installed. +{{% notice Note %}} +Make sure to replace 'x' with the version number of Python that you have installed. +{{% /notice %}} -After running the code you see output similar to the screenshot below: +After running the code, you will see output similar to Figure 1: -![image](Figures/01.png) +![image alt-text#center](Figures/01.png "Figure 1. Output 1".) # Train the model @@ -77,7 +85,7 @@ loss_fn = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) ``` -Use CrossEntropyLoss as the loss function and the Adam optimizer for training. The learning rate is set to 1e-3. +Use `CrossEntropyLoss` as the loss function and the Adam optimizer for training. The learning rate is set to 1e-3. Next, define the methods for training and evaluating the feedforward neural network: @@ -111,7 +119,7 @@ def test_loop(dataloader, model, loss_fn): print(f"Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n") ``` -The first method, `train_loop`, uses the backpropagation algorithm to optimize the trainable parameters and minimize the prediction error of the neural network. The second method, `test_loop`, calculates the neural network error using the test images and displays the accuracy and loss values. +The first method, `train_loop`, uses the backpropagation algorithm to optimize the trainable parameters and minimize the prediction error rate of the neural network. The second method, `test_loop`, calculates the neural network error rate using the test images, and displays the accuracy and loss values. You can now invoke these methods to train and evaluate the model using 10 epochs. @@ -124,9 +132,9 @@ for t in range(epochs): test_loop(test_dataloader, model, loss_fn) ``` -After running the code, you see the following output showing the training progress. 
+After running the code, you see the following output showing the training progress, as displayed in Figure 2. -![image](Figures/02.png) +![image alt-text#center](Figures/02.png "Figure 2. Output 2") Once the training is complete, you see output similar to: @@ -139,13 +147,13 @@ The output shows the model achieved around 95% accuracy. # Save the model -Once the model is trained, you can save it. There are various approaches for this. In PyTorch, you can save both the model’s structure and its weights to the same file using the `torch.save()` function. Alternatively, you can save only the weights (parameters) of the model, not the model architecture itself. This requires you to have the model’s architecture defined separately when loading. To save the model weights, you can use the following command: +Once the model is trained, you can save it. There are various approaches for this. In PyTorch, you can save both the model’s structure and its weights to the same file using the `torch.save()` function. Alternatively, you can save only the weights of the model, not the model architecture itself. This requires you to have the model’s architecture defined separately when loading. To save the model weights, you can use the following command: ```Python torch.save(model.state_dict(), "model_weights.pth"). ``` -However, PyTorch does not save the definition of the class itself. When you load the model using `torch.load()`, PyTorch needs to know the class definition to recreate the model object. +However, PyTorch does not save the definition of the class itself. When you load the model using `torch.load()`, PyTorch requires the class definition to recreate the model object. Therefore, when you later want to use the saved model for inference, you will need to provide the definition of the model class. @@ -164,16 +172,22 @@ traced_model = torch.jit.trace(model, torch.rand(1, 1, 28, 28)) traced_model.save("model.pth") ``` -The above commands set the model to evaluation mode, trace the model, and save it. Tracing is useful for converting models with static computation graphs to TorchScript, making them portable and independent of the original class definition. +The above commands perform the following tasks: + +* They set the model to evaluation mode. +* They trace the model. +* They save it. + +Tracing is useful for converting models with static computation graphs to TorchScript, making them flexible and independent of the original class definition. Setting the model to evaluation mode before tracing is important for several reasons: -1. Behavior of Layers like Dropout and BatchNorm: - * Dropout. During training, dropout randomly zeroes out some of the activations to prevent overfitting. During evaluation dropout is turned off, and all activations are used. +1. Behavior of Layers like Dropout and BatchNorm: + * Dropout. During training, dropout randomly zeroes out some of the activations to prevent overfitting. During evaluation, dropout is turned off, and all activations are used. * BatchNorm. During training, Batch Normalization layers use batch statistics to normalize the input. During evaluation, they use running averages calculated during training. -2. Consistent Inference Behavior. By setting the model to eval mode, you ensure that the traced model will behave consistently during inference, as it will not use dropout or batch statistics that are inappropriate for inference. +2. Consistent Inference Behavior. 
By setting the model to eval mode, you ensure that the traced model behaves consistently during inference, as it does not use dropout or batch statistics that are inappropriate for inference. -3. Correct Tracing. Tracing captures the operations performed by the model using a given input. If the model is in training mode, the traced graph may include operations related to dropout and batch normalization updates. These operations can affect the correctness and performance of the model during inference. +3. Correct Tracing. Tracing captures the operations performed by the model using a given input. If the model is in training mode, the traced graph might include operations related to dropout and batch normalization updates. These operations can affect the correctness and performance of the model during inference. In the next step, you will use the saved model for ML inference. From 3f8738a03f4662559b744193664042d526c99ac8 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 03:06:19 +0000 Subject: [PATCH 70/96] Update intro-android.md --- .../intro-android.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md index 0e29ad251..022d82107 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md @@ -1,6 +1,6 @@ --- # User change -title: "Understand inference on Android" +title: "Learn about inference on Android" weight: 7 @@ -15,11 +15,13 @@ Arm provides a wide range of hardware and software accelerators designed to opti Running a machine learning model on Android involves a few key steps. -First, you train and save the model in a mobile-friendly format, such as TensorFlow Lite, ONNX, or TorchScript, depending on the framework you are using. +* You train and save the model in a mobile-friendly format, such as TensorFlow Lite, ONNX, or TorchScript, depending on the framework you are using. -Next, you add the model file to your Android project's assets directory. In your application's code, use the corresponding framework's Android library, such as TensorFlow Lite or PyTorch Mobile, to load the model. +* You add the model file to your Android project's assets directory. In your application's code, use the corresponding framework's Android library, such as TensorFlow Lite or PyTorch Mobile, to load the model. -You then prepare the input data, ensuring it is formatted and preprocessed in the same way as during model training. The input data is passed through the model, and the output predictions are retrieved and interpreted accordingly. For improved performance, you can leverage hardware acceleration using Android’s Neural Networks API (NNAPI) or use GPU support if available. This process enables the Android app to make real-time predictions and execute complex machine learning tasks directly on the device. +* You prepare the input data, ensuring it is formatted and preprocessed in the same way as during model training. The input data is passed through the model, and the output predictions are retrieved and interpreted accordingly. 
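The first of these steps can be sketched in a few lines of Python. The example below is only an illustration: the `nn.Sequential` model is a stand-in for the trained network built earlier in this Learning Path, and the file names match the ones used in the following sections:

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in for the trained network from the earlier sections of this
# Learning Path; substitute your trained NeuralNetwork instance here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()  # switch dropout and batch-norm layers to inference behavior

# Trace with a dummy MNIST-shaped input to produce a TorchScript module.
example_input = torch.rand(1, 1, 28, 28)
traced = torch.jit.trace(model, example_input)
traced.save("model.pth")  # the file the Android app loads from its assets folder

# Optional extra step: optimize the traced module for the PyTorch Lite Interpreter.
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("model.ptl")
```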
+ +For improved performance, you can leverage hardware acceleration using Android’s Neural Networks API (NNAPI) or use GPU support if available. This process enables the Android app to make real-time predictions and execute complex machine learning tasks directly on the device. In this Learning Path, you will learn how to perform inference in an Android application using the pre-trained digit classifier from the previous sections. From d4ef2b8a36dcc2f8162b1ed631d4fa5a347c912f Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 11:10:04 +0000 Subject: [PATCH 71/96] Review. --- .../mobile-app.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md index fe897f817..6ede98431 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md @@ -21,9 +21,9 @@ Start by modifying the `activity_main.xml` by adding a `CheckBox` to use the opt android:textSize="16sp"/> ``` -Copy the optimized model to the `assets` folder of the Android project. +Copy the optimized model to the `assets` folder in the Android project. -Replace the `MainActivity.kt` by the following code: +Replace the code in `MainActivity.kt` Kotlin file with the following code: ```Kotlin package com.arm.armpytorchmnistinference @@ -214,7 +214,7 @@ class MainActivity : AppCompatActivity() { } ``` -The updated version of the Android application includes modifications to the Android Activity to dynamically load the model based on the state of the `CheckBox`. +The updated version of the Android application includes modifications to the Android Activity source code to dynamically load the model based on the state of the `CheckBox`. When the `CheckBox` is selected, the app loads the optimized model, which is quantized and fused for improved performance. @@ -222,11 +222,11 @@ If the `CheckBox` is not selected, the app loads the original model. After the model is loaded, the inference is run. To better estimate the execution time, the `runInference()` method executes the inference 100 times in a loop. This provides a more reliable measure of the average inference time by smoothing out any inconsistencies from single executions. -The results for a run on a physical device are shown below. These results indicate that, on average, the optimized model reduced the inference time to about 65% of the original model's execution time, showing a significant improvement in performance. +The results for a run on a physical device are shown below. These results indicate that, on average, the optimized model reduced the inference time to about 65% of the original model's execution time, which demonstrates a significant improvement in performance. This optimization showcases the benefits of quantization and layer fusion for mobile inference, and there is further potential for enhancement by enabling hardware acceleration on supported devices. -This would allow the model to take full advantage of the device's computational capabilities, potentially reducing the inference time even more. +This would allow the model to take full advantage of the device's computational capabilities, potentially further reducing the inference time. 
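You can reproduce the same averaging methodology outside the app. The following sketch assumes the traced `model.pth` file from the earlier sections is available, and it times 100 runs of the TorchScript model in Python to smooth out single-run variation:

```python
import time
import torch

# Load the TorchScript model produced earlier (file name assumed from this guide).
model = torch.jit.load("model.pth")
model.eval()

x = torch.rand(1, 1, 28, 28)  # dummy MNIST-shaped input

with torch.no_grad():
    for _ in range(10):  # warm-up runs, excluded from the measurement
        model(x)

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start

print(f"Average inference time: {elapsed / runs * 1_000_000:.1f} microseconds")
```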
![fig](Figures/07.jpg) @@ -240,4 +240,4 @@ Quantization and layer fusion removed unnecessary elements such as dropout layer By running multiple iterations of the inference process, you learned that the optimized model significantly reduced the average inference time to around 65% of the original time. -You also learned that there is potential for further performance improvements by leveraging hardware acceleration. \ No newline at end of file +You also learned that there is potential for further performance improvements by leveraging hardware acceleration. From 31d91c033481ba8257344a32e4d759195d0df9b2 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 11:17:34 +0000 Subject: [PATCH 72/96] Review. --- .../prepare-data.md | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md index c16cb4c20..0ed46592a 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md @@ -1,19 +1,19 @@ --- # User change -title: "Prepare Test Data" +title: "Prepare the Test Data" weight: 9 layout: "learningpathall" --- -In this section you will add the pre-trained model and copy the bitmap image data to the Android project. +In this section, you will add the pre-trained model and copy the bitmap image data to the Android project. ## Model To add the model, create a folder named `assets` in the `app/src/main` folder. -Copy the pre-trained model you created in the previous steps, `model.pth` to the `assets` folder. +Copy the pre-trained model, named `model.pth`, to the `assets` folder. The model is also available in the [GitHub repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git) if you need to copy it. @@ -66,16 +66,18 @@ for i, (image, label) in enumerate(test_data): break ``` -The above code processes the MNIST test dataset to generate and save bitmap images for digit classification. +This code processes the MNIST test dataset to generate and save bitmap images for digit classification. It defines constants for the number of unique digits (0-9) and the number of examples to collect per digit. The dataset is loaded using `torchvision.datasets` with a transformation to convert images to tensors. A directory named `mnist_bitmaps` is created to store the images. A dictionary tracks the number of collected examples for each digit. The code iterates through the dataset, converting each image tensor back to a PIL image, and saves two examples of each digit in the format `digit_index_example_index.png`. -The loop breaks once the specified number of examples per digit is saved, ensuring that exactly 20 images (2 per digit) are generated and stored in the specified directory. +The loop breaks once the specified number of examples per digit is saved, ensuring that exactly 20 images, two per digit, are generated and stored in the specified directory. 
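As a quick check, a short script along these lines confirms that the generated files are the 20 grayscale, 28x28 pixel PNG images the application expects. It assumes the `mnist_bitmaps` directory produced by the code above:

```python
import os
from PIL import Image

output_dir = "mnist_bitmaps"
files = sorted(f for f in os.listdir(output_dir) if f.endswith(".png"))
print(f"Found {len(files)} bitmaps")  # expected: 20

for name in files:
    with Image.open(os.path.join(output_dir, name)) as img:
        # Each file should be a 28x28, single-channel (grayscale) image.
        assert img.size == (28, 28), f"{name} has unexpected size {img.size}"
        assert img.mode == "L", f"{name} is not grayscale ({img.mode})"

print("All bitmaps are 28x28 grayscale images")
```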
-For your convenience the data is included in the [GitHub repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git)
+{{% notice Note %}}
+This data is included in the [GitHub repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git)
+{{% /notice %}}
 
 Copy the `mnist_bitmaps` folder to the `assets` folder.
 
-Once you have the `model.pth` and the `mnist_bitmaps` folder in the `assets` folder continue to the next step to run the Android application.
\ No newline at end of file
+Once you have the `model.pth` and the `mnist_bitmaps` folder in the `assets` folder, continue to the next step to run the Android application.

From 32c7028c58143f1c5129dbecc6cb97dbdccf4e88 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 31 Dec 2024 12:19:49 +0000
Subject: [PATCH 73/96] Restructuring long-list content into bullets.

---
 .../user-interface.md | 38 +++++++++++++------
 1 file changed, 27 insertions(+), 11 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md
index bcf84520b..a46d7620d 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md
@@ -1,6 +1,6 @@
 ---
 # User change
-title: "Create an Android application"
+title: "Create an Android Application"
 
 weight: 8
 
@@ -17,17 +17,17 @@ The application runs an inference on the image and predicts the digit value.
 
 Start by creating a project:
 
-1. Open Android Studio and create a new project with an “Empty Views Activity.”
+1. Open Android Studio and create a new project with an **Empty Views Activity**.
 
 2. Set the project name to **ArmPyTorchMNISTInference**, set the package name to: **com.arm.armpytorchmnistinference**, select **Kotlin** as the language, and set the minimum SDK to **API 27 ("Oreo" Android 8.1)**.
 
-Set the API to Android 8.1 (API level 27) because this version introduced NNAPI, providing a standard interface for running computationally intensive machine learning models on Android devices.
+Set the API to Android 8.1 (API level 27). This version introduced NNAPI, providing a standard interface for running computationally-intensive machine learning models on Android devices.
 
-Devices with hardware accelerators can leverage NNAPI to offload ML tasks to specialized hardware, such as NPUs (Neural Processing Units), DSPs (Digital Signal Processors), or GPUs (Graphics Processing Units).
+Devices with hardware accelerators can leverage NNAPI to offload ML tasks to specialized hardware, such as Neural Processing Units (NPUs), Digital Signal Processors (DSPs), or Graphics Processing Units (GPUs).
 
 ## User interface design
 
-The user interface design contains the following:
+The user interface design contains different components:
 
 - A header.
 - `ImageView` and `TextView` sections to display the image and its true label.
 - A button to load the image.
 - A button to run inference.
 - Two `TextView` controls to display the predicted label and inference time.
 
@@ -313,15 +313,31 @@ class MainActivity : AppCompatActivity() {
 }
 ```
 
-The above Kotlin code defines an Android app activity called `MainActivity` that performs inference on the MNIST dataset using a pre-trained PyTorch model. The app allows the user to load a random MNIST image from the `assets` folder and runs the model to classify the image. 
This Kotlin code defines an Android app activity called `MainActivity` that performs inference on the MNIST dataset using a pre-trained PyTorch model. The app allows the user to load a random MNIST image from the `assets` folder and run the model to classify the image.
 
-The MainActivity class contains several methods. The first one, `onCreate()` is called when the activity is first created. It sets up the user interface by inflating the layout defined in `activity_main.xml` and initializes several UI components, including an `ImageView` to display the image, `TextView` controls to show the true label and predicted label, and two buttons (`selectImageButton` and `runInferenceButton`) to select an image and run inference. The method then loads the PyTorch model from the assets folder using the `assetFilePath()` function and sets up click listeners for the buttons. The `selectImageButton` is configured to select a random image from the `mnist_bitmaps` folder, while the `runInferenceButton` runs the inference on the selected image.
+The `MainActivity` class contains several methods:
 
-Next, the `selectRandomImageFromAssets()` method is responsible for selecting a random image from the `mnist_bitmaps` folder in the assets. It lists all the files in the folder, picks one at random, and loads it as a bitmap. The method then extracts the true label from the filename (e.g., 07_00.png implies a true label of 7), displays the selected image in the `ImageView`, and updates the `trueLabel TextView` with the correct label. If there is an error loading the image or the folder is empty, an appropriate error message is displayed in the `trueLabel TextView`.
+* The first one, `onCreate()`, is called when the activity is first created. It sets up the user interface by inflating the layout defined in `activity_main.xml` and initializes several UI components, including an `ImageView` to display the image, `TextView` controls to show the true label and predicted label, and two buttons, `selectImageButton` and `runInferenceButton`, to select an image and run inference. This method then loads the PyTorch model from the `assets` folder using the `assetFilePath()` function, and sets up click listeners for the buttons. The `selectImageButton` is configured to select a random image from the `mnist_bitmaps` folder, while the `runInferenceButton` runs the inference on the selected image.
 
-Afterward, the `createTensorFromBitmap()` converts a grayscale bitmap of size 28x28 (an image from the MNIST dataset) into a PyTorch Tensor. First, the method verifies that the bitmap has the correct dimensions. Then, it extracts pixel data from the bitmap, normalizes each pixel value to a float in the range [0, 1], and stores the values in a float array. The method finally constructs and returns a tensor with the shape [1, 1, 28, 28], where 1 is the batch size, 1 is the number of channels (for grayscale), and 28 represents the width and height of the image. This is required to match the input expected by the model.
+* The next method, `selectRandomImageFromAssets()`, is responsible for selecting a random image from the `mnist_bitmaps` folder in `assets`. It lists all the files in the folder, picks one at random, and loads it as a bitmap. This method then does the following:
+
+    * It extracts the true label from the filename. For example, 07_00.png implies a true label of 7.
+    * It displays the selected image in the `ImageView`.
+    * It updates the `trueLabel TextView` with the correct label. 
+ +If there is an error loading the image or the folder is empty, an appropriate error message is displayed in the `trueLabel TextView`. -Subsequently, we have the `runInference()` method. It accepts a bitmap as input and performs inference using the pre-trained PyTorch model. It first converts the bitmap to a tensor using the `createTensorFromBitmap()` method. Then, it measures the time taken to run the forward pass of the model using the `measureTimeMicros()` method. The output tensor from the model, which contains the scores for each digit class, is processed to determine the predicted label. This predicted label is displayed in the `predictedLabel TextView`. The method also updates the `inferenceTime TextView` with the time taken for the inference in microseconds. +* The next method, `createTensorFromBitmap()`, converts a grayscale bitmap of size 28x28 (an image from the MNIST dataset) into a PyTorch Tensor, through the following steps: + + * The method begins by verifying that the bitmap has the correct dimensions. + * Then it extracts pixel data from the bitmap. + * It normalizes each pixel value to a float in the range [0, 1], and stores the values in a float array. + * Then it constructs and returns a tensor with the shape [1, 1, 28, 28], where 1 is the batch size, 1 is the number of channels (for grayscale), and 28 represents the width and height of the image. This is required to match the input expected by the model. + +* Subsequently, there is the `runInference()` method, which accepts a bitmap as input and performs inference using the pre-trained PyTorch model, through the following steps: + + * First, it converts the bitmap to a tensor using the `createTensorFromBitmap()` method. + * Then, it measures the time taken to run the forward pass of the model using the `measureTimeMicros()` method. The output tensor from the model, which contains the scores for each digit class, is processed to determine the predicted label. This predicted label is displayed in the `predictedLabel TextView`. The method also updates the `inferenceTime TextView` with the time taken for the inference in microseconds. Also, we have an inline function `measureTimeMicros()`. It is a utility method that measures the execution time of the provided code block in microseconds. It uses the `measureNanoTime()` function to get the execution time in nanoseconds and then converts it to microseconds by dividing the result by 1000. This method is used to measure the time taken for model inference in the `runInference()` method. @@ -329,4 +345,4 @@ The `assetFilePath()` method is a helper function that copies a file from the as The `MainActivity` class initializes the UI components, loads a pre-trained PyTorch model, and allows the user to select random MNIST images and run inference on them. Each method is designed to handle a specific aspect of the functionality, such as loading images, converting them to tensors, running inference, and measuring execution time. The code is modular and organized, making it easy to understand and maintain. -To be able to successfully run the application you need to add the model and prepare the bitmaps. Continue to see how to prepare the data. \ No newline at end of file +To be able to successfully run the application you need to add the model and prepare the bitmaps. Continue to see how to prepare the data. 
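Before adding these files to the Android project, you can sanity-check them together on your development machine. The sketch below assumes the `model.pth` file and the `mnist_bitmaps` folder created in the earlier sections, and it mirrors the preprocessing that the Kotlin `createTensorFromBitmap()` method performs:

```python
import torch
from PIL import Image
from torchvision import transforms

# Paths assumed from the earlier sections: the traced TorchScript model and
# one of the generated test bitmaps, whose first two digits encode the label.
model = torch.jit.load("model.pth")
model.eval()

image = Image.open("mnist_bitmaps/07_00.png")
tensor = transforms.ToTensor()(image).unsqueeze(0)  # [1, 1, 28, 28], values in [0, 1]

with torch.no_grad():
    logits = model(tensor)

predicted = int(logits.argmax(dim=1))
print(f"True label: 7, predicted label: {predicted}")
```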
From 8dc7d588c02573d55440f717c7ef6f1ceacab0e1 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 14:02:37 +0000 Subject: [PATCH 74/96] Final editorial. --- .../user-interface.md | 50 +++++++++++++------ 1 file changed, 34 insertions(+), 16 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md index a46d7620d..05156e79d 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/user-interface.md @@ -19,9 +19,12 @@ Start by creating a project: 1. Open Android Studio and create a new project with an **Empty Views Activity**. -2. Set the project name to **ArmPyTorchMNISTInference**, set the package name to: **com.arm.armpytorchmnistinference**, select **Kotlin** as the language, and set the minimum SDK to **API 27 ("Oreo" Android 8.1)**. - -Set the API to Android 8.1 (API level 27). This version introduced NNAPI, providing a standard interface for running computationally-intensive machine learning models on Android devices. +2. Configure as follows: + * Set the project name to **ArmPyTorchMNISTInference**. + * Set the package name to: **com.arm.armpytorchmnistinference**. + * Select **Kotlin** as the language. + * Set the minimum SDK to **API 27 ("Oreo" Android 8.1)**. + * Set the API to Android 8.1 (API level 27). This version introduced NNAPI, providing a standard interface for running computationally-intensive machine learning models on Android devices. Devices with hardware accelerators can leverage NNAPI to offload ML tasks to specialized hardware, such as Neural Processing Units (NPUs), Digital Signal Processors (DSPs), or Graphics Processing Units (GPUs). @@ -33,9 +36,9 @@ The user interface design contains different components: - `ImageView` and `TextView` sections to display the image and its true label. - A button to load the image. - A button to run inference. -- Two `TextView` controls to display the predicted label and inference time. +- Two `TextView` controls to display the predicted label and the inference time. -Use the Android Studio editor to replace the contents of `activity_main.xml`, located in `src/main/res/layout` with the following code: +Use the editor in Android Studio to replace the contents of `activity_main.xml`, located in `src/main/res/layout` with the following code: ```XML @@ -109,9 +112,9 @@ Use the Android Studio editor to replace the contents of `activity_main.xml`, lo ``` -The above XML code defines a user interface layout for an Android activity using a vertical `LinearLayout`. It includes several UI components arranged vertically with padding and centered alignment. +The XML code above defines a user interface layout for an Android activity using a vertical `LinearLayout`. It includes several UI components arranged vertically with padding and centered alignment. -At the top, there is a `TextView` acting as a header, displaying the text `Digit Recognition` in bold and with a large font size. +At the top, there is a `TextView` acting as a header, displaying the text **Digit Recognition** in bold and with a large font size. Below the header, an `ImageView` displays an image, with a default source set to `sample_image`. 
@@ -317,9 +320,9 @@ This Kotlin code defines an Android app activity called `MainActivity` that perf The `MainActivity` class contains several methods: -* The first one, `onCreate()` is called when the activity is first created. It sets up the user interface by inflating the layout defined in `activity_main.xml` and initializes several UI components, including an `ImageView` to display the image, `TextView` controls to show the true label and predicted label, and two buttons, `selectImageButton` and `runInferenceButton`, to select an image and run inference. This method then loads the PyTorch model from the `assets` folder using the `assetFilePath()` function, and sets up click listeners for the buttons. The `selectImageButton` is configured to select a random image from the `mnist_bitmaps` folder, while the `runInferenceButton` runs the inference on the selected image. +* The `onCreate()` method is called when the activity is first created. It sets up the user interface by inflating the layout defined in `activity_main.xml` and initializes several UI components, including an `ImageView` to display the image, `TextView` controls to show the true label and predicted label, and two buttons, `selectImageButton` and `runInferenceButton`, to select an image and run inference. This method then loads the PyTorch model from the `assets` folder using the `assetFilePath()` function, and sets up click listeners for the buttons. The `selectImageButton` is configured to select a random image from the `mnist_bitmaps` folder, while the `runInferenceButton` runs the inference on the selected image. -* The next method, `selectRandomImageFromAssets()`, is responsible for selecting a random image from the `mnist_bitmaps` folder in `assets`. It lists all the files in the folder, picks one at random, and loads it as a bitmap. This method then does the following: +* The `selectRandomImageFromAssets()` method is responsible for selecting a random image from the `mnist_bitmaps` folder in `assets`. It lists all the files in the folder, picks one at random, and loads it as a bitmap. This method then does the following: * It extracts the true label from the filename. For example, 07_00.png implies a true label of 7. * It displays the selected image in the `ImageView`. @@ -327,22 +330,37 @@ The `MainActivity` class contains several methods: If there is an error loading the image or the folder is empty, an appropriate error message is displayed in the `trueLabel TextView`. -* The next method, `createTensorFromBitmap()`, converts a grayscale bitmap of size 28x28 (an image from the MNIST dataset) into a PyTorch Tensor, through the following steps: +* The `createTensorFromBitmap()` method converts a grayscale bitmap of size 28x28 (an image from the MNIST dataset) into a PyTorch Tensor, through the following steps: * The method begins by verifying that the bitmap has the correct dimensions. * Then it extracts pixel data from the bitmap. * It normalizes each pixel value to a float in the range [0, 1], and stores the values in a float array. * Then it constructs and returns a tensor with the shape [1, 1, 28, 28], where 1 is the batch size, 1 is the number of channels (for grayscale), and 28 represents the width and height of the image. This is required to match the input expected by the model. 
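The handling of the output tensor can be illustrated with a small Python sketch. The logit values below are invented for the example; the argmax step, and the optional softmax, mirror how the raw scores are turned into a predicted digit:

```python
import torch
import torch.nn.functional as F

# Example logits only: ten raw scores, one per digit class, standing in
# for the model output produced during inference.
logits = torch.tensor([[0.1, 0.3, 2.9, 0.2, 0.0, 1.1, 0.4, 7.2, 0.6, 0.5]])

predicted = int(logits.argmax(dim=1))     # index of the highest score
probabilities = F.softmax(logits, dim=1)  # optional: turn scores into probabilities

print(f"Predicted digit: {predicted}")
print(f"Confidence: {probabilities[0, predicted]:.3f}")
```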
-* Subsequently, there is the `runInference()` method, which accepts a bitmap as input and performs inference using the pre-trained PyTorch model, through the following steps: +* The `runInference()` method accepts a bitmap as input and performs inference using the pre-trained PyTorch model, through the following steps: * First, it converts the bitmap to a tensor using the `createTensorFromBitmap()` method. - * Then, it measures the time taken to run the forward pass of the model using the `measureTimeMicros()` method. The output tensor from the model, which contains the scores for each digit class, is processed to determine the predicted label. This predicted label is displayed in the `predictedLabel TextView`. The method also updates the `inferenceTime TextView` with the time taken for the inference in microseconds. + * Then, it measures the time taken to run the forward pass of the model using the `measureTimeMicros()` method. + * The output tensor from the model, which contains the scores for each digit class, is then processed to determine the predicted label. + * The predicted label is displayed in the `predictedLabel TextView`. + * The method also updates the `inferenceTime TextView` with the time taken for the inference in microseconds. + +* The inline function `measureTimeMicros()` is a utility method that measures the execution time of the given code block in microseconds: + + * It uses the `measureNanoTime()` function to get the execution time in nanoseconds. + * It converts the resultant execution time to microseconds by dividing the result by 1000. + * This method is used to measure the time taken for model inference in the `runInference()` method. + +* The `assetFilePath()` method is a helper function that copies a file from the assets folder to the application's internal storage and returns the absolute path of the copied file. This is necessary because PyTorch’s `Module.load()` method requires a file path, not an InputStream. The `assetFilePath()` method does the following: + + * The function reads the specified asset file. + * It writes its contents to a file in the internal storage. + * It returns the path to this file. -Also, we have an inline function `measureTimeMicros()`. It is a utility method that measures the execution time of the provided code block in microseconds. It uses the `measureNanoTime()` function to get the execution time in nanoseconds and then converts it to microseconds by dividing the result by 1000. This method is used to measure the time taken for model inference in the `runInference()` method. +This method is used in `onCreate()` to load the PyTorch model file, `model.pth`, from the `assets` folder. -The `assetFilePath()` method is a helper function that copies a file from the assets folder to the application's internal storage and returns the absolute path of the copied file. This is necessary because PyTorch’s `Module.load()` method requires a file path, not an InputStream. The function reads the specified asset file, writes its contents to a file in the internal storage, and returns the path to this file. This method is used in `onCreate()` to load the PyTorch model file, `model.pth`, from the `assets` folder. +* The `MainActivity` class initializes the UI components, loads a pre-trained PyTorch model, and allows the user to select random MNIST images and run inference on them. -The `MainActivity` class initializes the UI components, loads a pre-trained PyTorch model, and allows the user to select random MNIST images and run inference on them. 
Each method is designed to handle a specific aspect of the functionality, such as loading images, converting them to tensors, running inference, and measuring execution time. The code is modular and organized, making it easy to understand and maintain. +Each method is designed to handle a specific aspect of the functionality, such as loading images, converting them to tensors, running inference, and measuring execution time. The code is modular and organized, making it easy to understand and maintain. -To be able to successfully run the application you need to add the model and prepare the bitmaps. Continue to see how to prepare the data. +To be able to successfully run the application, you need to add the model and prepare the bitmaps. Continue with this Learning Path to learn how to prepare the data. From 6b76a1c0705287cfcd56d221a41a35c9cb03219d Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 14:04:23 +0000 Subject: [PATCH 75/96] Final editorial. --- .../pytorch-digit-classification-arch-training/prepare-data.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md index 0ed46592a..4affa5797 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md @@ -15,7 +15,7 @@ To add the model, create a folder named `assets` in the `app/src/main` folder. Copy the pre-trained model, named `model.pth`, to the `assets` folder. -The model is also available in the [GitHub repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git) if you need to copy it. +The model is also available in the [GitHub repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git) if you require it. ## Image data From 08e548c028e9e937129a2441e57a790389b5820c Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 14:11:02 +0000 Subject: [PATCH 76/96] Final editorial. --- .../optimisation.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/optimisation.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/optimisation.md index 06778bf47..67569d078 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/optimisation.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/optimisation.md @@ -7,7 +7,7 @@ weight: 13 layout: "learningpathall" --- -To optimize the model use the `pytorch-digits-model-optimisations.ipynb` to add the following lines: +To optimize the model, use the `pytorch-digits-model-optimizations.ipynb` to add the following lines: ```python from torch.utils.mobile_optimizer import optimize_for_mobile @@ -62,4 +62,4 @@ Finally, the traced model is optimized for mobile using `optimize_for_mobile()`, The optimized model is saved in a format suitable for the PyTorch Lite Interpreter for efficient deployment on mobile platforms. -The result is an optimized and quantized model stored as `"optimized_model.ptl"`, ready for deployment. 
\ No newline at end of file +The result is an optimized and quantized model stored as `"optimized_model.ptl"`, ready for deployment. From ea412c746c83280fff8598d3174013bb64a3737c Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 14:15:53 +0000 Subject: [PATCH 77/96] Final editorial. --- .../model.md | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md index ceabd50d8..f2b070820 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md @@ -7,23 +7,23 @@ weight: 3 layout: "learningpathall" --- -You can create and train a feedforward neural network to classify handwritten digits from the MNIST dataset (Modified National Institute of Standards and Technology database). This dataset contains 70,000 images, comprised of 60,000 training images and 10,000 testing images, of handwritten numerals (0-9), each with dimensions of 28x28 pixels. Some representative MNIST digits with their corresponding labels are shown below. +You can create and train a feedforward neural network to classify handwritten digits from the MNIST dataset. This dataset contains 70,000 images, comprising 60,000 training images and 10,000 testing images of handwritten numerals (0-9), each with dimensions of 28x28 pixels. Some representative MNIST digits with their corresponding labels are shown below. ![img3 alt-text#center](Figures/3.png "Figure 3: MNIST Digits and Labels.") The neural network begins with an input layer containing 28x28 = 784 input nodes, with each node accepting a single pixel from a MNIST image. -You will add a linear hidden layer with 96 nodes, using the hyperbolic tangent (tanh) activation function. To prevent overfitting, a dropout layer is applied, randomly setting 20% of the nodes to zero. +You will add a linear hidden layer with 96 nodes, using the hyperbolic tangent (tanh) activation function. To prevent overfitting, you will apply a dropout layer, randomly setting 20% of the nodes to zero. You will then include another hidden layer with 256 nodes, followed by a second dropout layer that again removes 20% of the nodes. Finally, you will reach a situation where the output layer consists of ten nodes, each representing the probability of recognizing one of the digits (0-9). The total number of trainable parameters for this network is calculated as follows: -* First hidden layer: 784 x 96 + 96 = 75,360 parameters (weights + biases). +* First hidden layer: 784 x 96 + 96 = 75,360 parameters (weights and biases). * Second hidden layer: 96 x 256 + 256 = 24,832 parameters. * Output layer: 256 x 10 + 10 = 2,570 parameters. -So in total, the network has 102,762 trainable parameters. +In total, the network has 102,762 trainable parameters. # Implementation @@ -58,7 +58,9 @@ class NeuralNetwork(nn.Module): return logits ``` -To build the neural network in PyTorch, define a class that inherits from PyTorch’s nn.Module. This approach is similar to TensorFlow’s subclassing API. In this case, define a class named NeuralNetwork, which consists of two main components: +To build the neural network in PyTorch, define a class that inherits from PyTorch’s nn.Module. 
This approach is similar to TensorFlow’s subclassing API. 

Define a class named NeuralNetwork, which consists of two main components:

1. __init__ method

@@ -69,6 +71,7 @@ First initialize the nn.Module with super(NeuralNetwork, self).__init__(). Insid
 
 Next, create a sequential stack of layers using nn.Sequential. The network consists of:
 
+
 * A fully-connected (Linear) layer with 96 nodes, followed by the Tanh activation function.
 * A Dropout layer with a 20% dropout rate to prevent overfitting.
 * A second Linear layer, with 256 nodes, followed by the Sigmoid activation function.

From 01f73f917ae919a01fa4bc9985329e7f118e3db5 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 31 Dec 2024 15:05:33 +0000
Subject: [PATCH 78/96] Final editorial.

---
 .../model-opt.md | 44 ++++++++++---------
 1 file changed, 24 insertions(+), 20 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md
index dc5b8556e..76703398e 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md
@@ -7,15 +7,19 @@ weight: 12
 layout: "learningpathall"
 ---
 
-You can create and train an optimized feedforward neural network to classify handwritten digits from the MNIST dataset. As a reminder, the dataset contains 70,000 images, comprising 60,000 training and 10,000 testing images, of handwritten numerals (0-9), each with dimensions of 28x28 pixels.
-
-This time you will introduce several changes to enable model quantization and fusing.
+You can create and train an optimized feedforward neural network to classify handwritten digits from the MNIST dataset. This time you will introduce several changes to enable model quantization and fusing.
 
 # Model architecture
 
-Start by creating a new notebook named `pytorch-digits-model-optimisations.ipynb`.
+Start by creating a new notebook named `pytorch-digits-model-optimizations.ipynb`.
+
+Then define the model architecture using the code below.
+
+{{% notice Note %}}
+You can also find the source code on [GitHub](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.Python).
+{{% /notice %}}
+
 
-Then define the model architecture using the code below. You can also find the source code on [GitHub](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.Python)
 
 ```python
 import torch
@@ -53,13 +57,13 @@ class NeuralNetwork(nn.Module):
         return x  # Outputs raw logits
 ```
 
-This code defines a neural network in PyTorch for digit classification, consisting of three linear layers with ReLU activations and optional dropout layers for regularization. The network first flattens the input (a 28x28 image) and passes it through two linear layers, each followed by a ReLU activation and a dropout layer (if enabled). The final layer produces raw logits as the output. Notably, the softmax layer has been removed to enable quantization and layer fusion during model optimization, allowing better performance when deploying the model on mobile or edge devices.
+This code defines a neural network in PyTorch for digit classification, consisting of three linear layers with ReLU activations and optional dropout layers for regularization. 
The network first flattens the input, that is a 28x28 image, and passes it through two linear layers, each followed by a ReLU activation and, if enabled, a dropout layer. The final layer produces raw logits as the output. Notably, the softmax layer has been removed to enable quantization and layer fusion during model optimization, allowing better performance when deploying the model on mobile or edge devices.
 
 The output is left as logits, and the softmax function can be applied during post-processing, particularly during inference.
 
 This model includes dropout layers, which are used during training to randomly set a portion of the neurons to zero in order to prevent overfitting and improve generalization.
 
-The `use_dropout` parameter allows you to enable or disable dropout, with the option to bypass dropout by replacing it with an `nn.Identity` layer when set to `False`, which is typically done during inference or quantization for more consistent behavior.
+The `use_dropout` parameter allows you to enable or disable dropout, with the option to bypass dropout by replacing it with an `nn.Identity` layer when set to `False`, which is typically done during inference or quantization for more consistent behavior.
 
 Add the following lines to display the model architecture:
 
@@ -69,7 +73,7 @@ model = NeuralNetwork()
 summary(model, (1, 28, 28))
 ```
 
-After running the code, you see the following output:
+After running the code, you will see the following output:
 
 ```output
 ----------------------------------------------------------------
@@ -98,14 +102,14 @@ Estimated Total Size (MB): 0.41
 ```
 
 The output shows the structure of the neural network, including the layers, their output shapes, and the number of parameters.
 
 * The network starts with a Flatten layer, which reshapes the input from [1, 28, 28] to [1, 784] without adding any parameters.
-* This is followed by two Linear (fully connected) layers with ReLU activations and optional Dropout layers in between, contributing to the parameter count.
-* The first linear layer (from 784 to 96 units) has 75,360 parameters, while the second (from 96 to 256 units) has 24,832 parameters.
+* This is followed by two linear, fully-connected, layers with ReLU activations and optional Dropout layers in between that contribute to the parameter count.
+* The first linear layer, from 784 to 96 units, has 75,360 parameters, while the second, from 96 to 256 units, has 24,832 parameters.
 * The final linear layer, which outputs raw logits for the 10 classes, has 2,570 parameters.
-* The total number of trainable parameters in the model is 102,762, with no non-trainable parameters.
+* The total number of trainable parameters in the model is 102,762, without any non-trainable parameters.
 
 # Training the model
 
-Now add the data loading, train, and test loops to actually train the model. This proceeds exactly the same as in the original model:
+Now add the data-loading, train, and test loops to train the model. This process is the same as with the original model:
 
 ```
 from torchvision import transforms, datasets
@@ -175,21 +179,21 @@ for t in range(epochs):
     test_loop(test_dataloader, model, loss_fn)
 ```
 
-You begin by preparing the MNIST dataset for training and testing our neural network model.
+Begin by preparing the MNIST dataset for training and testing the neural network model.
 
-Using the torchvision library, you download the MNIST dataset and apply a transformation to convert the images into tensors, making them suitable for input into the model. 
+Using the torchvision library, download the MNIST dataset and apply a transformation to convert the images into tensors, making them suitable for input into the model. Next, create two data loaders: one for the training set and one for the test set, each configured with a batch size of 32. These data loaders allow you to easily feed batches of images into the model during training and testing. -Next, define a training loop, which is the core of the model’s learning process. For each batch of images and labels, the model generates predictions, and you calculate the cross-entropy loss to measure how far off the predictions are from the true labels. +Next, define a training loop, which is at the core of the model’s learning process. For each batch of images and labels, the model generates predictions, and you calculate the cross-entropy loss to measure how far off the predictions are from the true labels. The Adam optimizer is used to perform backpropagation, updating the model's weights to reduce this error. The process repeats for every batch in the training dataset, gradually improving model accuracy over time. -To ensure the model is learning effectively, you also define a testing loop. +To ensure the model is learning effectively, you need to define a testing loop. -Here, the model is evaluated on a separate set of test images that it hasn't seen during training. You calculate both the average loss and the accuracy of the predictions, giving a clear sense of how well the model is performing. Importantly, this evaluation is done without updating the model's weights, as the goal is simply to measure its performance. +Here, the model is evaluated on a separate set of test images that it has not seen during training. You can calculate both the average loss and the accuracy of the predictions, and it will give you a clear sense of how well the model is performing. This evaluation must be done without updating the model's weights, as the goal is simply to measure its performance. -Finally, run the training and testing loops over the course of 10 epochs. With each epoch, the model trains on the full training dataset, and afterward, you test it to monitor its progress. By the end of the process, the model has learned to classify the MNIST digits with a high degree of accuracy, as reflected in the final test results. +Finally, run the training and testing loops over the course of 10 epochs. With each epoch, the model trains on the full training dataset, and afterwards, you can test it to monitor its progress. By the end of the process, the model has learned to classify the MNIST digits with a high degree of accuracy, as reflected in the final test results. This setup efficiently trains and evaluates the model for digit classification, providing feedback after each epoch on accuracy and loss. @@ -227,8 +231,8 @@ Epoch 10: Accuracy: 96.5%, Avg loss: 0.137004 ``` -The above shows a similar accuracy as the original model. +These results show a similar rate of accuracy as the original model. You now have the trained model with the modified architecture. -In the next step you will optimize it for mobile inference. \ No newline at end of file +In the next step you will optimize it for mobile inference. From 817e03dbcc898a7e1266a2653ca3fe8478331c6b Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 15:27:35 +0000 Subject: [PATCH 79/96] Editorial Final. 
---
 .../pytorch-digit-classification-arch-training/mobile-app.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md
index 6ede98431..ffbfc374a 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md
@@ -232,11 +232,11 @@ This would allow the model to take full advantage of the device's computational
 
 ![fig](Figures/08.jpg)
 
-# What have you learned?
+### What have you learned?
 
 You have successfully optimized a neural network model for mobile inference using quantization and layer fusion.
 
-Quantization and layer fusion removed unnecessary elements such as dropout layers during inference.
+Quantization and layer fusion remove unnecessary elements such as dropout layers during inference.
 
 By running multiple iterations of the inference process, you learned that the optimized model significantly reduced the average inference time to around 65% of the original time.
 
 You also learned that there is potential for further performance improvements by leveraging hardware acceleration.

From adbcd08e161f660edf5efc71ae4af997acc4eeae Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 31 Dec 2024 15:29:40 +0000
Subject: [PATCH 80/96] Final editorial.

---
 .../pytorch-digit-classification-arch-training/intro2.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
index 1972b6724..dbbdd5b62 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md
@@ -1,13 +1,13 @@
 ---
 # User change
-title: "About PyTorch model training"
+title: "About PyTorch Model Training"
 
 weight: 4
 
 layout: "learningpathall"
 ---
 
-## PyTorch model training
+## Training
 
 Now that you have created a feedforward neural network for digit classification using the MNIST dataset, training is needed to enable the network to recognize handwritten digits effectively and make accurate predictions.
 
From 70d6345975e0f1bbef7898e8c12a3dc4bcf4a362 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 31 Dec 2024 15:34:08 +0000
Subject: [PATCH 81/96] Final editorial.

---
 .../pytorch-digit-classification-arch-training/intro.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md
index 6ec1d118f..9d61aacf4 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md
@@ -1,6 +1,6 @@
 ---
 # User change
-title: "Prepare a PyTorch development environment"
+title: "Prepare a PyTorch Development Environment"
 
 weight: 2
 
@@ -11,7 +11,7 @@ layout: "learningpathall"
 
 Meta AI have designed an Open Source deep learning framework called PyTorch, that is now part of the Linux Foundation. 
-PyTorch provides a flexible and efficient platform for building and training neural networks. It has a dynamic computational graph that allows users to modify the architecture during runtime, making debugging and experimentation easier, and therefore makes it popular amongste developers. +PyTorch provides a flexible and efficient platform for building and training neural networks. It has a dynamic computational graph that allows users to modify the architecture during runtime, making debugging and experimentation easier, and therefore makes it popular amongst developers. PyTorch provides a more flexible, user-friendly deep learning framework that reduces the limitations of static computational graphs found in earlier tools, such as TensorFlow. @@ -21,7 +21,7 @@ Prior to PyTorch, many frameworks used static computational graphs that require * Easier debugging. * More intuitive code. -PyTorch also seamlessly integrates with Python, which creates a native coding experience. Its deep integration with GPU acceleration also makes it a powerful tool for both research and production environments. This combination of flexibility, usability, and performance has ensured PyTorch’s rapid adoption, particualrly in academic research, where experimentation and iteration are crucial activities. +PyTorch also seamlessly integrates with Python, which creates a native coding experience. Its deep integration with GPU acceleration also makes it a powerful tool for both research and production environments. This combination of flexibility, usability, and performance has ensured PyTorch’s rapid adoption, particularly in academic research, where experimentation and iteration are crucial activities. A typical process for creating a feedforward neural network in PyTorch involves defining a sequential stack of fully-connected layers, which are also known as linear layers. Each layer transforms the input by applying a set of weights and biases, followed by an activation function like ReLU. PyTorch supports this process using the torch.nn module, where layers are easily defined and composed. From 8fefed29a305eb9661110354b208e288a4174b43 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 15:35:57 +0000 Subject: [PATCH 82/96] Final editorial. --- .../intro-opt.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md index ac34d805f..4a429b88c 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-opt.md @@ -1,13 +1,13 @@ --- # User change -title: "Optimizing neural network models in PyTorch" +title: "Optimizing Neural Network Models in PyTorch" weight: 11 layout: "learningpathall" --- -## Optimizing models +## Optimizing Models Optimizing models is crucial to achieving efficient performance while minimizing resource consumption. @@ -17,7 +17,7 @@ As mobile and edge devices can have limited computational power, memory, and ene Quantization is one of the most widely used techniques, which reduces the precision of the model's weights and activations from floating-point to lower-bit representations, such as int8 or float16. 
This not only reduces the model size but also accelerates inference speed on hardware that supports low-precision arithmetic.
 
-### Layer fusion
+### Layer Fusion
 
 Another key optimization strategy is layer fusion. Layer fusion involves combining linear layers with their subsequent activation functions, such as ReLU, into a single layer. This reduces the number of operations that need to be executed during inference, minimizing latency and improving throughput.
 
@@ -30,7 +30,7 @@ In addition to these techniques, pruning, which involves removing less significa
 
 Leveraging hardware-specific optimizations, such as the Android Neural Networks API (NNAPI), allows you to take full advantage of the underlying hardware acceleration available on edge devices.
 
-### More on optimization
+### More on Optimization
 
 By employing these strategies, you can significantly enhance the efficiency of ML models for deployment on mobile and edge platforms, ensuring a balance between performance and resource utilization.
 
@@ -46,7 +46,7 @@ PyTorch’s integration with hardware acceleration libraries, such as NNAPI for An
 
 Overall, PyTorch provides a comprehensive ecosystem that empowers developers to implement effective optimizations for mobile and edge deployment, enhancing both speed and efficiency.
 
-### Optimization Next steps
+### Optimization Next Steps
 
 In the following sections, you will delve into the techniques of quantization and fusion using the previously created neural network model and Android application.
 
From 8fefed29a305eb9661110354b208e288a4174b43 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 31 Dec 2024 15:38:54 +0000
Subject: [PATCH 83/96] Final editorial.

---
 .../intro-android.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md
index 022d82107..5af70ca8a 100644
--- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md
+++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro-android.md
@@ -1,6 +1,6 @@
 ---
 # User change
-title: "Learn about inference on Android"
+title: "Learn about Inference on Android"
 
 weight: 7
 
@@ -29,7 +29,7 @@ In this Learning Path, you will learn how to perform inference in an Android app
 
 Before you begin, make sure [Android Studio](https://developer.android.com/studio/install) is installed on your system.
 
-## Project source code
+## Project Source Code
 
 The following steps explain how to build an Android application for MNIST inference. The application can be constructed from scratch, but there are two GitHub repositories available if you need to copy any files from them as you learn how to create the Android application.
 
From 4db0127ed27e5c650948e9390d467c9d10e3b990 Mon Sep 17 00:00:00 2001
From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com>
Date: Tue, 31 Dec 2024 15:41:14 +0000
Subject: [PATCH 84/96] Final editorial.
--- .../pytorch-digit-classification-arch-training/inference.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md index 3dd292ad8..a317c3224 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -1,6 +1,6 @@ --- # User change -title: "Use the model for inference" +title: "Deploy the Model for Inference" weight: 6 @@ -110,11 +110,11 @@ After running the code, you should see results similar to the following figure: ![image](Figures/03.png) -# What have you learned? +### What have you learned? You have completed the process of training and using a PyTorch model for digit classification on the MNIST dataset. Using the training dataset, you optimized the model’s weights and biases over multiple epochs. You employed the `CrossEntropyLoss` function and the `Adam optimizer` to minimize prediction errors and improve accuracy. You periodically evaluated the model on the test dataset to monitor its performance, ensuring it was learning effectively without overfitting. -After training, you saved the model using `TorchScript`, which captures both the model’s architecture and its learned parameters. This improved the flexibility of the model; making it portable and able to function independently of the original class definition, which simplifyies deployment. +After training, you saved the model using `TorchScript`, which captures both the model’s architecture and its learned parameters. This improved the flexibility of the model, making it portable and able to function independently of the original class definition, which simplifies deployment. Next, you performed inference. You loaded the saved model and set it to evaluation mode to ensure that layers like dropout and batch normalization behaved correctly during inference. You randomly selected 16 images from the MNIST test dataset to evaluate the model’s performance on unseen data. For each selected image, you used the model to predict the digit, comparing the predicted labels with the actual ones. You displayed the images alongside their actual and predicted labels in a 4x4 grid, visually assessing the model’s accuracy and performance. From ca69adb436da77478f026818c92600b3d11a026a Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 15:43:15 +0000 Subject: [PATCH 85/96] Final editorial. --- .../datasets-and-training.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md index 77b669133..acc743698 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md @@ -1,6 +1,6 @@ --- # User change -title: "Perform training and save the model" +title: "Perform Training and Save the Model" weight: 5 @@ -74,7 +74,7 @@ After running the code, you will see output similar to Figure 1: ![image alt-text#center](Figures/01.png "Figure 1. Output 1".)
Output 1".) -# Train the model +## Train the Model To train the model, specify the loss function and the optimizer: From 9eec2d770352e77a08617300947bcad70cff47fd Mon Sep 17 00:00:00 2001 From: Jason Andrews Date: Tue, 31 Dec 2024 09:54:53 -0600 Subject: [PATCH 86/96] Skopeo install guide ready for publish --- content/install-guides/skopeo.md | 1 - 1 file changed, 1 deletion(-) diff --git a/content/install-guides/skopeo.md b/content/install-guides/skopeo.md index d3c9f6087..2bb0008e4 100644 --- a/content/install-guides/skopeo.md +++ b/content/install-guides/skopeo.md @@ -1,6 +1,5 @@ --- title: Skopeo -draft: true author_primary: Jason Andrews minutes_to_complete: 10 official_docs: https://github.com/containers/skopeo From 4a3e7cc76906db0796f228f7e94e2f0a6ef50427 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 16:30:05 +0000 Subject: [PATCH 87/96] Update model.md --- .../pytorch-digit-classification-arch-training/model.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md index f2b070820..1db5d1e79 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md @@ -7,7 +7,7 @@ weight: 3 layout: "learningpathall" --- -You can create and train a feedforward neural network to classify handwritten digits from the MNIST dataset. This dataset contains 70,000 images, comprising 60,000 training images and 10,000 testing images of handwritten numerals (0-9), each with dimensions of 28x28 pixels. Some representative MNIST digits with their corresponding labels are shown below. +You can create and train a feedforward neural network to classify handwritten digits from the MNIST dataset. This dataset contains 70,000 images, comprising 60,000 training images and 10,000 testing images of handwritten numerals (0-9), each with dimensions of 28x28 pixels. 
Some representative MNIST digits with their corresponding labels are shown in Figure 3: ![img3 alt-text#center](Figures/3.png "Figure 3: MNIST Digits and Labels.") @@ -92,7 +92,7 @@ model = NeuralNetwork() summary(model, (1, 28, 28)) -After running the notebook, you will see the following output: +After running the notebook, you will see the output as shown in Figure 4: ![img4 alt-text#center](Figures/4.png "Figure 4: Notebook Output.") From 7e434f5c81f7f2e6d18777fefbbfa5d886a11b34 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 16:33:46 +0000 Subject: [PATCH 88/96] Update datasets-and-training.md --- .../datasets-and-training.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md index acc743698..ce888b7f2 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md @@ -70,9 +70,9 @@ The certifi Python package provides the Mozilla root certificates, which are ess Make sure to replace 'x' with the version number of Python that you have installed. {{% /notice %}} -After running the code, you will see output similar to Figure 1: +After running the code, you will see output similar to Figure 5: -![image alt-text#center](Figures/01.png "Figure 1. Output 1".) +![image alt-text#center](Figures/01.png "Figure 5. Output.") ## Train the Model From 8d619f1bbad0deac6e8b0df1f726ef9da48ae78e Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 16:36:44 +0000 Subject: [PATCH 89/96] Update inference.md --- .../pytorch-digit-classification-arch-training/inference.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md index a317c3224..fea124286 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -108,7 +108,7 @@ This code demonstrates how to use a saved PyTorch model for inference and visual After running the code, you should see results similar to the following figure: -![image](Figures/03.png) +![image](Figures/03.png "Figure 6. Example image caption") ### What have you learned?
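The recap above mentions selecting 16 random test images and displaying them in a 4x4 grid. A compact sketch of that visualization is shown below; it assumes `torchvision` and `matplotlib` are installed, and the Learning Path's own notebooks remain the authoritative version.

```python
import matplotlib.pyplot as plt
import torch
from torchvision import datasets, transforms

# Download the MNIST test split (10,000 images of 28x28 pixels).
test_set = datasets.MNIST(
    root="./data", train=False, download=True,
    transform=transforms.ToTensor(),
)

# Pick 16 random images and show them in a 4x4 grid with their labels.
indices = torch.randint(0, len(test_set), (16,))
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for ax, idx in zip(axes.flat, indices):
    image, label = test_set[int(idx)]
    ax.imshow(image.squeeze(), cmap="gray")  # drop the channel dimension
    ax.set_title(str(label))
    ax.axis("off")
plt.tight_layout()
plt.show()
```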
From 80aba52220d9512e5bcebcb9e1f936cd6a4ed569 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 16:38:07 +0000 Subject: [PATCH 90/96] Update inference.md --- .../pytorch-digit-classification-arch-training/inference.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md index fea124286..343f3a822 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -108,7 +108,7 @@ This code demonstrates how to use a saved PyTorch model for inference and visual After running the code, you should see results similar to the following figure: -![image](Figures/03.png "Figure 6. Example image caption") +![image](Figures/03.png "Figure 6. Results Displayed") ### What have you learned? From ae7122fb8e3d0f2475da04b79aacf7df719e887f Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 16:41:28 +0000 Subject: [PATCH 91/96] Update app.md --- .../pytorch-digit-classification-arch-training/app.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md index 89c7747ee..5848fe038 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/app.md @@ -30,8 +30,8 @@ To run the app in Android Studio using an emulator, follow these steps: Once the application starts, click the **Load Image** button. It loads a randomly-selected image. Then, click **Run Inference** to recognize the digit. The application displays the predicted label and the inference time as shown below: -![img](Figures/05.png) +![img alt-text#center](Figures/05.png "Figure 7. Digit Recognition 1") -![img](Figures/06.png) +![img alt-text#center](Figures/06.png "Figure 8. Digit Recognition 2") In the next step of this Learning Path, you will learn how to further optimize the model. From 68847e5e29fa959c4ef8edc0898e42b9f0125209 Mon Sep 17 00:00:00 2001 From: Maddy Underwood <167196745+madeline-underwood@users.noreply.github.com> Date: Tue, 31 Dec 2024 17:03:23 +0000 Subject: [PATCH 92/96] Update mobile-app.md --- .../pytorch-digit-classification-arch-training/mobile-app.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md index ffbfc374a..cfbc922d4 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/mobile-app.md @@ -228,9 +228,9 @@ This optimization showcases the benefits of quantization and layer fusion for mo This would allow the model to take full advantage of the device's computational capabilities, potentially further reducing the inference time.
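The app changes above report a predicted label together with an inference time; the same measurement can be prototyped on a desktop before moving to Android. The sketch below is a hypothetical example: the filename `model.pth` and the random input stand in for the tutorial's actual saved model and a preprocessed digit image.

```python
import time
import torch

# Load a TorchScript model saved earlier, for example with
# torch.jit.script(model).save("model.pth").
model = torch.jit.load("model.pth")
model.eval()  # disable training-only behavior such as dropout

x = torch.rand(1, 1, 28, 28)  # stand-in for a preprocessed digit image

with torch.no_grad():
    start = time.perf_counter()
    logits = model(x)
    elapsed_ms = (time.perf_counter() - start) * 1000

prediction = logits.argmax(dim=1).item()
print(f"Predicted digit: {prediction}, inference time: {elapsed_ms:.2f} ms")
```

Note that the first call to a TorchScript model includes JIT warm-up, so realistic latency numbers are usually averaged over many runs after a warm-up pass.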
-![fig](Figures/07.jpg) +![fig alt-text#center](Figures/07.jpg "Figure 9. Optimized Model Inference 1") -![fig](Figures/08.jpg) +![fig alt-text#center](Figures/08.jpg "Figure 10. Optimized Model Inference 2") ### What have you learned? From 76249e5fc4dfff5a0ecd7a14bb61db4b61dc5ee3 Mon Sep 17 00:00:00 2001 From: Jason Andrews Date: Tue, 31 Dec 2024 11:08:53 -0600 Subject: [PATCH 93/96] spelling updates --- .wordlist.txt | 4 ++-- .../snort3-multithreading/usecase.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/.wordlist.txt b/.wordlist.txt index ee88c9c91..bd42294b6 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -3382,7 +3382,7 @@ Dsouza FGCT GCT GCs -GC’s HNso HeapRegionSize HugePages @@ -3428,7 +3428,6 @@ EOF EVCLI EVidence Evcli -GC’s GenerateMatrix ImageCapture InputStream @@ -3453,6 +3452,7 @@ TestOpenCV TrustedFirmware Veraison WeatherForecast +weatherForecast WebGPU's Wiredtiger androidml diff --git a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md index aa7200a02..bb22024ee 100644 --- a/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md +++ b/content/learning-paths/servers-and-cloud-computing/snort3-multithreading/usecase.md @@ -253,7 +253,7 @@ trace(v1): inline unpriv wrapper file - Filename to write text traces to (default: inline-out.txt) ``` -For testing, you can use `--daq dump` to analyze Pthe CAP files. +For testing, you can use `--daq dump` to analyze the PCAP files. #### Spawn Snort 3 process with multithreading From e273c981422cf64b110e283003339a2ad933f91f Mon Sep 17 00:00:00 2001 From: Jason Andrews Date: Tue, 31 Dec 2024 11:35:05 -0600 Subject: [PATCH 94/96] fix notices in MNIST Learning Path --- .../pytorch-digit-classification-arch-training/model-opt.md | 2 +- .../pytorch-digit-classification-arch-training/prepare-data.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md index 76703398e..33c998290 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md @@ -15,7 +15,7 @@ Start by creating a new notebook named `pytorch-digits-model-optimizations.ipynb Then define the model architecture using the code below. -{% notice Note%}} +{{% notice Note %}} You can also find the source code on [GitHub](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.Python). {{% /notice %}} diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md index 4affa5797..3de3ec106 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/prepare-data.md @@ -74,7 +74,7 @@ A directory named `mnist_bitmaps` is created to store the images. A dictionary t The loop breaks once the specified number of examples per digit is saved, ensuring that exactly 20 images, two per digit, are generated and stored in the specified directory.
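The `prepare-data.md` hunk above describes exporting two images per digit into a `mnist_bitmaps` directory. The canonical script lives in the linked GitHub repository; the version below is a sketch that makes its own assumption about the file-naming scheme.

```python
import os
from torchvision import datasets

output_dir = "mnist_bitmaps"
os.makedirs(output_dir, exist_ok=True)

# Without a transform, each sample is a PIL.Image plus an integer label.
test_set = datasets.MNIST(root="./data", train=False, download=True)

saved = {digit: 0 for digit in range(10)}
for image, label in test_set:
    if saved[label] < 2:
        # Hypothetical naming scheme: <digit>_<index>.png
        image.save(os.path.join(output_dir, f"{label}_{saved[label]}.png"))
        saved[label] += 1
    if all(count == 2 for count in saved.values()):
        break  # stop once all 20 bitmaps are written
```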
-{% notice Note %}} +{{% notice Note %}} This data is included in the [GitHub repository](https://github.com/dawidborycki/Arm.PyTorch.MNIST.Inference.git) {{% /notice %}} From ea143a23b5896569e700616393b2770e262fbb23 Mon Sep 17 00:00:00 2001 From: Jason Andrews Date: Tue, 31 Dec 2024 12:53:31 -0600 Subject: [PATCH 95/96] spelling fixes --- .../pytorch-digit-classification-arch-training/_next-steps.md | 2 +- .../pytorch-digit-classification-arch-training/intro.md | 2 +- .../mongodb/mongodb_configuration.md | 2 +- .../smartphones-and-mobile/profiling-ml-on-arm/_review.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md index 4c12b745e..2696479e6 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md @@ -4,7 +4,7 @@ # ================================================================================ next_step_guidance: > - To continue exploring Maching Learning, you can now learn about using Keras Core with TensorFlow, PyTorch, and JAX backends. + To continue exploring Machine Learning, you can now learn about using Keras Core with TensorFlow, PyTorch, and JAX backends. # 1-3 sentence recommendation outlining how the reader can generally keep learning about these topics, and a specific explanation of why the next step is being recommended. diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md index 9d61aacf4..0d70ca8c4 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md @@ -11,7 +11,7 @@ layout: "learningpathall" Meta AI have designed an Open Source deep learning framework called PyTorch, which is now part of the Linux Foundation. -PyTorch provides a flexible and efficient platform for building and training neural networks. It has a dynamic computational graph that allows users to modify the architecture during runtime, making debugging and experimentation easier, and therefore makes it popular amongst developers. +PyTorch provides a flexible and efficient platform for building and training neural networks. It has a dynamic computational graph that allows users to modify the architecture during runtime, making debugging and experimentation easier, and therefore makes it popular among developers. PyTorch provides a more flexible, user-friendly deep learning framework that reduces the limitations of static computational graphs found in earlier tools, such as TensorFlow.
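The dynamic-graph claim in `intro.md` is easy to demonstrate: because PyTorch builds the graph as the code runs, ordinary Python control flow can change the network's structure per input. The toy module below is an illustration of that idea, not part of the Learning Path.

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.small = nn.Linear(8, 2)
        self.large = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        # A plain Python if-statement picks the path at runtime; a static
        # graph framework would need dedicated graph operators here.
        if x.abs().mean() > 0.5:
            return self.large(x)
        return self.small(x)

net = DynamicNet()
print(net(torch.rand(1, 8) * 0.1).shape)  # small inputs take the small branch
print(net(torch.ones(1, 8)).shape)        # large inputs take the large branch
```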
diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb/mongodb_configuration.md b/content/learning-paths/servers-and-cloud-computing/mongodb/mongodb_configuration.md index ca9c06906..16a28ac70 100644 --- a/content/learning-paths/servers-and-cloud-computing/mongodb/mongodb_configuration.md +++ b/content/learning-paths/servers-and-cloud-computing/mongodb/mongodb_configuration.md @@ -74,7 +74,7 @@ setParameter: - **port:** 27017 is the port used for replica sets - **maxIncomingConnections:** The maximum number of incoming connections supported by MongoDB -**setParameter:** Addtional options +**setParameter:** Additional options - **diagnosticDataCollectionDirectorySizeMB:** 400 is based on the docs. - **honorSystemUmask:** Sets read and write permissions only to the owner of new files - **lockCodeSegmentsInMemory:** Locks code into memory and prevents it from being swapped. diff --git a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md index 451c2b044..4b34f78b0 100644 --- a/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md +++ b/content/learning-paths/smartphones-and-mobile/profiling-ml-on-arm/_review.md @@ -20,7 +20,7 @@ review: - "No." correct_answer: 1 explanation: > - Yes, Android Studio has a built-in profiler that can be used to monitor the memory usage of your application, amongst other functions. + Yes, Android Studio has a built-in profiler that can be used to monitor the memory usage of your application, among other functions. - questions: question: > From 53ab44f995be14480147be80dde9958877064d00 Mon Sep 17 00:00:00 2001 From: Jason Andrews Date: Tue, 31 Dec 2024 13:08:30 -0600 Subject: [PATCH 96/96] spelling --- .../pytorch-digit-classification-arch-training/model-opt.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md index 33c998290..ec4cc9c61 100644 --- a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model-opt.md @@ -57,7 +57,7 @@ class NeuralNetwork(nn.Module): return x # Outputs raw logits ``` -This code defines a neural network in PyTorch for digit classification, consisting of three linear layers with ReLU activations and optional dropout layers for regularization. The network first flattens the input, that is a 28x28 image, and passes it through two linear layers, each followed by a ReLU activation and if enbaled, a dropout layer. The final layer produces raw logits as the output. Notably, the softmax layer has been removed to enable quantization and layer fusion during model optimization, allowing better performance when deploying the model on mobile or edge devices. +This code defines a neural network in PyTorch for digit classification, consisting of three linear layers with ReLU activations and optional dropout layers for regularization. The network first flattens the input, which is a 28x28 image, and passes it through two linear layers, each followed by a ReLU activation and, if enabled, a dropout layer. The final layer produces raw logits as the output.
Notably, the softmax layer has been removed to enable quantization and layer fusion during model optimization, allowing better performance when deploying the model on mobile or edge devices. The output is left as logits, and the softmax function can be applied during post-processing, particularly during inference.
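Since the optimized network returns raw logits, the softmax becomes a post-processing step at inference time. The sketch below shows that step, using made-up logit values for a ten-class digit model.

```python
import torch
import torch.nn.functional as F

# Example raw logits for one image across the ten digit classes.
logits = torch.tensor([[2.0, 0.5, -1.0, 0.1, 0.0, 0.3, -0.5, 1.2, 0.8, -2.0]])

probabilities = F.softmax(logits, dim=1)  # each row now sums to 1
prediction = probabilities.argmax(dim=1).item()

print(prediction)                            # 0, the highest-scoring class
print(probabilities[0, prediction].item())  # its probability
```

Deferring the softmax like this keeps the exported graph friendly to fusion and quantization while leaving class probabilities available whenever they are needed.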