From 97259f0c4c5558044862ded291169b1ba05b35ea Mon Sep 17 00:00:00 2001 From: ci-doc-deploy-bot Date: Tue, 1 Oct 2024 20:39:06 +0000 Subject: [PATCH] [skip ci] docs build of aa64f47b86788a6cf362ef2c776de8a9615cd49e --- .buildinfo | 2 +- _sources/content/mooreslaw-tutorial.ipynb | 90 +++++----- _sources/content/pairing.ipynb | 6 +- _sources/content/save-load-arrays.ipynb | 54 +++--- .../tutorial-air-quality-analysis.ipynb | 78 ++++---- .../tutorial-deep-learning-on-mnist.ipynb | 96 +++++----- ...ement-learning-with-pong-from-pixels.ipynb | 2 +- _sources/content/tutorial-ma.ipynb | 98 +++++------ .../content/tutorial-plotting-fractals.ipynb | 156 ++++++++-------- .../content/tutorial-static_equilibrium.ipynb | 62 +++---- _sources/content/tutorial-style-guide.ipynb | 10 +- _sources/content/tutorial-svd.ipynb | 166 +++++++++--------- .../tutorial-x-ray-image-processing.ipynb | 122 ++++++------- applications.html | 17 -- articles.html | 17 -- content/mooreslaw-tutorial.html | 19 +- content/pairing.html | 17 -- content/save-load-arrays.html | 17 -- content/tutorial-air-quality-analysis.html | 19 +- content/tutorial-deep-learning-on-mnist.html | 17 -- ...cement-learning-with-pong-from-pixels.html | 17 -- content/tutorial-ma.html | 19 +- content/tutorial-nlp-from-scratch.html | 17 -- content/tutorial-plotting-fractals.html | 17 -- content/tutorial-static_equilibrium.html | 17 -- content/tutorial-style-guide.html | 17 -- content/tutorial-svd.html | 17 -- content/tutorial-x-ray-image-processing.html | 17 -- contributing.html | 17 -- features.html | 17 -- index.html | 17 -- searchindex.js | 2 +- 32 files changed, 475 insertions(+), 781 deletions(-) diff --git a/.buildinfo b/.buildinfo index aed29679..67b7b6b0 100644 --- a/.buildinfo +++ b/.buildinfo @@ -1,4 +1,4 @@ # Sphinx build info version 1 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. 
-config: e963335c808eaf5b9d26c05534951b3d +config: 7dfcb4991759350352a4d15c8aec86b2 tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/_sources/content/mooreslaw-tutorial.ipynb b/_sources/content/mooreslaw-tutorial.ipynb index e7d43936..da6ec7c5 100644 --- a/_sources/content/mooreslaw-tutorial.ipynb +++ b/_sources/content/mooreslaw-tutorial.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "46dd879d", + "id": "ddf1527b", "metadata": {}, "source": [ "# Determining Moore's Law with real data in NumPy\n", @@ -45,7 +45,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "2ae541b2", + "id": "17a7ef23", "metadata": {}, "outputs": [], "source": [ @@ -55,7 +55,7 @@ }, { "cell_type": "markdown", - "id": "154d535b", + "id": "d7b5ffd1", "metadata": {}, "source": [ "**2.** Since this is an exponential growth law you need a little background in doing math with [natural logs](https://en.wikipedia.org/wiki/Natural_logarithm) and [exponentials](https://en.wikipedia.org/wiki/Exponential_function).\n", @@ -77,7 +77,7 @@ }, { "cell_type": "markdown", - "id": "a02bbb86", + "id": "dd7729be", "metadata": {}, "source": [ "---\n", @@ -123,7 +123,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "d00f5ff1", + "id": "17272c80", "metadata": {}, "outputs": [], "source": [ @@ -134,7 +134,7 @@ }, { "cell_type": "markdown", - "id": "54611e48", + "id": "12802a26", "metadata": {}, "source": [ "In 1971, there were 2250 transistors on the Intel 4004 chip. 
Use\n", @@ -145,7 +145,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "a32a67b0", + "id": "9b31a753", "metadata": {}, "outputs": [ { @@ -166,7 +166,7 @@ }, { "cell_type": "markdown", - "id": "f8a43e3d", + "id": "175046fc", "metadata": {}, "source": [ "## Loading historical manufacturing data to your workspace\n", @@ -190,7 +190,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "bb8301a1", + "id": "68098cd3", "metadata": {}, "outputs": [ { @@ -216,7 +216,7 @@ }, { "cell_type": "markdown", - "id": "66ba4f5f", + "id": "5902f7fd", "metadata": {}, "source": [ "You don't need the columns that specify __Processor__, __Designer__,\n", @@ -235,7 +235,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "64ae29cd", + "id": "e2cbcb93", "metadata": {}, "outputs": [], "source": [ @@ -244,7 +244,7 @@ }, { "cell_type": "markdown", - "id": "c31000a0", + "id": "2b337e0d", "metadata": {}, "source": [ "You loaded the entire history of semiconducting into a NumPy array named\n", @@ -261,7 +261,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "cbbb7f0f", + "id": "af4069f8", "metadata": {}, "outputs": [ { @@ -283,7 +283,7 @@ }, { "cell_type": "markdown", - "id": "04312d1b", + "id": "53c5bbd8", "metadata": {}, "source": [ "You are creating a function that predicts the transistor count given a\n", @@ -301,7 +301,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "3853fc2c", + "id": "ea2caa63", "metadata": {}, "outputs": [], "source": [ @@ -310,7 +310,7 @@ }, { "cell_type": "markdown", - "id": "61a9b2af", + "id": "43ae7164", "metadata": {}, "source": [ "## Calculating the historical growth curve for transistors\n", @@ -339,7 +339,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "a44bff9e", + "id": "8a025306", "metadata": {}, "outputs": [], "source": [ @@ -348,7 +348,7 @@ }, { "cell_type": "markdown", - "id": "558f2e93", + "id": "72ba9e59", "metadata": {}, "source": [ "By default, `Polynomial.fit` performs the fit in the domain determined by 
the\n", @@ -360,7 +360,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "e4658e9d", + "id": "4df287d9", "metadata": {}, "outputs": [ { @@ -384,7 +384,7 @@ }, { "cell_type": "markdown", - "id": "44524c5d", + "id": "8ec0a7ba", "metadata": {}, "source": [ "The individual parameters $A$ and $B$ are the coefficients of our linear model:" @@ -393,7 +393,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "cddd027c", + "id": "cf696982", "metadata": {}, "outputs": [], "source": [ @@ -402,7 +402,7 @@ }, { "cell_type": "markdown", - "id": "935c2176", + "id": "eb39e43f", "metadata": {}, "source": [ "Did manufacturers double the transistor count every two years? You have\n", @@ -418,7 +418,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "ca1cea75", + "id": "d7900298", "metadata": {}, "outputs": [ { @@ -435,7 +435,7 @@ }, { "cell_type": "markdown", - "id": "d29b9874", + "id": "480b10f4", "metadata": {}, "source": [ "Based upon your least-squares regression model, the number of\n", @@ -466,7 +466,7 @@ }, { "cell_type": "markdown", - "id": "e99f9c32", + "id": "f55ff826", "metadata": {}, "source": [ "In the next plot, use the\n", @@ -480,7 +480,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "3f682b57", + "id": "fd0c651f", "metadata": {}, "outputs": [ { @@ -525,7 +525,7 @@ }, { "cell_type": "markdown", - "id": "d4289527", + "id": "22c8f7cc", "metadata": {}, "source": [ "_A scatter plot of MOS transistor count per microprocessor every two years with a red line for the ordinary least squares prediction and an orange line for Moore's law._\n", @@ -564,7 +564,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "a3e613d6", + "id": "5fe8c23e", "metadata": {}, "outputs": [ { @@ -577,7 +577,7 @@ { "data": { "text/plain": [ - "" + "" ] }, "execution_count": 13, @@ -621,7 +621,7 @@ }, { "cell_type": "markdown", - "id": "213b559d", + "id": "6429f64e", "metadata": {}, "source": [ "The result is that your model is close to the mean, but Gordon\n", 
@@ -638,7 +638,7 @@ }, { "cell_type": "markdown", - "id": "1e512026", + "id": "cdb13cce", "metadata": {}, "source": [ "## Sharing your results as zipped arrays and a csv\n", @@ -663,7 +663,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "073ec490", + "id": "c6063eed", "metadata": {}, "outputs": [ { @@ -697,7 +697,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "4e5d3d0a", + "id": "6b063329", "metadata": {}, "outputs": [], "source": [ @@ -715,7 +715,7 @@ { "cell_type": "code", "execution_count": 16, - "id": "e2b55588", + "id": "599e82ae", "metadata": {}, "outputs": [], "source": [ @@ -725,7 +725,7 @@ { "cell_type": "code", "execution_count": 17, - "id": "0899954c", + "id": "9a2f0d39", "metadata": {}, "outputs": [ { @@ -743,7 +743,7 @@ { "cell_type": "code", "execution_count": 18, - "id": "1cf8fd55", + "id": "650fbbfd", "metadata": {}, "outputs": [ { @@ -781,7 +781,7 @@ }, { "cell_type": "markdown", - "id": "dd9a6dd4", + "id": "c843ea4f", "metadata": {}, "source": [ "The benefit of `np.savez` is you can save hundreds of arrays with\n", @@ -809,7 +809,7 @@ { "cell_type": "code", "execution_count": 19, - "id": "5de69370", + "id": "571bbc73", "metadata": {}, "outputs": [ { @@ -842,7 +842,7 @@ }, { "cell_type": "markdown", - "id": "b45135ca", + "id": "16cfcfd3", "metadata": {}, "source": [ "Build a single 2D array to export to csv. 
Tabular data is inherently two\n", @@ -868,7 +868,7 @@ { "cell_type": "code", "execution_count": 20, - "id": "a4f68bfa", + "id": "22c82e75", "metadata": {}, "outputs": [], "source": [ @@ -884,7 +884,7 @@ }, { "cell_type": "markdown", - "id": "ea575edd", + "id": "6dd6db2b", "metadata": {}, "source": [ "Creating the `mooreslaw_regression.csv` with `np.savetxt`, use three\n", @@ -898,7 +898,7 @@ { "cell_type": "code", "execution_count": 21, - "id": "358a0d33", + "id": "d6c925e1", "metadata": {}, "outputs": [], "source": [ @@ -908,7 +908,7 @@ { "cell_type": "code", "execution_count": 22, - "id": "d73059c0", + "id": "c629fe02", "metadata": {}, "outputs": [ { @@ -934,7 +934,7 @@ }, { "cell_type": "markdown", - "id": "e5605862", + "id": "e02561ce", "metadata": {}, "source": [ "## Wrapping up\n", @@ -958,7 +958,7 @@ }, { "cell_type": "markdown", - "id": "94467fba", + "id": "c2e54b24", "metadata": {}, "source": [ "## References\n", diff --git a/_sources/content/pairing.ipynb b/_sources/content/pairing.ipynb index a71c852c..e4a2b229 100644 --- a/_sources/content/pairing.ipynb +++ b/_sources/content/pairing.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "82a6afa4", + "id": "dc59551e", "metadata": {}, "source": [ "# Pairing Jupyter notebooks and MyST-NB\n", @@ -76,7 +76,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "99fa02dc", + "id": "eb2ae434", "metadata": {}, "outputs": [ { @@ -94,7 +94,7 @@ }, { "cell_type": "markdown", - "id": "d2bb4eff", + "id": "d7387326", "metadata": {}, "source": [ "---\n", diff --git a/_sources/content/save-load-arrays.ipynb b/_sources/content/save-load-arrays.ipynb index 265c184b..bae5f466 100644 --- a/_sources/content/save-load-arrays.ipynb +++ b/_sources/content/save-load-arrays.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "b2431dbf", + "id": "5ee8ce31", "metadata": {}, "source": [ "# Saving and sharing your NumPy arrays\n", @@ -37,7 +37,7 @@ { "cell_type": "code", "execution_count": 1, - 
"id": "6310b0a2", + "id": "cd1b1ee6", "metadata": {}, "outputs": [], "source": [ @@ -46,7 +46,7 @@ }, { "cell_type": "markdown", - "id": "85e9caf9", + "id": "99be7dcc", "metadata": {}, "source": [ "In this tutorial, you will use the following Python, IPython magic, and NumPy functions:\n", @@ -64,7 +64,7 @@ }, { "cell_type": "markdown", - "id": "e6ece4f3", + "id": "dd5ae9e4", "metadata": {}, "source": [ "---\n", @@ -81,7 +81,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "65c993bb", + "id": "2ac3f44e", "metadata": {}, "outputs": [ { @@ -102,7 +102,7 @@ }, { "cell_type": "markdown", - "id": "dedb0b1f", + "id": "b958c30d", "metadata": {}, "source": [ "## Save your arrays with NumPy's [`savez`](https://numpy.org/doc/stable/reference/generated/numpy.savez.html?highlight=savez#numpy.savez)\n", @@ -125,7 +125,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "ae33cfcb", + "id": "6568bfb3", "metadata": {}, "outputs": [], "source": [ @@ -134,7 +134,7 @@ }, { "cell_type": "markdown", - "id": "8fa2be4d", + "id": "fc9f5f99", "metadata": {}, "source": [ "## Remove the saved arrays and load them back with NumPy's [`load`](https://numpy.org/doc/stable/reference/generated/numpy.load.html#numpy.load)\n", @@ -159,7 +159,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "7c638dc8", + "id": "6fede376", "metadata": {}, "outputs": [], "source": [ @@ -169,7 +169,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "0ee14e2c", + "id": "a3a0632d", "metadata": {}, "outputs": [ { @@ -189,7 +189,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "2eb0fded", + "id": "c28c8998", "metadata": {}, "outputs": [ { @@ -209,7 +209,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "55a27c58", + "id": "ee70a4fc", "metadata": {}, "outputs": [ { @@ -229,7 +229,7 @@ }, { "cell_type": "markdown", - "id": "25c3f018", + "id": "32b876e0", "metadata": {}, "source": [ "## Reassign the NpzFile arrays to `x` and `y`\n", @@ -242,7 +242,7 @@ { "cell_type": "code", 
"execution_count": 8, - "id": "c80899e0", + "id": "eeedca67", "metadata": {}, "outputs": [ { @@ -263,7 +263,7 @@ }, { "cell_type": "markdown", - "id": "2ecc0c48", + "id": "410bd892", "metadata": {}, "source": [ "## Success\n", @@ -294,7 +294,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "3d23e101", + "id": "0498e697", "metadata": {}, "outputs": [ { @@ -323,7 +323,7 @@ }, { "cell_type": "markdown", - "id": "406903c3", + "id": "71f0c349", "metadata": {}, "source": [ "## Save the data to csv file using [`savetxt`](https://numpy.org/doc/stable/reference/generated/numpy.savetxt.html#numpy.savetxt)\n", @@ -338,7 +338,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "417a225e", + "id": "b4e4114d", "metadata": {}, "outputs": [], "source": [ @@ -347,7 +347,7 @@ }, { "cell_type": "markdown", - "id": "426f43d0", + "id": "54f7d250", "metadata": {}, "source": [ "Open the file, `x_y-squared.csv`, and you'll see the following:" @@ -356,7 +356,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "c1497571", + "id": "adb87b83", "metadata": {}, "outputs": [ { @@ -382,7 +382,7 @@ }, { "cell_type": "markdown", - "id": "70231c85", + "id": "19f73a1d", "metadata": {}, "source": [ "## Our arrays as a csv file\n", @@ -406,7 +406,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "cb0c3a9e", + "id": "23fc3067", "metadata": {}, "outputs": [], "source": [ @@ -416,7 +416,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "039cc779", + "id": "63f84437", "metadata": {}, "outputs": [], "source": [ @@ -426,7 +426,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "5fbb51e9", + "id": "571e19b5", "metadata": {}, "outputs": [ { @@ -447,7 +447,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "16b167e8", + "id": "2af81c27", "metadata": {}, "outputs": [ { @@ -468,7 +468,7 @@ }, { "cell_type": "markdown", - "id": "61bdae58", + "id": "52128b90", "metadata": {}, "source": [ "## Success, but remember your types\n", @@ -479,7 +479,7 @@ }, { 
"cell_type": "markdown", - "id": "a11006a0", + "id": "0917e0e3", "metadata": {}, "source": [ "## Wrapping up\n", diff --git a/_sources/content/tutorial-air-quality-analysis.ipynb b/_sources/content/tutorial-air-quality-analysis.ipynb index 41c923e5..83d0c8ad 100644 --- a/_sources/content/tutorial-air-quality-analysis.ipynb +++ b/_sources/content/tutorial-air-quality-analysis.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "738218f7", + "id": "ae60d69f", "metadata": {}, "source": [ "# Analyzing the impact of the lockdown on air quality in Delhi, India\n", @@ -36,7 +36,7 @@ }, { "cell_type": "markdown", - "id": "7de92dca", + "id": "f4ce8d0a", "metadata": {}, "source": [ "## The problem of air pollution\n", @@ -54,7 +54,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "c3023c01", + "id": "e23f9f6e", "metadata": {}, "outputs": [], "source": [ @@ -65,7 +65,7 @@ }, { "cell_type": "markdown", - "id": "f2531e11", + "id": "b047bc3b", "metadata": {}, "source": [ "## Building the dataset\n", @@ -80,7 +80,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "af5fa3a7", + "id": "0f4f84c4", "metadata": {}, "outputs": [ { @@ -106,7 +106,7 @@ }, { "cell_type": "markdown", - "id": "299cf46a", + "id": "3f505f2d", "metadata": {}, "source": [ "For the purpose of this tutorial, we are only concerned with standard pollutants required for calculating the AQI, viz., PM 2.5, PM 10, NO2, NH3, SO2, CO, and O3. So, we will only import these particular columns with [np.loadtxt](https://numpy.org/devdocs/reference/generated/numpy.loadtxt.html). We'll then [slice](https://numpy.org/devdocs/glossary.html#term-0) and create two sets: `pollutants_A` with PM 2.5, PM 10, NO2, NH3, and SO2, and `pollutants_B` with CO and O3. 
The\n", @@ -116,7 +116,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "2ddae71a", + "id": "a48561c3", "metadata": {}, "outputs": [ { @@ -140,7 +140,7 @@ }, { "cell_type": "markdown", - "id": "7073a0f0", + "id": "7a55d860", "metadata": {}, "source": [ "Our dataset might contain missing values, denoted by `NaN`, so let's do a quick check with [np.isfinite](https://numpy.org/devdocs/reference/generated/numpy.isfinite.html)." @@ -149,7 +149,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "f2a96a67", + "id": "f8e9910a", "metadata": {}, "outputs": [ { @@ -169,7 +169,7 @@ }, { "cell_type": "markdown", - "id": "ea5f7301", + "id": "c96cc47f", "metadata": {}, "source": [ "With this, we have successfully imported the data and checked that it is complete. Let's move on to the AQI calculations!" @@ -177,7 +177,7 @@ }, { "cell_type": "markdown", - "id": "4ce93cfc", + "id": "5f279940", "metadata": {}, "source": [ "## Calculating the Air Quality Index\n", @@ -219,7 +219,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "6ab653a7", + "id": "6a796804", "metadata": {}, "outputs": [], "source": [ @@ -238,7 +238,7 @@ }, { "cell_type": "markdown", - "id": "c07931c8", + "id": "532a75fb", "metadata": {}, "source": [ "### Moving averages\n", @@ -253,7 +253,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "8b29bf81", + "id": "c8dfa828", "metadata": {}, "outputs": [], "source": [ @@ -268,7 +268,7 @@ }, { "cell_type": "markdown", - "id": "24352c5f", + "id": "b08f5b63", "metadata": {}, "source": [ "Now, we can join both sets with [np.concatenate](https://numpy.org/devdocs/reference/generated/numpy.concatenate.html) to form a single data set of all the averaged concentrations. 
Note that we have to join our arrays column-wise so we pass the\n", @@ -278,7 +278,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "ad159f49", + "id": "865e6ca6", "metadata": {}, "outputs": [], "source": [ @@ -287,7 +287,7 @@ }, { "cell_type": "markdown", - "id": "84ee8417", + "id": "e48472ce", "metadata": {}, "source": [ "### Sub-indices\n", @@ -304,7 +304,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "641952d0", + "id": "8136e3f0", "metadata": {}, "outputs": [], "source": [ @@ -360,7 +360,7 @@ }, { "cell_type": "markdown", - "id": "a85b45e9", + "id": "ab038a96", "metadata": {}, "source": [ "We will use [np.vectorize](https://numpy.org/devdocs/reference/generated/numpy.vectorize.html) to utilize the concept of vectorization. This simply means we don't have loop over each element of the pollutant array ourselves. [Vectorization](https://numpy.org/devdocs/user/whatisnumpy.html#why-is-numpy-fast) is one of the key advantages of NumPy." @@ -369,7 +369,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "f47e5ee4", + "id": "60f07c74", "metadata": {}, "outputs": [], "source": [ @@ -378,7 +378,7 @@ }, { "cell_type": "markdown", - "id": "2c66be92", + "id": "5814ba95", "metadata": {}, "source": [ "By calling our vectorized function `vcompute_indices` for each pollutant, we get the sub-indices. To get back an array with the original shape, we use [np.stack](https://numpy.org/devdocs/reference/generated/numpy.stack.html)." 
@@ -387,7 +387,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "d5f6a6eb", + "id": "2e90b0f9", "metadata": {}, "outputs": [], "source": [ @@ -402,7 +402,7 @@ }, { "cell_type": "markdown", - "id": "924a987d", + "id": "1a824ff8", "metadata": {}, "source": [ "### Air quality indices\n", @@ -413,7 +413,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "46d09085", + "id": "99d46e7a", "metadata": {}, "outputs": [], "source": [ @@ -422,7 +422,7 @@ }, { "cell_type": "markdown", - "id": "f47e816e", + "id": "d613aadd", "metadata": {}, "source": [ "With this, we have the AQI for every hour from June 1, 2019 to June 30, 2020. Note that even though we started out with\n", @@ -431,7 +431,7 @@ }, { "cell_type": "markdown", - "id": "c4d73da5", + "id": "778a95f7", "metadata": {}, "source": [ "## Paired Student's t-test on the AQIs\n", @@ -448,7 +448,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "b2af734a", + "id": "667da2fc", "metadata": {}, "outputs": [], "source": [ @@ -458,7 +458,7 @@ }, { "cell_type": "markdown", - "id": "1a435cd5", + "id": "20d61836", "metadata": {}, "source": [ "Since total lockdown commenced in Delhi from March 24, 2020, the after-lockdown subset is of the period March 24, 2020 to June 30, 2020. The before-lockdown subset is for the same length of time before 24th March." @@ -467,7 +467,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "682ac326", + "id": "95afb5f5", "metadata": {}, "outputs": [ { @@ -490,7 +490,7 @@ }, { "cell_type": "markdown", - "id": "5dacc81d", + "id": "98318dcc", "metadata": {}, "source": [ "To make sure our samples are *approximately* normally distributed, we take samples of size `n = 30`. `before_sample` and `after_sample` are the set of random observations drawn before and after the total lockdown. We use [random.Generator.choice](https://numpy.org/devdocs/reference/random/generated/numpy.random.Generator.choice.html) to generate the samples." 
@@ -499,7 +499,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "69cd7350", + "id": "a9a90977", "metadata": {}, "outputs": [], "source": [ @@ -511,7 +511,7 @@ }, { "cell_type": "markdown", - "id": "0102fa71", + "id": "5f1499b6", "metadata": {}, "source": [ "### Defining the hypothesis\n", @@ -524,7 +524,7 @@ }, { "cell_type": "markdown", - "id": "b67da97f", + "id": "b08810f8", "metadata": {}, "source": [ "### Calculating the test statistics\n", @@ -545,7 +545,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "c31a5f53", + "id": "e62bd635", "metadata": {}, "outputs": [], "source": [ @@ -561,7 +561,7 @@ }, { "cell_type": "markdown", - "id": "4e1fe09f", + "id": "fcbdd9c1", "metadata": {}, "source": [ "For the `p` value, we will use SciPy's `stats.distributions.t.cdf()` function. It takes two arguments- the `t statistic` and the degrees of freedom (`dof`). The formula for `dof` is `n - 1`." @@ -570,14 +570,14 @@ { "cell_type": "code", "execution_count": 16, - "id": "5ea71f26", + "id": "99d2258f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "The t value is -9.904282450791534 and the p value is 4.1027278384837894e-11.\n" + "The t value is -7.692321175384364 and the p value is 8.77879892721121e-09.\n" ] } ], @@ -591,7 +591,7 @@ }, { "cell_type": "markdown", - "id": "aa1d02c6", + "id": "dd813927", "metadata": {}, "source": [ "## What do the `t` and `p` values mean?\n", @@ -609,7 +609,7 @@ }, { "cell_type": "markdown", - "id": "8ce30d6e", + "id": "8b47dbe7", "metadata": {}, "source": [ "***\n", diff --git a/_sources/content/tutorial-deep-learning-on-mnist.ipynb b/_sources/content/tutorial-deep-learning-on-mnist.ipynb index 8aaf4adc..f8bff135 100644 --- a/_sources/content/tutorial-deep-learning-on-mnist.ipynb +++ b/_sources/content/tutorial-deep-learning-on-mnist.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "f9cbddd2", + "id": "eed496bd", "metadata": {}, "source": [ "# Deep learning on 
MNIST\n", @@ -64,7 +64,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "8861acc3", + "id": "9a6893ec", "metadata": {}, "outputs": [], "source": [ @@ -78,7 +78,7 @@ }, { "cell_type": "markdown", - "id": "ca9b0d7f", + "id": "2ce93afd", "metadata": {}, "source": [ "**2.** Load the data. First check if the data is stored locally; if not, then\n", @@ -88,7 +88,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "365647ef", + "id": "b30f5bad", "metadata": { "tags": [ "remove-cell" @@ -110,7 +110,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "dd3828b4", + "id": "dbe3fd59", "metadata": {}, "outputs": [], "source": [ @@ -135,7 +135,7 @@ }, { "cell_type": "markdown", - "id": "906fcf2d", + "id": "2ff33ff8", "metadata": {}, "source": [ "**3.** Decompress the 4 files and create 4 [`ndarrays`](https://numpy.org/doc/stable/reference/arrays.ndarray.html), saving them into a dictionary. Each original image is of size 28x28 and neural networks normally expect a 1D vector input; therefore, you also need to reshape the images by multiplying 28 by 28 (784)." 
@@ -144,7 +144,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "61a9f529", + "id": "15d18128", "metadata": {}, "outputs": [], "source": [ @@ -167,7 +167,7 @@ }, { "cell_type": "markdown", - "id": "f28d3e50", + "id": "4d4dcde9", "metadata": {}, "source": [ "**4.** Split the data into training and test sets using the standard notation of `x` for data and `y` for labels, calling the training and test set images `x_train` and `x_test`, and the labels `y_train` and `y_test`:" @@ -176,7 +176,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "b305290a", + "id": "8d49bc73", "metadata": {}, "outputs": [], "source": [ @@ -190,7 +190,7 @@ }, { "cell_type": "markdown", - "id": "0b095fa6", + "id": "54cf2eea", "metadata": {}, "source": [ "**5.** You can confirm that the shape of the image arrays is `(60000, 784)` and `(10000, 784)` for training and test sets, respectively, and the labels — `(60000,)` and `(10000,)`:" @@ -199,7 +199,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "319e6cc1", + "id": "8354071d", "metadata": {}, "outputs": [ { @@ -226,7 +226,7 @@ }, { "cell_type": "markdown", - "id": "c46dd467", + "id": "a487b5d0", "metadata": {}, "source": [ "**6.** And you can inspect some images using Matplotlib:" @@ -235,7 +235,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "22074ead", + "id": "912b9848", "metadata": {}, "outputs": [ { @@ -264,7 +264,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "69312362", + "id": "ed14fb54", "metadata": {}, "outputs": [ { @@ -291,7 +291,7 @@ }, { "cell_type": "markdown", - "id": "aa2f6f19", + "id": "45413ef0", "metadata": {}, "source": [ "_Above are five images taken from the MNIST training set. Various hand-drawn\n", @@ -312,7 +312,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "03b0f451", + "id": "430f4ae6", "metadata": {}, "outputs": [ { @@ -333,7 +333,7 @@ }, { "cell_type": "markdown", - "id": "1c31afb0", + "id": "60809852", "metadata": {}, "source": [ "## 2. 
Preprocess the data\n", @@ -359,7 +359,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "04c36bad", + "id": "44c12362", "metadata": {}, "outputs": [ { @@ -378,7 +378,7 @@ }, { "cell_type": "markdown", - "id": "d76e50c6", + "id": "e312c16f", "metadata": {}, "source": [ "**2.** Normalize the arrays by dividing them by 255 (and thus promoting the data type from `uint8` to `float64`) and then assign the train and test image data variables — `x_train` and `x_test` — to `training_images` and `train_labels`, respectively.\n", @@ -393,7 +393,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "f5fef09e", + "id": "fbba1330", "metadata": {}, "outputs": [], "source": [ @@ -404,7 +404,7 @@ }, { "cell_type": "markdown", - "id": "60d1ef99", + "id": "a22da667", "metadata": {}, "source": [ "**3.** Confirm that the image data has changed to the floating-point format:" @@ -413,7 +413,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "d8a204d7", + "id": "fdfda4f2", "metadata": {}, "outputs": [ { @@ -432,7 +432,7 @@ }, { "cell_type": "markdown", - "id": "2e8efe0d", + "id": "553352ab", "metadata": {}, "source": [ "> **Note:** You can also check that normalization was successful by printing `training_images[0]` in a notebook cell. 
Your long output should contain an array of floating-point numbers:\n", @@ -461,7 +461,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "0be1499c", + "id": "3409dc2a", "metadata": {}, "outputs": [ { @@ -480,7 +480,7 @@ }, { "cell_type": "markdown", - "id": "8da96268", + "id": "46de7e35", "metadata": {}, "source": [ "**2.** Define a function that performs one-hot encoding on arrays:" @@ -489,7 +489,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "4ad710f9", + "id": "cba98097", "metadata": {}, "outputs": [], "source": [ @@ -503,7 +503,7 @@ }, { "cell_type": "markdown", - "id": "b7d730e5", + "id": "c5c297cf", "metadata": {}, "source": [ "**3.** Encode the labels and assign the values to new variables:" @@ -512,7 +512,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "f2fb65ae", + "id": "8880b60b", "metadata": {}, "outputs": [], "source": [ @@ -522,7 +522,7 @@ }, { "cell_type": "markdown", - "id": "beb2dc8f", + "id": "c41a457d", "metadata": {}, "source": [ "**4.** Check that the data type has changed to floating point:" @@ -531,7 +531,7 @@ { "cell_type": "code", "execution_count": 16, - "id": "f7c68097", + "id": "de310789", "metadata": {}, "outputs": [ { @@ -550,7 +550,7 @@ }, { "cell_type": "markdown", - "id": "01d5689b", + "id": "cbb11957", "metadata": {}, "source": [ "**5.** Examine a few encoded labels:" @@ -559,7 +559,7 @@ { "cell_type": "code", "execution_count": 17, - "id": "50517f06", + "id": "b2e6fa0c", "metadata": {}, "outputs": [ { @@ -580,7 +580,7 @@ }, { "cell_type": "markdown", - "id": "9a6aab88", + "id": "f87f9c34", "metadata": {}, "source": [ "...and compare to the originals:" @@ -589,7 +589,7 @@ { "cell_type": "code", "execution_count": 18, - "id": "d58573a0", + "id": "5db0eb10", "metadata": {}, "outputs": [ { @@ -610,7 +610,7 @@ }, { "cell_type": "markdown", - "id": "90cb05d4", + "id": "ff883a0e", "metadata": {}, "source": [ "You have finished preparing the dataset.\n", @@ -705,7 +705,7 @@ { "cell_type": "code", 
"execution_count": 19, - "id": "0a2b68c5", + "id": "bbc92529", "metadata": {}, "outputs": [], "source": [ @@ -715,7 +715,7 @@ }, { "cell_type": "markdown", - "id": "f0f453ed", + "id": "ff5fa426", "metadata": {}, "source": [ "**2.** For the hidden layer, define the ReLU activation function for forward propagation and ReLU's derivative that will be used during backpropagation:" @@ -724,7 +724,7 @@ { "cell_type": "code", "execution_count": 20, - "id": "f4c5a81f", + "id": "bddb8081", "metadata": {}, "outputs": [], "source": [ @@ -741,7 +741,7 @@ }, { "cell_type": "markdown", - "id": "1e3fe996", + "id": "c3921d63", "metadata": {}, "source": [ "**3.** Set certain default values of [hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)), such as:\n", @@ -756,7 +756,7 @@ { "cell_type": "code", "execution_count": 21, - "id": "a3adbc70", + "id": "4120499c", "metadata": {}, "outputs": [], "source": [ @@ -769,7 +769,7 @@ }, { "cell_type": "markdown", - "id": "fe7083d6", + "id": "30291681", "metadata": {}, "source": [ "**4.** Initialize the weight vectors that will be used in the hidden and output layers with random values:" @@ -778,7 +778,7 @@ { "cell_type": "code", "execution_count": 22, - "id": "7f30ee43", + "id": "84a190e2", "metadata": {}, "outputs": [], "source": [ @@ -788,7 +788,7 @@ }, { "cell_type": "markdown", - "id": "75fe0451", + "id": "a0d82388", "metadata": {}, "source": [ "**5.** Set up the neural network's learning experiment with a training loop and start the training process.\n", @@ -801,7 +801,7 @@ { "cell_type": "code", "execution_count": 23, - "id": "3b761d13", + "id": "43943493", "metadata": {}, "outputs": [ { @@ -1130,7 +1130,7 @@ }, { "cell_type": "markdown", - "id": "fee8434c", + "id": "215ff900", "metadata": {}, "source": [ "The training process may take many minutes, depending on a number of factors, such as the processing power of the machine you are running the experiment on and the number of epochs. 
To reduce the waiting time, you can change the epoch (iteration) variable from 100 to a lower number, reset the runtime (which will reset the weights), and run the notebook cells again." @@ -1138,7 +1138,7 @@ }, { "cell_type": "markdown", - "id": "9e1a056d", + "id": "696456f6", "metadata": {}, "source": [ "After executing the cell above, you can visualize the training and test set errors and accuracy for an instance of this training process." @@ -1147,7 +1147,7 @@ { "cell_type": "code", "execution_count": 24, - "id": "fa36deb1", + "id": "fe38b668", "metadata": {}, "outputs": [ { @@ -1192,7 +1192,7 @@ }, { "cell_type": "markdown", - "id": "30f93cc5", + "id": "b0835b39", "metadata": {}, "source": [ "_The training and testing error is shown above in the left and right\n", diff --git a/_sources/content/tutorial-deep-reinforcement-learning-with-pong-from-pixels.ipynb b/_sources/content/tutorial-deep-reinforcement-learning-with-pong-from-pixels.ipynb index 8ad0bf12..575601ff 100644 --- a/_sources/content/tutorial-deep-reinforcement-learning-with-pong-from-pixels.ipynb +++ b/_sources/content/tutorial-deep-reinforcement-learning-with-pong-from-pixels.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "f9c13695", + "id": "7bfd86f5", "metadata": {}, "source": [ "# Deep reinforcement learning with Pong from pixels\n", diff --git a/_sources/content/tutorial-ma.ipynb b/_sources/content/tutorial-ma.ipynb index 999106d3..5e691e83 100644 --- a/_sources/content/tutorial-ma.ipynb +++ b/_sources/content/tutorial-ma.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "f4517967", + "id": "b51872b0", "metadata": {}, "source": [ "# Masked Arrays\n", @@ -26,7 +26,7 @@ }, { "cell_type": "markdown", - "id": "214e62c6", + "id": "bbc80267", "metadata": {}, "source": [ "***" @@ -34,7 +34,7 @@ }, { "cell_type": "markdown", - "id": "3d7318a5", + "id": "2fabef4c", "metadata": {}, "source": [ "## What are masked arrays?\n", @@ -65,7 +65,7 @@ }, { "cell_type": 
"markdown", - "id": "504163a4", + "id": "e2ea309e", "metadata": {}, "source": [ "## Using masked arrays to see COVID-19 data\n", @@ -76,7 +76,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "98166867", + "id": "7e9b26d5", "metadata": {}, "outputs": [], "source": [ @@ -91,7 +91,7 @@ }, { "cell_type": "markdown", - "id": "3eaf895d", + "id": "e05ae304", "metadata": {}, "source": [ "The data file contains data of different types and is organized as follows:\n", @@ -107,7 +107,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "86fe0eb5", + "id": "a7ffe45a", "metadata": {}, "outputs": [], "source": [ @@ -145,7 +145,7 @@ }, { "cell_type": "markdown", - "id": "416ccaa0", + "id": "459db552", "metadata": {}, "source": [ "Included in the `numpy.genfromtxt` function call, we have selected the [numpy.dtype](https://numpy.org/devdocs/reference/generated/numpy.dtype.html#numpy.dtype) for each subset of the data (either an integer - `numpy.int_` - or a string of characters - `numpy.str_`). We have also used the `encoding` argument to select `utf-8-sig` as the encoding for the file (read more about encoding in the [official Python documentation](https://docs.python.org/3/library/codecs.html#encodings-and-unicode). You can read more about the `numpy.genfromtxt` function from the [Reference Documentation](https://numpy.org/devdocs/reference/generated/numpy.genfromtxt.html#numpy.genfromtxt) or from the [Basic IO tutorial](https://numpy.org/devdocs/user/basics.io.genfromtxt.html)." @@ -153,7 +153,7 @@ }, { "cell_type": "markdown", - "id": "80930706", + "id": "54b5c0d5", "metadata": {}, "source": [ "## Exploring the data\n", @@ -164,7 +164,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "290110ca", + "id": "50fbe0d0", "metadata": {}, "outputs": [ { @@ -199,7 +199,7 @@ }, { "cell_type": "markdown", - "id": "173f8a72", + "id": "e415c56b", "metadata": {}, "source": [ "The graph has a strange shape from January 24th to February 1st. 
It would be interesting to know where this data comes from. If we look at the `locations` array we extracted from the `.csv` file, we can see that we have two columns, where the first would contain regions and the second would contain the name of the country. However, only the first few rows contain data for the first column (province names in China). Following that, we only have country names. So it would make sense to group all the data from China into a single row. For this, we'll select from the `nbcases` array only the rows for which the second entry of the `locations` array corresponds to China. Next, we'll use the [numpy.sum](https://numpy.org/devdocs/reference/generated/numpy.sum.html#numpy.sum) function to sum all the selected rows (`axis=0`). Note also that row 35 corresponds to the total counts for the whole country for each date. Since we want to calculate the sum ourselves from the provinces data, we have to remove that row first from both `locations` and `nbcases`:" @@ -208,7 +208,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "48a980d8", + "id": "c01b022e", "metadata": {}, "outputs": [ { @@ -234,7 +234,7 @@ }, { "cell_type": "markdown", - "id": "f0e31d8d", + "id": "0c49b6ce", "metadata": {}, "source": [ "Something's wrong with this data - we are not supposed to have negative values in a cumulative data set. What's going on?" @@ -242,7 +242,7 @@ }, { "cell_type": "markdown", - "id": "0dfcfb55", + "id": "4df2c08b", "metadata": {}, "source": [ "## Missing data\n", @@ -253,7 +253,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "fe13e2ac", + "id": "66af1b1e", "metadata": {}, "outputs": [ { @@ -279,7 +279,7 @@ }, { "cell_type": "markdown", - "id": "d5ab77ef", + "id": "68c7b95f", "metadata": {}, "source": [ "All the `-1` values we are seeing come from `numpy.genfromtxt` attempting to read missing data from the original `.csv` file. 
Obviously, we\n", @@ -289,7 +289,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "da46bede", + "id": "d484bd87", "metadata": {}, "outputs": [], "source": [ @@ -300,7 +300,7 @@ }, { "cell_type": "markdown", - "id": "5ef9957e", + "id": "5fdb379d", "metadata": {}, "source": [ "If we look at the `nbcases_ma` masked array, this is what we have:" @@ -309,7 +309,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "e0d3a908", + "id": "17b65445", "metadata": {}, "outputs": [ { @@ -344,7 +344,7 @@ }, { "cell_type": "markdown", - "id": "6e29fde8", + "id": "d0f7b0c2", "metadata": {}, "source": [ "We can see that this is a different kind of array. As mentioned in the introduction, it has three attributes (`data`, `mask` and `fill_value`).\n", @@ -353,7 +353,7 @@ }, { "cell_type": "markdown", - "id": "49af15fd", + "id": "f7ad330c", "metadata": {}, "source": [ "Let's try and see what the data looks like excluding the first row (data from the Hubei province in China) so we can look at the missing data more\n", @@ -363,7 +363,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "0a811150", + "id": "32e83368", "metadata": {}, "outputs": [ { @@ -395,7 +395,7 @@ }, { "cell_type": "markdown", - "id": "9c96b681", + "id": "b547fa52", "metadata": {}, "source": [ "Now that our data has been masked, let's try summing up all the cases in China:" @@ -404,7 +404,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "ade4ba08", + "id": "a3f8c480", "metadata": {}, "outputs": [ { @@ -429,7 +429,7 @@ }, { "cell_type": "markdown", - "id": "b7c4ec9c", + "id": "029d457a", "metadata": {}, "source": [ "Note that `china_masked` is a masked array, so it has a different data structure than a regular NumPy array. 
Now, we can access its data directly by using the `.data` attribute:" @@ -438,7 +438,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "698c646a", + "id": "e5594694", "metadata": {}, "outputs": [ { @@ -460,7 +460,7 @@ }, { "cell_type": "markdown", - "id": "f9eff036", + "id": "eb121aae", "metadata": {}, "source": [ "That is better: no more negative values. However, we can still see that for some days, the cumulative number of cases seems to go down (from 835 to 10, for example), which does not agree with the definition of \"cumulative data\". If we look more closely at the data, we can see that in the period where there was missing data in mainland China, there was valid data for Hong Kong, Taiwan, Macau and \"Unspecified\" regions of China. Maybe we can remove those from the total sum of cases in China, to get a better understanding of the data.\n", @@ -471,7 +471,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "e427b7ab", + "id": "52c36deb", "metadata": {}, "outputs": [], "source": [ @@ -486,7 +486,7 @@ }, { "cell_type": "markdown", - "id": "e5b57a1c", + "id": "d568687c", "metadata": {}, "source": [ "Now, `china_mask` is an array of boolean values (`True` or `False`); we can check that the indices are what we wanted with the [ma.nonzero](https://numpy.org/devdocs/reference/generated/numpy.ma.nonzero.html#numpy.ma.nonzero) method for masked arrays:" @@ -495,7 +495,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "64b9fa7a", + "id": "395851ee", "metadata": {}, "outputs": [ { @@ -516,7 +516,7 @@ }, { "cell_type": "markdown", - "id": "a629c680", + "id": "452420f2", "metadata": {}, "source": [ "Now we can correctly sum entries for mainland China:" @@ -525,7 +525,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "7ea0a5b1", + "id": "e495ba86", "metadata": {}, "outputs": [ { @@ -550,7 +550,7 @@ }, { "cell_type": "markdown", - "id": "128b0402", + "id": "4724b31f", "metadata": {}, "source": [ "We can replace the data with this 
information and plot a new graph, focusing on Mainland China:" @@ -559,7 +559,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "24e5cd39", + "id": "ede0bb3f", "metadata": {}, "outputs": [ { @@ -591,7 +591,7 @@ }, { "cell_type": "markdown", - "id": "8f02f487", + "id": "7b02a320", "metadata": {}, "source": [ "It's clear that masked arrays are the right solution here. We cannot represent the missing data without mischaracterizing the evolution of the curve." @@ -599,7 +599,7 @@ }, { "cell_type": "markdown", - "id": "55b65e0a", + "id": "a338e968", "metadata": {}, "source": [ "## Fitting Data\n", @@ -610,7 +610,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "3537316a", + "id": "218750c9", "metadata": {}, "outputs": [ { @@ -635,7 +635,7 @@ }, { "cell_type": "markdown", - "id": "d8f1a904", + "id": "a7b794c3", "metadata": {}, "source": [ "We can also access the valid entries by using the logical negation for this mask:" @@ -644,7 +644,7 @@ { "cell_type": "code", "execution_count": 16, - "id": "34cf3906", + "id": "fe4c8df0", "metadata": {}, "outputs": [ { @@ -667,7 +667,7 @@ }, { "cell_type": "markdown", - "id": "7c7bca50", + "id": "3d1f3e1b", "metadata": {}, "source": [ "Now, if we want to create a very simple approximation for this data, we should take into account the valid entries around the invalid ones. So first let's select the dates for which the data is valid. 
Note that we can use the mask from the `china_total` masked array to index the dates array:" @@ -676,7 +676,7 @@ { "cell_type": "code", "execution_count": 17, - "id": "cb16e14a", + "id": "0f2cd8ae", "metadata": {}, "outputs": [ { @@ -697,7 +697,7 @@ }, { "cell_type": "markdown", - "id": "ca207db8", + "id": "4d0c0b15", "metadata": {}, "source": [ "Finally, we can use the\n", @@ -708,13 +708,13 @@ { "cell_type": "code", "execution_count": 18, - "id": "a10ca6f9", + "id": "60b0e862", "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "[]" + "[]" ] }, "execution_count": 18, @@ -741,7 +741,7 @@ }, { "cell_type": "markdown", - "id": "e22bf391", + "id": "13318c57", "metadata": {}, "source": [ "This plot is not so readable since the lines seem to be over each other, so let's summarize in a more elaborate plot. We'll plot the real data when\n", @@ -751,7 +751,7 @@ { "cell_type": "code", "execution_count": 19, - "id": "f195a337", + "id": "f29cccc0", "metadata": {}, "outputs": [ { @@ -790,7 +790,7 @@ }, { "cell_type": "markdown", - "id": "23131159", + "id": "16ceea66", "metadata": {}, "source": [ "## In practice" @@ -798,7 +798,7 @@ }, { "cell_type": "markdown", - "id": "a79e3c74", + "id": "2b79496a", "metadata": {}, "source": [ "- Adding `-1` to missing data is not a problem with `numpy.genfromtxt`; in this particular case, substituting the missing value with `0` might have been fine, but we'll see later that this is far from a general solution. Also, it is possible to call the `numpy.genfromtxt` function using the `usemask` parameter. If `usemask=True`, `numpy.genfromtxt` automatically returns a masked array." 
@@ -806,7 +806,7 @@ }, { "cell_type": "markdown", - "id": "4eeb47df", + "id": "d0cf8875", "metadata": {}, "source": [ "## Further reading\n", diff --git a/_sources/content/tutorial-plotting-fractals.ipynb b/_sources/content/tutorial-plotting-fractals.ipynb index 85af986a..734c4466 100644 --- a/_sources/content/tutorial-plotting-fractals.ipynb +++ b/_sources/content/tutorial-plotting-fractals.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "f910fa22", + "id": "0d90d9dd", "metadata": {}, "source": [ "# Plotting Fractals" @@ -10,7 +10,7 @@ }, { "cell_type": "markdown", - "id": "bf52a22b", + "id": "0e9e2367", "metadata": {}, "source": [ "![Fractal picture](tutorial-plotting-fractals/fractal.png)" @@ -18,7 +18,7 @@ }, { "cell_type": "markdown", - "id": "1aa9e7c4", + "id": "e72b0def", "metadata": {}, "source": [ "Fractals are beautiful, compelling mathematical forms that can often be created from a relatively simple set of instructions. In nature they can be found in various places, such as coastlines, seashells, and ferns, and were even used in creating certain types of antennas. 
The mathematical idea of fractals was known for quite some time, but they really began to be truly appreciated in the 1970s as advancements in computer graphics and some accidental discoveries led researchers like [Benoît Mandelbrot](https://en.wikipedia.org/wiki/Benoit_Mandelbrot) to stumble upon the truly mystifying visualizations that fractals possess.\n", @@ -28,7 +28,7 @@ }, { "cell_type": "markdown", - "id": "4deac151", + "id": "24864e30", "metadata": {}, "source": [ "## What you'll do\n", @@ -41,7 +41,7 @@ }, { "cell_type": "markdown", - "id": "28ecf767", + "id": "d201b9b4", "metadata": {}, "source": [ "## What you'll learn\n", @@ -54,7 +54,7 @@ }, { "cell_type": "markdown", - "id": "b1d6fc2c", + "id": "2661a6d0", "metadata": {}, "source": [ "## What you'll need\n", @@ -68,7 +68,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "e54553a2", + "id": "c45a16c2", "metadata": {}, "outputs": [], "source": [ @@ -79,7 +79,7 @@ }, { "cell_type": "markdown", - "id": "ac1b97a2", + "id": "ffc16982", "metadata": {}, "source": [ "- Some familiarity with Python, NumPy and matplotlib\n", @@ -90,7 +90,7 @@ }, { "cell_type": "markdown", - "id": "6df6cfad", + "id": "bef86375", "metadata": {}, "source": [ "## Warmup\n", @@ -109,7 +109,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "b64773aa", + "id": "86fff1c9", "metadata": {}, "outputs": [], "source": [ @@ -119,7 +119,7 @@ }, { "cell_type": "markdown", - "id": "86a497ba", + "id": "92e319ed", "metadata": {}, "source": [ "Note that the square function we used is an example of a **[NumPy universal function](https://numpy.org/doc/stable/reference/ufuncs.html)**; we will come back to the significance of this decision shortly.\n", @@ -132,7 +132,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "e44fed78", + "id": "ebb7c6af", "metadata": {}, "outputs": [ { @@ -152,7 +152,7 @@ }, { "cell_type": "markdown", - "id": "a9b8b84f", + "id": "ce369c04", "metadata": {}, "source": [ "Since we used a universal 
function in our design, we can compute multiple inputs at the same time:" @@ -161,7 +161,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "bf63496b", + "id": "b113f658", "metadata": {}, "outputs": [ { @@ -182,7 +182,7 @@ }, { "cell_type": "markdown", - "id": "d4d19a95", + "id": "928b03cf", "metadata": {}, "source": [ "Some values grow, some values shrink, some don't experience much change.\n", @@ -193,7 +193,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "2d60ef81", + "id": "ff454b75", "metadata": {}, "outputs": [], "source": [ @@ -203,7 +203,7 @@ }, { "cell_type": "markdown", - "id": "ea78d0e2", + "id": "b6eff995", "metadata": {}, "source": [ "Now we will apply our function to each value contained in the mesh. Since we used a universal function in our design, this means that we can pass in the entire mesh all at once. This is extremely convenient for two reasons: It reduces the amount of code needed to be written and greatly increases the efficiency (as universal functions make use of system level C programming in their computations).\n", @@ -215,7 +215,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "57666002", + "id": "61c1b961", "metadata": {}, "outputs": [ { @@ -245,7 +245,7 @@ }, { "cell_type": "markdown", - "id": "6961dff7", + "id": "6026fd9d", "metadata": {}, "source": [ "This gives us a rough idea of what one iteration of the function does. Certain areas (notably in the areas closest to $(0,0i)$) remain rather small while other areas grow quite considerably. 
Note that we lose information about the output by taking the absolute value, but it is the only way for us to be able to make a plot.\n", @@ -256,7 +256,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "17f266bf", + "id": "f9f5e45d", "metadata": {}, "outputs": [ { @@ -285,7 +285,7 @@ }, { "cell_type": "markdown", - "id": "c38cc0dd", + "id": "43bf827c", "metadata": {}, "source": [ "Once again, we see that values around the origin remain small, and values with a larger absolute value (or modulus) “explode”.\n", @@ -295,7 +295,7 @@ }, { "cell_type": "markdown", - "id": "12b8ee87", + "id": "9acbad88", "metadata": {}, "source": [ "Consider three complex numbers:\n", @@ -312,7 +312,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "9e56e541", + "id": "cc0e301e", "metadata": {}, "outputs": [ { @@ -349,7 +349,7 @@ }, { "cell_type": "markdown", - "id": "32e90066", + "id": "57f34a5a", "metadata": {}, "source": [ "To our surprise, the behaviour of the function did not come close to matching our hypothesis. This is a prime example of the chaotic behaviour fractals possess. In the first two plots, the value \"exploded\" on the last iteration, jumping way beyond the region that it was contained in previously. 
The third plot on the other hand remained bounded to a small region close to the origin, yielding completely different behaviour despite the tiny change in value.\n", @@ -366,7 +366,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "94a84e5f", + "id": "75036afa", "metadata": {}, "outputs": [], "source": [ @@ -386,7 +386,7 @@ }, { "cell_type": "markdown", - "id": "805619f8", + "id": "ed0a549c", "metadata": {}, "source": [ "The behaviour of this function may look confusing at first glance, so it will help to explain some of the notation.\n", @@ -401,7 +401,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "7f1f617c", + "id": "3ad3de7c", "metadata": {}, "outputs": [ { @@ -436,7 +436,7 @@ }, { "cell_type": "markdown", - "id": "ebdb7b11", + "id": "23e75415", "metadata": {}, "source": [ "What this stunning visual conveys is the complexity of the function’s behaviour. The yellow region represents values that remain small, while the purple region represents the divergent values. The beautiful pattern that arises on the border of the converging and diverging values is even more fascinating when you realize that it is created from such a simple function." 
@@ -444,7 +444,7 @@ }, { "cell_type": "markdown", - "id": "6dc21f19", + "id": "61571c9e", "metadata": {}, "source": [ "## Julia set\n", @@ -459,7 +459,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "9f09b7f9", + "id": "70e28b29", "metadata": {}, "outputs": [], "source": [ @@ -478,7 +478,7 @@ }, { "cell_type": "markdown", - "id": "6b59428f", + "id": "6997c6a3", "metadata": {}, "source": [ "To make our lives easier, we will create a couple meshes that we will reuse throughout the rest of the examples:" @@ -487,7 +487,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "bf6ef8dd", + "id": "8b00ad21", "metadata": {}, "outputs": [], "source": [ @@ -500,7 +500,7 @@ }, { "cell_type": "markdown", - "id": "b7f76f6c", + "id": "eb2d20b1", "metadata": {}, "source": [ "We will also write a function that we will use to create our fractal plots:" @@ -509,7 +509,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "05a20e3a", + "id": "9acc9141", "metadata": {}, "outputs": [], "source": [ @@ -530,7 +530,7 @@ }, { "cell_type": "markdown", - "id": "c3f9adb9", + "id": "59fa5310", "metadata": {}, "source": [ "Using our newly defined functions, we can make a quick plot of the first fractal again:" @@ -539,7 +539,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "3ef71c1f", + "id": "d0f10e22", "metadata": {}, "outputs": [ { @@ -562,7 +562,7 @@ }, { "cell_type": "markdown", - "id": "c4c15ec7", + "id": "d10213b6", "metadata": {}, "source": [ "We also can explore some different Julia sets by experimenting with different values of $c$. 
It can be surprising how much influence it has on the shape of the fractal.\n", @@ -573,7 +573,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "c08f8b25", + "id": "ff3434da", "metadata": {}, "outputs": [ { @@ -597,7 +597,7 @@ { "cell_type": "code", "execution_count": 16, - "id": "8ecacaf9", + "id": "7ee956c9", "metadata": {}, "outputs": [ { @@ -620,7 +620,7 @@ }, { "cell_type": "markdown", - "id": "f4322df9", + "id": "ff8e207e", "metadata": {}, "source": [ "## Mandelbrot set\n", @@ -631,7 +631,7 @@ { "cell_type": "code", "execution_count": 17, - "id": "627f3baf", + "id": "de1b2552", "metadata": {}, "outputs": [], "source": [ @@ -652,7 +652,7 @@ { "cell_type": "code", "execution_count": 18, - "id": "32188dd1", + "id": "7485093a", "metadata": {}, "outputs": [ { @@ -675,7 +675,7 @@ }, { "cell_type": "markdown", - "id": "8de5414e", + "id": "05891439", "metadata": {}, "source": [ "## Generalizing the Julia set\n", @@ -686,7 +686,7 @@ { "cell_type": "code", "execution_count": 19, - "id": "6d9dfc77", + "id": "3ba3e1b5", "metadata": {}, "outputs": [], "source": [ @@ -705,7 +705,7 @@ }, { "cell_type": "markdown", - "id": "6777a899", + "id": "a4085b6e", "metadata": {}, "source": [ "One cool set of fractals that can be plotted using our general Julia function are ones of the form $f(z) = z^n + c$ for some positive integer $n$. A very cool pattern which emerges is that the number of regions that 'stick out' matches the degree in which we raise the function to while iterating over it." @@ -714,7 +714,7 @@ { "cell_type": "code", "execution_count": 20, - "id": "57d758b6", + "id": "f2fa0da5", "metadata": {}, "outputs": [ { @@ -743,7 +743,7 @@ }, { "cell_type": "markdown", - "id": "7914e851", + "id": "13b99563", "metadata": {}, "source": [ "Needless to say, there is a large amount of exploring that can be done by fiddling with the inputted function, value of $c$, number of iterations, radius and even the density of the mesh and choice of colours." 
@@ -751,7 +751,7 @@ }, { "cell_type": "markdown", - "id": "9e6c8065", + "id": "5ab22d00", "metadata": {}, "source": [ "### Newton Fractals\n", @@ -766,7 +766,7 @@ { "cell_type": "code", "execution_count": 21, - "id": "3d03b95d", + "id": "eb593faf", "metadata": {}, "outputs": [], "source": [ @@ -787,7 +787,7 @@ }, { "cell_type": "markdown", - "id": "b469ad6d", + "id": "34659d40", "metadata": {}, "source": [ "Now we can experiment with some different functions. For polynomials, we can create our plots quite effortlessly using the [NumPy Polynomial class](https://numpy.org/doc/stable/reference/generated/numpy.polynomial.polynomial.Polynomial.html), which has built in functionality for computing derivatives.\n", @@ -798,7 +798,7 @@ { "cell_type": "code", "execution_count": 22, - "id": "f7eddf79", + "id": "1bdd7ffe", "metadata": {}, "outputs": [ { @@ -822,7 +822,7 @@ }, { "cell_type": "markdown", - "id": "2c002a8d", + "id": "991aaf92", "metadata": {}, "source": [ "which has the derivative:" @@ -831,7 +831,7 @@ { "cell_type": "code", "execution_count": 23, - "id": "a65cecbb", + "id": "c874efeb", "metadata": {}, "outputs": [ { @@ -855,7 +855,7 @@ { "cell_type": "code", "execution_count": 24, - "id": "446ec38b", + "id": "8fd558cb", "metadata": {}, "outputs": [ { @@ -878,7 +878,7 @@ }, { "cell_type": "markdown", - "id": "55f64398", + "id": "a4002c31", "metadata": {}, "source": [ "Beautiful! 
Let's try another one:\n", @@ -893,7 +893,7 @@ { "cell_type": "code", "execution_count": 25, - "id": "355994e1", + "id": "6b6304f3", "metadata": {}, "outputs": [], "source": [ @@ -908,7 +908,7 @@ { "cell_type": "code", "execution_count": 26, - "id": "ccd3915f", + "id": "a78f8172", "metadata": {}, "outputs": [ { @@ -931,7 +931,7 @@ }, { "cell_type": "markdown", - "id": "3ed8f2cb", + "id": "d60233bf", "metadata": {}, "source": [ "Note that you sometimes have to play with the radius in order to get a neat-looking fractal.\n", @@ -946,7 +946,7 @@ { "cell_type": "code", "execution_count": 27, - "id": "4b9af39b", + "id": "d0e1cd6e", "metadata": {}, "outputs": [], "source": [ @@ -966,7 +966,7 @@ }, { "cell_type": "markdown", - "id": "b59ad8a2", + "id": "64f4d564", "metadata": {}, "source": [ "We will denote this one 'Wacky fractal', as its equation would not be fun to try and put in the title." @@ -975,7 +975,7 @@ { "cell_type": "code", "execution_count": 28, - "id": "f6a569fc", + "id": "2b713451", "metadata": {}, "outputs": [ { @@ -998,7 +998,7 @@ }, { "cell_type": "markdown", - "id": "41ccde12", + "id": "882afd3d", "metadata": {}, "source": [ "It is truly fascinating how distinct yet similar these fractals are to one another. This leads us to the final section." 
@@ -1006,7 +1006,7 @@ }, { "cell_type": "markdown", - "id": "089b9f95", + "id": "21ceddad", "metadata": {}, "source": [ "## Creating your own fractals\n", @@ -1024,7 +1024,7 @@ { "cell_type": "code", "execution_count": 29, - "id": "e53727b7", + "id": "42877ced", "metadata": {}, "outputs": [], "source": [ @@ -1035,7 +1035,7 @@ { "cell_type": "code", "execution_count": 30, - "id": "3a20142b", + "id": "16e36fda", "metadata": {}, "outputs": [ { @@ -1058,7 +1058,7 @@ }, { "cell_type": "markdown", - "id": "f5c625be", + "id": "1281b236", "metadata": {}, "source": [ "What happens if we compose our defined function inside of a sine function?\n", @@ -1071,7 +1071,7 @@ { "cell_type": "code", "execution_count": 31, - "id": "7699136b", + "id": "8e3f7953", "metadata": {}, "outputs": [], "source": [ @@ -1082,7 +1082,7 @@ { "cell_type": "code", "execution_count": 32, - "id": "ea4ad69c", + "id": "ece5f1a3", "metadata": {}, "outputs": [ { @@ -1105,7 +1105,7 @@ }, { "cell_type": "markdown", - "id": "dee3a955", + "id": "68196005", "metadata": {}, "source": [ "Next, let's create a function that applies both f and g to the inputs each iteration and adds the result together:\n", @@ -1116,7 +1116,7 @@ { "cell_type": "code", "execution_count": 33, - "id": "f559f37f", + "id": "60f20df3", "metadata": {}, "outputs": [], "source": [ @@ -1127,7 +1127,7 @@ { "cell_type": "code", "execution_count": 34, - "id": "30463bb9", + "id": "77605876", "metadata": {}, "outputs": [ { @@ -1150,7 +1150,7 @@ }, { "cell_type": "markdown", - "id": "0c52bf61", + "id": "78f70e19", "metadata": {}, "source": [ "You can even create beautiful fractals through your own errors. 
Here is one that was created accidentally by making a mistake in computing the derivative of a Newton fractal:" @@ -1159,7 +1159,7 @@ { "cell_type": "code", "execution_count": 35, - "id": "3047c821", + "id": "f96c0f94", "metadata": {}, "outputs": [], "source": [ @@ -1170,7 +1170,7 @@ { "cell_type": "code", "execution_count": 36, - "id": "20108b82", + "id": "586a9412", "metadata": {}, "outputs": [ { @@ -1193,7 +1193,7 @@ }, { "cell_type": "markdown", - "id": "82f99e4e", + "id": "d490a07d", "metadata": {}, "source": [ "Needless to say, there is a nearly endless supply of interesting fractal creations that can be made just by playing around with various combinations of NumPy universal functions and by tinkering with the parameters." @@ -1201,7 +1201,7 @@ }, { "cell_type": "markdown", - "id": "d0419adb", + "id": "a1aeb261", "metadata": {}, "source": [ "## In conclusion\n", @@ -1219,7 +1219,7 @@ }, { "cell_type": "markdown", - "id": "a4372b8b", + "id": "4a9167f1", "metadata": {}, "source": [ "## On your own\n", @@ -1231,7 +1231,7 @@ }, { "cell_type": "markdown", - "id": "3f8e6bc0", + "id": "1c8d363a", "metadata": {}, "source": [ "## Further reading\n", diff --git a/_sources/content/tutorial-static_equilibrium.ipynb b/_sources/content/tutorial-static_equilibrium.ipynb index fdb2e9e1..3030472e 100644 --- a/_sources/content/tutorial-static_equilibrium.ipynb +++ b/_sources/content/tutorial-static_equilibrium.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "81368171", + "id": "ee9f5e9c", "metadata": {}, "source": [ "# Determining Static Equilibrium in NumPy\n", @@ -30,7 +30,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "5d764f39", + "id": "6d558df7", "metadata": {}, "outputs": [], "source": [ @@ -40,7 +40,7 @@ }, { "cell_type": "markdown", - "id": "dde44a52", + "id": "d3d894b4", "metadata": {}, "source": [ "In this tutorial you will use the following NumPy tools:\n", @@ -51,7 +51,7 @@ }, { "cell_type": "markdown", - "id": "d046c3a7", + "id": 
"bcb64b47", "metadata": {}, "source": [ "## Solving equilibrium with Newton's second law\n", @@ -80,7 +80,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "7001cacf", + "id": "429b1a23", "metadata": {}, "outputs": [ { @@ -101,7 +101,7 @@ }, { "cell_type": "markdown", - "id": "c462e320", + "id": "28c1c832", "metadata": {}, "source": [ "This defines `forceA` as being a vector with magnitude of 1 in the $x$ direction and `forceB` as magnitude 1 in the $y$ direction.\n", @@ -114,7 +114,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "3f79789a", + "id": "144a3d3b", "metadata": {}, "outputs": [ { @@ -151,7 +151,7 @@ }, { "cell_type": "markdown", - "id": "6c134453", + "id": "a3122277", "metadata": {}, "source": [ "There are two forces emanating from a single point. In order to simplify this problem, you can add them together to find the sum of forces. Note that both `forceA` and `forceB` are three-dimensional vectors, represented by NumPy as arrays with three components. Because NumPy is meant to simplify and optimize operations between vectors, you can easily compute the sum of these two vectors as follows:" @@ -160,7 +160,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "f3177a67", + "id": "bf1af4ab", "metadata": {}, "outputs": [ { @@ -178,7 +178,7 @@ }, { "cell_type": "markdown", - "id": "bb4fcd44", + "id": "7e4ac84f", "metadata": {}, "source": [ "Force C now acts as a single force that represents both A and B.\n", @@ -188,7 +188,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "9414e47c", + "id": "359f09c5", "metadata": {}, "outputs": [ { @@ -226,7 +226,7 @@ }, { "cell_type": "markdown", - "id": "175e8c93", + "id": "9617727d", "metadata": {}, "source": [ "However, the goal is equilibrium.\n", @@ -257,7 +257,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "e146d878", + "id": "920ec8a9", "metadata": {}, "outputs": [ { @@ -292,7 +292,7 @@ }, { "cell_type": "markdown", - "id": "fa482961", + "id": "4b9408a6", "metadata": {}, 
"source": [ "The empty graph signifies that there are no outlying forces. This denotes a system in equilibrium.\n", @@ -316,7 +316,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "645078c5", + "id": "cef0d452", "metadata": {}, "outputs": [ { @@ -340,7 +340,7 @@ }, { "cell_type": "markdown", - "id": "867fa643", + "id": "7aaf5a67", "metadata": {}, "source": [ "## Finding values with physical properties\n", @@ -361,7 +361,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "8794c1a7", + "id": "58140b20", "metadata": {}, "outputs": [ { @@ -386,7 +386,7 @@ }, { "cell_type": "markdown", - "id": "5a729b18", + "id": "e25c979e", "metadata": {}, "source": [ "In order to use these vectors in relation to forces you need to convert them into unit vectors.\n", @@ -396,7 +396,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "82ac22af", + "id": "c787c9be", "metadata": {}, "outputs": [ { @@ -414,7 +414,7 @@ }, { "cell_type": "markdown", - "id": "da77992c", + "id": "f2eadea8", "metadata": {}, "source": [ "You can then multiply this direction with the magnitude of the force in order to find the force vector.\n", @@ -425,7 +425,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "641d9223", + "id": "f6cdf697", "metadata": {}, "outputs": [ { @@ -444,7 +444,7 @@ }, { "cell_type": "markdown", - "id": "98b5e2c5", + "id": "60d173fe", "metadata": {}, "source": [ "In order to find the moment you need the cross product of the force vector and the radius." @@ -453,7 +453,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "3f07033a", + "id": "a30e3772", "metadata": {}, "outputs": [ { @@ -471,7 +471,7 @@ }, { "cell_type": "markdown", - "id": "8562d248", + "id": "1cb98586", "metadata": {}, "source": [ "Now all you need to do is find the reaction force and moment." 
@@ -480,7 +480,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "b1e567ef", + "id": "fcff960c", "metadata": {}, "outputs": [ { @@ -502,7 +502,7 @@ }, { "cell_type": "markdown", - "id": "11326469", + "id": "cfd917c3", "metadata": {}, "source": [ "### Another Example\n", @@ -520,7 +520,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "8e6ebddf", + "id": "6529deea", "metadata": {}, "outputs": [], "source": [ @@ -534,7 +534,7 @@ }, { "cell_type": "markdown", - "id": "c93d4d37", + "id": "61de859e", "metadata": {}, "source": [ "From these equations, you start by determining vector directions with unit vectors." @@ -543,7 +543,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "3b6ca8ee", + "id": "6285d769", "metadata": {}, "outputs": [], "source": [ @@ -564,7 +564,7 @@ }, { "cell_type": "markdown", - "id": "acffd406", + "id": "fd66d28f", "metadata": {}, "source": [ "This lets you represent the tension (T) and reaction (R) forces acting on the system as\n", @@ -645,7 +645,7 @@ }, { "cell_type": "markdown", - "id": "a95244a8", + "id": "19aaf05d", "metadata": {}, "source": [ "## Wrapping up\n", diff --git a/_sources/content/tutorial-style-guide.ipynb b/_sources/content/tutorial-style-guide.ipynb index 4ffa4e2d..089879df 100644 --- a/_sources/content/tutorial-style-guide.ipynb +++ b/_sources/content/tutorial-style-guide.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "58194c75", + "id": "14a5633b", "metadata": {}, "source": [ "# Learn to write a NumPy tutorial\n", @@ -13,7 +13,7 @@ }, { "cell_type": "markdown", - "id": "d654c07e", + "id": "219305aa", "metadata": {}, "source": [ "## What you'll do\n", @@ -125,7 +125,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "23bd30d3", + "id": "2df87e85", "metadata": {}, "outputs": [], "source": [ @@ -134,7 +134,7 @@ }, { "cell_type": "markdown", - "id": "1ac880fe", + "id": "3bcf73d6", "metadata": {}, "source": [ "
\n", @@ -151,7 +151,7 @@ }, { "cell_type": "markdown", - "id": "d4887ea9", + "id": "ac181934", "metadata": {}, "source": [ "***\n", diff --git a/_sources/content/tutorial-svd.ipynb b/_sources/content/tutorial-svd.ipynb index 24fbc1ca..aef11f96 100644 --- a/_sources/content/tutorial-svd.ipynb +++ b/_sources/content/tutorial-svd.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "459890f2", + "id": "a948799f", "metadata": {}, "source": [ "# Linear algebra on n-dimensional arrays" @@ -10,7 +10,7 @@ }, { "cell_type": "markdown", - "id": "1f1492f6", + "id": "c9939b53", "metadata": {}, "source": [ "## Prerequisites\n", @@ -39,7 +39,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "bee85dbc", + "id": "35da13c3", "metadata": {}, "outputs": [ { @@ -62,7 +62,7 @@ }, { "cell_type": "markdown", - "id": "21862caf", + "id": "f50839fd", "metadata": {}, "source": [ "**Note**: If you prefer, you can use your own image as you work through this tutorial. In order to transform your image into a NumPy array that can be manipulated, you can use the `imread` function from the [matplotlib.pyplot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.html#module-matplotlib.pyplot) submodule. Alternatively, you can use the [imageio.imread](https://imageio.readthedocs.io/en/stable/_autosummary/imageio.v3.imread.html) function from the `imageio` library. Be aware that if you use your own image, you'll likely need to adapt the steps below. For more information on how images are treated when converted to NumPy arrays, see [A crash course on NumPy for images](https://scikit-image.org/docs/stable/user_guide/numpy_images.html) from the `scikit-image` documentation." 
@@ -70,7 +70,7 @@ }, { "cell_type": "markdown", - "id": "c51f5d6f", + "id": "bedb5cac", "metadata": {}, "source": [ "Now, `img` is a NumPy array, as we can see when using the `type` function:" @@ -79,7 +79,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "dcbc9b04", + "id": "99c33a11", "metadata": {}, "outputs": [ { @@ -99,7 +99,7 @@ }, { "cell_type": "markdown", - "id": "d7ded4e4", + "id": "a0cfbc6c", "metadata": {}, "source": [ "We can see the image using the [matplotlib.pyplot.imshow](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.imshow.html#matplotlib.pyplot.imshow) function and the special IPython command, `%matplotlib inline`, to display plots inline:" @@ -108,7 +108,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "3b63c38a", + "id": "c93f1f0a", "metadata": {}, "outputs": [], "source": [ @@ -120,7 +120,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "157ff4bf", + "id": "c5327ea5", "metadata": {}, "outputs": [ { @@ -141,7 +141,7 @@ }, { "cell_type": "markdown", - "id": "0af828c4", + "id": "825a3dfd", "metadata": {}, "source": [ "### Shape, axis and array properties\n", @@ -154,7 +154,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "6328b47f", + "id": "b3602c9e", "metadata": {}, "outputs": [ { @@ -174,7 +174,7 @@ }, { "cell_type": "markdown", - "id": "694c08ba", + "id": "76bc6acd", "metadata": {}, "source": [ "The output is a [tuple](https://docs.python.org/dev/tutorial/datastructures.html#tut-tuples) with three elements, which means that this is a three-dimensional array. In fact, since this is a color image, and we have used the `imread` function to read it, the data is organized in three 2D arrays, representing color channels (in this case, red, green and blue - RGB). 
You can see this by looking at the shape above: it indicates that we have an array of 3 matrices, each having shape 768x1024.\n", @@ -185,7 +185,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "bf83ed39", + "id": "159c9896", "metadata": {}, "outputs": [ { @@ -205,7 +205,7 @@ }, { "cell_type": "markdown", - "id": "9cc79383", + "id": "0f2dba9d", "metadata": {}, "source": [ "NumPy refers to each dimension as an *axis*. Because of how `imread` works, the *first index in the 3rd axis* is the red pixel data for our image. We can access this by using the syntax" @@ -214,7 +214,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "056bbffb", + "id": "e4d04c73", "metadata": {}, "outputs": [ { @@ -240,7 +240,7 @@ }, { "cell_type": "markdown", - "id": "bd5708ed", + "id": "e4ff8d26", "metadata": {}, "source": [ "From the output above, we can see that every value in `img[:, :, 0]` is an integer value between 0 and 255, representing the level of red in each corresponding image pixel (keep in mind that this might be different if you\n", @@ -252,7 +252,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "b4b48951", + "id": "f8bd2e2f", "metadata": {}, "outputs": [ { @@ -272,7 +272,7 @@ }, { "cell_type": "markdown", - "id": "80176a43", + "id": "575ae5d4", "metadata": {}, "source": [ "Since we are going to perform linear algebra operations on this data, it might be more interesting to have real numbers between 0 and 1 in each entry of the matrices to represent the RGB values. We can do that by setting" @@ -281,7 +281,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "a459ee73", + "id": "a50b827d", "metadata": {}, "outputs": [], "source": [ @@ -290,7 +290,7 @@ }, { "cell_type": "markdown", - "id": "28c63388", + "id": "8aab1596", "metadata": {}, "source": [ "This operation, dividing an array by a scalar, works because of NumPy's [broadcasting rules](https://numpy.org/devdocs/user/theory.broadcasting.html#array-broadcasting-in-numpy). 
(Note that in real-world applications, it would be better to use, for example, the [img_as_float](https://scikit-image.org/docs/stable/api/skimage.html#skimage.img_as_float) utility function from `scikit-image`).\n", @@ -302,7 +302,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "6c4b913e", + "id": "9cbaf092", "metadata": {}, "outputs": [ { @@ -322,7 +322,7 @@ }, { "cell_type": "markdown", - "id": "23637951", + "id": "ca308f5e", "metadata": {}, "source": [ "or checking the type of data in the array:" @@ -331,7 +331,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "c9b630b6", + "id": "98bcbd1e", "metadata": {}, "outputs": [ { @@ -351,7 +351,7 @@ }, { "cell_type": "markdown", - "id": "fd0244f9", + "id": "4376510f", "metadata": {}, "source": [ "Note that we can assign each color channel to a separate matrix using the slice syntax:" @@ -360,7 +360,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "1ee6911b", + "id": "311131cd", "metadata": {}, "outputs": [], "source": [ @@ -371,7 +371,7 @@ }, { "cell_type": "markdown", - "id": "9691b633", + "id": "970e47ae", "metadata": {}, "source": [ "### Operations on an axis\n", @@ -381,7 +381,7 @@ }, { "cell_type": "markdown", - "id": "77025595", + "id": "6c82e460", "metadata": {}, "source": [ "**Note**: We will use NumPy's linear algebra module, [numpy.linalg](https://numpy.org/devdocs/reference/routines.linalg.html#module-numpy.linalg), to perform the operations in this tutorial. Most of the linear algebra functions in this module can also be found in [scipy.linalg](https://docs.scipy.org/doc/scipy/reference/linalg.html#module-scipy.linalg), and users are encouraged to use the [scipy](https://docs.scipy.org/doc/scipy/reference/index.html#module-scipy) module for real-world applications. However, some functions in the [scipy.linalg](https://docs.scipy.org/doc/scipy/reference/linalg.html#module-scipy.linalg) module, such as the SVD function, only support 2D arrays. 
For more information on this, check the [scipy.linalg page](https://docs.scipy.org/doc/scipy/tutorial/linalg.html)." @@ -389,7 +389,7 @@ }, { "cell_type": "markdown", - "id": "a0ec9ff4", + "id": "a62936f7", "metadata": {}, "source": [ "To proceed, import the linear algebra submodule from NumPy:" @@ -398,7 +398,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "f8a5e9d7", + "id": "67694655", "metadata": {}, "outputs": [], "source": [ @@ -407,7 +407,7 @@ }, { "cell_type": "markdown", - "id": "a65af595", + "id": "14cc8d79", "metadata": {}, "source": [ "In order to extract information from a given matrix, we can use the SVD to obtain 3 arrays which can be multiplied to obtain the original matrix. From the theory of linear algebra, given a matrix $A$, the following product can be computed:\n", @@ -427,7 +427,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "1fe22e2f", + "id": "97fb3722", "metadata": {}, "outputs": [], "source": [ @@ -436,7 +436,7 @@ }, { "cell_type": "markdown", - "id": "e82da843", + "id": "3f0b5a96", "metadata": {}, "source": [ "Now, `img_gray` has shape" @@ -445,7 +445,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "c5785166", + "id": "2465b9be", "metadata": {}, "outputs": [ { @@ -465,7 +465,7 @@ }, { "cell_type": "markdown", - "id": "d135ac1d", + "id": "daa28116", "metadata": {}, "source": [ "To see if this makes sense in our image, we should use a colormap from `matplotlib` corresponding to the color we wish to see in our image (otherwise, `matplotlib` will default to a colormap that does not correspond to the real data).\n", @@ -476,7 +476,7 @@ { "cell_type": "code", "execution_count": 16, - "id": "58d390bd", + "id": "048b3340", "metadata": {}, "outputs": [ { @@ -497,7 +497,7 @@ }, { "cell_type": "markdown", - "id": "846a460c", + "id": "f061195f", "metadata": {}, "source": [ "Now, applying the [linalg.svd](https://numpy.org/devdocs/reference/generated/numpy.linalg.svd.html#numpy.linalg.svd) function to this matrix, we 
obtain the following decomposition:" @@ -506,7 +506,7 @@ { "cell_type": "code", "execution_count": 17, - "id": "52a51bfb", + "id": "5e4ac792", "metadata": {}, "outputs": [], "source": [ @@ -515,7 +515,7 @@ }, { "cell_type": "markdown", - "id": "67ef9813", + "id": "b28457a4", "metadata": {}, "source": [ "**Note** If you are using your own image, this command might take a while to run, depending on the size of your image and your hardware. Don't worry, this is normal! The SVD can be a pretty intensive computation." @@ -523,7 +523,7 @@ }, { "cell_type": "markdown", - "id": "3c57311b", + "id": "19dbba08", "metadata": {}, "source": [ "Let's check that this is what we expected:" @@ -532,7 +532,7 @@ { "cell_type": "code", "execution_count": 18, - "id": "7db93a05", + "id": "5954315c", "metadata": {}, "outputs": [ { @@ -552,7 +552,7 @@ }, { "cell_type": "markdown", - "id": "91645ff2", + "id": "cafd00d4", "metadata": {}, "source": [ "Note that `s` has a particular shape: it has only one dimension. This means that some linear algebra functions that expect 2d arrays might not work. For example, from the theory, one might expect `s` and `Vt` to be\n", @@ -568,7 +568,7 @@ { "cell_type": "code", "execution_count": 19, - "id": "09b29ae5", + "id": "d67d84f3", "metadata": {}, "outputs": [], "source": [ @@ -580,7 +580,7 @@ }, { "cell_type": "markdown", - "id": "eb80aab2", + "id": "cd57079d", "metadata": {}, "source": [ "Now, we want to check if the reconstructed `U @ Sigma @ Vt` is close to the original `img_gray` matrix." 
@@ -588,7 +588,7 @@ }, { "cell_type": "markdown", - "id": "1b9c7148", + "id": "3e4bf84b", "metadata": {}, "source": [ "## Approximation\n", @@ -599,7 +599,7 @@ { "cell_type": "code", "execution_count": 20, - "id": "e63a4a8b", + "id": "b600a750", "metadata": {}, "outputs": [ { @@ -619,7 +619,7 @@ }, { "cell_type": "markdown", - "id": "a0aa435e", + "id": "6124a6fc", "metadata": {}, "source": [ "(The actual result of this operation might be different depending on your architecture and linear algebra setup. Regardless, you should see a small number.)\n", @@ -630,7 +630,7 @@ { "cell_type": "code", "execution_count": 21, - "id": "d39fa2eb", + "id": "e14eaf1a", "metadata": {}, "outputs": [ { @@ -650,7 +650,7 @@ }, { "cell_type": "markdown", - "id": "5abbe7d1", + "id": "ecd2eb7d", "metadata": {}, "source": [ "To see if an approximation is reasonable, we can check the values in `s`:" @@ -659,7 +659,7 @@ { "cell_type": "code", "execution_count": 22, - "id": "2bd21f5a", + "id": "2289f544", "metadata": {}, "outputs": [ { @@ -680,7 +680,7 @@ }, { "cell_type": "markdown", - "id": "dfcf425c", + "id": "5f088ea2", "metadata": {}, "source": [ "In the graph, we can see that although we have 768 singular values in `s`, most of those (after the 150th entry or so) are pretty small. 
So it might make sense to use only the information related to the first (say, 50) *singular values* to build a more economical approximation to our image.\n", @@ -693,7 +693,7 @@ { "cell_type": "code", "execution_count": 23, - "id": "772ae204", + "id": "22f55ffe", "metadata": {}, "outputs": [], "source": [ @@ -702,7 +702,7 @@ }, { "cell_type": "markdown", - "id": "9c122e33", + "id": "3d153b5b", "metadata": {}, "source": [ "we can build the approximation by doing" @@ -711,7 +711,7 @@ { "cell_type": "code", "execution_count": 24, - "id": "c0dabd9e", + "id": "9e6b6053", "metadata": {}, "outputs": [], "source": [ @@ -720,7 +720,7 @@ }, { "cell_type": "markdown", - "id": "155ad445", + "id": "0f3e98fb", "metadata": {}, "source": [ "Note that we had to use only the first `k` rows of `Vt`, since all other rows would be multiplied by the zeros corresponding to the singular values we eliminated from this approximation." @@ -729,7 +729,7 @@ { "cell_type": "code", "execution_count": 25, - "id": "1f0b0559", + "id": "2236e66c", "metadata": {}, "outputs": [ { @@ -750,7 +750,7 @@ }, { "cell_type": "markdown", - "id": "9b989d78", + "id": "1a77fd52", "metadata": {}, "source": [ "Now, you can go ahead and repeat this experiment with other values of `k`, and each of your experiments should give you a slightly better (or worse) image depending on the value you choose." @@ -758,7 +758,7 @@ }, { "cell_type": "markdown", - "id": "34ab2b17", + "id": "1b649823", "metadata": {}, "source": [ "### Applying to all colors\n", @@ -774,7 +774,7 @@ { "cell_type": "code", "execution_count": 26, - "id": "00164230", + "id": "0fbaa563", "metadata": {}, "outputs": [ { @@ -794,7 +794,7 @@ }, { "cell_type": "markdown", - "id": "61abb0b0", + "id": "2af8fd19", "metadata": {}, "source": [ "so we need to permute the axes of this array to get a shape like `(3, 768, 1024)`. 
Fortunately, the [numpy.transpose](https://numpy.org/devdocs/reference/generated/numpy.transpose.html#numpy.transpose) function can do that for us:\n", @@ -809,7 +809,7 @@ { "cell_type": "code", "execution_count": 27, - "id": "19bdc3e0", + "id": "9a343746", "metadata": {}, "outputs": [ { @@ -830,7 +830,7 @@ }, { "cell_type": "markdown", - "id": "263f772b", + "id": "651a38f8", "metadata": {}, "source": [ "Now we are ready to apply the SVD:" @@ -839,7 +839,7 @@ { "cell_type": "code", "execution_count": 28, - "id": "3e615a85", + "id": "bc262567", "metadata": {}, "outputs": [], "source": [ @@ -848,7 +848,7 @@ }, { "cell_type": "markdown", - "id": "70bd629a", + "id": "732d5eae", "metadata": {}, "source": [ "Finally, to obtain the full approximated image, we need to reassemble these matrices into the approximation. Now, note that" @@ -857,7 +857,7 @@ { "cell_type": "code", "execution_count": 29, - "id": "d67130f4", + "id": "cd7ea66c", "metadata": {}, "outputs": [ { @@ -877,7 +877,7 @@ }, { "cell_type": "markdown", - "id": "19d968dd", + "id": "3ff6c711", "metadata": {}, "source": [ "To build the final approximation matrix, we must understand how multiplication across different axes works." 
@@ -885,7 +885,7 @@ }, { "cell_type": "markdown", - "id": "5757a5ce", + "id": "a12efbb9", "metadata": {}, "source": [ "### Products with n-dimensional arrays\n", @@ -898,7 +898,7 @@ { "cell_type": "code", "execution_count": 30, - "id": "68898565", + "id": "7c01a24a", "metadata": {}, "outputs": [], "source": [ @@ -909,7 +909,7 @@ }, { "cell_type": "markdown", - "id": "808075eb", + "id": "9fc70b4e", "metadata": {}, "source": [ "Now, if we wish to rebuild the full SVD (with no approximation), we can do" @@ -918,7 +918,7 @@ { "cell_type": "code", "execution_count": 31, - "id": "226148ee", + "id": "aeb8294f", "metadata": {}, "outputs": [], "source": [ @@ -927,7 +927,7 @@ }, { "cell_type": "markdown", - "id": "d6cb332d", + "id": "ec5e6d3d", "metadata": {}, "source": [ "Note that" @@ -936,7 +936,7 @@ { "cell_type": "code", "execution_count": 32, - "id": "b86d37c9", + "id": "a6548a88", "metadata": {}, "outputs": [ { @@ -956,7 +956,7 @@ }, { "cell_type": "markdown", - "id": "9431564f", + "id": "21f84928", "metadata": {}, "source": [ "The reconstructed image should be indistinguishable from the original one, except for differences due to floating point errors from the reconstruction. Recall that our original image consisted of floating point values in the range `[0., 1.]`. 
The accumulation of floating point error from the reconstruction can result in values slightly outside this original range:" @@ -965,7 +965,7 @@ { "cell_type": "code", "execution_count": 33, - "id": "7f26281e", + "id": "36048dc3", "metadata": {}, "outputs": [ { @@ -985,7 +985,7 @@ }, { "cell_type": "markdown", - "id": "fa0dd632", + "id": "6ac87dc6", "metadata": {}, "source": [ "Since `imshow` expects values in the range, we can use `clip` to excise the floating point error:" @@ -994,7 +994,7 @@ { "cell_type": "code", "execution_count": 34, - "id": "ee30ae36", + "id": "94587bff", "metadata": {}, "outputs": [ { @@ -1016,7 +1016,7 @@ }, { "cell_type": "markdown", - "id": "ed742fe2", + "id": "1b7e7f4c", "metadata": {}, "source": [ "In fact, `imshow` performs this clipping under the hood, so if you skip the first line in the previous code cell, you might see a warning message saying `\"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\"`\n", @@ -1027,7 +1027,7 @@ { "cell_type": "code", "execution_count": 35, - "id": "d5afb7ae", + "id": "e2c5ced8", "metadata": {}, "outputs": [], "source": [ @@ -1036,7 +1036,7 @@ }, { "cell_type": "markdown", - "id": "285bcf31", + "id": "d4acfcff", "metadata": {}, "source": [ "You can see that we have selected only the first `k` components of the last axis for `Sigma` (this means that we have used only the first `k` columns of each of the three matrices in the stack), and that we have selected only the first `k` components in the second-to-last axis of `Vt` (this means we have selected only the first `k` rows from every matrix in the stack `Vt` and all columns). 
If you are unfamiliar with the ellipsis syntax, it is a\n", @@ -1048,7 +1048,7 @@ { "cell_type": "code", "execution_count": 36, - "id": "fba09909", + "id": "5a1e7733", "metadata": {}, "outputs": [ { @@ -1068,7 +1068,7 @@ }, { "cell_type": "markdown", - "id": "f014bf9e", + "id": "9b2c3d6e", "metadata": {}, "source": [ "which is not the right shape for showing the image. Finally, reordering the axes back to our original shape of `(768, 1024, 3)`, we can see our approximation:" @@ -1077,7 +1077,7 @@ { "cell_type": "code", "execution_count": 37, - "id": "661d94e3", + "id": "4de31fe2", "metadata": {}, "outputs": [ { @@ -1105,7 +1105,7 @@ }, { "cell_type": "markdown", - "id": "549e4a76", + "id": "812c2c85", "metadata": {}, "source": [ "Even though the image is not as sharp, using a small number of `k` singular values (compared to the original set of 768 values), we can recover many of the distinguishing features from this image." @@ -1113,7 +1113,7 @@ }, { "cell_type": "markdown", - "id": "58807099", + "id": "a5865386", "metadata": {}, "source": [ "### Final words\n", diff --git a/_sources/content/tutorial-x-ray-image-processing.ipynb b/_sources/content/tutorial-x-ray-image-processing.ipynb index 60a1995c..ce4dc1de 100644 --- a/_sources/content/tutorial-x-ray-image-processing.ipynb +++ b/_sources/content/tutorial-x-ray-image-processing.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "a5574dcf", + "id": "50699a4b", "metadata": {}, "source": [ "# X-ray image processing" @@ -10,7 +10,7 @@ }, { "cell_type": "markdown", - "id": "52909beb", + "id": "e21fb615", "metadata": {}, "source": [ "This tutorial demonstrates how to read and process X-ray images with NumPy,\n", @@ -53,7 +53,7 @@ }, { "cell_type": "markdown", - "id": "1d93809f", + "id": "1b4d28c7", "metadata": {}, "source": [ "## Prerequisites" @@ -61,7 +61,7 @@ }, { "cell_type": "markdown", - "id": "76f81b77", + "id": "be11ed0f", "metadata": {}, "source": [ "The reader should have some knowledge of 
Python, NumPy arrays, and Matplotlib.\n", @@ -91,7 +91,7 @@ }, { "cell_type": "markdown", - "id": "e12cd4fa", + "id": "33f7890a", "metadata": {}, "source": [ "## Table of contents" @@ -99,7 +99,7 @@ }, { "cell_type": "markdown", - "id": "db72d6e5", + "id": "145a5dc2", "metadata": {}, "source": [ "1. Examine an X-ray with `imageio`\n", @@ -114,7 +114,7 @@ }, { "cell_type": "markdown", - "id": "4c103bb4", + "id": "2e3916f1", "metadata": {}, "source": [ "## Examine an X-ray with `imageio`" @@ -122,7 +122,7 @@ }, { "cell_type": "markdown", - "id": "ccfc98b4", + "id": "f5dd91d8", "metadata": {}, "source": [ "Let's begin with a simple example using just one X-ray image from the\n", @@ -134,7 +134,7 @@ }, { "cell_type": "markdown", - "id": "b26c12ef", + "id": "a6a82c65", "metadata": {}, "source": [ "**1.** Load the image with `imageio`:" @@ -143,7 +143,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "34223f4e", + "id": "bdc60820", "metadata": {}, "outputs": [], "source": [ @@ -157,7 +157,7 @@ }, { "cell_type": "markdown", - "id": "64186b10", + "id": "11e779c9", "metadata": {}, "source": [ "**2.** Check that its shape is 1024x1024 pixels and that the array is made up of\n", @@ -167,7 +167,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "78b6e457", + "id": "8c22f227", "metadata": {}, "outputs": [ { @@ -186,7 +186,7 @@ }, { "cell_type": "markdown", - "id": "d5bf7db3", + "id": "e2214059", "metadata": {}, "source": [ "**3.** Import `matplotlib` and display the image in a grayscale colormap:" @@ -195,7 +195,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "8b275b29", + "id": "5bded5cb", "metadata": {}, "outputs": [ { @@ -219,7 +219,7 @@ }, { "cell_type": "markdown", - "id": "09bf0398", + "id": "b0fd1067", "metadata": {}, "source": [ "## Combine images into a multidimensional array to demonstrate progression" @@ -227,7 +227,7 @@ }, { "cell_type": "markdown", - "id": "e980a59e", + "id": "7a758935", "metadata": {}, "source": [ "In the next example, 
instead of 1 image you'll use 9 X-ray 1024x1024-pixel\n", @@ -242,7 +242,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "04a87f1d", + "id": "100cefc8", "metadata": {}, "outputs": [], "source": [ @@ -256,7 +256,7 @@ }, { "cell_type": "markdown", - "id": "bc4cb12d", + "id": "edc1cc45", "metadata": {}, "source": [ "**2.** Check the shape of the new X-ray image array containing 9 stacked images:" @@ -265,7 +265,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "8ea19522", + "id": "cd43711f", "metadata": {}, "outputs": [ { @@ -285,7 +285,7 @@ }, { "cell_type": "markdown", - "id": "9a52e248", + "id": "ab27def2", "metadata": {}, "source": [ "Note that the shape in the first dimension matches `num_imgs`, so the\n", @@ -298,7 +298,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "1d833428", + "id": "c7bd099d", "metadata": {}, "outputs": [ { @@ -322,7 +322,7 @@ }, { "cell_type": "markdown", - "id": "6b15099a", + "id": "d3c6bbcd", "metadata": {}, "source": [ "**4.** In addition, it can be helpful to show the progress as an animation.\n", @@ -333,7 +333,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "233b2835", + "id": "01d0813c", "metadata": {}, "outputs": [], "source": [ @@ -343,7 +343,7 @@ }, { "cell_type": "markdown", - "id": "890f697c", + "id": "d9525295", "metadata": {}, "source": [ "Which gives us:\n", @@ -353,7 +353,7 @@ }, { "cell_type": "markdown", - "id": "60cb78d5", + "id": "3f635219", "metadata": {}, "source": [ "When processing biomedical data, it can be useful to emphasize the 2D\n", @@ -365,7 +365,7 @@ }, { "cell_type": "markdown", - "id": "81022690", + "id": "a030ff59", "metadata": {}, "source": [ "### The Laplace filter with Gaussian second derivatives\n", @@ -382,7 +382,7 @@ }, { "cell_type": "markdown", - "id": "1e28216c", + "id": "d421ef2a", "metadata": {}, "source": [ "- The implementation of the Laplacian-Gaussian filter is relatively\n", @@ -395,7 +395,7 @@ { "cell_type": "code", "execution_count": 8, - "id": 
"6385e844", + "id": "e170ad5b", "metadata": {}, "outputs": [], "source": [ @@ -406,7 +406,7 @@ }, { "cell_type": "markdown", - "id": "f5788884", + "id": "b0c5f4b3", "metadata": {}, "source": [ "Display the original X-ray and the one with the Laplacian-Gaussian filter:" @@ -415,7 +415,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "dbd70272", + "id": "ac8f2b7f", "metadata": {}, "outputs": [ { @@ -443,7 +443,7 @@ }, { "cell_type": "markdown", - "id": "40e3697d", + "id": "a691abd8", "metadata": {}, "source": [ "### The Gaussian gradient magnitude method\n", @@ -458,7 +458,7 @@ }, { "cell_type": "markdown", - "id": "11f211f9", + "id": "ac941f34", "metadata": {}, "source": [ "**1.** Call [`scipy.ndimage.gaussian_gradient_magnitude()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.gaussian_gradient_magnitude.html)\n", @@ -469,7 +469,7 @@ { "cell_type": "code", "execution_count": 10, - "id": "d1109ac7", + "id": "dbb7927d", "metadata": {}, "outputs": [], "source": [ @@ -478,7 +478,7 @@ }, { "cell_type": "markdown", - "id": "bbf77a47", + "id": "4e1bdb81", "metadata": {}, "source": [ "**2.** Display the original X-ray and the one with the Gaussian gradient filter:" @@ -487,7 +487,7 @@ { "cell_type": "code", "execution_count": 11, - "id": "4e7ef163", + "id": "dc91cb77", "metadata": {}, "outputs": [ { @@ -515,7 +515,7 @@ }, { "cell_type": "markdown", - "id": "36dc84b1", + "id": "3b431093", "metadata": {}, "source": [ "### The Sobel-Feldman operator (the Sobel filter)\n", @@ -532,7 +532,7 @@ }, { "cell_type": "markdown", - "id": "145ea8d4", + "id": "4332b03a", "metadata": {}, "source": [ "**1.** Use the Sobel filters — ([`scipy.ndimage.sobel()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.sobel.html))\n", @@ -552,7 +552,7 @@ { "cell_type": "code", "execution_count": 12, - "id": "59a44db1", + "id": "23e1d229", "metadata": {}, "outputs": [], "source": [ @@ -566,7 +566,7 @@ }, { "cell_type": "markdown", - "id": "35bd21a4", + 
"id": "1218aef4", "metadata": {}, "source": [ "**2.** Change the new image array data type to the 32-bit floating-point format\n", @@ -577,7 +577,7 @@ { "cell_type": "code", "execution_count": 13, - "id": "01e83c23", + "id": "7114d71f", "metadata": {}, "outputs": [ { @@ -599,7 +599,7 @@ }, { "cell_type": "markdown", - "id": "22467796", + "id": "a6c2163e", "metadata": {}, "source": [ "**3.** Display the original X-ray and the one with the Sobel \"edge\" filter\n", @@ -610,7 +610,7 @@ { "cell_type": "code", "execution_count": 14, - "id": "17479a39", + "id": "7ceec05b", "metadata": {}, "outputs": [ { @@ -640,7 +640,7 @@ }, { "cell_type": "markdown", - "id": "8ca17e17", + "id": "fa1bcb6c", "metadata": {}, "source": [ "### The Canny filter\n", @@ -665,7 +665,7 @@ }, { "cell_type": "markdown", - "id": "8bebe546", + "id": "5472c44d", "metadata": {}, "source": [ "**1.** Use SciPy's Fourier filters — [`scipy.ndimage.fourier_gaussian()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.fourier_gaussian.html)\n", @@ -679,7 +679,7 @@ { "cell_type": "code", "execution_count": 15, - "id": "992b0444", + "id": "043d6757", "metadata": {}, "outputs": [ { @@ -705,7 +705,7 @@ }, { "cell_type": "markdown", - "id": "393ff101", + "id": "bf52e31f", "metadata": {}, "source": [ "**2.** Plot the original X-ray image and the ones with the edges detected with\n", @@ -716,7 +716,7 @@ { "cell_type": "code", "execution_count": 16, - "id": "f7c97970", + "id": "939ffb33", "metadata": {}, "outputs": [ { @@ -748,7 +748,7 @@ }, { "cell_type": "markdown", - "id": "ad1e4ff7", + "id": "7ae4e4c7", "metadata": {}, "source": [ "## Apply masks to X-rays with `np.where()`" @@ -756,7 +756,7 @@ }, { "cell_type": "markdown", - "id": "e17eb8a3", + "id": "8d348f50", "metadata": {}, "source": [ "To screen out only certain pixels in X-ray images to help detect particular\n", @@ -771,7 +771,7 @@ }, { "cell_type": "markdown", - "id": "1dd89678", + "id": "6233eb07", "metadata": {}, "source": [ "**1.** 
Retrieve some basic statistics about the pixel values in the original\n", @@ -781,7 +781,7 @@ { "cell_type": "code", "execution_count": 17, - "id": "b691a8fc", + "id": "4a230010", "metadata": {}, "outputs": [ { @@ -806,7 +806,7 @@ }, { "cell_type": "markdown", - "id": "e58b6a27", + "id": "23e02d5e", "metadata": {}, "source": [ "**2.** The array data type is `uint8` and the minimum/maximum value results\n", @@ -818,7 +818,7 @@ { "cell_type": "code", "execution_count": 18, - "id": "37136458", + "id": "2609c92c", "metadata": {}, "outputs": [ { @@ -844,7 +844,7 @@ }, { "cell_type": "markdown", - "id": "766719e3", + "id": "c0285df2", "metadata": {}, "source": [ "As the pixel intensity distribution suggests, there are many low (between around\n", @@ -858,7 +858,7 @@ { "cell_type": "code", "execution_count": 19, - "id": "e625bd26", + "id": "3f1891bd", "metadata": {}, "outputs": [ { @@ -885,7 +885,7 @@ { "cell_type": "code", "execution_count": 20, - "id": "cb2420dd", + "id": "28aba4d9", "metadata": {}, "outputs": [ { @@ -911,7 +911,7 @@ }, { "cell_type": "markdown", - "id": "266e9a88", + "id": "33969ae8", "metadata": {}, "source": [ "## Compare the results" @@ -919,7 +919,7 @@ }, { "cell_type": "markdown", - "id": "92c8bb80", + "id": "6dfdbbf7", "metadata": {}, "source": [ "Let's display some of the results of processed X-ray images you've worked with\n", @@ -929,7 +929,7 @@ { "cell_type": "code", "execution_count": 21, - "id": "9c7446cb", + "id": "8e72b3e4", "metadata": {}, "outputs": [ { @@ -971,7 +971,7 @@ }, { "cell_type": "markdown", - "id": "8ba3a99f", + "id": "cba10f78", "metadata": {}, "source": [ "## Next steps" @@ -979,7 +979,7 @@ }, { "cell_type": "markdown", - "id": "294a8ca9", + "id": "04cf0432", "metadata": {}, "source": [ "If you want to use your own samples, you can use\n", diff --git a/applications.html b/applications.html index 6e90c427..fdbe9aa8 100644 --- a/applications.html +++ b/applications.html @@ -278,23 +278,6 @@ -
  • Calculating the historical growth curve for transistors
    19200000000.0 250000000.0 7050000000.0
  • -<matplotlib.legend.Legend at 0x7fb56152dc30>
    +<matplotlib.legend.Legend at 0x7f3fa9b6e6e0>
    ../_images/de56d8475573db439c81b6991f8986a681fc818dc513671d95136628a18a7c70.png diff --git a/content/pairing.html b/content/pairing.html index 9cbeab60..fc68f2e4 100644 --- a/content/pairing.html +++ b/content/pairing.html @@ -307,23 +307,6 @@ -
  • Calculating the test statistics
    -The t value is -9.904282450791534 and the p value is 4.1027278384837894e-11.
    +The t value is -7.692321175384364 and the p value is 8.77879892721121e-09.
    diff --git a/content/tutorial-deep-learning-on-mnist.html b/content/tutorial-deep-learning-on-mnist.html index 28296f65..a1869697 100644 --- a/content/tutorial-deep-learning-on-mnist.html +++ b/content/tutorial-deep-learning-on-mnist.html @@ -307,23 +307,6 @@ -
  • Fitting Data -