diff --git a/book/_toc.yml b/book/_toc.yml index 5a5a08f..bd5d821 100644 --- a/book/_toc.yml +++ b/book/_toc.yml @@ -48,6 +48,8 @@ parts: title: SWESARR sections: - file: tutorials/swesarr/swesarr_tut + - file: tutorials/GPR_Lidar_HackweekTutorial + title: GPR and Lidar for Snow Parameter Retrieval - file: tutorials/albedo/index title: Albedo sections: diff --git a/book/tutorials/GPR_Lidar_Figs/HackweekGraphicalAbstract_GPR_Lidar_v2.png b/book/tutorials/GPR_Lidar_Figs/HackweekGraphicalAbstract_GPR_Lidar_v2.png new file mode 100644 index 0000000..ea940ad Binary files /dev/null and b/book/tutorials/GPR_Lidar_Figs/HackweekGraphicalAbstract_GPR_Lidar_v2.png differ diff --git a/book/tutorials/GPR_Lidar_Figs/MeehanFigure.png b/book/tutorials/GPR_Lidar_Figs/MeehanFigure.png new file mode 100644 index 0000000..e13fcdf Binary files /dev/null and b/book/tutorials/GPR_Lidar_Figs/MeehanFigure.png differ diff --git a/book/tutorials/GPR_Lidar_Figs/aso_figure.png b/book/tutorials/GPR_Lidar_Figs/aso_figure.png new file mode 100644 index 0000000..f72ad8f Binary files /dev/null and b/book/tutorials/GPR_Lidar_Figs/aso_figure.png differ diff --git a/book/tutorials/GPR_Lidar_Figs/radargrams.png b/book/tutorials/GPR_Lidar_Figs/radargrams.png new file mode 100644 index 0000000..617e253 Binary files /dev/null and b/book/tutorials/GPR_Lidar_Figs/radargrams.png differ diff --git a/book/tutorials/GPR_Lidar_HackweekTutorial.ipynb b/book/tutorials/GPR_Lidar_HackweekTutorial.ipynb new file mode 100644 index 0000000..eff75a9 --- /dev/null +++ b/book/tutorials/GPR_Lidar_HackweekTutorial.ipynb @@ -0,0 +1,661 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# SnowEx Hackweek - GPR and Lidar Tutorial\n", + "\n", + "## Author: Randall Bonnell\n", + "\n", + "## Outline:\n", + "1. GPR Methods for the Retrieval of Snow Depth and SWE\n", + "2. Lidar Methods for Snow Depth Retrieval and SWE Estimation\n", + "3. Leveraging Coincident GPR and Lidar Data Sets to Derive Snow Density\n", + "4. SnowEx23 GPR/Lidar Derived Permittivities/Densities in the Boreal Forest, Alaska\n", + "5. Discussion: Improving Density Estimation\n", + "6. GPR SnowEx Analysis-Ready Datasets\n", + "7. References" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Title Card](./GPR_Lidar_Figs/HackweekGraphicalAbstract_GPR_Lidar_v2.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 1. 
GPR Methods for the Retrieval of Snow Depth and SWE\n", + "\n", + "### Previous GPR tutorial developed by Tate Meehan (CRREL) that may be of interest: https://snowex-2021.hackweek.io/tutorials/gpr/gpr.html\n", + "\n", + "### SnowEx Review\n", + "- Ground-based, airborne, and satellite radars were operated as part of the NASA SnowEx campaigns.\n", + "- Ground-based radars included ground-penetrating radar (GPR), frequency-modulated continuous-wave (FMCW) radar, and tower-mounted radars.\n", + "- What airborne and satellite radars were tasked?\n", + "\n", + "### A Brief Blurb on Radar Physics\n", + "- Dry snow is largely transparent to radar, but there is frequency-dependent interaction between the radar signal and the snowpack.\n", + "- At L-band frequencies (1–2 GHz, ~25 cm wavelength), there is limited to no interaction with the snowpack.\n", + "\n", + "### What is GPR?\n", + "- We use L-band GPR, which was operated during all SnowEx campaigns!\n", + "- GPR transmits a radar signal into the snowpack, which then reflects off objects/interfaces with contrasting dielectric permittivity. The GPR records the amplitude and two-way travel time (twt) of the reflections.\n", + "- Dielectric permittivity is the material property that defines how EM energy transmits through a medium such as the snowpack.\n", + "- Usually, we are interested in the snow-ground interface, and we measure the snowpack thickness in twt (nanoseconds).\n", + "- However, in complex vegetation, radargrams are difficult to interpret, which causes increased uncertainty!\n", + "- See radargram examples below for the boreal forest GPR surveys (credit Kajsa Holland-Goon)." + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Radargram Examples](./GPR_Lidar_Figs/radargrams.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Snow Depth, SWE, and Density Calculations\n", + "- To calculate snow depth $(d_s)$ from twt, we need to estimate the relative permittivity $(\epsilon_s)$ and radar velocity $(v_s)$ of the snowpack:\n", + "  - $v_s = \frac{c}{\sqrt{\epsilon_s}}$; --> where $c$ is the velocity of EM energy in a vacuum.\n", + "  - $\epsilon_s = (1+\frac{0.845\rho_s}{1000})^2$; --> Kovacs et al. (1995), with $\rho_s$ in kg m$^{-3}$; more than 19 such equations exist for dry snow conditions.\n", + "  - $d_s = \frac{twt}{2}v_s$;\n", + "  - $SWE = d_s\rho_s$; --> where SWE is snow water equivalent.\n", + "- But... if we know the snow depth, we can constrain the radar velocity and estimate relative permittivity and density!\n", + "  - $\epsilon_s=(\frac{c*twt}{2d_s})^2$\n", + "  - $\rho_s=(\sqrt{\epsilon_s}-1)\frac{1000}{0.845}$\n", + "  - How can we find the snow depth?\n", + "\n", + "### A Shameless Plug...\n", + "- Most analysis-ready GPR products have twt, snow depth, and snow water equivalent. Some have been updated with derived snow densities. See _6. SnowEx GPR Analysis-Ready Datasets_ below." + ] + },
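+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### A quick numerical sketch of these equations\n", + "- Below is a minimal sketch of the forward calculation (twt --> snow depth and SWE) using the equations above. The twt and density values are made-up example numbers for illustration, not SnowEx measurements." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Forward sketch: twt --> snow depth and SWE (example numbers only)\n", + "import numpy as np\n", + "\n", + "c = 0.2998 #The speed of light in a vacuum (m/ns)\n", + "twt = 2.4 #Example two-way travel time (ns)\n", + "rho_s = 250.0 #Assumed snow density (kg m^-3)\n", + "\n", + "e_s = (1 + 0.845*rho_s/1000)**2 #Kovacs et al. (1995)\n", + "v_s = c/np.sqrt(e_s) #Radar velocity (m/ns)\n", + "d_s = (twt/2)*v_s #Snow depth (m)\n", + "swe = d_s*rho_s #SWE (kg m^-2, i.e., mm w.e.)\n", + "print(f'e_s={e_s:.2f}, v_s={v_s:.3f} m/ns, d_s={d_s:.2f} m, SWE={swe:.0f} mm')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [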
+ "## 2. Lidar Methods for Snow Depth Retrieval and SWE Estimation\n", + "### Previous lidar tutorial developed by Naheem Adebisi (ESRI) that may be of interest: https://snowex-2022.hackweek.io/tutorials/lidar/index.html\n", + "\n", + "### A (Very) General Review of Lidar\n", + "- Lidar emits photons and measures the twt of the returned photons.\n", + "- The twt measurements are converted to elevation surfaces (e.g., DEM, DTM, DSM).\n", + "- Lidar can be collected from a variety of platforms:\n", + "  - Terrestrial\n", + "  - UAV\n", + "  - Airborne\n", + "  - Satellite\n", + "- Two acquisitions are required for snow: a snow-on acquisition and a snow-off acquisition. Snow depth can be calculated in two general ways:\n", + "  - Raster-based approaches (see figure below, credit Airborne Snow Observatories Inc.)\n", + "  - Point cloud approaches\n", + "\n", + "### How is SWE calculated from lidar snow depths?\n", + "- At larger scales, SWE is calculated via modeled densities (e.g., M3 Works and ASO).\n", + "- At smaller field sites, it may be appropriate to use representative in situ measurements." + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![ASO Lidar Figure](./GPR_Lidar_Figs/aso_figure.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 3. Leveraging Coincident GPR and Lidar Data Sets to Derive Snow Density\n", + "- Density, liquid water content, and relative permittivity are understudied relative to snow depth and/or SWE.\n", + "- Combining coincident snow depths and twt can yield spatially distributed measurements of relative permittivity.\n", + "  - In wet snow, relative permittivity can be converted to liquid water content (e.g., Webb et al., 2018, 2020, 2022; Bonnell et al., 2021).\n", + "  - In dry snow, density can be estimated from the relative permittivity (Yildiz et al., 2021; McGrath et al., 2022; Bonnell et al., 2023; Meehan et al., 2024).\n", + "- This technique has provided an unprecedented glimpse into the spatial properties of these parameters!\n", + "- Critically, studies have noted a large random error in derived products that should be considered (see figure below, credit: Meehan et al., 2024).\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Meehan2024](./GPR_Lidar_Figs/MeehanFigure.png)" + ] + },
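+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### How large is that random error? A Monte Carlo sketch\n", + "- Below is a minimal sketch of how measurement noise propagates into derived density. The \"true\" values and 1-sigma uncertainties are made-up but plausible numbers for illustration only; see Meehan et al. (2024) for a rigorous treatment. Note how large the density spread becomes for a shallow snowpack." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Monte Carlo sketch: propagate depth/twt noise into derived density\n", + "import numpy as np\n", + "\n", + "rng = np.random.default_rng(42)\n", + "c = 0.2998 #The speed of light in a vacuum (m/ns)\n", + "n = 100000 #Number of Monte Carlo samples\n", + "\n", + "d_true, sigma_d = 1.0, 0.10 #Lidar snow depth (m) and assumed 1-sigma error\n", + "twt_true, sigma_t = 8.6, 0.40 #GPR twt (ns) and assumed 1-sigma error\n", + "\n", + "d = d_true + sigma_d*rng.standard_normal(n)\n", + "twt = twt_true + sigma_t*rng.standard_normal(n)\n", + "\n", + "e_s = ((c*twt)/(2*d))**2 #Relative permittivity\n", + "rho_s = ((np.sqrt(e_s) - 1)/0.845)*1000 #Density (kg m^-3)\n", + "print(f'density: {rho_s.mean():.0f} +/- {rho_s.std():.0f} kg m^-3')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [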
+ "## 4. SnowEx23 GPR/Lidar Derived Permittivities/Densities in the Boreal Forest, Alaska\n", + "\n", + "### Here, we will use this approach to derive densities at Farmer's Loop/Creamer's Field during the SnowEx23 Alaska Campaign\n", + "- Lidar data were collected on 11 March 2023\n", + "- GPR data were collected on 7, 11, 13, and 16 March 2023\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#1.1 Load relevant packages\n", + "import os\n", + "import numpy as np\n", + "from datetime import date\n", + "from scipy.spatial import cKDTree\n", + "\n", + "#packages for figures\n", + "import matplotlib.pyplot as plt\n", + "from rasterio.plot import show\n", + "\n", + "#geospatial packages\n", + "import geopandas as gpd #for vector data\n", + "import xarray as xr\n", + "import rioxarray #for raster data\n", + "import pandas as pd\n", + "from shapely.geometry import box, Point\n", + "import rasterio as rio\n", + "\n", + "#Import SnowEx database\n", + "from snowexsql.api import PointMeasurements, LayerMeasurements, RasterMeasurements\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Part 1: Load the GPR data from the SnowEx database\n", + "- Huge thank you to Micah Johnson and Micah Sandusky for their support!\n", + "- Note that if we used the full GPR/lidar dataset, we would need to allocate far more memory. This example focuses on a single date of collection in very dense forest.\n", + "- Examine the headers from the GPR dataframe --> which variables are we interested in?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 1.2 Load GPR data\n", + "\n", + "#Note, memory space is fairly limited, so we will pull only one date\n", + "\n", + "#Set a number of dates to pull GPR for\n", + "#dt1 = date(2023, 3, 7)\n", + "dt2 = date(2023, 3, 11)\n", + "#dt3 = date(2023, 3, 13)\n", + "#dt4 = date(2023, 3, 16)\n", + "\n", + "#site1 = LayerMeasurements.from_filter(date=dt1, site_name='Fairbanks', site_id='FLCF', limit=1)\n", + "site2 = LayerMeasurements.from_filter(date=dt2, site_name='Fairbanks', site_id='FLCF', limit=1)\n", + "#site3 = LayerMeasurements.from_filter(date=dt3, site_name='Fairbanks', site_id='FLCF', limit=1)\n", + "#site4 = LayerMeasurements.from_filter(date=dt4, site_name='Fairbanks', site_id='FLCF', limit=1)\n", + "\n", + "#Query the GPR point measurements from the database (returned as a geodataframe)\n", + "#gpr_df_dt1 = PointMeasurements.from_area(pt=site1.geometry[0], crs=26906, buffer=10000,\n", + "# type='two_way_travel',\n", + "# observers='Randall Bonnell',\n", + "# date=dt1, site_name='farmers-creamers',\n", + "# limit=29432)#The number of expected measurements\n", + "gpr_df_dt2 = PointMeasurements.from_area(pt=site2.geometry[0], crs=26906, buffer=10000,\n", + " type='two_way_travel',\n", + " observers='Randall Bonnell',\n", + " date=dt2, site_name='farmers-creamers',\n", + " limit=20213)#The number of expected measurements\n", + "#gpr_df_dt3 = PointMeasurements.from_area(pt=site3.geometry[0], crs=26906, buffer=10000,\n", + "# type='two_way_travel',\n", + "# observers='Randall Bonnell',\n", + "# date=dt3, site_name='farmers-creamers',\n", + "# limit=19024)\n", + "#gpr_df_dt4 = PointMeasurements.from_area(pt=site4.geometry[0], crs=26906, buffer=10000,\n", + "# type='two_way_travel',\n", + "# observers='Randall Bonnell',\n", + "# date=dt4, site_name='farmers-creamers',\n", + "# limit=15785)\n", + "\n", + "#Compile into one dataframe\n",
+ "#flcf_gpr_df = pd.concat([gpr_df_dt1, gpr_df_dt2, gpr_df_dt3, gpr_df_dt4], axis=0, ignore_index=True)\n", + "flcf_gpr_df = gpr_df_dt2\n", + "#Print out the dataframe headers and initial entries --> What's important here and what do we need?\n", + "print(flcf_gpr_df.head())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Let's look at the distribution of GPR two-way travel times and estimated snow depths\n", + "#Estimate snow depths from twt by assuming a velocity of 0.25 m/ns --> Is this an appropriate velocity estimate? (A quick check appears just before code block 1.3 below.)\n", + "flcf_gpr_df['Depth_estimated'] = (flcf_gpr_df['value']/2)*0.25\n", + "\n", + "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))\n", + "flcf_gpr_df['value'].plot.hist(ax=ax1, edgecolor='black', title='two-way travel time (ns)')\n", + "flcf_gpr_df['Depth_estimated'].plot.hist(ax=ax2, edgecolor='black', title='Snow depth (m)')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Extract x/y limits from GPR data --> these will be used when loading the lidar snow depths\n", + "bounds = flcf_gpr_df.total_bounds\n", + "\n", + "# Create a bounding box\n", + "gpr_limits = box(*bounds)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Let's load in the lidar canopy heights and snow depths.\n", + "- We'll look at the canopy heights to get an idea of what kind of forest the data were collected in.\n", + "- Then, we'll look at the lidar snow depths to visualize the snow distribution.\n", + "\n", + "### Discussion questions:\n", + "- What type of survey design was implemented for the GPR?\n", + "- Do the lidar snow depth patterns seem to exhibit any kind of dependence upon the forest cover?" + ] + },
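+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### A quick check on the 0.25 m/ns velocity assumption\n", + "- Before loading the lidar, a minimal sketch using the Kovacs et al. (1995) relation from Section 1. The density range below is assumed for illustration, not measured.\n", + "- Under this relation, v = 0.25 m/ns corresponds to a density near 235 kg m$^{-3}$." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Velocity check: which dry-snow densities map to which radar velocities? (Kovacs et al., 1995)\n", + "c = 0.2998 #The speed of light in a vacuum (m/ns)\n", + "rho_check = np.array([150.0, 250.0, 350.0, 450.0]) #Assumed dry-snow densities (kg m^-3)\n", + "e_s_check = (1 + 0.845*rho_check/1000)**2 #Relative permittivity\n", + "v_s_check = c/np.sqrt(e_s_check) #Radar velocity (m/ns)\n", + "for r, v in zip(rho_check, v_s_check):\n", + "    print(f'rho={r:.0f} kg m^-3 --> v_s={v:.3f} m/ns')\n" + ] + },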
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 1.3 Load Lidar vegetation/canopy heights --> This may take a few minutes\n", + "#Read in the canopy heights raster from Farmer's Loop/Creamer's Field Alaska\n", + "flcf_ch = RasterMeasurements.from_area(shp = gpr_limits, crs=26906,\n", + " buffer=None, type='canopy_height',\n", + " site_name='farmers-creamers',\n", + " observers='chris larsen')\n", + "print(flcf_ch)\n", + "\n", + "#Plot the datasets\n", + "fig, ax = plt.subplots()\n", + "show(flcf_ch, ax=ax, cmap='Greens', clim=(0,5), title = 'Canopy Height (m)')\n", + "#Plot the GPR points on top\n", + "flcf_gpr_df.plot(ax=ax, color='blue', markersize = 10)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 1.4 Load Lidar Snow depths --> This will take a few minutes\n", + "\n", + "#Read in the snow depth raster from Farmer's Loop/Creamer's Field Alaska\n", + "flcf_ds = RasterMeasurements.from_area(shp = gpr_limits, crs=26906,\n", + " buffer=None, type='depth',\n", + " site_name='farmers-creamers',\n", + " observers='chris larsen')\n", + "\n", + "#Plot the datasets\n", + "fig, ax = plt.subplots()\n", + "show(flcf_ds, ax=ax, cmap='Blues', clim=(0,1.5), title='Snow Depth (m)')\n", + "#Plot the GPR points on top\n", + "flcf_gpr_df.plot(ax=ax, color='red', markersize = 10)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Part 2: Match the GPR data to the lidar grid and derive relative permittivity and density\n", + "- There are two conceptual paths forward:\n", + "  - Rasterize the GPR data, or\n", + "  - Vectorize the lidar data\n", + "- For simplicity, the following code:\n", + "  - vectorizes the lidar data\n", + "  - performs a nearest neighbor search between the lidar and GPR coordinate vectors\n", + "  - calculates the median GPR twt from the nearest neighbors\n", + "  - derives relative permittivity and density from the lidar snow depths and median twt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### We need to know the resolutions of the lidar and GPR datasets\n", + "\n", + "- The GPR dataset consists of points that are spaced ~0.10 m apart.\n", + "- What about the lidar? Run code block 2.1 below to answer this question.\n", + "- How many GPR points would you expect to have per lidar pixel? Assume linear transects through each pixel. (A back-of-the-envelope sketch follows.)" + ] + },
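+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### A back-of-the-envelope sketch for the last question\n", + "- The 0.5 m pixel size below is an assumption for illustration only; code block 2.1 shows how to check the actual resolution." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Rough points-per-pixel estimate for a linear transect through one pixel\n", + "gpr_spacing = 0.10 #Approximate GPR point spacing (m)\n", + "pixel_size = 0.5 #Assumed lidar pixel size (m) --> verify with code block 2.1\n", + "print(f'~{pixel_size/gpr_spacing:.0f} GPR points per lidar pixel')\n" + ] + },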
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#2.1 Let's learn a bit about the resolution of the lidar rasters\n", + "\n", + "height, width = flcf_ds.read(1).shape #Find the height and width of the array\n", + "\n", + "#Use meshgrid to create two arrays matching the height/width of the input raster\n", + "#The GPR dataset consists of vectors --> we will eventually need to vectorize these lidar arrays\n", + "cols, rows = np.meshgrid(np.arange(width), np.arange(height))\n", + "\n", + "#Extract the easting/northing from the raster\n", + "x_lidar, y_lidar = rio.transform.xy(flcf_ds.transform, rows, cols)\n", + "\n", + "#What's the resolution of the lidar dataset?\n", + "print(\"The x resolution of the snow depth raster is:\",x_lidar[0][1]-x_lidar[0][0])\n", + "print(\"The y resolution of the snow depth raster is:\",y_lidar[0][0]-y_lidar[1][0])\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 2.2 Matching GPR to the lidar grid\n", + "\n", + "#Two conceptual paths forward: rasterize the GPR data, or convert lidar data to points\n", + "\n", + "#Let's vectorize the raster data\n", + "x_lidar_vec = np.array(x_lidar).flatten()\n", + "y_lidar_vec = np.array(y_lidar).flatten()\n", + "flcf_ds_vec = flcf_ds.read().flatten()\n", + "\n", + "#Pull vectors from the geodataframe\n", + "gpr_arr = np.stack([flcf_gpr_df.geometry.x, flcf_gpr_df.geometry.y, flcf_gpr_df['value']], axis=1)\n", + "gpr_x = gpr_arr[:,0]\n", + "gpr_y = gpr_arr[:,1]\n", + "gpr_twt = gpr_arr[:,2].reshape(-1,1)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#2.3 Create sets of coordinates for the nearest neighbors search\n", + "coordinates_set1 = np.column_stack((x_lidar_vec,y_lidar_vec))\n", + "coordinates_set2 = np.column_stack((gpr_x,gpr_y))\n", + "\n", + "# Build KDTree from the second set of coordinates\n", + "tree = cKDTree(coordinates_set2)\n", + "\n", + "# Define the radius (in meters)\n", + "radius = 0.25\n", + "\n", + "# Function to find the median of travel times within a radius --> Credit where credit is due, this function was generated in part by ChatGPT\n", + "def find_median_travel_time_within_radius(point, tree, gpr_twt, radius):\n", + "    indices = tree.query_ball_point(point, radius)\n", + "    if indices:\n", + "        # Retrieve travel times for the nearest neighbors\n", + "        neighbor_twt = gpr_twt[indices]\n", + "        return np.median(neighbor_twt)\n", + "    else:\n", + "        return np.nan # Return NaN if no neighbors are within the radius\n", + "\n", + "# Find medians for each lidar point\n", + "medians = np.array([find_median_travel_time_within_radius(point, tree, gpr_twt, radius) for point in coordinates_set1])\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The GPR data are not as spatially continuous as the lidar data, so most of the median twt dataset consists of NaNs\n", + "- Let's remove the NaNs to free up memory and reduce processing time." + ] + },
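+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Optional aside: a vectorized version of 2.3\n", + "- The point-by-point loop in 2.3 can be slow for large rasters. cKDTree.query_ball_point also accepts an array of query points and can parallelize the tree queries across cores (the workers argument assumes a reasonably recent SciPy). A sketch that should reproduce the medians array:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Vectorized sketch: query all lidar points at once, then take per-point medians\n", + "neighbor_lists = tree.query_ball_point(coordinates_set1, r=radius, workers=-1)\n", + "#Median twt per lidar point (NaN where no GPR points fall within the radius)\n", + "medians_alt = np.array([np.median(gpr_twt[idx]) if idx else np.nan for idx in neighbor_lists])\n" + ] + },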
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#At this point, all lidar points should have an associated GPR twt --> most are likely NaNs though. But let's check!\n", + "print(\"The gpr array has size:\",medians.shape)\n", + "print(\"The lidar array has size:\",flcf_ds_vec.shape)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#2.4 Before we get to the math part, let's clear out the NaNs from all important vectors:\n", + "#Create mask for GPR medians that are NaNs\n", + "mask = np.isnan(medians)\n", + "\n", + "#Remove entries from the lidar snow depth, x, and y vectors that align with the NaN twt values\n", + "flcf_ds_vec_clean = flcf_ds_vec[~mask]\n", + "coordinates_set1_clean = coordinates_set1[~mask]\n", + "\n", + "#Lastly, remove entries from the twt medians\n", + "medians_clean = medians[~mask]\n", + "\n", + "#Let's check the new size of the twt array\n", + "print(medians_clean.shape)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Discussion questions:\n", + "- Roughly, how many points were removed?\n", + "- When we are done, we will have derived 3788 snow density estimates. In the same area, about four snow pits were dug, resulting in four bulk density measurements. How useful do you think our data will be?\n", + "- Is more always better?\n", + "\n", + "### Let's now transition to the relative permittivity and density calculations" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#2.5 We finally get to the math part!!\n", + "#Let's calculate relative permittivity first...\n", + "c = 0.2998 #The speed of light in a vacuum (m/ns)\n", + "e_s = ((c * medians_clean) / (2 * flcf_ds_vec_clean)) ** 2\n", + "\n", + "#And then calculate density\n", + "rho_s = ((np.sqrt(e_s) - 1) / 0.845) * 1000" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Part 3: Examining the derived densities" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 3.1 Finally, let's take a peek at what the derived densities look like...\n", + "plt.figure()\n", + "plt.scatter(coordinates_set1_clean[:,0], coordinates_set1_clean[:,1], s=10, c=rho_s, cmap='viridis', clim=(0, 500), edgecolor=None)\n", + "\n", + "# Add colorbar to show the scale of color values\n", + "plt.colorbar()\n", + "plt.title('Snow Density (kg m$^{-3}$)')\n", + "\n", + "# Show the plot\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 3.2 What does the histogram distribution look like??\n", + "# Define bin edges\n", + "bin_edges = np.arange(np.min(rho_s), np.max(rho_s), 25) # Create bin edges from min(rho_s) to max(rho_s) with step size 25\n", + "\n", + "# Create the histogram\n", + "plt.figure() # Create a new figure\n", + "plt.hist(rho_s, bins=bin_edges, edgecolor=None) # Plot histogram with specified bin edges\n", + "\n", + "plt.title('Snow Density Histogram')\n", + "\n", + "# Show the plot\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Let's zoom in a little...\n", + "# Define bin edges\n", + "bin_edges = np.arange(0, 500, 25) # Create bin edges from 0 to 500 with step size 25\n", + "\n", + "# Create the histogram\n", + "plt.figure() # Create a new figure\n", + "plt.hist(rho_s, bins=bin_edges, edgecolor='black') # Plot histogram with specified bin edges\n", + "\n", + "plt.title('Snow Density Histogram')\n", + "\n", + "# Show the plot\n", + "plt.show()" + ] + },
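+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### A quick sanity filter\n", + "- Below is a sketch previewing the clean-up discussed in Section 5: dry-snow relative permittivity should fall between that of air $(\epsilon_a = 1.0)$ and roughly 2.0, so values outside those bounds can be masked. The 2.0 cutoff is a judgment call for dry snow, not a hard physical limit." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "#Mask physically implausible permittivities --> dry snow is roughly 1.0 < e_s < 2.0\n", + "qc = (e_s > 1.0) & (e_s < 2.0)\n", + "rho_s_qc = rho_s[qc]\n", + "print(f'kept {qc.sum()} of {qc.size} derived densities')\n", + "\n", + "plt.figure()\n", + "plt.hist(rho_s_qc, bins=np.arange(0, 500, 25), edgecolor='black')\n", + "plt.title('Snow Density Histogram (after permittivity QC)')\n", + "plt.show()" + ] + },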
"cell_type": "markdown", + "metadata": {}, + "source": [ + "## 5. Discussion: Improving Density Estimation\n", + "- What do you think? Do the derived densities look usable at this stage?\n", + "\n", + "### What contributes to the random error?\n", + "There are three groups of factors that control the random error:\n", + "1. Measurement accuracy for lidar snow depths and GPR twt. Reduced accuracy for either or both of the techniques will lead to large errors. The boreal forest had a lot of complex vegetation that may have impeded the accuracy of these instruments.\n", + "2. Depth of the snowpack. The accuracy of the lidar is not a function of snow depth. Thus, the random errors reduce as the snow depth increases. The boreal forest snow depths were shallow!\n", + "3. Geolocation alignment. GPR coordinates were post-processed, but accuracy is still likely on the order of ±3 m." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Improving the derived densities\n", + "Let’s say we want to learn something about snow density in the boreal forest. The derived densities offer a HUGE increase in the number of available density measurements compared to in situ. But, in situ are much more accurate. How can we improve this dataset?\n", + "\n", + "- Increase the footprint of the derived densities by upsampling the lidar (e.g., to 3 m).\n", + " - This will reduce the impact of GPR geolocation accuracy and the lidar/GPR observation uncertainty.\n", + " - Need to be careful! The GPR footprint is large, but it may not scale well past 3 m.\n", + "- Remove erroneous values. \n", + " - How does the lidar survey time compare with the GPR survey time? Was the snow disturbed or did more snow accumulate between surveys?\n", + " - Relative permittivity of snow cannot be less than air $(\\epsilon_a = 1.0)$ or greater than liquid water $(\\epsilon_w = 88)$.\n", + " - For dry snow, relative permittivity is usually between 1.0 and 2.0. The removal of values outside a certain number of standard deviations and/or the interquartile range may be warranted.\n", + "- Run a spatial averaging filter.\n", + " - Our surveys were primarily spirals --> should pair nicely with such a filter!\n", + " - Experiment with the window size of the filter. How would a 5 m x 5 m filter compare to a 25 m x 25 m filter?\n", + " - Should the data be parsed into different forest cover classes before such a filter is run?\n", + " - Be careful of linear transects! Large windows tend to remove any density variability along such transects.\n", + " - Once you reach this point, it is likely that the densities will be analysis ready. You could run a predictive model to fill in the void spaces, use the densities to evaluate models, calculate experimental variograms, etc.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 6. SnowEx GPR Analysis-Ready Datasets\n", + "_Grand Mesa, Colorado (SnowEx 2017, 2020)_\n", + "- Webb et al. (2019). https://doi.org/10.5067/G21LGCNLFSC5\n", + "- Bonnell et al. (2021). https://doi.org/10.5067/S5EGFLCIAB18\n", + "- Meehan (2021). https://doi.org/10.5067/Q2LFK0QSVGS2\n", + "- Webb (2021). https://doi.org/10.5067/WE9GI1GVMQF6\n", + "- Meehan & Hojatimalekshah (2024). https://doi.org/10.5067/LANQ53RTJ2DR\n", + "\n", + "_Cameron Pass, Colorado (SnowEx 2020, 2021)_\n", + "- McGrath et al. (2021). https://doi.org/10.5067/U4Q3X27BMRR4\n", + "- Bonnell et al. (2022). https://doi.org/10.5067/SRWGLYCB6ZC4\n", + "- Bonnell et al. (2024). 
+ "\n", + "_Jemez Mountains, New Mexico (SnowEx 2020)_\n", + "- Webb (2021). https://doi.org/10.5067/H38Q5FTBPZ8K\n", + "\n", + "_Arctic Coastal Plains, Alaska (SnowEx 2023)_\n", + "- Webb (2024). https://doi.org/10.5067/H3D9IT1W6JT6" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 7. References\n", + "_Lidar Datasets_\n", + "- Larsen (2024). https://doi.org/10.5067/BV4D8RRU1H7U\n", + "\n", + "_Relevant GPR LWC Studies_\n", + "- Webb et al. (2018). https://doi.org/10.1029/2018WR022680\n", + "- Webb et al. (2020). https://doi.org/10.1002/hyp.13686\n", + "- Bonnell et al. (2021). https://doi.org/10.3390/rs13214223\n", + "- Webb et al. (2022). https://doi.org/10.1002/hyp.14541\n", + "\n", + "_Relevant GPR Density Studies_\n", + "- Yildiz et al. (2021). https://doi.org/10.1002/hyp.14190\n", + "- McGrath et al. (2022). https://doi.org/10.3389/frsen.2022.886747\n", + "- Bonnell et al. (2023). https://doi.org/10.1002/hyp.14996\n", + "- Meehan et al. (2024). https://doi.org/10.5194/tc-18-3253-2024" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +}