1 Introduction

In this module, we will discuss the following concepts:

  1. The different types of energy we capture with remote sensors.
  2. How to build JavaScript dictionaries and lists to select individual raster bands.
  3. How to visualize different combinations of multiband and single band rasters.

2 Background

The ability to visually interpret images is an important skill that will aid you as you explore how Google Earth Engine and remote sensing data can be integrated into your research. While many algorithms are designed to extract and classify imagery automatically, computers are simply not as advanced as the human brain when it comes to pattern and feature recognition. This means you will often need to manually identify features in imagery not only for yourself but also for your advisor, project partners, or other stakeholders. The ability to communicate this information effectively in Google Earth Engine will ultimately depend on your ability to visualize and interpret raster datasets. Although your learning environment in these modules is the Google Earth Engine interface, you may find that the concepts and skills presented here will be useful across different remote sensing software options.


Passive vs. Active Data Collection
Optical sensors rely on the collection of reflected solar energy and are described as ‘Passive’. Much of the time, data from passive sensors will be formatted as multi-band rasters that allow for different band combinations. These combinations allow us to highlight environmental features on the landscape. In contrast, sensors that emit and measure their own energy (e.g. LiDAR, SAR) are commonly called ‘Active’. In Google Earth Engine, you will not find much raw data from active sensors. Instead, derived products (e.g. digital elevation models) are readily available but often lack the multi-band structure of passively collected imagery.

Active and passive sensors will often have distinct mission objectives and associated data products, allowing you, as a Google Earth Engine user, to manipulate their data for unique ecological applications.


Visualization of passive vs. active data collection from space-based sensors. Image Credit: GrindGIS.


The Electromagnetic Spectrum
To understand passively collected imagery in slightly more depth, it is a good idea to discuss the electromagnetic spectrum (EMS). Broadly speaking, the EMS is the full range of wavelengths of electromagnetic energy. In remote sensing, we work with only a tiny fraction of this energy: after being emitted from the sun, it passes through atmospheric windows, wavelength ranges in which little energy is absorbed. Within these windows lie the wavelengths we commonly utilize in remote sensing. Over the years, specific wavelength ranges have been found to correspond with elements of the physical environment, such as vegetation, water, and human-constructed objects. In this module, you will find some ways of manipulating these wavelengths to highlight features within your raster data and area of interest. These methods will be expanded upon in Module 6.



Examples of the broad range of electromagnetic energy and forms. Note that the visible portion of this spectrum is very narrow! Image Credit: NASA.

3 Visualizing Multiple Bands

Visualization parameters can be defined in several ways in Google Earth Engine. The first is in the script, where we can create a dictionary object. Even if you are unfamiliar with dictionaries in the context of programming, the concept is similar to that of a physical dictionary. For example, when you look up a word in a physical dictionary, there is some length of text explaining that word. Similarly, a dictionary object in JavaScript contains a ‘key’ and ‘item’ pairing separated by a colon (:). In Google Earth Engine, your ‘items’ will usually be either strings in quotations (‘character’) or numbers (1.0). See the code snippet below for a couple of examples.
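For example, the snippet below (also included in the complete code in Section 6) builds two simple dictionaries and one dictionary whose item is a list:

```javascript
// A dictionary object pairs each key with an item, separated by a colon.
var dictionary_one = {key: 'item', term: 'value'};
var dictionary_two = {first_key: 1.1, second_key: 2.2, third_key: 3.3};

// Occasionally, dictionary items can be lists.
// Lists are simply containers for multiple items.
var dictionary_with_list = {key: ['item1', 'item2', 'item3']};
```

In the Earth Engine Code Editor, you can pass any of these objects to print() to inspect them in the console.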


3.1 True-Color (TCI)

In this module, we will be working with a number of datasets. For this section, we will import the Landsat 8 surface reflectance collection. We will begin by using a true color image (TCI) to investigate and interpret the landscape in Montreal, Canada. Many of the space-based images we encounter in the news and online are TCIs. By using a TCI in Google Earth Engine, we can apply our own experience and common-sense recognition to identify and classify objects. Start a new script with the code below to generate a TCI like the image shown below.
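The relevant snippet from the complete code in Section 6 loads and filters the collection, then defines the true-color visualization parameters. Note that gamma acts essentially as a brightness level:

```javascript
// Load the Landsat 8 surface reflectance collection,
// filtered by date and cloud cover.
var ls8 = ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")
    .filterDate('2017-05-01', '2017-09-30')
    .filterMetadata('CLOUD_COVER', 'less_than', 3.0);

// True color: red (B4), green (B3), and blue (B2) bands.
var ls_tci = {bands: ['B4', 'B3', 'B2'], min: 0, max: 3000, gamma: 1.4};

Map.setCenter(-73.755, 45.5453, 11);
Map.addLayer(ls8, ls_tci, 'True Color');
```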


Visualizing a true color image from Landsat 8 surface reflectance data over Montreal, Canada in 2017.

3.2 Color Infrared (CI)

Appending the code below to our script, we can compare the TCI to the color infrared (CI) image. With CI, we can begin to differentiate not only land cover classes (e.g. forest cover versus cropland) but also conditions within those classes, such as more active vegetation where areas appear darker red. Click Run again to see an image similar to the one below.
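These lines (also in Section 6) swap the near-infrared band into the red channel; they assume the filtered ls8 collection from the previous section is already defined in your script:

```javascript
// Color infrared: near-infrared (B5), red (B4), and green (B3) bands.
// The per-band gamma list darkens the red channel slightly.
var ls_ci = {bands: ['B5', 'B4', 'B3'], min: 0, max: 3000, gamma: [0.7, 1, 1]};
Map.addLayer(ls8, ls_ci, 'Color Infrared');
```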


Replacing our true-color image with a color infrared image to identify areas of active vegetation and waterways in Montreal, Canada and surrounding land areas.

3.3 False Color 1 and 2 (FC1/FC2)

Once more, if we append the code below to our script, we can compare two additional false color composites (FC1 and FC2) to our other multiband images. Water bodies strongly absorb near- and shortwave-infrared wavelengths, so they will appear very dark in these images, in contrast to the true color image, where some dense vegetation will also appear very dark. The band combinations in FC1 should also highlight areas of dense urban development in gray/purple, while fallow agricultural fields will appear in light brown. Clicking Run again, your map window should appear similar to the images below.
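Both composites (repeated from Section 6) again assume the ls8 collection is already defined; each draws on the shortwave-infrared bands:

```javascript
// False Color 1: SWIR1 (B6), near-infrared (B5), and red (B4) bands.
var ls_fc1 = {bands: ['B6', 'B5', 'B4'], min: 0, max: 4000, gamma: 0.9};
Map.addLayer(ls8, ls_fc1, 'False Color 1');

// False Color 2: SWIR2 (B7), SWIR1 (B6), and red (B4) bands.
var ls_fc2 = {bands: ['B7', 'B6', 'B4'], min: 0, max: 3000, gamma: 0.9};
Map.addLayer(ls8, ls_fc2, 'False Color 2');
```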


Our final depictions of Montreal, Canada using two distinct (but related) false color images highlighting the differences between agricultural and urban areas as well as waterways.

4 Visualizing Single Bands

When we are using raster data from sources with a single band, we will need to use a different map visualization technique: palettes. Palettes are the way we convey either continuous or categorical data in Google Earth Engine, but it is important to understand our minimum and maximum values when using them. Remember that we can sample values by clicking on the ‘Inspector’ tab.

When sharing your data, it is often wise to check whether the color palette you have chosen is still interpretable if your audience may be colorblind. Two great resources for this are Color Brewer, where you can browse different palette ideas, and Colblinder, where you can simulate how different colorblind conditions affect color perception.

4.1 Continuous Data

To look at how to effectively use a palette to interpret continuous data, we will be using a gross primary productivity (GPP) raster derived from Landsat. Gross primary productivity is defined here as a measure of “the amount of carbon captured by plants in an ecosystem.” Start a new script with the code below to visualize the data across the eastern slopes of the Cascade Mountains in Washington State, USA. Conveniently for us, the dataset has already been filtered for bodies of water and atmospheric effects. Running the script, you should see an image like the one below.
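The snippet from Section 6 pairs the continuous GPP values with a stepped yellow-to-green palette between the chosen minimum and maximum:

```javascript
// Load and select the Gross Primary Production (GPP) image collection.
var dataset = ee.ImageCollection('UMT/NTSG/v2/LANDSAT/GPP')
    .filter(ee.Filter.date('2018-05-01', '2018-10-31'));
var gpp = dataset.select('GPP');

// An eight-step yellow-to-green palette stretched between min and max.
var gppVis = {
  min: 0.0,
  max: 500.0,
  palette: ['ffffe5', 'f7fcb9', 'd9f0a3', 'addd8e',
            '78c679', '41ab5d', '238443', '005a32']
};

Map.setCenter(-120.9893, 47.2208, 10);
Map.addLayer(gpp, gppVis, 'GPP');
```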

By using this particular palette, we can start to pick out land management activities. Strongly geometric and/or linear features can indicate activities such as forest harvest and agriculture while irregular shapes can often be areas of preservation (i.e. National Forest or Wilderness Areas).


Visualizing gross primary productivity (derived from Landsat data) on the eastern slopes of the Cascade Mountains, Washington, USA with multiple land uses highlighted in red.

4.2 Categorical Data

We can also use palettes to highlight categorical data. If we wanted to create a forest/non-forest mask in a study area, we could use the National Land Cover Database (NLCD) to highlight all the forest categories in dark green. There are many categories of land cover in the NLCD, but to simplify things a bit for this example, we are only using “Deciduous forest”, “Evergreen forest”, and “Mixed forest” across the same area of the eastern Cascades. Start a new script one more time and paste in the code below. You should see a resulting image similar to the one below.
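One way to build the mask is sketched below; the remap step collapses the three forest classes (NLCD values 41-43) into a single value so that a simple two-color palette maps cleanly onto forest and non-forest, rather than letting the class values interpolate against the palette endpoints:

```javascript
// Load the NLCD image collection and select the land cover layer.
var nlcd = ee.ImageCollection('USGS/NLCD');
var landcover = nlcd.select('landcover');

// Values 41 (deciduous), 42 (evergreen), and 43 (mixed) represent
// explicitly defined forest types. Remap them to 1 and everything else to 0.
var forest = landcover.map(function(image) {
  return image.remap([41, 42, 43], [1, 1, 1], 0);
});

// A binary palette: non-forest in black, forest in green.
var landcoverVis = {min: 0, max: 1, palette: ['black', 'green'], opacity: 0.75};

Map.setCenter(-120.9893, 47.2208, 10);
Map.addLayer(forest, landcoverVis, 'Landcover');
```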

A simple binary visualization of forested vs. non-forested areas, again on the eastern slopes of the Cascade Mountains, using the National Land Cover Dataset classes.

We will work with categorical data again in Module 9.

5 Conclusion

In this module we discussed the differences between passive and active remote sensing as well as the electromagnetic spectrum and the narrow sliver of it that constitutes what we know as visible light. We also reviewed the structure of JavaScript dictionaries and lists and how to use them to select individual raster bands. Using that knowledge, we walked through a number of ways to visualize multiple and single band imagery to effectively communicate differences in land cover, land use, and vegetation state.

6 Complete Code for Module 3

// This is a dictionary object. It has a key and an item.
var dictionary_one = {key: 'item', term: 'value'};
var dictionary_two = {first_key: 1.1, second_key: 2.2, third_key: 3.3};
print(dictionary_one, "dict1");
print(dictionary_two, "dict2");

// Occasionally, dictionary items can be lists. Lists are simply containers for multiple items.
var dictionary_with_list = {key: ['item1', 'item2', 'item3']};
print(dictionary_with_list, "dictList");

// Load the Landsat image collection.
var ls8 = ee.ImageCollection("LANDSAT/LC08/C01/T1_SR");

// Filter the collection by date and cloud cover.
ls8 = ls8
    .filterDate('2017-05-01', '2017-09-30')
    .filterMetadata('CLOUD_COVER', 'less_than', 3.0);

// There are a number of keys that you can use in a visParams dictionary.
// Here, we are going to start by defining our 'bands', 'min', 'max', and 'gamma' values.
// You can think of 'gamma' essentially as a brightness level.
var ls_tci = {bands: ['B4','B3','B2'], min: 0, max: 3000, gamma: 1.4};

Map.setCenter(-73.755, 45.5453, 11);
Map.addLayer(ls8, ls_tci, 'True Color');

// Add the Color Infrared visualization.
var ls_ci = {bands: ['B5','B4','B3'], min: 0, max: 3000, gamma: [0.7, 1, 1]};
Map.addLayer(ls8, ls_ci, 'Color Infrared');

// Add the False Color 1 visualization.
var ls_fc1 = {bands: ['B6','B5','B4'], min: 0, max: 4000, gamma: 0.9};
Map.addLayer(ls8, ls_fc1, 'False Color 1');

// Add the False Color 2 visualization.
var ls_fc2 = {bands: ['B7','B6','B4'], min: 0, max: 3000, gamma: 0.9};
Map.addLayer(ls8, ls_fc2, 'False Color 2');

// Load and select the Gross Primary Production (GPP) image collection.
var dataset = ee.ImageCollection('UMT/NTSG/v2/LANDSAT/GPP')
                  .filter(ee.Filter.date('2018-05-01', '2018-10-31'));
var gpp = dataset.select('GPP');

// Build a set of visualization parameters and add the GPP layer to the map.
var gppVis = {
  min: 0.0,
  max: 500.0,
  palette: ['ffffe5','f7fcb9','d9f0a3','addd8e','78c679','41ab5d','238443','005a32']};

Map.setCenter(-120.9893, 47.2208, 10);
Map.addLayer(gpp, gppVis, 'GPP');

// Load the NLCD image collection and select the land cover layer.
var nlcd = ee.ImageCollection('USGS/NLCD');
var landcover = nlcd.select('landcover');

// Values 41-43 represent explicitly defined forest types.
// Remap those classes to 1 and everything else to 0 so the palette
// maps cleanly onto non-forest (black) and forest (green).
var forest = landcover.map(function(image) {
  return image.remap([41, 42, 43], [1, 1, 1], 0);
});

var landcoverVis = {
  min: 0,
  max: 1,
  palette: ['black', 'green'],
  opacity: 0.75
};

Map.setCenter(-120.9893, 47.2208, 10);
Map.addLayer(forest, landcoverVis, 'Landcover');