In this module, we will discuss the following concepts: passive vs. active data collection, the electromagnetic spectrum, and visualization parameters in Google Earth Engine.
The ability to visually interpret images is an important skill that will aid you as you explore how Google Earth Engine and remote sensing data can be integrated into your research. While many algorithms are designed to extract and classify imagery automatically, computers are simply not as advanced as the human brain when it comes to pattern and feature recognition. This means you will often need to manually identify features in imagery not only for yourself but also for your advisor, project partners, or other stakeholders. The ability to communicate this information effectively in Google Earth Engine will ultimately depend on your ability to visualize and interpret raster datasets. Although your learning environment in these modules is the Google Earth Engine interface, you may find that the concepts and skills presented here will be useful across different remote sensing software options.
Passive vs. Active Data Collection
Optical sensors rely on the collection of reflected solar energy and are described as ‘passive’. Much of the time, data from passive sensors will be formatted as multi-band rasters that allow for different band combinations. These combinations allow us to highlight environmental features on the landscape. In contrast, sensors that emit and measure their own energy (e.g. LiDAR, SAR) are commonly called ‘active’. In Google Earth Engine, you will not find much raw data from active sensors. Instead, derived products (e.g. digital elevation models) are readily available but often lack the multi-band structure of passively collected imagery.
Active and passive sensors often have distinct mission objectives and associated data products, allowing you, as a Google Earth Engine user, to manipulate their data for unique ecological applications.
Visualization of passive vs. active data collection from space-based sensors. Image Credit: GrindGIS.
The Electromagnetic Spectrum
To understand passively collected imagery in more depth, it helps to discuss the electromagnetic spectrum (EMS). Broadly speaking, the EMS is the full range of wavelengths of electromagnetic energy. In remote sensing, we encounter only a tiny fraction of this energy: after being emitted from the sun, it passes through atmospheric ‘windows’, wavelength ranges in which little energy is absorbed by the atmosphere. Within these windows lie the wavelengths that we commonly utilize in remote sensing. Over the years, specific wavelength ranges have been found to correspond with elements of the physical environment, such as vegetation, water, and human-constructed objects. In this module, you will find some ways of manipulating these wavelengths to highlight features within your raster data and area of interest. These methods will be expanded upon in Module 6.
Examples of the broad range of electromagnetic energy and forms. Note that the visible portion of this spectrum is very narrow! Image Credit: NASA.
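To make the connection between the EMS and raster bands concrete, the sketch below stores approximate wavelength ranges for a few Landsat 8 OLI bands in a plain JavaScript object, the same key-and-item structure used for visualization parameters later in this module. The band numbers and wavelength values are approximate figures from the Landsat 8 mission documentation, not something defined by Earth Engine itself.

```javascript
// Approximate wavelength ranges (in micrometers) for a few Landsat 8 OLI bands.
// Each key is a band name; each item is a [min, max] list.
var landsat8Wavelengths = {
  B2: [0.45, 0.51],  // blue (visible)
  B3: [0.53, 0.59],  // green (visible)
  B4: [0.64, 0.67],  // red (visible)
  B5: [0.85, 0.88]   // near infrared (just beyond the visible window)
};

// Look up one band's range, just as you would look up a word in a dictionary.
console.log('B4 spans', landsat8Wavelengths.B4[0],
            'to', landsat8Wavelengths.B4[1], 'micrometers');
```

Notice that the three visible bands (B2, B3, B4) together cover only a very narrow slice of the spectrum, which is why the visible portion in the figure above looks so small.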
Visualization Parameters
Visualization parameters can be defined in several ways in Google Earth Engine. The first is in the script, where we can create a dictionary object. Even if you are unfamiliar with dictionaries in the context of programming, the concept is similar to that of a physical dictionary: when you look up a word, there is some length of text explaining that word. Similarly, a dictionary object in JavaScript contains ‘key’ and ‘item’ pairings, with each key separated from its item by a colon (:). In Google Earth Engine, your ‘items’ will usually be either strings in quotations (‘character’) or numbers (1.0). See the code snippet below for a couple of examples.
// This is a dictionary object. It has a key and an item.
var dictionary_one = {key: 'item', term: 'value'};
var dictionary_two = {first_key: 1.1, second_key: 2.2, third_key: 3.3};
print(dictionary_one, 'dict1');
print(dictionary_two, 'dict2');
// Occasionally, dictionary items can be lists. Lists are simply containers for multiple items.
var dictionary_with_list = {key: ['item1', 'item2', 'item3']};
print(dictionary_with_list, "dictList");
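Once a dictionary is defined, its items can be read back by key, using either dot notation or bracket notation. This is generic JavaScript behavior rather than anything specific to Earth Engine; the names below are simply the ones from the snippet above.

```javascript
// The same dictionaries as above.
var dictionary_one = {key: 'item', term: 'value'};
var dictionary_with_list = {key: ['item1', 'item2', 'item3']};

// Dot notation and bracket notation both retrieve an item by its key.
console.log(dictionary_one.key);        // 'item'
console.log(dictionary_one['term']);    // 'value'

// List items are retrieved by their position, which starts at 0.
console.log(dictionary_with_list.key[1]);  // 'item2'
```

Earth Engine reads visualization dictionaries the same way: when you pass one to Map.addLayer(), it looks up keys such as bands, min, and max to decide how to draw the layer.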
In this module, we will be working with a number of datasets. For this section, we will import the Landsat 8 surface reflectance collection and use a true color image (TCI) to investigate and interpret the landscape around Montreal, Canada. Many of the space-based images we encounter in the news and online are TCIs. By using a TCI in Google Earth Engine, we can apply our own experience and common-sense recognition to identify and classify objects. Start a new script with the code below to generate a TCI like the one shown below.
// Load the Landsat image collection.
var ls8 = ee.ImageCollection("LANDSAT/LC08/C01/T1_SR");
// Filter the collection by date and cloud cover.
ls8 = ls8
  .filterDate('2017-05-01', '2017-09-30')
  .filterMetadata('CLOUD_COVER', 'less_than', 3.0);
// There are a number of keys that you can use in a visParams dictionary.
// Here, we are going to start by defining our 'bands', 'min', 'max', and 'gamma' values.
// You can think of 'gamma' essentially as a brightness level.
var ls_tci = {bands: ['B4','B3','B2'], min: 0, max: 3000, gamma: 1.4};
Map.setCenter(-73.755, 45.5453, 11);
Map.addLayer(ls8, ls_tci, 'True Color');
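The same visParams pattern can render other band combinations simply by swapping the bands list. As a sketch, the dictionary below substitutes Landsat 8's near-infrared band (B5) for the red band, a common false-color composite in which healthy vegetation appears bright red. The min, max, and gamma values are carried over from the TCI example above and may need adjusting for your own scene.

```javascript
// A near-infrared false-color composite: NIR, red, green.
// Scaling values are reused from the true color example above.
var ls_falsecolor = {bands: ['B5', 'B4', 'B3'], min: 0, max: 3000, gamma: 1.4};

// In the Earth Engine Code Editor you would add it like the TCI layer:
// Map.addLayer(ls8, ls_falsecolor, 'False Color (NIR)');
console.log(ls_falsecolor.bands.join(','));  // 'B5,B4,B3'
```

Only the bands key changed; everything Earth Engine needs to know about stretching and brightness still comes from the same dictionary structure.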