ENVI Deep Learning 1.1 contains many improvements to usability and training. See the following sections:

Multiclass Architecture


At a high level, this feature includes:

  • The ability to extract more than one feature at a time. ENVI Deep Learning 1.0 was a single-class classifier. With the new multiclass architecture, you can train with up to 255 classes.
  • Significant improvement in model performance with a new, customized architecture. With this new architecture, you can achieve classification accuracies in the high 90-percent range. It is also better at reproducing the geometry you used to label your imagery, such as circles, rectangles, or polygons.
  • This architecture can be used for any number of classes, including single-class applications created with ENVI Deep Learning 1.0. It is the recommended architecture for all training sessions and replaces the single-class architecture in the default ENVI Modeler workflows.

The following image shows a result of using the new multiclass architecture to identify building damage after a tornado. Notice how the architecture captures the shape and outline of the blue tarps and also identifies classes with indistinct boundaries.

See ENVI Deep Learning Tutorial: Extract Multiple Features to try this scenario yourself.

While four features may be a common multiclass scenario, the following example shows the real power of the multiclass architecture: a screen capture of an 84-class landcover classification image derived from Landsat 8 imagery and the U.S. Department of Agriculture Cropland Data Layer.

Deep Learning Labeling Tool


ENVI Deep Learning 1.1 significantly improves the process of labeling and managing training data with the addition of the Deep Learning Labeling Tool.

Here is how the labeling tool helps streamline the training process:

  • It introduces projects so you can manage all of the training data for a particular scenario in one place.
  • It manages a single list of classes, so no matter how many features you have, you do not need to create new ENVI ROIs each time you add an image. When you open an image, the labeling tool automatically creates the base ROI classes for you if they do not exist.
  • It ensures that your progress is never lost. As you add new labels, or if you make updates to existing features and geometry, all changes are immediately saved and persisted to disk.
  • It keeps track of which images have been labeled so you know where you left off. The following screen capture shows a statistics report created from the labeling tool:
  • It automates the training process. The tool rasterizes labels for you if the raster version does not exist or if your training data has been updated since the last training session. This means the labeling tool not only manages your labels but also lets you generate models directly from it.

TensorBoard Integration


Knowing how your models are performing in real time can save you time if you discover that you entered a parameter value incorrectly or did not create adequate training data. To provide real-time feedback during training, TensorBoard is now integrated into the training process. TensorBoard starts automatically when you begin training a model. It opens a window in your web browser that looks similar to the following screen capture:

Here are some details about TensorBoard integration with ENVI Deep Learning:

  • TensorBoard reports Accuracy, Loss, Precision, and Recall for each batch and epoch during training and for each epoch for validation.
  • TensorBoard lets you verify that Accuracy, Precision, and Recall are increasing during training and that Loss is decreasing. A recommended approach is to train for at least two epochs and verify that this is the case; if not, stop training, adjust parameters, add more data, and try again. For some complex features, you may need to train longer than a few epochs to get a general sense of performance.
  • You can view training metrics with the built-in widget browser or your system browser.
  • You can access and manage training metrics by selecting Show Training Metrics from the Deep Learning Guide Map menu bar.
  • You can easily compare training sessions to one another. The training logs from TensorBoard persist between training sessions so that, when you retrain, you can compare model performance to see if new training data or updated parameters were beneficial.

Validate System Requirements


When you first install ENVI Deep Learning, you should run the Test Installation and Configuration tool, which is available under the Tools menu in the Deep Learning Guide Map. This tool verifies that your system is properly configured with the correct NVIDIA drivers, NVIDIA GPU, and installation libraries.

The Test Installation and Configuration tool was updated to run a small training session, which verifies that everything completes as expected. When finished, it displays a dialog that indicates whether or not your system is ready to use ENVI Deep Learning.

Other Notable Changes


Other changes improve the usability of ENVI Deep Learning:

  • CUDA 10 support. You can now use ENVI Deep Learning with the latest NVIDIA GPUs.
  • When classifying images with the TensorFlow Mask Classification tool, you can now generate a classification image in addition to the class activation image, which is now optional. Returning the classification image directly, instead of generating it from the class activation image in a separate step, can reduce processing time. We recommend saving the classification image rather than the class activation image unless you want to threshold the class activation image yourself, particularly in single-class cases.
  • During training, the Train TensorFlow Pixel Model tool now saves both the best model and the last model by default. The best model is the one with the lowest Loss value for the validation data at the end of each epoch. Most of the time, the best model will perform better on other data than the last model (and the best model may in fact be the last one). However, depending on how carefully the validation data was created, how similar it is to the training data, and how similar it is to other data used with the model, training longer sometimes produces a model that performs better, so the last model is also provided.

Programming


This release provides the following routines and tasks:

  • ClassActivationToPolylineShapefile task: Create a polyline shapefile from a class activation raster.
  • ENVITensorBoard: Manually display TensorBoard or start and stop a TensorBoard server.
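As a rough illustration of how these routines might be called from IDL, the following sketch uses the standard ENVITask pattern. The input/output property names for ClassActivationToPolylineShapefile and the ENVITensorBoard method names are assumptions for illustration only; consult the API documentation for the actual signatures.

```idl
; Sketch only: the property and method names marked below are
; illustrative assumptions, not confirmed by this document.
e = ENVI()

; Vectorize a class activation raster to a polyline shapefile.
Task = ENVITask('ClassActivationToPolylineShapefile')
Task.INPUT_RASTER = ActivationRaster          ; assumed property name
Task.OUTPUT_SHAPEFILE_URI = 'features.shp'    ; assumed property name
Task.Execute

; Manually control a TensorBoard server.
tb = ENVITensorBoard()
tb.Start                                      ; assumed method name
; ...train models, inspect metrics in the browser...
tb.Stop                                       ; assumed method name
```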

API Updates and Breaking Changes

ENVI Deep Learning 1.1 has a few updates and breaking changes from the previous version. If you used any of the following tasks in your IDL code or ENVI Modeler workflows, you should update the code or models so that they will work with version 1.1.

In the TensorFlowClassification task, the OUTPUT_RASTER property was replaced by OUTPUT_CLASSIFICATION_RASTER and OUTPUT_CLASS_ACTIVATION_RASTER. Also, OUTPUT_RASTER_URI was replaced by OUTPUT_CLASSIFICATION_RASTER_URI and OUTPUT_CLASS_ACTIVATION_RASTER_URI.
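For example, a 1.0-era call that read Task.OUTPUT_RASTER might be updated along these lines (a sketch, assuming the task is invoked through the standard ENVITask pattern; INPUT_RASTER and INPUT_MODEL stand in for whatever inputs your existing code already sets):

```idl
e = ENVI()
Task = ENVITask('TensorFlowClassification')
Task.INPUT_RASTER = Raster    ; placeholder for your existing inputs
Task.INPUT_MODEL = Model
Task.Execute

; Version 1.0:
;   Result = Task.OUTPUT_RASTER

; Version 1.1:
ClassImage = Task.OUTPUT_CLASSIFICATION_RASTER
Activation = Task.OUTPUT_CLASS_ACTIVATION_RASTER
```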

In the RandomizeTrainTensorFlowMaskModel task, the EPOCHS, OUTPUT_EPOCHS, PATCHES_PER_EPOCH, and OUTPUT_PATCHES_PER_EPOCH properties were removed.

The TrainTensorFlowMaskModel task has a new OUTPUT_LAST_MODEL property that returns a model from the last epoch of training. See Other Notable Changes above for details on how this differs from the OUTPUT_MODEL property, which represents the model with the lowest validation Loss value.
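In code, retrieving both models might look like the following sketch (assuming the standard ENVITask pattern; only the two output property names are taken from this document):

```idl
Task = ENVITask('TrainTensorFlowMaskModel')
; ...set training and validation rasters and parameters as before...
Task.Execute

BestModel = Task.OUTPUT_MODEL        ; lowest validation Loss across epochs
LastModel = Task.OUTPUT_LAST_MODEL   ; model from the final epoch
```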