Use the model

You can use this model in the Classify Pixels Using Deep Learning tool available in the Image Analyst toolbox in ArcGIS Pro. Follow the steps below to use the model for classifying crops in imagery.

Recommended imagery configuration

The recommended imagery configuration is as follows:

  • Imagery—Raster, mosaic dataset, or image service. Use a composite raster of three time steps, each with 6 bands, for a total of 18 bands of Harmonized Landsat 8 (HLSL30) or Harmonized Sentinel-2 (HLSS30) imagery. The model can also be used with Level-2 products of Sentinel-2 and Landsat 8, but it works best with HLSL30 and HLSS30. To prepare the composite raster, download three scenes with low cloud cover captured between March and September: one early in the crop season, one in the middle, and one toward the end.

    The composite raster should contain the following bands: Blue, Green, Red, Narrow NIR, SWIR 1, and SWIR 2.

    The band numbers corresponding to these bands are as follows:

    • For HLSS30 and Sentinel-2: Band2, Band3, Band4, Band8A, Band11, Band12
    • For HLSL30 and Landsat 8: Band2, Band3, Band4, Band5, Band6, Band7

  • Resolution—30 meters
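
  The preparation described above can be sketched in Python. The helper below lists the six bands to pull from each of the three HLS scenes and composites them with the Composite Bands geoprocessing tool. The band labels, function names, and paths are illustrative, and the `arcpy` call assumes the ArcGIS Pro Python environment.

```python
# Sketch: assemble the 18-band composite the model expects.
# Band labels are illustrative; arcpy is only needed inside
# build_composite, which must run in the ArcGIS Pro environment.

# Six band numbers per HLS product, in the order the model expects:
# Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2.
COMPOSITE_BANDS = {
    "HLSS30": ["B02", "B03", "B04", "B8A", "B11", "B12"],
    "HLSL30": ["B02", "B03", "B04", "B05", "B06", "B07"],
}

def composite_band_order(product, n_scenes=3):
    """Band sequence for a composite of n_scenes time steps."""
    return COMPOSITE_BANDS[product] * n_scenes

def build_composite(scene_rasters, out_raster):
    """Composite early-, mid-, and late-season scenes into one raster.

    scene_rasters: list of three 6-band rasters (one per time step),
    each already restricted to the bands in COMPOSITE_BANDS.
    """
    import arcpy  # deferred so the helpers above work anywhere
    arcpy.management.CompositeBands(";".join(scene_rasters), out_raster)
```

  A composite of three scenes prepared this way has 18 bands, matching the model's expected input.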

Use the model

Complete the following steps to classify crops from imagery:

  1. Download the Prithvi - Crop Classification model and add the imagery layer in ArcGIS Pro.
    Imagery added in ArcGIS Pro
  2. Zoom to an area of interest.
  3. Browse to Tools on the Analysis tab.
    Tools on the Analysis tab
  4. Click the Toolboxes tab in the Geoprocessing pane, select Image Analyst Tools, and browse to the Classify Pixels Using Deep Learning tool under Deep Learning.
    Classify Pixels Using Deep Learning tool
  5. Set the variables on the Parameters tab as follows:
    1. Input Raster—Select the imagery.
    2. Output Raster Dataset—Set the output raster dataset that will contain the classification results.
    3. Model Definition—Select the pretrained or fine-tuned model .dlpk file.
    4. Arguments (optional)—Change the values of the arguments if required.
      • padding—Number of pixels at the border of image tiles from which predictions are blended for adjacent tiles. Increase the value to smooth the output and reduce edge artifacts. The maximum padding value is half the tile size.
      • batch_size—Number of image tiles processed in each step of model inference. The value depends on the memory of your graphics card.
      • test_time_augmentation—Performs test time augmentation while predicting. If True, predictions of flipped and rotated variants of the input image are merged into the final output.
      • predict_background—If set to True, the background class is also classified.
    Classify Pixels Using Deep Learning Parameters tab
  6. Set the variables on the Environments tab as follows:
    1. Processing Extent—Select Current Display Extent or any other option from the drop-down menu.
    2. Cell Size (required)—Set the value to 30.

      The expected raster resolution is 30 meters.

    3. Processor Type—Select CPU or GPU.

      It is recommended that you select GPU, if available, and set GPU ID to specify the GPU to be used.

    Classify Pixels Using Deep Learning Environments tab
  7. Click Run.

    The output layer is added to the map.

    Classified results from the model
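
The steps above can also be scripted. The sketch below builds the tool's Arguments string and calls Classify Pixels Using Deep Learning with the recommended environment settings (30-meter cell size, GPU if available). The default argument values and the `classify_crops` helper are illustrative, and the `arcpy` calls assume the ArcGIS Pro Python environment with the Image Analyst extension.

```python
def format_arguments(padding=56, batch_size=4,
                     test_time_augmentation=False, predict_background=True):
    """Build the semicolon-delimited Arguments string for the tool.

    Defaults here are illustrative; padding must not exceed half
    the tile size.
    """
    return (f"padding {padding};batch_size {batch_size};"
            f"test_time_augmentation {test_time_augmentation};"
            f"predict_background {predict_background}")

def classify_crops(in_raster, model_dlpk, out_raster, use_gpu=True):
    """Scripted equivalent of steps 5 through 7."""
    import arcpy  # requires ArcGIS Pro with the Image Analyst extension
    arcpy.CheckOutExtension("ImageAnalyst")
    arcpy.env.cellSize = 30  # the expected raster resolution is 30 meters
    arcpy.env.processorType = "GPU" if use_gpu else "CPU"
    result = arcpy.ia.ClassifyPixelsUsingDeepLearning(
        in_raster, model_dlpk, format_arguments())
    result.save(out_raster)
```

Saving the result adds the classified raster to the project, matching the output of running the tool from the Geoprocessing pane.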