You can fine-tune the Prithvi - Burn Scars Segmentation model to suit your geographic area, imagery, or features of interest. Fine-tuning a model requires less training data, fewer computational resources, and less time than training a new model from scratch.
Fine-tuning the model is recommended if you do not get satisfactory results from the available ArcGIS pretrained deep learning models. This can happen when your area of interest falls outside the applicable geographies for the models, or when your imagery properties, such as resolution, scale, and seasonality, differ from those of the training imagery.
You can use the Export Training Data For Deep Learning tool to prepare training data. Next, you can fine-tune this model on your data in ArcGIS Pro; the steps below use the arcgis.learn module of the ArcGIS API for Python. Follow the steps below to fine-tune the model.
Prepare training data
This model was trained on six-band composites of Harmonized Landsat 8 (HLSL30) or Harmonized Sentinel-2 (HLSS30) imagery and corresponding burn scar labels. Use the Export Training Data For Deep Learning tool to prepare training data for fine-tuning the model.
- Browse to Tools under the Analysis tab.
- Click the Toolboxes tab in the Geoprocessing pane, select Image Analyst Tools and browse to the Export Training Data For Deep Learning tool in the Deep Learning toolset.
- Set the variables under the Parameters tab as follows:
- Input Raster—Select the 6-band composite imagery. For more details about the input raster, see Recommended imagery configuration.
- Output Folder—Any directory of your choice on your machine.
- Input Feature Class Or Classified Raster Or Table—Select the labeled feature class or classified raster representing burn scars. If a feature class is used, it must include a text field named ClassName containing the class name and a field named ClassValue containing the class value.
- Class Value Field—If a feature class is used in the preceding step, select its ClassValue field, which stores the class value assigned to burn scars.
- Image Format—TIFF format
- Tile Size X—224
- Tile Size Y—224
- Stride X—0
- Stride Y—0
- Metadata Format—Classified Tiles
- Set the variables under the Environments tab as follows:
- Processing Extent—Select Current Display Extent or any other option from the drop-down menu as needed.
- Cell Size—Set the value to the desired cell size.
- Click Run. Once processing is complete, the exported training data is saved in the specified directory.
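The same export can also be scripted with arcpy instead of the Geoprocessing pane. The following is a minimal sketch of the equivalent Export Training Data For Deep Learning call; the raster, label, and output paths are hypothetical placeholders, so substitute your own data and adjust the parameters as needed.

```python
import arcpy

# The tool requires the ArcGIS Image Analyst extension.
arcpy.CheckOutExtension("ImageAnalyst")

# Hypothetical paths; replace with your 6-band composite, burn scar labels,
# and an output folder of your choice.
arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster=r"C:\data\hls_6band_composite.tif",
    out_folder=r"C:\data\burn_scars\training_data",
    in_class_data=r"C:\data\burn_scars.gdb\burn_scar_labels",
    image_chip_format="TIFF",
    tile_size_x=224,
    tile_size_y=224,
    stride_x=0,
    stride_y=0,
    metadata_format="Classified_Tiles",
    class_value_field="ClassValue",
)
```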
Fine-tune the Prithvi - Burn Scars Segmentation model
Complete the following steps to fine-tune the model using the arcgis.learn module of the ArcGIS API for Python:
- Open the Python Command Prompt with an environment that includes the deep learning dependencies, browse to the desired directory, and type jupyter-notebook.
- In the browser, click New and select Python 3 (ipykernel) to create a notebook.
- Use the following functions to fine-tune the model; a complete code sketch follows these steps:
- Import the arcgis.learn module.
- prepare_data—Prepare a data object from the training samples exported by the Export Training Data For Deep Learning tool or from training samples in the supported dataset formats.
- path: Provide the path to your exported training data from the previous step.
- batch_size: Specify the number of image tiles processed in each training step; this value depends on the memory of your graphics card.
- Initialize an MMSegmentation model and assign it to a variable, for example, model:
- data: Provide the data object created using the prepare_data function.
- model: Specify the model name as prithvi100m_burn_scar.
- fit—Train the model for the specified number of epochs with an automatically determined optimal learning rate using the fit method.
- save—Save the trained model as a deep learning package file (.dlpk) using the save method. The deep learning package file format is the standard format used to deploy deep learning models on the ArcGIS platform. By default, it will be saved to the models subfolder within the training data folder.
- per_class_metrics—Compute per-class precision, recall, and F1 score on the validation set using the per_class_metrics method.
- You can now use the saved model (.dlpk file) to run inferencing on your imagery, for example with the Classify Pixels Using Deep Learning tool in ArcGIS Pro.
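Put together, the fine-tuning workflow looks like the following minimal sketch. The training data path is a hypothetical placeholder, and the batch_size and epochs values are examples; adjust them for your GPU memory and data volume.

```python
from arcgis.learn import prepare_data, MMSegmentation

# Folder exported by the Export Training Data For Deep Learning tool
# (hypothetical path; replace with your own export location).
data = prepare_data(r"C:\data\burn_scars\training_data", batch_size=8)

# Initialize the Prithvi-based burn scar segmentation model.
model = MMSegmentation(data, model="prithvi100m_burn_scar")

# Train for a chosen number of epochs; when no learning rate is passed,
# fit uses an automatically determined optimal learning rate.
model.fit(epochs=20)

# Review per-class precision, recall, and F1 score on the validation set.
model.per_class_metrics()

# Save the fine-tuned model as a deep learning package (.dlpk).
# By default, it is written to the models subfolder of the training data folder.
model.save("prithvi_burn_scars_finetuned")
```

If the validation metrics are not satisfactory, you can call fit again with additional epochs or a different learning rate before saving the model.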