Train Point Cloud Classification Model (3D Analyst)

Summary

Trains a deep learning model for point cloud classification.

Learn more about training a point cloud classification model

Usage

  • This tool requires the installation of Deep Learning Essentials, which provides multiple neural network solutions that include neural architectures for classifying point clouds.

    To set up your machine to use deep learning frameworks in ArcGIS AllSource, see Install deep learning frameworks for ArcGIS.

  • The point cloud classification model can be trained using either a CUDA-capable NVIDIA graphics card or the CPU. Using the GPU is typically faster than using the CPU. Use the CPU only if no GPU is available. When using the CPU for training, start with the RandLA-Net architecture, since it consumes less memory than PointCNN. You can also experiment with the smallest possible training sample to estimate how long it will take to process the data before training with the full training dataset.
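
    As an illustration, the following minimal sketch (hypothetical paths and a trial-sized run) assumes the processorType environment is used to force CPU processing and uses a small Iterations Per Epoch value to gauge processing time:

    import arcpy

    arcpy.env.workspace = "D:/Deep_Learning_Workspace"

    # Assumption: the processorType environment directs the tool to the CPU.
    arcpy.env.processorType = "CPU"

    # RandLA-Net consumes less memory than PointCNN, so it is the better
    # starting point for CPU training; epoch_iterations=10 limits each epoch
    # to 10 percent of the data so the run time can be estimated.
    arcpy.ddd.TrainPointCloudClassificationModel(
        "Powerline_Training.pctd", "D:/DL_Models", "Powerline_CPU_Trial",
        architecture="RANDLANET", max_epochs=2, epoch_iterations=10)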

  • When using the GPU to train a model on a computer with multiple graphics cards, the tool will use the fastest graphics card on the computer. You can also specify the GPU using the GPU ID environment setting. If multiple graphics cards are present, you can maximize training performance by dedicating the card with the greatest computational resources to training and using the one with fewer resources for the display. If the selected GPU also drives the display, its available memory will be diminished by the operating system and by any applications using the display during the training process.
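
    A minimal sketch of dedicating a specific graphics card to training, assuming the GPU ID environment is exposed in Python as arcpy.env.gpuId and that device 1 is the card with the most available memory (device IDs and paths are hypothetical):

    import arcpy

    arcpy.env.workspace = "D:/Deep_Learning_Workspace"

    # Assumption: gpuId selects the graphics card used by GPU-enabled tools;
    # device 1 is assumed to be the dedicated training card, leaving device 0
    # to drive the display.
    arcpy.env.gpuId = 1

    arcpy.ddd.TrainPointCloudClassificationModel(
        "Powerline_Training.pctd", "D:/DL_Models", "Powerline_GPU1")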

  • Using a pretrained model is advantageous, especially when facing limitations in data, time, or computational resources. Pretrained models reduce the need for extensive training and offer a reliable starting point that can accelerate the creation of a useful model. To take advantage of a pretrained model, the new training data must be compatible with it. Ensure that the new training data has the same attributes and class codes as the training data that was used to create the pretrained model. If class codes in the training data do not match the classes in the pretrained model, the training data's classes must be remapped accordingly.
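
    For example, the following sketch refines a hypothetical pretrained powerline model whose wire conductor class uses code 14, assuming the Class Remapping parameter accepts [current code, new code] pairs and that the new training data labels wires as class 18:

    import arcpy

    arcpy.env.workspace = "D:/Deep_Learning_Workspace"

    # Remap the new data's wire class (18) to the pretrained model's code (14)
    # so the class codes match before refinement; all paths are hypothetical.
    arcpy.ddd.TrainPointCloudClassificationModel(
        "New_Powerline_Training.pctd", "D:/DL_Models", "Powerline_Refined",
        pretrained_model="D:/DL_Models/Powerline/Powerline.dlpk",
        class_remap=[[18, 14]],
        target_classes=[14], background_class=1,
        max_epochs=10)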

  • When the tool is running, its progress message reports the following statistics about the training results that were achieved in each epoch:

    • Epoch—The epoch number with which the result is associated
    • Training Loss—The result of the entropy loss function that was averaged for the training data
    • Validation Loss—The result of the entropy loss function that was determined when applying the model trained in the epoch on the validation data
    • Accuracy—The ratio of points in the validation data that were correctly classified by the model trained in the epoch (true positives) over all the points in the validation data
    • Precision—The macro average of the precision for all class codes
    • Recall—The macro average of the recall for all class codes
    • F1 Score—The harmonic mean of the macro average of the precision and recall values for all class codes (see the computation sketch below)

    A model that achieves low training loss but high validation loss is considered to be overfitting the training data, whereby it detects patterns from artifacts in the training data that result in the model not working well for the validation data. A model that achieves a high training loss and a high validation loss is considered to be underfitting the training data, whereby no patterns are being learned effectively to produce a usable model.

    Learn more about assessing point cloud training results
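
    The macro-averaged metrics above can be reproduced from per-class true positive, false positive, and false negative counts. The following minimal sketch uses made-up counts for two class codes:

    # Hypothetical per-class counts from a validation run.
    counts = {
        14: {"tp": 900, "fp": 100, "fn": 50},   # wire conductor
        15: {"tp": 400, "fp": 200, "fn": 100},  # transmission tower
    }

    precisions = [c["tp"] / (c["tp"] + c["fp"]) for c in counts.values()]
    recalls = [c["tp"] / (c["tp"] + c["fn"]) for c in counts.values()]

    # Macro averages weight every class code equally.
    macro_precision = sum(precisions) / len(precisions)
    macro_recall = sum(recalls) / len(recalls)

    # The reported F1 score is the harmonic mean of the two macro averages.
    f1_score = 2 * macro_precision * macro_recall / (macro_precision + macro_recall)
    print(macro_precision, macro_recall, f1_score)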

  • A folder is created to store the checkpoint models, which are the models saved at the end of each epoch. The folder name is the same as the model name with a .checkpoints suffix, and it is created in the Output Model Location parameter value. Once the training is finished, a CSV table with a name that begins with the Output Model Name parameter value and ends with _stats.csv is created in the checkpoint folder. This table includes the following fields, which report the results obtained for each class code and epoch (a sketch for summarizing this table follows the list):

    • Epoch—The epoch number associated with the results in the row. This value corresponds to the model created in the checkpoint models directory. The results are obtained by applying the model trained in the epoch on the validation data.
    • Class_Code—The class code for which the results are being reported.
    • Precision—The ratio of points that were correctly classified (true positives) over all the points that were classified (true positives and false positives).
    • Recall—The ratio of correctly classified points (true positives) over all the points that should have been classified with this value (true positives and false negatives).
    • F1_Score—The harmonic mean of the precision and recall values.
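
    A minimal sketch of summarizing this table, assuming a hypothetical checkpoint folder and the field names listed above:

    import csv
    from collections import defaultdict

    # Hypothetical location; the file name follows the
    # <Output Model Name>_stats.csv pattern described above.
    stats_path = "D:/DL_Models/Powerline.checkpoints/Powerline_stats.csv"

    f1_by_epoch = defaultdict(list)
    with open(stats_path, newline="") as csv_file:
        for row in csv.DictReader(csv_file):
            f1_by_epoch[int(row["Epoch"])].append(float(row["F1_Score"]))

    # Report the epoch whose classes have the highest average F1 score.
    best = max(f1_by_epoch, key=lambda e: sum(f1_by_epoch[e]) / len(f1_by_epoch[e]))
    print(f"Best epoch by mean F1: {best}")
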
  • The dedicated memory used during training is the sum of memory allocated by the deep learning framework and the size of the data processed in each batch of an iteration in a given epoch. The size of the data in each batch depends on the number of additional point attributes specified in the Attribute Selection parameter, the total number of points in any given block, and the number of blocks that will be processed in each batch as specified by the Batch Size parameter. The maximum number of points per block is determined when the training data is exported, and this value should be assumed when estimating the potential memory footprint of the training operation.
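
    As a rough illustration only, the per-batch point budget can be estimated from these values; the byte figure below assumes 4 bytes per value and ignores framework overhead, which varies by architecture:

    # Hypothetical values taken from the training data export and tool parameters.
    max_points_per_block = 8192   # set when the training data was exported
    batch_size = 2                # Batch Size parameter
    extra_attributes = 3          # e.g., intensity, return number, number of returns

    # x, y, and z plus any additional attributes, at an assumed 4 bytes per value.
    channels = 3 + extra_attributes
    points_per_batch = max_points_per_block * batch_size
    approx_batch_bytes = points_per_batch * channels * 4

    print(f"~{approx_batch_bytes / 1024 ** 2:.1f} MB of point data per batch")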

  • The Relative Height option in the Attribute Selection parameter provides each point's height above a reference surface, such as a bare earth elevation model. Using this attribute can potentially improve the model's ability to learn directional relationships during the training process.

Parameters

Label | Explanation | Data Type
Input Training Data

The point cloud training data (*.pctd file) that will be used to train the classification model.

File
Output Model Location

An existing folder that will store the new directory containing the deep learning model.

Folder
Output Model Name

The name of the output Esri model definition file (*.emd), deep learning package (*.dlpk), and the directory that will be created to store them.

String
Pre-trained Model
(Optional)

The pretrained model that will be refined. When a pretrained model is provided, the input training data must have the same attributes, class codes, and maximum number of points that were used by the training data that generated this model.

File
Attribute Selection
(Optional)

Specifies the point attributes that will be used to train the model. Only the attributes that are present in the point cloud training data will be available. No additional attributes are included by default.

  • Intensity—The measure of the magnitude of the lidar pulse return will be used.
  • Return Number—The ordinal position of the point obtained from a given lidar pulse will be used.
  • Number of Returns—The total number of lidar returns that were identified as points from the pulse associated with a given point will be used.
  • Red Band—The red band's value from a point cloud with color information will be used.
  • Green Band—The green band's value from a point cloud with color information will be used.
  • Blue Band—The blue band's value from a point cloud with color information will be used.
  • Near Infrared Band—The near infrared band's value from a point cloud with near infrared information will be used.
  • Relative Height—The relative height of each point in relation to a reference surface, which would typically be a bare earth DEM, will be used.
String
Minimum Points Per Block
(Optional)

The minimum number of points that must be present in a given block for it to be used when training the model. The default is 0.

Long
Class Remapping
(Optional)

Defines how class code values will map to new values before training the deep learning model.

Value Table
Class Codes Of Interest
(Optional)

The class codes that will be used to filter the blocks in the training data. When class codes of interest are specified, all other class codes are remapped to the background class code.

Long
Background Class Code
(Optional)

The class code value that will be used for all other class codes when class codes of interest have been specified.

Long
Class Description
(Optional)

The descriptions of what each class code in the training data represents.

Value Table
Model Selection Criteria
(Optional)

Specifies the statistical basis that will be used to determine the final model.

  • Validation Loss—The model that achieves the lowest result when the entropy loss function is applied to the validation data will be used.
  • Recall—The model that achieves the best macro average of the recall for all class codes will be used. Each class code's recall value is determined by the ratio of correctly classified points (true positives) over all the points that should have been classified with this value (true positives and false negatives). This is the default.
  • F1 Score—The model that achieves the best harmonic mean between the macro average of the precision and recall values for all class codes will be used. This provides a balance between precision and recall, which favors better overall performance.
  • Precision—The model that achieves the best macro average of the precision for all class codes will be used. Each class code's precision is determined by the ratio of points that are correctly classified (true positives) over all the points that are classified (true positives and false positives).
  • Accuracy—The model that achieves the highest ratio of correctly classified points over all the points in the validation data will be used.
String
Maximum Number of Epochs
(Optional)

The number of times each block of data will be passed forward and backward through the neural network. The default is 25.

Long
Iterations Per Epoch (%)
(Optional)

The percentage of the data that will be processed in each training epoch. The default is 100.

Double
Learning Rate
(Optional)

The rate at which existing information will be overwritten with new information. If no value is provided, the optimal learning rate will be extracted from the learning curve during the training process. This is the default.

Double
Batch Size
(Optional)

The number of training data blocks that will be processed at any given time. The default is 2.

Long
Stop training when model no longer improves
(Optional)

Specifies whether the model training will stop when the metric specified in the Model Selection Criteria parameter does not register any improvement after five consecutive epochs.

  • Checked—The model training will stop when the model is no longer improving. This is the default.
  • Unchecked—The model training will continue until the maximum number of epochs has been reached.
Boolean
Learning Rate Strategy
(Optional)

Specifies how the learning rate will be modified during training.

  • One Cycle Learning Rate—The learning rate will be cycled throughout each epoch using Fast.AI's implementation of the 1cycle technique for training neural networks to help improve the training of a convolutional neural network. This is the default.
  • Fixed Learning Rate—The same learning rate will be used throughout the training process.
String
Model Architecture
(Optional)

Specifies the neural network architecture that will be used to train the model. When a pretrained model is specified, the architecture used for creating the pretrained model will be automatically set.

  • PointCNN—The PointCNN architecture will be used.
  • RandLA-Net—The RandLA-Net architecture will be used. RandLA-Net is built on the principles of simple random sampling and local feature aggregation. This is the default.
  • Semantic Query Network—The Semantic Query Network (SQN) architecture will be used. SQN does not require a comprehensive classification of the training data as the other neural network architectures do.
String
Loss Function
(Optional)

Specifies the loss function that will be used during training.

  • Cross Entropy Loss—Cross entropy loss will be used. This function is best suited for training data in which each class has a similar number of points to the other classes. This is the default.
  • Focal Loss—Focal loss will be used. This function is best suited for training data in which the classes that are being trained may have point counts with great variation.
String

Derived Output

Label | Explanation | Data Type
Output Model

The resulting model generated by this tool.

File
Output Model Statistics

The .csv file containing the precision, recall, and F1 scores for each class code and epoch.

Text File
Output Epoch Statistics

The .csv file containing the training loss, validation loss, accuracy, precision, recall, and F1 scores obtained in each epoch.

Text File

arcpy.ddd.TrainPointCloudClassificationModel(in_training_data, out_model_location, out_model_name, {pretrained_model}, {attributes}, {min_points}, {class_remap}, {target_classes}, {background_class}, {class_descriptions}, {model_selection_criteria}, {max_epochs}, {epoch_iterations}, {learning_rate}, {batch_size}, {early_stop}, {learning_rate_strategy}, {architecture}, {loss_function})
Name | Explanation | Data Type
in_training_data

The point cloud training data (*.pctd file) that will be used to train the classification model.

File
out_model_location

An existing folder that will store the new directory containing the deep learning model.

Folder
out_model_name

The name of the output Esri model definition file (*.emd), deep learning package (*.dlpk), and the directory that will be created to store them.

String
pretrained_model
(Optional)

The pretrained model that will be refined. When a pretrained model is provided, the input training data must have the same attributes, class codes, and maximum number of points that were used by the training data that generated this model.

File
attributes
[attributes,...]
(Optional)

Specifies the point attributes that will be used to train the model. Only the attributes that are present in the point cloud training data will be available. No additional attributes are included by default.

  • INTENSITY—The measure of the magnitude of the lidar pulse return will be used.
  • RETURN_NUMBER—The ordinal position of the point obtained from a given lidar pulse will be used.
  • NUMBER_OF_RETURNS—The total number of lidar returns that were identified as points from the pulse associated with a given point will be used.
  • RED—The red band's value from a point cloud with color information will be used.
  • GREEN—The green band's value from a point cloud with color information will be used.
  • BLUE—The blue band's value from a point cloud with color information will be used.
  • NEAR_INFRARED—The near infrared band's value from a point cloud with near infrared information will be used.
  • RELATIVE_HEIGHT—The relative height of each point in relation to a reference surface, which would typically be a bare earth DEM, will be used.
String
min_points
(Optional)

The minimum number of points that must be present in a given block for it to be used when training the model. The default is 0.

Long
class_remap
[class_remap,...]
(Optional)

Defines how class code values will map to new values before training the deep learning model.

Value Table
target_classes
[target_classes,...]
(Optional)

The class codes that will be used to filter the blocks in the training data. When class codes of interest are specified, all other class codes are remapped to the background class code.

Long
background_class
(Optional)

The class code value that will be used for all other class codes when class codes of interest have been specified.

Long
class_descriptions
[class_descriptions,...]
(Optional)

The descriptions of what each class code in the training data represents.

Value Table
model_selection_criteria
(Optional)

Specifies the statistical basis that will be used to determine the final model.

  • VALIDATION_LOSS—The model that achieves the lowest result when the entropy loss function is applied to the validation data will be used.
  • RECALL—The model that achieves the best macro average of the recall for all class codes will be used. Each class code's recall value is determined by the ratio of correctly classified points (true positives) over all the points that should have been classified with this value (true positives and false negatives). This is the default.
  • F1_SCORE—The model that achieves the best harmonic mean between the macro average of the precision and recall values for all class codes will be used. This provides a balance between precision and recall, which favors better overall performance.
  • PRECISION—The model that achieves the best macro average of the precision for all class codes will be used. Each class code's precision is determined by the ratio of points that are correctly classified (true positives) over all the points that are classified (true positives and false positives).
  • ACCURACY—The model that achieves the highest ratio of correctly classified points over all the points in the validation data will be used.
String
max_epochs
(Optional)

The number of times each block of data will be passed forward and backward through the neural network. The default is 25.

Long
epoch_iterations
(Optional)

The percentage of the data that will be processed in each training epoch. The default is 100.

Double
learning_rate
(Optional)

The rate at which existing information will be overwritten with new information. If no value is provided, the optimal learning rate will be extracted from the learning curve during the training process. This is the default.

Double
batch_size
(Optional)

The number of training data blocks that will be processed at any given time. The default is 2.

Long
early_stop
(Optional)

Specifies whether the model training will stop when the metric specified in the model_selection_criteria parameter does not register any improvement after five consecutive epochs.

  • EARLY_STOP—The model training will stop when the model is no longer improving. This is the default.
  • NO_EARLY_STOP—The model training will continue until the maximum number of epochs has been reached.
Boolean
learning_rate_strategy
(Optional)

Specifies how the learning rate will be modified during training.

  • ONE_CYCLE—The learning rate will be cycled throughout each epoch using Fast.AI's implementation of the 1cycle technique for training neural networks to help improve the training of a convolutional neural network. This is the default.
  • FIXED—The same learning rate will be used throughout the training process.
String
architecture
(Optional)

Specifies the neural network architecture that will be used to train the model. When a pretrained model is specified, the architecture used for creating the pretrained model will be automatically set.

  • POINTCNN—The PointCNN architecture will be used.
  • RANDLANET—The RandLA-Net architecture will be used. RandLA-Net is built on the principles of simple random sampling and local feature aggregation. This is the default.
  • SQN—The Semantic Query Network (SQN) architecture will be used. SQN does not require a comprehensive classification of the training data as the other neural network architectures do.
String
loss_function
(Optional)

Specifies the loss function that will be used during training.

  • CROSS_ENTROPY_LOSS—Cross entropy loss will be used. This function is best suited for training data in which each class has a similar number of points to the other classes. This is the default.
  • FOCAL_LOSS—Focal loss will be used. This function is best suited for training data in which the classes that are being trained may have point counts with great variation.
String

Derived Output

Name | Explanation | Data Type
out_model

The resulting model generated by this tool.

File
out_model_stats

The .csv file containing the precision, recall, and F1 scores for each class code and epoch.

Text File
out_epoch_stats

The .csv file containing the training loss, validation loss, accuracy, precision, recall, and F1 scores obtained in each epoch.

Text File

Code sample

TrainPointCloudClassificationModel example 1 (Python window)

The following sample demonstrates the use of this tool in the Python window:

import arcpy

arcpy.env.workspace = "D:/Deep_Learning_Workspace"
arcpy.ddd.TrainPointCloudClassificationModel(
    "Powerline_Training.pctd", "D:/DL_Models", "Powerline", 
    attributes=['INTENSITY', 'RETURN_NUMBER', 'NUMBER_OF_RETURNS'],
    target_classes=[14, 15], background_class=1,
    class_descriptions=[[1, "Background"], [14, "Wire Conductor"], [15, "Transmission Tower"]],
    model_selection_criteria="F1_SCORE", max_epochs=10)

Environments

Related topics