Classify Point Cloud Using Trained Model (3D Analyst)

Summary

Classifies a point cloud using a deep learning model.

Usage

  • This tool requires the installation of deep learning frameworks.

    To set up your machine to use deep learning frameworks in ArcGIS AllSource, see Install deep learning frameworks for ArcGIS.

  • The tool classifies all points in the input point cloud by default. If any points in the input point cloud are already classified correctly, exclude those points from being modified by indicating which class codes should be edited or preserved using the Existing Class Code Handling and Existing Class Codes parameters.

    Learn more about classifying a point cloud with deep learning

  • The input point cloud must have the same attributes as the training data used to develop the classification model. For example, if the trained model used the intensity attribute, the point cloud being processed must also have intensity values. The point density and distribution should also be similar to the data used when training the model.

  • If the spatial reference of the input point cloud does not use a projected coordinate system, use the Output Coordinate System environment to define the projected coordinate system that will be applied when classifying its points.

  • The model is trained to process point cloud data in blocks of a particular size that contain a certain number of points and a given set of attributes. The input point cloud is processed by dividing it into blocks defined by the model. Multiple blocks can be processed at a time, and the set of blocks that are processed together is referred to as a batch. The batch size controls the amount of computer resources that will be used during the inferencing operation.

    Although the CPU can be used to process the point cloud, inferencing runs more efficiently with a CUDA-capable NVIDIA GPU. You can limit the amount of GPU memory that is consumed by specifying the number of simultaneously processed blocks in the Batch Size parameter. Avoid specifying a batch size that exceeds the computer's resources.

    The available GPU memory before and while the tool is run can be observed through NVIDIA's System Management Interface (SMI), a command line utility that is automatically installed with the NVIDIA driver for your GPU and that tracks GPU usage. To control how much memory remains available while the tool is run, evaluate a subset of the point cloud data with a small batch size and incrementally increase that value until the desired usage is achieved; a sketch of this workflow appears after the code sample at the end of this topic. Otherwise, you can let the tool automatically maximize the use of the GPU's memory.

  • The Reference Surface parameter is required when the input model was trained with relative height attributes. The raster surface is used as a reference height from which relative heights are interpolated for each point, giving the model additional information with which to differentiate objects. The raster provided for this parameter should represent the same type of data as the raster that was used in the training data that created the model. In most cases, this will be a raster created from ground classified points. A raster surface can be generated from the ground classified points in the LAS dataset by applying a ground filter and using the LAS Dataset To Raster tool, as shown in the sketch after this list. A ground surface can also be generated from a point cloud scene layer using the Point Cloud To Raster tool. Raster surfaces that are not sourced from the input point cloud can also be used, but you must ensure that their z-values correspond appropriately with the z-values in the point cloud.

  • When the input model was trained with points from certain classes excluded from the training data, use the Excluded Class Codes parameter to ensure that those points are omitted from the set of points evaluated by the model. Excluding classes that do not provide useful context for the objectives of a given model reduces the number of points that are evaluated, which improves the speed of both training and applying the model. For example, points representing buildings will usually have no relevance for points that represent objects such as traffic lights, power lines, or cars, and building points can be reliably classified using the Classify LAS Building tool. If points with class 6, which represents buildings, were excluded from the training data used to create the model, the input point cloud should also have its building points classified and excluded in this tool (see the sketch after this list).
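
The following sketch ties these recommendations together. It assumes hypothetical inputs: a survey.lasd LAS dataset whose ground points are already classified as class 2 and whose building points are classified as class 6, and a model.dlpk package that was trained with relative height attributes, with building points excluded, and with output classes 14 and 15. It is an illustrative outline, not a definitive recipe.

import arcpy

arcpy.env.workspace = 'C:/data'

# Hypothetical inputs: ground points carry class 2 and building points carry
# class 6 in survey.lasd; model.dlpk is a placeholder for the trained model.
lasd = 'survey.lasd'

# Filter the LAS dataset to its ground-classified points (class 2).
ground_lyr = arcpy.management.MakeLasDatasetLayer(lasd, 'ground_lyr', class_code=[2])

# Interpolate a ground reference surface from the filtered layer.
arcpy.conversion.LasDatasetToRaster(
    ground_lyr, 'ground.tif', 'ELEVATION',
    'BINNING AVERAGE LINEAR', 'FLOAT', 'CELLSIZE', 2
)

# Classify the point cloud, supplying the ground raster as the reference
# surface, excluding building points to mirror the training data, and
# starting with a conservative batch size.
arcpy.ddd.ClassifyPointCloudUsingTrainedModel(
    lasd, 'model.dlpk', [14, 15],
    reference_height='ground.tif',
    excluded_class_codes=[6],
    batch_size=4
)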

Parameters

Label | Explanation | Data Type
Target Point Cloud

The point cloud that will be classified.

LAS Dataset Layer
Input Model Definition

The input Esri model definition file (*.emd) or deep learning package (*.dlpk) that will be used to classify the point cloud. A URL for a deep learning package that is published on ArcGIS Online or ArcGIS Living Atlas can also be used.

File; String
Target Classification

The class codes from the trained model that will be used to classify the input point cloud. All classes from the input model will be used by default unless a subset is specified.

String
Existing Class Code Handling
(Optional)

Specifies how the editable points from the input point cloud will be defined.

  • Edit All Points—All points in the input point cloud will be edited. This is the default.
  • Edit Selected Points—Only points with the class codes specified in the Existing Class Codes parameter will be edited; all other points will remain unchanged.
  • Preserve Selected Points—Points with the class codes specified in the Existing Class Codes parameter will be preserved; the remaining points will be edited.
String
Existing Class Codes
(Optional)

The classes for which points will be edited or have their original class code designation preserved based on the Existing Class Code Handling parameter value.

Long
Compute statistics
(Optional)

Specifies whether statistics will be computed for the .las files referenced by the LAS dataset. Computing statistics provides a spatial index for each .las file, which improves analysis and display performance. Statistics also enhance the filtering and symbology experience by limiting the display of LAS attributes, such as classification codes and return information, to values that are present in the .las file.

  • Checked—Statistics will be computed. This is the default.
  • Unchecked—Statistics will not be computed.
Boolean
Processing Boundary
(Optional)

The polygon boundary that defines the subset of points to be processed from the input point cloud. Points outside the boundary features will not be evaluated.

Feature Layer
Update pyramid
(Optional)

Specifies whether the LAS dataset pyramid will be updated after the class codes are modified.

  • Checked—The LAS dataset pyramid will be updated. This is the default.
  • Unchecked—The LAS dataset pyramid will not be updated.
Boolean
Reference Surface
(Optional)

The raster surface that will be used to provide relative height values for each point in the point cloud data. Points that do not overlap with the raster will be omitted from the analysis.

Raster Layer
Excluded Class Codes
(Optional)

The class codes that will be excluded from processing. Any value in the range of 0 to 255 can be specified.

Long
Batch Size
(Optional)

The number of point cloud data blocks that will be processed simultaneously during the inferencing operation. Generally, a larger batch size will result in faster processing of the data, but avoid using a batch size that is too large for the resources of the computer. When the GPU is used, the available GPU memory is the most common constraint on the batch size the computer can handle. The memory used by a given block depends on the model's block point limit and required point attributes. To determine the available GPU memory and evaluate GPU memory usage, use the NVIDIA SMI command line utility described in the Usage section.

For certain architectures, an optimal batch size will be calculated if the batch size is unspecified. When the GPU is used, the optimal batch size will be based on how much memory is consumed by a given block of data and how much GPU memory is freely available when the tool is run. When the CPU is used for inferencing, each block is processed on a CPU thread, and the optimal batch size is calculated to be half of the available, unused CPU threads.

Long

Derived Output

Label | Explanation | Data Type
Output Point Cloud

The point cloud that was classified by the deep learning model.

LAS Dataset Layer

arcpy.ddd.ClassifyPointCloudUsingTrainedModel(in_point_cloud, in_trained_model, output_classes, {in_class_mode}, {target_classes}, {compute_stats}, {boundary}, {update_pyramid}, {reference_height}, {excluded_class_codes}, {batch_size})
Name | Explanation | Data Type
in_point_cloud

The point cloud that will be classified.

LAS Dataset Layer
in_trained_model

The input Esri model definition file (*.emd) or deep learning package (*.dlpk) that will be used to classify the point cloud. A URL for a deep learning package that is published on ArcGIS Online or ArcGIS Living Atlas can also be used.

File; String
output_classes
[output_classes,...]

The class codes from the trained model that will be used to classify the input point cloud. All classes from the input model will be used by default unless a subset is specified.

String
in_class_mode
(Optional)

Specifies how the editable points from the input point cloud will be defined.

  • EDIT_ALL—All points in the input point cloud will be edited. This is the default.
  • EDIT_SELECTED—Only points with class codes specified in the target_classes parameter will be edited; all other points will remain unchanged.
  • PRESERVE_SELECTED—Points with class codes specified in the target_classes parameter will be preserved; the remaining points will be edited.
String
target_classes
[target_classes,...]
(Optional)

The classes for which points will be edited or have their original class code designation preserved based on the in_class_mode parameter value.

Long
compute_stats
(Optional)

Specifies whether statistics will be computed for the .las files referenced by the LAS dataset. Computing statistics provides a spatial index for each .las file, which improves analysis and display performance. Statistics also enhance the filtering and symbology experience by limiting the display of LAS attributes, such as classification codes and return information, to values that are present in the .las file.

  • COMPUTE_STATS—Statistics will be computed. This is the default.
  • NO_COMPUTE_STATS—Statistics will not be computed.
Boolean
boundary
(Optional)

The polygon boundary that defines the subset of points to be processed from the input point cloud. Points outside the boundary features will not be evaluated.

Feature Layer
update_pyramid
(Optional)

Specifies whether the LAS dataset pyramid will be updated after the class codes are modified.

  • UPDATE_PYRAMID—The LAS dataset pyramid will be updated. This is the default.
  • NO_UPDATE_PYRAMID—The LAS dataset pyramid will not be updated.
Boolean
reference_height
(Optional)

The raster surface that will be used to provide relative height values for each point in the point cloud data. Points that do not overlap with the raster will be omitted from the analysis.

Raster Layer
excluded_class_codes
[excluded_class_codes,...]
(Optional)

The class codes that will be excluded from processing. Any value in the range of 0 to 255 can be specified.

Long
batch_size
(Optional)

The number of point cloud data blocks that will be processed simultaneously during the inferencing operation. Generally, a larger batch size will result in faster processing of the data, but avoid using a batch size that is too large for the resources of the computer. When the GPU is used, the available GPU memory is the most common constraint on the batch size the computer can handle. The memory used by a given block depends on the model's block point limit and required point attributes. To determine the available GPU memory and evaluate GPU memory usage, use the NVIDIA SMI command line utility described in the Usage section.

For certain architectures, an optimal batch size will be calculated if the batch size is unspecified. When the GPU is used, the optimal batch size will be based on how much memory is consumed by a given block of data and how much GPU memory is freely available when the tool is run. When the CPU is used for inferencing, each block is processed on a CPU thread, and the optimal batch size is calculated to be half of the available, unused CPU threads.

Long

Derived Output

Name | Explanation | Data Type
out_point_cloud

The point cloud that was classified by the deep learning model.

LAS Dataset Layer

Code sample

ClassifyPointCloudUsingTrainedModel example (Python window)

The following sample demonstrates the use of this tool in the Python window:

import arcpy
arcpy.env.workspace = 'C:/data/'

# Classify points into the model's output classes 14 and 15, editing only
# points that currently carry class 0 (never classified) or class 1
# (unassigned).
arcpy.ddd.ClassifyPointCloudUsingTrainedModel(
    '2018_survey.lasd', 'electrical_infrastructure_classification.emd',
    [14, 15], 'EDIT_SELECTED', [0, 1]
)
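
To gauge a workable batch size before committing to a full run, as suggested in the Usage section, you can classify a small subset while watching GPU memory. The following is a minimal sketch of that workflow; test_area.shp is a hypothetical boundary polygon covering a small, representative area of the survey, and the remaining inputs reuse those from the sample above.

import time
import arcpy

arcpy.env.workspace = 'C:/data/'

# Hypothetical polygon delineating a small, representative test area.
test_area = 'test_area.shp'

# While this loop runs, watch GPU memory in a separate console with NVIDIA's
# SMI utility: nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
for batch_size in (2, 4, 8, 16):
    start = time.time()
    arcpy.ddd.ClassifyPointCloudUsingTrainedModel(
        '2018_survey.lasd', 'electrical_infrastructure_classification.emd',
        [14, 15], 'EDIT_SELECTED', [0, 1],
        boundary=test_area, batch_size=batch_size
    )
    print(f'batch_size={batch_size}: {time.time() - start:.1f} s')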

Related topics