Run inference tool parameters

Available with Image Server

In the Run inference step, several tool parameters are available when performing deep learning analysis in Deep Learning Studio.

Once the deep learning model is trained, you can use the inferencing tool to create the output of the deep learning process. The output can be classified pixels, a feature layer of objects, or a feature layer indicating the quality of objects.
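
Each of these three output types corresponds to a raster analysis function in the ArcGIS API for Python, which can be useful for scripting the same workflow. The following is a minimal sketch, assuming a published trained model item and a hosted imagery layer; the portal URL, credentials, and item IDs are placeholders:

from arcgis.gis import GIS
from arcgis.learn import classify_pixels, detect_objects, classify_objects

# Connect to the portal hosting the imagery and the trained model
# (hypothetical URL and credentials)
gis = GIS("https://myportal.example.com/portal", "username", "password")

model = gis.content.get("<trained-model-item-id>")        # trained .dlpk item (placeholder ID)
imagery = gis.content.get("<imagery-item-id>").layers[0]  # input imagery layer (placeholder ID)

# Classified pixels, for example a land-cover raster
pixels = classify_pixels(input_raster=imagery, model=model, output_name="classified_pixels")

# A feature layer of detected objects, for example building footprints
objects = detect_objects(input_raster=imagery, model=model, output_name="detected_objects")

# A feature layer indicating the class or quality of existing objects
labeled = classify_objects(input_raster=imagery, model=model, output_name="classified_objects")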

Note:
The type of deep learning analysis available for the inferencing tool is defined by the option you selected when the Deep Learning Studio project was created.
You can adjust the following parameters in this tool (see the scripting sketch after the list):
Model

Specify the deep learning model to use. The default is the model trained in the project, but any available model can be used.

Note:

Deep learning packages from ArcGIS Living Atlas are now supported as input.

Model parameters

The model arguments created while training the model.

Input imagery source

The input imagery source to use. The default is the imagery source specified in the project, but any available imagery layer or image collection from the data store can be used. For object classification, the input can also be a feature layer with attachments.

Area of interest

A polygon delineating the area where the process will run.

Processing mode

Specify how each item in the imagery layer is processed.
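
These parameters map to arguments of the corresponding raster analysis functions in the ArcGIS API for Python. The following sketch shows an object detection run; the model argument names, extent coordinates, and output name are illustrative assumptions that depend on your model and data (the model and imagery objects come from the earlier sketch):

from arcgis.learn import detect_objects

detected = detect_objects(
    input_raster=imagery,             # input imagery source
    model=model,                      # trained deep learning model
    model_arguments={                 # model parameters; names depend on the trained model
        "padding": 100,
        "threshold": 0.5,
        "batch_size": 4,
    },
    output_name="detected_objects_aoi",
    context={
        # Area of interest expressed as a processing extent (assumed coordinates)
        "extent": {
            "xmin": -13160000, "ymin": 4030000,
            "xmax": -13150000, "ymax": 4040000,
            "spatialReference": {"wkid": 3857},
        },
    },
    process_all_raster_items=False,   # processing mode: treat the collection as one mosaicked image
)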

After the inferencing tool runs, the resulting output is visible in the map. Evaluate the output from the deep learning process to determine whether the results are satisfactory or more work is necessary. The decision to continue improving the model is typically based on the desired output, the amount of time and effort necessary to improve the results, and the project time frame.

If object detection or object classification tools were used, each resulting object includes a confidence value indicating the model's level of confidence in that object. You can review the confidence values by selecting objects and reviewing the pop-up information.
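
Confidence values can also be reviewed programmatically. The following is a minimal sketch, assuming the output feature layer stores scores in a field named Confidence (the default for object detection output); verify the field name against your layer's schema. The detected item comes from the sketch after the parameter list:

# Query the detection results for high-confidence objects
results_layer = detected.layers[0]
high_confidence = results_layer.query(where="Confidence >= 80", out_fields="*")

print(f"{len(high_confidence.features)} objects with confidence >= 80")
for feature in high_confidence.features[:5]:  # inspect a few attribute sets
    print(feature.attributes)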

