Prepare smart assistants

To prepare smart assistants for the Survey123 field app, you must have an object detection or image classification model. You can create the models used by smart assistants, download models from ArcGIS Living Atlas of the World, or access models through built-in APIs. For more information, see Smart assistants.

Add smart attributes to a survey

The following sections outline how to add smart attributes to a survey.

Add a model

To add a model to a survey, complete the following steps:

  1. Create a survey in Survey123 Connect.
  2. Add an image question.
  3. Copy the model files (<model_name>.tflite and <model_name>.txt or <model_name>.emd) to the survey's media folder.

    File names cannot contain spaces.

    The Common Object Detection model in ArcGIS Living Atlas of the World can be used to familiarize yourself with smart assistants. You can either download the model or link the model to the survey.

  4. Add the smartAttributes parameter in the bind::esri:parameters column and specify the name of the model.

    smartAttributes=CommonObjectDetection

  5. Optionally, include properties after the model name in the bind::esri:parameters column to control the minimum confidence score, camera preview, labels, and object classes. Use an ampersand (&) to separate properties.

    smartAttributes=CommonObjectDetection&minScore=0.6&cameraPreview=true
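Survey123 Connect validates the survey itself; the following Python sketch (hypothetical helper, for illustration only) mirrors the file-layout rules from step 3: the model files must be in the media folder, the .tflite model must be paired with a .txt or .emd file, and file names cannot contain spaces.

```python
from pathlib import Path

def validate_model_files(media_folder: str, model_name: str) -> list[str]:
    """Check the media-folder layout rules for a smart assistant model.

    Hypothetical helper: Survey123 Connect performs its own validation;
    this sketch only restates the rules from step 3.
    """
    problems = []
    media = Path(media_folder)
    if " " in model_name:
        problems.append("model name contains spaces")
    if not (media / f"{model_name}.tflite").exists():
        problems.append(f"missing {model_name}.tflite in media folder")
    # The model must be paired with either a .txt label file or an .emd file.
    if not ((media / f"{model_name}.txt").exists()
            or (media / f"{model_name}.emd").exists()):
        problems.append(f"missing {model_name}.txt or {model_name}.emd")
    return problems
```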

When an image is captured, the objects detected in the image are written to its EXIF metadata. You can use calculations in the survey to extract the attributes from an image and use them in other questions.

The following list describes the required and optional properties that can be used with the smartAttributes parameter.

<model_name>
    Default value: N/A
    Required. The object detection or image classification model. Model names cannot contain spaces. The name must match the file name (without the extension) of the model file stored in the survey's media folder.
    Value: <model_name>
    Example: smartAttributes=CommonObjectDetection

minScore
    Default value: 0.5
    Optional. Specify the minimum confidence level for object detection or image classification.
    Values: 0–1
    Example: minScore=0.7

cameraPreview
    Default value: false
    Optional. Enable real-time preview. Objects detected by the model are identified by bounding boxes in the live camera view.
    Values: true | false
    Example: cameraPreview=true

label
    Default value: true
    Optional. Specify whether object class labels are shown in the camera preview. This is valid only when cameraPreview=true.
    Values: true | false
    Example: label=false

class
    Default value: N/A (all classes in the model will be used)
    Optional. The object classes to detect. All other classes in the model will be ignored. Class names must be identical to the classes in the model. Multiple classes can be provided, separated by commas.
    Value: <class_name>
    Example: class=parking_meter,stop_sign,traffic_light
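The field app parses the bind::esri:parameters value internally; as an illustration only (not a Survey123 API), the Python sketch below splits a smartAttributes string on the ampersand separator and applies the default values from the properties above.

```python
def parse_smart_attributes(parameter: str) -> dict:
    """Split a smartAttributes parameter string into its properties.

    Illustrative only: Survey123 parses bind::esri:parameters itself.
    Defaults follow the documented properties (minScore 0.5,
    cameraPreview false, label true, class empty = all classes).
    """
    props = {"minScore": 0.5, "cameraPreview": False, "label": True, "class": []}
    parts = parameter.split("&")
    # The first part is smartAttributes=<model_name>.
    props["model"] = parts[0].split("=", 1)[1]
    for part in parts[1:]:
        key, value = part.split("=", 1)
        if key == "minScore":
            props[key] = float(value)
        elif key in ("cameraPreview", "label"):
            props[key] = value == "true"
        elif key == "class":
            props[key] = value.split(",")
    return props
```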

Extract attributes from an image

To extract attributes from an image, complete the following steps:

  1. Add a text or calculate question to the survey.

    If you use a text question, the JSON object will be visible on the form. If you use a calculate question, the JSON object will not be visible on the form but values can be referenced by other questions.

  2. In the calculation column, enter the following: string(pulldata("@exif", ${photo}, "ImageDescription"))

    This expression retrieves the image description from the EXIF metadata of the image, in the form of a JSON object. Ensure that the bind::esri:fieldLength column of the question is long enough to store the result. The JSON contains information about the objects detected in the image in which classNames is a comma-separated list of the object classes and classes lists the name, score, and bounding box coordinates of each object. See the following example:

    {
      "classNames": "person,bottle,keyboard",
      "classes": [
        {
          "name": "person",
          "score": 0.67421875,
          "xmin": 47,
          "ymin": 20,
          "xmax": 1086,
          "ymax": 262
        },
        {
          "name": "bottle",
          "score": 0.7625,
          "xmin": 237,
          "ymin": 469,
          "xmax": 552,
          "ymax": 639
        },
        {
          "name": "keyboard",
          "score": 0.55078125,
          "xmin": 28,
          "ymin": 49,
          "xmax": 1078,
          "ymax": 385
        }
      ]
    }
  3. Add another text question to the survey.
  4. In the calculation column, enter the following: string(pulldata("@json", ${results}, "classNames"))

    This expression retrieves the classNames value from the JSON object stored in the question from step 1 (named results in this example).

Individual results can optionally be retrieved from the results question and used to populate select one, select multiple, or other text questions. For examples, see the Smart Attributes sample survey in Survey123 Connect.
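The pulldata("@json") calls above read values out of the JSON object. The Python sketch below (illustrative, not part of Survey123) does the equivalent with the standard json module, using the sample JSON shown earlier: it extracts the classNames list and keeps only detections above a minimum confidence score.

```python
import json

# Sample ImageDescription payload, matching the example JSON above.
SAMPLE = """{
  "classNames": "person,bottle,keyboard",
  "classes": [
    {"name": "person", "score": 0.67421875, "xmin": 47, "ymin": 20, "xmax": 1086, "ymax": 262},
    {"name": "bottle", "score": 0.7625, "xmin": 237, "ymin": 469, "xmax": 552, "ymax": 639},
    {"name": "keyboard", "score": 0.55078125, "xmin": 28, "ymin": 49, "xmax": 1078, "ymax": 385}
  ]
}"""

def class_names(image_description: str) -> list[str]:
    # Equivalent of pulldata("@json", ${results}, "classNames").
    return json.loads(image_description)["classNames"].split(",")

def detections_above(image_description: str, min_score: float) -> list[str]:
    # Keep only objects whose confidence score clears the threshold.
    return [c["name"] for c in json.loads(image_description)["classes"]
            if c["score"] >= min_score]
```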

Add smart annotation to a survey

To add smart annotation to a survey, complete the following steps:

  1. Create a survey in Survey123 Connect.
  2. Add an image question with the annotate appearance.

    Optionally, type method=camera,browse in the body::esri:style column for the image question. The default behavior for the annotate appearance is to allow new photos to be taken with the camera in the field app. Adding method=camera,browse ensures that smart annotation can be used with images selected from device storage, in addition to photos that are taken with the camera.

  3. Copy the model files (<model_name>.tflite and <model_name>.txt or <model_name>.emd) to the survey's media folder.

    File names cannot contain spaces.

    The Common Object Detection model in ArcGIS Living Atlas of the World can be used to familiarize yourself with smart assistants. You can either download the model or link the model to the survey.

  4. Add the smartAnnotation parameter to the bind::esri:parameters column and specify the name of the model.

    smartAnnotation=CommonObjectDetection

  5. Optionally, include properties after the model name in the bind::esri:parameters column to control the minimum confidence score, camera preview, labels, object classes, bounding boxes, and font formatting. Use an ampersand (&) to separate properties.

    smartAnnotation=CommonObjectDetection&minScore=0.6&cameraPreview=true&class=car&fontSize=24
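In practice you type the parameter string directly into the XLSForm; the Python sketch below (hypothetical helper, for illustration) shows how such a string is composed: the model name first, then properties joined with ampersands, with booleans written as lowercase true/false.

```python
def build_smart_annotation(model: str, **props) -> str:
    """Build a smartAnnotation value for bind::esri:parameters.

    Hypothetical helper for illustration; Survey123 has no such API.
    Properties are appended after the model name, separated by '&'.
    """
    if " " in model:
        raise ValueError("model names cannot contain spaces")
    parts = [f"smartAnnotation={model}"]
    for key, value in props.items():
        # Booleans are written as lowercase true/false in the parameter string.
        if isinstance(value, bool):
            value = "true" if value else "false"
        parts.append(f"{key}={value}")
    return "&".join(parts)
```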

When an image is captured, the objects detected in the image are identified by bounding boxes and labels on the annotation canvas. You can add, modify, and delete annotations.

The following list describes the required and optional properties that can be used with the smartAnnotation parameter.

<model_name>
    Default value: N/A
    Required. The object detection model. Model names cannot contain spaces. The name must match the file name (without the extension) of the model file stored in the survey's media folder.
    Value: <model_name>
    Example: smartAnnotation=CommonObjectDetection

minScore
    Default value: 0.5
    Optional. Specify the minimum confidence level for object detection.
    Values: 0–1
    Example: minScore=0.7

cameraPreview
    Default value: false
    Optional. Enable real-time preview. Objects detected by the model are identified by bounding boxes in the live camera view.
    Values: true | false
    Example: cameraPreview=true

label
    Default value: true
    Optional. Specify whether object class labels are shown in the camera preview and annotation canvas. This is valid only when cameraPreview=true.
    Values: true | false
    Example: label=false

class
    Default value: N/A (all classes in the model will be used)
    Optional. The object classes to detect. All other classes in the model will be ignored. Class names must be identical to the classes in the model. Multiple classes can be provided, separated by commas.
    Value: <class_name>
    Example: class=truck,car,motorcycle

boundingBoxes
    Default value: true
    Optional. Specify whether bounding box polygons are created as graphic elements in the annotation canvas to identify detected objects.
    Values: true | false
    Example: boundingBoxes=false

outlineWidth
    Default value: 2
    Optional. Specify the width of bounding boxes. This is valid only when boundingBoxes=true.
    Value: <integer>
    Example: outlineWidth=3

font
    Default value: Survey123 field app font
    Optional. The font to use for labels in the annotation canvas. Note that not every font is available on every device; review the annotation produced by your preferred devices to ensure that it appears as intended.
    Value: <font name>
    Example: font=verdana

fontSize
    Default value: 20
    Optional. Specify the size of the labels in the annotation canvas.
    Value: <integer>
    Example: fontSize=30

bold
    Default value: false
    Optional. Specify whether bold formatting is applied to labels in the annotation canvas.
    Values: true | false
    Example: bold=true

italic
    Default value: false
    Optional. Specify whether italic formatting is applied to labels in the annotation canvas.
    Values: true | false
    Example: italic=true

You can customize how objects are annotated in the canvas by creating a custom annotation palette. You can format labels and define the style of the bounding boxes or marker symbols used to identify each object class. To apply a custom annotation style to an object class, the value in the label column in the XLSPalette template must match the name of the class in the model. For more information, see Draw and annotate palettes.

Note:

When you use a custom annotation palette with smart annotation, the following properties are ignored:

  • label (for the annotation canvas only)
  • boundingBoxes
  • outlineWidth
  • font
  • fontSize
  • bold
  • italic
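Because a custom style is applied only when the palette's label value equals the class name in the model, mismatches silently fall back to defaults. A quick way to catch them (illustrative Python, not a Survey123 API):

```python
def unmatched_palette_labels(palette_labels, model_classes):
    """Return palette labels that match no class in the model.

    Comparison is exact (case-sensitive), matching the requirement that
    the label column value in the XLSPalette template must equal the
    class name in the model.
    """
    return sorted(set(palette_labels) - set(model_classes))
```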

Add smart redaction to a survey

You can add redaction to an image question in a survey by adding the redaction parameter to the bind::esri:parameters column. There are three ways to configure image redaction in a survey:

  • Add redaction by including a model with the survey. This method is supported on Android, iOS, and Windows.
  • Add built-in face redaction. This method is supported on Android and iOS. Users may need to turn on enhanced camera features in the field app to enable smart redaction. For more information, see Machine learning.
  • Add manual redaction only. This method allows users to redact regions of an image by manually adding bounding boxes. This method is supported on Android, iOS, and Windows.

The following list describes the required and optional properties that can be used with the redaction parameter.

<model_name>
    Default value: N/A
    Required. The object detection model. Model names cannot contain spaces. The name must match the file name (without the extension) of the model file stored in the survey's media folder. Alternatively, use @faces to use built-in face detection or @manual to enable manual redaction only.
    Values: <model_name> | @faces | @manual
    Examples:
    redaction=CommonObjectDetection
    redaction=@faces
    redaction=@manual

minScore
    Default value: 0.5
    Optional. Specify the minimum confidence level for object detection. This is ignored when the model name is @faces or @manual.
    Values: 0–1
    Example: minScore=0.7

cameraPreview
    Default value: false
    Optional. Enable real-time preview. Objects detected by the model are identified by bounding boxes in the live camera view. This is ignored when the model name is @manual.
    Values: true | false
    Example: cameraPreview=true

label
    Default value: true
    Optional. Specify whether object class labels are shown in the camera preview. This is valid only when cameraPreview=true.
    Values: true | false
    Example: label=false

class
    Default value: N/A (all classes in the model will be used)
    Optional. The object classes to detect. All other classes in the model will be ignored. Class names must be identical to the classes in the model. Multiple classes can be provided, separated by commas. This is ignored when the model name is @faces or @manual.
    Value: <class_name>
    Example: class=person,cat,dog

engine
    Default value: N/A
    Optional. Use the built-in Apple Vision API for face detection. This property is valid only when the model name is @faces and is supported only on iOS devices. For more information, see Machine learning.
    Value: vision
    Example: engine=vision

effect
    Default value: pixelate
    Optional. Specify a redaction effect to apply to redacted regions.
    Values: pixelate | blur | blockout | symbol
    Example: effect=blur

symbol
    Default value: N/A
    Optional. The symbol to apply to redacted regions. This is valid only when effect=symbol.
    Values: <emoji> | <SVG file name>
    Examples:
    symbol=🚫
    symbol=blockout.svg

fillColor
    Default value: #000000
    Optional. The fill color of the blockout boxes. This is valid only when effect=blockout.
    Values: <HTML color name> | <hex color code>
    Example: fillColor=Blue

scale
    Default value: 1
    Optional. Specify the size of bounding boxes, up to a maximum of twice the default size. This is ignored when the model name is @manual.
    Values: 1 | 2
    Example: scale=2
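The effect property controls how a redacted region is obscured. As a rough illustration of what a blockout effect does (not Survey123's implementation, which operates on the captured image itself), the sketch below fills a bounding box in a small grid of RGB pixels with a solid fill color:

```python
def blockout(pixels, box, fill=(0, 0, 0)):
    """Apply a blockout-style redaction effect to a 2D grid of RGB pixels.

    pixels: list of rows of (r, g, b) tuples; box: (xmin, ymin, xmax, ymax).
    Illustrative only; shows the idea of replacing every pixel inside the
    detected object's bounding box with the fill color.
    """
    xmin, ymin, xmax, ymax = box
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            pixels[y][x] = fill
    return pixels
```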

Add redaction by including a model

To add redaction to a survey by including an object detection model, complete the following steps:

  1. Create a survey in Survey123 Connect.
  2. Add an image question.
  3. Copy the model files (<model_name>.tflite and <model_name>.txt or <model_name>.emd) to the survey's media folder.

    File names cannot contain spaces.

    The Common Object Detection model in ArcGIS Living Atlas of the World can be used to familiarize yourself with smart assistants. You can either download the model or link the model to the survey.

  4. Add the redaction parameter to the bind::esri:parameters column and specify the name of the model.

    redaction=CommonObjectDetection

  5. Optionally, include properties after the model name in the bind::esri:parameters column to control the minimum confidence score, camera preview, labels, object classes, engine, and redaction effects. Use an ampersand (&) to separate properties.

    redaction=CommonObjectDetection&minScore=0.6&cameraPreview=true&effect=blur

Add built-in face redaction

To add built-in face redaction to a survey without including an object detection model, complete the following steps:

  1. Create a survey in Survey123 Connect.
  2. Add an image question.
  3. Add the redaction parameter to the bind::esri:parameters column and name the model @faces.

    redaction=@faces

    The @faces property uses built-in technology to redact faces in images. Enhanced camera features must be enabled in the field app for this redaction to work. For more information, see Machine learning.

  4. Optionally, include properties after the model name in the bind::esri:parameters column to control the minimum confidence score, camera preview, labels, object classes, engine, and redaction effects. Use an ampersand (&) to separate properties.

    redaction=@faces&cameraPreview=true&effect=blur

Add manual redaction

To add manual redaction to a survey, complete the following steps:

  1. Create a survey in Survey123 Connect.
  2. Add an image question.
  3. Add the redaction parameter to the bind::esri:parameters column and name the model @manual.

    redaction=@manual

  4. Optionally, include properties after the model name in the bind::esri:parameters column to control the redaction effects. Use an ampersand (&) to separate properties.

    redaction=@manual&effect=blockout&fillColor=#000000
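The fillColor value in the example above accepts an HTML color name or a hex color code. As a small illustration (Python sketch, not part of Survey123), a #RRGGBB hex code such as #000000 maps to RGB components like this:

```python
def hex_to_rgb(color: str) -> tuple:
    """Convert a #RRGGBB hex color code to an (r, g, b) tuple.

    Illustrative only; Survey123 parses fillColor values itself and also
    accepts HTML color names, which this sketch does not handle.
    """
    value = color.lstrip("#")
    if len(value) != 6:
        raise ValueError("expected a #RRGGBB hex color code")
    # Each pair of hex digits is one 0-255 color component.
    return tuple(int(value[i:i + 2], 16) for i in range(0, 6, 2))
```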