Available with Image Analyst license.
One capability of motion imagery is tracking specific objects in video data while it plays. These objects can be stationary or moving, may become obscured and re-emerge, or may change shape (such as a person entering a vehicle). The object tracking capability in full-motion video (FMV) provides automated and computer-assisted tools for a variety of object tracking situations in video imagery. It relies on deep learning and computer vision technology to assist in object tracking, feature extraction, and matching. You can build a deep learning object tracking model and use the suite of tools to select and track an object of interest. The centroids of the objects' identification rectangles can be digitized and saved as a point feature class in the project's geodatabase. The saved points can then be displayed as the archived video plays.
Requirements
Object tracking capability in FMV is available in ArcGIS AllSource with the ArcGIS Image Analyst extension.
Note:
Ensure that your video card drivers are current.
Deep learning model
Tracking objects in a video requires one or more trained deep learning models. The effectiveness of tracking depends on the quality of the deep learning training sample data and how closely the object of interest is associated with the training data. For example, to track a truck moving along a highway, you must have labeled training samples of trucks from many angles. The source of the training samples (annotated images) must be motion imagery. The labeled training samples are used to train the deep learning model to track objects, for instance, trucks in this case. The model will have limited ability to track objects with a different appearance, such as cars, but may have success in tracking larger recreational vehicles or buses.
You must install deep learning framework packages to perform deep learning workflows in ArcGIS AllSource. Use a variety of tools to prepare video and still imagery training data, label objects, create deep learning models, perform inference, and review results. For information about how to install these packages, see Install deep learning frameworks for ArcGIS.
Deep learning is computationally intensive, and a powerful GPU is recommended with CUDA Compute Capability support, version 6.0 or later.
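The deep learning frameworks installed with ArcGIS typically include PyTorch, which can report the compute capability of the active GPU as a (major, minor) tuple via `torch.cuda.get_device_capability()`. The sketch below shows how such a tuple would be checked against the 6.0 minimum; `meets_min_capability` is an illustrative helper, not part of any ArcGIS API.

```python
# Sketch: verify a GPU meets the minimum CUDA compute capability (6.0).
# With the frameworks installed, PyTorch reports device 0's capability as
# a (major, minor) tuple via torch.cuda.get_device_capability().

def meets_min_capability(capability, minimum=(6, 0)):
    """Return True if a (major, minor) compute capability meets the minimum."""
    return tuple(capability) >= tuple(minimum)

# A Pascal-class GPU (6.1) qualifies; an older Maxwell-class GPU (5.2) does not.
print(meets_min_capability((6, 1)))  # True
print(meets_min_capability((5, 2)))  # False
```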
For details about deep learning and deep learning workflows, see Introduction to deep learning and Deep learning in ArcGIS Pro. For more information about the suite of deep learning tools in ArcGIS, see An overview of the Deep Learning toolset.
Object Tracking tab
The Object Tracking contextual tab is enabled when you select a video in the Contents pane.
Note:
The Object Tracking tab is available once the deep learning packages are installed and enabled in ArcGIS AllSource.
The sections below describe the tools in the various groups on the Object Tracking tab.
Tracked Objects group
The tools in the Tracked Objects group allow you to identify and manage object tracking in video data:
- Enable Tracking—Activate object tracking using the specified object tracker configuration.
- Add Object—Add an object to perform object tracking by clicking, or interactively drawing, a rectangle around the object. Double-click to enable persistence mode.
- Move Object—Click an existing object's tracking rectangle to select it, and redraw the rectangle around the object's updated position. Double-click to enable persistence mode.
- Remove Object—Remove tracked objects from the video player by clicking, or drawing, a box around the object. Double-click to enable persistence mode.
- Delete Objects—Remove tracked objects from the video player and tracked objects manager by clicking, or drawing, a box around the objects. Double-click to enable persistence mode.
- Object(s) to Feature—Save the centroids of the object detection rectangles as a new feature class.
Save group
Use the Object(s) to Feature tool in the Save group to save tracked object centroids to a geodatabase.
Manage group
The tools in the Manage group help you manage the object tracker:
- Configure Tracker—Configure the object tracker and apply updates to the default object tracker models.
- Tracked Objects—Display the Tracked Objects Manager pane.
Configure Object Tracker pane
Click the Configure Tracker button to open the Configure Object Tracker pane. The pane contains the Object Tracking Model and Automatic Detection Model settings.
Settings
The Object Tracking Model parameter allows you to choose the deep learning model and set additional parameters for tracking objects.
Click the browse button to open the Add Deep Learning Model From Path dialog box. Specify the path to the deep learning model package file (.dlpk) by providing a URL or by browsing to the file in a local directory. You can assign an alias for the model's name in the Model text box. Click Add to load the model and close the dialog box. The model name appears in, and is selected from, the Model drop-down list.
The settings contain options to control object tracking, including Minimum Object Size (pixels), Detect Lost Tracks, Recover Lost Tracks, and Auto Detection Model.
- Detect Lost Tracks—Specify whether the object is successfully tracked based on changes in appearance, such as changes in view angle, obscuration, or movement out of the frame. The default is checked.
- Interval (frames)—Set the interval, in number of frames, at which the application checks for object appearance changes. The default is 5.
- Recover Lost Tracks—Specify whether an attempt is made to find an object after the track has been lost. The default is checked.
- Confidence Threshold (0-1)—Set the minimum ratio between matched source image features and searched object features for successful recovery. The confidence threshold is a number between 0 and 1. The default value is 0.1.
- Overlap Threshold (0-1)—Set the minimum overlap ratio between the detected object and searched object for successful recovery. The threshold is a number between 0 and 1. The default value is 0.1.
- Max Search Interval (frames)—Set the maximum search interval, defined in units of video frames, when an object is lost. The default value is 60.
- Status Queue Size—Set the number of frames for which an object's status is retained after the object is lost, until the search interval expires.
- Auto Detection Model—Specify whether the detection and identification of target objects is performed automatically using a deep learning-based detector model. The default is unchecked. Specify the path to the deep learning model package file by browsing to the file in a local directory using the browse button and selecting the .dlpk file or providing a URL.
- Interval (frames)—Set the interval, in number of frames, at which the automatic detector is run. The default is 5 frames.
- Confidence Threshold (0-1)—Set the minimum confidence score an automatic detection must meet to be accepted. The threshold is a number between 0 and 1. The default value is 0.1.
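The recovery thresholds above behave like standard feature-match and overlap tests. As an illustration only (the product does not expose this logic as an API, and the function names are hypothetical), a minimal sketch of how a confidence ratio and an intersection-over-union overlap check could gate a successful recovery:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def recovery_accepted(match_ratio, detected_box, searched_box,
                      confidence_threshold=0.1, overlap_threshold=0.1):
    """Accept a recovery only if both thresholds are met (defaults mirror 0.1)."""
    return (match_ratio >= confidence_threshold
            and iou(detected_box, searched_box) >= overlap_threshold)
```

Raising either threshold makes recovery stricter: fewer false re-acquisitions, but a higher chance the track stays lost.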
Tracked Objects Manager pane
The Tracked Objects Manager pane is where you can view and manage tracked objects. The table lists the Status, ID, and Source values, described below, for every tracked object.
- Status—The status of each tracked object: actively tracked, lost, or in a search.
- ID—The unique identifier for each tracked object.
- Source—The source video file in which the object is identified.
The Tracked Objects Manager pane interacts with the tools in the Tracked Objects group on the ribbon when Enable Tracking is active.
- Enable Tracking—Activate object tracking using the specified object tracker configuration.
- Add Object—Add an object to perform object tracking by clicking, or interactively drawing, a rectangle around the object. Double-click to enable persistence mode.
- Move Object—Move an existing object by clicking its tracking rectangle to select it and redrawing the rectangle around the object's updated position. Double-click to enable persistence mode.
- Remove Object—Remove tracked objects from the video player by clicking, or drawing a box around, the objects. Double-click to enable persistence mode.
- Delete Objects—Remove tracked objects from the video player and Tracked Objects Manager pane by clicking, or drawing a box around, the objects. Double-click to enable persistence mode.
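The Status values shown in the pane can be thought of as a small state machine driven by the Max Search Interval setting: an unmatched object enters a search, and if it is not re-acquired before the interval expires it becomes lost. The following is an illustrative sketch, not ArcGIS code; all class and method names are hypothetical:

```python
from enum import Enum

class Status(Enum):
    ACTIVE = "actively tracked"
    SEARCHING = "in a search"
    LOST = "lost"

class TrackedObject:
    """Illustrative model of one row in the Tracked Objects Manager table."""
    def __init__(self, obj_id, source):
        self.id = obj_id          # unique identifier
        self.source = source      # source video file
        self.status = Status.ACTIVE
        self.frames_since_seen = 0

    def update(self, matched, max_search_interval=60):
        """Advance one frame; search for an unmatched object until the interval expires."""
        if matched:
            self.status = Status.ACTIVE
            self.frames_since_seen = 0
        else:
            self.frames_since_seen += 1
            self.status = (Status.SEARCHING
                           if self.frames_since_seen <= max_search_interval
                           else Status.LOST)
```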
Track objects
To track objects, complete the following steps:
- Load a deep learning model and set the tracking parameters in the Configure Object Tracker pane.
- Click Enable Tracking to activate the object tracking tools.
- Click Add Object to draw a rectangle around the object, or click the object you want to track in the video player.
This step is not required when using Auto Detector mode.
The object will be tracked in every video frame.
- On the Tracked Objects Manager pane, view the status of the tracked objects.
- If the object becomes obscured and tracking is lost, click Move Object, and redraw a rectangle around the object's updated position to re-engage tracking.
This step is not required when using Auto Detector mode.
- Optionally, click Add Object to add an object to be tracked.
This step is not required when using Auto Detector mode.
- Optionally, click Remove Object to remove an object from active tracking.
This step is not required when using Auto Detector mode.
- On the Object Tracking tab, in the Save group, click Object(s) to Feature and specify the output location and name prefix for the feature class that will store the object centroids.
Optionally, specify whether the feature class will be added to the map, as well as the frequency interval in seconds by which the centroids will be saved.
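The geometry behind the saved points is straightforward: each point is the centroid of a tracking rectangle, sampled at the chosen frequency interval. A minimal sketch of that math (illustrative only; the geodatabase write itself is handled by the Object(s) to Feature tool, and these function names are hypothetical):

```python
def centroid(rect):
    """Center point of a tracking rectangle given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = rect
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def sample_centroids(rects, fps, interval_seconds):
    """Keep one centroid per interval from a per-frame list of rectangles."""
    step = max(1, round(fps * interval_seconds))
    return [centroid(r) for i, r in enumerate(rects) if i % step == 0]

# A 61-frame track at 30 fps, sampled once per second, yields 3 points.
track = [(i, i, i + 10, i + 10) for i in range(61)]
points = sample_centroids(track, fps=30, interval_seconds=1.0)
```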