The following are important considerations for ArcGIS Online organization administrators when working with ArcGIS Velocity.
Plan for the capacity of an ArcGIS Velocity subscription
A Velocity subscription includes processing and storage capabilities to collect real-time observations, detect incidents of interest, store observations historically, and analyze historical data for patterns over time. It is important, however, to consider your organization's target use cases and evaluate how to configure them within the capacity included in your ArcGIS Velocity license, especially regarding the velocity of incoming data. For details on license levels, see Licensing.
The velocity of real-time data streams varies due to factors such as the type of data, the number of assets being tracked, the number of sensors deployed, and the frequency at which new observations are reported. Severe storms, for example, are of interest to many organizations from the perspective of protecting critical infrastructure, but the number of storms active at any given time is low (depending on the study area) and updates do not typically come in at high velocities. On the other hand, a logistics or shipping company may have thousands of vehicles that report their position every few seconds.
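The contrast between these scenarios comes down to simple event-rate arithmetic. The sketch below estimates sustained events per second from an asset count and reporting interval; the storm and fleet figures are illustrative assumptions, not values from any particular deployment:

```python
def events_per_second(asset_count: int, report_interval_s: float) -> float:
    """Estimate the sustained event rate for a stream of tracked assets."""
    return asset_count / report_interval_s

# Illustrative assumptions: 20 active storm cells updating every 5 minutes
# versus a fleet of 5,000 vehicles each reporting every 5 seconds.
storm_rate = events_per_second(20, 300)
fleet_rate = events_per_second(5000, 5)
print(f"Storms: {storm_rate:.2f} events/sec; fleet: {fleet_rate:.0f} events/sec")
```

Even with far fewer assets, the fleet scenario produces orders of magnitude more observations per second, which is why the reporting frequency matters as much as the asset count when planning capacity.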
The use cases for real-time data streams are also varied. For example, transportation organizations may be interested in use cases such as the following:
- Fleet (vehicle) tracking
- Equipment tracking
- Anomaly detection (such as stalled cars on the roadway)
- Digital messaging (such as alerts on weather events)
- Work zone intrusion alerting
- Connected car tracking
Both data velocity and analytic use cases are considerations for effective use of your ArcGIS Velocity subscription. If your organization has fewer than 200 work vehicles or pieces of equipment to track, and asset locations are only updated every few minutes, the velocity supported in the Standard license may be sufficient. For more information on the data velocity supported at different license levels, see Licensing. Additionally, the Standard license supports running up to five items at a time, one of which in this case would be the feed ingesting the asset data. With the remaining four running items, you can configure four separate analytics to handle different alerting or analysis scenarios. Alternatively, you can perform multiple types of incident detection and alerting in a single real-time analytic, leaving the remaining three running items for additional feeds or analytics that collect and track other data streams.
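A quick back-of-the-envelope check like the one below can indicate whether a planned workload fits within a given license level. The 25 events-per-second limit used here is a placeholder assumption only; consult the Licensing topic for the actual velocity supported at each level:

```python
def within_velocity_limit(asset_count: int, report_interval_s: float,
                          limit_eps: float) -> bool:
    """Return True if the estimated event rate fits under a velocity limit."""
    return asset_count / report_interval_s <= limit_eps

# 200 vehicles reporting every 3 minutes is roughly 1.1 events/sec.
# The 25 events/sec limit is hypothetical -- see Licensing for real values.
print(within_velocity_limit(200, 180, 25))   # modest fleet, slow updates
print(within_velocity_limit(5000, 5, 25))    # large fleet, fast updates
```

The first scenario fits comfortably under the placeholder limit, while the second does not, suggesting it would call for a higher license level or dedicated capacity.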
The Dedicated license level provides a dedicated compute architecture that can support either a high number of feeds and analytics or higher-velocity data and complex analytics. For example, connected car tracking typically involves high-velocity data streams and so would consume more capacity (as an individual use case) than the other examples.
The capacity of the Advanced and Dedicated subscriptions can be augmented with additional processing and storage. Organization administrators are encouraged to work with their Esri account representatives to plan the right capacity for their intended workflows. For more information, see Compute and storage capacity.
Consider which users need real-time privileges
ArcGIS Velocity allows users to create feeds, real-time analytics, and big data analytics to work with tracking and observation data. Both feeds and real-time analytics are continuous or real-time tasks, meaning they are always running and consuming capacity in your Velocity subscription. You may want to consider the key feeds and real-time analytics needed for your organizational workflows and limit the privileges for these items to users who will manage those processes. For more information, see Create roles and assign users.
A common pattern is running a defined set of feeds and associated real-time analytics that process and store the incoming data in a feature layer. A broader set of users can then run ad hoc big data analytics against the feature layer to answer different mission-related questions.
Encourage users to proactively manage their real-time items
As both feeds and real-time analytics are tasks that are always running and consuming capacity, it is important to proactively manage these items. Encourage users to stop feeds or real-time analytics that are not needed or that have been set up largely for testing and development.
Review the actively running real-time items
Periodically review the real-time tasks running in Velocity to identify excess tasks that are no longer needed. On any of the item list pages, choose to view Organization Content instead of My Content. When viewing Organization Content, you can inspect certain details of user items, such as a feed's schema or item logs, and you can stop any running tasks. This allows you to free up processing capacity if necessary. For more information, see Feed and analytic management.
Apply shorter data retention time periods
When creating output feature layers, users can apply data retention policies that range from one hour to one year. As a best practice, consider both the available storage and the needs of your users and use cases.
Each Velocity subscription includes a given amount of storage. When storing incoming data over time, the recommended best practice is to test and observe how the feature layer grows over several days. The percentage of storage used by each feature layer can be viewed on the Storage Utilization page in the application. Set the data retention time period so that the feature layer will not consume an excessive portion of your overall storage before older data is deleted.
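As a rough sizing aid, the observed daily growth of a feature layer can be translated into the longest retention period that keeps it within a target share of total storage. The storage quota, growth rate, and 25% budget below are hypothetical figures for illustration:

```python
def max_retention_days(daily_growth_gb: float, total_storage_gb: float,
                       target_fraction: float = 0.25) -> int:
    """Longest retention (in whole days) that keeps a steadily growing
    feature layer within a target fraction of total subscription storage."""
    budget_gb = total_storage_gb * target_fraction
    return int(budget_gb // daily_growth_gb)

# Hypothetical figures: a layer growing 2 GB/day against 1,000 GB of total
# storage, budgeting no more than 25% of that storage for this one layer.
print(max_retention_days(2.0, 1000.0))  # 125
```

A retention policy at or below this estimate leaves the remaining storage free for other layers and analytical results; actual growth should still be verified on the Storage Utilization page, since observation sizes vary by schema.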
Additionally, consider the actual time period for which your data is relevant to your day-to-day workflows versus occasional analysis workflows. Set the time period for which you need data available for immediate exploration and visualization as the data retention policy. If you need older data for occasional analysis, you can choose to export the data to the archive (cold store) prior to it being purged from the feature layer.
Applying shorter data retention policies for datasets that grow in real-time maximizes the remaining feature storage that will be available for analytical results. For more information, see Introduction to data retention.