The Control Event Volume tool limits how many observations per track are passed on within a repeating time span. New track observations within the count threshold for the current time span are retained; new track observations that exceed the count threshold for the current time span are discarded.
- A construction company wants to be notified when a vehicle unexpectedly leaves a job site. They want to be notified when it first occurs and once more if it continues. The locations of the vehicles are reported every 5 minutes. To avoid sending an email every 5 minutes when a vehicle's location remains outside a job site, the Control Event Volume tool can be configured to allow two events, or emails in this case, every hour. This ensures that an email is sent the first time a vehicle is outside a job site and again 5 minutes later if it remains outside the job site. No additional messages are sent for that vehicle until an hour has passed.
- A temperature sensor reports the temperature every 10 seconds. Operations personnel want to receive a text message when the temperature is above safe levels. To avoid sending a text message every 10 seconds when the temperature is reported to be too hot, the Control Event Volume tool can be configured to allow one event, or text message in this case, every 3 minutes. A text message will be sent the first time the temperature exceeds safe levels and once again 3 minutes later if the temperature continues to exceed the safe level.
This tool processes events on a per-track basis, so each event must carry a unique Track ID. A Track ID field must be identified on the input dataset to use this tool.
Time Interval
The time span over which each track's observations are counted for filtering. This interval repeats for as long as the analytic is running.
Max Events Per Interval
The maximum number of observations per track that are passed through in each interval; any further observations for that track in the same interval are discarded.
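The filtering rule described above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's implementation: it assumes events arrive as a (track ID, timestamp) pair, divides the timeline into fixed repeating intervals of `interval_seconds`, and retains only the first `max_events` observations per track in each interval.

```python
from dataclasses import dataclass, field

@dataclass
class EventVolumeFilter:
    """Hypothetical sketch of per-track, per-interval event limiting."""
    interval_seconds: float          # corresponds to the Time Interval parameter
    max_events: int                  # corresponds to Max Events Per Interval
    # (track_id, interval index) -> number of observations retained so far
    _counts: dict = field(default_factory=dict)

    def allow(self, track_id: str, timestamp: float) -> bool:
        """Return True if this observation is retained, False if discarded."""
        bucket = (track_id, int(timestamp // self.interval_seconds))
        count = self._counts.get(bucket, 0)
        if count >= self.max_events:
            return False  # threshold reached for this interval: discard, do not queue
        self._counts[bucket] = count + 1
        return True

# Construction-site scenario: locations reported every 5 minutes (300 s),
# two events allowed per hour (3600 s).
f = EventVolumeFilter(interval_seconds=3600, max_events=2)
retained = [t for t in range(0, 3600, 300) if f.allow("truck-1", t)]
# retained == [0, 300]: the first report and the one 5 minutes later pass;
# the remaining reports in that hour are discarded.
```

Note that, as in the tool itself, discarded observations are simply dropped; nothing is cached for a later interval.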
Considerations and limitations
The tool discards track observations that exceed the Max Events Per Interval value for the current interval. These events are not cached or queued. If maintaining all track observations is important, consider branching the analytic to store the same data as features in a feature layer or cloud storage output.