The Azure Blob Storage source type in ArcGIS Velocity reads records from files stored in Azure Blob Storage.
The following are example uses of the Azure Blob Storage source:
- A researcher wants to load hundreds of delimited text files from Azure Blob Storage into Velocity to perform analysis on the data.
- A GIS department stores commonly used boundary shapefiles in Azure Blob Storage and wants to load the county boundary shapefile into Velocity as an aggregation boundary.
Keep the following in mind when working with the Azure Blob Storage source:
- All files identified in Azure Blob Storage by the naming pattern in the Dataset parameter must have the same schema and geometry type. If specifying a folder name for the Dataset parameter, all files in the directories must have the same file type and schema.
- The account key is encrypted the first time the analytic is saved and stored in an encrypted state.
- When specifying the folder path, use forward slashes (/).
- After configuring source connection parameters, see Configure input data to learn how to define the schema and the key parameters.
The account access key for Azure Blob Storage.
Velocity uses the account access key to load specified data sources into the app.
The account access key is encrypted the first time an analytic is saved and stored in an encrypted state.
The name of the Azure Storage Account that contains Azure Blob Storage containers.
The endpoint suffix used to access Azure Blob Storage. For most users, this is core.windows.net.
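Azure Blob Storage service endpoints follow the pattern `https://<account name>.blob.<endpoint suffix>`, which is why the account name and endpoint suffix together identify the storage service. The sketch below illustrates this with an assumed example account name:

```python
# Illustration of how the account name and endpoint suffix combine into
# the blob service endpoint. The account name here is a made-up example.
account_name = "mystorageaccount"      # assumed example account name
endpoint_suffix = "core.windows.net"   # default suffix for the Azure public cloud

blob_endpoint = f"https://{account_name}.blob.{endpoint_suffix}"
print(blob_endpoint)  # https://mystorageaccount.blob.core.windows.net
```

Sovereign or government clouds use a different endpoint suffix, which is why the parameter is configurable rather than fixed.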
The name of the Azure Blob Storage container containing the files to load.
The path of the folder containing the files to load into Velocity. The following are examples:
The name of the file to read if you are loading a single file, or a pattern indicating a set of files, followed by the file type extension.
To build a pattern indicating a set of files, use an asterisk (*) as a wildcard either on its own or in conjunction with a partial file name.
All files identified by the naming pattern must have the same schema and geometry type.
Alternatively, when loading multiple files or nested folders, you can specify the containing folder name as the dataset name instead of a file name with extension. When specifying a folder name as the dataset, you cannot use wildcards or restrict file types; all files in the specified folder are ingested and must have the same file type.
The following are examples:
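The wildcard selection described above can be sketched with standard glob-style matching. This is an illustration only, not Velocity's implementation; the blob names and pattern are made-up examples:

```python
import fnmatch

# Hypothetical blob names inside a container folder (assumed example data).
blob_names = [
    "sensors/vehicles_2024_01.csv",
    "sensors/vehicles_2024_02.csv",
    "sensors/readme.txt",
]

# A dataset pattern such as "vehicles_*.csv" uses the asterisk (*) as a
# wildcard, alone or alongside a partial file name, to select a set of files.
# All matched files must share the same schema and geometry type.
pattern = "vehicles_*.csv"
matches = [n for n in blob_names if fnmatch.fnmatch(n.rsplit("/", 1)[-1], pattern)]
print(matches)  # the two vehicles_*.csv files; readme.txt is excluded
```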
Load recent files only
Specifies whether the Azure Blob Storage source loads all files or only the files created or modified since the last run of the analytic.
The parameter can only be set to true for scheduled big data analytics.
For the first run of a scheduled big data analytic with this parameter set to true, the analytic does not load any files and the run completes. Subsequent runs load files with a last modified date after the previous scheduled run of the analytic.
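The recent-files behavior amounts to filtering blobs by their last modified timestamp against the time of the previous run. A minimal sketch of that logic, using made-up file names and timestamps rather than real blob metadata:

```python
from datetime import datetime, timezone

# Hypothetical (name, last_modified) pairs standing in for blob metadata.
blobs = [
    ("tracks_01.csv", datetime(2024, 5, 1, tzinfo=timezone.utc)),
    ("tracks_02.csv", datetime(2024, 5, 8, tzinfo=timezone.utc)),
]

# Timestamp of the previous scheduled run (assumed example value). On the
# very first run there is no prior timestamp, so no files are loaded.
last_run = datetime(2024, 5, 3, tzinfo=timezone.utc)

# Only files modified after the previous run are loaded on this run.
new_files = [name for name, modified in blobs if modified > last_run]
print(new_files)  # ['tracks_02.csv']
```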
Considerations and limitations
There are several considerations and limitations to keep in mind when using the Azure Blob Storage source:
- All files identified in Azure Blob Storage by the naming pattern in the dataset property must have the same schema and geometry type.
- Ingesting JSON with an array of objects referenced by a root node is not currently supported for Amazon S3 or Azure Blob Storage.