Use Amazon S3 records

Use records from files stored in an Amazon S3 bucket as input to ArcGIS Data Pipelines.

Usage notes

Keep the following in mind when working with Amazon S3:

  • To use a dataset from Amazon S3, you must first create a data store item. Data store items securely store credentials and connection information so the data can be read by Data Pipelines. To create a data store, follow the steps in the Connect to Amazon S3 section below.
  • To change the data store item you configured, use the Data store item parameter to remove the currently selected item, and choose one of the following options:
    • Add data store—Create a new data store item.
    • Select item—Browse your content to select an existing data store item.
  • Specify the dataset or the folder containing the dataset using the Dataset path parameter. For example, MyHurricanesDataset references a single file, and MyFolder/ references a collection of files that can be used as a single dataset. Datasets in a folder must have the same schema and file type to be used as a single dataset. If the folder contains files with different types, you can specify the files using a wildcard. For example, if a folder contains both .csv files and .orc files, you can specify only the .orc files using a path value of MyFolder/*.orc. A sketch illustrating this wildcard matching is included after these notes.
  • Use the File format parameter to specify the file format of the dataset specified in the Dataset path parameter. The following format options are available:
    • CSV or delimited (for example, .csv, .tsv, and .txt)
    • JSON (for example, .json or a .txt file containing data formatted as JSON)
    • Shapefile (.shp)
    • GeoJSON (for example, .geojson or a .txt file containing data formatted as GeoJSON)
    • ORC (.orc)
    • Parquet (.parquet)
  • If the CSV or delimited format option is specified, the following dataset definition parameters are available:
    • Delimiter—The delimiter used to split field (or column) and record (or row) values. The default is comma delimited (,). Other common delimiter formats include, but are not limited to, tab (\t), semicolon (;), vertical bar (|), and forward and backward slashes (/ and \).
    • Has header row—Specifies whether the dataset contains a header row. The default is true. If set to false, the first row of the dataset will be considered a record.
    • Has multiline data—Specifies whether the dataset has records that contain new line characters. The default is false. If set to true, records that contain new line characters will be read and formatted correctly.
    • Character encoding—Specifies the encoding type used to read the specified dataset. The default is UTF-8. You can choose from the available encoding options, or specify an encoding type. Spaces are not supported in encoding values. For example, the value ISO 8859-8 is invalid and must be specified as ISO-8859-8. A sketch showing how these CSV or delimited parameters map to reading a file is included after these notes.
  • The Fields parameter is available to configure field names and types when the data format value is CSV or delimited. The Configure schema button opens a dialog box containing the dataset fields with the following options:
    • Include or drop fields—You can remove fields by unchecking the check box next to the field. By default, all fields are included.
    • Field name—The name of the field as it will be used in Data Pipelines. This value can be edited. By default, this value will be the same as the field name in the source dataset unless the source name contains invalid characters or is a reserved word. Invalid characters will be replaced with an underscore (_), and reserved words will be prefixed with an underscore (_).
    • Field type—The field type as it will be used in Data Pipelines. This value can be edited.
    The following list describes the available field types:
    • String—String fields support a string of text characters.
    • Small integer—Small integer fields support whole numbers between -32768 and 32767.
    • Integer—Integer fields support whole numbers between -2147483648 and 2147483647.
    • Big integer—Big integer fields support whole numbers between -9223372036854775808 and 9223372036854775807.
    • Float—Float fields support fractional numbers between approximately -3.4E38 and 3.4E38.
    • Double—Double fields support fractional numbers between approximately -1.8E308 and 1.8E308.
    • Date—Date fields support values in the format yyyy-MM-dd HH:mm:ss; for example, a valid value is 2022-12-31 13:30:30. If the date values are stored in a different format, use the Create date time tool to calculate a date field.
    • Boolean—Boolean fields support values of True and False. If a field contains integer representations of Boolean values (0 and 1), use the Update fields tool to cast the integers to Boolean values.

  • If the JSON format option is specified, the Root property parameter is available. You can use this parameter to specify a property in the JSON to read data from. You can reference nested properties using a dot (.) between each property name, for example, property.subProperty. By default, the full JSON file will be read. A sketch showing how a root property value resolves against nested JSON is included after these notes.
  • If the GeoJSON format option is specified, the Geometry type parameter is available. This parameter is optional. By default, the geometry type in the GeoJSON file is used. If the GeoJSON file contains more than one geometry type, you must specify a value for this parameter. Mixed geometry types are not supported; only the specified type will be used. The options are Point, Multipoint, Polyline, and Polygon. A geometry field containing the locations of the GeoJSON data is automatically calculated and added to the input dataset. The geometry field can be used as input to spatial operations or to enable geometry on the output result. A sketch for checking which geometry types a GeoJSON file contains is included after these notes.
  • To improve the performance of reading input datasets, consider the following options:
    • Use the Use caching parameter to store a copy of the dataset. The cached copy is maintained only while at least one browser tab connected to the editor remains open. Caching can make it faster to access the data during processing. If the source data has been updated since it was cached, uncheck this parameter and preview or run the tool again.
    • After configuring an input dataset, configure tools that limit the amount of data being processed.
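
The following is a minimal Python sketch, independent of Data Pipelines, that illustrates how a wildcard path value such as MyFolder/*.orc selects only the matching files in a folder. The file names below are hypothetical examples.

  # Illustrate wildcard matching against hypothetical object keys.
  from fnmatch import fnmatch

  keys = [
      "MyFolder/hurricanes_2021.csv",
      "MyFolder/hurricanes_2021.orc",
      "MyFolder/hurricanes_2022.orc",
  ]

  pattern = "MyFolder/*.orc"

  # Only the .orc files match and would be read together as a single dataset.
  selected = [key for key in keys if fnmatch(key, pattern)]
  print(selected)  # ['MyFolder/hurricanes_2021.orc', 'MyFolder/hurricanes_2022.orc']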
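
The following is a minimal sketch using pandas. It is not how Data Pipelines reads the data, but it shows how the CSV or delimited parameters correspond to common reader options (Delimiter, Has header row, and Character encoding). The file name is a hypothetical example.

  # Map the CSV or delimited parameters to pandas read_csv options.
  import pandas as pd

  df = pd.read_csv(
      "hurricanes.tsv",
      sep="\t",               # Delimiter: tab instead of the default comma
      header=0,               # Has header row is true: the first row holds field names
      encoding="ISO-8859-8",  # Character encoding: note there are no spaces in the value
  )
  print(df.dtypes)

If Has header row is false, the equivalent pandas option is header=None, and positional column names are assigned instead.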
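
The following is a minimal Python sketch of how a Root property value such as results.features could resolve against nested JSON. The structure and property names are hypothetical examples.

  # Resolve a dot-separated root property against nested JSON.
  import json

  document = json.loads("""
  {
    "results": {
      "features": [
        {"name": "Hurricane A", "category": 3},
        {"name": "Hurricane B", "category": 1}
      ]
    }
  }
  """)

  def resolve_root_property(data, path):
      # Walk nested properties separated by dots, for example, results.features.
      for part in path.split("."):
          data = data[part]
      return data

  records = resolve_root_property(document, "results.features")
  print(records)  # The list of records under results.features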
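
The following is a minimal Python sketch for checking which geometry types a GeoJSON file contains, which indicates whether the Geometry type parameter must be set (it is required when more than one type is present). The file name is a hypothetical example.

  # Count the geometry types present in a GeoJSON file.
  import json
  from collections import Counter

  with open("storm_tracks.geojson", encoding="utf-8") as f:
      geojson = json.load(f)

  geometry_types = Counter(
      feature["geometry"]["type"]
      for feature in geojson.get("features", [])
      if feature.get("geometry")
  )
  print(geometry_types)  # For example, Counter({'Point': 120, 'LineString': 4})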

Connect to Amazon S3

To use data stored in Amazon S3, complete the following steps to create a data store item in the Data Pipelines editor. An optional sketch for verifying your credentials and bucket access is included after these steps.

  1. On the Data Pipelines editor toolbar, click Inputs and choose Amazon S3.

    The Select a data store connection dialog box appears.

  2. Choose Add a new data store.
  3. Click Next.

    The Add a connection to a data store dialog box appears.

  4. Provide the access key ID and corresponding secret access key you obtained from your Amazon Web Services (AWS) account.
  5. Provide the region where the bucket exists and type the name of the bucket.
  6. Optionally, provide the path to a folder within the bucket to register it.
  7. Click Next.

    The item details pane appears.

  8. Provide a title for the new data store item.

    This title will appear in your content. You can also store the item in a specific folder and provide item tags or a summary.

  9. Click Create connection to create the data store item.

    An Amazon S3 element that you can configure for a specific dataset is added to the canvas.
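
If the connection fails, you can confirm that the access key, secret access key, region, and bucket name are correct using the AWS SDK for Python (boto3). This check is independent of Data Pipelines, and all values below are placeholders.

  # Optionally verify the connection values outside Data Pipelines.
  import boto3

  s3 = boto3.client(
      "s3",
      aws_access_key_id="YOUR_ACCESS_KEY_ID",
      aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
      region_name="us-west-2",
  )

  # List a few objects under the optional folder path to confirm access.
  response = s3.list_objects_v2(Bucket="my-bucket", Prefix="MyFolder/", MaxKeys=5)
  for item in response.get("Contents", []):
      print(item["Key"])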

Limitations

The following are known limitations:

  • If you specify a folder containing multiple files that represent a single dataset, all files identified in the Amazon S3 folder must have the same schema and geometry type.
  • Zipped files (.zip) are not supported.
  • Esri JSON files (.esrijson) are not supported.
  • If the dataset includes field names with spaces or invalid characters, the names are automatically updated to use underscores. For example, a field named Population 2022 is renamed Population_2022, and a field named %Employed is renamed _Employed. A sketch approximating this renaming is included after this list.
  • To use a data store item to connect to external data sources, you must be the owner of the data store item. Data store items that are shared with you are not supported as input.
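
The following is a minimal Python sketch approximating the renaming behavior described above: characters that are not letters, digits, or underscores are replaced with an underscore. It mirrors the documented examples only; the exact rules, including how reserved words are handled, are applied by Data Pipelines itself.

  # Approximate the documented field renaming behavior.
  import re

  def approximate_field_rename(name: str) -> str:
      # Replace any character that is not a letter, digit, or underscore.
      return re.sub(r"[^A-Za-z0-9_]", "_", name)

  print(approximate_field_rename("Population 2022"))  # Population_2022
  print(approximate_field_rename("%Employed"))        # _Employed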

Licensing requirements

The following licensing and configurations are required:

  • Creator or GIS Professional user type
  • Publisher, Facilitator, or Administrator role, or an equivalent custom role

To learn more about Data Pipelines requirements, see Requirements.