Oriented imagery dataset

You can create an oriented imagery dataset in a geodatabase to manage a collection of oriented images. The dataset defines both collection-wide properties, such as the elevation source, as well as image-specific metadata, such as the camera location and orientation.

When added to a map, the dataset is visualized as an oriented imagery layer.

Oriented imagery dataset creation and publication

Use the following geoprocessing tools in the Oriented imagery toolbox to create an oriented imagery dataset:

  • Create Oriented Imagery Dataset creates an empty oriented imagery dataset in a geodatabase.
  • Add Images To Oriented Imagery Dataset populates the oriented imagery dataset with images and corresponding metadata. Input sources can be a file, folder, table, list of image paths, or a point feature layer. If the input source is a file, folder, or list of image paths, the tool reads the EXIF and XMP metadata directly from the .jpeg files. If the input data is not in a standard metadata format, you can define an oriented imagery custom type in ArcPy and use the Add Images From Custom Input Type geoprocessing tool to add the images to the oriented imagery dataset.
  • Build Oriented Imagery Footprint generates a feature layer to show areas on the map that reference the images in the oriented imagery dataset.
  • Generate Service From Oriented Imagery Dataset generates a feature service with the oriented imagery layer and the footprint layer as sublayers. The tool can be used to publish local image files referenced in the oriented imagery dataset as feature attachments to the oriented imagery layer.
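
The same workflow can be scripted with ArcPy. The sketch below is a minimal illustration only: it assumes the tools are exposed as functions of the arcpy.oi module named after the tool titles, and the parameters shown (geodatabase path, dataset name, spatial reference, imagery category, and image folder) are assumptions about the signatures; check each tool's reference page before running it.

```python
import arcpy

# Hypothetical paths and names used only for illustration.
gdb = r"C:\data\imagery.gdb"
dataset_name = "StreetImages"
image_folder = r"C:\data\street_images"

# 1. Create an empty oriented imagery dataset in the geodatabase.
#    The parameter order (output location, name, spatial reference) is an
#    assumption -- verify against the tool documentation.
arcpy.oi.CreateOrientedImageryDataset(gdb, dataset_name,
                                      arcpy.SpatialReference(3857))

dataset = gdb + "\\" + dataset_name

# 2. Add images and read their metadata (EXIF/XMP for .jpeg input).
#    The imagery category and input source arguments shown here are assumptions.
arcpy.oi.AddImagesToOrientedImageryDataset(dataset, "Horizontal", image_folder)

# 3. Build footprint features showing the ground area each image covers.
arcpy.oi.BuildOrientedImageryFootprint(dataset)
```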

You can publish an oriented imagery layer (and optionally, the oriented imagery footprint) to ArcGIS Online or an ArcGIS Enterprise portal using the standard sharing workflow. To include the oriented imagery footprint layer when you publish, select both the oriented imagery footprint layer and oriented imagery layer before selecting Share As Web Layer.

Note:

Oriented imagery layers can be created in ArcGIS Enterprise portals only at version 11.2 or later.

Image formats and storage

The oriented imagery dataset stores the image location path in its attribute table. The images can be in local storage, network storage, or in publicly accessible cloud storage. The images can also be added as a feature attachment to the oriented imagery layer. The oriented imagery dataset supports JPG, JPEG, TIFF, and MRF image formats.
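
For example, before adding images you may want to collect only the files in supported formats from a folder. The following is a small plain-Python helper (the folder path is hypothetical); including the .tif extension alongside .tiff is an assumption about how TIFF files are commonly named.

```python
from pathlib import Path

# Extensions for the supported formats listed above (JPG, JPEG, TIFF, MRF).
SUPPORTED = {".jpg", ".jpeg", ".tif", ".tiff", ".mrf"}

def supported_images(folder):
    """Return sorted paths of files in a folder tree that use a supported format."""
    return sorted(
        p for p in Path(folder).rglob("*")
        if p.suffix.lower() in SUPPORTED
    )

# Example (hypothetical path):
# images = supported_images(r"C:\data\street_images")
```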

Note:

To publish an oriented imagery dataset to ArcGIS Online or ArcGIS Enterprise, the images must be in publicly accessible cloud storage.

Camera position and orientation

The Shape field in the attribute table defines the location of the camera in the dataset coordinate system. The camera orientation is described by the Camera Heading, Camera Pitch, and Camera Roll field values. These angles describe the camera orientation relative to a local projected coordinate system and refer to the axis extending from the camera position through the center of the image.

The camera orientations are as follows:

  • The initial camera orientation is with the lens aimed at nadir (negative z-axis), with the top of the camera (columns of pixels) pointed north and rows of pixels in the sensor aligned with the x-axis of the coordinate system.
  • The first rotation (Camera Heading) is around the z-axis (optical axis of the lens), positive rotations clockwise (left-hand rule) from north.
  • The second rotation (Camera Pitch) is around the x-axis of the camera (rows of pixels), positive counterclockwise (right-hand rule) starting at nadir.
  • The final rotation (Camera Roll) is a second rotation around the z-axis of the camera, positive clockwise (left-hand rule).

Oriented imagery camera angles

Assuming you are standing at the camera location looking north, rotate the camera clockwise (heading), tilt it up (pitch), and turn it about its optical axis (roll) to point it in the specified direction.
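
To make the rotation sequence concrete, the following sketch (plain Python, not part of any ArcGIS API) computes the camera's view direction and top-of-image direction in a local east-north-up frame from the heading and pitch values described above. Roll is omitted because it only spins the image about the view direction.

```python
import math

def camera_axes(heading_deg, pitch_deg):
    """Return (view_direction, image_top) as east-north-up unit vectors.

    Follows the convention above: at (0, 0) the camera looks at nadir with the
    top of the image pointing north; heading rotates clockwise from north and
    pitch tilts the view up from nadir. Roll (not handled here) then rotates
    the image about the view direction.
    """
    h = math.radians(heading_deg)
    p = math.radians(pitch_deg)
    view = (math.sin(h) * math.sin(p),   # east component
            math.cos(h) * math.sin(p),   # north component
            -math.cos(p))                # up component (negative = looking down)
    top = (math.sin(h) * math.cos(p),
           math.cos(h) * math.cos(p),
           math.sin(p))
    return view, top
```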

The following are example orientations:

  • The camera pointing down with the rows of pixels going from west to east has the orientation 0,0,0.
  • Rotating the camera 90 degrees so that the pixels are oriented from north to south is 90,0,0.
  • Rotating the camera to the horizon has the orientation 90,90,0.
  • Rotating the camera counterclockwise by 20 degrees results in an orientation of 90,90,20.
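
Using the camera_axes sketch above, the first three examples can be checked directly (values rounded to remove floating-point noise):

```python
view, top = camera_axes(0, 0)     # camera pointing down, top of image to the north
# view == (0.0, 0.0, -1.0), top == (0.0, 1.0, 0.0)

view, top = camera_axes(90, 0)    # still at nadir; top of image now points east
# view == (0.0, 0.0, -1.0), top == (1.0, 0.0, 0.0)

view, top = camera_axes(90, 90)   # tilted up to the horizon, looking east
# view == (1.0, 0.0, 0.0), top == (0.0, 0.0, 1.0)
```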

In most applications, the roll angle is zero. The roll angle indicates that the camera body is rotated around the lens axis and is required to determine the correct pixel-to-image relationship.

In some cases, the image is rotated with respect to the camera. For example, when taking a picture with most digital cameras or mobile phones, the resulting image is oriented with the top of the image up, even if you rotate the camera. This is handled by the Image Rotation field, which determines an additional rotation with respect to the camera. The horizontal field of view (HFOV) and vertical field of view (VFOV) should be determined by the camera, and should not change based on the roll angle.

Oriented imagery categories

The imagery category is used to specify the type of images that are added to the dataset, and define the default oriented imagery properties of the dataset. These properties can be changed using the Update Oriented Imagery Dataset Properties tool. The following are the categories and associated properties:

  • Horizontal—Images where the exposure is parallel to the ground and pointed to the horizon.
  • Oblique—Images where the exposure is at an angle to the ground, at about 45 degrees, so the sides of objects can be seen.
  • Nadir—Images where the exposure is perpendicular to the ground and looking vertically down. Only the top of objects can be seen.
  • 360—Images taken using specialized cameras that provide 360-degree spherical surround views.
  • Inspection—Close-up images of assets (less than 5 meters from camera location).

| Imagery category | Camera pitch (degrees) | Camera roll (degrees) | HFOV (degrees) | VFOV (degrees) | Camera height (m) | Near distance (m) | Far distance (m) | Maximum distance (m) |
|---|---|---|---|---|---|---|---|---|
| Horizontal | 90 | 0 | 60 | 40 | 1.8 | 1 | 30 | 200 |
| Oblique | 45 | 0 | 60 | 40 | 200 | 1 | 500 | 2000 |
| Nadir | 0 | 0 | 60 | 40 | 200 | 1 | 500 | 1000 |
| 360 | 90 | 0 | 360 | 180 | 1.8 | 1 | 30 | 100 |
| Inspection | 90 | 0 | 60 | 40 | 1.8 | 0 | 5 | 30 |
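
When scripting, the defaults in the table can be kept as a simple lookup, for example to record the values a dataset was created with. This is plain Python transcribed from the table above, not an ArcGIS API structure.

```python
# Default properties per imagery category, transcribed from the table above.
# Angles are in degrees; heights and distances are in meters.
CATEGORY_DEFAULTS = {
    "Horizontal": dict(pitch=90, roll=0, hfov=60,  vfov=40,  height=1.8, near=1, far=30,  max_distance=200),
    "Oblique":    dict(pitch=45, roll=0, hfov=60,  vfov=40,  height=200, near=1, far=500, max_distance=2000),
    "Nadir":      dict(pitch=0,  roll=0, hfov=60,  vfov=40,  height=200, near=1, far=500, max_distance=1000),
    "360":        dict(pitch=90, roll=0, hfov=360, vfov=180, height=1.8, near=1, far=30,  max_distance=100),
    "Inspection": dict(pitch=90, roll=0, hfov=60,  vfov=40,  height=1.8, near=0, far=5,   max_distance=30),
}
```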

Note:

Visualizing 360-degree imagery using the Oriented imagery viewer is supported in ArcGIS AllSource 3.4 or later.

Oriented imagery attribute table

An attribute table is generated when you create an oriented imagery dataset; some fields appear by default. The fields are populated when images are added, and more fields can be added to store additional metadata. The metadata provides search capability that allows you to find and display images covering a site of interest; some of the values are approximations.

The attribute table supports the following fields:

  • ObjectID—The unique ID for each row in the table. This field is maintained by ArcGIS.
  • Shape—The defined location of the camera.
  • Name (optional)—An alias name that identifies the image.
  • ImagePath—The path to the image file. The image path can be a local path or a web-accessible URL. The image path can also be set to "FA" if the image is stored as an attachment to the feature. Images can be in JPEG, JPG, TIFF, or MRF format.
  • AcquisitionDate (optional)—The date when the image was collected. The time of the image collection can also be included.
  • CameraHeading (optional)—The camera orientation of the first rotation around the z-axis of the camera. The value is in degrees. The heading values are measured in the positive clockwise direction, where north is defined as 0 degrees. -999 is used when the orientation is unknown.
  • CameraPitch (optional)—The camera orientation of the second rotation around the x-axis of the camera in the positive counterclockwise direction. The value is in degrees. The pitch is 0 degrees when the camera is facing vertically down to the ground. The valid range of pitch values is 0 to 180 degrees, where 180 degrees is a camera facing vertically up and 90 degrees is a camera facing horizontally.
  • CameraRoll (optional)—The camera orientation of the final rotation around the z-axis of the camera in the positive clockwise direction. The value is in degrees. Valid values range from -90 to 90.
  • CameraHeight (optional)—The height of the camera above the ground (elevation source). The units are in meters. Camera height is used to determine the visible extent of the image; larger values result in a greater view extent. Values must be greater than 0.
  • HorizontalFieldOfView (optional)—The camera’s scope in the horizontal direction. The units are in degrees, and valid values range from 0 to 360.
  • VerticalFieldOfView (optional)—The camera’s scope in the vertical direction. The units are in degrees, and valid values range from 0 to 180.
  • NearDistance (optional)—The nearest usable distance of the imagery from the camera position. The units are in meters.
  • FarDistance (optional)—The farthest usable distance of the imagery from the camera position. This value is used to determine the extent of the image footprint, which is used to determine whether an image is returned when you click the map, and for creating optional footprint features (see the footprint sketch after this list). The units are in meters. This value must be greater than 0.
  • OrientedImageryType (optional)—Specifies the imagery type from the following:
    • Horizontal
    • Oblique
    • Nadir
    • 360
    • Inspection
  • ImageRotation (optional)—The orientation of the camera in degrees relative to the scene when the image was captured. The value is added to CameraRoll. The valid values range from -360 to 360.
  • CameraOrientation (optional)—Stores detailed camera orientation parameters as a pipe-separated string. This field provides support for more accurate image-to-ground and ground-to-image transformations.
  • Matrix (optional)—The row-wise sorted rotation matrix that defines the transformation from image space to map space, specified as nine floating-point values delimited by semicolons. A period (full stop) must be used as the decimal separator for all values.
  • FocalLength (optional)—The focal length of the camera lens. The unit can be microns, millimeters, or pixels.
  • PrincipalX (optional)—The x-coordinate of the principal point of the autocollimation. The unit must be the same as the unit used for FocalLength. By default, the value is zero.
  • PrincipalY (optional)—The y-coordinate of the principal point of the autocollimation. The unit must be the same as the unit used for FocalLength. By default, the value is zero.
  • Radial (optional)—The radial distortion, specified as a set of three semicolon-delimited coefficients, such as 0;0;0 for K1;K2;K3. The unit is the same as the unit used for FocalLength.
  • Tangential (optional)—The tangential distortion, specified as a set of two semicolon-delimited coefficients, such as 0;0 for P1;P2. The unit is the same as the unit used for FocalLength.
  • A0, A1, A2, B0, B1, B2 (optional)—The coefficients of the affine transformation that establishes the relationship between the sensor space and image space. The direction is from ground to image.
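
As a rough illustration of how the camera location, CameraHeading, HorizontalFieldOfView, NearDistance, and FarDistance values relate to a ground footprint, the sketch below builds a simple viewing wedge in projected map coordinates. It is an approximation for flat ground in plain Python and is not necessarily the geometry produced by the Build Oriented Imagery Footprint tool.

```python
import math

def footprint_wedge(x, y, heading_deg, hfov_deg, near_m, far_m, steps=8):
    """Approximate a flat-ground viewing wedge as a list of (x, y) vertices.

    The wedge is centered on the camera heading (measured clockwise from
    north), spans the horizontal field of view, and extends from the near to
    the far usable distance. Coordinates are assumed to be in a projected
    coordinate system with units of meters.
    """
    half = hfov_deg / 2.0
    # Sample azimuths across the field of view, clockwise from north.
    azimuths = [heading_deg - half + i * hfov_deg / steps for i in range(steps + 1)]

    def point(azimuth_deg, distance):
        a = math.radians(azimuth_deg)
        return (x + distance * math.sin(a), y + distance * math.cos(a))

    # Far arc left to right, then near arc back right to left, forming a ring.
    far_arc = [point(a, far_m) for a in azimuths]
    near_arc = [point(a, near_m) for a in reversed(azimuths)]
    return far_arc + near_arc

# Example with hypothetical values: a horizontal image looking east.
# ring = footprint_wedge(x=500000, y=4000000, heading_deg=90,
#                        hfov_deg=60, near_m=1, far_m=30)
```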