In Drone2Map, you can adjust a project's processing options to customize how its products are generated. You can run steps independently to minimize the time required to generate the desired products; however, you must run the initial step at least once.
Use the Processing Options window to configure which steps will run, the settings for each step, and which products will be created. To open the window, on the ribbon, on the Home tab, in the Processing group, click Options.
General
On the General tab, the following options allow you to configure point cloud density, project resolution, intermediate products, and hardware settings:
- Dense Matching
- Point Cloud Density—Density of the point cloud used to derive the level of geometric detail of the resulting reconstruction. Increasing this value improves the edge sharpness of features but also increases processing time. Generally, point cloud densities below High should only be used for rapid assessment and testing. If you decrease the point cloud density, it is recommended that you increase the GSD resolution in 2D Processing to prevent the point cloud from becoming too sparse.
Note:
Point cloud settings are tied to the selected Project Resolution. See the next section for more details.
- Ultra—Highest level of density point cloud. Use for final products that require the highest detail possible.
- High—High level of density point cloud. This is the recommended setting for most projects. This is the default.
- Medium—Medium level of density point cloud. It is suitable for quick projects or testing.
- Low—Low level of density point cloud. This is typically used only for rough testing.
- Project Resolution—Defines the spatial resolution used to generate output products. (See the sketch after this list for how the multipliers relate to GSD.)
- Automatic (default)—Uses the resolution of your source imagery. Changing this value changes the resolution by multiples of the ground sample distance (GSD).
- 1x—Recommended image scale value. This scale allows the selection of the Ultra or High point cloud settings.
- 4x—For very large projects with high overlap, 4x Source Resolution can be used to speed up processing, which often results in slightly reduced accuracy because fewer features are extracted. This scale is also recommended for very blurry or very low-textured images. This scale defaults to the Medium point cloud setting.
- 8x—For very large projects with high overlap, 8x Source Resolution can be used to speed up processing, which usually results in slightly reduced accuracy, as fewer features are extracted. This scale defaults to the Low point cloud setting.
- User Defined—A resolution value can be manually defined in centimeters or pixels for the GSD. This scale allows the selection of the Ultra or High point cloud settings.
- Keep Intermediate Products—Defines whether any intermediate products should be kept after processing completes.
- DSM Point Cloud—Allows you to choose if you want to keep DSM point cloud files.
- Orthomosaic Tiles—Allows you to choose if you want to keep orthomosaic tile files.
- 3D Point Cloud—Allows you to choose if you want to keep 3D point cloud files.
- Hardware—Configure CPU and GPU hardware options.
- CPU Threads—The number of central processing unit (CPU) threads dedicated to processing your project. Slide the bar to the left or right to adjust the number of CPU threads.
- Processor Type—Defines how image processing is offloaded to the computer's hardware.
- CPU + GPU (default)—Processing is performed by both the CPU and GPU.
- CPU—Processing is restricted to the CPU only.
- GPU ID—Define a specific GPU ID to use for multi-GPU systems.
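
As a quick illustration of how the Project Resolution multipliers relate to GSD, the snippet below scales a hypothetical 2 cm source GSD by the 1x, 4x, and 8x options described above; the source GSD value is an example, not a Drone2Map default.

```python
# Hypothetical sketch: output GSD as a multiple of the source GSD.
# The 2 cm source GSD is an example value, not a Drone2Map default.
source_gsd_cm = 2.0  # ground sample distance of the source imagery

for multiplier in (1, 4, 8):  # the 1x, 4x, and 8x options described above
    output_gsd_cm = source_gsd_cm * multiplier
    print(f"{multiplier}x project resolution -> ~{output_gsd_cm:.1f} cm output GSD")
```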
Adjust Images
On the Adjust Images tab, the options allow you to define key adjustments to be used in the block adjustment process, tie point matching, and point cloud generation.
- Fix Image Location for High Accuracy GPS (RTK and PPK)—When enabled, this will change the Matching Neighborhood setting to Small (Optimized). This option is used only for imagery acquired with high-accuracy, differential GPS, such as Real Time Kinematic (RTK) or Post Processing Kinematic (PPK). If this option is checked, the process will only adjust the orientation parameters of the imagery and leave the GPS measurements fixed. Using GCPs in conjunction with fixed GPS measurements may over-constrain the bundle adjustment, introducing errors or causing processing to fail. When using GCPs with RTK/PPK imagery, it is recommended that you set Matching Neighborhood to Medium.
Note:
If a high number of uncalibrated images appear, changing Matching Neighborhood to a larger setting will increase the likelihood of those images being calibrated, but may increase processing time.
- Use Image Orientations—When enabled, the orientation data from the source images is used and the initial orientation adjustment during the Adjust Images step is skipped. This option can be used when yaw, pitch, and roll are present in the image metadata, or when omega, phi, and kappa values are imported from an external geolocation file. Instead of the aerial triangulation (AT) calculating the initial orientation, Drone2Map uses the yaw, pitch, and roll of the images to calculate the initial orientation of the imagery.
- Initial Image Scale—This controls the way feature points are extracted. The default value is set based on the template you chose when creating the project: 2D Products (1x) or Rapid (4x). More tie points are generated when the setting is closer to the source resolution (1x); however, processing time also increases accordingly.
- 1 (Original image size)—Recommended image scale value.
- 1/2 (Half image size)—Recommended for projects using small images (for example, 640x320 pixels), because more features will be extracted, and it will assist with the accuracy of results.
- 1/4 (Quarter image size)—Recommended for very large projects with high overlap. This scale can be used to speed up processing, which often results in slightly reduced accuracy because fewer features are extracted. It is also recommended for very blurry or very low-textured images.
- 1/8 (Eighth image size)—For very large projects with high overlap, this scale can be used to speed up processing, which usually results in slightly reduced accuracy because fewer features are extracted.
- Refine Adjustment—Specifies whether the camera model will be further optimized using the selected image scale. If the Initial Image Scale is already at 1 (original image size), there is no additional benefit. It is advisable to keep Refine Adjustment checked for final product generation; for quick adjustments to assess quality, you can uncheck it.
- Checked—The camera model is first estimated using the Initial Image Scale setting and further optimized using the selected image scale. This option produces the most accurate results.
- Unchecked—The camera model will be estimated using the Initial Image Scale setting with no additional optimization. This option produces the fastest results at the expense of accuracy.
- 1 (Original image size)—Adjustment will be done at the original image size. This is the recommended image size.
- 2 (Double image size)—Adjustment will be done at double the image size. This size is recommended for projects using small images (for example, 640x320 pixels), because more features will be extracted, and it will assist with the accuracy of results.
- Tie Point Residual Error Threshold—Tie points with a residual error greater than the threshold value are not used in computing the adjustment. The measurement unit of the residual is pixels.
- Matching Neighborhood—Determines the number of images from each search neighborhood that are used to compute image matches. A search neighborhood is the area between each of the four ordinal directions (NE, SE, SW, and NW). Larger neighborhood sizes will increase processing times, but they will also increase matches with neighboring images. If a high number of uncalibrated images is detected during initial adjustment, it is recommended that you increase the neighborhood size. Otherwise, use the default settings.
- Small (Optimized)—An image is matched to the three closest images for each search neighborhood, a total of 9.
- Medium—An image is matched to the six closest images for each search neighborhood, a total of 24.
- Large—An image is matched to the 12 closest images for each search neighborhood, a total of 48.
- X-Large (Slowest)—An image is matched to the 20 closest images for each search neighborhood, a total of 80.
- Camera Calibration—Internal camera parameters used for image adjustment. If checked, the value is automatically taken from the Edit Camera pane. If a value is missing, the initial value is then calculated from the EXIF. If unchecked, the calibration is fixed to whatever values are manually defined in the Edit Camera pane.
- Focal Length—The focal length of the camera lens, measured in millimeters.
- Principal Point—The offset between the fiducial center and the principal point of autocollimation (PPA). The principal point of symmetry (PPS) is assumed to be the same as the PPA.
- K1, K2, K3—Conrady coefficients used to calculate radial distortion.
- P1, P2—Tangential coefficients used to calculate distortion between the lens and image plane. (See the sketch after this list for how these coefficients are typically applied.)
- Elevation Source—Source elevation layer used to orthorectify the project.
- Average Elevation from Image Metadata—Elevation is averaged from the values within the source imagery EXIF information.
- Average Elevation from DEM—Elevation is averaged from a user-defined DEM. By default, this is populated with the Esri World Elevation surface.
- DEM—Elevation is taken directly from the user-defined DEM. This is typically used when internet access is limited or when you want to use a high-resolution local DEM.
- DEM—Source location of the DEM elevation layer. The input can be a local raster dataset, layer file, TIN, or an image service.
- Z Factor—Conversion factor to adjust the vertical units of measurement if they differ from the horizontal units of the input surface.
- Z Offset—Constant value added to the base height of the input layer to compensate for any offset.
- Geoid—When checked, a geoid correction is applied. Most elevation data uses orthometric heights, so a geoid correction is only needed if the units and base of the elevation are different from the imagery. A Z Factor value and a Z Offset value are also required.
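
The K1, K2, K3 and P1, P2 coefficients above describe radial and tangential lens distortion. This topic does not spell out the exact equations Drone2Map uses, so the sketch below assumes the widely used Brown-Conrady form, with x and y as normalized image coordinates measured from the principal point; treat it as an illustration of the model, not Drone2Map's internal implementation.

```python
def apply_distortion(x, y, k1, k2, k3, p1, p2):
    """Apply radial (K1-K3) and tangential (P1, P2) lens distortion to a point.

    Assumes the common Brown-Conrady convention with x, y given in normalized
    image coordinates relative to the principal point; Drone2Map's internal
    convention may differ.
    """
    r2 = x * x + y * y  # squared radial distance from the principal point
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Example call with small, made-up coefficients.
print(apply_distortion(0.1, -0.05, k1=-0.02, k2=0.001, k3=0.0, p1=1e-4, p2=-2e-4))
```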
2D products
On the 2D Products tab, the following options allow you to adjust the processing options and desired outputs for orthomosaic, digital surface model (DSM), and digital terrain model (DTM) images:
- Image Collection—Defines the options for the Image Collection mosaic product.
- Orthorectification Method—Elevation source used to orthorectify the mosaicked images.
- None—No elevation source is used.
- Solution Points—Elevation source created from solution points generated during adjustment.
- Sparse Point Cloud—Elevation source created from a point cloud derived from the image collection.
- Dense Point Cloud—Elevation source created from a dense matching point cloud.
Note:
To use the Dense Point Cloud option, a DSM product must first be present.
- Color Balancing—Make transitions from one image to an adjoining image appear seamless.
- Seamlines—Sort overlapping imagery and produce a smoother-looking mosaic.
- Create Orthomosaic—Generates an orthomosaic from the project's images.
- Color Balancing—Blends differences between images and removes or reduces artifacts along seams.
- Enhance Orthomosaic—Brightens dark areas, makes orthomosaics more vibrant and homogeneous, and leaves the input images unchanged.
- Merge Tiles—When checked, merges the tiles into a single orthomosaic image. When unchecked, it creates a mosaic dataset of your tiles that can be used in tile-based processing.
- Create DSM—Generates a DSM from the project images.
- Create Contours—Generates contour lines using the DSM.
- Contour interval—Defines the contour line elevation interval in meters. It can be any positive value. The elevation interval must be smaller than the (maximum–minimum) altitude of the DSM.
- Contour Base—Defines the relative altitude, which is used as a contour line base in meters.
- Contour Z Factor—The contour lines are generated based on the z-values in the input raster, which are often measured in meters or feet. With the default value of 1, the contours are in the same units as the z-values of the input raster. To create contours in a unit different from that of the z-values, set an appropriate z-factor. The ground x,y units and the surface z-units do not need to be consistent for this tool. For example, if the elevation values in your input raster are in feet but you want the contours generated in meters, set the z-factor to 0.3048 (since 1 foot = 0.3048 meters).
- Export Shapefile—Export contour lines in shapefile format.
- Create DTM—Generates a DTM from the project images.
- Contours—Generates contour lines using the DTM.
- Contour interval—Defines the contour line elevation interval in meters. It can be any positive value. The elevation interval must be smaller than the (maximum–minimum) altitude of the DTM.
- Contour Base—Defines the relative altitude, which is used as a contour line base in meters.
- Contour Z Factor—The contour lines are generated based on the z-values in the input raster, which are often measured in meters or feet. With the default value of 1, the contours are in the same units as the z-values of the input raster. To create contours in a unit different from that of the z-values, set an appropriate z-factor. The ground x,y units and the surface z-units do not need to be consistent for this tool. For example, if the elevation values in your input raster are in feet but you want the contours generated in meters, set the z-factor to 0.3048 (since 1 foot = 0.3048 meters). See the sketch after this list for how the z-factor and contour interval interact.
- Export Shapefile—Export contour lines in shapefile format.
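
As referenced above, the contour z-factor rescales the raster's z-values, and the contour interval must stay smaller than the (maximum - minimum) altitude of the DSM or DTM. The helper below is a hypothetical sketch of that arithmetic, not part of Drone2Map, and the elevation values are made up.

```python
FEET_TO_METERS = 0.3048  # z-factor for a raster in feet when contours should be in meters

def valid_contour_interval(min_z, max_z, interval, z_factor=1.0):
    """Check that a contour interval is positive and smaller than the
    (maximum - minimum) altitude of the surface after applying the z-factor."""
    elevation_range = (max_z - min_z) * z_factor
    return 0 < interval < elevation_range

# DSM in feet, contours wanted in meters: a 500 ft range is ~152.4 m,
# so a 5 m interval is valid but a 200 m interval is not.
print(valid_contour_interval(min_z=1000, max_z=1500, interval=5, z_factor=FEET_TO_METERS))    # True
print(valid_contour_interval(min_z=1000, max_z=1500, interval=200, z_factor=FEET_TO_METERS))  # False
```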
3D products
Note:
3D processing capabilities are included with an ArcGIS Drone2Map Advanced license. See: Drone2Map license levels
On the 3D Products tab, these options allow you to change the desired outputs for the point cloud and 3D textured mesh created in this step.
- Create Point Clouds—Allows you to select the desired output formats for the point cloud. The options are as follows:
- SLPK—Creates a scene layer package (.slpk file).
- LAS—Creates a lidar LAS file with x,y,z position and color information for each point of the point cloud.
- Merge LAS Tiles—Merges LAS tiles into a single LAS file instead of the default individual LAS tile files.
- Create DSM Textured Meshes—Allows you to generate 3D meshes from DSM data with an imagery overlay.
- SLPK—Creates a scene layer package (.slpk file).
- DAE—Converts DSM data into a DAE (Collada) file.
- OBJ—Converts DSM data into an OBJ (Wavefront) file.
- OSGB—Converts DSM data into an OSGB (OpenSceneGraph binary) file.
- Create 3D Textured Meshes—Allows you to generate 3D meshes from point cloud data with an imagery overlay.
Note:
The densified point cloud is used to generate a surface composed of triangles. The surface is fit to minimize the distance between the points and the surface they define, so even the triangle vertices are not necessarily exact points of the densified point cloud.
- SLPK—Creates a scene layer package (.slpk file).
- DAE—Converts point cloud data into a DAE (Collada) file.
- OBJ—Converts point cloud data into an OBJ (Wavefront) file.
- OSGB—Converts point cloud data into an OSGB (OpenSceneGraph binary) file.
- General Mesh Settings—Allows you to configure additional mesh quality settings.
- Enhance Textured Mesh—Brightens dark areas and makes textured meshes more vibrant and homogeneous.
Coordinate systems
On the Coordinate systems tab, the following options define the horizontal and vertical coordinate system for your images and the project.
- Image Coordinate System—Defines the spatial reference for your images.
- Current XY—Defines the horizontal coordinate system for your images. The default horizontal coordinate system for images is WGS84. To update the image horizontal coordinate system, click the Set Horizontal and Vertical Spatial Reference button, select the appropriate coordinate system, and click OK.
- Current Z—Defines the vertical reference for your images. The default vertical reference is EGM96 for images. Most image heights are referenced to the EGM96 geoid and are either embedded in the EXIF header of the image or are contained in a separate file. Most GPS receivers convert the WGS84 ellipsoidal heights provided by global navigation satellites to EGM96 heights, so if you're unsure, accept the default of EGM96.
- Project Coordinate System—Defines an output spatial reference for your Drone2Map output products.
Note:
You can only modify the project coordinate system and vertical reference if control points are not included in the project. If you have control points, the project coordinate system and vertical reference are determined by the coordinate system and vertical reference of the control points.
If you don't have control points, the coordinate system and vertical reference used to create Drone2Map products are determined by the coordinate system and vertical reference of the images themselves. If the images have a geographic coordinate system, Drone2Map generates products using the local WGS84 UTM zone.
- Current XY—Defines the output horizontal coordinate system. To update the project coordinate system, click the Set Horizontal and Vertical Spatial Reference button, select the appropriate projected coordinate system, and click OK. If you select a geographic coordinate system, Drone2Map generates products using the local WGS84 UTM zone.
- Current Z—Defines the output vertical reference system for your Drone2Map products. This is relevant if your input images contain ellipsoidal heights and you plan to publish a 3D mesh as a scene layer, since ArcGIS Online and ArcGIS Pro both use the orthometric EGM96 geoid height model. EGM96 is the default.
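
The Current Z options above revolve around the difference between ellipsoidal heights delivered by GNSS and orthometric heights referenced to the EGM96 geoid. The relationship itself is simple, as sketched below; the height and undulation values are hypothetical examples, and the undulation would come from a geoid model such as EGM96, not from Drone2Map.

```python
def orthometric_height(ellipsoidal_height_m, geoid_undulation_m):
    """Orthometric height (e.g. EGM96) = ellipsoidal height (WGS84) - geoid undulation.

    The undulation N is the separation between the geoid and the ellipsoid at
    that location and must come from a geoid model such as EGM96.
    """
    return ellipsoidal_height_m - geoid_undulation_m

# Hypothetical values: a GNSS height of 120.0 m with a local undulation of -30.5 m.
print(orthometric_height(120.0, -30.5))  # 150.5 m above the geoid
```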
Resources
On the Resources tab, you can view project image information and relevant project paths.
- Image Information—Information about the number of images and total gigapixels in the current project.
- Enabled Images—Total number of images with a status of Enabled to be used in processing.
- Gigapixels—Number of gigapixels used in the current project. See the note below for more information.
Note:
Combined project imagery size is limited to 100 gigapixels. Calculate the size by multiplying the number of images by the image megapixel size and dividing by 1,000.
For example, a project with 400 13-megapixel images is (400 x 13) / 1,000 = 5.2 gigapixels. See the sketch at the end of this section for the same calculation.
- Locations—File path locations of the project file, source images, and project log file.
- Project—The location of the current project in the file system. Click the link to open the file location.
- Images—The location of the source images used in processing the current project. Click the link to open the image location.
- Log File—The location of the project log file. Click the link to open the file location. This file is useful when troubleshooting issues with Drone2Map.
- Delete Logs—Delete all project logs for the current open project.
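
The 100-gigapixel limit and the formula in the note above can be checked ahead of time with a few lines of code. This is a sketch of the same arithmetic; the function name is hypothetical and the image counts are example values.

```python
MAX_PROJECT_GIGAPIXELS = 100  # combined project imagery limit noted above

def project_gigapixels(image_count, megapixels_per_image):
    """Gigapixels = (number of images x megapixels per image) / 1,000."""
    return image_count * megapixels_per_image / 1000

# The example from the note: 400 images at 13 MP each.
size = project_gigapixels(400, 13)
print(f"{size} gigapixels, within limit: {size <= MAX_PROJECT_GIGAPIXELS}")
```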
Export template
Drone2Map templates are designed to help you get your projects started quickly. The templates are preconfigured with specific processing options based on the template and desired products. You can update the processing options to customize processing settings and outputs. If you have a particular set of custom options that you use frequently, you can export your processing options as a template. Once your processing options are set, in the Options window, select Export Template, browse to the location where you want to save your template, and click Save. When you create your next project, choose your exported template, and your settings and options are loaded into Drone2Map.