Processing options

In Drone2Map, you can customize a project by adjusting its processing options. Steps can be run independently, minimizing the time required to generate the desired products; however, the initial step must be run at least once.

Use the Processing Options window to configure which steps will run, the settings for each step, and which products will be created. To open the window, on the ribbon, on the Home tab, in the Processing group, click Options.

2D products

On the 2D Products tab, the following options control the processing settings and desired outputs for the orthomosaic, digital surface model (DSM), and digital terrain model (DTM) products:

  • Create Orthomosaic—Generates an orthomosaic from the project images.
    • Resolution—Defines the spatial resolution used to generate the orthomosaic and DSM.
      • Automatic (default)—The resolution of your source imagery is used. Changing this value changes the resolution by multiples of the ground sample distance (GSD).
      • User defined—Allows you to specify a GSD value in centimeters or pixels.
  • Create Digital Surface Model—Generates a DSM from the project images.
    • Method—The method used for the raster DSM generation. The method affects the processing time and the quality of the results.
      • Inverse Distance—The values of unknown points are calculated as a weighted average of the values available at the known points. This method is recommended for buildings.
      • Triangulation—The triangulation algorithm is used. This method is recommended for flat areas (agriculture fields) and stockpiles.
        Note:

        The triangulation method can be up to 10 times faster than the Inverse Distance method, but the results may not be as good, especially for buildings.

    • Filters—Allows you to define parameters to filter and smooth the points of the densified point cloud.
      • Use Noise Filtering—Generation of the densified point cloud can lead to noisy and erroneous points. The noise filtering corrects the altitude of these points with the median altitude of the neighboring points.
      • Use Surface Smoothing—Once the noise filter has been applied, a surface is generated from the obtained points. This surface can contain areas with erroneous small bumps. The surface smoothing corrects these areas by flattening them. This option allows you to set the following types of smoothing:
        • Smooth—Tries to smooth areas, assuming that sharp features exist because of noise and they should be removed. Areas that are not very planar are smoothed and become planar.
        • Medium—Compromise between smooth and sharp. It tries to preserve sharp features while flattening rough planar areas.
        • Sharp (default)—Tries to preserve the orientation of the surface and keep sharp features such as corners and edges of buildings. Therefore, only quasi-planar areas are flattened.
    • Contours—Generates contour lines using the DSM.
      • Contour interval—Defines the contour line elevation interval in meters. It can be any positive value that is smaller than the difference between the maximum and minimum altitude of the DSM.
      • Contour Base—Defines the relative altitude, which is used as a contour line base in meters.
      • Contour Z Factor—The contour lines are generated based on the z-values in the input raster, which are often measured in units of meters or feet. With the default value of 1, the contours are in the same units as the z-values of the input raster. To create contours in a unit different from that of the z-values, set an appropriate z-factor; the ground x,y units and the surface z-units do not need to be consistent. For example, if the elevation values in your input raster are in feet but you want the contours generated in meters, set the z-factor to 0.3048 (1 foot = 0.3048 meter). A worked example follows this list.
      • Export Shapefile—Exports contour lines in shapefile format.
  • Create Digital Terrain Model—Generates a DTM from the project images.
    • Resolution—Defines the spatial resolution used to generate the digital terrain model (DTM).
      • Automatic (default)—The resolution of your source imagery is used. Changing this value changes the resolution by multiples of the GSD.
      • User defined—Allows you to specify a GSD value in centimeters or pixels.
    • Contours—Generates contour lines using the DTM.
      • Contour interval—Defines the contour line elevation interval in meters. It can be any positive value that is smaller than the difference between the maximum and minimum altitude of the DTM.
      • Contour Base—Defines the relative altitude, which is used as a contour line base in meters.
      • Contour Z Factor—Applies a unit conversion factor to the z-values before the contours are generated, as described for the Contour Z Factor option under Create Digital Surface Model (for example, 0.3048 to generate meter contours from z-values in feet).
      • Export Shapefile—Exports contour lines in shapefile format.
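
The contour parameters interact in a simple way. The following Python sketch is illustrative only; the function and sample values are hypothetical, not part of Drone2Map. It computes the contour elevations produced by a given interval, base, and z-factor:

    import math

    def contour_levels(z_min, z_max, interval, base=0.0, z_factor=1.0):
        # Apply the z-factor to convert the raster's z-values.
        lo, hi = z_min * z_factor, z_max * z_factor
        if not 0 < interval < (hi - lo):
            raise ValueError("The interval must be positive and smaller "
                             "than the (maximum - minimum) altitude.")
        # Contours fall on multiples of the interval, offset by the base.
        level = base + math.ceil((lo - base) / interval) * interval
        levels = []
        while level <= hi:
            levels.append(round(level, 6))
            level += interval
        return levels

    # z-values in feet, contours wanted in meters: z-factor = 0.3048.
    print(contour_levels(z_min=100.0, z_max=180.0, interval=5.0,
                         z_factor=0.3048))  # [35.0, 40.0, 45.0, 50.0]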

3D products

On the 3D Products tab, the following options allow you to configure the desired outputs for the point cloud and 3D textured mesh created in this step.

  • Create Point Clouds—Allows you to select the desired output formats for the point cloud. The options are as follows:
    • SLPK—Creates a scene layer package (.slpk file).
      • Scene Layer Version—Determines the scene layer version.
    • LAS (default)—Lidar LAS file with x,y,z position and color information for each point of the point cloud.
    • zLAS (default)—Derived from the LAS file, this option produces a lidar zLAS file using Esri's optimized LAS format with x,y,z position and color information for each point of the point cloud. If you choose zLAS, you will also get a LAS file.
    • PLY—PLY file with x,y,z position and color information for each point of the point cloud.
    • XYZ—ASCII text file with x,y,z position and color information for each point of the point cloud.
      • Delimiter—Defines the delimiter character of the file, used to separate the values. The drop-down list has the following options:
        • Space
        • Tab
        • Comma
        • Semicolon
  • Create Textured Meshes—Allows you to generate the 3D textured mesh while processing and allows you to set up the parameters to use for mesh generation.

    Note:

    The densified point cloud is used to generate a surface composed of triangles. The triangulation uses the points to minimize the distance between them and the surface they define, so even the vertices of the triangles are not necessarily exact points of the densified point cloud.

    • Multi LOD Meshes—A level-of-detail (LOD) mesh allows you to adjust the resolution and number of levels of detail for your mesh. Drone2Map includes the following LOD mesh formats:
      • OSGB—.osgb
      • Scene Layer Package (default)—.slpk
    • Single LOD Mesh—Allows you to select from the following single LOD mesh formats:
      • OBJ (default)—An OBJ file with the following:
        • The x,y,z position for each vertex of the 3D textured mesh
        • Texture information (using .jpg and .mtl texture files)
      • FBX—An FBX file with the following:
        • The x,y,z position for each vertex of the 3D textured mesh
        • Texture information
      • AutoCAD DXF—A DXF file with the following:
        • The x,y,z position for each vertex of the 3D textured mesh
      • PLY—A PLY file with the following:
        • The x,y,z position for each vertex of the 3D textured mesh
        • Texture information (using a .jpg texture file)
          Note:

          The 3D textured mesh file is not georeferenced. It has coordinates on a local coordinate system centered around the project.

      • 3D PDF (default)—A PDF file containing a 3D model of the 3D textured mesh. The texture size of the 3D textured mesh that is displayed in the 3D PDF is 2000x2000 pixels.
        • Logo—You can select a logo (.jpg or .tif file) to display on the 3D PDF.
    • General 3D Options—Allows you to select the desired output formats for the 3D textured mesh.
      • Classify Point Clouds—Enables the generation of the point cloud classification.
        Note:

        When the point cloud classification is used for the DTM generation, it significantly improves the DTM.

      • Merge LAS Tiles—If the point cloud consists of many points, several tiles are generated. This option produces a single file with all the points.
      • LOD Texture Quality—Allows you to define the resolution of the texture. You can select from the following:
        • Low—512x512
        • Medium—1024x1024
        • High—4096x4096
      • Number of Levels—Allows you to define the number of levels of detail to be generated between 1 and 6. The higher the number of levels, the more detailed the representation and the longer the processing time.
        Note:

        The level of detail (LOD) mesh is a representation of the 3D mesh that contains multiple levels of detail, decreasing the complexity of the model as it is divided into more levels. Fewer details are available as you zoom out in the model.

        For large projects, a level may not be generated when a high number of levels is requested, because a maximum of 20,000 triangles can be generated for each level of detail.

      • Texture Color Balance—Uses the color balancing algorithm to generate the texture of the 3D textured mesh, ensuring that the texture is homogeneous.
      • Mesh Resolution—The available parameters are as follows:
        • High—High level of detail. Recommended to maximize the visual quality of the 3D textured mesh. Computing time and size increase significantly. High resolution uses the following settings:
          • Max Octree Depth—14
          • Texture Size—16384x16384
          • Decimation Criteria—Qualitative
          • Max Triangles—1,000,000
          • Decimation Strategy—Sensitive
        • Medium—Recommended setting for most projects. Strikes a good balance between size, computing time, and level of detail for the 3D textured mesh. Medium resolution uses the following settings:
          • Max Octree Depth—12
          • Texture Size—8192x8192
          • Decimation Criteria—Quantitative
          • Max Triangles—1,000,000
          • Decimation Strategy—Sensitive
        • Low—Lower level of detail, leading to faster computing time and smaller size. This is a good compromise for sharing the 3D textured mesh. Low resolution uses the following settings:
          • Max Octree Depth—10
          • Texture Size—4096x4096
          • Decimation Criteria—Quantitative
          • Max Triangles—100,000
          • Decimation Strategy—Sensitive
        • Custom—Allows you to select the options for the 3D textured mesh generation:
          • Maximum Octree Depth—To create the 3D textured mesh, the project is iteratively subdivided into eight subregions. These are organized in a tree structure, and this parameter indicates how many levels of subdivision are created. Higher values mean more and smaller regions, leading to higher resolution and longer computing times. The value can be from 5 through 20. A rough cost illustration follows this list.
          • Texture Size (pixels)—Defines the resolution of the texture of the model, affecting the pixel size.
            Note:
            • The higher the parameter selected, the longer the processing time. Using high-definition parameters has more visual impact when zooming in and visualizing the model up close. This allows better detail identification in the model.
            • Textures of size 65536x65536 and 131072x131072 are supported only for the .obj format.
          • Decimation Criteria—After the first step of the mesh creation, if too many triangles are created, this parameter indicates how the excess triangles are discarded.
            • Quantitative—Some triangles will be discarded until the desired number is reached.
              • Maximum number of triangles—The number of triangles in the final 3D textured mesh. The number will depend on the geometry and the size of the project.
            • Qualitative—Some triangles will be discarded to maintain the original geometry.
              • Sensitive—Prioritizes maintaining the original geometry of the 3D textured mesh.
              • Aggressive—Prioritizes keeping the number of triangles low.
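
To get a feel for why the custom mesh parameters drive processing time and output size, the following Python sketch gives rough orders of magnitude. It is illustrative only; the functions and the uncompressed RGB estimate are assumptions, not Drone2Map internals:

    def octree_leaf_regions(depth):
        # Each octree level subdivides every region into eight subregions.
        if not 5 <= depth <= 20:
            raise ValueError("Maximum Octree Depth must be from 5 through 20.")
        return 8 ** depth

    def texture_megabytes(size, bytes_per_pixel=3):
        # Uncompressed RGB footprint of a size x size texture.
        return size * size * bytes_per_pixel / 1024 ** 2

    # The Low, Medium, and High mesh resolution presets listed above:
    for label, depth, tex in [("Low", 10, 4096), ("Medium", 12, 8192),
                              ("High", 14, 16384)]:
        print(f"{label}: {octree_leaf_regions(depth):.2e} leaf regions, "
              f"{texture_megabytes(tex):,.0f} MB uncompressed texture")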

Initial processing

On the Initial tab, initial processing options change the way Drone2Map calculates keypoints and matches image pairs.

  • Run Initial—Enables the initial processing step.
  • Keypoints Image Scale—Controls the way keypoints are extracted. The project defaults to Full or Rapid based on the template you chose when creating the project.
    • Full—Sets full image scale for precise results. This requires longer processing time. This option is useful when you are in the office and creating your GIS-ready products.
    • Rapid—Sets a lower image scale for faster results and lower precision. This option is useful when you need to quickly verify your collection while in the field.
    • Custom—Allows you to select the image scale manually as follows:
      • 1 (original image size, default)—Recommended image scale value.
      • 2 (double image size)—For small images (for example, 640x320 pixels), a scale of 2 (double image size) should be used. More features will be extracted, improving the accuracy of results.
      • 1/2 (half image size)—For large projects with high overlap, half-size images can be used to speed up processing, which usually results in slightly reduced accuracy, as fewer features are extracted. This scale is also recommended for blurry or low-textured images. It usually results in better output than the default scale for such images.
      • 1/4 (quarter image size)—For very large projects with high overlap, quarter-size images can be used to speed up processing, which usually results in slightly reduced accuracy, as fewer features are extracted. This scale is also recommended for very blurry or very low textured images.
      • 1/8 (eighth image size)—For very large projects with high overlap, eighth-size images can be used to speed up processing, which usually results in slightly reduced accuracy, as fewer features are extracted.
  • Matching Image Pairs—Allows you to select which pairs of images are matched.
    • Aerial Grid or Corridor—Optimizes the pairs matching for aerial grid or corridor flight paths.
    • Free flight or Terrestrial—Optimizes the pairs matching for free-flight paths or terrestrial images (for example, taking images around a building or tower).
    • Custom—Allows you to define the pairs matching parameters manually. This is suggested for advanced users when neither of the options above provides the desired results.
      Note:

      A higher number of matches will increase the quality of results while increasing the processing time. In some cases, increasing the number of pair matches can generate results for problematic projects that otherwise fail using the default matching options.

      • Use Capture Time—Matches images based on the time they were taken.
        • Number of Neighboring Images—The number of images (before and after in time) used for the pairs matching.
      • Use Triangulation of Image Geolocation—Only available if the images have geolocation. This is only useful for aerial flights. The geolocation position of the images is triangulated. Each image is then matched with images with which it is connected by a triangle.
      • Use Distance—Only available if the images have geolocation. This is useful for oblique or terrestrial projects. Each image is matched with the images that lie within a sphere centered on it; the radius of the sphere is the average distance between images multiplied by the defined relative distance. For example, if the average distance between images is 2 meters and the relative distance is 5, the radius of the sphere is (2 * 5) = 10 meters. A sketch of this rule follows this list.
        • Relative Distance Between Consecutive Images—Defines the relative distance when the Use Distance matching parameter is selected.
      • Use Image Similarity—Uses image content for pairs matching. Each image is matched with the n images that have the most similar content.
        • Maximum Number of Pairs for Each Image Based on Similarity—Maximum number of image pairs with similar image content.
      • Use Time for Multiple Cameras—For multiple flights without geolocation that use the same flight plan over the same area with a different camera model for each flight, this option matches the images from one flight with those of the other flights using the time information.
  • Matching Strategy—Allows you to determine how images are matched.
    • Use Geometrically Verified Matching—Slower but more robust. When this option is selected, matches between images are verified for geometric consistency using the geometry of the clearest matches, and only the most similar features are used.
  • Targeted Number of Key Points—Allows you to set up the number of keypoints to be extracted.
    • Automatic—The keypoints to extract are selected automatically.
    • Custom—Allows you to restrict the number of keypoints.
      • Number of Keypoints—Maximum number of keypoints to be extracted per image.
        Note:

        When extracting the keypoints per image, an internal scoring is assigned to them. Based on this scoring, the best keypoints are selected.

  • Calibration Method—Allows you to select how the camera's internal and external parameters are optimized.
    • Standard (default)
    • Alternative—Optimized for aerial nadir images with accurate geolocation and low texture content and for relatively flat terrain.
    • Accurate Geolocation and Orientation—Optimized for projects with very accurate image geolocation and orientation. This calibration method requires all images to be geolocated and oriented.
  • Camera Optimization—Defines which camera parameters are optimized.

    Note:

    The Camera Optimization processing options define which camera parameters are optimized. There are two types of camera parameters:

    • Internal camera parameters—The parameters of the camera model.
    • External camera parameters—The position and orientation of the camera.

    The optimization procedure starts from initial values to compute the optimized values. The initial values are as follows:

    • Internal camera parameters—The initial values are extracted from the selected camera model.
    • External camera parameters—The initial values are extracted from initial processing, or from the geolocation and IMU data when Accurate Geolocation and Orientation has been selected as the Calibration Method.

    The initial and optimized values for the internal camera parameters are detailed in the Quality Report.

    • Internal Parameters Optimization—Defines which internal camera parameters are optimized.
      • All—Optimizes all the internal camera parameters. Small cameras, such as those used with UAVs, are much more sensitive to temperature or vibrations, which affect the camera calibration. Therefore, it is recommended to select this option when processing images that were taken with such cameras. This is the default.
      • None—Does not optimize any of the internal camera parameters. This is recommended when using metric cameras that are already calibrated, and when the calibration parameters are used for processing.
      • Leading—Optimizes the most important internal camera parameters. This option is used to process certain cameras such as cameras with a slow rolling shutter speed. The most important internal camera parameters for perspective lens camera models are the focal length and the first two radial distortion parameters. The most important internal camera parameters for fisheye lens camera models are the polynomial coefficients.
      • All Prior—Forces the optimized internal parameters to be close to the initial values.
        Note:

        If the difference between the initial and optimized camera parameters is higher than 5 percent, the All Prior option can be used to keep the computed values close to the initial values. This typically occurs in datasets of flat and homogeneous areas that do not provide enough visual information for optimal camera calibration.

    • External Parameters Optimization—Defines how the external camera parameters, which model the position and orientation of the camera, are optimized.
      • All—Optimizes the rotation and position of the camera, including the linear rolling shutter if needed. For cameras using a linear rolling shutter, the camera model should be defined on the Edit Camera Model dialog box, located in the Manage category of the Flight Data tab. This is the default.
      • None—Does not optimize any of the external camera parameters. This option is enabled only when Accurate Geolocation and Orientation has been selected as the Calibration Method. It is recommended only when the camera orientation and position are known and very accurate.
      • Orientation—Optimizes the orientation of the camera. This option is enabled only when Accurate Geolocation and Orientation has been selected as the Calibration Method. It is recommended only when the camera position is known and very accurate, and the camera orientation is not as accurate as the camera position.
  • Rematch—Allows you to add more matches after the first part of the initial processing, which usually improves the quality of the reconstruction:
    • Automatic (default)—Enables rematching only for projects with fewer than 500 images.
    • Custom—Allows you to select whether rematch is done for the project.
      • Rematch—Enables the rematch option.
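
As a concrete illustration of the Use Distance matching rule described above, the following Python sketch lists the image pairs that fall inside the matching sphere. It is a hypothetical helper, not the Drone2Map API, and it assumes the average distance is measured between consecutive images:

    import math

    def distance_pairs(positions, relative_distance):
        # positions: (x, y, z) image geolocations, in capture order.
        gaps = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
        radius = (sum(gaps) / len(gaps)) * relative_distance
        # Match every pair of images separated by at most the radius.
        return [(i, j)
                for i in range(len(positions))
                for j in range(i + 1, len(positions))
                if math.dist(positions[i], positions[j]) <= radius]

    # Images 2 m apart on average, relative distance 5: radius = 10 m,
    # so every pair of these four images is matched.
    flight = [(0.0, 0.0, 30.0), (2.0, 0.0, 30.0),
              (4.0, 0.0, 30.0), (6.0, 0.0, 30.0)]
    print(distance_pairs(flight, 5))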

Dense

The following options are available on the Dense tab.

  • Point Cloud Densification—Allows you to set parameters for the point cloud densification. It contains the following options:
    • Image Scale—Defines the scale of the image at which additional 3D points are computed. From the drop-down list, you can select the following:
      • 1/2 (half image size, default)—Half-size images are used to compute additional 3D points. This is the recommended image scale.
      • 1 (original image size, slow)—The original image size is used to compute additional 3D points. More points are computed than with half image scale, especially in areas where features can be easily matched (for example, cities, rocks, and so on). This option can require four times more RAM and time than the default value 1/2 (half image size) and usually does not significantly improve the results.
      • 1/4 (quarter image size, fast)—Quarter-size images are used to compute additional 3D points. Fewer points are computed than with the 1/2 image scale. However, more points are computed in areas with features that cannot easily be matched, such as vegetation areas. This scale is recommended for projects with vegetation.
      • 1/8 (eighth image size, tolerant)—Eighth-size images are used to compute additional 3D points. Fewer points are computed than with the 1/2 or 1/4 image scale. However, more points are computed in areas with features that cannot easily be matched, such as vegetation areas. This scale is recommended for projects with vegetation.
      • Multiscale (default)—When this option is used, additional 3D points are computed on multiple image scales, starting with the chosen scale from the Image Scale drop-down list and going to the 1/8 scale (eighth image size, tolerant). For example, if 1/2 (half image size, default) is selected, the additional 3D points are computed on images with half, quarter, and eighth image size. This is useful for computing additional 3D points on vegetation areas as well as keeping details about areas without vegetation.

    Note:

    The image scale has an impact on the number of 3D points generated.

    • Point Density—Defines the density of the point cloud. A worked sketch follows this list. The point density can be chosen from the following options:
      • Optimal—A 3D point is computed every (4 / image scale) pixels. For example, if Image Scale is set to 1/2 (half image size), one 3D point is computed every 4 / 0.5 = 8 pixels of the original image. This is the recommended point cloud density.
      • High (slow)—A 3D point is computed every (1 / image scale) pixels. The result is an oversampled point cloud that requires up to four times more time and RAM than optimal density and usually does not significantly improve the results.
      • Low (fast)—A 3D point is computed every (16 / image scale) pixels. For example, if Image Scale is set to 1/2 (half image size), one 3D point is computed every 16 / 0.5 = 32 pixels of the original image. The final point cloud is computed up to four times faster and uses up to four times less RAM than optimal density.

      Note:

      Point density has an impact on the number of 3D points generated.

    • Minimum Number of Matches—The minimum number of matches per 3D point, which represents the minimum number of valid reprojections of the point in the images. It can be one of the following:
      • 2—Each 3D point must be correctly reprojected in at least two images. This option is recommended for projects with small overlap, but it produces a point cloud with more noise and artifacts.
      • 3 (default)—Each 3D point must be correctly reprojected in at least three images.
      • 4—Each 3D point must be correctly reprojected in at least four images. This option reduces the noise and improves the quality of the point cloud, but it may compute fewer 3D points in the final point cloud.
      • 5—Each 3D point must be correctly reprojected in at least five images. This option reduces the noise and improves the quality of the point cloud, but it may compute fewer 3D points in the final point cloud. This option is recommended for oblique imagery projects that have high overlap.
      • 6—Each 3D point must be correctly reprojected in at least six images. This option reduces the noise and improves the quality of the point cloud, but it may compute fewer 3D points in the final point cloud. This option is recommended for oblique imagery projects that have very high overlap.
    • Matching Window Size—Defines the size of the grid used to match the densified points in the original images. The options are as follows:
      • 7 x 7 pixels—Faster processing. Suggested when using aerial nadir images.
      • 9 x 9 pixels—Finds a more accurate position for the densified points in the original images. Suggested when using oblique and terrestrial images.
    • Limit Camera Depth Automatically—Prevents the reconstruction of background objects. This is useful for oblique/terrestrial projects around objects.
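
The point density options above reduce to a small formula: one 3D point is computed every (N / image scale) pixels of the original image, where N is 1 for High, 4 for Optimal, and 16 for Low. A minimal Python sketch, with hypothetical names and for illustration only:

    DENSITY_FACTOR = {"High": 1, "Optimal": 4, "Low": 16}

    def pixels_between_points(density, image_scale):
        # One 3D point is computed every this many pixels of the
        # original image.
        return DENSITY_FACTOR[density] / image_scale

    # With Image Scale = 1/2 (half image size):
    for density in ("High", "Optimal", "Low"):
        print(density, pixels_between_points(density, 0.5))
    # High 2.0, Optimal 8.0, Low 32.0 pixels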

Coordinate systems

On the Coordinate systems tab, the following options define the horizontal and vertical coordinate system for your images and the project.

  • Image Coordinate System—Defines the spatial reference for your images.
    • Horizontal coordinate system—Defines the horizontal coordinate system for your images. The default horizontal coordinate system for images is WGS84. To update the image horizontal coordinate system, click the globe button next to the Horizontal Coordinate System, select the appropriate coordinate system, and click OK.
    • Vertical reference—Defines the vertical reference for your images. The default vertical reference for images is EGM96. Most image heights are referenced to the EGM96 geoid and are either embedded in the EXIF header of the image or contained in a separate file. Most GPS receivers convert the WGS84 ellipsoidal heights provided by global navigation satellites to EGM96 heights, so if you're unsure, accept the default of EGM96. See Vertical reference for selecting an appropriate vertical reference.
  • Project Coordinate System—Defines an output spatial reference for your Drone2Map output products.
    Note:

    The project coordinate system and vertical reference can be modified only if control points are not included in the project. If you have control points, the project coordinate system and vertical reference are determined by the coordinate system and vertical reference of the control points.

    If you don't have control points, the coordinate system and vertical reference used to create Drone2Map products are determined by the coordinate system and vertical reference of the images themselves. If the images have a geographic coordinate system, Drone2Map will generate products using the local WGS84 UTM zone.

    • Horizontal Coordinate System—Defines the output horizontal coordinate system. To update the project coordinate system, click the globe button next to the Horizontal Coordinate System, select the appropriate projected coordinate system, and click OK. If you select a geographic coordinate system, Drone2Map will generate products using the local WGS84 UTM zone.
    • Vertical Reference—Defines the output vertical reference system for your Drone2Map products. This is relevant if your input images contain ellipsoidal heights and you plan to publish a 3D mesh as a scene layer, since ArcGIS Online and ArcGIS Pro both use the orthometric EGM96 geoid height model. EGM96 is the default. See Vertical reference for selecting an appropriate vertical reference. A conversion sketch follows this list.
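
The relationship between ellipsoidal and EGM96 heights can be inspected outside Drone2Map. The following sketch uses the pyproj library, which is an assumption here: it is not part of Drone2Map, and it requires the EGM96 geoid grid to be available to PROJ. The position values are hypothetical:

    from pyproj import Transformer

    # EPSG:4979 is WGS84 3D (ellipsoidal heights);
    # EPSG:4326+5773 is WGS84 with EGM96 geoid heights.
    ell_to_egm96 = Transformer.from_crs("EPSG:4979", "EPSG:4326+5773",
                                        always_xy=True)

    lon, lat, h_ell = -117.19, 34.06, 400.0  # hypothetical image position
    _, _, h_orth = ell_to_egm96.transform(lon, lat, h_ell)
    print(f"ellipsoidal {h_ell} m -> EGM96 {h_orth:.2f} m "
          f"(geoid separation {h_ell - h_orth:.2f} m)")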

Resources

On the Resources tab, you can view and adjust the following settings for the current project:

  • Location—The location of the current project in the file system. Click the link to open the file location.
  • Images—The location of the source images used in processing the current project. Click the link to open the image location.
  • Log File—The location of the project log file. Click the link to open the file location. This file is useful when troubleshooting issues with Drone2Map.
  • CPU Threads—The number of central processing unit (CPU) threads dedicated to processing your project. Slide the bar to the left or right to adjust the number of CPU threads.
  • Use CUDA—Check or uncheck to use the computer's graphics processing unit (GPU) during image processing.
Note:

Adjusting the CPU threads to a lower number increases the time required to complete image processing.

Export template

Drone2Map templates are designed to help you get your projects started quickly. The templates are preconfigured with specific processing options based on the template and desired products. You can update the processing options to customize processing settings and outputs. If you have a particular set of custom options that you use frequently, you can export your processing options as a template. Once your processing options are set, in the Options window, select Export Template, browse to the location where you want to save your template, and click Save. When you create your next project, choose your exported template, and your settings and options are loaded into Drone2Map.