Processing options

In Drone2Map, you can adjust a project's processing options to customize it. Steps can be run independently, minimizing the time required to generate the desired products; however, the initial step must be run at least once. The processing steps and product options are described below.

Use the Processing Options window to configure which steps will run, the settings for each step, and which products will be created.

Initial

Initial processing options change the way Drone2Map calculates keypoints and matches image pairs.

  • Keypoints Image Scale—Controls the way keypoints are extracted. The project defaults to Full or Rapid based on the template you chose when creating the project.
    • Full—Sets full image scale for precise results. This requires longer processing time. This option is useful when you are in the office and creating your GIS-ready products.
    • Rapid—Sets a lower image scale for faster results and lower precision. This option is useful when you need to quickly verify your collection while in the field.
    • Custom—Select the feature scale manually as follows:
      • 1 (original image size, default)—Recommended image scale value.
      • 2 (double image size)—For small images (for example, 640x320 pixels), a scale of 2 (double size images) should be used. More features will be extracted, improving the accuracy of the results.
      • 1/2 (half image size)—For large projects with high overlap, half size images can be used to speed up processing, which usually results in slightly reduced accuracy, as less features are extracted. This scale is also recommended for blurry or low textured images. It usually results in better output than the default scale for such images.
      • 1/4 (quarter image size)—For very large projects with high overlap, quarter size images can be used to speed up processing, which usually results in slightly reduced accuracy, as less features are extracted. This scale is also recommended for very blurry or very low textured images.
      • 1/8 (eighth image size)—For very large projects with high overlap, eighth size images can be used to speed up processing, which usually results in slightly reduced accuracy, as less features are extracted.
  • Matching Image Pairs—Allows you to select which pairs of images are matched.
    • Aerial Grid or Corridor—Optimizes the pairs matching for aerial grid or corridor flight paths.
    • Free flight or Terrestrial—Optimizes the pairs matching for free-flight paths or terrestrial images (for example, taking images around a building or tower).
    • Custom—Specific pairs matching parameter useful in specific projects and for advanced users only. This is suggested if one of the options above does not provide the desired results.
      • Use Capture Time—Matches images based on the time they were taken.
        • Number of Neighboring Images—The number of images (before and after in time) used for the pairs matching.
      • Use Triangulation of Image Geolocation—Only available if the images have geolocation. This is only useful for aerial flights. The geolocation position of the images is triangulated. Each image is then matched with images with which it is connected by a triangle.
      • Use Distance—Only available if the images have geolocation. This is useful for oblique or terrestrial projects. Each image is matched with other images that lie within a relative distance.
        • Relative Distance Between Consecutive Images—Only available if the images have geolocation. This is useful for oblique or terrestrial projects. Each image is matched with images within a relative distance.
      • Use Image Similarity—Uses image content for pairs matching. This matches the n images with most similar content.
        • Maximum Number of Pairs for Each Image Based on Similarity—Maximum number of image pairs with similar image content.
      • Use Time for Multiple Cameras—For multiple flights without geolocation using the same flight plan over the same area with different camera models for each flight, it matches the images from one flight with the other flight using the time information.
  • Matching Strategy—Allows you to determine how images are matched.
    • Use Geometrically Verified Matching—Slower but more robust. When this option is selected, matches are verified for geometric consistency, using the geometry of the most reliable matches between images. When this option is not selected, only the most similar features are matched.
  • Targeted Number of Key Points—Allows you to set up the number of keypoints to be extracted.
    • Automatic—The number of keypoints to extract is selected automatically.
    • Custom—Allows you to restrict the number of keypoints.
      • Number of Keypoints—Maximum number of keypoints to be extracted per image.
        Note:

        When extracting the keypoints per image, an internal scoring is assigned to them. Based on this scoring, the best keypoints are selected.

  • Calibration Method—Allows you to select how the camera internal and external parameters are optimized.
    • Standard (default)—Recommended calibration method for most projects.
    • Alternative—Optimized for aerial nadir images with accurate geolocation and low texture content and for relatively flat terrain.
  • Rematch—Allows you to add more matches after the first part of the initial processing, which usually improves the quality of the reconstruction:
    • Automatic (default)—Enables rematching only for projects with fewer than 500 images.
    • Custom—Allows you to select whether or not rematch is done for the project.
      • Rematch—Enables the rematch option.
  • Output Coordinate System—Defines an output spatial reference for your Drone2Map output products.
    Note:

    Output coordinate system and vertical reference can only be modified if ground control points (GCPs) are not included in the project. If you have GCPs, the output coordinate system and vertical reference of Drone2Map output products is always determined by the coordinate system and vertical reference model of the ground control.

    If you don't have GCPs, the coordinate system and vertical reference model used in the creation of Drone2Map products is determined by the coordinate system and vertical reference of the images themselves. If the images have an XY coordinate system of WGS84, Drone2Map output products are created using an XY coordinate system of the relevant UTM zone. These defaults can be modified in processing options.

    This parameter is only available if your project has no GCPs. To update the output XY coordinate system, click Edit, select the appropriate projected coordinate system, and click OK.
    • Vertical Reference—Defines the output vertical reference system for your Drone2Map products. This is relevant if your input images contain ellipsoidal heights and you plan to publish a 3D mesh as a scene layer, since ArcGIS Online and ArcGIS Pro both use the orthometric EGM96 geoid height model.

      EGM96 is the default, but you can select from the following options:

      • EGM 84—For altitudes based on the EGM84 geoid
      • EGM 96—For altitudes based on the EGM96 geoid
      • EGM 2008—For altitudes based on the EGM2008 geoid
      • No Conversion
      • WGS 1984 Ellipsoid—For altitudes based on the ellipsoid specified in the XY coordinate system
    • Vertical units—Displays the vertical units as determined by the output coordinate system.
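
The capture-time pair matching strategy described above (Use Capture Time with Number of Neighboring Images) can be illustrated with a short sketch. This is an illustrative assumption about how temporal neighbor pairing works in general, not Drone2Map's actual implementation; the function name is hypothetical:

```python
# Illustrative sketch: pair each image with the next k images in
# capture-time order, mirroring the "Number of Neighboring Images"
# setting. Not Drone2Map code.

def time_neighbor_pairs(image_names, k=2):
    """Return candidate match pairs for a time-sorted list of images."""
    pairs = []
    for i in range(len(image_names)):
        # Pair image i with up to k images taken after it.
        for j in range(i + 1, min(i + k + 1, len(image_names))):
            pairs.append((image_names[i], image_names[j]))
    return pairs

shots = ["IMG_001", "IMG_002", "IMG_003", "IMG_004"]
print(time_neighbor_pairs(shots, k=1))
# → [('IMG_001', 'IMG_002'), ('IMG_002', 'IMG_003'), ('IMG_003', 'IMG_004')]
```

Increasing k grows the number of candidate pairs (and the matching time), which is why the setting trades processing time against robustness.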

Dense

  • Point Cloud Densification—Allows you to set parameters for the point cloud densification. It contains the following options:
    • Image Scale—Defines the scale of the image at which additional 3D points are computed. From the drop-down list, you can select the following:
      • 1/2 (half image size, default)—Half size images are used to compute additional 3D points. This is the recommended image scale.
      • 1 (original image size, slow)—The original image size is used to compute additional 3D points. More points are computed than with half image scale, especially in areas where features can be easily matched (for example, cities, rocks, and so on). This option can require four times more RAM and time than the default value 1/2 (half image size) and usually does not significantly improve the results.
      • 1/4 (quarter image size, fast)—Quarter size images are used to compute additional 3D points. Less points are computed than with the 1/2 image scale. However, more points are computed in areas with features that cannot easily be matched, such as vegetation areas. This scale is recommended for projects with vegetation.
      • 1/8 (eighth image size, tolerant)—Eighth size images are used to compute additional 3D points. Less points are computed than with the 1/2 or 1/4 image scale. However, more points are computed in areas with features that cannot easily be matched, such as vegetation areas. This scale is recommended for projects with vegetation.
      • Multiscale (default)—When this option is used, additional 3D points are computed on multiple image scales, starting with the chosen scale from the Image Scale drop-down list and going to the 1/8 scale (eighth image size, tolerant). For example, if 1/2 (half image size, default) is selected, the additional 3D points are computed on images with half, quarter, and eighth image size. This is useful for computing additional 3D points on vegetation areas as well as keeping details about areas without vegetation.

    Note:

    The image scale has an impact on the number of 3D points generated.

    • Point Density—This parameter defines the density of the point cloud. The point density can be chosen from the following options:
      • Optimal—A 3D point is computed for every 4/image scale pixel. For example, if Image Scale is set to 1/2 (half image size), one 3D point is computed every 4/(0.5) = 8 pixels of the original image. This is the recommended point cloud density.
      • High (slow)—A 3D point is computed for every image scale pixel. The result is an over-sampled point cloud that requires up to four times more time and RAM than optimal density. This point cloud usually does not significantly improve the results.
      • Low (fast)—A 3D point is computed for every 16/image scale pixel. For example, if Image Scale is set to 1/2 (half image size), one 3D point is computed every 16/(0.5) = 32 pixels of the original image. The final point cloud is computed up to four times faster and uses up to four times less RAM than optimal density.

      Note:

      Point density has an impact on the number of 3D points generated.

    • Minimum Number of Matches—The minimum number of matches per 3D point represents the minimum number of valid reprojections of this 3D point in the images. The minimum number of matches per 3D point can be one of the following:
      • 2—Each 3D point must be correctly reprojected in at least two images. This option is recommended for projects with small overlap, but it produces a point cloud with more noise and artifacts.
      • 3 (default)—Each 3D point must be correctly reprojected in at least three images.
      • 4—Each 3D point must be correctly reprojected in at least four images. This option reduces the noise and improves the quality of the point cloud, but it may compute fewer 3D points in the final point cloud.
      • 5—Each 3D point must be correctly reprojected in at least five images. This option reduces the noise and improves the quality of the point cloud, but it may compute fewer 3D points in the final point cloud. This option is recommended for oblique imagery projects that have high overlap.
      • 6—Each 3D point must be correctly reprojected in at least six images. This option reduces the noise and improves the quality of the point cloud, but it may compute fewer 3D points in the final point cloud. This option is recommended for oblique imagery projects that have very high overlap.
    • Matching Window Size—Defines the size of the grid used to match the densified points in the original images. The options are as follows:
      • 7 x 7 pixels—Faster processing. Suggested when using aerial nadir images.
      • 9 x 9 pixels—Finds a more accurate position for the densified points in the original images. Suggested when using oblique and terrestrial images.
    • Limit Camera Depth Automatically—Prevents the reconstruction of background objects. This is useful for oblique/terrestrial projects around objects.
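
The Image Scale and Point Density settings combine into a sampling interval, per the formulas above (High = 1/scale, Optimal = 4/scale, Low = 16/scale pixels). A small sketch of that arithmetic, with illustrative names that are not Drone2Map identifiers:

```python
# Sampling-interval arithmetic for point cloud densification.
# Density factors follow the formulas in this section.
DENSITY_FACTOR = {"high": 1, "optimal": 4, "low": 16}

def sampling_interval_px(image_scale, density="optimal"):
    """Pixels of the original image between two computed 3D points."""
    return DENSITY_FACTOR[density] / image_scale

print(sampling_interval_px(0.5, "optimal"))  # 4 / 0.5 = 8.0 pixels
print(sampling_interval_px(0.5, "low"))      # 16 / 0.5 = 32.0 pixels
```

A larger interval means fewer 3D points, which is why lowering the density or the image scale speeds up processing and reduces RAM use.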

2D products

The following options allow you to adjust the processing options and desired outputs for orthomosaic, digital surface model (DSM), and Normalized Difference Vegetation Index (NDVI) images:

  • Create Orthomosaic—Generates an orthomosaic from the project images.
    • Resolution—Defines the spatial resolution used to generate the orthomosaic and DSM.
      • Automatic (default)—The resolution of your source imagery is used. Changing this value changes the resolution by multiples of the ground sample distance (GSD).
      • User defined—Allows you to select any defined value in cm or pixel for the GSD.
    • Merge tiles (default)—Generates a single orthomosaic GeoTIFF file by merging the individual tiles. If you don't choose this option, the merged orthomosaic file is not generated, and the product is not added to the map. A message displays that orthomosaic tiles are successfully created and are available in the 2D products folder.
    • Create BigTIFF—Produces orthomosaic images in BigTIFF format rather than standard TIFF format.
      Note:

      Standard TIFF format is limited to images that are 4 GB or smaller. Images larger than 4 GB require the BigTIFF format. Checking the Create BigTIFF check box forces the creation of the orthomosaic in BigTIFF format regardless of size. Certain software packages do not support the BigTIFF format and cannot open orthomosaic images that Drone2Map writes in BigTIFF format.

  • Create Digital Surface Model—Generates a DSM from the project images.
    • Method—The method used for the raster DSM generation. The method affects the processing time and the quality of the results.
      • Inverse Distance Weighting—The values at unknown points are calculated as a weighted average of the values at the known points. This method is recommended for buildings.
      • Triangulation—The triangulation algorithm is used. This method is recommended for flat areas (agriculture fields) and stockpiles.
        Note:

        The triangulation method can be up to ten times faster than the Inverse Distance Weighting method, but the results may not be as good, especially for buildings.

    • DSM Filters—Allows you to define parameters to filter and smooth the points of the densified point cloud.
      • Use Noise Filtering—Generation of the densified point cloud can lead to noisy and erroneous points. The noise filtering corrects the altitude of these points with the median altitude of the neighboring points.
      • Use Surface Smoothing—Once the noise filter has been applied, a surface is generated from the obtained points. This surface can contain areas with erroneous small bumps. The surface smoothing corrects these areas by flattening them. This option allows you to set the following types of smoothing:
        • Smooth—Tries to smooth areas, assuming that sharp features exist because of noise and they should be removed. Areas that are not very planar are smoothed and become planar.
        • Medium—Compromise between smooth and sharp. It tries to preserve sharp features while flattening rough planar areas.
        • Sharp (default)—Tries to preserve the orientation of the surface and keep sharp features such as corners and edges of buildings. Therefore, only quasi-planar areas are flattened.
    • Merge tiles (default)—Generates a single DSM GeoTIFF file by merging the individual tiles. When this option is not selected, the merged DSM file is not generated, and the product is not added to the map. A message displays that DSM tiles are successfully created and are available in the 2D products folder.
  • Create Digital Terrain Model—Generates a DTM from the project images.
    • Resolution—Defines the spatial resolution used to generate the digital terrain model (DTM).
      • Automatic (default)—The resolution of your source imagery is used. Changing this value changes the resolution by multiples of the GSD.
      • User defined—Allows you to select any defined value in cm or pixel for the GSD.
    • Merge tiles (default)—Generates a single DTM GeoTIFF file by merging the individual tiles. When this option is not selected, the merged DTM file is not generated, and the product is not added to the map. A message displays that DTM tiles are successfully created and available in the 2D products folder.
  • Create contours—Generates contour lines using the raster DSM or the raster DTM.
    Note:

    The contour lines are computed using either of the following:

    • Raster DTM if it is generated. This is useful when the area is covered by buildings or other objects as the objects are filtered out for the DTM generation and do not affect the contour lines based on the DTM.
    • Raster DSM if the raster DTM is not generated or if the DTM tiles are not merged. This is useful when the area is not covered by buildings or other objects as the objects affect the contour lines based on the DSM.

    Note:

    The option to create contour lines is unavailable if the option to merge tiles of the raster DSM is not selected.

    • Output format
      • SHP—If generating contours, this option is automatically checked and cannot be unchecked. The contour lines will be generated in .shp format.
      • DXF—When this option is selected, the contour lines file is generated in .dxf format.
    • Contour Settings
      • Contour base—Defines the relative altitude, which is used as a contour line base in meters.
      • Elevation interval—Defines the contour line elevation interval in meters. It can be any positive value. The elevation interval must be smaller than the (maximum - minimum) altitude of the DSM.
      • Resolution—Defines the horizontal distance for which an altitude value is registered. The higher the resolution value, the smoother the contour lines.
      • Minimum line size—Defines the minimum number of vertices that a contour line can have. Lines with fewer vertices are deleted, which reduces noise.
  • Multispectral Indices—Generates a color NDVI from your multispectral image collection. Source imagery must contain an infrared band.

Note:

Modifying standard cameras by changing the filters to get infrared measurements has become common practice. If you want to generate an NDVI product from images taken with a modified camera, you need to identify the bands in the Processing Options window. In the Processing Options window, on the 2D products tab, check the box to Manually set NDVI bands (for use with a modified camera), and select the proper near infrared and red bands for the modified camera from the Red and NIR drop-down lists.
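
NDVI is conventionally computed per pixel as (NIR - Red) / (NIR + Red). A minimal NumPy sketch of that formula, using synthetic band arrays rather than real multispectral imagery (this illustrates the index itself, not Drone2Map's internal implementation):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    out = np.zeros_like(denom)
    # Guard against divide-by-zero where both bands are 0.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out  # ranges from -1 (water/bare soil) to +1 (dense vegetation)

# Synthetic 1x2 bands: a vegetated pixel and a neutral pixel.
nir_band = np.array([[200, 50]], dtype=np.uint16)
red_band = np.array([[100, 50]], dtype=np.uint16)
print(ndvi(nir_band, red_band))
```

Healthy vegetation reflects strongly in near infrared and absorbs red light, so vegetated pixels push toward +1 while bare ground stays near 0.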

3D products

These options allow you to change the desired outputs for the point cloud and 3D textured mesh created in this step.

  • Create Point Clouds—Allows you to select the desired output formats for the point cloud. The options are as follows:
    • zLAS (default)—Produces a lidar zLAS file using Esri's optimized LAS format, with X,Y,Z position and color information for each point of the point cloud. If you choose zLAS, a LAS file is also produced.
    • LAS (default)—Lidar LAS file with X,Y,Z position and color information for each point of the point cloud.
    • PLY—PLY file with X,Y,Z position and color information for each point of the point cloud.
    • XYZ—ASCII text file with X,Y,Z position and color information for each point of the point cloud.
      • Delimiter—Defines the delimiter character of the file, used to separate the values. The drop-down list has the following options:
        • Space
        • Tab
        • Comma
        • Semicolon
    • Settings
      • Merge Tiles—If the point cloud consists of many points, several tiles are generated. This option produces a single file with all the points.
      • Classify—Enables the generation of the point cloud classification.
        Note:

        When the point cloud classification is used for the DTM generation, it significantly improves the DTM.

  • Create Textured Meshes—Allows you to generate the 3D textured mesh while processing and allows you to set up the parameters to use for mesh generation.

    Note:

    The densified point cloud is used to generate a surface composed of triangles. The surface minimizes the distance to the points of the densified point cloud, but the vertices of its triangles do not necessarily coincide with exact points of the densified point cloud.

    • 3D Textured Mesh Outputs—Allows you to select the desired output formats for the 3D textured mesh.

      You can choose from the following output formats:

      • Level-of-detail (LoD) mesh—Allows you to adjust the resolution and number of levels of detail for your mesh.
        • Texture Quality—Allows you to define the resolution of the texture. You can select from the following:
          • Low—512x512
          • Medium—1024x1024
          • High—4096x4096
        • Number of Levels—Allows you to define the number of different levels of detail to be generated, between 1 and 7. The higher the number of levels, the more detailed the representation and the longer the processing time.
          Note:

          The level of detail (LOD) mesh is a representation of the 3D mesh that contains multiple levels of detail, decreasing the complexity of the model as it is divided into more levels. Fewer details are available as you zoom out in the model.

          For large projects, it is possible that a level cannot be generated for a high number of levels, as there is a maximum of 20,000 triangles that can be generated for each level of detail.

        • Drone2Map includes the following LoD mesh formats:
          • OSGB—.osgb
          • Scene Layer Package (default)—.slpk
      • OBJ (default)—An OBJ file with the following:
        • X,Y,Z position for each vertex of the 3D textured mesh
        • Texture information (using .jpg and .mtl texture files)
      • FBX—An FBX file with the following:
        • X,Y,Z position for each vertex of the 3D textured mesh
        • Texture information
      • AutoCAD DXF—A DXF file with the following:
        • X,Y,Z position for each vertex of the 3D textured mesh
      • PLY—A PLY file with the following:
        • X,Y,Z position for each vertex of the 3D textured mesh
        • Texture information (using a .jpg texture file)
          Note:

          The 3D textured mesh file is not georeferenced. It has coordinates on a local coordinate system centered around the project.

      • 3D PDF (default)—A PDF file containing a 3D model of the 3D textured mesh. The texture size of the 3D textured mesh that is displayed in the 3D PDF is 2000x2000 pixels.
        • Logo—You can select a logo (.jpg or .tif file) to display on the 3D PDF.
  • Settings
    Note:

    • The point cloud is used to generate a surface composed of triangles. The distance between the mesh and the points of the point cloud is optimized to be minimal, but this means that points of the mesh do not necessarily correspond to points of the point cloud.
    • Since the mesh is 3D, it is unfolded onto a 2D plane to define the resolution (pixel size). Then the 3D position of the pixel is reprojected into the original images to obtain the color. Blending is used instead of stitching to generate the texture of the 3D textured mesh.
    • The 3D textured mesh will be generated using the point cloud. If a processing area or image annotations are defined, and if the corresponding options are selected in the Point Cloud Filters options, they will also be used for the generation of the 3D textured mesh.

    The available parameters are as follows:

    • High Resolution—High level of detail. Recommended to maximize the visual aspect of the 3D textured mesh. Computing time and size will increase significantly. High resolution uses the following settings:
      • Max Octree Depth—14
      • Texture Size—16384
      • Decimation Criteria—Qualitative
      • Max Triangles—1000000
      • Decimation Strategy—Sensitive
    • Medium Resolution—Recommended setting for most projects. Strikes a good balance between size, computing time, and level of detail for the 3D textured mesh.
      • Max Octree Depth—12
      • Texture Size—8192
      • Decimation Criteria—Quantitative
      • Max Triangles—1000000
      • Decimation Strategy—Sensitive
    • Low Resolution—Lower level of detail leading to faster computing time and lower size. This is a good compromise for sharing the 3D textured mesh.
      • Max Octree Depth—10
      • Texture Size—4096
      • Decimation Criteria—Quantitative
      • Max Triangles—100000
      • Decimation Strategy—Sensitive
    • Custom—Allows you to select the options for the 3D textured mesh generation:
      • Maximum Octree Depth—To create the 3D textured mesh, the project is iteratively subdivided into eight subregions. These are organized in a tree structure, and this parameter indicates how many subdivisions should be created. Higher values mean more regions are created, so each region is smaller, leading to higher resolution and longer computing times. The value can be from 5 through 20.
      • Texture Size (pixels)—Defines the resolution of the texture of the model, affecting the pixel size.
        Note:
        • The higher the parameter selected, the longer the processing time. Using high-definition parameters has more visual impact when zooming in and visualizing the model up close. This allows better detail identification in the model.
        • Textures of size 65536x65536 and 131072x131072 are only supported for the .obj format.
      • Decimation Criteria—After the first step in the mesh creation, if too many triangles are created, this parameter indicates how the spurious triangles should be discarded.
        • Quantitative—Some triangles will be discarded until the desired number is reached.
          • Maximum number of triangles—The number of triangles in the final 3D textured mesh. The number will depend on the geometry and the size of the project.
        • Qualitative—Some triangles will be discarded to maintain the original geometry.
          • Sensitive—Selected triangles have a priority to maintain the original geometry of the 3D textured mesh.
          • Aggressive—Selected triangles have a priority to maintain a lower number of triangles.
    • Use Color Balancing for Texture—The color balancing algorithm is used to generate the texture of the 3D textured mesh, ensuring that the texture is homogeneous.
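
The XYZ ASCII point cloud format described above is simple enough to sketch directly: one line per point with X, Y, Z position and color values separated by the chosen delimiter. This writer is an illustration of the layout, not Drone2Map's exporter; the function and variable names are hypothetical:

```python
# Illustrative XYZ ASCII writer: X, Y, Z, R, G, B per line, joined by
# one of the delimiter choices listed in the documentation above.
DELIMITERS = {"Space": " ", "Tab": "\t", "Comma": ",", "Semicolon": ";"}

def write_xyz(points, path, delimiter="Comma"):
    """Write (x, y, z, r, g, b) tuples to an ASCII .xyz file."""
    sep = DELIMITERS[delimiter]
    with open(path, "w") as f:
        for x, y, z, r, g, b in points:
            f.write(sep.join(str(v) for v in (x, y, z, r, g, b)) + "\n")

pts = [(471000.5, 5432100.25, 312.8, 120, 128, 90)]
write_xyz(pts, "cloud.xyz", "Semicolon")
print(open("cloud.xyz").read())  # 471000.5;5432100.25;312.8;120;128;90
```

Because the format is plain text, it is easy to inspect but much larger on disk than LAS or zLAS for the same point cloud, which is one reason binary formats are the defaults.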

Resources

You can view and adjust the following settings for the current project:

  • Project Name—Name of the current project.
  • Location—Location of the current project in the file system. Click the link to open the file location.
  • Images—Location of the source images used in processing the current project. Click the link to open the image location.
  • CPU Threads—Number of central processing unit (CPU) threads dedicated to processing your project. Slide the bar to the left or right to adjust the number of CPU threads.
  • Use CUDA—Check or uncheck to use the computer's graphics processing unit (GPU) during image processing.
  • Log File—Location of the D2M log file. Click the link to open the file location. This file is useful when troubleshooting issues with Drone2Map.
Note:

Adjusting the CPU threads to a lower number increases the time required to complete image processing.

Export as template

Drone2Map templates are designed to help you get your projects started quickly. The templates are preconfigured with specific processing options based on the template and desired products. You can update the processing options to customize processing settings and outputs. If you have a particular set of custom options that you use frequently, you can export your processing options as a template. Once your processing options are set, in the Processing Options window, select Export as template, browse to the location where you want to save your template, and click Save. When you create your next project, choose your exported template, and your settings and options are loaded into Drone2Map.