Train Text Transformation Model (GeoAI)

Summary

Trains a text transformation model to transform, translate, or summarize text.

Learn more about how Text Transformation works

Usage

  • This tool requires deep learning frameworks be installed. To set up your machine to use deep learning frameworks in ArcGIS AllSource, see Install deep learning frameworks for ArcGIS.

  • This tool can also be used to fine-tune an existing trained model.

  • To run this tool using GPU, set the Processor Type environment to GPU. If you have more than one GPU, specify the GPU ID environment instead. A scripting sketch follows this list.

  • The input to the tool is a table or a feature class containing training data, with a text field containing the input text and a label field containing the transformed text.

  • This tool uses transformer-based backbones for training text transformation models and also supports in-context learning with prompts using the Mistral LLM. To install the Mistral backbone, see ArcGIS Mistral Backbone.

  • For information about requirements for running this tool and issues you may encounter, see Deep Learning frequently asked questions.
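
A minimal sketch of setting these environments from Python is shown below. It assumes the standard arcpy environment names processorType and gpuId; adjust the GPU ID to match your hardware.

import arcpy

# Run deep learning training on the GPU rather than the CPU
arcpy.env.processorType = "GPU"

# If several GPUs are installed, select the device to use by its ID
arcpy.env.gpuId = 0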

Parameters

Label | Explanation | Data Type
Input Table

A feature class or table containing a text field with the input text for the model and a label field containing the target transformed text.

Feature Layer; Table View
Text Field

A text field in the input feature class or table that contains the input text that will be transformed by the model.

Field
Label Field

A text field in the input feature class or table that contains the target transformed text for training the model.

Field
Output Model

The output folder location where the trained model will be stored.

Folder
Pretrained Model File
(Optional)

A pretrained model that will be used to fine-tune the new model. The input can be an Esri model definition file (.emd) or a deep learning package file (.dlpk).

A pretrained model that performs a similar task can be fine-tuned to fit the training data. The pretrained model must have been trained with the same model type and backbone model that will be used to train the new model.

File
Max Epochs
(Optional)

The maximum number of epochs for which the model will be trained. A maximum epoch value of 1 means the dataset will be passed through the neural network one time. The default value is 5.

Long
Model Backbone
(Optional)

Specifies the preconfigured neural network that will be used as the architecture for training the new model.

  • t5-small—The new model will be trained using the T5 neural network. T5 is a unified framework that converts every language problem into a text-to-text format. t5-small is the small variant of T5.
  • t5-base—The new model will be trained using the T5 neural network. T5 is a unified framework that converts every language problem into a text-to-text format. t5-base is the medium variant of T5.
  • t5-large—The new model will be trained using the T5 neural network. T5 is a unified framework that converts every language problem into a text-to-text format. t5-large is the large variant of T5.
  • mistral—The model will be trained using the Mistral large language model (LLM). Mistral is a decoder-only transformer that uses Sliding Window Attention, Grouped Query Attention, and the Byte-fallback BPE tokenizer. To install the Mistral backbone, see ArcGIS Mistral Backbone.
String
Batch Size
(Optional)

The number of training samples that will be processed at one time. The default value is 2.

Increasing the batch size can improve tool performance; however, as the batch size increases, more memory is used. If an out of memory error occurs, use a smaller batch size.

Double
Model Arguments
(Optional)

Additional arguments that will be used for initializing the model. The supported model argument is sequence_length, which is used to set the maximum sequence length of the training data that will be considered for training the model.

Value Table
Learning Rate
(Optional)

The step size indicating how much the model weights will be adjusted during the training process. If no value is specified, an optimal learning rate will be derived automatically.

Double
Validation Percentage
(Optional)

The percentage of training samples that will be used for validating the model. The default value is 10 for transformer-based model backbones and 50 for the Mistral backbone.

Double
Stop when model stops improving
(Optional)

Specifies whether model training will stop when the model is no longer improving or continue until the Max Epochs parameter value is reached.

  • Checked—The model training will stop when the model is no longer improving, regardless of the Max Epochs parameter value. This is the default.
  • Unchecked—The model training will continue until the Max Epochs parameter value is reached.
Boolean
Make model backbone trainable
(Optional)

Specifies whether the backbone layers in the pretrained model will be frozen, so that the weights and biases remain as originally designed.

  • Checked—The backbone layers will not be frozen, and the weights and biases of the Model Backbone parameter value can be altered to fit the training samples. This takes more time to process but typically produces better results. This is the default.
  • Unchecked—The backbone layers will be frozen, and the predefined weights and biases of the Model Backbone parameter value will not be altered during training.

Boolean
Remove HTML Tags
(Optional)

Specifies whether HTML tags will be removed from the input text.

  • Checked—The HTML tags in the input text will be removed. This is the default.
  • Unchecked—The HTML tags in the input text will not be removed.

Boolean
Remove URLs
(Optional)

Specifies whether URLs will be removed from the input text.

  • Checked—The URLs in the input text will be removed. This is the default.
  • Unchecked—The URLs in the input text will not be removed.

Boolean
Prompt
(Optional)

A specific input or instruction given to a large language model (LLM) to generate an expected output.

The default value is Transform the input text from the text field into the transformed text present in the label field.

String

arcpy.geoai.TrainTextTransformationModel(in_table, text_field, label_field, out_model, {pretrained_model_file}, {max_epochs}, {model_backbone}, {batch_size}, {model_arguments}, {learning_rate}, {validation_percentage}, {stop_training}, {make_trainable}, {remove_html_tags}, {remove_urls}, {prompt})
Name | Explanation | Data Type
in_table

A feature class or table containing a text field with the input text for the model and a label field containing the target transformed text.

Feature Layer; Table View
text_field

A text field in the input feature class or table that contains the input text that will be transformed by the model.

Field
label_field

A text field in the input feature class or table that contains the target transformed text for training the model.

Field
out_model

The output folder location where the trained model will be stored.

Folder
pretrained_model_file
(Optional)

A pretrained model that will be used to fine-tune the new model. The input can be an Esri model definition file (.emd) or a deep learning package file (.dlpk).

A pretrained model that performs a similar task can be fine-tuned to fit the training data. The pretrained model must have been trained with the same model type and backbone model that will be used to train the new model.

File
max_epochs
(Optional)

The maximum number of epochs for which the model will be trained. A maximum epoch value of 1 means the dataset will be passed through the neural network one time. The default value is 5.

Long
model_backbone
(Optional)

Specifies the preconfigured neural network that will be used as the architecture for training the new model.

  • t5-small—The new model will be trained using the T5 neural network. T5 is a unified framework that converts every language problem into a text-to-text format. t5-small is the small variant of T5.
  • t5-base—The new model will be trained using the T5 neural network. T5 is a unified framework that converts every language problem into a text-to-text format. t5-base is the medium variant of T5.
  • t5-large—The new model will be trained using the T5 neural network. T5 is a unified framework that converts every language problem into a text-to-text format. t5-large is the large variant of T5.
  • mistral—The model will be trained using the Mistral large language model (LLM). Mistral is a decoder-only transformer that uses Sliding Window Attention, Grouped Query Attention, and the Byte-fallback BPE tokenizer. To install the Mistral backbone, see ArcGIS Mistral Backbone.
String
batch_size
(Optional)

The number of training samples that will be processed at one time. The default value is 2.

Increasing the batch size can improve tool performance; however, as the batch size increases, more memory is used. If an out of memory error occurs, use a smaller batch size.

Double
model_arguments
[model_arguments,...]
(Optional)

Additional arguments that will be used for initializing the model. The supported model argument is sequence_length, which is used to set the maximum sequence length of the training data that will be considered for training the model.

Value Table
learning_rate
(Optional)

The step size indicating how much the model weights will be adjusted during the training process. If no value is specified, an optimal learning rate will be derived automatically.

Double
validation_percentage
(Optional)

The percentage of training samples that will be used for validating the model. The default value is 10 for transformer-based model backbones and 50 for the Mistral backbone.

Double
stop_training
(Optional)

Specifies whether model training will stop when the model is no longer improving or continue until the max_epochs parameter value is reached.

  • STOP_TRAINING—The model training will stop when the model is no longer improving, regardless of the max_epochs parameter value. This is the default.
  • CONTINUE_TRAINING—The model training will continue until the max_epochs parameter value is reached.
Boolean
make_trainable
(Optional)

Specifies whether the backbone layers in the pretrained model will be frozen, so that the weights and biases remain as originally designed.

  • TRAIN_MODEL_BACKBONE—The backbone layers will not be frozen, and the weights and biases of the model_backbone parameter value can be altered to fit the training samples. This takes more time to process but typically produces better results. This is the default.
  • FREEZE_MODEL_BACKBONE—The backbone layers will be frozen, and the predefined weights and biases of the model_backbone parameter value will not be altered during training.
Boolean
remove_html_tags
(Optional)

Specifies whether HTML tags will be removed from the input text.

  • REMOVE_HTML_TAGS—The HTML tags in the input text will be removed. This is the default.
  • DO_NOT_REMOVE_HTML_TAGS—The HTML tags in the input text will not be removed.
Boolean
remove_urls
(Optional)

Specifies whether URLs will be removed from the input text.

  • REMOVE_URLS—The URLs in the input text will be removed. This is the default.
  • DO_NOT_REMOVE_URLS—The URLs in the input text will not be removed.
Boolean
prompt
(Optional)

A specific input or instruction given to a large language model (LLM) to generate an expected output.

The default value is Transform the input text from the text field into the transformed text present in the label field.

String

Code sample

TrainTextTransformationModel (Python window)

The following Python window script demonstrates how to use the TrainTextTransformationModel function.

# Name: TrainTextTransformation.py
# Description: Train a sequence-to-sequence model to translate text from English to German.  
#
# Requirements: ArcGIS Pro Advanced license

# Import system modules
import arcpy
import os

# Set local variables
in_table = "training_data.csv"
out_folder = "c:\\texttransformer"

# Run Train Text Transformation Model
arcpy.geoai.TrainTextTransformationModel(in_table, text_field="input", label_field="target",
                                         out_model=out_folder, max_epochs=2, batch_size=16)
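
The following stand-alone sketch fine-tunes a previously trained model and passes the optional keywords by name. The paths, field names, sequence_length value, and the value-table string form used for model_arguments are assumptions for illustration; substitute values that match your data and pretrained model.

# Fine-tune an existing text transformation model (hypothetical paths and field names)
import arcpy

arcpy.env.processorType = "GPU"

arcpy.geoai.TrainTextTransformationModel(
    in_table="training_data.csv",
    text_field="input",
    label_field="target",
    out_model="c:\\texttransformer_finetuned",
    pretrained_model_file="c:\\texttransformer\\texttransformer.dlpk",  # model to fine-tune
    max_epochs=3,
    model_backbone="t5-base",                 # must match the backbone of the pretrained model
    batch_size=4,
    model_arguments="sequence_length 512",    # assumed value table string form
    validation_percentage=10,
    stop_training="STOP_TRAINING",
    make_trainable="TRAIN_MODEL_BACKBONE",
    remove_html_tags="REMOVE_HTML_TAGS",
    remove_urls="REMOVE_URLS")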

Environments