AutoCreateTable | Dropdown | If the table does not already exist in the targeted database, the activity will create it automatically. |
AutoGenerateMerge | Dropdown | Automatically generates a SQL merge statement based on the primary key of the targeted table. |
CDCSource | Dropdown | Whether the source data comes from a CDC-enabled database. This modifies how the data is manipulated, as CDC tables include additional data. |
ChunkField | Text | The column name(s) used to break the extraction into multiple files. |
ChunkSize | Integer | The number of rows in each 'chunk' of data extracted. |
CustomDefinitions | Text | A free-form field for any definitions the user wishes to supply themselves. |
DataFileName | Text | Name of the file being referenced, including the file extension. NOTE: in the case of a target file name, this may be a folder if it is created by a Spark cluster (Synapse). It may likewise reference a folder when used as a source by a Spark cluster. |
DeleteAfterCompletion | Dropdown | Deletes the source files after a successful run of the activity. |
ExecuteNotebook | Text | The notebook to execute for the activity. Many task types will have this prefilled with the Lockbox-supplied notebook for the task. If the user has a custom notebook for the task, they can reference it here. |
ExtractionSQL | SQL | A custom SQL statement used to extract the data. This is ignored if a Table sub-type has been selected. |
FirstRowAsHeader | Dropdown | Whether the user wants the first row to be used as the column names (header columns). |
IncrementalType | Dropdown | Whether the user wishes to do a full extraction or a watermark-based extraction of the data. |
MaxConcurrentConnections | Integer | The limit of concurrent connections to the file allowed during the execution of the task. |
MergeSQL | SQL | A custom merge SQL statement. This will be ignored if AutoGenerateMerge is true. |
Pagination | Dropdown | Toggles whether the notebook will attempt to paginate the request using the chosen key (found in the SystemJson). This is an experimental feature and may not work for every use case. Refer to the RestAPINotebook for several examples of pagination, and cater to your APIs with a custom RestAPINotebook if it does not work for your REST source. More information on how to use this feature can be found in the Task Types section. |
PostCopySQL | SQL | A custom SQL statement to be run AFTER the copy of the table has been completed. |
PreCopySQL | SQL | A custom SQL statement to be run BEFORE the copy of the table begins. |
Purview | Dropdown | Whether the user wishes to write the pipeline execution to the Purview account registered with the selected Execution Engine (which is done in the next step). This must be a Synapse Workspace based Execution Engine. |
QualifiedIDAssociation | Dropdown | Lets the user choose which identifier is used as the pipeline's Purview unique ID (when creating the objects for Purview lineage) when writing to the Purview API. |
QueryTimeout | HH:MM:SS | The timeout limit for the relevant SQL query, in HH:MM:SS format. |
Recursively | Dropdown | Whether to target any subfolders found within the directory provided. This is useful if the file is split into parts or subfolders, e.g. by date. |
RelativePath | Text | The full relative path of the file. Pattern match characters can be used. |
RelativeUrl | Text | The relative URL for the REST API being referenced. |
RequestBody | JSON | The request body of the REST API request; this will be blank for a GET request. The response of the request will be stored in the target system. |
RequestMethod | Text | The request method (e.g. GET or POST) to be used with the REST API call. |
SchemaFileName | Text | The name of the schema file to be used when generating the table. If one is not provided, one will be generated. The schema file is expected to be in the same relative path as the DataFileName. |
SheetName | Text | The name of the targeted Excel sheet. |
SkipLineCount | Text | The number of rows (if any) to skip or ignore. |
SparkTableCreate | Dropdown | Whether the user wishes to additionally write the source data to a Spark table upon completion. |
SparkTableDBName | Text | The name of the database for the Spark table being created. |
SparkTableName | Text | The name of the Spark table to write the data to. |
SQLPoolName | Text | The name of the SQL Dedicated Pool to have the corresponding action taken against. |
SQLPoolOperation | Dropdown | Whether the user wishes to start or pause the specified SQL Dedicated Pool. |
SQLStatement | SQL | SQL statement to execute for the task. |
StagingTableName | Text | The name of the transient staging table used before the data is merged into the final table. |
TableName | Text | The table name of the final table for the data to be merged into. |
TableSchema | Text | The schema name of the final table for the data to be merged into. |
TriggerUsingAzureStorageCache | Checkbox | Whether you wish to use the storage cache instead of polling the storage account. |
UseNotebookActivity | Dropdown | Whether the user wishes to use a notebook pipeline activity to execute the notebook. If disabled, an Azure Function will be used to call it. |
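To illustrate how AutoGenerateMerge, StagingTableName, TableName and TableSchema fit together, the statement below is a sketch of the kind of merge that could be produced: the transient staging table is merged into the final table on the targeted table's primary key. The schema, table and column names (dbo, SalesOrders, stg_SalesOrders, OrderID, Amount) are hypothetical; the actual generated statement depends on your table's primary key and columns, and a custom MergeSQL statement would replace it entirely when AutoGenerateMerge is not used.

```sql
-- Hypothetical auto-generated merge (column and table names are illustrative).
-- Staging table (StagingTableName) is merged into the final table
-- (TableSchema.TableName) on the primary key, OrderID in this sketch.
MERGE INTO [dbo].[SalesOrders] AS target
USING [dbo].[stg_SalesOrders] AS source
    ON target.[OrderID] = source.[OrderID]
WHEN MATCHED THEN
    UPDATE SET target.[Amount] = source.[Amount]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([OrderID], [Amount])
    VALUES (source.[OrderID], source.[Amount]);
```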