datallog run
The datallog run command executes a specific automation locally within the current project. This is the primary way to test and debug your automation's logic before pushing it to the cloud.
Usage
```shell
datallog run [options] <automation_name>
```

Arguments
<automation_name>
- Type: string
- Required: Yes
The name of the automation to run. This must match the name of an automation created with datallog create-automation.
Options
-s, --seed <seed>
- Type: string
- Default: None
An optional seed value to pass as the initial input to the automation's @core_task. This is useful for providing a small piece of data directly from the command line. This option cannot be used with --seed-file.
-f, --seed-file <seed_file>
- Type: string (file path)
- Default: seed.json in the automation's directory. If this file exists, its content is passed as the initial data to the @core_task when the automation runs locally.
The path to a file containing the seed data for the automation. This is ideal for providing larger or more complex JSON data as the initial input. This option cannot be used with --seed.
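As a sketch, a seed file is simply a JSON document whose content becomes the initial input; the keys below are illustrative only, since the expected shape depends entirely on what your @core_task consumes:

```json
{
  "user_id": "user-id-123",
  "region": "us-east-1",
  "batch_size": 50
}
```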
-p, --parallelism <n>
- Type: integer
- Default: 1
The number of parallel workers to use when executing the automation. If a task returns a list, Datallog will process the items in that list using this many parallel workers.
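Conceptually, the fan-out behaves like mapping a task over the list with a bounded worker pool. The sketch below is an illustration of that idea using Python's standard library, not datallog's actual execution engine; `fan_out` and `process_item` are hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(task, items, parallelism=1):
    """Run `task` over each item in `items` using up to
    `parallelism` concurrent workers, preserving input order."""
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(task, items))

# A stand-in for a downstream task that processes one list item.
def process_item(item):
    return item.upper()

# A task returned this list; its items are processed by 4 workers.
items = ["alpha", "beta", "gamma", "delta"]
results = fan_out(process_item, items, parallelism=4)
```

With `-p 1` (the default), items are processed one at a time; higher values trade memory and load for throughput.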
-l, --log-to-dir <log_to_dir>
- Type: string (directory path)
- Default: None
Specifies a directory where the output logs of the automation run should be stored. This is useful for capturing detailed execution logs for later analysis.
Examples
Run the hello-automation automation:
```shell
datallog run hello-automation
```

Run the automation with a simple string as the seed data:

```shell
datallog run hello-automation -s "user-id-123"
```

Run the automation using a JSON file as the seed data:

```shell
datallog run hello-automation -f ./initial-data.json
```

Run the automation with 8 parallel workers and save the logs to a run_logs directory:

```shell
datallog run hello-automation -p 8 -l ./run_logs
```