Run or resume an experiment.
```
usage: dvc exp run [-h] [-q | -v] [-f] [repro_options ...]
                   [-S [<filename>:]<params_list>] [--queue] [--run-all]
                   [-j <number>] [--temp] [-r <experiment_rev>] [--reset]
                   [targets [targets ...]]

positional arguments:
  targets    Stages to reproduce. 'dvc.yaml' by default
```
Provides a way to execute and track experiments in your project without polluting it with unnecessary commits, branches, directories, etc.
`dvc exp run` is equivalent to `dvc repro` for experiments. It has the same behavior when it comes to `targets` and stage execution (restores the dependency graph, etc.). See the command options for more on the differences.
Before running an experiment, you'll probably want to make modifications such as data and code updates, or hyperparameter tuning. For the latter, you can use the `--set-param` (`-S`) option of this command to change `dvc params` values on the fly.
Each experiment creates and tracks a project variation based on your workspace changes. Experiments will have a unique, auto-generated name such as `exp-bfe64` by default, which can be customized using the `--name` option.
The results of the last `dvc exp run` can be seen in the workspace. To display and compare multiple experiments, use `dvc exp show` or `dvc exp diff` (`dvc plots diff` also accepts experiment names as revisions). Use `dvc exp apply` to restore the results of any other experiment instead.
Note that experiment data will remain in the cache until you use regular `dvc gc` to clean it up.
To track successive steps in a longer or deeper experiment, you can register checkpoints from your code. Each `dvc exp run` will resume from the latest checkpoint.

First, mark at least one stage output with `checkpoint: true` in `dvc.yaml`. This is needed so that the experiment can resume later, based on the cached output(s) (circular dependency).
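As a sketch, a checkpoint-enabled training stage in `dvc.yaml` might look like this (the stage, script, and output names are hypothetical):

```yaml
stages:
  train:
    cmd: python train.py
    deps:
      - train.py
    outs:
      # checkpoint: true tells DVC to keep the cached model.pt around
      # so a later run can resume from it
      - model.pt:
          checkpoint: true
```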
Then, use the `dvc.api.make_checkpoint()` function (Python code), or write a signal file (any programming language) following the same steps as that function.
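For illustration, here is a minimal Python sketch of the signal-file protocol that `dvc.api.make_checkpoint()` follows: DVC sets environment variables for checkpoint stages, the process creates a signal file, and DVC removes it once the checkpoint is recorded. Treat the exact details as an assumption based on the DVC docs, and prefer the built-in function in real code:

```python
import os
import time

def make_checkpoint_sketch():
    """Signal DVC to register a checkpoint (sketch of the signal-file protocol).

    Assumes DVC sets the DVC_CHECKPOINT and DVC_ROOT environment variables for
    checkpoint stages; the process creates .dvc/tmp/DVC_CHECKPOINT and waits
    until DVC deletes it, which means the checkpoint has been recorded.
    """
    if not os.getenv("DVC_CHECKPOINT"):
        return None  # not running inside a DVC checkpoint stage; do nothing

    root = os.getenv("DVC_ROOT", ".")
    signal_file = os.path.join(root, ".dvc", "tmp", "DVC_CHECKPOINT")
    os.makedirs(os.path.dirname(signal_file), exist_ok=True)
    with open(signal_file, "w"):
        pass  # create an empty signal file

    while os.path.exists(signal_file):  # DVC removes the file when done
        time.sleep(0.1)
    return signal_file
```

Outside of a DVC checkpoint stage (no `DVC_CHECKPOINT` variable set), the function is a no-op, so the training script can also run standalone.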
You can now use `dvc exp run` to begin the experiment. All checkpoints registered at runtime will be preserved, even if the process gets interrupted (e.g. with [Ctrl] C, or by an error). Without interruption, a "wrap-up" checkpoint will be added (if needed), so that changes to pipeline outputs don't remain in the workspace.
Subsequent uses of `dvc exp run` will continue from the latest checkpoint (using the latest cached versions of all outputs).
List previous checkpoints with `dvc exp show`. To resume from a previous checkpoint, you must first `dvc exp apply` it before using `dvc exp run`. For `--temp` runs (see next section), use `--rev` instead to specify the checkpoint to continue from.

Use `--reset` to start over (discards previous checkpoints and their outputs). This is useful for re-training ML models, for example.
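Putting the resume workflow together, a session might look like this (the experiment name `exp-9bc1` is hypothetical):

```
$ dvc exp show               # list experiments and their checkpoints
$ dvc exp apply exp-9bc1     # restore the checkpoint to resume from
$ dvc exp run                # continue the experiment from that point
```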
The `--queue` option lets you create an experiment as usual, except that nothing is actually run. Instead, the experiment is put in a wait-list for later execution. `dvc exp show` will mark queued experiments with an asterisk.

Note that queuing an experiment that uses checkpoints implies `--reset`, unless a `--rev` is provided (refer to the previous section).
Use `dvc exp run --run-all` to process the queue. This is done outside your workspace (in temporary dirs in `.dvc/tmp/exps`) to preserve any workspace changes between/after queueing runs.
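For example, you could queue several experiments with different param values and then process them in one go (the `train.lr` parameter name is hypothetical):

```
$ dvc exp run --queue -S train.lr=0.001
$ dvc exp run --queue -S train.lr=0.01
$ dvc exp run --run-all
```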
💡 You can also run a single experiment outside the workspace with `dvc exp run --temp`, for example to continue working on the project meanwhile (e.g. on another terminal).
⚠️ Note that only tracked files and directories will be included in `--queue`/`--temp` experiments. To include untracked files, stage them with `git add` first (before `dvc exp run`). Feel free to `git reset` them afterwards. Git-ignored files/dirs are explicitly excluded from runs outside the workspace to avoid committing unwanted files into experiments.
With `-j` (`--jobs`), experiment queues can be run in parallel for better performance (creates a tmp dir for each job).

⚠️ Parallel runs are experimental and may be unstable at this time. ⚠️ Make sure you're using a number of jobs that your environment can handle (no more than the CPU cores).

Note that each job runs the entire pipeline (or `targets`) serially. DVC makes no attempt to distribute stage commands among jobs. The order in which they were queued is also not preserved when running them.
- `-S [<filename>:]<param_name>=<param_value>`, `--set-param [<filename>:]<param_name>=<param_value>` - set the value of `dvc params` for this experiment. `<filename>` can be any valid params file (`params.yaml` by default). This will override the param values coming from the params file.
- `--name <name>` - specify a unique name for this experiment. A default one will be generated otherwise, such as `exp-f80g4` (based on the experiment's hash).
- `--temp` - run this experiment outside your workspace (in `.dvc/tmp/exps`). Useful to continue working (e.g. in another terminal) while a long experiment runs.
- `--queue` - place this experiment at the end of a line for future execution, but don't actually run it yet. Use `dvc exp run --run-all` to process the queue. For checkpoint experiments, this implies `--reset` unless a `--rev` is provided.
- `--run-all` - run all queued experiments (see `--queue`), outside your workspace (in `.dvc/tmp/exps`). Use `-j` to execute them in parallel.
- `-j <number>`, `--jobs <number>` - run this number of queued experiments in parallel. Only has an effect along with `--run-all`. Defaults to 1 (the queue is processed serially).
- `--rev <commit>` - continue an experiment from a specific checkpoint name or hash (for `--temp` and `--queue` runs).
- `--reset` - deletes checkpoint outputs before running this experiment (as defined in `dvc.lock`). Useful for ML model re-training.
- `-f`, `--force` - reproduce pipelines even if no changes were found (same as `dvc repro -f`).
- `-h`, `--help` - prints the usage/help message, and exits.
- `-q`, `--quiet` - do not write anything to standard output. Exit with 0 if all stages are up to date or if all stages are successfully executed, otherwise exit with 1. The command defined in the stage is free to write output regardless of this flag.
- `-v`, `--verbose` - displays detailed tracing information.
These examples are based on our Get Started, where you can find the actual source code.

Let's check the latest metrics of the project:

```
$ dvc metrics show
Path         avg_prec    roc_auc
scores.json  0.60405     0.9608
```
For this experiment, we want to see the results for a smaller dataset input, so let's limit the data to 20 MB and reproduce the pipeline with `dvc exp run`:
```
$ truncate --size=20M data/data.xml
$ dvc exp run
...
Reproduced experiment(s): exp-44136
Experiment results have been applied to your workspace.

$ dvc metrics diff
Path         Metric    HEAD     workspace    Change
scores.json  avg_prec  0.60405  0.56103      -0.04302
scores.json  roc_auc   0.9608   0.94003      -0.02077
```
The `dvc metrics diff` command shows the difference in performance for the experiment we just ran (`exp-44136`).
You could modify a params file just like any other dependency and run an experiment on that basis. Since this is a common need, `dvc exp run` comes with the `--set-param` (`-S`) option built-in to update existing parameters. This saves you the need to manually edit the params file.
```
$ dvc exp run -S prepare.split=0.25 -S featurize.max_features=2000
...
Reproduced experiment(s): exp-18bf6
Experiment results have been applied to your workspace.
```
To see the results, we can use `dvc exp diff`, which compares both params and metrics to the previous project version:
```
$ dvc exp diff
Path         Metric    Value    Change
scores.json  avg_prec  0.58187  -0.022184
scores.json  roc_auc   0.93634  -0.024464

Path         Param                   Value    Change
params.yaml  featurize.max_features  2000     -1000
params.yaml  prepare.split           0.25     0.05
```
Notice that experiments run as a series don't build up on each other. They are all based on the same project baseline (`HEAD`).