```
usage: dvc exp run [-h] [-q | -v] [-f] [repro_options ...]
                   [-S [<filename>:]<params_list>] [--queue] [--run-all]
                   [-j <number>] [--temp] [-r <experiment_rev>] [--reset]
                   [targets [targets ...]]

positional arguments:
  targets        Stages to reproduce. 'dvc.yaml' by default
```
Provides a way to execute and track experiments in your project without polluting it with unnecessary commits, branches, or directories.

`dvc exp run` is equivalent to `dvc repro` for experiments. It has the same behavior when it comes to `targets` and stage execution (restores the dependency graph, etc.). See the command options for more on the differences.
Before running an experiment, you'll probably want to make modifications such as data and code updates, or hyperparameter tuning. For the latter, you can use the `--set-param` (`-S`) option of this command to change `dvc params` values on the fly.
Each experiment creates and tracks a project variation based on your workspace changes. Experiments will have an auto-generated name like `exp-bfe64` by default, which can be customized using the `--name` option.
The results of the last `dvc exp run` can be seen in the workspace. To display and compare multiple experiments, use `dvc exp show` or `dvc exp diff` (`dvc plots diff` also accepts experiment names as revisions). Use `dvc exp apply` to restore the results of any other experiment instead.

Note that experiment data will remain in the cache until you use regular `dvc gc` to clean it up.
To track successive steps in a longer experiment, you can register checkpoints with DVC during your code or script runtime (similar to a logger). To do so, first mark stage outputs with `checkpoint: true` in `dvc.yaml`. At least one checkpoint output is needed so that the experiment can later continue from that output's last cached state.
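For instance, a stage with a checkpoint output could be declared like this (the stage, script, and output names here are illustrative, not taken from this guide):

```yaml
stages:
  train:
    cmd: python train.py
    deps:
      - train.py
    outs:
      # Marking the output as a checkpoint lets DVC cache its
      # intermediate states during the run.
      - model.pt:
          checkpoint: true
```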
Then, in your code, either call the `dvc.api.make_checkpoint()` function (Python), or write a signal file (any programming language) following the same steps as `make_checkpoint()` (please refer to its reference for details).
You can now use `dvc exp run` to begin the experiment. All checkpoints registered at runtime will be preserved, even if the process gets interrupted (e.g. with `[Ctrl] C`, or by an error*). A "wrap-up" checkpoint will be added (if needed), so that no changes remain in the workspace. Subsequent uses of `dvc exp run` will resume from this point (using the latest cached versions of all outputs).
\* Stage command(s) should return a non-error exit code (`0`) for the final checkpoint to happen.
List previous checkpoints with `dvc exp show`. To continue from a previous checkpoint, you must first `dvc exp apply` it before using `dvc exp run`. For `--temp` runs (see next section), use `--rev` instead to specify the checkpoint to continue from.
Use `--reset` to start over (discarding previous checkpoints and their outputs). This is useful for re-training ML models, for example.
The `--queue` option lets you create an experiment as usual, except that nothing is actually run. Instead, the experiment is put in a wait-list for later execution. `dvc exp show` will mark queued experiments with an asterisk.

Note that queuing an experiment that uses checkpoints implies `--reset`, unless a `--rev` is provided (refer to the previous section).
Use `dvc exp run --run-all` to process the queue. This is done outside your workspace (in temporary dirs in `.dvc/tmp/exps`) to preserve any changes between/after queueing runs.
💡 You can also run a single experiment outside the workspace with `dvc exp run --temp`, for example to continue working on the project meanwhile (e.g. on another terminal).
⚠️ Note that only tracked files and directories will be included in `--queue`/`--temp` experiments. To include untracked files, stage them with `git add` first (before `dvc exp run`). Feel free to `git reset` them afterwards. Git-ignored files/dirs are explicitly excluded from runs outside the workspace to avoid committing unwanted files into experiments.
With `--jobs` (`-j`), experiment queues can be run in parallel for better performance (a tmp dir is created for each job).
⚠️ Parallel runs are experimental and may be unstable at this time.

⚠️ Make sure you're using a number of jobs that your environment can handle (no more than the CPU cores).
Note that each job runs the entire pipeline (or the given `targets`) serially. DVC makes no attempt to distribute stage commands among jobs. The order in which they were queued is also not preserved when running them.
- `-S [<filename>:]<param_name>=<param_value>`, `--set-param [<filename>:]<param_name>=<param_value>` - set the specified `dvc params` for this experiment. `filename` can be any valid params file (`params.yaml` by default). This will override the param values coming from the params file.
- `--name <name>` - specify a name for this experiment. If none is provided, a default name is generated automatically, such as `exp-f80g4` (based on the experiment's hash).
- `--temp` - run this experiment outside your workspace (in `.dvc/tmp/exps`). Useful to continue working (e.g. in another terminal) while a long experiment runs.
- `--queue` - place this experiment at the end of a line for future execution, but don't actually run it yet. Use `dvc exp run --run-all` to process the queue. For checkpoint experiments, this implies `--reset`.
- `--run-all` - run all queued experiments (see `--queue`) outside your workspace (in `.dvc/tmp/exps`). Use `-j` to execute them in parallel.
- `-j <number>`, `--jobs <number>` - run this `number` of queued experiments in parallel. Only has an effect along with `--run-all`. Defaults to 1 (the queue is processed serially).
- `-r <commit>`, `--rev <commit>` - continue an experiment from a specific checkpoint name or hash (`commit`).
- `--reset` - deletes `checkpoint` outputs before running this experiment (regardless of `dvc.lock`). Useful for ML model re-training.
- `-f`, `--force` - reproduce pipelines even if no changes were found (same as `dvc repro -f`).
- `-h`, `--help` - prints the usage/help message, and exits.
- `-q`, `--quiet` - do not write anything to standard output. Exit with 0 if all stages are up to date or if all stages are successfully executed, otherwise exit with 1. The command defined in the stage is free to write output regardless of this flag.
- `-v`, `--verbose` - displays detailed tracing information.
These examples are based on our Get Started, where you can find the actual source code.
Let's check the latest metrics of the project:
```
$ dvc metrics show
Path         avg_prec    roc_auc
scores.json  0.60405     0.9608
```
For this experiment, we want to see the results for a smaller dataset input, so let's limit the data to 20 MB and reproduce the pipeline with `dvc exp run`:
```
$ truncate --size=20M data/data.xml
$ dvc exp run
...
Reproduced experiment(s): exp-44136
Experiment results have been applied to your workspace.

$ dvc metrics diff
Path         Metric    Old      New      Change
scores.json  avg_prec  0.60405  0.56103  -0.04302
scores.json  roc_auc   0.9608   0.94003  -0.02077
```
The `dvc metrics diff` command shows the difference in performance for the experiment we just ran (`exp-44136`).
You could modify a params file just like any other dependency and run an experiment on that basis. Since this is a common need, `dvc exp run` comes with the `--set-param` (`-S`) option built-in. This saves you the need to manually edit the params file:
```
$ dvc exp run -S prepare.split=0.25 -S featurize.max_features=2000
...
Reproduced experiment(s): exp-18bf6
Experiment results have been applied to your workspace.
```
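For reference, the two values overridden above live in `params.yaml`. Before this run, the relevant entries looked like this (reconstructed from the `Change` column reported by `dvc exp diff`, so treat the layout as illustrative):

```yaml
prepare:
  split: 0.20       # overridden to 0.25 by -S

featurize:
  max_features: 3000  # overridden to 2000 by -S
```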
To see the results, we can use `dvc exp diff`, which compares both params and metrics to the previous project version:
```
$ dvc exp diff
Path         Metric    Value    Change
scores.json  avg_prec  0.58187  -0.022184
scores.json  roc_auc   0.93634  -0.024464

Path         Param                   Value  Change
params.yaml  featurize.max_features  2000   -1000
params.yaml  prepare.split           0.25   0.05
```
Notice that experiments run as a series don't build up on each other. They are all based on the same project baseline.