
Data Access

Okay, now that we've learned how to track data and models with DVC and how to version them with Git, the next question is: how can we use these artifacts outside of the project? How do I download a model to deploy it? How do I download a specific version of a model? How do I reuse datasets across different projects?

These questions tend to come up when you browse the files that DVC saves to remote storage, e.g. s3://dvc-public/remote/get-started/fb/89904ef053f04d64eafcc3d70db673 😱 instead of the original file names, such as model.pkl or data.xml.

Remember those .dvc files that dvc add generates? Those files (and dvc.lock, which we'll cover later), together with their history in Git and the DVC remote storage config (also saved in Git), contain all the information needed to access and download any version of datasets, files, and models. This means that a Git repository with DVC files becomes an entry point, and can be used instead of accessing the files directly.
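
For reference, a .dvc file is just a small YAML file mapping a tracked file to its content hash. A minimal sketch (fields abbreviated), using the hash from the storage path shown above:

```yaml
outs:
- md5: fb89904ef053f04d64eafcc3d70db673   # content hash; note it matches the s3 path above
  path: data.xml                          # the tracked file's name in the project
```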

Find a file or directory

You can use dvc list to explore a DVC repository hosted on any Git server. For example, let's see what's in the get-started/ directory of our dataset-registry repo:

$ dvc list https://github.com/iterative/dataset-registry get-started

The benefit of this command over browsing a Git hosting website is that the list includes files and directories tracked by both Git and DVC (data.xml is not visible if you check GitHub).
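
If you only want the DVC-tracked data, dvc list also accepts a --dvc-only flag (and -R to recurse into subdirectories), e.g.:

```
$ dvc list --dvc-only https://github.com/iterative/dataset-registry get-started
```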


Download

One way is to simply download the data with dvc get. This is useful when working outside of a DVC project environment, for example in an automated ML model deployment task:

$ dvc get https://github.com/iterative/dataset-registry \
          get-started/data.xml

When working inside another DVC project though, this is not the best strategy because the connection between the projects is lost; others won't know where the data came from or whether new versions are available.

Import file or directory

dvc import also downloads any file or directory, and additionally creates a .dvc file that can be saved in the project:

$ dvc import https://github.com/iterative/dataset-registry \
             get-started/data.xml -o data/data.xml

This is similar to dvc get + dvc add, but the resulting .dvc file includes metadata to track changes in the source repository. This allows you to bring in changes from the data source later, using dvc update.
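
For example, if the dataset registry publishes a new version of the data later, the import can be refreshed in place by pointing dvc update at the .dvc file created above:

```
$ dvc update data/data.xml.dvc
```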

💡 What happens under the hood:

Note that the dataset registry repository doesn't actually contain a get-started/data.xml file. Like dvc get, dvc import downloads from remote storage.

.dvc files created by dvc import have special fields, such as the data source repo and path (under deps):

+ deps:
+ - path: get-started/data.xml
+   repo:
+     url: https://github.com/iterative/dataset-registry
+     rev_lock: f31f5c4cdae787b4bdeb97a717687d44667d9e62
 - md5: a304afb96060aad90176268345e10355
   path: data.xml

The url and rev_lock subfields under repo are used to save the origin and version of the dependency, respectively.
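
Since these are plain YAML fields, they are also easy to inspect programmatically, e.g. to check which commit of the source repository an import is pinned to. A minimal, dependency-free sketch using the example content above (the field helper is hypothetical, for illustration; a real tool would use a YAML parser):

```python
# Sketch: pull the pinned source revision out of an imported .dvc file.
DVC_FILE_TEXT = """\
deps:
- path: get-started/data.xml
  repo:
    url: https://github.com/iterative/dataset-registry
    rev_lock: f31f5c4cdae787b4bdeb97a717687d44667d9e62
outs:
- md5: a304afb96060aad90176268345e10355
  path: data.xml
"""

def field(text: str, key: str) -> str:
    """Return the value of the first `key: value` line found in `text`."""
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip()
    raise KeyError(key)

print(field(DVC_FILE_TEXT, "url"))       # https://github.com/iterative/dataset-registry
print(field(DVC_FILE_TEXT, "rev_lock"))  # f31f5c4cdae787b4bdeb97a717687d44667d9e62
```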

Python API

It's also possible to integrate your data or models directly into your source code with DVC's Python API. This lets you access the data contents from within an application at runtime. For example:

import dvc.api

with dvc.api.open(
    'get-started/data.xml',
    repo='https://github.com/iterative/dataset-registry'
) as fd:
    # ... fd is a file-like object that can be processed normally.

โ–ถ๏ธ It can be run online:

Run in Katacoda

๐Ÿ› Found an issue? Let us know! Or fix it:

Edit on GitHub

โ“ Have a question? Join our chat, we will help you:

Discord Chat