Every month we share our news, findings, interesting reads, community takeaways, and everything else we pick up along the way.
Some of these are related to our brainchild DVC and its journey. Others are a collection of exciting stories and ideas centered around ML best practices and workflow.
As we continue to grow DVC together with our fantastic contributors, we enjoy more and more insights, discussions, and articles either created or brought to us by our community. We feel it is the right time to start sharing more of your news, your stories, and your discoveries. The new Heartbeat is here!
Speaking of our own news: next month the DVC team is going to the Open Source North America Summit, taking place in San Diego on August 21–23. Dmitry and Sveta will be giving talks, and we will be running a booth. We are so looking forward to it! Stop by for a chat and some cool swag. And if you are in San Diego on those days and want to catch up, please let us know here or on Twitter!
Every month our team is excited to discover great new pieces of content addressing some of the burning ML issues. Here are some of the links that caught our eye in June:
As we’ve seen in this article, some tools and practices can be borrowed from regular software engineering. However, the needs of machine learning projects call for tools that better fit the purpose.
There are lots of hidden gems in our Discord community discussions. Sometimes they are scattered across the channels and hard to track down. We are sifting through the issues and discussions to share the most interesting takeaways with you.
You can definitely add more than one remote (see `dvc remote add`), and `dvc push` has a `-r` option to pick which one to send the cached data files (deps, outs, etc.) to. We would not recommend doing this, though. It complicates the commands you have to run: you will need to remember to specify a remote name for every command that deals with status, etc. Please leave a comment in the relevant issue here if this case is important for you.
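For illustration, a short sketch with two hypothetical remotes (the names and URLs are made up for this example):

```shell
# Add two remotes (names and URLs are hypothetical)
dvc remote add backup s3://my-bucket/dvc-storage
dvc remote add mirror ssh://user@example.com/dvc-storage

# Push cached data to a specific remote with the -r option
dvc push -r mirror
```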
I want to track a metric produced by my training stage (an `-M` output of the training stage), and also, apart from that, the accuracy of the NN on a test set (another `-M` output of the eval stage). Is that okay?

Yes, it is totally fine to use `-M` in different stages. `dvc metrics show` will just show both metrics.
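As a sketch, two stages could each declare their own metric file via `-M` (the script and file names here are hypothetical):

```shell
# Training stage writes loss.json as a metric
dvc run -d train.py -M loss.json python train.py

# Evaluation stage writes accuracy.json as another metric
dvc run -d eval.py -M accuracy.json python eval.py

# Both metrics are displayed together
dvc metrics show
```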
I have a stage that tracks an artifacts folder with the `-o` flag. I have manually added another file into (or modified) the artifacts folder, but when I do `dvc push` nothing happens. Is there any way around this?
Let’s first do a quick recap on how DVC handles data files (you can find more information on the DVC documentation site).

When you run `dvc import` or `dvc run` (in the case of `dvc run`, artifacts == outputs produced by the command), DVC puts the artifacts into the `.dvc/cache` directory (the default cache location). You don’t see this happening because DVC keeps links (or, in certain cases, creates copies) to these files/directories in your workspace. `dvc push` does not move files from the workspace (what you see) to the remote storage; it always moves files/directories that are already in the cache (by default, `.dvc/cache`).

The analogy here is `git commit`: you change a file, you do `git commit`, and only after that can you push something to a Git server (GitHub/GitLab, etc.). The difference is that DVC does the commit (moves files to the cache) automatically in certain cases: `dvc run`, etc.
There is an explicit command, `dvc commit`, that you should run if you want to enforce the change to the output produced by `dvc run`. This command will update the corresponding DVC-files (`.dvc` extension) and move the data to the cache. After that you should be able to run `dvc push` to save your data on the external storage.

Note that when you do an explicit commit like this, you are potentially “breaking” reproducibility, in the sense that there is no longer a guarantee that your directory can be produced by `dvc repro`, since you changed it manually. You would not be able to use `dvc run` to do this because it doesn’t allow the same directory to be both an output and a dependency.
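A minimal sketch of that workflow (the DVC-file and directory names are hypothetical):

```shell
# Assume a stage produced the directory "artifacts/" via dvc run.
# After manually editing a file inside it:

dvc status               # reports that the output has changed
dvc commit artifacts.dvc # move the modified data into .dvc/cache
                         # and update the checksums in the DVC-file
dvc push                 # upload the newly cached data to the remote
```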
You could do this in one step (one stage), so that getting your data and modifying it is a single stage. Then you don’t depend on the data folder; you just depend on your download + modifying script.
Most likely it means that there is a DVC-file that has the same output twice, or that there are two DVC-files that share the same output file.
My command fails when I run it through `dvc repro`. The command runs fine if I don’t use DVC.
That happens because `dvc run` tries to ensure that your command is the one creating your output, and it removes existing outputs before executing the command, so that when you run `dvc repro` later it will be able to fully reproduce the output. So you need to make the script create the directory or file.
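In practice this means the command you wrap with `dvc run` must (re)create its own outputs. A minimal sketch, for a hypothetical directory output named `model`:

```shell
# DVC removes existing outputs before running the stage command,
# so the command itself must recreate them. A stage command that
# writes into a directory output could look like this:

mkdir -p model                               # recreate the output directory
echo "trained weights" > model/weights.txt   # stand-in for the real training step
```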
I’d like to run `dvc pull` inside my Docker container when initializing a training job. Can DVC be used in this way?
Yes, it’s definitely a valid case for DVC. There are different ways of organizing the storage that training machines use to access data: from the very simple (using a local storage volume and pulling data from the remote storage every time) to using NAS or EFS to store a shared DVC cache.
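For the simple case, the container’s entrypoint could pull the data before starting the job. A sketch, with hypothetical paths and script names:

```shell
#!/bin/sh
# entrypoint.sh inside the training container (hypothetical layout):
# fetch the data tracked by DVC, then start the training job
cd /workspace/project
dvc pull
python train.py
```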
HTTP remotes do not support upload yet. The example Get Started repository uses HTTP to keep it read-only and to abstract the actual storage provider we use internally. If you check the remote URL, you will see that it is an S3 bucket, and AWS provides an HTTP endpoint to read data from it.
This is not an issue; it is an implementation detail. There is currently no way to upload the files with their original filenames (in this case, the S3 bucket will have the file `data.csv` but under another name, `20/893143…`). The reason behind this decision is that we want to store a file only once, no matter how many dataset versions it is used in. It is also a reliable way to uniquely identify the file: you don’t have to be afraid that someone decided to create a file with the same name (path) but different content.
Yes, and it’s one of the very common use cases, actually. All you need to do is use the `dvc cache dir` command to set up an external cache. There are a few caveats, though. Please read this link for an example of the workflow.
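A sketch of pointing a project at a shared cache, e.g. on NAS or EFS (the mount path is hypothetical):

```shell
# Point this project's cache at a shared location
dvc cache dir /mnt/shared/dvc-cache

# Optionally make workspace files link to the shared cache
# instead of being copied into the workspace
dvc config cache.type symlink
```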