remote modify
Configure a DVC remote.
This command is commonly needed after dvc remote add or dvc remote default to set up credentials or for other customizations specific to the storage type.
Synopsis
usage: dvc remote modify [-h] [--global | --system | --project | --local]
                         [-q | -v] [-u]
                         name option [value]

positional arguments:
  name        Name of the remote
  option      Name of the option to modify
  value       (optional) Value of the option
Description
Remote name and option name are required. Config option names are specific to the remote type. See dvc remote add and Available parameters below for a list of remote storage types.
This command modifies a remote section in the project's config file. Alternatively, dvc config or manual editing could be used to change the configuration.
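For reference, this is the kind of remote section the command edits in .dvc/config (a sketch assuming a remote named myremote with a hypothetical S3 URL); editing this block by hand has the same effect:

['remote "myremote"']
    url = s3://mybucket/path
[core]
    remote = myremote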
Command options (flags)
- -u, --unset - remove the configuration option from a config file. Don't provide a value argument when employing this flag.
- --system - modify the system config file (e.g. /etc/xdg/dvc/config) instead of .dvc/config.
- --global - modify the global config file (e.g. ~/.config/dvc/config) instead of .dvc/config.
- --project - modify the project's config file (.dvc/config). This is the default behavior.
- --local - modify the Git-ignored local config file (located in .dvc/config.local) instead of .dvc/config. This is useful to save private remote config that you don't want to track and share with Git (credentials, private locations, etc.).
- -h, --help - prints the usage/help message, and exits.
- -q, --quiet - do not write anything to standard output. Exit with 0 if no problems arise, otherwise 1.
- -v, --verbose - displays detailed tracing information.
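For example, --unset can revert an option set earlier (here removing a hypothetical profile value from the Git-ignored local config):

$ dvc remote modify --local myremote --unset profile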
Available parameters for all remotes
The following config options are available for all remote types:
- url - the remote location can always be modified. This is how DVC determines what type of remote it is, and thus which other config options can be modified (see each type in the next section for more details).
  For example, for an Amazon S3 remote (see more details in the S3 section below):
  $ dvc remote modify myremote url s3://mybucket/new/path
  Or a local remote (a directory in the file system):
  $ dvc remote modify localremote url /home/user/dvcstore
- jobs - change the default number of processes for remote storage synchronization operations (see the --jobs option of dvc push, dvc pull, dvc get, dvc import, dvc update, dvc add --to-remote, dvc gc -c, etc.). Accepts positive integers. The default is 4 * cpu_count().
  $ dvc remote modify myremote jobs 8
- verify - upon downloading cache files (dvc pull, dvc fetch) DVC will recalculate the file hashes, to check that their contents have not changed. This may slow down the aforementioned commands. The calculated hash is compared to the value saved in the corresponding DVC file.
  Note that this option is enabled on Google Drive remotes by default.
  $ dvc remote modify myremote verify true
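For context, the saved value that verify compares against lives in the corresponding .dvc file, which looks roughly like this (hypothetical file name and hash):

outs:
- md5: 22a1a2931c8370d3aeedd7183606fd7f
  path: data.csv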
Available parameters per storage type
The following are the types of remote storage (protocols) and their config options:
Amazon S3
- url - remote location, in the s3://<bucket>/<key> format:
  $ dvc remote modify myremote url s3://mybucket/path
- region - change S3 remote region:
  $ dvc remote modify myremote region us-east-2
- read_timeout - set the time in seconds till a timeout exception is thrown when attempting to read from a connection (60 by default). Let's set it to 5 minutes for example:
  $ dvc remote modify myremote read_timeout 300
- connect_timeout - set the time in seconds till a timeout exception is thrown when attempting to make a connection (60 by default). Let's set it to 5 minutes for example:
  $ dvc remote modify myremote connect_timeout 300
By default, DVC authenticates using your AWS CLI configuration (if set). This uses the default AWS credentials file. Use the following parameters to customize the authentication method:
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- profile - credentials profile name to access S3:
  $ dvc remote modify --local myremote profile myprofile
- credentialpath - S3 credentials file path:
  $ dvc remote modify --local myremote credentialpath /path/to/creds
- configpath - path to the AWS CLI config file. The default AWS CLI config file path (e.g. ~/.aws/config) is used if this parameter isn't set.
  $ dvc remote modify --local myremote configpath /path/to/config
  Note that only the S3-specific configuration values are used.
- endpointurl - endpoint URL to access S3:
  $ dvc remote modify myremote endpointurl https://myendpoint.com
- access_key_id - AWS Access Key ID. May be used (along with secret_access_key) instead of credentialpath:
  $ dvc remote modify --local myremote access_key_id 'mykey'
- secret_access_key - AWS Secret Access Key. May be used (along with access_key_id) instead of credentialpath:
  $ dvc remote modify --local myremote \
        secret_access_key 'mysecret'
- session_token - AWS MFA session token. May be used (along with access_key_id and secret_access_key) instead of credentialpath when MFA is required:
  $ dvc remote modify --local myremote session_token my-session-token
- use_ssl - whether or not to use SSL. By default, SSL is used.
  $ dvc remote modify myremote use_ssl false
- ssl_verify - whether or not to verify SSL certificates, or a path to a custom CA certificates bundle to do so (implies true). The certs in AWS CLI config (if any) are used by default.
  $ dvc remote modify myremote ssl_verify false
  # or
  $ dvc remote modify myremote ssl_verify path/to/ca_bundle.pem
Operational details
Make sure you have the following permissions enabled: s3:ListBucket, s3:GetObject, s3:PutObject, s3:DeleteObject. This enables the S3 API methods that are performed by DVC (list_objects_v2 or list_objects, head_object, upload_file, download_file, delete_object, copy).
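As a rough illustration (not an official DVC-provided policy), an IAM statement granting these permissions could look like the following, assuming the mybucket name used in earlier examples:

{
  "Effect": "Allow",
  "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
  "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
}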
- listobjects - whether or not to use list_objects. By default, list_objects_v2 is used. Useful for ceph and other S3 emulators.
  $ dvc remote modify myremote listobjects true
- sse - server-side encryption algorithm to use: AES256 or aws:kms. By default, no encryption is used.
  $ dvc remote modify myremote sse AES256
- sse_kms_key_id - identifier of the key to encrypt data uploaded when using SSE-KMS (see sse). This parameter will be passed directly to AWS S3, so DVC supports any value that S3 supports, including both key IDs and aliases.
  $ dvc remote modify --local myremote sse_kms_key_id 'key-alias'
- sse_customer_key - key to encrypt data uploaded when using customer-provided encryption keys (SSE-C) instead of sse. The value should be a base64-encoded 256 bit key.
  $ dvc remote modify --local myremote sse_customer_key 'mysecret'
- sse_customer_algorithm - server-side encryption algorithm to use with sse_customer_key. This parameter will be passed directly to AWS S3, so DVC supports any value that S3 supports. AES256 by default.
  $ dvc remote modify myremote sse_customer_algorithm 'AES256'
- acl - set object level access control list (ACL) such as private, public-read, etc. By default, no ACL is specified.
  $ dvc remote modify myremote acl bucket-owner-full-control
- grant_read* - grants READ permissions at object level access control list for specific grantees**. Grantee can read object and its metadata.
  $ dvc remote modify myremote grant_read \
        id=aws-canonical-user-id,id=another-aws-canonical-user-id
- grant_read_acp* - grants READ_ACP permissions at object level access control list for specific grantees**. Grantee can read the object's ACP.
  $ dvc remote modify myremote grant_read_acp \
        id=aws-canonical-user-id,id=another-aws-canonical-user-id
- grant_write_acp* - grants WRITE_ACP permissions at object level access control list for specific grantees**. Grantee can modify the object's ACP.
  $ dvc remote modify myremote grant_write_acp \
        id=aws-canonical-user-id,id=another-aws-canonical-user-id
- grant_full_control* - grants FULL_CONTROL permissions at object level access control list for specific grantees**. Equivalent of grant_read + grant_read_acp + grant_write_acp.
  $ dvc remote modify myremote grant_full_control \
        id=aws-canonical-user-id,id=another-aws-canonical-user-id

* grant_read, grant_read_acp, grant_write_acp and grant_full_control params are mutually exclusive with acl.
** Default ACL grantees are overwritten. Grantees are AWS accounts identifiable by id (AWS Canonical User ID), emailAddress, or uri (predefined group).
Note that S3 remotes can also be configured via environment variables (instead of dvc remote modify). These are tried if none of the params above are set.
Authentication example:
$ dvc remote add -d myremote s3://mybucket/path
$ export AWS_ACCESS_KEY_ID='mykey'
$ export AWS_SECRET_ACCESS_KEY='mysecret'
$ dvc push
For more on the supported env vars, please see the boto3 docs.
- version_aware - Use version-aware cloud versioning features for this S3 remote. Files stored in the remote will retain their original filenames and directory hierarchy, and different versions of files will be stored as separate versions of the corresponding object in the remote.
- worktree - Use worktree cloud versioning features for this S3 remote. Files stored in the remote will retain their original filenames and directory hierarchy, and different versions of files will be stored as separate versions of the corresponding object in cloud storage. DVC will also attempt to ensure that the current version of objects in the remote matches the latest version of files in the DVC repository. When both version_aware and worktree are set, worktree takes precedence.
The version_aware and worktree options require that S3 Versioning be enabled on the specified S3 bucket.
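For example, to enable version-aware storage on this remote:

$ dvc remote modify myremote version_aware true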
S3-compatible servers (non-Amazon)
- endpointurl - URL to connect to the S3-compatible storage server or service (e.g. Minio, DigitalOcean Spaces, IBM Cloud Object Storage, etc.):
  $ dvc remote modify myremote \
        endpointurl https://storage.example.com
Any other S3 parameter (see previous section) can also be set for S3-compatible storage. Whether they're effective depends on each storage platform.
Microsoft Azure Blob Storage
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url (required) - remote location, in the azure://<container>/<object> format:
  $ dvc remote modify myremote url azure://mycontainer/path
  Note that if the given container name isn't found in your account, DVC will attempt to create it.
- account_name - storage account name. Required for every authentication method except connection_string (which already includes it).
  $ dvc remote modify myremote account_name 'myaccount'
By default, DVC authenticates using an account_name and its default credential (if any), which uses environment variables (e.g. set by az cli) or a Microsoft application.
When using default authentication, you may need to enable some of these exclusion parameters depending on your setup (details):
$ dvc remote modify --system myremote \
      exclude_environment_credential true
$ dvc remote modify --system myremote \
      exclude_visual_studio_code_credential true
$ dvc remote modify --system myremote \
      exclude_shared_token_cache_credential true
$ dvc remote modify --system myremote \
      exclude_managed_identity_credential true
To use a custom authentication method, you can either use this command to configure the appropriate auth params, use environment variables, or rely on an Azure config file (in that order). More details below.
See some Azure auth examples.
Authenticate with DVC config parameters
The following parameters are listed in the order in which they are used by DVC when attempting to authenticate with Azure:
- connection_string is used for authentication if given (account_name is ignored).
- If tenant_id, client_id, and client_secret are given, Active Directory (AD) service principal auth is performed.
- DVC will next try to connect with account_key or sas_token (in this order) if either is provided.
- If allow_anonymous_login is set to True, then DVC will try to connect anonymously.
- connection_string - Azure Storage connection string (recommended).
  $ dvc remote modify --local myremote \
        connection_string 'mysecret'
- tenant_id - tenant ID for AD service principal authentication (requires client_id and client_secret along with this):
  $ dvc remote modify --local myremote tenant_id 'mytenant'
- client_id - client ID for service principal authentication (when tenant_id is set):
  $ dvc remote modify --local myremote client_id 'myclient'
- client_secret - client secret for service principal authentication (when tenant_id is set):
  $ dvc remote modify --local myremote client_secret 'mysecret'
- account_key - storage account key:
  $ dvc remote modify --local myremote account_key 'mykey'
- sas_token - shared access signature token:
  $ dvc remote modify --local myremote sas_token 'mysecret'
- allow_anonymous_login - whether to fall back to anonymous login if no other auth params are given (besides account_name). This will only work with public buckets:
  $ dvc remote modify myremote allow_anonymous_login true
Authenticate with environment variables
Azure remotes can also authenticate via env vars (instead of dvc remote modify). These are tried if none of the params above are set.
For Azure connection string:
$ export AZURE_STORAGE_CONNECTION_STRING='mysecret'
For account name and key/token auth:
$ export AZURE_STORAGE_ACCOUNT='myaccount'
# and
$ export AZURE_STORAGE_KEY='mysecret'
# or
$ export AZURE_STORAGE_SAS_TOKEN='mysecret'
For service principal auth (via certificate file):
$ export AZURE_TENANT_ID='directory-id'
$ export AZURE_CLIENT_ID='client-id'
$ export AZURE_CLIENT_CERTIFICATE_PATH='/path/to/certificate'
For simple username/password login:
$ export AZURE_CLIENT_ID='client-id'
$ export AZURE_USERNAME='myuser'
$ export AZURE_PASSWORD='mysecret'
See also this description for other available env vars.
Authenticate with an Azure config file
As a final option (if no params or env vars are set), some of the auth methods can propagate from an Azure configuration file (typically managed with az config): connection_string, account_name, account_key, sas_token and container_name. The default directory where it will be searched for is ~/.azure but this can be customized with the AZURE_CONFIG_DIR env var.
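For example, to point at a custom config directory (hypothetical path):

$ export AZURE_CONFIG_DIR='/path/to/azure/config'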
- version_aware - Use version-aware cloud versioning features for this Azure remote. Files stored in the remote will retain their original filenames and directory hierarchy, and different versions of files will be stored as separate versions of the corresponding object in the remote.
- worktree - Use worktree cloud versioning features for this Azure remote. Files stored in the remote will retain their original filenames and directory hierarchy, and different versions of files will be stored as separate versions of the corresponding object in cloud storage. DVC will also attempt to ensure that the current version of objects in the remote matches the latest version of files in the DVC repository. When both version_aware and worktree are set, worktree takes precedence.
The version_aware and worktree options require that Blob versioning be enabled on the specified Azure storage account and container.
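For example, to enable worktree versioning on this remote:

$ dvc remote modify myremote worktree true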
Google Drive
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
Please see Set up a Google Drive DVC Remote for a full guide on using Google Drive as DVC remote storage.
- url - remote location. See valid URL format.
  $ dvc remote modify myremote url \
        gdrive://0AIac4JZqHhKmUk9PDA/dvcstore
- gdrive_client_id - Client ID for authentication with OAuth 2.0 when using a custom Google Client project. Also requires using gdrive_client_secret.
  $ dvc remote modify myremote gdrive_client_id 'client-id'
- gdrive_client_secret - Client secret for authentication with OAuth 2.0 when using a custom Google Client project. Also requires using gdrive_client_id.
  $ dvc remote modify myremote gdrive_client_secret 'client-secret'
- profile - file basename used to cache OAuth credentials. Helpful to avoid using the wrong credentials when multiple GDrive remotes use the same gdrive_client_id. The default value is default.
  $ dvc remote modify --local myremote profile myprofile
- gdrive_user_credentials_file - specific file path to cache OAuth credentials. The default is $CACHE_HOME/pydrive2fs/{gdrive_client_id}/default.json (unless profile is specified), where the CACHE_HOME location per platform is:
  macOS: ~/Library/Caches
  Linux (*typical): ~/.cache
  Windows: %CSIDL_LOCAL_APPDATA%
  $ dvc remote modify myremote \
        gdrive_user_credentials_file path/to/mycredentials.json
See Authorization for more details.
- gdrive_trash_only - configures dvc gc to move remote files to trash instead of deleting them permanently. false by default, meaning "delete". Useful for shared drives/folders, where delete permissions may not be given.
  $ dvc remote modify myremote gdrive_trash_only true
Please note our Privacy Policy (Google APIs).
- gdrive_acknowledge_abuse - acknowledge the risk of downloading potentially abusive files. Anything identified as such (malware, personal info, etc.) can only be downloaded by their owner (with this param enabled).
  $ dvc remote modify myremote gdrive_acknowledge_abuse true
For service accounts:
A service account is a Google account associated with your GCP project, and not a specific user. Please refer to Using service accounts for more information.
- gdrive_use_service_account - authenticate using a service account. Make sure that the service account has read/write access (as needed) to the file structure in the remote url.
  $ dvc remote modify myremote gdrive_use_service_account true
- gdrive_service_account_json_file_path - path to the Google Project's service account .json key file (credentials).
  $ dvc remote modify --local myremote \
        gdrive_service_account_json_file_path \
        path/to/file.json
- gdrive_service_account_user_email - the authority of a user account can be delegated to the service account if needed.
  $ dvc remote modify myremote \
        gdrive_service_account_user_email 'myemail-addr'
⚠️ DVC requires the following OAuth Scopes:
https://www.googleapis.com/auth/drive
https://www.googleapis.com/auth/drive.appdata
Google Cloud Storage
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url - remote location, in the gs://<bucket>/<object> format:
  $ dvc remote modify myremote url gs://mybucket/path
- projectname - override or provide a project name to use, if a default one is not set.
  $ dvc remote modify myremote projectname myproject
For service accounts:
A service account is a Google account associated with your GCP project, and not a specific user. Please refer to Using service accounts for more information.
- credentialpath - path to the file that contains the service account key. Make sure that the service account has read/write access (as needed) to the file structure in the remote url.
  $ dvc remote modify --local myremote \
        credentialpath '/home/.../project-XXX.json'
Alternatively, the GOOGLE_APPLICATION_CREDENTIALS environment variable can be set:
$ export GOOGLE_APPLICATION_CREDENTIALS='.../project-XXX.json'
- version_aware - Use version-aware cloud versioning features for this Google Cloud Storage remote. Files stored in the remote will retain their original filenames and directory hierarchy, and different versions of files will be stored as separate versions of the corresponding object in the remote.
- worktree - Use worktree cloud versioning features for this Google Cloud Storage remote. Files stored in the remote will retain their original filenames and directory hierarchy, and different versions of files will be stored as separate versions of the corresponding object in cloud storage. DVC will also attempt to ensure that the current version of objects in the remote matches the latest version of files in the DVC repository. When both version_aware and worktree are set, worktree takes precedence.
The version_aware and worktree options require that Object versioning be enabled on the specified bucket.
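As with the S3 and Azure remotes above, either option is enabled with a boolean value:

$ dvc remote modify myremote version_aware true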
Aliyun OSS
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url - remote location, in the oss://<bucket>/<object> format:
  $ dvc remote modify myremote url oss://mybucket/path
- oss_endpoint - OSS endpoint values for accessing the remote container.
  $ dvc remote modify myremote oss_endpoint endpoint
- oss_key_id - OSS key ID to access the remote.
  $ dvc remote modify --local myremote oss_key_id 'mykey'
- oss_key_secret - OSS secret key for authorizing access into the remote.
  $ dvc remote modify --local myremote oss_key_secret 'mysecret'
Note that OSS remotes can also be configured via environment variables (instead of dvc remote modify). These are tried if none of the params above are set. The available ones are shown below:
$ export OSS_ACCESS_KEY_ID='mykey'
$ export OSS_ACCESS_KEY_SECRET='mysecret'
$ export OSS_ENDPOINT='endpoint'
SSH
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url - remote location, in a regular SSH format. Note that this can already include the user parameter, embedded into the URL:
  $ dvc remote modify myremote url \
        ssh://user@example.com:1234/path
  ⚠️ DVC requires both SSH and SFTP access to work with remote SSH locations. Please check that you are able to connect both ways with tools like ssh and sftp (GNU/Linux). Note that your server's SFTP root might differ from its physical root (/).
- user - user name to access the remote:
  $ dvc remote modify --local myremote user myuser
The order in which DVC picks the user name:
- user parameter set with this command (found in .dvc/config);
- User defined in the URL (e.g. ssh://user@example.com/path);
- User defined in the SSH config file (e.g. ~/.ssh/config) for this host (URL);
- Current system user.
- port - port to access the remote.
  $ dvc remote modify myremote port 2222
The order in which DVC decides the port number:
- port parameter set with this command (found in .dvc/config);
- Port defined in the URL (e.g. ssh://example.com:1234/path);
- Port defined in the SSH config file (e.g. ~/.ssh/config) for this host (URL), as sketched below;
- Default SSH port 22.
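For reference, the SSH config file entry consulted in step 3 of both lists might look like this (a hypothetical host, shown only to illustrate where DVC picks up these values):

Host example.com
    User myuser
    Port 1234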
- keyfile - path to private key to access the remote.
  $ dvc remote modify --local myremote keyfile /path/to/keyfile
- password - a password to access the remote:
  $ dvc remote modify --local myremote password mypassword
- ask_password - ask for a password to access the remote.
  $ dvc remote modify myremote ask_password true
- passphrase - a private key passphrase to access the remote:
  $ dvc remote modify --local myremote passphrase mypassphrase
- ask_passphrase - ask for a private key passphrase to access the remote.
  $ dvc remote modify myremote ask_passphrase true
- gss_auth - use Generic Security Services authentication if available on host (for example, with Kerberos). Using this param requires paramiko[gssapi], which is currently only supported by our pip package, and could be installed with pip install 'dvc[ssh_gssapi]'. Other packages (Conda, Windows, and macOS PKG) do not support it.
  $ dvc remote modify myremote gss_auth true
- allow_agent - whether to use SSH agents (true by default). Setting this to false is useful when ssh-agent is causing problems, such as a "No existing session" error:
  $ dvc remote modify myremote allow_agent false
HDFS
💡 Using an HDFS cluster as remote storage is also supported via the WebHDFS API. Read more about it by expanding the WebHDFS section in dvc remote add.
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url - remote location:
  $ dvc remote modify myremote url hdfs://user@example.com/path
- user - user name to access the remote.
  $ dvc remote modify --local myremote user myuser
- kerb_ticket - path to the Kerberos ticket cache for Kerberos-secured HDFS clusters.
  $ dvc remote modify --local myremote \
        kerb_ticket /path/to/ticket/cache
WebHDFS
💡 WebHDFS serves as an alternative for using the same remote storage supported by HDFS. Read more about it by expanding the WebHDFS section in dvc remote add.
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url - remote location:
  $ dvc remote modify myremote url webhdfs://user@example.com/path
  Do not provide a user in the URL with kerberos or token authentication.
- user - user name to access the remote. Do not set this with kerberos or token authentication.
  $ dvc remote modify --local myremote user myuser
- kerberos - enable Kerberos authentication (false by default):
  $ dvc remote modify myremote kerberos true
- kerberos_principal - Kerberos principal to use, in case you have multiple ones (for example service accounts). Only used if kerberos is true.
  $ dvc remote modify myremote kerberos_principal myprincipal
- proxy_to - Hadoop superuser to proxy as. The proxy user feature must be enabled on the cluster, and the user must have the correct access rights. If the cluster is secured, Kerberos must be enabled (set kerberos to true) for this to work. This parameter is incompatible with token.
  $ dvc remote modify myremote proxy_to myuser
- use_https - enables SWebHdfs. Note that DVC still expects the protocol in url to be webhdfs://, and will fail if swebhdfs:// is used.
  $ dvc remote modify myremote use_https true
- ssl_verify - whether to verify SSL requests. Defaults to true when use_https is enabled, false otherwise.
  $ dvc remote modify myremote ssl_verify false
- token - Hadoop delegation token (as returned by the WebHDFS API). If the cluster is secured, Kerberos must be enabled (set kerberos to true) for this to work. This parameter is incompatible with providing a user and with proxy_to.
  $ dvc remote modify myremote token "mysecret"
HTTP
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url - remote location:
  $ dvc remote modify myremote url https://example.com/path
  The URL can include a query string, which will be preserved (e.g. example.com?loc=path%2Fto%2Fdir).
) -
auth
- authentication method to use when accessing the remote. The accepted values are:basic
- basic authentication scheme.user
andpassword
(orask_password
) parameters should also be configured.digest
(removed in 2.7.1) - digest Access Authentication Scheme.user
andpassword
(orask_password
) parameters should also be configured.custom
- an additional HTTP header field will be set for all HTTP requests to the remote in the form:custom_auth_header: password
.custom_auth_header
andpassword
(orask_password
) parameters should also be configured.
$ dvc remote modify myremote auth basic
- method - override the HTTP method to use for file uploads (e.g. PUT should be used for Artifactory). By default, POST is used.
  $ dvc remote modify myremote method PUT
- custom_auth_header - HTTP header field name to use when the auth parameter is set to custom.
  $ dvc remote modify --local myremote \
        custom_auth_header 'My-Header'
- user - user name to use when the auth parameter is set to basic.
  $ dvc remote modify --local myremote user myuser
  The order in which DVC picks the user name:
  - user parameter set with this command (found in .dvc/config);
  - User defined in the URL (e.g. http://user@example.com/path).
- password - password to use for any auth method.
  $ dvc remote modify --local myremote password mypassword
- ask_password - ask each time for the password to use for any auth method.
  $ dvc remote modify myremote ask_password true
  Note that the password parameter takes precedence over ask_password. If password is specified, DVC will not prompt the user to enter a password for this remote.
- ssl_verify - whether or not to verify SSL certificates, or a path to a custom CA bundle to do so (true by default).
  $ dvc remote modify myremote ssl_verify false
  # or
  $ dvc remote modify myremote ssl_verify path/to/ca_bundle.pem
- read_timeout - set the time in seconds till a timeout exception is thrown when attempting to read a portion of data from a connection (60 by default). Let's set it to 5 minutes for example:
  $ dvc remote modify myremote read_timeout 300
- connect_timeout - set the time in seconds till a timeout exception is thrown when attempting to make a connection (60 by default). Let's set it to 5 minutes for example:
  $ dvc remote modify myremote connect_timeout 300
WebDAV
If any values given to the parameters below contain sensitive user info, add them with the --local option, so they're written to a Git-ignored config file.
- url - remote location:
  $ dvc remote modify myremote url \
        webdavs://example.com/nextcloud/remote.php/dav/files/myuser/
- token - token for WebDAV server, can be empty in case of using user/password authentication.
  $ dvc remote modify --local myremote token 'mytoken'
- user - user name for WebDAV server, can be empty in case of using token authentication.
  $ dvc remote modify --local myremote user myuser
  The order in which DVC searches for the user name:
  - user parameter set with this command (found in .dvc/config);
  - User defined in the URL (e.g. webdavs://user@example.com/endpoint/path).
- password - password for WebDAV server, can be empty in case of using token authentication.
  $ dvc remote modify --local myremote password mypassword
  Note that user/password and token authentication are incompatible. You should authenticate against your WebDAV remote by either user/password or token.
- ask_password - ask each time for the password to use for user/password authentication. This has no effect if password or token are set.
  $ dvc remote modify myremote ask_password true
- ssl_verify - whether or not to verify SSL certificates, or a path to a custom CA bundle to do so (true by default).
  $ dvc remote modify myremote ssl_verify false
  # or
  $ dvc remote modify myremote ssl_verify path/to/ca_bundle.pem
- cert_path - path to certificate used for WebDAV server authentication, if you need to use local client side certificates.
  $ dvc remote modify --local myremote cert_path /path/to/cert
- key_path - path to private key to use to access a remote. Only has an effect in combination with cert_path.
  $ dvc remote modify --local myremote key_path /path/to/key
  Note that the certificate in cert_path might already contain the private key.
- timeout - connection timeout (in seconds) for the WebDAV server (default: 30).
  $ dvc remote modify myremote timeout 120
Example: Customize an S3 remote
Let's first set up a default S3 remote.
💡 Before adding an S3 remote, be sure to Create a Bucket.
$ dvc remote add -d myremote s3://mybucket/path
Setting 'myremote' as a default remote.
Modify its access profile:
$ dvc remote modify myremote profile myprofile
Now the project config file should look like this:
['remote "myremote"']
    url = s3://mybucket/path
    profile = myprofile
[core]
    remote = myremote
Example: Some Azure authentication methods
Using a default identity (e.g. credentials set by az cli):
$ dvc remote add -d myremote azure://mycontainer/object
$ dvc remote modify myremote account_name 'myaccount'
$ dvc push
Note that this may require the Storage Blob Data Contributor and other roles on the account.
Using a connection_string:
$ dvc remote add -d myremote azure://mycontainer/object
$ dvc remote modify --local myremote connection_string 'mysecret'
$ dvc push
Using account_key:
$ dvc remote add -d myremote azure://mycontainer/object
$ dvc remote modify --local myremote account_name 'myaccount'
$ dvc remote modify --local myremote account_key 'mysecret'
$ dvc push
Using sas_token:
$ dvc remote add -d myremote azure://mycontainer/object
$ dvc remote modify --local myremote account_name 'myaccount'
$ dvc remote modify --local myremote sas_token 'mysecret'
$ dvc push