docs for serve docker and docker plugin (#5415)

Ivan Andreev 2021-06-01 17:13:44 +03:00
parent 55153403aa
commit 7436768d62
8 changed files with 646 additions and 18 deletions

.gitignore vendored

@ -13,3 +13,4 @@ rclone.iml
fuzz-build.zip
*.orig
*.rej
Thumbs.db


@ -0,0 +1,19 @@
[Unit]
Description=Docker Volume Plugin for rclone
Requires=docker.service
Before=docker.service
After=network.target
Requires=docker-volume-rclone.socket
After=docker-volume-rclone.socket
[Service]
ExecStart=/usr/bin/rclone serve docker
ExecStartPre=/bin/mkdir -p /var/lib/docker-volumes/rclone
ExecStartPre=/bin/mkdir -p /var/lib/docker-plugins/rclone/config
ExecStartPre=/bin/mkdir -p /var/lib/docker-plugins/rclone/cache
Environment=RCLONE_CONFIG=/var/lib/docker-plugins/rclone/config/rclone.conf
Environment=RCLONE_CACHE_DIR=/var/lib/docker-plugins/rclone/cache
Environment=RCLONE_VERBOSE=1
[Install]
WantedBy=multi-user.target


@ -0,0 +1,8 @@
[Unit]
Description=Docker Volume Plugin for rclone
[Socket]
ListenStream=/run/docker/plugins/rclone.sock
[Install]
WantedBy=sockets.target


@ -4,4 +4,40 @@ package docker
var longHelp = `
This command implements the Docker volume plugin API allowing docker to use
rclone as a data storage mechanism for various cloud providers.
rclone provides [docker volume plugin](/docker) based on it.
To create a docker plugin, one must create a Unix or TCP socket that Docker
will look for when you use the plugin; rclone then listens on this socket for
commands from the docker daemon and runs the corresponding code when necessary.
Docker plugins can run as a managed plugin under control of the docker daemon
or as an independent native service. For testing, you can just run it directly
from the command line, for example:
|||
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
|||
Running |rclone serve docker| will create said socket, listening for
commands from Docker to create the necessary volumes. Normally you need not
give the |--socket-addr| flag. The API will listen on the unix domain socket
at |/run/docker/plugins/rclone.sock|. In the example above rclone will create
a TCP socket and a small file |/etc/docker/plugins/rclone.spec| containing
the socket address. We use |sudo| because both paths are writeable only by
the root user.
If you later decide to change the listening socket, the docker daemon must be
restarted to reconnect to |/run/docker/plugins/rclone.sock|
or parse the new |/etc/docker/plugins/rclone.spec|. Until you restart, any
volume-related docker commands will time out trying to access the old socket.
Running directly is supported on **Linux only**, not on Windows or macOS.
This is not a problem with the managed plugin mode described in detail
in the [full documentation](https://rclone.org/docker).
The command will create volume mounts under the path given by |--base-dir|
(by default |/var/lib/docker-volumes/rclone| available only to root)
and maintain the JSON formatted file |docker-plugin.state| in the rclone cache
directory with book-keeping records of created and mounted volumes.
All mount and VFS options are submitted by the docker daemon via the API, but
you can also provide defaults on the command line, as well as set the path to
the config file and cache directory or adjust logging verbosity.
`


@ -39,6 +39,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone backend](/commands/rclone_backend/) - Run a backend specific command.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied.


@ -35,6 +35,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone serve dlna](/commands/rclone_serve_dlna/) - Serve remote:path over DLNA
* [rclone serve docker](/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
* [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.


@ -9,13 +9,49 @@ url: /commands/rclone_serve_docker/
Serve any remote on docker's volume plugin API.
## Synopsis
This command implements the Docker volume plugin API allowing docker to use
rclone as a data storage mechanism for various cloud providers.
rclone provides [docker volume plugin](/docker) based on it.
To create a docker plugin, one must create a Unix or TCP socket that Docker
will look for when you use the plugin; rclone then listens on this socket for
commands from the docker daemon and runs the corresponding code when necessary.
Docker plugins can run as a managed plugin under control of the docker daemon
or as an independent native service. For testing, you can just run it directly
from the command line, for example:
```
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
```
Running `rclone serve docker` will create said socket, listening for
commands from Docker to create the necessary volumes. Normally you need not
give the `--socket-addr` flag. The API will listen on the unix domain socket
at `/run/docker/plugins/rclone.sock`. In the example above rclone will create
a TCP socket and a small file `/etc/docker/plugins/rclone.spec` containing
the socket address. We use `sudo` because both paths are writeable only by
the root user.
If you later decide to change the listening socket, the docker daemon must be
restarted to reconnect to `/run/docker/plugins/rclone.sock`
or parse the new `/etc/docker/plugins/rclone.spec`. Until you restart, any
volume-related docker commands will time out trying to access the old socket.
Running directly is supported on **Linux only**, not on Windows or macOS.
This is not a problem with the managed plugin mode described in detail
in the [full documentation](https://rclone.org/docker).
The command will create volume mounts under the path given by `--base-dir`
(by default `/var/lib/docker-volumes/rclone` available only to root)
and maintain the JSON formatted file `docker-plugin.state` in the rclone cache
directory with book-keeping records of created and mounted volumes.
All mount and VFS options are submitted by the docker daemon via the API, but
you can also provide defaults on the command line, as well as set the path to
the config file and cache directory or adjust logging verbosity.
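For example, a direct run with a custom config file, cache directory and more verbose logging might look like this (the paths below are illustrative, not required defaults):
```
sudo rclone serve docker --config /etc/rclone/rclone.conf \
    --cache-dir /var/cache/rclone --vfs-cache-mode writes -vv
```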
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk
@ -29,7 +65,7 @@ doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info
about files and directories (but not the data) in memory.
## VFS Directory Cache
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
@ -37,7 +73,7 @@ backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -60,7 +96,7 @@ Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
## VFS File Buffering
The `--buffer-size` flag determines the amount of memory,
that will be used to buffer data in advance.
@ -77,7 +113,7 @@ be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
## VFS File Caching
These flags control the VFS file caching options. File caching is
necessary to make the VFS layer appear compatible with a normal file
@ -123,7 +159,7 @@ around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
@ -138,7 +174,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disk. This means that files opened for
@ -151,7 +187,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
@ -162,7 +198,7 @@ This mode should support all normal file system operations.
If an upload fails it will be retried at exponentially increasing
intervals up to 1 minute.
### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.
@ -190,7 +226,7 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
@ -231,7 +267,7 @@ modified files from cache (the related global flag --checkers have no effect on
--transfers int Number of file transfers to run in parallel. (default 4)
## VFS Case Sensitivity
Linux file systems are case-sensitive: two files can differ only
by case, and the exact case must be used when opening a file.
@ -266,7 +302,7 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
@ -284,7 +320,7 @@ calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve docker [flags]
```
## Options
```
--allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
@ -292,7 +328,7 @@ rclone serve docker [flags]
--allow-root Allow access to root user. Not supported on Windows.
--async-read Use asynchronous reads. Not supported on Windows. (default true)
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--base-dir string base directory for volumes (default "/var/lib/docker-volumes/rclone")
--daemon Run mount as a daemon (background mode). Not supported on Windows.
--daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
--debug-fuse Debug the FUSE internals - needs -v.
@ -337,7 +373,7 @@ rclone serve docker [flags]
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.

docs/content/docker.md Normal file

@ -0,0 +1,526 @@
---
title: "Docker Volume Plugin"
description: "Docker Volume Plugin"
---
# Docker Volume Plugin
## Introduction
Docker 1.9 has added support for creating
[named volumes](https://docs.docker.com/storage/volumes/) via
[command-line interface](https://docs.docker.com/engine/reference/commandline/volume_create/)
and mounting them in containers as a way to share data between them.
Since Docker 1.10 you can create named volumes with
[Docker Compose](https://docs.docker.com/compose/) by descriptions in
[docker-compose.yml](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference)
files for use by container groups on a single host.
As of Docker 1.12 volumes are supported by
[Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts/)
included with Docker Engine and created from descriptions in
[swarm compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
files for use with _swarm stacks_ across multiple cluster nodes.
[Docker Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/)
augment the default `local` volume driver included in Docker with stateful
volumes shared across containers and hosts. Unlike local volumes, your
data will _not_ be deleted when such a volume is removed. Plugins can run
managed by the docker daemon, as a native system service
(under systemd, _sysv_ or _upstart_) or as a standalone executable.
Rclone can run as a docker volume plugin in all these modes.
It interacts with the local docker daemon
via [plugin API](https://docs.docker.com/engine/extend/plugin_api/) and
handles mounting of remote file systems into docker containers so it must
run on the same host as the docker daemon or on every Swarm node.
## Getting started
In the first example we will use the [SFTP](/sftp/)
rclone volume with Docker engine on a standalone Ubuntu machine.
Start from [installing Docker](https://docs.docker.com/engine/install/)
on the host.
The _FUSE_ driver is a prerequisite for rclone mounting and should be
installed on the host:
```
sudo apt-get -y install fuse
```
Create two directories required by the rclone docker plugin:
```
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
```
Install the managed rclone docker plugin:
```
docker plugin install rclone/docker-volume-rclone args="-v" --alias rclone --grant-all-permissions
docker plugin list
```
Create your [SFTP volume](/sftp/#standard-options):
```
docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
```
Note that since all options are static, you don't even have to run
`rclone config` or create the `rclone.conf` file (but the `config` directory
should still be present). In the simplest case you can use `localhost`
as _hostname_ and your SSH credentials as _username_ and _password_.
You can also change the remote path to your home directory on the host,
for example `-o path=/home/username`.
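For instance, following the example above, a volume pointing at your home directory could be created like this (host and credentials are placeholders):
```
docker volume create homedir -d rclone -o type=sftp -o sftp-host=localhost -o sftp-user=username -o sftp-pass=password -o path=/home/username -o allow-other=true
```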
Time to create a test container and mount the volume into it:
```
docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash
```
If all goes well, you will enter the new container and land right in
the mounted SFTP remote. You can type `ls` to list the mounted directory
or otherwise play with it. Type `exit` when you are done.
The container will stop but the volume will stay, ready to be reused.
When it's not needed anymore, remove it:
```
docker volume list
docker volume remove firstvolume
```
Now let us try **something more elaborate**:
a [Google Drive](/drive/) volume on a multi-node Docker Swarm.
You should start by installing Docker and FUSE, creating the plugin
directories and installing the rclone plugin on _every_ swarm node.
Then [set up the Swarm](https://docs.docker.com/engine/swarm/swarm-mode/).
Google Drive volumes need an access token which can be set up via a web
browser and will be periodically renewed by rclone. The managed
plugin cannot run a browser so we will use a technique similar to the
[rclone setup on a headless box](/remote_setup/).
Run [rclone config](/commands/rclone_config_create/)
on _another_ machine equipped with a _web browser_ and graphical user interface.
Create the [Google Drive remote](/drive/#standard-options).
When done, transfer the resulting `rclone.conf` to the Swarm cluster
and save it as `/var/lib/docker-plugins/rclone/config/rclone.conf`
on _every_ node. By default this location is accessible only to the
root user so you will need appropriate privileges. The resulting config
will look like this:
```
[gdrive]
type = drive
scope = drive
drive_id = 1234567...
root_folder_id = 0Abcd...
token = {"access_token":...}
```
Now create a file named `example.yml` with a swarm stack description
like this:
```
version: '3'
services:
  heimdall:
    image: linuxserver/heimdall:latest
    ports: [8080:80]
    volumes: [configdata:/config]
volumes:
  configdata:
    driver: rclone
    driver_opts:
      remote: 'gdrive:heimdall'
      allow_other: 'true'
      vfs_cache_mode: full
      poll_interval: 0
```
and run the stack:
```
docker stack deploy example -c ./example.yml
```
After a few seconds docker will spread the parsed stack description
over the cluster, create the `example_heimdall` service on port _8080_,
run service containers on one or more cluster nodes and request
the `example_configdata` volume from rclone plugins on the node hosts.
You can use the following commands to confirm results:
```
docker service ls
docker service ps example_heimdall
docker volume ls
```
Point your browser to `http://cluster.host.address:8080` and play with
the service. Stop it with `docker stack remove example` when you are done.
Note that the `example_configdata` volume(s) created on demand at the
cluster nodes will not be automatically removed together with the stack
but stay for future reuse. You can remove them manually by invoking
the `docker volume remove example_configdata` command on every node.
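For example, if you have ssh access to the nodes, a sketch of a one-pass cleanup (node names are illustrative):
```
for node in node1 node2 node3; do
    ssh "$node" docker volume remove example_configdata
done
```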
## Creating Volumes via CLI
Volumes can be created with [docker volume create](https://docs.docker.com/engine/reference/commandline/volume_create/).
Here are a few examples:
```
docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
docker volume create vol2 -d rclone -o remote=:tardigrade,access_grant=xxx:heimdall
docker volume create vol3 -d rclone -o type=tardigrade -o path=heimdall -o tardigrade-access-grant=xxx -o poll-interval=0
```
Note the `-d rclone` flag that tells docker to request the volume from the
rclone driver. This works even if you installed the managed driver by its full
name `rclone/docker-volume-rclone`, because you provided the `--alias rclone`
option during installation.
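If you had installed the plugin without the alias, you would reference it by its full name instead, roughly like this (an illustrative sketch):
```
docker volume create vol4 -d rclone/docker-volume-rclone:latest -o remote=gdrive: -o allow-other=true
```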
Volumes can be inspected as follows:
```
docker volume list
docker volume inspect vol1
```
## Volume Configuration
Rclone flags and volume options are set via the `-o` flag to the
`docker volume create` command. They include backend-specific parameters
as well as mount and _VFS_ options. Also there are a few
special `-o` options:
`remote`, `fs`, `type`, `path`, `mount-type` and `persist`.
`remote` determines an existing remote name from the config file, with
trailing colon and optionally with a remote path. See the full syntax in
the [rclone documentation](/docs/#syntax-of-remote-paths).
This option can be aliased as `fs` to prevent confusion with the
_remote_ parameter of such backends as _crypt_ or _alias_.
The `remote=:backend:dir/subdir` syntax can be used to create
[on-the-fly (config-less) remotes](/docs/#backend-path-to-dir),
while the `type` and `path` options provide a simpler alternative for this.
Using two split options
```
-o type=backend -o path=dir/subdir
```
is equivalent to the combined syntax
```
-o remote=:backend:dir/subdir
```
but is arguably easier to parameterize in scripts.
The `path` part is optional.
[Mount and VFS options](/commands/rclone_serve_docker/#options)
as well as [backend parameters](/flags/#backend-flags) are named
like their twin command-line flags without the `--` CLI prefix.
Optionally you can use underscores instead of dashes in option names.
For example, `--vfs-cache-mode full` becomes
`-o vfs-cache-mode=full` or `-o vfs_cache_mode=full`.
Boolean CLI flags without value will gain the `true` value, e.g.
`--allow-other` becomes `-o allow-other=true` or `-o allow_other=true`.
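For instance, these two commands (with an illustrative remote) create volumes with identical settings:
```
docker volume create v1 -d rclone -o remote=gdrive: -o vfs-cache-mode=full -o allow-other=true
docker volume create v2 -d rclone -o remote=gdrive: -o vfs_cache_mode=full -o allow_other=true
```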
Please note that you can provide parameters only for the backend immediately
referenced by the backend type of mounted `remote`.
If this is a wrapping backend like _alias, chunker or crypt_, you cannot
provide options for the referred-to remote or backend. This limitation is
imposed by the rclone connection string parser. The only workaround is to
feed the plugin an `rclone.conf` file or configure plugin arguments (see below).
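As a sketch of that workaround, you could describe the wrapped chain in the plugin `rclone.conf` and then reference only the outer remote by name. The remote names and secrets below are placeholders; the `pass` and `password` values must be obscured with `rclone obscure`:
```
sudo tee -a /var/lib/docker-plugins/rclone/config/rclone.conf <<'EOF'
[mybox]
type = sftp
host = localhost
user = username
pass = OBSCURED_PASSWORD

[mycrypt]
type = crypt
remote = mybox:encrypted
password = OBSCURED_PASSWORD
EOF
docker volume create secretvol -d rclone -o remote=mycrypt: -o vfs-cache-mode=full
```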
## Special Volume Options
`mount-type` determines the mount method and in general can be one of:
`mount`, `cmount`, or `mount2`. This can be aliased as `mount_type`.
It should be noted that the managed rclone docker plugin currently does
not support the `cmount` method and `mount2` is rarely needed.
This option defaults to the first found method, which is usually `mount`
so you generally won't need it.
`persist` is a reserved boolean (true/false) option.
In the future it will allow on-the-fly remotes to be persisted in the plugin
`rclone.conf` file.
## Connection Strings
The `remote` value can be extended
with [connection strings](/docs/#connection-strings)
as an alternative way to supply backend parameters. This is equivalent
to the `-o` backend options with one _syntactic difference_.
Inside a connection string the backend prefix must be dropped from parameter
names, but in the `-o param=value` array it must be present.
For instance, compare the following option array
```
-o remote=:sftp:/home -o sftp-host=localhost
```
with equivalent connection string:
```
-o remote=:sftp,host=localhost:/home
```
This difference exists because flag options `-o key=val` include not only
backend parameters but also mount/VFS flags and possibly other settings.
Also it allows the `remote` option to be distinguished from `crypt-remote`
(or similarly named backend parameters) and arguably simplifies scripting
due to clearer value substitution.
## Using with Swarm or Compose
Both _Docker Swarm_ and _Docker Compose_ use
[YAML](http://yaml.org/spec/1.2/spec.html)-formatted text files to describe
groups (stacks) of containers, their properties, networks and volumes.
_Compose_ uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) format,
_Swarm_ uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) format.
They are mostly similar, differences are explained in the
[docker documentation](https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading).
Volumes are described by the children of the top-level `volumes:` node.
Each of them should be named after its volume and have at least two
elements, the self-explanatory `driver: rclone` value and the
`driver_opts:` structure playing the same role as `-o key=val` CLI flags:
```
volumes:
  volume_name_1:
    driver: rclone
    driver_opts:
      remote: 'gdrive:'
      allow_other: 'true'
      vfs_cache_mode: full
      token: '{"type": "borrower", "expires": "2021-12-31"}'
      poll_interval: 0
```
Notice a few important details:
- YAML prefers `_` in option names instead of `-`.
- YAML treats single and double quotes interchangeably.
Simple strings and integers can be left unquoted.
- Boolean values must be quoted like `'true'` or `"false"` because
these two words are reserved by YAML.
- The filesystem string is keyed with `remote` (or with `fs`).
Normally you can omit quotes here, but if the string ends with a colon,
you **must** quote it like `remote: "storage_box:"`.
- YAML is picky about surrounding braces in values as this is in fact
another [syntax for key/value mappings](http://yaml.org/spec/1.2/spec.html#id2790832).
For example, JSON access tokens usually contain double quotes and
surrounding braces, so you must put them in single quotes.
## Installing as Managed Plugin
The docker daemon can install plugins from an image registry and run them managed.
We maintain the
[docker-volume-rclone](https://hub.docker.com/p/rclone/docker-volume-rclone/)
plugin image on [Docker Hub](https://hub.docker.com).
The plugin requires the presence of two directories on the host before it can
be installed. Note that the plugin will **not** create them automatically.
By default they must exist on the host at the following locations
(though you can tweak the paths):
- `/var/lib/docker-plugins/rclone/config`
is reserved for the `rclone.conf` config file and **must** exist
even if it's empty and the config file is not present.
- `/var/lib/docker-plugins/rclone/cache`
holds the plugin state file as well as optional VFS caches.
You can [install managed plugin](https://docs.docker.com/engine/reference/commandline/plugin_install/)
with default settings as follows:
```
docker plugin install rclone/docker-volume-rclone:latest --grant-all-permissions --alias rclone
```
The managed plugin is in fact a special container running in a namespace separate
from normal docker containers. Inside it runs the `rclone serve docker`
command. The config and cache directories are bind-mounted into the
container at start. The docker daemon connects to a unix socket created
by the command inside the container. The command creates on-demand remote
mounts right inside it, then the docker machinery propagates them through kernel
mount namespaces and bind-mounts them into the requesting user containers.
You can tweak a few plugin settings after installation when it's disabled
(not in use), for instance:
```
docker plugin disable rclone
docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
docker plugin enable rclone
docker plugin inspect rclone
```
Note that if docker refuses to disable the plugin, you should find and
remove all active volumes connected with it as well as containers and
swarm services that use them. This is rather tedious so please carefully
plan in advance.
You can tweak the following settings:
`args`, `config`, `cache`, and `RCLONE_VERBOSE`.
It's _your_ task to keep plugin settings in sync across swarm cluster nodes.
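For example, with ssh access to the nodes you could push identical settings everywhere in one loop (node names and arguments are illustrative):
```
for node in node1 node2 node3; do
    ssh "$node" "docker plugin disable rclone && \
        docker plugin set rclone args='-v --allow-other' && \
        docker plugin enable rclone"
done
```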
`args` sets command-line arguments for the `rclone serve docker` command
(_none_ by default). Arguments should be separated by space so you will
normally want to put them in quotes on the
[docker plugin set](https://docs.docker.com/engine/reference/commandline/plugin_set/)
command line. Both [serve docker flags](/commands/rclone_serve_docker/#options)
and [generic rclone flags](/flags/) are supported, including backend
parameters that will be used as defaults for volume creation.
Note that the plugin will fail (due to [this docker bug](https://github.com/moby/moby/blob/v20.10.7/plugin/v2/plugin.go#L195))
if the `args` value is empty. Use e.g. `args="-v"` as a workaround.
`config=/host/dir` sets an alternative host location for the config directory.
The plugin will look for `rclone.conf` here. It's not an error if the config
file is not present but the directory must exist. Please note that the plugin
can periodically rewrite the config file, for example when it renews
storage access tokens. Keep this in mind and try to avoid races between
the plugin and other instances of rclone on the host that might try to
change the config simultaneously, resulting in a corrupted `rclone.conf`.
You can also put stuff like private key files for SFTP remotes in this
directory. Just note that it's bind-mounted inside the plugin container
at the predefined path `/data/config`. For example, if your key file is
named `sftp-box1.key` on the host, the corresponding volume config option
should read `-o sftp-key-file=/data/config/sftp-box1.key`.
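A short sketch of that setup, assuming the default config location and the key name from the example (host and user are placeholders):
```
sudo cp sftp-box1.key /var/lib/docker-plugins/rclone/config/
docker volume create box1 -d rclone -o type=sftp -o sftp-host=box1.example.com -o sftp-user=username -o sftp-key-file=/data/config/sftp-box1.key -o allow-other=true
```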
`cache=/host/dir` sets an alternative host location for the _cache_ directory.
The plugin will keep VFS caches here. Also it will create and maintain
the `docker-plugin.state` file in this directory. When the plugin is
restarted or reinstalled, it will look in this file to recreate any volumes
that existed previously. However, they will not be re-mounted into
consuming containers after restart. Usually this is not a problem as
the docker daemon will normally restart affected user containers after
failures, daemon restarts or host reboots.
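For example, to relocate the cache to a bigger disk (the path is illustrative) while the plugin is disabled:
```
sudo mkdir -p /mnt/bigdisk/rclone-cache
docker plugin disable rclone
docker plugin set rclone cache=/mnt/bigdisk/rclone-cache
docker plugin enable rclone
```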
`RCLONE_VERBOSE` sets plugin verbosity from `0` (errors only, by default)
to `2` (debugging). Verbosity can also be tweaked via `args="-v [-v] ..."`.
Since arguments are more generic, you will rarely need this setting.
By default the plugin output feeds the docker daemon log on the local host.
Log entries are reflected as _errors_ in the docker log but retain their
actual level assigned by rclone in the encapsulated message string.
You can set custom plugin options right when you install it, _in one go_:
```
docker plugin remove rclone
docker plugin install rclone/docker-volume-rclone:latest \
    --alias rclone --grant-all-permissions \
    args="-v --allow-other" config=/etc/rclone
docker plugin inspect rclone
```
## Healthchecks
The docker plugin volume protocol doesn't provide a way for plugins
to inform the docker daemon that a volume is (un-)available.
As a workaround you can set up a healthcheck to verify that the mount
is responding, for example:
```
services:
  my_service:
    image: my_image
    healthcheck:
      test: ls /path/to/rclone/mount || exit 1
      interval: 1m
      timeout: 15s
      retries: 3
      start_period: 15s
```
## Running Plugin under Systemd
In most cases you should prefer managed mode. Moreover, macOS and Windows
do not support native Docker plugins. Please use managed mode on these
systems. Proceed further only if you are on Linux.
First, [install rclone](/install/).
You can just run it (type `rclone serve docker` and hit enter) for the test.
Install _FUSE_:
```
sudo apt-get -y install fuse
```
Download two systemd configuration files:
[docker-volume-rclone.service](https://raw.githubusercontent.com/rclone/rclone/ci-docker/cmd/serve/docker/contrib/systemd/docker-volume-rclone.service)
and [docker-volume-rclone.socket](https://raw.githubusercontent.com/rclone/rclone/ci-docker/cmd/serve/docker/contrib/systemd/docker-volume-rclone.socket).
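For example, you can fetch them with curl (using the URLs above):
```
curl -fsSL -O https://raw.githubusercontent.com/rclone/rclone/ci-docker/cmd/serve/docker/contrib/systemd/docker-volume-rclone.service
curl -fsSL -O https://raw.githubusercontent.com/rclone/rclone/ci-docker/cmd/serve/docker/contrib/systemd/docker-volume-rclone.socket
```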
Put them to the `/etc/systemd/system/` directory:
```
cp docker-volume-rclone.service /etc/systemd/system/
cp docker-volume-rclone.socket /etc/systemd/system/
```
Please note that all commands in this section must be run as _root_ but
we omit the `sudo` prefix for brevity.
Now create directories required by the service:
```
mkdir -p /var/lib/docker-volumes/rclone
mkdir -p /var/lib/docker-plugins/rclone/config
mkdir -p /var/lib/docker-plugins/rclone/cache
```
Run the docker plugin service in socket-activated mode:
```
systemctl daemon-reload
systemctl start docker-volume-rclone.service
systemctl enable docker-volume-rclone.socket
systemctl start docker-volume-rclone.socket
systemctl restart docker
```
Or run the service directly:
- run `systemctl daemon-reload` to let systemd pick up new config
- run `systemctl enable docker-volume-rclone.service` to make the new
service start automatically when you power on your machine.
- run `systemctl start docker-volume-rclone.service`
to start the service now.
- run `systemctl restart docker` to restart docker daemon and let it
detect the new plugin socket. Note that this step is not needed in
managed mode where docker knows about plugin state changes.
The two methods are equivalent from the user perspective, but I personally
prefer socket activation.
## Troubleshooting
You can [see managed plugin settings](https://docs.docker.com/engine/extend/#debugging-plugins)
with
```
docker plugin list
docker plugin inspect rclone
```
Note that docker (including latest 20.10.7) will not show actual values
of `args`, just the defaults.
Use `journalctl --unit docker` to see managed plugin output as part of
the docker daemon log. Note that docker reflects plugin lines as _errors_
but their actual level can be seen from the encapsulated message string.
You will usually install the latest version of the managed plugin.
Use the following commands to print the actual installed version:
```
PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
```
You can even use `runc` to run shell inside the plugin container:
```
sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
```
Also you can use curl to check the plugin socket connectivity:
```
docker plugin list --no-trunc
PLUGID=123abc...
sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
```
though this is rarely needed.
Finally I'd like to mention a _caveat with updating volume settings_.
Docker CLI does not have a dedicated command like `docker volume update`.
It may be tempting to invoke `docker volume create` with updated options
on an existing volume, but there is a gotcha. The command will do nothing,
it won't even return an error. I hope that docker maintainers will fix
this some day. In the meantime be aware that you must remove your volume
before recreating it with new settings:
```
docker volume remove my_vol
docker volume create my_vol -d rclone -o opt1=new_val1 ...
```
and verify that settings did update:
```
docker volume list
docker volume inspect my_vol
```
If docker refuses to remove the volume, you should find containers
or swarm services that use it and stop them first.