docs: s3: add section on using too much memory #7974

Nick Craig-Wood 2024-07-30 09:51:30 +01:00
parent c5c7bcdd45
commit 9866d1c636

@@ -5018,6 +5018,28 @@ nodes across the network.
For a more detailed comparison please check the documentation of the
[storj](/storj) backend.

## Memory usage {#memory}

The most common cause of rclone using lots of memory is a single
directory with millions of files in it. Despite S3 not really having
the concept of directories, rclone syncs on a directory by directory
basis to be compatible with normal file systems.

Rclone loads each directory into memory as rclone objects. Each rclone
object takes 0.5 KiB to 1 KiB of memory, so approximately 1 GB per
1,000,000 files, and the sync for that directory does not begin until
it is entirely loaded in memory. So the sync can take a long time to
start for large directories.

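If you want to gauge in advance whether a particular directory is
likely to be a problem, one option is to count the objects in it
first. This is only a rough sketch, with `remote:bucket/path` standing
in for your own remote and directory:

```sh
# List the objects in a single directory (rclone lsf is not recursive
# by default) and count them. Allow very roughly 1 GB of RAM per
# 1,000,000 objects reported.
rclone lsf --files-only remote:bucket/path | wc -l
```
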
To sync a directory with 100,000,000 files in it you would need
approximately 100 GB of memory. At some point the amount of memory
becomes difficult to provide, so there is
[a workaround for this](https://github.com/rclone/rclone/wiki/Big-syncs-with-millions-of-files)
which involves a bit of scripting.

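The general shape of that workaround is to write listings of both
sides to disk, compare them there, and then drive the transfers and
deletions from the resulting file lists instead of holding everything
in memory. The sketch below is only an illustration of the idea:
`src:bucket` and `dst:bucket` are placeholders, it compares names only
(not sizes or modification times), and the wiki page above should be
treated as the authoritative version.

```sh
# List every object on each side into sorted files on disk rather
# than in rclone's memory.
rclone lsf --files-only -R src:bucket | sort > src-files
rclone lsf --files-only -R dst:bucket | sort > dst-files

# Names only in src need transferring; names only in dst need
# deleting. Note this does not detect files that exist on both sides
# but have changed.
comm -23 src-files dst-files > need-to-transfer
comm -13 src-files dst-files > need-to-delete

# Drive rclone from the lists, which keeps memory use bounded.
rclone copy --files-from-raw need-to-transfer src:bucket dst:bucket
rclone delete --files-from-raw need-to-delete dst:bucket
```
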
At some point rclone will gain a sync mode which is effectively this
workaround but built into rclone.

## Limitations

`rclone about` is not supported by the S3 backend. Backends without
@@ -5028,7 +5050,6 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)

### Synology C2 Object Storage {#synology-c2}

[Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) provides a secure, S3-compatible, and cost-effective cloud storage solution without API request fees, download fees, or deletion penalties.