Expand documentation on concurrent transfers #780

Open
wants to merge 1 commit into base: master
9 changes: 8 additions & 1 deletion docs/Configuration.md
@@ -90,7 +90,14 @@ max_backup_count = 0
; Used to throttle S3 backups/restores:
transfer_max_bandwidth = 50MB/s

; Max number of downloads/uploads. Not used by the GCS backend.
; Regardless of the storage provider, determines the number of files to process in parallel when uploading or downloading.
; Every file in a group of concurrently processed files must finish before the next group starts
; (so the groups are processed synchronously, one after another).
; On top of that, there is storage-provider-specific behaviour:
; - For Google, it has no extra meaning.
; - For Azure, we pass it to the SDK library we use when the file is bigger than the multi part upload threshold (configured below).
; - For S3, it controls the size of the executor we submit transfer tasks into. We do not propagate it to
;   boto's concurrency parameter.
concurrent_transfers = 1

; Size over which S3 uploads will be using the awscli with multi part uploads. Defaults to 100MB.
9 changes: 8 additions & 1 deletion medusa-example.ini
@@ -95,7 +95,14 @@ max_backup_count = 0
; Used to throttle S3 backups/restores:
transfer_max_bandwidth = 50MB/s

; Max number of concurrent downloads/uploads.
; Regardless of the storage provider, determines the number of files to process in parallel when uploading or downloading.
; Every file in a group of concurrently processed files must finish before the next group starts
; (so the groups are processed synchronously, one after another).
; On top of that, there is storage-provider-specific behaviour:
; - For Google, it has no extra meaning.
; - For Azure, we pass it to the SDK library we use when the file is bigger than the multi part upload threshold (configured below).
; - For S3, it controls the size of the executor we submit transfer tasks into. We do not propagate it to
;   boto's concurrency parameter.
concurrent_transfers = 1

; Size over which uploads will be using multi part uploads. Defaults to 20MB.
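
For readers of this PR, a minimal Python sketch of the behaviour the new comment describes may help. This is not Medusa's actual implementation; `upload_file`, `transfer_in_groups`, and the file names are hypothetical placeholders. It only illustrates how files are processed in synchronous groups of `concurrent_transfers`:

```python
# A minimal sketch (not Medusa's actual code) of the group-by-group semantics
# described above: files are split into groups of `concurrent_transfers`, and
# every file in a group must finish before the next group starts.
from concurrent.futures import ThreadPoolExecutor


def upload_file(path):
    # Stand-in for the real per-file transfer call.
    print(f"uploading {path}")


def transfer_in_groups(files, concurrent_transfers=1):
    with ThreadPoolExecutor(max_workers=concurrent_transfers) as executor:
        for start in range(0, len(files), concurrent_transfers):
            group = files[start:start + concurrent_transfers]
            # Submit the whole group, then wait for all of it before moving on.
            futures = [executor.submit(upload_file, f) for f in group]
            for future in futures:
                future.result()  # blocks until this file is done; re-raises errors


transfer_in_groups(["a.db", "b.db", "c.db"], concurrent_transfers=2)
```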