This repo is based on sekka1/docker-s3cmd, but we have modified the way it works quite heavily.

It packages s3cmd in a Docker container. This is useful if you are already using Docker: pull this image onto your Docker host and move files between the local machine and S3 simply by running a container.

Built on Alpine Linux, the image is ~15 MB.

An automated build of this image is available on Docker Hub: hochzehn/s3cmd
To sync a local file or folder to S3:

```
AWS_KEY=<YOUR AWS KEY>
AWS_SECRET=<YOUR AWS SECRET>
BUCKET=s3://your-bucket-name/
LOCAL_FILE=/tmp/database

docker run --rm \
  --env aws_key=${AWS_KEY} \
  --env aws_secret=${AWS_SECRET} \
  --env cmd=sync-local-to-s3 \
  --env DEST_S3=${BUCKET} \
  -v ${LOCAL_FILE}:/opt/src \
  hochzehn/s3cmd
```
- Change `LOCAL_FILE` to the file/folder you want to upload to S3.
- Append any options to pass to `s3cmd` at the end, e.g. `--delete-removed`: `... hochzehn/s3cmd --delete-removed`
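Putting the pieces together, a full upload invocation with `--delete-removed` appended might look like the sketch below. The bucket and local path are placeholder values, and `echo` turns the command into a dry run that only prints what would be executed; remove it to actually perform the sync.

```shell
# Hypothetical example values -- replace with your own.
AWS_KEY="AKIA-EXAMPLE"
AWS_SECRET="example-secret"
BUCKET="s3://example-bucket/backups/"
LOCAL_FILE="/var/backups"

# echo prints the assembled command as a dry run;
# delete the leading "echo" to really run the sync.
echo docker run --rm \
  --env aws_key=${AWS_KEY} \
  --env aws_secret=${AWS_SECRET} \
  --env cmd=sync-local-to-s3 \
  --env DEST_S3=${BUCKET} \
  -v ${LOCAL_FILE}:/opt/src \
  hochzehn/s3cmd --delete-removed
```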
To sync from S3 to a local folder:

```
AWS_KEY=<YOUR AWS KEY>
AWS_SECRET=<YOUR AWS SECRET>
BUCKET=s3://your-bucket-name/
LOCAL_FILE=/tmp

docker run --rm \
  --env aws_key=${AWS_KEY} \
  --env aws_secret=${AWS_SECRET} \
  --env cmd=sync-s3-to-local \
  --env SRC_S3=${BUCKET} \
  -v ${LOCAL_FILE}:/opt/dest \
  hochzehn/s3cmd
```
- Change `LOCAL_FILE` to the file/folder to download the files from S3 to.
- Append any options to pass to `s3cmd` at the end, e.g. `--delete-removed`: `... hochzehn/s3cmd --delete-removed`
You can also run other s3cmd commands by appending them after the image name, e.g.:

```
AWS_KEY=<YOUR AWS KEY>
AWS_SECRET=<YOUR AWS SECRET>

docker run --rm \
  --env aws_key=${AWS_KEY} \
  --env aws_secret=${AWS_SECRET} \
  hochzehn/s3cmd \
  ls /
```