Multipart copy method #1590

Closed · wants to merge 8 commits
3 changes: 3 additions & 0 deletions docs/integration/simple-s3.md
@@ -32,6 +32,9 @@ $resource = \fopen('/path/to/cat/image.jpg', 'r');
$s3->upload('my-image-bucket', 'photos/cat_2.jpg', $resource);
$s3->upload('my-image-bucket', 'photos/cat_2.txt', 'I like this cat');

// Copy objects between buckets
$s3->copy('source-bucket', 'source-key', 'destination-bucket', 'destination-key');

// Check if a file exists
$s3->has('my-image-bucket', 'photos/cat_2.jpg'); // true

3 changes: 2 additions & 1 deletion manifest.json
@@ -577,7 +577,8 @@
"PutObject",
"PutObjectAcl",
"PutObjectTagging",
"UploadPart"
"UploadPart",
"UploadPartCopy"
]
},
"Scheduler": {
4 changes: 4 additions & 0 deletions src/Integration/Aws/SimpleS3/CHANGELOG.md
@@ -8,6 +8,10 @@

- Upgrade to `async-aws/s3` 2.0

### Added

- Added `SimpleS3Client::copy()` method

## 1.1.1

### Changed
94 changes: 94 additions & 0 deletions src/Integration/Aws/SimpleS3/src/SimpleS3Client.php
@@ -7,10 +7,15 @@
use AsyncAws\Core\Stream\FixedSizeStream;
use AsyncAws\Core\Stream\ResultStream;
use AsyncAws\Core\Stream\StreamFactory;
use AsyncAws\S3\Input\CompleteMultipartUploadRequest;
use AsyncAws\S3\Input\CopyObjectRequest;
use AsyncAws\S3\Input\CreateMultipartUploadRequest;
use AsyncAws\S3\Input\GetObjectRequest;
use AsyncAws\S3\Input\UploadPartCopyRequest;
use AsyncAws\S3\S3Client;
use AsyncAws\S3\ValueObject\CompletedMultipartUpload;
use AsyncAws\S3\ValueObject\CompletedPart;
use AsyncAws\S3\ValueObject\CopyPartResult;

/**
* A simplified S3 client that hides some of the complexity of working with S3.
@@ -47,6 +52,71 @@ public function has(string $bucket, string $key): bool
return $this->objectExists(['Bucket' => $bucket, 'Key' => $key])->isSuccess();
}

/**
* @param array{
* ACL?: \AsyncAws\S3\Enum\ObjectCannedACL::*,
* CacheControl?: string,
* ContentLength?: int,
* ContentType?: string,
* Metadata?: array<string, string>,
* PartSize?: int,
* } $options
*/
public function copy(string $srcBucket, string $srcKey, string $destBucket, string $destKey, array $options = []): void
{
$megabyte = 1024 * 1024;
if (!empty($options['ContentLength'])) {
$contentLength = (int) $options['ContentLength'];
unset($options['ContentLength']);
} else {
$contentLength = (int) $this->headObject(['Bucket' => $srcBucket, 'Key' => $srcKey])->getContentLength();
}

/*
 * The maximum number of parts is 10,000 and the PartSize must be a power of 2.
 * We default to 64MB per part, which means we only support copying files smaller
 * than 64MB * 10,000 = 640GB. If you are copying larger files, set PartSize to a
 * higher number, such as 128, 256 or 512 (max 4096).
 */
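// For example, with the default 64MB parts, a 200GB object is split into 200 * 1024 / 64 = 3200 UploadPartCopy requests.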
$partSize = ($options['PartSize'] ?? 64) * $megabyte;
unset($options['PartSize']);

// If file is less than 5GB, use normal atomic copy
if ($contentLength < 5120 * $megabyte) {
$this->copyObject(
CopyObjectRequest::create(
array_merge($options, ['Bucket' => $destBucket, 'Key' => $destKey, 'CopySource' => "{$srcBucket}/{$srcKey}"])
)
);

return;
}

/** @var string $uploadId */
$uploadId = $this->createMultipartUpload(
CreateMultipartUploadRequest::create(
array_merge($options, ['Bucket' => $destBucket, 'Key' => $destKey])
)
)->getUploadId();

$bytePosition = 0;
$parts = [];
for ($i = 1; $bytePosition < $contentLength; ++$i) {
$startByte = $bytePosition;
$endByte = $bytePosition + $partSize - 1 >= $contentLength ? $contentLength - 1 : $bytePosition + $partSize - 1;
$parts[] = $this->doMultipartCopy($destBucket, $destKey, $uploadId, $i, sprintf('%s/%s', $srcBucket, $srcKey), $startByte, $endByte);
Member:

Would it make sense to run this in parallel?

Contributor Author:

Makes sense, but I have no idea how to implement this))

Contributor Author:

How could this be run in parallel?

Member:

Avoid consuming the response: when you access a property (by calling a getter), the response blocks until it is fully processed.

Once again, I don't know whether it's better to run these requests in sequence or in parallel (maybe running too many requests in parallel performs worse) => you need to check the AWS recommendations for this.

To process requests in parallel you should do something like:

for (...) {
  $responses[] = $client->uploadPartCopy(...);
}

$success = true;
foreach ($responses as $response) {
  try {
    $copyPartResult = $response->getCopyPartResult();
    $parts[] = new CompletedPart(['ETag' => $copyPartResult->getEtag(), 'PartNumber' => $partNumber]);
  } catch (\Throwable $e) {
    $success = false;
    break;
  }
}

if (!$success) {
  $this->abortMultipartUpload(['Bucket' => $bucket, 'Key' => $key, 'UploadId' => $uploadId]);
  foreach ($responses as $response) {
    try {
      $response->cancel();
    } catch (\Throwable $e) {
      // ...
    }
  }

   throw ...;
}

Contributor Author:

Got it!
It seems the only limit is 10,000 connections (equal to the number of parts):
https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html
Anyway, I think that 10k connections, even without a body, is too much.

The AWS SDK has a concurrency option for this kind of copy.
I will see how to implement it here.
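
For discussion, a rough sketch of what a bounded-concurrency variant could look like (not part of this PR; the batch size of 4 and the precomputed $ranges list of [startByte, endByte] pairs are assumptions):

$parts = [];
$partNumber = 0;
foreach (array_chunk($ranges, 4) as $batch) {
    $responses = [];
    foreach ($batch as [$startByte, $endByte]) {
        ++$partNumber;
        $responses[$partNumber] = $this->uploadPartCopy(UploadPartCopyRequest::create([
            'Bucket' => $destBucket,
            'Key' => $destKey,
            'UploadId' => $uploadId,
            'CopySource' => "{$srcBucket}/{$srcKey}",
            'CopySourceRange' => sprintf('bytes=%d-%d', $startByte, $endByte),
            'PartNumber' => $partNumber,
        ]));
    }

    // Responses are lazy, so the whole batch is in flight at once;
    // reading the CopyPartResult resolves each one in turn.
    foreach ($responses as $number => $response) {
        $copyPartResult = $response->getCopyPartResult();
        $parts[] = new CompletedPart(['ETag' => $copyPartResult->getEtag(), 'PartNumber' => $number]);
    }
}
// Error handling (abortMultipartUpload and cancelling pending responses) is omitted for brevity.

Whether a batch of 4 is a sensible trade-off would still need to be checked against the AWS guidance mentioned above.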

$bytePosition += $partSize;
}
$this->completeMultipartUpload(
CompleteMultipartUploadRequest::create([
'Bucket' => $destBucket,
'Key' => $destKey,
'UploadId' => $uploadId,
'MultipartUpload' => new CompletedMultipartUpload(['Parts' => $parts]),
])
);
}

/**
* @param string|resource|(callable(int): string)|iterable<string> $object
* @param array{
@@ -195,4 +265,28 @@ private function doSmallFileUpload(array $options, string $bucket, string $key,
'Body' => $object,
]));
}

private function doMultipartCopy(string $bucket, string $key, string $uploadId, int $partNumber, string $copySource, int $startByte, int $endByte): CompletedPart
{
try {
$response = $this->uploadPartCopy(
UploadPartCopyRequest::create([
'Bucket' => $bucket,
'Key' => $key,
'UploadId' => $uploadId,
'CopySource' => $copySource,
'CopySourceRange' => sprintf('bytes=%d-%d', $startByte, $endByte),
'PartNumber' => $partNumber,
])
);
/** @var CopyPartResult $copyPartResult */
$copyPartResult = $response->getCopyPartResult();

return new CompletedPart(['ETag' => $copyPartResult->getEtag(), 'PartNumber' => $partNumber]);
} catch (\Throwable $e) {
$this->abortMultipartUpload(['Bucket' => $bucket, 'Key' => $key, 'UploadId' => $uploadId]);

throw $e;
}
}
}
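
As a side note, here is a minimal usage sketch of the new copy() method (not part of the diff; bucket and key names are placeholders, and the options come from the method's documented options array):

$s3 = new SimpleS3Client(); // client construction left with defaults for brevity

// Objects smaller than 5GB are copied with a single atomic CopyObject call.
$s3->copy('source-bucket', 'backups/small.tar', 'destination-bucket', 'backups/small.tar');

// Larger objects fall back to a multipart copy. Passing ContentLength skips the
// HeadObject lookup, and PartSize (in MB) controls each UploadPartCopy range.
$s3->copy('source-bucket', 'backups/huge.tar', 'destination-bucket', 'backups/huge.tar', [
    'ContentLength' => 200 * 1024 ** 3, // roughly 200GB, assumed known by the caller
    'PartSize' => 128,
]);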
71 changes: 71 additions & 0 deletions src/Integration/Aws/SimpleS3/tests/Unit/SimpleS3ClientTest.php
@@ -6,7 +6,11 @@

use AsyncAws\Core\Credentials\NullProvider;
use AsyncAws\Core\Test\ResultMockFactory;
use AsyncAws\S3\Input\CompleteMultipartUploadRequest;
use AsyncAws\S3\Result\CreateMultipartUploadOutput;
use AsyncAws\S3\Result\HeadObjectOutput;
use AsyncAws\S3\Result\UploadPartCopyOutput;
use AsyncAws\S3\ValueObject\CopyPartResult;
use AsyncAws\SimpleS3\SimpleS3Client;
use PHPUnit\Framework\TestCase;
use Symfony\Component\HttpClient\MockHttpClient;
@@ -137,6 +141,73 @@ public function testUploadSmallFileEmptyClosure()
});
}

public function testCopySmallFileWithProvidedLength()
{
$megabyte = 1024 * 1024;
$s3 = $this->getMockBuilder(SimpleS3Client::class)
->disableOriginalConstructor()
->onlyMethods(['createMultipartUpload', 'abortMultipartUpload', 'copyObject', 'completeMultipartUpload'])
->getMock();

$s3->expects(self::never())->method('createMultipartUpload');
$s3->expects(self::never())->method('abortMultipartUpload');
$s3->expects(self::never())->method('completeMultipartUpload');
$s3->expects(self::once())->method('copyObject');

$s3->copy('bucket', 'robots.txt', 'bucket', 'copy-robots.txt', ['ContentLength' => 5 * $megabyte]);
}

public function testCopySmallFileWithoutProvidedLength()
{
$megabyte = 1024 * 1024;
$s3 = $this->getMockBuilder(SimpleS3Client::class)
->disableOriginalConstructor()
->onlyMethods(['createMultipartUpload', 'abortMultipartUpload', 'copyObject', 'completeMultipartUpload', 'headObject'])
->getMock();

$s3->expects(self::never())->method('createMultipartUpload');
$s3->expects(self::never())->method('abortMultipartUpload');
$s3->expects(self::never())->method('completeMultipartUpload');
$s3->expects(self::once())->method('copyObject');
$s3->expects(self::once())->method('headObject')
->willReturn(ResultMockFactory::create(HeadObjectOutput::class, ['ContentLength' => 50 * $megabyte]));

$s3->copy('bucket', 'robots.txt', 'bucket', 'copy-robots.txt');
}

public function testCopyLargeFile()
{
$megabyte = 1024 * 1024;
$uploadedParts = 0;
$completedParts = 0;

$s3 = $this->getMockBuilder(SimpleS3Client::class)
->disableOriginalConstructor()
->onlyMethods(['createMultipartUpload', 'abortMultipartUpload', 'copyObject', 'completeMultipartUpload', 'uploadPartCopy'])
->getMock();

$s3->expects(self::once())->method('createMultipartUpload')
->willReturn(ResultMockFactory::create(CreateMultipartUploadOutput::class, ['UploadId' => '4711']));
$s3->expects(self::never())->method('abortMultipartUpload');
$s3->expects(self::never())->method('copyObject');
$s3->expects(self::any())->method('uploadPartCopy')
->with(self::callback(function () use (&$uploadedParts) {
++$uploadedParts;

return true;
}))
->willReturn(ResultMockFactory::create(UploadPartCopyOutput::class, ['copyPartResult' => new CopyPartResult(['ETag' => 'etag-4711'])]));
$s3->expects(self::once())->method('completeMultipartUpload')->with(self::callback(function (CompleteMultipartUploadRequest $request) use (&$completedParts) {
$completedParts = \count($request->getMultipartUpload()->getParts());

return true;
}));

$s3->copy('bucket', 'robots.txt', 'bucket', 'copy-robots.txt', ['ContentLength' => 6144 * $megabyte]);

self::assertEquals($completedParts, $uploadedParts);
}

private function assertSmallFileUpload(\Closure $callback, string $bucket, string $file, $object): void
{
$s3 = $this->getMockBuilder(SimpleS3Client::class)