AWS S3 output fails to upload objects in partitioned path #2869
Comments
Hey @bkh-kl 👋 Thanks for reporting this issue! Unfortunately, I wasn't able to reproduce it using the Localstack Docker container, which seems to accept that path just fine. I also tried replacing […]. I do wonder, though, if the issue might be caused by metadata instead (see docs here). Can you please add a […]?
Thanks @mihaitodor! I removed the […]
You are right! In the Localstack S3 bucket it works correctly when I use the above path, as you can see in the following screenshot. However, when I switch the same stream to an AWS S3 bucket, the same error appears:

```json
{"@service":"redpanda-connect","label":"s3_output","level":"error","msg":"Failed to send message to aws_s3: operation error S3: PutObject, https response error StatusCode: 403, RequestID: XYZ, HostID: XYZ, api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.","path":"root.output","stream":"stream-2","time":"2024-09-16T10:12:05Z"}
```

I have also set the […]
Thanks for checking @bkh-kl! I don't know how to reproduce it without an AWS account, but I see some other projects do use percent encoding for paths (for example peak/s5cmd#280). Maybe give it a shot and see what happens:

```yaml
path: '${! ["v1", "events", "year=55", "stream_2-%s.parquet".format(uuid_v4())].map_each(e -> e.escape_url_query()).join("/") }'
```
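As a rough standalone illustration of what that mapping does, here is a Go sketch (the segment values are made up for the example) that percent-encodes each path segment individually and then joins them with `/`, so a reserved character like `=` becomes `%3D` while the separators stay intact:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// encodeSegments percent-encodes each path segment on its own,
// so characters like "=" become "%3D" while the "/" separators
// between segments are preserved.
func encodeSegments(segments []string) string {
	encoded := make([]string, len(segments))
	for i, s := range segments {
		encoded[i] = url.QueryEscape(s)
	}
	return strings.Join(encoded, "/")
}

func main() {
	path := encodeSegments([]string{"v1", "events", "year=2024", "month=09", "object.gz"})
	fmt.Println(path) // v1/events/year%3D2024/month%3D09/object.gz
}
```

Whether the S3 signer then treats `%3D` literally or decodes it back is exactly what the experiment above would reveal.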
OK, thanks for checking! We'll have to try and reproduce it somehow and see what we can do to fix this. If you have experience with Go, please see if you can put together a minimal "hello world" example which works.
Thanks @mihaitodor! I don't have experience with Go, but I'll definitely give it a try.
Hello!

I'm using the `aws_s3` output and would like to use certain metadata in the path so that AWS Glue can identify the partition keys based on the `key=value` format (AWS doc). This is the path example I'd like to upload my objects into:

```
bucket/events/year=2024/month=09/object.gz
```

However, the moment I add the `=` character to the path, the output fails with the following error message: […]

Is this error caused by a misconfiguration on my side, or does the output not support this yet? I also searched your documentation with no luck trying to understand whether `=` must be escaped before it can be used.

Thank you!
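For reference, a minimal sketch of the kind of output config described above, assuming the stream carries `year` and `month` metadata fields (the field names are hypothetical, and the exact interpolation functions may differ between versions; check the `aws_s3` output docs for your release):

```yaml
output:
  aws_s3:
    bucket: bucket
    # Hive-style partitioned path: AWS Glue reads "year=.../month=..."
    # segments as partition keys. The "=" character in these segments
    # is what triggers the error described in this issue.
    path: 'events/year=${! meta("year") }/month=${! meta("month") }/${! uuid_v4() }.gz'
```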