The code doesn't work on a CloudTrail bucket with lots of data, for example a CloudTrail bucket holding a year's worth of data from 100+ accounts across all regions.
Before the call on line 17 of handler.js finishes, the Lambda function reaches its execution time limit:

`const partitionTree = await getAllParitions(bucket, path);`
Also, please note that there is a minor typo in the `getAllParitions` method name; it should probably be `getAllPartitions`, but since the method is spelled the same way in s3.js it doesn't actually break anything.
What does matter is that enumerating a CloudTrail bucket with this much data can take more than 15 minutes. Is there a way to store the enumeration state in DynamoDB as well, so that multiple runs of the Lambda could pick up where the previous one left off?
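If it helps, here is a minimal sketch of what that could look like, assuming the AWS SDK v2 (`aws-sdk`) and a hypothetical DynamoDB table (`PartitionEnumerationState` below) keyed by bucket name. The names `loadCheckpoint`, `saveCheckpoint`, and `listWithCheckpoint` are illustrative, not part of this repo:

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const ddb = new AWS.DynamoDB.DocumentClient();

// Hypothetical table holding one checkpoint item per bucket.
const CHECKPOINT_TABLE = 'PartitionEnumerationState';

async function loadCheckpoint(bucket) {
  const res = await ddb.get({
    TableName: CHECKPOINT_TABLE,
    Key: { bucket },
  }).promise();
  return res.Item ? res.Item.continuationToken : undefined;
}

async function saveCheckpoint(bucket, continuationToken) {
  if (continuationToken) {
    await ddb.put({
      TableName: CHECKPOINT_TABLE,
      Item: { bucket, continuationToken },
    }).promise();
  } else {
    // Listing finished; clear the checkpoint so the next run starts fresh.
    await ddb.delete({
      TableName: CHECKPOINT_TABLE,
      Key: { bucket },
    }).promise();
  }
}

// List keys under `prefix`, persisting the S3 continuation token to DynamoDB
// after every page and stopping early when the Lambda nears its time limit.
async function listWithCheckpoint(bucket, prefix, context) {
  let token = await loadCheckpoint(bucket);
  const keys = [];
  do {
    const page = await s3.listObjectsV2({
      Bucket: bucket,
      Prefix: prefix,
      ContinuationToken: token,
    }).promise();
    keys.push(...page.Contents.map((o) => o.Key));
    token = page.NextContinuationToken;
    await saveCheckpoint(bucket, token);
    // Leave ~60s of headroom before the 15-minute limit.
    if (context.getRemainingTimeInMillis() < 60000) break;
  } while (token);
  // Partial on early exit; the next invocation resumes from the saved token.
  return keys;
}
```

Something (an EventBridge schedule, or the Lambda re-invoking itself) would then keep triggering the function until the checkpoint is cleared, at which point the full enumeration is complete.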