Comparison between AWS Lambda Web Adapter and (Python) Mangum + FastAPI #514
Replies: 10 comments
-
I don't have exact numbers to share. In general, a framework adds a bit of cold-start time because it starts a full-blown web server. FastAPI with Uvicorn usually adds about 100~200ms of cold-start time. Usually it is a worthwhile trade-off for the productivity and portability gains. The main goal of this project is to provide an easy on-ramp for people who are new to Lambda and want to start building a serverless web app using familiar tools and frameworks. In addition, people have found that this tool helps migrate existing web apps to Lambda without requiring major refactoring of the existing code base, such as this one.
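As a rough local illustration (not a Lambda measurement), part of that overhead is visible just by timing the framework import and app construction; numbers will vary by machine:

```python
# Rough local sketch: time FastAPI import + app construction, one
# contributor to the extra cold-start latency discussed above.
import time

start = time.perf_counter()
from fastapi import FastAPI  # noqa: E402 - timed import on purpose

app = FastAPI()
print(f"import + app construction: {(time.perf_counter() - start) * 1000:.1f} ms")
```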
-
I agree, portability is the keyword. Regarding the overhead, 100-200ms doesn't bother me at all.
-
Moreover: would you say that, in absolute numbers, comparing a Mangum-based adapter against the AWS Lambda adapter, Mangum is going to be faster? By how many ms?
-
I will run a test this weekend and post the results here.
-
Here come the results: the cold-start time is actually pretty close for LWA and Mangum, within 100ms. Test setup: two Lambda functions (256MB memory) with HTTP API endpoints, triggered every 10 minutes to ensure we hit a cold start every time. I collected 144 data points over 12 hours (excluding the first few outliers caused by the cold cache in Lambda). The latency data is from HTTP API's IntegrationLatency metric, which is the end-to-end latency of the Lambda invoke. Here is the HTTP API IntegrationLatency over different percentiles. LWA is faster at the high percentiles p99 and p100; Mangum is faster at the lower percentiles, p90 down to p0. The difference is less than 100ms. Here is the latency graph over 12 hours, and here are two samples of X-Ray traces.
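For anyone reproducing this, here is a minimal sketch of pulling the same IntegrationLatency percentiles from CloudWatch with boto3. The `ApiId` value and region are placeholders, not the ones used in this test:

```python
# Sketch: fetch HTTP API IntegrationLatency percentiles from CloudWatch.
# The ApiId below is a hypothetical placeholder; adjust to your setup.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApiGateway",
    MetricName="IntegrationLatency",
    Dimensions=[{"Name": "ApiId", "Value": "abc123"}],  # hypothetical API id
    StartTime=end - timedelta(hours=12),
    EndTime=end,
    Period=12 * 3600,  # one datapoint covering the whole 12-hour window
    ExtendedStatistics=["p50", "p90", "p99"],
)

for point in response["Datapoints"]:
    print(point["Timestamp"], point["ExtendedStatistics"])
```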
-
Here are the function code and Dockerfiles used in the test.
LWA version, `main.py`:

```python
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    print("in root method")
    return {"message": "Hello World"}
```

LWA version, `Dockerfile`:

```dockerfile
FROM public.ecr.aws/docker/library/python:3.11.5-slim
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.7.1 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=8000
WORKDIR /var/task
COPY requirements.txt ./
RUN python -m pip install -r requirements.txt
COPY *.py ./
CMD exec uvicorn --port=$PORT main:app
```
Mangum version, `main.py`:

```python
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()


@app.get("/")
async def root():
    print("in root method")
    return {"message": "Hello World"}


handler = Mangum(app, lifespan="off")
```

Mangum version, `Dockerfile`:

```dockerfile
FROM public.ecr.aws/docker/library/python:3.11.5-slim
ENV PORT=8000
WORKDIR /var/task
COPY requirements.txt ./
RUN python -m pip install -r requirements.txt
COPY *.py ./
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD ["main.handler"]
```
-
Thanks so much!
-
No, for warm start it is very close: almost identical at p50. Here are the two load-test outputs:
```
Summary:
  Total:        10.5154 secs
  Slowest:      0.6041 secs
  Fastest:      0.0103 secs
  Average:      0.0188 secs
  Requests/sec: 950.9877

  Total data:   250000 bytes
  Size/request: 25 bytes

Response time histogram:
  0.010 [1]    |
  0.070 [9906] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.129 [92]   |
  0.188 [0]    |
  0.248 [0]    |
  0.307 [0]    |
  0.367 [0]    |
  0.426 [0]    |
  0.485 [0]    |
  0.545 [0]    |
  0.604 [1]    |

Latency distribution:
  10% in 0.0144 secs
  25% in 0.0157 secs
  50% in 0.0174 secs
  75% in 0.0194 secs
  90% in 0.0221 secs
  95% in 0.0265 secs
  99% in 0.0631 secs

Details (average, fastest, slowest):
  DNS+dialup: 0.0003 secs, 0.0103 secs, 0.6041 secs
  DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0206 secs
  req write:  0.0000 secs, 0.0000 secs, 0.0024 secs
  resp wait:  0.0181 secs, 0.0102 secs, 0.6035 secs
  resp read:  0.0000 secs, 0.0000 secs, 0.0006 secs

Status code distribution:
  [200] 10000 responses
```

```
Summary:
  Total:        10.0350 secs
  Slowest:      0.1506 secs
  Fastest:      0.0105 secs
  Average:      0.0191 secs
  Requests/sec: 996.5162

  Total data:   250000 bytes
  Size/request: 25 bytes

Response time histogram:
  0.011 [1]    |
  0.025 [9522] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.039 [340]  |■
  0.053 [23]   |
  0.067 [8]    |
  0.081 [5]    |
  0.095 [0]    |
  0.109 [1]    |
  0.123 [49]   |
  0.137 [45]   |
  0.151 [6]    |

Latency distribution:
  10% in 0.0143 secs
  25% in 0.0157 secs
  50% in 0.0175 secs
  75% in 0.0197 secs
  90% in 0.0221 secs
  95% in 0.0243 secs
  99% in 0.1120 secs

Details (average, fastest, slowest):
  DNS+dialup: 0.0009 secs, 0.0105 secs, 0.1506 secs
  DNS-lookup: 0.0001 secs, 0.0000 secs, 0.0239 secs
  req write:  0.0000 secs, 0.0000 secs, 0.0023 secs
  resp wait:  0.0181 secs, 0.0104 secs, 0.1389 secs
  resp read:  0.0000 secs, 0.0000 secs, 0.0008 secs

Status code distribution:
  [200] 10000 responses
```
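If you want to sanity-check warm latency yourself without a dedicated load tool, here is a minimal sequential sketch; the URL is a placeholder, and a real load test should add concurrency:

```python
# Sketch: rough warm-request percentiles against a deployed endpoint.
# Sequential on purpose to keep it simple; the URL is hypothetical.
import time
import urllib.request

URL = "https://abc123.execute-api.us-east-1.amazonaws.com/"  # placeholder

samples = []
for _ in range(100):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    samples.append((time.perf_counter() - t0) * 1000)

samples.sort()
for pct in (50, 90, 99):
    print(f"p{pct}: {samples[int(len(samples) * pct / 100) - 1]:.1f} ms")
```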
-
@gabriels1234 I'm closing this issue. Feel free to open new ones if additional questions come up.
-
I just learned about LWA today and had this exact same question. Thank you so much @bnusunny for the thorough comparison. Super helpful!!
-
Hi, I just saw the presentation and was wondering: performance-wise (mainly cold-start time), how would this compare to using the Mangum adapter + a framework (such as FastAPI)?
Thanks!