Hi, a few days ago we encountered this exception when pulling DVC-tracked data using the dvc get command. We were pulling about 800 files of ~4 GB each.
The issue may be related to this comment: iterative/dvc#9070 (comment)
Traceback (most recent call last):
File "reproduce_pull.py", line 50, in <module>
copy(
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/dvc_objects/fs/generic.py", line 93, in copy
return _get(
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/dvc_objects/fs/generic.py", line 241, in _get
raise result
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/dvc_objects/executors.py", line 134, in batch_coros
result = fut.result()
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/dvc_objects/fs/generic.py", line 220, in _get_one_coro
return await get_coro(
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/dvc_objects/fs/callbacks.py", line 84, in func
return await wrapped(path1, path2, **kw)
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/dvc_objects/fs/callbacks.py", line 52, in wrapped
res = await fn(*args, **kwargs)
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/s3fs/core.py", line 1224, in _get_file
body, content_length = await _open_file(range=0)
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/s3fs/core.py", line 1215, in _open_file
resp = await self._call_s3(
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/s3fs/core.py", line 348, in _call_s3
return await _error_wrapper(
File "/srv/workplace/pchmelar/repos/dvc/venv/lib/python3.8/site-packages/s3fs/core.py", line 140, in _error_wrapper
raise err
PermissionError: The difference between the request time and the server's time is too large.
We found out that the command fails semi-deterministically after about 20 minutes of pulling.
We also found out that setting the --jobs parameter to 10 instead of 4 × cpu_count (which is the default) worked around the problem.
>>> import os
>>> os.cpu_count()
56
I did a bit of digging around and may have found the root cause of the problem:
- The dvc-data package uses s3fs, which calls the async _get_file (https://github.com/fsspec/s3fs/blob/main/s3fs/core.py#L1204C15-L1204C24) during the pull.
- _get_file is called asynchronously as many times as the --jobs parameter specifies.
- The body, content_length = await _open_file(range=0) call inside _get_file ends up waiting once 10 calls are in flight, because aiobotocore by default provides only 10 sessions (see Q: S3 get object parallel call seems not really parallel? aio-libs/aiobotocore#651).
- In this SO comment someone mentioned that this problem can sometimes be caused by asynchronous calls, mainly because the request is created much earlier than it is executed, which causes a difference between the request time and the S3 server time.
- According to the SO comment, the maximum allowed difference between request time and server time for S3 is 15 minutes.
=> That means that with a large number of jobs (hundreds), a request waiting longer than 15 minutes can trigger the "The difference between the request time and the server's time is too large." error, because by then its timestamp differs from the server time by more than 15 minutes.
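To make the timing argument concrete, here is a toy asyncio model (my own illustration, not how aiobotocore actually signs or schedules requests): each "request" records its timestamp when it is created, but has to wait for one of 10 pool slots before it runs, so with enough queued requests the tail waits past S3's 15-minute skew limit. Note that it deliberately takes roughly 17 minutes of wall-clock time to run, mirroring the observed failure.

```python
import asyncio
import time

POOL = asyncio.Semaphore(10)   # stand-in for aiobotocore's default pool of 10 connections
SKEW_LIMIT = 15 * 60           # S3 rejects requests whose timestamp is >15 min off server time
DOWNLOAD_TIME = 10             # pretend each file takes 10 seconds to download

async def fake_request(i: int) -> float:
    created_at = time.monotonic()   # the "request time" is fixed when the call is built
    async with POOL:                # ...but the request may sit here waiting for a free slot
        waited = time.monotonic() - created_at
        if waited > SKEW_LIMIT:
            raise PermissionError(
                "The difference between the request time and the server's time is too large."
            )
        await asyncio.sleep(DOWNLOAD_TIME)
        return waited

async def main() -> None:
    # 1000 jobs / 10 slots * 10 s ≈ 16.5 minutes of waiting for the last requests,
    # so the tail of the queue exceeds the 15-minute limit and gets rejected.
    results = await asyncio.gather(
        *(fake_request(i) for i in range(1000)), return_exceptions=True
    )
    failures = sum(isinstance(r, PermissionError) for r in results)
    print(f"{failures} of {len(results)} requests would have been rejected")

asyncio.run(main())
```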
Reproduction
I was not able to reproduce the issue by simulating the download process, so I needed to pull an actual file from S3.
The following code creates an S3 bucket, uploads a file, and tries to copy it from S3 using dvc-data in 1000 threads.
If some of the 1000 threads do not finish within 15 minutes, the request will probably fail. In our case it failed after 17 minutes.
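The reproduction script itself is not included above, so here is a rough sketch of what it might look like, reconstructed from the traceback (reproduce_pull.py calling dvc_objects.fs.generic.copy). The bucket name, object key, file size, and the exact copy signature (in particular the batch_size keyword) are assumptions and may differ between dvc-objects/dvc-s3 versions.

```python
# Rough reconstruction of reproduce_pull.py; names and the generic.copy
# signature are assumptions based on the traceback above.
import os

import boto3
from dvc_objects.fs.generic import copy
from dvc_objects.fs.local import LocalFileSystem
from dvc_s3 import S3FileSystem

BUCKET = "dvc-pull-repro"   # hypothetical bucket name
KEY = "data/blob.bin"       # hypothetical object key
N_JOBS = 1000               # number of concurrent transfers to schedule

# 1. Create the bucket and upload a single large file.
s3 = boto3.client("s3")
s3.create_bucket(Bucket=BUCKET)   # non us-east-1 regions need CreateBucketConfiguration
with open("blob.bin", "wb") as f:
    f.write(os.urandom(512 * 1024 * 1024))   # 512 MiB of random data
s3.upload_file("blob.bin", BUCKET, KEY)

# 2. Schedule 1000 concurrent downloads of the same object through dvc-objects.
#    All of them share aiobotocore's default pool of 10 connections, so the
#    tail of the queue can wait long enough to hit the 15-minute skew limit.
copy(
    S3FileSystem(),
    [f"{BUCKET}/{KEY}"] * N_JOBS,
    LocalFileSystem(),
    [f"downloads/copy_{i:04d}.bin" for i in range(N_JOBS)],
    batch_size=N_JOBS,
)
```

On the 56-core machine above, the real pull runs with the default 4 × cpu_count = 224 jobs and hits the same queueing behaviour at a smaller scale.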
Can you try it with the most recent DVC version? (It seems the whole stack of libraries is quite outdated.)
I don't see an option on the DVC side to alter max_pool_connections. Only the options from this config are taken into account, but those don't affect the number of actual connections made to S3. We should probably provide a way to pass additional config options (e.g. max_pool_connections). I don't immediately see a way to specify it via the AWS config file or AWS environment variables.
It's even a bit more complicated considering that a single file can probably be downloaded using multiple requests concurrently. So, the total number of connections needed can be quite large in general.
I wouldn't add any custom logic to try to predict the needed value, though. Just being able to pass max_pool_connections down should probably be enough.
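For context, at the s3fs level this knob can be set through botocore's client config; a minimal sketch (this is what would need to be plumbed through, not something DVC exposes today):

```python
from s3fs import S3FileSystem

# config_kwargs is forwarded to botocore.client.Config; max_pool_connections
# raises the default pool of 10 connections so that many concurrent
# _get_file calls don't queue long enough to hit the 15-minute skew limit.
fs = S3FileSystem(config_kwargs={"max_pool_connections": 512})
```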
@pmrowla if you have time by chance, since you touched this recently, does my logic sound reasonable to you? :)
Exposing max_pool_connections should be fine here, but I'm not sure this is still an issue, given that there have been a ton of significant changes between DVC 2.55 and the current release.