Replies: 2 comments
-
Yup! It's all zero-copy :) Internally a lot of our data is represented as Arrow, so it's "true" zero-copy in that we just move the pointers to the data. We also handle larger-than-memory datasets without requiring Arrow dataframes. It should happen automatically if you use our Ray runner, which will spill data to disk.
-
Thanks @jaychia, love it. I will update my blog. Still trying to figure out how to use the abfss paths in Fabric with daft.
-
Hi, DuckDB has a zero-copy integration with Arrow dataframes to handle larger-than-memory datasets. If I use `.from_arrow()`, does it also work the same way as DuckDB? Thanks