
Add safer abstractions for buffer rings #256

Closed
wants to merge 1 commit

Conversation

SUPERCILEX (Contributor)

See tokio-rs/tokio-uring#112 (comment)

I'm leaving this in the draft state until I've fully built out my server and fuzzed it while using this API to battle test it.

pub struct BufRing {
    ring: Mmap,
    ring_entries: u16,
    entry_size: u32,

You might want to decouple this in the future. Although it's terribly complicated, you might want to reuse parts of existing buffers.

Let's say you have 1 KiB buffers, but a message was received with only 8 bytes. In theory you could immediately requeue the rest of that buffer onto the buf_ring for the next piece of network data. The hard part is obviously accounting for this, and soon you might have built an allocator internally, but it's possibly worth allowing for this use case in the future.

Contributor Author

Yeah, originally I wanted to enable full flexibility, but that made the API a pain to use for the simple case. I also forgot to mention that, for creation, I decided against allowing arbitrary per-buffer sizes: since the kernel can't pick the buffer size it wants and has to take buffers in order, you'd effectively be limited to the smallest buffer size anyway. Similar logic applies to what you described: if the data is tiny and processing will take a while, you should probably just copy it off the buffer; if the data is large, you'll only be adding a small fraction of the buffer back to the ring, which makes it of limited use when the kernel tries to fill it.

Anyway, that was the rationale. But I don't think the current architecture prevents flexibility in the future (or I could add it now). I think you could have an unsafe get method that returns a buffer struct and an unsafe add method that takes one. Then we could special-case an entry_size of 0 to skip initializing any of the buffers so you can allocate your own.
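For illustration only, a rough sketch of what that escape hatch might look like; the RawBuf type and the raw_get/raw_add names are hypothetical and not part of this PR:

/// Hypothetical handle to one slot of the ring's backing memory.
pub struct RawBuf {
    pub ptr: *mut u8,
    pub len: u32,
    pub bid: u16, // buffer ID the kernel reports back in the completion
}

impl BufRing {
    /// Take ownership of the memory behind `bid` without copying it out.
    /// Safety: `bid` must come from a completion and must not still be queued on the ring.
    pub unsafe fn raw_get(&mut self, bid: u16) -> RawBuf {
        unimplemented!("sketch only")
    }

    /// Give a caller-managed buffer back to the kernel under `buf.bid`.
    /// Safety: the memory must stay valid until a completion returns it.
    pub unsafe fn raw_add(&mut self, buf: RawBuf) {
        unimplemented!("sketch only")
    }
}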


I think that sounds generally sensible; this was just in case you hadn't thought of it.

Contributor Author

Yeah, totally fair. :)

@SUPERCILEX (Contributor Author)

Ok, I've been running this for a while and I'm pretty confident it works at this point. I also expanded the API to support back pressure by carrying the buffer token through several io_uring steps before releasing it back to the kernel (that way if a client spams requests, it'll run out of buffers and need to wait until the server is ready to accept more).

I'm not really happy with the API, but I do think it gets the job done. Thoughts on it? I can add docs if we're going forward with it.

Also in case anyone's interested, here's how I'm using this: https://github.com/SUPERCILEX/clipboard-history/blob/418b2612f8e62693e42057029df78f6fbf49de3e/server/src/reactor.rs#L248
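As a toy illustration of that back-pressure idea (hypothetical names, not the API from this PR): a token wrapping the buffer ID is carried through the request's io_uring steps and only returns the slot to the ring once the request is done, so a flood of requests simply exhausts the ring until earlier requests finish.

use std::collections::VecDeque;

/// Toy stand-in for a buffer ring: only tracks which buffer IDs are free.
struct ToyBufRing {
    free: VecDeque<u16>,
}

/// Carried through each step of a request; releasing it late is what
/// creates the back pressure.
struct BufToken {
    bid: u16,
}

impl ToyBufRing {
    fn new(entries: u16) -> Self {
        Self { free: (0..entries).collect() }
    }

    /// Claim a buffer ID, if any are left; `None` means the client must wait.
    fn claim(&mut self) -> Option<BufToken> {
        self.free.pop_front().map(|bid| BufToken { bid })
    }

    /// Return the ID once the request that used it has fully completed.
    fn release(&mut self, token: BufToken) {
        self.free.push_back(token.bid);
    }
}

fn main() {
    let mut ring = ToyBufRing::new(2);
    let first = ring.claim().unwrap();
    let _second = ring.claim().unwrap();
    assert!(ring.claim().is_none()); // ring exhausted: new requests must wait
    ring.release(first); // a request finished; its slot becomes usable again
    assert!(ring.claim().is_some());
}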

SUPERCILEX marked this pull request as ready for review on April 22, 2024 at 18:52
@SUPERCILEX (Contributor Author)

Closing as I'm probably the only one using this. The canonical implementation now lives in https://github.com/SUPERCILEX/clipboard-history/blob/9662f08b43502feafceec0c694237498bc7ea127/server/src/io_uring.rs#L167
