
mpmc_bounded_queue is no longer in the Rust std public API #62

Closed
elinorbgr opened this issue Dec 6, 2014 · 4 comments
@elinorbgr (Contributor)

Following rust-lang/rust#19274, mpmc_bounded_queue is no longer in the libstd public API of Rust.

Thus mio no longer compiles on the latest Rust, and I don't see any immediate fix.

ayosec commented Dec 6, 2014

It would be nice if the Notify implementation were optional. I suspect most applications could use their own message-queue implementation; they only need the os::Awakener object.
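
A minimal sketch of the pattern being suggested: the application owns its own queue and uses an awakener only to wake the event loop. The `Wakeup` trait and `Notifier` type here are hypothetical illustrations, not mio's actual API, and the real awakener's method names may differ.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for an os-level awakener: something that can
// interrupt a blocked poll from another thread.
trait Wakeup: Send + Sync {
    fn wakeup(&self);
}

// An application-owned notification queue: messages go into a queue the
// application controls, and the awakener is only used to wake the event loop
// so it can drain the queue.
struct Notifier<M, W: Wakeup> {
    queue: Arc<Mutex<VecDeque<M>>>,
    awakener: Arc<W>,
}

impl<M, W: Wakeup> Notifier<M, W> {
    fn notify(&self, msg: M) {
        self.queue.lock().unwrap().push_back(msg);
        self.awakener.wakeup(); // wake the event loop to process the message
    }
}
```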

@carllerche (Member)

@vberger the easy fix would be to just vendor mpmc_bounded_queue.
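
For illustration, a simplified, mutex-based stand-in for the queue that was dropped from libstd (the original is Vyukov's lock-free bounded MPMC design). This is only a sketch of the interface a vendored copy would preserve: a fixed capacity, with `push` failing rather than blocking or growing when the queue is full.

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Simplified bounded MPMC queue; not the lock-free implementation that
// lived in libstd, just the same capacity-bounded interface.
pub struct BoundedQueue<T> {
    inner: Mutex<VecDeque<T>>,
    capacity: usize,
}

impl<T> BoundedQueue<T> {
    pub fn with_capacity(capacity: usize) -> BoundedQueue<T> {
        BoundedQueue {
            inner: Mutex::new(VecDeque::with_capacity(capacity)),
            capacity,
        }
    }

    /// Returns `false` instead of growing when the queue is at capacity.
    pub fn push(&self, value: T) -> bool {
        let mut q = self.inner.lock().unwrap();
        if q.len() == self.capacity {
            return false;
        }
        q.push_back(value);
        true
    }

    pub fn pop(&self) -> Option<T> {
        self.inner.lock().unwrap().pop_front()
    }
}
```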

@ayosec I don't follow. Why would it be nice if the notify implementation were optional? Is there overhead to having it? I assume you mean the generic? My guess is that the ergonomic issues with the API will go away with associated types & type defaults.
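
A rough illustration of the "type defaults" idea: with a default type parameter, callers that don't need a custom notify message no longer have to spell out the generic. The names here are made up for the example and are not mio's actual API.

```rust
// Hypothetical event loop with a default message type parameter.
struct EventLoop<M = ()> {
    pending: Vec<M>,
}

impl<M> EventLoop<M> {
    fn new() -> EventLoop<M> {
        EventLoop { pending: Vec::new() }
    }
}

fn main() {
    // The default parameter kicks in, so no message type needs to be named.
    let _simple: EventLoop = EventLoop::new();
    // Applications with a custom message type can still opt in explicitly.
    let _custom: EventLoop<String> = EventLoop::new();
}
```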

ayosec commented Dec 7, 2014

> I don't follow. Why would it be nice if the notify implementation were optional?

I think that the feature is really useful. My only concern is that the current implementation uses a fixed-size queue. If the server receives an unexpected spike in load, I guess new connections would have to be dropped.

@carllerche (Member)

Ah, yes, I plan to make the queue pluggable. However, IMO one should always use a bounded queue. When you use an unbounded queue, you are really just using a bounded queue with an unknown cap (the point at which you hit OOM), without being able to sanely handle reaching that limit.

Creating a queue with 2 million (2MM) entries would plausibly take less than 10MB of RAM.

But anyway, like I said, I would like the queue to be pluggable eventually.
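
A back-of-envelope check of the 2MM-entry figure, assuming roughly 4 bytes per slot; a larger message type, or per-cell bookkeeping such as the sequence counter in a lock-free queue, scales the total accordingly.

```rust
use std::mem::size_of;

fn main() {
    let capacity = 2_000_000usize;
    let per_slot = size_of::<u32>(); // assumed 4-byte payload per entry
    let bytes = capacity * per_slot;
    // 2,000,000 * 4 = 8,000,000 bytes, i.e. about 8 MB, under the 10 MB estimate.
    println!("{} entries * {} bytes = {:.1} MB",
             capacity, per_slot, bytes as f64 / 1_000_000.0);
}
```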
