Connection pool using client::get(uri).with_connector(...) is not respecting keep-alive #212
Comments
What version of actix-web do you use?

Just a note: each worker thread uses its own connector. Your code starts multiple threads, depending on the number of CPUs you have.
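
For testing, here is a minimal sketch (assuming actix-web 0.5's `HttpServer::threads`, as used later in this thread) of pinning the server to a single worker so every request goes through the same connector while observing pooling:

```rust
extern crate actix_web;

use actix_web::{server, App, HttpRequest};

fn hello(_req: HttpRequest) -> &'static str {
    "hello"
}

fn main() {
    // One worker thread means one ClientConnector per process, which makes
    // connection reuse easier to observe in tcpdump while debugging.
    server::new(|| App::new().resource("/", |r| r.f(hello)))
        .bind("127.0.0.1:8080")
        .unwrap()
        .threads(1)
        .run();
}
```
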
Checked master and 0.5.6; both work. I noticed:

```rust
let resp = await!(client::get(uri)
    .with_connector(state.conn.clone())
    .no_default_headers()
    .upgrade()
    .finish()?
    .send())?;
```

actix-web always closes upgraded connections.
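
In other words, dropping the `.upgrade()` call should let the connection go back into the pool. A fragment mirroring the snippet above, minus `.upgrade()` (reusing the same `uri` and `state.conn` names):

```rust
// Same request as above, without .upgrade(), so the pooled connection is not
// force-closed after the response.
let resp = await!(client::get(uri)
    .with_connector(state.conn.clone())
    .no_default_headers()
    .finish()?
    .send())?;
```
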
Actix 0.5.6. Here's my Cargo.toml:

```toml
[package]
name = "actixello-world"
version = "0.1.0"
authors = ["Jay Oster <[email protected]>"]

[dependencies]
actix = "0.5.6"
actix-web = "0.5.6"
futures-await = "0.1.1"
```

I agree, it works functionally. The bug is that there's no keep-alive. Even if I comment …
The delay between the last …
I made some changes:

```rust
extern crate actix;
extern crate actix_web;
extern crate futures; // needed for futures::prelude below

use actix::{Actor, Addr, Context, Handler, Syn, Unsync};
use actix_web::client::ClientConnectorStats;
use actix_web::http::Method;
use actix_web::{client, server, App, AsyncResponder, FutureResponse, HttpMessage, Path, State};
use futures::prelude::*;

struct AppState {
    conn: Addr<Unsync, client::ClientConnector>,
}

fn index(info: Path<(u32, String)>, state: State<AppState>) -> FutureResponse<String> {
    let uri = format!("http://127.0.0.1:8081/{}/{}/index.html", info.0, info.1);
    client::get(uri)
        .with_connector(state.conn.clone())
        .finish()
        .unwrap()
        .send()
        .from_err()
        .and_then(|resp| {
            resp.body()
                .from_err()
                .and_then(|body| Ok(std::str::from_utf8(&body)?.to_owned()))
        })
        .responder()
}

pub struct Stats;

impl Actor for Stats {
    type Context = Context<Self>;
}

impl Handler<ClientConnectorStats> for Stats {
    type Result = ();

    fn handle(&mut self, msg: ClientConnectorStats, _: &mut Self::Context) {
        println!("REUSED: {}", msg.reused);
        println!("CLOSED: {}", msg.closed);
    }
}

fn main() {
    let addr = "127.0.0.1:8080";
    println!("Listening on http://{}", addr);
    server::new(|| {
        // Each worker gets its own Stats actor and ClientConnector.
        let stats: Addr<Syn, _> = Stats.start();
        let conn: Addr<Unsync, _> = client::ClientConnector::default()
            .stats(stats.recipient())
            .start();
        App::with_state(AppState { conn: conn }).resource("/{id}/{name}/index.html", |r| {
            r.method(Method::GET).with2(index)
        })
    }).bind(addr)
        .unwrap()
        .threads(1)
        .run();
}
```

Stats doesn't show any closed connections.
What are your OS-level TCP keep-alive settings? By default, actix-web does not enforce keep-alive; it relies on OS-level TCP settings.
That's a good question. I'm on macOS, so it's just the Darwin defaults, which are basically the same as the Linux defaults.
Could you try the code I posted?
I get a lot of errors when compiling the code you provided verbatim (missing …).
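
(For anyone reproducing this: the posted snippet imports `futures::prelude`, so the earlier Cargo.toml presumably needs a plain `futures` dependency in addition to actix and actix-web. A guess at the required section:)

```toml
# Assumed Cargo.toml additions so the posted snippet compiles; the futures 0.1
# line matches the `use futures::prelude::*` import in the code above.
[dependencies]
actix = "0.5.6"
actix-web = "0.5.6"
futures = "0.1"
```
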
The client closes the connection after the keep-alive period: https://actix.rs/actix-web/actix_web/client/struct.ClientConnector.html#method.conn_keep_alive
I cannot seem to override that value...

```rust
let conn: Addr<Unsync, _> = client::ClientConnector::default()
    .conn_keep_alive(std::time::Duration::new(60, 0))
    .stats(stats.recipient())
    .start();
```

It still closes the connection after a few seconds. I'm also very confused about why using …
I found the cause of … It still closes the connection earlier than expected, regardless of the …
@fafhrd91 After some digging, I found the problem. The documentation doesn't match the code:

actix-web/src/client/connector.rs, lines 246 to 247 in ecda97a

The values are swapped! Increasing the lifetime to 75 seconds indeed keeps the socket open for 75 seconds. Setting … There is a bug after all, just not the one I was experiencing. The values for …
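
Until a fix lands, a possible workaround (a sketch only, assuming `conn_keep_alive` and `conn_lifetime` both behave as the builder methods used above) is to raise both durations, so the connection survives regardless of which field the connector actually consults:

```rust
// Workaround sketch for 0.5.x while the values are swapped internally: set both
// durations high so idle connections stay open long enough to be reused.
// Intended as a drop-in replacement for the `conn` setup in the code above.
let conn: Addr<Unsync, _> = client::ClientConnector::default()
    .conn_keep_alive(std::time::Duration::from_secs(75))
    .conn_lifetime(std::time::Duration::from_secs(75))
    .stats(stats.recipient())
    .start();
```
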
Oh! Thanks for debugging!
Fixed in master; I will release a new version later this week.
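
(Until that release, one hypothetical way to pick up the fix is to point Cargo at the master branch:)

```toml
# Hypothetical Cargo.toml snippet: track actix-web master until the fixed
# version is published on crates.io.
[dependencies]
actix-web = { git = "https://github.com/actix/actix-web" }
```
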
I hacked together a quick proxy which is very similar to the proxy example, except I'm using `ClientRequestBuilder.with_connector()` to try to utilize the built-in connection pooling. The proxy functionality works (using the basic hello-world example as an upstream on port 8081), but keep-alive is not working, as you can see from tcpdump:

I would expect the `FIN` packets not to be sent to port 8081 (timestamps 18:05:43.200008 and above) if keep-alive were working, and the sockets should be reused indefinitely. The `FIN` is also delayed from the last `ACK` packet at 18:05:42.542827 ... by about 650 ms.