
Make Transport covered by semver #952

Open
2 tasks
algesten opened this issue Jan 27, 2025 · 6 comments
Labels
help wanted Extra attention is needed

Comments

@algesten
Owner

It currently sits under unversioned to reserve our right to break compatibility. This issue serves as a placeholder for discussing what needs to happen to move it out of unversioned.

The list of currently known things to do:

  • Get feedback on how Transport is to implement from someone outside the ureq crate. Ideally someone would try publishing a crate like ureq-mbedtls to provide an alternative TLS implementation, or ureq-uds for unix domain sockets instead of the regular built-in TcpSocket.
  • Can we fix LazyBuffers? There's an implicit contract for how to use it correctly, and it is quite complicated. It would be interesting to explore design patterns or even radically different architectural ideas for how to make the contract explicit.
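For orientation, the contract under discussion looks roughly like the stand-in sketch below. This is not the real `ureq::unversioned::transport` API (which also threads timeout values and a `Buffers` trait object); the names and signatures are simplified stand-ins over `std::net::TcpStream`, only to illustrate the stage-output/transmit/await-input flow:

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

// Simplified stand-in for ureq's LazyBuffers: one buffer of bytes received
// from the remote, one buffer of bytes staged by ureq for transmission.
struct Buffers {
    input: Vec<u8>,
    output: Vec<u8>,
}

// Simplified stand-in for the unversioned Transport trait.
trait Transport {
    fn buffers(&mut self) -> &mut Buffers;
    // Write `amount` bytes from the output buffer to the wire.
    fn transmit_output(&mut self, amount: usize) -> std::io::Result<()>;
    // Block until more input is available and append it to the input buffer.
    fn await_input(&mut self) -> std::io::Result<bool>;
    fn is_open(&mut self) -> bool;
}

struct TcpTransport {
    stream: TcpStream,
    buffers: Buffers,
}

impl Transport for TcpTransport {
    fn buffers(&mut self) -> &mut Buffers {
        &mut self.buffers
    }

    fn transmit_output(&mut self, amount: usize) -> std::io::Result<()> {
        self.stream.write_all(&self.buffers.output[..amount])?;
        self.buffers.output.drain(..amount);
        Ok(())
    }

    fn await_input(&mut self) -> std::io::Result<bool> {
        let mut tmp = [0u8; 4096];
        let n = self.stream.read(&mut tmp)?;
        self.buffers.input.extend_from_slice(&tmp[..n]);
        Ok(n > 0)
    }

    fn is_open(&mut self) -> bool {
        // A one-byte peek errors out on a broken connection without consuming data.
        self.stream.peek(&mut [0u8; 1]).is_ok()
    }
}
```

The implicit contract mentioned above lives in how the caller and transport share these buffers: the real LazyBuffers also tracks how much of the input has been consumed, which is the part that is easy to get wrong.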
@algesten algesten added the help wanted Extra attention is needed label Jan 27, 2025
@NielDuysters

Hi @algesten

I have lately worked extensively with the ureq crate (3.0.0-rc4) to implement both Transport and Resolver. As a volunteer contributor to the Tor Project (Arti), I am currently implementing a library that lets users easily use ureq to make requests over Tor. The merge request can be found here.

In the code you can see I am implementing a custom Connector to use TLS connections over Tor. The Transport implemented on line 352 works by simply using the split (read/write) stream of Arti's connection.

To retrieve input, for example, I take the buffer from ureq and fill it from the read stream of the Arti connection.

Implementing Transport was pretty straightforward once I got the hang of it. I personally wouldn't change too much about it.

I hope that by sharing my implementation I provided some extra insight on how Transport could be implemented by someone outside the ureq crate.

I am also looking forward to being notified when Transport becomes covered by semver and stable, because that would help make my own library stable as well.

@algesten
Owner Author

@NielDuysters thank you! Very nice feedback indeed.

I promise to notify you of any developments on the semver status. I'll make a point of pinging you if any breaking changes are coming along in Connector/Transport/Resolver. They will not happen in patch releases, only minor ones.

@sosthene-nitrokey
Contributor

sosthene-nitrokey commented Feb 1, 2025

We were able to migrate to the upstream ureq 3.0.3 release from our own fork of ureq 2: Nitrokey/nethsm-pkcs11#229

The transport API allowed us to:

  • Implement TCP keepalives by having our own TcpTransport. Ideally we wouldn't have to re-implement all the logic; we would just add another transport after TcpTransport that gets the TcpStream from the TcpTransport and uses it to enable keepalives. But there is also the option of having the TcpTransport with TcpKeepalive be part of a separate crate so that there is no need for everyone to re-implement it.
  • Implement a custom rustls configuration by having our own RustlsTransport
  • Implement "pool clearing" manually (AKA Feature request: ability to clear the pool #810). This is currently done through a hack in the TcpTransport, but I would still like to have a better option for that. I still haven't tried the approach of deleting and re-creating the agent to clear the pool.

One thing that does not concern us but could be impactful to some other users: there currently doesn't appear to be any way to pass per-request configuration to a transport beyond the available config flags (for example, wanting to disable keepalive for a specific request). I'm not sure what an API for that would look like.

@algesten
Owner Author

algesten commented Feb 1, 2025

> We were able to migrate to the upstream ureq 3.0.3 release from our own fork of ureq 2: Nitrokey/nethsm-pkcs11#229

Nice!

> The transport API allowed us to:
>
> * Implement TCP keepalives by having our own `TcpTransport`. Ideally we wouldn't have to re-implement all the logic and we would just be able to add another transport after `TcpTransport` that gets the `TcpStream` from the `TcpTransport` and uses it to enable keepalives. But there is also the option of having the `TcpTransport` with `TcpKeepalive` be part of a separate crate so that there is no need for everyone to re-implement it.

Feels like you should be able to wrap TcpTransport to modify its output. You'd still recreate the full connector chain though.

> * Implement a custom `rustls` configuration by having our own `RustlsTransport`

Also note that there is now another way to do that.

> * Implement "pool clearing" manually (AKA [Feature request: ability to clear the pool #810](https://github.com/algesten/ureq/issues/810)). This is currently done through a hack in the `TcpTransport`, but I would still like to have a better option for that. I still haven't tried the approach of deleting and re-creating the agent to clear the pool.

I think generally Agent should be considered lightweight enough that you should not worry about recreating it.

> One thing that does not concern us but could be impactful to some other users: there currently doesn't appear to be any way to pass per-request configuration to a transport beyond the available config flags (for example, wanting to disable keepalive for a specific request). I'm not sure what an API for that would look like.

You could maybe use the http crate's extensions API to squirrel away bespoke config.

Thanks for the feedback!

@sosthene-nitrokey
Contributor

> * Implement a custom `rustls` configuration by having our own `RustlsTransport`

> Also note that there is now another way to do that.

Thanks, I didn't see that. But what we need to modify is not the crypto_provider but the full ClientConfig, so we can override the server cert verifier.

> The transport API allowed us to:
>
> * Implement TCP keepalives by having our own `TcpTransport`. Ideally we wouldn't have to re-implement all the logic and we would just be able to add another transport after `TcpTransport` that gets the `TcpStream` from the `TcpTransport` and uses it to enable keepalives. But there is also the option of having the `TcpTransport` with `TcpKeepalive` be part of a separate crate so that there is no need for everyone to re-implement it.

> Feels like you should be able to wrap TcpTransport to modify its output. You'd still recreate the full connector chain though.

Yes, I looked into that, but currently the TcpStream in the TcpTransport is not pub, so we can't reach it.
Even making it pub would be an issue, because the socket2 library needs to own the TcpStream to be able to configure it; an &mut TcpStream is not enough. So you would have to destructure and then recreate the TcpTransport, which would require all its fields to be public. I don't think that's viable for stabilizing this API.

> * Implement "pool clearing" manually (AKA [Feature request: ability to clear the pool #810](https://github.com/algesten/ureq/issues/810)). This is currently done through a hack in the `TcpTransport`, but I would still like to have a better option for that. I still haven't tried the approach of deleting and re-creating the agent to clear the pool.

> I think generally Agent should be considered lightweight enough that you should not worry about recreating it.

I moved to doing that. It works fine, but it's harder to make it work with dependencies that expect to be given a ureq::Agent and own it for a long time. Re-creating the agent is now also the recommended approach in curl.

> You could maybe use http crate extensions API to squirrel away bespoke config.

Thanks. Keeping that in mind for the future!

@algesten
Owner Author

algesten commented Feb 3, 2025

> Even making it pub would be an issue because the socket2 library needs to own the TcpStream to be able to configure it, an &mut TcpStream is not enough. So you would have to de-structure then recreate the TcpTransport, which would require all its fields to be public. I don't think it's viable for stabilizing this API.

I think socket2 has a way to convert back and forth between std::net::TcpStream and its own variant. When I worked with UdpSocket the other day, it was as simple as .into().
