This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

Smart handling of legacy /api/v0 endpoints #13

Closed
lidel opened this issue Feb 6, 2023 · 2 comments · Fixed by #19

lidel commented Feb 6, 2023

/api/v0 is not part of the Gateway; it is an RPC specific to Kubo, but ipfs.io exposes a subset of it for legacy reasons.

Based on @aschmahmann's research: https://www.notion.so/pl-strflt/API-v0-Distribution-9342e803ecee49619989427d62dd0f42

resolves: name/resolve, resolve, dag resolve, dns

These are the majority of requests and need to remain fast, as they are used by various tools, including ipfs-companion users who have no local node but still want to copy a CID. These are the only things we need to support inside `bifrost-gateway`, and we should have sensible caching for them.

We will require routing these to a Kubo RPC box with the accelerated DHT client and IPNS. See https://github.com/protocol/bifrost-infra/issues/2327

gets: cat, dag get, block/get

  • cat: return HTTP 302 redirect to ipfs.io/ipfs/cid
  • dag get: return HTTP 302 redirect to ipfs.io/ipfs/cid?format=dag-json (or json/cbor/dag-cbor, if an explicit codec was passed in &output-codec=)
  • block/get: return HTTP 302 redirect to ipfs.io/ipfs/cid?format=raw

everything else

Return HTTP 501 Not Implemented with a text/plain body explaining that the /api/v0 RPC is being removed from gateways, and that anyone who needs it can self-host a Kubo instance, with a link to https://docs.ipfs.tech/install/command-line/.

If there is a good reason to special-case some additional endpoints, we can. Drop a comment below.

@lidel lidel modified the milestones: M0.5: CARs, M1: Mar 31 Feb 6, 2023
@lidel lidel moved this to 📋 Backlog in bifrost-gateway Feb 6, 2023
@aschmahmann (Contributor)

The highest-usage ones, by a wide margin, are:

  • name/resolve
  • cat
  • resolve

I'm not sure how much work these necessarily generate, but if they're a lot of work we should handle them here.

  • name/resolve: whether we use nginx to route these or not, we're going to have to do this work internally anyhow. We could keep it here if we wanted 🤷
  • resolve: similar to name/resolve
  • cat: a 302 is probably fine; however, cat also has parameter flags that map basically to range requests. Given we can't 302 and make the client use headers, we'd either need to return an error or not use a 302. I don't know the numbers, though someone who's poked around Kibana more could probably get them out easily.
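For context on why those cat flags can't survive a 302: `cat`'s offset/length options correspond to a Range request header, which a redirect cannot make the client send. The mapping itself is simple, as this sketch shows (the helper name is hypothetical; note HTTP byte ranges are inclusive on both ends):

```go
package main

import "fmt"

// catRangeHeader maps Kubo cat's offset/length options onto an HTTP
// Range header value. Byte ranges are inclusive, hence the -1.
func catRangeHeader(offset, length int64) string {
	if length <= 0 {
		// No length given: read from offset to the end of the file.
		return fmt.Sprintf("bytes=%d-", offset)
	}
	return fmt.Sprintf("bytes=%d-%d", offset, offset+length-1)
}

func main() {
	fmt.Println(catRangeHeader(100, 50)) // bytes=100-149
	fmt.Println(catRangeHeader(0, 0))    // bytes=0-
}
```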

Is the reason you didn't suggest 302-ing dag/export due to potential behavior changes?


Aside: I'm curious what people are calling version for. We can certainly omit it, although it'd be nice if we had a way for people to learn more about the gateway itself. Perhaps instead of redirecting gateway.ipfs.io to ipfs.tech we could just use it as a landing page for the gateway.


hacdias commented Feb 8, 2023

I opened #19 to address this. Also, I added dag/export -> ?format=car.


@aschmahmann regarding version, I agree with you. Perhaps we could redirect /api/v0/version to / with some explanation and a place to file issues, or even redirect to this repository.
