Smart handling of legacy /api/v0 endpoints #13
Comments:

- The highest usage ones by a bunch are: …
- I'm not sure how much work these necessarily generate, but if they're a lot of work we should handle them here.
- Is the reason you didn't suggest 302-ing …? Aside: I'm curious what people are doing with calls to …
- I opened #19 to address this. Also, I added @aschmahmann; regarding the version, I agree with you. Perhaps we could redirect …
`/api/v0` is not part of Gateway, it is RPC specific to Kubo, but ipfs.io exposes a subset of it for legacy reasons.

Based on @aschmahmann's research: https://www.notion.so/pl-strflt/API-v0-Distribution-9342e803ecee49619989427d62dd0f42
resolves: `name/resolve`, `resolve`, `dag resolve`, `dns`

These are the majority of requests and need to remain fast, as they are used by various tools, including ipfs-companion users that have no local node but still want to copy a CID. These are the only things we need to support inside `bifrost-gateway`, and we should have sensible caching for them.

We will need to route these to a Kubo RPC box with accelerated DHT client and IPNS. See https://github.com/protocol/bifrost-infra/issues/2327
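A minimal sketch of what that routing could look like, assuming a plain `net/http` reverse proxy in front of a dedicated Kubo RPC backend; the backend address is a placeholder and caching is only noted in a comment, this is not the actual bifrost-gateway implementation:

```go
// Proxy only the resolve-style /api/v0 endpoints to a dedicated Kubo RPC
// backend; a real deployment would add short-TTL response caching keyed on
// the ?arg= parameter so repeated lookups stay cheap.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical internal Kubo box running the accelerated DHT client.
	backend, err := url.Parse("http://kubo-rpc.internal:5001")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Only the high-traffic resolve endpoints are forwarded; every other
	// /api/v0 path is left to the handlers described further below.
	for _, path := range []string{
		"/api/v0/name/resolve",
		"/api/v0/resolve",
		"/api/v0/dag/resolve",
		"/api/v0/dns",
	} {
		http.Handle(path, proxy)
	}

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```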
gets: `cat`, `dag get`, `block/get`

- `cat`: return HTTP 302 redirect to `ipfs.io/ipfs/cid`
- `dag get`: return HTTP 302 redirect to `ipfs.io/ipfs/cid?format=dag-json` (or json/cbor/dag-cbor, if an explicit one was passed in `&output-codec=`)
- `block/get`: return HTTP 302 redirect to `ipfs.io/ipfs/cid?format=raw`
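A rough sketch of that redirect mapping, assuming the CID arrives in the usual `?arg=` query parameter; the handler and helper names are illustrative, not existing bifrost-gateway code:

```go
// Map legacy /api/v0 read endpoints onto 302 redirects to the ipfs.io gateway.
package main

import (
	"log"
	"net/http"
	"net/url"
)

// redirectToGateway sends the caller to the equivalent ipfs.io gateway URL,
// optionally forcing a response format via ?format=.
func redirectToGateway(w http.ResponseWriter, r *http.Request, format string) {
	arg := r.URL.Query().Get("arg")
	if arg == "" {
		http.Error(w, "missing arg parameter", http.StatusBadRequest)
		return
	}
	// Real code would validate that arg is a CID or IPFS path before using it.
	target := "https://ipfs.io/ipfs/" + arg
	if format != "" {
		target += "?format=" + url.QueryEscape(format)
	}
	http.Redirect(w, r, target, http.StatusFound) // 302
}

func main() {
	// /api/v0/cat -> plain gateway path
	http.HandleFunc("/api/v0/cat", func(w http.ResponseWriter, r *http.Request) {
		redirectToGateway(w, r, "")
	})
	// /api/v0/dag/get -> dag-json by default, honouring an explicit output codec
	http.HandleFunc("/api/v0/dag/get", func(w http.ResponseWriter, r *http.Request) {
		format := "dag-json"
		if codec := r.URL.Query().Get("output-codec"); codec != "" {
			format = codec // e.g. json, cbor, dag-cbor
		}
		redirectToGateway(w, r, format)
	})
	// /api/v0/block/get -> raw block bytes
	http.HandleFunc("/api/v0/block/get", func(w http.ResponseWriter, r *http.Request) {
		redirectToGateway(w, r, "raw")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```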
everything else

Return HTTP 501 Not Implemented with a `text/plain` body explaining that the `/api/v0` RPC is being removed from gateways, and that anyone who needs it can self-host a Kubo instance, plus a link to https://docs.ipfs.tech/install/command-line/.

If there is a good reason to special-handle some additional endpoints, we can. Drop a comment below.
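For completeness, a small sketch of the 501 catch-all described above; the exact body text is a placeholder:

```go
// Catch-all for every remaining /api/v0 endpoint: HTTP 501 with a text/plain
// body pointing people at self-hosting Kubo.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func apiV0NotImplemented(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
	w.WriteHeader(http.StatusNotImplemented) // 501
	fmt.Fprintln(w, "The /api/v0 Kubo RPC is being removed from public gateways.")
	fmt.Fprintln(w, "If you need it, self-host a Kubo instance:")
	fmt.Fprintln(w, "https://docs.ipfs.tech/install/command-line/")
}

func main() {
	// Registered with a trailing slash so it matches every /api/v0/* path
	// that has no more specific handler.
	http.HandleFunc("/api/v0/", apiV0NotImplemented)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```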