Conversation
Requires the following to be merged: …
@achingbrain I remember that go-ipfs used … Can you create a series of interop tests at http://github.com/ipfs/interop to ensure that these things are well captured?
@diasdavid aha, I didn't know about that repo. Is it run as part of CI anywhere?
I've added a test here: ipfs/interop#22 - of course, it won't pass for js-ipfs until the mfs functionality has been merged and released. I found that I asked in the …
This is a massive loss for deduping. @achingbrain see https://github.com/ipfs/js-ipfs-unixfs-engine/issues/119 and ipfs/kubo#3576
I see what you mean, but logically, if a file is split up into a bunch of chunks, each chunk is just raw data and not a file, so it probably shouldn't be marked as such. That said, it's just a file because it's a named stream of bytes stored in a directory, so it could all just be … E.g. to extend the example from the …
🤷‍♂️ Just thinking out loud. ipfs/kubo#3576 has been open for well over a year. We can't really enforce interop without replicating go's behaviour, even if it's wrong. How do you want to proceed?
@achingbrain target interop first and outline a plan for migration.
In that case we're going to have to take the hit on deduping...
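To make the deduplication point concrete, here's a rough sketch (assuming the `ipfs-unixfs` module's `marshal()` API of this era; the dag-pb envelope and CID calculation are left out): the same chunk bytes serialise differently once wrapped in a UnixFS `file` envelope, so a `file` leaf and a `raw` leaf holding identical data can never hash to the same block.

```js
// Illustration only - assumes the ipfs-unixfs marshal() API; the dag-pb
// wrapping and CID calculation are omitted for brevity.
const UnixFS = require('ipfs-unixfs')

const chunk = Buffer.from('identical chunk bytes')

// raw leaf (go with raw leaves): the block is just the chunk bytes
const rawBlock = chunk

// file leaf (this module's current behaviour): the chunk is wrapped in a
// UnixFS protobuf before being hashed
const fileBlock = new UnixFS('file', chunk).marshal()

console.log(rawBlock.equals(fileBlock)) // false - different bytes mean
// different CIDs, so blocks written by the two implementations never dedupe
```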
ipfs-inactive/js-ipfs-unixfs-engine#214 changes the algorithm used when adding files to be more consistent with the go implementation. This means some generated hashes will change. This PR updates those hashes in our test suite.
ping @achingbrain what's left to do on this one - flip the …
Force-pushed from 7744b68 to 61e5e4e
go uses `raw` unixfs nodes for leaf data whereas this module uses `file` nodes, which causes CIDs to differ between the two implementations
Force-pushed from 61e5e4e to 9d44a75
@alanshaw from what I can see the go implementation uses different importers depending on what you are doing.
Until someone decides which is correct, we should probably leave the defaults as they are.
I've added some tests and updated the readme too. The Mac Jenkins slaves are preventing the build from passing as they are out of disk space, so I'm going to merge this.
go uses `raw` unixfs nodes for leaf data whereas this module uses `file` nodes - that causes CIDs to differ between the two implementations. I've added a `rawLeafNodes` option (false by default) that, if set to true, will cause this module to also use `raw` nodes for leaf data.

It also seems that setting the `reduceSingleLeafToSelf` option to false in order to disable that optimisation was being ignored if the optimisation was possible, so I made it true by default and respected it if explicitly set to false.
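For reference, a minimal usage sketch of the new options, assuming the pull-stream `Importer` API this module exposes (the IPLD resolver argument and the shape of the results are assumptions; only the `rawLeafNodes` and `reduceSingleLeafToSelf` option names come from this PR):

```js
// Sketch only - assumes the pull-stream based Importer export of
// ipfs-unixfs-engine; apart from the rawLeafNodes and
// reduceSingleLeafToSelf option names, details are illustrative.
const Importer = require('ipfs-unixfs-engine').Importer
const pull = require('pull-stream')

pull(
  pull.values([{
    path: 'foo.txt',
    content: pull.values([Buffer.from('hello world')])
  }]),
  Importer(ipld, {               // `ipld` is an IPLD resolver instance (not shown)
    rawLeafNodes: true,          // emit `raw` nodes for leaf data, matching go
    reduceSingleLeafToSelf: true // the default; set to false to disable the optimisation
  }),
  pull.collect((err, files) => {
    if (err) throw err
    // each entry describes an imported file, including its resulting hash
    console.log(files)
  })
)
```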