Init check fails but version does exist (migrating 9 -> 10) #299
Comments
Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review. Remember to use https://discuss.ipfs.io if you just need general support.
For context, we have updated our dependencies from these previous versions:

```json
"ipfs": "0.52.2",
"ipfs-http-gateway": "^0.3.0",
"ipfs-http-server": "0.3.1",
```

to these current versions:

```json
"ipfs": "~0.54.2",
"ipfs-http-gateway": "~0.3.2",
"ipfs-http-server": "~0.3.3"
```
Could you check the config and see if the root datastore is being sharded? I think what may be happening here is that when a repo is first initted, the basic structure - the blocks dir, version file, etc. - is created without any sharding. Sharding is applied per-datastore, so it doesn't need to be applied to the root. When the repo-migrations tool tries to read the version number it applies whatever datastore options have been passed, so if you're using the default createRepo script bundled with js-datastore-s3 the root gets sharded too and the version file can't be read.

I have updated the file in ipfs/js-datastore-s3#33 to only shard the blockstore and the pinstore, as the other datastores should be small enough that sharding isn't necessary, and the root itself should never be sharded.
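For illustration, here is a minimal hand-rolled sketch of what a next-to-last/2 shard function (the `/repo/flatfs/shard/v1/next-to-last/2` rule that appears in the config below) does to key names. This is not the datastore-core implementation, just the same idea:

```js
// Sketch of the "next-to-last/2" rule: the shard directory is the two
// characters immediately before the last character of the key name,
// padded with "_" when the name is too short.
function nextToLastShard (name, suffixLength = 2) {
  return name.slice(-(suffixLength + 1), -1).padStart(suffixLength, '_')
}

// A block key ending in "...NIVOY" lands in a "VO" shard directory:
console.log(nextToLastShard('CIQGFTQ7FSI2COUXWWLOQ45VUM2GUZCGAXLWCTOKKPGTUWPXHBNIVOY')) // 'VO'

// A plain root-level key like "version" would be rewritten too:
console.log(nextToLastShard('version')) // 'io'
// So if the root store is opened with sharding it wasn't written with (or vice
// versa), the version file is looked up under the wrong key even though it exists.
```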
Hey @achingbrain, thanks for looking into this. We are creating the repo like so:

```js
const repo = configuration.ipfsS3RepoEnabled ? createRepo({
  path: configuration.ipfsPath,
}, {
  bucket: configuration.awsBucketName,
  accessKeyId: configuration.awsAccessKeyId,
  secretAccessKey: configuration.awsSecretAccessKey,
}) : configuration.ipfsPath
```

The current config (pre-migration, of course) is this:

```json
"datastore": {
  "Spec": {
    "type": "mount",
    "mounts": [
      {
        "mountpoint": "/blocks",
        "type": "measure",
        "prefix": "flatfs.datastore",
        "child": {
          "type": "flatfs",
          "path": "blocks",
          "sync": true,
          "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2"
        }
      },
      {
        "mountpoint": "/",
        "type": "measure",
        "prefix": "leveldb.datastore",
        "child": {
          "type": "levelds",
          "path": "datastore",
          "compression": "none"
        }
      }
    ]
  }
}
```

So my guess is that this bug is affecting us.
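For completeness, a minimal sketch of how a repo created this way might be handed to js-ipfs so the 9 -> 10 migration is attempted on start. `IPFS.create` and the `repo` / `repoAutoMigrate` options are standard js-ipfs constructor options; everything else mirrors the names in the snippet above:

```js
const IPFS = require('ipfs')

// `repo` is the value computed above: either the S3-backed repo from
// createRepo(...) or a plain filesystem path.
async function startNode (repo) {
  return IPFS.create({
    repo,
    // defaults to true; set explicitly so pending repo migrations
    // (here 9 -> 10) are run automatically when the node starts
    repoAutoMigrate: true
  })
}
```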
Have you managed to perform the migration? It should be a case of using the (updated) create-s3-repo.js file from ipfs/js-datastore-s3#33.

The only thing to double-check would be which datastores are sharded - if you look at the updated file, only the blockstore and the pinstore are sharded now.
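If it helps, one way to double-check what actually got sharded is to list the keys under the repo path in the bucket and look for shard markers (typically a SHARDING file plus two-character shard prefixes). A rough sketch using aws-sdk v2, reusing the configuration field names from the snippet above:

```js
const S3 = require('aws-sdk/clients/s3')

// List everything stored under the repo path so you can see which prefixes
// (blocks, datastore, pins, root) contain shard markers.
async function listRepoKeys (configuration) {
  const s3 = new S3({
    accessKeyId: configuration.awsAccessKeyId,
    secretAccessKey: configuration.awsSecretAccessKey
  })

  let ContinuationToken
  const keys = []

  do {
    const page = await s3.listObjectsV2({
      Bucket: configuration.awsBucketName,
      Prefix: configuration.ipfsPath,
      ContinuationToken
    }).promise()

    keys.push(...page.Contents.map(obj => obj.Key))
    ContinuationToken = page.NextContinuationToken
  } while (ContinuationToken)

  return keys
}
```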
@achingbrain thanks for merging this fix. I'll be testing it within the next couple of days as we copy over the create-repo code, and will let you know how it goes.
@valmack feel free to reopen if this is still an issue in the latest release.
Hello!
I'm trying to migrate a repo from v9 to v10. It uses S3 as a backend for ipfs-repo.
Before migration attempt
The repo clearly has version and config files.
During migration attempt
The migration is failing and shows the following logs:
After migration attempt
The repo gets 2 new files which seem like they should not be there, and the version is still 9.
Could the issue be here?
js-ipfs-repo/src/version.js, lines 26 to 31 at a36e695
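For context on why the init check can fail while the file exists, here is a rough approximation of a repo version check over a datastore (not the actual lines linked above); the `Key`, `has` and `get` calls are from interface-datastore, and the details are an assumption:

```js
const { Key } = require('interface-datastore')

const versionKey = new Key('/version')

// Approximation of a repo version check: if the root store is opened with
// different sharding than it was written with, has() looks under a shard
// prefix and misses the real "version" key, so the check reports the repo
// as uninitialised even though the file exists.
async function repoVersion (store) {
  if (!(await store.has(versionKey))) {
    throw new Error('repo is not initialized')
  }

  const buf = await store.get(versionKey)
  return parseInt(new TextDecoder().decode(buf), 10)
}
```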