# Unable to write to flash due to "LFS_ERR_NOSPC" which is wrong #896
I managed to make it work by downgrading littlefs to v2.0.5. I will probably lose some functionality, but I don't really mind. The issue is still open, but it's not urgent anymore.
Hi @NLLK, I don't have a good answer, but I wanted to add some info that may be useful.

LittleFS is not well tested when the size of an individual file's metadata exceeds what can fit in a metadata log. I noticed you set `metadata_max`, which caps how large each metadata log can grow.

To make matters worse, this only matters when metadata compaction occurs, so a metadata-limit error can be unrelated to the file currently being written.

The good news is that this should be improving as part of some larger work, with LittleFS tracking the amount of metadata associated with files and erroring when limits are exceeded earlier rather than later. But this is work in progress.
LittleFS uses LFS_ERR_NOSPC to indicate both "out of blocks" and "out of space in a metadata log". From user feedback it's become quite clear that this is very confusing, so a separate LFS_ERR_RANGE will be added in the future to indicate "out of space in a metadata log" with a distinct error code.
Since long names are a common cause of running out of metadata space, LittleFS reports "out of space in a metadata log" as LFS_ERR_NAMETOOLONG when the failure happens during file creation. But this was a mistake and will also be changed to LFS_ERR_RANGE.
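For readers hitting this today, here is a minimal sketch of how these ambiguous codes can be interpreted until the LFS_ERR_RANGE split lands. The helper function is purely illustrative and not part of littlefs:

```c
#include "lfs.h"

// Sketch (not part of littlefs): map today's ambiguous error codes to
// hints, per the discussion above.
static const char *lfs_err_hint(int err) {
    switch (err) {
    case LFS_ERR_NOSPC:
        // -28: either no free blocks, or no space left in a metadata
        // log; a future LFS_ERR_RANGE is planned for the second case
        return "out of blocks, or out of space in a metadata log";
    case LFS_ERR_NAMETOOLONG:
        // -36: may also be a misreported metadata-log overflow when
        // the failure happened during file creation
        return "name too long (or metadata-log overflow on create)";
    default:
        return "see lfs.h for other codes";
    }
}
```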
This assert catches pcaches that have not been flushed, which makes some sense since LittleFS errored abruptly. Unfortunately, LittleFS is not able to recover from errors very well without an unmount+mount. This is another area of ongoing work...
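A minimal sketch of that workaround, assuming the caller still has its original `lfs_config` around; the wrapper name is made up:

```c
#include "lfs.h"

// Sketch of the unmount+mount recovery mentioned above: after an
// abrupt error (e.g. LFS_ERR_NOSPC at close), remount to reset
// littlefs's internal caches before retrying, so stale pcache state
// doesn't trip LFS_ASSERT(pcache->block == LFS_BLOCK_NULL).
// `lfs` and `cfg` are assumed to come from the caller's setup code.
static int lfs_remount(lfs_t *lfs, const struct lfs_config *cfg) {
    int err = lfs_unmount(lfs);
    if (err) {
        return err;
    }
    return lfs_mount(lfs, cfg);
}
```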
Is this the most recent version that works? I'd be very curious if v2.5.1 is what broke things, since it's a small changeset. I'm guessing the root cause is more likely the introduction of FCRCs (erase-state checksums) in v2.6.0. This added an additional checksum to every commit, which isn't that significant, but may be enough to push metadata over some limit or trigger a new corner case.
Thank you for the reply; it explains things well, which I appreciate. I will try the things you suggested and comment on this issue again if anything helps.
I know it's late, but I found a bug that is probably the cause of this issue: #1031. And thanks for reporting; even though it took a while to get to, the more info available, the easier it is to root-cause bugs like this.
Glad to help!
I'm using LittleFS v2.8.1. The device is a W25Q64JV. Built with the STM32 GNU toolchain, C++17. My configs are:
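(The original config listing did not survive extraction. Purely as a point of reference, here is a sketch of a plausible `lfs_config` for a W25Q64JV, given its 8 MiB capacity, 4 KiB erasable sectors, and 256-byte program pages. Every value and name below is an assumption, not the reporter's actual configuration.)

```c
#include "lfs.h"

// Assumed geometry for a Winbond W25Q64JV: 8 MiB total, 4 KiB
// sectors, 256-byte pages. All values below are illustrative guesses,
// NOT the reporter's lost config. The read/prog/erase/sync callbacks
// stand in for the user's SPI glue code.
extern int w25q_read(const struct lfs_config *c, lfs_block_t block,
                     lfs_off_t off, void *buffer, lfs_size_t size);
extern int w25q_prog(const struct lfs_config *c, lfs_block_t block,
                     lfs_off_t off, const void *buffer, lfs_size_t size);
extern int w25q_erase(const struct lfs_config *c, lfs_block_t block);
extern int w25q_sync(const struct lfs_config *c);

static const struct lfs_config cfg = {
    .read  = w25q_read,
    .prog  = w25q_prog,
    .erase = w25q_erase,
    .sync  = w25q_sync,

    .read_size      = 256,
    .prog_size      = 256,
    .block_size     = 4096,   // W25Q64JV sector size
    .block_count    = 2048,   // 8 MiB / 4 KiB
    .cache_size     = 256,
    .lookahead_size = 32,
    .block_cycles   = 500,

    // commit->end = 248 in the report below suggests a small
    // metadata_max (e.g. 256) was set; 0 would default to block_size
    // and leave far more room in each metadata log.
    .metadata_max   = 256,
};
```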
My write function could also be useful for solving the issue:
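(The reporter's write function was also lost in extraction. As a hypothetical reconstruction of the open/write/close sequence described below, under the assumption that each file holds one JSON string:)

```c
#include "lfs.h"
#include <string.h>

// Hypothetical reconstruction of the kind of write function described
// in this report (the original snippet was lost): create/truncate a
// file and write one JSON string to it. Per the report, lfs_file_open
// and lfs_file_write succeed, and -28 (LFS_ERR_NOSPC) only appears at
// lfs_file_close, when the metadata commit is attempted.
static int write_json(lfs_t *lfs, const char *path, const char *json) {
    lfs_file_t file;
    int err = lfs_file_open(lfs, &file, path,
                            LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC);
    if (err) {
        return err;
    }

    lfs_ssize_t n = lfs_file_write(lfs, &file, json, strlen(json));

    // close flushes caches and commits metadata; errors surface here
    err = lfs_file_close(lfs, &file);
    return (n < 0) ? (int)n : err;
}
```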
It mounts just fine. I can read from and write to the flash. I can write 5 files, each around 130 bytes, and read them back. I can rewrite these files as many times as I want. The files' contents are JSON strings.
But if I write another file (for example: size 125 bytes, filename length 26), it returns -28. I call lfs_file_open and then lfs_file_write (neither returns an error code), then lfs_file_close, which returns -28. The call stack is:
To be exact, lfs.c:1575 returns LFS_ERR_NOSPC while checking the condition `commit->off + dsize > commit->end`, with:

```
commit->off = 256;
commit->end = 248;
dsize = 12;
```

That is, 256 + 12 = 268 > 248, so the 12-byte commit no longer fits in the metadata log.
And for additional context, if I repeat this operation (try the same write again with the same file), it asserts at lfs.c:262, `LFS_ASSERT(pcache->block == LFS_BLOCK_NULL)`, where pcache->block = 955.
When I read the flash into a hex file, I see much more data than I expected to write, and it repeats itself regardless of whether I rewrite a file or write a new one. (A screenshot of the hex dump was attached here.)
So the main question is: how do I solve this issue?
P.S. The write operation can also return LFS_ERR_NAMETOOLONG for a filename like "/db/c_l/c_10_683642954602", which is not right since its length is only 27.