File::readdir fails for filename longer than 128 bytes #251
Comments
Any feedback on this? It is blocking us at the moment. I have not had much movement on the associated libssh2 bug report, but I was thinking that maybe in the interim we could improve the situation by fixing the `reserve` call and allocating a bigger initial buffer, as suggested in the issue.

Would a PR that implemented these two suggestions be accepted?
As pointed out in #251, we were not actually resizing these buffers before.
I think this is not yet published to crates.io? If there are now new active maintainers who can push a new crate release, may I ask that this be pushed?
If a directory contains a file with a name longer than 128 bytes, that filename is not returned when iterating over the directory contents using `readdir()`.

Example program:
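The original program is not quoted above; the following is a minimal reproduction sketch using the ssh2 crate, where the host address, credentials, and remote path are placeholder assumptions:

```rust
use std::net::TcpStream;
use std::path::Path;

use ssh2::Session;

fn main() {
    // Placeholder host and credentials -- substitute your own.
    let tcp = TcpStream::connect("127.0.0.1:22").unwrap();
    let mut sess = Session::new().unwrap();
    sess.set_tcp_stream(tcp);
    sess.handshake().unwrap();
    sess.userauth_agent("user").unwrap();

    let sftp = sess.sftp().unwrap();

    // Open the test directory and print every entry File::readdir yields.
    // readdir() returns Err once the directory is exhausted.
    let mut dir = sftp.opendir(Path::new("/tmp/test")).unwrap();
    while let Ok((path, _stat)) = dir.readdir() {
        println!("{}", path.display());
    }
}
```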
In the test directory, create a file with a name 128 characters long:
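The exact command used is not preserved; for instance, such a file could be created from Rust (run inside the test directory):

```rust
use std::fs::File;

fn main() -> std::io::Result<()> {
    // Create a file whose name is exactly 128 characters long.
    let name = "a".repeat(128);
    File::create(name)?;
    Ok(())
}
```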
Output: the file with the 128-character name is missing from the listing.
The problem appears to be the way in which the buffer is managed: create a `Vec` with capacity 128, then pass it in to libssh2's `readdir_ex`. If we get back `LIBSSH2_ERROR_BUFFER_TOO_SMALL`, double the buffer's capacity.
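A self-contained sketch of that growth strategy (simplified, with the libssh2 call elided) shows why the doubling never takes effect:

```rust
fn main() {
    // Start as readdir does: an empty Vec with capacity 128.
    let mut buf: Vec<u8> = Vec::with_capacity(128);

    // On LIBSSH2_ERROR_BUFFER_TOO_SMALL the code tries to double it:
    let cap = buf.capacity();
    buf.reserve(cap);

    // Nothing grew, so the retry hits the same error again.
    println!("len = {}, capacity = {}", buf.len(), buf.capacity());
    // Typically prints: len = 0, capacity = 128
}
```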
The problem is that `Vec::reserve` reserves enough capacity for at least `additional` more elements, and does nothing if the capacity is already sufficient. Since the actual length of the vec is still 0, `buf.reserve(cap)` is a no-op: the `Vec` already has room for `cap` more elements.

A simple solution would be:
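One way to express that fix, as a sketch of the idea rather than the exact patch that later landed, is to request growth relative to the current capacity:

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::with_capacity(128);

    // Since len() is 0, we must request the full doubled size as
    // *additional* space for reserve() to actually grow the allocation.
    let cap = buf.capacity();
    buf.reserve(cap * 2); // guarantees capacity >= len + 2 * cap

    println!("capacity = {}", buf.capacity()); // now at least 256
}
```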
Unfortunately, in the process of testing this, we found a bug in libssh2: if it returns `LIBSSH2_ERROR_BUFFER_TOO_SMALL`, it reports one less than the actual number of directory entries. See this libssh2 bug report.

Until this is fixed, perhaps we should be allocating a bigger buffer from the start.
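A sketch of that interim workaround; the 4096-byte starting capacity is an assumption, not a figure from the issue:

```rust
fn main() {
    // Start with a buffer large enough for most filenames, so the
    // LIBSSH2_ERROR_BUFFER_TOO_SMALL path (and the libssh2 miscount
    // bug) is rarely hit. The 4096 figure is an assumed default.
    let buf: Vec<u8> = Vec::with_capacity(4096);
    assert!(buf.capacity() >= 4096);
}
```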