msync stalls with current master (cda4db408...) #958

Closed
r0ssar00 opened this issue Sep 12, 2012 · 5 comments

@r0ssar00

Background/history: I've been using the Launchpad Ubuntu dailies on Precise for a couple of months. The latest update introduced a regression in initrd generation, so I reverted to an older version of the initrd support for ZFS, then grabbed cda4db4 and openzfs/spl@dd87332 and built and installed both.

Since this update, every command that performs an msync on my ZFS rootfs stalls in that call and leaves the system's I/O wait ("wa" in top) maxed out. I ran apt-get update under strace and the last few lines are:

read(6, "aspell-mr (<= 0.10-1)\nProvides: "..., 65262) = 65262
read(6, " mutiple instances over DBus - s"..., 65200) = 65200
read(6, "ation on a large\n network, or a "..., 64105) = 43090
read(6, "", 21015)                      = 0
close(6)                                = 0
msync(0x7f8e4c638000, 53490348, MS_SYNC

The missing ')' after MS_SYNC is intentional: the call never returned, so strace never printed the closing parenthesis or a return value.

It seems a deadlock was introduced somewhere between the daily v0.6.0.75 (0ef0ff5) and master.

@dechamps
Contributor

You're saying that the call stalls indefinitely, right? It doesn't complete after 5 seconds or so?

Can you try reverting 2b28613 and see how it goes? Also, it would be interesting to test with my msync.c program described in #907.
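For reference, a minimal test of this kind just mmaps a file on the affected filesystem, dirties the mapping, and calls msync(MS_SYNC). The sketch below illustrates the idea; it is not the actual msync.c from #907, and the file path and mapping size are arbitrary placeholders.

/* msync_repro.c -- hypothetical sketch of the mmap + msync sequence that stalls.
 * Not the msync.c from #907; path and size are placeholders for illustration.
 * Build: gcc -o msync_repro msync_repro.c
 * Run it against a file that lives on the ZFS root filesystem.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 1 << 20;                 /* 1 MiB mapping */
    int fd = open("/var/tmp/msync_test.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

    char *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    memset(map, 0xab, len);                     /* dirty the pages */

    /* This is the call that strace shows never returning. */
    if (msync(map, len, MS_SYNC) != 0) { perror("msync"); return 1; }
    puts("msync returned");                     /* never printed when the bug hits */

    munmap(map, len);
    close(fd);
    return 0;
}

On a healthy system this returns almost immediately; with the regression reported here, the msync(..., MS_SYNC) call is where it would hang, matching the truncated line at the end of the strace output above.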

@r0ssar00
Author

Exactly. I sat there for a half hour or so waiting for it to complete with no progress.

Will do; however, it may be a few days before I can actually try this, due to school.

@behlendorf
Contributor

Yes, please try reverting 2b28613. If you could also collect the stack traces from the system when it's hung, that would be helpful. As root, just echo t > /proc/sysrq-trigger and then grab them from dmesg.

@behlendorf
Contributor

This was resolved by commit 8312c6d.

@r0ssar00
Author

After a few days of testing with current HEAD, it seems 8312c6d is the fix. At first I got hundreds of exceptions in one of the SPL threads (the log files actually exhausted disk space before I noticed), but those resolved themselves too.
