t0030-mount.sh: 2 known breakages vanished #945
Conversation
I think this is the same as #926 -- though as I mentioned in the diff there, it looks like one of these is still broken on OS X. Oddly, it seems that the new OS X Travis tests pass them, though... I wonder why they fail consistently on my machine. Perhaps someone else with OS X could try? (cc @whyrusleeping) cc @chriscool
I ran sharness three times on my OS X 10.10.2 with osxfuse 2.7.5 and got this:
Hrm.. not sure if my system was in a clean state. Maybe four times is the charm..?
edit: restarted - passed again.
Pretty weird. The first test fails iff … So, either … How about running …?
I mean …
@kkoroviev if you don't have OS X to test, you can set up a repo to repro the problem and have Travis VMs do it. (You have to ask Travis to enable OS X, though: docs.travis-ci.com/user/multi-os/) My observations:
The question is how it fails. Does it segfault or return 0? Still want to see …
Got myself a Hackintosh for testing.
or:
Looks like a race condition to me, I thought. So I built …
We never tried to … By the way, the …
Gonna investigate the race condition and rewrite the tests after a short period of sleep.
Interesting! Perhaps we should have a version of the sharness tests that runs it all under …
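The tail of that comment was lost when this page was captured; from the race-condition context it presumably refers to the Go race detector. As a minimal sketch only -- the package path, binary location, and how the sharness suite picks up the binary are assumptions, not taken from this PR:

```sh
# Build an ipfs binary with the Go race detector enabled (package path assumed).
go build -race -o ipfs ./cmd/ipfs

# Run the mount test with that binary first on PATH; -v is sharness's verbose flag.
PATH="$PWD:$PATH" ./t0030-mount.sh -v
```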
Those races were not the cause of this weird behavior.
I'd like one (or both) of you to check the tests again on this branch.
Not closed yet.
GitHub's smart auto-closer is so smart, yet so stupid :) Is there any way to disable it?
Besides, it's ready to be merged, if it looks good to you.
I created one helper (…).
When and if it is merged upstream, we can delete our (identical) version from …
}

test_should_contain() {
	test "$#" = 2 || error "bug in the test script: not 2 parameters to test_should_contain"
maybe "test_should_contain: requires 2 parameters"
ah, okay. that's fine then. good enough for junio, good enough for me.
Though your variant is definitely better, uniformity is even better.
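The diff excerpt above shows only the parameter-count guard of the helper. A minimal sketch of how such a helper is usually written in git/sharness style follows; the grep-based body is an assumption for illustration, not necessarily the implementation this PR added:

```sh
test_should_contain() {
	test "$#" = 2 || error "bug in the test script: not 2 parameters to test_should_contain"
	# Fail with a readable message if the pattern is not found in the file.
	if ! grep -q "$1" "$2"
	then
		echo "'$2' does not contain '$1'; it contains:"
		cat "$2"
		return 1
	fi
}
```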
Ran the tests on OS X with #958 ten times; both are fixed every time, and no stale mounts are hanging around. Awesome find @kkoroviev!!
CC @jbenet @chriscool.
OK for me, thanks @kkoroviev
@kkoroviev this looks great-- almost ready.
It is critical that we test our output formats as exactly as we can, so that if we accidentally change the output format we notice. People will build scripts on top of our outputs and they will only be able to do so as long as we do not change the outputs unless absolutely necessary. This should be a very explicit decision, and test cases help us make sure we do this carefully.
These tests are not for testing large ranges of possibilities. We want to be much more exact so that we don't screw up other people's programs by accidentally changing something.
@jbenet I agree that in general it is a very good idea to test command output very carefully, but in this case @kkoroviev's patch tests the stdout output properly; it tests that it is empty.

In the Git tests, for example, the stderr output is not checked very carefully when a command fails, because to detect failures in scripts, users are encouraged to rely on the exit code and to just pass the stderr output as is to the user. Also, the git error messages on stderr are sometimes improved to help users recover from errors, and they can be translated, so it might not be a good idea for users to rely on them. If many different failure modes can be expected and a user's script might be interested in knowing what happened, then the command should probably return different exit codes for the different failure modes.

I would be more concerned about having 3 tests (one for the exit code, one for stdout output and one for stderr output) instead of just 2 (one for the exit code and the other for the output), as it is a bit different from what is done elsewhere and it's good to be consistent.
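As a hedged illustration of the two-test split described here (one test for the exit code, one for the output), a sharness sketch might look like the following; the test titles, the command under test, and the `actual` capture file are hypothetical and not taken from t0030-mount.sh:

```sh
# Hypothetical sketch: titles and the failure scenario are illustrative only.
test_expect_success "mount fails when fuse is not available" '
	test_must_fail ipfs mount >actual
'

test_expect_success "failed mount prints nothing on stdout" '
	test_cmp /dev/null actual
'
```

Note that stderr is deliberately not asserted on here, matching the suggestion that scripts rely on the exit code and pass stderr through to the user.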
Ok-- you know better here.
I wonder if it makes sense to move that output to stdout in the general case, e.g.:

> ipfs mount
ipfs mounted at /ipfs
ipns mounted at /ipns
> ipfs mount ipfs foo/my_ipfs
> ipfs mount ipns bar/my_ipns

(the latter two subcommands may be good to add in another PR)
Agreed-- perhaps change this, @kkoroviev
If we leave everything as it is, we'll get more granular output. I don't see how we could benefit much from "consistency". This is how @chriscool checks for empty output: … This is how I do it: … It is inconsistent too. Is it bad? I don't think so.
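The specific snippets this comment points at were lost when the page was captured. Purely for illustration, and without claiming either is what a commenter actually wrote, here are two common ways a shell test can assert that a captured stdout file is empty:

```sh
# "actual" is a hypothetical file holding captured stdout.
test_cmp /dev/null actual   # diff against an empty file
test ! -s actual            # or: assert the file has zero size
```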
Consistency is very important when expecting many people working on the same codebase to modify and improve it quickly and safely. I suggest reading: … These give you a sense of the importance of consistency in large-scale engineering. I'm sure you can find more good documents out there. It's not about being rigid or inflexible-- it's about using patterns known to be good and which leverage shared mental state.
Can't see how Go relates to a discussion about shell test scripts.
I mean we shouldn't be consistent with the code written before if there is a better way to do things. I wrote my tests in Git style (1 big test for all outputs), and it got rejected. I rewrote it so it tests all three outputs (exit code, stdout, stderr) separately, and it is being rejected too.

Why would you want such a mixed style (1 test for exit code, 1 test for stdout and stderr)? If we wanna be consistent with the best practice, it should be 1 big test for all outputs, because Git does so. 3 small tests are fine too. 2 tests don't make sense to me.
@kkoroviev follow @chriscool's style. use …

The style you pointed at as @chriscool's is not actually his, it's @cryptix's -- see the full history: https://github.com/jbenet/go-ipfs/commits/master/test/sharness/t0051-object.sh

If you look over all the tests-- they're not all very consistent yet, and they can definitely be improved. We want to push towards that more and more. It's ok for code to be somewhere in between while we find exactly what works best for us, or until we get around to improving things. We're trying to guide these there.
This was labeled as "in progress", but we don't have anyone assigned 😨 Please assign someone 😃
Most of the time stderr should be empty when a command succeeds; do you want to test that? I don't think git often tests that, and I am not sure it's worth doing. Testing stderr when a command fails could be done, but as I wrote in a comment above, I am not sure it is worth being very strict about the stderr output.

Anyway, I am open to making testing even more consistent, and perhaps more strict, but this should probably be discussed in its own issue first. I was ok and I am still ok to merge this PR as it is now, even if it's not very consistent with the rest of the tests, because I know that the rest of the tests are not fully consistent either. About …

Anyway please open an issue if you want some change. Thanks both!
This looks good to me.
Ok, sounds good. Let's rebase this and push it in another branch shortly. (need to grab the latest changes).
ah, I mixed up the normal operation and the error case-- thought we were saying the output of …
I've rebased on top of latest over at #998 -- will merge there -- thanks @kkoroviev and @chriscool
Related: #341.