
t0030-mount.sh: 2 known breakages vanished #945

Closed
wants to merge 4 commits

Conversation

kkoroviev
Contributor

Related: #341.

@jbenet
Member

jbenet commented Mar 19, 2015

I think this is the same as #926 -- though as I mentioned in the diff there, it looks like one of these is still broken on osx. Oddly, it seems that the new osx travis tests pass them though... I wonder why they fail consistently on my machine. Perhaps someone else with osx could try? (cc @whyrusleeping )

cc @chriscool

@cryptix
Contributor

cryptix commented Mar 19, 2015

I ran sharness three times on my osx 10.10.2 with osxfuse 2.7.5 and got this:

./t0030-mount.sh
not ok 6 - 'ipfs mount' fails when no mount dir (issue #341) # TODO known breakage
ok 7 - 'ipfs mount' looks good when it fails (issue #341) # TODO known breakage vanished

./t0060-daemon.sh
ok 5 - ipfs daemon output looks good # TODO known breakage vanished


./t0090-get.sh
ok 17 - gzipped tar archive output is valid # TODO known breakage vanished
ok 23 - gzipped tar archive output is valid (directory) # TODO known breakage vanished

@cryptix
Contributor

cryptix commented Mar 19, 2015

Hrm.. not sure if my system was in a clean state. Maybe four times is the charm..?

./t0030-mount.sh

ok 6 - 'ipfs mount' fails when no mount dir (issue #341) # TODO known breakage vanished
ok 7 - 'ipfs mount' looks good when it fails (issue #341) # TODO known breakage vanished

edit: restarted - passed again.

@kkoroviev
Contributor Author

./t0030-mount.sh
not ok 6 - 'ipfs mount' fails when no mount dir (issue #341) # TODO known breakage
ok 7 - 'ipfs mount' looks good when it fails (issue #341) # TODO known breakage vanished

Pretty weird. The first test fails iff ipfs mount returns 0 or segfaults. The second test finds nothing in the ipfs mount output.

So, either ipfs mount returned 0 and output nothing (very unlikely), or ipfs mount segfaulted (in that case, the output should be empty too).

How about running ./t0030-mount.sh -v several (a lot of) times?
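The "returns 0 or segfaults" distinction maps onto how sharness treats exit codes in `test_must_fail`. A minimal sketch of that logic, as an illustration rather than sharness' actual source, using the usual shell convention that a signal death surfaces as an exit status of 128+N:

```shell
# must_fail: passes only for an ordinary failure (exit 1-125);
# exit 0 and signal deaths (status > 128, e.g. 139 for SIGSEGV)
# both count as test failures, matching the two failure modes
# discussed in this thread. Illustrative, not sharness' code.
must_fail() {
    "$@"
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "command succeeded: $*"
        return 1
    elif [ "$status" -gt 128 ]; then
        echo "command died by signal $((status - 128)): $*"
        return 1
    fi
    return 0
}
```

So a segfaulting `ipfs mount` would trip the first test just as surely as an erroneous exit 0 would.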

@kkoroviev
Contributor Author

I mean ./t0030-mount.sh -v.

@jbenet
Member

jbenet commented Mar 20, 2015

@kkoroviev if you don't have osx to test, you can set up a repo to repro the problem and have travis vms do it. (have to ask travis to enable osx though: docs.travis-ci.com/user/multi-os/ )

My observations:

  1. not ok 6 - 'ipfs mount' fails when no mount dir is still broken on our end and fails some % of the time
  2. ok 7 - 'ipfs mount' looks good when it fails is no longer broken.

@kkoroviev
Contributor Author

@jbenet @cryptix @chriscool

  1. not ok 6 - 'ipfs mount' fails when no mount dir is still broken on our end and fails some % of the time

The question is how it fails. Does it segfault or return 0?

Still want to see ./t0030-mount.sh -v for a failed run.

@cryptix
Contributor

cryptix commented Mar 20, 2015

[cryptix@emmBP ~/Documents/go/src/github.com/jbenet/go-ipfs/test/sharness:master] ./t0030-mount.sh -v
expecting success:
        export IPFS_PATH="$(pwd)/.go-ipfs" &&
        ipfs init -b=1024 > /dev/null

ok 1 - ipfs init succeeds

expecting success:
        mkdir mountdir ipfs ipns &&
        test_config_set Mounts.IPFS "$(pwd)/ipfs" &&
        test_config_set Mounts.IPNS "$(pwd)/ipns" &&
        test_config_set Addresses.API "$ADDR_API" &&
        test_config_set Addresses.Gateway "$ADDR_GWAY" &&
        ipfs bootstrap rm --all ||
        test_fsh cat "\"$IPFS_PATH/config\""

/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ
/ip4/104.236.151.122/tcp/4001/ipfs/QmSoLju6m7xTh3DuokvT3886QRYqxAzb1kShaanJgW36yx
/ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z
/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLpPVmHKQ4XTPdz8tjDFgdeRFkpV8JgYq8JVJ69RrZm
/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64
/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu
/ip4/162.243.248.213/tcp/4001/ipfs/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm
/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd
/ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3
ok 2 - prepare config -- mounting and bootstrap rm

expecting success:
        ipfs daemon >actual_daemon 2>daemon_err &

ok 3 - 'ipfs daemon' succeeds

expecting success:
        IPFS_PID=$! &&
        pollEndpoint -ep=/version -host=$ADDR_API -v -tout=1s -tries=60 2>poll_apierr > poll_apiout ||
        test_fsh cat actual_daemon || test_fsh cat daemon_err || test_fsh cat poll_apierr || test_fsh cat poll_apiout

ok 4 - 'ipfs daemon' is ready

expecting success:
            pollEndpoint -ep=/version -host=$ADDR_GWAY -v -tout=1s -tries=60 2>poll_gwerr > poll_gwout ||
            test_fsh cat daemon_err || test_fsh cat poll_gwerr || test_fsh cat poll_gwout

ok 5 - 'ipfs daemon' output includes Gateway address

checking known breakage:
    test_must_fail ipfs mount -f=not_ipfs -n=not_ipns >actual

test_must_fail: command succeeded: ipfs mount -f=not_ipfs -n=not_ipns
not ok 6 - 'ipfs mount' fails when no mount dir (issue #341) # TODO known breakage

checking known breakage:
    ! grep "IPFS mounted at: $(pwd)/ipfs" actual >/dev/null &&
    ! grep "IPNS mounted at: $(pwd)/ipns" actual >/dev/null ||
    test_fsh cat actual

ok 7 - 'ipfs mount' looks good when it fails (issue #341) # TODO known breakage vanished

expecting success:
        umount $(pwd)/ipfs || true &&
        umount $(pwd)/ipns || true &&
        ipfs mount >actual

umount: /Users/cryptix/Documents/go/src/github.com/jbenet/go-ipfs/test/sharness/trash: not currently mounted
umount: directory.t0030-mount.sh/ipfs: not currently mounted
umount: /Users/cryptix/Documents/go/src/github.com/jbenet/go-ipfs/test/sharness/trash: not currently mounted
umount: directory.t0030-mount.sh/ipns: not currently mounted
ok 8 - 'ipfs mount' succeeds

expecting success:
        echo "IPFS mounted at: $(pwd)/ipfs" >expected &&
        echo "IPNS mounted at: $(pwd)/ipns" >>expected &&
        test_cmp expected actual

ok 9 - 'ipfs mount' output looks good

expecting success:
    test_must_fail rmdir ipfs ipns 2>/dev/null

ok 10 - mount directories cannot be removed while active

expecting success:
        kill -0 $IPFS_PID

ok 11 - 'ipfs daemon' is still running

expecting success:
        test_kill_repeat_10_sec $IPFS_PID

ok 12 - 'ipfs daemon' can be killed

expecting success:
    rmdir ipfs ipns

ok 13 - mount directories can be removed after shutdown

# 1 known breakage(s) vanished; please update test(s)
# still have 1 known breakage(s)
# passed all remaining 11 test(s)
1..13
[cryptix@emmBP ~/Documents/go/src/github.com/jbenet/go-ipfs/test/sharness:master]

@kkoroviev
Contributor Author

Got myself a Hackintosh, for testing.

ipfs mount outputs either:

$ ipfs mount -f=not_ipfs -n=not_ipns; echo $?
IPFS mounted at: not_ipfs
IPNS mounted at: not_ipns
0

or:

$ ipfs mount -f=not_ipfs -n=not_ipns; echo $?
Error: exit status 64: mount_osxfusefs: /not_ipns: No such file or directory
1

This looked like a race condition to me, so I built ipfs with go build -race. Indeed, the first output was gone, only the second one survived, and the daemon began spitting data-race reports every time I ran ipfs mount.

@kkoroviev
Contributor Author

checking known breakage:
    ! grep "IPFS mounted at: $(pwd)/ipfs" actual >/dev/null &&
    ! grep "IPNS mounted at: $(pwd)/ipns" actual >/dev/null ||
    test_fsh cat actual

ok 7 - 'ipfs mount' looks good when it fails (issue #341) # TODO known breakage vanished

We never tried to ipfs mount -f=ipfs -n=ipns (we did try to ipfs mount -f=not_ipfs -n=not_ipns). So, why do we expect to find "IPFS mounted at: $(pwd)/ipfs" in the output?

By the way, the $(pwd)/ part is useless; ipfs mount is not going to output that.
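A stricter check along the lines kkoroviev is arguing for would assert that the failing command's stdout is empty outright, rather than grepping for strings the command never prints. A sketch in the style of the test_should_be_empty helper discussed later in this thread; the body here is illustrative, not the PR's exact code:

```shell
# test_should_be_empty: fail loudly, showing the offending content,
# if the given file is non-empty. Modeled on Git's test-lib helpers;
# the exact implementation is an assumption.
test_should_be_empty() {
    test "$#" = 1 || {
        echo "bug in the test script: not 1 parameter to test_should_be_empty"
        return 1
    }
    if test -s "$1"; then
        echo "'$1' is not empty, it contains:"
        cat "$1"
        return 1
    fi
}
```

Unlike the negated greps, this cannot be satisfied by an unrelated error message landing in the file.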

@kkoroviev
Contributor Author

ok 7 - 'ipfs mount' looks good when it fails (issue #341) is broken: it doesn't test what it should.

@kkoroviev
Contributor Author

@jbenet

Gonna investigate the race condition and rewrite the tests after a short period of sleep.

@jbenet
Member

jbenet commented Mar 22, 2015

daemon began spitting info about data races every time I ran ipfs mount.

Interesting! Perhaps we should have a version of the sharness tests that runs it all under -race. Post race output logs in issues.

@kkoroviev
Contributor Author

Those races were not the cause of this weird behavior.

@kkoroviev
Contributor Author

@jbenet @cryptix

Would like one (or both) of you to check the tests again on this branch.

@jbenet jbenet closed this in #955 Mar 22, 2015
@kkoroviev
Contributor Author

@jbenet

Not closed yet.

@jbenet jbenet reopened this Mar 23, 2015
@jbenet jbenet added the status/in-progress In progress label Mar 23, 2015
@jbenet
Member

jbenet commented Mar 23, 2015

Sorry -- github took the "this should fix #945" in #955 and automatically closed the issue.

@kkoroviev
Contributor Author

@jbenet

Sorry idk how this got closed.

GitHub's smart auto-closer is so smart, yet so stupid :)

Is there any way to disable it?

@kkoroviev
Contributor Author

Besides, ready to be merged; if it looks good to you.

@kkoroviev
Contributor Author

I created one helper (test_should_contain) and copy-pasted one (test_should_be_empty) from the Git source code. PR for the second one is pending.

@kkoroviev
Contributor Author

If and when it is merged upstream, we can delete our (identical) version from test-lib.sh.

test_should_contain() {
test "$#" = 2 || error "bug in the test script: not 2 parameters to test_should_contain"
Member

maybe "test_should_contain: requires 2 parameters"

Contributor Author

I decided to mimic original errors: examples are here and here.

Member

ah, okay. that's fine then. good enough for junio, good enough for me.

Contributor Author

Though your variant is definitely better, uniformity matters even more.
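For context, a hypothetical completion of the test_should_contain helper under review: only the parameter check is quoted from the PR's diff, and the grep-based body below is a guess at the obvious implementation, not the PR's actual code.

```shell
# test_should_contain <pattern> <file>: fail, printing the file's
# content, if the pattern is absent. The error style follows the
# Git test-lib convention discussed in the review comments above;
# the body is a hypothetical reconstruction.
test_should_contain() {
    test "$#" = 2 || {
        echo "bug in the test script: not 2 parameters to test_should_contain"
        return 1
    }
    if ! grep -q "$1" "$2"; then
        echo "'$2' does not contain '$1', it contains:"
        cat "$2"
        return 1
    fi
}
```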

@cryptix
Contributor

cryptix commented Mar 23, 2015

Ran the tests on osx with #958 - 10 times, both fixed every time and no stale mounts hanging around.

Awesome find @kkoroviev!!

@kkoroviev
Contributor Author

CC @jbenet @chriscool.

@chriscool
Contributor

Ok for me, thanks @kkoroviev

@jbenet
Member

jbenet commented Mar 24, 2015

@kkoroviev this looks great-- almost ready.

The set of all possible outputs that don't contain "IPFS mounted at:" and "IPNS mounted at:" is too broad. What if someone changes our output message to "IPFS mounted at" (without the ":"), and then someone introduces a bug that causes ipfs mount to print our output message? The test will pass. The test will pass under almost any circumstances. And it didn't help us last time.

It is critical that we test our output formats as exactly as we can, so that we notice if we accidentally change them. People will build scripts on top of our outputs, and they can only do so safely as long as we do not change the outputs unless absolutely necessary. Changing an output should be a very explicit decision, and test cases help us make sure we do this carefully.

my test is good enough to handle almost every possible situation.

These tests are not for testing large ranges of possibilities. We want to be much more exact so that we don't screw up other people's programs by accidentally changing something.
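The exact-match style argued for here is what the passing tests 8 and 9 in the log above already do: write a golden file and compare it against the captured output. A standalone sketch, with illustrative paths and content; sharness' test_cmp is typically a thin wrapper around diff:

```shell
# Golden-file comparison: write the full expected output, then diff
# it against what the command produced. Any drift in the format,
# even a missing colon, fails the test. Paths are illustrative.
printf 'IPFS mounted at: /ipfs\nIPNS mounted at: /ipns\n' >expected
printf 'IPFS mounted at: /ipfs\nIPNS mounted at: /ipns\n' >actual
diff -u expected actual && echo "output matches exactly"
```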

@chriscool
Contributor

@jbenet I agree that in general it is a very good idea to test command output very carefully, but in this case @kkoroviev's patch tests the stdout output properly; it tests that it is empty.
Also in this case the first test checks that the ipfs command fails, so testing the stderr output very carefully is not so important in my opinion.

In the Git tests, for example, the stderr output is not checked very carefully when a command fails: to detect failures in scripts, users are encouraged to rely on the exit code and to just pass the stderr output as is to the user. Also, the Git error messages on stderr are sometimes improved to help users recover from errors, and they can be translated, so it might not be a good idea for users to rely on them.

If many different failure modes can be expected and a user's script might be interested in knowing what happened, then the command should probably return different exit codes for the different failure modes.

I would be more concerned about having 3 tests (one for the exit code, one for stdout output and one for stderr output) instead of just 2 (one for the exit code and the other for the output), as it is a bit different from what is done elsewhere and it's good to be consistent.
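The distinct-exit-codes idea could look like the following from a caller's perspective. The helper name and the code values are invented for illustration; neither ipfs mount nor this PR defines such a convention.

```shell
# Dispatching on an exit code instead of parsing stderr.
# fake_mount is a hypothetical stand-in for a command like
# 'ipfs mount'; its failure codes are made up for this sketch.
fake_mount() {
    case "$1" in
        ok)          return 0 ;;  # mounted successfully
        no_mountdir) return 2 ;;  # mount directory missing
        daemon_down) return 3 ;;  # daemon not running
        *)           return 1 ;;  # generic failure
    esac
}

fake_mount no_mountdir
case $? in
    0) echo "mounted" ;;
    2) echo "mount dir missing" ;;
    3) echo "daemon not running" ;;
    *) echo "unknown failure" ;;
esac
```

A script built on this never has to match (possibly translated) stderr text to know what went wrong.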

@whyrusleeping whyrusleeping added the kind/test Testing work label Mar 28, 2015
@jbenet
Member

jbenet commented Mar 30, 2015

So testing the stderr output very carefully is not so important in my opinion.

Ok-- you know better here.

stderr

I wonder if it makes sense to move that output to stdout in the general ipfs mount case? I'd expect something like:

> ipfs mount
ipfs mounted at /ipfs
ipns mounted at /ipns
> ipfs mount ipfs foo/my_ipfs
> ipfs mount ipns bar/my_ipns

(the latter two subcommands may be good to add in another PR)

I would be more concerned about having 3 tests (one for the exit code, one for stdout output and one for stderr output) instead of just 2 (one for the exit code and the other for the output), as it is a bit different from what is done elsewhere and it's good to be consistent.

Agreed-- perhaps change this, @kkoroviev

@kkoroviev
Contributor Author

@jbenet

If we leave everything as it is, we'll get a more granular output. I don't see how we could benefit much from "consistency".

This is how @chriscool checks for empty output. This is how I do it. It is inconsistent too. Is it bad? I don't think so.

@jbenet
Member

jbenet commented Mar 30, 2015

I don't see how we could benefit much from "consistency".

Consistency is very important when expecting many people working on the same codebase to modify and improve it quickly and safely. I suggest reading:

These give you a sense of the importance of consistency in large scale engineering. I'm sure you can find more good documents out there. It's not about being rigid or inflexible-- it's about using patterns known to be good and which leverage shared mental state.

@kkoroviev
Contributor Author

@jbenet

Which one would you prefer: this or this?

@kkoroviev
Contributor Author

@jbenet

Can't see how Go relates to a discussion about shell test scripts.

@kkoroviev
Contributor Author

I mean we shouldn't be consistent with the code written before if there is a better way to do things.

I wrote my tests in Git style (1 big test for all outputs), it got rejected. I rewrote it so it tests all three outputs (exit code, stdout, stderr) separately, it is being rejected too.

Why would you want such a mixed style (1 test for exit code, 1 test for stdout and stderr)?

If we want to be consistent with best practice, it should be 1 big test for all outputs, because Git does so.

3 small tests are fine too.

2 tests don't make sense to me.

@jbenet
Member

jbenet commented Mar 30, 2015

@kkoroviev follow @chriscool's style. use git blame to see the proper history.

The style you pointed at as @chriscool's is not actually his, it's @cryptix -- see the full history: https://github.com/jbenet/go-ipfs/commits/master/test/sharness/t0051-object.sh

If you look over all the tests-- they're not all very consistent yet, and they can definitely be improved. We want to push towards that more and more. It's ok for code to be somewhere in between while we find exactly what works best for us, or we get around to improving things. We're trying to guide these there.

@dylanPowers
Member

This was labeled as "in progress", but we don't have anyone assigned 😨 Please assign someone 😃

@kkoroviev
Contributor Author

@jbenet

We redirect both stdout and stderr (to a real file, not /dev/null) only here and here. Yet my code is already inconsistent with pre-established practice. Why not change the pre-established practice?

I want to establish the practice of checking both stdout and stderr.

@chriscool
Contributor

@kkoroviev

Most of the time stderr should be empty when a command succeeds; do you want to test that? I don't think Git often tests that, and I am not sure it's worth doing.

Testing stderr when a command fails could be done, but as I wrote in a comment above, I am not sure it is worth being very strict about the stderr output.

Anyway I am open to make testing even more consistent, and perhaps more strict, but this should probably be discussed in its own issue first.

I was ok and I am still ok to merge this PR as it is now, even if it's not very consistent with the rest of the tests, because I know that the rest of the tests are not fully consistent either.

@jbenet

About ipfs mount I don't understand which output you want to move to stdout above. ipfs mount already outputs on stdout and we check that properly in test-lib.sh (https://github.com/ipfs/go-ipfs/blob/master/test/sharness/lib/test-lib.sh#L213):

        # make sure stuff is unmounted first.
        test_expect_success FUSE "'ipfs mount' succeeds" '
                umount $(pwd)/ipfs || true &&
                umount $(pwd)/ipns || true &&
                ipfs mount >actual
        '

        test_expect_success FUSE "'ipfs mount' output looks good" '
                echo "IPFS mounted at: $(pwd)/ipfs" >expected &&
                echo "IPNS mounted at: $(pwd)/ipns" >>expected &&
                test_cmp expected actual
        '

Anyway please open an issue if you want some change.

Thanks both!

@whyrusleeping
Member

This looks good to me.

@jbenet
Member

jbenet commented Apr 1, 2015

I was ok and I am still ok to merge this PR as it is now, even if it's not very consistent with the rest of the tests, because I know that the rest of the tests are not fully consistent either.

Ok, sounds good. Let's rebase this and push it in another branch shortly. (need to grab the latest changes).

About ipfs mount I don't understand which output you want to move to stdout above.

ah, I mixed up the normal operation and the error case -- thought we were saying the output of ipfs mount was going to stderr instead of stdout (it isn't; it is going to stdout correctly, as your comment describes).

@jbenet jbenet self-assigned this Apr 1, 2015
@jbenet
Member

jbenet commented Apr 1, 2015

I've rebased on top of latest over at #998 -- will merge there -- thanks @kkoroviev and @chriscool

@jbenet jbenet closed this Apr 1, 2015
@whyrusleeping whyrusleeping removed the status/in-progress In progress label Apr 1, 2015
@jbenet jbenet mentioned this pull request Apr 1, 2015