Test suite hardening #209
Conversation
One of my concerns regarding other platforms is the need for
This is almost complete; the remaining bits from the old setup that I need to go over are now tiny:
The only part for which I don't have a clear plan is ensuring that no port collisions can occur. I will likely add a new test helper for ports but keep today's best-effort, poor man's pseudo-random port selection. Then I'll document the helpers a bit and see how to further improve the readability of test reports. See the XXX comments in the test suite to get an idea of where polish is needed.
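For reference, a best-effort port helper could be sketched along these lines. The function name, port range, and `netstat` probe are illustrative assumptions, not the suite's actual implementation:

```shell
# Hypothetical free-port helper; name, range and probe are assumptions,
# not the code from this pull request.
pick_port() {
    # Seed a pseudo-random port in the unprivileged range from the PID.
    port=$(( ($$ % 20000) + 20000 ))
    # Best effort: skip ports that already appear bound. If netstat is
    # unavailable the loop simply exits on the first candidate.
    while netstat -an 2>/dev/null | grep -q "[.:]$port "; do
        port=$((port + 1))
    done
    echo "$port"
}
```

Note that a helper like this only narrows the race window, it cannot eliminate it: another process may still grab the port between the probe and the bind.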
I've only had a quick look for now, but this is looking really good! I think the following is the only thing needed to fix `make dist`; with that I'd be happy to merge.
This is embarrassing... However, before fixing this, can you have a look at the XXX comments I added to the test suite? At least those present in test cases:
Since this work is incomplete, I may push the rest directly if you agree.
Rebased against master, but not ready to merge because the test cases introduced since this pull request was opened aren't migrated yet. When hitch fails, we are currently blind in the test report. For that I added e4a115e to dump any
OpenBSD doesn't have
@klenan, thanks for letting us know, I can take care of it and switch to
Instead of having all tests repeat the same boilerplate, move the requirements into one place and generalize the handling of PID files inside the temp directory.
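A rough sketch of what that shared setup could look like; the function and variable names here are assumptions for illustration, not the actual helpers from this pull request:

```shell
# Illustrative only: names are assumptions, not the real test helpers.
TESTDIR=$(mktemp -d)
trap 'rm -rf "$TESTDIR"' EXIT
cd "$TESTDIR"

# Keeping PID files inside the temp directory generalizes cleanup:
# everything a test creates disappears with a single rm -rf on exit.
pidfile() {
    echo "$TESTDIR/$1.pid"
}

echo "test artifacts live in $TESTDIR"
```

The appeal of this shape is that a test only needs to source one file to get a clean working directory and uniform teardown.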
It takes care of feeding them newlines and of checking that options like -alpn are supported. Start using it in the NPN and ALPN test cases.

About this trick:

! grep ERROR $DUMPFILE

Instead of using run_cmd we can grep for errors, but a plain grep fails when there are no errors. With `set -e` enforced in tests 15 and 16, the lack of errors would then fail the whole test. So instead the exit status must be non-zero (thanks to the ! prefix), and when errors do occur they are printed thanks to the lack of a -q option.
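A minimal, self-contained illustration of that trick (the `$DUMPFILE` path here is just a stand-in):

```shell
# Under `set -e`, a plain `grep ERROR "$DUMPFILE"` would abort the test
# precisely when everything is fine, since grep exits non-zero on no match.
set -e
DUMPFILE=$(mktemp)
echo "all quiet on this run" >"$DUMPFILE"

# Inverting with ! makes "no ERROR lines" the passing case, and the
# absence of -q means any matching lines are printed when the check fails.
! grep ERROR "$DUMPFILE"
echo "dump is clean"
rm -f "$DUMPFILE"
```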
Instead provide a reusable test helper.
Unless explicitly specified.
Now that tests cd in their own temp directory, we can use that to make things a lot more readable.
Well, almost...
Test 19 is now half its former size because both the frontend and global variants are tested at once.
Whatever happened to test 26, test 27 highlights reload problems with regard to relative file names.
And keep these functions as part of the new test setup.
The $(...) notation is specified by POSIX and should be preferred.
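One practical reason to prefer it: $(...) nests without escaping, unlike legacy backticks. A generic illustration, not taken from the test suite:

```shell
# $(...) nests cleanly; the backtick form would need \` escaping
# for the inner substitution.
here=$(basename "$(pwd)")
echo "current directory: $here"
```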
Since --daemon implies --quiet, drop the latter from `start_hitch`.
(force-pushed from b3b1298 to 9f80c01)
Rebased against current master, with the addition of 8d796e5 for the tests that were added after the pull request was started. The previous test failures are gone, but test 27 shows relative-path problems. I believe we can merge this into master and keep the remaining enhancements for later. One more time, please pay attention to the XXX comments throughout the test suite.
src/tests/test08-test-configs.sh
# XXX: unclear check
# this will fail on platforms that have OpenSSL compiled without SSLv3
openssl s_client -connect $LISTENADDR:$LISTENPORT -tls1_2
test $? -eq 0 || die "Connecting using TLS 1.2 failed."
# XXX: find how to detect the lack of SSLv3
One naive way would be to check for the -ssl3 flag missing (note how -no_ssl3 is still there):

$ openssl s_client -help 2>&1 | grep -Fq -- -ssl3
We don't want to run s_client in a sub-shell (referring to a previous comment), and -ssl3 wasn't used in the original test, so I'd need to build openssl and find out myself. Otherwise the new check is as comprehensive as the old one. I should probably document how to use the test helpers; I was at least planning to...
But we could do something like this:
openssl s_client -help 2>&1 |
grep -q -e -ssl3 ||
skip "no ssl3 support"
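Putting that probe together in a runnable form; `skip()` here is a hypothetical stand-in for the suite's real helper, and the exit code 77 is only an assumption borrowed from automake's convention for skipped tests:

```shell
# skip() is a hypothetical stand-in for the suite's actual helper.
skip() {
    echo "SKIP: $*" >&2
    exit 77    # automake convention for skipped tests (an assumption here)
}

# Probe the installed s_client for -ssl3 before relying on it; on
# OpenSSL builds without SSLv3 the flag is absent from the help output.
if openssl s_client -help 2>&1 | grep -q -e -ssl3; then
    have_ssl3=yes
else
    have_ssl3=no
fi
echo "ssl3 support: $have_ssl3"
```

In a real test case the `else` branch would call `skip "no ssl3 support"` instead of recording the result.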
Same as here:
Ready to be merged IMO, and I will retire Cheers!
No complaints from me - great work. Whatever issues(?) remain, it's regardless a big net improvement over the current testing setup, so please go ahead and merge.
Instead of merging I decided to take a quick stab at

After upgrading to Fedora 27 I have one test failure and one test error (consistently). Off the top of my head, one possibly related change is that Fedora is now building
In order to implement the
@klenan, please let us know how things pan out on OpenBSD with the latest patch. Courtesy of my recent OS upgrade ;)
I think the test errors you are seeing are due to breakage in the backend refresh bits, hopefully addressed by @dmatetelki in PR #224 |
@klenan, I'm getting increasing pressure from the team to merge. Once you have time to test this, please either comment here if it works fine on OpenBSD or open a new issue if it doesn't. Thanks for the reviews!
This is work in progress following #207. I already pushed some improvements, but I feel that these changes should run on more than just my system.
Don't hesitate to start reviewing it sooner rather than later, even though it is far from complete. This should give a precise idea of the direction I'm taking, and of how writing test cases becomes overall more readable and less error-prone, if it proves to be a good direction.