Various fixes to functional tests #783
Conversation
        self.node = remote.tempesta
        self.workdir = self.node.workdir
-       self.config_name = 'tempesta_fw.conf'
+       self.config_name = tempfile.mktemp(dir='/tmp', prefix='tempesta_fw.conf.')
I believe the location of the config file should be configurable in tests_config.ini, with the default /tmp/tempesta_fw.conf. It is good to have a single place to control the filenames of generated files.
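A minimal sketch of what the reviewer is asking for: read the generated-config location from tests_config.ini with a fallback default. The section and option names ("General", "tempesta_config") are assumptions for illustration, not the framework's actual keys.

```python
# Hypothetical sketch: config file location taken from tests_config.ini,
# falling back to /tmp/tempesta_fw.conf when the option is absent.
try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2 (the suite targets python2)

DEFAULT_CONFIG = '/tmp/tempesta_fw.conf'

def tempesta_config_path(ini_file='tests_config.ini'):
    parser = configparser.ConfigParser()
    parser.read(ini_file)  # silently ignores a missing file
    if parser.has_option('General', 'tempesta_config'):
        return parser.get('General', 'tempesta_config')
    return DEFAULT_CONFIG
```

With this in place, every component that writes a generated file asks one function for the path instead of hard-coding or mktemp-ing its own.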
@@ -30,6 +31,7 @@ def __init__(self, binary, uri=''):
        in `form_command()` function. This would allow to update options until
        client will be started. See `Wrk` class for example
        """
+       remote.connect_client()
I see that you have updated run_tests.sh to support running only specific tests. There is no need to create new connections by calling remote.connect_*() in every test; run_tests.sh seems to be a good place to set up connections to remote nodes. This also applies to the Nginx and Tempesta classes below.
I assumed you still want to launch testcase modules as stand-alone programs (OTOH, I tried that and failed; there are problems with relative imports). However, it seems that modules are re-imported afresh in each testcase, so that won't work.
I'm not sure that running test cases as standalone programs is required; running only a few test cases instead of the entire test framework is what's desired. Actually, the old variant has at least one serious drawback: you just can't interrupt test cases gracefully, so Tempesta, the backend, and the client may still be running after the test cases are interrupted. You have added the possibility to run only specific test cases via run_tests.py. I really like that.

And concerning re-importing all modules: as I said, I'm not good at Python, so there may be dirty Python code and wrong library usage. For some reason I don't hit the paramiko issues you have faced, and I see only one call of remote.create_node() per node on current master. If you say the change is needed, then OK, let's have connect() and disconnect() present in the TestCase classes. Is it possible to move connect() and disconnect() to unittest.TestCase.setUpClass() and tearDownClass()?
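For concreteness, a minimal sketch of the layout being proposed: open remote connections once per test class rather than once per test. remote.connect()/remote.disconnect() are stand-ins for the framework's helpers; here a plain flag takes their place so the sketch is self-contained.

```python
import unittest

class StressTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once, before any test method in the class.
        cls.connected = True   # would be: remote.connect()

    @classmethod
    def tearDownClass(cls):
        # Runs once, after all test methods in the class have finished.
        cls.connected = False  # would be: remote.disconnect()

    def test_connection_is_shared(self):
        # Every test in the class reuses the same connection state.
        self.assertTrue(self.connected)
```

setUpClass()/tearDownClass() exist in unittest since Python 2.7, so this works with the python2 runner the suite currently uses.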
@@ -330,7 +333,7 @@ def __init__(self, listen_port, workers=1):
        self.requests = 0

    def get_name(self):
-       return ':'.join([self.node.host, str(self.config.port)])
+       return ':'.join([self.ip, str(self.config.port)])
Good catch!
    def copy_file(self, filename, content):
        # workdir will be ignored if an absolute filename is passed
        filename = os.path.join(self.workdir, filename)
        self.mkdir(os.path.dirname(filename))
Better to create the directory only if it isn't equal to self.workdir. Otherwise self.workdir will be created far too often: e.g. some tests create 32 instances of nginx, each with its own config file. This slows the tests down a lot.
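A sketch of the suggested optimization, assuming the copy_file/mkdir names from the snippet above; the remote-node plumbing is stubbed as a parameter so the logic is testable in isolation.

```python
import os

def copy_file(node, filename, content):
    # workdir is ignored if an absolute filename is passed
    filename = os.path.join(node.workdir, filename)
    directory = os.path.dirname(filename)
    if directory != node.workdir:
        # Only create directories that differ from the always-present
        # workdir; skips a remote mkdir per config file otherwise.
        node.mkdir(directory)
    node.write(filename, content)
```

For 32 nginx instances whose configs all land directly in workdir, this removes 32 remote mkdir round-trips.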
-                   err_msg=''):
-       tf_cfg.dbg(4, "\tRun command '%s' on host %s" % (cmd, self.host))
+                   err_msg='', env={}):
+       tf_cfg.dbg(4, "\tRun command '%s' on host %s with environment %s" % (cmd, self.host, env))
This line is too long.
        # the default value of fs.nr_open
        nofile = 1048576
        resource.setrlimit(resource.RLIMIT_NOFILE, (nofile, nofile))
Why put this here and not in run_tests.sh?
As per above, I assumed you wanted to preserve the possibility of launching testcases as standalone programs.
    if not remainder:
        tests = loader.discover('.')
    else:
        tests = loader.loadTestsFromNames(remainder)
Really good change: it is much easier to run only the specified tests while keeping the same control over verbosity and interruption by ^C. And it is much better than running python2 -m unittest <testname>. Please update the tests README with the new instructions on running only specified tests.
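The dispatch being praised can be sketched as a small helper: fall back to full discovery when no test names are given on the command line. build_suite and the optional module argument are illustrative names, not the patch's actual API.

```python
import unittest

class Smoke(unittest.TestCase):
    # Stand-in test case so the sketch is self-contained.
    def test_ok(self):
        self.assertTrue(True)

def build_suite(loader, remainder, module=None):
    if not remainder:
        # No names on the command line: run everything.
        return loader.discover('.')
    # Names like 'module', 'module.Class', or 'module.Class.test_method'
    # select individual modules, classes, or tests.
    return loader.loadTestsFromNames(remainder, module)
```

Usage from a README's perspective would be along the lines of `./run_tests.py sessions.test_sticky_sess_stress` to run a single module.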
@@ -64,6 +62,7 @@ def tearDown(self):
        self.tempesta.stop()
        if self.tester:
            self.tester.close_all()
+       remote.disconnect()
I have proposed to move connect() to run_tests.sh, so disconnect() won't be needed here.
@@ -54,29 +62,29 @@ wrk = wrk
#
# ex.: siege = /usr/bin/siege (default siege)
#
-siege = /usr/bin/siege
+siege = siege
I have experienced a lot of trouble with siege. Sometimes it performs well, but in other cases it is too buggy. It is obsolete and can be removed without any problems.
@@ -79,7 +79,11 @@ def usage():

    #run tests
    loader = unittest.TestLoader()
-   tests = loader.discover('.')
+   if not remainder:
+       tests = loader.discover('.')
Not a problem of this patchset, just a remark on current problems: discover() sorts tests in a very strange way, e.g. the selftests for the test suite run somewhere in the middle of the test cases, which violates the principle of least astonishment.
Force-pushed from ea97f3b to c7a6750.
Looks good to me, but a few more errors in the functional tests must be fixed, either in this patch set or in a new one. See #789 (comment), #782 (comment).
        # wait for responses to last requests in each connection before closing it
        # and does not account for those requests.
        # So, [0; concurrent_connections] requests will be missed by the client.
        exp_max = self.wrk.requests + self.wrk.connections
The positive allowance is common across all stress test cases using wrk, so it should be documented in testers.stress, which is the base class for all such tests.
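Centralising the allowance in the base class might look like the sketch below; the helper names and attribute layout are assumptions for illustration, not testers.stress's actual API.

```python
def expected_request_range(wrk_requests, connections):
    """wrk does not wait for responses to the last request in each
    connection before closing it and does not count those requests,
    so up to `connections` extra requests may reach the server: the
    true total lies in [wrk_requests, wrk_requests + connections]."""
    return wrk_requests, wrk_requests + connections

def in_expected_range(observed, wrk_requests, connections):
    lo, hi = expected_request_range(wrk_requests, connections)
    return lo <= observed <= hi
```

Individual stress tests would then assert through the helper instead of re-deriving exp_min/exp_max and re-explaining the allowance in every file.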
        exp_min = self.wrk.requests - expected_err - 1
        exp_max = self.wrk.requests
        # Negative allowance: this means some requests are missed by the server.
        # This happens because some (at most one) requests in each connection
Re "This happens because some (at most one)": at least one per wrk thread, at most one in each connection.
        # This happens because some (at most one) requests in each connection
        # are sent without a session cookie and replied 302 by Tempesta without
        # any forwarding, which is considered a "success" by wrk.
        # So, [0; concurrent_connections] requests will be missed by the backend.
Should be [1; concurrent_connections].
        expected_err = int(tf_cfg.cfg.get('General', 'concurrent_connections'))
        exp_min = self.wrk.requests - expected_err - 1
        exp_max = self.wrk.requests
        # Negative allowance: this means some requests are missed by the server.
Not "missed by the server", but "are not forwarded to server"
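The negative allowance discussed here can be captured in one place too. A hedged sketch mirroring the bounds in the snippet above: each connection sends at least its first request without a session cookie, Tempesta answers 302 itself without forwarding, so the backend sees fewer requests than wrk counts as successful. The function name is illustrative.

```python
def backend_expected_range(wrk_requests, concurrent_connections):
    # Between 1 and `concurrent_connections` requests get a local 302
    # reply from Tempesta and are never forwarded to the backend;
    # the extra -1 matches the snippet above's existing slack.
    expected_err = concurrent_connections
    exp_min = wrk_requests - expected_err - 1
    exp_max = wrk_requests
    return exp_min, exp_max
```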
I just forgot to tick Approve in the previous review.
Each thread in wrk has its own Lua context and hence gets its own session cookie, failing the test. Found by @ikoveshnikov.
I have shuffled commits a bit (misrebase; two commits got their contents swapped). No real code changes.
(Mostly) fixes #759.
Remaining issues are: