
Various fixes to functional tests #783
Merged: 20 commits into master from is-759, Aug 9, 2017

Conversation

intelfx (Contributor) commented Aug 1, 2017:

(Mostly) fixes #759.

Remaining issues are:

  • rare sporadic failures (empty replies) in functional tests
  • rare sporadic failures in stress tests when all three components are run on the same host

intelfx requested a review from vankoven, August 1, 2017 13:49
self.node = remote.tempesta
self.workdir = self.node.workdir
- self.config_name = 'tempesta_fw.conf'
+ self.config_name = tempfile.mktemp(dir = '/tmp', prefix = 'tempesta_fw.conf.')
vankoven (Contributor):

I believe the location of the config file should be configured in tests_config.ini, with the default /tmp/tempesta_fw.conf. It is good to have a single place to control the filenames of generated files.
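
A minimal sketch of the idea, assuming a hypothetical config option under a [Tempesta] section of tests_config.ini (the section and option names are illustrative, not the suite's actual schema):

try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2, which the suite uses

# Read the location of the generated Tempesta config from tests_config.ini,
# falling back to the default proposed in this review.
parser = configparser.RawConfigParser()
parser.read('tests_config.ini')
if parser.has_option('Tempesta', 'config'):
    config_name = parser.get('Tempesta', 'config')
else:
    config_name = '/tmp/tempesta_fw.conf'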

@@ -30,6 +31,7 @@ def __init__(self, binary, uri=''):
in `form_command()` function. This would allow to update options until
client will be started. See `Wrk` class for example
"""
remote.connect_client()
vankoven (Contributor):

I see that you have updated run_tests.sh to support running only specific tests. There is no need to create new connections by calling remote.connect_*() in every test; run_tests.sh seems to be a good place to set up connections to the remote nodes.

This also applies to the Nginx and Tempesta classes below.

intelfx (Contributor, Author):

I assumed you still want to launch testcase modules as stand-alone programs (OTOH, I tried that and it failed; there are problems with relative imports).

However, it seems the modules are re-imported afresh in each testcase, so that won't work.

vankoven (Contributor) commented Aug 2, 2017:

I'm not sure that running test cases as standalone programs is required; running only a few test cases instead of the entire test framework is what is desired. Actually, the old variant has at least one serious drawback: you just can't interrupt test cases gracefully, so Tempesta, the backend, and the client may still be running after the test cases are interrupted. You have added the possibility to run only specific test cases via run_tests.py. I really like that.

And concerning re-importing all modules: as I said, I'm not an expert in Python, so there may be dirty Python code and wrong usage of Python libraries. For some reason I don't have the paramiko issues you have faced, and I see only one call of remote.create_node() per node on current master. If you say the change is needed, then OK, let's have connect() and disconnect() present in TestCase classes. Is it possible to move connect() and disconnect() to unittest.TestCase.setUpClass() and tearDownClass()?
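
A minimal sketch of that, assuming the remote.connect_client()/remote.disconnect() helpers visible elsewhere in this diff; setUpClass() and tearDownClass() run once per test class rather than once per test:

import unittest
import remote  # the suite's remote-node helpers

class FunctionalTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Runs once, before the first test of the class.
        remote.connect_client()

    @classmethod
    def tearDownClass(cls):
        # Runs once, after the last test of the class.
        remote.disconnect()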

@@ -330,7 +333,7 @@ def __init__(self, listen_port, workers=1):
self.requests = 0

def get_name(self):
- return ':'.join([self.node.host, str(self.config.port)])
+ return ':'.join([self.ip, str(self.config.port)])
vankoven (Contributor):

Good catch!

def copy_file(self, filename, content):
# workdir will be ignored if an absolute filename is passed
filename = os.path.join(self.workdir, filename)
self.mkdir(os.path.dirname(filename))
vankoven (Contributor):

Better to create the directory only if it is not equal to self.workdir. Otherwise self.workdir will be created too often, e.g. some tests create 32 instances of nginx, each with its own config file. This slows down the tests a lot.
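
A sketch of the suggested guard on top of the copy_file() shown above; only the mkdir call changes, and the actual file-transfer step is elided:

def copy_file(self, filename, content):
    # workdir will be ignored if an absolute filename is passed
    filename = os.path.join(self.workdir, filename)
    dirname = os.path.dirname(filename)
    # Create the directory only when it is not the workdir itself:
    # the workdir already exists, so this saves one mkdir round-trip
    # per generated file (e.g. 32 nginx configs in some tests).
    if dirname != self.workdir:
        self.mkdir(dirname)
    # ... transfer `content` to `filename` as before ...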

- err_msg=''):
-     tf_cfg.dbg(4, "\tRun command '%s' on host %s" % (cmd, self.host))
+ err_msg='', env={}):
+     tf_cfg.dbg(4, "\tRun command '%s' on host %s with environment %s" % (cmd, self.host, env))
vankoven (Contributor):

Line too long.


# the default value of fs.nr_open
nofile = 1048576
resource.setrlimit(resource.RLIMIT_NOFILE, (nofile, nofile))
vankoven (Contributor):

Why put it here and not in run_tests.sh?

intelfx (Contributor, Author):

As per above, I assumed you wanted to preserve the possibility of launching testcases as standalone programs.

if not remainder:
tests = loader.discover('.')
else:
tests = loader.loadTestsFromNames(remainder)
vankoven (Contributor):

Really good change: much easier to run only specified tests, with the same control over verbosity and interruption by ^C, and much better than running python2 -m unittest <testname>. Please update the tests README with the new instructions on running only specified tests.
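
For reference, loadTestsFromNames() accepts the same dotted names as python2 -m unittest, so invocations like ./run_tests.py some_module, ./run_tests.py some_module.SomeTest, or ./run_tests.py some_module.SomeTest.test_one (module and class names here are purely illustrative) would run a whole module, a single class, or a single test respectively.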

@@ -64,6 +62,7 @@ def tearDown(self):
self.tempesta.stop()
if self.tester:
self.tester.close_all()
remote.disconnect()
vankoven (Contributor):

I have proposed moving connect() to run_tests.sh, so disconnect() won't be needed here.

@@ -54,29 +62,29 @@ wrk = wrk
#
# ex.: siege = /usr/bin/siege (default siege)
#
- siege = /usr/bin/siege
+ siege = siege
vankoven (Contributor):

I have experienced a lot of trouble with siege. Sometimes it performs well, but in other cases it is too buggy. It is obsolete and can be removed without any problems.

@@ -79,7 +79,11 @@ def usage():

#run tests
loader = unittest.TestLoader()
- tests = loader.discover('.')
+ if not remainder:
+     tests = loader.discover('.')
vankoven (Contributor):

Not a problem of this patchset, just a remark on current problems: discover() sorts tests in a very strange way, e.g. the selftests for the test suite are run somewhere in the middle of the test cases.
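
If the ordering ever needs fixing, one possible approach (a sketch; the 'selftests' package name is an assumption) is to flatten and re-sort the discovered suite before running it:

import unittest

def flatten(suite):
    # Yield individual test cases from an arbitrarily nested TestSuite.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test in flatten(item):
                yield test
        else:
            yield item

def selftests_first(suite):
    # Stable sort: tests from the (assumed) 'selftests' package keep their
    # relative order but move to the front of the run.
    tests = list(flatten(suite))
    tests.sort(key=lambda t: not t.__class__.__module__.startswith('selftests'))
    return unittest.TestSuite(tests)

# Usage: tests = selftests_first(loader.discover('.'))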

intelfx force-pushed the is-759 branch 4 times, most recently from ea97f3b to c7a6750, August 4, 2017 13:29
vankoven (Contributor) left a review:

Looks good to me, but a few more errors in the functional tests must be fixed, in this patch set or in a new one. See #789 (comment), #782 (comment).

# wait for responses to last requests in each connection before closing it
# and does not account for those requests.
# So, [0; concurrent_connections] requests will be missed by the client.
exp_max = self.wrk.requests + self.wrk.connections
vankoven (Contributor):

The positive allowance is common across all stress test cases using wrk, so it should be documented in testers.stress, which is the base class for all such tests.
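
A sketch of how that could be documented in the base class (the method name is an assumption; self.wrk.requests and self.wrk.connections are taken from the code above):

def assert_client_requests(self, received):
    """Check the request count seen by Tempesta against wrk's count.

    Positive allowance: wrk does not wait for responses to the last
    request in each connection before closing it and does not account
    for those requests, so [0; concurrent_connections] requests may be
    missed by the client.
    """
    exp_min = self.wrk.requests
    exp_max = self.wrk.requests + self.wrk.connections
    self.assertTrue(exp_min <= received <= exp_max)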

exp_min = self.wrk.requests - expected_err - 1
exp_max = self.wrk.requests
# Negative allowance: this means some requests are missed by the server.
# This happens because some (at most one) requests in each connection
vankoven (Contributor):

"This happens because some (at most one)"

At least one per wrk thread, at most one per connection.

# This happens because some (at most one) requests in each connection
# are sent without a session cookie and replied 302 by Tempesta without
# any forwarding, which is considered a "success" by wrk.
# So, [0; concurrent_connections] requests will be missed by the backend.
vankoven (Contributor):

[1; concurrent_connections]

expected_err = int(tf_cfg.cfg.get('General', 'concurrent_connections'))
exp_min = self.wrk.requests - expected_err - 1
exp_max = self.wrk.requests
# Negative allowance: this means some requests are missed by the server.
vankoven (Contributor):

Not "missed by the server", but "are not forwarded to server"

vankoven (Contributor) left a review:

I just forgot to tick Approve in the previous review.

intelfx (Contributor, Author) commented Aug 8, 2017:

I have shuffled the commits a bit (a misrebase: two commits had their contents swapped). No real code changes.

intelfx merged commit 85ac463 into master, Aug 9, 2017
intelfx deleted the is-759 branch, August 9, 2017 13:33
Merging this pull request may close: Failures in functional tests (#759)
Participants: intelfx, vankoven