
Crypto/tinycrypt testcases are failing on multiple boards #10922

Closed
yerabolu opened this issue Oct 29, 2018 · 2 comments
yerabolu (Contributor) commented Oct 29, 2018

Boards: arduino2:arm, esp32:xtensa, ma:x86, emsk7d_v22:arc
Commit ID: 203948e

Log on Arduino:

console output: Performing montecarlo_signverify test:
console output: ..........
eval error trace: Traceback (most recent call last):
eval error trace: File "/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tcf.git/tcfl/tc.py", line 1826, in _decorated_fn
eval error trace: r = fn(*args, **kwargs)
eval error trace: File "/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tcf.git/tcfl/tc.py", line 3559, in _method_run
eval error trace: return self.__method_trampoline_call(fname, fn, _type, targets)
eval error trace: File "/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tcf.git/tcfl/tc.py", line 3515, in __method_trampoline_call
eval error trace: r = getattr(self, fname)(*targets)
eval error trace: File "/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tcf.git/tcfl/tc_zephyr_sanity.py", line 1167, in eval_50
eval error trace: console = target.kws.get("console", None))
eval error trace: File "/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tcf.git/tcfl/tc.py", line 1288, in expect
eval error trace: self.testcase.expecter.run()
eval error trace: File "/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tcf.git/tcfl/expecter.py", line 260, in run
eval error trace: r = f(*args)
eval error trace: File "/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tcf.git/tcfl/expecter.py", line 421, in console_rx_eval
eval error trace: { 'target': target, "console output": of })
eval error trace: error_e: (u"expected console output 'RunID: ci-181028-1008-2370:gqdx' from console 'arduino2-17:default' NOT FOUND after 60.0 s", {'console output': <open file u'/home/jenkins/workspace/zephyr-master-tcf-v0.11-branch/LABEL/verify/SHARD/2-6/ZEPHYR_GCC_VARIANT/zephyr/tmp/tcf.run-f0f2AY/zlmb/eval-buffers-00-0/console-jfsotc07arduino2-17:zlmb-default.log', mode 'a+' at 0x7f9d21922f60>, 'target': <tcfl.tc.target_c object at 0x7f9d1fff5450>})
E#1 @Local ecc_dsa: no way to determine end of output
@Local evaluation error
qc1000-01-arc.txt
esp32.txt
emsk7d_v22.txt

Please find the logs attached for esp32, qc1000-arc, and emsk7d_v22.

@yerabolu yerabolu changed the title Crypto/tinycrypt testcases are failing on multiple devices Crypto/tinycrypt testcases are failing on multiple boards Oct 29, 2018
ceolin (Member) commented Oct 30, 2018

The log only shows the Python script's backtrace. Does the script crash when the test fails, or is this a different problem?

nniranjhana commented Nov 2, 2018

It looks to me like our script waits 60 s for the 'PROJECT EXECUTION SUCCESSFUL' result; this test takes a while longer, and hence it complains of an evaluation error.
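The tcfl internals aren't shown in the traceback, but the failing step boils down to polling a console for a marker string under a deadline. A minimal sketch of that pattern (the function and parameter names here are hypothetical, not the real tcfl API):

```python
import time

def expect_console_output(read_console, marker, timeout=60.0, poll=0.5):
    """Poll a console reader until `marker` appears or `timeout` elapses.

    `read_console` is any callable returning the console text captured
    so far. Raises TimeoutError if the marker never shows up, mirroring
    the "NOT FOUND after 60.0 s" error seen in the log above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        output = read_console()
        if marker in output:
            return output
        time.sleep(poll)
    raise TimeoutError(
        "expected console output %r NOT FOUND after %.1f s" % (marker, timeout))
```

With a fixed deadline like this, any test whose output legitimately takes longer than the timeout (e.g. a slow Monte Carlo run) is reported as an evaluation error even though the device-side test passes.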

If we do a tcf console-read on the DUT, we can see the PASS after Test #4 (the Monte Carlo test, which is getting truncated in our TCF log).

Looking at any of the logs @yerabolu attached, we find:

eval error trace: error_e: (u"expected console output from console 'qc1000-01:default' NOT FOUND after 60.2 s"

with ecc_dh: no way to determine end of output, running into an evaluation error.

So the test does in fact pass, and I am closing this issue. In the future, I would suggest reading the DUT console directly and pasting those logs instead, to identify real failures.
