Julia tests don't pass #13
Comments
For `+`, I think we will have some problems finishing tests that do this type of combinatorial testing.
Thanks! Agreed this is an issue. Plans here: #12 (comment)
The numbers seem substantially better on a relatively recent build of Julia's master branch. I found that Julia 1.0.x doesn't even finish, at least on an earlier version of the script (though I didn't investigate why). I also ran with a much higher `--nstmts` setting.
Now that #24 is merged, the OP has been updated to provide helpful links about how to debug problems.
With the stdlibs now running, it seems the REPL tests deadlock. Perhaps this has something to do with the tasks in the REPL tests (which are notorious for deadlocking if something goes wrong). In this case the …
FWIW: here is the result of running all stdlibs (except REPL):

Julia Version 1.2.0-DEV.321
We are really getting somewhere! I am loving this collaboration.
I took the liberty of updating the table with the latest stdlib run on master.
How should we handle "commentary"? For example, it looks like all failures in … I'm still not sure why that test is shown as …
A number of tests, at least … Are there any plans for how to handle that?
Interesting, I hadn't gotten that far yet. Given that there have already been quite a few bugs related to …

Then we handle this expression in … Some subtleties:

Any thoughts? This is just spitballing; I haven't tried any of this yet.
It seems all the "kills Julia" items have been resolved. There are, however, a bunch of (smallish?) issues that cause runtime errors in the tests (the …).

Now I'm going to shift my attention to implementing breakpoints. If anyone else wants to join the fun, these are pretty easy fixes if you just follow the instructions at the top and look at the models in those recently linked PRs. (For anything that looks like a Julia bug, it's also good to report that.)

EDIT: if I were to guess blindly, tests like …
Are you still looking for help here? I've got some cycles, but I'm relatively inexperienced with this level of Julia detail. I did notice that the calls to `handle_err` in test/utils.jl should be …
We have started the 3-day release train. @KristofferC, @pfitzseb, and I are going to be quite busy getting dependent packages (Debugger.jl, LoweredCodeUtils.jl, Revise.jl, Rebugger.jl, and of course Juno itself) in shape. At the same time, a late-breaking change, c5c3ec8, is a bit scary in terms of the potential implications for this issue. If any users want to pitch in to find out, we busy developers would be most grateful! It doesn't have to be perfect, but we do want to iron out the most obvious issues before it becomes fully registered. CC some of our former heroes, @GunnarFarneback and @macd.
Wow, it looks like a lot has changed in such a short time! Just a note to others: you do need to rebuild the package after doing a `git pull` (maybe obvious, but I forgot).
Yes, it needs to be rebuilt. Not obvious. Manual management is needed only for people who have this on …

You also need to build separately for each different minor Julia release.
I wonder if the file should just be automatically generated at precompilation time if it doesn't exist, instead of having this as a build step. If you add the package on 1.1 and then add the same version on 1.0, it will not be rebuilt (packages are only built when they are downloaded) and will error when loading. The current build system doesn't work well when you need to build multiple times for different Julia versions.
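The lazy-generation idea above could be sketched roughly as follows. Note that `ensure_generated` and the file layout here are hypothetical illustrations, not JuliaInterpreter's actual mechanism; the real generator would emit version-specific interpreter code rather than a placeholder header.

```julia
# Sketch: generate the version-specific file when the package loads, instead
# of in a Pkg build step. One file per minor Julia version, so adding the
# package under 1.0 and 1.1 doesn't produce a stale build for either.
function ensure_generated(dir::AbstractString)
    path = joinpath(dir, "builtins-$(VERSION.major).$(VERSION.minor).jl")
    if !isfile(path)
        # Placeholder for the real code generator.
        open(path, "w") do io
            println(io, "# generated for Julia ", VERSION)
        end
    end
    return path
end

# Usage: call from the top of the module (or its `__init__`), then
# `include` the returned path.
path = ensure_generated(mktempdir())
```

Because the check runs every time the package is loaded (or precompiled), a missing file for the current Julia version is regenerated automatically, sidestepping the "only built on download" limitation.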
Oops, I had some broken stuff in …
Here's a status list for progress with JuliaInterpreter, running Julia's own test suite. It's organized by the number of passes, failures (ideally 0), errors (ideally 0), broken tests (these are not JuliaInterpreter's problem), and aborted blocks (tests that took too long, given the settings). Some tests error outside a `@test` (marked by "X" in the table below) and others cause Julia itself to exit (marked by ☠️).

The tests below were run on a multiprocessor server from the Linux command line with … The `--nstmts 1000000` option allows you to control the maximum number of interpreter statements per lowered block of code; tests that require more than this are marked as "aborted." The default setting is 10000 (10^4). The higher you make this number, in general the more tests should finish, but of course the longer the suite will take to run. On my laptop, running with 2 worker processes, the entire suite takes less than 5 minutes to complete using the default settings.

The remaining arguments are the same as those given to Julia's own `test/runtests.jl`: you can either provide a list of tests you want to run (e.g., `julia --startup-file=no juliatests.jl ambiguous`), or you can list some to skip (here, all the `compiler/*` tests). "Blank" indicates that one is running all the tests, so the line above runs everything except those in `compiler/*`.

The key point of having a status list is that it allows us to discover issues with JuliaInterpreter; consequently, the next step is to use these results to fix those problems. Help is very much wanted! Here are good ways to help out:
The first priority would be anything that kills Julia (EDIT: all of these appear to be fixed now). Then would be errors that occur outside of tests (the `X`s; note: with the possible exceptions of `channels`, `worlds`, and `arrayops`, it appears that most such errors are due to a single cause, MWE of char crash #28; deleting this block and rebuilding Julia fixes them), errors that occur inside a `@test` (those marked as Errors by the test suite), failures, and of lowest priority the aborted blocks. Note that aborted blocks can lead to test failures due to repeating work (see Compiled resumers #44), so many of these may go away if you increase `nstmts`. However, note that aborted blocks could indicate that the interpreter has incorrectly gotten itself stuck in an infinite loop (yes, the author has seen that happen), and as a consequence it's possible that some of these too are actually errors.

A good way to get started is to pick one test that's exhibiting problems and uncomment these lines. Then, the easiest way to dive into this is to run tests in a REPL session, e.g., … from within JuliaInterpreter's `test/` directory. If you get failures, make sure you first check whether they go away if you increase `nstmts` (typically by 10x or more).

When you see test errors, the expression printed right above them is the one causing the problem. Go into the source code and copy the relevant source lines into a `quote` block. Once you have a minimal expression `ex` that triggers a problem, do this: … where `m` is the module you want to execute it in. You may want to do … to isolate the tests from your current session.
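The elided snippet might look something like the following with JuliaInterpreter's current `Frame`/`finish_and_return!` API; the exact calls at the time of this issue may have differed, so treat this as a sketch rather than the original recipe. The module name and the contents of `ex` are stand-ins.

```julia
using JuliaInterpreter

# A fresh, anonymous module isolates the test expression from your session.
m = Module(:TestIsolated)

# Stand-in for the failing source lines you copied into a quote block.
ex = quote
    x = 1 + 1
end

# Build an interpreter frame for `ex` in module `m` and run it to completion
# (`true` marks this as top-level code).
frame = Frame(m, ex)
ret = JuliaInterpreter.finish_and_return!(frame, true)
```

If the interpreter has a bug triggered by `ex`, this is the point where it will surface, in a session uncluttered by the rest of the test suite's state.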
To diagnose problems in greater detail, uncommenting these lines can be a great first step.
Without further ado, here's the current list (note the time of the run to determine how current this is):
Julia Version 1.1.1-pre.0
Commit a84cf6f56c (2019-01-22 04:33 UTC)
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.1 (ORCJIT, haswell)
Test run at: 2019-02-26T11:36:49.456
Maximum number of statements per lowered expression: 1000000