Add smoke tests for AdoptOpenJDK #2024
Comments
First PR (check TLS/SSL stack) has landed: #2049.
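For context, a TLS stack check of the kind #2049 adds could be as small as listing the protocols the default SSLContext supports. This is a hedged sketch of the idea, not the actual test from the PR (the class name is hypothetical):

```java
import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Sketch of a TLS smoke check: assert that TLSv1.3 shows up among the
// protocols supported by the default SSLContext of the build under test.
public class TlsSmokeTest {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        String[] protocols =
            SSLContext.getDefault().getSupportedSSLParameters().getProtocols();
        if (!Arrays.asList(protocols).contains("TLSv1.3")) {
            throw new AssertionError("TLSv1.3 not supported: "
                    + Arrays.toString(protocols));
        }
        System.out.println("TLS protocols: " + Arrays.toString(protocols));
    }
}
```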
I believe many of these can be added into the functional/packaging or functional/build directories. Created JDK_VENDOR (adoptium/TKG#115) and JDK_VERSION enhancement (adoptium/TKG#116) issues so that we can come back and update playlists appropriately (this does not block us from adding tests that temporarily do their own checking of vendor and update versions in the interim).
Something to cover with tests: adoptium/temurin-build#2229
After a virtual deskside review of #2140 with @adamfarley, and also considering #2067, I felt it would be useful to create a table of the known specific tests, noting how one may verify each, in order to see the common patterns we can implement to cover all the known cases and better prepare for new, unknown cases. Note: this table is incomplete and will undergo several revisions (we need to verify which vendors each test case is applicable to, and add new known/needed test cases to the list, even if they are not implemented or are in a PR at present).
Given this, we see we need generic methods: findFile, findStringInFile, checkCommandLine (via ProcessBuilder or cmdLineTester), and connectToServer (via HttpsURLConnection), as sketched below. All utility methods that find version or build info can be moved to a common area for use by all tests (we have many implementations of similar utility methods scattered around, and an effort should be made to consolidate some of them).
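A minimal sketch of what such shared helpers might look like. The method names follow the comment above, but the signatures and class name are assumptions, not existing aqa-tests code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;
import java.util.stream.Stream;

public final class SmokeTestUtils {

    // Search a directory tree for a file with the given name.
    static Optional<Path> findFile(Path root, String fileName) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            return walk.filter(p -> p.getFileName().toString().equals(fileName))
                       .findFirst();
        }
    }

    // Check whether any line of a file contains the given string.
    static boolean findStringInFile(Path file, String needle) throws IOException {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.anyMatch(l -> l.contains(needle));
        }
    }

    // Run a command via ProcessBuilder and return its exit code,
    // inheriting stdout/stderr so failures are visible in the test log.
    static int checkCommandLine(String... command)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).inheritIO().start();
        return p.waitFor();
    }

    // Open an HTTPS connection to verify the TLS stack can reach a server.
    static int connectToServer(String url) throws IOException {
        javax.net.ssl.HttpsURLConnection conn =
            (javax.net.ssl.HttpsURLConnection) new java.net.URL(url).openConnection();
        try {
            return conn.getResponseCode();
        } finally {
            conn.disconnect();
        }
    }
}
```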
@aahlenst - Based on our respective PRs (#2140 and #2067), Shelley's comments above, and your comment in #2140, I'd say we've got several items to complete here. Since much of the work is already done, here's my draft of a todo list. Please review and let me know your thoughts.
Both PRs are stuck because we haven't yet decided how to organize the tests and how to select them. Shelley said a while ago that she ran some experiments, but I do not know what the outcome was. From my POV, there are multiple ways to tackle this. What we have seen so far:
Because of that, I think the approach we both took (determining in the test whether something is supported) is wrong and will lead to unmaintainable and very complicated tests. But the question is: what's the alternative?

First aspect, versions a.k.a. time: usually, test material is saved alongside the code to be tested. So if you change the code, you run the tests that you find in an adjacent directory and they all must pass. There's no "Uhm, take the test suite from a year ago and it should be green for the current code" or "What about using today's test suite for a one-year-old release? Sure they'll pass." But by using version numbers as some sort of feature flags ("must be present in 8u272 or newer but absent before"), we try exactly that. This is madness and we should stop. Even though our test material is separate, who in their right mind would run today's test suite against a build of 8u192? So we go back to "that's a test for 11, that's for 8" and call it a day. After a test has been added, it is expected that all newer builds pass. If there are material changes required while one version is in rampdown (e.g. 8u282) and the other one is in development (e.g. 8u292), we're screwed. But that's never going to happen 🤞 And if it does, there's still ProblemList.txt or a human that can ignore the failure.

Vendors: we have seen that VM props can be unreliable. Therefore, I'd argue that the vendor selection needs to be an argument set by a human. We usually know what we're building anyway. This cuts down on potential problems. Now that we know what the vendor is, how do we know what tests to run? Having a single test for everyone and letting it figure out what values to expect puts a big burden on the author (you need to check dozens of builds even if you have zero interest in a subset of products) and the tests become very complex. Just look at the number of conditions required for Shenandoah. This again is madness and we need to stop. Alternatives:
Option 1 is sufficient for tests like TLS 1.3 or Shenandoah because those boil down to "run test: yes/no". However, it does not work for tests of VM properties or the list of jmods, which vary per vendor/platform. So we're back at having long if-elseif-else blocks, or would require something like Cucumber specs. Option 2 would do away with the if-elseif-else blocks (simple tests, yay) but comes at the expense of having duplicated or triplicated tests that vary slightly. Is that so bad? Upside: maintenance of vendor-specific folders could be outsourced to the respective teams.

Last aspect: how do we combine vendor and version? Having a folder per version in a vendor's directory (like

To sum up, I'd end up with folders per vendor and simple version range expressions per test (see the sketch below). But maybe I missed some aspects or it doesn't fit with what we have at all. What I've completely ignored are platform differences, but maybe that's something we have to ultimately solve within each test, or it could be done in a similar way as the version ranges, for example through some sort of annotation per file.

As soon as we have decided what we want, we can make the necessary changes to TKG/... Factoring out common functionality (as well as using a better assertion library) can come afterwards and does not look like such a big deal to me.
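To make the "simple version range expressions per test" idea concrete, one possible shape is a small annotation that the harness evaluates before running a test. The @JdkRange annotation and its semantics are purely hypothetical; no such feature exists in TKG or aqa-tests today:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker: run the annotated test only for JDK feature versions
// inside [min, max]. A harness extension would read this and skip tests
// outside the range, keeping the test body free of version conditionals.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface JdkRange {
    int min();
    int max() default Integer.MAX_VALUE;
}

// Usage: a test inside a vendor's folder that only applies from JDK 11 onwards.
@JdkRange(min = 11)
class Tls13SmokeTest {
    // ...
}
```

The point of this shape is that the version selection lives in metadata read by the harness (or by playlist generation), so the vendor folder answers "whose test is this?" and the range answers "which versions?", with no if-elseif-else blocks inside the test itself.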
This code has been delivered into the openjdk-build repo via adoptium/temurin-build#2397
re: #2024 (comment) JDK_VENDOR support is now in TKG, and soon we will look at adding minor version support as well. In the meanwhile, the code that handles minor versions stays in the test code, but when it sits at that level it has to be repeated in each group of test code, which results in a lot of needless duplication.
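As an illustration of the duplication being described, the per-test-group helper typically looks something like this. A hedged sketch: the class/method names and the exact parsing are assumptions, not code copied from any test group:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionUtil {
    // Parse the update (minor) version out of java.runtime.version,
    // e.g. "1.8.0_282-b08" -> 282 or "11.0.10+9" -> 10. Sketch only:
    // real version strings have more variants than this regex covers.
    static int updateVersion() {
        String v = System.getProperty("java.runtime.version");
        Matcher m = Pattern.compile("_(\\d+)|\\.0\\.(\\d+)").matcher(v);
        if (m.find()) {
            return Integer.parseInt(m.group(1) != null ? m.group(1) : m.group(2));
        }
        throw new IllegalStateException("Cannot parse update version from " + v);
    }

    public static void main(String[] args) {
        System.out.println("Update version: " + updateVersion());
    }
}
```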
java.vendor.version change merged: adoptium/temurin-build#2544
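For reference, verifying that property is nearly a one-liner; a minimal sketch (the class name and the bare non-empty check are assumptions, not the merged test):

```java
// Sketch: assert that the build stamps java.vendor.version.
// The property exists since JDK 10 (JEP 322).
public class VendorVersionSmokeTest {
    public static void main(String[] args) {
        String vv = System.getProperty("java.vendor.version");
        if (vv == null || vv.isEmpty()) {
            throw new AssertionError("java.vendor.version is not set");
        }
        System.out.println("java.vendor.version = " + vv);
    }
}
```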
I will close this as completed; minor fixes for the now-incorporated smoke tests are being done via fresh issues in the openjdk-build repo.
In https://github.com/adoptopenjdk/openjdk-build, we configure a lot of things without actually testing/ensuring that they have the desired effect. Examples:
To make things worse, some of these things are version/platform dependent (for example, Shenandoah).
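As a concrete illustration of the kind of smoke test being asked for, a minimal check (an assumption about the approach, not code from the build repo) could simply ask the freshly built JVM whether a configured feature is actually present:

```java
import java.io.IOException;

// Sketch: verify that a build actually contains Shenandoah by asking the JVM
// to start with it. On builds configured without Shenandoah, the JVM fails
// with "Unrecognized VM option 'UseShenandoahGC'" and a non-zero exit code.
public class ShenandoahSmokeTest {
    public static void main(String[] args) throws IOException, InterruptedException {
        String javaBin = System.getProperty("java.home") + "/bin/java";
        Process p = new ProcessBuilder(javaBin, "-XX:+UseShenandoahGC", "-version")
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new AssertionError("Shenandoah expected but JVM refused to start");
        }
        System.out.println("Shenandoah smoke test passed");
    }
}
```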
This needs to be covered by tests to reduce breakages and to ensure the long-term maintainability of the project.