Ngen automated test #726

Merged

Conversation
program-- requested changes on Feb 26, 2024
I have made updates according to your suggestions.
On Mon, Feb 26, 2024 at 3:03 PM, Justin Singh-M. - NOAA wrote:

Justin Singh-M. - NOAA requested changes on this pull request.
------------------------------
In doc/AUTOMATED_TEST.md
<#726 (comment)>:
> @@ -0,0 +1,40 @@
+# Github Automated Testing
+- [What Is Automated Test and How to Start It](#what-to-do-when-a-test-fail)
+- [What Code Tests Are Performed](#what-code-tests-are-performed)
+- [Local Testing](#local-testing)
+- [What to Do When a Test Fail](#what-to-do-when-a-test-fail)
+
+## What Is Automated Test and How to Start It
+
+In **ngen** github repo, we use github actions/workflows (for an online reference, see for [example](https://docs.github.com/en/actions/learn-github-actions)) to automatically test the validity of commited codes by developers. The test is triggered when a developer pushes some codes to a branch in his **ngen** fork and creates a `Pull Request`. The successful test is marked by a `green check` mark, a failed test is marked by a `red cross` mark. If a test fails, you have to debug your codes (see [What to Do When a Test Fail](#what-to-do-when-a-test-fail) below) and commit and push to the same branch again and the automatic testing will restart in the `Pull Request`.
+
+## What Code Tests Are Performed
+
+- Unit tests: this includes every set of codes that serves a unique functionality. Unit test eveolves as new codes are added.
+- [BMI](https://bmi.readthedocs.io/en/stable/) (Basical Model Interface) based formulations tests including codes in C, C++, Fortran, and Python.
⬇️ Suggested change
-- [BMI](https://bmi.readthedocs.io/en/stable/) (Basical Model Interface) based formulations tests including codes in C, C++, Fortran, and Python.
+- [BMI](https://bmi.readthedocs.io/en/stable/) (Basic Model Interface) based formulation tests for C, C++, Fortran, and Python.
------------------------------
In doc/AUTOMATED_TEST.md
<#726 (comment)>:
> @@ -0,0 +1,40 @@
+# Github Automated Testing
+- [What Is Automated Test and How to Start It](#what-to-do-when-a-test-fail)
+- [What Code Tests Are Performed](#what-code-tests-are-performed)
+- [Local Testing](#local-testing)
+- [What to Do When a Test Fail](#what-to-do-when-a-test-fail)
+
+## What Is Automated Test and How to Start It
+
+In **ngen** github repo, we use github actions/workflows (for an online reference, see for [example](https://docs.github.com/en/actions/learn-github-actions)) to automatically test the validity of commited codes by developers. The test is triggered when a developer pushes some codes to a branch in his **ngen** fork and creates a `Pull Request`. The successful test is marked by a `green check` mark, a failed test is marked by a `red cross` mark. If a test fails, you have to debug your codes (see [What to Do When a Test Fail](#what-to-do-when-a-test-fail) below) and commit and push to the same branch again and the automatic testing will restart in the `Pull Request`.
+
+## What Code Tests Are Performed
+
+- Unit tests: this includes every set of codes that serves a unique functionality. Unit test eveolves as new codes are added.
⬇️ Suggested change
-- Unit tests: this includes every set of codes that serves a unique functionality. Unit test eveolves as new codes are added.
+- Unit tests: this includes every set of codes that serves a unique functionality. Unit test evolve as new codes are added.
------------------------------
In doc/AUTOMATED_TEST.md
<#726 (comment)>:
> @@ -0,0 +1,40 @@
+# Github Automated Testing
+- [What Is Automated Test and How to Start It](#what-to-do-when-a-test-fail)
+- [What Code Tests Are Performed](#what-code-tests-are-performed)
+- [Local Testing](#local-testing)
+- [What to Do When a Test Fail](#what-to-do-when-a-test-fail)
+
+## What Is Automated Test and How to Start It
+
+In **ngen** github repo, we use github actions/workflows (for an online reference, see for [example](https://docs.github.com/en/actions/learn-github-actions)) to automatically test the validity of commited codes by developers. The test is triggered when a developer pushes some codes to a branch in his **ngen** fork and creates a `Pull Request`. The successful test is marked by a `green check` mark, a failed test is marked by a `red cross` mark. If a test fails, you have to debug your codes (see [What to Do When a Test Fail](#what-to-do-when-a-test-fail) below) and commit and push to the same branch again and the automatic testing will restart in the `Pull Request`.
⬇️ Suggested change
-In **ngen** github repo, we use github actions/workflows (for an online reference, see for [example](https://docs.github.com/en/actions/learn-github-actions)) to automatically test the validity of commited codes by developers. The test is triggered when a developer pushes some codes to a branch in his **ngen** fork and creates a `Pull Request`. The successful test is marked by a `green check` mark, a failed test is marked by a `red cross` mark. If a test fails, you have to debug your codes (see [What to Do When a Test Fail](#what-to-do-when-a-test-fail) below) and commit and push to the same branch again and the automatic testing will restart in the `Pull Request`.
+In the **ngen** github repo, we use github actions/workflows (for an online reference, see for [example](https://docs.github.com/en/actions/learn-github-actions)) to automatically test validity of committed codes by developers. The test is triggered when a developer pushes some code to a branch in their **ngen** fork and creates a `Pull Request`. The successful test is marked by a `green check` mark, a failed test is marked by a `red cross` mark. If a test fails, you have to debug your code (see [What to Do When a Test Fail](#what-to-do-when-a-test-fail) below) and commit/push to the same branch again. The automatic testing will restart in the `Pull Request`.
------------------------------
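The re-test trigger described in the quoted paragraph is just an ordinary push to the branch behind the open Pull Request. A minimal sketch, assuming a remote named `origin` and a hypothetical branch name `my-test-branch` (both placeholders, not taken from this PR):

    # stage and commit the fix; the file name and message here are placeholders
    git add doc/AUTOMATED_TEST.md
    git commit -m "address review comments"
    # pushing to the branch behind the open Pull Request restarts the automated checks
    git push origin my-test-branch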
In doc/AUTOMATED_TEST.md
<#726 (comment)>:
> +- Unit tests: this includes every set of codes that serves a unique functionality. Unit test eveolves as new codes are added.
+- [BMI](https://bmi.readthedocs.io/en/stable/) (Basical Model Interface) based formulations tests including codes in C, C++, Fortran, and Python.
+- Running **ngen** executable on example hydrofabric with various realistic modules/models, initial condition, and forcing data.
+
+## Local Testing
+
+We strongly recommend you run all the tests on your local computer before you push up the new codes to the branch in your fork. To be able to run the tests, when building the executables (for `build` with `cmake` see [here](https://github.com/stcui007/ngen/blob/ngen_automated_test/doc/BUILDS_AND_CMAKE.md)), you need to set the test option to `ON` using `cmake` option:
+
+ -DNGEN_WITH_TESTS:BOOL=ON
+
+After `build` completes, assuming your build directory is `cmake_build`, you can check that the `ngen` executable is in `cmake_build` directory, and all `unit test` executables are in `cmake_build/test` directory. To run a unit test, for example, run the following command while in the top project directory:
+
+ ./cmake_build/test/test_unit
+
+There are many other unit test executables you need to run for a complete test.
+To run a ngen test job, for example, using the data set in the `data` directory, and run the following command:
⬇️ Suggested change
-To run a ngen test job, for example, using the data set in the `data` directory, and run the following command:
+For example, to run an **ngen** test job using the data set in the `data` directory, use the following command:
------------------------------
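The local-testing commands quoted above chain together into a short sequence; a minimal sketch, assuming an out-of-source build directory named `cmake_build` in the project root and CMake 3.13+ (only `-DNGEN_WITH_TESTS:BOOL=ON` and the two run commands come from the quoted doc; the configure/build invocations are generic CMake and BUILDS_AND_CMAKE.md may list additional options for your platform):

    # configure with tests enabled, then build
    cmake -B cmake_build -S . -DNGEN_WITH_TESTS:BOOL=ON
    cmake --build cmake_build
    # run one of the unit test executables from the top project directory
    ./cmake_build/test/test_unit
    # run an example ngen job against the bundled data set
    ./cmake_build/ngen data/catchment_data.geojson '' data/nexus_data.geojson '' data/example_bmi_multi_realization_config.json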
In doc/AUTOMATED_TEST.md
<#726 (comment)>:
> + ./cmake_build/test/test_unit
+
+There are many other unit test executables you need to run for a complete test.
+To run a ngen test job, for example, using the data set in the `data` directory, and run the following command:
+
+ ./cmake_build/ngen data/catchment_data.geojson '' data/nexus_data.geojson '' data/example_bmi_multi_realization_config.json
+
+To run a multi-processors job with MPI, please see a complete description in [here](https://github.com/stcui007/ngen/blob/ngen_automated_test/doc/DISTRIBUTED_PROCESSING.md)
+
+## What to Do When a Test Fail
+
+- Before getting into debugging, first thing, we recommend that you perform all the necessary tests listed above on you local computer. If all tests are successful, then push up your codes to the intended branch in your fork.
+
+- Sometimes, some tests may fail even they have passed tests on local computer. If that happens, you have to look into details why they failed. To do that, click on the word `Details` in blue on the right for a particular test. This will open a window with detailed information for that particular test, the error information are usually near the bottom. You can scroll up and down the side bar for more information. you can also search by key workds in `Search logs` menu entry at the upper right corner.
+
+- Othertimes, if you are lucky (or unlucky depending on your perspective), the test may have failed due to time out or some unknown reasons, in that cases, you may rerun the test by placing the cursor on the test name, a cycling icon will appear and you can rerun your test by clicking on the icon. In any case, you may manually rerun any failed test by following procedure described above. But it is strongly recommended that you carefully examine the fail error first.
⬇️ Suggested change
-- Othertimes, if you are lucky (or unlucky depending on your perspective), the test may have failed due to time out or some unknown reasons, in that cases, you may rerun the test by placing the cursor on the test name, a cycling icon will appear and you can rerun your test by clicking on the icon. In any case, you may manually rerun any failed test by following procedure described above. But it is strongly recommended that you carefully examine the fail error first.
+- Otherwise, if you are lucky (or unlucky depending on your perspective), the test may have failed due to time out or some unknown reasons. In those cases, you may rerun the test by placing the cursor on the test name, a cycling icon will appear and you can rerun your test by clicking on the icon. In any case, you may manually rerun any failed test by following procedure described above. But, it is strongly recommended that you carefully examine the error first.
------------------------------
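For the MPI case mentioned in the quoted hunk, DISTRIBUTED_PROCESSING.md is the authoritative reference; the sketch below only illustrates the general shape of an `mpirun` launch, and the partition file, its position in the argument list, and the process count are assumptions:

    # heavily hedged sketch: the partition file and its placement are assumptions;
    # see doc/DISTRIBUTED_PROCESSING.md for the exact invocation and build options
    mpirun -n 4 ./cmake_build/ngen data/catchment_data.geojson '' \
        data/nexus_data.geojson '' data/example_bmi_multi_realization_config.json \
        partitions_4.json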
In doc/AUTOMATED_TEST.md
<#726 (comment)>:
> @@ -0,0 +1,40 @@
+# Github Automated Testing
+- [What Is Automated Test and How to Start It](#what-to-do-when-a-test-fail)
+- [What Code Tests Are Performed](#what-code-tests-are-performed)
+- [Local Testing](#local-testing)
+- [What to Do When a Test Fail](#what-to-do-when-a-test-fail)
+
+## What Is Automated Test and How to Start It
⬇️ Suggested change
-## What Is Automated Test and How to Start It
+## What Are Automated Tests and How to Start Them
------------------------------
In doc/AUTOMATED_TEST.md
<#726 (comment)>:
> +After `build` completes, assuming your build directory is `cmake_build`, you can check that the `ngen` executable is in `cmake_build` directory, and all `unit test` executables are in `cmake_build/test` directory. To run a unit test, for example, run the following command while in the top project directory:
+
+ ./cmake_build/test/test_unit
+
+There are many other unit test executables you need to run for a complete test.
+To run a ngen test job, for example, using the data set in the `data` directory, and run the following command:
+
+ ./cmake_build/ngen data/catchment_data.geojson '' data/nexus_data.geojson '' data/example_bmi_multi_realization_config.json
+
+To run a multi-processors job with MPI, please see a complete description in [here](https://github.com/stcui007/ngen/blob/ngen_automated_test/doc/DISTRIBUTED_PROCESSING.md)
+
+## What to Do When a Test Fail
+
+- Before getting into debugging, first thing, we recommend that you perform all the necessary tests listed above on you local computer. If all tests are successful, then push up your codes to the intended branch in your fork.
+
+- Sometimes, some tests may fail even they have passed tests on local computer. If that happens, you have to look into details why they failed. To do that, click on the word `Details` in blue on the right for a particular test. This will open a window with detailed information for that particular test, the error information are usually near the bottom. You can scroll up and down the side bar for more information. you can also search by key workds in `Search logs` menu entry at the upper right corner.
⬇️ Suggested change
-- Sometimes, some tests may fail even they have passed tests on local computer. If that happens, you have to look into details why they failed. To do that, click on the word `Details` in blue on the right for a particular test. This will open a window with detailed information for that particular test, the error information are usually near the bottom. You can scroll up and down the side bar for more information. you can also search by key workds in `Search logs` menu entry at the upper right corner.
+- Sometimes, tests may fail even when they have passed locally. If that happens, you have to look into the details of why they failed. To do that, click on the word `Details` in blue on the right for a particular test. This will open a window with detailed information for that particular test, the error information are usually near the bottom. You can scroll up and down the side bar for more information. You can also search by key words in `Search logs` menu entry at the upper right corner.
program-- approved these changes on Feb 26, 2024
This PR describes in detail how automated tests work in the ngen repo.

Additions
- doc/AUTOMATED_TEST.md