Merge pull request #259 from EESSI/develop
release v0.4.0
laraPPr authored Feb 28, 2024
2 parents 6ba9625 + 618a605 commit fa91fcd
Showing 13 changed files with 580 additions and 131 deletions.
98 changes: 98 additions & 0 deletions README.md
@@ -404,6 +404,20 @@ submit_command = /usr/bin/sbatch
```
`submit_command` is the full path to the Slurm job submission command used for submitting batch jobs. You may want to verify if `sbatch` is provided at that path or determine its actual location (using `which sbatch`).

```
build_permission = GH_ACCOUNT_1 GH_ACCOUNT_2 ...
```
`build_permission` defines which GitHub accounts are allowed to trigger build
jobs, i.e., for which accounts the bot acts on `bot: build ...` commands.
If the value is left empty, everyone can trigger build jobs.
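
For illustration only, here is a minimal Python sketch of how such a
whitespace-separated account list could be interpreted; the function and the
account names are hypothetical, and the actual bot code may differ.

```
# Sketch (not the bot's actual implementation): interpret 'build_permission'
# as a whitespace-separated list of GitHub account names; an empty value
# means every account may trigger builds.
def may_trigger_build(sender: str, build_permission: str) -> bool:
    allowed = build_permission.split()
    if not allowed:
        # empty setting -> no restriction
        return True
    return sender in allowed


# usage with hypothetical account names
print(may_trigger_build("GH_ACCOUNT_1", "GH_ACCOUNT_1 GH_ACCOUNT_2"))  # True
print(may_trigger_build("someone_else", "GH_ACCOUNT_1 GH_ACCOUNT_2"))  # False
print(may_trigger_build("someone_else", ""))                           # True (everyone)
```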

```
no_build_permission_comment = The `bot: build ...` command has been used by user `{build_labeler}`, but this person does not have permission to trigger builds.
```
`no_build_permission_comment` defines a comment (template) that is used when
the account trying to trigger build jobs has no permission to do so.
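
As a rough illustration of how such a template can be rendered (an assumption
about the mechanism, not necessarily the bot's exact code), the
`{build_labeler}` placeholder can be filled with Python's `str.format`; the
account name below is hypothetical.

```
# Sketch: substitute the '{build_labeler}' placeholder with the GitHub
# account that issued the command.
template = ("The `bot: build ...` command has been used by user `{build_labeler}`, "
            "but this person does not have permission to trigger builds.")
comment = template.format(build_labeler="GH_ACCOUNT_3")  # hypothetical account
print(comment)
```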


#### `[bot_control]` section

The `[bot_control]` section contains settings for configuring the feature to
@@ -485,6 +499,43 @@ This defines a message that is added to the status table in a PR comment
corresponding to a job whose tarball should have been uploaded (e.g., after
setting the `bot:deploy` label).


```
metadata_prefix = LOCATION_WHERE_METADATA_FILE_GETS_DEPOSITED
tarball_prefix = LOCATION_WHERE_TARBALL_GETS_DEPOSITED
```

These two settings define in which directory of the S3 bucket (see
`bucket_name` above) the metadata file and the tarball are stored. The value
`LOCATION...` can either be a plain string, in which case the same 'prefix' is
used regardless of the target CVMFS repository, or a mapping from a target
repository id (see also `repo_target_map` below) to a prefix.

The prefix itself can use some (environment) variables that are set within
the upload script (see `tarball_upload_script` above). Currently these are:

* `'${github_repository}'` (expanded to the full name of the GitHub
  repository, e.g., `EESSI/software-layer`),
* `'${legacy_aws_path}'` (expanded to the legacy/old prefix used for
  storing tarballs/metadata files, i.e.,
  `EESSI_VERSION/TARBALL_TYPE/OS_TYPE/CPU_ARCHITECTURE/TIMESTAMP/`), _and_
* `'${pull_request_number}'` (expanded to the number of the pull request
  from which the tarball originates).

Note that it is important to single-quote (`'`) the variables as shown above,
because they are most likely not defined yet at the point where the bot calls
the upload script; they are only set within the script itself.

The list of supported variables can be shown by running
`scripts/eessi-upload-to-staging --list-variables`.

**Examples:**
```
metadata_prefix = {"eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}
tarball_prefix = {
"eessi-pilot-2023.06": "",
"eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"
}
```
If left empty, the old/legacy prefix is used.
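
To make the expansion mechanism more concrete, here is a small Python sketch
that parses such a mapping and expands the `${...}` variables once their
values are known. This is only an illustration under the assumption that the
mapping is JSON-like; the actual expansion happens inside the (shell) upload
script.

```
import json
import string

# Sketch: pick the prefix for a given target repository id and expand the
# '${...}' variables; example values only, not the bot's actual code.
tarball_prefix = '{"eessi-pilot-2023.06": "", "eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}'

prefixes = json.loads(tarball_prefix)
prefix_template = prefixes["eessi.io-2023.06"]

expanded = string.Template(prefix_template).safe_substitute(
    github_repository="EESSI/software-layer",  # example repository
    pull_request_number="42",                  # example PR number
)
print(expanded)  # new/EESSI/software-layer/42
```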

#### `[architecturetargets]` section

The section `[architecturetargets]` defines for which targets (OS/SUBDIR), (for example `linux/x86_64/amd/zen2`) the EESSI bot should submit jobs, and which additional `sbatch` parameters will be used for requesting a compute node with the CPU microarchitecture needed to build the software stack.
@@ -657,6 +708,53 @@ job_test_unknown_fmt = <details><summary>:shrug: UNKNOWN _(click triangle for de
`job_test_unknown_fmt` is used in case no test file (produced by `bot/check-test.sh`
provided by the target repository) was found.


#### `[download_pr_comments]` section

The `[download_pr_comments]` section sets templates for messages related to
downloading the contents of a pull request.
```
git_clone_failure = Unable to clone the target repository.
```
`git_clone_failure` is shown when `git clone` failed.

```
git_clone_tip = _Tip: This could be a connection failure. Try again and if the issue remains check if the address is correct_.
```
`git_clone_tip` should contain some hint on how to deal with the issue. It is shown when `git clone` failed.

```
git_checkout_failure = Unable to checkout to the correct branch.
```
`git_checkout_failure` is shown when `git checkout` failed.

```
git_checkout_tip = _Tip: Ensure that the branch name is correct and the target branch is available._
```
`git_checkout_tip` should contain some hint on how to deal with the failure. It
is shown when `git checkout` failed.

```
curl_failure = Unable to download the `.diff` file.
```
`curl_failure` is shown when downloading the `.diff` file of the pull request
(`PR_NUMBER.diff`) failed.

```
curl_tip = _Tip: This could be a connection failure. Try again and if the issue remains check if the address is correct_
```
`curl_tip` should contain some hint on how to deal with a failing download of the `.diff` file.

```
git_apply_failure = Unable to download or merge changes between the source branch and the destination branch.
```
`git_apply_failure` is shown when applying the `.diff` file with `git apply`
failed.

```
git_apply_tip = _Tip: This can usually be resolved by syncing your branch and resolving any merge conflicts._
```
`git_apply_tip` should guide the contributor/maintainer on how to resolve the
cause of the `git apply` failure.
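
To show where these messages fit in, the following is a hedged Python sketch
of the overall download sequence; the commands, helper function, and return
values are illustrative assumptions, not the bot's actual implementation.

```
import subprocess

# Sketch of the download sequence the messages above correspond to; a failed
# step would be reported with its *_failure message plus the matching *_tip.
def download_pr(repo_url, branch, diff_url, workdir):
    steps = [
        (["git", "clone", repo_url, workdir], "git_clone_failure"),
        (["git", "-C", workdir, "checkout", branch], "git_checkout_failure"),
        (["curl", "-L", "-o", f"{workdir}/pr.diff", diff_url], "curl_failure"),
        (["git", "-C", workdir, "apply", "pr.diff"], "git_apply_failure"),
    ]
    for cmd, error_key in steps:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            # in the real bot, the corresponding message template would be
            # posted as a PR comment at this point
            return error_key
    return None
```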

# Instructions to run the bot components

The bot consists of three components:
16 changes: 16 additions & 0 deletions RELEASE_NOTES
@@ -1,6 +1,22 @@
This file contains a description of the major changes to the EESSI
build-and-deploy bot. For more detailed information, please see the git log.

v0.4.0 (28 February 2024)
--------------------------

This is a minor release of the EESSI build-and-deploy bot.

Bug fixes:
* fix an issue where wrong values were used when handling the `bot: status` command (#251)

Improvements:
* make the bot report when preparing the job working directory failed, for example
  due to a merge conflict in a pull request (#248)
* add the pull request comment id to the metadata file that is uploaded to the
  S3 bucket (#247, #249, #250, #253)
* enable configurable upload directories for tarball and metadata file (#254)
* only make the bot respond to pull request comments that contain a bot command (#257)


v0.3.0 (30 January 2024)
--------------------------

32 changes: 32 additions & 0 deletions app.cfg.example
@@ -147,6 +147,28 @@ deploy_permission =
# template for comment when user who set a label has no permission to trigger deploying tarballs
no_deploy_permission_comment = Label `bot:deploy` has been set by user `{deploy_labeler}`, but this person does not have permission to trigger deployments

# settings for where (directory) in the S3 bucket to store the metadata file and
# the tarball
# - Can be a string value to always use the same 'prefix' regardless of the target
# CVMFS repository, or can be a mapping of a target repository id (see also
# repo_target_map) to a prefix.
# - The prefix itself can use some (environment) variables that are set within
# the script. Currently those are:
# * 'github_repository' (which would be expanded to the full name of the GitHub
# repository, e.g., 'EESSI/software-layer'),
# * 'legacy_aws_path' (which expands to the legacy/old prefix being used for
# storing tarballs/metadata files) and
# * 'pull_request_number' (which would be expanded to the number of the pull
# request from which the tarball originates).
# - The list of supported variables can be shown by running
# `scripts/eessi-upload-to-staging --list-variables`.
# - Examples:
# metadata_prefix = {"eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}
# tarball_prefix = {"eessi-pilot-2023.06": "", "eessi.io-2023.06": "new/${github_repository}/${pull_request_number}"}
# If left empty, the old/legacy prefix is being used.
metadata_prefix =
tarball_prefix =


[architecturetargets]
# defines both for which architectures the bot will build
@@ -219,3 +241,13 @@ no_matching_tarball = No tarball matching `{tarball_pattern}` found in job dir.
multiple_tarballs = Found {num_tarballs} tarballs in job dir - only 1 matching `{tarball_pattern}` expected.
job_result_unknown_fmt = <details><summary>:shrug: UNKNOWN _(click triangle for detailed information)_</summary><ul><li>Job results file `{filename}` does not exist in job directory, or parsing it failed.</li><li>No artefacts were found/reported.</li></ul></details>
job_test_unknown_fmt = <details><summary>:shrug: UNKNOWN _(click triangle for detailed information)_</summary><ul><li>Job test file `{filename}` does not exist in job directory, or parsing it failed.</li></ul></details>

[download_pr_comments]
git_clone_failure = Unable to clone the target repository.
git_clone_tip = _Tip: This could be a connection failure. Try again and if the issue remains check if the address is correct_.
git_checkout_failure = Unable to checkout to the correct branch.
git_checkout_tip = _Tip: Ensure that the branch name is correct and the target branch is available._
curl_failure = Unable to download the `.diff` file.
curl_tip = _Tip: This could be a connection failure. Try again and if the issue remains check if the address is correct_
git_apply_failure = Unable to download or merge changes between the source branch and the destination branch.
git_apply_tip = _Tip: This can usually be resolved by syncing your branch and resolving any merge conflicts._
34 changes: 21 additions & 13 deletions eessi_bot_event_handler.py
@@ -31,7 +31,8 @@
from tasks.deploy import deploy_built_artefacts
from tools import config
from tools.args import event_handler_parse
from tools.commands import EESSIBotCommand, EESSIBotCommandError, get_bot_command
from tools.commands import EESSIBotCommand, EESSIBotCommandError, \
contains_any_bot_command, get_bot_command
from tools.permissions import check_command_permission
from tools.pr_comments import create_comment

@@ -113,7 +114,25 @@ def handle_issue_comment_event(self, event_info, log_file=None):
# currently, only commands in new comments are supported
# - commands have the syntax 'bot: COMMAND [ARGS*]'

# first check if sender is authorized to send any command
# only scan for commands in newly created comments
if action == 'created':
comment_received = request_body['comment']['body']
self.log(f"comment action '{action}' is handled")
else:
# NOTE we do not respond to an updated PR comment with yet another
# new PR comment, because it would make the bot very noisy or
# worse could result in letting the bot enter an endless loop
self.log(f"comment action '{action}' not handled")
return
# at this point we know that we are handling a new comment

# check if comment does not contain a bot command
if not contains_any_bot_command(comment_received):
self.log("comment does not contain a bot command; not processing it further")
return
# at this point we know that the comment contains a bot command

# check if sender is authorized to send any command
# - this serves a double purpose:
# 1. check permission
# 2. skip any comment updates that were done by the bot itself
@@ -150,17 +169,6 @@ def handle_issue_comment_event(self, event_info, log_file=None):
else:
self.log(f"account `{sender}` has permission to send commands to bot")

# only scan for commands in newly created comments
if action == 'created':
comment_received = request_body['comment']['body']
self.log(f"comment action '{action}' is handled")
else:
# NOTE we do not respond to an updated PR comment with yet another
# new PR comment, because it would make the bot very noisy or
# worse could result in letting the bot enter an endless loop
self.log(f"comment action '{action}' not handled")
return

# search for commands in comment
comment_response = ''
commands = []
48 changes: 18 additions & 30 deletions eessi_bot_job_manager.py
@@ -42,7 +42,7 @@
from connections import github
from tools import config, run_cmd
from tools.args import job_manager_parse
from tools.job_metadata import read_metadata_file
from tools.job_metadata import read_job_metadata_from_file, read_metadata_file
from tools.pr_comments import get_submitted_job_comment, update_comment


@@ -251,24 +251,6 @@ def determine_finished_jobs(self, known_jobs, current_jobs):

return finished_jobs

def read_job_pr_metadata(self, job_metadata_path):
"""
Read job metadata file and return the contents of the 'PR' section.
Args:
job_metadata_path (string): path to job metadata file
Returns:
(ConfigParser): instance of ConfigParser corresponding to the 'PR'
section or None
"""
# reuse function from module tools.job_metadata to read metadata file
metadata = read_metadata_file(job_metadata_path, self.logfile)
if metadata and "PR" in metadata:
return metadata["PR"]
else:
return None

def read_job_result(self, job_result_file_path):
"""
Read job result file and return the contents of the 'RESULT' section.
@@ -350,7 +332,7 @@ def process_new_job(self, new_job):

# assuming that a bot job's working directory contains a metadata
# file, its existence is used to check if the job belongs to the bot
metadata_pr = self.read_job_pr_metadata(job_metadata_path)
metadata_pr = read_job_metadata_from_file(job_metadata_path, self.logfile)

if metadata_pr is None:
log(f"No metadata file found at {job_metadata_path} for job {job_id}, so skipping it",
@@ -446,7 +428,7 @@ def process_running_jobs(self, running_job):
job_metadata_path = os.path.join(job_dir, metadata_file)

# check if metadata file exist
metadata_pr = self.read_job_pr_metadata(job_metadata_path)
metadata_pr = read_job_metadata_from_file(job_metadata_path, self.logfile)
if metadata_pr is None:
raise Exception("Unable to find metadata file")

@@ -591,7 +573,7 @@ def process_finished_job(self, finished_job):
# obtain id of PR comment to be updated (from file '_bot_jobID.metadata')
metadata_file = f"_bot_job{job_id}.metadata"
job_metadata_path = os.path.join(new_symlink, metadata_file)
metadata_pr = self.read_job_pr_metadata(job_metadata_path)
metadata_pr = read_job_metadata_from_file(job_metadata_path, self.logfile)
if metadata_pr is None:
raise Exception("Unable to find metadata file ... skip updating PR comment")

@@ -685,14 +667,26 @@ def main():
if max_iter != 0:
known_jobs = job_manager.get_known_jobs()
while max_iter < 0 or i < max_iter:
# sleep poll_interval seconds (not for the first iteration)
if i != 0:
log(
"job manager main loop: sleep %d seconds" % poll_interval,
job_manager.logfile,
)
time.sleep(poll_interval)
log("job manager main loop: iteration %d" % i, job_manager.logfile)
log(
"job manager main loop: known_jobs='%s'" % ",".join(
known_jobs.keys()),
job_manager.logfile,
)

current_jobs = job_manager.get_current_jobs()
try:
current_jobs = job_manager.get_current_jobs()
except RuntimeError:
i = i + 1
continue

log(
"job manager main loop: current_jobs='%s'" % ",".join(
current_jobs.keys()),
@@ -747,13 +741,7 @@

known_jobs = current_jobs

# sleep poll_interval seconds (only if at least one more iteration)
if max_iter < 0 or i + 1 < max_iter:
log(
"job manager main loop: sleep %d seconds" % poll_interval,
job_manager.logfile,
)
time.sleep(poll_interval)
# add one iteration to the loop
i = i + 1

