No such image: oxsecurity/megalinter-<flavor>:latest #2648
Same thing started happening today with …
hmmm what about a big …
If that were going to fix it, then I don't see how it could possibly be broken in GitHub Actions as well? There's also the fact that it broke abruptly without any Docker operations on our end.
Dropping the caret in mega-linter-runner@^6.19.0 did fix it. But we've pinned to this version for the M1 Mac users.
Neither …
Does pre-commit call npx mega-linter-runner too? Should we check on the runner side? The Docker side? Our images? Is there a new version of something elsewhere that could affect this?
All those tags exist on Docker Hub... I don't see what we could do on the MegaLinter side :/ mega-linter-runner just does a docker pull and then a docker run :/ Maybe something new in Docker itself?
Is this happening in GitHub Actions too?
Yeah, pre-commit simply calls npx mega-linter-runner in the manner specified above. I think we should probably see what is different in mega-linter-runner between v6.20.0 and v6.20.1. I am guessing there is something weird going on when …
I vaguely remember that the version of node in the runners was about to change; is it in that timeframe?
No, we pin Node.js both locally and in GitHub Actions, currently to v18.16.0, which hasn't changed since April.
I've not seen any issue of that type with the MegaLinter GitHub Action.
No, I meant that running the pre-commit hook fails the same way in GitHub Actions; part of the point of pre-commit is to reuse the same configuration one uses locally in CI. I was able to reproduce the issue by running …
For the moment, the --platform flag with default value linux/amd64 is just to force linux/amd64 on platforms that are not, like M1. Maybe if we don't specify the platform it could work better? We could add …
The outage just resolved itself both locally and in GitHub Actions, so I'm guessing this was a bug in Docker Hub or something. Closing this for now. Others should feel free to comment if they experience the same issue, and I can look into it further.
Hi there, I'm definitely still having a similar issue. It appears to me that pulling the image works fine, but running it fails when I attempt to run it exactly the way the default pre-commit configuration does. Here is the output I get:
I can verify that it pulled the correct v6 image to my machine at this point. I can work around the problem by retagging the image. Another workaround I've discovered, based on @btilford's comment above, is pinning the mega-linter-runner version to 6.19.0, so:
I can add that to my pre-commit config for now, but I can confirm that pre-commit will fail if it runs as the documentation currently suggests. My currently working pre-commit hook configuration:

```yaml
- repo: https://github.com/oxsecurity/megalinter
  rev: v6.22.2 # Git tag specifying the hook, not mega-linter-runner, version
  hooks:
    - id: megalinter-incremental # Faster, less thorough
      stages:
        - commit
      args:
        - [email protected]
        - --containername
        - "megalinter-incremental"
        - --remove-container
        - --fix
        - --env
        - "'APPLY_FIXES=all'"
        - --env
        - "'CLEAR_REPORT_FOLDER=true'"
        - --env
        - "'LOG_LEVEL=warning'"
        - --filesonly
    - id: megalinter-full # Slower, more thorough
      stages:
        - push
      args:
        - [email protected]
        - --containername
        - "megalinter-full"
        - --remove-container
        - --fix
        - --env
        - "'APPLY_FIXES=all'"
        - --env
        - "'CLEAR_REPORT_FOLDER=true'"
        - --env
        - "'LOG_LEVEL=warning'"
```
Can you reproduce the issue more simply with …
@Kurt-von-Laven I'm having no issue pulling down the images with …
@cam-barts what if you try with mega-linter-runner@beta?
I think I have it; PR on the way. The --platform argument was used for docker pull, but not for docker run.
Wild. That is actually a different bug with the same symptom, because I couldn't get past the …
**Describe the bug**

`[email protected]` abruptly started failing when the MegaLinter Docker image isn't explicitly tagged `latest` locally. The issue began at some point within the last 20 hours without pertinent changes on our end. Both mega-linter-runner and MegaLinter are pinned to v6.22.2.

**To Reproduce**

Steps to reproduce the behavior:

1. Configure the `megalinter-full` pre-commit hook in your `.pre-commit-config.yaml`.
2. Run `docker rmi oxsecurity/megalinter-python:latest` to verify that you don't have `oxsecurity/megalinter-python:latest` installed locally.
3. Run the hook with `pre-commit run megalinter-full --all-files --hook-stage push`.
4. The hook fails with the following output:

**Expected behavior**

I expected the hook to pass like it did yesterday.

**Additional context**

This could be related to #2646. I encountered the issue on Linux using rootless Docker, both locally and in GitHub Actions. I was not able to reproduce the issue in WSL (Windows Subsystem for Linux), which only supports rootful Docker. Running `docker pull oxsecurity/megalinter-python:latest` works around the issue even though I had already pulled `oxsecurity/megalinter-python:v6.22.2`, which is currently the same image. The issue reproduces on all of the flavors I tried: documentation, dotnet, javascript, and python. I'm confused as to what changed, since I didn't have the `latest` tag present locally before.