Add common check: solution identical to exemplar #157
Conversation
test/support/exercise_test_case.ex
Outdated
%{"files" => %{"exemplar" => [path]}} <- Jason.decode!(config_file), | ||
exemplar_code <- | ||
[@concept_exercice_path, slug, path] |> Path.join() |> File.read!() do | ||
exemplar_code |
Is this readable? I really squeezed the power of `with` here...
It is not 😁 I think the biggest problem is the multi-step clauses that don't fit on a single line. The mixture of `File.read` and `File.read!` is also confusing to me. If this is for testing purposes only, can't everything just raise an error?
Excellent point.
Actually, I spoke too fast: this mustn't crash because it has to return `nil` for practice exercises. I'll still rewrite it.
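For illustration, here is one possible shape of that rewrite, assuming the only expected failure is a missing concept-exercise config (i.e. a practice exercise); the `.meta/config.json` location is an assumption, `@concept_exercice_path` is the attribute from the diff above, and this is a sketch rather than the code that landed in the PR:

```elixir
# Sketch only: return nil when there is no concept-exercise config (practice
# exercises have no exemplar) and let everything else raise, since this code
# only runs in tests.
defp exemplar_code(slug) do
  config_path = Path.join([@concept_exercice_path, slug, ".meta", "config.json"])

  if File.exists?(config_path) do
    %{"files" => %{"exemplar" => [path]}} =
      config_path |> File.read!() |> Jason.decode!()

    [@concept_exercice_path, slug, path] |> Path.join() |> File.read!()
  else
    nil
  end
end
```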
I'm struggling with this. Every `!` has a precise reason for being there or not. I settled for adding comments for the moment.
The tests seem to fail, I guess it's because the submodule isn't loaded... Hopefully it will work locally, you need to run
I modified reading the exemplar file according to the discussion with Jeremy, then added tests. There are so many files modified now, would you prefer I break it down into smaller PRs? I also realized that showing students errors when the exemplar files are not found or can't be parsed is probably a bad idea; it's better to have it in the logs. We have access to these, right?
Force-pushed from 159898b to 794f901.
I'm sorry, I still don't have the time and energy to do a full review, but I'll try to answer some of the questions now.
I agree.
In theory, yes. In practice, we only have access to an ad hoc page for maintainers where the logs are visible for a few minutes and then get pushed out by metadata from other submissions. Only a limited number of the most recent submissions are shown, and that limit is platform-wide. So in practice you need to run a problematic submission and take a look at that page within a minute, otherwise you won't find the log again. I really hope that's something that will be resolved soon. Better debugging tools for maintainers would allow a lot of growth and improvements in all the tracks (cc @iHiD).
I think this also means we need to get rid of the git submodule. Is that possible? Jeremy said he doesn't want analyzer deployments if nothing functional changes in the analyzer. And every commit to main does trigger a deployment, including updating the reference to the submodule, right?
No need, if they're all part of this one big feature, it's fine.
Agreed. There's a very long TODO list atm. Need to find some money, hire some people, and move faster :)
I'll modify it to use logs then.
I'm still using the submodule for testing all the analyzer checks, since short of copy-pasting, that's the only way we can compare with the exemplar solution. In that sense, we could use the CI to update the submodule on
Can this be achieved before making the commit on
```elixir
[path | _] -> Path.join(params.path, path)
_ -> nil
```
Hmm. This line of code assumes that exemplars are always one file, and it will ignore the other files if there are more. That seems like asking for trouble. I'm pretty sure that we shouldn't be doing the exemplar check if there is unexpectedly more than one exemplar file, but I am not sure what to do instead. Crash? Log an error and behave as if there was no exemplar at all?
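For reference, the "log an error and behave as if there was no exemplar" option could look roughly like the following, reusing the `meta_config` and `params` bindings visible in the surrounding code; it is a sketch, not what this PR settled on:

```elixir
require Logger

exemplar_path =
  case meta_config["files"]["exemplar"] do
    [path] ->
      Path.join(params.path, path)

    unexpected ->
      # zero or several exemplar files: log and skip the exemplar check entirely
      Logger.error("Expected exactly one exemplar file, got: #{inspect(unexpected)}")
      nil
  end
```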
A few lines above, there was already something similar: `relative_code_path = meta_config["files"]["solution"] |> hd()`. This ignores anything but the first file and crashes if it's empty.
I thought about this a bit more.
For the exemplar code, I guess it's mostly fine because, AFAIK, there is only one file for every exercise so far, and unexpected situations (no files, multiple files) are very unlikely.
For student code, that is more of an issue. Not for website submissions, since I don't think they can create new files, but for CLI submissions, there could be many (Tim loves 3+ file solutions). So the analysis would miss anything outside of the main file.
Am I correct in saying that the directory/file structure in Elixir doesn't matter as long as it's in `lib`, and that only module names are important?
If that is the case, a simple solution is to concatenate all the files that are in `lib` (as opposed to `meta_config["files"]["solution"]`, which only points to the stub files) and pretend they were all one file. Actually, we can do the same for the exemplar.
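If that is the route taken, a minimal sketch of "concatenate everything under `lib/`" could look like this (the helper name and argument are hypothetical; this is not code from the PR):

```elixir
# Sketch: read every .ex file under the solution's lib/ directory and treat
# the concatenation as a single source file for analysis purposes.
defp concatenated_source(solution_dir) do
  solution_dir
  |> Path.join("lib/**/*.ex")
  |> Path.wildcard()
  |> Enum.sort()
  |> Enum.map(&File.read!/1)
  |> Enum.join("\n")
end
```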
Really? It would make sense to me that the CI would run on the push, but before the deployment. For example, you may want to have an action that prevents the deployment in some cases. This is just guesswork though, I'll look into it.
Deployment is handled through the GHA on push, so what Angelika says is correct. However, presuming the submodule is being ignored in the Dockerfile (haven't checked, but it should be), the second run should be a no-op. But it would need enough time between the two for that to happen. Another option would be to update submodules automatically as part of the PR CI, so it happens on the pull-request action instead, where it adds a new commit. It's a messy option though.
I see. Another option would be to have this in the test CI. Maybe we don't need to commit and push the submodule sync; just checking it out would be enough. If we want to run tests locally, we update the submodule locally and commit that along with PRs... A bit opaque, maybe. Yet another option is to do without submodules and check the exemplar from the repo URL; we already do that to check that comments exist.
That's a hard dilemma. I'm not sure what the best course of action is here. This sounds reasonable:
But it might cause a situation where the analyzer returns both a celebratory comment and a negative comment. This would happen if we update the analyzer to do some new check but forget to verify the new check against the most up-to-date exemplar. Sounds unlikely though, so maybe this is the best option? This sounds the cleanest and safest:
Except that it will significantly slow down the tests if each test needs to pull the exemplar, or slow them down slightly if we git clone the repo once when starting the tests.
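As a rough illustration of the "git clone the repo once when starting the tests" variant, assuming the exemplar files live in the exercism/elixir track repository (a sketch, not part of this PR):

```elixir
# test/test_helper.exs (sketch): clone the track repo once before the suite runs,
# so individual tests can read exemplar files from a local checkout instead of a
# git submodule. Crashes loudly if the clone fails.
exercises_dir = Path.join(System.tmp_dir!(), "exercism_elixir")

unless File.dir?(exercises_dir) do
  {_output, 0} =
    System.cmd("git", [
      "clone",
      "--depth",
      "1",
      "https://github.com/exercism/elixir.git",
      exercises_dir
    ])
end

ExUnit.start()
```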
This could happen locally, but then the CI would catch it. That's what it's for. I forget to run
Not a huge deal IMO, the tests on
I know this doesn't help much, but I don't really have a preference among all these options...
I'm unsubscribing from this. Re-tag me if you want my thoughts on anything 🙂
Then let's go for that option. To sum up, this is what I think that means:
That's exactly what I think that means too. Messing with CI with my amount of experience will probably mean some amount of trial and error, so I apologize in advance if I end up spamming commits.
Spamming commits is unfortunately The Only Way ™️ to work with GitHub workflows. I even have a dedicated repo for this myself :D (https://github.com/angelikatyborska/gha-test)
Looks like it's working. It only took 14 commits on my fork before I found the solution, which was literally adding `with: submodules: true`. 🙈
Is this also going to update the submodule to the latest commit?
Yes, I believe it does. I checked the log of the action and it runs
I tried to resolve the conflicts and merge, but I don't think that's straightforward, could you do it? I also noticed a small typo
On it!
Force-pushed from 7cdc6d0 to 55beb7c.
Let's see what happens when I press the merge button 😁
Covers some features discussed in #132.
This PR:
- … `elixir` repo
- `elixir` … and compares it to the code; if they are identical, a celebratory comment is added

It does not …
This is quite a lot of code, and I have some questions, so this is probably not ready to merge as is. I will leave my questions as comments so it's easier to discuss.
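For readers skimming the thread, a rough sketch of the kind of check being discussed; this illustrates the idea only, is not the analyzer's actual implementation, and the module name is made up:

```elixir
# Consider a solution "identical" to the exemplar when both parse to the same AST,
# so that whitespace and comments don't affect the result (sketch only).
defmodule ExemplarCheck do
  def identical?(solution_source, exemplar_source) do
    with {:ok, solution_ast} <- Code.string_to_quoted(solution_source),
         {:ok, exemplar_ast} <- Code.string_to_quoted(exemplar_source) do
      solution_ast == exemplar_ast
    else
      # unparsable code can never be identical to the exemplar
      _ -> false
    end
  end
end
```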