20 Practice Exercises #31

Closed · 20 tasks done
loziniak opened this issue May 28, 2021 · 16 comments
Labels: good first issue (Good for newcomers), help wanted (Extra attention is needed)

loziniak (Member) commented May 28, 2021

At least 20 Practice Exercises are needed to launch the track.

Checklist notes:

- Each exercise name (its so-called "slug") is linked to a description.
- The task's difficulty is given in parentheses. The choice is arbitrary, based on my personal reading of the exercise description, and is open for discussion/change.
- The list was chosen randomly from Exercism's problem database.
- It is sorted by difficulty, and this order should also be kept in config.json.

Instructions:

1. Comment here to let everybody know which exercises are being worked on.
2. Clone this repo.
3. Run the exercise generator:

    $ red _tools/generate-practice-exercise.red <exercise-slug>

4. In exercises/practice/<exercise-slug>/<exercise-slug>-test.red, swap the comments like this, so that the example solution is the one being tested:

    ; test-init/limit %exercise-slug.red 1
    test-init/limit %.meta/example.red 1

5. Solve the exercise by editing exercises/practice/<exercise-slug>/.meta/example.red.
6. Run the tests. You'll need to change the second argument of the test-init function in <exercise-slug>-test.red from 1 to however many tests you want to run.

    $ cd exercises/practice/<exercise-slug>
    $ red <exercise-slug>-test.red

7. Once your solution passes all the tests, remember to revert the changes on the test-init line: uncomment the solution file, comment out the example file, and change the limit (the second argument) back to 1.
8. Set the exercise's difficulty in the track's config.json. If you want, add practices and prerequisites concepts. Move the exercise's config entry to the proper position, so that all exercises stay sorted from easiest to toughest (see the illustrative entry after this list).
9. Commit to a separate branch and open a Pull Request.
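
For step 8, a practice exercise entry in config.json looks roughly like this (an illustrative sketch of the standard Exercism track config schema; the slug, difficulty and placeholder UUID here are made up, so copy the shape, not the values):

    {
        "slug": "darts",
        "name": "Darts",
        "uuid": "<generate-a-fresh-uuid>",
        "practices": [],
        "prerequisites": [],
        "difficulty": 3
    }
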
wallysilva commented Oct 2, 2021

I will work on hello-world (1)

loziniak (Member, author) commented Oct 2, 2021

It already has a solution, but of course you can solve it as a learning example. Most needed are solutions for the exercises with an empty checkbox.

wallysilva commented

I believe you can find some of the solutions on Rosetta Code. For instance, the solution for the Roman Numerals problem: http://www.rosettacode.org/wiki/Roman_numerals/Decode#Red

dander (Contributor) commented Oct 18, 2021

I will try out darts if someone else isn't already doing it. I'm curious about the workflow for the tests. Is the suggested way to increment the ignore-after field each time the tests all pass, and then run them again?

loziniak (Member, author) commented

@dander exactly. This is the workflow in every Exercism track, although nothing stops you from running all the tests from the very beginning. I suspect it's meant as a sort of TDD good practice. BTW, nice to see you involved!
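
For example, with the test-init line from the instructions above, the limit (the second argument) grows as you go:

    test-init/limit %.meta/example.red 1    ; start: run only the first test
    test-init/limit %.meta/example.red 5    ; once that passes, raise the limit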

loziniak (Member, author) commented

@wallysilva I took a roman-numerals solution from Rosetta Code, thanks for the advice!

dander (Contributor) commented Oct 27, 2021

I've started working on sgf-parsing.

loziniak (Member, author) commented Mar 3, 2022

> I've started working on sgf-parsing.

How is it going? Perhaps we could go live soon. Do you want to finish that? If not, I'll take it.

dander (Contributor) commented Mar 3, 2022

I had to stop for a while because of general life stuff going on, but I've been trying to get back to it lately. I was finding it a bit tricky to indicate clearly in the tests what was wrong with the outputs. I'm also trying to figure out an appropriate way to handle expected errors. Have you encountered that in some of the other exercises? To clarify what I mean: some of the tests have expected values containing nested data structures,

    expected: #(
        properties: #(
            A: ["b"]
            C: ["d"]
        )
        children: []
    )

while some have an error property with an associated error message:

    expected: #(
        error: "properties without delimiter"
    )

I'm interpreting that to mean the test should trigger a 'user error with that message.

One thing I've been a bit conflicted on is whether the tree-like structure should be the strict map/block structure above, or something more flexible, since there could be different kinds of solutions.

loziniak (Member, author) commented Mar 4, 2022

Yes, this error pattern came up for me in largest-series-product: 79a9cf0. You should be able to throw a map with an error key, or call cause-error with an appropriate message. I extended the testing "framework" lately (0096791), but errors should still work.

To satisfy the structured output expectations, I would suggest you simply return a map from the tested function.
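
A minimal sketch of those two options (my own illustration, not the track's actual example code; the function names are made up, and the map literals are just for shape):

    Red []

    ;; Structured-output case: return a nested map matching the expected value.
    ;; (Note: a literal map is built at load time and shared across calls.)
    parse-ok: func [][
        #(
            properties: #(A: ["b"] C: ["d"])
            children: []
        )
    ]

    ;; Expected-error case: raise a 'user error carrying the message.
    parse-fail: func [][
        cause-error 'user 'message ["properties without delimiter"]
    ]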

dander (Contributor) commented May 1, 2022

I pushed up an example solution for the sgf-parsing exercise (finally). #62

I initially wanted to use parse with collect/keep, but since the outputs expect nested maps and collect can only generate blocks, I wasn't sure how that would work. So instead I used a stack to keep track of the current location in the data structure when inserting child nodes (roughly the idea sketched below). I found this problem quite difficult to get right. I'm not sure parse is the easiest way to solve it, but it seems like a natural place to show off that feature of the language.
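
A rough illustration of that stack idea (a simplified sketch, not the actual code from #62; the node shape mirrors the expected maps above):

    Red []

    ;; Build a fresh node with the same shape as the expected output.
    new-node: func [/local n][
        n: make map! []
        n/properties: make map! []
        n/children: copy []
        n
    ]

    ;; The last element of the stack is the node currently being filled.
    open-node: func [stack [block!] /local node][
        node: new-node
        append select last stack 'children node  ; attach to the current parent
        append stack node                        ; descend: the new node is now current
    ]

    close-node: func [stack [block!]][
        take/last stack                          ; ascend back to the parent
    ]

    ;; Usage: start with a root node on the stack, then call open-node /
    ;; close-node as the parse rules enter and leave subtrees.
    stack: reduce [new-node]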

I'm looking into adding the project metadata pieces. I am considering adding concepts for parse and recursion. Is there a catalog of existing concepts somewhere that I should reference? Is there anything I need to know about the UUIDs, or do I just generate a new one?

dander (Contributor) commented Jun 11, 2022

@loziniak I think #62 is ready to be merged, if it looks good to you. I ended up adding stubs for a parse concept, but removed the recursion one, though I suppose the exercise could be solved without parse.

loziniak (Member, author) commented

There is no central catalog of concepts. For me it felt natural to just solve the exercise examples and see whether any new concepts were needed to explain them, so it seems you did just as I did with parse. I have some initial work done to start with the basics and evaluation concepts. Are you thinking about working more on concepts? It's a great feature; perhaps we could add concepts one by one. There is a task for it: #37.

UUIDs can be generated offline by hand; they just need to be unique throughout the project. You can use configlet for this, or any online or system tool you prefer. Also, during the track's unit tests, configlet is used to check the uniqueness of UUIDs, so any errors are caught.
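
For instance (to the best of my knowledge configlet ships a uuid subcommand for exactly this; check configlet --help to confirm):

    $ configlet uuid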

dander (Contributor) commented Jun 21, 2022

I will consider contributing to the concepts. I just need to be wary of how big a bite I take.

Configlet is pretty cool. I discovered it when the pull request triggered a failed configlet run.

loziniak (Member, author) commented

Just pushed the last exercise of the 20. Yay!
