
More completion duration updates #3781

Conversation


@marcellamaki marcellamaki commented Oct 28, 2022

Summary

Description of the change(s) you made

A co-hack with @rtibbles that began as changes to address feedback, but resulted in some broader refactoring.

  • Completion and duration should now work for all content types
  • The descriptive aspect of duration has been hidden for now, so the duration setting is only shown if a time-based completion criterion is set. This was done because the interface language was confusing when duration was being set descriptively.
  • The learner-managed state ("learner can mark as complete") has been moved into the completion options component, so that when it is set and no other completion criterion is set, the completion criteria default to a valid state.
  • To simplify frontend state management, backend completion criteria validation is only run when the node is marked complete, allowing partial completion criteria to be saved.
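The last point above can be sketched roughly as follows. This is a hypothetical illustration, not Studio's actual code: the function and field names (`save_node`, `validate_completion_criteria`, the `complete` flag) are assumptions chosen for readability.

```python
def validate_completion_criteria(criteria: dict) -> list:
    """Return a list of validation errors for a completion criteria dict."""
    errors = []
    model = criteria.get("model")
    if model is None:
        errors.append("model is required")
    if model == "time" and not criteria.get("threshold"):
        errors.append("time-based completion requires a threshold")
    return errors


def save_node(node: dict) -> dict:
    """Save a content node, validating completion criteria only when the
    node is flagged as complete; partial criteria are saved untouched."""
    if node.get("complete"):
        errors = validate_completion_criteria(node.get("completion_criteria", {}))
        if errors:
            raise ValueError(f"Invalid completion criteria: {errors}")
    return node  # persist as-is, even with partially filled criteria
```

The point of gating validation on the `complete` flag is that the frontend never has to block a save just because the user is mid-way through filling out the criteria form.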

Issue for Kolibri filed separately in Kolibri repo

Manual verification steps performed

  1. many many many permutations of testing
  2. test each content type
  3. test all completion/duration options for each content type
  4. switch from each completion option to each other option (i.e. from reference to time-based, reference to all content viewed, time-based to all content viewed, time-based to reference, all content viewed to reference, all content viewed to time-based)
  5. ensure "learner can mark as complete" works without any other interactions with completion/duration (i.e. check the checkbox, but make no other changes to default state)

Screenshots (if applicable)

toooo many to add

Does this introduce any tech-debt items?


Reviewer guidance

How can a reviewer test these changes?

Are there any risky areas that deserve extra testing?

References

Comments


Contributor's Checklist

PR process:

  • If this is an important user-facing change, the CHANGELOG label has been added to this PR or the related issue. Note: items with this label will be added to the CHANGELOG at a later time
  • If this includes an internal dependency change, a link to the diff is provided
  • The docs label has been added if this introduces a change that needs to be updated in the user docs
  • If any Python requirements have changed, the updated requirements.txt files are also included in this PR
  • Opportunities for using Google Analytics here are noted
  • Migrations are safe for a large db

Studio-specific:

  • All user-facing strings are translated properly
  • The notranslate class has been added to elements that shouldn't be translated by Google Chrome's automatic translation feature (e.g. icons, user-generated text)
  • All UI components are LTR and RTL compliant
  • Views are organized into pages, components, and layouts directories as described in the docs
  • Users' storage used is recalculated properly on any changes to main tree files
  • If there are new ways this uses user data that need to be factored into our Privacy Policy, they have been noted.

Testing:

  • Code is clean and well-commented
  • Contributor has fully tested the PR manually
  • If there are any front-end changes, before/after screenshots are included
  • Critical user journeys are covered by Gherkin stories
  • Any new interactions have been added to the QA Sheet
  • Critical and brittle code paths are covered by unit tests

Reviewer's Checklist

This section is for reviewers to fill out.

  • Automated test coverage is satisfactory
  • PR is fully functional
  • PR has been tested for accessibility regressions
  • External dependency files were updated if necessary (yarn and pip)
  • Documentation is updated
  • Contributor is in AUTHORS.md

@rtibbles rtibbles changed the base branch from unstable to hotfixes October 28, 2022 23:36
@marcellamaki marcellamaki marked this pull request as ready for review November 1, 2022 18:12
@marcellamaki marcellamaki requested a review from bjester November 1, 2022 20:20

@bjester bjester left a comment


Tested several different resource types and everything seemed okay. One thing I was sad to see removed here was the recall of M/N values when switching the mastery goal: it was a small feature that allowed for consequence-free experimental discovery, since fields appear and disappear depending on what you select. Left some other comments too.

@rtibbles

rtibbles commented Nov 2, 2022

One thing I was sad to see removed here was the recall of M/N values when switching the mastery goal

Yeah, there were two reasons for this. One was that by always setting it to do_all for Kolibri versions < 0.16, the mastery model is a bit more aligned with a practice quiz.

The other was that if either of the m and n values was undefined, we had to switch the mastery model to something else for it to be valid.
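The fallback described above can be sketched as follows. This is an illustrative assumption about the shape of the logic, not Studio's actual implementation; the function name `normalize_mastery_model` and the string constants are hypothetical.

```python
# Hypothetical mastery-model constants (illustrative names).
DO_ALL = "do_all"
M_OF_N = "m_of_n"


def normalize_mastery_model(model: str, m=None, n=None) -> dict:
    """Return a valid mastery-model dict, falling back to do_all when an
    m-of-n model is missing either of its m/n values."""
    if model == M_OF_N and (m is None or n is None):
        # Incomplete m/n values make m_of_n invalid, so switch to a
        # model that is always valid on its own.
        return {"mastery_model": DO_ALL}
    if model == M_OF_N:
        return {"mastery_model": M_OF_N, "m": m, "n": n}
    return {"mastery_model": model}
```

Dropping the M/N recall means there is no stale pair of values to re-validate when the user toggles between goals; the trade-off is the consequence-free exploration bjester mentions above.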
