
Refactor the default completion and duration content completion options to be in utils.js, and make the default data visible in the resource panel #4182

Conversation

marcellamaki (Member)

Summary

Description of the change(s) you made

This PR refactors the default completion and duration model logic, previously used only in the EditModal, moving it into utils.js and making the default data visible in the resource panel.

Manual verification steps performed

  1. Default completion criteria are visible on the "devsetup" generated sample resources, where such completion data has not been added
  2. Testing with the EditModal to ensure functionality is still present
  3. Previewing the resource panel to ensure changes made in the EditModal are accurately reflected for all content types

Screenshots (if applicable)

There are a great many permutations, so I have not included them all, but here are a few samples:
Screenshot 2023-07-02 at 5 28 15 PM
Screenshot 2023-07-02 at 5 28 59 PM
Screenshot 2023-07-02 at 5 28 44 PM

Does this introduce any tech-debt items?

I don't think so! Hopefully it helps reduce it instead.


Reviewer guidance

How can a reviewer test these changes?

For code review:

  1. Run yarn run devsetup and see the default completion and duration values for the different content types (note that there are no default 'duration' values for anything other than audio/video content)
  2. Add your own content and see the default values displaying within the edit modal and, when saved, in the Resource Panel
  3. Add your own content and change the default values
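To make the review steps above concrete, here is a hypothetical sketch of the kind of per-kind default mapping being tested — all identifiers and shapes below are illustrative, not Studio's actual constants or helpers — with default durations limited to audio/video as noted in step 1:

```javascript
// Hypothetical per-kind completion defaults; names are illustrative
// and NOT Studio's actual constants.
const DEFAULT_COMPLETION = {
  document: { model: 'reference' },
  exercise: { model: 'mastery' },
  // only audio/video carry a default duration, taken from the file
  audio: { model: 'time', durationFromFile: true },
  video: { model: 'time', durationFromFile: true },
};

function defaultCompletion(kind) {
  // fall back to a simple 'reference' model for unknown kinds
  return DEFAULT_COMPLETION[kind] || { model: 'reference' };
}
```

Under this sketch, `defaultCompletion('audio')` reports a file-derived duration while other kinds do not, which matches the behavior a reviewer would look for in the resource panel.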

Are there any risky areas that deserve extra testing?

The EditModal should be fully regression tested by the QA team to ensure it is still working as expected.

References

Fixes #4180

Comments

I have tried to keep this as simple as possible for release - there might be other optimizations we could add later (e.g. do we want an empty duration to display at all by default?) but that did not seem critical at this point.


Contributor's Checklist

PR process:

  • If this is an important user-facing change, the CHANGELOG label has been added to this PR or the related issue. Note: items with this label will be added to the CHANGELOG at a later time
  • If this includes an internal dependency change, a link to the diff is provided
  • The docs label has been added if this introduces a change that needs to be updated in the user docs
  • If any Python requirements have changed, the updated requirements.txt files are also included in this PR
  • Opportunities for using Google Analytics here are noted
  • Migrations are safe for a large db

Studio-specific:

  • All user-facing strings are translated properly
  • The notranslate class has been added to elements that shouldn't be translated by Google Chrome's automatic translation feature (e.g. icons, user-generated text)
  • All UI components are LTR and RTL compliant
  • Views are organized into pages, components, and layouts directories as described in the docs
  • Users' storage used is recalculated properly on any changes to main tree files
  • If there are new ways this uses user data that need to be factored into our Privacy Policy, they have been noted

Testing:

  • Code is clean and well-commented
  • Contributor has fully tested the PR manually
  • If there are any front-end changes, before/after screenshots are included
  • Critical user journeys are covered by Gherkin stories
  • Any new interactions have been added to the QA Sheet
  • Critical and brittle code paths are covered by unit tests

Reviewer's Checklist

This section is for reviewers to fill out.

  • Automated test coverage is satisfactory
  • PR is fully functional
  • PR has been tested for accessibility regressions
  • External dependency files were updated if necessary (yarn and pip)
  • Documentation is updated
  • Contributor is in AUTHORS.md


@MisRob MisRob left a comment


Thank you @marcellamaki. When it comes to code, I think overall it's a great improvement and feels much clearer and more robust than before. Also thanks for documenting the new helper functions.

I left a few comments. Nothing is blocking. I'd welcome if those related to audio/video logic and covering it with one or two unit tests could be addressed, but it'd be also fine to do it as a follow-up.

@marcellamaki marcellamaki requested a review from MisRob July 3, 2023 20:47

@MisRob MisRob left a comment


Thank you for the new updates @marcellamaki, overall it's looking fine. The only thing that'd be good to do before merging is to remove the console.logs (not sure why the linter checks didn't catch them?); the rest of my new review is not blocking. It's already a great improvement toward reducing tech debt.

@@ -336,17 +332,19 @@ export function getCompletionCriteriaLabels(node) {
duration: '-',
};

console.log('filesss', files);


Can be removed :) This also applies to the other console.log occurrences in this file
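If the linter is expected to flag stray console.log calls, ESLint's built-in no-console rule can do it; here is a minimal sketch of what that rule looks like, assuming the project uses ESLint (the project's real configuration should be checked before adopting this):

```javascript
// Illustrative .eslintrc.js fragment — merge into the project's
// actual ESLint config rather than replacing it.
module.exports = {
  rules: {
    // error on console.* in source files; 'warn' is a softer option
    'no-console': 'error',
  },
};
```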

});
});
});
describe(`for exercises`, () => {


It'd be good to place this outside of the describe('getCompletionCriteriaLabels') block, because unlike the other test cases it doesn't test or describe the behavior of the getCompletionCriteriaLabels function, but rather that of getCompletionDataFromNode. This applies to the other new test case below as well.

A nice way to check whether the test organization and descriptions make sense from a documentation point of view is to run the test suite with the --verbose flag and read through the output:

yarn test-jest channelEdit/__tests__/utils.spec --verbose

Screenshot from 2023-07-04 07-54-35

And this is a nitpick (squared): on the screenshot above you can also see that it may be nice to distinguish the audio and video test case descriptions, since right now their names are identical. If one of them failed, ideally we'd be able to say immediately whether it failed for video or for audio without digging into the test code.
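The reorganization suggested above can be sketched as follows; the tiny describe/it stand-ins exist only so the sketch runs outside jest (in the real suite they come from jest itself), and the test names are illustrative:

```javascript
// Minimal stand-ins so the sketch runs outside jest; in the real
// suite, describe/it are provided by jest.
const names = [];
const describe = (name, fn) => { names.push(name); fn(); };
const it = (name, fn) => { names.push(name); fn(); };

// getCompletionDataFromNode gets its own top-level block, and the
// audio/video cases carry distinct descriptions so a failure is
// immediately attributable to one or the other.
describe('getCompletionCriteriaLabels', () => {
  it('returns "-" labels when no criteria are set', () => {});
});

describe('getCompletionDataFromNode', () => {
  describe('for audio', () => {
    it('falls back to the audio file duration', () => {});
  });
  describe('for video', () => {
    it('falls back to the video file duration', () => {});
  });
});
```

With this layout, the --verbose output groups each function's cases under its own heading and the audio/video lines read differently.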

@marcellamaki marcellamaki merged commit 3155ca9 into learningequality:hotfixes Jul 4, 2023
- export function getCompletionCriteriaLabels(node) {
-   if (!node) {
+ export function getCompletionCriteriaLabels(node = {}, files = []) {
+   if (!node && !files) {


Noting that if node = {}, then !node is false, and if files = [], then !files is also false — so this guard can never fire when the default parameters apply.
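A minimal demonstration of the point above, in plain JavaScript with illustrative names:

```javascript
// With default parameters, {} and [] are truthy, so !node and
// !files are both false whenever the defaults kick in.
function guardNeverFires(node = {}, files = []) {
  return !node && !files; // always false when the defaults apply
}

// One way to actually detect "nothing to work with" (illustrative):
function isEmptyInput(node = {}, files = []) {
  return Object.keys(node).length === 0 && files.length === 0;
}
```

Calling `guardNeverFires()` with no arguments returns false, while `isEmptyInput()` returns true, which is presumably the intent of the guard.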

const { model, threshold } = node.extra_fields?.options?.completion_criteria || {};
const masteryModel = threshold?.mastery_model;
const completionCriteria = node.extra_fields?.options?.completion_criteria
? JSON.parse(JSON.stringify(node.extra_fields?.options?.completion_criteria))


Why was the stringify and parse required?
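For context on the question: a JSON round-trip is a common idiom for deep-cloning plain data so that later mutations don't write through to node.extra_fields. A small sketch of the difference (structuredClone, available in modern Node and browsers, is an alternative that also handles Dates, Maps, etc.):

```javascript
// A shallow copy shares nested objects; the JSON round-trip does not.
const criteria = { model: 'time', threshold: { mastery_model: null } };

const shallow = { ...criteria };
const deep = JSON.parse(JSON.stringify(criteria));

shallow.threshold.mastery_model = 'shared';   // writes through to criteria
deep.threshold.mastery_model = 'isolated';    // stays local to the clone
```

After these assignments, criteria.threshold.mastery_model is 'shared' — the shallow copy mutated the original, while the deep copy did not.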
