
Draw generator fails when no metrics are provided #1108

Closed
ArthurPaiva opened this issue Jun 8, 2019 · 18 comments · Fixed by #1149
@ArthurPaiva

I hit an internal server error while trying to load this URL:
https://v-potiguar.herokuapp.com/vpotiguar2019/admin/draw/round/4/create/

Please describe what you were doing when the error happened, along with any other useful information:

Whenever I try to generate a draw for round 4 in my tournament, this error appears:
[error screenshot]

@tienne-B
Member

tienne-B commented Jun 8, 2019

That's not really enough information to determine what the problem is. However, your site did send an error report, so I'll be examining that (BACKEND-1QA).

Do you have team metrics set? Go to https://v-potiguar.herokuapp.com/vpotiguar2019/admin/options/standings/ and put "Wins" under "Team standings precedence". Or was it already set?

@ArthurPaiva
Author

For some reason I can't explain, now I can't even log in to Tabbycat as admin; it keeps saying that the username and password do not match. As if that weren't enough, when I try to reset my password via a reset e-mail, the same error (Internal Server Error (500)) occurs.
[error screenshot]

@tienne-B
Member

tienne-B commented Jun 8, 2019

Try going to the Heroku dashboard ( https://dashboard.heroku.com/apps/sucdi-tabs?web-console=v-potiguar ) and create a new user.

Click on the "More" dropdown (top-right) and select "Run Console". From there, type dj createsuperuser. It'll then ask for a new username and password. Once created, try logging in as the new account.

@ArthurPaiva
Author

I created the superuser as you said, then went to the standings options and put 'Wins'. It wasn't set. What should I do now? I still can't create the draw for round 4, as I said before.

@ArthurPaiva
Author

GOOD NEWS! It's working again! Thanks so much for the help!

@tienne-B
Member

tienne-B commented Jun 8, 2019

Did you save the preferences? You should be able to create the draw for round 4 now...

Oh great!

@tienne-B tienne-B closed this as completed Jun 8, 2019
@tienne-B tienne-B changed the title Internal server error report Draw generator fails when no metrics are provided Jun 8, 2019
@czlee
Member

czlee commented Jun 8, 2019

Thanks for the report @ArthurPaiva, and thanks for diagnosing it @tienne-B! Would I be understanding it right if I said that the issue seems to be that the system crashes when the team standings metrics are blank? If so, we should probably do something more graceful than throw a 500 in this circumstance, right?

@ArthurPaiva
Author

Yeah, exactly, @czlee. And for what it's worth, I agree with you. xD

@tienne-B
Member

tienne-B commented Jun 8, 2019

A warning could be placed on the availabilities page when power-pairing without metrics, or wins could be used as the default when the preference is not set, with a notice to that effect on the view draw page.

@tienne-B tienne-B reopened this Jun 8, 2019
@czlee
Member

czlee commented Jun 8, 2019

Hmm, the preference form shouldn't even be permitting it to save at all if the metrics are blank…

@czlee
Member

czlee commented Jun 8, 2019

Oh damn okay, so I had to make allow_empty = True so that the "---------" option will show up. But as a consequence, it permits them all to be submitted empty, which isn't great.

This means there are two distinct issues:

  • The preference form should validate this entry and refuse to save if it's all empty
  • If metrics do happen to be empty (this can still happen, because the database can be edited), the standings generator should raise some sensible StandingsError for higher levels to handle.
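The two fixes above could be sketched roughly as follows. This is an illustrative sketch only, not Tabbycat's actual implementation: the class and function names here (`StandingsError`, `validate_precedence`, `generate_standings`) are stand-ins, and a real Django form would raise `ValidationError` from a `clean_*` method instead.

```python
class StandingsError(ValueError):
    """Raised when standings can't be generated from the current configuration."""


def validate_precedence(metrics):
    """Form-level check: refuse to save an entirely empty precedence."""
    cleaned = [m for m in metrics if m]  # drop the "---------" blanks
    if not cleaned:
        raise ValueError("Team standings precedence cannot be entirely empty.")
    return cleaned


def generate_standings(teams, metrics):
    """Generator-level check: raise a sensible error instead of crashing,
    in case the database was edited directly and metrics are empty."""
    if not metrics:
        raise StandingsError("No team standings metrics are configured.")
    return sorted(teams, key=lambda t: tuple(t[m] for m in metrics), reverse=True)
```

The point of the two layers is that the form guards the normal path, while the generator still fails gracefully if bad data arrives by another route.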

@czlee
Member

czlee commented Jun 8, 2019

Related: #749

Apparently I once put in a fix to protect against a similar situation… when there's only one metric in the precedence.

@czlee
Member

czlee commented Jun 8, 2019

Actually, having thought about it more, there's no intrinsic reason Tabbycat should refuse to run just because the tab director didn't specify any team metrics. In principle everything can still run; it'll just be very boring: every team will trivially have exactly the same rank.

Or should we just enforce it to make our lives simpler? Thoughts, @tienne-B @philipbelesky?
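To illustrate the "boring but well-defined" case: with an empty precedence, every team's sort key is the empty tuple, so all teams compare equal and the sort is a stable no-op (a toy sketch, not Tabbycat code):

```python
teams = [{"name": "A"}, {"name": "B"}, {"name": "C"}]
metrics = []  # no "Team standings precedence" configured

# Every team's sort key is the empty tuple, so all teams compare equal
# and sorted() (being stable) leaves the original order untouched.
keys = [tuple(t[m] for m in metrics) for t in teams]
ranked = sorted(teams, key=lambda t: tuple(t[m] for m in metrics))
```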

czlee added a commit that referenced this issue Jun 8, 2019
@philipbelesky
Member

Um I think that it’s probably best to try and enforce it within the form as that is probably the best time for a user to understand the behaviour.

Ideally it should also run if that behaviour is somehow set, but maybe display a warning highlighting why the rankings will look wrong.

@czlee
Member

czlee commented Jun 9, 2019

Oh, sorry, my phrasing was poor. Obviously we should do something on the form. What I mean is: Enforce, or warn only?

To elaborate: "Enforce" means refuse to save preferences with a blank standings precedence, and raise a StandingsError to communicate to user why the standings/draw couldn't be generated. "Warn only" means save preferences but warn the user about the consequences, and generate the trivial standings/draw but with a message explaining why it might look weird.

I think we should be consistent between the preferences form and the draw/standings pages, i.e. either preference validation is enforced and it raises a (graceful) error if bad, or warnings appear in both places.

Basically the question is whether we feel strongly enough about this to cut the option off from users entirely (given that our general default is to let users shoot themselves in the foot if they have good reason for doing so, but warn them about what they're doing).

@philipbelesky
Member

Hmm, I would lean on the side of enforce in the absence of a valid use case. We have a similar safeguard in place for the teams_in_debate and per-adj ballots preference combination.

@czlee
Member

czlee commented Jun 9, 2019

Potential use cases:

  1. Allowing a summary display of wins/losses in debates (because that's what the team tab, minus the last few columns, is), without implying an official ranking among teams (because of some quirk with a tournament format).
  2. "Hacking" a BP draw that is mostly random (not power-paired), but still runs position balance. (Not officially supported, but could be done in this way.)

Whether these are "valid" is… well, open to argument, I guess.

(The teams_in_debate+per-adj combination isn't really analogous: there's no way for us to code around that without building an entire system for dealing with voting ballots in BP. Paraphrased, that one's not just, "it will do something stupid by default", but, "what does that even mean?!")

czlee added a commit that referenced this issue Jun 10, 2019
czlee added a commit that referenced this issue Jun 10, 2019
This rewrites metricgetter() to avoid using Python's itemgetter(),
so that the function it returns will always return a tuple. This
simplifies a lot of handling, because it avoids the special case
where itemgetter(item) called with one argument returns a single
item rather than a tuple of length 1.

As a consequence, the case where no metrics are specified now
shouldn't crash. Also added standings test for this case.
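The commit message above describes the change; the itemgetter() special case it works around can be demonstrated in a few lines. This is a minimal sketch (using tuple indices to stand in for metric names), not Tabbycat's actual metricgetter():

```python
from operator import itemgetter

# The special case: one key yields a bare value, two or more yield a tuple.
single = itemgetter(0)((10, 20))      # -> 10, a bare int, not (10,)
pair = itemgetter(0, 1)((10, 20))     # -> (10, 20), a tuple


def metricgetter(*keys):
    """Return a getter that always yields a tuple, even for zero or one keys."""
    def getter(obj):
        return tuple(obj[k] for k in keys)
    return getter
```

Because the returned getter always yields a tuple, callers no longer need to branch on whether they got a scalar or a sequence, and the zero-metric case produces an empty tuple instead of crashing.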
@czlee
Member

czlee commented Jun 10, 2019

Okay, with 6f3e8dc I've pretty much just made it so it works fine with no metrics, I guess (while simplifying some related code). I think this means I'd now lean towards warning only.

czlee added a commit that referenced this issue Jun 10, 2019
@czlee czlee added this to the LaPerm milestone Jul 5, 2019
czlee added a commit that referenced this issue Jul 21, 2019
czlee added a commit that referenced this issue Jul 21, 2019
czlee added a commit that referenced this issue Jul 21, 2019
@czlee czlee modified the milestones: LaPerm, M-Release Jul 21, 2019
philipbelesky pushed a commit that referenced this issue Jul 29, 2019
* Make standings generation more robust #1108

* Fix bug where extra metrics cause standings to crash

Also added a unit test for this scenario

* Add user alerts for blank standings precedence #1108
@philipbelesky philipbelesky modified the milestones: M-Release, LaPerm Aug 18, 2019
@czlee czlee modified the milestones: LaPerm, Maine Coon Jun 8, 2020