Fix a crash on psycopg2 for elif used #5369
Conversation
Pull Request Test Coverage Report for Build 1498507098 (Coveralls)
Force-pushed from c2ce2bb to 7dda588
New warnings are false positives; changing to draft.
We do not remove the hotfix, so that we get a false positive instead of a crash if something else is still buggy.
Force-pushed from 7dda588 to d5dec5c
@DanielNoord @cdce8p this is ready for review.
Looks good! Small changes here and there which we could do, I'll leave that decision to you!
    and (node.lineno, node.col_offset) in self._elifs
    and self._elifs[(node.lineno, node.col_offset)] == "if"
):
    self.add_message("else-if-used", node=node, confidence=HIGH)
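To illustrate the lookup this condition performs, here is a minimal, self-contained sketch (not the checker's actual code) of the idea: `elif` and `else: if` both produce `ast.If` nodes, so the checker records the keyword token found at each `(lineno, col_offset)` and consults that map when visiting an `If` node.

```python
import ast
import io
import tokenize

def find_if_keywords(source):
    """Map the (lineno, col_offset) of every if/elif keyword token."""
    keywords = {}
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string in ("if", "elif"):
            keywords[tok.start] = tok.string
    return keywords

source = "if a:\n    pass\nelse:\n    if b:\n        pass\n"
elifs = find_if_keywords(source)

tree = ast.parse(source)
inner_if = tree.body[0].orelse[0]  # the `if b` nested inside the else
key = (inner_if.lineno, inner_if.col_offset)
# The token at the node's position is a plain "if", so this is an
# `else: if` that could have been written as an `elif`.
print(elifs.get(key))
```

If the counting of tokens and nodes drifts apart (as happened with the f-string case below), this position-keyed lookup fails instead of silently flagging the wrong node, which is why the confidence discussion matters.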
Not having an off-by-one error when we miscount an `If` is what increased the confidence. But really, the other options are:
from collections import namedtuple

Confidence = namedtuple("Confidence", ["name", "description"])
# Warning Certainties
HIGH = Confidence("HIGH", "No false positive possible.")
INFERENCE = Confidence("INFERENCE", "Warning based on inference result.")
INFERENCE_FAILURE = Confidence(
"INFERENCE_FAILURE", "Warning based on inference with failures."
)
UNDEFINED = Confidence("UNDEFINED", "Warning without any associated confidence level.")
CONFIDENCE_LEVELS = [HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED]
There's no inference here, so it should probably be HIGH. But also, we should probably refactor this (?). It was not used before, so we might as well make it make sense. I'm not confident enough to say there will never be any false positives just because there was no inference.
Seems like we could use a NO_INFERENCE level. Maybe add it to the growing list of todos for 2.13?
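As a hedged sketch of that suggestion (this is not pylint's actual API, and the final name and wording could differ), a NO_INFERENCE level could slot between HIGH and INFERENCE, following the existing pattern:

```python
from collections import namedtuple

Confidence = namedtuple("Confidence", ["name", "description"])

# Warning Certainties
HIGH = Confidence("HIGH", "No false positive possible.")
# Hypothetical new level discussed in this thread:
NO_INFERENCE = Confidence(
    "NO_INFERENCE", "Warning emitted without any inference being involved."
)
INFERENCE = Confidence("INFERENCE", "Warning based on inference result.")
INFERENCE_FAILURE = Confidence(
    "INFERENCE_FAILURE", "Warning based on inference with failures."
)
UNDEFINED = Confidence("UNDEFINED", "Warning without any associated confidence level.")
CONFIDENCE_LEVELS = [HIGH, NO_INFERENCE, INFERENCE, INFERENCE_FAILURE, UNDEFINED]
```

Keeping the list ordered from most to least certain preserves the existing convention that earlier levels are stronger claims.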
Updated #5317
I don't understand the |
The report on |
Type of Changes
Description
https://github.com/psycopg/psycopg/blob/master/psycopg/psycopg/_queries.py#L281
Exception on node <If l.281 at 0x7f2f9141a550>
pylint crashed with an IndexError and with the following stacktrace. The hot fix would be:
It seems to me that counting the if/elif properly would take some time. I created a minimal reproduction example:
The issue comes from the f-string with a comprehension inside it, which is not separated during token processing.
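The original reproduction snippet did not survive extraction here. A hypothetical reconstruction of the pattern described (an `elif` in code that also contains an f-string holding a comprehension, like the linked psycopg `_queries.py` line) might look like:

```python
# Hypothetical reconstruction, not the actual psycopg code: an elif whose
# body builds an f-string containing a generator comprehension. Older
# token handling did not separate the comprehension inside the f-string,
# which threw off the if/elif counting and caused the IndexError.
def describe(parts):
    if not parts:
        return "empty"
    elif len(parts) > 1:
        return f"got {', '.join(str(p) for p in parts)}"
    return "one"

print(describe([1, 2]))  # -> "got 1, 2"
```

Running pylint's `else-if-used` check over code shaped like this is the scenario the PR guards against.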
Detected in #5173