Ruff 0.9 #15238
Conversation
code | total | + violation | - violation | + fix | - fix |
---|---|---|---|---|---|
A005 | 82 | 82 | 0 | 0 | 0 |
PYI041 | 80 | 0 | 0 | 80 | 0 |
PT006 | 49 | 49 | 0 | 0 | 0 |
RUF032 | 25 | 25 | 0 | 0 | 0 |
FURB188 | 21 | 21 | 0 | 0 | 0 |
RUF100 | 15 | 15 | 0 | 0 | 0 |
RUF008 | 10 | 10 | 0 | 0 | 0 |
PYI016 | 10 | 0 | 0 | 10 | 0 |
RUF012 | 6 | 0 | 6 | 0 | 0 |
RUF009 | 2 | 2 | 0 | 0 | 0 |
A006 | 2 | 2 | 0 | 0 | 0 |
RUF034 | 2 | 2 | 0 | 0 | 0 |
PT007 | 1 | 1 | 0 | 0 | 0 |
PYI006 | 1 | 1 | 0 | 0 | 0 |
PLR1716 | 1 | 1 | 0 | 0 | 0 |
Linter (preview)
✅ ecosystem check detected no linter changes.
Formatter (stable)
ℹ️ ecosystem check detected format changes. (+3220 -3659 lines in 973 files in 41 projects; 14 projects unchanged)
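Most of the formatter diff below comes from two stylistic changes: unnecessary implicit string concatenations are now joined into a single literal, and f-strings keep the project's preferred outer quote style, flipping nested quotes instead of switching the f-string's own quotes. A minimal sketch of the quote change (the example strings are invented, not taken from the diff):

```python
items = ["read", "write"]

# Style seen in the "-" lines: a single-quoted f-string so that the
# double-quoted string inside the braces doesn't clash.
old_style = f'allowed scopes: {", ".join(items)}'

# Style seen in the "+" lines: the f-string keeps the preferred double
# quotes and the nested literal is flipped to single quotes instead.
new_style = f"allowed scopes: {', '.join(items)}"

# Only the source style differs; both evaluate to the same text.
print(new_style)
```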
DisnakeDev/disnake (+6 -10 lines across 4 files)
disnake/ext/commands/help.py~L952
for command in commands:
name = command.name
width = max_size - (get_width(name) - len(name))
- entry = f'{self.indent * " "}{name:<{width}} {command.short_doc}'
+ entry = f"{self.indent * ' '}{name:<{width}} {command.short_doc}"
self.paginator.add_line(self.shorten_text(entry))
async def send_pages(self) -> None:
disnake/ext/commands/help.py~L1199
aliases: Sequence[:class:`str`]
A list of aliases to format.
"""
- self.paginator.add_line(f'**{self.aliases_heading}** {", ".join(aliases)}', empty=True)
+ self.paginator.add_line(f"**{self.aliases_heading}** {', '.join(aliases)}", empty=True)
def add_command_formatting(self, command) -> None:
"""A utility function to format commands and groups.
self.scopes: List[str] = data.get("scopes") or []
def __repr__(self) -> str:
- return (
- f"<{self.__class__.__name__} id={self.id}"
- f" name={self.name!r} scopes={self.scopes!r}>"
- )
+ return f"<{self.__class__.__name__} id={self.id} name={self.name!r} scopes={self.scopes!r}>"
def _integration_factory(value: str) -> Tuple[Type[Integration], str]:
def set_bandwidth(self, req: BAND_CTL) -> None:
if req not in band_ctl:
raise KeyError(
- f'{req!r} is not a valid bandwidth setting. Try one of: {",".join(band_ctl)}'
+ f"{req!r} is not a valid bandwidth setting. Try one of: {','.join(band_ctl)}"
)
k = band_ctl[req]
def set_signal_type(self, req: SIGNAL_CTL) -> None:
if req not in signal_ctl:
raise KeyError(
- f'{req!r} is not a valid bandwidth setting. Try one of: {",".join(signal_ctl)}'
+ f"{req!r} is not a valid bandwidth setting. Try one of: {','.join(signal_ctl)}"
)
k = signal_ctl[req]
fallback = cls._probe_codec_fallback
else:
raise TypeError(
- "Expected str or callable for parameter 'probe', "
- f"not '{method.__class__.__name__}'"
+ f"Expected str or callable for parameter 'probe', not '{method.__class__.__name__}'"
)
codec = bitrate = None
RasaHQ/rasa (+94 -116 lines across 44 files)
.github/tests/test_model_regression_test_read_dataset_branch_tmpl.py~L17
],
)
def test_read_dataset_branch(comment_body_file: Text, expected_dataset_branch: Text):
- cmd = (
- "gomplate "
- f"-d github={TEST_DATA_DIR}/{comment_body_file} "
- f"-f {TEMPLATE_FPATH}"
- )
+ cmd = f"gomplate -d github={TEST_DATA_DIR}/{comment_body_file} -f {TEMPLATE_FPATH}"
output = subprocess.check_output(cmd.split(" "), cwd=REPO_DIR)
output = output.decode("utf-8").strip()
assert output == f'export DATASET_BRANCH="{expected_dataset_branch}"'
rasa/cli/arguments/export.py~L9
parser,
default=DEFAULT_ENDPOINTS_PATH,
help_text=(
- "Endpoint configuration file specifying the tracker store "
- "and event broker."
+ "Endpoint configuration file specifying the tracker store and event broker."
),
)
os.makedirs(path)
except (PermissionError, OSError, FileExistsError) as e:
print_error_and_exit(
- f"Failed to create project path at '{path}'. " f"Error: {e}"
+ f"Failed to create project path at '{path}'. Error: {e}"
)
else:
print_success(
# Check if a valid setting for `max_history` was given
if isinstance(max_history, int) and max_history < 1:
raise argparse.ArgumentTypeError(
- f"The value of `--max-history {max_history}` " f"is not a positive integer."
+ f"The value of `--max-history {max_history}` is not a positive integer."
)
return validator.verify_story_structure(
attempts -= 1
rasa.shared.utils.cli.print_error_and_exit(
- "Could not fetch runtime config from server at '{}'. " "Exiting.".format(
+ "Could not fetch runtime config from server at '{}'. Exiting.".format(
config_endpoint
)
)
rasa/core/actions/action.py~L322
if message is None:
if not self.silent_fail:
logger.error(
- "Couldn't create message for response '{}'." "".format(
+ "Couldn't create message for response '{}'.".format(
self.utter_action
)
)
rasa/core/actions/action.py~L470
else:
if not self.silent_fail:
logger.error(
- "Couldn't create message for response action '{}'." "".format(
+ "Couldn't create message for response action '{}'.".format(
self.action_name
)
)
rasa/core/channels/console.py~L194
exit_text = INTENT_MESSAGE_PREFIX + "stop"
rasa.shared.utils.cli.print_success(
- "Bot loaded. Type a message and press enter " "(use '{}' to exit): ".format(
+ "Bot loaded. Type a message and press enter (use '{}' to exit): ".format(
exit_text
)
)
rasa/core/channels/telegram.py~L97
reply_markup.add(KeyboardButton(button["title"]))
else:
logger.error(
- "Trying to send text with buttons for unknown " "button type {}".format(
+ "Trying to send text with buttons for unknown button type {}".format(
button_type
)
)
conversation_ids_to_process = await self._get_conversation_ids_to_process()
rasa.shared.utils.cli.print_info(
- f"Fetching events for {len(conversation_ids_to_process)} "
- f"conversation IDs:"
+ f"Fetching events for {len(conversation_ids_to_process)} conversation IDs:"
)
for conversation_id in tqdm(conversation_ids_to_process, "conversation IDs"):
tracker = await self.tracker_store.retrieve_full_tracker(conversation_id)
body = nlg_request_format(utter_action, tracker, output_channel, **kwargs)
logger.debug(
- "Requesting NLG for {} from {}." "The request body is {}." "".format(
+ "Requesting NLG for {} from {}.The request body is {}.".format(
utter_action, self.nlg_endpoint.url, json.dumps(body)
)
)
rasa/core/policies/policy.py~L250
max_training_samples = kwargs.get("max_training_samples")
if max_training_samples is not None:
logger.debug(
- "Limit training data to {} training samples." "".format(
+ "Limit training data to {} training samples.".format(
max_training_samples
)
)
rasa/core/policies/ted_policy.py~L837
# take the last prediction in the sequence
similarities = outputs["similarities"][:, -1, :]
else:
- raise TypeError(
- "model output for `similarities` " "should be a numpy array"
- )
+ raise TypeError("model output for `similarities` should be a numpy array")
if isinstance(outputs["scores"], np.ndarray):
confidences = outputs["scores"][:, -1, :]
else:
rasa/core/policies/unexpected_intent_policy.py~L612
if isinstance(output["similarities"], np.ndarray):
sequence_similarities = output["similarities"][:, -1, :]
else:
- raise TypeError(
- "model output for `similarities` " "should be a numpy array"
- )
+ raise TypeError("model output for `similarities` should be a numpy array")
# Check for unlikely intent
last_user_uttered_event = tracker.get_last_event_for(UserUttered)
):
story_dump = YAMLStoryWriter().dumps(partial_tracker.as_story().story_steps)
error_msg = (
- f"Model predicted a wrong action. Failed Story: " f"\n\n{story_dump}"
+ f"Model predicted a wrong action. Failed Story: \n\n{story_dump}"
)
raise WrongPredictionException(error_msg)
elif prev_action_unlikely_intent:
for policy_config in policy_configs:
config_name = os.path.splitext(os.path.basename(policy_config))[0]
logging.info(
- "Starting to train {} round {}/{}" " with {}% exclusion" "".format(
+ "Starting to train {} round {}/{} with {}% exclusion".format(
config_name, current_run, len(exclusion_percentages), percentage
)
)
domain,
policy_config,
stories=story_file,
- output=str(Path(output_path, f"run_{r +1}")),
+ output=str(Path(output_path, f"run_{r + 1}")),
fixed_model_name=config_name + PERCENTAGE_KEY + str(percentage),
additional_arguments={
**additional_arguments,
rasa/core/training/converters/responses_prefix_converter.py~L26
The name of the response, starting with `utter_`.
"""
return (
- f"{UTTER_PREFIX}{action_name[len(OBSOLETE_RESPOND_PREFIX):]}"
+ f"{UTTER_PREFIX}{action_name[len(OBSOLETE_RESPOND_PREFIX) :]}"
if action_name.startswith(OBSOLETE_RESPOND_PREFIX)
else action_name
)
rasa/core/training/interactive.py~L346
choices = []
for p in sorted_intents:
name_with_confidence = (
- f'{p.get("confidence"):03.2f} {p.get(INTENT_NAME_KEY):40}'
+ f"{p.get('confidence'):03.2f} {p.get(INTENT_NAME_KEY):40}"
)
choice = {
INTENT_NAME_KEY: name_with_confidence,
rasa/core/training/interactive.py~L674
await _print_history(conversation_id, endpoint)
choices = [
- {"name": f'{a["score"]:03.2f} {a["action"]:40}', "value": a["action"]}
+ {"name": f"{a['score']:03.2f} {a['action']:40}", "value": a["action"]}
for a in predictions
]
rasa/core/training/interactive.py~L723
# export training data and quit
questions = questionary.form(
export_stories=questionary.text(
- message="Export stories to (if file exists, this "
- "will append the stories)",
+ message="Export stories to (if file exists, this will append the stories)",
default=PATHS["stories"],
validate=io_utils.file_type_validator(
rasa.shared.data.YAML_FILE_EXTENSIONS,
rasa/core/training/interactive.py~L738
default=PATHS["nlu"],
validate=io_utils.file_type_validator(
list(rasa.shared.data.TRAINING_DATA_EXTENSIONS),
- "Please provide a valid export path for the NLU data, "
- "e.g. 'nlu.yml'.",
+ "Please provide a valid export path for the NLU data, e.g. 'nlu.yml'.",
),
),
export_domain=questionary.text(
- message="Export domain file to (if file exists, this "
- "will be overwritten)",
+ message="Export domain file to (if file exists, this will be overwritten)",
default=PATHS["domain"],
validate=io_utils.file_type_validator(
rasa.shared.data.YAML_FILE_EXTENSIONS,
"""
if use_syslog:
formatter = logging.Formatter(
- "%(asctime)s [%(levelname)-5.5s] [%(process)d]" " %(message)s"
+ "%(asctime)s [%(levelname)-5.5s] [%(process)d] %(message)s"
)
socktype = SOCK_STREAM if syslog_protocol == TCP_PROTOCOL else SOCK_DGRAM
syslog_handler = logging.handlers.SysLogHandler(
"""
if hot_idx >= length:
raise ValueError(
- "Can't create one hot. Index '{}' is out " "of range (length '{}')".format(
+ "Can't create one hot. Index '{}' is out of range (length '{}')".format(
hot_idx, length
)
)
)
rasa.shared.utils.cli.print_success(
- "No training of components required "
- "(the responses might still need updating!)."
+ "No training of components required (the responses might still need updating!)."
)
return TrainingResult(
code=CODE_NO_NEED_TO_TRAIN, dry_run_results=fingerprint_results
rasa/nlu/featurizers/sparse_featurizer/count_vectors_featurizer.py~L166
)
if self.stop_words is not None:
logger.warning(
- "Analyzer is set to character, "
- "provided stop words will be ignored."
+ "Analyzer is set to character, provided stop words will be ignored."
)
if self.max_ngram == 1:
logger.warning(
raise ErrorResponse(
HTTPStatus.BAD_REQUEST,
"BadRequest",
- "Invalid parameter value for 'include_events'. "
- "Should be one of {}".format(enum_values),
+ "Invalid parameter value for 'include_events'. Should be one of {}".format(
+ enum_values
+ ),
{"parameter": "include_events", "in": "query"},
)
rasa/shared/core/domain.py~L198
domain = cls.from_directory(path)
else:
raise InvalidDomain(
- "Failed to load domain specification from '{}'. "
- "File not found!".format(os.path.abspath(path))
+ "Failed to load domain specification from '{}'. File not found!".format(
+ os.path.abspath(path)
+ )
)
return domain
rasa/shared/core/events.py~L1964
def __str__(self) -> Text:
"""Returns text representation of event."""
- return (
- "ActionExecutionRejected("
- "action: {}, policy: {}, confidence: {})"
- "".format(self.action_name, self.policy, self.confidence)
+ return "ActionExecutionRejected(action: {}, policy: {}, confidence: {})".format(
+ self.action_name, self.policy, self.confidence
)
def __hash__(self) -> int:
rasa/shared/core/generator.py~L401
if num_active_trackers:
logger.debug(
- "Starting {} ... (with {} trackers)" "".format(
+ "Starting {} ... (with {} trackers)".format(
phase_name, num_active_trackers
)
)
rasa/shared/core/generator.py~L517
phase = 0
else:
logger.debug(
- "Found {} unused checkpoints " "in current phase." "".format(
+ "Found {} unused checkpoints in current phase.".format(
len(unused_checkpoints)
)
)
logger.debug(
- "Found {} active trackers " "for these checkpoints." "".format(
+ "Found {} active trackers for these checkpoints.".format(
num_active_trackers
)
)
rasa/shared/core/generator.py~L553
augmented_trackers, self.config.max_number_of_augmented_trackers
)
logger.debug(
- "Subsampled to {} augmented training trackers." "".format(
+ "Subsampled to {} augmented training trackers.".format(
len(augmented_trackers)
)
)
rasa/shared/core/trackers.py~L634
"""
if not isinstance(dialogue, Dialogue):
raise ValueError(
- f"story {dialogue} is not of type Dialogue. "
- f"Have you deserialized it?"
+ f"story {dialogue} is not of type Dialogue. Have you deserialized it?"
)
self._reset()
rasa/shared/core/training_data/story_reader/story_reader.py~L83
)
if parsed_events is None:
raise StoryParseError(
- "Unknown event '{}'. It is Neither an event " "nor an action).".format(
+ "Unknown event '{}'. It is Neither an event nor an action).".format(
event_name
)
)
rasa/shared/core/training_data/story_reader/yaml_story_reader.py~L334
if not self.domain:
logger.debug(
- "Skipped validating if intent is in domain as domain " "is `None`."
+ "Skipped validating if intent is in domain as domain is `None`."
)
return
rasa/shared/nlu/training_data/formats/dialogflow.py~L34
if fformat not in {DIALOGFLOW_INTENT, DIALOGFLOW_ENTITIES}:
raise ValueError(
- "fformat must be either {}, or {}" "".format(
+ "fformat must be either {}, or {}".format(
DIALOGFLOW_INTENT, DIALOGFLOW_ENTITIES
)
)
rasa/shared/nlu/training_data/util.py~L24
ESCAPE_DCT = {"\b": "\\b", "\f": "\\f", "\n": "\\n", "\r": "\\r", "\t": "\\t"}
ESCAPE_CHARS = set(ESCAPE_DCT.keys())
-ESCAPE = re.compile(f'[{"".join(ESCAPE_DCT.values())}]')
+ESCAPE = re.compile(f"[{''.join(ESCAPE_DCT.values())}]")
UNESCAPE_DCT = {espaced_char: char for char, espaced_char in ESCAPE_DCT.items()}
-UNESCAPE = re.compile(f'[{"".join(UNESCAPE_DCT.values())}]')
+UNESCAPE = re.compile(f"[{''.join(UNESCAPE_DCT.values())}]")
GROUP_COMPLETE_MATCH = 0
return f.read()
except FileNotFoundError:
raise FileNotFoundException(
- f"Failed to read file, " f"'{os.path.abspath(filename)}' does not exist."
+ f"Failed to read file, '{os.path.abspath(filename)}' does not exist."
)
except UnicodeDecodeError:
raise FileIOException(
"""
if not isinstance(path, str):
raise ValueError(
- f"`resource_name` must be a string type. " f"Got `{type(path)}` instead"
+ f"`resource_name` must be a string type. Got `{type(path)}` instead"
)
if os.path.isfile(path):
)
except FileNotFoundError:
raise FileNotFoundException(
- f"Failed to read file, " f"'{os.path.abspath(file_path)}' does not exist."
+ f"Failed to read file, '{os.path.abspath(file_path)}' does not exist."
)
access_logger.addHandler(file_handler)
if use_syslog:
formatter = logging.Formatter(
- "%(asctime)s [%(levelname)-5.5s] [%(process)d]" " %(message)s"
+ "%(asctime)s [%(levelname)-5.5s] [%(process)d] %(message)s"
)
socktype = SOCK_STREAM if syslog_protocol == TCP_PROTOCOL else SOCK_DGRAM
syslog_handler = logging.handlers.SysLogHandler(
return EndpointConfig.from_dict(content[endpoint_type])
except FileNotFoundError:
logger.error(
- "Failed to read endpoint configuration " "from {}. No such file.".format(
+ "Failed to read endpoint configuration from {}. No such file.".format(
os.path.abspath(filename)
)
)
tests/core/test_evaluation.py~L563
True,
],
[
- "data/test_yaml_stories/"
- "test_prediction_with_wrong_intent_wrong_entity.yml",
+ "data/test_yaml_stories/test_prediction_with_wrong_intent_wrong_entity.yml",
False,
False,
],
tests/core/test_migrate.py~L971
"responses.yml",
)
- return domain_dir, "Domain files with multiple 'slots' sections were " "provided."
+ return domain_dir, "Domain files with multiple 'slots' sections were provided."
@pytest.mark.parametrize(
tests/core/test_tracker_stores.py~L311
assert isinstance(tracker_store, InMemoryTrackerStore)
-async def _tracker_store_and_tracker_with_slot_set() -> (
- Tuple[InMemoryTrackerStore, DialogueStateTracker]
-):
+async def _tracker_store_and_tracker_with_slot_set() -> Tuple[
+ InMemoryTrackerStore, DialogueStateTracker
+]:
# returns an InMemoryTrackerStore containing a tracker with a slot set
slot_key = "cuisine"
tests/engine/recipes/test_default_recipe.py~L98
(
"data/test_config/config_pretrained_embeddings_mitie.yml",
"data/graph_schemas/config_pretrained_embeddings_mitie_train_schema.yml",
- "data/graph_schemas/"
- "config_pretrained_embeddings_mitie_predict_schema.yml",
+ "data/graph_schemas/config_pretrained_embeddings_mitie_predict_schema.yml",
TrainingType.BOTH,
False,
),
tests/graph_components/validators/test_default_recipe_validator.py~L780
if should_warn:
with pytest.warns(
UserWarning,
- match=(f"'{RulePolicy.__name__}' is not " "included in the model's "),
+ match=(f"'{RulePolicy.__name__}' is not included in the model's "),
) as records:
validator.validate(importer)
else:
tests/graph_components/validators/test_default_recipe_validator.py~L883
num_duplicates: bool,
priority: int,
):
- assert (
- len(policy_types) >= priority + num_duplicates
- ), f"This tests needs at least {priority+num_duplicates} many types."
+ assert len(policy_types) >= priority + num_duplicates, (
+ f"This tests needs at least {priority + num_duplicates} many types."
+ )
# start with a schema where node i has priority i
nodes = {
tests/graph_components/validators/test_default_recipe_validator.py~L895
# give nodes p+1, .., p+num_duplicates-1 priority "priority"
for idx in range(num_duplicates):
- nodes[f"{priority+idx+1}"].config["priority"] = priority
+ nodes[f"{priority + idx + 1}"].config["priority"] = priority
validator = DefaultV1RecipeValidator(graph_schema=GraphSchema(nodes))
monkeypatch.setattr(
tests/graph_components/validators/test_default_recipe_validator.py~L992
with pytest.warns(
UserWarning,
match=(
- "Found rule-based training data but no policy "
- "supporting rule-based data."
+ "Found rule-based training data but no policy supporting rule-based data."
),
):
validator.validate(importer)
tests/nlu/featurizers/test_count_vectors_featurizer.py~L772
@pytest.mark.parametrize(
- "initial_train_text, additional_train_text, " "use_shared_vocab",
+ "initial_train_text, additional_train_text, use_shared_vocab",
[("am I the coolest person?", "no", True), ("rasa rasa", "sara sara", False)],
)
def test_use_shared_vocab_exception(
tests/nlu/featurizers/test_regex_featurizer.py~L44
@pytest.mark.parametrize(
- "sentence, expected_sequence_features, expected_sentence_features,"
- "labeled_tokens",
+ "sentence, expected_sequence_features, expected_sentence_features,labeled_tokens",
[
(
"hey how are you today",
tests/nlu/featurizers/test_regex_featurizer.py~L219
@pytest.mark.parametrize(
- "sentence, expected_sequence_features, expected_sentence_features, "
- "labeled_tokens",
+ "sentence, expected_sequence_features, expected_sentence_features, labeled_tokens",
[
(
"lemonade and mapo tofu",
tests/nlu/featurizers/test_regex_featurizer.py~L383
@pytest.mark.parametrize(
- "sentence, expected_sequence_features, expected_sentence_features,"
- "case_sensitive",
+ "sentence, expected_sequence_features, expected_sentence_features,case_sensitive",
[
("Hey How are you today", [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], True),
("Hey How are you today", [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], False),
tests/nlu/featurizers/test_spacy_featurizer.py~L133
vecs = ftr._features_for_doc(doc)
vecs_capitalized = ftr._features_for_doc(doc_capitalized)
- assert np.allclose(
- vecs, vecs_capitalized, atol=1e-5
- ), "Vectors are unequal for texts '{}' and '{}'".format(
- e.get(TEXT), e.get(TEXT).capitalize()
+ assert np.allclose(vecs, vecs_capitalized, atol=1e-5), (
+ "Vectors are unequal for texts '{}' and '{}'".format(
+ e.get(TEXT), e.get(TEXT).capitalize()
+ )
)
# publicly available anymore
# (see https://github.com/RasaHQ/rasa/issues/6806)
continue
- assert (
- cls.__name__ in all_components
- ), "`all_components` template is missing component."
+ assert cls.__name__ in all_components, (
+ "`all_components` template is missing component."
+ )
@pytest.mark.timeout(600, func_only=True)
tests/shared/core/test_events.py~L87
)
def test_event_has_proper_implementation(one_event, another_event):
# equals tests
- assert (
- one_event != another_event
- ), "Same events with different values need to be different"
+ assert one_event != another_event, (
+ "Same events with different values need to be different"
+ )
assert one_event == copy.deepcopy(one_event), "Event copies need to be the same"
assert one_event != 42, "Events aren't equal to 42!"
# hash test
- assert hash(one_event) == hash(
- copy.deepcopy(one_event)
- ), "Same events should have the same hash"
- assert hash(one_event) != hash(
- another_event
- ), "Different events should have different hashes"
+ assert hash(one_event) == hash(copy.deepcopy(one_event)), (
+ "Same events should have the same hash"
+ )
+ assert hash(one_event) != hash(another_event), (
+ "Different events should have different hashes"
+ )
# str test
assert "object at 0x" not in str(one_event), "Event has a proper str method"
tests/shared/core/test_slots.py~L52
value, expected = value_feature_pair
slot.value = value
assert slot.as_feature() == expected
- assert (
- len(slot.as_feature()) == slot.feature_dimensionality()
- ), "Wrong feature dimensionality"
+ assert len(slot.as_feature()) == slot.feature_dimensionality(), (
+ "Wrong feature dimensionality"
+ )
# now reset the slot to get initial value again
slot.reset()
- assert (
- slot.value == slot.initial_value
- ), "Slot should be reset to its initial value"
+ assert slot.value == slot.initial_value, (
+ "Slot should be reset to its initial value"
+ )
def test_empty_slot_featurization(self, mappings: List[Dict[Text, Any]]):
slot = self.create_slot(mappings=mappings, influence_conversation=True)
- assert (
- slot.value == slot.initial_value
- ), "An empty slot should be set to the initial value"
+ assert slot.value == slot.initial_value, (
+ "An empty slot should be set to the initial value"
+ )
assert len(slot.as_feature()) == slot.feature_dimensionality()
def test_featurization_if_marked_as_unfeaturized(
tests/shared/core/training_data/test_graph.py~L10
for n in sorted_nodes:
deps = incoming_edges.get(n, [])
# checks that all incoming edges are from nodes we have already visited
- assert all(
- [d in visited or (d, n) in removed_edges for d in deps]
- ), "Found an incoming edge from a node that wasn't visited yet!"
+ assert all([d in visited or (d, n) in removed_edges for d in deps]), (
+ "Found an incoming edge from a node that wasn't visited yet!"
+ )
visited.add(n)
Snowflake-Labs/snowcli (+52 -50 lines across 17 files)
src/snowflake/cli/_plugins/connection/commands.py~L334
"Host": conn.host,
"Account": conn.account,
"User": conn.user,
- "Role": f'{conn.role or "not set"}',
- "Database": f'{conn.database or "not set"}',
- "Warehouse": f'{conn.warehouse or "not set"}',
+ "Role": f"{conn.role or 'not set'}",
+ "Database": f"{conn.database or 'not set'}",
+ "Warehouse": f"{conn.warehouse or 'not set'}",
}
if conn_ctx.enable_diag:
src/snowflake/cli/_plugins/nativeapp/artifacts.py~L250
def __init__(self, *, project_root: Path, deploy_root: Path):
# If a relative path ends up here, it's a bug in the app and can lead to other
# subtle bugs as paths would be resolved relative to the current working directory.
- assert (
- project_root.is_absolute()
- ), f"Project root {project_root} must be an absolute path."
- assert (
- deploy_root.is_absolute()
- ), f"Deploy root {deploy_root} must be an absolute path."
+ assert project_root.is_absolute(), (
+ f"Project root {project_root} must be an absolute path."
+ )
+ assert deploy_root.is_absolute(), (
+ f"Deploy root {deploy_root} must be an absolute path."
+ )
self._project_root: Path = resolve_without_follow(project_root)
self._deploy_root: Path = resolve_without_follow(deploy_root)
src/snowflake/cli/_plugins/nativeapp/codegen/snowpark/python_processor.py~L433
create_query += f"\nEXTERNAL_ACCESS_INTEGRATIONS=({', '.join(ensure_all_string_literals(extension_fn.external_access_integrations))})"
if extension_fn.secrets:
- create_query += f"""\nSECRETS=({', '.join([f"{ensure_string_literal(k)}={v}" for k, v in extension_fn.secrets.items()])})"""
+ create_query += f"""\nSECRETS=({", ".join([f"{ensure_string_literal(k)}={v}" for k, v in extension_fn.secrets.items()])})"""
create_query += f"\nHANDLER={ensure_string_literal(extension_fn.handler)}"
src/snowflake/cli/_plugins/stage/manager.py~L106
def get_standard_stage_path(self) -> str:
path = self.get_full_stage_path(self.path)
- return f"@{path}{'/'if self.is_directory and not path.endswith('/') else ''}"
+ return f"@{path}{'/' if self.is_directory and not path.endswith('/') else ''}"
def get_standard_stage_directory_path(self) -> str:
path = self.get_standard_stage_path()
src/snowflake/cli/api/project/schemas/project_d...[Comment body truncated]
…15329) Stabilise [`slice-to-remove-prefix-or-suffix`](https://docs.astral.sh/ruff/rules/slice-to-remove-prefix-or-suffix/) (`FURB188`) for the Ruff 0.9 release. This is a stylistic rule, but I think it's a pretty uncontroversial one. There are no open issues or PRs regarding it and it's been in preview for a while now.
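For context, `slice-to-remove-prefix-or-suffix` flags manual slicing guarded by `str.startswith`/`str.endswith` and suggests `str.removeprefix`/`str.removesuffix`, available since Python 3.9. A hypothetical before/after (the function names and prefix are illustrative, not from the PR):

```python
def strip_utter(action_name: str) -> str:
    # Pattern flagged by FURB188: a conditional slice that drops a known prefix.
    prefix = "utter_"
    if action_name.startswith(prefix):
        return action_name[len(prefix):]
    return action_name


def strip_utter_fixed(action_name: str) -> str:
    # The suggested fix: str.removeprefix performs the same check-and-slice,
    # returning the string unchanged when the prefix is absent.
    return action_name.removeprefix("utter_")


print(strip_utter("utter_greet"), strip_utter_fixed("utter_greet"))
```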
…ze` calls" (`PT006`) (#15327) Co-authored-by: Micha Reiser <[email protected]> Resolves #15324. Stabilizes the behavior changes introduced in #14515.
…e-union-members` (`PYI016`) (#15342)
I think the f-string docs commit (#15341) got removed in the latest rebase. I'll open a new PR for that.
Summary
Feature branch for Ruff 0.9
Future me: Make sure to rebase merge this PR!
Changelog
- `slice-to-remove-prefix-or-suffix` (`FURB188`) #15329
- `decimal-from-float-literal` (`RUF032`) #15333
- [`flake8-pytest-style`] Stabilize "Detect more `pytest.mark.parametrize` calls" (`PT006`) #15327
- [`pycodestyle`] Stabilize: Exempt `pytest.importorskip` calls (`E402`) #15338
- `flake8-builtins` rules #15322
- `pytest-parametrize-names-wrong-type` (`PT006`) to edit both `argnames` and `argvalues` if both of them are single-element tuples/lists (Fix #14699)
- [`flake8-pyi`] Stabilize autofix for `redundant-numeric-union` (`PYI041`) #15343
- [`ruff`] Stabilize: Detect `attrs` dataclasses (`RUF008`, `RUF009`) #15345
- [`flake8-pyi`] Stabilize: Provide more automated fixes for `duplicate-union-members` (`PYI016`) #15342
- [`flake8-pyi`] Stabilize: include all python file types for `PYI006` #15340
- [`ruff`] Stabilize `post-init-default` (`RUF033`) #15352
- [`pylint`] Stabilize `boolean-chained-comparison` (`PLR1716`) #15354
- [`ruff`] Stabilize `useless-if-else` (`RUF034`) #15351

TODOs

- `flake8-builtins` rules (comment)
- [`flake8-builtins`] Rename `A005` and improve its error message #15348