
Ruff 0.9 #15238

Merged (18 commits) on Jan 9, 2025

Conversation

@MichaReiser (Member) commented Jan 3, 2025

Summary

Feature branch for Ruff 0.9

Future me: Make sure to rebase-merge this PR!

Changelog

TODOs

@MichaReiser added this to the v0.9 milestone on Jan 3, 2025

github-actions bot (Contributor) commented Jan 3, 2025

ruff-ecosystem results

Linter (stable)

ℹ️ ecosystem check detected linter changes. (+211 -6 violations, +90 -0 fixes in 17 projects; 38 projects unchanged)

RasaHQ/rasa (+14 -0 violations, +0 -0 fixes)

+ tests/core/test_utils.py:41:29: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:49:28: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:49:52: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:49:77: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:55:52: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:55:76: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:56:40: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:71:18: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:77:19: RUF032 `Decimal()` called with float literal argument
+ tests/core/test_utils.py:77:33: RUF032 `Decimal()` called with float literal argument
... 4 additional changes omitted for project
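The RUF032 hits above all have the same shape. A minimal illustration of why the rule flags `Decimal()` calls with float literal arguments (the values here are illustrative, not taken from the Rasa tests):

```python
from decimal import Decimal

# A float literal is converted to binary floating point before Decimal
# ever sees it, so Decimal(0.1) carries the float's rounding error;
# a string argument preserves the intended decimal value exactly.
from_float = Decimal(0.1)    # what RUF032 flags
from_str = Decimal("0.1")    # preferred spelling

assert from_str == Decimal("0.1")
assert from_float != from_str  # the float-built value is not exactly 0.1
```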

Snowflake-Labs/snowcli (+7 -0 violations, +0 -0 fixes)

+ src/snowflake/cli/_plugins/cortex/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ src/snowflake/cli/_plugins/notebook/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ src/snowflake/cli/api/console/abc.py:1:1: A005 Module `abc` shadows a Python standard-library module
+ src/snowflake/cli/api/console/enum.py:1:1: A005 Module `enum` shadows a Python standard-library module
+ src/snowflake/cli/api/errno.py:1:1: A005 Module `errno` shadows a Python standard-library module
+ src/snowflake/cli/api/output/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ src/snowflake/cli/api/utils/types.py:1:1: A005 Module `types` shadows a Python standard-library module

apache/airflow (+93 -6 violations, +64 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --no-fix --output-format concise --no-preview --select ALL

+ airflow/api_connexion/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ airflow/api_fastapi/common/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ airflow/api_fastapi/execution_api/datamodels/token.py:1:1: A005 Module `token` shadows a Python standard-library module
+ airflow/configuration.py:1313:17: FURB188 [*] Prefer `str.removesuffix()` over conditionally replacing with slice.
- airflow/decorators/__init__.pyi:118:25: PYI041 Use `float` instead of `int | float`
+ airflow/decorators/__init__.pyi:118:25: PYI041 [*] Use `float` instead of `int | float`
- airflow/decorators/__init__.pyi:246:25: PYI041 Use `float` instead of `int | float`
+ airflow/decorators/__init__.pyi:246:25: PYI041 [*] Use `float` instead of `int | float`
- airflow/exceptions.py:419:18: PYI041 Use `float` instead of `int | float`
+ airflow/exceptions.py:419:18: PYI041 [*] Use `float` instead of `int | float`
... 57 additional changes omitted for rule PYI041
+ airflow/io/utils/stat.py:1:1: A005 Module `stat` shadows a Python standard-library module
+ airflow/lineage/entities.py:64:23: RUF008 Do not use mutable default values for dataclass attributes
- airflow/lineage/entities.py:64:23: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
+ airflow/lineage/entities.py:86:23: RUF008 Do not use mutable default values for dataclass attributes
- airflow/lineage/entities.py:86:23: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
+ airflow/lineage/entities.py:88:29: RUF008 Do not use mutable default values for dataclass attributes
- airflow/lineage/entities.py:88:29: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
+ airflow/lineage/entities.py:89:26: RUF008 Do not use mutable default values for dataclass attributes
- airflow/lineage/entities.py:89:26: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
+ airflow/lineage/entities.py:90:29: RUF008 Do not use mutable default values for dataclass attributes
- airflow/lineage/entities.py:90:29: RUF012 Mutable class attributes should be annotated with `typing.ClassVar`
+ airflow/models/dag.py:430:25: RUF009 Do not perform function call in dataclass defaults
+ airflow/models/dag.py:431:24: RUF009 Do not perform function call `airflow_conf.get_mandatory_value` in dataclass defaults
+ airflow/models/operator.py:1:1: A005 Module `operator` shadows a Python standard-library module
+ airflow/operators/email.py:1:1: A005 Module `email` shadows a Python standard-library module
... 19 additional changes omitted for rule A005
+ airflow/utils/log/action_logger.py:22:5: FURB188 [*] Prefer `str.removeprefix()` over conditionally replacing with slice.
... 137 additional changes omitted for project
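The RUF008/RUF009 hits in `airflow/lineage/entities.py` and `airflow/models/dag.py` concern defaults on dataclass fields. A sketch of the underlying hazard, using a hypothetical `Task` class rather than the airflow code:

```python
from dataclasses import dataclass, field

# RUF008/RUF009 flag mutable values and function calls used directly as
# dataclass defaults; the usual fix is field(default_factory=...) so
# each instance gets fresh state instead of sharing one object.

@dataclass
class Task:
    # tags: list = []  # what RUF008 targets (dataclasses reject this form)
    tags: list = field(default_factory=list)

a, b = Task(), Task()
a.tags.append("x")
assert a.tags == ["x"]
assert b.tags == []  # no state shared between instances
```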

apache/superset (+16 -0 violations, +10 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --no-fix --output-format concise --no-preview --select ALL

+ superset/advanced_data_type/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ superset/commands/dashboard/copy.py:1:1: A005 Module `copy` shadows a Python standard-library module
- superset/daos/query.py:49:52: PYI041 Use `float` instead of `int | float`
+ superset/daos/query.py:49:52: PYI041 [*] Use `float` instead of `int | float`
+ superset/dashboards/permalink/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ superset/databases/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ superset/distributed_lock/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ superset/explore/permalink/types.py:1:1: A005 Module `types` shadows a Python standard-library module
... 9 additional changes omitted for rule A005
+ superset/migrations/versions/2018-07-22_11-59_bebcf3fed1fe_convert_dashboard_v1_positions.py:228:27: A006 Lambda argument `sum` is shadowing a Python builtin
- superset/models/helpers.py:910:39: PYI016 Duplicate union member `str`
... 16 additional changes omitted for project
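The PYI041 diffs above show the diagnostic gaining a fix (`[*]`). The rule itself rests on the typing spec's numeric tower: a `float` annotation already accepts `int`, so `int | float` is redundant. A minimal sketch:

```python
# Annotating `factor: int | float` adds nothing over `float`: type
# checkers treat `float` as implicitly accepting `int` (PEP 484).
def scale(factor: float) -> float:
    return factor * 2.0

assert scale(3) == 6.0    # int arguments are fine at runtime too
assert scale(1.5) == 3.0
```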

bokeh/bokeh (+17 -0 violations, +6 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --no-fix --output-format concise --no-preview --select ALL

+ src/bokeh/application/handlers/code.py:1:1: A005 Module `code` shadows a Python standard-library module
+ src/bokeh/command/subcommands/json.py:1:1: A005 Module `json` shadows a Python standard-library module
+ src/bokeh/core/property/datetime.py:1:1: A005 Module `datetime` shadows a Python standard-library module
+ src/bokeh/core/property/enum.py:1:1: A005 Module `enum` shadows a Python standard-library module
- src/bokeh/core/property/factors.py:51:56: PYI016 Duplicate union member `tuple[str, str]`
+ src/bokeh/core/property/factors.py:51:56: PYI016 [*] Duplicate union member `tuple[str, str]`
- src/bokeh/core/property/factors.py:52:85: PYI016 Duplicate union member `tp.Sequence[tuple[str, str]]`
+ src/bokeh/core/property/factors.py:52:85: PYI016 [*] Duplicate union member `tp.Sequence[tuple[str, str]]`
+ src/bokeh/core/property/json.py:1:1: A005 Module `json` shadows a Python standard-library module
+ src/bokeh/core/property/string.py:1:1: A005 Module `string` shadows a Python standard-library module
... 10 additional changes omitted for rule A005
... 13 additional changes omitted for project
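The PYI016 entries above flag unions that repeat a member, like the duplicated `tuple[str, str]`. The repetition is pure noise; `typing` itself collapses duplicates:

```python
from typing import Union

# PYI016: a repeated union member changes nothing at type-check time
# or at runtime, since Union deduplicates its arguments.
assert Union[str, int, str] == Union[str, int]
```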

ibis-project/ibis (+5 -0 violations, +0 -0 fixes)

+ ibis/backends/impala/tests/test_ddl.py:18:44: RUF100 Unused `noqa` directive (unused: `E402`)
+ ibis/backends/impala/tests/test_parquet_ddl.py:12:44: RUF100 Unused `noqa` directive (unused: `E402`)
+ ibis/backends/impala/tests/test_partition.py:13:48: RUF100 Unused `noqa` directive (unused: `E402`)
+ ibis/expr/datatypes/tests/test_value.py:48:26: RUF032 `Decimal()` called with float literal argument
+ ibis/tests/expr/test_pretty_repr.py:13:66: RUF100 Unused `noqa` directive (unused: `E402`)

langchain-ai/langchain (+1 -0 violations, +0 -0 fixes)

+ libs/core/langchain_core/language_models/llms.py:352:9: FURB188 [*] Prefer `str.removesuffix()` over conditionally replacing with slice.
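FURB188, as in the langchain hit above, prefers `str.removesuffix()`/`str.removeprefix()` (Python 3.9+) over the conditional-slicing idiom. A sketch with an illustrative suffix, not the langchain code:

```python
text = "report.json"

# The idiom FURB188 flags: slice only when the suffix is present.
sliced = text[: -len(".json")] if text.endswith(".json") else text
# The preferred spelling: a no-op when the suffix is absent.
modern = text.removesuffix(".json")

assert sliced == modern == "report"
assert "report".removesuffix(".json") == "report"
```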

latchbio/latch (+10 -0 violations, +0 -0 fixes)

+ src/latch/functions/secrets.py:1:1: A005 Module `secrets` shadows a Python standard-library module
+ src/latch/registry/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ src/latch/registry/upstream_types/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ src/latch/types/glob.py:1:1: A005 Module `glob` shadows a Python standard-library module
+ src/latch/types/json.py:1:1: A005 Module `json` shadows a Python standard-library module
+ src/latch_cli/exceptions/traceback.py:1:1: A005 Module `traceback` shadows a Python standard-library module
... 3 additional changes omitted for rule A005
+ src/latch_cli/snakemake/single_task_snakemake.py:365:12: FURB188 [*] Prefer `str.removeprefix()` over conditionally replacing with slice.
+ src/latch_cli/utils/__init__.py:106:5: FURB188 [*] Prefer `str.removeprefix()` over conditionally replacing with slice.

milvus-io/pymilvus (+2 -0 violations, +0 -0 fixes)

+ pymilvus/client/types.py:1:1: A005 Module `types` shadows a Python standard-library module
+ pymilvus/orm/types.py:1:1: A005 Module `types` shadows a Python standard-library module

pandas-dev/pandas (+8 -0 violations, +0 -0 fixes)

+ pandas/tests/dtypes/cast/test_downcast.py:36:39: RUF032 `Decimal()` called with float literal argument
+ pandas/tests/dtypes/cast/test_downcast.py:38:39: RUF032 `Decimal()` called with float literal argument
+ pandas/tests/dtypes/test_missing.py:324:21: RUF032 `Decimal()` called with float literal argument
+ pandas/tests/io/formats/style/test_style.py:936:26: RUF034 Useless `if`-`else` condition
+ pandas/tests/tools/test_to_numeric.py:195:40: RUF032 `Decimal()` called with float literal argument
+ pandas/tests/tools/test_to_numeric.py:210:31: RUF032 `Decimal()` called with float literal argument
+ pandas/tests/tools/test_to_numeric.py:210:60: RUF032 `Decimal()` called with float literal argument
... 2 additional changes omitted for rule RUF032
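The RUF034 hit above ("Useless `if`-`else` condition") targets conditional expressions whose branches are identical, so the condition is evaluated but never matters. A minimal sketch:

```python
cond = True
value = 10 if cond else 10   # what RUF034 reports
assert value == 10           # equivalent to plain `value = 10`
```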

... Truncated remaining completed project reports due to GitHub comment length restrictions

Changes by rule (15 rules affected)

rule      total   + violations   - violations   + fixes   - fixes
A005         82             82              0         0         0
PYI041       80              0              0        80         0
PT006        49             49              0         0         0
RUF032       25             25              0         0         0
FURB188      21             21              0         0         0
RUF100       15             15              0         0         0
RUF008       10             10              0         0         0
PYI016       10              0              0        10         0
RUF012        6              0              6         0         0
RUF009        2              2              0         0         0
A006          2              2              0         0         0
RUF034        2              2              0         0         0
PT007         1              1              0         0         0
PYI006        1              1              0         0         0
PLR1716       1              1              0         0         0
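Among the rules tallied above, A006 (both hits in the superset migration script) flags lambda parameters named after builtins such as `sum`, which hide the builtin inside the lambda body. A sketch with a renamed parameter:

```python
# Renaming the parameter (here `total` instead of `sum`) keeps the
# builtin reachable and avoids what A006 reports.
add_one = lambda total: total + 1
assert add_one(41) == 42
assert sum([1, 2, 3]) == 6  # builtin `sum` is not shadowed
```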

Linter (preview)

✅ ecosystem check detected no linter changes.

Formatter (stable)

ℹ️ ecosystem check detected format changes. (+3220 -3659 lines in 973 files in 41 projects; 14 projects unchanged)

DisnakeDev/disnake (+6 -10 lines across 4 files)

disnake/ext/commands/help.py~L952

         for command in commands:
             name = command.name
             width = max_size - (get_width(name) - len(name))
-            entry = f'{self.indent * " "}{name:<{width}} {command.short_doc}'
+            entry = f"{self.indent * ' '}{name:<{width}} {command.short_doc}"
             self.paginator.add_line(self.shorten_text(entry))
 
     async def send_pages(self) -> None:

disnake/ext/commands/help.py~L1199

         aliases: Sequence[:class:`str`]
             A list of aliases to format.
         """
-        self.paginator.add_line(f'**{self.aliases_heading}** {", ".join(aliases)}', empty=True)
+        self.paginator.add_line(f"**{self.aliases_heading}** {', '.join(aliases)}", empty=True)
 
     def add_command_formatting(self, command) -> None:
         """A utility function to format commands and groups.

disnake/integrations.py~L418

         self.scopes: List[str] = data.get("scopes") or []
 
     def __repr__(self) -> str:
-        return (
-            f"<{self.__class__.__name__} id={self.id}"
-            f" name={self.name!r} scopes={self.scopes!r}>"
-        )
+        return f"<{self.__class__.__name__} id={self.id} name={self.name!r} scopes={self.scopes!r}>"
 
 
 def _integration_factory(value: str) -> Tuple[Type[Integration], str]:

disnake/opus.py~L371

     def set_bandwidth(self, req: BAND_CTL) -> None:
         if req not in band_ctl:
             raise KeyError(
-                f'{req!r} is not a valid bandwidth setting. Try one of: {",".join(band_ctl)}'
+                f"{req!r} is not a valid bandwidth setting. Try one of: {','.join(band_ctl)}"
             )
 
         k = band_ctl[req]

disnake/opus.py~L380

     def set_signal_type(self, req: SIGNAL_CTL) -> None:
         if req not in signal_ctl:
             raise KeyError(
-                f'{req!r} is not a valid bandwidth setting. Try one of: {",".join(signal_ctl)}'
+                f"{req!r} is not a valid bandwidth setting. Try one of: {','.join(signal_ctl)}"
             )
 
         k = signal_ctl[req]

disnake/player.py~L557

             fallback = cls._probe_codec_fallback
         else:
             raise TypeError(
-                "Expected str or callable for parameter 'probe', "
-                f"not '{method.__class__.__name__}'"
+                f"Expected str or callable for parameter 'probe', not '{method.__class__.__name__}'"
             )
 
         codec = bitrate = None
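The quote flips in the disnake diffs above come from the 0.9 formatter now formatting inside f-strings: outer quotes are normalized to the configured style, and nested string literals are re-quoted when that is safe. Both spellings are equivalent at runtime:

```python
items = ["a", "b"]
# Single-quoted outer / double-quoted inner vs. the normalized form:
assert f'{",".join(items)}' == f"{','.join(items)}" == "a,b"
```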

RasaHQ/rasa (+94 -116 lines across 44 files)

.github/tests/test_model_regression_test_read_dataset_branch_tmpl.py~L17

     ],
 )
 def test_read_dataset_branch(comment_body_file: Text, expected_dataset_branch: Text):
-    cmd = (
-        "gomplate "
-        f"-d github={TEST_DATA_DIR}/{comment_body_file} "
-        f"-f {TEMPLATE_FPATH}"
-    )
+    cmd = f"gomplate -d github={TEST_DATA_DIR}/{comment_body_file} -f {TEMPLATE_FPATH}"
     output = subprocess.check_output(cmd.split(" "), cwd=REPO_DIR)
     output = output.decode("utf-8").strip()
     assert output == f'export DATASET_BRANCH="{expected_dataset_branch}"'
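Most of the Rasa diff below is the 2025 style joining implicitly concatenated string literals when the result fits on one line. Adjacent literals are merged by the parser, so behavior is unchanged (the strings here are illustrative, not taken from Rasa):

```python
# Two spellings of the same string: Python concatenates adjacent
# literals at parse time, so joining them is purely cosmetic.
joined = "gomplate " "-d github=data/body.json " "-f tmpl"
assert joined == "gomplate -d github=data/body.json -f tmpl"
```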

rasa/cli/arguments/export.py~L9

         parser,
         default=DEFAULT_ENDPOINTS_PATH,
         help_text=(
-            "Endpoint configuration file specifying the tracker store "
-            "and event broker."
+            "Endpoint configuration file specifying the tracker store and event broker."
         ),
     )
 

rasa/cli/scaffold.py~L162

             os.makedirs(path)
         except (PermissionError, OSError, FileExistsError) as e:
             print_error_and_exit(
-                f"Failed to create project path at '{path}'. " f"Error: {e}"
+                f"Failed to create project path at '{path}'. Error: {e}"
             )
     else:
         print_success(

rasa/cli/utils.py~L278

     # Check if a valid setting for `max_history` was given
     if isinstance(max_history, int) and max_history < 1:
         raise argparse.ArgumentTypeError(
-            f"The value of `--max-history {max_history}` " f"is not a positive integer."
+            f"The value of `--max-history {max_history}` is not a positive integer."
         )
 
     return validator.verify_story_structure(

rasa/cli/x.py~L165

         attempts -= 1
 
     rasa.shared.utils.cli.print_error_and_exit(
-        "Could not fetch runtime config from server at '{}'. " "Exiting.".format(
+        "Could not fetch runtime config from server at '{}'. Exiting.".format(
             config_endpoint
         )
     )

rasa/core/actions/action.py~L322

         if message is None:
             if not self.silent_fail:
                 logger.error(
-                    "Couldn't create message for response '{}'." "".format(
+                    "Couldn't create message for response '{}'.".format(
                         self.utter_action
                     )
                 )

rasa/core/actions/action.py~L470

         else:
             if not self.silent_fail:
                 logger.error(
-                    "Couldn't create message for response action '{}'." "".format(
+                    "Couldn't create message for response action '{}'.".format(
                         self.action_name
                     )
                 )

rasa/core/channels/console.py~L194

     exit_text = INTENT_MESSAGE_PREFIX + "stop"
 
     rasa.shared.utils.cli.print_success(
-        "Bot loaded. Type a message and press enter " "(use '{}' to exit): ".format(
+        "Bot loaded. Type a message and press enter (use '{}' to exit): ".format(
             exit_text
         )
     )

rasa/core/channels/telegram.py~L97

                     reply_markup.add(KeyboardButton(button["title"]))
         else:
             logger.error(
-                "Trying to send text with buttons for unknown " "button type {}".format(
+                "Trying to send text with buttons for unknown button type {}".format(
                     button_type
                 )
             )

rasa/core/exporter.py~L220

         conversation_ids_to_process = await self._get_conversation_ids_to_process()
 
         rasa.shared.utils.cli.print_info(
-            f"Fetching events for {len(conversation_ids_to_process)} "
-            f"conversation IDs:"
+            f"Fetching events for {len(conversation_ids_to_process)} conversation IDs:"
         )
         for conversation_id in tqdm(conversation_ids_to_process, "conversation IDs"):
             tracker = await self.tracker_store.retrieve_full_tracker(conversation_id)

rasa/core/nlg/callback.py~L81

         body = nlg_request_format(utter_action, tracker, output_channel, **kwargs)
 
         logger.debug(
-            "Requesting NLG for {} from {}." "The request body is {}." "".format(
+            "Requesting NLG for {} from {}.The request body is {}.".format(
                 utter_action, self.nlg_endpoint.url, json.dumps(body)
             )
         )

rasa/core/policies/policy.py~L250

         max_training_samples = kwargs.get("max_training_samples")
         if max_training_samples is not None:
             logger.debug(
-                "Limit training data to {} training samples." "".format(
+                "Limit training data to {} training samples.".format(
                     max_training_samples
                 )
             )

rasa/core/policies/ted_policy.py~L837

             # take the last prediction in the sequence
             similarities = outputs["similarities"][:, -1, :]
         else:
-            raise TypeError(
-                "model output for `similarities` " "should be a numpy array"
-            )
+            raise TypeError("model output for `similarities` should be a numpy array")
         if isinstance(outputs["scores"], np.ndarray):
             confidences = outputs["scores"][:, -1, :]
         else:

rasa/core/policies/unexpected_intent_policy.py~L612

         if isinstance(output["similarities"], np.ndarray):
             sequence_similarities = output["similarities"][:, -1, :]
         else:
-            raise TypeError(
-                "model output for `similarities` " "should be a numpy array"
-            )
+            raise TypeError("model output for `similarities` should be a numpy array")
 
         # Check for unlikely intent
         last_user_uttered_event = tracker.get_last_event_for(UserUttered)

rasa/core/test.py~L772

         ):
             story_dump = YAMLStoryWriter().dumps(partial_tracker.as_story().story_steps)
             error_msg = (
-                f"Model predicted a wrong action. Failed Story: " f"\n\n{story_dump}"
+                f"Model predicted a wrong action. Failed Story: \n\n{story_dump}"
             )
             raise WrongPredictionException(error_msg)
     elif prev_action_unlikely_intent:

rasa/core/train.py~L34

             for policy_config in policy_configs:
                 config_name = os.path.splitext(os.path.basename(policy_config))[0]
                 logging.info(
-                    "Starting to train {} round {}/{}" " with {}% exclusion" "".format(
+                    "Starting to train {} round {}/{} with {}% exclusion".format(
                         config_name, current_run, len(exclusion_percentages), percentage
                     )
                 )

rasa/core/train.py~L43

                     domain,
                     policy_config,
                     stories=story_file,
-                    output=str(Path(output_path, f"run_{r +1}")),
+                    output=str(Path(output_path, f"run_{r + 1}")),
                     fixed_model_name=config_name + PERCENTAGE_KEY + str(percentage),
                     additional_arguments={
                         **additional_arguments,

rasa/core/training/converters/responses_prefix_converter.py~L26

         The name of the response, starting with `utter_`.
     """
     return (
-        f"{UTTER_PREFIX}{action_name[len(OBSOLETE_RESPOND_PREFIX):]}"
+        f"{UTTER_PREFIX}{action_name[len(OBSOLETE_RESPOND_PREFIX) :]}"
         if action_name.startswith(OBSOLETE_RESPOND_PREFIX)
         else action_name
     )

rasa/core/training/interactive.py~L346

     choices = []
     for p in sorted_intents:
         name_with_confidence = (
-            f'{p.get("confidence"):03.2f} {p.get(INTENT_NAME_KEY):40}'
+            f"{p.get('confidence'):03.2f} {p.get(INTENT_NAME_KEY):40}"
         )
         choice = {
             INTENT_NAME_KEY: name_with_confidence,

rasa/core/training/interactive.py~L674

     await _print_history(conversation_id, endpoint)
 
     choices = [
-        {"name": f'{a["score"]:03.2f} {a["action"]:40}', "value": a["action"]}
+        {"name": f"{a['score']:03.2f} {a['action']:40}", "value": a["action"]}
         for a in predictions
     ]
 

rasa/core/training/interactive.py~L723

     # export training data and quit
     questions = questionary.form(
         export_stories=questionary.text(
-            message="Export stories to (if file exists, this "
-            "will append the stories)",
+            message="Export stories to (if file exists, this will append the stories)",
             default=PATHS["stories"],
             validate=io_utils.file_type_validator(
                 rasa.shared.data.YAML_FILE_EXTENSIONS,

rasa/core/training/interactive.py~L738

             default=PATHS["nlu"],
             validate=io_utils.file_type_validator(
                 list(rasa.shared.data.TRAINING_DATA_EXTENSIONS),
-                "Please provide a valid export path for the NLU data, "
-                "e.g. 'nlu.yml'.",
+                "Please provide a valid export path for the NLU data, e.g. 'nlu.yml'.",
             ),
         ),
         export_domain=questionary.text(
-            message="Export domain file to (if file exists, this "
-            "will be overwritten)",
+            message="Export domain file to (if file exists, this will be overwritten)",
             default=PATHS["domain"],
             validate=io_utils.file_type_validator(
                 rasa.shared.data.YAML_FILE_EXTENSIONS,

rasa/core/utils.py~L41

     """
     if use_syslog:
         formatter = logging.Formatter(
-            "%(asctime)s [%(levelname)-5.5s] [%(process)d]" " %(message)s"
+            "%(asctime)s [%(levelname)-5.5s] [%(process)d] %(message)s"
         )
         socktype = SOCK_STREAM if syslog_protocol == TCP_PROTOCOL else SOCK_DGRAM
         syslog_handler = logging.handlers.SysLogHandler(

rasa/core/utils.py~L73

     """
     if hot_idx >= length:
         raise ValueError(
-            "Can't create one hot. Index '{}' is out " "of range (length '{}')".format(
+            "Can't create one hot. Index '{}' is out of range (length '{}')".format(
                 hot_idx, length
             )
         )

rasa/model_training.py~L71

         )
 
     rasa.shared.utils.cli.print_success(
-        "No training of components required "
-        "(the responses might still need updating!)."
+        "No training of components required (the responses might still need updating!)."
     )
     return TrainingResult(
         code=CODE_NO_NEED_TO_TRAIN, dry_run_results=fingerprint_results

rasa/nlu/featurizers/sparse_featurizer/count_vectors_featurizer.py~L166

                 )
             if self.stop_words is not None:
                 logger.warning(
-                    "Analyzer is set to character, "
-                    "provided stop words will be ignored."
+                    "Analyzer is set to character, provided stop words will be ignored."
                 )
             if self.max_ngram == 1:
                 logger.warning(

rasa/server.py~L289

         raise ErrorResponse(
             HTTPStatus.BAD_REQUEST,
             "BadRequest",
-            "Invalid parameter value for 'include_events'. "
-            "Should be one of {}".format(enum_values),
+            "Invalid parameter value for 'include_events'. Should be one of {}".format(
+                enum_values
+            ),
             {"parameter": "include_events", "in": "query"},
         )
 

rasa/shared/core/domain.py~L198

             domain = cls.from_directory(path)
         else:
             raise InvalidDomain(
-                "Failed to load domain specification from '{}'. "
-                "File not found!".format(os.path.abspath(path))
+                "Failed to load domain specification from '{}'. File not found!".format(
+                    os.path.abspath(path)
+                )
             )
 
         return domain

rasa/shared/core/events.py~L1964

 
     def __str__(self) -> Text:
         """Returns text representation of event."""
-        return (
-            "ActionExecutionRejected("
-            "action: {}, policy: {}, confidence: {})"
-            "".format(self.action_name, self.policy, self.confidence)
+        return "ActionExecutionRejected(action: {}, policy: {}, confidence: {})".format(
+            self.action_name, self.policy, self.confidence
         )
 
     def __hash__(self) -> int:

rasa/shared/core/generator.py~L401

 
             if num_active_trackers:
                 logger.debug(
-                    "Starting {} ... (with {} trackers)" "".format(
+                    "Starting {} ... (with {} trackers)".format(
                         phase_name, num_active_trackers
                     )
                 )

rasa/shared/core/generator.py~L517

                     phase = 0
                 else:
                     logger.debug(
-                        "Found {} unused checkpoints " "in current phase." "".format(
+                        "Found {} unused checkpoints in current phase.".format(
                             len(unused_checkpoints)
                         )
                     )
                     logger.debug(
-                        "Found {} active trackers " "for these checkpoints." "".format(
+                        "Found {} active trackers for these checkpoints.".format(
                             num_active_trackers
                         )
                     )

rasa/shared/core/generator.py~L553

                 augmented_trackers, self.config.max_number_of_augmented_trackers
             )
             logger.debug(
-                "Subsampled to {} augmented training trackers." "".format(
+                "Subsampled to {} augmented training trackers.".format(
                     len(augmented_trackers)
                 )
             )

rasa/shared/core/trackers.py~L634

         """
         if not isinstance(dialogue, Dialogue):
             raise ValueError(
-                f"story {dialogue} is not of type Dialogue. "
-                f"Have you deserialized it?"
+                f"story {dialogue} is not of type Dialogue. Have you deserialized it?"
             )
 
         self._reset()

rasa/shared/core/training_data/story_reader/story_reader.py~L83

         )
         if parsed_events is None:
             raise StoryParseError(
-                "Unknown event '{}'. It is Neither an event " "nor an action).".format(
+                "Unknown event '{}'. It is Neither an event nor an action).".format(
                     event_name
                 )
             )

rasa/shared/core/training_data/story_reader/yaml_story_reader.py~L334

 
         if not self.domain:
             logger.debug(
-                "Skipped validating if intent is in domain as domain " "is `None`."
+                "Skipped validating if intent is in domain as domain is `None`."
             )
             return
 

rasa/shared/nlu/training_data/formats/dialogflow.py~L34

 
         if fformat not in {DIALOGFLOW_INTENT, DIALOGFLOW_ENTITIES}:
             raise ValueError(
-                "fformat must be either {}, or {}" "".format(
+                "fformat must be either {}, or {}".format(
                     DIALOGFLOW_INTENT, DIALOGFLOW_ENTITIES
                 )
             )

rasa/shared/nlu/training_data/util.py~L24

 
 ESCAPE_DCT = {"\b": "\\b", "\f": "\\f", "\n": "\\n", "\r": "\\r", "\t": "\\t"}
 ESCAPE_CHARS = set(ESCAPE_DCT.keys())
-ESCAPE = re.compile(f'[{"".join(ESCAPE_DCT.values())}]')
+ESCAPE = re.compile(f"[{''.join(ESCAPE_DCT.values())}]")
 UNESCAPE_DCT = {espaced_char: char for char, espaced_char in ESCAPE_DCT.items()}
-UNESCAPE = re.compile(f'[{"".join(UNESCAPE_DCT.values())}]')
+UNESCAPE = re.compile(f"[{''.join(UNESCAPE_DCT.values())}]")
 GROUP_COMPLETE_MATCH = 0
 
 

rasa/shared/utils/io.py~L127

             return f.read()
     except FileNotFoundError:
         raise FileNotFoundException(
-            f"Failed to read file, " f"'{os.path.abspath(filename)}' does not exist."
+            f"Failed to read file, '{os.path.abspath(filename)}' does not exist."
         )
     except UnicodeDecodeError:
         raise FileIOException(

rasa/shared/utils/io.py~L157

     """
     if not isinstance(path, str):
         raise ValueError(
-            f"`resource_name` must be a string type. " f"Got `{type(path)}` instead"
+            f"`resource_name` must be a string type. Got `{type(path)}` instead"
         )
 
     if os.path.isfile(path):

rasa/shared/utils/io.py~L443

             )
     except FileNotFoundError:
         raise FileNotFoundException(
-            f"Failed to read file, " f"'{os.path.abspath(file_path)}' does not exist."
+            f"Failed to read file, '{os.path.abspath(file_path)}' does not exist."
         )
 
 

rasa/utils/common.py~L308

         access_logger.addHandler(file_handler)
     if use_syslog:
         formatter = logging.Formatter(
-            "%(asctime)s [%(levelname)-5.5s] [%(process)d]" " %(message)s"
+            "%(asctime)s [%(levelname)-5.5s] [%(process)d] %(message)s"
         )
         socktype = SOCK_STREAM if syslog_protocol == TCP_PROTOCOL else SOCK_DGRAM
         syslog_handler = logging.handlers.SysLogHandler(

rasa/utils/endpoints.py~L33

         return EndpointConfig.from_dict(content[endpoint_type])
     except FileNotFoundError:
         logger.error(
-            "Failed to read endpoint configuration " "from {}. No such file.".format(
+            "Failed to read endpoint configuration from {}. No such file.".format(
                 os.path.abspath(filename)
             )
         )

tests/core/test_evaluation.py~L563

             True,
         ],
         [
-            "data/test_yaml_stories/"
-            "test_prediction_with_wrong_intent_wrong_entity.yml",
+            "data/test_yaml_stories/test_prediction_with_wrong_intent_wrong_entity.yml",
             False,
             False,
         ],

tests/core/test_migrate.py~L971

         "responses.yml",
     )
 
-    return domain_dir, "Domain files with multiple 'slots' sections were " "provided."
+    return domain_dir, "Domain files with multiple 'slots' sections were provided."
 
 
 @pytest.mark.parametrize(

tests/core/test_tracker_stores.py~L311

     assert isinstance(tracker_store, InMemoryTrackerStore)
 
 
-async def _tracker_store_and_tracker_with_slot_set() -> (
-    Tuple[InMemoryTrackerStore, DialogueStateTracker]
-):
+async def _tracker_store_and_tracker_with_slot_set() -> Tuple[
+    InMemoryTrackerStore, DialogueStateTracker
+]:
     # returns an InMemoryTrackerStore containing a tracker with a slot set
 
     slot_key = "cuisine"

tests/engine/recipes/test_default_recipe.py~L98

         (
             "data/test_config/config_pretrained_embeddings_mitie.yml",
             "data/graph_schemas/config_pretrained_embeddings_mitie_train_schema.yml",
-            "data/graph_schemas/"
-            "config_pretrained_embeddings_mitie_predict_schema.yml",
+            "data/graph_schemas/config_pretrained_embeddings_mitie_predict_schema.yml",
             TrainingType.BOTH,
             False,
         ),

tests/graph_components/validators/test_default_recipe_validator.py~L780

     if should_warn:
         with pytest.warns(
             UserWarning,
-            match=(f"'{RulePolicy.__name__}' is not " "included in the model's "),
+            match=(f"'{RulePolicy.__name__}' is not included in the model's "),
         ) as records:
             validator.validate(importer)
     else:

tests/graph_components/validators/test_default_recipe_validator.py~L883

     num_duplicates: bool,
     priority: int,
 ):
-    assert (
-        len(policy_types) >= priority + num_duplicates
-    ), f"This tests needs at least {priority+num_duplicates} many types."
+    assert len(policy_types) >= priority + num_duplicates, (
+        f"This tests needs at least {priority + num_duplicates} many types."
+    )
 
     # start with a schema where node i has priority i
     nodes = {

tests/graph_components/validators/test_default_recipe_validator.py~L895

 
     # give nodes p+1, .., p+num_duplicates-1 priority "priority"
     for idx in range(num_duplicates):
-        nodes[f"{priority+idx+1}"].config["priority"] = priority
+        nodes[f"{priority + idx + 1}"].config["priority"] = priority
 
     validator = DefaultV1RecipeValidator(graph_schema=GraphSchema(nodes))
     monkeypatch.setattr(

tests/graph_components/validators/test_default_recipe_validator.py~L992

     with pytest.warns(
         UserWarning,
         match=(
-            "Found rule-based training data but no policy "
-            "supporting rule-based data."
+            "Found rule-based training data but no policy supporting rule-based data."
         ),
     ):
         validator.validate(importer)

tests/nlu/featurizers/test_count_vectors_featurizer.py~L772

 
 
 @pytest.mark.parametrize(
-    "initial_train_text, additional_train_text, " "use_shared_vocab",
+    "initial_train_text, additional_train_text, use_shared_vocab",
     [("am I the coolest person?", "no", True), ("rasa rasa", "sara sara", False)],
 )
 def test_use_shared_vocab_exception(

tests/nlu/featurizers/test_regex_featurizer.py~L44

 
 
 @pytest.mark.parametrize(
-    "sentence, expected_sequence_features, expected_sentence_features,"
-    "labeled_tokens",
+    "sentence, expected_sequence_features, expected_sentence_features,labeled_tokens",
     [
         (
             "hey how are you today",

tests/nlu/featurizers/test_regex_featurizer.py~L219

 
 
 @pytest.mark.parametrize(
-    "sentence, expected_sequence_features, expected_sentence_features, "
-    "labeled_tokens",
+    "sentence, expected_sequence_features, expected_sentence_features, labeled_tokens",
     [
         (
             "lemonade and mapo tofu",

tests/nlu/featurizers/test_regex_featurizer.py~L383

 
 
 @pytest.mark.parametrize(
-    "sentence, expected_sequence_features, expected_sentence_features,"
-    "case_sensitive",
+    "sentence, expected_sequence_features, expected_sentence_features,case_sensitive",
     [
         ("Hey How are you today", [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], True),
         ("Hey How are you today", [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], False),

tests/nlu/featurizers/test_spacy_featurizer.py~L133

         vecs = ftr._features_for_doc(doc)
         vecs_capitalized = ftr._features_for_doc(doc_capitalized)
 
-        assert np.allclose(
-            vecs, vecs_capitalized, atol=1e-5
-        ), "Vectors are unequal for texts '{}' and '{}'".format(
-            e.get(TEXT), e.get(TEXT).capitalize()
+        assert np.allclose(vecs, vecs_capitalized, atol=1e-5), (
+            "Vectors are unequal for texts '{}' and '{}'".format(
+                e.get(TEXT), e.get(TEXT).capitalize()
+            )
         )
 
 

tests/nlu/test_train.py~L151

             #   publicly available anymore
             #   (see https://github.com/RasaHQ/rasa/issues/6806)
             continue
-        assert (
-            cls.__name__ in all_components
-        ), "`all_components` template is missing component."
+        assert cls.__name__ in all_components, (
+            "`all_components` template is missing component."
+        )
 
 
 @pytest.mark.timeout(600, func_only=True)

tests/shared/core/test_events.py~L87

 )
 def test_event_has_proper_implementation(one_event, another_event):
     # equals tests
-    assert (
-        one_event != another_event
-    ), "Same events with different values need to be different"
+    assert one_event != another_event, (
+        "Same events with different values need to be different"
+    )
     assert one_event == copy.deepcopy(one_event), "Event copies need to be the same"
     assert one_event != 42, "Events aren't equal to 42!"
 
     # hash test
-    assert hash(one_event) == hash(
-        copy.deepcopy(one_event)
-    ), "Same events should have the same hash"
-    assert hash(one_event) != hash(
-        another_event
-    ), "Different events should have different hashes"
+    assert hash(one_event) == hash(copy.deepcopy(one_event)), (
+        "Same events should have the same hash"
+    )
+    assert hash(one_event) != hash(another_event), (
+        "Different events should have different hashes"
+    )
 
     # str test
     assert "object at 0x" not in str(one_event), "Event has a proper str method"

tests/shared/core/test_slots.py~L52

         value, expected = value_feature_pair
         slot.value = value
         assert slot.as_feature() == expected
-        assert (
-            len(slot.as_feature()) == slot.feature_dimensionality()
-        ), "Wrong feature dimensionality"
+        assert len(slot.as_feature()) == slot.feature_dimensionality(), (
+            "Wrong feature dimensionality"
+        )
 
         # now reset the slot to get initial value again
         slot.reset()
-        assert (
-            slot.value == slot.initial_value
-        ), "Slot should be reset to its initial value"
+        assert slot.value == slot.initial_value, (
+            "Slot should be reset to its initial value"
+        )
 
     def test_empty_slot_featurization(self, mappings: List[Dict[Text, Any]]):
         slot = self.create_slot(mappings=mappings, influence_conversation=True)
-        assert (
-            slot.value == slot.initial_value
-        ), "An empty slot should be set to the initial value"
+        assert slot.value == slot.initial_value, (
+            "An empty slot should be set to the initial value"
+        )
         assert len(slot.as_feature()) == slot.feature_dimensionality()
 
     def test_featurization_if_marked_as_unfeaturized(

tests/shared/core/training_data/test_graph.py~L10

     for n in sorted_nodes:
         deps = incoming_edges.get(n, [])
         # checks that all incoming edges are from nodes we have already visited
-        assert all(
-            [d in visited or (d, n) in removed_edges for d in deps]
-        ), "Found an incoming edge from a node that wasn't visited yet!"
+        assert all([d in visited or (d, n) in removed_edges for d in deps]), (
+            "Found an incoming edge from a node that wasn't visited yet!"
+        )
         visited.add(n)
 
 

Snowflake-Labs/snowcli (+52 -50 lines across 17 files)

src/snowflake/cli/_plugins/connection/commands.py~L334

         "Host": conn.host,
         "Account": conn.account,
         "User": conn.user,
-        "Role": f'{conn.role or "not set"}',
-        "Database": f'{conn.database or "not set"}',
-        "Warehouse": f'{conn.warehouse or "not set"}',
+        "Role": f"{conn.role or 'not set'}",
+        "Database": f"{conn.database or 'not set'}",
+        "Warehouse": f"{conn.warehouse or 'not set'}",
     }
 
     if conn_ctx.enable_diag:

src/snowflake/cli/_plugins/nativeapp/artifacts.py~L250

     def __init__(self, *, project_root: Path, deploy_root: Path):
         # If a relative path ends up here, it's a bug in the app and can lead to other
         # subtle bugs as paths would be resolved relative to the current working directory.
-        assert (
-            project_root.is_absolute()
-        ), f"Project root {project_root} must be an absolute path."
-        assert (
-            deploy_root.is_absolute()
-        ), f"Deploy root {deploy_root} must be an absolute path."
+        assert project_root.is_absolute(), (
+            f"Project root {project_root} must be an absolute path."
+        )
+        assert deploy_root.is_absolute(), (
+            f"Deploy root {deploy_root} must be an absolute path."
+        )
 
         self._project_root: Path = resolve_without_follow(project_root)
         self._deploy_root: Path = resolve_without_follow(deploy_root)

src/snowflake/cli/_plugins/nativeapp/codegen/snowpark/python_processor.py~L433

         create_query += f"\nEXTERNAL_ACCESS_INTEGRATIONS=({', '.join(ensure_all_string_literals(extension_fn.external_access_integrations))})"
 
     if extension_fn.secrets:
-        create_query += f"""\nSECRETS=({', '.join([f"{ensure_string_literal(k)}={v}" for k, v in extension_fn.secrets.items()])})"""
+        create_query += f"""\nSECRETS=({", ".join([f"{ensure_string_literal(k)}={v}" for k, v in extension_fn.secrets.items()])})"""
 
     create_query += f"\nHANDLER={ensure_string_literal(extension_fn.handler)}"
 

src/snowflake/cli/_plugins/stage/manager.py~L106

 
     def get_standard_stage_path(self) -> str:
         path = self.get_full_stage_path(self.path)
-        return f"@{path}{'/'if self.is_directory and not path.endswith('/') else ''}"
+        return f"@{path}{'/' if self.is_directory and not path.endswith('/') else ''}"
 
     def get_standard_stage_directory_path(self) -> str:
         path = self.get_standard_stage_path()

src/snowflake/cli/api/project/schemas/project_d...[Comment body truncated]
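Most hunks in the ecosystem diffs above come down to two preview-style changes promoted in Ruff 0.9: parenthesizing a long `assert` message instead of splitting the condition, and normalizing quote characters inside f-strings. Both rewrites are behavior-preserving; a minimal sketch (function and variable names here are illustrative, not taken from the diffs):

```python
# 1. Long assert messages: the 0.9 formatter parenthesizes the message
#    rather than wrapping the condition. The two layouts are semantically
#    identical and raise the same AssertionError.
def check_old(vecs, expected):
    assert (
        len(vecs) == expected
    ), f"Wrong feature dimensionality: {len(vecs)} != {expected}"

def check_new(vecs, expected):
    assert len(vecs) == expected, (
        f"Wrong feature dimensionality: {len(vecs)} != {expected}"
    )

for check in (check_old, check_new):
    check([0.0, 1.0], 2)  # passes under both layouts
    try:
        check([0.0], 2)
    except AssertionError as exc:
        print(exc)  # same message from both layouts

# 2. F-string quote normalization: the outer quotes follow the configured
#    (double-quote) style, and the inner quotes are flipped to single
#    quotes. The resulting value is unchanged.
role = None
old_style = f'{role or "not set"}'
new_style = f"{role or 'not set'}"
assert old_style == new_style == "not set"
```

Running the sketch shows the identical assertion message twice, confirming that the reformat is a pure layout change.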

MichaReiser and others added 17 commits January 8, 2025 18:12
…15329)

Stabilise [`slice-to-remove-prefix-or-suffix`](https://docs.astral.sh/ruff/rules/slice-to-remove-prefix-or-suffix/) (`FURB188`) for the Ruff 0.9 release.

This is a stylistic rule, but I think it's a pretty uncontroversial one. There are no open issues or PRs regarding it and it's been in preview for a while now.
…ze` calls" (`PT006`) (#15327)

Co-authored-by: Micha Reiser <[email protected]>
Resolves #15324. Stabilizes the behavior changes introduced in #14515.
@dhruvmanila
Member

I think the f-string docs commit (#15341) got removed in the latest rebase. I'll open a new PR for that.

Revive #15341 as it got removed from the latest rebase in #15238.
@MichaReiser MichaReiser merged commit f706c3f into main Jan 9, 2025
21 checks passed
Labels: breaking (Breaking API change)