
Migrate streaming providers functions to async & improve performance #331

Merged
14 commits merged on Nov 1, 2024

Conversation

mhdzumair (Owner) commented Oct 19, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Transitioned multiple classes and methods to asynchronous implementations, enhancing performance for I/O-bound operations across various streaming providers.
    • Introduced asynchronous context management for client instances, improving resource handling.
    • Added background task management capabilities to the video URL retrieval process.
  • Bug Fixes

    • Improved error handling in service-specific methods to manage specific HTTP status codes.
  • Documentation

    • Updated method signatures to reflect changes in parameter types and return types, ensuring clarity on asynchronous behavior.

These enhancements significantly improve the efficiency and responsiveness of the application when interacting with streaming services.

coderabbitai bot (Contributor) commented Oct 19, 2024

Warning

Rate limit exceeded

@mhdzumair has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 8 minutes and 3 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between a87bab3 and 17f3dab.

Walkthrough

This pull request introduces significant modifications across multiple files in the streaming providers' module, primarily transitioning various classes and functions from synchronous to asynchronous implementations. Key changes include the conversion of numerous methods to use the async keyword, enabling non-blocking operations. Additionally, several method signatures have been updated to reflect the new asynchronous nature, including adjustments to parameter types and return types. The overall aim of these changes is to enhance the efficiency and responsiveness of the code when handling I/O-bound tasks.
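The conversion pattern described above — async methods plus async context management so the client's connection pool is closed deterministically rather than in a destructor — can be sketched as follows. This is a minimal illustration, not the repository's actual classes; the real clients wrap an `httpx.AsyncClient`, which the `asyncio.sleep(0)` stands in for here:

```python
import asyncio


class DebridClientSketch:
    """Sketch of an async client with async context management.

    Illustrative only: method names mirror the PR's description, but the
    real implementations perform HTTP calls via httpx.AsyncClient.
    """

    def __init__(self):
        self.closed = False

    async def __aenter__(self):
        await self.initialize_headers()
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Release resources here (the real code awaits AsyncClient.aclose()).
        self.closed = True

    async def initialize_headers(self):
        self.headers = {"Accept": "application/json"}

    async def get_torrent_info(self, torrent_id: int) -> dict:
        await asyncio.sleep(0)  # stand-in for a non-blocking HTTP request
        return {"id": torrent_id, "status": "Ready"}


async def demo():
    async with DebridClientSketch() as client:
        info = await client.get_torrent_info(7)
    assert client.closed  # pool is closed as soon as the block exits
    return info


print(asyncio.run(demo()))
```

The `async with` form guarantees cleanup even when a request raises, which is why the PR replaces `__del__`-based teardown with `__aexit__`.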

Changes

File Path Change Summary
streaming_providers/alldebrid/client.py Converted multiple methods in the AllDebrid class to asynchronous, updated constructor parameter types, and simplified error message formatting.
streaming_providers/alldebrid/utils.py Made several utility functions asynchronous, utilizing await for internal calls to ad_client methods.
streaming_providers/debrid_client.py Transitioned DebridClient methods to asynchronous, added asynchronous context management, and updated error handling to use httpx.
streaming_providers/debridlink/api.py Updated get_device_code and authorize functions to use asynchronous context management for the DebridLink client.
streaming_providers/debridlink/client.py Converted multiple methods in the DebridLink class to asynchronous, updated return types, and ensured internal calls await asynchronous operations.
streaming_providers/debridlink/utils.py Made several utility functions asynchronous, incorporating await for internal calls to DebridLink methods.
streaming_providers/offcloud/client.py Transitioned methods in the OffCloud class to asynchronous, updated _make_request to handle JSON data, and improved error handling for specific HTTP status codes.
streaming_providers/offcloud/utils.py Converted several utility functions to asynchronous, updating internal logic to use await for OffCloud client methods.
streaming_providers/parser.py Modified select_file_index_from_torrent parameters to be optional and added new asynchronous function update_torrent_streams_metadata.
streaming_providers/premiumize/api.py Updated authorize and oauth2_redirect functions to use asynchronous context management for Premiumize.
streaming_providers/premiumize/client.py Converted multiple methods in the Premiumize class to asynchronous, updated internal calls to _make_request to use await.
streaming_providers/premiumize/utils.py Made several utility functions asynchronous, updating internal calls to use await for Premiumize client methods.
streaming_providers/realdebrid/api.py Modified get_device_code and authorize functions to utilize asynchronous context management for RealDebrid.
streaming_providers/realdebrid/client.py Converted multiple methods in the RealDebrid class to asynchronous, updated constructor parameter types, and adjusted return types for several methods.
streaming_providers/realdebrid/utils.py Made several utility functions asynchronous, updating internal calls to use await for RealDebrid client methods.
streaming_providers/torbox/client.py Transitioned methods in the Torbox class to asynchronous, updated _make_request to handle JSON data, and replaced the destructor with an asynchronous context management method.
streaming_providers/torbox/utils.py Converted multiple utility functions to asynchronous, updated internal logic to handle await for Torbox client methods, and added a new asynchronous function for chunk cache status updates.
streaming_providers/pikpak/utils.py Updated find_file_in_folder_tree and retrieve_or_download_file functions to remove unnecessary parameters and adjust internal logic accordingly.
streaming_providers/routes.py Enhanced get_or_create_video_url and streaming_provider_endpoint functions to include a background_tasks parameter for improved task management.
streaming_providers/qbittorrent/utils.py Updated find_file_in_folder_tree, retrieve_or_download_file, and get_video_url_from_qbittorrent to accommodate new parameters and improve file selection logic.
streaming_providers/seedr/api.py Modified get_device_code and authorize functions to transition to asynchronous context management for the Login class.
streaming_providers/seedr/utils.py Converted several functions to asynchronous, introduced TorrentStatus enum, and updated various methods to enhance error handling and resource management.
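The `background_tasks` change to `routes.py` noted in the table defers metadata updates so they do not delay the streaming response. The sketch below illustrates the idea with a hypothetical stand-in for FastAPI's `BackgroundTasks` (all names here are illustrative, not the repository's actual signatures):

```python
import asyncio


class BackgroundTasksSketch:
    """Minimal stand-in for fastapi.BackgroundTasks, for illustration only."""

    def __init__(self):
        self.tasks = []

    def add_task(self, func, *args):
        self.tasks.append((func, args))

    async def run_all(self):
        # FastAPI runs queued tasks after the response has been sent.
        for func, args in self.tasks:
            await func(*args)


updated = []


async def update_torrent_streams_metadata(info_hash):
    updated.append(info_hash)


async def get_or_create_video_url(info_hash, background_tasks):
    # Return the URL immediately; queue the metadata update instead of
    # awaiting it inline, so it cannot block the response.
    background_tasks.add_task(update_torrent_streams_metadata, info_hash)
    return f"https://example.invalid/stream/{info_hash}"


async def demo():
    updated.clear()
    bt = BackgroundTasksSketch()
    url = await get_or_create_video_url("abc123", bt)
    assert updated == []  # not yet run when the response is returned
    await bt.run_all()
    assert updated == ["abc123"]
    return url


print(asyncio.run(demo()))
```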

Possibly related PRs

  • Bugfixes #330: Unrelated to this PR — #330 addresses content filtering and data-processing enhancements, while this PR focuses on transitioning the streaming provider classes to asynchronous operations.

Poem

🐰 In the land of code where bunnies hop,
Asynchronous magic makes our hearts stop.
With await and async, we dance with glee,
Non-blocking wonders, as swift as can be!
So here’s to the changes, a joyful delight,
In the world of streaming, our future is bright! 🌟


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 41

🧹 Outside diff range and nitpick comments (35)
streaming_providers/debridlink/api.py (1)

21-25: Async implementation looks great, consider adding error handling

The transition to using an async context manager for the DebridLink client in the authorize function is well implemented. This change:

  1. Ensures proper resource management
  2. Allows for non-blocking I/O operations, potentially improving performance
  3. Maintains backwards compatibility by keeping the function signature unchanged

The implementation correctly awaits the result of dl_client.authorize(data.device_code) and uses the constant NO_CACHE_HEADERS, maintaining consistency with the previous version.

Consider adding error handling to manage potential exceptions that might occur during the authorization process. This could improve the robustness of the function. For example:

@router.post("/authorize")
async def authorize(data: AuthorizeData):
    try:
        async with DebridLink() as dl_client:
            result = await dl_client.authorize(data.device_code)
            return JSONResponse(
                content=result,
                headers=const.NO_CACHE_HEADERS,
            )
    except Exception as e:
        # Log the error
        logger.error(f"Authorization failed: {str(e)}")
        return JSONResponse(
            content={"error": "Authorization failed"},
            status_code=500,
            headers=const.NO_CACHE_HEADERS,
        )

This suggestion adds basic error handling and provides a more informative response in case of failures.

streaming_providers/premiumize/api.py (1)

24-35: Consider specifying the HTTP status code for error responses

When returning error messages, specifying the appropriate HTTP status code can help clients handle responses correctly. For example, use a 400 status code for bad requests or 500 for internal server errors.

Here's how you might adjust the error response:

from fastapi import HTTPException, status

# In the exception handling block
except Exception as e:
    raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=str(e))
streaming_providers/alldebrid/utils.py (3)

10-20: Consider adding type hints for better code clarity

Adding type hints to the get_torrent_info function will improve code readability and help with static analysis tools.

Apply this diff to include type hints:

-async def get_torrent_info(ad_client, info_hash):
+async def get_torrent_info(ad_client: AllDebrid, info_hash: str) -> Optional[Dict]:
     torrent_info = await ad_client.get_available_torrent(info_hash)
     if torrent_info and torrent_info["status"] == "Ready":
         return torrent_info

Ensure to import Optional and Dict from the typing module if not already imported.

from typing import Optional, Dict

23-25: Add type hints and handle potential exceptions

For improved code clarity and robustness, consider adding type hints to the add_new_torrent function. Additionally, ensure that exceptions from ad_client.add_magnet_link are appropriately handled.

Apply this diff to include type hints:

-async def add_new_torrent(ad_client, magnet_link):
+async def add_new_torrent(ad_client: AllDebrid, magnet_link: str) -> str:
     response_data = await ad_client.add_magnet_link(magnet_link)
     return response_data["data"]["magnets"][0]["id"]

Line range hint 28-45: Add type hints to wait_for_download_and_get_link function

Adding type hints will enhance readability and assist with static analysis.

Apply this diff to include type hints:

 async def wait_for_download_and_get_link(
-    ad_client, torrent_id, filename, file_index, episode, max_retries, retry_interval
+    ad_client: AllDebrid,
+    torrent_id: int,
+    filename: str,
+    file_index: int,
+    episode: Optional[int],
+    max_retries: int,
+    retry_interval: int,
 ) -> str:
     torrent_info = await ad_client.wait_for_status(

Ensure to import Optional from the typing module if not already imported.

from typing import Optional
streaming_providers/alldebrid/client.py (5)

15-16: Consider removing async from initialize_headers

The method initialize_headers is marked as async but does not contain any await expressions. Unless there is a specific need for it to be asynchronous, consider removing the async keyword to simplify the code.


18-19: Consider removing async from disable_access_token

The method disable_access_token is defined as asynchronous but doesn't perform any asynchronous operations. If there's no intention to add asynchronous code here in the future, you might remove the async keyword.


21-22: Consider removing async from _handle_service_specific_errors

The method _handle_service_specific_errors is declared as async but doesn't contain any await statements. If asynchronous operations aren't required, consider making it a regular method.


Line range hint 50-68: Ensure consistent error message formatting

In the _validate_error_response method, some error messages have been simplified by removing unnecessary f-string formatting. However, other parts of the code still use f-strings for static messages. For consistency and clarity, consider standardizing how error messages are formatted throughout the codebase.


Line range hint 107-118: Enhance exception handling in create_download_link

The exception raised in create_download_link includes the full response in the message, which may expose sensitive information. Consider logging the detailed response for debugging purposes and raising a user-friendly exception message instead.

Apply this change to improve exception handling:

 async def create_download_link(self, link):
     response = await self._make_request(
         "GET",
         "/link/unlock",
         params={"link": link},
         is_expected_to_fail=True,
     )
     if response.get("status") == "success":
         return response
-    raise ProviderException(
-        f"Failed to create download link from AllDebrid {response}",
-        "transfer_error.mp4",
-    )
+    # Log the detailed response for debugging
+    logger.error(f"Failed to create download link from AllDebrid: {response}")
+    # Raise a user-friendly exception
+    raise ProviderException(
+        "Failed to create download link from AllDebrid.",
+        "transfer_error.mp4",
+    )
streaming_providers/premiumize/client.py (2)

Line range hint 77-88: Handle potential KeyError in get_torrent_info.

In get_torrent_info, the code assumes that transfer_list contains a "transfers" key. To prevent a potential KeyError, consider adding a check to ensure that "transfers" exists in transfer_list before accessing it.

Apply the following change to handle the potential error:

 async def get_torrent_info(self, torrent_id):
     transfer_list = await self.get_transfer_list()
+    if "transfers" not in transfer_list:
+        raise ProviderException("Missing 'transfers' in response", "transfer_error.mp4")
     torrent_info = next(
         (
             torrent
             for torrent in transfer_list["transfers"]
             if torrent["id"] == torrent_id
         ),
         None,
     )
     return torrent_info

Line range hint 120-134: Verify the integrity of response handling in get_available_torrent.

In get_available_torrent, ensure that the response from get_transfer_list contains the "transfers" key before accessing it to prevent potential KeyError. Consider adding a check and handling unexpected responses gracefully.

Apply the following change to handle potential missing keys:

     torrent_list_response = await self.get_transfer_list()
     if torrent_list_response.get("status") != "success":
         if torrent_list_response.get("message") == "Not logged in.":
             raise ProviderException(
                 "Premiumize is not logged in.", "invalid_token.mp4"
             )
         raise ProviderException(
             "Failed to get torrent info from Premiumize", "transfer_error.mp4"
         )
+    if "transfers" not in torrent_list_response:
+        raise ProviderException(
+            "Missing 'transfers' in response", "transfer_error.mp4"
+        )
     available_torrents = torrent_list_response["transfers"]
streaming_providers/realdebrid/utils.py (2)

59-60: Correct grammatical error in error message

There's a grammatical error in the error message: "Torrent is already being downloading". It should be "Torrent is already downloading" or "Torrent is already being downloaded".

Apply this diff to fix the error message:

-                    "Torrent is already being downloading", "torrent_not_downloaded.mp4"
+                    "Torrent is already downloading", "torrent_not_downloaded.mp4"

117-117: Consider logging exceptions instead of passing silently

In the update_rd_cache_status function, the except ProviderException: block silently passes on exceptions. Suppressing exceptions without any logging can make debugging difficult.

Consider logging the exception or providing feedback to help with troubleshooting. For example:

except ProviderException as e:
    logger.error(f"Failed to update RD cache status: {e}")
streaming_providers/premiumize/utils.py (1)

43-44: Improve error message grammar

The error message "Not enough seeders available for parse magnet link" is grammatically incorrect. Consider rephrasing it for clarity.

Suggested change:

 raise ProviderException(
-    "Not enough seeders available for parse magnet link",
+    "Not enough seeders available to parse magnet link",
     "transfer_error.mp4",
 )
streaming_providers/offcloud/client.py (2)

17-18: Unnecessary use of async in initialize_headers

The initialize_headers method does not perform any asynchronous operations. Declaring it as an async function is unnecessary and may introduce unintended overhead. Consider removing the async keyword.

Apply this diff to fix the issue:

-    async def initialize_headers(self):
+    def initialize_headers(self):
         pass

20-26: Unnecessary use of async in _handle_service_specific_errors

The _handle_service_specific_errors method does not perform any asynchronous operations. Declaring it as an async function is unnecessary. Consider removing the async keyword.

Apply this diff to fix the issue:

-    async def _handle_service_specific_errors(self, error: httpx.HTTPStatusError):
+    def _handle_service_specific_errors(self, error: httpx.HTTPStatusError):
         if error.response.status_code == 403:
             raise ProviderException("Invalid OffCloud API key", "invalid_token.mp4")
         if error.response.status_code == 429:
             raise ProviderException(
                 "OffCloud rate limit exceeded", "too_many_requests.mp4"
             )
streaming_providers/realdebrid/client.py (5)

Line range hint 20-34: Confirm Python Version Compatibility for match Statement

The use of the match statement in _handle_service_specific_errors relies on Python 3.10 or newer. Ensure that the project's runtime environment supports Python 3.10+, or refactor the match statement to use if-elif conditions for broader compatibility.

If needed, refactor using if-elif statements:

-            match error_code:
-                case 9:
+            if error_code == 9:
                 raise ProviderException(
                     "Real-Debrid Permission denied for free account",
                     "need_premium.mp4",
                 )
-                case 22:
+            elif error_code == 22:
                 raise ProviderException(
                     "IP address not allowed", "ip_not_allowed.mp4"
                 )
-                case 34:
+            elif error_code == 34:
                 raise ProviderException(
                     "Too many requests", "too_many_requests.mp4"
                 )
-                case 35:
+            elif error_code == 35:
                 raise ProviderException(
                     "Content marked as infringing", "content_infringing.mp4"
                 )

Line range hint 59-70: Ensure initialize_headers Is Properly Awaited

The initialize_headers method has been converted to an asynchronous function. Ensure that any code calling this method awaits it appropriately. If overridden from the base class, verify that the base class method is also asynchronous and that all calls are updated.

Search for all usages of initialize_headers and update them to include await:

#!/bin/bash
# Description: Find all calls to initialize_headers that need to be updated.

# Test: Search for initialize_headers calls not preceded by await.
rg --type py --fixed-strings 'initialize_headers(' | rg -v 'await'

163-166: Unnecessary return Statement in disable_access_token

The method disable_access_token performs an action but does not need to return any value. The return statement can be omitted for clarity, especially since the method is intended to return None.

Consider removing the return statement:

-        return await self._make_request(
+        await self._make_request(
             "GET",
             f"{self.BASE_URL}/disable_access_token",
             is_return_none=True,
             is_expected_to_fail=True,
         )

Line range hint 185-202: Enhance Error Handling in create_download_link

The method checks for "download" and "error_code" keys in the response, but if neither is present, it raises a generic exception. Consider adding more detailed error handling or logging to aid in debugging unexpected responses.

You might want to log the unexpected response or include additional information in the exception:

             raise ProviderException(
-                f"Failed to create download link. response: {response}", "api_error.mp4"
+                f"Failed to create download link. Unexpected response: {response}", "api_error.mp4"
             )

2-2: Remove Unused Import Any

The Any type is imported but not used in the code. Cleaning up unused imports can improve code readability.

-from typing import Any, Optional
+from typing import Optional
streaming_providers/torbox/client.py (3)

10-11: Consider removing async from initialize_headers

The method initialize_headers is declared as asynchronous, but it doesn't contain any await expressions or asynchronous operations. If it's not required to be asynchronous (e.g., not overriding an async method in the base class), consider removing the async keyword to simplify the code.


16-17: Consider removing async from _handle_service_specific_errors

The method _handle_service_specific_errors is declared as asynchronous but does not contain any await expressions or perform asynchronous operations. If this method doesn't need to be async (e.g., not overriding an async method in the base class), consider removing the async keyword.


Line range hint 79-90: Enhance error handling in create_download_link method

The condition if "successfully" in response.get("detail") relies on substring matching, which may not be robust against different response messages. Consider checking for a specific success status or code in the response to make error handling more reliable.

Apply this diff to improve the error checking:

- if "successfully" in response.get("detail"):
+ if response.get("status") == "success":

Ensure that the API indeed returns a status field with a value indicating success.

streaming_providers/debridlink/utils.py (3)

12-12: Consider using union types if supporting Python 3.10 or higher

The parameter episode has been annotated with Optional[int]. If your project exclusively supports Python 3.10 or newer, you might use the union type syntax int | None for improved readability.

Also applies to: 30-30


79-80: Log exceptions instead of passing silently

In the except ProviderException: blocks, exceptions are caught but not logged. This can make debugging difficult. Consider logging the exceptions to aid in troubleshooting.

Apply this diff to add logging (assuming a configured logger):

 # In update_dl_cache_status
-except ProviderException:
-    pass
+except ProviderException as e:
+    logger.error(f"ProviderException in update_dl_cache_status: {e}")

 # In fetch_downloaded_info_hashes_from_dl
-except ProviderException:
-    return []
+except ProviderException as e:
+    logger.error(f"ProviderException in fetch_downloaded_info_hashes_from_dl: {e}")
+    return []

Also applies to: 94-95


1-2: Remove unused imports if they are unnecessary

Ensure that all imported modules are used in the code. If asyncio or Optional are not required elsewhere, consider removing them to clean up the imports.

streaming_providers/debridlink/client.py (7)

Line range hint 13-17: Consider removing async from _handle_service_specific_errors

The method _handle_service_specific_errors has been converted to an async function but does not contain any await expressions or perform asynchronous operations. Converting it to async unnecessarily may introduce overhead and confusion. Consider keeping it as a synchronous method unless there is a plan to include asynchronous logic.

🧰 Tools
🪛 Gitleaks

11-11: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


Line range hint 55-62: Add return type annotation to get_token

Adding a return type annotation to get_token enhances readability and assists with static type checking. Example:

async def get_token(self, client_id, device_code) -> dict[str, Any]:

Line range hint 67-74: Add return type annotation to refresh_token

For consistency and clarity, consider specifying the return type of refresh_token. Example:

async def refresh_token(self, client_id, refresh_token) -> dict[str, Any]:

Line range hint 78-90: Add return type annotation to authorize

The authorize method returns different structures based on the response. Specifying the return type can improve understanding of the possible outcomes. Example:

async def authorize(self, device_code) -> dict[str, Any]:

Line range hint 92-104: Add return type annotation to add_magnet_link

Including a return type annotation for add_magnet_link improves code readability. Example:

async def add_magnet_link(self, magnet_link) -> dict[str, Any]:

Line range hint 109-117: Add return type annotation to get_torrent_info

Specify the return type of get_torrent_info for better clarity. Example:

async def get_torrent_info(self, torrent_id) -> dict[str, Any]:

Missing await for Asynchronous Method Calls

Several calls to updated asynchronous methods are not preceded by await, which may lead to RuntimeWarning or RuntimeError. Please ensure that all calls to the following methods are properly awaited and within an asynchronous context:

  • streaming_providers/seedr/api.py

    • response = seedr.authorize(data.device_code)
  • streaming_providers/realdebrid/api.py

    • content=rd_client.get_device_code()
  • scrapers/torrent_info.py

    • torrent_info = torrent_info.get_torrent_info(prowlarr_data["infoUrl"], prowlarr_data["indexer"])
  • streaming_providers/debridlink/utils.py

    • torrent_info = dl_client.get_available_torrent(info_hash)
    • torrent_id = dl_client.add_magnet_link(magnet_link)

Please update these calls by adding the await keyword to ensure proper asynchronous execution.
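The failure mode behind these findings can be demonstrated in isolation. `DLClient` below is a hypothetical stand-in, not the real client: calling an async method without `await` yields a coroutine object rather than the result, and Python emits a RuntimeWarning if it is never awaited:

```python
import asyncio


class DLClient:
    async def add_magnet_link(self, magnet_link):
        return {"id": 42}


async def main():
    client = DLClient()
    # Without await, this is a coroutine object, not the dict the caller expects:
    pending = client.add_magnet_link("magnet:?xt=urn:btih:...")
    assert asyncio.iscoroutine(pending)
    pending.close()  # closed here only to silence the "never awaited" warning
    # The fix is simply to await the call:
    return await client.add_magnet_link("magnet:?xt=urn:btih:...")


print(asyncio.run(main()))
```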

🔗 Analysis chain

Line range hint 142-153: Verify that all callers of updated methods are now async

With the conversion of several methods to async, ensure that all code calling these methods has been updated to use await and is within an asynchronous context. Failure to do so can lead to RuntimeWarning or RuntimeError due to unawaited coroutines or calling async functions synchronously.

Run the following script to find usages of the updated methods and verify that they are awaited:

This script searches for calls to the updated methods that are not preceded by await, indicating potential issues where the async methods are called without awaiting. Review any matches to ensure that calls to these methods are correctly awaited.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all calls to updated async methods are properly awaited.

methods=(
    '_handle_service_specific_errors'
    'initialize_headers'
    'get_device_code'
    'get_token'
    'refresh_token'
    'authorize'
    'add_magnet_link'
    'get_user_torrent_list'
    'get_torrent_info'
    'get_torrent_files_list'
    'get_torrent_instant_availability'
    'delete_torrent'
    'disable_access_token'
    'get_available_torrent'
)

for method in "${methods[@]}"; do
    echo "Searching for calls to ${method}:"
    rg --type py -g '!streaming_providers/debridlink/client.py' --pcre2 "(?<!await\s)(\b$method\b)\s*\(" -A 2 -B 2
    echo ""
done

Length of output: 49438

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 95f57a9 and 3f36130.

📒 Files selected for processing (17)
  • streaming_providers/alldebrid/client.py (5 hunks)
  • streaming_providers/alldebrid/utils.py (3 hunks)
  • streaming_providers/debrid_client.py (1 hunks)
  • streaming_providers/debridlink/api.py (1 hunks)
  • streaming_providers/debridlink/client.py (11 hunks)
  • streaming_providers/debridlink/utils.py (2 hunks)
  • streaming_providers/offcloud/client.py (3 hunks)
  • streaming_providers/offcloud/utils.py (1 hunks)
  • streaming_providers/parser.py (1 hunks)
  • streaming_providers/premiumize/api.py (1 hunks)
  • streaming_providers/premiumize/client.py (5 hunks)
  • streaming_providers/premiumize/utils.py (3 hunks)
  • streaming_providers/realdebrid/api.py (1 hunks)
  • streaming_providers/realdebrid/client.py (9 hunks)
  • streaming_providers/realdebrid/utils.py (3 hunks)
  • streaming_providers/torbox/client.py (4 hunks)
  • streaming_providers/torbox/utils.py (1 hunks)
🧰 Additional context used
🪛 Ruff
streaming_providers/debrid_client.py

22-25: Use contextlib.suppress(ProviderException) instead of try-except-pass

Replace with contextlib.suppress(ProviderException)

(SIM105)
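The SIM105 refactor replaces a silent try/except/pass with an explicit, named suppression. A minimal sketch of the pattern — `ProviderException` and `disable_access_token` here are stand-ins for the project's own names:

```python
import contextlib

class ProviderException(Exception):
    """Stand-in for the project's exception type (assumed name only)."""

def disable_access_token() -> None:
    # Simulates a cleanup call that may fail if the token is already revoked.
    raise ProviderException("token already revoked")

# The form Ruff SIM105 recommends over try/except/pass:
with contextlib.suppress(ProviderException):
    disable_access_token()

survived = True  # reached only because the exception was suppressed
```

The intent reads the same as try/except/pass, but the suppressed exception type is stated up front rather than buried in an empty handler.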


97-100: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)
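The B904 fix chains the original error with `raise ... from err`, so tracebacks show both the provider-level failure and its root cause. A small illustrative sketch, using an assumed `ProviderException` stand-in rather than the project's actual class:

```python
import json

class ProviderException(Exception):
    """Stand-in for the project's exception type (assumed name only)."""

def parse_provider_response(payload: str) -> dict:
    try:
        return json.loads(payload)
    except json.JSONDecodeError as err:
        # B904: `from err` preserves the root cause in the exception chain.
        raise ProviderException("Invalid JSON from provider") from err

try:
    parse_provider_response("not json")
except ProviderException as exc:
    caught = exc
```

Without `from err`, Python would still print the original traceback but label it "During handling of the above exception, another exception occurred", which misreads as a bug in the error handler itself.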

streaming_providers/realdebrid/utils.py

27-29: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


89-91: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (32)
streaming_providers/realdebrid/api.py (2)

21-23: ⚠️ Potential issue

LGTM! Consider awaiting authorize()

The migration to asynchronous context management is a good improvement for resource handling. However, there's a potential issue to address:

If rd_client.authorize() is an asynchronous method in the updated RealDebrid class, it should be awaited. Consider updating the code as follows:

 async with RealDebrid() as rd_client:
-    response = rd_client.authorize(data.device_code)
+    response = await rd_client.authorize(data.device_code)
     return JSONResponse(content=response, headers=const.NO_CACHE_HEADERS)

To confirm if authorize() is indeed asynchronous, we can check its implementation:

#!/bin/bash
# Description: Check if authorize() in RealDebrid class is asynchronous

ast-grep --lang python --pattern 'class RealDebrid:
  $$$
  async def authorize($_):
    $$$
'

If this returns a match, we should update the code to await the call.


13-16: ⚠️ Potential issue

LGTM! Consider awaiting get_device_code()

The migration to asynchronous context management is a good improvement for resource handling. However, there's a potential issue to address:

If rd_client.get_device_code() is an asynchronous method in the updated RealDebrid class, it should be awaited. Consider updating the code as follows:

 async with RealDebrid() as rd_client:
     return JSONResponse(
-        content=rd_client.get_device_code(), headers=const.NO_CACHE_HEADERS
+        content=await rd_client.get_device_code(), headers=const.NO_CACHE_HEADERS
     )

To confirm if get_device_code() is indeed asynchronous, we can check its implementation:

If this returns a match, we should update the code to await the call.

streaming_providers/debridlink/api.py (2)

13-16: Excellent implementation of async context manager

The transition to using an async context manager for the DebridLink client is a great improvement. This change:

  1. Ensures proper resource management
  2. Allows for non-blocking I/O operations, potentially improving performance
  3. Maintains backwards compatibility by keeping the function signature unchanged

The implementation correctly awaits the result of dl_client.get_device_code() and uses the constant NO_CACHE_HEADERS, maintaining consistency with the previous version.



Line range hint 1-25: Overall excellent transition to async implementation

The changes in this file successfully achieve the PR objective of migrating to async implementations. Both the get_device_code and authorize functions now use asynchronous context managers, which should lead to improved performance for I/O-bound operations.

The modifications maintain backwards compatibility and adhere to Python's best practices for async programming. These changes contribute positively to the overall goal of enhancing the efficiency and responsiveness of the streaming providers module.

Great job on this implementation!

streaming_providers/parser.py (1)

8-9: LGTM! Verify usage across the codebase.

The changes to make filename and file_index optional parameters (allowing None) improve the function's flexibility and align well with its existing logic. The function already has fallback mechanisms for when these parameters are not provided, so no additional changes to the function body are necessary.

To ensure that this change doesn't introduce any issues, please verify that all calls to this function across the codebase are still valid. Run the following script to check the usage:

Review the results to ensure that all calls to this function are still valid with the new optional parameters.

✅ Verification successful

Verified: Usage of select_file_index_from_torrent remains consistent across the codebase.

All existing calls to select_file_index_from_torrent continue to provide the filename and file_index parameters, ensuring compatibility with the updated function signature where these parameters are now optional. No issues were found during the verification process.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all calls to select_file_index_from_torrent and verify they're still valid with the new signature.

# Search for function calls
rg --type python -A 5 'select_file_index_from_torrent\s*\('

# Search for imports of this function to catch cases where it might be renamed
rg --type python 'from\s+streaming_providers\.parser\s+import.*select_file_index_from_torrent'

Length of output: 220


Script:

#!/bin/bash
# Description: Find all calls to select_file_index_from_torrent and verify they're still valid with the new signature.

# Search for function calls in Python files using the correct type identifier
rg --type py -A 5 'select_file_index_from_torrent\s*\('

# Search for imports of this function to catch cases where it might be renamed
rg --type py 'from\s+streaming_providers\.parser\s+import.*select_file_index_from_torrent'

Length of output: 3094

streaming_providers/premiumize/api.py (2)

26-26: Verify the presence of 'access_token' in 'token_data'

When accessing token_data["access_token"], there's an assumption that this key exists. If access_token is missing, a KeyError will be raised. To make the code more robust, consider adding a check to ensure that the access_token key is present.

You can add a check like this:

access_token = token_data.get("access_token")
if not access_token:
    return {"error": "Access token not found in token data"}
token = pm_client.encode_token_data(access_token)

16-19: Confirm that 'get_authorization_url' is synchronous

Within the asynchronous context manager, get_authorization_url() is called without await, implying it's a synchronous method. Ensure that this is intentional and that the method does not need to be awaited.

If get_authorization_url is asynchronous, it should be awaited:

authorization_url = await pm_client.get_authorization_url()

Please verify the method's implementation to confirm its synchronous behavior.

streaming_providers/offcloud/utils.py (2)

54-61: ⚠️ Potential issue

Handle exceptions instead of passing in update_oc_cache_status.

Currently, any ProviderException is silently ignored, which could mask issues that should be addressed or logged. Consider logging the exception to aid in debugging and monitoring.

Apply this diff to handle exceptions appropriately:

 except ProviderException:
-    pass
+    logger.error("Failed to update cache status", exc_info=True)

Likely invalid or redundant comment.


31-34: Verify the correctness of ProviderException usage.

Ensure that the ProviderException class constructor accepts the arguments provided. Incorrect arguments may lead to runtime errors.

Run the following script to check the ProviderException definition:

✅ Verification successful

ProviderException usage is correct.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the constructor signature of `ProviderException`.

# Test: Search for the definition of `ProviderException`. Expect: Constructor accepts message and optional filename.
rg --type python -A 5 'class ProviderException'

Length of output: 80


Script:

#!/bin/bash
# Description: Retry verifying the constructor signature of `ProviderException` with correct file type.

# Test: Search for the definition of `ProviderException`. Expect: Constructor accepts message and optional filename.
rg --type py -A 5 'class ProviderException'

Length of output: 418

streaming_providers/alldebrid/utils.py (1)

1-1: Importing asyncio is appropriate for asynchronous operations

The addition of import asyncio is necessary to support the asynchronous functions and concurrency in the module.

streaming_providers/alldebrid/client.py (5)

82-83: Good implementation of asynchronous get_user_torrent_list

The method get_user_torrent_list has been correctly converted to an asynchronous function and properly awaits the _make_request call.


97-105: Efficient use of async in get_available_torrent

In the get_available_torrent method, you're effectively using asynchronous operations by awaiting get_user_torrent_list and iterating over the results correctly.


75-77: Verify the request payload in add_magnet_link

In the add_magnet_link method, you're sending data using data={"magnets[]": magnet_link}. Confirm that the API expects the magnet link to be sent in this format and that using the data parameter is appropriate. If the API expects a JSON payload, you should use the json parameter instead.

Run the following script to check the API documentation or existing code for the correct usage:

#!/bin/bash
# Description: Verify the correct method to send magnet links to the API.

# Assuming API documentation is available locally
cat docs/alldebrid_api.md | grep -A5 'magnet/upload'

# Alternatively, check other instances where _make_request is used for magnet uploads
rg --type py '_make_request.*magnet/upload' -A5

39-40: Confirm that super()._make_request is asynchronous

Ensure that the super()._make_request method in the parent DebridClient class is asynchronous. Calling await on a non-async method will result in a runtime error.

Run the following script to check the parent class definition:

#!/bin/bash
# Description: Verify that DebridClient._make_request is an async method.

# Search for DebridClient class definition and _make_request method
rg --type py -A5 'class DebridClient' streaming_providers/debrid_client.py

# Look for 'async def _make_request' within DebridClient

28-32: Ensure proper usage of data and json parameters in _make_request

The _make_request method now includes both data and json parameters. Be cautious to avoid conflicts, as passing both can lead to unexpected behavior. Ensure that when calling this method, only one of these parameters is used as appropriate for the request.

Run the following script to check all calls to _make_request for proper usage:
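Independent of that search, the data/json conflict itself is easy to guard against at the call boundary. A hypothetical helper (not part of the project) showing one way to make the "only one body style" rule explicit:

```python
from typing import Optional

def build_body_kwargs(
    data: Optional[dict] = None,
    json: Optional[dict] = None,
) -> dict:
    """Hypothetical guard: reject calls that pass both request-body styles."""
    if data is not None and json is not None:
        raise ValueError("pass either 'data' (form-encoded) or 'json', not both")
    if json is not None:
        return {"json": json}
    if data is not None:
        return {"data": data}
    return {}

form_kwargs = build_body_kwargs(data={"magnets[]": "magnet:?xt=urn:btih:..."})

try:
    build_body_kwargs(data={"a": 1}, json={"b": 2})
    conflict_raised = False
except ValueError:
    conflict_raised = True
```

Failing loudly at the boundary is preferable to letting the HTTP layer pick one body and silently drop the other.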

streaming_providers/premiumize/client.py (8)

Line range hint 47-58: Async conversion of get_token is appropriate.

The get_token method has been correctly converted to an asynchronous function, and the use of await with _make_request is appropriate.


60-65: Async conversion of add_magnet_link is appropriate.

The add_magnet_link method is correctly updated to be asynchronous, and it appropriately uses await with _make_request.


67-72: Async conversion of create_folder is appropriate.

The create_folder method has been properly converted to an asynchronous function, and the use of await is correct.


74-75: Async conversion of get_transfer_list is appropriate.

The method get_transfer_list has been correctly updated to be asynchronous, ensuring non-blocking I/O operations.


89-93: Async conversion of get_folder_list is appropriate.

The get_folder_list method is correctly updated to be asynchronous, allowing for non-blocking retrieval of folder lists.


96-99: Async conversion of delete_folder is appropriate.

The delete_folder method has been properly converted to an asynchronous function, and it uses await appropriately.


101-104: Async conversion of delete_torrent is appropriate.

The delete_torrent method is correctly updated to be asynchronous, improving the responsiveness of deletion operations.


Line range hint 106-116: Ensure robust error handling in get_torrent_instant_availability.

The method checks if results.get("status") != "success" and raises a ProviderException if necessary. This is good practice. Ensure that any other potential exceptions from _make_request are also appropriately handled.

streaming_providers/premiumize/utils.py (2)

104-113: ⚠️ Potential issue

Handle exceptions explicitly instead of passing

The except ProviderException: pass block silently ignores exceptions, which can make debugging difficult and hide underlying issues. Consider logging the exception or handling specific error cases.

Suggested change:

 except ProviderException as e:
-    pass
+    logger.error(f"Error updating cache status: {e}")

Likely invalid or redundant comment.


124-130: ⚠️ Potential issue

Avoid silent exception handling

In the except ProviderException: return [] block, exceptions are caught and an empty list is returned without any logging. This could make diagnosing issues harder. Consider logging the exception before returning.

Suggested change:

 except ProviderException as e:
+    logger.error(f"Error fetching downloaded info hashes: {e}")
     return []

Likely invalid or redundant comment.

streaming_providers/offcloud/client.py (1)

122-127: ⚠️ Potential issue

Ensure type consistency when matching episodes

The comparison if episode in PTT.parse_title(link).get("episodes", []) assumes that both episode and the items in episodes are of the same type. Verify that PTT.parse_title(link).get("episodes", []) returns a list of integers to match with episode: Optional[int]. If episodes contains strings, you may need to convert them to integers before comparison.

Run the following script to check the types of episodes returned:

streaming_providers/realdebrid/client.py (2)

42-57: Verify Asynchronous Behavior of super()._make_request

In the overridden _make_request method, you're calling await super()._make_request(...). Please ensure that the DebridClient's _make_request method is asynchronous. If it's synchronous, this will raise a TypeError at runtime.

Confirm that DebridClient's _make_request method is defined as async def.


15-18: Ensure Compatibility of Asynchronous Methods with Base Class

The methods _handle_service_specific_errors and _make_request have been converted to asynchronous functions. Please verify that the base class DebridClient methods are also asynchronous. If DebridClient has synchronous versions of these methods, this could lead to issues when using await with them.

Run the following script to check if the base class methods are asynchronous:

streaming_providers/torbox/client.py (3)

19-27: Ensure compatibility of type hints with supported Python versions

The type hints use the union operator (|), such as Optional[dict | str], which requires Python 3.10 or later. If your project needs to support earlier Python versions (e.g., 3.9 or below), consider using Union from the typing module instead:

from typing import Union, Optional

# Example:
data: Optional[Union[dict, str]] = None

19-32: LGTM—Asynchronous request handling is implemented correctly

The _make_request method has been correctly updated to handle asynchronous operations. The addition of the json parameter allows for flexibility in sending JSON payloads. The use of await ensures that the asynchronous requests are properly awaited.


13-14: Ensure proper implementation of asynchronous context management

The method __aexit__ has been implemented to support asynchronous context management. Please verify that the __aenter__ method is also implemented so that the Torbox class can be used as an asynchronous context manager.

Run the following script to verify the implementation of __aenter__:
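Separately, the `__aenter__`/`__aexit__` pairing the comment describes can be sketched in miniature. This is a generic illustration of the async context-manager protocol, not the project's actual Torbox class:

```python
import asyncio

class AsyncClientSketch:
    """Hypothetical minimal client showing the __aenter__/__aexit__ pairing."""

    def __init__(self) -> None:
        self.closed = False

    async def __aenter__(self) -> "AsyncClientSketch":
        # The returned object is what `async with ... as client` binds.
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        # Runs on block exit, even when an exception propagates out.
        self.closed = True

async def demo() -> bool:
    async with AsyncClientSketch() as client:
        pass
    return client.closed

closed_after_exit = asyncio.run(demo())
```

If only `__aexit__` is defined, `async with` raises `TypeError` at entry — which is exactly why the review asks to verify both halves exist.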

streaming_providers/debridlink/client.py (1)

2-2: Importing Any and Optional for type annotations

The addition of Any and Optional from the typing module is appropriate for the type hints used in the code.

Comment on lines +16 to +19
async with Premiumize() as pm_client:
return RedirectResponse(
pm_client.get_authorization_url(), headers=const.NO_CACHE_HEADERS
)

⚠️ Potential issue

Add exception handling for the asynchronous context

While using the asynchronous context manager with Premiumize, any exceptions that occur within the async with block may lead to unhandled exceptions. Consider adding error handling to manage potential exceptions gracefully and provide meaningful feedback to the user.

Here's an example of how you might implement this:

async with Premiumize() as pm_client:
    try:
        authorization_url = pm_client.get_authorization_url()
        return RedirectResponse(authorization_url, headers=const.NO_CACHE_HEADERS)
    except Exception as e:
        # Handle specific exceptions as needed
        return {"error": str(e)}

Comment on lines +24 to +35
async with Premiumize() as pm_client:
token_data = await pm_client.get_token(code)
token = pm_client.encode_token_data(token_data["access_token"])
user_data = schemas.UserData(
streaming_provider=schemas.StreamingProvider(
service="premiumize", token=token
)
)
encrypted_str = crypto.encrypt_user_data(user_data)
return RedirectResponse(
f"/{encrypted_str}/configure", headers=const.NO_CACHE_HEADERS
)

⚠️ Potential issue

Handle potential exceptions during token retrieval and user data processing

In the oauth2_redirect function, exceptions might occur during the asynchronous operations, such as retrieving the token or processing user data. It's important to handle these exceptions to prevent unhandled errors and to provide clear error messages to the user.

Consider wrapping the operations in a try-except block:

async with Premiumize() as pm_client:
    try:
        token_data = await pm_client.get_token(code)
        token = pm_client.encode_token_data(token_data["access_token"])
        user_data = schemas.UserData(
            streaming_provider=schemas.StreamingProvider(
                service="premiumize", token=token
            )
        )
        encrypted_str = crypto.encrypt_user_data(user_data)
        return RedirectResponse(
            f"/{encrypted_str}/configure", headers=const.NO_CACHE_HEADERS
        )
    except Exception as e:
        # Handle specific exceptions as needed
        return {"error": str(e)}

streaming_providers/offcloud/utils.py — 2 resolved (outdated) review comments
Comment on lines 63 to 77
torrent_info = await get_torrent_info(ad_client, info_hash)
if not torrent_info:
torrent_id = await add_new_torrent(ad_client, magnet_link)
else:
torrent_id = torrent_info.get("id")

return await wait_for_download_and_get_link(
ad_client,
torrent_id,
filename,
file_index,
episode,
max_retries,
retry_interval,
)

🛠️ Refactor suggestion

Ensure proper exception handling within get_video_url_from_alldebrid

Consider adding exception handling to manage potential errors from the asynchronous calls to get_torrent_info, add_new_torrent, and wait_for_download_and_get_link. This will improve the robustness of the function.

Example of adding exception handling:

async with AllDebrid(
        token=user_data.streaming_provider.token, user_ip=user_ip
    ) as ad_client:
    try:
        torrent_info = await get_torrent_info(ad_client, info_hash)
        if not torrent_info:
            torrent_id = await add_new_torrent(ad_client, magnet_link)
        else:
            torrent_id = torrent_info.get("id")

        return await wait_for_download_and_get_link(
            ad_client,
            torrent_id,
            filename,
            file_index,
            episode,
            max_retries,
            retry_interval,
        )
    except ProviderException as e:
        # Handle specific provider exceptions if necessary
        raise e
    except Exception as e:
        # Handle generic exceptions
        raise ProviderException("An error occurred while fetching the video URL") from e

streaming_providers/debridlink/client.py — 5 resolved (outdated) review comments
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

🧹 Outside diff range and nitpick comments (9)
streaming_providers/parser.py (4)

27-28: Improved file selection logic

The addition of the is_video_file check is a good improvement. It ensures that only video files are selected when a specific file index is provided.

However, we can simplify the code structure slightly:

Consider combining the two conditions into a single if statement for better readability:

if file_index is not None and file_index < len(files) and is_video_file(files[file_index][name_key]):
    return file_index

This change addresses the static analysis suggestion (SIM102) and makes the code more concise.


40-46: Improved episode matching and error handling

The changes in the episode matching logic are beneficial:

  1. The addition of the is_video_file check ensures consistency with other selection methods.
  2. The new ProviderException improves error handling when no matching file is found.

These modifications enhance the robustness of the function.

Consider making the error message more specific:

raise ProviderException(
    f"No matching video file found for episode {episode}", "no_matching_file.mp4"
)

This change provides more context about why no matching file was found.


54-60: Enhanced largest file selection and error handling

The changes in the largest file selection logic are beneficial:

  1. The addition of the is_video_file check ensures that only video files are considered as valid selections.
  2. The new ProviderException improves error handling when no suitable video file is found.

These modifications enhance the robustness and consistency of the function.

Consider making the error message more specific:

raise ProviderException(
    "No suitable video file found in the torrent", "no_matching_file.mp4"
)

This change provides more context about why no matching file was found.


Line range hint 1-60: Overall assessment: Significant improvements to file selection logic

The changes made to the select_file_index_from_torrent function represent a substantial improvement in its functionality, flexibility, and error handling:

  1. The function now consistently checks for video files across all selection methods.
  2. Error handling has been enhanced with specific exceptions for better debugging and user feedback.
  3. The addition of the file_size_callback allows for more dynamic file size retrieval.
  4. Optional parameters provide greater flexibility in function usage.

These modifications align well with the stated goal of improving performance and transitioning to more robust code. The function is now better equipped to handle various scenarios in torrent file selection, particularly focusing on video files.

As this function seems to be part of a larger transition towards asynchronous programming, consider whether this function might benefit from being asynchronous as well, especially if the file_size_callback or is_video_file function might involve I/O operations.

🧰 Tools
🪛 Ruff

26-27: Use a single if statement instead of nested if statements

(SIM102)

streaming_providers/offcloud/client.py (2)

80-89: LGTM with suggestion: Async conversion of get_available_torrent

The get_available_torrent method has been correctly converted to an async function, properly using await for the get_user_torrent_list call. The use of next() with a generator expression is an efficient approach.

However, to improve robustness, consider using the get() method when accessing the 'originalLink' key to prevent potential KeyError:

         return next(
             (
                 torrent
                 for torrent in available_torrents
-                if info_hash.casefold() in torrent["originalLink"].casefold()
+                if info_hash.casefold() in torrent.get("originalLink", "").casefold()
             ),
             None,
         )

This change will handle cases where a torrent dictionary might not have the 'originalLink' key.


94-103: LGTM with suggestion: Efficient implementation of get_file_sizes

The new get_file_sizes static method is an efficient implementation using httpx.AsyncClient and asyncio.gather for concurrent HTTP HEAD requests. This approach aligns well with the async migration objective and should improve performance when dealing with multiple files.

However, to improve robustness, consider adding error handling for cases where the 'Content-Length' header might be missing or invalid:

         return [
-            {**file_data, "size": int(response.headers["Content-Length"])}
+            {**file_data, "size": int(response.headers.get("Content-Length", 0))}
             for file_data, response in zip(files_data, file_sizes)
         ]

This change will default to a size of 0 if the 'Content-Length' header is missing, preventing potential KeyError or ValueError exceptions.
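The concurrent-HEAD pattern praised above can be shown in miniature. The sketch below simulates the network call with a dummy coroutine (a real version would use `httpx.AsyncClient.head`); `FAKE_SIZES` and the URLs are illustrative only:

```python
import asyncio

FAKE_SIZES = {"a.mkv": 700, "b.mkv": 1400}

async def head_content_length(url: str) -> int:
    # Simulated HEAD request; a real version would call an async HTTP client.
    await asyncio.sleep(0)
    return FAKE_SIZES.get(url, 0)

async def get_file_sizes(files_data: list[dict]) -> list[dict]:
    # Issue all requests concurrently; gather preserves input order.
    sizes = await asyncio.gather(
        *(head_content_length(f["url"]) for f in files_data)
    )
    return [{**f, "size": s} for f, s in zip(files_data, sizes)]

result = asyncio.run(get_file_sizes([{"url": "a.mkv"}, {"url": "b.mkv"}]))
```

Because `asyncio.gather` returns results in submission order, zipping them back onto `files_data` is safe even though the requests complete in arbitrary order.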

streaming_providers/torbox/client.py (1)

Line range hint 82-94: Enhance success condition checking in create_download_link

In create_download_link, the success condition checks if "successfully" is in response.get("detail"). This approach may not be robust if the detail key is missing or its content changes. Consider checking for a specific status code, a success flag, or handling exceptions to reliably determine if the operation was successful.

Update the success check for a more reliable condition. For example:

 if response.get("status") == "success":
     return response
 raise ProviderException(
     f"Failed to create download link from Torbox {response}",
     "transfer_error.mp4",
 )
streaming_providers/alldebrid/utils.py (2)

Line range hint 28-44: Ensure safe access to torrent links in wait_for_download_and_get_link

When accessing torrent_info["links"][file_index]["link"], there's a risk of KeyError or IndexError if the links list or the specified file_index does not exist. Consider adding checks or exception handling to validate the presence of these elements.

Apply this diff to safely access torrent links:

     file_index = select_file_index_from_torrent(
         torrent_info,
         filename,
         file_index,
         episode,
         file_key="links",
         name_key="filename",
     )
-    response = await ad_client.create_download_link(
-        torrent_info["links"][file_index]["link"]
-    )
+    try:
+        link = torrent_info["links"][file_index]["link"]
+    except (KeyError, IndexError):
+        raise ProviderException("File link not found in torrent info")
+    response = await ad_client.create_download_link(link)
     return response["data"]["link"]

Line range hint 48-77: Add exception handling in get_video_url_from_alldebrid

To improve robustness, consider adding exception handling around the asynchronous calls within get_video_url_from_alldebrid. This will help manage potential errors from get_torrent_info, add_new_torrent, and wait_for_download_and_get_link, providing clearer error messages to the user.

Apply this diff to include exception handling:

 async def get_video_url_from_alldebrid(
     info_hash: str,
     magnet_link: str,
     user_data: UserData,
     filename: str,
     user_ip: str,
     file_index: int,
     episode: int = None,
     max_retries=5,
     retry_interval=5,
     **kwargs,
 ) -> str:
     async with AllDebrid(
         token=user_data.streaming_provider.token, user_ip=user_ip
     ) as ad_client:
+        try:
             torrent_info = await get_torrent_info(ad_client, info_hash)
             if not torrent_info:
                 torrent_id = await add_new_torrent(ad_client, magnet_link)
             else:
                 torrent_id = torrent_info.get("id")
 
             return await wait_for_download_and_get_link(
                 ad_client,
                 torrent_id,
                 filename,
                 file_index,
                 episode,
                 max_retries,
                 retry_interval,
             )
+        except ProviderException as e:
+            # Handle specific provider exceptions if necessary
+            raise e
+        except Exception as e:
+            # Handle generic exceptions
+            raise ProviderException("An error occurred while fetching the video URL") from e
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 3f36130 and 922891c.

📒 Files selected for processing (6)
  • streaming_providers/alldebrid/utils.py (3 hunks)
  • streaming_providers/debrid_client.py (1 hunks)
  • streaming_providers/offcloud/client.py (2 hunks)
  • streaming_providers/offcloud/utils.py (1 hunks)
  • streaming_providers/parser.py (1 hunks)
  • streaming_providers/torbox/client.py (4 hunks)
🧰 Additional context used
🪛 Ruff
streaming_providers/debrid_client.py

23-26: Use contextlib.suppress(ProviderException) instead of try-except-pass

Replace with contextlib.suppress(ProviderException)

(SIM105)


99-102: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

streaming_providers/parser.py

26-27: Use a single if statement instead of nested if statements

(SIM102)

🔇 Additional comments (24)
streaming_providers/parser.py (3)

5-6: Improved function signature and imports

The changes to the imports and function signature are well-thought-out:

  1. The new imports support the enhanced error handling and file validation.
  2. Making filename and file_index optional (using Optional[str] and Optional[int]) increases the function's flexibility.
  3. The addition of the file_size_callback parameter allows for dynamic file size retrieval when needed.

These modifications improve the function's versatility and error handling capabilities.

Also applies to: 11-12, 19-19


34-34: Enhanced filename matching

The addition of the is_video_file check in the filename matching logic is a good improvement. It ensures that only video files are selected when matching by filename, which aligns with the function's purpose and prevents potential issues with selecting non-video files.


48-50: Flexible file size retrieval

The addition of the file_size_callback logic is a good improvement. It allows for dynamic file size retrieval when needed, making the function more versatile. This can be particularly useful in scenarios where file sizes are not immediately available in the torrent info.
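The lazy-sizing idea behind `file_size_callback` can be sketched as follows. All names here are hypothetical illustrations of the pattern, not the project's actual signatures:

```python
from typing import Callable, Optional

def select_largest_video_file(
    files: list[dict],
    file_size_callback: Optional[Callable[[list[dict]], list[dict]]] = None,
) -> dict:
    """Hypothetical sketch: fetch sizes lazily, then pick the largest file."""
    if file_size_callback is not None and any("size" not in f for f in files):
        # Sizes are only fetched when the torrent info lacks them.
        files = file_size_callback(files)
    return max(files, key=lambda f: f.get("size", 0))

def fake_size_lookup(files: list[dict]) -> list[dict]:
    # Stand-in for a network lookup such as OffCloud's concurrent HEAD requests.
    sizes = {"sample.mkv": 350, "episode.mkv": 900}
    return [{**f, "size": sizes[f["name"]]} for f in files]

picked = select_largest_video_file(
    [{"name": "sample.mkv"}, {"name": "episode.mkv"}], fake_size_lookup
)
```

The callback is invoked only when sizes are missing, so providers whose listings already include sizes pay no extra network cost.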

streaming_providers/offcloud/utils.py (4)

10-46: LGTM: Successful conversion to async

The function get_video_url_from_offcloud has been correctly converted to an asynchronous function. All calls to OffCloud client methods are properly awaited, and the async context manager is used correctly. The error handling and overall logic remain intact.


49-63: LGTM: Proper async conversion

The update_oc_cache_status function has been successfully converted to an asynchronous function. The async context manager is used correctly, and the call to get_torrent_instant_availability is properly awaited. The exception handling remains in place, maintaining the function's robustness.


66-79: LGTM: Async conversion and past issue resolved

The fetch_downloaded_info_hashes_from_oc function has been successfully converted to an asynchronous function. The async context manager is used correctly, and the call to get_user_torrent_list is properly awaited.

Additionally, the issue mentioned in the past review comment regarding incorrect access to torrent in the list comprehension has been addressed. The code now correctly accesses the "originalLink" key of the torrent dictionary before splitting.


1-88: Overall: Good async conversion with room for improvement

The file has been successfully converted to use async/await syntax, which should improve performance for I/O-bound operations. All functions have been properly updated to be asynchronous, and the OffCloud client is consistently used with async context managers.

However, there's still room for improvement in error handling, particularly in the delete_all_torrents_from_oc function. Addressing this will enhance the robustness of the code.

Great job on the async conversion! Just a bit more attention to error handling will make this code even better.

streaming_providers/debrid_client.py (2)

1-15: LGTM: Appropriate imports and initialization for async refactoring

The changes to the imports and class initialization are well-suited for the transition to asynchronous programming. The use of httpx.AsyncClient with a timeout of 18.0 seconds (as noted in the comment) is appropriate for async HTTP requests within the Stremio context.


104-131: LGTM: Appropriate async conversions and abstract method declarations

The changes to the remaining methods (initialize_headers, disable_access_token, wait_for_status, and get_torrent_info) are well-implemented:

  1. All methods have been correctly converted to async.
  2. wait_for_status now uses asyncio.sleep instead of time.sleep, which is appropriate for async code.
  3. Abstract methods are correctly decorated with @abstractmethod.

These changes are consistent with the overall async refactoring of the class and follow best practices for abstract base classes in Python.
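The `asyncio.sleep` point matters in practice: a blocking `time.sleep` would stall every coroutine on the loop during the poll interval. A minimal polling sketch under assumed names (`fetch` and `wait_for_status` here are illustrative, not the class's exact signature):

```python
import asyncio

async def wait_for_status(fetch, target: str, max_retries: int = 5,
                          interval: float = 0.01) -> bool:
    # asyncio.sleep yields control back to the event loop between polls;
    # time.sleep would block all other coroutines for the duration.
    for _ in range(max_retries):
        if await fetch() == target:
            return True
        await asyncio.sleep(interval)
    return False
```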

streaming_providers/offcloud/client.py (9)

1-1: LGTM: Appropriate imports for async operations

The addition of asyncio and httpx imports is appropriate for the transition to asynchronous operations. These changes align well with the overall objective of migrating to async functions.

Also applies to: 5-5


16-19: Verify the intention of empty async methods

The initialize_headers and disable_access_token methods have been converted to async but remain empty. Please confirm if this is intentional or if these methods require implementation.

If implementation is needed, consider adding TODOs or raising NotImplementedError.


22-28: LGTM: Proper error handling for specific HTTP status codes

The _handle_service_specific_errors method effectively manages HTTP 403 and 429 status codes, raising appropriate ProviderException instances with clear error messages. This implementation enhances error handling and user feedback.
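The shape of that status-code handling can be sketched with a simple mapping; the messages and video file names below are placeholders, and the real ones live in the OffCloud client:

```python
class ProviderException(Exception):
    def __init__(self, message: str, video_file_name: str):
        super().__init__(message)
        self.video_file_name = video_file_name

# Hypothetical status-to-error mapping for illustration.
SERVICE_ERRORS = {
    403: ("Forbidden access", "invalid_token.mp4"),
    429: ("Rate limited by the service", "too_many_requests_error.mp4"),
}

def handle_service_specific_errors(status_code: int) -> None:
    # Known service-specific statuses become typed exceptions with a
    # user-facing video; everything else falls through to generic handling.
    if status_code in SERVICE_ERRORS:
        message, video = SERVICE_ERRORS[status_code]
        raise ProviderException(message, video)
```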


30-46: LGTM: Async conversion and parameter updates

The _make_request method has been successfully converted to async and includes new parameters that enhance its flexibility. The implementation properly utilizes the async super() call.

Consider using Union for better compatibility

The return type annotation dict | str is only valid in Python 3.10+. For better compatibility with earlier Python versions, consider using Union[dict, str] from the typing module.

- from typing import Optional, List
+ from typing import Optional, List, Union

    async def _make_request(
        self,
        method: str,
        url: str,
-        data: Optional[dict | str] = None,
+        data: Optional[Union[dict, str]] = None,
        json: Optional[dict] = None,
        params: Optional[dict] = None,
        is_return_none: bool = False,
        is_expected_to_fail: bool = False,
        delete: bool = False,
-    ) -> dict | list:
+    ) -> Union[dict, list]:

Clarify the usage of the delete parameter

The delete parameter is used to determine the base URL, which seems unconventional. Consider refactoring this to use separate methods or a more explicit approach for different base URLs.

Can you provide more context on why the delete parameter is used to switch between base URLs?


Line range hint 48-63: LGTM: Proper async conversion of add_magnet_link

The add_magnet_link method has been correctly converted to an async function. The implementation properly uses await for the _make_request call, and the existing error handling logic has been preserved. These changes align well with the overall async migration objective.


65-77: LGTM: Effective async conversion of multiple methods

The get_user_torrent_list, get_torrent_info, and get_torrent_instant_availability methods have been successfully converted to async functions. Each method correctly uses await for the _make_request calls, maintaining their original functionality while adapting to the new asynchronous paradigm.


91-92: LGTM: Proper async conversion of explore_folder_links

The explore_folder_links method has been correctly converted to an async function. It properly uses await for the _make_request call, maintaining its original functionality while adapting to the new asynchronous paradigm.


105-126: LGTM: Effective async conversion of create_download_link

The create_download_link method has been successfully converted to an async function. The implementation correctly uses await for the explore_folder_links call and appropriately integrates the new get_file_sizes method as a callback. These changes maintain the original functionality while adapting to the asynchronous paradigm, potentially improving performance when dealing with multiple files.


128-131: ⚠️ Potential issue

Consider using DELETE HTTP method for delete_torrent

While the delete_torrent method has been correctly converted to an async function, it's using the GET HTTP method for a deletion operation. This is not RESTful and could potentially lead to unintended side effects.

Consider changing the HTTP method to DELETE to properly represent the action:

     async def delete_torrent(self, request_id: str) -> dict:
         return await self._make_request(
-            "GET", f"/cloud/remove/{request_id}", delete=True
+            "DELETE", f"/cloud/remove/{request_id}", delete=True
         )

This change will make the API usage more semantically correct and align with RESTful principles.

Likely invalid or redundant comment.

streaming_providers/torbox/client.py (3)

66-72: ⚠️ Potential issue

Ensure compatibility with Python versions below 3.9

The type annotation list[str] in get_torrent_instant_availability requires Python 3.9 or newer. If the project supports earlier Python versions, consider importing List from typing and updating the annotation accordingly.

To check the project's Python version compatibility, review the project's configuration files or documentation.

If compatibility with Python versions below 3.9 is needed, apply this diff:

+from typing import Any, Optional, List

-    async def get_torrent_instant_availability(self, torrent_hashes: list[str]):
+    async def get_torrent_instant_availability(self, torrent_hashes: List[str]):

74-78: ⚠️ Potential issue

Use compatible type annotations for return types

The return type annotation dict[str, Any] | None in get_available_torrent uses the union operator |, which is available from Python 3.10 onwards. For compatibility with earlier Python versions, replace it with Optional[Dict[str, Any]].

If supporting Python versions earlier than 3.10, apply this diff:

+from typing import Any, Optional, Dict

-    async def get_available_torrent(self, info_hash) -> dict[str, Any] | None:
+    async def get_available_torrent(self, info_hash) -> Optional[Dict[str, Any]]:

10-11: Ensure self.token is initialized before using in initialize_headers

In initialize_headers, self.token is used to set the authorization header. Ensure that self.token is properly initialized before this method is called to avoid AttributeError.

Check where self.token is set and ensure it is assigned before initialize_headers is invoked.

Here is a shell script to search for assignments to self.token:

✅ Verification successful

self.token is properly initialized before use

  • self.token is assigned in streaming_providers/debrid_client.py.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that `self.token` is initialized before use.

# Search for `self.token` assignments in the codebase.
rg --python --pcre2 "(self\.token\s*=\s*.*)" streaming_providers/

Length of output: 478


Script:

#!/bin/bash
# Description: Verify that `self.token` is initialized before use.

# Search for `self.token` assignments in Python files within the streaming_providers directory.
rg "self\.token\s*=\s*.*" --type=python streaming_providers/

Length of output: 93


Script:

#!/bin/bash
# Description: Verify that `self.token` is initialized before use.

# Search for `self.token` assignments in Python files within the streaming_providers directory.
rg "self\.token\s*=\s*.*" streaming_providers/ --glob "*.py"

Length of output: 126

streaming_providers/alldebrid/utils.py (3)

80-99: Good use of safe access and asynchronous operations in update_ad_cache_status

The function correctly uses asynchronous context managers and safely accesses dictionary keys using the .get() method. This approach prevents potential KeyError exceptions and ensures efficient execution.


124-135: ⚠️ Potential issue

Ensure safe access and handle exceptions in delete_all_torrents_from_ad

Accessing torrents["data"]["magnets"] assumes the keys exist. Additionally, consider handling exceptions during individual torrent deletions so a failure in one does not prevent others from being deleted.

Apply this diff to improve robustness:

 async def delete_all_torrents_from_ad(user_data: UserData, user_ip: str, **kwargs):
     """Deletes all torrents from the AllDebrid account."""
     async with AllDebrid(
         token=user_data.streaming_provider.token, user_ip=user_ip
     ) as ad_client:
         torrents = await ad_client.get_user_torrent_list()
+        magnets = torrents.get("data", {}).get("magnets", [])
         await asyncio.gather(
             *[
                 ad_client.delete_torrent(torrent["id"])
-                for torrent in torrents["data"]["magnets"]
+                for torrent in magnets
             ],
+            return_exceptions=True,
         )

Using return_exceptions=True allows all deletions to be attempted, collecting exceptions without stopping the entire process.

Likely invalid or redundant comment.
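The effect of `return_exceptions=True` is easy to demonstrate in isolation (`delete_torrent` below is a stand-in, not the AllDebrid client method):

```python
import asyncio

async def delete_torrent(torrent_id: int) -> int:
    # Simulate one deletion failing mid-batch.
    if torrent_id == 2:
        raise RuntimeError(f"delete {torrent_id} failed")
    return torrent_id

async def delete_all(ids: list[int]) -> list:
    # With return_exceptions=True, a failure no longer cancels the other
    # deletions; it comes back in the result list as an exception object.
    return await asyncio.gather(
        *(delete_torrent(i) for i in ids), return_exceptions=True
    )

results = asyncio.run(delete_all([0, 1, 2, 3]))
```

Without the flag, the first failure would propagate out of `gather` and the remaining awaitables would be cancelled.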


105-118: ⚠️ Potential issue

Handle missing keys when accessing nested data structures in fetch_downloaded_info_hashes_from_ad

When accessing available_torrents["data"]["magnets"], ensure that both "data" and "magnets" keys exist to prevent KeyError. Use the .get() method or default values to safely access nested data.

Apply this diff to include safe access:

 async def fetch_downloaded_info_hashes_from_ad(
     user_data: UserData, user_ip: str, **kwargs
 ) -> list[str]:
     """Fetches the info_hashes of all torrents downloaded in the AllDebrid account."""
     try:
         async with AllDebrid(
             token=user_data.streaming_provider.token, user_ip=user_ip
         ) as ad_client:
             available_torrents = await ad_client.get_user_torrent_list()
-            if not available_torrents.get("data"):
+            if not available_torrents.get("data") or not available_torrents["data"].get("magnets"):
                 return []
             return [
                 torrent["hash"]
-                for torrent in available_torrents["data"]["magnets"]
+                for torrent in available_torrents.get("data", {}).get("magnets", [])
             ]
     except ProviderException:
         return []

Likely invalid or redundant comment.

Comment on lines 92 to 102

     @staticmethod
-    def _parse_response(response, is_return_none):
+    async def _parse_response(response: httpx.Response, is_return_none: bool):
         if is_return_none:
             return {}
         try:
             return response.json()
-        except JSONDecodeError as error:
+        except ValueError as error:
             raise ProviderException(
                 f"Failed to parse response error: {error}. \nresponse: {response.text}",
                 "api_error.mp4",
             )

🛠️ Refactor suggestion

Use exception chaining in _parse_response

The conversion of _parse_response to a static async method is appropriate. However, the exception handling can be improved.

Use exception chaining to preserve the original exception context when raising a new exception. Apply this diff:

         try:
             return response.json()
         except ValueError as error:
             raise ProviderException(
                 f"Failed to parse response error: {error}. \nresponse: {response.text}",
                 "api_error.mp4",
-            )
+            ) from error

This change will maintain the original exception context, making it easier to debug issues related to response parsing.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@staticmethod
async def _parse_response(response: httpx.Response, is_return_none: bool):
if is_return_none:
return {}
try:
return response.json()
except ValueError as error:
raise ProviderException(
f"Failed to parse response error: {error}. \nresponse: {response.text}",
"api_error.mp4",
) from error
🧰 Tools
🪛 Ruff

99-102: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)
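Why chaining matters is easy to show in isolation: `raise ... from error` records the original failure on `__cause__`, so the traceback prints both exceptions. A minimal sketch with a stand-in exception type:

```python
class ProviderException(Exception):
    """Stand-in for the project's exception type."""

def parse_payload(payload: str) -> int:
    try:
        return int(payload)
    except ValueError as error:
        # "from error" records the original failure as __cause__,
        # so the full cause chain appears in tracebacks.
        raise ProviderException("failed to parse response") from error
```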

Comment on lines 52 to 61

     @staticmethod
     async def _handle_request_error(error: Exception, is_expected_to_fail: bool):
         if isinstance(error, httpx.TimeoutException):
             raise ProviderException("Request timed out.", "torrent_not_downloaded.mp4")
-        except requests.exceptions.ConnectionError:
+        elif isinstance(error, httpx.NetworkError):
             raise ProviderException(
                 "Failed to connect to Debrid service.", "debrid_service_down_error.mp4"
             )
         elif not is_expected_to_fail:
             raise ProviderException(f"Request error: {str(error)}", "api_error.mp4")

⚠️ Potential issue

Use httpx.ConnectError instead of httpx.NetworkError

The conversion of _handle_request_error to a static async method is appropriate. However, there's an issue with the exception handling.

httpx.NetworkError is a broad base class that also covers read, write, and close errors. To report "failed to connect to the Debrid service" accurately, catch the more specific httpx.ConnectError. Apply this diff:

         elif isinstance(error, httpx.TimeoutException):
             raise ProviderException("Request timed out.", "torrent_not_downloaded.mp4")
-        elif isinstance(error, httpx.NetworkError):
+        elif isinstance(error, httpx.ConnectError):
             raise ProviderException(
                 "Failed to connect to Debrid service.", "debrid_service_down_error.mp4"
             )

This change will ensure that the correct exception is caught for connection-related errors.


Suggested change
@staticmethod
async def _handle_request_error(error: Exception, is_expected_to_fail: bool):
if isinstance(error, httpx.TimeoutException):
raise ProviderException("Request timed out.", "torrent_not_downloaded.mp4")
elif isinstance(error, httpx.ConnectError):
raise ProviderException(
"Failed to connect to Debrid service.", "debrid_service_down_error.mp4"
)
elif not is_expected_to_fail:
raise ProviderException(f"Request error: {str(error)}", "api_error.mp4")

Comment on lines 29 to 50

     async def _make_request(
         self,
         method: str,
         url: str,
-        data=None,
-        params=None,
-        is_return_none=False,
-        is_expected_to_fail=False,
-    ) -> dict:
-        response = self._perform_request(method, url, data, params)
-        self._handle_errors(response, is_expected_to_fail)
-        return self._parse_response(response, is_return_none)
-
-    def _perform_request(self, method, url, data, params):
+        data: Optional[dict | str] = None,
+        json: Optional[dict] = None,
+        params: Optional[dict] = None,
+        is_return_none: bool = False,
+        is_expected_to_fail: bool = False,
+    ) -> dict | list:
         try:
-            return requests.request(
-                method,
-                url,
-                params=params,
-                data=data,
-                headers=self.headers,
-                timeout=const.DEBRID_SERVER_TIMEOUT,
+            response = await self.client.request(
+                method, url, data=data, json=json, params=params, headers=self.headers
             )
-        except requests.exceptions.Timeout:
+            response.raise_for_status()
+            return await self._parse_response(response, is_return_none)
+        except httpx.RequestError as error:
+            await self._handle_request_error(error, is_expected_to_fail)
+        except httpx.HTTPStatusError as error:
+            await self._handle_http_error(error, is_expected_to_fail)
+        except Exception as error:
+            await self._handle_request_error(error, is_expected_to_fail)

⚠️ Potential issue

Remove broad Exception catch in _make_request

The conversion of _make_request to an async method and the use of self.client.request are appropriate. The error handling has been improved to use specific httpx exceptions. However, there's still a broad Exception catch at the end of the try-except block.

Remove the broad Exception catch to prevent masking unexpected errors and to make debugging easier. Apply this diff:

         except httpx.RequestError as error:
             await self._handle_request_error(error, is_expected_to_fail)
         except httpx.HTTPStatusError as error:
             await self._handle_http_error(error, is_expected_to_fail)
-        except Exception as error:
-            await self._handle_request_error(error, is_expected_to_fail)

This change will ensure that only expected exceptions are caught and handled, while allowing unexpected errors to propagate for easier debugging.


Suggested change
async def _make_request(
self,
method: str,
url: str,
data: Optional[dict | str] = None,
json: Optional[dict] = None,
params: Optional[dict] = None,
is_return_none: bool = False,
is_expected_to_fail: bool = False,
) -> dict | list:
try:
response = await self.client.request(
method, url, data=data, json=json, params=params, headers=self.headers
)
response.raise_for_status()
return await self._parse_response(response, is_return_none)
except httpx.RequestError as error:
await self._handle_request_error(error, is_expected_to_fail)
except httpx.HTTPStatusError as error:
await self._handle_http_error(error, is_expected_to_fail)

Comment on lines 63 to 83

     async def _handle_http_error(
         self, error: httpx.HTTPStatusError, is_expected_to_fail: bool
     ):
         if error.response.status_code in [502, 503, 504]:
             raise ProviderException(
                 "Debrid service is down.", "debrid_service_down_error.mp4"
             )

         if is_expected_to_fail:
             return

-        self._handle_service_specific_errors(error)
+        await self._handle_service_specific_errors(error)

         if error.response.status_code == 401:
             raise ProviderException("Invalid token", "invalid_token.mp4")

         formatted_traceback = "".join(traceback.format_exception(error))
         raise ProviderException(
             f"API Error {error.response.text} \n{formatted_traceback}",
             "api_error.mp4",
         )

⚠️ Potential issue

Correct usage of traceback.format_exception

The conversion of _handle_http_error to an async method is correct. However, there's an issue with the usage of traceback.format_exception.

On Python versions before 3.10, traceback.format_exception requires the exception type, value, and traceback as separate positional arguments; only Python 3.10+ accepts a bare exception instance. Passing all three arguments explicitly works on every supported version. Apply this diff:

-        formatted_traceback = "".join(traceback.format_exception(error))
+        formatted_traceback = "".join(
+            traceback.format_exception(type(error), error, error.__traceback__)
+        )
         raise ProviderException(
             f"API Error {error.response.text} \n{formatted_traceback}",
             "api_error.mp4",
         )

This change will ensure that the full traceback information is included in the exception message.


Suggested change
async def _handle_http_error(
self, error: httpx.HTTPStatusError, is_expected_to_fail: bool
):
if error.response.status_code in [502, 503, 504]:
raise ProviderException(
"Debrid service is down.", "debrid_service_down_error.mp4"
)
if is_expected_to_fail:
return
await self._handle_service_specific_errors(error)
if error.response.status_code == 401:
raise ProviderException("Invalid token", "invalid_token.mp4")
formatted_traceback = "".join(
traceback.format_exception(type(error), error, error.__traceback__)
)
raise ProviderException(
f"API Error {error.response.text} \n{formatted_traceback}",
"api_error.mp4",
)
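The version-portable three-argument call can be wrapped in a small helper; `format_error` is an illustrative name, not the project's API:

```python
import traceback

def format_error(error: BaseException) -> str:
    # The three-argument form works on every supported Python version;
    # passing just the exception instance requires Python 3.10+.
    return "".join(
        traceback.format_exception(type(error), error, error.__traceback__)
    )

try:
    1 / 0
except ZeroDivisionError as err:
    formatted = format_error(err)
```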

streaming_providers/torbox/client.py Show resolved Hide resolved
streaming_providers/torbox/client.py Outdated Show resolved Hide resolved
streaming_providers/torbox/client.py Show resolved Hide resolved
streaming_providers/torbox/client.py Show resolved Hide resolved
streaming_providers/torbox/client.py Show resolved Hide resolved
Comment on lines +10 to +17

 async def get_torrent_info(ad_client, info_hash):
     torrent_info = await ad_client.get_available_torrent(info_hash)
     if torrent_info and torrent_info["status"] == "Ready":
         return torrent_info
     elif torrent_info and torrent_info["statusCode"] == 7:
-        ad_client.delete_torrent(torrent_info.get("id"))
+        await ad_client.delete_torrent(torrent_info.get("id"))
         raise ProviderException(
-            "Not enough seeders available for parse magnet link",
+            "Not enough seeders available to parse magnet link",

⚠️ Potential issue

Ensure safe access to dictionary keys to prevent KeyError

In get_torrent_info, accessing torrent_info["status"], torrent_info["statusCode"], and torrent_info.get("id") assumes these keys always exist. To prevent potential KeyError, consider using the .get() method or adding checks to ensure these keys are present before accessing them.

Apply this diff to safely access dictionary keys:

 async def get_torrent_info(ad_client, info_hash):
     torrent_info = await ad_client.get_available_torrent(info_hash)
-    if torrent_info and torrent_info["status"] == "Ready":
+    if torrent_info and torrent_info.get("status") == "Ready":
         return torrent_info
-    elif torrent_info and torrent_info["statusCode"] == 7:
+    elif torrent_info and torrent_info.get("statusCode") == 7:
         await ad_client.delete_torrent(torrent_info.get("id"))
         raise ProviderException(
             "Not enough seeders available to parse magnet link",
             "transfer_error.mp4",
         )
     return None

Suggested change
async def get_torrent_info(ad_client, info_hash):
torrent_info = await ad_client.get_available_torrent(info_hash)
if torrent_info and torrent_info.get("status") == "Ready":
return torrent_info
elif torrent_info and torrent_info.get("statusCode") == 7:
await ad_client.delete_torrent(torrent_info.get("id"))
raise ProviderException(
"Not enough seeders available to parse magnet link",
"transfer_error.mp4",
)
return None


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

🧹 Outside diff range and nitpick comments (7)
streaming_providers/parser.py (2)

38-40: Consider consolidating error messages.

The error messages for both cases (no matching file for a specific episode and no matching video file in general) are identical. Consider differentiating these messages to provide more specific information about why no matching file was found.

Here's a suggestion:

if episode:
    raise ProviderException(
        f"No matching video file found for episode {episode}", "no_matching_file.mp4"
    )

# ... (rest of the function)

raise ProviderException(
    "No matching video file found in the torrent", "no_matching_file.mp4"
)

Also applies to: 52-54


77-78: Consider adding error handling for database operations.

The function performs database operations (saving torrent_stream) without any error handling. Consider wrapping these operations in try-except blocks to handle potential database errors gracefully.

Here's a suggestion:

try:
    await torrent_stream.save()
    logging.info(f"Updated {torrent_stream.id} metadata")
except Exception as e:
    logging.error(f"Failed to update {torrent_stream.id} metadata: {str(e)}")
    # Consider re-raising the exception or handling it based on your error management strategy

Also applies to: 115-116

streaming_providers/torbox/utils.py (1)

63-77: LGTM: New async function update_chunk_cache_status implemented correctly

The new async function update_chunk_cache_status is well-implemented with proper error handling and async/await syntax.

Consider making the boolean conversion more explicit for better readability:

stream.cached = stream.id in instant_availability_data

This change makes the intention clearer without changing the functionality.

streaming_providers/offcloud/client.py (3)

26-32: LGTM: Improved error handling with specific exceptions.

The _handle_service_specific_errors method has been properly converted to async and now handles specific HTTP status codes (403 and 429) with appropriate ProviderException raising. The error messages are descriptive and include relevant video file names.

Consider adding a default case to handle unexpected errors:

else:
    raise ProviderException(f"Unexpected error: {error}", "general_error.mp4")

This would ensure all potential errors are caught and handled consistently.


34-49: LGTM: Method successfully converted to async with improved flexibility.

The _make_request method has been properly converted to async and now handles both BASE_URL and DELETE_URL. The parameter updates and superclass method call are correct.

However, the return type hint dict | list is only valid in Python 3.10+. For better compatibility, consider using:

from typing import Union

# ...

async def _make_request(
    # ...
) -> Union[dict, list]:
    # ...

This change ensures compatibility with earlier Python versions while maintaining type hinting.


109-144: LGTM: Method successfully converted to async with improved functionality.

The create_download_link method has been properly converted to async and now handles more complex scenarios, including directory exploration and file selection. The addition of new parameters and the use of background_tasks for updating metadata demonstrate good practices for asynchronous operations.

Consider adding error handling for the case where file_index is out of range:

if file_index >= len(links):
    raise ProviderException("Selected file not found in torrent", "file_not_found.mp4")

This would prevent potential IndexError and provide a more informative error message.

streaming_providers/debridlink/utils.py (1)

22-31: Ensure consistent parameter ordering and type annotations.

In get_video_url_from_debridlink, consider grouping related parameters together, such as placing episode next to filename. Verify that all parameters have accurate type annotations to enhance readability and maintainability.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 922891c and 45f6e57.

📒 Files selected for processing (10)
  • streaming_providers/alldebrid/utils.py (1 hunks)
  • streaming_providers/debrid_client.py (1 hunks)
  • streaming_providers/debridlink/utils.py (1 hunks)
  • streaming_providers/offcloud/client.py (2 hunks)
  • streaming_providers/offcloud/utils.py (1 hunks)
  • streaming_providers/parser.py (1 hunks)
  • streaming_providers/pikpak/utils.py (3 hunks)
  • streaming_providers/realdebrid/utils.py (1 hunks)
  • streaming_providers/routes.py (5 hunks)
  • streaming_providers/torbox/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • streaming_providers/alldebrid/utils.py
🧰 Additional context used
🪛 Ruff
streaming_providers/debrid_client.py

23-26: Use contextlib.suppress(ProviderException) instead of try-except-pass

Replace with contextlib.suppress(ProviderException)

(SIM105)
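The SIM105 rewrite can be sketched with a stand-in exception class; `ProviderException` and `cleanup` below are local placeholders for the example, not the project's imports:

```python
import contextlib

class ProviderException(Exception):
    """Stand-in for the project's ProviderException."""

def cleanup():
    raise ProviderException("already deleted")

# Before (flagged by SIM105): a try/except whose handler only passes
try:
    cleanup()
except ProviderException:
    pass

# After: contextlib.suppress states the same intent in one statement
with contextlib.suppress(ProviderException):
    cleanup()

print("suppressed")
```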


99-102: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)
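A small sketch of the B904 pattern, using a placeholder `ProviderException` and a hypothetical `parse_quota` helper, showing how `raise ... from err` preserves the original error as `__cause__`:

```python
class ProviderException(Exception):
    """Stand-in for the project's ProviderException."""

def parse_quota(text: str) -> int:
    try:
        return int(text)
    except ValueError as err:
        # B904: chaining with `from err` keeps the original ValueError
        # attached as __cause__, so it is not mistaken for a bug raised
        # inside the exception handler itself
        raise ProviderException(f"bad quota value: {text!r}") from err

try:
    parse_quota("n/a")
except ProviderException as exc:
    print(type(exc.__cause__).__name__)  # → ValueError
```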


122-123: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)
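A minimal illustration of the SIM102 collapse; `should_retry` and its parameters are invented for the example:

```python
def should_retry(status_code: int, attempts: int, max_attempts: int = 3) -> bool:
    # Before (flagged by SIM102):
    #     if status_code == 429:
    #         if attempts < max_attempts:
    #             return True
    # After: one condition joined with `and`
    return status_code == 429 and attempts < max_attempts

print(should_retry(429, 1), should_retry(429, 3), should_retry(200, 0))
```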

streaming_providers/realdebrid/utils.py

131-133: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (27)
streaming_providers/offcloud/utils.py (3)

12-63: Excellent async conversion and improved functionality.

The get_video_url_from_offcloud function has been successfully converted to an asynchronous function. The changes include:

  1. Proper use of async/await syntax.
  2. Utilization of async context manager for the OffCloud client.
  3. Addition of stream and background_tasks parameters for better integration.
  4. Improved error handling for the "error" status.

These changes align well with the PR objectives and should contribute to improved performance.


66-79: Async conversion looks good.

The update_oc_cache_status function has been properly converted to an asynchronous function. The changes include:

  1. Correct use of async/await syntax.
  2. Utilization of async context manager for the OffCloud client.
  3. Awaiting the call to get_torrent_instant_availability.

The error handling remains in place, maintaining the function's robustness.


83-96: Async conversion and past issue resolution look good.

The fetch_downloaded_info_hashes_from_oc function has been successfully converted to an asynchronous function. The changes include:

  1. Proper use of async/await syntax.
  2. Utilization of async context manager for the OffCloud client.
  3. Awaiting the call to get_user_torrent_list.

Additionally, the issue mentioned in the past review comment about incorrect access to torrent in the list comprehension has been addressed. The code now correctly accesses the "originalLink" key of the torrent dictionary before splitting.

streaming_providers/parser.py (3)

13-20: LGTM: Improved function signature and new parameter.

The changes to the function signature, including making filename optional and adding the file_size_callback parameter, improve the flexibility of the function. This allows for better handling of various scenarios and provider-specific requirements.


57-117: LGTM: Well-structured async function for updating torrent metadata.

The new update_torrent_streams_metadata function is well-structured and properly utilizes async/await syntax. It handles both single episode and season updates, which provides good flexibility. The use of type hints, logging, and timestamp updates are all positive aspects.


1-117: Overall, good improvements to torrent handling and metadata updating.

The changes in this file significantly enhance the functionality and flexibility of torrent file selection and metadata updating. The select_file_index_from_torrent function now handles more scenarios, and the new update_torrent_streams_metadata function provides a robust way to update torrent stream metadata asynchronously.

Key points:

  1. The transition to async functions where appropriate is a positive step towards improving overall application performance.
  2. The added flexibility in file selection and metadata updating will likely improve the application's ability to handle various torrent structures.
  3. The suggested optimizations and error handling improvements, if implemented, will further enhance the robustness and efficiency of these functions.

Great work on these improvements! The changes align well with the PR's objective of migrating to async and improving performance.

streaming_providers/debrid_client.py (6)

1-12: LGTM: Import statements and class definition are appropriate.

The new imports (asyncio, traceback, abstractmethod, Optional, httpx) are suitable for the async conversion and type hinting. The class definition remains unchanged, which is correct.


12-15: LGTM: Constructor updated correctly for async usage.

The __init__ method has been updated with proper type hinting for the token parameter. The initialization of httpx.AsyncClient with an 18-second timeout is appropriate, considering the comment about Stremio's 20-second timeout.


136-138: LGTM: get_torrent_info method correctly updated.

The get_torrent_info method has been properly converted to an async abstract method. This change is consistent with the overall async refactoring of the class.


1-138: LGTM: Successful async conversion with minor suggestions.

The conversion of the DebridClient class to use async programming is well-executed. The use of httpx.AsyncClient, async context managers, and appropriate error handling for httpx exceptions are all positive changes. The suggestions provided in previous comments will further improve the code quality and robustness.

Great job on this refactoring!



29-50: ⚠️ Potential issue

Remove broad Exception catch in _make_request.

The conversion of _make_request to an async method and the use of self.client.request are appropriate. The error handling has been improved to use specific httpx exceptions. However, there's still a broad Exception catch at the end of the try-except block.

Remove the broad Exception catch to prevent masking unexpected errors and to make debugging easier. Apply this diff:

         except httpx.RequestError as error:
             await self._handle_request_error(error, is_expected_to_fail)
         except httpx.HTTPStatusError as error:
             await self._handle_http_error(error, is_expected_to_fail)
-        except Exception as error:
-            await self._handle_request_error(error, is_expected_to_fail)

This change will ensure that only expected exceptions are caught and handled, while allowing unexpected errors to propagate for easier debugging.

Likely invalid or redundant comment.


63-83: ⚠️ Potential issue

Correct usage of traceback.format_exception.

The conversion of _handle_http_error to an async method is correct. However, there's an issue with the usage of traceback.format_exception.

traceback.format_exception requires the exception type, value, and traceback as arguments. Passing only the exception instance may not produce the desired traceback information. Apply this diff:

-        formatted_traceback = "".join(traceback.format_exception(error))
+        formatted_traceback = "".join(
+            traceback.format_exception(type(error), error, error.__traceback__)
+        )
         raise ProviderException(
             f"API Error {error.response.text} \n{formatted_traceback}",
             "api_error.mp4",
         )

This change will ensure that the full traceback information is included in the exception message.

Likely invalid or redundant comment.

streaming_providers/torbox/utils.py (1)

57-60: LGTM: Type hints added to divide_chunks

The addition of type hints to the divide_chunks function improves code readability and maintainability. The function logic remains correct and unchanged.
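The review only names `divide_chunks` and its new type hints; assuming the conventional chunking implementation, a sketch looks like:

```python
from typing import Any, Iterator

def divide_chunks(data: list[Any], chunk_size: int) -> Iterator[list[Any]]:
    # assumed implementation: yield successive chunk_size-sized slices
    for i in range(0, len(data), chunk_size):
        yield data[i : i + chunk_size]

print(list(divide_chunks(["h1", "h2", "h3", "h4", "h5"], 2)))
# → [['h1', 'h2'], ['h3', 'h4'], ['h5']]
```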

streaming_providers/offcloud/client.py (9)

1-3: LGTM: Necessary imports added for async functionality and type hinting.

The new imports are appropriate for the changes made in the file, supporting the transition to async methods and improved type hinting.


5-5: LGTM: Appropriate HTTP client for async operations.

The import of httpx is suitable for handling asynchronous HTTP requests in this updated client.


7-13: LGTM: Necessary imports added for updated functionality.

The new imports provide the required models, base class, exception handling, and utility functions for the updated OffCloud class.


20-25: LGTM: Methods correctly converted to async.

The initialize_headers and disable_access_token methods have been properly converted to async, maintaining consistency with the base class implementation.


Line range hint 52-67: LGTM: Method successfully converted to async with improved error handling.

The add_magnet_link method has been properly converted to async and now includes comprehensive error handling. The use of ProviderException with descriptive messages and video file names enhances the user experience in case of errors.


69-76: LGTM: Methods successfully converted to async with improved type hinting and error handling.

Both get_user_torrent_list and get_torrent_info methods have been properly converted to async. The addition of return type hints enhances code readability and maintainability. The get_torrent_info method now safely handles potential missing data in the response, improving robustness.


78-93: LGTM: Methods successfully converted to async with improved type hinting.

Both get_torrent_instant_availability and get_available_torrent methods have been properly converted to async with appropriate return type hints. The case-insensitive comparison in get_available_torrent is a good practice.

However, there's a potential KeyError in get_available_torrent when accessing originalLink. Consider using the get method to safely access the key:

if info_hash.casefold() in torrent.get("originalLink", "").casefold()

This change will prevent a KeyError if originalLink is missing from the torrent dictionary.


95-107: LGTM: New async method added and existing method converted to async.

The explore_folder_links method has been properly converted to async. The new static method get_file_sizes efficiently uses asyncio.gather for concurrent requests, which is a good practice for improving performance.

However, there's a potential KeyError when accessing the Content-Length header. Consider adding error handling:

return [
    {**file_data, "size": int(response.headers.get("Content-Length", 0))}
    for file_data, response in zip(files_data, file_sizes)
]

This change will use a default value of 0 if the Content-Length header is missing, preventing a KeyError.
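The concurrent-HEAD-request pattern plus the suggested `headers.get` default can be exercised with a stdlib-only sketch; `fake_head` stands in for `client.head(url)` and is invented for the example:

```python
import asyncio

async def fake_head(url: str) -> dict:
    # stand-in for `await client.head(url)`; some responses lack Content-Length
    await asyncio.sleep(0)
    return {"Content-Length": "2048"} if url.endswith(".mkv") else {}

async def get_file_sizes(files_data: list[dict]) -> list[dict]:
    # fire all HEAD requests concurrently, as the reviewed method does
    headers_list = await asyncio.gather(
        *(fake_head(f["link"]) for f in files_data)
    )
    # headers.get(..., 0) avoids a KeyError when the header is missing
    return [
        {**f, "size": int(headers.get("Content-Length", 0))}
        for f, headers in zip(files_data, headers_list)
    ]

files = [{"link": "a.mkv"}, {"link": "b.txt"}]
print(asyncio.run(get_file_sizes(files)))
```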


146-149: Consider using the DELETE HTTP method for resource deletion.

While the delete_torrent method has been properly converted to async, it's using a GET request for deletion. This is not RESTful and may lead to unintended side effects. Consider using the DELETE HTTP method instead:

async def delete_torrent(self, request_id: str) -> dict:
    return await self._make_request(
        "DELETE", f"/cloud/remove/{request_id}", delete=True
    )

This change aligns the method with RESTful principles and makes the intention of the operation clearer.

streaming_providers/pikpak/utils.py (3)

138-138: LGTM: Improved file selection logic

The removal of the file_index parameter and the modification of the select_file_index_from_torrent function call enhance the flexibility of the file selection process. This change allows for dynamic file index determination based on the provided parameters, which is a more robust approach.


266-266: LGTM: Consistent updates to function calls

The modifications to the find_file_in_folder_tree function calls in retrieve_or_download_file are consistent with the earlier changes to the function signature. These updates ensure that the file retrieval process remains functional and aligned with the new implementation.

Also applies to: 275-275


138-138: Overall: Improved file selection and retrieval process

The changes made to the find_file_in_folder_tree function and its calls in retrieve_or_download_file have successfully simplified the interface and enhanced the flexibility of the file selection process. These modifications contribute to a more robust and maintainable codebase by allowing dynamic file index determination based on provided parameters.

The consistency of these changes across the file demonstrates a thorough approach to refactoring, ensuring that all related parts of the code are updated accordingly.

Also applies to: 266-266, 275-275

streaming_providers/routes.py (2)

Line range hint 89-116: Ensure all get_video_url functions accept background_tasks

The addition of background_tasks to get_or_create_video_url and its inclusion in kwargs means that all functions in mapper.GET_VIDEO_URL_FUNCTIONS must accept background_tasks as a parameter. Passing it via **kwargs could lead to TypeError exceptions at runtime if any of these functions do not expect this parameter.

As a follow-up, please verify that all functions referenced in GET_VIDEO_URL_FUNCTIONS have been updated to accept background_tasks. This will ensure smooth execution without unexpected errors.
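The runtime hazard described above can be reproduced without FastAPI; `updated_provider` and `legacy_provider` are hypothetical stand-ins for entries in `GET_VIDEO_URL_FUNCTIONS`:

```python
def updated_provider(info_hash: str, background_tasks=None) -> str:
    # a provider function already migrated to accept background_tasks
    return f"url-for-{info_hash}"

def legacy_provider(info_hash: str) -> str:
    # a provider function that was never updated
    return f"url-for-{info_hash}"

kwargs = {"info_hash": "abc123", "background_tasks": object()}

print(updated_provider(**kwargs))   # works
try:
    legacy_provider(**kwargs)       # unexpected keyword argument
except TypeError as err:
    print("TypeError:", "background_tasks" in str(err))
```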


235-235: Pass background_tasks consistently to all function calls

When calling get_or_create_video_url, ensure that background_tasks is consistently passed and utilized within downstream functions. This consistency is crucial for the proper handling of asynchronous background tasks.

Please check that all invocations of get_or_create_video_url throughout the codebase include the background_tasks parameter and that it's appropriately used within those functions.

streaming_providers/offcloud/utils.py (1 resolved thread, outdated)

Comment on lines 52 to 61
 @staticmethod
 async def _handle_request_error(error: Exception, is_expected_to_fail: bool):
     if isinstance(error, httpx.TimeoutException):
         raise ProviderException("Request timed out.", "torrent_not_downloaded.mp4")
-    except requests.exceptions.ConnectionError:
+    elif isinstance(error, httpx.TransportError):
         raise ProviderException(
             "Failed to connect to Debrid service.", "debrid_service_down_error.mp4"
         )
     elif not is_expected_to_fail:
         raise ProviderException(f"Request error: {str(error)}", "api_error.mp4")

⚠️ Potential issue

Use httpx.ConnectError for more specific error handling.

The conversion of _handle_request_error to a static async method is appropriate. However, we can improve the specificity of the exception handling.

Replace httpx.TransportError with httpx.ConnectError for more precise error handling. Apply this diff:

         if isinstance(error, httpx.TimeoutException):
             raise ProviderException("Request timed out.", "torrent_not_downloaded.mp4")
-        elif isinstance(error, httpx.TransportError):
+        elif isinstance(error, httpx.ConnectError):
             raise ProviderException(
                 "Failed to connect to Debrid service.", "debrid_service_down_error.mp4"
             )

This change will ensure that the error handling is more specific to connection-related issues.


Comment on lines 85 to 110
 @abstractmethod
 async def _handle_service_specific_errors(self, error: httpx.HTTPStatusError):
     """
     Service specific errors on api requests.
     """
     raise NotImplementedError

 @staticmethod
-def _parse_response(response, is_return_none):
+async def _parse_response(response: httpx.Response, is_return_none: bool):
     if is_return_none:
         return {}
     try:
         return response.json()
-    except JSONDecodeError as error:
+    except ValueError as error:
         raise ProviderException(
             f"Failed to parse response error: {error}. \nresponse: {response.text}",
             "api_error.mp4",
         )

-def initialize_headers(self):
+@abstractmethod
+async def initialize_headers(self):
     raise NotImplementedError

-def disable_access_token(self):
+@abstractmethod
+async def disable_access_token(self):
     raise NotImplementedError

🛠️ Refactor suggestion

Use exception chaining in _parse_response.

The conversion of these methods to async and the use of abstract methods where appropriate are correct. However, we can improve the exception handling in _parse_response.

Use exception chaining to preserve the original exception context when raising a new exception. Apply this diff:

         try:
             return response.json()
         except ValueError as error:
             raise ProviderException(
                 f"Failed to parse response error: {error}. \nresponse: {response.text}",
                 "api_error.mp4",
-            )
+            ) from error

This change will maintain the original exception context, making it easier to debug issues related to response parsing.

streaming_providers/debrid_client.py (2 resolved threads, outdated)
streaming_providers/realdebrid/utils.py (4 resolved threads, 1 outdated)
streaming_providers/routes.py (1 resolved thread)
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

🧹 Outside diff range and nitpick comments (2)
streaming_providers/routes.py (1)

89-90: Update the docstring to include the new parameter.

The function signature has been updated to include background_tasks, but the docstring doesn't reflect this change. Please update the docstring to include information about the new parameter for better documentation.

streaming_providers/offcloud/client.py (1)

34-49: Consider adding type hints for the return value.

The _make_request method is annotated to return dict | list. Consider using Union[dict, list] for better readability and compatibility with older Python versions.

from typing import Union

async def _make_request(
    ...
) -> Union[dict, list]:
    ...
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 45f6e57 and 59c80ed.

📒 Files selected for processing (8)
  • streaming_providers/alldebrid/utils.py (1 hunks)
  • streaming_providers/debridlink/utils.py (1 hunks)
  • streaming_providers/offcloud/client.py (2 hunks)
  • streaming_providers/parser.py (1 hunks)
  • streaming_providers/pikpak/utils.py (3 hunks)
  • streaming_providers/realdebrid/utils.py (1 hunks)
  • streaming_providers/routes.py (6 hunks)
  • streaming_providers/torbox/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • streaming_providers/pikpak/utils.py
🧰 Additional context used
🪛 Ruff
streaming_providers/realdebrid/utils.py

135-137: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (39)
streaming_providers/parser.py (3)

12-20: Approve async conversion and new parameter.

The conversion of the function to async and the addition of the file_size_callback parameter are good improvements. These changes allow for more efficient handling of I/O-bound operations and provide flexibility in file size retrieval.


57-117: Approve new async function for updating torrent streams metadata.

The new update_torrent_streams_metadata function is a well-structured addition that handles both single episode and entire season updates. It's consistent with the transition to asynchronous programming and appropriately manages different scenarios for updating TorrentStreams metadata.


1-117: Summary of changes and their impact.

The modifications in this file significantly contribute to the overall goal of migrating streaming provider functions to async and improving performance. The conversion of select_file_index_from_torrent to async and the addition of the new async function update_torrent_streams_metadata enhance the module's ability to handle I/O-bound operations efficiently.

These changes, combined with the improvements in error handling and the introduction of the file_size_callback, provide a more robust and flexible implementation. The suggested optimizations for file selection logic and season-specific updates, if implemented, would further enhance the efficiency of these functions.

Overall, these changes align well with the PR objectives and represent a positive step towards improving the performance of the streaming providers module.

streaming_providers/alldebrid/utils.py (1)

10-20: ⚠️ Potential issue

Ensure safe access to dictionary keys

While the conversion to async is correct, there's a potential issue with accessing dictionary keys without checks. This could lead to KeyError exceptions if the expected keys are missing.

Consider using the .get() method or adding explicit checks. Apply this diff to improve robustness:

 async def get_torrent_info(ad_client, info_hash):
     torrent_info = await ad_client.get_available_torrent(info_hash)
-    if torrent_info and torrent_info["status"] == "Ready":
+    if torrent_info and torrent_info.get("status") == "Ready":
         return torrent_info
-    elif torrent_info and torrent_info["statusCode"] == 7:
+    elif torrent_info and torrent_info.get("statusCode") == 7:
         await ad_client.delete_torrent(torrent_info.get("id"))
         raise ProviderException(
             "Not enough seeders available to parse magnet link",
             "transfer_error.mp4",
         )
     return None

Likely invalid or redundant comment.

streaming_providers/routes.py (5)

4-4: LGTM: New imports support async functionality.

The addition of Annotated and BackgroundTasks imports aligns with the changes made to support asynchronous operations and improved type hinting.

Also applies to: 12-12


117-117: LGTM: Consistent update to kwargs dictionary.

The addition of background_tasks to the kwargs dictionary is consistent with the function signature update and allows for proper passing of background tasks to provider-specific functions.


195-196: LGTM: Enhanced type hinting and background task support.

The changes to the function signature improve type hinting and add support for background tasks:

  1. The use of Annotated with Depends for user_data enhances type hinting, as mentioned in a previous review comment.
  2. The addition of the background_tasks parameter is consistent with the shift towards asynchronous operations.

These changes are appropriate and align with modern FastAPI practices.


236-236: LGTM: Consistent use of background_tasks.

The addition of background_tasks to the get_or_create_video_url function call is consistent with the earlier changes and ensures proper handling of background tasks throughout the request lifecycle.


Line range hint 1-300: Summary: Successful migration to async and performance improvements.

The changes in this file consistently support the transition to asynchronous operations and improved type hinting. Key improvements include:

  1. Addition of BackgroundTasks support.
  2. Enhanced type hinting with Annotated.
  3. Consistent updates to function signatures and calls.

These changes align well with the PR objectives and should contribute to improved performance in handling streaming provider requests.

streaming_providers/debridlink/utils.py (4)

1-2: Imports updated correctly for asynchronous functionality

The addition of asyncio and Optional imports is appropriate for the asynchronous modifications and type hints.


99-101: ⚠️ Potential issue

Handle exceptions during torrent deletion in delete_all_torrents_from_dl

When using asyncio.gather to delete torrents, if any delete_torrent call raises an exception, it might halt the entire deletion process. To ensure all torrents are attempted for deletion, handle exceptions for each call individually.

Apply this diff to handle exceptions:

-await asyncio.gather(
-    *[dl_client.delete_torrent(torrent["id"]) for torrent in torrents["value"]]
-)
+results = await asyncio.gather(
+    *[dl_client.delete_torrent(torrent["id"]) for torrent in torrents["value"]],
+    return_exceptions=True,
+)
+for torrent, result in zip(torrents["value"], results):
+    if isinstance(result, Exception):
+        logger.error(f"Failed to delete torrent {torrent['id']}: {result}")

Likely invalid or redundant comment.
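The `return_exceptions` behavior suggested above can be verified with a stdlib-only sketch; `delete_torrent` here is a local stand-in, not the client method:

```python
import asyncio

async def delete_torrent(torrent_id: str) -> str:
    if torrent_id == "bad":
        raise RuntimeError("delete failed")
    return torrent_id

async def delete_all(ids: list[str]) -> list[str]:
    # return_exceptions=True keeps one failure from aborting the batch;
    # exceptions come back as result objects instead of propagating
    results = await asyncio.gather(
        *(delete_torrent(i) for i in ids), return_exceptions=True
    )
    return [i for i, r in zip(ids, results) if isinstance(r, Exception)]

print(asyncio.run(delete_all(["a", "bad", "c"])))  # → ['bad']
```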


65-74: ⚠️ Potential issue

Check for 'value' key in instant availability response

In update_dl_cache_status, accessing instant_availability_response["value"] without ensuring the key exists may raise a KeyError. To prevent this, add a check to verify that the 'value' key is present.

Apply this diff to add the key check:

 for stream in streams:
-    stream.cached = bool(
-        stream.id in instant_availability_response["value"]
-    )
+    if "value" in instant_availability_response:
+        stream.cached = stream.id in instant_availability_response["value"]
+    else:
+        stream.cached = False

Likely invalid or redundant comment.


11-14: Ensure all calls to get_download_link are updated

The function get_download_link has been converted to an asynchronous function. Please verify that all calls to this function elsewhere in the codebase have been updated to use await.

Run the following script to find any non-await calls to get_download_link:

streaming_providers/torbox/utils.py (4)

59-62: LGTM!

The function signature has been updated with proper type hints, and the internal logic remains unchanged.


65-79: LGTM!

The new asynchronous function update_chunk_cache_status is implemented correctly with proper error handling.


71-75: The past review comment is still valid and applicable to the current code segment.


110-120: LGTM!

The function has been successfully converted to async with proper type hints and updated to use await for the asynchronous call to select_file_index_from_torrent.

streaming_providers/offcloud/client.py (14)

38-38: Use Union[dict, str] instead of dict | str for Python version compatibility.

The type hint dict | str is only valid in Python 3.10 and later. To maintain compatibility with earlier versions of Python (e.g., Python 3.7, 3.8, 3.9), consider using Union[dict, str] from the typing module.


86-93: Handle potential KeyError when accessing originalLink.

When iterating over available_torrents, it's possible that some torrent dictionaries might not have the originalLink key, which could raise a KeyError. Consider using torrent.get("originalLink", "") to safely access the key.


144-147: Use the correct HTTP method for deleting resources.

The delete_torrent method currently uses the HTTP GET method to perform a delete operation:

return await self._make_request(
    "GET", f"/cloud/remove/{request_id}", delete=True
)

Using GET for deletion is not RESTful and may lead to unintended side effects. Consider using the HTTP DELETE method instead to properly represent the action.
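As a small illustration of making the verb explicit (the endpoint path echoes the snippet above; stdlib urllib is used only to keep the sketch dependency-free, the actual client wiring differs):

```python
from urllib.request import Request

# An explicit DELETE request; urllib defaults to GET/POST, so the
# method must be stated to express the removal semantics.
req = Request("https://offcloud.com/api/cloud/remove/req123", method="DELETE")
```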


1-3: LGTM!

The imports are organized and follow the standard conventions.


20-21: Verify the empty initialize_headers method.

The initialize_headers method is currently empty. Confirm if this is intentional or if any initialization logic needs to be added.


23-24: Verify the empty disable_access_token method.

The disable_access_token method is currently empty. Confirm if this is intentional or if any token disabling logic needs to be added.


26-32: LGTM!

The error handling for specific HTTP status codes is implemented correctly, raising appropriate exceptions with relevant messages.


Line range hint 52-67: LGTM!

The add_magnet_link method is implemented correctly, handling the response data and raising appropriate exceptions when needed.


69-70: LGTM!

The get_user_torrent_list method is implemented correctly, making the necessary request to retrieve the torrent history.


72-76: LGTM!

The get_torrent_info method is implemented correctly, making the request to retrieve torrent status and returning the relevant information.


78-82: LGTM!

The get_torrent_instant_availability method is implemented correctly, making the request to check the cache status of the provided magnet links.


95-96: LGTM!

The explore_folder_links method is implemented correctly, making the request to retrieve the folder links for the given request ID.


98-105: LGTM!

The update_file_sizes method is implemented correctly, making concurrent HEAD requests to retrieve the file sizes and updating the files_data dictionary accordingly.


107-142: Verify the behavior when filename is None.

When filename is None, the code updates the torrent streams metadata in the background using background_tasks.add_task(). Ensure that this behavior aligns with the expected functionality and does not introduce any unintended side effects.

streaming_providers/realdebrid/utils.py (8)

39-51: LGTM!

The logic for updating torrent stream metadata in the background task looks good. It correctly passes all the required parameters to the update_torrent_streams_metadata function.


158-175: LGTM!

The add_new_torrent function looks good. It correctly checks for torrent limit and duplicate torrents before adding a new one. The error handling for failed magnet link addition is also proper.


133-137: ⚠️ Potential issue

Use exception chaining to preserve original exception context

In the except ProviderException as error block, you are raising a new exception without chaining it from the original one. To maintain the original traceback and aid in debugging, use from when re-raising the exception.

Apply this diff to chain exceptions properly:

     except ProviderException as error:
         await rd_client.delete_torrent(torrent_id)
         raise ProviderException(
             f"Failed to start torrent download, {error}", "transfer_error.mp4"
-        )
+        ) from error
🧰 Tools
🪛 Ruff

135-137: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)
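A self-contained sketch of the chaining pattern, using a simplified stand-in for the project's ProviderException:

```python
class ProviderException(Exception):
    def __init__(self, message, video_file):
        super().__init__(message)
        self.video_file = video_file


def start_download():
    try:
        raise ProviderException("magnet rejected", "transfer_error.mp4")
    except ProviderException as error:
        # "from error" stores the original exception in __cause__,
        # so the full traceback chain survives the re-raise.
        raise ProviderException(
            f"Failed to start torrent download, {error}", "transfer_error.mp4"
        ) from error
```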


178-179: 🛠️ Refactor suggestion

Remove unused **kwargs parameter from update_rd_cache_status function

The **kwargs parameter is not used within the update_rd_cache_status function and can be removed to clean up the function signature.

Apply this diff to remove the unused parameter:

 async def update_rd_cache_status(
-    streams: list[TorrentStreams], user_data: UserData, user_ip: str, **kwargs
+    streams: list[TorrentStreams], user_data: UserData, user_ip: str
 ):

201-203: 🛠️ Refactor suggestion

Remove unused **kwargs parameter from fetch_downloaded_info_hashes_from_rd function

The **kwargs parameter is not used within the fetch_downloaded_info_hashes_from_rd function and can be removed to streamline the code.

Apply this diff to remove the unused parameter:

 async def fetch_downloaded_info_hashes_from_rd(
-    user_data: UserData, user_ip: str, **kwargs
+    user_data: UserData, user_ip: str
 ) -> list[str]:

216-216: 🛠️ Refactor suggestion

Remove unused **kwargs parameter from delete_all_watchlist_rd function

The **kwargs parameter is not utilized within the delete_all_watchlist_rd function and can be omitted to simplify the function definition.

Apply this diff to remove the unused parameter:

-async def delete_all_watchlist_rd(user_data: UserData, user_ip: str, **kwargs):
+async def delete_all_watchlist_rd(user_data: UserData, user_ip: str):

77-78: ⚠️ Potential issue

Handle missing mimeType key in response

When checking the mimeType, if the key is missing from the response dictionary, calling startswith on None will raise an AttributeError. Provide a default value to prevent this error.

Apply this diff to handle possible missing mimeType:

-    if not response.get("mimeType").startswith("video"):
+    if not response.get("mimeType", "").startswith("video"):

Likely invalid or redundant comment.


53-73: Verify file selection logic

The logic for selecting the relevant file and handling cases where the file is not selected or the number of selected files doesn't match the number of links looks correct. However, please verify the following:

  • Ensure that selected_file_index is a valid index within the range of torrent_info["files"].
  • Confirm that the "selected" and "id" keys exist in the torrent_info["files"] dictionaries.
  • Verify that len(selected_files) should indeed match len(torrent_info["links"]) for the logic to proceed without re-downloading the torrent.

Generate a script to navigate the codebase and confirm the validity of these assumptions:

Resolved review threads:
  • streaming_providers/parser.py (2)
  • streaming_providers/alldebrid/utils.py (3)
  • streaming_providers/debridlink/utils.py (1, outdated)
  • streaming_providers/torbox/utils.py (4)
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

🧹 Outside diff range and nitpick comments (8)
streaming_providers/premiumize/client.py (2)

Line range hint 82-91: Consider adding error handling for torrent not found.

The method should handle cases where the torrent is not found in the transfer list.

     async def get_torrent_info(self, torrent_id):
         transfer_list = await self.get_transfer_list()
+        if transfer_list.get("status") != "success":
+            raise ProviderException("Failed to get transfer list", "transfer_error.mp4")
         torrent_info = next(
             (
                 torrent
                 for torrent in transfer_list["transfers"]
                 if torrent["id"] == torrent_id
             ),
             None,
         )
+        if torrent_info is None:
+            raise ProviderException(f"Torrent {torrent_id} not found", "not_found.mp4")
         return torrent_info

Line range hint 125-146: Add error handling and optimize head requests.

The current implementation has several potential issues:

  1. No timeout on head requests which could lead to hanging
  2. No error handling for malformed Content-Disposition headers
  3. Sequential head requests could be slow for many torrents

Consider implementing these improvements:

     async def get_available_torrent(self, info_hash: str) -> dict[str, Any] | None:
         torrent_list_response = await self.get_transfer_list()
         if torrent_list_response.get("status") != "success":
             if torrent_list_response.get("message") == "Not logged in.":
                 raise ProviderException(
                     "Premiumize is not logged in.", "invalid_token.mp4"
                 )
             raise ProviderException(
                 "Failed to get torrent info from Premiumize", "transfer_error.mp4"
             )

         available_torrents = torrent_list_response["transfers"]
         for torrent in available_torrents:
             src = torrent.get("src")
+            if not src:
+                continue
+            try:
+                response = await self.client.head(src, timeout=5.0)
+            except Exception:
+                continue
             if response.status_code != 200:
                 continue

             magnet_link_content = response.headers.get("Content-Disposition")
+            if not magnet_link_content:
+                continue
             if info_hash in magnet_link_content:
                 return torrent
streaming_providers/offcloud/client.py (1)

112-147: Add method documentation

This complex method would benefit from docstring documentation explaining its parameters and return value.

Add docstring documentation:

async def create_download_link(
    self,
    request_id: str,
    torrent_info: dict,
    stream: TorrentStreams,
    filename: Optional[str],
    season: Optional[int],
    episode: Optional[int],
    background_tasks,
) -> str:
    """Create a download link for the specified torrent file.

    Args:
        request_id: The OffCloud request ID
        torrent_info: Dictionary containing torrent information
        stream: TorrentStreams object for metadata update
        filename: Optional filename to match
        season: Optional season number for TV shows
        episode: Optional episode number for TV shows
        background_tasks: FastAPI background tasks object

    Returns:
        str: Direct download URL for the selected file
    """
streaming_providers/debrid_client.py (2)

1-19: LGTM! Consider adding type hints for class variables.

The async client initialization and imports look good. The timeout setting is well-documented with respect to Stremio's constraints.

Consider adding type hints for class variables in __init__:

 def __init__(self, token: Optional[str] = None):
-    self.token = token
-    self.is_private_token = False
-    self.headers = {}
+    self.token: Optional[str] = token
+    self.is_private_token: bool = False
+    self.headers: dict = {}

33-55: LGTM! Consider adding return type annotation.

The async request handling implementation looks good, but could benefit from a more specific return type annotation.

Add a more specific return type using TypeAlias:

+from typing import Any, TypeAlias
+
+JsonResponse: TypeAlias = dict[str, Any] | list[Any]

-    ) -> dict | list:
+    ) -> JsonResponse:
streaming_providers/debridlink/utils.py (1)

16-23: Add type annotation for background_tasks parameter

To enhance code readability and facilitate static type checking, consider adding a type annotation for the background_tasks parameter in get_download_link. Since background_tasks is an instance of BackgroundTasks, you can annotate it accordingly.

Apply this diff to add the type annotation:

 async def get_download_link(
     torrent_info: dict,
     stream: TorrentStreams,
-    background_tasks,
+    background_tasks: BackgroundTasks,
     filename: Optional[str],
     file_index: Optional[int],
     episode: Optional[int],
     season: Optional[int],
 ) -> str:
streaming_providers/debridlink/client.py (2)

Line range hint 66-77: Add return type annotation to get_token

Consider adding a return type annotation to the get_token method for improved code clarity and type checking. Example:

async def get_token(self, client_id, device_code) -> dict[str, Any]:

Line range hint 78-88: Add return type annotation to refresh_token

Including a return type annotation for refresh_token enhances code maintainability and type checking. Example:

async def refresh_token(self, client_id, refresh_token) -> dict[str, Any]:
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 59c80ed and 3548b32.

📒 Files selected for processing (9)
  • streaming_providers/alldebrid/utils.py (1 hunks)
  • streaming_providers/debrid_client.py (1 hunks)
  • streaming_providers/debridlink/client.py (7 hunks)
  • streaming_providers/debridlink/utils.py (1 hunks)
  • streaming_providers/offcloud/client.py (2 hunks)
  • streaming_providers/premiumize/client.py (6 hunks)
  • streaming_providers/premiumize/utils.py (1 hunks)
  • streaming_providers/realdebrid/api.py (1 hunks)
  • streaming_providers/realdebrid/client.py (10 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • streaming_providers/realdebrid/api.py
🧰 Additional context used
🪛 Ruff
streaming_providers/debrid_client.py

27-30: Use contextlib.suppress(ProviderException) instead of try-except-pass

Replace with contextlib.suppress(ProviderException)

(SIM105)


107-110: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


130-131: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)

streaming_providers/premiumize/utils.py

81-81: Dictionary comprehension uses static key: "files"

(B035)

streaming_providers/realdebrid/client.py

2-2: typing.override imported but unused

Remove unused import

(F401)


2-2: typing.overload imported but unused

Remove unused import

(F401)

🔇 Additional comments (32)
streaming_providers/premiumize/client.py (5)

20-21: Empty async method needs review.

The previous review comment about removing the async keyword from _handle_service_specific_errors is still valid, as the method contains no async operations.


23-38: LGTM! Well-structured async implementation.

The method properly implements async/await patterns and includes comprehensive type hints. The token handling logic is clear and appropriate.


40-46: Consider improving error handling and async implementation.

The previous review comment about removing the unnecessary async keyword is still valid.

Consider making the error handling more explicit:

     async def initialize_headers(self):
         if self.token:
             access_token = self.decode_token_str(self.token)
             if access_token:
                 self.headers = {"Authorization": f"Bearer {access_token}"}
             else:
+                # Token exists but couldn't be decoded, assume it's a private API key
                 self.is_private_token = True

122-123: Empty async method needs implementation.

The previous review comment about removing the unnecessary async keyword from the empty disable_access_token method is still valid.


147-148: LGTM! Clean async implementation.

The method correctly implements the async pattern for account info retrieval.

streaming_providers/offcloud/client.py (6)

26-32: LGTM! Well-structured error handling

The error handling implementation is clear and provides good user feedback with appropriate status code handling.


34-49: Use appropriate HTTP method for delete operations

The delete operation should use the HTTP DELETE method instead of GET to follow RESTful principles.


Line range hint 52-81: LGTM! Well-implemented magnet link handling

The async implementation is clean with proper error handling and clear response processing.


84-93: Add safe dictionary access

The access to originalLink could raise KeyError if the key doesn't exist.


98-110: Add safe Content-Length parsing

The conversion of Content-Length header to integer needs safer handling.


Line range hint 1-152: Overall implementation looks good!

The conversion to async/await is well-implemented with proper error handling, concurrency patterns, and good separation of concerns. The code structure is clean and maintainable.

streaming_providers/debrid_client.py (3)

21-31: LGTM! Async context manager implementation is correct.

The async context manager methods are properly implemented for resource management.

🧰 Tools
🪛 Ruff

27-30: Use contextlib.suppress(ProviderException) instead of try-except-pass

Replace with contextlib.suppress(ProviderException)

(SIM105)
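Both points — the async context manager protocol and the suggested contextlib.suppress — can be sketched together in a minimal stand-alone client (method bodies are placeholders, not the project's actual logic):

```python
import asyncio
import contextlib


class ProviderException(Exception):
    pass


class DebridClient:
    """Minimal sketch of the async-context-manager pattern discussed above."""

    async def initialize_headers(self):
        self.headers = {}

    async def disable_access_token(self):
        raise ProviderException("token already revoked")

    async def __aenter__(self):
        await self.initialize_headers()
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # contextlib.suppress replaces try/except/pass, per Ruff's SIM105.
        with contextlib.suppress(ProviderException):
            await self.disable_access_token()


async def main():
    async with DebridClient() as client:
        return client.headers


headers = asyncio.run(main())
```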


56-91: LGTM! Error handling methods properly converted to async.

The error handling methods are properly converted to async and maintain good separation of concerns.


93-142: LGTM! Methods properly converted to async with good abstraction.

The abstract methods and wait_for_status implementation are correctly converted to async operations.

🧰 Tools
🪛 Ruff

107-110: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


130-131: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)

streaming_providers/debridlink/utils.py (4)

1-2: LGTM!

The imports of asyncio and Optional are appropriate for asynchronous operations and optional type hints.


4-4: LGTM!

Importing BackgroundTasks from fastapi is necessary for managing background tasks.


10-13: LGTM!

The imported functions select_file_index_from_torrent and update_torrent_streams_metadata are properly utilized in the updated asynchronous functions.


65-68: LGTM!

Proper error handling is implemented for cases where torrent_id is not obtained after adding the magnet link. This improves robustness.

streaming_providers/alldebrid/utils.py (7)

16-23: Ensure safe access to dictionary keys to prevent KeyError

In the get_torrent_info function, accessing torrent_info["status"], torrent_info["statusCode"], and torrent_info.get("id") assumes these keys always exist. To prevent potential KeyError, consider using the .get() method or adding checks to ensure these keys are present before accessing them.


29-30: Handle potential KeyError in nested dictionary access

Accessing response_data["data"]["magnets"][0]["id"] assumes that the keys "data", "magnets" exist and that the list is not empty. This could raise a KeyError or IndexError if the response is not as expected. Consider adding checks or using the .get() method to ensure safe access.


68-70: Ensure safe access to nested dictionary keys

In wait_for_download_and_get_link, accessing torrent_info["links"][file_index]["link"] and response["data"]["link"] assumes that these keys and indices exist, which may lead to KeyError or IndexError if they are missing. To enhance robustness, consider adding checks or using the .get() method with default values.


88-107: Add exception handling to improve robustness in get_video_url_from_alldebrid

The function does not have exception handling for the asynchronous calls to get_torrent_info, add_new_torrent, and wait_for_download_and_get_link. Adding a try-except block will help manage potential errors and enhance the function's robustness.


146-148: Ensure safe access to dictionary keys in list comprehension

Accessing torrent["hash"] assumes that the "hash" key exists in each torrent dictionary. This could lead to a KeyError if the key is missing. Consider using the .get() method or adding a condition to check for the key.


159-164: Handle potential errors and ensure safe dictionary access

In delete_all_torrents_from_ad, accessing torrents["data"]["magnets"] assumes that the keys "data" and "magnets" exist. Additionally, errors during torrent deletion are not handled, which may lead to unhandled exceptions. Consider adding checks for key existence and handling exceptions during the deletion process.


115-129: ⚠️ Potential issue

Avoid suppressing exceptions without handling them

In the update_ad_cache_status function, the except ProviderException: pass block suppresses exceptions without any handling or logging. This can make debugging difficult as errors are silently ignored. Consider handling the exception appropriately or logging the error for better traceability.

Apply this diff to handle the exception:

 except ProviderException as e:
-    pass
+    # Log the exception for traceability instead of silently ignoring it
+    logging.warning("ProviderException in update_ad_cache_status: %s", e)

Likely invalid or redundant comment.

streaming_providers/realdebrid/client.py (4)

41-55: Asynchronous _make_request method updated appropriately

The _make_request method has been successfully converted to an asynchronous method, and the addition of the json parameter is correctly handled. This ensures non-blocking I/O operations and aligns with the asynchronous design.


Line range hint 58-75: Proper asynchronous initialization in initialize_headers

Converting initialize_headers to an asynchronous method is appropriate since it now awaits the get_token method. The logic for handling different token types is correctly implemented.


91-99: Async methods get_device_code and get_token are correctly implemented

The methods have been correctly converted to asynchronous functions, and the API calls are properly awaited. This enhances the responsiveness when interacting with the Real-Debrid API.


Line range hint 183-201: Robust error handling in create_download_link

The method create_download_link appropriately handles different response scenarios, including checking for download in the response and raising exceptions based on error_code. The use of asynchronous calls is consistent and correct.

streaming_providers/premiumize/utils.py (3)

17-21: Clarify the exception message in create_or_get_folder_id.

The error message "Folder already created in meanwhile" is unclear and may confuse users. Consider rephrasing it to better reflect the actual issue, such as "Failed to create folder."


134-141: Add confirmation before deleting all folders to prevent data loss.

The delete_all_torrents_from_pm function deletes all folders from the user's Premiumize account without confirmation, which could lead to accidental data loss. Consider adding a confirmation step or restricting deletion to specific folders.


25-35: Ensure all calls to get_video_url_from_premiumize are updated to asynchronous.

The function get_video_url_from_premiumize has been converted to an asynchronous function. Verify that all calls to this function in the codebase have been updated to use await.

Run the following script to find all calls to get_video_url_from_premiumize that may need updating:

Resolved review threads:
  • streaming_providers/offcloud/client.py (1)
  • streaming_providers/debrid_client.py (1, outdated)
  • streaming_providers/debridlink/utils.py (1)
  • streaming_providers/debridlink/client.py (3, 1 outdated)
  • streaming_providers/realdebrid/client.py (2, 1 outdated)
  • streaming_providers/premiumize/utils.py (2)
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 3548b32 and 4a735bc.

📒 Files selected for processing (1)
  • streaming_providers/pikpak/utils.py (7 hunks)
🧰 Additional context used
🪛 Ruff
streaming_providers/pikpak/utils.py

176-176: Yoda condition detected

Rewrite as str(error) == "Invalid username or password"

(SIM300)


177-179: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


181-184: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


186-189: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (4)
streaming_providers/pikpak/utils.py (4)

139-140: LGTM! Good separation of concerns.

The refactoring improves the code by delegating file selection logic to the specialized select_file_index_from_torrent function.


Line range hint 145-195: Great implementation of async context manager!

The use of @asynccontextmanager ensures proper cleanup of HTTP client resources and provides a cleaner way to manage PikPak API connections.

🧰 Tools
🪛 Ruff

176-176: Yoda condition detected

Rewrite as str(error) == "Invalid username or password"

(SIM300)


177-179: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


181-184: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


186-189: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)
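A minimal illustration of the @asynccontextmanager pattern praised above, unrelated to PikPak's actual API (all names are invented for the sketch):

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def pikpak_session(events: list[str]):
    # Stand-in for creating/closing an HTTP client; the finally block
    # guarantees cleanup even if the body raises.
    events.append("open")
    try:
        yield "session"
    finally:
        events.append("close")


async def main():
    events: list[str] = []
    async with pikpak_session(events) as session:
        events.append(f"use:{session}")
    return events


log = asyncio.run(main())
```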


204-205: Good security practice: Reduced token expiration time.

Reducing the token cache duration from 7 days to 5 minutes significantly improves security by minimizing the window of opportunity for token misuse.


362-395: Well-structured async implementation with proper resource management.

The consistent use of the async context manager across these functions ensures proper resource cleanup. The addition of the _parse_info_hash_from_magnet helper function improves code readability.

Resolved review threads:
  • streaming_providers/pikpak/utils.py (2)
@mhdzumair
Copy link
Owner Author

@coderabbitai close all current reviews and run a fresh review on this PR

@mhdzumair mhdzumair merged commit 48b3f5e into main Nov 1, 2024
@mhdzumair mhdzumair deleted the feat/streaming_providers branch November 1, 2024 10:28
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 27

🧹 Outside diff range and nitpick comments (12)
streaming_providers/seedr/api.py (2)

13-15: Consider adding error handling

While the async implementation is correct, consider adding try-except blocks to handle potential connection errors or API failures gracefully.

 @router.get("/get-device-code")
 async def get_device_code():
-    async with Login() as seedr_login:
-        device_code = await seedr_login.get_device_code()
-        return JSONResponse(content=device_code, headers=const.NO_CACHE_HEADERS)
+    try:
+        async with Login() as seedr_login:
+            device_code = await seedr_login.get_device_code()
+            return JSONResponse(content=device_code, headers=const.NO_CACHE_HEADERS)
+    except Exception as e:
+        return JSONResponse(
+            status_code=500,
+            content={"error": f"Failed to get device code: {str(e)}"},
+            headers=const.NO_CACHE_HEADERS
+        )

20-27: Add error handling and improve response structure

The async implementation is correct, but consider adding error handling and improving the response structure for consistency.

 @router.post("/authorize")
 async def authorize(data: AuthorizeData):
-    async with Login() as seedr_login:
-        response = await seedr_login.authorize(data.device_code)
-        if "access_token" in response:
-            return JSONResponse(
-                content={"token": seedr_login.token}, headers=const.NO_CACHE_HEADERS
-            )
-        else:
-            return JSONResponse(content=response, headers=const.NO_CACHE_HEADERS)
+    try:
+        async with Login() as seedr_login:
+            response = await seedr_login.authorize(data.device_code)
+            if "access_token" in response:
+                return JSONResponse(
+                    content={"status": "success", "token": seedr_login.token},
+                    headers=const.NO_CACHE_HEADERS
+                )
+            return JSONResponse(
+                content={"status": "pending", "details": response},
+                headers=const.NO_CACHE_HEADERS
+            )
+    except Exception as e:
+        return JSONResponse(
+            status_code=500,
+            content={"status": "error", "message": f"Authorization failed: {str(e)}"},
+            headers=const.NO_CACHE_HEADERS
+        )
streaming_providers/torbox/client.py (1)

Line range hint 83-95: Security: Move token from URL parameter to Authorization header

The token is currently passed as a URL parameter, which could expose it in server logs and browser history. Since the Authorization header is already being used in other methods, it would be more secure to rely on that instead.

Apply this diff to remove the token from URL parameters:

     async def create_download_link(self, torrent_id, filename):
         response = await self._make_request(
             "GET",
             "/torrents/requestdl",
-            params={"token": self.token, "torrent_id": torrent_id, "file_id": filename},
+            params={"torrent_id": torrent_id, "file_id": filename},
             is_expected_to_fail=True,
         )
streaming_providers/premiumize/client.py (2)

Line range hint 125-146: Improve error handling in torrent availability check.

The current implementation has several potential issues:

  1. No timeout for HEAD requests
  2. No handling of connection errors
  3. No null check for magnet_link_content

Consider applying this improvement:

    async def get_available_torrent(self, info_hash: str) -> dict[str, Any] | None:
        torrent_list_response = await self.get_transfer_list()
        if torrent_list_response.get("status") != "success":
            if torrent_list_response.get("message") == "Not logged in.":
                raise ProviderException(
                    "Premiumize is not logged in.", "invalid_token.mp4"
                )
            raise ProviderException(
                "Failed to get torrent info from Premiumize", "transfer_error.mp4"
            )

        available_torrents = torrent_list_response["transfers"]
        for torrent in available_torrents:
            src = torrent["src"]
-           async with self.session.head(src) as response:
-               if response.status != 200:
-                   continue
-
-               magnet_link_content = response.headers.get("Content-Disposition")
-               if info_hash in magnet_link_content:
-                   return torrent
+           try:
+               async with self.session.head(src, timeout=10) as response:
+                   if response.status != 200:
+                       continue
+
+                   magnet_link_content = response.headers.get("Content-Disposition")
+                   if magnet_link_content and info_hash in magnet_link_content:
+                       return torrent
+           except Exception:
+               continue

Line range hint 9-148: Consider adding docstrings to the class and methods.

While the implementation is solid, the code would benefit from documentation explaining:

  • The purpose of the class
  • Method parameters and return types
  • Expected exceptions
  • Usage examples
streaming_providers/debridlink/client.py (3)

Line range hint 116-124: Standardize error handling in get_torrent_info

The method implements custom error handling while other methods rely on _handle_service_specific_errors. This inconsistency makes the code harder to maintain.

Consider using the standard error handling approach:

 async def get_torrent_info(self, torrent_id) -> dict[str, Any]:
     response = await self._make_request(
         "GET", f"{self.BASE_URL}/seedbox/list", params={"ids": torrent_id}
     )
-    if response.get("value"):
-        return response.get("value")[0]
-    raise ProviderException(
-        "Failed to get torrent info from Debrid-Link", "transfer_error.mp4"
-    )
+    return response.get("value", [{}])[0]

Line range hint 149-160: Improve get_available_torrent implementation

The method can be simplified and made more consistent with the rest of the class:

  1. Use standard error handling through _handle_service_specific_errors
  2. Use list comprehension for better readability

Consider this implementation:

 async def get_available_torrent(self, info_hash: str) -> Optional[dict[str, Any]]:
     torrent_list_response = await self.get_user_torrent_list()
-    if "error" in torrent_list_response:
-        raise ProviderException(
-            "Failed to get torrent info from Debrid-Link", "transfer_error.mp4"
-        )
-
     available_torrents = torrent_list_response["value"]
-    for torrent in available_torrents:
-        if torrent["hashString"] == info_hash:
-            return torrent
-    return None
+    return next(
+        (torrent for torrent in available_torrents if torrent["hashString"] == info_hash),
+        None
+    )

Line range hint 1-160: Consider standardizing error handling patterns across the class

The class currently uses multiple error handling patterns:

  1. Some methods use _handle_service_specific_errors through _make_request
  2. Some methods implement custom error handling
  3. Some methods mix both approaches

This inconsistency makes the code harder to maintain and debug.

Consider:

  1. Documenting when to use each error handling approach
  2. Standardizing error handling across similar methods
  3. Creating a common error handling strategy for the entire class
🧰 Tools
🪛 Gitleaks

10-10: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

streaming_providers/debrid_client.py (1)

18-19: Consider making timeout configurable.

The timeout is hardcoded to 15s. Consider making it configurable through the constructor to allow different timeout values for different use cases.

-    def __init__(self, token: Optional[str] = None):
+    def __init__(self, token: Optional[str] = None, timeout: int = 15):
         self.token = token
         self.is_private_token = False
         self.headers: Dict[str, str] = {}
         self._session: Optional[aiohttp.ClientSession] = None
-        self._timeout = ClientTimeout(total=15)  # Stremio timeout is 20s
+        self._timeout = ClientTimeout(total=timeout)  # Stremio timeout is 20s
streaming_providers/seedr/utils.py (1)

138-140: Log unexpected transfer results for better error diagnostics

When an unexpected transfer["result"] is encountered, the current code raises a generic ProviderException. To facilitate debugging and error tracking, consider logging the unexpected transfer["result"] value before raising the exception.

Add logging before raising the exception:

     if transfer["result"] in error_messages:
         msg, video = error_messages[transfer["result"]]
         raise ProviderException(msg, video)

+    else:
+        logging.error(f"Unexpected transfer result: {transfer['result']}")

     raise ProviderException(
         "Error transferring magnet link to Seedr", "transfer_error.mp4"
     )
streaming_providers/qbittorrent/utils.py (2)

209-216: Remove or explain commented-out code in set_qbittorrent_preferences.

The else block is commented out. If it's not needed, please remove it to improve code clarity. If you plan to use it later, consider adding a comment explaining why it's commented out.


136-136: Clarify the timeout value in ClientSession.

The timeout=18 is set, but it's unclear if this is intentional. Verify if 18 seconds is appropriate for your use case or consider defining it as a constant with a descriptive name.
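One lightweight option, sketched here with assumed constant names, is to derive the value from the 20s Stremio limit mentioned elsewhere in this PR:

```python
# Stremio aborts requests after ~20 seconds, so stay safely under that limit.
STREMIO_TIMEOUT_SECONDS = 20
QBITTORRENT_REQUEST_TIMEOUT_SECONDS = STREMIO_TIMEOUT_SECONDS - 2  # = 18

# Used as, e.g.:
#   aiohttp.ClientSession(
#       timeout=aiohttp.ClientTimeout(total=QBITTORRENT_REQUEST_TIMEOUT_SECONDS)
#   )
```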

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 4a735bc and a87bab3.

📒 Files selected for processing (12)
  • streaming_providers/alldebrid/client.py (5 hunks)
  • streaming_providers/debrid_client.py (1 hunks)
  • streaming_providers/debridlink/client.py (7 hunks)
  • streaming_providers/offcloud/client.py (2 hunks)
  • streaming_providers/offcloud/utils.py (1 hunks)
  • streaming_providers/premiumize/client.py (6 hunks)
  • streaming_providers/premiumize/utils.py (1 hunks)
  • streaming_providers/qbittorrent/utils.py (7 hunks)
  • streaming_providers/realdebrid/client.py (9 hunks)
  • streaming_providers/seedr/api.py (2 hunks)
  • streaming_providers/seedr/utils.py (1 hunks)
  • streaming_providers/torbox/client.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • streaming_providers/offcloud/utils.py
🧰 Additional context used
🪛 Ruff
streaming_providers/debrid_client.py

36-39: Use contextlib.suppress(ProviderException) instead of try-except-pass

Replace with contextlib.suppress(ProviderException)

(SIM105)


91-93: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


104-104: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


107-110: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


141-144: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


164-165: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)
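A minimal sketch of the three fixes Ruff flags for `streaming_providers/debrid_client.py` (`SIM105`, `B904`, `SIM102`); `ProviderException` is stubbed here and the function bodies are illustrative only:

```python
import contextlib
import json


class ProviderException(Exception):
    """Stand-in for the project's exception type."""

    def __init__(self, message: str, video_file_name: str = ""):
        self.video_file_name = video_file_name
        super().__init__(message)


def close_session(session) -> None:
    # SIM105: contextlib.suppress instead of try/except/pass
    with contextlib.suppress(ProviderException):
        session.close()


def parse_response(raw: str) -> dict:
    # B904: chain the original error so tracebacks keep the root cause
    try:
        return json.loads(raw)
    except ValueError as err:
        raise ProviderException("Failed to parse response", "api_error.mp4") from err


def is_ready(status: str, progress: int) -> bool:
    # SIM102: a single `and` instead of nested if statements
    return status == "downloading" and progress == 100
```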

streaming_providers/premiumize/utils.py

81-81: Dictionary comprehension uses static key: "files"

(B035)

streaming_providers/realdebrid/client.py

2-2: typing.override imported but unused

Remove unused import

(F401)


2-2: typing.overload imported but unused

Remove unused import

(F401)

streaming_providers/seedr/utils.py

150-153: Combine if branches using logical or operator

Combine if branches

(SIM114)

🔇 Additional comments (26)
streaming_providers/seedr/api.py (2)

3-3: LGTM: Appropriate async client import

The migration to aioseedrcc.Login is correct for enabling async operations.


Line range hint 1-27: Verify consistent async patterns across related files

The async migration in this file looks good, but let's verify the consistency of async patterns across related files.

✅ Verification successful

Based on the results, I can now generate the final response:

Async implementation is consistent across the streaming providers

The verification shows:

  • All streaming provider modules consistently use async/await patterns
  • Proper error handling with try/except blocks is implemented across providers
  • Common exception handling patterns using ProviderException are used consistently
  • Async HTTP clients (aiohttp, httpx) are used appropriately with proper error handling
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistent async patterns in related files

# Check for async/await usage in related files
echo "Checking async pattern consistency across streaming providers:"
rg -l "async def" "streaming_providers/"

# Look for potential missed async conversions
echo -e "\nChecking for potential missed async conversions:"
rg -P "def (?!.*async)" "streaming_providers/" | grep -v "test_" | grep "def"

# Check for consistent error handling patterns
echo -e "\nChecking error handling patterns:"
rg "try:|except" "streaming_providers/"

Length of output: 10344

streaming_providers/premiumize/utils.py (4)

11-23: LGTM with a note on error handling

The async conversion looks good with proper await usage and safe dictionary access. However, there's an existing comment about improving the error message clarity that should be addressed.


115-131: LGTM! Well-structured async implementation

The function has good error handling, proper validation of info hash lengths (40 or 32), and safely handles missing content.


134-141: LGTM with a note on safety

The async implementation with asyncio.gather for parallel deletion is efficient. However, there's an existing comment about adding safety checks before deletion that should be addressed.


80-84: ⚠️ Potential issue

Fix dictionary comprehension to handle multiple video files

The current implementation will only keep the last video file due to using a static key in the dictionary comprehension. This could cause issues when multiple video files are present.

Apply this fix to properly collect all video files:

-torrent_info = {
-    "files": file_data
-    for file_data in torrent_folder_data["content"]
-    if "video" in file_data["mime_type"]
-}
+torrent_info = {
+    "files": [
+        file_data
+        for file_data in torrent_folder_data["content"]
+        if "video" in file_data["mime_type"]
+    ]
+}

Likely invalid or redundant comment.


streaming_providers/premiumize/client.py (5)

1-1: LGTM! Good addition of type hints.

The addition of typing imports enhances code clarity and type safety.


19-21: Skip commenting on empty async method.

This was already addressed in past reviews.


40-46: Skip commenting on unnecessary async.

This was already addressed in past reviews.


147-148: LGTM! Clean implementation of account info endpoint.

The method correctly implements the async pattern and delegates to _make_request.


Line range hint 52-111: LGTM! Clean async conversion of API methods.

The methods have been correctly converted to async and properly delegate to _make_request.

streaming_providers/alldebrid/client.py (10)

1-1: Appropriate addition of Any and Optional imports

The inclusion of Any and Optional from the typing module enhances type annotations and improves code clarity.


11-11: Correct use of Optional[str] in constructor

The user_ip parameter is appropriately typed as Optional[str] = None, enhancing type safety and accommodating cases where user_ip may not be provided.


15-16: Confirm necessity of async in initialize_headers

The method initialize_headers does not contain any await expressions. If it's not required to be asynchronous due to inheritance from DebridClient, consider removing the async keyword to avoid unnecessary overhead.

Please verify whether initialize_headers needs to be asynchronous based on the parent class implementation.


18-19: Review the use of async in disable_access_token

The disable_access_token method is asynchronous but contains no await expressions and only a pass statement. Ensure that making it async is necessary, possibly due to overriding an asynchronous method in the parent class.

Please confirm if the parent class requires this method to be asynchronous.


21-22: Assess the async designation for _handle_service_specific_errors

The method _handle_service_specific_errors is marked as asynchronous but does not perform any asynchronous operations. If not mandated by the base class DebridClient, consider removing the async keyword.

Check if the base class expects this method to be asynchronous.


24-41: Proper asynchronous implementation of _make_request

The _make_request method is correctly converted to an asynchronous function and appropriately uses await when calling the parent's _make_request method. The addition of parameters like json and typing enhancements improve flexibility and code readability.


51-53: Consistent and clear error messages in exceptions

The customized error messages in ProviderException provide specific feedback, enhancing debugging and user experience. Including identifiers like "invalid_token.mp4" aids in error handling mechanisms.

Also applies to: 60-62, 64-66, 68-70


76-79: Efficient asynchronous implementation in add_magnet_link

The method add_magnet_link is appropriately converted to async, and the use of await with _make_request ensures non-blocking I/O operations. The request parameters are correctly structured.


98-106: Efficient retrieval in get_available_torrent

The method correctly uses asynchronous calls and iterates over the user's torrent list to find a matching info_hash. Error validation is already included, ensuring robustness.


Line range hint 108-120: Proper error handling in create_download_link

The method handles both successful and failed responses appropriately, raising a ProviderException when necessary. The use of is_expected_to_fail aligns with the potential for the request to fail without raising an exception immediately.

streaming_providers/realdebrid/client.py (3)

2-2: Remove unused imports override and overload

The imports override and overload from the typing module are not used in this file. Removing them will clean up the code and eliminate warnings from static analysis tools.



172-177: Optimize Search in get_available_torrent

The method get_available_torrent fetches the entire torrent list and iterates over it to find the matching info_hash. If the API provides an endpoint to retrieve a torrent by its hash directly, consider using it to improve efficiency.


21-34: ⚠️ Potential issue

Ensure Python 3.10+ compatibility due to use of match statements

The use of match statements in the _handle_service_specific_errors method requires Python 3.10 or newer. Please verify that the project is intended to run on Python 3.10 or higher to avoid syntax errors.

To confirm the project's Python version compatibility, you can check for the required Python version specification in the project configuration files. Here is a shell script to assist with the verification:
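A hypothetical version of that script (the configuration file names are the usual suspects and may not all exist in this repo):

```shell
#!/usr/bin/env bash
# Look for a declared Python version constraint in common config files.
echo "Declared Python requirement:"
grep -hsi "python" pyproject.toml setup.py setup.cfg | grep -i "requires\|python_requires" \
  || echo "  (no requirement found in pyproject.toml / setup.py / setup.cfg)"

# Any module using a match statement shares the 3.10+ requirement.
echo "Modules using match statements (need Python 3.10+):"
grep -rls --include="*.py" -E "^[[:space:]]*match .+:" . || echo "  (none found)"
```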

streaming_providers/qbittorrent/utils.py (2)

277-279: Ensure new parameters are handled correctly in get_video_url_from_qbittorrent.

The addition of indexer_type, filename, and episode parameters requires updates wherever this function is called. Verify that all calls to this function pass the new arguments appropriately.


226-228: Update retrieve_or_download_file calls with new parameters.

Similarly, ensure that all calls to retrieve_or_download_file now include indexer_type, filename, and episode to prevent runtime errors.
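To illustrate the shape of an updated call site: only `indexer_type`, `filename`, and `episode` come from the review; the other parameters and the stub body are assumptions for the sake of a runnable example.

```python
import asyncio
from typing import Optional


async def retrieve_or_download_file(
    client,
    magnet_link: str,
    info_hash: str,
    indexer_type: str,
    filename: Optional[str],
    episode: Optional[int],
) -> str:
    # Stub standing in for the real helper; only the parameter shape matters here.
    return f"{info_hash}:{indexer_type}:{filename}:{episode}"


async def main() -> str:
    # Every call site must now supply the three new arguments explicitly.
    return await retrieve_or_download_file(
        client=None,
        magnet_link="magnet:?xt=urn:btih:abc",
        info_hash="abc",
        indexer_type="series",
        filename=None,
        episode=3,
    )


result = asyncio.run(main())
print(result)
```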

Comment on lines +22 to 37
async def _make_request(
self,
method: str,
url: str,
data: Optional[dict | str] = None,
json: Optional[dict] = None,
params: Optional[dict] = None,
is_return_none: bool = False,
is_expected_to_fail: bool = False,
retry_count: int = 0,
) -> dict:
params = params or {}
url = self.BASE_URL + url
return super()._make_request(
method, url, data, params, is_return_none, is_expected_to_fail
return await super()._make_request(
method, url, data, json, params, is_return_none, is_expected_to_fail
)

⚠️ Potential issue

Remove unused retry_count parameter

The retry_count parameter is defined in the method signature but is never used in the method implementation or passed to the parent class's _make_request call.

Apply this diff to remove the unused parameter:

     async def _make_request(
         self,
         method: str,
         url: str,
         data: Optional[dict | str] = None,
         json: Optional[dict] = None,
         params: Optional[dict] = None,
         is_return_none: bool = False,
-        is_expected_to_fail: bool = False,
-        retry_count: int = 0,
+        is_expected_to_fail: bool = False
     ) -> dict:

Comment on lines +75 to 81
async def get_available_torrent(self, info_hash) -> dict[str, Any] | None:
response = await self.get_user_torrent_list()
torrent_list = response.get("data", [])
for torrent in torrent_list:
if torrent.get("hash", "") == info_hash:
return torrent
return {}

⚠️ Potential issue

Fix return type inconsistency in get_available_torrent

The method's return type hint indicates dict[str, Any] | None, but it returns an empty dict when no torrent is found instead of None.

Apply this diff to align the implementation with the type hint:

     async def get_available_torrent(self, info_hash) -> dict[str, Any] | None:
         response = await self.get_user_torrent_list()
         torrent_list = response.get("data", [])
         for torrent in torrent_list:
             if torrent.get("hash", "") == info_hash:
                 return torrent
-        return {}
+        return None

Comment on lines +25 to +62
async def get_video_url_from_premiumize(
info_hash: str,
magnet_link: str,
user_data: UserData,
torrent_name: str,
filename: str,
filename: Optional[str],
episode: Optional[int],
max_retries=5,
retry_interval=5,
**kwargs,
) -> str:
pm_client = Premiumize(token=user_data.streaming_provider.token)

# Check if the torrent already exists
torrent_info = pm_client.get_available_torrent(info_hash, torrent_name)
if torrent_info:
torrent_id = torrent_info.get("id")
if torrent_info["status"] == "error":
pm_client.delete_torrent(torrent_id)
raise ProviderException(
"Not enough seeders available for parse magnet link",
"transfer_error.mp4",
)
else:
# If torrent doesn't exist, add it
folder_id = create_or_get_folder_id(pm_client, info_hash)
response_data = pm_client.add_magnet_link(magnet_link, folder_id)
if "id" not in response_data:
raise ProviderException(
"Failed to add magnet link to Real-Debrid", "transfer_error.mp4"
)
torrent_id = response_data["id"]

# Wait for file selection and then start torrent download
torrent_info = pm_client.wait_for_status(
torrent_id, "finished", max_retries, retry_interval
)
return get_stream_link(pm_client, torrent_info, filename, info_hash)
async with Premiumize(token=user_data.streaming_provider.token) as pm_client:
# Check if the torrent already exists
torrent_info = await pm_client.get_available_torrent(info_hash)
if torrent_info:
torrent_id = torrent_info.get("id")
if torrent_info["status"] == "error":
await pm_client.delete_torrent(torrent_id)
raise ProviderException(
"Not enough seeders available for parse magnet link",
"transfer_error.mp4",
)
else:
# If torrent doesn't exist, add it
folder_id = await create_or_get_folder_id(pm_client, info_hash)
response_data = await pm_client.add_magnet_link(magnet_link, folder_id)
if "id" not in response_data:
raise ProviderException(
"Failed to add magnet link to Premiumize", "transfer_error.mp4"
)
torrent_id = response_data["id"]

# Wait for file selection and then start torrent download
torrent_info = await pm_client.wait_for_status(
torrent_id, "finished", max_retries, retry_interval
)
return await get_stream_link(
pm_client, torrent_info, filename, info_hash, episode
)

🛠️ Refactor suggestion

Consider extracting retry configuration

The async implementation looks great with proper error handling and status checking. However, the retry configuration could be moved to settings for better maintainability.

Consider extracting the retry configuration:

+from config import settings
+
 async def get_video_url_from_premiumize(
     info_hash: str,
     magnet_link: str,
     user_data: UserData,
     filename: Optional[str],
     episode: Optional[int],
-    max_retries=5,
-    retry_interval=5,
+    max_retries=settings.PREMIUMIZE_MAX_RETRIES,
+    retry_interval=settings.PREMIUMIZE_RETRY_INTERVAL,
     **kwargs,
 ) -> str:

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +94 to 113
async def update_pm_cache_status(
streams: list[TorrentStreams], user_data: UserData, **kwargs
):
"""Updates the cache status of streams based on Premiumize's instant availability."""

try:
pm_client = Premiumize(token=user_data.streaming_provider.token)
instant_availability_data = pm_client.get_torrent_instant_availability(
[stream.id for stream in streams]
)
for stream, cached_status in zip(
streams, instant_availability_data.get("response")
):
stream.cached = cached_status
async with Premiumize(token=user_data.streaming_provider.token) as pm_client:
instant_availability_data = (
await pm_client.get_torrent_instant_availability(
[stream.id for stream in streams]
)
)
for stream, cached_status in zip(
streams, instant_availability_data.get("response")
):
stream.cached = cached_status

except ProviderException:
pass


🛠️ Refactor suggestion

Improve error handling and zip safety

While the async implementation is good, consider these improvements:

  1. Add logging for caught exceptions instead of silently passing
  2. Use strict=True with zip to ensure equal length iterations

Consider these improvements:

+import logging
+
 async def update_pm_cache_status(
     streams: list[TorrentStreams], user_data: UserData, **kwargs
 ):
     try:
         async with Premiumize(token=user_data.streaming_provider.token) as pm_client:
             instant_availability_data = (
                 await pm_client.get_torrent_instant_availability(
                     [stream.id for stream in streams]
                 )
             )
             for stream, cached_status in zip(
                 streams, instant_availability_data.get("response"),
+                strict=True
             ):
                 stream.cached = cached_status
     except ProviderException:
-        pass
+        logging.error("Failed to update PM cache status", exc_info=True)

Comment on lines +22 to +38
async def _make_request(
self,
method: str,
url: str,
data: Optional[dict | str] = None,
json: Optional[dict] = None,
params: Optional[dict] = None,
is_return_none: bool = False,
is_expected_to_fail: bool = False,
retry_count: int = 0,
) -> dict | list:
params = params or {}
if self.is_private_token:
params["apikey"] = self.token
return await super()._make_request(
method, url, data, json, params, is_return_none, is_expected_to_fail
)

⚠️ Potential issue

Remove unused retry_count parameter.

The retry_count parameter is defined but never used in the method implementation.

Apply this diff to remove the unused parameter:

    async def _make_request(
        self,
        method: str,
        url: str,
        data: Optional[dict | str] = None,
        json: Optional[dict] = None,
        params: Optional[dict] = None,
        is_return_none: bool = False,
        is_expected_to_fail: bool = False,
-       retry_count: int = 0,
    ) -> dict | list:

Comment on lines +143 to +157
async def wait_for_completion(
seedr: Seedr, info_hash: str, max_retries: int = 1, retry_interval: int = 1
) -> None:
"""Wait for torrent to complete downloading."""
for _ in range(max_retries):
status, data = await check_torrent_status(seedr, info_hash)

if status == TorrentStatus.COMPLETED:
return
elif status == TorrentStatus.DOWNLOADING and data.get("progress") == "100":
return

await asyncio.sleep(retry_interval)

raise ProviderException("Torrent not downloaded yet.", "torrent_not_downloaded.mp4")

🛠️ Refactor suggestion

Consider increasing retries and implementing exponential backoff in wait_for_completion

The current max_retries value of 1 and retry_interval of 1 second in wait_for_completion may not be sufficient for torrents that take longer to download. To improve reliability, consider increasing the max_retries value and implementing exponential backoff to handle varying download times more effectively.

Here's how you might adjust the function:

 async def wait_for_completion(
-    seedr: Seedr, info_hash: str, max_retries: int = 1, retry_interval: int = 1
+    seedr: Seedr, info_hash: str, max_retries: int = 5, retry_interval: int = 1
 ) -> None:
     """Wait for torrent to complete downloading."""
     for _ in range(max_retries):
         status, data = await check_torrent_status(seedr, info_hash)

-        if status == TorrentStatus.COMPLETED:
-            return
-        elif status == TorrentStatus.DOWNLOADING and data.get("progress") == "100":
+        if status == TorrentStatus.COMPLETED or (
+            status == TorrentStatus.DOWNLOADING and data.get("progress") == "100"
+        ):
             return

         await asyncio.sleep(retry_interval)
+        retry_interval *= 2  # Exponential backoff

     raise ProviderException("Torrent not downloaded yet.", "torrent_not_downloaded.mp4")

Comment on lines +160 to +167
async def clean_names(seedr: Seedr, folder_id: str) -> None:
"""Clean special characters from all file and folder names."""
content = await seedr.list_contents(folder_id)

for file in content["files"]:
clean_name = clean_filename(file["name"])
if file["name"] != clean_name:
await seedr.rename_item(file["folder_file_id"], clean_name, "file")

⚠️ Potential issue

Ensure clean_names function cleans folder names as well

The clean_names function's docstring indicates it cleans special characters from all file and folder names. Currently, it only processes files and does not clean folder names. To fully implement the intended functionality, consider adding logic to clean folder names as well.

Apply the following changes to include folder names:

 async def clean_names(seedr: Seedr, folder_id: str) -> None:
     """Clean special characters from all file and folder names."""
     content = await seedr.list_contents(folder_id)

+    # Clean subfolder names
+    for subfolder in content["folders"]:
+        clean_name = clean_filename(subfolder["name"])
+        if subfolder["name"] != clean_name:
+            await seedr.rename_item(subfolder["id"], clean_name, "folder")

     # Clean file names
     for file in content["files"]:
         clean_name = clean_filename(file["name"])
         if file["name"] != clean_name:
             await seedr.rename_item(file["folder_file_id"], clean_name, "file")

Suggested change
-async def clean_names(seedr: Seedr, folder_id: str) -> None:
-    """Clean special characters from all file and folder names."""
-    content = await seedr.list_contents(folder_id)
-
-    for file in content["files"]:
-        clean_name = clean_filename(file["name"])
-        if file["name"] != clean_name:
-            await seedr.rename_item(file["folder_file_id"], clean_name, "file")
+async def clean_names(seedr: Seedr, folder_id: str) -> None:
+    """Clean special characters from all file and folder names."""
+    content = await seedr.list_contents(folder_id)
+
+    # Clean subfolder names
+    for subfolder in content["folders"]:
+        clean_name = clean_filename(subfolder["name"])
+        if subfolder["name"] != clean_name:
+            await seedr.rename_item(subfolder["id"], clean_name, "folder")
+
+    # Clean file names
+    for file in content["files"]:
+        clean_name = clean_filename(file["name"])
+        if file["name"] != clean_name:
+            await seedr.rename_item(file["folder_file_id"], clean_name, "file")
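The `clean_filename` helper itself sits outside this diff. A plausible sketch of what such a sanitizer might do (hypothetical implementation, not the project's actual code):

```python
import re

def clean_filename(name: str) -> str:
    """Hypothetical sketch: strip characters that commonly break paths/URLs,
    then collapse any runs of whitespace left behind."""
    cleaned = re.sub(r'[<>:"/\\|?*]', "", name)
    return re.sub(r"\s+", " ", cleaned).strip()

print(clean_filename('Movie: The "Sequel" (2024).mkv'))
# → Movie The Sequel (2024).mkv
```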

    return selected_file


@asynccontextmanager
async def initialize_qbittorrent(user_data: UserData) -> APIClient:
-    async with ClientSession(cookie_jar=CookieJar(unsafe=True)) as session:
+    async with ClientSession(cookie_jar=CookieJar(unsafe=True), timeout=18) as session:
⚠️ Potential issue

Security Concern: Avoid using unsafe=True in CookieJar.

Setting unsafe=True in CookieJar allows insecure cookies, which can pose security risks by accepting cookies from non-HTTPS sources. Consider removing unsafe=True to enforce secure cookie handling, unless there's a specific need for it.
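For context on the trade-off: aiohttp's default `CookieJar` rejects cookies set by bare-IP hosts, which is why clients talking to a local qBittorrent instance by IP address sometimes resort to `unsafe=True`. One way to scope the flag is to enable it only when the configured URL actually uses an IP host (hypothetical helper, not project code):

```python
import ipaddress
from urllib.parse import urlsplit

def needs_unsafe_cookie_jar(base_url: str) -> bool:
    """Return True only when the host is a bare IP address, the one case
    where aiohttp's default CookieJar would drop the session cookie."""
    host = urlsplit(base_url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

print(needs_unsafe_cookie_jar("http://192.168.1.10:8080"))   # True
print(needs_unsafe_cookie_jar("https://qbit.example.com"))   # False
```

That keeps secure cookie handling for hostname-based deployments while still supporting LAN setups addressed by IP.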

Comment on lines +236 to +247
     await set_qbittorrent_preferences(qbittorrent, indexer_type)
     await add_magnet(qbittorrent, magnet_link, info_hash, user_data)
     await wait_for_torrent_to_complete(
         qbittorrent, info_hash, play_video_after, max_retries, retry_interval
     )
     selected_file = await find_file_in_folder_tree(
-        webdav, user_data, info_hash, filename
+        webdav, user_data, info_hash, filename, episode
     )
-    if selected_file is None:
+    if not selected_file:
         raise ProviderException(
             "No matching file available for this torrent", "no_matching_file.mp4"
         )
🛠️ Refactor suggestion

Handle exceptions in retrieve_or_download_file more specifically.

When raising a ProviderException, consider providing more context or handling specific exceptions to aid in debugging and user feedback.
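One way to add that context is to fold the identifying details into the message at the raise site (a sketch with a minimal stand-in for the project's `ProviderException`; the real constructor signature is assumed):

```python
class ProviderException(Exception):
    """Minimal stand-in for the project's exception type (signature assumed)."""
    def __init__(self, message, video_file):
        super().__init__(message)
        self.video_file = video_file

def raise_no_matching_file(info_hash, filename):
    # Include the identifiers a user or debugger needs, not just a generic message.
    raise ProviderException(
        f"No matching file for torrent {info_hash!r} (wanted {filename!r})",
        "no_matching_file.mp4",
    )

try:
    raise_no_matching_file("abc123", "episode1.mkv")
except ProviderException as exc:
    caught = exc

print(caught)  # → No matching file for torrent 'abc123' (wanted 'episode1.mkv')
```

The user-facing fallback video stays the same; only the logged message gains the torrent hash and requested filename.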

Comment on lines +127 to +129
selected_file_index = await select_file_index_from_torrent(
    {"files": files}, filename, episode
)
🛠️ Refactor suggestion

Simplify argument passing to select_file_index_from_torrent.

Instead of wrapping files in a dictionary, you can pass the list directly if the function accepts it. This makes the code cleaner:

-selected_file_index = await select_file_index_from_torrent(
-    {"files": files}, filename, episode
-)
+selected_file_index = await select_file_index_from_torrent(
+    files, filename, episode
+)

Committable suggestion skipped: line range outside the PR's diff.
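If `select_file_index_from_torrent` were changed to accept the list directly, a simplified selector might look like this (hypothetical logic for illustration, not the project's implementation):

```python
def select_file_index_from_files(files, filename=None, episode=None):
    """Hypothetical simplified selector: prefer an exact filename match,
    then an episode-number match, else fall back to the largest file."""
    for i, f in enumerate(files):
        if filename and f["name"] == filename:
            return i
    if episode is not None:
        for i, f in enumerate(files):
            if f"E{episode:02d}" in f["name"]:
                return i
    return max(range(len(files)), key=lambda i: files[i]["size"])

files = [
    {"name": "Show.S01E01.mkv", "size": 700},
    {"name": "Show.S01E02.mkv", "size": 800},
]
print(select_file_index_from_files(files, episode=2))  # → 1
```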

coderabbitai bot commented Nov 1, 2024

@mhdzumair: I have closed all current reviews and will run a fresh review on this PR.

✅ Actions performed

Comments resolved.


Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.
