
Add Prowlarr feed scraping & improve advanced scraping for Prowlarr, Zilean, Torrentio & more bugfixes & improvements #286

Merged: 6 commits, Sep 16, 2024

Changes from all commits
2 changes: 1 addition & 1 deletion Pipfile
@@ -43,7 +43,7 @@ humanize = {git = "git+https://github.com/python-humanize/humanize.git"}
scrapy-playwright = "*"
cinemagoerng = {git = "git+https://github.com/mhdzumair/cinemagoerng.git"}
tqdm = "*"
-parsett = {git = "git+https://github.com/mhdzumair/PTT"}
+parsett = "*"

[dev-packages]
pysocks = "*"
68 changes: 36 additions & 32 deletions Pipfile.lock

Some generated files are not rendered by default.

7 changes: 5 additions & 2 deletions api/main.py
@@ -13,6 +13,7 @@
HTTPException,
Request,
Response,
+    BackgroundTasks,
)
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import RedirectResponse, StreamingResponse
@@ -44,7 +45,7 @@
)

logging.basicConfig(
format="%(levelname)s::%(asctime)s - %(message)s",
format="%(levelname)s::%(asctime)s::%(filename)s::%(lineno)d - %(message)s",
datefmt="%d-%b-%y %H:%M:%S",
level=settings.logging_level,
)
@@ -508,6 +509,7 @@ async def get_streams(
season: int = None,
episode: int = None,
user_data: schemas.UserData = Depends(get_user_data),
+    background_tasks: BackgroundTasks = BackgroundTasks(),
Contributor:
Move the BackgroundTasks call within the function.

Performing the BackgroundTasks call in the argument defaults can lead to unexpected behavior.

Apply this diff to fix the issue:

-    background_tasks: BackgroundTasks = BackgroundTasks(),
+    background_tasks: BackgroundTasks = None,
):
+    if background_tasks is None:
+        background_tasks = BackgroundTasks()
Tools: Ruff
512-512: Do not perform function call BackgroundTasks in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable (B008)
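For context, a minimal sketch of the pattern FastAPI documents for this: declare the parameter with no default and let the framework inject a request-scoped BackgroundTasks instance, which satisfies B008 without any None-check. The route path and task function below are illustrative, not the project's actual code.

```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def record_metrics(video_id: str) -> None:
    # Illustrative task body; it runs after the response is sent.
    print(f"recording metrics for {video_id}")

@app.get("/streams/{video_id}")
async def get_streams(video_id: str, background_tasks: BackgroundTasks):
    # FastAPI injects background_tasks; no default value is needed.
    background_tasks.add_task(record_metrics, video_id)
    return {"video_id": video_id}
```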

):
user_ip = await get_user_public_ip(request, user_data)
user_feeds = []
@@ -549,7 +551,7 @@ async def get_streams(
raise HTTPException(status_code=404, detail="Meta ID not found.")
else:
fetched_streams = await crud.get_movie_streams(
-            user_data, secret_str, video_id, user_ip
+            user_data, secret_str, video_id, user_ip, background_tasks
)
fetched_streams.extend(user_feeds)
elif catalog_type == "series":
@@ -560,6 +562,7 @@
season,
episode,
user_ip,
+            background_tasks,
)
fetched_streams.extend(user_feeds)
elif catalog_type == "events":
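Aside on the logging change earlier in this file: the added %(filename)s and %(lineno)d fields point each record at its emitting source line. A minimal, self-contained sketch (the message and output are illustrative):

```python
import logging

logging.basicConfig(
    format="%(levelname)s::%(asctime)s::%(filename)s::%(lineno)d - %(message)s",
    datefmt="%d-%b-%y %H:%M:%S",
    level=logging.INFO,
)
logging.info("fetched 12 streams")
# Illustrative output:
# INFO::16-Sep-24 10:30:00::example.py::9 - fetched 12 streams
```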
12 changes: 12 additions & 0 deletions api/scheduler.py
@@ -5,6 +5,7 @@
from scrapers.imdb_data import fetch_movie_ids_to_update
from scrapers.trackers import update_torrent_seeders
from scrapers.tv import validate_tv_streams_in_db
+from scrapers.prowlarr_feed import run_prowlarr_feed_scraper


def setup_scheduler(scheduler: AsyncIOScheduler):
@@ -230,3 +231,14 @@ def setup_scheduler(scheduler: AsyncIOScheduler):
"scrape_all": "false",
},
)

+    # Schedule the feed scraper
+    if not settings.disable_prowlarr_feed_scraper:
+        scheduler.add_job(
+            run_prowlarr_feed_scraper.send,
+            CronTrigger.from_crontab(settings.prowlarr_feed_scrape_interval),
+            name="prowlarr_feed_scraper",
+            kwargs={
+                "crontab_expression": settings.prowlarr_feed_scrape_interval,
+            },
+        )
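For reference, a minimal sketch of the scheduling pattern used above, with a plain function standing in for the dramatiq actor (it is the actor's .send method that actually gets scheduled in the diff). One caveat worth flagging: CronTrigger.from_crontab expects a five-field crontab string like the prowlarr_feed_scraper_crontab default defined in db/config.py below, whereas prowlarr_feed_scrape_interval is declared there as an int. All names in this sketch are illustrative.

```python
import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.cron import CronTrigger

def run_feed_scraper(crontab_expression: str) -> None:
    # Stand-in for run_prowlarr_feed_scraper.send; the body is illustrative.
    print(f"scraping feed (scheduled via {crontab_expression!r})")

async def main() -> None:
    scheduler = AsyncIOScheduler()
    crontab = "0 */3 * * *"  # every 3 hours, mirroring the config default
    scheduler.add_job(
        run_feed_scraper,
        CronTrigger.from_crontab(crontab),
        name="prowlarr_feed_scraper",
        kwargs={"crontab_expression": crontab},
    )
    scheduler.start()
    await asyncio.sleep(3600)  # keep the event loop alive so jobs can fire

asyncio.run(main())
```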
2 changes: 1 addition & 1 deletion api/task.py
@@ -7,7 +7,7 @@
# import background actors
# noqa: F401
from mediafusion_scrapy import task
-from scrapers import tv, imdb_data, trackers, helpers, prowlarr
+from scrapers import tv, imdb_data, trackers, helpers, prowlarr, prowlarr_feed
Contributor:
Remove unused imports.

The static analysis tool correctly points out that several imports, including the newly added prowlarr_feed import, are unused in this file.

Unless there are plans to use these imports in the near future, it's best to remove them to keep the codebase clean and maintainable.

Apply this diff to remove the unused imports:

-from scrapers import tv, imdb_data, trackers, helpers, prowlarr, prowlarr_feed
+from scrapers import prowlarr_feed

If you intend to use these imports in upcoming commits, feel free to ignore this comment.

Tools: Ruff
10-10: scrapers.tv imported but unused. Remove unused import (F401)
10-10: scrapers.imdb_data imported but unused. Remove unused import (F401)
10-10: scrapers.trackers imported but unused. Remove unused import (F401)
10-10: scrapers.helpers imported but unused. Remove unused import (F401)
10-10: scrapers.prowlarr imported but unused. Remove unused import (F401)
10-10: scrapers.prowlarr_feed imported but unused. Remove unused import (F401)
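Worth noting before removing anything: the file's own "# import background actors # noqa: F401" comment suggests these imports exist for their side effect of registering dramatiq actors with the broker, so dropping them could silently unregister scheduled tasks. A hedged sketch of that pattern follows; the module layout and actor body are assumptions for illustration, not the project's actual code.

```python
# scrapers/prowlarr_feed.py (assumed shape): importing this module
# registers the actor with the dramatiq broker as a side effect.
import dramatiq

@dramatiq.actor
def run_prowlarr_feed_scraper(crontab_expression: str = "") -> None:
    # Placeholder body; the real actor scrapes the Prowlarr feed.
    print(f"scraping Prowlarr feed ({crontab_expression})")

# api/task.py: the import is kept only for the registration side effect,
# so the unused-import warning is suppressed rather than "fixed":
# from scrapers import prowlarr_feed  # noqa: F401
```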

from utils import validation_helper


7 changes: 6 additions & 1 deletion db/config.py
@@ -27,7 +27,9 @@ class Settings(BaseSettings):
logging_level: str = "INFO"
git_rev: str = "stable"
addon_name: str = "MediaFusion"
logo_url: str = "https://raw.githubusercontent.com/mhdzumair/MediaFusion/main/resources/images/mediafusion_logo.png"
logo_url: str = (
"https://raw.githubusercontent.com/mhdzumair/MediaFusion/main/resources/images/mediafusion_logo.png"
)

# Feature toggles
is_scrap_from_torrentio: bool = False
@@ -82,6 +84,9 @@ class Settings(BaseSettings):
disable_wwe_tgx_scheduler: bool = False
ufc_tgx_scheduler_crontab: str = "30 */3 * * *"
disable_ufc_tgx_scheduler: bool = False
+    prowlarr_feed_scrape_interval: int = 3
+    prowlarr_feed_scraper_crontab: str = "0 */3 * * *"
+    disable_prowlarr_feed_scraper: bool = False

# Time-related settings
torrentio_search_interval_days: int = 3
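Since Settings extends pydantic's BaseSettings, the new toggles can presumably be overridden through environment variables without code changes; a hedged sketch, assuming pydantic's default case-insensitive env mapping and the module path shown in the diff:

```python
import os

# Illustrative overrides; BaseSettings maps env vars to fields by name.
os.environ["DISABLE_PROWLARR_FEED_SCRAPER"] = "true"
os.environ["PROWLARR_FEED_SCRAPER_CRONTAB"] = "0 */6 * * *"

from db.config import Settings  # assumes the module shown in the diff

settings = Settings()
print(settings.disable_prowlarr_feed_scraper)  # True
print(settings.prowlarr_feed_scraper_crontab)  # "0 */6 * * *"
```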