Intermittent high CPU use #11217
Comments
Hello, I have the same problem with my Docker image https://github.com/jee-r/docker-medusa. I don't know what produces this CPU consumption.
@johnvick
Same problem with my Docker container. High CPU use alert, and it's the Medusa container. I can still access Medusa when this happens. A restart of the container fixed it until the next morning. Seems to be something happening overnight.
I'm not sure how to work out which is the offending provider. The log below is from overnight; similar entries recur repeatedly. AniDB is mentioned.
2023-05-14 00:24:25 WARNING GENERICQUEUESCHEDULER-UPDATE-RECOMMENDED-ANILIST :: [e4870f8] Could not parse AniDB show, with exception: Traceback (most recent call last):
Ok, so it's not just me!
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
If there's any information I can provide to solve this, let me know.
I'm also suddenly having issues adding new shows; it may be unrelated.
Try deleting the
Thanks, I tried this a few hours ago and also cleaned out the old db files; all good so far.
Have done, thanks. I'll report back; it has been firing up around 5 PM NZ time.
No luck I'm afraid. Around 5:40 pm the fans started up, with Medusa using 100% CPU on 2 of 6 cores. Restarting the container made everything quiet again. Nothing unusual in the logs, but I just realised I did not have debug logs enabled, which I have now done.
Same here: 100% on 2 cores, web interface unresponsive. Didn't have any luck purging the cache files, nor restoring from a month-old ZFS snapshot.
Same issue here, the GUI becomes unresponsive.
Mine has consistently been starting around 3:30am CDT (America/Chicago). I tried the above cache purge and settings changes, and this did not help. I am not seeing any errors or anything unusual in the logs around this time, just standard items from SHOWUPDATER, POSTPROCESSOR, and SEARCHQUEUE-DAILY-SEARCH doing their normal routine things. I have only 4 providers enabled. Is there commonality with anyone else seeing this? I exclusively run torrents, handled by Transmission. Medusa has otherwise been working as expected; episodes are still being downloaded and processed. This started ~3:30am 5/12/2023 and has happened every night around the same time since then.
@GldRush98
That is not the cause, as we are not talking about a CPU "spike" until the search is done. FWIW, there are currently all of 3 episodes in the backlog, and all are recent episodes that just haven't downloaded yet and will likely clear out in the next few days. There is nothing unusual or unexpected in the backlog.
Got it. In that case you should be able to post the debug logs from the moment the issue starts until the reboot; that would probably help. Please, no more +1s without debug logs.
I wonder, but I guess this could be related to #11218.
I think you may be on to something there. I just checked my "Server Status" page and my "Show Update" process start time says "03:12:00", about 20-25 minutes before I get the maxed-out CPU alerts. This would make sense: if one show is causing a loop somewhere, it probably takes a while for the loop to repeat enough times that it starts really chewing up the CPU. edit: I believe I triggered the update for all shows via Mass Update. Will report if I see the issue pop up soon. I have debug logging turned on, but I'm concerned that I don't see any debug messages in my log. Not sure...
You could hit "Force Full Update" on a show's page. And have you changed the "Logging level" to "Debug" on the View log page?
Yup, I have hit the bug. I now believe this is the same as #11218 referenced above. edit: The debug log started working after I restarted the container again, and confirmed:
etc....
The debug log gives the below repeatedly; the show is BBC Documentaries.
2023-05-16 17:56:16 DEBUG SHOWUPDATER :: [e4870f8] User-Agent: Medusa/1.0.13 (Linux; 6.2.11-2-pve; 64f1362c-f3a7-11ed-a36c-0242ac16000b)
Around 5am my time in Australia it kicked in for me, 100% CPU, just totally thrashed. I also had issues adding a new show; unsure if it was just that show, but it failed.
Yeah, don't use TVDB any more, pick a different indexer.
Funny you should mention that..........
Which indexer would people recommend to use instead of TVDB with its broken API?
Any should be fine.
You need to remove the show first before being able to add it with a different indexer. Or you could use Manage > Change Indexer as well.
I did the bulk change tool, and then all my archived shows were listed as WANTED and it started trying to grab them ALL again.
Apologies, and thank you.
@medariox Can you provide an official path forward on this? Are you moving to the new API, or do we need to move off TVDB, and if so, how do we do it without causing a Wanted status on all shows and episodes? Bulk indexer change with select-all also doesn't work.
I am getting serious memory leaks lately on 2 different systems, like 10 GB+ usage. Do you believe this could be this issue as well? I am using TVDB.
Is there a way to change indexers without changing everything to Wanted? I have 140 shows on my server; this would be painful.
Sorry I haven't been keeping up. Can I clarify: is this 100% the fault of using TVDB? If I stop using it entirely, is the problem gone?
Should be fixed with the new version (1.0.14): https://github.com/pymedusa/Medusa/releases/tag/v1.0.14
It depends on what is selected as your default show status. Change the default show status to Archived and try with one show. When you are confident it works as you want it to, you can migrate more shows.
The memory leak or the use of TVDB? If I need to switch them all, so be it.
[This is a cross post from #11218; sorry.] I wanted to confirm that the changes to indexers/tvdbv2/api.py (both in 1.0.14 and 1.0.15, the latter being the latest) fixed this for me. I had been seeing saturated CPU, plus a memory leak that eventually killed the entire Linux machine it was running on (swapping, thrashing, and eventual death). Thanks!
P.S.: I'm gradually moving all shows from thetvdb's old/broken API to tvmaze, which will take some time with >600 shows that I want to migrate manually and observe, to ensure nothing goes awry with the more obscure shows. Probably ought to cull some dead shows at the same time ;-)
Also, thanks to the team for the quick release updates once the root cause was identified and fixes applied in indexers/tvdbv2/api.py. I actually hot-patched that single file locally to test it before moving wholesale to the new release, 1.0.15. Much appreciated.
Could we add a warning emitted to the log by tvdbv2/api.py describing malformed show data received from the thetvdb API (on a show-by-show basis)? This would help identify problematic shows, so moving shows from the thetvdb API to the tvmaze API could be prioritized. I could file an enhancement request, but I don't want to burden anyone if this is seen as a stupid idea. I know I hate warnings that I consider spurious.
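To illustrate the idea, here is a minimal sketch of what such a per-show warning could look like. The function name, field names, and logger setup are made up for illustration and are not the actual tvdbv2/api.py internals:

```python
# Hypothetical sketch only: parse_show_data and the field names are illustrative,
# not the real tvdbv2/api.py code.
import logging

log = logging.getLogger(__name__)


def parse_show_data(raw, show_id):
    """Return the parsed show payload, or None if the API sent malformed data."""
    try:
        return {
            'seriesName': raw['seriesName'],
            'firstAired': raw.get('firstAired'),
        }
    except (KeyError, TypeError) as error:
        # One warning per show makes it easy to grep the log and decide
        # which shows to migrate to another indexer first.
        log.warning(
            'Malformed show data from TheTVDB for show id %s: %r. '
            'Consider moving this show to a different indexer.',
            show_id, error,
        )
        return None
```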
I'd be more interested to know why we aren't simply moving to v4 of their API. Is there some kind of cost involved?
Yes, TVDB is moving to a payment-based API access model.
A couple of years ago there was a huge discussion over at #8738 about thetvdb planning to monetize their latest API, and what the response here might be. I'll append my comments there so as not to pollute this issue. TL;DR over there:
The situation with TVDB is getting worse by the day. You are strongly advised to switch to a different indexer ASAP! Don't wait any longer.
Do we have a best-case recommended one to switch to?
tvmaze is recommended. The API works better, even if some of the content has issues. Some shows don't have a proper air time. See #11300.
Medusa Info: Branch: master
Commit: e4870f8
Version: 1.0.13
Database: 44.19
Python Version: 3.10.11 (main, Apr 6 2023, 01:16:54) [GCC 12.2.1 20220924]
SSL Version: OpenSSL 3.0.8 7 Feb 2023
OS: Linux-6.2.11-2-pve-x86_64-with
Locale: en_US.UTF-8
Timezone: NZST
User: abc
Program Folder: /app/medusa
Config File: /config/config.ini
Database File: /config/main.db
Cache Folder: /config/cache
Log Folder: /config/Logs
Arguments:
--nolaunch --datadir /config
Runs in Docker: Yes
A recent change is that every so often Medusa uses 100% of two CPUs on the six-core Proxmox Ubuntu VM it is running on. The fan noise alerts me to this. At these times the web interface cannot be accessed. Restarting the container fixes it for a while.
The only abnormality in the logs is:
2023-05-13 17:47:29 INFO SEARCHQUEUE-DAILY-SEARCH :: [e4870f8] Using daily search providers
/app/medusa/ext/bs4/builder/__init__.py:545: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument features="xml" into the BeautifulSoup constructor.
  warnings.warn(
Any clues to fix this? Thanks.
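As an aside, the fix the XMLParsedAsHTMLWarning itself suggests looks roughly like this. This is a minimal standalone sketch, not code from Medusa, and the XML string is made up:

```python
from bs4 import BeautifulSoup

xml_data = "<rss><channel><title>Example feed</title></channel></rss>"

# Parsing XML with the default HTML parser triggers XMLParsedAsHTMLWarning.
# Passing features="xml" selects an XML parser instead (requires the lxml package).
soup = BeautifulSoup(xml_data, features="xml")
print(soup.find("title").text)  # -> Example feed
```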