Edited cpu limits for workflows #4181
Conversation
📝 Walkthrough

This pull request modifies the Kubernetes configuration file `k8s/workflows/values-stage.yaml`, raising CPU limits for the workflow components.
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff            @@
##           staging    #4181   +/-  ##
========================================
  Coverage    11.97%   11.97%
========================================
  Files          121      121
  Lines        15877    15877
  Branches       329      329
========================================
  Hits          1902     1902
  Misses       13975    13975
```
Actionable comments posted: 1
🧹 Nitpick comments (1)
k8s/workflows/values-stage.yaml (1)
`23-24`: **Review resource allocation strategy**

The CPU limits have been increased across all components, but there are some concerns with the current configuration:
The request-to-limit ratios are quite high:
- Webserver and Scheduler: 1:8 (125m:1000m)
- Celery and Redis: 1:20 (125m:2500m and 50m:1000m)
Such high ratios could lead to:
- Severe CPU throttling during peak usage
- Unpredictable performance
- Potential pod evictions
- Resource contention issues
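As a quick sanity check, the ratios called out above can be recomputed from the chart's CPU quantities. The helper below is an illustrative sketch (not part of the repository) that converts Kubernetes CPU quantities to millicores and reports each request-to-limit ratio:

```shell
#!/bin/bash
# Sketch only: helper names are illustrative, not from the repo.

# Convert a Kubernetes CPU quantity ("125m", or whole cores like "1") to millicores.
to_millicores() {
  local q="$1"
  if [[ "$q" == *m ]]; then
    echo "${q%m}"
  else
    echo $(( q * 1000 ))
  fi
}

# Print "name: 1:N (request:limit)" for a component.
ratio() {
  local name="$1" request="$2" limit="$3"
  local r l
  r=$(to_millicores "$request")
  l=$(to_millicores "$limit")
  echo "$name: 1:$(( l / r )) ($request:$limit)"
}

# Values described in the review above
ratio webserver 125m 1000m   # webserver: 1:8 (125m:1000m)
ratio scheduler 125m 1000m   # scheduler: 1:8 (125m:1000m)
ratio celery 125m 2500m      # celery: 1:20 (125m:2500m)
ratio redis 50m 1000m        # redis: 1:20 (50m:1000m)
```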
Consider the following recommendations:
- Adjust request-to-limit ratios to be closer to 1:2 or 1:3 for better resource utilization:
```yaml
resources:
  webserver:
    requests:
      cpu: 500m
    limits:
      cpu: 1000m
  scheduler:
    requests:
      cpu: 500m
    limits:
      cpu: 1000m
  celery:
    requests:
      cpu: 1000m
    limits:
      cpu: 2500m
  redis:
    requests:
      cpu: 500m
    limits:
      cpu: 1000m
```
Consider implementing Horizontal Pod Autoscaling (HPA) for individual components to handle varying loads more efficiently.
Monitor actual resource usage in staging to fine-tune these values.
Also applies to: 30-31, 37-38, 44-45
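The HPA recommendation above can be sketched as a minimal manifest, e.g. for the Celery worker. Note this is an assumption-laden example: the Deployment name `workflows-celery`, replica bounds, and utilization target are illustrative, not values taken from this repository:

```yaml
# Sketch only: names and thresholds are assumptions, not repo values.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: workflows-celery
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: workflows-celery   # hypothetical Deployment name
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Since HPA utilization targets are computed against the pod's CPU *requests*, raising the requests as recommended above also makes autoscaling behavior more predictable.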
📜 Review details
📒 Files selected for processing (2)
- `k8s/workflows/values-stage.yaml` (1 hunks)
- `src/workflows/README.md` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- src/workflows/README.md
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (javascript)
🔇 Additional comments (1)
k8s/workflows/values-stage.yaml (1)
`37-38`: **Verify Celery's high CPU requirement**

The Celery worker's CPU limit of 2.5 cores is significantly higher than other components. While this might be justified for computation-heavy tasks, it's important to verify this requirement.
Let's verify the Celery worker's CPU usage patterns:
✅ Verification successful
CPU allocation of 2.5 cores is appropriate for Celery workload
The high CPU allocation is justified due to:
- Computationally intensive tasks (collocation analysis, data exports)
- Use of scientific computing libraries (numpy, pandas, sklearn)
- Regular task scheduling with potential concurrent operations
- Complex data processing for analytics and monitoring
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Search for Celery task definitions to understand CPU requirements

# Look for Celery task definitions and their complexity
rg -g '*.py' -A 5 '@celery.task'

# Look for CPU-intensive operations in task definitions
rg -g '*.py' -l 'numpy|pandas|scipy|sklearn|tensorflow|torch'
```

Length of output: 5289
Script:

```shell
#!/bin/bash
# Search for Celery configuration and task scheduling patterns
rg -g '*.py' -A 5 'CELERY_|celery.conf'

# Look for concurrent task execution settings
rg -g '*.{py,yaml,json}' -l 'concurrency|worker_concurrency|pool_size'
```

Length of output: 1394
```yaml
cpu: 1000m
memory: 2000Mi
```
🛠️ Refactor suggestion
Optimize Redis resource allocation
The Redis instance has a high memory limit (2000Mi) and CPU limit (1000m) for what is typically a lightweight in-memory store.
Consider reducing the limits unless there's a specific requirement:
```yaml
redis:
  requests:
    cpu: 250m
    memory: 500Mi
  limits:
    cpu: 500m
    memory: 1000Mi
```
This should be sufficient for most Redis workloads while maintaining good performance.
Description
[Provide a brief description of the changes made in this PR]
Related Issues
Changes Made
Testing
Affected Services
Endpoints Ready for Testing
API Documentation Updated?
Additional Notes
[Add any additional notes or comments here]
Summary by CodeRabbit

- `webserver`: CPU limit to 1 CPU
- `scheduler`: CPU limit to 1 CPU
- `celery`: CPU limit to 2.5 CPUs
- `redis`: CPU limit at 1 CPU