ultralytics 8.2.50 new Streamlit live inference Solution (ultralytics#14210)

Signed-off-by: Glenn Jocher <[email protected]>
Co-authored-by: Muhammad Rizwan Munawar <[email protected]>
Co-authored-by: UltralyticsAssistant <[email protected]>
Co-authored-by: RizwanMunawar <[email protected]>
Co-authored-by: Kayzwer <[email protected]>
5 people authored Jul 5, 2024
1 parent 5f0fd71 commit 26a664f
Showing 20 changed files with 350 additions and 22 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -11,6 +11,7 @@
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLOv8 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/ultralytics"><img src="https://img.shields.io/docker/pulls/ultralytics/ultralytics?logo=docker" alt="Ultralytics Docker Pulls"></a>
<a href="https://ultralytics.com/discord"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
<a href="https://community.ultralytics.com"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
<br>
<a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run Ultralytics on Gradient"></a>
<a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open Ultralytics In Colab"></a>
2 changes: 1 addition & 1 deletion docs/en/guides/defining-project-goals.md
@@ -175,4 +175,4 @@ Common challenges include:
- Insufficient understanding of technical constraints.
- Underestimating data requirements.

Address these challenges through thorough initial research, clear communication with stakeholders, and iterative refinement of the problem statement and objectives. Learn more about these challenges [here](#common-challenges).
Address these challenges through thorough initial research, clear communication with stakeholders, and iterative refinement of the problem statement and objectives. Learn more about these challenges in our [Computer Vision Project guide](steps-of-a-cv-project.md).
2 changes: 1 addition & 1 deletion docs/en/guides/model-training-tips.md
@@ -179,4 +179,4 @@ Using pre-trained weights can significantly reduce training times and improve mo

### What is the recommended number of epochs for training a model, and how do I set this in YOLOv8?

The number of epochs refers to the complete passes through the training dataset during model training. A typical starting point is 300 epochs. If your model overfits early, you can reduce the number. Alternatively, if overfitting isnt observed, you might extend training to 600, 1200, or more epochs. To set this in YOLOv8, use the `epochs` parameter in your training script. For additional advice on determining the ideal number of epochs, refer to this section on [number of epochs](#the-number-of-epochs-to-train-for).
The number of epochs refers to the complete passes through the training dataset during model training. A typical starting point is 300 epochs. If your model overfits early, you can reduce the number. Alternatively, if overfitting isn't observed, you might extend training to 600, 1200, or more epochs. To set this in YOLOv8, use the `epochs` parameter in your training script. For additional advice on determining the ideal number of epochs, refer to this section on [number of epochs](#the-number-of-epochs-to-train-for).
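
For illustration (a minimal sketch, not part of this commit), `epochs` can be set through the Ultralytics Python API, assuming the small bundled `coco8.yaml` demo dataset:

```python
from ultralytics import YOLO

# Start around 300 epochs and adjust up or down based on validation curves.
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=300, imgsz=640)
```
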
4 changes: 2 additions & 2 deletions docs/en/guides/steps-of-a-cv-project.md
@@ -10,7 +10,7 @@ keywords: Computer Vision, AI, Object Detection, Image Classification, Instance

Computer vision is a subfield of artificial intelligence (AI) that helps computers see and understand the world like humans do. It processes and analyzes images or videos to extract information, recognize patterns, and make decisions based on that data.

Computer vision techniques like [object detection](../tasks/detect.md), [image classification](../tasks/classify.md), and [instance segmentation](../tasks/segment.md) can be applied across various industries, from [autonomous driving](https://www.ultralytics.com/solutions/ai-in-self-driving) to [medical imaging](https://www.ultralytics.com/solutions/ai-in-healthcare), to gain valuable insights.
Computer vision techniques like [object detection](../tasks/detect.md), [image classification](../tasks/classify.md), and [instance segmentation](../tasks/segment.md) can be applied across various industries, from [autonomous driving](https://www.ultralytics.com/solutions/ai-in-self-driving) to [medical imaging](https://www.ultralytics.com/solutions/ai-in-healthcare) to gain valuable insights.

<p align="center">
<img width="100%" src="https://media.licdn.com/dms/image/D4D12AQGf61lmNOm3xA/article-cover_image-shrink_720_1280/0/1656513646049?e=1722470400&v=beta&t=23Rqohhxfie38U5syPeL2XepV2QZe6_HSSC-4rAAvt4" alt="Overview of computer vision techniques">
@@ -227,4 +227,4 @@ For more information, check out the [model export guide](../modes/export.md).

### What are the best practices for monitoring and maintaining a deployed computer vision model?

Continuous monitoring and maintenance are essential for a model's long-term success. Implement tools for tracking Key Performance Indicators (KPIs) and detecting anomalies. Regularly retrain the model with updated data to counteract model drift. Document the entire process, including model architecture, hyperparameters, and changes, to ensure reproducibility and ease of future updates. Learn more in our [monitoring and maintenance guide](#monitoring-maintenance-and-documentation).
Continuous monitoring and maintenance are essential for a model's long-term success. Implement tools for tracking Key Performance Indicators (KPIs) and detecting anomalies. Regularly retrain the model with updated data to counteract model drift. Document the entire process, including model architecture, hyperparameters, and changes, to ensure reproducibility and ease of future updates. Learn more in our [monitoring and maintenance guide](#step-8-monitoring-maintenance-and-documentation).
138 changes: 138 additions & 0 deletions docs/en/guides/streamlit-live-inference.md
@@ -0,0 +1,138 @@
---
comments: true
description: Learn how to set up a real-time object detection application using Streamlit and Ultralytics YOLOv8. Follow this step-by-step guide to implement webcam-based object detection.
keywords: Streamlit, YOLOv8, Real-time Object Detection, Streamlit Application, YOLOv8 Streamlit Tutorial, Webcam Object Detection
---

# Live Inference with Streamlit Application using Ultralytics YOLOv8

## Introduction

Streamlit makes it simple to build and deploy interactive web applications. Combining this with Ultralytics YOLOv8 allows for real-time object detection and analysis directly in your browser. YOLOv8's high accuracy and speed ensure seamless performance for live video streams, making it ideal for applications in security, retail, and beyond.

| Aquaculture | Animal husbandry |
| :---------------------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![Fish Detection using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/ea6d7ece-cded-4db7-b810-1f8433df2c96) | ![Animals Detection using Ultralytics YOLOv8](https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/2e1f4781-60ab-4e72-b3e4-726c10cd223c) |
| Fish Detection using Ultralytics YOLOv8 | Animal Detection using Ultralytics YOLOv8 |

## Advantages of Live Inference

- **Seamless Real-Time Object Detection**: Streamlit combined with YOLOv8 enables real-time object detection directly from your webcam feed. This allows for immediate analysis and insights, making it ideal for applications requiring instant feedback.
- **User-Friendly Deployment**: Streamlit's interactive interface makes it easy to deploy and use the application without extensive technical knowledge. Users can start live inference with a simple click, enhancing accessibility and usability.
- **Efficient Resource Utilization**: YOLOv8's optimized algorithms ensure high-speed processing with minimal computational resources. This efficiency allows for smooth and reliable webcam inference even on standard hardware, making advanced computer vision accessible to a wider audience.

## Streamlit Application Code

!!! tip "Ultralytics Installation"

Before you start building the application, ensure you have the Ultralytics Python package installed. You can install it using the command `pip install ultralytics`.

!!! Example "Streamlit Application"

=== "Python"

```python
from ultralytics import solutions

solutions.inference()

# Run this file with the command: streamlit run <file-name.py>
```

=== "CLI"

```bash
yolo streamlit-predict
```

This will launch the Streamlit application in your default web browser. You will see the main title, subtitle, and the sidebar with configuration options. Select your desired YOLOv8 model, set the confidence and NMS thresholds, and click the "Start" button to begin real-time object detection.

## Conclusion

By following this guide, you have successfully created a real-time object detection application using Streamlit and Ultralytics YOLOv8. This application allows you to experience the power of YOLOv8 in detecting objects through your webcam, with a user-friendly interface and the ability to stop the video stream at any time.

For further enhancements, you can explore adding more features such as recording the video stream, saving the annotated frames, or integrating with other computer vision libraries.
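
As a sketch of one such enhancement (assumed here, not part of this commit), annotated frames could be saved with the standard Ultralytics predict API outside of Streamlit; the webcam index `0` and the `annotated_frames/` output folder are assumptions:

```python
from pathlib import Path

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any YOLOv8 detection weights
out_dir = Path("annotated_frames")  # hypothetical output folder
out_dir.mkdir(exist_ok=True)

# source=0 streams frames from the default webcam; stream=True yields results lazily
for i, result in enumerate(model.predict(source=0, stream=True)):
    annotated = result.plot()  # BGR frame with boxes and labels drawn
    cv2.imwrite(str(out_dir / f"frame_{i:06d}.jpg"), annotated)
    if i >= 100:  # stop after ~100 frames for this sketch
        break
```
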

## Share Your Thoughts with the Community

Engage with the community to learn more, troubleshoot issues, and share your projects:

### Where to Find Help and Support

- **GitHub Issues:** Visit the [Ultralytics GitHub repository](https://github.com/ultralytics/ultralytics/issues) to raise questions, report bugs, and suggest features.
- **Ultralytics Discord Server:** Join the [Ultralytics Discord server](https://ultralytics.com/discord/) to connect with other users and developers, get support, share knowledge, and brainstorm ideas.

### Official Documentation

- **Ultralytics YOLOv8 Documentation:** Refer to the [official YOLOv8 documentation](https://docs.ultralytics.com/) for comprehensive guides and insights on various computer vision tasks and projects.

## FAQ

### How can I set up a real-time object detection application using Streamlit and Ultralytics YOLOv8?

Setting up a real-time object detection application with Streamlit and Ultralytics YOLOv8 is straightforward. First, ensure you have the Ultralytics Python package installed:

```bash
pip install ultralytics
```

Then, you can create a basic Streamlit application to run live inference:

!!! Example "Streamlit Application"

=== "Python"

```python
from ultralytics import solutions

solutions.inference()

# Run this file with the command: streamlit run <file-name.py>
```

=== "CLI"

```bash
yolo streamlit-predict
```

For more details on the practical setup, refer to the [Streamlit Application Code section](#streamlit-application-code) of the documentation.

### What are the main advantages of using Ultralytics YOLOv8 with Streamlit for real-time object detection?

Using Ultralytics YOLOv8 with Streamlit for real-time object detection offers several advantages:

- **Seamless Real-Time Detection**: Achieve high-accuracy, real-time object detection directly from webcam feeds.
- **User-Friendly Interface**: Streamlit's intuitive interface allows easy use and deployment without extensive technical knowledge.
- **Resource Efficiency**: YOLOv8's optimized algorithms ensure high-speed processing with minimal computational resources.

Discover more about these advantages [here](#advantages-of-live-inference).

### How do I deploy a Streamlit object detection application in my web browser?

After writing your Streamlit application that integrates Ultralytics YOLOv8, you can deploy it by running:

```bash
streamlit run <file-name.py>
```

This command will launch the application in your default web browser, enabling you to select YOLOv8 models, set confidence and NMS thresholds, and start real-time object detection with a simple click. For a detailed guide, refer to the [Streamlit Application Code](#streamlit-application-code) section.
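
If the app must run on a remote or headless machine, standard Streamlit server flags can be appended to the same command (the bundled launcher in this commit itself passes `--server.headless true`); the script name `app.py` and the port below are placeholders:

```bash
streamlit run app.py --server.headless true --server.port 8501
```
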

### What are some use cases for real-time object detection using Streamlit and Ultralytics YOLOv8?

Real-time object detection using Streamlit and Ultralytics YOLOv8 can be applied in various sectors:

- **Security**: Real-time monitoring for unauthorized access.
- **Retail**: Customer counting, shelf management, and more.
- **Wildlife and Agriculture**: Monitoring animals and crop conditions.

For more in-depth use cases and examples, explore [Ultralytics Solutions](https://docs.ultralytics.com/solutions).

### How does Ultralytics YOLOv8 compare to other object detection models like YOLOv5 and RCNNs?

Ultralytics YOLOv8 provides several enhancements over prior models like YOLOv5 and RCNNs:

- **Higher Speed and Accuracy**: Improved performance for real-time applications.
- **Ease of Use**: Simplified interfaces and deployment.
- **Resource Efficiency**: Optimized for better speed with minimal computational requirements.

For a comprehensive comparison, check [Ultralytics YOLOv8 Documentation](https://docs.ultralytics.com/models/yolov8) and related blog posts discussing model performance.
4 changes: 4 additions & 0 deletions docs/en/reference/cfg/__init__.md
@@ -51,6 +51,10 @@ keywords: Ultralytics, YOLO, configuration, cfg2dict, get_cfg, check_cfg, save_d

<br><br>

## ::: ultralytics.cfg.handle_streamlit_inference

<br><br>

## ::: ultralytics.cfg.parse_key_value_pair

<br><br>
16 changes: 16 additions & 0 deletions docs/en/reference/solutions/streamlit_inference.md
@@ -0,0 +1,16 @@
---
description: Explore the live inference capabilities of Streamlit combined with Ultralytics YOLOv8. Learn to implement real-time object detection in your web applications with our comprehensive guide.
keywords: Ultralytics, YOLOv8, live inference, real-time object detection, Streamlit, computer vision, webcam inference, object detection, Python, ML, cv2
---

# Reference for `ultralytics/solutions/streamlit_inference.py`

!!! Note

This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/solutions/streamlit_inference.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/solutions/streamlit_inference.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/solutions/streamlit_inference.py) 🛠️. Thank you 🙏!

<br><br>

## ::: ultralytics.solutions.streamlit_inference.inference

<br><br>
1 change: 1 addition & 0 deletions docs/en/solutions/index.md
@@ -28,6 +28,7 @@ Here's our curated list of Ultralytics solutions that can be used to create awes
- [Queue Management](../guides/queue-management.md) 🚀 NEW: Implement efficient queue management systems to minimize wait times and improve productivity using YOLOv8.
- [Parking Management](../guides/parking-management.md) 🚀 NEW: Organize and direct vehicle flow in parking areas with YOLOv8, optimizing space utilization and user experience.
- [Analytics](../guides/analytics.md) 📊 NEW: Conduct comprehensive data analysis to discover patterns and make informed decisions, leveraging YOLOv8 for descriptive, predictive, and prescriptive analytics.
- [Live Inference with Streamlit](../guides/streamlit-live-inference.md) 🚀 NEW: Leverage the power of YOLOv8 for real-time object detection directly through your web browser with a user-friendly Streamlit interface.

## Contribute to Our Solutions

6 changes: 4 additions & 2 deletions mkdocs.yml
@@ -162,7 +162,7 @@ nav:
- guides/index.md
- Explorer:
- datasets/explorer/index.md
- NEW 🚀 Analytics: guides/analytics.md # for promotion of new pages
- NEW 🚀 Live Inference: guides/streamlit-live-inference.md # for promotion of new pages
- Languages:
- 🇬🇧&nbsp English: https://ultralytics.com/docs/
- 🇨🇳&nbsp 简体中文: https://docs.ultralytics.com/zh/
@@ -300,7 +300,7 @@ nav:
- datasets/track/index.md
- NEW 🚀 Solutions:
- solutions/index.md
- NEW 🚀 Analytics: guides/analytics.md
- Analytics: guides/analytics.md
- Object Counting: guides/object-counting.md
- Object Cropping: guides/object-cropping.md
- Object Blurring: guides/object-blurring.md
@@ -314,6 +314,7 @@
- Distance Calculation: guides/distance-calculation.md
- Queue Management: guides/queue-management.md
- Parking Management: guides/parking-management.md
- NEW 🚀 Live Inference: guides/streamlit-live-inference.md
- Guides:
- guides/index.md
- YOLO Common Issues: guides/yolo-common-issues.md
@@ -548,6 +549,7 @@ nav:
- parking_management: reference/solutions/parking_management.md
- queue_management: reference/solutions/queue_management.md
- speed_estimation: reference/solutions/speed_estimation.md
- streamlit_inference: reference/solutions/streamlit_inference.md
- trackers:
- basetrack: reference/trackers/basetrack.md
- bot_sort: reference/trackers/bot_sort.md
2 changes: 1 addition & 1 deletion ultralytics/__init__.py
@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license

__version__ = "8.2.49"
__version__ = "8.2.50"

import os

17 changes: 14 additions & 3 deletions ultralytics/cfg/__init__.py
@@ -78,10 +78,13 @@
4. Export a YOLOv8n classification model to ONNX format at image size 224 by 128 (no TASK required)
yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128
6. Explore your datasets using semantic search and SQL with a simple GUI powered by Ultralytics Explorer API
5. Explore your datasets using semantic search and SQL with a simple GUI powered by Ultralytics Explorer API
yolo explorer
5. Run special commands:
6. Streamlit real-time object detection on your webcam with Ultralytics YOLOv8
yolo streamlit-predict
7. Run special commands:
yolo help
yolo checks
yolo version
@@ -514,6 +517,13 @@ def handle_explorer():
subprocess.run(["streamlit", "run", ROOT / "data/explorer/gui/dash.py", "--server.maxMessageSize", "2048"])


def handle_streamlit_inference():
"""Open the Ultralytics Live Inference streamlit app for real time object detection."""
checks.check_requirements(["streamlit", "opencv-python", "torch"])
LOGGER.info("💡 Loading Ultralytics Live Inference app...")
subprocess.run(["streamlit", "run", ROOT / "solutions/streamlit_inference.py", "--server.headless", "true"])


def parse_key_value_pair(pair):
"""Parse one 'key=value' pair and return key and value."""
k, v = pair.split("=", 1) # split on first '=' sign
@@ -582,6 +592,7 @@ def entrypoint(debug=""):
"login": lambda: handle_yolo_hub(args),
"copy-cfg": copy_default_cfg,
"explorer": lambda: handle_explorer(),
"streamlit-predict": lambda: handle_streamlit_inference(),
}
full_args_dict = {**DEFAULT_CFG_DICT, **{k: None for k in TASKS}, **{k: None for k in MODES}, **special}
