
SMART-TRACK

A novel Kalman Filter-Guided Sensor Fusion for Robust Object Tracking in Dynamic Environments using RGB, depth, and LiDAR sensors.

SMART-TRACK is a ROS2-based framework designed for real-time, precise detection and tracking of multiple objects in dynamic environments. By combining object detection techniques with Kalman Filter estimators in a feedback manner, SMART-TRACK maintains robust tracking continuity even when direct measurements are intermittent or fail.


Citation

If you find this work useful, please star this repository. If you use this work in your research, please cite our publication:

@ARTICLE{10778212,
  author={Abdelkader, Mohamed and Gabr, Khaled and Jarraya, Imen and AlMusalami, Abdullah and Koubaa, Anis},
  journal={IEEE Sensors Journal}, 
  title={SMART-TRACK: A Novel Kalman Filter-Guided Sensor Fusion for Robust UAV Object Tracking in Dynamic Environments}, 
  year={2025},
  volume={25},
  number={2},
  pages={3086-3097},
  doi={10.1109/JSEN.2024.3505939}}

Thank you!


Overview

In sensor fusion and state estimation for object detection and localization, the Kalman Filter (KF) is a standard framework. However, its effectiveness diminishes when measurements are not continuous, leading to rapid divergence in state estimations. SMART-TRACK introduces a novel approach that leverages high-frequency state estimates from the Kalman Filter to guide the search for new measurements. This method maintains tracking continuity even when direct measurements falter, making it pivotal in dynamic environments where traditional methods struggle. While applicable to various sensor types, our demonstration focuses on depth cameras within a ROS2-based simulation environment.
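
The feedback idea can be summarized in a few lines of Python. The sketch below is illustrative only; detector, kf, and the helper callables are hypothetical placeholders, not the actual SMART-TRACK API.

    # Illustrative sketch of the KF-guided measurement loop described above.
    # All callables passed in (detector, kf, and the helper functions) are
    # hypothetical placeholders, not the actual SMART-TRACK API.
    def track_step(rgb_image, depth_image, detector, kf,
                   depth_to_3d, project_to_image, search_in_roi):
        prediction = kf.predict()                   # high-frequency KF state estimate

        detection = detector.detect(rgb_image)      # e.g. a YOLOv8 inference call
        if detection is not None:
            # Normal case: build a 3D measurement from the bounding box and depth image.
            measurement = depth_to_3d(detection, depth_image)
        else:
            # Detection failed: project the KF prediction into the image and
            # search for the object only inside that region of interest (ROI).
            roi = project_to_image(prediction)
            measurement = search_in_roi(rgb_image, depth_image, roi)

        if measurement is not None:
            kf.update(measurement)                  # correct the filter with the measurement
        return kf.state()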


Key Features

  • Robust Multi-Target Tracking: Maintains continuous tracking even when object detections are intermittent or fail.
  • Measurement Augmentation: Uses Kalman Filter feedback to augment measurements, enhancing detection reliability.
  • Versatile Sensor Integration: Adaptable to multiple sensor types (e.g., LiDAR, depth cameras) by transforming KF predictions into the sensor's frame.
  • ROS2 Compatibility: Fully compatible with ROS2, facilitating integration into modern robotic systems.

Framework Architecture

(Figure: SMART-TRACK framework architecture diagram)

This diagram illustrates the SMART-TRACK workflow for robust UAV object tracking using RGB and depth images. Here's a simplified explanation:

  1. RGB Image ($I_{RGB}$): The system uses YOLOv8 to detect the object and generate a bounding box. If detected, the 3D position is computed and sent to the Kalman Filter (KF).

  2. Depth Image ($I_{depth}$): Simultaneously, depth images are used to compute the object's 3D position, which is fed as a measurement to the Kalman Filter (KF).

  3. Kalman Filter (KF): The KF predicts the object's position and updates it using new measurements. When YOLOv8 fails to detect the object, the KF’s prediction is used to guide a search region (ROI) in the RGB image.

  4. Tracking: The system keeps tracking the object even if direct detection fails, using the Kalman Filter to estimate and predict the object's location.
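
A minimal sketch of the pinhole-camera math behind steps 2 and 3 is shown below, assuming intrinsics fx, fy, cx, cy taken from the camera info topic. The function names and example numbers are illustrative, not part of the package.

    import numpy as np

    def pixel_to_3d(u, v, z, fx, fy, cx, cy):
        # Step 2: back-project a pixel (u, v) with depth z into the camera frame
        # using the standard pinhole model; this 3D point is the KF measurement.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.array([x, y, z])

    def point_to_pixel(point_cam, fx, fy, cx, cy):
        # Step 3: project a KF-predicted 3D point (expressed in the camera frame)
        # back to pixel coordinates; a window around it becomes the search ROI.
        x, y, z = point_cam
        u = fx * x / z + cx
        v = fy * y / z + cy
        return int(round(u)), int(round(v))

    # Example: a predicted position ~4 m ahead of the camera maps to a pixel,
    # and a 64x64 window around it can serve as the search region when YOLO misses.
    u, v = point_to_pixel(np.array([0.5, -0.2, 4.0]), fx=615.0, fy=615.0, cx=320.0, cy=240.0)
    roi = (u - 32, v - 32, u + 32, v + 32)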

Installation

Prerequisites

  • ROS2 Humble
  • Python 3.x
  • OpenCV
  • PX4 v1.14
  • TensorFlow or PyTorch (depending on the object detection model used)
  • Additional Dependencies:
    • vision_msgs
    • rclpy
    • cv_bridge

Setup

For a quick setup, check the Docker image we provide in the docker sub-directory of this repo and follow the README there. Otherwise, follow the steps below.

  1. Install YOLOv8

    SMART-TRACK uses YOLOv8 for object detection. Ensure YOLOv8 is installed before proceeding.

    • You can install YOLOv8 from the ultralytics package with pip install ultralytics.

    • To interface YOLO with ROS 2, you can use the yolo_ros package. However, we use our fork yolov8_ros, which we have tested with YOLOv8.

      cd ~/ros2_ws/src
      git clone https://github.com/mzahana/yolov8_ros
      pip3 install -r yolov8_ros/requirements.txt
      cd ~/ros2_ws
      rosdep install --from-paths src --ignore-src -r -y
      colcon build
  2. Download the Custom YOLOv8 Model

    • A custom YOLOv8 model for drone detection is available in the config directory of this package. The model is named drone_detection_v3.pt. Use this model with the yolov8_ros package.
  3. Clone the Kalman Filter Implementation

    • Clone the multi_target_kf repository into your ros2_ws/src directory and check out the ros2_humble branch.

      git clone -b ros2_humble https://github.com/mzahana/multi_target_kf.git
  4. Clone the SMART-TRACK Repository

    cd ~/ros2_ws/src
    git clone https://github.com/mzahana/smart_track 
  5. Build the ROS2 Workspace

    cd ~/ros2_ws
    colcon build
    source install/setup.bash

Getting Started

To run the pose estimator, follow these steps:

  1. Ensure All Packages Are Built and Sourced

    • Make sure all of the packages above are inside your ROS2 workspace (e.g., ~/ros2_ws/src).

    • Build the workspace if you haven't already:

      cd ~/ros2_ws
      colcon build
      source install/setup.bash
  2. Launch the Pose Estimator

    ros2 launch smart_track yolo2pose.launch.py
    • This launch file starts the pose estimation node that processes YOLO detections to estimate the poses of detected objects.
  3. Run the Kalman Filter Tracker

    • The yolo2pose_node.py node uses the Kalman Filter's estimates of the target's 3D position to implement the KF-guided measurement algorithm for more robust state estimation.

    • Launch the Kalman Filter tracker:

      ros2 launch multi_target_kf multi_target_kf.launch.py

Subscribed and Published Topics

Subscribed Topics

  • Image and Camera Information

    • /interceptor/depth_image (sensor_msgs/Image): Depth image stream from the camera sensor.
    • /interceptor/camera_info (sensor_msgs/CameraInfo): Camera calibration and configuration data.
  • Object Detections

    • /detections (vision_msgs/DetectionArray): Detection results containing bounding boxes, classes, and confidence scores.
  • Kalman Filter Tracks

    • /kf/good_tracks (kf_msgs/KFTracks): Filtered and predicted states of the tracked objects from the Kalman Filter.

Published Topics

  • Pose Arrays

    • /yolo_poses (geometry_msgs/PoseArray): Array of poses estimated from YOLO detections.
  • Overlay Images

    • /overlay_yolo_image (sensor_msgs/Image): Image with overlaid detections and tracking information for visualization.
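
As a usage example, the sketch below shows a minimal rclpy node that consumes the /yolo_poses topic listed above. The node is not part of this package; its name and logging are placeholders.

    # Minimal consumer of the /yolo_poses topic (illustrative; not part of smart_track).
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PoseArray

    class PoseConsumer(Node):
        def __init__(self):
            super().__init__('pose_consumer')
            self.create_subscription(PoseArray, '/yolo_poses', self.pose_callback, 10)

        def pose_callback(self, msg: PoseArray):
            # Each pose is the estimated 3D position of one detected object.
            for i, pose in enumerate(msg.poses):
                p = pose.position
                self.get_logger().info(f'object {i}: ({p.x:.2f}, {p.y:.2f}, {p.z:.2f})')

    def main():
        rclpy.init()
        node = PoseConsumer()
        rclpy.spin(node)
        node.destroy_node()
        rclpy.shutdown()

    if __name__ == '__main__':
        main()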

Customization

To adapt SMART-TRACK for different types of objects or sensors:

  • Modify the Detection Model: Replace or retrain the object detection model (e.g., YOLOv8) to detect your objects of interest.
  • Adjust Kalman Filter Parameters: Tweak the parameters in multi_target_kf for optimal tracking performance.
  • Integrate Different Sensors: Adapt the measurement augmentation system to work with other sensors by transforming KF predictions into the appropriate sensor frame (see the tf2 sketch after this list).
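
For the last point, one possible way to move a KF prediction into a sensor frame is via tf2. The snippet below is a sketch only, assuming the KF track is expressed in a map frame and the target sensor frame is camera_link; the frame names and node name are placeholders.

    # Sketch: transform a KF-predicted position into a sensor frame using tf2
    # (frame names 'map' and 'camera_link' are placeholders).
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PointStamped
    from tf2_ros import Buffer, TransformListener
    from tf2_geometry_msgs import do_transform_point

    class KfToSensorFrame(Node):
        def __init__(self):
            super().__init__('kf_to_sensor_frame')
            self.tf_buffer = Buffer()
            self.tf_listener = TransformListener(self.tf_buffer, self)

        def to_sensor_frame(self, kf_point: PointStamped, sensor_frame: str = 'camera_link'):
            # Look up the transform from the KF frame (e.g. 'map') into the sensor frame
            # and apply it to the predicted point.
            transform = self.tf_buffer.lookup_transform(
                sensor_frame, kf_point.header.frame_id, rclpy.time.Time())
            return do_transform_point(kf_point, transform)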

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the Repository

    • Click the 'Fork' button at the top right of this page.
  2. Clone Your Fork

    • Replace your-username with your GitHub username.

      git clone https://github.com/your-username/smart_track.git
  3. Create a New Branch

    git checkout -b feature/your-feature-name
  4. Make Your Changes

    • Implement your feature or bug fix.
  5. Commit Your Changes

    git commit -am 'Add new feature'
  6. Push to the Branch

    git push origin feature/your-feature-name
  7. Submit a Pull Request

    • Go to the original repository and click New Pull Request.

Notes

  • Depth Image and Camera Info Topics: Ensure you provide the correct depth image topic and camera info topic in the detection.launch.py file.

  • Static Transformation: There should be a valid static transformation between the robot's base link frame and the camera frame. This is required to compute the positions of detected objects in the observer's localization frame, which can then be sent to the Kalman Filter. A hypothetical launch-file example is sketched after these notes.

  • Configuration Parameters: You can configure the depth-based detection parameters in the detection_param.yaml file.

  • Rebuild Workspace After Modifications: After any modifications, rebuild your workspace using:

    colcon build
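
As a hypothetical example of the static transformation mentioned above, a ROS2 Python launch file can publish it with the tf2_ros static_transform_publisher; the frame names and offsets below are placeholders and must be replaced with your robot's actual values.

    # static_tf_example.launch.py (hypothetical; frame names and offsets are placeholders)
    from launch import LaunchDescription
    from launch_ros.actions import Node

    def generate_launch_description():
        return LaunchDescription([
            Node(
                package='tf2_ros',
                executable='static_transform_publisher',
                name='base_to_camera_tf',
                # arguments: x y z yaw pitch roll parent_frame child_frame
                arguments=['0.1', '0.0', '0.05', '0', '0', '0',
                           'base_link', 'camera_link'],
            ),
        ])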
