This is the repository for deploying the Pima Indians diabetes model, built as a pet example for my Coding Dojo course.

---
title: Deployment Model Inference
description: A Dockerized inference pipeline for deploying a machine learning model.
---

# Deployment Model Inference

This project demonstrates a Dockerized inference pipeline for deploying a machine learning model. The pipeline loads a trained model, preprocesses input data, and makes predictions using the model.

## Table of Contents

- [Project Overview](#project-overview)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Usage](#usage)
- [Docker Deployment](#docker-deployment)
- [Testing](#testing)
- [Contributing](#contributing)
- [License](#license)
- [Acknowledgments](#acknowledgments)
- [Contact](#contact)

## Project Overview

The project consists of the following components:

- `src/inference.py`: the main script that loads the model, preprocesses input data, and makes predictions.
- `src/model_loader.py`: a utility module that loads the trained model.
- `src/data_preprocessor.py`: a utility module that preprocesses input data.
- `models/trained_model_2025-01-02.joblib`: the trained model, saved in the `models/` directory.
- `Dockerfile`: containerizes the inference pipeline.
- `pyproject.toml`: the Poetry configuration file for dependency management.
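For orientation, the model loader is likely a thin wrapper around `joblib.load`. The sketch below is an assumption about its shape, not the repository's actual code — the `load_model` name and default path are hypothetical:

```python
# Hypothetical sketch of src/model_loader.py; the actual repo code may differ.
from pathlib import Path

import joblib

DEFAULT_MODEL_PATH = Path("models/trained_model_2025-01-02.joblib")


def load_model(path: Path = DEFAULT_MODEL_PATH):
    """Load a joblib-serialized model, failing loudly if the file is missing."""
    if not path.exists():
        raise FileNotFoundError(f"Model file not found: {path}")
    return joblib.load(path)
```

Failing with an explicit `FileNotFoundError` keeps a missing model file from surfacing later as an opaque deserialization error.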

## Prerequisites

Before running the project, ensure you have the following installed:

- Python 3 (the exact version is pinned in `pyproject.toml`)
- [Poetry](https://python-poetry.org/) for dependency management
- [Docker](https://www.docker.com/) (only needed for containerized deployment)

## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/iair/deployment_model_inference.git
   cd deployment_model_inference
   ```

2. Install dependencies with Poetry:

   ```bash
   poetry install
   ```

## Usage

### Running Locally

1. Activate the Poetry virtual environment:

   ```bash
   poetry shell
   ```

2. Run the inference script:

   ```bash
   poetry run python src/inference.py
   ```

### Input Data

The script expects input data in the following format:

```python
input_data = {
    "Pregnancies": 6,
    "Glucose": 148,
    "BloodPressure": 72,
    "SkinThickness": 0,
    "Insulin": 0,
    "BMI": 33.6,
    "DiabetesPedigreeFunction": 0.627,
    "Age": 50
}
```
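scikit-learn estimators expect a 2-D feature matrix, so a dict like the one above is typically wrapped in a single-row `pandas.DataFrame` before prediction. A minimal sketch, assuming the model was trained on these eight columns in this order (the `model` variable is assumed to come from the loader and is left commented out):

```python
import pandas as pd

input_data = {
    "Pregnancies": 6, "Glucose": 148, "BloodPressure": 72,
    "SkinThickness": 0, "Insulin": 0, "BMI": 33.6,
    "DiabetesPedigreeFunction": 0.627, "Age": 50,
}

# One row, eight feature columns, in the same order as the dict above.
features = pd.DataFrame([input_data])

# prediction = model.predict(features)  # `model` as loaded by src/model_loader.py
```

Column order matters for models trained on positional arrays, so keep the dict keys in the training order.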

## Docker Deployment

### Build the Docker Image

To build the Docker image, run:

```bash
docker build -t deployment_model_inference .
```

### Run the Docker Container

To run the container and execute the inference script:

```bash
docker run -it deployment_model_inference:latest
```

### Mount Local Files (Optional)

If you want to test changes to your code or model without rebuilding the Docker image, mount your local project directory into the container (this assumes the image's working directory is `/app`):

```bash
docker run -it -v $(pwd):/app deployment_model_inference:latest
```

## Testing

To test the inference pipeline, ensure the model file (models/trained_model_2025-01-02.joblib) exists and is correctly loaded. You can also test with different input data to verify the pipeline's robustness.
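One cheap robustness check is validating that an input record carries exactly the eight expected features before it reaches the model. A small helper along these lines (hypothetical — not part of the repo):

```python
# Expected feature names for the Pima Indians diabetes dataset.
EXPECTED_FEATURES = (
    "Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
    "Insulin", "BMI", "DiabetesPedigreeFunction", "Age",
)


def validate_input(record: dict) -> dict:
    """Return the record unchanged, or raise ValueError naming bad keys."""
    missing = [k for k in EXPECTED_FEATURES if k not in record]
    extra = [k for k in record if k not in EXPECTED_FEATURES]
    if missing or extra:
        raise ValueError(f"invalid input: missing={missing}, extra={extra}")
    return record
```

Rejecting malformed input early, with the offending keys named, makes failures far easier to diagnose than a shape or dtype error deep inside the model.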


## Contributing

Contributions are welcome! If you'd like to contribute, please follow these steps:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature/YourFeatureName`).
3. Commit your changes (`git commit -m 'Add some feature'`).
4. Push to the branch (`git push origin feature/YourFeatureName`).
5. Open a pull request.

## License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.


## Acknowledgments

- Thanks to the open-source community for providing the tools and libraries that made this project possible.
- Special thanks to Poetry and Docker for simplifying dependency management and deployment.

## Contact

For questions or feedback, feel free to reach out:

