Run the following commands in the project's root directory to set up your database and model.
- To run the ETL pipeline that cleans the data and stores it in a database (an illustrative sketch of this step follows the command list):
  `python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/DisasterResponse.db`
- To run the ML pipeline that trains the classifier and saves it as a pickle file (a second sketch follows):
  `python models/train_classifier.py data/DisasterResponse.db models/classifier.pkl`
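As a rough illustration of what the ETL step does, here is a minimal sketch in the spirit of process_data.py. The column names, the `"related-1;request-0;..."` category format, and the `DisasterMessages` table name are assumptions based on the Figure Eight CSV layout, not code copied from the actual script.

```python
# Illustrative ETL sketch (NOT the actual process_data.py): column names,
# the "related-1;request-0;..." category format, and the table name are assumptions.
import sys

import pandas as pd
from sqlalchemy import create_engine


def run_etl(messages_csv, categories_csv, database_path):
    # Load the two CSV files and merge them on their shared 'id' column
    messages = pd.read_csv(messages_csv)
    categories = pd.read_csv(categories_csv)
    df = messages.merge(categories, on="id")

    # Split the single 'categories' string column into one 0/1 column per category
    cats = df["categories"].str.split(";", expand=True)
    cats.columns = [value.split("-")[0] for value in cats.iloc[0]]
    for col in cats.columns:
        cats[col] = cats[col].str[-1].astype(int)

    # Rebuild the frame, drop duplicates, and store the result in a SQLite database
    df = pd.concat([df.drop(columns=["categories"]), cats], axis=1)
    df = df.drop_duplicates()
    engine = create_engine(f"sqlite:///{database_path}")
    df.to_sql("DisasterMessages", engine, index=False, if_exists="replace")


if __name__ == "__main__":
    run_etl(*sys.argv[1:4])
```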
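Similarly, here is a minimal sketch of the training step in the spirit of train_classifier.py. The table name, the dropped columns, the pipeline layout (count vectorizer + TF-IDF + multi-output random forest), and the absence of a grid search are all assumptions; the real script may differ.

```python
# Illustrative training sketch (NOT the actual train_classifier.py): the table
# name, the dropped columns, and the estimator choice are assumptions.
import pickle

import nltk
import pandas as pd
from nltk.tokenize import word_tokenize
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sqlalchemy import create_engine

nltk.download("punkt", quiet=True)  # tokenizer data used by word_tokenize


def train_and_save(database_path, model_path):
    # Load the cleaned table produced by the ETL step
    engine = create_engine(f"sqlite:///{database_path}")
    df = pd.read_sql_table("DisasterMessages", engine)
    X = df["message"]
    Y = df.drop(columns=["id", "message", "original", "genre"])

    # Text features followed by a multi-output classifier (one label per category)
    model = Pipeline([
        ("vect", CountVectorizer(tokenizer=word_tokenize)),
        ("tfidf", TfidfTransformer()),
        ("clf", MultiOutputClassifier(RandomForestClassifier())),
    ])

    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
    model.fit(X_train, y_train)
    print("Mean per-category accuracy:", (model.predict(X_test) == y_test.values).mean())

    # Persist the fitted pipeline so the web app can load it
    with open(model_path, "wb") as f:
        pickle.dump(model, f)
```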
Run the following command in the app's directory to run your web app; a minimal sketch of what run.py might look like follows below.
  `python run.py`
Go to http://0.0.0.0:3001/
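run.py is the Flask app behind the dashboard. The sketch below shows how such an app could load the trained model and serve predictions; the route name, the `query` parameter, the relative file paths, and the table name are assumptions for illustration, and the actual app also renders HTML templates with Plotly graphs.

```python
# Minimal sketch of a run.py-style Flask app (route name, 'query' parameter,
# relative paths, and table name are assumptions; the real app also renders
# HTML templates with Plotly graphs).
import pickle

import pandas as pd
from flask import Flask, jsonify, request
from sqlalchemy import create_engine

app = Flask(__name__)

# Load the trained pipeline and the cleaned data once at startup
model = pickle.load(open("../models/classifier.pkl", "rb"))
engine = create_engine("sqlite:///../data/DisasterResponse.db")
df = pd.read_sql_table("DisasterMessages", engine)
category_names = df.columns[4:]  # assumes the first four columns are id/message/original/genre


@app.route("/go")
def classify_message():
    # Classify the user's message and return one 0/1 label per category
    query = request.args.get("query", "")
    labels = model.predict([query])[0]
    return jsonify(dict(zip(category_names, labels.tolist())))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=3001, debug=True)
```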
The data used is from Appen (formerly Figure Eight).
I've used common Python libraries such as pandas, nltk, sklearn, and plotly; all of them are listed in the requirements.txt file.
- Folder data: contains the data (*.csv) and process_data.py --> this Python script acts as the ETL pipeline and creates the database (.db) file.
- Folder models: contains train_classifier.py --> this script trains the model and creates the model file. To avoid training the model every time, a pre-trained model file is available in this GitHub repository.
- Folder app: contains run.py --> this script runs the web dashboard.
On the main page of the dashboard you can explore visualizations of the training data (a sketch of one such chart is shown below).
You can also type a message and view the predicted categories.
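Those visualizations are presumably built with Plotly. As a hedged sketch, a genre-count bar chart of the kind the main page could show is outlined below; the actual graphs in run.py may differ.

```python
# Sketch of the kind of training-data chart the dashboard could show
# (a genre-count bar chart is assumed; the actual graphs may differ).
import pandas as pd
import plotly.graph_objs as go
from plotly.offline import plot
from sqlalchemy import create_engine

engine = create_engine("sqlite:///data/DisasterResponse.db")
df = pd.read_sql_table("DisasterMessages", engine)

# Count how many messages belong to each genre and plot the distribution
genre_counts = df["genre"].value_counts()
figure = go.Figure(
    data=[go.Bar(x=genre_counts.index.tolist(), y=genre_counts.values.tolist())],
    layout=go.Layout(
        title="Distribution of Message Genres",
        xaxis={"title": "Genre"},
        yaxis={"title": "Count"},
    ),
)
plot(figure, filename="genre_distribution.html")  # writes a standalone HTML preview
```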