Welcome to the bert-sst-finetune repository! This repository fine-tunes a BERT base uncased model on the Stanford Sentiment Treebank (SST-2) dataset for sentiment analysis.
- Repository URL: BERT-SST-Finetune
- Dataset: Stanford Sentiment Treebank (SST-2)
- Achieved Accuracy: 90.6%
- State-of-the-Art Accuracy: 97.5% (Papers with Code)
- Fine-tunes a BERT base uncased model on the SST-2 dataset (a minimal sketch of the workflow follows this list).
- Achieves a sentiment analysis accuracy of 90.6%.
- Future updates will include cross-comparisons among different models on the SST-2 dataset.
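The notebook follows a standard fine-tuning workflow; the snippet below is a minimal sketch of that workflow, assuming the Hugging Face `transformers` and `datasets` libraries and the `bert-base-uncased` checkpoint. The hyperparameters and the `bert-sst2` output directory are illustrative, not the repository's exact settings.

```python
# Minimal fine-tuning sketch, assuming the Hugging Face `transformers` and `datasets`
# libraries and the bert-base-uncased checkpoint. Hyperparameters and the "bert-sst2"
# output directory are illustrative, not the repository's exact settings.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load SST-2 from the GLUE benchmark and tokenize the sentences.
dataset = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

# BERT base uncased with a 2-way classification head (negative / positive).
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="bert-sst2",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())          # accuracy on the SST-2 validation split
trainer.save_model("bert-sst2")    # also saves the config for later inference
tokenizer.save_pretrained("bert-sst2")
```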
To view and run the Jupyter notebook on Google Colab, use the link provided:
Open the notebook and run the cells to fine-tune the BERT model on the SST-2 dataset, making sure to set the correct paths and install the dependencies listed in the notebook.
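Once training finishes, the saved checkpoint can be queried directly for predictions. The following is a small usage sketch with the `transformers` pipeline API; `bert-sst2` is the hypothetical output directory from the sketch above, so substitute whatever path your notebook run actually saves to.

```python
# Usage sketch: classify a sentence with a saved checkpoint. "bert-sst2" is the
# hypothetical output directory from the training sketch above.
from transformers import pipeline

classifier = pipeline("text-classification", model="bert-sst2", tokenizer="bert-sst2")
print(classifier("a gripping, beautifully shot film"))
# Example output: [{'label': 'LABEL_1', 'score': ...}], where LABEL_1 is the positive class.
```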
- Incorporate a cross-comparison analysis of different models on SST-2.
- Further optimization and tuning to improve the current accuracy.
Contributions are welcome! Please create an issue to discuss any changes or enhancements.
Feel free to explore and contribute to the repository. Happy coding!
Author: Faizan Faisal