The purpose of this repo is to run sentiment analysis models, test their sensitivity to changes in gender- and race-related attributes, and rate them based on this behavior.
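The core idea can be sketched as a counterfactual test: swap a gender-related attribute in a sentence and compare the scores a sentiment model assigns to the two variants. The scorer and function names below are hypothetical stand-ins for illustration, not the models or code evaluated in this repo.

```python
# Toy counterfactual sensitivity check (illustrative only; not repo code).
GENDER_SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
                "man": "woman", "woman": "man"}

def swap_gender(sentence: str) -> str:
    """Replace gendered tokens with their counterparts."""
    return " ".join(GENDER_SWAPS.get(tok, tok) for tok in sentence.split())

def toy_score(sentence: str) -> float:
    """Toy lexicon-based sentiment score in [-1, 1]; a stand-in for a real model."""
    lexicon = {"great": 1.0, "terrible": -1.0, "good": 0.5, "bad": -0.5}
    hits = [lexicon[t] for t in sentence.lower().split() if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

def sensitivity(sentence: str) -> float:
    """Absolute score change when gender attributes are swapped (0 = no bias detected)."""
    return abs(toy_score(sentence) - toy_score(swap_gender(sentence)))

print(sensitivity("he is a great doctor"))  # 0.0 for this toy scorer
```

A model whose scores shift substantially under such swaps would receive a worse bias rating than one whose scores are stable.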

ai4society/sentiment-rating

Rating of Sentiment Models for Bias

Steps to run the experiments:

  1. Go to code/data_generation/ in your command terminal and run the .py files (dataGeneration.py, dataGeneration_name.py, dataGeneration_baseline.py) to generate the data required for the experiments.

  2. Go to code/translator/ in your command terminal and run the 'tran_fr.py' file to translate the generated data into French.

  3. In the same directory, run the 'tran_fr_oto.py' file to translate the French datasets back into English (the 'OTO' datasets). Now all the data needed for the experiments is in place.

  4. Go to code/sentimentmodels/ and follow the instructions in the readme file inside that directory to evaluate all the sentiment analysis systems (SAS) on the generated datasets, including the baseline calculation.

  5. Go to code/translator/ and run the 'calc.py' file to see statistics such as gender and word differences when the sentences are translated from English to French and back to English. These stats are written to the 'data/results/analaysis' directory.
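The round-trip statistics in step 5 can be sketched as simple token-level comparisons between an original sentence and its back-translated version. The function names below are illustrative assumptions, not the actual implementation in calc.py.

```python
# Hypothetical sketch of the round-trip translation stats (not calc.py itself).

def word_difference(original: str, back_translated: str) -> int:
    """Number of tokens that appear in one sentence but not the other."""
    a = set(original.lower().split())
    b = set(back_translated.lower().split())
    return len(a ^ b)  # symmetric difference

def gender_flip(original: str, back_translated: str,
                pairs=(("he", "she"), ("him", "her"), ("his", "hers"))) -> bool:
    """True if a gendered token flipped to its counterpart during the round trip."""
    a = set(original.lower().split())
    b = set(back_translated.lower().split())
    return any((x in a and y in b) or (y in a and x in b) for x, y in pairs)

print(word_difference("he is happy", "he is glad"))        # 2
print(gender_flip("he is happy today", "she is happy today"))  # True
```

Gender flips introduced by translation matter here because they change the very attributes whose effect on sentiment scores the experiments measure.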
