The GLUE training pipeline uses the Hugging Face Transformers library and is adapted from https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py
The federated learning environment uses the Flower AI framework:
https://flowerai.net/docs/framework/index.html
pip install -r requirement.txt
To run the experiments in the paper, run:
./script.sh
We used the project at https://github.com/star-ailab/FSRDP to find the proper noise standard deviation for the different accountants. To find the proper standard deviation of noise for different accountants, run:
python ./noise_calculation/get_noise.py
target_epsilons and dataset_size_list are configurable in the get_noise.py file.
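For intuition on how a noise standard deviation relates to a target epsilon, the sketch below uses the classical closed-form Gaussian mechanism calibration (valid for epsilon <= 1 and a single release). This is only an illustration: the accountants used by get_noise.py (via the FSRDP project) track privacy loss across many training steps and will produce different values, and the function name here is hypothetical.

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Noise std for one (epsilon, delta)-DP Gaussian-mechanism release.

    Classical bound: sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon.
    This does NOT account for composition over training iterations,
    which is what the repo's accountants handle.
    """
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

# A tighter privacy budget (smaller epsilon) requires more noise:
# gaussian_sigma(0.5, 1e-5) > gaussian_sigma(1.0, 1e-5)
```

Under composition-aware accountants (e.g. RDP-based), the required sigma for the same overall epsilon is generally different, which is why the repo computes it numerically.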
python federated.py \
--model_name_or_path google-bert/bert-base-cased \
--max_seq_length 128 \
--task_name SST2 \
--partition_policy Linear \
--per_device_train_batch_size 550 \
--learning_rate 2e-5 \
--output_dir /tmp/SST2/
model_name_or_path is the base model.
task_name selects the dataset, which can be SST2, QNLI, or QQP.
partition_policy can be Iid, Linear, Square, or Exp.
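One plausible reading of the four policies is that Iid splits the data equally across clients while Linear, Square, and Exp make client shares grow linearly, quadratically, and exponentially with the client index. The helper below is a hypothetical sketch of that interpretation; the repo's actual weighting and rounding in federated.py may differ.

```python
def partition_sizes(total, num_clients, policy):
    """Per-client sample counts under each partition policy (illustrative only).

    Iid    -> equal shares
    Linear -> share of client i proportional to i + 1
    Square -> proportional to (i + 1) ** 2
    Exp    -> proportional to 2 ** i
    """
    weights = {
        "Iid": [1] * num_clients,
        "Linear": [i + 1 for i in range(num_clients)],
        "Square": [(i + 1) ** 2 for i in range(num_clients)],
        "Exp": [2 ** i for i in range(num_clients)],
    }[policy]
    scale = sum(weights)
    sizes = [total * w // scale for w in weights]
    sizes[-1] += total - sum(sizes)  # assign the rounding remainder to the last client
    return sizes

# e.g. 100 samples over 4 clients:
# partition_sizes(100, 4, "Iid")    -> [25, 25, 25, 25]
# partition_sizes(100, 4, "Linear") -> [10, 20, 30, 40]
```

Skewed policies like Exp matter here because per-client dataset size changes the subsampling rate, and therefore the noise each client needs for the same privacy target.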