[Scripts] Add scripts for FedProx on Cora #331

Merged: 3 commits, Aug 19, 2022
38 changes: 38 additions & 0 deletions benchmark/FedHPOB/scripts/gcn/cora_prox.yaml
@@ -0,0 +1,38 @@
use_gpu: True
device: 0
early_stop:
patience: 100
seed: 12345
federate:
mode: standalone
make_global_eval: True
client_num: 5
total_round_num: 500
join_in_info: ['num_sample']
data:
root: data/
type: cora
splitter: 'louvain'
batch_size: 1
model:
type: gcn
hidden: 64
dropout: 0.5
out_channels: 7
task: node
criterion:
type: CrossEntropyLoss
train:
local_update_steps: 1
optimizer:
lr: 0.25
weight_decay: 0.0005
trainer:
type: nodefullbatch_trainer
eval:
freq: 1
metrics: ['acc', 'correct', 'f1']
split: ['test', 'val', 'train']
fedprox:
use: True
mu: 5.0
Collaborator:
this hyperparameter is critical in this algorithm. I am wondering how we should consider it in our setting, that is to say, to be searched or fixed?

Collaborator (author):

`mu` is to be searched over {0.1, 1.0, 5.0}.
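Since the run script below takes `mu` as its third positional argument, the three candidate values could be swept with a small wrapper like the following sketch (the `cudaid=0` and `sample_num=5` values are illustrative placeholders, not from this PR):

```shell
# Sweep the three candidate mu values by invoking the run script once per
# value. The echo prints the command that would be run; drop it to launch
# the actual jobs.
for mu in 0.1 1.0 5.0; do
    echo "bash run_prox_cora.sh 0 5 ${mu}"
done
```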

34 changes: 34 additions & 0 deletions benchmark/FedHPOB/scripts/gcn/run_prox_cora.sh
@@ -0,0 +1,34 @@
set -e

cudaid=$1
sample_num=$2
mu=$3

# mu=(0.1 1.0 5.0)

cd ../../../..

dataset=cora

out_dir=out_${dataset}_prox

echo "HPO starts..."

lrs=(0.01 0.01668 0.02783 0.04642 0.07743 0.12915 0.21544 0.35938 0.59948 1.0)
wds=(0.0 0.001 0.01 0.1)
dps=(0.0 0.5)
steps=(1 2 3 4 5 6 7 8)

for ((l = 0; l < ${#lrs[@]}; l++)); do
  for ((w = 0; w < ${#wds[@]}; w++)); do
    for ((d = 0; d < ${#dps[@]}; d++)); do
      for ((s = 0; s < ${#steps[@]}; s++)); do
        for k in {1..3}; do
          python federatedscope/main.py --cfg benchmark/FedHPOB/scripts/gcn/cora_prox.yaml \
            device $cudaid \
            train.optimizer.lr ${lrs[$l]} \
            fedprox.use True \
            fedprox.mu ${mu} \
            train.optimizer.weight_decay ${wds[$w]} \
            model.dropout ${dps[$d]} \
            train.local_update_steps ${steps[$s]} \
            federate.sample_client_num $sample_num \
            seed $k \
            outdir ${out_dir}/${sample_num} \
            expname lr${lrs[$l]}_wd${wds[$w]}_dropout${dps[$d]}_step${steps[$s]}_mu${mu}_seed${k} \
            >/dev/null 2>&1
        done
      done
    done
  done
done

echo "HPO ends."
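For context on what `fedprox.mu` controls: FedProx augments each client's local objective with a proximal term (mu / 2) * ||w - w_global||^2, which penalizes drift from the global model. The following is a minimal NumPy sketch of that objective, not FederatedScope's actual implementation:

```python
import numpy as np

def fedprox_local_loss(local_loss, w, w_global, mu):
    """Local loss plus the FedProx proximal term (mu / 2) * ||w - w_global||^2."""
    prox = 0.5 * mu * float(np.sum((w - w_global) ** 2))
    return local_loss + prox

# Illustrative values: plain loss 0.3, squared distance 1 + 4 = 5, mu = 5.0
w = np.array([1.0, 2.0])
w_global = np.array([0.0, 0.0])
print(fedprox_local_loss(0.3, w, w_global, mu=5.0))  # 0.3 + 2.5 * 5 = 12.8
```

A larger `mu` (such as the 5.0 in `cora_prox.yaml`) keeps local updates closer to the global model, which is why the discussion above treats it as a hyperparameter worth searching.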
2 changes: 0 additions & 2 deletions environment/extra_dependencies_torch1.10-application.sh
@@ -9,5 +9,3 @@ conda install -y nltk
conda install -y sentencepiece textgrid typeguard -c conda-forge
conda install -y transformers==4.16.2 tokenizers==0.10.3 datasets -c huggingface -c conda-forge
conda install -y torchtext -c pytorch

conda clean -a -y