update performance
Quester-one committed Jul 31, 2022
1 parent 467e4a8 commit bcfb653
Showing 3 changed files with 18 additions and 22 deletions.
26 changes: 13 additions & 13 deletions experiments/performance_table.md
@@ -4,20 +4,20 @@

### 1.1 SST-2 (DEV)

-| Model | P | R | F1 | Acc |
-|---|---|---|---|---|
-| bert-base-cased | 0.90343 | 0.94819 | 0.92527 | 0.92201 |
-| bert-base-cased+KT-Emb | 8 | 9 | 6 | 6 |
-| bert-base-cased+KG-Emb | 8 | 9 | 6 | 6 |
-| bert-base-cased+KT-Attn | 8 | 9 | 6 | 6 |
+| Model | P | R | F1 | Acc | file |
+|---|---|---|---|---|---|
+| bert-base-cased | 0.90343 | 0.94819 | 0.92527 | 0.92201 | sst2_bert_base_cased.py |
+| bert-base-cased+KT-Emb | 8 | 9 | 6 | 6 | 6 |
+| bert-base-cased+KG-Emb | 8 | 9 | 6 | 6 | 6 |
+| bert-base-cased+KT-Attn | 8 | 9 | 6 | 6 | 6 |

-### 1.2 SST-5
+### 1.2 SST-5 (DEV)

-| Model | micro_P | micro_R | micro_F1 | macro_P | macro_R | macro_F1 | Acc |
-|---|---|---|---|---|---|---|---|
-| bert-base-cased | 0.48319 | 0.48319 | 0.48319 | 0.48459 | 0.46527 | 0.47156 | 0.48319 |
-| bert-base-cased+KT-Emb | 8 | 9 | 4 | 5 | 6 | 5 | 6 |
-| bert-base-cased+KG-Emb | 8 | 9 | 4 | 5 | 6 | 5 | 6 |
-| bert-base-cased+KT-Attn | 8 | 9 | 4 | 5 | 6 | 5 | 6 |
+| Model | micro_P | micro_R | micro_F1 | macro_P | macro_R | macro_F1 | Acc | file |
+|---|---|---|---|---|---|---|---|---|
+| bert-base-cased | 0.48319 | 0.48319 | 0.48319 | 0.48459 | 0.46527 | 0.47156 | 0.48319 | sst5_bert_base_cased.py |
+| bert-base-cased+KT-Emb | 8 | 9 | 4 | 5 | 6 | 5 | 6 | 6 |
+| bert-base-cased+KG-Emb | 8 | 9 | 4 | 5 | 6 | 5 | 6 | 6 |
+| bert-base-cased+KT-Attn | 8 | 9 | 4 | 5 | 6 | 5 | 6 | 6 |

## 2. Sentence Pair
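An editorial aside, not part of the commit: the metric columns in the tables above correspond to scikit-learn's standard scoring functions, shown below on made-up toy labels (only the column-to-function mapping is taken from the tables). Note that for single-label multiclass classification, micro-averaged P, R, and F1 all equal accuracy, which is why the SST-5 `bert-base-cased` row repeats 0.48319 across those four columns.

```python
# Toy illustration of the metric columns; y_true/y_pred values are invented.
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# SST-2 style: binary labels, scores reported for the positive class.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
p = precision_score(y_true, y_pred)    # "P" column
r = recall_score(y_true, y_pred)       # "R" column
f1 = f1_score(y_true, y_pred)          # "F1" column
acc = accuracy_score(y_true, y_pred)   # "Acc" column

# SST-5 style: five classes, so micro- and macro-averaged scores are reported.
y_true5 = [0, 1, 2, 3, 4, 2, 1]
y_pred5 = [0, 1, 2, 4, 4, 2, 0]
micro_f1 = f1_score(y_true5, y_pred5, average="micro")  # equals accuracy here
macro_f1 = f1_score(y_true5, y_pred5, average="macro")  # unweighted class mean
```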
@@ -19,7 +19,7 @@

plm = PlmAutoModel(pretrained_model_name="bert-base-cased")
model = BaseTextClassificationModel(plm=plm, vocab=vocab)
-metric = BaseClassificationMetric(mode="binary") # SST_2
+metric = BaseClassificationMetric(mode="binary")
loss = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.00001)

12 changes: 4 additions & 8 deletions test/test_base_text_classification.py
@@ -4,26 +4,22 @@

device, output_path = init_cogktr(
device_id=7,
-# output_path="/data/mentianyi/code/CogKTR/datapath/text_classification/SST_2/experimental_result", # SST_2
-output_path="/data/mentianyi/code/CogKTR/datapath/text_classification/SST_5/experimental_result", # SST_5
+output_path="/data/mentianyi/code/CogKTR/datapath/text_classification/SST_2/experimental_result",
folder_tag="simple_test",
)

-# reader = Sst2Reader(raw_data_path="/data/mentianyi/code/CogKTR/datapath/text_classification/SST_2/raw_data") # SST_2
-reader = Sst5Reader(raw_data_path="/data/mentianyi/code/CogKTR/datapath/text_classification/SST_5/raw_data") # SST_5
+reader = Sst2Reader(raw_data_path="/data/mentianyi/code/CogKTR/datapath/text_classification/SST_2/raw_data")
train_data, dev_data, test_data = reader.read_all()
vocab = reader.read_vocab()

-# processor = Sst2Processor(plm="bert-base-cased", max_token_len=128, vocab=vocab) # SST_2
-processor = Sst5Processor(plm="bert-base-cased", max_token_len=128, vocab=vocab) # SST_5
+processor = Sst2Processor(plm="bert-base-cased", max_token_len=128, vocab=vocab)
train_dataset = processor.process_train(train_data)
dev_dataset = processor.process_dev(dev_data)
test_dataset = processor.process_test(test_data)

plm = PlmAutoModel(pretrained_model_name="bert-base-cased")
model = BaseTextClassificationModel(plm=plm, vocab=vocab)
-# metric = BaseClassificationMetric(mode="binary") # SST_2
-metric = BaseClassificationMetric(mode="multi") # SST_5
+metric = BaseClassificationMetric(mode="binary")
loss = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.00001)

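An editorial aside, not part of the repository: the last three lines of the script above configure a standard fine-tuning setup, cross-entropy loss plus Adam at lr=0.00001. A minimal plain-PyTorch sketch of one optimization step under those settings follows; `DummyClassifier`, the tensors, and the loop body are hypothetical stand-ins, since CogKTR's actual `PlmAutoModel`, processors, and trainer are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.optim as optim

class DummyClassifier(nn.Module):
    """Hypothetical stand-in for PlmAutoModel + BaseTextClassificationModel."""
    def __init__(self, hidden=16, num_labels=2):  # num_labels=5 for SST-5
        super().__init__()
        self.fc = nn.Linear(hidden, num_labels)

    def forward(self, x):
        # Return raw logits; CrossEntropyLoss applies log-softmax internally.
        return self.fc(x)

model = DummyClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.00001)  # same lr as the script

features = torch.randn(8, 16)       # stand-in for a batch of encoded sentences
labels = torch.randint(0, 2, (8,))  # 0/1 labels for the SST-2 (binary) case

optimizer.zero_grad()
logits = model(features)            # shape: (batch, num_labels)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```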
