
# Neural Language Modeling

## Pre-trained models

| Description | Dataset | Model | Test set(s) |
| --- | --- | --- | --- |
| Convolutional (Dauphin et al., 2017) | Google Billion Words | download (.tar.bz2) | download (.tar.bz2) |
| Convolutional (Dauphin et al., 2017) | WikiText-103 | download (.tar.bz2) | download (.tar.bz2) |

## Example usage

The following script provides an example of pre-processing data for the language modeling task.

### prepare-wikitext-103.sh

Provides an example of pre-processing for the WikiText-103 language modeling task:

Example usage:

```bash
$ cd examples/language_model/
$ bash prepare-wikitext-103.sh
$ cd ../..
```
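After the script finishes, the raw WikiText-103 splits should be available as plain tokenized text. A quick sanity check (a sketch; it assumes the script ran to completion and only lists the files used in the next step):

```bash
# Inspect the prepared corpus (illustrative; the directory should contain at least these splits)
$ ls examples/language_model/wikitext-103
wiki.train.tokens  wiki.valid.tokens  wiki.test.tokens
```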

```bash
# Binarize the dataset:
$ TEXT=examples/language_model/wikitext-103

$ fairseq-preprocess --only-source \
  --trainpref $TEXT/wiki.train.tokens --validpref $TEXT/wiki.valid.tokens --testpref $TEXT/wiki.test.tokens \
  --destdir data-bin/wikitext-103
```
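fairseq-preprocess writes a vocabulary file and binarized versions of each split into `--destdir`. The resulting directory should look roughly like this (a sketch; exact file names can differ between fairseq versions):

```bash
# Binarized dataset layout (illustrative)
$ ls data-bin/wikitext-103
dict.txt  test.bin  test.idx  train.bin  train.idx  valid.bin  valid.idx
```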

```bash
# Train the model:
# If it runs out of memory, try reducing --max-tokens and --tokens-per-sample
$ mkdir -p checkpoints/wikitext-103
$ fairseq-train --task language_modeling data-bin/wikitext-103 \
  --save-dir checkpoints/wikitext-103 \
  --max-epoch 35 --arch fconv_lm_dauphin_wikitext103 --optimizer nag \
  --lr 1.0 --lr-scheduler reduce_lr_on_plateau --lr-shrink 0.5 \
  --clip-norm 0.1 --dropout 0.2 --weight-decay 5e-06 --criterion adaptive_loss \
  --adaptive-softmax-cutoff 10000,20000,200000 --max-tokens 1024 --tokens-per-sample 1024
```
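fairseq-train periodically saves checkpoints to `--save-dir`, including the best-scoring and most recent ones; the evaluation step below relies on the best checkpoint. A quick check (a sketch; per-epoch checkpoints are omitted from the listing):

```bash
# Checkpoints written during training (listing is illustrative)
$ ls checkpoints/wikitext-103
checkpoint_best.pt  checkpoint_last.pt  ...
```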

```bash
# Evaluate:
$ fairseq-eval-lm data-bin/wikitext-103 --path 'checkpoints/wikitext-103/checkpoint_best.pt'
```
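The same command can also score one of the pre-trained models listed above: extract a downloaded archive and point `--path` at the checkpoint it contains. For example (a sketch; the path is a placeholder for wherever you extracted the archive, and the archive's dictionary should match the binarized data):

```bash
# Score a downloaded pre-trained model (placeholder path)
$ fairseq-eval-lm data-bin/wikitext-103 --path '/path/to/extracted/model.pt'
```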