
Generate to Understand for Representation

This repository contains the code and models from our paper "Generate to Understand for Representation" (https://arxiv.org/abs/2306.10056).

Introducing GUR: a pretraining framework that combines the language modeling and contrastive learning objectives in a single pre-training stage. We select similar text pairs from raw unlabeled documents based on their Longest Common Substring (LCS) and train the model jointly with masked language modeling and unsupervised contrastive learning. Without any labeled training data, the resulting GUR model outperforms all other pretrained baselines as a zero-shot retriever on the recall benchmark, while retaining its language modeling ability, as shown in our ablation experiment.
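
The pair-selection idea can be illustrated with a minimal sketch (this is not the logic of sents2pair.py; the function names and the min_lcs threshold are assumptions): sentences from the raw corpus whose Longest Common Substring is long enough are kept as a positive pair for contrastive learning.

```python
# Illustrative sketch of LCS-based positive-pair mining; the actual
# implementation lives in sents2pair.py and may differ.
from itertools import combinations

def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest common substring of a and b (O(len(a)*len(b)) DP)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def mine_pairs(sentences, min_lcs=10):
    """Keep sentence pairs whose LCS length reaches min_lcs (hypothetical threshold)."""
    return [
        (s1, s2)
        for s1, s2 in combinations(sentences, 2)
        if longest_common_substring(s1, s2) >= min_lcs
    ]
```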

Architecture

[Figure: model architecture]

Generate samples

python sents2pair.py   # convert the corpus into text pairs

Train

bash train.sh
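
train.sh drives the actual run; the single-stage objective described above can be sketched roughly as the sum of a masked-language-modeling loss and an in-batch contrastive (InfoNCE) loss over the mined pairs. The following is a minimal PyTorch sketch, not the repository's training code; the pooled embeddings emb_a/emb_b and the temperature value are assumptions.

```python
# Rough sketch of the joint objective (MLM + in-batch contrastive);
# illustrative only, not the repository's training loop.
import torch
import torch.nn.functional as F

def joint_loss(mlm_logits, mlm_labels, emb_a, emb_b, temperature=0.05):
    # Masked-language-modeling term: cross-entropy over masked positions only
    # (positions labeled -100 are ignored, following the common convention).
    mlm = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )

    # Unsupervised contrastive term: each sentence's mined partner is the
    # positive, every other sentence in the batch is a negative (InfoNCE).
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    sim = emb_a @ emb_b.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(sim.size(0), device=sim.device)
    contrastive = F.cross_entropy(sim, targets)

    return mlm + contrastive
```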

Init model (optional)

python convert.py

Citation

@INPROCEEDINGS{10438270,
  author={Xue, Changshang and Zhong, Xiande and Liu, Xiaoqing},
  booktitle={2023 11th International Conference on Information Systems and Computing Technology (ISCTech)}, 
  title={Generate to Understand for Representation in One Pre-training Stage}, 
  year={2023},
  volume={},
  number={},
  pages={258-267},
  keywords={Training;Computational modeling;Self-supervised learning;Benchmark testing;Market research;Natural language processing;Task analysis;self-supervised pre-train;contrastive learning;language model;zero-shot learning;text representation;NLP;NLU;NLG;retrieval},
  doi={10.1109/ISCTech60480.2023.00054}}
