Merge pull request PaddlePaddle#10 from PaddlePaddle/master
Pull from master
Yelrose authored May 12, 2020
2 parents 7b0ee44 + f78f463 commit 37a209b
Showing 25 changed files with 1,292 additions and 123 deletions.
7 changes: 0 additions & 7 deletions docs/source/api/pgl.contrib.heter_graph.rst

This file was deleted.

7 changes: 0 additions & 7 deletions docs/source/api/pgl.contrib.heter_graph_wrapper.rst

This file was deleted.

7 changes: 7 additions & 0 deletions docs/source/api/pgl.heter_graph.rst
@@ -0,0 +1,7 @@
pgl.heter\_graph module: Heterogeneous Graph Storage
=======================================================

.. automodule:: pgl.heter_graph
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/source/api/pgl.heter_graph_wrapper.rst
@@ -0,0 +1,7 @@
pgl.heter\_graph\_wrapper module: Heterogeneous Graph data holders for Paddle GNN.
=====================================================================================

.. automodule:: pgl.heter_graph_wrapper
:members:
:undoc-members:
:show-inheritance:
4 changes: 2 additions & 2 deletions docs/source/api/pgl.rst
@@ -9,5 +9,5 @@ API Reference
pgl.data_loader
pgl.utils.paddle_helper
pgl.utils.mp_reader
pgl.contrib.heter_graph
pgl.contrib.heter_graph_wrapper
pgl.heter_graph
pgl.heter_graph_wrapper
9 changes: 2 additions & 7 deletions docs/source/quick_start/md/quick_start_for_heterGraph.md
@@ -58,8 +58,8 @@ Now, we can build a heterogeneous graph using PGL.
import paddle.fluid as fluid
import paddle.fluid.layers as fl
import pgl
from pgl.contrib import heter_graph
from pgl.contrib import heter_graph_wrapper
from pgl import heter_graph
from pgl import heter_graph_wrapper

g = heter_graph.HeterGraph(num_nodes=num_nodes,
edges=edges,
@@ -160,8 +160,3 @@ for epoch in range(30):
train_loss = exe.run(fluid.default_main_program(), feed=feed_dict, fetch_list=[loss], return_numpy=True)
print('Epoch %d | Loss: %f'%(epoch, train_loss[0]))
```
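
For readers updating existing code, here is a minimal, self-contained sketch of the new import path. It follows the quick-start snippet above; the exact edge-dict and node-type formats are assumptions taken from that guide, so consult the `pgl.heter_graph` API docs for the authoritative signature.

```python
# Minimal sketch: building a tiny heterogeneous graph with the new import path
# (pgl.heter_graph instead of pgl.contrib.heter_graph). The edge-dict and
# node-type formats below are assumptions based on the quick-start guide.
from pgl import heter_graph

num_nodes = 4
# Edges grouped by edge type, e.g. a tiny user-item interaction graph.
edges = {
    'click': [(0, 2), (1, 3)],
    'buy': [(0, 3)],
}
# One (node_id, node_type) pair per node: nodes 0-1 are users, 2-3 are items.
node_types = [(0, 'user'), (1, 'user'), (2, 'item'), (3, 'item')]

g = heter_graph.HeterGraph(num_nodes=num_nodes,
                           edges=edges,
                           node_types=node_types)
```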





66 changes: 66 additions & 0 deletions examples/erniesage/README.en.md
@@ -0,0 +1,66 @@
# ERNIESage in PGL

[README in Chinese](./README.md)


## Introduction
In many industrial applications, we often encounter a special kind of graph, shown below: the Text Graph. As the name implies, the node attributes of such a graph consist of text, and the edges provide structural information. Take the search scenario as an example: nodes can be represented by search queries, web page titles, and web page content, while the edges are constructed from user feedback or hyperlink information.

<img src="./docs/source/_static/text_graph.png" alt="Text Graph" width="800">

**ERNIESage** (short for ERNIE SAmple aggreGatE), a model proposed by the PGL team, effectively improves performance on text graphs by simultaneously modeling text semantics and graph structure information. It is worth mentioning that [**ERNIE**](https://github.com/PaddlePaddle/ERNIE) in **ERNIESage** is a continual pre-training framework for language understanding launched by Baidu.

**ERNIESage** combines ERNIE and GraphSAGE; its structure is shown in the figure below. The main idea is to use ERNIE as the aggregation function (Aggregator) to model the semantic and structural relationships between a center node and its neighbor nodes. In addition, because neighbor nodes have no natural ordering, ERNIESage uses an attention mask under which neighbors are invisible to one another ("neighbor blindness") together with an independent position embedding scheme; a rough sketch of such a mask follows the figure below.

<img src="./docs/source/_static/ernie_aggregator.png" alt="ERNIESage" width="800">

GraphSAGE with ID features can only model the graph structure, while ERNIE alone can only handle the text. With the help of PGL, the proposed **ERNIESage** model combines the advantages of both. In the text-graph recommendation example below, **ERNIESage** achieves the best performance compared with a standalone ERNIE model or GraphSAGE model.

<img src="./docs/source/_static/ERNIESage_result.png" alt="ERNIESage_result" width="800">

Thanks to the flexibility and usability of PGL, **ERNIESage** can be quickly implemented under PGL's message passing paradigm. Currently there are four PGL versions of ERNIESage (a rough sketch contrasting v1 and v2 follows the figure below):

- **ERNIESage v1**: ERNIE is applied to the nodes of the text graph;
- **ERNIESage v2**: ERNIE is applied to the edges of the text graph;
- **ERNIESage v3**: ERNIE is applied to the first-order neighbors and the center node;
- **ERNIESage v4**: ERNIE is applied to the N-th-order neighbors and the center node.

<img src="./docs/source/_static/ERNIESage_v1_4.png" alt="ERNIESage_v1_4" width="800">

## Dependencies
- paddlepaddle>=1.7
- pgl>=1.1

## Data format
The example data ```data.txt``` uses part of NLPCC2016-DBQA; each line has the format "query \t answer".
```text
NLPCC2016-DBQA is a sub-task of the NLPCC-ICCPOL 2016 Shared Task hosted by NLPCC (Natural Language Processing and Chinese Computing); the task is to select documents from the candidates that answer the questions. [url: http://tcci.ccf.org.cn/conference/2016/dldoc/evagline2.pdf]
```
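
A minimal sketch of reading such a file, assuming one tab-separated `query \t answer` pair per line:

```python
# Read data.txt, assuming one "query \t answer" pair per line (UTF-8).
with open("data.txt", encoding="utf-8") as f:
    pairs = [line.rstrip("\n").split("\t", 1) for line in f if line.strip()]

for query, answer in pairs[:3]:
    print(query, "->", answer)
```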

## How to run

We adopt [PaddlePaddle Fleet](https://github.com/PaddlePaddle/Fleet) as our distributed training framework. ```config/*.yaml``` contains example config files for the hyperparameters. Among them, the ERNIE model checkpoint ```ckpt_path``` and the vocabulary ```ernie_vocab_file``` can be downloaded from the [ERNIE](https://github.com/PaddlePaddle/ERNIE) page.

```sh
# train ERNIESage in distributed gpu mode.
sh local_run.sh config/erniesage_v1_gpu.yaml

# train ERNIESage in distributed cpu mode.
sh local_run.sh config/erniesage_v1_cpu.yaml
```

## Hyperparameters

- learner_type: `gpu` or `cpu`; `gpu` uses the fleet Collective mode, `cpu` uses the fleet Transpiler mode.
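
For a quick look at what these configs contain, the sketch below loads one of them with PyYAML. The key names come from the config diffs in this commit; the assumption that the files are plain YAML readable by `yaml.safe_load` is mine.

```python
# Inspect an example config; key names (learner_type, lr, batch_size, epoch,
# loss_type, ...) are taken from the config diffs shown in this commit.
import yaml

with open("config/erniesage_v2_gpu.yaml") as f:
    config = yaml.safe_load(f)

# "gpu" -> fleet Collective mode, "cpu" -> fleet Transpiler mode.
print(config["learner_type"])
print(config["lr"], config["batch_size"], config["epoch"], config["loss_type"])
```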

## Citation
```
@misc{ERNIESage,
author = {PGL Team},
title = {ERNIESage: ERNIE SAmple aggreGatE},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/PaddlePaddle/PGL/tree/master/examples/erniesage}},
}
```
67 changes: 67 additions & 0 deletions examples/erniesage/README.md
@@ -0,0 +1,67 @@
# Implementing ERNIESage with PGL

[ENG Readme](./README.en.md)

## Background

In many industrial applications, we often encounter a special kind of graph, shown below: the Text Graph. As the name implies, the node attributes of such a graph consist of text, while the construction of the edges provides structural information. In the search scenario, for example, the nodes of a Text Graph can be represented by search queries, web page titles, and web page content, while user feedback and hyperlink information form the edge relations.

<img src="./docs/source/_static/text_graph.png" alt="Text Graph" width="800">

**ERNIESage**, proposed by the PGL team, is short for ERNIE SAmple aggreGatE. The model jointly captures text semantics and graph structure information, effectively improving the performance of Text Graph applications. [**ERNIE**](https://github.com/PaddlePaddle/ERNIE) is Baidu's knowledge-enhanced continual pre-training framework for language understanding.

**ERNIESage** is the result of bringing ERNIE and GraphSAGE together; its structure is shown in the figure below. The main idea is to use ERNIE as the aggregation function (Aggregator) to model the semantic and structural relationships between a center node and its neighbor nodes. ERNIESage models text during the neighbor-aggregation stage: the center node's text is concatenated with the text of all of its neighbors, the pretrained ERNIE model then performs the message aggregation and captures the interactions between the center node and its neighbors, and finally ERNIESage's distinctive attention mask, in which neighbors cannot see each other, together with an independent position embedding scheme makes it straightforward to model the relationships between sentences and between words in a Text Graph.

<img src="./docs/source/_static/ernie_aggregator.png" alt="ERNIESage" width="800">

GraphSAGE with ID features can only model the graph structure, while a standalone ERNIE can only handle text. With PGL as the bridge between graphs and text, **ERNIESage** easily combines the advantages of GraphSAGE and ERNIE. In the Text Graph scenario below, **ERNIESage** performs better than either a standalone ERNIE model or a standalone GraphSAGE model.

<img src="./docs/source/_static/ERNIESage_result.png" alt="ERNIESage_result" width="800">

**ERNIESage** can be implemented very easily under PGL's message passing paradigm. PGL currently provides four versions of the ERNIESage model:

- **ERNIESage v1**: ERNIE is applied to the nodes of the text graph;
- **ERNIESage v2**: ERNIE is applied to the edges of the text graph;
- **ERNIESage v3**: ERNIE is applied to the first-order neighbors and their edges;
- **ERNIESage v4**: ERNIE is applied to the N-th-order neighbors and their edges.

<img src="./docs/source/_static/ERNIESage_v1_4.png" alt="ERNIESage_v1_4" width="800">

## Dependencies
- paddlepaddle>=1.7
- pgl>=1.1

## Data format
The example data ```data.txt``` uses part of NLPCC2016-DBQA; each line has the format "query \t answer".
```text
NLPCC2016-DBQA is an evaluation task organized in 2016 by the NLPCC (Natural Language Processing and Chinese Computing) conference; its goal is to select from the candidates a suitable document that answers the question. [url: http://tcci.ccf.org.cn/conference/2016/dldoc/evagline2.pdf]
```

## How to run

We adopt [PaddlePaddle Fleet](https://github.com/PaddlePaddle/Fleet) as our distributed training framework. ```config/*.yaml``` contains example configurations for training ERNIESage; the ERNIE model checkpoint ```ckpt_path``` and the vocabulary ```ernie_vocab_file``` can be downloaded from the [ERNIE](https://github.com/PaddlePaddle/ERNIE) page.


```sh
# train ERNIESage in distributed GPU mode or single-machine mode
sh local_run.sh config/erniesage_v2_gpu.yaml

# train ERNIESage in distributed CPU mode
sh local_run.sh config/erniesage_v2_cpu.yaml
```

## Hyperparameters

- learner_type: `gpu` or `cpu`; `gpu` uses the fleet Collective mode, `cpu` uses the fleet Transpiler mode.

## Citation
```
@misc{ERNIESage,
author = {PGL Team},
title = {ERNIESage: ERNIE SAmple aggreGatE},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/PaddlePaddle/PGL/tree/master/examples/erniesage}},
}
```
7 changes: 4 additions & 3 deletions examples/erniesage/config/erniesage_v2_cpu.yaml
@@ -4,9 +4,9 @@
learner_type: "cpu"
optimizer_type: "adam"
lr: 0.00005
batch_size: 2
CPU_NUM: 10
epoch: 20
batch_size: 4
CPU_NUM: 16
epoch: 3
log_per_step: 1
save_per_step: 100
output_path: "./output"
@@ -31,6 +31,7 @@ final_fc: true
final_l2_norm: true
loss_type: "hinge"
margin: 0.3
neg_type: "random_neg"

# infer config ------
infer_model: "./output/last"
7 changes: 4 additions & 3 deletions examples/erniesage/config/erniesage_v2_gpu.yaml
@@ -6,9 +6,9 @@ optimizer_type: "adam"
lr: 0.00005
batch_size: 32
CPU_NUM: 10
epoch: 20
log_per_step: 1
save_per_step: 100
epoch: 3
log_per_step: 10
save_per_step: 1000
output_path: "./output"
ckpt_path: "./ernie_base_ckpt"

@@ -31,6 +31,7 @@ final_fc: true
final_l2_norm: true
loss_type: "hinge"
margin: 0.3
neg_type: "random_neg"

# infer config ------
infer_model: "./output/last"
