Remove docker (#2337)
* remove docker from  local deployment

* Update source_ngql_for_quick_start.md

* Update docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md

Co-authored-by: abby.huang <[email protected]>

* update

Co-authored-by: abby.huang <[email protected]>
cooper-lzy and abby-cyber authored Nov 22, 2022
1 parent 505607b commit 2743b36
Showing 3 changed files with 83 additions and 151 deletions.
@@ -18,7 +18,7 @@

- If another version of NebulaGraph has already been deployed on the host with Docker Compose, delete the `nebula-docker-compose/data` directory to avoid compatibility issues.
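
  A minimal cleanup sketch for this case, assuming the previous deployment lives in a directory named `nebula-docker-compose` (adjust the path to your environment):

  ```bash
  # Stop and remove the old containers, then delete the old data directory.
  $ cd nebula-docker-compose
  [nebula-docker-compose]$ docker-compose down
  [nebula-docker-compose]$ rm -rf data
  ```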

## Deploy NebulaGraph

1. Clone the `{{dockercompose.release}}` branch of the `nebula-docker-compose` repository to the host with Git.
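
   A sketch of this step, assuming the upstream repository URL (replace it with your fork or mirror if needed):

   ```bash
   # Clone only the {{dockercompose.release}} branch of the nebula-docker-compose repository.
   $ git clone -b {{dockercompose.release}} https://github.com/vesoft-inc/nebula-docker-compose.git
   ```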

@@ -50,74 +50,83 @@

```bash
[nebula-docker-compose]$ docker-compose up -d
Creating nebuladockercompose_metad0_1 ... done
Creating nebuladockercompose_metad2_1 ... done
Creating nebuladockercompose_metad1_1 ... done
Creating nebuladockercompose_graphd2_1 ... done
Creating nebuladockercompose_graphd_1 ... done
Creating nebuladockercompose_graphd1_1 ... done
Creating nebuladockercompose_storaged0_1 ... done
Creating nebuladockercompose_storaged2_1 ... done
Creating nebuladockercompose_storaged1_1 ... done
```

!!! compatibility

    Starting from NebulaGraph 3.1, Docker Compose automatically starts a container from the NebulaGraph Console image and adds the Storage hosts to the cluster (that is, it runs the `ADD HOSTS` command for you). For earlier versions, see the sketch after the notes below.

!!! Note

    For more information about the services above, see [Architecture overview](../../1.introduction/3.nebula-graph-architecture/1.architecture-overview.md).
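
For deployments created with a release earlier than 3.1, where the Storage hosts are not registered automatically, a hedged sketch of registering them manually from inside the console container; the host names, the port, and the console's `-e` option are assumptions based on the default setup described here:

```bash
# Run a single nGQL statement through NebulaGraph Console to register the Storage hosts.
/ # ./usr/local/bin/nebula-console -u root -p nebula --address=graphd --port=9669 \
    -e 'ADD HOSTS "storaged0":9779,"storaged1":9779,"storaged2":9779'
```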

## Connect to NebulaGraph

There are two ways to connect to NebulaGraph:

- Connect with Nebula Console from outside the containers. Because the container configuration file also fixes the externally mapped port of the Graph service to 9669, you can connect directly through the default port, as shown in the sketch after this list. For details, see [Connect to NebulaGraph](../../2.quick-start/3.quick-start-on-premise/3.connect-to-nebula-graph.md).

- Log in to the container where NebulaGraph Console is installed and then connect to the Graph service. This section describes this method.
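
A minimal sketch of the first method, assuming NebulaGraph Console is installed on the host and the Graph service is published on the host's port 9669:

```bash
# Connect from the host through the mapped default port; any password works while authentication is disabled.
$ nebula-console -u root -p nebula --address=127.0.0.1 --port=9669
```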

1. Run the `docker-compose ps` command to check the name of the NebulaGraph Console container.

```bash
$ docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------
nebuladockercompose_console_1 sh -c sleep 3 && Up
nebula-co ...
......
```

2. Enter the NebulaGraph Console container.

```bash
$ docker exec -it nebuladockercompose_console_1 /bin/sh
/ #
```

3. Connect to NebulaGraph with NebulaGraph Console.

```bash
/ # ./usr/local/bin/nebula-console -u <user_name> -p <password> --address=graphd --port=9669
```

!!! Note

    By default, authentication is disabled, so you can log in only with an existing username (`root` by default) and any password. To enable authentication, see [Authentication](../../7.data-security/1.authentication/1.authentication.md).

4. Check the cluster status.

```bash
nebula> SHOW HOSTS;
+-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
| Host | Port | HTTP port | Status | Leader count | Leader distribution | Partition distribution | Version |
+-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
| "storaged0" | 9779 | 19779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "x.x.x" |
| "storaged1" | 9779 | 19779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "x.x.x" |
| "storaged2" | 9779 | 19779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "x.x.x" |
+-------------+------+-----------+----------+--------------+----------------------+------------------------+---------+
```

Run `exit` twice to exit the container.

## Check the status and ports of the NebulaGraph services

Run the `docker-compose ps` command to list the status and ports of the NebulaGraph services.

!!! note

    By default, NebulaGraph serves clients on port `9669`. To change the port, modify the `docker-compose.yaml` file in the `nebula-docker-compose` directory and then restart the NebulaGraph services.
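
A hedged sketch of that workflow; the exact service name and port mapping inside `docker-compose.yaml` may differ between releases:

```bash
# Locate the current mapping, edit the host side of it, then recreate the affected containers.
[nebula-docker-compose]$ grep -n "9669" docker-compose.yaml
[nebula-docker-compose]$ vim docker-compose.yaml      # change "9669:9669" to "<new_port>:9669"
[nebula-docker-compose]$ docker-compose up -d
```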

```bash
$ docker-compose ps
nebuladockercompose_console_1 sh -c sleep 3 && Up
@@ -133,7 +142,30 @@ nebuladockercompose_storaged1_1 /usr/local/nebula/bin/nebu ... Up 0.0.0
nebuladockercompose_storaged2_1 /usr/local/nebula/bin/nebu ... Up 0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp
```

If a service is abnormal, first identify the name of the abnormal container (for example, `nebuladockercompose_graphd2_1`), and then run `docker ps` to find the corresponding `CONTAINER ID` (`2a6c56c405f5` in this example).

```bash
[nebula-docker-compose]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a6c56c405f5 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp nebuladockercompose_graphd2_1
7042e0a8e83d vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp nebuladockercompose_storaged2_1
18e3ea63ad65 vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp nebuladockercompose_storaged0_1
4dcabfe8677a vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp nebuladockercompose_graphd1_1
a74054c6ae25 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp nebuladockercompose_graphd_1
880025a3858c vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp nebuladockercompose_storaged1_1
45736a32a23a vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49213->9559/tcp, 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp nebuladockercompose_metad0_1
3b2c90eb073e vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp nebuladockercompose_metad2_1
7bb31b7a5b3f vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp nebuladockercompose_metad1_1
```

Finally, log in to the container to troubleshoot the problem:

```bash
[nebula-docker-compose]$ docker exec -it 2a6c56c405f5 bash
[root@2a6c56c405f5 nebula]#
```
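
Besides entering the container, a quick way to narrow down a failing service is to check its logs and the health state reported by Docker; a sketch that reuses the container ID from the example above:

```bash
# Show the most recent log lines, then query the container's health-check status.
[nebula-docker-compose]$ docker logs --tail 100 2a6c56c405f5
[nebula-docker-compose]$ docker inspect --format '{{.State.Health.Status}}' 2a6c56c405f5
```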

## Check the data and logs of the NebulaGraph services

108 changes: 2 additions & 106 deletions docs-2.0/reuse/source_manage-service.md
@@ -76,9 +76,7 @@ $ systemctl <start | stop | restart | status > <nebula | nebula-metad | nebula-g

## Start the NebulaGraph services

### Non-container deployment

For NebulaGraph deployed without containers, run the following command to start the services:

```bash
$ sudo /usr/local/nebula/scripts/nebula.service start all
@@ -104,32 +102,12 @@ $ systemctl enable nebula
```
{{ ent.ent_end }}

### Container deployment

For NebulaGraph deployed with Docker Compose, run the following command in the `nebula-docker-compose/` directory to start the services:

```bash
[nebula-docker-compose]$ docker-compose up -d
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "nebula-docker-compose_nebula-net" with the default driver
Creating nebula-docker-compose_metad0_1 ... done
Creating nebula-docker-compose_metad2_1 ... done
Creating nebula-docker-compose_metad1_1 ... done
Creating nebula-docker-compose_storaged2_1 ... done
Creating nebula-docker-compose_graphd1_1 ... done
Creating nebula-docker-compose_storaged1_1 ... done
Creating nebula-docker-compose_storaged0_1 ... done
Creating nebula-docker-compose_graphd2_1 ... done
Creating nebula-docker-compose_graphd_1 ... done
```
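
If only a single service needs to be restarted, for example after changing its configuration, a hedged sketch using the Compose service name (names such as `graphd`, `metad0`, and `storaged0` come from `docker-compose.yaml` and may differ in your copy):

```bash
# Restart one Compose service instead of the whole cluster.
[nebula-docker-compose]$ docker-compose restart graphd
```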

## Stop the NebulaGraph services

!!! danger

    Do not use `kill -9` to force-terminate the processes. Otherwise, there is a small chance of data loss.
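
For reference, a sketch of stopping the services gracefully with the bundled script instead; the path assumes the default installation directory used elsewhere in this document:

```bash
# Ask all NebulaGraph services to shut down cleanly.
$ sudo /usr/local/nebula/scripts/nebula.service stop all
```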

### Non-container deployment

Run the following command to stop the NebulaGraph services:

@@ -151,41 +129,8 @@
```
{{ ent.ent_end }}

### Container deployment

Run the following command in the `nebula-docker-compose/` directory to stop the NebulaGraph services:

```bash
[nebula-docker-compose]$ docker-compose down
Stopping nebula-docker-compose_graphd_1 ... done
Stopping nebula-docker-compose_graphd2_1 ... done
Stopping nebula-docker-compose_storaged0_1 ... done
Stopping nebula-docker-compose_storaged1_1 ... done
Stopping nebula-docker-compose_graphd1_1 ... done
Stopping nebula-docker-compose_storaged2_1 ... done
Stopping nebula-docker-compose_metad1_1 ... done
Stopping nebula-docker-compose_metad2_1 ... done
Stopping nebula-docker-compose_metad0_1 ... done
Removing nebula-docker-compose_graphd_1 ... done
Removing nebula-docker-compose_graphd2_1 ... done
Removing nebula-docker-compose_storaged0_1 ... done
Removing nebula-docker-compose_storaged1_1 ... done
Removing nebula-docker-compose_graphd1_1 ... done
Removing nebula-docker-compose_storaged2_1 ... done
Removing nebula-docker-compose_metad1_1 ... done
Removing nebula-docker-compose_metad2_1 ... done
Removing nebula-docker-compose_metad0_1 ... done
Removing network nebula-docker-compose_nebula-net
```

!!! Note

    The `docker-compose down -v` command **deletes** all local NebulaGraph data. If you are using a developing or nightly version and run into compatibility issues, try this command.
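
A cautious sketch for that situation: back up the local data directory described earlier before running the destructive reset (the `data` path is an assumption based on the default Compose layout):

```bash
# Keep a copy of the data directory, then remove the containers and volumes.
[nebula-docker-compose]$ cp -r data data.bak
[nebula-docker-compose]$ docker-compose down -v
```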

## Check the NebulaGraph services

### Non-container deployment

Run the following command to check the status of the NebulaGraph services:

```bash
@@ -235,60 +180,11 @@ $ systemctl status nebula
Mar 28 04:13:24 xxxxxx systemd[1]: Started nebula.service.
...
```
{{ ent.ent_end }}

The NebulaGraph services are provided by the Meta service, the Graph service, and the Storage service. The configuration files of these three services are stored in the `etc` directory under the installation directory (`/usr/local/nebula/etc/` by default). You can check the corresponding configuration files to troubleshoot problems.
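
A short sketch of locating those files; the exact file names can vary by version and installation method:

```bash
# List the configuration files of the three services under the default installation path.
$ ls /usr/local/nebula/etc/
nebula-graphd.conf  nebula-metad.conf  nebula-storaged.conf
```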

### Container deployment

Run the following command in the `nebula-docker-compose` directory to check the status of the NebulaGraph services:

```bash
[nebula-docker-compose]$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
nebula-docker-compose_graphd1_1 /usr/local/nebula/bin/nebu ... Up (healthy) 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp, 0.0.0.0:49224->9669/tcp
nebula-docker-compose_graphd2_1 /usr/local/nebula/bin/nebu ... Up (healthy) 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp, 0.0.0.0:49230->9669/tcp
nebula-docker-compose_graphd_1 /usr/local/nebula/bin/nebu ... Up (healthy) 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp, 0.0.0.0:9669->9669/tcp
nebula-docker-compose_metad0_1 ./bin/nebula-metad --flagf ... Up (healthy) 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp, 0.0.0.0:49213->9559/tcp,
9560/tcp
nebula-docker-compose_metad1_1 ./bin/nebula-metad --flagf ... Up (healthy) 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp, 0.0.0.0:49210->9559/tcp,
9560/tcp
nebula-docker-compose_metad2_1 ./bin/nebula-metad --flagf ... Up (healthy) 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp, 0.0.0.0:49207->9559/tcp,
9560/tcp
nebula-docker-compose_storaged0_1 ./bin/nebula-storaged --fl ... Up (healthy) 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp, 9777/tcp, 9778/tcp,
0.0.0.0:49219->9779/tcp, 9780/tcp
nebula-docker-compose_storaged1_1 ./bin/nebula-storaged --fl ... Up (healthy) 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp, 9777/tcp, 9778/tcp,
0.0.0.0:49216->9779/tcp, 9780/tcp
nebula-docker-compose_storaged2_1 ./bin/nebula-storaged --fl ... Up (healthy) 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp, 9777/tcp, 9778/tcp,
0.0.0.0:49227->9779/tcp, 9780/tcp
```

If a service is abnormal, first identify the name of the abnormal container (for example, `nebula-docker-compose_graphd2_1`), and then run `docker ps` to find the corresponding `CONTAINER ID` (`2a6c56c405f5` in this example).

```bash
[nebula-docker-compose]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a6c56c405f5 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp nebula-docker-compose_graphd2_1
7042e0a8e83d vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp nebula-docker-compose_storaged2_1
18e3ea63ad65 vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp nebula-docker-compose_storaged0_1
4dcabfe8677a vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp nebula-docker-compose_graphd1_1
a74054c6ae25 vesoft/nebula-graphd:nightly "/usr/local/nebula/b…" 36 minutes ago Up 36 minutes (healthy) 0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp nebula-docker-compose_graphd_1
880025a3858c vesoft/nebula-storaged:nightly "./bin/nebula-storag…" 36 minutes ago Up 36 minutes (healthy) 9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp nebula-docker-compose_storaged1_1
45736a32a23a vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49213->9559/tcp, 0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp nebula-docker-compose_metad0_1
3b2c90eb073e vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp nebula-docker-compose_metad2_1
7bb31b7a5b3f vesoft/nebula-metad:nightly "./bin/nebula-metad …" 36 minutes ago Up 36 minutes (healthy) 9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp nebula-docker-compose_metad1_1
```

Finally, log in to the container to troubleshoot the problem:

```bash
[nebula-docker-compose]$ docker exec -it 2a6c56c405f5 bash
[root@2a6c56c405f5 nebula]#
```

## Next steps

- [Connect to NebulaGraph](https://docs.nebula-graph.com.cn/{{nebula.release}}/2.quick-start/3.quick-start-on-premise/3.connect-to-nebula-graph/)<!--Use an external link here.-->
4 changes: 4 additions & 0 deletions docs-2.0/reuse/source_ngql_for_quick_start.md
@@ -82,6 +82,10 @@
nebula> CREATE SPACE basketballplayer(partition_num=15, replica_factor=1, vid_type=fixed_string(30));
```

!!! note

    If the error `[ERROR (-1005)]: Host not enough!` is returned, check whether you have [added the Storage hosts](../2.quick-start/3.quick-start-on-premise/3.1add-storage-hosts.md).

2. Run the `SHOW HOSTS` statement to check the partition distribution and make sure it is balanced.
```ngql
