update the running in the cloud guides
JustinaPetr committed Aug 2, 2024
1 parent 41b1fcb commit eae90cf
Showing 4 changed files with 91 additions and 104 deletions.
140 changes: 48 additions & 92 deletions docs/composedb/guides/composedb-server/running-in-the-cloud.mdx
Run a ComposeDB server in the cloud

## Things to Know
- This guide is focused on running in the cloud using Docker and Kubernetes. For local deployment instructions check out [Running Locally](../../guides/composedb-server/running-locally.mdx).
- Interacting with ComposeDB requires running a Ceramic node as an interface for Ceramic applications, the `ceramic-one` binary for data network access, and a Postgres DB. Each of these components should run within a separate Docker container.
- Docker images to run a Ceramic server are built from the [js-ceramic](https://github.com/ceramicnetwork/js-ceramic) repository. Images built from the `main` branch are tagged with `latest`, the git commit hash of the code from which the image was built, and the npm package version of the corresponding [`@ceramicnetwork/cli`](https://www.npmjs.com/package/@ceramicnetwork/cli) release.

:::danger

To run a Ceramic node in production, it is critical to persist the [Ceramic state store](../../../protocol/js-ceramic/guides/ceramic-nodes/running-cloud#ceramic-state-store), [IPFS datastore](https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#datastorespec), and the Postgres database used for the ComposeDB index. The form of storage you choose should also be configured for an emergency recovery with data redundancy, and some form of snapshotting and/or backups. **Loss of this data can result in permanent loss of Ceramic streams and will cause your node to be in a corrupt state.**

Your backup procedure should implement the following order:

1. Snapshot your Postgres instance first
2. State store
3. IPFS block store

Leveraging this order guarantees that the higher-level subsystems won't know about data that the lower-level subsystems are missing in the backup.
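The ordering can be sketched as a script skeleton. The actual commands (`pg_dump`, volume snapshots, etc.) depend entirely on your infrastructure; the function and names below are hypothetical stand-ins that only illustrate the sequence:

```shell
# Hypothetical sketch: each backup step is a stand-in recorded in order.
backup_log=""
backup() { backup_log="${backup_log}${1};"; }

backup "postgres-snapshot"    # 1. snapshot Postgres first
backup "ceramic-state-store"  # 2. then the Ceramic state store
backup "ipfs-block-store"     # 3. IPFS block store last

echo "$backup_log"
```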

:::

## Cloud Requirements
**Supported Operating Systems**

- Linux
- Mac
- Windows

:::note

For Windows, Windows Subsystem for Linux 2 (WSL2) is strongly recommended. Using the Windows command line is not portable and can cause compatibility issues when running the same configuration on a different operating system (e.g. in a Linux-based cloud deployment).

:::

**Compute requirements**

You’ll need sufficient compute resources to power Ceramic, IPFS, and Postgres.

:::note

If you are just getting started with a brand new project, you can start with a smaller instance and scale afterwards.

:::

## Running Ceramic server on Kubernetes

You can run Ceramic Server on Kubernetes on the cloud, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine) or [Amazon Elastic Kubernetes Service](https://aws.amazon.com/eks/).
You can also run Ceramic Server on [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/).

Running Kubernetes on the Cloud means a provider will manage the underlying infrastructure for you. You can also run Kubernetes on your own infrastructure, but that is outside the scope of this guide.

### Running Ceramic server on DigitalOcean Kubernetes

DigitalOcean Kubernetes (DOKS) allows developers to deploy Kubernetes clusters using a simple managed service. The instructions below are also covered in a video walkthrough:

<iframe width="660" height="415" src="https://www.youtube.com/embed/mgwM9c5fWck?si=fWP1D1xRtab5Tz6T" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

Ceramic deployment on DigitalOcean Kubernetes requires two tools:

- [kubectl](https://kubernetes.io/docs/tasks/tools) - the Kubernetes command line tool
- [doctl](https://docs.digitalocean.com/reference/doctl/how-to/install/) - the DigitalOcean command line tool
Once it’s up and running, you are good to continue with the next step.

:::note

When it comes to choosing your cluster capacity, we recommend starting with the most cost-effective option: the smallest cluster size, upgrading later as needed. Also, keep in mind that DigitalOcean offers free credits for new users to start building their projects.

:::
In this section we will focus on deploying Ceramic with ComposeDB Server on DigitalOcean Kubernetes.

1. Clone the `simpledeploy` repository:

```
git clone https://github.com/ceramicstudio/simpledeploy.git
cd simpledeploy/k8s/base/ceramic-one
```

2. Run the following commands to deploy the stack:
```
# Create a namespace for the deployment
kubectl create namespace ceramic-one-0-17-0
# Create the necessary secrets
./scripts/create-secrets.sh
# Apply the deployment
kubectl apply -k .
```

3. It will take a few minutes for the deployment to pull the docker images and start the containers. You can watch the process with the following command:

```bash
kubectl get pods --watch --namespace ceramic-one-0-17-0
```

You will know that your deployment is up and running when all of the processes have a status `Running` as follows:

```bash
NAME READY STATUS RESTARTS AGE
ceramic-one-0 1/1 Running 0 77s
ceramic-one-1 1/1 Running 0 77s
js-ceramic-0 1/1 Running 0 77s
js-ceramic-1 1/1 Running 0 77s
postgres-0 1/1 Running 0 77s
```

Hit `^C` on your keyboard to exit this view.

You can easily access the logs of each of the containers by using the command below and configuring the container name. For example, to access the Ceramic node logs, you can run:

`kubectl logs --follow --namespace ceramic-one-0-17-0 js-ceramic-0`

:::


### Access the Ceramic node using the API

You can use local port forwarding to access the Ceramic node from your local machine. Open a new terminal and run the command below. The port forward stops when the command exits, so keep it running for the rest of this guide.

```bash
kubectl port-forward --namespace ceramic-one-0-17-0 js-ceramic-0 7007:7007
```
Once you run the command you should see the following output in your terminal:

The Ceramic node must be ready to accept connections before you can access it.
The pod's state must be `Running` and the `READY` column must be `1/1`.
You can check the status of the node by running the command below:

$ kubectl get pods --namespace ceramic-one-0-17-0 js-ceramic-0
NAME READY STATUS RESTARTS AGE
js-ceramic-0 1/1 Running 1 (28h ago) 28h

:::

Alive!

### Expose the node endpoint to the internet

The last step is to expose your Ceramic node to the internet so that it’s accessible to your application. This can be done using a DigitalOcean Load Balancer, which comes pre-configured when using the SimpleDeploy scripts.
You can get the `EXTERNAL-IP` of your `js-ceramic` node (as well as `ceramic-one`) using the following command:

```bash
kubectl get svc --namespace ceramic-one-0-17-0 js-ceramic-lb-0
```

The result of this command will be an output similar to the one below. Keep in mind that it might take a few minutes for the EXTERNAL-IP to be configured and change status from `<pending>`:

```bash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
js-ceramic-lb-0   LoadBalancer   10.245.10.130   174.138.109.159   7007:31284/TCP   4m4s
```

This external IP address can now be used for accessing your node. To test it out, copy the external IP address provided above and substitute it in the following health check command:
Alive!
If you wish to direct a domain to your ceramic node and acquire an SSL Certificate, you may follow the steps under [cert-ingress](https://github.com/ceramicstudio/simpledeploy/blob/main/k8s/cert-ingress/README.md) to modify the kubernetes setup. Of course you may use other methods to add a domain name and certificate depending on what provider you wish to use.

### Utilize the Deployed Assets with ComposeDB CLI and GraphiQL Server
Now that you have a Ceramic server deployed, you can utilize the [ComposeDB CLI](../../set-up-your-environment.mdx#composedb) to create models and composites, as well as stand up a GraphiQL server backed by your Ceramic server.

First you will need to install the [ComposeDB CLI](../../set-up-your-environment.mdx#composedb). Next, you will need to set up your environment to properly talk to your server:

```bash
export CERAMIC_URL="http://"$(kubectl get service js-ceramic-lb-0 --namespace ceramic-one-0-17-0 -o json | jq -r '.status.loadBalancer.ingress[0].ip')":7007"
export DID_PRIVATE_KEY=$(kubectl get secrets --namespace ceramic-one-0-17-0 ceramic-admin -o json | jq -r '.data."private-key"' | base64 -d)
```
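The first export above splices the load balancer’s external IP into the node URL. As a minimal sketch with a placeholder IP (not a real deployment), the resulting value looks like this:

```shell
# Placeholder standing in for the EXTERNAL-IP returned by kubectl;
# substitute the address of your own js-ceramic load balancer.
ip="174.138.109.159"
CERAMIC_URL="http://${ip}:7007"
echo "$CERAMIC_URL"
```

Port 7007 is the Ceramic API port used throughout this guide.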

You can now follow the existing guides, adding `--ceramic-url` or `--did-private-key` to your `composedb` calls. For example:

```bash
composedb composite:from-model kjzl6hvfrbw6c5ajfmes842lu09vjxu5956e3xq0xk12gp2jcf9s90cagt2god9 --output=my-first-composite-single.json --ceramic-url=$CERAMIC_URL --did-private-key=$DID_PRIVATE_KEY
```

will create a new composite, utilizing your remote Ceramic server. You can also run Graphiql locally

```bash
composedb graphql:server --graphiql runtime-composite.json --port=5005 --did-private-key=$DID_PRIVATE_KEY
```

You can access the GraphiQL server at [http://localhost:5005/graphql](http://localhost:5005/graphql).

### Where is my data stored?

Each part of the stack (js-ceramic, postgres) has its own [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
You can view the volumes with the following command:

```bash
kubectl get PersistentVolumeClaim --namespace ceramic-one-0-17-0
```

This output includes identifiers for the volume on the cloud provider as well as the size and storage class, which defines the properties of the volume.
```
$ kubectl create secret generic ceramic-admin --from-literal=private-key=<YOUR SECRET>
```
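The `<YOUR SECRET>` value is the admin DID seed, a 32-byte value usually written as 64 hex characters. One common way to generate such a seed is with `openssl` (an assumption here — any cryptographically random 32 bytes will do):

```shell
# Generate 32 random bytes, hex-encoded (64 characters), to use as the seed.
seed=$(openssl rand -hex 32)
echo "$seed"
```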
To view the currently configured admin DID seed, you can use the following command (requires jq):
```
kubectl get secrets --namespace ceramic-one-0-17-0 ceramic-admin -o json | jq -r '.data."private-key"' | base64 -d
```
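The trailing `base64 -d` is needed because Kubernetes stores secret values base64-encoded. A minimal sketch of the round trip with a dummy value (not a real seed):

```shell
# Kubernetes would store the secret value base64-encoded like this...
encoded=$(printf 'my-admin-seed' | base64)
# ...and `base64 -d` recovers the original value, as in the command above.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```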

### How do I connect to the Postgres database?

You can create a session to the postgres database with the following command:

```bash
kubectl exec --namespace ceramic-one-0-17-0 -ti postgres-0 -- psql -U ceramic
```

A `postgres` service is also created and can be exposed locally with port-forwarding:

```bash
kubectl port-forward --namespace ceramic-one-0-17-0 svc/postgres 5432
```

The `ceramic` user password is randomly generated during deployment.
It is also available in the `postgres-auth` secret:

```bash
kubectl --namespace ceramic-one-0-17-0 get secrets postgres-auth -o yaml
```

Here you should get the following output:
kind: Secret
To remove the workload from the cluster, you can delete the namespace. For example:

```bash
kubectl delete namespace ceramic-one-0-17-0
```


You can find the ComposeDB server and IPFS Docker images on Docker Hub.
Below, you can find examples of how you can run Postgres and Ceramic processes using Docker.




### Running Postgres
An example below demonstrates how you can run a Postgres process. Make sure to update the variables to fit your use case:

```bash
docker run -d \
-e NODE_ENV=production \
-e CERAMIC_INDEXING_DB_URI=postgres://username:password@host:5432/dbname \
--name ceramic \
js-ceramic --ipfs-api http://ipfs_ip_address:5101
```

### Editing the `daemon.config.json` file

To have the settings persist in your Ceramic node, edit the `daemon.config.json` file to include the configurations. The default location is `~/.ceramic/daemon.config.json`. For a full file example, see the [Ceramic](../../../protocol/js-ceramic/guides/ceramic-nodes/running-cloud#example-daemonconfigjson) docs.

```bash
...
"ipfs": {
"mode": "remote",
"host": "http://ipfs_ip_address:5001"
"host": "http://ipfs_ip_address:5101"
},
...
```
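Combining this with the Postgres connection string used above, a fuller sketch of the relevant `daemon.config.json` sections might look like the following. The `indexing.db` field name is an assumption based on the `CERAMIC_INDEXING_DB_URI` variable used earlier; check the linked Ceramic docs for the authoritative schema:

```json
{
  "ipfs": {
    "mode": "remote",
    "host": "http://ipfs_ip_address:5101"
  },
  "indexing": {
    "db": "postgres://username:password@host:5432/dbname"
  }
}
```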
20 changes: 19 additions & 1 deletion docs/composedb/guides/composedb-server/running-locally.mdx
The easiest way to run ComposeDB server on your local machine is using Wheel.
- Node.js
- jq
- PostgreSQL (optional, depending on the network)
- [ceramic-one](../../set-up-your-environment.mdx#2-installation) node up and running

Head to [Setup Your Environment](../../set-up-your-environment.mdx#install-the-dependencies) section for more detailed dependency installation instructions.



### Setup

First, install and run the `ceramic-one` binary:
```bash
brew install ceramicnetwork/tap/ceramic-one
```
```bash
ceramic-one daemon
```

Next, download the Wheel:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/ceramicstudio/wheel/main/wheel.sh | bash
```

### Installation

Install and run the `ceramic-one` binary:
```bash
brew install ceramicnetwork/tap/ceramic-one
```
```bash
ceramic-one daemon
```

Install the Ceramic CLI and ComposeDB CLI using npm:

```bash
npm install -g @ceramicnetwork/cli @composedb/cli
```
When you start the daemon using the `ceramic daemon` command, if a configuration
},
"ipfs": {
"mode": "remote",
"host": "http://ipfs_ip_address:5101"
},
"logger": {
"log-level": 2, // 0 is most verbose
Expand Down Expand Up @@ -201,8 +201,7 @@ By default, Ceramic nodes will only index documents they observe using pubsub me

| Name | Description | Default value? |
| --- | --- | --- |
| remote | IPFS running in separate compute process; recommended for production and everything besides early prototyping ||

### Persistent Storage
To run a Ceramic node in production, it is critical to persist the [Ceramic state store](../../../protocol/js-ceramic/guides/ceramic-nodes/running-cloud#ceramic-state-store) and the [IPFS datastore](https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#datastorespec). The form of storage you choose should also be configured for disaster recovery with data redundancy, and some form of snapshotting and/or backups.
