feat(testdrive): added cardano2dgraph simple example #395

Merged
45 changes: 45 additions & 0 deletions testdrive/cardano2dgraph/README.md
@@ -0,0 +1,45 @@
# Cardano => dgraph Testdrive

## Introduction

This is a reference implementation that shows how _Oura_ can be leveraged to read from a Cardano relay node and output events via HTTP to a remote endpoint, in this case a dgraph instance, passing through a payload transformer that reshapes Oura events into the payload dgraph expects.

Note that the payload transformer [(comcast/eel)](https://github.com/Comcast/eel) runs multiple replicas (load balanced by haproxy) to achieve higher throughput. Although it manages its own buffer, handles failed requests to dgraph, and tries to resubmit them, events can be lost if any of these replicas is restarted unexpectedly.
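
To make the transformation target concrete, here is a minimal sketch of the kind of JSON mutation dgraph's HTTP API accepts on its `/mutate` endpoint. The predicate names (`fingerprint`, `variant`, `block_number`) are only assumptions about what the eel transformation might emit, and the port assumes the `dgraph-public` service has been port-forwarded to `http://localhost:8081` as described in the "Access dgraph" section below:

```
curl -s -X POST 'http://localhost:8081/mutate?commitNow=true' \
  -H 'Content-Type: application/json' \
  -d '{
        "set": [
          {
            "fingerprint": "4490688.block.aa83acbf",
            "variant": "Block",
            "block_number": 4490688
          }
        ]
      }'
```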

## Prerequisites

- K8s Cluster
- kubectl
- Skaffold
- kustomize
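
A quick, optional way to confirm the tools are available before deploying:

```
kubectl version --client
skaffold version
kustomize version
```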

## Deployment

Create a k8s namespace for the testdrive:

```
kubectl create namespace cardano2dgraph
```

Deploy the resources:

```
skaffold run --namespace cardano2dgraph
```
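
Optionally, watch the pods until everything is running:

```
kubectl get pods -n cardano2dgraph -w
```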

Check events going through:
```
kubectl logs -n cardano2dgraph -f -l app=oura-2-dgraph-etl
```
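
Oura also exposes Prometheus metrics on port `9186` (see the `[metrics]` section in `oura.yaml`). A quick way to peek at them, assuming a port-forward to the deployment:

```
kubectl port-forward -n cardano2dgraph deploy/oura 9186:9186 &
curl -s http://localhost:9186/metrics | head
```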

## Access dgraph

Expose the `dgraph` API (it will be available at http://localhost:8081):
```
kubectl port-forward -n cardano2dgraph svc/dgraph-public 8081:8080
```
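
With the port-forward in place you can check the instance health and run an ad-hoc DQL count; the `fingerprint` predicate used below is only an assumption about the data the transformer writes:

```
curl -s http://localhost:8081/health

curl -s -X POST http://localhost:8081/query \
  -H 'Content-Type: application/dql' \
  -d '{ total(func: has(fingerprint)) { count(uid) } }'
```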

Optionally, use the [dgraph/ratel](https://github.com/dgraph-io/ratel) web UI to access `dgraph` (open http://localhost:8001 and use http://localhost:8081 as the dgraph endpoint):
```
docker run --rm -d -p 8001:8000 dgraph/ratel:latest
```
99 changes: 99 additions & 0 deletions testdrive/cardano2dgraph/k8s/dgraph.yaml
@@ -0,0 +1,99 @@
# This is the service that should be used by the clients of Dgraph to talk to the cluster.
apiVersion: v1
kind: Service
metadata:
  name: dgraph-public
  labels:
    app: dgraph
spec:
  type: ClusterIP
  ports:
    - port: 5080
      targetPort: 5080
      name: grpc-zero
    - port: 6080
      targetPort: 6080
      name: http-zero
    - port: 8080
      targetPort: 8080
      name: http-alpha
    - port: 9080
      targetPort: 9080
      name: grpc-alpha
  selector:
    app: dgraph
---
# This StatefulSet runs 1 pod with one Zero container and one Alpha container.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph
spec:
  serviceName: "dgraph"
  replicas: 1
  selector:
    matchLabels:
      app: dgraph
  template:
    metadata:
      labels:
        app: dgraph
    spec:
      containers:
        - name: zero
          image: dgraph/dgraph:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5080
              name: grpc-zero
            - containerPort: 6080
              name: http-zero
          volumeMounts:
            - name: datadir
              mountPath: /dgraph
          command:
            - bash
            - "-c"
            - |
              set -ex
              dgraph zero --survive filesystem --my=$(hostname -f):5080
        - name: alpha
          image: dgraph/dgraph:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: http-alpha
            - containerPort: 9080
              name: grpc-alpha
          volumeMounts:
            - name: datadir
              mountPath: /dgraph
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - bash
            - "-c"
            - |
              set -ex
              dgraph alpha --survive filesystem --my=$(hostname -f):7080 --zero dgraph-0.dgraph.${POD_NAMESPACE}.svc.cluster.local:5080 --security whitelist=0.0.0.0/0
      terminationGracePeriodSeconds: 60
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: datadir
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 5Gi
@@ -0,0 +1 @@
charts
11 changes: 11 additions & 0 deletions testdrive/cardano2dgraph/k8s/kustomize-haproxy/helm-values.yaml
@@ -0,0 +1,11 @@
controller:
  extraArgs:
    - --namespace-whitelist=cardano2dgraph
  service:
    type: ClusterIP
    enablePorts:
      http: true
      https: false
      stat: false
defaultBackend:
  enabled: false
@@ -0,0 +1,12 @@
helmCharts:
  - name: kubernetes-ingress
    repo: https://haproxytech.github.io/helm-charts
    version: 1.21.1
    releaseName: init0
    namespace: cardano2dgraph
    valuesFile: helm-values.yaml

#resources:
#- ../dgraph.yaml
#- ../oura.yaml
#- ../payload-transformer.yaml
101 changes: 101 additions & 0 deletions testdrive/cardano2dgraph/k8s/oura.yaml
@@ -0,0 +1,101 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oura
data:
  disabled: |-
    [source.intersect]
    type = "Tip"

    [source.intersect]
    type = "Origin"

    [source.intersect]
    type = "Fallbacks"
    value = [
      [4492799, "f8084c61b6a238acec985b59310b6ecec49c0ab8352249afd7268da5cff2a457"],
      [4490688, "aa83acbf5904c0edfe4d79b3689d3d00fcfc553cf360fd2229b98d464c28e9de"], # epoch 208 first slot
      [4490687, "f8084c61b6a238acec985b59310b6ecec49c0ab8352249afd7268da5cff2a457"]  # epoch 207 last slot
    ]

  daemon.toml: |-
    [source]
    type = "N2N"
    address = ["Tcp", "europe.relays-new.cardano-mainnet.iohk.io:3001"]
    magic = "mainnet"

    [source.intersect]
    type = "Origin"

    [source.mapper]
    include_transaction_details = true

    [[filters]]
    type = "Fingerprint"

    [sink]
    type = "Webhook"
    url = "http://init0-kubernetes-ingress/v1/events"
    timeout = 3000
    max_retries = 30
    backoff_delay = 5000
    [sink.headers]
    Host = "oura-2-dgraph-etl.local"

    [cursor]
    type = "File"
    path = "/var/oura/cursor"

    [metrics]
    address = "0.0.0.0:9186"
    endpoint = "/metrics"

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oura
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oura
  labels:
    app: oura
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oura
  template:
    metadata:
      labels:
        app: oura
    spec:
      containers:
        - name: main
          image: ghcr.io/txpipe/oura:v1.6.0
          env:
            - name: "RUST_LOG"
              value: "warn"
          args:
            - "daemon"
          volumeMounts:
            - mountPath: /etc/oura
              name: oura-config
            - mountPath: /var/oura
              name: oura-var
      volumes:
        - name: oura-config
          configMap:
            name: oura
        - name: oura-var
          persistentVolumeClaim:
            claimName: oura