I haven't used Persistent Volumes with Hummingbot; I set up monitoring infrastructure to recognize and cancel "dead" orders instead. I don't see why a persistent volume wouldn't work, though.
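The details of that monitoring setup aren't shown in this thread. One way to build something similar on the same cluster is a Kubernetes CronJob that periodically sweeps the exchange for orders no running bot is tracking; a minimal sketch, assuming a hypothetical cancel_stale_orders.py script (not part of Hummingbot) and a Secret holding the exchange API keys:

# Sketch only: the actual monitoring setup isn't shown in this thread.
# Assumes a hypothetical image whose cancel_stale_orders.py queries the
# exchange's open orders and cancels any that no running bot is tracking.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cancel-stale-orders
  namespace: hummingbot
spec:
  schedule: "*/5 * * * *"      # sweep every five minutes
  concurrencyPolicy: Forbid    # never run two sweeps at once
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: order-sweeper
              image: example.com/order-sweeper:latest   # hypothetical image
              command: ["python3", "/app/cancel_stale_orders.py"]
              envFrom:
                - secretRef:
                    name: kucoin-api-credentials        # hypothetical Secret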
Hi,
Thank you for your message. I am struggling to find a solution to properly manage all hanging orders once the K8s pod is reset automatically by the cluster or deleted manually when the pods are retired.
Since there is no way to close the open orders when shutting down, how are you able to recognise dead orders and kill them?
Are you using an external solution?
Thanks
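One partial mitigation at the Kubernetes level, sketched here without having verified Hummingbot's signal handling, is to extend the pod's termination grace period, so that a strategy which does cancel orders on shutdown has time to finish before the kubelet sends SIGKILL. The fragment below would go into .spec.template.spec of the StatefulSet shown further down:

# Fragment for .spec.template.spec of the StatefulSet shown further down.
# Kubernetes sends SIGTERM first; whether Hummingbot cancels open orders on
# SIGTERM depends on the strategy and version, so treat this as best-effort.
terminationGracePeriodSeconds: 120   # default is 30 seconds
containers:
  - name: ku-pmm-avax-usdt-11
    lifecycle:
      preStop:
        exec:
          # Placeholder: replace with whatever clean-shutdown command the
          # image supports; "sleep" merely delays SIGTERM a little.
          command: ["/bin/sh", "-c", "sleep 10"]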
Hi stalkopat,
I have been working with this for a while, but in order to persist the storage (so that when the pods are restarted for any reason, HB can recover the existing orders) I cannot figure out how to build the YAML file.
Example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ku-pmm-avax-usdt-11-sc
provisioner: kubernetes.io/gce-pd
volumeBindingMode: Immediate
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-standard
  fstype: ext4
  replication-type: none
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ku-pmm-avax-usdt-11-pv-c
  namespace: hummingbot
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: ku-pmm-avax-usdt-11-sc
...
...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ku-pmm-avax-usdt-11
  namespace: hummingbot
spec:
  replicas: 1
  serviceName: ku-pmm-avax-usdt-11
  selector:
    matchLabels:
      app: ku-pmm-avax-usdt-11-hummingbot
  template:
    metadata:
      labels:
        app: ku-pmm-avax-usdt-11-hummingbot
    spec:
      serviceAccountName: ku-pmm-avax-usdt-11-sa
      containers:
        - name: ku-pmm-avax-usdt-11
          image: bgtcapital/hummingbot_pg:latest
          resources:
            limits:
              cpu: 250m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
          imagePullPolicy: Always
          tty: true
          stdin: true
          command: ["/bin/bash"]
          args:
            - -c
            - >
              cp /readonly-conf/.password_verification /conf;
              cp /readonly-conf/conf_client.yml /conf;
              cp /readonly-conf/conf_fee_overrides.yml /conf;
              cp /readonly-conf/hummingbot_logs.yml /conf;
              cp /readonly-conf/kucoin.yml /conf/connectors;
              cp /readonly-conf/ku-pmm-avax-usdt-11.yml /conf/strategies;
              cp /readonly-conf/spreads_adjusted_on_volatility_script.py /pmm_scripts;
              /home/hummingbot/miniconda3/envs/$(head -1 setup/environment-linux.yml | cut -d' ' -f2)/bin/python3
              bin/hummingbot_quickstart.py
              -p Whitehole001
              -f ku-pmm-avax-usdt-11.yml
              --auto-set-permissions $(id -u hummingbot):$(id -g hummingbot)
          volumeMounts:
            - name: hb
              mountPath: "/home"
            - name: config
              mountPath: "/readonly-conf"
      volumes:
        - name: hb
          persistentVolumeClaim:
            claimName: ku-pmm-avax-usdt-11-pv-c
            readOnly: false
        - name: config
          configMap:
            name: ku-pmm-avax-usdt-11-config
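A side note on the StatefulSet above: instead of pre-creating the PersistentVolumeClaim, a StatefulSet can create one claim per replica itself through volumeClaimTemplates. A sketch reusing the names from this manifest (the manual hb entry under volumes would then be dropped):

# Alternative: let the StatefulSet manage the claim. The template name "hb"
# must match the name used in the container's volumeMounts; the manual
# "hb" persistentVolumeClaim entry under .spec.template.spec.volumes goes away.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ku-pmm-avax-usdt-11
  namespace: hummingbot
spec:
  # ... replicas, serviceName, selector and template as above ...
  volumeClaimTemplates:
    - metadata:
        name: hb
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ku-pmm-avax-usdt-11-sc
        resources:
          requests:
            storage: 3Gi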
To me it looks as if the mountPath is wrong:

volumeMounts:
  - name: hb
    mountPath: "/home"
Do you have experience adding a persistent volume to the pod, so that orders can be recovered automatically?
Thanks