tb-kafka-0 and tb-node-1 on CrashLoopBackOff status #48

Open
hsanderr opened this issue Sep 9, 2022 · 3 comments

hsanderr commented Sep 9, 2022

I have installed ThingsBoard with the microservices setup on Azure Kubernetes Service (we followed this guide). It worked for a few days, but suddenly I was no longer able to send HTTP requests to the platform. I haven't changed any .yml file. When I run "kubectl get pods", I get:

~$ kubectl get pods
NAME                              READY   STATUS             RESTARTS         AGE
tb-http-transport-0               1/1     Running            0                2d2h
tb-http-transport-1               1/1     Running            0                2d2h
tb-js-executor-776cc56fc5-4wlns   1/1     Running            5 (2d2h ago)     2d2h
tb-js-executor-776cc56fc5-4zlt4   1/1     Running            5 (2d2h ago)     2d2h
tb-js-executor-776cc56fc5-8zds5   1/1     Running            5 (2d2h ago)     2d2h
tb-js-executor-776cc56fc5-hddnr   1/1     Running            5 (2d2h ago)     2d2h
tb-js-executor-776cc56fc5-msl4c   1/1     Running            5 (2d2h ago)     2d2h
tb-kafka-0                        0/1     CrashLoopBackOff   229 (27s ago)    2d2h
tb-node-0                         1/1     Running            3 (2d2h ago)     2d2h
tb-node-1                         0/1     CrashLoopBackOff   531 (113s ago)   2d2h
tb-web-report-5b98458947-qr5cc    1/1     Running            0                2d2h
tb-web-ui-5464b848f9-866x8        1/1     Running            0                2d2h
tb-web-ui-5464b848f9-p8x7r        1/1     Running            0                2d2h
zookeeper-0                       1/1     Running            0                2d2h
zookeeper-1                       1/1     Running            0                2d2h
zookeeper-2                       1/1     Running            0                2d2h

tb-kafka-0 logs:
logs-tb-kafka-0.txt

tb-node-1 logs:
logs-tb-node-1.txt

Can anyone help me with this?

@polarfoxDev

We had problems with Kafka as well. In our case, the storage for the "logs" volume wasn't enough. We fixed it by increasing the storage from 200Mi to several GiB for now (in the "logs" volumeClaimTemplate in thirdparty.yml). It looks pretty stable now, but we are still monitoring it from time to time to see whether it becomes a problem again.
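
For reference, a minimal sketch of what such a change could look like, assuming a standard Kubernetes StatefulSet volumeClaimTemplate; the exact claim name and surrounding structure in thirdparty.yml may differ:

```yaml
# Hypothetical excerpt of the Kafka StatefulSet in thirdparty.yml:
# the "logs" claim is raised from 200Mi to a few GiB so Kafka log
# segments have enough headroom.
volumeClaimTemplates:
  - metadata:
      name: logs
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi   # was 200Mi
```

Note that the volumeClaimTemplates of an already-deployed StatefulSet cannot be edited in place: either the existing PVCs have to be expanded directly (which requires a StorageClass with allowVolumeExpansion: true) or the StatefulSet has to be recreated with the new template.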

@lks-hrsch

Are there any further investigations into this problem? We are facing the same issue with our AKS deployment of thingsboard-pe.

It seems very strange to me, especially given the following config:

value: "js_eval.requests:100:1:delete --config=retention.ms=60000 --config=segment.bytes=26214400 --config=retention.bytes=104857600,tb_transport.api.requests:30:1:delete --config=retention.ms=60000 --config=segment.bytes=26214400 --config=retention.bytes=104857600,tb_rule_engine:30:1:delete --config=retention.ms=60000 --config=segment.bytes=26214400 --config=retention.bytes=104857600"

For more information, here is our storage usage:
[Screenshot from 2023-02-21 showing the volumes' storage usage]

You can see we needed to increase both the logs and the app-logs volumes.

@amarkevich (Contributor)

PR #62
