What is the recommended way to connect to, for example, the 3 pods of a YugabyteDB cluster when the client is in-cluster?
Since the headless service offers a DNS name for each pod, is it possible to enter all 3 hosts in the connection string?
Is that supported by the YugabyteDB driver? By GORM? (Most likely not, since GORM was made for Postgres, and Postgres does not have active-active natively, relying instead on external services like pgpool or repmgr.)
What happens when one of the pods goes away? Who decides to stop using that IP address?
Is it best to use pgpool?
Example of headless-service hostnames: yuga-sts-0.headless-svc:5432,yuga-sts-1.headless-svc:5432,yuga-sts-2.headless-svc:5432
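For reference, libpq-style connection strings (and the Go pgx driver, which follows the same rules) do accept a comma-separated list of hosts and fall back to the next host when a connection attempt fails, so a multi-host DSN along these lines should be possible. Below is a minimal sketch that just builds such a DSN from the pod hostnames above; the user and database names are placeholders, and whether YugabyteDB recommends this over its own smart driver is exactly the question being asked:

```go
package main

import (
	"fmt"
	"strings"
)

// multiHostDSN builds a libpq-style keyword/value DSN listing every pod of
// the headless service. libpq (PostgreSQL 10+) and Go's pgx accept
// comma-separated hosts and try them in order until one connects.
func multiHostDSN(hosts []string, port int, user, db string) string {
	return fmt.Sprintf("host=%s port=%d user=%s dbname=%s",
		strings.Join(hosts, ","), port, user, db)
}

func main() {
	hosts := []string{
		"yuga-sts-0.headless-svc",
		"yuga-sts-1.headless-svc",
		"yuga-sts-2.headless-svc",
	}
	// Placeholder credentials; a real deployment would supply its own.
	fmt.Println(multiHostDSN(hosts, 5432, "yugabyte", "yugabyte"))
}
```

Note that plain libpq fallback only helps at connection time; it does not load-balance established connections across the pods.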
For example, in the case of pgpool, these 3 hosts would be passed as an env var if the target were a Postgres cluster:
value: 0:postgres-sts-0.postgres-headless-svc:5432,1:postgres-sts-1.postgres-headless-svc:5432,2:postgres-sts-2.postgres-headless-svc:5432
It's not too hard to make a Go client that manages all this by itself (create 3 connections, use any of them for each transaction, recognize that a connection is down and use the other 2, watch the one that went down and try to use it again, etc.), but it would be better if it were supported by YugabyteDB, the way, for example, Redis allows multiple hosts in the connection string.
When I read about this, the answer is often to use an external load balancer, but I think this is different - my question is about clients that are internal to the cluster. I am basically describing a smart load balancer (like pgpool) that knows where to send the writes and where to send the reads (for YugabyteDB that distinction is not needed, since all nodes accept writes, but it still needs to be smart and stop using pods that are not available).
A headless service solves most of this problem, but the client still needs to pg-ping each host regularly to make sure they all work, or just wait for an error and try the other hosts. But I am just guessing; I would prefer to use the YugabyteDB-supported solution.
thanks