YugabyteDB Helm chart services: LoadBalancer and headless ClusterIP
#postgres #database #kubernetes #yugabytedb

Installing YugabyteDB with the Helm chart creates 4 services. Here is an example from my deployment in AWS, the same as in the previous blog post:

$ kubectl get services -n yb-demo-eu-west-1a

NAME                 TYPE           CLUSTER-IP       PORT(S)
yb-master-ui         LoadBalancer   10.100.227.70    7000:31261/TCP
yb-masters           ClusterIP      None             7000/TCP,7100/TCP
yb-tserver-service   LoadBalancer   10.100.31.106   6379:31669/TCP,9042:31874/TCP,5433:31376/TCP
yb-tservers          ClusterIP      None             9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP

There are two kinds of services (LoadBalancer and ClusterIP) for the two StatefulSets (yb-master, the control plane, and yb-tserver, the data plane).
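
To see the two StatefulSets behind these services, you can list them directly:

$ kubectl get statefulsets -n yb-demo-eu-west-1a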

ClusterIP headless services

The ClusterIP services are created with no IP, which means that they are just DNS entries, not proxies. These are headless services, which can be used with Kubernetes DNS to distribute connections without any intermediate component.

From anywhere in the cluster (cluster.local), even from another namespace (such as yb-demo-eu-west-1c), I can connect to any pod of the yb-tserver StatefulSet with the hostname yb-tservers.yb-demo-eu-west-1a.svc.cluster.local.
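
A quick way to verify that this is pure DNS, with one A record per ready pod, is to resolve the name from a throwaway pod (a minimal sketch, assuming the busybox image's nslookup and a hypothetical pod name dns-check):

$ kubectl run -it --rm --restart=Never --image busybox dns-check -- \
  nslookup yb-tservers.yb-demo-eu-west-1a.svc.cluster.local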

Round-robin to the pods of a StatefulSet

Here is an example, running psql from a pod and connecting several times to this address:

$ kubectl run -it --rm --restart=Never --image postgres psql -- \
  psql -h yb-tservers.yb-demo-eu-west-1a.svc.cluster.local      \
  -p 5433 -U yugabyte -c 'select inet_server_addr()'

 inet_server_addr
------------------
 192.168.11.13
(1 row)

$ kubectl run -it --rm --restart=Never --image postgres psql -- \
  psql -h yb-tservers.yb-demo-eu-west-1a.svc.cluster.local      \
  -p 5433 -U yugabyte -c 'select inet_server_addr()'

 inet_server_addr
------------------
 192.168.20.199
(1 row)

The connections go to different nodes. These are different PostgreSQL backends running in different pods, but they expose the same logical database, because YugabyteDB is a distributed SQL database.

The two IP addresses above are the two pods of the yb-tserver StatefulSet:

$ kubectl get pods -n yb-demo-eu-west-1a -o wide

NAME           READY   STATUS    RESTARTS   AGE    IP               NODE                                          NOMINATED NODE   READINESS GATES
yb-master-0    2/2     Running   0          174m   192.168.2.75     ip-192-168-4-117.eu-west-1.compute.internal   <none>           <none>
yb-tserver-0   2/2     Running   0          174m   192.168.20.199   ip-192-168-4-117.eu-west-1.compute.internal   <none>           <none>
yb-tserver-1   2/2     Running   0          174m   192.168.11.13    ip-192-168-4-117.eu-west-1.compute.internal   <none>           <none>

Note that I deployed the StatefulSets across three availability zones. This ClusterIP service connects within one AZ, which is probably what you want from your application servers, so that they connect in the same AZ. This ensures minimal latency and maximal availability.
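
For example, the application servers in each AZ could point to the headless service of their own namespace. These are hypothetical connection strings, assuming the default yugabyte database and the three namespaces visible later in yb_servers():

postgresql://yugabyte@yb-tservers.yb-demo-eu-west-1a.svc.cluster.local:5433/yugabyte
postgresql://yugabyte@yb-tservers.yb-demo-eu-west-1b.svc.cluster.local:5433/yugabyte
postgresql://yugabyte@yb-tservers.yb-demo-eu-west-1c.svc.cluster.local:5433/yugabyte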

Cluster-aware smart drivers

However, if you use the YugabyteDB smart drivers, they will discover all nodes in all AZs:

$ kubectl run -it --rm --restart=Never --image postgres psql -- \
>   psql -h yb-tservers.yb-demo-eu-west-1a.svc.cluster.local      \
>   -p 5433 -U yugabyte -c 'select host,zone from yb_servers()'
                             host                              |    zone
---------------------------------------------------------------+------------
 yb-tserver-0.yb-tservers.yb-demo-eu-west-1c.svc.cluster.local | eu-west-1c
 yb-tserver-1.yb-tservers.yb-demo-eu-west-1a.svc.cluster.local | eu-west-1a
 yb-tserver-0.yb-tservers.yb-demo-eu-west-1b.svc.cluster.local | eu-west-1b
 yb-tserver-1.yb-tservers.yb-demo-eu-west-1b.svc.cluster.local | eu-west-1b
 yb-tserver-1.yb-tservers.yb-demo-eu-west-1c.svc.cluster.local | eu-west-1c
 yb-tserver-0.yb-tservers.yb-demo-eu-west-1a.svc.cluster.local | eu-west-1a
(6 rows)

This means that, if you want to restrict the connections to one AZ when using the smart drivers, you need to use the topology keys, like aws.eu-west-1.eu-west-1a.
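
With the JDBC smart driver, for example, this could look like the following URL (a sketch, assuming the driver's load-balance and topology-keys connection properties):

jdbc:yugabytedb://yb-tservers.yb-demo-eu-west-1a.svc.cluster.local:5433/yugabyte?load-balance=true&topology-keys=aws.eu-west-1.eu-west-1a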

Endpoints

The ClusterIP headless service yb-tservers connects to the yb-tserver pods, 192.168.11.13 and 192.168.20.199 in my case, for all ports exposed by the Table Servers, 5433 being the YSQL one, which is the PostgreSQL-compatible API.

$ kubectl describe service yb-tservers -n yb-demo-eu-west-1a

Name:              yb-tservers
Namespace:         yb-demo-eu-west-1a
Labels:            app=yb-tserver
                   app.kubernetes.io/managed-by=Helm
                   chart=yugabyte
                   component=yugabytedb
                   heritage=Helm
                   release=yb-demo
                   service-type=headless
Annotations:       meta.helm.sh/release-name: yb-demo
                   meta.helm.sh/release-namespace: yb-demo-eu-west-1a
Selector:          app=yb-tserver
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              http-ui  9000/TCP
Endpoints:         192.168.11.13:9000,192.168.20.199:9000
Port:              http-ycql-met  12000/TCP
TargetPort:        12000/TCP
Endpoints:         192.168.11.13:12000,192.168.20.199:12000
Port:              http-yedis-met  11000/TCP
TargetPort:        11000/TCP
Endpoints:         192.168.11.13:11000,192.168.20.199:11000
Port:              http-ysql-met  13000/TCP
TargetPort:        13000/TCP
Endpoints:         192.168.11.13:13000,192.168.20.199:13000
Port:              tcp-rpc-port  9100/TCP
TargetPort:        9100/TCP
Endpoints:         192.168.11.13:9100,192.168.20.199:9100
Port:              tcp-yedis-port  6379/TCP
TargetPort:        6379/TCP
Endpoints:         192.168.11.13:6379,192.168.20.199:6379
Port:              tcp-yql-port  9042/TCP
TargetPort:        9042/TCP
Endpoints:         192.168.11.13:9042,192.168.20.199:9042
Port:              tcp-ysql-port  5433/TCP
TargetPort:        5433/TCP
Endpoints:         192.168.11.13:5433,192.168.20.199:5433
Session Affinity:  None
Events:            <none>
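
The same endpoints can also be listed in a more compact form:

$ kubectl get endpoints yb-tservers -n yb-demo-eu-west-1a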

LoadBalancer

In addition to the ClusterIP services, the Helm chart also creates a LoadBalancer. I can see it in the AWS console:

(screenshots: the load balancer in the AWS console, and its registered instances)

Here is its description in Kubernetes:

$ kubectl describe service yb-tserver-service -n yb-demo-eu-west-1a

Name:                     yb-tserver-service
Namespace:                yb-demo-eu-west-1a
Labels:                   app=yb-tserver
                          app.kubernetes.io/managed-by=Helm
                          chart=yugabyte
                          component=yugabytedb
                          heritage=Helm
                          release=yb-demo
                          service-type=endpoint
Annotations:              meta.helm.sh/release-name: yb-demo
                          meta.helm.sh/release-namespace: yb-demo-eu-west-1a
Selector:                 app=yb-tserver
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.31.106
IPs:                      10.100.31.106
LoadBalancer Ingress:     a95638155644e470abd19e552bc8ab01-1510239497.eu-west-1.elb.amazonaws.com
Port:                     tcp-yedis-port  6379/TCP
TargetPort:               6379/TCP
NodePort:                 tcp-yedis-port  31498/TCP
Endpoints:                192.168.11.13:6379,192.168.20.199:6379
Port:                     tcp-yql-port  9042/TCP
TargetPort:               9042/TCP
NodePort:                 tcp-yql-port  30927/TCP
Endpoints:                192.168.11.13:9042,192.168.20.199:9042
Port:                     tcp-ysql-port  5433/TCP
TargetPort:               5433/TCP
NodePort:                 tcp-ysql-port  31565/TCP
Endpoints:                192.168.11.13:5433,192.168.20.199:5433
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The LoadBalancer Ingress host is the one I can use to connect from outside the Kubernetes cluster. In this multi-AZ configuration, I have one per AZ. Let's check the round-robin with 100 successive connections:

$ for i in {1..100} ; do
   psql -h a95638155644e470abd19e552bc8ab01-1510239497.eu-west-1.elb.amazonaws.com -p 5433 \
   -t -A -c '
     select inet_server_addr()
    ' ; done | sort | uniq -c

     52 192.168.11.13
     48 192.168.20.199

That's 48% to yb-tserver-0 and 52% to yb-tserver-1.

To create the StatefulSets for this multi-AZ configuration (where each AZ has one yb-master, and each knows all the yb-masters in the cluster), I used overrides with isMultiAz: True. If you don't want the LoadBalancer, you can disable it with enableLoadBalancer: false.
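
A minimal sketch of such an override file and install command follows; a real multi-AZ install needs more values (master addresses, storage, AZ placement), which I leave out here:

# overrides.yaml (excerpt, hypothetical file name)
isMultiAz: True
enableLoadBalancer: false

$ helm install yb-demo yugabytedb/yugabyte -n yb-demo-eu-west-1a -f overrides.yaml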

The values are visible with:

helm show all yugabytedb/yugabyte

And, in case there are not enough documentation and comments there, the template is in yugabyte/charts service.yaml.

请注意,Helm图是在Kubernetes上安装Yugabytedb的维护和建议的方法。不需要操作员,因为数据库本身是自我修复,自主自动修复自身自主,而无需其他组件。您可以通过缩放状态组合和连接,SQL处理和数据自动重新平衡。当豆荚倒下时,其余的人进行必要的筏领导者选举,以继续提供一致的阅读和写入。