OpenStack Series (4): Controller Node Deployment 04 [HAProxy Installation]

In this cloud setup, each HAProxy instance binds its frontends to the virtual IP address, and the haproxy configuration file lists the real backend instance IPs across which requests are load balanced. With the load balancer in place, traffic can fail over quickly to another node when a single node fails.

1) Install HAProxy
Install HAProxy on each of the three controller nodes:

yum -y install haproxy
systemctl enable haproxy.service
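
You can confirm the installed version with:

haproxy -v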

2) Configure HAProxy logging through rsyslog (perform this on all three nodes)

cd /etc/rsyslog.d/
vim haproxy.conf

Add:

# load the UDP input module and listen for syslog messages on UDP port 514
$ModLoad imudp
$UDPServerRun 514
# template that keeps only the raw HAProxy message text
$template Haproxy,"%rawmsg%\n"
# HAProxy traffic logs and status/notice logs go to separate files
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
# discard the remaining local0 messages so later rules (e.g. /var/log/messages) ignore them
local0.* ~
Save the file, then restart and enable rsyslog:

systemctl restart rsyslog.service
systemctl enable rsyslog.service
systemctl status rsyslog.service
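
Optionally, send a test message to the local0 facility to confirm it lands in the new log file:

logger -p local0.info -t haproxy "rsyslog test message"
tail -n 1 /var/log/haproxy.log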

3) Configure haproxy.cfg on all three nodes

cd /etc/haproxy/
mv haproxy.cfg haproxy.cfg.orig
vim haproxy.cfg

Add the following content (remember to change the IP addresses to match your environment):

global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 16000
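  # note: this chroot directory must already exist or haproxy will fail to start
  # (the CentOS package uses /var/lib/haproxy by default; create /usr/share/haproxy if you keep this path)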
  chroot /usr/share/haproxy
  user haproxy
  group haproxy
  daemon

defaults
  log global
  mode http
  option tcplog
  option dontlognull
  retries 3
  option redispatch
  maxconn 10000
  timeout connect 5000
  timeout client 50000
  timeout server 50000

frontend stats-front
  bind *:8088
  mode http
  default_backend stats-back

backend stats-back
  mode http
  balance source
  stats uri /haproxy/stats
  stats auth admin:yjscloud

listen RabbitMQ-Server-Cluster
  bind 192.168.0.168:56720
  mode  tcp
  balance roundrobin
  option  tcpka
  server controller1 controller1:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen RabbitMQ-Web
  bind 192.168.0.168:15673
  mode  tcp
  balance roundrobin
  option  tcpka
  server controller1 controller1:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen Galera-Cluster
  bind 192.168.0.168:3306
  balance leastconn
  mode tcp
  option tcplog
  option httpchk
  server controller1 controller1:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
  server controller2 controller2:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
  server controller3 controller3:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3

listen keystone_admin_cluster
  bind 192.168.0.168:35357
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller2 controller2:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller3 controller3:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen keystone_public_internal_cluster
  bind 192.168.0.168:5000
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller2 controller2:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller3 controller3:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen Memcache_Servers
  bind 192.168.0.168:22122
  balance roundrobin
  mode   tcp
  option  tcpka
  server controller1 controller1:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller2 controller2:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller3 controller3:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen dashboard_cluster
  bind 192.168.0.168:80
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:8080 check inter 2000 fall 3
  server controller2 controller2:8080 check inter 2000 fall 3
  server controller3 controller3:8080 check inter 2000 fall 3

listen glance_api_cluster
  bind 192.168.0.168:9292
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen glance_registry_cluster
  bind 192.168.0.168:9090
  balance roundrobin
  mode   tcp
  option  tcpka
  server controller1 controller1:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_compute_api_cluster
  bind 192.168.0.168:8774
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova-metadata-api_cluster
  bind 192.168.0.168:8775
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_vncproxy_cluster
  bind 192.168.0.168:6080
  balance  source
  option  tcpka
  option  tcplog
  server controller1 controller1:6080 check inter 2000 rise 2 fall 5
  server controller2 controller2:6080 check inter 2000 rise 2 fall 5
  server controller3 controller3:6080 check inter 2000 rise 2 fall 5

listen neutron_api_cluster
  bind 192.168.0.168:9696
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen cinder_api_cluster
  bind 192.168.0.168:8776
  balance  source
  option  httpchk
  option  httplog
  option  httpclose
  server controller1 controller1:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
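
Before distributing the file, you can check it for syntax errors (the -c flag only validates the configuration, it does not start the service):

haproxy -c -f /etc/haproxy/haproxy.cfg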

Copy the configuration file to the other two nodes and restart the haproxy service:

scp -p /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg
scp -p /etc/haproxy/haproxy.cfg controller3:/etc/haproxy/haproxy.cfg

systemctl restart haproxy.service
systemctl status haproxy.service

Parameter notes (an annotated example follows the list):

  • inter: interval between two consecutive health checks, in milliseconds by default (2000 if not specified); fastinter and downinter can be used to shorten this interval depending on the server's state
  • rise: number of consecutive successful checks needed before a server that was DOWN is considered UP again
  • fall: number of consecutive failed checks needed before a server that was UP is considered DOWN
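
As an illustration, the health-check options on one of the RabbitMQ server lines above read as follows:

server controller1 controller1:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
#   check         enable health checks for this backend server
#   inter 10s     check every 10 seconds while the server is UP
#   fastinter 2s  check every 2 seconds while the server is transitioning between states
#   downinter 3s  check every 3 seconds while the server is DOWN
#   rise 3        three consecutive successful checks bring the server back UP
#   fall 3        three consecutive failed checks mark the server DOWN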

4) Configure HAProxy to monitor the Galera database cluster

If the database cluster cannot start properly, restart MySQL on all three nodes.
Start MariaDB on the other two nodes first:

systemctl start mariadb.service
systemctl status mariadb.service

Finally, after the other two nodes have started successfully, go back to the first node and run:

pkill -9 mysql
pkill -9 mysql
systemctl start mariadb.service
systemctl status mariadb.service

On controller1, log in to MySQL and create the clustercheck user:

grant process on *.* to 'clustercheckuser'@'localhost' identified by 'clustercheckpassword!';
flush privileges;
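
To confirm that the account works and that the node is synced (wsrep_local_state = 4 means Synced), you can run a quick check from the shell:

mysql -uclustercheckuser -p'clustercheckpassword!' -e "SHOW STATUS LIKE 'wsrep_local_state%';"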

On each of the three nodes, create the clustercheck file containing the clustercheckuser name and password:

vim /etc/sysconfig/clustercheck

Add:

MYSQL_USERNAME=clustercheckuser
MYSQL_PASSWORD=clustercheckpassword!
MYSQL_HOST=localhost
MYSQL_PORT=3306

Make sure the /usr/bin/clustercheck script exists. If it does not, download one from the internet and place it under /usr/bin, remembering to make it executable with chmod +x /usr/bin/clustercheck.
The purpose of this script is to let HAProxy monitor the state of the Galera cluster.
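
For reference, the widely used clustercheck script behaves roughly like the simplified sketch below: it reads the credentials from /etc/sysconfig/clustercheck and answers with an HTTP status line that xinetd and HAProxy can interpret (the real script also handles cases such as donor and read-only nodes):

#!/bin/bash
# simplified sketch of clustercheck, not the full script
source /etc/sysconfig/clustercheck
STATE=$(mysql -u"$MYSQL_USERNAME" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -P"$MYSQL_PORT" \
        -Nse "SHOW STATUS LIKE 'wsrep_local_state';" 2>/dev/null | awk '{print $2}')
if [ "$STATE" = "4" ]; then   # 4 = Synced
    printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nGalera node is synced.\r\n'
else
    printf 'HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\n\r\nGalera node is not synced.\r\n'
fi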

scp -p /usr/bin/clustercheck controller2:/usr/bin/clustercheck
scp -p /usr/bin/clustercheck controller3:/usr/bin/clustercheck

On controller1, test the health check that HAProxy will rely on by running the script directly:

clustercheck
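
On a healthy, synced node the output should look roughly like this (the exact wording depends on the clustercheck version):

HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close

Percona XtraDB Cluster Node is synced.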


Monitor the Galera service with xinetd (install xinetd on all three nodes):

yum -y install xinetd
vim /etc/xinetd.d/mysqlchk

Add the following content:

# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
  disable = no
  flags = REUSE
  socket_type = stream
  port = 9200
  wait = no
  user = nobody
  server = /usr/bin/clustercheck
  log_on_failure += USERID
  only_from = 0.0.0.0/0
  # recommended to put the IPs that need
  # to connect exclusively (security purposes)
  per_source = UNLIMITED
}

Copy the configuration file to the other nodes:

scp -p /etc/xinetd.d/mysqlchk controller2:/etc/xinetd.d/mysqlchk
scp -p /etc/xinetd.d/mysqlchk controller3:/etc/xinetd.d/mysqlchk

Register the check port in /etc/services:

vim /etc/services

Append the following line at the end of the file:

mysqlchk        9200/tcp                # mysqlchk

scp -p /etc/services controller2:/etc/services
scp -p /etc/services controller3:/etc/services

Restart the xinetd service:

systemctl enable xinetd.service
systemctl restart xinetd.service
systemctl status xinetd.service
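
Once xinetd is running, the check endpoint on port 9200 can be queried over HTTP from any node, for example:

curl -i http://controller1:9200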

5) Adjust kernel network parameters on all three nodes

echo 'net.ipv4.ip_nonlocal_bind = 1'>>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1'>>/etc/sysctl.conf
sysctl -p

The first parameter allows HAProxy to bind to an address that does not belong to a local network interface (the VIP may currently be held by another node).
The second parameter enables kernel packet forwarding, which is disabled by default.
Note: without these two settings, the HAProxy service will fail to start on the second and third controller nodes.
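
You can verify that both values are active with:

sysctl net.ipv4.ip_nonlocal_bind net.ipv4.ip_forward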

6) Start the HAProxy service on all three nodes

systemctl restart haproxy.service
systemctl status haproxy.service

7) Access the HAProxy web statistics page

http://192.168.0.168:8088/haproxy/stats     (log in with admin/yjscloud)

The Galera cluster service is now being monitored successfully.


|| Copyright Notice
Author: 废权
Link: https://blog.yjscloud.com/archives/100
Statement: Unless otherwise stated, this is an original article and represents the author's personal views only. Copyright belongs to 《废权的博客》. Reposting is welcome; please keep the link to the original article.