Deploying a zookeeper + kafka cluster with DIGEST-MD5-based SASL-SSL authentication

Comparison

Because 3.5.x is still in beta, here is a brief security-focused comparison between 3.4.x and 3.5.x:

  1. 3.5.0 adds Jetty (the AdminServer) as a replacement for the four-letter-word commands. It no longer reuses zookeeper's client port but listens on port 8080 by default, so that port can be firewalled or put behind an authenticating nginx proxy
  2. 3.5.3 adds the 4lw.commands.whitelist parameter; leaving it unset disables the four-letter-word commands entirely, preventing anonymous access to zookeeper information
  3. 3.5.0 adds secureClientPort, which opens an ssl port for client-server communication
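As a rough sketch, those three features map onto zoo.cfg entries like the following (illustrative values only; the settings actually used in this post appear in the deployment script further down):

```properties
# AdminServer (Jetty); listens on port 8080 by default -- firewall it or proxy it
admin.enableServer=true
admin.serverPort=8080
# Four-letter-word whitelist; when unset, all four-letter commands are refused
4lw.commands.whitelist=srvr, ruok
# TLS port for client connections, alongside (or instead of) the plaintext clientPort
secureClientPort=2281
```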

Resources

Server information

Server IP  OS version  JDK version  Notes
10.22.0.70 CentOS 7.6 1.8.0_191 Node 1; zookeeper id: 1, kafka broker id: 0, service user: app
10.22.0.71 CentOS 7.6 1.8.0_191 Node 2; zookeeper id: 2, kafka broker id: 1, service user: app
10.22.0.72 CentOS 7.6 1.8.0_191 Node 3; zookeeper id: 3, kafka broker id: 2, service user: app

zookeeper information

Item  Value
zookeeper version  3.5.4-beta
zookeeper install directory  /data/app/zookeeper-3.5.4-beta
zookeeper config directory  /data/app/zookeeper-3.5.4-beta/conf
zookeeper log directory  /data/app/zookeeper-3.5.4-beta/logs
zookeeper data directory  /data/app/zookeeper-3.5.4-beta/data
zookeeper install directory symlink  /data/app/zookeeper

kafka information

Item  Value
kafka version  2.12-2.1.0
kafka install directory  /data/app/kafka_2.12-2.1.0
kafka config directory  /data/app/kafka_2.12-2.1.0/config
kafka log directory  /data/app/kafka_2.12-2.1.0/logs
kafka install directory symlink  /data/app/kafka

Generating certificates

  • Execution node: run on any one node

  • Script:

#!/bin/bash

# This part only needs to run on node 1;
# after generating, copy the certificate directory to the other two nodes
ssl_keys_dir=/data/app/ssl
mkdir -p ${ssl_keys_dir}
cd ${ssl_keys_dir}
# Generate the server keystore (private key and certificate)
keytool -keystore server.keystore.jks -alias FULLSTACKMEMO -validity 3650 -keyalg RSA -storepass testpass -keypass testpass -genkey -deststoretype pkcs12 -dname "C=CN,ST=GD,L=GZ,O=x22x22,OU=x22x22,CN=FULLSTACKMEMO.COM"
# Generate the client keystore (private key and certificate)
keytool -keystore client.keystore.jks -alias FULLSTACKMEMO -validity 3650 -keyalg RSA -storepass testpass -keypass testpass -genkey -deststoretype pkcs12 -dname "C=CN,ST=GD,L=GZ,O=x22x22,OU=x22x22,CN=FULLSTACKMEMO.COM"
# Create the CA certificate
openssl req -new -x509 -keyout ca.key -out ca.crt -days 3650 -passout pass:testpass -subj "/C=CN/ST=GD/L=GZ/O=x22x22/OU=x22x22/CN=FULLSTACKMEMO.COM"
# Import the CA certificate into the server truststore
keytool -keystore server.truststore.jks -alias FULLSTACKMEMOCARoot -import -noprompt -trustcacerts -file ca.crt -storepass testpass
# Import the CA certificate into the client truststore
keytool -keystore client.truststore.jks -alias FULLSTACKMEMOCARoot -import -noprompt -trustcacerts -file ca.crt -storepass testpass
# Generate certificate signing requests from the server and client keystores
keytool -keystore server.keystore.jks -alias FULLSTACKMEMO -certreq -file cert-file -storepass testpass
keytool -keystore client.keystore.jks -alias FULLSTACKMEMO -certreq -file client-cert-file -storepass testpass
# Sign the server and client certificate requests with the CA
openssl x509 -req -CA ca.crt -CAkey ca.key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:testpass
openssl x509 -req -CA ca.crt -CAkey ca.key -in client-cert-file -out client-cert-signed -days 365 -CAcreateserial -passin pass:testpass
# Import the CA certificate into the server and client keystores
keytool -keystore server.keystore.jks -alias FULLSTACKMEMOCARoot -import -noprompt -trustcacerts -file ca.crt -storepass testpass
keytool -keystore client.keystore.jks -alias FULLSTACKMEMOCARoot -import -noprompt -trustcacerts -file ca.crt -storepass testpass
# Import the signed server and client certificates into their keystores
keytool -keystore server.keystore.jks -alias FULLSTACKMEMO -import -noprompt -trustcacerts -file cert-signed -storepass testpass
keytool -keystore client.keystore.jks -alias FULLSTACKMEMO -import -noprompt -trustcacerts -file client-cert-signed -storepass testpass

# Verify ssl: the command below will prompt for the certificate key passphrase; enter it when prompted.
# echo 'GET / HTTP/1.1' | openssl s_client -connect 127.0.0.1:2281 -tls1_2 -cert ${ssl_keys_dir}/ca.crt -key ${ssl_keys_dir}/ca.key -CAfile ${ssl_keys_dir}/ca.crt
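The sign-then-trust flow above can be sanity-checked end to end without touching the real keys. The sketch below (hypothetical scratch names, openssl assumed on PATH) builds a throwaway CA, signs a test certificate with it, and confirms the chain verifies — the same relationship the script establishes between ca.crt and cert-signed:

```shell
#!/bin/bash
set -e
# Scratch directory so nothing collides with the real ${ssl_keys_dir}
tmp=$(mktemp -d)
cd "$tmp"
# Throwaway CA (key + self-signed certificate)
openssl req -new -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=SCRATCH-CA"
# Throwaway server key + CSR, then sign the CSR with the CA
openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=SCRATCH-HOST"
openssl x509 -req -CA ca.crt -CAkey ca.key -in server.csr -out server.crt \
  -days 1 -CAcreateserial
# The signed certificate must verify against the CA that issued it
openssl verify -CAfile ca.crt server.crt
```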

Deploying zookeeper

  • Execution node: run the script below on every node, but note the per-node commands flagged inside the script

  • Script:

#!/bin/bash

# Preface
# Download zookeeper; within China, the Tsinghua or USTC open-source mirrors speed this up
# USTC mirror for zookeeper: http://mirrors.ustc.edu.cn/apache/zookeeper/
# zookeeper-3.5.4-beta: http://mirrors.ustc.edu.cn/apache/zookeeper/zookeeper-3.5.4-beta/zookeeper-3.5.4-beta.tar.gz
# Download zookeeper-3.5.4-beta.tar.gz and copy it to /tmp on all three servers

# Set environment variables
zookeeper_install=/data/app/zookeeper
zookeeper_version=3.5.4-beta
ssl_keys_dir=/data/app/ssl
run_user=app

# Initialize the environment
# Install the JDK and set the global JDK environment variables (omitted here)
useradd app -d /data/app
mkdir -p /data/app
chown ${run_user}:${run_user} -R /data/app

# Deploy
cd /tmp
tar zxf zookeeper-${zookeeper_version}.tar.gz
\mv zookeeper-${zookeeper_version} /data/app
cd /data/app/
# If a previously deployed zookeeper exists, move it aside as a timestamped backup
\mv -f zookeeper zookeeper.bk-$(date "+%Y%m%d%H%M%S")
ln -s zookeeper-${zookeeper_version} zookeeper
mkdir -p ${zookeeper_install}/data
mkdir -p ${zookeeper_install}/logs

cat > ${zookeeper_install}/conf/zoo.cfg<<EOF
# ref: https://zookeeper.apache.org/doc/r3.5.4-beta/zookeeperAdmin.html#sc_configuration
# Basic settings
tickTime=2000

dataDir=${zookeeper_install}-${zookeeper_version}/data
dataLogDir=${zookeeper_install}-${zookeeper_version}/logs

# clientPortAddress=127.0.0.1
# secureClientPortAddress=127.0.0.1
# The current kafka release cannot connect to zookeeper over ssl, so even though zookeeper enables ssl here, kafka still talks to zookeeper in plaintext
clientPort=2181
secureClientPort=2281
initLimit=5
syncLimit=2
server.1=10.22.0.70:2888:3888
server.2=10.22.0.71:2888:3888
server.3=10.22.0.72:2888:3888
admin.enableServer=false

# Authentication settings
quorum.auth.enableSasl=true
quorum.auth.learnerRequireSasl=true
quorum.auth.serverRequireSasl=true
quorum.auth.learner.loginContext=QuorumLearner
quorum.auth.server.loginContext=QuorumServer
# quorum.auth.kerberos.servicePrincipal=servicename/_HOST
quorum.cnxn.threads.size=20

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

# If the ACL file is lost or the password is forgotten, the following parameter temporarily skips authentication
# It must be set in every node's config file, and every node restarted, for it to take effect
# skipACL=yes

# Whitelist four-letter-word commands; not recommended unless you have a specific need
# 4lw.commands.whitelist=*
# 4lw.commands.whitelist=stat, ruok, conf, isro

EOF

cat > ${zookeeper_install}/conf/java.env<<EOF
SERVER_JVMFLAGS="
-Djava.security.auth.login.config=${zookeeper_install}/conf/jass_server.conf
-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
-Dzookeeper.ssl.keyStore.location=${ssl_keys_dir}/server.keystore.jks
-Dzookeeper.ssl.keyStore.password=testpass
-Dzookeeper.ssl.trustStore.location=${ssl_keys_dir}/server.truststore.jks
-Dzookeeper.ssl.trustStore.password=testpass
"
# CLIENT_JVMFLAGS="
# -Djava.security.auth.login.config=${zookeeper_install}/conf/jass_client.conf
# "
EOF

cat > ${zookeeper_install}/conf/jass_server.conf<<EOF
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret"
user_bob="bobsecret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="adminsecret";
};
QuorumServer {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_quorum="quorumsecret";
};
QuorumLearner {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="quorum"
password="quorumsecret";
};
EOF

cat > ${zookeeper_install}/conf/jass_client.conf<<EOF
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="adminsecret";
};
EOF

# Note: use the node id matching the current node
# Run a different one of the following commands on each node
echo 1 > ${zookeeper_install}/data/myid
# echo 2 > ${zookeeper_install}/data/myid
# echo 3 > ${zookeeper_install}/data/myid

chown ${run_user}:${run_user} -R ${zookeeper_install}-${zookeeper_version}

cat > /usr/lib/systemd/system/zookeeper.service<<EOF
[Unit]
Description=ZooKeeper Service
Documentation=http://zookeeper.apache.org
Requires=network.target
After=network.target

[Service]
Type=forking
User=${run_user}
Group=${run_user}
ExecStart=${zookeeper_install}/bin/zkServer.sh start ${zookeeper_install}/conf/zoo.cfg
ExecStop=${zookeeper_install}/bin/zkServer.sh stop ${zookeeper_install}/conf/zoo.cfg
ExecReload=${zookeeper_install}/bin/zkServer.sh restart ${zookeeper_install}/conf/zoo.cfg
WorkingDirectory=${zookeeper_install}

[Install]
WantedBy=default.target
EOF

systemctl daemon-reload
systemctl enable zookeeper
systemctl restart zookeeper
systemctl status zookeeper

# Operations commands

# Check zookeeper status
# Run this on all three nodes to see every node's information
# The Mode field in the output shows whether this node is currently the leader or a follower
${zookeeper_install}/bin/zkServer.sh status
# Output:
# ZooKeeper JMX enabled by default
# Using config: /data/app/zookeeper/bin/../conf/zoo.cfg
# Mode: follower
# or
# ZooKeeper JMX enabled by default
# Using config: /data/app/zookeeper/bin/../conf/zoo.cfg
# Mode: leader

# Set client environment variables
export CLIENT_JVMFLAGS="
-Djava.security.auth.login.config=${zookeeper_install}/conf/jass_client.conf
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-Dzookeeper.client.secure=true
-Dzookeeper.ssl.keyStore.location=${ssl_keys_dir}/client.keystore.jks
-Dzookeeper.ssl.keyStore.password=testpass
-Dzookeeper.ssl.trustStore.location=${ssl_keys_dir}/client.truststore.jks
-Dzookeeper.ssl.trustStore.password=testpass
"

# Set ACLs on the key znodes, using sasl authentication
${zookeeper_install}/bin/zkCli.sh -server 10.22.0.70:2281 setAcl / sasl:super:cdrwa
${zookeeper_install}/bin/zkCli.sh -server 10.22.0.70:2281 setAcl /zookeeper sasl:super:cdrwa

${zookeeper_install}/bin/zkCli.sh -server 10.22.0.70:2281 getAcl / | grep -E "cdrwa|super"
${zookeeper_install}/bin/zkCli.sh -server 10.22.0.70:2281 getAcl /zookeeper | grep -E "cdrwa|super"
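Besides the sasl scheme used above, zookeeper's digest ACL scheme is worth knowing as a recovery path (for example via the zookeeper.DigestAuthenticationProvider.superDigest property): its identity is simply user:base64(sha1("user:password")), so the digest can be generated without the zookeeper jars. A minimal sketch, reusing the super/adminsecret pair from jass_server.conf purely as example input:

```shell
#!/bin/bash
# Shell equivalent of org.apache.zookeeper.server.auth.DigestAuthenticationProvider
# generateDigest(): base64-encode the SHA-1 of the literal "user:password" string.
user=super
password=adminsecret
digest=$(printf '%s' "${user}:${password}" | openssl dgst -binary -sha1 | openssl base64)
# The resulting id would be used in an ACL as, e.g., digest:${user}:${digest}:cdrwa
echo "${user}:${digest}"
```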

Deploying kafka

  • Execution node: run the script below on every node, but note the per-node commands flagged inside the script

  • Script:

#!/bin/bash

kafka_install=/data/app/kafka
kafka_version=2.12-2.1.0
ssl_keys_dir=/data/app/ssl
run_user=app

# Initialize the environment
# Install the JDK and set the global JDK environment variables (omitted here)
useradd app -d /data/app
mkdir -p /data/app
chown ${run_user}:${run_user} -R /data/app

# Deploy
cd /tmp
tar zxf kafka_${kafka_version}.tgz
\mv kafka_${kafka_version} /data/app
cd /data/app/
# If a previously deployed kafka exists, move it aside as a timestamped backup
\mv -f kafka kafka.bk-$(date "+%Y%m%d%H%M%S")
ln -s kafka_${kafka_version} kafka
mkdir -p ${kafka_install}/logs

# Run whichever broker.id replacement matches the current node
# Node 1
# sed -i 's/^broker.id=.*/broker.id=0/g' ${kafka_install}/config/server.properties
# Node 2
# sed -i 's/^broker.id=.*/broker.id=1/g' ${kafka_install}/config/server.properties
# Node 3
sed -i 's/^broker.id=.*/broker.id=2/g' ${kafka_install}/config/server.properties

sed -i "
s@^#listeners=.*@listeners=SASL_PLAINTEXT://:9092,SASL_SSL://:9093@g;
s@^log.dirs=.*@log.dirs=${kafka_install}/logs@g;
s@^zookeeper.connect=.*@zookeeper.connect=10.22.0.70:2181,10.22.0.71:2181,10.22.0.72:2181@g
" ${kafka_install}/config/server.properties

cat >> ${kafka_install}/config/server.properties << EOF


# ssl and authentication settings
ssl.keystore.location=${ssl_keys_dir}/server.keystore.jks
ssl.keystore.password=testpass
ssl.key.password=testpass
ssl.truststore.location=${ssl_keys_dir}/server.truststore.jks
ssl.truststore.password=testpass
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
# For strict HTTPS hostname verification, add -ext SAN=DNS:{FQDN} when generating the certificates
# ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
EOF

cat > ${kafka_install}/config/jass.conf<<EOF
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="super"
password="adminsecret";
};
EOF

cat> /usr/lib/systemd/system/kafka.service<<EOF
[Unit]
Description=Confluent Kafka Broker
After=network.target network-online.target remote-fs.target zookeeper.service

[Service]
Type=forking
User=${run_user}
Group=${run_user}
# Uncomment the following line to enable authentication for the broker
Environment="KAFKA_OPTS=-Djava.security.auth.login.config=${kafka_install}/config/jass.conf"
ExecStart=${kafka_install}/bin/kafka-server-start.sh -daemon ${kafka_install}/config/server.properties
ExecStop=${kafka_install}/bin/kafka-server-stop.sh
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
EOF

chown ${run_user}:${run_user} -R ${kafka_install}_${kafka_version}

systemctl daemon-reload
systemctl enable kafka
systemctl restart kafka
systemctl status kafka

Testing kafka

  • Execution node: test from any one node

  • Script:

#!/bin/bash

kafka_install=/data/app/kafka
kafka_version=2.12-2.1.0
ssl_keys_dir=/data/app/ssl
run_user=app


cat >>${kafka_install}/config/producer.properties<<EOF

# ssl and authentication settings
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="alice" \
password="alice-secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=${ssl_keys_dir}/client.truststore.jks
ssl.truststore.password=testpass
ssl.keystore.location=${ssl_keys_dir}/client.keystore.jks
ssl.keystore.password=testpass
ssl.key.password=testpass

# For strict HTTPS hostname verification, add -ext SAN=DNS:{FQDN} when generating the certificates
# ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
EOF

cat >>${kafka_install}/config/consumer.properties<<EOF

# ssl and authentication settings
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=${ssl_keys_dir}/client.truststore.jks
ssl.truststore.password=testpass
ssl.keystore.location=${ssl_keys_dir}/client.keystore.jks
ssl.keystore.password=testpass
ssl.key.password=testpass

# For strict HTTPS hostname verification, add -ext SAN=DNS:{FQDN} when generating the certificates
# ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
EOF

# KAFKA_HEAP_OPTS="-Xms512m -Xmx1g"

# Open two shell windows and run one of the following two commands in each
# Anything typed into the producer window (press Enter to send) will appear in the consumer window
# Start the console producer:
${kafka_install}/bin/kafka-console-producer.sh --broker-list 10.22.0.70:9093,10.22.0.71:9093,10.22.0.72:9093 --topic test --producer.config ${kafka_install}/config/producer.properties
# Start the console consumer:
${kafka_install}/bin/kafka-console-consumer.sh --bootstrap-server 10.22.0.70:9093,10.22.0.71:9093,10.22.0.72:9093 --topic test --consumer.config ${kafka_install}/config/consumer.properties

References

ZooKeeper Administrator's Guide
Server-Server mutual authentication
Client-Server mutual authentication
Kafka 2.1 Documentation: Encryption and Authentication using SSL
Kafka 2.1 Documentation: Authentication using SASL/PLAIN
Enabling SASL Authentication between WSO2 Message Broker and Zookeeper Cluster
Zookeeper and SASL
Confluent documentation
Installing, deploying, and working with Kafka clusters (Chinese)
Kafka cluster deployment (Chinese)
Pitfalls of Zookeeper access control (Chinese)
Complete Zookeeper configuration file reference (Chinese)
Tuning ZooKeeper configuration for performance (Chinese)
Configuring Kerberos authentication for Zookeeper (Chinese)
kafka SASL authentication (Chinese)
kafka and zookeeper cluster installation with ssl communication (Chinese)
Controlling znodes with ZooKeeper ACLs (Chinese)
ZooKeeper security mechanisms: ZNode ACL (Chinese)
ZooKeeper security mechanisms: SSL (Chinese)
zookeeper ACLs and four-letter-word commands (Chinese)
