Kafka partition leader not updated after broker removal

I have a Kafka 0.10.2.1 cluster with 3 brokers managed by Marathon/Mesos. The Docker image is based on wurstmeister/kafka-docker. Automatic leader rebalancing is enabled at startup with auto.leader.rebalance.enable=true, and broker IDs are assigned automatically and sequentially with broker.id=-1. The client version is 0.8.2.1.
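
For context, here is a minimal sketch of how those two settings are typically passed to a wurstmeister/kafka-docker container through environment variables. The plain docker run form, the advertised hostname and the untagged image are assumptions for illustration only; the actual Marathon app definition is not shown in this post.

➜ # illustration only: env vars the image turns into server.properties settings
➜ docker run -d --name kafka -p 9092:9092 \
    -e KAFKA_BROKER_ID=-1 \
    -e KAFKA_AUTO_LEADER_REBALANCE_ENABLE=true \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper.example.com:2181 \
    -e KAFKA_ADVERTISED_HOST_NAME=host1.mesos-slave.example.com \
    -e KAFKA_ADVERTISED_PORT=9092 \
    wurstmeister/kafka

The image converts each KAFKA_* variable into the corresponding server.properties entry, so KAFKA_BROKER_ID=-1 becomes broker.id=-1 (Kafka then auto-assigns IDs above 1000, which is why the brokers show up as 1104-1107) and KAFKA_AUTO_LEADER_REBALANCE_ENABLE=true becomes auto.leader.rebalance.enable=true.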

ZooKeeper configuration:

➜ zkCli -server zookeeper.example.com:2181 ls /brokers/ids 
[1106, 1105, 1104] 

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1104 
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"}, 
"endpoints":["PLAINTEXT://host1.mesos-slave.example.com:9092"], 
"jmx_port":9999,"host":"host1.mesos-slave.example.com", 
"timestamp":"1500987386409", 
"port":9092,"version":4} 

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1105 
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"}, 
"endpoints":["PLAINTEXT://host2.mesos-slave.example.com:9092"], 
"jmx_port":9999,"host":"host2.mesos-slave.example.com", 
"timestamp":"1500987390304", 
"port":9092,"version":4} 

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1106 
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"}, 
"endpoints":["PLAINTEXT://host3.mesos-slave.example.com:9092"], 
"jmx_port":9999,"host":"host3.mesos-slave.example.com", 
"timestamp":"1500987390447","port":9092,"version":4} 

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --create --topic test-topic --partitions 2 --replication-factor 2 
Created topic "test-topic". 

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --describe --topic test-topic 
Topic:test-topic PartitionCount:2  ReplicationFactor:2  Configs: 
     Topic: test-topic Partition: 0 Leader: 1106 Replicas: 1106,1104  Isr: 1106 
     Topic: test-topic Partition: 1 Leader: 1105 Replicas: 1104,1105  Isr: 1105 
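
Note that even right after creation the describe output shows the ISR smaller than the replica list (partition 0 lists replicas 1106,1104 but only 1106 in sync). As a diagnostic suggestion on my part, not part of the original run, the under-replicated partitions can also be listed directly, assuming the kafka-topics.sh shipped with this version supports the flag:

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --describe --under-replicated-partitions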

The consumer can consume whatever the producer sends:

➜ /opt/kafka_2.10-0.8.2.1 bin/kafka-console-producer.sh --broker-list 10.0.1.3:9092,10.0.1.1:9092 --topic test-topic 
[2017-07-25 12:57:17,760] WARN Property topic is not valid (kafka.utils.VerifiableProperties) 
hello 1 
hello 2 
hello 3 
... 

➜ /opt/kafka_2.10-0.8.2.1 bin/kafka-console-consumer.sh --zookeeper zookeeper.example.com:2181 --topic test-topic --from-beginning 
hello 1 
hello 2 
hello 3 
... 

Then I forced brokers 1104 and 1105 (host1 and host2) to go away and another one came online, 1107 (again on host1), manually using the Marathon interface:

➜ zkCli -server zookeeper.example.com:2181 ls /brokers/ids 
[1107, 1106] 

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1107 
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"}, 
"endpoints":["PLAINTEXT://host1.mesos-slave.example.com:9092"], 
"jmx_port":9999,"host":"host1.mesos-slave.example.com", 
"timestamp":"1500991298225","port":9092,"version":4} 

The consumer still receives messages from the producer, but the topic description looks stale:

Topic:test-topic PartitionCount:2  ReplicationFactor:2  Configs: 
     Topic: test-topic Partition: 0 Leader: 1106 Replicas: 1106,1104  Isr: 1106 
     Topic: test-topic Partition: 1 Leader: 1105 Replicas: 1104,1105  Isr: 1105 
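
The describe output above is ultimately read from ZooKeeper, so the partition state can also be inspected directly. This is a diagnostic sketch against Kafka's default ZooKeeper layout, not output from the original cluster:

➜ zkCli -server zookeeper.example.com:2181 get /brokers/topics/test-topic/partitions/1/state

The JSON in that znode contains the leader, leader_epoch, isr and controller_epoch fields, which confirms whether leader 1105 is really what the controller last wrote or just a stale view from the tooling.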

I tried to rebalance with kafka-preferred-replica-election.sh and kafka-reassign-partitions.sh:

➜ $cat all_partitions.json 
{ 
    "version":1, 
    "partitions":[ 
    {"topic":"test-topic","partition":0,"replicas":[1106,1107]}, 
    {"topic":"test-topic","partition":1,"replicas":[1107,1106]} 
    ] 
} 

➜ bin/kafka-reassign-partitions.sh --zookeeper zookeeper.example.com:2181 --reassignment-json-file all_partitions.json --execute 

➜ bin/kafka-reassign-partitions.sh --zookeeper zookeeper.example.com:2181 --reassignment-json-file all_partitions.json --verify 

Status of partition reassignment: 
Reassignment of partition [test-topic,0] completed successfully 
Reassignment of partition [test-topic,1] is still in progress 
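
A reassignment that cannot finish, for example because partition 1's only in-sync replica 1105 is gone and the new replicas have nothing to copy from, stays parked in ZooKeeper until it completes. As another diagnostic suggestion, not from the original run, the pending plan can be inspected like this:

➜ zkCli -server zookeeper.example.com:2181 get /admin/reassign_partitions

The /admin/reassign_partitions znode only exists while a reassignment is pending; as long as it still lists [test-topic,1], no further reassignment can be started.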

➜ $cat all_leaders.json 
{ 
    "partitions":[ 
    {"topic": "test-topic", "partition": 0}, 
    {"topic": "test-topic", "partition": 1} 
    ] 
} 

➜ bin/kafka-preferred-replica-election.sh --zookeeper zookeeper.example.com:2181 --path-to-json-file all_leaders.json 
Created preferred replica election path with {"version":1,"partitions":[{"topic":"test-topic","partition":0},{"topic":"test-topic","partition":1}]} 
Successfully started preferred replica election for partitions Set([test-topic,0], [test-topic,1]) 
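
Preferred-replica elections are carried out by the controller broker, so it may also be worth checking which broker currently holds that role; this check is my own suggestion, not something from the original post:

➜ zkCli -server zookeeper.example.com:2181 get /controller

The JSON in that znode includes a brokerid field. Note as well that a preferred-replica election can only hand leadership to a replica that is already in the ISR, and partition 1's ISR above still lists only 1105.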

For partition 1 the leader is still 1105, which makes no sense:

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --describe --topic test-topic 

Topic:test-topic PartitionCount:2  ReplicationFactor:2  Configs: 
     Topic: test-topic Partition: 0 Leader: 1106 Replicas: 1106,1107  Isr: 1106,1107 
     Topic: test-topic Partition: 1 Leader: 1105 Replicas: 1107,1106,1104,1105 Isr: 1105 

Why does partition 1 think the leader is still 1105, even though host2 is not alive anymore?

Answer

I am facing a similar issue with Apache Kafka 2.11. I have a cluster of 3 brokers and a topic with partitions=2 and replication factor=1, so the topic's partitions were spread across 2 of the 3 brokers. While messages were being produced, I manually shut down one of the brokers on which one of the partitions resided. Even after a considerable amount of time, the leader of that partition kept showing as -1, i.e. the partition did not move over to the third active and running broker. I did have auto.leader.rebalance.enable set to true on all the brokers. In addition, the producer client kept trying to produce to the partition that was on the shut-down broker and kept failing.
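
For what it's worth, partitions left without a leader after such a shutdown can be listed with the stock kafka-topics.sh tool; the ZooKeeper address below is just a placeholder, not the answerer's actual cluster:

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --describe --unavailable-partitions

With a replication factor of 1 there is no second copy of the partition anywhere, so leadership cannot move to the remaining broker no matter what auto.leader.rebalance.enable is set to; the partition stays leaderless (-1) until the original broker comes back.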
