
I want to increase session.timeout.ms to allow more time for processing the messages received between calls to poll(). However, when I change session.timeout.ms to a value higher than 30000, the Consumer object fails to be created and an error is thrown. Why can't I increase session.timeout.ms?

Can anyone tell me why I cannot increase the value of session.timeout.ms, or what I am missing?
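For context, here is a minimal sketch of the consumer construction in question, assuming the settings from the config dump below (the class name is illustrative; the broker address is the placeholder from the dump):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TimeoutRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "server-name:9092");
        props.put("group.id", "test7");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Raised above the 30000 default; the constructor then throws the
        // KafkaException shown in the log below.
        props.put("session.timeout.ms", "40000");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.close();
    }
}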

0 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 

request.timeout.ms = 40000 
check.crcs = true 
retry.backoff.ms = 100 
ssl.truststore.password = null 
ssl.keymanager.algorithm = SunX509 
receive.buffer.bytes = 262144 
ssl.cipher.suites = null 
ssl.key.password = null 
sasl.kerberos.ticket.renew.jitter = 0.05 
ssl.provider = null 
sasl.kerberos.service.name = null 
session.timeout.ms = 40000 
sasl.kerberos.ticket.renew.window.factor = 0.8 
bootstrap.servers = [server-name:9092] 
client.id = 
fetch.max.wait.ms = 500 
fetch.min.bytes = 50000 
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 
sasl.kerberos.kinit.cmd = /usr/bin/kinit 
auto.offset.reset = latest 
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] 
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor] 
ssl.endpoint.identification.algorithm = null 
max.partition.fetch.bytes = 2097152 
ssl.keystore.location = null 
ssl.truststore.location = null 
ssl.keystore.password = null 
metrics.sample.window.ms = 30000 
metadata.max.age.ms = 300000 
security.protocol = PLAINTEXT 
auto.commit.interval.ms = 5000 
ssl.protocol = TLS 
sasl.kerberos.min.time.before.relogin = 60000 
connections.max.idle.ms = 540000 
ssl.trustmanager.algorithm = PKIX 
group.id = test7 
enable.auto.commit = false 
metric.reporters = [] 
ssl.truststore.type = JKS 
send.buffer.bytes = 131072 
reconnect.backoff.ms = 50 
metrics.num.samples = 2 
ssl.keystore.type = JKS 
heartbeat.interval.ms = 3000 

Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:624)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:518)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:500)


Could you post the entire error stack? There is not enough detail here to help. –

Answers


The range of session timeouts a consumer may request is controlled by the broker settings group.max.session.timeout.ms (default 30 seconds) and group.min.session.timeout.ms (default 6 seconds).

You should first increase group.max.session.timeout.ms on the broker side; otherwise you will get "The session timeout is not within an acceptable range."
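Concretely, that is a change along these lines in config/server.properties on each broker, followed by a restart (the one-hour value is illustrative, matching the comment below):

group.max.session.timeout.ms=3600000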


Thanks, but adding/setting group.max.session.timeout.ms = 3600000 in config/server.properties still did not fix this. However, I now get a "Caused by: org.apache.kafka.common.config.ConfigException: request.timeout.ms should be greater than session.timeout.ms and fetch.max.wait.ms" error, which was helpful. – Deeps


This worked fine after setting request.timeout.ms to a value greater than session.timeout.ms. – Deeps

  • Keep these conditions in mind when changing session.timeout.ms (a configuration sketch satisfying them follows this list):
    1. group.max.session.timeout.ms in server.properties > session.timeout.ms in consumer.properties.
    2. group.min.session.timeout.ms in server.properties < session.timeout.ms in consumer.properties.
    3. request.timeout.ms > session.timeout.ms and fetch.max.wait.ms.
    4. session.timeout.ms / 3 > heartbeat.interval.ms.
    5. session.timeout.ms > the worst-case time (in ms) the consumer needs to process the records returned by one poll().
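A minimal sketch of a consumer satisfying conditions 1-5, assuming the broker's group.max.session.timeout.ms has already been raised as in the first answer; the timeout values, topic name, and class name are illustrative assumptions, not taken from the question:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SlowProcessingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "server-name:9092"); // placeholder broker
        props.put("group.id", "test7");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Conditions 1 and 2: 120000 must lie between the broker's
        // group.min.session.timeout.ms and group.max.session.timeout.ms.
        props.put("session.timeout.ms", "120000");
        // Condition 3: greater than both session.timeout.ms and fetch.max.wait.ms.
        props.put("request.timeout.ms", "130000");
        props.put("fetch.max.wait.ms", "500");
        // Condition 4: less than a third of session.timeout.ms (120000 / 3 = 40000).
        props.put("heartbeat.interval.ms", "30000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // Condition 5: total processing per poll() must stay well
                    // under session.timeout.ms, or the coordinator evicts this
                    // consumer from the group and triggers a rebalance.
                    process(record);
                }
                consumer.commitSync(); // manual commit, since enable.auto.commit=false
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.offset() + ": " + record.value());
    }
}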