Disabling the Kafka Logs Printed to the Console


The program uses both a Kafka consumer and a Kafka producer. After one program update, the console kept printing messages like the following:

[INFO] 2019-08-06 08:00:32:734 [Thread-11] [org.apache.kafka.common.utils.AppInfoParser$AppInfo.(AppInfoParser.java:109)] - Kafka version : 1.1.0
[INFO] 2019-08-06 08:00:32:734 [Thread-11] [org.apache.kafka.common.utils.AppInfoParser$AppInfo.(AppInfoParser.java:110)] - Kafka commitId : fdcf75ea326b8e07
[INFO] 2019-08-06 08:00:32:990 [main] [org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:279)] - ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [192.168.32.27:9092, 192.168.32.26:9092, 192.168.32.24:9092, 192.168.32.69:9092, 192.168.32.68:9092]
	buffer.memory = 33554432
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class com.cmb.xxx.api.xxxkfk.XXXKFK_Partitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer

[INFO] 2019-08-06 08:00:33:004 [main] [org.apache.kafka.common.utils.AppInfoParser$AppInfo.(AppInfoParser.java:109)] - Kafka version : 1.1.0
[INFO] 2019-08-06 08:00:33:004 [main] [org.apache.kafka.common.utils.AppInfoParser$AppInfo.(AppInfoParser.java:110)] - Kafka commitId : fdcf75ea326b8e07
2019-08-06 08:00:37 [6bff8807] info    [native] Global Sensortype Capture list (BlackList) changed.
2019-08-06 08:00:37 [6bff8807] info    [native] License = license ok; no uem volume available; 



[INFO] 2019-08-06 08:00:50:090 [XXXASY_MsgHandler(TOPIC-NAME)] [org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:279)] - ConsumerConfig values: 
	auto.commit.interval.ms = 5000
	auto.offset.reset = latest
	bootstrap.servers = [192.168.32.27:9092, 192.168.32.26:9092, 192.168.32.24:9092, 192.168.32.69:9092, 192.168.32.68:9092]
	check.crcs = true
	client.id = TOPIC-NAME_5002@st419021818ts_33
	connections.max.idle.ms = 540000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = TOPIC-NAME
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 100
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 305000
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

While I was at it, I looked over the ProducerConfig and ConsumerConfig values and noticed the entry metrics.recording.level = INFO, so at first I suspected that setting was the cause. But after digging through various resources I got nowhere (this setting actually controls the detail level of the Kafka client's internal metrics, not its log output).

Continuing the investigation

After more searching I came across an article on how to turn off the Kafka logs printed to the console (怎样关闭kafka的在控制台打印的日志), which attributed the console output to the logback logging configuration. Looking more carefully at my own program, it does not use logback at all; it uses log4j.

So, following a post on organizing log4j configuration for Kafka logging (Log4j配置文件整理–Kafka日志发送), I added a log4j.properties file in the Eclipse project at the same level as the top-level Java package, with the following contents, so that INFO messages are no longer printed:

# Console appender
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss}%m%n  %l%n

# File appender writing to sysInfo.log
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=sysInfo.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss}%m%n  %l%n

# Root logger at ERROR, so INFO output from the Kafka clients is suppressed
log4j.rootLogger=error,stdout,file
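
If silencing everything below ERROR is too aggressive, an alternative (not from the referenced article, just a standard log4j 1.x option) is to keep the root logger at INFO and only raise the level of the Kafka client packages:

# Keep application logging at INFO, but quiet the Kafka and ZooKeeper clients
log4j.rootLogger=INFO,stdout,file
log4j.logger.org.apache.kafka=WARN
log4j.logger.org.apache.zookeeper=WARN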

After restarting the program in Eclipse, these log lines were no longer printed.

If the project is to be packaged into a jar for release, the log4j.properties has to be packaged into the jar as well. But our project, as deployed, has no log4j.properties at the package level, so for a program that has already been packaged into a jar, how can we make our own log4j.properties take effect in the deployed application?
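
As an aside, log4j 1.x also lets you point the application at an external configuration file via the log4j.configuration system property, without repackaging the jar. A minimal sketch (the configuration path is just a placeholder, and this is not the route I ended up taking here):

java -Dlog4j.configuration=file:/APP/XXX/log4j.properties -cp /APP/XXX/xxxcld.jar com.cmb.xxx.api.xxxcld.XXXCLD_ApplicationStart APP1

When this property is set, log4j uses that file instead of searching the classpath for a log4j.properties.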

The article "log4j.properties configuration not taking effect" (log4j.properties文件的配置不起作用) suggested that some jar already on the project's classpath must contain its own log4j.properties.

Add the -Dlog4j.debug flag to the java command that starts the application, for example:

java -server -Dlog4j.debug -Xms512m -Xmx4g -XX:PermSize=256m -XX:MaxPermSize=512m -agentpath:/opt/jbshome/dynatrace-6.2/agent/lib64/libdtagent.so=name=JBoss_99.13.207.137_xiaosk,server=99.8.44.148:9998 -cp /APP/XXX/xxxcld.jar com.cmb.xxx.api.xxxcld.XXXCLD_ApplicationStart com.cmb.xxx.api.xxxolt.XXXOLT_ApplicationStart APP1

And sure enough, the debug output revealed that one of the jars contains a log4j.properties configuration:

Reading configuration from URL jar:file:/APP/XXX/xxxkafka-client-5.0.0.jar!/log4j.properties

After unpacking that jar, its log4j.properties turned out to contain the following:

log4j.rootLogger=INFO, Console

#Console
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=[%p] %d{yyyy-MM-dd HH\:mm\:ss:SSS} [%t] [%l] - %m%n

log4j.logger.org.apache.kafka.clients.consumer.ConsumerConfig=INFO
log4j.logger.org.apache.kafka.clients.producer.ProducerConfig=INFO
log4j.logger.org.apache.zookeeper=INFO

The ConversionPattern here matches the format of the log lines at the beginning exactly, so this is clearly the configuration being picked up. Looking more closely at the directory holding this jar in the runtime environment, it turned out there were actually conflicting versions, specifically:

xxxkafka-client-5.0.0.jar
xxxkafka-client-5.1.0.jar

kafka-clients-1.1.0.jar
kafka-clients-2.0.0.jar
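
To check at runtime which jar a class or a log4j.properties resource is actually loaded from, a small probe like the one below can help (a minimal sketch, not from the original troubleshooting session; it assumes the Kafka client classes are on the classpath):

import java.net.URL;
import java.security.CodeSource;

public class ClasspathProbe {
    public static void main(String[] args) {
        // The first log4j.properties visible on the classpath
        URL cfg = Thread.currentThread().getContextClassLoader().getResource("log4j.properties");
        System.out.println("log4j.properties loaded from: " + cfg);

        // The jar a given class comes from, e.g. the Kafka producer config class
        CodeSource src = org.apache.kafka.clients.producer.ProducerConfig.class
                .getProtectionDomain().getCodeSource();
        System.out.println("ProducerConfig loaded from: " + (src == null ? "unknown" : src.getLocation()));
    }
}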

After deleting the jars of the conflicting versions, without touching the log4j.properties contents at all, I restarted the application in the runtime environment, and the logs shown at the beginning were no longer printed continuously.

To sum up: starting from a few small clues, it looked like a log4j.properties problem, and step-by-step investigation showed it was actually caused by conflicting jar versions. Deleting the conflicting jars resolved the issue for the time being; the deeper root cause and the internals of log4j are left out of scope for this post.

Also, when it comes to Kafka and ZooKeeper, there are still plenty of blind spots in my knowledge!
