Hooking Flume up to Kafka: a complete test
路人王 · Tianjin · 2020-03-03

1. Kafka depends on ZooKeeper, so start it first. On my machines (192.168.52.200, 192.168.52.201, 192.168.52.202) that means running the zk startup script on each node.
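Starting ZooKeeper on all three nodes can be scripted. A minimal sketch, assuming passwordless ssh as root and a hypothetical install path of /root/apps/zookeeper; the RUN=echo guard makes it a dry run that only prints the commands:

```shell
# Dry run by default: prints the ssh commands instead of executing them.
# Set RUN= (empty) to actually start ZooKeeper on each node.
# The install path /root/apps/zookeeper is an assumption, not from the article.
RUN=${RUN:-echo}
hosts="192.168.52.200 192.168.52.201 192.168.52.202"
for h in $hosts; do
  $RUN ssh root@"$h" "/root/apps/zookeeper/bin/zkServer.sh start"
done
```

Once all three are up, `zkServer.sh status` on each node should report one leader and two followers.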

2. Start Kafka. From the bin directory, start the broker on all three machines:

bin/kafka-server-start.sh -daemon /root/apps/kafka_2.11-0.10.2.1/config/server.properties

To verify the brokers are up, list the topics registered in ZooKeeper:

./kafka-topics.sh --list --zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181

3. Create a topic named flume, then start a Kafka console consumer on it:

./kafka-topics.sh --create --zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 --replication-factor 3 --partitions 3 --topic flume

./kafka-console-consumer.sh --zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 --topic flume
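Before wiring in Flume, the topic can be sanity-checked on its own: anything piped into the console producer should appear in the consumer above. A sketch that only builds and prints the command; the broker list is an assumption based on the three hosts, and the commented line is what you would run from bin/ on a Kafka host:

```shell
# Broker list assumed from the three Kafka hosts above.
BROKERS="192.168.52.200:9092,192.168.52.201:9092,192.168.52.202:9092"
CMD="./kafka-console-producer.sh --broker-list $BROKERS --topic flume"
echo "$CMD"
# On a Kafka host, from bin/:  echo "hello flume" | $CMD
```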

4. Configure Flume to monitor the log file.

Go into the conf directory and create the agent configuration:

cd conf
vi a1.conf

# Name the components of agent a1
a1.sources = src1
a1.channels = ch1
a1.sinks = k1

# exec source: tail the log file continuously
# (tail -F would also survive log rotation)
a1.sources.src1.type = exec
a1.sources.src1.command = tail -f /home/centos/log/log
a1.sources.src1.channels = ch1

# Kafka sink: publish events to the "flume" topic
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = flume
a1.sinks.k1.brokerList = 192.168.52.200:9092
a1.sinks.k1.batchSize = 20
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.channel = ch1

# memory channel buffering up to 1000 events
a1.channels.ch1.type = memory
a1.channels.ch1.capacity = 1000
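The memory channel above only sets capacity. Flume's memory channel also exposes transactionCapacity (default 100), which must be at most capacity and at least the sink's batchSize (20 here). The defaults already satisfy that, but setting it explicitly documents the constraint:

```
a1.channels.ch1.transactionCapacity = 100
```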

5. Start Flume:

bin/flume-ng agent --conf conf --conf-file conf/a1.conf --name a1 -Dflume.root.logger=INFO,console
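With the agent running, an end-to-end smoke test is to append lines to the tailed file and watch them arrive in the console consumer from step 3. A minimal sketch; the default path below is a stand-in, so on the Flume host set LOG to /home/centos/log/log from a1.conf:

```shell
# Append a few timestamped test lines to the file tailed by the exec source.
# /tmp/flume-test.log is a stand-in default; use /home/centos/log/log on the Flume host.
LOG="${LOG:-/tmp/flume-test.log}"
for i in $(seq 1 5); do
  echo "flume-kafka test $i $(date '+%F %T')" >> "$LOG"
done
wc -l "$LOG"
```

Each appended line should show up in the consumer within a second or two; with batchSize = 20, small bursts are flushed on the sink's timeout rather than on a full batch.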
