Flume Case Studies: Single Source Single Sink & Single Source Multiple Sinks & Multiple Sources Single Sink

Installation Links

  1. Flume official website

    http://flume.apache.org/

  2. Documentation

    http://flume.apache.org/FlumeUserGuide.html

  3. Download

    http://archive.cloudera.com/cdh5/cdh/5/flume-ng-1.6.0-cdh5.16.2-src.tar.gz

Installation and Deployment

  1. Upload apache-flume-1.7.0-bin.tar.gz to the /opt/software directory on the Linux host

  2. Extract apache-flume-1.7.0-bin.tar.gz into the /opt/module/ directory

    [aliyun@aliyun software]$ tar -zxf apache-flume-1.7.0-bin.tar.gz -C /opt/module/
  3. Rename apache-flume-1.7.0-bin to flume

    [aliyun@aliyun module]$ mv apache-flume-1.7.0-bin flume
  4. Rename the flume-env.sh.template file under flume/conf to flume-env.sh, then configure flume-env.sh

    [aliyun@aliyun conf]$ mv flume-env.sh.template flume-env.sh
    [aliyun@aliyun conf]$ vi flume-env.sh
    export JAVA_HOME=/opt/module/jdk1.8.0_144

Testing

  1. Install the netcat tool

    [aliyun@aliyun software]$ sudo yum install -y nc
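
    Optionally, sanity-check nc itself before involving Flume (a quick sketch): run a listener in one terminal and a client in another, then type a line on the client side and watch it appear on the listener. Stop the listener afterwards, or it will keep port 44444 occupied.

    [aliyun@aliyun ~]$ nc -lk 44444
    [aliyun@aliyun ~]$ nc localhost 44444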
  2. Check whether port 44444 is already occupied

    [aliyun@aliyun flume-telnet]$ sudo netstat -tunlp | grep 44444
  3. Create the Flume agent configuration file flume-netcat-logger.conf

    Create a job folder under the flume directory and enter it.

    [aliyun@aliyun flume]$ mkdir job
    [aliyun@aliyun flume]$ cd job/
  4. In the job folder, create the Flume agent configuration file flume-netcat-logger.conf.

    [aliyun@aliyun job]$ vim flume-netcat-logger.conf
  5. Add the following content (Flume loads this file as a Java properties file, so comments must sit on their own lines):

    # Name the components on this agent (a1 is the agent's name)
    # r1 is a1's source, k1 its sink (output destination), c1 its channel (buffer)
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1

    # Describe/configure the source
    # netcat source listening on localhost, port 44444
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    # Describe the sink
    # logger sink: print events to the console
    a1.sinks.k1.type = logger

    # Use a channel which buffers events in memory
    # memory channel with a total capacity of 1000 events,
    # committing a transaction after collecting up to 100 events
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
  6. Start the Flume agent listening on the port

    First form:

    [aliyun@aliyun flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console

    Second form:

    [aliyun@aliyun flume]$ bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console

    Parameter descriptions:

    --conf conf/: the Flume configuration files are stored in the conf/ directory

    --name a1: names the agent a1

    --conf-file job/flume-netcat-logger.conf: the configuration file Flume reads for this run, i.e. flume-netcat-logger.conf in the job folder.

    -Dflume.root.logger=INFO,console: -D overrides the flume.root.logger property at runtime, directing log output to the console at the INFO level. Log levels include debug, info, warn, and error.
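
    If you want the agent to keep running after the shell session closes, one common pattern (a sketch, not part of the original steps; flume-a1.log is an arbitrary file name) is to launch it with nohup in the background:

    [aliyun@aliyun flume]$ nohup bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console > flume-a1.log 2>&1 &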

  7. Use netcat to send content to port 44444 on the local machine

    [aliyun@aliyun ~]$ nc localhost 44444
    hello
    aliyun
  8. Observe the received data on the Flume console
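
    The logger sink prints each event's headers plus the first bytes of its body in hex. The output should look roughly like the line below (the timestamp and code location are illustrative):

    2018-05-20 22:05:00,000 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 68 65 6C 6C 6F                                  hello }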

Case: Streaming a Local File to HDFS in Real Time

Requirement: monitor the Hive log in real time and upload it to HDFS.

Single Source, Single Sink (1)

  1. Add the HDFS-related jar packages below to Flume's lib directory (a copy sketch follows the list)

    commons-configuration-1.6.jar
    aliyun-auth-2.7.2.jar
    aliyun-common-2.7.2.jar
    aliyun-hdfs-2.7.2.jar
    commons-io-2.4.jar
    htrace-core-3.1.0-incubating.jar
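
    Assuming the Hadoop installation lives under /opt/module/aliyun-2.7.2 (an assumption; adjust the path to your layout), each jar can be located under its share directory and copied into flume/lib, for example:

    [aliyun@aliyun flume]$ find /opt/module/aliyun-2.7.2/share -name 'commons-io-2.4.jar' -exec cp {} lib/ \;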
  2. Create the flume-file-hdfs.conf file

    Create the file

    [aliyun@aliyun job]$ touch flume-file-hdfs.conf

    Note: to read a file on a Linux system, you have to run commands according to Linux conventions. Since the Hive log lives on the Linux filesystem, the source type chosen for reading it is exec (short for execute): the source reads the file by executing a Linux command.

    [aliyun@aliyun job]$ vim flume-file-hdfs.conf

    Add the following content

    # Name the components on this agent
    a2.sources = r2
    a2.sinks = k2
    a2.channels = c2

    # Describe/configure the source
    # exec source: read the file by running a shell command
    a2.sources.r2.type = exec
    a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log
    # absolute path of the shell used to run the command
    a2.sources.r2.shell = /bin/bash -c

    # Describe the sink
    a2.sinks.k2.type = hdfs
    a2.sinks.k2.hdfs.path = hdfs://aliyun:9000/flume/%Y%m%d/%H
    # prefix for uploaded files
    a2.sinks.k2.hdfs.filePrefix = logs-
    # whether to roll folders based on time
    a2.sinks.k2.hdfs.round = true
    # how many time units before creating a new folder
    a2.sinks.k2.hdfs.roundValue = 1
    # the time unit for rolling folders
    a2.sinks.k2.hdfs.roundUnit = hour
    # whether to use the local timestamp
    a2.sinks.k2.hdfs.useLocalTimeStamp = true
    # how many events to accumulate before flushing to HDFS
    a2.sinks.k2.hdfs.batchSize = 1000
    # file type; compression is supported
    a2.sinks.k2.hdfs.fileType = DataStream
    # roll to a new file every 60 seconds
    a2.sinks.k2.hdfs.rollInterval = 60
    # roll when a file reaches this size (about 128 MB)
    a2.sinks.k2.hdfs.rollSize = 134217700
    # 0 = rolling is independent of the event count
    a2.sinks.k2.hdfs.rollCount = 0

    # Use a channel which buffers events in memory
    a2.channels.c2.type = memory
    a2.channels.c2.capacity = 1000
    a2.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r2.channels = c2
    a2.sinks.k2.channel = c2

    Note

    For all time-related escape sequences, the event header must contain a key named "timestamp" (unless hdfs.useLocalTimeStamp is set to true, in which case the timestamp is added automatically via TimestampInterceptor).
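
    For reference, a minimal sketch of attaching the timestamp interceptor to the source explicitly (instead of relying on hdfs.useLocalTimeStamp) would be:

    # stamp each event's header with the ingest time
    a2.sources.r2.interceptors = i1
    a2.sources.r2.interceptors.i1.type = timestamp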

  3. Run the monitoring configuration

    [aliyun@aliyun flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-file-hdfs.conf
  4. Start HDFS and YARN, then start Hive and perform operations that generate log entries

    [aliyun@aliyun aliyun-2.7.2]$ sbin/start-dfs.sh
    [aliyun@aliyun103 aliyun-2.7.2]$ sbin/start-yarn.sh

    [aliyun@aliyun hive]$ bin/hive
    hive (default)>
  5. View the files on HDFS.
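
    For example (the date directory depends on when the agent ran):

    [aliyun@aliyun flume]$ hadoop fs -ls -R /flume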

Case: Streaming Directory Files to HDFS in Real Time

Requirement: use Flume to monitor the files of an entire directory.

Single Source, Single Sink (2)

  1. Create the configuration file flume-dir-hdfs.conf

    [aliyun@aliyun job]$ touch flume-dir-hdfs.conf
  2. Open the file

    [aliyun@aliyun job]$ vim flume-dir-hdfs.conf
  3. Add the following content

    # Name the components on this agent
    a3.sources = r3
    a3.sinks = k3
    a3.channels = c3

    # Describe/configure the source
    # spooldir source: watch a directory for new files
    a3.sources.r3.type = spooldir
    # the directory to monitor
    a3.sources.r3.spoolDir = /opt/module/flume/upload
    # suffix appended to files once they are fully ingested
    a3.sources.r3.fileSuffix = .COMPLETED
    # whether to add a header with the source file's path
    a3.sources.r3.fileHeader = true
    # ignore (do not upload) files ending in .tmp
    a3.sources.r3.ignorePattern = ([^ ]*\.tmp)

    # Describe the sink
    # hdfs sink and the HDFS path files are uploaded to
    a3.sinks.k3.type = hdfs
    a3.sinks.k3.hdfs.path = hdfs://aliyun:9000/flume/upload/%Y%m%d/%H
    # prefix for files uploaded to HDFS
    a3.sinks.k3.hdfs.filePrefix = upload-
    # whether to roll folders based on time
    a3.sinks.k3.hdfs.round = true
    # how many time units before creating a new folder
    a3.sinks.k3.hdfs.roundValue = 1
    # the time unit for rolling folders
    a3.sinks.k3.hdfs.roundUnit = hour
    # whether to use the local timestamp
    a3.sinks.k3.hdfs.useLocalTimeStamp = true
    # how many events to accumulate before flushing to HDFS
    a3.sinks.k3.hdfs.batchSize = 100
    # file type; compression is supported
    a3.sinks.k3.hdfs.fileType = DataStream
    # roll to a new file every 60 seconds
    a3.sinks.k3.hdfs.rollInterval = 60
    # roll when a file reaches this size (about 128 MB)
    a3.sinks.k3.hdfs.rollSize = 134217700
    # 0 = rolling is independent of the event count
    a3.sinks.k3.hdfs.rollCount = 0

    # Use a channel which buffers events in memory
    a3.channels.c3.type = memory
    a3.channels.c3.capacity = 1000
    a3.channels.c3.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r3.channels = c3
    a3.sinks.k3.channel = c3
  4. Start the directory-monitoring agent

    [aliyun@aliyun flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/flume-dir-hdfs.conf

    Notes on using the Spooling Directory Source:

    1. Do not create and then keep modifying files inside the monitored directory.
    2. Files that have finished uploading are renamed with the .COMPLETED suffix.
    3. The monitored directory is scanned for file changes every 500 milliseconds.
  5. Add files to the upload folder

    Create the upload folder under the /opt/module/flume directory

    [aliyun@aliyun flume]$ mkdir upload

    Add files to the upload folder

    [aliyun@aliyun upload]$ touch aliyun.txt
    [aliyun@aliyun upload]$ touch aliyun.tmp
    [aliyun@aliyun upload]$ touch aliyun.log
  6. View the data on HDFS
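
    For example:

    [aliyun@aliyun flume]$ hadoop fs -ls -R /flume/upload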

  7. Wait one second, then list the upload folder again

    [aliyun@aliyun upload]$ ll
    total 0
    -rw-rw-r--. 1 aliyun aliyun 0 May 20 22:31 aliyun.log.COMPLETED
    -rw-rw-r--. 1 aliyun aliyun 0 May 20 22:31 aliyun.tmp
    -rw-rw-r--. 1 aliyun aliyun 0 May 20 22:31 aliyun.txt.COMPLETED

Case: Single Data Source, Multiple Sinks (Selector)

Requirement: Flume-1 monitors file changes and passes the changed content to Flume-2, which stores it in HDFS. At the same time, Flume-1 passes the changed content to Flume-3, which writes it to the local filesystem.

Single Source, Multiple Sinks (1)

  1. Preparation

    Create the group1 folder under /opt/module/flume/job and enter it

    [hadoop@aliyun102 job]$ mkdir group1
    [hadoop@aliyun102 job]$ cd group1/

    Create the flume3 folder under the /opt/module/datas/ directory

    [hadoop@aliyun102 datas]$ mkdir flume3

  2. Create flume-file-flume.conf

    Configure one source that reads the log file, plus two channels and two sinks that feed flume-flume-hdfs and flume-flume-dir respectively.

    Create and open the configuration file

    [hadoop@aliyun102 group1]$ touch flume-file-flume.conf

    [hadoop@aliyun102 group1]$ vim flume-file-flume.conf

    Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1 k2
    a1.channels = c1 c2

    # replicate the data flow to every channel
    a1.sources.r1.selector.type = replicating
    # Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
    a1.sources.r1.shell = /bin/bash -c

    # Describe the sink
    # on the sink side, avro acts as a data sender
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = aliyun102
    a1.sinks.k1.port = 4141
    a1.sinks.k2.type = avro
    a1.sinks.k2.hostname = aliyun102
    a1.sinks.k2.port = 4142

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    a1.channels.c2.type = memory
    a1.channels.c2.capacity = 1000
    a1.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1 c2
    a1.sinks.k1.channel = c1
    a1.sinks.k2.channel = c2

    Note: Avro is a language-neutral data serialization and RPC framework created by Doug Cutting, the founder of Hadoop.

    Note: RPC (Remote Procedure Call) is a protocol for requesting a service from a program on a remote computer over the network, without needing to understand the underlying network technology.
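
    Flume also ships with an avro-client mode that sends a file straight to an avro source, which is handy for testing a downstream agent on its own (a sketch; /opt/module/datas/test.txt is an assumed test file):

    [hadoop@aliyun102 flume]$ bin/flume-ng avro-client -H aliyun102 -p 4141 -F /opt/module/datas/test.txt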

  3. Create flume-flume-hdfs.conf

    Configure a source that receives the output of the upstream Flume, and a sink that writes to HDFS.

    Create and open the configuration file

    [hadoop@aliyun102 group1]$ touch flume-flume-hdfs.conf

    [hadoop@aliyun102 group1]$ vim flume-flume-hdfs.conf

    Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    # on the source side, avro acts as a data-receiving service
    a2.sources.r1.type = avro
    a2.sources.r1.bind = aliyun102
    a2.sources.r1.port = 4141

    # Describe the sink
    a2.sinks.k1.type = hdfs
    a2.sinks.k1.hdfs.path = hdfs://aliyun102:9000/flume2/%Y%m%d/%H
    # prefix for uploaded files
    a2.sinks.k1.hdfs.filePrefix = flume2-
    # whether to roll folders based on time
    a2.sinks.k1.hdfs.round = true
    # how many time units before creating a new folder
    a2.sinks.k1.hdfs.roundValue = 1
    # the time unit for rolling folders
    a2.sinks.k1.hdfs.roundUnit = hour
    # whether to use the local timestamp
    a2.sinks.k1.hdfs.useLocalTimeStamp = true
    # how many events to accumulate before flushing to HDFS
    a2.sinks.k1.hdfs.batchSize = 100
    # file type; compression is supported
    a2.sinks.k1.hdfs.fileType = DataStream
    # roll to a new file every 600 seconds
    a2.sinks.k1.hdfs.rollInterval = 600
    # roll when a file reaches this size (about 128 MB)
    a2.sinks.k1.hdfs.rollSize = 134217700
    # 0 = rolling is independent of the event count
    a2.sinks.k1.hdfs.rollCount = 0

    # Describe the channel
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
  4. Create flume-flume-dir.conf

    Configure a source that receives the output of the upstream Flume, and a sink that writes to a local directory.

    Create and open the configuration file

    [hadoop@aliyun102 group1]$ touch flume-flume-dir.conf

    [hadoop@aliyun102 group1]$ vim flume-flume-dir.conf

    Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c2

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = aliyun102
    a3.sources.r1.port = 4142

    # Describe the sink
    a3.sinks.k1.type = file_roll
    a3.sinks.k1.sink.directory = /opt/module/datas/flume3

    # Describe the channel
    a3.channels.c2.type = memory
    a3.channels.c2.capacity = 1000
    a3.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c2
    a3.sinks.k1.channel = c2

    Tip: the local output directory must already exist; if it does not, the sink will not create it.

  5. Run the configuration files

    Start an agent for each configuration file, in this order: flume-flume-dir, flume-flume-hdfs, flume-file-flume.

    [hadoop@aliyun102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf

    [hadoop@aliyun102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group1/flume-flume-hdfs.conf

    [hadoop@aliyun102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group1/flume-file-flume.conf

  6. Start HDFS, YARN, and Hive

    [hadoop@aliyun102 aliyun-2.7.2]$ sbin/start-dfs.sh

    [hadoop@aliyun103 aliyun-2.7.2]$ sbin/start-yarn.sh

    [hadoop@aliyun102 hive]$ bin/hive

    hive (default)>
  7. Check the data on HDFS

  8. Check the data in the /opt/module/datas/flume3 directory

    [hadoop@aliyun102 flume3]$ ll

    total 8
    -rw-rw-r--. 1 hadoop hadoop 5942 May 22 00:09 1526918887550-3

Case: Single Data Source, Multiple Sinks (Sink Group)

Requirement: Flume-1 monitors a data stream on a port and, through a load-balancing sink group, passes events to Flume-2 and Flume-3, each of which prints what it receives to its console.

Single Source, Multiple Sinks (2)

  1. Preparation

    Create the group2 folder under /opt/module/flume/job and enter it

    [hadoop@aliyun102 job]$ mkdir group2
    [hadoop@aliyun102 job]$ cd group2/

  2. Create flume-netcat-flume.conf

    Configure one netcat source, one channel, and two sinks grouped behind a load-balancing sink group, feeding flume-flume-console1 and flume-flume-console2 respectively (a failover alternative is sketched at the end of this step).

    Create and open the configuration file

    [hadoop@aliyun102 group2]$ touch flume-netcat-flume.conf

    [hadoop@aliyun102 group2]$ vim flume-netcat-flume.conf

    Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.channels = c1
    a1.sinkgroups = g1
    a1.sinks = k1 k2

    # Describe/configure the source
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    a1.sinkgroups.g1.processor.type = load_balance
    a1.sinkgroups.g1.processor.backoff = true
    a1.sinkgroups.g1.processor.selector = round_robin
    a1.sinkgroups.g1.processor.selector.maxTimeOut = 10000

    # Describe the sink
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = aliyun102
    a1.sinks.k1.port = 4141

    a1.sinks.k2.type = avro
    a1.sinks.k2.hostname = aliyun102
    a1.sinks.k2.port = 4142

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinkgroups.g1.sinks = k1 k2
    a1.sinks.k1.channel = c1
    a1.sinks.k2.channel = c1

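    The load_balance processor above round-robins events between k1 and k2, backing off from failed sinks. If you want an active-standby setup instead, the failover sink processor is the usual alternative; a minimal sketch (the sink with the higher priority receives all events while it is healthy):

    a1.sinkgroups.g1.processor.type = failover
    a1.sinkgroups.g1.processor.priority.k1 = 10
    a1.sinkgroups.g1.processor.priority.k2 = 5
    # maximum backoff (ms) for a failed sink
    a1.sinkgroups.g1.processor.maxpenalty = 10000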

  3. Create flume-flume-console1.conf

    Configure a source that receives the output of the upstream Flume; the sink prints to the local console.

    Create and open the configuration file

    [hadoop@aliyun102 group2]$ touch flume-flume-console1.conf

    [hadoop@aliyun102 group2]$ vim flume-flume-console1.conf

    Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    a2.sources.r1.type = avro
    a2.sources.r1.bind = aliyun102
    a2.sources.r1.port = 4141

    # Describe the sink
    a2.sinks.k1.type = logger

    # Describe the channel
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
  4. Create flume-flume-console2.conf

    Configure a source that receives the output of the upstream Flume; the sink prints to the local console.

    Create and open the configuration file

    [hadoop@aliyun102 group2]$ touch flume-flume-console2.conf

    [hadoop@aliyun102 group2]$ vim flume-flume-console2.conf

    Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c2

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = aliyun102
    a3.sources.r1.port = 4142

    # Describe the sink
    a3.sinks.k1.type = logger

    # Describe the channel
    a3.channels.c2.type = memory
    a3.channels.c2.capacity = 1000
    a3.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c2
    a3.sinks.k1.channel = c2
  5. Run the configuration files

    Start an agent for each configuration file, in this order: flume-flume-console2, flume-flume-console1, flume-netcat-flume.

    [hadoop@aliyun102 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group2/flume-flume-console2.conf -Dflume.root.logger=INFO,console

    [hadoop@aliyun102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group2/flume-flume-console1.conf -Dflume.root.logger=INFO,console

    [hadoop@aliyun102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group2/flume-netcat-flume.conf

  6. Use netcat to send content to port 44444 on the local machine

    $ nc localhost 44444

  7. Check the console output of Flume-2 and Flume-3; with the load-balancing processor, each event should appear on only one of the two consoles

Case: Aggregating Multiple Data Sources

Requirements:

Flume-1 on aliyun103 monitors the file /opt/module/group.log;

Flume-2 on aliyun102 monitors a data stream on a port;

Flume-1 and Flume-2 send their data to Flume-3 on aliyun104, which prints the merged data to the console.

Multiple Sources, Single Sink

  1. Preparation

    Distribute Flume to the other nodes

    [hadoop@aliyun102 module]$ xsync flume
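
    xsync here is a custom cluster-distribution script; if it is not available, plain rsync to each node does the same job, for example:

    [hadoop@aliyun102 module]$ rsync -av /opt/module/flume hadoop@aliyun103:/opt/module/
    [hadoop@aliyun102 module]$ rsync -av /opt/module/flume hadoop@aliyun104:/opt/module/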

    Create a group3 folder under /opt/module/flume/job on aliyun102, aliyun103, and aliyun104.

    [hadoop@aliyun102 job]$ mkdir group3

    [hadoop@aliyun103 job]$ mkdir group3

    [hadoop@aliyun104 job]$ mkdir group3

  2. Create flume1-logger-flume.conf

    Configure a source that monitors the group.log file, and a sink that sends the data to the next-tier Flume.

    On aliyun103, create and open the configuration file

    [hadoop@aliyun103 group3]$ touch flume1-logger-flume.conf

    [hadoop@aliyun103 group3]$ vim flume1-logger-flume.conf

    Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1

    # Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /opt/module/group.log
    a1.sources.r1.shell = /bin/bash -c

    # Describe the sink
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = aliyun104
    a1.sinks.k1.port = 4141

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
  3. Create flume2-netcat-flume.conf

    Configure a source that monitors the data stream on port 44444, and a sink that sends the data to the next-tier Flume:

    On aliyun102, create and open the configuration file

    [hadoop@aliyun102 group3]$ touch flume2-netcat-flume.conf

    [hadoop@aliyun102 group3]$ vim flume2-netcat-flume.conf

    Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    a2.sources.r1.type = netcat
    a2.sources.r1.bind = aliyun102
    a2.sources.r1.port = 44444

    # Describe the sink
    a2.sinks.k1.type = avro
    a2.sinks.k1.hostname = aliyun104
    a2.sinks.k1.port = 4141

    # Use a channel which buffers events in memory
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
  4. Create flume3-flume-logger.conf

    Configure a source that receives the data streams sent by flume1 and flume2; the merged stream is sunk to the console.

    On aliyun104, create and open the configuration file

    [hadoop@aliyun104 group3]$ touch flume3-flume-logger.conf

    [hadoop@aliyun104 group3]$ vim flume3-flume-logger.conf

    Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c1

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = aliyun104
    a3.sources.r1.port = 4141

    # Describe the sink
    a3.sinks.k1.type = logger

    # Describe the channel
    a3.channels.c1.type = memory
    a3.channels.c1.capacity = 1000
    a3.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c1
    a3.sinks.k1.channel = c1
  5. Run the configuration files

    Start an agent for each configuration file, in this order: flume3-flume-logger.conf, flume2-netcat-flume.conf, flume1-logger-flume.conf.

    [hadoop@aliyun104 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group3/flume3-flume-logger.conf -Dflume.root.logger=INFO,console

    [hadoop@aliyun102 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group3/flume2-netcat-flume.conf

    [hadoop@aliyun103 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group3/flume1-logger-flume.conf

  6. On aliyun103, append content to group.log in the /opt/module directory

    [hadoop@aliyun103 module]$ echo 'hello' >> group.log

  7. On aliyun102, send data to port 44444

    [hadoop@aliyun102 flume]$ telnet aliyun102 44444

  8. Check the data on aliyun104

Author: Tunan
Link: http://yerias.github.io/2018/12/02/flume/2/
Copyright Notice: All articles in this blog are licensed under CC BY-NC-SA 4.0 unless stated otherwise.