Windowing functions & The OVER clause & Analytics functions

Goals

  1. The OVER clause
  2. Analytics functions
  3. Windowing functions

The OVER clause

Aggregate functions such as count(), sum(), min(), max(), and avg() collapse multiple rows into a single row according to some rule.

Window functions also aggregate, but the data they return is not limited to one row.

A window function performs its analysis over a window, emitting one output row for every record in the window it operates on.

A window function opens its window with the OVER() clause.

The standard aggregate functions count(), sum(), min(), max(), and avg() can all be used as window functions.

A window can also be defined on its own in a window clause. The window specification supports the following formats:

(ROWS | RANGE) BETWEEN (UNBOUNDED | [num]) PRECEDING AND ([num] PRECEDING | CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
(ROWS | RANGE) BETWEEN CURRENT ROW AND (CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
(ROWS | RANGE) BETWEEN [num] FOLLOWING AND (UNBOUNDED | [num]) FOLLOWING

These are the three ways to define a window frame: the first starts at (or unboundedly before) some row preceding the current row, the second starts at the current row, and the third starts at some row following the current row.

PRECEDING: rows before the current row
FOLLOWING: rows after the current row
CURRENT ROW: the current row
UNBOUNDED: the partition boundary; UNBOUNDED PRECEDING means the frame starts at the first row of the partition, and UNBOUNDED FOLLOWING means it extends to the last row
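
Hive also lets you name a window once in a WINDOW clause and reuse it across several expressions. Below is a minimal sketch assuming the window01 table created later in this section; the alias w and the column aliases running_sum/running_max are illustrative:

    select
    name,
    grade,
    sum(grade) over w as running_sum,  -- both aggregates share the window w
    max(grade) over w as running_max
    from window01
    window w as (partition by name order by grade rows between unbounded preceding and current row);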

  1. Prepare the data

    window01.txt

    ruozedata,2019-04-10,1
    ruozedata,2019-04-11,5
    ruozedata,2019-04-12,7
    ruozedata,2019-04-13,3
    ruozedata,2019-04-14,2
    ruozedata,2019-04-15,4
    ruozedata,2019-04-16,4
  2. Create the table

    create database if not exists window_over;
    use window_over;
    CREATE EXTERNAL TABLE window01(
    name String,
    `date` String,
    grade Int
    )
    row format delimited fields terminated by ",";
  3. Load the data

    load data local inpath '/home/hadoop/data/window/window01.txt' into  table  window01;
  4. Query

    Question: how exactly do you open the window?

    The key: where the frame starts and where it ends.

    select
    name,
    `date`,
    grade,
    sum(grade) over(partition by name order by grade rows between UNBOUNDED PRECEDING and current row) as g1, -- sum of grade from the first row to the current row
    sum(grade) over(partition by name order by grade rows between 3 PRECEDING and current row) as g2,         -- sum of grade from 3 rows before the current row to the current row
    sum(grade) over(partition by name order by grade rows between 3 PRECEDING and 1 following) as g3,         -- sum of grade from 3 rows before to 1 row after the current row
    sum(grade) over(partition by name order by grade rows between current row and unbounded FOLLOWING) as g4  -- sum of grade from the current row to the last row
    from
    window01;

    Result

    name     |date      |grade|g1|g2|g3|g4|
    ---------|----------|-----|--|--|--|--|
    ruozedata|2019-04-10|    1| 1| 1| 3|26|
    ruozedata|2019-04-14|    2| 3| 3| 6|25|
    ruozedata|2019-04-13|    3| 6| 6|10|23|
    ruozedata|2019-04-16|    4|10|10|14|20|
    ruozedata|2019-04-15|    4|14|13|18|16|
    ruozedata|2019-04-11|    5|19|16|23|12|
    ruozedata|2019-04-12|    7|26|20|20| 7|
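
The query above uses only ROWS frames. A RANGE frame differs when the ORDER BY column has ties: it pulls all peer rows (rows with the same ORDER BY value) into the frame together. A sketch against the same window01 table (the aliases rows_sum and range_sum are illustrative):

    select
    name,
    grade,
    -- identical frames except for ROWS vs RANGE
    sum(grade) over(partition by name order by grade rows  between unbounded preceding and current row) as rows_sum,
    sum(grade) over(partition by name order by grade range between unbounded preceding and current row) as range_sum
    from window01;

For the two rows with grade = 4, rows_sum is 10 and then 14, while range_sum is 14 for both, because each tied row's RANGE frame includes its peer. Note also that when ORDER BY is given without an explicit frame, the default frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.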

Analytics functions

The analytics functions are RANK, ROW_NUMBER, DENSE_RANK, CUME_DIST, PERCENT_RANK, and NTILE.

These functions fall into three groups: ranking (RANK, ROW_NUMBER, DENSE_RANK), proportions (CUME_DIST, PERCENT_RANK), and NTILE, which slices a partition into a specified number of buckets.

Ranking functions

  1. Prepare the data

    rank.txt

    gifshow.com,2019-04-10,1
    gifshow.com,2019-04-11,5
    gifshow.com,2019-04-12,7
    gifshow.com,2019-04-13,3
    gifshow.com,2019-04-14,2
    gifshow.com,2019-04-15,4
    gifshow.com,2019-04-16,4
    yy.com,2019-04-10,2
    yy.com,2019-04-11,3
    yy.com,2019-04-12,5
    yy.com,2019-04-13,6
    yy.com,2019-04-14,3
    yy.com,2019-04-15,9
    yy.com,2019-04-16,7
  2. Create the table

    create external table traffic(
    domain String,
    `date` String,
    grade Int
    )
    row format delimited fields terminated by ",";
  3. Load the data

    load data local inpath '/home/hadoop/data/window/rank.txt' overwrite into table traffic;
  4. Query

    select
    domain,
    `date`,
    grade,
    rank() over(partition by domain order by grade) as r1,
    ROW_NUMBER() over(partition by domain order by grade) as r2,
    DENSE_RANK() over(partition by domain order by grade) as r3
    from
    traffic;

    Result

    domain     |date      |grade|r1|r2|r3|
    -----------|----------|-----|--|--|--|
    gifshow.com|2019-04-10|    1| 1| 1| 1|
    gifshow.com|2019-04-14|    2| 2| 2| 2|
    gifshow.com|2019-04-13|    3| 3| 3| 3|
    gifshow.com|2019-04-16|    4| 4| 4| 4|
    gifshow.com|2019-04-15|    4| 4| 5| 4|
    gifshow.com|2019-04-11|    5| 6| 6| 5|
    gifshow.com|2019-04-12|    7| 7| 7| 6|
    yy.com     |2019-04-10|    2| 1| 1| 1|
    yy.com     |2019-04-11|    3| 2| 2| 2|
    yy.com     |2019-04-14|    3| 2| 3| 2|
    yy.com     |2019-04-12|    5| 4| 4| 3|
    yy.com     |2019-04-13|    6| 5| 5| 4|
    yy.com     |2019-04-16|    7| 6| 6| 5|
    yy.com     |2019-04-15|    9| 7| 7| 6|
  5. Summary

    RANK(): assigns a rank within each partition; tied rows share the same rank and leave gaps after them
    DENSE_RANK(): tied rows share the same rank, but no gaps are left
    ROW_NUMBER(): numbers the rows of each partition sequentially from 1 in sort order (recommended; see the top-N sketch below)
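
A typical use of ROW_NUMBER() is picking the top N rows per group. A sketch against the traffic table above (the alias rn and the cutoff of 2 are illustrative):

    select domain, `date`, grade
    from (
        select
        domain,
        `date`,
        grade,
        -- rank rows within each domain, highest grade first
        ROW_NUMBER() over(partition by domain order by grade desc) as rn
        from traffic
    ) t
    where rn <= 2;

Because ROW_NUMBER() assigns distinct numbers even to tied grades, exactly two rows come back per domain.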

Proportion functions

CUME_DIST(): the number of rows whose value (by the ORDER BY field in OVER) is less than or equal to the current row's value, divided by the total number of rows in the partition
PERCENT_RANK(): (rank of the current row within the partition - 1) / (total rows in the partition - 1)

For example, in dept01 below (3 rows), the row with sal = 20000 has 2 rows less than or equal to it, so CUME_DIST = 2/3 ≈ 0.667; its rank is 2, so PERCENT_RANK = (2 - 1) / (3 - 1) = 0.5.

  1. Prepare the data

    window02.txt

    dept01,ruoze,10000
    dept01,jepson,20000
    dept01,xingxing,30000
    dept02,zhangsan,40000
    dept02,lisi,50000
  2. Create the table

    create external table window02(
    dept String,
    `user` String,
    sal int
    )
    row format delimited fields terminated by ",";
  3. Load the data

    load data local inpath '/home/hadoop/data/window/window02.txt' overwrite into table window02;
  4. Query

    select
    dept,
    `user`,
    sal,
    CUME_DIST() over(order by sal) as cume1,                   -- over the whole table
    CUME_DIST() over(partition by dept order by sal) as cume2, -- within each dept
    PERCENT_RANK() over(partition by dept order by sal) as percent
    from window02;

    Result

    dept  |user    |sal  |cume1|cume2             |percent|
    ------|--------|-----|-----|------------------|-------|
    dept01|ruoze   |10000|  0.2|0.3333333333333333|      0|
    dept01|jepson  |20000|  0.4|0.6666666666666666|    0.5|
    dept01|xingxing|30000|  0.6|                 1|      1|
    dept02|zhangsan|40000|  0.8|               0.5|      0|
    dept02|lisi    |50000|    1|                 1|      1|

Bucketing with NTILE

NTILE(num): distributes the ordered rows of a partition into num slices as evenly as possible and records each row's slice number.

Use case: 128 MB of data with only one or two columns can hold tens of millions of records; it has to be split apart, and NTILE(n) provides the split ==> changing the number of map tasks.

  1. Query

    select
    domain,
    `date`,
    grade,
    ntile(2) over(partition by domain order by grade) as n1,
    ntile(3) over(partition by domain order by grade) as n2,
    ntile(4) over(partition by domain order by grade) as n3
    from
    traffic;

    Result

    domain     |date      |grade|n1|n2|n3|
    -----------|----------|-----|--|--|--|
    gifshow.com|2019-04-10|    1| 1| 1| 1|
    gifshow.com|2019-04-14|    2| 1| 1| 1|
    gifshow.com|2019-04-13|    3| 1| 1| 2|
    gifshow.com|2019-04-16|    4| 1| 2| 2|
    gifshow.com|2019-04-15|    4| 2| 2| 3|
    gifshow.com|2019-04-11|    5| 2| 3| 3|
    gifshow.com|2019-04-12|    7| 2| 3| 4|
    yy.com     |2019-04-10|    2| 1| 1| 1|
    yy.com     |2019-04-11|    3| 1| 1| 1|
    yy.com     |2019-04-14|    3| 1| 1| 2|
    yy.com     |2019-04-12|    5| 1| 2| 2|
    yy.com     |2019-04-13|    6| 2| 2| 3|
    yy.com     |2019-04-16|    7| 2| 3| 3|
    yy.com     |2019-04-15|    9| 2| 3| 4|
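
A common follow-up is to keep only one bucket, for example the top quartile per domain. A sketch (the alias quartile is illustrative):

    select domain, `date`, grade
    from (
        select
        domain,
        `date`,
        grade,
        -- assign each row of a domain to one of 4 buckets by grade
        ntile(4) over(partition by domain order by grade) as quartile
        from traffic
    ) t
    where quartile = 4;

With grade sorted ascending, bucket 4 holds the highest grades, so this returns roughly the top 25% of rows for each domain.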

Windowing functions

LAG(col, n, default): returns the value of col from the row n rows before the current row within the window; if no such row exists, default is returned when given, otherwise NULL.

LEAD(col, n, default): returns the value of col from the row n rows after the current row within the window; if no such row exists, default is returned when given, otherwise NULL.

  1. Prepare the data

    window03.txt

    cookie1,2015-04-10 10:00:02,url2
    cookie1,2015-04-10 10:00:00,url1
    cookie1,2015-04-10 10:03:04,1url3
    cookie1,2015-04-10 10:50:05,url6
    cookie1,2015-04-10 11:00:00,url7
    cookie1,2015-04-10 10:10:00,url4
    cookie1,2015-04-10 10:50:01,url5
    cookie2,2015-04-10 10:00:02,url22
    cookie2,2015-04-10 10:00:00,url11
    cookie2,2015-04-10 10:03:04,1url33
    cookie2,2015-04-10 10:50:05,url66
    cookie2,2015-04-10 11:00:00,url77
    cookie2,2015-04-10 10:10:00,url44
    cookie2,2015-04-10 10:50:01,url55
  2. Create the table

    create external table window03(
    cookid String,
    time String,
    url String
    )
    row format delimited fields terminated by ",";
  3. Load the data

    load data local inpath '/home/hadoop/data/window/window03.txt' overwrite into table window03;
  4. Query

    select
    cookid,
    time,
    url,
    lag(time,1,'1970-00-00 00:00:00') over(partition by cookid order by time),
    lead(time,2,"null") over(partition by cookid order by time)
    from window03;

    Result

    cookid |time               |url   |lag                |lead               |
    -------|-------------------|------|-------------------|-------------------|
    cookie1|2015-04-10 10:00:00|url1  |1970-00-00 00:00:00|2015-04-10 10:03:04|
    cookie1|2015-04-10 10:00:02|url2  |2015-04-10 10:00:00|2015-04-10 10:10:00|
    cookie1|2015-04-10 10:03:04|1url3 |2015-04-10 10:00:02|2015-04-10 10:50:01|
    cookie1|2015-04-10 10:10:00|url4  |2015-04-10 10:03:04|2015-04-10 10:50:05|
    cookie1|2015-04-10 10:50:01|url5  |2015-04-10 10:10:00|2015-04-10 11:00:00|
    cookie1|2015-04-10 10:50:05|url6  |2015-04-10 10:50:01|null               |
    cookie1|2015-04-10 11:00:00|url7  |2015-04-10 10:50:05|null               |
    cookie2|2015-04-10 10:00:00|url11 |1970-00-00 00:00:00|2015-04-10 10:03:04|
    cookie2|2015-04-10 10:00:02|url22 |2015-04-10 10:00:00|2015-04-10 10:10:00|
    cookie2|2015-04-10 10:03:04|1url33|2015-04-10 10:00:02|2015-04-10 10:50:01|
    cookie2|2015-04-10 10:10:00|url44 |2015-04-10 10:03:04|2015-04-10 10:50:05|
    cookie2|2015-04-10 10:50:01|url55 |2015-04-10 10:10:00|2015-04-10 11:00:00|
    cookie2|2015-04-10 10:50:05|url66 |2015-04-10 10:50:01|null               |
    cookie2|2015-04-10 11:00:00|url77 |2015-04-10 10:50:05|null               |
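
A classic application of LAG is computing the gap between consecutive events per user. A sketch against window03 (the names prev_time and gap_seconds are illustrative):

    select
    cookid,
    time,
    -- seconds between this page view and the previous one
    unix_timestamp(time) - unix_timestamp(prev_time) as gap_seconds
    from (
        select
        cookid,
        time,
        lag(time, 1) over(partition by cookid order by time) as prev_time
        from window03
    ) t;

The first page view of each cookie has no predecessor, so prev_time is NULL and gap_seconds comes back NULL rather than a misleading zero.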
Author: Tunan
Link: http://yerias.github.io/2018/11/08/hive/8/
Copyright Notice: All articles in this blog are licensed under CC BY-NC-SA 4.0 unless stated otherwise.