Oracle 10g Function Reference -- Analytic Functions

Oracle analytic functions in the SQL*Plus environment

One. General introduction

1. How analytic functions work: the syntax

FUNCTION_NAME(<parameters>, ...)
OVER (<PARTITION BY expression, ...>
      <ORDER BY expression <ASC | DESC> <NULLS FIRST | NULLS LAST>>
      <windowing clause>)

PARTITION clause
ORDER BY clause
WINDOWING clause -- when omitted, it defaults to RANGE UNBOUNDED PRECEDING

1) Range window (RANGE WINDOW)
RANGE N PRECEDING is valid only for numeric and date ORDER BY columns. After ordering, the window holds the current row plus every preceding row whose ORDER BY value lies within N of the current row's value (current value - N for PRECEDING, current value + N for FOLLOWING), so the window depends directly on the ORDER BY clause.

2) Row window (ROW WINDOW)
ROWS N PRECEDING selects the current row and the N rows before it.

Both forms can also be written with BETWEEN ... AND, for example RANGE BETWEEN m PRECEDING AND n FOLLOWING. (A short windowing sketch follows the function reference below.)

Function reference

AVG(<DISTINCT | ALL> expr) -- average of the expression over the group or selected window
CORR(expr1, expr2) -- correlation of the two expressions, i.e. COVAR_POP(expr1, expr2) / (STDDEV_POP(expr1) * STDDEV_POP(expr2)); ranges from -1 (inverse correlation) to 1 (positive correlation), 0 means uncorrelated
COUNT(<DISTINCT> <* | expr>) -- row count
COVAR_POP(expr1, expr2) -- population covariance
COVAR_SAMP(expr1, expr2) -- sample covariance
CUME_DIST -- cumulative distribution: the relative position of the row within its group, returned as a value between 0 and 1
DENSE_RANK -- relative rank (requires ORDER BY); equal values (NULLs count as equal) share the same rank and no rank numbers are skipped
FIRST_VALUE -- the first value of a group
LAG(expr, <offset>, <default>) -- accesses a preceding row; OFFSET is a positive integer (default 1) giving the number of rows back; DEFAULT is returned when the offset falls outside the selected window (for example, there is no row before the first one)
LEAD(expr, <offset>, <default>) -- accesses a following row; OFFSET is a positive integer (default 1) giving the number of rows forward; DEFAULT is returned when the offset falls outside the selected window (for example, there is no row after the last one)
LAST_VALUE -- the last value of a group
MAX(expr) -- maximum value
MIN(expr) -- minimum value
NTILE(expr) -- assigns a bucket number based on the row's position in the group; if the expression is 4, the group is split into 4 parts numbered 1 to 4, and when the rows do not divide evenly the lower-numbered buckets receive the extra rows
PERCENT_RANK -- similar to CUME_DIST: (rank - 1) / (number of rows in the group - 1)
RANK -- relative rank; equal values tie, and the ranks following a tie are skipped
RATIO_TO_REPORT(expr) -- the expression's value / SUM(the expression's values) over the group
ROW_NUMBER -- the row's ordinal number within its group
STDDEV(expr) -- standard deviation
STDDEV_POP(expr) -- population standard deviation
STDDEV_SAMP(expr) -- sample standard deviation
SUM(expr) -- total
VAR_POP(expr) -- population variance
VAR_SAMP(expr) -- sample variance
VARIANCE(expr) -- variance
REGR_xxxx(expr1, expr2) -- linear regression functions:
  REGR_SLOPE -- returns the slope, equal to COVAR_POP(expr1, expr2) / VAR_POP(expr2)
  REGR_INTERCEPT -- returns the y-intercept of the regression line, equal to AVG(expr1) - REGR_SLOPE(expr1, expr2) * AVG(expr2)
  REGR_COUNT -- returns the number of non-null pairs used to fit the regression line
  REGR_R2 -- returns the coefficient of determination of the regression line, computed as:
    if VAR_POP(expr2) = 0 then return NULL
    if VAR_POP(expr1) = 0 and VAR_POP(expr2) != 0 then return 1
    if VAR_POP(expr1) > 0 and VAR_POP(expr2) != 0 then return POWER(CORR(expr1, expr2), 2)
  REGR_AVGX -- average of the regression line's independent variable (expr2) after discarding (expr1, expr2) pairs containing NULL; equal to AVG(expr2)
  REGR_AVGY -- average of the regression line's dependent variable (expr1) after discarding (expr1, expr2) pairs containing NULL; equal to AVG(expr1)
  REGR_SXX -- returns REGR_COUNT(expr1, expr2) * VAR_POP(expr2)
  REGR_SYY -- returns REGR_COUNT(expr1, expr2) * VAR_POP(expr1)
  REGR_SXY -- returns REGR_COUNT(expr1, expr2) * COVAR_POP(expr1, expr2)
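As a minimal windowing sketch (not part of the original examples; it uses the standard SCOTT.EMP demo table that also appears further down, and the column aliases are illustrative):

select ename, hiredate, sal,
       -- ROWS window: the current row and the 2 preceding rows
       sum(sal) over (order by hiredate rows 2 preceding) mov_sum_3rows,
       -- the same window written in the BETWEEN ... AND form
       avg(sal) over (order by hiredate rows between 2 preceding and current row) mov_avg_3rows,
       -- RANGE window: all rows whose hiredate falls within the 90 days before the current row's hiredate
       sum(sal) over (order by hiredate range 90 preceding) sum_prev_90_days
  from emp;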
First, create the test table and load the test data:

create table students
(
  id       number(15,0),
  area     varchar2(10),
  stu_type varchar2(2),
  score    number(20,2)
);

insert into students values(1, '111', 'g', 80);
insert into students values(1, '111', 'j', 80);
insert into students values(1, '222', 'g', 89);
insert into students values(1, '222', 'g', 68);
insert into students values(2, '111', 'g', 80);
insert into students values(2, '111', 'j', 70);
insert into students values(2, '222', 'g', 60);
insert into students values(2, '222', 'j', 65);
insert into students values(3, '111', 'g', 75);
insert into students values(3, '111', 'j', 58);
insert into students values(3, '222', 'g', 58);
insert into students values(3, '222', 'j', 90);
insert into students values(4, '111', 'g', 89);
insert into students values(4, '111', 'j', 90);
insert into students values(4, '222', 'g', 90);
insert into students values(4, '222', 'j', 89);
commit;

Two. Specific applications

1. Group summation

1) The GROUP BY clause

-- A. GROUPING SETS
select id, area, stu_type, sum(score) score
  from students
 group by grouping sets((id, area, stu_type), (id, area), id)
 order by id, area, stu_type;
/*-------- Understanding grouping sets
select a, b, c, sum( d ) from t
group by grouping sets ( a, b, c )
is equivalent to
select * from (
  select a, null, null, sum( d ) from t group by a
  union all
  select null, b, null, sum( d ) from t group by b
  union all
  select null, null, c, sum( d ) from t group by c
)
*/

-- B. ROLLUP
select id, area, stu_type, sum(score) score
  from students
 group by rollup(id, area, stu_type)
 order by id, area, stu_type;
/*-------- Understanding rollup
select a, b, c, sum( d ) from t
group by rollup(a, b, c);
is equivalent to
select * from (
  select a, b, c, sum( d ) from t group by a, b, c
  union all
  select a, b, null, sum( d ) from t group by a, b
  union all
  select a, null, null, sum( d ) from t group by a
  union all
  select null, null, null, sum( d ) from t
)
*/

-- C. CUBE
select id, area, stu_type, sum(score) score
  from students
 group by cube(id, area, stu_type)
 order by id, area, stu_type;
/*-------- Understanding cube
select a, b, c, sum( d ) from t
group by cube( a, b, c)
is equivalent to
select a, b, c, sum( d ) from t
group by grouping sets( ( a, b, c ), ( a, b ), ( a ), ( b, c ), ( b ), ( a, c ), ( c ), () )
*/

-- D. GROUPING
/* In the results above it is easy to see that every summary row contains NULLs.
   How do we tell which columns a row is summarized over?
   The GROUPING function answers exactly that: it returns 1 when the column is
   aggregated away in that row and 0 otherwise. */
select decode(grouping(id), 1, 'all id', id) id,
       decode(grouping(area), 1, 'all area', to_char(area)) area,
       decode(grouping(stu_type), 1, 'all_stu_type', stu_type) stu_type,
       sum(score) score
  from students
 group by cube(id, area, stu_type)
 order by id, area, stu_type;
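A small additional sketch (not in the original article): GROUPING can also be used in the HAVING clause to keep only the subtotal rows that ROLLUP produces, here the per-id subtotals plus the grand total:

select decode(grouping(id), 1, 'all id', id) id,
       sum(score) score
  from students
 group by rollup(id, area)
having grouping(area) = 1   -- keep only rows where area has been aggregated away
 order by id;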
2. Using the OVER() function

1) Ranking statistics -- dense_rank(), row_number(), rank(), cume_dist()

(1) Ties are allowed and the ranking is not interrupted: DENSE_RANK(). The result looks like 1 2 2 3 4 4 4 5 6 ...
Rank score within each id group: dense_rank() over(partition by id order by score desc)
Rank score without grouping:     dense_rank() over(order by score desc)

select id, area, score,
       dense_rank() over(partition by id order by score desc) rank_in_id,
       dense_rank() over(order by score desc) rank_overall
  from students
 order by id, area;

(2) No ties allowed, the same value never repeats a rank: ROW_NUMBER(). The result looks like 1 2 3 4 5 6 ...
Rank score within each id group: row_number() over(partition by id order by score desc)
Rank score without grouping:     row_number() over(order by score desc)

select id, area, score,
       row_number() over(partition by id order by score desc) rank_in_id,
       row_number() over(order by score desc) rank_overall
  from students
 order by id, area;

(3) Ties are allowed and the ranks after a tie are skipped: rank(). The result looks like 1 2 2 4 5 5 5 8 ...
Rank score within each id group: rank() over(partition by id order by score desc)
Rank score without grouping:     rank() over(order by score desc)

select id, area, score,
       rank() over(partition by id order by score desc) rank_in_id,
       rank() over(order by score desc) rank_overall
  from students
 order by id, area;

(4) Position analysis with cume_dist(): the highest rank of the row's value divided by the total number of rows.
Function form: cume_dist() over(order by id)

select id, area, score,
       cume_dist() over(order by id) a,                          -- highest rank of this id value / total number of rows
       cume_dist() over(partition by id order by score desc) b,  -- within each id group: highest rank of this score / rows in the group
       row_number() over(order by id) rn
  from students
 order by id, area;

(5) cume_dist() combined with a count: ties are allowed, ranks after a tie are skipped, and tied rows take the larger rank. The result looks like 2 2 3 5 5 7 7 8 ...
Rank score within each id group: cume_dist() over(partition by id order by score desc) * sum(1) over(partition by id)
Rank score without grouping:     cume_dist() over(order by score desc) * sum(1) over()

select id, area, score,
       sum(1) over() as total_rows,
       sum(1) over(partition by id) as rows_in_group,
       (cume_dist() over(partition by id order by score desc)) * (sum(1) over(partition by id)) rank_in_id,
       (cume_dist() over(order by score desc)) * (sum(1) over()) rank_overall
  from students
 order by id, area;

2) Group statistics -- sum(), max(), avg(), ratio_to_report()

select id, area,
       sum(1) over() as total_rows,
       sum(1) over(partition by id) as rows_in_group,
       sum(score) over() as total_score,
       sum(score) over(partition by id) as group_sum,
       sum(score) over(order by id) as running_sum_by_id,
       sum(score) over(partition by id, area) as sum_by_id_area,
       sum(score) over(partition by id order by area) as running_sum_in_id_by_area,
       max(score) over() as max_score,
       max(score) over(partition by id) as group_max,
       max(score) over(order by id) as running_max_by_id,
       max(score) over(partition by id, area) as max_by_id_area,
       max(score) over(partition by id order by area) as running_max_in_id_by_area,
       avg(score) over() as overall_avg,
       avg(score) over(partition by id) as group_avg,
       avg(score) over(order by id) as running_avg_by_id,
       avg(score) over(partition by id, area) as avg_by_id_area,
       avg(score) over(partition by id order by area) as running_avg_in_id_by_area,
       ratio_to_report(score) over() as "Share of all (%)",
       ratio_to_report(score) over(partition by id) as "Share of group (%)",
       score
  from students;

3) LAG(col, n, default), LEAD(col, n, default) -- take values from the Nth preceding / following row

Value from a preceding record: lag(score, n, x) over(order by id)
Value from a following record: lead(score, n, x) over(order by id)
Parameters: n is the number of rows to move, x is the value to return when that row does not exist, and id defines the row order.

select id, lag(score, 1, 0) over(order by id) lg, score from students;
select id, lead(score, 1, 0) over(order by id) lg, score from students;

4) FIRST_VALUE(), LAST_VALUE()

Value of the first row: first_value(score) over(order by id)
Value of the last row:  last_value(score) over(order by id)

select id, first_value(score) over(order by id) fv, score from students;
select id, last_value(score) over(order by id) lv, score from students;

sum(...) over (...)

[Function] Running ("continuous") sum analytic function.
[Parameters] See the examples.
[Description] An NC example of the Oracle analytic function:
select bdcode, sum(1) over (order by bdcode) aa from bd_bdinfo;

[Examples]

1. The original data:

SQL> break on deptno skip 1  -- makes the effect clearer: each department's rows are shown as a separate block
SQL> select deptno, ename, sal
       from emp
      order by deptno;

    DEPTNO ENAME             SAL
---------- ---------- ----------
        10 CLARK            2450
           KING             5000
           MILLER           1300

        20 SMITH             800
           ADAMS            1100
           FORD             3000
           SCOTT            3000
           JONES            2975

        30 ALLEN            1600
           BLAKE            2850
           MARTIN           1250
           JAMES             950
           TURNER           1500
           WARD             1250

2. Start simple, and note how different conditions inside over(...) change the result. sum(sal) over (order by ename) computes a "running" sum of salaries; without the ORDER BY clause the sum is no longer "running". Putting both in one query makes the difference easy to see:

SQL> select deptno, ename, sal,
            sum(sal) over (order by ename) running_sum,
            sum(sal) over () total_sum,                 -- here sum(sal) over () equals sum(sal)
            100*round(sal/sum(sal) over (), 4) "Share(%)"
       from emp;

    DEPTNO ENAME         SAL RUNNING_SUM  TOTAL_SUM   Share(%)
---------- ---------- ------ ----------- ---------- ----------
        20 ADAMS        1100        1100      29025       3.79
        30 ALLEN        1600        2700      29025       5.51
        30 BLAKE        2850        5550      29025       9.82
        10 CLARK        2450        8000      29025       8.44
        20 FORD         3000       11000      29025      10.34
        30 JAMES         950       11950      29025       3.27
        20 JONES        2975       14925      29025      10.25
        10 KING         5000       19925      29025      17.23
        30 MARTIN       1250       21175      29025       4.31
        10 MILLER       1300       22475      29025       4.48
        20 SCOTT        3000       25475      29025      10.34
        20 SMITH         800       26275      29025       2.76
        30 TURNER       1500       27775      29025       5.17
        30 WARD         1250       29025      29025       4.31

3. Use partitioning to get a running sum of salaries within each department. Pay attention to the partition by deptno, and again note how the different conditions inside over(...) behave:
sum(sal) over (partition by deptno order by ename) -- running sum within each department
sum(sal) over (partition by deptno)                -- total per department; the same value for every row of a department
sum(sal) over (order by deptno, ename)             -- "running" sum across all rows, not partitioned by department
sum(sal) over ()                                   -- sum over all employees, equivalent to sum(sal)
SQL> select deptno, ename, sal,
            sum(sal) over (partition by deptno order by ename) dept_running_sum, -- running sum within the department
            sum(sal) over (partition by deptno) dept_total,                      -- department total: the same value for every row of the department
            100*round(sal/sum(sal) over (partition by deptno), 4) "Dept share(%)",
            sum(sal) over (order by deptno, ename) running_sum,                  -- "running" sum across all departments
            sum(sal) over () total_sum,                                          -- here sum(sal) over () equals sum(sal): the sum over all employees
            100*round(sal/sum(sal) over (), 4) "Total share(%)"
       from emp;

DEPTNO ENAME    SAL DEPT_RUNNING_SUM DEPT_TOTAL Dept share(%) RUNNING_SUM TOTAL_SUM Total share(%)
------ ------ ----- ---------------- ---------- ------------- ----------- --------- --------------
    10 CLARK   2450             2450       8750            28        2450     29025           8.44
       KING    5000             7450       8750         57.14        7450     29025          17.23
       MILLER  1300             8750       8750         14.86        8750     29025           4.48
    20 ADAMS   1100             1100      10875         10.11        9850     29025           3.79
       FORD    3000             4100      10875         27.59       12850     29025          10.34
       JONES   2975             7075      10875         27.36       15825     29025          10.25
       SCOTT   3000            10075      10875         27.59       18825     29025          10.34
       SMITH    800            10875      10875          7.36       19625     29025           2.76
    30 ALLEN   1600             1600       9400         17.02       21225     29025           5.51
       BLAKE   2850             4450       9400         30.32       24075     29025           9.82
       JAMES    950             5400       9400         10.11       25025     29025           3.27
       MARTIN  1250             6650       9400          13.3       26275     29025           4.31
       TURNER  1500             8150       9400         15.96       27775     29025           5.17
       WARD    1250             9400       9400          13.3       29025     29025           4.31

4. A comprehensive example: running sums partitioned by department, alongside a running sum without a partition:

SQL> select deptno, ename, sal,
            sum(sal) over (partition by deptno order by sal) dept_sum,
            sum(sal) over (order by deptno, sal) sum
       from emp;

    DEPTNO ENAME         SAL   DEPT_SUM        SUM
---------- ---------- ------ ---------- ----------
        10 MILLER       1300       1300       1300
           CLARK        2450       3750       3750
           KING         5000       8750       8750
        20 SMITH         800        800       9550
           ADAMS        1100       1900      10650
           JONES        2975       4875      13625
           SCOTT        3000      10875      19625
           FORD         3000      10875      19625
        30 JAMES         950        950      20575
           WARD         1250       3450      23075
           MARTIN       1250       3450      23075
           TURNER       1500       4950      24575
           ALLEN        1600       6550      26175
           BLAKE        2850       9400      29025

5. The reverse direction: departments from largest to smallest, salaries within each department from high to low, and the cumulative sums follow the same rule:

SQL> select deptno, ename, sal,
            sum(sal) over (partition by deptno order by deptno desc, sal desc) dept_sum,
            sum(sal) over (order by deptno desc, sal desc) sum
       from emp;

    DEPTNO ENAME         SAL   DEPT_SUM        SUM
---------- ---------- ------ ---------- ----------
        30 BLAKE        2850       2850       2850
           ALLEN        1600       4450       4450
           TURNER       1500       5950       5950
           WARD         1250       8450       8450
           MARTIN       1250       8450       8450
           JAMES         950       9400       9400
        20 SCOTT        3000       6000      15400
           FORD         3000       6000      15400
           JONES        2975       8975      18375
           ADAMS        1100      10075      19475
           SMITH         800      10875      20275
        10 KING         5000       5000      25275
           CLARK        2450       7450      27725
           MILLER       1300       8750      29025

6. Lessons learned: the statements above have no ORDER BY clause at the end of "... from emp;"; the row order comes from the analytic clause (partition by deptno order by sal). If an ORDER BY clause is added at the end of the statement and it is consistent with the analytic ordering, the output still reads naturally; if it is inconsistent, the rows and the running sums no longer line up and the result becomes hard to follow.
For example:

SQL> select deptno, ename, sal,
            sum(sal) over (partition by deptno order by sal) dept_sum,
            sum(sal) over (order by deptno, sal) sum
       from emp
      order by deptno desc;

    DEPTNO ENAME         SAL   DEPT_SUM        SUM
---------- ---------- ------ ---------- ----------
        30 JAMES         950        950      20575
           WARD         1250       3450      23075
           MARTIN       1250       3450      23075
           TURNER       1500       4950      24575
           ALLEN        1600       6550      26175
           BLAKE        2850       9400      29025
        20 SMITH         800        800       9550
           ADAMS        1100       1900      10650
           JONES        2975       4875      13625
           SCOTT        3000      10875      19625
           FORD         3000      10875      19625
        10 MILLER       1300       1300       1300
           CLARK        2450       3750       3750
           KING         5000       8750       8750
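A small follow-up sketch (not in the original article): point 6 above suggests keeping the final ORDER BY consistent with the analytic ORDER BY, so that the rows and the running sum line up again:

select deptno, ename, sal,
       sum(sal) over (partition by deptno order by sal) dept_sum,
       sum(sal) over (order by deptno, sal) run_sum
  from emp
 order by deptno, sal;   -- same ordering as the analytic clause, so run_sum increases down the page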


RANK() and DENSE_RANK()

[Syntax]
RANK() OVER ([query_partition_clause] order_by_clause)
DENSE_RANK() OVER ([query_partition_clause] order_by_clause)

[Function] The main job of RANK and DENSE_RANK is to compute the rank of a value within a set of values.

[Parameters] dense_rank() is used exactly like rank(); the difference is how ties affect the following ranks. rank() skips ranks: if two rows tie for second place, the next row is fourth (within each group as well). dense_rank() is continuous: after two rows tie for second place, the next row is still third.

[Description] Oracle analytic function.

[Examples] The RANK and DENSE_RANK functions compute the rank of a value within a group of values.

Before release 9i only the analytic form (analytic) existed: it computes a rank for every row of the query result, based on the value_exprs of the columns given in the order_by_clause.
Syntax: RANK() OVER ([query_partition_clause] order_by_clause)

Release 9i added an aggregate form (aggregate): given constant arguments, it computes the rank those values would have within the query's result set. The arguments must be constants or constant expressions and must match the columns of the ORDER BY clause exactly in number, position, and type.
Syntax:
RANK( expr [, expr]... ) WITHIN GROUP
( ORDER BY expr [ DESC | ASC ] [NULLS { FIRST | LAST }]
  [, expr [ DESC | ASC ] [NULLS { FIRST | LAST }]]... )

Example 1:
A table "table" has the following contents:

COL1 COL2
   1    1
   2    1
   3    2
   3    1
   4    1
   4    2
   5    2
   5    2
   6    2

Analytic form: group by col2, sort by col1 within each group, and generate the rank column. This is most useful for picking out the top rows of each group in a result set.

SELECT a.*, RANK() OVER (PARTITION BY col2 ORDER BY col1) "Rank" FROM table a;

The result is:

COL1 COL2 Rank
   1    1    1
   2    1    2
   3    1    3
   4    1    4
   3    2    1
   4    2    2
   5    2    3
   5    2    3
   6    2    5

Example 2:
Table A (subject, score) contains:

math     80
chinese  70
math     90
math     60
math    100
chinese  88
chinese  65
chinese  77

The result we want is the top 3 scores per subject:

math    100
math     90
math     80
chinese  88
chinese  77
chinese  70

The statement is simply:

select *
  from (select rank() over (partition by subject order by score desc) rk, a.*
          from a) t
 where t.rk <= 3;

Example 3 (aggregate form): compute the rank that the value pair (4, 1) would take when ordered by col1, col2, that is, its position after the ordering:

SELECT RANK(4, 1) WITHIN GROUP (ORDER BY col1, col2) "Rank" FROM table;

The result is:

Rank
   4

rank() and dense_rank() are used in the same way, but with one difference: dense_rank() does not skip rank numbers when rows tie.
An example of rank() skipping, using table test3:

A  B     C
a  liu   wang
a  jin   shu
a  cai   kai
b  yang  du
b  lin   ying
b  yao   cai
b  yang  99

With rank():

select m.a, m.b, m.c, rank() over(partition by a order by b) liu from test3 m

A  B     C        LIU
a  cai   kai        1
a  jin   shu        2
a  liu   wang       3
b  lin   ying       1
b  yang  du         2
b  yang  99         2
b  yao   cai        4

And with dense_rank():

select m.a, m.b, m.c, dense_rank() over(partition by a order by b) liu from test3 m

A  B     C        LIU
a  cai   kai        1
a  jin   shu        2
a  liu   wang       3
b  lin   ying       1
b  yang  du         2
b  yang  99         2
b  yao   cai        3
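NTILE appears in the function reference at the top of this article but is never demonstrated; here is a minimal sketch (not from the original examples) using the same EMP table as the running-sum examples above:

select deptno, ename, sal,
       ntile(4) over (order by sal desc) sal_quartile,             -- split all employees into 4 buckets by salary
       ntile(2) over (partition by deptno order by sal desc) half  -- split each department into 2 buckets
  from emp;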

ROW_NUMBER()

[Syntax] ROW_NUMBER() OVER (PARTITION BY COL1 ORDER BY COL2)

[Function] Partitions the rows by COL1, sorts them by COL2 within each partition, and returns each row's sequence number within its partition (the numbers are consecutive and unique only within a partition). row_number() mainly returns the "row position"; it does not express tied ranks.

[Parameters]

[Description] Oracle analytic function. Its main use: returning the first (or last) N rows of each group.

[Example] Given the following table:

name | seqno | description
A    | 1     | test
A    | 2     | test
A    | 3     | test
A    | 4     | test
B    | 1     | test
B    | 2     | test
B    | 3     | test
B    | 4     | test
C    | 1     | test
C    | 2     | test
C    | 3     | test
C    | 4     | test

We want a SQL statement whose result is:

A | 1 | test
A | 2 | test
B | 1 | test
B | 2 | test
C | 1 | test
C | 2 | test

Implementation (row_number() numbers the rows within each name group, and the outer filter keeps the first two of each group):

select name, seqno, description
  from (select name, seqno, description,
               row_number() over (partition by name order by seqno) id
          from table_name)
 where id <= 2;
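A related sketch (not from the original article): the same pattern is the usual way to keep exactly one row per key, for example the highest score per (id, area) in the students table created earlier:

select id, area, stu_type, score
  from (select s.*,
               row_number() over (partition by id, area order by score desc) rn
          from students s)
 where rn = 1;   -- keep only the top-scoring row of each (id, area) group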
lag() and lead()

[Syntax]
LAG(expr, <offset>, <default>)
LEAD(expr, <offset>, <default>)

[Function] Within the row ordering defined by the OVER clause, lead() returns a value from a following row and lag() returns a value from a preceding row.

[Parameters] EXPR is the expression to return from the other row. OFFSET is a positive integer, defaulting to 1, that gives how many rows away from the current row (within its partition) to look. DEFAULT is the value to return when the OFFSET takes the lookup outside the partition.

[Description] Oracle analytic function.

[Example]

-- Create table
create table LEAD_TABLE
(
  CASEID     VARCHAR2(10),
  STEPID     VARCHAR2(10),
  ACTIONDATE DATE
)
tablespace COLM_DATA
  pctfree 10
  initrans 1
  maxtrans 255
  storage
  (
    initial 64K
    minextents 1
    maxextents unlimited
  );

insert into LEAD_TABLE values('Case1','Step1',to_date('20070101','yyyymmdd'));
insert into LEAD_TABLE values('Case1','Step2',to_date('20070102','yyyymmdd'));
insert into LEAD_TABLE values('Case1','Step3',to_date('20070103','yyyymmdd'));
insert into LEAD_TABLE values('Case1','Step4',to_date('20070104','yyyymmdd'));
insert into LEAD_TABLE values('Case1','Step5',to_date('20070105','yyyymmdd'));
insert into LEAD_TABLE values('Case1','Step4',to_date('20070106','yyyymmdd'));
insert into LEAD_TABLE values('Case1','Step6',to_date('20070107','yyyymmdd'));
insert into LEAD_TABLE values('Case2','Step1',to_date('20070201','yyyymmdd'));
insert into LEAD_TABLE values('Case2','Step2',to_date('20070202','yyyymmdd'));
insert into LEAD_TABLE values('Case2','Step3',to_date('20070203','yyyymmdd'));
commit;

Each record can be joined to the contents of its previous and next rows: lead() gives the next value, lag() gives the previous value:

select caseid, stepid, actiondate,
       lead(stepid)     over (partition by caseid order by actiondate) nextstepid,
       lead(actiondate) over (partition by caseid order by actiondate) nextactiondate,
       lag(stepid)      over (partition by caseid order by actiondate) prestepid,
       lag(actiondate)  over (partition by caseid order by actiondate) preactiondate
  from lead_table;

The results are as follows:

CASEID STEPID ACTIONDATE NEXTSTEPID NEXTACTIONDATE PRESTEPID PREACTIONDATE
Case1  Step1  2007-1-1   Step2      2007-1-2
Case1  Step2  2007-1-2   Step3      2007-1-3       Step1     2007-1-1
Case1  Step3  2007-1-3   Step4      2007-1-4       Step2     2007-1-2
Case1  Step4  2007-1-4   Step5      2007-1-5       Step3     2007-1-3
Case1  Step5  2007-1-5   Step4      2007-1-6       Step4     2007-1-4
Case1  Step4  2007-1-6   Step6      2007-1-7       Step5     2007-1-5
Case1  Step6  2007-1-7                             Step4     2007-1-6
Case2  Step1  2007-2-1   Step2      2007-2-2
Case2  Step2  2007-2-2   Step3      2007-2-3       Step1     2007-2-1
Case2  Step3  2007-2-3                             Step2     2007-2-2

Going one step further, we can compute how many days passed between consecutive steps:

select caseid, stepid, actiondate, nextactiondate,
       nextactiondate - actiondate datebetween
  from (select caseid, stepid, actiondate,
               lead(stepid)     over (partition by caseid order by actiondate) nextstepid,
               lead(actiondate) over (partition by caseid order by actiondate) nextactiondate,
               lag(stepid)      over (partition by caseid order by actiondate) prestepid,
               lag(actiondate)  over (partition by caseid order by actiondate) preactiondate
          from lead_table);

The results are as follows:

Case1 Step1 2007-1-1 2007-1-2 1
Case1 Step2 2007-1-2 2007-1-3 1
Case1 Step3 2007-1-3 2007-1-4 1
Case1 Step4 2007-1-4 2007-1-5 1
Case1 Step5 2007-1-5 2007-1-6 1
Case1 Step4 2007-1-6 2007-1-7 1
Case1 Step6 2007-1-7
Case2 Step1 2007-2-1 2007-2-2 1
Case2 Step2 2007-2-2 2007-2-3 1
Case2 Step3 2007-2-3
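One more sketch, supplementing the FIRST_VALUE()/LAST_VALUE() examples earlier (it is not in the original article): with the default window (RANGE UNBOUNDED PRECEDING), last_value(score) over (order by id) only looks at rows up to the current row, so it returns the value of the current row's last peer rather than the last row of the whole set. An explicit windowing clause changes that:

select id, score,
       last_value(score) over (order by id) lv_default,                  -- last value seen so far (default window)
       last_value(score) over (order by id
                               rows between unbounded preceding
                                        and unbounded following) lv_all  -- the true last value of the whole set
  from students;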

Posted by Susan at November 25, 2013 - 5:22 AM