Grouping runs of data in SQL

There have been a handful of times over the last few years where I have needed to take time series data, and group the runs of data together to determine when a certain value changed, and how long it stayed that way. Every time I do this, I have to go back and figure out how I did it the last time, so this time I am actually going to write it down.

First, the data. We have a log table that logs every operation done to a table. It stores all of the columns from the base table, plus who made the change, when it was made, and the operation (INSERT, UPDATE, DELETE). It isn't particularly efficient as far as storage goes, and newer versions of SQL Server support this type of logging with built-in features, but we are using an older version.

For the sake of simplicity, I am dropping all but the most important columns of this table for this exercise. Assume there are more columns in the table, and that there are DELETEs being logged as well. I'm just going to show rows that were inserted or updated, and I have limited it to just two ids.
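
To make the examples easier to follow, here is a rough sketch of the trimmed-down table as I am treating it here. The column types are assumptions (in the real table logged_at is presumably a full timestamp, and there are plenty more columns):

CREATE TABLE logtable (
  logged_at DATE NOT NULL,  -- when the change was logged (a plain date is enough for these examples)
  id        INT  NOT NULL,  -- id of the row in the base table
  value1    INT  NOT NULL   -- the value whose runs we want to group
);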

logged_at   id  value1
2016-05-31  1   4
2016-05-31  2   5
2016-06-06  2   10
2016-06-06  1   4
2016-06-14  1   4
2016-06-14  2   10
2016-06-15  2   10
2016-06-15  1   8
2016-06-17  1   8
2016-06-17  2   10
2016-09-23  1   4
2016-09-23  1   4
2017-01-03  2   5
2017-11-20  1   8
2017-11-20  2   10
2017-11-28  2   5
2017-11-28  2   5
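
If you want to follow along, the sample rows can be loaded with a single insert (assuming a SQL Server version that supports multi-row VALUES; on older versions, split it into one INSERT per row):

INSERT INTO logtable (logged_at, id, value1)
VALUES
  ('2016-05-31', 1, 4),
  ('2016-05-31', 2, 5),
  ('2016-06-06', 2, 10),
  ('2016-06-06', 1, 4),
  ('2016-06-14', 1, 4),
  ('2016-06-14', 2, 10),
  ('2016-06-15', 2, 10),
  ('2016-06-15', 1, 8),
  ('2016-06-17', 1, 8),
  ('2016-06-17', 2, 10),
  ('2016-09-23', 1, 4),
  ('2016-09-23', 1, 4),
  ('2017-01-03', 2, 5),
  ('2017-11-20', 1, 8),
  ('2017-11-20', 2, 10),
  ('2017-11-28', 2, 5),
  ('2017-11-28', 2, 5);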

We can see in this data that the values oscillate – for id 1, the value is either 4 or 8, and for id 2, the value is 5 or 10. It goes back and forth over time. We can also see that a value will repeat – maybe there are other changes to these records, but the value field stays the same across those updates.

What we want to do is eliminate the duplicate values within each run of data, and record when that value was first seen in the run and when it was last seen.

For example, for id 1, we should end up with four rows: 4, 8, 4, 8. For id 2, we should expect five rows: 5, 10, 5, 10, 5.

Stack Overflow was helpful in figuring out how to do this. This post closely matched what I was trying to do. I wanted to understand how the row numbering worked, in particular using two ROW_NUMBER() calls and subtracting them.

Let's start with this:

SELECT
  log1.logged_at,
  log1.value1,
  log1.id,
  ROW_NUMBER() OVER ( PARTITION BY id ORDER BY logged_at ) AS byId,
  ROW_NUMBER() OVER ( PARTITION BY id, value1 ORDER BY logged_at ) AS idValue,
  ROW_NUMBER() OVER ( PARTITION BY id ORDER BY logged_at )
    - ROW_NUMBER() OVER ( PARTITION BY id, value1 ORDER BY logged_at ) AS idMinusIdValue
FROM logtable log1
ORDER BY id, logged_at

This is what we get:

logged_at   id  value1  byId  idValue  idMinusIdValue
2016-05-31  1   4       1     1        0
2016-06-06  1   4       2     2        0
2016-06-14  1   4       3     3        0
2016-06-15  1   8       4     1        3
2016-06-17  1   8       5     2        3
2016-09-23  1   4       6     4        2
2016-09-23  1   4       7     5        2
2017-11-20  1   8       8     3        5
2016-05-31  2   5       1     1        0
2016-06-06  2   10      2     1        1
2016-06-14  2   10      3     2        1
2016-06-15  2   10      4     3        1
2016-06-17  2   10      5     4        1
2017-01-03  2   5       6     2        4
2017-11-20  2   10      7     5        2
2017-11-28  2   5       8     3        5
2017-11-28  2   5       9     4        5

Notice that the value of idMinusIdValue is not sequential, but it does group together the runs of data, and it can repeat across ids. It works because, within a run, both row numbers increase by one on every row, so their difference stays constant for the whole run; when the run ends, byId keeps counting every row for the id while idValue only counts rows with that particular value, so the next run of the same value gets a larger difference. For example, for id 1 the row on 2016-06-15 is the fourth row overall (byId = 4) but only the first row with value 8 (idValue = 1), so that run of 8s gets a difference of 3. Runs of different values can share a difference, which is why the grouping in the next step is on id, value1, and idMinusIdValue together.

Now we want to compress the runs, and sort correctly:

WITH groupings AS (
  SELECT
    log1.logged_at,
    log1.id,
    value1,
    ROW_NUMBER() OVER ( PARTITION BY id ORDER BY logged_at )
      - ROW_NUMBER() OVER ( PARTITION BY id, value1 ORDER BY logged_at ) AS idMinusIdValue
  FROM logtable log1
), runs AS (
  SELECT
    id,
    value1,
    MIN(logged_at) AS first_seen,
    MAX(logged_at) AS last_seen
  FROM groupings
  GROUP BY id, idMinusIdValue, value1
)
SELECT *
FROM runs
ORDER BY id, first_seen

This is the result:

id  value1  first_seen  last_seen
1   4       2016-05-31  2016-06-14
1   8       2016-06-15  2016-06-17
1   4       2016-09-23  2016-09-23
1   8       2017-11-20  2017-11-20
2   5       2016-05-31  2016-05-31
2   10      2016-06-06  2016-06-17
2   5       2017-01-03  2017-01-03
2   10      2017-11-20  2017-11-20
2   5       2017-11-28  2017-11-28

We see the expected rows: 4, 8, 4, 8 for id 1 and 5, 10, 5, 10, 5 for id 2.
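
Since part of the point was figuring out how long the value stayed the same, the runs CTE can also be extended with a DATEDIFF. This is just a sketch of one way to do it; note that it measures the span between the first and last time the value was logged within the run, not the time until the value actually changed:

WITH groupings AS (
  SELECT
    log1.logged_at,
    log1.id,
    value1,
    ROW_NUMBER() OVER ( PARTITION BY id ORDER BY logged_at )
      - ROW_NUMBER() OVER ( PARTITION BY id, value1 ORDER BY logged_at ) AS idMinusIdValue
  FROM logtable log1
), runs AS (
  SELECT
    id,
    value1,
    MIN(logged_at) AS first_seen,
    MAX(logged_at) AS last_seen
  FROM groupings
  GROUP BY id, idMinusIdValue, value1
)
SELECT
  id,
  value1,
  first_seen,
  last_seen,
  -- days between the first and last sighting of this value in the run
  DATEDIFF(day, first_seen, last_seen) AS days_in_run
FROM runs
ORDER BY id, first_seen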
