There have been a handful of times over the last few years when I have needed to take time series data and group runs of it together to determine when a certain value changed, and how long it stayed that way. Every time I do this I have to go back and figure out how I did it the last time, so this time I am actually going to write it down.
First, the data. We have a log table that logs every operation done to a table. It stores all of the columns in the base table, plus who made the change, when it was made, and the operation (INSERT, UPDATE, or DELETE). It isn't particularly efficient as far as storage goes, and newer versions of SQL Server support this kind of logging with built-in features, but we are on an older version.
For the sake of simplicity, I am dropping all but the most important parts of this table for this exercise. Assume there are more columns in this table, and that there are DELETEs being logged. I'm just going to show rows that were inserted or updated, and I have limited it to just two ids.
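Since the real schema isn't shown here, the following is a minimal sketch of what the trimmed-down table and data might look like. The names logtable, id, value1, logged_at, and operation are stand-ins, and the rows are made up to match the oscillating pattern described below.

```sql
-- Hypothetical, trimmed-down stand-in for the real log table;
-- the actual table has many more columns from the base table.
CREATE TABLE logtable (
    id        INT      NOT NULL,  -- key of the row in the base table
    value1    INT      NOT NULL,  -- the value whose runs we want to track
    logged_at DATETIME NOT NULL,  -- when the operation happened
    operation CHAR(1)  NOT NULL   -- 'I' for INSERT, 'U' for UPDATE
);

-- Made-up sample rows: id 1 oscillates between 4 and 8,
-- with a repeated value inside the first run.
INSERT INTO logtable (id, value1, logged_at, operation) VALUES
    (1, 4, '2023-01-01 08:00', 'I'),
    (1, 4, '2023-01-02 08:00', 'U'),
    (1, 8, '2023-01-03 08:00', 'U'),
    (1, 8, '2023-01-04 08:00', 'U'),
    (1, 4, '2023-01-05 08:00', 'U'),
    (1, 8, '2023-01-06 08:00', 'U');
```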
We can see in this data that the values oscillate – for id 1, value is either 4 or 8, and for id 2, the value is 5 or 10. It goes back and forth over time. We can also see that the value will repeat – maybe there are some other changes for these records, but the value field stays the same across other updates.
What we want to do is eliminate the duplicate values in the runs of data, and gather the timestamp where that value was first seen in the run, and when it was last seen.
For example, for id 1, we should end up with four rows: 4, 8, 4, 8. For id 2, we should expect to have five rows: 5, 10, 5, 10, 5.
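With shorthand timestamps (t1, t2, ... in logged_at order), the output we are after for id 1 would look something like this. The timestamps are hypothetical; the shape is the point:

```sql
--  id  value1  first_seen  last_seen
--  1   4       t1          t2     -- run of 4s spanning two log rows
--  1   8       t3          t4     -- run of 8s
--  1   4       t5          t5     -- single-row run
--  1   8       t6          t6
```

Each run collapses to one row carrying the first and last timestamps at which that value was seen.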
StackOverflow was helpful in figuring out how to do this. This post closely matched what I was trying to do. I wanted to see how the row numbering worked – in particular, using two row numbers and subtracting one from the other.
Let's start with this:
SELECT
    log1.logged_at,
    log1.id,
    log1.value1,
    ROW_NUMBER() OVER (PARTITION BY id ORDER BY logged_at) AS byId,
    ROW_NUMBER() OVER (PARTITION BY id, value1 ORDER BY logged_at) AS idValue,
    ROW_NUMBER() OVER (PARTITION BY id ORDER BY logged_at)
        - ROW_NUMBER() OVER (PARTITION BY id, value1 ORDER BY logged_at) AS idMinusIdValue
FROM logtable log1
ORDER BY id, logged_at
This is what we get:
Notice that the value for idMinusIdValue is not sequential, but it is constant within each run, so it groups the runs of data together. idMinusIdValue will also repeat across ids, and can even repeat across different values within the same id, which is why the grouping in the next step includes both id and value1.
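To make the trick concrete, here is a hand-worked version of those three columns for a hypothetical id 1 whose values arrive as 4, 4, 8, 8, 4, 8 (timestamps abbreviated t1..t6):

```sql
--  logged_at  value1  byId  idValue  idMinusIdValue
--  t1         4       1     1        0    -- run of 4s
--  t2         4       2     2        0
--  t3         8       3     1        2    -- run of 8s
--  t4         8       4     2        2
--  t5         4       5     3        2    -- same difference as the 8s run,
--  t6         8       6     3        3    -- but a different value1
```

Within a run, both row numbers advance by one per row, so their difference stays constant; when the value flips, only byId keeps counting, so the difference jumps. Because the difference alone can collide across different values (t3 through t5 above all have 2), the grouping has to include value1 as well.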
Now we want to compress the runs, and sort correctly:
WITH groupings AS (
    SELECT
        log1.logged_at,
        log1.id,
        log1.value1,
        ROW_NUMBER() OVER (PARTITION BY id ORDER BY logged_at)
            - ROW_NUMBER() OVER (PARTITION BY id, value1 ORDER BY logged_at) AS idMinusIdValue
    FROM logtable log1
),
runs AS (
    SELECT
        id,
        value1,
        MIN(logged_at) AS first_seen,
        MAX(logged_at) AS last_seen
    FROM groupings
    GROUP BY id, idMinusIdValue, value1
)
SELECT *
FROM runs
ORDER BY id, first_seen
We see the expected rows – 4, 8, 4, 8 for id 1 and 5, 10, 5, 10, 5 for id 2.