Continuing from my last post, I was curious how the stopkey optimization is affected when using a between range instead of just an upper bound. This is similar to queries that do pagination (e.g. page 1 is rows 1-100, page 2 is rows 101-200, and so on). Note that 100 rows per page is typically about 75 too many for an application like this, but that is a topic for another day…
I am using the same setup as the last post, just some new queries. For reference, a rough sketch of an equivalent setup is below.
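If you missed that post, something along these lines should behave similarly; the column types, row count, and stats call here are my guesses rather than the original script (the 9,901-row estimate for id > 100 in the first plan suggests roughly 10,000 rows with id starting at 1):

-- Guessed setup - see the previous post for the real script.
create table t1
as
select
        rownum          id,
        lpad(rownum,10) small_vc
from    all_objects
where   rownum <= 10000
;

create index t1_i1 on t1(id);

begin
        dbms_stats.gather_table_stats(
                ownname => user,
                tabname => 'T1',
                cascade => true
        );
end;
/

First, the obvious approach: a between range on rownum.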
explain plan for
select  small_vc
from    (
        select  rownum rn, small_vc
        from    t1
        where   id > 100
        )
where   rn between 101 and 200;

select * from table(dbms_xplan.display);
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  9901 |   193K|    48   (0)| 00:00:01 |
|*  1 |  VIEW               |      |  9901 |   193K|    48   (0)| 00:00:01 |
|   2 |   COUNT             |      |       |       |            |          |
|*  3 |    TABLE ACCESS FULL| T1   |  9901 |   145K|    48   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN"<=200 AND "RN">=101)
   3 - filter("ID">100)
So that didn’t work as hoped: there is no stopkey optimization, just a full table scan with the rownum range applied as an ordinary filter on the view. We can force the stopkey optimization by splitting the rownum filter into two steps, pushing the upper bound inside the inline view and leaving the lower bound outside:
explain plan for
select  small_vc
from    (
        select  rownum rn, small_vc
        from    t1
        where   id > 100
        and     rownum < 200
        )
where   rn > 101;

select * from table(dbms_xplan.display);
---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |   199 |  3980 |     6   (0)| 00:00:01 |
|*  1 |  VIEW                         |       |   199 |  3980 |     6   (0)| 00:00:01 |
|*  2 |   COUNT STOPKEY               |       |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    |   199 |  2985 |     6   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | T1_I1 |       |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN">101)
   2 - filter(ROWNUM<200)
   4 - access("ID">100)
So that is a little uglier, but it does the optimization I was looking for. (Watch the boundaries, though: rownum < 200 combined with rn > 101 returns rows 102-199, so an exact match for the first query would need rownum <= 200 and rn >= 101.) Since this is the form the optimizer cooperates with, here is a sketch of how it might generalize to an arbitrary page, using hypothetical bind variables :page_no and :page_size that are not part of my test:
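-- Sketch only: the same two-step pattern, parameterized by page.
-- The inner rownum predicate gives the optimizer its COUNT STOPKEY;
-- the outer filter discards the earlier pages.
select  small_vc
from    (
        select  rownum rn, small_vc
        from    t1
        where   id > 100
        and     rownum <= :page_no * :page_size
        )
where   rn > (:page_no - 1) * :page_size;

My last test is a twist on the first one, using the row_number() analytic function instead of the normal rownum pseudocolumn: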
explain plan for
select  small_vc
from    (
        select  row_number() over (order by 1) rn, small_vc
        from    t1
        where   id > 100
        )
where   rn between 101 and 200;

select * from table(dbms_xplan.display);
----------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |   199 |  3980 |     6   (0)| 00:00:01 |
|*  1 |  VIEW                          |       |   199 |  3980 |     6   (0)| 00:00:01 |
|*  2 |   WINDOW NOSORT STOPKEY        |       |   199 |  2985 |     6   (0)| 00:00:01 |
|*  3 |    COUNT STOPKEY               |       |       |       |            |          |
|   4 |     TABLE ACCESS BY INDEX ROWID| T1    |   199 |  2985 |     6   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN          | T1_I1 |       |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN">=101 AND "RN"<=200)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY NULL )<=200)
   3 - filter(ROWNUM<200)
   5 - access("ID">100)
Definitely not what I expected: the optimizer turned the row_number() range into a WINDOW NOSORT STOPKEY plus a COUNT STOPKEY on its own, so the between form works here even though it didn't with plain rownum. Interesting to note.
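In a real pagination query the analytic function would be ordered by something meaningful rather than a constant. A sketch of what that might look like, again with the hypothetical :page_no and :page_size binds and ordering by id (my choice, not something tested above):

-- Sketch only: row_number() ordered by id so the pages are deterministic.
-- :page_no and :page_size are hypothetical bind variables.
select  small_vc
from    (
        select  row_number() over (order by id) rn, small_vc
        from    t1
        where   id > 100
        )
where   rn between (:page_no - 1) * :page_size + 1
          and      :page_no * :page_size;

Whether this still gets the WINDOW NOSORT STOPKEY would depend on the rows arriving in id order from the index, so it needs the same explain plan treatment as the examples above.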