In newer Exadata versions, the cell servers carry much larger flash caches — up to about 90 TB for an X2-8 full rack. It therefore makes sense to consider allocating flash cache as disk to store high-I/O segments, and Oracle already uses flash for redo by default (Smart Flash Log).
This post examines the performance of using flash cache as GTT (global temporary table) storage. An earlier post on GTTs with a defined tablespace showed a workaround to keep the GTT in its defined tablespace even when parallel DML / direct-path load operations are run against it.
Before evaluating this Exadata feature, one thing needs mentioning: since 11gR2-era storage software, Oracle has supported "Write Back" flash cache in addition to the original "Write Through" mode. With Write Back, a write is captured into the flash cache and acknowledged to the process as soon as the flash write completes, which is normally faster than the disk write. Once acknowledged, the transaction is considered done and the I/O locks are released. Write Back thus sends segment writes to flash cache before the hard disks in most cases (the exception being when the disk I/O happens to be faster than flash). Write Back changes the picture for storing a GTT, or creating a temp tablespace, on flash cache.
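For reference, the flash cache mode can be checked (and changed) from CellCLI on each storage cell. This is only a sketch of the documented sequence; the exact steps (quiescing cellsrv, flushing the existing cache) depend on the cell software version, so check the Exadata documentation before changing it:

CellCLI> LIST CELL ATTRIBUTES flashCacheMode
         WriteThrough

CellCLI> DROP FLASHCACHE
CellCLI> ALTER CELL flashCacheMode = WriteBack
CellCLI> CREATE FLASHCACHE ALL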
The first test uses the same GTT, the same operations, and the same data set, but without the "workaround", so the GTT segment ends up in the session's default temp tablespace — where, in older Exadata versions, flash cache would not be used.
GTT_TAB is a global temporary table defined with tablespace TEMP_BB, and TEMP_BB was created on the flash cache disks. The test uses parallel DML to load 200M rows into GTT_TAB and then queries the table after commit.
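For reference, a minimal sketch of the setup described above; the tempfile location and sizes are illustrative assumptions, not the actual test configuration:

SQL> -- temp tablespace on a disk group built from flash cache grid disks (name assumed)
SQL> create temporary tablespace temp_bb tempfile '+FLASH_DG' size 16g;

SQL> -- GTT pinned to that tablespace; rows must survive commit for the post-commit query
SQL> create global temporary table gtt_tab
  2  on commit preserve rows
  3  tablespace temp_bb
  4  as select * from pba_rnd where 1 = 0;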
First, parallel DML is applied directly, so Oracle allocates the session's default temp tablespace for the GTT segment.
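Throughout the test, the "cell flash cache read hits" counter is read via its statistic# (1133 on this system). As a small convenience sketch, not part of the original test, the same value can be fetched by name, which avoids hard-coding a statistic number that varies between versions:

SQL> select s.value
  2  from v$mystat s
  3  join v$statname n on n.statistic# = s.statistic#
  4  where n.name = 'cell flash cache read hits';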
SQL> select * from dba_temp_free_space;

TABLESPACE_NAME                TABLESPACE_SIZE      ALLOCATED_SPACE      FREE_SPACE
------------------------------ -------------------- -------------------- --------------------
TEMP                           80,530,636,800       80,494,985,216       80,522,248,192
TEMP_BB                        2,088,960            1,040,384            1,048,576

SQL> truncate table gtt_tab;

Table truncated.

SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133          7

Elapsed: 00:00:00.01
SQL> set autotrace off
SQL> insert /*+ append enable_parallel_dml */ into gtt_tab select /*+ parallel */ * from pba_rnd;

200000000 rows created.

Elapsed: 00:00:15.53
SQL> commit;

Commit complete.

Elapsed: 00:00:00.09
SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133      17408   --- 17408 - 7 = 17401 flash cache read hits, from reading pba_rnd

SQL> set autotrace traceonly explain statistics
SQL> select column_name, count(*) from gtt_tab group by column_name;

4100 rows selected.
Elapsed: 00:00:30.65

Execution Plan
----------------------------------------------------------
Plan hash value: 2288548961

--------------------------------------------------------------------------------------
| Id  | Operation                  | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |         |  4020 | 44220 | 24940  (44)| 00:00:01 |
|   1 |  HASH GROUP BY             |         |  4020 | 44220 | 24940  (44)| 00:00:01 |
|   2 |   TABLE ACCESS STORAGE FULL| GTT_TAB |   200M|  2098M| 15611  (11)| 00:00:01 |
--------------------------------------------------------------------------------------

Note
-----
   - Global temporary table session private statistics used

Statistics
----------------------------------------------------------
          1  recursive calls
         13  db block gets
    1149558  consistent gets
    1149440  physical reads
         84  redo size
      92323  bytes sent via SQL*Net to client
       3377  bytes received via SQL*Net from client
        275  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       4100  rows processed

SQL> set autotrace off
SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133      17779   --- 17779 - 17408 = 371 read hits

SQL> set autotrace traceonly
SQL> select column_name, count(*) from gtt_tab group by column_name;

4100 rows selected.
Elapsed: 00:00:20.15
SQL> set autotrace off
SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133      28731   --- second run: 28731 - 17779 = 10952 flash cache read hits

SQL> select * from dba_temp_free_space;

TABLESPACE_NAME                TABLESPACE_SIZE      ALLOCATED_SPACE      FREE_SPACE
------------------------------ -------------------- -------------------- --------------------
TEMP                           80,530,636,800       80,494,985,216       71,106,035,712
TEMP_BB                        2,088,960            1,040,384            1,048,576
When a GTT is truncated, a new segment is created on the next insert, and when that insert is parallel DML the session's default temp tablespace is used. Two facts show up here: the tablespace actually used is TEMP, the session default; and flash cache is used for queries on the temp segment — remember, Write Back was enabled on the cell flash cache. The query statistics show that flash cache took effect for a GTT whose temp tablespace is on hard disk.
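One way to verify which temp tablespace actually holds the GTT segment is to query V$TEMPSEG_USAGE from the session that loaded the table. A sketch — GTT data segments show up with SEGTYPE = 'DATA':

SQL> select tablespace, segtype, blocks
  2  from v$tempseg_usage
  3  where segtype = 'DATA';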
With the GTT forced onto the flash cache disks, performance was expected to be much better, as other tests have suggested. However, the results did not show this; the comparison here is effectively flash cache used as a cache versus flash cache carved out as disk to store the temp tablespace.
SQL> select * from dba_temp_free_space;

TABLESPACE_NAME                TABLESPACE_SIZE      ALLOCATED_SPACE      FREE_SPACE
------------------------------ -------------------- -------------------- --------------------
TEMP                           80,530,636,800       80,494,985,216       80,522,248,192
TEMP_BB                        18,824,028,160       18,815,639,552       18,821,939,200

SQL> insert into gtt_tab select * from pba_rnd where rownum < 10;

9 rows created.

Elapsed: 00:00:00.02
SQL> delete from gtt_tab;

9 rows deleted.

Elapsed: 00:00:00.01
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133      28734

SQL> insert /*+ append enable_parallel_dml */ into gtt_tab select /*+ parallel */ * from pba_rnd;

200000000 rows created.

Elapsed: 00:00:16.27
SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133      45844   --- 45844 - 28734 = 17110, about the same as for the GTT on hard disk

SQL> set autotrace traceonly explain statistics
SQL> select column_name, count(*) from gtt_tab group by column_name;

4100 rows selected.
Elapsed: 00:00:30.90

Execution Plan
----------------------------------------------------------
Plan hash value: 2288548961

--------------------------------------------------------------------------------------
| Id  | Operation                  | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |         |  4020 | 44220 | 24926  (44)| 00:00:01 |
|   1 |  HASH GROUP BY             |         |  4020 | 44220 | 24926  (44)| 00:00:01 |
|   2 |   TABLE ACCESS STORAGE FULL| GTT_TAB |   200M|  2098M| 15598  (11)| 00:00:01 |
--------------------------------------------------------------------------------------

Note
-----
   - Global temporary table session private statistics used

Statistics
----------------------------------------------------------
          1  recursive calls
         13  db block gets
    1148623  consistent gets
    1148416  physical reads
        128  redo size
      92326  bytes sent via SQL*Net to client
       3377  bytes received via SQL*Net from client
        275  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       4100  rows processed

SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133      45865   -- only 21 hits on the flash cache

SQL> select column_name, count(*) from pba_rnd group by column_name;

4100 rows selected.
Elapsed: 00:00:30.09

Execution Plan
----------------------------------------------------------
Plan hash value: 913254133

--------------------------------------------------------------------------------------
| Id  | Operation                  | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |         |  4100 | 45100 | 24946  (44)| 00:00:01 |
|   1 |  HASH GROUP BY             |         |  4100 | 45100 | 24946  (44)| 00:00:01 |
|   2 |   TABLE ACCESS STORAGE FULL| PBA_RND |   200M|  2098M| 15618  (11)| 00:00:01 |
--------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          9  db block gets
    1148496  consistent gets
    1148377  physical reads
          0  redo size
      92320  bytes sent via SQL*Net to client
       3377  bytes received via SQL*Net from client
        275  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       4100  rows processed

SQL> select statistic#, value
  2  from v$mystat
  3  where statistic# in (select statistic# from v$statname where name = 'cell flash cache read hits');

STATISTIC#      VALUE
---------- ----------
      1133      45865   -- 0 flash cache hits for the GTT on flash cache disk!

SQL> select * from dba_temp_free_space;

TABLESPACE_NAME                TABLESPACE_SIZE      ALLOCATED_SPACE      FREE_SPACE
------------------------------ -------------------- -------------------- --------------------
TEMP                           80,530,636,800       80,494,985,216       80,522,248,192
TEMP_BB                        18,824,028,160       18,815,639,552       9,437,184,000
The second test applied the "workaround" to force the GTT onto the flash cache disk. Insert performance was almost the same as for the GTT on hard disk; the reason is the Write Back cache: the commit does not wait for the disk I/O to complete.
The query performance is interesting: the GTT on flash cache disk did not perform better than on hard disk, and was even worse on the second run. The statistics show the GTT on flash cache disk got no hits at all on the cell flash cache, which makes sense in terms of how Oracle manages the flash cache I/O load (blocks already on flash disks are not cached again). The segments are also organized differently: on flash cache disk, segments are still managed like any other tablespace on hard disk, with file headers and data blocks, whereas in the flash cache itself the address mapping may be held in memory, which would be faster than reading it from flash. This is a question that still needs to be tested and proved.
From this test, putting a temp tablespace on flash cache disk and storing a GTT in it does not appear to gain any query performance.