
OCI Client-Side Deployment in 12C


Oracle 12c introduces a new feature called “OCI Client-Side Deployment Parameters”. It lets you override some OCI parameters without even changing your application code.

Let’s build a simple test case using Python and the cx_Oracle module (which uses the OCI interface) to connect to the Oracle database.

You can find this module, for your Python version and the 12c database version, here.

This is the code of our test case:

#!/usr/bin/python2.7
import cx_Oracle

connection = cx_Oracle.connect('test', 'test', 'testdb')
for i in range(0, 1000000):
    cursor = connection.cursor()
    cursor.execute('select 1 from dual')
    cursor.close()

sql = "select a.name name, b.value value from v$statname a, v$mystat b where a.statistic# = b.statistic# and (a.name like '%parse%' or a.name like '%session cursor%')"
cursor = connection.cursor()
cursor.execute(sql)
rows = cursor.fetchall()
for name, value in rows:
    print "%s : %d" % (name, value)
connection.close()

So basically, we execute this simple “select 1 from dual” 1 million times, opening and closing the cursor at each iteration.

Then we display our session statistics regarding parsing.

After executing the script, let’s check the result:

session cursor cache hits : 16961
session cursor cache count : 7
parse time cpu : 176
parse time elapsed : 641
parse count (total) : 1000010
parse count (hard) : 2

The time spent parsing was around 6 seconds and, as you can see, it was almost entirely soft parsing (only 2 hard parses were performed, one for each of the two distinct statements in our code).

It is even better than that: the cursor was already cached in our session cursor cache (as observed by checking the “session cursor cache hits” statistic), so the soft parse was even softer. :)

But wait! Why is the “session cursor cache hits” statistic so small after 1 million executions?

If we add these lines inside the loop:

        cursor.execute('select 1 from dual')
        sql = "select a.name name, b.value value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'session cursor cache hits'"
        cursor.execute(sql)
        rows = cursor.fetchall()
        for name, value in rows:
            print "%s : %d" % (name, value)

You will see the following result :

session cursor cache hits : 65525
session cursor cache hits : 65527
session cursor cache hits : 65529
session cursor cache hits : 65531
session cursor cache hits : 65533
session cursor cache hits : 65535
session cursor cache hits : 1
session cursor cache hits : 3
session cursor cache hits : 5
session cursor cache hits : 7

So my guess is that this statistic is stored in a 16-bit integer and cannot hold values above 65535.

I found confirmation in this post from Christian, so this is not fixed yet in 12.1.0.2.
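A minimal Python sketch shows how a wrapping unsigned 16-bit counter would produce exactly the sequence above (this models the suspected behaviour; it is obviously not Oracle’s actual internal code):

```python
def bump_16bit(counter, delta=2):
    """Increment a counter, keeping only the low 16 bits,
    the way an unsigned 16-bit integer silently wraps at 65536."""
    return (counter + delta) & 0xFFFF

# Reproduce the observed sequence: each loop iteration adds 2 hits
# (the two cached statements), and the counter wraps from 65535 to 1.
values = [65531]
for _ in range(4):
    values.append(bump_16bit(values[-1]))
print(values)  # [65531, 65533, 65535, 1, 3]
```

The wrap from 65535 straight to 1 (rather than 0) falls out naturally: 65535 + 2 = 65537, and 65537 modulo 65536 is 1.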

As I said earlier, with 12c you can now override your application’s OCI parameters by using a file named oraaccess.xml.

This XML file needs to be in your TNS_ADMIN directory to be taken into account.

You can find an example at $ORACLE_HOME/rdbms/demo/oraaccess.xml.

So now let’s check whether this works with the cx_Oracle module.

You can set a number of client parameters, such as the prefetch (array) size, the client result cache and, in our case, the statement cache.

We didn’t implement statement caching in our code, but we will set the client statement cache size to 1.

<oraaccess xmlns="http://xmlns.oracle.com/oci/oraaccess"
           xmlns:oci="http://xmlns.oracle.com/oci/oraaccess"
           schemaLocation="http://xmlns.oracle.com/oci/oraaccess
           http://xmlns.oracle.com/oci/oraaccess.xsd">
  <default_parameters>
    <prefetch>
      <rows>50</rows>
    </prefetch>
    <statement_cache>
      <size>1</size>
    </statement_cache>
    <result_cache>
      <max_rset_rows>100</max_rset_rows>
      <!-- 'K' or 'k' for kilobytes and 'M' or 'm' for megabytes -->
      <max_rset_size>10K</max_rset_size>
      <max_size>64M</max_size>
    </result_cache>
  </default_parameters>
</oraaccess>

So now let’s run the Python script again and check the result.

session cursor cache hits : 2
session cursor cache count : 6
parse time cpu : 0
parse time elapsed : 0
parse count (total) : 11
parse count (hard) : 0
parse count (failures) : 0
parse count (describe) : 0

As you can see, there is no parse time anymore, as the client now uses its statement cache automatically.

In conclusion: soft parsing is better than hard parsing, but no parsing at all is even better, and it is achievable without touching the code.

 


Inserting millions of time series data points (rows) per second into an Oracle database, part 2


In the previous post we demonstrated how index column order (whether or not it matches the data arrival pattern) can impact insertion performance.

Going a little further, we will now try to push the insertion rate even higher.

One of the available options is direct path insert: since release 11.2, the APPEND_VALUES hint is available for standard INSERT statements.

Again using the two different column orders, we will check the impact of direct path insert.

Let’s create the kpi table again:

SQL> create table kpi (timeid number,insid number,indid number,value number);

and first create the index as in the first example of the previous post (not matching our data arrival pattern):

SQL> create unique index ix_kpi on kpi (insid,indid,timeid);

Using the same dataset, let’s insert 7 million rows into our table again, this time via direct path insert:

declare
 TYPE kpi_array is TABLE OF kpi%ROWTYPE;
 l_kpi_array kpi_array:=kpi_array ();
 start_time number;
 elapsed number;
 begin
 for time in 1 .. 100
 loop
 for indid in 1 .. 7
 loop
 for insid in 1 .. 10000
 loop
 l_kpi_array.extend();
 l_kpi_array(l_kpi_array.last).timeid:=time;
 l_kpi_array(l_kpi_array.last).indid:=indid;
 l_kpi_array(l_kpi_array.last).insid:=insid;
 l_kpi_array(l_kpi_array.last).value:=dbms_random.value(1,1000);
 end loop;
 end loop;
 end loop;
 start_time:=DBMS_UTILITY.GET_TIME ;
 forall p in l_kpi_array.first .. l_kpi_array.last
 insert /*+ APPEND_VALUES */ into kpi
 values l_kpi_array(p);
 elapsed := DBMS_UTILITY.GET_TIME - start_time;
 dbms_output.put_line('elapsed time (seconds)=' || elapsed/100 );
 end;
 /

elapsed time (seconds)=31.41

After executing this script, we can see that it takes about 31 seconds to insert 7 million rows. For comparison, the exact same test without direct path insert completed in about 56 seconds.

Let’s have a look at the session statistics:

select
       name, value
from    v$mystat,v$statname
where
       v$mystat.statistic#=v$statname.statistic#
and name in ('leaf node splits','leaf node 90-10 splits','redo size','redo entries','session logical reads','physical reads',
'db block changes','db block gets','physical reads direct temporary tablespace','physical writes direct temporary tablespace')
;

NAME       VALUE
---------------------------------------------------------------- ----------
session logical reads     506672
db block gets     470582
physical reads       87041
db block changes     466778
physical reads direct temporary tablespace       87038
physical writes direct temporary tablespace       87038
redo entries     235452
redo size   550084680
leaf node splits       17526
leaf node 90-10 splits       17526

Notice that the index leaf node splits are now all 90-10 splits, as opposed to the same example without direct path insert, where we had mainly 50-50 splits:

leaf node splits      17050
leaf node 90-10 splits 32

For comparison, here are the statistics for the former test (without direct path insert):

NAME VALUE
---------------------------------------------------------------- ----------
session logical reads 15684333
db block gets         15544039
physical reads        413
db block changes      10610256
redo entries          5363399
redo size             1922236380
leaf node splits      17050
leaf node 90-10 splits 32

Why this difference between direct path and conventional insert?

Because when direct path insert is used, index maintenance is deferred and merged at the end of the process.

That means Oracle sorts the entire array before updating the index, and can then take advantage of array processing to maintain the index structure.

This explains why the redo entries and the session logical reads have decreased so much compared to the former test (without direct path insert).

Another difference from the former test is that we now see some direct physical reads/writes to the temporary tablespace.

The work area is too small to process the entire sort in memory, so Oracle has to spill sort operations to disk.

This can be verified by checking the insert statement statistics values in v$sql_workarea :

select sql_id,child_number,operation_type,policy,last_execution,optimal_executions,multipasses_executions from v$sql_workarea where sql_id='10qx37pz299jw' and child_number=0;

SQL_ID        CHILD_NUMBER OPERATION_TYPE        POLICY LAST_EXECUTION OPTIMAL_EXECUTIONS MULTIPASSES_EXECUTIONS
------------- ------------ --------------------- ------ -------------- ------------------ ----------------------
10qx37pz299jw            0 IDX MAINTENANCE(SORT) AUTO   13 PASSES                       0                      1
10qx37pz299jw            0 LOAD WRITE BUFFERS    AUTO   OPTIMAL                         1                      0

The index maintenance sort was done using multipass execution (13 passes).

In order to perform the entire sort in memory, we can set the work area size policy to manual and make sort_area_size large enough.

alter session set workarea_size_policy=manual;
alter session set sort_area_size=800000000;
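As a rough sanity check on that setting, here is a back-of-the-envelope estimate; the per-entry workspace figure is an assumed ballpark value for illustration, not an Oracle-documented number:

```python
# Estimate the sort workspace needed to keep the whole index maintenance
# sort in memory. The ~75 bytes per index entry is an assumption chosen
# for illustration, not a measured Oracle figure.
rows = 7_000_000
assumed_bytes_per_entry = 75
estimated_workspace = rows * assumed_bytes_per_entry

print(estimated_workspace)                # 525000000 bytes, i.e. ~500 MB
print(estimated_workspace < 800_000_000)  # True: 800 MB leaves headroom
```

This lands close to the ~524 MB of sort memory that the event 10032 trace reports further down in this post.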

Let’s restart the same test with modified settings :

It now takes 23.38 seconds to insert the 7 million rows.

Let’s check sorting statistics again:

select sql_id,child_number,operation_type,policy,last_execution,optimal_executions,multipasses_executions from v$sql_workarea where sql_id='10qx37pz299jw' and child_number=1;

SQL_ID        CHILD_NUMBER OPERATION_TYPE        POLICY LAST_EXECUTION OPTIMAL_EXECUTIONS MULTIPASSES_EXECUTIONS
------------- ------------ --------------------- ------ -------------- ------------------ ----------------------
10qx37pz299jw            1 IDX MAINTENANCE(SORT) MANUAL OPTIMAL                         1                      0
10qx37pz299jw            1 LOAD WRITE BUFFERS    MANUAL OPTIMAL                         1                      0

The execution time has dropped and all the sort operations are now performed in memory.

Now let’s run the same test again with the index column order matching the data arrival pattern (see the previous post, where without direct path insert we took advantage of array processing) and check whether there is still a difference:

SQL> create unique index ix_kpi on kpi (timeid,indid,insid);
alter session set workarea_size_policy=manual;
alter session set sort_area_size=800000000;
declare
TYPE kpi_array is TABLE OF kpi%ROWTYPE;
l_kpi_array kpi_array:=kpi_array ();
start_time number;
elapsed number;
begin
for time in 1 .. 100
loop
for indid in 1 .. 7
loop
for insid in 1 .. 10000
loop
l_kpi_array.extend();
l_kpi_array(l_kpi_array.last).timeid:=time;
l_kpi_array(l_kpi_array.last).indid:=indid;
l_kpi_array(l_kpi_array.last).insid:=insid;
l_kpi_array(l_kpi_array.last).value:=dbms_random.value(1,1000);
end loop;
end loop;
end loop;
start_time:=DBMS_UTILITY.GET_TIME ;
forall p in l_kpi_array.first .. l_kpi_array.last
insert /*+ APPEND_VALUES */ into kpi
values l_kpi_array(p);
elapsed := DBMS_UTILITY.GET_TIME - start_time;
dbms_output.put_line('elapsed time (seconds)=' || elapsed/100 );
commit;
end;
/

elapsed time (seconds)=13.9

As you can see, the index column order still has an impact on insertion performance.

Let’s compare the event 10032 trace (sort statistics) for the two different column orders:

first index (insid,indid,timeid) 

---- Sort Parameters ------------------------------
sort_area_size                    800000000
sort_area_retained_size           800000000
sort_multiblock_read_count        2
max intermediate merge width      21982

*** 2015-07-07 14:53:01.341
---- Sort Statistics ------------------------------
Input records                             7000000
Output records                            7000000
Total number of comparisons performed     150334451
  Comparisons performed by in-memory sort 150334451
Total amount of memory used               524142592
Uses version 1 sort
---- End of Sort Statistics -----------------------
 

second index (timeid,indid,insid)

---- Sort Parameters ------------------------------
sort_area_size                    800000000
sort_area_retained_size           800000000
sort_multiblock_read_count        2
max intermediate merge width      21982

*** 2015-07-07 15:05:19.599
---- Sort Statistics ------------------------------
Input records                             7000000
Output records                            7000000
Total number of comparisons performed     6999999
  Comparisons performed by in-memory sort 6999999
Total amount of memory used               459501568
Uses version 1 sort
---- End of Sort Statistics -----------------------
 

You can see that the sorting algorithm used is the “version 1 sort”. As explained in this Jonathan Lewis blog post, this algorithm is a “binary insertion tree”: Oracle builds a binary tree while reading the data from memory.

Oracle has to compare each row against the current binary tree structure. In the second example the data arrive already sorted in the same order as the index, so only 6,999,999 comparisons (7 million minus 1) were needed, compared with the 150,334,451 comparisons performed in the first example.
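The effect is easy to reproduce outside Oracle. Python’s built-in sort is Timsort, not Oracle’s binary insertion tree, but it shows the same behaviour: already-ordered input costs exactly n - 1 comparisons, while random input costs on the order of n * log2(n):

```python
import random
from functools import cmp_to_key

def count_comparisons(data):
    """Sort a copy of data and return the number of key comparisons made."""
    counter = {'n': 0}
    def cmp(a, b):
        counter['n'] += 1
        return (a > b) - (a < b)
    sorted(data, key=cmp_to_key(cmp))
    return counter['n']

n = 10_000
presorted = list(range(n))
shuffled = random.sample(presorted, n)

print(count_comparisons(presorted))  # exactly n - 1 = 9999
print(count_comparisons(shuffled))   # far more, on the order of n * log2(n)
```

On presorted input, Timsort detects one long ascending run with a single pass of n - 1 comparisons and never needs to merge, which is the same “one comparison per new row” shape as Oracle’s 6,999,999 figure.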

This sorting cost can also be verified by checking the ‘CPU used by this session’ statistic. Note that this statistic includes the creation of the nested table as well as the insert statement (all other statistics were almost the same):

first index column order :       5432 cs
second index column order :      1891 cs

Conclusion :

When using direct path insert, index maintenance is deferred, so Oracle is able to process index operations by array even if the data arrival pattern is not sorted in the same way as the index column order.

We have also demonstrated that the sorting algorithm still benefits from a sorted data pattern and a well-chosen index column order, even in direct path insert mode.

Inserting millions of time series data points (rows) per second into an Oracle database


This blog post is the first of a series of 4 related to time series and indexing.

According to wikipedia this is the definition of a time series:

A time series is a sequence of data points, typically consisting of successive measurements made over a time interval.

This blog post shows how we can efficiently store time series in an oracle database and how indexing strategy can have a big impact on insertion performance rate.

The time series data used in this example are performance metrics coming from a network performance monitoring tool.

We will use three components to identify our time series data point:

• a timestamp (timeid) which will be based on epoch time (and start from 70s to simplify the demonstration)
• a performance indicator identifier (indid) that represents a performance metric (load average, cpu usage,…)
• a network object identifier (insid) (router1,…)

and finally, we will store the collected metric value.

The key point to keep in mind is that our data arrival pattern is sorted by time (timeid).
That means that for each current timestamp, we have to store all the metrics of the entire network.
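The arrival pattern can be sketched with a small Python generator (a hypothetical illustration that mirrors the PL/SQL loops used later in this post):

```python
def kpi_stream(timeids=100, indids=7, insids=10000):
    """Yield data points in arrival order: every metric of every network
    object for one timestamp, then everything for the next timestamp."""
    for timeid in range(1, timeids + 1):
        for indid in range(1, indids + 1):
            for insid in range(1, insids + 1):
                yield (timeid, indid, insid)

points = list(kpi_stream(timeids=2, indids=2, insids=3))
# The stream arrives already sorted on (timeid, indid, insid) ...
print(points == sorted(points))       # True
# ... but NOT on (insid, indid, timeid), another possible index order.
reordered = [(i, d, t) for (t, d, i) in points]
print(reordered == sorted(reordered))  # False
```

This is exactly the tension explored below: the arrival order matches one candidate index column order but not the other.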

In terms of Oracle modeling, we will create a table called “KPI” to store that kind of information.

This is a pretty simple structure that is in fact a key/value model.

SQL> create table kpi (timeid number,insid number,indid number,value number);

Now comes the question: which indexing strategy should we use? As we will store billions of rows in this table, we will create a composite index on the timeid, insid, and indid columns in order to retrieve and graph our performance metric values over time.

But which order is optimal and what is the impact on the insertion rate and reporting queries?

As the goal is to store and graph the time series data for a specific piece of network equipment and a specific performance metric over time, this is what the SQL reporting queries look like:

select value from KPI where indid='CPU %' and insid='router 1' and timeid between 1 and 51;

When creating a composite index, the columns are sorted by the order mentioned in the index definition.

So considering our reporting sql query, the best access path is going to be an index range scan, and the most obvious implementation for our index is the following :

SQL> create unique index ix_kpi on kpi (insid,indid,timeid);

This will keep the requested index entries “clustered” together and reduce the I/O required by the index range scan operation:

[Figure: the index entries requested by the reporting query sit together in adjacent leaf blocks]
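To see why this ordering helps, here is a toy Python model that treats the index as a sorted list of (insid, indid, timeid) tuples (a sketch of the idea, not Oracle’s block layout): all entries for one equipment/metric pair form one contiguous slice, which is exactly what an index range scan exploits.

```python
import bisect

# Toy model of the index: sorted (insid, indid, timeid) entries for
# 5 network objects, 3 metrics and 50 timestamps.
entries = sorted((insid, indid, timeid)
                 for timeid in range(1, 51)
                 for indid in range(1, 4)
                 for insid in range(1, 6))

# "where insid = 2 and indid = 3 and timeid between 1 and 50"
lo = bisect.bisect_left(entries, (2, 3, 1))
hi = bisect.bisect_right(entries, (2, 3, 50))
series = entries[lo:hi]

print(len(series))                           # 50 entries
print(all(e[:2] == (2, 3) for e in series))  # True: one contiguous slice
```

In Oracle terms, lo and hi play the role of the range scan boundaries: one descent of the tree, then a short sequential walk.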

While this index seems optimal for our reporting query, what is its impact on our insertion rate?

Let’s perform a quick insertion of 7 million rows into this table using bulk loading.

declare
TYPE kpi_array is TABLE OF kpi%ROWTYPE;
l_kpi_array kpi_array:=kpi_array ();
start_time number;
elapsed number;
begin
for time in 1 .. 100
loop
for indid in 1 .. 7
loop
for insid in 1 .. 10000
loop
l_kpi_array.extend();
l_kpi_array(l_kpi_array.last).timeid:=time;
l_kpi_array(l_kpi_array.last).indid:=indid;
l_kpi_array(l_kpi_array.last).insid:=insid;
l_kpi_array(l_kpi_array.last).value:=dbms_random.value(1,1000);
end loop;
end loop;
end loop;
start_time:=DBMS_UTILITY.GET_TIME ;
forall p in l_kpi_array.first .. l_kpi_array.last
insert into kpi
values l_kpi_array(p);
elapsed := DBMS_UTILITY.GET_TIME - start_time;
dbms_output.put_line('elapsed time (seconds)=' || elapsed/100 );

end;
/

Notice that the data is sorted by time, because this is how it is gathered from our network.

elapsed time (seconds)=55.6

After executing this script, we can see that it takes about 55 seconds to insert 7 million rows, i.e. around 127,000 rows per second.

Now let’s check some of our session statistics values :

select
name, value
from v$mystat,v$statname
where
v$mystat.statistic#=v$statname.statistic#
and name in ('leaf node splits','leaf node 90-10 splits','redo size','redo entries','session logical reads','physical reads','db block changes','db block gets')
;

NAME VALUE
---------------------------------------------------------------- ----------
session logical reads 15684333
db block gets         15544039
physical reads        413
db block changes      10610256
redo entries          5363399
redo size             1922236380
leaf node splits      17050
leaf node 90-10 splits 32

We can observe the large number of “session logical reads” and redo entries generated. The index maintenance operations have generated a lot of “50-50” leaf node splits as well.

As our KPI data arrives sorted in a different order than the index structure, Oracle has to maintain the B-tree in an inefficient way.

For each row inserted, Oracle has to traverse the whole index structure, like this:

[Figure: each insert descends the B-tree to a different leaf block]

Dumping the redo log, we can see that the change vectors generated for the leaf block operations (“insert leaf row”) are not batched; this is clearly visible by checking change vector #2 (REDO: SINGLE):

REDO RECORD - Thread:1 RBA: 0x0007c8.00000002.0010 LEN: 0x011c VLD: 0x05
SCN: 0x0000.030f4fcb SUBSCN: 1 12/06/2014 15:51:59
(LWN RBA: 0x0007c8.00000002.0010 LEN: 11398 NST: 0001 SCN: 0x0000.030f4fcb)
CHANGE #1 TYP:0 CLS:34 AFN:3 DBA:0x00c96222 OBJ:4294967295 SCN:0x0000.030f4fc8 SEQ:58 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 84 spc: 3248 flg: 0x0022 seq: 0x0fb9 rec: 0x3a
xid: 0x0009.009.00004bef
ktubu redo: slt: 9 rci: 57 opc: 10.22 objn: 307081 objd: 307081 tsn: 12
Undo type: Regular undo Undo type: Last buffer split: No
Tablespace Undo: No
0x00000000
index undo for leaf key operations
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x00c961af.0fb9.03
Dump kdilk : itl=2, kdxlkflg=0x41 sdc=0 indexid=0x300008a block=0x0300416c
(kdxlpu): purge leaf row
key :(10): 03 c2 38 02 02 c1 06 02 c1 52
CHANGE #2 TYP:0 CLS:1 AFN:12 DBA:0x0300416c OBJ:307081 SCN:0x0000.030f4fc2 SEQ:1 OP:10.2 ENC:0 RBL:0
index redo (kdxlin): insert leaf row
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x00c96222.0fb9.3a
REDO: SINGLE / -- / --
itl: 2, sno: 230, row size 13
insert key: (10): 03 c2 38 02 02 c1 06 02 c1 52
keydata: (6): 03 00 a4 b7 00 43
prefix exists on block, sno: 2 len: 7

Now let’s perform the same insertion test but with the following index structure on the KPI table:

SQL> create unique index ix_kpi on kpi (timeid,indid,insid);

and insert the same 7 million row dataset again:

elapsed time (seconds)=17.8

Now it takes only about 18 seconds for 7 million rows, i.e. around 390,000 rows per second.

Let’s check again our session statistics :

NAME VALUE
---------------------------------------------------------------- ----------
session logical reads 809367
db block gets         711929
physical reads        40
db block changes      601218
redo entries          363907
redo size             823523804
leaf node splits      12273
leaf node 90-10 splits 12273

You can see that the number of logical reads and redo entries decreased dramatically.

The index is now what is called a “right-hand” index, as the timeid value is always increasing.

All new index entries are “clustered” together in the same block, at the right edge of the B-tree:

[Figure: all new index entries land in the rightmost leaf block of the B-tree]

This is also the reason why all leaf node splits are now only 90-10 (no more 50-50).
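A toy model of leaf-block splits makes the difference visible. This is a deliberately simplified sketch of B-tree leaves (fixed capacity, naive split rules), not Oracle’s actual algorithm: with monotonically increasing keys every overflow happens at the rightmost leaf and splits off almost nothing (90-10 style), while randomly ordered keys overflow leaves in the middle and force half-and-half (50-50) splits.

```python
import bisect
import random

CAP = 8  # toy leaf capacity (real Oracle blocks hold far more entries)

def load(keys):
    """Insert keys into toy B-tree leaves, counting the two split styles."""
    leaves = [[]]                      # sorted leaves, kept in key order
    splits = {'90-10': 0, '50-50': 0}
    for k in keys:
        # pick the leaf that should hold k: first leaf whose max key >= k
        i = next((j for j, leaf in enumerate(leaves) if leaf and leaf[-1] >= k),
                 len(leaves) - 1)
        leaf = leaves[i]
        bisect.insort(leaf, k)
        if len(leaf) > CAP:            # leaf overflow: split it
            if i == len(leaves) - 1 and leaf[-1] == k:
                # new key is the highest seen so far: split off just that key
                leaves.insert(i + 1, [leaf.pop()])
                splits['90-10'] += 1
            else:
                # key landed inside the structure: split the leaf in half
                mid = len(leaf) // 2
                leaves.insert(i + 1, leaf[mid:])
                del leaf[mid:]
                splits['50-50'] += 1
    return splits

print(load(range(10_000)))             # monotonic keys: only 90-10 splits
rng = random.Random(42)
print(load(rng.sample(range(10_000), 10_000)))  # random keys: mostly 50-50
```

The 90-10 case is cheap for the same reason Oracle’s right-hand index is cheap: the full leaf stays full, and only the newest entry moves to a fresh block.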

If we check the redo log records again, we can observe that the “insert leaf row” operations are now batched and performed by array (REDO: ARRAY):

REDO RECORD - Thread:1 RBA: 0x0007ca.00000102.015c LEN: 0x2530 VLD: 0x01
SCN: 0x0000.030f6b8d SUBSCN: 1 12/06/2014 15:58:25
CHANGE #1 TYP:0 CLS:25 AFN:3 DBA:0x00c000c0 OBJ:4294967295 SCN:0x0000.030f6b8d SEQ:1 OP:5.2 ENC:0 RBL:0
ktudh redo: slt: 0x0019 sqn: 0x00000000 flg: 0x000a siz: 2832 fbi: 255
uba: 0x00c8519f.0e15.01 pxid: 0x0000.000.00000000
CHANGE #2 TYP:1 CLS:26 AFN:3 DBA:0x00c8519f OBJ:4294967295 SCN:0x0000.030f6b8d SEQ:1 OP:5.1 ENC:0 RBL:0
ktudb redo: siz: 2832 spc: 832 flg: 0x000a seq: 0x0e15 rec: 0x01
xid: 0x0005.019.00004bbe
ktubu redo: slt: 25 rci: 0 opc: 10.22 objn: 307083 objd: 307083 tsn: 12
Undo type: Regular undo Undo type: Last buffer split: No
Tablespace Undo: No
0x00c8519e
index undo for leaf key operations
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x00c8519e.0e15.33
Dump kdilk : itl=2, kdxlkflg=0x61 sdc=0 indexid=0x300008a block=0x0300008b
(kdxlpu): purge leaf row
number of keys: 235
key sizes:
9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10

…….
CHANGE #3 TYP:0 CLS:1 AFN:12 DBA:0x0300008b OBJ:307083 SCN:0x0000.030f6b8d SEQ:1 OP:10.2 ENC:0 RBL:0
index redo (kdxlin): insert leaf row
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x00c8519f.0e15.01
REDO: ARRAY / -- / --
itl: 2, sno: 1, row size 3190
number of keys: 235
slots:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156
157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220
221 222 223 224 225 226 227 228 229 230 231 232 233 234 235
insert key: (2250):
02 c1 02 02 c1 02 02 c1 03 02 c1 02 02 c1 02 02 c1 04 02 c1 02 02 c1 02 02
c1 05 02 c1 02 02 c1 02 02 c1 06 02 c1 02 02 c1 02 02 c1 07 02 c1 02 02 c1
02 02 c1 08 02 c1 02 02 c1 02 02 c1 09 02 c1 02 02 c1 02 02 c1 0a 02 c1 02
02 c1 02 02 c1 0b 02 c1 02 02 c1 02 02 c1 0c 02 c1 02 02 c1 02 02 c1 0d 02
c1 02 02 c1 02 02 c1 0e 02 c1 02 02 c1 02 02 c1 0f 02 c1 02 02 c1 02 02 c1
10 02 c1 02 02 c1 02 02 c1 11

Conclusion :

For a composite index, the column order has an impact on the insertion rate, especially when your data arrives in a specific pattern.

In the next part we will continue to explore the insertion rate and demonstrate that it is still possible to improve it.

I would like to thank Bertrand Drouvot for kindly accepting to review this post.