
April 21, 2005: On table sizes

A question that pops up from time to time is: how big is my table? This question can also be formulated as: how many bytes are in my table? As we're going to see, there are at least two, possibly different, answers to this question.
First, I create a table:
create table size_test (
  a number,
  b varchar2(20)
)
storage (
  initial     65K
  next        65K
  pctincrease  0);
Then, I insert some data into the table. dbms_random comes in handy for that:
begin
  for i in 1 .. 50000 loop
    insert into size_test values (1, dbms_random.string('X', dbms_random.value(5,20)));
    insert into size_test values (2, dbms_random.string('X', dbms_random.value(5,20)));
  end loop;
end;
/

For a reason that will become clear further down, half of the rows have a=1 and the other half have a=2.
b's value is an uppercase alphanumeric string whose length lies between 5 and 20 characters.
The generated data looks like:
RENE> select * from size_test where rownum < 10;

         A B
---------- --------------------
         1 Q48VY
         2 KJ5HD6LKFV244NH
         1 RIXX8YFSHKVJ19Z2A8
         2 FVBZEHJOL5YY3JN
         1 GTEG14
         2 0EFKM
         1 3PKZYH992LJDXEW1
         2 XZUNF3E9
         1 TAVL4K98L3G8CM8
Now, using vsize, I want to calculate the amount of data in bytes stored in the table:
RENE> select sum(vsize(a) + vsize(b)) data_size from size_test;

 DATA_SIZE
----------
   1451138

Obviously, the table stores 1451138 bytes worth of data.
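As a rough plausibility check, this figure can be estimated by hand. The following sketch assumes that vsize() of the numbers 1 and 2 is 2 bytes each (one exponent byte plus one mantissa byte) and that the generated string lengths average about (5 + 20) / 2 = 12.5 characters:

```python
# Back-of-the-envelope estimate of sum(vsize(a) + vsize(b)).
# Assumptions (not from the article): vsize(1) = vsize(2) = 2 bytes,
# and string lengths are roughly uniform between 5 and 20.
rows = 100000                      # 2 * 50000 inserts
avg_row_data = 2 + (5 + 20) / 2    # vsize(a) + expected vsize(b)
estimate = rows * avg_row_data
print(estimate)                    # 1450000.0, close to the measured 1451138
```

The estimate lands within about 0.1% of the measured value, which suggests the vsize sum really does reflect just the raw column data.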
But how many bytes are actually stored on disk? user_segments tells me!
RENE> select bytes from user_segments where segment_name = 'SIZE_TEST';

Now, that seems to be a huge discrepancy: the table uses 71% more disk space than it stores as actual data. This discrepancy is caused by overhead, which consists of 84 bytes per block, 7 bytes per row, plus the storage kept free due to pctfree. The 84 bytes per block break down into 57 (fixed header) + 23 (transaction header) + 4 (table directory); the 7 bytes per row into 2 (row directory) + 3 (row header) + 2 (column size).
We have 100000 rows, that is 700000 bytes of per-row overhead: 1451138 + 700000 = 2151138.
How many blocks are needed to store 2151138 bytes? We need to know the block size:
RENE> show parameter db_block_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
db_block_size                        integer     8192
The block size is 8192 bytes, of which the 84 bytes of per-block overhead must be subtracted: 8192 - 84 = 8108. I have used the default pctfree, which is 10%. This implies that inserts can use only 90% of those 8108 bytes: 8108 / 100 * 90 = 7297.2, which I round down to 7297 bytes. I can now calculate the number of blocks needed: 2151138 / 7297 = 294.8. This figure must be rounded up to the next extent allocation size (as specified with next in the create table statement): next is 65K, or 8 blocks, so ceil(294.8 / 8) * 8 = 296. Obviously, 296 blocks are needed to store the data, that is 296 * 8192 = 2424832 bytes. We're getting closer. I'd assume (until someone points out a more accurate explanation) that the remaining discrepancy has to do with not always filling exactly 90% of the available space, but maybe 89.8% or so.
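The whole estimate above can be sketched as a small calculation. All constants are the ones derived in the text; nothing here is Oracle-specific:

```python
import math

# Estimate of the segment size, following the article's arithmetic.
data_size      = 1451138   # sum(vsize(a) + vsize(b))
rows           = 100000
row_overhead   = 7         # 2 (row directory) + 3 (row header) + 2 (column size)
block_size     = 8192      # db_block_size
block_overhead = 84        # 57 (fixed) + 23 (transaction) + 4 (table directory)
pctfree        = 10        # default
extent_blocks  = 8         # next 65K, as stated in the article

total_bytes  = data_size + rows * row_overhead                         # 2151138
usable_bytes = (block_size - block_overhead) * (100 - pctfree) // 100  # 7297
blocks       = total_bytes / usable_bytes                              # ~294.8
allocated    = math.ceil(blocks / extent_blocks) * extent_blocks       # 296
print(allocated * block_size)                                          # 2424832
```

Running this reproduces the 296-block, 2424832-byte estimate derived above.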
Of course, I should also have included the segment header into the calculation, but I neglected it for brevity.

Deleting data

I delete half of the data:
RENE> delete from size_test where a = 2;
What is the size of the actual data?
RENE> select sum(vsize(a) + vsize(b)) data_size from size_test;

It has shrunk by 50% as well. But what about the size of the allocated storage?
RENE> select bytes from user_segments where segment_name = 'SIZE_TEST';

It remained exactly the same. When data is deleted, Oracle does not release the segment's storage; the freed space stays allocated to the table and is only reused for subsequent inserts.


A similar article is On shrinking table sizes.
June 10th, 2006: Alpesh Desai notifies me of an error on this page and corrects it. Thank you very much!
