What is the largest length of a NUMBER that Oracle will "support"?
The documentation says the following:
Datatype Limits indicates:
"Can be represented to full 38-digit precision"
NUMBER Data Types indicates:
"Oracle guarantees the portability of numbers with precision of up to 20 base-100 digits, which is equivalent to 39 or 40 decimal digits depending on the position of the decimal point."
I realize that if I define a column as simply NUMBER, I can insert values with up to 126 digits. However, Oracle seems to preserve only the first 40 digits in most cases. It allows at most 40 digits before it starts replacing the rest with 0's.
With numbers longer than 40 digits, Oracle sometimes replaces every digit after the 38th with a 0, and other times replaces digits only after the 39th or 40th.
So, what is the maximum number of digits we can be confident Oracle will store safely?
This was the code I used for testing this process.
create table max_num (num number);

declare
    l_x  number;
begin
    for x in 1 .. 200
    loop
        l_x := x;
        -- rpad(1, x, 1) builds a string of x 1's, implicitly converted to NUMBER
        insert into max_num values (rpad(1, x, 1));
    end loop;
exception
    when others then
        dbms_output.put_line('STOP: ' || l_x);
        dbms_output.put_line(sqlerrm);
end;
/

-- length(replace(num, 0)) counts the non-zero digits that survived
select num, length(replace(num, 0)) from max_num;
Hi,
user109389 wrote:
What is the largest length of a NUMBER that Oracle will "support"?
...
Therefore, what is the most number of digits we can have confidence in Oracle storing safely?...
If the documentation says 38, then I'll say 38, too.
If you put in a number greater than or equal to 1E39, then Oracle may round it to 38 significant digits; that is, the last digits you entered may get turned to 0's, or rounded up.
38 significant figures is not bad:
Now, Ludlow and his colleagues have created ytterbium clocks that are stable to one part in a quintillion (10^{18}): it would take a quintillion ticks to find one that is different from its neighbours.
On the other hand, it can fall short sometimes ...
The number of atoms in the entire observable universe is estimated to be within the range of 10^{78} to 10^{82}
Read more: http://www.universetoday.com/36302/atoms-in-the-universe/#ixzz2e3NwCYx1
> Now, Ludlow and his colleagues have created ytterbium clocks that are stable to one part in a quintillion (10^{18}):
As long as they remember to wind it.
What is the largest length of a NUMBER that Oracle will "support"?
You've already provided your own answer. If 'length' means the maximum number of digits when the value is written out, then the doc gave you the answer:
9.99...9 (38 9's) x 10^125 maximum value
The 38/39/40 referred to in the doc means 'significant digits'. That is why the rounding or truncation occurs if you provide more than 38/39/40 significant digits.
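As a rough analogy (not Oracle's actual code path), Python's decimal module can mimic this kind of significant-digit limit: set the context precision to 38 and any longer value is rounded back to 38 significant digits, which reproduces the trailing-zeros effect the repunit test showed.

```python
from decimal import Decimal, getcontext

# Mimic a 38-significant-digit limit. This is only an analogy for
# Oracle NUMBER, which actually works in base-100 digits internally.
getcontext().prec = 38

forty_ones = Decimal("1" * 40)   # a 40-digit repunit: 111...1
rounded = +forty_ones            # unary plus applies the context rounding

print(rounded)       # only 38 significant digits survive
print(int(rounded))  # 38 ones followed by two trailing zeros
```

Here the 39th and 40th digits round away, so writing the result out as an integer shows the familiar pattern of original digits turning into 0's.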
Oracle stores numbers internally in a 21-byte binary format using a structure documented in the OCI doc in the 'NUMBER' section.
http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci03typ.htm#i423684
Oracle Database stores values of the NUMBER data type in a variable-length format. The first byte is the exponent and is followed by 1 to 20 mantissa bytes. The high-order bit of the exponent byte is the sign bit; it is set for positive numbers, and it is cleared for negative numbers. The lower 7 bits represent the exponent, which is a base-100 digit with an offset of 65.
That doc section goes on to show you how to convert the internal format to the actual value.
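To make the quoted description concrete, here is a small, hypothetical Python sketch that decodes the internal bytes of a positive NUMBER following those rules (negative values and their terminator byte are ignored for brevity); the byte values in the examples, such as C1 02 for the value 1, follow directly from the exponent-offset and digit-plus-one encoding described above.

```python
def decode_positive_number(raw: bytes) -> int:
    """Decode a positive integer Oracle NUMBER from its internal bytes.

    Sketch only: handles positive integers and ignores negative
    values, which are encoded differently and carry a terminator byte.
    """
    # Lower 7 bits of the first byte hold the base-100 exponent,
    # offset by 65 (the high bit is the sign bit, set for positives).
    exponent = (raw[0] & 0x7F) - 65
    # Each mantissa byte stores a base-100 digit plus 1 (0x02 means 1).
    value = 0
    for i, b in enumerate(raw[1:]):
        value += (b - 1) * 100 ** (exponent - i)
    return value

print(decode_positive_number(bytes([0xC1, 0x02])))        # 1
print(decode_positive_number(bytes([0xC2, 0x02])))        # 100
print(decode_positive_number(bytes([0xC2, 0x02, 0x03])))  # 102
```

With at most 20 mantissa bytes of base-100 digits, you can see where the "20 base-100 digits = 39 or 40 decimal digits" wording in the doc comes from.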
An additional byte is used for all datatypes to store the length; that is why the docs commonly say a NUMBER can take up to 22 bytes.