From what I understand, the main focus areas of btrfs are data redundancy, snapshot capability, and continued efforts to improve performance.
The main reason for doing so over ext4 is that we need to create a file system > 16TB. According to Wikipedia, the limit is 1 EiB (1,048,576 TiB). What actually limits the file system to 16TB is the available tooling to create and maintain it: version 1.42 of e2fsprogs (e4fsprogs) supports the creation of ext4 file systems larger than 16TB.
You can download it from http://e2fsprogs.sourceforge.net/
E2fsprogs 1.42 (November 29, 2011): This release of e2fsprogs has support for file systems > 16TB
Version 1.42 has apparently still not made it into the current RHEL distributions (currently 1.41.12). Makes me wonder why RHEL even bothered to release ext4 yet.
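A quick way to check whether the installed e2fsprogs is new enough is to compare its version against 1.42. This is a minimal sketch; the `version_ge` helper is ours (not part of e2fsprogs), and it relies on GNU `sort -V`:

```shell
#!/bin/sh
# Report whether an e2fsprogs version is new enough (>= 1.42)
# to create ext4 file systems larger than 16TB.

# Compare two dotted version strings; prints "yes" if $1 >= $2.
# Uses GNU `sort -V` (version sort), available on RHEL/OEL.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ] \
        && echo yes || echo no
}

version_ge "1.42" "1.42"       # -> yes
version_ge "1.41.12" "1.42"    # -> no  (the RHEL version mentioned above)

# With e2fsprogs >= 1.42 the large file system needs the 64bit
# feature enabled at mkfs time, e.g. (destructive, example device only):
#   mkfs.ext4 -O 64bit /dev/<device>
```

In practice you would feed `version_ge` the output of `mkfs.ext4 -V 2>&1`; the hard-coded strings above are just for illustration.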
However, neither btrfs nor ext4 are recommended by Oracle to store oracle database files. Oracle ASM supports file sizes up to 128 TB according to http://docs.oracle.com/cd/E11882_01/server.112/e17110/limits002.htm#REFRN0042
Thanks. Clearly if you have $$$ - the recommended solution is to get a NAS. Otherwise you are left to improvise.
We will check e2fsprogs.
Yes, ASM supports datafiles for Bigfile Tablespaces of up to 128TB. We have those - and performance is outstanding. This is exactly the issue - we'd like to export them into a single filesystem.
I guess the next thing to try is ADVM volumes within a large ASM diskgroup.
Thank you for the tip !
Very true. We are on 126.96.36.199 - so ADVM works - we just need to see how it scales.
The problem is that the SCHEMAs are 40-60TB and typically a SCHEMA resides in a single tablespace - hence the large tablespaces.
Ideally we would like to export an entire tablespace (2-3 schemas) but that's even more ambitious. For now we are content to export a single SCHEMA at a time - but that still requires a landing zone filesystem > 16TB.
You can add datafiles to a tablespace as you require or desire, and to my knowledge there is no constraint between schema and datafile. It is a common practice, for instance, to simply add another datafile to a tablespace if you need more space.
The Oracle datapump or export utility can split the dumpfile file into several smaller chunks of data and even define that the output goes to different file systems. Such questions are however better discussed in the database forum.
I found an old related post that demonstrates the use of pipes and how to split and automatically compress the export output: What to do when not all pipes was consumed?
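The pipe trick from that post can be sketched like this. `printf` stands in for the actual `exp` client so the sketch is self-contained; in real use, the export utility would be pointed at the FIFO instead:

```shell
#!/bin/sh
# Named-pipe compression: the export writes to a FIFO while gzip
# compresses on the fly, so the uncompressed dump never touches disk.
set -e
work=$(mktemp -d)
mkfifo "$work/exp.pipe"

# Reader: compress whatever comes through the pipe, in the background.
gzip -c < "$work/exp.pipe" > "$work/exp.dmp.gz" &

# Writer: in real life this would be something like
#   exp user/pass file=$work/exp.pipe ...
printf 'pretend this is dump data\n' > "$work/exp.pipe"
wait

# Verify the round trip, then clean up.
gunzip -c "$work/exp.dmp.gz"
rm -rf "$work"
```

Note that classic `exp` works with pipes this way; Data Pump (`expdp`) writes server-side through directory objects and cannot target a FIFO, so for expdp you would use its own compression options instead.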
You can also define the output using the FILE parameter and thereby create dumpfiles on different volumes, for instance: FILE=/u01/prodex/exp01.dmp,/u02/prodex/exp02.dmp,/u03/prodex/exp03.dmp FILESIZE=2G
Dude - that's a clever workaround! You are right - you can run expdp with DUMPFILE=DIR1:FILE1_%U,DIR2:FILE2_%U and get 2 sets of dump files, so DIR1 and DIR2 can be 2 different <16TB ext4 file systems!
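For more than two landing zones, building that DUMPFILE clause by hand gets tedious. Here is a small hypothetical helper (our own, not an Oracle utility) that generates the clause from a list of directory-object names, each of which would map to a separate <16TB mount via CREATE DIRECTORY:

```shell
#!/bin/sh
# Build an expdp DUMPFILE clause spreading dump sets across several
# Oracle directory objects. DIR names below are illustrative.

build_dumpfile() {
    # args: one Oracle directory-object name per mount point
    out=""
    i=1
    for dir in "$@"; do
        # %U makes expdp generate numbered files within each set
        out="${out:+$out,}${dir}:FILE${i}_%U"
        i=$((i + 1))
    done
    printf '%s\n' "$out"
}

build_dumpfile DIR1 DIR2
# -> DIR1:FILE1_%U,DIR2:FILE2_%U
# then, for example:
#   expdp ... DUMPFILE=$(build_dumpfile DIR1 DIR2) FILESIZE=100G
```

With a FILESIZE cap, expdp round-robins new dump files across the listed directories, so no single file system has to hold the whole export.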
Compression also helps - whether through the OS or the export utility.
And you're right - it's not about the tablespace size - it's the SCHEMAs that are large and need to be exported as a single chunk for ease of administration.
Our hope, of course, was to have one big room where you can just throw everything - instead of multiple smaller cabinets - but if >16TB (btrfs or ext4) is a little bleeding edge - multiple DUMP destinations is a great way to get around it.
bigdelboy - thanks - it's actually a very good suggestion to use XFS. It is supposed to be the "big scalable" filesystem. It was our "plan B" for the following reasons -
(1) there seems to be a licensing surcharge for the >16TB filesystem feature
(2) we are all UEK2 - and with large volumes and high performance demands, uniformity of OS is a virtue - if only for support purposes - who knows, maybe there's some tiny incompatibility between UEK2 and XFS?
(3) if >16TB is bleeding edge on ext4 and btrfs - is it truly mature on XFS?
(4) we don't have much experience with XFS - so a new filesystem to get used to - little things no doubt - but still a learning curve
So we were going to try and exhaust the OEL offerings first - but XFS is definitely high on the fallback list.
Alvaro - thanks - that's another great idea.
We don't use OCFS2 today - so we didn't know if it would scale beyond 16TB, and we kind of assumed it's for clusters - but you're right - no reason why it couldn't run standalone, plus it's built into the UEK2 kernel already.
We'll give it a try.
Thanks everyone - a lot of good suggestions !
For local, you should be able to use something like this:
#mkfs.ocfs2 -F -T datafiles -M local --fs-features=refcount /dev/<device>
And when you do the mount entry in fstab, just skip the _netdev, as it won't require the network to work.
/dev/<device> /<mount_point> ocfs2 datavolume,nointr,noatime 0 0
Test the default mkfs.ocfs2 first, then start playing with custom block and cluster size values.
Be nice and tell us how it went. :D
As you said you are on UEK, if you want to test ACFS also, take note: as of today, you can run ACFS with UEK2 (2.6.39-300) on 188.8.131.52.4 Oct 2012 or 184.108.40.206.5 Jan 2013.
On the Grid Home, install the following patch - it does the trick:
#ACFS UEK - required 112034
The RDBMS DB Oracle Home can be on Jan 2013 (220.127.116.11.5) and works fine.
As of today, patch 12983005 is out for 18.104.22.168.5 Jan 2013 as well, so you can be on Oct 2012 or Jan 2013. Bleeding edge in patches. :D
Edited by: Alvaro Miranda on Feb 26, 2013 5:50 PM
Alvaro - thanks. The thing is that we are on this version of UEK2 -
Linux (servername) 2.6.39-200.34.1.el6uek.x86_64 #1 SMP Thu Oct 18 17:00:17 PDT 2012 x86_64 x86_64 x86_64 GNU/Linux
and on 22.214.171.124.0
so making a case for patches / upgrades would be another challenge.
Will ACFS with the desired functionality not work on the above?
#ACFS UEK - required 112034
enables ACFS to work with UEK2 2.6.39-300, but it requires the Oct 2012 CPU patch.
Not a big deal.
I will give you the steps: