1 Reply Latest reply on Oct 18, 2008 4:14 PM by 572474

    Performance retrieving large strings and BLOBs using Typed Dataset/adapters

      While using typed datasets I've noticed that the time to fill a data table increases dramatically if the underlying database table contains varchar2(4000) columns or populated BLOB columns.

      We've got a few large varchar2 columns on the basis of 'Why restrict it when you don't have to?'. We also have some columns that genuinely need to be that big. The problem is that filling a data table from a table with 3 varchar2(4000) columns takes much longer (3 seconds for 400 rows) than filling a data table with 3 varchar2(100) columns (0.75 seconds), even though the columns are all null.

      If I change the columns to CLOBs then the fill is quick until you actually populate some of the columns with data, at which point it becomes even slower than the large varchars.

      So the question is: if you have data that will usually be null or a short string but can occasionally be much longer, what is the most efficient way of getting it into a dataset?

      I'm also having difficulty getting images into a dataset in a timely manner. At the moment it's taking nearly 10 seconds to retrieve 400 rows, each with a 144KB image stored in a BLOB (approx. 56MB in total). The Initial LOB Fetch Size setting doesn't seem to have any effect when using Typed Datasets (with TableAdapters). There also doesn't seem to be any way of using the OracleBlob datatype with a Typed Dataset.
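      For what it's worth, one workaround I've been considering is bypassing the generated TableAdapter and filling the typed DataTable with a hand-built OracleCommand, since InitialLOBFetchSize is a property of the command itself. This is only a sketch under assumptions: the table/column names ("IMAGES", "ID", "IMAGE_DATA") and the dataset member ("myDataSet.Images") are placeholders, and how much of each LOB actually ends up in the filled rows may depend on the ODP.NET version.

```csharp
using System.Data;
using Oracle.DataAccess.Client;

// conn is an already-open OracleConnection; myDataSet.Images is the typed DataTable.
using (OracleCommand cmd = conn.CreateCommand())
{
    cmd.CommandText = "SELECT ID, IMAGE_DATA FROM IMAGES";

    // Only fetch the first N bytes of each BLOB inline with the row data;
    // the generated TableAdapter offers no way to set this.
    cmd.InitialLOBFetchSize = 1024;

    using (OracleDataAdapter adapter = new OracleDataAdapter(cmd))
    {
        adapter.Fill(myDataSet.Images);
    }
}
```

      I haven't confirmed whether Fill still materialises the full BLOB into the DataRow regardless of this setting, which would explain why the TableAdapter route shows no difference either.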

      Has anyone got any ideas on how to either fetch the BLOB data more quickly, or defer fetching it until needed when using a Typed Dataset? Or am I just being unreasonable in expecting it to be quicker?
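      The "defer" option I have in mind would look something like the following: fill the dataset from a query that excludes the BLOB column entirely, then pull an individual image on demand by primary key. Again just a sketch, with placeholder names ("IMAGES", "ID", "IMAGE_DATA") standing in for the real schema.

```csharp
using Oracle.DataAccess.Client;
using Oracle.DataAccess.Types;

// Fetch a single image on demand, instead of dragging 56MB of BLOBs
// into the dataset up front. conn is an already-open OracleConnection.
static byte[] FetchImage(OracleConnection conn, int id)
{
    using (OracleCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT IMAGE_DATA FROM IMAGES WHERE ID = :id";
        cmd.Parameters.Add("id", OracleDbType.Int32).Value = id;

        using (OracleDataReader reader = cmd.ExecuteReader())
        {
            if (!reader.Read() || reader.IsDBNull(0))
                return null;

            // GetOracleBlob hands back the LOB; Value reads its full contents.
            using (OracleBlob blob = reader.GetOracleBlob(0))
                return blob.Value;
        }
    }
}
```

      The obvious cost is one round trip per image, but for a grid where only a handful of images are ever displayed at once that may still beat the 10-second up-front fill.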