ManagedDataAccess 18.6 (3/21/2019) GetOracleBlobForUpdate is limited to 2GB — oracle-tech



DL9999 Member Posts: 5
edited July 2019 in ODP.NET

I noticed that there seems to be a 2 GB limitation on writing to a BLOB in Oracle.ManagedDataAccess. I just updated to the latest release, but the problem is still the same.

I have a DataReader object with access to a LOB column (SQL like "SELECT media, media_id FROM mat_media WHERE media_id = 111").

Then I get the OracleBlob with

blob = dataReader.GetOracleBlobForUpdate( 0 );

I open a file larger than 2GB:

using( Stream streamReader = File.Open( file.FullName, FileMode.Open, FileAccess.Read, FileShare.Read ) ) {

    streamReader.CopyTo( blob, chunkSize );

}

It doesn't depend on chunkSize (I tried different values from 100 K up to 50 M). The result is an exception:

Message: OraBufWriter:ReadResetResponse - Unexpected Packet received.

Stack trace:

   at OracleInternal.TTC.MarshallingEngine.ProcessReset()

   at OracleInternal.TTC.TTCLob.ReceiveResponse(List`1 dataSegments)

   at OracleInternal.TTC.TTCLob.Write(Byte[] lobLocator, Int64 locatorOffset, Byte[] inBuffer, Int64 inBufferOffset, Int64 numBytesToWrite)

   at OracleInternal.ServiceObjects.OracleBlobImpl.Write(Int64 locatorOffset, Byte[] inBuffer, Int64 inBufferOffset, Int64 numBytesToWrite)

   at Oracle.ManagedDataAccess.Types.OracleBlob.Write(Byte[] buffer, Int32 offset, Int32 count)


Then I replaced the Stream.CopyTo call with the equivalent loop:

byte[] buffer = new byte[chunkSize];

int read;

long allWritten = 0;

while( ( read = streamReader.Read( buffer, 0, buffer.Length ) ) != 0 ) {

    blob.Write( buffer, 0, read );

    allWritten += read;

}


Now I can see that exactly 2 GB has been written successfully when the exception is thrown. There must be an undocumented 2 GB limitation when writing to BLOB columns.

Everything works fine with files smaller than 2 GB (I tried, e.g., 1.99 GB). It looks like an Int32 is used to position the write mark within the BLOB instead of a UInt64.

Is there a solution for this? I have to upload large files, and I don't want to split them into 2 GB pieces in different BLOBs and join them on the DB side...
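For reference, a minimal, self-contained sketch of the chunked-copy loop described above; a MemoryStream stands in for the OracleBlob (which also derives from System.IO.Stream), and the chunk size and helper name are illustrative assumptions, not ODP.NET API:

```csharp
using System;
using System.IO;

public class ChunkedCopyDemo
{
    // Copies source into destination in fixed-size chunks and returns
    // the total number of bytes written. The counter is a long so the
    // count itself cannot wrap at the 2 GB Int32 boundary.
    public static long CopyInChunks( Stream source, Stream destination, int chunkSize )
    {
        byte[] buffer = new byte[chunkSize];
        long allWritten = 0;
        int read;
        while( ( read = source.Read( buffer, 0, buffer.Length ) ) != 0 )
        {
            destination.Write( buffer, 0, read );
            allWritten += read;
        }
        return allWritten;
    }

    public static void Main()
    {
        // Small stand-in data; in the real scenario source would be the
        // file stream and destination the OracleBlob.
        var source = new MemoryStream( new byte[10_000] );
        var destination = new MemoryStream();
        long written = CopyInChunks( source, destination, 4096 );
        Console.WriteLine( written ); // prints 10000
    }
}
```

Each individual Write call here stays far below the Int32 limit of the offset/count parameters; the reported failure occurs only once the cumulative amount written crosses 2 GB.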


  • Alex Keh-Oracle
    Alex Keh-Oracle Posts: 2,753 Employee
    edited April 2019

    I filed Bug/ER 29631186 on your behalf. The size limitations for the parameters offset and count are documented as Int32, which means they have a 2 GB max. It would be a good idea to make these parameters able to accept larger values in a future release.

  • DL9999
    DL9999 Member Posts: 5
    edited April 2019

    Thank you Alex for the quick answer.

    It's true that the offset and count parameters of OracleBlob.Write() are Int32 (as you can see in the stack trace above), but they specify the location and number of bytes in the given buffer to write to the BLOB. That's fine for me because my buffer is only 5 MB (I tried different chunk sizes, as mentioned).

    Internally, the blob handles its own locatorOffset, and that is an Int64, as it should be. All internal blob functionality handles positions as Int64 (as you can also see in the stack trace). It seems the problem is hidden deeper, in the communication with the server...

    I hope this will be fixed soon, as we are already discussing an implementation with several BLOBs joined on the DB side. That hack would be a performance disaster: the user already waits long enough to upload the file to the web server, and then again from the web server to the DB BLOB...
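To illustrate the Int32 vs. Int64 distinction discussed above: a 32-bit counter silently wraps to a negative value once it passes 2,147,483,647 (2 GiB - 1), while a 64-bit counter keeps going. A minimal demonstration (not ODP.NET code, just the arithmetic):

```csharp
using System;

public class Int32OverflowDemo
{
    public static void Main()
    {
        int position = int.MaxValue;       // 2147483647, the 2 GiB - 1 boundary
        long safePosition = int.MaxValue;

        // unchecked: the Int32 wraps around to a negative value...
        position = unchecked( position + 1 );
        // ...while the Int64 keeps counting correctly.
        safePosition = safePosition + 1;

        Console.WriteLine( position );      // prints -2147483648
        Console.WriteLine( safePosition );  // prints 2147483648
    }
}
```

Any layer of the stack that tracks the cumulative byte position in an Int32 would fail in exactly this way at the 2 GB mark, even though every individual buffer, offset, and count stays small.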

  • DL9999
    DL9999 Member Posts: 5
    edited May 2019

    I just installed the latest NuGet package of OracleManagedDataAccess (19.3).

    Unfortunately this bug is still not fixed.

    Please keep an eye on my last post, as the problem has nothing to do with the data type (Int32) of the mentioned parameters but must lie at a lower communication level of ManagedDataAccess.

    Maybe this should be mentioned in the item (Bug/ER 29631186) you opened for this error...

    Thanks for your help

  • Alex Keh-Oracle
    Alex Keh-Oracle Posts: 2,753 Employee
    edited May 2019

    I mentioned in the bug the behavior when retrieving a BLOB larger than 2 GB.

  • DL9999
    DL9999 Member Posts: 5
    edited July 2019

    Good news... I tried the latest OracleManagedDataAccess (v19.3.1) and now it works. Congrats to Oracle; they fixed this issue.

    Thanks for your help, Alex Keh.

  • DL9999
    DL9999 Member Posts: 5
    edited July 2019

    Unfortunately, there is another problem.

    Now I have the data in the BLOB, and I can stream it back to my client in full size.

    But Oracle ManagedDataAccess gives me a wrong length (negative, because the leading bit in an Int32 indicates the sign). With a BFILE of 2.9 GB it gives me -1.1 GB.

    Sometimes we copy the data from a BLOB to a BFILE (on the DB side) to save space in tablespaces. When I then access those BFILEs, the reported length is also incorrect (same as with BLOBs). Streaming back to the client also works correctly with BFILEs.

    Of course this is not a big problem, because I can stream the data. But it could have been tested when they fixed the bug.
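A possible client-side workaround (an assumption on my part, not a documented fix): as long as the true length is known to be below 4 GiB, the correct value can be recovered from the wrapped negative Int32 by reinterpreting its bit pattern as unsigned. A sketch of the arithmetic:

```csharp
using System;

public class LengthRecoveryDemo
{
    // Reinterprets a wrapped Int32 length as unsigned. Only valid when
    // the true length is known to be below 4 GiB (uint.MaxValue).
    public static long RecoverLength( int reported )
    {
        return (long)unchecked( (uint)reported );
    }

    public static void Main()
    {
        // A true length of 2,900,000,000 bytes does not fit in an Int32
        // and wraps to a negative value, as in the report above.
        long trueLength = 2_900_000_000;
        int reported = unchecked( (int)trueLength );

        Console.WriteLine( reported );                  // prints -1394967296
        Console.WriteLine( RecoverLength( reported ) ); // prints 2900000000
    }
}
```

This only papers over the symptom; for lengths of 4 GiB and above the information is genuinely lost, so streaming to the end (as described above) remains the robust approach.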
