We understand your requirement. Let me suggest a couple of alternatives worth trying.
Case 1: If you append relatively small amounts of data to the record successively (for example, you might store 100KB of experiment data, then another 100KB, and so on), then you can simply use a sequence of minor keys to store each chunk of data. To retrieve, you would iteratively fetch the contents of each minor key in sequence.
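To make Case 1 concrete, here's a minimal sketch using the Java driver. The store name ("kvstore"), host:port, key paths, and chunk size are all hypothetical placeholders; adjust them to your deployment:

```java
import java.util.Arrays;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class ChunkedRecord {
    public static void main(String[] args) {
        // Hypothetical store name and helper host; adjust to your deployment.
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));

        // Append: store each ~100KB chunk under the next minor key.
        byte[] chunk = new byte[100 * 1024];             // experiment data
        int nextSeq = 0;                                  // track this in your app
        Key k = Key.createKey(Arrays.asList("experiment", "42"),
                              Arrays.asList(String.format("%010d", nextSeq)));
        store.put(k, Value.createValue(chunk));

        // Retrieve: fetch minor keys 0, 1, 2, ... until one is missing.
        for (int seq = 0; ; seq++) {
            Key next = Key.createKey(Arrays.asList("experiment", "42"),
                                     Arrays.asList(String.format("%010d", seq)));
            ValueVersion vv = store.get(next);
            if (vv == null) break;                        // end of the sequence
            byte[] data = vv.getValue().getValue();
            // ... process the chunk ...
        }
        store.close();
    }
}
```

Zero-padding the sequence number keeps the minor keys sorted in insertion order. Also, because all of the minor keys share one major path, they live on a single partition, so multiGet() on the parent key can pull the whole sequence back in one call if it fits in memory.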
Case 2: If you first store a LARGE amount of data (e.g. 2 GB) and then add small chunks (e.g. 100KB) subsequently, you could store the large content as a LOB and then store the small chunks under a sequence of minor keys. To retrieve, you'd first read the LOB and then iteratively get() the minor key contents in sequence.
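And a sketch of Case 2, combining the LOB API with minor-key chunks. Again, the store name, key paths, input file, and timeouts are assumptions for illustration; note that LOB keys must end with the store's LOB suffix (".lob" by default):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;
import oracle.kv.Consistency;
import oracle.kv.Durability;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;
import oracle.kv.lob.InputStreamVersion;

public class LobPlusChunks {
    public static void main(String[] args) throws Exception {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));

        // 1. Store the initial large payload as a LOB (streamed, not buffered).
        Key lobKey = Key.createKey(Arrays.asList("experiment", "42"),
                                   Arrays.asList("base.lob"));
        try (InputStream in = new FileInputStream("bulk-data.bin")) {
            store.putLOB(lobKey, in, Durability.COMMIT_SYNC, 60, TimeUnit.SECONDS);
        }

        // 2. Append later ~100KB chunks under sequential minor keys.
        byte[] chunk = new byte[100 * 1024];
        Key chunkKey = Key.createKey(Arrays.asList("experiment", "42"),
                                     Arrays.asList("chunk", String.format("%010d", 0)));
        store.put(chunkKey, Value.createValue(chunk));

        // 3. Retrieve: stream the LOB first, then read the chunks in order.
        InputStreamVersion isv =
            store.getLOB(lobKey, Consistency.NONE_REQUIRED, 60, TimeUnit.SECONDS);
        try (InputStream lobIn = isv.getInputStream()) {
            // ... consume the large payload sequentially ...
        }
        for (int seq = 0; ; seq++) {
            Key next = Key.createKey(Arrays.asList("experiment", "42"),
                                     Arrays.asList("chunk", String.format("%010d", seq)));
            ValueVersion vv = store.get(next);
            if (vv == null) break;
            // ... process vv.getValue().getValue() ...
        }
        store.close();
    }
}
```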
Does this work for you?
The key/value pairs holding the data will be on the order of several million. If you split them apart as well, you get even more. Can the database work with that many keys without loss of speed?
In my opinion, storing all the data in one pair would be better, because we need fast sequential access. If we needed fast random access, we could consider your proposal.
That is not our case.
My recommendation is to do some experiments with different "value" sizes in order to get an idea of how the system performs. NoSQL Database is designed to handle large numbers of key-value pairs; we've tested NoSQL Database performance with over 2 billion rows. In that test scenario, each key-value pair was approximately 1000 bytes (we used the YCSB workload for testing).
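If it helps, here is a rough sketch of the kind of experiment I mean: time a burst of put()/get() calls at several value sizes and compare the averages. The store name, value sizes, and iteration count are placeholders; a real test should also vary concurrency, run long enough to warm caches, and stay under the store's request size limits for the largest values:

```java
import java.util.Arrays;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;

public class ValueSizeProbe {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));

        // Candidate value sizes: 1KB (YCSB-like) up to 1MB.
        int[] sizes = {1_000, 10_000, 100_000, 1_000_000};
        int iterations = 100;
        for (int size : sizes) {
            byte[] payload = new byte[size];

            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                Key k = Key.createKey(Arrays.asList("probe", Integer.toString(size)),
                                      Arrays.asList(Integer.toString(i)));
                store.put(k, Value.createValue(payload));
            }
            long putNanos = System.nanoTime() - start;

            start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                Key k = Key.createKey(Arrays.asList("probe", Integer.toString(size)),
                                      Arrays.asList(Integer.toString(i)));
                store.get(k);
            }
            long getNanos = System.nanoTime() - start;

            System.out.printf("size=%7d bytes  put avg=%6.2f ms  get avg=%6.2f ms%n",
                size, putNanos / 1e6 / iterations, getNanos / 1e6 / iterations);
        }
        store.close();
    }
}
```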
Your scenario is different from the YCSB workload since you have very LARGE rows. Some experimentation with data modeling will be very helpful. I'd love to see the results of your experiments.
Hope this was helpful.