Hi, I'm not sure what you're trying to achieve. Can you expand on your requirement in more detail, please?
So basically, I have a file system with three files (file1, file2, file3) on my ZFSSA appliance. I need to back it up in such a way that the backup data on the tape spans across the volumes present in it. For this I created a file system with three files, but the problem is that I don't know the size of each volume on the tape. So I was wondering whether there is a way to figure out the volume size. Alternatively, is there a way to tell OSB to span across volumes when starting the backup job itself?
Are you saying that you want file1 to be backed up on tape1, file2 onto tape2 and so on? So you don't have all files on the same tape?
You can achieve that by launching a new job for each path, which you can do by giving each one its own dataset. You can then call each dataset from a master dataset.
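As a rough sketch of that layout (the host name, paths, and dataset file names below are hypothetical examples, not taken from your environment), each per-path dataset file on the administrative server would name one path:

```text
# fs1.ds -- dataset covering only the first path (example names)
include host zfssa1
include path /export/share/file1
```

and a master dataset would then pull the per-path datasets together with `include dataset` directives:

```text
# master.ds -- master dataset referencing the per-path datasets
include dataset fs1.ds
include dataset fs2.ds
include dataset fs3.ds
```

Scheduling the per-path datasets individually gives you one job per path, while the master dataset is still available if you ever want a single job covering everything. Check the exact directive syntax against the OSB dataset language documentation for your release.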
Does that help?
When you say launch a new job, does that mean I need an individual job for each file? If so, that's not what I'm looking for. I need a single job, where the file data is large enough that it automatically spans across volumes. For this I need to know the volume size, which is where I'm stuck.