Vdbench50405 now GA on OTN


    See Vdbench Downloads

     

    Below is a list of the changes since 50403.

     

    Henk.


    Release notes for Vdbench50405

     

    Vdbench50405 GA version

    • No known problems.

    Vdbench50405beta6

    1. Any pending first write to a block found during journal recovery will no longer be checked for valid contents.
    2. Preparing code for a possible future switch from 8 to 4 bytes of memory per block when using validate=time.
    3. Increased the maximum 'validate=time' block count from 1,073,741,824 to 2,147,483,647, which at this time will require a maximum of 16GB of extra Java heap memory.
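
    For reference, the heap arithmetic behind items 2 and 3 above, assuming the current 8 bytes of memory per block:

        2,147,483,647 blocks * 8 bytes = ~16GB of extra Java heap
        (a future 4-byte entry would cut that to ~8GB)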

    Vdbench50405beta5

    1. Fixed the 'No read validations done during a Data Validation run' abort when using journal=skip_read_all.
    2. Properly report the little-endian hex SD name in the corruption report.
    3. Changed the 'may need more jvms' warning message threshold to 100,000 iops per JVM.
    4. The default JVM count calculation now assumes 100,000 iops per JVM instead of 5,000.
    5. New utility: ./vdbench printjournal.
    6. When using Dedup or Data Validation with file system testing, file sizes must be a multiple of either dedupunit= or the smallest xfersize= used (see the sketch after this list).
    7. Resolved lun size issues on Linux when using SD concatenation.
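
    A minimal sketch for item 6, assuming dedupunit=4k; the anchor path, counts, and sizes here are made up for illustration. With dedupunit=4k, file sizes such as 8k or 1m are fine, but a 6k file would not be:

        dedupratio=2,dedupunit=4k
        fsd=fsd1,anchor=/tmp/vdbtest,depth=1,width=1,files=100,size=8k
        fwd=fwd1,fsd=fsd1,operation=write,xfersize=4k,threads=2
        rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=60,interval=1

    For the new printjournal utility (item 5) I expect the usual './vdbench printjournal <journal file>' invocation; this is my assumption, and the tool should print its usage text when started without arguments.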

    Vdbench50405beta4

    1. Linux now uses clock_gettime(CLOCK_MONOTONIC) for response time measurements, to avoid clock-drift problems.
    2. Concatenated SDs did not properly use dedupunit=.
    3. Fixed a memory corruption in the SNIA workload.
    4. New option: journal=ignore_pending or -jri.
    5. Changed option: journal=skip_read_all or -jrs (see the examples after this list).
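
    Hedged examples of both journal recovery options; the parameter file name is made up, and I am assuming these flags start a journal recovery the same way the existing -jr flag does:

        # Recover the journal, ignoring pending writes:
        ./vdbench -f parmfile -jri

        # Recover the journal, but skip the 'read and validate everything' pass:
        ./vdbench -f parmfile -jrs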

    Vdbench50405beta3

    1. The new Concatenation markers did not properly calculate the LUN size on Linux.

     

    Vdbench50405beta2

    1. False corruption because a block was unlocked BEFORE the -vr read-immediate was done, allowing a different thread to modify this block too quickly.
    2. False corruption because 'pending write' verification concluded that the pending write was never completed, but failed to pass the fact that the block still contained the OLD data to the 'let's check one more time' code.
    3. False corruption because 'pending write' verification allowed a never before written block to be identified as having invalid data.

    Vdbench50405beta1

    Data Validation:

    1. Data Validation now supports one Exabyte of data in 4k blocks. Until now Vdbench supported 'only' 31 bits' worth of blocks. That lasted more than 13 years, but times have changed. Vdbench now supports 48 bits, or 281,474,976,710,656 blocks (see the arithmetic after this list). That hopefully will be enough for a while.
    2. A 'time line' for a corrupt block is now displayed, trying to assist you in finding out when exactly the block was corrupted.
    3. Data Validation reporting has one more change: in previous versions, when multiple data blocks were corrupted, the output for these different blocks, reported in 512-byte chunks, could all be mixed together. The new version postpones reporting until a complete data block has been verified, making sure that all errors are reported in proper sequence, with the added benefit that you can now be assured that the WHOLE block is reported before data_errors=nn is reached.
    4. './vdbench dvpost' no longer exists, though I am starting to question that decision. Let me know if you need it.
    5. A 'seekpct=eof,rdpct=100' workload during Data Validation will now ONLY read those blocks that Vdbench knows it has written (see the sketch after this list). This gives you a relatively 'quick' way to make sure that all the data is valid and that no data has been corrupted. See also 'journal recovery' below.
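
    The arithmetic for item 1: 281,474,976,710,656 (2**48) blocks of 4k each is 2**48 * 2**12 = 2**60 bytes, the one Exabyte mentioned above. And a minimal sketch of the item 5 'read back everything I know I have written' run; the lun name is made up, and elapsed= only serves as an upper bound since the run ends when EOF is reached:

        validate=yes
        sd=sd1,lun=/dev/rdsk/c0t0d0s2,threads=8
        wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=eof
        rd=rd1,wd=wd1,iorate=max,elapsed=3600,interval=1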

    Data Deduplication (Dedup):

    1. Dedup also supports 48 bits' worth of blocks.
    2. The dedupunit= parameter is now required; there is no longer a default of 128k.
    3. You can now specify multiple dedup ratios in the same Vdbench test for the raw i/o functionality (see the sketch after this list). A decision about similar functionality for file system testing has not been made yet.
    4. You can now read and write using ANY data transfer size; there is no longer a need to use a multiple of the dedupunit= parameter.
    5. Dedup 'flip-flop' is now optional and by default is set to OFF. Having it on by default just caused too much confusion with users who expected the generated dedup ratio to remain stable in spite of random write workloads.
    6. Dedup 'hot sets' now allows you more detailed control over how deep (how many collisions) you want your dedup hash tables to be, and, together with 'flip-flop', allows you to force the creation and deletion of hashes. Dedup previously operated mainly in a semi-stable state, making it virtually impossible for hashes to become obsolete and deleted.
    7. Dedup can now be used across Vdbench slaves and/or clients.
    8. Validate=continue no longer exists. This option was there to allow 'flip-flop' information to be passed from run to run so that the next run could be 100% accurate in its flip-flop decisions. But then I realized that the flip-flop mechanism in itself already sacrificed the accuracy of the requested dedup ratio. Trying to be accurate with something that was already known to be inaccurate absolutely did not make sense.
    9. Dedup now also works with SD concatenation.
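
    A minimal sketch for item 3; my assumption here is that a per-SD dedupratio= override is how multiple ratios are requested, and the lun names are made up:

        dedupunit=4k
        sd=sd1,lun=/dev/rdsk/c0t0d0s2,dedupratio=2
        sd=sd2,lun=/dev/rdsk/c0t1d0s2,dedupratio=5
        wd=wd1,sd=(sd1,sd2),xfersize=4k,rdpct=0
        rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=1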

    Journaling:

    1. Journaling also supports 48 bits' worth of blocks.
    2. Journal recovery is now aware of Dedup and will properly identify data corruption of writes that were pending the moment that Vdbench and/or the system shut down.
    3. Journal recovery's 'read everything to make sure nothing has been corrupted' step now reads data in 128k blocks, no longer using the (usually much slower) smaller xfersize= or dedupunit= specified.
    4. This last step can even be completely bypassed, though of course you then lose the opportunity to identify corrupted data as early as possible, which may leave you scratching your head later on.
    5. Also in this last step, any block that according to the journal does not have known data on it will not be read. Data Validation can ONLY validate data that it knows it has written itself.
    6. Speed improvements: a 12TB lun contains 3.5 billion 4k blocks. Writing all that information to journal files or reading it back may take a while. Efforts have been made to speed that up by no longer doing this single-threaded.
    7. journal=(max=nnn): High iops and/or long runs can cause the journal file to become huge. A new journal=(max=nnn) option gives you control over how large it gets (see the sketch after this list). When the limit is reached, all i/o to the current lun or file system will be temporarily halted and the current in-memory Data Validation map will be written to the journal file, clearing all the before/after journal records. Be aware, referring to the 12TB lun above, that this may take a few minutes with all i/o to the lun halted.
    8. A new 'journal=maponly' parameter prevents before/after journal records from being written to the journal file. At the end of a run the Data Validation map will still be stored in the journal file. This of course means that you depend on Vdbench completing successfully, but this option will be very helpful if you want to 'quickly' format an SD without the overhead involved in writing the before/after records. It will also be of great use when you use journaling to, for instance, validate snapshots or replication.
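
    A hedged sketch of items 7 and 8; sizes and the lun name are made up:

        validate=yes
        * Cap the journal file size; when the cap is reached all i/o is briefly
        * halted while the in-memory map is rewritten to the journal:
        journal=(max=10g)
        * Alternative (item 8): no before/after records, map stored at end of run:
        * journal=maponly
        sd=sd1,lun=/dev/rdsk/c0t0d0s2,threads=8
        wd=wd1,sd=sd1,xfersize=4k,rdpct=0
        rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=1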

    Some old Vdbench debugging tools now made available:

    1. 'ShowLba': Vdbench optionally will generate a trace of all the i/o done, allowing ShowLba to visually show you what portions of an SD are being accessed. This is very helpful when verifying the accuracy of parameters like 'hotband' and 'range'.
    2. 'Csim': Compression simulator: you can ask csim to read an x% random sample of the data on a lun or a bunch of files and it will tell you what the current gzip-based compression rate is. This helps you verify the accuracy of the Vdbench compression data pattern that has been generated. Remember, this is an estimate.
    3. 'Dsim': Dedup simulator: dsim will read a file or a lun and report the dedup ratio for this data. This will verify the accuracy of the Dedup data pattern generated by Vdbench. Note that this is just a 'primitive' tool: it has no restart capabilities and cannot handle huge amounts of data: each unique block requires about 100 bytes of Java heap memory, and that can all add up pretty quickly when running against a large amount of data, especially data that does not Dedup very well. A 10GB Java heap should be able to easily handle 400GB worth of unique data blocks with dedupunit=4k. Please let me know if there is a need to expand this tool.
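
    I expect all three tools to be started through the vdbench script itself, the same way as printjournal above; this is my assumption, and each tool should print its usage text when started without arguments:

        # Chart which portions of an SD were accessed, from a Vdbench i/o trace:
        ./vdbench showlba

        # Estimate the gzip-based compression ratio of existing data:
        ./vdbench csim

        # Report the dedup ratio of existing data:
        ./vdbench dsim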

    Miscellaneous changes:

    1. A warning message will be displayed when the time-of-day clocks between the master and one or more remote hosts are more than 30 seconds out of sync. Out-of-sync clocks have caused several false Vdbench heartbeat timeout problems.
    2. Performance statistics sent from all slaves to the master will now be compressed on the slave and decompressed on the master. Some systems, especially Windows 2008, have some serious problems around java socket communications and the hope is that some of those problems now will be resolved.
    3. A new addition to the 'maxdata=' parameter: maxdata_read= and maxdata_written=
    4. When starting Vdbench, some command line and parameter file variable substitution is possible. However, a bug in the code would not recognize that some variables were specified but not found in the parameter file, causing a lot of confusion for Vdbench and its users. Vdbench will now properly abort when there is a discrepancy between the two.
    5. The 'pattern=/file/name' option no longer will modify the data pattern just created to prevent accidental Dedup. YOU are now 100% responsible for the data pattern.
    6. A new rd=rd1,….,stopcurve=n.n option: stop the iorate=curve run when you reach a response time greater than 'n.n' milliseconds (see the example after this list).
    7. SD concatenation is mainly used by the SNIA and the EPA. To guarantee that users do not try to 'trick' the system, and also to resolve the age-old problem of 'lunX on systemA may not be the same lunX on systemB', Vdbench will now write a marker on each SD to make sure that all LUNs specified are the same LUNs on all clients, and will even, if possible, correct the SD+LUN specifications.
    8. data_errors=nnn now also is used for File System testing, meaning that Vdbench no longer will abort immediately after the very first read or write error. Note that errors during open() and close() calls will still cause an immediate abort.
    9. A new abort_failed_skew=nn parameter will abort Vdbench when the requested workload skew is not reached. Look for the skew report in summary.html.
    10. The swat= parameter no longer exists. The option allowed Vdbench to automatically call a very old version of Swat to allow it to generate some PNG files of performance charts. This function now has been removed.
    11. The process-ID of all the slaves is now reported in logfile.html.
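
    A minimal sketch combining items 3, 6 and 9; the values and names, and my placement of abort_failed_skew= as a general parameter, are assumptions:

        * Abort when the requested 60/40 skew is off by more than 10%:
        abort_failed_skew=10
        sd=sd1,lun=/dev/rdsk/c0t0d0s2,threads=16
        wd=wd1,sd=sd1,xfersize=8k,rdpct=70,skew=60
        wd=wd2,sd=sd1,xfersize=64k,rdpct=0,skew=40
        * Stop the curve run once response time exceeds 5.0 milliseconds:
        rd=rd1,wd=(wd1,wd2),iorate=curve,stopcurve=5.0,elapsed=60,interval=1
        * Stop this run after 100g has been written:
        rd=rd2,wd=wd1,iorate=max,maxdata_written=100g,elapsed=600,interval=1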

    Vdbench50404rc4

    1. This new version requires Java 1.7 or newer, because of the soft-link support mentioned below.
    2. Support on Solaris and Linux for /dev/ device names that are actually soft links.
    3. Fixed a bug with maxdata= not working when using the 'loop' option.
    4. The flat file was not including cpu statistics for non-Solaris systems.
    5. 'format=restart' sometimes incorrectly thought that an interrupted format was already complete.
    6. Fixed problems with Dedup results when the order of SDs changed between runs.
    7. Added a new 'timestamp' field to the flatfile which now includes a date and time zone.
    8. Fixed a bug that prevented using a raw device as a journal.
    9. When using concatenated SDs with multi-host Vdbench the code now makes sure that LUN names and SD names across hosts really are the same physical device, and will even correct, if possible, any mismatches. Yes, it is not only Solaris that has random device name generators, making it difficult for users to figure out 'is this really LUN XYZ?'.

    Vdbench50404rc3

    1. 'format=restart' would not always continue formatting incomplete files, causing "Trying to read beyond EOF" error messages and aborts.
    2. On Mac I have now given up trying to report cpu statistics (instead of printing garbage).
    3. Code will no longer try to create the Data Validation map on a raw device when using 'journal=/dev/rdsk/xxxx'.
    4. Linux now uses "clock_gettime(CLOCK_MONOTONIC, &time)" when available. This should eliminate infrequent 'time travel' problems when clocks are being synchronized.

    Vdbench50404rc2:

    1. After seeing more and more systems that do not have csh installed Vdbench switched from csh to using bash for its Unix startup script and all the forking of its work. Of course, after that it did not take too long to run into an AIX system where bash was not available. You just can't win the 'A through Zsh du jour' battle. Bash it is. <End shell bashing>
    2. Use of the jvms=n parameter is no longer allowed for file system workloads. The objective for file system testing has always been to have ONE JVM handle all the work for ONE file system, but the introduction of 'shared=yes' caused that to no longer be the case. The code however works fine when spreading the workload over more hosts/clients, so I decided not to put in any effort to resolve the multi-JVM issue, just to work around it. So, if you need more JVMs, just code more hosts/clients (see the sketch after this list). You don't really need extra physical clients though: you can for instance specify hd=hostA1,system=systemA plus hd=hostA2,system=systemA, or hd=host1,system=localhost and hd=host2,system=localhost. You can also use the 'clients=' Host Definition parameter.
    3. File system functionality was also originally planned to have only one thread use a file. That broke Data Validation after the introduction of 'fileio=(random,shared)', because now we had two or more concurrent users of the same data block, which is NOT allowed for Data Validation. Solution: no 'fileio=shared' for Data Validation. If you really need sharing, use the raw i/o (SD/WD) functionality of Vdbench.
    4. Data Validation for raw i/o (SD/WD) incorrectly skipped the validation of a data block: the first 'read immediately' and any 'non pre-read' read operations were not validated, causing a possible corruption to be found a little later during a pre-read before the next write operation. The validation in this case was not done but still counted. The only time this bug was really problematic was after a 100% read run.
    5. Problems with using skew= and multi-host for file system testing have been resolved (this was the 'no multi-JVM' workaround mentioned above).
    6. File system format=yes for shared file systems is no longer allowed. That must be split into two steps: 'format=(clean,only)', and then 'format=(restart,only)' (see the sketch after this list). During a shared format one host started deleting files that another host had just created.
    7. Messagescan=nnn did not always stop the scan after nnn error messages.
    8. Some runs were so short that they completed inside of the very first reporting interval. Since Vdbench always reports the totals of all intervals minus the first one, the results sometimes showed up as garbage or question marks. These values will now be displayed as 'NaN', or Not a Number.
    9. The Workload Skew report now also is created for file system workloads. This will help you identify any possible skew problems.
    10. AIX support for journaling. This functionality actually was available already for several years, but was never activated. This proves again: just ask!
    11. A new 'stopcurve=n.n' Run Definition parameter which will stop Vdbench running curve data points the moment that response time n.n (in milliseconds) is reached. Note that this terminates Vdbench, not just the current RD, so don't try to run this together with any 'forxxx' parameters.
    12. When multiple 'seek=eof' SDs completed within one very small window Vdbench would sometimes terminate without reporting the run totals.
    13. Support with file system testing for sparse files. Since I have not had anyone really use it I have not documented it; I just need some more run time. Let me know if you need it (and then of course USE it and provide me with feedback).
    14. Solaris: I no longer need to run 'ls -lr /dev/rdsk' during configuration interpretation. This may speed up Vdbench startup if you have thousands of luns.
    15. Fixed a problem where journal recovery reported false corruption when Dedup was used. The order of the SDs specified in the parameter file had changed between the creation of the journal and the journal recovery, causing the expected data patterns to change. Vdbench will now sort the SD names used, to make sure that they always stay in the same order. This is a great example of a weird corner case!
    16. File system 'totalsize=' and 'workingsetsize=' values will be automatically adjusted over all hosts that have been defined.
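
    Two sketches for items 2 and 6 above. First the 'more JVMs via more hosts' workaround, taken straight from item 2; second my assumption of what the two-step shared format looks like (the fsd/fwd definitions are omitted):

        * Two JVMs on the same client for file system testing:
        hd=host1,system=localhost
        hd=host2,system=localhost

        * Step 1: delete whatever is already there:
        rd=step1,fwd=fwd1,fwdrate=max,format=(clean,only)
        * Step 2: (re)create the complete file structure:
        rd=step2,fwd=fwd1,fwdrate=max,format=(restart,only)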