
Vdbench 50407 is now available

You can find it here: Vdbench Downloads

Initially I had removed 50405 from the download page, but it turns out that SNIA is still using that, so I added it back in.

Below are the release notes for 50407:

Have fun.

Henk

50407rc29:

  • Message “Warning: total amount of i/o per second per slave (123,456) greater than 100,000.” The 100,000 limit can now be controlled using the ‘ios_per_jvm=nnnnnn’ General parameter, with the default set to 100,000. This gives you control over the per-slave i/o limit that Vdbench will report on, which means that YOU are responsible if you overload your slaves. (See the sketch after this list.)
  • A ‘debug=128’ general parameter: use at your own discretion. Normally all file system workloads defined using the fwd= parameter are controlled workloads: Vdbench does its best to let them all run at their requested skew, or by default at 100% divided by the number of workloads. The debug=128 parameter (or the ‘-d128’ execution parameter) lets each workload run as fast as it can.
  • A ‘debug=129’ general parameter: use at your own discretion. This option adds an extra column to the file system reports showing the average percentage of time that all requested FWD threads are busy. This is useful if, as mentioned above, some of your workloads are throttled to accommodate other workloads that cannot keep up. Note that this is an approximation; busy time that falls across reporting interval boundaries may be reported in adjacent intervals.
  • As usual, ‘debug=’ options are merely debugging aids or problem workarounds and will never be documented in the official Vdbench doc.
  • The data_errors=remove option has been enhanced to data_errors=(remove,nn) (for raw workloads only). The failing SD will not be sent any new i/o operations for ‘nn’ seconds, after which normal work will continue. (See the sketch after this list.)
  • The check to make sure you are running java 1.7 or higher has been removed.
  • A new ‘fwd_thread_adjust=no’ parameter. Example: you are running with the forthreads=1 RD parameter, but you have TWO FWD workloads. Vdbench automatically adjusts this thread count from one to two to make sure each workload gets something to do. This can confuse people who really, really want to run with only one thread. This parameter causes the code to NOT make this adjustment and to abort instead.
  • No code changes, just a fix to the C compile for Linux. The old Linux system used for C compile died of old age. The new (virtual) system I received was oel7, which requires a newer version of GLIBC which older oel6 systems don’t have. This caused incompatibility issues. Symptom: /lib64/libc.so.6: version `GLIBC_2.14' not found.  Code in rc27 was recompiled with oel6.
  • Though when running without Data Validation Vdbench should not care about data integrity, there is one place where it does matter: for file system functionality (FSD/FWD) Vdbench writes a small Control File into the target FSD. Twice over the years we have seen corruptions in this file, and when Vdbench restarts using a corrupted Control File all bets are off. For diagnostic purposes this file is now re-read immediately after it is created and its internal checksum is compared. I also now write the FSD anchor name inside the file, plus the process id of the Vdbench slave doing the writing. This will hopefully help us identify future corruption issues, or cases where two or more users may be writing to the same FSD anchor at the same time.
  • Small fix: the Histogram for each Workload Definition (WD) was missing.
  • Data Validation: There will be a new 'concept' in Vdbench: the 'Owner ID'.
    That will usually be the process ID of the Vdbench MASTER (no longer the slave) that is running.
    When using journaling however the Owner ID will be the process ID of the very first Vdbench MASTER creating the journal file. Even if you run journal recovery a dozen times, this Owner ID will never change.
    This Owner ID will be, as usual, in bytes 12-15 of each sector (this used to be the process ID); for DUPLICATE blocks however it will be in bytes 8-11, so if you see a corrupted duplicate block you can immediately see whether it was written by someone else. Though of course, if YOUR block gets lost, any other old garbage may still show up....
  • The new timeout= parameter will from now on not look at SDs or FSDs that completed their work early, for instance due to a format.
  • A new ‘-r rdname’ execution parameter allows for a restart of a parameter file containing multiple Run Definitions, e.g. ‘-r rd3’: if you have rd1 through rd5 in a parameter file, Vdbench will rerun rd3, rd4, and rd5. (See the restart example after this list.)
  • Added a ‘search’ option to Vdbench Compare, allowing you to be more selective when comparing specific Vdbench runs.
  • Fixed a bug where using dedup without Data Validation caused a java memory exception.
  • Added ‘validate=reportdedupsets’: when Vdbench Data Validation detects a corruption in a DUPLICATE block, it will report the lba and ‘last time used’ for all the duplicates of that corrupted block. Yep, when you have one million copies of the same block, which one is the bad one? :-)
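
To illustrate a few of the new general parameters, here is a minimal, hypothetical parameter file sketch; the SD/WD/RD names and the lun path are made up for illustration only:

  * hypothetical sketch: raise the per-slave i/o warning limit to 200,000
  ios_per_jvm=200000
  * after an i/o error, stop sending i/o to the failing SD for 30 seconds
  data_errors=(remove,30)
  sd=sd1,lun=/dev/rdsk/xxx
  wd=wd1,sd=sd1,xfersize=4k,rdpct=70
  rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=1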
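
And a hypothetical restart sequence for the new ‘-r’ parameter (the parameter file and output directory names are made up):

  ./vdbench -f parmfile -o output           (rd3 fails or the run is interrupted)
  ./vdbench -f parmfile -r rd3 -o output2   (reruns rd3, rd4 and rd5)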

50407rc27:

50407rc26:

50407rc25:

50407rc24:

50407rc23:

50407rc22:

  • Startcmd=, endcmd=, config.sh and my_config.sh: the user may optionally create a new script directory under the PARENT directory of where Vdbench resides. E.g. if /var/vdbench50407 contains Vdbench, the user may create /var/vdbscript/. Any script or command will first be searched for in /var/vdbscript/ and then in /var/vdbench50407/solaris (or linux). If found, that script will be used. This new option gives you a permanent place for scripts called by Vdbench, without worrying that a re-install of Vdbench will overlay whatever you have there. (See the layout sketch after this list.)
  • Vdbench sometimes needed 10 minutes or more of cpu cycles to figure out where each workload would run when you used loads of SDs and/or slaves. That bug has been fixed.
  • The skew report now does a numeric sort of SD names instead of an alpha sort, allowing sd1 and sd2 to be in the proper order instead of having sd1,sd10 and then sd2.
  • The big ‘skew’ fix in 50407rc19 did not work. I did a major rewrite in that area, so hopefully things are all better now.
  • Introduced seekpct=seqnz, fileselect=seqnz and fileio=seqnz. Sequential i/o always started at block zero, with the result that if a run did not last long enough, the next Vdbench execution would just find everything in cache. Adding ‘nz’ causes all i/o to start at a random point in the lun or file, so there is now a better chance that we will not find all data in cache. Again, if your lun/file size is relatively small you are STILL going to find it all in cache. ‘fileselect=seqnz’ does basically the same: it does not start at file0, but at a random file number. (See the example after this list.)
  • The ‘$host’ logic now also works inside of a Run Definition (RD).
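
To illustrate the script search order, a hypothetical layout (the script name is made up):

  /var/vdbench50407/             Vdbench install; overlaid on re-install
  /var/vdbscript/my_config.sh    searched first; survives a re-install
  /var/vdbench50407/linux/       searched second (solaris/ on Solaris)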
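
And a hypothetical ‘seqnz’ sketch (whether ‘nz’ helps still depends on your lun size versus cache size; the lun path is made up):

  * start the sequential workload at a random offset instead of block zero
  sd=sd1,lun=/dev/rdsk/xxx
  wd=wd1,sd=sd1,seekpct=seqnz,xfersize=128k
  rd=rd1,wd=wd1,iorate=max,elapsed=300,interval=1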

50407rc21:

50407rc19:

  • Symptom: Intermittent incomplete workload when doing a controlled workload AND multiple 100% sequential workloads in one Run Definition (RD) AND multiple slaves AND not enough threads for each slave to have at least one thread. Yep, a pretty small window.
  • (It took only 15 years for this 'problem' to show up :-))

50407rc18:

  • A fix to the way /etc/mnttab was parsed: for the first time a 64bit value showed up in the 'dev=' mount output.
  • A nasty bug in Data Validation. In 50407 a new key value would no longer always start with key=01; it would start with a random value. On rare occasions this random value would result in key=0xffffff, and that of course caused havoc since the key may not be larger than 7 bits.
  • Vdbench compare: new syntax:
    • ./vdbench compare old_output_dir new_output_dir [-o out.csv ]    Using '-o' will bypass the GUI and directly create a CSV.
    • ./vdbench compare old olddir-1 olddir-2 olddir-n new newdir-1 newdir-2 newdir-n  [-o out.csv]  (wildcarding is OK)

  • Also added '$error', which will contain either 'r' or 'w' for a normal read or write i/o error, or a to-be-determined hex value in case of a Data Validation error. (See the sketch after this list.)
  • Just before the above 'data_errors=scriptxxx.sh' script is called, no new i/o is started, giving the script some extra control over what is happening while it runs.
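
As a hedged sketch of such an error script (the exact substitution syntax and argument order are assumptions based on the description above; the script path and log file are made up):

  #!/bin/sh
  # onerror.sh: called by Vdbench after a data error, while no new i/o is started.
  # $1 receives the substituted $error value: 'r', 'w', or a hex Data Validation code.
  echo "`date`: data error type $1" >> /tmp/vdbench_errors.log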

50407rc13:

  • When using the report=no_sd_detail parameter the histogram files were still generated; again, with 2000 luns this option had better work.
  • Of course we now also have the report=no_fsd_detail parameter, though no_sd_detail implies this as well.
  • Workload reports will no longer be generated when a Workload Definition (WD) is NOT used.
  • timeout=(60,600) will print a message every 60 seconds and then abort at 600 seconds if the problem has not been resolved. (See the sketch after this list.)
  • timeout=(60,abort) will abort Vdbench if it does not see any i/o completions for 60 seconds.
  • timeout=(60,/xxx/script) will call '/xxx/script', and if the script returns 'abort' Vdbench will abort. Any other value returned by the script will cause Vdbench to continue. The script will be called again 60 seconds later if the timeout continues.
  • A fix to ./vdbench rsh which aborted when Vdbench was killed with ctrl-c.
  • A fix to ./vdbench rsh where the java port number of its rsh daemon was accidentally incremented.
  • A fix to a 45-second reporting anomaly on Linux, caused by not setting setTcpNodelay(true).
  • A fix to forseekpct=0 where Vdbench did not force the sequential work to be run on a single JVM.
  • A fix where an I/O error to an FSD allowed the block to be used again.
  • A fix for sparc linux where the openflags= values used by Vdbench did not match the OS.
  • The skew.html file now will include individual SD or FSD status, even if report=no_sd_detail  was used.
  • ./vdbench print by default ran using directio; that is now optional by adding the '-f directio' flag. The problem with using directio by default was that you would NOT be able to print the in-cache copy of the block.
  • validate=xfersize: this remembers and stores the xfersize of the LAST write done to each key block, and that xfersize will then be reported when a corruption is found. It requires an extra four bytes of java heap space per key block. This option has been very useful chasing a data corruption. Not available with journaling.
  • The 'max response time' column has been replaced by 'max read response' and 'max write response'.
  • At the end of a file system functionality test (FSD/FWD) two new lines are reported, with 'max' and 'standard deviation' for the appropriate columns.
  • The flatfile parser now allows tab-separated files, not only comma-separated files. It also allows '-c all', where ALL available columns will be reported.
  • fileselect=[empty full notfull]: causes file selection to only look for empty, full, or partially full files.
  • The FSD= 'count=' parameter now also allows use of a printf mask, e.g. fsd=fsd%02d,anchor=/tmp/dir%03d,…,count=(1,10). (See the sketch after this list.)
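
Two of the items above in one hypothetical sketch; the depth/width/files/size values are made up only to complete the example:

  * report every 60 seconds, abort after 600 seconds without i/o completions
  timeout=(60,600)
  * creates fsd01..fsd10 with anchors /tmp/dir001../tmp/dir010
  fsd=fsd%02d,anchor=/tmp/dir%03d,depth=1,width=1,files=100,size=1m,count=(1,10)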

Comments

9d41c51d-4873-45b5-88ab-af0f333ab216

Hi,

I have nearly the same issue, but from another perspective.
Since updating from 8u121 to 8u131, neither the Java verify nor JNLP applets are working anymore.
JavaScript, and Java as a local interpreter, are working as expected.

Symptoms are like this:

1. Start Internet Explorer 11.
2. Browse to a website (e.g. https://www.in-manager.arcor.de/INManager/pwd ).
3. IE freezes.
4. Open Task Manager.
5. You can see a single instance of jp2launcher.exe running idle.
6. Killing the process unfreezes Internet Explorer, but w/o loading the applet of course.

I tried to build a new virtual machine using Opera and Firefox under Linux, but with the same behavior.
Is this related to the discontinued NPAPI support?

br

Carsten

Radek Rehorek

Hi. Exactly the same problem with our Java application in our company's environment. Latest working JavaWS was from Java 8 update 92.

I tried many times to sequence the App-V package and tried to monitor processes, but no luck.

Today I found there is new Java 8 update 141 out, so I'll try it.

Radek Rehorek

No progress in Java 8 update 141. The same behavior.

Radek Rehorek

Java 8 update 144 is out, no change

2ecd0036-1b34-4c1e-8f65-3c05293fc66a

I am having exactly the same issue. Is this problem being looked into by Oracle? This is causing serious issues here now.

3480315

No update from my side so far. I'm working with a person from MS support trying to solve this. So far we had to include Java in our base Citrix/XenDesktop image.

Joe_

Exact same problem here... Java pops up for a couple of seconds, tries to load, then closes down. It works fine when running Java apps within Internet Explorer itself.

3480315

Ok, I just tested this with the latest JRE 9 Early Access build. Works like a charm.

Tested on a Win 7 machine with all updates, App-V client HF09. But I'm pretty sure it will work on App-V 5.0. Sequenced to VFS. No additional settings like COM integration.

[attached screenshot: jre9eaappv.png]

Joe_

I had the same issue. When I packaged this, I used a PVAD pointing to the Local App Data folder (I excluded the Local App Data folder in the UI before sequencing) and then also installed Java to that same folder. I still need to do some tidying up and test the exception site list, but that got it working for me.

8981c429-54f4-4152-9ed0-fb2e02cceaf4

Has anyone found a fix for this issue, other than using JRE 9? We have the same problem and really need this to work with the latest Java 8 client.

5a4b79d5-6e1a-4fcc-a0e4-07d1525b7925

If you have not come across this site/documentation, you may find it useful.

This documentation addresses many of the general Java issues with App-V. The site has good insight on IE11 freezes and PVAD.

http://packageology.com/2014/02/sequencing-java-the-definitive-guide-part-1/

3480315
  • https://bugs.openjdk.java.net/browse/JDK-8194690
    Pardeep Sharma added a comment on 2018-02-22 02:06 (edited): "Confirming that adding the deployment property deployment.security.use.insecure.launcher=true resolves the issue. Verified with the versions where it had failed prior (JDK 8u102 b14, JDK 8u111 b14 and JDK 162)."
Confirmed, the workaround is working fine! Adding deployment.security.use.insecure.launcher=true to deployment.properties solved the issue. You can:
  • remove LocalAppData from Exclusions prior to sequencing and edit the deployment.properties in AppData\LocalLow\Sun\Java\Deployment during sequencing, OR
  • somehow edit the deployment.properties in AppData\LocalLow\Sun\Java\Deployment for the existing package in the local, not virtualized, location (if LocalLow was in the excludes), OR
  • edit the global per-machine file in %windir%\Sun\Java\Deployment.
More on deployment.properties:
https://docs.oracle.com/javase/7/docs/technotes/guides/jweb/jcp/properties.html
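
For reference, a minimal sketch of the property file change (the per-user location is the one mentioned above; adjust the path for your package):

  # AppData\LocalLow\Sun\Java\Deployment\deployment.properties
  # workaround for the jp2launcher.exe freeze described in JDK-8194690
  deployment.security.use.insecure.launcher=true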

