What does your flash_archive_location look like, and what do you find in your jumpstart profile ...
You cannot set archive_location to something different from your SPS profile settings. If you want to use another flash archive, just use another profile or varset for it, or create your own target component to generate the settings. To do that, you must copy the target SPS component with the archive_location_flash variable and put :[target:archive_location_flash] in the profile...
Just look at :[ClientEther_base_config] for example. This value comes FROM the target, so you must modify the component that creates the target (and the JET config). Be aware that the original OSP plugin components canNOT be modified; all plugins are read-only.
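For reference, here is a minimal sketch of the relevant lines in a generated jumpstart flash profile (server IP and path are placeholders; the exact partitioning lines will differ in your setup). Note that archive_location points at the .flar file itself, and the installer is supposed to mount its parent directory:

```
install_type     flash_install
archive_location nfs <ip>:/<path>/myflash.flar
partitioning     explicit
```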
My flash_archive_locations was entered as: nfs://<ip>/<path>/myflash.flar
In the profile, it shows up the same: nfs://<ip>/<path>/myflash.flar
The problem is that when the new client machine performs a jumpstart flash install, it fails with: cannot mount <ip>:/<path>/myflash.flar. It is trying to mount a filename, not a directory, on the NFS server (NAS).
See above post for what is logged on the NAS, the remote location where my flars are stored. ( nfs://<ip>/<path> )
If I set flash_archive_locations to nfs://<ip>/<path>, the ../SUNWjet/Products/flash/check_client fails.
For some reason, jumpstart is effectively issuing a command equivalent to:
mount -F nfs <ip>:/<path>/myflash.flar /<somemountpoint>
Since the install bombs out, I can open a console and type the following, and see that it mounts:
mount -F nfs <ip>:/<path> /<somemountpoint>
I can then cd /<somemountpoint> and do an ls and see myflash.flar.
So, what is causing the jumpstart script to try to mount a file, and not a directory?
I was thinking that I may have some weird n1sps 6.0 with n1osp 5.2 mixing problem, so I am rebuilding with pure 6.0. If the failure is not related to that, I will certainly hit the same problem again.
I don't need the mountpoint to change for each client, just the flash archive name. But right now no matter what I do, jumpstart tries to mount the FILE, not the directory where the file resides.
The evidence I have of this is that when I manually try to mount the file, I get the same error in the NAS access log as jumpstart produces when it tries to mount (see the first post in this topic).
Is this the correct format for this value? And if this value is found in this file (and I have manually mounted it from this host), is this the only value on the client that can influence the mount of the flar?
If the format is correct and this is just not working for me, could I have Solaris mount my NAS (where the flars are) as part of base_config_nfs_mounts, and then specify the resulting local path to the archive as the variable value for "flash_archive_locations" (provided the same local directory and file exist on the OSP server, so that check_client will run)?
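To make the idea concrete, here is a rough sketch of what I have in mind for the JET config. I am assuming base_config_nfs_mounts takes mountpoint:server:/path style entries and that flash_archive_locations will accept a plain local path; both assumptions are unverified, and the IP and paths are placeholders:

```
# hypothetical: mount the NAS export on the client at /flash ...
base_config_nfs_mounts="flash:<ip>:/<path>"
# ... then reference the archive by its local (mounted) path
flash_archive_locations="/flash/myflash.flar"
```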
I have avoided the NAS for now and tried http instead. I cheated and mounted the NAS on the MA's web server (tomcat's "images" dir). I can hit the file, but the browser tries to render it inline.
...but all I get is "Invalid HTTP headers were returned from the server" when jumpstart tries to access the flar. It does NOT say anything else, like the archive being too big (I saw that in other old postings involving wanboot), just "invalid" headers. I checked tomcat's configuration for a maximum file download size and didn't see anything.
So I played around w/ the mime type, making it application/flar. The browser now asked if I would like to download the file, but I get the same error from jumpstart. What should the mime type of .flar be?
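For what it's worth, a .flar is an opaque binary, so application/octet-stream is the conventional choice rather than an invented type like application/flar. In Tomcat the mapping goes in conf/web.xml, roughly like this:

```
<!-- conf/web.xml: serve .flar files as generic binary data -->
<mime-mapping>
    <extension>flar</extension>
    <mime-type>application/octet-stream</mime-type>
</mime-mapping>
```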
I temporarily modified check_client in the flash product to output the whole header, and it shows 200 OK and the correct file size.
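An easier way to see exactly the headers the client gets, assuming curl is available on some box and with the host and path as placeholders:

```
# fetch only the response headers for the archive
curl -I http://<server>:8080/images/myflash.flar
# look for: HTTP/1.1 200 OK, a Content-Length matching the flar size,
# and the Content-Type you configured
```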
I heard back from Sun (about 3 months ago) that they do not currently support mounting a file, but that they may in the future. If your mass storage device supports iSCSI, you can create a disk that Solaris can see from within the format utility, and then nfs-share the remote iSCSI disk.
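For the record, the iSCSI route on Solaris 10 looks roughly like the following (the target IP and share path are placeholders, and your array must actually export an iSCSI LUN):

```
# point the initiator at the array and enable SendTargets discovery
iscsiadm add discovery-address <ip>:3260
iscsiadm modify discovery --sendtargets enable
# create device nodes for the newly discovered LUN
devfsadm -i iscsi
# the disk should now appear in format(1M); label it, newfs and mount it,
# then share the filesystem holding the flars:
share -F nfs -o ro /export/flash
```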