How come I can't seem to set archive_location separate from archive_locations_flash?
can you be a little more concrete ....
What does your flash_archive_location look like, and what do you find in your jumpstart profile ...
You cannot set archive_location different from your SPS profile settings. If you want to use another flash archive, just use another profile or varset for it, or create your own target component to create the settings. So you must copy the target SPS component with the archive_location_flash variable and type :[target:archive_location_flash] in the profile...
Just look at :[ClientEther_base_config] for example. This value comes FROM the target, so you must modify the component that creates the target (and the JET config). Be aware that the original OSP plugin components canNOT be modified. All plugins are only "readable".
Thanks for your help Chris,
My flash_archive_locations was entered as: nfs://<ip>/<path>/myflash.flar
In the profile, it shows up as the same : nfs://<ip>/<path>/myflash.flar
The problem is that when the new client machine is performing a jumpstart flash install, it fails reporting: cannot mount <ip>:/<path>/myflash.flar because it is trying to mount a filename, not a directory on the nfs server (NAS)
See above post for what is logged on the NAS, the remote location where my flars are stored. ( nfs://<ip>/<path> )
If I set flash_archive_locations to nfs://<ip>/<path>, the ../SUNWjet/Products/flash/check_client fails.
For some reason, jumpstart is effectively issuing a command equivalent to:
mount -F nfs <ip>:/<path>/myflash.flar /<somemountpoint>
Since the install bombs out, I can open a console and type the following, and see that it mounts:
mount -F nfs <ip>:/<path> /<somemountpoint>
I can then cd /<somemountpoint> and do an ls and see myflash.flar.
So, what is causing the jumpstart script to try to mount a file, and not a directory?
I was thinking that I may have some weird n1sps 6.0 with n1osp 5.2 problem, so I am rebuilding with pure 6.0. If the problem is not related to that, I will certainly hit it again.
I don't need the mountpoint to change for each client, just the flash archive name. But right now no matter what I do, jumpstart tries to mount the FILE, not the directory where the file resides.
The evidence I have of this is that when I manually try to mount the file, I get the same error showing in the NAS access log as jumpstart when it tries to mount. (see first post this topic)
Edited by: peteziu on Feb 12, 2008 7:07 PM
I have found the following on the machine being jumpstarted:
In it is a value:
Is this the correct format for this value? And if this value is found in this file (and I have manually mounted it from this host), is this the only value on the client that can influence the mount of the flar?
If it is correct and this is just not working for me, could I specify that Solaris mount my NAS (where the flars are) as part of base_config_nfs_mounts, and then specify:
as the variable value for "flash_archive_locations"? (Provided the same local directory and file were found on the OSP server, so that check_client will run.)
...sorry, also in the /tmp/sysid_config.234/profile, is the following:
I know this format, according to the docs should work, but there is another supported format:
archive_location nfs 10.166.56.110:/vol0/blah/blah2/blah3/myflar.flar
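For reference, a minimal flash profile fragment showing both location forms mentioned in this thread, using the path quoted above (which forms a given Solaris release accepts should be confirmed against its JumpStart documentation):

```text
install_type     flash_install
# URL-style form, as it appears in the generated profile:
archive_location nfs://10.166.56.110/vol0/blah/blah2/blah3/myflar.flar
# server:path form, as quoted above:
# archive_location nfs 10.166.56.110:/vol0/blah/blah2/blah3/myflar.flar
```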
Please try something similar on your Solaris box:
mount -F nfs 192.168.1.1:/export/install/flash /mnt
As you reported, this works, and your flash file is located in this dir and is usable.
So perhaps the jumpstart implementation does something like this during the jumpstart sequence:
mount -F nfs 192.168.1.1:/export/install/flash/flash.archive /mnt
Do you see the difference? I did this successfully on my Solaris SPARC box, and an
ls -l /mnt shows me detailed information for that single file!
I think your NAS filer is not able to do this !?
I am not sure if this is RFC-conformant!? Solaris supports this "feature" - I will try to find out whether this routine is used during the jumpstart process...
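For what it's worth, the split jumpstart would need to perform for a NAS that can only mount directories (mount the parent dir, then reference the file inside it) can be sketched in shell; the server, path, and file names below are examples only:

```shell
#!/bin/sh
# Split an NFS archive location of the form server:/path/file.flar into
# the directory to mount and the archive file name (example values only).
loc="192.168.1.1:/export/install/flash/myflash.flar"
server=${loc%%:*}            # server part before the first colon
path=${loc#*:}               # everything after the first colon
dir=$(dirname "$path")       # directory holding the flar
file=$(basename "$path")     # the flar file itself
# A NAS that cannot mount files would still handle:
#   mount -F nfs "$server:$dir" /mnt   (then use /mnt/$file)
echo "$server:$dir $file"
```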
Wow. I didn't think it was possible to mount a file, just a dir. I will try the mount test from a Solaris/SPARC client to a Solaris/SPARC client to see if I can repeat your success, but I believe you.
Thing is, the NAS is a StorEdge (sp?). Perhaps how I make that volume available is causing the issue. I think I have support for the NAS; perhaps I can go in that direction, or...
Maybe I can avoid having to mess w/ the NAS if I mount the NAS on the OSP at bootup and then link it to /export/install/flash?
Problem is, the number of different flash archives I have is preventing me from physically placing them onto the OSP's /export/install/flash.
This is a great help in debugging, thanks so much for your contributions.
I avoided the NAS for now and have tried http. I cheated and mounted the NAS on the ma's web server (tomcat's "images" dir). I can hit the file, but the browser starts to display the file inline.
...but all I get is "Invalid HTTP headers were returned from the server" when jumpstart tries to access the flar. It does NOT say anything else like archive is too big (saw that in other old postings involving wanboot), just "invalid" headers. I checked tomcat's conf for a max file download setting but didn't see anything.
So I played around w/ the mime type, making it application/flar. The browser now asked if I would like to download the file, but I get the same error from jumpstart. What should the mime type of .flar be?
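There is no registered MIME type for .flar; application/octet-stream is the generic type for opaque binary data. If Tomcat is serving the file, a mapping would go in conf/web.xml. This fragment is a hypothetical sketch, not a confirmed fix for the jumpstart error:

```xml
<!-- Hypothetical addition to Tomcat's conf/web.xml: serve .flar files
     as opaque binary data rather than text -->
<mime-mapping>
    <extension>flar</extension>
    <mime-type>application/octet-stream</mime-type>
</mime-mapping>
```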
I temporarily modified check_client in the flash product to output the whole header, and it shows 200 OK and the correct file size.
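One way to eyeball the fields being checked is to parse a captured response; the response text below is a fabricated sample for illustration, not real server output:

```shell
#!/bin/sh
# Extract the status code and Content-Length from a captured HTTP
# response header (the response here is a made-up sample).
response='HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Length: 1073741824'
status=$(printf '%s\n' "$response" | head -1 | awk '{print $2}')
length=$(printf '%s\n' "$response" | awk -F': ' 'tolower($1)=="content-length" {print $2}')
echo "status=$status length=$length"
```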
Edited by: peteziu on Feb 22, 2008 11:03 PM
FYI... I copied a flar to the local disk of the n1osp/jet server, shared it in dfstab, and the flash jumpstart worked. But this is no good to me; I have 1/2 TB of images, so I must serve them from a NAS.
Verified with Sun support that the particular NAS I am using does not support the mounting of files. Using iSCSI, I was able to NFS-share an iSCSI mountpoint backed by the NAS.
Hi, I have met same issue as you.
What kind of NAS do you use? What i use is NF800.
I heard back from Sun that currently (3 months ago) they do not support mounting a file, but that they may in the future. If your mass storage device supports iSCSI, you can create a disk that Solaris can see from within the format utility, and thus you can NFS-share the remote iSCSI disk.
Edited by: peteziu on May 19, 2008 2:23 PM
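For anyone following along, the iSCSI route described above would look roughly like this on Solaris 10; the target name, IP, and device names are placeholders, and the exact options should be verified against iscsiadm(1M) and share(1M):

```text
iscsiadm add static-config iqn.1992-08.com.example:flars,192.168.1.50:3260
iscsiadm modify discovery --static enable
devfsadm -i iscsi                    # create device nodes for the new LUN
format                               # label the new disk, e.g. c2t1d0
newfs /dev/rdsk/c2t1d0s2             # build a UFS filesystem on it
mount /dev/dsk/c2t1d0s2 /export/flars
share -F nfs -o ro,anon=0 /export/flars
```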