OK, but that wasn't really my point. Whether it's a NIC, an HBA, or a SW initiator, I would like to be able to debug on the OmniOS end by seeing what COMSTAR is doing.
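For what it's worth, a few read-only commands for watching COMSTAR from the OmniOS side. This is just a sketch, assuming the stmf and iSCSI target services are already configured; the LU GUID is a placeholder:

    # List logical units and their state
    stmfadm list-lu -v
    # List targets and the initiators logged into them
    stmfadm list-target -v
    # Show host groups, and the view entries for a given LU
    stmfadm list-hg -v
    stmfadm list-view -l <LU-GUID>
    # iSCSI-specific target details
    itadm list-target -v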
So I set up 3 zvols shared via COMSTAR. I then backed up the COMSTAR config file and moved it off the root disk. I was messing around and decided to restore the config. The pull-down menu for this doesn't seem to accept a file to restore (i.e. it's got some file hardcoded with the current...
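If the GUI won't take an arbitrary file, the COMSTAR configuration lives in SMF, so a CLI round-trip may be an option. A hedged sketch; the backup path is a placeholder, and I'd test this on a non-production box first:

    # Back up the COMSTAR config out of SMF
    svccfg export -a stmf > /var/tmp/comstar-backup.cfg

    # Restore: stop the service, import the saved config, restart
    svcadm disable stmf
    svccfg import /var/tmp/comstar-backup.cfg
    svcadm enable stmf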
I first tried the ISO upgrade method, but that failed as expected (some issue where it doesn't play well with updates done via the online method). So I created a temporary baseline called 'upgrade from 6.7 to 7.0', lol. Applied it to all 3 machines with no issues. Well, there was one on all...
So I deployed a new NAS, which has been serving up a datastore to vSphere. One of its functions was a repo of ISO files. Another was acting as the 2nd heartbeat datastore for HA. Unnoticed by me, the permissions on the NFS share's root were wrong, and the vSphere hosts were unable to write heartbeat files...
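For anyone chasing the same symptom, the checks would look something like this; the dataset name, path, and subnet are placeholders:

    # See how the dataset is currently exported
    zfs get sharenfs tank/vmware

    # Give the ESXi hosts read/write plus root access on the export
    zfs set sharenfs="rw=@192.168.10.0/24,root=@192.168.10.0/24" tank/vmware

    # Verify the share root itself is writable (this is what bit me)
    ls -ld /tank/vmware
    chmod 755 /tank/vmware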
I've looked high and low and can't find an answer; hopefully I just missed it. I have 2 stacked switches. My 3 ESXi hosts each have Ethernet connections to them, but not using LACP, just failover order (i.e. NIC teaming?). Is it possible to do this for my OmniOS storage appliance? Looking at...
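On illumos/OmniOS, the usual way to get failover without LACP is IPMP rather than a link aggregation. A rough sketch, with the NIC names (igb0/igb1) and the address as placeholders:

    # Plumb IP interfaces on the two physical NICs
    ipadm create-if igb0
    ipadm create-if igb1

    # Group them into an IPMP interface
    ipadm create-ipmp ipmp0
    ipadm add-ipmp -i igb0 -i igb1 ipmp0

    # Put the storage address on the group; it moves between NICs on failure
    ipadm create-addr -T static -a 192.168.10.5/24 ipmp0/v4

With one NIC cabled to each stacked switch, this should give roughly the same failover-order behavior the ESXi hosts have, without needing LACP on the switches.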
Ugh ugh ugh. No time to experiment - it isn't just that pool - nothing is working and guests are failing, so time to do a hard reboot. Damn!!! Even though it said the JBOD was stuck, apparently I/O involving the SSD RAID1 (serving vSphere) was also hosed.
Annoying situation just now. 4x2 RAID10 of spinners, with 1 SSD as a SLOG device. This is running on the latest OmniOS. I meant to plug in another SSD to do hotplug backups, but pulled the SLOG device by mistake. Plugged it back in, but the pool is stuck:
NAME STATE...
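In case it helps anyone else, a hedged sketch of what one might try before resorting to a reboot (pool and device names are placeholders):

    # See exactly which vdev the pool is complaining about
    zpool status -v tank

    # Try to bring the re-inserted SLOG back online and clear the errors
    zpool online tank c4t1d0
    zpool clear tank

    # A pool can run without its log device, so removing it is an option too
    zpool remove tank c4t1d0

Whether clearing works likely depends on the pool's failmode setting (zpool get failmode tank); with the default of wait, I/O hangs until the device comes back or the errors are cleared.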
I remember back in the early 90's building a server running Linux 0.99. It initially had 16MB (not a typo). Started needing more RAM. The best deal I could get for 16MB more? $300 or so.