Recent content by shanester

  1.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Thanks... I had read that thread yesterday, not realizing the OmniOS 151034 requirements. From a logging perspective, making changes via Napp-It shows the commands/processing at the bottom of the screen. Is this logged elsewhere?
  2.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Running omnios-r151030-f1189fc02c with Napp-It 20.01a6 Pro, I am trying to modify SMB settings to get Time Machine via SMB to work. When using the GUI to change the max_protocol property from "max_protocol=" to "max_protocol=3", nothing occurs. See below: Current SMB properties: sharectl get smb...
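For reference, the same change can be made from a shell rather than the napp-it GUI, using the illumos sharectl and svcadm tools; a minimal sketch, assuming a stock OmniOS kernel SMB server:

```shell
# Show the current SMB server properties
sharectl get smb

# Set the maximum negotiated SMB protocol version to 3
sharectl set -p max_protocol=3 smb

# Restart the kernel SMB server so the new setting takes effect
svcadm restart svc:/network/smb/server

# Verify the property stuck
sharectl get -p max_protocol smb
```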
  3.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    As always, thank you for your input. I have determined that I had a bad memory DIMM. After removing the DIMM and running a couple of clear/scrub cycles (only a few corrupt files), I am happy to say that the devices are all online again.
  4.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    That was my thought... I pulled the drive. Let's see what happens.
  5.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    The zpool clear completed but did not clear the errors. I am stuck and open to further suggestions.
  6.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    The drive wasn't faulted, just hard errors (S:0 H:136 T:0), so I wanted to replace it with the spare. The swap never completed and the vdev went into a degraded state, "too many errors". I will attempt to clear the errors and try again.
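The clear-and-retry sequence described here would look roughly like the following; the pool name `tank` is a placeholder, while the device IDs are the ones quoted in these posts:

```shell
# Reset the pool's error counters (pool name is a placeholder)
zpool clear tank

# Confirm the vdev state before retrying
zpool status -v tank

# Retry replacing the failing disk with the hot spare
zpool replace tank c0t5000CCA36AC2A785d0 c0t5000CCA36AD05EA2d0
```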
  7.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    "bad" disk = c0t5000CCA36AC2A785d0, spare = c0t5000CCA36AD05EA2d0. I want to remove the bad disk and replace it with the spare. As I mentioned earlier, I ran zpool replace c0t5000CCA36AC2A785d0 c0t5000CCA36AD05EA2d0, which 'failed'. So if I understand what you're saying, it is to remove the spare...
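When a spare is wedged in a half-finished replace, the usual approach is to detach the spare from the pool first and then retry; a sketch, with a placeholder pool name:

```shell
# Detach the spare from the interrupted replace (pool name is a placeholder)
zpool detach tank c0t5000CCA36AD05EA2d0

# Then redo the replacement of the bad disk with the now-free spare
zpool replace tank c0t5000CCA36AC2A785d0 c0t5000CCA36AD05EA2d0

# Watch the resilver progress
zpool status -v tank
```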
  8.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I would like some direction on the next steps to fix my pool. Prior to the screenshot below, I did not have any pool errors. I had a disk (A785 - see below) that was showing hard errors, so I initiated a replace with the spare. The swap did not complete and the pool was in a degraded status. A...
  9.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    OMG, I feel like a total noob... it has been so long, that is exactly what I forgot to do!!
  10.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I did a fresh installation of OmniOS r15130 after a successful upgrade of ESXi 6.7u2. After installing napp-it, I am unable to import my existing ZFS pool. What am I doing wrong? When I shut down the new VM and go back to my old VM running r151018, I can import the existing ZFS pool.
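A pool that was last active on another host or VM typically needs a forced import on the new install; a minimal sketch, with a placeholder pool name:

```shell
# List pools that are visible to this host but not yet imported
zpool import

# Force the import if the pool still looks active on the old r151018 VM
zpool import -f tank
```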
  11.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Due to the $$ at this time, and needing additional storage capacity with limited chassis space, I pulled the trigger on (7) 3TB 512B drives, maxing out the slots in my 4220. At this point, what would be the benefit of creating a new pool with ashift=12, other than being able to replace the drives...
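Whether an existing vdev was created with 512B sectors (ashift=9) or 4K sectors (ashift=12) can be checked with zdb; a sketch against a placeholder pool name:

```shell
# Dump the cached pool configuration and filter for the per-vdev ashift
# (ashift: 9 = 512B sectors, ashift: 12 = 4K sectors)
zdb -C tank | grep ashift
```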
  12.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    gea, thanks for your input. I 'wanted' to build an entirely new storage device, but funds are tight right now. My thought was to purchase 3TB drives (HUA723030ALA640) that are 512B to avoid the performance degradation. This would provide me with approximately an additional 12TB until I am able to build a...
  13.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Question regarding the addition of a vdev to my config: I currently have a Norco 4220 / SM X9SCM-F and two M1015 HBAs. I have two raidz2 vdevs, each with six 2TB drives, and have two 2TB spares. I was thinking of adding a new raidz2 vdev with six 3TB (512B) drives, replacing the current 2TB spares...
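For reference, extending an existing pool with another six-disk raidz2 top-level vdev is a single command; a sketch with placeholder pool and device names:

```shell
# Append a new raidz2 vdev of six disks to the existing pool
# (pool and device names below are placeholders)
zpool add tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0

# Confirm the new vdev shows up as a second top-level raidz2
zpool status tank
```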
  14.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    The zpool clear worked for a bit ;) I agree that this is a hardware issue. I did validate/reconnect/replace the SFF-8087 last week. A couple years back I replaced the cables due to similar circumstances, only to learn (from a post somewhere) about the undocumented or incorrect dual power cabling...