Thursday 14 May 2015

Nexenta 3.1.6-FP3/4.0.3 and vSphere 5.5

Systems check:
Nexenta 3.1.6-FP3
VMware ESXi 5.5.0

All LUNs are VMFS5 formatted

When attempting to upgrade to Nexenta 4.0.3 the other night I ran into a number of issues that ended with me carrying out a rollback to 3.1.6-FP3. A couple of notes from the problems I encountered:

1. As our log drive was a DDR card, we had to remove the device from the pool, as the driver had to be manually reinstalled once Nexenta had been updated to 4.0.3.
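
For reference, removing and re-adding a separate log device can also be done with the underlying ZFS commands from a root shell on the appliance. This is only a rough sketch - the pool name 'tank' and the device name 'c1t2d0' below are placeholders for your own values:

    # Check which device is currently acting as the log (slog) device
    zpool status tank

    # Remove the log device from the pool before the upgrade
    zpool remove tank c1t2d0

    # Once the driver has been reinstalled, add the log device back
    zpool add tank log c1t2d0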

2. After updating to 3.1.6-FP3, all of the mappings in SCSI Target Plus had been deleted, so when I rescanned for storage it dropped all the connections.

3. 3.1.6-FP3 still caused me issues, as none of our datastores (set up in 3.1.5) were visible despite the storage being mounted. I forced the datastores to be persistently mounted through the command line, but obviously this was just a workaround rather than a solution. I did this by running the following commands:

esxcfg-volume -l (this lists the VMFS volumes detected on the host - I copied the UUID displayed for each LUN for the next step)
esxcfg-volume -M UUID (this persistently mounted the datastore on the host)


These commands had to be carried out for each datastore on each host - not ideal!
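
To save some typing, the two steps can be strung together in a small loop on each host. This is only a sketch - it assumes the UUID appears on the 'VMFS UUID/label' line of the esxcfg-volume -l output, so check the output format on your own hosts before relying on it:

    # Persistently mount every VMFS volume the host has detected (run on each host)
    for uuid in $(esxcfg-volume -l | grep 'VMFS UUID/label' | awk '{print $3}' | cut -d/ -f1); do
        esxcfg-volume -M "$uuid"
    done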

4. After further investigation it appears that Nexenta have turned off Hardware Acceleration (VAAI), not only in 4.0 but also in 3.1.6-FP3. I then connected to each host via the vSphere Client and turned off the three settings related to this:

On the selected host go to the Configuration tab, then under Software go to Advanced Settings.
First go to DataMover and set both HardwareAcceleratedInit and HardwareAcceleratedMove to 0.

Then go to VMFS3 and set HardwareAcceleratedLocking to 0.

Changing these settings does not require a reboot.
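
If you would rather not click through the GUI on every host, the same three settings can be changed from an SSH session with esxcli - the option paths below match the Advanced Settings names above, but double-check them against your own hosts before running anything:

    # Disable the VAAI primitives on the host (1 = enabled, 0 = disabled)
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
    esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0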

Once this is done, you also need to turn off Hardware Acceleration on the LUNs themselves.

You only need to run the command below on one of the hosts that has the datastore attached. Running a rescan of datastores after making this change will update the datastore on all the hosts, allowing you to add the storage.

As per VMware's KB article HERE, follow the steps below to disable ATSOnly on the LUN. Be aware that the command appears to be case-sensitive - I typed ATSonly rather than ATSOnly and it failed to execute:


  1. Connect to one of the hosts sharing the VMFS5 datastore with an SSH session.
  2. Run the following command:

    vmkfstools --configATSOnly 0 /vmfs/devices/disks/device-ID:Partition

    Where:

    device-ID is the NAA ID of the LUN on which the VMFS5 datastore was created.
    Partition is the partition number on which the VMFS5 datastore was created. This is usually 1.

    For example:

    vmkfstools --configATSOnly 0 /vmfs/devices/disks/naa.6006016055711d00cef95e65664ee011:1

    Note: It is sufficient to run this command on one of the hosts sharing the VMFS5 datastore. Other hosts automatically recognize the change.
  3. Run the following command to rescan for datastores:

    esxcli storage filesystem rescan
  4. The VMFS5 datastore should now mount successfully.
Alternatively, rather than running the command at step 3, you can just rescan the datastores from within the vSphere Client.
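
If you want to check whether a datastore is still flagged as ATS-only, querying the volume with vmkfstools should include a 'Mode:' line showing either 'public' or 'public ATS-only' (the datastore name below is a placeholder):

    # Check the current mode of the datastore
    vmkfstools -Ph -v1 /vmfs/volumes/datastore-name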

As a further note on disabling ATSOnly: if you are using datastore heartbeating for HA, remember to turn this off for the datastore you are altering, otherwise the disk will always show as being in use, even with its VMs powered off.
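
Related to point 4 above, you can also check what VAAI (Hardware Acceleration) support a device is reporting to the host, which is a quick way of confirming whether the array still advertises it - the example below reuses the NAA ID from the vmkfstools command earlier:

    # Show the VAAI status reported for a device
    esxcli storage core device vaai status get -d naa.6006016055711d00cef95e65664ee011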
