Category Archives: ESXi

Posts about VMware ESXi

How to Retrieve vSAN License Key from an ESXi Cluster using PowerCLI

vSAN is a software-defined storage solution from VMware that is integrated with the vSphere platform. To use vSAN, you need a valid license key assigned to your ESXi hosts. In this blog post, we'll show you how to retrieve the vSAN license key from an ESXi cluster using PowerCLI.


# Assuming an already established connection to your VCSA (Connect-VIServer).

$clust = Read-Host "Enter Cluster Name"
$cluster = Get-Cluster $clust

$serviceinstance = Get-View ServiceInstance
$LicManRef = $serviceinstance.Content.LicenseManager
$LicManView = Get-View $LicManRef
$licenseassetmanager = Get-View $LicManView.LicenseAssignmentManager
# Query the licenses assigned to the cluster's managed object reference
$licInfo = $licenseassetmanager.QueryAssignedLicenses($cluster.ExtensionData.MoRef.Value)

foreach ($obj in $licInfo){
    if ($obj.EntityDisplayName -eq $cluster.Name){
        $key = $obj.AssignedLicense.LicenseKey
        Write-Host "$($cluster.Name) has vSAN $key"
    }
}


Create a vMotion TCP/IP Stack when Making a vMotion VMkernel Interface with PowerCLI

This series of commands will set the vMotion VMkernel interface to use the dedicated vMotion TCP/IP stack, with its own default gateway, for Layer 3 vMotion.

After you connect to the target VCSA (vCenter), update the variables below.



# Update the Variables Below
$esxi = "ESXi hostname"
$vdsswitchname = "DVS Switch Name"
$vmotionportgroup = "Portgroup to use for vMotion"
$vmotionIP = "IP address of the vMotion VMkernel"
$vmotiongateway = "The vMotion Default Gateway"
$VmotionSubnetMask = "vMotion Subnet Mask"
# End of Variables

$vds = Get-VDSwitch $vdsswitchname
$hostobject = Get-VMHost $esxi
$vmostack = Get-VMHostNetworkStack -VMHost $hostobject | Where-Object {$_.ID -eq "vmotion"}
New-VMHostNetworkAdapter -VirtualSwitch $vds -VMHost $hostobject -PortGroup $vmotionportgroup -IP $vmotionIP -SubnetMask $VmotionSubnetMask -NetworkStack $vmostack -Confirm:$false
$vmostackconfig = New-Object VMware.Vim.HostNetworkConfig
$vmostackconfig.NetStackSpec = New-Object VMware.Vim.HostNetworkConfigNetStackSpec[] (1)
$vmostackconfig.NetStackSpec[0] = New-Object VMware.Vim.HostNetworkConfigNetStackSpec
$vmostackconfig.NetStackSpec[0].NetStackInstance = New-Object VMware.Vim.HostNetStackInstance
$vmostackconfig.NetStackSpec[0].NetStackInstance.RequestedMaxNumberOfConnections = 11000
$vmostackconfig.NetStackSpec[0].NetStackInstance.CongestionControlAlgorithm = 'newreno'
$vmostackconfig.NetStackSpec[0].NetStackInstance.IpRouteConfig = New-Object VMware.Vim.HostIpRouteConfig
$vmostackconfig.NetStackSpec[0].NetStackInstance.IpRouteConfig.DefaultGateway = $vmotiongateway
$vmostackconfig.NetStackSpec[0].NetStackInstance.Key = 'vmotion'
$vmostackconfig.NetStackSpec[0].Operation = 'edit'
$changemode = 'modify'
$_this = Get-View -Id $hostobject.NetworkInfo.Id
$_this.UpdateNetworkConfig($vmostackconfig, $changemode)

Uninstall VIBs on ESXi via PowerCLI

I ran across a need to uninstall a VIB from ESXi after it was installed. I didn't want to log in to each ESXi host via SSH to perform the ESXCLI command, so here is how you can do it via PowerCLI.

The equivalent ESXi command line would be:


esxcli software vib remove -n=usbcore-usb --dry-run

PowerCLI Code


$VMhost = "ESXi-Hostname"
$VIBNAME = "usbcore-usb"

$esxcli = Get-EsxCli -VMHost $VMhost -V2
$esxcliRemoveVibArgs = $esxcli.software.vib.remove.CreateArgs()
$esxcliRemoveVibArgs.dryrun = $true  # Change this to $false to actually perform the uninstall

$vib = $esxcli.software.vib.list.Invoke() | Where-Object {$_.Name -match $VIBNAME}

$esxcliRemoveVibArgs.vibname = $vib.Name
$esxcli.software.vib.remove.Invoke($esxcliRemoveVibArgs)

vmkping via PowerCLI/ESXCLI

When troubleshooting an ESXi host, one of the most common tasks is testing connectivity. A tool used from the console or an SSH session on the ESXi host is vmkping. Use cases include testing connectivity from a VMkernel interface to other servers in the cluster, such as vMotion or vSAN connectivity.

VMware KB article 1003728 shows use cases for vmkping.

There are situations where you may not have access to the root account to SSH into the box. There is still a way to troubleshoot VMkernel network connectivity by using PowerCLI and ESXCLI.


$esxcli = Get-EsxCli -VMHost (Get-VMHost "testesxihost") -V2
$params = $esxcli.network.diag.ping.CreateArgs()
$params.host = '10.1.1.2'
$params.interface  = 'vmk0'
$params.size = '1472' #use 1472 for 1500 MTU or 8972 for 9000 MTU (VMware uses these values on MTU pings on ESXi)
$res = $esxcli.network.diag.ping.Invoke($params)
$res.summary

You will then get output like this:


Duplicated : 0
HostAddr : 10.1.1.2
PacketLost : 0
Received : 3
RoundtripAvg : 49
RoundtripAvgMS : 0
RoundtripMax : 61
RoundtripMaxMS : 0
RoundtripMin : 42
RoundtripMinMS : 0
Transmitted : 3
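The payload sizes in the comment above (1472 for a 1500 MTU, 8972 for a 9000 MTU) come from subtracting the 20-byte IPv4 header and 8-byte ICMP header from the interface MTU. A quick sketch of that arithmetic (the function name here is just for illustration):

```powershell
# Payload size = MTU - 20 bytes (IPv4 header) - 8 bytes (ICMP header)
function Get-PingPayloadSize {
    param([int]$Mtu)
    $Mtu - 20 - 8
}

Get-PingPayloadSize -Mtu 1500   # 1472 for standard frames
Get-PingPayloadSize -Mtu 9000   # 8972 for jumbo frames
```

Sending the full payload size with the DF bit set is how you verify the MTU is configured end to end, since an oversized packet that cannot fragment will simply be dropped.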

You can see the available options for esxcli network.diag.ping by inspecting the arguments object:


PS C:\> $params

Name                           Value
----                           -----
host                           Unset, ([string], optional)
wait                           Unset, ([string], optional)
df                             Unset, ([boolean], optional)
interval                       Unset, ([string], optional)
ttl                            Unset, ([long], optional)
debug                          Unset, ([boolean], optional)
nexthop                        Unset, ([string], optional)
count                          Unset, ([long], optional)
netstack                       Unset, ([string], optional)
size                           Unset, ([long], optional)
ipv4                           Unset, ([boolean], optional)
ipv6                           Unset, ([boolean], optional)
interface                      Unset, ([string], optional)

For help:

PS C:\> $esxcli.network.diag.ping.Help()



vim.EsxCLI.network.diag.ping
-----------------------------------------------------------------------------------------------------------------------
Send ICMP echo requests to network hosts.
Param
-----------------------------------------------------------------------------------------------------------------------
- count           | Specify the number of packets to send.                                                          
- debug           | VMKPing debug mode.                                                                             
- df              | Set DF bit on IPv4 packets.                                                                     
- host            | Specify the host to send packets to. This parameter is required when not executing ping in debug mode (-D)                                                                                      
- interface       | Specify the outgoing interface.                                                                 
- interval        | Set the interval for sending packets in seconds.                                                
- ipv4            | Ping with ICMPv4 echo requests.                                                                 
- ipv6            | Ping with ICMPv6 echo requests.                                                                 
- netstack        | Specify the TCP/IP netstack which the interface resides on                                      
- nexthop         | Override the system's default route selection, in dotted quad notation. (IPv4 only. Requires interface option)
- size            | Set the payload size of the packets to send.                                                    
- ttl             | Set IPv4 Time To Live or IPv6 Hop Limit                                                         
- wait            | Set the timeout to wait if no responses are received in seconds.

Creating vSAN cluster with over 32 hosts

So I was building out a 44-node vSAN cluster last week and ran into an issue where 12 of the ESXi hosts were in their own network partition group, separate from the other 32. I had no issues with the vSAN network; I was able to vmkping every server, so there was no communication issue between any of the hosts in the cluster via the vSAN kernel. In most cases, vSAN network partitioning occurs when there is an issue with the vSAN kernel communicating with other hosts.

After several attempts at removing the disk groups, removing the vSAN kernel, and moving hosts out of the cluster and away from the DVS and back, I had no luck. I knew, based on VMware's supported maximums, that I could create a 64-node vSAN cluster. At a loss after several hours of troubleshooting and Google searching, I opened an SR with VMware. After about an hour of troubleshooting, the VMware engineer was also at a loss, until he found an article indicating that you must set some advanced settings on the ESXi hosts in order to scale above 32 nodes. Once we made those settings and rebooted the hosts, we had a single network partition and our issue was resolved.

VMware KB article 2110081 shows how to perform the task via ESXCLI while logged in as root over SSH, but does not show how to do it via PowerCLI.

$vcenter = Read-Host "Enter vCenter to connect to"

Connect-VIServer $vcenter

$cluster = Read-Host "Cluster Name"

# Note: $host is a reserved automatic variable in PowerShell, so use a different name
foreach ($esx in (Get-VMHost -Location $cluster)){
    $esx | Get-AdvancedSetting -Name "VSAN.goto11" | Set-AdvancedSetting -Value 1 -Confirm:$false
    $esx | Get-AdvancedSetting -Name "Net.TcpipHeapMax" | Set-AdvancedSetting -Value 1536 -Confirm:$false
}

Then reboot your hosts.
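The reboot can also be scripted from PowerCLI. A minimal sketch, assuming DRS can evacuate each host and reusing the `$cluster` name prompted for above, that cycles hosts one at a time so the vSAN cluster stays online (untested against a live cluster; adjust timings and evacuation options for production):

```powershell
foreach ($esx in (Get-VMHost -Location $cluster)) {
    # Put the host in maintenance mode before rebooting (evacuate vSAN data/VMs as needed)
    Set-VMHost -VMHost $esx -State Maintenance -Confirm:$false | Out-Null
    Restart-VMHost -VMHost $esx -Confirm:$false | Out-Null

    # Wait for the host to reconnect (it comes back in Maintenance state) before moving on
    do {
        Start-Sleep -Seconds 30
        $esx = Get-VMHost -Name $esx.Name
    } while ($esx.ConnectionState -ne 'Maintenance')

    Set-VMHost -VMHost $esx -State Connected -Confirm:$false | Out-Null
}
```

Rebooting one host at a time keeps the number of absent fault domains at one, which matters on a vSAN cluster where taking down several hosts at once can violate storage policy compliance.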

That’s it!

You would think that supported maximums would work out of the box, but according to VMware, they did not want smaller vSAN clusters to pay the memory overhead that larger vSAN clusters require to run efficiently.