Hi,
Can I increase the CPU and memory of a VM that was deployed via vRA? Can I send it a request from the catalog?
Thank you
We have an existing cluster with very old CPUs (Sandy Bridge) that does not currently have EVC enabled. We want to expand this cluster with brand-new Cascade Lake hosts. Our vCenter runs on a separate management cluster, and we are on 6.7.
I want to make sure I understand the steps required for this work: when the VMs would need to be rebooted or not, and when vMotions would fail or not.
1. Set the existing cluster to EVC "Sandy Bridge". At this point the VMs should not need a reboot as they will all have been started on a Sandy Bridge host, and the instruction set the VM is using should not be higher than Sandy Bridge.
2. The cluster is now at EVC level "Sandy Bridge", and all VMs in the cluster will be at Sandy Bridge. vMotions should continue to occur normally.
3. Introduce the new Cascade Lake hosts into the cluster. The new hosts will inherit the cluster EVC mode of Sandy Bridge. vMotions for all VMs will continue to occur normally, and VMs can vMotion to the new hosts as well as the old hosts.
4. We can vMotion all VMs to the new hosts (assuming we have capacity, which we will), slowly turning down the old Sandy Bridge CPU hosts.
5. Once all VMs are on the new hosts, and all the old hosts are removed from the cluster, we can increase the EVC level of the cluster to Skylake (can't do Cascade Lake yet, as 6.7 doesn't support it). This should not incur an outage as all the VMs will be at Sandy Bridge EVC still.
I believe all the above is correct. However, I am a bit confused about the next part.
Once the cluster EVC level is set to Skylake, whenever a VM goes through a full power cycle it will take on the Skylake EVC level. However, before any individual VM power-cycles, is it still able to vMotion once the cluster (and therefore all the hosts) is at the Skylake level? Basically, can a VM with EVC level Sandy Bridge vMotion to a host at EVC level Skylake?
Hopefully that was all clear. Thanks for any help.
Good day.
Several years ago, back on ESXi 5.0-5.1, I struggled to find the "right" versions of the LSI CIM SMIS provider, and not every MegaRAID Storage Manager build worked either...
Now I want to set up roughly the same thing on ESXi 6.7.
Unfortunately, it is not working.
I am using a Supermicro AOC-S3108L-H8iR card.
I tried the "latest" drivers found via the HCL search - version 7.705.
On the Broadcom site (formerly Avago, formerly LSI) I found newer ones, 7.707, as a link to the VMware site.
There I also found the latest SMIS provider, 00.71.V0.04.
Storage monitoring is not entirely healthy either...
http://alpha.fm/pub/capture_7902611.png
For MSM I tried the "old, proven" version (which I still use with ESXi 5.1) and the latest one from the site.
I also tried slp_helper, which was written in PHP long ago but still works great with the old versions...
MSM does not see the adapter at all!
I am asking the community for help: has anyone gotten LSI adapters running on ESXi 6.7 and been able to manage them?
I've got a small system with only two hosts but multiple VMs. I created a vCenter Server Appliance on Workstation Pro 15.5 and it functions great. But for some reason, on boot-up it doesn't go to the standard console; instead it dumps me at a CLI login prompt (presumably in Linux).
Is there anything I can do to get it to always go to the console?
Thanks in advance for any suggestions...
I'm running Windows 10 in VMware Fusion. I have Windows 10 "Always show all icons in the notification area" enabled.
I enabled Virtual Machine > Settings > Advanced > Pass power status to VM, then rebooted; however, the power icon doesn't appear in the Windows 10 system tray.
In "Turn system icons on or off", Power is grayed out -- so it seems that Windows 10 sees the VM as a desktop and, therefore, won't display power features. Is there a workaround for this?
Hello,
I migrated from traditional virtual switches to virtual distributed switches.
Instead of using the migration wizard, I left the virtual switches in place and manually reconfigured each VM to use the new virtual distributed port group.
This worked very nicely for most of the VMs, but now I have several VMs that show they're bound to both the traditional port group and the virtual distributed port group.
Just a quick example...on the traditional virtual switch, I had a port group called 'vlan2'. On the new vDS, I have a vDPG called 'vDPG-vlan0002'. I edited the properties of several VMs, then changed the network interface from 'vlan2' to 'vDPG-vlan0002' and saved changes.
After clicking save, several VMs show they are on both:
vlan2
vDPG-vlan0002
Even if they only have a single network interface. (See attached screenshot for an example)
I downloaded the VMX file to examine the contents and don't see any references to 'vlan2' - only to 'vDPG-vlan0002'. I tried removing the VMs from inventory and adding them back. I've even removed the network interface from the VM altogether, saved the changes, then added a new network interface with the desired vDPG and I get the same result.
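For anyone who wants to repeat the VMX check, here is a sketch with hypothetical contents. A NIC attached to a vDS port group is referenced in the VMX by `ethernet0.dvs.*` keys rather than a plain port-group name, so a stale 'vlan2' string should not appear at all:

```shell
# Hypothetical VMX fragment for a NIC attached to a vDS port group.
cat > /tmp/example.vmx <<'EOF'
ethernet0.present = "TRUE"
ethernet0.dvs.switchId = "50 2a 00 00 00 00 00 00-00 00 00 00 00 00 00 00"
ethernet0.dvs.portgroupId = "dvportgroup-101"
EOF

# Confirm no references to the old standard port group remain.
grep -i 'vlan2' /tmp/example.vmx || echo "no vlan2 references in the VMX"
```

In my case the VMX is clean like this, which is why I suspect the double listing comes from vCenter's view of the VM rather than from the VM's own configuration.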
Any thoughts as to what might be going on here?
Hi
We are using Workstation Player on Linux in our computer labs.
We would like to be able to disable the shared folder functionality in the Player, to prevent end users mapping folders (especially during restricted access sessions)
Is it possible to accomplish this with a config setting for the Player?
Thanks
Craig Howie
School of Computer Science and Engineering
UNSW Sydney
I have 4 servers that I have updated to ESXi 6.7.0 Update 3 (Build 14320388), and on all 4 servers the web interface now shows a timeout error when I attempt to log in:
I can access the page with no problems. I start by logging in:
I then get an error saying "Connection to ESXi host timed out:"
If I try to refresh the page after this, I just get the VMware logo:
Sometimes the page will load, but I see this in the title of the tab and I am not able to do anything inside the page:
I am experiencing this problem on two Dell R620 servers that are hosted in a data center and two Dell R420 servers that I have at home. They are not on the same network, yet they are both exhibiting the same exact issue. I have tried Chrome, Chrome Incognito, Microsoft Edge, and Opera. If I reboot the servers, I am able to temporarily access the web interface, but I eventually start getting these timeouts again. I am not able to SSH to the servers because SSH gets disabled on each reboot. I have also found that if I connect to the management console via iDRAC, I can press F2 and get prompted for credentials, but after accepting the credentials, it just hangs and is no longer responsive to keyboard commands:
Does anybody have any suggestions on how to fix this? The fact that it is happening on 4 different servers on 2 different networks leads me to believe there is a larger problem here and not something localized to my installs.
Hello,
Today I found one more bug in this product; let me explain.
The DHCP agents were handing out IP addresses to my instances, but the instances never picked them up; a dhclient command never saw any DHCPOFFERs from the DHCP agents.
First, list the pod names of your OpenStack deployment:
kubectl get pods --all-namespaces | grep dhcp
Then I looked at the DHCP agent logs with this command on the VIO controller:
kubectl logs -f --namespace=openstack neutron-dhcp-agent-default-***
NB: *** is a unique suffix.
This message was being spammed:
2019-12-03 17:14:56.735 49 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/dhcp_agent.ini', '--config-file', '/var/lib/neutron/dhcp/dhcp_override_mac.ini', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpNauu1q/privsep.sock']
Changing password for root.
2019-12-03 17:14:56.752 49 WARNING oslo.privsep.daemon [-] privsep log: sudo: Account or password is expired, reset your password and try again
2019-12-03 17:14:56.752 49 WARNING oslo.privsep.daemon [-] privsep log: sudo: no tty present and no askpass program specified
2019-12-03 17:14:56.752 49 WARNING oslo.privsep.daemon [-] privsep log: sudo: unable to change expired password: Authentication token manipulation error
2019-12-03 17:14:56.758 49 CRITICAL oslo.privsep.daemon [-] privsep helper command exited non-zero (1)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent [-] Unable to enable dhcp for 576b74f4-f765-4955-a1f9-e7681bbef3a6.: FailedToDropPrivileges: privsep helper command exited non-zero (1)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 157, in call_driver
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent getattr(driver, action)(**action_kwargs)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 218, in enable
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent common_utils.wait_until_true(self._enable)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 691, in wait_until_true
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent while not predicate():
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 229, in _enable
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent interface_name = self.device_manager.setup(self.network)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 1506, in setup
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent ip_lib.IPWrapper().ensure_namespace(network.namespace)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 236, in ensure_namespace
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent if not self.netns.exists(name):
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 797, in exists
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent return network_namespace_exists(name)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 1002, in network_namespace_exists
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent output = list_network_namespaces(**kwargs)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 991, in list_network_namespaces
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent return privileged.list_netns(**kwargs)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 240, in _wrap
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent self.start()
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 251, in start
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent channel = daemon.RootwrapClientChannel(context=self)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 328, in __init__
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent raise FailedToDropPrivileges(msg)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent FailedToDropPrivileges: privsep helper command exited non-zero (1)
2019-12-03 17:14:56.759 49 ERROR neutron.agent.dhcp.agent
The important parts are in bold, and the most important is:
sudo: Account or password is expired, reset your password and try again
Let's get rid of this problem.
Open a shell on all agents:
kubectl exec -it --namespace=openstack neutron-dhcp-agent-default-*** -- /bin/bash
You will be logged in as root; just try a simple sudo command:
sudo ls
This tells you that the password is too old and PAM is refusing sudo.
We now need to turn off password aging while retaining the current password (I think there is no password set; can VMware confirm?).
But whatever.
In /etc/passwd I found two accounts, one for root (of course!) and one for neutron. Let's turn off aging for both; maybe you only need it for root.
So now run these commands:
passwd -x -1 root
passwd -x -1 neutron
Do this on all the DHCP agent pods you have.
Now DHCP will work.
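The per-pod fix can be sketched as one loop (namespace and pod naming as in my deployment; an untested sketch, adjust the grep to your naming):

```shell
# Disable password aging (PASS_MAX_DAYS) for root and neutron
# in every neutron DHCP agent pod.
for pod in $(kubectl get pods -n openstack -o name | grep dhcp); do
  for account in root neutron; do
    # passwd -x -1 removes the maximum password age for the account
    kubectl exec -n openstack "${pod#pod/}" -- passwd -x -1 "$account"
  done
done
```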
I saw that the config file for chage, /etc/login.defs, has this value:
PASS_MAX_DAYS 90
In recent days we have seen people complaining about DHCP; I think this is a release bug in the product. 90 days roughly corresponds to the release date of 6.0, which went public on September 3 but was probably built by the devs two or three weeks before that...
I'm very unsatisfied with this product; it is full of bugs that I have to correct or bypass myself. VMware, WTF??
Please also look at the bug I have had since I started using VIO 6.0; the console has never worked well...
Regards,
Alexandre MARTINS.
Guest OS Stops seeing HOST VMNet8
Host:
HP ZBook 17" with USB-C dock, Xeon CPU w/ 32 GB RAM, 2.4 TB storage. Started with Windows Vista, then 7, and now Windows 10.
Guest OSes range from Win95 to Win7.
Main use for guests: Rockwell RSLogix, PanelView, and Visual Studio programming.
The network cards consist of 1 Bluetooth, 2 hardwired NICs, 1 Wi-Fi, 1 VPN, and VMNet8.
In the past, the host and guests ran without problems with the following configuration, which I used from VMware Workstation 6 through the start of Workstation 15; then things started going bad.
All guest OSes are set up with 2 NICs. The first NIC is NAT, as a link to the host for Rockwell activations; the second NIC is a choice of VMNet0 (bridged to Wi-Fi) or VMNet1 (bridged to hardwired Ethernet 1).
A guest OS starts, RSLogix is launched and, via the first NIC (VMNet8), checks for a valid activation served from the host. After validation, the RSLogix software connects to the PLC via the second NIC (VMNet1 most of the time). Programming would get done and all was right with the world.
Current Problems:
“This is still ongoing”
My major problem currently is #3, although #1 needs fixing as well.
I have searched and tried many things for just over a year now. I have been limping along because I have had a heavy workload since just before all this started.
Any viable help would be greatly appreciated.
Thank you.
I've been having a dog of a time trying to add a dual-port 10GbE NIC to ESXi. I've tried installing the drivers, and everything appears to install, but ESXi doesn't seem to load the driver for some reason. This is an Intel NUC NUC7i5BNK host with an OWC Mercury Helios 3 Thunderbolt 3 PCIe expansion chassis housing the NIC. The NIC and the expansion chassis are known good on other OSes, so everything points to a configuration issue in ESXi or in the BIOS of the NUC. I've enabled Thunderbolt at boot, so I think that rules out the BIOS.
[root@esxi1:~] lspci -v | grep -A1 -i ethernet
0000:00:1f.6 Ethernet controller Network controller: Intel Corporation Ethernet Connection (4) I219-V [vmnic0]
Class 0200: 8086:15d8
--
0000:06:00.0 Ethernet controller Network controller: Intel(R) 82599 10 Gigabit Dual Port Network Connection
Class 0200: 8086:10fb
--
0000:06:00.1 Ethernet controller Network controller: Intel(R) 82599 10 Gigabit Dual Port Network Connection
Class 0200: 8086:10fb
[root@esxi1:~] vmkload_mod -l |grep drivername
[root@esxi1:~] vmkload_mod -l |grep ixgben
[root@esxi1:~] esxcfg-nics -l
Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:00:1f.6 ne1000 Up 1000Mbps Full 94:c6:91:15:0b:e4 9000 Intel Corporation Ethernet Connection (4) I219-V
[root@esxi1:~] esxcli software vib install -d "/vmfs/volumes/datastore1/VMW-ESX-6.7.0-ixgben-1.7.10-offline_bundle-10105563.zip"
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: INT_bootbank_ixgben_1.7.10-1OEM.670.0.0.8169922
VIBs Removed: INT_bootbank_ixgben_1.7.1-1OEM.670.0.0.7535516
VIBs Skipped:
Still no joy after reboot
[root@esxi1:~] esxcli hardware pci list
0000:06:00.0
Address: 0000:06:00.0
Segment: 0x0000
Bus: 0x06
Slot: 0x00
Function: 0x0
VMkernel Name:
Vendor Name: Intel Corporation
Device Name: 82599EB 10-Gigabit SFI/SFP+ Network Connection
Configured Owner: VMkernel
Current Owner: VMkernel
Vendor ID: 0x8086
Device ID: 0x10fb
SubVendor ID: 0x8086
SubDevice ID: 0x7a11
Device Class: 0x0200
Device Class Name: Ethernet controller
Programming Interface: 0x00
Revision ID: 0x01
Interrupt Line: 0xff
IRQ: 255
Interrupt Vector: 0x00
PCI Pin: 0x00
Spawned Bus: 0x00
Flags: 0x3219
Module ID: -1
Module Name: None
Chassis: 0
Physical Slot: 4294967295
Slot Description:
Passthru Capable: true
Parent Device: PCI 0:5:1:0
Dependent Device: PCI 0:6:0:0
Reset Method: Function reset
FPT Sharable: true
0000:06:00.1
Address: 0000:06:00.1
Segment: 0x0000
Bus: 0x06
Slot: 0x00
Function: 0x1
VMkernel Name:
Vendor Name: Intel Corporation
Device Name: 82599EB 10-Gigabit SFI/SFP+ Network Connection
Configured Owner: VMkernel
Current Owner: VMkernel
Vendor ID: 0x8086
Device ID: 0x10fb
SubVendor ID: 0x8086
SubDevice ID: 0x7a11
Device Class: 0x0200
Device Class Name: Ethernet controller
Programming Interface: 0x00
Revision ID: 0x01
Interrupt Line: 0xff
IRQ: 255
Interrupt Vector: 0x00
PCI Pin: 0x01
Spawned Bus: 0x00
Flags: 0x3219
Module ID: -1
Module Name: None
Chassis: 0
Physical Slot: 4294967295
Slot Description:
Passthru Capable: true
Parent Device: PCI 0:5:1:0
Dependent Device: PCI 0:6:0:1
Reset Method: Function reset
FPT Sharable: true
[root@esxi1:~] esxcfg-nics -l
Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:00:1f.6 ne1000 Up 1000Mbps Full 94:c6:91:15:0b:e4 9000 Intel Corporation Ethernet Connection (4) I219-V
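For completeness, here are the checks I would run next to narrow down why the driver is not claiming the device (a sketch; these esxcli commands exist in 6.7, but the output obviously depends on the host):

```shell
# Is the ixgben module present and enabled?
esxcli system module list | grep ixgben

# Try loading it by hand and watch for an error
esxcli system module load -m ixgben

# Did the VMkernel log record why the 8086:10fb device was not claimed?
grep -i ixgben /var/log/vmkernel.log | tail -20

# Re-check whether a new vmnic appeared
esxcli network nic list
```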
Topic Name : Using the VMware Logon Monitor
Publication Name : Horizon 7 Administration
Product/Version : VMware Horizon 7/7.8
Question :
After updating to Windows 10 1903, VMs with dual monitors take 44 seconds to log in. With only one monitor it takes 14 seconds. We only have the VMware drivers running. What is causing this issue?
I am in an administrator's account, so WHY THE FRICC IS THIS WINDOW STILL SHOWING UP?!?!
OS: Windows 10 May 2019 update
VMware Workstation: 15 Pro
Account(s): 1
Administrator: Yes
Why is this still happening?
My method is to copy from my Acer device that has VMware installed and put it on my Lenovo computer.
15 Pro can't friccin proceed to finish the thing.
The only version that finishes is VMware Workstation 15 Player (I set it to non-commercial use).
I did the same thing with 15 Pro, setting it to "Try VMware Workstation for 30 days."
And it's still happening... Pictures Below:
Been running an OS X High Sierra VM for a few years, but the 11.5.1 update introduces a mouse-pointer (graphical) bug where the normal-sized mouse pointer sits behind an expanded, bigger one.
Hi, we have a Horizon 7.10 environment with multiple connection servers and desktop pools. We use tagging of connection servers and desktop pools to control what people can get access to. We are looking at adding VMware Identity Manager 19.3 to our environment. Can vIDM do some sort of tagging, similar to what Horizon can do, to control what users see when they log into vIDM?
Thanks in advance,
Stewart
As of a few days ago, I am now unable to log into Skyline Advisor and my customer is getting the same message, and unable to access this either. We had set this up back in June and this must have stopped working after the recent permissions change a few days ago. We are all organization admins, and we are unable to grant any Skyline Advisor permissions to any users as we do not even see Skyline Advisor as a service in the permissions page.
Hi, in Horizon View, tags can be added to connection servers and desktop pools to control what users see when they log into the Horizon View environment. Is the equivalent available in VMware Identity Manager (ie a user logs into vIDM and can only see certain desktop pools)?
Scenario
User is entitled to 3 desktop pools in Horizon View. When the user logs into vIDM from the company network, the users "sees" pools 1,2,3. However, when the user logs into vIDM from the Internet, the user only "sees" pool 3. Can this be done?
Thanks in advance,
Stewart
Hello,
We have been searching for hours and following steps on how to reclaim space on a thin provisioned VM with no luck.
Nothing seems to be working.
We have tried - https://community.webcore.cloud/tutorials/how_to_reclaim_disk_space_from_a_thin_provisioned_/
Nothing changed afterwards.
And the UNMAP command is not supported.
Here are some of the outputs:
VAAI Plugin Name:
ATS Status: unsupported
Clone Status: unsupported
Zero Status: unsupported
Delete Status: unsupported
Revision: 3.45
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: true
Is Removable: false
Is SSD: false
Is VVOL PE: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
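Given "Is Local: true" and every VAAI primitive showing unsupported, array-side UNMAP seems to be off the table here. On local storage, the approach I understand to work is to zero the free space inside the guest and then punch the zeroed blocks out of the thin VMDK with the VM powered off; a sketch (the datastore path and VM name are hypothetical):

```shell
# Inside a Linux guest: fill free space with zeros, then remove the file.
dd if=/dev/zero of=/zerofill bs=1M || true
rm -f /zerofill

# On the ESXi host, with the VM powered off:
# -K ("punchzero") deallocates zeroed blocks from a thin disk.
vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk
```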
Hello,
Anyone know if macOS Mojave guests are supposed to function on Fusion 11?
If so, how do I make the guest understand that I'm running with the display capabilities of a MacBook Pro (15-inch, 2016)?
Thanks,
/T
Hi Team,
Has anyone already created settings in vSphere 6.x similar to the ones we had in vSphere 5.x? Can someone please tell me how?