Current config: 3 x ESXi 4 hosts that were originally each configured with a single vSwitch containing a VMkernel port (vMotion), a VMkernel port (management), and a virtual machine port group (VM Network), all connected to the same physical switch (and therefore the same IP range), with 6 physical adapters attached to the vSwitch.
A distributed switch was then created to handle the VMs and a management VMkernel port.
All was sweetness and light until the hosts were upgraded to ESXi 4 Update 1 by reinstalling them and manually recreating the networking. You guessed it: somewhere along the way it all went wrong.
In the process of resolving the issue we are moving back to a standard vSwitch config and attempting to remove the distributed switch.
I followed the manual to migrate all the VMs back to the standard switch, removed all but one of the physical adapters from the distributed switch, and reattached them to the vSwitch.
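For what it is worth, a rough sketch of how that reattachment step could be scripted against the host's network system is below. It uses pyVmomi purely as an illustration (the vCenter name, host name, vSwitch name, and vmnic names are placeholders, not my actual values), and the same change can obviously be made in the vSphere Client instead.

    # Minimal sketch: reattach freed physical adapters to an existing standard vSwitch.
    # All names (vCenter, host, vSwitch0, vmnic list) are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.local', user='administrator',
                      pwd='password', sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Look the host up by name and grab its network configuration manager.
    host = content.searchIndex.FindByDnsName(dnsName='esx01.example.local', vmSearch=False)
    netsys = host.configManager.networkSystem

    # Reuse the existing vSwitch0 spec, replacing its uplink list with the NICs
    # that should now be attached (the ones freed from the distributed switch).
    vswitch = next(v for v in host.config.network.vswitch if v.name == 'vSwitch0')
    spec = vswitch.spec
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic0', 'vmnic1'])
    netsys.UpdateVirtualSwitch(vswitchName='vSwitch0', spec=spec)

    Disconnect(si)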
I am now back to the following config (hopefully this makes sense):
Host 1: VMkernel (vmk2 - vMotion), VMkernel (vmk1 - management), and VM Network (virtual machine port group) on one vSwitch, plus a VMkernel port (vmk0 - management traffic) on the distributed switch.
Host 2: VMkernel (vmk1 - vMotion), VMkernel (vmk0 - management), and VM Network (virtual machine port group) on one vSwitch, plus a VMkernel port (vmk2 - management traffic) on the distributed switch.
Host 3: VMkernel (vmk1 - vMotion), VMkernel (vmk0 - management), and VM Network (virtual machine port group) on one vSwitch, and nothing on the distributed switch.
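To double-check that layout before touching anything, a read-only sketch along these lines can list each host's VMkernel adapters and whether they sit on a standard port group or a distributed switch port (again pyVmomi as an illustration, connecting the same way as the earlier sketch):

    # Minimal sketch: list each host's VMkernel NICs and where they are attached.
    # Read-only; 'content' is the ServiceInstance content obtained as in the
    # connection lines of the earlier sketch.
    from pyVmomi import vim

    def report_vmkernel_ports(content):
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            for vnic in host.config.network.vnic:
                if vnic.spec.distributedVirtualPort:
                    where = 'distributed switch (portgroup key %s)' % (
                        vnic.spec.distributedVirtualPort.portgroupKey)
                else:
                    where = 'standard port group "%s"' % vnic.portgroup
                print('  %s  %s  on %s' % (vnic.device, vnic.spec.ip.ipAddress, where))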
Looking at the manual, it says I can remove the distributed switch easily, but do I need to migrate the VMkernel ports off the distributed switch first, given that each host already has a management VMkernel port on its standard vSwitch?
Or would it be easier to just remove the VMkernel port from the distributed switch first?
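In case it helps to see what that second option would look like, the removal itself is a single RemoveVirtualNic call per host (sketch below, pyVmomi again, with the vmk device name taken from the layout above); the caveat I am aware of is that the host must already be managed through the standard-switch management VMkernel port before the distributed-switch one is deleted, or the host drops out of vCenter.

    # Minimal sketch: remove the redundant management VMkernel port that is still
    # on the distributed switch (e.g. vmk0 on Host 1 in the layout above).
    # Caution: only run this while vCenter reaches the host through the management
    # VMkernel port on the standard vSwitch, otherwise management connectivity is lost.

    def remove_dvswitch_vmkernel(host, device):
        netsys = host.configManager.networkSystem
        vnic = next(v for v in host.config.network.vnic if v.device == device)
        if vnic.spec.distributedVirtualPort is None:
            raise ValueError('%s is not attached to a distributed switch port' % device)
        netsys.RemoveVirtualNic(device)

    # e.g. remove_dvswitch_vmkernel(host1, 'vmk0') for Host 1 above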
Is the numbering of the VMkernel ports significant in this situation?
Sorry for the lack of clarity.