
vCenter Question

Hello,

 

I have a question regarding vCenter and vMotion.

I have 2 ESXi hosts [esxi1, esxi2] on the production site, both running 6.7.0, and one VCSA 6.7.0.12000 residing on esxi2. Both hosts are in a cluster with vSphere HA enabled. I need to do some work on the iSCSI adapter through the VCSA, thus I need to put the ESXi hosts into maintenance mode.

 

If I do this:

Put esxi1 into maintenance mode (all VMs are already powered off)

Make the changes to the iSCSI adapter

Take esxi1 out of maintenance mode

 

Move the VCSA off esxi2 using vMotion (migrate the compute resource only, since the hosts use shared storage). Is this step feasible or not?

Put esxi2 into maintenance mode

Make the changes to the iSCSI adapter of esxi2

Take esxi2 out of maintenance mode

And at the end I will move the VCSA back to esxi2 (compute only)
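As a sanity check, the plan above can be modeled with a tiny sketch (purely illustrative; the host and VM names are the ones from the post): a host can enter maintenance mode only once no powered-on VMs remain on it, which is why the VCSA must be vMotioned off esxi2 first.

```python
# Toy model of the maintenance plan above (illustrative only): a host may
# enter maintenance mode only when no powered-on VMs are left on it.
# Host and VM names are the ones from the post.

running_vm_on = {"vcsa": "esxi2"}  # the only powered-on VM in the plan

def can_enter_maintenance(host: str) -> bool:
    """True if no powered-on VM still runs on this host."""
    return host not in running_vm_on.values()

assert can_enter_maintenance("esxi1")       # step 1: esxi1 has no running VMs

# esxi2 is blocked until the VCSA is moved away; a compute-only vMotion is
# fine here because both hosts see the same shared storage.
assert not can_enter_maintenance("esxi2")
running_vm_on["vcsa"] = "esxi1"             # vMotion VCSA -> esxi1
assert can_enter_maintenance("esxi2")
```

So yes, the compute-only migration step is feasible as long as both hosts share the datastore and have a working vMotion network; the same logic applies in reverse when moving the VCSA back at the end.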

 

Please advise.

 

Thank you


Problem with CPU Load

Hi all!

 

Can anyone advise what can be done about this?

There is a dual-processor server with 10 cores per processor. It runs 4 virtual machines: three are undemanding, and the fourth has been allocated 18 cores.

NUMA is spread across two nodes. Normally it works fine, but periodically the load slides onto one processor and performance drops.

This time it happened after I removed an unneeded HDD from the powered-on VM. Restarting the VM usually helps.

 

(Attachment: Screenshot_9.jpg)

Numa:

(Attachment: Screenshot_12.jpg)

CPU Utilization

Dear,

 

 

We have some web, database and application servers that process requests filled in on a website.

A few times each year, when this is open for requests, we get a high number of users connecting to the website to fill in these requests.

We notice a big increase in CPU on all these servers when many people do this: CPU goes to 100%, and this causes problems.

 

 

How can we optimize our CPU commitments to these important machines?

We have 3 hosts with 6 servers that need to perform optimally on CPU.

The hosts have 2 sockets with 12 cores each (24 physical cores, 48 logical processors with hyperthreading).

 

 

The servers currently have 1 virtual socket with 4 cores, i.e. 4 vCPUs each.

 

 

What would be an optimal setup for more performance, keeping CPU ready times, overcommitting, etc. in mind?

How many virtual CPUs and cores should we assign to each server?
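As a starting point for the sizing question, here is a quick back-of-the-envelope calculation (the even 2-VMs-per-host spread is an assumption, not something stated above):

```python
# Rough vCPU-to-physical-core overcommitment estimate for the figures above.
# Assumption (not stated in the post): the 6 servers are spread evenly,
# 2 per host, and each host has 2 sockets x 12 cores = 24 physical cores.

def overcommit_ratio(vms_per_host: int, vcpus_per_vm: int, physical_cores: int) -> float:
    """Return the vCPU:pCore ratio for one host."""
    return (vms_per_host * vcpus_per_vm) / physical_cores

# Current sizing: 2 VMs x 4 vCPUs on 24 cores -> about 0.33:1, well under
# 1:1, so there is headroom to add vCPUs before overcommitment becomes a
# concern.
print(round(overcommit_ratio(2, 4, 24), 2))   # 0.33

# Hypothetical larger sizing: 8 vCPUs per VM is still only about 0.67:1.
print(round(overcommit_ratio(2, 8, 24), 2))   # 0.67
```

With a ratio this far below 1:1, the 4-vCPU cap is the more likely bottleneck during peaks than CPU overcommitment; a common approach is to grow the VMs while keeping the per-host ratio at or below roughly 1:1 and watching ready time (%RDY).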

 

 

Kind Regards,

Thomas


Dashboard on vROps

vROps 7 Missing Content - Dashboard Sharing and What-If Analysis

Hi,

Just upgraded to 7.0 but noticed a few things missing from what I have seen advertised:

 

  1. The dashboard share icon is missing if I log in with my domain credentials, even though the AD group has been given Administrator access to all objects. If I log in with the admin user, I can see the share dashboard icon.
  2. On What-If Analysis, Physical Infrastructure Planning and Migration Planning are both missing. Are these only available in certain editions?

 

Does anyone else have these issues?

Deallocate IP address assigned by vra in a network profile.

Is there any code available to deallocate an IP address assigned by vRA in a network profile?

AppVolumes 2.16 VolDelayLoadTime

Good Morning,

 

Our infrastructure consists of VMware Horizon 7.6 with App Volumes 2.14.2.

 

I'm trying the new 2.16 agent on a test pool (Windows 10, 1803); the manager is still 2.14 because I'm reading there are some DB issues with pre-existing AppStacks. The upgrade is needed after a Windows security update. However, I'm noticing that the registry key "VolDelayLoadTime" is ignored at login by the new agent, and unfortunately I really need it: if I don't set a delay of at least 5 seconds, the printers mapped by GPO cause issues (by design, App Volumes restarts the spooler AFTER the AppStacks are mounted):

 

Either the user gets an endless login screen stuck on "Applying Policy: Printers Mapping", or they get duplicate printers, or they get no printers at all.

 

The registry key completely mitigates these issues...

 

Do you know if this key is deprecated or changed in the latest version?
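For reference, this is the value being set. The svservice Parameters path below is the usual location for App Volumes 2.x agent tweaks, but treat it as an assumption and verify it against your agent version's documentation:

```
Windows Registry Editor Version 5.00

; Delay (in seconds) before the App Volumes agent attaches AppStacks at login.
; Path and value name as commonly used with the 2.x agent -- verify for 2.16.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\svservice\Parameters]
"VolDelayLoadTime"=dword:00000005
```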

 

 

Thank you!

State down of PVRDMA adapter

Hello,

 

I am trying to set up a Distributed Switch with 4 VMs using the RDMA feature without an HCA adapter, all on the same hypervisor, with ESXi 6.5 Update 2 (an earlier fix for this kind of problem should be included).

 

The VMs are running Red Hat Enterprise Linux 64-bit with kernel 3.10 and hardware version 13. The PVRDMA adapter is recognized by the OS in all 4 VMs.

 

When running ibstat -v, the State is reported as "Down":

 

CA 'vmw_pvrdma0'

    CA type: VMW_PVRDMA-1.0.1.0-k

    Number of ports: 1

    Firmware version: 1.0.0

 

    Hardware version: 1

    Node GUID: 0x00505600008b4b52

    System image GUID: 0x0000000000000000

    Port 1:

        State: Down

        Physical state: LinkUp

        Rate: 2.5

        Base lid: 0

        LMC: 0

        SM lid: 0

        Capability mask: 0x04010000

        Port GUID: 0x025056fffe8b4b52

        Link layer: Ethernet

 

Running sminfo fails with:

 

ibwarn: [33392] mad_rpc_open_port: client_register for mgmt 1 failed

sminfo: iberror: failed: Failed to open '(null)' port '0'

 

 

Any idea why the State is "Down", or whether any additional undocumented steps need to be performed for the PVRDMA adapter when no HCA adapter is used?

 

Thank you.


Error from ADMX-based Settings

Hello,

 

In our environment (VMware Horizon 7.3.2, UEM 9.6) we have problems with the processing of ADMX-based policies.

 

In my particular case, it is about activating the password-protected screen saver via an ADMX setting in UEM. Unfortunately, this setting is not applied but skipped, while other settings from the same ADMX are applied. I have no conditions active on "Windows OS.xml". Why does UEM handle this one setting differently from the others? Attached is an ADMX log which lists the error.

 

Thank you for your answers or ideas!

 

Greetings,

 

Burgerking68

 


Integrated Openstack SAML2 integration

Deleting very old Snapshots residing on Netapp Datastore

Hello All,

 

We are using VMware vSphere 5.5 and have installed many VMs on an ESXi host. Recently one of the VMs (SLES 10) has had performance issues whenever an activity (e.g. a Postgres database job) starts from cron.

The system seems to get very slow or even non-responsive at times. Another important factor to note is that the whole VM resides on a datastore created on NetApp storage (via NFS).

So that could also be a reason for the performance problems. But we are not yet sure of the real cause, so we are considering all factors.

I came across this information while digging around: "It is recommended to delete old snapshots because the presence of redundant delta disks can adversely affect virtual machine performance."

There are a number of old snapshots (the oldest being from 2018) created by an earlier administrator and never deleted since. Please see the attachment.

We wish to delete the older snapshots but would like to keep the present state of the VM. This is very important.

 

I am new to the vSphere snapshot topic, hence I would like to know your point of view on the following points.

 

1. How valid is the assumption that deleting the snapshot(s) would help resolve the performance problems?

2. If we decide to delete the older snapshots one by one, how long will it take for each snapshot to be deleted? As I read, the system consolidates the data after each delete operation.

    (The system itself currently uses 112 GB of NetApp datastore space.)

3. In what order should we delete the snapshots: oldest to newest, or the other way round?

4. We do not wish to preserve any snapshot, but at the same time we do not wish to delete all the snapshots at once. As this is a production system, we would like to take minimum risk.

5. Does using Snapshot Manager have any advantage?

6. Should the system be restarted before/during/after the snapshot delete operation?

7. Is system performance degraded during snapshot deletion, thereby affecting the users working on it?
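To reason about points 2, 3 and 7, here is a deliberately simplified model (not VMware's actual consolidation algorithm, and it ignores overlapping blocks, so treat it as a rough worst-case): each snapshot owns a delta disk, and deleting a snapshot merges its delta forward into its child, so the current state of the VM is always preserved.

```python
# Toy model of snapshot-chain consolidation (an illustration, not VMware's
# actual algorithm): each snapshot owns a delta disk, and deleting a
# snapshot merges its delta into its child (the next delta in the chain).
# The current VM state is never lost: deletion only folds history forward.

def consolidation_work(deltas_gb: list) -> float:
    """Worst-case total GB merged when deleting snapshots one by one,
    oldest first, ignoring overlapping blocks."""
    total = 0.0
    chain = list(deltas_gb)        # oldest ... newest
    while chain:
        oldest = chain.pop(0)      # delete the oldest snapshot
        total += oldest            # its delta is merged into its child
        if chain:
            chain[0] += oldest     # the child delta grows by the merged data
    return total

# Hypothetical chain of four deltas: 5, 10, 20 and 8 GB.
print(consolidation_work([5, 10, 20, 8]))
```

In this worst-case model, deleting oldest-first repeatedly re-merges already-merged data, which is why consolidating a long chain can take a long time and generate heavy I/O while the VM is running; scheduling it in a maintenance window is the low-risk option. The delta sizes above are hypothetical.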

 

Please let me know if you require any further information.

I am sorry to ask so many questions, but any guidance from people with previous experience would be very much appreciated. I would also be trying to gather information from elsewhere in parallel.

Thanks in advance.

Design Discussion - Separate Mgmt + Compute vCenters?

Hi,

 

In each of our datacenters today we have a separate 4-node management cluster to hold our vCenters, NSX Managers, domain controllers etc. for that site. The hosts in these clusters deliberately do not have NSX agents installed (to 100% guarantee we don't fat-finger DFW rules etc.), have standard switching (to remove all VDS / vCenter dependencies), run on separate compute infrastructure, and so on. But the vCenter / VCSA appliance managing this cluster is the same vCenter that manages all other clusters, i.e. the compute clusters.

 

Now that we've started to dip our toe in the Auto Deploy waters (stateful installs at this stage), I've been reading that best practice is to have a separate vCenter + SSO domain to manage the management cluster...

 

Do you all subscribe to this theory? Or is this advice old? I noticed the v6.5+ vCenter HA deployment automation (basic mode) relies on the vCenter / VCSA you're enabling it on being in the same management plane. I doubt this is an indication of what is considered 'modern best practice', but I thought it interesting that it would be a promoted deployment model, because it assumes you're not following what I've read is 'best practice' elsewhere in Auto Deploy design docs.

 

What are your thoughts? A separate management vCenter to manage the management cluster, or is it fine to use the same one for mgmt + compute? Or do you only consider separating the vCenters if you're using Auto Deploy?

 

I'm interested to hear from those who run medium to large vSphere environments and have had similar design debates.

 

Thanks.

Deploy VMs in a Specific Cluster with Respect to Their Type

We need to configure vRA machine deployment in such a way that

VMs will deploy to a specific cluster with respect to their type.

Example: Linux VMs should deploy to the Linux cluster and database VMs should deploy to the database cluster.

 

Please suggest.

"Notification buffer is full, oldest notification was not sent" in vRO 7.2

Hello All,

 

I can see continuous logs in vRO saying "Notification buffer is full, oldest notification was not sent". Is there anything to be worried about?

These are info events; I've attached a screenshot.

 

(Attachment: vro.jpg)

Unable to start VM export from ESXi server

I have different VMs on one ESXi server 6.7.

I can export some of them to an OVF file from the browser of a PC connected to the server, but for some other VMs this is impossible.

For those VMs the process starts, but the window asking for the file name etc. doesn't appear, so it is impossible to proceed.

The process appears as "running" but does nothing.

The VMs are very similar, but on the same ESXi server some are "exportable" and some are not.

Can anyone help?

Thanks, regards.


There was an issue with validating your login credentials against our backend database.

"There was an issue with validating your login credentials against our backend database" is the error message I receive when I attempt to log in to the Skyline Advisor. I was able to create the organization and set up the collector without any issues.

 

Tim

Is anyone experiencing Windows 10 hanging on reboot with linked clone VMs?

We are experiencing some issues with linked clone VMs that get stuck on reboot. This happens to random VMs. Anyone?

Server crash with pink screen

So today I woke up and noticed nothing on my network was responding. After finding out that all my virtual machines were not responding, I saw this.

 

(Attachment: Capture.PNG)

 

 

While I try to fix issues myself, this is something I've never seen (well, just once before), and I'm having a hard time even knowing where to start with this.

I've rebooted the host and the machines are up again. But this happened once a couple of weeks ago already, and I'm afraid it's not something I can just reboot and forget about, hoping it doesn't happen again.

 

Any help would be appreciated.

Workstation 15 - Win32 exception detected, exceptionCode 0xc0000006 (disk error while paging)

Host OS is Server 2016 with 188 GB RAM and 2 quad-core Xeons.

File system: NTFS

NTFS Compression: None

Dedup: None

 

 

I have VMs running in Workstation 15 getting the error in the log segment below...

 

2019-03-13T08:44:34.449-07:00| vcpu-0| I125: DISKUTIL: scsi0:0 : capacity=125829120 logical sector size=512

2019-03-13T08:45:04.784-07:00| vmx| I125: scsi0:0: Command READ(10) took 1.011 seconds (ok)

2019-03-13T08:45:18.215-07:00| vmx| I125: DISKLIB-LIB   : numIOs = 50000 numMergedIOs = 12400 numSplitIOs = 1776

2019-03-13T08:49:42.546-07:00| vcpu-0| W115: ----Win32 exception detected, exceptionCode 0xc0000006 (disk error while paging)----

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: ExceptionAddress 0x7ff7d55a6f5c eflags 0x00010206

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: rwFlags 0 badAddr 0x1e8519fd000

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: rax 0x1e857bff000 rbx 0x1e8462f2000 rcx 0x200

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: rdx 0x1e857bff000 rsi 0x1e8519fd000 rdi 0x1e8462f2000

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: r8 0x200 r9 0x10000 r10 0x1e845e9b0a8

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: r11 0x1e8462f0000 r12 0x1e8462f0000 r13 0x1

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: r14 0x2000 r15 0x10000

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: rip 0x7ff7d55a6f5c rsp 0x29b3efec70 rbp 0xe

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: LastBranchToRip 0 LastBranchFromRip 0

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: LastExceptionToRip 0 LastExceptionFromRip 0

2019-03-13T08:49:42.547-07:00| vcpu-0| W115: The following data was delivered with the exception:

2019-03-13T08:49:42.547-07:00| vcpu-0| W115:  -- 0

2019-03-13T08:49:42.547-07:00| vcpu-0| W115:  -- 0x1e8519fd000

2019-03-13T08:49:42.548-07:00| vcpu-0| W115:  -- 0xc0000010

2019-03-13T08:49:42.549-07:00| vcpu-0| I125: CoreDump: Minidump file D:\virtualmachines_vmwks\lab-management\vmware-vmx.dmp exists. Rotating ...

2019-03-13T08:49:42.557-07:00| vcpu-0| W115: CoreDump: Writing minidump to D:\virtualmachines_vmwks\lab-management\vmware-vmx.dmp

2019-03-13T08:49:42.966-07:00| vcpu-0| I125: CoreDump: including module base 0x0x7ff7d5130000 size 0x0x0124d000

2019-03-13T08:49:42.966-07:00| vcpu-0| I125:   checksum 0x00f62ec3 timestamp 0x5ba2301c

2019-03-13T08:49:42.966-07:00| vcpu-0| I125:   image file C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe

2019-03-13T08:49:42.966-07:00| vcpu-0| I125:   file version 15.0.0.38213

 

I get this every morning, randomly on some VMs, but all VMs exhibit this same problem, just not all on the same day.

I have run a chkdsk on the volume they are on and no errors were found.

I moved a VM to my SAS SSD LUN and it gets the same thing.

For kicks, I also moved it to a RAMDISK LUN and it still gets the same thing.

 

If I remove the .vmem file after the crash, the VM seems to stay alive for the rest of the day.

The funny thing is, at the end of the day I gracefully shut down each VM, so there is no .vmem file in the morning when I fire up my lab.

I even made a batch file to delete all the .vmem files in each VM's folder, but the crash still occurs.

 

Any ideas?

VCSA causes ESXi to become non-responsive

Hi


I've got a weird problem. If I don't have VCSA running, everything works normally. I can log into the console on my three ESXi hosts without any problem at all. Everything is snappy and responsive and all functionality is fine.


If I start VCSA, all is well for about five minutes, then all three hosts go offline in VCSA. At that point it is impossible to log into the hosts directly: they either just sit there showing the VMware logo, or, if you can get past that, the login just times out. SSH works fine at this point; it's just the GUI that fails.

 

The only way to restore normal service is to shut down VCSA and then restart the host services on the hosts. I did find someone else reporting the same issue a couple of years ago, but no resolution was forthcoming and the discussion was abandoned.

 

Any suggestions would be appreciated.

 

ESXi version is 6.7.0 (Build 9484548).

 

Thanks,

 

Steve
