
VRMS solution user is missing


My remote Replication appliance shows "Enabled (Configuration issue)", and when I hover over it I see the message: "The VRMS Solution user is missing or outdated. Visit the vSphere Replication VAMI and reconfigure the appliance". When I go to configure the appliance, everything looks OK. I can't find much about this in the VMware KB; does anyone know what it means?

I rebooted the replication appliance and it went to "Enabled (OK)" for a while, but then it went back to "configuration error".


VM MAC Conflict alarm on replica vm


I use VCSA 6.5 and vSphere Replication 6.5. After starting the replicated machine, the VM shows a "VM MAC Conflict" alarm, but the MAC addresses of the protected VM and the replica machine are different. Why is this alarm shown?

 

BR

vRA Software Component


Author :

URL : https://docs.vmware.com/en/vRealize-Automation/7.4/com.vmware.vra.prepare.use.doc/GUID-530ABF4D-E0EA-4266-BEB2-C507CCCDAA9D.html

Topic Name : Install the Guest Agent on a Linux Reference Machine

Publication Name : Preparing and Using Service Blueprints in vRealize Automation

Product/Version : vRealize Automation/7.4

Question :

OK, I'm running vRA 7.4. I added a software component (a bash script) to a blueprint that creates a RHEL 7 virtual machine. The vRA guest agent / software bootstrap agent is installed and running on the RHEL 7 template that is being provisioned, and I also prepared the Linux machine by running the script the agent provides. There is no networking issue, because I can reach everything (good ping, ssh, nslookup, etc.). Still, I can't get the deployment to complete: the bash script never executes. The deployment just hangs and then finally fails because the bash script is not run (it times out).

vCenter 6.5: opening the console in IE or Edge gives an error


In vCenter 6.5, opening the VM console with IE or Edge gives this error:

 

[500] An exception occurred processing JSP page /webconsole.html at line 10

Check the vSphere Web Client server logs for details.

 

vSphere Web Client (Flash):  Error

vSphere Client (HTML5):      OK

 

Other browsers are OK.

Azure Integration does not refresh state


Hi guys,

I'm currently in the process of integrating a vRA 7.4 Private Cloud with Microsoft Azure.

I can deploy VMs with no problem.

When I delete a VM, the vRA request fails with an error about being unable to execute a workflow with a certain ID. When I check that specific workflow in vRO, the result is successful, and I can see the deleted VM in the Azure log. I worked around this by destroying the deployment instead of deleting the VM.

My biggest problem comes when I try to power off a VM. The power off (Stop) works perfectly, but the VM state never refreshes and I never get 'Start' in the list of actions.

I only get the 'Restart' action, and it fails since Azure requires the VM to be powered on in order to restart it.

I'm not sure where to check. Has anything like this happened to you?

Thanks in advance.

Very very very high latency


Hello,

 

How should I interpret this latency? Could this be a vSphere 6.7 bug?

(614891480000000 ms = 19500 years)

(screenshot attached: Image1_.png)

Power Cli - Ack Alarm


I want to Acknowledge all alarms in my vCenter using VMware PowerCLI.

 

$Datacenter = Get-Datacenter

# list the triggered alarms that have not been acknowledged yet
$Datacenter.ExtensionData.TriggeredAlarmState | Where-Object { -not $_.Acknowledged }
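
Listing the triggered alarms is only half of the job; actually acknowledging them goes through the AlarmManager view. A minimal sketch, assuming an existing Connect-VIServer session (AcknowledgeAlarm takes the alarm and the entity it triggered on):

$alarmMgr = Get-View AlarmManager

foreach ($dc in Get-Datacenter) {
    foreach ($state in $dc.ExtensionData.TriggeredAlarmState) {
        if (-not $state.Acknowledged) {
            # mark this triggered alarm as acknowledged on its entity
            $alarmMgr.AcknowledgeAlarm($state.Alarm, $state.Entity)
        }
    }
}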

Extreme Latency Spikes in ESXi 6.5, but not in ESXi 5.5


Hello everybody,

We have two different vSphere environments; both run on Cisco UCS and Hitachi HDS storage.

One environment has new hosts installed from the Cisco ESXi 6.5 U1 ISO, with VCSA 6.5.

The second environment runs ESXi 5.5 U3 hosts, also with VCSA 6.5.

Now we get many latency alerts from the first environment, with read latency of up to 7 minutes, yet there is no impact within the guest OS or on other VMs running on the same host or datastore.

We see these latencies in Veeam and in vCenter.


Datastore has punctured RAID set - ESXi as well


I'm in the unfortunate position of having a datastore with a punctured RAID set. Most of the VMs on it are of no concern, bar one which can be moved; however, the affected datastore also holds the ESXi installation (the OS). I just wondered if it's possible to replicate the datastore completely to a new healthy unit I have created, so I can delete and recreate the faulty store.
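
As far as I know there is no built-in way to copy a VMFS datastore wholesale, but the VMs on it can be copied one at a time from the ESXi shell. A minimal sketch, using hypothetical paths /vmfs/volumes/old-ds and /vmfs/volumes/new-ds and a hypothetical VM folder myvm:

# copy the small VM files (vmx, nvram, logs) to the new datastore
mkdir /vmfs/volumes/new-ds/myvm
cp /vmfs/volumes/old-ds/myvm/myvm.vmx /vmfs/volumes/new-ds/myvm/
# vmkfstools -i makes a full clone of a virtual disk (descriptor plus data)
vmkfstools -i /vmfs/volumes/old-ds/myvm/myvm.vmdk /vmfs/volumes/new-ds/myvm/myvm.vmdk

Note that the ESXi installation itself cannot be moved this way; if the boot disk is part of the punctured set, plan on reinstalling ESXi and re-registering the copied VMs.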

UEM 9.4 supportability with Windows 10?


Hi all,

 

I was reading through the install guide for UEM 9.4, and in the supported Windows 10 versions I only see build 1803 listed.

 

Is this correct?

Only 1 build of Windows 10 is supported with UEM 9.4?

 

I would have thought that it would be backwards compatible with at least a couple of builds.

 

Any thoughts?

 

Regards

Mark

Windows 10 build 1803 error in new VMs.


Hi all,

 

I am 99.9% certain that the problem I have is Microsoft-related; however, I just wanted to see if anyone else has seen the same issue and how they resolved it.

 

I have deployed desktops in Horizon 7.5 with the 1803 build of Windows 10.

The desktops take a long time to load on first use, and then they show an error popup saying the desktop cannot be found in the searched location.

The desktop is just shown as a bare desktop.

(They are not using Roaming or Mandatory profiles).

 

If I also boot up the master image VM I see the same behavior there. (do I just need to do a reinstall and hope that one comes out better?)

 

Researching on the internet leads me to believe that it is a Microsoft Windows 10 issue and nothing to do with Horizon 7.5; however, that hasn't helped me resolve my issue.

(I found out that the build given to me was at RTM level, so I updated it to the latest build prior to the L1 vulnerability patch; however, I still see the issue.)

 

Has anybody else seen this behavior and found out how to resolve it?

 

Any help or comments would be most welcome.

 

Regards

Mark

Latest vdiskmanager.exe (included in Workstation 12) generates 4 GB "split sparse" files instead of 2 GB


I've just come off a call with VMware support where my concern was confirmed: there has apparently been a change in the code, and the VMware Virtual Disk Manager (in my case build 2985596) now generates 4 GB files when using option -t 1.

Apparently this is not documented at all, and I've been informed that it is going to be passed on to the documentation team. Even the help embedded in vmware-vdiskmanager.exe still says that disk types 1 and 3 will generate 2 GB files, where in fact they generate 4 GB files.

At the moment I am awaiting a response to my request to receive a copy of the Workstation 11 vdiskmanager, where the 2 GB files are indeed 2 GB.
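
For reference, this is the kind of invocation that shows the behavior; the file names are placeholders:

rem -t 1 is "growable virtual disk split into files" (documented as 2 GB splits);
rem with the Workstation 12 build above, the resulting -s00x.vmdk extents are 4 GB
vmware-vdiskmanager.exe -r source.vmdk -t 1 target.vmdk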

Please share your thoughts and experiences below.

Problem installing OS X 10.9


Hi everyone, I hope you can help me. I'm new to virtualizing Mac OS X on ESXi 6.0, so I got hold of a 10.9 ISO, since apparently ESXi only supports up to 10.10. But when I mount the ISO and power the VM on, the Apple logo appears and then the prohibition (circle-with-slash) symbol. Does anyone know what I can do to install the ISO correctly?

 

Regards.

Horizon 7.5 and UEM 9.4 Clipboard and Smart Policy not working


As the title points out, I have deployed Horizon 7.5 and UEM 9.4

 

I'm using a GPO to start UEM, and within UEM I have Smart Policies defined to enable the clipboard in both directions, but it isn't working.

 

I have tested other settings in the same Smart Policy: disabling printer redirection works, and disabling USB redirection works as well, but the clipboard doesn't. I have uninstalled and reinstalled the Horizon 7.5 Agent and UEM 9.4, and that didn't help either.

 

The other thing I noticed is that HTML file transfer doesn't work either; I've tested with both PCoIP and Blast.

 

I also tried enabling the clipboard within GPO and that didn't make a difference.

 

Has anyone else run into the same issue? Is there anything I'm missing? Thanks.

vRealize Business - vCenter Data Collection


Hi,

 

I have a "standalone" implementation of vRealize Business connected to a vCenter with only 100 VMs, as a POC for a client. Five days after the installation, the vCenter data collection started to fail (warn), and so did the cost calculation (fail).

 

I'm using the latest version, 7.4.0.19475.

 

I couldn't find info on the Internet or in the documentation. The message says "Data collector is not posting data to transformers".

Any ideas?

(screenshot attached: vrb-status.png)

Thanks


Attaching old datastore on ESXi 6.5 failed


Hello,

 

Let me give you the situation here:

We have an old Oracle X4170 box with an LSI MegaRAID card.

We had one disk with the OS (300 GB), running ESXi 5.5.

We had a datastore on 2 disks configured in RAID 0 (~600 GB).

After a power outage, we lost the RAID configuration and were unable to recover it.

The OS disk got corrupted and would no longer boot.

We decided to add another disk and install ESXi 6.5 on it.

Everything went well with that.

Then we reconfigured the datastore disks as the same RAID 0.

When we try to add the disk as a datastore, we can't mount it.

This datastore holds 20 VMs, and we would like to avoid recreating them from scratch.

 

Please find below the output of some commands:

 

[root@localhost:/vmfs/volumes] esxcli storage vmfs extent list

Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition

-----------  -----------------------------------  -------------  ------------------------------------  ---------

datastore1   5b7b2a9e-06a7b1c2-5ee3-002128a2da58              0  naa.600605b002406d30230de3f30be96b0a          3

[root@localhost:/vmfs/volumes]

[root@localhost:/vmfs/volumes] esxcli storage filesystem list

Mount Point                                        Volume Name  UUID                                 Mounted  Type            Size          Free

-------------------------------------------------  -----------  -----------------------------------  -------  ------  ------------  ------------

/vmfs/volumes/5b7b2a9e-06a7b1c2-5ee3-002128a2da58  datastore1   5b7b2a9e-06a7b1c2-5ee3-002128a2da58     true  VMFS-5  590826438656  589805125632

/vmfs/volumes/1db0be7f-160e13cd-3c0e-9e84931559ff               1db0be7f-160e13cd-3c0e-9e84931559ff     true  vfat       261853184     111267840

/vmfs/volumes/737eb47f-eb094043-2b00-57c5a0f8c627               737eb47f-eb094043-2b00-57c5a0f8c627     true  vfat       261853184     261844992

/vmfs/volumes/4b535f72-589fe73b-ae9e-3423265bfad7               4b535f72-589fe73b-ae9e-3423265bfad7     true  vfat       261853184      99434496

/vmfs/volumes/5b7b2aa4-4af4dd7a-e622-002128a2da58               5b7b2aa4-4af4dd7a-e622-002128a2da58     true  vfat      4293591040    4279697408

/vmfs/volumes/4f34529f-c68cc3d2-7413-002128a2da59               4f34529f-c68cc3d2-7413-002128a2da59     true  vfat       299712512        622592

/vmfs/volumes/5b7b2a90-56be3b86-1973-002128a2da58               5b7b2a90-56be3b86-1973-002128a2da58     true  vfat       299712512      83927040

[root@localhost:/vmfs/volumes]

[root@localhost:/vmfs/volumes]

[root@localhost:/vmfs/volumes]

[root@localhost:/vmfs/volumes] ls -alh /vmfs/devices/disks

total 2337885984

drwxr-xr-x    2 root     root         512 Aug 21 14:19 .

drwxr-xr-x   13 root     root         512 Aug 21 14:19 ..

-rw-------    1 root     root      557.9G Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a

-rw-------    1 root     root        4.0M Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:1

-rw-------    1 root     root        4.0G Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:2

-rw-------    1 root     root      550.5G Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:3

-rw-------    1 root     root      250.0M Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:5

-rw-------    1 root     root      250.0M Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:6

-rw-------    1 root     root      110.0M Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:7

-rw-------    1 root     root      286.0M Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:8

-rw-------    1 root     root        2.5G Aug 21 14:19 naa.600605b002406d30230de3f30be96b0a:9

-rw-------    1 root     root      556.9G Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc

-rw-------    1 root     root        4.0M Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc:1

-rw-------    1 root     root        4.0G Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc:2

-rw-------    1 root     root      552.1G Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc:3

-rw-------    1 root     root      250.0M Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc:5

-rw-------    1 root     root      250.0M Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc:6

-rw-------    1 root     root      110.0M Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc:7

-rw-------    1 root     root      286.0M Aug 21 14:19 naa.600605b002406d30230ed2ff17a5b1fc:8

lrwxrwxrwx    1 root     root          36 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631 -> naa.600605b002406d30230de3f30be96b0a

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:1 -> naa.600605b002406d30230de3f30be96b0a:1

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:2 -> naa.600605b002406d30230de3f30be96b0a:2

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:3 -> naa.600605b002406d30230de3f30be96b0a:3

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:5 -> naa.600605b002406d30230de3f30be96b0a:5

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:6 -> naa.600605b002406d30230de3f30be96b0a:6

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:7 -> naa.600605b002406d30230de3f30be96b0a:7

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:8 -> naa.600605b002406d30230de3f30be96b0a:8

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230de3f30be96b0a4d5239323631:9 -> naa.600605b002406d30230de3f30be96b0a:9

lrwxrwxrwx    1 root     root          36 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631 -> naa.600605b002406d30230ed2ff17a5b1fc

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631:1 -> naa.600605b002406d30230ed2ff17a5b1fc:1

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631:2 -> naa.600605b002406d30230ed2ff17a5b1fc:2

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631:3 -> naa.600605b002406d30230ed2ff17a5b1fc:3

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631:5 -> naa.600605b002406d30230ed2ff17a5b1fc:5

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631:6 -> naa.600605b002406d30230ed2ff17a5b1fc:6

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631:7 -> naa.600605b002406d30230ed2ff17a5b1fc:7

lrwxrwxrwx    1 root     root          38 Aug 21 14:19 vml.0200000000600605b002406d30230ed2ff17a5b1fc4d5239323631:8 -> naa.600605b002406d30230ed2ff17a5b1fc:8

 

Log from vmkernel.log during the rescan:

 

[root@localhost:/vmfs/volumes] tail -f /var/log/vmkernel.log

2018-08-21T14:43:19.698Z cpu6:67658 opID=650a6241)World: 12230: VC opID 0545905B-000000C0-2ce6 maps to vmkernel opID 650a6241
2018-08-21T14:43:19.698Z cpu6:67658 opID=650a6241)vmw_ahci[0000001f]: scsiDiscover:channel=0, target=0, lun=0, action=0
2018-08-21T14:43:19.698Z cpu6:67658 opID=650a6241)vmw_ahci[0000001f]: scsiDiscover:No media
[the scsiDiscover / "No media" pair repeats for targets 1 through 5]
2018-08-21T14:43:19.700Z cpu8:67935)<6>megasas_service_aen[1]: aen received
2018-08-21T14:43:19.700Z cpu2:65986)<6>megasas_hotplug_work[1]: event code 0x0071
2018-08-21T14:43:19.748Z cpu2:65986)<6>megasas_hotplug_work[1]: aen registered
2018-08-21T14:43:19.749Z cpu6:67658 opID=650a6241)megasas_slave_configure: do not export physical disk devices to upper layer.
2018-08-21T14:43:19.749Z cpu6:67658 opID=650a6241)WARNING: ScsiScan: 1565: Failed to add path vmhba1:C0:T16:L0 : Not found
[the same aen / slave_configure / ScsiScan sequence repeats for paths vmhba1:C0:T17:L0 and vmhba1:C0:T19:L0]
2018-08-21T14:43:19.884Z cpu7:67683 opID=5f5af7eb)World: 12230: VC opID 0545905B-000000C2-2ce8 maps to vmkernel opID 5f5af7eb
2018-08-21T14:43:19.884Z cpu7:67683 opID=5f5af7eb)Partition: 648: Read from primary gpt table failed on "naa.600605b002406d30230ed2ff17a5b1fc".
[the "Read from primary gpt table failed" message for naa.600605b002406d30230ed2ff17a5b1fc repeats dozens of times during the rescan; duplicates trimmed]
2018-08-21T14:43:19.895Z cpu7:67683 opID=5f5af7eb)LVM: 11136: Device naa.600605b002406d30230ed2ff17a5b1fc:3 detected to be a snapshot:
2018-08-21T14:43:19.895Z cpu7:67683 opID=5f5af7eb)LVM: 11143:   queried disk ID: <type 2, len 22, lun 0, devType 0, scsi 0, h(id) 6039607899683040821>
2018-08-21T14:43:19.895Z cpu7:67683 opID=5f5af7eb)LVM: 11150:   on-disk disk ID: <type 2, len 22, lun 0, devType 0, scsi 0, h(id) 4778126699999099765>
2018-08-21T14:43:19.897Z cpu5:66425)ScsiDeviceIO: 2948: Cmd(0x43950067bd80) 0x1a, CmdSN 0x1fa9 from world 0 to dev "naa.600605b002406d30230de3f30be96b0a" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
2018-08-21T14:43:19.903Z cpu5:66425)ScsiDeviceIO: 2948: Cmd(0x4395095d1600) 0x1a, CmdSN 0x1ff3 from world 0 to dev "naa.600605b002406d30230ed2ff17a5b1fc" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
[three more identical 0x1a failures follow, against both devices]
2018-08-21T14:43:19.958Z cpu2:65709)FSS: 5749: No FS driver claimed device 'naa.600605b002406d30230ed2ff17a5b1fc:2': No filesystem on the device
2018-08-21T14:43:19.964Z cpu8:68419)FSS: 5749: No FS driver claimed device 'naa.600605b002406d30230ed2ff17a5b1fc:6': No filesystem on the device
2018-08-21T14:43:19.964Z cpu7:67683 opID=5f5af7eb)VC: 4511: Device rescan time 14 msec (total number of devices 9)
2018-08-21T14:43:19.964Z cpu7:67683 opID=5f5af7eb)VC: 4514: Filesystem probe time 67 msec (devices probed 8 of 9)
2018-08-21T14:43:19.964Z cpu7:67683 opID=5f5af7eb)VC: 4516: Refresh open volume time 0 msec
2018-08-21T14:51:06.887Z cpu4:67541)Partition: 648: Read from primary gpt table failed on "naa.600605b002406d30230ed2ff17a5b1fc".
[the gpt read failures continue to appear minutes later]
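
Judging by the LVM lines above, the rebuilt RAID 0 volume now reports a different device ID than the one recorded in the VMFS metadata, so ESXi treats the volume as an unresolved snapshot and will not auto-mount it. A sketch of the usual recovery path from the ESXi shell; the volume label here is a placeholder, so take the real one from the list output first:

# list VMFS volumes that ESXi considers unresolved snapshot copies
esxcli storage vmfs snapshot list

# either mount the volume while keeping its existing signature ...
esxcli storage vmfs snapshot mount -l "old-datastore"

# ... or write a new signature (the volume then reappears as snap-xxxxxxxx-<label>)
esxcli storage vmfs snapshot resignature -l "old-datastore"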

 

Thanks for the help.

Best

Didier

Device/Credential Guard are not compatible


Hello, I am facing the problem that VMware Workstation reports that it and Device/Credential Guard are not compatible, and that VMware will only run after Device/Credential Guard is disabled.
However, Device/Credential Guard is already disabled, and the Hyper-V feature is also unchecked.
Please help me.

 

 

This problem started after updating my Windows 10 Pro to version 1803 (the April Update).
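
A workaround that often comes up for this message when Hyper-V is already unchecked is clearing the hypervisor launch type from the boot configuration; a sketch, to be run from an elevated Command Prompt and followed by a reboot:

rem stop Windows from loading its hypervisor at boot
bcdedit /set hypervisorlaunchtype off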

Thank You!!!

Remote Control with Established Session


Greetings!

 

I have a VMware Horizon 7.2 environment with vSphere 6.5. What I was wondering is whether there is a way to view a session that is established through my connection broker via PCoIP/Blast from my vSphere console. Currently, once a session is established with a VM from the CB, the screen blacks out in vSphere.

 

If this is not possible is there a good alternative to connecting and troubleshooting with end-users? I have a large SCCM deployment for my physical desktops, however I am not finding SCCM to be a very good tool for managing my VMs.

 

Best,

 

Brendan

Error when creating a Virtual Data Recovery backup on a server


 

Hi,

 

 

I am getting this error when creating a VDR backup of a Windows 2003 Server. Backups of other VM servers work just fine; however, for this server I am getting this error message:

 

 

5/24/2010 5:31:09 AM: Normal backup using Backup Job 1

5/24/2010 5:31:12 AM: Failed to create snapshot for 'servername', error -3960 ( cannot quiesce virtual machine)

5/24/2010 5:31:12 AM: Task incomplete

 

 

Does anybody have any idea what this error ID means?

 

 

Thank you.

 

 

Collin

 

 

”L1 Terminal Fault” vulnerabilities CVE-2018-3646, CVE-2018-3620 and CVE-2018-3615


Intel has disclosed details on a new class of CPU speculative-execution vulnerabilities known collectively as “L1 Terminal Fault” that can occur on past and current Intel processors (from at least 2009 – 2018).

 

Like Meltdown, Rogue System Register Read, and "Lazy FP state restore", the “L1 Terminal Fault” vulnerability can occur when affected Intel microprocessors speculate beyond an unpermitted data access.

 

By continuing the speculation in these cases, the affected Intel microprocessors expose a new side-channel for attack.

 

For more information, see VMware Knowledge Base article KB55636.

 

If, after applying patches to the ESXi host, you see the notification esx.problem.hyperthreading.unmitigated, review KB57374 for further information.
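
To check whether the ESXi Side-Channel-Aware Scheduler has been enabled on patched hosts, a PowerCLI sketch may help; the advanced option name follows KB57374, the host name is a placeholder, and an existing Connect-VIServer session is assumed:

# report the L1TF scheduler mitigation state for every host
Get-VMHost |
    Get-AdvancedSetting -Name 'VMkernel.Boot.hyperthreadingMitigation' |
    Select-Object Entity, Name, Value

# enable the mitigation on one host (takes effect after a host reboot)
Get-VMHost -Name 'esx01.example.com' |
    Get-AdvancedSetting -Name 'VMkernel.Boot.hyperthreadingMitigation' |
    Set-AdvancedSetting -Value $true -Confirm:$false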

 

The Global Services Community Team
