VMware vSphere VM Disk Issues Can Hit Your VM Performance

In this modern era, workloads and applications mainly run in VMware-based virtualization environments, and VMware administrators are
often asked about slow Virtual Machine (VM) performance.
Optimizing and speeding up VM performance is quite challenging for VMware admins because several factors are involved, including compute, storage, networks, and management within a VMware-based virtualized environment.

It’s a prime responsibility of a VMware admin to monitor not only the hardware and physical infrastructure, but also the CPU, memory, disk
allocation, network and storage I/O, and virtual networks of the VMs.

In this blog, we’ll discuss disk metrics that can affect VM disk performance and show how monitoring tools help remediate VM performance issues. If VMs are underperforming,
they can be optimized easily and quickly with the right VM performance monitoring and management solution.

We will cover some important virtual machine disk metrics that can affect the performance, health, and capacity of your VMware vSphere-based virtual infrastructure.

Disk Metrics

In a VMware virtual environment, VMs use large disk files called virtual disks. These files, also known as Virtual Machine Disk (VMDK) files, are used to store virtual machine data. Every VM starts with a single virtual disk by default, but can be configured to use more.

These virtual disks are located in storage containers called datastores. Datastores can be either local (directly attached to ESXi hosts) or shared (networked storage such as Storage Area Networks (SANs) and Logical Unit Number (LUN) storage devices).

If the storage is performing poorly, then your VMs will also perform poorly. Storage latency and throughput play a critical part in VM performance in your virtual environment.

In VMware vSphere, disk I/O and capacity metrics are reported at different levels, including VMs, datastores (either local or shared), and ESXi hosts. Because multiple ESXi hosts and VMs can share datastores, monitoring at the datastore level offers an aggregated view of disk performance in a virtualized environment. However, monitoring the performance of both your virtual disks (at the guest OS level) and physical disks (at the ESXi host level) is very important, and tracking disk metrics at both levels also supports monitoring cluster health and pinpointing where issues are actually occurring.
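As a minimal sketch of that aggregated view, the snippet below rolls hypothetical per-VM latency samples up to a per-datastore average. All VM and datastore names and the latency values are illustrative assumptions, not output from any real vSphere API:

```python
from collections import defaultdict

# Hypothetical per-VM latency samples (ms), each tagged with the
# datastore the VM's virtual disk lives on. Names/values are made up.
vm_latency_samples = [
    {"vm": "web-01", "datastore": "ds-local-1", "latency_ms": 4.2},
    {"vm": "web-02", "datastore": "ds-san-1", "latency_ms": 18.7},
    {"vm": "db-01", "datastore": "ds-san-1", "latency_ms": 25.3},
]

def aggregate_by_datastore(samples):
    """Roll per-VM latency samples up to an average per datastore."""
    grouped = defaultdict(list)
    for s in samples:
        grouped[s["datastore"]].append(s["latency_ms"])
    return {ds: sum(vals) / len(vals) for ds, vals in grouped.items()}

print(aggregate_by_datastore(vm_latency_samples))
```

Because two VMs share ds-san-1, its average reflects both workloads — exactly the kind of shared-datastore view the paragraph above describes.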

VMs in a virtual environment normally use storage controllers to access the virtual disks in a datastore. These storage controllers allow VMs to send commands to the ESXi hosts, which then redirect those commands to the appropriate virtual disks. Because VMs access datastores through ESXi hosts, monitoring disk metrics such as throughput and latency helps you ensure that ESXi hosts and VMs access the physical storage effectively and without interruption.

The following are some important VM disk metrics that every VMware admin should consider while monitoring the VMware-based virtual environment:

Disk Latency

Monitoring disk latency in a virtual environment ensures that VMs are communicating with their virtual disks efficiently and without delay.

Disk latency is measured in milliseconds as the time an ESXi host takes to process a request from a VM to a datastore, and it helps you determine whether vSphere is operating at the agreed performance level.
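A simple sketch of that check: flag latency samples that exceed an agreed threshold. The 20 ms threshold and the sample values below are illustrative assumptions, not a VMware recommendation:

```python
# Assumed SLA threshold in milliseconds — purely illustrative.
SLA_THRESHOLD_MS = 20.0

def breaches_sla(latency_ms, threshold_ms=SLA_THRESHOLD_MS):
    """True when a single latency sample exceeds the agreed threshold."""
    return latency_ms > threshold_ms

# Hypothetical latency samples collected over a monitoring interval.
samples_ms = [3.1, 8.4, 27.9, 12.0, 41.5]
violations = [ms for ms in samples_ms if breaches_sla(ms)]
print(violations)  # [27.9, 41.5]
```

In practice a monitoring solution applies this kind of comparison continuously and alerts on the violations rather than printing them.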

Latency spikes indicate issues in your virtual environment that can have several causes, including resource starvation or application problems.

You can monitor the virtual disk usage of VMs with Metrics and Logs in your VMware vSphere environment.

If there is an issue with total latency, check and verify the average latency of read/write operations. Total latency can also be broken down into read and write latencies at the ESXi host, VM, and datastore levels to determine which inventory objects are contributing to it.
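The breakdown step can be sketched as follows — for each inventory object, split total latency into its read and write components and report which side dominates. Object names and latency figures are hypothetical:

```python
def dominant_latency_component(read_ms, write_ms):
    """Return which side of the read/write split contributes more
    to total latency for a given object (VM, host, or datastore)."""
    return "read" if read_ms >= write_ms else "write"

# Hypothetical per-object read/write latency breakdown (ms).
objects = {
    "vm-app-01": {"read_ms": 2.1, "write_ms": 19.6},
    "esxi-host-2": {"read_ms": 14.8, "write_ms": 3.2},
}

for name, lat in objects.items():
    total = lat["read_ms"] + lat["write_ms"]
    side = dominant_latency_component(lat["read_ms"], lat["write_ms"])
    print(f"{name}: total={total:.1f} ms, dominated by {side} latency")
```

Here vm-app-01's total latency is almost entirely write latency, which points the investigation in a very different direction than a read-dominated host would.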

High disk latency can also be correlated with other resource usage metrics to determine whether the root cause is a lack of available memory or CPU resources. This helps identify which VMs on the ESXi host or cluster are consuming the most resources, and whether those VMs need more resources allocated or should be moved to other ESXi hosts or datastores with greater available capacity.
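That correlation step can be sketched as a filter: pick out VMs whose high disk latency coincides with heavy CPU or memory pressure, making them candidates for more resources or migration. The thresholds, VM names, and stats are illustrative assumptions only:

```python
def migration_candidates(vm_stats, latency_ms=20.0, cpu_pct=85.0, mem_pct=90.0):
    """VMs whose high disk latency coincides with CPU or memory pressure.
    Threshold defaults are illustrative, not VMware guidance."""
    return [
        vm["name"]
        for vm in vm_stats
        if vm["latency_ms"] > latency_ms
        and (vm["cpu_pct"] > cpu_pct or vm["mem_pct"] > mem_pct)
    ]

# Hypothetical per-VM stats sampled from one ESXi host.
vm_stats = [
    {"name": "db-01", "latency_ms": 32.0, "cpu_pct": 91.0, "mem_pct": 70.0},
    {"name": "web-01", "latency_ms": 25.0, "cpu_pct": 40.0, "mem_pct": 55.0},
    {"name": "cache-01", "latency_ms": 8.0, "cpu_pct": 95.0, "mem_pct": 60.0},
]

print(migration_candidates(vm_stats))  # ['db-01']
```

Note that web-01 shows high latency without resource pressure — per the earlier paragraph, that pattern suggests an application issue rather than resource starvation.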

Metrics and Logs provide a complete picture of Disk I/O of VMs in the VMware vSphere environment.

Disk Throughput

Another important virtual disk metric is disk throughput, which shows whether your datastores, ESXi hosts, and VMs are performing read/write operations without interruption. Monitoring disk throughput at multiple levels of the virtual infrastructure and correlating it with other metrics can highlight bottlenecks in the environment. If a spike in VM read/write operations coincides with a spike in total latency, it indicates that the ESXi host is struggling to process the flood of read/write operations.
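As a back-of-the-envelope illustration, throughput can be derived from read/write counters sampled over an interval. The counter values and interval below are made up for the sake of the arithmetic:

```python
def throughput_mbps(kb_read, kb_written, interval_s):
    """Combined read+write throughput in MB/s over a sampling interval,
    given KB-transferred counters (hypothetical sampled values)."""
    return (kb_read + kb_written) / 1024 / interval_s

# 40960 KB read + 61440 KB written over a 20 s interval
# -> 102400 KB = 100 MB total -> 5.0 MB/s
print(throughput_mbps(40960, 61440, 20))
```

Tracking this number per VM, per host, and per datastore is what lets you spot the read/write floods described above.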

This can be mitigated by provisioning more memory to the affected VMs, allowing them to cache more data and rely less on swapping.


To improve VM performance in a virtual environment, VMware admins should track the latency and throughput of the virtual disks on each VM on the
ESXi hosts, and monitor the usage of both virtual and physical disks so that the ESXi hosts and VMs can be right-sized with the right
disk allocation.

Metrics and Logs is a solution that is up and running within minutes in your VMware-based virtual environment and immediately starts monitoring the virtual disk metrics that can affect VM performance.

You can start your free trial now: deploy the virtual appliance in minutes and get results in under 15 minutes.

Metrics and Logs

(formerly, Opvizor Performance Analyzer)

VMware vSphere & Cloud

Monitor and Analyze Performance and Log files:
Performance monitoring for your systems and applications with log analysis (tamperproof using immudb) and license compliance (RedHat, Oracle, SAP and more) in one virtual appliance!

