Identifying & Relieving the Pain Points of Storage vMotion (As Well as How to Solve Those VMware Monitoring Woes)

Storage vMotion allows you to migrate a virtual machine’s entire datastore from one storage system to another. It physically moves the VM’s entire home directory (including log, swap, and configuration files, plus snapshots) to another storage device, and involves a quick suspension of the virtual machine that is completely transparent to the user.

The time it takes to transfer the entire datastore of a virtual machine from one storage system to another varies, and the transfer can cause performance issues for the applications running at the time. Storage vMotion has been part of VMware since version 3, though it began as a mere command-line utility for upgrading VMFS.

By version 3.5 it was actually called Storage vMotion, and the GUI first appeared in version 4.
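As a rough illustration of why transfer time varies, the baseline duration scales with the VM's on-disk footprint divided by the effective copy throughput. This is a naive sketch, not a VMware formula; the size and throughput figures are illustrative assumptions:

```python
def estimate_migration_seconds(vm_size_gb: float, throughput_mb_s: float) -> float:
    """Naive lower bound: total bytes / effective throughput.
    Real migrations also re-copy blocks that change during the transfer."""
    return (vm_size_gb * 1024) / throughput_mb_s

# e.g. a 500 GB VM over a path that sustains ~200 MB/s
print(round(estimate_migration_seconds(500, 200)))  # 2560 seconds, roughly 43 minutes
```

Doubling the VM size or halving the throughput doubles the window during which running applications can feel the impact, which is why large migrations are usually scheduled off-hours.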

Let’s take a look at the common pain points of Storage vMotion.

What Storage vMotion is Supposed to Do


Storage vMotion would work much better in a completely software-defined data center, or at least in an environment in which all of the storage was virtualized. That just isn’t the case in most enterprise data centers; at least, it is not yet a reality.

Storage vMotion is usually invoked when a user has complained about a machine’s slow performance. While it is helpful in solving storage issues such as performance and capacity, it comes with some inherent pain points. The first is that it demands a great deal of time and effort from the administrator.

The process involves a lot of extremely fast analysis and real-time decision making, and it carries a high risk of error. Many data centers simply can’t afford that kind of time, or don’t want to take on the stress involved in using Storage vMotion.

The Pain Points Associated with Using Storage vMotion


Storage vMotion consumes a lot of resources during the migration of a virtual machine, especially processing power, and it can affect the performance of other virtual machines on your network while it runs. Few data centers are willing to give up the time, labor, and resources that using it demands.

Another reason Storage vMotion is painful is that it involves a complete physical move of the virtual machine’s data sets, not simply its memory addresses, which is not just time-consuming but also resource-intensive. Additionally, Storage vMotion only works in an all-VMware environment.

Most data centers are not fully virtualized just yet, and many data centers are using numerous hypervisors — meaning Storage vMotion simply isn’t an option for them. 

Another pain point with Storage vMotion is that the sheer size of the move consumes a significant amount of host and network resources, particularly CPU cycles. The host server must handle the initial copy, followed by successive changed-block copies, until the destination storage has completely caught up.

The CPU then has to manage sending and receiving all of the virtual machine’s data, which means the resources of both the host and the storage affect the performance of other virtual machines while a single virtual machine is migrated from one storage system to another.

Lastly, Storage vMotion can contribute, both directly and indirectly, to storage sprawl.
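The catch-up behavior described above can be sketched as an iterative copy loop: each pass transfers the blocks that changed during the previous pass, and the migration converges only when the remaining change set is small enough to switch over. A minimal simulation with illustrative numbers, not VMware internals:

```python
def simulate_precopy(total_gb: float, throughput_gb_s: float,
                     dirty_rate_gb_s: float, switchover_gb: float = 1.0,
                     max_passes: int = 20):
    """Each pass copies the data dirtied during the previous pass.
    Converges only if the workload dirties data slower than we copy it."""
    to_copy = total_gb
    elapsed = 0.0
    for passes in range(1, max_passes + 1):
        pass_time = to_copy / throughput_gb_s
        elapsed += pass_time
        to_copy = dirty_rate_gb_s * pass_time  # blocks changed while copying
        if to_copy <= switchover_gb:
            return passes, elapsed
    return None, elapsed  # workload writes too fast; never converges

# 500 GB VM, 0.2 GB/s copy speed, workload dirtying 0.02 GB/s
passes, seconds = simulate_precopy(500, 0.2, 0.02)
```

The model makes the CPU cost visible: the host is busy for every pass, so a write-heavy workload stretches both the elapsed time and the window in which other VMs feel the contention.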

Rather than identifying a single storage system capable of meeting all of the storage demands, it encourages a piecemeal storage infrastructure that is unnecessarily large and disjointed. This is an inefficient and expensive way to operate a data center.

Alleviating the Pain of Storage vMotion


Investing in all-flash storage is too expensive, while investing in nothing but hard disk drives doesn’t give the data center enough high-performance storage to support the more demanding mission-critical workloads. For that reason, most data centers operate a mix of flash and hard disk drives, and a similar combination of solutions can address the migration of virtual machine data.
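A back-of-the-envelope comparison shows why the mix wins on cost. The per-TB prices below are rough illustrative assumptions, not vendor figures:

```python
def array_cost(flash_tb: float, hdd_tb: float,
               flash_usd_per_tb: float = 300.0,
               hdd_usd_per_tb: float = 30.0) -> float:
    """Cost of a hybrid array; per-TB prices are illustrative assumptions."""
    return flash_tb * flash_usd_per_tb + hdd_tb * hdd_usd_per_tb

capacity_tb = 100
all_flash = array_cost(capacity_tb, 0)      # 100 TB of flash: 30000.0
hybrid = array_cost(10, capacity_tb - 10)   # 10 TB flash + 90 TB HDD: 5700.0
```

At these assumed prices, a 10/90 hybrid delivers the full capacity at a fraction of the all-flash cost, and the flash tier can still absorb the hot, mission-critical I/O.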

The movement of data between and among the various storage tiers can be managed using VVols, by the storage system’s own data-movement intelligence, or, better still, by both. Introduced with vSphere version 6, VVols associate virtual machines with the storage they are using.

VVols offer visibility and make it easier to merge storage and server administration. A consolidated storage solution should be able to deliver a Quality of Service (QoS) function that integrates with VVols and with Storage vMotion, or at least offer that function independently for situations where you are working outside VMware or dealing with non-virtualized workloads. It is also important for any storage system to be able to migrate data within the system itself, not just across different storage systems.

This is often done via a physical LUN migration among the system’s various tiers. A software-defined data center would be able to perform this data migration in real time by tagging all of the I/Os in a given array and adjusting each one’s data placement appropriately. As the I/O usage of the various workloads shifts, the system should make the necessary adjustments to data placement, ensuring that the most demanding workloads still receive the resources they need to maintain a high level of performance.
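The I/O-tagging idea can be sketched as a simple placement policy: count recent I/Os per volume and keep the hottest volumes on the flash tier, demoting the rest. This is a toy model of the data-movement intelligence described above; the tier names, volume names, and counts are assumptions:

```python
def place_by_io(io_counts: dict, flash_slots: int) -> dict:
    """Assign the busiest volumes to 'flash', the rest to 'hdd'.
    io_counts maps volume name -> recent I/O count."""
    ranked = sorted(io_counts, key=io_counts.get, reverse=True)
    return {vol: ("flash" if i < flash_slots else "hdd")
            for i, vol in enumerate(ranked)}

# Hypothetical volumes: a busy database, a web server, and a cold archive
tiers = place_by_io({"db01": 9000, "web01": 1200, "archive01": 50}, flash_slots=1)
# db01 lands on flash; web01 and archive01 stay on hdd
```

Re-running the policy as the counts shift is what moves data in real time: when the archive volume suddenly becomes hot, the next placement pass promotes it without administrator intervention.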

Implementing a single storage system with intelligent, QoS-driven data migration capabilities encourages workload consolidation. When the system can take on more of the work, the storage administrator has fewer demands on time and resources. A combination of flash and hard disk storage allows the administrator to distribute workloads appropriately, achieving high performance for mission-critical workloads while still meeting the organization’s overall capacity demands.

VMware Monitoring 

No matter which option you choose for managing resources, virtual machine memory, and workloads, one thing is always constant: you need visibility into what your environment is doing. Storage vMotion is also known to behave a bit strangely when combined with backups that use the vSphere API, or vice versa. Sign up for Snapwatcher today for better VMware snapshot management in your environment.
