More and more of our customers are using Kubernetes and want to monitor their VMware vSphere environment as well as the Kubernetes nodes and containers that run on top of it. Performance Analyzer is going to support all of these setups very soon (we’re weeks away). As always, we want to share how we set up our test environment to run a full Kubernetes environment on top of our vSphere 6.7 environment.

We’re going to use Ubuntu 18.10, conjure-up, juju and finally the common Kubernetes admin tools.


We’re going to need conjure-up on the Ubuntu management system that we use to deploy Kubernetes on vSphere. If you don’t have conjure-up installed yet, start by running:

sudo snap install conjure-up --classic

The next step is to launch the conjure-up Kubernetes installation wizard:

conjure-up kubernetes

Conjure-Up Kubernetes

Kubernetes on vSphere

Of course you can choose a more comprehensive installation, but we’re fine with the Kubernetes Core setup.

Choose Kubernetes on VMware vSphere

You need to scroll down to the Configure a New Cloud section and select vSphere.

VMware vSphere credentials

Make sure to use credentials with administrative rights, as VMs will be deployed. The API endpoint can be either the vCenter IP address or its FQDN.

vSphere network

The next important choice is the external network for the Kubernetes environment. While all Kubernetes pods (containers) automatically get an internal IP that is only reachable within the Kubernetes cluster, the external network is used for communication from the outside (typically your LAN).

vSphere Datastore

Then you need to select the datastore the Kubernetes nodes should be deployed on. One important note: VMware vSAN is currently not correctly supported by the conjure-up deployment. Worst case, run an NFS datastore on top of vSAN if you’re just deploying a development or testing environment.

virtual network

You can use either Flannel or Calico as your network plugin. The following information should help you find the right network technology for your environment. We’re going to use Calico in this blog post.


What is CoreOS Flannel? Definition

Flannel is a networking technology used to connect Linux Containers. It is distributed and maintained by CoreOS, the producer of the stripped-down CoreOS Linux operating system for containers, as well as the rkt container system that competes with Docker.

Although tailored for use with the CoreOS operating system, Flannel is also compatible with Docker. Flannel emerged as an alternative method for container networking, originating from an open-source project originally called Rudder, which was renamed Flannel in 2014. It is written in the Go programming language.


What is Calico?

Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports a broad range of platforms including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal services.

Calico combines flexible networking capabilities with run-anywhere security enforcement to provide a solution with native Linux kernel performance and true cloud-native scalability. Calico provides developers and cluster operators with a consistent experience and set of capabilities whether running in public cloud or on-prem, on a single node or across a multi-thousand node cluster.
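Once the deployment has finished, you can quickly check that Calico came up healthy. A minimal sketch, assuming the standard `calico-node` pod naming used by the Calico manifests (one calico-node pod runs per cluster node); the helper simply counts pods whose READY column (e.g. `1/1`) shows all containers up:

```shell
# Count pods whose READY column (second column, e.g. "1/1") shows all
# containers up. Reads standard `kubectl get pods --no-headers` output.
count_ready() {
  awk '{split($2, s, "/"); if (s[1] == s[2]) n++} END {print n+0 " pod(s) fully ready"}'
}

# Against the live cluster:
#   kubectl get pods -n kube-system --no-headers | grep calico | count_ready
```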

Kubernetes Network

During the installation you may be asked for a sudo password, so please enter it here if sudo permission is required.

Conjure-Up Overview

The default setup is done and you could click the Deploy button to start the installation. In this article, however, we want to customize the worker setup. The Kubernetes workers are the nodes that run the containers.

Kubernetes Worker

A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet and kube-proxy. See The Kubernetes Node section in the architecture design doc for more details.

We need a bit more resources than the default worker provides, so let’s configure the kubernetes-worker section.

Kubernetes Worker configuration

You can change the number of vCPU cores as well as the amount of memory here. We change the memory to 8 GB.


We are now ready to deploy Kubernetes on vSphere and let conjure-up do its magic.

Deploy Kubernetes on vSphere

Depending on your setup it will probably take about an hour to deploy everything, and you can follow the whole process in the wizard.

Init Phase conjure-up

Deploy Conjure-up Kubernetes

Finish deployment

Kubernetes on vSphere deployed

That’s it – all deployed and ready. You can either take a screenshot of all the URLs listed, to access the Kubernetes management interface and the other deployed services, or use Kubernetes (k8s) commands to retrieve that information again.

kubectl cluster-info

First steps in your Kubernetes on vSphere environment

kubectl get all

conjure-up automatically configures your local kubectl profile, so you can start typing commands right away.

kubectl get all

That command lists all resources of the default namespace. If you want to see the resources of all namespaces, use --all-namespaces as an argument.

kubectl get all --all-namespaces
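When that output gets long, a small filter helps to spot anything unhealthy. A minimal sketch; the awk column positions match the standard `kubectl get pods --all-namespaces` layout (NAMESPACE, NAME, READY, STATUS, …):

```shell
# Print every pod that is not Running or Completed, as "namespace/name -> status".
# Reads `kubectl get pods --all-namespaces --no-headers` output from stdin.
unhealthy_pods() {
  awk '$4 != "Running" && $4 != "Completed" {print $1 "/" $2 " -> " $4}'
}

# Against the live cluster:
#   kubectl get pods --all-namespaces --no-headers | unhealthy_pods
```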

juju status

Our Kubernetes on vSphere environment was deployed using conjure-up and juju. juju is a very powerful automation tool that makes controlling and expanding your Kubernetes setup quite easy (see the juju command line reference).

We start by getting the status of all components.

juju status

Now we need to know where our Kubernetes management interface is running, as well as the credentials for Kubernetes.

Get credentials Kubernetes

kubectl config view
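By default `kubectl config view` redacts the embedded credentials; adding `--raw` prints them. A quick sketch to show only the relevant lines, assuming the usual kubeconfig field names:

```shell
# Pull the endpoint and credential lines out of a kubeconfig dump.
kube_endpoints() {
  grep -E 'server:|username:|password:'
}

# Against the live config (--raw includes the otherwise-redacted credentials):
#   kubectl config view --raw | kube_endpoints
```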

You can find a full Ubuntu tutorial here as well, if you want to use different deployment options:

SSH into the Kubernetes nodes

juju also makes it very simple to enter any Kubernetes node via SSH to troubleshoot or reconfigure it.

juju status # to get information about the master and worker names
juju ssh kubernetes-master/0 # enter the master
sudo apt update # update package manager
sudo apt upgrade # install patches
sudo apt autoremove # remove obsolete packages
exit # get out of ssh

Add new worker

You can add a worker that is configured with the same resources as the existing workers:

juju add-unit kubernetes-worker

or you can change the resource constraints first and then add a new worker:

juju set-constraints kubernetes-worker cpu-cores=8 mem=32G
juju add-unit kubernetes-worker
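After `juju add-unit`, watch `juju status kubernetes-worker` until the new unit reports active; the node should then also register with Kubernetes. A small sketch to confirm that; the helper counts nodes whose STATUS column is Ready in standard `kubectl get nodes` output:

```shell
# Count nodes whose STATUS column (second column) is "Ready".
count_ready_nodes() {
  awk '$2 == "Ready" {n++} END {print n+0 " node(s) Ready"}'
}

# Against the live cluster, once the new unit is active:
#   kubectl get nodes --no-headers | count_ready_nodes
```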

A little troubleshooting guide


You might receive a message showing a lookup error when deploying new containers or when you want to enter a running container. In that case it’s likely that the Kubernetes master and/or worker nodes have no working name server configuration.

Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp: lookup juju-e1b2fa-2 on server misbehaving

The best way to solve this is to enter the nodes via SSH, try to ping the master or a worker, and fix the DNS configuration if required. You can fix name resolution errors the same way you would on any other Ubuntu system.
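A minimal sketch of that check, run on the node itself (after `juju ssh` into it). On Ubuntu 18.x the resolver is usually managed by systemd-resolved:

```shell
# Report whether a resolv.conf-style file has at least one nameserver entry.
check_resolver() {
  if grep -q '^nameserver' "$1"; then
    echo "resolver configured"
  else
    echo "no nameserver entry"
  fi
}

# On the affected node:
#   check_resolver /etc/resolv.conf
#   systemd-resolve --status | head -n 20   # what systemd-resolved actually uses
```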


You can always run a logs command for any running resources.

kubectl get pods # show all pods (container)
kubectl logs pod/my-wordpress-mariadb-0 # or any other pod name
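If you don’t know the exact pod name, a small filter over the pod list helps. A sketch, assuming standard `kubectl get pods --no-headers` output (the `wordpress` pattern is just an example):

```shell
# Print the names of pods whose name matches a pattern.
match_pods() {
  awk -v p="$1" '$1 ~ p {print $1}'
}

# Against the live cluster, e.g. to fetch logs for all matching pods:
#   kubectl get pods --no-headers | match_pods wordpress | xargs -n1 kubectl logs
```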
