Kubernetes Multi-Tenancy with vCluster
Implementing secure and scalable multi-tenancy in Kubernetes using vCluster virtual clusters
Multi-tenancy in Kubernetes has always been a challenging aspect of cluster management. This guide explores how vCluster provides a robust solution for implementing secure and scalable multi-tenancy in Kubernetes environments.
Prerequisites
- Basic understanding of Kubernetes
- A running Kubernetes cluster (v1.28+ recommended)
- kubectl CLI tool installed
- Helm v3.10.0+ (for advanced deployments)
What are Virtual Clusters?
Virtual clusters (vClusters) are fully functional Kubernetes clusters that run inside regular namespaces of a host Kubernetes cluster. Unlike traditional namespaces, virtual clusters provide stronger isolation with their own dedicated control plane components.
Key characteristics of vClusters include:
- Each vCluster has its own API server, controller manager, and data store (SQLite, etcd, etc.)
- They run inside a single pod within a namespace of the host cluster
- Resources created in a vCluster are synchronized to the host cluster
- They provide namespace-like resource efficiency with cluster-like isolation
vCluster Architecture
The vCluster architecture consists of several key components that work together to create a fully functional virtual Kubernetes environment:
Virtual Control Plane
The virtual control plane is deployed as a single pod, managed by a StatefulSet (the default) or a Deployment, and includes:
- Kubernetes API Server: The management interface for all API requests within the virtual cluster. vCluster supports various Kubernetes distributions, including vanilla Kubernetes (default), K3s, and k0s.
- Controller Manager: Maintains the state of Kubernetes resources like pods, ensuring they match desired configurations.
- Data Store: By default, an embedded SQLite database is used, but you can configure other options like etcd, MySQL, or PostgreSQL for production environments.
- Syncer: A critical component that synchronizes resources between the virtual and host clusters, enabling workload management on the host’s infrastructure.
- Scheduler (Optional): By default, vCluster reuses the host cluster scheduler to save computing resources. You can enable a virtual scheduler if you need custom scheduling capabilities.
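If you do need custom scheduling, the virtual scheduler can be switched on in vcluster.yaml. The key below is a sketch based on the v0.20+ configuration schema; verify it against the configuration reference for your vCluster version:

```yaml
# vcluster.yaml - enable a scheduler inside the virtual cluster
# (assumed v0.20+ schema; check the vCluster configuration reference)
controlPlane:
  advanced:
    virtualScheduler:
      enabled: true
```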
The Syncer Component
The syncer is the bridge between virtual and host clusters:
- It synchronizes low-level resources (Pods, ConfigMaps, Secrets) from the virtual cluster to the host cluster
- Higher-level resources (Deployments, StatefulSets, CRDs) exist only in the virtual cluster
- It provides bi-directional syncing to keep the virtual cluster updated with changes in the host cluster
- It can be configured to sync additional resources based on your needs
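Because all virtual clusters share the host namespace, the syncer rewrites object names so they cannot collide: a pod in the virtual cluster shows up in the host namespace with its virtual namespace and vCluster name appended. A minimal Python sketch of this naming scheme (the exact suffix format is an internal vCluster detail, shown here only to illustrate the idea):

```python
def host_name(name: str, namespace: str, vcluster: str) -> str:
    """Translate a virtual-cluster object name into the flat,
    collision-free name the syncer uses in the host namespace."""
    return f"{name}-x-{namespace}-x-{vcluster}"

# A pod named "nginx" in the virtual "default" namespace of the
# vcluster "my-vcluster" appears in the host namespace as:
print(host_name("nginx", "default", "my-vcluster"))
# nginx-x-default-x-my-vcluster
```

This is why `kubectl get pods` against the host namespace (shown later in this guide) lists names like `coredns-...-x-kube-system-x-...`.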
Networking in vCluster
vCluster provides several networking capabilities:
- Pod-to-Pod Communication: Uses the host cluster’s network infrastructure
- Service Discovery: Each vCluster deploys its own CoreDNS for internal service resolution
- Ingress Support: Can synchronize Ingress resources to utilize the host cluster’s ingress controller
- Cross-Cluster Communication: Services can be mapped between different virtual clusters
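Cross-cluster service mapping is driven by configuration. The fragment below is a hedged sketch: the `networking.replicateServices` keys follow the v0.20+ vcluster.yaml schema, and the namespace and service names are purely illustrative:

```yaml
# vcluster.yaml - service mapping sketch (assumed v0.20+ schema;
# namespaces and service names here are hypothetical)
networking:
  replicateServices:
    fromHost:
      - from: shared-services/postgres   # namespace/name on the host
        to: database/postgres            # namespace/name inside the vCluster
    toHost:
      - from: default/api                # service inside the vCluster
        to: team-a/team-a-api            # name exposed in the host namespace
```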
The Multi-Tenancy Challenge in Kubernetes
Traditional Kubernetes multi-tenancy approaches face several limitations:
Namespace-based Multi-Tenancy
- Limited isolation between tenants
- Shared cluster-scoped resources
- No tenant admin privileges
- Potential for resource conflicts
- CRD conflicts between different teams
Dedicated Clusters per Tenant
- High infrastructure costs
- Operational overhead
- Slow provisioning (30+ minutes for new clusters)
- Complex fleet management
- Significant resource waste
vCluster: The Best of Both Worlds
vCluster addresses these challenges by providing:
- Strong Isolation: Each tenant gets their own virtual Kubernetes cluster with dedicated control plane components
- Resource Efficiency: Virtual clusters share the underlying host cluster’s compute resources
- Tenant Autonomy: Tenants can be admins in their own virtual cluster without admin access to the host cluster
- Fast Provisioning: New virtual clusters can be created in seconds rather than minutes
- Conflict-Free CRD Management: Each virtual cluster manages its own CRDs independently
Setting Up a vCluster
Here’s a comprehensive guide to creating and managing virtual clusters:
Installing the vCluster CLI
# For Mac (Intel/AMD)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-amd64" && \
sudo install -c -m 0755 vcluster /usr/local/bin && \
rm -f vcluster
# For Mac (Silicon/ARM)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && \
sudo install -c -m 0755 vcluster /usr/local/bin && \
rm -f vcluster
# For Linux (AMD)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && \
sudo install -c -m 0755 vcluster /usr/local/bin && \
rm -f vcluster
# For Linux (ARM)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-arm64" && \
sudo install -c -m 0755 vcluster /usr/local/bin && \
rm -f vcluster
# Verify installation
vcluster --version
Example output:
vcluster version 0.18.1
Creating a Basic Virtual Cluster
# Create a namespace for your virtual cluster
kubectl create namespace team-a
Example output:
namespace/team-a created
# Create a virtual cluster
vcluster create my-vcluster --namespace team-a
Example output:
info Using helm binary: helm
info Using kubectl binary: kubectl
info Create vcluster my-vcluster in namespace team-a
info Create namespace team-a
info Create vcluster my-vcluster
info Waiting for vcluster to come up...
info Waiting for deployment/coredns to become ready...
info Waiting for statefulset/my-vcluster to become ready...
info Waiting for vcluster pod to become ready...
info Successfully created virtual cluster my-vcluster in namespace team-a
info Switched active kube context to vcluster_my-vcluster_team-a_[CLUSTER_NAME]
info
info Virtual cluster my-vcluster created successfully!
info
info Connect to the virtual cluster via:
info vcluster connect my-vcluster -n team-a
info
info Or directly via:
info kubectl get namespaces
# Connect to the virtual cluster
vcluster connect my-vcluster --namespace team-a
Example output:
info Using helm binary: helm
info Using kubectl binary: kubectl
info Starting port-forwarding to my-vcluster-0 in namespace team-a
info Switched active kube context to vcluster_my-vcluster_team-a_[CLUSTER_NAME]
info
info Virtual cluster kube config written to: ~/.kube/vcluster/config.yaml
info
info You can access the virtual cluster with:
info kubectl get namespaces
# Now you can use kubectl with your virtual cluster
kubectl get ns
Example output:
NAME STATUS AGE
default Active 2m
kube-system Active 2m
kube-public Active 2m
kube-node-lease Active 2m
Advanced Configuration with vcluster.yaml
Create a vcluster.yaml file to customize your virtual cluster:
# vcluster.yaml - Basic configuration
sync:
  # Sync resources from host to virtual cluster
  fromHost:
    # Sync real nodes instead of pseudo nodes
    nodes:
      enabled: true
    # Sync storage classes
    storageClasses:
      enabled: true
    # Sync ingress classes
    ingressClasses:
      enabled: true
  # Sync resources from virtual to host cluster
  toHost:
    # Sync ingress resources to use host ingress controller
    ingresses:
      enabled: true

# Configure the control plane
controlPlane:
  # Configure persistence
  backingStore:
    type: SQLite # Options: SQLite, etcd, MySQL, PostgreSQL
    persistence: true
    size: 5Gi
  # Configure ingress for API server
  ingress:
    enabled: true
    host: my-vcluster.example.com
  # Configure CoreDNS
  coredns:
    embedded: false # Set to true with vCluster Pro

# Configure networking
networking:
  # Configure isolation
  isolation: true
Apply the configuration:
vcluster create my-vcluster --namespace team-a --values vcluster.yaml
Example output:
info Using helm binary: helm
info Using kubectl binary: kubectl
info Create vcluster my-vcluster in namespace team-a
info Create namespace team-a
info Create vcluster my-vcluster
info Waiting for vcluster to come up...
info Waiting for deployment/coredns to become ready...
info Waiting for statefulset/my-vcluster to become ready...
info Waiting for vcluster pod to become ready...
info Successfully created virtual cluster my-vcluster in namespace team-a
info Switched active kube context to vcluster_my-vcluster_team-a_[CLUSTER_NAME]
info
info Virtual cluster my-vcluster created successfully!
info
info Connect to the virtual cluster via:
info vcluster connect my-vcluster -n team-a
info
info Or directly via:
info kubectl get namespaces
Upgrading an Existing Virtual Cluster
vcluster create --upgrade my-vcluster --namespace team-a --values vcluster.yaml
Example output:
info Using helm binary: helm
info Using kubectl binary: kubectl
info Upgrade vcluster my-vcluster in namespace team-a
info Successfully upgraded vcluster my-vcluster in namespace team-a
info Switched active kube context to vcluster_my-vcluster_team-a_[CLUSTER_NAME]
info
info Virtual cluster my-vcluster upgraded successfully!
info
info Connect to the virtual cluster via:
info vcluster connect my-vcluster -n team-a
info
info Or directly via:
info kubectl get namespaces
Advanced vCluster Features
Multi-Distribution Support
vCluster supports different Kubernetes distributions:
# vcluster.yaml
controlPlane:
  distro:
    name: k3s # Options: k8s, k3s, k0s
    version: "v1.26.4" # Specify the Kubernetes version
External Access via Ingress
Configure external access to your virtual cluster:
# vcluster.yaml
controlPlane:
  ingress:
    enabled: true
    host: my-vcluster.example.com
    ingressClassName: nginx
  proxy:
    extraSANs:
      - my-vcluster.example.com
Resource Syncing Options
Configure what resources to sync between host and virtual clusters:
# vcluster.yaml
sync:
  fromHost:
    # Sync nodes from host to virtual cluster
    nodes:
      enabled: true
    # Sync storage classes
    storageClasses:
      enabled: true
  toHost:
    # Sync persistent volume claims
    persistentVolumeClaims:
      enabled: true
    # Sync ingress resources
    ingresses:
      enabled: true
Multi-Tenancy Policies
Implement security policies for multi-tenancy:
# vcluster.yaml - Pro features
policies:
  # Configure network policies
  networkPolicy:
    enabled: true
    default:
      ingressRule:
        from:
          - podSelector: {}
      egressRule:
        to:
          - podSelector: {}
  # Configure pod security standards
  podSecurityStandard: restricted
  # Configure resource quotas
  resourceQuota:
    enabled: true
    quota:
      hard:
        cpu: "10"
        memory: 20Gi
        pods: "20"
Multi-Tenancy Use Cases
Development Environments
vCluster provides significant advantages for development teams:
- Team Isolation: Each development team gets their own virtual Kubernetes cluster with full admin rights, allowing them to install CRDs, create namespaces, and configure RBAC without affecting other teams.
- Environment Parity: Developers can work in environments that closely match production, including identical Kubernetes versions, API resources, and CRDs.
- Self-Service Provisioning: Teams can create and manage their own virtual clusters without waiting for platform teams, accelerating development cycles.
- Resource Efficiency: Multiple development environments can share the same underlying infrastructure, reducing costs by up to 70% compared to dedicated clusters.
Example Implementation:
# team-dev-vcluster.yaml
sync:
  fromHost:
    # Sync storage classes from host
    storageClasses:
      enabled: true
  toHost:
    # Allow developers to create their own ingress resources
    ingresses:
      enabled: true

# Set resource limits for the dev environment
isolation:
  resourceQuota:
    enabled: true
    quota:
      hard:
        cpu: "4"
        memory: 8Gi
        pods: "20"

# Enable metrics for monitoring
monitoring:
  enabled: true
SaaS Platforms
For multi-tenant SaaS applications, vCluster offers:
- Customer Isolation: Each customer gets a dedicated virtual cluster, ensuring complete workload isolation and preventing noisy neighbor problems.
- Customization Flexibility: Customers can have different versions of your application, custom configurations, or even their own extensions without affecting other customers.
- Simplified Onboarding: Automate the provisioning of new customer environments with infrastructure-as-code approaches, reducing time-to-value.
- Operational Efficiency: Manage all customer environments from a central control plane while maintaining strong isolation boundaries.
- Data Residency Compliance: Deploy customer virtual clusters in specific regions to meet data sovereignty requirements while managing them centrally.
Example Implementation:
# customer-saas-vcluster.yaml
# Unique configuration per customer
metadata:
  labels:
    customer: acme-corp
    tier: enterprise
    region: eu-west-1

# Enforce strict security policies
policies:
  podSecurityStandard: restricted
  networkPolicy:
    enabled: true
    default:
      ingressRule:
        from:
          - podSelector:
              matchLabels:
                customer: acme-corp

# Dedicated storage for customer data
persistence:
  storageClass: customer-encrypted-storage
  size: 100Gi
CI/CD Pipelines
vCluster transforms CI/CD workflows:
- Ephemeral Testing Environments: Create disposable, isolated Kubernetes environments for each test run or pull request.
- Parallel Testing: Run multiple test suites simultaneously without resource conflicts or test pollution.
- Infrastructure Testing: Test infrastructure changes, Kubernetes operators, and CRDs in isolated environments before applying to production.
- Cost Optimization: Automatically provision and tear down environments only when needed, reducing infrastructure costs by up to 90% compared to always-on test clusters.
- Pipeline Acceleration: Reduce test environment provisioning time from minutes to seconds, significantly speeding up CI/CD pipelines.
Example Implementation:
# ci-pipeline-vcluster.yaml
# Lightweight configuration for CI
controlPlane:
  distro:
    name: k3s # Lightweight distribution for faster startup

# Don't persist anything
persistence:
  enabled: false

# Minimal resource requirements
resources:
  vcluster:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 1Gi

# Automatically clean up after tests
cleanup:
  enabled: true
  timeout: 3600 # 1 hour timeout
Edge Computing
For edge and IoT scenarios, vCluster enables:
- Resource Optimization: Run multiple isolated Kubernetes environments on resource-constrained edge devices.
- Consistent Management: Use the same Kubernetes APIs and tools across cloud and edge deployments.
- Application Isolation: Separate different edge applications with strong boundaries while sharing the underlying hardware.
- Simplified Updates: Update applications independently without affecting other workloads on the same edge device.
- Reduced Bandwidth Usage: Deploy and manage multiple virtual clusters with minimal overhead, optimizing for limited network connectivity.
Example Implementation:
# edge-device-vcluster.yaml
# Ultra-lightweight configuration
controlPlane:
  distro:
    name: k3s
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

# Optimize for edge devices
sync:
  fromHost:
    nodes:
      enabled: true
      syncAllNodes: false # Only sync the local node

# Disable features not needed at the edge
disableFeatures:
  - metrics-server
  - coredns # Use host DNS instead

# Local storage only
persistence:
  storageClass: local-storage
Benefits of vCluster Multi-Tenancy
Cost Reduction
- Run fewer physical clusters (consolidation)
- Better resource utilization (up to 80% reduction in infrastructure costs)
- Reduced operational overhead
- Lower management complexity
Enhanced Security
- Stronger isolation than namespaces
- Limited permissions in the host cluster
- Independent security policies per tenant
- Reduced attack surface
Operational Simplicity
- Unified management plane
- Consistent platform tooling
- Simplified cluster lifecycle management
- Reduced administrative burden
Developer Experience
- Self-service provisioning
- Full admin access within their environment
- Faster onboarding and development cycles
- Freedom to experiment without impacting others
Best Practices for vCluster Multi-Tenancy
1. Resource Management
Effective resource management is critical for multi-tenant environments:
- Implement Resource Quotas: Define strict resource quotas for each virtual cluster to prevent resource starvation:

  isolation:
    resourceQuota:
      enabled: true
      quota:
        hard:
          cpu: "8"
          memory: 16Gi
          pods: "50"
          services: "20"
          persistentvolumeclaims: "10"

- Configure Limit Ranges: Set default resource limits and requests for pods to prevent unconstrained resource usage:

  apiVersion: v1
  kind: LimitRange
  metadata:
    name: default-limits
  spec:
    limits:
      - default:
          cpu: 500m
          memory: 512Mi
        defaultRequest:
          cpu: 100m
          memory: 128Mi
        type: Container

- Implement Pod Disruption Budgets: Ensure service availability during node maintenance or failures:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: app-pdb
  spec:
    minAvailable: 2
    selector:
      matchLabels:
        app: critical-service

- Use Vertical Pod Autoscaling: Automatically adjust resource requests based on actual usage patterns to optimize resource allocation.
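Vertical Pod Autoscaling requires the VPA operator to be installed inside the virtual cluster first. A minimal sketch, with a hypothetical Deployment name:

```yaml
# Hypothetical VPA for a tenant workload; assumes the
# vertical-pod-autoscaler operator is already installed.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: tenant-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tenant-app
  updatePolicy:
    updateMode: "Auto" # VPA may evict pods to apply new requests
```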
2. Network Security
Secure network configuration is essential for multi-tenant isolation:
- Default Deny Network Policies: Start with a default deny policy and explicitly allow required traffic:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny
  spec:
    podSelector: {}
    policyTypes:
      - Ingress
      - Egress

- Tenant Isolation: Ensure traffic can only flow within a tenant’s virtual cluster:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: tenant-isolation
  spec:
    podSelector: {}
    policyTypes:
      - Ingress
    ingress:
      - from:
          - podSelector: {} # Only allow traffic from pods in the same namespace

- Service Mesh Integration: For advanced use cases, integrate with service mesh solutions like Istio or Linkerd to implement fine-grained traffic control, mTLS, and observability.

- Egress Control: Restrict outbound traffic to only required external services:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: egress-control
  spec:
    podSelector: {}
    policyTypes:
      - Egress
    egress:
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8 # Internal services
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 169.254.0.0/16 # Block metadata services
        ports:
          - port: 443
            protocol: TCP
3. Monitoring and Observability
Comprehensive monitoring is crucial for multi-tenant environments:
- Centralized Logging: Implement a unified logging solution that collects logs from both host and virtual clusters while maintaining tenant separation:
  - Use Prometheus Operator with multi-tenancy support
  - Configure log forwarding with tenant labels for identification
  - Implement log retention policies per tenant

- Custom Metrics: Collect and expose tenant-specific metrics:

  apiVersion: monitoring.coreos.com/v1
  kind: ServiceMonitor
  metadata:
    name: tenant-metrics
    labels:
      tenant: tenant-a
  spec:
    selector:
      matchLabels:
        app: tenant-service
    endpoints:
      - port: metrics
        interval: 15s

- Resource Usage Dashboards: Create tenant-specific dashboards showing resource utilization, costs, and performance metrics.

- Alerting: Configure tenant-specific alerting with appropriate escalation paths:

  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: tenant-alerts
    labels:
      tenant: tenant-a
  spec:
    groups:
      - name: tenant.rules
        rules:
          - alert: TenantHighCPUUsage
            expr: sum(rate(container_cpu_usage_seconds_total{tenant="tenant-a"}[5m])) > 7
            for: 10m
            labels:
              severity: warning
              tenant: tenant-a
            annotations:
              summary: High CPU usage for tenant
4. Backup and Disaster Recovery
Ensure data protection across all virtual clusters:
- Regular Backups: Implement automated backups for each virtual cluster:

  apiVersion: velero.io/v1
  kind: Schedule
  metadata:
    name: tenant-a-daily-backup
  spec:
    schedule: "0 0 * * *"
    template:
      includedNamespaces:
        - "*"
      labelSelector:
        matchLabels:
          tenant: tenant-a
      ttl: 720h # 30 days

- Backup Validation: Regularly test backup restoration to ensure recoverability.

- Cross-Region Replication: For critical workloads, implement cross-region backup strategies.

- Disaster Recovery Runbooks: Create and test detailed recovery procedures for different failure scenarios.
5. Upgrade Strategy
Manage upgrades safely across multiple virtual clusters:
- Staged Rollouts: Implement a progressive upgrade strategy:
  - Test upgrades in development virtual clusters
  - Roll out to non-critical tenants
  - Gradually upgrade critical tenant environments

- Version Skew Policy: Define clear policies for supported version differences between host and virtual clusters.

- Rollback Plans: Ensure you can quickly revert to previous versions if issues are detected.

- Tenant Communication: Establish clear communication channels for planned maintenance and upgrades.
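The staged-rollout idea can be sketched as a small helper that groups tenants into ordered upgrade waves; each wave is upgraded and verified before the next begins. The tenant names and ring assignments below are hypothetical:

```python
from itertools import groupby

# Hypothetical tenant inventory: (name, ring); lower rings upgrade first.
TENANTS = [
    ("team-dev", 0),   # development virtual clusters: upgrade first
    ("team-qa", 1),    # non-critical tenants
    ("acme-corp", 2),  # critical tenants: upgrade last
    ("beta-lab", 1),
]

def upgrade_waves(tenants):
    """Return tenant names grouped into ordered upgrade waves."""
    ordered = sorted(tenants, key=lambda t: t[1])
    return [[name for name, _ in group]
            for _, group in groupby(ordered, key=lambda t: t[1])]

for wave in upgrade_waves(TENANTS):
    # In practice each name would feed a command such as
    # `vcluster create --upgrade <name> -n <namespace> -f vcluster.yaml`
    print(wave)
```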
6. Authentication and Authorization
Implement robust access controls:
- RBAC Policies: Define fine-grained RBAC policies for each virtual cluster:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: tenant-admin
    namespace: tenant-a
  subjects:
    - kind: Group
      name: tenant-a-admins
      apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: ClusterRole
    name: admin
    apiGroup: rbac.authorization.k8s.io

- Identity Integration: Integrate with enterprise identity providers (OIDC, LDAP) for centralized authentication.

- Just-in-Time Access: Implement temporary elevated access with automatic expiration for administrative tasks.

- Audit Logging: Enable comprehensive audit logging for all administrative actions:

  apiVersion: audit.k8s.io/v1
  kind: Policy
  rules:
    - level: RequestResponse
      resources:
        - group: ""
          resources: ["secrets", "configmaps"]
    - level: Metadata
      resources:
        - group: ""
          resources: ["pods", "services"]
By following these best practices, organizations can build secure, scalable, and efficient multi-tenant Kubernetes environments using vCluster, maximizing the benefits while minimizing operational overhead.
Conclusion
vCluster provides an elegant solution to the Kubernetes multi-tenancy challenge, offering the isolation of dedicated clusters with the resource efficiency of namespaces. By implementing virtual clusters, organizations can significantly reduce their Kubernetes infrastructure costs while providing secure and isolated environments for their tenants.
The ability to create lightweight, isolated Kubernetes environments within a shared infrastructure makes vCluster an excellent choice for organizations looking to optimize their Kubernetes deployments while maintaining strong isolation between different teams, applications, or customers.
For more information, visit the official vCluster documentation.
References
- vCluster Documentation
- vCluster Architecture
- vCluster Configuration Reference
- Kubernetes Multi-Tenancy Working Group
- CNCF Virtual Cluster Project
- vCluster GitHub Repository
Exploring Your Virtual Cluster
After setting up your virtual cluster, you can explore its resources:
# List pods in all namespaces
kubectl get pods -A
Example output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-565d847f94-b8k5j 1/1 Running 0 5m
kube-system metrics-server-77c969f8c-vdqmx 1/1 Running 0 5m
# Check the nodes in your virtual cluster
kubectl get nodes
Example output with pseudo nodes (default):
NAME STATUS ROLES AGE VERSION
pseudonode-3a2a5dc5-c2b3-4a1b-9c8e-12c Ready <none> 5m v1.28.3
Example output with real nodes (when sync.fromHost.nodes.enabled is true):
NAME STATUS ROLES AGE VERSION
ip-10-0-1-20.ec2.internal Ready control-plane 15h v1.28.3
ip-10-0-2-30.ec2.internal Ready <none> 15h v1.28.3
ip-10-0-3-40.ec2.internal Ready <none> 15h v1.28.3
# Deploy a test application
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=ClusterIP
Example output:
deployment.apps/nginx created
service/nginx exposed
# Check the deployment and service
kubectl get deployment,service
Example output:
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6m
service/nginx ClusterIP 10.96.123.156 <none> 80/TCP 20s
# View the resources in the host cluster
kubectl --context=your-host-context -n team-a get pods
Example output:
NAME READY STATUS RESTARTS AGE
my-vcluster-0 2/2 Running 0 6m
coredns-565d847f94-b8k5j-x-kube-system-x-my-vcluster 1/1 Running 0 5m
nginx-78f5d695bd-xyz12-x-default-x-my-vcluster 1/1 Running 0 45s
Deleting a Virtual Cluster
When you’re done with your virtual cluster, you can delete it:
vcluster delete my-vcluster -n team-a
Example output:
info Delete vcluster my-vcluster in namespace team-a
info Successfully deleted virtual cluster my-vcluster in namespace team-a