Kubernetes (k8s): Let's Understand It and Master the Commands
TL;DR
Get the core concepts of Kubernetes and learn the essential kubectl commands.
Kubernetes (k8s) is the modern orchestration solution for containerized workloads. Containerization itself evolved from earlier isolation technologies: BSD jails, Plan 9's per-process namespaces and mount system, and LXC; even WSL2 uses Plan 9's 9P protocol for file sharing.
minikube and MicroK8s are similar local solutions, but kind is especially suitable for local testing (fast, Docker-based). The best lightweight option is k3s, which is perfect for edge deployments and fully supports ARM-based systems like the Raspberry Pi.
Linux containerization works by leveraging specific features of the Linux kernel to run isolated user-space environments (containers) that share the host machine’s kernel. This is distinct from virtual machines (VMs), which each run a full, separate guest operating system and kernel.
The core mechanisms that enable this are namespaces and control groups (cgroups):
Namespaces: These provide the isolation by partitioning kernel resources to give each container its own isolated view of the system. Each container effectively gets its own:
Process IDs (PID): Containers only see their own processes, with an independent process tree, rather than all processes on the host.
Network: Each container has its own network interfaces, IP addresses, and routing tables.
Mount Points: This provides an isolated view of the file system, making it appear as a self-contained environment using a layered file system (like OverlayFS).
User IDs (UID): A user namespace allows the root user inside a container to be mapped to an unprivileged user on the host, enhancing security.
Hostname/Domainname (UTS): Allows each container to have its own hostname and domain name.
Control Groups (cgroups): These manage and limit the resources that a container can consume, ensuring that no single container can monopolize the host’s resources (CPU, memory, disk I/O, network bandwidth) and impact other containers or the host system.
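Both mechanisms can be observed directly on any Linux host, no Kubernetes required: each process's namespace memberships and cgroup assignment are exposed under /proc. A quick sketch:

```shell
# Each entry is a symlink to a namespace inode; two processes in the
# same namespace show the same inode number.
ls -l /proc/self/ns/          # pid, net, mnt, uts, user, cgroup ...

# The cgroup hierarchy currently limiting this process:
cat /proc/self/cgroup
```

Container runtimes build on exactly these files: creating a container is, at the kernel level, creating new entries under `ns/` and new cgroup memberships for the containerized process.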
How the Process Works
When a user runs a container (e.g., using Docker or Podman), the container engine performs several steps:
Image Retrieval: The engine pulls a container image (a read-only template with application code, libraries, and dependencies) from a registry if it’s not already cached locally.
Environment Setup: A writable layer is added on top of the image layers to allow for changes during runtime.
Isolation and Resource Allocation: The container runtime uses Linux kernel system calls to create the necessary namespaces for the new containerized process and applies cgroups limits for resource governance.
Execution: The containerized application’s start command is executed within this newly isolated and resource-limited environment
One-Liner Summary
| Term | One-liner |
|------|-----------|
| Cluster | Whole system |
| Node | Single server |
| Pod | Running container(s) |
| Deployment | App definition + manager |
| Service | Stable network access |
| ConfigMap | Configuration |
| Secret | Sensitive data |
| kubectl | Command tool |
| Helm | Package manager |
| Scale | Add/remove copies |
| Expose | Make accessible |
| Port-forward | Local tunnel |
| metrics-server | Powers kubectl top |
Quick Analogy
Restaurant (Cluster)
├── Chefs (Nodes)
├── Dishes (Pods)
├── Menu (Deployment) - defines dish recipe
├── Waiters (Service) - brings dishes to customers
├── Recipes (ConfigMaps) - cooking instructions
├── Secret Sauce (Secrets) - secret ingredients
└── Restaurant Manager (kubectl) - runs the place
Core Components:
kube-apiserver - Kubernetes API server
kube-controller-manager - Manages controllers
kube-scheduler - Schedules pods to nodes
etcd - Cluster datastore (may run as static pods on the control plane)
Network:
kube-proxy - Network proxy on each node
core-dns / kube-dns - DNS service for pods
CNI plugins (Calico, Flannel, Cilium, etc.)
Add-ons:
metrics-server - Resource metrics
ingress-nginx / other ingress controllers
Storage provisioners (CSI drivers)
# View everything in kube-system
kubectl get all -n kube-system
# See system pods
kubectl get pods -n kube-system
# Check system services
kubectl get svc -n kube-system
# View critical pods
kubectl get pods -n kube-system -o wide | grep -E "kube-apiserver|etcd|scheduler"
Cluster
What: A set of machines (nodes) running Kubernetes
Think: The entire Kubernetes “data center”
Example: Your 3 Raspberry Pis + 1 master = 1 cluster
Node
What: A single machine (VM or physical) in the cluster
Think: A “worker bee” or “server”
Example: kubectl get nodes → node-1, node-2, node-3
Pod
What: Smallest deployable unit, contains 1+ containers
Think: A “logical host” for containers
Example: Nginx + Redis containers together in one pod
Life: Ephemeral (born, live, die)
Deployment
What: Manages pods, handles updates, rollbacks
Think: “Pod manager” or “application definition”
Example: kubectl create deployment nginx --image=nginx
Does: Creates/updates pods, scaling, rollbacks
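The same deployment can be written declaratively; a minimal manifest sketch (names and image tag are illustrative), applied with `kubectl apply -f deploy.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                  # desired number of pods
  selector:
    matchLabels:
      app: nginx               # must match the pod template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable  # pin a tag in real deployments
          ports:
            - containerPort: 80
```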
Service
What: Stable network endpoint for pods
Think: “Load balancer” or “stable IP/DNS name”
Example: kubectl expose deployment nginx --port=80
Types: ClusterIP (internal), NodePort, LoadBalancer
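A minimal Service manifest sketch matching the pods above (names are illustrative); the selector is what ties the stable endpoint to the ephemeral pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP        # internal only; NodePort/LoadBalancer expose externally
  selector:
    app: nginx           # routes traffic to pods carrying this label
  ports:
    - port: 80           # port the service listens on
      targetPort: 80     # port the container serves on
```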
ConfigMap
What: Stores configuration data as key-value pairs
Think: “Environment variables file”
Example: Database URL, feature flags, settings
Not: For secrets (use Secret for that)
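A minimal ConfigMap sketch (the name `app-config` and keys are hypothetical); pods can consume it via `envFrom.configMapRef` or as a mounted volume:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "postgres://db.internal:5432/app"   # plain key-value pairs
  FEATURE_X: "true"                                  # values are always strings
```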
Secret
What: Stores sensitive data (base64 encoded)
Think: “Password manager” for Kubernetes
Example: API keys, passwords, TLS certificates
Warning: Base64 is NOT encryption!
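To see why base64 is not protection, here is exactly how easily a stored Secret value reverses (the value `supersecret` is just an example):

```shell
# Encode a value the way Kubernetes stores it inside a Secret object:
encoded=$(printf '%s' 'supersecret' | base64)
echo "$encoded"                       # prints c3VwZXJzZWNyZXQ=

# Anyone with read access to the Secret can trivially decode it:
printf '%s' "$encoded" | base64 -d    # prints supersecret
```

This is why RBAC on Secrets and encryption at rest for etcd matter.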
Commands/Tools
kubectl (kube-control)
What: Command-line tool for Kubernetes
Think: “k8s Swiss Army knife”
Example: kubectl get pods, kubectl apply -f file.yaml
Helm
What: Package manager for Kubernetes
Think: “apt/yum for Kubernetes”
Example: helm install nginx bitnami/nginx
Chart: Helm package (like a .deb/.rpm file)
Operations
Scale
What: Change number of pod replicas
Think: “Add/remove copies” of your app
Example: kubectl scale deployment nginx --replicas=5
Auto: Horizontal Pod Autoscaler (HPA) does this automatically
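An HPA can be created with `kubectl autoscale`, or declaratively; a minimal `autoscaling/v2` manifest sketch (target names and thresholds are illustrative, and metrics-server must be installed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```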
Expose
What: Make a deployment accessible
Think: “Open a door” to your app
Example: kubectl expose deployment nginx --port=80 --type=LoadBalancer
Port-forward
What: Tunnel to a pod/service from local machine
Think: “SSH port forwarding” for Kubernetes
Example: kubectl port-forward pod/nginx 8080:80
Other Important Concepts
Namespace
What: Virtual cluster inside a physical cluster
Think: “Folders” for organizing resources
Example: dev, staging, production namespaces
Default: default, kube-system (system pods)
Ingress
What: Manages external HTTP/S access
Think: “NGINX/Apache” for Kubernetes
Example: Route app.example.com → service
Needs: Ingress Controller (like nginx-ingress)
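A minimal Ingress sketch routing a hostname to the service above (host, class, and names are illustrative; an ingress controller must already be running):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx          # must match an installed controller
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx        # the Service to route to
                port:
                  number: 80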
Persistent Volume (PV) / Persistent Volume Claim (PVC)
What: Storage that persists across pod restarts
Think: “External hard drive” for pods
PV: The actual storage (disk)
PVC: “Request” for storage by a pod
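A minimal PVC sketch (name and size are illustrative); the cluster's storage provisioner binds it to a matching PV, and a pod then references the claim in its `volumes` section:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce      # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi       # how much space the pod is asking for
```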
DaemonSet
What: Runs one pod per node
Think: “Node-level service”
Example: Log collectors, monitoring agents
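A minimal DaemonSet sketch for a per-node log tailer (the name, image, and log path are illustrative); note there is no `replicas` field, since the scheduler places one pod on every node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
        - name: agent
          image: busybox
          command: ["sh", "-c", "tail -F /var/log/syslog"]
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log     # reads the node's own logs
```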
StatefulSet
What: For stateful apps (databases)
Think: “Pods with stable identity/storage”
Example: MySQL, PostgreSQL, Elasticsearch
Job / CronJob
What: Run a task once / on schedule
Think: “crontab” for Kubernetes
Job: Run once to completion
CronJob: Run on schedule (like */5 * * * *)
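A minimal CronJob sketch (name, image, and command are illustrative) using the same five-field schedule syntax as crontab:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup
spec:
  schedule: "*/5 * * * *"        # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: busybox
              command: ["sh", "-c", "echo cleaning up"]
```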
ReplicaSet
What: Maintains a stable set of pod replicas
Think: “Pod replicas manager” (Deployment uses this internally)
You use: Deployment (which creates ReplicaSet)
Label / Selector
What: Key-value tags for filtering/grouping
Think: “Hashtags” for Kubernetes objects
Example: app: nginx, env: prod
Selector: “Find objects with these labels”
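The label/selector pairing looks like this in manifests (fragments only, label keys are illustrative):

```yaml
# Labels set on a pod (or pod template):
metadata:
  labels:
    app: nginx
    env: prod
---
# ...and matched by a selector, e.g. in a Service spec:
spec:
  selector:
    app: nginx
```

The same labels drive kubectl filtering, e.g. `kubectl get pods -l app=nginx` or `kubectl get pods -l 'env in (prod,staging)'`.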
Metrics Server
What: Cluster add-on that collects resource usage data and feeds the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA).
Key Aspects of Metrics Server:
Functionality: It queries Kubelets on each node to fetch metrics, which are then exposed through the Metrics API.
Purpose: Exclusively designed for autoscaling pipelines and real-time monitoring; it does not store historical data, making it unsuitable for long-term monitoring solutions.
Installation: While pre-installed on managed services like Azure AKS, it often requires manual installation on Amazon EKS or self-managed clusters using YAML manifests or Helm.
Components: It relies on the Kubernetes API to track pods and nodes, and caches pod health information.
Usage & Limitations:
Commands: Used primarily for kubectl top node and kubectl top pod.
Limitations: Not intended for anything beyond autoscaling and quick spot checks; it is not a replacement for a full monitoring stack such as Prometheus.
Basic Cluster Information
# Check kubectl version and connectivity
kubectl version
kubectl version --client
kubectl cluster-info
# Get cluster nodes
kubectl get nodes
kubectl get nodes -o wide # More details
# View cluster events
kubectl get events
Working with Pods
# List pods
kubectl get pods
kubectl get pods -A # All namespaces
kubectl get pods -o wide # More details
kubectl get pods -w # Watch mode
# Get detailed pod info
kubectl describe pod <pod-name>
kubectl describe pod <pod-name> -n <namespace>
# Pod logs
kubectl logs <pod-name>
kubectl logs <pod-name> -f # Follow logs
kubectl logs <pod-name> -c <container-name> # Multi-container pods
# Execute commands in pod
kubectl exec <pod-name> -- <command>
kubectl exec -it <pod-name> -- /bin/bash # Interactive shell
# Delete pod
kubectl delete pod <pod-name>
Working with Deployments
# List deployments
kubectl get deployments
kubectl get deploy
# Create deployment
kubectl create deployment <name> --image=<image>
# Scale deployment
kubectl scale deployment <name> --replicas=3
# Update deployment (rolling update)
kubectl set image deployment/<name> <container>=<new-image>
# Rollback deployment
kubectl rollout undo deployment/<name>
kubectl rollout history deployment/<name>
# Check rollout status
kubectl rollout status deployment/<name>
Services & Networking
# List services
kubectl get services
kubectl get svc
# Expose deployment as service
kubectl expose deployment <name> --port=80 --target-port=8080 --type=LoadBalancer
# Port forwarding
kubectl port-forward <pod-name> 8080:80
kubectl port-forward svc/<service-name> 8080:80
# Get endpoints
kubectl get endpoints
ConfigMaps & Secrets
# ConfigMaps
kubectl get configmaps
kubectl create configmap <name> --from-literal=key=value
kubectl create configmap <name> --from-file=path/to/file
# Secrets
kubectl get secrets
kubectl create secret generic <name> --from-literal=password=secret
echo -n 'secret' | base64 # For manual secret creation
Namespaces
# List namespaces
kubectl get namespaces
kubectl get ns
# Create namespace
kubectl create namespace <name>
# Switch context namespace
kubectl config set-context --current --namespace=<namespace>
# Apply commands to specific namespace
kubectl get pods -n <namespace>
YAML Operations
# Create resources from YAML
kubectl apply -f file.yaml
kubectl apply -f ./directory/
# Dry run
kubectl apply -f file.yaml --dry-run=client
# Generate YAML (without creating)
kubectl create deployment <name> --image=nginx --dry-run=client -o yaml > deploy.yaml
# Edit resource
kubectl edit deployment/<name>
# Get resource YAML
kubectl get pod <pod-name> -o yaml
kubectl get deployment <name> -o yaml > output.yaml
Debugging & Diagnostics
# Resource usage (requires metrics-server)
kubectl get deployment metrics-server -n kube-system
# If you see: Error from server (NotFound): deployments.apps "metrics-server" not found
# install it:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# On homelab clusters you may need to relax kubelet TLS checks:
KUBE_EDITOR="nano" kubectl edit deployment metrics-server -n kube-system
# Add to the container args:
#   - --kubelet-insecure-tls
#   - --kubelet-preferred-address-types=InternalIP
kubectl top nodes
kubectl top pods
# API resources
kubectl api-resources
kubectl api-versions
# Explain resource fields
kubectl explain pod
kubectl explain pod.spec.containers
# Run temporary pod for debugging
kubectl run debug-pod --image=busybox --rm -it --restart=Never -- /bin/sh
Context & Configuration
# List contexts
kubectl config get-contexts
# Switch context
kubectl config use-context <context-name>
# Current context
kubectl config current-context
# View config
kubectl config view
Handy Shortcuts & Aliases
# Common aliases (add to ~/.bashrc or ~/.zshrc)
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kgp='kubectl get pods'
alias kgn='kubectl get nodes'
alias kaf='kubectl apply -f'
alias kdf='kubectl delete -f'
alias kl='kubectl logs'
alias kex='kubectl exec -it'
# Enable autocompletion
source <(kubectl completion bash) # For bash
source <(kubectl completion zsh) # For zsh
Workflow Examples
Deploy an app:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get svc nginx
Debug a failing pod:
kubectl get pods
kubectl describe pod <problem-pod>
kubectl logs <problem-pod>
kubectl exec -it <problem-pod> -- /bin/sh
Update an app:
kubectl set image deployment/myapp app=myapp:v2
kubectl rollout status deployment/myapp
kubectl rollout undo deployment/myapp # If something goes wrong
Quick Reference Sheet
Create/Update: kubectl apply -f file.yaml
Delete: kubectl delete -f file.yaml
Watch: kubectl get pods -w
Follow logs: kubectl logs -f <pod>
Shell access: kubectl exec -it <pod> -- sh
Port forward: kubectl port-forward <pod> 8080:80
Get with wide: kubectl get pods -o wide
Describe: kubectl describe <resource> <name>
Get the Kubeconfig File
# Inspect (or create) your local kubeconfig
nano ~/.kube/config
# Copy the admin kubeconfig from the control plane (user/host are placeholders):
scp <user>@<control-plane-ip>:/etc/kubernetes/admin.conf ~/.kube/config
# Set the cluster
kubectl config set-cluster kubernetes \
--server=https://192.168.99.37:6443 \
--certificate-authority=ca.crt
# Set credentials (you need the cert and key files)
kubectl config set-credentials kubernetes-admin \
--client-certificate=admin.crt \
--client-key=admin.key
# Set context
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin
# Use the context
kubectl config use-context kubernetes-admin@kubernetes
# Test connectivity
kubectl cluster-info
# Should show: Kubernetes control plane is running at https://192.168.99.37:6443
# Get nodes
kubectl get nodes
# Get pods (all namespaces)
kubectl get pods -A
# If you get certificate errors, you can bypass (not recommended for production):
kubectl config set-cluster kubernetes \
--server=https://192.168.99.37:6443 \
--insecure-skip-tls-verify=true
# Or copy the CA cert properly
scp <user>@<control-plane-ip>:/etc/kubernetes/pki/ca.crt ~/.kube/
# Then reference it in your config
# Test network connectivity
telnet 192.168.99.37 6443
# Or
nc -zv 192.168.99.37 6443
# If blocked, open port on control plane (if you control it)
sudo ufw allow 6443/tcp
# Or configure firewall rules
Something I hate to do
# AWS EKS
aws eks update-kubeconfig --name cluster-name --region region
# Google GKE
gcloud container clusters get-credentials cluster-name --region region
# Azure AKS
az aks get-credentials --resource-group group --name cluster
Some TUI and GUI tools
For beginners: Kubernetes Dashboard
For daily ops: Lens or K9s (depends on preference)
For terminals: K9s (TUI master)
For desktop: Lens (full-featured)
For enterprise: Rancher
For visualization: KubeView
For modern UI: Headlamp
Avoid: Octant (VMware) - archived and no longer maintained