How to deploy a Docker container on Kubernetes (and why manual YAML fails at scale)



Key Points:
- Manual deployment creates toil: Writing discrete Deployment and Service YAML manifests for every container slows feature delivery and introduces human error.
- Fleet-scale operations break kubectl: Relying on manual terminal commands to deploy containers across distributed environments guarantees configuration drift.
- Agentic platforms abstract the pipeline: Centralized control planes automate YAML generation and deployment workflows without sacrificing Kubernetes orchestration.
Deploying a Docker container on Kubernetes requires building an image, authenticating with a registry, writing YAML deployment manifests, configuring services, and executing kubectl commands.
While the manual workflow is worth understanding, executing it across thousands of clusters causes severe configuration drift. Enterprise platform teams instead use agentic platforms to automate the entire deployment lifecycle.
Kubernetes is the standard for container orchestration, but getting even a single Docker container running can be surprisingly complex. The process demands creating multiple YAML configuration files, configuring networking, managing image registries, and troubleshooting across distributed systems. What should be a rapid deployment frequently turns into a lengthy, friction-filled operational bottleneck.
This guide breaks down the full 6-step manual deployment process and explains the automated approach used by platform engineering teams to simplify the workflow entirely.
🚀 Real-world proof
Alan needed to scale their application deployment process after hitting the hard limitations of Heroku and Elastic Beanstalk.
⭐ The result: Alan reduced their deployment times from over 1 hour to 8 minutes and improved reliability across 100+ services using Qovery. Read the full case study here.
The 1,000-cluster reality
Understanding the manual deployment steps is a baseline requirement for infrastructure engineers. However, executing this 6-step process for every application update across an enterprise is an operational liability.
Writing raw YAML and executing manual kubectl commands for a single cluster creates friction; doing it across 1,000 clusters distributed globally generates immediate configuration drift. At scale, relying on individual developers to properly configure resource limits, labels, and ingress controllers leads to misconfigurations that fail security audits. Enterprise engineering organizations pivot away from manual deployments, relying instead on centralized, agentic control planes to enforce standardization automatically.
Understanding Kubernetes components
Before deploying, teams must understand the following core Kubernetes primitives:
- Pods: The smallest deployable units, wrapping one or more containers with shared storage and network resources. A pod represents a single instance of a running process.
- Deployments: These manage pod replicas, handling updates and rollbacks automatically. When a deployment specification changes, Kubernetes creates new pods with the updated configuration and gradually terminates the old ones. This rolling update strategy minimizes downtime.
- Services: These expose pods to network traffic, providing connectivity even as pods are created and destroyed. Services offer a consistent DNS name and load balancing across healthy replicas.
- Ingress resources: These manage external access, routing traffic to the appropriate services based on hostnames or URL paths. Ingress controllers handle TLS termination, allowing teams to configure HTTPS access without modifying application code.
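Each of these primitives can be inspected directly on a running cluster. As a quick orientation, a single kubectl query lists the objects this guide works with:
```bash
# List the core objects this guide creates, in the current namespace
kubectl get pods,deployments,services,ingress
```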
The traditional 6-step manual deployment process
Deploying a Docker container to Kubernetes manually requires multiple configuration files and command-line operations. Each step introduces potential failure points that demand specific technical knowledge to resolve.
Step 1: create and build the Docker image
The process begins with a working Dockerfile in the application repository. Building the image locally verifies the container runs correctly before pushing it to a cluster.
An example Dockerfile for a Node.js project:
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 8080
USER node
CMD ["node", "server.js"]
```
The Alpine base image provides a minimal footprint, while the non-root USER directive follows security best practices. Build and run the image locally to verify it:
```bash
docker build -t myapp:v1.0.0 .
docker run -p 8080:8080 myapp:v1.0.0
```
Use semantic version tags rather than relying on the `latest` tag. The `latest` tag creates ambiguity about which version runs in production and complicates rollback procedures when incidents occur. Teams adopting GitOps practices tag images with commit SHAs for complete traceability.
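For instance, a minimal sketch of SHA-based tagging, assuming the build runs from a Git checkout:
```bash
# Derive the image tag from the current commit for end-to-end traceability
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t myapp:"$GIT_SHA" .
```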
Step 2: push the image to a container registry
Kubernetes clusters pull images from container registries, which must be accessible from the cluster's network. You can use DockerHub, Google Container Registry, Amazon ECR, or a private registry.
```bash
docker tag myapp:v1.0.0 registry.example.com/myapp:v1.0.0
docker push registry.example.com/myapp:v1.0.0
```
Registry authentication adds another configuration layer. Teams must create Kubernetes Secrets containing registry credentials and reference them in deployment manifests using imagePullSecrets, as sketched below. Forgetting this step results in ImagePullBackOff errors.
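A minimal sketch of that wiring, using the standard docker-registry secret type (the server and credentials are placeholders):
```bash
# Store registry credentials as a Kubernetes Secret in the target namespace
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<username> \
  --docker-password=<password>
```
The secret is then referenced in the Deployment's pod spec:
```yaml
spec:
  imagePullSecrets:
    - name: regcred
```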
Step 3: define the deployment YAML
The Deployment manifest represents the core friction point in Kubernetes operations. This YAML file specifies how Kubernetes should run and manage the application containers.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
```
This configuration requires understanding API versions, label selectors, resource specifications, and the relationship between Deployments, ReplicaSets, and Pods. Missing labels cause silent failures where Deployments create Pods that Services cannot discover.
Resource requests and limits require careful tuning. Requests determine scheduling decisions, while exceeding limits triggers CPU throttling or out-of-memory termination. Setting these values incorrectly leads to wasted cluster capacity or application instability.
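Health probes are a common extension of this manifest; misconfigured checks surface later as readiness failures (see Step 6). A minimal sketch, assuming the application serves a hypothetical /healthz endpoint:
```yaml
# Added under the container entry in the Deployment above.
# /healthz is an assumed route; substitute your application's health endpoint.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```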
Step 4: define the service YAML
A second YAML file exposes the Deployment to network traffic. Kubernetes offers three Service types, each with different networking implications.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
```
ClusterIP creates an internal-only endpoint accessible within the cluster. NodePort exposes the Service on each node's IP at a static port. LoadBalancer provisions an external load balancer in supported cloud environments. Teams often use Ingress controllers to consolidate external access through a single load balancer.
Choosing the wrong type means applications are either inaccessible externally or exposed incorrectly. Port mapping between the service port and container targetPort is a frequent source of connectivity issues that produce no obvious error messages.
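Where an Ingress controller consolidates external access, the routing rule adds yet another manifest. A minimal sketch, assuming an NGINX ingress controller and a hypothetical hostname:
```yaml
# myapp.example.com and the nginx class are placeholders for your environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
```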
Step 5: apply configuration with kubectl
With manifests written, you use kubectl to apply them to the cluster. This requires a properly configured kubeconfig file pointing to the correct cluster context.
```bash
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```
The dependency on local tooling creates setup overhead. Each developer needs kubectl installed and configured correctly, with credentials rotated periodically. Teams managing multiple clusters switch contexts frequently, increasing the risk of deploying to unintended environments.
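Guarding against context mix-ups relies on standard kubectl config subcommands; a minimal sketch (the context name is hypothetical):
```bash
# Confirm which cluster the current context points to
kubectl config get-contexts
kubectl config current-context

# Explicitly switch to the intended cluster before applying anything
kubectl config use-context staging-cluster
kubectl apply -f deployment.yaml -f service.yaml
```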
Step 6: verify and debug
Deployment success requires manual verification. Pods may fail to start for numerous reasons that only appear in logs or event streams.
```bash
kubectl get pods
kubectl describe pod myapp-deployment-xxxxx
kubectl logs myapp-deployment-xxxxx
```
Common failure modes include image pull errors from authentication problems, container crashes from missing environment variables, and readiness probe failures from misconfigured health checks. Each requires Kubernetes-specific knowledge to resolve.
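A few additional commands round out a typical debugging loop; a sketch using standard kubectl subcommands:
```bash
# Watch the rollout until the Deployment converges or times out
kubectl rollout status deployment/myapp-deployment

# Cluster events often surface scheduling and image pull problems
kubectl get events --sort-by=.metadata.creationTimestamp

# Roll back to the previous revision if the new version misbehaves
kubectl rollout undo deployment/myapp-deployment
```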
The automated solution: agentic deployments with Qovery
The manual process demonstrates why engineering organizations move away from raw Kubernetes operations. Platform tools like Qovery abstract the underlying complexity while maintaining the infrastructure control of Kubernetes orchestration.
Qovery acts as an agentic Internal Developer Platform (IDP) that handles YAML generation, networking configuration, and deployment automation. The workflow simplifies to connecting a Git repository and selecting the application.
How agentic control resolves manual pain points
The YAML configurations from Steps 3 and 4 disappear entirely. Qovery generates deployment manifests based on application requirements detected directly from the repository. Teams specify resource needs through a web interface rather than editing YAML files directly.
Automated CI/CD replaces the manual image push and kubectl apply workflow. A Git push triggers the complete pipeline from build through deployment. Rollbacks execute through the interface rather than requiring manual manifest version tracking.
Built-in observability simplifies the debugging process. Logs stream directly in the dashboard without requiring kubectl access or cluster credentials. Deployment status and resource metrics appear without additional monitoring configuration.
Standardizing day-2 operations
Understanding how to deploy a Docker container in Kubernetes provides necessary baseline knowledge. However, for organizations scaling their operations, manual deployments generate unacceptable levels of configuration drift and toil.
By implementing an agentic control plane like Qovery, CTOs and platform architects retain the strict governance and orchestration benefits of Kubernetes while removing manual YAML configuration. This standardizes day-2 operations across the entire fleet and allows development teams to focus purely on application code.
FAQs
What are the steps to manually deploy a Docker container on Kubernetes?
Manual deployment requires six steps: building the Docker image, pushing it to a container registry, writing a Deployment YAML manifest, writing a Service YAML manifest, applying the configuration via kubectl, and verifying the deployment logs.
Why is writing manual Kubernetes YAML inefficient at scale?
Writing manual YAML requires deep knowledge of API versions, label selectors, and resource limits. When executed across thousands of clusters, manual YAML updates cause configuration drift, deployment failures, and severe operational overhead.
How does agentic Kubernetes automate Docker deployments?
Agentic platforms eliminate manual YAML generation and kubectl commands. By connecting directly to a Git repository, the control plane automatically builds the image, generates manifests, configures networking, and deploys the application based on centralized organizational intent.
