Containers are first-class citizens.
Each container is an equal peer on the network.
Discovery should be framework-agnostic.
Your mission is what your application does for your organization.
Infrastructure (undifferentiated heavy lifting) is incidental cost and incidental complexity.
Application containers make the full promise of cloud computing possible... but require new ways of working.
Triton Elastic Container Service
Director of DevOps
... Docker in production since Oct 2013
Human-and-machine-readable build documentation.
No more "works on my machine."
Fix dependency isolation.
Interface-based approach to application deployment.
Deployments are fast!
DevOps kool-aid for everyone!
Docker's use of bridging and NAT noticeably increases the transmit path length; vhost-net is fairly efficient at transmitting but has high overhead on the receive side... In real network-intensive workloads, we expect such CPU overhead to reduce overall performance.
IBM Research Report: An Updated Performance Comparison of Virtual Machines and Linux Containers
--host networking
Bridge (not --bridge) networking
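For reference, here's roughly what those two modes look like from the command line (the image name is just an example):

```bash
# host networking: the container shares the host's network stack,
# so there's no bridge or NAT in the data path
docker run -d --net=host nginx

# default bridge networking: the container gets a private IP on docker0,
# and traffic in and out goes through port forwarding / NAT
docker run -d -p 8080:80 nginx
```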
Simple discovery! But...
Can't address individual hosts behind a record.*
No health checking.*
TTL caching.
Containers don't have their own NIC on the data center network
All outbound requests pass through a proxy
All packets go through NAT or port forwarding
Cut the cruft!
Push responsibility for the application topology away from the network infrastructure and into the application itself, where it belongs.
Registration
Self-introspection
Heartbeats
Look for change
Respond to change
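To make that concrete, here's a minimal sketch of the registration and heartbeat half of the pattern against Consul's HTTP agent API; the service name, port, and TTL are hypothetical, not taken from the demo:

```bash
# register this container's service along with a TTL check
curl -s -X PUT http://consul:8500/v1/agent/service/register \
    -d '{"Name": "app", "Port": 8080, "Check": {"TTL": "10s"}}'

# heartbeat: mark the check as passing before the TTL expires,
# typically after a successful self-introspection/health check
curl -s -X PUT http://consul:8500/v1/agent/check/pass/service:app
```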
No packaging of tooling into a separate service
App container lifecycle separate from discovery service
Respond quickly to changes
A shim to help make existing apps container-native
Containerbuddy is PID 1
Returns the exit code of the shimmed process to the Docker Engine (or Triton) and dies
Attaches the app's stdout/stderr to the container's stdout/stderr
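The shim part of that behavior boils down to something like the following, sketched in shell purely for illustration (Containerbuddy itself is a Go binary, and it also handles the service registration, health checks, and onChange handlers configured below):

```bash
#!/bin/sh
# Illustration of the PID 1 shim pattern only -- not Containerbuddy's actual code.
# The app inherits this process's stdout/stderr, so its logs reach the container.
"$@" &                                   # start the real application
APP_PID=$!
trap 'kill -TERM "$APP_PID"' TERM INT    # forward signals to the app
wait "$APP_PID"
exit $?                                  # hand the app's exit code back to Docker
```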
{
  "consul": "consul:8500",
  "services": [
    {
      "name": "nginx",
      "port": 80,
      "health": "/usr/bin/curl --fail -s http://localhost/health",
      "poll": 10,
      "ttl": 25
    }
  ],
  "backends": [
    {
      "name": "app",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    }
  ]
}
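The `health` entry is the command Containerbuddy runs every `poll` seconds; if it exits 0, the service's TTL check in Consul gets refreshed. Run by hand it looks like this:

```bash
# the health check from the config above, run manually;
# curl --fail exits non-zero if Nginx returns an HTTP error for /health
/usr/bin/curl --fail -s http://localhost/health
echo $?
```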
$ cat ./nginx/opt/containerbuddy/reload-nginx.sh

# fetch latest virtualhost template from Consul k/v
curl -s --fail consul:8500/v1/kv/nginx/template?raw \
    > /tmp/virtualhost.ctmpl

# render virtualhost template using values from Consul and reload Nginx
consul-template \
    -once \
    -consul consul:8500 \
    -template \
    "/tmp/virtualhost.ctmpl:/etc/nginx/conf.d/default.conf:nginx -s reload"
$ less ./nginx/default.ctmpl

# for each service, create a backend
{{range services}}
upstream {{.Name}} {
    # write the health service address:port pairs for this backend
    {{range service .Name}}
    server {{.Address}}:{{.Port}};
    {{end}}
}
{{end}}

server {
    listen 80;
    server_name _;

    # need ngx_http_stub_status_module compiled-in
    location /health {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }

    {{range services}}
    location /{{.Name}}/ {
        proxy_pass http://{{.Name}}/;
        proxy_redirect off;
    }
    {{end}}
}
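If you want to check what consul-template rendered, `nginx -t` validates the generated config inside the running container (again assuming the demo's container name):

```bash
docker exec example_nginx_1 nginx -t
```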
nginx:
  image: 0x74696d/containerbuddy-demo-nginx
  mem_limit: 512m
  ports:
    - 80
  links:
    - consul:consul
  restart: always
  environment:
    - CONTAINERBUDDY=file:///opt/containerbuddy/nginx.json
  command: >
    /opt/containerbuddy/containerbuddy
    nginx -g "daemon off;"
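Because `command` wraps the real Nginx invocation in Containerbuddy, Containerbuddy ends up as PID 1 inside the container. One way to verify that once the stack is up, assuming the demo's container name:

```bash
# show what PID 1 is inside the running Nginx container
docker exec example_nginx_1 cat /proc/1/cmdline | tr '\0' ' '; echo
```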
echo 'Starting Consul.'
docker-compose -p example up -d consul

# get network info from consul. alternatively we can push this into
# a DNS A-record to bootstrap the cluster
CONSUL_IP=$(docker inspect example_consul_1 \
    | json -a NetworkSettings.IPAddress)

echo "Writing template values to Consul at ${CONSUL_IP}"
curl --fail -s -X PUT --data-binary @./nginx/default.ctmpl \
    http://${CONSUL_IP}:8500/v1/kv/nginx/template

echo 'Opening consul console'
open http://${CONSUL_IP}:8500/ui
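A quick sanity check that the template actually landed in the k/v store, using the same `?raw` endpoint the reload script reads from:

```bash
curl -s --fail "http://${CONSUL_IP}:8500/v1/kv/nginx/template?raw" | head
```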
Starting application servers and Nginx
example_consul_1 is up-to-date
Creating example_nginx_1...
Creating example_app_1...
Waiting for Nginx at 72.2.115.34:80 to pick up initial configuration.
...................
Opening web page... the page will reload every 5 seconds with any updates.
Try scaling up the app!
docker-compose -p example scale app=3
echo 'Starting application servers and Nginx'
docker-compose -p example up -d

# get network info from Nginx and poll it for liveness
NGINX_IP=$(docker inspect example_nginx_1 \
    | json -a NetworkSettings.IPAddress)

echo "Waiting for Nginx at ${NGINX_IP} to pick up initial configuration."
while :
do
    sleep 1
    curl -s --fail -o /dev/null "http://${NGINX_IP}/app/" && break
    echo -ne .
done
echo

echo 'Opening web page... the page will reload every 5 seconds'
echo 'with any updates.'
open http://${NGINX_IP}/app/
$ docker-compose -p example scale app=3
Creating and starting 2... done
Creating and starting 3... done
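The new instances register themselves with Consul, the `onChange` handler fires in the Nginx container, and the upstream block gets re-rendered with no load balancer to reconfigure. Two quick ways to watch that happen, using the demo's names:

```bash
# all three app instances should now appear in Consul's catalog
curl -s http://${CONSUL_IP}:8500/v1/catalog/service/app \
    | json -a ServiceAddress ServicePort

# ...and as server entries in the re-rendered Nginx upstream block
docker exec example_nginx_1 grep -E 'server [0-9]' /etc/nginx/conf.d/default.conf
```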
The Old Way | The Container-Native Way
---|---
Extra network hop from LB or local proxy | Direct container-to-container communication
NAT | Containers have their own IP
DNS TTL | Topology changes propagate immediately
Health checks in the LB | Applications report their own health
Two build & orchestration pipelines | Focus on your app alone
VMs | Secure multi-tenant bare metal