> convert applications into docker images which can be run as docker containers on other machines with docker installed
> docker {command}
> docker run nginx
- run instance of nginx application in docker container if it already exists
- else we pull the image from docker hub and use that
> docker run -d nginx
- the -d flag runs a detached instance in the background
> docker attach cool_container
- reattach container instance
> docker ps
- list running containers
> docker ps -a
- list all containers, including stopped ones
> docker stop cool_container > docker stop 123456789
- stop a container
> docker rm cool_container
- remove a container
- we can type just the first part of a container id if there are no conflicts
> docker images
- list images
> docker rmi nginx
- remove an image
- ensure no containers are running off the image first
- delete all dependent containers before removing the image
> docker pull nginx
- download an image from docker hub
> docker exec ubuntu_container cat /etc/hosts
- run a command inside a running container
- a container only lives as long as its main process; if the process stops/crashes, the container exits
> docker inspect cool_container
- inspect a container in JSON form
> docker logs cool_container
- view output from a container running in the background
> docker run redis:latest (default) > docker run redis:4.0
- specify a docker image version with a tag; latest is the default
> docker run -i image
- interactive mode: read input from stdin
> docker run -t image
- attach a pseudo-terminal to display terminal prompts
> docker run -it image
- combine -i and -t
- port mapping allows users outside the docker host to access applications inside the container
> docker run -p 80:5000 webapp
- map external port 80 to internal port 5000
- run multiple instances of applications and map to different ports
- docker containers have their own filesystem
- map a directory outside the docker host to a directory inside the container
> docker run -v /external/directory:/container/directory/sql sql_container
- ex. save db data before we destroy a sql container
> docker run -e ENV_VARIABLE=epic cool_container
- we can find the list of environment variables using `docker inspect`
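Besides passing -e at run time, a default value can be baked into the image with the dockerfile ENV instruction. A sketch, reusing the variable name from the example above:

```dockerfile
FROM ubuntu
# default value; docker run -e ENV_VARIABLE=other overrides it
ENV ENV_VARIABLE=epic
```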
- creating your own images for shipping/debugging
- a dockerfile lists the steps required to set up an image
- each line is a docker instruction followed by its argument
> FROM ubuntu
- base OS/image
> RUN pip install flask
- install dependencies
> COPY . /src
- copy files from the host's build directory into /src in the image
> ENTRYPOINT FLASK_APP=/src/main.py flask run
- app entrypoint
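The four instructions above fit together into one small dockerfile. A sketch, assuming a flask app whose entry file is /src/main.py (the path used above) and that python/pip must first be installed on the ubuntu base:

```dockerfile
# base OS/image
FROM ubuntu

# install dependencies (python and pip are not in the base image)
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip install flask

# copy application files from the host into the image
COPY . /src

# command run when a container starts
ENTRYPOINT FLASK_APP=/src/main.py flask run
```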
> docker build -t username/webapp .
- build and save the image locally
- each build step is a layer; layers are cached
- we only need to rebuild updated layers
- -t specifies the image name
- the . uses the dockerfile in the current directory
> docker push username/webapp
- push the image to docker hub
- when running an image, there is a default CMD command
- ex. nginx has "nginx" as its command
> docker run ubuntu
- ubuntu uses bash as its default command
- bash is a shell that listens for input from a terminal, but by default docker does not attach a terminal
- so bash finds no terminal and the container exits immediately
> docker run ubuntu [command]
- overwrite the default command
- we can also overwrite the default in the dockerfile > CMD sleep 5 > CMD ["sleep", "5"]
> ENTRYPOINT ["sleep"] > docker run image 10
- like CMD, but instead of hard-coding the parameters, the docker run arguments are appended
- resulting command: sleep 10
- we can use CMD as a default value for the parameter > ENTRYPOINT ["sleep"] > CMD ["5"]
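The ENTRYPOINT/CMD combination above can be sketched as a dockerfile fragment (the base image is arbitrary; the sleep example is the one from these notes):

```dockerfile
FROM ubuntu
# ENTRYPOINT is always run; CMD supplies the default argument
ENTRYPOINT ["sleep"]
CMD ["5"]
```

`docker run image` runs `sleep 5`; `docker run image 10` runs `sleep 10`.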
- bridge: the default network
- private, internally created on the host
- addresses in the 172.17.x.x range
- containers access each other through the bridge
- we can map ports to these networks for external usage
- none: containers are not connected to any network, no external access
- host: associate the container with the host network
- no port mapping needed, direct access
> docker network create {args}
- create multiple bridge networks in the host
> docker network ls
- view all networks
> mysql.connect(172.17.0.1) > mysql.connect(sql_container)
- docker runs an embedded DNS, so we can use the container name instead of its ip address
- image layers
- created on build
- read-only
- docker will reuse identical image layers when building similar applications
- shared by all containers using the image
- container layer
- created on run
- read-write
- stores data created by the container
- destroyed on container destruction
- copy-on-write: we can "modify" files from the image layers; the modified copy is saved to our container layer
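Layer reuse is also why instruction order in a dockerfile matters: if dependencies are installed before the application source is copied, a code change only invalidates the layers after the COPY. A sketch (file names are illustrative):

```dockerfile
FROM python:3
# dependency layers: rebuilt only when requirements.txt changes
COPY requirements.txt /src/requirements.txt
RUN pip install -r /src/requirements.txt
# source layer: rebuilt on every code change; the layers above stay cached
COPY . /src
```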
> docker volume create data_volume
- create a new volume called data_volume under /var/lib/docker/volumes/
> docker run -v data_volume:/var/lib/sql sql_container
- mount volume to /var/lib/sql inside the container; all data is written to data_volume
> docker run -v new_volume:/var/lib/sql sql_container
- automatically creates volume called new_volume
> docker run -v /external/directory:/var/lib/sql sql_container
- bind mounting: we can store our data anywhere on the host, not just in a docker volume
- running multi-container docker applications
> docker run -d --link redis:redis app
> docker-compose.yml
  container_name:
    image: img_name
    build: ./src
    ports:
      - external:internal
    links:
      - link:link
      - link (same as link:link)
- build: instead of using an image from docker hub, we can build our own image
  redis:
    image: redis
  db:
    image: postgres
  worker:
    build: ./worker
    links:
      - redis
      - db
> docker-compose up
- build, create, and start the containers
- handles services, dependencies, networks, and auto linking
- when we pull nginx from docker hub, the full name is docker.io/library/nginx
> image: docker.io/library/nginx
- registry/account/image; docker.io and the library account are the defaults
- private registries can be used for internal images
- container orchestration: automate running containers, monitor host/container health
- create new containers if containers fail, scale containers to usage, load balancing
- multiple instances (replicas)
- ex. docker swarm, kubernetes
- docker swarm
- combine docker machines into clusters
- distribute application instances across hosts
- swarm manager/worker nodes
- workers join the cluster with docker swarm join; list nodes with docker node ls
- services: instances of an application that run on the nodes