# Commands
# login
docker login -u _json_key -p "$(cat gcr-json-key-file.json)" https://gcr.io
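After login, images under that registry can be pulled and pushed; a quick sketch with placeholder project and image names:
docker pull gcr.io/my-project/my-image:latest # my-project and my-image are placeholders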
# list
docker ps
docker ps -a # including stopped
docker ps -a -q # only the IDs of all containers, including stopped
docker container ls
docker container ls -a # including stopped
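To list only stopped containers, docker ps supports status filters; a small sketch:
docker ps -a -q --filter "status=exited" # IDs of stopped containers only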
# rm
- remove container + image
docker container stop name_or_id
docker container rm name_or_id
docker image rm name_or_id
- remove all containers
docker rm $(docker ps -a -q)
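The same pattern removes all images in one go (a sketch; images still referenced by containers need -f, or remove the containers first):
docker rmi $(docker images -q) # remove all images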
- rm vs rmi
  - rm is for containers, rmi is for images
  - the image size listed in docker images is cumulated with the parent's size. If the parent was pre-built separately on the local machine, removing the image might not release the full size listed. To check the actual released size, compare docker system df before and after removal (see the sketch after this section). To check each layer's size, use docker history <img>
- Remove all dangling images (those not referring to any tagged image). If -a is specified, it will also remove all images not referenced by any container.
docker image prune
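A minimal sketch of the size check mentioned above; my_image is a placeholder name:
docker system df # note the Images size before removal
docker rmi my_image # my_image is a placeholder
docker system df # compare to see how much space was actually reclaimed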
# build
docker build -t tag -f Dockerfile .
. : the build context (the project root directory); paths referenced in the Dockerfile are resolved relative to it
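A hedged example with placeholder names, using a non-default Dockerfile while keeping the current directory as the context:
docker build -t myapp:dev -f docker/Dockerfile.dev . # myapp and docker/Dockerfile.dev are placeholders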
# tag
docker tag image:v1 image:v2
docker push image:v2
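To push to the gcr.io registry logged into above, tag the image with the full registry path first; my-project is a placeholder:
docker tag image:v1 gcr.io/my-project/image:v1
docker push gcr.io/my-project/image:v1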
# run
- simple command
docker run some_image ls
  - ls: the command to run in the container. After it exits, the container stops.
- interactive command like /bin/bash or python
docker run -ti some_image /bin/bash
  - -ti
    - -t: allocate a terminal in the container. Interactive commands must have it.
    - -i: keep STDIN open and pass it to the container. Only effective together with -t.
    - If only -t, a bash shell is created but user input is not passed to it.
    - If only -i, no effect.
  - /bin/bash: the command to run in the container. After it exits, the container stops.
  - ctrl+p ctrl+q: if run with -ti, detaches the running container to the background. docker attach name_of_the_container will resume it (see the sketch after this block).
  - ctrl+d: if run with -ti, exits the bash in the container, stops the container, and returns to the terminal.
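A sketch of the detach/resume flow described above, with a hypothetical container name box:
docker run -ti --name box some_image /bin/bash # inside the shell, press ctrl+p ctrl+q to detach
docker attach box # re-attach to the still-running shell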
- port-forwarding
docker run -p 8000:8080 --rm -it --name apple image_tag
  - -p/--publish 8000:8080: asks Docker to forward traffic incoming on the host's port 8000 to the container's port 8080 (see the curl check below)
  - --rm: once stopped, remove the container
  - -t: allocate a terminal in the container
  - -i: keep STDIN open
  - --name: give the container a name
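A quick check of the forwarding, assuming the process inside the container serves HTTP on 8080:
curl http://localhost:8000/ # reaches port 8080 inside the apple container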
Containers are merely an instance of the image you use to run them; docker run will create a temporary container and execute the command. After the command finishes, the container is also stopped.
- exec
docker exec apple <command>
docker exec <container_id> <command>
docker exec -t -i mycontainer /bin/bash
  - run commands in an already running container
- top
docker top apple
docker top <container_id>
  - see the processes running inside the container
# docker-compose
docker-compose start name
: start a service
docker-compose up -d name
: start this service and any services it depends on (detached, because of -d)
docker-compose run --service-ports name [NEW COMMAND]
: start the service with a new command and with the ports from the compose file mapped (see the sketch below)
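A concrete usage sketch, assuming a hypothetical service named web defined in docker-compose.yml:
docker-compose run --service-ports web /bin/bash # web is a placeholder service name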
# tf serving
check serving status: localhost:8501/v1/models/model_name
check serving schema: localhost:8501/v1/models/model_name/metadata
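Both endpoints can be hit with curl once a model is loaded; model_name is a placeholder:
curl http://localhost:8501/v1/models/model_name # serving status
curl http://localhost:8501/v1/models/model_name/metadata # input/output schema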
way 1
gRPC
docker run -p 8500:8500 --mount type=bind,source=/tmp/mnist,target=/models/mnist -e MODEL_NAME=mnist -t tensorflow/serving &
rest api
docker run -p 8501:8501 --mount type=bind,source=/tmp/mnist,target=/models/mnist -e MODEL_NAME=mnist -t tensorflow/serving &
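Predictions go to the :predict endpoint of the REST API; the payload below is only a placeholder and must match the model's signature (check /metadata):
curl -X POST http://localhost:8501/v1/models/mnist:predict -d '{"instances": [[0.0, 0.1, 0.2]]}' # placeholder input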
bash parameters:
- &: as long as the container does not output logs, it will run in the background; otherwise the terminal is occupied
  - jobs will list background jobs
  - fg will bring a background job to the foreground
  - using the docker flags -d -ti instead is better
docker parameters:
- --mount type=bind,source=x,target=y: creates a folder y in the container; x and y must be absolute paths
  - models/mnist contains the version folder 1
  - -e MODEL_NAME must equal the target folder name if no -e MODEL_BASE_PATH is given
  - -e MODEL_NAME equals --model_name / --MODEL_NAME in the tf serving parameters
- -e TF_CPP_MIN_VLOG_LEVEL=4 enables verbose logging of tf serving; it logs continuously and thus occupies the terminal even with &
  - TF_CPP_MIN_LOG_LEVEL: controls LOG with format [module] msg; 0 logs everything, 4 logs nothing
  - TF_CPP_MIN_VLOG_LEVEL: controls VLOG with format [time level file] msg; extra info per allowed LOG level; 0 logs nothing, 4 logs everything
  - TF_CPP_MIN_VLOG_LEVEL has been renamed to TF_CPP_MAX_VLOG_LEVEL after 2.5.0rc0
- -e TF_FORCE_GPU_ALLOW_GROWTH=true: occupies GPU memory incrementally instead of grabbing it all at once
- --gpus '"device=2"' (requires nvidia-docker and docker >= 19.03; for lower docker versions, use --runtime=nvidia); see the GPU sketch after this list
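A sketch combining the GPU flags with the REST serving command above; the GPU image tag, paths, and device index are assumptions:
docker run -d --gpus '"device=2"' -e TF_FORCE_GPU_ALLOW_GROWTH=true -p 8501:8501 --mount type=bind,source=/tmp/mnist,target=/models/mnist -e MODEL_NAME=mnist tensorflow/serving:latest-gpu # needs the GPU build of the image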
tf serving parameters, must be added after the image name:
- --enable_batching: batches incoming requests (see the example after this list)
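A sketch of where serving flags go, reusing the mnist example above; everything after the image name is passed to the tf serving binary:
docker run -p 8501:8501 --mount type=bind,source=/tmp/mnist,target=/models/mnist -e MODEL_NAME=mnist -t tensorflow/serving --enable_batching &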
way 2
export TESTDATA="$(pwd)/models/model_name/"
docker run -t --rm -p 8501:8501 -v "$TESTDATA:/models/keras_dpr_model" -e MODEL_NAME=keras_dpr_model tensorflow/serving &
-v
: bind-mounts the host folder into the container, creating the target folder
way 3
docker run -d --name serving_base tensorflow/serving
docker cp /tmp/resnet serving_base:/models/resnet
docker commit --change "ENV MODEL_NAME resnet" serving_base $USER/new_image
docker kill serving_base
docker rm serving_base
docker run -p 8500:8500 -t $USER/new_image &
-d
: detach
# tg bot
In google cloud:
- Update environment variables in .env and run.sh
- If the session expired: python main.py
- sudo sysctl -w vm.max_map_count=262144 for elastic search to work (see the persistence note after this list)
- sudo docker-compose up -d to start
- To index history messages: /download_history
- sudo docker-compose down to stop
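A small sketch, assuming the host reads /etc/sysctl.conf, to make the vm.max_map_count setting survive reboots:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf # persist the setting
sudo sysctl -p # reload without rebooting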