Docker Tips (continuously updated) - Common Docker Issues


1. Installing Software on Ubuntu

Package lookup: https://packages.ubuntu.com/search?mode=exactfilename&suite=jammy&section=all&arch=any&keywords=ps&searchon=contents

Tip: this can be paired with Alfred to build a quick package-lookup workflow.

# Command-line install
apt-get update && apt-get install -y procps

# Dockerfile
RUN apt-get update && apt-get install -y procps && rm -rf /var/lib/apt/lists/*

# Commonly installed packages
apt-get install -y net-tools sysstat iproute2 procps
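Going the other direction, i.e. asking which package owns a file already on the system, also works locally (a sketch; `apt-file` must be installed separately):

```shell
# Which installed package ships /bin/ps? (procps on Debian/Ubuntu)
dpkg -S /bin/ps

# Search the contents of packages that are not yet installed
# (first: apt-get install -y apt-file && apt-file update)
apt-file search bin/ps
```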

2. Docker Installation Issues

2.1. Registry Mirrors (Image Acceleration)

2.1.1. Pulling Images

The docker pull command downloads a Docker image from Docker Hub or another registry. A Docker image is a read-only template containing the files and configuration needed to create a container. When the required image is not present locally, docker pull fetches it from the registry; once downloaded, the image can be used to create containers.
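A minimal sketch of the flow described above (the nginx tag and port are illustrative):

```shell
# Pull the image explicitly; docker run would also pull it on demand
docker pull nginx:1.25
# Create and start a container from the downloaded image
docker run -d --name web -p 8080:80 nginx:1.25
```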

2.1.2. Docker Engine Architecture

Docker Engine has three parts: the Docker client, the Docker daemon, and the Docker registry.

  • The Docker client is the primary way users interact with Docker: it sends commands to the Docker daemon, which parses and executes them.
  • The Docker daemon is a service running on the Docker host. It manages images, containers, networks, and storage, and exposes an API through which users and other programs interact with Docker.
  • A Docker registry stores Docker images. It can be Docker's official public registry, Docker Hub, or a privately hosted one. With the Docker client, users pull images from a registry and push their own images to it.
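The client/daemon split described above is visible directly in `docker version`, which reports both components independently (template fields are from the standard `docker version` Go template):

```shell
# Client and server (daemon) are versioned independently,
# showing that two separate components are involved
docker version --format 'client={{.Client.Version}} server={{.Server.Version}}'
```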

2.1.3. Configuring a Mirror

A registry mirror (pull-through cache) speeds up image downloads. It works by caching images from Docker Hub or another registry on the mirror server; images are then fetched from the mirror instead of the upstream registry. To enable it, add the mirror address to the Docker daemon configuration file, for example:

$ sudo vi /etc/docker/daemon.json
{
  "registry-mirrors": ["<https://registry.docker-cn.com>"]
}

In this example, the mirror https://registry.docker-cn.com has been configured. When the user runs docker pull, the Docker daemon automatically goes through the mirror, improving download speed.

Note that mirrors vary in reliability and speed, so pick a dependable, fast one. Also, some mirrors cache images: if an image changes in the upstream registry, the cache may need to be cleared manually, or stale images can cause problems.
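To confirm the mirror is actually in use after editing daemon.json, restarting the daemon and checking `docker info` is a quick sanity check (a sketch; the restart command assumes a systemd-managed Linux host):

```shell
# Reload the daemon so the new daemon.json takes effect
sudo systemctl restart docker
# The configured mirrors appear under "Registry Mirrors"
docker info | grep -A 1 'Registry Mirrors'
```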

2.1.4. Common Registry Mirror Endpoints

"azure": "http://dockerhub.azk8s.cn",
"tencent": "https://mirror.ccs.tencentyun.com",
"netease": "http://hub-mirror.c.163.com",
"ustc": "https://docker.mirrors.ustc.edu.cn",
"aliyun": "https://2h3po24q.mirror.aliyuncs.com"

2.2. Registry Mirrors in Docker Desktop for Mac
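On Docker Desktop for Mac there is no /etc/docker/daemon.json to edit by hand; the same JSON is entered under Preferences (Settings) → Docker Engine and applied with "Apply & Restart". The mirror URL below is illustrative:

```json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
```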

2.3. Errors After Reinstalling Docker Desktop on an M1 Mac

Fix: clean up the data left by the previous Docker installation, then reopen the app. Reference: https://forums.docker.com/t/cannot-get-docker-working-in-macbook-pro-m1/120810

# Back up the .docker directory, then remove the old version's Library data
$ mv ~/.docker ~/.docker.bak
$ rm -rf ~/Library/Group\ Containers/group.com.docker
$ rm -rf ~/Library/Containers/com.docker.docker

2.4. Modifying hostname and /etc/hosts

With the container's hostname set to kafka_container and the extra_hosts: option configured (see the docker-compose.yml example in section 5.2), the container's /etc/hosts contains:

# cat /etc/hosts
127.0.0.1	localhost
172.22.0.3	kafka_container

2.5. Docker Environment Issues - Mac Networking

2.5.1. Known Limitations

  1. There is no docker0 bridge on macOS
  2. The macOS host cannot ping container IPs
  3. The macOS host cannot reach the Docker (Linux) bridge network

2.5.2. Workarounds

# Container => macOS host: special DNS names
ping host.docker.internal    # resolves to the host
ping gateway.docker.internal # resolves to the gateway

# Host => container: use port mapping
$ docker run -d -p 80:80 --name webserver nginx
$ docker run -d -P --name webserver nginx
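From the macOS side, the published port is then the only doorway into the container (port and container name taken from the commands above):

```shell
# With -p 80:80, the host's localhost:80 forwards to the container
curl -sI http://localhost:80 | head -n 1
# With -P, Docker picks ephemeral host ports; list the mappings
docker port webserver
```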

2.5.3. Changing DNS

$ docker run -it --rm --dns=223.5.5.5 --dns=223.6.6.6 centos:perf-tools /bin/bash
[root@ea0ac0fcd834 /]# cat /etc/resolv.conf
nameserver 223.5.5.5
nameserver 223.6.6.6

# In-place edits (e.g. sed -i) do not work on the bind-mounted /etc/resolv.conf; rewrite it instead
echo "$(sed '2,$c nameserver 223.5.5.5\nnameserver 223.6.6.6' /etc/resolv.conf)" > /etc/resolv.conf

3. Using Docker

3.1. Docker Commands

3.1.1. Filtering Images

# List local images matching a reference pattern
$ docker images -f reference="tkstorm*"
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
tkstorm_webx        latest              d5eba30216c8        2 days ago          17.7MB
tkstorm_phpfpmx     latest              0856f58e9335        2 days ago          282MB

3.1.2. Listing Container Names

docker ps --format "{{.Names}}"

3.2. envsubst Issues in Docker

Reference: https://github.com/docker-library/docs/issues/496

command: /bin/sh -c "envsubst '$$NGINX_HOST $$NGINX_PORT' < /data/www/frontend/ngx.tmpl > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"

4. Container-Related Issues

4.1. Aliyun Mirror for Alpine

sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && \
    apk update && apk add --no-cache git curl tcpdump

4.2. perf Unusable Inside Containers

Error:
No permission to enable cycles event.

You may not have permission to collect system-wide stats.

Consider tweaking /proc/sys/kernel/perf_event_paranoid,
which controls use of the performance events system by
unprivileged users (without CAP_SYS_ADMIN).

The current value is 3:

  -1: Allow use of (almost) all events by all users
      Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow ftrace function tracepoint by users without CAP_SYS_ADMIN
      Disallow raw tracepoint access by users without CAP_SYS_ADMIN
>= 1: Disallow CPU event access by users without CAP_SYS_ADMIN
>= 2: Disallow kernel profiling by users without CAP_SYS_ADMIN

To make this setting permanent, edit /etc/sysctl.conf too, e.g.:
  kernel.perf_event_paranoid = -1

Press any key...

Cause: the perf_event_open system call is blocked for the container. This is usually resolved by running or attaching to the container with one of the approaches below:

4.2.1. Fix 1: Run the Container with --privileged and Adjust sysctl

# --privileged  Give extended privileges to this container
docker run -it --rm --name some-centos --privileged --dns=223.5.5.5 --dns=223.6.6.6 centos:perf-tools /bin/bash
# Adjust the kernel setting
sysctl kernel.kptr_restrict=0

4.2.2. Fix 2: Apply the Specific sysctl Setting

# --sysctl map  Sysctl options (default map[])
# Note: kernel.perf_event_paranoid is not a namespaced sysctl, so Docker's
# --sysctl flag cannot set it; apply it on the host instead:
sysctl -w kernel.perf_event_paranoid=-1

4.3. strace Failures

1
strace: attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
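ptrace is blocked by the container's default capability/seccomp profile; a common fix (a sketch, mirroring the compose options shown in section 5.1) is to grant SYS_PTRACE at run time:

```shell
# Allow ptrace inside the container so strace/gdb can attach
docker run -it --rm \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    centos:perf-tools /bin/bash
```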

4.4. The ping Tool

apt-get update
apt-get install iputils-ping

4.5. Installing a C Build Environment (Alpine)

apk update && apk add build-base autoconf

4.6. yum Warning Issues

4.6.1. Aliyun yum Repo Warnings in Docker (multiple baseurl entries configured)

Determining fastest mirrors
 * base: mirrors.aliyuncs.com
 * extras: mirrors.aliyuncs.com
 * updates: mirrors.aliyuncs.com
http://mirrors.aliyuncs.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] curl#52 - "Empty reply from server"
Trying other mirror.
http://mirrors.cloud.aliyuncs.com/centos/7/os/x86_64/repodata/6614b3605d961a4aaec45d74ac4e5e713e517debb3ee454a1c91097955780697-primary.sqlite.bz2: [Errno 14] curl#6 - "Could not resolve host: mirrors.cloud.aliyuncs.com; Unknown error"
Trying other mirror.

Cause: the Aliyun yum repo config specifies multiple baseurls:
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

As man yum.conf shows, this configuration is discouraged:
baseurl Must be a URL to the directory where the yum repository's `repodata' directory lives. Can be an http://, ftp:// or file:// URL. You can  specify  multiple
URLs in one baseurl statement. The best way to do this is like this:
[repositoryid]
name=Some name for this repository
baseurl=url://server1/path/to/repository/
      url://server2/path/to/repository/
      url://server3/path/to/repository/

If you list more than one baseurl= statement in a repository you will find yum will ignore the earlier ones and probably act bizarrely. Don't do this, you've been
warned.
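One common workaround (a sketch; it assumes the Aliyun CentOS-Base.repo layout shown above) is to delete the intranet-only aliyuncs.com mirror lines so a single baseurl remains. The demo below edits a local copy; in a container you would edit /etc/yum.repos.d/CentOS-Base.repo and then run `yum clean all && yum makecache`:

```shell
# Local copy with the problematic multi-baseurl layout
cat > CentOS-Base.repo <<'EOF'
[base]
name=CentOS-7 - Base - mirrors.aliyun.com
baseurl=http://mirrors.aliyun.com/centos/7/os/x86_64/
        http://mirrors.aliyuncs.com/centos/7/os/x86_64/
        http://mirrors.cloud.aliyuncs.com/centos/7/os/x86_64/
EOF
# Drop the aliyuncs.com entries (both intranet hosts match this pattern,
# while mirrors.aliyun.com does not)
sed -i '/aliyuncs\.com/d' CentOS-Base.repo
grep baseurl CentOS-Base.repo
```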

4.6.2. Unsigned Packages

warning: /var/cache/yum/x86_64/7/base/packages/bash-completion-2.1-6.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
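The NOKEY warning means the signing key is not imported into the rpm database; a common remedy (the key path assumes a standard CentOS 7 image) is:

```shell
# Import the CentOS GPG key so signature checks pass
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
# Or fetch it from the mirror configured earlier
rpm --import http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
```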

5. Dockerfile Configuration

5.1. Granting Container Capabilities

version: "3.7"
services:
    nginx:
        cap_add:
            - SYS_PTRACE
        security_opt:
            - seccomp:unconfined
...

5.2. docker-compose.yml Example

version: "3"
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    hostname: kafka_container
    extra_hosts:
      - "dev_kafka:9.134.233.187"
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - /data/docker_volumes/kafka-data/kafka_data:/bitnami
    depends_on:
      - zookeeper
    restart: on-failure
  kafka-manager:
    image: kafkamanager/kafka-manager
    ports:
      - "9000:9000"
    environment:
      - ZK_HOSTS=zookeeper:2181
      - KAFKA_MANAGER_AUTH_ENABLED=true
      - KAFKA_MANAGER_USERNAME=admin
      - KAFKA_MANAGER_PASSWORD=clark
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

5.3. Using External Networks in docker-compose.yml

version: "3.7"
services:
    alpine-gdb:
        build: .
        image: alpine:gdb
        container_name: "alpine-gdb"
        entrypoint: ["tail", "-f", "/dev/null"]
        volumes:
            - .:/data/
        #network_mode: "bridge"
        networks:
            - mysql_default
            - proxy-net
            - gdb
networks:
    gdb:
    mysql_default:
        external: true
    proxy-net:
        external: true
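Networks marked `external: true` are not created by compose; they must already exist or `docker-compose up` aborts. Creating them once (names taken from the file above):

```shell
docker network create mysql_default
docker network create proxy-net
# The non-external network `gdb` is created by compose automatically
docker-compose up -d
```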

6. Docker Image Examples

6.1. Installing etcd with systemd

6.1.1. Running etcd as a systemd Service (Dockerfile based on the alpine image)

Reference: http://play.etcd.io/install

# after transferring certs to remote machines
mkdir -p ${HOME}/certs
cp /tmp/certs/* ${HOME}/certs

# make sure etcd process has write access to this directory
# remove this directory if the cluster is new; keep if restarting etcd
# rm -rf /tmp/etcd/s1


# to write service file for etcd
cat > /tmp/etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd
Conflicts=etcd.service
Conflicts=etcd2.service

[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0

ExecStart=/usr/local/bin/etcd --name s1 \
  --data-dir /data/etcd/s1 \
  --listen-client-urls https://localhost:2379 \
  --advertise-client-urls https://localhost:2379 \
  --listen-peer-urls https://localhost:2380 \
  --initial-advertise-peer-urls https://localhost:2380 \
  --initial-cluster s1=https://localhost:2380,s2=https://localhost:22380,s3=https://localhost:32380 \
  --initial-cluster-token tkn \
  --initial-cluster-state new \
  --client-cert-auth \
  --trusted-ca-file ${HOME}/certs/etcd-root-ca.pem \
  --cert-file ${HOME}/certs/s1.pem \
  --key-file ${HOME}/certs/s1-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file ${HOME}/certs/etcd-root-ca.pem \
  --peer-cert-file ${HOME}/certs/s1.pem \
  --peer-key-file ${HOME}/certs/s1-key.pem

[Install]
WantedBy=multi-user.target
EOF
sudo mv /tmp/etcd.service /etc/systemd/system/etcd.service

# to start service
sudo systemctl daemon-reload
sudo systemctl cat etcd.service
sudo systemctl enable etcd.service
sudo systemctl start etcd.service

# to get logs from service
sudo systemctl status etcd.service -l --no-pager
sudo journalctl -u etcd.service -l --no-pager|less
sudo journalctl -f -u etcd.service

# to stop service
sudo systemctl stop etcd.service
sudo systemctl disable etcd.service