Overview
This article walks through standing up a test cluster with Kind (Kubernetes in Docker), installing Flannel as the CNI, adding the bandwidth plugin to the CNI chain, and verifying pod-to-pod bandwidth limits with iperf3. Everything runs locally, making it a handy reference for anyone learning about or validating network policy and resource quotas.
Prerequisites
```bash
docker version
kind version
kubectl version --client
```
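If kind is missing, one common route is downloading a release binary (the version below is only an example; check the project's releases for the latest):

```bash
# Example only: pin whichever kind release you want to test with.
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```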
Build the CNI Plugins
```bash
mkdir -p /tmp/kind-flannel-bw-test
cd /tmp/kind-flannel-bw-test
git clone https://github.com/containernetworking/plugins.git
cd plugins
./build_linux.sh
```
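Assuming the build succeeds, the compiled plugins land in the repo's bin/ directory, which is exactly what the Kind config below mounts into each node:

```bash
# bandwidth must be present, since Flannel will delegate to it.
ls /tmp/kind-flannel-bw-test/plugins/bin
# Expect bandwidth, bridge, host-local, loopback, portmap, ... here.
# (The flannel binary itself is copied in later by the Flannel DaemonSet's
# init container, so it does not need to be in this directory.)
```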
Create the Kind Cluster (Default CNI Disabled)
```yaml
# /tmp/kind-flannel-bw-test.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: flannel-bw-test
networking:
  disableDefaultCNI: true
  podSubnet: "10.244.0.0/16"
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /tmp/kind-flannel-bw-test/plugins/bin
        containerPath: /opt/cni/bin
  - role: worker
    extraMounts:
      - hostPath: /tmp/kind-flannel-bw-test/plugins/bin
        containerPath: /opt/cni/bin
  - role: worker
    extraMounts:
      - hostPath: /tmp/kind-flannel-bw-test/plugins/bin
        containerPath: /opt/cni/bin
```
```bash
kind create cluster --config /tmp/kind-flannel-bw-test.yaml
kubectl get nodes -o wide
```
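At this point the nodes report NotReady because no CNI is installed yet; that is expected. You can also confirm the mounted plugin binaries are visible inside a node (Kind names node containers after the cluster and role):

```bash
# Node containers follow Kind's <cluster>-<role> convention.
docker exec flannel-bw-test-control-plane ls /opt/cni/bin | grep bandwidth
```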
Install Flannel
```bash
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl -n kube-flannel rollout status ds/kube-flannel-ds --timeout=300s
kubectl get pods -n kube-flannel -o wide
```
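Once the DaemonSet is up, the nodes should transition to Ready within a minute or so:

```bash
kubectl get nodes
# STATUS should flip from NotReady to Ready as Flannel configures each node.
```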
Configure the Flannel CNI Chain (Add the bandwidth Plugin)
```yaml
# /tmp/kube-flannel-bandwidth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
data:
  cni-conf.json: |-
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } },
        { "type": "bandwidth", "capabilities": { "bandwidth": true } }
      ]
    }
  net-conf.json: |-
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```
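Before overwriting it, you may want to check the ConfigMap the Flannel manifest shipped, so the only intended change is the extra bandwidth entry in the plugin chain:

```bash
# Inspect the live ConfigMap, then preview what applying our version changes.
kubectl -n kube-flannel get cm kube-flannel-cfg -o yaml
kubectl diff -f /tmp/kube-flannel-bandwidth-cm.yaml
```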
```bash
kubectl apply -f /tmp/kube-flannel-bandwidth-cm.yaml
kubectl -n kube-flannel rollout restart ds/kube-flannel-ds
kubectl -n kube-flannel rollout status ds/kube-flannel-ds --timeout=300s
```
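Flannel regenerates the on-disk CNI config from this ConfigMap when its pods restart; the rendered file on each node should now list the bandwidth plugin (the path and filename below are Flannel's defaults):

```bash
docker exec flannel-bw-test-control-plane \
  cat /etc/cni/net.d/10-flannel.conflist
# The "plugins" array should contain the bandwidth entry with
# "capabilities": { "bandwidth": true }.
```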
Deploy the Test Workloads (iperf3)
```yaml
# /tmp/bw-test.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bw-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf-server
  namespace: bw-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf-server
  template:
    metadata:
      labels:
        app: iperf-server
    spec:
      containers:
        - name: server
          image: networkstatic/iperf3:latest
          args: ["-s"]
          ports:
            - containerPort: 5201
---
apiVersion: v1
kind: Service
metadata:
  name: iperf
  namespace: bw-test
spec:
  selector:
    app: iperf-server
  ports:
    - name: tcp
      port: 5201
      targetPort: 5201
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: iperf-client-limit-10m
  namespace: bw-test
  annotations:
    kubernetes.io/egress-bandwidth: 10M
    kubernetes.io/ingress-bandwidth: 10M
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: networkstatic/iperf3:latest
      command:
        - sleep
        - infinity
      # To run the test automatically instead of sleeping:
      # command: ["sh", "-c", "sleep 5; iperf3 -c iperf.bw-test.svc.cluster.local -t 15 -P 1 -f k"]
---
apiVersion: v1
kind: Pod
metadata:
  name: iperf-client-no-limit
  namespace: bw-test
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: networkstatic/iperf3:latest
      command:
        - sleep
        - infinity
      # To run the test automatically instead of sleeping:
      # command: ["sh", "-c", "sleep 5; iperf3 -c iperf.bw-test.svc.cluster.local -t 15 -P 1 -f k"]
```
```bash
kubectl apply -f /tmp/bw-test.yaml
kubectl -n bw-test wait --for=condition=Available deploy/iperf-server --timeout=180s
kubectl -n bw-test get pods -o wide
```
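Before the formal runs, a short sanity test directly against the server pod's IP (bypassing the Service) can rule out DNS or kube-proxy issues; SERVER_IP here is just a local shell variable:

```bash
# Grab the server pod's IP and run a 3-second throughput check against it.
SERVER_IP=$(kubectl -n bw-test get pod -l app=iperf-server \
  -o jsonpath='{.items[0].status.podIP}')
kubectl -n bw-test exec iperf-client-no-limit -- iperf3 -c "$SERVER_IP" -t 3 -f k
```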
Verify the Bandwidth Limits
Verifying the 10M limit
```bash
kubectl exec -ti -n bw-test iperf-client-limit-10m -- \
  iperf3 -c iperf.bw-test.svc.cluster.local -t 15 -P 1 -f k
```
Sample output (excerpt):
```text
[  5]   1.00-2.00   sec  1.25 MBytes  10472 Kbits/sec
[  5]   2.00-3.00   sec  1.25 MBytes  10484 Kbits/sec
...
[  5]   0.00-15.00  sec  28.8 MBytes  16075 Kbits/sec  sender
```
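If you want to confirm the limit is enforced by the kernel rather than by iperf3 itself, the bandwidth plugin programs tc token-bucket (tbf) qdiscs on the node hosting the pod. A quick way to see them, assuming tc is available in the Kind node image (which node to exec into depends on where the pod was scheduled; interface names vary per pod):

```bash
# Look for tbf qdiscs on the node running iperf-client-limit-10m
# (substitute flannel-bw-test-worker2 if the pod landed there).
docker exec flannel-bw-test-worker tc qdisc show | grep tbf
```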
Baseline without limits
```bash
kubectl exec -ti -n bw-test iperf-client-no-limit -- \
  iperf3 -c iperf.bw-test.svc.cluster.local -t 15 -P 1 -f k
```
Sample output (excerpt):
```text
[  5]   0.00-15.00  sec   146 GBytes  83817814 Kbits/sec  sender
```
Verifying the 20M limit
```yaml
# /tmp/iperf-limit-20m.yaml
apiVersion: v1
kind: Pod
metadata:
  name: iperf-limit-20m
  namespace: bw-test
  annotations:
    kubernetes.io/egress-bandwidth: 20M
    kubernetes.io/ingress-bandwidth: 20M
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: networkstatic/iperf3:latest
      command:
        - sleep
        - infinity
      # To run the test automatically instead of sleeping:
      # command: ["sh", "-c", "sleep 5; iperf3 -c iperf.bw-test.svc.cluster.local -t 10 -P 1 -f k"]
```
```bash
kubectl apply -f /tmp/iperf-limit-20m.yaml
kubectl -n bw-test wait --for=condition=Ready pod/iperf-limit-20m --timeout=180s
# The pod just sleeps, so run the test via exec rather than watching logs:
kubectl -n bw-test exec -ti iperf-limit-20m -- \
  iperf3 -c iperf.bw-test.svc.cluster.local -t 10 -P 1 -f k
```
Sample output (excerpt; the first second may spike before settling around 20 Mbit/s):
```text
[  5]   2.00-3.00   sec  1.25 MBytes  10478 Kbits/sec
[  5]   3.00-4.00   sec  2.50 MBytes  20929 Kbits/sec
...
[  5]   0.00-10.04  sec   512 MBytes  427530 Kbits/sec  receiver
```
Improving Test Stability
Skip the first 2 seconds of "warm-up" and lengthen the test:
```bash
kubectl -n bw-test exec -ti iperf-limit-20m -- \
  iperf3 -c iperf.bw-test.svc.cluster.local -t 30 -O 2 -P 1 -f k
kubectl -n bw-test exec -ti iperf-client-limit-10m -- \
  iperf3 -c iperf.bw-test.svc.cluster.local -t 30 -O 2 -P 1 -f k
```
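The runs above exercise the egress limit (the annotated pod sends). Adding iperf3's -R flag reverses the direction so the annotated pod receives, which exercises its ingress-bandwidth limit instead:

```bash
# -R makes the server send and the client receive, testing ingress shaping.
kubectl -n bw-test exec -ti iperf-client-limit-10m -- \
  iperf3 -c iperf.bw-test.svc.cluster.local -t 30 -O 2 -R -f k
```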
Common Issues and Workarounds
- Server busy: an iperf3 server handles only one test at a time; wait for the previous run to finish before starting the next (or serialize the runs, as in the loop after this list).
- Malformed annotations: passing multiple annotations on the command line is easily mangled by the shell into a single value; prefer YAML with one annotation per line:
```yaml
annotations:
  kubernetes.io/egress-bandwidth: 20M
  kubernetes.io/ingress-bandwidth: 20M
```
- First-second spike: the bandwidth plugin rate-limits through the kernel's tc queueing, so the first sampling window can spike while the queue initializes. Use -O to discard the warm-up window and extend the test duration.
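A minimal sketch of serializing the runs so the single iperf3 server is never asked to handle two tests at once (pod names match the manifests above):

```bash
# Run the clients back to back; one iperf3 server handles one test at a time.
for pod in iperf-client-limit-10m iperf-limit-20m iperf-client-no-limit; do
  echo "=== $pod ==="
  kubectl -n bw-test exec "$pod" -- \
    iperf3 -c iperf.bw-test.svc.cluster.local -t 30 -O 2 -f k
done
```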
Cleanup
```bash
kubectl delete ns bw-test
kind delete cluster --name flannel-bw-test
```
Summary
In a Kind + Flannel environment, adding the bandwidth plugin to the CNI chain and annotating pods with kubernetes.io/egress-bandwidth and kubernetes.io/ingress-bandwidth effectively enforces egress and ingress bandwidth limits. The iperf3 comparison tests show both the 10M and 20M limits taking effect as expected, making this a convenient local setup for quickly validating network policy and resource controls.