Written: 2021-12-18
Related link: knative-serving installation
Three examples are listed here; other usages follow the same pattern.
Prerequisite: knative-serving and its networking layer are already installed.
The yaml files referenced below need to be obtained separately.
1: Traffic splitting in knative-serving is mainly controlled by the traffic field; comparing the yaml files makes this clear.
2: The yaml file details follow; other yaml configurations can be found in the files.
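As a reference, here is a minimal sketch of what helloworld-go-traffic-one.yaml could look like; the image placeholder and the env value are assumptions for illustration, and the point is only the traffic field routing 100% of traffic to a named revision:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: serving-test
spec:
  template:
    metadata:
      # explicit revision name, matching the revision shown later
      name: helloworld-go-one
    spec:
      containers:
        - image: <helloworld-go-image>   # assumed placeholder for your sample image
          env:
            - name: TARGET
              value: "World"
  traffic:
    - revisionName: helloworld-go-one
      percent: 100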
kubectl apply -f serving-test/serving-test-namespaces.yaml
kubectl apply -f serving-test/helloworld-go-traffic-one.yaml
1: <helloworld-go.serving-test.127.0.0.1.nip.io> is the URL returned by kubectl get route -n serving-test, with the http:// prefix removed.
2: <serving_network_ip> and <serving_network_port> are the IP and port of the knative-serving network service; for the Kourier network they can be obtained with kubectl get service kourier -n kourier-system, and the port is the internal port.
3: If no requests arrive for 1-2 minutes, the pods are automatically scaled down to 0.
4: A response of Hello World! indicates success.
curl -H "Host: <helloworld-go.serving-test.127.0.0.1.nip.io>" http://<serving_network_ip>:<serving_network_port>
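A hedged sketch for filling in <serving_network_ip> and <serving_network_port>; the port name http2 is an assumption based on a standard Kourier install:
kubectl get service kourier -n kourier-system -o jsonpath='{.spec.clusterIP}'
kubectl get service kourier -n kourier-system -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'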
kubectl apply -f serving-test/helloworld-go-traffic-two.yaml
kubectl get revision -n serving-test
# Output
NAME                CONFIG NAME     K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS
helloworld-go-one   helloworld-go                      1            True             1                 1
helloworld-go-two   helloworld-go                      2            True             1                 1
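A hedged sketch of the relevant part of helloworld-go-traffic-two.yaml, assuming it routes all traffic to the new revision (which would explain the Hello World Two! response in the next step); image and env values are placeholders:
spec:
  template:
    metadata:
      name: helloworld-go-two
    spec:
      containers:
        - image: <helloworld-go-image>   # assumed placeholder
          env:
            - name: TARGET
              value: "World Two"
  traffic:
    - revisionName: helloworld-go-two
      percent: 100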
3. Verify by sending a request
The same operation as before; a response of Hello World Two! indicates success.
kubectl apply -f serving-test/helloworld-go-traffic-three.yaml
kubectl get revision -n serving-test
# Output
helloworld-go-one     helloworld-go                      1            True             1                 1
helloworld-go-three   helloworld-go                      3            True             1                 1
helloworld-go-two     helloworld-go                      2            True             1                 1
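A hedged sketch of the traffic section of helloworld-go-traffic-three.yaml; the 60/30/10 percentages are an assumption chosen to match the roughly 6:3:1 ratio reported in the next step:
  traffic:
    - revisionName: helloworld-go-one
      percent: 60
    - revisionName: helloworld-go-two
      percent: 30
    - revisionName: helloworld-go-three
      percent: 10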
3. Verify by sending a request
The same operation as before; the responses Hello World!, Hello World Two!, and Hello World Three! appear in a ratio of roughly 6:3:1.
1: ConfigMap is a Kubernetes configuration resource type; you can list all ConfigMaps with
kubectl get configmap -A
2: Modify the ConfigMap named config-autoscaler and add enable-scale-to-zero: "false" under data.
3: The file details follow; other similar settings can be found in the file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  # With this set, pods will not scale down to 0
  enable-scale-to-zero: "false"
kubectl apply -f config-autoscaler.yaml
kubectl get pod -n serving-test
kubectl patch configmap config-autoscaler -n knative-serving --type merge -p '{"data":{"enable-scale-to-zero":"false"}}'
kubectl get pod -n serving-test
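A quick hedged check that the setting actually landed in the ConfigMap (standard kubectl jsonpath):
kubectl get configmap config-autoscaler -n knative-serving -o jsonpath='{.data.enable-scale-to-zero}'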
1: To use Kubernetes features, the corresponding setting must first be added to the config-features ConfigMap; only then can the feature be used in yaml files.
2: The file details follow; other feature settings can be found in the file.
kubectl get configmap config-features -n knative-serving -o yaml > config-features.yaml
Add the kubernetes.podspec-nodeselector: "enabled" setting; the following yaml content is simplified:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/part-of: knative-serving
    app.kubernetes.io/version: 1.0.0
    serving.knative.dev/release: v1.0.0
  name: config-features
  namespace: knative-serving
data:
  kubernetes.podspec-nodeselector: "enabled"
kubectl apply -f config-features.yaml
<node_name> is the name of a Kubernetes node.
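To see candidate values for <node_name> and the labels they carry, listing the cluster nodes is enough (standard kubectl):
kubectl get nodes --show-labels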
apiVersion: serving.knative.dev/v1
kind: Service
...
spec:
  template:
    spec:
      nodeSelector:
        # nodeSelector matches node labels; kubernetes.io/hostname is the built-in label carrying the node name
        kubernetes.io/hostname: <node_name>
kubectl apply -f test-node-select.yaml
kubectl get pod -n serving-test -o wide
kubectl patch configmap config-features -n knative-serving --type merge -p '{"data":{"kubernetes.podspec-nodeselector": "enabled"}}'
3.3.1 Modify via yaml
Except for the activator component, you can scale any deployment running in knative-serving (or kourier-system) with the following command:
$ kubectl -n knative-serving scale deployment <deployment-name> --replicas=2
Set --replicas to a value greater than or equal to 2 to enable HA; set --replicas=1 to disable HA.
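A hedged patch-form equivalent, using the controller deployment as an example (controller is one of the standard knative-serving deployments; substitute whichever deployment you want to scale):
$ kubectl patch deployment controller -n knative-serving --type merge -p '{"spec":{"replicas":2}}'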
The activator is managed by a HorizontalPodAutoscaler, so adjust the minReplicas and maxReplicas of its HPA instead:
$ kubectl get hpa activator -n knative-serving -o yaml
$ kubectl patch hpa activator -n knative-serving -p '{"spec":{"minReplicas":9,"maxReplicas":19}}'
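To confirm the change took effect, the activator pods can be listed by label; the app=activator label is an assumption based on a standard install:
$ kubectl get pods -n knative-serving -l app=activator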