The `--kubeconfig ~/.kube/sentry` flag used throughout this post points at a specific kubeconfig file; with it set, every command targets that particular Kubernetes cluster. Drop the flag if you don't need it.
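If you would rather not repeat the flag on every command, exporting KUBECONFIG has the same effect (a minimal sketch):

export KUBECONFIG=~/.kube/sentry   # kubectl and helm will now target this cluster
kubectl get nodes                  # quick connectivity check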
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator
helm repo update
helm search repo sentry
# NAME           CHART VERSION   APP VERSION   DESCRIPTION
# stable/sentry  4.2.0           9.1.2         Sentry is a cross-platform crash reporting and ...
# If stable/sentry shows up, the mirror is working.
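Optionally, dump the chart's full set of configurable values first, to see everything that `--set` can override:

helm show values stable/sentry > sentry-values.yaml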
kubectl --kubeconfig ~/.kube/sentry create namespace sentry
helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--wait
Value | Description | Required
---|---|---
`--kubeconfig ~/.kube/sentry` | kubeconfig file; selects which cluster to operate on | true
`user.email` | admin account email | true
`user.password` | admin account password | true
`ingress.hostname` | Sentry's domain (events must be reported against this domain) | true
`email.host`, `email.port` | outgoing SMTP server address and port | true
`email.user`, `email.password` | the mailbox Sentry sends mail from | true
`email.use_tls` | whether to use TLS; check your mail provider's SMTP settings | true
`redis.primary.persistence.storageClass` | StorageClass for Redis (optional; I set it because the cluster had no PV/PVC) | false
`postgresql.persistence.storageClass` | StorageClass for PostgreSQL (optional; same reason) | false
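If the `--set` chain gets unwieldy, the same settings can live in a values file. This is just my restating of the flags above (the file name is arbitrary, credentials are the placeholders from earlier):

# sentry-values.yaml — equivalent to the --set flags above
persistence:
  enabled: true
user:
  email: ltz@qq.com
  password: ltz
ingress:
  enabled: true
  hostname: sentry.ltz.com
service:
  type: ClusterIP
email:
  host: smtp.exmail.qq.com
  port: 465
  user: ltz@ltz.com
  password: ltz
  use_tls: false

helm --kubeconfig ~/.kube/sentry install sentry stable/sentry -n sentry -f sentry-values.yaml --wait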
Once the Deployments and the three StatefulSets are all running, wait a moment and then open the domain in a browser. If you ever need to remove the release:

helm --kubeconfig ~/.kube/sentry uninstall sentry -n sentry
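At any point you can check how the release's workloads are doing:

kubectl --kubeconfig ~/.kube/sentry get deploy,statefulset,pods -n sentry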
After installing, my Redis and PostgreSQL pods refused to start, reporting:

Pending: pod has unbound immediate PersistentVolumeClaims

In other words, the PVCs could not be bound to any PV, so the pods could not be scheduled.
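You can see the same message by inspecting the PVCs and the stuck pod (the pod name below is a guess at the chart's naming; use whatever `kubectl get pods -n sentry` shows you):

kubectl --kubeconfig ~/.kube/sentry get pvc -n sentry
kubectl --kubeconfig ~/.kube/sentry describe pod sentry-postgresql-0 -n sentry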
The fix is to deploy rancher's local-path provisioner. Its YAML is too long to inline here, so it is pasted at the end of this post. From the directory containing it, run:
kubectl --kubeconfig ~/.kube/sentry apply -f local-path-storage.yaml
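Check that the provisioner pod is running before moving on:

kubectl --kubeconfig ~/.kube/sentry get pods -n local-path-storage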
Next, set local-path as the default StorageClass:
kubectl --kubeconfig ~/.kube/sentry patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
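Verify that the annotation took effect; local-path should now be marked (default):

kubectl --kubeconfig ~/.kube/sentry get storageclass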
Then reinstall, adding the storageClass parameters:
helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--set redis.primary.persistence.storageClass=local-path \
--set postgresql.persistence.storageClass=local-path \
--wait
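After this reinstall, the PVCs should show Bound and the Redis/PostgreSQL pods should finally start:

kubectl --kubeconfig ~/.kube/sentry get pvc -n sentry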
Normally the database schema is initialized automatically on first startup. Mine was not, so I had to exec into the sentry-web pod and run the initialization by hand:
kubectl --kubeconfig ~/.kube/sentry exec -it -n sentry $(kubectl --kubeconfig ~/.kube/sentry get pods -n sentry | grep sentry-web | awk '{print $1}') -- bash
sentry upgrade
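`sentry upgrade` runs the database migrations and, by default, prompts you to create an admin user. Sentry 9's CLI also accepts a `--noinput` flag to skip the prompts; verify with `sentry upgrade --help` on your version:

# inside the sentry-web pod
sentry upgrade --noinput   # migrate without interactive prompts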
Likewise, if the admin user was not created automatically, you can create it by hand inside sentry-web:
kubectl --kubeconfig ~/.kube/sentry exec -it -n sentry $(kubectl --kubeconfig ~/.kube/sentry get pods -n sentry | grep sentry-web | awk '{print $1}') -- bash
sentry createuser
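`sentry createuser` also works non-interactively; these flags exist in Sentry 9's CLI (the credentials here are the placeholders from the install command):

sentry createuser --email ltz@qq.com --password ltz --superuser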
The email parameters in the install command above must be correct, and the environment variables inside the pods must match them. sentry-web's environment variables:
- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "465"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "false"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com
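To confirm which values the running pod actually received:

kubectl --kubeconfig ~/.kube/sentry exec -n sentry $(kubectl --kubeconfig ~/.kube/sentry get pods -n sentry | grep sentry-web | awk '{print $1}') -- env | grep SENTRY_EMAIL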
sentry-worker's environment variables:
- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "587"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "true"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com
- name: SENTRY_EMAIL_USE_SSL
  value: "false"
If mail fails to send, check the sentry-worker logs. Note that SENTRY_SERVER_EMAIL is taken from sentry-web's environment variables! After changing any of these variables, both applications must be restarted!!
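Restarting can be done by deleting the pods or, more cleanly, with a rollout restart (the deployment names below are the chart's defaults; adjust if yours differ):

kubectl --kubeconfig ~/.kube/sentry rollout restart deployment sentry-web sentry-worker -n sentry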
local-path-storage.yaml (the manifest referenced above):

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
- apiGroups: [ "" ]
  resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
  verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
  resources: [ "endpoints", "persistentvolumes", "pods" ]
  verbs: [ "*" ]
- apiGroups: [ "" ]
  resources: [ "events" ]
  verbs: [ "create", "patch" ]
- apiGroups: [ "storage.k8s.io" ]
  resources: [ "storageclasses" ]
  verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.19
        imagePullPolicy: IfNotPresent
        command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: config-volume
        configMap:
          name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p)
          absolutePath=$OPTARG
          ;;
        s)
          sizeInBytes=$OPTARG
          ;;
        m)
          volMode=$OPTARG
          ;;
      esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p)
          absolutePath=$OPTARG
          ;;
        s)
          sizeInBytes=$OPTARG
          ;;
        m)
          volMode=$OPTARG
          ;;
      esac
    done
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox
        imagePullPolicy: IfNotPresent