Question:

Sharded MongoDB stalls randomly

颛孙霖
2023-03-14

I have set up a sharded MongoDB cluster in Kubernetes using hashed sharding. I first created the config server replica set, then two shard replica sets, and finally created the mongos to connect to the sharded cluster.

I followed this link to set up the sharded MongoDB: https://docs.mongodb.com/manual/tutorial/deploy-sharded-cluster-hashed-sharding/

After creating the mongos, I enabled sharding for the database and sharded the collections using the hashed sharding strategy.
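
For reference, the commands I ran were equivalent to the sketch below (the database and collection names are placeholders, not my real ones):

    // run in the mongo shell connected to mongos
    sh.enableSharding("exampleDB")
    // shard the collection on a hashed _id key
    sh.shardCollection("exampleDB.exampleCollection", { "_id": "hashed" })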

After all this setup, I was able to connect to the mongos, add some data to a few collections in the database, and check that the data was distributed across the different shards.
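
The distribution check was done with something like the following (again, the collection name is a placeholder):

    // prints per-shard document counts and sizes for a sharded collection
    db.exampleCollection.getShardDistribution()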

The problem I am facing is that connections stall at random when I try to access MongoDB from my Java Spring Boot project. However, once a connection has been established for a particular query, that query does not stall for the next few attempts. After some idle time, if I issue requests to MongoDB again, it starts stalling again.

Note: MongoDB is hosted on "DS2 v2" VMs. The cluster has 4 nodes: 1 for the config server, 2 for the shards, and 1 for the mongos.


  • In one of the links, they ask to set a proper shard key for all collections, since it has an impact on MongoDB's performance. There are several factors to consider before choosing the right shard key, and I considered all of them before choosing mine. I read through this link on selecting a shard key: https://www.mongodb.com/blog/post/on-selecting-a-shard-key-for-mongodb

    Another solution I came across was to set ShardingTaskExecutorPoolMaxConnecting, which limits the rate at which the mongos nodes add connections to the connection pool. I tried setting it to 20, 5, 100 and 150, but none of these solved the stalling I am facing. This is the link: https://jira.mongodb.org/browse/SERVER-29237

    I tried tuning other parameters such as ShardingTaskExecutorPoolMinSize and taskExecutorPoolSize. Even that did not solve the stalling.

    I also set --serviceExecutor to adaptive. (A sketch of how these mongos flags were passed appears after the mongos YAML below.)

    I increased wiredTigerCacheSizeGB from 0.25 to 2. This did not have any impact on the latency issue either.

    1) The YAML file for the Service and Deployment of the MongoDB config server is:

    apiVersion: v1
    items:
    - apiVersion: v1
      kind: Service
      metadata:
        annotations:
          kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
          kompose.version: 1.12.0 (0ab07be)
        creationTimestamp: null
        labels:
          io.kompose.service: mongo-conf-service
        name: mongo-conf-service
      spec:
        type: LoadBalancer
        ports:
        - name: "27017"
          port: 27017
          targetPort: 27017
        selector:
          io.kompose.service: mongo-conf-service
      status:
        loadBalancer: {}
    - apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        annotations:
          kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
          kompose.version: 1.12.0 (0ab07be)
        creationTimestamp: null
        labels:
          io.kompose.service: mongo-conf-service
        name: mongo-conf-service
      spec:
        replicas: 1
        strategy: {}
        template:
          metadata:
            creationTimestamp: null
            labels:
              io.kompose.service: mongo-conf-service
          spec:
            containers:
            - env:
              - name: MONGO_INITDB_ROOT_USERNAME
                value: #Username
              - name: MONGO_INITDB_ROOT_PASSWORD
                value: #Password
              command:
              - "mongod"
              - "--storageEngine"
              - "wiredTiger"
              - "--port"
              - "27017"
              - "--bind_ip"
              - "0.0.0.0"
              - "--wiredTigerCacheSizeGB"
              - "2"
              - "--configsvr"
              - "--replSet"
              - "ConfigDBRepSet"
              image: #MongoImageName
              name: mongo-conf-service
              ports:
              - containerPort: 27017
              resources: {}
              volumeMounts:
              - name: mongo-conf
                mountPath: /data/db
            restartPolicy: Always
            volumes:
              - name: mongo-conf
                persistentVolumeClaim:
                  claimName: mongo-conf
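
    For completeness, after this Deployment came up, the config server replica set was initiated with something equivalent to the sketch below; the host name comes from the Service above, and this is a reconstruction rather than the exact command I ran:

    // run in the mongo shell against the config server mongod
    rs.initiate({
      _id: "ConfigDBRepSet",
      configsvr: true,
      members: [
        { _id: 0, host: "mongo-conf-service:27017" }
      ]
    })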
    

    2) The YAML file for the Service and Deployment of the shard MongoDB is:

    apiVersion: v1
    items:
    - apiVersion: v1
      kind: Service
      metadata:
        annotations:
          kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
          kompose.version: 1.12.0 (0ab07be)
        creationTimestamp: null
        labels:
          io.kompose.service: mongo-shard
        name: mongo-shard
      spec:
        type: LoadBalancer
        ports:
        - name: "27017"
          port: 27017
          targetPort: 27017
        selector:
          io.kompose.service: mongo-shard
      status:
        loadBalancer: {}
    - apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        annotations:
          kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
          kompose.version: 1.12.0 (0ab07be)
        creationTimestamp: null
        labels:
          io.kompose.service: mongo-shard
        name: mongo-shard
      spec:
        replicas: 1
        strategy: {}
        template:
          metadata:
            creationTimestamp: null
            labels:
              io.kompose.service: mongo-shard
          spec:
            containers:
            - env:
              - name: MONGO_INITDB_ROOT_USERNAME
                value: #Username
              - name: MONGO_INITDB_ROOT_PASSWORD
                value: #Password
              command:
              - "mongod"
              - "--storageEngine"
              - "wiredTiger"
              - "--port"
              - "27017"
              - "--bind_ip"
              - "0.0.0.0"
              - "--wiredTigerCacheSizeGB"
              - "2"
              - "--shardsvr"
              - "--replSet"
              - "Shard1RepSet"
              image: #MongoImage
              name: mongo-shard
              ports:
              - containerPort: 27017
              resources: {}
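
    Likewise, the shard replica set was initiated and then registered with the cluster along these lines (host names are placeholders; as the sh.status() output further below shows, in my cluster the shards ended up registered by their LoadBalancer IPs):

    // run in the mongo shell against the shard mongod
    rs.initiate({ _id: "Shard1RepSet", members: [{ _id: 0, host: "mongo-shard:27017" }] })
    // then, from a mongo shell connected to mongos:
    sh.addShard("Shard1RepSet/mongo-shard:27017")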
    

    3) The YAML file for the mongos server is:

    apiVersion: v1
    items:
    - apiVersion: v1
      kind: Service
      metadata:
        annotations:
          kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
          kompose.version: 1.12.0 (0ab07be)
        creationTimestamp: null
        labels:
          io.kompose.service: mongos-service
        name: mongos-service
      spec:
        type: LoadBalancer
        ports:
        - name: "27017"
          port: 27017
          targetPort: 27017
        selector:
          io.kompose.service: mongos-service
      status:
        loadBalancer: {}
    - apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        annotations:
          kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
          kompose.version: 1.12.0 (0ab07be)
        creationTimestamp: null
        labels:
          io.kompose.service: mongos-service
        name: mongos-service
      spec:
        replicas: 1
        strategy: {}
        template:
          metadata:
            creationTimestamp: null
            labels:
              io.kompose.service: mongos-service
          spec:
            containers:
            - env:
              - name: MONGO_INITDB_ROOT_USERNAME
                value: #USername
              - name: MONGO_INITDB_ROOT_PASSWORD
                value: #Password
              command:
                - "numactl"
                - "--interleave=all"
                - "mongos"
                - "--port"
                - "27017"
                - "--bind_ip"
                - "0.0.0.0"
                - "--configdb"
                - "ConfigDBRepSet/mongo-conf-service:27017"
              image: #MongoImageName
              name: mongos-service
              ports:
              - containerPort: 27017
              resources: {}
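
    For reference, this is roughly how the tuning flags mentioned earlier were passed to mongos. The values shown are just examples of the ones I tried, not a recommendation:

    command:
      - "numactl"
      - "--interleave=all"
      - "mongos"
      - "--port"
      - "27017"
      - "--bind_ip"
      - "0.0.0.0"
      - "--configdb"
      - "ConfigDBRepSet/mongo-conf-service:27017"
      # connection pool tuning, passed as setParameter options
      - "--setParameter"
      - "ShardingTaskExecutorPoolMaxConnecting=20"
      - "--setParameter"
      - "ShardingTaskExecutorPoolMinSize=10"
      - "--setParameter"
      - "taskExecutorPoolSize=4"
      # adaptive service executor (a MongoDB 3.6/4.0 option)
      - "--serviceExecutor"
      - "adaptive"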
    
    • The logs of the mongos server are:
    2019-08-05T05:27:52.942+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:5058 #308807 (79 connections now open)
    2019-08-05T05:27:52.964+0000 I ACCESS   [conn308807] Successfully authenticated as principal Assist_Random_Workspace on Random_Workspace from client 10.0.0.0:5058
    2019-08-05T05:27:54.267+0000 I NETWORK  [worker-3] end connection 10.0.0.0:52954 (78 connections now open)
    2019-08-05T05:27:54.269+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:52988 #308808 (79 connections now open)
    2019-08-05T05:27:54.275+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:7174 #308809 (80 connections now open)
    2019-08-05T05:27:54.279+0000 I ACCESS   [conn308809] SASL SCRAM-SHA-1 authentication failed for Assist_Refactored_Code_DB on Refactored_Code_DB from client 10.0.0.5:7174 ; UserNotFound: User "Assist_Refactored_Code_DB@Refactored_Code_DB" not found
    2019-08-05T05:27:54.281+0000 I NETWORK  [worker-1] end connection 10.0.0.5:7174 (79 connections now open)
    2019-08-05T05:27:54.342+0000 I NETWORK  [worker-1] end connection 10.0.0.6:57391 (78 connections now open)
    2019-08-05T05:27:54.343+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:57527 #308810 (79 connections now open)
    2019-08-05T05:27:55.080+0000 I NETWORK  [worker-3] end connection 10.0.0.0:56021 (78 connections now open)
    2019-08-05T05:27:55.081+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:56057 #308811 (79 connections now open)
    2019-08-05T05:27:56.054+0000 I NETWORK  [worker-1] end connection 10.0.0.0:59137 (78 connections now open)
    2019-08-05T05:27:56.055+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:59184 #308812 (79 connections now open)
    2019-08-05T05:27:59.268+0000 I NETWORK  [worker-1] end connection 10.0.0.5:52988 (78 connections now open)
    2019-08-05T05:27:59.270+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:53047 #308813 (79 connections now open)
    2019-08-05T05:27:59.343+0000 I NETWORK  [worker-3] end connection 10.0.0.6:57527 (78 connections now open)
    2019-08-05T05:27:59.344+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:57672 #308814 (79 connections now open)
    2019-08-05T05:28:00.080+0000 I NETWORK  [worker-3] end connection 10.0.1.1:56057 (78 connections now open)
    2019-08-05T05:28:00.081+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:56116 #308815 (79 connections now open)
    2019-08-05T05:28:01.054+0000 I NETWORK  [worker-3] end connection 10.0.0.0:59184 (78 connections now open)
    2019-08-05T05:28:01.058+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:59225 #308816 (79 connections now open)
    2019-08-05T05:28:01.763+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:7173 #308817 (80 connections now open)
    2019-08-05T05:28:01.768+0000 I ACCESS   [conn308817] SASL SCRAM-SHA-1 authentication failed for Assist_Sharded_Database on Sharded_Database from client 10.0.0.0:7173 ; UserNotFound: User "Assist_Sharded_Database@Sharded_Database" not found
    2019-08-05T05:28:01.770+0000 I NETWORK  [worker-3] end connection 10.0.0.0:7173 (79 connections now open)
    2019-08-05T05:28:04.271+0000 I NETWORK  [worker-3] end connection 10.0.0.0:53047 (78 connections now open)
    2019-08-05T05:28:04.272+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:53083 #308818 (79 connections now open)
    2019-08-05T05:28:04.283+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:7105 #308819 (80 connections now open)
    2019-08-05T05:28:04.287+0000 I ACCESS   [conn308819] SASL SCRAM-SHA-1 authentication failed for Assist_Refactored_Code_DB on Refactored_Code_DB from client 10.0.0.0:7105 ; UserNotFound: User "Assist_Refactored_Code_DB@Refactored_Code_DB" not found
    
    

    In the above logs, there are authentication failures for Assist_Refactored_Code_DB (a database I did not create). I am not sure why this authentication fails, nor in which mongo URI the username and password should be mentioned. I am also not sure whether this is one of the reasons for the stalling. This is the only error log I can find on the config server and mongos; all other logs, including the shard mongo logs, show no errors.
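
    For what it's worth, my understanding is that the credentials belong in the connection string itself, with authSource naming the database in which the user was created. A minimal sketch of the URI shape (all values are placeholders):

    mongodb://<username>:<password>@mongos-service:27017/Sharded_Database?authSource=Sharded_Database&maxIdleTimeMS=60000

    The maxIdleTimeMS option is included only because the stalls seem idle-related; whether it actually helps here is an assumption on my part.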

    The logs of the Shard1RepSet are:

    2019-08-06T10:48:08.926+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:58010 #782186 (10 connections now open)
    2019-08-06T10:48:11.585+0000 I NETWORK  [conn782183] end connection 10.0.0.0:64938 (9 connections now open)
    2019-08-06T10:48:11.586+0000 I NETWORK  [listener] connection accepted from 10.0.0.7:64989 #782187 (10 connections now open)
    2019-08-06T10:48:11.765+0000 I NETWORK  [conn782184] end connection 10.0.0.0:62126 (9 connections now open)
    2019-08-06T10:48:11.766+0000 I NETWORK  [listener] connection accepted from 10.0.0.6:62302 #782188 (10 connections now open)
    2019-08-06T10:48:13.763+0000 I NETWORK  [conn782185] end connection 10.0.0.0:52907 (9 connections now open)
    2019-08-06T10:48:13.763+0000 I NETWORK  [listener] connection accepted from 10.0.0.1:52947 #782189 (10 connections now open)
    2019-08-06T10:48:13.926+0000 I NETWORK  [conn782186] end connection 10.0.0.0:58010 (9 connections now open)
    2019-08-06T10:48:13.927+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:58051 #782190 (10 connections now open)
    2019-08-06T10:48:16.586+0000 I NETWORK  [conn782187] end connection 10.0.0.0:64989 (9 connections now open)
    2019-08-06T10:48:16.587+0000 I NETWORK  [listener] connection accepted from 10.0.0.0:65054 #782191 (10 connections now open)
    2019-08-06T10:48:16.766+0000 I NETWORK  [conn782188] end connection 10.0.0.6:62302 (9 connections now open)
    2019-08-06T10:48:16.767+0000 I NETWORK  [listener] connection accepted from 10.0.0.6:62445 #782192 (10 connections now open)
    2019-08-06T10:48:18.765+0000 I NETWORK  [conn782189] end connection 10.0.2.1:52947 (9 connections now open)
    2019-08-06T10:48:18.765+0000 I NETWORK  [listener] connection accepted from 10.0.2.1:52989 #782193 (10 connections now open)
    2019-08-06T10:48:18.927+0000 I NETWORK  [conn782190] end connection 10.0.0.4:58051 (9 connections now open)
    2019-08-06T10:48:18.929+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:58100 #782194 (10 connections now open)
    2019-08-06T10:48:21.588+0000 I NETWORK  [conn782191] end connection 10.0.0.7:65054 (9 connections now open)
    2019-08-06T10:48:21.589+0000 I NETWORK  [listener] connection accepted from 10.0.0.7:65105 #782195 (10 connections now open)
    2019-08-06T10:48:21.767+0000 I NETWORK  [conn782192] end connection 10.0.0.6:62445 (9 connections now open)
    2019-08-06T10:48:21.768+0000 I NETWORK  [listener] connection accepted from 10.0.0.6:62581 #782196 (10 connections now open)
    2019-08-06T10:48:23.766+0000 I NETWORK  [conn782193] end connection 10.0.2.1:52989 (9 connections now open)
    2019-08-06T10:48:23.766+0000 I NETWORK  [listener] connection accepted from 10.0.2.1:53030 #782197 (10 connections now open)
    2019-08-06T10:48:23.928+0000 I NETWORK  [conn782194] end connection 10.0.0.4:58100 (9 connections now open)
    2019-08-06T10:48:23.930+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:58145 #782198 (10 connections now open)
    2019-08-06T10:48:26.589+0000 I NETWORK  [conn782195] end connection 10.0.0.7:65105 (9 connections now open)
    2019-08-06T10:48:26.590+0000 I NETWORK  [listener] connection accepted from 10.0.0.7:65148 #782199 (10 connections now open)
    2019-08-06T10:48:26.768+0000 I NETWORK  [conn782196] end connection 10.0.0.6:62581 (9 connections now open)
    2019-08-06T10:48:26.770+0000 I NETWORK  [listener] connection accepted from 10.0.0.6:62746 #782200 (10 connections now open)
    2019-08-06T10:48:28.766+0000 I NETWORK  [conn782197] end connection 10.0.2.1:53030 (9 connections now open)
    2019-08-06T10:48:28.767+0000 I NETWORK  [listener] connection accepted from 10.0.2.1:53081 #782201 (10 connections now open)
    2019-08-06T10:48:28.930+0000 I NETWORK  [conn782198] end connection 10.0.0.4:58145 (9 connections now open)
    2019-08-06T10:48:28.931+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:58217 #782202 (10 connections now open)
    2019-08-06T10:48:31.590+0000 I NETWORK  [conn782199] end connection 10.0.0.7:65148 (9 connections now open)
    
    

    The logs of the ConfigDBRepSet are:

    2019-08-06T10:52:18.962+0000 I NETWORK  [conn781553] end connection 10.0.0.4:60257 (10 connections now open)
    2019-08-06T10:52:18.963+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:60306 #781557 (11 connections now open)
    2019-08-06T10:52:21.296+0000 I NETWORK  [conn781554] end connection 10.0.0.7:50910 (10 connections now open)
    2019-08-06T10:52:21.297+0000 I NETWORK  [listener] connection accepted from 10.0.0.7:50956 #781558 (11 connections now open)
    2019-08-06T10:52:22.380+0000 I NETWORK  [conn781555] end connection 10.0.0.5:54999 (10 connections now open)
    2019-08-06T10:52:22.381+0000 I NETWORK  [listener] connection accepted from 10.0.0.5:55043 #781559 (11 connections now open)
    2019-08-06T10:52:22.554+0000 I NETWORK  [conn781556] end connection 10.0.3.1:57125 (10 connections now open)
    2019-08-06T10:52:22.555+0000 I NETWORK  [listener] connection accepted from 10.0.3.1:57258 #781560 (11 connections now open)
    2019-08-06T10:52:23.963+0000 I NETWORK  [conn781557] end connection 10.0.0.4:60306 (10 connections now open)
    2019-08-06T10:52:23.964+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:60341 #781561 (11 connections now open)
    2019-08-06T10:52:26.298+0000 I NETWORK  [conn781558] end connection 10.0.0.7:50956 (10 connections now open)
    2019-08-06T10:52:26.299+0000 I NETWORK  [listener] connection accepted from 10.0.0.7:50998 #781562 (11 connections now open)
    2019-08-06T10:52:27.382+0000 I NETWORK  [conn781559] end connection 10.0.0.5:55043 (10 connections now open)
    2019-08-06T10:52:27.383+0000 I NETWORK  [listener] connection accepted from 10.0.0.5:55086 #781563 (11 connections now open)
    2019-08-06T10:52:27.555+0000 I NETWORK  [conn781560] end connection 10.0.3.1:57258 (10 connections now open)
    2019-08-06T10:52:27.556+0000 I NETWORK  [listener] connection accepted from 10.0.3.1:57415 #781564 (11 connections now open)
    2019-08-06T10:52:28.964+0000 I NETWORK  [conn781561] end connection 10.0.0.4:60341 (10 connections now open)
    2019-08-06T10:52:28.965+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:60406 #781565 (11 connections now open)
    2019-08-06T10:52:31.299+0000 I NETWORK  [conn781562] end connection 10.0.0.7:50998 (10 connections now open)
    2019-08-06T10:52:31.300+0000 I NETWORK  [listener] connection accepted from 10.0.0.7:51043 #781566 (11 connections now open)
    2019-08-06T10:52:32.383+0000 I NETWORK  [conn781563] end connection 10.0.0.5:55086 (10 connections now open)
    2019-08-06T10:52:32.384+0000 I NETWORK  [listener] connection accepted from 10.0.0.5:55136 #781567 (11 connections now open)
    2019-08-06T10:52:32.556+0000 I NETWORK  [conn781564] end connection 10.0.3.1:57415 (10 connections now open)
    2019-08-06T10:52:32.556+0000 I NETWORK  [listener] connection accepted from 10.0.3.1:57535 #781568 (11 connections now open)
    2019-08-06T10:52:33.966+0000 I NETWORK  [conn781565] end connection 10.0.0.4:60406 (10 connections now open)
    2019-08-06T10:52:33.967+0000 I NETWORK  [listener] connection accepted from 10.0.0.4:60461 #781569 (11 connections now open)
    

    The output of sh.status() is:

    --- Sharding Status --- 
      sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d3a7c7d035b4525a7de5eaa")
      }
      shards:
            {  "_id" : "Shard1RepSet",  "host" : "Shard1RepSet/94.245.111.162:27017",  "state" : 1 }
            {  "_id" : "Shard2RepSet",  "host" : "Shard2RepSet/13.74.42.35:27017",  "state" : 1 }
      active mongoses:
            "4.0.10" : 1
      autosplit:
            Currently enabled: yes
      balancer:
            Currently enabled:  yes
            Currently running:  no
            Failed balancer rounds in last 5 attempts:  0
            Migration Results for the last 24 hours: 
                    2 : Success
      databases:
    #Databases sharding Information
    

    I expect the sharded MongoDB not to stall at any point in time and to work just like a standalone MongoDB.

    Can someone guide me in solving this stalling problem with the sharded MongoDB?

  • 1 answer in total

    邵劲
    2023-03-14

    First of all, if you are using the mongo image provided on Docker Hub, you should not specify both env and command, because command overrides the image's entrypoint, and in this case the entrypoint is what handles creating the user and password, so user creation never runs. Check how the command field corresponds to the entrypoint in your container runtime.
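
    A minimal sketch of that fix for the config server Deployment, assuming the official mongo image. In Kubernetes, command overrides the image's ENTRYPOINT while args only overrides CMD, so moving the mongod invocation from command to args keeps the entrypoint script that creates the root user:

    containers:
    - name: mongo-conf-service
      image: mongo:4.0   # assumed: the official Docker Hub image
      env:
      - name: MONGO_INITDB_ROOT_USERNAME
        value: #Username
      - name: MONGO_INITDB_ROOT_PASSWORD
        value: #Password
      # no "command:" here, so the image's docker-entrypoint.sh still runs and
      # can create the root user from the env vars above (this happens only on
      # first initialization of an empty data volume)
      args:
      - "mongod"
      - "--configsvr"
      - "--replSet"
      - "ConfigDBRepSet"
      - "--bind_ip"
      - "0.0.0.0"
      - "--port"
      - "27017"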
