Question:

ElasticSearch: How to find unassigned shards and allocate them?

郎弘壮
2023-03-14

I run a 1-node, 1-shard, 1-replica architecture on machines with low-end hardware. I have to keep the Elasticsearch heap size at 20% of total memory, and depending on the hardware configuration I index between 1K and 1M documents into Elasticsearch. I have different types of machines with 2GB to 16GB of RAM, but since they are 32-bit systems I can only use 300MB to 1.5GB at most as the heap size.

For some reason I don't understand, Elasticsearch creates some indices with unassigned shards and sets the cluster health to red. I have tried to recover and allocate the shards without creating a new node and transferring data to it, since I am not supposed to do that. I have also tried to reconfigure index routing allocation with the following command:

curl -XPUT 'localhost:9200/_settings' -d '{
  "index.routing.allocation.disable_allocation": false
}'
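
For reference, a minimal sketch (my assumption: the elasticsearch-py client against this single 1.x node) of what finding the unassigned shards and force-allocating them back onto the node could look like; the node name "mynode" is taken from the node info below, and allow_primary accepts possible data loss if the primary copy is gone:

from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200

# Each _cat/shards line looks like "<index> <shard> <prirep> <state> ..."
for line in es.cat.shards().splitlines():
    fields = line.split()
    index_name, shard, state = fields[0], fields[1], fields[3]
    if state == "UNASSIGNED":
        es.cluster.reroute(body={
            "commands": [{
                "allocate": {
                    "index": index_name,
                    "shard": int(shard),
                    "node": "mynode",        # node name from the node info below
                    "allow_primary": True    # may lose data if the primary copy is gone
                }
            }]
        })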

Here is my node info:

{
  name: mynode
  transport_address: inet[/192.168.1.4:9300]
  host: myhost
  ip: 127.0.0.1
  version: 1.0.0
  build: a46900e
  http_address: inet[/192.168.1.4:9200]
  thrift_address: /192.168.1.4:9500
  attributes: {
    master: true
  }
  settings: {
    threadpool: {
      search: {
        type: fixed
        size: 600
        queue_size: 10000
      }
      bulk: {
        type: fixed
        queue_size: 10000
        size: 600
      }
      index: {
        type: fixed
        queue_size: 10000
        size: 600
      }
    }
    node: {
      data: true
      master: true
      name: mynode
    }
    index: {
      mapper: {
        dynamic: false
      }
      routing: {
        allocation: {
          disable_allocation: false
        }
      }
      store: {
        fs: {
          lock: none
        }
        compress: {
          stored: true
        }
      }
      number_of_replicas: 0
      analysis: {
        analyzer: {
          string_lowercase: {
            filter: lowercase
            tokenizer: keyword
          }
        }
      }
      cache: {
        field: {
          type: soft
          expire: 24h
          max_size: 50000
        }
      }
      number_of_shards: 1
    }
    bootstrap: {
      mlockall: true
    }
    gateway: {
      expected_nodes: 1
    }
    transport: {
      tcp: {
        compress: true
      }
    }
    name: mynode
    pidfile: /var/run/elasticsearch.pid
    path: {
      data: /var/lib/es/data
      work: /tmp/es
      home: /opt/elasticsearch
      logs: /var/log/elasticsearch
    }
    indices: {
      memory: {
        index_buffer_size: 80%
      }
    }
    cluster: {
      routing: {
        allocation: {
          node_initial_primaries_recoveries: 1
          node_concurrent_recoveries: 1
        }
      }
      name: my-elasticsearch
    }
    max_open_files: false
    discovery: {
      zen: {
        ping: {
          multicast: {
            enabled: false
          }
        }
      }
    }
  }
  os: {
    refresh_interval: 1000
    available_processors: 4
    cpu: {
      vendor: Intel
      model: Core(TM) i3-3220 CPU @ 3.30GHz
      mhz: 3292
      total_cores: 4
      total_sockets: 4
      cores_per_socket: 16
      cache_size_in_bytes: 3072
    }
    mem: {
      total_in_bytes: 4131237888
    }
    swap: {
      total_in_bytes: 4293591040
    }
  }
  process: {
    refresh_interval: 1000
    id: 24577
    max_file_descriptors: 65535
    mlockall: true
  }
  jvm: {
    pid: 24577
    version: 1.7.0_55
    vm_name: Java HotSpot(TM) Server VM
    vm_version: 24.55-b03
    vm_vendor: Oracle Corporation
    start_time: 1405942239741
    mem: {
      heap_init_in_bytes: 845152256
      heap_max_in_bytes: 818348032
      non_heap_init_in_bytes: 19136512
      non_heap_max_in_bytes: 117440512
      direct_max_in_bytes: 818348032
    }
    gc_collectors: [
      ParNew
      ConcurrentMarkSweep
    ]
    memory_pools: [
      Code Cache
      Par Eden Space
      Par Survivor Space
      CMS Old Gen
      CMS Perm Gen
    ]
  }
  thread_pool: {
    generic: {
      type: cached
      keep_alive: 30s
    }
    index: {
      type: fixed
      min: 600
      max: 600
      queue_size: 10k
    }
    get: {
      type: fixed
      min: 4
      max: 4
      queue_size: 1k
    }
    snapshot: {
      type: scaling
      min: 1
      max: 2
      keep_alive: 5m
    }
    merge: {
      type: scaling
      min: 1
      max: 2
      keep_alive: 5m
    }
    suggest: {
      type: fixed
      min: 4
      max: 4
      queue_size: 1k
    }
    bulk: {
      type: fixed
      min: 600
      max: 600
      queue_size: 10k
    }
    optimize: {
      type: fixed
      min: 1
      max: 1
    }
    warmer: {
      type: scaling
      min: 1
      max: 2
      keep_alive: 5m
    }
    flush: {
      type: scaling
      min: 1
      max: 2
      keep_alive: 5m
    }
    search: {
      type: fixed
      min: 600
      max: 600
      queue_size: 10k
    }
    percolate: {
      type: fixed
      min: 4
      max: 4
      queue_size: 1k
    }
    management: {
      type: scaling
      min: 1
      max: 5
      keep_alive: 5m
    }
    refresh: {
      type: scaling
      min: 1
      max: 2
      keep_alive: 5m
    }
  }
  network: {
    refresh_interval: 5000
    primary_interface: {
      address: 192.168.1.2
      name: eth0
      mac_address: 00:90:0B:2F:A9:08
    }
  }
  transport: {
    bound_address: inet[/0:0:0:0:0:0:0:0:9300]
    publish_address: inet[/192.168.1.4:9300]
  }
  http: {
    bound_address: inet[/0:0:0:0:0:0:0:0:9200]
    publish_address: inet[/192.168.1.4:9200]
    max_content_length_in_bytes: 104857600
  }
  plugins: [
    {
      name: transport-thrift
      version: NA
      description: Exports elasticsearch REST APIs over thrift
      jvm: true
      site: false
    }
  ]
}

The worst case would be to find the unassigned shards and delete the indices they belong to, but I would like to prevent unassigned shards from being created in the first place.
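
For what it is worth, a rough sketch of that worst case under the same assumptions (elasticsearch-py, a single 1.x node): collect the indices that own UNASSIGNED shards and delete them so they can be recreated and reindexed:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Find every index that still owns an UNASSIGNED shard
unassigned_indices = set()
for line in es.cat.shards().splitlines():
    fields = line.split()
    if fields[3] == "UNASSIGNED":
        unassigned_indices.add(fields[0])

# Delete those indices so they can be recreated and reindexed
for index_name in unassigned_indices:
    es.indices.delete(index=index_name)
    print "%s has been deleted because it had unassigned shards" % index_name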

Any ideas?

1 Answer

曹景铄
2023-03-14

I came up with a reasonable solution; here is how to apply it in Python. See the comment blocks in the code; any improvements are appreciated:

# Snippet from a class method (Python 2, elasticsearch-py); self.__conn is the
# Elasticsearch client and utilities is the project's own configuration helper.
import json
import re

type_pattern = re.compile(r"""
        (?P<type>\w*?)$ # Capture doc_type from index name
        """, re.UNICODE|re.VERBOSE)
# Get mapping content from mapping file
mapping_file = utilities.system_config_path + "mapping.json"
server_mapping = None

try:
    with open(mapping_file, "r") as mapper:
        mapping = json.loads(unicode(mapper.read()))
    # Loop all indices to get and find mapping
    all_indices = [index for index in self.__conn.indices.get_aliases().iterkeys()]
    for index in all_indices:
        # Gather doc_type from index name
        doc_type = type_pattern.search(index).groupdict("type")['type']

        index_mapping = self.__conn.indices.get_mapping(index=index)
        default_mapping = [key for key in [key for key in mapping[doc_type].itervalues()][0]["properties"].iterkeys()]

        if len(index_mapping) > 0:
            # Build the list of field names from the server-side mapping so it can be
            # compared against the default mapping
            server_mapping = [key for key in [key for key in index_mapping[index]["mappings"].itervalues()][0]["properties"].iterkeys()]

            # If the index's status is red, delete and recreate it
            if self.__conn.cluster.health(index=index)["status"] == "red":
                # Then delete index
                self.__conn.indices.delete(index)
                print "%s has been deleted because of it was in status RED" % index

                self.__conn.indices.create(
                    index=index,
                    body={
                        'settings': {
                            # just one shard, no replicas for testing
                            'number_of_shards': 1,
                            'number_of_replicas': 0,
                        }
                    },
                    # ignore already existing index
                    ignore=400
                )
                print "%s has been created." % index

                self.__conn.indices.put_mapping(
                        index=index,
                        doc_type=doc_type,
                        body=mapping[doc_type]
                    )
                print "%s mapping has been inserted." % index

            # Check whether the server mapping differs from what it is supposed to be
            elif server_mapping and len(set(server_mapping) - set(default_mapping)) > 0:
                # Delete recent mapping from server regarding index
                self.__conn.indices.delete_mapping(index=index, doc_type=doc_type)
                print "%s mapping has been deleted." % index

                # Put default mapping in order to match data store columns
                self.__conn.indices.put_mapping(
                    index=index,
                    doc_type=doc_type,
                    body=mapping[doc_type])
                print "%s mapping has been inserted." % index
        else:
            # The index is reachable but has no mapping at all, so push the default mapping into it
            print "%s has no mapping. Thus the default mapping will be pushed into it." % index

            self.__conn.indices.put_mapping(
                index=index,
                doc_type=doc_type,
                body=mapping[doc_type])
            print "%s mapping has been inserted." % index
        return "Database has been successfully repaired."
 except:
     # Any exception you would like here
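
Note that the snippet assumes mapping.json is keyed by doc_type, so that mapping[doc_type] can be passed straight to put_mapping and its single value exposes a "properties" dict. A hypothetical minimal shape (the doc_type and field names here are illustrative only):

# Hypothetical shape of mapping.json once loaded with json.loads(); the script
# reads it as mapping[doc_type] -> {doc_type: {"properties": {...}}}
mapping = {
    "logs": {
        "logs": {
            "properties": {
                "message": {"type": "string", "analyzer": "string_lowercase"},
                "created_at": {"type": "date"},
            }
        }
    }
}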