Question:

Why are MongoDB queries so much slower on my production instance than on my local instance?

唐焕
2023-03-14

I'm running a local mongo on my machine, and a MongoDB on an EC2 instance (m5.large with EBS storage).

I understand there will always be some difference (network, etc.) between issuing a request locally and issuing an external request to a production Mongo in the cloud.

However, I'm finding that this (supposedly) trivial query takes far longer than I would expect.

Node.js mongo query:

    let query = {
        date: {
            $gte: from,
        },
        game_id: gameId,
    };

    let documents = await gameStreams.find(query).toArray();
**local**: time_taken 4003.124ms
**production**: time_taken 71187.316ms
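For reference, wall-clock timings like the ones above can be captured with a small wrapper around the driver call. This is just a sketch: `timed` is a hypothetical helper, and the commented usage assumes the official `mongodb` Node.js driver.

```javascript
// Hypothetical helper: time any async operation, such as a driver query.
async function timed(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: time_taken ${ms.toFixed(3)}ms`);
  return result;
}

// Usage against the real collection would look like:
// const documents = await timed('find', () => gameStreams.find(query).toArray());
```

Note this measures client-observed time, so for the production instance it includes network transfer on top of server execution.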
{
    "ns" : "gaming.gamestreams",
    "size" : 4138284702.0,
    "count" : 14011415,
    "avgObjSize" : 295,
    "storageSize" : 900091904,
    "freeStorageSize" : 671744,
    "capped" : false,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1
        },
        "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
        "type" : "file",
        "uri" : "statistics:table:collection-0--7606124955649373726",
        "LSM" : {
            "bloom filter false positives" : 0,
            "bloom filter hits" : 0,
            "bloom filter misses" : 0,
            "bloom filter pages evicted from cache" : 0,
            "bloom filter pages read into cache" : 0,
            "bloom filters in the LSM tree" : 0,
            "chunks in the LSM tree" : 0,
            "highest merge generation in the LSM tree" : 0,
            "queries that could have benefited from a Bloom filter that did not exist" : 0,
            "sleep for LSM checkpoint throttle" : 0,
            "sleep for LSM merge throttle" : 0,
            "total size of bloom filters" : 0
        },
        "block-manager" : {
            "allocations requiring file extension" : 51772,
            "blocks allocated" : 58597,
            "blocks freed" : 7000,
            "checkpoint size" : 899403776,
            "file allocation unit size" : 4096,
            "file bytes available for reuse" : 671744,
            "file magic number" : 120897,
            "file major version number" : 1,
            "file size in bytes" : 900091904,
            "minor version number" : 0
        },
        "btree" : {
            "btree checkpoint generation" : 5456,
            "btree clean tree checkpoint expiration time" : NumberLong(9223372036854775807),
            "column-store fixed-size leaf pages" : 0,
            "column-store internal pages" : 0,
            "column-store variable-size RLE encoded values" : 0,
            "column-store variable-size deleted values" : 0,
            "column-store variable-size leaf pages" : 0,
            "fixed-record size" : 0,
            "maximum internal page key size" : 368,
            "maximum internal page size" : 4096,
            "maximum leaf page key size" : 2867,
            "maximum leaf page size" : 32768,
            "maximum leaf page value size" : 67108864,
            "maximum tree depth" : 4,
            "number of key/value pairs" : 0,
            "overflow pages" : 0,
            "pages rewritten by compaction" : 0,
            "row-store empty values" : 0,
            "row-store internal pages" : 0,
            "row-store leaf pages" : 0
        },
        "cache" : {
            "bytes currently in the cache" : 2587594533.0,
            "bytes dirty in the cache cumulative" : 1180911380.0,
            "bytes read into cache" : 11221545998.0,
            "bytes written from cache" : 4664026027.0,
            "checkpoint blocked page eviction" : 0,
            "data source pages selected for eviction unable to be evicted" : 339,
            "eviction walk passes of a file" : 13253,
            "eviction walk target pages histogram - 0-9" : 3513,
            "eviction walk target pages histogram - 10-31" : 4855,
            "eviction walk target pages histogram - 128 and higher" : 0,
            "eviction walk target pages histogram - 32-63" : 2504,
            "eviction walk target pages histogram - 64-128" : 2381,
            "eviction walks abandoned" : 539,
            "eviction walks gave up because they restarted their walk twice" : 145,
            "eviction walks gave up because they saw too many pages and found no candidates" : 4738,
            "eviction walks gave up because they saw too many pages and found too few candidates" : 2980,
            "eviction walks reached end of tree" : 7126,
            "eviction walks started from root of tree" : 8404,
            "eviction walks started from saved location in tree" : 4849,
            "hazard pointer blocked page eviction" : 14,
            "history store table reads" : 0,
            "in-memory page passed criteria to be split" : 669,
            "in-memory page splits" : 338,
            "internal pages evicted" : 383,
            "internal pages split during eviction" : 5,
            "leaf pages split during eviction" : 1012,
            "modified pages evicted" : 1398,
            "overflow pages read into cache" : 0,
            "page split during eviction deepened the tree" : 1,
            "page written requiring history store records" : 1180,
            "pages read into cache" : 125966,
            "pages read into cache after truncate" : 1,
            "pages read into cache after truncate in prepare state" : 0,
            "pages requested from the cache" : 33299436,
            "pages seen by eviction walk" : 14244088,
            "pages written from cache" : 57731,
            "pages written requiring in-memory restoration" : 45,
            "tracked dirty bytes in the cache" : 0,
            "unmodified pages evicted" : 147454
        },
        "cache_walk" : {
            "Average difference between current eviction generation when the page was last considered" : 0,
            "Average on-disk page image size seen" : 0,
            "Average time in cache for pages that have been visited by the eviction server" : 0,
            "Average time in cache for pages that have not been visited by the eviction server" : 0,
            "Clean pages currently in cache" : 0,
            "Current eviction generation" : 0,
            "Dirty pages currently in cache" : 0,
            "Entries in the root page" : 0,
            "Internal pages currently in cache" : 0,
            "Leaf pages currently in cache" : 0,
            "Maximum difference between current eviction generation when the page was last considered" : 0,
            "Maximum page size seen" : 0,
            "Minimum on-disk page image size seen" : 0,
            "Number of pages never visited by eviction server" : 0,
            "On-disk page image sizes smaller than a single allocation unit" : 0,
            "Pages created in memory and never written" : 0,
            "Pages currently queued for eviction" : 0,
            "Pages that could not be queued for eviction" : 0,
            "Refs skipped during cache traversal" : 0,
            "Size of the root page" : 0,
            "Total number of pages currently in cache" : 0
        },
        "checkpoint-cleanup" : {
            "pages added for eviction" : 3,
            "pages removed" : 0,
            "pages skipped during tree walk" : 1992915,
            "pages visited" : 4892576
        },
        "compression" : {
            "compressed page maximum internal page size prior to compression" : 4096,
            "compressed page maximum leaf page size prior to compression " : 131072,
            "compressed pages read" : 125583,
            "compressed pages written" : 55619,
            "page written failed to compress" : 0,
            "page written was too small to compress" : 2112
        },
        "cursor" : {
            "Total number of entries skipped by cursor next calls" : 0,
            "Total number of entries skipped by cursor prev calls" : 0,
            "Total number of entries skipped to position the history store cursor" : 0,
            "bulk loaded cursor insert calls" : 0,
            "cache cursors reuse count" : 72908,
            "close calls that result in cache" : 0,
            "create calls" : 305,
            "cursor next calls that skip greater than or equal to 100 entries" : 0,
            "cursor next calls that skip less than 100 entries" : 14756445,
            "cursor prev calls that skip greater than or equal to 100 entries" : 0,
            "cursor prev calls that skip less than 100 entries" : 1,
            "insert calls" : 14072870,
            "insert key and value bytes" : 4212628410.0,
            "modify" : 0,
            "modify key and value bytes affected" : 0,
            "modify value bytes modified" : 0,
            "next calls" : 14756445,
            "open cursor count" : 0,
            "operation restarted" : 109,
            "prev calls" : 1,
            "remove calls" : 0,
            "remove key bytes removed" : 0,
            "reserve calls" : 0,
            "reset calls" : 224078,
            "search calls" : 13128942,
            "search history store calls" : 0,
            "search near calls" : 14772,
            "truncate calls" : 0,
            "update calls" : 0,
            "update key and value bytes" : 0,
            "update value size change" : 0
        },
        "reconciliation" : {
            "approximate byte size of timestamps in pages written" : 84197280,
            "approximate byte size of transaction IDs in pages written" : 77480,
            "dictionary matches" : 0,
            "fast-path pages deleted" : 0,
            "internal page key bytes discarded using suffix compression" : 91023,
            "internal page multi-block writes" : 747,
            "internal-page overflow keys" : 0,
            "leaf page key bytes discarded using prefix compression" : 0,
            "leaf page multi-block writes" : 1368,
            "leaf-page overflow keys" : 0,
            "maximum blocks required for a page" : 1,
            "overflow values written" : 0,
            "page checksum matches" : 23657,
            "page reconciliation calls" : 3073,
            "page reconciliation calls for eviction" : 999,
            "pages deleted" : 3,
            "pages written including an aggregated newest start durable timestamp " : 1173,
            "pages written including an aggregated newest stop durable timestamp " : 0,
            "pages written including an aggregated newest stop timestamp " : 0,
            "pages written including an aggregated newest stop transaction ID" : 0,
            "pages written including an aggregated oldest start timestamp " : 1065,
            "pages written including an aggregated oldest start transaction ID " : 19,
            "pages written including an aggregated prepare" : 0,
            "pages written including at least one prepare" : 0,
            "pages written including at least one start durable timestamp" : 31568,
            "pages written including at least one start timestamp" : 31568,
            "pages written including at least one start transaction ID" : 63,
            "pages written including at least one stop durable timestamp" : 0,
            "pages written including at least one stop timestamp" : 0,
            "pages written including at least one stop transaction ID" : 0,
            "records written including a prepare" : 0,
            "records written including a start durable timestamp" : 5262330,
            "records written including a start timestamp" : 5262330,
            "records written including a start transaction ID" : 9685,
            "records written including a stop durable timestamp" : 0,
            "records written including a stop timestamp" : 0,
            "records written including a stop transaction ID" : 0
        },
        "session" : {
            "object compaction" : 0
        },
        "transaction" : {
            "race to read prepared update retry" : 0,
            "update conflicts" : 0
        }
    },
    "nindexes" : 8,
    "indexBuilds" : [],
    "totalIndexSize" : 2627395584.0,
    "totalSize" : 3527487488.0,
    "indexSizes" : {
        "_id_" : 270036992,
        "service_id_1" : 303837184,
        "service_type_1" : 249204736,
        "name_1" : 245424128,
        "date_1" : 226025472,
        "game_id_1" : 247881728,
        "service_id_1_service_type_1_name_1_date_1_game_id_1" : 996036608,
        "game_id_1_date_1" : 88948736
    },
    "scaleFactor" : 1,
    "ok" : 1.0,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1607345809, 1),
        "signature" : {
            "hash" : { "$binary" : "CP8MT7YsL4t+raZn62AZhZdmTFg=", "$type" : "00" },
            "keyId" : NumberLong(6855964983600087045)
        }
    },
    "operationTime" : Timestamp(1607345809, 1)
}

db.getCollection('gamestreams').find({ date: { $gte: new Date('2019-07-03T23:59:59.999Z') }, game_id: '24024' }).explain()

{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "gaming.gamestreams",
        "indexFilterSet" : false,
        "parsedQuery" : {
            "$and" : [ 
                {
                    "game_id" : {
                        "$eq" : "24024"
                    }
                }, 
                {
                    "date" : {
                        "$gte" : ISODate("2019-07-03T23:59:59.999Z")
                    }
                }
            ]
        },
        "queryHash" : "FCE2088F",
        "planCacheKey" : "A564EAFA",
        "winningPlan" : {
            "stage" : "FETCH",
            "inputStage" : {
                "stage" : "IXSCAN",
                "keyPattern" : {
                    "game_id" : 1.0,
                    "date" : 1.0
                },
                "indexName" : "game_id_1_date_1",
                "isMultiKey" : false,
                "multiKeyPaths" : {
                    "game_id" : [],
                    "date" : []
                },
                "isUnique" : false,
                "isSparse" : false,
                "isPartial" : false,
                "indexVersion" : 2,
                "direction" : "forward",
                "indexBounds" : {
                    "game_id" : [ 
                        "[\"24024\", \"24024\"]"
                    ],
                    "date" : [ 
                        "[new Date(1562198399999), new Date(9223372036854775807)]"
                    ]
                }
            }
        },
        "rejectedPlans" : [ 
            {
                "stage" : "FETCH",
                "filter" : {
                    "game_id" : {
                        "$eq" : "24024"
                    }
                },
                "inputStage" : {
                    "stage" : "IXSCAN",
                    "keyPattern" : {
                        "date" : 1
                    },
                    "indexName" : "date_1",
                    "isMultiKey" : false,
                    "multiKeyPaths" : {
                        "date" : []
                    },
                    "isUnique" : false,
                    "isSparse" : false,
                    "isPartial" : false,
                    "indexVersion" : 2,
                    "direction" : "forward",
                    "indexBounds" : {
                        "date" : [ 
                            "[new Date(1562198399999), new Date(9223372036854775807)]"
                        ]
                    }
                }
            }, 
            {
                "stage" : "FETCH",
                "filter" : {
                    "date" : {
                        "$gte" : ISODate("2019-07-03T23:59:59.999Z")
                    }
                },
                "inputStage" : {
                    "stage" : "IXSCAN",
                    "keyPattern" : {
                        "game_id" : 1
                    },
                    "indexName" : "game_id_1",
                    "isMultiKey" : false,
                    "multiKeyPaths" : {
                        "game_id" : []
                    },
                    "isUnique" : false,
                    "isSparse" : false,
                    "isPartial" : false,
                    "indexVersion" : 2,
                    "direction" : "forward",
                    "indexBounds" : {
                        "game_id" : [ 
                            "[\"24024\", \"24024\"]"
                        ]
                    }
                }
            }
        ]
    },
    "serverInfo" : {
        "host" : "mongo-prod-mongodb-0",
        "port" : 27017,
        "version" : "4.4.1",
        "gitVersion" : "ad91a93a5a31e175f5cbf8c69561e788bbc55ce1"
    },
    "ok" : 1.0,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1607346119, 1),
        "signature" : {
            "hash" : { "$binary" : "XMuLYE5NVhHktX9wZ79LteksXFs=", "$type" : "00" },
            "keyId" : NumberLong(6855964983600087045)
        }
    },
    "operationTime" : Timestamp(1607346119, 1)
}
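The winning plan above is already an IXSCAN on `game_id_1_date_1`, so the production query plan itself looks fine; what the default `explain()` does not show is how much work the server actually did. Running the query with `explain('executionStats')` on both instances and comparing the server-side numbers would separate server execution time from network/transfer time. A sketch of a helper to summarize the result (the function name is hypothetical; the commented call assumes the Node.js driver):

```javascript
// Hypothetical helper: extract the fields worth comparing between the
// local and production runs from an explain("executionStats") result.
function summarizeExplain(explainResult) {
  const s = explainResult.executionStats;
  return {
    executionTimeMillis: s.executionTimeMillis, // server-side time only
    totalKeysExamined: s.totalKeysExamined,
    totalDocsExamined: s.totalDocsExamined,     // docs fetched after the IXSCAN
    nReturned: s.nReturned,
  };
}

// Against the live collection (Node.js driver):
// const res = await gameStreams.find(query).explain('executionStats');
// console.log(summarizeExplain(res));
```

If `executionTimeMillis` is similar on both instances, the gap is dominated by transfer; if it is much higher on production, the FETCH stage is likely hitting EBS instead of cache.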

The data above was taken from mongo; please let me know if you need any other information.

Note: using a cursor with batchSize(10000) helps a little, bringing the production time down to around 40 seconds, but that still seems wrong compared to the performance I get locally.
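One reason a larger batch size helps: a cursor streams results in batches, and each batch costs a network round trip, which matters far more against a remote EC2 host than against localhost. A rough illustration (the helper function and the document counts are illustrative, not taken from the question):

```javascript
// Rough model: each cursor batch costs one network round trip, so larger
// batches amortize per-round-trip latency over more documents.
function roundTrips(matchingDocs, batchSize) {
  return Math.ceil(matchingDocs / batchSize);
}

// e.g. if 500,000 documents match the query:
//   roundTrips(500000, 101)   -> 4951 round trips
//   roundTrips(500000, 10000) -> 50 round trips
// Driver usage from the note above:
// const documents = await gameStreams.find(query).batchSize(10000).toArray();
```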

1 answer

宿景曜
2023-03-14

There are many possible reasons, network being one of them.

  1. Are you fetching a large amount of data?
  2. If so, try using limit() to cap how many documents are fetched at a time; you will need pagination later.
  3. If you don't need pagination, then I'd suggest excluding unneeded fields before fetching the Mongo documents. That will help the query run faster.
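Suggestions 2 and 3 can be sketched as follows (assuming the official `mongodb` Node.js driver; `pageOptions` and the projected field names are illustrative, not from the question):

```javascript
// Hypothetical helper: translate a zero-based page number into the
// skip/limit pair the driver expects.
function pageOptions(page, pageSize) {
  return { skip: page * pageSize, limit: pageSize };
}

// Usage: fetch one page, projecting away everything but the needed fields
// so fewer bytes cross the network.
// const { skip, limit } = pageOptions(0, 1000);
// const docs = await gameStreams
//   .find(query, { projection: { date: 1, game_id: 1 } })
//   .skip(skip)
//   .limit(limit)
//   .toArray();
```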