
How are dictionaries implemented in Python?

姬阳曜
2023-03-14
Question

I'd like to know how Python dictionaries work behind the scenes, especially the dynamic aspects. When a dictionary is created, what is its initial size? If we update it with many elements, I suppose the hash table needs to grow. I also suppose the hashes need to be recomputed to fit the size of the new, larger hash table while keeping some logical relationship with the previous one?

As you can see, I don't fully understand the internals of this structure.


Answer:

(Parts of) this answer come from the article "Upgrade your Python skills: Examining the Dictionary". For more information about Python hash tables, see "Python Hash Tables Under The Hood".

When we create a dictionary, what is its initial size?

As we can see in the source code:
    /* PyDict_MINSIZE is the starting size for any new dict.
     * 8 allows dicts with no more than 5 active entries; experiments suggested
     * this suffices for the majority of dicts (consisting mostly of usually-small
     * dicts created to pass keyword arguments).
     * Making this 8, rather than 4 reduces the number of resizes for most
     * dictionaries, without any significant extra memory use.
     */
    #define PyDict_MINSIZE 8
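
You can observe this growth indirectly with sys.getsizeof: the exact byte counts depend on the Python version and platform, but the reported size jumps whenever the underlying table is (re)allocated. A minimal sketch:

    import sys

    d = {}
    last = sys.getsizeof(d)
    print(0, last)
    for i in range(20):
        d[i] = None
        size = sys.getsizeof(d)
        if size != last:  # a jump means the hash table was (re)allocated
            print(i + 1, size)
            last = size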

Suppose we update it with many key-value pairs; I imagine the hash table needs to grow. I imagine the hash function needs to be recomputed to fit the size of the new, larger hash table while keeping some logical relationship with the previous one.

Every time we add a key, CPython checks the size of the hash table. If the table is two-thirds full, it resizes the hash table by GROWTH_RATE (currently set to used*3) and re-inserts all the existing elements:

    /* GROWTH_RATE. Growth rate upon hitting maximum load.
     * Currently set to used*3.
     * This means that dicts double in size when growing without deletions,
     * but have more head room when the number of deletions is on a par with the
     * number of insertions.  See also bpo-17563 and bpo-33205.
     *
     * GROWTH_RATE was set to used*4 up to version 3.2.
     * GROWTH_RATE was set to used*2 in version 3.3.0
     * GROWTH_RATE was set to used*2 + capacity/2 in 3.4.0-3.6.0.
     */
    #define GROWTH_RATE(d) ((d)->ma_used*3)

USABLE_FRACTION is the two thirds I mentioned above:

    /* USABLE_FRACTION is the maximum dictionary load.
     * Increasing this ratio makes dictionaries more dense resulting in more
     * collisions.  Decreasing it improves sparseness at the expense of spreading
     * indices over more cache lines and at the cost of total memory consumed.
     *
     * USABLE_FRACTION must obey the following:
     *     (0 < USABLE_FRACTION(n) < n) for all n >= 2
     *
     * USABLE_FRACTION should be quick to calculate.
     * Fractions around 1/2 to 2/3 seem to work well in practice.
     */
    #define USABLE_FRACTION(n) (((n) << 1)/3)
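
Putting those two macros together, the resize behaviour can be sketched roughly in Python like this (the real logic lives in dictresize() in Objects/dictobject.c and differs in detail between versions; this is only an illustration):

    PyDict_MINSIZE = 8

    def needs_resize(used, table_size):
        """True once the number of used entries exceeds two thirds of the slots (USABLE_FRACTION)."""
        return used > (2 * table_size) // 3

    def next_table_size(used):
        """Smallest power of two that is at least GROWTH_RATE, i.e. used*3."""
        new_size = PyDict_MINSIZE
        while new_size < used * 3:
            new_size <<= 1
        return new_size

For example, a dict holding 5 items that needs to grow moves from an 8-slot table to a 16-slot one (next_table_size(5) == 16), which matches the "dicts double in size when growing without deletions" note in the comment above.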

In addition, the index is computed as:

    i = (size_t)hash & mask;

where mask is HASH_TABLE_SIZE - 1.
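
For example (leaning on the CPython detail that the hash of a small non-negative int is the int itself), the key 10 would land in slot 2 of an 8-slot table:

    >>> hash(10) & (8 - 1)   # mask = table size - 1
    2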

Hash collisions are handled as follows:

    perturb >>= PERTURB_SHIFT;
    i = (i*5 + perturb + 1) & mask;

As explained in the source code:

    The first half of collision resolution is to visit table indices via this
    recurrence:
        j = ((5*j) + 1) mod 2**i
    For any initial j in range(2**i), repeating that 2**i times generates each
    int in range(2**i) exactly once (see any text on random-number generation for
    proof).  By itself, this doesn't help much:  like linear probing (setting
    j += 1, or j -= 1, on each loop trip), it scans the table entries in a fixed
    order.  This would be bad, except that's not the only thing we do, and it's
    actually *good* in the common cases where hash keys are consecutive.  In an
    example that's really too small to make this entirely clear, for a table of
    size 2**3 the order of indices is:
        0 -> 1 -> 6 -> 7 -> 4 -> 5 -> 2 -> 3 -> 0 [and here it's repeating]
    If two things come in at index 5, the first place we look after is index 2,
    not 6, so if another comes in at index 6 the collision at 5 didn't hurt it.
    Linear probing is deadly in this case because there the fixed probe order
    is the *same* as the order consecutive keys are likely to arrive.  But it's
    extremely unlikely hash codes will follow a 5*j+1 recurrence by accident,
    and certain that consecutive hash codes do not.
    The other half of the strategy is to get the other bits of the hash code
    into play.  This is done by initializing a (unsigned) vrbl "perturb" to the
    full hash code, and changing the recurrence to:
        perturb >>= PERTURB_SHIFT;
        j = (5*j) + 1 + perturb;
        use j % 2**i as the next table index;
    Now the probe sequence depends (eventually) on every bit in the hash code,
    and the pseudo-scrambling property of recurring on 5*j+1 is more valuable,
    because it quickly magnifies small differences in the bits that didn't affect
    the initial index.  Note that because perturb is unsigned, if the recurrence
    is executed often enough perturb eventually becomes and remains 0.  At that
    point (very rarely reached) the recurrence is on (just) 5*j+1 again, and
    that's certain to find an empty slot eventually (since it generates every int
    in range(2**i), and we make sure there's always at least one empty slot).
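
The probe sequence can be sketched in Python as follows (PERTURB_SHIFT is 5 in CPython; this is only an illustration of the recurrence quoted above, not the actual implementation):

    def probe_sequence(h, table_size):
        """Yield the slot indices visited for hash value h in a power-of-two table."""
        PERTURB_SHIFT = 5          # same constant CPython uses
        mask = table_size - 1
        perturb = h
        i = h & mask               # initial index
        for _ in range(table_size):
            yield i
            perturb >>= PERTURB_SHIFT
            i = (i * 5 + perturb + 1) & mask

With perturb starting at 0, list(probe_sequence(0, 8)) reproduces the order shown in the comment above: [0, 1, 6, 7, 4, 5, 2, 3].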

