
Removing duplicates from a Python list: deduplication methods


Original article: http://www.peterbe.com/plog/uniqifiers-benchmark

Suppose you have a list in Python that looks like this:

['a','b','a']

# or like this:

[1,2,2,2,3,4,5,6,6,6,6]

and you want to remove all duplicates so you get this result:

['a','b']

# or

[1,2,3,4,5,6]

How do you do that? ...the fastest way? I wrote a couple of alternative implementations and ran a quick benchmark loop over them to find out which one was the fastest. (I haven't looked at memory usage.) The slowest function was 78 times slower than the fastest function.

However, there's one very important difference between the various functions: some are order preserving and some are not. In an order-preserving function, apart from removing the duplicates, the output keeps the elements in the same order as the input, e.g. uniqify([1,2,2,3]) == [1,2,3].

Here are the functions:

def f1(seq):
    # not order preserving
    # (relies on Python 2 map() semantics to fill the dict eagerly)
    set = {}
    map(set.__setitem__, seq, [])
    return set.keys()

def f2(seq):
    # order preserving
    checked = []
    for e in seq:
        if e not in checked:
            checked.append(e)
    return checked

def f3(seq):
    # Not order preserving
    keys = {}
    for e in seq:
        keys[e] = 1
    return keys.keys()

def f4(seq):
    # order preserving
    noDupes = []
    [noDupes.append(i) for i in seq if not noDupes.count(i)]
    return noDupes

def f5(seq, idfun=None):
    # order preserving
    if idfun is None:
        def idfun(x): return x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        # in old Python versions:
        # if seen.has_key(marker)
        # but in new ones:
        if marker in seen: continue
        seen[marker] = 1
        result.append(item)
    return result

from sets import Set  # Python 2's sets module, needed by f6

def f6(seq):
    # Not order preserving
    set = Set(seq)
    return list(set)
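To make the order-preserving distinction concrete, here is a small interactive check (my example, not from the original post). Under the Python 2 interpreters these functions target, f3's dictionary key order is arbitrary, so its result is sorted for display:

>>> f2([3, 1, 2, 1, 3])           # order preserving: first occurrences keep their positions
[3, 1, 2]
>>> sorted(f3([3, 1, 2, 1, 3]))   # key order isn't guaranteed, so sort for display
[1, 2, 3]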

And what you've all been waiting for (if you're still reading). Here are the results:

* f2 13.24

* f4 11.73

* f5 0.37

f1 0.18

f3 0.17

f6 0.19

(* order preserving)

Clearly f5 is the "best" solution. Not only is it really, really fast; it's also order preserving and supports an optional transform function, which makes it possible to do this:

>>> a=list('ABeeE')

>>> f5(a)

['A','B','e','E']

>>> f5(a, lambda x: x.lower())

['A','B','e']

Download the benchmark script here
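The script itself isn't reproduced in this repost. For reference, a minimal timing loop in the same spirit (my sketch, not the author's actual script; it assumes the functions above are defined in the same module and Python 2.6+, where timeit.Timer accepts a callable) could look like this:

import timeit

# assumes f1..f6 above are defined in this same module
testdata = [1, 2, 2, 2, 3, 4, 5, 6, 6, 6, 6] * 100

for fn in (f1, f2, f3, f4, f5, f6):
    timer = timeit.Timer(lambda: fn(testdata))
    print fn.__name__, round(timer.timeit(number=1000), 2)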

UPDATE

From the comments I've now added a couple more functions to the benchmark. Some of them can't uniqify a list of objects that aren't hashable unless they're passed a special hashing method. To see all the functions, download the file.
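As an illustration of the "special hashing method" idea (my example, not from the original post), f5's idfun hook already covers the common case of unhashable items such as lists: pass a function that maps each item to something hashable:

>>> data = [[1, 2], [1, 2], [3, 4]]
>>> f5(data, idfun=tuple)   # tuples are hashable, so they can serve as the 'seen' markers
[[1, 2], [3, 4]]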

Here are the new results:

* f5 10.1

* f5b 9.99

* f8 6.49

* f10 6.57

* f11 6.6

f1 4.28

f3 3.55

f6 4.03

f7 2.59

f9 2.58

(f2 and f4 were too slow for this test data.)
