
Learning igraph

仲孙鸣
2023-12-01

Reposted from https://blog.csdn.net/NNNNNNNNNNNNY/article/details/53701277

Reference: https://bigdata-ny.github.io/2016/08/12/graph-of-thrones-neo4j-social-network-analysis/

This is a walkthrough of basic igraph usage written in a Jupyter notebook, using Python 3 on Windows; the code has been uploaded to CSDN resources as "ABC of igraph".

For learning igraph, it is strongly recommended to consult the official documentation at http://igraph.org/python/#docs for usage details and tutorials.
Below are the Markdown files exported from the Jupyter notebooks.
1. learn_igraph.md

import csv
# read the edge list (one "u,v" pair per line) from net.data
edges = []
with open('net.data', 'r') as f:
    for row in csv.reader(f.read().splitlines()):
        u, v = row
        edges.append((u, v))
from igraph import Graph as IGraph
# build an undirected graph from the (name, name) tuples
g = IGraph.TupleList(edges, directed=False, vertex_name_attr='name', edge_attrs=None, weights=False)
print(g)
# print each vertex name together with its degree
for p in g.vs:
    print(p["name"], p.degree())
names = g.vs["name"]
IGRAPH UN-- 14 17 --
+ attr: name (v)
+ edges (vertex names):
1--2, 1--3, 2--3, 3--7, 4--5, 4--6, 5--6, 7--6, 7--8, 8--9, 9--10, 9--11,
10--11, 8--12, 12--13, 12--14, 13--14
1 2
2 2
3 3
7 3
4 2
5 2
6 3
8 3
9 3
10 2
11 2
12 3
13 2
14 2
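The per-vertex loop above can also be written as a single call to Graph.degree(); a minimal sketch, assuming the graph g and the names list defined above:

# the same degrees as the loop above, collected into one dict
print(dict(zip(names, g.degree())))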
# closeness centrality of vertex '7' by hand: (n - 1) / sum of shortest-path lengths
paths = g.get_all_shortest_paths("7")
cc = 0
for p in paths:
    print([names[x] for x in p])
    cc += len(p) - 1
print("closeness centrality =", (len(paths) - 1) / float(cc))
['7', '3', '1']
['7', '3', '2']
['7', '3']
['7']
['7', '6', '4']
['7', '6', '5']
['7', '6']
['7', '8']
['7', '8', '9']
['7', '8', '9', '10']
['7', '8', '9', '11']
['7', '8', '12']
['7', '8', '12', '13']
['7', '8', '12', '14']
closeness centrality = 0.48148148148148145
ccvs = []
for p in zip(g.vs, g.closeness()):
    ccvs.append({"name": p[0]["name"], "cc": p[1]})
# print(ccvs)
sorted(ccvs, key=lambda k: k['cc'], reverse=True)
[{'cc': 0.48148148148148145, 'name': '7'},
 {'cc': 0.48148148148148145, 'name': '8'},
 {'cc': 0.37142857142857144, 'name': '3'},
 {'cc': 0.37142857142857144, 'name': '6'},
 {'cc': 0.37142857142857144, 'name': '9'},
 {'cc': 0.37142857142857144, 'name': '12'},
 {'cc': 0.2826086956521739, 'name': '1'},
 {'cc': 0.2826086956521739, 'name': '2'},
 {'cc': 0.2826086956521739, 'name': '4'},
 {'cc': 0.2826086956521739, 'name': '5'},
 {'cc': 0.2826086956521739, 'name': '10'},
 {'cc': 0.2826086956521739, 'name': '11'},
 {'cc': 0.2826086956521739, 'name': '13'},
 {'cc': 0.2826086956521739, 'name': '14'}]
# betweenness of the vertex with index `target`, counted by hand from the shortest paths
sp = []
target = 7
for v in g.vs:
    # print(v, v["name"])
    paths = g.get_all_shortest_paths(v["name"])
    for p in paths:
        if target in p and target != p[0] and target != p[-1]:
            # print(target, p)
            sp.append(p)
# print(sp)
# print(len(sp))
# deduplicate: the path from i to j and the path from j to i are the same path
spbt = 0
tu = []
for x in sp:
    if set((x[0], x[-1])) not in tu:
        tu.append(set((x[0], x[-1])))
        spbt += 1
print("betweenness =", spbt)
betweenness = 51
btvs = []
for p in zip(g.vs, g.betweenness()):
    btvs.append({"name": p[0]["name"], "bt": p[1]})
# print(btvs)
sorted(btvs, key=lambda k: k['bt'], reverse=True)
[{'bt': 51.0, 'name': '7'},
 {'bt': 51.0, 'name': '8'},
 {'bt': 22.0, 'name': '3'},
 {'bt': 22.0, 'name': '6'},
 {'bt': 22.0, 'name': '9'},
 {'bt': 22.0, 'name': '12'},
 {'bt': 0.0, 'name': '1'},
 {'bt': 0.0, 'name': '2'},
 {'bt': 0.0, 'name': '4'},
 {'bt': 0.0, 'name': '5'},
 {'bt': 0.0, 'name': '10'},
 {'bt': 0.0, 'name': '11'},
 {'bt': 0.0, 'name': '13'},
 {'bt': 0.0, 'name': '14'}]
pg = g.pagerank()
# pg = g.pagerank(vertices=None, directed=True, damping=0.85, 
#                 weights=weights, arpack_options=None, 
#                 implementation='prpack', 
#                 niter=1000, eps=0.001)
pgvs = []
for p in zip(g.vs, pg):
    pgvs.append({"name": p[0]["name"], "pg": p[1]})
# print pgvs
sorted(pgvs, key=lambda k: k['pg'], reverse=True)[:10]
[{'name': '3', 'pg': 0.08621177802944507},
 {'name': '6', 'pg': 0.08621177802944507},
 {'name': '9', 'pg': 0.08621177802944506},
 {'name': '12', 'pg': 0.08621177802944506},
 {'name': '7', 'pg': 0.08311761850833196},
 {'name': '8', 'pg': 0.08311761850833196},
 {'name': '2', 'pg': 0.06111470635819448},
 {'name': '5', 'pg': 0.06111470635819448},
 {'name': '14', 'pg': 0.06111470635819448},
 {'name': '1', 'pg': 0.06111470635819447}]
btes = []
for p in zip(g.es(), g.edge_betweenness()):
    e = p[0].tuple
    btes.append({"edge":(names[e[0]],names[e[1]]),"bt":p[1]})
sorted(btes, key=lambda k: k['bt'], reverse=True)
[{'bt': 49.0, 'edge': ('7', '8')},
 {'bt': 33.0, 'edge': ('3', '7')},
 {'bt': 33.0, 'edge': ('7', '6')},
 {'bt': 33.0, 'edge': ('8', '9')},
 {'bt': 33.0, 'edge': ('8', '12')},
 {'bt': 12.0, 'edge': ('1', '3')},
 {'bt': 12.0, 'edge': ('2', '3')},
 {'bt': 12.0, 'edge': ('4', '6')},
 {'bt': 12.0, 'edge': ('5', '6')},
 {'bt': 12.0, 'edge': ('9', '10')},
 {'bt': 12.0, 'edge': ('9', '11')},
 {'bt': 12.0, 'edge': ('12', '13')},
 {'bt': 12.0, 'edge': ('12', '14')},
 {'bt': 1.0, 'edge': ('1', '2')},
 {'bt': 1.0, 'edge': ('4', '5')},
 {'bt': 1.0, 'edge': ('10', '11')},
 {'bt': 1.0, 'edge': ('13', '14')}]
communities = g.community_edge_betweenness(directed=False, weights=None)
print(communities)
print(g.vs["name"])
Dendrogram, 14 elements, 13 merges

0 1 2 3 4 5 6 7 11 12 13 8 9 10
| | | | | | | | |  |  |  | | |
| | | | | | | | |  `--'  | | |
| | | | | | | | |   |    | | |
| | | | | | | | `---'    | `-'
| | | | | | | |   |      |  | 
| | | | | `-' |   |      `--' 
| | | | |  |  |   |       |   
| `-' | `--'  |   |       |   
|  |  |  |    |   |       |   
`--'  `--'    `---'       |   
 |     |        |         |   
 `-----'        `---------'   
    |                |        
    `----------------'
['1', '2', '3', '7', '4', '5', '6', '8', '9', '10', '11', '12', '13', '14']
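community_edge_betweenness() returns a dendrogram rather than a flat partition; to obtain concrete communities you can cut it with as_clustering(), which by default picks the cut with the highest modularity. A minimal sketch, assuming the communities object from above:

clusters = communities.as_clustering()
for i, members in enumerate(clusters):
    print("community", i, ":", [names[v] for v in members])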

2. Games_of_Thrones.md

Analyzing the Game of Thrones network @sumnous

A few months ago, the mathematicians Andrew Beveridge and Jie Shan published a network of thrones in Math Horizons magazine, in which they analyzed the character interaction network of A Storm of Swords, the third volume of the hugely popular A Song of Ice and Fire (the basis of the Game of Thrones TV series). In their paper they describe in detail how they used text analysis and entity extraction to find the characters mentioned in the text and build the character interaction network. They then applied social network analysis algorithms to the network to find its most important characters, and community detection algorithms to find clusters of characters.

The analysis and visualization were done with Gephi, a popular graph analysis tool. More recently, several authors have reproduced the results with Neo4j. Here we analyze the Game of Thrones network with the igraph package instead.

References:

https://www.macalester.edu/~abeverid/thrones.html

https://bigdata-ny.github.io/2016/08/12/graph-of-thrones-neo4j-social-network-analysis/

http://www.lyonwj.com/2016/06/26/graph-of-thrones-neo4j-social-network-analysis/

import csv
# read the weighted edge list, skipping the header row
edges = []
firstline = True
with open('stormofswords.csv', 'r') as f:
    for row in csv.reader(f.read().splitlines()):
        if firstline:
            firstline = False
            continue
        u, v, weight = row
        edges.append((u, v, int(weight)))
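Since the CSV file has a header row, csv.DictReader is a slightly cleaner way to skip it. The sketch below assumes the columns are named Source, Target and Weight (the usual layout of stormofswords.csv, but check your copy of the file):

import csv
edges = []
with open('stormofswords.csv', 'r') as f:
    for row in csv.DictReader(f):  # the header row is consumed automatically
        edges.append((row['Source'], row['Target'], int(row['Weight'])))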

The python-igraph library

Installation:

pip install python-igraph

conda install -c marufr python-igraph=0.7.1.post6

python-igraph manual: http://igraph.org/python/doc/igraph.Graph-class.html

from igraph import Graph as IGraph

g = IGraph.TupleList(edges, directed=True, vertex_name_attr='name', edge_attrs=None, weights=True)
# print g
names = g.vs["name"]
weights = g.es["weight"]
g.is_weighted()
True

Analyzing the network

# number of characters
g.vcount()
107
# Network diameter: the diameter of a network is the longest of all shortest paths (geodesics) in the network.
print(g.diameter())
names = g.vs["name"]
print(g.get_diameter())
[names[x] for x in g.get_diameter()]
7
[38, 19, 24, 5, 0, 2, 9, 86]
['Bronn', 'Gregor', 'Sandor', 'Robert', 'Aemon', 'Samwell', 'Mance', 'Dalla']
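As a quick sanity check, the number of hops along the path returned by get_diameter() should agree with the value reported by diameter(); a small sketch using the graph above:

diameter_path = g.get_diameter()
print(len(diameter_path) - 1)  # number of edges on the longest shortest path
print(g.diameter())            # should print the same value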

Shortest paths

print(g.shortest_paths("Jon","Margaery"))
print("---------------------")
print([names[x] for x in g.get_shortest_paths("Jon","Margaery")[0]])
print("---------------------")
paths = g.get_all_shortest_paths("Jon")
for p in paths:
    print([names[x] for x in p])

[[3]]

['Jon', 'Robert', 'Renly', 'Margaery']

['Jon', 'Aemon']
['Jon', 'Grenn']
['Jon', 'Samwell']
['Jon', 'Robert']
['Jon', 'Alliser']
['Jon', 'Mance']
['Jon']
['Jon', 'Robert', 'Thoros']
['Jon', 'Stannis', 'Balon']
['Jon', 'Stannis', 'Balon', 'Loras']
['Jon', 'Robert', 'Renly', 'Loras']
['Jon', 'Stannis', 'Renly', 'Loras']
['Jon', 'Robert', 'Barristan']
['Jon', 'Meera']
['Jon', 'Theon']
['Jon', 'Stannis']
['Jon', 'Stannis', 'Renly', 'Varys', 'Pycelle']
['Jon', 'Robert', 'Renly', 'Varys', 'Pycelle']
['Jon', 'Robert', 'Renly', 'Varys']
['Jon', 'Stannis', 'Renly', 'Varys']
['Jon', 'Craster']
['Jon', 'Craster', 'Karl']
['Jon', 'Melisandre', 'Davos']
['Jon', 'Stannis', 'Davos']
['Jon', 'Stannis', 'Davos', 'Cressen']
['Jon', 'Melisandre', 'Davos', 'Cressen']
['Jon', 'Stannis', 'Davos', 'Salladhor']
['Jon', 'Melisandre', 'Davos', 'Salladhor']
['Jon', 'Eddison']
['Jon', 'Gilly']
['Jon', 'Stannis', 'Renly']
['Jon', 'Robert', 'Renly']
['Jon', 'Janos']
['Jon', 'Janos', 'Bowen']
['Jon', 'Samwell', 'Bowen']
['Jon', 'Robert', 'Renly', 'Margaery']
['Jon', 'Stannis', 'Renly', 'Margaery']
['Jon', 'Dalla']
['Jon', 'Melisandre']
['Jon', 'Orell']
['Jon', 'Qhorin']
['Jon', 'Rattleshirt']
['Jon', 'Styr']
['Jon', 'Val']
['Jon', 'Ygritte']
['Jon', 'Stannis', 'Renly', 'Loras', 'Olenna']
['Jon', 'Robert', 'Renly', 'Loras', 'Olenna']
['Jon', 'Stannis', 'Balon', 'Loras', 'Olenna']
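Both shortest_paths() and get_shortest_paths() also accept a weights argument if you want the interaction counts taken into account. A hedged sketch is below; note that igraph treats the weight as an edge length, so a high interaction count makes an edge "longer", which may not be the interpretation you want here:

# weighted distance and one weighted shortest path from Jon to Margaery
print(g.shortest_paths("Jon", "Margaery", weights="weight"))
wp = g.get_shortest_paths("Jon", "Margaery", weights="weight")[0]
print([names[x] for x in wp])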

Centrality measures

# Degree Centrality
# Degree centrality is simply the number of connections a node has in the network. In the context of the Game of Thrones graph, a character's degree centrality is the number of other characters that character interacts with.
print(g.maxdegree())
print("---------------------")
for p in g.vs:
    if p.degree() > 15:
        print(p["name"],p.degree())

36

Jaime 24
Robert 18
Tyrion 36
Tywin 22
Arya 19
Cersei 20
Joffrey 18
Jon 26
Catelyn 18
Robb 25
Sansa 26
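The same filter can also be written with igraph's vertex-selector syntax; a sketch equivalent to the loop above:

# select the vertices whose degree is greater than 15
for p in g.vs.select(_degree_gt=15):
    print(p["name"], p.degree())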

# Weighted degree centrality
# (note: the loop below sums the degrees of each character's neighbors; a sketch using the actual edge weights follows the output below)
for p in g.vs:
    weightedDegree = sum([x.degree() for x in p.neighbors()])
    if weightedDegree > 250:
        print(p["name"],weightedDegree)
Jaime 307
Robert 262
Tyrion 357
Tywin 251
Arya 279
Cersei 273
Joffrey 262
Robb 301
Sansa 325
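As noted above, that loop sums the degrees of a character's neighbors. If "weighted degree" should instead mean the sum of the incident edge weights (the vertex strength), igraph provides Graph.strength(); a sketch, with an arbitrary threshold chosen purely for illustration:

# strength = sum of the weights of the edges incident to each vertex
for name, s in zip(g.vs["name"], g.strength(weights="weight")):
    if s > 100:  # arbitrary threshold for illustration
        print(name, s)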
# average neighbor degree (knn)
for p in zip(g.vs,g.knn()[0]):
    if p[1] > 20:
        print(p[0]["name"],p[1])
Aerys 25.0
Balon 21.666666666666668
Jeyne 21.5
Petyr 20.714285714285715
Pycelle 21.25
Qyburn 24.0
Myrcella 21.25
Orell 26.0
Ellaria 21.5
Mace 21.666666666666668
Ramsay 25.0
Chataya 20.5
Doran 36.0
Walton 24.0
# Betweenness Centrality
# The betweenness centrality of a node is the number of shortest paths between all pairs of nodes in the network that pass through that node.
# It is an important metric because it can identify "information brokers" in the network, i.e. nodes that connect otherwise separate clusters.
btvs = []
for p in zip(g.vs, g.betweenness()):
    btvs.append({"name": p[0]["name"], "bt": p[1]})
# print(btvs)
sorted(btvs, key=lambda k: k['bt'], reverse=True)[:10]
[{'bt': 332.9746031746032, 'name': 'Tyrion'},
 {'bt': 244.63571428571433, 'name': 'Samwell'},
 {'bt': 226.2047619047619, 'name': 'Stannis'},
 {'bt': 208.62301587301587, 'name': 'Robert'},
 {'bt': 138.66666666666666, 'name': 'Mance'},
 {'bt': 119.99563492063493, 'name': 'Jaime'},
 {'bt': 114.33333333333334, 'name': 'Sandor'},
 {'bt': 111.26666666666665, 'name': 'Jon'},
 {'bt': 90.65, 'name': 'Janos'},
 {'bt': 64.59761904761905, 'name': 'Aemon'}]
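betweenness() can also use the edge weights, again interpreting them as lengths; a hedged sketch of the weighted ranking:

wbtvs = []
for v, b in zip(g.vs, g.betweenness(weights="weight")):
    wbtvs.append({"name": v["name"], "bt": b})
sorted(wbtvs, key=lambda k: k['bt'], reverse=True)[:10]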
# Closeness centrality
# Closeness centrality is the inverse of the average distance to all other characters in the network.
# Nodes with high closeness centrality are usually well connected within their cluster in the graph, but not necessarily well connected outside it.
ccvs = []
for p in zip(g.vs, g.closeness()):
    ccvs.append({"name": p[0]["name"], "cc": p[1]})
# print(ccvs)
sorted(ccvs, key=lambda k: k['cc'], reverse=True)[:10]
[{'cc': 0.5120772946859904, 'name': 'Tyrion'},
 {'cc': 0.5096153846153846, 'name': 'Sansa'},
 {'cc': 0.5, 'name': 'Robert'},
 {'cc': 0.48847926267281105, 'name': 'Robb'},
 {'cc': 0.48623853211009177, 'name': 'Arya'},
 {'cc': 0.4796380090497738, 'name': 'Jaime'},
 {'cc': 0.4796380090497738, 'name': 'Jon'},
 {'cc': 0.4796380090497738, 'name': 'Stannis'},
 {'cc': 0.4690265486725664, 'name': 'Tywin'},
 {'cc': 0.4608695652173913, 'name': 'Eddard'}]

The PageRank algorithm

PageRank is an algorithm originally used by Google to rank the importance of web pages.

pg = g.pagerank()
# pg = g.pagerank(vertices=None, directed=True, damping=0.85, 
#                 weights=weights, arpack_options=None, 
#                 implementation='prpack', 
#                 niter=1000, eps=0.001)
pgvs = []
for p in zip(g.vs, pg):
    pgvs.append({"name": p[0]["name"], "pg": p[1]})
# print pgvs
sorted(pgvs, key=lambda k: k['pg'], reverse=True)[:10]
[{'name': 'Margaery', 'pg': 0.032841464406084854},
 {'name': 'Samwell', 'pg': 0.025444623421744195},
 {'name': 'Loras', 'pg': 0.024829863132020076},
 {'name': 'Roslin', 'pg': 0.02410662426608122},
 {'name': 'Qhorin', 'pg': 0.02001318236350523},
 {'name': 'Karl', 'pg': 0.019198607452237473},
 {'name': 'Drogo', 'pg': 0.018892299206707614},
 {'name': 'Grenn', 'pg': 0.01762837304517548},
 {'name': 'Pycelle', 'pg': 0.01732234562876074},
 {'name': 'Craster', 'pg': 0.017148610457964602}]
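The commented-out pagerank() call above already shows the weights parameter; to rank the characters by a weighted PageRank you could simply pass the edge weights, as in this sketch (the results will differ from the unweighted list above):

wpg = g.pagerank(weights="weight")  # use interaction counts as edge weights
wpgvs = [{"name": v["name"], "pg": r} for v, r in zip(g.vs, wpg)]
sorted(wpgvs, key=lambda k: k['pg'], reverse=True)[:10]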

Community Detection

Community detection algorithms are used to find clusters in the graph. We will use the walktrap community detection method implemented in igraph to find communities of characters who interact frequently within their community but have little interaction outside of it.

clusters = IGraph.community_walktrap(g, weights="weight").as_clustering()
# community_walktrap: Community detection algorithm of Latapy & Pons, based on random walks.
# Pascal Pons, Matthieu Latapy: Computing communities in large networks using random walks, 
# http://arxiv.org/abs/physics/0512106.
nodes = [{"name": node["name"]} for node in g.vs]
community = {}
for node in nodes:
    idx = g.vs.find(name=node["name"]).index
    node["community"] = clusters.membership[idx]
    if node["community"] not in community:
        community[node["community"]] = [node["name"]]
    else:
        community[node["community"]].append(node["name"])
for c,l in community.items():
    print("Community ", c, ": ", l)
Community  0 :  ['Aemon', 'Grenn', 'Samwell', 'Alliser', 'Mance', 'Jon', 'Craster', 'Karl', 'Eddison', 'Gilly', 'Janos', 'Bowen', 'Dalla', 'Orell', 'Qhorin', 'Rattleshirt', 'Styr', 'Val', 'Ygritte']
Community  1 :  ['Aerys', 'Jaime', 'Robert', 'Tyrion', 'Tywin', 'Amory', 'Oberyn', 'Cersei', 'Gregor', 'Joffrey', 'Balon', 'Loras', 'Brienne', 'Bronn', 'Podrick', 'Lysa', 'Petyr', 'Sansa', 'Elia', 'Ilyn', 'Meryn', 'Pycelle', 'Shae', 'Varys', 'Qyburn', 'Renly', 'Tommen', 'Kevan', 'Margaery', 'Myrcella', 'Jon Arryn', 'Olenna', 'Marillion', 'Robert Arryn', 'Ellaria', 'Mace', 'Chataya', 'Doran', 'Walton']
Community  2 :  ['Arya', 'Anguy', 'Beric', 'Gendry', 'Sandor', 'Thoros', 'Eddard']
Community  3 :  ['Bran', 'Rickon', 'Hodor', 'Jojen', 'Luwin', 'Meera', 'Nan', 'Theon']
Community  4 :  ['Brynden', 'Roose', 'Lothar', 'Walder', 'Catelyn', 'Edmure', 'Hoster', 'Jeyne', 'Robb', 'Roslin', 'Rickard', 'Ramsay']
Community  5 :  ['Belwas', 'Barristan', 'Illyrio', 'Daario', 'Drogo', 'Irri', 'Daenerys', 'Aegon', 'Jorah', 'Kraznys', 'Missandei', 'Rakharo', 'Rhaegar', 'Viserys', 'Worm']
Community  6 :  ['Stannis', 'Davos', 'Cressen', 'Salladhor', 'Melisandre', 'Shireen']
Community  7 :  ['Lancel']
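The clusters object also exposes the quality of the partition; a small sketch that prints each community's size and the overall modularity, assuming the clusters variable defined above:

for i, members in enumerate(clusters):
    print("community", i, "size:", len(members))
print("modularity:", clusters.modularity)  # quality of the detected partition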