A web app for visualizing the connections between Wikipedia pages. Try it at wikipedia.luk.ke.
Start by entering a topic into the text box, for example Cats. A single “node” will be generated, labeled Cat, which appears as a circle on the graph. Click this node to expand it.
Expanding a node creates a new node for each Wikipedia article linked in the first paragraph of the article you clicked. These new nodes will be connected to the node from which they were expanded. For example, expanding Cat will create eight nodes, including Fur, Mammal, Carnivore, and Domestication, each of which will be connected to Cat. These new nodes can also be expanded in the same way. By continuing to expand nodes, you can build a complex web of related topics.
You can also enter multiple articles to compare by pressing Comma, Tab, or Enter after each title you enter.
When you click to expand a node, a request is made to the Wikipedia API to download the full content of the Wikipedia article corresponding to that node. Wikipedia Map uses this data to find the links in the first paragraph of the article.
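As a rough illustration, fetching an article's rendered HTML from the Wikipedia API can be done with the action=parse endpoint. This is a hedged sketch: the helper name and the exact parameters the app sends are assumptions, not the app's actual code.

```javascript
// Hypothetical sketch: build a Wikipedia API URL that returns the
// rendered HTML of an article (action=parse). The app's real request
// may use different parameters.
function articleRequestUrl(title) {
  const params = new URLSearchParams({
    action: 'parse',
    page: title,
    prop: 'text',   // ask for the parsed article HTML
    format: 'json',
    origin: '*',    // needed for CORS when calling from a browser
  });
  return `https://en.wikipedia.org/w/api.php?${params}`;
}
```

Fetching this URL (e.g. with `fetch(articleRequestUrl('Cat'))`) returns JSON whose HTML payload can then be handed to the parsing step described below.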
wikipedia_parse.js uses the DOMParser API to parse Wikipedia pages’ HTML (retrieved from calls to Wikipedia’s API). The parser looks for the <p> tag corresponding to the first paragraph of the article, then extracts all of the <a> tags within that paragraph. It then filters the links to include only those that point to other Wikipedia articles.
You can see this in action yourself in your browser’s console. With Wikipedia Map open, open your browser’s developer tools and type await getSubPages('Cat'). After a moment, you should see an array containing the names of related articles.
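The filtering step can be sketched in isolation. In the browser, the hrefs would come from the <a> tags inside the first <p> found via DOMParser; here the list is passed in directly so the logic runs without a DOM. The function name and exact rules are illustrative assumptions, not the app's actual code.

```javascript
// Hypothetical sketch of the link-filtering step: given the href
// attributes of the <a> tags in the first paragraph, keep only links
// to other Wikipedia articles.
function filterArticleLinks(hrefs) {
  return hrefs
    .filter((href) => href.startsWith('/wiki/'))                  // internal wiki links only
    .map((href) => decodeURIComponent(href.slice('/wiki/'.length)))
    .filter((title) => !title.includes(':'))                      // skip File:, Help:, etc.
    .map((title) => title.replace(/_/g, ' '));                    // URL titles use underscores
}
```

For example, `filterArticleLinks(['/wiki/Cat_anatomy', '/wiki/File:Cat.jpg', 'https://example.com'])` keeps only 'Cat anatomy', discarding the image page and the external link.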
The front-end uses vis.js to display the graph. Every time a node is clicked, the app makes an XMLHttpRequest to the Node.js server. The resulting links are added as new nodes, colored according to their distance from the central node (as described below).
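The expand step can be sketched as a pure function: given the clicked node and the article titles returned for it, build plain node and edge objects of the shape vis.js accepts. The field names and the `level` bookkeeping here are assumptions for illustration; the app's real data structures may differ.

```javascript
// Hypothetical sketch of expanding a node: create one node per linked
// article and one edge back to the node that was expanded.
function expandNode(parent, titles) {
  const nodes = titles.map((title) => ({
    id: title,
    label: title,
    level: parent.level + 1, // distance from the central node, used for coloring
  }));
  const edges = titles.map((title) => ({ from: parent.id, to: title }));
  return { nodes, edges };
}
```

In vis.js, the returned objects would then be added to the network's node and edge DataSets.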
To use the app locally, simply git clone https://github.com/controversial/wikipedia-map/ and open index.html in a web browser. No compilation or server is necessary to run the front-end.
Expanding a node creates nodes for each article linked in the first paragraph of the article you expand. I’ve chosen to use links only from the first paragraph for two reasons:
Nodes are lighter in color the farther they are from the central node. If it took five steps to reach Ancient Greek from Penguin, its node will be lighter than one like Birding, which took only two steps to reach. Thus, a node’s color indicates how closely its article is related to the central topic.
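One simple way to implement this lightening rule is to raise the HSL lightness with each step away from the center, capped so distant nodes remain visible. The hue and the exact numbers here are illustrative assumptions, not the app's actual palette.

```javascript
// Hypothetical sketch: derive a node's color from its distance
// ("level") from the central node; farther nodes come out lighter.
function nodeColor(level) {
  const lightness = Math.min(40 + level * 10, 90); // cap so nodes stay visible
  return `hsl(210, 70%, ${lightness}%)`;
}
```

Under this rule a node two steps out gets `hsl(210, 70%, 60%)`, while one five or more steps out gets the lighter `hsl(210, 70%, 90%)`.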
Hovering the mouse over a node will highlight the path back to the central node. This is not necessarily the shortest path back; it is the path that you took to reach the node.
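This behavior follows naturally if each node remembers which node it was expanded from: the hover highlight just walks those parent links back to the center. The function and data shape below are a hedged sketch, not the app's actual implementation.

```javascript
// Hypothetical sketch of the hover highlight: walk parent links
// (recording which node each node was expanded from) back to the
// central node. This recovers the path you took, which is not
// necessarily the shortest path.
function pathToCenter(nodeId, parentOf) {
  const path = [nodeId];
  let current = nodeId;
  while (parentOf[current] !== undefined) {
    current = parentOf[current];
    path.push(current);
  }
  return path;
}
```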
.gitignore the libraries directory; there’s no reason for third-party code I didn’t write to be checked in.

This project is powered by Wikipedia, whose wealth of information makes this project possible.
The presentation of the graph is powered by vis.js.