
A First Look at TigerGraph's Core Features

郭浩穰
2023-12-01

This post is a quick look at a commercial product that markets itself as a "third-generation" graph database, claiming to support both OLAP and OLTP scenarios. The vendor has coined a new term for its architecture: NPG (Native Parallel Graph). It does smell a bit like marketing copy inventing vocabulary... o(╯□╰)o

Since TigerGraph is not open source, we can only learn about its core design from the officially published materials. The commentary interleaved with the quotes below is my own thinking.

A Native Distributed Graph

Its data store holds nodes, links, and their attributes. Some graph database products on the market are really wrappers built on top of a more generic NoSQL data store. This virtual graph strategy has a double penalty when it comes to performance.

The graph engine stores nodes, links, and their attributes natively, that is, data is laid out exactly the way it is modeled at the upper layer. This differs from some other graph databases that build a graph model on top of a NoSQL storage engine; TigerGraph calls that approach a "virtual graph" and argues it carries a built-in performance penalty. (Titan, don't run, yes, we are talking about you.)

Note that Neo4j is also a native graph with index-free adjacency. Apparently there is no shortcut here: for a graph engine to run fast it must minimize disk I/O and network I/O, and native storage plus in-memory processing is the most natural way to get there, unless the industry comes up with a fundamentally new storage technology.

Compact Storage with Fast Access

Internally hash indices are used to reference nodes and links. In Big-O terms, our average access time is O(1) and our average index update time is also O(1).

Users can set parameters that specify how much of the available memory may be used for holding the graph. If the full graph does not fit in memory, then the excess is stored on disk. Best performance is achieved when the full graph fits in memory, of course.

Data values are stored in encoded formats that effectively compress the data. The compression factor varies with the graph structure and data, but typical compression factors are between 2x and 10x. Compression has two advantages: First, a larger amount of graph data can fit in memory and in CPU cache. Such compression reduces not only the memory footprint, but also CPU cache misses, speeding up overall query performance. Second, for users with very large graphs, hardware costs are reduced.

In general, decompression is needed only for displaying the data. When values are used internally, often they may remain encoded and compressed.

Internally, hash indices are used to reference nodes and links; average access time is O(1), and so is average index update time.
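
As a rough mental model (my own sketch, not TigerGraph's actual data structures), the O(1) access and update claims correspond to ordinary hash-map lookups keyed by vertex id:

```python
# Minimal sketch of a hash-indexed vertex/edge store; illustrative only,
# not TigerGraph's real engine. Dict operations give O(1) average access
# and O(1) average update.

class GraphStore:
    def __init__(self):
        self.vertices = {}   # vertex_id -> attribute dict
        self.adjacency = {}  # vertex_id -> list of (neighbor_id, edge_attrs)

    def add_vertex(self, vid, **attrs):
        self.vertices[vid] = attrs            # O(1) average insert/update

    def add_edge(self, src, dst, **attrs):
        self.adjacency.setdefault(src, []).append((dst, attrs))

    def neighbors(self, vid):
        return self.adjacency.get(vid, [])    # O(1) average access

g = GraphStore()
g.add_vertex("Tom", age=30)
g.add_vertex("Dan", age=28)
g.add_edge("Tom", "Dan", connect_day="2017-06-03")
print(g.neighbors("Tom"))  # -> [('Dan', {'connect_day': '2017-06-03'})]
```

A real engine adds persistence, compression, and concurrency control on top of this, but the asymptotics are the same story.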

Overall, users can configure how much memory the graph may occupy; if the full graph does not fit in memory, the excess spills to disk. Performance is best when the whole graph fits in memory (no surprise there: no I/O overhead).

Typically, compression (encoding) achieves a factor of 2x to 10x, which brings two advantages:

1. After compression, more of a large graph fits in memory and in CPU cache, shortening memory access paths and raising cache hit rates, which speeds up queries;

2. The smaller memory footprint lowers hardware costs.

In general, data is decompressed only for display (say, when rendering nodes or edges with many attributes); for internal computation, values can often stay in their encoded, compressed form.

This set of features is quite interesting: the strategy attacks performance from the data-compression and engineering angles.

The compression part in particular deserves a closer look. From Shannon's "A Mathematical Theory of Communication" we know that achievable compression depends on the distribution of the data, so I doubt the 2-10x factor holds for every dataset... The idea is reminiscent of Google's Protocol Buffers: translate upper-layer data into a compact engine-level format, accepting some decoding overhead.
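
To make the Protocol Buffers analogy concrete, here is a sketch of varint encoding, the trick protobuf uses to store small integers in fewer bytes (my own example, not TigerGraph's actual codec):

```python
# Varint encoding in the Protocol Buffers style: 7 data bits per byte,
# high bit set means "more bytes follow". Small values take fewer bytes,
# trading a little decode work for space. A sketch of the general idea,
# not TigerGraph's codec.

def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # final byte
            return bytes(out)

def decode_varint(data: bytes) -> int:
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            break
        shift += 7
    return result

print(len(encode_varint(300)))            # 2 bytes instead of a fixed 8
print(decode_varint(encode_varint(300)))  # 300
```

This is exactly why the achievable ratio is data-dependent: a column of small, skewed integers compresses beautifully, while uniformly random 64-bit values barely compress at all.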

Parallelism and Shared Values

TigerGraph also excels at parallelism, employing an MPP (massively parallel processing) design architecture throughout.

The nature of graph queries is to “follow the links.”

Ask each counter to do its share of the work, and then combine their results in the end.

TigerGraph was designed for MPP (massively parallel processing) from the start, using a parallel processing model to speed up graph queries.

Because the engine uses native graph storage, traversal and iterative computation can start directly from the physical storage layout and "follow the links". Presumably data is reached in memory through direct pointer access, which beats going through extra I/O or a mapping layer.

In addition, during parallel processing the engine supports multiple in-memory processing units, such as counters, sharing data and then aggregating their results.
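
The "each counter does its share, then combine" pattern can be sketched as partial aggregation over shards (a hypothetical example of the pattern, not TigerGraph internals):

```python
# Each worker counts the out-degree over its own shard of the edge list
# in parallel; the partial counts are combined at the end. A sketch of
# the counter-sharing pattern, not TigerGraph internals.

from concurrent.futures import ThreadPoolExecutor

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a"), ("a", "d")]

def count_out_degree(shard):
    partial = {}
    for src, _dst in shard:
        partial[src] = partial.get(src, 0) + 1
    return partial

shards = [edges[0::2], edges[1::2]]  # split the edge list across workers
with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(count_out_degree, shards))

totals = {}
for partial in partials:  # combine the per-worker results
    for src, n in partial.items():
        totals[src] = totals.get(src, 0) + n

print(totals)  # {'a': 3, 'b': 1, 'c': 1}
```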

Storage and Processing Engines Written in C++

TigerGraph has been carefully designed to use memory efficiently and to release unused memory. Careful memory management contributes to TigerGraph’s ability to traverse many links, both in terms of depth and breadth, in a single query.

Many other graph database products are written in Java, which has pros and cons. Java programs run inside a Java Virtual Machine (JVM). The JVM takes care of memory management and garbage collection (freeing up memory that is no longer needed). While this is convenient, it is difficult for the programmer to optimize memory usage or to control when unused memory becomes available.

Not much to argue with here: of all the roads they chose the hardest one, implementing the whole database in a relatively low-level language, C++. Compared with products implemented mainly in Java, such as Neo4j and OrientDB, running faster is to be expected.

The catch: engineers who are truly fluent in C++ are hard to hire... o(╥﹏╥)o

GSQL Graph Query Language

TigerGraph also has its own graph querying and update language, GSQL.

TigerGraph designed its own graph query and update language, GSQL, playing a role similar to Neo4j's Cypher.

Honestly, it is getting hard to keep up; every graph database vendor rolls its own language. Hopefully someone steps up to unify graph query standards, the way standard SQL did for relational databases.

MPP Computational Model

To reiterate what we have revealed above, the TigerGraph graph is both a storage model and a computational model. Each node and link can be associated with a compute function. Therefore, each node or link acts as a parallel unit of storage and computation simultaneously. This would be unachievable using a generic NoSQL data store or without the use of accumulators.

MPP design shows up in both the storage model and the computational model. Each node or link can carry a compute function, so each node or link acts simultaneously as a parallel unit of storage and computation. Products that implement a graph model on top of a generic NoSQL store cannot do this.
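
A vertex-centric "compute function attached to each vertex" model with per-vertex accumulators can be sketched like this (illustrative only, not TigerGraph's engine):

```python
# Every vertex runs the same compute function and writes into per-vertex
# accumulators, so conceptually each vertex is both a storage unit and a
# compute unit. Illustrative sketch, not TigerGraph's engine.

adjacency = {"a": ["b", "c"], "b": ["c"], "c": []}

recvd = {v: 0 for v in adjacency}  # per-vertex SumAccum-style accumulator

def compute(vertex):
    # the compute function attached to each vertex: send 1 to each neighbor
    for neighbor in adjacency[vertex]:
        recvd[neighbor] += 1

for v in adjacency:  # in a real MPP engine these would run in parallel
    compute(v)

print(recvd)  # in-degree of each vertex: {'a': 0, 'b': 1, 'c': 2}
```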

Automatic Partitioning

TigerGraph is designed to automatically partition the graph data across a cluster of servers, and still perform quickly. The hash index is used to determine not only the within-server data location but also which-server. All the links that connect out from a given node are stored on the same server.

Partitioning is automatic, and computation over the partitioned data still performs well.

The internal hash index is used both for addressing data within a server and for determining which server holds it.

This strategy looks a bit like the design of Microsoft Research Asia's Graph Engine...
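
The which-server decision can be sketched as hashing a vertex id to a server, with every outgoing edge co-located with its source vertex, as the quote describes (an assumed scheme for illustration, not TigerGraph's actual placement logic):

```python
# Hash partitioning sketch: hashing a vertex id picks its server, and all
# links that connect out from a vertex are stored on that same server.
# An assumed scheme for illustration, not TigerGraph's placement logic.

import hashlib

NUM_SERVERS = 3

def server_of(vertex_id: str) -> int:
    # stable hash (Python's built-in hash() is randomized per process)
    digest = hashlib.md5(vertex_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SERVERS

edges = [("Tom", "Dan"), ("Tom", "Jenny"), ("Dan", "Jenny")]
placement = {s: [] for s in range(NUM_SERVERS)}
for src, dst in edges:
    placement[server_of(src)].append((src, dst))  # edge lives with its source

for s, stored in placement.items():
    print(s, stored)
```

One consequence of this layout is that expanding a vertex's out-edges never requires a cross-server lookup, which matches the "follow the links" access pattern.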

Distributed Computation Mode

In distributed query mode, all servers are asked to work on the query; each server’s actual participation is on an as-needed basis. When a traversal path crosses from server A to server B, the minimal amount of information that server B needs to know is passed to it. Since server B already knows about the overall query request, it can easily fit in its contribution.

In distributed query mode, all servers participate in the query, each on an as-needed basis.

When a traversal crosses from server A to server B, B receives only the minimal information it needs.

Now this is interesting. My guess is that it ties back to the hash index mentioned under automatic partitioning, since the hash index also records server location.

I suspect this stage of processing may be partly serial: if the traversal is dynamic, the data that must cross machines is not known, or only partially known, in advance.
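
My reading of "minimal information passed to server B" can be sketched as shipping only the frontier vertex ids that cross a partition boundary during a traversal (pure speculation on my part, not TigerGraph's protocol):

```python
# BFS across two "servers": each server expands only the frontier vertices
# it owns, and only the crossing vertex ids travel between servers.
# Speculative sketch of the idea, not TigerGraph's actual protocol.

partition = {"a": 0, "b": 0, "c": 1, "d": 1}   # vertex -> owning server
adjacency = {"a": ["b", "c"], "b": [], "c": ["d"], "d": []}

def expand_local(server, frontier):
    """Each server expands only the frontier vertices it owns."""
    next_frontier = []
    for v in frontier:
        if partition[v] == server:
            next_frontier.extend(adjacency[v])
    return next_frontier

visited, frontier = set(), ["a"]
while frontier:
    visited.update(frontier)
    # each server does its share; only vertex ids cross the wire
    messages = expand_local(0, frontier) + expand_local(1, frontier)
    frontier = [v for v in messages if v not in visited]

print(sorted(visited))  # ['a', 'b', 'c', 'd']
```

Note that the next frontier is only known after each hop completes, which is why I suspect the cross-machine traffic cannot be fully planned up front.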

High Performance Graph Analytics with a Native Parallel Graph

As the world's first and only true native parallel graph (NPG) system, TigerGraph is a complete, distributed graph analytics platform supporting web-scale data analytics in real time. The TigerGraph NPG is built around both local storage and computation, supports real-time graph updates, and serves as a parallel computation engine. TigerGraph supports ACID transactions, guaranteeing data consistency and correct results. Its distributed, native parallel graph architecture enables TigerGraph to achieve unequaled performance levels:

  • Loading 100 to 200 GB of data per hour, per machine.
  • Traversing hundreds of millions of nodes/edges per second per machine.
  • Performing queries with 10-plus hops in subsecond time.
  • Updating 1000s of nodes and edges per second, hundreds of millions per day.
  • Scaling out to handle unlimited data, while maintaining real-time speeds and improving loading and querying throughput.

A quick summary of the headline numbers: big, fast, deep, and steady.

GSQL

Let's get a quick feel for GSQL's syntax.

GSQL DDL

/* DDL: schema modeling */
CREATE VERTEX person (PRIMARY_ID ssn STRING, age INT, name STRING)
CREATE UNDIRECTED EDGE friendship (FROM person, TO person)
CREATE DIRECTED EDGE teacher_student (FROM person, TO person)
CREATE GRAPH School (person, friendship, teacher_student)

GSQL job

/* GSQL job: data loading */
USE GRAPH social
BEGIN
CREATE LOADING JOB load_social FOR GRAPH social {
  DEFINE FILENAME file1="/home/tigergraph/person.csv";
  DEFINE FILENAME file2="/home/tigergraph/friendship.csv";
  LOAD file1 TO VERTEX person VALUES ($"name", $"name", $"age", $"gender", $"state")
    USING header="true", separator=",";
  LOAD file2 TO EDGE friendship VALUES ($0, $1, $2)
    USING header="true", separator=",";
}
END

 

 

GSQL > run loading job load_social

 

/* SQL-like data query */
SELECT * FROM person-(friendship)->person WHERE from_id == "Tom"

[
  {
    "from_type": "person",
    "to_type": "person",
    "directed": false,
    "from_id": "Tom",
    "to_id": "Dan",
    "attributes": {"connect_day": "2017-06-03 00:00:00"},
    "e_type": "friendship"
  },
  {
    "from_type": "person",
    "to_type": "person",
    "directed": false,
    "from_id": "Tom",
    "to_id": "Jenny",
    "attributes": {"connect_day": "2015-01-01 00:00:00"},
    "e_type": "friendship"
  }
]

 

GSQL algorithm

/* GSQL graph algorithm development (there is a bit of a learning curve) */
CREATE QUERY pageRank(FLOAT maxChange, INT maxIteration, FLOAT damping)
FOR GRAPH gsql_demo {
  MaxAccum<FLOAT> @@maxDiff = 9999;   # max score change in an iteration
  SumAccum<FLOAT> @recvd_score = 0;   # sum of scores received from neighbors
  SumAccum<FLOAT> @score = 1;         # scores initialized to 1
  V = {Page.*};                       # start with all Page vertices
  WHILE @@maxDiff > maxChange LIMIT maxIteration DO
    @@maxDiff = 0;
    S = SELECT s
        FROM V:s-(Linkto)->:t
        ACCUM t.@recvd_score += s.@score/s.outdegree()
        POST-ACCUM s.@score = (1-damping) + damping * s.@recvd_score,
                   s.@recvd_score = 0,
                   @@maxDiff += abs(s.@score - s.@score');
  END;  # end while loop
  PRINT V;
}  # end query
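
To make the accumulator logic above concrete, the same loop can be mirrored in plain Python: @score and @recvd_score become per-vertex dicts, @@maxDiff becomes a scalar (my own translation of the algorithm for illustration, not TigerGraph code):

```python
# Plain-Python mirror of the GSQL pageRank query above. The ACCUM phase
# distributes each vertex's score to its neighbors; the POST-ACCUM phase
# applies the damping formula and tracks the largest score change.
# My translation of the algorithm for illustration, not TigerGraph code.

adjacency = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}

def page_rank(adjacency, max_change=0.001, max_iteration=25, damping=0.85):
    score = {v: 1.0 for v in adjacency}          # @score initialized to 1
    for _ in range(max_iteration):               # LIMIT maxIteration
        recvd = {v: 0.0 for v in adjacency}      # @recvd_score reset
        for s, targets in adjacency.items():     # ACCUM phase
            for t in targets:
                recvd[t] += score[s] / len(targets)
        max_diff = 0.0                           # @@maxDiff = 0
        for s in adjacency:                      # POST-ACCUM phase
            new_score = (1 - damping) + damping * recvd[s]
            max_diff = max(max_diff, abs(new_score - score[s]))
            score[s] = new_score
        if max_diff <= max_change:               # WHILE @@maxDiff > maxChange
            break
    return score

print(page_rank(adjacency))
```

Seen this way, the GSQL accumulators are just a declarative syntax for the two aggregation dicts, with the engine free to run the ACCUM phase in parallel per vertex.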


GSQL Compared to Other Graph Languages

GSQL can be compared to other prominent graph query languages in circulation today. This comparison seeks to transcend the particular syntax or the particular way in which semantics are defined, focusing on expressive power classified along the following key dimensions.

1. Accumulation: What is the language support for the storage of (collected or aggregated) data computed by the query?

2. Multi-hop Path Traversal: Does the language support the chaining of multiple traversal steps into paths, with data collected along these steps?

3. Intermediate Result Flow: Does the language support the flow of intermediate results along the steps of the traversal?

4. Control Flow: What control flow primitives are supported?

5. Query-Calling-Query: What is the support for queries invoking other queries?

6. SQL Completeness: Is the language SQL-complete? That is, is it the case that for a graph-based representation G of any relational database D, any SQL query over D can be expressed by a GSQL query over G?

7. Turing completeness: Is the language Turing-complete?

 

Reference: Native Parallel Graphs - TigerGraph
