Quicksql Deployment

姚向晨
2023-12-01

Compiled from the official documentation, combined with notes from my own deployment.

Setting up the dependencies

Deploying Quicksql is straightforward; first make sure the prerequisites are fully in place:

  • Java >= 1.8
  • Spark >= 2.2 (required for now; planned to become optional)
  • Flink >= 1.9 (optional)

Quicksql's Flink support is still incomplete and needs further development, so this walkthrough is based on Spark 2.4.6.
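
Before moving on, it is worth a quick check that the prerequisites are actually visible on the node; a minimal sketch (the Spark path is from my setup, adjust to yours):

$ java -version                                          # should report 1.8 or above
$ /opt/software/spark-2.4.6/bin/spark-submit --version   # should report 2.2 or above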

Configuring qsql

Download and unpack the binary release from https://github.com/Qihoo360/Quicksql/releases, picking whichever version you need; I used 0.7.0. After downloading, upload the package to the deployment node, or fetch it there directly with wget, as sketched below.
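
If the node has outbound network access, a direct wget also works; the asset path below follows GitHub's usual release-download URL pattern and is an assumption, so verify it against the releases page first:

$ wget https://github.com/Qihoo360/Quicksql/releases/download/v0.7.0/quicksql-0.7.0.tar.gz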

$ tar -zxf quicksql-0.7.0.tar.gz -C /opt/software
$ cd /opt/software/qsql-0.7.0
$ vim conf/quicksql-env.sh
#!/bin/bash

# This file is sourced when running quicksql programs
# Copy it as quicksql-env.sh and edit it that to configure quicksql

# Options read when launching programs

# export SPARK_HOME=  # [Required] - SPARK_HOME, to set spark home for quicksql running. quicksql needs spark 2.0 or above.
# Change this to your own SPARK_HOME; don't blindly copy-paste it and then wonder why it errors out.
export SPARK_HOME=/opt/software/spark-2.4.6
# export JAVA_HOME=   # [Required] - JAVA_HOME, to set java home for quicksql running. quicksql needs java 1.8 or above.
# Change this to your own JAVA_HOME; same warning as above, use your head before you paste.
export JAVA_HOME=/opt/software/jdk1.8.0_161
# export FLINK_HOME=   # [Required] - FLINK_HOME, to set flink home for quicksql running. quicksql needs flink 1.9.0 or above.
# export QSQL_CLUSTER_URL=  # [Required] - QSQL_CLUSTER_URL, to set hadoop file system url.
# export QSQL_HDFS_TMP=   # [Required] - QSQL_HDFS_TMP, to set hadoop file system tmp url.

# Options read when using command line "quicksql.sh -e" with the "Jdbc", "Spark" or "Dynamic" runner.
# They all have default values, but we recommend you set them here to better suit your site.
# These defaults are used for a quicksql program if the user does not set these properties.

# export QSQL_DEFAULT_WORKER_NUM=20      # [Optional] - QSQL_DEFAULT_WORKER_NUM, to set default worker_num for quicksql programs. If it is not set, default value is 20.
# export QSQL_DEFAULT_WORKER_MEMORY=1G   # [Optional] - QSQL_DEFAULT_WORKER_MEMORY, to set default worker_memory for quicksql programs. If it is not set, default value is 1G.
# export QSQL_DEFAULT_DRIVER_MEMORY=3G   # [Optional] - QSQL_DEFAULT_DRIVER_MEMORY, to set default driver_memory for quicksql programs. If it is not set, default value is 3G.
# export QSQL_DEFAULT_MASTER=yarn-client # [Optional] - QSQL_DEFAULT_MASTER, to set default master for quicksql programs. If it is not set, default value is yarn-client.
# export QSQL_DEFAULT_RUNNER=DYNAMIC     # [Optional] - QSQL_DEFAULT_RUNNER, to set default runner for quicksql programs. If it is not set, default value is dynamic.

Running qsql

Before running queries on Quicksql, the connection information and table/column metadata must be collected into the metastore.

The default metastore is SQLite; see the official deployment guide for switching to another database. Quicksql ships collection scripts for the common data sources, so metadata can be pulled in bulk. The sources currently supported by the scripts are Hive, MySQL, Kylin, Elasticsearch, Oracle, and MongoDB.

Invoke it as follows (note: the -r parameter accepts LIKE syntax ['%': match everything, '_': single-character placeholder, '?': optional match]):

$ bin/metadata-extract.sh -p "<SCHEMA-JSON>" -d "<DATA-SOURCE>" -r "<TABLE-NAME-REGEX>"
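
As an illustration of the -r patterns, assuming a database with hypothetical tables user_info, user_login, and orders, the pattern below would collect only the two user_ tables:

$ bin/metadata-extract.sh -p "<SCHEMA-JSON>" -d "mysql" -r "user_%"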

The SCHEMA-JSON formats are shown below.
The official docs don't include one for MongoDB, but the shape is much the same; if you work one out, leave a comment for everyone else's benefit, though honestly, don't hold your breath, just roll up your sleeves and try it yourself.

##MySQL
{
	"jdbcDriver": "com.mysql.jdbc.Driver",
	"jdbcUrl": "jdbc:mysql://localhost:3306/db",
	"jdbcUser": "USER",
	"jdbcPassword": "PASSWORD"
}
##Oracle
{
	"jdbcDriver": "oracle.jdbc.driver.OracleDriver",
	"jdbcUrl": "jdbc:oracle:thin:@localhost:1521/namespace",
	"jdbcUser": "USER",
	"jdbcPassword": "PASSWORD" 
}
##Elasticsearch
{
	"esNodes": "192.168.1.1",
	"esPort": "9000",
	"esUser": "USER",
	"esPass": "PASSWORD",
	"esIndex": "index/type"
}
##Hive (Hive metadata stored in MySQL)
{
	"jdbcDriver": "com.mysql.jdbc.Driver",
	"jdbcUrl": "jdbc:mysql://localhost:3306/db",
	"jdbcUser": "USER",
	"jdbcPassword": "PASSWORD",
	"dbName": "hive_db"
}
##Hive-Jdbc (Hive metadata accessed via JDBC)
{
	"jdbcDriver": "org.apache.hive.jdbc.HiveDriver",
	"jdbcUrl": "jdbc:hive2://localhost:7070/learn_kylin",
	"jdbcUser": "USER",
	"jdbcPassword": "PASSWORD",
	"dbName": "default"
}
##Kylin
{
	"jdbcDriver": "org.apache.kylin.jdbc.Driver",
	"jdbcUrl": "jdbc:kylin://localhost:7070/learn_kylin",
	"jdbcUser": "ADMIN",
	"jdbcPassword": "KYLIN",
	"dbName": "default"
}

Usage example

Collect metadata from a MySQL database. The demo collects the whole database without a table-name filter, but this database only contains one table anyway.

$ ./metadata-extract.sh -p "{\"jdbcDriver\":\"com.mysql.jdbc.Driver\",\"jdbcUrl\":\"jdbc:mysql://<MYSQL-HOST>:3306/<DATABASE>\",\"jdbcUser\":\"<USER>\",\"jdbcPassword\":\"<PASSWORD>\"}" -d "mysql"

On success, the script reports:

2020-07-26 15:48:24,975 [main] INFO  - Input params: properties({"jdbcDriver":"com.mysql.jdbc.Driver","jdbcUrl":"jdbc:mysql://qsql-01:3306/test_qsql","jdbcUser":"root","jdbcPassword":"balabalaxiaomoxian"}), type(mysql), filter regex(%%)
2020-07-26 15:48:24,976 [main] INFO  - Connecting server.....
2020-07-26 15:48:25,728 [main] INFO  - Connected successfully!!
2020-07-26 15:48:25,751 [main] INFO  - Insert database test_qsql successfully!!
2020-07-26 15:48:25,764 [main] INFO  - Insert table example_schema successfully!!
2020-07-26 15:48:25,772 [main] INFO  - Successfully collected metadata for 1 tables!!
2020-07-26 15:48:25,772 [main] INFO  - example_schema
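
Since the default metastore is SQLite, you can sanity-check what was collected with the sqlite3 command-line tool, assuming it is installed on the node (the schema.db file lives under metastore/ in the unpacked directory):

$ sqlite3 /opt/software/qsql-0.7.0/metastore/schema.db ".tables"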

Note: double quotes are special characters in the shell, so they must be escaped when passing the JSON parameter!
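
A less fiddly alternative, assuming your shell is bash or sh, is to wrap the JSON in single quotes so the double quotes need no escaping at all (placeholders as before):

$ bin/metadata-extract.sh -p '{"jdbcDriver":"com.mysql.jdbc.Driver","jdbcUrl":"jdbc:mysql://<MYSQL-HOST>:3306/<DATABASE>","jdbcUser":"<USER>","jdbcPassword":"<PASSWORD>"}' -d "mysql"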

Submitting queries from the command line

Querying from the command line is one of the most basic ways to use Quicksql.

As with Hive and MySQL, run quicksql.sh -e "YOUR SQL" to execute a query; the result set is printed to the terminal.

Usage example

  1. A simple query, executed inside the Quicksql core:
$ ./quicksql.sh -e "SELECT * from example_schema"
2020-07-26 16:40:14,196 [main] INFO  - job.execute.start:2020-07-26 16:40:14:195
2020-07-26 16:40:14,284 [main] INFO  - Your SQL is 'select * from example_schema'
               ____        _      __   _____ ____    __ 
              / __ \__  __(_)____/ /__/ ___// __ \  / / 
             / / / / / / / / ___/ //_/\__ \/ / / / / /  
            / /_/ / /_/ / / /__/ ,<  ___/ / /_/ / / /___
Welcome to  \___\_\__,_/_/\___/_/|_|/____/\___\_\/_____/  version 0.7.0.
   \  Process data placed anywhere with the most flexible SQL  /
2020-07-26 16:40:14,521 [main] INFO  - Parsing table names has finished, you will query tables: [example_schema]
2020-07-26 16:40:15,806 [main] INFO  - SQL.parsed:2020-07-26 16:40:15:806
2020-07-26 16:40:15,989 [main] INFO  - apply.resource:2020-07-26 16:40:15:989
2020-07-26 16:40:16,005 [main] INFO  - Jdbc connection has established, the result set is flying to you.
[Empty set]
2020-07-26 16:40:16,039 [main] INFO  - job.execute.final:2020-07-26 16:40:16:039
2020-07-26 16:40:16,039 [main] INFO  - SQL.parsed.time:1611
2020-07-26 16:40:16,039 [main] INFO  - resource.apply.time:183
2020-07-26 16:40:16,039 [main] INFO  - job.query.time:50

If you run it the way shown below it fails, even though the exact same query just succeeded; note the only difference is the directory the script is launched from:

$ bin/quicksql.sh -e "SELECT * from example_schema"
2020-07-26 16:43:10,726 [main] INFO  - job.execute.start:2020-07-26 16:43:10:726

2020-07-26 16:43:10,818 [main] INFO  - Your SQL is 'SELECT * from example_schema'
               ____        _      __   _____ ____    __ 
              / __ \__  __(_)____/ /__/ ___// __ \  / / 
             / / / / / / / / ___/ //_/\__ \/ / / / / /  
            / /_/ / /_/ / / /__/ ,<  ___/ / /_/ / / /___
Welcome to  \___\_\__,_/_/\___/_/|_|/____/\___\_\/_____/  version 0.7.0.
   \  Process data placed anywhere with the most flexible SQL  /
2020-07-26 16:43:11,056 [main] INFO  - Parsing table names has finished, you will query tables: [example_schema]
java.lang.RuntimeException: java.sql.SQLException: path to '///opt/software/qsql-0.7.0/../metastore/schema.db': '/opt/software/qsql-0.7.0/../metastore' does not exist
	at com.qihoo.qsql.metadata.MetadataClient.createConnection(MetadataClient.java:358)
	at com.qihoo.qsql.metadata.MetadataClient.<init>(MetadataClient.java:43)
	at com.qihoo.qsql.metadata.MetadataPostman$MetadataFetcher.transformSchemaFormat(MetadataPostman.java:84)
	at com.qihoo.qsql.metadata.MetadataPostman$MetadataFetcher.access$100(MetadataPostman.java:72)
	at com.qihoo.qsql.metadata.MetadataPostman.lambda$getAssembledSchema$0(MetadataPostman.java:48)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at com.qihoo.qsql.metadata.MetadataPostman.getAssembledSchema(MetadataPostman.java:50)
	at com.qihoo.qsql.metadata.MetadataPostman.getCalciteModelSchema(MetadataPostman.java:60)
	at com.qihoo.qsql.launcher.ExecutionDispatcher.loadSchemaForTables(ExecutionDispatcher.java:222)
	at com.qihoo.qsql.launcher.ExecutionDispatcher.main(ExecutionDispatcher.java:108)
Caused by: java.sql.SQLException: path to '///opt/software/qsql-0.7.0/../metastore/schema.db': '/opt/software/qsql-0.7.0/../metastore' does not exist
	at org.sqlite.core.CoreConnection.open(CoreConnection.java:192)
	at org.sqlite.core.CoreConnection.<init>(CoreConnection.java:76)
	at org.sqlite.jdbc3.JDBC3Connection.<init>(JDBC3Connection.java:26)
	at org.sqlite.jdbc4.JDBC4Connection.<init>(JDBC4Connection.java:24)
	at org.sqlite.SQLiteConnection.<init>(SQLiteConnection.java:45)
	at org.sqlite.JDBC.createConnection(JDBC.java:114)
	at org.sqlite.JDBC.connect(JDBC.java:88)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:270)
	at com.qihoo.qsql.metadata.MetadataClient.createConnection(MetadataClient.java:354)
	... 15 more
2020-07-26 16:43:11,217 [main] ERROR - execution.error:java.sql.SQLException: path to '///opt/software/qsql-0.7.0/../metastore/schema.db': '/opt/software/qsql-0.7.0/../metastore' does not exist

As the stack trace shows, the SQLite metastore path is resolved relative to the current working directory (../metastore), so launching bin/quicksql.sh from the install root makes it look for /opt/software/metastore, which does not exist. Plenty of other pitfalls like this are waiting for you.
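
Given that, the workaround is simply to launch the script from inside bin/ so the relative path resolves to the real metastore directory:

$ cd /opt/software/qsql-0.7.0/bin
$ ./quicksql.sh -e "SELECT * from example_schema"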

Other parameters

The examples above cover basic querying; to pass additional parameters to the compute engine, refer to the table below:

Property Name    Default      Meaning
-e                            SQL statement to run; required for every query
-h|--help                     Detailed description of the command-line options
--runner         dynamic      Execution engine: dynamic, jdbc, spark, or flink
--master         yarn-client  Engine execution mode
--worker_memory  1G           Memory allocated to each worker (executor)
--driver_memory  3G           Memory allocated to the driver
--worker_num     20           Worker (executor) parallelism

Note:

(1) Default values for runner, master, worker_memory, and the other parameters can be set in quicksql-env.sh;

(2) In non-distributed execution, parameters such as master and worker_memory take no effect even if set.
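
Putting the flags together, a query pinned to the Spark runner with non-default resources would look something like this (the values are only illustrative):

$ ./quicksql.sh -e "SELECT * from example_schema" --runner spark --master yarn-client --driver_memory 2G --worker_memory 1G --worker_num 10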

Submitting queries from an application

Quicksql supports client/server JDBC querying: an application pulls in the driver package, establishes a connection to the server, and runs federated queries over it.

Server side

Starting the server

$ bin/quicksql-server.sh start -P 5888 -R spark -M yarn-client

The control argument is one of start|stop|restart|status; -P/-R/-M are optional and set the port, the execution engine, and the task scheduling mode respectively:

-P: server port number, default 5888

-R: execution engine, Spark or Flink supported

-M: Spark resource scheduling mode, e.g. yarn-client or yarn-cluster; default local[1]
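
The same script drives the whole server lifecycle; for example:

$ bin/quicksql-server.sh status
$ bin/quicksql-server.sh restart -P 5888 -R spark -M yarn-client
$ bin/quicksql-server.sh stop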

Client side

Application integration

Add the quicksql-client and avatica dependencies to your project's pom.xml. The official docs list the 0.6 dependency, but I used 0.7:

<dependency>
    <groupId>com.qihoo.qsql</groupId>
    <artifactId>qsql</artifactId>
    <version>0.7</version>
</dependency>
<dependency>
    <groupId>org.apache.calcite.avatica</groupId>
    <artifactId>avatica-server</artifactId>
    <version>1.12.0</version>
</dependency>

Java code example (taken from the official docs):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class QuicksqlClientExample {
    public static void main(String[] args) throws SQLException, ClassNotFoundException {
        Class.forName("com.qihoo.qsql.client.Driver"); // register the Quicksql JDBC driver

        Properties properties = new Properties();
        properties.setProperty("runner", "spark");     // execution engine for this connection
        String url = "jdbc:quicksql:url=http://qsql-01:5888";
        Connection connection = DriverManager.getConnection(url, properties);
        Statement stmt = connection.createStatement();
        String sql = "select * from example_schema";
        ResultSet rs = stmt.executeQuery(sql);
        while (rs.next()) {
            System.out.println(rs.getString(1));
            System.out.println(rs.getString(2));
        }
        rs.close();
        stmt.close();
        connection.close();
    }
}
  1. Register the Quicksql driver: com.qihoo.qsql.client.Driver

  2. The properties object supports these keys:

    runner: the execution engine, one of dynamic, jdbc, spark, flink

    acceptedResultsNum: the maximum number of rows a query returns

    appName: the name of the launched spark/flink instance

  3. The server URL: jdbc:quicksql:url=http:// + the server's hostname or IP + the server's port (the URL is also printed in the server's log file)

  4. Everything else works like plain JDBC: Connection, Statement, ResultSet, ResultSetMetaData, and iterating the results.
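
Item 3 mentions that the connection URL appears in the server's log; a quick way to dig it out is something like the following (the log location is an assumption, check where your installation actually writes its logs):

$ grep -ri "url" /opt/software/qsql-0.7.0/logs/ | head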
