`sq` is a command-line tool that provides jq-style access to structured data sources: SQL databases, or document formats like CSV or Excel.

`sq` can perform cross-source joins, execute database-native SQL, and output to a multitude of formats including JSON, Excel, CSV, HTML, Markdown and XML, or insert directly into a SQL database. `sq` can also inspect sources to view metadata about the source structure (tables, columns, size), and has commands for common database operations such as copying or dropping tables.
For other installation options, see the installation section of the sq documentation.
It is strongly advised to install shell completion.

# macOS (brew)
brew install neilotoole/sq/sq

# Windows (scoop)
scoop bucket add sq https://github.com/neilotoole/sq
scoop install sq

# Debian/Ubuntu (apt)
curl -fsSLO https://github.com/neilotoole/sq/releases/latest/download/sq-linux-amd64.deb && sudo apt install -y ./sq-linux-amd64.deb && rm ./sq-linux-amd64.deb

# Fedora/RHEL (rpm or yum)
sudo rpm -i https://github.com/neilotoole/sq/releases/latest/download/sq-linux-amd64.rpm
yum localinstall -y https://github.com/neilotoole/sq/releases/latest/download/sq-linux-amd64.rpm
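After installing via any of the methods above, you can verify the installation. This is a sketch that assumes `sq` is on your PATH and that the `version` subcommand behaves as in current releases:

```shell
# Confirm that sq is installed and reachable on the PATH.
sq version
```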
Shell completion is available for bash, zsh, fish, and powershell. It is strongly recommended to install it. Execute `sq completion --help` for installation instructions.
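As a concrete sketch for bash, assuming `sq` follows the common CLI completion pattern (run `sq completion --help` to confirm the exact instructions for your shell):

```shell
# Generate the bash completion script and source it for the current session.
# To make it permanent, add this line to ~/.bashrc.
source <(sq completion bash)
```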
Use `sq help` to see command help. The tutorial is the best place to start. The cookbook has recipes for common actions.
The major concept is: `sq` operates on data sources, which are treated as SQL databases (even if the source is really a CSV or XLSX file, etc.).

In a nutshell, you `sq add` a source (giving it a handle), and then execute commands against the source.
Initially there are no sources.
$ sq ls
Let's add a source. First we'll add a SQLite database, but this could also be Postgres, SQL Server, Excel, etc. Download the sample DB, and `sq add` the source. We use the `-h` flag to specify a handle to use.
$ wget https://sq.io/testdata/sakila.db
$ sq add ./sakila.db -h @sakila_sl3
@sakila_sl3 sqlite3 sakila.db
$ sq ls -v
HANDLE DRIVER LOCATION OPTIONS
@sakila_sl3* sqlite3 sqlite3:///root/sakila.db
$ sq ping @sakila_sl3
@sakila_sl3 1ms pong
$ sq src
@sakila_sl3 sqlite3 sakila.db
The `sq ping` command simply pings the source to verify that it's available. `sq src` lists the active source, which in our case is @sakila_sl3. You can change the active source using `sq src @other_src`. When there's an active source specified, you can usually omit the handle from `sq` commands. Thus you could instead do:
$ sq ping
@sakila_sl3 1ms pong
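Server-based sources are added with a connection URL instead of a file path. A hedged sketch, where the host, credentials, and the @sakila_pg handle are all hypothetical:

```shell
# Add a Postgres source; the connection URL below is illustrative only.
sq add 'postgres://user:password@localhost:5432/sakila' -h @sakila_pg

# Verify that the new source is reachable.
sq ping @sakila_pg
```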
Fundamentally, `sq` is for querying data, using its jq-style syntax:
$ sq '.actor | .actor_id < 100 | .[0:3]'
actor_id first_name last_name last_update
1 PENELOPE GUINESS 2020-02-15T06:59:28Z
2 NICK WAHLBERG 2020-02-15T06:59:28Z
3 ED CHASE 2020-02-15T06:59:28Z
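The default table output can be switched via an output flag. For example, `--json` renders the same result set as JSON:

```shell
# Same query as above, but output JSON instead of the default text table.
sq --json '.actor | .actor_id < 100 | .[0:3]'
```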
The above query selected some rows from the actor table. You could also use native SQL, e.g.:
$ sq sql 'SELECT * FROM actor WHERE actor_id < 100 LIMIT 3'
actor_id first_name last_name last_update
1 PENELOPE GUINESS 2020-02-15T06:59:28Z
2 NICK WAHLBERG 2020-02-15T06:59:28Z
3 ED CHASE 2020-02-15T06:59:28Z
But we're flying a bit blind here: how did we know about the actor table? `sq inspect` is your friend (output abbreviated):
$ sq inspect
HANDLE DRIVER NAME FQ NAME SIZE TABLES LOCATION
@sakila_sl3 sqlite3 sakila.db sakila.db/main 5.6MB 21 sqlite3:///root/sakila.db
TABLE ROWS TYPE SIZE NUM COLS COL NAMES COL TYPES
actor 200 table - 4 actor_id, first_name, last_name, last_update numeric, VARCHAR(45), VARCHAR(45), TIMESTAMP
address 603 table - 8 address_id, address, address2, district, city_id, postal_code, phone, last_update int, VARCHAR(50), VARCHAR(50), VARCHAR(20), INT, VARCHAR(10), VARCHAR(20), TIMESTAMP
category 16 table - 3 category_id, name, last_update
Use `--json` (`-j`) to output in JSON (output abbreviated):
$ sq inspect -j
{
"handle": "@sakila_sl3",
"name": "sakila.db",
"driver": "sqlite3",
"db_version": "3.31.1",
"location": "sqlite3:///root/sakila.db",
"size": 5828608,
"tables": [
{
"name": "actor",
"table_type": "table",
"row_count": 200,
"columns": [
{
"name": "actor_id",
"position": 0,
"primary_key": true,
"base_type": "numeric",
"column_type": "numeric",
"kind": "decimal",
"nullable": false
}
Combine `sq inspect` with jq for some useful capabilities. Here's how to list all the table names in the active source:
$ sq inspect -j | jq -r '.tables[] | .name'
actor
address
category
city
country
customer
[...]
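Because the inspect output is plain JSON, jq can pull out more than names. A sketch that prints a row count per table, using the `name` and `row_count` fields shown in the JSON output above:

```shell
# Print "table_name: row_count" for each table in the active source.
sq inspect -j | jq -r '.tables[] | "\(.name): \(.row_count)"'
```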
And here's how you could export each table to a CSV file:
$ sq inspect -j | jq -r '.tables[] | .name' | xargs -I % sq .% --csv --output %.csv
$ ls
actor.csv city.csv customer_list.csv film_category.csv inventory.csv rental.csv staff.csv
address.csv country.csv film.csv film_list.csv language.csv sales_by_film_category.csv staff_list.csv
category.csv customer.csv film_actor.csv film_text.csv payment.csv sales_by_store.csv store.csv
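To export just one table rather than looping over all of them, combine an output format flag with `--output`, as used in the loop above:

```shell
# Export the actor table to a JSON file.
sq .actor --json --output actor.json
```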
Note that you can also inspect an individual table:
$ sq inspect @sakila_sl3.actor
TABLE ROWS TYPE SIZE NUM COLS COL NAMES COL TYPES
actor 200 table - 4 actor_id, first_name, last_name, last_update numeric, VARCHAR(45), VARCHAR(45), TIMESTAMP
`sq` query results can be output in various formats (JSON, XML, CSV, etc.), and can also be "outputted" as an insert into database sources. That is, you can use `sq` to insert results from a Postgres query into a MySQL table, copy an Excel worksheet into a SQLite table, or push a CSV file into a SQL Server table, etc.
Note: If you want to copy a table inside the same (database) source, use `sq tbl copy` instead, which uses the database's native table copy functionality.
For this example, we'll insert an Excel worksheet into our @sakila_sl3 SQLite database. First, we download the XLSX file, and `sq add` it as a source.
$ wget https://sq.io/testdata/xl_demo.xlsx
$ sq add ./xl_demo.xlsx --opts header=true
@xl_demo_xlsx xlsx xl_demo.xlsx
$ sq @xl_demo_xlsx.person
uid username email address_id
1 neilotoole neilotoole@apache.org 1
2 ksoze kaiser@soze.org 2
3 kubla kubla@khan.mn NULL
[...]
Now, execute the same query, but this time `sq` inserts the results into a new table (person) in @sakila_sl3:
$ sq @xl_demo_xlsx.person --insert @sakila_sl3.person
Inserted 7 rows into @sakila_sl3.person
$ sq inspect @sakila_sl3.person
TABLE ROWS TYPE SIZE NUM COLS COL NAMES COL TYPES
person 7 table - 4 uid, username, email, address_id INTEGER, TEXT, TEXT, INTEGER
$ sq @sakila_sl3.person
uid username email address_id
1 neilotoole neilotoole@apache.org 1
2 ksoze kaiser@soze.org 2
3 kubla kubla@khan.mn NULL
[...]
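The same `--insert` mechanism works between database servers, as described above. A sketch, assuming hypothetical @pg_src (Postgres) and @my_dest (MySQL) sources have already been added:

```shell
# Copy the first 1000 rows of a Postgres table into a new MySQL table.
sq '@pg_src.users | .[0:1000]' --insert @my_dest.users_copy
```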
`sq` has rudimentary support for cross-source joins. That is, you can join an Excel worksheet with a CSV file, or a Postgres table, etc.

Note: The current mechanism for these joins is highly naive: `sq` copies the joined table from each source to a "scratch database" (SQLite by default), and then performs the JOIN using the scratch database's SQL interface. Thus, performance is abysmal for larger tables. There are massive optimizations to be made, but none have been implemented yet.
See the tutorial for further details, but given an Excel source @xl_demo and a CSV source @csv_demo, you can do:
$ sq '@csv_demo.data, @xl_demo.address | join(.D == .address_id) | .C, .city'
C city
neilotoole@apache.org Washington
kaiser@soze.org Ulan Bator
nikola@tesla.rs Washington
augustus@caesar.org Ulan Bator
plato@athens.gr Washington
`sq` provides several handy commands for working with tables. Note that these commands work directly against SQL database sources, using their native SQL commands.
$ sq tbl copy .actor .actor_copy
Copied table: @sakila_sl3.actor --> @sakila_sl3.actor_copy (200 rows copied)
$ sq tbl truncate .actor_copy
Truncated 200 rows from @sakila_sl3.actor_copy
$ sq tbl drop .actor_copy
Dropped table @sakila_sl3.actor_copy
For file-based sources (such as CSV or XLSX), you can `sq add` the source file, but you can also pipe it:
$ cat ./example.xlsx | sq .Sheet1
Similarly, you can inspect:
$ cat ./example.xlsx | sq inspect
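Piping works for CSV too. A CSV source is exposed as a single table named `data` (as in the @csv_demo.data join example above); here `example.csv` is a hypothetical local file:

```shell
# Pipe a CSV file to sq and select its first three rows.
cat ./example.csv | sq '.data | .[0:3]'
```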
`sq` knows how to deal with a data source type via a driver implementation. To view the installed/supported drivers:
$ sq drivers
DRIVER DESCRIPTION USER-DEFINED DOC
sqlite3 SQLite false https://github.com/mattn/go-sqlite3
postgres PostgreSQL false https://github.com/jackc/pgx
sqlserver Microsoft SQL Server false https://github.com/denisenkom/go-mssqldb
mysql MySQL false https://github.com/go-sql-driver/mysql
csv Comma-Separated Values false https://en.wikipedia.org/wiki/Comma-separated_values
tsv Tab-Separated Values false https://en.wikipedia.org/wiki/Tab-separated_values
json JSON false https://en.wikipedia.org/wiki/JSON
jsona JSON Array: LF-delimited JSON arrays false https://en.wikipedia.org/wiki/JSON
jsonl JSON Lines: LF-delimited JSON objects false https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON
xlsx Microsoft Excel XLSX false https://en.wikipedia.org/wiki/Microsoft_Excel
`sq` has many output formats:

- `--table`: Text/Table
- `--json`: JSON
- `--jsona`: JSON Array
- `--jsonl`: JSON Lines
- `--csv` / `--tsv`: CSV / TSV
- `--xlsx`: XLSX (Microsoft Excel)
- `--html`: HTML
- `--xml`: XML
- `--markdown`: Markdown
- `--raw`: Raw (bytes)

See go.mod for a list of third-party packages. `sq` incorporates modified versions of:
- olekukonko/tablewriter
- segmentio/encoding for JSON encoding
for JSON encoding.前言 NVMe(over PCIE) 的速度远超 AHCI(over SATA), 其中一个重要的原因是 NVME 的队列个数远超 AHCI。NVME 队列深度达到了64K,并且支持队列个数最大可达64K。AHCI 只有一个 Queue, Queue 深度只有32。作为NVME 重点中的重点,本文将浓墨重彩介绍命令队列SQ 和 CQ. 1. SQ 和 CQ 是什么? NVMe定义的命令有两种: A
知识点 1、切换查询范围:环境->查询区域 2、更改信息集信息类型:SQ02->编辑->更改信息类型选择 3、SQ01中切换用户组:点击状态栏里第一个按钮切换 4、SQ01中生产Query程序:选中Query->查询其他功能->生产程序 5、SQ01中查看Query程序:选中Query->查询其他功能->显示报表名称 增强 场景1 合同信息存储在0016与3211两个信息类型里,在满足查询条件里一