
ipcs

containerd meets ipfs to distribute content
License: MIT License
Development language: Go
Category: Cloud computing
Software type: Open source software
Region: Unknown
Submitter: 鄢承运
Operating system: Cross-platform
Open source organization:
Target audience: Unknown

Software overview

ipcs

Containerd meets IPFS. Peer-to-peer distribution of content blobs.

Getting started

Converting a manifest from DockerHub to a p2p manifest:

# Term 1: Start an IPFS daemon
$ make ipfs

# Term 2: Start a rootless containerd backed by ipcs.
$ make containerd

# Term 3: Convert alpine to a p2p manifest
$ make convert
2019/06/04 13:54:40 Resolved "docker.io/library/alpine:latest" as "docker.io/library/alpine:latest@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6"
2019/06/04 13:54:40 Original Manifest [456] sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6:
// ...
2019/06/04 13:54:41 Converted Manifest [456] sha256:9181f3c247af3cea545adb1b769639ddb391595cce22089824702fa22a7e8cbb:
// ...
2019/06/04 13:54:41 Successfully pulled image "localhost:5000/library/alpine:p2p"

Converting two manifests from DockerHub to p2p manifests, and then comparing the number of shared IPLD nodes (layers chunked into 262KiB blocks):

# Term 1: Start an IPFS daemon
$ make ipfs

# Term 2: Start a rootless containerd backed by ipcs.
$ make containerd

# Term 3: Convert ubuntu:bionic and ubuntu:xenial into p2p manifests, then bucket IPLD nodes into those unique to each image and those in the intersection.
$ make compare
// ...
2019/06/04 13:51:33 Comparing manifest blocks for "docker.io/library/ubuntu:xenial" ("sha256:8d382cbbe5aea68d0ed47e18a81d9711ab884bcb6e54de680dc82aaa1b6577b8")
2019/06/04 13:51:34 Comparing manifest blocks for "docker.io/titusoss/ubuntu:latest" ("sha256:cfdf8c2f3d5a16dc4c4bbac4c01ee5050298db30cea31088f052798d02114958")
2019/06/04 13:51:34 Found 322 blocks
docker.io/library/ubuntu:xenial: 4503
docker.io/library/ubuntu:xenial n docker.io/titusoss/ubuntu:latest: 87550251
docker.io/titusoss/ubuntu:latest: 76117824
// 87550251 shared bytes in IPLD nodes
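For reference, the bucketing that make compare reports boils down to set arithmetic over block digests and their sizes. Below is a minimal sketch of that logic, assuming the digest-to-size maps have already been collected by walking each converted manifest's IPLD DAG (the digests in main are made up for illustration):

package main

import "fmt"

// compareBlocks buckets blocks into those unique to each image and those
// shared by both, mirroring the totals printed by the compare tool.
// blocksA and blocksB map a block digest to the block's size in bytes.
func compareBlocks(blocksA, blocksB map[string]int) (uniqueA, uniqueB, shared int) {
	for digest, size := range blocksA {
		if _, ok := blocksB[digest]; ok {
			shared += size
		} else {
			uniqueA += size
		}
	}
	for digest, size := range blocksB {
		if _, ok := blocksA[digest]; !ok {
			uniqueB += size
		}
	}
	return uniqueA, uniqueB, shared
}

func main() {
	// Toy inputs; real digests would identify 262KiB chunks.
	a := map[string]int{"block-1": 262144, "block-2": 262144}
	b := map[string]int{"block-2": 262144, "block-3": 131072}
	uniqueA, uniqueB, shared := compareBlocks(a, b)
	fmt.Println(uniqueA, uniqueB, shared) // 262144 131072 262144
}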

Design

IPFS-backed container image distribution is not new. Here is a non-exhaustive list of in-the-wild implementations:

P2P container image distribution has also been implemented on top of other P2P networks:

The earlier IPFS implementations all distribute through the Docker Registry HTTP API V2. However, the connection between containerd pulling the image and the registry is not peer-to-peer, and if the registry were run as a sidecar the content would be duplicated on the local system. Instead, I chose to implement ipcs as a containerd content plugin for the following reasons (a sketch of the plugin wiring follows the list):

  • Containerd natively uses IPFS as a content.Store, no duplication.
  • Allow p2p and non-p2p manifests to live together.
  • Potentially do file-granularity chunking by introducing a new layer mediatype.
  • Fulfilling the content.Store interface will allow using ipcs to also back the buildkit cache.
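
For context, a containerd content plugin is registered roughly as in the sketch below, assuming containerd v1.x's plugin package; newStore is a placeholder for ipcs's actual constructor, not its real code:

package ipcsplugin

import (
	"errors"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/plugin"
)

// newStore stands in for a constructor that returns a content.Store backed
// by an IPFS node, rooted at the plugin's state directory.
func newStore(root string) (content.Store, error) {
	return nil, errors.New("ipcs: constructor elided in this sketch")
}

func init() {
	// Registering under plugin.ContentPlugin lets a containerd build that
	// includes this plugin use the IPFS-backed store in place of the
	// default local content store.
	plugin.Register(&plugin.Registration{
		Type: plugin.ContentPlugin,
		ID:   "ipcs",
		InitFn: func(ic *plugin.InitContext) (interface{}, error) {
			return newStore(ic.Root)
		},
	})
}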

IPFS imposes a 4 MiB limit on blocks because it may be run in a public network with adversarial peers. Since a peer cannot verify a hash until all of the content has arrived, an attacker could flood connections with gibberish and consume bandwidth. Chunking data into smaller blocks also aids deduplication.
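
To illustrate why fixed-size blocks help with deduplication, the sketch below splits a blob into 262KiB chunks and hashes each chunk, so identical chunks in different layers map to identical digests. It uses plain SHA-256 for brevity rather than IPFS's actual chunker and CID encoding:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

const chunkSize = 262144 // 256 KiB, matching the block size used by ipcs

// chunkDigests splits r into fixed-size chunks and returns the digest of
// each chunk. Two blobs that share a run of identical chunks share those
// block digests, which is what enables cross-image deduplication.
func chunkDigests(r io.Reader) ([]string, error) {
	var digests []string
	buf := make([]byte, chunkSize)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			sum := sha256.Sum256(buf[:n])
			digests = append(digests, hex.EncodeToString(sum[:]))
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return digests, nil
		}
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	layer := strings.NewReader(strings.Repeat("x", 600000))
	digests, _ := chunkDigests(layer)
	fmt.Println(len(digests), "chunks") // prints "3 chunks": 262144 + 262144 + 75712 bytes
}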

IPCS implements containerd's content.Store interface and can be built as a Go plugin to override containerd's default local store. A converter implementation is also provided that turns a regular OCI image manifest into a manifest where every descriptor is replaced with the descriptor of the root DAG node added to IPFS. The root node is the merkle root of the layer's 262KiB chunks.
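
A converter along those lines might look like the sketch below: each layer blob is added to IPFS and its descriptor's digest is replaced with one identifying the root DAG node. The ipfsAdd and open callbacks are placeholders for illustration; ipcs's real converter, media types, and digest encoding may differ.

package convert

import (
	"context"
	"io"

	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// ipfsAdd stands in for adding a blob to IPFS (chunked into 262KiB blocks)
// and returning a digest identifying the root DAG node plus the blob size.
type ipfsAdd func(ctx context.Context, r io.Reader) (digest.Digest, int64, error)

// convertManifest rewrites each layer descriptor so that its digest points
// at the merkle root of the layer's chunks in IPFS. The config descriptor
// would be converted the same way; only layers are shown here.
func convertManifest(ctx context.Context, m ocispec.Manifest, add ipfsAdd, open func(ocispec.Descriptor) (io.ReadCloser, error)) (ocispec.Manifest, error) {
	converted := m
	converted.Layers = nil
	for _, layer := range m.Layers {
		rc, err := open(layer)
		if err != nil {
			return ocispec.Manifest{}, err
		}
		dgst, size, err := add(ctx, rc)
		rc.Close()
		if err != nil {
			return ocispec.Manifest{}, err
		}
		layer.Digest = dgst
		layer.Size = size
		converted.Layers = append(converted.Layers, layer)
	}
	return converted, nil
}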

Although the IPFS daemon or its network may already have the bytes for all of an image's p2p content, containerd wraps the underlying content.Store with a boltdb metadata store, so the image still has to be pulled to register it there.

An image pull, starting from the client side, goes through the following layers:

  • proxy.NewContentStore (content.Store)
  • content.ContentClient (gRPC client)
  • content.NewService (gRPC server: plugin.GRPCPlugin "content")
  • content.newContentStore (content.Store: plugin.ServicePlugin, services.ContentService)
  • metadata.NewDB (bolt *metadata.DB: plugin.MetadataPlugin "bolt")
  • ipcs.NewContentStore (content.Store: plugin.ContentPlugin, "ipcs")

So in the case of ipcs, a pull simply flushes through these content.Store layers to register the image in containerd's metadata store. Note that the majority of the blocks don't need to be downloaded into IPFS's local storage to complete the pull; fetching them can be delayed until the layers are unpacked into snapshots.
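
From the client's point of view nothing special happens: the pull goes through containerd's normal API and the content plugin behind the gRPC service does the IPFS work. A minimal sketch using containerd's Go client follows; the socket path shown is the default one and may differ for the rootless setup started by make containerd:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to a containerd daemon built with the ipcs content plugin.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "default")

	// Pulling the converted manifest flushes through the content.Store
	// layers listed above; most blocks are only fetched from IPFS when the
	// layers are unpacked into snapshots.
	img, err := client.Pull(ctx, "localhost:5000/library/alpine:p2p", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pulled", img.Name())
}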

Results

Collected data on: 7/11/2019

Systems:

  • m5.large x 3
  • 8.0 GiB Memory
  • 2 vCPUs
  • Up to 10 Gigabit (Throttled by AWS network credits)
  • Linux kernel 4.4.0
  • Ubuntu 16.04.6 LTS
  • Containerd v1.2.6
  • IPFS v0.4.21

Configuration:

  • Switch libp2p mux from yamux to mplex: export LIBP2P_MUX_PREFS="/mplex/6.7.0"
  • Set flatfs sync to false
  • Enable experimental StrategicProviding

Comparison:

  • Pull from DockerHub / Private docker registries
  • Shard content chunks evenly across 3 nodes such that each node holds roughly 33% of the IPFS blocks.
Image                            Total size (bytes)  IPFS blocks  DockerHub pull (secs)  IPFS pull (secs)  Diff (Hub/IPFS)
docker.io/library/alpine:latest  2759178             14           1.744165732            0.7662775298      227.62%
docker.io/ipfs/go-ipfs:latest    23545678            103          1.791054265            1.633165299       109.67%
docker.io/library/ubuntu:latest  28861894            38           2.720580011            1.629809674       116.93%
docker.io/library/golang:latest  296160075           380          4.687380759            6.015498289       77.92%

IPFS's performance seems to degrade as the number of IPLD nodes (i.e., the total image size) grows; the Diff column above is DockerHub pull time divided by IPFS pull time, so values above 100% mean IPFS was faster. There was a recent regression in go-ipfs v0.4.21 that was fixed in this commit on master:

As seen from make compare, there also doesn't appear to be much deduplication gain from IPFS chunks compared to OCI layers:

$ GO111MODULE=on IPFS_PATH=./tmp/ipfs go run ./cmd/compare docker.io/library/alpine:latest docker.io/library/ubuntu:latest docker.io/library/golang:latest docker.io/ipfs/go-ipfs:latest
// ...
2019/06/04 13:39:55 Found 1381 blocks
docker.io/ipfs/go-ipfs:latest: 46891351
docker.io/library/alpine:latest: 5516903
docker.io/library/golang:latest: 828096081
docker.io/library/ubuntu:latest: 57723854
// Zero block intersection, though these are very different images.

For serious usage of p2p container image distribution, consider Dragonfly or Kraken instead, because IPFS currently suffers from performance issues:

Related benchmarking:

Next steps

Explore deduplication by adding each layer's uncompressed, untarred files into IPFS to get chunk-level, file-granular deduplication. IPFS's Unixfs (UNIX/POSIX filesystem features implemented on top of IPFS) needs the following:

Explore IPFS-FUSE mounted layers for lazy container rootfs:

Explore IPFS tuning to improve performance:

  • Tune goroutine/parallelism in various IPFS components.
  • Tune datastore (use experimental go-ds-badger?)
  • Profile / trace performance issues and identify hotspots.