
[Cloud Native | Kubernetes Series] --- CephFS and OSS

冯文彬
2023-12-01

1. CephFS

Ceph FS, i.e. the Ceph File System, provides shared file-system storage: clients mount it over the Ceph protocol and use the Ceph cluster as the backing storage server.
CephFS requires the Metadata Server (MDS) service, whose daemon is ceph-mds. The ceph-mds process manages the metadata of the files stored on CephFS and coordinates access to the Ceph storage cluster.

When the same file system is mounted on several servers, a data change made on any one node becomes visible to the other servers immediately.

CephFS metadata uses dynamic subtree partitioning: the namespace is split into subtrees that are mapped to different MDS daemons, so metadata writes are distributed across the MDSes by name, somewhat like the hashed cache directory levels used by nginx.

1.1 Install the MDS

$ apt install -y ceph-mds

1.2 Create the CephFS metadata and data pools

The metadata pool stores the file system's metadata (file state); the data pool stores the actual file data.

$ ceph osd pool create cephfs-metadata 32 32
$ ceph osd pool create cephfs-data 64 64
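
A quick sanity check that both pools exist (a minimal sketch; ceph osd pool ls detail is standard tooling, and the grep pattern simply matches the pool names created above):

$ ceph osd pool ls detail | grep cephfs
## should print cephfs-metadata and cephfs-data together with their PG counts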

1.3 Create the CephFS and verify it

$ ceph fs new mycephfs cephfs-metadata cephfs-data
# Check the CephFS status
# ceph fs ls
name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
# ceph fs status mycephfs
mycephfs - 0 clients
========
RANK  STATE      MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr01  Reqs:    0 /s    11     14     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   158k  4794M  
  cephfs-data      data    12.0k  4794M  
MDS version: ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
# Verify the CephFS state
# ceph mds stat
mycephfs:1 {0=ceph-mgr01=up:active}

2. Mounting CephFS

2.1 Mounting as admin

Make sure ceph.conf and ceph.client.admin.keyring are already present in /etc/ceph; if not, copy them over from the mgr server.

# cat /etc/ceph/ceph.conf 
[global]
fsid = 86c42734-37fc-4091-b543-be6ff23e5134
public_network = 192.168.31.0/24
cluster_network = 172.31.31.0/24
mon_initial_members = ceph-mon01
mon_host = 192.168.31.81
mon clock drift allowed = 2
mon clock drift warn backoff = 30
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
	key = AQAAxiFjNoK5FRAAy8DUqFsOoCd2H0m9Q1SuIQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
# mount -t ceph 192.168.31.81:6789:/ /data -o name=admin,secret=AQAAxiFjNoK5FRAAy8DUqFsOoCd2H0m9Q1SuIQ==
## Check whether the mount succeeded
# df -Th |grep /data
192.168.31.81:6789:/ ceph      4.7G     0  4.7G   0% /data
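
A simple read/write test on the mount point confirms the shared-filesystem behavior described in section 1 (a minimal sketch; the file name is arbitrary):

# echo "hello cephfs" > /data/test.txt
# cat /data/test.txt
hello cephfs
## any other client that mounts the same CephFS will see test.txt immediately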

2.2 Mounting as an ordinary user

  1. Create an ordinary user
# ceph auth add client.hr mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data' 
added key for client.hr
# ceph auth get client.hr
exported keyring for client.hr
[client.hr]
	key = AQAOZShjxnhBJxAAvPfPelTi8rIO4kB0i5BiHg==
	caps mds = "allow rw"
	caps mon = "allow r"
	caps osd = "allow rwx pool=cephfs-data"
# ceph auth get client.hr -o ceph.client.hr.keyring
exported keyring for client.hr
# ceph auth print-key client.hr > hr.key
# cat hr.key
AQAOZShjxnhBJxAAvPfPelTi8rIO4kB0i5BiHg==
  2. Mount the CephFS
  • Kernel-space mount (recommended: it is faster and only requires kernel support)
# If the kernel provides the ceph module, the kernel mount can be used
# lsmod | grep ceph

Mount with the ordinary user's name and key.

Mounting with a key file

# Copy the 3 files to /etc/ceph on the client
# scp hr.key ceph.client.hr.keyring ceph.conf 192.168.31.231:/etc/ceph/
## Mount; list all of the mon nodes in the mount source
# mount -t ceph 192.168.31.81:6789,192.168.31.82:6789,192.168.31.83:6789:/ /data -o name=hr,secretfile=/etc/ceph/hr.key
# df -Th /data
Filesystem                                                 Type  Size  Used Avail Use% Mounted on
192.168.31.81:6789,192.168.31.82:6789,192.168.31.83:6789:/ ceph  4.7G     0  4.7G   0% /data

Mounting with the key directly

# umount /data
# Mount with the key
# mount -t ceph 192.168.31.81:6789,192.168.31.82:6789,192.168.31.83:6789:/ /data -o name=hr,secret=AQAOZShjxnhBJxAAvPfPelTi8rIO4kB0i5BiHg==
# df -Th /data
Filesystem                                                 Type  Size  Used Avail Use% Mounted on
192.168.31.81:6789,192.168.31.82:6789,192.168.31.83:6789:/ ceph  4.7G     0  4.7G   0% /data

Mounting at boot

# cat /etc/fstab
192.168.31.81:6789,192.168.31.82:6789,192.168.31.83:6789:/ /data ceph defaults,name=hr,secretfile=/etc/ceph/hr.key,_netdev 0 0
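
The fstab entry can be tested in place before rebooting (a minimal sketch, assuming the manual mount from above has been unmounted first):

# umount /data
# mount -a          ## mounts everything in /etc/fstab, including the cephfs entry
# df -Th /data      ## should show the cephfs mount again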
  • User-space mount (FUSE)

Performance is lower than with the kernel-space mount.

# apt install ceph-fuse ceph-common -y
# ceph-fuse --name client.hr -m 192.168.31.81:6789,192.168.31.82:6789,192.168.31.83:6789 /data
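
To make the FUSE mount persistent as well, the Ceph documentation describes an fstab entry of type fuse.ceph; a sketch for the client.hr user, assuming ceph.conf and the client keyring are already in /etc/ceph:

# cat /etc/fstab
none  /data  fuse.ceph  ceph.id=hr,_netdev,defaults  0 0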

3. Ceph MDS high availability

The Ceph MDS (metadata server) is the access entry point for CephFS, so it needs to be both high-performance and highly available. High availability is achieved with active/standby grouping: the active MDS serves reads and writes, the standby keeps a backup and synchronizes data, and when the active MDS fails the standby takes over its service.

The commonly used options are:

  • mds_standby_replay: true or false. When true, standby-replay is enabled: the standby MDS keeps its state synchronized with the active MDS in real time, so it can take over very quickly if the active one fails. When false, the data is only synchronized after a failure, which causes a period of interruption.
  • mds_standby_for_name: make this MDS daemon a standby only for the MDS with the given name.
  • mds_standby_for_rank: make this MDS daemon a standby only for the given rank (usually the rank number). When more than one CephFS file system exists, mds_standby_for_fscid can additionally be used to select a specific file system.
  • mds_standby_for_fscid: the CephFS file system id. Combined with mds_standby_for_rank it means the given rank of that file system; without mds_standby_for_rank it means all ranks of that file system (see the sketch after this list).
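
A sketch of how the rank/fscid options combine, using the fscid 1 that ceph fs get reports for mycephfs in section 3.3 (illustrative only; the per-daemon sections in 3.4 use mds_standby_for_name instead):

[mds.ceph-mon01]
## stand by only for rank 0 of the file system with fscid 1 (mycephfs), replaying its journal in real time
mds_standby_for_fscid = 1
mds_standby_for_rank = 0
mds_standby_replay = true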

3.1 Current MDS cluster status

# ceph mds stat
mycephfs:1 {0=ceph-mgr01=up:active}

3.2 Install the MDS daemons

Install on ceph-mon01, ceph-mon02 and ceph-mon03 respectively:

$ apt install ceph-mds -y

Then deploy the MDS daemons from the deploy node:

$ ceph-deploy mds create ceph-mon01
$ ceph-deploy mds create ceph-mon02
$ ceph-deploy mds create ceph-mon03
## Check that the daemon is running
# ps -ef |grep ceph-mds
ceph      74195      1  0 09:02 ?        00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mon02 --setuser ceph --setgroup ceph
root      74258  73819  0 09:03 pts/0    00:00:00 grep --color=auto ceph-mds

The MDS status now shows 1 active and 3 standby daemons:

$ ceph -s
  cluster:
    id:     86c42734-37fc-4091-b543-be6ff23e5134
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled
 
  services:
    mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03 (age 65m)
    mgr: ceph-mgr01(active, since 4d)
    mds: 1/1 daemons up, 3 standby
    osd: 16 osds: 16 up (since 64m), 16 in (since 4d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   9 pools, 265 pgs
    objects: 278 objects, 45 MiB
    usage:   674 MiB used, 15 GiB / 16 GiB avail
    pgs:     265 active+clean
$ ceph fs status
mycephfs - 0 clients
========
RANK  STATE      MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr01  Reqs:    0 /s    13     16     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   194k  4830M  
  cephfs-data      data    36.0k  4830M  
STANDBY MDS  
 ceph-mon01  
 ceph-mon03  
 ceph-mon02  
MDS version: ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)

3.3 Add active MDS daemons

$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name	mycephfs
epoch	4
flags	12
created	2022-09-15T15:16:49.434617+0800
modified	2022-09-15T15:16:50.438293+0800
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1  ## <------------- only 1 active MDS at this point
in	0
up	{0=5022}
failed	
damaged	
stopped	
data_pools	[9]
metadata_pool	8
inline_data	disabled
balancer	
standby_count_wanted	1
[mds.ceph-mgr01{0:5022} state up:active seq 182 addr [v2:192.168.31.84:6802/987239671,v1:192.168.31.84:6803/987239671] compat {c=[1],r=[1],i=[7ff]}]

Set max_mds to 2:

$ ceph fs set mycephfs max_mds 2
$ ceph fs get mycephfs|grep max_mds
max_mds	2
## Ceph now picks one of the standby nodes and makes it active
$ ceph fs status
mycephfs - 0 clients
========
RANK  STATE      MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr01  Reqs:    0 /s    13     16     12      0   
 1    active  ceph-mon02  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   266k  4829M  
  cephfs-data      data    36.0k  4829M  
STANDBY MDS  
 ceph-mon01  
 ceph-mon03  
MDS version: ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
$ ceph -s|grep mds
    mds: 2/2 daemons up, 2 standby

Metadata can now be read and written through two active MDS nodes at the same time. If an active MDS fails, a standby node still takes over and becomes active, but the failover takes a relatively long time.
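
Scaling back down works the same way (a minimal sketch): lowering max_mds makes Ceph stop the extra rank and return that daemon to standby.

$ ceph fs set mycephfs max_mds 1
$ ceph fs status        ## rank 1 disappears and its MDS shows up again under STANDBY MDS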

3.4 High availability tuning

$ cat ceph.conf
[global]
fsid = 86c42734-37fc-4091-b543-be6ff23e5134
public_network = 192.168.31.0/24
cluster_network = 172.31.31.0/24
mon_initial_members = ceph-mon01
mon_host = 192.168.31.81
mon clock drift allowed = 2
mon clock drift warn backoff = 30
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

## Clock drift allowed by the monitors; the default is 0.050 seconds, i.e. 50 ms
mon clock drift allowed = 3		# 3 means 3 seconds
## Back-off for clock drift warnings, i.e. after how many consecutive drift detections a warning is issued
mon clock drift warn backoff = 10


## ceph-mon01 acts as the standby for ceph-mgr01 and synchronizes data from it in real time
[mds.ceph-mon01]
mds_standby_for_name = ceph-mgr01
mds_standby_replay = true
## ceph-mgr01 acts as the standby for ceph-mon01 and synchronizes data from it in real time
[mds.ceph-mgr01]
mds_standby_for_name = ceph-mon01
mds_standby_replay = true
## ceph-mon03 acts as the standby for ceph-mon02 and synchronizes data from it in real time
[mds.ceph-mon03]
mds_standby_for_name = ceph-mon02
mds_standby_replay = true
## ceph-mon02 acts as the standby for ceph-mon03 and synchronizes data from it in real time
[mds.ceph-mon02]
mds_standby_for_name = ceph-mon03
mds_standby_replay = true

[client]
rbd cache = true
rbd cache size = 335544320
rbd cache max dirty = 134217728
rbd cache target dirty = 235544320
rbd cache max dirty age = 30
rbd cache max dirty object = 8
rbd cache writethrough until flush = false

3.5 Push the configuration

$ ceph-deploy --overwrite-conf config push ceph-mgr01
$ ceph-deploy --overwrite-conf config push ceph-mon01
$ ceph-deploy --overwrite-conf config push ceph-mon02
$ ceph-deploy --overwrite-conf config push ceph-mon03
$ ceph fs status
mycephfs - 0 clients
========
RANK  STATE      MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr01  Reqs:    0 /s    13     16     12      0   
 1    active  ceph-mon02  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   266k  4821M  
  cephfs-data      data    36.0k  4821M  
STANDBY MDS  
 ceph-mon01  
 ceph-mon03  
MDS version: ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
## Restart the MDS service on ceph-mon01 and ceph-mon03 so that the configuration takes effect
$ sudo ssh root@ceph-mon01 'systemctl restart ceph-mds@ceph-mon01.service'
$ sudo ssh root@ceph-mon03 'systemctl restart ceph-mds@ceph-mon03.service'
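
Note that the per-daemon mds_standby_* settings above come from older Ceph releases; in newer releases (including the Pacific cluster used here) standby-replay is normally enabled per file system instead. A hedged sketch of that variant:

$ ceph fs set mycephfs allow_standby_replay true
$ ceph fs status        ## standby-replay daemons are then listed with STATE standby-replay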

3.6 MDS active/standby failover

When the MDS on ceph-mgr01 is stopped, ceph-mon03 takes over its rank:

# systemctl stop ceph-mds@ceph-mgr01.service 
$ ceph fs status
mycephfs - 0 clients
========
RANK  STATE      MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mon03  Reqs:    0 /s    13     16     12      0   
 1    active  ceph-mon02  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   266k  4820M  
  cephfs-data      data    36.0k  4820M  
STANDBY MDS  
 ceph-mon01  
MDS version: ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
# When the MDS on ceph-mgr01 is started again, it rejoins as a standby
$ ceph fs status
mycephfs - 0 clients
========
RANK  STATE      MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mon03  Reqs:    0 /s    13     16     12      0   
 1    active  ceph-mon02  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   266k  4820M  
  cephfs-data      data    36.0k  4820M  
STANDBY MDS  
 ceph-mon01  
 ceph-mgr01  
MDS version: ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)

4. The RadosGW object storage gateway

Data is not placed in a directory hierarchy; all objects live at the same level in a flat address space.
An application identifies each individual data object by a unique address.
Each object can carry metadata that helps retrieval.
Access is done at the application level (not the user level) through a RESTful API.

4.1 RadosGW overview

RadosGW is one implementation of OSS (object storage service). The RADOS gateway, also called the Ceph Object Gateway, is a service that lets clients access a Ceph cluster through standard object-storage APIs. Clients talk to RGW over HTTP/HTTPS using a RESTful API, and RGW in turn talks to the Ceph cluster through librados. RGW clients authenticate as RGW users, and the gateway then authenticates to the Ceph storage cluster on their behalf via cephx.

4.2 Characteristics of object storage

Object storage stores data as objects; besides the data itself, each object also carries its own metadata.

Objects are retrieved by Object ID. They cannot be accessed directly through file paths and file names as in an ordinary file system, only through the API or through a third-party client (which is itself just a wrapper around the API).

Objects are not organized into a directory tree; they are stored in a flat namespace called a bucket, and buckets cannot be nested.

A bucket must be authorized before it can be accessed. One account can be granted access to several buckets, with different permissions for each.

OSS is easy to scale out and retrieves data quickly.

It is not well suited to scenarios where files are modified or deleted very frequently.

4.3 Bucket characteristics

A bucket (storage space) is the container that holds objects, and every object must belong to a bucket. Bucket attributes such as region, access permissions and lifecycle can be set and changed, and they apply to all objects in that bucket, so different management requirements can be met simply by creating different buckets.

The inside of a bucket is flat: there is no file-system directory concept, and every object belongs directly to its bucket.

Each user can own multiple buckets.

A bucket name must be globally unique within the OSS and cannot be changed after the bucket is created.

There is no limit on the number of objects inside a bucket.

4.4 Bucket naming rules

Only lowercase letters, digits and hyphens are allowed.
Names must start and end with a lowercase letter or a digit.
The length must be between 3 and 63 characters.
Bucket names must not look like IP addresses.
Bucket names must be globally unique (see the example after this list).
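
A quick illustration of these rules with s3cmd (hypothetical bucket names; s3cmd itself is set up in 4.5.5 below):

$ s3cmd mb s3://My_Bucket      ## violates the rules above: uppercase letters and an underscore
$ s3cmd mb s3://my-bucket-01   ## conforms to the rules: lowercase letters, digits and hyphens only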

4.5 Deploying the RadosGW service

Deploy ceph-mgr01 and ceph-mgr02 as a highly available pair of RadosGW servers.

4.5.1 Install the RadosGW package

$ apt install radosgw -y

4.5.2 Initialize RadosGW

$ ceph-deploy --overwrite-conf rgw create ceph-mgr01
$ ceph-deploy --overwrite-conf rgw create ceph-mgr02

# After this, an extra process is running
# ps -ef |grep radosgw
ceph      10823      1  0 Sep18 ?        00:05:56 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mgr01 --setuser ceph --setgroup ceph
# ss -ntl|grep 7480
LISTEN   0         128                 0.0.0.0:7480             0.0.0.0:*       
LISTEN   0         128                    [::]:7480                [::]:*  
# The number of rgw daemons has now become 2
$ ceph -s
  cluster:
    id:     86c42734-37fc-4091-b543-be6ff23e5134
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled
 
  services:
    mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03 (age 5h)
    mgr: ceph-mgr01(active, since 5d)
    mds: 2/2 daemons up, 2 standby
    osd: 16 osds: 16 up (since 5h), 16 in (since 4d)
    rgw: 2 daemons active (2 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   9 pools, 265 pgs
    objects: 297 objects, 45 MiB
    usage:   761 MiB used, 15 GiB / 16 GiB avail
    pgs:     265 active+clean

4.5.3 RGW pools

$ ceph osd pool ls
device_health_metrics
mypool
.rgw.root			 ## realm information, e.g. zones and zonegroups
default.rgw.log		 ## log storage
default.rgw.control   ## system control pool, used to notify other RGWs when data is updated
default.rgw.meta	 ## metadata pool
default.rgw.buckets.index ## bucket-to-object index information
default.rgw.buckets.data  ## the object data itself
default.rgw.buckets.non-ec ## pool for objects' additional information
cephfs-metadata
cephfs-data
rbd-data

4.5.4 Create an RGW user

$ radosgw-admin user create --uid="qiuqin" --display-name="qiuqin"
{
    "user_id": "qiuqin",
    "display_name": "qiuqin",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "qiuqin",
            "access_key": "81V0NJ2226QPJ97W4YBL",
            "secret_key": "mW7UFRa222oLJXpwPxoiaATkzCWCkz1"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
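
The access and secret keys can be retrieved again at any time with radosgw-admin (a quick check; it prints the same JSON as shown above):

$ radosgw-admin user info --uid="qiuqin"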

4.5.5 Test with s3cmd

# Install s3cmd
$ sudo apt install s3cmd -y
# Test access to the RGW on port 7480
$ curl http://192.168.31.84:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

4.5.6 Generate the s3cfg configuration

$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: 81V0NJDZD6QPJ97W4YBL		# your own access key
Secret Key: mW7UFRapA0fYnqYndx1oLJXpwPxoiaATkzCWCkz1 # your own secret key
Default Region [US]: 		# just press Enter

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.31.84:7480  # the OSS (RGW) address

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.31.84:7480/%(bucket)		# the OSS bucket address template

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: # just press Enter
Path to GPG program [/usr/bin/gpg]: # just press Enter

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No	# yes if you use HTTPS, otherwise no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: # just press Enter

New settings:
  Access Key: 81V0NJDZD6QPJ97W4YBL
  Secret Key: mW7UFRapA0fYnqYndx1oLJXpwPxoiaATkzCWCkz1
  Default Region: US
  S3 Endpoint: 192.168.31.84:7480
  DNS-style bucket+hostname:port template for accessing a bucket: 192.168.31.84:7480/%(bucket)
  Encryption password: 
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: # left empty
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y# verify the credentials now
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)## this means everything works

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y# save the configuration just entered
Configuration saved to '/home/cephadmin/.s3cfg'

4.5.7 Contents of the .s3cfg file

$ cat /home/cephadmin/.s3cfg
[default]
access_key = 81V0NJDZD6QPJ97W4YBL
access_token = 
add_encoding_exts = 
add_headers = 
bucket_location = US
ca_certs_file = 
cache_file = 
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date = 
expiry_days = 
expiry_prefix = 
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = 
guess_mime_type = True
host_base = 192.168.31.84:7480
host_bucket = 192.168.31.84:7480/%(bucket)
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key = 
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix = 
long_listing = False
max_delete = -1
mime_type = 
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host = 
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = mW7UFRapA0fYnqYndx1oLJXpwPxoiaATkzCWCkz1
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class = 
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error = 
website_index = index.html

4.5.8 Create buckets

$ s3cmd mb s3://ehelp
Bucket 's3://ehelp/' created
$ s3cmd mb s3://sap
Bucket 's3://sap/' created
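
Running s3cmd ls without any argument lists all buckets owned by the configured user:

$ s3cmd ls        ## should list s3://ehelp and s3://sap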

4.5.9 Upload files to a bucket

$ s3cmd put /etc/*.conf s3://ehelp
WARNING: Skipping over symbolic link: /etc/resolv.conf
upload: '/etc/adduser.conf' -> 's3://ehelp/adduser.conf'  [1 of 28]
 3028 of 3028   100% in    1s  1575.27 B/s  done
upload: '/etc/ca-certificates.conf' -> 's3://ehelp/ca-certificates.conf'  [2 of 28]
 7093 of 7093   100% in    0s   407.12 kB/s  done
upload: '/etc/daemon.conf' -> 's3://ehelp/daemon.conf'  [3 of 28]
 141 of 141   100% in    0s     8.04 kB/s  done
upload: '/etc/debconf.conf' -> 's3://ehelp/debconf.conf'  [4 of 28]
## Add the --recursive option when uploading a directory
$ s3cmd --recursive put /etc s3://sap

4.5.10 List files in a bucket

$ s3cmd ls s3://ehelp
2022-09-20 06:13      3028   s3://ehelp/adduser.conf
2022-09-20 06:13      7093   s3://ehelp/ca-certificates.conf
2022-09-20 06:13       141   s3://ehelp/daemon.conf
2022-09-20 06:13      2969   s3://ehelp/debconf.conf
2022-09-20 06:13       604   s3://ehelp/deluser.conf
2022-09-20 06:13       280   s3://ehelp/fuse.conf
2022-09-20 06:13      2584   s3://ehelp/gai.conf
# .... output truncated
$ s3cmd ls s3://sap/etc/
                       DIR   s3://sap/etc/NetworkManager/
                       DIR   s3://sap/etc/X11/
                       DIR   s3://sap/etc/alternatives/
                       DIR   s3://sap/etc/apache2/
                       DIR   s3://sap/etc/apm/
                       DIR   s3://sap/etc/apparmor.d/
# .... output truncated

4.5.11 Download files from a bucket

$ s3cmd get s3://ehelp/fuse.conf ./
download: 's3://ehelp/fuse.conf' -> './fuse.conf'  [1 of 1]
 280 of 280   100% in    0s     6.00 kB/s  done
$ s3cmd get s3://sap/etc/passwd ./
download: 's3://sap/etc/passwd' -> './passwd'  [1 of 1]
 1901 of 1901   100% in    0s    42.51 kB/s  done
$ ls fuse.conf passwd -l
-rw-rw-r-- 1 cephadmin cephadmin  280 Sep 20 06:13 fuse.conf
-rw-rw-r-- 1 cephadmin cephadmin 1901 Sep 20 06:16 passwd
$ tail -2 passwd 
cephadm:x:113:65534:cephadm user for mgr/cephadm,,,:/home/cephadm:/bin/bash
postfix:x:114:116::/var/spool/postfix:/usr/sbin/nologin
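
Objects and buckets can be removed with s3cmd as well (a sketch; in recent s3cmd versions rb accepts --recursive/--force to remove a non-empty bucket):

$ s3cmd del s3://ehelp/fuse.conf            ## delete a single object
$ s3cmd rb --recursive --force s3://sap     ## remove the bucket together with its contents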