This article was first published on my personal blog: https://hubinqiang.com
Test environment: Ubuntu 16.04 LTS
This guide describes how to set up a virtual machine for Swift development. The virtual machine will emulate a four-node Swift cluster. To begin, prepare the following:
What is <your-user-name>?
Most of the configuration described in this guide requires escalated administrator (root) privileges; however, if the administrator logs in as an unprivileged user, the privileged commands can be run with sudo.
Swift processes run under a separate user and group, set by configuration options and referenced as <your-user-name>:<your-group-name>. The default user is swift, which may not exist on your system. These instructions are intended to let a developer use his or her own username and group for <your-user-name>:<your-group-name>.
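To see which values to substitute for the placeholders on your machine, you can check your current user and primary group (a trivial illustration, not part of the original guide):
id -un    # prints the value to use for <your-user-name>
id -gn    # prints the value to use for <your-group-name>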
Install the dependencies. On apt based systems:
sudo apt-get update
sudo apt-get install curl gcc memcached rsync sqlite3 xfsprogs \
git-core libffi-dev python-setuptools \
liberasurecode-dev libssl-dev
sudo apt-get install python-coverage python-dev python-nose \
python-xattr python-eventlet \
python-greenlet python-pastedeploy \
python-netifaces python-pip python-dnspython \
python-mock
On yum based systems:
sudo yum update
sudo yum install curl gcc memcached rsync sqlite xfsprogs git-core \
libffi-devel xinetd liberasurecode-devel \
openssl-devel python-setuptools \
python-coverage python-devel python-nose \
pyxattr python-eventlet \
python-greenlet python-paste-deploy \
python-netifaces python-pip python-dns \
python-mock
Note: This installs the necessary system dependencies and most of the Python dependencies. Later in the process, setuptools/distribute or pip will install and/or upgrade the remaining packages.
Next, choose either a separate partition for storage or a loopback device for storage.
To use a separate partition for Swift data, make sure to add an additional device when creating the VM, then follow these instructions. Create a single partition on the device and format it with XFS:
sudo fdisk /dev/sdb
sudo mkfs.xfs /dev/sdb1
Edit /etc/fstab and add:
/dev/sdb1 /mnt/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0
sudo mkdir /mnt/sdb1
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
sudo mkdir /srv
for x in {1..4}; do sudo ln -s /mnt/sdb1/$x /srv/$x; done
sudo mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
/srv/2/node/sdb2 /srv/2/node/sdb6 \
/srv/3/node/sdb3 /srv/3/node/sdb7 \
/srv/4/node/sdb4 /srv/4/node/sdb8 \
/var/run/swift
sudo chown -R ${USER}:${USER} /var/run/swift
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: We create the mount point and mount the storage disk under /mnt/sdb1. This disk will contain one directory per simulated Swift node, owned by the current Swift user. We then create symlinks to those directories under /srv. If the disk sdb is unmounted, nothing will be written under /srv/*, because the symbolic link targets /mnt/sdb1/* will not exist. This prevents disk sync operations from writing to the root partition while the drive is unmounted.
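A quick way to see this safety net in action (a hypothetical check, not part of the original guide) is to unmount the disk and try to write through one of the symlinks; the write fails instead of silently landing on the root partition:
ls -l /srv/1                      # symlink pointing at /mnt/sdb1/1
sudo umount /mnt/sdb1             # simulate the drive disappearing
touch /srv/1/node/sdb1/probe      # fails with "No such file or directory"
sudo mount /mnt/sdb1              # remount before continuing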
To use a loopback device instead of a separate partition, follow these instructions:
sudo mkdir /srv
sudo truncate -s 1GB /srv/swift-disk
sudo mkfs.xfs /srv/swift-disk
Modify the size specified in the truncate command to create a larger or smaller partition as needed.
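For example (an illustrative size only), a 10 GB sparse file could be created instead with:
sudo truncate -s 10GB /srv/swift-disk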
Edit /etc/fstab and add:
/srv/swift-disk /mnt/sdb1 xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0
sudo mkdir /mnt/sdb1
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
for x in {1..4}; do sudo ln -s /mnt/sdb1/$x /srv/$x; done
sudo mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
/srv/2/node/sdb2 /srv/2/node/sdb6 \
/srv/3/node/sdb3 /srv/3/node/sdb7 \
/srv/4/node/sdb4 /srv/4/node/sdb8 \
/var/run/swift
sudo chown -R ${USER}:${USER} /var/run/swift
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: We create the mount point and mount the storage disk under /mnt/sdb1. This disk will contain one directory per simulated Swift node, owned by the current Swift user. We then create symlinks to those directories under /srv. If the disk sdb is unmounted, nothing will be written under /srv/*, because the symbolic link targets /mnt/sdb1/* will not exist. This prevents disk sync operations from writing to the root partition while the drive is unmounted.
Add the following lines to /etc/rc.local (before the exit 0 line):
mkdir -p /var/cache/swift /var/cache/swift2 /var/cache/swift3 /var/cache/swift4
chown <your-user-name>:<your-group-name> /var/cache/swift*
mkdir -p /var/run/swift
chown <your-user-name>:<your-group-name> /var/run/swift
Note: On some systems you may need to create /etc/rc.local yourself. On Fedora 19 or later, these lines need to go in /etc/rc.d/rc.local instead.
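If the file does not exist yet, one possible way to create it (a sketch, assuming your system executes /etc/rc.local at boot; adjust the path on Fedora as noted above) is:
sudo tee /etc/rc.local >/dev/null <<'EOF'
#!/bin/sh -e
mkdir -p /var/cache/swift /var/cache/swift2 /var/cache/swift3 /var/cache/swift4
chown <your-user-name>:<your-group-name> /var/cache/swift*
mkdir -p /var/run/swift
chown <your-user-name>:<your-group-name> /var/run/swift
exit 0
EOF
sudo chmod +x /etc/rc.local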
Check out the python-swiftclient code:
cd $HOME; git clone https://github.com/openstack/python-swiftclient.git
Build a development installation of python-swiftclient:
cd $HOME/python-swiftclient; sudo python setup.py develop; cd -
Ubuntu 12.04 users need to install python-swiftclient's dependencies before installing python-swiftclient itself, due to a bug in an older version of setuptools:
cd $HOME/python-swiftclient; sudo pip install -r requirements.txt; sudo python setup.py develop; cd -
Check out the swift code:
git clone https://github.com/openstack/swift.git
Build a development installation of swift:
cd $HOME/swift; sudo pip install -r requirements.txt; sudo python setup.py develop; cd -
Fedora 19 or later users may need to run the following if the development installation of swift fails:
sudo pip install -U xattr
Install swift's test dependencies:
cd $HOME/swift; sudo pip install -r test-requirements.txt
Setting up rsync
Create /etc/rsyncd.conf:
sudo cp $HOME/swift/doc/saio/rsyncd.conf /etc/
sudo sed -i "s/<your-user-name>/${USER}/" /etc/rsyncd.conf
Here is the content of the default rsyncd.conf file maintained in the repo, which the commands above copy and fix up:
uid = <your-user-name>
gid = <your-user-name>
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 127.0.0.1
[account6012]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/account6012.lock
[account6022]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/account6022.lock
[account6032]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/account6032.lock
[account6042]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/account6042.lock
[container6011]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/container6011.lock
[container6021]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/container6021.lock
[container6031]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/container6031.lock
[container6041]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/container6041.lock
[object6010]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6010.lock
[object6020]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6020.lock
[object6030]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6030.lock
[object6040]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6040.lock
On Ubuntu, edit the following line in /etc/default/rsync:
RSYNC_ENABLE=true
On Fedora, edit the following line in /etc/xinetd.d/rsync:
disable = no
You might have to create this file first in order to perform the edit.
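If the file is missing, a typical /etc/xinetd.d/rsync entry looks roughly like the following (a sketch based on the stock xinetd rsync service definition; verify the rsync binary path on your system):
service rsync
{
    disable         = no
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/bin/rsync
    server_args     = --daemon
    log_on_failure  += USERID
}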
If SELinux is in Enforcing mode, either set it to Permissive:
sudo setenforce Permissive
Or simply allow rsync full access:
sudo setsebool -P rsync_full_access 1
Start (or restart) the rsync daemon. On Ubuntu:
sudo service rsync restart
On xinetd based systems (e.g. Fedora), restart xinetd:
sudo systemctl restart xinetd.service
or:
sudo service xinetd restart
On systems that ship an rsyncd systemd unit, enable and start it:
sudo systemctl enable rsyncd.service
sudo systemctl start rsyncd.service
Verify that rsync is accepting connections for all servers:
rsync rsync://pub@localhost/
You should see the following output from the above command:
account6012
account6022
account6032
account6042
container6011
container6021
container6031
container6041
object6010
object6020
object6030
object6040
Starting memcached
On non-Ubuntu distros you need to ensure memcached is running:
sudo service memcached start
sudo chkconfig memcached on
Or alternatively:
sudo systemctl enable memcached.service
sudo systemctl start memcached.service
This is required because the tempauth middleware stores tokens in memcached. If memcached is not running, tokens cannot be validated and Swift cannot be used.
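To confirm that memcached is actually listening, one quick sanity check (an assumption on my part, not from the original guide; it requires netcat and the default port 11211) is:
echo stats | nc -w 1 127.0.0.1 11211 | head -n 5    # should print STAT lines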
Setting up rsyslog for individual logging
Install the Swift rsyslogd configuration:
sudo cp $HOME/swift/doc/saio/rsyslog.d/10-swift.conf /etc/rsyslog.d/
Review that conf file to decide whether you want the logs split per server and whether you want hourly logs for stats processing. For convenience, its default contents are shown below:
# Uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.* /var/log/swift/all.log
# Uncomment the following to have hourly proxy logs for stats processing
#$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog
local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice /var/log/swift/proxy.error
local1.* ~
local2.*;local2.!notice /var/log/swift/storage1.log
local2.notice /var/log/swift/storage1.error
local2.* ~
local3.*;local3.!notice /var/log/swift/storage2.log
local3.notice /var/log/swift/storage2.error
local3.* ~
local4.*;local4.!notice /var/log/swift/storage3.log
local4.notice /var/log/swift/storage3.error
local4.* ~
local5.*;local5.!notice /var/log/swift/storage4.log
local5.notice /var/log/swift/storage4.error
local5.* ~
local6.*;local6.!notice /var/log/swift/expirer.log
local6.notice /var/log/swift/expirer.error
local6.* ~
Edit /etc/rsyslog.conf and make the following change (usually in the "GLOBAL DIRECTIVES" section):
$PrivDropToGroup adm
If using hourly logs (see above), perform:
sudo mkdir -p /var/log/swift/hourly
Otherwise perform:
sudo mkdir -p /var/log/swift
Set up the logging directory and restart syslog. On Ubuntu:
sudo chown -R syslog.adm /var/log/swift
sudo chmod -R g+w /var/log/swift
sudo service rsyslog restart
On CentOS, Fedora, and other systemd based distros:
sudo chown -R root:adm /var/log/swift
sudo chmod -R g+w /var/log/swift
sudo systemctl restart rsyslog.service
Configuring each node: After performing the following steps, be sure to verify that Swift has access to the resulting configuration files (the sample configuration files document all defaults in line-by-line comments).
sudo rm -rf /etc/swift
cd $HOME/swift/doc; sudo cp -r saio/swift /etc/swift; cd -
sudo chown -R ${USER}:${USER} /etc/swift
Update the <your-user-name> references in the Swift config files:
find /etc/swift/ -name \*.conf | xargs sudo sed -i "s/<your-user-name>/${USER}/"
The contents of the configuration files installed by the above commands are as follows:
1. /etc/swift/swift.conf
[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
# Use only printable chars (python -c "import string; print(string.printable)")
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme
[storage-policy:0]
name = gold
policy_type = replication
default = yes
[storage-policy:1]
name = silver
policy_type = replication
[storage-policy:2]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2
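The changeme values above are meant to be replaced with long random strings before the rings are built, and never changed afterwards (losing them makes existing data unreachable). One common way to generate such values, assuming /dev/urandom and od are available, is to run the following once per option and paste the result in:
od -t x8 -N 8 -A n < /dev/urandom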
2. /etc/swift/proxy-server.conf
[DEFAULT]
bind_ip = 127.0.0.1
bind_port = 8080
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL1
eventlet_debug = true
[pipeline:main]
# Yes, proxy-logging appears twice. This is so that
# middleware-originated requests get logged too.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit crossdomain container_sync tempauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:proxy-logging]
use = egg:swift#proxy_logging
[filter:bulk]
use = egg:swift#bulk
[filter:ratelimit]
use = egg:swift#ratelimit
[filter:crossdomain]
use = egg:swift#crossdomain
[filter:dlo]
use = egg:swift#dlo
[filter:slo]
use = egg:swift#slo
[filter:container_sync]
use = egg:swift#container_sync
current = //saio/saio_endpoint
[filter:tempurl]
use = egg:swift#tempurl
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
[filter:staticweb]
use = egg:swift#staticweb
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:container-quotas]
use = egg:swift#container_quotas
[filter:cache]
use = egg:swift#memcache
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
[filter:copy]
use = egg:swift#copy
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
3. /etc/swift/object-expirer.conf
[DEFAULT]
# swift_dir = /etc/swift
user = <your-user-name>
# You can specify default log routing here if you want:
log_name = object-expirer
log_facility = LOG_LOCAL6
log_level = INFO
#log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
[object-expirer]
interval = 300
# auto_create_account_prefix = .
# report_interval = 300
# concurrency is the level of concurrency o use to do the work, this value
# must be set to at least 1
# concurrency = 1
# processes is how many parts to divide the work into, one part per process
# that will be doing the work
# processes set 0 means that a single process will be doing all the work
# processes can also be specified on the command line and will override the
# config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the config
# value
# process is "zero based", if you want to use 3 processes, you should run
# processes with process set to 0, 1, and 2
# process = 0
[pipeline:main]
pipeline = catch_errors cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options
[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options
[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
4. /etc/swift/container-reconciler.conf
[DEFAULT]
# swift_dir = /etc/swift
user = <your-user-name>
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
[container-reconciler]
# reclaim_age = 604800
# interval = 300
# request_tries = 3
[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options
[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options
[filter:proxy-logging]
use = egg:swift#proxy_logging
[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
5. /etc/swift/container-sync-realms.conf
[saio]
key = changeme
key2 = changeme
cluster_saio_endpoint = http://127.0.0.1:8080/v1/
6. /etc/swift/account-server/1.conf
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6012
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
rsync_module = {replication_ip}::account{replication_port}
[account-auditor]
[account-reaper]
7. /etc/swift/container-server/1.conf
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6011
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
rsync_module = {replication_ip}::container{replication_port}
[container-updater]
[container-auditor]
[container-sync]
8. /etc/swift/object-server/1.conf
[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6010
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
rsync_module = {replication_ip}::object{replication_port}
[object-reconstructor]
[object-updater]
[object-auditor]
9. /etc/swift/account-server/2.conf
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6022
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
rsync_module = {replication_ip}::account{replication_port}
[account-auditor]
[account-reaper]
10. /etc/swift/container-server/2.conf
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6021
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
rsync_module = {replication_ip}::container{replication_port}
[container-updater]
[container-auditor]
[container-sync]
11. /etc/swift/object-server/2.conf
[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6020
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
rsync_module = {replication_ip}::object{replication_port}
[object-reconstructor]
[object-updater]
[object-auditor]
12. /etc/swift/account-server/3.conf
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6032
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
rsync_module = {replication_ip}::account{replication_port}
[account-auditor]
[account-reaper]
13. /etc/swift/container-server/3.conf
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6031
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
rsync_module = {replication_ip}::container{replication_port}
[container-updater]
[container-auditor]
[container-sync]
14. /etc/swift/object-server/3.conf
[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6030
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
rsync_module = {replication_ip}::object{replication_port}
[object-reconstructor]
[object-updater]
[object-auditor]
15. /etc/swift/account-server/4.conf
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6042
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon account-server
[app:account-server]
use = egg:swift#account
[filter:recon]
use = egg:swift#recon
[account-replicator]
rsync_module = {replication_ip}::account{replication_port}
[account-auditor]
[account-reaper]
16. /etc/swift/container-server/4.conf
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6041
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon container-server
[app:container-server]
use = egg:swift#container
[filter:recon]
use = egg:swift#recon
[container-replicator]
rsync_module = {replication_ip}::container{replication_port}
[container-updater]
[container-auditor]
[container-sync]
17. /etc/swift/object-server/4.conf
[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6040
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true
[pipeline:main]
pipeline = recon object-server
[app:object-server]
use = egg:swift#object
[filter:recon]
use = egg:swift#recon
[object-replicator]
rsync_module = {replication_ip}::object{replication_port}
[object-reconstructor]
[object-updater]
[object-auditor]
Copy the SAIO scripts for resetting the environment:
mkdir -p $HOME/bin
cd $HOME/swift/doc; cp saio/bin/* $HOME/bin; cd -
chmod +x $HOME/bin/*
The template resetswift script at $HOME/bin/resetswift looks like this:
#!/bin/bash
set -e
swift-init all kill
# Remove the following line if you did not set up rsyslog for individual logging:
sudo find /var/log/swift -type f -exec rm -f {} \;
if cut -d' ' -f2 /proc/mounts | grep -q /mnt/sdb1 ; then
sudo umount /mnt/sdb1
fi
# If you are using a loopback device set SAIO_BLOCK_DEVICE to "/srv/swift-disk"
sudo mkfs.xfs -f ${SAIO_BLOCK_DEVICE:-/dev/sdb1}
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
/srv/2/node/sdb2 /srv/2/node/sdb6 \
/srv/3/node/sdb3 /srv/3/node/sdb7 \
/srv/4/node/sdb4 /srv/4/node/sdb8
sudo rm -f /var/log/debug /var/log/messages /var/log/rsyncd.log /var/log/syslog
find /var/cache/swift* -type f -name *.recon -exec rm -f {} \;
if [ "`type -t systemctl`" == "file" ]; then
sudo systemctl restart rsyslog
sudo systemctl restart memcached
else
sudo service rsyslog restart
sudo service memcached restart
fi
If you are using a loopback device, add an environment variable that substitutes /srv/swift-disk for /dev/sdb1:
echo "export SAIO_BLOCK_DEVICE=/srv/swift-disk" >> $HOME/.bashrc
If you did not set up rsyslog for individual logging, remove the find /var/log/swift ... line from the script:
sed -i "/find \/var\/log\/swift/d" $HOME/bin/resetswift
Install the sample configuration file for running tests:
cp $HOME/swift/test/sample.conf /etc/swift/test.conf
The template test.conf looks like this:
[func_test]
# Sample config for Swift with tempauth
auth_host = 127.0.0.1
auth_port = 8080
auth_ssl = no
auth_prefix = /auth/
# Sample config for Swift with Keystone v2 API.
# For keystone v2 change auth_version to 2 and auth_prefix to /v2.0/.
# And "allow_account_management" should not be set "true".
#auth_version = 3
#auth_host = localhost
#auth_port = 5000
#auth_ssl = no
#auth_prefix = /v3/
# Primary functional test account (needs admin access to the account)
account = test
username = tester
password = testing
# User on a second account (needs admin access to the account)
account2 = test2
username2 = tester2
password2 = testing2
# User on same account as first, but without admin access
username3 = tester3
password3 = testing3
# Fourth user is required for keystone v3 specific tests.
# Account must be in a non-default domain.
#account4 = test4
#username4 = tester4
#password4 = testing4
#domain4 = test-domain
# Fifth user is required for service token-specific tests.
# The account must be different from the primary test account.
# The user must not have a group (tempauth) or role (keystoneauth) on
# the primary test account. The user must have a group/role that is unique
# and not given to the primary tester and is specified in the options
# <prefix>_require_group (tempauth) or <prefix>_service_roles (keystoneauth).
#account5 = test5
#username5 = tester5
#password5 = testing5
# The service_prefix option is used for service token-specific tests.
# If service_prefix or username5 above is not supplied, the tests are skipped.
# To set the value and enable the service token tests, look at the
# reseller_prefix option in /etc/swift/proxy-server.conf. There must be at
# least two prefixes. If not, add a prefix as follows (where we add SERVICE):
# reseller_prefix = AUTH, SERVICE
# The service_prefix must match the <prefix> used in <prefix>_require_group
# (tempauth) or <prefix>_service_roles (keystoneauth); for example:
# SERVICE_require_group = service
# SERVICE_service_roles = service
# Note: Do not enable service token tests if the first prefix in
# reseller_prefix is the empty prefix AND the primary functional test
# account contains an underscore.
#service_prefix = SERVICE
# Sixth user is required for access control tests.
# Account must have a role for reseller_admin_role(keystoneauth).
#account6 = test
#username6 = tester6
#password6 = testing6
collate = C
# Only necessary if a pre-existing server uses self-signed certificate
insecure = no
[unit_test]
fake_syslog = False
[probe_test]
# check_server_timeout = 30
# validate_rsync = false
[swift-constraints]
# The functional test runner will try to use the constraint values provided in
# the swift-constraints section of test.conf.
#
# If a constraint value does not exist in that section, or because the
# swift-constraints section does not exist, the constraints values found in
# the /info API call (if successful) will be used.
#
# If a constraint value cannot be found in the /info results, either because
# the /info API call failed, or a value is not present, the constraint value
# used will fall back to those loaded by the constraints module at time of
# import (which will attempt to load /etc/swift/swift.conf, see the
# swift.common.constraints module for more information).
#
# Note that the cluster must have "sane" values for the test suite to pass
# (for some definition of sane).
#
#max_file_size = 5368709122
#max_meta_name_length = 128
#max_meta_value_length = 256
#max_meta_count = 90
#max_meta_overall_size = 4096
#max_header_size = 8192
#extra_header_count = 0
#max_object_name_length = 1024
#container_listing_limit = 10000
#account_listing_limit = 10000
#max_account_name_length = 256
#max_container_name_length = 256
# Newer swift versions default to strict cors mode, but older ones were the
# opposite.
#strict_cors_mode = true
echo "export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf" >> $HOME/.bashrc
Make sure your PATH includes the bin directory:
echo "export PATH=${PATH}:$HOME/bin" >> $HOME/.bashrc
Source the environment variables you just set so they take effect:
. $HOME/.bashrc
Construct the initial rings of the cluster using the provided remakerings script:
remakerings
The remakerings script looks like this:
#!/bin/bash
set -e
cd /etc/swift
rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz
swift-ring-builder object.builder create 10 3 1
swift-ring-builder object.builder add r1z1-127.0.0.1:6010/sdb1 1
swift-ring-builder object.builder add r1z2-127.0.0.1:6020/sdb2 1
swift-ring-builder object.builder add r1z3-127.0.0.1:6030/sdb3 1
swift-ring-builder object.builder add r1z4-127.0.0.1:6040/sdb4 1
swift-ring-builder object.builder rebalance
swift-ring-builder object-1.builder create 10 2 1
swift-ring-builder object-1.builder add r1z1-127.0.0.1:6010/sdb1 1
swift-ring-builder object-1.builder add r1z2-127.0.0.1:6020/sdb2 1
swift-ring-builder object-1.builder add r1z3-127.0.0.1:6030/sdb3 1
swift-ring-builder object-1.builder add r1z4-127.0.0.1:6040/sdb4 1
swift-ring-builder object-1.builder rebalance
swift-ring-builder object-2.builder create 10 6 1
swift-ring-builder object-2.builder add r1z1-127.0.0.1:6010/sdb1 1
swift-ring-builder object-2.builder add r1z1-127.0.0.1:6010/sdb5 1
swift-ring-builder object-2.builder add r1z2-127.0.0.1:6020/sdb2 1
swift-ring-builder object-2.builder add r1z2-127.0.0.1:6020/sdb6 1
swift-ring-builder object-2.builder add r1z3-127.0.0.1:6030/sdb3 1
swift-ring-builder object-2.builder add r1z3-127.0.0.1:6030/sdb7 1
swift-ring-builder object-2.builder add r1z4-127.0.0.1:6040/sdb4 1
swift-ring-builder object-2.builder add r1z4-127.0.0.1:6040/sdb8 1
swift-ring-builder object-2.builder rebalance
swift-ring-builder container.builder create 10 3 1
swift-ring-builder container.builder add r1z1-127.0.0.1:6011/sdb1 1
swift-ring-builder container.builder add r1z2-127.0.0.1:6021/sdb2 1
swift-ring-builder container.builder add r1z3-127.0.0.1:6031/sdb3 1
swift-ring-builder container.builder add r1z4-127.0.0.1:6041/sdb4 1
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder create 10 3 1
swift-ring-builder account.builder add r1z1-127.0.0.1:6012/sdb1 1
swift-ring-builder account.builder add r1z2-127.0.0.1:6022/sdb2 1
swift-ring-builder account.builder add r1z3-127.0.0.1:6032/sdb3 1
swift-ring-builder account.builder add r1z4-127.0.0.1:6042/sdb4 1
swift-ring-builder account.builder rebalance
Running this command produces the output below. Note that three object rings are created so that storage policies and erasure coding (EC) can be tested in the SAIO environment. The EC ring is the only one that uses all eight devices; the two replication rings, one for 3x replication and one for 2x replication, use only four devices:
Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0
Device d1r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb2_"" with 1.0 weight got id 1
Device d2r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb3_"" with 1.0 weight got id 2
Device d3r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb4_"" with 1.0 weight got id 3
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0
Device d1r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb2_"" with 1.0 weight got id 1
Device d2r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb3_"" with 1.0 weight got id 2
Device d3r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb4_"" with 1.0 weight got id 3
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0
Device d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb5_"" with 1.0 weight got id 1
Device d2r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb2_"" with 1.0 weight got id 2
Device d3r1z2-127.0.0.1:6020R127.0.0.1:6020/sdb6_"" with 1.0 weight got id 3
Device d4r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb3_"" with 1.0 weight got id 4
Device d5r1z3-127.0.0.1:6030R127.0.0.1:6030/sdb7_"" with 1.0 weight got id 5
Device d6r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb4_"" with 1.0 weight got id 6
Device d7r1z4-127.0.0.1:6040R127.0.0.1:6040/sdb8_"" with 1.0 weight got id 7
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Device d0r1z1-127.0.0.1:6011R127.0.0.1:6011/sdb1_"" with 1.0 weight got id 0
Device d1r1z2-127.0.0.1:6021R127.0.0.1:6021/sdb2_"" with 1.0 weight got id 1
Device d2r1z3-127.0.0.1:6031R127.0.0.1:6031/sdb3_"" with 1.0 weight got id 2
Device d3r1z4-127.0.0.1:6041R127.0.0.1:6041/sdb4_"" with 1.0 weight got id 3
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Device d0r1z1-127.0.0.1:6012R127.0.0.1:6012/sdb1_"" with 1.0 weight got id 0
Device d1r1z2-127.0.0.1:6022R127.0.0.1:6022/sdb2_"" with 1.0 weight got id 1
Device d2r1z3-127.0.0.1:6032R127.0.0.1:6032/sdb3_"" with 1.0 weight got id 2
Device d3r1z4-127.0.0.1:6042R127.0.0.1:6042/sdb4_"" with 1.0 weight got id 3
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
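If you want to double-check what was built, any of the builder files can be inspected with swift-ring-builder, for example:
swift-ring-builder /etc/swift/object.builder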
For more background on these rings, see "Adding Storage Policies to an Existing SAIO" in the Swift documentation.
Verify that the unit tests run:
$HOME/swift/.unittests
Note: The unit tests do not require any Swift daemons to be running.
Start the main Swift daemon processes (proxy, account, container, and object):
startmain
(The "Unable to increase file descriptor limit. Running as non-root?" warning is expected and can be ignored.)
The startmain script looks like this:
#!/bin/bash
set -e
swift-init main start
Get an X-Storage-Url and X-Auth-Token:
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
Check that you can GET the account:
curl -v -H 'X-Auth-Token: <token-from-x-auth-token-above>' <url-from-x-storage-url-above>
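As an extra sanity check (not part of the original guide), the same token can be used to create a container and then list the account again; mycontainer below is an arbitrary example name:
curl -v -X PUT -H 'X-Auth-Token: <token-from-x-auth-token-above>' <url-from-x-storage-url-above>/mycontainer
curl -v -H 'X-Auth-Token: <token-from-x-auth-token-above>' <url-from-x-storage-url-above>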
Check that the swift command provided by python-swiftclient works:
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
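If stat succeeds, a quick end-to-end round trip can also be tried (a hedged example; the container and file names are arbitrary):
echo "hello saio" > hello.txt
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload mycontainer hello.txt
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing list mycontainer
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download mycontainer hello.txt -o -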
Check that the functional tests run:
$HOME/swift/.functests
(Note: The functional tests will first delete everything in the configured accounts.)
Check that the probe tests run:
$HOME/swift/.probetests
(Note: The probe tests will reset your environment, as they call resetswift for each test.)
If things do not go as planned - tests fail, auth does not work, or something else is broken - here are some good places to start looking for the problem:
Everything is logged using system facilities, usually in /var/log/syslog but possibly in /var/log/messages (e.g. on Fedora), so that is the first place to look for errors (most likely Python tracebacks).
If a server is not running and no errors show up in syslog, it can be useful to start that server manually; for example, swift-object-server /etc/swift/object-server/1.conf starts the object server. Problems that do not appear in syslog will likely show up as a traceback on startup.
If needed, you can turn off syslog for the unit tests. This is useful in environments where /dev/log is unavailable or cannot be rate limited (the unit tests generate a lot of logs very quickly): open the file that SWIFT_TEST_CONFIG_FILE points to and change the value of fake_syslog to True.
If authentication fails, check whether memcache is running with sudo service memcached status. If it is not running, start it with sudo service memcached start, then rerun the GET against the account.
Listed below are some unexpected problems you may run into while using or testing your SAIO:
fallocate_reserve - In most cases a SAIO does not have a very large XFS partition, so having fallocate enabled and fallocate_reserve set can cause problems, especially when trying to run the functional tests. For this reason, fallocate has been turned off on the object servers in the SAIO. If you do want to experiment with fallocate_reserve, be aware that the functional tests will fail unless you change the max_file_size constraint to something more reasonable than the default (5G). Ideally you would set it to about 1/4 of your XFS file system size so the tests can pass.
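For example, the override could look like the following in the [swift-constraints] section of /etc/swift/swift.conf, mirrored in /etc/swift/test.conf so the functional tests pick it up (268435456 bytes, i.e. 256 MB, is only an illustrative value; choose roughly 1/4 of your file system size):
[swift-constraints]
max_file_size = 268435456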