
Problems Encountered When Deploying CloudFoundry on OpenStack with BOSH

单勇
2023-12-01

CloudFoundry deployment discussion QQ group: 176302388

1. Problems deploying Micro BOSH

1.1 Error when running the micro BOSH deploy command on the Micro BOSH VM:

bosh micro deploy /var/vcap/stemcells/micro-bosh-stemcell-openstack-kvm-0.8.1.tgz
The error message:

Could not find Cloud Provider Plugin: openstack

This is not the real error. You can look at the code that raises this exception, in /usr/local/rvm/gems/ruby-1.9.3-p374/gems/bosh_cpi-0.5.1/lib/cloud/provider.rb

Remove the exception handling from that code block, changing it as follows so that the real exception surfaces:

module Bosh::Clouds
  class Provider

    # Load the CPI plugin directly, without the rescue block that was
    # masking the load failure, so the underlying exception propagates.
    def self.create(plugin, options)
      require "cloud/#{plugin}"
      Bosh::Clouds.const_get(plugin.capitalize).new(options)
    end

  end
end
The real exception:

Failed to load plugin
gems/bosh_deployer-1.4.1/lib/bosh/cli/commands/micro.rb: Unable to activate
fog-1.10.1, because net-scp-1.0.4 conflicts with net-scp (~> 1.1)

As the message shows, the error is caused by a gem conflict. Inspecting the gem dependencies shows that the BOSH gems bosh_cli and bosh_deployer both depend on fog and net-scp, but with different version requirements, and that mismatch triggers the error. A quick way to confirm this is sketched below.
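A minimal check with standard RubyGems commands (the gem names are taken from the error above):

# print the declared dependencies of each BOSH gem
gem dependency bosh_cli
gem dependency bosh_deployer
# list the installed versions of the two gems they disagree on
gem list fog
gem list net-scp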

Solution:

Uninstall every installed version of bosh_cli, bosh_deployer, fog, and net-scp, then reinstall them with Bundler:

gem uninstall bosh_cli
gem uninstall bosh_deployer
gem uninstall fog
gem uninstall net-scp

Create a Gemfile in the root directory and add the gems to install:

source "https://RubyGems.org"
gem "bosh_cli"
gem "bosh_deployer"
Save the file, then run:

# install the gems
bundle install
# verify the bosh micro command works
bundle exec bosh micro

The problem is resolved; rerun the Micro BOSH deploy command.

Lesson learned: sometimes the real cause of an error is swallowed by the source code, so the message in the logs is misleading and Google turns up nothing. Reading the source carefully and modifying it to let the underlying exception surface is what revealed the real cause and solved the problem.


1.2 Endpoint version problem

bosh micro deploy /var/vcap/stemcells/micro-bosh-stemcell-openstack-kvm-0.8.1.tgz
The error message:

response => #<Excon::Response:0x00000000992b90 @body="{\"versions\": [{\"status\": \"EXPERIMENTAL\", \"id\": \"v2\", \"links\": [{\"href\": \"http://ip:9292/v2/\", \"rel\": \"self\"}]}, {\"status\": \"CURRENT\", \"id\": \"v1.1\", \"links\": [{\"href\": \"http://ip:9292/v1/\", \"rel\": \"self\"}]}, {\"status\": \"SUPPORTED\", \"id\": \"v1.0\", \"links\": [{\"href\": \"http://ip:9292/v1/\", \"rel\": \"self\"}]}]}", @headers={"Content-Type"=>"application/json", "Content-Length"=>"340", "Date"=>"Wed, 22 Aug 2013 16:38:30 GMT"}, @status=300>
This happens because the fog library used by BOSH only supports v1.0 and v1.1 of the Glance endpoint, not v2.0.

Solution: create a new v1.0 endpoint with the following command:

keystone endpoint-create \
    --region RegionOne \
    --service_id c933887a2e3341b18bdae2c92e6f1ba7 \
    --publicurl "http://10.68.19.61:9292/v1.0" \
    --adminurl "http://10.68.19.61:9292/v1.0" \
    --internalurl "http://10.68.19.61:9292/v1.0"
Substitute your own service_id and URLs, and delete the original v2 endpoint first; otherwise you will get a duplicate-definition error. A sketch of the lookup and cleanup follows.
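The lookup and deletion can be done with the legacy keystone client, roughly as follows (the endpoint id is a placeholder):

# find the glance service id and the existing endpoints
keystone service-list
keystone endpoint-list
# delete the old v2 endpoint by id before creating the v1.0 one
keystone endpoint-delete <endpoint_id>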

Lesson learned: even official documentation is not guaranteed to be correct. Treat every technical document with a degree of skepticism: when something goes wrong, first check whether your own steps were correct, and once they are confirmed, question the documentation itself. During the installation of OpenStack, BOSH, and CloudFoundry, we hit many problems that really exist but that the official docs never mention.


2. Problems deploying BOSH

2.1 Quota exceeded while deploying BOSH

Running bosh deploy to deploy BOSH produces the following error:

Creating bound missing VMs
  small/2: Expected([200, 202]) <=> Actual(413 Request Entity Too Large)                            
  request => {:connect_timeout=>60, :headers=>{"Content-Type"=>"application/json", "X-Auth-Token"=>"38fe51b931184a30a287e71bc37cc05d", "Host"=>"10.23.54.150:8774", "Content-Length"=>422}, :instrumentor_name=>"excon", :mock=>false, :nonblock=>true, :read_timeout=>60, :retry_limit=>4, :ssl_ca_file=>"/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/excon-0.16.2/data/cacert.pem", :ssl_verify_peer=>true, :write_timeout=>60, :host=>"10.23.54.150", :path=>"/v2/69816bacecd749f9ba1d68b3c8bae1f1/servers.json", :port=>"8774", :query=>"ignore_awful_caching1362453746", :scheme=>"http", :body=>"{\"server\":{\"flavorRef\":\"25\",\"imageRef\":\"e205b9ec-0e19-4500-87fe-ede3af13b227\",\"name\":\"vm-b875d6d8-81ce-483b-bfa8-d6d525aaf280\",\"metadata\":{},\"user_data\":\"eyJyZWdpc3RyeSI6eyJlbmRwb2ludCI6Imh0dHA6Ly8xMC4yMy41MS4zNToy\\nNTc3NyJ9LCJzZXJ2ZXIiOnsibmFtZSI6InZtLWI4NzVkNmQ4LTgxY2UtNDgz\\nYi1iZmE4LWQ2ZDUyNWFhZjI4MCJ9LCJkbnMiOnsibmFtZXNlcnZlciI6WyIx\\nMC4yMy41NC4xMDgiXX19\\n\",\"key_name\":\"jae2\",\"security_groups\":[{\"name\":\"default\"}]}}", :expects=>[200, 202], :method=>"POST"}
  response => #<Excon::Response:0x00000004edec30 @body="{\"overLimit\": {\"message\": \"Quota exceeded for instances: Requested 1, but already used 10 of 10 instances\", \"code\": 413}}", @headers={"Retry-After"=>"0", "Content-Length"=>"121", "Content-Type"=>"application/json; charset=UTF-8", "X-Compute-Request-Id"=>"req-c5427ed2-62af-47b9-98a6-6f114893d8fc", "Date"=>"Tue, 05 Mar 2013 03:22:27 GMT"}, @status=413> (00:00:02)
This error is caused by an insufficient quota on the corresponding OpenStack project.

Solution: raise the project's quota in OpenStack (CPU count, memory, disk, IPs, and so on can all be increased generously; a quota sketch follows at the end of this subsection), then delete the old deployment:

bosh deployments  

bosh delete deployment <name>

Also roll back whatever the previous deploy created; otherwise the next deploy fails complaining that some VM cannot be found. If problems remain after rolling back, check whether a cached yml file is left over, and whether the instances table (and the other instance-related tables) in the nova database still contain rows that were never deleted.
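Raising the compute quotas with the legacy nova client might look roughly like this (tenant id and numbers are placeholders; flag spellings vary slightly between client versions):

# inspect the current limits for the project
nova quota-show --tenant <tenant_id>
# raise instance, vCPU, RAM, and floating-IP limits
nova quota-update --instances 20 --cores 40 --ram 40960 --floating-ips 20 <tenant_id>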

2.2 RateLimit error while deploying BOSH

Error 100: Expected([200, 202]) <=> Actual(413 Request Entity Too Large)
  request => {:connect_timeout=>60, :headers=>{"Content-Type"=>"application/json", "X-Auth-Token"=>"23bc718661d54252aba2d9c348c264e3", "Host"=>"10.68.19.61:8774", "Content-Length"=>44}, :instrumentor_name=>"excon", :mock=>false, :nonblock=>true, :read_timeout=>60, :retry_limit=>4, :ssl_ca_file=>"/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/excon-0.16.2/data/cacert.pem", :ssl_verify_peer=>true, :write_timeout=>60, :host=>"10.68.19.61", :path=>"/v2/8cf196acd0494fb0bc8d04e47ff77893/servers/046ac2d3-09b5-4abe-ab61-64b33d1348e1/action.json", :port=>"8774", :query=>"ignore_awful_caching1366685706", :scheme=>"http", :body=>"{\"addFloatingIp\":{\"address\":\"10.68.19.132\"}}", :expects=>[200, 202], :method=>"POST"}
  response => #<Excon::Response:0x000000044862d8 @body="{\"overLimit\": {\"message\": \"This request was rate-limited.\", \"code\": 413, \"retryAfter\": \"2\", \"details\": \"Only 10 POST request(s) can be made to * every minute.\"}}", @headers={"Retry-After"=>"2", "Content-Length"=>"161", "Content-Type"=>"application/json; charset=UTF-8", "Date"=>"Tue, 23 Apr 2013 02:55:06 GMT"}, @status=413>

The OpenStack documentation lists the following default limits:

Table 4.15. Default API Rate Limits

+-------------+-----------------+------------------------+----------------+
| HTTP method | API URI         | API regular expression | Limit          |
+-------------+-----------------+------------------------+----------------+
| POST        | any URI (*)     | .*                     | 10 per minute  |
| POST        | /servers        | ^/servers              | 50 per day     |
| PUT         | any URI (*)     | .*                     | 10 per minute  |
| GET         | *changes-since* | .*changes-since.*      | 3 per minute   |
| DELETE      | any URI (*)     | .*                     | 100 per minute |
+-------------+-----------------+------------------------+----------------+

Solution (edit the nova configuration /etc/nova/api-paste.ini on every compute node), using either of the two approaches below:

1. Remove the Compute API rate-limiting configuration

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_volume_api_v1]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_volume_app_v1
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_volume_app_v1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1

Simply remove the ratelimit filter from each pipeline, as shown below.
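After the edit, the compute pipelines would read, for example (ratelimit dropped, everything else unchanged):

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2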

2. Or raise the rate-limit values: under [filter:ratelimit], add a limits setting and adjust the numbers, as in the following:

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)

After either change, restart the nova-api service (a sketch follows). See the official OpenStack documentation: http://docs.openstack.org/havana/config-reference/content/configuring-compute-API.html
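On an Ubuntu-style install the restart is simply (service name and init system may differ by distribution):

sudo service nova-api restart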

2.3 Director communication error while deploying BOSH

When bosh deploy reaches the step that updates the director job, the deploy can fail because of a communication error with the VM:

Updating job director
  director/0 (canary) (00:01:58)
Done             1/1 00:01:58

Error 400007: `director/0' is not running after update

Task 3 error
Running bosh vms from the command line shows:

Deployment `bosh-openstack'

Director task 5

Task 5 done

+----------------------+---------+---------------+--------------------------+
| Job/index            | State   | Resource Pool | IPs                      |
+----------------------+---------+---------------+--------------------------+
| blobstore/0          | running | small         | 50.50.0.23               |
| director/0           | failing | small         | 50.50.0.20, 10.68.19.132 |
| health_monitor/0     | running | small         | 50.50.0.21               |
| nats/0               | running | small         | 50.50.0.19               |
| openstack_registry/0 | running | small         | 50.50.0.22, 10.68.19.133 |
| postgres/0           | running | small         | 50.50.0.17               |
| powerdns/0           | running | small         | 50.50.0.14, 10.68.19.131 |
| redis/0              | running | small         | 50.50.0.18               |
+----------------------+---------+---------------+--------------------------+

VMs total: 8
The director job is failing. Connect to the director VM and its logs show:

/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:26:in `initialize': getaddrinfo: Name or service not known (SocketError)
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:26:in `new'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:26:in `block in connect'
        from /var/vcap/data/packages/ruby/2.1/lib/ruby/1.9.1/timeout.rb:57:in `timeout'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:124:in `with_timeout'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/connection/ruby.rb:25:in `connect'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:204:in `establish_connection'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:23:in `connect'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:224:in `ensure_connected'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:114:in `block in process'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:191:in `logging'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:113:in `process'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis/client.rb:38:in `call'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis.rb:150:in `block in get'
        from /var/vcap/data/packages/ruby/2.1/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/redis-2.2.0/lib/redis.rb:149:in `get'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:425:in `job'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:357:in `unregister_worker'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:145:in `ensure in work'
        from /var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:145:in `work'
        from /var/vcap/packages/director/bosh/director/bin/worker:77:in `<main>'

Solution (run these on the director VM's command line):

monit stop director
monit start director
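These commands print nothing on success. To watch the director processes come back, you can poll monit's status (standard monit commands):

# one-line state of every process monit manages on this VM
monit summary
# detailed state, including pid and uptime
monit status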

Wait a moment, then go back to the BOSH CLI machine and check the VM states:

Deployment `bosh-openstack'
Director task 6

Task 6 done

+----------------------+---------+---------------+--------------------------+
| Job/index            | State   | Resource Pool | IPs                      |
+----------------------+---------+---------------+--------------------------+
| blobstore/0          | running | small         | 50.50.0.23               |
| director/0           | running | small         | 50.50.0.20, 10.68.19.132 |
| health_monitor/0     | running | small         | 50.50.0.21               |
| nats/0               | running | small         | 50.50.0.19               |
| openstack_registry/0 | running | small         | 50.50.0.22, 10.68.19.133 |
| postgres/0           | running | small         | 50.50.0.17               |
| powerdns/0           | running | small         | 50.50.0.14, 10.68.19.131 |
| redis/0              | running | small         | 50.50.0.18               |
+----------------------+---------+---------------+--------------------------+

VMs total: 8

The director job is now running normally. At this point, redeploy BOSH and everything works; note that there is no need to delete the existing deployments, just run bosh deploy again:

root@bosh-cli:/var/vcap/deployments# bosh deploy
Getting deployment properties from director...
Compiling deployment manifest...
Please review all changes carefully
Deploying `bosh.yml' to `microbosh-openstack' (type 'yes' to continue): yes

Director task 9

Preparing deployment
  binding deployment (00:00:00)
  binding releases (00:00:00)
  binding existing deployment (00:00:00)
  binding resource pools (00:00:00)
  binding stemcells (00:00:00)
  binding templates (00:00:00)
  binding properties (00:00:00)
  binding unallocated VMs (00:00:00)
  binding instance networks (00:00:00)
Done             9/9 00:00:00

Preparing package compilation
  finding packages to compile (00:00:00)
Done             1/1 00:00:00

Preparing DNS
  binding DNS (00:00:00)
Done             1/1 00:00:00

Preparing configuration
  binding configuration (00:00:01)
Done             1/1 00:00:01

Updating job blobstore
  blobstore/0 (canary) (00:02:31)
Done             1/1 00:02:31

Updating job openstack_registry
  openstack_registry/0 (canary) (00:01:44)
Done             1/1 00:01:44

Updating job health_monitor
  health_monitor/0 (canary) (00:01:43)
Done             1/1 00:01:43

Task 9 done
Started         2013-05-21 05:17:01 UTC
Finished        2013-05-21 05:23:00 UTC
Duration        00:05:59

Deployed `bosh.yml' to `microbosh-openstack'

Problem solved!


3. Problems deploying CloudFoundry

3.1 Problems uploading the CloudFoundry release package

Run the upload command:

 bosh upload release /var/vcap/releases/cf-release/CF_Release_VF-131.1-dev.tgz
It fails with the following error:
E, [2013-05-24T06:38:59.646076 #24082] [task:1] ERROR -- : Could not create object, 400/
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/blobstore_client-0.5.0/lib/blobstore_client/simple_blobstore_client.rb:30:in `create_file'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/blobstore_client-0.5.0/lib/blobstore_client/base.rb:30:in `create'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `block in create_blob'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `open'
/var/vcap/packages/director/bosh/director/lib/director/blob_util.rb:9:in `create_blob'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:373:in `create_package'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:291:in `block (2 levels) in create_packages'
/var/vcap/packages/director/bosh/director/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:289:in `block in create_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:286:in `each'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:286:in `create_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:272:in `process_packages'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:131:in `process_release'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:48:in `block in perform'
/var/vcap/packages/director/bosh/director/lib/director/lock_helper.rb:47:in `block in with_release_lock'
/var/vcap/packages/director/bosh/director/lib/director/lock.rb:58:in `lock'
/var/vcap/packages/director/bosh/director/lib/director/lock_helper.rb:47:in `with_release_lock'
/var/vcap/packages/director/bosh/director/lib/director/jobs/update_release.rb:48:in `perform'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:98:in `perform_job'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:29:in `block in run'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/bosh_common-0.5.4/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/bosh/director/lib/director/job_runner.rb:29:in `run'
/var/vcap/packages/director/bosh/director/lib/director/jobs/base_job.rb:8:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/job.rb:127:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:163:in `perform'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:130:in `block in work'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:116:in `loop'
/var/vcap/packages/director/bosh/director/vendor/bundle/ruby/1.9.1/gems/resque-1.15.0/lib/resque/worker.rb:116:in `work'
/var/vcap/packages/director/bosh/director/bin/worker:77:in `<main>'
The director cannot create a file on the blobstore VM managed by BOSH. A likely cause is disk space, so SSH to the blobstore VM and check the size of the mount backing /var/vcap/store:

root@1154e252-382e-4cf7-bb2d-09adbc97a954:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1             1.3G  956M  273M  78% /
none                  241M  168K  241M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   52K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
/dev/vdb2              20G  246M   18G   2% /var/vcap/data
/dev/loop0            124M  5.6M  118M   5% /tmp
There is no separate mount for /var/vcap/store, so it lives on the root filesystem /dev/vda1, which is only 1.3G. The listing above was taken after deleting the previously uploaded content; before the cleanup, /dev/vda1 was at 100% usage.

Solution:

1. Create a /dev/vda2 partition of about 20G (size it to your situation) and mount it at /var/vcap/store. Copy the existing files out before mounting and copy them back in afterwards; the commands are sketched after the listing below. The result:

root@1154e252-382e-4cf7-bb2d-09adbc97a954:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1             1.3G  956M  273M  78% /
none                  241M  168K  241M   1% /dev
none                  247M     0  247M   0% /dev/shm
none                  247M   52K  247M   1% /var/run
none                  247M     0  247M   0% /var/lock
none                  247M     0  247M   0% /lib/init/rw
/dev/vdb2              20G  246M   18G   2% /var/vcap/data
/dev/loop0            124M  5.6M  118M   5% /tmp
/dev/vda2              19G  1.8G   16G  11% /var/vcap/store
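The partitioning and mounting steps are roughly the following sketch (it assumes free space on /dev/vda and an ext4 filesystem; a reboot or partprobe may be needed before the kernel sees the new partition):

# create /dev/vda2 in the free space (interactive)
fdisk /dev/vda
# put a filesystem on the new partition
mkfs.ext4 /dev/vda2
# save the current contents, mount, then copy them back
mkdir /tmp/store-backup
cp -a /var/vcap/store/. /tmp/store-backup/
mount /dev/vda2 /var/vcap/store
cp -a /tmp/store-backup/. /var/vcap/store/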
2. Delete the release:
root@bosh-cli:/var/vcap/deployments# bosh releases

+---------------+-----------+
| Name          | Versions  |
+---------------+-----------+
| CF_Release_VF | 131.1-dev |
+---------------+-----------+

Releases total: 1
root@bosh-cli:/var/vcap/deployments# bosh delete release CF_Release_VF
3. Run the release upload command again.

3.2 Error uploading the cf-services release package

The error:

root@bosh-cli:~/src/cloudfoundry/cf-services-release# bosh upload release
Upload release `CF-VF-Service-Release-0.1-dev.yml' to `bosh' (type 'yes' to continue): yes

Copying packages
----------------
ruby (0.1-dev)                SKIP
libyaml (0.1-dev)             SKIP
mysql (0.1-dev)               FOUND LOCAL
ruby_next (0.1-dev)           SKIP
postgresql_node (0.1-dev)     FOUND LOCAL
mysqlclient (0.1-dev)         SKIP
syslog_aggregator (0.1-dev)   FOUND LOCAL
mysql_gateway (0.1-dev)       FOUND LOCAL
postgresql92 (0.1-dev)        FOUND LOCAL
mysql55 (0.1-dev)             FOUND LOCAL
postgresql_gateway (0.1-dev)  FOUND LOCAL
postgresql91 (0.1-dev)        FOUND LOCAL
mysql_node (0.1-dev)          FOUND LOCAL
postgresql (0.1-dev)          FOUND LOCAL
sqlite (0.1-dev)              SKIP
common (0.1-dev)              SKIP

....

Release info
------------
Name:    CF-VF-Service-Release
Version: 0.1-dev

Packages
  - ruby (0.1-dev)
  - libyaml (0.1-dev)
  - mysql (0.1-dev)
  - ruby_next (0.1-dev)
  - postgresql_node (0.1-dev)
  - mysqlclient (0.1-dev)
  - syslog_aggregator (0.1-dev)
  - mysql_gateway (0.1-dev)
  - postgresql92 (0.1-dev)
  - mysql55 (0.1-dev)
  - postgresql_gateway (0.1-dev)
  - postgresql91 (0.1-dev)
  - mysql_node (0.1-dev)
  - postgresql (0.1-dev)
  - sqlite (0.1-dev)
  - common (0.1-dev)

Jobs
  - mysql_node_external (0.1-dev)
  - postgresql_node (0.1-dev)
  - mysql_gateway (0.1-dev)
  - postgresql_gateway (0.1-dev)
  - mysql_node (0.1-dev)
  - rds_mysql_gateway (0.1-dev)

....

Director task 20

Extracting release
  extracting release (00:00:08)
Done                    1/1 00:00:08

Verifying manifest
  verifying manifest (00:00:00)
Done                    1/1 00:00:00

Resolving package dependencies
  resolving package dependencies (00:00:00)
Done                    1/1 00:00:00

Creating new packages
  ruby/0.1-dev: Could not fetch object, 404/ (00:00:01)
Error                   1/16 00:00:01

Error 100: Could not fetch object, 404/

Task 20 error

E, [2013-07-22T02:45:32.218762 #16460] [task:20] ERROR -- : Could not fetch object, 404/
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.5.0.pre.3/lib/blobstore_client/dav_blobstore_client.rb:48:in `get_file'
/var/vcap/packages/director/gem_home/gems/blobstore_client-1.5.0.pre.3/lib/blobstore_client/base.rb:50:in `get'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:16:in `block (2 levels) in copy_blob'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:15:in `open'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:15:in `block in copy_blob'
/var/vcap/packages/ruby/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/blob_util.rb:13:in `copy_blob'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:352:in `create_package'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:283:in `block (2 levels) in create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:281:in `block in create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:278:in `each'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:278:in `create_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:264:in `process_packages'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:134:in `process_release'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:47:in `block in perform'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock_helper.rb:47:in `block in with_release_lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock.rb:58:in `lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/lock_helper.rb:47:in `with_release_lock'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/update_release.rb:47:in `perform'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:98:in `perform_job'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:29:in `block in run'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_runner.rb:29:in `run'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/jobs/base_job.rb:8:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/job.rb:125:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:186:in `perform'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:149:in `block in work'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `loop'
/var/vcap/packages/director/gem_home/gems/resque-1.23.1/lib/resque/worker.rb:128:in `work'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/bin/worker:74:in `<top (required)>'
/var/vcap/packages/director/bin/worker:23:in `load'
/var/vcap/packages/director/bin/worker:23:in `<main>'
First analysis: the ruby package is marked SKIP because BOSH detects that the same version of the package was uploaded earlier, so it skips re-uploading it. But why can that package not be found when it is fetched later? After much thought, the cause was still unclear. Then, recalling that uploaded packages are stored on the blobstore VM while the upload metadata lives elsewhere, I checked the blobstore's state: running, apparently fine. Suddenly it clicked: the blobstore VM had been rebooted earlier after a network failure, and a check showed that the /dev/vda2 partition mounted at /var/vcap/store was gone. Problem found; the fix:

1. Remount /dev/vda2 at /var/vcap/store.

2. Configure the blobstore VM to mount /dev/vda2 automatically at boot.
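A sketch of both steps (the fstab line assumes the ext4 filesystem created in section 3.1):

# remount the store partition
mount /dev/vda2 /var/vcap/store
# make the mount survive reboots
echo "/dev/vda2 /var/vcap/store ext4 defaults 0 2" >> /etc/fstab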

Run the upload command again; everything works.

3.3 Errors when deploying CloudFoundry services

Deploying the services creates a large number of VMs and volumes, so the OpenStack environment's own quotas can cause errors. The error below occurs because Cinder's default quota is 10 volumes; as soon as more than 10 volumes are created, the request fails.

The error as seen in BOSH:

E, [2013-08-22T08:45:38.099667 #8647] [task:323] ERROR -- : OpenStack API Request Entity Too Large error. Check task debug log for details.
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:20:in `cloud_error'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:39:in `rescue in with_openstack'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/helpers.rb:25:in `with_openstack'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/cloud.rb:361:in `block in create_disk'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/bosh_openstack_cpi-1.5.0.pre.3/lib/cloud/openstack/cloud.rb:342:in `create_disk'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:377:in `block in update_persistent_disk'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:338:in `_transaction'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:300:in `block in transaction'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/query.rb:293:in `transaction'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:376:in `update_persistent_disk'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:73:in `block in update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:39:in `step'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/instance_updater.rb:73:in `update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:63:in `block (5 levels) in update'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_formatter.rb:46:in `with_thread_name'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:60:in `block (4 levels) in update'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/event_log.rb:58:in `track'
/var/vcap/packages/director/gem_home/gems/director-1.5.0.pre.3/lib/director/job_updater.rb:59:in `block (3 levels) in update'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:83:in `call'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:83:in `block (2 levels) in create_thread'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:67:in `loop'
/var/vcap/packages/director/gem_home/gems/bosh_common-1.5.0.pre.3/lib/common/thread_pool.rb:67:in `block in create_thread'

The error as seen in OpenStack:

2013-08-22 16:44:57 ERROR nova.api.openstack [req-75ffcfe3-34ea-4e8d-ab5f-685a890d4378 57be829ed997455f9600a4f46f7dbbef 8cf196acd0494fb0bc8d04e47ff77893] Caught error: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: req-30e6e7d6-313d-46b1-9522-b3b20dd3e2ab)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack Traceback (most recent call last):
2013-08-22 16:44:57 24345 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 78, in __call__
2013-08-22 16:44:57 24345 TRACE nova.api.openstack     return req.get_response(self.application)
......
2013-08-22 16:44:57 24345 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 109, in request
2013-08-22 16:44:57 24345 TRACE nova.api.openstack     raise exceptions.from_response(resp, body)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack OverLimit: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: req-30e6e7d6-313d-46b1-9522-b3b20dd3e2ab)
2013-08-22 16:44:57 24345 TRACE nova.api.openstack
Solution: raise the Cinder quota and restart the Cinder services:

cinder quota-update <tenant_id> --volumes 20
cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
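To confirm the new limit is in place (standard cinder client command; the tenant id is a placeholder):

cinder quota-show <tenant_id>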
Problem solved.


