
Installing and Configuring OpenStack on Ubuntu 12.04: Swift

Date: 2015-01-11    Source: 服务器在线    Contributor: 泡泡

1. System Preparation

Install Ubuntu 12.04 (the official documentation uses Ubuntu 10.04 as the OS, but 12.04 works as well).

Do a minimal install; the SSH server is the only extra package you need. Then bring the system up to date:

apt-get update && apt-get -y dist-upgrade

2. Environment and Component Setup

People often fail when following install guides, usually because they changed an IP address or one of the guide's default passwords. To make this guide more flexible, we set a few environment variables up front.

2.1 Setting Environment Variables

You can change the admin password and the MySQL password to match your environment. Throughout this guide, all service and database passwords are the same, so editing novarc is all that is needed.

Run the command below first, then edit novarc as needed:

cat >/root/novarc <<EOF
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export FIXED_RANGE=10.0.0.0/24
export FLOATING_RANGE=$(/sbin/ifconfig eth0 | awk '/inet addr/ {print $2}' | cut -f2 -d ":" | awk -F "." '{print $1"."$2"."$3}').224/27
export OS_AUTH_URL="http://localhost:5000/v2.0/"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=$(openssl rand -hex 10)
export MASTER="$(/sbin/ifconfig eth0 | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
EOF
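The FLOATING_RANGE and MASTER lines above parse the machine's address out of `ifconfig` output. Here is a small sketch of what that pipeline does, run against a sample line in the format Ubuntu 12.04's `ifconfig eth0` prints (the address 10.1.199.17 is just an example):

```shell
# Hypothetical sample line from `ifconfig eth0` on Ubuntu 12.04
LINE="          inet addr:10.1.199.17  Bcast:10.1.199.255  Mask:255.255.255.0"

# Field 2 of the matching line is "addr:10.1.199.17"; cut keeps the part after ":"
IP=$(echo "$LINE" | awk '/inet addr/ {print $2}' | cut -f2 -d ":")
echo "$IP"                                        # 10.1.199.17

# FLOATING_RANGE keeps only the first three octets, then appends ".224/27"
PREFIX=$(echo "$IP" | awk -F "." '{print $1"."$2"."$3}')
echo "$PREFIX.224/27"                             # 10.1.199.224/27
```

If your interface is not eth0, substitute its name in both novarc lines.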

Adjust these values to your needs.

The resulting novarc looks like this:

# cat novarc 
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export FIXED_RANGE=10.0.0.0/24
export FLOATING_RANGE=10.1.199.224/27
export OS_AUTH_URL="http://localhost:5000/v2.0/"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=d5d892e6de00a922f9fb
export MASTER="10.1.199.17"

After confirming the contents (or editing them), run:

source novarc
echo "source novarc">>.bashrc
2.2 MySQL

Among the OpenStack components, Nova, Keystone, and Glance all need a database, so we create the corresponding databases and users:

Database    User        Password
mysql       root        password
nova        nova        password
glance      glance      password
keystone    keystone    password

Preseed MySQL for unattended installation:

cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.5 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.5 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.5 mysql-server/start_on_boot boolean true
MYSQL_PRESEED

OpenStack is written in Python, so you also need python-mysqldb. Thanks to the preseed above, the install will not prompt for a root password:

apt-get install -y mysql-server python-mysqldb

Edit /etc/mysql/my.cnf to allow network access to MySQL:

#bind-address           = 127.0.0.1
bind-address            = 0.0.0.0

Or simply run:

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
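This sed call is a blunt instrument: it rewrites every occurrence of 127.0.0.1 in my.cnf, which in a stock file only appears in the bind-address line. A sketch of the effect on that one line:

```shell
# The only line the substitution is meant to hit in a stock my.cnf
echo "bind-address            = 127.0.0.1" | sed 's/127.0.0.1/0.0.0.0/g'
# prints: bind-address            = 0.0.0.0
```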

Restart the MySQL service:

service mysql restart

Create the databases and grants:

mysql -uroot -p$MYSQL_PASS <<EOF
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$MYSQL_PASS';
FLUSH PRIVILEGES;
EOF
2.3 Keystone

Keystone is the core of OpenStack: every component authenticates and gets authorization through it.

Tenant     User      Password
admin      admin     password
service    nova      password
service    glance    password
service    swift     password

apt-get install -y keystone python-keystone python-keystoneclient

Edit /etc/keystone/keystone.conf. Two changes are needed:

- The default admin_token is ADMIN; replace it with the randomly generated token (check novarc for the value).

- The default backend is SQLite; change the connection string to MySQL.

[DEFAULT]
#bind_host = 0.0.0.0
public_port = 5000
admin_port = 35357
#admin_token = ADMIN
admin_token = d5d892e6de00a922f9fb
[sql]
#connection = sqlite:////var/lib/keystone/keystone.db
connection = mysql://keystone:password@10.1.199.17/keystone

Hand-editing this file is a common source of mistakes, so it is safer to apply the changes with the following commands instead:

sed -i "s/admin_token = ADMIN/admin_token = $SERVICE_TOKEN/g" /etc/keystone/keystone.conf
sed -i '/connection = .*/{s|sqlite:///.*|mysql://'"keystone"':'"$MYSQL_PASS"'@'"$MASTER"'/keystone|g}' /etc/keystone/keystone.conf
service keystone restart

Sync the Keystone database:

keystone-manage db_sync

The Keystone database needs initial data and endpoints. You can import them step by step on the command line (see the keystone*** reference).

For convenience, the following two scripts do all of that setup:

Keystone Data

wget http://www.chenshake.com/wp-content/uploads/2012/07/keystone_data.sh_.txt
mv keystone_data.sh_.txt keystone_data.sh
bash keystone_data.sh

No output means success. You can verify with:

echo $?

A 0 means the script ran correctly. Do not run the script a second time.

Endpoint import

wget http://www.chenshake.com/wp-content/uploads/2012/07/endpoints.sh_.txt
mv endpoints.sh_.txt endpoints.sh
bash endpoints.sh

Note that this script assumes Glance and Swift are installed on the same server. If Glance lives on a different server, you need to adjust the endpoints, which can be done directly in the database.

You can test the setup with curl. The general form of the command is:

curl -d '{"auth": {"tenantName": "adminTenant", "passwordCredentials":\
{"username": "adminUser", "password": "secretword"}}}' -H "Content-type:\
application/json" http://IP:35357/v2.0/tokens | python -mjson.tool

Substitute your own values:

curl -d '{"auth": {"tenantName": "admin", "passwordCredentials":{"username": "admin", "password": "password"}}}' -H "Content-type:application/json" http://$MASTER:35357/v2.0/tokens | python -mjson.tool

This returns a token valid for 24 hours. (Note: the scripts above do not create a demo user, so you cannot test with a demo account.)

 "token": {
            "expires": "2012-09-27T02:09:37Z", 
            "id": "c719448800214e189da04772c2f75e23", 
            "tenant": {
                "description": null, 
                "enabled": true, 
                "id": "dc7ca2e51139457dada2d0f7a3719476", 
                "name": "admin"
            }
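If you only need the token id out of that reply, a sed one-liner is enough; the response string below is a shortened, hypothetical version of the real JSON:

```shell
# Shortened, hypothetical token response (the real reply has many more fields)
RESPONSE='{"access": {"token": {"expires": "2012-09-27T02:09:37Z", "id": "c719448800214e189da04772c2f75e23"}}}'

# Grab the 32-character hex id that follows the "id" key
TOKEN=$(echo "$RESPONSE" | sed -n 's/.*"id": "\([0-9a-f]\{32\}\)".*/\1/p')
echo "$TOKEN"    # c719448800214e189da04772c2f75e23
```

For anything more than a quick check, pipe the full reply through `python -mjson.tool` as shown above.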

The following commands check that Keystone is set up correctly:

root@node17:~# keystone user-list
+----------------------------------+---------+----------------------+--------+
|                id                | enabled |        email         |  name  |
+----------------------------------+---------+----------------------+--------+
| 1189d15892d24e00827e707bd2b7ab07 | True    | admin@chenshake.com  | admin  |
| cca4a4ed1e8842db99239dc98fb1617f | True    | glance@chenshake.com | glance |
| daccc34eacc7493989cd13df93e7f6bc | True    | swift@chenshake.com  | swift  |
| ee57b02c535d44f48943de13831da232 | True    | nova@chenshake.com   | nova   |
+----------------------------------+---------+----------------------+--------+
root@node17:~# keystone endpoint-list
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+------------------------------------------+
|                id                |   region  |                   publicurl                   |                  internalurl                  |                 adminurl                 |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+------------------------------------------+
| 0b04e1baac1a4c9fb07490e0911192cf | RegionOne | http://10.1.199.17:5000/v2.0 | http://10.1.199.17:5000/v2.0 | http://10.1.199.17:35357/v2.0 |
| 0d3315627d52419fa08095f9def5d7e4 | RegionOne | http://10.1.199.17:8776/v1/%(tenant_id)s | http://10.1.199.17:8776/v1/%(tenant_id)s | http://10.1.199.17:8776/v1/%(tenant_id)s |
| 1c92290cba9f4a278b42dbdf2802096c | RegionOne | http://10.1.199.17:9292/v1 | http://10.1.199.17:9292/v1 | http://10.1.199.17:9292/v1 |
| 56fe83ce20f341d99fc576770c275586 | RegionOne | http://10.1.199.17:8774/v2/%(tenant_id)s | http://10.1.199.17:8774/v2/%(tenant_id)s | http://10.1.199.17:8774/v2/%(tenant_id)s |
| 5fb51aae00684e56818869918f86b564 | RegionOne | http://10.1.199.17:8080/v1/AUTH_%(tenant_id)s | http://10.1.199.17:8080/v1/AUTH_%(tenant_id)s | http://10.1.199.17:8080/v1 |
| aaac7663872d493b85d9e583329be9ed | RegionOne | http://10.1.199.17:8773/services/Cloud | http://10.1.199.17:8773/services/Cloud | http://10.1.199.17:8773/services/Admin |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+------------------------------------------+

You can inspect the results with:

keystone tenant-list
keystone user-list
keystone role-list

3. Installing the Swift Components

This part of the guide is a single-node install that integrates Swift with Keystone and Glance. Swift uses a dedicated partition to simulate its storage devices.

3.1 Install the packages

apt-get -y install swift swift-proxy swift-account swift-container swift-object \
    xfsprogs curl python-pastedeploy

When I installed the system, I reserved a dedicated partition for Swift. Before formatting it, unmount it first:

umount /dev/sda6

3.2 Format the partition

mkfs.xfs -f -i size=1024 /dev/sda6

Create the mount point:

mkdir /mnt/swift_backend

Edit /etc/fstab: comment out the original UUID-based entry for this partition and add:

/dev/sda6 /mnt/swift_backend xfs noatime,nodiratime,nobarrier,logbufs=8 0 0

Check that the change is correct:

mount -a

If fstab contains an error, mount will report it; otherwise the partition is mounted.

3.3 Directory setup

pushd /mnt/swift_backend

mkdir node1 node2 node3 node4

popd

chown swift.swift /mnt/swift_backend/*

for i in {1..4}; do sudo ln -s /mnt/swift_backend/node$i /srv/node$i; done;

mkdir -p /etc/swift/account-server \
         /etc/swift/container-server \
         /etc/swift/object-server \
         /srv/node1/device \
         /srv/node2/device \
         /srv/node3/device \
         /srv/node4/device

mkdir /run/swift

chown -L -R swift.swift /etc/swift /srv/node[1-4]/ /run/swift
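The node-directory-plus-symlink layout above can be rehearsed safely in a throwaway tree first. This sketch mirrors the same structure under a mktemp directory instead of /mnt and /srv:

```shell
# Rehearse the backend-directory/symlink layout in a temporary tree
TMP=$(mktemp -d)
mkdir -p "$TMP/swift_backend" "$TMP/srv"
for i in 1 2 3 4; do
  mkdir "$TMP/swift_backend/node$i"
  ln -s "$TMP/swift_backend/node$i" "$TMP/srv/node$i"
done
ls "$TMP/srv"    # lists node1 .. node4, each a symlink into swift_backend
```

The real layout works the same way: /srv/nodeN is only a pointer into the XFS-backed /mnt/swift_backend/nodeN.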

So that the Swift services can start at boot, add the following two lines to /etc/rc.local, before the "exit 0" line:

sudo mkdir /run/swift

sudo chown swift.swift /run/swift

3.4 Configure rsync

Enable rsync in /etc/default/rsync:

sed -i 's/RSYNC_ENABLE=false/RSYNC_ENABLE=true/g' /etc/default/rsync

Create /etc/rsyncd.conf:

cat > /etc/rsyncd.conf <<EOF

# General stuff

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /run/rsyncd.pid

address = 127.0.0.1

 

# Account Server replication settings

[account6012]

max connections = 25

path = /srv/node1/

read only = false

lock file = /run/lock/account6012.lock

 

[account6022]

max connections = 25

path = /srv/node2/

read only = false

lock file = /run/lock/account6022.lock

 

[account6032]

max connections = 25

path = /srv/node3/

read only = false

lock file = /run/lock/account6032.lock

 

[account6042]

max connections = 25

path = /srv/node4/

read only = false

lock file = /run/lock/account6042.lock

 

# Container server replication settings

 

[container6011]

max connections = 25

path = /srv/node1/

read only = false

lock file = /run/lock/container6011.lock

 

[container6021]

max connections = 25

path = /srv/node2/

read only = false

lock file = /run/lock/container6021.lock

 

[container6031]

max connections = 25

path = /srv/node3/

read only = false

lock file = /run/lock/container6031.lock

 

[container6041]

max connections = 25

path = /srv/node4/

read only = false

lock file = /run/lock/container6041.lock

 

# Object Server replication settings

 

[object6010]

max connections = 25

path = /srv/node1/

read only = false

lock file = /run/lock/object6010.lock

 

[object6020]

max connections = 25

path = /srv/node2/

read only = false

lock file = /run/lock/object6020.lock

 

[object6030]

max connections = 25

path = /srv/node3/

read only = false

lock file = /run/lock/object6030.lock

 

[object6040]

max connections = 25

path = /srv/node4/

read only = false

lock file = /run/lock/object6040.lock

EOF

Restart the rsync service:

service rsync restart

4. Swift

4.1 The Swift config file

cat >/etc/swift/swift.conf <<EOF

[swift-hash]

# random unique string that can never change (DO NOT LOSE)

swift_hash_path_suffix = `od -t x8 -N 8 -A n</dev/random`

EOF
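Because the heredoc above is unquoted, the backticks are expanded when the file is written, so swift.conf ends up with one fixed random suffix. Here is a sketch of what the od command produces; note that it reads /dev/urandom instead of /dev/random purely to avoid blocking on low entropy (that substitution is mine, not the guide's):

```shell
# 8 random bytes, printed as a single 16-digit hex word with no address column
SUFFIX=$(od -t x8 -N 8 -A n </dev/urandom | tr -d ' \n')
echo "$SUFFIX"    # e.g. 9f3c2a1d5e6b7081 (different every run)
```

Whatever value lands in swift_hash_path_suffix must never change afterwards, or Swift will look for objects in the wrong partitions.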

4.2  Proxy Server 

Create /etc/swift/proxy-server.conf:

cat > /etc/swift/proxy-server.conf <<EOF

[DEFAULT]

bind_port = 8080

#bind_port = 443

#cert_file = /etc/swift/cert.crt

#key_file = /etc/swift/cert.key

workers = 8

user = swift

log_facility = LOG_LOCAL1

 

 

[pipeline:main]

pipeline = catch_errors healthcheck cache authtoken keystone proxy-server

 

[app:proxy-server]

use = egg:swift#proxy

account_autocreate = true

 

[filter:healthcheck]

use = egg:swift#healthcheck

 

[filter:cache]

use = egg:swift#memcache

memcache_servers = 127.0.0.1:11211

 

[filter:keystone]

paste.filter_factory = keystone.middleware.swift_auth:filter_factory

operator_roles = Member,admin

 

[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

service_port = 5000

service_host = $MASTER

auth_port = 35357

auth_host = $MASTER

auth_protocol = http

auth_token = $SERVICE_TOKEN

admin_token = $SERVICE_TOKEN

admin_tenant_name = service

admin_user = swift

admin_password = $SERVICE_PASSWORD

cache = swift.cache

 

[filter:catch_errors]

use = egg:swift#catch_errors

 

[filter:swift3]

use = egg:swift#swift3

EOF

4.3 Account Server, Container Server, Object Server

This part is repetitive, so a script takes care of it:

for x in {1..4}; do

cat > /etc/swift/account-server/$x.conf <<EOF

[DEFAULT]

devices = /srv/node$x

mount_check = false

bind_port = 60${x}2

user = swift

log_facility = LOG_LOCAL2

 

[pipeline:main]

pipeline = account-server

 

[app:account-server]

use = egg:swift#account

 

[account-replicator]

vm_test_mode = no

 

[account-auditor]

 

[account-reaper]

EOF

 

 

cat >/etc/swift/container-server/$x.conf <<EOF

[DEFAULT]

devices = /srv/node$x

mount_check = false

bind_ip = 0.0.0.0

bind_port = 60${x}1

user = swift

log_facility = LOG_LOCAL2

 

[pipeline:main]

pipeline = container-server

 

[app:container-server]

use = egg:swift#container

 

[container-replicator]

vm_test_mode = no

 

[container-updater]

 

[container-auditor]

 

[container-sync]

EOF

 

 

cat > /etc/swift/object-server/${x}.conf <<EOF

[DEFAULT]

devices = /srv/node${x}

mount_check = false

bind_port = 60${x}0

user = swift

log_facility = LOG_LOCAL2

 

[pipeline:main]

pipeline = object-server

 

[app:object-server]

use = egg:swift#object

 

[object-replicator]

vm_test_mode = no

 

[object-updater]

 

[object-auditor]

EOF

done

cat <<EOF >>/etc/swift/container-server.conf
[container-sync]
EOF

sed -i 's/LOCAL2/LOCAL3/g' /etc/swift/account-server/2.conf
sed -i 's/LOCAL2/LOCAL4/g' /etc/swift/account-server/3.conf
sed -i 's/LOCAL2/LOCAL5/g' /etc/swift/account-server/4.conf
sed -i 's/LOCAL2/LOCAL3/g' /etc/swift/container-server/2.conf
sed -i 's/LOCAL2/LOCAL4/g' /etc/swift/container-server/3.conf
sed -i 's/LOCAL2/LOCAL5/g' /etc/swift/container-server/4.conf
sed -i 's/LOCAL2/LOCAL3/g' /etc/swift/object-server/2.conf
sed -i 's/LOCAL2/LOCAL4/g' /etc/swift/object-server/3.conf
sed -i 's/LOCAL2/LOCAL5/g' /etc/swift/object-server/4.conf
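The bind_port arithmetic in the script (60${x}2, 60${x}1, 60${x}0) encodes both the node number and the server type in the port. This loop, which is purely illustrative, prints the resulting map:

```shell
# Port scheme: 60<node><type>, where type 0 = object, 1 = container, 2 = account
for x in 1 2 3 4; do
  echo "node$x: object=60${x}0 container=60${x}1 account=60${x}2"
done
```

These are exactly the ports the rsyncd.conf sections and the ring-builder commands below refer to.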

4.4  Ring Server 

pushd /etc/swift

swift-ring-builder object.builder create 18 3 1

swift-ring-builder container.builder create 18 3 1

swift-ring-builder account.builder create 18 3 1

swift-ring-builder object.builder add z1-127.0.0.1:6010/device 1

swift-ring-builder object.builder add z2-127.0.0.1:6020/device 1

swift-ring-builder object.builder add z3-127.0.0.1:6030/device 1

swift-ring-builder object.builder add z4-127.0.0.1:6040/device 1

swift-ring-builder object.builder rebalance

swift-ring-builder container.builder add z1-127.0.0.1:6011/device 1

swift-ring-builder container.builder add z2-127.0.0.1:6021/device 1

swift-ring-builder container.builder add z3-127.0.0.1:6031/device 1

swift-ring-builder container.builder add z4-127.0.0.1:6041/device 1

swift-ring-builder container.builder rebalance

swift-ring-builder account.builder add z1-127.0.0.1:6012/device 1

swift-ring-builder account.builder add z2-127.0.0.1:6022/device 1

swift-ring-builder account.builder add z3-127.0.0.1:6032/device 1

swift-ring-builder account.builder add z4-127.0.0.1:6042/device 1

swift-ring-builder account.builder rebalance
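In `swift-ring-builder <file> create 18 3 1`, the 18 is the partition power (the ring has 2^18 partitions), 3 is the replica count, and 1 is the minimum number of hours before a given partition may be moved again. The partition count can be sanity-checked with shell arithmetic:

```shell
# Number of partitions implied by a part power of 18
PART_POWER=18
echo $((1 << PART_POWER))    # 262144
```

The trailing 1 on each `add` line is the device weight; equal weights spread partitions evenly across the four devices.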

4.5 Start the services

Set directory ownership:

chown -R swift.swift /etc/swift

Start the Swift services:

swift-init main start

swift-init rest start

The -K option supplies the password of the swift user:

swift -v -V 2.0 -A http://127.0.0.1:5000/v2.0/ -U service:swift -K $SERVICE_PASSWORD stat

 

StorageURL: http://10.1.199.17:8080/v1/AUTH_a8b0b44cb5db4da39b053eabac6d3ed7

Auth Token: 3f85c92d6860444e90bf0e1bedc4b45a

   Account:AUTH_a8b0b44cb5db4da39b053eabac6d3ed7

Containers: 0

   Objects: 0

     Bytes: 0

Accept-Ranges: bytes

X-Trans-Id: txea28887460ff4f1d84e9e826e5514711

You can also simply run `swift stat`. Because we exported the environment variables, this queries Swift as tenant/user admin/admin:

swift stat

   Account:AUTH_eb68709e74314aa59c449510a91f8d56

Containers: 0

   Objects: 0

     Bytes: 0

Accept-Ranges: bytes

X-Trans-Id: txc5a3afa7f228471698c96fd561830a3d

4.6 Integrating Glance with Swift (we are not using this part for now)

Edit /etc/glance/glance-api.conf:

#default_store = file

default_store = swift

 

#swift_store_auth_address = 127.0.0.1:35357/v2.0/

swift_store_auth_address = http://10.1.199.8:5000/v2.0/

 

#swift_store_user = jdoe:jdoe

swift_store_user = service:swift

 

#swift_store_key = a86850deb2742ec3cb41518e26aa2d89

swift_store_key = password

 

#swift_store_create_container_on_put = False

swift_store_create_container_on_put = True

1. swift_store_auth_address must keep the http:// prefix, or authentication will fail.

2. swift_store_key is, as I understand it, the Swift password, i.e. the password of user swift in tenant service.

You can apply all the changes with one command:

sed -i "/default_store/s/file/swift/;/swift_store_auth_address/s/127.0.0.1:35357/$MASTER:5000/;/swift_store_user/s/jdoe:jdoe/service:swift/;/swift_store_key/s/a86850deb2742ec3cb41518e26aa2d89/$SERVICE_PASSWORD/;/swift_store_create_container_on_put/s/False/True/" /etc/glance/glance-api.conf
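That one-liner chains five substitutions, each anchored to the line containing a given option name. Its effect can be previewed on a few representative lines from a stock glance-api.conf (the variable-based substitutions are left out of this sketch, since their replacement text depends on your novarc):

```shell
# Feed sample default lines through three of the same anchored substitutions
printf '%s\n' \
  'default_store = file' \
  'swift_store_user = jdoe:jdoe' \
  'swift_store_create_container_on_put = False' |
sed '/default_store/s/file/swift/;/swift_store_user/s/jdoe:jdoe/service:swift/;/swift_store_create_container_on_put/s/False/True/'
# prints:
#   default_store = swift
#   swift_store_user = service:swift
#   swift_store_create_container_on_put = True
```

The `/pattern/s/old/new/` form only rewrites lines matching the pattern, so unrelated occurrences of "file" or "False" elsewhere in the config are left alone.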

Restart the Glance services:

service glance-api restart && service glance-registry restart

From this point on, images are stored in Swift. You can also upload files from the dashboard, and snapshots go to Swift as well.

swift -V 2 -A http://$MASTER:5000/v2.0 -U service:swift -K $SERVICE_PASSWORD stat

swift -V 2 -A http://$MASTER:5000/v2.0 -U service:swift -K $SERVICE_PASSWORD list

The commands above let you inspect the uploaded images.

Before uploading an image:

# swift -V 2 -A http://$MASTER:5000/v2.0 -U service:swift -K $SERVICE_PASSWORD stat

   Account:AUTH_678c42aa31114faeb18add84615b4e83

Containers: 0

   Objects: 0

     Bytes: 0

Accept-Ranges: bytes

X-Trans-Id: tx72707ce7086c4bf0bc72ff7ec2813a27

# swift -V 2 -A http://$MASTER:5000/v2.0 -U service:swift -K $SERVICE_PASSWORD list

After uploading an image:

# swift -V 2 -A http://$MASTER:5000/v2.0 -U service:swift -K $SERVICE_PASSWORD stat

   Account:AUTH_678c42aa31114faeb18add84615b4e83

Containers: 1

   Objects: 0

     Bytes: 0

Accept-Ranges: bytes

X-Trans-Id: tx65d1d1ee502b4960839f8196b76813f6

# swift -V 2 -A http://$MASTER:5000/v2.0 -U service:swift -K $SERVICE_PASSWORD list

glance

Here -V 2 selects Keystone authentication; the IP is the Keystone node's IP; service:swift is tenant:user; -K supplies the password.

swift -V 2 -A http://$MASTER:5000/v2.0 -U admin:admin -K $OS_PASSWORD upload test \
    /root/CentOS-6.2-x86_64-bin-DVD1.iso

5. Turning the Single-Node Deployment into a Working Multi-Proxy Cluster

Multiple proxy nodes greatly improve the system's reliability and availability. Below, we continue from the deployment above and modify it to turn the single-proxy setup into a multi-proxy cluster.

5.1 Ring Server changes

The main work is to add every storage node to the rings, and to change the layout so that one physical machine is one zone. The Ring Server section therefore becomes:

pushd /etc/swift

swift-ring-builder object.builder create 18 3 1

swift-ring-builder container.builder create 18 3 1

swift-ring-builder account.builder create 18 3 1

swift-ring-builder object.builder remove z1-127.0.0.1:6010/device
swift-ring-builder object.builder remove z2-127.0.0.1:6020/device
swift-ring-builder object.builder remove z3-127.0.0.1:6030/device
swift-ring-builder object.builder remove z4-127.0.0.1:6040/device
swift-ring-builder object.builder rebalance

swift-ring-builder container.builder remove z1-127.0.0.1:6011/device
swift-ring-builder container.builder remove z2-127.0.0.1:6021/device
swift-ring-builder container.builder remove z3-127.0.0.1:6031/device
swift-ring-builder container.builder remove z4-127.0.0.1:6041/device
swift-ring-builder container.builder rebalance

swift-ring-builder account.builder remove z1-127.0.0.1:6012/device
swift-ring-builder account.builder remove z2-127.0.0.1:6022/device
swift-ring-builder account.builder remove z3-127.0.0.1:6032/device
swift-ring-builder account.builder remove z4-127.0.0.1:6042/device
swift-ring-builder account.builder rebalance

On node3, add the storage devices of node1, node2, node3, and node4 to the rings:

pushd /etc/swift

swift-ring-builder account.builder create 18 3 1

swift-ring-builder container.builder create 18 3 1

swift-ring-builder object.builder create 18 3 1

 

swift-ring-builder object.builder add z1-192.168.0.201:6010/device 1
swift-ring-builder object.builder add z1-192.168.0.201:6020/device 1
swift-ring-builder object.builder add z1-192.168.0.201:6030/device 1
swift-ring-builder object.builder add z1-192.168.0.201:6040/device 1

swift-ring-builder object.builder add z2-192.168.0.202:6010/device 1
swift-ring-builder object.builder add z2-192.168.0.202:6020/device 1
swift-ring-builder object.builder add z2-192.168.0.202:6030/device 1
swift-ring-builder object.builder add z2-192.168.0.202:6040/device 1

swift-ring-builder object.builder add z3-192.168.0.203:6010/device 1
swift-ring-builder object.builder add z3-192.168.0.203:6020/device 1
swift-ring-builder object.builder add z3-192.168.0.203:6030/device 1
swift-ring-builder object.builder add z3-192.168.0.203:6040/device 1

swift-ring-builder object.builder add z4-192.168.0.204:6010/device 1
swift-ring-builder object.builder add z4-192.168.0.204:6020/device 1
swift-ring-builder object.builder add z4-192.168.0.204:6030/device 1
swift-ring-builder object.builder add z4-192.168.0.204:6040/device 1

swift-ring-builder object.builder rebalance

swift-ring-builder container.builder add z1-192.168.0.201:6011/device 1
swift-ring-builder container.builder add z1-192.168.0.201:6021/device 1
swift-ring-builder container.builder add z1-192.168.0.201:6031/device 1
swift-ring-builder container.builder add z1-192.168.0.201:6041/device 1

swift-ring-builder container.builder add z2-192.168.0.202:6011/device 1
swift-ring-builder container.builder add z2-192.168.0.202:6021/device 1
swift-ring-builder container.builder add z2-192.168.0.202:6031/device 1
swift-ring-builder container.builder add z2-192.168.0.202:6041/device 1

swift-ring-builder container.builder add z3-192.168.0.203:6011/device 1
swift-ring-builder container.builder add z3-192.168.0.203:6021/device 1
swift-ring-builder container.builder add z3-192.168.0.203:6031/device 1
swift-ring-builder container.builder add z3-192.168.0.203:6041/device 1

swift-ring-builder container.builder add z4-192.168.0.204:6011/device 1
swift-ring-builder container.builder add z4-192.168.0.204:6021/device 1
swift-ring-builder container.builder add z4-192.168.0.204:6031/device 1
swift-ring-builder container.builder add z4-192.168.0.204:6041/device 1

swift-ring-builder container.builder rebalance

swift-ring-builder account.builder add z1-192.168.0.201:6012/device 1
swift-ring-builder account.builder add z1-192.168.0.201:6022/device 1
swift-ring-builder account.builder add z1-192.168.0.201:6032/device 1
swift-ring-builder account.builder add z1-192.168.0.201:6042/device 1

swift-ring-builder account.builder add z2-192.168.0.202:6012/device 1
swift-ring-builder account.builder add z2-192.168.0.202:6022/device 1
swift-ring-builder account.builder add z2-192.168.0.202:6032/device 1
swift-ring-builder account.builder add z2-192.168.0.202:6042/device 1

swift-ring-builder account.builder add z3-192.168.0.203:6012/device 1
swift-ring-builder account.builder add z3-192.168.0.203:6022/device 1
swift-ring-builder account.builder add z3-192.168.0.203:6032/device 1
swift-ring-builder account.builder add z3-192.168.0.203:6042/device 1

swift-ring-builder account.builder add z4-192.168.0.204:6012/device 1
swift-ring-builder account.builder add z4-192.168.0.204:6022/device 1
swift-ring-builder account.builder add z4-192.168.0.204:6032/device 1
swift-ring-builder account.builder add z4-192.168.0.204:6042/device 1

swift-ring-builder account.builder rebalance

5.2 Changes to proxy-server.conf

On all nodes, change the following in /etc/swift/proxy-server.conf:

memcache_servers = 192.168.0.201:11211,192.168.0.202:11211,192.168.0.203:11211,192.168.0.204:11211

On node1, node2, and node4, update the authentication settings in /etc/swift/proxy-server.conf:

[filter:authtoken]

paste.filter_factory = keystone.middleware.auth_token:filter_factory

service_port = 5000

service_host = 192.168.0.203

auth_port = 35357

auth_host = 192.168.0.203

auth_protocol = http

auth_token = 374d1f82f50e9f1ab45e

admin_token = 374d1f82f50e9f1ab45e

admin_tenant_name = service

admin_user = swift

admin_password = password

cache = swift.cache

5.3 Other steps

Copy node3's /etc/swift/swift.conf to /etc/swift on node1, node2, and node4.

Also copy the following files from node3's /etc/swift/:

account.builder, container.builder, object.builder, account.ring.gz, container.ring.gz, object.ring.gz

to /etc/swift on node1, node2, and node4.

The above assumes the following layout:

node1 = 192.168.0.201

node2 = 192.168.0.202

node3 = 192.168.0.203

node4 = 192.168.0.204

node3 is the authentication node (running Keystone) and also serves as both a proxy node and a storage node.

6. Load Balancing the Multi-Proxy Cluster with Nginx

6.1 Prerequisites

Nginx requires PCRE and the OpenSSL library.

Install PCRE on Ubuntu:

# apt-get update

# apt-get install libpcre3 libpcre3-dev

Install the OpenSSL development headers:

sudo apt-get install libssl-dev

6.2 Download Nginx

cd /usr/local/src/

wget -S http://nginx.org/download/nginx-1.2.6.tar.gz

6.3 Create a user

useradd -c "Nginx User" -s /sbin/nologin -r -d /var/lib/nginx nginx

6.4 Build and install Nginx

tar xzvf nginx-1.2.6.tar.gz

cd nginx-1.2.6/

./configure \
  --user=nginx \
  --group=nginx \
  --prefix=/usr/share \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/log/run/nginx.pid \
  --lock-path=/var/log/lock/subsys/nginx \
  --with-http_stub_status_module \
  --without-poll_module \
  --with-http_gzip_static_module \
  --with-http_realip_module \
  --with-http_ssl_module

 

make

make install

6.5 Edit /etc/nginx/nginx.conf

cat /etc/nginx/nginx.conf | grep -v '^$' | grep -v '#'

worker_processes  1;

events {

    worker_connections 1024;

}

http {

   include       mime.types;

   default_type  application/octet-stream;

   sendfile        on;

   keepalive_timeout  65;

       client_max_body_size 6024M;      # limit on client upload size

       proxy_ignore_client_abort on;    # avoids 499 errors: the proxy does not close client connections early

       upstream swift {

       server 192.168.100.7:8080;         # IPs of the Swift proxy nodes

       server 192.168.100.8:8080;

        }

   server {

       listen       8080;

       server_name  swift;             # host name

       location / {

           proxy_pass http://swift/;    # must match the upstream name

       }

       error_page   500 502 503 504  /50x.html;

       location = /50x.html {

           root   html;

       }

    }

}

6.6 Start Nginx

/usr/sbin/nginx

netstat -ltunp | grep 8080

 

Note: only the Nginx server's address should appear in the Keystone endpoints; the individual proxy nodes do not need entries of their own.

Caution: the deployment above creates 4 zones inside a single partition; a better deployment uses one large partition per zone. See http://www.openstack.org.cn/bbs/forum.php?mod=viewthread&tid=264

FAQ:

1. Q: Running the keystone data script fails with "No handlers could be found for logger "keystoneclient.client"".

A: Usually the database privileges are not set up correctly, or (more commonly) leftovers from a previous install of one of the earlier components are interfering. My experience: first purge MySQL and Keystone completely (apt-get purge mysql-*; apt-get autoremove; apt-get purge keystone python-keystone python-keystoneclient), then redo the corresponding install steps from the beginning.

2. Q: Logging in to the dashboard fails with "Internal Server Error". How do I fix it?

A: With the deployment above, log in as user swift to see the objects we uploaded (logging in as admin shows the contents of the admin project instead), with password "password". If you get "Internal Server Error":

(1) Make sure the URL you enter points to the server running Keystone;

(2) In /etc/openstack-dashboard/local_settings.py, set CACHE_BACKEND to point at the Keystone host, i.e.:

CACHE_BACKEND = 'memcached://<keystone-host>:11211/'

(3) Restart the Apache server:

service apache2 restart

3. Q: Running the swift command fails with [Errno 111] ECONNREFUSED.

A: Check whether all the required services are actually running.

4. Q: After installing the dashboard as above, viewing containers and objects works and creating containers works, but uploading objects fails, and creating new containers with the swift command also fails.

A: In our experience, the IP of the host running Keystone (the proxy) must also be registered as an endpoint in the keystone database:

keystone endpoint-create --region RegionOne --service_id $ID \
  --publicurl "http://192.168.0.203:8080/v1/AUTH_%(tenant_id)s" \
  --adminurl "http://192.168.0.203:8080/v1" \
  --internalurl "http://192.168.0.203:8080/v1/AUTH_%(tenant_id)s"


