OpenStack

Installation Walkthrough

Project 3

image-20250310133403101

Project 4

4.3

4.3.1

Configuring Dual Network Adapters

Two network adapters were already added back in Project 1: one host-only and one NAT.

Later, however, when running the network-connection commands, the NIC could not be found, even though ip a still showed it.

As shown below:

image-20250313182159521

This is confusing at first: earlier, the book introduces

ip a add <ip>/24 dev ensXXX

which makes the NIC show up under ip a, but that is not a persistent configuration. To confirm, run
cd /etc/sysconfig/network-scripts

and you will find there is no ifcfg file for the host-only NIC you want to configure.

First, delete that connection:
nmcli conn delete 'name'

Here 'name' is one of the connection names reported by

nmcli con

After deletion it looks like this:

image-20250313183005245

You can see the name is the same but the UUID differs. (The extra entry below is from an additional step I did while following along; don't worry about it.)

Run the following command to add a connection profile:
nmcli conn add con-name ens160 ifname ens160 type ethernet

Modify the connection profile:
nmcli conn modify ens160 ipv4.method manual ipv4.address 192.168.212.129/24 autoconnect yes

Then restart the NIC. In my case, however, the assigned address came up as 192.168.212.128, which is easily fixed:
ip a del 192.168.212.128/24 dev ens160
ip a add 192.168.212.129/24 dev ens160

At this point the VM and the host can reach each other in both directions.

In host-only mode, the host talks to host-only VMs through the Ethernet adapter "VMware Network Adapter VMnet1", so the host and those VMs sit on the same subnet.

Project 5

5.2

5.2.1

Software Management on openEuler

Find which package provides ifconfig (the Repo column shows where it comes from):
yum provides ifconfig

List repository packages whose names match a string:
yum list net-*

Install net-tools:
yum -y install net-tools

image-20250306163355089

An error is reported: roughly, some downloaded packages conflict with ones already installed, and yum asks whether to replace them, skip them, or use the best candidates.

Here we choose to replace:
yum -y install net-tools --allowerasing

YUM Sources

List the YUM source configuration:
ls /etc/yum.repos.d

The .repo files are the YUM source configuration; editing them is how you switch to a different source.

5.2.2

Hostname Management and Name Resolution

Show the hostname:
hostname

Change the hostname:

hostnamectl set-hostname <hostname>

Local name resolution

Edit the configuration file:
vi /etc/hosts

Required format:

<ip>	<hostname>

Test connectivity by name:

ping <hostname>
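As a sketch of how the resolver uses this format, the following looks up a name in a hosts-style file with awk (a hypothetical temp file, not your real /etc/hosts; the entries are this guide's later node addresses):

```shell
# Build a sample hosts-format file (hypothetical copy, not /etc/hosts itself)
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.212.139 control
192.168.212.149 compute
EOF
# Resolve "compute" the way the format intends:
# first field is the IP, second field is the hostname
ip=$(awk '$2 == "compute" {print $1; exit}' "$hosts_file")
echo "$ip"   # 192.168.212.149
```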

5.2.3

Firewall Management

Check the firewall status:
systemctl status firewalld

Stop / start / enable at boot / disable at boot:
systemctl stop/start/enable/disable firewalld

5.2.4

OpenStack Base Support Services

Chrony time synchronization service

Configure the time service:
vi /etc/chrony.conf

Add this configuration line:
allow 192.168.211.0/24

Restart the service and enable it at boot:
systemctl restart chronyd
systemctl enable chronyd
Managing the time service:

chronyc <subcommand>

Check the client's connections to NTP servers:
chronyc sources

image-20250306172122111

Add and remove a Tencent NTP server:
chronyc add server time1.cloud.tencent.com
chronyc delete time1.cloud.tencent.com

image-20250306172948394

OpenStack Cloud Platform Framework

Install the OpenStack platform framework:

yum -y install openstack-release-train

Upgrade all packages:
yum upgrade -y --nobest --skip-broken

Install the platform client:
yum -y install python-openstackclient

MariaDB Database Service

Install the server and the Python connector:
yum -y install mariadb-server python-PyMySQL

编辑数据库配置文件

1
vi /etc/my.cnf.d/openstack.cnf

Add the following (bind-address is the control node's internal IP):
[mysqld]
bind-address = 192.168.212.139
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable and start the database:
systemctl enable mariadb
systemctl start mariadb

Run the secure initialization:
mysql_secure_installation

Set the root password and answer Y to every prompt.

Connect to the database (no space between each option and its value):

mysql -h<db ip> -u<user> -p<password>

RabbitMQ Message Queue Service

Install the RabbitMQ server:
yum -y install rabbitmq-server

Enable and start the service:
systemctl enable rabbitmq-server
systemctl start rabbitmq-server

Add a user:

rabbitmqctl add_user <user> <password>

Delete a user:

rabbitmqctl delete_user <user>

Change a user's password:

rabbitmqctl change_password <user> <new password>

Grant permissions (the three ".*" patterns are the configure, write, and read regexps):

rabbitmqctl set_permissions <user> ".*" ".*" ".*"

View a user's permissions:
rabbitmqctl list_user_permissions openstack

Check listening ports:
netstat -lntup

Memcached In-Memory Caching Service

Install the software:
yum -y install memcached python-memcached

Installation automatically creates a user named memcached:
cat /etc/passwd | grep memcached

Configure the service:
vi /etc/sysconfig/memcached

Fill in the following:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 0.0.0.0,::1"

Enable and start the service:
systemctl enable memcached
systemctl start memcached

Verify it is listening:
netstat -lntup | grep memcached

!!!

From this point on, I changed the NIC IP addresses as follows:

Node	Internal	External
control	192.168.212.139	192.168.211.139
compute	192.168.212.149	192.168.211.149

The book's node IPs are easy to mix up with mine, so for reference:

Node	Internal	External
control	192.168.10.10	192.168.20.10
compute	192.168.10.20	192.168.20.20

etcd Distributed Key-Value Store

Install etcd (control node):
yum -y install etcd

Configure the server:
vi /etc/etcd/etcd.conf

The modified file looks like this:
# [member]
ETCD_NAME=control
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.212.139:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.212.139:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.212.139:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="control=http://192.168.212.139:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.212.139:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""

Enable and start etcd:
systemctl enable etcd
systemctl start etcd

Check that it is listening:
netstat -lntup | grep etcd

Managing etcd

Store a key-value pair:

etcdctl put <key> <value>

Read a key back:

etcdctl get <key>

5.3

5.3.1

Cloning a VM to Create the Compute Node

In VMware, right-click the control node as shown:

image-20250404153847600image-20250404153934669

Choose Manage > Clone and select a full clone; nothing else needs changing.

Next, set the hostname, configure both NICs' IP addresses, add name resolution, and disable the firewall.

(compute node)

Change the hostname:
hostnamectl set-hostname compute

5.3.2

Dual NICs

The steps are fairly involved; see 4.3.1 for details.

5.3.3

Name Resolution

(do this on both nodes)

Edit the configuration file:
vi /etc/hosts

Add the following (these are the internal IPs):
192.168.212.139 control
192.168.212.149 compute

5.3.4

Disabling the System Firewall

Disable SELinux persistently:
vi /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled.
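The same edit can be done non-interactively with sed; a sketch rehearsed on a temporary copy rather than the real /etc/selinux/config:

```shell
# Sample of the relevant line from /etc/selinux/config (temp copy, not the real file)
conf=$(mktemp)
echo 'SELINUX=enforcing' > "$conf"
# The edit the text describes, done with sed instead of vi
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$conf"
cat "$conf"   # SELINUX=disabled
```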

Disable SELinux immediately:
setenforce 0

Stop and disable firewalld:
systemctl disable firewalld
systemctl stop firewalld

Verify connectivity: ping each node from the other.

5.3.5

Building a Local Software Repository

Configure the YUM source on the control node

Temporary mount

Download the resource image openStack-train.iso and upload it to /opt.

Create the directory openstack under /opt, then mount the ISO onto it:
mount /opt/openStack-train.iso /opt/openstack

Permanent mount:
vi /etc/fstab

Append this line at the end of the file:
/opt/openStack-train.iso /opt/openstack iso9660 defaults 0 0
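For reference, the six fstab fields in that line are device, mount point, filesystem type, mount options, dump flag, and fsck order; a commented sketch (the loop,ro variant is an optional alternative, not required by the text):

```
# <device>                <mount point>   <type>    <options>  <dump> <fsck>
/opt/openStack-train.iso  /opt/openstack  iso9660   defaults   0      0
# ISO images are read-only anyway; "loop,ro" also works and makes that explicit
```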

Back up the existing YUM sources:
cd /etc/yum.repos.d 
mkdir bak
mv *.repo bak
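A rehearsal of this backup step in a scratch directory (hypothetical .repo file names), to show exactly what mv *.repo bak does:

```shell
# Scratch directory standing in for /etc/yum.repos.d (hypothetical repo names)
d=$(mktemp -d)
cd "$d"
touch openEuler.repo EPOL.repo
mkdir bak
# Move every .repo file into the backup directory; the glob expands
# to all matching files in the current directory
mv *.repo bak
ls "$d/bak"
```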

Create a YUM source file pointing at the local mount:
vi /etc/yum.repos.d/OpenStack.repo

Fill in the following (the option is spelled enabled, not enable):

[OS]
name=OS
baseurl=ftp://control/openstack/OS
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=ftp://control/openstack/everything
enabled=1
gpgcheck=0

[EPOL]
name=EPOL
baseurl=ftp://control/openstack/EPOL
enabled=1
gpgcheck=0

[update]
name=update
baseurl=ftp://control/openstack/update
enabled=1
gpgcheck=0

[OpenStack_Train]
name=OpenStack_Train
baseurl=ftp://control/openstack/OpenStack_Train
enabled=1
gpgcheck=0

Clear the cache:
yum clean all

Rebuild the YUM cache:
yum makecache

Check that the sources are usable; output like the screenshot means success:
yum repolist

image-20250404161212714

Configure an FTP Server on the Control Node

Install vsftpd:
yum -y install vsftpd

Point the FTP root at the repository directory:
vi /etc/vsftpd/vsftpd.conf

Replace the whole file with the following:
# Example config file /etc/vsftpd/vsftpd.conf
#
# The default compiled in settings are fairly paranoid. This sample file
# loosens things up a bit, to make the ftp daemon more usable.
# Please see vsftpd.conf.5 for all compiled in defaults.
#
# READ THIS: This example file is NOT an exhaustive list of vsftpd options.
# Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
# capabilities.
#
# Allow anonymous FTP? (Beware - allowed by default if you comment this out).
anonymous_enable=YES
anon_root=/opt
#
# Uncomment this to allow local users to log in.
local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
write_enable=YES
#
# Default umask for local users is 077. You may wish to change this to 022,
# if your users expect that (022 is used by most other ftpd's)
local_umask=022
#
# Uncomment this to allow the anonymous FTP user to upload files. This only
# has an effect if the above global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
# When SELinux is enforcing check for SE bool allow_ftpd_anon_write, allow_ftpd_full_access
#anon_upload_enable=YES
#
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
#anon_mkdir_write_enable=YES
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=YES
#
# Activate logging of uploads/downloads.
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data).
connect_from_port_20=YES
#
# If you want, you can arrange for uploaded anonymous files to be owned by
# a different user. Note! Using "root" for uploaded files is not
# recommended!
#chown_uploads=YES
#chown_username=whoever
#
# You may override where the log file goes if you like. The default is shown
# below.
#xferlog_file=/var/log/xferlog
#
# If you want, you can have your log file in standard ftpd xferlog format.
# Note that the default log file location is /var/log/xferlog in this case.
xferlog_std_format=YES
#
# You may change the default value for timing out an idle session.
#idle_session_timeout=600
#
# You may change the default value for timing out a data connection.
#data_connection_timeout=120
#
# It is recommended that you define on your system a unique user which the
# ftp server can use as a totally isolated and unprivileged user.
#nopriv_user=ftpsecure
#
# Enable this and the server will recognise asynchronous ABOR requests. Not
# recommended for security (the code is non-trivial). Not enabling it,
# however, may confuse older FTP clients.
#async_abor_enable=YES
#
# By default the server will pretend to allow ASCII mode but in fact ignore
# the request. Turn on the below options to have the server actually do ASCII
# mangling on files when in ASCII mode. The vsftpd.conf(5) man page explains
# the behaviour when these options are disabled.
# Beware that on some FTP servers, ASCII support allows a denial of service
# attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
# predicted this attack and has always been safe, reporting the size of the
# raw file.
# ASCII mangling is a horrible feature of the protocol.
#ascii_upload_enable=YES
#ascii_download_enable=YES
#
# You may fully customise the login banner string:
#ftpd_banner=Welcome to blah FTP service.
#
# You may specify a file of disallowed anonymous e-mail addresses. Apparently
# useful for combatting certain DoS attacks.
#deny_email_enable=YES
# (default follows)
#banned_email_file=/etc/vsftpd/banned_emails
#
# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
# (Warning! chroot'ing can be very dangerous. If using chroot, make sure that
# the user does not have write access to the top level directory within the
# chroot)
#chroot_local_user=YES
#chroot_list_enable=YES
# (default follows)
#chroot_list_file=/etc/vsftpd/chroot_list
#
# You may activate the "-R" option to the builtin ls. This is disabled by
# default to avoid remote users being able to cause excessive I/O on large
# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
# the presence of the "-R" option, so there is a strong case for enabling it.
#ls_recurse_enable=YES
#
# When "listen" directive is enabled, vsftpd runs in standalone mode and
# listens on IPv4 sockets. This directive cannot be used in conjunction
# with the listen_ipv6 directive.
listen=NO
#
# This directive enables listening on IPv6 sockets. By default, listening
# on the IPv6 "any" address (::) will accept connections from both IPv6
# and IPv4 clients. It is not necessary to listen on *both* IPv4 and IPv6
# sockets. If you want that (perhaps because you want to listen on specific
# addresses) then you must run two copies of vsftpd with two configuration
# files.
# Make sure, that one of the listen options is commented !!
listen_ipv6=YES

pam_service_name=vsftpd
userlist_enable=YES

Enable and start the FTP service:
systemctl enable vsftpd
systemctl start vsftpd

Configure the YUM source on the compute node:
cd /etc/yum.repos.d
mkdir bak
mv *.repo bak

Create the source file:
vi OpenStack.repo

Fill in the following (again with enabled, not enable):

[OS]
name=OS
baseurl=ftp://control/openstack/OS
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=ftp://control/openstack/everything
enabled=1
gpgcheck=0

[EPOL]
name=EPOL
baseurl=ftp://control/openstack/EPOL
enabled=1
gpgcheck=0

[update]
name=update
baseurl=ftp://control/openstack/update
enabled=1
gpgcheck=0

[OpenStack_Train]
name=OpenStack_Train
baseurl=ftp://control/openstack/OpenStack_Train
enabled=1
gpgcheck=0

Clear the cache and rebuild it:
yum clean all
yum makecache

On the compute node, test that the sources are usable:
yum repolist

5.3.6

Taking System Snapshots

Snapshot both the control and compute nodes, so later mistakes can be rolled back.

5.3.7

Configuring Base Services on the control Node

!!! Only Chrony and the base framework must be done on BOTH nodes; everything else is control-node only.

Chrony time synchronization service

(control node)
vi /etc/chrony.conf

Add the following; note the IP is the internal network address, with the last octet set to 0:
local stratum 1
allow 192.168.212.0/24
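The allow value can be derived mechanically from the node IP: for a /24, keep the first three octets and zero the last. A sketch using this guide's control-node internal address:

```shell
# Internal address of the control node (from this guide)
ip=192.168.212.139
# For a /24 mask, the network address keeps the first three octets
net=$(echo "$ip" | awk -F. '{printf "%s.%s.%s.0/24", $1, $2, $3}')
echo "$net"   # 192.168.212.0/24
```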

Restart the service:
systemctl restart chronyd

(compute node)

Edit the configuration file:
vi /etc/chrony.conf

Delete the default sync server in the file and point at the control node instead:
pool pool.ntp.org iburst => server control iburst
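The pool-to-server swap can be scripted with sed; a sketch run against a temporary copy (not your real /etc/chrony.conf):

```shell
# Sample of the default line in /etc/chrony.conf (temp copy)
conf=$(mktemp)
echo 'pool pool.ntp.org iburst' > "$conf"
# Replace the default pool with the control node as the only time source
sed -i 's/^pool pool\.ntp\.org iburst/server control iburst/' "$conf"
cat "$conf"   # server control iburst
```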

Restart the service:
systemctl restart chronyd

Check the sync status; output like the screenshot means success:
chronyc sources

image-20250404164637865

OpenStack base framework

Install the framework package, then remove the repo file it drops in (we keep using the local source):
yum -y install openstack-release-train
rm -rf /etc/yum.repos.d/openstack-train.repo

Install the OpenStack client:
yum -y install python-openstackclient
MariaDB database:
yum -y install mariadb-server python-PyMySQL

Edit the configuration file:
vi /etc/my.cnf.d/openstack.cnf

Add the following:
[mysqld]
bind-address = 192.168.212.139
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable and start the database:
systemctl enable mariadb
systemctl start mariadb

Run the secure initialization:
mysql_secure_installation

Set the password and answer Y to every prompt.

Connect to the database:

mysql -h<db ip> -u<user> -p<password>
RabbitMQ message queue

Install the RabbitMQ server:
yum -y install rabbitmq-server

Enable and start it:
systemctl enable rabbitmq-server
systemctl start rabbitmq-server

Add a user and grant permissions:

rabbitmqctl add_user rabbitmq <password>
rabbitmqctl set_permissions rabbitmq ".*" ".*" ".*"

List users:
rabbitmqctl list_users

Check listening ports:
netstat -lntup
Memcached cache

Install the software:
yum -y install memcached python-memcached

Configure the service:
vi /etc/sysconfig/memcached

Fill in the following:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 0.0.0.0,::1"

Enable and start it:
systemctl enable memcached
systemctl start memcached

Verify it is listening:
netstat -lntup | grep 11211
etcd key-value store

Install etcd:
yum -y install etcd

Configure the server:
vi /etc/etcd/etcd.conf

The modified file looks like this (ETCD_NAME and the IPs now refer to this node):
# [member]
ETCD_NAME=compute
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.212.149:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.212.149:2379,http://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.212.149:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="compute=http://192.168.212.149:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.212.149:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""

Enable and start etcd:
systemctl enable etcd
systemctl start etcd

Check that it is listening:
netstat -lntup | grep etcd

Output like the screenshot means success:

image-20250404170800843

Project 6

6.3

Installing and Configuring Keystone

Before running the commands below on control, the YUM source has to be switched back.

Earlier we switched to the local source, but installing openstack-keystone needs packages the local source does not carry, so the original repos are needed again. Annoying, but that is how it is.

cd /etc/yum.repos.d
mv bak/*.repo .
mv OpenStack.repo bak/
yum clean all && yum makecache

Now the install can proceed:
yum -y install openstack-keystone httpd mod_wsgi

Create the Keystone database and grant privileges

Log in to the database, create the keystone database (this is required before the grants), and grant privileges:

mysql -uroot -p<password>

create database keystone;
grant all privileges on keystone.* to 'keystone'@'localhost' identified by '<password>';
grant all privileges on keystone.* to 'keystone'@'%' identified by '<password>';

Exit the database.

Edit the configuration file:
vi /etc/keystone/keystone.conf

The file is long, so use vi's search: type /<string> and press n to jump to the next match.

Search for database and modify this line:
connection = mysql+pymysql://keystone:<password>@control/keystone

Search for token and modify this line:
provider = fernet

Initialize the Keystone database:
su keystone -s /bin/sh -c "keystone-manage db_sync"

Enter the keystone database and check that the tables were created:

mysql -uroot -p<password>
use keystone;
show tables;
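The su keystone -s /bin/sh -c "…" pattern used above just runs one quoted command under a given shell as another user. The user switch needs root, but the -c mechanism itself can be sketched with plain sh:

```shell
# sh -c runs the quoted string as a one-off command; su delegates to
# exactly this after switching user (user switch omitted in this sketch)
out=$(/bin/sh -c 'echo db_sync-done')
echo "$out"   # db_sync-done
```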

6.3.2

Keystone Component Initialization

Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

After running the commands above, it looks like this:

image-20250410165531565

Bootstrap the identity service (set the password to your database password):

keystone-manage bootstrap --bootstrap-password <password> --bootstrap-admin-url http://control:5000/v3 --bootstrap-internal-url http://control:5000/v3 --bootstrap-public-url http://control:5000/v3 --bootstrap-region-id RegionOne

Edit the Apache configuration:
vi /etc/httpd/conf/httpd.conf

Search for ServerName and change it to:
ServerName control

Restart and enable the Apache service:
systemctl restart httpd && systemctl enable httpd

6.3.3

Simulated Login Verification

Create the environment variable file:
vi admin-login

Fill in the following:
export OS_USERNAME=admin
export OS_PASSWORD=<password>
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://control:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Load the environment and verify:
source admin-login
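What source admin-login does can be rehearsed with a throwaway file containing two of the variables from above (hypothetical values):

```shell
# Throwaway credentials file with the same shape as admin-login
f=$(mktemp)
cat > "$f" <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://control:5000/v3
EOF
# "source" (or ".") executes the file in the current shell,
# so the exported variables persist afterwards
. "$f"
echo "$OS_USERNAME"   # admin
```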

List the exported variables:
export -p

In a browser on the physical host, open:

http://<control ip>:5000/v3

Output like the following means success:

image-20250410172146919

6.3.4

Verifying the Keystone Service

Create a project:
openstack project create --domain default project

List existing projects:
openstack project list
Create a role:
openstack role create user

List existing roles:
openstack role list
List domains and users

List existing domains:
openstack domain list

List existing users:
openstack user list

Project 7

7.3

7.3.1

Installing and Configuring the Glance Image Service

Install Glance:
yum -y install openstack-glance

Check the created user:
cat /etc/passwd | grep glance

Check the created group:
cat /etc/group | grep glance
Create the Glance database:

mysql -uroot -p<password>

create database glance;

Grant privileges:

grant all privileges on glance.* to 'glance'@'localhost' identified by '<password>';
grant all privileges on glance.* to 'glance'@'%' identified by '<password>';

Exit the database:
quit
Modify the Glance Configuration

Back up the file:
cp /etc/glance/glance-api.conf /etc/glance/glance-api-bak.conf

Strip comments and blank lines:
grep -Ev '^$|#' /etc/glance/glance-api-bak.conf > /etc/glance/glance-api.conf
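What the grep -Ev '^$|#' filter keeps and drops, shown on a small sample file (hypothetical contents):

```shell
# Sample config with a comment, a blank line, and two real lines
sample=$(mktemp)
cat > "$sample" <<'EOF'
# a comment
[DEFAULT]

key = value
EOF
# -E enables the alternation, -v inverts the match:
# drop empty lines (^$) and any line containing '#'
grep -Ev '^$|#' "$sample"
```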

Edit the file:
vi /etc/glance/glance-api.conf

If you would rather not type everything, replace the file with the following:
[DEFAULT]
[cinder]
[cors]
[database]
connection=mysql+pymysql://glance:<password>@<hostname>/glance
[file]
[glance.store.http.store]
[glance.store.rbd.store]
[glance.store.sheepdog.store]
[glance.store.swift.store]
[glance.store.vmware_datastore.store]
[glance_store]
stores=file
default_store=file
filesystem_store_datadir=/var/lib/glance/images
[image_format]
[keystone_authtoken]
auth_url=http://<hostname>:5000
memcached_servers=<hostname>:11211
auth_type=password
username=glance
password=<password>
project_name=project
user_domain_name=Default
project_domain_name=Default
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor=keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

7.3.2

Glance Component Initialization

Create the glance user and assign the admin role:
cd ~
source admin-login
openstack user create --domain default --password <password> glance
openstack role add --project project --user glance admin
Create the Glance service and endpoints

Create the service:
openstack service create --name glance image

Create the endpoints:
openstack endpoint create --region RegionOne glance public http://control:9292
openstack endpoint create --region RegionOne glance internal http://control:9292
openstack endpoint create --region RegionOne glance admin http://control:9292
Enable and start the Glance service:
systemctl enable openstack-glance-api
systemctl start openstack-glance-api

7.3.3

Verifying the Glance Service

Check the port:
netstat -lntup |grep 9292
Check the service status:
systemctl status openstack-glance-api

image-20250417171357765

7.3.4

Creating an Image with Glance

There is a CirrOS image named cirros-0.5.1-x86_64-disk.img in the home directory:
cd ~
openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros

List images:
openstack image list

Check the image file on disk (ll is the common alias for ls -l):
ll /var/lib/glance/images

Project 8

8.3

8.3.1

Installing and Configuring the Placement Service

Install the Placement package:
yum -y install openstack-placement-api

Check the created user:
cat /etc/passwd |grep placement

Check the created group:
cat /etc/group |grep placement
Create the Placement database and grant privileges:

mysql -uroot -p<password>

create database placement;

Grant privileges:

grant all privileges on placement.* to 'placement'@'localhost' identified by '<password>';
grant all privileges on placement.* to 'placement'@'%' identified by '<password>';

Exit the database.

Modify the Placement Configuration

Back up the file:
cp /etc/placement/placement.conf /etc/placement/placement-bak.conf

Strip comments and blank lines:
grep -Ev '^$|#' /etc/placement/placement-bak.conf > /etc/placement/placement.conf

Edit the file:
vi /etc/placement/placement.conf

Replace with the following, adjusting values for your environment:
[DEFAULT]
[api]
auth_strategy=keystone
[cors]
[keystone_authtoken]
auth_url=http://<hostname>:5000
memcached_servers=<hostname>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=placement
password=<password>
[oslo_policy]
[placement]
[placement_database]
connection=mysql+pymysql://placement:<password>@<hostname>/placement
[profiler]
Initialize the Placement database:
su placement -s /bin/sh -c "placement-manage db sync"

Check that the tables were created:

mysql -uroot -p<password>
use placement;
show tables;

8.3.2

Placement Component Initialization

Create the placement user and assign the role. Load the admin environment:
source admin-login

Create the placement user:

openstack user create --domain default --password <password> placement

Assign the admin role:
openstack role add --project project --user placement admin
Create the Placement service and endpoints

Create the service:
openstack service create --name placement placement

Create the endpoints:
openstack endpoint create --region RegionOne placement public http://control:8778
openstack endpoint create --region RegionOne placement internal http://control:8778
openstack endpoint create --region RegionOne placement admin http://control:8778
Restart the service (Placement is served by httpd):
systemctl restart httpd

8.3.3

Verifying the Placement Service

Check the port:
netstat -lntup |grep 8778
Check the service endpoint:
curl http://control:8778

Project 9

9.3

9.3.1

Installing and Configuring Nova on the Control Node

Install Nova:
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy

Check the created user:
cat /etc/passwd | grep nova

Check the created group:
cat /etc/group | grep nova
Create the Nova databases and grant privileges:
mysql -uroot -p

Create the databases:
create database nova_api;
create database nova_cell0;
create database nova;

Grant privileges:

grant all privileges on nova_api.* to 'nova'@'localhost' identified by '<password>';
grant all privileges on nova_api.* to 'nova'@'%' identified by '<password>';
grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by '<password>';
grant all privileges on nova_cell0.* to 'nova'@'%' identified by '<password>';
grant all privileges on nova.* to 'nova'@'localhost' identified by '<password>';
grant all privileges on nova.* to 'nova'@'%' identified by '<password>';

Exit the database.

Modify the Nova Configuration

Back up the file:
cp /etc/nova/nova.conf /etc/nova/nova-bak.conf

Strip comments and blank lines:
grep -Ev '^$|#' /etc/nova/nova-bak.conf > /etc/nova/nova.conf

Edit the file:
vi /etc/nova/nova.conf

Replace with the following:
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://<rabbitmq user>:<rabbitmq password>@<hostname>:5672
my_ip=192.168.212.139
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:<password>@<hostname>/nova_api
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection=mysql+pymysql://nova:<password>@<hostname>/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://<hostname>:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url=http://<hostname>:5000
memcached_servers=<hostname>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=nova
password=<password>
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
auth_url=http://<hostname>:5000
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=placement
password=<password>
region_name=RegionOne
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
server_listen=$my_ip
server_proxyclient_address=$my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
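The transport_url in the file above is simply rabbit://<user>:<password>@<host>:5672 assembled from the RabbitMQ account created earlier; a sketch with hypothetical credentials:

```shell
# Hypothetical RabbitMQ credentials and host (substitute your own)
user=rabbitmq
pass=secret
host=control
url="rabbit://${user}:${pass}@${host}:5672"
echo "$url"   # rabbit://rabbitmq:secret@control:5672
```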
Initialize the Nova databases:
su nova -s /bin/sh -c "nova-manage api_db sync"
su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"
su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"
su nova -s /bin/sh -c "nova-manage db sync"

Verify that the cells are registered:
nova-manage cell_v2 list_cells

image-20250421130615112

9.3.2

Nova Component Initialization

Create the nova user and assign the role

Create the user:
source admin-login
openstack user create --domain default --password <password> nova

Assign the role:
openstack role add --project project --user nova admin
Create the service and endpoints

Create the service:
openstack service create --name nova compute

Create the endpoints:

openstack endpoint create --region RegionOne nova public http://<hostname>:8774/v2.1
openstack endpoint create --region RegionOne nova internal http://<hostname>:8774/v2.1
openstack endpoint create --region RegionOne nova admin http://<hostname>:8774/v2.1
Enable and start the Nova services on the control node:
systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

9.3.3

Verifying the Nova Service

Check the ports:
netstat -lntup |grep 877
List the compute services:
openstack compute service list

image-20250422171045605

9.3.4

Installing and Configuring Nova on the Compute Node

Install Nova:
yum -y install openstack-nova-compute

Check the created user:
cat /etc/passwd | grep nova

Check the created group:
cat /etc/group | grep nova
Modify the Nova Configuration

Back up the file:
cp /etc/nova/nova.conf /etc/nova/nova-bak.conf

Strip comments and blank lines:
grep -Ev '^$|#' /etc/nova/nova-bak.conf > /etc/nova/nova.conf

Edit the file:
vi /etc/nova/nova.conf

Replace with the following:
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://<rabbitmq user>:<rabbitmq password>@<hostname>:5672
my_ip=192.168.212.149
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver
instances_path=/var/lib/nova/instances/
[api]
auth_strategy=keystone
[api_database]
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://<hostname>:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url=http://<hostname>:5000
memcached_servers=<hostname>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=nova
password=<password>
[libvirt]
virt_type=qemu
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
auth_url=http://<hostname>:5000
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=placement
password=<password>
region_name=RegionOne
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
server_listen=0.0.0.0
server_proxyclient_address=$my_ip
novncproxy_base_url=http://192.168.212.139:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
Enable and start the Nova compute service:
systemctl enable libvirtd openstack-nova-compute
systemctl start libvirtd openstack-nova-compute

Switch to the control node and discover the new compute host:
source admin-login
su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"

Enable automatic host discovery by adding the second line below under [scheduler] in /etc/nova/nova.conf on the control node:
[scheduler]
discover_hosts_in_cells_interval=600

Restart the Nova API service:
systemctl restart openstack-nova-api

Check the service status:
openstack compute service list

image-20250422182806557

List the OpenStack services and endpoints:
openstack catalog list

image-20250422183218393

Run the Nova status check tool:
nova-status upgrade check

image-20250422183413257

Project 10

10.3

10.3.1

Installing and Configuring Neutron on the Control Node

Install the Neutron packages:
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Check the created user:
cat /etc/passwd | grep neutron

Check the created group:
cat /etc/group | grep neutron
Create the Neutron database and grant privileges:

mysql -uroot -p<password>

create database neutron;
grant all privileges on neutron.* to 'neutron'@'localhost' identified by '<password>';
grant all privileges on neutron.* to 'neutron'@'%' identified by '<password>';

Exit the database.

Modifying the Neutron Configuration Files

Configure the Neutron component

Back up the file:
cp /etc/neutron/neutron.conf /etc/neutron/neutron-bak.conf

Strip comments and blank lines:
grep -Ev '^$|#' /etc/neutron/neutron-bak.conf > /etc/neutron/neutron.conf

Edit the file:
vi /etc/neutron/neutron.conf

Replace with the following:
[DEFAULT]
core_plugin=ml2
service_plugins=
transport_url=rabbit://<rabbitmq user>:<password>@<hostname>
auth_strategy=keystone
notify_nova_on_port_status_changes=true
notify_nova_on_port_data_changes=true
[cors]
[database]
connection=mysql+pymysql://neutron:<password>@<hostname>/neutron
[keystone_authtoken]
auth_url=http://<hostname>:5000
memcached_servers=<hostname>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=neutron
password=<password>
[nova]
auth_url=http://<hostname>:5000
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=nova
password=<password>
region_name=RegionOne
[oslo_concurrency]
lock_path=/var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
修改二层模块插件配置文件

备份配置文件

cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf-bak.ini

删除空行和注释

grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf-bak.ini > /etc/neutron/plugins/ml2/ml2_conf.ini

编辑文件,替换以下内容

vi /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers=flat
tenant_network_types=
mechanism_drivers=linuxbridge
extension_drivers=port_security
[ml2_type_flat]
flat_networks=provider
[securitygroup]
enable_ipset=true
启用ML2插件

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
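这一步只是建立软链接,让 Neutron 以 /etc/neutron/plugin.ini 这个固定路径读到 ML2 配置。软链接的效果可以用临时目录做个通用演示(/tmp/neutron-demo 为演示用路径,非真实配置目录):

```shell
# 模拟插件配置目录结构
mkdir -p /tmp/neutron-demo/plugins/ml2
echo '[ml2]' > /tmp/neutron-demo/plugins/ml2/ml2_conf.ini
# 建立软链接:plugin.ini 指向 ml2_conf.ini(-f 允许重复执行)
ln -sf /tmp/neutron-demo/plugins/ml2/ml2_conf.ini /tmp/neutron-demo/plugin.ini
# readlink 显示链接目标;cat 读到的是目标文件的内容
readlink /tmp/neutron-demo/plugin.ini
cat /tmp/neutron-demo/plugin.ini
```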
修改网桥代理配置文件

备份配置文件

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent-bak.ini

删除空行和注释

grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent-bak.ini > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

编辑文件,替换以下内容

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings=provider:<Nat模式下的网卡名称,例如ens160>
[vxlan]
enable_vxlan=false
[securitygroup]
enable_security_group=true
firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
修改DHCP代理配置文件

备份配置文件

cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent-bak.ini

删除空行和注释

grep -Ev '^$|#' /etc/neutron/dhcp_agent-bak.ini > /etc/neutron/dhcp_agent.ini

编辑文件,替换以下内容

vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver=linuxbridge
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata=true
修改元数据代理配置文件

备份配置文件

cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent-bak.ini

删除空行和注释

grep -Ev '^$|#' /etc/neutron/metadata_agent-bak.ini > /etc/neutron/metadata_agent.ini

编辑文件,替换以下内容

vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host=<主机名>
metadata_proxy_shared_secret=METADATA_SECRET
[cache]
修改Nova配置文件

编辑文件,在[neutron]处添加以下内容,根据实际情况进行替换

vi /etc/nova/nova.conf
[neutron]
auth_url=http://<主机名>:5000
auth_type=password
project_domain_name=Default
user_domain_name=Default
region_name=RegionOne
project_name=project
username=neutron
password=<密码>
service_metadata_proxy=true
metadata_proxy_shared_secret=METADATA_SECRET
同步数据库

su neutron -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"

查看同步情况

mysql -uroot -p<密码>

use neutron;

show tables;

10.3.2

Neutron组件初始化

创建Neutron用户并分配角色

source admin-login

openstack user create --domain default --password <密码> neutron

openstack role add --project project --user neutron admin
创建Neutron服务及服务端点

openstack service create --name neutron network

openstack endpoint create --region RegionOne neutron public http://<主机名>:9696

openstack endpoint create --region RegionOne neutron internal http://<主机名>:9696

openstack endpoint create --region RegionOne neutron admin http://<主机名>:9696
启动控制节点上的Neutron服务

systemctl restart openstack-nova-api

systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

10.3.3

检测Neutron服务

netstat -lntup | grep 9696

curl http://<主机名>:9696

image-20250424164130027

10.3.4

安装与配置计算节点上的Neutron服务

安装Neutron软件

yum -y install openstack-neutron-linuxbridge

查看用户信息

cat /etc/passwd | grep neutron

查看用户组信息

cat /etc/group | grep neutron
修改Neutron配置文件

vi /etc/neutron/neutron.conf

替换以下内容

[DEFAULT]
transport_url=rabbit://rabbitmq:<密码>@<主机名>:5672
auth_strategy=keystone
[cors]
[database]
[keystone_authtoken]
auth_url=http://<主机名>:5000
memcached_servers=<主机名>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=neutron
password=<密码>
[oslo_concurrency]
lock_path=/var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
修改网桥代理的配置文件

备份配置文件

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent-bak.ini

删除空行和注释

grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent-bak.ini > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

编辑文件,替换以下内容

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings=provider:<Nat模式下的网卡名称,例如ens160>
[vxlan]
enable_vxlan=false
[securitygroup]
enable_security_group=true
firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
修改Nova配置文件

vi /etc/nova/nova.conf

在 [DEFAULT] 部分添加以下内容,根据自己情况替换

vif_plugging_is_fatal=false
vif_plugging_timeout=0

在 [neutron] 部分添加以下内容,根据实际情况替换占位符

[neutron]
auth_url=http://<主机名>:5000
auth_type=password
project_domain_name=Default
user_domain_name=Default
region_name=RegionOne
project_name=project
username=neutron
password=<密码>
启动Neutron服务

systemctl restart openstack-nova-compute

systemctl enable neutron-linuxbridge-agent

systemctl start neutron-linuxbridge-agent

10.3.5

检测Neutron服务

转到control主机进行操作

source admin-login

查看网络代理服务列表

openstack network agent list

用Neutron状态检测工具检测

neutron-status upgrade check

image-20250424181318200

项目十一

11.3

11.3.1

安装与配置Dashboard服务

安装Dashboard软件包

在compute节点上操作

yum -y install openstack-dashboard

编辑 /etc/openstack-dashboard/local_settings,配置以下行

ALLOWED_HOSTS = ['*']
OPENSTACK_HOST = "control"
TIME_ZONE = "Asia/Shanghai"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'control:11211',
    },
}

添加以下内容

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

设置网络,替换以下内容即可

OPENSTACK_NEUTRON_NETWORK = {
'enable_auto_allocated_network': False,
'enable_distributed_router': False,
'enable_fip_topology_check': False,
'enable_ha_router': False,
'enable_ipv6': False,
# TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
# enable_quotas has the different default value here.
'enable_quotas': False,
'enable_rbac_policy': False,
'enable_router': False,

'default_dns_nameservers': [],
'supported_provider_types': ['*'],
'segmentation_id_range': {},
'extra_provider_types': {},
'supported_vnic_types': ['*'],
'physical_networks': [],

}

11.3.2

重建Web应用配置文件

cd /usr/share/openstack-dashboard/

编译生成Dashboard的Web服务配置文件

python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

查看生成的配置文件

cat /etc/httpd/conf.d/openstack-dashboard.conf

建立策略文件的软链接

ls /etc/openstack-dashboard/

ln -s /etc/openstack-dashboard/ /usr/share/openstack-dashboard/openstack_dashboard/conf

查看网站目录

ll /usr/share/openstack-dashboard/openstack_dashboard/

启动Apache服务器,使配置生效

systemctl enable httpd

systemctl start httpd

11.3.3

检测Dashboard服务

在本地浏览器输入计算节点的IP地址。经试验,仅主机和NAT两种模式的IP地址都可以访问

域:Default
用户名:admin
密码:自己设置的密码

如果提示以下错误,是因为两台主机时间同步有误造成的

image-20250424190013371

都执行一下以下命令就可以了

systemctl restart chronyd

项目十二

12.3

12.3.1

安装与配置控制节点上的Cinder

安装Cinder软件

yum -y install openstack-cinder

查看用户信息

cat /etc/passwd | grep cinder

查看用户组信息

cat /etc/group | grep cinder
创建Cinder数据库并授权

mysql -uroot -p<密码>

create database cinder;

grant all privileges on cinder.* to 'cinder'@'localhost' identified by '<密码>';

grant all privileges on cinder.* to 'cinder'@'%' identified by '<密码>';

退出数据库

修改Cinder配置文件

备份配置文件

cp /etc/cinder/cinder.conf /etc/cinder/cinder-bak.conf

删除注释和空行

grep -Ev '^$|#' /etc/cinder/cinder-bak.conf > /etc/cinder/cinder.conf

编辑配置文件,替换以下内容,根据实际情况进行修改

vi /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy=keystone
transport_url=rabbit://<rabbitmq用户>:<密码>@<主机名>:5672
[barbican]
[cors]
[database]
connection=mysql+pymysql://cinder:<密码>@<主机名>/cinder
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_url=http://<主机名>:5000
memcached_servers=<主机名>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=cinder
password=<密码>
[oslo_concurrency]
lock_path=/var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[ssl]
[vault]

修改nova.conf文件

vi /etc/nova/nova.conf

在对应配置节中添加以下内容

[cinder]
os_region_name=RegionOne
初始化数据库

su cinder -s /bin/sh -c "cinder-manage db sync"

12.3.2

Cinder组件初始化

创建用户并分配角色

source admin-login

openstack user create --domain default --password <密码> cinder

openstack role add --project project --user cinder admin
创建Cinder服务以及端点

openstack service create --name cinderv3 volumev3

openstack endpoint create --region RegionOne volumev3 public http://control:8776/v3/%\(project_id\)s

openstack endpoint create --region RegionOne volumev3 internal http://control:8776/v3/%\(project_id\)s

openstack endpoint create --region RegionOne volumev3 admin http://control:8776/v3/%\(project_id\)s
启动控制节点上的Cinder

systemctl restart openstack-nova-api

systemctl enable openstack-cinder-api openstack-cinder-scheduler

systemctl start openstack-cinder-api openstack-cinder-scheduler

12.3.3

检查Cinder服务

查看端口占用情况

netstat -lntup | grep 8776

查看存储服务列表

openstack volume service list

12.3.4

搭建存储节点

为计算节点增加硬盘

在VMware的硬件设置里添加磁盘。这里需要注意:如果第一块硬盘选的是SCSI类型,第二块也可以继续加SCSI类型,和书上一致;但我的第一块是NVMe类型,第二块想加SCSI类型时一直加不上,所以第二块同样使用了NVMe类型。因此后续命令会有一些差异,根据自己情况替换即可

查看系统磁盘挂载情况

lsblk

image-20250428110036965

记住这个设备名称,后面替换即可

pvcreate /dev/<sdb或者nvme0n2>

vgcreate cinder-volumes /dev/<sdb或者nvme0n2>

编辑LVM配置文件

找到devices的配置项,添加

filter=["a/<sdb或者nvme0n2>/","r/.*/"]
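放在 /etc/lvm/lvm.conf 的 devices 配置项里,大致是下面这个样子(仅为示意草稿,设备名按实际替换;规则按顺序匹配:a 表示接受,r 表示拒绝,最后的 r/.*/ 拒绝其余所有设备被 LVM 扫描):

```
devices {
        # 只接受作为 cinder-volumes 卷组的数据盘
        filter = [ "a/<sdb或者nvme0n2>/", "r/.*/" ]
}
```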
安装和配置存储节点
安装软件包
yum -y install openstack-cinder targetcli python-keystone

备份配置文件

cp /etc/cinder/cinder.conf /etc/cinder/cinder-bak.conf

去除空行和注释

grep -Ev '^$|#' /etc/cinder/cinder-bak.conf > /etc/cinder/cinder.conf

编辑配置文件

vi /etc/cinder/cinder.conf

根据实际情况替换,分为两种类型配置

SCSI类型的
[DEFAULT]
transport_url=rabbit://<rabbitmq用户名>:<密码>@<主机名>:5672
glance_api_servers=http://<主机名>:9292
enabled_backends=lvm
auth_strategy=keystone
[barbican]
[cors]
[database]
connection=mysql+pymysql://cinder:<密码>@<主机名>/cinder
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_url=http://<主机名>:5000
memcached_servers=<主机名>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=cinder
password=<密码>
[lvm]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes
target_protocol=iscsi
target_helper=lioadm
[oslo_concurrency]
lock_path=/var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[ssl]
[vault]

NVME类型的

[DEFAULT]
transport_url=rabbit://<rabbitmq用户名>:<密码>@<主机名>:5672
glance_api_servers=http://<主机名>:9292
auth_strategy=keystone
enabled_backends=nvmeof
# 启用的卷驱动
volume_driver = cinder.volume.drivers.nvmeof.NVMeoFVolumeDriver
# 卷组名称,保持不变
volume_group = cinder-volumes
# 目标协议改为nvmet_rdma
target_protocol = nvmet_rdma
# 目标助手改为spdk-nvmeof
target_helper = spdk-nvmeof
[barbican]
[cors]
[database]
connection=mysql+pymysql://cinder:<密码>@<主机名>/cinder
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_url=http://<主机名>:5000
memcached_servers=<主机名>:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=project
username=cinder
password=<密码>
[NVMeoF]
# NVMe-oF服务监听的IP地址
target_ip_address = <control主机的ip地址,我用的是NAT的>
# NVMe-oF服务监听的端口
target_port = 4420
# 子系统NQN(Namespace Qualified Name)前缀
nqn_prefix = nqn.2014-08.org.nvmexpress:uuid
[oslo_concurrency]
lock_path=/var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[ssl]
[vault]
启动计算节点上的Cinder
systemctl enable openstack-cinder-volume target

systemctl start openstack-cinder-volume target

12.3.5

检验Cinder服务

查看存储列表

openstack volume service list

image-20250428121100084

12.3.6

用Cinder创建卷

命令模式

在控制节点上创建8GB的卷

source admin-login

openstack volume create --size 8 volume1

查看卷列表

openstack volume list

image-20250428122006250

Dashboard模式

image-20250428122140353

可以看到刚刚命令模式创建的卷成功了

接下来用Dashboard创建卷,点击右上角创建卷的按钮,填写配置内容创建即可

image-20250428122409438