Moved folders around for new workflow
Signed-off-by: Robert Wolff <robert.wolff@linaro.org>
This commit is contained in: parent 77d8cfa8c9, commit 8a7f6c2cb5. 63 changed files with 0 additions and 0 deletions.
# OpenStack

This repository provides all the support code required to deploy a "Developer
Cloud".

# OpenStack packages

The OpenStack packages are built by Linaro and made available at the following
location:

http://repo.linaro.org/rpm/linaro-overlay/centos-7/repo

The build scripts for the packages are available in this repository in the
[`openstack-venvs`](https://git.linaro.org/leg/sdi/openstack-ref-architecture.git/tree/openstack-venvs)
folder. These scripts are provided on an as-is basis and are tailored
specifically to Linaro's build environment. Use them only at your own risk.

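To consume these packages on a CentOS 7 host, a yum repository definition can be dropped into `/etc/yum.repos.d/`. The fragment below is only a sketch: the repository id, name, and `gpgcheck=0` are assumptions; only the `baseurl` comes from this document.

```
[linaro-overlay]
name=Linaro overlay packages for CentOS 7 (assumed repo id/name)
baseurl=http://repo.linaro.org/rpm/linaro-overlay/centos-7/repo
enabled=1
gpgcheck=0
```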
# Reference Architecture

The reference architecture deploys a cloud that uses Ceph as the storage
backend for OpenStack:

[https://git.linaro.org/leg/sdi/openstack-ref-architecture.git](https://git.linaro.org/leg/sdi/openstack-ref-architecture.git)

See the [architecture document](docs/architecture.md) for a block diagram of
how the servers should be connected to the network and how to spread the
services across the different hosts in a default configuration.

# Prerequisites

1. All servers are expected to have Linaro ERP 16.12 installed, with networking
   configured so that they can see/resolve each other's names.

1. The nodes that will be used as Ceph OSDs need at least one extra hard drive
   for Ceph.

1. The networking node should have 3 NICs connected as described in the
   [architecture document](docs/architecture.md).

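As a pre-flight sanity check for the name-resolution requirement above, a small shell sketch like the following can be run from any of the servers. The host names passed in are placeholders for your own; `getent hosts` consults `/etc/hosts` as well as DNS.

```shell
#!/bin/sh
# Sketch: check that each deployment host name resolves. This helper and the
# host names below are illustrative, not part of the playbooks.
check_hosts() {
    rc=0
    for h in "$@"; do
        if getent hosts "$h" > /dev/null 2>&1; then
            echo "ok: $h"
        else
            echo "UNRESOLVED: $h"
            rc=1
        fi
    done
    return $rc
}

# Example: substitute the names from your own inventory.
check_hosts localhost
```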
# Configuration

Example configuration files are provided in this repo; go through them and
generate the equivalent ones for your particular deployment:

    ansible/hosts.example
    group_vars/all

Be sure to use host names instead of IPs, to avoid some known deployment
issues.

The playbook assumes your own files are in a folder called `ansible/secrets`,
so the recommendation is to place your files there.

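Purely as an illustration of the shape of an inventory, a minimal `ansible/hosts` file might look like the following. The group names are assumptions (not taken from `ansible/hosts.example`), and the host names echo those used in the network diagram; adapt both to the real example file.

```
[controllers]
control-node-1
control-node-2
control-node-3

[computes]
compute-1
compute-2
```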
# The deployment

The deployment can be split into two parts: Ceph and OpenStack.

## Deploying Ceph

1) Monitors are deployed and the cluster bootstrapped:

    ansible-playbook -K -v -i ./hosts ./site.yml --tags ceph-mon

Check that the cluster is up and running by connecting to one of the monitors
and checking:

    ssh server1
    ceph daemon mon.server1 mon_status

2) OSDs assume that at least one full hard drive is dedicated to Ceph. If all
the servers that will host OSDs have the same hard drive layout, a default
configuration can be specified in group_vars/all as follows:

```
ceph_host_osds:
- sbX
- sbY
- sbZ
```

If some server has a different configuration, it can be specified in the
hostvars folder, in a file with the name of your server. For example:

```
$ cat hostvars/server1

ceph_host_osds:
- sbZ
- sbY
```

After configuring, the OSDs are deployed as follows:

    ansible-playbook -K -v -i ./secrets/hosts ./site.yml --tags ceph-osd

2.1) If you are setting up a cluster from scratch on machines where Ceph had
been installed previously, there is an option to force the resetting of all the
disks (this option WILL DELETE all the data on the OSDs). This option is not
idempotent; use it at your own risk. It is safe to use if you have cleanly
deployed the machine and the disk to be used as an OSD had a previously
installed Ceph:

    --extra-vars 'ceph_force_prepare=true'

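Putting the flag together with the OSD deployment command shown earlier gives, as a sketch:

```
ansible-playbook -K -v -i ./secrets/hosts ./site.yml --tags ceph-osd \
    --extra-vars 'ceph_force_prepare=true'
```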
## Deploying OpenStack

OpenStack is deployed using Ansible with the playbook defined in the "ansible"
directory. You'll need to create the files "deployment-vars" and "hosts" to
match your environment; there are examples to help guide you. Once those files
are in place, OpenStack can be deployed with:

    ansible-playbook -K -v -i secrets/hosts site.yml

# Network Diagram

This diagram gives an orientation of how the physical networks are expected to
be set up.

The two networks are physical networks segmented into two different VLANs. The
internal network is a traditional lab internal network that all the servers can
see. The OpenStack services communicate with each other on this fairly safe
network. Outbound traffic on this network is routed through the external
router.

The "VMS NET" is a second VLAN (it can be on the same or a different physical
switch). This network is private, with no outbound routes. The compute nodes
and the network node talk over this network using VXLAN to provide the private
virtualized networks defined and managed by OpenStack. The network node has a
single interface bridged to the public internet and a range of public IPv4
addresses that can be assigned as floating IPs to expose VMs to the internet.

```
+---+ +---------------------------------+ +---+
| V | | +--------+ I |
| M | | control-node-1 |eth0 | N |
| S | | mysql, rabbit, ceph-mon | | T |
| | | | | E |
| N | +---------------------------------+ | R +-----+
| E | | N | |eth0
| T | +---------------------------------+ | A | +---------------+
| | | +--------+ L | | |
| | | control-node-2 |eth0 | | | External |
| | | keystone, glance, memcached, | | N | | router |
| | | nova(api etc), neutron-server, | | E | | |
| 1 | | horizon, cinder, ceph-mon | | T | +---------------+
| 9 | | | | W | |eth1
| 2 | +---------------------------------+ | O | |
| . | | R | |
| 1 | +---------------------------------+ | K | |
| 6 | | +--------+ | |
| 8 | | control-node-3 |eth0 | | |
| . | | openvswitch_agent, l3_agent, | | | |
| 0 | eth1| dhcp_agent, metadata_agent |eth2 |10 | |
| . +--------+ ceph-mon |__ | . | |
| X | +---------------------------------+ \ |10 | |
| | \___/| . |- |
| | +---------------------------------+ | X | \ \ XXXXX
| | | +--------+ . | \ XXXX X
| | | compute-$X |eth0 | X | \ XX XX
| | eth1| nova-compute, ceph-osd | | | \XX XXX
| +--------+ neutron-openvswitch_agent | | | X Internet X
| | +---------------------------------+ | | XX XXX
| | | | XXXXXXXXXXX
| | +---------------------------------+ | |
| | | +--------+ |
| | | compute-$X |eth0 | |
| | eth1| nova-compute, ceph-osd | | |
| +--------+ neutron-openvswitch_agent | | |
+---+ +---------------------------------+ +---+
```