Compare commits

...

39 Commits

Author SHA1 Message Date
Neil Hanlon
129b0c5d3a
update for el9 2023-04-23 22:14:38 -04:00
Neil Hanlon
803bb21868
configure the clouds.py script on the host 2022-04-07 02:03:49 -04:00
Neil Hanlon
f35776479e
add bootstrapping for the cloud 2022-04-07 01:33:15 -04:00
Neil Hanlon
4f0bb1f4af
no idea, commit before changes 2022-03-20 14:21:13 -04:00
Neil Hanlon
2968e83f5b
add back debug to override nova and requirements to master 2022-02-14 11:23:24 -05:00
Neil Hanlon
ffdb85a388
fix lxc vars 2022-02-14 11:23:01 -05:00
Neil Hanlon
a588f87fe6
add todo 2022-02-13 22:51:21 -05:00
Neil Hanlon
863c9a4068
Add support for non vultr block devices and also check first to avoid errors 2022-02-13 22:51:11 -05:00
Neil Hanlon
e5755dd3d0
AIOs should be metal, for now
* lxc: touch a file that it expects
2022-02-13 18:29:06 -05:00
Neil Hanlon
e6f311f6ec
no longer need to change nova head 2022-02-13 18:28:46 -05:00
Neil Hanlon
84c0b449b5
Fix the roles to actually work; touch a file for lxc to run 2022-02-12 20:31:50 -05:00
Neil Hanlon
a21471be9b
fixes to make it run 2022-02-12 17:44:25 -05:00
Neil Hanlon
e5ba379366
get python3 working again 2022-02-12 17:05:49 -05:00
Neil Hanlon
42ee7d8e9d
fix sshkey_fetch var default 2022-02-12 17:02:14 -05:00
Neil Hanlon
2e86c1ed28
need to build the hosts file, too 2022-02-12 16:55:51 -05:00
Neil Hanlon
3917bfd364
Completely restructure into single-playbooks for AIO and Distributed
* Tested only on distributed at this check-in
* Also **temporarily** installing `patch` on the infra hosts, needed for
  an os_nova patch that will ultimately be removed. It isn't clear from
  this patch though, because the file init-nodes.yml which installs
  packages was renamed to tasks/init-nodes.yml.
* There are some drawbacks to doing it this way, but the playbooks are
  serving a single purpose and don't need to be catch-all infra tooling
2022-02-12 16:36:34 -05:00
Neil Hanlon
38d13e8b0c
readme change temp 2022-02-12 15:59:00 -05:00
Neil Hanlon
664aa103c2
tf push 2022-02-12 15:58:46 -05:00
Neil Hanlon
2254abe62c
add patch file for tests 2022-02-12 15:58:12 -05:00
Neil Hanlon
24fa5fa2cd
some changes for non aio installs 2022-02-12 15:56:25 -05:00
Neil Hanlon
a9280b58f5
hopes and prayers 2022-02-07 10:36:28 -05:00
Neil Hanlon
80e26c806e
hopes and prayers 2022-02-07 10:32:36 -05:00
Neil Hanlon
41a84ab580
idk 2022-02-06 23:00:18 -05:00
Neil Hanlon
e6fa94e2e1
i hate everything 2022-02-03 21:35:23 -05:00
Neil Hanlon
f8d092cbae
Various tinkerings to run on master 2022-02-03 20:11:23 -05:00
Neil Hanlon
fd8d523757
fix more roles... 2022-01-22 19:42:39 -05:00
Neil Hanlon
20acc5b9a0
Update playbooks to generalize 2022-01-22 18:44:47 -05:00
Neil Hanlon
54d396ebf1
don't run iface script on aio, make sure ssh is configured. 2022-01-15 15:56:13 -05:00
Neil Hanlon
b48bc11262
Add AIO server to the mix 2022-01-10 13:24:26 -05:00
Neil Hanlon
9c87fb8c87
add and update ansible playbooks for infra
- [openstack_user_config]: remove NFS in favor of (properly installed)
  iscsi
- [openstack_user_config]: remove extraneous config in favor of shorter
  version
- [storage] install and enable targetd (target.service)
- [ansible] only run 'infra' tags on the first infra host - never on an
  AIO
- [ansible] change roles to use the ``host`` extra var to configure
  where to run to mitigate accidents
- [ansible] add aio steps to infra playbook
- [ansible] add storage host playbook to configure volumes and iscsi
- [ansible] aio: configure volume groups
2022-01-10 13:20:05 -05:00
Neil Hanlon
98971618cd
Remove key 2022-01-05 16:58:32 -05:00
Neil Hanlon
aa9bfe1f95
Rework config and add ssh 2022-01-05 16:57:06 -05:00
Neil Hanlon
a32cc255a7
Moved these into ansible 2021-12-29 21:57:45 -05:00
Neil Hanlon
f23769bf54
Configure bootstrap and deployment for infra node 2021-12-29 21:25:12 -05:00
Neil Hanlon
cd452174c0
Fix tags 2021-12-29 19:54:31 -05:00
Neil Hanlon
5504c67d3c
finalize initial bootstrap tasks 2021-12-29 19:37:01 -05:00
Neil Hanlon
f513725182
Add ansible layout and playbooks to provision nodes 2021-12-29 18:02:43 -05:00
Neil Hanlon
0e740ce123
Update readme 2021-12-29 16:17:53 -05:00
899dc59a64
Add license and readme 2021-12-29 16:17:27 -05:00
58 changed files with 2043 additions and 288 deletions

4
.ansible-lint Normal file
@@ -0,0 +1,4 @@
warn_list:
- internal-error
skip_list:
- '204'

6
.direnv/aliases/doctl Executable file
@@ -0,0 +1,6 @@
#!/usr/bin/env bash
doctl ()
{
/home/neil/bin/doctl --context advancedlsa "$@"
}
doctl "$@"

4
.gitignore vendored
@@ -1,3 +1,7 @@
id_ed25519
.terraform/
.envrc
*.retry
ansible/*.retry
ansible/playbooks/files/buffer/*
.direnv

33
.pre-commit-config.yaml Normal file
@@ -0,0 +1,33 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-json
- id: pretty-format-json
- id: detect-private-key
# - repo: local
# hooks:
# # - id: ansible-lint
# name: Ansible-lint
# description: This hook runs ansible-lint.
# entry: ansible-lint --force-color
# language: python
# # do not pass files to ansible-lint, see:
# # https://github.com/ansible/ansible-lint/issues/611
# pass_filenames: false
# always_run: true
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.31.0
hooks:
- id: yamllint
files: \.(yaml|yml)$
types: [file, yaml]
entry: yamllint

13
.yamllint Normal file
@@ -0,0 +1,13 @@
---
extends: default
rules:
line-length:
max: 160
level: warning
ignore: |
.travis.yml
.github

9
LICENSE Normal file
@@ -0,0 +1,9 @@
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

19
README.md Normal file
@@ -0,0 +1,19 @@
# infra
Scripts and code to deploy OpenStack nodes on various providers with OpenStack-Ansible.
Future: Integrate with Netbox to provision and hold IPAM / DCIM
## Ansible
Contains a set of playbooks which will set up hosts with the necessary changes and run bootstrap scripts as needed.
Always supply ``-e 'aio_install=1'`` when doing an AIO install.
Guide:
* Run init-nodes.yml - `ansible-playbook -i vultr.yml -e 'host=all' init-nodes.yml`
* Run adhoc-reboot.yml to restart nodes after they have been upgraded and SELinux has been changed
* Run setup-infra.yml on infra and/or AIO hosts (don't forget ``-e 'aio_install=1'``)
* Run setup-storage.yml on storage hosts, if applicable.
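The guide above can be sketched as a single run sequence. This is a hedged sketch: the wrapper only echoes each command so the order can be reviewed before a real run (drop the `echo` to execute; the `host=` values for the infra and storage steps are illustrative, not from the guide).

```shell
# Hypothetical helper: prints each ansible-playbook invocation instead of
# running it. Remove `echo` to execute for real.
osa_run() { echo ansible-playbook -i vultr.yml "$@"; }

osa_run -e 'host=all' init-nodes.yml                         # bootstrap every node
osa_run -e 'host=all' adhoc-reboot.yml                       # reboot after upgrades / SELinux change
osa_run -e 'host=infra1' -e 'aio_install=1' setup-infra.yml  # infra and/or AIO hosts
osa_run -e 'host=storage1' setup-storage.yml                 # storage hosts, if applicable
```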

25
ansible/.gitignore vendored Normal file
@@ -0,0 +1,25 @@
#keep tmp folder empty
tmp/*
!tmp/Readme.md
#keep folder holding public roles empty
roles/public/*
!roles/public/Readme.md
#keep folder holding ansible collections empty
collections/*
!README.md
# Ignore all vaults
playbooks/vars/vaults/*
!playbooks/vars/vaults/README.md
# Ignore hidden configs
playbooks/templates/hidden/*
!playbooks/templates/hidden/README.md
# keep the inventory generic
inventories/staging/hosts.ini
inventories/production/hosts.ini
*.retry

88
ansible/README.md Normal file
@@ -0,0 +1,88 @@
# Ansible
Ansible playbooks, roles, modules, etc. will live here. This wiki will reflect the layout, structure, and potential standards that should be followed when making playbooks and roles.
Each playbook should have comments or a name descriptor that explains what the playbook does or how it is used. If not available, README-... files can be used in their place, especially in the case of adhoc playbooks that take input. Documentation for each playbook/role does not have to be on this wiki; comments or READMEs should be sufficient.
## Management Node Structure
```
.
├── ansible.cfg
├── collections
├── files -> playbooks/files
├── handlers -> playbooks/handlers
├── inventories
│   ├── production
│   │   ├── group_vars
│   │   ├── host_vars
│   │   └── hosts
│   ├── staging
│   └── development
├── pkistore
├── playbooks
│ ├── files
│ ├── handlers
│ ├── tasks
│ ├── templates
│ ├── vars
├── roles/local
│ └── <role-name>
│ └── requirements.yml
├── tasks -> playbooks/tasks
├── templates -> playbooks/templates
└── vars -> playbooks/vars
```
## Structure
What each folder represents
```
files -> As the name implies, non-templated files go here. Files that are
dropped somewhere on the file system should be laid out in a way
that represents the file system (eg. ./etc/sysconfig/)
group_vars -> Group Variables go here if they are not fulfilled in an inventory.
Recommended that group_vars be used over inventory vars.
host_vars -> Host variables go here
inventory -> All static inventories go here
roles -> Custom roles can go here
tasks -> Common tasks come here
templates -> Templates go here
vars       -> Global variables that are called with vars_files go here.
```
## Current Playbook Naming
```
init-* -> Starting infrastructure playbooks that run solo or import other
playbooks that start with import-
adhoc -> These playbooks are one-off playbooks that can be used on the CLI or
in AWX. These are typically for basic tasks.
import -> Playbooks that should be imported from the top level playbooks
role-* -> These playbooks call roles specifically for infrastructure tasks.
Playbooks that do not call a role should be named init or adhoc based
on their usage.
```
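Under this naming scheme, a minimal adhoc playbook might look like the following. This is a hypothetical sketch of the adhoc-reboot.yml referenced in the top-level README, not the committed file:

```yaml
---
# adhoc-reboot.yml — adhoc: a one-off playbook usable from the CLI or AWX
- name: Reboot nodes after upgrades
  hosts: "{{ host | default('all') }}"
  become: true
  tasks:
    - name: Reboot and wait for the node to come back
      ansible.builtin.reboot:
        reboot_timeout: 600
...
```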
### Pre-commits / linting
When pushing to your own fork of this repository, pre-commit must run to verify your changes; all hooks must pass for the push to go through. This is an absolute requirement, even for roles.
When the linter passes, the push will complete and you will be able to open a PR.
## General YAML Formatting
It is recommended that each YAML file starts with `---` and ends with `...`. This helps with linting and marks an unambiguous end to the file.
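For example, a small vars file following this convention (the filename and variables are illustrative):

```yaml
---
# playbooks/vars/example.yml — starts with --- and ends with ...
ntp_servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
...
```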
### Plugin and Formatting Assistance
The YAML format is easy to follow without much thought, and the same goes for Ansible's syntax. Ideally, your editor can assist with these things. If you are a vim user, the following plugins can be useful:
```
stephpy/vim-yaml
pearofducks/ansible-vim
vim-syntastic/syntastic
```
These can be installed using [vim-plug](https://github.com/junegunn/vim-plug).

1
ansible/TODO Normal file
@@ -0,0 +1 @@
* make interfaces.sh in ansible

76
ansible/ansible.cfg Normal file
@@ -0,0 +1,76 @@
[defaults]
########################################
# Display settings
########################################
# Output display
force_color = 1
nocows = True
# Note: http://docs.ansible.com/ansible/intro_configuration.html#ansible-managed
ansible_managed = Ansible managed
#ansible_managed = Ansible managed - {file} on {host}
# Warn when Ansible thinks it is better to use a module.
# Note: http://docs.ansible.com/ansible/intro_configuration.html#id88
command_warnings = True
# Enable this to debug tasks calls
display_args_to_stdout = False
display_skipped_hosts = false
########################################
# Playbook settings
########################################
# Default strategy
strategy = free
# Number of hosts processed in parallel
forks = 20
########################################
# Behaviour settings
########################################
# Create .retry files when a playbook fails
retry_files_enabled = True
# Fact options
gathering = smart
#gathering = !all
#gathering = smart,network,hardware,virtual,ohai,facter
#gathering = network,!hardware,virtual,!ohai,!facter
# facts caching
#fact_caching_connection = tmp/facts_cache
#fact_caching = json
fact_caching = memory
fact_caching_timeout = 1800
# Enable or disable logs
# Note put to false in prod
no_log = False
########################################
# Common destinations
########################################
log_path = tmp/ansible.log
known_hosts = tmp/known_hosts
roles_path = roles/local:roles/public
collections_paths = collections/local:collections/public
remote_user=root
[inventory]
enable_plugins=vultr
[ssh_connection]
ssh_args = "-i /home/neil/dev/personal/advancedlsa/terraform/id_ed25519"

@@ -0,0 +1 @@
Leave empty, this is a placeholder folder for ansible collections

1
ansible/files Symbolic link
@@ -0,0 +1 @@
playbooks/files

1
ansible/handlers Symbolic link
@@ -0,0 +1 @@
playbooks/handlers

@@ -0,0 +1,69 @@
---
- name: Bootstrap our cloud with stuff
hosts: "{{ host | default('infra1') }}" # Go on infra host by default
become: true
handlers:
- import_tasks: handlers/main.yml
pre_tasks:
- name: Check if ansible cannot be run here
stat:
path: /etc/no-ansible
register: no_ansible
- name: Verify if we can run ansible
assert:
that:
- "not no_ansible.stat.exists"
success_msg: "We are able to run on this node"
fail_msg: "/etc/no-ansible exists - skipping run on this node"
- name: Loading Variables from OS Common
import_tasks: tasks/common_vars.yml
tasks:
- name: configure clouds.yml
import_tasks: tasks/configure_openstacksdk.yml
- name: setup flavors
openstack.cloud.compute_flavor:
cloud: linuxadminbooks
state: present
name: "{{ item.name }}"
ram: "{{ item.ram }}"
vcpus: "{{ item.vcpus }}"
disk: "{{ item.disk }}"
ephemeral: "{{ item.ephemeral }}"
is_public: yes
tags: flavors
# yamllint disable rule:braces
loop:
- { name: 'tiny', ram: 1024, vcpus: 1, disk: 10, ephemeral: 10 }
- { name: 'small', ram: 2048, vcpus: 1, disk: 20, ephemeral: 20 }
- { name: 'medium', ram: 4096, vcpus: 2, disk: 20, ephemeral: 40 }
- { name: 'large', ram: 8192, vcpus: 4, disk: 20, ephemeral: 80 }
- { name: 'xlarge', ram: 16384, vcpus: 8, disk: 20, ephemeral: 100 }
# yamllint enable rule:braces
- name: setup images
include_tasks: tasks/upload_image.yml
tags: images
args:
apply:
tags: images
# yamllint disable rule:braces
loop:
# - { name: 'cirros', filename: 'http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img', properties: {cpu_arch: x86_64, distro: cirros, protected: true}}
- { name: 'rockylinux86', filename: 'https://dl.rockylinux.org/pub/rocky/8/images/Rocky-8-GenericCloud.latest.x86_64.qcow2', properties: {cpu_arch: x86_64, distro: rocky}}
- { name: 'rockylinux90', filename: 'https://dl.rockylinux.org/pub/rocky/9/images/Rocky-9-GenericCloud.latest.x86_64.qcow2', properties: {cpu_arch: x86_64, distro: rocky}}
# yamllint enable rule:braces
post_tasks:
- name: Touching run file that ansible has ran here
file:
path: /var/log/ansible.run
state: touch
mode: '0644'
owner: root
group: root

@@ -0,0 +1,105 @@
---
# Copyright 2016, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Ensure createrepo package is installed
yum:
name: createrepo
state: present
- name: Deploy upstream COPR yum repo for lxc 3
yum_repository:
name: thm-lxc3.0
description: "Copr repo for lxc3.0 owned by thm"
baseurl: "{{ lxc_centos_package_baseurl }}"
enabled: yes
gpgcheck: yes
gpgkey: "{{ lxc_centos_package_key }}"
repo_gpgcheck: no
state: present
- name: Enable PowerTools repo
# NB: doesn't run command `dnf config-manager --set-enabled PowerTools` as can't make that idempotent
lineinfile:
path: /etc/yum.repos.d/Rocky-PowerTools.repo
create: false # so raise error if not already installed
regexp: enabled=
line: enabled=1
when: ansible_distribution_major_version == "8"
- name: Add GPG key for COPR LXC repo
rpm_key:
key: "{{ lxc_centos_package_key }}"
state: present
register: add_keys
until: add_keys is success
retries: 5
delay: 2
- name: Download EPEL gpg keys
get_url:
url: "{{ lxc_centos_epel_key }}"
dest: /etc/pki/rpm-gpg
register: _get_yum_keys
until: _get_yum_keys is success
retries: 5
delay: 2
- name: Install EPEL gpg keys
rpm_key:
key: "/etc/pki/rpm-gpg/{{ lxc_centos_epel_key.split('/')[-1] }}"
state: present
- name: Install the EPEL repository - Centos-8
yum_repository:
name: epel-lxc_hosts
baseurl: "{{ lxc_centos_epel_mirror ~ '/' ~ ansible_facts['distribution_major_version'] ~ '/Everything/' ~ ansible_facts['architecture'] }}"
description: "Extra Packages for Enterprise Linux {{ ansible_facts['distribution_major_version'] }} - $basearch"
gpgcheck: yes
gpgkey: "file:///etc/pki/rpm-gpg/{{ lxc_centos_epel_key.split('/')[-1] }}"
enabled: yes
state: present
includepkgs: "aria2 systemd-networkd"
register: install_epel_repo
until: install_epel_repo is success
retries: 5
delay: 2
- name: Install distro packages
package:
pkg: "{{ lxc_hosts_distro_packages }}"
state: "{{ lxc_hosts_package_state }}"
register: install_packages
until: install_packages is success
retries: 5
delay: 2
tags:
- lxc-packages
- name: Remove sub system lock if found
file:
path: "/var/lock/subsys/lxc"
state: "absent"
owner: "root"
group: "root"
tags:
- lxc-directories
- name: Enable lxc service
service:
name: lxc
enabled: "yes"
tags:
- lxc_hosts-config

@@ -5,16 +5,17 @@ cidr_networks:
storage: 172.29.228.0/22
used_ips:
- "172.29.220.1,172.29.220.50"
- "172.29.224.1,172.29.224.50"
- "172.29.228.1,172.29.228.50"
- "172.29.220.1"
- "172.29.224.1"
- "172.29.228.1"
- "172.29.232.1"
global_overrides:
# The internal and external VIP should be different IPs, however they
# do not need to be on separate networks.
external_lb_vip_address: 172.29.220.10
internal_lb_vip_address: 172.29.220.11
external_lb_vip_address: 172.29.220.100
internal_lb_vip_address: "{{ bootstrap_host_public_address | default(ansible_facts['default_ipv4']['address']) }}"
management_bridge: "br-mgmt"
provider_networks:
- network:
@@ -27,6 +28,14 @@ global_overrides:
- all_containers
- hosts
is_container_address: true
- network:
container_bridge: "br-external"
container_type: "veth"
container_interface: "eth12"
type: "flat"
net_name: "external"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vxlan"
container_type: "veth"
@@ -37,15 +46,6 @@
net_name: "vxlan"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "eth12"
type: "flat"
net_name: "flat"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-storage"
container_type: "veth"
@@ -61,13 +61,15 @@ global_overrides:
x-infra-hosts: &x-infra-hosts
infra1:
ip: 172.29.220.5
ip: 172.29.220.10
x-compute-hosts: &x-compute-hosts
compute1:
ip: 172.29.220.6
ip: 172.29.220.20
compute2:
ip: 172.29.220.21
x-storage-hosts: &x-storage-hosts
infra1:
ip: 172.29.220.7
storage1:
ip: 172.29.220.30
container_vars:
cinder_backends:
limit_container_types: cinder_volume
@@ -75,7 +77,7 @@ x-storage-hosts: &x-storage-hosts
volume_group: cinder-volumes
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name: LVM_iSCSI
iscsi_ip_address: "172.29.228.7"
iscsi_ip_address: "172.29.228.10"
##
## Infrastructure
@@ -86,6 +88,7 @@ repo-infra_hosts:
<<: *x-infra-hosts
haproxy_hosts:
<<: *x-infra-hosts
##
## OpenStack
##
@@ -107,86 +110,5 @@ network_hosts:
<<: *x-infra-hosts
compute_hosts:
<<: *x-compute-hosts
#storage_hosts:
# <<: *x-storage-hosts
###
### Infrastructure
###
## galera, memcache, rabbitmq, utility
#shared-infra_hosts:
# infra1:
# ip: 172.29.220.5
#
## repository (apt cache, python packages, etc)
#repo-infra_hosts:
# infra1:
# ip: 172.29.220.5
#
## load balancer
#haproxy_hosts:
# infra1:
# ip: 172.29.220.5
#
####
#### OpenStack
####
#
## keystone
#identity_hosts:
# infra1:
# ip: 172.29.220.5
#
## cinder api services
#storage-infra_hosts:
# infra1:
# ip: 172.29.220.5
#
## glance
#image_hosts:
# infra1:
# ip: 172.29.220.5
#
## placement
#placement-infra_hosts:
# infra1:
# ip: 172.29.220.5
#
## nova api, conductor, etc services
#compute-infra_hosts:
# infra1:
# ip: 172.29.220.5
#
## heat
#orchestration_hosts:
# infra1:
# ip: 172.29.220.5
#
## horizon
#dashboard_hosts:
# infra1:
# ip: 172.29.220.5
#
## neutron server, agents (L3, etc)
#network_hosts:
# infra1:
# ip: 172.29.220.5
#
## nova hypervisors
#compute_hosts:
# compute1:
# ip: 172.29.220.6
#
## cinder storage host (LVM-backed)
#storage_hosts:
# storage1:
# ip: 172.29.220.7
# container_vars:
# cinder_backends:
# limit_container_types: cinder_volume
# lvm:
# volume_group: cinder-volumes
# volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
# volume_backend_name: LVM_iSCSI
# iscsi_ip_address: "172.29.228.7"
storage_hosts:
<<: *x-storage-hosts

@@ -0,0 +1,109 @@
---
cidr_networks:
container: 172.29.220.0/22
tunnel: 172.29.224.0/22
storage: 172.29.228.0/22
used_ips:
- "172.29.220.1,172.29.220.50"
- "172.29.224.1,172.29.224.50"
- "172.29.228.1,172.29.228.50"
global_overrides:
# The internal and external VIP should be different IPs, however they
# do not need to be on separate networks.
external_lb_vip_address: 172.29.220.10
internal_lb_vip_address: 172.29.220.11
management_bridge: "br-mgmt"
provider_networks:
- network:
container_bridge: "br-mgmt"
container_type: "veth"
container_interface: "eth1"
ip_from_q: "container"
type: "raw"
group_binds:
- all_containers
- hosts
is_container_address: true
- network:
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
type: "raw"
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
x-infra-hosts: &x-infra-hosts
infra1:
ip: 172.29.220.5
x-compute-hosts: &x-compute-hosts
compute1:
ip: 172.29.220.6
x-storage-hosts: &x-storage-hosts
storage1:
ip: 172.29.220.7
storage2:
ip: 172.29.220.8
storage3:
ip: 172.29.220.9
##
## Ceph
##
ceph-mon_hosts:
<<: *x-storage-hosts
ceph-osd_hosts:
<<: *x-storage-hosts
ceph-rgw_hosts:
<<: *x-storage-hosts
##
## Infrastructure
##
shared-infra_hosts:
<<: *x-infra-hosts
repo-infra_hosts:
<<: *x-infra-hosts
haproxy_hosts:
<<: *x-infra-hosts
##
## OpenStack
##
identity_hosts:
<<: *x-infra-hosts
storage-infra_hosts:
<<: *x-infra-hosts
image_hosts:
<<: *x-infra-hosts
placement-infra_hosts:
<<: *x-infra-hosts
compute-infra_hosts:
<<: *x-infra-hosts
orchestration_hosts:
<<: *x-infra-hosts
dashboard_hosts:
<<: *x-infra-hosts
network_hosts:
<<: *x-infra-hosts
compute_hosts:
<<: *x-compute-hosts
storage_hosts:
<<: *x-infra-hosts

@@ -0,0 +1,24 @@
diff --git a/tasks/nova_install.yml b/tasks/nova_install.yml
index 3002c22..523b867 100644
--- a/tasks/nova_install.yml
+++ b/tasks/nova_install.yml
@@ -38,12 +38,18 @@
tags:
- nova-pip-packages
+- name: Retrieve the constraints URL
+ uri:
+ url: "{{ nova_upper_constraints_url }}"
+ return_content: yes
+ register: _u_c_contents
+
- name: Install the python venv
import_role:
name: "python_venv_build"
vars:
venv_python_executable: "{{ nova_venv_python_executable }}"
- venv_build_constraints: "{{ nova_git_constraints }}"
+ venv_build_constraints: "{{ _u_c_contents.content.split('\n') | reject('match', '^(futures|pypowervm)') | list }}"
venv_build_distro_package_list: "{{ nova_devel_distro_packages }}"
venv_install_destination_path: "{{ nova_bin | dirname }}"
venv_pip_install_args: "{{ nova_pip_install_args }}"

@@ -0,0 +1,3 @@
---
nova_git_install_branch: master
requirements_git_install_branch: master

@@ -0,0 +1,4 @@
---
# Set max connections to 500 to support running all services
#
galera_max_connections: 500

@@ -0,0 +1,15 @@
graylog_password_secret: "%QGWQASqrneb&qNpkSHp2bnis7hdw$jG2XcP5n9tDX@wpN2XA2*wskunfzA@6MDWpEbpT7Qsc#KFS26KR4n$XiCR7m$43^*v"
graylog_root_username: "admin"
graylog_root_password_sha2: "665903cabea02680f8b71807b7c2e1a1698507f71654316fadba6966948a002c" # The output of `echo -n yourpassword | shasum -a 256`
haproxy_extra_services:
- service:
haproxy_service_name: graylog
haproxy_backend_nodes: "{{ [groups['graylog_hosts'][0]] | default([]) }}"
haproxy_ssl: "{{ haproxy_ssl }}"
haproxy_port: 9000
haproxy_balance_type: http
haproxy_backend_arguments:
- "http-request set-header X-Graylog-Server-URL https://{{ external_lb_vip_address }}:9000"
graylog_targets:
- "{{ groups['graylog_hosts'][0] }}:12201"

@@ -0,0 +1,7 @@
---
lxc_hosts_container_build_command: "dnf --assumeyes --installroot=/var/lib/machines/{{ lxc_container_base_name }} install --setopt=install_weak_deps=False --nodocs rootfiles coreutils dnf rocky-release rocky-repos systemd --releasever=8"
lxc_container_map:
distro: "{{ hostvars[physical_host]['ansible_facts']['distribution'] | lower }}"
arch: "{{ lxc_architecture_mapping.get( hostvars[physical_host]['ansible_facts']['architecture'] | lower ) }}"
release: "{{ hostvars[physical_host]['ansible_facts']['distribution_major_version'] }}"

@@ -0,0 +1,11 @@
---
- name: restart_sshd
service:
name: sshd
state: restarted
- name: enable_targetd
shell: "systemctl enable --now target"
- name: restart_targetd
service:
name: target
state: restarted

@@ -0,0 +1,8 @@
---
- name: Bootstrap an AIO install
hosts: "{{ host | default('aio1') }}"
become: true
- import_playbook: setup-distributed.yml
vars:
aio_install: 1

@@ -0,0 +1,70 @@
---
# Installs everything on hosts
#
- name: Bootstrap nodes for distributed OSA installation
hosts: "{{ host | default('infra1,compute1,compute2,storage1') }}"
become: true
handlers:
- import_tasks: handlers/main.yml
pre_tasks:
- name: Check if ansible cannot be run here
stat:
path: /etc/no-ansible
register: no_ansible
- name: Verify if we can run ansible
assert:
that:
- "not no_ansible.stat.exists"
success_msg: "We are able to run on this node"
fail_msg: "/etc/no-ansible exists - skipping run on this node"
- name: Loading Variables from OS Common
import_tasks: tasks/common_vars.yml
tasks:
- name: Initialize nodes
tags:
- init
args:
apply:
tags:
- init
include_tasks: tasks/init-nodes.yml
- name: Reboot
import_tasks: tasks/reboot.yml
when: reboot | default(true) | bool
- name: Setup infra hosts
include_tasks: tasks/infra-host.yml
tags:
- infrastructure
args:
apply:
tags:
- infrastructure
when: tag.find("infra") != -1 or aio_install | default(false) | bool
- name: Setup storage hosts
include_tasks: tasks/storage-host.yml
tags:
- storage
args:
apply:
tags:
- storage
when: tag.find("storage") != -1
post_tasks:
- name: Touching run file that ansible has ran here
file:
path: /var/log/ansible.run
state: touch
mode: '0644'
owner: root
group: root

@@ -0,0 +1,16 @@
---
- name: Standard System Configuration Variables
block:
- name: Loading Variables from OS Common
include_vars: "{{ item }}"
with_items:
- "{{ ansible_distribution }}.yml"
- name: Check if system is EFI
stat:
path: "/sys/firmware/efi"
register: efi_installed
always:
- debug: msg="Variables are now loaded"
...

@@ -0,0 +1,6 @@
---
- name: Upload clouds.py
ansible.builtin.copy:
src: "../scripts/clouds.py"
dest: /root/clouds.py
mode: '0750'

@@ -0,0 +1,93 @@
---
- name: Bootstrap ansible
become: true
shell: scripts/bootstrap-ansible.sh
args:
chdir: /opt/openstack-ansible/
creates: /etc/ansible/
tags:
- bootstrap
- name: Deploy and setup configuration
block:
- name: Copy template to etc
ansible.builtin.copy:
remote_src: true
src: /opt/openstack-ansible/etc/openstack_deploy/
dest: /etc/openstack_deploy
directory_mode: true
force: false
- name: Copy distributed openstack configs
ansible.builtin.copy:
src: "files/{{ item }}.yml"
dest: /etc/openstack_deploy/
mode: '0644'
with_items:
- openstack_user_config
- name: Create secrets
become: true
ansible.builtin.shell:
cmd: /opt/openstack-ansible/scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
tags: secrets
args:
creates: /etc/openstack_deploy/user_secrets.yml.tar
when: not (aio_install | default(false) | bool)
- name: "[AIO] Deploy and setup configuration / bootstrap"
when: aio_install | default('false') | bool
block:
- name: Debug - Running AIO bootstrap
debug:
var: aio_install
- name: Run bootstrap aio with included args
ansible.builtin.shell: scripts/bootstrap-aio.sh
become: true
args:
chdir: /opt/openstack-ansible/
creates: /etc/openstack_deploy/
environment:
SCENARIO: "{{ SCENARIO | default('aio_lxc') }}"
tags:
- bootstrap
- aio
#- name: Create volume group for cinder
# lvg:
# pv_options: --metadatasize=2048
# pvs: "{{ cinder_pv_device | default('/dev/vdb') }}"
# vg: cinder-volumes
- name: Copy common openstack configs
ansible.builtin.copy:
src: "files/{{ item }}.yml"
dest: /etc/openstack_deploy/
mode: '0644'
with_items:
- user_galera
tags: config
- name: Disable SSH Agent Forwarding
lineinfile:
dest: /etc/ssh/sshd_config
regexp: '^.*AllowAgentForwarding'
line: 'AllowAgentForwarding no'
tags: services
notify:
- restart_sshd
- name: Check playbooks
tags: syntax
become: true
ansible.builtin.shell:
cmd: "openstack-ansible --syntax-check setup-{{ item }}.yml"
args:
chdir: /opt/openstack-ansible/playbooks/
loop:
- hosts
- infrastructure
- openstack
register: playbooks_res
...

@@ -0,0 +1,147 @@
---
- name: Disable Firewalld
  ansible.builtin.systemd:
    name: firewalld.service
    masked: true
    enabled: false
    force: true
    state: stopped
  tags: services
- name: Disable SELinux
  ansible.posix.selinux:
    policy: targeted
    state: disabled
  tags: services
- name: Ensure packages are upgraded
  ansible.builtin.dnf:
    name: "*"
    state: latest
  tags: packages
- name: Remove curl
  ansible.builtin.dnf:
    name: "curl"
    state: absent
  tags: packages
- name: Add curl-minimal
  ansible.builtin.dnf:
    name: "curl-minimal"
    state: latest
  tags: packages
- name: Generate SSH key
  block:
    - name: Create ssh key for root
      ansible.builtin.user:
        name: root
        generate_ssh_key: true
        ssh_key_bits: 4096
        ssh_key_file: .ssh/id_rsa
      register: sshkey_register
      tags: sshkey
    - name: fetch_keys
      tags: sshkey
      fetch:
        src: "~/.ssh/id_rsa.pub"
        dest: "files/buffer/infra-id_rsa.pub"
        flat: true
      when: sshkey_register.ssh_public_key != ""
      register: sshkey_fetch
  when: tag.find("infra") != -1 and name == "infra1"
  tags:
    - infra
    - sshkey
- name: Disable SSH Agent Forwarding
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^.*AllowAgentForwarding'
    line: 'AllowAgentForwarding no'
  tags: services
  notify:
    - restart_sshd
- name: Setup network
  include_tasks: tasks/setup-network.yml
  when: aio_install is undefined or not aio_install  # don't run when AIO
  args:
    apply:
      tags: interfaces
  tags:
    - interfaces
- name: Setup Infra Nodes
  block:
    - name: Install packages
      ansible.builtin.dnf:
        name:
          - git-core
          - wget
          - chrony
          - openssh-server
          - sudo
        state: latest
      tags: packages
    - name: Clone repository
      ansible.builtin.git:
        repo: https://review.opendev.org/openstack/openstack-ansible
        dest: /opt/openstack-ansible
        version: stable/zed
        # version: 'b958c02eeed355484be12db736ed81a047f7d7c0'
        # refspec: 'refs/changes/81/852181/2'
      tags: repos
    - name: Create ssh key for root
      ansible.builtin.user:
        name: root
        generate_ssh_key: true
        ssh_key_bits: 4096
        ssh_key_file: .ssh/id_rsa
      register: sshkey_register
      tags: sshkey
    - name: fetch_keys
      tags: sshkey
      fetch:
        src: "~/.ssh/id_rsa.pub"
        dest: "files/buffer/infra-id_rsa.pub"
        flat: true
      when: sshkey_register.ssh_public_key != ""
      register: sshkey_fetch
  when: tag.find("infra") != -1 or aio_install | default(false)
  tags: infra
- name: Install packages on non-infra hosts
  when: tag.find("infra") == -1 or aio_install | default(false)
  ansible.builtin.dnf:
    name:
      - iputils
      - lsof
      - openssh-server
      - sudo
      - tcpdump
      - python3
    state: latest
- name: Copy key to others
  ansible.posix.authorized_key:
    user: root
    state: present
    key: "{{ lookup('file', 'files/buffer/infra-id_rsa.pub') }}"
  when: tag.find("infra") == -1 and sshkey_fetch | default(false)
  tags: sshkey
- name: Disable cloud init from future runs
  file:
    path: /etc/cloud/cloud-init.disabled
    state: touch
    mode: '0644'
    owner: root
    group: root
...


@ -0,0 +1,32 @@
---
- name: Enable PowerTools repo
  # NB: doesn't run `dnf config-manager --set-enabled PowerTools`, as that can't be made idempotent
  lineinfile:
    path: /etc/yum.repos.d/Rocky-PowerTools.repo
    create: false  # raise an error if the repo file is not already installed
    regexp: enabled=
    line: enabled=1
  when: ansible_distribution_major_version == "8"
- name: Copy distributed openstack configs
  ansible.builtin.copy:
    src: "files/{{ item }}"
    dest: /tmp/
    mode: '0644'
  with_items:
    - python38-lxc-3.0.4-11.el8.x86_64.rpm
# @TODO - replace this with a proper repo/package workflow
- name: Install neil/lxc3.0 copr
  become: yes
  shell: "dnf -y copr enable neil/lxc3.0"
- name: Install package
  ansible.builtin.dnf:
    name: "{{ item }}"
    disable_gpg_check: yes  # @TODO drop once packages are signed
  with_items:
    - https://download.copr.fedorainfracloud.org/results/neil/lxc3.0/epel-8-x86_64/03253339-lxc/lxc-4.0.10-2.el8.x86_64.rpm
    - https://download.copr.fedorainfracloud.org/results/neil/lxc3.0/epel-8-x86_64/03253339-lxc/lxc-devel-4.0.10-2.el8.x86_64.rpm
    - https://download.copr.fedorainfracloud.org/results/neil/lxc3.0/epel-8-x86_64/03253339-lxc/lxc-libs-4.0.10-2.el8.x86_64.rpm
    - /tmp/python38-lxc-3.0.4-11.el8.x86_64.rpm


@ -0,0 +1,11 @@
---
- name: Reboot machine
  reboot:
  register: reboot_register
- name: Verify reboot
  assert:
    that:
      - "reboot_register.rebooted"
    success_msg: "Machine rebooted successfully."
    fail_msg: "Machine failed to boot: {{ ansible_hostname }}"


@ -0,0 +1,39 @@
---
- include_vars: common-network.yml
- name: Remove cloud-init cruft
  ignore_errors: true
  community.general.nmcli:
    state: absent
    conn_name: "{{ item }}"
  loop:
    - cloud-init enp6s0
    - cloud-init enp7s0
    - cloud-init enp8s0
    - cloud-init enp9s0
- name: Create network bridges
  community.general.nmcli:
    stp: true
    type: bridge
    conn_name: "{{ 'Bridge-' + item.key }}"
    state: present
    ifname: "{{ network_bridges[item.key] }}"
    method4: manual
    ip4: "{{ network_cidrs[item.key] | ansible.utils.ipmath(host_cidr_octets[inventory_hostname]) }}/{{ network_cidrs[item.key] | split('/') | last }}"
    method6: ignore
    autoconnect: true
  loop: "{{ network_interfaces[inventory_hostname] | dict2items }}"
- name: Enslave network interfaces to bridges
  community.general.nmcli:
    type: bridge-slave
    conn_name: "{{ 'Slave-' + item.value }}"
    state: present
    ifname: "{{ item.value }}"
    master: "{{ network_bridges[item.key] }}"
    autoconnect: true
    mtu: 1450
    method4: manual
    hairpin: false
  loop: "{{ network_interfaces[inventory_hostname] | dict2items }}"
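The `ip4` expression in the bridge task combines each network's CIDR with a per-host offset via `ansible.utils.ipmath`. A minimal Python sketch of the same arithmetic, using only the stdlib `ipaddress` module and, as an example, the management network and infra1 offset defined in vars/common-network.yml:

```python
import ipaddress

# Values taken from vars/common-network.yml: the management CIDR and infra1's host octet.
network_cidrs = {"management": "172.29.220.0/22"}
host_cidr_octets = {"infra1": 10}

# Equivalent of: network_cidrs[key] | ansible.utils.ipmath(host_cidr_octets[host])
net = ipaddress.ip_network(network_cidrs["management"])
host_ip = net.network_address + host_cidr_octets["infra1"]

# Re-append the prefix length, as the template does with split('/') | last
cidr = f"{host_ip}/{net.prefixlen}"
print(cidr)  # 172.29.220.10/22
```

The same computation yields 172.29.224.10/22 for infra1's tunnel address, and so on for each host/network pair in the loop.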


@ -0,0 +1,26 @@
---
# Set up the storage host: install targetcli and make sure any volumes are mounted.
- name: Loading Variables from OS Common
  import_tasks: tasks/common_vars.yml
- name: Install required packages
  become: true
  dnf:
    name: targetcli
    state: present
  notify: enable_targetd
- name: Check if cinder_pv_device is set
  stat:
    path: "{{ cinder_pv_device | default('/dev/vdb') }}"
  register: stat_cinder_pv_dev
- name: Create volume group for cinder
  lvg:
    pv_options: --metadatasize=2048
    pvs: "{{ cinder_pv_device | default('/dev/vdb') }}"
    vg: cinder-volumes
  when:
    - stat_cinder_pv_dev.stat.exists
    - stat_cinder_pv_dev.stat.isblk
...


@ -0,0 +1,18 @@
---
- name: "[Upload Image] Download image - {{ item.filename }}"
  ansible.builtin.get_url:
    url: "{{ item.filename }}"
    dest: "/tmp/{{ item.filename | checksum }}"
- name: Upload image to openstack
  openstack.cloud.image:
    cloud: linuxadminbooks
    state: present
    is_public: yes
    name: "{{ item.name }}"
    container_format: "{{ item.containerformat | default('bare') }}"
    disk_format: "{{ item.diskformat | default('qcow2') }}"
    filename: "/tmp/{{ item.filename | checksum }}"
    tags:
      - custom
    properties: "{{ item.properties }}"


@ -0,0 +1,5 @@
# Variables for our common module for RedHat
---
bin_su: /usr/bin/su
bin_sudo: /usr/bin/sudo


@ -0,0 +1,37 @@
---
network_interfaces:
  infra1:
    management: enp7s0
    tunnel: enp6s0
    storage: enp8s0
    external: enp9s0
  compute1:
    management: enp6s0
    tunnel: enp7s0
    storage: enp8s0
    external: enp9s0
  compute2:
    management: enp7s0
    tunnel: enp6s0
    storage: enp8s0
    external: enp9s0
  storage1:
    management: enp7s0
    storage: enp6s0
network_bridges:
  management: br-mgmt
  tunnel: br-tunnel
  storage: br-storage
  external: br-ext
network_cidrs:
  management: 172.29.220.0/22
  tunnel: 172.29.224.0/22
  storage: 172.29.228.0/22
  external: 172.29.232.0/22
host_cidr_octets:
  infra1: 10
  infra2: 11
  compute1: 20
  compute2: 21
  storage1: 30
  storage2: 31


@ -0,0 +1 @@
Placeholder for public roles installed via galaxy


@ -0,0 +1,8 @@
---
collections:
  - name: community.general
  - name: ansible.posix
  - name: ansible.utils
  - name: netbox.netbox
  - name: openstack.cloud

ansible/scripts/clouds.py Normal file

@ -0,0 +1,24 @@
#!/usr/bin/python3
"""
Adapted from http://adam.younglogic.com/2022/03/generating-a-clouds-yaml-file/ - collected 2022-04-07
"""
import os

import yaml

clouds = {
    "clouds": {
        "linuxadminbooks": {
            "auth": {
                "auth_url": os.environ["OS_AUTH_URL"],
                "project_name": os.environ["OS_PROJECT_NAME"],
                "project_domain_name": os.environ["OS_PROJECT_DOMAIN_NAME"],
                "username": os.environ["OS_USERNAME"],
                "user_domain_name": os.environ["OS_USER_DOMAIN_NAME"],
                "password": os.environ["OS_PASSWORD"],
            }
        }
    }
}
print(yaml.safe_dump(clouds))

ansible/tasks Symbolic link

@ -0,0 +1 @@
playbooks/tasks

ansible/templates Symbolic link

@ -0,0 +1 @@
playbooks/templates

ansible/vars Symbolic link

@ -0,0 +1 @@
playbooks/vars

ansible/vultr.yml Normal file

@ -0,0 +1,2 @@
---
plugin: vultr


@ -1,17 +1,28 @@
dnf -y install git-core git wget python36 chrony openssh-server python3-devel sudo
systemctl stop firewalld
systemctl mask firewalld
sed -i 's/enforcing/permissive/' /etc/sysconfig/selinux
sed -i 's/enforcing/permissive/' /etc/selinux/config
#git clone --branch feature/rocky8 --single-branch https://github.com/NeilHanlon/openstack-ansible.git /opt/openstack-ansible
touch /etc/cloud/cloud-init.disabled
#dnf -y install https://repos.fedorapeople.org/repos/openstack/openstack-xena/rdo-release-xena-1.el8.noarch.rpm
cat << EOF | tee -a /etc/ssh/sshd_config
Match User root
AllowAgentForwarding no
EOF
systemctl restart sshd
#cp /opt/ansible-runtime/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py /etc/ansible/roles/plugins/connection/ssh.py
#dnf -y upgrade &
#
#HOSTNAME="$(hostname)"
#
#if [[ $(hostname) =~ infra ]]; then
# dnf -y install git-core git wget python36 chrony openssh-server python3-devel sudo
# git clone --branch feature/rocky8 --single-branch https://github.com/NeilHanlon/openstack-ansible.git /opt/openstack-ansible
#fi
#
## Always stop and mask firewalld
#systemctl stop firewalld
#systemctl mask firewalld
#
## Always set to permissive.
#sed -i 's/enforcing/permissive/' /etc/sysconfig/selinux
#sed -i 's/enforcing/permissive/' /etc/selinux/config
#
## Disable agent forwarding, in case user forwards agent, so as not to confuse ansible
#cat << EOF | tee -a /etc/ssh/sshd_config
#Match User root
# AllowAgentForwarding no
#
#EOF
#
#systemctl restart sshd
#
#touch /etc/cloud/cloud-init.disabled || exit 1
#true


@ -1,4 +0,0 @@
---
lxc_container_base_name: "rocky-8-amd64"
lxc_hosts_container_build_command: "dnf --assumeyes --installroot=/var/lib/machines/{{ lxc_container_base_name }} install --setopt=install_weak_deps=False --nodocs rootfiles coreutils dnf rocky-release rocky-repos --releasever=8"


@ -1,101 +0,0 @@
#!/bin/bash
iface_mgmt=$(ip addr | grep 172.29.220 | awk '{print $NF}')
iface_vxlan=$(ip addr | grep 172.29.224 | awk '{print $NF}')
iface_storage=$(ip addr | grep 172.29.228 | awk '{print $NF}')
if [[ -z "${iface_mgmt}" ]]; then
echo "can't find mgmt interface"
else
echo "mgmt interface is: ${iface_mgmt}"
fi
if [[ -z "${iface_vxlan}" ]]; then
echo "can't find vxlan interface"
else
echo "vxlan interface is: ${iface_vxlan}"
fi
if [[ -z "${iface_storage}" ]]; then
echo "can't find storage interface"
else
echo "storage interface is: ${iface_storage}"
fi
if [[ -z "${iface_mgmt}" && ( -z "${iface_vxlan}" || -z "${iface_storage}" ) ]]; then
echo "Stopping. Only a mgmt interface found. Need at least one of vxlan or storage"
exit 1
fi
cat << EOF > ifcfg-br-mgmt
BOOTPROTO=none
DEVICE=br-mgmt
NM_CONTROLLED=no
IPADDR=172.29.220.5
NETMASK=255.255.252.0
ONBOOT=yes
TYPE=Bridge
USERCTL=no
EOF
cat << EOF > ifcfg-${iface_mgmt}
TYPE=Ethernet
DEVICE=${iface_mgmt}
ONBOOT=yes
BRIDGE=br-mgmt
HWADDR=$(ip link show ${iface_mgmt} | awk '/link\/ether/{print $2}')
EOF
cat << EOF > ifcfg-br-mgmt\:10
DEVICE=br-mgmt:10
ONPARENT=on
IPADDR=172.29.220.10
PREFIX=22
EOF
cat << EOF > ifcfg-br-mgmt\:11
DEVICE=br-mgmt:11
ONPARENT=on
IPADDR=172.29.220.11
PREFIX=22
EOF
#cat << EOF > ifcfg-br-storage
#BOOTPROTO=none
#DEVICE=br-storage
#IPADDR=172.29.228.5
#NETMASK=255.255.252.0
#NM_CONTROLLED=no
#ONBOOT=yes
#TYPE=Bridge
#USERCTL=no
#EOF
#cat << EOF > ifcfg-${iface_storage}
#TYPE=Ethernet
#DEVICE=${iface_storage}
#ONBOOT=yes
#BRIDGE=br-storage
#HWADDR=$(ip link show ${iface_storage} | awk '/link\/ether/{print $2}')
#EOF
cat << EOF > ifcfg-br-vxlan
BOOTPROTO=none
DEVICE=br-vxlan
IPADDR=172.29.224.5
NETMASK=255.255.252.0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Bridge
USERCTL=no
EOF
cat << EOF > ifcfg-${iface_vxlan}
TYPE=Ethernet
DEVICE=${iface_vxlan}
ONBOOT=yes
BRIDGE=br-vxlan
HWADDR=$(ip link show ${iface_vxlan} | awk '/link\/ether/{print $2}')
EOF

Binary file not shown.

Binary file not shown.

Binary file not shown.


@ -1,9 +1,9 @@
resource "vultr_instance" "infra1" {
plan = "vdc-2c-8gb"
plan = "vdc-4c-16gb"
region = "ewr"
os_id = "448"
label = "advancedlsa infra controlplane"
tag = "advancedlsa"
label = "infra1"
tag = "advancedlsa infra"
hostname = "ala-infra1"
enable_ipv6 = false
backups = "disabled"
@ -18,8 +18,8 @@ resource "vultr_instance" "compute1" {
plan = "vdc-2c-8gb"
region = "ewr"
os_id = "448"
label = "advancedlsa compute"
tag = "advancedlsa"
label = "compute1"
tag = "advancedlsa compute"
hostname = "ala-compute1"
enable_ipv6 = false
backups = "disabled"
@ -31,11 +31,11 @@ resource "vultr_instance" "compute1" {
}
resource "vultr_instance" "storage1" {
plan = "vc2-4c-8gb"
plan = "vc2-2c-4gb"
region = "ewr"
os_id = "448"
label = "advancedlsa storage"
tag = "advancedlsa"
label = "storage1"
tag = "advancedlsa storage"
hostname = "ala-storage1"
enable_ipv6 = false
backups = "disabled"
@ -54,3 +54,130 @@ resource "vultr_block_storage" "cinder" {
}
#resource "vultr_instance" "aio1" {
# plan = "vc2-4c-8gb"
# region = "ewr"
# os_id = "448"
# label = "aio1"
# tag = "advancedlsa aio" #NO infra tag
# hostname = "ala-aio1"
# enable_ipv6 = false
# backups = "disabled"
# ddos_protection = false
# activation_email = false
# script_id = "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd"
# ssh_key_ids = [vultr_ssh_key.terraform.id]
# private_network_ids = [vultr_private_network.mgmt2.id]
#}
#resource "vultr_block_storage" "cinder-aio" {
# size_gb = 100
# region = "ewr"
# label = "ala-storage-cinder-aio"
# attached_to_instance = vultr_instance.aio1.id
#}
#resource "vultr_instance" "aio2" {
# plan = "vc2-4c-8gb"
# region = "ewr"
# os_id = "448"
# label = "aio2"
# tag = "advancedlsa aio" #NO infra tag
# hostname = "ala-aio2"
# enable_ipv6 = false
# backups = "disabled"
# ddos_protection = false
# activation_email = false
# script_id = "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd"
# ssh_key_ids = [vultr_ssh_key.terraform.id]
# private_network_ids = [vultr_private_network.mgmt2.id]
#}
#resource "vultr_block_storage" "cinder-aio2" {
# size_gb = 100
# region = "ewr"
# label = "ala-storage-cinder-aio"
# attached_to_instance = vultr_instance.aio2.id
#}
#resource "vultr_instance" "aio3" {
# plan = "vc2-4c-8gb"
# region = "ewr"
# os_id = "448"
# label = "aio3"
# tag = "advancedlsa aio" #NO infra tag
# hostname = "ala-aio3"
# enable_ipv6 = false
# backups = "disabled"
# ddos_protection = false
# activation_email = false
# script_id = "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd"
# ssh_key_ids = [vultr_ssh_key.terraform.id]
# private_network_ids = [vultr_private_network.mgmt2.id]
#}
#resource "vultr_block_storage" "cinder-aio3" {
# size_gb = 100
# region = "ewr"
# label = "ala-storage-cinder-aio"
# attached_to_instance = vultr_instance.aio3.id
#}
#resource "vultr_instance" "aio4" {
# plan = "vc2-4c-8gb"
# region = "ewr"
# os_id = "448"
# label = "aio4"
# tag = "advancedlsa aio" #NO infra tag
# hostname = "ala-aio4"
# enable_ipv6 = false
# backups = "disabled"
# ddos_protection = false
# activation_email = false
# script_id = "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd"
# ssh_key_ids = [vultr_ssh_key.terraform.id]
# private_network_ids = [vultr_private_network.mgmt2.id]
#}
#resource "vultr_block_storage" "cinder-aio4" {
# size_gb = 100
# region = "ewr"
# label = "ala-storage-cinder-aio"
# attached_to_instance = vultr_instance.aio4.id
#}
#resource "vultr_instance" "aio5" {
# plan = "vc2-16c-64gb"
# region = "ewr"
# os_id = "448"
# label = "aio5"
# tag = "advancedlsa aio" #NO infra tag
# hostname = "ala-aio5"
# enable_ipv6 = false
# backups = "disabled"
# ddos_protection = false
# activation_email = false
# script_id = "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd"
# ssh_key_ids = [vultr_ssh_key.terraform.id]
# private_network_ids = [vultr_private_network.mgmt2.id]
#}
#resource "vultr_block_storage" "cinder-aio5" {
# size_gb = 100
# region = "ewr"
# label = "ala-storage-cinder-aio"
# attached_to_instance = vultr_instance.aio5.id
#}
#resource "vultr_instance" "aio6" {
# plan = "vc2-16c-64gb"
# region = "ewr"
# os_id = "448"
# label = "aio6"
# tag = "advancedlsa aio" #NO infra tag
# hostname = "ala-aio6"
# enable_ipv6 = false
# backups = "disabled"
# ddos_protection = false
# activation_email = false
# script_id = "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd"
# ssh_key_ids = [vultr_ssh_key.terraform.id]
# private_network_ids = [vultr_private_network.mgmt2.id]
#}
#resource "vultr_block_storage" "cinder-aio6" {
# size_gb = 100
# region = "ewr"
# label = "ala-storage-cinder-aio"
# attached_to_instance = vultr_instance.aio6.id
#}


@ -1,7 +1,7 @@
{
"version": 4,
"terraform_version": "1.1.2",
"serial": 174,
"terraform_version": "1.1.5",
"serial": 574,
"lineage": "9182161d-2dda-d6b4-a789-8481586b33b1",
"outputs": {},
"resources": [
@ -14,10 +14,10 @@
{
"schema_version": 0,
"attributes": {
"content": "dnf -y install git-core git wget python36 chrony openssh-server python3-devel sudo\nsystemctl stop firewalld\nsystemctl mask firewalld\nsed -i 's/enforcing/permissive/' /etc/sysconfig/selinux\nsed -i 's/enforcing/permissive/' /etc/selinux/config\n#git clone --branch feature/rocky8 --single-branch https://github.com/NeilHanlon/openstack-ansible.git /opt/openstack-ansible\ntouch /etc/cloud/cloud-init.disabled\n#dnf -y install https://repos.fedorapeople.org/repos/openstack/openstack-xena/rdo-release-xena-1.el8.noarch.rpm\ncat \u003c\u003c EOF | tee -a /etc/ssh/sshd_config\n\nMatch User root\n AllowAgentForwarding no\n\nEOF\n\nsystemctl restart sshd\n#cp /opt/ansible-runtime/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py /etc/ansible/roles/plugins/connection/ssh.py\n",
"content_base64": "ZG5mIC15IGluc3RhbGwgZ2l0LWNvcmUgZ2l0IHdnZXQgcHl0aG9uMzYgY2hyb255IG9wZW5zc2gtc2VydmVyIHB5dGhvbjMtZGV2ZWwgc3VkbwpzeXN0ZW1jdGwgc3RvcCBmaXJld2FsbGQKc3lzdGVtY3RsIG1hc2sgZmlyZXdhbGxkCnNlZCAtaSAncy9lbmZvcmNpbmcvcGVybWlzc2l2ZS8nIC9ldGMvc3lzY29uZmlnL3NlbGludXgKc2VkIC1pICdzL2VuZm9yY2luZy9wZXJtaXNzaXZlLycgL2V0Yy9zZWxpbnV4L2NvbmZpZwojZ2l0IGNsb25lIC0tYnJhbmNoIGZlYXR1cmUvcm9ja3k4IC0tc2luZ2xlLWJyYW5jaCBodHRwczovL2dpdGh1Yi5jb20vTmVpbEhhbmxvbi9vcGVuc3RhY2stYW5zaWJsZS5naXQgL29wdC9vcGVuc3RhY2stYW5zaWJsZQp0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQKI2RuZiAteSBpbnN0YWxsIGh0dHBzOi8vcmVwb3MuZmVkb3JhcGVvcGxlLm9yZy9yZXBvcy9vcGVuc3RhY2svb3BlbnN0YWNrLXhlbmEvcmRvLXJlbGVhc2UteGVuYS0xLmVsOC5ub2FyY2gucnBtCmNhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKCk1hdGNoIFVzZXIgcm9vdAogIEFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCgpFT0YKCnN5c3RlbWN0bCByZXN0YXJ0IHNzaGQKI2NwIC9vcHQvYW5zaWJsZS1ydW50aW1lL2xpYi9weXRob24zLjYvc2l0ZS1wYWNrYWdlcy9hbnNpYmxlL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkgL2V0Yy9hbnNpYmxlL3JvbGVzL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkK",
"content": "#dnf -y upgrade \u0026\n#\n#HOSTNAME=\"$(hostname)\"\n#\n#if [[ $(hostname) =~ infra ]]; then\n# dnf -y install git-core git wget python36 chrony openssh-server python3-devel sudo\n# git clone --branch feature/rocky8 --single-branch https://github.com/NeilHanlon/openstack-ansible.git /opt/openstack-ansible\n#fi\n#\n## Always stop and mask firewalld\n#systemctl stop firewalld\n#systemctl mask firewalld\n#\n## Always set to permissive.\n#sed -i 's/enforcing/permissive/' /etc/sysconfig/selinux\n#sed -i 's/enforcing/permissive/' /etc/selinux/config\n#\n## Disable agent forwarding, in case user forwards agent, so as not to confuse ansible\n#cat \u003c\u003c EOF | tee -a /etc/ssh/sshd_config\n#Match User root\n# AllowAgentForwarding no\n#\n#EOF\n#\n#systemctl restart sshd\n#\n#touch /etc/cloud/cloud-init.disabled || exit 1\n#true\n",
"content_base64": "I2RuZiAteSB1cGdyYWRlICYKIwojSE9TVE5BTUU9IiQoaG9zdG5hbWUpIgojCiNpZiBbWyAkKGhvc3RuYW1lKSA9fiBpbmZyYSBdXTsgdGhlbgojICAgIGRuZiAteSBpbnN0YWxsIGdpdC1jb3JlIGdpdCB3Z2V0IHB5dGhvbjM2IGNocm9ueSBvcGVuc3NoLXNlcnZlciBweXRob24zLWRldmVsIHN1ZG8KIyAgICBnaXQgY2xvbmUgLS1icmFuY2ggZmVhdHVyZS9yb2NreTggLS1zaW5nbGUtYnJhbmNoIGh0dHBzOi8vZ2l0aHViLmNvbS9OZWlsSGFubG9uL29wZW5zdGFjay1hbnNpYmxlLmdpdCAvb3B0L29wZW5zdGFjay1hbnNpYmxlCiNmaQojCiMjIEFsd2F5cyBzdG9wIGFuZCBtYXNrIGZpcmV3YWxsZAojc3lzdGVtY3RsIHN0b3AgZmlyZXdhbGxkCiNzeXN0ZW1jdGwgbWFzayBmaXJld2FsbGQKIwojIyBBbHdheXMgc2V0IHRvIHBlcm1pc3NpdmUuCiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3N5c2NvbmZpZy9zZWxpbnV4CiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3NlbGludXgvY29uZmlnCiMKIyMgRGlzYWJsZSBhZ2VudCBmb3J3YXJkaW5nLCBpbiBjYXNlIHVzZXIgZm9yd2FyZHMgYWdlbnQsIHNvIGFzIG5vdCB0byBjb25mdXNlIGFuc2libGUKI2NhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKI01hdGNoIFVzZXIgcm9vdAojICBBbGxvd0FnZW50Rm9yd2FyZGluZyBubwojCiNFT0YKIwojc3lzdGVtY3RsIHJlc3RhcnQgc3NoZAojCiN0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgfHwgZXhpdCAxCiN0cnVlCg==",
"filename": "./files/startup.sh",
"id": "58efcfcffe2afd977420573c7c6186c0abb92d0a"
"id": "5f996b64baace058772b3b1aa86db83787468974"
},
"sensitive_attributes": []
}
@ -33,7 +33,7 @@
"schema_version": 0,
"attributes": {
"date_created": "2021-10-30T21:37:07+00:00",
"date_modified": "2021-12-05T22:23:54+00:00",
"date_modified": "2021-12-30T00:30:37+00:00",
"filter": [
{
"name": "name",
@ -44,7 +44,7 @@
],
"id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"name": "advancedlsa",
"script": "ZG5mIC15IGluc3RhbGwgZ2l0LWNvcmUgZ2l0IHdnZXQgcHl0aG9uMzYgY2hyb255IG9wZW5zc2gtc2VydmVyIHB5dGhvbjMtZGV2ZWwgc3VkbwpzeXN0ZW1jdGwgc3RvcCBmaXJld2FsbGQKc3lzdGVtY3RsIG1hc2sgZmlyZXdhbGxkCnNlZCAtaSAncy9lbmZvcmNpbmcvcGVybWlzc2l2ZS8nIC9ldGMvc3lzY29uZmlnL3NlbGludXgKc2VkIC1pICdzL2VuZm9yY2luZy9wZXJtaXNzaXZlLycgL2V0Yy9zZWxpbnV4L2NvbmZpZwojZ2l0IGNsb25lIC0tYnJhbmNoIGZlYXR1cmUvcm9ja3k4IC0tc2luZ2xlLWJyYW5jaCBodHRwczovL2dpdGh1Yi5jb20vTmVpbEhhbmxvbi9vcGVuc3RhY2stYW5zaWJsZS5naXQgL29wdC9vcGVuc3RhY2stYW5zaWJsZQp0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQKI2RuZiAteSBpbnN0YWxsIGh0dHBzOi8vcmVwb3MuZmVkb3JhcGVvcGxlLm9yZy9yZXBvcy9vcGVuc3RhY2svb3BlbnN0YWNrLXhlbmEvcmRvLXJlbGVhc2UteGVuYS0xLmVsOC5ub2FyY2gucnBtCmNhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKCk1hdGNoIFVzZXIgcm9vdAogIEFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCgpFT0YKCnN5c3RlbWN0bCByZXN0YXJ0IHNzaGQKI2NwIC9vcHQvYW5zaWJsZS1ydW50aW1lL2xpYi9weXRob24zLjYvc2l0ZS1wYWNrYWdlcy9hbnNpYmxlL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkgL2V0Yy9hbnNpYmxlL3JvbGVzL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkK",
"script": "I2RuZiAteSB1cGdyYWRlICYKIwojSE9TVE5BTUU9IiQoaG9zdG5hbWUpIgojCiNpZiBbWyAkKGhvc3RuYW1lKSA9fiBpbmZyYSBdXTsgdGhlbgojICAgIGRuZiAteSBpbnN0YWxsIGdpdC1jb3JlIGdpdCB3Z2V0IHB5dGhvbjM2IGNocm9ueSBvcGVuc3NoLXNlcnZlciBweXRob24zLWRldmVsIHN1ZG8KIyAgICBnaXQgY2xvbmUgLS1icmFuY2ggZmVhdHVyZS9yb2NreTggLS1zaW5nbGUtYnJhbmNoIGh0dHBzOi8vZ2l0aHViLmNvbS9OZWlsSGFubG9uL29wZW5zdGFjay1hbnNpYmxlLmdpdCAvb3B0L29wZW5zdGFjay1hbnNpYmxlCiNmaQojCiMjIEFsd2F5cyBzdG9wIGFuZCBtYXNrIGZpcmV3YWxsZAojc3lzdGVtY3RsIHN0b3AgZmlyZXdhbGxkCiNzeXN0ZW1jdGwgbWFzayBmaXJld2FsbGQKIwojIyBBbHdheXMgc2V0IHRvIHBlcm1pc3NpdmUuCiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3N5c2NvbmZpZy9zZWxpbnV4CiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3NlbGludXgvY29uZmlnCiMKIyMgRGlzYWJsZSBhZ2VudCBmb3J3YXJkaW5nLCBpbiBjYXNlIHVzZXIgZm9yd2FyZHMgYWdlbnQsIHNvIGFzIG5vdCB0byBjb25mdXNlIGFuc2libGUKI2NhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKI01hdGNoIFVzZXIgcm9vdAojICBBbGxvd0FnZW50Rm9yd2FyZGluZyBubwojCiNFT0YKIwojc3lzdGVtY3RsIHJlc3RhcnQgc3NoZAojCiN0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgfHwgZXhpdCAxCiN0cnVlCg==",
"type": "boot"
},
"sensitive_attributes": []
@ -60,13 +60,13 @@
{
"schema_version": 0,
"attributes": {
"attached_to_instance": "a55972fb-5e17-4b56-85ab-8b015ae0abb6",
"attached_to_instance": "4d7f75d3-d8c4-4e83-877d-2fb53cb8a62c",
"cost": 10,
"date_created": "2021-12-28T22:51:10+00:00",
"id": "1b5e5064-bf92-4c80-b6a2-137eefb9fae9",
"date_created": "2022-02-07T21:46:11+00:00",
"id": "310f2d34-3d2a-43df-99b4-b773a38236ce",
"label": "ala-storage-cinder",
"live": false,
"mount_id": "ewr-1b5e5064bf924c",
"mount_id": "ewr-310f2d343d2a43",
"region": "ewr",
"size_gb": 100,
"status": "active"
@ -96,23 +96,23 @@
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2021-12-28T22:50:31+00:00",
"date_created": "2022-02-12T20:48:15+00:00",
"ddos_protection": false,
"default_password": "5X+y!6GrB})]-(8W",
"default_password": "Kq2.st%(RP78bCah",
"disk": 110,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "149.28.46.1",
"gateway_v4": "45.76.0.1",
"hostname": "ala-compute1",
"id": "4a25c69f-091b-44bb-b259-a46a5b0066ec",
"id": "c6c23822-ddba-4e08-8507-65d49feb280f",
"image_id": null,
"internal_ip": "",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8aENPYTljdElpcVhtbnZJOWVxRU5oeGFtLUtmeFY0dDZ897mFhYW-f4V2aqKWYP-VHbNbRzSV7ju_JKoNSjAwJbmQaN0hOX2yjYYZ9DWm73Lr3KdMWR6xta8Y62oAx7Ir6ndh_eKFcUkjK3IVdCcbTok4xeT4llr-XooMCfmDXYml6PX2i4k-iZeGJ8GY7uG4qO_ZrO5V4LK_z1s0Q5ATR_Fu24PtQP6MPuqte-3LY61OORkIMriF_kZ-hqvSWP5_MtAmyofE",
"label": "advancedlsa compute",
"main_ip": "149.28.47.218",
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8b250XzBjR0FudFBEcEpiaDI2OF9pYjB6VHY1aWhWNXB8_d9QS1bR2jM1gsv8bem4u7RrqSRMsY0fG5lFm70lj3ap9CrXqec_XIhIxzNs5LPg_RIpWdzBDDmVEJv4Zaw-scHGTKLSNrK8i7bkMRcFSM6NLAzjRX-zMMWXwCPpcPm1Tv9hFAQQctPijReRudVo_nY-0tNHL1J6vzzX_5aH-3Z8Ejq94buowJ3LzwomH-LpHQFsPjJjUV0ADg",
"label": "compute1",
"main_ip": "45.76.1.95",
"netmask_v4": "255.255.254.0",
"os": "Rocky Linux x64",
"os_id": 448,
@ -133,7 +133,7 @@
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa",
"tag": "advancedlsa compute",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
@ -162,38 +162,38 @@
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 10000,
"allowed_bandwidth": 20000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2021-12-28T22:50:31+00:00",
"date_created": "2022-02-12T20:48:15+00:00",
"ddos_protection": false,
"default_password": "!8De]*A_#?nB]4=M",
"default_password": "_N9e#?e$3QWn12C)",
"disk": 110,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "45.76.0.1",
"gateway_v4": "108.61.78.193",
"hostname": "ala-infra1",
"id": "443c7b6c-b08d-4458-9948-af695e6413a9",
"id": "fa0c5097-6ae6-4a5a-a0ae-89d9684c5c1d",
"image_id": null,
"internal_ip": "",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8RlFKSlplUG9xbG9yOGkwdUJfZ1V4OVZpZXNpbjVyaDN8gdo_pi7AIJChB7p3n0mRKMhkQmyqhOtYmGc0mqUkFAymQ95Z7TirVjBes6vyaw-nTrKr2iK_BOUNVS5nvwbD9KzfotESwkMMEWLwPmMvi8dAJS3fETlYcSUzIosDvwSGC8glTfDHp3BSkyypn_vT2jkLI8yuIPvH5a75IMhrSmrbp80mVp2JrWcZMfqw8yvrxnUlTD5MC5gqtUe2ZUBujqOvD4BX25_Sxy9-X7JQUPA",
"label": "advancedlsa infra controlplane",
"main_ip": "45.76.0.85",
"netmask_v4": "255.255.254.0",
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8QlZyOGszcUxaVkVRWjFidkx1aFl5Snd1SjZQT0dqVk985cLdsJjwxbsRUA46xwUUyEQr8bdJp2ZAmLMJ3RVsT2yfsFvXUyMoqDzY47T8jk5mKxd4mS4rPZ-quaGBmWzv55cNCERI0DC-U0SqM7XgvCkhWyUr3shdEp6P8XZHNEo0Va28BWkceJGoQHGVaqGT8tYaTSnTVzQpM3tlDNn8SFcoi7qlayC64J_EQCoV2Q5zFoIUZpKBZlc",
"label": "infra1",
"main_ip": "108.61.78.236",
"netmask_v4": "255.255.255.192",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vdc-2c-8gb",
"plan": "vdc-4c-16gb",
"power_status": "running",
"private_network_ids": [
"6bc8c36b-c3b1-4710-9880-c8ad4a53399c",
"ec94cea3-8385-49dd-930c-b2f0a1304a16",
"52bf92d4-a2cd-4266-b8a4-5dbd4190b174"
],
"ram": 8192,
"ram": 16384,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
@ -203,13 +203,13 @@
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa",
"tag": "advancedlsa infra",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 2
"vcpu_count": 4
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
@ -232,37 +232,37 @@
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 4000,
"allowed_bandwidth": 3000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2021-12-28T22:50:31+00:00",
"date_created": "2022-02-12T20:48:15+00:00",
"ddos_protection": false,
"default_password": "B9(cV}N-M?u_9*tq",
"disk": 160,
"default_password": "4g%KAboY7fpgT9#q",
"disk": 80,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "104.156.226.1",
"gateway_v4": "45.77.144.1",
"hostname": "ala-storage1",
"id": "a55972fb-5e17-4b56-85ab-8b015ae0abb6",
"id": "4d7f75d3-d8c4-4e83-877d-2fb53cb8a62c",
"image_id": null,
"internal_ip": "",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8cEk2Mk50RkNPUEE1WVY5MnpOREd2VDJfS1JEM2JVSUx8DPzPEGu1uIi2TBGexITs0qXvUkNzrj8tFRcxijcIQGDVBaayc7pMZUw_FruJsCFIxzP4ctC3mqu66WPZLORPcmLpAFy9p3y9s5AokoKdNOPf07XMjk8ieIyoH3xoen-CI9BAvFH78zkymhyBTRrYa1Pksr9pMRRfGHLAfsGE3o4b8kZ3hfP8AKWoVcQtLkDwtRbqTyy4XBXfFaDDYK0pPpx6jtMM",
"label": "advancedlsa storage",
"main_ip": "104.156.227.146",
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8QTlISk4wNnhseXRTWU1XRTNoYzF3UU5wRGp1cWY0c2x8HjtxsSItPk5rsLKhzzB3pICKe9BHapJl5ZsfZR_YxRj0rmKSSbNuFuwoNCYNKWyMmBDlASic0iD0CA5mMfqfAEZwG2lMcRO9RK_y0pdg9TYFlpwNFe66BKk1VdvBdgsodLZDv10SOHROO_NuGBIGA5W8xlMPnWtf2Y2Y5YwIyyxQdIXPSRsTtLzQUQAKZgydujRqt29BvjUZ",
"label": "storage1",
"main_ip": "45.77.144.28",
"netmask_v4": "255.255.254.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vc2-4c-8gb",
"plan": "vc2-2c-4gb",
"power_status": "running",
"private_network_ids": [
"ec94cea3-8385-49dd-930c-b2f0a1304a16",
"52bf92d4-a2cd-4266-b8a4-5dbd4190b174"
],
"ram": 8192,
"ram": 4096,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
@ -272,13 +272,13 @@
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa",
"tag": "advancedlsa storage",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 4
"vcpu_count": 2
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
@ -424,10 +424,10 @@
"schema_version": 0,
"attributes": {
"date_created": "2021-10-30T21:37:07+00:00",
"date_modified": "2021-12-05T22:23:54+00:00",
"date_modified": "2021-12-30T00:30:37+00:00",
"id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"name": "advancedlsa",
"script": "ZG5mIC15IGluc3RhbGwgZ2l0LWNvcmUgZ2l0IHdnZXQgcHl0aG9uMzYgY2hyb255IG9wZW5zc2gtc2VydmVyIHB5dGhvbjMtZGV2ZWwgc3VkbwpzeXN0ZW1jdGwgc3RvcCBmaXJld2FsbGQKc3lzdGVtY3RsIG1hc2sgZmlyZXdhbGxkCnNlZCAtaSAncy9lbmZvcmNpbmcvcGVybWlzc2l2ZS8nIC9ldGMvc3lzY29uZmlnL3NlbGludXgKc2VkIC1pICdzL2VuZm9yY2luZy9wZXJtaXNzaXZlLycgL2V0Yy9zZWxpbnV4L2NvbmZpZwojZ2l0IGNsb25lIC0tYnJhbmNoIGZlYXR1cmUvcm9ja3k4IC0tc2luZ2xlLWJyYW5jaCBodHRwczovL2dpdGh1Yi5jb20vTmVpbEhhbmxvbi9vcGVuc3RhY2stYW5zaWJsZS5naXQgL29wdC9vcGVuc3RhY2stYW5zaWJsZQp0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQKI2RuZiAteSBpbnN0YWxsIGh0dHBzOi8vcmVwb3MuZmVkb3JhcGVvcGxlLm9yZy9yZXBvcy9vcGVuc3RhY2svb3BlbnN0YWNrLXhlbmEvcmRvLXJlbGVhc2UteGVuYS0xLmVsOC5ub2FyY2gucnBtCmNhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKCk1hdGNoIFVzZXIgcm9vdAogIEFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCgpFT0YKCnN5c3RlbWN0bCByZXN0YXJ0IHNzaGQKI2NwIC9vcHQvYW5zaWJsZS1ydW50aW1lL2xpYi9weXRob24zLjYvc2l0ZS1wYWNrYWdlcy9hbnNpYmxlL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkgL2V0Yy9hbnNpYmxlL3JvbGVzL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkK",
"script": "I2RuZiAteSB1cGdyYWRlICYKIwojSE9TVE5BTUU9IiQoaG9zdG5hbWUpIgojCiNpZiBbWyAkKGhvc3RuYW1lKSA9fiBpbmZyYSBdXTsgdGhlbgojICAgIGRuZiAteSBpbnN0YWxsIGdpdC1jb3JlIGdpdCB3Z2V0IHB5dGhvbjM2IGNocm9ueSBvcGVuc3NoLXNlcnZlciBweXRob24zLWRldmVsIHN1ZG8KIyAgICBnaXQgY2xvbmUgLS1icmFuY2ggZmVhdHVyZS9yb2NreTggLS1zaW5nbGUtYnJhbmNoIGh0dHBzOi8vZ2l0aHViLmNvbS9OZWlsSGFubG9uL29wZW5zdGFjay1hbnNpYmxlLmdpdCAvb3B0L29wZW5zdGFjay1hbnNpYmxlCiNmaQojCiMjIEFsd2F5cyBzdG9wIGFuZCBtYXNrIGZpcmV3YWxsZAojc3lzdGVtY3RsIHN0b3AgZmlyZXdhbGxkCiNzeXN0ZW1jdGwgbWFzayBmaXJld2FsbGQKIwojIyBBbHdheXMgc2V0IHRvIHBlcm1pc3NpdmUuCiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3N5c2NvbmZpZy9zZWxpbnV4CiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3NlbGludXgvY29uZmlnCiMKIyMgRGlzYWJsZSBhZ2VudCBmb3J3YXJkaW5nLCBpbiBjYXNlIHVzZXIgZm9yd2FyZHMgYWdlbnQsIHNvIGFzIG5vdCB0byBjb25mdXNlIGFuc2libGUKI2NhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKI01hdGNoIFVzZXIgcm9vdAojICBBbGxvd0FnZW50Rm9yd2FyZGluZyBubwojCiNFT0YKIwojc3lzdGVtY3RsIHJlc3RhcnQgc3NoZAojCiN0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgfHwgZXhpdCAxCiN0cnVlCg==",
"type": "boot"
},
"sensitive_attributes": [],


@ -1,7 +1,7 @@
{
"version": 4,
"terraform_version": "1.1.2",
"serial": 169,
"terraform_version": "1.1.5",
"serial": 558,
"lineage": "9182161d-2dda-d6b4-a789-8481586b33b1",
"outputs": {},
"resources": [
@ -14,10 +14,10 @@
{
"schema_version": 0,
"attributes": {
"content": "dnf -y install git-core git wget python36 chrony openssh-server python3-devel sudo\nsystemctl stop firewalld\nsystemctl mask firewalld\nsed -i 's/enforcing/permissive/' /etc/sysconfig/selinux\nsed -i 's/enforcing/permissive/' /etc/selinux/config\n#git clone --branch feature/rocky8 --single-branch https://github.com/NeilHanlon/openstack-ansible.git /opt/openstack-ansible\ntouch /etc/cloud/cloud-init.disabled\n#dnf -y install https://repos.fedorapeople.org/repos/openstack/openstack-xena/rdo-release-xena-1.el8.noarch.rpm\ncat \u003c\u003c EOF | tee -a /etc/ssh/sshd_config\n\nMatch User root\n AllowAgentForwarding no\n\nEOF\n\nsystemctl restart sshd\n#cp /opt/ansible-runtime/lib/python3.6/site-packages/ansible/plugins/connection/ssh.py /etc/ansible/roles/plugins/connection/ssh.py\n",
"content_base64": "ZG5mIC15IGluc3RhbGwgZ2l0LWNvcmUgZ2l0IHdnZXQgcHl0aG9uMzYgY2hyb255IG9wZW5zc2gtc2VydmVyIHB5dGhvbjMtZGV2ZWwgc3VkbwpzeXN0ZW1jdGwgc3RvcCBmaXJld2FsbGQKc3lzdGVtY3RsIG1hc2sgZmlyZXdhbGxkCnNlZCAtaSAncy9lbmZvcmNpbmcvcGVybWlzc2l2ZS8nIC9ldGMvc3lzY29uZmlnL3NlbGludXgKc2VkIC1pICdzL2VuZm9yY2luZy9wZXJtaXNzaXZlLycgL2V0Yy9zZWxpbnV4L2NvbmZpZwojZ2l0IGNsb25lIC0tYnJhbmNoIGZlYXR1cmUvcm9ja3k4IC0tc2luZ2xlLWJyYW5jaCBodHRwczovL2dpdGh1Yi5jb20vTmVpbEhhbmxvbi9vcGVuc3RhY2stYW5zaWJsZS5naXQgL29wdC9vcGVuc3RhY2stYW5zaWJsZQp0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQKI2RuZiAteSBpbnN0YWxsIGh0dHBzOi8vcmVwb3MuZmVkb3JhcGVvcGxlLm9yZy9yZXBvcy9vcGVuc3RhY2svb3BlbnN0YWNrLXhlbmEvcmRvLXJlbGVhc2UteGVuYS0xLmVsOC5ub2FyY2gucnBtCmNhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKCk1hdGNoIFVzZXIgcm9vdAogIEFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCgpFT0YKCnN5c3RlbWN0bCByZXN0YXJ0IHNzaGQKI2NwIC9vcHQvYW5zaWJsZS1ydW50aW1lL2xpYi9weXRob24zLjYvc2l0ZS1wYWNrYWdlcy9hbnNpYmxlL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkgL2V0Yy9hbnNpYmxlL3JvbGVzL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkK",
"content": "#dnf -y upgrade \u0026\n#\n#HOSTNAME=\"$(hostname)\"\n#\n#if [[ $(hostname) =~ infra ]]; then\n# dnf -y install git-core git wget python36 chrony openssh-server python3-devel sudo\n# git clone --branch feature/rocky8 --single-branch https://github.com/NeilHanlon/openstack-ansible.git /opt/openstack-ansible\n#fi\n#\n## Always stop and mask firewalld\n#systemctl stop firewalld\n#systemctl mask firewalld\n#\n## Always set to permissive.\n#sed -i 's/enforcing/permissive/' /etc/sysconfig/selinux\n#sed -i 's/enforcing/permissive/' /etc/selinux/config\n#\n## Disable agent forwarding, in case user forwards agent, so as not to confuse ansible\n#cat \u003c\u003c EOF | tee -a /etc/ssh/sshd_config\n#Match User root\n# AllowAgentForwarding no\n#\n#EOF\n#\n#systemctl restart sshd\n#\n#touch /etc/cloud/cloud-init.disabled || exit 1\n#true\n",
"content_base64": "I2RuZiAteSB1cGdyYWRlICYKIwojSE9TVE5BTUU9IiQoaG9zdG5hbWUpIgojCiNpZiBbWyAkKGhvc3RuYW1lKSA9fiBpbmZyYSBdXTsgdGhlbgojICAgIGRuZiAteSBpbnN0YWxsIGdpdC1jb3JlIGdpdCB3Z2V0IHB5dGhvbjM2IGNocm9ueSBvcGVuc3NoLXNlcnZlciBweXRob24zLWRldmVsIHN1ZG8KIyAgICBnaXQgY2xvbmUgLS1icmFuY2ggZmVhdHVyZS9yb2NreTggLS1zaW5nbGUtYnJhbmNoIGh0dHBzOi8vZ2l0aHViLmNvbS9OZWlsSGFubG9uL29wZW5zdGFjay1hbnNpYmxlLmdpdCAvb3B0L29wZW5zdGFjay1hbnNpYmxlCiNmaQojCiMjIEFsd2F5cyBzdG9wIGFuZCBtYXNrIGZpcmV3YWxsZAojc3lzdGVtY3RsIHN0b3AgZmlyZXdhbGxkCiNzeXN0ZW1jdGwgbWFzayBmaXJld2FsbGQKIwojIyBBbHdheXMgc2V0IHRvIHBlcm1pc3NpdmUuCiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3N5c2NvbmZpZy9zZWxpbnV4CiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3NlbGludXgvY29uZmlnCiMKIyMgRGlzYWJsZSBhZ2VudCBmb3J3YXJkaW5nLCBpbiBjYXNlIHVzZXIgZm9yd2FyZHMgYWdlbnQsIHNvIGFzIG5vdCB0byBjb25mdXNlIGFuc2libGUKI2NhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKI01hdGNoIFVzZXIgcm9vdAojICBBbGxvd0FnZW50Rm9yd2FyZGluZyBubwojCiNFT0YKIwojc3lzdGVtY3RsIHJlc3RhcnQgc3NoZAojCiN0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgfHwgZXhpdCAxCiN0cnVlCg==",
"filename": "./files/startup.sh",
"id": "58efcfcffe2afd977420573c7c6186c0abb92d0a"
"id": "5f996b64baace058772b3b1aa86db83787468974"
},
"sensitive_attributes": []
}
@@ -33,7 +33,7 @@
"schema_version": 0,
"attributes": {
"date_created": "2021-10-30T21:37:07+00:00",
"date_modified": "2021-12-05T22:23:54+00:00",
"date_modified": "2021-12-30T00:30:37+00:00",
"filter": [
{
"name": "name",
@@ -44,13 +44,639 @@
],
"id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"name": "advancedlsa",
"script": "ZG5mIC15IGluc3RhbGwgZ2l0LWNvcmUgZ2l0IHdnZXQgcHl0aG9uMzYgY2hyb255IG9wZW5zc2gtc2VydmVyIHB5dGhvbjMtZGV2ZWwgc3VkbwpzeXN0ZW1jdGwgc3RvcCBmaXJld2FsbGQKc3lzdGVtY3RsIG1hc2sgZmlyZXdhbGxkCnNlZCAtaSAncy9lbmZvcmNpbmcvcGVybWlzc2l2ZS8nIC9ldGMvc3lzY29uZmlnL3NlbGludXgKc2VkIC1pICdzL2VuZm9yY2luZy9wZXJtaXNzaXZlLycgL2V0Yy9zZWxpbnV4L2NvbmZpZwojZ2l0IGNsb25lIC0tYnJhbmNoIGZlYXR1cmUvcm9ja3k4IC0tc2luZ2xlLWJyYW5jaCBodHRwczovL2dpdGh1Yi5jb20vTmVpbEhhbmxvbi9vcGVuc3RhY2stYW5zaWJsZS5naXQgL29wdC9vcGVuc3RhY2stYW5zaWJsZQp0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQKI2RuZiAteSBpbnN0YWxsIGh0dHBzOi8vcmVwb3MuZmVkb3JhcGVvcGxlLm9yZy9yZXBvcy9vcGVuc3RhY2svb3BlbnN0YWNrLXhlbmEvcmRvLXJlbGVhc2UteGVuYS0xLmVsOC5ub2FyY2gucnBtCmNhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKCk1hdGNoIFVzZXIgcm9vdAogIEFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCgpFT0YKCnN5c3RlbWN0bCByZXN0YXJ0IHNzaGQKI2NwIC9vcHQvYW5zaWJsZS1ydW50aW1lL2xpYi9weXRob24zLjYvc2l0ZS1wYWNrYWdlcy9hbnNpYmxlL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkgL2V0Yy9hbnNpYmxlL3JvbGVzL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkK",
"script": "I2RuZiAteSB1cGdyYWRlICYKIwojSE9TVE5BTUU9IiQoaG9zdG5hbWUpIgojCiNpZiBbWyAkKGhvc3RuYW1lKSA9fiBpbmZyYSBdXTsgdGhlbgojICAgIGRuZiAteSBpbnN0YWxsIGdpdC1jb3JlIGdpdCB3Z2V0IHB5dGhvbjM2IGNocm9ueSBvcGVuc3NoLXNlcnZlciBweXRob24zLWRldmVsIHN1ZG8KIyAgICBnaXQgY2xvbmUgLS1icmFuY2ggZmVhdHVyZS9yb2NreTggLS1zaW5nbGUtYnJhbmNoIGh0dHBzOi8vZ2l0aHViLmNvbS9OZWlsSGFubG9uL29wZW5zdGFjay1hbnNpYmxlLmdpdCAvb3B0L29wZW5zdGFjay1hbnNpYmxlCiNmaQojCiMjIEFsd2F5cyBzdG9wIGFuZCBtYXNrIGZpcmV3YWxsZAojc3lzdGVtY3RsIHN0b3AgZmlyZXdhbGxkCiNzeXN0ZW1jdGwgbWFzayBmaXJld2FsbGQKIwojIyBBbHdheXMgc2V0IHRvIHBlcm1pc3NpdmUuCiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3N5c2NvbmZpZy9zZWxpbnV4CiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3NlbGludXgvY29uZmlnCiMKIyMgRGlzYWJsZSBhZ2VudCBmb3J3YXJkaW5nLCBpbiBjYXNlIHVzZXIgZm9yd2FyZHMgYWdlbnQsIHNvIGFzIG5vdCB0byBjb25mdXNlIGFuc2libGUKI2NhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKI01hdGNoIFVzZXIgcm9vdAojICBBbGxvd0FnZW50Rm9yd2FyZGluZyBubwojCiNFT0YKIwojc3lzdGVtY3RsIHJlc3RhcnQgc3NoZAojCiN0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgfHwgZXhpdCAxCiN0cnVlCg==",
"type": "boot"
},
"sensitive_attributes": []
}
]
},
{
"mode": "managed",
"type": "vultr_block_storage",
"name": "cinder",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"attached_to_instance": "bc723c9c-eae5-4f9b-9554-95d2924aaaca",
"cost": 10,
"date_created": "2022-02-07T21:46:11+00:00",
"id": "310f2d34-3d2a-43df-99b4-b773a38236ce",
"label": "ala-storage-cinder",
"live": false,
"mount_id": "ewr-310f2d343d2a43",
"region": "ewr",
"size_gb": 100,
"status": "active"
},
"sensitive_attributes": [],
"private": "bnVsbA==",
"dependencies": [
"vultr_instance.storage1",
"vultr_private_network.mgmt",
"vultr_private_network.storage",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_block_storage",
"name": "cinder-aio",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"attached_to_instance": "c0d2a6f1-160d-49a6-9f4c-064fd13f5763",
"cost": 10,
"date_created": "2022-01-10T04:36:55+00:00",
"id": "32fdc5e6-923d-4759-b6d3-f322a7b4060f",
"label": "ala-storage-cinder-aio",
"live": false,
"mount_id": "ewr-32fdc5e6923d47",
"region": "ewr",
"size_gb": 100,
"status": "active"
},
"sensitive_attributes": [],
"private": "bnVsbA==",
"dependencies": [
"vultr_instance.aio1",
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_block_storage",
"name": "cinder-aio2",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"attached_to_instance": "acc242da-c48f-4f36-8c7d-8f3e63cb805b",
"cost": 10,
"date_created": "2022-01-26T02:07:06+00:00",
"id": "54637727-2eeb-49fd-90dc-10c84f8db98a",
"label": "ala-storage-cinder-aio",
"live": false,
"mount_id": "ewr-546377272eeb49",
"region": "ewr",
"size_gb": 100,
"status": "active"
},
"sensitive_attributes": [],
"private": "bnVsbA==",
"dependencies": [
"vultr_instance.aio2",
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_block_storage",
"name": "cinder-aio3",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"attached_to_instance": "f569916f-3b07-454f-b45e-02e51dcc2493",
"cost": 10,
"date_created": "2022-02-06T20:01:23+00:00",
"id": "2cf2e1a9-9a53-430a-b67a-c15171ea0a40",
"label": "ala-storage-cinder-aio",
"live": false,
"mount_id": "ewr-2cf2e1a99a5343",
"region": "ewr",
"size_gb": 100,
"status": "active"
},
"sensitive_attributes": [],
"private": "bnVsbA==",
"dependencies": [
"vultr_instance.aio3",
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_block_storage",
"name": "cinder-aio4",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"attached_to_instance": "e84fb371-83e3-481d-b78b-d82fc6f2e8a8",
"cost": 10,
"date_created": "2022-02-06T23:43:40+00:00",
"id": "edbfad6e-1087-4e65-8279-357063c253f8",
"label": "ala-storage-cinder-aio",
"live": false,
"mount_id": "ewr-edbfad6e10874e",
"region": "ewr",
"size_gb": 100,
"status": "active"
},
"sensitive_attributes": [],
"private": "bnVsbA==",
"dependencies": [
"vultr_instance.aio4",
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_instance",
"name": "aio1",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 4000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2022-02-04T04:02:24+00:00",
"ddos_protection": false,
"default_password": "!1Dz3z$#h_%Uq=8_",
"disk": 160,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "140.82.40.1",
"hostname": "ala-aio1",
"id": "c0d2a6f1-160d-49a6-9f4c-064fd13f5763",
"image_id": null,
"internal_ip": "172.29.220.3",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8UTM1bENHRjhiNG5LMnFkMWFWVlRxeU5EUERnUVBCa0Z8oMF6IjCl3pwoUw6oJ-6kG-zljg3-s1GhSTYafr7MdAghAEzKBS7yzgqDFDiii8l3__SNeMYHKxgGJEPbkxArxiDGtAykMfvfqID4vjVdGEKau3BohqkZOIRUWUFx7TqPWuhrh4fHj_1Dg4U-cEVNnwMYVa7sIXtFwAPJPbVuiey8PnbSUWMrW7cs-Y1ufFO-w9Ew-Y2V",
"label": "aio1",
"main_ip": "140.82.41.130",
"netmask_v4": "255.255.254.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vc2-4c-8gb",
"power_status": "running",
"private_network_ids": [
"e11bbba9-c00a-4e88-bf49-075b880692c3"
],
"ram": 8192,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"server_status": "ok",
"snapshot_id": null,
"ssh_key_ids": [
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa aio",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 4
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
"dependencies": [
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_instance",
"name": "aio2",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 4000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2022-02-05T22:05:46+00:00",
"ddos_protection": false,
"default_password": "L8[y+GY#WZyof)Km",
"disk": 160,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "64.154.38.1",
"hostname": "ala-aio2",
"id": "acc242da-c48f-4f36-8c7d-8f3e63cb805b",
"image_id": null,
"internal_ip": "172.29.220.4",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8X3pjNmJjQnZaMklRXzk0T3FkZEswbFJXRTVrVm9BdDR8vygO3hPWj9Lgb4ge1iAEhcCDZnpBjHUzrxI1IRpzSw2TXvyV-OspydaWlj87wsDsM_3DqT6ERG26SH0sL-LDPxypBGzeh-rSEAjW-CmRDwFwi0fVC9fOnHUHmcoxed6WQXfv-QWPdQ15-msWLC7vXbUZoKEtWVRyoUbCTeZftlGdd9_sc8MswLNZrKq0rcI6nlzX0VOL",
"label": "aio2",
"main_ip": "64.154.38.77",
"netmask_v4": "255.255.255.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vc2-4c-8gb",
"power_status": "running",
"private_network_ids": [
"e11bbba9-c00a-4e88-bf49-075b880692c3"
],
"ram": 8192,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"server_status": "ok",
"snapshot_id": null,
"ssh_key_ids": [
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa aio",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 4
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
"dependencies": [
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_instance",
"name": "aio3",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 4000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2022-02-06T20:03:33+00:00",
"ddos_protection": false,
"default_password": "Jz9-.S]CW,C,4(SZ",
"disk": 160,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "45.32.5.1",
"hostname": "ala-aio3",
"id": "f569916f-3b07-454f-b45e-02e51dcc2493",
"image_id": null,
"internal_ip": "172.29.220.5",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8dWFwOG1QT0ozVzZacTRPZmxxT0VEaVhldW5UR3gtZDZ8yshtgBuoVy2-iHF2qlaSnWyetcSeD7AtXtYyXUldP0KRldwY_ljvnMNOkifBXyc1blPWkeIo4LFrpwTRj6saJmnOrjC132X4xZiVHJtKZdQBAzPQ8eh7mZBWZPGPn29TorALb4vQWbtXEHrXDhnvl956LJDQhq1VmUCaqkZLZ8OFnuc6gPlmB2VxN9-34mzuJdYRbuEy",
"label": "aio3",
"main_ip": "45.32.5.54",
"netmask_v4": "255.255.255.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vc2-4c-8gb",
"power_status": "running",
"private_network_ids": [
"e11bbba9-c00a-4e88-bf49-075b880692c3"
],
"ram": 8192,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"server_status": "ok",
"snapshot_id": null,
"ssh_key_ids": [
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa aio",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 4
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
"dependencies": [
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_instance",
"name": "aio4",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 4000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2022-02-06T23:58:24+00:00",
"ddos_protection": false,
"default_password": "cM%5b}c3,]WjAsEn",
"disk": 160,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "207.246.90.1",
"hostname": "ala-aio4",
"id": "e84fb371-83e3-481d-b78b-d82fc6f2e8a8",
"image_id": null,
"internal_ip": "172.29.220.6",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8eDV3bzZZOHo4Q0JFejh1OEtyM1BNU3NTcWFsWDA4VWF8CrgZR4IgBO9LLgps3kw5B1-KaFS5q9dl_1_exgtcBhZ61LKTXIrqQpzdlmoYPbMUQm2jS9No1G6I1vBThssyo_8JIXyfFwMgIvP5roA5XUDrYgn3c5ALQDnyq-ZdnvcuPkH2IwVsz7vP58Szx048kN78X0BUmbn7iba_LnHbr_78h-r9O1LMrc1kXLUaRCOE9VNz8uI",
"label": "aio4",
"main_ip": "207.246.90.98",
"netmask_v4": "255.255.254.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vc2-4c-8gb",
"power_status": "running",
"private_network_ids": [
"e11bbba9-c00a-4e88-bf49-075b880692c3"
],
"ram": 8192,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"server_status": "ok",
"snapshot_id": null,
"ssh_key_ids": [
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa aio",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 4
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
"dependencies": [
"vultr_private_network.mgmt2",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_instance",
"name": "compute1",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"status": "tainted",
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 10000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2022-02-07T22:48:54+00:00",
"ddos_protection": false,
"default_password": "]P6j_Mro!?kgn?[)",
"disk": 110,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "149.28.234.1",
"hostname": "ala-compute1",
"id": "82183496-739f-4f76-b3d2-7b6caa5b4784",
"image_id": null,
"internal_ip": "",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8MlBOTndGVVU5VlJ3OGd2eDU2WkhyZlNOck00WFo2c0h8Gta2M15iTVao1LILTrpb9c0967yqjOYLYBPfxRJ1NeFf8ogYrX2mp8Zg5r0GJMbQSJW9eAYRerQRMFUouVw6o0A8wsQZ2UGToNQdfL2bF6htNZ6Tyy3SiMYfE4LhGhb7X814MlAQk89jMjLuIJ59O6H6wUALBUyjm5NBw7U7_2FjRoStMtLko1djMPqmaQdytjCk3-Faa8Zsiw",
"label": "compute1",
"main_ip": "149.28.235.72",
"netmask_v4": "255.255.254.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vdc-2c-8gb",
"power_status": "running",
"private_network_ids": [
"6bc8c36b-c3b1-4710-9880-c8ad4a53399c",
"ec94cea3-8385-49dd-930c-b2f0a1304a16",
"52bf92d4-a2cd-4266-b8a4-5dbd4190b174"
],
"ram": 8192,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"server_status": "ok",
"snapshot_id": null,
"ssh_key_ids": [
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa compute",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 2
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
"dependencies": [
"vultr_private_network.mgmt",
"vultr_private_network.storage",
"vultr_private_network.tunnel",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_instance",
"name": "infra1",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"status": "tainted",
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 20000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2022-02-07T22:48:54+00:00",
"ddos_protection": false,
"default_password": "_2tWg3KQqKoCPsa{",
"disk": 110,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "207.148.22.1",
"hostname": "ala-infra1",
"id": "3ed345db-7bb6-4865-b958-8afe2df67235",
"image_id": null,
"internal_ip": "",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8aW9sc210Nl80Mi1fUnVGRlRGNXpOaW5GTUR1NmhJQWt8dAmz-C5jKXjf-Vr4r8AYGmc86FbXX3LNN0NDF-nmtQI4ZAdQO2B9wZjVUe50FmM-tB3hnbj78H-Zqa4XPtAlIgCIbBLUQktIf7B5LyIdmdzaF70iPUNA0PcRrzToZvqtDjNZx8Od5VlmGgpnsnM9PTK2OhYk_3FIVE_7hZenxVDbhLiSEKTPIpWJwYlOjFdpGQEHUDcHwcY",
"label": "infra1",
"main_ip": "207.148.23.90",
"netmask_v4": "255.255.254.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vdc-4c-16gb",
"power_status": "running",
"private_network_ids": [
"6bc8c36b-c3b1-4710-9880-c8ad4a53399c",
"ec94cea3-8385-49dd-930c-b2f0a1304a16",
"52bf92d4-a2cd-4266-b8a4-5dbd4190b174"
],
"ram": 16384,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"server_status": "ok",
"snapshot_id": null,
"ssh_key_ids": [
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa infra",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 4
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
"dependencies": [
"vultr_private_network.mgmt",
"vultr_private_network.storage",
"vultr_private_network.tunnel",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_instance",
"name": "storage1",
"provider": "provider[\"registry.terraform.io/vultr/vultr\"]",
"instances": [
{
"status": "tainted",
"schema_version": 0,
"attributes": {
"activation_email": false,
"allowed_bandwidth": 3000,
"app_id": 0,
"backups": "disabled",
"backups_schedule": [],
"date_created": "2022-02-07T22:48:54+00:00",
"ddos_protection": false,
"default_password": "3R]o9r[orgrQ7Y3H",
"disk": 80,
"enable_ipv6": false,
"enable_private_network": false,
"features": [],
"firewall_group_id": "",
"gateway_v4": "104.238.128.1",
"hostname": "ala-storage1",
"id": "bc723c9c-eae5-4f9b-9554-95d2924aaaca",
"image_id": null,
"internal_ip": "",
"iso_id": null,
"kvm": "https://my.vultr.com/subs/vps/novnc/api.php?data=djJ8bWRlb1JYYS1kTEJ6Vi1iaGJfRlVONkNnU3J4OERXcVp8BWXjYiqhMKEQTKYavXHPm-IqWcfayCbCJgSpwbDY4cysstheJQZW68OS38WJp2ag3ldR9zkShQxG7zeIh6cvHSyF-c6S3-gFPHOvqmS3cyNRcfdplBQThjzqYiwc1xekAaWUanop31WolmlGz0g89fP2tp3OIuNLdw0zhJRYERgp_QY08XyU6TATsZZifdFu8JXNAsED7vcaNw",
"label": "storage1",
"main_ip": "104.238.130.249",
"netmask_v4": "255.255.252.0",
"os": "Rocky Linux x64",
"os_id": 448,
"plan": "vc2-2c-4gb",
"power_status": "running",
"private_network_ids": [
"ec94cea3-8385-49dd-930c-b2f0a1304a16",
"52bf92d4-a2cd-4266-b8a4-5dbd4190b174"
],
"ram": 4096,
"region": "ewr",
"reserved_ip_id": null,
"script_id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"server_status": "ok",
"snapshot_id": null,
"ssh_key_ids": [
"f57eb103-38ce-4a82-a5de-3ffcf1646792"
],
"status": "active",
"tag": "advancedlsa storage",
"timeouts": null,
"user_data": null,
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": 0,
"vcpu_count": 2
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjozNjAwMDAwMDAwMDAwLCJ1cGRhdGUiOjM2MDAwMDAwMDAwMDB9fQ==",
"dependencies": [
"vultr_private_network.mgmt",
"vultr_private_network.storage",
"vultr_ssh_key.terraform"
]
}
]
},
{
"mode": "managed",
"type": "vultr_private_network",
@@ -185,10 +811,10 @@
"schema_version": 0,
"attributes": {
"date_created": "2021-10-30T21:37:07+00:00",
"date_modified": "2021-12-05T22:23:54+00:00",
"date_modified": "2021-12-30T00:30:37+00:00",
"id": "0eeabbfb-2d2f-4797-a85a-82d3e1f235bd",
"name": "advancedlsa",
"script": "ZG5mIC15IGluc3RhbGwgZ2l0LWNvcmUgZ2l0IHdnZXQgcHl0aG9uMzYgY2hyb255IG9wZW5zc2gtc2VydmVyIHB5dGhvbjMtZGV2ZWwgc3VkbwpzeXN0ZW1jdGwgc3RvcCBmaXJld2FsbGQKc3lzdGVtY3RsIG1hc2sgZmlyZXdhbGxkCnNlZCAtaSAncy9lbmZvcmNpbmcvcGVybWlzc2l2ZS8nIC9ldGMvc3lzY29uZmlnL3NlbGludXgKc2VkIC1pICdzL2VuZm9yY2luZy9wZXJtaXNzaXZlLycgL2V0Yy9zZWxpbnV4L2NvbmZpZwojZ2l0IGNsb25lIC0tYnJhbmNoIGZlYXR1cmUvcm9ja3k4IC0tc2luZ2xlLWJyYW5jaCBodHRwczovL2dpdGh1Yi5jb20vTmVpbEhhbmxvbi9vcGVuc3RhY2stYW5zaWJsZS5naXQgL29wdC9vcGVuc3RhY2stYW5zaWJsZQp0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQKI2RuZiAteSBpbnN0YWxsIGh0dHBzOi8vcmVwb3MuZmVkb3JhcGVvcGxlLm9yZy9yZXBvcy9vcGVuc3RhY2svb3BlbnN0YWNrLXhlbmEvcmRvLXJlbGVhc2UteGVuYS0xLmVsOC5ub2FyY2gucnBtCmNhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKCk1hdGNoIFVzZXIgcm9vdAogIEFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCgpFT0YKCnN5c3RlbWN0bCByZXN0YXJ0IHNzaGQKI2NwIC9vcHQvYW5zaWJsZS1ydW50aW1lL2xpYi9weXRob24zLjYvc2l0ZS1wYWNrYWdlcy9hbnNpYmxlL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkgL2V0Yy9hbnNpYmxlL3JvbGVzL3BsdWdpbnMvY29ubmVjdGlvbi9zc2gucHkK",
"script": "I2RuZiAteSB1cGdyYWRlICYKIwojSE9TVE5BTUU9IiQoaG9zdG5hbWUpIgojCiNpZiBbWyAkKGhvc3RuYW1lKSA9fiBpbmZyYSBdXTsgdGhlbgojICAgIGRuZiAteSBpbnN0YWxsIGdpdC1jb3JlIGdpdCB3Z2V0IHB5dGhvbjM2IGNocm9ueSBvcGVuc3NoLXNlcnZlciBweXRob24zLWRldmVsIHN1ZG8KIyAgICBnaXQgY2xvbmUgLS1icmFuY2ggZmVhdHVyZS9yb2NreTggLS1zaW5nbGUtYnJhbmNoIGh0dHBzOi8vZ2l0aHViLmNvbS9OZWlsSGFubG9uL29wZW5zdGFjay1hbnNpYmxlLmdpdCAvb3B0L29wZW5zdGFjay1hbnNpYmxlCiNmaQojCiMjIEFsd2F5cyBzdG9wIGFuZCBtYXNrIGZpcmV3YWxsZAojc3lzdGVtY3RsIHN0b3AgZmlyZXdhbGxkCiNzeXN0ZW1jdGwgbWFzayBmaXJld2FsbGQKIwojIyBBbHdheXMgc2V0IHRvIHBlcm1pc3NpdmUuCiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3N5c2NvbmZpZy9zZWxpbnV4CiNzZWQgLWkgJ3MvZW5mb3JjaW5nL3Blcm1pc3NpdmUvJyAvZXRjL3NlbGludXgvY29uZmlnCiMKIyMgRGlzYWJsZSBhZ2VudCBmb3J3YXJkaW5nLCBpbiBjYXNlIHVzZXIgZm9yd2FyZHMgYWdlbnQsIHNvIGFzIG5vdCB0byBjb25mdXNlIGFuc2libGUKI2NhdCA8PCBFT0YgfCB0ZWUgLWEgL2V0Yy9zc2gvc3NoZF9jb25maWcKI01hdGNoIFVzZXIgcm9vdAojICBBbGxvd0FnZW50Rm9yd2FyZGluZyBubwojCiNFT0YKIwojc3lzdGVtY3RsIHJlc3RhcnQgc3NoZAojCiN0b3VjaCAvZXRjL2Nsb3VkL2Nsb3VkLWluaXQuZGlzYWJsZWQgfHwgZXhpdCAxCiN0cnVlCg==",
"type": "boot"
},
"sensitive_attributes": [],