Fixes for 20.04 and new docker compose
joseph-v committed Nov 22, 2022
1 parent f2172cc commit e296889
Showing 9 changed files with 176 additions and 91 deletions.
103 changes: 64 additions & 39 deletions installer/README.md
@@ -1,29 +1,70 @@
# Delfin Installation Guide

SODA Delfin supports two types of installation:
* Installation using Ansible, for the full user experience with the Dashboard
* Installation using scripts

## Ansible installer

* Supported OS: **Ubuntu 20.04, Ubuntu 18.04**
* Prerequisite: **Python 3.6 or above** should be installed

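The Python prerequisite above can be verified before starting; a minimal sketch, assuming `python3` is already on the PATH:

```shell
# Fail fast if the interpreter predates Python 3.6.
python3 -c 'import sys; assert sys.version_info >= (3, 6), "Python 3.6+ required"' \
  && echo "python3 OK"
```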
### Install steps

```bash
git clone https://github.com/sodafoundation/delfin.git
cd delfin
git checkout <delfin-release-version>
cd installer
chmod +x install_dependencies.sh
./install_dependencies.sh
cd ansible
PATH=$PATH:~/.local/bin
sudo -E env "PATH=$PATH" ansible-playbook site.yml -i local.hosts -v
```
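Before running the playbook, it can help to confirm that `install_dependencies.sh` actually put `ansible-playbook` on the PATH (hence the `~/.local/bin` addition above); a tolerant check sketch:

```shell
# Ansible installed via pip for the current user lands in ~/.local/bin.
PATH=$PATH:~/.local/bin
if command -v ansible-playbook >/dev/null 2>&1; then
  ansible-playbook --version | head -n 1
else
  echo "ansible-playbook not found; re-run install_dependencies.sh"
fi
```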
### Uninstall
```bash
sudo -E env "PATH=$PATH" ansible-playbook site.yml -i local.hosts -v
```

### Logs
Delfin process execution logs can be found in the /tmp/ folder:
* /tmp/api.log
* /tmp/alert.log
* /tmp/task.log
* /tmp/exporter.log
* /tmp/create_db.log
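When troubleshooting, the logs listed above can be followed together; a small sketch that skips files that do not exist yet:

```shell
# Print the tail of every delfin log that is present; missing files are skipped.
for f in /tmp/api.log /tmp/alert.log /tmp/task.log /tmp/exporter.log /tmp/create_db.log; do
  [ -f "$f" ] && echo "== $f ==" && tail -n 20 "$f"
done
true  # the loop's last test may fail for absent files; don't treat that as an error
```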

### How to use Delfin
Delfin can be used either through the dashboard or the REST APIs.

Please refer to the [user guides](https://docs.sodafoundation.io/guides/user-guides/delfin/dashboard/)



## Bash installer
This is a standalone, non-containerized installer for the SODA Infrastructure Manager (delfin) project.
It provides a script and options to check whether the environment is suitable for installing delfin, and installs the required dependent software and binaries.

* Supported OS: **Ubuntu 20.04, Ubuntu 18.04**
* Prerequisite:
* **Python 3.6 or above** should be installed
* Ensure the logged-in user has **root privileges**.

#### Installation steps
```bash
sudo -i
apt-get install python3 python3-pip
git clone https://github.com/sodafoundation/delfin.git
cd delfin
git checkout <delfin-release-version>
export PYTHONPATH=$(pwd)
./installer/install
```
Refer to the installer options below.
#### Uninstall
```bash
./installer/uninstall
```

- #### [Optional] Setup Prometheus (to monitor performance metrics through Prometheus)

@@ -55,13 +96,13 @@ Ubuntu 16.04, Ubuntu 18.04
```sh
root@root:/prometheus/prometheus-2.20.0.linux-amd64$ ./prometheus
```
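Once Prometheus is running as above, its built-in health endpoint can confirm it is serving; a sketch assuming the default port 9090 (this check is not part of the installer itself):

```shell
# Prometheus exposes /-/healthy as a liveness endpoint on its HTTP port.
if curl -fsS http://localhost:9090/-/healthy >/dev/null 2>&1; then
  echo "prometheus healthy"
else
  echo "prometheus not reachable"
fi
```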
### Structure of the installer
This installer comes with options for pre-check, install and uninstall.
Pre-check: checks for the components required by delfin to function; if they are not present, pre-check installs them.
Install: installs and starts the delfin processes.
Uninstall: uninstalls delfin. It does not uninstall the required components; you may need to remove those explicitly using the native approach.
### How to install
To get help, execute `install -h`. It will show help information.
The install script can be executed with three different switches to:
@@ -131,28 +172,12 @@ $ installer/install
Note: Multiple instances of the exporter and api are not currently allowed.
#### Post install verification
After delfin installation, use the following command to verify that all delfin processes are running.
```sh
ps -aux | grep delfin
```
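The `ps` check above can be wrapped into a per-service sketch. The service names (api, task, alert, exporter) are inferred from the log files listed earlier in this README, not confirmed process names:

```shell
# Hypothetical per-service check; adjust the pattern to the actual process names.
for svc in api task alert exporter; do
  if pgrep -f "delfin.*$svc" >/dev/null; then
    echo "$svc: running"
  else
    echo "$svc: NOT running"
  fi
done
```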
### Logs
All the installer logs are stored in the /var/log/sodafoundation directory.
The logs can be uniquely identified based upon the timestamp.
## Test the running delfin setup/process
1. Make sure all delfin processes are up and running
```
ps -ef | grep delfin
@@ -197,5 +222,5 @@ The logs can be uniquely identified based upon the timestamp.
http://localhost:9090/graph
## Limitation
Local installation, unlike the Ansible installer, does not support SODA Dashboard integration.
2 changes: 1 addition & 1 deletion installer/ansible/roles/cleaner/scenarios/delfin.yml
@@ -16,7 +16,7 @@
- name: Stop delfin containers, if started
shell: "{{ item }}"
with_items:
- docker-compose down
- docker compose down
become: yes
ignore_errors: yes
args:
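The change above swaps the standalone `docker-compose` (Compose V1) binary for the `docker compose` CLI plugin (Compose V2), which is what the commit title refers to. For scripts that must run on hosts with either variant, a hedged detection sketch:

```shell
# Prefer the Compose V2 plugin; fall back to the legacy binary if present.
if docker compose version >/dev/null 2>&1; then
  COMPOSE="docker compose"
elif command -v docker-compose >/dev/null 2>&1; then
  COMPOSE="docker-compose"
else
  echo "No Docker Compose found" >&2
  COMPOSE=""
fi
echo "Using: ${COMPOSE:-none}"
# Invoke as: $COMPOSE up -d
```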
4 changes: 2 additions & 2 deletions installer/ansible/roles/cleaner/scenarios/srm-toolchain.yml
@@ -19,15 +19,15 @@
register: srmtoolchainexisted

- name: Stop and remove Prometheus, Alertmanager, Grafana containers but don't delete the images
shell: docker-compose rm -fs
shell: docker compose rm -fs
args:
chdir: "{{ srm_toolchain_work_dir }}/"
when:
- source_purge == false
- srmtoolchainexisted.stat.isdir is defined and srmtoolchainexisted.stat.isdir

- name: Stop and remove Prometheus, Alertmanager, Grafana containers & delete the images
shell: docker-compose down --rmi all
shell: docker compose down --rmi all
args:
chdir: "{{ srm_toolchain_work_dir }}/"
when:
@@ -17,7 +17,7 @@
shell: "{{ item }}"
with_items:
- docker build -t sodafoundation/delfin .
- DELFIN_METRICS_DIR={{ delfin_exporter_prometheus_metrics_dir }} DELFIN_HOST_IP={{ host_ip }} DELFIN_RABBITMQ_USER={{ delfin_rabbitmq_user }} DELFIN_RABBITMQ_PASS={{ delfin_rabbitmq_pass }} docker-compose up -d
- DELFIN_METRICS_DIR={{ delfin_exporter_prometheus_metrics_dir }} DELFIN_HOST_IP={{ host_ip }} DELFIN_RABBITMQ_USER={{ delfin_rabbitmq_user }} DELFIN_RABBITMQ_PASS={{ delfin_rabbitmq_pass }} docker compose up -d
become: yes
args:
chdir: "{{ delfin_work_dir }}"
@@ -57,12 +57,12 @@
shell: export PROMETHEUS=True

- name: Stop and remove Prometheus, Alertmanager, Grafana containers, keeping images
shell: docker-compose rm -fs
shell: docker compose rm -fs
args:
chdir: "{{ srm_toolchain_work_dir }}/"

- name: start service
shell: docker-compose up -d
shell: docker compose up -d
args:
chdir: "{{ srm_toolchain_work_dir }}/"

7 changes: 6 additions & 1 deletion installer/ansible/script/create_db.py
@@ -28,10 +28,14 @@
CONF = cfg.CONF
db_options.set_defaults(cfg.CONF,
connection='sqlite:////var/lib/delfin/delfin.sqlite')


def remove_prefix(text, prefix):
if text.startswith(prefix):
return text[len(prefix):]
return text


def main():
CONF(sys.argv[1:], project='delfin',
version=version.version_string())
@@ -41,6 +45,7 @@ def main():
if not os.path.exists(path):
os.makedirs(path)
db.register_db()


if __name__ == '__main__':
main()

72 changes: 41 additions & 31 deletions installer/ansible/script/ministone.py
@@ -19,22 +19,23 @@
import requests
import json


def token_issue():
body = {
'auth': {
'identity': { 'methods': ['password'],
'password': {
'user': {
'name': OS_USERNAME,
'domain': { 'name': OS_USER_DOMAIN_NAME },
'password': OS_PASSWORD
}
}
},
'identity': {'methods': ['password'],
'password': {
'user': {
'name': OS_USERNAME,
'domain': {'name': OS_USER_DOMAIN_NAME},
'password': OS_PASSWORD
}
}
},
'scope': {
'project': {
'name': OS_PROJECT_NAME,
'domain': { 'name': OS_USER_DOMAIN_NAME }
'name': OS_PROJECT_NAME,
'domain': {'name': OS_USER_DOMAIN_NAME}
}
}
}
@@ -45,8 +46,9 @@ def token_issue():
try:
r_post = requests.post(OS_AUTH_URL + '/v3/auth/tokens',
headers=headers, data=json.dumps(body))
except:
except Exception as ex:
print('ERROR: %s' % (body))
print('Exception: %s' % (ex))
return None

if debug:
@@ -58,6 +60,7 @@
else:
return None


def service_list(token):
headers = {
'Content-Type': 'application/json',
@@ -75,12 +78,14 @@ def service_list(token):
result_list = json.loads(r_get.text)['services']

for s in result_list:
result_dict[s['name']] = s['id']
except:
result_dict[s['name']] = s['id']
except Exception as ex:
print("Got exception: %s" % ex)
return None

return result_dict


def endpoint_list(token, service):

headers = {
@@ -93,8 +98,9 @@ def endpoint_list(token, service):
if debug:
print('DEBUG: GET /v3/endpoints - status_code = %s' %
(r_get.status_code))
except:
return None
except Exception as ex:
print("Got exception: %s" % ex)
return None

if r_get.status_code != 200:
return None
@@ -104,14 +110,16 @@

ep_list = []
for ep in json.loads(response)['endpoints']:
if service in service_dict.keys() and (ep['service_id'] == service_dict[service]):
if service in service_dict.keys() and (ep['service_id'] ==
service_dict[service]):
if debug:
print('DEBUG: %s %s' % (ep['id'], ep['interface']))
print('DEBUG: url %s' % (ep['url']))
ep_list.append([ep['id'], ep['interface']])

return ep_list


def endpoint_bulk_update(token, service, url):
headers = {
'Content-Type': 'application/json',
@@ -126,7 +134,7 @@ def endpoint_bulk_update(token, service, url):
print("DEBUG: ep_list: %s %s" % (ep_list, url))

for ep in ep_list:
body = {"endpoint": { "url": url }}
body = {"endpoint": {"url": url}}
endpoint_id = ep[0]
if debug:
print("DEBUG: %s / %s" %
@@ -136,8 +144,9 @@ def endpoint_bulk_update(token, service, url):
r_patch = requests.patch(OS_AUTH_URL + '/v3/endpoints/' +
endpoint_id,
headers=headers, data=json.dumps(body))
except:
print('ERROR: endpoint update for id: %s failed.' % (endpoint_id))
except Exception as ex:
print('ERROR: endpoint update for id: %s failed. %s'
      % (endpoint_id, ex))
# continue for all the given endpoints
if r_patch.status_code != 200:
print('ERROR: endpoint update for id: %s failed. HTTP %s' %
@@ -159,36 +168,37 @@ def endpoint_bulk_update(token, service, url):
# Updates URL portion of keystone endpoints of given SERVICE_NAME
# in one action.
#


if __name__ == '__main__':

debug = False

OS_AUTH_URL=os.environ['OS_AUTH_URL']
OS_PASSWORD=os.environ['OS_PASSWORD']
OS_PROJECT_DOMAIN_NAME=os.environ['OS_PROJECT_DOMAIN_NAME']
OS_PROJECT_NAME=os.environ['OS_PROJECT_NAME']
OS_USERNAME=os.environ['OS_USERNAME']
OS_USER_DOMAIN_NAME=os.environ['OS_USER_DOMAIN_NAME']
#OS_USER_DOMAIN_ID=os.environ['OS_USER_DOMAIN_ID']
OS_AUTH_URL = os.environ['OS_AUTH_URL']
OS_PASSWORD = os.environ['OS_PASSWORD']
OS_PROJECT_DOMAIN_NAME = os.environ['OS_PROJECT_DOMAIN_NAME']
OS_PROJECT_NAME = os.environ['OS_PROJECT_NAME']
OS_USERNAME = os.environ['OS_USERNAME']
OS_USER_DOMAIN_NAME = os.environ['OS_USER_DOMAIN_NAME']
# OS_USER_DOMAIN_ID=os.environ['OS_USER_DOMAIN_ID']

# token_issue
# used for keystone process start up check.
token = ''
if len(sys.argv) == 2 and sys.argv[1] == 'token_issue':
token = token_issue()
if not token:
sys.exit(1)
sys.exit(1)
else:
sys.exit(0)
sys.exit(0)

# endpoint_bulk_update
# used for overwriting keystone endpoints
if not ((len(sys.argv) == 4) and (sys.argv[1] == 'endpoint_bulk_update')):
print('Specify service_name and url for bulk update. Exiting...')
sys.exit(1)
sys.exit(1)

token = token_issue()
if not token:
sys.exit(1)
endpoint_bulk_update(token, sys.argv[2], sys.argv[3])
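Per the `__main__` block above, `ministone.py` supports two invocations. A usage sketch follows; every value here (auth URL, credentials, the `delfin` service name, and the endpoint URL with port 8190) is an illustrative placeholder, and a reachable Keystone service is required for the commands to succeed:

```shell
# Environment variables read by ministone.py (illustrative values only).
export OS_AUTH_URL=http://127.0.0.1/identity
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default

# Exits 0 once Keystone can issue a token (used as a start-up check).
python3 installer/ansible/script/ministone.py token_issue

# Rewrites the URL of every endpoint registered for the named service.
python3 installer/ansible/script/ministone.py endpoint_bulk_update delfin http://127.0.0.1:8190/v1
```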

