diff --git a/LICENSE.md b/LICENSE.md index 5aa5556..902cbb6 100644 --- a/LICENSE.md +++ b/LICENSE.md @@ -1,6 +1,6 @@ # MIT License -Copyright (c) 2017-2019 MCCI Corporation +Copyright (c) 2017-2020 MCCI Corporation Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/README.md b/README.md index fbac68f..297a2c5 100644 --- a/README.md +++ b/README.md @@ -1,155 +1,251 @@ -# Dashboard example for The Things Network +# Dashboard example for Internet of Things (IoT) -This repository contains a complete example that grabs device data from The Things Network, stores it in a database, and then displays the data using a web-based dashboard. +This repository contains a complete example that grabs device data from IoT-Network server, stores it in a database, and then displays the data using a web-based dashboard. You can set this up on a "Ubuntu + Docker" VM from the Microsoft Azure store (or on a Ubuntu VM from [DreamCompute](https://www.dreamhost.com/cloud/computing/), or on a Docker droplet from [Digital Ocean](https://www.digitalocean.com/)) with minimal effort. You should set up this service to run all the time so as to capture the data from your devices; you then access the data at your convenience using a web browser. -This dashboard uses [docker-compose](https://docs.docker.com/compose/overview/) to set up a group of four primary [docker containers](https://www.docker.com), backed by two auxiliary containers: +**Table of Contents** -1. An instance of [Apache](http://apache.org), which proxies the other services, handles access control, gets SSL certificates from [Let's Encrypt](https://letsencrypt.org), and faces the outside world. -2. An instance of [Node-RED](http://nodered.org/), which processes the data from the individual nodes, and puts it into the database. -3. An instance of [InfluxDB](https://www.influxdata.com/), which stores the data as time-series measurements with tags. -4. An instance of [Grafana](http://grafana.org/), which gives a web-based dashboard interface to the data. + + + + -The auxiliary containers are: +- [Introduction](#introduction) +- [Definitions](#definitions) +- [Security](#security) +- [Assumptions](#assumptions) +- [Composition and External Ports](#composition-and-external-ports) +- [Data Files](#data-files) +- [Reuse and removal of data files](#reuse-and-removal-of-data-files) +- [Node-RED and Grafana Examples](#node-red-and-grafana-examples) + - [Connecting to InfluxDB from Node-RED and Grafana](#connecting-to-influxdb-from-node-red-and-grafana) + - [Logging in to Grafana](#logging-in-to-grafana) + - [Data source settings in Grafana](#data-source-settings-in-grafana) +- [MQTTS Examples](#mqtts-examples) +- [Setup Instructions](#setup-instructions) +- [Influxdb Backup and Restore](#influxdb-backup-and-restore) +- [Meta](#meta) -1. `influxdb-backup`, which (if configured) runs periodic backups. -2. `postfix`, which (if configured) handles outbound mail services for the containers. + + + -To make things more specific, most of the description here assumes use of Microsoft Azure. However, I have tested this on Ubuntu 16 LTS without difficulty (apart from the additional complexity of setting up `apt-get` to fetch docker, and the need for a manual install of `docker-compose`), on DreamCompute, and on Digital Ocean. I believe that this will work on any Linux or Linux-like platform that supports docker, docker-compose, and node-red. 
It's likely to run on a Raspberry Pi... but as of this writing, this has not been tested.
+## Introduction
+The accompanying [`SETUP.md`](./SETUP.md) explains the application server installation and setup. [Docker](https://docs.docker.com/) and [Docker Compose](https://docs.docker.com/compose/) are used to make the installation and setup easier.
+This dashboard uses [docker-compose](https://docs.docker.com/compose/overview/) to set up a group of five primary [docker containers](https://www.docker.com), backed by one auxiliary container:
+1. An instance of [Nginx](https://www.nginx.com/), which proxies the other services, handles access control, gets SSL certificates from [Let's Encrypt](https://letsencrypt.org/), and faces the outside world.
+2. An instance of [Node-RED](http://nodered.org/), which processes the data from the individual nodes, and puts it into the database.
+3. An instance of [InfluxDB](https://docs.influxdata.com/influxdb/), which stores the data as time-series measurements with tags.
+4. An instance of [Grafana](http://grafana.org/), which gives a web-based dashboard interface to the data.
+5. An instance of [MQTT](https://mosquitto.org/) (the Mosquitto broker), which provides a lightweight method of carrying out messaging using a publish/subscribe model.
+The auxiliary container is:
+1. [Postfix](http://www.postfix.org/documentation.html), which (if configured) handles outbound mail services for the containers.
+To make things more specific, most of the description here assumes use of Microsoft Azure. However, this has been tested on Ubuntu 16 with no issues (apart from the additional complexity of setting up `apt-get` to fetch docker, and the need for a manual install of `docker-compose`), on DreamCompute, and on Digital Ocean. It should work on any Linux or Linux-like platform that supports docker, docker-compose, and Node-RED. It is likely to run on a Raspberry Pi, but that has not been tested yet.
## Definitions
-* The **host system** is the system running Docker and Docker-compose.
-* A **container** is one of the virtual systems running under Docker on the host system.
-* A **file on the host** is a file on the host system (typically not visible from within the container(s).
-* A **file in container *X*** (or a **file in the *X* container**) is a file in a file-system associated with container *X* (and typically not visible from the host system).
+- The **host system** is the system that runs Docker and Docker-compose.
+- A **container** is one of the virtual systems running under Docker on the host system.
+- A **file on the host** is a file present on the host system (typically not visible from within the container(s)).
+- A **file in container X** (or a **file in the X container**) is a file present in a file-system associated with container *X* (and typically not visible from the host system).
## Security
-All communication with the Apache server are encrypted using SSL with auto-provisioned certificates from Let's Encrypt. Grafana is the primary point of access for most users, and Grafana's login is used for that purpose.
+All communication with the Nginx server is encrypted using SSL with auto-provisioned certificates from Let's Encrypt. Grafana is the primary point of access for most users, and Grafana's login is used for that purpose. Access to Node-RED and InfluxDB is via special URLs (**base**/node-red/ and **base**/influxdb:8086/, where **base** is the URL served by the Nginx container). These URLs are protected via Nginx `htpasswd` file entries. These entries are files in the Nginx container, and must be manually edited by an administrator.
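A quick way to confirm that the Let's Encrypt certificate has actually been provisioned is to inspect it from any machine with OpenSSL installed. This is only a sketch; `server.example.com` stands in for your own **base** name:

```bash
# Show the issuer and validity dates of the certificate served on port 443.
echo | openssl s_client -connect server.example.com:443 -servername server.example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```

If provisioning worked, the issuer is a Let's Encrypt CA and the validity window covers roughly the next ninety days.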
-Access to Node-RED and InfluxDB is via special URLs (__base__/node-red/ and __base__/influxdb/, where __base__ is the URL served by the Apache container). These URLs are protected via Apache `htpasswd` and `htgroup` file entries. These entries are files in the Apache container, and must be manually edited by an administrator.
+The initial administrator's login password for Grafana must be initialized prior to starting; it's stored in `.env`. (When the Grafana container is started for the first time, it creates `grafana.db` in the Grafana container, and stores the password at that time. If `grafana.db` already exists, the password in `.env` is ignored.)
-The initial administrator's login password for Grafana must be initialized prior to starting the very first; it's stored in `grafana/.env`. (When you start the Grafana container the first time, it creates `grafana.db` in the Grafana container, and stores the password at that time. If `grafana.db` already exists, the password in `grafana/.env` is ignored.)
+Microsoft Azure, by default, will not open any of the ports to the outside world, so the user will need to open port 443 for SSL access to Nginx.
-Microsoft Azure, by default, will not open any of the ports to the outside world, so you will need to open port 443 for SSL access to Apache.
+For concreteness, the following table assumes that **base** is `server.example.com`.
-For concreteness, the following table assumes that __base__ is `server.example.com`.
+**User Access**
-To access | Open this link | Notes
-----------|----------------|--------
-Node-RED | [https://server.example.com/node-red/](https://server.example.com/node-red/) | Port number is not needed and shouldn't be used. Note trailing '/' after `node-red`.
-InfluxDB API queries | [https://server.example.com/influxdb/:8086](https://server.example.com/influxdb/:8086) | Port number **_is_** needed. Also note trailing '/' after `influxdb`.
-Grafana | [https://server.example.com](https://server.example.com) | Port number is not needed and shouldn't be used.
+|**To access**| **Open this link**| **Notes**|
+|-------------|-------------------|----------|
+| Node-RED | [https://server.example.com/node-red/](https://server.example.com/node-red/) | Port number is not needed and shouldn't be used. Note trailing '/' after `node-red`. |
+| InfluxDB API queries | [https://server.example.com/influxdb:8086/](https://server.example.com/influxdb:8086/) | Port number is needed. Also note the trailing '/' after `influxdb:8086`. |
+| Grafana | [https://server.example.com](https://server.example.com/)| Port number is not needed and shouldn't be used. |
+| MQTT | see [MQTTS Examples](#mqtts-examples) | An MQTT client is needed; it can also be tested via the [MQTT web portal](https://www.eclipse.org/paho/clients/js/utility/). |
+This can be visualized as shown in the figure below:
+**Docker connection and User Access**
-This can be visualized as below:
![Connection Architecture using SSH](assets/Connection-architecture.png)
## Assumptions
-* Your host system must have docker-compose 1.9 or later (for which see https://github.com/docker-compose -- be aware that apt-get normally doesn't grab this; if configured at all, it frequently gets an out-of-date version).
+- The host system must have docker-compose version 1.9 or later (for which see https://github.com/docker/compose -- be aware that apt-get normally doesn't grab this; if configured at all, it frequently gets an out-of-date version).
-* The environment variable `TTN_DASHBOARD_DATA`, if set, points to the common directory for your data. If not set, docker-compose will quit at startup. (This is by design!)
+- The environment variable `IOT_DASHBOARD_DATA`, if set, points to the common directory for the data. If not set, docker-compose will quit at start-up. (This is by design!) - - `${TTN_DASHBOARD_DATA}node-red` will have your local Node-RED data. - - `${TTN_DASHBOARD_DATA}influxdb` will have your local influxdb data (this is what you should back up) - - `${TTN_DASHBOARD_DATA}grafana` will have your dashboards + - `${IOT_DASHBOARD_DATA}node-red` will have the local Node-RED data. -## Composition and External Ports + - `${IOT_DASHBOARD_DATA}influxdb`  will have the local InfluxDB data (this should be backed-up) -Within their containers, the individual programs use their usual ports, but these are isolated from the outside world, except as specified by `docker-compose.yml`. + - `${IOT_DASHBOARD_DATA}grafana` will have all the dashboards -In `docker-compose.yml`, the following ports on the docker host are connected to the individual programs. + - `${IOT_DASHBOARD_DATA}docker-nginx` will have `.htpasswd` credentials folder `authdata` and Let's Encrypt certs folder `letsencrypt` -* Apache runs on 80 and 443. (All connections to port 80 are redirected to 443 using SSL). + - `${IOT_DASHBOARD_DATA}mqtt/credentials` will have the user credentials -Remember, if your server is running on a cloud platform like Microsoft Azure or AWS, you need to check the firewall and confirm that the ports are open to the outside world. +## Composition and External Ports -## Installation +Within the containers, the individual programs use their usual ports, but these are isolated from the outside world, except as specified by `docker-compose.yml` file. -Please refer to [`SETUP.md`](./SETUP.md) for detailed set-up instructions. +In `docker-compose.yml`, the following ports on the docker host are connected to the individual programs. + +- Nginx runs on 80 and 443. (All connections to port 80 are redirected to 443 using SSL). + +Remember, if the server is running on a cloud platform like Microsoft Azure or AWS, one needs to check the firewall and confirm that the ports are open to the outside world. ## Data Files -When designing this collection of services, we had to decide where to store the data files. We had two choices: keep them inside the docker containers, or keep them in locations on the host system. The advantage of the the former is that everything is reset when you rebuild the docker images. The disadvantage of the former is that you lose all your data when you rebuild. On the other hand, there's another level of indirection when keeping things on the host, as the files reside in different locations on the host and in the docker containers. +When designing this collection of services, there were two choices to store the +data files: + +- we could keep them inside the docker containers, or -Data files are kept in the following locations by default. +- we could keep them in locations on the host system. -Component | Data file location on host | Location in container -----------|----------------------------|---------------------- -Node-RED | `${TTN_DASHBOARD_DATA}node-red` | `/data` -InfluxDB | `${TTN_DASHBOARD_DATA}influxdb`| `/data` -Grafana | `${TTN_DASHBOARD_DATA}grafana`| `/var/lib/grafana` +The advantage of the former is that everything is reset when the docker images are rebuilt. The disadvantage of the former is that there is a possibility to lose all the data when it’s rebuilt. 
On the other hand, there's another level of indirection when keeping things on the host, as the files reside in different locations on the host and in the docker containers.
+Because IoT data is generally persistent, we decided that the extra level of indirection was required. To help find things, consult the following table. Data files are kept in the following locations by default.
+| **Component** | **Data file location on host**| **Location in container** |
+|---------------|-----------|----------------------------|
+| Node-RED | `${IOT_DASHBOARD_DATA}node-red`| /data |
+| InfluxDB | `${IOT_DASHBOARD_DATA}influxdb` | /var/lib/influxdb |
+| Grafana | `${IOT_DASHBOARD_DATA}grafana` | /var/lib/grafana |
+| Mqtt | `${IOT_DASHBOARD_DATA}mqtt/credentials` | /etc/mosquitto/credentials |
+| Nginx | `${IOT_DASHBOARD_DATA}docker-nginx/authdata`| /etc/nginx/authdata |
+| Let's Encrypt certificates |`${IOT_DASHBOARD_DATA}docker-nginx/letsencrypt`| /etc/letsencrypt |
+As shown, one can easily change locations on the **host** (e.g. for testing). This can be done by setting the environment variable `IOT_DASHBOARD_DATA` to the **absolute path** (with trailing slash) of the containing directory prior to calling `docker-compose up`. The above paths are appended to the value of `IOT_DASHBOARD_DATA`. Directories are created as needed.
-As shown, you can easily change locations on the **host** (e.g. for testing). You do this by setting the environment variable `TTN_DASHBOARD_DATA` to the **absolute path** (with trailing slash) to the containing directory prior to calling `docker-compose up`. The above paths are appended to the value of `TTN_DASHBOARD_DATA`. Directories are created as needed.
-Normally, this is done by an appropriate setting in the `.env` file.
+Normally, this is done by an appropriate setting in the `.env` file. Consider the following example:
-```bash
-$ grep TTN_DASHBOARD_DATA .env
-TTN_DASHBOARD_DATA=/dashboard-data/
-$ docker-compose up -d
+```console
+$ grep IOT_DASHBOARD_DATA .env
+IOT_DASHBOARD_DATA=/dashboard-data/
+$ docker-compose up -d
```
In this case, the data files are created in the following locations:
-Component | Data file location
----------|-------------------
-Node-RED | `/dashboard-data/node-red`
-InfluxDB | `/dashboard-data/influxdb`
-Grafana | `/dashboard-data/grafana`
+Table: Data Location Examples
+| **Component** | **Data file location** |
+|---------------|-----------------------------------|
+| Node-RED | /dashboard-data/node-red |
+| InfluxDB | /dashboard-data/influxdb |
+| Grafana | /dashboard-data/grafana |
+| Mqtt | /dashboard-data/mqtt/credentials |
+| Nginx | /dashboard-data/docker-nginx/authdata |
+| Certificates | /dashboard-data/docker-nginx/letsencrypt |
-### Reuse and removal of data files
+## Reuse and removal of data files
-Since data files on the host are not removed between runs, as long as you don't remove the files between runs, your data will preserved.
+Since data files on the host are not removed between runs, as long as the files are not removed between runs, the data will be preserved.
-Sometimes this is inconvenient, and you'll want to remove some or all of the data. For a variety of reasons, the data files and directories are created owned by root, so you must use the `sudo` command to remove the data files. Here's an example of how to do it:
+Sometimes this is inconvenient, and it may be necessary to remove some or all of the data.
For a variety of reasons, the data files and directories are created owned by root, so the `sudo` command must be used to remove the data files. Here's an example of how to do it:
```bash
source .env
-sudo rm -rf ${TTN_DASHBOARD_DATA}node-red
-sudo rm -rf ${TTN_DASHBOARD_DATA}influxdb
-sudo rm -rf ${TTN_DASHBOARD_DATA}grafana
+sudo rm -rf ${IOT_DASHBOARD_DATA}node-red
+sudo rm -rf ${IOT_DASHBOARD_DATA}influxdb
+sudo rm -rf ${IOT_DASHBOARD_DATA}grafana
+sudo rm -rf ${IOT_DASHBOARD_DATA}mqtt/credentials
```
## Node-RED and Grafana Examples
-This version requires that you set up Node-RED, the database and the Grafana dashboards manually, but we hope to add a reasonable set of initial files in a future release.
+This version requires that you set up Node-RED, the InfluxDB database, and the Grafana dashboards manually, but we hope to add a reasonable set of initial files in a future release.
-## Connecting to InfluxDB from Node-RED and Grafana
+### Connecting to InfluxDB from Node-RED and Grafana
-There is one point that is somewhat confusing about the connections from Node-RED and Grafana to InfluxDB. Even though InfluxDB is running on the same host, it is logically running on its own virtual machine (created by docker). Because of this, Node-RED and Grafana cannot use **`localhost`** when connecting to Grafana. A special name is provided by docker: **`influxdb`**. Note that there's no DNS suffix. If you don't use **`influxdb`**, Node-RED and Grafana will not be able to connect.
+There is one point that is somewhat confusing about the connections from Node-RED and Grafana to InfluxDB. Even though InfluxDB is running on the same host, it is logically running on its own virtual machine (created by docker). Because of this, Node-RED and Grafana cannot use **`localhost`** when connecting to InfluxDB. A special name is provided by docker: `influxdb`. Note that there's no DNS suffix. If the name `influxdb` is not used, Node-RED and Grafana will not be able to connect.
-## Logging in to Grafana
+### Logging in to Grafana
-* On the login screen, the user name is "`admin`". The initial password is given by the value of the variable `GF_SECURITY_ADMIN_PASSWORD` in `grafana/.env`. Note that if you change the password in `grafana/.env` after the first time you launch the grafana containder, the admin password does not change. If you somehow lose the previous value of the admin password, and you don't have another admin login, it's very hard to recover; easiest is to remove `grafana.db` and start over.
+On the login screen, the initial user name is "`admin`". The initial password is given by the value of the variable `GF_SECURITY_ADMIN_PASSWORD` in `.env`. Note that if you change the password in `.env` after the first time you launch the Grafana container, the admin password does not change. If you somehow lose the previous value of the admin password, and you don't have another admin login, it's very hard to recover; easiest is to remove `grafana.db` and start over.
### Data source settings in Grafana
-* Set the URL (under HTTP Settings) to `http://influxdb:8086`.
+- Set the URL (under HTTP Settings) to `http://influxdb:8086`.
-* Select the database.
+- Select the database.
+- Leave the username and password blank.
+- Click "Save & Test".
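Independent of Grafana, it can be handy to confirm from outside the host that InfluxDB is reachable through the Nginx proxy. The following is only a sketch: it assumes the proxy URL layout from the User Access table above, and the user name `tmm` and password `secret` are placeholders for an `.htpasswd` entry created as described in [`SETUP.md`](./SETUP.md).

```bash
# Ping InfluxDB through the Nginx proxy; a healthy instance answers HTTP 204.
curl -i --user tmm:secret "https://server.example.com/influxdb:8086/ping"

# List the databases that should be selectable from Grafana and Node-RED.
curl --user tmm:secret "https://server.example.com/influxdb:8086/query?q=SHOW+DATABASES"
```

Note that this goes through the proxy from the outside; Grafana and Node-RED themselves reach InfluxDB directly at `http://influxdb:8086` on the Docker network, as described above.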
+ +## MQTTS Examples + +Mqtts can be accessed in the following ways: + +Method | Path | Credentials +--------|--------|------------- +MQTT over Nginx proxy | https://dashboard.example.com/mqtts/:443 | Username/Password come from mosquitto’s configuration (password_file) +MQTT over TLS/SSL | https://dashboard.example.com:8883 | Username/Password come from mosquitto’s configuration (password_file) +WebSockets over TLS/SSL | https://dashboard.example.com:8083 | Username/Password come from mosquitto’s configuration (password_file) +MQTT over TCP protocol (not secure) | http://dashboard.example.com:1883 |Username/Password come from mosquitto’s configuration (password_file) + +## Setup Instructions + +Please refer to [`SETUP.md`](./SETUP.md) for detailed set-up instructions. -* Leave user and password blank. +## Influxdb Backup and Restore -* Click "Save & Test". +Please refer to [`influxdb/README.md`](./influxdb/README.md). -## Future work +## Release History -Although the dashboard is already very useful, it's incomplete. Please refer to `TODO.md`, and also note that we're considering the following. Check in for updates! +- HEAD includes the following changes -1. Add a script to setup the passwords initially for Grafana and for access to node-red and Influxdb. -2. Admin script to show roles and maintain the `.htpasswd` and `.htgroup` files. -3. Add the auto-update cron script -- right now you have to restart in order to get the SSL certs updated. Not a big deal, as the patches-requiring-reboot interval is shorter than the life of the certs, but still, this should be fixed. -4. Switch to [phusion](https://github.com/phusion/baseimage-docker) for the base image, instead of Ubuntu. -5. Provide suitable initial files for Grafana and Node-RED, assuming MCCI sensor nodes. -6. The initial script should prompt for the data base name. + - Influxdb: + 1. Backup script is updated for backing up online (live) databases and to push the backup to Amazon bucket. + 2. Crontab was set for automatic backup. + 3. supports sending email for backup alerting. + - Nginx: + 1. The Apache setup is migrated to Nginx. + 2. Proxy-ing the services like ( influxdb, grafana, node-red, mqtts over proxy) was updated. + - Node-red: + 1. supports data flowing via MQTT channel and HTTPS Endpoint + 2. supports sending email. -## Acknowledgements + - MQTTS: + 1. supports different connections as below: + 1. Mqtt Over Nginx proxy. + 2. Mqtt over TCP( disabled by default ) + 3. Mqtt over TLS/SSL + 4. Mqtt over Websockets(WSS) -This builds on work done by Johan Stokking of [The Things Network](www.thethingsnetwork.org) for the staging environment. Additional adaptation done by Terry Moore of [MCCI](www.mcci.com). + - Postfix: + 1. Configured to Relay mails via External SMTP Auth( Tested with Gmail and Mailgun ). + 2. Mails generated from Containers like Grafana,Influxdb and Node-red will be relayed through Postfix container. -Other contributors: [Olivier Girondel](https://github.com/oliv3), [Murugan Chandrasekar](https://github.com/MuruganChandrasekar). 
+## Meta diff --git a/SETUP.md b/SETUP.md index a283e9e..a86b4d0 100644 --- a/SETUP.md +++ b/SETUP.md @@ -1,24 +1,30 @@ -# Set-by-step Setup Instructions - +# Step-by-step Setup Instructions + + - [Notes](#notes) -- [Cloud-Provider Setup](#cloud-provider-setup) +- [Cloud-Provider-Specific Setup](#cloud-provider-specific-setup) - [On Digital Ocean](#on-digital-ocean) - [Create droplet](#create-droplet) - [Configure droplet](#configure-droplet) - [After server is set up](#after-server-is-set-up) - - [Create and edit the `.env` file](#create-and-edit-the-env-file) + - [Create and edit the .env file](#Create-and-edit-the-.env-file) - [Set up the Node-RED and InfluxDB API logins](#set-up-the-node-red-and-influxdb-api-logins) -- [Start the server](#start-the-server) + - [Migrating `htpasswd` from Apache to Nginx (if required)](#migrating-htpasswd-from-apache-to-nginx-if-required) + - [Creating new `htpasswd` files](#creating-new-htpasswd-files) + - [MQTT User Credentials setup](#mqtt-user-credentials-setup) + - [Start the server](#start-the-server) - [Restart servers in the background](#restart-servers-in-the-background) - [Initial testing](#initial-testing) - - [Set up first data source](#set-up-first-data-source) + - [Set up first data source](#set-up-first-data-source) - [Test Node-RED](#test-node-red) - [Creating an InfluxDB database](#creating-an-influxdb-database) -- [Add Apache log in for NodeRed or query after the fact](#add-apache-log-in-for-nodered-or-query-after-the-fact) + - [Test Postfix Mail setup](#Test-Postfix-Mail-setup) + - [Test MQTT Channels](#Test-MQTT-Channels) + @@ -26,45 +32,44 @@ ## Notes -Throughout the following, we assume you're creating a dashboard server named `dashboard.example.com`. Change this to whatever you like. For convenience, we name other things consistently: +Throughout this document, we use `dashboard.example.com` as the DNS name for the server. You will, of course, change this to something more suitable. When you do this, other things are to be named consistently: -`/opt/docker/dashboard.example.com` is the directory (on the host system) containing the docker files. +- `/opt/docker/dashboard.example.com` is the directory (on the host system) containing the docker files. -`/var/opt/docker/dashboard.example.com` is the directory (on the host system) containing persistent data. +- `/var/opt/docker/dashboard.example.com` is the directory (on the host system) containing persistent data. -We assume that you're familiar with Node-RED. +Node-RED familiarity is assumed. -## Cloud-Provider Setup +## Cloud-Provider-Specific Setup -First you have to choose a cloud provider and install Docker and Docker-Compose. That's very much provider dependent. +As an initial step, a cloud provider is required and Docker and Docker-Compose must be installed. The procedure is provider dependent. ### On Digital Ocean -_Last Update: 2019-07-31_ - #### Create droplet -1. Log in at [Digital Ocean](https://cloud.digitalocean.com/) +1. Log in to [Digital Ocean](https://cloud.digitalocean.com/) -2. Create a new project (if needed) to hold your new droplet. +2. Create a new project (if needed) to hold the new droplet. -3. Discover > Marketplace, search for `Docker` +3. Discover > Marketplace, search for `Docker` -4. You should come to this page: https://cloud.digitalocean.com/marketplace/5ba19751fc53b8179c7a0071?i=ec3581 +4. This page will be redirected: + 5. Press "Create" 6. Select the standard 8G GB Starter that is selected. -7. Choose a datacenter; I chose New York. +7. 
Choose a datacenter; *New York is selected in the example created for this document.* 8. Additional options: none. -9. Add your SSH keys. +9. Add the SSH keys. -10. Choose a host name, e.g. `passivehouse-ecovillage`. +10. Choose a host name, *e.g. `passivehouse-ecovillage`.* -11. Select your project. +11. Select the project. 12. Press "Create" @@ -76,19 +81,19 @@ _Last Update: 2019-07-31_ 3. Remove the motd (message of the day). -4. Add user: +4. Add user(s). Change `username` as needed. ```bash - adduser username - adduser username admin - adduser username docker - adduser username plugdev - adduser username staff + adduser username + adduser username admin + adduser username docker + adduser username plugdev + adduser username staff ``` 5. Disable root login via SSH or via password -6. Optional: enable `username` to sudo without password. +6. Optional: enable `username` to sudo without password. ```bash sudo VISUAL=vi visudo @@ -96,18 +101,18 @@ _Last Update: 2019-07-31_ Add the following line at the bottom: - ```sudoers + ```bash username ALL=(ALL) NOPASSWD: ALL ``` -7. Test that you can become `username`: +7. Test that you can become `username`: ```console - # sudo -i username + # sudo su - username username@host-name:~$ ``` -8. Drop back to root, and then copy the authorized_keys file to `~username`: +8. Drop back to root, and then copy the authorized_keys file to  `~username`: ```bash mkdir -m 700 ~username/.ssh @@ -115,9 +120,9 @@ _Last Update: 2019-07-31_ chown -R username.username ~username/.ssh/authorized_keys ``` -9. See if you can ssh in. +9. Confirm that the user can SSH in. -10. Optional: set up `byobu` by default: +10. Optional: set up byobu by default. This allows a session to continue even if your connection drops. ```bash byobu @@ -130,11 +135,11 @@ _Last Update: 2019-07-31_ vi /etc/hosts ``` - Change the line `127.0.1.1 name name` to `127.0.0.1 myhost.myfq.dn myhost`. + Change the line `127.0.1.1 name name` to `127.0.0.1 myhost.myfq.dn myhost`. -12. If needed, use `hostnamectl` to set the static hostname to match `myhost`. +12. If needed, use `hostnamectl` to set the static hostname to match `myhost`. -13. set up Git: +13. Set up git. This makes sure you have the latest version. ```bash sudo add-apt-repository ppa:git-core/ppa @@ -142,251 +147,323 @@ _Last Update: 2019-07-31_ sudo apt install git ``` -14. We'll put the docker files at `/opt/docker/docker-ttn-dashboard`, setting up as follows: +14. We'll put the docker files at `/opt/docker/docker-iot-dashboard`, setting up as follows: - ```bash - sudo mkdir /opt/docker - cd /opt/docker - sudo chgrp admin . - sudo chmod g+w . - ``` + ```bash + sudo mkdir /opt/docker + cd /opt/docker + sudo chgrp admin . + sudo chmod g+w . + ``` ## After server is set up -The following instructions are essentially independent of the cloud provider and the underlying distribution. But we've only tested on Ubuntu and (in 2017) on CentOS. +The following instructions are essentially independent of the cloud provider and the underlying distribution. But this was only tested on Ubuntu and (in 2019) on CentOS. 1. Clone this repository. - ```bash - git clone git@github.com:mcci-catena/docker-ttn-dashboard.git /opt/docker/dashboard.example.com - ``` + ```bash + git clone git@github.com:mcci-catena/docker-iot-dashboard.git /opt/docker/dashboard.example.com + ``` 2. Move to the directory populated in step 1. - ```bash - cd /opt/docker/dashboard.example.com - ``` + ```bash + cd /opt/docker/dashboard.example.com + ``` -3. 
Get a fully-qualified domain name (FQDN) for your server, for which you control DNS. Point it to your server. Make sure it works, using "`dig FQDN`" -- you should get back an `A` record pointing to your server's IP address. +3. Get a fully-qualified domain name (FQDN) for the server, for which the DNS can be controlled. Point it to the server. Make sure it works, using "`dig FQDN`" -- get back an `A` record pointing to your server's IP address. -### Create and edit the `.env` file +### Create and edit the .env file -1. Create a `.env` file. To get a template: +First, create a .env file. The following comand sequence can be cut and paste to generate an empty template: - ```bash - sed -ne '/^#+++/,/^#---/p' docker-compose.yml | sed -e '/^#[^ \t]/d' -e '/^# TTN/s/$/=/' > .env - ``` +```bash +sed -ne '/^#+++/,/^#---/p' docker-compose.yml | sed -e '/^#[^ \t]/d' -e '/^# IOT/s/$/=/' > .env +``` + +Then, edit the .env file as follows: + +1. `IOT_DASHBOARD_NGINX_FQDN=myhost.example.com` + + This sets the name of the resulting server. It tells Nginx what it's serving out. It must be a fully-qualified domain name (FQDN) that resolves to the IP address of the container host. + +2. `IOT_DASHBOARD_CERTBOT_FQDN=myhost.example.com` + + This should be the same as `IOT_DASHBOARD_NGINX_FQDN`. + +3. `IOT_DASHBOARD_CERTBOT_EMAIL=someone@example.com` + + This sets the contact email for Let's Encrypt. The script automatically accepts the Let's Encrypt terms of service, and this indicates who is doing the accepting. + +4. `IOT_DASHBOARD_DATA=/full/path/to/directory/` + + The trailing slash is required! This will put all the data files for this instance as subdirectories of the specified path. If this is undefined, `docker-compose` will print error messages and quit. + +5. `IOT_DASHBOARD_GRAFANA_ADMIN_PASSWORD=SomethingVerySecretIndeed` + + This needs to be confidential. Indeed this sets the *initial* password for the Grafana admin login.This should be changed via the Grafana UI after booting the server. -2. Edit the `.env` file as follows: +6. `IOT_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS` - 1. `TTN_DASHBOARD_APACHE_FQDN=myhost.example.com` - This sets the name of your resulting server. It tells Apache what it's serving out. It must be a fully-qualified domain name (FQDN) that resolves to the IP address of the container host. + This sets the Grafana originating mail address. - 2. `TTN_DASHBOARD_CERTBOT_FQDN=myhost.example.com` - This should be the same as `TTN_DASHBOARD_APACHE_FQDN`. +7. `IOT_DASHBOARD_GRAFANA_INSTALL_PLUGINS=plugin plugin2` - 3. `TTN_DASHBOARD_CERTBOT_EMAIL=someone@example.com` - This sets the contact email for Let's Encrypt. The script automatically accepts the Let's Encrypt terms of service, and this indicates who is doing the accepting. + This sets a list of Grafana plugins to install. - 4. `TTN_DASHBOARD_DATA=/full/path/to/directory/` - The trailing slash is required! - This will put all the data file for this instance as subdirectories of the specified path. If you leave this undefined, `docker-compose` will print error messages and quit. +8. `IOT_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME=demo` - 5. `TTN_DASHBOARD_GRAFANA_ADMIN_PASSWORD=SomethingVerySecretIndeed` - This sets the *initial* password for the Grafana `admin` login. You should change this via the Grafana UI after booting the server. + Change "demo" to the desired name of the initial database that will be created in InfluxDB. - 6. `TTN_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS` - This sets the Grafana originating mail address. +9. 
`IOT_DASHBOARD_MAIL_HOST_NAME=myhost.example.com` - 7. `TTN_DASHBOARD_GRAFANA_INSTALL_PLUGINS` - This sets a list of Grafana plugins to install. + This sets the name of your mail server. Used by Postfix. - 8. `TTN_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME=demo` - Change "demo" to the desired name of the initial database that will be created in InfluxDB. +10. `IOT_DASHBOARD_MAIL_DOMAIN=example.com` - 9. `TTN_DASHBOARD_MAIL_HOST_NAME=myhost.example.com` - This sets the name of your mail server. Used by Postfix. + This sets the domain name of your mail server. Used by Postfix. - 10. `TTN_DASHBOARD_MAIL_DOMAIN=example.com` - This sets the domain name of your mail server. Used by Postfix. +11. `IOT_DASHBOARD_NODERED_INSTALL_PLUGINS=node-red-node-example1 node-red-node-example2` - 11. `TTN_DASHBOARD_NODERED_INSTALL_PLUGINS=node-red-node-example1 node-red-node-example2` - This installs one or more Node-RED plug-ins. + This installs one or more Node-RED plug-ins. - 11. `TTN_DASHBOARD_TIMEZONE=Europe/Paris` - If not defined, the default timezone will be GMT. +12. `IOT_DASHBOARD_TIMEZONE=Europe/Paris` -Your `.env` file should look like this: + If not defined, the default time zone will be GMT. + +13. `IOT_DASHBOARD_INFLUXDB_MAIL_HOST_NAME=myhost.example.com` + + This sets the name of your mail server for backup mail. Used by Postfix. + +14. `IOT_DASHBOARD_INFLUXDB_BACKUP_EMAIL=a@example.com b@example.com` + + Backup mail will be sent to the mentioned MAIL IDs. + +The resulting `.env` file should look like this: ```bash ### env file for configuring dashboard.example.com -TTN_DASHBOARD_APACHE_FQDN=dashboard.example.com -# The fully-qualified domain name to be served by Apache. -# -# TTN_DASHBOARD_AWS_ACCESS_KEY_ID= -# The access key for AWS for backups. -# -# TTN_DASHBOARD_AWS_DEFAULT_REGION= -# The AWS default region. -# -# TTN_DASHBOARD_AWS_S3_BUCKET_INFLUXDB= -# The S3 bucket to use for uploading the influxdb backup data. -# -# TTN_DASHBOARD_AWS_SECRET_ACCESS_KEY= -# The AWS API secret key for backing up influxdb data. -# -TTN_DASHBOARD_CERTBOT_EMAIL=somebody@example.com +IOT_DASHBOARD_NGINX_FQDN=dashboard.example.com +# The fully-qualified domain name to be served by NGINX. +# IOT_DASHBOARD_AWS_ACCESS_KEY_ID +# The access key for AWS for backups. +# IOT_DASHBOARD_AWS_DEFAULT_REGION +# The AWS default region. +# IOT_DASHBOARD_AWS_S3_BUCKET_INFLUXDB +# The S3 bucket to use for uploading the influxdb backup data. +# IOT_DASHBOARD_AWS_SECRET_ACCESS_KEY +# The AWS API secret key for backing up influxdb data. +IOT_DASHBOARD_CERTBOT_EMAIL=somebody@example.com # The email address to be used for registering with Let's Encrypt. -# -TTN_DASHBOARD_CERTBOT_FQDN=dashboard.example.com +IOT_DASHBOARD_CERTBOT_FQDN=dashboard.example.com # The domain(s) to be used by certbot when registering with Let's Encrypt. -# -TTN_DASHBOARD_DATA=/var/opt/docker/dashboard.example.com/ -# The path to the data directory. This must end with a '/', and must eithe -r +IOT_DASHBOARD_DATA=/var/opt/docker/dashboard.example.com/ +# The path to the data directory. This must end with a '/', and must either # be absolute or must begin with './'. (If not, you'll get parse errors.) -# -TTN_DASHBOARD_GRAFANA_ADMIN_PASSWORD=................... +IOT_DASHBOARD_GRAFANA_ADMIN_PASSWORD=................... # The password to be used for the admin user on first login. This is ignored # after the Grafana database has been built. 
-# -TTN_DASHBOARD_GRAFANA_PROJECT_NAME=My Dashboard +IOT_DASHBOARD_GRAFANA_PROJECT_NAME=My Dashboard # The project name to be used for the emails from the administrator. -# -# TTN_DASHBOARD_GRAFANA_LOG_MODE= +# IOT_DASHBOARD_GRAFANA_LOG_MODE # Set the grafana log mode. -# -# TTN_DASHBOARD_GRAFANA_LOG_LEVEL= +# IOT_DASHBOARD_GRAFANA_LOG_LEVEL # Set the grafana log level (e.g. debug) -# -TTN_DASHBOARD_GRAFANA_SMTP_ENABLED=true +IOT_DASHBOARD_GRAFANA_SMTP_ENABLED=true # Set to true to enable SMTP. -# -# TTN_DASHBOARD_GRAFANA_SMTP_SKIP_VERIFY= +IOT_DASHBOARD_GRAFANA_SMTP_SKIP_VERIFY=true # Set to true to disable SSL verification. # Defaults to false. -# -# TTN_DASHBOARD_GRAFANA_INSTALL_PLUGINS= -# A list of grafana plugins to install. -# -TTN_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS=grafana-admin@dashboard.example.com -# The "from" address for Grafana emails. -# -# TTN_DASHBOARD_GRAFANA_USERS_ALLOW_SIGN_UP= -# Set to true to allow users to sign-up to get access to the dashboard. -# -TTN_DASHBOARD_INFLUXDB_ADMIN_PASSWORD=jadb4a4WH5za7wvp +IOT_DASHBOARD_GRAFANA_INSTALL_PLUGINS=plugins1, plugins2 +# A list of grafana plugins to install. Use (comma and space) ", " to delimit plugins. +IOT_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS=grafana-admin@dashboard.example.com +# The "from" address for Grafana emails. +IOT_DASHBOARD_GRAFANA_USERS_ALLOW_SIGN_UP=true +# Set to true to allow users to sign up. +IOT_DASHBOARD_INFLUXDB_ADMIN_PASSWORD=jadb4a4WH5za7wvp # The password to be used for the admin user by influxdb. Again, this is # ignored after the influxdb database has been built. -# -TTN_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME=mydatabase +IOT_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME=mydatabase # The inital database to be created on first launch of influxdb. Ignored # after influxdb has been launched. -# -TTN_DASHBOARD_MAIL_DOMAIN=example.com -# the postfix mail domain. -# -TTN_DASHBOARD_MAIL_HOST_NAME=dashboard.example.com -# the external FQDN for the mail host. -# -# TTN_DASHBOARD_MAIL_RELAY_IP= -# the mail relay machine, assuming that the real mailer is upstream from us. -# -TTN_DASHBOARD_NODERED_INSTALL_PLUGINS=node-red-node-example1 nodered-node-example2 -# Additional plugins to be installed for Node-RED. -# -# TTN_DASHBOARD_PORT_HTTP= +IOT_DASHBOARD_MAIL_DOMAIN=example.com +# the postfix mail domain. +IOT_DASHBOARD_MAIL_HOST_NAME=dashboard.example.com +# the external FQDN for the mail host. +IOT_DASHBOARD_MAIL_RELAY_IP= +# the mail relay machine, assuming that the real mailer is upstream from us. +# IOT_DASHBOARD_PORT_HTTP # The port to listen to for HTTP. Primarily for test purposes. Defaults to # 80. -# -# TTN_DASHBOARD_PORT_HTTPS= +# IOT_DASHBOARD_PORT_HTTPS # The port to listen to for HTTPS. Primarily for test purposes. Defaults to # 443. -# -# TTN_DASHBOARD_TIMEZONE= +# IOT_DASHBOARD_TIMEZONE # The timezone to use. Defaults to GMT. +# IOT_DASHBOARD_NODE_RED_VERSION +# To Install specific version of node-red version. Defaults to latest. +IOT_DASHBOARD_NODE_RED_INSTALL_MODULES=node-red-node-example1 nodered-node-example2 +# Install the required node-red modules. use "space" to delimit the modules. +# IOT_DASHBOARD_PORT_MQTT_TCP +# Accessing mqtt channel over TCP. Defaults to 1883. +# IOT_DASHBOARD_PORT_MQTT_SSL +# Accessing mqtt channel over TLS/SSL. Defaults to 8883. +# IOT_DASHBOARD_PORT_MQTT_WSS +# Accessing mqtt channel over WSS. Defaults to 8083. 
+IOT_DASHBOARD_INFLUXDB_MAIL_HOST_NAME=influxdbbackup.example.com +# the external FQDN for the influxdb host +IOT_DASHBOARD_INFLUXDB_BACKUP_EMAIL=somebody1@example.com somebody2@example.com +# Backup mail will be sent to the mentioned MAIL IDs. Use "space" to delimit the MAIL IDs. ``` ### Set up the Node-RED and InfluxDB API logins -1. Prepare everything: +Run the following commands. + +```bash +docker-compose pull +docker-compose build +``` + +If there are any errors, they need to be fixed before going on. + +#### Migrating `htpasswd` from Apache to Nginx (if required) + +If migrating from an older version of the dashboard that used Apache, you'll need to migrate the `htpasswd` file. + +- Copy [`htpasswd_migration.sh`](./htpasswd_migration.sh) into your local directory (on the host system) containing the docker files. + +- Run the script as below. ```bash - docker-compose pull - docker-compose build - ```` + chmod +x htpasswd_migration.sh + ./htpasswd_migration.sh + ``` - If there are any errors, fix them before going on. +- This script creates one `htpasswd` for each of the controlled services, and then copies them(`node-red_htpasswd`, `query_htpasswd`) to appropriate files as below. -2. Use `docker-compose run apache /bin/bash` to launch a shell in the Apache context. + - For Node-red: + `${IOT_DASHBOARD_DATA}docker-nginx/authdata/nodered/.htpasswd` - - If this fails with the message, `ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?`, then probably your user ID is not in the `docker` group. To fix this, `sudo adduser MYUSER docker`, where "MYUSER" is your login ID. Then (**very important**) log out and log back in. + - For Infludb Queries: + `${IOT_DASHBOARD_DATA}docker-nginx/authdata/influxdb/.htpasswd` -3. Change ownership of Apache's `/etc/apache2/authdata` to user `www-data`. +- If you are migrating older `htpasswd` files, please skip steps `[1-4]` to `htpasswd` below. - ```bash - chown www-data /etc/apache2/authdata - ``` +#### Creating new `htpasswd` files -4. Add Apache's `/etc/apache2/authdata/.htpasswd`. +1. Log into the Nginx docker container. - ```bash - touch /etc/apache2/authdata/.htpasswd - chown www-data /etc/apache2/authdata/.htpasswd - ``` + ```console + $ docker-compose run nginx /bin/bash + # + ``` -5. Add user logins for node-red and influxdb queries. Make `USERS` be a list of login IDs. + If this fails with the message, `ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?`, then probably the user ID is not in the `docker` group. To fix this, `sudo adduser MYUSER docker`, where "MYUSER" is the login ID. Then (**very important**) log out and log back in. - ```bash - export USERS="tmm amy josh" - for USER in $USERS; do echo "Set password for "$USER; htpasswd /etc/apache2/authdata/.htpasswd $USER; done +2. Create `.htpasswd` files for node-red and influxdb queries authentication. + + ```bash + touch /etc/nginx/authdata/influxdb/.htpasswd + touch /etc/nginx/authdata/nodered/.htpasswd + chown www-data /etc/nginx/authdata/influxdb/.htpasswd + chown www-data /etc/nginx/authdata/nodered/.htpasswd + ``` + +3. Add user logins for node-red and influxdb queries. Make `USERS` be a list of login IDs. 
+ + - For Node-red authentication: + + ```bash + export USERS="tmm amy josh" + for USER in $USERS; do \ + echo "Set password for "$USER; \ + htpasswd /etc/nginx/authdata/nodered/.htpasswd $USER; \ + done + ``` + + - For Influxdb queries: + + ```bash + export USERS="tmm amy josh" + for USER in $USERS; do \ + echo "Set password for "$USER; \ + htpasswd /etc/nginx/authdata/influxdb/.htpasswd $USER; \ + done + ``` + +4. Exit Nginx's container with Control+D. + +#### Set up the `MQTTs` User Credentials + +To access mqtt channel, user needs credentials to access it. + +1. Log into the mqtts docker container. + + ```console + $ docker-compose run mqtts /bin/bash + # ``` -6. Add Apache's `/etc/apache2/authdata/.htgroup`. +2. In the container, Create username and password using `mosquitto_passwd` command. ( option `-c` - Create a new password file. If the file already exists, it will be overwritten. so `-c` should be used for the first user creation. please avoid `-c` for the second user creation onwards. ) ```bash - # this assumes USERS is still set from previous step. - touch /etc/apache2/authdata/.htgroup - chown www-data /etc/apache2/authdata/.htgroup - echo "node-red: ${USERS}" >>/etc/apache2/authdata/.htgroup - echo "query: ${USERS}" >>/etc/apache2/authdata/.htgroup + # mosquitto_passwd -c /etc/mosquitto/credentials/passwd + Password: + Reenter password: ``` -7. Exit Apache's container with Control+D. +3. Close the connection to mqtts (Ctrl+D). -## Start the server +### Start the server -1. We recommend you first start things up in "interactive mode". +1. Starting things up in "interactive mode" is recommended as a first step. ```bash docker-compose up ``` - This will show you the log files. It will also be pretty clear if there are any issues. +This will show the log output from the various services. It will also be pretty clear if there are any issues. - One common error (for me, anyway) is entering an illegal initial InfluxDB database name. InfluxDB will spew a number of errors, but eventually it will start up anyway. But then you'll need to create a database manually. +One common error (for me, anyway) is entering an illegal initial InfluxDB database name. InfluxDB will spew a number of errors, but eventually it will start up anyway. But then the database needs to be created manually. ### Restart servers in the background -Once the servers are coming up interactively, use ^C to shut them down, then restart in daemon mode. +Once the servers are coming up interactively, use ^C to shut them down, and then restart in daemon mode. ```bash docker-compose up -d ``` +Status of the containers can be seen as below + +```console +$ docker-compose ps + +Name Command State Ports +----------------------------------------------------------------- +dashboardexamplecom_grafana_1 /run.sh Up 3000/tcp +dashboardexamplecom_influxdb_1 /sbin/my_init Up 8086/tcp +dashboardexamplecom_mqtts_1 /sbin/my_init Up 0.0.0.0:1883->1883/tcp, 0.0.0.0:8083->8083/tcp, 0.0.0.0:8883->8883/tcp +dashboardexamplecom_nginx_1 /sbin/my_init Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp +dashboardexamplecom_node-red_1 npm start -- --userDir /da ... Up (healthy) 1880/tcp +dashboardexamplecom_postfix_1 /sbin/my_init Up 25/tcp +``` + ### Initial testing -- Open Grafana on **https://dashboard.example.com**, and log in as admin. +- Open Grafana on [https://dashboard.example.com](https://dashboard.example.com/), and log in as admin. - Change the admin password. 
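At this point it is also worth checking, from any machine, that the reverse proxy is answering as described earlier (a minimal sketch; `dashboard.example.com` is the example FQDN used throughout this guide):

```bash
# Port 80 should answer with a redirect to HTTPS.
curl -sI http://dashboard.example.com/ | head -n 3

# Port 443 should serve Grafana behind the Let's Encrypt certificate.
curl -sI https://dashboard.example.com/ | head -n 3
```

If the second request fails with a certificate error, check the certbot settings in `.env` and review the Nginx container output with `docker-compose logs nginx`.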
-#### Set up first data source
+### Set up first data source
-Use the Grafana UI -- either click on "add first data source" or use "Configure>Add Data Source", and add an InfluxDB data source.
+Use the Grafana UI -- either click on "add first data source" or use "Configure > Add Data Source", and add an InfluxDB data source.
-- Set the URL (under HTTP Settings) to `http://influxdb:8086`.
+- Set the URL (under HTTP Settings) to `http://influxdb:8086`.
-- Select the database. If InfluxDB properly initialized a database, you should also be able to connect to it as a Grafana data source. If not, you'll first need to [create an InfluxDB database](#creating-an-influxdb-database).
+- Select the database. If InfluxDB properly initialized a database, you should be able to connect to it as a Grafana data source. If not, first [create an InfluxDB database](https://github.com/mcci-catena/docker-iot-dashboard/blob/master/SETUP.md#creating-an-influxdb-database).
- Leave user and password blank.
@@ -394,11 +471,11 @@ Use the Grafana UI -- either click on "add first data source" or use "Configure>
### Test Node-RED
-Open Node-RED on **https://dashboard.example.com/node-red/**, and build a flow that stores data in InfluxDB. **Be sure to add the trailing slash! Otherwise you'll get a 404 from Grafana. We'll fix this soon.**
+Open Node-RED on **https://dashboard.example.com/node-red/**, and build a flow that stores data in InfluxDB. **Be sure to add the trailing slash! Otherwise you will get a 404 error from Grafana. This will be fixed soon.**
### Creating an InfluxDB database
-To create a database, log in to the host machine, and cd to `/opt/docker/dashboard.example.com`. Use the following commands:
+To create a database, log in to the host machine, and cd to `/opt/docker/dashboard.example.com`. Use the following commands:
```console
$ docker-compose exec influxdb /bin/bash
@@ -423,50 +500,138 @@
my-new-database
$
```
-## Add Apache log in for NodeRed or query after the fact
+### Test Postfix Mail setup
-To add a user with Node-RED access or query access, follow this procedure.
+- Testing Mail setup on `Grafana`
+ 1. Click the "Bell" icon, then click the "Notification channels" option as shown below
-1. Log into the host machine
+ ![grafana_mail_testing](assets/graf-mail_test_1.png)
-2. cd to `/opt/docker/dashboard.example.com`.
+ 2. Click "Add Channel" as shown below
-3. log into the apache docker container.
+ ![grafana_mail_testing](assets/graf-mail_test_2.png)
+ 3. Input the required info as shown below. *Be sure to select the type `Email`.* Finally, click the "Test" button to send a test mail.
- ```console - $ docker-compose exec apache /bin/bash - # - ```
+ ![grafana_mail_testing](assets/graf-mail_test_3.png)
-4. In the container, move to the `authdata` directory.
+- Testing Mail setup on `Influxdb` and `Postfix`
- ```console - # cd /etc/apache2/authdata - # - ```
+ Mail setup on `Influxdb` and `Postfix` can be tested using the `mail` command, by logging into each container.
-5. Add the user.
+ ***Influxdb***
+ 1. Log into the `Influxdb` docker container
- ```console - # htpasswd .htpasswd {newuserid} - New password: - Re-type new password: - Adding password for user {newuserid} - #
+ ```bash
+ docker-compose exec influxdb bash
+
+ root@influxdbbackup:/# mail -s "Testing mail from Influxdb" cmurugan@mcci.com
+ Cc:
+ Testing1
```
-6. Grant permissions to the user by updating `.htgroup` in the same directory.
+ ***Postfix***
- ```console - # vi .htgroup
+ 1.
Log into the `Postfix` docker container + + ```bash + docker-compose exec postfix bash + + root@dashboard:/# mail -s "Testing mail from Postfix" cmurugan@mcci.com + Cc: + Testing1 ``` - There are at least two groups, `node-red` and `query`. +- Testing Mail setup on Node-red + + Mail setup on Node-red can be tested by deploying a node-red flow on  as shown below. + + ![nodered_mail_testing](assets/postfix_node_1.png) + + ***Inject node's configuration*** + + ![nodered_mail_testing](assets/postfix_node_2.png) + + here, + - `msg.payload` will be act as `mail body`. + - `msg.topic`will be act as `subject`. + - `msg.from` will be act as `Sender` + + ***Email node's configuration*** + + ![nodered_mail_testing](assets/postfix_node_3.png) + +### Test MQTT Channels + +- To test the `MQTT over TCP` and `MQTT over TLS/SSL` channels user can use [mosquitto client](https://mosquitto.org/download/) tool. + + - MQTT over TCP + + `Subscribing` mqtt channel on topic `test` + + ```bash + mosquitto_sub -h dashboard.example.com -t test -p 1883 -u user1 -P pwd123 + + hello + ``` + + `publishing` on mqtt channel with topic `test` + + ```bash + mosquitto_pub -h dashboard.example.com -m "hello" -t test -p 1883 -u user1 -P pwd123 + ``` + + - MQTT over TLS/SSL + + `Subscribing` mqtt channel on topic `test` + + ```bash + mosquitto_sub -h dashboard.example.com -t test -p 8883 -u user1 -P pwd123 --capath /etc/ssl/certs/ + + hello + ``` + + `publishing` on mqtt channel with topic `test` + + ```bash + + mosquitto_pub -h dashboard.example.com -m "hello" -t test -p 8883 -u user1 -P pwd123 --capath /etc/ssl/certs/ + ``` + +- In order to test the "MQTT over Nginx proxy", the user can use the `mqtts web based client` [portal1](http://tools.emqx.io/) or [portal2](https://www.eclipse.org/paho/clients/js/utility/). + + *Using [portal1](http://tools.emqx.io/)* + + Connection Details + + ![mqtt_testing](assets/mqtt_nginx_1.png) + + `Subscribing` mqtt channel on topic `test` + + ![mqtt_testing](assets/mqtt_nginx_2.png) + + `publishing` on mqtt channel with topic `test` + + ![mqtt_testing](assets/mqtt_nginx_3.png) + + Full window + + ![mqtt_testing](assets/mqtt_nginx_4.png) + +- To test the `MQTT over WebSockets with TLS/SSL`, the user can use the `mqtts web based client` [Portal](http://www.hivemq.com/demos/websocket-client/). + + Connection Details + + ![mqtt_testing](assets/mqtt_web_1.png) + + `Subscribing` mqtt channel on topic `test` + + ![mqtt_testing](assets/mqtt_web_2.png) - - Add `{newuserid}` to group `node-red` if you want to grant access to Node-READ. + `publishing` on mqtt channel with topic `test` - - Add `{newuserid}` to group `query` if you want to grant access for InfluxDB queries. + ![mqtt_testing](assets/mqtt_web_3.png) -7. Write and save the file, then use `cat` to display it. + Full window -8. Close the connection to apache (control+D). + ![mqtt_testing](assets/mqtt_web_4.png) diff --git a/TODO.md b/TODO.md index 220592a..9cc26e9 100644 --- a/TODO.md +++ b/TODO.md @@ -2,9 +2,9 @@ 1. Prepare a script that queries the user during the setup and sets the `.env` file. -2. The script should also get names and roles for access to Node-red and InfluxDB. It then will seed `.htaccess` and `.htgroup`. +2. The script should also get names and roles for access to Node-red and InfluxDB. It then will seed `.htpasswd` files. -3. Same script should be able to show user-by-user roles, and adjust them. 
(Right now the matrix is transposed; for each role you can look at `.htgroup` and find the members, but you can't easily see all the roles for a member.) +3. Same script should be able to show user-by-user roles, and adjust them. 4. Figure out what to do if the user changes `GRAFANA_ENV_ADMIN_PASSWORD` after the image has been launched once; at present, this is ignored. This might be a maintenance script and/or a makefile so that the system detects edits and does the right thing. @@ -12,4 +12,4 @@ 6. Integrate the other things from `SETUP.md`. -7. Add scripts to backup and restore the user's data directories. Backup should run off-line (unless there's a very good way to backup the datasets from all the servers while they're up). Restore must run off-line. Scripts should do the necessary to ensure that the servers are in fact stopped. This is now partially done with the AWS changes, but more work needs to be done. +7. Add documention on setting up backups. diff --git a/apache/Dockerfile b/apache/Dockerfile deleted file mode 100644 index dc0273b..0000000 --- a/apache/Dockerfile +++ /dev/null @@ -1,40 +0,0 @@ -# -# Dockerfile for building the apache image -# - -# Start from Phusion. -FROM phusion/baseimage - -RUN /usr/bin/apt-get update && /usr/bin/apt-get install software-properties-common -y -RUN /usr/bin/add-apt-repository ppa:certbot/certbot && /usr/bin/apt-get update && /usr/bin/apt-get install apache2 -y -# -# Add the certbot layer -# -RUN /usr/bin/apt-get install python-certbot-apache -y -# -# enable proxys and generally set up. -# -RUN /usr/sbin/a2enmod proxy && \ - /usr/sbin/a2enmod proxy_http && \ - echo "ServerName localhost" > /etc/apache2/conf-available/fqdn.conf && \ - /usr/sbin/a2enconf fqdn.conf && \ - /usr/sbin/a2enmod authz_user authz_groupfile proxy_wstunnel headers - -# RUN mkdir -p /root -COPY setup.sh proxy-*.conf /root/ - -# Running scripts during container startup for letsencrypt update and Apache -RUN mkdir -p /etc/my_init.d -COPY setup.sh /etc/my_init.d/setup.sh -RUN chmod +x /etc/my_init.d/setup.sh - -# Enable letsencrypt renewal crontab -COPY certbot_cron.sh /etc/my_init.d/certbot_cron.sh -RUN chmod +x /etc/my_init.d/certbot_cron.sh - -# Start the Apache2 daemon during container startup -RUN mkdir /etc/service/apache2 -COPY apache2.sh /etc/service/apache2/run -RUN chmod +x /etc/service/apache2/run - -# end of file diff --git a/apache/apache2.sh b/apache/apache2.sh deleted file mode 100644 index 5fd8874..0000000 --- a/apache/apache2.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/sh -exec /usr/sbin/apache2ctl -DFOREGROUND diff --git a/apache/proxy-grafana.conf b/apache/proxy-grafana.conf deleted file mode 100644 index 2a5dd3b..0000000 --- a/apache/proxy-grafana.conf +++ /dev/null @@ -1,16 +0,0 @@ - Redirect 301 / https://@{FQDN}/grafana/ - Redirect 301 /index.html https://@{FQDN}/grafana/ - Redirect 301 /grafana https://@{FQDN}/grafana/ - - Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains" - Header always set X-Frame-Options "SAMEORIGIN" - Header always set X-Xss-Protection "1; mode=block" - Header always set X-Content-Type-Options "nosniff" - Header always set Content-Security-Policy "default-src 'self' https:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data: *.global.ssl.fastly.net" - Header always set Referrer-Policy: "same-origin" - Header always set Feature-Policy: "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; 
payment 'none'; usb 'none'" - Header edit Set-Cookie ^(.*)$ $1;Secure - ProxyPass http://grafana:3000/ - ProxyPassReverse http://grafana:3000/ - ProxyPassReverseCookiePath / /grafana/ - diff --git a/apache/proxy-influxdb.conf b/apache/proxy-influxdb.conf deleted file mode 100644 index 2116290..0000000 --- a/apache/proxy-influxdb.conf +++ /dev/null @@ -1,17 +0,0 @@ - Redirect 301 /influxdb:8086/ https://@{FQDN}/influxdb:8086/ - - Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains" - Header always set X-Frame-Options "SAMEORIGIN" - Header always set X-Xss-Protection "1; mode=block" - Header always set X-Content-Type-Options "nosniff" - Header always set Content-Security-Policy "default-src 'self'" - Header always set Referrer-Policy: "same-origin" - Header always set Feature-Policy: "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'" - AuthType Basic - AuthName "InfluxDB queries" - AuthUserFile /etc/apache2/authdata/.htpasswd - AuthGroupFile /etc/apache2/authdata/.htgroup - Require group query - ProxyPass http://influxdb:8086/ - ProxyPassReverse http://influxdb:8086/ - diff --git a/apache/proxy-nodered.conf b/apache/proxy-nodered.conf deleted file mode 100644 index e97cdc1..0000000 --- a/apache/proxy-nodered.conf +++ /dev/null @@ -1,22 +0,0 @@ - ProxyPass /node-red/comms/ ws://node-red:1880/comms/ - ProxyPassReverse /node-red/comms/ ws://node-red:1880/comms/ - ProxyPass /node-red/comms ws://node-red:1880/comms - ProxyPassReverse /node-red/comms ws://node-red:1880/comms - ProxyPass /node-red/ http://node-red:1880/ - ProxyPassReverse /node-red/ http://node-red:1880/ - - Redirect 301 /node-red https://@{FQDN}/node-red/ - - Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains" - Header always set X-Frame-Options "SAMEORIGIN" - Header always set X-Xss-Protection "1; mode=block" - Header always set X-Content-Type-Options "nosniff" - Header always set Content-Security-Policy "default-src 'self' https:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data:" - Header always set Referrer-Policy: "same-origin" - Header always set Feature-Policy: "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'" - AuthType Basic - AuthName "Node-RED" - AuthUserFile /etc/apache2/authdata/.htpasswd - AuthGroupFile /etc/apache2/authdata/.htgroup - Require group node-red - diff --git a/apache/setup.sh b/apache/setup.sh deleted file mode 100644 index 92f902b..0000000 --- a/apache/setup.sh +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env bash - -# set up the environment; these might not be set. -export HOME="/root" -export PATH="${PATH}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" - -# test that we have a proper setup. -cd $HOME || exit 2 - -# test that authentication is set up, and set permissions as needed by us -if [ ! -d /etc/apache2/authdata ] ; then - echo "The authdata directory is not set; refer to docker-compose script" - exit 3 -fi -if [ ! -f /etc/apache2/authdata/.htpasswd ]; then - echo ".htpasswd file not found" - exit 3 -fi -if [ ! 
-f /etc/apache2/authdata/.htgroup ]; then - echo ".htgroup file not found" - exit 3 -fi -chown www-data /etc/apache2/authdata /etc/apache2/authdata/.htpasswd /etc/apache2/authdata/.htgroup -chmod 700 /etc/apache2/authdata -chmod 600 /etc/apache2/authdata/.htpasswd /etc/apache2/authdata/.htgroup - -# check that we got the vars we need -if [ -z "$CERTBOT_DOMAINS" -o "$CERTBOT_DOMAINS" = "." ]; then - echo "The docker-compose script must set CERTBOT_DOMAINS to value to be passed to certbot for --domains" - exit 3 -fi - -if [ -z "$CERTBOT_EMAIL" -o "$CERTBOT_EMAIL" = "." ]; then - echo "The docker-compose script must set CERTBOT_EMAIL to an email address useful to certbot/letsencrypt for notifications" - exit 3 -fi - -if [ -z "$APACHE_FQDN" -o "$APACHE_FQDN" = "." ]; then - echo "The docker-compose script must set APACHE_FQDN to the (single) fully-qualified domain at the top level" - exit 3 -fi - -# run cerbot to set up apache -if [ "$CERTBOT_TEST" != "test" ]; then - certbot --agree-tos --email "${CERTBOT_EMAIL}" --non-interactive --domains "$CERTBOT_DOMAINS" --apache --agree-tos --rsa-key-size 4096 --redirect || exit 4 - - # certbot actually launched apache. The simple hack is to stop it; then launch - # it again after we've edited the config files. - /usr/sbin/apache2ctl stop -fi - -# now, add the fields to the virtual host section for https. -set -- proxy-*.conf -if [ "$1" != "proxy-*.conf" ] ; then - echo "add proxy-specs to configuration from:" "$@" - sed -e "s/@{FQDN}/${APACHE_FQDN}/g" "$@" > /tmp/proxyspecs.conf || exit 5 - sed -e '/^ServerName/r/tmp/proxyspecs.conf' /etc/apache2/sites-available/000-default-le-ssl.conf > /tmp/000-default-le-ssl-local.conf || exit 6 - mv /tmp/000-default-le-ssl-local.conf /etc/apache2/sites-available || exit 7 - echo "enable the modified site, and disable the ssl defaults" - /usr/sbin/a2dissite 000-default-le-ssl.conf || exit 8 - /usr/sbin/a2ensite 000-default-le-ssl-local.conf || exit 9 -fi - diff --git a/assets/Connection-architecture-old.png b/assets/Connection-architecture-old.png new file mode 100644 index 0000000..d71f6e2 Binary files /dev/null and b/assets/Connection-architecture-old.png differ diff --git a/assets/Connection-architecture.svg b/assets/Connection-architecture-old.svg similarity index 100% rename from assets/Connection-architecture.svg rename to assets/Connection-architecture-old.svg diff --git a/assets/Connection-architecture.png b/assets/Connection-architecture.png index d71f6e2..dd423fe 100644 Binary files a/assets/Connection-architecture.png and b/assets/Connection-architecture.png differ diff --git a/assets/graf-mail_test_1.png b/assets/graf-mail_test_1.png new file mode 100755 index 0000000..f46a915 Binary files /dev/null and b/assets/graf-mail_test_1.png differ diff --git a/assets/graf-mail_test_2.png b/assets/graf-mail_test_2.png new file mode 100755 index 0000000..79806e4 Binary files /dev/null and b/assets/graf-mail_test_2.png differ diff --git a/assets/graf-mail_test_3.png b/assets/graf-mail_test_3.png new file mode 100755 index 0000000..72d8d94 Binary files /dev/null and b/assets/graf-mail_test_3.png differ diff --git a/assets/mqtt_nginx_1.png b/assets/mqtt_nginx_1.png new file mode 100755 index 0000000..9080541 Binary files /dev/null and b/assets/mqtt_nginx_1.png differ diff --git a/assets/mqtt_nginx_2.png b/assets/mqtt_nginx_2.png new file mode 100755 index 0000000..84b9746 Binary files /dev/null and b/assets/mqtt_nginx_2.png differ diff --git a/assets/mqtt_nginx_3.png b/assets/mqtt_nginx_3.png new file mode 100755 
index 0000000..033d1b1 Binary files /dev/null and b/assets/mqtt_nginx_3.png differ diff --git a/assets/mqtt_nginx_4.png b/assets/mqtt_nginx_4.png new file mode 100755 index 0000000..ea1abc4 Binary files /dev/null and b/assets/mqtt_nginx_4.png differ diff --git a/assets/mqtt_web_1.png b/assets/mqtt_web_1.png new file mode 100755 index 0000000..18a7cdb Binary files /dev/null and b/assets/mqtt_web_1.png differ diff --git a/assets/mqtt_web_2.png b/assets/mqtt_web_2.png new file mode 100755 index 0000000..d781ee5 Binary files /dev/null and b/assets/mqtt_web_2.png differ diff --git a/assets/mqtt_web_3.png b/assets/mqtt_web_3.png new file mode 100755 index 0000000..9fd319b Binary files /dev/null and b/assets/mqtt_web_3.png differ diff --git a/assets/mqtt_web_4.png b/assets/mqtt_web_4.png new file mode 100755 index 0000000..eaaaa93 Binary files /dev/null and b/assets/mqtt_web_4.png differ diff --git a/assets/postfix_node_1.png b/assets/postfix_node_1.png new file mode 100755 index 0000000..7e95da5 Binary files /dev/null and b/assets/postfix_node_1.png differ diff --git a/assets/postfix_node_2.png b/assets/postfix_node_2.png new file mode 100755 index 0000000..114c76b Binary files /dev/null and b/assets/postfix_node_2.png differ diff --git a/assets/postfix_node_3.png b/assets/postfix_node_3.png new file mode 100755 index 0000000..97d4034 Binary files /dev/null and b/assets/postfix_node_3.png differ diff --git a/docker-compose.yml b/docker-compose.yml index 35ede6c..7e5704a 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -3,10 +3,10 @@ # Name: docker-compose.yml # # Function: -# Configure the docker-ttn-dashboard collection of docker containers. +# Configure the docker-iot-dashboard collection of docker containers. # # Copyright: -# This file copyright (c) 2017-2019 by +# This file copyright (c) 2017-2020 by # # MCCI Corporation # 3520 Krums Corners Road @@ -15,15 +15,16 @@ # Distributed under the terms of the license file shipped with this # collection. # -# Author: -# Terry Moore, MCCI Corporation -# Based on work by Johan Stocking, The Things Network +# Authors: +# Terry Moore, MCCI Corporation +# Murugan Chandrasekar, MCCI Corporation +# Based on work by Johan Stocking, The Things Network # ############################################################################## # # Note: if you are running this manually, you must set a number of variables, -# not least TTN_DASHBOARD_DATA, which must be the path to the top data directory +# not least IOT_DASHBOARD_DATA, which must be the path to the top data directory # for these apps. If you use dashboard-ctl, this will be done for you. # # To get a list of all variables used in this file, use the following command: @@ -32,120 +33,147 @@ # To get a list of undocumented variables, use the following command, # then look in the left hand column: # -# comm <(sed -n -e 's/#.*$//' -e 's/[^$]*${\([^-:}]*\)[-:}][^$]*/\1/p' docker-compose.yml | LC_ALL=C sort -u) <(sed -n -e '/^# TTN_[A-Z_0-9]*$/s/^# //p' docker-compose.yml | LC_ALL=C sort -u ) +# comm <(sed -n -e 's/#.*$//' -e 's/[^$]*${\([^-:}]*\)[-:}][^$]*/\1/p' docker-compose.yml | LC_ALL=C sort -u) <(sed -n -e '/^# IOT_[A-Z_0-9]*$/s/^# //p' docker-compose.yml | LC_ALL=C sort -u ) # #+++ -# TTN_DASHBOARD_APACHE_FQDN -# The fully-qualified domain name to be served by Apache. +# IOT_DASHBOARD_NGINX_FQDN +# The fully-qualified domain name to be served by NGINX. # -# TTN_DASHBOARD_AWS_ACCESS_KEY_ID -# The access key for AWS for backups. 
+# IOT_DASHBOARD_AWS_ACCESS_KEY_ID +# The access key for AWS for backups. # -# TTN_DASHBOARD_AWS_DEFAULT_REGION -# The AWS default region. +# IOT_DASHBOARD_AWS_DEFAULT_REGION +# The AWS default region. # -# TTN_DASHBOARD_AWS_S3_BUCKET_INFLUXDB -# The S3 bucket to use for uploading the influxdb backup data. +# IOT_DASHBOARD_AWS_S3_BUCKET_INFLUXDB +# The S3 bucket to use for uploading the influxdb backup data. # -# TTN_DASHBOARD_AWS_SECRET_ACCESS_KEY -# The AWS API secret key for backing up influxdb data. +# IOT_DASHBOARD_AWS_SECRET_ACCESS_KEY +# The AWS API secret key for backing up influxdb data. # -# TTN_DASHBOARD_CERTBOT_EMAIL +# IOT_DASHBOARD_CERTBOT_EMAIL # The email address to be used for registering with Let's Encrypt. # -# TTN_DASHBOARD_CERTBOT_FQDN +# IOT_DASHBOARD_CERTBOT_FQDN # The domain(s) to be used by certbot when registering with Let's Encrypt. # -# TTN_DASHBOARD_DATA +# IOT_DASHBOARD_DATA # The path to the data directory. This must end with a '/', and must either # be absolute or must begin with './'. (If not, you'll get parse errors.) # -# TTN_DASHBOARD_GRAFANA_ADMIN_PASSWORD +# IOT_DASHBOARD_GRAFANA_ADMIN_PASSWORD # The password to be used for the admin user on first login. This is ignored # after the Grafana database has been built. # -# TTN_DASHBOARD_GRAFANA_PROJECT_NAME +# IOT_DASHBOARD_GRAFANA_PROJECT_NAME # The project name to be used for the emails from the administrator. # -# TTN_DASHBOARD_GRAFANA_LOG_MODE +# IOT_DASHBOARD_GRAFANA_LOG_MODE # Set the grafana log mode. # -# TTN_DASHBOARD_GRAFANA_LOG_LEVEL +# IOT_DASHBOARD_GRAFANA_LOG_LEVEL # Set the grafana log level (e.g. debug) # -# TTN_DASHBOARD_GRAFANA_SMTP_ENABLED -# Set to true to enable SMTP. +# IOT_DASHBOARD_GRAFANA_SMTP_ENABLED +# Set to false to disable SMTP. +# Defaults to true # -# TTN_DASHBOARD_GRAFANA_SMTP_SKIP_VERIFY -# Set to true to disable SSL verification. -# Defaults to false. +# IOT_DASHBOARD_GRAFANA_SMTP_SKIP_VERIFY +# Set to false to enable SSL verification. +# Defaults to true. # -# TTN_DASHBOARD_GRAFANA_INSTALL_PLUGINS -# A list of grafana plugins to install. +# IOT_DASHBOARD_GRAFANA_INSTALL_PLUGINS +# A list of grafana plugins to install. Use (comma and space) ", " to delimit plugins. # -# TTN_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS -# The "from" address for Grafana emails. +# IOT_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS +# The "from" address for Grafana emails. # -# TTN_DASHBOARD_GRAFANA_USERS_ALLOW_SIGN_UP +# IOT_DASHBOARD_GRAFANA_USERS_ALLOW_SIGN_UP # Set to true to allow users to sign up. # -# TTN_DASHBOARD_INFLUXDB_ADMIN_PASSWORD +# IOT_DASHBOARD_INFLUXDB_ADMIN_PASSWORD # The password to be used for the admin user by influxdb. Again, this is # ignored after the influxdb database has been built. # -# TTN_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME +# IOT_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME # The inital database to be created on first launch of influxdb. Ignored # after influxdb has been launched. # -# TTN_DASHBOARD_MAIL_DOMAIN -# the postfix mail domain. +# IOT_DASHBOARD_MAIL_DOMAIN +# the postfix mail domain. # -# TTN_DASHBOARD_MAIL_HOST_NAME -# the external FQDN for the mail host. +# IOT_DASHBOARD_MAIL_HOST_NAME +# the external FQDN for the mail host. # -# TTN_DASHBOARD_MAIL_RELAY_IP -# the mail relay machine, assuming that the real mailer is upstream from us. +# IOT_DASHBOARD_MAIL_RELAY_IP +# the mail relay machine, assuming that the real mailer is upstream from us. # -# TTN_DASHBOARD_NODERED_INSTALL_PLUGINS -# A list of additional modules to be intalled. 
+# IOT_DASHBOARD_MAIL_SMTP_LOGIN +# the mail relay login: name@example.com -- it will come from your upstream +# provider. # -# TTN_DASHBOARD_PORT_HTTP +# IOT_DASHBOARD_MAIL_SMTP_PASSWORD +# the mail relay password +# +# IOT_DASHBOARD_PORT_HTTP # The port to listen to for HTTP. Primarily for test purposes. Defaults to # 80. # -# TTN_DASHBOARD_PORT_HTTPS +# IOT_DASHBOARD_PORT_HTTPS # The port to listen to for HTTPS. Primarily for test purposes. Defaults to # 443. # -# TTN_DASHBOARD_TIMEZONE +# IOT_DASHBOARD_TIMEZONE # The timezone to use. Defaults to GMT. +# +# IOT_DASHBOARD_NODE_RED_VERSION +# To Install specific version of node-red version. Defaults to latest. +# +# IOT_DASHBOARD_NODE_RED_INSTALL_MODULES +# Install the required node-red modules. use "space" to delimit the modules. +# +# IOT_DASHBOARD_PORT_MQTT_TCP +# Accessing mqtt channel over TCP. Defaults to 1883. +# +# IOT_DASHBOARD_PORT_MQTT_SSL +# Accessing mqtt channel over TLS/SSL. Defaults to 8883. +# +# IOT_DASHBOARD_PORT_MQTT_WSS +# Accessing mqtt channel over WSS. Defaults to 8083. +# +# IOT_DASHBOARD_INFLUXDB_MAIL_HOST_NAME +# the external FQDN for the influxdb host +# +# IOT_DASHBOARD_INFLUXDB_BACKUP_EMAIL +# Backup mail will be sent to the mentioned MAIL IDs. Use "space" to delimit the MAIL IDs. +# #--- -# Also see apache/setup.sh, which uses some additional test variables when +# Also see nginx/setup.sh, which uses some additional test variables when # debugging. # version: '3.7' services: - # the apache server connects us to the outside world - apache: + # the nginx server connects us to the outside world + nginx: environment: - CERTBOT_DOMAINS: "${TTN_DASHBOARD_CERTBOT_FQDN:-.}" - CERTBOT_EMAIL: "${TTN_DASHBOARD_CERTBOT_EMAIL:-.}" - APACHE_FQDN: "${TTN_DASHBOARD_APACHE_FQDN:-.}" + CERTBOT_DOMAINS: "${IOT_DASHBOARD_CERTBOT_FQDN:-.}" + CERTBOT_EMAIL: "${IOT_DASHBOARD_CERTBOT_EMAIL:-.}" + NGINX_FQDN: "${IOT_DASHBOARD_NGINX_FQDN:-.}" restart: unless-stopped - build: apache + build: nginx ports: - - "${TTN_DASHBOARD_PORT_HTTP:-80}:80" - - "${TTN_DASHBOARD_PORT_HTTPS:-443}:443" + - "${IOT_DASHBOARD_PORT_HTTP:-80}:80" + - "${IOT_DASHBOARD_PORT_HTTPS:-443}:443" volumes: - - "${TTN_DASHBOARD_DATA}docker-apache2/htdocs:/usr/local/apache2/htdocs" - - "${TTN_DASHBOARD_DATA}docker-apache2/letsencrypt:/etc/letsencrypt" - - "${TTN_DASHBOARD_DATA}docker-apache2/authdata:/etc/apache2/authdata" + - "${IOT_DASHBOARD_DATA}docker-nginx/htdocs:/usr/local/nginx/htdocs" + - "${IOT_DASHBOARD_DATA}docker-nginx/letsencrypt:/etc/letsencrypt" + - "${IOT_DASHBOARD_DATA}docker-nginx/authdata/influxdb:/etc/nginx/authdata/influxdb" + - "${IOT_DASHBOARD_DATA}docker-nginx/authdata/nodered:/etc/nginx/authdata/nodered" - # - # apache proxies for all of the below, so it needs to have links to them. - # Examine apache/proxy-*.conf to see how the links are set up. Also bear + # nginx proxies for all of the below, so it needs to have links to them. + # Examine nginx/proxy-*.conf to see how the links are set up. Also bear # in mind that the individual servers (e.g. grafana) may need to be # informed about the nature of the redirections. 
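For reference, a hypothetical `.env` fragment covering the variables documented above might look like the sketch below. Every value is a placeholder to be replaced with your own settings (if you use `dashboard-ctl`, these are set for you), and the Node-RED module list is only an example:

```bash
# example .env for docker-compose -- all values are placeholders
IOT_DASHBOARD_NGINX_FQDN=dashboard.example.com
IOT_DASHBOARD_CERTBOT_FQDN=dashboard.example.com
IOT_DASHBOARD_CERTBOT_EMAIL=admin@example.com
IOT_DASHBOARD_DATA=./data/
IOT_DASHBOARD_TIMEZONE=GMT
IOT_DASHBOARD_GRAFANA_ADMIN_PASSWORD=change-me
IOT_DASHBOARD_INFLUXDB_ADMIN_PASSWORD=change-me-too
IOT_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME=demo
IOT_DASHBOARD_NODE_RED_INSTALL_MODULES="node-red-dashboard node-red-node-email"
IOT_DASHBOARD_INFLUXDB_BACKUP_EMAIL=ops@example.com
```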
# @@ -153,84 +181,111 @@ services: - grafana - node-red - influxdb + - mqtts node-red: restart: unless-stopped + build: + context: ./node-red + dockerfile: Dockerfile + args: + node_red_version: "${IOT_DASHBOARD_NODE_RED_VERSION:-latest}" + node_red_contrib_ttn_version: "${IOT_DASHBOARD_NODE_RED_CONTRIB_TTN_VERSION:-latest}" + node_red_install_modules: "${IOT_DASHBOARD_NODE_RED_INSTALL_MODULES:-}" user: "root" volumes: - - "${TTN_DASHBOARD_DATA}node-red:/data" - # nodered opens ports on influxdb so it needs to be able to talk to it. + - "${IOT_DASHBOARD_DATA}node-red:/data" + environment: + TZ: "${IOT_DASHBOARD_TIMEZONE:-GMT}" + # nodered opens ports on influxdb and postfix so it needs to be able to talk to it. links: - influxdb - postfix - environment: - TZ: "${TTN_DASHBOARD_TIMEZONE:-GMT}" - build: - context: ./node-red - args: - NODERED_INSTALL_PLUGINS: "${TTN_DASHBOARD_NODERED_INSTALL_PLUGINS:-}" - - influxdb: - restart: unless-stopped - image: influxdb:latest - volumes: - - "${TTN_DASHBOARD_DATA}influxdb:/var/lib/influxdb" - environment: - INFLUXDB_DB: "${TTN_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME:-demo}" - INFLUXDB_INIT_PWD: "${TTN_DASHBOARD_INFLUXDB_ADMIN_PASSWORD:-!notset}" - INFLUXDB_BIND_ADDRESS: "influxdb:8088" - influxdb-backup: + mqtts: restart: unless-stopped - build: influxdb-backup + build: + context: ./mqtts + dockerfile: Dockerfile + args: + ssl_cert: "${IOT_DASHBOARD_NGINX_FQDN:-.}" + ports: +# - "${IOT_DASHBOARD_PORT_MQTT_TCP:-1883}:1883" #-----> Connection on this TCP port is insecure. can be added if needed + - "${IOT_DASHBOARD_PORT_MQTT_SSL:-8883}:8883" + - "${IOT_DASHBOARD_PORT_MQTT_WSS:-8083}:8083" volumes: - # Dircectory for database backup inside "./data" - - "${TTN_DASHBOARD_DATA}influx-backup:/var/lib/influxdb-backup" - # Connecting influxdb data directory to extra instances - - "${TTN_DASHBOARD_DATA}influxdb:/var/lib/influxdb" - environment: - INFLUX_HOST: influxdb - S3_BUCKET_INFLUXDB: "${TTN_DASHBOARD_AWS_S3_BUCKET_INFLUXDB:-.}" - AWS_ACCESS_KEY_ID: "${TTN_DASHBOARD_AWS_ACCESS_KEY_ID:-.}" - AWS_SECRET_ACCESS_KEY: "${TTN_DASHBOARD_AWS_SECRET_ACCESS_KEY:-.}" - AWS_DEFAULT_REGION: "${TTN_DASHBOARD_AWS_DEFAULT_REGION:-.}" - links: - - influxdb - + - "${IOT_DASHBOARD_DATA}docker-nginx/letsencrypt:/etc/letsencrypt" + - "${IOT_DASHBOARD_DATA}mqtt/credentials:/etc/mosquitto/credentials" + hostname: "${IOT_DASHBOARD_MQTT_HOST_NAME:-mqtts}" - postfix: + influxdb: restart: unless-stopped build: - context: ./postfix + context: ./influxdb dockerfile: Dockerfile args: - relay_ip: "${TTN_DASHBOARD_MAIL_RELAY_IP:-}" - host_name: "${TTN_DASHBOARD_MAIL_HOST_NAME:-.}" - domain: "${TTN_DASHBOARD_MAIL_DOMAIN:-.}" - + distrib_id: "${IOT_DASHBOARD_OS_DISTRIB_ID:-ubuntu}" + distrib_codename: "${IOT_DASHBOARD_OS_DISTRIB_CODENAME:-xenial}" + hostname: "${IOT_DASHBOARD_INFLUXDB_MAIL_HOST_NAME:-.}" + relay_ip: "postfix:25" + domain: "${IOT_DASHBOARD_MAIL_DOMAIN:-.}" + hostname: "${IOT_DASHBOARD_INFLUXDB_MAIL_HOST_NAME:-.}" + expose: + - "8086" + volumes: + - "${IOT_DASHBOARD_DATA}influxdb:/var/lib/influxdb" + # Dircectory for influxdb metadata and database backup inside "./data" + - "${IOT_DASHBOARD_DATA}influx-backup:/var/lib/influxdb-backup" + - "${IOT_DASHBOARD_DATA}influxdb-S3-bucket:/var/lib/influxdb-S3-bucket" + environment: + INFLUXDB_INIT_PWD: "${IOT_DASHBOARD_INFLUXDB_ADMIN_PASSWORD:-!notset}" + PRE_CREATE_DB: "${IOT_DASHBOARD_INFLUXDB_INITIAL_DATABASE_NAME:-demo}" + INFLUXDB_BIND_ADDRESS: "influxdb:8088" + INFLUXDB_BACKUP_MAIL: "${IOT_DASHBOARD_INFLUXDB_BACKUP_EMAIL:-}" + # 
Backuping influxdb metadata and database to cloud + SOURCE_NAME: "${IOT_DASHBOARD_CERTBOT_FQDN}" + S3_BUCKET_INFLUXDB: "${IOT_DASHBOARD_AWS_S3_BUCKET_INFLUXDB:-.}" + AWS_ACCESS_KEY_ID: "${IOT_DASHBOARD_AWS_ACCESS_KEY_ID:-.}" + AWS_SECRET_ACCESS_KEY: "${IOT_DASHBOARD_AWS_SECRET_ACCESS_KEY:-.}" + AWS_DEFAULT_REGION: "${IOT_DASHBOARD_AWS_DEFAULT_REGION:-.}" grafana: restart: unless-stopped - image: grafana/grafana:latest + image: grafana/grafana:${IOT_DASHBOARD_GRAFANA_VERSION:-latest} user: "root" volumes: - - "${TTN_DASHBOARD_DATA}grafana:/var/lib/grafana" + - "${IOT_DASHBOARD_DATA}grafana:/var/lib/grafana" environment: - GF_SECURITY_ADMIN_PASSWORD: "${TTN_DASHBOARD_GRAFANA_ADMIN_PASSWORD:-!notset}" - GF_SERVER_DOMAIN: "${TTN_DASHBOARD_APACHE_FQDN}" + GF_SECURITY_ADMIN_PASSWORD: "${IOT_DASHBOARD_GRAFANA_ADMIN_PASSWORD:-!notset}" + GF_SERVER_DOMAIN: "${IOT_DASHBOARD_NGINX_FQDN}" GF_SERVER_ROOT_URL: "https://%(domain)s/grafana/" - GF_SMTP_ENABLED: "${TTN_DASHBOARD_GRAFANA_SMTP_ENABLED:-false}" - GF_SMTP_SKIP_VERIFY: "${TTN_DASHBOARD_GRAFANA_SMTP_SKIP_VERIFY:-false}" + GF_SMTP_ENABLED: "${IOT_DASHBOARD_GRAFANA_SMTP_ENABLED:-true}" + GF_SMTP_SKIP_VERIFY: "${IOT_DASHBOARD_GRAFANA_SMTP_SKIP_VERIFY:-true}" GF_SMTP_HOST: "postfix:25" - GF_SMTP_FROM_ADDRESS: "${TTN_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS:-grafana@localhost}" - GF_SMTP_FROM_NAME: "${TTN_DASHBOARD_GRAFANA_PROJECT_NAME:-Default} grafana admin" - GF_LOG_MODE: "${TTN_DASHBOARD_GRAFANA_LOG_MODE:-console,file}" - GF_LOG_LEVEL: "${TTN_DASHBOARD_GRAFANA_LOG_LEVEL:-info}" - GF_INSTALL_PLUGINS: "${TTN_DASHBOARD_GRAFANA_INSTALL_PLUGINS:-}" - GF_USERS_ALLOW_SIGN_UP: "${TTN_DASHBOARD_GRAFANA_USERS_ALLOW_SIGN_UP:-false}" + GF_SMTP_FROM_ADDRESS: "${IOT_DASHBOARD_GRAFANA_SMTP_FROM_ADDRESS:-grafana@localhost}" + GF_SMTP_FROM_NAME: "${IOT_DASHBOARD_GRAFANA_PROJECT_NAME:-Default} grafana admin" + GF_LOG_MODE: "${IOT_DASHBOARD_GRAFANA_LOG_MODE:-console,file}" + GF_LOG_LEVEL: "${IOT_DASHBOARD_GRAFANA_LOG_LEVEL:-info}" + GF_INSTALL_PLUGINS: "${IOT_DASHBOARD_GRAFANA_INSTALL_PLUGINS:-grafana-worldmap-panel}" + GF_USERS_ALLOW_SIGN_UP: "${IOT_DASHBOARD_GRAFANA_USERS_ALLOW_SIGN_UP:-false}" # grafana opens ports on influxdb and postfix, so it needs to be able to talk to it. links: - influxdb - postfix + postfix: + restart: unless-stopped + build: + context: ./postfix + dockerfile: Dockerfile + args: + hostname: "${IOT_DASHBOARD_MAIL_HOST_NAME:-.}" + relay_ip: "${IOT_DASHBOARD_MAIL_RELAY_IP:-.}" + domain: "${IOT_DASHBOARD_MAIL_DOMAIN:-.}" + smtp_login: "${IOT_DASHBOARD_MAIL_SMTP_LOGIN:-.}" + smtp_password: "${IOT_DASHBOARD_MAIL_SMTP_PASSWORD:-.}" + expose: + - "25" + hostname: "${IOT_DASHBOARD_MAIL_HOST_NAME:-.}" + ### end of file ### diff --git a/htpasswd_migration.sh b/htpasswd_migration.sh new file mode 100644 index 0000000..2ba2547 --- /dev/null +++ b/htpasswd_migration.sh @@ -0,0 +1,89 @@ +#!/bin/bash +# This script will create htpasswd for each controlled services for migration Apache->Nginx +read -r -p "Please enter .env file location : " envi + +if [[ ! -f $envi ]] +then + echo "Please enter correct file location" + exit +fi + +# $1 is the file to read. Result is one setting per line, name followed by single space +# followed by value. We can't source the .env file because it's really a .ini file and +# doesn't follow shell syntax. 
+function _parseenv { + sed -n -e 's/#.*$//g' -e 's/^[ \t]*//' -e 's/[ \t]*=[ \t]*/=/' -e 's/^\([A-Za-z0-9_][A-Za-z0-9_]*\)=\(.*\)$/\1 \2/p' "$1" +} + +TTN_DASHBOARD_DATA="$(_parseenv "$envi" | sed -ne 's/^TTN_DASHBOARD_DATA //p')" + +htgroup="${TTN_DASHBOARD_DATA}docker-apache2/authdata/.htgroup" +htpasswd="${TTN_DASHBOARD_DATA}docker-apache2/authdata/.htpasswd" + +PS3="Please enter your choice on the number listed above, To exit press 'ctrl+d ' : " +select var in "creating htpasswd for each controlled service manually" "creating htpasswd for each controlled service automatically" +do +case $var in + + "creating htpasswd for each controlled service manually") + PS3="Please select service: " + select output in $(sed 's/:.*$//' "$htgroup") + do + true > "${output}_htpasswd" + + + for i in $(tr < "$htgroup" ' |,' '\n' | sed 's/.*:$//' | sort -u) + do + read -r -p "Do you want the User: $i to be added in .htpasswd (y/n) : " j + case $j in + + [yY][eE][sS]|[yY]) + + sed -n "/$i/p" "$htpasswd" >> "${output}_htpasswd" + ;; + + [nN][oO]|[nN]) + + continue + ;; + *) + echo "Please Enter yes or no" + break + ;; + esac + done + done + echo " " + echo " " + echo " " + echo "It is done. Thanks!" + echo " " + echo " " + exit + ;; + + "creating htpasswd for each controlled service automatically") + while read -r line + do + file=$(echo "$line" | awk '{print $1}' | sed 's/://') + echo "create:" "${file}_htpasswd" + true > "${file}_htpasswd" + for k in $(echo "$line" | tr ' |,' '\n') + do + sed -n "/$k/p" "$htpasswd" >> "${file}_htpasswd" + done + done < "$htgroup" + echo " " + echo " " + echo " " + echo "It is done. Thanks!" + echo " " + echo " " + exit + ;; + + *) + echo "Please enter correct number" + ;; +esac +done diff --git a/influxdb-backup/Dockerfile b/influxdb-backup/Dockerfile deleted file mode 100644 index 5c97bf2..0000000 --- a/influxdb-backup/Dockerfile +++ /dev/null @@ -1,35 +0,0 @@ -# -# Dockerfile for building the extra-instance-for-influxdb-backup -# - -# Build the extra-instance using phusion base image -FROM phusion/baseimage - -# Install influxdb database -RUN curl -sL https://repos.influxdata.com/influxdb.key | apt-key add - -RUN echo "deb https://repos.influxdata.com/ubuntu xenial stable" > /etc/apt/sources.list.d/influxdb.list -RUN apt-get update && apt-get install -y influxdb - -# To backup influxdb to S3 Bucket, some packages need to be installed as follows: -RUN apt-get update && apt-get install -y python-pip -RUN pip install awscli --upgrade - -# Default InfluxDB host -ENV INFLUX_HOST=influxdb - -# Amazon S3 bucket's backup working Directory -RUN mkdir -p /var/lib/amazon-bucket - -# Change workdir -RUN mkdir -p /opt/influxdb-backup -WORKDIR "/opt/influxdb-backup" - -# Backup script -COPY showdb.sh /bin/showdb.sh -COPY backup.sh /bin/backup.sh -RUN chmod +x /bin/backup.sh - -# Backup directory -RUN mkdir -p /var/lib/influxdb-backup - -# end of file diff --git a/influxdb-backup/README.md b/influxdb-backup/README.md deleted file mode 100644 index a898053..0000000 --- a/influxdb-backup/README.md +++ /dev/null @@ -1,173 +0,0 @@ -# BUILD SETUP - -```sh - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker-compose up -d -Creating network "dockerttndashboard_default" with the default driver -Creating dockerttndashboard_influxdb_1 ... -Creating dockerttndashboard_influxdb_1 ... done -Creating dockerttndashboard_influxdb-backup_1 ... -Creating dockerttndashboard_node-red_1 ... -Creating dockerttndashboard_grafana_1 ... 
-Creating dockerttndashboard_grafana_1 -Creating dockerttndashboard_node-red_1 -Creating dockerttndashboard_grafana_1 ... done -Creating dockerttndashboard_apache_1 ... -Creating dockerttndashboard_apache_1 ... done - -``` - -## status of docker container and databases - -```sh - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker-compose ps - Name Command State Ports ------------------------------------------------------------------------------------------------------------------------- -dockerttndashboard_apache_1 /bin/bash /root/setup.sh Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp -dockerttndashboard_apache_run_1 /bin/bash Up -dockerttndashboard_grafana_1 /run.sh Up 3000/tcp -dockerttndashboard_influxdb-backup_1 /entrypoint.sh influxd Up 8086/tcp -dockerttndashboard_influxdb_1 /entrypoint.sh influxd Up 8086/tcp -dockerttndashboard_node-red_1 npm start -- --userDir /da ... Up 1880/tcp -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ - - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker-compose exec influxdb influx -Connected to http://localhost:8086 version 1.2.4 -InfluxDB shell version: 1.2.4 -> show databases -name: databases -name ----- -_internal - -> create database testdb -> show databases -name: databases -name ----- -_internal -testdb - -> use testdb -Using database testdb -> INSERT cpu,host=serverA,region=us_west value=0.64 -> SELECT * FROM cpu -name: cpu -time host region value ----- ---- ------ ----- -1508942563099443156 serverA us_west 0.64 - -``` - -# BACKUP DATABASE THROUGH SHELL SCRIPT USING EXTRA INFLUXDB INSTANCE - -`( Database name should be there in as environment variable separated by ":" ) ` - -```sh - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker exec -it dockerttndashboard_influxdb-backup_1 bash - -root@4e5dbfd20c5c:/opt/influxdb-backup# backup.sh -Backup Influx metadata -2017/10/25 14:48:59 backing up metastore to /var/lib/influxdb-backup/meta.00 -2017/10/25 14:48:59 backup complete -Creating backup for _internal -2017/10/25 14:48:59 backing up db=_internal since 0001-01-01 00:00:00 +0000 UTC -2017/10/25 14:48:59 backing up metastore to /var/lib/influxdb-backup/meta.01 -2017/10/25 14:48:59 backing up db=_internal rp=monitor shard=1 to /var/lib/influxdb-backup/_internal.monitor.00001.00 since 0001-01-01 00:00:00 +0000 UTC -2017/10/25 14:49:00 backup complete -Creating backup for testdb -2017/10/25 14:49:00 backing up db=testdb since 0001-01-01 00:00:00 +0000 UTC -2017/10/25 14:49:00 backing up metastore to /var/lib/influxdb-backup/meta.02 -2017/10/25 14:49:00 backing up db=testdb rp=autogen shard=2 to /var/lib/influxdb-backup/testdb.autogen.00002.00 since 0001-01-01 00:00:00 +0000 UTC -2017/10/25 14:49:00 backup complete - -``` - -## Backup has been taken in the below folder - -```sh - -root@4e5dbfd20c5c:/opt/influxdb-backup# cd /var/lib/influxdb-backup/ -root@4e5dbfd20c5c:/var/lib/influxdb-backup# ls -al -total 136 -drwxrwxr-x 2 1000 1000 4096 Oct 25 14:49 . -drwxr-xr-x 14 root root 4096 Oct 25 14:08 .. 
--rw-r--r-- 1 root root 110592 Oct 25 14:49 _internal.monitor.00001.00 --rw-r--r-- 1 root root 204 Oct 25 14:48 meta.00 --rw-r--r-- 1 root root 204 Oct 25 14:48 meta.01 --rw-r--r-- 1 root root 204 Oct 25 14:49 meta.02 --rw-r--r-- 1 root root 2048 Oct 25 14:49 testdb.autogen.00002.00 - -``` -## Drop the "testdb" database for checking purpose - -```sh - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker-compose exec influxdb influx -Connected to http://localhost:8086 version 1.2.4 -InfluxDB shell version: 1.2.4 -> show databases -name: databases -name ----- -_internal -testdb - -> drop database testdb -> show databases -name: databases -name ----- -_internal - -``` - -## RESTORE DROPPED DATABASE - -`(Stop the influxdb database in order to restore dropped "testdb" database)` - -```sh - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker-compose stop influxdb -Stopping dockerttndashboard_influxdb_1 ... done - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker exec -it dockerttndashboard_influxdb-backup_1 bash - -root@4e5dbfd20c5c:/opt/influxdb-backup# influxd restore -metadir /var/lib/influxdb/meta /var/lib/influxdb-backup -Using metastore snapshot: /var/lib/influxdb-backup/meta.02 -root@4e5dbfd20c5c:/opt/influxdb-backup# influxd restore -database testdb -datadir /var/lib/influxdb/data /var/lib/influxdb-backup -Restoring from backup /var/lib/influxdb-backup/testdb.* -unpacking /var/lib/influxdb/data/testdb/autogen/2/000000001-000000001.tsm - - -``` - -## Start the influxdb database and check for whether database has been restored - -```sh - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker-compose start influxdb -Starting influxdb ... done - -cmurugan@iotserver:~/iot/docker-ttn-dashboard$ docker-compose exec influxdb influx -Connected to http://localhost:8086 version 1.2.4 -InfluxDB shell version: 1.2.4 -> show databases -name: databases -name ----- -_internal -testdb - -> use testdb -Using database testdb -> SELECT * FROM cpu -name: cpu -time host region value ----- ---- ------ ----- -1508942563099443156 serverA us_west 0.64 - -``` diff --git a/influxdb-backup/backup.sh b/influxdb-backup/backup.sh deleted file mode 100644 index 16b7661..0000000 --- a/influxdb-backup/backup.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash -#The Shell script will be used for taking backup and send it to Amazon s3 bucket. - -DATABASES=$(/bin/showdb.sh) - -echo 'Backup Influx metadata' -influxd backup -host $INFLUX_HOST:8088 /var/lib/influxdb-backup - -# Replace colons with spaces to create list. -for db in ${DATABASES//:/ }; do - echo "Creating backup for $db" - influxd backup -database $db -host $INFLUX_HOST:8088 /var/lib/influxdb-backup -done - -if [ $? -eq 0 ]; then - - tar czf /var/lib/amazon-bucket/metdata_db_backup_`date +%F`.tar.gz /var/lib/influxdb-backup/ - tar czf /var/lib/amazon-bucket/data_directory_backup_`date +%F`.tar.gz /var/lib/influxdb/ - aws s3 sync /var/lib/amazon-bucket/ s3://${S3_BUCKET_INFLUXDB}/ - -fi - -# Remove the old backup data in local directory to avoid excessive storage use -find /var/lib/amazon-bucket/ -type f -mtime +90 -exec rm {} \; - diff --git a/influxdb-backup/showdb.sh b/influxdb-backup/showdb.sh deleted file mode 100755 index 75c7a85..0000000 --- a/influxdb-backup/showdb.sh +++ /dev/null @@ -1,10 +0,0 @@ -#! 
/bin/bash -# TO Show all Databases that will be used by backup.sh script for backup - -showdb(){ -influx -host $INFLUX_HOST -port 8086 -execute 'SHOW DATABASES' -} - -DATABASES=$(showdb) - -echo $DATABASES | sed -e 's/[\r]//g' | sed -e 's/^.\{26\}//' | sed 's/ /:/g' diff --git a/influxdb/Dockerfile b/influxdb/Dockerfile new file mode 100644 index 0000000..aa45a53 --- /dev/null +++ b/influxdb/Dockerfile @@ -0,0 +1,72 @@ +# +# Dockerfile for building the influxdb instance with S3-backup and Mail alert setup +# + +FROM phusion/baseimage:bionic-1.0.0 + +# Default InfluxDB host +ENV INFLUX_HOST=influxdb + +# Install Influxdb stable release +RUN apt-get update && apt-get install -y wget +ARG distrib_id +ARG distrib_codename + +RUN echo "${distrib_id}" +RUN wget -qO- https://repos.influxdata.com/influxdb.key | apt-key add - +RUN /bin/bash -c "source /etc/lsb-release" +RUN echo "deb https://repos.influxdata.com/${distrib_id} ${distrib_codename} stable" | tee /etc/apt/sources.list.d/influxdb.list + +#some basic package installation for troubleshooting +RUN apt-get update && apt-get install -y \ + iputils-ping \ + net-tools \ + debconf-utils + +# passing arguments to build postfix image +ARG hostname +ARG relay_ip +ARG domain + +# Install Postfix +run echo "postfix postfix/mailname string $host_name" | debconf-set-selections +run echo "postfix postfix/main_mailer_type select Satellite system" | debconf-set-selections +run apt-get update && apt-get install -y postfix +run postconf -e relayhost=$relay_ip + +# This will replace local mail addresses by valid Internet addresses when mail leaves the machine via SMTP. +run echo "root@${hostname} influxdbbackup@${domain}" > /etc/postfix/generic +run postconf -e smtp_generic_maps=hash:/etc/postfix/generic +run postmap /etc/postfix/generic + +# mail command would be used for sending mails +run apt-get install -y mailutils + +# Change workdir +RUN mkdir -p /opt/influxdb-backup +WORKDIR "/opt/influxdb-backup" + +# To backup influxdb to S3 Bucket, some packages need to be installed as follows: +RUN apt-get update && apt-get install -y python-pip influxdb +RUN pip install awscli --upgrade + +# Backup script for influxdb +COPY backup.sh /bin/backup.sh +RUN chmod +x /bin/backup.sh +COPY influxdb.conf /etc/influxdb/influxdb.conf + +# Enable influxdb database automatic backup crontab +RUN mkdir -p /etc/my_init.d +COPY influxdb_cron.sh /etc/my_init.d/influxdb_cron.sh +RUN chmod +x /etc/my_init.d/influxdb_cron.sh + +# Start the postfix daemon during container startup +COPY postfix.sh /etc/my_init.d/postfix.sh +RUN chmod +x /etc/my_init.d/postfix.sh + +# Starting influxd daemon service +RUN mkdir /etc/service/influx +COPY influx.sh /etc/service/influx/run +RUN chmod +x /etc/service/influx/run + +# end of file diff --git a/influxdb/README.md b/influxdb/README.md new file mode 100644 index 0000000..5b9ccda --- /dev/null +++ b/influxdb/README.md @@ -0,0 +1,188 @@ +# Influxdb Backup + + + + + + +- [Status of docker container and databases](#status-of-docker-container-and-databases) +- [Checking the databases available](#checking-the-databases-available) +- [Backing up Databases](#backing-up-databases) +- [Restoring Databases](#restoring-databases) + + + + + +## Status of docker container and databases + +```console +root@ithaca-power:/iot/testing/docker-iot-dashboard# docker-compose ps + Name Command State Ports +-------------------------------------------------------------------------------------------------------------------------------------------------------- 
+docker-iot-dashboard_grafana_1 /run.sh Up 3000/tcp +docker-iot-dashboard_influxdb_1 /sbin/my_init Up 8086/tcp +docker-iot-dashboard_mqtts_1 /sbin/my_init Up 0.0.0.0:1883->1883/tcp, 0.0.0.0:8083->8083/tcp, 0.0.0.0:8883->8883/tcp +docker-iot-dashboard_nginx_1 /sbin/my_init Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp +docker-iot-dashboard_node-red_1 npm start -- --userDir /da ... Up (healthy) 1880/tcp +docker-iot-dashboard_postfix_1 /sbin/my_init Up 25/tcp +``` + +## Checking the databases available + +Moving to `influxdb` container. + +```console +username@ithaca-power:/iot/testing/docker-iot-dashboard$ docker-compose exec influxdb bash +root@influxdb# cd /opt/influxdb-backup +root@influxdb:/opt/influxdb-backup# influx +Connected to http://localhost:8086 version 1.8.0 +InfluxDB shell version: 1.8.0 +> create database testdb +> show databases +name: databases +name +---- +_internal +testdb +> use testdb +Using database testdb +> INSERT cpu,host=serverA,region=us_west value=0.64 +> SELECT * FROM cpu +name: cpu +time host region value +---- ---- ------ ----- +1590247547512536078 serverA us_west 0.64 +> exit +``` + +## Backing up Databases + +Backup can be taken through shell script and synced with Amazon S3 cloud. When complete, mail notification will be sent for the backup. + +The backup shell script `backup.sh` wiil be configured in Crontab while building. (For testing, run `backup.sh` manually ) + +The backup shell script `backup.sh` will back up everything. + +```console +root@influxdb:/opt/influxdb-backup# backup.sh + +Backup Influx metadata +2020/05/23 15:29:40 backing up metastore to /var/lib/influxdb-backup/meta.00 +2020/05/23 15:29:40 No database, retention policy or shard ID given. Full meta store backed up. +2020/05/23 15:29:40 Backing up all databases in portable format +2020/05/23 15:29:40 backing up db= +2020/05/23 15:29:40 backing up db=_internal rp=monitor shard=1 to /var/lib/influxdb-backup/_internal.monitor.00001.00 since 0001-01-01T00:00:00Z +2020/05/23 15:29:40 backing up db=testdb rp=autogen shard=2 to /var/lib/influxdb-backup/testdb.autogen.00002.00 since 0001-01-01T00:00:00Z +2020/05/23 15:29:40 backup complete: +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.meta +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.s1.tar.gz +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.s2.tar.gz +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.manifest +Creating backup for _internal +2020/05/23 15:29:40 backing up metastore to /var/lib/influxdb-backup/meta.00 +2020/05/23 15:29:40 backing up db=_internal +2020/05/23 15:29:40 backing up db=_internal rp=monitor shard=1 to /var/lib/influxdb-backup/_internal.monitor.00001.00 since 0001-01-01T00:00:00Z +2020/05/23 15:29:40 backup complete: +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.meta +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.s1.tar.gz +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.manifest +Creating backup for testdb +2020/05/23 15:29:40 backing up metastore to /var/lib/influxdb-backup/meta.00 +2020/05/23 15:29:40 backing up db=testdb +2020/05/23 15:29:40 backing up db=testdb rp=autogen shard=2 to /var/lib/influxdb-backup/testdb.autogen.00002.00 since 0001-01-01T00:00:00Z +2020/05/23 15:29:40 backup complete: +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.meta +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.s2.tar.gz +2020/05/23 15:29:40 /var/lib/influxdb-backup/20200523T152940Z.manifest +tar: 
Removing leading `/' from member names +tar: Removing leading `/' from member names +tar: Removing leading `/' from hard link targets +upload: ../../var/lib/influxdb-S3-bucket/ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz to s3://mcci-influxdb-test/ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz +upload: ../../var/lib/influxdb-S3-bucket/ithaca-power.mcci.com_data_directory_backup_2020-05-23.tar.gz to s3://mcci-influxdb-test/ithaca-power.mcci.com_data_directory_backup_2020-05-23.tar.gz +``` + +* Backup files will be uploaded in Amazon S3 bucket. They can be viewed using below command. + +```console +root@influxdb:/opt/influxdb-backup# aws s3 ls s3://${S3_BUCKET_INFLUXDB}/ +root@influxdb:/opt/influxdb-backup# aws s3 ls s3://${S3_BUCKET_INFLUXDB}/ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz +2020-05-23 15:29:43 15447 ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz +``` + +# Influxdb Restore + +## Restoring Databases + +In this example, we drop the "`testdb`" database for checking purpose + +```console +root@influxdb:/opt/influxdb-backup# influx +Connected to http://localhost:8086 version 1.8.0 +InfluxDB shell version: 1.8.0 +> show databases +name: databases +name +---- +_internal +testdb +> drop database testdb +> show databases +name: databases +name +---- +_internal +> exit +``` + +Next, we download the backed up databases from the Amazon S3 Bucket. + +```console +root@influxdb:/opt/influxdb-backup# aws s3 cp s3://${S3_BUCKET_INFLUXDB}/ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz . +download: s3://mcci-influxdb-test/ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz to ./ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz +root@influxdb:/opt/influxdb-backup# ls -al +total 28 +drwxr-xr-x 1 root root 4096 May 23 15:37 . +drwxr-xr-x 1 root root 4096 May 18 05:46 .. +-rw-r--r-- 1 root root 15447 May 23 15:29 ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz +``` + +We extract the backed up files. + +```console +root@influxdb:/opt/influxdb-backup# tar xvf staging1-ithaca-power.mcci.com_metdata_db_backup_2020-05-23.tar.gz +var/lib/influxdb-backup/ +var/lib/influxdb-backup/20200523T152940Z.meta +var/lib/influxdb-backup/20200523T152940Z.s1.tar.gz +var/lib/influxdb-backup/20200523T152940Z.s2.tar.gz +var/lib/influxdb-backup/20200523T152940Z.manifest +``` + +We restore all databases found within the backup directory. + +```console +root@influxdb:/opt/influxdb-backup# influxd restore -portable -host $INFLUX_HOST:8088 var/lib/influxdb-backup/ +2020/05/23 15:45:23 Restoring shard 2 live from backup 20200523T152940Z.s2.tar.gz +``` + +Finally, we check that the database has been restored + +```console +root@influxdb:/opt/influxdb-backup# influx +Connected to http://localhost:8086 version 1.8.0 +InfluxDB shell version: 1.8.0 +> show databases +name: databases +name +---- +_internal +testdb +> use testdb +Using database testdb +> SELECT * FROM cpu +name: cpu +time host region value +---- ---- ------ ----- +1590247547512536078 serverA us_west 0.64 +> exit +``` \ No newline at end of file diff --git a/influxdb/backup.sh b/influxdb/backup.sh new file mode 100755 index 0000000..6f7f651 --- /dev/null +++ b/influxdb/backup.sh @@ -0,0 +1,64 @@ +#!/bin/bash +#The Shell script will be used for taking backup and send it to Amazon s3 bucket. 
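+# Environment this script relies on (wired up for the influxdb service in docker-compose.yml):
+#   INFLUX_HOST          - hostname of the influxdb service (defaults to "influxdb" in the Dockerfile)
+#   SOURCE_NAME          - prefix for the archive names, taken from IOT_DASHBOARD_CERTBOT_FQDN
+#   S3_BUCKET_INFLUXDB   - S3 bucket that receives the archives
+#   AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION - credentials for "aws s3 sync"
+#   INFLUXDB_BACKUP_MAIL - address(es) that receive the backup status mail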
+ +# TO list all Databases in influxdb databases +DATE=`date +%d-%m-%y_%H-%M` +showdb(){ +influx -host $INFLUX_HOST -port 8086 -execute 'SHOW DATABASES' +} + +showdb > /data.txt + +sed -i '1,3d' /data.txt + +#Backing up the metadata + +echo 'Backup Influx metadata' +influxd backup -portable -host $INFLUX_HOST:8088 /var/lib/influxdb-backup + + +#Backing up the databases listed. +while read db +do + echo "Creating backup for $db" + influxd backup -portable -database "$db" -host $INFLUX_HOST:8088 /var/lib/influxdb-backup +done < "/data.txt" + +# Moving the backup to Amazon Cloud +if [ $? -eq 0 ]; then + + tar czf /var/lib/influxdb-S3-bucket/${SOURCE_NAME}_metdata_db_backup_`date +%F`.tar.gz /var/lib/influxdb-backup/ + tar czf /var/lib/influxdb-S3-bucket/${SOURCE_NAME}_data_directory_backup_`date +%F`.tar.gz /var/lib/influxdb/ + aws s3 sync /var/lib/influxdb-S3-bucket/ s3://${S3_BUCKET_INFLUXDB}/ + echo "DATE:" $DATE > /influxbackup.txt + echo " " >> /influxbackup.txt + echo "DESCRIPTION: ${SOURCE_NAME}_Influxdb backup" >> /influxbackup.txt + echo " " >> /influxbackup.txt + echo "STATUS: influxdb backup is Successful." >> /influxbackup.txt + echo " " >> /influxbackup.txt + echo "******* Influxdb Database & metadata Backup ********" >> /influxbackup.txt + echo " " >> /influxbackup.txt + aws s3 ls s3://${S3_BUCKET_INFLUXDB}/ --human-readable | grep -i ${SOURCE_NAME}_metdata | cut -d' ' -f3- | tac | head -10 &>> /influxbackup.txt + echo " " >> /influxbackup.txt + echo "************** Influxdb data Backup ****************" >> /influxbackup.txt + echo " " >> /influxbackup.txt + aws s3 ls s3://${S3_BUCKET_INFLUXDB}/ --human-readable | grep -i ${SOURCE_NAME}_data | cut -d' ' -f3- | tac | head -10 &>> /influxbackup.txt + echo " " >> /influxbackup.txt + echo "********************** END ********************* " >> /influxbackup.txt + +else + echo "DATE:" $DATE > /influxbackup.txt + echo " " >> /influxbackup.txt + echo "DESCRIPTION: ${SOURCE_NAME}_Influxdb backup" >> /influxbackup.txt + echo " " >> /influxbackup.txt + echo "STATUS: influxdb backup is Failed." >> /influxbackup.txt + echo " " >> /influxbackup.txt + echo "Something went wrong, Please check it" >> /influxbackup.txt + cat /influxbackup.txt | mail -s "${SOURCE_NAME}: influxdb backup" ${INFLUXDB_BACKUP_MAIL} +fi + +# Remove the old backup data in local directory to avoid excessive storage use +find /var/lib/influxdb-S3-bucket/ -type f -exec rm {} \; +find /var/lib/influxdb-backup/ -type f -exec rm {} \; + +cat /influxbackup.txt | mail -s "${SOURCE_NAME}: influxdb backup" ${INFLUXDB_BACKUP_MAIL} diff --git a/influxdb/influx.sh b/influxdb/influx.sh new file mode 100644 index 0000000..50aa472 --- /dev/null +++ b/influxdb/influx.sh @@ -0,0 +1,4 @@ +#!/bin/sh +#/etc/init.d/influxdb start +/usr/bin/influxd -config /etc/influxdb/influxdb.conf + diff --git a/influxdb/influxdb.conf b/influxdb/influxdb.conf new file mode 100644 index 0000000..986803a --- /dev/null +++ b/influxdb/influxdb.conf @@ -0,0 +1,7 @@ +[meta] + dir = "/var/lib/influxdb/meta" + +[data] + dir = "/var/lib/influxdb/data" + engine = "tsm1" + wal-dir = "/var/lib/influxdb/wal" diff --git a/influxdb/influxdb_cron.sh b/influxdb/influxdb_cron.sh new file mode 100644 index 0000000..4c13808 --- /dev/null +++ b/influxdb/influxdb_cron.sh @@ -0,0 +1,19 @@ +#!/bin/sh + +# exit on unchecked errors +set -e + +# backups are scheduled via the root crontab. 
Start by heading there +cd /root + +#write out current crontab +crontab -l > mycron || echo "no crontab for root, going on" + +#echo new cron into cron file +echo "35 6 * * * /bin/bash -l -c '/bin/backup.sh'" >> mycron + +#delete duplicated lines +sort -u -o mycron mycron + +#install new cron file +crontab mycron diff --git a/influxdb/postfix.sh b/influxdb/postfix.sh new file mode 100644 index 0000000..c14d96e --- /dev/null +++ b/influxdb/postfix.sh @@ -0,0 +1,2 @@ +#!/bin/sh +/etc/init.d/postfix restart diff --git a/influxdb/showdb.sh b/influxdb/showdb.sh new file mode 100755 index 0000000..c328c7f --- /dev/null +++ b/influxdb/showdb.sh @@ -0,0 +1,10 @@ +#! /bin/bash +# TO Show all Databases that will be used by backup.sh script for backup + +showdb(){ +influx -host "$INFLUX_HOST" -port 8086 -execute 'SHOW DATABASES' +} + +DATABASES=$(showdb) + +echo "$DATABASES" | sed -e 's/[\r]//g' | sed -e 's/^.\{26\}//' | sed 's/ /:/g' diff --git a/mqtts/Dockerfile b/mqtts/Dockerfile new file mode 100644 index 0000000..c0e6172 --- /dev/null +++ b/mqtts/Dockerfile @@ -0,0 +1,34 @@ +# +# Dockerfile for building MQTTS +# + +# Build the MQTTS using phusion base image + +# passing arguments to map letsencrypt certs + +FROM phusion/baseimage:bionic-1.0.0 + +# Installing mosquitto packages and certbot +RUN add-apt-repository ppa:certbot/certbot +RUN apt-get update && apt-get install -y \ + certbot \ + mosquitto \ + mosquitto-clients + +# passing arguments to map letsencrypt certs +ARG ssl_cert + +# Linking letsencrypt certiifcates to mosquitto conf +RUN ln -sf /etc/letsencrypt/live/${ssl_cert}/cert.pem /etc/mosquitto/cert.pem +RUN ln -sf /etc/letsencrypt/live/${ssl_cert}/chain.pem /etc/mosquitto/chain.pem +RUN ln -sf /etc/letsencrypt/live/${ssl_cert}/privkey.pem /etc/mosquitto/privkey.pem + + +# Copying mosquitto configuration +COPY mosquitto.conf /etc/mosquitto/mosquitto.conf + +# Start the MQTTS daemon during container startup +RUN mkdir /etc/service/mosquitto +COPY mosquitto.sh /etc/service/mosquitto/run +RUN chmod +x /etc/service/mosquitto/run + diff --git a/mqtts/mosquitto.conf b/mqtts/mosquitto.conf new file mode 100644 index 0000000..9ef570b --- /dev/null +++ b/mqtts/mosquitto.conf @@ -0,0 +1,19 @@ +# Plain MQTT protocol +#listener 1883 + +# username/password authentication; the password file should be present else mosquitto service fail. +allow_anonymous false +password_file /etc/mosquitto/credentials/passwd + +# MQTT over TLS/SSL +listener 8883 +certfile /etc/mosquitto/cert.pem +cafile /etc/mosquitto/chain.pem +keyfile /etc/mosquitto/privkey.pem + +# WebSockets over TLS/SSL +listener 8083 +protocol websockets +certfile /etc/mosquitto/cert.pem +cafile /etc/mosquitto/chain.pem +keyfile /etc/mosquitto/privkey.pem diff --git a/mqtts/mosquitto.sh b/mqtts/mosquitto.sh new file mode 100644 index 0000000..7ffdccf --- /dev/null +++ b/mqtts/mosquitto.sh @@ -0,0 +1,2 @@ +#!/bin/bash +/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf diff --git a/nginx/Dockerfile b/nginx/Dockerfile new file mode 100644 index 0000000..edaff87 --- /dev/null +++ b/nginx/Dockerfile @@ -0,0 +1,31 @@ +# +# Dockerfile for building the Nginx image +# + +# Start from Phusion. 
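Note on the mosquitto configuration above: the `password_file` (`/etc/mosquitto/credentials/passwd`) must exist before the broker starts, or the mosquitto service will fail. One way to create it, assuming the `${IOT_DASHBOARD_DATA}mqtt/credentials:/etc/mosquitto/credentials` volume mapping from `docker-compose.yml` and the example user `user1` used in the MQTT tests, is to run `mosquitto_passwd` inside the mqtts container:

```bash
# create the password file with a first user (-c creates/overwrites the file; you will be prompted for the password)
docker-compose exec mqtts mosquitto_passwd -c /etc/mosquitto/credentials/passwd user1
# add further users without -c, then restart the broker so it re-reads the file
docker-compose restart mqtts
```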
+FROM phusion/baseimage:bionic-1.0.0 + +RUN /usr/bin/apt-get update && /usr/bin/apt-get install software-properties-common -y +RUN /usr/bin/add-apt-repository ppa:certbot/certbot && /usr/bin/apt-get update && /usr/bin/apt-get install nginx apache2-utils -y + +# Add the certbot layer +RUN /usr/bin/apt-get install python-certbot-nginx -y + +# Copying Proxy files +COPY setup.sh proxy-*.conf /root/ + +# Running scripts during container startup for letsencrypt update and configuring proxy files behind Nginx +RUN mkdir -p /etc/my_init.d +COPY setup.sh /etc/my_init.d/setup.sh +RUN chmod +x /etc/my_init.d/setup.sh + +# Enable letsencrypt renewal crontab +COPY certbot_cron.sh /etc/my_init.d/certbot_cron.sh +RUN chmod +x /etc/my_init.d/certbot_cron.sh + +# Start the Nginx daemon during container startup +RUN mkdir /etc/service/nginx +COPY nginx.sh /etc/service/nginx/run +RUN chmod +x /etc/service/nginx/run + +# end of file diff --git a/apache/certbot_cron.sh b/nginx/certbot_cron.sh similarity index 100% rename from apache/certbot_cron.sh rename to nginx/certbot_cron.sh diff --git a/nginx/nginx.sh b/nginx/nginx.sh new file mode 100644 index 0000000..69946a6 --- /dev/null +++ b/nginx/nginx.sh @@ -0,0 +1,2 @@ +#!/bin/bash +/usr/sbin/nginx -g 'daemon off;' diff --git a/nginx/proxy-grafana.conf b/nginx/proxy-grafana.conf new file mode 100644 index 0000000..8a85cc7 --- /dev/null +++ b/nginx/proxy-grafana.conf @@ -0,0 +1,22 @@ + rewrite ^/$ https://@{FQDN}/grafana/ permanent; + rewrite ^/index.html$ https://@{FQDN}/grafana/; + rewrite ^/grafana$ https://@{FQDN}/grafana/; + location /grafana/ { + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + add_header X-Frame-Options SAMEORIGIN always; + add_header X-Xss-Protection "1; mode=block" always; + add_header X-Content-Type-Options nosniff always; + add_header Content-Security-Policy "default-src 'self' https:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline' blob:; img-src 'self' data: *.global.ssl.fastly.net" always; + add_header Feature-Policy: "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'" always; + + proxy_set_header X-Forwarded-Host $host:$server_port; + proxy_set_header X-Forwarded-Server $host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $host; + proxy_http_version 1.1; + proxy_pass http://grafana:3000/; + + } + diff --git a/nginx/proxy-influxdb.conf b/nginx/proxy-influxdb.conf new file mode 100644 index 0000000..b9b7910 --- /dev/null +++ b/nginx/proxy-influxdb.conf @@ -0,0 +1,19 @@ + rewrite ^/influxdb:8086/$ https://@{FQDN}/influxdb:8086/; + location /influxdb:8086/ { + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + add_header X-Frame-Options "SAMEORIGIN" always; + add_header X-Xss-Protection "1; mode=block" always; + add_header X-Content-Type-Options "nosniff" always; + add_header Content-Security-Policy "default-src 'self'" always; + add_header 'Referrer-Policy' 'origin'; + add_header Feature-Policy: "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'" always; + proxy_set_header X-Forwarded-Host $host:$server_port; + proxy_set_header X-Forwarded-Server $host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + 
proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $host; + proxy_pass http://influxdb:8086/; + auth_basic "InfluxDB queries"; + auth_basic_user_file /etc/nginx/authdata/influxdb/.htpasswd; + } diff --git a/nginx/proxy-mqtts.conf b/nginx/proxy-mqtts.conf new file mode 100644 index 0000000..b07f130 --- /dev/null +++ b/nginx/proxy-mqtts.conf @@ -0,0 +1,10 @@ + rewrite ^/mqtts$ https://@{FQDN}/mqtts/; + location /mqtts/ { + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $host; + proxy_http_version 1.1; + proxy_pass https://mqtts:8083/; + + } diff --git a/nginx/proxy-nodered.conf b/nginx/proxy-nodered.conf new file mode 100644 index 0000000..12b891b --- /dev/null +++ b/nginx/proxy-nodered.conf @@ -0,0 +1,38 @@ + rewrite ^/node-red$ https://@{FQDN}/node-red/; + location /node-red/ { + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + add_header X-Frame-Options "SAMEORIGIN" always; + add_header X-Xss-Protection "1; mode=block" always; + add_header X-Content-Type-Options "nosniff" always; + add_header Content-Security-Policy "default-src 'self' https:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data:" always; + add_header 'Referrer-Policy' 'origin'; + add_header Feature-Policy: "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'" always; + proxy_set_header X-Forwarded-Host $host:$server_port; + proxy_set_header X-Forwarded-Server $host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $host; + proxy_pass http://node-red:1880/; + auth_basic "Node-RED"; + auth_basic_user_file /etc/nginx/authdata/nodered/.htpasswd; + } + +# Enabling HTTP Endpoint using node-red + + location /post/ { + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + add_header X-Frame-Options "SAMEORIGIN" always; + add_header X-Xss-Protection "1; mode=block" always; + add_header X-Content-Type-Options "nosniff" always; + add_header Content-Security-Policy "default-src 'self' https:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data:" always; + add_header 'Referrer-Policy' 'origin'; + add_header Feature-Policy: "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'" always; + proxy_set_header X-Forwarded-Host $host:$server_port; + proxy_set_header X-Forwarded-Server $host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $host; + proxy_pass http://node-red:1880/post; + } diff --git a/nginx/setup.sh b/nginx/setup.sh new file mode 100644 index 0000000..7ad4651 --- /dev/null +++ b/nginx/setup.sh @@ -0,0 +1,73 @@ +#!/usr/bin/env bash + +# set up the environment; these might not be set. +export HOME="/root" +export PATH="${PATH}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + +# test that we have a proper setup. +cd $HOME || exit 2 + +# test that authentication is set up, and set permissions as needed by us +if [ ! 
+if [ ! -d /etc/nginx/authdata/nodered ] ; then
+    echo "The authdata directory is not set; refer to docker-compose script"
+    exit 3
+fi
+
+
+if [ ! -d /etc/nginx/authdata/influxdb ] ; then
+    echo "The authdata directory is not set; refer to docker-compose script"
+    exit 3
+fi
+
+if [ ! -f /etc/nginx/authdata/nodered/.htpasswd ]; then
+    echo ".htpasswd file not found"
+    exit 3
+fi
+
+
+if [ ! -f /etc/nginx/authdata/influxdb/.htpasswd ]; then
+    echo ".htpasswd file not found"
+    exit 3
+fi
+
+chown -R www-data $(find /etc/nginx/authdata -type d)
+chmod 700 $(find /etc/nginx/authdata -type d)
+
+# check that we got the vars we need
+if [ -z "$CERTBOT_DOMAINS" -o "$CERTBOT_DOMAINS" = "." ]; then
+    echo "The docker-compose script must set CERTBOT_DOMAINS to value to be passed to certbot for --domains"
+    exit 3
+fi
+
+if [ -z "$CERTBOT_EMAIL" -o "$CERTBOT_EMAIL" = "." ]; then
+    echo "The docker-compose script must set CERTBOT_EMAIL to an email address useful to certbot/letsencrypt for notifications"
+    exit 3
+fi
+
+if [ -z "$NGINX_FQDN" -o "$NGINX_FQDN" = "." ]; then
+    echo "The docker-compose script must set NGINX_FQDN to the (single) fully-qualified domain at the top level"
+    exit 3
+fi
+
+# run certbot to set up Nginx
+if [ "$CERTBOT_TEST" != "test" ]; then
+    certbot --agree-tos --email "${CERTBOT_EMAIL}" --non-interactive --domains "$CERTBOT_DOMAINS" --nginx --rsa-key-size 4096 --redirect || exit 4
+
+    # certbot actually launched Nginx. The simple hack is to stop it; then launch
+    # it again after we've edited the config files.
+    /usr/sbin/nginx -s stop && echo "stopped successfully"
+fi
+
+# now, add the fields to the virtual host section for https.
+set -- proxy-*.conf
+if [ "$1" != "proxy-*.conf" ] ; then
+    echo "add proxy-specs to configuration from:" "$@"
+    sed -e "s/@{FQDN}/${NGINX_FQDN}/g" "$@" > /tmp/proxyspecs.conf || exit 5
+    sed -e '/listen 443 ssl;/r/tmp/proxyspecs.conf' /etc/nginx/sites-available/default > /tmp/000-default-le-ssl-local.conf || exit 6
+    mv /tmp/000-default-le-ssl-local.conf /etc/nginx/sites-available || exit 7
+    echo "enable the modified site, and disable the ssl defaults"
+    rm -rf /etc/nginx/sites-enabled/default || exit 8
+    rm -rf /etc/nginx/sites-enabled/000-default-le-ssl-local.conf || exit 9
+    ln -s /etc/nginx/sites-available/000-default-le-ssl-local.conf /etc/nginx/sites-enabled/000-default-le-ssl-local.conf || exit 10
+fi
+
diff --git a/node-red/Dockerfile b/node-red/Dockerfile
index 92d241e..bad9fb1 100644
--- a/node-red/Dockerfile
+++ b/node-red/Dockerfile
@@ -3,23 +3,26 @@
 #
 # build the node red image using the offical node red distribution
-FROM nodered/node-red:latest
+# passing arguments to build specific image
+ARG node_red_version
+FROM nodered/node-red:${node_red_version}
 
 # To avoid SSL certification issue
 ENV NODE_TLS_REJECT_UNAUTHORIZED=0
 
-# add the influxDB connector
+# Install required modules
+ARG node_red_install_modules
+RUN npm install ${node_red_install_modules}
+# we always want the influxdb plug-in.
 RUN npm install node-red-contrib-influxdb
-
-# add The Things Network connector
-RUN npm install node-red-contrib-ttn
-
-# add any other things that need to be added
-ARG NODERED_INSTALL_PLUGINS
-RUN /bin/bash -c 'for iPkg in "$@" ; do echo "npm install $iPkg" ; npm install "$iPkg" || { echo "couldnt install: $iPkg" ; exit 1 ; } ; done' -- ${NODERED_INSTALL_PLUGINS}
-
+# fix any dependency issues
 RUN npm audit fix
+
+# add The Things Network connector -- this must be after npm audit fix because
+# it uses a deprecated API.
+ARG node_red_contrib_ttn_version
+RUN npm install node-red-contrib-ttn@${node_red_contrib_ttn_version}
+
 # copy the settings file
 USER node-red
 COPY settings.js /usr/src/node-red/.node-red/
diff --git a/postfix/Dockerfile b/postfix/Dockerfile
index e093ed3..4013f48 100644
--- a/postfix/Dockerfile
+++ b/postfix/Dockerfile
@@ -2,35 +2,51 @@
 # Dockerfile for building POSTFIX
 #
 # Build the Postfix using phusion base image
-FROM phusion/baseimage
+FROM phusion/baseimage:bionic-1.0.0
 
 # some basic package installation for troubleshooting
 RUN apt-get update && apt-get install -y \
     iputils-ping \
     net-tools \
-    debconf-utils \
-    mailutils
+    debconf-utils
 
 # passing arguments to build postfix image
+ARG hostname
 ARG relay_ip
-ARG host_name
 ARG domain
+ARG smtp_login
+ARG smtp_password
 
 # Install Postfix
-RUN echo "postfix postfix/mailname string $host_name" | debconf-set-selections
-RUN echo "postfix postfix/main_mailer_type string 'Internet Site'" | debconf-set-selections
-RUN apt-get install -y postfix
-RUN postconf -e relayhost=$relay_ip
-RUN postconf -e myhostname=$host_name
-RUN postconf -e mydomain=$domain
-RUN postconf -e smtp_generic_maps=hash:/etc/postfix/generic
-RUN postconf -e mynetworks="127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.0/16"
-RUN postconf -e smtpd_use_tls=no
-RUN echo $host_name > /etc/mailname
-
-# This will replace local mail addresses by valid Internet addresses when mail leaves the machine via SMTP. so please change it according to container.
-RUN echo "root@aa7fde2ee7f1 iotmail@example.com" > /etc/postfix/generic
-RUN postmap /etc/postfix/generic
+RUN echo "postfix postfix/mailname string $hostname" | debconf-set-selections && \
+    echo "postfix postfix/main_mailer_type string 'Internet Site'" | debconf-set-selections && \
+    echo "postfix postfix/relayhost string $relay_ip" | debconf-set-selections
+
+RUN apt-get update && apt-get install -y postfix libsasl2-modules
+RUN postconf -e myhostname=$hostname
+RUN postconf -e mydomain=$domain
+RUN postconf -e myorigin=\$mydomain
+RUN postconf -e masquerade_domains=\$mydomain
+RUN postconf -e mydestination="\$myhostname, $hostname, localhost, localhost.localdomain, localhost"
+RUN postconf -e mynetworks="127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.0.0.0/8"
+RUN echo $domain > /etc/mailname
+
+# set up the credentials for SMTP authentication using username and password
+RUN echo "$relay_ip $smtp_login:$smtp_password" >/etc/postfix/sasl_passwd && chmod 600 /etc/postfix/sasl_passwd && postmap /etc/postfix/sasl_passwd && \
+    printf '%s\n' '# set up login for SMTP' \
+        'smtp_sasl_auth_enable=yes' \
+        'smtp_sasl_password_maps=hash:/etc/postfix/sasl_passwd' \
+        'smtp_sasl_security_options=noanonymous' \
+        'smtp_sasl_tls_security_options=noanonymous' \
+        'smtp_sasl_mechanism_filter=AUTH LOGIN' >> /etc/postfix/main.cf
+
+# This will replace local mail addresses with valid Internet addresses when mail leaves the machine via SMTP.
+RUN echo "root@${domain} iotmail@${domain}" > /etc/postfix/generic
+RUN postconf -e smtp_generic_maps=hash:/etc/postfix/generic
+RUN postmap /etc/postfix/generic
+
+# the mail command is used for sending mail
+RUN apt-get install -y mailutils
 
 # Start the postfix daemon during container startup
 RUN mkdir -p /etc/my_init.d