Usage

The xsrv command-line tool automates creation and maintenance of projects on a controller machine. Configuration is stored in YAML files on the controller and deployed to target hosts over SSH.

Use the xsrv command-line tool to manage your projects, or include xsrv roles in your own ansible playbooks.


Command-line usage

  ╻ ╻┏━┓┏━┓╻ ╻
░░╺╋╸┗━┓┣┳┛┃┏┛
  ╹ ╹┗━┛╹┗╸┗┛ v1.21.0

USAGE: xsrv COMMAND [project] [host]

COMMANDS:
init-project [project] [host]       initialize a new project (and optionally a first host)
edit-inventory [project]            edit/show inventory file (hosts/groups)
edit-playbook [project]             edit/show playbook (roles for each host)
edit-requirements [project]         edit ansible requirements/collections
edit-cfg [project]                  edit ansible configuration (ansible.cfg)
upgrade [project]                   upgrade a project's roles/collections to latest versions
init-host [project] [host]          add a new host to an existing project
edit-host [project] [host]          edit host configuration (host_vars)
edit-vault [project] [host]         edit encrypted (vault) host configuration (host_vars)
edit-group [project] [group]        edit group configuration (group_vars)
edit-group-vault [project] [group]  edit encrypted (vault) group configuration (group_vars)
check [project] [host|group]        simulate deployment, report what would be changed
deploy [project] [host|group]       deploy the main playbook (apply configuration/roles)
fetch-backups [project] [host]      fetch backups from a host to the local backups directory
scan [project]                      scan a project directory for cleartext passwords/secrets
shell|ssh [project] [host]          open interactive SSH shell on a host
logs [project] [host]               view system logs on a host
o|open [project]                    open the project directory in the default file manager
readme-gen [project]                generate a markdown inventory in the project's README.md
nmap [project]                      run a nmap scan against hosts in the project
show-defaults [project] [role]      show all variables and their default values
help                                show this message
help-tags [project]                 show the list of ansible tags and their descriptions
self-upgrade                        check for new releases/upgrade the xsrv script in-place
init-vm-template [--help] [options] initialize a new libvirt VM template
init-vm [--help] [options]          initialize a new libvirt VM from a template

If no project is specified, the 'default' project is assumed.
For editing/utility commands, if no host/group is specified, the first host/group in alphabetical order is assumed.
For deploy/check commands, if no host/group is specified, the 'all' group (all hosts) is assumed.

# ENVIRONMENT VARIABLES (usage: VARIABLE=VALUE xsrv COMMAND)
TAGS               deploy/check only: list of ansible tags (TAGS=ssh,samba,... xsrv deploy)
EDITOR             text editor to use (default: nano)
PAGER              pager to use (default: nano --syntax=YAML --view +1 -)

Examples:

# deploy all hosts in the default project
xsrv deploy # or xsrv deploy default
# initialize a new project named infra
xsrv init-project infra
# deploy all hosts in project infra
xsrv deploy infra
# add a new host ex2.CHANGEME.org to project infra
xsrv init-host infra ex2.CHANGEME.org
# edit configuration for the host ex2.CHANGEME.org in project infra
xsrv edit-host infra ex2.CHANGEME.org
# edit secret/vaulted configuration for my.CHANGEME.org in project default
xsrv edit-vault default my.CHANGEME.org
# deploy only ex1.CHANGEME.org and ex2.CHANGEME.org
xsrv deploy infra ex1.CHANGEME.org,ex2.CHANGEME.org
# deploy only tasks tagged nextcloud or gitea on ex3.CHANGEME.org
TAGS=nextcloud,gitea xsrv deploy infra ex3.CHANGEME.org
# deploy all hosts except ex1 and ex7.CHANGEME.org
xsrv deploy default '!ex1.CHANGEME.org,!ex7.CHANGEME.org'
# deploy all hosts in group 'prod' in default project (dry-run/simulation mode)
xsrv check default prod
# deploy all hosts whose hostnames begin with srv
xsrv deploy default srv*

Manage projects

Each project contains:

  • an inventory of managed servers (hosts)

  • a list of roles assigned to each host/group (playbook)

  • configuration values for host/group (host_vars/group_vars)

  • deployment logic/tasks used in your project (collections/roles)

  • an independent/isolated ansible installation (virtualenv) and its configuration

Projects are stored in the ~/playbooks directory by default (use the XSRV_PROJECTS_DIR environment variable to override this).

$ ls ~/playbooks/
default/  homelab/  mycompany/

A single project is suitable for most setups (you can still organize hosts as different environments/groups inside a project). Use multiple projects to separate setups with completely different contexts/owners.
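For example, the XSRV_PROJECTS_DIR variable can be exported once to keep all projects in a different location (the ~/admin/playbooks path below is only an illustration):

```shell
# store projects under a custom directory (hypothetical path ~/admin/playbooks)
mkdir -p ~/admin/playbooks
export XSRV_PROJECTS_DIR=~/admin/playbooks
# subsequent xsrv commands (init-project, deploy, ...) will use this directory
echo "$XSRV_PROJECTS_DIR"
```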

xsrv init-project

Initialize a new project from the template - creates all necessary files and prepares a playbook/environment with a single host.

xsrv edit-requirements

Edit the project’s requirements.yml file, which lists ansible collections (a distribution format for Ansible content) used by the project.


Manage hosts

xsrv init-host

Add a new host to the inventory/playbook and create/update all required files. You will be asked for a host name:

xsrv init-host
[xsrv] Host name to add to the default playbook (ex: my.CHANGEME.org): my.example.org

xsrv edit-inventory

Edit the inventory file. This file lists all hosts in your environment and assigns them one or more groups.

# the simplest inventory, single host in a single group 'all'
all:
  my.example.org:
# an inventory with multiple hosts/groups
all:
  children:
    tools:
      hosts:
        hypervisor.example.org:
        dns.example.org:
        siem.example.org:
    dev:
      hosts:
        dev.example.org:
        dev-db.example.org:
    staging:
      hosts:
        staging.example.org:
        staging-db.example.org:
    prod:
      hosts:
        prod.example.org:
        prod-db.example.org:

See YAML inventory.


Manage roles

xsrv edit-playbook

Edit the list of roles (playbook file) that will be deployed to your hosts. Add any role you wish to enable to the roles: list.

The simplest playbook, a single host carrying multiple roles:

# uncomment or add roles to this list to enable additional components
- hosts: my.example.org
  roles:
    - nodiscc.xsrv.common
    - nodiscc.xsrv.monitoring
    - nodiscc.xsrv.apache
    - nodiscc.xsrv.openldap
    - nodiscc.xsrv.nextcloud
    - nodiscc.xsrv.mumble
    # - nodiscc.xsrv.samba
    # - nodiscc.xsrv.jellyfin
    # - nodiscc.xsrv.transmission
    # - nodiscc.xsrv.gitea
    # - other.collection.role

A playbook that deploys some roles in parallel across hosts:

# deploy the common role to all hosts in parallel
- hosts: all
  roles:
    - nodiscc.xsrv.common

# deploy the monitoring role to all hosts except demo35.example.org
- hosts: all:!demo35.example.org
  roles:
    - nodiscc.xsrv.monitoring

# deploy specific roles to specific hosts
- hosts: ldap.example.org,app01.example.org
  roles:
    - nodiscc.xsrv.apache
- hosts: ldap.example.org
  roles:
    - nodiscc.xsrv.openldap
- hosts: backup.example.org
  roles:
    - nodiscc.xsrv.backup
- hosts: app01.example.org
  roles:
    - nodiscc.xsrv.postgresql
    - nodiscc.xsrv.nextcloud
    - nodiscc.xsrv.shaarli
    - nodiscc.xsrv.gitea

See Intro to playbooks.

Removing roles: Removing a role from the playbook does not remove its components or data from your hosts. To uninstall components managed by a role, run xsrv deploy with the appropriate utils-remove-* tag:

# remove all gitea role components and data from demo1.example.org
TAGS=utils-remove-gitea xsrv deploy default demo1.example.org

Then remove the role from your playbook.

You may also remove components manually using SSH/xsrv shell, or remove the role from the list, prepare a new host, deploy the playbook again, and restore data from backups or shared storage.

Most roles provide variables to temporarily disable the services they manage.


Manage configuration

xsrv show-defaults

Show all role configuration variables, and their default values.

# show variables for all roles
xsrv show-defaults myproject
# show variables only for a specific role
xsrv show-defaults myproject nextcloud

xsrv edit-host

Edit configuration variables (host_vars) for a host.

The value in host_vars will take precedence over default and group values. Example:

# $ xsrv show-defaults
# yes/no: enable the mumble service
mumble_enable_service: yes

# $ xsrv edit-host
# disable the mumble service on this host
mumble_enable_service: no

Use xsrv show-defaults to list all available variables and their default values.

You may also use special variables or connection variables:

# user account used for deployment
ansible_user: "deploy"
# SSH port used to contact the host (if different from 22)
ansible_ssh_port: 123
# IP/hostname used to contact the host if its inventory name is different/not resolvable
ansible_host: 1.2.3.4

xsrv edit-vault

Edit encrypted configuration variables/secrets for a host.

Sensitive variables such as usernames/passwords/credentials should not be stored as plain text in host_vars. Instead, store them in an encrypted file:

# xsrv edit-vault
# sudo password for the ansible_user account
ansible_become_pass: "ZplHu0b6q88_QkHNzuKwoa-9cb-Dxrrt"
# roles may require additional secrets/variables
nextcloud_user: "myadminusername"
nextcloud_password: "cyf58eAZFbbEUZ4v3y6B"
nextcloud_admin_email: "admin@example.org"
nextcloud_db_password: "ucB77fNLX4qOoj2GhLBy"
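Strong random values for these secrets can be generated on the controller, for example with openssl (assumed to be available on most systems):

```shell
# generate a strong random secret (24 bytes, base64-encoded to 32 characters)
openssl rand -base64 24
```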

By default, Vault files are encrypted/decrypted by ansible-vault using the master password stored in plain text in .ansible-vault-password. A random strong master password is generated automatically during initial project creation.

# cat ~/playbooks/default/.ansible-vault-password
Kh5uysMgG5f9X£5ap_O_AS(n)XS1fuuY

Keep backups of this file and protect it appropriately (chmod 0600 .ansible-vault-password, full-disk encryption on underlying storage). By default this file is excluded from Git version control if the project was created with xsrv init-project.

You may also replace .ansible-vault-password with a custom script that fetches the master password from a secret storage/keyring of your choice (in this case the file must be made executable - chmod +x .ansible-vault-password).
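For example, the following sketch replaces the static file with a script that queries a keyring - the secret-tool command (from libsecret) and the service/project attributes are assumptions; substitute the lookup command for your own secret storage:

```shell
# replace the static password file with a hypothetical keyring lookup script
# (assumes libsecret's secret-tool and a previously stored matching secret)
cat > .ansible-vault-password <<'EOF'
#!/bin/sh
exec secret-tool lookup service xsrv project default
EOF
chmod +x .ansible-vault-password
```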

To disable reading the master password from a file/script: edit the ansible.cfg file in the project directory (xsrv edit-cfg), comment out the vault_password_file setting, and uncomment the ask_vault_pass = True setting. You will then be asked for the vault master password before each deployment. You may also specify a different path to the password file.

xsrv edit-group

Edit group configuration (group_vars - configuration shared by all hosts in a group).

# $ xsrv edit-group default all
# enable msmtp mail client installation for all hosts
setup_msmtp: yes

# $ xsrv edit-host default dev.example.org
# except for this host
setup_msmtp: no

Group variables take priority over default values, but are overridden by host variables.

xsrv edit-group-vault

Edit encrypted group configuration - similar to xsrv edit-vault, but for groups.

# $ xsrv edit-group-vault all
# common outgoing mail credentials for all hosts
msmtp_username: "mail-notifications"
msmtp_password: "e9fozo8ItlH6XNoysyt7vdylXcttVu"

Apply changes

xsrv deploy

After any changes to the playbook, inventory or configuration variables, apply changes to the target host(s):

xsrv deploy

You may also deploy changes for a limited set/group of hosts or roles/tasks:

# deploy only to my.example2.org and the 'prod' group in the default project
xsrv deploy default my.example2.org,prod
# deploy only nextcloud and transmission roles
TAGS=nextcloud,transmission xsrv deploy default

Run xsrv help-tags or see the list of all tags.

xsrv check

Check mode simulates a deployment and reports the expected status of each task (ok/changed/skipped/failed), but no actual changes are made to the host (dry-run mode).

# check what would be changed by running xsrv deploy default my.example2.org
xsrv check default my.example2.org
# TAGS can also be used in check mode
TAGS=nextcloud,transmission xsrv check

Note: check mode may not reflect the changes that would occur during a real deployment, since some conditions can change mid-deployment and trigger additional actions. Notably, running xsrv check before the first actual deployment of a host/role will output many errors that would not occur during an actual deployment; these errors can safely be ignored in that case.

Equivalent ansible commands: ansible-playbook playbook.yml --limit=my.example2.org,prod --tags=nextcloud,transmission --check


Provision hosts

xsrv allows automated creation/provisioning of minimal Debian VMs using the init-vm-template and init-vm commands (run them with --help to list available options).

VMs created using this method can then be added to your project using xsrv init-host or equivalent, at which point you can start deploying your configuration/services to them.


Upgrading

Upgrade roles to the latest stable release: this is the default - run xsrv upgrade at any point in time to upgrade to the latest stable release (please read the release notes/upgrade procedures and check whether manual steps are required).

Upgrade roles to the latest development revision: replace release with master (or any other branch/tag) in the requirements.yml file of your project (xsrv edit-requirements), then run xsrv upgrade.

Upgrade the xsrv script: run xsrv self-upgrade to upgrade the xsrv command-line utility to the latest stable release.


Other commands

xsrv shell

Open a shell directly on the target host using SSH. This is equivalent to ssh -p $SSH_PORT $USER@$HOST but you only need to pass the host name - the port and user name will be detected automatically from the host’s configuration variables.

$ xsrv shell my.example.org
# or
$ xsrv ssh my.example.org

An alternative is to use the readme-gen command to generate an SSH client configuration file, which allows contacting the host with ssh $HOST without specifying the port/user.

xsrv readme-gen

Adds a summary of basic information about your hosts (groups, IP addresses, OS/virtualization, CPU, memory, storage, quick access links to services deployed on the host, monitoring badges, custom comment…) in README.md at the root of your project, using Markdown. Running this command multiple times will update the summary with the latest information gathered from your hosts.

See the detailed documentation.

xsrv logs

Open the current syslog log with the lnav log viewer on the remote host.

$ xsrv logs my.example.org

If the remote user is not allowed to read /var/log/syslog directly, you will be asked for the sudo password (a.k.a. ansible_become_pass). This assumes lnav is installed, either by one of the monitoring_rsyslog/monitoring_utils/monitoring_netdata roles, or manually (for example using packages_install). A quick introduction to lnav usage can be found here.


Advanced

Use without remote controller

Using the server/host as its own controller is not recommended, but can help with single-server setups where no separate administration machine is available. Without a separate controller, you lose the ability to easily redeploy a new system from scratch in case of emergency/disaster, and centralized management of multiple hosts becomes more difficult. The host will also have access to the configuration of other hosts in your project.

##### CONNECTION
# SSH host/port, if different from my.example.org:22
# ansible_host: "my.example.org"
# ansible_port: 22
ansible_connection: local

Use as ansible collection

The main xsrv script maintains a simple and consistent structure for your projects, automates frequent operations, and manages Ansible installation/environments. You can also manage your playbooks manually using your favorite text editor and ansible-* command-line tools.

To import roles as a collection in your own playbooks, install ansible and create requirements.yml in the project directory:

# cat requirements.yml
collections:
  - name: https://gitlab.com/nodiscc/xsrv.git
    type: git
    version: release

Install the collection (or upgrade it to the latest release):

ansible-galaxy collection install --force -r requirements.yml

Include the collection and roles in your playbooks:

# cat playbook.yml
- hosts: my.CHANGEME.org
  collections:
    - nodiscc.xsrv
  roles:
   - nodiscc.xsrv.common
   - nodiscc.xsrv.monitoring
   - nodiscc.xsrv.apache
   - ...

Note that xsrv roles may require a minimum ansible version, specified in meta/runtime.yml.

See man ansible-galaxy, Using collections and roles documentation.

Other collections:

  • nodiscc.toolbox - less-maintained, experimental or project-specific roles (awesome_selfhosted_html, bitmagnet, docker, grafana, homepage_extra_icons, icecast, k8s, mariadb, nfs_server, planarally, prometheus, proxmox, pulseaudio, reverse_ssh_tunnel, rocketchat, rss2email, rss_bridge, valheim_server, vscodium, znc)

  • devsec.hardening - battle tested hardening for Linux, SSH, nginx, MySQL

  • debops.debops - general-purpose Ansible roles that can be used to manage Debian or Ubuntu hosts

  • Ansible Galaxy - help other Ansible users by sharing the awesome roles and collections you create

Directory structure for a project:

# tree -a ~/playbooks/default/
├── inventory.yml # inventory of managed hosts
├── playbook.yml # playbook (assign roles to managed hosts)
├── group_vars/ # group variables (file names = group names from inventory.yml)
│   └── all.yml
├── host_vars/ # host variables (file names = host names from inventory.yml)
│   ├── my.example.org/
│   │   ├── my.example.org.yml # plaintext host variables file
│   │   └── my.example.org.vault.yml # encrypted/vaulted host variables file
│   └── my.other.org/
│       ├── my.other.org.yml
│       └── my.other.org.vault.yml
├── data/ # other data
│   ├── backups/
│   ├── cache/
│   ├── certificates/
│   └── public_keys/
├── playbooks/ # custom playbooks for one-shot tasks
│   ├── main.yml
│   └── operationXYZ.yml
├── README.md # documentation about your project
├── ansible.cfg # ansible configuration
├── requirements.yml # required ansible collections
└── ansible_collections/ # downloaded collections
    └── nodiscc/
        └── xsrv/

Using ansible command-line tools

Ansible command-line tools can be used directly in projects managed by xsrv. The project’s virtualenv must be activated manually:

# enter the project directory
cd ~/playbooks/default
# activate the virtualenv
source .venv/bin/activate
# run ansible commands directly
ansible-playbook playbook.yml --list-tasks
ansible-playbook playbook.yml --start-at-task 'run nextcloud upgrade command' --limit my.example.org,my2.example.org
ansible-inventory --list --yaml
ansible-vault encrypt_string 'very complex password'
ansible --become --module-name file --args 'state=absent path=/var/log/syslog.8.gz' my.example.org

Version control

The configuration/testing/deployment/change management process can be automated further using version-controlled configuration. Put your playbook directory (e.g. ~/playbooks/default) under git version control and start tracking changes to your configuration:

# create a project
xsrv init-project default
# enter the project directory
cd ~/playbooks/default/
# start tracking changes
git init
# add initial files
git add .
git commit -m "initial commit"
# change a configuration value
xsrv edit-host default prod.example.org
# add and commit the change
git add host_vars/prod.example.org/prod.example.org.yml
git commit -m "prod.example.org: change x configuration to y"
# push your changes
git push

Reverting changes:

  • git checkout the playbook directory as it was before the change, or git reset to the desired “good” commit.

  • apply the playbook xsrv deploy

You may have to restore data from last known good backups/a snapshot from before the change. See each role’s documentation for restoration instructions.
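The revert-and-redeploy flow can be sketched in a scratch repository (the file and values below are illustrative; in a real project you would run this in the project directory and finish with an actual xsrv deploy):

```shell
# scratch repository demonstrating the revert flow (illustrative values)
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git config user.email demo@example.org && git config user.name demo
echo "setup_msmtp: yes" > all.yml
git add all.yml && git commit -qm "enable msmtp"
echo "setup_msmtp: no" > all.yml
git commit -qam "disable msmtp"
# undo the last change with a new commit, then redeploy
git revert --no-edit HEAD
cat all.yml                # back to 'setup_msmtp: yes'
# xsrv deploy              # re-apply the restored configuration
```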

Continuous deployment

Projects stored in git repositories can be tied to a Continuous deployment (CI/CD) system that will perform automated checks and deployments, controlled by git operations (similar to GitOps).

This example .gitea/workflows/ci.yml for Gitea Actions will deploy your project against specific environments/hosts/groups (optionally running the playbook in check mode beforehand) automatically when changes are pushed. It should also work with Github Actions, since the syntax is the same. Different target hosts/groups can be specified for master and non-master branches.

This example .gitlab-ci.yml for Gitlab CI checks the playbook for syntax errors, simulates the changes against staging and production environments, and waits for a manual action (a button click in the Gitlab interface) to run actual staging/production deployments. Pipeline execution time can be optimized by building a CI image that includes preinstalled dependencies: .gitlab-ci.Dockerfile, .gitlab-ci.yml.