
From the top

Todo List

NEXT Set up a docker image repository and integrate Dockerfiles

NEXT Document and update backup regimen

NEXT cross-integrate individual service roles with their CCE modules

  • media stuff into cce-music
  • ua into cce-gnus
  • etc

Wants and Needs

  • Server management here does not need to scale. I am not running a ten node cluster.
  • Server management here should be simple and understandable. No magic, no bullshit.
  • Playbook should be safe to run unattended.

Base Implementation

This is a pretty straightforward port and annotation of roles that I have written in the past for managing my servers. Moving them into CCE has been on my todo list for a while, and I'll use this as an opportunity to readjust some of my systems, move toward an easier-to-manage setup, and eliminate some roles and workflows I don't use any more.

ansible-playbook --vault-password-file=~/.ansible-vault -i inventory -bK $@ cce-servers.yaml

fontkeming.fail ansible_python_interpreter=python3
localhost ansible_python_interpreter=python3
kusanagi  ansible_python_interpreter=python3 ansible_host=192.168.0.18

---
- name: fontkeming services
  hosts: fontkeming.fail
  vars_files:
  - vars/secrets.yml
  - vars/snippets.yml

  • secrets.yml is an ansible-vault file that contains some database passwords and other secrets.
  • snippets.yml contains a bunch of nginx snippets which are used to build HTTP server configurations.
---
ssl_snippet: |
  ssl_certificate /etc/ssl/certs/fontkeming.fail_cert.pem;
  ssl_certificate_key /etc/ssl/private/fontkeming.fail_key.pem;
  ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

acme_challenge_dir_snippet: |
  location /.well-known/acme-challenge/ {
    alias /var/www/html/.well-known/acme-challenge/;
    try_files $uri =404;
  }

Some variables used for updating my Let's Encrypt certificates.

vars:
  do_letsencrypt: false
  https_redir: false
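
do_letsencrypt defaults to off so that unattended runs never hammer the ACME endpoints; when it's time to renew, it gets flipped on for a single run from the command line, something like:

ansible-playbook --vault-password-file=~/.ansible-vault -i inventory -bK -e do_letsencrypt=true --tags web cce-servers.yaml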

Some stuff that's gotta happen for the playbook to run and the system to work:

pre_tasks:
- name: python3-psycopg2 installed
  dnf:
    name: python3-psycopg2
    state: installed

- name: matrix-synapse federation port open
  firewalld:
    port: 8448/tcp
    state: enabled
    permanent: yes
    immediate: yes
    zone: public
  tags:
  - matrix

- name: transmission port open (tcp)
  firewalld:
    port: 51413/tcp
    state: enabled
    permanent: yes
    immediate: yes
    zone: public
  tags:
  - media

- name: transmission port open (udp)
  firewalld:
    port: 51413/udp
    state: enabled
    permanent: yes
    immediate: yes
    zone: public
  tags:
  - media

- name: passlib installed
  dnf:
    state: installed
    name: python3-passlib
  tags:
  - pihole

- name: htpasswd set up for admin pages
  htpasswd:
    name: rrix
    password: "{{htpasswd_password}}"
    path: /etc/nginx/htpasswd
  tags:
  - pihole
  - web

Roles

roles:

Base Configuration

This is where, in theory, I would do much of my systems standardization, but right now I just define a single group ID that I can use for things which I know should be "world readable".

- name: fileserver access UNIX group exists
  group:
    state: present
    name: fileserver
    gid:  10100

- file-server

PostgreSQL Databases

This was used to create my Nextcloud database and for any new service I bring up that needs one.

---
- name: "{{application}} | db user exists"
  postgresql_user:
    login_host: localhost
    login_user: postgres
    password: "{{databases[application]['postgresql_user_password']}}"
    name: "{{databases[application]['postgresql_user_name']}}"

- name: "{{application}} | db exists"
  postgresql_db:
    name: "{{databases[application]['postgresql_user_name']}}"
    owner: "{{databases[application]['postgresql_user_name']}}"
    login_host: localhost
    login_user: postgres
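
The role is parameterized on application and reads credentials out of the vaulted databases mapping in secrets.yml, which is shaped roughly like this (placeholder values, not the real vault contents):

databases:
  nextcloud:
    postgresql_user_name: nextcloud
    postgresql_user_password: some-long-random-password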

Web sites are served through nginx

nginx Frontend

Straightforward: install and enable nginx, after stopping any Apache httpd which was running.

---
- name: no httpd running
  service:
    name: httpd
    state: stopped

- name: nginx installed
  dnf:
    state: installed
    name:
    - nginx

- name: nginx running and enabled
  service:
    name: nginx
    state: started
    enabled: yes

- role: nginx-frontend
  tags:
  - web

SSL through Let's Encrypt

I use Let's Encrypt to issue SSL certificates for my systems, specifically by using ACME Tiny. This role creates a system user which can manage the certs in its home directory, only moving to root when it's time to install the certs.

---
- name: system acme http snippet removed
  file:
    path: /etc/httpd/conf.d/acme.conf
    state: absent
  notify:
  - reload nginx

- name: letsencrypt dir exists
  file:
    state: directory
    path: /var/lib/acme

- name: letsencrypt chain cert exists
  copy:
    src: lets-encrypt-x3-cross-signed.pem
    dest: /var/lib/acme/

- name: acme-tiny installed
  get_url:
    url: https://raw.githubusercontent.com/diafygi/acme-tiny/master/acme_tiny.py
    dest: /usr/bin/acme_tiny
    mode: 0755

- name: letsencrypt user exists
  user:
    name: letsencrypt
    state: present

- name: letsencrypt workdir exists
  file:
    state: directory
    path: /home/letsencrypt/acme/fontkeming.fail/
    owner: letsencrypt
    group: letsencrypt

- name: letsencrypt openssl.cnf exists
  copy:
    owner: letsencrypt
    group: letsencrypt
    src: openssl.cnf
    dest: /home/letsencrypt/acme/fontkeming.fail/openssl.cnf

- name: Account Key exists
  shell:
    chdir: /home/letsencrypt/acme/fontkeming.fail
    cmd: openssl genrsa 4096 > letsencrypt_account.key
    creates: /home/letsencrypt/acme/fontkeming.fail/letsencrypt_account.key
  become: yes
  become_user: letsencrypt

- name: account key is 0600
  file:
    path: /home/letsencrypt/acme/fontkeming.fail/letsencrypt_account.key
    mode: 0600

- name: domain key exists
  shell:
    chdir: /home/letsencrypt/acme/fontkeming.fail
    cmd: openssl genrsa 4096 > letsencrypt_domain.key
    creates: /home/letsencrypt/acme/fontkeming.fail/letsencrypt_domain.key
  become: yes
  become_user: letsencrypt

- name: domain key is 0600
  file:
    path: /home/letsencrypt/acme/fontkeming.fail/letsencrypt_domain.key
    mode: 0600

- name: CSR exists
  shell:
    chdir: /home/letsencrypt/acme/fontkeming.fail
    cmd: openssl req -new -sha256 -key letsencrypt_domain.key -subj "/" -reqexts SAN -config openssl.cnf > letsencrypt.csr
    creates: /home/letsencrypt/acme/fontkeming.fail/letsencrypt.csr
  become: yes
  become_user: letsencrypt

- name: acme challenge dir exists
  file:
    state: directory
    path: /var/www/html/.well-known/acme-challenge
    owner: letsencrypt

- name: create cert
  shell:
    chdir: /home/letsencrypt/acme/fontkeming.fail
    cmd: acme_tiny --account letsencrypt_account.key --csr letsencrypt.csr --acme-dir /var/www/html/.well-known/acme-challenge > fontkeming.fail.crt
  become: yes
  become_user: letsencrypt

- name: ssl key directory exists
  file:
    state: directory
    path: /etc/ssl/private

- name: assemble certificate
  shell:
    chdir: /home/letsencrypt/acme/
    cmd: cat fontkeming.fail/fontkeming.fail.crt /var/lib/acme/lets-encrypt-x3-cross-signed.pem > /etc/ssl/certs/fontkeming.fail_cert.pem
  notify:
    - reload nginx

- name: ssl key given to nginx
  copy:
    remote_src: yes
    src: /home/letsencrypt/acme/fontkeming.fail/letsencrypt_domain.key
    dest: /etc/ssl/private/fontkeming.fail_key.pem
  notify:
    - reload nginx
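
Since the SAN list changes now and then, a quick sanity check that the assembled certificate actually covers everything is to ask openssl for the extension (plain openssl, nothing specific to this setup):

openssl x509 -in /etc/ssl/certs/fontkeming.fail_cert.pem -noout -text | grep -A1 'Subject Alternative Name'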

I fucking hate openssl; the only part of this file I think I touched was changing the SAN at the bottom so that all of my domains end up in the final certificate.

HOME            = .
RANDFILE        = $ENV::HOME/.rnd

[ ca ]
default_ca  = CA_default

[ CA_default ]
dir     = ./demoCA
certs       = $dir/certs
crl_dir     = $dir/crl
database    = $dir/index.txt
new_certs_dir   = $dir/newcerts
certificate = $dir/cacert.pem
serial      = $dir/serial
crlnumber   = $dir/crlnumber
crl     = $dir/crl.pem
private_key = $dir/private/cakey.pem    # The private key
RANDFILE    = $dir/private/.rand
x509_extensions = usr_cert
name_opt    = ca_default
cert_opt    = ca_default

default_days    = 365
default_crl_days= 30
default_md  = default
preserve    = no
policy      = policy_match

[ policy_match ]
countryName     = match
stateOrProvinceName = match
organizationName    = match
organizationalUnitName  = optional
commonName      = supplied
emailAddress        = optional

[ req ]
default_bits        = 2048
default_keyfile     = privkey.pem
distinguished_name  = req_distinguished_name
attributes      = req_attributes
x509_extensions = v3_ca

string_mask = utf8only

[ req_distinguished_name ]
countryName         = Country Name (2 letter code)
countryName_default     = US
countryName_min         = 2
countryName_max         = 2

stateOrProvinceName     = State or Province Name (full name)
stateOrProvinceName_default = Washington

localityName            = Locality Name (eg, city)
localityName_default        = Seattle

0.organizationName      = Organization Name (eg, company)
0.organizationName_default  = Fontkeming dot Fail

organizationalUnitName      = Organizational Unit Name (eg, section)
organizationalUnitName_default  = Systems Engineering

commonName          = Common Name (e.g. server FQDN or YOUR name)
commonName_max          = 64

emailAddress            = Email Address
emailAddress_max        = 64

[ req_attributes ]
challengePassword       = A challenge password
challengePassword_min       = 4
challengePassword_max       = 20

unstructuredName        = An optional company name

[ usr_cert ]

basicConstraints=CA:FALSE
nsComment           = "OpenSSL Generated Certificate"
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer

[ v3_ca ]

subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer

basicConstraints = CA:true

[ crl_ext ]

authorityKeyIdentifier=keyid:always

[ proxy_cert_ext ]

basicConstraints=CA:FALSE
nsComment           = "OpenSSL Generated Certificate"

subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer

proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo

[SAN]
subjectAltName=DNS:home.rix.si,DNS:fontkeming.fail,DNS:matrix.fontkeming.fail,DNS:code.rix.si,DNS:music.fontkeming.fail,DNS:files.fontkeming.fail,DNS:dns.fontkeming.fail,DNS:kickass.systems,DNS:ring.whatthefuck.computer,DNS:whatthefuck.computer,DNS:admin.fontkeming.fail,DNS:dimension.fontkeming.fail,DNS:notes.whatthefuck.computer,DNS:afd.fontkeming.fail
# DNS:fort.kickass.systems,DNS:blag.meznak.net,DNS:doc.rix.si,DNS:dongiverse.com,DNS:makesyouthink.info,DNS:rix.si,DNS:shibuya-cho.club,DNS:totallydisturbingvapyrrdungeon.com
---
- name: reload nginx
  service:
    name: nginx
    state: reloaded

- role: letsencrypt
  when: do_letsencrypt
  tags:
  - web

Web Frontends

I have an Ansible role which sets up standardized nginx configurations for various domains that I either own or host. This module has an added bonus of being able to set up DNS entries for sites that I have registered on Gandi.

  • Base Role

    This thing is pretty straightforward. It's designed to be heavily templated and reused; variables are specified in the playbook, in the role's default vars/main.yaml, or in the secrets file.

    ---
    - name: "{{domain}} : install nginx snippet"
      template:
        src: virtualhosts/{{template}}.conf
        dest: /etc/nginx/conf.d/{{domain}}.conf
      notify:
      - check config
      - "reload nginx for {{domain}}"
    
    - livedns:
        state: present
        secret: "{{livedns_apikey}}"
        zone: "{{dnszone}}"
        subdomain: "{{dnssubdomain}}"
        rtype: CNAME
        rvalues:
        - fontkeming.fail.
        ttl: 300
      ignore_errors: true
      when: 'dnssubdomain != "@"'
      name: create livedns entry (CNAME)
    
    - livedns:
        state: present
        secret: "{{livedns_apikey}}"
        zone: "{{dnszone}}"
        subdomain: "{{dnssubdomain}}"
        rtype: A
        rvalues:
        - "{{ansible_default_ipv4['address']}}"
        ttl: 300
      ignore_errors: true
      when: 'dnssubdomain == "@"'
      name: create livedns entry (A)
    
    ---
    https_redir: true
    root: /var/www/sites/{{domain}}
    template: "{{domain}}"
    use_htpasswd: false
    
    ---
    - name: check config
      shell: nginx -t
    
    - name: "reload nginx for {{domain}}"
      service:
        name: nginx
        state: reloaded
    
  • Gandi LiveDNS API Module
    # GNU General Public License v3.0+
    
    from __future__ import absolute_import, division, print_function
    __metaclass__ = type
    
    import requests
    import json
    
    
    ANSIBLE_METADATA = {'metadata_version': '1.1',
                        'status': ['preview'],
                        'supported_by': 'community'}
    
    
    DOCUMENTATION = '''
    ---
    '''
    
    EXAMPLES = '''
    '''
    
    RETURN = '''
    '''
    
    from ansible.module_utils.basic import AnsibleModule
    from ansible.module_utils._text import to_native
    
    API_BASE = 'https://dns.api.gandi.net/api/v5'
    
    class Record():
        def __init__(self, module):
            self.module = module
    
            self.zone = module.params['zone']
    
            self.subdomain = self.module.params['subdomain']
            self.rtype = self.module.params['rtype']
            self.rvalues = self.module.params['rvalues']
            self.ttl = self.module.params['ttl']
            self.state = self.module.params['state']
            self.secret = self.module.params['secret']
    
        def create_or_update_record(self):
            self.uuid = self.get_uuid()
            record = self.get_record(self.uuid)
            if record is not None and record.get('rrset_values') == self.rvalues:
                return self.module.exit_json(changed=False)
            my_record = {
                'rrset_name': self.subdomain,
                'rrset_ttl': self.ttl,
                'rrset_type': self.rtype,
                'rrset_values': self.rvalues
            }
            retval = self.update_records(my_record)
            return retval
    
        def get_uuid(self):
            url = API_BASE + '/domains/' + self.zone
            u = requests.get(url, headers={"X-Api-Key":self.secret})
            json_object = u.json()
            if u.status_code == 200:
                return json_object['zone_uuid']
            else:
                self.module.fail_json(status_code=u.status_code, msg=json_object['message'], url=url)
    
        def get_record(self, uuid):
            url = API_BASE + '/zones/' + uuid + '/records/' + self.subdomain + '/' + self.rtype
            headers = {"X-Api-Key":self.secret}
            u = requests.get(url, headers=headers)
            json_object = u.json()
            if u.status_code == 200:
                return json_object
            else:
                # 404 means we need to create it
                return None
    
        def update_records(self, payload):
            url = API_BASE + '/zones/' + self.uuid + '/records/' + self.subdomain + '/' + self.rtype
            headers = {"Content-Type": "application/json", "X-Api-Key": self.secret}
            u = requests.put(url, data=json.dumps(payload), headers=headers)
            json_object = u.json()
            if u.status_code == 201:
                return json_object
            else:
                self.module.fail_json(msg="Fail", **json_object)

        def remove_record(self):
            # state=absent needs this; LiveDNS v5 drops an rrset with DELETE.
            url = API_BASE + '/zones/' + self.get_uuid() + '/records/' + self.subdomain + '/' + self.rtype
            u = requests.delete(url, headers={"X-Api-Key": self.secret})
            if u.status_code not in (200, 204):
                self.module.fail_json(status_code=u.status_code, msg="failed to delete record")
            return {}
    
    
    def main():
        module = AnsibleModule(
            argument_spec={
                'state': {
                    'required':  False,
                    'default': 'present',
                    'choices': ['present', 'absent'],
                    'type': 'str',
                },
                'secret': {
                    'required': True,
                    'type': 'str',
                },
                'zone': {
                    'required': True,
                    'type': 'str',
                },
                'subdomain': {
                    'required': True,
                    'type': 'str',
                },
                'rtype': {
                    'required': True,
                    'type': 'str',
                },
                'rvalues': {
                    'required': True,
                    'type': 'list',
                },
                'ttl': {
                    'required': False,
                    'default': 300,
                    'type': 'int',
                },
            }
        )
    
        record = Record(module)
        if module.params["state"] == 'absent':
            result = record.remove_record()
            module.exit_json(changed=True, retval=result)
        elif module.params["state"] == 'present':
            retval = record.create_or_update_record()
            module.exit_json(changed=True, retval=retval)
    
    if __name__ == '__main__':
        main()
    
  • Domain setup

    This is all pretty straightforward.

    - role: web-frontend
      domain: base-domains
      template: base-domains
      tags:
      - web
    
    - role: web-frontend
      template: simple-https-root
      domain: kickass.systems
      dnszone: kickass.systems
      dnssubdomain: "@"
      root: /var/www/sites/kickass.systems/_site
      tags:
      - web
    
    - role: web-frontend
      domain: ring.whatthefuck.computer
      template: simple-https-root
      dnszone: whatthefuck.computer
      dnssubdomain: ring
      tags:
      - web
    
    - role: web-frontend
      domain: whatthefuck.computer
      template: simple-https-root
      root: /var/www/sites/whatthefuck.computer/_site
      dnszone: whatthefuck.computer
      dnssubdomain: "@"
      tags:
      - web
    
    - role: web-frontend
      domain: notes.whatthefuck.computer
      template: simple-https-root
      root: /var/www/sites/notes.whatthefuck.computer/_site
      dnszone: whatthefuck.computer
      dnssubdomain: notes
      tags:
      - web
    
    # Externally managed, need to update the DNS or transfer to Gandi or re-register
    
    # - role: web-frontend
    #   domain: rix.si
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: totallydisturbingvapyrrdungeon.com
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: makesyouthink.info
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: blog.dongiverse.com
    #   root: /var/www/sites/blog.dongiverse.com/_site
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: doc.rix.si
    #   root: /var/www/sites/doc.rix.si/_site
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: dongiverse.com
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: blag.meznak.net
    #   root: /var/www/sites/blag.meznak.net/_site
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: shibuya-cho.club
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: hwatthefuck.men
    #   tags:
    #     - web
    
    # - role: web-frontend
    #   domain: melchior.systems
    #   tags:
    #     - web
    
    server {
        listen 80;
        listen 443 ssl;
        server_name fort.kickass.systems;
    
        {{acme_challenge_dir_snippet}}
    
        {{ssl_snippet}}
    
        location /git {
            rewrite ^/git(/.*)$ https://code.rix.si$1 permanent;
        }
    }
    
    server {
        listen 80;
        listen 443 ssl;
        server_name fontkeming.fail;
    
        {{acme_challenge_dir_snippet}}
    
        {{ssl_snippet}}
    }
    
    server {
        listen 80;
        listen 443 ssl;
        server_name home.rix.si;
    
        {{acme_challenge_dir_snippet}}
    
        {{ssl_snippet}}
    
        location /git {
            rewrite ^/git(/.*)$ https://code.rix.si$1 permanent;
        }
    
        location /prom {
            proxy_pass   http://localhost:9090/prom;
        }
    
        location /grafana/ {
            proxy_pass   http://localhost:3000/;
        }
    
        location /fdroid {
            alias /srv/files/services/fdroid/repo;
        }
    }
    
    server {
           listen 80;
           server_name {{domain}};
    
    {% if https_redir == true %}
           location / { return 301 https://$host$request_uri; }
    {% endif %}
    
           {{acme_challenge_dir_snippet}}
    }
    
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    
    server {
        listen 443 ssl;
        server_name {{domain}};
    
        {{ssl_snippet}}
    
        {{acme_challenge_dir_snippet}}
    
        root {{root}};
    
        location ~ ^/~(.+?)(/.*)?$ {
            alias /home/$1/public_html$2;
            index index.html index.htm;
            autoindex on;
        }
    }
    
    server {
        listen 80;
        server_name {{domain}};
    
        {{acme_challenge_dir_snippet}}
    
    {% if https_redir == true %}
        location / { return 301 https://$host$request_uri; }
    {% endif %}
    }
    
    server {
        listen 443 ssl;
        server_name {{domain}};
    
        {{ssl_snippet}}
    
        {{acme_challenge_dir_snippet}}
    
        root {{root}};
    
        location / {
    {% if use_htpasswd %}
            auth_basic           "closed site";
            auth_basic_user_file /etc/nginx/htpasswd;
    {% endif %}
    
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $http_host;
            proxy_pass {{proxy_pass_url}};
        }
    }
    

Services largely run on Docker

Most of the services on my system run in Docker containers that I build myself, orchestrated by systemd. This is simple, foolproof, and largely works. In the past I've tried to go the route of docker-compose or Kubernetes or some similar monolith, but I keep coming back to this model.
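
Because each container is just a systemd unit, day-to-day operation is plain systemctl and journalctl; for example, with the pihole service defined below:

sudo systemctl restart pihole.service
sudo journalctl -u pihole.service -f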

Docker installation

Fedora's packaging for Docker seems quite broken, or at the very least frustrating to use, so I use the upstream packages, stable versions only.

---
- name: docker-ce yum.repos.d installed
  copy:
    src: docker-ce.repo
    dest: /etc/yum.repos.d/docker-ce.repo

- name: Docker GPG key in RPM
  rpm_key:
    key: https://download.docker.com/linux/centos/gpg
    state: present

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/fedora/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/fedora/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://download.docker.com/linux/fedora/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/fedora/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://download.docker.com/linux/fedora/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/fedora/gpg

- name: docker-ce installed
  dnf:
    state: installed
    name: docker-ce
  notify: restart docker

- name: docker-ce enabled at boot
  service:
    name: docker
    state: started
    enabled: yes
---
- name: restart docker
  service:
    name: docker
    state: restarted

- role: docker
  tags:
  - docker

The Base Application Role

The crux of this is "template a docker run command and shove it into a systemd service's ExecStart."

---
- name: "{{service_name}} : systemd configuration template installed"
  template:
    src: docker-app.service.j2
    dest: /etc/systemd/system/{{service_name}}.service
  notify:
  - "enable {{service_name}}"
---
- name: "enable {{service_name}}"
  systemd:
    daemon_reload: yes
    enabled: yes
    name: "{{service_name}}"
    state: restarted

[Unit]
Description={{ description }}
After=docker.service

[Service]
Type=simple
ExecStart=/usr/bin/docker run --rm --name {{service_name}} {{docker_run_args}} {%for env in environments%}-e {{env}} {%endfor%} {% for port in ports %}-p {{port}} {%endfor%} {% for volume in volumes %}-v {{volume}} {%endfor%} {{image}} {{command_args}}
ExecStop=/usr/bin/docker stop {{service_name}}

[Install]
WantedBy=default.target
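
To make the templating concrete, the pihole invocation further down this page renders to an ExecStart along these lines (ServerIP elided):

ExecStart=/usr/bin/docker run --rm --name pihole --cap-add=NET_ADMIN --dns=127.0.0.1 --dns=1.1.1.1 -e ServerIP=... -e VIRTUAL_HOST=admin.fontkeming.fail -e IPv6=False -p 53:53/tcp -p 53:53/udp -p 67:67/udp -p 1801:80/tcp -p 10443:443/tcp -v /srv/files/services/pi-hole/config:/etc/pihole -v /srv/files/services/pi-hole/dnsmasq:/etc/dnsmasq.d rrix/pi-hole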

PostgreSQL

If a service needs a database and I have a choice, I use PostgreSQL. I run it inside a Docker container with full --net=host configured for it – probably overkill, and probably unnecessary, but I'm not brave enough to fiddle with this too much.

Of note, my system is configured to write WAL and PITR files onto the large disks installed in my server, while the live data lives on the SSD mounted at /. This has caused problems for me in the past with disk IO, and is worth re-evaluating.

- role: docker-app
  service_name: postgres
  description: Shared pgsql for fontkeming services
  image: rrix/postgres:9.6
  docker_run_args: --net=host
  command_args: ""
  ports: []
  environments:
  - 'POSTGRES_PASSWORD={{postgresql_password}}'
  volumes:
  - /var/lib/pgsql/data:/var/lib/postgresql/data
  - /srv/files/services/postgres/backups:/backups
  - /srv/files/services/postgres/archive:/archive
  tags:
  - docker
  - postgres
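
The WAL shipping happens via the mounted /archive volume; the corresponding postgresql.conf settings aren't captured in this document yet (see the next item), but they look something like this sketch:

wal_level = replica
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
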
  • NEXT Document the rest of my postgres setup and get my backup scripts automated

Pi-Hole DNS Server

Pi-hole is a simple ad-blocking DNS service designed to run even on a Raspberry Pi, hence the name. I use it as the default DNS server for my systems, and the default DNS server for my Wireguard VPN configurations. There's no reason for me not to use my own DNS, unless someone is doing network stupidity.

- role: docker-app
  service_name: pihole
  description: "Pi-Hole DNS Server"
  image: rrix/pi-hole
  docker_run_args: "--cap-add=NET_ADMIN --dns=127.0.0.1 --dns=1.1.1.1"
  command_args: ""
  ports:
  - 53:53/tcp
  - 53:53/udp
  - 67:67/udp
  - 1801:80/tcp
  - 10443:443/tcp
  environments:
  - "ServerIP={{ansible_default_ipv4.address}}"
  - "VIRTUAL_HOST=admin.fontkeming.fail"
  - "IPv6=False"
  volumes:
  - "/srv/files/services/pi-hole/config:/etc/pihole"
  - "/srv/files/services/pi-hole/dnsmasq:/etc/dnsmasq.d"
  tags:
  - docker
  - pihole

- role: web-frontend
  domain: dns.fontkeming.fail
  template: simple-proxy-pass
  proxy_pass_url: http://127.0.0.1:1801/
  use_htpasswd: true
  dnszone: fontkeming.fail
  dnssubdomain: dns
  tags:
  - docker
  - pihole

Matrix Services

I am a heavy user and proponent of the Matrix federated chat ecosystem. I host all of it myself and you can find me over there as @rrix:kickass.systems.

  • Matrix Synapse Homeserver
    - role: docker-app
      service_name: matrix-synapse
      description: Matrix Synapse Homeserver
      image: rrix/synapse
      docker_run_args: --net=host
      command_args: ""
      ports: []
      environments: []
      volumes:
      - /srv/files/services/matrix-synapse:/data
      tags:
      - docker
      - matrix
    
    
    - role: web-frontend
      domain: matrix.fontkeming.fail
      dnszone: fontkeming.fail
      dnssubdomain: matrix
      tags:
      - web
      - matrix
    
    server {
           listen 80;
           server_name matrix.fontkeming.fail;
    
           location / { return 301 https://$host$request_uri; }
    
           {{acme_challenge_dir_snippet}}
    }
    
    server {
        listen 443 ssl;
        server_name matrix.fontkeming.fail;
    
        {{ssl_snippet}}
    
        {{acme_challenge_dir_snippet}}
    
        location /_matrix {
            proxy_pass http://127.0.0.1:8008/_matrix;
            client_max_body_size 16m;
        }
    
        location / {
            proxy_pass http://127.0.0.1:8010;
        }
    }
    
    server {
        listen 8448 ssl default_server;
        listen [::]:8448 ssl default_server;
        server_name fontkeming.fail;
    
        {{ssl_snippet}}
    
        location / {
            proxy_pass http://localhost:8008;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
    
  • Riot Web Client
    - role: docker-app
      service_name: riot-web
      description: Riot Client
      image: rrix/riot-web
      docker_run_args: --net=host
      command_args: ""
      ports: []
      environments: []
      volumes:
      - /srv/files/services/riot-web:/data
      - /srv/files/services/riot-web/config.json:/var/www/riot/config.json
      tags:
      - docker
      - matrix
    
  • Matrix IRC App Service
    - role: docker-app
      service_name: matrix-irc
      description: Matrix Appservice IRC
      image: rrix/matrix-appservice-irc
      docker_run_args: --net=host
      command_args: -f /irc_registration_file.yaml -c /irc-config.yaml -p 9999
      ports: []
      environments: []
      volumes:
      - /srv/files/services/matrix-synapse/irc-as-data:/usr/local/lib/node_modules/matrix-appservice-irc/data
      - /srv/files/services/matrix-synapse/irc-config.yaml:/irc-config.yaml
      - /srv/files/services/matrix-synapse/wobscale_registration.yaml:/irc_registration_file.yaml
      tags:
      - docker
      - matrix
    
  • Matrix Dimension Integration Server
    - role: docker-app
      service_name: matrix-dimension
      description: "Dimension Matrix integration server"
      image: rrix/dimension
      docker_run_args: ""
      command_args: ""
      ports:
      - 8184:8184
      environments: []
      volumes:
      - /srv/files/services/dimension:/data
      tags:
      - docker
      - matrix
    
    - role: web-frontend
      domain: dimension.fontkeming.fail
      dnszone: fontkeming.fail
      dnssubdomain: dimension
      tags:
      - docker
      - web
      - matrix
    
    
    

Media

I like having my music local. It fits in with my core ideologies, and the vast majority of the music I listen to is the sort of thing I can download in high-fidelity formats while more directly supporting artists whose work I appreciate.

Largely, this means music is purchased through Bandcamp, Bleep, and Amazon.com, and then uploaded "somewhere". In the far distant past, that was Ampache, then Subsonic, and for a while I tried to use Funkwhale, but I wasn't terribly successful at meshing its library model with my library, even after integrating Beets – too many of my things aren't on MusicBrainz, and I am not brave and bored enough to upload all of the metadata myself. Music which is not available on those services is usually purchased in some other fashion available to me, perhaps a vinyl I find at a local record shop or a CD I order on-line, and then downloaded through a private torrent tracker.

And so, I keep coming back to mpd, which is simple, and there are plenty of clients (including some Emacs ones which I can never quite make work well enough for me). It supports local media playback through PulseAudio, but it also supports setting up an HTTP MP3 or OGG stream, which a client can then consume. This is cross-documented in the CCE Music module, and unifying all of this into there is probably something I should be thinking about doing.
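
The stream ends up proxied at https://music.fontkeming.fail/mpd (see the nginx template below), so anything that can play an HTTP audio stream works as a client, for example:

mpv https://music.fontkeming.fail/mpd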

  • MPD Music Player Daemon
    - role: docker-app
      service_name: mpd
      description: "Music Player Daemon"
      image: rrix/mpd
      docker_run_args: ""
      command_args: ""
      ports:
      - 6600:6600
      - 8000:8000
      environments: []
      volumes:
      - /srv/files/Music/:/opt/music
      tags:
      - docker
      - music
    
    - role: web-frontend
      domain: music.fontkeming.fail
      tags:
      - web
      - music
    
    server {
           listen 80;
           server_name music.fontkeming.fail;
    
           location / { return 301 https://$host$request_uri; }
    
           {{acme_challenge_dir_snippet}}
    }
    
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    
    server {
        listen 443 ssl;
        server_name music.fontkeming.fail;
    
        {{ssl_snippet}}
    
        {{acme_challenge_dir_snippet}}
    
        location /mpd {
            proxy_pass   http://localhost:8000;
        }
    }
    
  • Beets Music Library Manager
    - role: docker-app
      service_name: beets
      description: "Beets Audio Library Manager"
      image: rrix/beets
      docker_run_args: ""
      command_args: ""
      ports:
      - 8337:8337/tcp
      environments:
      - PGID=10000
      - PUID=10000
      volumes:
      - /srv/files/Music:/music
      - /srv/files/Music/beets:/config
      - /srv/files/services/transmission/Downloads:/downloads
      - /srv/files/rrix/incoming-music:/downloads/incoming
      tags:
      - docker
      - music
    
  • Transmission Torrent Daemon
    - role: docker-app
      service_name: transmission
      description: Transmission Torrent Daemon
      image: rrix/transmission
      docker_run_args: ""
      command_args: ""
      ports:
      - 9091:9091
      - 51413:51413
      environments: []
      volumes:
      - /srv/files/services/transmission/:/var/lib/transmission-daemon
      tags:
      - docker
      - media
    
  • npbot posts Last.FM scrobbles to Mastodon
    - role: docker-app
      service_name: npbot
      description: "Post nowplaying from last.fm to mastodon"
      image: rrix/npbot
      docker_run_args: ""
      command_args: ""
      ports: []
      environments: []
      volumes: []
      tags:
      - docker
      - mastodon
    
  • NEXT mpdscribble
  • NEXT sonarr and sonar-rss

Gogs Git Server

I use Gogs to manage my git repositories. It's not GitHub, and it's not as heavy-weight as GitLab. I'm not using any of the collaborative features heavily, basically just using this to store my own code and code I rely on.

- role: docker-app
  service_name: gogs
  description: Gogs Git Server
  image: rrix/gogs
  docker_run_args: ""
  command_args: ""
  ports:
  - 10023:22
  - 10080:3000
  environments: []
  volumes:
  - /srv/files/services/gogs:/data
  tags:
  - docker

- role: web-frontend
  domain: code.rix.si
  template: simple-proxy-pass
  proxy_pass_url: http://127.0.0.1:10080
  tags:
  - web
  • NEXT update and migrate to gitea

The Universal Aggregator

This is a piece of software which is heavily documented as part of the GNUS CCE module, but in short, it takes a list of RSS feeds and other "messaging"-type streams, like my Twitter feeds, and crams them into a Maildir which can then be fed into my mail reader through various avenues.

- role: docker-app
  service_name: ua
  description: "Universal Aggregator"
  image: rrix/ua
  docker_run_args: ""
  command_args: ""
  ports: []
  environments:
  - LOCAL_USER_ID=10000
  volumes:
  - /srv/files/rrix/Maildir:/data
  tags:
  - docker

Nextcloud

- role: pgdatabase
  application: nextcloud
  tags:
  - postgres

- role: docker-app
  service_name: nextcloud
  description: Nextcloud
  image: rrix/nextcloud
  docker_run_args: ""
  command_args: ""
  ports:
  - 19000:9000
  environments: []
  volumes:
  - /srv/files/services/nextcloud:/var/www/html
  - /srv/files/rrix:/data/shares
  - /srv/files/Videos:/data/tv
  - /srv/files/Movies:/data/movies
  tags:
  - docker
  - nextcloud

- role: docker-app
  service_name: nextcloud-cron
  description: Nextcloud Cron Jobs
  image: rrix/nextcloud
  docker_run_args: ""
  command_args: "/cron.sh"
  ports: []
  environments: []
  volumes:
  - /srv/files/services/nextcloud:/var/www/html
  - /srv/files/rrix:/data/shares
  - /srv/files/Videos:/data/tv
  - /srv/files/Movies:/data/movies
  tags:
  - docker
  - nextcloud

- role: web-frontend
  domain: files.fontkeming.fail
  tags:
  - web

server {
       listen 80;
       server_name files.fontkeming.fail;

       location / { return 301 https://$host$request_uri; }

       {{acme_challenge_dir_snippet}}
}

server {
    listen 443 ssl;
    server_name files.fontkeming.fail;

    {{ssl_snippet}}

    {{acme_challenge_dir_snippet}}

    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;

    root /srv/files/services/nextcloud;

    location / {
        rewrite ^ /index.php$request_uri;
    }

    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/html/$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param HTTPS on;
        #Avoid sending the security headers twice
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass 127.0.0.1:19000;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }

    # Adding the cache control header for js and css files
    # Make sure it is BELOW the PHP block
    location ~ \.(?:css|js|woff|svg|gif)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        # Add headers to serve security related headers (It is intended to
        # have those duplicated to the ones above)
        # Before enabling Strict-Transport-Security headers please read into
        # this topic first.
        # add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
        #
        # WARNING: Only add the preload option once you read about
        # the consequences in https://hstspreload.org/. This option
        # will add the domain to a hardcoded list that is shipped
        # in all major browsers and getting removed from this list
        # could take several months.
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;
        # Optional: Don't log access to assets
        access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$request_uri;
        # Optional: Don't log access to other assets
        access_log off;
    }

    location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

    # set max upload size
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
}

Playmaker F-Droid repositories

I don't use this right now, but Playmaker is a little thing that lets you download APKs for things in your Google Play library, or found through general search, and provides an F-Droid-compatible repository for them. In practice, this allows me to use the handful of proprietary applications which I rely on from a MicroG Lineage-powered device.

I have one of these set up for each of the accounts I've purchased applications for, and access the private interfaces from an SSH tunnel or through Wireguard. The public interface is currently a sub-directory on my home web URL, but I also want to set up a dedicated repository site for this if I end up using it again.

- role: docker-app
  service_name: playmaker_1
  description: playmaker for ry@n.rix.si
  image: nomore201/playmaker
  docker_run_args: ""
  command_args: ""
  ports:
  - 15000:5000
  environments: []
  volumes:
  - /srv/files/services/fdroid:/data/fdroid
  - /srv/files/services/fdroid/credentials-ry_n.rix.si.conf:/data/fdroid/credentials.conf
  tags:
  - docker

- role: docker-app
  service_name: playmaker_2
  description: playmaker for ryan.j.rix
  image: nomore201/playmaker
  docker_run_args: ""
  command_args: ""
  ports:
  - 15001:5000
  environments: []
  volumes:
  - /srv/files/services/fdroid:/data/fdroid
  - /srv/files/services/fdroid/credentials-ryanjrix.conf:/data/fdroid/credentials.conf
  tags:
  - docker

- role: docker-app
  service_name: playmaker_3
  description: playmaker for phrkonaleash
  image: nomore201/playmaker
  docker_run_args: ""
  command_args: ""
  ports:
  - 15002:5000
  environments: []
  volumes:
  - /srv/files/services/fdroid:/data/fdroid
  - /srv/files/services/fdroid/credentials-phrkonaleash.conf:/data/fdroid/credentials.conf
  tags:
  - docker

# - role: web-frontend
#   domain: repos.fontkeming.fail

Monitoring

What good is having infrastructure if, when it inevitably fucks up, you don't notice until you have to fix it? I use Prometheus as a time-series database and query infrastructure, Grafana for making it human-visible, and (eventually – I stripped it out and haven't brought it back) AlertManager for alerting to my phone via Pushover or equivalent.

Past that, Prometheus's model is to provide a unified collection interface which other systems can use to expose data. So I have a Prometheus exporter for my container stats and another for my Pi-hole DNS blocker; there are many others available as well.
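
Wiring a new exporter in is just another scrape target; the relevant chunk of prometheus.yml is along these lines (a sketch – the real file lives in /srv/files/services/prometheus and isn't reproduced here):

scrape_configs:
- job_name: cadvisor
  static_configs:
  - targets: ['localhost:9101']
- job_name: pihole
  static_configs:
  - targets: ['localhost:9311']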

  • Prometheus
    - role: docker-app
      service_name: prometheus
      description: prometheus monitoring tsdb
      image: prom/prometheus
      docker_run_args: "--net=host"
      command_args: "--config.file=/prometheus/prometheus.yml --web.external-url=https://home.rix.si/prom/"
      ports: []
      environments: []
      volumes:
      - /srv/files/services/prometheus:/prometheus
      tags:
      - docker
      - monitoring
    
  • Grafana
    - role: docker-app
      service_name: grafana
      description: "grafana"
      image: rrix/grafana
      docker_run_args: "--net=host"
      command_args: ""
      ports: []
      environments:
      - GF_SERVER_ROOT_URL=https://home.rix.si/grafana/
      volumes:
      - /srv/files/services/grafana:/var/lib/grafana
      tags:
      - docker
      - monitoring
    
  • cAdvisor
    - role: docker-app
      service_name: cadvisor
      description: "cAdvisor container information system"
      image: google/cadvisor
      docker_run_args: ""
      command_args: "--port=9101"
      ports:
      - 9101:9101
      environments: []
      volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
      tags:
      - docker
      - monitoring
    
  • Pi-Hole Prometheus Exporter
    - role: docker-app
      service_name: pihole_exporter
      description: "Pi-Hole Prometheus Exporter"
      image: rrix/pihole_exporter
      docker_run_args: ""
      command_args: "-log.level=debug -pihole http://209.251.245.117:1801"
      ports:
      - 9311:9311
      environments: []
      volumes: []
      tags:
      - docker
      - pihole
    
  • NEXT AlertManager
  • NEXT postgresql exporter
  • NEXT Unifi Exporter
  • NEXT grok exporter
  • NEXT nginx exporter
  • NEXT Transmission Exporter
  • NEXT speedtest exporter
  • NEXT nodeexporter

Services Outside of Docker

AFDSEW - Seattle Weather Discussions via RSS

AFDSEW is a thing I built based on some code by a fine chap named Alex Ford; it provides an RSS feed of the Seattle National Weather Service office's "Area Forecast Discussion", a really nice little description of the causes and effects of localized weather. It's basically a Python script that runs on a systemd timer and generates an RSS feed if there is a new AFD. Then there's just an nginx frontend to expose the RSS feed, the source AFDs, and an information page.

- role: afdsew
  tags:
  - docker
  - afdsew

- role: web-frontend
  domain: afd.fontkeming.fail
  template: simple-https-root
  dnszone: fontkeming.fail
  dnssubdomain: afd
  root: /srv/files/services/afdsew/SEW/
  tags:
  - afdsew
  - web
---
- name: afdsew service installed
  template:
    src: service.j2
    dest: /etc/systemd/system/afdsew.service
  notify:
  - reload systemd

- name: afdsew timer installed
  template:
    src: timer.j2
    dest: /etc/systemd/system/afdsew.timer
  notify:
  - reload systemd

[Unit]
Description=run AFD every two hours

[Timer]
OnBootSec=60min
OnUnitActiveSec=2h

[Install]
WantedBy=timers.target

[Unit]
Description=Parse Seattle AFD in to an RSS Feed
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker run --rm  -v /srv/files/services/afdsew/:/data/ rrix/afdsew

[Install]
WantedBy=default.target
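
Nothing in the role enables the timer, so presumably that happened by hand once, along the lines of:

sudo systemctl enable --now afdsew.timer
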
  • NEXT integrate SELinux configuration for this

fail2ban installation

I pretty much only have fail2ban enabled for SSH blackholing. Operating a system on the public internet is such a joy.

- role: fail2ban
---
- name: fail2ban installed
  dnf:
    state: installed
    name:
    - fail2ban
    - fail2ban-firewalld
  notify:
  - restart fail2ban

- name: fail2ban configuration file installed
  copy:
    src: jail.local
    dest: /etc/fail2ban/jail.local
  notify:
  - restart fail2ban

[DEFAULT]
bantime = 3600
action = %(action_)s
backend = systemd

[sshd]
enabled = true
---
- name: restart fail2ban
  service:
    state: restarted
    enabled: yes
    name: fail2ban
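
Once it's running, the state of the SSH jail is visible with the standard client:

fail2ban-client status sshd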

NEXT Syncthing

INPROGRESS wireguard

- role: wireguard

Wireguard is a simple, modern VPN. I use it to secure my devices, and also to create a private LAN: it makes services on my work machine available for testing from my mobile device, and lets me access files on my desktop's hard drive from the road.

This role is one of the few that is designed to run both as part of the local CCE and as server support; indeed, it should probably be its own CCE module. We'll see how complex this needs to be.

The configuration is populated with keys from my secrets.yml file; gods only know what would happen if those keys were public.

---
- name: wireguard copr enabled
  yum_repository:
    name: jdoss-wireguard
    description: copr repo for wireguard
    state: present
    enabled: yes
    baseurl: https://copr-be.cloud.fedoraproject.org/results/jdoss/wireguard/fedora-$releasever-$basearch/
    gpgcheck: true
    gpgkey: https://copr-be.cloud.fedoraproject.org/results/jdoss/wireguard/pubkey.gpg
    skip_if_unavailable: true
  when: ansible_pkg_mgr == "dnf"

- name: wireguard installed
  dnf:
    state: installed
    name:
    - wireguard-dkms
    - wireguard-tools
  when: ansible_pkg_mgr == "dnf"

- name: wireguard configuration directory exists
  file:
    state: directory
    path: /etc/wireguard
    mode: 0700

- name: wireguard wg0 template installed
  template:
    src: wg0.conf
    dest: /etc/wireguard/wg0.conf
    mode: 0600
    owner: root
    group: root
  notify: reset wireguard wg0

- name: wireguard wg1 template installed
  template:
    src: wg1.conf
    dest: /etc/wireguard/wg1.conf
    mode: 0600
    owner: root
    group: root
  notify: reset wireguard wg1

Handlers:

---
- name: reset wireguard wg0
  shell: wg-quick down wg0; wg-quick up wg0

- name: reset wireguard wg1
  shell: wg-quick down wg1; wg-quick up wg1

One configuration gives me access to my server and other resources on the wireguard network:

[Interface]
Address = {{wireguard_conf[ansible_hostname]['prv_address']}}
ListenPort = 51820
PrivateKey = {{wireguard_conf[ansible_hostname]['private_key']}}

{% for hostname, conf in wireguard_conf.items() %}
{% if hostname != ansible_hostname %}
# {{hostname}}
[Peer]
PublicKey = {{conf['public_key']}}
AllowedIPs = {{conf['prv_address']}}
{% if conf.get('pub_address') %}
Endpoint = {{conf['pub_address']}}
{% endif %}
{% endif %}

{% endfor %}

One configuration routes all traffic over the VPN:

[Interface]
Address = {{wireguard_conf[ansible_hostname]['prv_address']}}
ListenPort = 51820
PrivateKey = {{wireguard_conf[ansible_hostname]['private_key']}}

{% for hostname, conf in wireguard_conf.items() %}
{% if hostname != ansible_hostname %}
# {{hostname}}
[Peer]
PublicKey = {{conf['public_key']}}
AllowedIPs = 0.0.0.0/0
{% if conf.get('pub_address') %}
Endpoint = {{conf['pub_address']}}
{% endif %}
{% endif %}

{% endfor %}
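
After an interface comes up, wg show is the quickest way to confirm that peers are actually handshaking:

sudo wg show wg0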

The configuration is kept in an ansible-vault file with the following schema, one key per host.

wireguard_conf:
  fontkeming:
    prv_address: {{allowedIPs -> IP range that can be routed to this host}}
    pub_address: {{endpoint -> public IP:port combo that another node can connect to}}
    public_key:  {{public key}}
    private_key: {{private key}}
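
A filled-in entry looks something like this (the keys here are the WireGuard documentation's example keypair, not live ones):

wireguard_conf:
  fontkeming:
    prv_address: 10.0.0.1/32
    pub_address: fontkeming.fail:51820
    public_key:  xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=
    private_key: yAnz5TF+lXXJte14tji3zlMNq+hd2rYUIgJBgB3fBmk=

The same role is applied to my workstations with a tiny standalone playbook and wrapper:
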
---
- name: install wireguard on local machine
  hosts: all

  vars_files:
  - vars/secrets.yml

  roles:
  - wireguard

ansible-playbook --vault-password-file=~/.ansible-vault -i inventory -b $@ install-wireguard.yaml

Author: Ryan Rix

Created: 2019-05-07 Tue 11:12
