This changes the Vagrant setup to support multiple installations as multiple
boxes. In addition to Ubuntu, Vagrant can now be used to install on Debian
as well as on CentOS.
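For example (the exact machine names are defined in the Vagrantfile; the ones
below are illustrative):
```
# bring up one box per target distro (machine names are assumptions)
vagrant up ubuntu
vagrant up debian
vagrant up centos
```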
I took the chance to clean up the .deb install a bit and backported the analyzer
and celery services to proper SysV init scripts so they run there. To achieve
this, some of the distro specifics were moved from the Python setup scripts
into the install script.
For the CentOS support I added a rather involved OS preparation script. In the
long term this will be folded into the preparing-the-server docs we already have.
I had to switch the default port to http-alt (8080). On CentOS, 9080 is registered
for OCSP, and getting it to work for Apache without hacking SELinux is hard. I
think 8080 is the RFC way to go anyhow. Anyone who wants to override this should
find it rather easy to do using the --web-port arg and by hacking the Vagrantfile.
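As a minimal sketch of such an override (the flag name comes from this change;
the installer invocation itself is an assumption):
```
# install using a custom web port instead of the 8080 default
sudo ./install --web-port=7070
```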
The PyOpenSSL code has been refactored for all the distros that the Vagrantfile
now supports.
As far as my checks go, I tried this code with all the distros: I uploaded a
track, downloaded one podcast with a Unicode name and one served over SSL, and
was able to listen to them in each case.
In the experimental CentOS case, the UI is not up to spec, since services
need to be scheduled through systemctl and the status overview (i.e. on the /?config page)
does not work properly. The services need to be started as follows:
```
sudo systemctl start airtime-playout
sudo systemctl start airtime-liquidsoap
sudo systemctl start airtime_analyzer.service
sudo systemctl start airtime-celery.service
```
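Until the status overview works there, the state of each unit can be checked
by hand, e.g.:
```
sudo systemctl status airtime-playout
```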
The podcast downloader fails pretty badly when the podcast name contains non-ASCII chars. The main failure happens during logging; I have learnt way too much about Python's stupid Unicode implementation along the way.
This adds additional debug logging and also properly outputs the real reason a download fails. The content of the tags should be written as UTF-8 (or whatever is input into it); this commit mainly touches (and fixes) the logging.
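For illustration, this is the kind of pitfall involved, as a minimal Python 2
sketch (the names are made up; this is not the actual downloader code):
```
# -*- coding: utf-8 -*-
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('podcast-downloader')  # illustrative name

name = 'Über Podcast'  # arrives as UTF-8 encoded bytes, not unicode

# The classic failure: interpolating non-ASCII bytes into a unicode
# format string makes Python 2 decode them implicitly as ASCII, raising
# UnicodeDecodeError inside the logging call itself:
#   logger.debug(u'downloading %s' % name)
# Decoding explicitly at the boundary keeps the logging safe:
logger.debug(u'downloading %s', name.decode('utf-8'))
```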
This is the result of running the following over the tree (:P):
```
sed -i -e 's|/etc/airtime-saas/|/etc/airtime/|' `grep -irl 'airtime-saas' airtime_mvc/ python_apps/`
```
It might need more testing; the airtime-saas part never really made sense. zf1 has environments for that, i.e. you would create a saas environment based on production, for instance.
I believe legacy upstream was using this to share configuration between customers (i.e. the analyzer runs only once and writes to a shared S3 bucket). I assume they mount the airtime-saas folder onto individual customers' instances with a global config. Like I said, I don't feel that this makes sense, since all it does is make hacking at the configs in airtime-saas a bit easier. A serious SaaS operation should be using something like Puppet or Ansible to achieve this.
* Fix various small bugs in auto ingestion and tab implementation
* Update TaskManager run conditions to piggyback on API calls - this guarantees a certain frequency of requests and greatly reduces the chances of lock contention (see the sketch below)
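The piggyback pattern in the second item looks roughly like this; a sketch of
the general idea only, not the actual TaskManager code, with all names
illustrative:
```
import threading
import time

class PiggybackRunner(object):
    """Run periodic tasks opportunistically from API request handlers."""

    def __init__(self, interval_seconds, task):
        self._interval = interval_seconds
        self._task = task
        self._last_run = 0.0
        self._lock = threading.Lock()

    def maybe_run(self):
        # Called on every API request. Since requests arrive far more
        # often than the interval, the task is guaranteed a minimum
        # frequency without a dedicated scheduler process.
        if time.time() - self._last_run < self._interval:
            return
        # Non-blocking acquire: if another request is already running
        # the task, skip it instead of contending for the lock.
        if not self._lock.acquire(False):
            return
        try:
            self._task()
            self._last_run = time.time()
        finally:
            self._lock.release()
```
Every request handler calls maybe_run() on its way in, so the hot path costs
one time comparison and lock contention is limited to a single failed
non-blocking acquire.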