raspberry openHab Links

Wednesday, 24. January 2018

PinOut physical/BCM/WiringPi 

 

AdaFruit DHT temp/humid sensor project

Temperature and humidity measurement with the DHT22/AM2302 sensor and Raspberry Pi

 

Presence detection by mobile
It’s Android, so my first approach would be to set up Tasker to turn an OH Item ON/OFF through the OH REST API when Tasker sees the home wifi.
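For reference, flipping an Item over the REST API is a single HTTP POST; a minimal sketch, assuming openHAB listens at openhab:8080 and a hypothetical Switch Item named Presence_Phone:

# send the ON command to the Item (OFF analogous)
curl -X POST -H "Content-Type: text/plain" -d "ON" http://openhab:8080/rest/items/Presence_Phone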

Integrating a calendar:
Binding-docu

Times and rhythms, Astro and DayTime

Playing music to a Chromecast Audio:
https://community.openhab.org/t/starting-a-google-play-music-playlist-on-a-chromecast-audio/38619

433 MHz radio for remote-controlled power sockets, openHAB integration

openHAB docu:

Sitemaps Items Rules  Rules-Tutorial  Xtend-Docu  community.openhab.org/

Editor: VS Code Tool: CronMaker 


updating aur packages with yaourt

Wednesday, 6. December 2017

This is a work in progress; right now I don't seem to know how to do it right.

sudo yaourt -Syu --aur 

searches through my system, upgrades the db and then presents me a list of AUR packages that need an upgrade. It then works its way through the list, and I need to stay there and check every dialog; when there is a package I do not wish to upgrade, I cannot skip it. Sure, there is a dialog asking me if I want to update this package, but if I give it a "no" the entire routine is aborted.

So, in case the package I really want to update happens to be the last in yaourt's list, I have to update all of them, including the huge font collection which takes ages to download and build, and including the printer driver which I'd rather not touch at all.
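What seems to work as a way around this is updating a single AUR package directly instead of the whole batch; a minimal sketch, using the printer driver package mentioned below as the example:

# rebuild and update just this one package from AUR
yaourt -S brother-ql710w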

That got me shouting and cursing yesterday because following the update of the driver for the Brother QL-710W label printer the darn thing wouldn't print any more. CUPS was happy though, preparing the print data, posting it to /var/spool/cups and instantly flagging it as completed, no errors.

The package that gave me trouble is aur.archlinux.org/packages/brother-ql710w/ with the version 1.1.4r0. The prior version, pkgver=1.0.2r0, had installed and worked like a charm.

I spent hours trying to find a reason or even a hint at what was failing to work, but nothing helped until I decided to downgrade to the original version.
Now, with 'official' Arch packages this is rather easy: there is a store of older versions at /var/cache/pacman/pkg and you can take the package from there and downgrade with a pacman -U packageName
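A minimal sketch of such a downgrade, with the package file name being a made-up example:

# pick the older version from the local package cache
sudo pacman -U /var/cache/pacman/pkg/packageName-1.0.2-1-x86_64.pkg.tar.xz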

But packages from AUR are built when you install them, in /tmp, and so they are usually lost when you try to step back. I have learned now that there is an option to change this in the /etc/yaourtrc config file. Next time it will be easier. But as things were, I chose a different path yesterday: I manually downgraded the package when yaourt offered me to edit the build script. Changing the pkgver and the sha256sums was all it took.
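The edit itself boils down to two lines in the PKGBUILD; a rough sketch, with the checksums left as placeholders (the real values come from sha256sum, as described below) and assuming the package fetches the lpr and cupswrapper rpms:

# in the build script yaourt offers for editing
pkgver=1.0.2r0    # was 1.1.4r0
sha256sums=('<sum of the 1.0.2 lpr rpm>'
            '<sum of the 1.0.2 cupswrapper rpm>')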

One of the things I discovered was a nice feature of the AUR git; in my case the URL is https://aur.archlinux.org/cgit/aur.git/?h=brother-ql710w

I clicked around and soon had the site show me diffs of the prior and new version. Clearly the package was not at fault for my problems. Or Brother has changed some requirement for the install which the former version did not need and which should be reflected in the new package version - I didn't investigate this.

What I did was search the Brother support site for links to older versions of the driver (none!) and then play around with the new URL until I succeeded in getting the old versions. Naming conventions are a good thing. Then, after successfully downloading them, I took their sha256sums

sha256sum ql710wcupswrapper-1.0.2-0.i386.rpm

and pasted them into the build script. After hours of fruitless trying I was perplexed to see it work on the first go.

That step wasn't actually necessary; I could have taken the sums from the diff at aur.archlinux.org/cgit/aur.git/diff/PKGBUILD

 

FireFox Sync Server

Thursday, 30. November 2017

After years of Chromium as my default browser I've decided to give FF a new go after the release of v57 aka Quantum. Mozilla is still more trustworthy than Google, and hey, I preferred Netscape over Mosaic once. Part of the shift was trying out Firefox Sync. Since I don't have much trust in the cloud unless it is my own server, this meant installing Firefox Sync Server on my Debian server.

I more or less followed the howto from sathya.de/blog/how-tos/setup-your-own-firefox-1-5-sync-server-on-debian-with-apache2-and-mysql/ ; there are links to the Mozilla docu included on his site. The basic steps are easy to take; with dns.he.net and Let's Encrypt via dehydrated, setting up a new subdomain with a valid SSL certificate has come down to a matter of minutes.

Git clone the server, create a db and db-user, configure the virtual host in the web server's configuration, edit the default syncserver.ini, restart the web server and then tell the clients in about:config which sync server to use. Pretty basic, but it still has some potential for confusion and took me two runs to get it running.
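The client-side change is a single preference; a sketch, with sync.example.com standing in for the real subdomain:

# in about:config, point Firefox Sync at the own server
identity.sync.tokenserver.uri = https://sync.example.com/token/1.0/sync/1.5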

The supplied syncserver.ini has an entry public_url = http://localhost:5000/ which appears to suggest that ports should be defined in the server and client config. But this is not so: in a production environment with https and a web server in front, no ports are given.
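For orientation, the handful of syncserver.ini settings I ended up touching; a sketch, with domain, database name and credentials as placeholder assumptions:

[syncserver]
# no port here when Apache with https sits in front
public_url = https://sync.example.com/
# the MySQL db and user created for the server
sqluri = pymysql://syncuser:dbpassword@localhost/syncdb
# generate as the comments advise, e.g. head -c 20 /dev/urandom | sha1sum
secret = <paste the generated value here>
allow_new_users = true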

I saw 404 errors in the error_log which stemmed from an error in the client config: I had erased the token/ part of the URI. And I experienced a multitude of 500 server errors with traces in the error_log pointing at a line between public_url = ... and allow_new_users = ...
The first complained that the given secret exceeded the maximum length, although it had been created the way the comments advise. Later I saw errors from parsing the sqluri. Many visual checks did not find a problem.

What helped me was a set of voodoo measures including: manually retyping the sqluri line and inserting a 'dummy = stupid' line meant to catch any invisible stray syntactic elements. Other possible sources of problems here include permission issues and a missing execution flag on the wsgi file.

And then, suddenly, it worked.

 

Versions:

Debian 9.2
Apache/2.4.25 (Debian)
Nextcloud 12.0.3
officeonline-install.sh v2.4.0

 
https://github.com/husisusi/officeonlin-install.sh/blob/master/README.md
https://github.com/husisusi/officeonlin-install.sh/issues/135
 
mkdir a folder for the script, clone or download the .zip
open officeonline-install.cfg with an editor and adapt the POCO parameters as ExaconAT shared them in issue 135 (link above)
 
Let the script run (I started it without any parameters). It took about 80 min on my box.
I had to restart the script once because it complained about an inconsistency in /etc/group which needed to be fixed first. After the restart the script managed to resume where it had stopped before.
 
In DNS, create a subdomain for lool (LibreOffice OnLine) to run in, plus a valid certificate for https and a virtual host config to tell Apache how to proxy. Check that Apache has the required modules enabled:
proxy_wstunnel proxy proxy_http ssl
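On Debian, enabling them is quick; a sketch, assuming none of them are active yet:

sudo a2enmod proxy proxy_http proxy_wstunnel ssl
sudo systemctl restart apache2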
 
For the latter I used the example Nextcloud recommends in https://nextcloud.com/collaboraonline/ and this was a source of problems later. It took me hours of trial and error & research (https://superuser.com/questions/439054/apache-reverse-proxy-no-protocol-handler/760839) to learn that all the ProxyPassReverse lines need a trailing slash at the end of the right-hand argument. The relevant bit of the config I use is:
 
# Encoded slashes need to be allowed
AllowEncodedSlashes NoDecode

# Container uses a unique non-signed certificate
SSLProxyEngine On
SSLProxyVerify None
SSLProxyCheckPeerCN Off
SSLProxyCheckPeerName Off

# keep the host
ProxyPreserveHost On

# static html, js, images, etc. served from loolwsd
# loleaflet is the client part of LibreOffice Online
ProxyPass           /loleaflet https://127.0.0.1:9980/loleaflet retry=0
ProxyPassReverse    /loleaflet https://127.0.0.1:9980/loleaflet/

# WOPI discovery URL
ProxyPass           /hosting/discovery https://127.0.0.1:9980/hosting/discovery retry=0
ProxyPassReverse    /hosting/discovery https://127.0.0.1:9980/hosting/discovery/

# Main websocket
ProxyPassMatch "/lool/(.*)/ws$" wss://127.0.0.1:9980/lool/$1/ws nocanon

# Admin Console websocket
ProxyPass   /lool/adminws wss://127.0.0.1:9980/lool/adminws

# Download as, Fullscreen presentation and Image upload operations
ProxyPass           /lool https://127.0.0.1:9980/lool
ProxyPassReverse    /lool https://127.0.0.1:9980/lool/
 
The script creates a self-signed certificate for lool in /etc/loolwsd.
This is not helpful when things do not work at once, because Chrome and Firefox are very strict about self-signed certs, and any direct tests of the lool subdomain and the virtual host there are hindered. I got around this by editing /opt/online/loolwsd.xml so that the certificate paths (cert, ca-chain, privkey) point at my valid cert. An increased log level and file enable="true" are other useful settings here.
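A sketch of the relevant loolwsd.xml part after that edit; the paths are assumptions for a dehydrated-managed certificate:

<ssl desc="SSL settings">
    <cert_file_path>/etc/dehydrated/certs/lool.domain.tld/cert.pem</cert_file_path>
    <key_file_path>/etc/dehydrated/certs/lool.domain.tld/privkey.pem</key_file_path>
    <ca_file_path>/etc/dehydrated/certs/lool.domain.tld/chain.pem</ca_file_path>
</ssl>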
 
Things look fine when https://lool.domain.tld:9980 results in "OK" and https://lool.domain.tld:9980/hosting/discovery returns the WOPI discovery XML. If the latter gives a 500, the proxy config may be the reason.
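Both checks are quickly done from the shell; the -k is only needed while the self-signed cert is still in place:

curl -k https://lool.domain.tld:9980/
curl -k https://lool.domain.tld:9980/hosting/discovery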
 
In the Nextcloud admin section for Collabora Online the entry is https://lool.domain.tld:9980 - during my research I found several discussions recommending not to include the port, but that only adds another error.
 
Sources of error messages included the Nextcloud admin logging (connection refused as long as the proxying doesn't work), /var/log/loolwsd.log (needs the log level increased and file output enabled) and of course the messages of systemctl status loolwsd and systemctl status apache2
 
 
Works nicely at first sight.
Of course, testing it with a real-life document, I hit the issue of missing fonts: everything rendered in some default font. The workaround I found is like this:
/home/lool has been created by the script and is mostly empty (some hidden files). Create a .fonts directory there and copy all needed .ttf files into it. Collabora will register them there when starting; however, while running it cannot access them, as it is confined to a jail (each document has its own jail under /opt/online/jails/). Those jails get copied from /opt/online/systemplate, so create /opt/online/systemplate/home/lool/.fonts/ and copy the .ttf files into it, too. Done.
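The whole workaround as shell commands; /root/fonts is an assumed location of the .ttf files to add:

mkdir -p /home/lool/.fonts /opt/online/systemplate/home/lool/.fonts
cp /root/fonts/*.ttf /home/lool/.fonts/
cp /root/fonts/*.ttf /opt/online/systemplate/home/lool/.fonts/
systemctl restart loolwsd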

There is an admin console at https://domain.tld:9980/loleaflet/dist/admin/adminSettings.html; you define user/password in the systemd service file, which is at /lib/systemd/system/loolwsd.service on my box.
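If I read my unit file correctly, the credentials go in as --o: overrides on the loolwsd command line; a sketch, with the binary path and the credentials being placeholder assumptions:

# /lib/systemd/system/loolwsd.service (excerpt)
ExecStart=/opt/online/loolwsd --o:admin_console.username=admin --o:admin_console.password=changeme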

 

With the latest update to Chromium 57, all my certificates stopped working. WoSign and StartCom had fallen from grace a while ago, and newly issued certificates from these certification authorities (really just one) were no longer accepted, but with version 57 Chromium/Chrome has quietly tightened this further: now older certs are affected too, unless the site is one of the really big ones listed in the Alexa top 1 million.

One could get upset about that, but it wouldn't help. The cheapest commercial certificates cost 8 USD/year and come from Symantec, which doesn't exactly sound very trustworthy either.

That leaves Let's Encrypt. I had avoided it so far because its 90-day lifetime is too short for requesting certificates manually, while the official client comes across as a bit of a monster: umpteen dependencies, wants to configure the server itself, and what not. And there was no great hurry - until now.

After some research I found myself a lean bash client and, to my surprise, reached the goal with it rather quickly.

- Download https://github.com/lukas2511/dehydrated/archive/master.zip and unpack it to /usr/local/bin
- In /etc/apache2/conf-available create a file dehydrated.conf with
Alias /.well-known/acme-challenge /var/www/dehydrated
<Directory /var/www/dehydrated>
       Options None
       AllowOverride None

       # Apache 2.4
       <IfModule mod_authz_core.c>
               Require all granted
       </IfModule>
</Directory>
and symlink it into /etc/apache2/conf-enabled
- Create a directory dehydrated in /var/www
- Restart Apache
- Copy config and domains.txt from the docs folder of the unzipped code into /etc/dehydrated and adapt them; initially enter the staging URL
- In the program folder run the script with ./dehydrated -c and fix errors if necessary. Once it runs through, comment out the staging URL and build the certs.
- In /etc/apache2/sites-available adapt the vhost confs so that the new certs are used
       SSLCertificateFile      /etc/dehydrated/certs/beispiel.de/cert.pem
       SSLCertificateKeyFile   /etc/dehydrated/certs/beispiel.de/privkey.pem
       SSLCertificateChainFile /etc/dehydrated/certs/beispiel.de/chain.pem

- Restart Apache, done.

Cool. Now just add a cron job that runs this every once in a while (and renews expiring certificates), and the whole thing can be forgotten.
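A sketch of such a cron entry, assuming the unpacked script ended up at /usr/local/bin/dehydrated-master:

# /etc/cron.d/dehydrated - weekly renewal check, reload Apache on success
30 4 * * 1 root /usr/local/bin/dehydrated-master/dehydrated -c && systemctl reload apache2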

 
