Monday, December 19, 2016

gcloud cheatsheet for GKE

This is by no means comprehensive; it's just some things I've found useful.

Get installed and auth'd:
gcloud components install kubectl
gcloud auth application-default login
Creating a cluster with a specific version:
gcloud config set compute/zone us-west1-b
gcloud beta container clusters create permissions-test-cluster \
    --cluster-version=1.6.1 \
    --no-enable-legacy-authorization
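Once the cluster is up, I usually point kubectl at it straight away (this uses the default zone set above, and the cluster name from the create command):
gcloud container clusters get-credentials permissions-test-cluster
kubectl get nodes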
Upgrading GKE:
# Get available versions
$ gcloud container get-server-config 
Fetching server config for us-west1-b
defaultClusterVersion: 1.5.7
defaultImageType: COS
validImageTypes:
- COS
- CONTAINER_VM
validMasterVersions:
- 1.6.4
- 1.5.7
validNodeVersions:
- 1.6.4
- 1.6.2
- 1.5.7
- 1.5.6
- 1.4.9

$ CLUSTER_NAME="testing"
$ CLUSTER_VERSION="1.6.4"

# Nodes
$ gcloud container clusters upgrade $CLUSTER_NAME --cluster-version=$CLUSTER_VERSION

# Master
$ gcloud container clusters upgrade $CLUSTER_NAME --master --cluster-version=$CLUSTER_VERSION
List containers in the google-containers project:
gcloud container images list --repository gcr.io/google-containers
Tags for a given container:
gcloud container images list-tags gcr.io/gke-release/auditproxy

Thursday, December 15, 2016

Google gcloud tool cheatsheet

Some gcloud commands I've found useful:
# See config
gcloud config list

# Change default zone
gcloud config set compute/zone us-central1-a

# Copy a file, default zone
gcloud compute copy-files some/file.txt cloud-machine-name:~/

# Copy a file, specifying zone for machine
gcloud compute copy-files some/file.txt cloud-machine-name:~/ --zone=us-west1-a

# Forward a port with ssh
gcloud compute ssh client-machine-name -- -L 8080:localhost:8080
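One more I reach for constantly, to find the machine names and zones the commands above need:
gcloud compute instances list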

Wednesday, October 26, 2016

Mac OS X Sierra and SSH keys

With OS X Sierra, Apple changed the ssh client's key handling behavior. It now aligns with upstream OpenSSH by no longer automatically loading keys whose passphrases are stored in the keychain when you log in. More surprisingly, it now remembers your ssh key passphrase in the keychain automatically by default. To disable this behavior you can add this to ~/.ssh/config:
Host *
    UseKeychain no
As you can see in the radar report, deleting keys using "ssh-add -D" seems to be just as problematic and confusing as it is with gnome-keyring, i.e. "All identities removed" is a lie.

For deleting already saved passwords and re-instating the El Capitan ssh behavior, see here.
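If you've disabled the automatic behavior but still want to store a particular key's passphrase in the keychain on demand, Apple's ssh-add has a -K flag for that (the path is whatever your key is):
ssh-add -K ~/.ssh/id_rsa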

Tuesday, October 4, 2016

Prevent system management from installing over a test package on Ubuntu

When you are testing a new package version it's annoying to have your system management come along and install the old version over the top of your test one. There are a bunch of ways to stop this; the one I tend to use on Ubuntu (with "package" standing in for the real package name) is:
echo "package hold" | sudo dpkg --set-selections
To undo the hold and go back to normal:
echo "package install" | sudo dpkg --set-selections

Thursday, July 14, 2016

Running modern python on Ubuntu LTS

The python version on your Ubuntu LTS may be slightly behind latest, or years behind, depending on where you are in the release cycle. Here's how to run a newer python without interfering with the system one.

Note that setting an install prefix is necessary to avoid making this the default system python (which will break cinnamon-settings apps as well as possibly other things). The prefix I chose puts it in a directory with my username.

Download the latest python source and install it:
sudo apt-get install build-essential libreadline-dev libsqlite3-dev
./configure --enable-ipv6 --enable-unicode=ucs4 --prefix=/usr/local/${USER}/
make
sudo make install
Your new python is now in /usr/local/${USER}/bin/python2.7. To use it, specify it in any virtualenvs you create. Make it an alias so you never forget:
alias virtualenv='virtualenv --python=/usr/local/${USER}/bin/python'
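With the alias in place, creating and using an environment looks like this (the environment path is arbitrary):
virtualenv ~/envs/modern
source ~/envs/modern/bin/activate
python --version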

Tuesday, July 12, 2016

Run a different command on an existing docker container using exec

To run a previously created container with bash, start it as normal and then use exec (this assumes your original container can actually run successfully):
docker start [container id]
docker exec -it [container id] /bin/bash
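If you've forgotten the container id, list all containers including stopped ones:
docker ps -a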

Thursday, July 7, 2016

Creating a Google Cloud service account that can only create new objects in a single bucket

I wanted a service account that can only create new objects in a single bucket, and have those objects be publicly readable by default. Use case is a travis deployer that publishes build artifacts.
  1. Create a service account. Currently this is under "IAM & Admin | Service Accounts" in the Google Cloud UI.
  2. In the IAM screen your new service account is over-privileged; you can remove all privileges from the account here (which causes it to disappear from the IAM list). We will grant it permission over the bucket only.
  3. Create your bucket, then give the world access (you can also use AllUsers in the UI):
    gsutil defacl ch -u AllUsers:R gs://mybucket
    
  4. Give your serviceaccount@projectname.iam.gserviceaccount.com writer access to the bucket (see the sketch after this list). It seems there is no way to limit the permission to create only (the options are read/write/owner).
  5. Test the permissions of your service account:
    gcloud auth activate-service-account --key-file mysecretfile.json serviceaccountname
    gcloud auth list
    # Check your service account is the active account, then try copying to the bucket you authorized, and another bucket which should fail.
    gsutil cp test gs://mybucket
    gsutil cp test gs://someotherbucket
    
  6. You can then set the default object permissions for the bucket via the UI so that new objects are world readable by default.
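For step 4, I believe the gsutil equivalent of clicking through the UI is something like this (untested; the account address and bucket name are placeholders):
gsutil acl ch -u serviceaccount@projectname.iam.gserviceaccount.com:W gs://mybucket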

Tuesday, June 7, 2016

Make test pypi the default pip installer

It's possible to make the testpypi index the default for pip, but still retrieve any dependencies not on testpypi from the production repo. You just need a pip.conf like this:
$ mkdir ~/.pip
$ cat ~/.pip/pip.conf
[global]
index-url = https://testpypi.python.org/simple
extra-index-url = https://pypi.python.org/simple
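pip then consults both indexes when resolving, so installs work even when some dependencies only exist on production pypi ("mypackage" is a placeholder):
pip install mypackage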

Sunday, May 22, 2016

Lowe's OC821 Iris Outdoor Video Camera

Some quick links to help others find information about using the Lowe's OC821 outdoor video camera without paying for the overpriced Lowe's security monitoring system.


Honestly though it looks like this camera was designed to be used via the API from the Iris hub, which I don't want to pay for. I'm going to replace it with something (Dropcam or similar) that doesn't require ongoing fees and has a better phone app.

Wednesday, May 18, 2016

Bash default value environment variable that can be overridden

Often in bash scripts I want to have a constant that is overridable: something I expect people to want to change, but that isn't worth creating command-line options for.

Here's how to do it:
#!/bin/bash

: ${OVERRIDABLE:="thedefault"}

echo ${OVERRIDABLE}
And it works like this:
$ bash ./temp.sh 
thedefault
$ OVERRIDABLE="overridden" bash ./temp.sh 
overridden
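An equivalent form using bash's default-value parameter expansion, if you find the colon builtin too cryptic:
OVERRIDABLE="${OVERRIDABLE:-thedefault}"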

Friday, May 13, 2016

launchd ThrottleInterval

Apple's documentation of the launchd options leaves a lot to be desired. It leaves out important details and is fairly ambiguous about lots of things. Various people are trying to document it themselves, so here's another addition, for ThrottleInterval.

The launchd.plist man page says:

ThrottleInterval: This key lets one override the default throttling policy imposed on jobs by launchd. The value is in seconds, and by default, jobs will not be spawned more than once every 10 seconds. The principle behind this is that jobs should linger around just in case they are needed again in the near future. This not only reduces the latency of responses, but it encourages developers to amortize the cost of program invocation.

What it really means is this:

By default jobs are expected to run for at least 10 seconds. If they run for less than 10 seconds, they will be respawned "10 - runtime" seconds after they die. Exit code is ignored, all that matters is runtime. If a job runs for more than 10 seconds then exits, it will be respawned immediately (assuming all other restart conditions are met).

So instead of just throttling how often a service gets restarted, ThrottleInterval also implies minimum runtime. Which is surprising to more than just me.
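For reference, raising the interval in your job's plist is a one-key change (a fragment only; 30 seconds is an arbitrary example value):
<key>ThrottleInterval</key>
<integer>30</integer>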

You'll see a message like this in the logs if the service dies inside the ThrottleInterval:
com.apple.xpc.launchd[1] (com.apple.mdworker.shared.03000000-0000-0000-0000-000000000000): Service only ran for 3 seconds. Pushing respawn out by 7 seconds.

Thursday, May 12, 2016

Python check compilation flags: CFLAGS

Here's a handy command to print the CFLAGS that python was compiled with:
$ python-config --cflags
-I/usr/local/Library/Taps/homebrew/homebrew-core/PYTHON_ENV/include/python2.7 -I/usr/local/Library/Taps/homebrew/homebrew-core/PYTHON_ENV/include/python2.7 -fno-strict-aliasing -fno-common -dynamic -march=core2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
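There's a companion flag for the linker flags:
$ python-config --ldflags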

Wednesday, May 11, 2016

pip and distutils setup.py cheatsheet

Download source for version 2.1:
pip download somepackage==2.1
Make a source distribution and stick it in another directory:
python setup.py sdist --dist-dir="${HOME}/dist-out"
Build wheels:
python setup.py bdist_wheel
You'll find them under dist/.
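To sanity-check a wheel you just built, install it straight from there:
pip install dist/*.whl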

Friday, May 6, 2016

Building a debian package for PPA distribution

Before you start packaging for PPA you should know two things:
  1. Packages must be built from source only. PPAs do not accept built .debs; only source tarballs are allowed, for security and open-source licensing reasons.
  2. All dependencies must be satisfied from the ubuntu repo or other PPAs. No internet access for build due to security and build repeatability.
Those two things basically made it impossible for me to use a PPA, but here are the notes anyway.
  1. Create a launchpad account.
  2. From your account page you can click "Create a new PPA".
  3. Add your gpg key to your account as explained here. Use this to get the fingerprint:
    gpg --fingerprint
  4. Update your debian/changelog file for the new version by running:
    dch -i
  5. Build your source package. Personally I found creating a Dockerfile was a nice way to do it that avoided having to install all the build dependencies on my own system (see the sketch after this list), although this may not be worth it if you are just going to build the source package. Pbuilder is also an alternative. Inside your Dockerfile you will run this to get an unsigned .changes and .dsc:
    dpkg-buildpackage -S
  6. Sign your .changes with debsign. You only need to specify -e here if your keyname is different to the package maintainer email in your control file.
    debsign -e myemail@mydomain mypackage_3.1.0-2_amd64.changes
  7. Upload to the PPA:
    dput ppa:myusername/testppa mypackage_3.1.0-2_amd64.changes
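For step 5, here's a minimal sketch of the kind of Dockerfile I mean (the base image, names, and versions are placeholders; add your package's build dependencies to the apt-get line):
FROM ubuntu:trusty
RUN apt-get update && apt-get install -y build-essential devscripts debhelper
COPY . /build/mypackage-3.1.0
WORKDIR /build/mypackage-3.1.0
# -us -uc skips signing here; we sign the .changes afterwards with debsign
CMD ["dpkg-buildpackage", "-S", "-us", "-uc"]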

Thursday, May 5, 2016

Changing virtualbox dhcp address range for vagrant VMs

It seems to be impossible to change the default vagrant address range from 10.0.x.0/24 to anything else for virtualbox VMs. This is virtualbox's fault. You can change the address range for individual VMs like this (note you need to shut the VM down first):
$ VBoxManage list vms
"ubuntu-xenial-16.04-cloudimg" {ea0e8dfd-55b9-48c7-b568-e933d0853762}
$ VBoxManage modifyvm "ubuntu-xenial-16.04-cloudimg" --natnet1 "192.168.23.0/24"
But that's just that VM; when you tear it down or create a new one with vagrant it will still come up with 10.0.x.0. You can of course also specify a specific IP in your Vagrantfile, but then you need to do that for every box.

I think you should be able to use this feature to create a new nat network with the new address range, and then tell vagrant to use that network in your Vagrantfile. But I haven't actually tried this because it means everyone using your Vagrantfile has to then have the same setup.
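For the record, I'd expect that incantation to look something like this (untested; the network name and range are arbitrary):
VBoxManage natnetwork add --netname testnet --network "192.168.23.0/24" --enable --dhcp on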

Really what needs to happen is that VirtualBox should have a global setting for their inbuilt NAT that allows you to change the address range.

Wednesday, May 4, 2016

Detecting prelinking breakage

A long time ago RHEL made a bad decision to have prelink enabled by default. This has caused various forms of hard-to-debug heartache for people including me, as a maintainer of a pyinstaller package. The performance gains are dubious, and it causes problems with ASLR and rpm verify (since binaries are modified post-install). Thankfully I believe it is off by default in new versions of Red Hat.

Here's a quick troubleshooting guide to see if prelinking is causing unexpected modification of your binaries.

First check to see if it is enabled:
grep ^PRELINK /etc/sysconfig/prelink
You can also check to see if the binary itself is prelinked using readelf.
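prelink records an undo section in the binary, so something like this should tell you (section name from memory, so treat it as a hint):
readelf -S /usr/lib64/mybin | grep -i prelink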

To disable it system-wide set "PRELINKING=no" in /etc/sysconfig/prelink and run /etc/cron.daily/prelink as root.

The symptoms are that the binary changes size and hash, but passes rpm --verify (since verify knows about prelinking). In my case the error message looked like:
Cannot open self /usr/lib64/mybin or archive /usr/lib64/mybin.pkg

Tuesday, April 26, 2016

Squashing git commits into a single commit for a github pull request

There's lots of advice out there about how to git rebase, but unfortunately almost none of it addresses the case where you have a pull request on a repo with multiple merges to master. In that case you'll have lots of other people's commits in your history, and you can't just do:
git rebase -i HEAD~3
Thankfully github documented the solution. 99% of the time what I want to do is squash all of my commits on a branch relative to the repo master branch. It's super easy once you know:
git rebase -i master
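Since the rebase rewrites your branch history, you'll need a force push to update the pull request ("my-branch" being whatever your PR branch is called):
git push -f origin my-branch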

Friday, April 22, 2016

Systemd learnings: templates for multiple services, target vs. service, service grouping

My notes on best practices for creating systemd units for a package with multiple services.

Packages install their unit files into the /lib/systemd/system/ directory, which I found surprising since there is also /etc/systemd. The /etc/systemd directory is actually used as a way for users to override settings from the original package via "drop-in" unit files.
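For example, a drop-in that overrides a single setting looks like this (the unit name and variable are made up), followed by a daemon-reload so systemd notices it:
$ cat /etc/systemd/system/myservice.service.d/override.conf
[Service]
Environment=DEBUG=1
$ sudo systemctl daemon-reload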

Attaching your service to the multi-user target like this:
WantedBy=multi-user.target
essentially causes a symlink to be created in the relevant .wants directory when you enable the service. There are some instructions floating around the internet where people create these symlinks manually; there's no need to do that, let systemctl do it for you:
$ ls /lib/systemd/system/multi-user.target.wants/
console-setup.service  getty.target           plymouth-quit-wait.service      systemd-logind.service                systemd-user-sessions.service
dbus.service           plymouth-quit.service  systemd-ask-password-wall.path  systemd-update-utmp-runlevel.service
Running multiple copies of a service and grouping services are both much easier than with System V or upstart. Dependency resolution is powerful but a little confusing: e.g. if you look at the getty.target:
$ cat /lib/systemd/system/getty.target
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Login Prompts
Documentation=man:systemd.special(7) man:systemd-getty-generator(8)
Documentation=http://0pointer.de/blog/projects/serial-console.html
it looks like it doesn't do anything. But if you look at what it wants:
$ ls /lib/systemd/system/getty.target.wants/
getty-static.service
there's something there. And it happens to be a good example of using a template to start multiple copies of a service:
$ cat /lib/systemd/system/getty-static.service 
[Unit]
Description=getty on tty2-tty6 if dbus and logind are not available
ConditionPathExists=/dev/tty2
ConditionPathExists=!/lib/systemd/system/dbus.service

[Service]
Type=oneshot
ExecStart=/bin/systemctl --no-block start getty@tty2.service getty@tty3.service getty@tty4.service getty@tty5.service getty@tty6.service
RemainAfterExit=true
but where did that .wants come from? It's the template itself that creates the dependency:
$ cat /lib/systemd/system/getty@.service

...[snip]

[Install]
WantedBy=getty.target
DefaultInstance=tty1
OpenVPN have done a good job of using systemd to reduce complexity and simplify customization of their server init scripts. They have a template:
$ cat /lib/systemd/system/openvpn@.service 
[Unit]
Description=OpenVPN connection to %i
PartOf=openvpn.service
ReloadPropagatedFrom=openvpn.service
Before=systemd-user-sessions.service
Documentation=man:openvpn(8)
Documentation=https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
Documentation=https://community.openvpn.net/openvpn/wiki/HOWTO

[Service]
PrivateTmp=true
KillMode=mixed
Type=forking
ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid
PIDFile=/run/openvpn/%i.pid
ExecReload=/bin/kill -HUP $MAINPID
WorkingDirectory=/etc/openvpn
ProtectSystem=yes
CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_DAC_READ_SEARCH CAP_AUDIT_WRITE
LimitNPROC=10
DeviceAllow=/dev/null rw
DeviceAllow=/dev/net/tun rw

[Install]
WantedBy=multi-user.target
That means you can have multiple openvpn configs and control them as if you had written init scripts for each:
/etc/openvpn/server1.conf
/etc/openvpn/server2.conf

systemctl enable openvpn@server1.service
systemctl enable openvpn@server2.service
systemctl start openvpn@server1.service
systemctl start openvpn@server2.service
And all of those are grouped together using an "openvpn.service", which is referred to in PartOf in the template above, so you can operate on them as a block. The ReloadPropagatedFrom tells systemd to reload the individual units when the parent is reloaded:
$ service openvpn start
$ cat /lib/systemd/system/openvpn.service 
# This service is actually a systemd target,
# but we are using a service since targets cannot be reloaded.

[Unit]
Description=OpenVPN service
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecReload=/bin/true
WorkingDirectory=/etc/openvpn

[Install]
WantedBy=multi-user.target

The comment in that file is interesting. If you have a group of services it seems you are better off creating a service rather than a target, even though at first glance targets seem to have been created for exactly this purpose. My interpretation is that targets are essentially useful for grouping dependencies to determine execution order (i.e. they were primarily created to replace the runlevel system), but you should use a service if you expect users to want to operate on your services as a block.

You'll want to check your systemd file with systemd-analyze:
$ systemd-analyze verify /lib/systemd/system/my-server@.service 
[/lib/systemd/system/my-server@.service:6] Unknown lvalue 'Environment' in section 'Unit'
[/lib/systemd/system/my-server@.service:13] Executable path is not absolute, ignoring: mkdir -p /var/log/myserver;mkdir -p /var/run/myserver/tmp/%i

Monday, April 11, 2016

Publishing a package on PyPI

First, create a ~/.pypirc like this. You don't need to (and shouldn't!) put your cleartext password in here; you will get a prompt when you actually register.
[distutils]
index-servers =
  pypi
  pypitest

[pypi]
repository=https://pypi.python.org/pypi
username=your_username

[pypitest]
repository=https://testpypi.python.org/pypi
username=your_username

Write your setup.py:
from setuptools import setup

setup(
    name="mypackage",
    version="3.1.0",
    description="My description",
    license="Apache License, Version 2.0",
    url="https://github.com/myhomepage",
)
Make sure your version number and other info in your setup.py are correct, then test your package on the test server by registering:
python setup.py register -r pypitest
Then build and upload your actual file content using twine. You can also use setup.py to upload, but this way you get to inspect the build tarball before it gets uploaded:
python setup.py sdist
twine upload -r pypitest dist/*
Check that it looks OK on the test site, and that you can install it:
pip install -i https://testpypi.python.org/pypi mypackage
Then register and upload it on the production pypi server:
python setup.py register -r pypi
twine upload -r pypi dist/*

Friday, April 8, 2016

Externally hosting PyPI python packages to work around the PyPI size limit

PyPI has a limit on the size of the packages it is willing to host. This doesn't seem to be documented anywhere, but I've seen people mention 60MB on forums. Our package contains a bunch of compressed binary data and weighs in at 130MB, so we needed to find another solution. The error you get from twine when uploading is this:
HTTPError: 413 Client Error: Request Entity Too Large for url: https://testpypi.python.org/pypi
But since hosting files on cloud services is now cheap and reliable we can work around the problem, as long as you're willing to have people use a custom pip command. If you point pip at a file with links using -f it will look through those links for a suitable install candidate. So if you create an index like this:
<html><head><title>Simple Index</title><meta name="api-version" value="2" /></head><body>
<a href='mypackage-3.1.0.tar.gz#md5=71525271a5fdbf0c72580ce56194d999'>mypackage-3.1.0</a><br/>
<a href='mypackage-3.1.2.tar.gz#md5=71525271a5fdbf0c72580ce56194daaa'>mypackage-3.1.2</a><br/>
</body></html>
And host it somewhere (like google cloud storage), along with the tarballs you get from running:
python setup.py sdist
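Getting the index and tarballs up there is just a couple of copies, and md5sum gives you the hash for the index links (the bucket name is a placeholder; the objects need to be publicly readable):
md5sum dist/mypackage-3.1.0.tar.gz
gsutil cp dist/mypackage-3.1.0.tar.gz gs://mypackage/
gsutil cp index.html gs://mypackage/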
Then your install command looks like this:
pip install --allow-external mypackage -f https://storage.googleapis.com/mypackage/index.html mypackage

Tuesday, April 5, 2016

Verify SHA256 SSH RSA key fingerprint

As of OpenSSH 6.8 the default is to display base64 encoded SHA256 hashes for SSH host keys, whereas previously it showed MD5 hex digests. While this is a good move for security, it's a PITA to verify host keys now, especially on systems with older OpenSSH.

For systems with modern OpenSSH, you can just ask for the sha256 version:
ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub -E sha256
If you have old ssh, you need to work it out yourself:
awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha256sum -b | awk '{print $1}' | xxd -r -p | base64
(Note that OpenSSH prints the base64 fingerprint without the trailing "=" padding, so ignore the padding when comparing.)
On OS X, same thing but with slightly different options:
awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -D | shasum -a 256 -b | awk '{print $1}' | xxd -r -p | base64
Or if you have access to the server by another means you can get the server to tell you the MD5 fingerprint:
ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub -E md5

Wednesday, March 9, 2016

Troubleshooting Kubernetes and GCE deployment manager

I've been using the GCE deployment manager to create a GCE deployment of a complex server application running multiple docker containers.

Here's a basic command cheatsheet:
gcloud deployment-manager deployments create my-first-deployment --config test_config.yaml
gcloud deployment-manager deployments describe my-first-deployment
Once you're at the point where the actual deployment works you probably need to debug other issues. Use the GUI to ssh into your container host VM. If you only see the /pause container, something is wrong. Running "docker ps -a" should give you a list of containers that have failed to start properly (Kubernetes will just keep retrying):
sudo docker ps -a
You can see the configuration Kubernetes passed to the container at creation time with "inspect". This is useful for debugging configuration problems:
sudo docker inspect [container id]
You can see STDOUT for the container launch with:
sudo docker logs [container id]
One trap I fell into is that the Kubernetes use of Cmd is different to docker :( I had a custom entrypoint in my Dockerfile and called it like this with docker:
docker run mycontainer command
But in Kubernetes config speak, "command" gets translated to docker's entrypoint, and "args" gets translated to docker's cmd. Ugh. So assuming your entrypoint is specified in the Dockerfile, you want to leave that alone and just set the args:
containers:
  - name: mycontainer
    args: ["command"]
    env:
      - name: EXTERNAL_HOSTNAME
        value: localhost
      - name: ADMIN_PASSWORD
        value: demo

When run by Kubernetes it looks something like this:
   "Config": {
        "Hostname": "mydeployment-host",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "EXTERNAL_HOSTNAME=localhost",
            "ADMIN_PASSWORD=demo",
        ],
        "Cmd": [
            "command"
        ],
        "Image": "mydocker:latest",
        "Entrypoint": [
            "/mycustom-entrypoint.sh"
        ],
    }

Friday, March 4, 2016

The horror of signing RPMs that support CentOS 5

If you're reading this you are probably just starting to realise what a shit pile you've stepped in. You just want a signed RPM that's installable on CentOS 5+, right? The one you built worked fine on CentOS 7, but on CentOS 5 you saw something like this:

$ sudo rpm -i package.rpm 
error: package.rpm: Header V4 RSA/SHA1 signature: BAD, key ID 1234567
error: package.rpm cannot be installed
$ rpm --version
RPM version 4.4.2.3

It turns out that CentOS 5 doesn't support V4 signatures, is very picky about whether your public key has subkeys, and none of this is documented outside of an ancient bug and a bunch of angry blog posts and stack overflow questions. If you read all of that you'll get a bunch of conflicting advice, so I'll add another shout into the wind that might help someone in the future.  Here's a working setup:


Signing system is Ubuntu trusty:
$ lsb_release -rd
Description: Ubuntu 14.04.1 LTS
Release: 14.04
$ rpmsign --version
RPM version 4.11.1
$ rpmsign --define "%_gpg_name My GPGName" --define "__gpg_sign_cmd %{__gpg} gpg --force-v3-sigs --digest-algo=sha1 --batch --no-verbose --no-armor --passphrase-fd 3 --no-secmem-warning -u \\\"%{_gpg_name}\\\" -sbo %{__signature_filename} %{__plaintext_filename}" --resign package.rpm
$ rpm -Kv package.rpm
package.rpm:
    Header V3 RSA/SHA1 Signature, key ID 1234567: OK
    Header SHA1 digest: OK (aaaaaaaaaaaaaaabbbbbbbbbbbb)
    V3 RSA/SHA1 Signature, key ID 1234567: OK
    MD5 digest: OK (aaaaaaaabbbbbbbbb)
Note that your signing key can have subkeys when signing (by default gpg creates a subkey), but if you just export your public key with the subkey as normal and attempt to use it for verification it will look like this (V3 sig, but still marked "BAD") on CentOS 5:
$ rpm -Kv new2.rpm 
new2.rpm:
    Header V3 RSA/SHA1 signature: BAD, key ID 1234567
    Header SHA1 digest: OK (aaaaaaaaaaaaaaabbbbbbbbbbbb)
    V3 RSA/SHA1 signature: BAD, key ID 1234567
    MD5 digest: OK (aaaaaaaabbbbbbbbb)
and since gpg doesn't seem to give you a way to export a master without subkeys, on your Ubuntu signing machine you need to delete the subkey and export again:
$ gpg --edit 1234567
gpg> key 1
gpg> delkey
gpg> save
gpg> quit

gpg --export --armor 1234567 > 1234567_master.pub
Then on your Centos 5 system (I was using 5.11):
$ sudo rpm --import 1234567_master.pub
$ rpm -Kv new2.rpm 
new2.rpm:
    Header V3 RSA/SHA1 signature: OK, key ID 1234567
    Header SHA1 digest: OK (aaaaaaaaaaaaaaabbbbbbbbbbbb)
    V3 RSA/SHA1 signature: OK, key ID 1234567
    MD5 digest: OK (aaaaaaaabbbbbbbbb)
Simple, right?

Tuesday, February 23, 2016

Unpack a debian .deb package

When you are building .deb's it's handy to be able to unpack them to check the contents, especially postinst and similar scripts. This command gives you all the package contents:

dpkg-deb -R google-chrome-stable_current_amd64.deb .

The postinst and other package-related scripts will be in the DEBIAN directory:

$ ls DEBIAN/
control  postinst  postrm  prerm
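To just peek at the control metadata without unpacking everything:

$ dpkg-deb -I google-chrome-stable_current_amd64.deb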

Creating a debian package that can run with System V, Upstart, or Systemd

We now have a gaggle of ways daemons can be run on linux, and ubuntu in particular. I want my .deb to be installable on a wide range of ubuntu and debian systems, some of them quite old, so here's my solution.

The general idea is to provide files for all three systems, and pick the right one to use at post-install time as described here, but with the added complication that we need systemd as well (for Ubuntu 15.04 and later, which use systemd by default).

My postinstall file looks like this:

case "$1" in
  configure)
    ${DAEMON} ${DAEMON_ARGS} "--install"

    if [ -x /sbin/initctl ] && /sbin/initctl version | /bin/grep -q upstart; then
      # Early versions of upstart didn't support restarting a service that
      # wasn't already running:
      # https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/430883
      /usr/sbin/service myservice stop 2>/dev/null || true
      /usr/sbin/service myservice start 2>/dev/null
    elif [ -x /bin/systemctl ]; then
      # Systemd
      /bin/systemctl enable myservice
      /bin/systemctl restart myservice
    elif [ -x "/etc/init.d/myservice" ]; then
      update-rc.d myservice defaults >/dev/null
      invoke-rc.d myservice start || exit $?
    fi
  ;;

  abort-upgrade|abort-remove|abort-deconfigure)
  ;;

  *)
    echo "postinst called with unknown argument \`$1'" >&2
    exit 1
  ;;
esac

If you're using debhelper you need to make sure you're using at least version 9.20130504, when systemd support was added. Then, just like you do for Upstart and System V you need to put your systemd unit file in:

debian/mypackage.service

and it will be copied into

lib/systemd/system/mypackage.service

in the package build directory as described here.
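If you're using a dh-style debian/rules, I believe turning on the systemd addon (from the dh-systemd package) is just this, where the indent is a literal tab:

%:
	dh $@ --with systemd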


Friday, February 19, 2016

Storing and using GPG keys on the Yubikey

I wanted to move to using GPG keys for encryption and signing stored on a Yubikey 4. There's a bunch of HOWTOs out there, I'll put a pile of links at the end.

I started out making a bootable Ubuntu USB drive with the intention of generating the master key on there while offline, putting the subkeys on the Yubikey, and only importing the public key of the master onto the laptops I would use for day-to-day sign/decrypt. This way the master secret key is never on an internet connected machine. This approach is described in more detail here.

I basically gave up on trying to make the yubikey talk to gpg correctly on linux and used a mac (you can read the whole saga after this). So I followed Trammell's excellent instructions with the following modifications:
  1. Disconnect from the network.
  2. Follow Trammell's instructions. If you have the Yubikey 4 you can use 4096 bit keys. ykpersonalize didn't work ("no yubikey present"), so I had to install the Yubikey NEO Manager, which for some reason requires a reboot.
  3. Using the GUI export the key a second time into a file that is just the public key.
  4. Copy pub/private exported key and revocation cert onto USB key.
  5. Use "srm -sz" to remove the exported key and cert, leave the exported public key.
  6. Delete the key (public and secret) from the GPG keychain using the GUI. The only copy of the master secret key is now on the USB.
  7. Import the public key using the GUI.
The command:
gpg --card-status
should now show "sec#" as described here, indicating the master secret key isn't present. Now your key is ready to use. I seem to be having similar problems as described here:
https://gpgtools.tenderapp.com/discussions/problems/28634-gpg-agent-stops-working-after-osx-upgrade-to-yosemite
I'll update this post when I know more.

The Linux GPG2 and Yubikey saga


Installing gpg2 (required for yubikey "card" support) turned out to be really painful. Ubuntu ships with gpg 1.4, so I ended up downloading a ton of packages off the gpg ftp server, verifying the signature of each one and doing the configure, make, make install dance. It took ages. Update: I didn't think to look for a gpg2 package, turns out there is one, so this was a big waste of time :)

Then I still had to download and install the yubico tools for interacting with the card. I got ykpersonalize installed, but all the tool ever gave me was this error:
Yubikey core error: no yubikey present
This bug pointed me to the Yubikey NEO manager, which has a PPA! Hooray! Except I couldn't get it to work on trusty, my errors are below. However, I just re-tried in a clean trusty docker container and it seemed to succeed, so I'm not going to file a bug:
ubuntu@ubuntu:~$ sudo apt-get install yubikey-neo-manager
Reading package lists... Done
Building dependency tree      
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
 
The following packages have unmet dependencies:
 yubikey-neo-manager : Depends: libu2f-host0 (>= 0.0) but it is not going to be installed
                       Depends: python-pyside.qtwebkit but it is not installable
                       Recommends: pcscd but it is not installable
E: Unable to correct problems, you have held broken packages.
ubuntu@ubuntu:~$ sudo apt-get install python-pyside.qtwebkit
Reading package lists... Done
Building dependency tree      
Reading state information... Done
Package python-pyside.qtwebkit is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
So at this point I gave up on linux and used a Mac, which was waaay easier.

Once I had the keys on the card, to use them on linux I had to do this dance to stop gnome-keyring from ruining everything. On trusty if you use gpg2 you get this error:
$ gpg2 --card-status
gpg: OpenPGP card not available: No SmartCard daemon
but gpg 1.4 works fine. This appears to be caused by differences in how gpg 1 and 2 are packaged; gpg2 needs more packages to work.

Links to other HOWTOs


Here's a big pile of useful links: