Monday, May 19, 2014

Use Juju "run" to discover charm relationships

Juju run is cool because it lets you execute commands inside an anonymous hook context. So say you were presented with the following topology:
services:
  mediawiki:
    charm: cs:precise/mediawiki-16
    exposed: true
    relations:
      db:
      - mysql
      website:
      - siege
    units:
      mediawiki/0:
        agent-state: started
        agent-version: 1.18.3
        machine: "5"
        open-ports:
        - 80/tcp
        public-address: XXX
  mysql:
    charm: cs:precise/mysql-44
    exposed: false
    relations:
      cluster:
      - mysql
      db:
      - mediawiki
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.18.3
        machine: "3"
        public-address: XXX
  siege:
    charm: cs:~dannf/precise/siege-4
    exposed: false
    relations:
      website:
      - mediawiki
    units:
      siege/0:
        agent-state: started
        agent-version: 1.18.3
        machine: "4"
        public-address: XXX
This is just mediawiki backed by mysql, with siege attached to mediawiki. Say you want to know more about mysql's relationships:
# bash shell
unit='mysql/0'

# list candidate relation names from the charm's hook files; grep matches
# anything with "relation" in the name, hence the stray .py entries below
relations=$(juju run --unit ${unit} 'ls ./hooks/ | grep relation | cut -d- -f1 | uniq')

for rel in $relations
do
  echo "testing for relationships in $rel"
  juju run --unit ${unit} "relation-ids ${rel} --format=json"
done

...

testing for relationships in ceph
[]
testing for relationships in cluster
["cluster:2"]
testing for relationships in db
["db:4"]
testing for relationships in ha
[]
testing for relationships in ha_relations.py
[]
testing for relationships in local
[]
testing for relationships in master
[]
testing for relationships in monitors
[]
testing for relationships in munin
[]
testing for relationships in shared
[]
testing for relationships in shared_db_relations.py
[]
testing for relationships in slave
[]
Cool, there's the DB relationship; now you can probe it.
ppetraki@:cabs-sandbox$ juju run --unit ${unit} "relation-get database ${unit} -r db:4 --format=json"
"mediawiki"

ppetraki@:cabs-sandbox$ juju run --unit ${unit} "relation-get password ${unit} -r db:4 --format=json"
"alaeyomooxaesee"

Wednesday, October 9, 2013

Re-sizing SCSI disks for RAID membership

Yet another reason why SATA sucks: with SCSI we can define how many blocks the device exposes. So say we have a RAID set like so:

root@pops:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdb1[1] sdc1[3] sdd1[2]
      286875904 blocks super 1.2 32K chunks 2 near-copies [4/3] [_UUU]
      bitmap: 2/3 pages [8KB], 65536KB chunk

sde is our new member and it's twice the size of the original members.

root@pops:~# lsscsi
[0:0:0:0]    cd/dvd  ATAPI    iHAS124   C      LL0B  /dev/sr0
[1:0:0:0]    disk    ATA      WDC WD1002FAEX-0 05.0  /dev/sda
[6:0:0:0]    disk    HITACHI  HUC151414CSS600  A330  /dev/sdb
[6:0:1:0]    disk    HITACHI  HUC151414CSS600  A330  /dev/sdc
[7:0:0:0]    disk    HITACHI  HUC151414CSS600  A330  /dev/sdd
[7:0:1:0]    disk    HITACHI  HUS156030VLS600  A5D0  /dev/sde

root@pops:~# sg_readcap /dev/sde
Read Capacity results:
   Last logical block address=586072367 (0x22eec12f), Number of blocks=586072368
   Logical block length=512 bytes
Hence:
   Device size: 300069052416 bytes, 286168.1 MiB, 300.07 GB

root@pops:~# sg_readcap /dev/sdd
Read Capacity results:
   Last logical block address=287140276 (0x111d69b4), Number of blocks=287140277
   Logical block length=512 bytes
Hence:
   Device size: 147015821824 bytes, 140205.2 MiB, 147.02 GB

I always use partitions for RAID sets, just to advertise "hey, someone is probably using this"; it also gives you some headroom in case you start developing bad blocks. For the sake of simplicity, I want to slim down the replacement disk. I don't want those extra blocks available for anyone to add another partition and make the RAID member thrash due to competing IOs, so let's resize it.
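The --count value is just the sibling's block count; rather than retyping it from the sg_readcap output, you can capture it with a little shell (a sketch that parses the output format shown above):

# pull the block count from an existing member, then feed it to sg_format
count=$(sg_readcap /dev/sdd | grep -o 'Number of blocks=[0-9]*' | cut -d= -f2)
echo "$count"    # 287140277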
root@pops:~# sg_format --resize --count=287140277 /dev/sde
    HITACHI   HUS156030VLS600   A5D0   peripheral_type: disk [0x0]
      << supports protection information>>
Mode Sense (block descriptor) data, prior to changes:
  Number of blocks=586072368 [0x22eec130]
  Block size=512 [0x200]
Resize operation seems to have been successful

And to check:
root@pops:~# sg_readcap /dev/sde
Read Capacity results:
   Last logical block address=287140276 (0x111d69b4), Number of blocks=287140277
   Logical block length=512 bytes
Hence:
   Device size: 147015821824 bytes, 140205.2 MiB, 147.02 GB

Sweet! Now to reuse the existing partition map from its siblings and push it into service.
sfdisk -d /dev/sdd > part.txt
sfdisk /dev/sde < part.txt

...
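Before adding the new partition to the array, it's worth a quick sanity check that the copy took. blockdev reports a device's size in 512-byte sectors, so the old and new partitions should agree (a sketch):

# both partitions should report the same sector count
blockdev --getsz /dev/sdd1
blockdev --getsz /dev/sde1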

And we're back in service:
root@pops:~# mdadm /dev/md0 --add /dev/sde1
mdadm: added /dev/sde1

root@pops:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sde1[4] sdb1[1] sdc1[3] sdd1[2]
      286875904 blocks super 1.2 32K chunks 2 near-copies [4/3] [_UUU]
      [>....................]  recovery =  0.2% (423040/143437952) finish=28.1min speed=84608K/sec
      bitmap: 2/3 pages [8KB], 65536KB chunk

unused devices: <none>

Done.

Wednesday, July 10, 2013

Execute complex Python or Ruby code inline within Bash

A lot of times when we write extensive shell code (pipes, file descriptors, etc.) we eventually run into the limitations of the shell and are forced to write an external helper just to keep the whole pipe-ninja thing moving forward. Well, what if I said you didn't have to?

Update: this cannot be used with set -e, because read returns non-zero when it hits end-of-input without seeing the NUL delimiter; append || true to the read if you need it under set -e.

read -r -d '' VAR <<'EOF'
# parse an upstart status line, e.g. "tty2 start/running, process 738"
import sys,re
state = sys.stdin.read().rstrip('\n')
g = re.match(r'^([\w-]+)(\s[\w+\/]+)(, \w+ \d+)?', state).groups()
# if the last field (", process NNN") exists, it's running
if g[-1] is not None:
  print 'running'
else:
  print 'stopped'
EOF

Upstart does lots of things right, except status. I mean, look:


tty2 start/running, process 738
udevtrigger stop/waiting
That's just not right; you need a regular expression just to extract the status. So I created this little snippet. read takes the heredoc and shoves it into VAR, and from then on it's no different from the code having been in a file to begin with.

# initctl list
...

tty3 start/running, process 740
udev-finish stop/waiting
juju-daq-emitter-0 start/running, process 12022
hostname stop/waiting
mountall-reboot stop/waiting
mountall-shell stop/waiting
mounted-tmp stop/waiting



root@ip-10-147-220-141:~# initctl status tty3 | python -c "$VAR"
running
root@ip-10-147-220-141:~# initctl status udev-finish | python -c "$VAR"
stopped
root@ip-10-147-220-141:~# initctl status hostname | python -c "$VAR"
stopped
Much better.
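The title promised Ruby too; the same trick works with ruby -e (a sketch; the regex mirrors the Python one above):

read -r -d '' RVAR <<'EOF'
# same idea in Ruby: a ", process NNN" suffix means the job is running
state = STDIN.read
puts state =~ /, \w+ \d+/ ? 'running' : 'stopped'
EOF

initctl status tty3 | ruby -e "$RVAR"   # => running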

Saturday, June 22, 2013

Is the phone company ripping you off? An analysis of unlimited plans vs metered monthly

I have one of the older Vonage yearly contracts that does unlimited everything, plus free international calling to select countries, for the low low price of $269/year. Now that we're buying a house, I thought it wise to analyse our expenses, and of course I wrote a program to do it.
$ git clone https://github.com/ppetraki/household-misc.git

$ household-misc/vonage_outgoing_calls.py $(ls Vonage/*.csv)



billing          minutes  750_plan  300_plan
================================================================================
Vonage/callActivity01182013.csv 452 -11.622 + 19.99 = 19.99 7.6 + 11.99 = 19.59
Vonage/callActivity02182013.csv 544 -8.034 + 19.99 = 19.99 12.2 + 11.99 = 24.19
Vonage/callActivity03182013.csv 450 -11.7 + 19.99 = 19.99 7.5 + 11.99 = 19.49
Vonage/callActivity04182013.csv 481 -10.491 + 19.99 = 19.99 9.05 + 11.99 = 21.04
Vonage/callActivity05182013.csv 280 -18.33 + 19.99 = 19.99 -1.0 + 11.99 = 11.99
Vonage/callActivity06182012.csv 359 -15.249 + 19.99 = 19.99 2.95 + 11.99 = 14.94
Vonage/callActivity06182013.csv 457 -11.427 + 19.99 = 19.99 7.85 + 11.99 = 19.84
Vonage/callActivity06222013.csv 3 -29.133 + 19.99 = 19.99 -14.85 + 11.99 = 11.99
Vonage/callActivity07182012.csv 124 -24.414 + 19.99 = 19.99 -8.8 + 11.99 = 11.99
Vonage/callActivity08182012.csv 158 -23.088 + 19.99 = 19.99 -7.1 + 11.99 = 11.99
Vonage/callActivity09182012.csv 265 -18.915 + 19.99 = 19.99 -1.75 + 11.99 = 11.99
Vonage/callActivity10182012.csv 234 -20.124 + 19.99 = 19.99 -3.3 + 11.99 = 11.99
Vonage/callActivity11182012.csv 625 -4.875 + 19.99 = 19.99 16.25 + 11.99 = 28.24


total world $269
total 300 $219.27
total 750 $259.87
I admit, metered plans can be scary, but Vonage only charges for outgoing calls. This program accounts for that and shows how much you would pay per month: base + minutes used. While there were some high months, I never used more outgoing minutes than what the 750-minute plan offers. If they rolled over unused minutes, the 300-minute plan would save almost $100/year; as it is, you save only $50. Better than nothing.
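The per-month arithmetic behind each row is simple: base plus any overage. Here it is as a shell sketch; the 5 cents/minute rate and $11.99 base for the 300 plan are inferred from the table above, not published rates:

# monthly cost = base + overage above the included minutes (300 plan)
base=11.99; included=300; rate=0.05; minutes=452
awk -v b="$base" -v i="$included" -v r="$rate" -v m="$minutes" \
    'BEGIN { o = (m > i) ? (m - i) * r : 0; printf "%.2f\n", b + o }'
# => 19.59, matching the first row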

Tuesday, June 18, 2013

Automating and encrypting duplicity backups using cron

Background

Having suffered data loss in the past, and having hacked on storage, I've learned it's a good idea to keep regular backups. I wanted redundancy in case my local server failed, and I wanted to encrypt my backups using a password-protected gpg key. The current solution uses a passphrase kept in plain text outside of the backup path. I plan to investigate moving the gpg key to a smartcard and using a PIN to unlock it instead. If anyone has other solutions, please describe them in detail.

Persisting requisite environmental variables

Running anything from cron detaches it from your current environment: you lose all of the variables describing things like your ssh-agent and gpg-agent, stuff you need to even begin to communicate with the remote server. I took a simple approach; in my ~/.bashrc I created the following.
cat > ~/.backenvrc << EOF
# used by crontab backup script
export SSH_AGENT_PID=$SSH_AGENT_PID
export SSH_AUTH_SOCK=$SSH_AUTH_SOCK
export GPG_AGENT_INFO=$GPG_AGENT_INFO
export GPGKEY=XXX-insert-your-gpg-key-here-XXX
EOF
I simply source this file from the backup script referenced in my crontab; I need only log in once to populate it.

Setting up the Crontab

# crontab -l
# m h  dom mon dow   command
MAILTO=ppetraki@localhost
BACKUP=/home/ppetraki/Documents/System/Backup
#
0 0  * * *      /usr/bin/crontab -l  > $BACKUP/crontab-backup
0 0  * * *      /usr/bin/dpkg --get-selections > $BACKUP/installed-software
0 0  * * *      /usr/local/bin/ppetraki-backup.sh inc
0 0  * * Fri    /usr/local/bin/ppetraki-backup.sh full
Note that I am also backing up my crontab and my list of installed software. Eventually I will move this into another script (sketched just after this list) that also does things like:
  1. backup my bookmarks from chrome and firefox
  2. backup mail in a non-binary format
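That consolidated script might start out like this (a sketch; the bookmark and mail steps are left as stubs):

#!/bin/bash
# [sketch: consolidate the nightly backup tasks from the crontab above]
BACKUP=/home/ppetraki/Documents/System/Backup

/usr/bin/crontab -l            > "$BACKUP/crontab-backup"
/usr/bin/dpkg --get-selections > "$BACKUP/installed-software"

# TODO: export chrome/firefox bookmarks and mail in a non-binary format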
The current cron format performs an incremental backup every night and a full backup every Friday.

Driver script

This wraps the invocation of duplicity and acquires the necessary environmental variables. Duplicity itself can be hairy with all its command line switches, and it's even more of a burden if you have multiple targets. I have redundant backups: first to a local server, then to a remote service provided by rsync.net (great customer support!). I found horcrux to be a wonderful, lightweight duplicity wrapper that suits my needs. The driver script, which is external to my backup path, also contains my GPG passphrase to encrypt my backups. Eventually I wish to move to the smartcard-driven system illustrated here.
#!/bin/bash
# [/usr/local/bin/ppetraki-backup.sh]

export PATH=$PATH:/usr/local/bin
action=$1

export USER=XXX
export HOME=/home/$USER

source $HOME/.backenvrc

echo "verifying environment"
echo "gpg-agent: ${GPG_AGENT_INFO}"
echo "gpg-key:   ${GPGKEY}"
echo "ssh-agent-pid:   ${SSH_AGENT_PID}"
echo "ssh-auth-sock:   ${SSH_AUTH_SOCK}"

if [ -z "$action" ]; then
  echo "requires an action!"
  exit 1
fi

# redacted; set your GPG passphrase here
export PASSPHRASE=

[ -z "$PASSPHRASE" ] && exit 1

echo "begin"

for config in local_backup remote_backup
do
  horcrux clean   $config
  horcrux $action $config
done

Using horcrux to wrangle duplicity

Horcrux has a notion of profiles that takes all the complexity out of managing the duplicity CLI. Here's an example of a profile:
cat /home/ppetraki/.horcrux/local_backup-config
destination_path="rsync://192.168.1.XXX/backups/personal"

cat ~/.horcrux/local_backup-exclude
- /home/ppetraki/Sandbox
- /home/ppetraki/Bugs
- /home/ppetraki/Downloads
- /home/ppetraki/Videos
- /home/ppetraki/.xsession-errors
- /home/ppetraki/.thumbnails
- /home/ppetraki/.local
- /home/ppetraki/.gvfs
- /home/ppetraki/.systemtap
- /home/ppetraki/.adobe/Flash_Player/AssetCache
- /home/ppetraki/.thunderbird
- /home/ppetraki/.mozilla
- /home/ppetraki/.config/google-googletalkplugin
- /home/ppetraki/.config/google-chrome
- /home/ppetraki/.cache
- /home/ppetraki/**[cC]ache*
I found it problematic to back up only subdirectories of things like mozilla and google-chrome; instead I will write an additional script to cherry-pick those files for backup. The main horcrux config file:
cat ~/.horcrux/horcrux.conf 
source="/home/ppetraki/"          # Ensure trailing slash
encrypt_key=XXXXXX     # Public key ID to encrypt backups with
sign_key='-'             # Key ID to sign backups with (leave as '-' for no signing)

use_agent=false          # Use gpg-agent?
remove_n=3               # Number of full filesets to remove
verbosity=5              # Logs all the file changes (see duplicity man page)
vol_size=25              # Split the backup into 25MB volumes
full_if_old=30D          # Cause 'full' operation to perform a full
                         # backup if older than 30 days
backup_basename='backup' # Directory name for local backups (i.e., destination
                         # /Volumes/my_drive/backup/ or /media/my_drive/backup/)
dup_params='--use-agent' # Parameters to pass to Duplicity
This is great as it reduces a backup invocation to this:
 $ horcrux inc local_backup 

Monitoring

I defined MAILTO in my crontab, installed mutt, and reconfigured postfix for local mail delivery. Every night I get a progress report on how the backups ran.

Conclusion

I've spent quite a bit of time determining how to automate this and provide strong encryption. If you have a more secure way to encrypt the backups, I would be happy to hear it.

Thursday, June 13, 2013

Determine machine size in juju programmatically

I wouldn't call this easy, but it works, and it keeps me out of python.
# on hpcloud

$ juju status apache2 2>/dev/null | grep instance-id |
  awk '{split($0,array,":")} END{print array[2]}' | tr -d ' ' |
  xargs nova show | grep flavor |
  awk '{split($0,array,"|")} END{print array[3]}' | tr -d ' '

standard.xsmall
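If you need this more than once, the pipeline folds neatly into a function. A sketch, assuming the same juju 1.x and nova output formats shown above:

# sketch: reusable wrapper around the pipeline above
machine_flavor() {
  local service=$1 instance
  instance=$(juju status "$service" 2>/dev/null |
    awk -F': ' '/instance-id/ {print $2; exit}')
  nova show "$instance" | awk -F'|' '/flavor/ {gsub(/ /, "", $3); print $3}'
}

machine_flavor apache2   # => standard.xsmall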

Wednesday, June 5, 2013

Install git binaries without package management and fix missing helpers

Talk about flexibility. Suppose you need git, but don't have package management or a compiler. What's a SWE to do? Use git's --exec-path feature, which changes the search path for its helpers, allowing you to install it pretty much anywhere.
$ mkdir ~/opt && cd ~/opt

$ wget http://archive.ubuntu.com/ubuntu/pool/main/g/git-core/git-core_1.7.0.4-1ubuntu0.2_amd64.deb

$ dpkg -x git-core_1.7.0.4-1ubuntu0.2_amd64.deb .

$ export GIT_EXEC_PATH=/home/ppetraki/opt/usr/lib/git-core

$ cd ~/Sandbox

$ ~/opt/usr/bin/git clone git://kernel.ubuntu.com/ubuntu/ubuntu-quantal.git
warning: templates not found /usr/share/git-core/templates
Initialized empty Git repository in /home/ppetraki/Sandbox/ubuntu-quantal/.git/
remote: Counting objects: 2721745, done.
remote: Compressing objects: 100% (419165/419165), done.
Receiving objects:   2% (78914/2721745), 21.36 MiB | 89 KiB/s 
Sweet! This feature also addresses the "git-XXXX helper not found" class of errors.
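The templates warning above should be fixable the same way: git honors GIT_TEMPLATE_DIR, so point it at the extracted tree (a sketch; the path assumes the deb unpacks its templates under usr/share):

# silence "warning: templates not found" using the extracted templates
export GIT_TEMPLATE_DIR=/home/ppetraki/opt/usr/share/git-core/templates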