Custom bobber

I’m not usually a big fan of Harleys, which, truth be told, has less to do with style or the name and more to do with cost and maintenance. This bobber is too good not to say something about, though. Head over to the original source and check out their other customs as well…
[image: custom Harley bobber]

I don’t feel like starting a new post, so I’m adding to this one. I’m not sure of the details on these or of a source to give credit to…
[images: assorted custom bobbers, including Triumph builds]

Backup for one-day-old files

This should be verified, as I may have inadvertently introduced a bug while ‘scrubbing’ it. I’m recycling some of my tricks/scripts/configs on the off chance that they are of use to someone besides me. For this one I needed a quick shell script to create a tar.gz backup of anything modified or added in the last day, and cobbled this together to get a rough ‘incremental’ backup capability.

#!/bin/bash
#-- ---------------------------------------------------- --#
#-- Desc..: backup script for any file 1 day old (assumes
#--         this is run in a scheduled job such as cron)
#-- Author: john.lawson@scriobha.im
#-- Date..: 03.18.2015
#-- Notes.: 
#-- ---------------------------------------------------- --#
#-- Configuration and initialization of values ------------#

DATEFORMAT=$(date "+%F_%H-%M-%S")
BACKUPFILENAME=data_${DATEFORMAT}.tar.gz
SOURCEDIR=/var/logs
#-- -------------------------------------------------------#

echo '#---- Begin SiteDataBackup -------'

#-- -print0/--null handles odd filenames; tar -T - avoids xargs
#-- re-invoking tar and overwriting the archive mid-run
find "${SOURCEDIR}/" -path "${SOURCEDIR}/[do not include]" -prune -o \
     -path "${SOURCEDIR}/[do not include2]" -prune -o \
     -newerct '1 day ago' -type f -print0 \
  | tar --null -zcpf "${BACKUPFILENAME}" -T -
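As a sanity check, the prune/archive pattern can be exercised on a throwaway directory. The paths below are made up for the demonstration; only the find/tar plumbing matches the script above:

```shell
# Build a scratch tree with one file to keep and one directory to prune
tmp=$(mktemp -d)
mkdir -p "$tmp/src/excluded"
echo "keep me" > "$tmp/src/keep.txt"
echo "skip me" > "$tmp/src/excluded/skip.txt"

# Same pattern as the backup script: prune the excluded path,
# archive anything with a ctime newer than 1 day ago
find "$tmp/src" -path "$tmp/src/excluded" -prune -o \
     -newerct '1 day ago' -type f -print0 \
  | tar --null -zcpf "$tmp/backup.tar.gz" -T -

# List the archive contents to confirm what was captured
tar -tzf "$tmp/backup.tar.gz"
```

The listing should show keep.txt and nothing from the pruned directory.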

Google Failures

I know it’s childish and petty but I derive a lot of pleasure from the very few and far between occurrences of Google errors. The latest happened this morning when I received this for my searching efforts. I think it must be the perverse pleasure of seeing the mighty giant of perfect propriety and “200 OK” statuses throw a failure that rings my bell. Like I said, childish and petty, but still I chuckle quietly…

[image: Google error page, 2015-10-06]

Vagrant AWS Template

This took me a bit to gather all the details together, so I wanted to document it and maybe provide some usefulness to others. There are a number of plugins, features, and configuration tricks that I’ve found extremely useful over the last several years of working with Vagrant and Chef. Below is a sample Vagrantfile that can be used as a template; hopefully it’s commented well enough to be self-explanatory. You’ll need the following plugins installed:

vagrant plugin install vagrant-berkshelf
vagrant plugin install vagrant-omnibus
vagrant plugin install vagrant-cachier
vagrant plugin install vagrant-vbguest
vagrant plugin install vagrant-hostmanager

Vagrantfile:

#-----------------------------------------------------#
#- Project: vagrant-based
#- Author.: john.lawson@scriobha.im
#- Date...: 2015-03-16
#- Notes..: Base template should be usable with a dual scenario
#-----------------------------------------------------#

#- Defaults
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |c|

    #- Caching section to speed up local dev/usage, don't remove! -#
    if Vagrant.has_plugin?("vagrant-cachier")
        #- Configure scope
        c.cache.scope = :box

        # Enable Apt cache
        c.cache.enable :apt

        # Disable this for actual development use as this will cause issues in file refresh
        #c.cache.enable_nfs = true
    end
    #- END Caching ------------------------------------------ -#

    #- Configure vagrant-hostmanager plugin ----------------- -#
    #- Use this for managing multiple instances, mimics
    #- AWS OpsWorks DNS by hostname functionality
    if Vagrant.has_plugin?("vagrant-hostmanager")
        c.hostmanager.enabled = true
        c.hostmanager.manage_host = true
        c.hostmanager.ignore_private_ip = false
        c.hostmanager.include_offline = true
    end

    #- Define our primary instance as app1
    c.vm.define "app1", primary: true do |app1|

        #- Instance Details
        app1.vm.box = "[name of instance here - this will be the name listed in Virtual Box GUI]"

        #- Pull down an Ubuntu 14.04 base box from Amazon
        app1.vm.box_url = "https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box"

        #- Set the hostname and alias (used in conjunction with the hostmanager plugin)
        app1.vm.hostname = "dev-app1.local.vm"
        app1.hostmanager.aliases = %w(dev-app1.localdomain dev-app1)

        #- Network & Network Shares
        app1.vm.network(:forwarded_port, {:guest=>80, :host=>8001})
        app1.vm.network(:forwarded_port, {:guest=>443, :host=>8443})
        app1.vm.network(:private_network, {:ip=>"199.10.0.2"})
        app1.vm.synced_folder ".", "/vagrant", disabled: true

        #- Setup a mount point for Apache docroot (not in /var/www/) and we'll mount it later with Chef recipe
        app1.vm.synced_folder "~/vm-mounts/project-folder", "/mnt/app-www/", create: true, :nfs=> { :mount_options=> ['rw', 'vers=3', 'tcp', 'fsc'] }

        #- Instance Customizations
        app1.vm.provider :virtualbox do |p|
            p.name = app1.vm.box
            p.customize ["modifyvm", :id, "--memory", "1024"]
            p.customize ["modifyvm", :id, "--cpus", "1"]
        end

        #- Chef Solo Configurations & Details
        app1.berkshelf.enabled = true
        app1.omnibus.chef_version = '11.14.2'
        
        app1.vm.provision :chef_solo do |chef|
            chef.cookbooks_path = ['.']
            chef.add_recipe 'opsworks::_local_init'
            chef.add_recipe 'opsworks::apache2'
            chef.json = {
                #- used locally to mimic auto DNS resolution by hostname done by AWS OpsWorks
                opsworks: {
                    local_dev: true
                },
                # Mimic Hash provided by OpsWorks
                deploy: {
                    webapp: {
                        environment: {
                            #- 10.0.2.2 is most likely your host (this allows you to keep a central DB)
                            main_dbhost: "10.0.2.2",
                            main_dbname: "dbname",
                            totara_dbuser: "dbuser",
                            totara_dbpwd: "dbpwd"
                        }
                    }
                }
            }
        end
    end

    #- Define a secondary instance; could be used as an NFS server, jobs server, or something else. It will not start unless you specifically call it
    c.vm.define "file1", autostart: false do |file1|
        #- Instance Details
        file1.vm.box = "[name of instance here - this will be the name listed in Virtual Box GUI]"

        #- Pull down an Ubuntu 14.04 base box from Amazon
        file1.vm.box_url = "https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box"

        #- Set the hostname and alias (used in conjunction with the hostmanager plugin)
        file1.vm.hostname = "dev-file1.local.vm"

        #- Network & Network Shares
        file1.vm.network(:private_network, {:ip=>"199.10.0.4"})
        file1.vm.synced_folder ".", "/vagrant", disabled: true

        #- Instance Customizations
        file1.vm.provider :virtualbox do |p|
            p.customize ["modifyvm", :id, "--memory", "512"]
            p.customize ["modifyvm", :id, "--cpus", "1"]
        end
        file1.vm.provision :chef_solo do |chef|
            chef.cookbooks_path = ['.']
            chef.add_recipe 'opsworks::_local_init'
            chef.add_recipe 'opsworks::nfs-config'
            chef.json = {
                #- used locally to mimic auto DNS resolution by hostname done by AWS OpsWorks
                opsworks: {
                    local_dev: true
                }
            }
        end
    end
end
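With the plugins installed and this Vagrantfile in place, day-to-day usage looks roughly like the commands below. These are standard Vagrant CLI invocations, not runnable without VirtualBox and Vagrant present, and the box names and recipes referenced above still have to exist on your machine:

```shell
vagrant up                  # brings up app1 only (file1 has autostart: false)
vagrant up file1            # start the secondary instance explicitly
vagrant ssh app1            # shell into the app instance
vagrant provision app1      # re-run Chef Solo after changing recipes or chef.json
vagrant destroy -f          # tear everything down when finished
```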

Great clouds!

Taken on a whim with a cell phone camera (which now, by over 12 megapixels, blows away the first digital camera I ever purchased), but still pretty amazing shots given my limited ability with photos and photo taking. I have a few more that I think will eventually make up an album in the gallery for 2015, which is sorely lacking in, well, anything picture-wise.

[images: cloud photos, 2015-10-10]

CI/Build/Deploy

A friend of mine recently asked a very general question about what tools, tips, and processes I had used at a former employer for our Continuous Integration/Build/Deploy pipeline. After digging through my memory for some of the product names and dusting off the processes we developed, I realized that the entire effort was a lot of fun. I’m very interested to see how much further the concept spreads as a standard practice among development teams and companies in general.

Google Drive Sync for Ubuntu

OK, so I’ve been finding myself using GDrive more and more but missing the sync capability I have with Dropbox, and of course there is no Linux client/integration, because why would anyone want to do that? Don’t you know Linux is used to bypass DRM and hack people and stuff!? OMG! I mean really, the next thing will be cats and dogs living together, mass hysteria!

So, after poking around, grive seemed to be the most flexible option: it allows for actual sync and isn’t tied to a beta (see the InSync beta program) that’s only free while it’s in beta. That’s not really free in my mind, and yes, I’m arguing semantics; it’s my site, I can do that.

Steps:

  1. $ sudo add-apt-repository ppa:nilarimogard/webupd8
  2. $ sudo apt-get update
  3. $ sudo apt-get install grive
  4. $ mkdir /home/[your home directory here]/gDrive
  5. $ cd $HOME/gDrive
  6. $ grive -a
  7. follow the prompt from Google to allow Grive access, then copy the key and paste it as instructed

I personally don’t like having to run things manually, so I also added a cron entry to sync every 15 minutes:

  1. $ crontab -e
  2. */15 * * * * cd /home/[your home directory here]/gDrive && grive
  3. save/exit
  4. Done!
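One caveat with the cron approach: if a sync ever takes longer than 15 minutes, two grive runs can overlap in the same directory. A simple guard, assuming flock from util-linux is available (which it is on stock Ubuntu), is to wrap the cron entry so an overlapping run is skipped:

```shell
# crontab entry: flock -n exits immediately if the previous sync still holds the lock
*/15 * * * * flock -n /tmp/grive.lock -c "cd /home/[your home directory here]/gDrive && grive"
```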