Shell Script for Apt-Get Security Based Updates

I’ve been slowly working this script into a cheap way of notifying me of security updates available for my Linux instances (Ubuntu specifically) instead of buying a product or a managed solution. It’s not fancy, flashy, API-driven, cloud-hosted, OAuth-authenticating, or any other buzzword. It does work, though…

#!/bin/bash

#-------------------------------------------------------------------------------------------------#
#- Name....: checkSecurityupdates.sh
#- Notes...:
#-------------------------------------------------------------------------------------------------#

# create a fresh security-only sources file each run (assumes the script runs as root)
grep "-security" /etc/apt/sources.list | grep -v "#" > /etc/apt/security.sources.list
echo "created security specific source list"


# Build the list of pending security updates (answer 'n' so nothing is actually installed)
echo 'n' | apt-get upgrade -o Dir::Etc::SourceList=/etc/apt/security.sources.list > /root/securities-to-update.txt
echo "created list of security updates"



# What's the mimetype
get_mimetype(){
  # warning: assumes that the passed file exists
  file --mime-type "$1" | sed 's/.*: //'
}


# some variables

from="SecUpdates-Report@example.com"
to="monitor-this-mailbox@example.com"
subject=`hostname`
boundary="ZZ_/afg6432dfgkl.94531q"
body="Please see attached"
declare -a attachments
attachments=( "/root/securities-to-update.txt" )

# Build headers
{

printf '%s\n' "From: $from
To: $to
Subject: $subject
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary=\"$boundary\"

--${boundary}
Content-Type: text/plain; charset=\"US-ASCII\"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

$body
"

# now loop over the attachments, guess the type
# and produce the corresponding part, encoded base64
for file in "${attachments[@]}"; do

  [ ! -f "$file" ] && echo "Warning: attachment $file not found, skipping" >&2 && continue

  mimetype=$(get_mimetype "$file")

  printf '%s\n' "--${boundary}
Content-Type: $mimetype
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=\"$(basename "$file")\"
"

  base64 "$file"
  echo
done

# print last boundary with closing --
printf '%s\n' "--${boundary}--"

} | sendmail -t -oi   
echo "sent security updates list"



# cleanup security files
rm /etc/apt/security.sources.list
rm /root/securities-to-update.txt
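
If you want this to run unattended, a root crontab entry along these lines does the trick (just a sketch; the path and schedule are assumptions, adjust for wherever you keep the script):

# run the security update check every morning at 06:00 (illustrative path/schedule)
0 6 * * * /root/checkSecurityupdates.sh >/dev/null 2>&1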

Backup for One-Day-Old Files

This should be verified, as I may have inadvertently introduced a bug while ‘scrubbing’ it. I’m recycling some of my tricks/scripts/configs on the off chance they’re useful to someone besides me. For this one I needed a quick shell script to create a tar.gz backup of anything modified or added in the last day, and I cobbled this together as a rough form of incremental backup.

#!/bin/bash
#-- ---------------------------------------------------- --#
#-- Desc..: backup script for any file 1 day old (assumes
#--         this is run in a scheduled job such as cron)
#-- Author: john.lawson@scriobha.im
#-- Date..: 03.18.2015
#-- Notes.: 
#-- ---------------------------------------------------- --#
#-- Configuration and initialization of values ------------#

DATEFORMAT=$(date "+%F_%H-%M-%S")
BACKUPFILENAME=data_${DATEFORMAT}.tar.gz
SOURCEDIR=/var/logs
#-- -------------------------------------------------------#

echo '#---- Begin SiteDataBackup -------'

find ${SOURCEDIR}/ -path "${SOURCEDIR}/[do not include]" -prune -o -path "${SOURCEDIR}/[do not include2]" -prune -o -newerct '1 day ago' -type f -print0 | tar --null -zcpf ${BACKUPFILENAME} -T -
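
To sanity-check a run, listing the archive contents is usually enough (a quick sketch; the file name will match whatever ${BACKUPFILENAME} expanded to):

# list what actually landed in the incremental archive
tar -tzvf data_[timestamp].tar.gz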

Vagrant AWS Template

This took me a bit to gather all the details together, so I wanted to document it and maybe provide some usefulness to others. There are a number of plugins, features, and configuration tricks that I’ve found extremely useful over the last several years of working with Vagrant and Chef. Below is a sample Vagrantfile that can be used as a template and is hopefully commented well enough to be self-explanatory. You’ll need the following plugins installed:

vagrant plugin install vagrant-berkshelf
vagrant plugin install vagrant-omnibus
vagrant plugin install vagrant-cachier
vagrant plugin install vagrant-vbguest
vagrant plugin install vagrant-hostmanager

Vagrantfile:

#-----------------------------------------------------#
#- Project: vagrant-based
#- Author.: john.lawson@scriobha.im
#- Date...: 2015-03-16
#- Notes..: Base template should be usable with a dual scenario
#-----------------------------------------------------#

#- Defaults
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |c|

    #- Caching section to speed up local dev/usage, don't remove! -#
    if Vagrant.has_plugin?("vagrant-cachier")
        #- Configure scope
        c.cache.scope = :box

        # Enable Apt cache
        c.cache.enable :apt

        # Disable this for actual development use as this will cause issues in file refresh
        #c.cache.enable_nfs = true
    end
    #- END Caching ------------------------------------------ -#

    #- Configure vagrant-hostmanager plugin ----------------- -#
    #- Use this for managing multiple instances, mimics
    #- AWS OpsWorks DNS by hostname functionality
    if Vagrant.has_plugin?("vagrant-hostmanager")
        c.hostmanager.enabled = true
        c.hostmanager.manage_host = true
        c.hostmanager.ignore_private_ip = false
        c.hostmanager.include_offline = true
    end

    #- Define our primary instance as app1
    c.vm.define "app1", primary: true do |app1|

        #- Instance Details
        app1.vm.box = "[name of instance here - this will be the name listed in Virtual Box GUI]"

        #- Pull down an Ubuntu 14.04 base box from Amazon
        app1.vm.box_url = "https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box"

        #- Set the hostname and alias (used in conjunction with the host manager plugin)
        app1.vm.hostname = "dev-app1.local.vm"
        app1.hostmanager.aliases = %w(dev-app1.localdomain dev-app1)

        #- Network & Network Shares
        app1.vm.network(:forwarded_port, {:guest=>80, :host=>8001})
        app1.vm.network(:forwarded_port, {:guest=>443, :host=>8443})
        app1.vm.network(:private_network, {:ip=>"199.10.0.2"})
        app1.vm.synced_folder ".", "/vagrant", disabled: true

        #- Setup a mount point for Apache docroot (not in /var/www/) and we'll mount it later with Chef recipe
        app1.vm.synced_folder "~/vm-mounts/project-folder", "/mnt/app-www/", create: true, :nfs=> { :mount_options=> ['rw', 'vers=3', 'tcp', 'fsc'] }

        #- Instance Customizations
        app1.vm.provider :virtualbox do |p|
            p.name = app1.vm.box
            p.customize ["modifyvm", :id, "--memory", "1024"]
            p.customize ["modifyvm", :id, "--cpus", "1"]
        end

        #- Chef Solo Configurations & Details
        app1.berkshelf.enabled = true
        app1.omnibus.chef_version = '11.14.2'
        
        app1.vm.provision :chef_solo do |chef|
            chef.cookbooks_path = ['.']
            chef.add_recipe 'recipe[opsworks::_local_init]'
            chef.add_recipe 'recipe[opsworks::apache2]'
            chef.json = {
                #- used locally to mimic auto DNS resolution by hostname done by AWS OpsWorks
                opsworks: {
                    local_dev: true
                },
                # Mimic Hash provided by OpsWorks
                deploy: {
                    webapp: {
                        environment: {
                            #- 10.0.2.2 is most likely your host (this allows you to keep a central DB)
                            main_dbhost: "10.0.2.2",
                            main_dbname: "dbname",
                            totara_dbuser: "dbuser",
                            totara_dbpwd: "dbpwd"
                        }
                    }
                }
            }
        end
    end

    #- Define a secondary instance; could be used as an NFS server, jobs server, etc. It will not start unless you specifically call it
    c.vm.define "file1", autostart: false do |file1|
        #- Instance Details
        file1.vm.box = "[name of instance here - this will be the name listed in Virtual Box GUI]"

        #- Pull down an Ubuntu 14.04 base box from Amazon
        file1.vm.box_url = "https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box"

        #- Set the hostname (used in conjunction with the host manager plugin)
        file1.vm.hostname = "dev-file1.local.vm"

        #- Network & Network Shares
        file1.vm.network(:private_network, {:ip=>"199.10.0.4"})
        file1.vm.synced_folder ".", "/vagrant", disabled: true

        #- Instance Customizations
        file1.vm.provider :virtualbox do |p|
            p.customize ["modifyvm", :id, "--memory", "512"]
            p.customize ["modifyvm", :id, "--cpus", "1"]
        end
        file1.vm.provision :chef_solo do |chef|
            chef.cookbooks_path = ['.']
            chef.add_recipe 'recipe[opsworks::_local_init]'
            chef.add_recipe 'recipe[opsworks::nfs-config]'
            chef.json = {
                #- used locally to mimic auto DNS resolution by hostname done by AWS OpsWorks
                opsworks: {
                    local_dev: true
                }
            }
        end
    end
end
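
Bringing the environment up is the usual Vagrant workflow; the secondary box only starts when you ask for it by name (a quick sketch, run from the directory containing the Vagrantfile):

vagrant up            # starts app1, the primary/autostart instance
vagrant up file1      # explicitly brings up the secondary file1 instance
vagrant ssh app1      # shell into the app box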

Google Drive Sync for Ubuntu

OK, so I’ve found myself using Google Drive more and more but missing the sync capability I get with Dropbox, and of course there is no Linux client/integration, because why would anyone want that? Don’t you know Linux is only used to bypass DRM and hack people and stuff!? OMG! I mean really, the next thing will be cats and dogs living together, mass hysteria!

So, after poking around, grive seemed to be the most flexible option: it allows for actual sync and isn’t tied to a beta program that’s only free while it stays in beta (see the InSync beta program). That’s not really free in my mind, and yes, I’m arguing semantics; it’s my site, I can do that.

Steps:

  1. $ sudo add-apt-repository ppa:nilarimogard/webupd8
  2. $ sudo apt-get update
  3. $ sudo apt-get install grive
  4. $ mkdir /home/[your home directory here]/gDrive
  5. $ cd $HOME/gDrive
  6. $ grive -a
  7. follow the prompt from Google to allow grive access, then copy the key and paste it back in the terminal as instructed

I personally don’t like having to run things manually, so I also added a cron entry to sync every 15 minutes:

  1. $ crontab -e
  2. */15 * * * * cd /home/[your home directory here]/gDrive && grive
  3. save/exit
  4. Done!

Froyo 2.2 for Motorola Droid Update

I’ve been watching all the hype about Verizon updating its line of MotoDroid phones and getting annoyed about having to wait. It’s officially rolling out now, but they are projecting Aug. 18th to complete the roll-out. It’s an open phone, why can’t I update it myself? Well, I didn’t like the prospect of “bricking” it and having to shell out money for a new one. That said, I stumbled on this post from Phandroid and couldn’t help myself. I became a Froyo user in just over 10 minutes. Easy, easy, easy!

Create a perfect ISO

There are a million posts out there covering how to make an ISO image of a CD/DVD. For some reason I was having a particularly hard time with one specific disc: it had long file names and funny folder structures, and I kept getting all-uppercase file names. I tried the stock dd if=/dev/cdrom of=~/[file-name].iso first and found it just wasn’t cutting it. The command below seemed to work best across multiple discs and formats.

mkisofs -r -J -l -d -allow-multidot -allow-leading-dots -joliet-long -no-bak -o ~/[name-of-file-or-disk].iso /media/[source-folder]
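
To confirm the long and mixed-case file names actually survived, a quick loop mount of the result works (a sketch; the mount point is just an example):

sudo mkdir -p /mnt/iso-check
sudo mount -o loop ~/[name-of-file-or-disk].iso /mnt/iso-check
ls -R /mnt/iso-check | less
sudo umount /mnt/iso-check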

Xmind

First off, XMind is a mind-mapping program that I started using on a Mac, moved to as an Eclipse 3.4 plugin, and have since downloaded and installed on every computer I have, including my Linux machines at home. That said, there is a bit of an oddity when running it under Linux: I couldn’t find any information on how to run the executable from outside of the unzipped folder. I have since found a fix and wanted to share it for anyone else who has hit this issue. Below is what I have in place on a Fedora 9 install with Sun JDK 1.5.0…

  1. Download the XMind Portable from their site
  2. Unzip to /usr/local/xmind-portable/
  3. Rename XMind for Linux to XMind-Linux
  4. Open the config.ini file from XMind-Linux folder
  5. Edit any paths to reflect your location of /usr/local/xmind-portable/XMind-Linux/
  6. Copy all of the values from this file and join them into a single line
  7. Create your shortcut to /usr/local/xmind-portable/XMind-Linux/xmind and append the single line from step 6 (see the sketch after this list)
  8. Done!
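
For the shortcut in step 7, a desktop launcher along these lines is what I mean (a minimal sketch; [arguments from step 6] stands in for whatever your config.ini contained, and the paths assume the layout above):

[Desktop Entry]
Type=Application
Name=XMind
Exec=/usr/local/xmind-portable/XMind-Linux/xmind [arguments from step 6]
Terminal=false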