PYTHON TRACEROUTE STYLE TOOL

Recently, while talking with some techies, I was asked to explain how traceroute works. I was quick to describe what results to expect back from the command and how to understand that data, but for the life of me I couldn't recall how TTL works at the foundation.

Later that week I spent some time reading up on it, and click, it all came back.
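The foundation is simple once restated: every router that forwards an IP packet decrements its TTL by one, and whichever router decrements it to zero drops the packet and sends back an ICMP Time Exceeded message, identifying itself in the process. Traceroute exploits this by sending probes with TTL 1, 2, 3, and so on, so each probe expires exactly one hop further along the route. Here is a toy simulation of that mechanic (made-up hop names, no real networking involved):

```python
# toy model: each router along the path decrements the TTL by one; the
# router that takes it to zero drops the packet and reports back via ICMP
def probe(path, ttl):
    for hop in path:
        ttl -= 1
        if ttl == 0:
            return hop          # TTL expired here: this hop identifies itself
    return path[-1]             # probe survived all the way to the destination

# hypothetical three-hop route (made-up names)
path = ['gateway', 'isp-router', 'destination']

# traceroute sends probes with increasing TTLs, so each probe expires
# exactly one hop further along, revealing the route one hop at a time
for ttl in range(1, len(path) + 1):
    print('%2d  %s' % (ttl, probe(path, ttl)))
```

Each iteration prints the next hop in order, which is exactly the hop-by-hop listing traceroute gives you.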

To reinforce this I decided to write some code, using Scapy to make crafting network packets a breeze (shown below). You should be able to get Scapy right from PyPI:

PROCESS ELASTICSEARCH JSON ON THE SHELL

Let's throw security out the window for a moment. Say we store user accounts with clear-text passwords in Elasticsearch; what is the easiest way to use the results in a shell script? We can begin by creating two accounts, one for admin and one for john:

# curl -XPUT localhost:9200/site/people/1?pretty=True -d '
  {"name": "admin", "password": "secret", "admin": "true"}
'
{
  "_index" : "site",
  "_type" : "people",
  "_id" : "1",
  "_version" : 1,
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "created" : true
}
# curl -XPUT localhost:9200/site/people/2?pretty=True -d '
  {"name": "john", "password": "password", "admin": "false"}
'
{
  "_index" : "site",
  "_type" : "people",
  "_id" : "2",
  "_version" : 1,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true
}

Using curl this is very easy to query; we just use the id:
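The document itself comes back wrapped in a `_source` field. As a sketch of the parsing half, assuming the standard Elasticsearch get-by-id response shape (pasted below rather than fetched live), the fields can be pulled out in a few lines of Python before handing the values to a shell script:

```python
import json

# the body returned by: curl localhost:9200/site/people/1
# (shape assumed from the standard Elasticsearch get-by-id API)
response = '''
{
  "_index" : "site",
  "_type" : "people",
  "_id" : "1",
  "found" : true,
  "_source" : {"name": "admin", "password": "secret", "admin": "true"}
}
'''

doc = json.loads(response)
source = doc['_source']          # the original document lives under _source
print('%s:%s' % (source['name'], source['password']))  # admin:secret
```

The same extraction works on the shell with any JSON-aware tool; the point is that only `_source` matters once the document is retrieved.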

TELEGRAF LAPTOP BATTERY PLUGIN

I wanted to expand a little on my previous blog post, Custom Telegraf Plugin, and decided to do a simple battery monitor. The end result looks something like this:

[screenshot]

I decided to read from the file /sys/class/power_supply/BAT0/capacity on my Ubuntu 14.04 machine; this file simply reports the current battery percentage:

# cat /sys/class/power_supply/BAT0/capacity
62

All that is needed is a little Python script to convert this output to JSON; my script outputs like this:
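A minimal sketch of such a converter follows; the field name `capacity` and the script layout are my choices here, not necessarily what the original script used:

```python
#!/usr/bin/env python
import json
import os

CAPACITY_FILE = '/sys/class/power_supply/BAT0/capacity'

def battery_json(capacity_file=CAPACITY_FILE):
    # read the battery percentage and wrap it in a JSON object
    with open(capacity_file) as f:
        capacity = int(f.read().strip())
    return json.dumps({'capacity': capacity})

# only emit output on machines that actually expose a battery
if __name__ == '__main__' and os.path.exists(CAPACITY_FILE):
    print(battery_json())
```

Pointed at the file above, this prints {"capacity": 62}, which is exactly the shape Telegraf's exec-style input can consume.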

CUSTOM TELEGRAF PLUGIN

I just started looking into InfluxDB and Telegraf for collecting data on a Linux machine, then visualizing it with Grafana. I've historically used collectd, statsite, and graphite to accomplish the same sort of task, but wanted to see how some of the newer software compares.

I'm running an Ubuntu 14.04 LTS virtual machine, so feel free to follow along.

I managed to install the packages from the InfluxDB Ubuntu repositories:

$ cat /etc/apt/sources.list.d/influxdb.list
deb https://repos.influxdata.com/ubuntu trusty stable

After adding the repo, and their GPG key, update and install the packages:

CHECK SSL CERTIFICATE'S EXPIRATION

If you ever want to quickly check the expiration date on your HTTPS server's SSL certificate, all you need is OpenSSL; luckily, most Linux and OS X workstations will already have it installed.

openssl s_client -showcerts -connect domain.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -dates

You should get back a nice and tidy response with a notBefore and a notAfter date:

notBefore=Mar 13 00:00:00 2015 GMT
notAfter=Mar 12 23:59:59 2018 GMT
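If you want this in a script rather than eyeballed, the notAfter value can be parsed and compared against the current time. A small sketch, assuming the date format openssl prints above:

```python
from datetime import datetime

def parse_openssl_date(value):
    # openssl prints validity dates like 'Mar 12 23:59:59 2018 GMT'
    return datetime.strptime(value, '%b %d %H:%M:%S %Y %Z')

def days_left(not_after, now=None):
    # days remaining until the certificate's notAfter date
    if now is None:
        now = datetime.utcnow()
    return (parse_openssl_date(not_after) - now).days

print(days_left('Mar 12 23:59:59 2018 GMT', now=datetime(2018, 3, 2)))  # 10
```

Feed it the notAfter line from the openssl pipeline and a negative result means the certificate has already expired.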

POSTGRESQL VACUUM SCRIPT

PostgreSQL does have a built-in autovacuum, but sometimes you just want a small script that can be run through Jenkins to perform the vacuum for you.

I wanted to share a small Python script I wrote that performs a VACUUM VERBOSE ANALYZE on every table within a database.

You will need to get psycopg2 installed from PyPi first:

pip install psycopg2

At which point you should be able to use the below script with the correct environment variables to vacuum your database:
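A sketch of such a script follows. The PG* environment variable names are the libpq conventions, and enumerating tables via pg_stat_user_tables is my assumption about how to list them, not necessarily how the original script did it:

```python
#!/usr/bin/env python
import os

def vacuum_statement(table):
    # build the statement run against each table
    return 'VACUUM VERBOSE ANALYZE "%s";' % table

def vacuum_all():
    import psycopg2  # imported here so the helper above works without it
    conn = psycopg2.connect(
        host=os.environ.get('PGHOST', 'localhost'),
        dbname=os.environ.get('PGDATABASE', 'postgres'),
        user=os.environ.get('PGUSER', 'postgres'),
        password=os.environ.get('PGPASSWORD', ''),
    )
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    cur = conn.cursor()
    cur.execute('SELECT relname FROM pg_stat_user_tables;')
    for (table,) in cur.fetchall():
        print('Vacuuming %s' % table)
        cur.execute(vacuum_statement(table))
    conn.close()

if __name__ == '__main__' and os.environ.get('PGDATABASE'):
    vacuum_all()
```

Note the autocommit line: VACUUM refuses to run inside a transaction, so the connection has to be switched out of psycopg2's default transactional mode first.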

RASPBERRY PI – CLEVERBOT VOICE COMMUNICATION

Using my first-generation Raspberry Pi and a few USB / analog devices, I've been able to create a (rather slow) Cleverbot voice communicator.

The slowdown comes from initializing and listening on the USB microphone, but other than that everything works as expected.

#!/usr/bin/env python

import speech_recognition as sr
import pyttsx
import cleverbot

print 'Initializing, please wait...'

# define our cleverbot
cb = cleverbot.Cleverbot()

# speech recognizer setup
r = sr.Recognizer()

# engine for text to speech
engine = pyttsx.init()
#engine.setProperty('rate', 20)

while True:

    # obtain audio from the microphone
    # this is the bit of code that takes a long time...
    with sr.Microphone() as source:
        print 'Talk to cleverbot!'
        audio = r.listen(source)

    phrase = r.recognize_google(audio)
    print '  me: %s' % phrase
    resp = cb.ask(phrase)
    print '  cleverbot: %s' % resp

    engine.say(resp)
    engine.runAndWait()

JANKY LEGO STOP MOTION

Well, the kids have lost interest in Raspberry Pi Python programming for now, but look who's still at it! The jankiest of Lego stop motions.

Here's the code I tossed together to make the GIF above:

#!/usr/bin/env python2

import os
import time
import shutil
import datetime
import tempfile
import pygame.camera
import pygame.image
import RPi.GPIO as GPIO

save_dir = '/usr/share/nginx/www'

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.cleanup()
GPIO.setup(17, GPIO.IN)

pygame.camera.init()
camera = pygame.camera.Camera('/dev/video0')

def make_picture(filename):
    raw_input('Ready for picture? ')
    camera.start()
    image = camera.get_image()
    pygame.image.save(image, filename)
    camera.stop()

def make_gif(frames=5):
    print 'Making you a gif using %s frames, get ready!' % frames
    time.sleep(0.5)
    tmpdir = tempfile.mkdtemp()
    for i in range(frames):
        print 'Taking picture!'
        make_picture('%s/%s.jpg' % (tmpdir, i))
        time.sleep(3)

    print 'Converting images to gif, please wait...'
    os.system('convert -delay 20 %s/*.jpg %s/animated.gif' % (tmpdir, tmpdir))

    filename = '%s.gif' % datetime.datetime.now().isoformat()
    shutil.move('%s/animated.gif' % tmpdir, '%s/%s' % (save_dir, filename))
    shutil.rmtree(tmpdir)
    print 'Complete!'

try:
    while True:
        if GPIO.input(17):
            make_gif()
finally:
    GPIO.cleanup()

And a picture of the rig: [photo]

GEODJANGO AND TACO BELL

I've been at it again with GeoDjango. This time I pulled data on all Taco Bell locations from a popular social media site, added that data to a Django project, and finally plotted the locations in a view using Google Maps:

[screenshot]

Wow, that is a lot of Taco Bells!

Since this is Django, we are also able to view and edit the data from the admin:

[screenshot]

USING GEODJANGO TO FILTER BY POINTS

Just recently I found myself playing with GeoDjango; I've been using it on both an Ubuntu 14.04 cloud server and a MacBook Pro (OS X El Capitan).

GeoDjango allows us to query by geographic points directly on the data model. We are then able to extend the model and add a custom method to search by zipcode.
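Under the hood a point query is just a distance computation. As a rough plain-Python illustration of what a GeoDjango distance lookup computes (the haversine formula below is the standard great-circle approximation, not GeoDjango's actual implementation, and the points are made up):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two (lat, lon) points in kilometres
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def within_km(points, center, km):
    # keep only the (lat, lon) points within km of center -- the plain-Python
    # analogue of a GeoDjango distance filter on a PointField
    return [p for p in points if haversine_km(p[0], p[1], center[0], center[1]) <= km]
```

GeoDjango pushes this work into the database, which is why it scales to large point sets, but the filtering idea is the same.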

Using the Django shell we can easily check data in our favorite interpreter: