GeoDjango and Taco Bell

I’ve been at it again with GeoDjango. This time I pulled data on all Taco Bell locations from a popular social media site, added that data to a Django project, and finally plotted them in a view using Google Maps:

[Screenshot: Taco Bell locations plotted on a Google Map]

Wow, that is a lot of Taco Bells!

Since this is Django, we are also able to view and edit from the admin:

[Screenshot: Taco Bell locations in the Django admin]

As well as the shell:

[Screenshot: querying Taco Bell locations from the Django shell]

django-cities was used to tie it all together, which allows me to run searches such as how many Taco Bells the most populous cities have:

In [1]: from cities.models import City

In [2]: from hub.models import Location

In [3]: for city in City.objects.order_by('-population')[:20]:
   ...:     locations = Location.objects.filter(cities_city=city)
   ...:     print '%s with population %s has %s Taco Bells' % (, city.population, len(locations))
New York City with population 8175133 has 0 Taco Bells
Los Angeles with population 3792621 has 34 Taco Bells
Chicago with population 2695598 has 15 Taco Bells
Brooklyn with population 2300664 has 5 Taco Bells
Borough of Queens with population 2272771 has 0 Taco Bells
Houston with population 2099451 has 54 Taco Bells
Philadelphia with population 1526006 has 13 Taco Bells
Manhattan with population 1487536 has 1 Taco Bells
Phoenix with population 1445632 has 34 Taco Bells
The Bronx with population 1385108 has 0 Taco Bells
San Antonio with population 1327407 has 22 Taco Bells
San Diego with population 1307402 has 22 Taco Bells
Dallas with population 1197816 has 27 Taco Bells
San Jose with population 945942 has 18 Taco Bells
Indianapolis with population 829718 has 30 Taco Bells
Jacksonville with population 821784 has 18 Taco Bells
San Francisco with population 805235 has 11 Taco Bells
Austin with population 790390 has 25 Taco Bells
Columbus with population 787033 has 23 Taco Bells
Fort Worth with population 741206 has 16 Taco Bells

Or how many Taco Bells each state in the United States has:

In [1]: from cities.models import Region

In [2]: from hub.models import Location

In [3]: for region in Region.objects.all()[:10]:
   ...:     locations = Location.objects.filter(cities_state=region)
   ...:     print '%s has %s Taco Bells' % (, len(locations))
Arkansas has 58 Taco Bells
Washington, D.C. has 5 Taco Bells
Delaware has 14 Taco Bells
Florida has 363 Taco Bells
Georgia has 193 Taco Bells
Kansas has 65 Taco Bells
Louisiana has 92 Taco Bells
Maryland has 91 Taco Bells
Missouri has 157 Taco Bells
Mississippi has 48 Taco Bells


Using GeoDjango to filter by Points

Just recently I found myself playing with GeoDjango; I’ve been using it on both an Ubuntu 14.04 cloud server and a MacBook Pro (OS X El Capitan).

GeoDjango allows us to query by geographic points directly on the data model.
We are then able to extend the model and add a custom method to search by zipcode.

Using the Django shell we can easily check data in our favorite interpreter:

$ ./ shell

In [1]: from hub.models import Vendor

In [2]: Vendor.get_vendors(zipcode='78664', miles=5)
Out[2]: [<Vendor: Starbucks>]

In [3]: Vendor.get_vendors(zipcode='78664', miles=10)
Out[3]: [<Vendor: Starbucks>, <Vendor: Starbucks>, 
<Vendor: Starbucks>, <Vendor: Starbucks>, 
<Vendor: Starbucks>, <Vendor: Starbucks>, <Vendor: Starbucks>]

It’s then pretty easy to take that data and present it on a Google Map
(using the Django application’s views and templates):

[Screenshot: vendor locations plotted on a Google Map]

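For reference, a minimal version of the view behind a page like that might look like the sketch below (vendor_map and hub/map.html are placeholder names of my own; the template would feed the points to the Google Maps JavaScript API):

# hub/ -- a sketch, not the exact view used in this post
from django.shortcuts import render

from hub.models import Vendor

def vendor_map(request):
    # hand the template simple name/lat/lng dicts for the map markers
    vendors = Vendor.objects.exclude(location__isnull=True)
    points = [{'name':, 'lat': v.latitude, 'lng': v.longitude}
              for v in vendors]
    return render(request, 'hub/map.html', {'points': points})
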
If you find any of this exciting, read on: I’m going to go over setting the
environment up from scratch (using a MacBook as the development environment).



I used for PostgreSQL on the Mac; it is a good idea to add’s bin path to your $PATH.

You should run the following command (changing the version to match your install),
and add it to the bottom of your ~/.bash_profile:

export PATH=$PATH:/Applications/

Next let’s create our PostgreSQL database and enable the GIS extension.

Start the OS X application. Next click the elephant icon in your menu bar, and select Open psql.

jeffreyness=# create database geoapp;

jeffreyness=# \c geoapp
You are now connected to database "geoapp" as user "jeffreyness".

geoapp=# CREATE EXTENSION postgis;

You can now close the psql shell.

Next let’s install Django into a virtualenv:

# create and change to new app directory
mkdir ~/geoapp && cd ~/geoapp/

# create a fresh virtual environment
virtualenv env

# activate the virtual environment
source env/bin/activate

# install Django inside the virtual environment
pip install Django

To use PostgreSQL with Python we will need the psycopg2 adapter installed;
be sure you added’s bin path to your $PATH first:

pip install psycopg2

GeoDjango requires the GEOS library to be available; we can install it with Homebrew:

brew install geos

We are now ready to create the Django project and application.

# create a new project using Django admin tool
django-admin startproject geoproject

# change to the newly created project directory
cd geoproject/

# create a new application
./ startapp hub

Now you need to configure your Django application to use PostgreSQL and GIS;
open geoproject/ with your favorite text editor.

vim geoproject/

Append django.contrib.gis and hub to your INSTALLED_APPS, so it ends up looking something like this (your default apps may differ slightly):
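
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.gis',
    'hub',
)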


Next find the DATABASES portion and set it to the postgis engine:

DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'geoapp',
        'USER': 'jeffreyness',  # role created for your OS user
        'PASSWORD': '',
        'HOST': 'localhost',
        'PORT': ''
    }
}

The next step will be to create our model using GIS points;
add the following to hub/

from django.contrib.gis.db import models
from django.contrib.gis.geos import Point, fromstr
from django.contrib.gis.measure import D

class Vendor(models.Model):

    name = models.CharField(max_length=100)
    longitude = models.FloatField()
    latitude = models.FloatField()
    location = models.PointField(blank=True, null=True)

    def __unicode__(self):
        return unicode(

    def save(self, *args, **kwargs):
        # build the PointField from the plain floats on every save
        # (note the Point(x, y) order: longitude first, then latitude)
        if self.latitude and self.longitude:
            self.location = Point(float(self.longitude), float(self.latitude))
        super(Vendor, self).save(*args, **kwargs)
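
The custom zipcode search method used at the top of this post (get_vendors) lives on this model as well. A minimal sketch of it, assuming a zipcode-to-centroid lookup of your own (ZIPCODE_CENTROIDS below is a hypothetical stand-in for a real geocoder or zipcode table):

# hub/ -- a sketch; the classmethod goes inside the Vendor class
ZIPCODE_CENTROIDS = {
    # hypothetical lookup table: zipcode -> 'POINT(lon lat)'
    '78664': 'POINT(-97.6786111 30.5080556)',
}

class Vendor(models.Model):
    # ... fields and save() from above ...

    @classmethod
    def get_vendors(cls, zipcode, miles=5):
        # resolve the zipcode to a center point, then filter by radius
        point = fromstr(ZIPCODE_CENTROIDS[zipcode])
        return cls.objects.filter(location__distance_lte=(point, D(mi=miles)))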

You will also want to add this model to the admin page, so update hub/

from django.contrib import admin

from hub.models import Vendor

class VendorAdmin(admin.ModelAdmin):
    list_display = ('name', 'longitude', 'latitude')
    exclude = ('location',), VendorAdmin)

At this point you are ready to create the database tables, using the provided script:

./ syncdb

I’m now going to jump into the Django shell to add data, but this can also be done using the admin (

./ shell

In [1]: from hub.models import Vendor

In [2]: Vendor.objects.create(longitude=-97.677580, latitude=30.483176,
   ...: name='Starbucks')
Out[2]: <Vendor: Starbucks>

In [3]: Vendor.objects.create(longitude=-97.709085, latitude=30.518423,
  ...: name='Starbucks')
Out[3]: <Vendor: Starbucks>

In [4]: Vendor.objects.create(longitude=-97.658976, latitude=30.481517, 
   ...: name='Starbucks')
Out[4]: <Vendor: Starbucks>

In [5]: Vendor.objects.create(longitude=-97.654141, latitude=30.494810,
   ...: name='Starbucks')
Out[5]: <Vendor: Starbucks>

I can then define a point at the center of the city and filter for locations within a 5-mile radius:

In [6]: from django.contrib.gis.geos import fromstr

In [7]: from django.contrib.gis.measure import D

In [8]: point = fromstr('POINT(-97.6786111 30.5080556)')

In [9]: Vendor.objects.filter(location__distance_lte=(point, D(mi=5)))
Out[9]: [<Vendor: Starbucks>, <Vendor: Starbucks>, <Vendor: Starbucks>, 
<Vendor: Starbucks>]

Hope you found this article helpful; if you did, please share with friends and coworkers.


Simple EC2 Instance + Route53 DNS

If you have a multi-environment AWS setup and want an easy way to resolve all EC2 instances using Route53 DNS, look no further!

Currently I’m maintaining production and staging environments on Amazon Web Services across multiple regions. We tend not to use Elastic IPs, as that just increases cost; plus internally we resolve using Consul. There is one drawback to not using Elastic IPs: whenever an instance restarts it is offered a new dynamic IP (we will solve this with automation).

Our EC2 instances are deployed using Saltstack and salt-cloud, so adding this to our base SLS made sense; below is a snippet of the states:


  - cli53

# Update AWS Route53 with our hostname
    - source: salt://base/templates/
    - mode: 775

    - name: /opt/ update {{ pillar['environment'] }}
    - unless: /opt/ check {{ pillar['environment'] }}
    - require:
      - pip: cli53
      - file: /opt/

This state places the script at /opt/, then runs the update command unless the check command reports the DNS record is already current. The script requires cli53, so we have another SLS that handles that install.
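
That SLS can be as small as a single pip state, something like this sketch (the exact state file isn’t shown in this post):

    - name: cli53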

The script is merely a bash shell script with a case statement; roughly the following (the $DOMAIN variable and the cli53 record arguments are my reconstruction, adjust them for your own hosted zone):



#!/bin/bash
# Simple script for updating Route53 with instance IPs
# How to get public ip for EC2 instance

ENVIORNMENT=$2 # either prod or dev # your Route53 hosted zone
PUBLIC_IP=$(curl -s http://instance-data/latest/meta-data/public-ipv4)
DNS_IP=$(dig $HOSTNAME.$ENVIORNMENT.$DOMAIN +short)

case "$1" in

 check)
   if [[ "$DNS_IP" == "" ]] ; then
    exit 1
   elif [[ "$PUBLIC_IP" != "$DNS_IP" ]] ; then
    exit 1
   fi
   exit 0
   ;;

 update)
   if [[ "$DNS_IP" == "" ]] ; then
    echo "Did not find record for $HOSTNAME.$ENVIORNMENT, Creating.."
    cli53 rrcreate $DOMAIN $HOSTNAME.$ENVIORNMENT A $PUBLIC_IP --ttl 300
   elif [[ "$PUBLIC_IP" != "$DNS_IP" ]] ; then
    echo "Found IP $DNS_IP for $HOSTNAME.$ENVIORNMENT, Updating to $PUBLIC_IP"
    cli53 rrdelete $DOMAIN $HOSTNAME.$ENVIORNMENT
    sleep 30 # give AWS some time to delete
    cli53 rrcreate $DOMAIN $HOSTNAME.$ENVIORNMENT A $PUBLIC_IP --ttl 300
   else
    echo "No need to update. passing.."
   fi
   ;;

esac

Assuming you have pointed your domain’s NS records to Route53, and the salt
state or script has been run, you should be able to resolve your instances like below (using and a web1 hostname as stand-ins for your own):

$ dig +short

$ dig +short

Happy hacking!


C# applications deployed with Docker and Mono

Lately I’ve been working a lot with Mono, building C# applications on Linux. Just recently I discovered the official mono image on Docker Hub. This image comes with xbuild and NuGet (tools we need for building).

So let’s do a little work and get a Mono application up and running (note: I’m using a company application and will remove any references that may be sensitive).

I start by pulling the application’s source code down beside the Dockerfile:

# tree -L 3 .
├── Company.Session
│   ├──
│   └── src
│   ├── Company.Session
│   ├── Company.Session.SessionService
│   ├── Company.Session.sln
│   ├── Company.Session.sln.DotSettings
│   └── Company.Session.Tests
└── Dockerfile

5 directories, 4 files

The Dockerfile handles building, running, and exposing the network port for this app:

# The Official Mono Docker container
FROM mono:3.12

MAINTAINER Jeffrey Ness "jeffrey.ness@...."

# The TCP ports this Docker container exposes to the host.
EXPOSE 80

ENV LISTEN_ON http://*:80/

# Add the project source to the Docker container
ADD Company.Session /var/mono/Company.Session/
WORKDIR /var/mono/Company.Session/src/

# Build our project
RUN nuget restore Company.Session.sln
RUN xbuild Company.Session.sln

# Change to our artifact directory
WORKDIR /var/mono/Company.Session/src/Company.Session.SessionService/bin/Debug

# Entry point should be mono binary
ENTRYPOINT mono Company.Session.SessionService.exe

All that is needed now is to build the Docker image:

# docker build --no-cache -t session:0.1 .

After the build we should have some new images:

# docker images
REPOSITORY  TAG   IMAGE ID      CREATED          VIRTUAL SIZE
session     0.1   e886dc0f6db2  3 minutes ago    405.3 MB
mono        3.12  ad04eb901ba0  2 weeks ago      348.7 MB

Let’s start the new session image and bind its exposed port locally to 2345:

# docker run -d -p 2345:80 e886dc0f6db2

We should now have a running Docker container:

# docker ps
CONTAINER ID IMAGE       COMMAND              CREATED        STATUS        PORTS                NAMES
d8c4a7088da8 session:0.1 /bin/sh -c 'mono Big 12 seconds ago Up 11 seconds>80/tcp stoic_lalande

The docker logs command will display the output from the running container:

# docker logs d8c4a7088da8
{"date":"2015-03-24T01:44:30.3285150+00:00","level":"INFO","appname":"Company.Session.SessionService.exe","logger":"Topshelf.HostFactory","thread":"1","ndc":"(null)","message":"Configuration Result:\n[Success] Name Company.Session.SessionService\n[Success] ServiceName Company.Session.SessionService"}


And lastly we should verify the TCP port mapping is working and we can hit it from the host:

# curl -I localhost:2345
HTTP/1.1 302 Found
Location: http://localhost/metadata
Vary: Accept
X-Powered-By: ServiceStack/4.036 Unix/Mono
Server: Mono-HTTPAPI/1.0
Date: Tue, 24 Mar 2015 01:46:06 GMT
Content-Length: 0
Keep-Alive: timeout=15,max=100


Elasticsearch using Docker

Elasticsearch is a distributed RESTful search engine served over HTTP, and we are going
to use Docker to spin up multiple nodes in the cluster. First we need a server running Docker. I’m using a Debian server, so the command I need is apt-get:

# apt-get install

After installing the package make sure the docker command is available:

# docker version
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.2
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.2
Git commit (server): 4e9bbfa

Excellent, we now have Docker. Let’s start by downloading an image; the below command will
download the latest Debian Docker image:

# docker pull debian
Unable to find image 'debian' locally
debian:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
f10807909bc5: Pull complete
f6fab3b798be: Pull complete
Status: Downloaded newer image for debian:latest

We can verify the debian image is available using the docker images command:

# docker images
REPOSITORY   TAG     IMAGE ID      CREATED      VIRTUAL SIZE
debian       latest  f6fab3b798be  8 days ago   85.1 MB

Next I will shell into the debian image and install some packages:

# docker run -t -i f6fab3b798be /bin/bash

You will want to take note of the container id (85bcc90e1983) from the prompt.

Next let’s update the apt repository cache and install the Java runtime environment:

root@85bcc90e1983:/# apt-get update
root@85bcc90e1983:/# apt-get install openjdk-7-jre

From here we can get the latest release tarball from the elasticsearch download page:

root@85bcc90e1983:~# apt-get install wget
root@85bcc90e1983:~# wget
root@85bcc90e1983:~# tar -zxvf elasticsearch-1.4.0.tar.gz

From here let’s test starting up the elasticsearch process:

root@85bcc90e1983:~# elasticsearch-1.4.0/bin/elasticsearch
[2014-11-15 00:18:30,616][INFO ][node ] [Ape-Man] version[1.4.0], pid[6482], build[bc94bd8/2014-11-05T14:26:12Z]
[2014-11-15 00:18:30,617][INFO ][node ] [Ape-Man] initializing ...
[2014-11-15 00:18:30,620][INFO ][plugins ] [Ape-Man] loaded [], sites []
[2014-11-15 00:18:32,805][INFO ][node ] [Ape-Man] initialized
[2014-11-15 00:18:32,805][INFO ][node ] [Ape-Man] starting ...
[2014-11-15 00:18:32,893][INFO ][transport ] [Ape-Man] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/]}
[2014-11-15 00:18:32,905][INFO ][discovery ] [Ape-Man] elasticsearch/-LrLApD4RhyPpz8VYbDAnQ
[2014-11-15 00:18:36,671][INFO ][cluster.service ] [Ape-Man] new_master [Ape-Man][-LrLApD4RhyPpz8VYbDAnQ][85bcc90e1983][inet[/]], reason: zen-disco-join (elected_as_master)
[2014-11-15 00:18:36,700][INFO ][http ] [Ape-Man] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/]}
[2014-11-15 00:18:36,700][INFO ][node ] [Ape-Man] started
[2014-11-15 00:18:36,711][INFO ][gateway ] [Ape-Man] recovered [0] indices into cluster_state

Everything looks good; let’s CTRL+C out of the elasticsearch process and CTRL+C out of our docker process.

We then need to commit the changes we made to the debian image, but we will save it as a new image name; we need the container id mentioned previously:

# docker commit -a 'jness' -m 'Elasticsearch v1.4.0' 85bcc90e1983 jness/elasticsearch:v1

It is now time to run an elasticsearch process using our new image; we will need to make sure
to map our network ports (9300 for transport, 9200 for HTTP). Let’s first find the IMAGE ID:

# docker images
REPOSITORY          TAG    IMAGE ID      CREATED             VIRTUAL SIZE
jness/elasticsearch v1     2a523f874a5c  About a minute ago  612.7 MB
debian              latest f6fab3b798be  8 days ago          85.1 MB

Using the above IMAGE ID we can start a process using the elasticsearch binary:

# docker run -d -p 9200:9200 -p 9300:9300 2a523f874a5c /root/elasticsearch-1.4.0/bin/elasticsearch

We should now have a running docker process; let’s check using the docker ps command:

# docker ps
CONTAINER ID IMAGE                  COMMAND              CREATED        STATUS        PORTS                                            NAMES
b621c107a1fb jness/elasticsearch:v1 "/root/elasticsearch 36 seconds ago Up 35 seconds>9200/tcp,>9300/tcp stoic_pike

Looks like we have our process running; let’s make sure we can access it from the host using curl:

# curl -XGET localhost:9200
{
  "status" : 200,
  "name" : "Franklin Storm",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.0",
    "build_hash" : "bc94bd81298f81c656893ab1ddddd30a99356066",
    "build_timestamp" : "2014-11-05T14:26:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}
Sweet, we have a response! Let’s have it store some data, shall we?

# curl -XPOST localhost:9200/ness/jeff/1/ -d '{
  "full_name" : "Jeffrey Ness"
}'

And we should be able to retrieve that same piece of data:

# curl -XGET localhost:9200/ness/jeff/1/
{"_index":"ness","_type":"jeff","_id":"1","_version":1,"found":true,"_source":{
  "full_name" : "Jeffrey Ness"
}}

And finally, let’s see the true power of elasticsearch by adding a couple more nodes;
we will need to make sure we map the ports to unused ports on the host:

# docker run -d -p 9201:9200 -p 9301:9300 2a523f874a5c /root/elasticsearch-1.4.0/bin/elasticsearch

# docker run -d -p 9202:9200 -p 9302:9300 2a523f874a5c /root/elasticsearch-1.4.0/bin/elasticsearch

And without doing anything these two additional nodes should return the same data:

# curl -XGET localhost:9201/ness/jeff/1/
{"_index":"ness","_type":"jeff","_id":"1","_version":1,"found":true,"_source":{
  "full_name" : "Jeffrey Ness"
}}

# curl -XGET localhost:9202/ness/jeff/1/
{"_index":"ness","_type":"jeff","_id":"1","_version":1,"found":true,"_source":{
  "full_name" : "Jeffrey Ness"
}}

And there you have it! A single server running three Docker containers of elasticsearch.

Hope you enjoyed this little walk-through!

