Simple EC2 Instance + Route53 DNS

If you have a multi-environment AWS setup and want an easy way to resolve all your EC2 instances using Route53 DNS, look no further!

Currently I’m maintaining production and staging environments on Amazon Web Services across multiple regions. We tend not to use Elastic IPs since they increase cost, and internally we resolve hosts using Consul. There is one drawback to skipping Elastic IPs: whenever an instance restarts it is offered a new dynamic IP (we will solve this with automation).

Our EC2 instances are deployed using SaltStack and salt-cloud, so adding to our base SLS made sense. Below is a snippet of the states:

salt/base.sls

include:
  - cli53

#
# Update AWS Route53 with our hostname
#

/opt/update_route53.sh:
  file.managed:
    - source: salt://base/templates/update_route53.sh
    - mode: 775

update_route53:
  cmd.run:
    - name: /opt/update_route53.sh update {{ pillar['environment'] }}
    - unless: /opt/update_route53.sh check {{ pillar['environment'] }}
    - require:
      - pip: cli53
      - file: /opt/update_route53.sh

This state places a script at /opt/update_route53.sh, then runs its update command unless its check command reports the DNS record is already current. The script requires cli53, so we have another SLS that handles that install.
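That cli53 SLS is not shown here, but a rough sketch of what it boils down to (assuming the minion already has AWS credentials configured for boto) is a pip install plus a sanity check:

# pip install cli53
# cli53 rrlist yourdomain.com # should list the zone's records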

The script is merely a bash shell script with a case statement:

/opt/update_route53.sh

#!/bin/bash

#
# Simple script for updating Route53 with instance IPs
#
# How to get the public IP for an EC2 instance:
# http://stackoverflow.com/a/7536318/4635050
#

ENVIRONMENT=$2 # either prod or dev
HOSTNAME=$(hostname)
PUBLIC_IP=$(curl -s http://instance-data/latest/meta-data/public-ipv4)
DNS_IP=$(dig "$HOSTNAME.$ENVIRONMENT.yourdomain.com" +short)

case "$1" in

 check)
  # exit non-zero when the record is missing or stale,
  # which tells Salt to run the update command
  if [[ "$DNS_IP" == "" ]] ; then
   exit 1
  elif [[ "$PUBLIC_IP" != "$DNS_IP" ]] ; then
   exit 1
  fi
  exit 0
  ;;

 update)
  if [[ "$DNS_IP" == "" ]] ; then
   echo "Did not find record for $HOSTNAME.$ENVIRONMENT, creating..."
   cli53 rrcreate yourdomain.com "$HOSTNAME.$ENVIRONMENT" A "$PUBLIC_IP"
  elif [[ "$PUBLIC_IP" != "$DNS_IP" ]] ; then
   echo "Found IP $DNS_IP for $HOSTNAME.$ENVIRONMENT, updating to $PUBLIC_IP"
   cli53 rrdelete yourdomain.com "$HOSTNAME.$ENVIRONMENT"
   sleep 30 # give AWS some time to delete
   cli53 rrcreate yourdomain.com "$HOSTNAME.$ENVIRONMENT" A "$PUBLIC_IP"
  else
   echo "No need to update, passing..."
  fi
  ;;
esac
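Run by hand, the script mirrors what the Salt state does; for example, for the prod environment:

# /opt/update_route53.sh check prod || /opt/update_route53.sh update prod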

Assuming you have pointed your domain’s NS records at Route53, and the Salt state or script has been run, you should be able to resolve your instances like below:

$ dig salt-master.prod.yourdomain.com +short
1.2.3.4
$ dig webapp.dev.yourdomain.com +short
4.3.2.1

Happy hacking!



C# applications deployed with Docker and Mono

Lately I’ve been working a lot with Mono, building C# applications on Linux. Just recently I discovered the official mono image on the Docker Hub. This image comes with xbuild and NuGet (tools we need for building).
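As a quick sanity check (purely illustrative) you can confirm the tooling really ships with the image before writing a Dockerfile:

# docker run --rm mono:3.12 sh -c 'which xbuild nuget'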

So let’s do a little work and get a Mono application up and running (note: this is a company application, so I’ve removed any references that may be sensitive).

I start by pulling the application’s source code down beside the Dockerfile:

# tree -L 3 .
.
├── Company.Session
│   ├── README.md
│   └── src
│       ├── Company.Session
│       ├── Company.Session.SessionService
│       ├── Company.Session.sln
│       ├── Company.Session.sln.DotSettings
│       └── Company.Session.Tests
└── Dockerfile

5 directories, 4 files

The Dockerfile handles building, running, and exposing the network port for this app:

# The Official Mono Docker container
# https://registry.hub.docker.com/_/mono/
FROM mono:3.12

MAINTAINER Jeffrey Ness "jeffrey.ness@...."

# The TCP ports this Docker container exposes to the host.
EXPOSE 80

ENV LISTEN_ON http://*:80/
ENV POSTGRESQL_USER_ID root
ENV POSTGRESQL_USER_PW password
ENV POSTGRESQL_HOST 172.17.42.1
ENV POSTGRESQL_PORT 5432
ENV POSTGRESQL_DATABASE session
ENV POSTGRESQL_SEARCH_PATH public

# Add the project tarball to Docker container
ADD Company.Session /var/mono/Company.Session/
WORKDIR /var/mono/Company.Session/src/

# Build our project
RUN nuget restore Company.Session.sln
RUN xbuild Company.Session.sln

# Change to our artifact directory
WORKDIR /var/mono/Company.Session/src/Company.Session.SessionService/bin/Debug

# Entry point should be mono binary
ENTRYPOINT mono Company.Session.SessionService.exe

All that is needed now is to build the Docker image:

# docker build --no-cache -t session:0.1 .

After the build we should have some new images:

# docker images
REPOSITORY  TAG   IMAGE ID      CREATED        VIRTUAL SIZE
session     0.1   e886dc0f6db2  3 minutes ago  405.3 MB
mono        3.12  ad04eb901ba0  2 weeks ago    348.7 MB

Let’s start the new session image and bind its exposed port locally to 2345:

# docker run -d -p 2345:80 e886dc0f6db2
d8c4a7088da8ba0874c63e30e564a077b1c1a544825d7d1e148862b6b81f5600

We should now have a running Docker container:

# docker ps
CONTAINER ID  IMAGE        COMMAND               CREATED         STATUS         PORTS                 NAMES
d8c4a7088da8  session:0.1  /bin/sh -c 'mono Com  12 seconds ago  Up 11 seconds  0.0.0.0:2345->80/tcp  stoic_lalande

The docker logs command will display the output from the running process:

# docker logs d8c4a7088da8
{"date":"2015-03-24T01:44:30.3285150+00:00","level":"INFO","appname":"Company.Session.SessionService.exe","logger":"Topshelf.HostFactory","thread":"1","ndc":"(null)","message":"Configuration Result:\n[Success] Name Company.Session.SessionService\n[Success] ServiceName Company.Session.SessionService"}

...

Lastly, we should verify the TCP port mapping is working and that we can hit the service from the host:

# curl -I localhost:2345
HTTP/1.1 302 Found
Location: http://localhost/metadata
Vary: Accept
X-Powered-By: ServiceStack/4.036 Unix/Mono
Server: Mono-HTTPAPI/1.0
Date: Tue, 24 Mar 2015 01:46:06 GMT
Content-Length: 0
Keep-Alive: timeout=15,max=100
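A nice side effect of driving the PostgreSQL settings through ENV instructions is that they can be overridden per container at run time without rebuilding the image, for example (hypothetical host and password values):

# docker run -d -p 2345:80 -e POSTGRESQL_HOST=10.0.0.5 -e POSTGRESQL_USER_PW=secret session:0.1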


Elasticsearch using Docker

Elasticsearch is a distributed RESTful search tool that speaks HTTP, and we are going to use Docker to spin up multiple nodes in a cluster. First we need a server running Docker. I’m using a Debian server, so the command I need is apt-get:

# apt-get install docker.io

After installing the package make sure the docker command is available:

# docker version
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.2
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.2
Git commit (server): 4e9bbfa

Excellent, we now have Docker. Let’s start by downloading an image; the command below will download the latest Debian image (and create a container from it):

# docker create debian
Unable to find image 'debian' locally
debian:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
f10807909bc5: Pull complete
f6fab3b798be: Pull complete
Status: Downloaded newer image for debian:latest
6cf83ed03695134d2a606f63c494dcf6e9dedcb2fe0db2768d8d5a95baac50c1

We can verify the debian image is available using the docker images command:

# docker images
REPOSITORY   TAG     IMAGE ID      CREATED      VIRTUAL SIZE
debian       latest  f6fab3b798be  8 days ago   85.1 MB

Next I will start an interactive container from the debian image to install some packages:

# docker run -t -i f6fab3b798be /bin/bash
root@85bcc90e1983:/#

You will want to take note of the container ID (85bcc90e1983) shown in the prompt.

Next let’s update the apt repository cache and install the Java runtime environment:

root@85bcc90e1983:/# apt-get update
root@85bcc90e1983:/# apt-get install openjdk-7-jre

From here we can grab the latest release tarball from the elasticsearch download page:

root@85bcc90e1983:~# apt-get install wget
root@85bcc90e1983:~# wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.0.tar.gz
root@85bcc90e1983:~# tar -zxvf elasticsearch-1.4.0.tar.gz

Now let’s test starting up the elasticsearch process:

root@85bcc90e1983:~# elasticsearch-1.4.0/bin/elasticsearch
[2014-11-15 00:18:30,616][INFO ][node ] [Ape-Man] version[1.4.0], pid[6482], build[bc94bd8/2014-11-05T14:26:12Z]
[2014-11-15 00:18:30,617][INFO ][node ] [Ape-Man] initializing ...
[2014-11-15 00:18:30,620][INFO ][plugins ] [Ape-Man] loaded [], sites []
[2014-11-15 00:18:32,805][INFO ][node ] [Ape-Man] initialized
[2014-11-15 00:18:32,805][INFO ][node ] [Ape-Man] starting ...
[2014-11-15 00:18:32,893][INFO ][transport ] [Ape-Man] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.2:9300]}
[2014-11-15 00:18:32,905][INFO ][discovery ] [Ape-Man] elasticsearch/-LrLApD4RhyPpz8VYbDAnQ
[2014-11-15 00:18:36,671][INFO ][cluster.service ] [Ape-Man] new_master [Ape-Man][-LrLApD4RhyPpz8VYbDAnQ][85bcc90e1983][inet[/172.17.0.2:9300]], reason: zen-disco-join (elected_as_master)
[2014-11-15 00:18:36,700][INFO ][http ] [Ape-Man] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.2:9200]}
[2014-11-15 00:18:36,700][INFO ][node ] [Ape-Man] started
[2014-11-15 00:18:36,711][INFO ][gateway ] [Ape-Man] recovered [0] indices into cluster_state

Everything looks good. Let’s CTRL+C out of the elasticsearch process and then exit the container.

We then need to commit the changes we made to the debian image, saving them under a new image name; for this we need the container ID mentioned previously:

# docker commit -a 'jness' -m 'Elasticsearch v1.4.0' 85bcc90e1983 jness/elasticsearch:v1

It is now time to run an elasticsearch process using our new image. We will need to map the network ports (9200 for HTTP, 9300 for transport). Let’s first find the IMAGE ID:

# docker images
REPOSITORY          TAG    IMAGE ID      CREATED             VIRTUAL SIZE
jness/elasticsearch v1     2a523f874a5c  About a minute ago  612.7 MB
debian              latest f6fab3b798be  8 days ago          85.1 MB

Using the above IMAGE ID we can start a process using the elasticsearch binary:

# docker run -d -p 9200:9200 -p 9300:9300 2a523f874a5c /root/elasticsearch-1.4.0/bin/elasticsearch

We should now have a running Docker container; let’s check using the docker ps command:

# docker ps
CONTAINER ID  IMAGE                   COMMAND               CREATED         STATUS         PORTS                                           NAMES
b621c107a1fb  jness/elasticsearch:v1  "/root/elasticsearch  36 seconds ago  Up 35 seconds  0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp  stoic_pike

Looks like we have our process running; let’s make sure we can access it from the host using curl:

# curl -XGET localhost:9200
{
  "status" : 200,
  "name" : "Franklin Storm",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.0",
    "build_hash" : "bc94bd81298f81c656893ab1ddddd30a99356066",
    "build_timestamp" : "2014-11-05T14:26:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}

Sweet, we have a response! Let’s have it store some data, shall we?

# curl -XPOST localhost:9200/ness/jeff/1/ -d '
{
 "full_name" : "Jeffrey Ness"
}
'
{"_index":"ness","_type":"jeff","_id":"1","_version":1,"created":true}

And we should be able to retrieve that same piece of data:

# curl -XGET localhost:9200/ness/jeff/1/
{"_index":"ness","_type":"jeff","_id":"1","_version":1,"found":true,"_source":
{
 "full_name" : "Jeffrey Ness"
}
}

And finally, let’s see the true power of elasticsearch by adding a couple more nodes. We will need to make sure we map their ports to unused ports on the host:

# docker run -d -p 9201:9200 -p 9301:9300 2a523f874a5c /root/elasticsearch-1.4.0/bin/elasticsearch

# docker run -d -p 9202:9200 -p 9302:9300 2a523f874a5c /root/elasticsearch-1.4.0/bin/elasticsearch

And with no extra configuration these two additional nodes discover the existing cluster (multicast discovery is enabled by default in Elasticsearch 1.4) and return the same data:

# curl -XGET localhost:9201/ness/jeff/1/
{"_index":"ness","_type":"jeff","_id":"1","_version":1,"found":true,"_source":
{
 "full_name" : "Jeffrey Ness"
}
}
# curl -XGET localhost:9202/ness/jeff/1/
{"_index":"ness","_type":"jeff","_id":"1","_version":1,"found":true,"_source":
{
 "full_name" : "Jeffrey Ness"
}
}
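To confirm the three containers actually formed a single cluster, rather than three one-node clusters, ask the cluster health API; "number_of_nodes" should be 3:

# curl -XGET localhost:9200/_cluster/health?pretty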

And there you have it! A single server running three Dockerized elasticsearch nodes.

Hope you enjoyed this little walk-through!


