Using Magic the Gathering Art for D&D Ideas

Recently I’ve spent a bit of time reading Dragon+ Magazine articles, and one in particular grabbed my attention. Using Magic Cards as D&D Items presents a very interesting way to craft items: using Magic the Gathering cards, one can gain inspiration for their next +1 Dagger, or even an Elven Stronghold.

I think the reason I latched on to this idea so quickly is that Magic the Gathering and Dungeons & Dragons are two of my favorite games. Plus, I really dig the Ixalan artwork; I mean, who hasn’t fantasized about riding a dinosaur to work?

Now, I absolutely own tons of Magic the Gathering cards, and I could easily grab a handful and start coming up with an epic dungeon delve, but I thought: why not include a little bit of technology?

Lucky for me, I was able to recycle much of the work I did on Dungeon Brawl to make Magic Inspiration. So what exactly is Magic Inspiration? Using this application, a Game Master can easily gain inspiration from Magic the Gathering artwork; then, when the ideas come flooding in, weave an amazing story.

It’s time to give credit where credit is due: this application was only made possible by Scryfall’s bulk data. These people truly provide an awesome dataset!
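The core idea boils down to picking a random card’s artwork out of a Scryfall bulk-data export. Here is a minimal sketch of that, not the actual Magic Inspiration code; the file name and the `random_art` helper are illustrative only:

```python
import json
import random

def random_art(cards):
    """Return (card name, art crop URL) for a random card with artwork.

    Scryfall card objects expose their images under 'image_uris';
    we skip cards (e.g. multi-faced ones) that lack that key.
    """
    with_art = [c for c in cards if 'image_uris' in c]
    card = random.choice(with_art)
    return card['name'], card['image_uris']['art_crop']

# e.g. with a downloaded bulk-data file:
# with open('default-cards.json') as f:
#     name, url = random_art(json.load(f))
```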

Anyway, please feel free to grab the application from my GitHub, and let me know what you think.

Until next time. . .

Python Pandas and D&D Monsters

As you may be aware, the Dungeon Brawl application I’ve been working on defines monsters in YAML format (check out the data/monsters directory).

I thought it would be interesting to load this data into Pandas and do a bit of data analysis.

Loading Data

While in the Dungeon Brawl repository I started up an ipython shell,
then imported a couple of libraries:

In [1]: import yaml

In [2]: import glob

In [3]: import pandas

Next I need to find each of my monster YAML documents; these files reside in the data/monsters directory.

Using the glob library I can easily find all files in the directory with the .yaml extension:

In [4]: files = glob.glob('data/monsters/*.yaml')

I’m now able to iterate over each of my files, open them, parse them as YAML, then store the results in a new list:

In [5]: data = []

In [6]: for _file in files:
...:        raw = open(_file).read()
...:        data.append(yaml.safe_load(raw))

The data list now contains a dictionary for each of my monsters:

In [7]: len(data)
Out[7]: 762

In [8]: data[0]['name']
Out[8]: 'Empyrean'

All that is left is to load this data into a Pandas DataFrame:

In [9]: df = pandas.DataFrame(data)


One of the first things I checked was the average hit points and armor class of a monster by challenge rating.
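That check reduces to a groupby. A minimal sketch with made-up rows; the column names (`challenge_rating`, `hit_points`, `armor_class`) are assumptions about the monster YAML keys:

```python
import pandas

# Made-up sample rows standing in for the real monster DataFrame.
df = pandas.DataFrame([
    {'challenge_rating': 1, 'hit_points': 22, 'armor_class': 13},
    {'challenge_rating': 1, 'hit_points': 30, 'armor_class': 12},
    {'challenge_rating': 5, 'hit_points': 95, 'armor_class': 16},
])

# Average hit points and armor class per challenge rating.
averages = df.groupby('challenge_rating')[['hit_points', 'armor_class']].mean()
print(averages)
```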

I then dug a bit deeper into each of the stats using the Pandas describe method, which gives things like standard deviation, mean, min, and max.
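For example, describe can be applied per group; again the column names here are assumed stand-ins for the real YAML keys:

```python
import pandas

# Tiny illustrative frame; the real data comes from the monster YAML.
df = pandas.DataFrame({
    'challenge_rating': [1, 1, 5],
    'hit_points': [22, 30, 95],
})

# describe() per challenge rating: count, mean, std, min, quartiles, max.
stats = df.groupby('challenge_rating')['hit_points'].describe()
print(stats)
```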

Below are a couple attempts at useful describe tables:

Hit Points by Challenge Rating

Armor Class by Challenge Rating

Challenge Rating by Monster Type

Hit Points by Monster Size

Challenge Rating by Monster Size

Well, that’s it. I hope you found something in this post interesting.

Raspberry Pi Weather Station

Well, it’s been a little over two weeks and the Raspberry Pi 3 Model B weather station has held up; figured now would be a good time to go a little deeper into the setup.

Telegraf has been solid at ingesting my JSON documents periodically; let’s have a look at its SensorTag configuration:

# cat /etc/telegraf/telegraf.d/sensor_tag.conf
[[inputs.exec]]
  command = "cat /sensor_tag.json"
  data_format = "json"
  name_suffix = "_sensor_tag"
  interval = "60s"

This Telegraf configuration inserts the contents of /sensor_tag.json into InfluxDB every 60 seconds.
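For reference, the json data format expects a flat document of numeric fields, something along these lines (the field names here are illustrative; the real ones come from the SensorTag readings):

```
{
  "temperature": 21.4,
  "humidity": 48.2,
  "pressure": 1012.7
}
```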

In order to keep the JSON document up to date, I have cron executing a Python script every 60 seconds:

# crontab -l | grep sensor
* * * * * /root/ A0:??:??:??:??:??

The script is a slightly modified version of my SensorTag code on GitHub.
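The script itself lives on GitHub; the write side reduces to dumping the latest readings as JSON. A minimal sketch (the readings, `write_readings` helper, and default path are placeholders, not the actual script):

```python
import json
import os
import tempfile

def write_readings(readings, path='/sensor_tag.json'):
    """Atomically replace the JSON document Telegraf reads.

    Writing to a temp file and renaming avoids Telegraf catching a
    half-written document when cron and the exec plugin fire together.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w') as f:
        json.dump(readings, f)
    os.rename(tmp, path)

# e.g. write_readings({'temperature': 21.4, 'humidity': 48.2})
```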