User Auth System with Persistent Sessions using Phoenix, Pow, and Mnesia

A Lib Abandoned

I have a web app built on Phoenix/Elixir that up until today used Coherence as its user authentication lib. Unfortunately the maintainer of this lib has straight ghosted, so my options were to fork Coherence and maintain it myself, build my own user auth system, or seek out another lib. I did what I believed to be the smart and lazy choice and searched for something new. What I found was Pow.

New Lib on the Block

Pow is pretty dope. It has lots of great functionality out of the box, has controller callbacks, which is a feature I felt Coherence was lacking, and is actively maintained. The documentation is sparse, but at the time of this writing, the project is very young, so I’m sure it will be added in due time.

One of Pow’s coolest upgrades over Coherence is that it has a built-in module for persistent sessions that uses Mnesia. I wasn’t familiar with Mnesia before this, and from my cursory reading, it seems pretty much like SQLite but built into Erlang (and therefore Elixir) so that’s pretty cool. I was definitely excited to try implementing this. However, it was not without its stumbling blocks. Here’s how I got it done.
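If you’ve never touched Mnesia either, here’s a minimal taste of calling it from Elixir (a RAM-only table in a throwaway session; just a sketch of the API, not how Pow uses it internally):

```elixir
# Start Mnesia with its default RAM-only schema and create a simple table.
:mnesia.start()
:mnesia.create_table(:session, attributes: [:id, :data])

# Dirty (non-transactional) write and read, just to show the API shape.
:mnesia.dirty_write({:session, "abc", %{user_id: 1}})
IO.inspect(:mnesia.dirty_read({:session, "abc"}))
# => [{:session, "abc", %{user_id: 1}}]
```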

The First Hurdle

The Pow readme shows you what your Application module should look like if you want to use Mnesia across the board (in dev, test, prod, etc), but I only wanted to use it in production, so here’s one way to accomplish that:

# lib/my_app/application.ex
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      supervisor(MyApp.Repo, []),
      supervisor(MyAppWeb.Endpoint, []),
    ] ++ workers()

    opts = [strategy: :one_for_one, name: MyApp.Supervisor]
    Supervisor.start_link(children, opts)
  end

  def config_change(changed, _new, removed) do
    MyAppWeb.Endpoint.config_change(changed, removed)
    :ok
  end

  defp workers() do
    import Supervisor.Spec

    case Application.get_env(:my_app, :pow)[:cache_store_backend] do
      Pow.Store.Backend.MnesiaCache ->
        [worker(Pow.Store.Backend.MnesiaCache, [[nodes: [node()]]])]

      _ ->
        []
    end
  end
end

# config/prod.exs
config :my_app, :pow, cache_store_backend: Pow.Store.Backend.MnesiaCache

I added the :cache_store_backend config value to my prod config as per the Pow docs, and the workers() function checks this value and returns the proper worker in prod and an empty list in every other env.

I also added this config value in dev first just to double-check, and everything was fine, so it was time to deploy. I use Distillery and Edeliver, so I built a new release, deployed it to production, and restarted the production app, but there was a problem:

Application my_app exited: MyApp.Application.start(:normal, []) returned an error: shutdown: failed to start child: Pow.Store.Backend.MnesiaCache 
** (EXIT) an exception was raised: 
    ** (UndefinedFunctionError) function :mnesia.create_schema/1 is undefined (module :mnesia is not available) 
        (pow) lib/pow/store/backend/mnesia_cache.ex:172: Pow.Store.Backend.MnesiaCache.table_init/1 
        (pow) lib/pow/store/backend/mnesia_cache.ex:66: Pow.Store.Backend.MnesiaCache.init/1 
        (stdlib) gen_server.erl:374: :gen_server.init_it/2 
        (stdlib) gen_server.erl:342: :gen_server.init_it/6 
        (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3

The Second Hurdle

A quick Google led me to this GitHub page that said “This is because distillery doesn’t export mnesia by default. You need to tell distillery to export :mnesia by adding it to the extra_applications option in your mix application.”

I tried that, and this is what I got:

Application my_app exited: MyApp.Application.start(:normal, []) returned an error: shutdown: failed to start child: Pow.Store.Backend.MnesiaCache 
** (EXIT) an exception was raised: 
    ** (CaseClauseError) no case clause matching: {:aborted, {:bad_type, Pow.Store.Backend.MnesiaCache, :disc_copies, :"my_app@"}} 
        (pow) lib/pow/store/backend/mnesia_cache.ex:179: Pow.Store.Backend.MnesiaCache.table_init/1 
        (pow) lib/pow/store/backend/mnesia_cache.ex:66: Pow.Store.Backend.MnesiaCache.init/1 
        (stdlib) gen_server.erl:374: :gen_server.init_it/2 
        (stdlib) gen_server.erl:342: :gen_server.init_it/6 
        (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3 

Ok, this seems like… idk progress? The GitHub page was definitely right about Distillery not exposing :mnesia, but as we’ll see in a bit, their fix is not quite what I needed. Anyway, googling this new error message was not terribly productive.

A Hurdlish Stumble

One thing I did find is that remote consoling into my production app and running :mnesia.system_info() yielded some information. Most interesting to me was that the mnesia directory was something like /home/my_app/app_releases/Mnesia.my_app@. This is not really an intuitive place to keep my disk-written copy of the data, so another quick search revealed I should add this line to my config/prod.exs file:

config :mnesia, dir: '/home/my_app/app_storage/Mnesia'

I also had to mkdir the app_storage and Mnesia dirs on my production server. One really important thing to note is that YOU HAVE TO USE SINGLE QUOTES. Why does it want a charlist instead of a string? As it turns out, :mnesia is an Erlang application, and Erlang strings are charlists, which is exactly what single quotes produce in Elixir (any Erlangelicals (™ pending) out there can correct me if I’m wrong). I’ll post the error I was getting when using a double-quoted directory string in config for any lost googlers out there:

Application my_app exited: MyApp.Application.start(:normal, []) returned an error: shutdown: failed to start child: Pow.Store.Backend.MnesiaCache 
** (EXIT) an exception was raised: 
    ** (CaseClauseError) no case clause matching: {:error, {:EXIT, :function_clause}} 
        (pow) lib/pow/store/backend/mnesia_cache.ex:172: Pow.Store.Backend.MnesiaCache.table_init/1 
        (pow) lib/pow/store/backend/mnesia_cache.ex:66: Pow.Store.Backend.MnesiaCache.init/1 
        (stdlib) gen_server.erl:374: :gen_server.init_it/2 
        (stdlib) gen_server.erl:342: :gen_server.init_it/6 
        (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
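As an aside on the charlist question, the distinction is easy to see for yourself: double quotes make UTF-8 binaries (Elixir strings), while single quotes make lists of character codes (Erlang strings), and Erlang applications expect the latter.

```elixir
string   = "/tmp/mnesia"   # double quotes: a UTF-8 binary (Elixir string)
charlist = '/tmp/mnesia'   # single quotes: a list of character codes (Erlang string)

IO.inspect(is_binary(string))                    # true
IO.inspect(is_list(charlist))                    # true
IO.inspect(List.to_string(charlist) == string)   # true
```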

Still, this did not solve the 2nd error I got when I tried to start up the production app. Frustrated but unabated, I thought I’d try it out in dev again and see if I could piece together why one was working and the other was not.

The Finish Line

Lucky for me, I got the same error when I tried to start up the dev server. This meant it wasn’t some Distillery or production issue. After some experimentation, I discovered that having :mnesia in :extra_applications was the problem, because it was starting up Mnesia in memory; then when Pow tried to start it up with disc_copies enabled, it failed. But I still needed Distillery to expose :mnesia to the rest of my app, so what to do?

The answer was with :included_applications. From the docs:

Any included application, defined in the :included_applications key of the .app file will also be loaded, but they won’t be started.

So I changed my application function in mix.exs to look like this:

  def application do
    [
      mod: {MyApp.Application, []},
      extra_applications: [:logger, :runtime_tools],
      included_applications: [:mnesia]
    ]
  end

And boom! It worked like a difficult-to-debug charm.
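If you’re curious, you can verify the loaded-but-not-started behavior yourself in a quick script or IEx session:

```elixir
# Load :mnesia the way :included_applications does at boot, without starting it.
Application.load(:mnesia)

loaded?  = List.keymember?(Application.loaded_applications(), :mnesia, 0)
started? = List.keymember?(Application.started_applications(), :mnesia, 0)
IO.inspect({loaded?, started?})  # {true, false}
```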

Automatically Restart Elixir Applications after Server Reboot with Edeliver/Distillery, Ubuntu, and Crontab

Edeliver is an awesome deployment tool for Elixir applications, including Phoenix apps, but one thing it lacks (intentionally) is support for auto-restarting your Elixir app after server reboots.

Out of the box, if you want to restart your app after a server reboot, you can just run

$ mix edeliver restart production

in a terminal from your local app directory.

Having to run this command after every reboot is a minor annoyance, but what if your hosting company decides to restart your server in the middle of the night? What if you have many Elixir apps running on the same server? What if you’re just OCD about never wanting to type commands again and just want the damn thing automated? Well, one way to solve this is to use a crontab job.

Starting and Stopping Your App from the Server Itself

Edeliver uses another great tool, Distillery, to help build releases, and Distillery provides several useful command line tasks, including tasks that start and stop your app. This is what we’ll use in our crontab job.

NOTE: This article assumes you have set up your deployment process the same way as in my Phoenix deployment article.

Step 1: Open the Crontab Edit File

Ok, if you’ve followed the setup linked above, you have a dedicated user on your production server just for your app. SSH into that server as that user and run the following command:

$ crontab -e

If this is your first time running that command, it will ask you what editor you want to use. Let’s go with nano.

Step 2: Add a Reboot Job to The Crontab File

Next you want to make use of crontab’s special @reboot string and use the Distillery start command to start up your app. Add this line to the bottom of the file:

@reboot /home/youruser/app_release/your_app/bin/your_app start

Obviously you’re going to want to replace the paths with your own info. Once you’ve added the line, save and close the file by hitting Ctrl+X, then Y, then Enter.
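If you want a quick sanity check on the line’s shape before rebooting, something like this works (the path is a placeholder, same as above):

```shell
# Check that the crontab line is "@reboot" followed by an absolute path ending in "start"
line='@reboot /home/youruser/app_release/your_app/bin/your_app start'
echo "$line" | grep -Eq '^@reboot /.+ start$' && echo "looks good"
```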

Step 3: Test It Out

Now do a sudo reboot and your app should automatically be back up and running in a matter of seconds.

Setting up Phoenix Channels to use MessagePack for Serialization

In this article, I’ll show you how to set up your Phoenix 1.3 app to use the binary serialization format MessagePack for sending and receiving web socket channel messages.

What is MessagePack?

MessagePack is a binary serialization format. You know how JSON uses strings to represent different kinds of data and objects? Well MessagePack does the same thing, but with binary, so the encoded things are generally smaller than their JSON equivalents.
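To make that concrete, here’s a tiny payload hand-encoded per the MessagePack spec next to its JSON equivalent (the bytes follow the spec’s fixmap/fixstr/fixint rules; no library needed for an example this small):

```elixir
# MessagePack for %{"a" => 1}: 0x81 = 1-entry map, 0xA1 = 1-byte string, 0x01 = int 1
msgpack = <<0x81, 0xA1, ?a, 0x01>>
json = ~s({"a":1})

IO.inspect({byte_size(msgpack), byte_size(json)})  # {4, 7}
```

Four bytes versus seven, and the gap generally widens as payloads grow.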

Same guy, just in a smaller package.

What is Serialization?

Serialization is a nice word for encoding and decoding messages. In this article, I’ll be talking about two serializers – one for the server and one for the client.

The server serializer encodes Elixir data into MessagePack binary and decodes the binary to Elixir. The client serializer does the same thing but with JavaScript instead of Elixir.

Why use MessagePack?

The default for passing data around is pretty much always JSON. It’s fast and relatively compact. But what about a case where you’re passing potentially hundreds of messages per second back and forth between client(s) and server? Any reduction in message size can add up really quickly.

Such is the case with many .io games, and Breakfast of Champions is no different. Using Msgpax, a MessagePack serializer written in Elixir, I was able to reduce the size of each message passed by an order of magnitude.
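As a back-of-envelope illustration (the rate and per-message saving below are made-up numbers, not Breakfast of Champions measurements):

```elixir
msgs_per_sec = 600          # hypothetical message rate across all clients
bytes_saved_per_msg = 100   # hypothetical per-message saving from MessagePack

bytes_per_day = msgs_per_sec * bytes_saved_per_msg * 60 * 60 * 24
IO.puts("#{bytes_per_day / 1_000_000_000} GB/day")  # 5.184 GB/day
```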


Setting Up Phoenix to Use MessagePack

Add Msgpax to Deps

First we need to make the Msgpax lib available to our app. Follow the installation instructions on the Msgpax repo page.

Tell Phoenix to Use a Custom Serializer

NOTE: If you’re reading this in the future, this part will have changed when Phoenix 1.4 came out!

Next we need to edit the channels/user_socket.ex file. A Socket is an object (or module in this case) that deals with actions relating to the connection between a web socket server and client. Both the Phoenix server and JavaScript client have Sockets.

Find the line in the UserSocket module that says:

transport :websocket, Phoenix.Transports.WebSocket

Here transport is a macro that sets up the UserSocket’s transport property. If Sockets are objects that deal with web socket connections, a Socket’s transport property tells the socket how it’s communicating – in other words, what method the socket is using to transport messages. The two built-in options for Phoenix are websocket and longpoll. The two parameters in the line above are telling Phoenix we’re using websockets to pass channel messages, and that transport behavior is defined in the Phoenix.Transports.WebSocket module.

So how do we tell the transport module that we want to use a different serializer to encode and decode our channel messages? If you take a look at the implementation of the transport macro, you can see that it also accepts an optional config property. Upon further inspection, we see that config is a list that can have the key-value pair serializer: [//list of serializers and transport versions]. Bingo! Transform the above line into:

transport :websocket, Phoenix.Transports.WebSocket, serializer: [{YourAppName.MsgpaxSerializer, "~> 2.0.0"}]

Create a Custom Serializer

Ok, so we’ve told Phoenix we want to use a custom serializer, called MsgpaxSerializer, to encode and decode our messages, but that module doesn’t exist yet, so we better create it.

For inspiration, take a look at Phoenix’s own implementation of the websocket serializer. One of the first lines is: @behaviour Phoenix.Transports.Serializer. The @behaviour directive in Elixir is like an interface in Java or a protocol in Swift/Objective C. It defines a set of functions that have to be implemented by a module, and it raises errors if those functions are missing. We want our custom serializer to conform to this behavior (or behaviour) too, so let’s check it out, pasted here in its entirety:


defmodule Phoenix.Transports.Serializer do
  @moduledoc """
  Defines a behaviour for `Phoenix.Socket.Message` serialization.
  """

  @doc "Translates a `Phoenix.Socket.Broadcast` struct to fastlane format"
  @callback fastlane!(Phoenix.Socket.Broadcast.t) :: term

  @doc "Encodes `Phoenix.Socket.Message` struct to transport representation"
  @callback encode!(Phoenix.Socket.Message.t | Phoenix.Socket.Reply.t) :: term

  @doc "Decodes iodata into `Phoenix.Socket.Message` struct"
  @callback decode!(iodata, options :: Keyword.t) :: Phoenix.Socket.Message.t
end

As you can see, we need three functions in our serializer to conform to the Phoenix.Transports.Serializer behavior: fastlane!, encode!, and decode!. Now that we know what we need, and have an example, let’s make the thing.

Make the module file at your_app/lib/your_app/msgpax_serializer.ex and add the following contents:

defmodule YourApp.MsgpaxSerializer do
  @moduledoc false
  @behaviour Phoenix.Transports.Serializer

  alias Phoenix.Socket.Reply
  alias Phoenix.Socket.Message
  alias Phoenix.Socket.Broadcast

  @doc """
  Translates a `Phoenix.Socket.Broadcast` into a `Phoenix.Socket.Message`.
  """
  def fastlane!(%Broadcast{} = msg) do
    msg = %Message{topic: msg.topic, event: msg.event, payload: msg.payload}

    {:socket_push, :binary, encode_v1_fields_only(msg)}
  end

  @doc """
  Encodes a `Phoenix.Socket.Message` struct to MessagePack binary.
  """
  def encode!(%Reply{} = reply) do
    msg = %Message{
      topic: reply.topic,
      event: "phx_reply",
      ref: reply.ref,
      payload: %{status: reply.status, response: reply.payload}
    }

    {:socket_push, :binary, encode_v1_fields_only(msg)}
  end

  def encode!(%Message{} = msg) do
    {:socket_push, :binary, encode_v1_fields_only(msg)}
  end

  @doc """
  Decodes MessagePack binary into `Phoenix.Socket.Message` struct.
  """
  def decode!(message, _opts) do
    message
    |> Msgpax.unpack!()
    |> Phoenix.Socket.Message.from_map!()
  end

  defp encode_v1_fields_only(%Message{} = msg) do
    msg
    |> Map.take([:topic, :event, :payload, :ref])
    |> Msgpax.pack!()
  end
end
It’s nearly identical to Phoenix’s websocket_serializer.ex, with a few key differences. First, in the tuples returned from fastlane! and encode!, we replaced :text with :binary, and second, we replaced all Poison encoding/decoding with Msgpax packing and unpacking.

That’s it. Phoenix is now sending all its channel messages in the MessagePack binary format and expecting that format in return. However, our js client is still sending and expecting JSON messages. We’ll change that in the next post!

Deploying a Phoenix App to Ubuntu 16.04 with Edeliver, Distillery, and Nginx

When I first started doing live testing of Breakfast of Champions, I was using Heroku because it’s cheap, easy, and it’s what I was used to. Unfortunately, Heroku allots you very little CPU power regardless of your dyno size. Not to mention they restart every app once a day, and neither hot swapping nor distributed clustering is possible. So when it came time to actually deploy the game, I had to look for something different.

Finally, after much procrastination, I cut the PaaS cord and decided to deploy my own cloud server using Vultr. I spun up an Ubuntu 16.04 instance based in Dallas with 2 virtual CPU cores (good enough for now and easily upgraded). I had looked at several alternatives, including Digital Ocean, AWS, and some smaller companies specializing in game servers, but Vultr seemed to benchmark pretty well against the competition and the price was right.

What I didn’t realize is how great the deployment process already is for Elixir apps. The language really benefits from standing on the shoulders of the Erlang giant. After you get past the setup, Edeliver and Distillery make it ridiculously simple.

Anyway, let’s get to the good stuff. After some experimentation and gathering disparate pieces of info (major props go to Digital Ocean’s Phoenix deployment guide), here are the steps to deploy a Phoenix application.





This guide assumes you have a Phoenix >=1.3 application with a default directory structure that uses Postgres and Brunch, that you’re using Git for version control, and that you have purchased a domain name.

Step 1: Spin up a Server

Create a server instance on Vultr with Ubuntu 16.04. It should have at least 1GB of RAM.

Step 2: SSH into your New Server

Once your server is spun up, click “Manage” to view your server’s details. From this page, copy your server’s root password. Then, in a local terminal:

$ ssh root@your_server_ip

Paste in the password when prompted.

Step 3: Update the Server

In SSH terminal:

$ apt-get update && apt-get upgrade && apt-get dist-upgrade
$ reboot
Step 4: Create a New User

Re-ssh into your server, then:

$ adduser nick

Enter a strong password, and optionally enter more info. Now give the new user sudo privileges:

$ usermod -aG sudo nick
Step 5: Enable Public Key Authentication

If you haven’t generated a local key-pair, do so now (for example, with ssh-keygen).

Copy the contents of ~/.ssh/ from your local machine. You can use:

$ cat ~/.ssh/

Highlight and copy the output. Then, in a server terminal:

$ su - nick
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ nano ~/.ssh/authorized_keys

Paste in your public key. Then ctrl+X to exit, Y then Enter to save.

$ chmod 600 ~/.ssh/authorized_keys
$ exit
Step 6: Disable SSH Password Authentication

Still in server terminal, as root user:

$ nano /etc/ssh/sshd_config

Find the line that says “PasswordAuthentication yes” and change it to “PasswordAuthentication no”, uncommenting it if necessary. Again save and exit nano with Ctrl+X, then Y, then Enter.

Apply new configuration:

$ systemctl reload sshd

Test the new configuration. In a local terminal:

$ ssh root@your_server_ip

This should fail with “Permission denied (publickey)”. Then, SSH with your new user:

$ ssh nick@your_server_ip
Step 7: Add Server to Local SSH Config File

On your local machine, create a file at ~/.ssh/config if it doesn’t exist already, and append this to it:

Host your_app_name
    HostName your_server_ip
    User nick
    IdentityFile ~/.ssh/id_rsa

Now, in a local terminal try:

$ ssh your_app_name

And bada-bing-bada-boom, you should be SSH’ed.

Step 8: Set Up a Firewall to Accept SSH Connections

In server terminal, as your new user:

$ sudo ufw allow OpenSSH
$ sudo ufw enable
Step 9: Install Nginx

Still in server terminal:

$ sudo apt-get install nginx
$ sudo ufw allow 'Nginx Full'
$ sudo ufw status

The output of the last command should show allow rules for both OpenSSH and “Nginx Full”.

Step 10: Configure DNS

Change your DNS settings to point your domain name (both @ and www) to your server’s IP address.

Step 11: Set up an Nginx Server Block for Your Site

In server terminal:

$ sudo nano /etc/nginx/sites-available/

Then copy/paste the following into the file. Remember to replace the your_domain placeholders with your URL. (The upstream points at port 4000, matching the http: [port: 4000] setting in the prod config later in this guide.)

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream phoenix {
    server 127.0.0.1:4000;
}

server {
    listen 80;
    server_name your_domain www.your_domain;

    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;

    location / {
        allow all;

        # Proxy Headers
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Cluster-Client-Ip $remote_addr;

        # WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_pass http://phoenix;
    }
}
Save and close this file. Now create a symlink of this file in /etc/nginx/sites-enabled:

$ sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/
Step 12: Change Hash Bucket Memory Config

In server terminal:

$ sudo nano /etc/nginx/nginx.conf

Uncomment the line “server_names_hash_bucket_size 64;”, then save and close the file.

Test the new config to make sure it works, then if all is good, restart Nginx:

$ sudo nginx -t
$ sudo systemctl restart nginx
Step 13: Install Certbot and Get an SSL Cert for Your Site

In server terminal:

$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx
$ sudo certbot --nginx -d -d

Finish the prompts. When it asks about redirecting all requests through HTTPS, I select yes, because why not, right? Now make sure certbot can auto-renew by running the following command and checking that no errors occur.

$ sudo certbot renew --dry-run

The python-certbot-nginx program installed a cron job at /etc/cron.d/certbot to check every day if the certs need to be updated, which is pretty cool.

You should now be able to go to your domain and view the default Nginx page, but, you know, securely.

Step 14: Install Erlang, Elixir, Hex, and NPM

In server terminal:

$ cd ~
$ wget
$ sudo dpkg -i erlang-solutions_1.0_all.deb
$ sudo apt-get update
$ sudo apt-get install esl-erlang
$ sudo apt-get install elixir
$ mix local.hex
$ sudo apt-get install npm
$ sudo npm install -g n
$ sudo n stable

This last command should install the latest stable version of node. It may be preferable to use NVM instead of n though. Not sure, leave a comment if you know better.

Step 15: Install Postgresql

In server terminal:

$ sudo apt-get install postgresql postgresql-contrib
$ sudo -u postgres createuser --interactive

For your postgres username, make sure to use the same thing as your Ubuntu username. In this guide, I’ve been using “nick”.

Also be sure to type Y to make the new user a superuser.

Next, create a db of the same name as the new user:

$ sudo -u postgres createdb nick

Create your app’s prod db. You can find the db name in config/prod.secret.exs file of your app.

$ sudo -u postgres createdb your_prod_db_name

Now open a psql session. If you did the previous steps correctly, this should work no problem:

$ psql

Set the password for your Postgres user then exit psql:

$ \password nick
$ \q

Add the credentials you just created to your config/prod.secret.exs file as your db username and password.
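For reference, a minimal prod.secret.exs might look something like this (the username, password, and database name are placeholders; the file Phoenix generated for you will also contain your endpoint’s secret_key_base):

```elixir
# config/prod.secret.exs (sketch; keep this file out of version control)
use Mix.Config

config :your_app, YourApp.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "nick",
  password: "your_db_password",
  database: "your_prod_db_name",
  pool_size: 15
```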

Step 16: Add Distillery and Edeliver to your App’s Dependencies

Have a look at the GitHub pages of Edeliver and Distillery to see what to add to your mix.exs file. Run mix deps.get when done.

Step 17: Set up your Prod Config

Change config/prod.exs to the following:

use Mix.Config

config :your_app, YourAppWeb.Endpoint,
  http: [port: 4000],
  url: [host: "", port: 80, scheme: "https"],
  cache_static_manifest: "priv/static/cache_manifest.json",
  server: true,
  code_reloader: false,
  root: ".",
  check_origin: false,
  version: Application.spec(:your_app, :vsn)

config :logger, level: :info

import_config "prod.secret.exs"
Step 18: Copy prod.secret.exs to your server

In server terminal:

$ cd ~ && mkdir app_config

In local terminal:

$ scp ~/your_app/config/prod.secret.exs

Run that command anytime you make changes to the prod.secret.exs file.

Step 19: Configure Distillery and Edeliver

In a local terminal in your app’s base directory, run the following command to generate Distillery’s config file.

$ mix release.init

Next, also locally, in the base directory of your app, create a directory named “.deliver”. Inside this directory make a file called “config”, and copy the following for its contents:




pre_erlang_get_and_update_deps() {
  local _prod_secret_path="/home/nick/app_config/prod.secret.exs"
  if [ "$TARGET_MIX_ENV" = "prod" ]; then
    __sync_remote "
      ln -sfn '$_prod_secret_path' '$BUILD_AT/config/prod.secret.exs'
    "
  fi
}

pre_erlang_clean_compile() {
  status "Installing NPM dependencies"
  __sync_remote "
    [ -f ~/.profile ] && source ~/.profile
    set -e

    cd '$BUILD_AT/assets'
    npm install $SILENCE
  "

  status "Building static files"
  __sync_remote "
    [ -f ~/.profile ] && source ~/.profile
    set -e

    cd '$BUILD_AT'
    mkdir -p priv/static
    cd '$BUILD_AT/assets'
    npm run deploy $SILENCE
  "

  status "Running phx.digest"
  __sync_remote "
    [ -f ~/.profile ] && source ~/.profile
    set -e

    cd '$BUILD_AT'
    mix phx.digest $SILENCE
  "
}
Add the line “.deliver/releases” to the end of your .gitignore file, then make a new commit, and push if you want to. Almost done!

Step 20: Build your first Release with Edeliver

In a local terminal from your app’s base directory:

$ mix edeliver build release

This will take a while. It uses the version number specified in your mix.exs file. If the build is successful, deploy it to your server, start up the app, and run your db migrations:

$ mix edeliver deploy release to production
$ mix edeliver start production
$ mix edeliver migrate production

Your site should be up and running at your url! If it’s not, you can check the logs on your server at ~/app_releases/your_app/var/log/erlang.log.1

Step 21: Hot Upgrades

If you want to push upgrades to your app that don’t require a restart, you can use the following commands:

$ mix edeliver build upgrade --with=<last version>
$ mix edeliver deploy upgrade to production --version=<new version>

Replace “<last version>” and “<new version>” with your actual last and new versions, like: mix edeliver build upgrade --with=0.0.1

Step 22: Do a Little Dance!

That’s it! Everything should be working swimmingly. If not, leave a comment, and I’d be happy to try to help you troubleshoot.

Choosing a Tech Stack for a Multiplayer Action Game: Part 3 – The Apps

Ok, we’ve made it to the end of this Tech Stack article series – or the beginning of the end anyway. In part 1, I discussed why I chose Elixir for the backend, and in part 2 I talked about why I’m using plain old JavaScript for the web frontend. All that’s left now is to choose how we’re going to build the mobile apps.

The Problem is Duplication. The Problem is Duplication.

I don’t want to write two different code bases that do ostensibly exactly the same thing. As a matter of fact, I think it’s a load of malarkey that I can’t just use my browser js, pack it in a native app wrapper, and send it to the App Store. Have I fallen through a wormhole which has left me stuck in 2010 or something? How is this still a problem?!

Questions for another day I suppose. The unfortunate fact is, this is still very much a problem that must be dealt with. The most obvious solution is to just write my frontend code two more times – once in Objective C for iOS and once in Java for Android. This triples the size of my frontend code base which increases the opportunities for bugs to show up. Boo!

On the other hand, writing in native languages will guarantee that my app is relatively small and fast. There are also mature and widely-used development environments and deployment pipelines, so I would (theoretically) have no trouble developing and compiling my app.

An Abundance of Alternatives, a Dearth of Good Ones

The flavor of the month for write-once-deploy-everywhere frameworks is of course React Native. Other popular options include Cordova, Xamarin, PhoneGap, Ionic, and Titanium. They all have similar but slightly different target audiences.

Right off the bat, I’m leaning toward either React Native or Xamarin, since they both compile down to native code, and let’s face it, using native elements makes for the best apps. The problem is regardless of which framework I choose, I think the game code itself will have to be rewritten natively (using Quartz/Core Graphics for iOS and probably the Canvas API for Android).

Since that is the case, I’d really only be using one of these frameworks for 2 or 3 screens, a couple of modals, and a menu. Is the added complexity worth it? I’d also have to take into consideration how difficult it is to integrate a fully native component into the framework (if it’s even possible).

Why Not Use Unity?

I have a sinking feeling that I might be asking myself this question over and over again. Why oh why didn’t I just use Unity? Unity makes it pretty simple to deploy your game to any platform from just one code base. Learning to use Unity has its own overhead, but perhaps it would have been worth it.

I guess my main reason for bypassing Unity is they stopped supporting their web player. Even the fact that they had a web player in the first place is kind of off putting. I mean, my ActionScript 3 skills aren’t opening any doors for me anymore, are they? Unity is transitioning to WebGL, but it would basically translate the C# code I wrote in Unity to C++, then again to JavaScript and WebGL… This does not sound like a recipe for stability or performance.

Notable Mentions

Apparently, according to this article, Dropbox writes some logic code in C++ then shares the .dll files across their iOS and Android apps. That’s pretty cool. Instead of abstracting on top of native code, they went down a level. Smart.

In this Ars Technica article, they explain how Google created a tool called J2ObjC which translates Java to Objective C (not Swift). They open sourced the tool in an effort to promote Android-first development. This tool is not meant for any UI, platform-specific translation though.

And the Winner Is…

Given the added size and complexity of using a framework compared to the relative simplicity of what I’d be using them for, I think the best option forward is to write two native apps and hope for the best.

Choosing a Tech Stack for a Multiplayer Action Game: Part 2 – The Frontend

In Part 1 of this three part series, I explained why I chose Elixir for the back end of our flagship game, Breakfast of Champions. Here in Part 2, I’ll talk about the decision process for the frontend tech.  There will be three frontends because we’re planning on making the game playable through browsers, an iPhone app, and Android app.

Unlike the backend tech, the frontend seems pretty straightforward, right? For browsers, JavaScript; for iOS, Swift; for Android, Java. However, there is still a lot of gray area for each of these platforms, so let’s get into it.

The Browser

One of the nice things about Phoenix (the Elixir framework I’m using for the backend) is that it comes by default with Brunch, a JavaScript build tool, so you can write ES6 code right out of the box. However, another nice thing about Phoenix is that it doesn’t force this decision on you. You can easily swap Brunch out.

Since I was already making the jump from Ruby to Elixir, I also thought about making the jump from JavaScript to Elm, a statically typed, functional language that compiles down to JavaScript. It claims to never have runtime exceptions! What?! Crazy.

As enticing as Elm sounds though, I am worried about performance. I’m sure its performance is good or even great for “normal” web apps, but I’m making a game. I’ll be running a game loop ~60 times per second, and I’d rather have full control over the js, as opposed to letting Elm compile the js for me. I will definitely be trying Elm out in future projects though. Also, if someone has tried out Elm for a game before, please let me know how it worked out!

Framework or No Framework?

Now that I’ve decided to go with js, the next question is whether to use a framework. There are a ton of game frameworks out there, and I looked at many of them. The questions I was trying to answer about each framework were:

  1. How big is this framework? Will it slow down my game’s load time considerably?
  2. What features does it offer? How many of the features that it offers will I actually use?
  3. What’s the documentation like? Will I be left scratching my head and diving through source code to find my answers? Is there an active community around this framework?

In the end, I decided Breakfast of Champions will be too simple of a game to warrant the use of any framework.

Winner: Vanilla ES6 JavaScript
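With plain ES6 as the winner, the heart of the browser client will be a fixed-timestep game loop driven by requestAnimationFrame. Here’s a minimal sketch of the idea — the names and the placeholder physics are made up for illustration, not actual Breakfast of Champions code:

```javascript
// Fixed-timestep loop: the simulation advances in constant-dt increments,
// while rendering happens once per animation frame.
const STEP_MS = 1000 / 60; // ~60 simulation ticks per second

// Pure update: advance the game state by one fixed step (placeholder physics).
function update(state, dtMs) {
  return { x: state.x + state.vx * (dtMs / 1000), vx: state.vx };
}

// Consume accumulated real time in fixed steps; returns the new state and
// the leftover time, so the simulation stays deterministic across frame rates.
function advance(state, accumulatorMs) {
  while (accumulatorMs >= STEP_MS) {
    state = update(state, STEP_MS);
    accumulatorMs -= STEP_MS;
  }
  return { state, accumulatorMs };
}

// Browser driver (requestAnimationFrame only exists in the browser).
function startLoop(initialState, render) {
  let state = initialState;
  let last = performance.now();
  let acc = 0;
  function frame(now) {
    acc += now - last;
    last = now;
    ({ state, accumulatorMs: acc } = advance(state, acc));
    render(state);
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```

Keeping `update` pure and the timestep fixed means the simulation behaves the same no matter how fast the browser delivers frames — which will matter later, once the same state has to line up with what the server broadcasts.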

iOS and Android

Much to my chagrin, canvas rendering performance on iOS and Android is bad. Like really bad. I haven’t found any good benchmarks online, but I’m talking like at least an order of magnitude worse than on OS X and Windows. I would love to be able to just stuff the Breakfast of Champions site into a web view, then bing, bang, boom, I’ve got an iOS and Android app for my game for very little extra work. Sadly, because of performance, this is not an option.
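For what it’s worth, you don’t need a formal benchmark to put a rough number on this — sampling requestAnimationFrame timestamps for a few seconds gives a serviceable FPS estimate. This is just an illustrative snippet, not a benchmark I’m citing:

```javascript
// Rough FPS meter: given a list of frame timestamps (ms), compute the
// average frames per second over the sampled window.
function averageFps(timestampsMs) {
  if (timestampsMs.length < 2) return 0;
  const elapsedMs = timestampsMs[timestampsMs.length - 1] - timestampsMs[0];
  return ((timestampsMs.length - 1) / elapsedMs) * 1000;
}

// In the browser: sample ~120 frames, then log the result.
function measure(frames = 120) {
  const stamps = [];
  function tick(now) {
    stamps.push(now);
    if (stamps.length < frames) requestAnimationFrame(tick);
    else console.log(`average fps: ${averageFps(stamps).toFixed(1)}`);
  }
  requestAnimationFrame(tick);
}
```

Running something like this while the canvas is drawing a representative scene, on a desktop browser versus a mobile web view, makes the gap concrete.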

So what are my options? Well, it’s clear I’m going to have to go native for the actual game portion of my code. Swift and Java, here I come! But for the home screen and menus and such, there are some options. Perhaps I will use this React Native that startups are losing their freaking minds over. Maybe I’ll decide that’s overkill and just write some vanilla Swift and Java like I did with the browser js.

Anyway, I plan on creating the browser version first and then duplicating the js class design in the mobile languages, so I’ll punt on the React Native vs vanilla decision for now. What do you think? Is React Native worth using for a few screens and a menu? Come on back for part 3 of this series to see what happens.

Choosing a Tech Stack for a Multiplayer Action Game: Part 1 – The Backend

Ah yes, starting a new project. All roads are open and the potential is limitless… I’ve got a rough idea of the thing I want to build, but how to choose a tech stack? The cartoon above captures the essence of the choice really. In this three part series, I’ll go through my thought process of choosing the tech stack, starting with the backend and finishing with the frontend.

My Background

I’ve been a Rails developer now for quite some time, and increasingly it’s started to feel like the first picture. Now don’t get me wrong, I love Rails and Ruby. I think they’re the most programmer-friendly framework/language combo as far as community, ease of plugging in libraries, code readability, and syntactic sugary goodness go. Perhaps this is familiarity beginning to breed contempt, maybe it’s the seven-year itch, but I’ve been looking to build a project in something else, and this game seems like the perfect opportunity.

And so the search begins…

As any developer with zir head on straight will tell you, to get the right tech stack you need to know the requirements. The project I’m writing about is our flagship game, Breakfast of Champions. It will be similar to many of the popular .io games popping up these days:

  1. Accessible through the browser
  2. Accessible through iPhone and Android apps
  3. Multiplayer

Because this game is multiplayer, I’ll need to run it on a central server and broadcast out the game state to all the players involved. To accomplish this, I’ll be using web sockets.
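From the browser’s point of view, that broadcast model looks roughly like this — sketched here with a raw WebSocket for clarity (Phoenix ships its own JS channel client that wraps this; the message shape below is invented for illustration):

```javascript
// Each server tick broadcasts a snapshot of the game state; the client
// replaces its local copy. The message shape here is made up.
function applySnapshot(local, message) {
  if (message.type !== "state") return local; // ignore non-state messages
  return { ...local, tick: message.tick, players: message.players };
}

// Browser wiring with a plain WebSocket. Phoenix channels would handle
// topics, joins, and reconnects on top of this, but the flow is the same.
function connect(url, onState) {
  let local = { tick: 0, players: {} };
  const ws = new WebSocket(url);
  ws.onmessage = (event) => {
    local = applySnapshot(local, JSON.parse(event.data));
    onState(local);
  };
  return ws;
}
```

Keeping the snapshot-apply step as a small pure function also makes it easy to unit test the client without a live socket.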

Great, now that that’s sorted, let’s take a look at:

The Backend

C++ right? It’s a game, you should use C++. C++ is used for like every game server. Well, yeah, that’s true, but I guess I’m a spoiled Ruby developer, and I really just don’t want to code in C++. Does that make me a horrible game dev? TBD I guess. Someday I will probably make a game that has a C++ backend server… But it is not this day.

So what’s left? Well, the fine folks at Hashrocket did us all a favor and compiled this excellent article on websocket performance by language. The results had C++ coming out on top, Clojure a not-terribly-close second, and Go and Elixir roughly tied for third.

Given that Phoenix, the Elixir web framework, has channels built in, that the language looks sort of Ruby-ish, and that the performance is passable, I think I’ll go with Elixir!

Next up, the frontend…

What do you think? Am I in for a world of pain or is Elixir the best thing since sliced arrays?