Planet Ruby Last Update: Saturday, 09. September 2017 07:30

Maciej Mensfeld (Running with Rails) @ Kraków › Poland - Sep 06

Karafka (Ruby + Kafka) framework 1.0.0 Release Notes


Note: These release notes cover only the major changes. To learn about various bug fixes and changes, please refer to the change logs or check out the list of commits in the main Karafka repository on GitHub.

It’s been over a year since the last major Karafka framework release (0.5). During that time, we’ve managed to implement plenty of new features and fix so many bugs that I don’t know where to start…

Today I’m pleased to announce that we’re ready with the final 1.0 release.

Code quality

The quality of our work has always been important to us. A few months ago we made a transition from polishgeeks-dev-tools to Coditsu. It allowed us to find and fix several new code offenses and to raise the quality of the code and documentation. Here are some screenshots of where we were and where we are now:

There are still some things to be fixed and made better. That said, this is the best release we’ve made, not only in terms of features but also in terms of the quality of the code and the documentation.

For more details about the quality of the whole Karafka ecosystem, feel free to visit our Karafka Coditsu organization page.


There are more and more companies taking advantage of Karafka as their async messaging backbone. Many of the new features were either feature requests or pull requests (including some from Shopify and other big players) that cover both performance and functionality issues in Karafka. It’s amazing to look at all the use cases that people cover with this framework.

Batch processing

Believe it or not, up until now Karafka didn’t have batch processing functionality. It could receive messages in batches, but each message had to be processed separately. At the beginning we wanted to imitate the HTTP world, where (most of the time) a single request equals a single controller instance logic execution.

It wasn’t the best idea ever. Or maybe it was at the time, but we’ve clearly noticed that it took away a huge part of the possibilities that Kafka brings to the table.

Luckily those days are gone! From now on you can not only receive messages in batches (which makes Karafka several times faster), but you can also process them that way. The only thing you need to do is set the batch_processing config option to true.

You can do this either on an app configuration level:

class App < Karafka::App
  setup do |config|
    config.batch_consuming = true
    config.batch_processing = true
    # Other options
  end
end

or per each topic route you define:

App.routes.draw do
  consumer_group :events_consumer do
    batch_consuming true

    topic :events_created do
      controller EventsCreatedController
      backend :inline
      batch_processing true
    end
  end
end

Once you turn this option on, you will have access to a method called #params_batch that will contain all the messages fetched from Kafka in a single batch.

It’s worth pointing out that a single message batch always contains messages from the same topic and the same partition.

class EventsController < ApplicationController
  def perform
    # This example uses a batch #import method
    # (as provided e.g. by the activerecord-import gem)
    Event.import params_batch.map { |param| param[:event] }
  end
end

Keep in mind that params_batch is not just a simple array. The messages inside are lazily parsed upon first use, so you shouldn’t flush them directly into the DB.
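The lazy-parsing behaviour can be modelled in plain Ruby. This is a simplified sketch, not Karafka’s actual internals – the LazyParams class and the JSON payloads are illustrative:

```ruby
require 'json'

# Simplified model of a lazily parsed message: the raw payload is only
# JSON-decoded the first time its data is accessed.
class LazyParams
  attr_reader :parse_count

  def initialize(raw)
    @raw = raw
    @parse_count = 0
  end

  def [](key)
    @parsed ||= begin
      @parse_count += 1
      JSON.parse(@raw)
    end
    @parsed[key.to_s]
  end
end

batch = [
  LazyParams.new('{"event":"created"}'),
  LazyParams.new('{"event":"removed"}')
]

# Nothing is parsed until we touch a message's data:
first_event = batch.first[:event] # parsing happens here, exactly once
```

Only the first message gets parsed above; the second stays raw until someone asks for its data, which is why flushing the batch into a DB without parsing first would store raw payloads.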

Note: For more details about processing messages, please visit the Processing messages section of the Karafka wiki.

New routing engine and multiple topic consumer groups

The routing engine provides an interface to describe how messages from all the topics should be received and processed.

The Karafka routing engine used to be trivial. The only thing you could really do was define topics and their options. From now on, there are two modes in which the routing API can operate:

  • Karafka 0.6+ consumer group namespaced style (recommended)
  • Karafka 0.5 compatible consumer group per topic style (old style)

With the 0.6+ mode, you can define consumer groups subscribed to multiple topics. This allows you to group topics based on your use cases and other factors. It also enables overwriting most of the default settings, in case you need a per-consumer-group specific setup (for example, to receive data from multiple Kafka clusters).

App.consumer_groups.draw do
  consumer_group :group_name do
    topic :example do
      controller ExampleController
    end

    topic :example2 do
      controller Example2Controller
    end
  end
end

Note: For more details about routing, please visit the Routing section of the Karafka wiki.

#topic reference on a controller level

There are several changes related to the topic itself. The biggest one is its assignment to a controller class, not to a controller instance. This may not seem significant, but it is. It means that you should no longer use the same controller for handling multiple topics. You can still use #topic from your controller instance (no need to do self.class.topic) – it’s just an alias.

The second big change is that the topic’s owning consumer group can be referenced from the topic as well. This allows you to discover and programmatically access all the routing details you need, just by playing with the topic and consumer group objects:

# From the controller instance level
topic.consumer_group.class #=> Karafka::Routing::ConsumerGroup
topic.consumer_group.name  #=> 'commit_builds'
topic.name                 #=> 'commit_builds_scheduled'

# From the console / outside of the controller scope
App.consumer_groups.count              #=> 3
App.consumer_groups.first.name         #=> 'commit_builds'
App.consumer_groups.first.topics.count #=> 5

#params_batch messages with additional Kafka message details

Each Kafka message you receive now contains the following extra attributes received from Kafka:

  • partition
  • offset
  • key
  • topic

IMHO the most interesting one is the partition key, which can be used when applying ordered changes to any persistent models (the key can be used to ensure proper delivery order via Kafka’s guaranteed per-partition ordering):

def perform
  params_batch.each do |param|
    User.where(id: param[:key]).update!(param[:user])
  end
end
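The per-key ordering guarantee works because a producer maps a given key to a fixed partition. A plain-Ruby sketch of that mapping – the CRC32 scheme here is purely illustrative, real Kafka clients typically use murmur2 hashing:

```ruby
require 'zlib'

PARTITION_COUNT = 8

# A key always hashes to the same partition, so every message produced
# with that key lands in one partition and is consumed in order.
def partition_for(key)
  Zlib.crc32(key) % PARTITION_COUNT
end

# Three updates for the same entity...
keys = ['user-42'] * 3
partitions = keys.map { |key| partition_for(key) }
# ...all map to a single partition, preserving their relative order.
```

This is why keying messages by, say, a user id lets you apply updates for that user strictly in the order they were produced.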

#params_batch and #params lazy evaluation

params_batch is not just a simple array. The messages inside are lazily parsed upon first use, so you shouldn’t flush them directly into the DB. To parse all the messages at once, use the #parsed method on the params batch:

class EventsController < ApplicationController
  def perform
    # #parsed forces parsing of all the messages in the batch at once
    Event.import params_batch.parsed.map { |param| param[:event] }
  end
end

Parsing will also be performed automatically if you decide to map parameters (or use any other Enumerable module method):

class EventsController < ApplicationController
  def perform
    # #map triggers parsing of each message it touches
    User.import params_batch.map { |param| param[:user] }
  end
end

Karafka does not parse all the messages at once for performance reasons. There are cases in which you might be interested only in the last message in a batch; on such occasions it would be wasteful to parse everything.

You can use this feature to pre-filter unparsed data based on partition, topic, or any other non-data-related aspects:

def perform
  # In this example, we will ignore non-existing users data
  # without even parsing their details.
  # Casting to an array will disable the automatic parsing upon iterating,
  # so when we decide to fetch user data, we need to use the #retrieve! method
  ids_from_partition_key = params_batch.to_a.map { |param| param[:key] }
  existing_users_ids = User.where(id: ids_from_partition_key).pluck(:id)

  params_batch.to_a.each do |param|
    param[:parsed] #=> false
    next unless existing_users_ids.include?(param[:key])
    # Some heavy parsing happens here
    param.retrieve!
    param[:parsed] #=> true
    User.where(id: param[:key]).update!(param[:user])
  end
end

Long running persistent controllers

Karafka used to create a single controller instance for each received message. This was one of the reasons why it had quite a big memory footprint. From now on (unless disabled via the persistent config flag), Karafka will create and use a single object for each topic partition, up until its shutdown.

This change not only reduces memory and CPU usage, but also allows cross-batch aggregations. One of the use cases could be batching the DB insert process, so that flushing is performed only when we reach a certain buffer size:

class EventsController < ApplicationController
  FLUSH_SIZE = 1000

  def perform
    buffer.concat(params_batch.map { |param| param[:event] })
    if buffer.size >= FLUSH_SIZE
      data = buffer.shift(FLUSH_SIZE)
      Event.import(data)
    end
  end

  private

  def buffer
    @buffer ||= []
  end
end

Note: the example above is simplified. You probably want to handle flushing the buffer on process shutdown as well.
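One simple way to cover the shutdown case in plain Ruby is an at_exit hook that drains whatever is left in the buffer. A sketch under stated assumptions – BufferedWriter and its in-memory flushed sink are illustrative stand-ins, not Karafka API:

```ruby
# Shutdown-safe buffering: flush on a size threshold during normal
# operation, and drain leftovers via at_exit when the process stops.
class BufferedWriter
  attr_reader :buffer, :flushed

  def initialize(flush_size)
    @flush_size = flush_size
    @buffer = []
    @flushed = []
    at_exit { flush! } # covers the process shutdown case
  end

  def push(item)
    @buffer << item
    flush! if @buffer.size >= @flush_size
  end

  def flush!
    # Stand-in for a DB import of the buffered rows
    @flushed.concat(@buffer.shift(@buffer.size))
  end
end

writer = BufferedWriter.new(3)
5.times { |i| writer.push(i) }
# The first 3 items were flushed when the threshold was hit; the
# remaining 2 stay buffered until the at_exit hook fires.
```

In a real consumer you would also want to flush on rebalance/partition revocation, not only on process exit, so no batch is lost when partitions move between processes.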

Encryption and authentication using SSL and SASL support

Karafka uses the ruby-kafka driver to talk to Kafka. Now you can embrace all its encryption and authentication features. All the related configuration options are described here.

Limited consumer groups execution from a single process

One of the biggest downsides of Karafka 0.5 was its inability to scale per consumer group. Each server process was spinning up all the consumer groups from the routing. This was OK for smaller applications, but not enough for bigger systems. The Karafka 1.0 server allows you to specify which consumer groups you want to run in a given process. This means you can easily scale your infrastructure together with your Kafka traffic.

Given a set of consumer groups like this one:

App.consumer_groups.draw do
  consumer_group :events do
    # events related topics definitions
  end

  consumer_group :users do
    # users related topics definitions
  end

  consumer_group :webhooks do
    # webhooks related topics definitions
  end
end

you can now run them all together:

# Equivalent to: bundle exec karafka server --consumer-groups=events users webhooks
bundle exec karafka server

in separate processes:

bundle exec karafka server --consumer-groups=events --daemon --pid=./pids/
bundle exec karafka server --consumer-groups=users --daemon --pid=./pids/
bundle exec karafka server --consumer-groups=webhooks --daemon --pid=./pids/

or in a mixed mode, where some of the processes run multiple groups:

bundle exec karafka server --consumer-groups=events --daemon --pid=./pids/
bundle exec karafka server --consumer-groups=users webhooks --daemon --pid=./pids/

Multi process management thanks to Capistrano-Karafka

In reference to the previous feature, Capistrano-Karafka has been updated as well. It now supports multi-process, multi-group and single-group process deployment flows:

# Exemplary Capistrano deployment Karafka definitions
set :karafka_role, %i[karafka_small karafka_big]

set :karafka_small_processes, 1
set :karafka_small_consumer_groups, %w[
]

set :karafka_big_processes, 4
set :karafka_big_consumer_groups, [
  'group_a group_b',
  'group_c group_d',
]

server '', roles: %i[karafka_small]
server '', roles: %i[karafka_big]

Processing backends (Inline and Sidekiq)

Karafka is no longer bound to Sidekiq. There are cases in which Sidekiq can be really helpful when processing messages (reentrancy, thread scaling, etc.), but for many others it was just redundant (receiving from one queue and pushing back to another). The default processing mode for Karafka 1.0 is the :inline mode. It means that messages are processed right after they are fetched from Kafka.

If you want to process your Kafka messages automatically in Sidekiq (without having to worry about workers or anything else), please visit the Karafka-Sidekiq-Backend README.

JRuby support

Thanks to a few small changes, Karafka can be executed with JRuby 9000.


Incompatibilities

Moving forward means that, from time to time, you need to introduce some incompatibilities. There were some breaking changes, but the upgrade process shouldn’t be that hard. We will cover it in a separate article soon. Here are the most important incompatibilities you might encounter during the upgrade:

  • Default boot file has been renamed from app.rb to karafka.rb
  • Removed worker-glass as a dependency (it is now an independent gem – if you use it, you need to add it to your Gemfile)
  • kafka.hosts option renamed to kafka.seed_brokers – you don’t need to provide all the hosts to work with Kafka
  • start_from_beginning setting moved into kafka scope (kafka.start_from_beginning)
  • Router no longer checks for route uniqueness – now you can define the same routes for multiple Kafka clusters and do a lot of crazy stuff, so it’s your responsibility to check uniqueness
  • Change in the way we identify topics between Karafka and Sidekiq workers. If you upgrade, please make sure all the jobs scheduled in Sidekiq are finished before the upgrade.
  • batch_mode renamed to batch_consuming
  • Renamed #params content key to value to better resemble ruby-kafka internal messages naming convention
  • Renamed inline_mode to inline_processing to resemble other settings conventions
  • Renamed inline_processing to backend
  • Single controller needs to be used for a single topic consumption
  • Renamed before_enqueue to after_received to better resemble internal logic, since for inline backend, there is no enqueuing.
  • Due to the level on which topic and controller are related (class level), the dynamic worker selection is no longer available.
  • Renamed params #retrieve to params #retrieve! to better reflect how it works.
  • Sidekiq backend needs to be added as a separate gem (Karafka no longer depends on it)

Wiki updates

We’ve spent long hours ensuring that our wiki is complete and consistent. We’ve added several new pages as well.

Other changes

Lower memory usage

We’ve managed to reduce the number of newly allocated objects by around 70%. Karafka no longer creates as many objects for each received message and message batch as it used to. It also depends on fewer gems and requires far fewer additional libraries, so the overall memory consumption is significantly lower.

Better settings management between ruby-kafka and karafka

We’ve reorganized the whole concept of passing settings between Karafka and ruby-kafka so we can adapt faster if anything changes. The internal API is also much cleaner and easier to understand.

Dry-validation FTW

All internal validations are now powered by dry-validation schemas.


In order to support different Ruby implementations, we’ve decided to use the multi_json gem, so anyone can pick the most suitable JSON parser they need.

Getting started with Karafka

If you want to get started with Kafka and Karafka as fast as possible, the best idea is to just clone our example repository:

git clone ./example_app

then, just bundle install all the dependencies:

cd ./example_app
bundle install

and follow the instructions from the example app Wiki.

16 days ago

Tobias Pfeiffer (PragTob) @ Berlin › Germany - Aug 29

Careful what you measure: 2.1 times slower to 4.2 times faster – MJIT versus TruffleRuby

Have you seen the MJIT benchmark results? Amazing, aren’t they? MJIT basically blows the other implementations out of the water! What were they doing all these years? That’s it, we’re done here right? Well, not so fast as you can infer from the title. But before we can get to what I take issue with […] 24 days ago

DockYard @ Boston, MA › United States - Jul 09

First Month at DockYard

Our summer intern shares her thoughts on her first month with us over a year ago

I came to the programming party a little late in my college career, only having a year and a half of classroom experience under my belt. Due to my limited CS background, I was rather anxious to start work at a web development startup, fearing my qualifications were not up to par. However, my feelings of inadequacy quickly subsided after starting at DockYard! I was welcomed by a comfortable learning environment and an atmosphere of positivity and productivity. My coworkers are very approachable and always willing to answer my questions. Because of DockYard’s intimate size I am able to see all sides of the business, and get to know everyone in the office.

It has now been a month since I began working and already I feel I am light-years ahead of where I started. My nerves have been replaced by an eagerness to continue learning and grow as a developer. The amount of information that has been thrown my way is rather overwhelming. I have already pored through 6 books, including Agile Web Development with Rails by Sam Ruby and Secrets of the JavaScript Ninja by John Resig, in addition to other readings on CoffeeScript, version control (git), and user stories. Not to mention my introduction to vim and all the commands that accompany it. While I cannot claim all of the information has stuck, the exposure will surely breed understanding. I have learned that it isn’t about having all the answers, but the efficiency of finding them that is important.

In addition to all the reading, I have worked on a small side project: a simple blog site (my introduction to test-driven development) that I rebuilt 3 times, each time adding more features. The saying “repetition is the mother of all learning” has certainly resonated with me after this exercise. Pair programming with others in the office has also been extremely helpful, giving me an idea of the day-to-day. Overall, I feel lucky to be at DockYard in the midst of the growing field of web development. Already I can say my experience has been invaluable and I thoroughly look forward to the rest of the summer.

over a year ago

DockYard @ Boston, MA › United States - Feb 14

Love Your lib Directory

Patterns for happy hacking over a year ago

Be sure to check out Bryan Helmkamp’s blog post on the same topic

The lib/ directory is the Red Headed Stepchild of your Rails application. Let’s discuss some conventions for keeping it clean and what should and shouldn’t go in there.

It’s not a dump


Does this look familiar? It does to me. This is what my lib/ directory looked like before I got fed up with it. That truck, that was me dumping more code into lib/.

In my experience there is one outstanding reason why code ends up getting dumped into the lib/ directory: a poor understanding of what a model is. Rails has a way of reinforcing bad habits. Perhaps because it is so easy to get going, some developers never bother to learn that a model does not in any way need to be attached to a persistence layer (i.e. ActiveRecord).

Let’s all agree to the following:

  1. All Business Logic Goes Into A Model
  2. All Models Go Into app/models

When we say “Business Logic” we are of course talking about “Application Specific Business Logic”. There is always the case of something you’re working on that is so generic it can be shared with other applications you are (or will be) working on. Or, even better, with the community in general as Open Source. That brings me to the next point.

Understanding the load path

If you have written a Rubygem, or at the very least looked through one, you know that the lib/ directory is special. The short version of the story is that Rubygems iterates over all of the libraries you have installed as gems and appends their lib/ directories onto Ruby’s load path. This is basically how gem files are exposed: when you require something, Ruby iterates through every path in the load path and gives you the first match.
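The first-match behaviour is easy to demonstrate in plain Ruby. A small sketch – the directories and the greeting.rb file name are illustrative:

```ruby
require 'tmpdir'

# Two directories each provide a file named "greeting.rb"; require
# resolves to whichever directory appears first in $LOAD_PATH.
dir_a = Dir.mktmpdir
dir_b = Dir.mktmpdir
File.write(File.join(dir_a, 'greeting.rb'), 'GREETING = "from dir_a"')
File.write(File.join(dir_b, 'greeting.rb'), 'GREETING = "from dir_b"')

$LOAD_PATH.unshift(dir_b)
$LOAD_PATH.unshift(dir_a) # dir_a now shadows dir_b, just like lib/ can shadow a gem

require 'greeting' # loads dir_a's copy; dir_b's file is never touched
```

Whichever path is prepended last wins the lookup, which is exactly the mechanism behind the shadowing trick described below.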

This is also true with Rails. After all of your gems are loaded and your application is up, Rails will append ./lib/ to your load path. Any files you put in there can now be required the exact same way gems are. This gives us an excellent place to extract general functionality into. You can even play tricks with this: in your application.rb file, put the following at the top:

$:.unshift(File.expand_path('../../lib', __FILE__))

Now in your lib directory create an ‘active_record’ directory and add a file called ‘base.rb’. Inside that file add the following:


Load up your Rails app and watch it throw an exception. Why? Because your app’s lib/ directory was prepended to the load path, and when the lookup for active_record/base happened, the first match was in your app’s lib/ instead of the proper gem. This is, of course, more of an interesting hack than anything really useful, but it does a good job of demonstrating how Rubygems’ lookup happens.

Use initializers for initializing, that is all

I have seen developers dump code into initializers that has no business being there. Yes, it loads and it works; that is not the point. We have conventions for a reason. Any code that you feel needs to go into an initializer, yet has nothing to do with actually setting preferences or something of that manner, should almost always go into the lib/ directory. If you must monkey patch, put it into the lib/ directory. If you are creating a new class or module that has no business being in app/models, put it into the lib/ directory.

Using lib/ to extend core, stdlib, or a gem

Far too often I’ve needed to extend a class that is defined outside of my project. There are a few ways to deal with this. You can use a Composite to define a new class that you can then play around with. The downside is that I sometimes want to modify a class that is being inherited by other classes. This is when I think it is appropriate to Monkey Patch.

The pattern I have fallen upon is to define a gem_ext/ directory and a gem_ext.rb file in lib/. I then make sure the extensions are loaded using an initializer. For lack of a better term, I call this lib_loader.rb. Let’s start with the loader.

# config/initializers/lib_loader.rb

require 'gem_ext'

Simple enough. Now for this example I’ll use a Haml custom filter I wrote. This filter allows me to write Handlebars templates in my views like so:

-# app/views/home/show.html.haml
:handlebars
  // handlebars code goes here

Now I can easily add handlebar templates to any haml file. This is how I did it.

Under lib/gem_ext I defined a haml/ directory and a haml.rb file. Then I defined haml/custom_filters.rb, and inside that file I added:

# lib/gem_ext/haml/custom_filters.rb

module Haml::Filters
  module Handlebars
    include Base

    def render_with_options(text, options)
      type = " type=#{options[:attr_wrapper]}text/x-handlebars#{options[:attr_wrapper]}"
      <<-HTML
<script#{type}>
  #{text.rstrip.gsub("\n", "\n    ")}
</script>
      HTML
    end
  end
end

Now in haml.rb I added

# lib/gem_ext/haml.rb

require 'gem_ext/haml/custom_filters'

And finally in gem_ext.rb I added

# lib/gem_ext.rb

require 'gem_ext/haml'

This gives me a very clean approach to extending classes without worrying about muddying up the load path with name collisions or other surprises. In addition this pattern can be repeated for Core and Stdlib classes in core_ext and stdlib_ext respectively.
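The same idea carries over to core_ext: keep the patch in a named module and mix it in, instead of reopening the class ad hoc. A minimal runnable sketch – the CoreExt::StringExt module and the #shout method are illustrative names, not part of any gem:

```ruby
# Sketch of the core_ext pattern: the extension lives in a named module
# (which would sit in lib/core_ext/string_ext.rb), so the patch is easy
# to locate, require, and test in isolation.
module CoreExt
  module StringExt
    def shout
      "#{upcase}!"
    end
  end
end

# Mixing the module in beats reopening String directly: the new method's
# origin shows up in String.ancestors, so it is traceable.
String.include(CoreExt::StringExt)
```

With this in place, `"hello".shout` returns the extended behaviour, and `String.ancestors` reveals exactly where it came from.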

Using lib/ as a pattern to extracting Rubygems

A pattern I have fallen upon when wanting to extract functionality out of an app into a Rubygem is to first extract that code into the lib/ directory. From there I have a nice way to test the code in isolation. I am also forced to write the code as a class independent from my app. After I am satisfied with what I have, I can think about extracting it into an external gem.

A great example of this is something that Patrick Robertson wrote for BostonRB

We wanted to show the next upcoming event at the top of the website. All of our events are stored in a Google Calendar. Unfortunately most of the Google Calendar gems out there are crap. Patrick decided to roll his own.

You can see that the boston_rb_calendar.rb is requiring several files just like any Gem would. Because of the isolation he was able to test the class very easily.

From here, if Patrick wanted to release this as a gem it wouldn’t take too much effort. Some renaming of classes would be required but he has all of the major parts in place.

Go forth and show some <3<3<3<3

Keeping your code clean pays itself forward in many ways. The team you are a part of, or the team you are handing off to, will thank you. Heck, your future self might thank you. The patterns I’ve described here are ones that I have found success with. If you have noticed other patterns concerning the lib/ directory, please feel free to comment!

over a year ago

DockYard @ Boston, MA › United States - Feb 13

Authenticating multiple models with a strategy

Using the Strategy Pattern to clean up multiple login paths over a year ago

A current project requires that there be multiple models that can sign in and each one must use the same sign in form. The original SessionsController#create action looked like the following:

def create
  if user = (Owner.authenticate(params[:user]) || Employee.authenticate(params[:user]))
    session[:user_id]    = user.id
    session[:user_class] = user.class
    redirect_to dashboard_path
  else
    render :action => :new
  end
end

We’re using has_secure_password and rolling our own authentication. Considering that, the above was good enough. But… looking down the line for this app, it is likely we will have to support authentication for more than just two models on the same form. I also don’t like having logic in my controllers. So I decided to break this logic out, and I chose the Strategy Pattern to help.
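The core of the strategy can be exercised in plain Ruby before any Rails wiring. This is a sketch with in-memory stand-ins: Owner and Employee here are Structs with hard-coded illustrative credentials, whereas the real models use has_secure_password:

```ruby
# In-memory stand-ins for the real ActiveRecord models.
Owner = Struct.new(:email) do
  def self.authenticate(attrs)
    new(attrs[:email]) if attrs[:email] == 'owner@example.com' && attrs[:password] == 'secret'
  end
end

Employee = Struct.new(:email) do
  def self.authenticate(attrs)
    new(attrs[:email]) if attrs[:email] == 'employee@example.com' && attrs[:password] == 'secret'
  end
end

class AuthenticationStrategy
  # Try each model in turn; the first match wins, nil means failure.
  def self.authenticate(attributes = nil)
    return nil if attributes.nil? ||
                  attributes[:email].to_s.empty? ||
                  attributes[:password].to_s.empty?
    Owner.authenticate(attributes) || Employee.authenticate(attributes)
  end
end

employee = AuthenticationStrategy.authenticate(
  :email => 'employee@example.com', :password => 'secret'
)
```

Adding a third authenticatable model later means appending one more `|| Model.authenticate(attributes)` clause, while the controller stays untouched.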

I like putting all of my strategies into app/strategies. This required me to add this directory to the Rails autoload_paths. Simply open up config/application.rb (not necessary in Rails 3.1+, thanks Artur Roszczyk)

config.autoload_paths += %W(#{config.root}/app/strategies)

Next I wrote up a simple spec. Thankfully I already had the logic from the controller, so there wasn’t much work to be done here. This went into spec/strategies/authentication_strategy_spec.rb:

require 'spec_helper'

describe AuthenticationStrategy do
  context 'authenticating an owner' do
    let(:owner) { mock('Owner') }
    before { Owner.stub(:authenticate => owner) }

    it 'returns an owner' do
      AuthenticationStrategy.authenticate(:email => '', :password => 'password').should eq owner
    end
  end

  context 'authenticating an employee' do
    let(:employee) { mock('Employee') }
    before { Owner.stub(:authenticate => nil); Employee.stub(:authenticate => employee) }

    it 'returns an employee' do
      AuthenticationStrategy.authenticate(:email => '', :password => 'password').should eq employee
    end
  end

  describe 'failing to authenticate' do
    before { Owner.stub(:authenticate => nil); Employee.stub(:authenticate => nil) }

    context 'with no attributes' do
      it 'returns nil' do
        AuthenticationStrategy.authenticate.should be_nil
      end
    end

    context 'with no match for owner or employee' do
      it 'returns nil' do
        AuthenticationStrategy.authenticate(:email => '', :password => 'password').should be_nil
      end
    end
  end
end

Now it was time to make these specs green! The strategy file goes into app/strategies/authentication_strategy.rb

class AuthenticationStrategy
  def self.authenticate(attributes = nil)
    return nil if (attributes.nil? || attributes[:email].blank? || attributes[:password].blank?)
    Owner.authenticate(attributes) || Employee.authenticate(attributes)
  end
end

And finally to clean up the controller

def create
  if user = AuthenticationStrategy.authenticate(params[:user])
    session[:user_id]    = user.id
    session[:user_class] = user.class
    redirect_to dashboard_path
  else
    render :action => :new
  end
end

In the end, this may appear to be more work than necessary. Keep in mind that the app requirements will expand to support more models; the wins should be obvious considering that context. If the requirements grow to 5 or 6 models, perhaps at that point it makes sense to break the authentication up into Identities with a polymorphic association to the different models. But we’ll cross that road when we get there.

over a year ago

DockYard @ Boston, MA › United States - Aug 09

Phoenix Channels vs Rails Action Cable

comparing phoenix and rails real-time performance over a year ago

At DockYard, we transitioned our backend development from Ruby and Rails to Elixir and Phoenix once it became clear that Phoenix better served our clients’ needs to take on the modern web. As we’ve seen, Phoenix is Not Rails, but we borrow some of their great ideas. We were also delighted to give back in the other direction when Rails announced that Rails 5.0 would ship with Action Cable – a feature that takes inspiration from Phoenix Channels.

Now that Rails 5.0 has been released, we have clients with existing Rails stacks asking if they should use Action Cable for their real-time features, or jump to Phoenix for more reliable and scalable applications. When planning architecture decisions, in any language or framework, you should always take measurements to prove out your assumptions and needs. To measure how Phoenix channels and Action Cable handle typical application work-flows, we created a chat application in both frameworks and stressed each through varying levels of workloads.

How the tests were designed

For our measurements, we used the tsung benchmarking client to open WebSocket connections to each application. We added XML configuration to send specific Phoenix and Rails protocol messages to open channels on the established connection and publish messages. For hardware, we used two Digital Ocean 16GB, 8-core instances for the client and server.

The work-flows for each tsung client connection were as follows:

  • open a WebSocket connection to the server
  • create a single channel subscription on the connection, to a chat room chosen at random
  • periodically send a message to the chat room, randomly once every 10s-60s, to simulate messaging across members in the room

On the server, the channel code for Phoenix and Rails is quite simple:

defmodule Chat.RoomChannel do
  use Chat.Web, :channel

  def join("room:" <> _id, _params, socket) do
    {:ok, socket}
  end

  def handle_in("publish_msg", %{"body" => body, "user" => user}, socket) do
    broadcast!(socket, "new_message", %{body: body, user: user})
    {:reply, :ok, socket}
  end
end

class RoomsChannel < ApplicationCable::Channel
  def subscribed
    @topic = "rooms:#{rand(1..8)}"
    stream_from @topic
  end

  def publish_msg(data)
    ActionCable.server.broadcast @topic,
      body: data["body"],
      username: data["username"],
      started: data["started"]
  end
end
After establishing N numbers of rooms, with varying numbers of users per room, we measured each application’s responsiveness. We tested performance by joining a random room from the browser and timing the broadcasts from our local browser to all members in the room. As we increased the numbers of users per room, we measured the change in broadcast latency. We recorded short clips of each application’s behavior under different loads.

This simulates a “chat app”, but the work-flow applies equally to a variety of applications; real-time updates to visitors reading articles, streaming feeds, collaborative editors, and so on. As we evaluate the results, we’ll explore how the numbers relate to different kinds of applications.

Rails Memory Leaks

After creating the Rails chat application, setting up redis, and deploying the application to our instance, we immediately observed a memory leak that was visible just by refreshing a browser tab and watching the memory grow, never to be freed. The following recording shows this in action (sped up 10x):

We searched recently reported bugs around this area, and found an issue related to Action Cable failing to call socket.close when cleaning up connections. This patch has been applied to the 5-0-stable branch, so we updated the app to the unreleased branch and re-ran the tests. The memory leak persisted.

We haven’t yet isolated the source of the leak, but given the simplicity of the channel code, it must be within the Action Cable stack. This leak is particularly concerning since Rails 5.0 has been released for some time now, and the 5-0-stable branch itself has unreleased memory-leak patches going back more than 30 days.


We set the memory leak issue aside and proceeded with our tests for the following scenarios:

  • Max numbers of rooms supported by a single server, with 50 users per room
  • Max numbers of rooms supported by a single server, with 200 users per room

Note: For Phoenix, in every scenario we maxed out the client server’s ability to open WebSocket connections, giving us 55,000 users to simulate in our tests. Browser -> Server latency should also be considered when evaluating broadcast latency in these tests.

50 users per room

Rails: 50 rooms, 2500 users:

Responsiveness was speedy at 50 rooms, so we upped the room count to 75, giving us 3750 users.

Rails: 75 rooms, 3750 users:

Here, we can see Action Cable falling behind on availability when broadcasting, with messages taking an average of 8s to be broadcast to all 50 users in a given room. For most applications, this level of latency is not acceptable, so the maximum number of rooms this server can support is somewhere between 50 and 75, given 50 users per room.

Phoenix: 1100 rooms, 55,000 users (maxed 55,000 client connections)

We can see that Phoenix responds on average in 0.25s, and is only capped at 1100 rooms because of the 55,000-client limit on the tsung box.

Making sense of the results, 50 users per room

These results show that if you are building an application for small to medium sized businesses, and 75 companies visit the app at the same time, their notifications will be delayed. Given the severe degradation in performance from 50 to 75 rooms, it may also be hard to pinpoint your server density when planning for load.

For load planning, horizontal scalability will be required with 50-75 companies per server, but the reliance on Redis should be considered a central bottleneck. Horizontal scalability was the obvious choice for Ruby when it came to stateless HTTP requests, but we can see why vertical scalability becomes increasingly important for stateful connections. If you require dozens or hundreds of servers, Action Cable will still rely on a central Redis bottleneck for PubSub.

200 users per room

Rails: 8 rooms, 1600 users:

We can see Action Cable starts off with acceptable latency, but begins to quickly fall behind as broadcast latency grows with each message. Broadcast latency grows longer than 10s. Next, we upped the room count by 1, to see where the limit was.

Rails: 9 rooms, 1800 users, before availability was compromised when broadcasting:

At 200 users per room, Action Cable is unable to maintain the broadcast load and supported just 9 rooms before messages stopped arriving or subscriptions failed to establish. At these levels, the only consistent performance we could get for 200 users per room was with the server limited to 7 rooms, or 1400 users.

Phoenix: 275 rooms, 55,000 users (maxed 55,000 client connections)

We can see that Phoenix responds on average in 0.24s, and is only capped at 275 rooms because of the 55,000-client limit on the tsung box. Additionally, it’s important to note that Phoenix maintains the same responsiveness when broadcasting in both the 50 and 200 users per room tests.

Making sense of the results, 200 users per room

You may be thinking, “but my application isn’t a chat app, and only needs simple page updates”. These tests apply equally to many scenarios. Imagine you have an information system where you want to publish periodic updates to pages. This could be for a news site where visitors see new comments, or a booking site where visitors see “20 other people are currently viewing this hotel”.

This test shows that if your app receives a sudden spike in visitors and more than 7 articles have 200 or more readers, the server won’t be able to keep up with the notification demand for the popular articles or the low-traffic ones.

For a booking site, imagine 7 hotels across the country release a discount rate and customers jump online for the deal. Suddenly, you need to spin up extra servers to maintain booking notifications, and the severity of the delays becomes worse if critical functionality of your application relies on these notifications.


If memory leaks can be ruled out or addressed, the sweet-spot for Action Cable today is small workloads with few subscribers on any given topic. Going beyond this means engineering efforts and resources must be spent on managing multiple nodes and optimizing code out of channels. Higher workloads with broadcasts to more than a few dozen subscribers risks availability. With Phoenix, we’ve shown that channels performance remains consistent as notification demand increases, which is essential for handling traffic spikes and avoiding overload.

When I started my own real-time Rails features with Sync several years ago, memory leaks and consistent performance were my main fears that drove me to look elsewhere, find Elixir, and create Phoenix. The rails-core team has done a great job putting a real-time story in place, but wrangling Ruby’s lack of concurrency features will continue to be a challenge.

The source code and instructions for running these tests on your own hardware can be found here.

over a year ago

DockYard @ Boston, MA › United States - Mar 24

Rubyists Guide to Ember.js Dependencies

A dependency management primer for Rubyists living in a Gemfile-less, Ember.js world. over a year ago

At DockYard, we have a lot of Ruby on Rails experts who have adopted Ember on the frontend. One of the early hurdles a Ruby developer faces when working on an Ember.js application is dependency management. A popular mechanism for managing a Ruby application’s dependencies is the Gemfile provided by Bundler. Including a library is as easy as declaring it in the Gemfile and running bundle install:

# Gemfile
source 'https://rubygems.org'
gem 'rails', '~> 4.2.0'

For better or worse, there is no dominant, single package manager in JavaScript. Ember applications, and more specifically, those running Ember-CLI, rely on two package managers: Bower for client-side libraries and npm for server-side libraries.

In this post, I’ll provide a basic dependency management primer for those moving from Ruby to JavaScript.


Ember-CLI uses npm to manage internal dependencies. npm resembles RubyGems insofar as it allows you to install and manage third-party libraries, which in this case are Node.js programs.


Libraries for npm are referred to as “packages.” Each package has a package.json file which lists the dependencies of the library itself. In this regard, the package.json is analogous to a RubyGem’s gemspec file.


You can configure how node packages get installed via the .npmrc file. You may have one globally, per user (~/.npmrc), or per project.
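The layered lookup can be sketched in Ruby (an illustration only; `npm_config` is a hypothetical helper, and npm's real precedence also includes command-line flags and environment variables):

```ruby
# Per-project .npmrc overrides the per-user one, which overrides the global one:
# return the first layer that defines the key.
def npm_config(key, project:, user:, global:)
  [project, user, global].each do |cfg|
    return cfg[key] if cfg.key?(key)
  end
  nil
end
```

Given the same key in both the user and global files, the user value wins.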

Installing dependencies

To install an npm package, run npm install [package-name] from the command line.

This will either install the library and its dependencies into your current working directory or into one of its parent directories. Here’s how it works: if there is a node_modules/ directory or a package.json in any directory above the current working directory, packages will be installed into that directory. Otherwise, calling npm install [package-name] creates a node_modules/ directory in your current working directory and installs the packages there.
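The walk-up behavior can be sketched in Ruby (illustrative only, not npm's actual resolution code; `npm_install_target` is a hypothetical helper):

```ruby
require 'pathname'

# Climb from the current directory toward the root; the first directory that
# already has a node_modules/ or a package.json wins. Otherwise install into cwd.
def npm_install_target(cwd, existing_paths)
  Pathname.new(cwd).ascend do |dir|
    return dir.to_s if existing_paths.include?(dir.join('node_modules').to_s) ||
                       existing_paths.include?(dir.join('package.json').to_s)
  end
  cwd
end
```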

This is a slightly different mental model for Rubyists who are not used to installing gems on a per project basis; gems are generally installed into version-specific Ruby directories with the more popular version managers like rbenv or RVM.

It’s also possible to install packages globally using the --global flag when installing. This installs the package in your /usr/local/lib/ directory by default. These packages typically contain executable files and are used via the command line (such as Ember-CLI).

Your dependencies will likely have dependencies of their own. These get installed within a node_modules/ directory inside the given package. It’s a little strange the first time you navigate into node_modules/package-name/ only to find another node_modules/ directory, but that’s all it is. You’ll notice a node_modules/ directory for dependencies of global packages as well if you look in the /usr/local/lib/ directory where global packages live.

One last thing to note regarding npm installations: npm caches the libraries you pull down to prevent you from having to re-download libraries that are already on your system. You’ll find that cache at ~/.npm/.


While you’ll use npm to manage your server-side Node.js dependencies, you’ll use Bower for managing front-end assets, such as JavaScript, HTML, CSS, image, and font files.


Bower itself is an npm package. Its libraries are referred to as “components” and the end user can configure their installations via a .bowerrc file. This file specifies where dependent components will be installed, the URL where the component will be registered (its registry), and the JSON file used to define the component (bower.json by default) among other things.


The bower.json file resembles the gemspec file you find in Ruby gems. It contains the library metadata, such as the name, version, dependencies, and development dependencies for the library.

As we mentioned, components can be searched for via registries. The registry matches the name of a component with the endpoint at which it’s hosted, closely resembling the way RubyGems maps gem names to hosted gems.

Installing dependencies

When you install a Bower component via bower install [component_name], the repository will be cached locally to expedite any future installations of the component. In case you’re curious, the bower cache location is: ~/.cache/bower/.

Unlike npm, Bower components are installed “flat” as opposed to in a hierarchical manner; all of your project’s components (and their dependencies) will be installed into bower_components/ directory, by default. For example, if one of your components is dependent on the underscore.js library, both will sit side-by-side in the bower_components/ directory (remember, with npm, dependencies of dependencies are continually nested in their parent’s directory within a node_modules/ directory).
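The two layouts can be sketched in Ruby (hypothetical helpers for illustration): npm nests dependencies-of-dependencies, while Bower keeps one flat directory:

```ruby
# deps maps a library name to its direct dependencies.

# npm: each dependency's own dependencies nest inside it.
def npm_layout(name, deps, prefix = 'node_modules')
  ["#{prefix}/#{name}"] +
    deps.fetch(name, []).flat_map { |dep| npm_layout(dep, deps, "#{prefix}/#{name}/node_modules") }
end

# Bower: everything lands side by side in bower_components/.
def bower_layout(name, deps, seen = [])
  path = "bower_components/#{name}"
  seen << path unless seen.include?(path)
  deps.fetch(name, []).each { |dep| bower_layout(dep, deps, seen) }
  seen
end
```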


Here’s a quick wrap-up of the analogous files between Ruby and the JS package managers we discussed:

Description                       Ruby        JS (npm, server-side)   JS (Bower, client-side)
Term for external library         “Gem”       “Package”               “Component”
End-user configuration file       .gemrc      .npmrc                  .bowerrc
Per-library configuration file    *.gemspec   package.json            bower.json
Cache directory                   ~/.gem/     ~/.npm/                 ~/.cache/bower/

As ES2015 (formerly known as “ES6”) becomes more prevalent and JavaScript code becomes more modular and easier to share, dependency management grows in importance. Hopefully this quick primer will clear up some confusion Rubyists have as they transition from working with the Gemfile to working with the package managers JavaScript offers.

over a year ago

DockYard @ Boston, MA › United States - Feb 20

Rails Engines and Monkey Patching

A simple pattern for extending your Rails Engines in your app over a year ago

We’ve started extracting simple behavior into Rails Engines lately. An example of this is our Invitable engine. As you may have guessed, it adds invitation request support to an existing app. It’s about 50% complete right now but for the purposes of this post it will act as the example.

As an engine it has a very slim Invitation model that only expects an email attribute. A client app we’re currently building requires two additional attributes to be gathered: name and zipcode.

There is no need to overwrite the model, I just want to extend it. The cleanest thing to do is just monkey patch it.

Let’s start with writing the spec of where I want the model to be (I am using ValidAttribute; if the specs don’t look familiar, I suggest you try it to spec your validations):

require 'spec_helper'

describe Invitable::Invitation do
  it { should     have_valid(:name).when('Henry Ford') }
  it { should_not have_valid(:name).when(nil, '') }
  it { should     have_valid(:zipcode).when('02115') }
  it { should_not have_valid(:zipcode).when(nil, '', 'hello', '0211', '021156') }
end

To make this spec green there are two things that I have to do

  1. Add the name and zipcode columns to the correct table
  2. Open up the class and add the proper validations on those attributes

The first is simple. I just create a new migration and add the columns to invitable_invitations.

The second is not so straight forward. If I open up the class in the client app and attempt to add the validations like so:

module Invitable
  class Invitation
    validates :name, :zipcode, :presence => true
    validates :zipcode, :format => /^\d{5}$|^\d{5}-\d{4}$/
  end
end

The app will raise a NoMethodError exception complaining that validates is undefined. In the load path there are two app/models/invitable/invitation.rb files, and the one in the app takes precedence over the one in the engine. This is fine because you might want to overwrite the model entirely, but in this case I want to extend it. So you must explicitly require the engine’s model at the top of the app’s model.
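First-match-wins resolution is easy to sketch (an illustration of the shadowing, not Ruby's actual require implementation; `resolve` is a hypothetical helper):

```ruby
# Walk the load path in order; the first directory containing the feature wins,
# which is why the app's copy shadows the engine's.
def resolve(feature, load_path, files)
  load_path.map { |dir| File.join(dir, feature) }.find { |path| files.include?(path) }
end
```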

Thankfully the engine itself has a nice helper, called_from, that tracks its full path on the file system. In this example we access it with Invitable::Engine.called_from. This will point to the lib/invitable directory in the gem itself. Here is what I ended up with in the model:

require File.expand_path('../../app/models/invitable/invitation', Invitable::Engine.called_from)

module Invitable
  class Invitation
    validates :name, :zipcode, :presence => true
    validates :zipcode, :format => /^\d{5}$|^\d{5}-\d{4}$/
  end
end

It’s verbose and this could be better so let’s clean that up.

In my engine I’ve added a spec to spec/lib/invitable/engine_spec.rb with the following (I’m using Mocha for the stubbing)

require 'spec_helper'

describe Invitable::Engine do
  before { Invitable::Engine.stubs(:called_from).returns('/lib/invitable') }

  describe '.app_path' do
    it 'returns the path to the engine app directory' do
      Invitable::Engine.app_path.should eq '/app'
    end
  end

  describe '.controller_path' do
    it 'returns the path to the named engine controller' do
      Invitable::Engine.controller_path(:test_controller).should eq '/app/controllers/invitable/test_controller.rb'
    end
  end

  describe '.helper_path' do
    it 'returns the path to the named engine helper' do
      Invitable::Engine.helper_path(:test_helper).should eq '/app/helpers/invitable/test_helper.rb'
    end
  end

  describe '.mailer_path' do
    it 'returns the path to the named engine mailer' do
      Invitable::Engine.mailer_path(:test_mailer).should eq '/app/mailers/invitable/test_mailer.rb'
    end
  end

  describe '.model_path' do
    it 'returns the path to the named engine model' do
      Invitable::Engine.model_path(:test_model).should eq '/app/models/invitable/test_model.rb'
    end
  end
end
This looks good enough to me. Now to make it green I added the following to lib/invitable/engine.rb

def self.app_path
  File.expand_path('../../app', called_from)
end

%w{controller helper mailer model}.each do |resource|
  class_eval <<-RUBY
    def self.#{resource}_path(name)
      File.expand_path("#{resource.pluralize}/invitable/\#{name}.rb", app_path)
    end
  RUBY
end

And now in the app model I can do the following

require Invitable::Engine.model_path :invitation

module Invitable
  class Invitation
    validates :name, :zipcode, :presence => true
    validates :zipcode, :format => /^\d{5}$|^\d{5}-\d{4}$/
  end
end

Nice and clean!
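The dynamic definition above can be tried standalone; here's a self-contained sketch (FakeEngine and the naive "+ 's'" pluralization are stand-ins so the example runs without Rails or ActiveSupport):

```ruby
class FakeEngine
  def self.app_path
    '/app'
  end

  # Define controller_path, helper_path, mailer_path, and model_path,
  # mirroring the engine helper above.
  %w(controller helper mailer model).each do |resource|
    define_singleton_method("#{resource}_path") do |name|
      File.expand_path("#{resource}s/invitable/#{name}.rb", app_path)
    end
  end
end
```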

This simple pattern can be applied to the controllers, mailers, etc… any class you want to actually extend from the engine instead of overwrite entirely.

Finally, I’d like to address a question I’m sure some of you have: why not subclass? For this engine, the Invitable::InvitationsController expects a class of Invitation within the context of the Invitable module. So if I were to subclass

class Invitation < Invitable::Invitation

You would then have to subclass the controller

class InvitationsController < Invitable::InvitationsController

And because the InvitationsController is referencing InvitationMailer within the context of the Invitable module you would have to subclass the mailer

class InvitationMailer < Invitable::InvitationMailer

Finally, because you’ve subclassed the controller the mount in routes.rb becomes meaningless. If you head down the subclass path you defeat the purpose of using the engine in the first place.

over a year ago

DockYard @ Boston, MA › United States - Oct 20

Live Search with Ember & JSON API

Build a JSON API-compliant live search using Ember and Phoenix/Rails 11 months ago

Live search is a feature commonly found in applications. There are various solutions to implementing search functionality on a website, but “live search” promotes a better user experience over the traditional way of searching (i.e. type in keyword and click to view results); it’s satisfying to a user since immediate feedback is received based on what they have typed, without having to click some form of a “submit” or “search” button. It’s refreshing to see results narrow as you type more words, or broaden as you backspace to delete already-typed characters from the search box. The less work your user has to do, the better.

Live search is not a new concept whatsoever, but if you’re new to the JSON API specification and would like to follow its conventions, this may be helpful.

The specification states the following on the subject of filtering:

  • The filter query parameter is reserved for filtering data.
  • Servers and clients SHOULD use this key for filtering operations.

Given that info, we’ll go over how to set up the client-side (Ember), and the server-side (in both Phoenix and Rails) to get live search working. In the following examples, we’ll work with a Food model having a category attribute.

On the Ember side

To get started, make sure your application is using the DS.JSONAPIAdapter; it’s the default adapter in a new Ember application. This informs your application of the type of data that it should be expecting from the server. In our case, the payload will be expected to have a specific format and will be expected to contain certain keys. Check out the spec if you’d like more details on this.

Having that, we only need to add a couple of things: query parameters and the call to the server itself.

// controllers/foods.js
import Ember from 'ember';
const { Controller } = Ember;

export default Controller.extend({
  queryParams: ['category'],
  category: ''
});
// routes/foods.js
import Ember from 'ember';
const { Route } = Ember;

export default Route.extend({
  queryParams: {
    category: { refreshModel: true }
  },

  model(params) {
    return this.store.query('food', { filter: { category: params.category } });
  }
});

It’s that simple. Notice that we’re using the store’s query method and providing it with a filter. This filter must be included in the call. This will result in a GET request containing a URL encoded string with the filter query parameter:
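The encoding itself can be sketched in Ruby (`filter_query` is a hypothetical helper for illustration; it mirrors the bracketed filter[...] syntax the spec reserves):

```ruby
require 'cgi'

# Build the URL-encoded query string for a set of filters,
# e.g. { 'category' => 'pastry' } => "filter%5Bcategory%5D=pastry"
def filter_query(filters)
  filters.map { |key, value| "#{CGI.escape("filter[#{key}]")}=#{CGI.escape(value)}" }.join('&')
end
```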


Now let’s see how to set this up on the backend for a seamless integration.

On the Phoenix side

After following the library’s quick installation instructions, adding our route and schema is all we need to do in Phoenix before heading over to our controller for some filtering logic.

defmodule MyApp.FoodController do
  import Ecto.Query

  use JaResource
  use MyApp.Web, :controller

  plug JaResource

  def filter(_conn, query, "category", category) do
    from f in query,
      where: ilike(f.category, ^("%#{category}%"))
  end
end

The plug JaResource line is responsible for providing all the controller actions by default. There is no need for you to implement these actions unless you’d like to add custom logic. That’s a pretty nice time saver! Plus we can customize our controller’s behavior via the many callbacks that the library provides. JaResource conveniently provides the callback filter/4, where we can handle our custom filtering given our filter parameters. In the example, we only want to filter by category, so we add “category” as the third argument so that we get a match. You’ll have to define one of these filter callbacks for as many filter parameters as you want to pass. “Anything not explicitly matched by your callbacks will get ignored.”
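The per-key matching can be mimicked in Ruby (an analogy of the callback behavior for Rubyist readers, not JaResource's implementation; records here are plain hashes):

```ruby
def apply_filters(records, filters)
  filters.reduce(records) do |scope, (key, value)|
    case key
    when 'category'
      # case-insensitive substring match, like the ilike clause above
      scope.select { |r| r[:category].downcase.include?(value.downcase) }
    else
      scope # anything not explicitly matched gets ignored
    end
  end
end
```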

On the Rails side

After having installed the gem, like in the Phoenix section above, you’ll need to declare your routes and models. To gain the simplest form of the filter functionality, you just need to add the following filter declaration to the corresponding resource file (this will find an exact match):

class FoodResource < JSONAPI::Resource
  attributes :category

  filter :category
end

The filter will be based on the term passed in from the GET request coming from the Ember side; it will make sure that we are only returned Food records whose category value matches exactly that of the request parameter (i.e. “pastry”).

Below, I show another example that leverages the :apply option whose arguments are records (an ActiveRecord::Relation), the value to filter by, and an options hash. However, you have much flexibility on how you decide to implement your filter. The README filter section has a more comprehensive list of the possibilities.

class FoodResource < JSONAPI::Resource
  attributes :category

  filter :category, apply: ->(records, value, _options) {
    records.where('category LIKE ?', "%#{value[0]}%")
  }
end


That wraps it up! The Ember frontend and the Phoenix/Rails backends now work together to provide a live search functionality to a web application. Since we’re following the JSON API spec, there is little to no friction on either side in order to get this working as expected. Happy filtering!

11 months ago

DockYard @ Boston, MA › United States - Feb 16

Single quotes or double quotes?

An opinion on when to use the different quoting styles with some performance notes over a year ago

I have a simple rule when it comes to strings: I always start out with single quotes

'Hello world!'

When I need to interpolate in the string or add an escaped character it is time to upgrade to double quotes

"Hello #{planet}!"

"To: John Adams\nFrom: Thomas Jefferson"

Now what happens when the quote style is part of the string itself? For example, I don’t need to interpolate and the only escaped character needed is a single quote. This is when I’ve been using string expressions. A string literal of %q is the equivalent of a single quote string and a %Q is the equivalent of a double quote string. The string literals are contained within a non-alphanumeric delimiter.

# single quote
%q{Wayne's world!}

# double quote
%Q{#{name}'s world!}

# ZOMG also a double quote!
%{#{name}'s world!}

I try to follow this rule. I don’t think it saves anything other than it just looks nicer to me. A very simple (and completely unscientific) benchmark shows that the difference between the two is a wash

Update: These benchmarks may be wrong, please see the comments for more information

require 'benchmark'

Benchmark.measure { 1..10_000_000.times { a = 'hey now' } }
# =>   1.960000   0.000000   1.970000 (  1.958126)

Benchmark.measure { 1..10_000_000.times { a = "hey now" } }
# =>   1.980000   0.010000   1.980000 (  1.988363)

Any given run of this and the times would flip. The string is probably just being optimized somewhere so this benchmark is not very good. At the very least it shows that execution time is similar. Let’s see what happens when interpolating:

Benchmark.measure { 1..10_000_000.times { |i| a = "hey now #{i}" } }
# =>   6.110000   0.010000   6.120000 (  6.111669)

Now we can see a significant jump in time (over 3 times longer). Why does this take so much longer? A clue as to what is happening can be seen when we compare this benchmark to string concatenation using single quotes:

Benchmark.measure { 1..10_000_000.times { |i| a = 'hey now ' + i.to_s } }
# =>   6.490000   0.020000   6.510000 (  6.502408)

This ends up being about the same execution time as string interpolation. Before we answer the previous question let’s take a look at one more option

require 'benchmark'

Benchmark.measure { 1..10_000_000.times { |i| a = 'hey now ' << i } }
#  =>   2.990000   0.010000   3.000000 (  2.986346)

Whoa, this is much faster, more than 50% faster than interpolation and concatenation. Why? What is happening here?

What we are seeing is the difference between creating a new object and modifying an existing object. It is not as immediately obvious with string interpolation as it is with concatenation. With the append we are actually modifying the object, so there is no need to do any memory allocation.
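You can see the difference directly by checking object identity (a small sketch, not the benchmarks above):

```ruby
s = 'hey now '
before = s.object_id
s << 'world'                  # append mutates the same String in place
same_object = (s.object_id == before)

t = 'hey now '
u = t + 'world'               # + allocates a brand-new String
new_object = (u.object_id != t.object_id)
```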

There are several differences between the two styles; they aren’t always interchangeable. Most of the time the decision comes down to a styling preference, but there are certain use cases where it can make a difference. String interpolation is in Ruby as a nice convenience, but if you’re doing anything that relies upon interpolation quite heavily you may want to consider other options.

over a year ago

DockYard @ Boston, MA › United States - Oct 21

Chris McCord is a DockYarder

over a year ago

I’m very happy to announce that Chris McCord, the creator of the Phoenix Framework, has joined the DockYard team!

We’ve been working with Phoenix for nearly a year now. We love it. So much so that we dumped Ruby on Rails and doubled down on Phoenix and Elixir as the future. With Chris joining our team we hope to invest heavily in that future.

Chris’ primary responsibility at DockYard is to continue building Phoenix and make it the best choice for backend frameworks. He will be available for consulting through DockYard in limited capacity but his knowledge is already making its way through our team.

If you have a project in Phoenix we’d love to chat.

over a year ago

DockYard @ Boston, MA › United States - Nov 18

Phoenix is not Rails

Phoenix is not Rails, but some ideas are borrowed over a year ago

In his yearly recap last December, Brian went public with his plans to transition the company over to Elixir and Phoenix development. Throughout this year, he found it was a smooth transition for the team going from primarily Rails to Phoenix powered applications. On the surface, Phoenix shares some familiar conventions with Rails that let folks jump into new applications and contribute early to a project – on their way to greater mastery. Complete mastery will take a bit more practice than knowing a few shared conventions, but the similar-at-a-glance features have enticed Ruby teams to get involved, and many are delighted to get up and running quickly. Unfortunately, it has also led to wrong assumptions about Phoenix’s likeness to Rails, causing some to miss the important differences around their core philosophies.

It is common in the Ruby community to say that there are Rails developers and Ruby developers. We don’t expect this to happen with Phoenix. Although Phoenix of course introduces its own abstractions, ultimately writing a Phoenix application is writing an Elixir application. Testing Phoenix code is testing Elixir functions. This post aims to address these ideas by comparing the similarities and differences between Phoenix and Rails and why it matters.


Most of the phoenix-core team comes from a Rails-heavy background, so it’s natural we borrow some of the great ideas Rails brings to the table, such as:

  • Both focus on productivity, from client to server side
  • Both provide a default directory structure, although Phoenix simply relies on the structure imposed by Elixir applications
  • Both are MVC frameworks (Phoenix does a functional twist on the architecture though) with a router sitting on top
  • Both provide a default stack with relational databases (sqlite3 for Rails, PostgreSQL for Phoenix)
  • Both promote security best practices in their default stack
  • Both ship with a default toolkit for writing and running tests


With a few similarities come major differences. From how you structure your applications, recover from failure, debug your systems, or talk to a remote client, Phoenix takes an approach that few runtimes can offer. We embrace Elixir and OTP conventions in Phoenix so that your Phoenix application is only a component of your greater application infrastructure. This deviation from Rails has effects throughout the stack.


There is no such thing as a “Phoenix application”. Your Phoenix projects are first and foremost Elixir applications, which rely on Phoenix to provide part of their functionality. This means there is one way to build, run, and deploy your applications – the Elixir way.

Why it matters: no singletons

In Rails there is a single application that’s accessible via Rails.application. Rails runs the show, from booting and configuring the application to even running command-line tasks. As an inherent limitation of this approach, you cannot run two Rails applications side by side. If you need sharing, you need to carefully break it apart into engines and learn a new set of rules.
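The limitation boils down to a single global slot; a tiny Ruby sketch (FrameworkSketch is a stand-in for illustration, not Rails' actual implementation):

```ruby
# One process-wide application slot: registering a second app replaces the
# first, which is why two such apps can't coexist in one process.
module FrameworkSketch
  class << self
    attr_accessor :application
  end
end

FrameworkSketch.application = :app_one
FrameworkSketch.application = :app_two  # clobbers :app_one
```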

With Phoenix, nothing is global. There is no monolith. A new Phoenix application will include one Endpoint, one Router, and one PubSub Server, but you are free to add more. With no global state or global servers, you can break your application into pieces as your infrastructure grows.

Why it matters: startup and shutdown

Elixir conventions structure your projects as small composable “applications” that can be started and stopped as a unit. The trail usually goes like this (using Phoenix itself as an example):

  1. Every application has a specification, which may specify a module to invoke when the application is initialized:
  def application do
    [mod: {Phoenix, []},
     applications: [:plug, :poison, :logger, :eex]]
  end


  2. If a module is specified, the start/2 function of this module is invoked:
  defmodule Phoenix do
    def start(_type, _args) do
      Phoenix.Supervisor.start_link
    end
  end


  3. The start/2 function must return the identifier of a supervised process, such as Phoenix.Supervisor.start_link above


A similar flow happens when stopping your application. The consequence is that it doesn’t matter if you are using Phoenix or not, every application has its own and contained start/stop mechanism.

This is in stark contrast to Rails initialization, which is extremely complex and requires extensions to hijack a single, sequential initialization flow. For a Rails 4.2.2 app:

$ rails c
Loading development environment (Rails 4.2.2)
irb(main):001:0> Rails.application.initializers.length
=> 74

Those are 74 snippets of code (Ruby blocks) spread across multiple files in an unspecified order! Having control of the initialization logic is extremely important to know exactly what your app is running and to keep boot times fast.

Why it matters: monitoring and introspection

By relying on applications, you gain supervision, fault tolerance, and introspection into your running system. We can easily view our applications running as a unit, or as a whole with tools like observer:


The beauty is, your project will start as a single application and it may (or may not) be broken into multiple applications naturally, be they all running in a single node or in a service oriented architecture. We pay no upfront cost because the runtime is built on tried and true patterns. In fact, we will cover such an example in an upcoming chapter of the Programming Phoenix book.

Request life-cycle

Phoenix provides fantastic performance out of the box, with benchmarks to prove it. The request/response life-cycle in Phoenix differs greatly from the approach Rails takes with Rack.

Why it matters: easy to understand

Explicit > Implicit. Almost Always. Phoenix favors explicitness in most of its stack. For example, when generating your Phoenix application, you can see all the “plugs” your request goes through in lib/my_app/endpoint.ex. Where Rails segregates Rack middleware to a side-loaded part of the application, Phoenix makes all plugs explicit. You have an instant, at-a-glance look into your request life-cycle by viewing the plugs in your endpoint and router.

defmodule MyApp.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app

  socket "/socket", MyApp.UserSocket
  plug Plug.Static, at: "/", from: :my_app, gzip: false, only: ~w(css images js)
  plug Plug.RequestId
  plug Plug.Logger
  plug Plug.Parsers, parsers: [:urlencoded, :multipart, :json], pass: ["*/*"]
  plug Plug.MethodOverride
  plug Plug.Head
  plug Plug.Session, store: :cookie
  plug MyApp.Router
end

A request starts in your Endpoint, flows through the explicit plug “base middleware”, and is handed off to your Router, which is itself just a plug. The router then applies its own plugs before handing off to a controller, which is (you guessed it!) a plug. A single level of abstraction throughout the entire stack makes reasoning about your request life-cycle as clear as possible. It also allows easy third-party package integration because of the simplicity of the Plug contract.

Let’s compare two very similar looking controllers to see how Phoenix’s functional approach with Plug makes the code easier to understand:


# Rails
before_action :find_user

def show
  @post = @user.posts.find(params[:id])
end

def find_user
  @user = User.find(params[:user_id])
end


# Phoenix
plug :find_user

def show(conn, %{"id" => id}) do
  post = conn.assigns.user |> assoc(:posts) |> Repo.get(id)
  render conn, "show.html", post: post
end

defp find_user(conn, _) do
  assign(conn, :user, Repo.get(User, conn.params["user_id"]))
end

Unless you’re a seasoned Rails developer, you wouldn’t know that show calls render "show.html" implicitly. Even if it were called explicitly, you would have to know that all instance variables are copied from the controller instance to the view instance, a layer of complexity that few realize when first getting into Rails development. Convention over configuration is a Good Thing, but there’s a threshold where implicit behavior sacrifices clarity. Phoenix optimizes for clarity, in a way that we think strikes a perfect balance with easy-to-use APIs. Beyond that, as an object-oriented programmer you must be aware of all the implicit state of the instance, such as the params hash, the request object, and any instance variables set in before_action filters. In Phoenix, everything is explicit. The conn is our bag of data and our line of communication with the webserver. We pass it along through a pipeline of functions called plugs, transforming the connection and sending responses as needed.

Why it matters: easy to test

Functional programming and the Plug contract make testing your controllers in isolation, or integration testing your entire endpoint, only a matter of passing a conn through the plug pipeline and asserting against the result. Additionally, controller actions in Phoenix are just functions, without implicit state. If we need to test the controller in isolation, we call the function!

test "sends 404 when user is not found" do
  conn =, %{"id" => "not-found"})
  assert conn.status == 404
end

There’s no stumbling over setting up controller instances, thanks to functional programming. And when we need to do a full integration test through the endpoint, Phoenix just calls the pipeline of functions:

test "shows users" do
  conn = get conn(), "/users/123"
  assert %{id: "123"} = json_response(conn, :ok)
end

Phoenix views follow the same principle as controllers: they are all just functions, there is no implicit data sneaking in!

Why it matters: easy to share code

Once you end up relying on controller instance variables and methods, a method written to run in a Rails controller cannot easily be moved to a Rack middleware, because it relies on many controller internals.

Since plugs are just functions, you know what is coming in and what is going out. There is one abstraction for the entire HTTP stack, whether in the endpoint, router, or controller. For example, let’s say you want to apply an AdminAuthentication plug to all "/admin" requests, as well as to a special DashboardController. We use the same plug at both the router and controller levels of abstraction:

defmodule MyApp.Router do
  pipeline :browser do
    plug :fetch_session
    plug :protect_from_forgery
  end

  pipeline :admin do
    plug AdminAuthentication
  end

  scope "/" do
    get "/dashboard", DashboardController, :show
  end

  scope "/admin" do
    pipe_through [:browser, :admin] # plugged for all routes in this scope

    resources "/orders", OrderController
  end
end

defmodule MyApp.DashboardController do
  plug AdminAuthentication # plugged only on this controller

  def show(conn, _params) do
    render conn, "show.html"
  end
end

Since we use plug at all levels of the stack, we can plug AdminAuthentication into both the router and the controller for fine-grained request rules. In Rails, you would inherit from an AdminController, but the clarity of which transformations apply to your request is lost. You have to track down the inheritance tree to find out which rules are applied and where. In Phoenix, router pipelines make the concerns of your request explicit.


Channels

From day one, Phoenix was built to take on the challenges of the modern, highly connected, real-time web. Channels bring transport-agnostic real-time connections to your application, and they can scale to millions of clients on a single server. This is a departure from Rails, where real-time features have historically been second-class.


Why it matters: the web is evolving

Phoenix Channels target the Web beyond the browser. The web is evolving to include connected devices (phones, watches, smart toasters), of which the browser is only one. We need a framework that can evolve with changing and brand-new protocols alike. That’s why Channels are transport agnostic, with native channel clients available on iOS, Android, and Windows platforms. You can see this in action with a Phoenix chat app running natively in a browser, on an iPhone, and on an Apple Watch.

Why it matters: fast performance, with fewer dependencies

Rails’ recent entry into real-time features, Action Cable, brings a heavy list of dependencies: Faye, Celluloid, EventMachine, and Redis, to name a few. Because Phoenix runs on the Erlang Virtual Machine, it gets real-time features out of the box from the runtime. The runtime is distributed, allowing Phoenix to skip operational dependencies like Redis for orchestrating PubSub messages across servers.


Naming conventions

Phoenix does not impose strict naming conventions like the ones we see in Rails.

Why it matters: easy to learn

Phoenix does not tie module names to filenames. Rails requires a UsersController to be located in a file named users_controller.rb. We agree conventions like these are good, but Phoenix does not enforce such tight restrictions. Instead, we promote sane defaults but stay flexible to individual requirements. Naming also creates a lot of confusion for people who learn Rails first and then try to write plain Ruby applications. Because Rails depends on const_missing to require files based on the convention tying class names to file paths, knowing how to require files in a regular Ruby application is a bit of a mystery for programmers looking to take their knowledge outside of Rails.
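The const_missing mechanism mentioned above can be sketched in a few lines of plain Ruby. This is a toy illustration of the idea, not Rails’ actual implementation, and the UsersController constant is hypothetical:

```ruby
# When Ruby fails to resolve a constant, it calls Object.const_missing.
# Rails overrides this hook to turn the constant name into a file path
# (UsersController => users_controller.rb) and require that file.
def Object.const_missing(name)
  # Derive the snake_case file name Rails would look for.
  file = name.to_s.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  puts "Rails would `require` #{file}.rb here"
  # Define the constant so later lookups succeed; the real implementation
  # expects the required file to define it.
  Object.const_set(name, -> {})
end

UsersController # triggers const_missing for users_controller.rb
```

In a plain Ruby project, by contrast, you simply require files explicitly, so there is no hidden mapping to learn.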

Phoenix includes a “web” directory where you put controllers, views, etc., but it exists only for code-reloading purposes, which gives you refresh-driven development.

Phoenix also does not impose singular and plural naming rules. Rails naming rules can confuse beginners and advanced developers alike: models use singular names, controllers use plural ones, URL helpers mix both, and so on. Phoenix consistently uses singular rules, like any other Elixir code. You may use plural names for your tables and router paths, but those are explicitly written at your system boundaries.


Assets

Phoenix uses a tool named brunch by default for handling static assets, but it lets you bring your own JavaScript build tool instead of shipping one specific to the framework, as Rails does with the asset pipeline. Phoenix also leverages its channel layer to provide live reload of changes out of the box.

Why it matters: ES6/ES2015 is the future

Phoenix promotes ES6/ES2015 instead of CoffeeScript by supporting ES2015 out of the box for new projects. CoffeeScript served its noble purpose of pushing the industry forward; ES2015 and its first-class transpilers are the clear way forward.

Why it matters: live-reload is an essential feature

Phoenix ships with live reload out of the box. As soon as you change a .js or .css file, it is automatically reloaded in your browser. Once you add this feature to your development workflow, it’s one you can’t live without.


Regardless of your background, you’ll find Phoenix borrows from great ideas that came before it, while using Elixir to carve its own path to take on the modern web.

over a year ago

Plataformatec @ São Paulo › Brazil - Sep 05

PO dilemmas: how to define OKRs in agile teams

Continuing the series of posts on dilemmas in the life of product managers, I would like to share a technique that can help you and your team plan better. Teams that build or evolve products commonly need to define objectives, project expected results, and plan deliveries ... » 16 days ago

Continuing the series of posts on dilemmas in the life of product managers, I would like to share a technique that can help you and your team plan better.

Teams that build or evolve products commonly need to define objectives, project expected results, and plan deliveries. If you face such challenges, read on for a group exercise that gave me great results when creating OKRs.

What OKRs are and what they are for

Before writing about the subject, it is worth defining the term OKR.

According to the definition from the Lean Performance team, OKR (Objectives and Key Results), used in Silicon Valley by companies such as Google, Facebook, LinkedIn, and Intel, is a framework for setting goals. It is a simple system for creating alignment and engagement around measurable, dynamic goals, typically defined quarterly.

OKRs are a great way for organizations to create and communicate objectives and results. Their main purpose is to connect company, team, and individual objectives through measurable results, making everyone move in the same direction.

In short, using OKRs properly helps increase alignment and engagement among people, and makes strategic planning more agile, with a higher cadence of review, adaptation, and participation from the whole team.

Alignment and engagement among people increase because the team becomes part of the product’s direction, creating a greater sense of ownership in each person.

Once, on a project for one of our clients, after the team defined its OKRs, our QA told me that a bug occurring in production would directly impact one of the OKRs if it was not fixed immediately. That really surprised me; I would never have seen an action like that if we had not shared responsibility with the whole team.

By applying OKRs, we gain one more tool for prioritizing the backlog and, more importantly, the scope is directed toward a common goal.

When applying OKR exercises, the company, team, or person is expected to end up with:

  1. Objectives: essentially qualitative and inspirational, with a defined deadline, pointing at goals; in other words, they say where to go.
  2. Key results: they quantify the inspirational idea. They must be attainable, measurable, and realistic. In other words, they indicate how it will be possible to reach the objectives.

OK Raphael, I get the concepts and I can see this being useful, but can you give an example?

Objective: increase the number of customers through a better user experience in the app.

Period: first quarter of 2017.

Key results:

  • Increase the global mobile conversion rate from 5% (last quarter of 2016) to 10%.
  • Increase incoming sessions from 161 thousand (last quarter of 2016) to 240 thousand.
  • Sell 10,000 basic licenses.
  • Sell 500 premium licenses.

Where will these indicators come from?

Here is an important point: to define good key results that are measurable and reflect the real situation, it is essential that the organization be data- and indicator-driven. Basically, the company must have product metrics that support assertive decision making, giving a real sense of direction and effort, rather than decisions based only on intuition and experience. These data help define the next period’s strategy with more confidence and optimize the resources invested.

The data should drive the following questions:

  • What challenges lie ahead? Which indicators look bad? What should we do to improve a given indicator? E.g.: reduce churn, increase prospects and leads, improve operational processes, etc.
  • Which part of the product do we need to improve to reach the organization’s objective X?
  • What risks do these data reveal, and where should we keep our attention?

With that introduction done, let’s move on to the OKR definition exercise so you can apply and refine it.

Part 1 – Understanding the business

The first topic to discuss is reviewing or creating the product’s business model.

Based on the personas, the team can analyze channels, customer relationship formats, revenue streams, key activities, key resources, key partners, and cost structure.

To gather this information, I suggest bringing the following questions:

  • Personas: what is the profile of the customers who use the product?
  • Channels: how do customers usually reach the product?
  • Relationship formats: how do users currently relate to the product?
  • Revenue streams: what are the current revenue streams?
  • Key activities: what are the main activities the day-to-day operation must perform so that the product delivers the value it promises?
  • Key resources: what competitive advantages does the product have?
  • Key partners: which partners help the company enable or leverage the product?
  • Costs: what structure is needed to make the product’s operation viable?

Business model canvas

This way, the team will have more clarity about the product’s current stage. If the product does not have a business model, be sure to develop one. A good reading tip for building one is the book Business Model Generation.

After reviewing the business model, the next step is to analyze the product’s main metrics. Each person on the team can explore the product through:

  • Financial information: churn, average ticket, customer acquisition cost, etc.;
  • Access behavior: number of visits, exit rate, time spent on the sales-flow pages, traffic source, device used to access the sales flow, etc.;
  • Customer profile: demographics, service tier, age range, etc.

Example of a product dashboard

From the product metrics, each team member should identify points for improvement and points where the product seems to be doing well.

Part 2 – Defining the OKRs

With the business metrics and the business model analyzed, raise with the team the potential objectives to pursue during the period (for example, the next quarter).

If you are facilitating the discussion, keep the following tips in mind before the team sets its objectives:

  • Have few objectives (3 to 4). Why? A quarter, for example, is too short to take on many objectives.
  • Relate the product objectives to the company objectives.
  • Relate the objectives to real problems and needs.
  • Use the business metrics to define the objectives.

If the objectives raised are complex or large, it is worth fostering discussion so the team can simplify them. The mantra here is: small increments that generate economic value for the business.

To measure the objectives, the team should list the key results that will be tracked throughout the period. To give the team a direction:

  • Define few key results (3 to 4). Why? If an objective needs many items to be measured, it is probably too complex.
  • Key results must be expressed as numbers.
  • Which results will guarantee that we reach our objectives?

Example of defined OKRs

Part 3 – Defining a roadmap

With the objectives and key results defined, the team may ask itself: so, what do we do now to reach our OKRs?

As the last part of the exercise, I suggest developing a roadmap in order to identify the macro deliverables and when they should be delivered over the period (e.g., the quarter).

Brainstorming and story mapping techniques help build a sense of chronology and of the dependencies among the features that may be built to reach the OKRs.

Before anyone crucifies me for not being lean, let me make something clear. The roadmap is a reference that will CERTAINLY (repeat it, C-E-R-T-A-I-N-L-Y) change over time, due to the changes and experiments that will happen on the way to the defined objectives and key results.


The OKR structure can be replicated at the company, team, and even individual level. A tip I always share: start with few objectives and around 3 to 4 key results. Why? Focus. The more directed, measurable, and actionable your OKR structure is, the higher the chances of achieving the plan you developed.

Remember that OKRs must be reviewed frequently so the team keeps them in mind and decides together where to spend effort to move in the right direction.

It is essential that the company have a questioning culture, detached from silver-bullet tools and solutions, and an environment that promotes learning from failure, because there will certainly be mistakes at the beginning.

The exercise I presented aims to connect the business, current results, and the future (OKRs + features).

And you, how have you been applying OKR exercises? Share your results and challenges.

Until the next dilemma 😉

Some references: 🙂

Métricas para Projetos Agile

A free course in 7 emails, created by Raphael Albino, author of the book “Métricas Ágeis”.
Recommended for Agile Coaches, Product Managers, Project Managers, Product Owners, Scrum Masters, and CTOs/CIOs.


16 days ago

Plataformatec @ São Paulo › Brazil - Aug 25

Book Release: Learn Functional Programming with Elixir

Last week, The Pragmatic Bookshelf started the sales of Learn Functional Programming with Elixir written by me, now in beta. After one year of hard work, we are finally publishing the beta version of the book. In this blog post, I’ll tell you about the motivation for writing a book, its contents and its future. ... » 27 days ago

Last week, The Pragmatic Bookshelf started the sales of Learn Functional Programming with Elixir written by me, now in beta. After one year of hard work, we are finally publishing the beta version of the book. In this blog post, I’ll tell you about the motivation for writing a book, its contents and its future.

Why Write a Book?

Last year, we were, company-wise, looking for different ways to contribute to the Elixir community. At some point, Hugo Barauna reached out to me and said: “Why don’t you write a book?”. I must tell you: I wasn’t ready.

I’m Brazilian and Portuguese is my native language. Writing a book in English is a great challenge. I thought about everything that I would have to give up to write a book (goodbye, video game nights) and then… I accepted it.

Once Hugo told me The Pragmatic Bookshelf was behind it, I felt motivated. I had a big challenge in front of me, but I knew I wouldn’t be alone. But most of all, what made me most excited about the book was the opportunity to present a different way of learning Elixir that might help people that are stuck in their journey.

Why Functional Programming with Elixir?

After accepting the challenge, I had to write a book proposal, a document outlining the book’s topics. The Pragmatic Bookshelf is one of the most important technical publishers, so your proposal must stand out if you want to be accepted. One way to stand out is to figure out exactly what the other books are missing.

We have great Elixir books. Some of them are for expert programmers, others are for beginners. These books usually focus on Elixir features and OTP, but they don’t slow down and explain functional programming. Observing that functional programming was the missing piece, I came up with the idea of a book with a balanced mix of functional programming and Elixir.

I had to do my homework to explain why teaching functional programming and Elixir together is a good idea. To write a book, you need to read books. I had to read beyond all of the Elixir books out there, skimming widely for content the other authors weren’t covering.

Book: Learn Functional Programming with Elixir

What’s in the Book?

You’ll find a beginner’s guide to functional programming concepts and, at the same time, an introduction to Elixir. You’ll learn:

  • Immutability,
  • Declarative coding,
  • Impure and pure functions,
  • Anonymous and named functions,
  • Closures,
  • Pattern matching,
  • Recursion,
  • Higher-order functions,
  • Behaviours,
  • and Protocols.

The book introduces functional programming while teaching Elixir at the proper pace. For example, in other books the recursion concepts are covered quickly, in two or three pages. In Learn Functional Programming with Elixir, you’ll have an entire chapter of more than twenty pages explaining concepts like bounded and unbounded recursion, divide and conquer, and tail call optimization.

The Next Months

Before the release of the book’s final version, we’ll release two more chapters. We’ll refine the book with all suggestions and errata from readers during this beta phase. Soon, the book will reach its final stage and it’ll also have a printed version.

Thanks to Jackie Carter and Bruce Tate from The Pragmatic Bookshelf, Plataformatec, technical reviewers, beta readers, and friends, we’re building a great book.

If you bought it and you are reading it, I’m counting on your feedback. For now I have to go, I have a lot of work to do! XD

27 days ago

DockYard @ Boston, MA › United States - Jul 10

Design Patterns: The Template Method Pattern

Exploring design patterns and their use cases over a year ago


As the field of software development grows, we developers continuously try to catch up with the latest technologies. Fortunately, the craft of writing maintainable code is language agnostic, and in this series of blog posts we’ll focus on a powerful set of timeless tools: design patterns.

I highly recommend Russ Olsen’s book Design Patterns in Ruby. Our series will draw inspiration from it and is brief in comparison. So if you enjoy these posts (and I hope you do!), the book will be a great investment.

We’ll explore various design patterns and learn when to apply them. Our topic for today will be the Template Method pattern, the simplest design pattern.

Our First Day in Construction

The Right Tools

Quite simply, design patterns are just tools that help us construct software. However, just like tools, we need to pick the proper one for the task. We could use a hammer on screws, but we’d damage the wood planks, and a power drill would be much more efficient. Before using any one of the numerous design patterns, it is crucial to understand the problem we wish to solve.

Using a design pattern on the wrong type of problem is poor practice; apply a pattern only when the problem actually calls for it.

Let’s Build Some Walls

Today, we’ve been asked by our foreman to build a couple of walls. All the walls will share the same dimensions and will be made from the same material (for this construction project, our foreman has given us an “easy” set of requirements).

# Blueprints for Wall
require 'minitest/autorun'

describe Wall do
  let(:wall) { }

  it 'should state its dimensions' do
    wall.dimensions.must_equal 'I am 30ft. long and 20ft. wide!'
  end

  it 'should be made from brick' do
    wall.made_from.must_equal 'I am made from brick!'
  end
end

What a nice boss, he’s handed us the blueprints! Now it’s just up to us to build out the Wall.

class Wall
  def dimensions
    'I am 30ft. long and 20ft. wide!'
  end

  def made_from
    'I am made from brick!'
  end
end

Nice! Our tests pass, everybody is happy, and we’re off to lunch!

A Hammer or a Nailgun?

Coming back to the site, our foreman has informed us that we need more walls. “That’s a piece of cake,” we reply, recalling how easy it was to build out the Wall.

“Not so fast,” our foreman retorts. We’re given new blueprints with different wall requirements.

# Blueprints for a BrickWall
describe BrickWall do
  let(:brick_wall) { }

  it 'should state its dimensions' do
    brick_wall.dimensions.must_equal 'I am 30ft. long and 20ft. wide!'
  end

  it 'should be made from brick' do
    brick_wall.made_from.must_equal 'I am made from brick!'
  end
end

# Blueprints for a ConcreteWall
describe ConcreteWall do
  let(:concrete_wall) { }

  it 'should state its dimensions' do
    concrete_wall.dimensions.must_equal 'I am 30ft. long and 20ft. wide!'
  end

  it 'should be made from concrete' do
    concrete_wall.made_from.must_equal 'I am made from concrete!'
  end
end

# Blueprints for a WoodWall
describe WoodWall do
  let(:wood_wall) { }

  it 'should state its dimensions' do
    wood_wall.dimensions.must_equal 'I am 10ft. long and 20ft. wide!'
  end

  it 'should be made from wood' do
    wood_wall.made_from.must_equal 'I am made from wood!'
  end
end

Hmm… A couple of ideas run through our heads. We could follow the initial Wall class and define each method, hardcoding each string output, for the BrickWall, ConcreteWall, and WoodWall classes. That seems like an okay idea, but we’d have to hardcode each instance method. What if our house requires a dozen different types of walls?

Open That Toolbox!

Sipping on our after-lunch coffee, we realize that we’ve got the right tool for the job: the Template Method pattern.

In the Template Method pattern, a skeletal class serves as the basis for various subclasses, or concrete classes. Within the skeletal class there are abstract methods which, in turn, will be overridden by the methods of the subclasses. Essentially, we’ll define a Wall class (our skeletal class) and its subclasses: BrickWall, ConcreteWall, and WoodWall.

Going over the blueprints, we notice that the three different classes of walls each contain the methods #dimensions and #made_from, which return slightly different strings. With this knowledge, let’s create our Wall class and its subclasses.

class Wall
  def dimensions
    "I am #{length}ft. long and #{width}ft. wide!"
  end

  def made_from
    "I am made from #{material}!"
  end

  private

  # Default length shared by BrickWall and ConcreteWall.
  def length
    30
  end
end

class BrickWall < Wall
  private

  def width
    20
  end

  def material
    'brick'
  end
end

class ConcreteWall < Wall
  private

  def width
    20
  end

  def material
    'concrete'
  end
end

class WoodWall < Wall
  private

  def length
    10
  end

  def width
    20
  end

  def material
    'wood'
  end
end


Hook Methods

Within the Wall class, we have defined a private method called #length, because we see that BrickWall and ConcreteWall share the same length. As for the WoodWall class, we simply override #length and give it a value of 10. These are examples of Hook Methods.

Hook Methods serve two purposes:

  1. Override the skeletal implementation and define something new
  2. Or, accept the default implementation

Please note that the default implementation, within the skeletal class, does not necessarily need to provide a working method. For example, we could have had:

class Wall
  def dimensions
    "I am #{length}ft. long and #{width}ft. wide!"
  end

  private

  def length
    raise NotImplementedError, 'Sorry, you have to override length'
  end
end

class BrickWall < Wall
  private

  def length
    30
  end
end

In the example above, the #length method within the Wall class serves as a placeholder for #length in BrickWall, its concrete class. Essentially, hook methods inform all concrete classes that the method may require an override. If the base implementation is undefined, the subclasses must define the hook methods.

Those Are Some Nice Walls

Our foreman is delighted with the results and we’re going to call it a day. As we can see, using the Template Method pattern is not difficult at all. We first defined a base class, within which we defined necessary hook methods to be overridden by our subclasses. Of course, this particular design pattern does not solve every conceivable problem, but helps keep our code clean by the use of inheritance.

Next we’ll be discussing the Strategy pattern. Stay tuned!

over a year ago

DockYard @ Boston, MA › United States - Jun 17

Bringing Ecto Changesets into Ember.js

Changesets in Ecto are composable and allow casting and validation when manipulating models. We brought this concept over into Ember.js, which makes dealing with complex forms a cinch. over a year ago

The past month or so, I’ve been working on an Elixir and Phoenix API for a client. I am blown away by how nice it is working with Elixir and functional programming (FP) concepts. FP feels more intuitive and less prone to “shoot yourself in the foot” scenarios compared to OOP. In fact, I try to use functional approaches wherever possible in JavaScript as well.

That isn’t to say that one is better than the other, but in my experience less unexpected behavior occurs in FP. It turns out whole classes of bugs disappear when embracing immutability and pure functions.

Ecto Changesets to the Rescue

In Elixir, we use Ecto, a DSL for writing queries and interacting with databases. One of the core concepts in Ecto is the changeset – an atomic collection of changes. Changes are cast, validated, and checked against database constraints (such as uniqueness), ensuring that we catch invalid data in the app layer before it is inserted into the database. Ecto is often confused with Rails’ ActiveRecord, but it isn’t an ORM and shouldn’t be used like one.

The idea for bringing changesets into Ember occurred to me while I was working on a new client app. An edit page featured 3 forms, all bound to the same model. Each hidden form had separate “toggle”, “save” and “reset” actions. Using ember-form-for and ember-cp-validations, this turned out to be harder than I thought. Editing one form would immediately update the model via 2-way bindings. This was a poor user experience, since you might edit more than 1 form at a time but want to keep saves and resets separate.

Changesets in Ember

In my mind, I could see a solution using changesets. Each form would have a separate changeset, so changes (and validations) would be independent. It turns out that this approach works really well, and I’m happy to announce that you can install it today as an addon with:

ember install ember-changeset
ember install ember-changeset-validations

I wrote the addon with compatibility in mind, so it’s easy to wire up with your favorite validation library. The simplest way to incorporate validations is to use ember-changeset-validations, a companion addon. It has a simple mental model, and there are no observers or CPs involved – just pure, plain JavaScript functions.

Let’s take a look at how ember-changeset is implemented, and we’ll also demonstrate how it aligns with Ember’s Data Down Actions Up (DDAU) philosophy.

Virtual Properties with unknownProperty and setUnknownProperty

The core concept behind ember-changeset is the definition of unknownProperty and setUnknownProperty. These methods are invoked (if present) in Ember.get or Ember.set whenever an Ember Object does not define a property. Ruby developers would be familiar with this behavior via the method_missing method. A colleague I used to work with wrote an excellent blog post on this topic, please check it out if you’re interested in finding out more!

For example:

let Person = EmberObject.extend({
  firstName: null,
  lastName: null,

  unknownProperty(key) {
    console.log(`Could not get ${key}!`);
  },

  setUnknownProperty(key, value) {
    console.log(`Could not set ${key} with ${value}!`);
  }
});
When a Person is created, trying to get or set a property other than firstName and lastName will invoke the unknownProperty and setUnknownProperty methods respectively:

let jim = Person.create({ firstName: 'Jim', lastName: 'Bob' });
jim.get('firstName'); // "Jim"
jim.get('fullName'); // "Could not get fullName!"
jim.set('age', 25); // "Could not set age with 25!"

These two methods allow us to proxy our changeset to the actual model, meaning we can hold back changes but still forward gets to the model.
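Outside Ember, the same idea can be sketched with a native JavaScript Proxy. This is purely an illustration of "forward gets, hold back sets" (ember-changeset itself uses unknownProperty/setUnknownProperty as described above; `makeChangeset` and `_changes` here are hypothetical names):

```javascript
// Framework-free sketch: reads fall through to the model,
// writes are buffered in a local `changes` object.
function makeChangeset(model) {
  const changes = {};
  return new Proxy(model, {
    get(target, key) {
      if (key === '_changes') return changes;
      // prefer a buffered change, otherwise forward the get to the model
      return key in changes ? changes[key] : target[key];
    },
    set(target, key, value) {
      changes[key] = value; // hold the write back from the model
      return true;
    }
  });
}

const user = { firstName: 'Jim', lastName: 'Bob' };
const cs = makeChangeset(user);
cs.firstName = 'Jimmy';
console.log(user.firstName); // "Jim" (the model is untouched)
console.log(cs.firstName);   // "Jimmy" (the change is visible on the changeset)
```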

Storing Changes

Our changeset needs a reference to the underlying model, as well as an internal list of changes to be applied. We can set this up in the init method of our object, which is invoked whenever a new instance is created.

export function changeset(obj, validateFn/*, validationMap */) {
  return EmberObject.extend({
    init() {
      this._content = obj;
      this._changes = {};
      this._errors = {};
      this._validator = validateFn;
    }
  });
}

export default class Changeset {
  constructor() {
    return changeset(...arguments).create();
  }
}
We want to be able to forward gets to _content, but hold back sets on _changes, and this is easy enough to set up via virtual properties:

  unknownProperty(key) {
    let content = get(this, '_content');
    return get(content, key);
  },

  setUnknownProperty(key, value) {
    return this._validateAndSet(key, value);
  },

  _validateAndSet(key, value) {
    // if valid, set it on `_changes`
    // otherwise set it on `_errors`
  }
Since a changeset should only allow valid changes to be set, we validate the change using the validateFn function that was passed in to the changeset factory. If a change is valid, we add it to the hash of changes in _changes, and if it’s invalid and returns an error message, we add it to the hash of _errors.

Of course, there are more implementation details than that, but the concept remains unchanged. After defining a simple public API for using changesets, there wasn’t too much more code to add! For example, this is how you would use a changeset:

let changeset = new Changeset(user, validatorFn);
user.get('firstName'); // "Michael"
user.get('lastName'); // "Bolton"

changeset.set('firstName', 'Jim');
changeset.set('lastName', 'B');
changeset.get('isInvalid'); // true
changeset.get('errors'); // [{ key: 'lastName', validation: 'too short', value: 'B' }]
changeset.set('lastName', 'Bob');
changeset.get('isValid'); // true

user.get('firstName'); // "Michael"
user.get('lastName'); // "Bolton"; // sets and saves valid changes on the user
user.get('firstName'); // "Jim"
user.get('lastName'); // "Bob"
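The whole flow (validate on set, buffer changes, expose errors, apply on save) can be condensed into a framework-free sketch. The names mirror the article's API, but this is a minimal illustration, not the addon's real implementation:

```javascript
// Minimal changeset sketch: `validatorFn` returns true or an error message.
function createChangeset(content, validatorFn) {
  const changes = {};
  const errors = {};
  return {
    get(key) {
      if (key === 'isValid') return Object.keys(errors).length === 0;
      if (key === 'isInvalid') return !this.get('isValid');
      if (key === 'errors') {
        return Object.entries(errors).map(([k, e]) => ({ key: k, ...e }));
      }
      // prefer buffered changes, fall back to the underlying model
      return key in changes ? changes[key] : content[key];
    },
    set(key, value) {
      const result = validatorFn(key, value);
      if (result === true) {
        changes[key] = value;   // valid: buffer the change
        delete errors[key];
      } else {
        errors[key] = { validation: result, value }; // invalid: record the error
      }
    },
    save() {
      Object.assign(content, changes); // apply only the valid, buffered changes
    }
  };
}

// Usage, mirroring the example above (hypothetical validator):
const validatorFn = (key, value) =>
  key === 'lastName' && String(value).length < 2 ? 'too short' : true;

const user = { firstName: 'Michael', lastName: 'Bolton' };
const cs = createChangeset(user, validatorFn);
cs.set('lastName', 'B');
console.log(cs.get('isInvalid')); // true
cs.set('lastName', 'Bob');
cs.save();
console.log(user.lastName); // "Bob"
```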

Rolling back changes, and even merging them, becomes trivial with a changeset:

changeset.set('firstName', 'Milton');
changeset.get('isDirty'); // true
changeset.rollback();
changeset.get('isDirty'); // false

let changesetA = new Changeset(user, validatorFn);
let changesetB = new Changeset(user, validatorFn);
changesetA.set('firstName', 'Jim');
changesetB.set('firstName', 'Jimmy');
changesetB.set('lastName', 'Fallon');
let changesetC = changesetA.merge(changesetB);
changesetC.save(); // applies the merged changes to the user
user.get('firstName'); // "Jimmy"
user.get('lastName'); // "Fallon"

Data Down Actions Up, Not 2-Way Bindings

One of the reasons DDAU is so strongly emphasized in Ember 2.x is because it helps us avoid shooting ourselves in the foot with 2-way bindings (2WBs). 2WBs were the “killer feature” of many JavaScript frameworks when they first debuted. As client side applications matured and became more sophisticated, developers realized that 2WBs were more harmful than useful. 2WBs led to instability and difficult debugging in the form of cascading changes, and React was the first library to attempt to solve this problem.

React’s breakthrough was in the use of a virtual DOM, a representation of the actual DOM as a tree-like data structure. Diffing the changes between the virtual and real DOM paved the way for the complete removal of 2WBs – the application simply re-renders whenever there is a change in value.

This continues to be a simpler mental model, and just like Elixir (and other functional languages), eliminates a whole class of bugs. DDAU in Ember.js is built upon the same idea, that data should flow one way.

Using changesets in Ember takes the DDAU philosophy used in rendering into the realm of interacting with client side view models. Instead of 2WBs, changesets allow one way data flow to a model, ensuring that they are always valid, and eliminating a whole class of synchronization headaches.

Is This Real Life?

When I dropped ember-changeset and ember-changeset-validations into my client app, it instantly clicked with the way I’ve been writing Ember using DDAU. My complex forms now have independent validations and changes, and I no longer need to worry about saving an unintended change in one form when I submit another.

Because ember-changeset can be used directly in place of an ember-data model, using it with a form library like ember-form-for is trivial using the changeset helper:

{{dummy-form changeset=(changeset model (action "validate"))}}

{{#form-for changeset as |f|}}
  {{f.text-field "firstName"}}
  {{f.text-field "lastName"}}
  {{f.date-field "birthDate"}}

  {{f.submit "Save"}}
{{/form-for}}

Validating Changesets

Validation becomes even simpler with changesets. Throughout Ember’s history, we have largely relied on addons like ember-validations which make extensive use of observers. Newer libraries like ember-cp-validations use computed properties (CPs) instead, but that still relies on 2WBs.

Using ember-changeset and ember-changeset-validations, you can take a functional approach to validations. A validator function is passed into the changeset and is invoked whenever a property is set. This function then looks up the appropriate validator (say, presence or format) on the validation map, and returns either true or an error message.

import {
  validateLength,
  validatePresence,
  validateFormat,
  validateConfirmation
} from 'ember-changeset-validations/validators';
import validateCustom from '../validators/custom'; // local validator
import validatePasswordStrength from '../validators/password-strength'; // local validator

export default {
  firstName: [
    validateLength({ min: 4 })
  ],
  lastName: validatePresence(true),
  age: validateCustom({ foo: 'bar' }),
  email: validateFormat({ type: 'email' }),
  password: [
    validateLength({ min: 8 }),
    validatePasswordStrength({ minScore: 80 })
  ],
  passwordConfirmation: validateConfirmation({ on: 'password' })
};
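Under the hood, the validator function handed to a changeset can simply look the key up in a map like the one above and run each entry. Here is a plain-JavaScript sketch of that lookup step; `lookupValidator` and the inline `validateLength` are illustrative names, not necessarily the addon's exact exports:

```javascript
// Build a single validate function from a validation map:
// each map entry is a function (or list of functions) that
// returns `true` or an error message string.
function lookupValidator(validationMap) {
  return function validate(key, newValue, oldValue, changes) {
    const validators = [].concat(validationMap[key] || []);
    for (const validator of validators) {
      const result = validator(key, newValue, oldValue, changes);
      if (result !== true) return result; // first failing message wins
    }
    return true; // no validators, or all passed
  };
}

// A validator factory, in the spirit of the map above:
const validateLength = ({ min }) => (key, value) =>
  String(value || '').length >= min ? true : `${key} is too short`;

const validate = lookupValidator({ firstName: [validateLength({ min: 4 })] });
console.log(validate('firstName', 'Jim'));   // "firstName is too short"
console.log(validate('firstName', 'James')); // true
```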

A validator like validatePresence is simply a function that returns a function:

// validators/custom.js
export default function validateCustom({ foo, bar } = {}) {
  return (key, newValue, oldValue, changes) => {
    // validation logic
    // return `true` if valid || error message string if invalid
  };
}
This is simpler to reason about than an OOP implementation that relies on extending base classes and holding on to state. And because validation maps are simply POJOs, composing validators is intuitive:

// validations/user.js
import {
  validatePresence
} from 'ember-changeset-validations/validators';

export default {
  firstName: validatePresence(true),
  lastName: validatePresence(true)
};

You can easily import other validations and combine them using Ember.assign or Ember.merge.

// validations/adult.js
import Ember from 'ember';
import UserValidations from './user';
import { validateNumber } from 'ember-changeset-validations/validators';

const { assign } = Ember;

export const AdultValidations = {
  age: validateNumber({ gt: 18 })
};

export default assign({}, UserValidations, AdultValidations);

This approach lets you build up validations independent of the model. Ember Data models aren’t 1-to-1 representations of server-side records; they’re view models. This means we shouldn’t need to validate them the same way we would a server-side model. For example, you might have User models in your application, and some of those users might have different roles that require different validation. Best of all, we don’t need to use observers or CPs!

The Only Constant is Change

I hope you enjoyed reading about the concept and implementation of changesets in Ember. DDAU on your models will make your life simpler and your app easier to reason about! Work is still on-going on these addons, so please try them out and let me know if you have any issues or feedback.

As always, thanks for reading!

over a year ago

DockYard @ Boston, MA › United States - Mar 25

Testing when your frontend and backend are separated

How can you run full integration tests when using separate repos? over a year ago

The last project I worked on was an Ember app with a Rails backend, deployed on Heroku. We had this application as a single repository with two folders at the root, frontend and backend. This was somewhat easy to test on Travis-CI: it would check out the one repository, run the Rails tests, start the Rails server, then run the Ember tests against that server. It ended up being a pain to deploy, though: changing the Rails app meant redeploying the Ember app, and vice versa. It also presented an issue when deploying to Heroku, as we had to use git subtree to push the backend, which contained the production assets.

With the latest project I started, I’m keeping the backend and the Ember app separate. Since the apps are separate, they can be deployed independently of each other. This made it a little bit harder to run integration tests against the backend.

Side note: while you can mock/stub your API in your Ember tests, it is important to run integration tests against your backend regularly. Mocking your API can give you a false sense of security about your Ember app’s compatibility: your models may line up perfectly with your mocks, but the mocks can fall out of date. To prevent this, at least on your continuous integration (CI) server, you should have your Ember app hit the real backend server.

To run end-to-end integration tests on Travis-CI, I added tasks to the .travis.yml file to clone the backend repository, install dependencies, and run the server:

language: node_js
node_js:
  - "0.12"

sudo: false

cache:
  directories:
    - node_modules
    - backend
    - vendor/bundle

before_install:
  - npm config set spin false
  - npm install -g npm@^2
  # Select the RVM version
  - rvm use 2.2.1 --install --binary --fuzzy
  # Clone the repository if it isn't already cloned
  - "[ -d backend/.git ] || git clone <backend-repo> backend"
  - "cd backend"
  # Reset the repo so we can have a conflict-less pull
  - "git reset --hard"
  - "git clean -f"
  - "git pull"
  # Install dependencies
  - "bundle install --path=../vendor/bundle --jobs=3 --retry=3 --deployment"
  # Run the server
  - "RAILS_ENV=test ./bin/rails s &"
  # Wait for the Rails app to start
  - "sleep 5"
  - "cd .."

install:
  - npm install -g bower
  - npm install
  - bower install

script:
  - npm test

Note that I cached both the backend and bundle directories to speed up the time it takes to get the backend running. Since the backend is cached, we only have to pull the new code.

In this example, we have a Rails app with no database, but it would be pretty easy to add one. The only other required step was to add an SSH private key to the Travis settings: the two repositories have separate deploy keys, which would otherwise prevent the frontend build from cloning the backend repository. There should be nothing holding you back from performing end-to-end tests when you have separate repositories!

over a year ago

DockYard @ Boston, MA › United States - Oct 26

Validating your Ecto Models with ValidField

Unit testing your changesets made easy over a year ago

When we were working with Rails, we would unit test our validations with a library called ValidAttribute. This library would allow you to specify an attribute and a list of values, then check whether those values yield errors or not. On a recent client project, I resurrected the pattern, and this weekend I extracted it as a Phoenix library.

Introducing ValidField

Let’s import ValidField and get right to the tests:

defmodule App.UserTest do
  import ValidField
  use ExUnit.Case
  alias App.User

  test ".changeset - Validations" do
    with_changeset(%User{})
    |> assert_valid_field(:email, [""])
    |> assert_invalid_field(:email, ["", nil, "test"])
    |> assert_valid_field(:password, ["password123!"])
    |> assert_invalid_field(:password, [nil, "", "test", "nospecialcharacters1", "nonumber!"])
  end

  test ".changeset - Validations - complex changeset" do
    with_changeset(%User{}, fn (model, params) -> App.UserController.changeset(model, params, :insert) end)
    |> assert_valid_field(:email, [""])
  end
end

First, we use with_changeset/1, which takes the model struct as the sole argument and returns a map that contains an anonymous function that yields a changeset from Model.changeset. with_changeset/1 assumes that your changeset is defined at Model.changeset/2. If your changeset is defined elsewhere or has additional arguments, you’ll want to use with_changeset/2. The first argument of with_changeset/2 is still the model struct, but the second argument is a function with an arity of 2. The first argument to the function will be the model struct passed in, the second argument will be a map of field values to be set in the changeset.

After we have a changeset map, we pass it as the first argument to assert_valid_field/3 and assert_invalid_field/3. Instead of returning a boolean indicating whether the field is valid for the list of values, these functions run the assertions internally. This is done to provide useful failure messages when running mix test. If you inverted the third line of the test (without changing your validations), the following error would be generated:

defmodule App.UserTest do
  import ValidField
  use ExUnit.Case
  alias App.User

  test ".changeset - Validations" do
    with_changeset(%User{})
    |> assert_valid_field(:email, [""])
    |> assert_valid_field(:email, ["", nil, "test"])
    # (ExUnit.AssertionError) Expected the following values to be valid for "email": nil, "", "tests"
  end
end

OK, I see what you did there but why?

Clean workflow for unit testing changesets

By grouping all the valid and invalid cases in your tests, you can quickly understand what makes your changeset valid. It also allows you to update your tests by just adding another value to either function call. Say you want to stop accepting Gmail addresses as valid email addresses; you just add one to your assert_invalid_field call for email, then update the validations to satisfy this new requirement. We aren’t worried about the error message anymore.

Less brittle tests

Most unit tests around changeset validations use the errors_on function and assert that the field and a specific error message are contained in the list of errors. This is a decent starting point, but it has a couple of drawbacks. The first is that your test is tied directly to the error message, so changing a validation message requires updating your test; even correcting a grammatical error would cause a test failure, which shows how brittle this pattern can be. What if you support multiple languages? And since your error messages might differ for an email that contains a space versus one without a valid domain, your tests become more verbose because each message needs to be matched individually.

With ValidField, you are testing the behavior of your changeset, rather than the implementation of your error messages.

Go forth and test your changeset

Making sure your changesets are properly defined is important, and ValidField makes it much easier to unit test them. Having the list of valid and invalid values for each field in your tests also serves as documentation of what a given field should accept.

over a year ago

DockYard @ Boston, MA › United States - Apr 01

EmberConf 2016 - Bigger and Better

A personal look at EmberConf 2016 over a year ago

Leading up to this year’s EmberConf I wasn’t sure I was going to attend. My indecisiveness had nothing to do with concerns about Ember or DockYard’s commitment to the framework. It was simply that I personally haven’t been writing much Ember over the past six months. Between being a new dad, buying a house, running DockYard, and exploring Elixir, I lacked the bandwidth to keep up with what has been happening in Ember.

The deciding factor was to lend moral support to two of our engineers who were speaking: Estelle DeBlois and Lauren Tan. They both gave great talks, and even if that were the only reason, it would have been more than worth the trip. However, like many things, I was thankfully wrong about my concerns and would like to share my experience this year.

First, I'd like to point out what a great job Leah Silber has done building and organizing EmberConf. This year the attendance was just shy of 1,000. That's almost double last year's, and I've heard the aim is for 1,500 next year. As someone who has run a few conferences I can say that Leah has been kicking ass.

My time was split about 50/50 between the hallway track and the talks. Tom & Yehuda's keynote was probably my favorite talk, as we're very interested in the mobile web becoming more competitive with native. Elements of that keynote should play out over the next year or so to help position Ember as the best choice for Progressive Web Applications.

I was really impressed with how many people were interested in Elixir. Easily a majority of the conversations I had were with people curious about Elixir or actively using it. I don't want to dwell on Elixir too much because this was EmberConf, but it reinforced my opinion that Elixir/Phoenix are a natural fit for Ember backends. It seems there are many people out there with the same idea.

This is now my fourth EmberConf (if I'm counting the original EmberCamp), and while catching up with old friends is important, I really enjoyed meeting new people. As always, the regret is not meeting enough of them. I wish future EmberConfs ran more days with more down time. I realize this is not realistic, as extra days increase the cost to run the conference and more down time creates less incentive for companies to send employees, but I personally find the most value in networking.

I am aware of DockYard's place in the Ember community. Thinking back to the earlier RailsConfs and the companies that were in a similar position to DockYard, I don't recall the leaders of those companies being very accessible; there were clear social cliques that those people stuck to. I'm trying not to repeat that. At times I found myself gravitating towards friends, but after a few minutes I'd excuse myself so I could meet new people.

Tuesday night we hosted an event at Ground Kontrol. For those unfamiliar, Ground Kontrol is a video arcade with a bar. We rented the entire space for three hours and had close to 100 EmberConf attendees through the doors that evening. All the games were free. When I walked in to get the place ready for the event, it was like stepping into a time machine: nearly every coin-op game from my childhood was there. If I learned anything that evening it is that I vastly overestimated my skills as a kid. Or I am just rusty. Or it was the controller... all of the controllers. Also, I am really out of touch with modern arcade games, as I was completely unaware of this Killer Queen game.

A good conference leaves you feeling energized and wanting more, and EmberConf has done that for me every year. A great conference gives you the opportunity to connect with friends new and old, and it takes a great community to build what Ember has. We hear a lot about *Ember vs X*, but not often enough do I hear the practitioners of other frameworks bragging about their community the way Ember does. I think there is a good reason for that. Twenty years from now I'm not going to give a shit about which library rendered which obscure demo the fastest. But I will remember...

...the people.

over a year ago

DockYard @ Boston, MA › United States - Apr 13

TIL: Elixir can pattern match at multiple depths in a single statement

over a year ago

One of Elixir’s greatest assets is its pattern matching. If you have ever used Elixir, you have probably had the pleasure of writing something like:

def background_check(%{manager: employee} = company) do
  %{name: full_name} = employee

  from(c in Criminals,
    where: c.full_name == ^full_name,
    select: c)
  |> Repo.one() # assuming an Ecto repo module named Repo
  |> case do
    nil -> congratulate(employee)
    _criminal -> notify(company)
  end
end

Here we are assigning the entire parameter map to a variable called company, and pattern matching to get the employee we want to do a background check on. We need to query our Criminals database table for a criminal with the same name as our employee. To do so, we first have to grab the name property off the employee map.

Well, today I learned that you can have multiple matches in a single statement! With this newly acquired knowledge, we can simplify our background_check() function definition:

def background_check(%{manager: %{name: full_name} = employee} = company) do
  from(c in Criminals,
    where: c.full_name == ^full_name,
    select: c)
  |> Repo.one() # assuming an Ecto repo module named Repo
  |> case do
    nil -> congratulate(employee)
    _criminal -> notify(company)
  end
end

Now we can pattern match to get the employee’s full_name, while also assigning the entire map under the manager key to the variable employee, as we did before.

Hopefully, you learned something too! Enjoy.

over a year ago

DockYard @ Boston, MA › United States - Apr 10

The Beginner's Fallacy

over a year ago

Helping onboard beginners is a hot topic right now in software development. It is very good that this is important to people, as we need new software developers. However, I have noticed that many of these developers find it difficult to advance beyond the beginner stage. This is especially evident when they are taken out of their current development environment.

Beginners require guidance and a set of rules to follow, and this has been shown time and again to be an effective strategy for learning. The rules/principles/laws, whatever you want to call them, are based upon the experience of more advanced developers and essentially boil down to “here is what you want to do most of the time”. They are a great substitute for experience, and they fit in very nicely with the notion of Convention Over Configuration (COC). But unless you are stepping outside of your framework, the advantage of COC may turn into a career-hindering disadvantage.

I am a big fan of Convention Over Configuration. I was first exposed to it with Ruby on Rails and I like that Ember.js has been following this path as well. However, I also believe that it can become a crutch for beginners. The rules that they are following coupled with the low friction environment of COC leads to a developer experience that does not present too many obstacles to be overcome. Learning requires challenge. You meet a challenge, you learn how to overcome that challenge, you move on and now that experience is a tool you can wield in the future.

A market flooded with beginners, along with COC frameworks, should produce an environment heavily favored towards employers (a buyer’s market). In most cases these developers will be able to accomplish most of what a very experienced developer can. This is great for the company’s bottom line, but not so good for the individual developer’s own career growth.

Instead, if you are interested in advancing beyond being a beginner, you must get out of your comfort zone. This is going to require you to put more time into your craft than your job necessarily allows. Try new software languages, try old software languages. Go read SICP. Attend meetups, read papers, contribute to open source. Contribute to open source even if you think your PR won’t be accepted. All of these things will take you out of the comfort zone that your daily work gives you, but you’ll grow and become a better (and more in demand) developer for it.

over a year ago

DockYard @ Boston, MA › United States - Oct 29

How long it took to convert our team from Rails to Phoenix

over a year ago

One week.

That is how long it took for our engineers, who had all (but one) worked with Rails for a few years, to be productive on a new Phoenix client project.

I had assigned each of them to read Part 1 of Dave Thomas’ Programming Elixir book which was only 160 pages of material. Part 1 introduces early Functional Programming concepts and the Elixir standard library. In my mind, this is enough to make the switch.

“It was cool being able to contribute to a Phoenix app without prior experience with the framework thanks to the similarity in structure with Rails. Elixir is the biggest hurdle, but a quick read through the key concepts is enough to make you productive - not to mention learning a new language is fun!”

— Romina Vargas, DockYard Engineer

At the higher level of writing actions, routes, tests, models, queries, etc… there is so much overlap with the concepts that exist in Rails that it was simply a matter of syntax that had to be learned before a Rails engineer could make contributions back to Phoenix applications.

There is no doubt that Phoenix borrows a lot of concepts and structure from Rails. For good reason, Rails nailed the MVC app pattern. The benefit here is that a lot of that domain knowledge on how to build Rails apps can be transferred over to building Phoenix apps.

“Ruby on Rails had a pretty steep learning curve. Not only did I have to study a new programming language, I had to master the MVC framework as well. But with RoR under my belt, the learning curve for Elixir and Phoenix was significantly reduced. Plus pattern matching makes everything way easier!”

— Marin Abernethy, DockYard Engineer

Now, don’t get me wrong. I am not suggesting that after this one week that you should be ramped up on the complexities of Elixir and the Erlang ecosystem. I think there is enough to Erlang that could take years to fully absorb. But that’s not the point.

The best way to write a faster Rails app is to write it in Phoenix.

Get in touch with us if you’d like to move from Rails to Phoenix. We can help!

over a year ago

DockYard @ Boston, MA › United States - Apr 08

Elixir: Come for the syntax, stay for everything else

over a year ago

I have been programming for over 20 years now. I started with Basic, found my way to C++, and spent two years writing Assembly (MASM). Then I found Ruby. Ruby completely changed everything for me. I loved Ruby. I loved Ruby for a reason that many “elite” programmers tend to dismiss: the syntax.

You see, syntax is very important to me. Call it what you will, bikeshedding, OCD, stupidity. I care about syntax. It matters to me, and with Ruby I found a community that shared my thoughts.

When Go and Rust came along I was disappointed. Clearly these two languages were superior in performance (and in many other areas) but were a syntactic step back from Ruby. What was their reason? Clearly Ruby, and even Python, have proven that the masses are attracted by clear and readable syntax. New languages should take the best of what is currently available and improve upon them. Go seems to target the C/C++ audience, whereas Rust seems to attract JavaScript developers. So I guess this becomes a matter of perspective and opinion.

Elixir is different. I put Elixir up there with Go and Rust as part of the three new languages that will define the next decade of backend software development. With Elixir I found a language that embraced Ruby-like syntax, but also gave me much more.

The syntax is only skin deep, but it is part of the allure of Elixir. It was my foot in the door. When I first saw Elixir code I thought to myself, “OK, this is something I can wrap my head around”.

I think a lot of Ruby developers will find their way to Elixir. It seems that many were attracted to Go but I suspect when they start to explore what the Elixir language has to offer they’ll see the benefits.

But a language needs more than just a hook, there has to be a compelling reason to stay. For me that was Functional Programming.

It seems that Functional Programming is making a comeback. Every day there is a new blog article on why you should start writing Functional code. Let’s break this down into a few points:

1. Scalability

This is an Erlang trait. Elixir apps will make the best possible use of all the cores in your CPU. Compared to Ruby this is a big deal. We don’t have to write anything special; the Erlang VM (BEAM) just handles this for us automatically, which means we are using our hardware efficiently. This type of approach didn’t make a lot of sense a few years ago, when multi-core CPUs were expensive. Now they’re cheap, and Elixir benefits.

2. Memory

Elixir programs are meant to be broken into many different processes. The garbage collection strategy being used isn’t revolutionary, but because we are dealing with many small per-process heaps instead of one big one, the impact of GC is negligible. You can picture how short-lived processes might be the equivalent of objects in an OOP language: we pass messages into the process and get back a value. Each process manages its own memory, and if a process is short-lived enough, GC never even runs; the process is simply destroyed after it has completed its job. Contrast that with Ruby, where everything lives in one world, and an object you stop using will eventually get GC’d, impacting performance.

3. Immutability

Immutability got a bad rap when memory was expensive. Why would we write applications in such a way as to waste memory, with variables whose values couldn’t be mutated? Memory is now super cheap, so this is not much of a concern. With that in mind, we can evaluate immutability in the context it was originally meant for: ensuring state. When we talk about parallel processing, the state of a process becomes very important. If we expect X to always be a specific value, but we are writing in a language where X can change, this can lead to problems.

4. Fault Tolerance

This one really impressed me when I started to dig into it. You may have heard that Erlang was invented for telephony. How often do you get a message from your phone company saying “we’re updating our systems so you won’t get a call for a while”. This is the level of uptime that is achievable with Elixir. Hot code swapping is another very cool feature. Think real Zero Downtime Deploys.

5. Community

This one is more personal to me. I’m attracted to technology that is not centralized in one company. Go and Rust are very much Google and Mozilla technologies, and those languages will always be at the whim of their corporate masters, whereas a language like Elixir that is not tied to any one company feels like it has a more democratic process behind its development: let many companies develop use-cases and improve the language. (I realize that Erlang falls into this category, but Erlang is pretty much set in stone at this point.)

The community around Elixir also feels very much like the Ruby community did early on. I said the same thing about the Ember.js community. I guess I’m just chasing that Ruby dragon, trying to catch that high again.


We’ve been exploring Elixir heavily over the past few months. The more I dig into the language the more I love it. We’re going to bet pretty heavily on Elixir and if you are a Ruby developer looking for a change in pace I highly suggest you check it out. The best place to start is with Dave Thomas’ Book.

over a year ago

DockYard @ Boston, MA › United States - Mar 04

EmberConf 2015 Day 2

Live blog of EmberConf 2015 over a year ago

Fault Tolerant UX by Dan Gebhardt


  • Users should be shielded from any application issues that are encountered

Transaction UX

  • Atomic: all or nothing
  • Ex.: if a user fills out a form, your app should save all the data, not just some.
  • Consistent: move between different states
  • Isolated: allows concurrent changes
  • Durable: changes are persisted

Apps MUST NOT violate the rules of transactional UX, or you are violating the user's trust

Forgiving User Experience

  • Fault Tolerant UX —> Forgiving UX
  • Transitional experience: to persist data that has not yet been saved but is in the process of being edited
  • Undo/redo
  • Offline support
  • Asynchronous interface (non-blocking)
  • user can make changes as quickly as possible (changes can be queued up and synced at your app's convenience)
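The ideas in this list (undo/redo plus non-blocking, queued changes) can be sketched in a few lines. This is a toy illustration, not code from the talk or from Ember; all names here are hypothetical:

```javascript
// Toy sketch of forgiving UX plumbing: an undo/redo stack plus a queue of
// pending changes that can be flushed (synced) at the app's convenience.
class EditSession {
  constructor(initial) {
    this.state = initial;
    this.undoStack = [];
    this.redoStack = [];
    this.pending = []; // changes queued for a later sync
  }
  apply(change) {
    this.undoStack.push(this.state);
    this.redoStack = []; // a new edit invalidates the redo history
    this.state = Object.assign({}, this.state, change);
    this.pending.push(change);
  }
  undo() {
    if (this.undoStack.length === 0) return;
    this.redoStack.push(this.state);
    this.state = this.undoStack.pop();
  }
  redo() {
    if (this.redoStack.length === 0) return;
    this.undoStack.push(this.state);
    this.state = this.redoStack.pop();
  }
  flush(sync) { // hand all queued changes to a sync function in one batch
    const batch = this.pending;
    this.pending = [];
    return sync(batch);
  }
}

const session = new EditSession({ title: "Draft" });
session.apply({ title: "Final" });
session.undo();
console.log(session.state.title); // "Draft"
session.redo();
console.log(session.state.title); // "Final"
```

Because each `apply` is non-blocking and merely queued, the user can keep editing while `flush` syncs batches in the background.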

Engineering Fault Tolerant UX

  • Ember provides simple, elegant patterns for building a consistent UX
  • Similarly, ember data provides durable UX
  • Ember data requires customization (extra code) to provide an atomic and isolated UX


Orbit application patterns

  • Client first development
  • Pluggable sources
  • Data synchronization
  • Editing contexts
  • Undo/redo


  • Provides a store with synchronous and asynchronous methods

Aligning Ember with Web Standards by Matthew Beale



  • The JS standardization process is about to change: ES5, ES6, ES2015!
  • Standards Process

    • 5 stages: strawman, proposal (polyfills), draft (experimental), candidate (compliant), finished (shipping)
    • Polyfill: A polyfill is a piece of code (or plugin) that provides the technology that you expect the browser to provide natively.
  • 2 major standards groups:

    • WHATWG + W3C (html / dom related)
    • TC39 + Ecma International (promises, classes, for loops, etc)
  • Aligning with standards is not a one time event. It is ongoing!

Why Standards?

  • The goal is productivity
  • Standards are portable, reflect best practices, and endure
  • Participants win

ES5 -> ES2015

  • New API for maps
  • Promises
  • Proxies


  • Babel will turn your ES6+ code into ES5 friendly code

    • Enables new syntax (fat arrow, let) and APIs (Map, Set), but not everything
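The features mentioned above run natively in any modern engine; Babel's job is compiling the syntax parts down to ES5 for older browsers. A quick taste:

```javascript
// A few ES2015 features from the talk: the new Map API, fat-arrow
// functions, and block-scoped `let`.
const squares = new Map();
[1, 2, 3].forEach((n) => squares.set(n, n * n)); // arrow function
console.log(squares.get(3)); // 9

let total = 0; // `let` is scoped to the enclosing block
for (let n of squares.values()) total += n;
console.log(total); // 14
```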

Aligning Ember’s Object Model

  • is this feature: stable? a good pattern? implemented correctly? implemented performantly?

ES Classes

  • Three new tools: class, extends, super
  • More gotchas:
  • setUnknownProperty
  • Transpiler output
  • New syntax
  • Changes in way that super behaves
  • Mixins
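The three tools fit together like this (a minimal standalone sketch, not Ember's object model; the class names are made up):

```javascript
// `class` declares a constructor-plus-prototype, `extends` sets up the
// prototype chain, and `super` delegates to the parent implementation —
// one of the behavior changes to watch when mixing in older object models.
class Component {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return `component:${this.name}`;
  }
}

class Button extends Component {
  describe() {
    // super.describe() calls Component's method with the same `this`
    return `${super.describe()} (clickable)`;
  }
}

console.log(new Button("save").describe()); // "component:save (clickable)"
```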

Remember: standards are a two-way street!

Ember Community Survey

Growing Ember One Tomster at a Time by Jamie White


How did a tech community come to be so vibrant? How can we continue?

1. The Tomster

  • Representation of productivity and friendliness
  • Tomster wore different hats
  • Custom tomsters
  • Good defaults
  • Having a friendly mascot makes things easier.
  • “Ambition” and “friendliness” are hard to juxtapose
  • Composing concepts

2. Language

  • Tomster is a tool. Productivity and friendliness implicitly part of conversation
  • Words stick; the right words enable conversations
  • “hack” is not a good vocabulary word - negative connotation

3. User Interface

  • Programming language and documentation with good user interface

4. Hackability

  • Parts have to be accessible - has to feel hackable.
  • Tomster was not overly done.

5. Roles

  • Many specialisms in the Ember Community: documenter, student, mentor, critic, explorer, and many more!

Community building is a design and engineering challenge

Interaction Design with Ember 2.0 and Polymer by Bryan Langslet


  • The web browser is the largest app runtime in the world, and will continue to grow
  • Every device has to be connected to the web
  • Web frameworks and toolkits are getting closer to native performance every day

“How can I - one person with a laptop - leverage my time as powerfully as I possibly can, every minute I work?”


  • A paradigm shift for web interaction design
  • The goal: to blur the lines between native and web applications

Web Components

  • Extends the browser itself
  • Polymer components extend a base component
  • Encapsulation
  • Declarative
  • True reusability/portability

Ember vs. Polymer Use Cases:

  • Ember: developer productivity, conventions
  • Ember: community
  • Ember: World-class routing and state management
  • Polymer: constantly pushing the web forward

Web Animations API

  • Has the best of both CSS and javascript animations
  • Web animations run outside of the main thread and can be accelerated on the GPU

Treasure Hunt Demo Application

  • “Demonstrates an experimental integration between ember.js routing and Polymer’s core-animated-pages component to create beautiful inter-state animated transitions”

Building Applications for Custom Environments with Ember CLI by Brittany Storoz


Ember CLI

  • Everyone's favorite command line tool
  • Build organized ember apps quickly
  • Fills huge void in toolset for JS devs

Ember CLI Addons

  • Extend ember-cli beyond core functionality
  • Follow standard npm conventions
  • Easy to create & install:

ember addon name-of-your-addon

ember install:addon name-of-your-addon

Firefox OS

  • Requirements that Ember CLI could not provide

    1. Generate and validate a manifest file (same concept as package.json)
    2. UI components that mimic OS interface
    3. Publish to Firefox marketplace
  • Ember CLI Addon was born to fill those requirements.

1st Requirement: Generating The Manifest

  • Creating Blueprints
  • rules for generating common code and file structures:

ember generate blueprint name-of-blueprint

2nd Requirement: FirefoxOS UI (Gaia)

  • Building components: bower install gaia-components/gaia-stubs
  • 2 responsibilities:
  • including dependencies and creating the addon
  • making both available to the consuming application

Components Review

  • Dependencies:
  • bower install within addon
  • bower install within consuming logic
  • Component logic
  • create component
  • export components to consuming application
  • define component template
  • Validation & Publishing
  • creating commands for control over when these things happen
  • includedCommands hook: returns object of commands which are found inside lib/commands
  • ember help lists out information about available add-on commands. And lots more useful info.

Building Real-time Applications with Ember by Steve Kinney


  • Integrating browser functionality and third party code into our applications. In this case, WebSockets.
  • What is a WebSocket Used for?
  • Collaboration, analytics dashboards, prompting user to upgrade application
  • Can I actually use WebSockets?
  • For the most part, yes (some earlier versions of IE not supported)
  • Socket.IO -> library for Node
  • Faye -> simple pub/sub messaging
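Faye is a real pub/sub library; the following is only a toy, in-memory sketch of the publish/subscribe pattern it implements (the `PubSub` class is hypothetical, not Faye's API):

```javascript
// Minimal pub/sub: handlers subscribe to named channels, and every
// message published to a channel is delivered to all of its handlers.
class PubSub {
  constructor() {
    this.channels = {};
  }
  subscribe(channel, handler) {
    (this.channels[channel] = this.channels[channel] || []).push(handler);
  }
  publish(channel, message) {
    (this.channels[channel] || []).forEach((handler) => handler(message));
  }
}

const bus = new PubSub();
const seen = [];
bus.subscribe("/messages", (msg) => seen.push(msg.text));
bus.publish("/messages", { text: "hello" });
console.log(seen); // ["hello"]
```

In a real app the bus would sit behind a WebSocket so that publishes on one client reach subscribers on every other client.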

Approach #1: Use Standalone Controller

  • Somewhat limited because it only works between controllers

Approach #2: Dependency Injection with Services

  • ember generate service websocket
  • Declare where you want to inject it inside the Initializer
  • Inside controller: websocket: Ember.inject.service()
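The idea behind `Ember.inject.service()` can be illustrated with a toy container (this is not Ember's actual implementation; all names are made up): consumers look a service up by name, and everyone gets the same shared instance.

```javascript
// Toy dependency-injection container: register a factory under a name,
// then lazily create one singleton instance shared by every lookup.
class Container {
  constructor() {
    this.registry = {};
    this.cache = {};
  }
  register(name, Factory) {
    this.registry[name] = Factory;
  }
  lookup(name) {
    // services are singletons: create once, then reuse
    if (!(name in this.cache)) this.cache[name] = new this.registry[name]();
    return this.cache[name];
  }
}

class WebsocketService {
  constructor() {
    this.sent = [];
  }
  send(msg) {
    this.sent.push(msg);
  }
}

const container = new Container();
container.register("service:websocket", WebsocketService);

// Two "controllers" looking up the same service get the same instance:
const a = container.lookup("service:websocket");
const b = container.lookup("service:websocket");
a.send("ping");
console.log(b.sent); // ["ping"] — shared state, because a and b are one object
```

This is why the service approach beats standalone controllers: any controller, component, or route can be handed the same long-lived socket wrapper.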

Approach #3: Using Socket.IO

  • Socket.IO is both a server- and client-side library

What is your favorite thing about JavaScript?


1. Measuring Performance with User Timing API by Bill Heaton


  • Measuring the differences in template rendering speeds between Ember.js v1.8.1 w/Handlebars v1.3 and Ember.js v1.10.0 w/HTMLBars
  • Check out his findings on his blog!

2. ember-islands by Mitch Lloyd


  • ember-islands
  • Render Ember components UJS-style to achieve “Islands of Richness”. You can arbitrarily render Ember components in the body of the page and they will all be connected to the same Ember app.

3. Ember Testing with Chemistry Dog by Liz Bailey


  • Migration from Rails to Ember
  • Ember does not provide as much documentation on testing
  • Would love to help make Ember more approachable to beginners

4. Running C++ in ember-cli with Emscripten by Michael Nutt


  • ember-cli-emscripten
  • Allows you to add C or C++ to your ember app, then require the exposed functions and classes.
  • Fibonacci sequence demo!

5. Ember Observer by Kate Gengler


  • Ember Observer
  • Gives addons a score out of 10
  • pulls hourly from npm and GitHub

6. CSS is Hard by Erik Bryn


Physical Design by Edward Faulkner


  • Computers are so abstract. Possibilities are endless, only hindered by your imagination.
  • Constrained by physics
  • Google's Material Design spec
  • does not break rules of physics
  • animations and motion appeal to us because they fit into our idea of how it should physically work.
  • Liquid Fire live demo!
  • npm install --save-dev liquid-fire for Ember 1.11+
  • Ember Paper

Closing Keynote: Chris Eppstein


Announcing: Eyeglass

  • Distribute Sass extensions as npm modules for LibSass
  • Will be able to integrate with a number of different build systems, including Ember CLI
  • Major performance improvements
  • The best parts of Sass and Compass, working with the best tools JS has to offer

A Selection of Chris’ Inspirational Messages

  • “Don’t be a Sasshole”
  • “People come to a community for the tech, but stay for the love!”
  • “Sass didn’t lose when I started ignoring the haters”
  • “If you use a framework you love, you’ll never work a day in your life”
  • “Secret to a vibrant community: be excellent to each other”
over a year ago