Planet Hosting & Delivery Last Update: Tuesday, 17. November 2015 07:30

Docker News - Aug 17

Docker Global Hack Day #3: Update on Local Editions, Prizes and More!

Join us for the next Docker Global Hack Day from Wednesday, September 16th through Monday, September 21st! Submit Your Hack Ideas! We are excited to announce the first round of cities committed to participating in this global event along with the prizes … Continued over a year ago

Docker News - Aug 16

Your Docker Agenda for LinuxCon

Docker and containers are making a big splash in Seattle next week! LinuxCon runs from Monday, August 17 through Wednesday, August 19 but there are plenty of co-located events during the week including ContainerCon, MesosCon and the Linux Plumbers Conference. … Continued over a year ago

Open Shift (by Red Hat) News | Ruby - Aug 13

Part 1: From App to OpenShift (Runtimes and Templates)

This is Part 1 of a two-part blog series that will help you bring your applications into OpenShift. OpenShift 3 allows you to deploy your application to the cloud, and the great thing is that it doesn’t matter if your cloud is public, private, or even hybrid. Typically, the PaaS platform (OpenShift in this case) will provide a set of predefined runtimes that a developer can use to deploy an application on top of. The developer does not need to worry about the infrastructure, the runtime setup, or the configuration; they just need to focus on their application and which runtime to use. The PaaS platform takes care of stitching it all together and running it. over a year ago

Docker News - Aug 12

Announcing Docker Toolbox

The fastest way to get Docker running in development written by Michael Chiang, Docker Today we’re announcing a new installer for Mac OS X and Windows called Docker Toolbox. We’ve been hearing again and again that it can be difficult to … Continued over a year ago

Docker News - Aug 12

Announcing Docker 1.8: Content Trust, Toolbox, and Updates to Registry and Orchestration

written by Ben Firshman, Docker   We’re thrilled to announce Docker 1.8 with support for image signing, a new installer, as well as incremental improvements to Engine, Compose, Swarm, Machine and Registry. You’ve been telling us that you want Docker to … Continued over a year ago

Travis News - Aug 11

Making Travis CI a Family-Friendly Place to Work: Our Maternity and Paternity Leave for US Employees

When Travis CI first started in February 2012 with 5 male Founders, they didn't put much thought into where the next employee would live and certainly didn't think about policies like vacation or conference attendance, least of all maternity/paternity leave. We're now in our 4th year and currently have 12 male and 11 female Builders (that's what we call employees), 8 of whom are living in the US (initially, our American employees were hired as contractors due to our company being in Germany, but to us they're all employees). As our company has grown over the years and we've increased our US coverage, we've had the challenge of finding a balance between the German and American startup work cultures. At the same time, with 3 female employees being pregnant at the beginning of the year, all of whom we hired knowing they were already pregnant, we needed to figure out what, if anything, we were offering for maternity leave. In Germany the maternity laws are quite generous, with 14 months of paid leave covered by the government. One of our employees was covered by this. The other two were based in the US, and while we try to offer the same benefits to our American employees as we do to the German ones, we didn't have anything in place for parental leave.

Let me digress for a bit and say that I am an American woman living in Berlin. I moved here in March of 2015 to seek a better life for myself and my future (far far future!) family. After living in San Francisco and working for startups the past 7 years, I have been disappointed time and time again by the offers made (or lack thereof) to pregnant employees. Once I even worked for a company that created a Dogs In The Office policy before creating a Maternity Leave policy... So when our CEO asked me to do some research and make a proposal for our US Maternity/Paternity Leave policy, I took it very seriously and wanted to offer something beyond the bare minimum.

You would be amazed at what I discovered during my research: offerings ranged from a small amount of money to help cover the time away, to somewhat generous paid time off plus cash bonuses, to as little as 2 weeks. TWO WEEKS! Another astounding thing was that most of the offerings were not equal for mothers and fathers. While I can understand the need for the mother to have more time away with her baby, having her partner around to not only assist her but also bond with the baby is crucial. So after a couple of weeks of research, we made a decision to offer our expectant mothers AND fathers:

  • 2 weeks before the due date paid at 100% (optional, but recommended)
  • 20 weeks for normal births paid at 100%
  • 24 weeks for births with complications paid at 100%
  • Flexible working hours after the 20/24 weeks are complete (part-time arrangements can be made)
  • Your job will be here for you when you return

When we relayed this information to the two US employees, one became a little teary because her last employer (a much bigger and older company) didn't offer anything. This being her second child, it was a huge relief to know she was going to have paid time off with flexibility upon return. While it was a great reaction, it shouldn't happen this way. If you value your employees, you should value their need for time away. At the same time, if you want to hire someone, whether or not they are already pregnant should be irrelevant.

We're continuously working on making Travis CI an inclusive and family-friendly place to work, and offering all of our employees time off with their newborns is an important part of that.

over a year ago

Codeship News - Aug 10

A Survey of Non-Rails Frameworks in Ruby: Cuba, Sinatra, Padrino, Lotus


It’s common for a Ruby developer to describe themselves as a Rails developer. It’s also common for someone’s entire Ruby experience to be through Rails. Rails began in 2003 by David Heinemeier Hansson, quickly becoming the most popular web framework and also serving as an introduction to Ruby for many developers.

Rails is great. I use it every day for my job, it was my introduction to Ruby, and this is the most fun I’ve ever had as a developer. That being said, there are a number of great framework options out there that aren’t Rails.

This article intends to highlight the differences between Cuba, Sinatra, Padrino, and Lotus, and how each of them compares to Rails.

The Format

First things first, we're going to take a quick look at Rack, which all of these frameworks (including Rails) are built on top of. After that we'll compare them using MVC as the basis of comparison, but we will also talk about their routers. Finally, we will quickly discuss when it might be appropriate to choose one over the other.

The Foundation: Rack

Rack provides a minimal interface between web servers that support Ruby and Ruby frameworks.

Rack is a big part of why the Ruby community has such a large number of both web servers and frameworks: it allows them to communicate with each other in a standard way. A web server like Puma, Unicorn, or Passenger only has to be built against one interface: Rack. As long as a framework is also built upon Rack, it's able to work with any one of those web servers. All of the frameworks I'll be discussing in this article are built on top of Rack, and Rails is no different.

A minimal Rack application contains an object which responds to the call method. This method should return an array with three elements: an HTTP status code, a hash of HTTP headers, and the body of the response. Essentially, a framework helps you organize and manage the creation of responses that get converted into this format.

Here is the smallest Rack application:

# config.ru
run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['Hello, Rack']] }

By running the rackup command (which will start a webserver), we can actually reach this endpoint from the browser or via curl.

rackup config.ru

Now we can reach it from curl:

-> curl 127.0.0.1:9292
Hello, Rack
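
Incidentally, the app doesn't have to be a Proc. Any object that responds to call works, so the same endpoint could be written as a plain class; here's a minimal sketch (the class name is just for illustration):

# config.ru
class HelloRack
  def call(env)
    ['200', {'Content-Type' => 'text/html'}, ['Hello, Rack']]
  end
end

run HelloRack.new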

Cuba

The tagline for Cuba is “Ceci n’est pas un framework,” which translates to “This isn’t a framework.” You might ask yourself why it’s even in the list, but luckily just below that phrase it says that Cuba is in fact a micro framework. That’s why I have it listed first, just above Rack.

Cuba was written by Michel Martens with the goal of following a minimalist philosophy and only providing what’s needed rather than some of the bloat and unused features that come along with much larger frameworks. Cuba is small, lightweight, and fast.

Router

Cuba provides a small router that allows you to define routes using its DSL.

Cuba.define do
  on root do
    res.write("Hello World!")
  end
end

This code is small but surprisingly powerful. The on method yields its block when its argument evaluates to true. So in this case, when a request is made to /, or root, it will yield the block of code. This block has access to res, an instance of Cuba::Response. Here you can tell it what status code to use, which text to render, etc.

Because of how on works, we can nest routes together like so, where we group all of our GET requests together:

Cuba.define do 
  on get do 
    on "about" do 
      res.write("About us") 
    end

    on root do
      res.redirect("/about")
    end
  end 
end

Model and controller

Cuba, as mentioned at the outset, is a micro framework, and it has chosen not to include what might traditionally be the model or controller. If you need a model for your app, you are welcome to bring in a solution from another framework, like ActiveRecord or Lotus::Model. What would live in a controller in Rails goes directly inside the router.

View

Cuba comes with a plugin called Cuba::Render which helps you with the View layer. It allows you to use ERB templates by default but can easily be configured to support Haml, Sass, CoffeeScript, etc. by using the Tilt gem.

To take advantage of the views you can use three methods: partial, view, or render. In this example, I’ll use render, which looks for an ERB file inside of the views folder and passes that as a content variable to a layout.erb file.

require "cuba" 
require "cuba/render" 
require "erb"

Cuba.plugin(Cuba::Render)

Cuba.define do 
  on get do 
    on "about" do 
      render("about") 
    end 
  end 
end

Here is our layout file (views/layout.erb):


<html>
  <body>
    <%= content %>
  </body>
</html>

And lastly our actual view template (views/about.erb):

<h1>About Us</h1>
<h2>Welcome, friends</h2>

Sinatra

Going up the ladder of complexity a little bit, we have Sinatra. Sinatra was created by Blake Mizerany in 2007, about four years after Rails began. Sinatra is probably the second most popular Ruby framework out there; it’s used by many coding bootcamps to give students their first introduction to building a Ruby app.

Router

Sinatra comes with a very capable router which is centered around two things:

  • the HTTP verb (GET, PUT, POST, etc.)
  • the path of the HTTP request.

Using these two, you can match the home page with get and "/".

The main job of one of these blocks is either to respond with the text (or JSON or HTML) to be rendered or to redirect to another page. In this example, I'm redirecting the root URL to "/hello", which renders some text.

get "/" do 
  redirect "/hello" 
end

get "/hello" do 
  "Hello, World" 
end

Model and controller

Like Cuba, Sinatra doesn’t come with a model or controller layer out of the box, and you’re welcome to use Active Record, Lotus::Model, or another ORM of your choosing.

View

Sinatra comes with a built-in View layer that allows you to use ERB, Haml, and Sass, among other templating engines. Each templating engine is exposed in the router via its own rendering method. To render an ERB file, we can simply say erb :hello; this renders the views/hello.erb file, which is embedded into a views/layout.erb file via a yield.

get "/hello" do
  erb :hello
end

Another feature that Sinatra has is the ability to define helper methods which can be used inside of the templates. To do that, you use the helpers method and define your own methods inside of the block.

helpers do
  def title
    "Sinatra Demo App"
  end
end

In our views/layout.erb file we can access the title method we defined as a helper.

<!doctype html>
<html>
  <head>
    <title><%= title %></title>
  </head>
  <body>
    <%= yield %>
  </body>
</html>

Padrino

Our goal with Padrino is to stay true to the core principles of Sinatra while at the same time creating a standard library of tools, helpers, and functions that will make Sinatra suitable for increasingly complex applications.

Padrino and Sinatra go hand in hand. Padrino is based on Sinatra but adds many additional tools such as generators, tag helpers, caching, localization, mailers, etc. It takes Sinatra, which sits on the lighter end of the framework spectrum, and adds some of the pieces it's missing to become a full-stack framework.

Router

In Padrino apps, the routes and controllers are combined in one place. Instead of having a routes file where all the routes go for the whole application (although this too is possible), the controllers essentially contain the routes. The users controller contains all routes related to a User. Anything you can do with routing in Sinatra you can do here, plus some extra features like route nesting.

Bookshelf::App.controllers :books do
  get :index do # /books
    @books = Book.all
    render 'index', layout: 'application'
  end
end

Model

Padrino doesn’t have its own model layer but rather comes with support for a large number of established ORMs that either live on their own or come from other frameworks.

Padrino supports Mongoid, Active Record, MiniRecord, DataMapper, CouchRest, Mongomatic, MongoMapper, Ohm, Ripple, Sequel, and Dynamoid. While generating an app you can choose which one you're going to use (if any), and then the model generators will help you generate models according to the ORM that you chose.

padrino g model Book title:string code:string published_on:date

View

Padrino has a view layer which is rendered from within the controller. It supports ERB, Haml, Slim, and Liquid out of the box. There isn't much to say here other than it works well and as expected. Variables can be passed to the view by setting an instance variable in the controller, and helpers generated for each controller can be used in the view as well.
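
For instance, the index route shown earlier set @books, so a matching template can render it directly. A minimal sketch (the file path follows Padrino's conventions and is an assumption here):

<!-- app/views/books/index.erb -->
<h1>Books</h1>
<ul>
  <% @books.each do |book| %>
    <li><%= book.title %></li>
  <% end %>
</ul>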

Controller

Controllers and routes in Padrino are essentially the same thing. The controller defines the routes and decides how to handle the response: whether to find data to render or to redirect elsewhere.
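
As a rough sketch of that idea (a hypothetical route, reusing the Book model from the generator example above), a create action might either redirect or render a template:

Bookshelf::App.controllers :books do
  post :create do
    @book = Book.new(params[:book])
    if @book.save
      redirect url(:books, :index)          # redirect elsewhere
    else
      render 'new', layout: 'application'   # or find data to render
    end
  end
end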


Lotus

Lotus is a Ruby MVC web framework comprised of many micro-libraries. It has a simple, stable API, a minimal DSL, and prioritizes the use of plain objects over magical, over-complicated classes with too much responsibility.

Lotus is a full-stack MVC framework created by Luca Guidi that began in 2013. The philosophy behind it, as described in a blog post by Luca, is simplicity: it aims to be built in a modular way, relying on plain old Ruby objects (POROs) rather than DSLs.

Lotus is actually comprised of seven different modules (or “micro-libraries”):

  • Lotus::Model
  • Lotus::Router
  • Lotus::Utils
  • Lotus::View
  • Lotus::Controller
  • Lotus::Validations
  • Lotus::Helpers

These can be used individually or brought together in a complete full-stack framework under the Lotus gem itself.

Lotus::Router

Lotus comes with a very clean and capable router. It feels very similar to the router that comes with Rails.

To route a GET request for the root URL to the Index action of our Home controller, we put:

get '/', to: 'home#index'

The Lotus Router also supports RESTful resources right out of the box.

resources :books, except: [:destroy]

Because Lotus is Rack compatible, we can respond with a Proc (because it has a call method), and we can even mount an entire Sinatra application inside our routes.

get '/proc', to: ->(env) { [200, {}, ['Hello from Lotus!']] }
mount SinatraApp.new, at: '/sinatra'

Lotus::Model

Lotus::Model follows a Domain Driven Design approach (see Domain Driven Design by Eric Evans), which implements the following concepts:

  • Entity
  • Repository
  • Data Mapper
  • Adapter
  • Query

An Entity is an object that is defined by its identity. In other terms, it’s the “noun” or “thing” in your app: a User, a Book, a Library, etc.

class Book
  include Lotus::Entity
  attributes :author_id, :price, :title, :code
end

A Repository is the next major object in Lotus::Model, whose job it is to mediate between an Entity and the persistence layer (PostgreSQL, MySQL, etc.). Entities aren’t responsible for querying themselves or persisting themselves to the database. That’s the job of the Repository, and it allows for the separation of concerns.

The Repository is where you’ll put all of your queries. They are actually private methods of this object, meaning you’re forced to keep them organized in one place. Controllers (or even worse, Views) no longer have intimate knowledge of how to query data.

In Active Record, which is an implementation of the Active Record pattern, you might write your query like this, and it could exist anywhere in your application (it's quite commonly found in Controllers):

Book.where(author_id: author.id).order(:published_at).limit(8)

But in Lotus it would live inside the repository.

class BookRepository
  include Lotus::Repository

  def self.most_recent_by_author(author, limit: 8) 
    query do 
      where(author_id: author.id). 
        order(:published_at) 
    end.limit(limit) 
  end 
end
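
A caller then simply asks the repository for what it needs, for example (using the method defined above):

books = BookRepository.most_recent_by_author(author, limit: 5)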

The job of the Data Mapper is to map our fields in the database to attributes on our Entity.

collection :books do
  entity Book 
  repository BookRepository

  attribute :id, Integer 
  attribute :author_id, Integer 
  attribute :title, String 
  attribute :price, Integer 
  attribute :code, String 
end

Lotus also comes with migrations to help you manage the schema of your database. Migrations in Lotus are quite similar to those in Rails: a set of command line commands helps you generate a new migration, and once you're done writing it, you can run the migration, check which migration the database is currently on, or roll it back.

bundle exec lotus generate migration create_books
# db/migrations/20150724114442_create_books.rb
Lotus::Model.migration do 
  change do 
    create_table :books do 
      primary_key :id 
      foreign_key :author_id, :authors, on_delete: :cascade, null: false

      column :code,  String,  null: false, unique: true, size: 128
      column :title, String,  null: false
      column :price, Integer, null: false, default: 100
    end
  end 
end

Lotus didn't feel the need to reinvent the wheel here; it uses Sequel under the hood for migrations and for communicating with the database.

bundle exec lotus db migrate
bundle exec lotus db version # 20150724114442

Here is how you would create and persist a Book:

author = Author.new(name: "George Orwell")
author = AuthorRepository.persist(author)

book = Book.new(title: "1984", code: "abc123", author_id: author.id, price: 1000)
book = BookRepository.persist(book)

Lotus::View

In Lotus, the View is an actual object which is responsible for rendering a template. This differs from Rails, where the controller renders the template directly.

# web/views/books/index.rb
module Web::Views::Books
  class Index
    include Web::View

    def title
      "All the books"
    end
  end
end

Inside of the template, we are now able to call books (which was exposed to us from the Controller/Action) and title to get the page title. Lotus comes with ERB templates by default, but it supports many different rendering engines such as Haml and Slim.

<h1><%= title %></h1>

<ul>
  <% books.each do |book| %> 
    <li><%= book.title %></li>
  <% end %>
</ul>

Lotus::Controller

One major difference between Lotus and Rails in the Controller layer is that each Action in Lotus is its own file and class. Another difference is that @ (instance) variables aren’t exposed to the View by default. We must explicitly tell the Action which variables we want to expose.

# web/controllers/books/index.rb
module Web::Controllers::Books 
  class Index 
    include Web::Action 
    expose :books

    def call(params)
      @books = BookRepository.all
    end
  end 
end

Summary

It should be said that each of these frameworks has its uses and its own characteristics that set it apart from the others. So which is the best one? Here's an answer that you'll hate:

It depends.

It depends on the requirements of your project, or, when all else is equal, on developer preference.

Here’s a summary of when it might be a good idea to choose one of these frameworks over another:

  • Cuba: Very close to Rack with very low overhead. I think its best use is for small endpoints where speed is crucial or for those who want full control over their entire stack, adding additional gems and complexity as needed.
  • Sinatra: Not as close to Rack, yet still far from being a full-stack framework such as Rails or Lotus. I think it’s best used when Cuba is too light, and Rails/Lotus are too heavy. It’s also a great teaching tool because of its small interface.
  • Padrino: For those who have an existing Sinatra app that is becoming more complex and warranting things that come in a full-stack framework. You can start with Sinatra and graduate to Padrino if needed.
  • Lotus: A great Rails alternative with a simple and explicit architecture. For those that find themselves disagreeing with “The Rails Way,” or for those that really enjoy the Domain Driven Design approach.



over a year ago

Docker News - Aug 07

Reactive Containers: How Kitematic works hand-in-hand with the Docker CLI

written by Jeff Morgan, Software Engineer at Docker, Inc.  Introduction Kitematic is a desktop app for Mac & Windows with the purpose of helping new Docker users run their first container in minutes. Kitematic streamlines the install process to setup … Continued over a year ago

Codeship News - Aug 05

Use An Ask, Don’t Tell Policy With Ruby


This article was originally published by Pat Shaughnessy on his personal blog. With his kind permission, we’re sharing it here for Codeship readers.

(Photo: Innisfree is an actual island in Lough Gill, County Sligo, Ireland.)

The next time you need to develop a new algorithm, ask Ruby for what you want, don’t tell it what to do.

Don’t think of your code as a series of instructions for the computer to follow.

Instead, ask Ruby for what you need: Your code should state the solution to your problem, even if you’re not sure what that solution is yet! Then dive into more and more detail, filling in your solution’s gaps as you do. This can lead to a more expressive, functional solution that you might not find otherwise.

Too often over the years I’ve written code that consists of instructions for the computer to follow. Do this, do that, and then finish by doing this third thing. As I write code I imagine I am the computer, in a way, asking myself: What do I need to do first to solve this problem? When I decide, this becomes the first line of code in my program. Then I continue, writing each line of code as another instruction for the computer to follow.

But what does “Ask, Don’t Tell” mean exactly? And how could Ruby possibly know the answer when I ask it something? An example will help you understand what I mean.




Parsing a Yeats Poem

Last week I needed to parse a text file to obtain the lines of text that appeared after a certain word. My actual task was very boring (separating blog articles from their metadata), so instead let’s work with something more beautiful, The Lake Isle Of Innisfree:

I will arise and go now, and go to Innisfree,
And a small cabin build there, of clay and wattles made:
Nine bean-rows will I have there, a hive for the honeybee,
And live alone in the bee-loud glade.

And I shall have some peace there, for peace comes dropping slow,
Dropping from the veils of the morning to where the cricket sings;
There midnight’s all a glimmer, and noon a purple glow,
And evening full of the linnet’s wings.

I will arise and go now, for always night and day
I hear lake water lapping with low sounds by the shore;
While I stand on the roadway, or on the pavements grey,
I hear it in the deep heart’s core.

My task is to write a Ruby script to return the line that contains a given word, along with the following lines:

[console screenshot: running the script prints the line containing the target word and the lines that follow it]

Telling Ruby What To Do

When I first wrote this script, I put myself in the computer’s shoes: What do I need to do to find the target word? I started writing instructions for Ruby to follow.

First I need to open the file and read in the poem:
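
A sketch of that step (reading the file name and target word from the command line is an assumption here):

lines = File.readlines(ARGV[0])
lines = parse(lines, ARGV[1])
puts lines.join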

Here File#readlines saves all the lines of text into an array, which the parse method will process, returning the result in another array. Later I join the result lines together and print them out.

How do I implement parse? Again, I imagine that I am the computer, that I am Ruby. How do I find the lines that follow glimmer? Well, obviously I need to loop through the array looking for the target word.


Once I find the word, I’ll start saving the lines into a new array called result. Since I want to save all the following lines and not just the matching line, I’ll also use a boolean flag to keep track of whether I’ve already seen the target.

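A sketch of that parse method, using the result array and boolean flag just described:

def parse(lines, target)
  result = []
  flag = false
  lines.each do |line|
    flag = true if line.include?(target)   # flip the flag once the target appears
    result << line if flag                 # keep the matching line and everything after it
  end
  result
end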

What’s wrong with this code? Nothing really. It works just fine, and it’s even somewhat idiomatic Ruby. In the past, I would have probably considered this done and moved on.

However, I can do better than this. I can ask Ruby for what I want, instead of telling Ruby what to do.


Ask Ruby For What You Want

Don’t imagine you are the computer. Don’t think about how to solve a problem by figuring out what Ruby should do and then writing down instructions for it to follow. Instead, start by asking Ruby for the answer.

What should my method return? An array of the lines that appear after the target word. To reflect this, I’ll rename my method from parse (telling Ruby what to do) to lines_after (asking Ruby for what I want).

This might seem like an unimportant detail, but naming methods is one of the most difficult and important things a programmer does. Picking a name for a method gives the reader a hint about what the method does, about what your intentions were when you wrote it. Think of writing code the same way you would think of writing an essay or story. You want your readers to understand what you are saying, and to be able to follow along. (You also want them to enjoy reading enough that they consider the code to be their own someday.)

To get started I'll write the new method to return an empty array.
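
A sketch of that stub:

def lines_after(lines, target)
  []
end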

Notice that I changed the label from “Instructions:” to “What do I want?” This reflects my new way of thinking about the problem.

Now, what does “appear after the target word” mean exactly? It means the lines that appear in the array after (and including) the line containing the target. Ah… in other words, the lines_after method should return a subset or slice of the array. Rewriting the problem in a different way led me towards a solution I hadn’t thought of before.

Now I can rewrite the “What do I want?” text.


I rewrote what I want from Ruby to be more specific: I want a “portion of the array” and I want the portion “including and following the line containing the target.” I haven’t written much code yet, but I’ve taken a big step forward in how I think about the problem.

I've also written code to return a subset of the array, lines[target_index..-1]. But my solution is still incomplete; what should target_index be?

Thinking about this a bit, it’s easy to see how to find the line containing the target string: I can use detect to find the line that includes the target word.


But I’m still not done. I need the index of the line containing the target, not the line itself. How can I find target_index? Again, I shouldn’t tell Ruby what to do (maybe create a local variable and loop through the lines checking each one). Instead, I should ask Ruby for what I need. What do I need? I need the index which corresponds to the line containing the target. In other words, I need to find (to detect) the target index, not the target line.

Here’s how to do it:

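A sketch that matches the description below:

def lines_after(lines, target)
  target_index = (0...lines.size).detect { |i| lines[i].include?(target) }
  target_index ? lines[target_index..-1] : []
end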

Here I use Ruby’s detect method to search a range of index values, not lines. Inside the block I check whether the line corresponding to each index (lines[i]) contains the target. At the bottom I return the correct slice of the array if I found the target, or an empty array if I didn’t.

Learning From Functional Languages

In my opinion this code is better than what I showed earlier. Why? They both work equally well. What’s the difference? Let’s compare the two.


First of all, I have simpler, more terse code. Less code is better. The lines_after method contains just four lines of code while the parse method contains nine. Of course, I could find ways to rewrite parse to use fewer lines, but any way you look at it lines_after is simpler than parse.

The parse method contains two local variables which are changed, or mutated, by code inside the loop. This makes the method harder to understand. What is the value of flag? What about result? To really understand how parse works you almost need to simulate the loop inside your head, thinking about how the flag and result values change over time.

The lines_after method also contains two local variables. However, they aren’t used in the same way — they aren’t changed as the program runs. The block parameter, i, while different each time the block is called, doesn’t change inside the block. Its meaning is clear and unambiguous while that block is running. Similarly, the target_index variable is set once to an intermediate value, not changed each time around a loop.

Terse, simple code that doesn’t change values while it is running is the hallmark of functional programming languages like Haskell or Clojure. While these languages allow you to write concurrent code without using locks, their chief benefit is that they encourage (Clojure) or even force you (Haskell) to write simple, terse code. Code that asks the computer for what you need, not code that tells the computer what to do.

But, as we’ve seen, you don’t need to abandon Ruby to write functional code.

Update: Simon Kröger and Josh Cheek both suggested using drop_while, which gives us an even more readable, functional solution:

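A sketch of that version:

def lines_after(lines, target)
  lines.drop_while { |line| !line.include?(target) }
end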

I also decided to rename the after method to lines_after, based on the comments from TenderGlove and John Kary. I agree with them that after would make more sense if I called it as a method on an object containing the lines (e.g., lines.after). But as a simple function like in this example, lines_after is more expressive.

Thanks guys!

Learning From Sandi Metz

In her famous book, Practical Object-Oriented Design in Ruby, Sandi Metz also mentions the Ask, Don’t Tell policy, using slightly different words. With her brilliant bicycle examples, Sandi shows us in Chapter 4 of POODR why we should be Asking for “What” Instead of Telling “How”. When you send a message to an object, you should ask it for what you want, not tell it what to do or make assumptions about how it works internally. Sandi shows us how this policy — along with other important design principles — helps us write classes that are more independent and decoupled one from the other.

The Ask, Don’t Tell policy applies equally well to functional programming and object oriented programming. At a lower level, it helps us write more terse, functional Ruby methods. Stepping back, it can also help us design object oriented applications that are easier to maintain and extend.

Update #2: Apparently I’ve (unknowingly) conflated “Ask, Don’t Tell” with the “Tell, Don’t Ask,” advice Dave Thomas has been giving us for years to make a different but related point about object oriented design. Dave explains here: Telling, Asking, and the Power of Jargon. He also disagrees with my opinion that the parse_lines example was written in a functional style.



over a year ago

Docker News - Aug 04

Docker

written by Brian Goff, Technical Alliances at Docker, Inc. Integration allows Oracle users to easily distribute applications built and deployed in Oracle Solaris Zones. Over the last year, I’ve had the opportunity to work closely with Oracle on a few … Continued over a year ago

Docker News - Jul 31

Happy SysAdmin Day: Our Favorite SysAdmin War Stories

Thank you everyone who submitted and be sure to thank your SysAdmins today! We received so many awesome submissions, it was very difficult to pick only a few war stories to share. Members of the Docker team voted and picked … Continued over a year ago

Codeship News - Jul 30

An Inside Look with Codeship: Madison May, CTO of indico


Madison May is the CTO of indico, which provides developer-friendly tools and APIs that turn raw text and image data into human insight. We had the chance to catch up with Madison and talk about the challenges of a fast-growing company, fine tuning a development workflow, and the future of machine learning.

An Inside Look with Codeship is a regular series providing an insider’s perspective on founding and building a tech company. Each session, we chat with some of the most exciting voices in tech and ask them where they’ve been, where they’re going, and what we could all be doing together. You can read all Inside Look interviews here.

Hi, Madison! To start, tell us about your role and how indico came to be founded.

indico was originally founded by Slater Victoroff and Alec Radford, two fellows from the undergraduate university I attended, a tiny school called Olin College of Engineering. After a series of successful consulting jobs, they decided to take the tools they had developed while consulting and produce a product designed to give developers easy access to machine learning tech. When the fall of 2014 rolled around, indico was accepted to the TechStars program, and they made the tough decision to put their undergraduate education on hold to pursue the idea. I had a keen interest in machine learning and developer tools, and I have an incredible amount of respect for Alec and Slater, so I decided to follow suit and joined the company as CTO. It’s been a wild ride ever since.

Was there one particular problem you wanted to solve with indico?

We looked at the landscape of companies that provided machine learning APIs, and it was painful to see the amount of hype and buzzword-filled language that had come to dominate the market. We saw an opportunity to provide a very straightforward, honest alternative, and Alec had the expertise required to translate some of the more recent advances from the academic community into a form that is usable in industry.

The idea of automating a process that is typically reserved for humans appeals greatly to me. In many ways the field of machine learning represents a frontier — the machine learning community is just beginning to put together bits and pieces of the theory required to automate some surprisingly complex tasks. I’m convinced that machine learning is going to play a critical role in tomorrow’s society, and I’m excited to help move the tech forward.

When you started, were there some initial roadblocks that surprised you?

One of the major challenges over the past nine months has been finding balance in the amount of process required to keep the company running smoothly. In the beginning, none of our schedules were aligned. Some of the night owls among us stayed until five in the morning and took the morning train home, while others got into the office bright and early. It was all very organic and undirected. We planned days ahead rather than weeks or months ahead.

When we first decided to address the problem by introducing more structure, we ended up overshooting a bit in the other direction. For a while we were definitely bogged down by some of the project management tools we had introduced, and we obsessed over processes that felt productive but may have actually been primarily a distraction. We were checking off boxes on checklists for the simple reason that checking off boxes on checklists felt productive. Processes and tools that make sense at larger companies don't necessarily provide value at a tiny seed-stage startup.

It took us a while to find a happy medium, but I feel that we’ve now arrived at a good balance between structure and flexibility.

Was the role of CTO what you expected it to be?

When I first joined indico, I expected the CTO role to be what most companies would call a “technical lead” and anticipated spending most of my time coding and informing software architecture decisions. In fact, that’s the role I ended up playing for only the first few months of indico’s life.

It took me a while to realize that often my time is better spent taking a step back and looking at the bigger picture. Trying to make sure that our team functions effectively is often more valuable than having an IDE open and my hands on the keyboard all the time.

How did you cope with your work moving from hands-on development to looking at the bigger picture?

I don’t think we’ve completed that transition yet. I would say about 50 percent of my time is still spent coding, and I still derive pleasure from the mechanics of software development. But I’ve also come to accept many other roles as well.

With regards to motivation, I’m very much a person who is driven by the end result rather than the process. Being primarily motivated by helping the company to build a product, rather than the intricate details of code, has helped to make that transition easier.

What do you look for when you’re bringing developers into the team?

Good communication skills are crucial, perhaps even more valuable than technical aptitude. We’ve interviewed quite a few individuals who are clearly extremely technically competent but who have had difficulty clearly communicating the reasoning behind their decisions. They very well might be able to churn out an absolutely brilliant solution if they are left alone for a long period of time, but that’s not necessarily valuable to us. Collaboration is key, and code written in isolation is code that’s liable to become a problem. It’s difficult turning down people who are very technically qualified, but we feel it’s best in the long run.




We are also just looking for passion, like every company. One of the strongest indicators is when someone comes in and communicates how they think indico could be made better. That’s a huge positive indicator.

Is there a tricky problem you’re wrestling with right now?

In our development workflow, one problem I’m focusing on is the time between hypothesis and the result of an experiment. Oftentimes you’ll make some initial guesses as to what the proper model is for a given problem, and then you’ll wait several hours before you get a result. We want to cut down on that response time. We want to minimize the time between developing a hypothesis about why something is wrong and evaluating whether or not your hypothesis is correct. When turnaround time is too long, the development process becomes very frustrating and very inefficient.

We are putting significant effort into making sure that turnaround time is short. The same goes for deployment cycles. We want to be able to go from initial conception of idea to something that we can test and validate with real people as soon as possible. For these reasons, continuous integration and continuous deployment are incredibly valuable to us.

Are there any unique characteristics about your users that have surprised you?

There are definitely times where the engineering team will feel that something is very intuitive, only to get feedback that our design decision is confusing once the product has been shipped. I think one of the best decisions we’ve made recently was hiring a user-experience lead to ensure that doesn’t happen. Every company needs someone with an ear to the ground to tell when users are happy or are frustrated.

Has your philosophy on testing changed since you started?

I don’t know if you would call it a philosophy. We test what we feel is necessary. We shoot for coverage standards, and we shoot for other related metrics, but we don’t test merely for the sake of hitting coverage metrics. There is real value in knowing when a test is being written for the sake of increasing a number and when a test is being written for the sake of helping us identify a problem and to prevent problems in the future.

We split all tests into two distinct groups: unit tests and functional tests. Functional tests ensure things look right from the user’s perspective, giving us the confidence we need to ship an update. Unit tests ensure that we are able to quickly identify and fix problems that do arise and help nip potential problems in the bud. These two styles of testing play two very different roles, and it’s important to separate the two.

What’s the biggest problem in the industry that’s affecting your company?

Infrastructure still feels very broken. It’s a very fragmented industry, and although a community seems to be growing around the use of containers for simpler infrastructure, there’s certainly no universal solution. I’m hoping that the community will unite around a single approach over the course of the next couple of years, but time will tell. It’s also one of the reasons that the services indico provides are so valuable to people — it’s difficult to run many machine learning algorithms at scale, so we handle hosting and infrastructure so our users don’t have to.

What do you think are the key skills required to be an effective CTO or technical founder?

It’s very important to listen to and gather feedback from the rest of the team. One of my main roles is aggregating opinions and information. Oftentimes, it’s not myself that’s the most qualified to make a decision. It is my role to aggregate that feedback and learn what the organization believes is the best solution.




It’s also important to very quickly understand that your own productivity should not be measured by technical output. Measuring the productivity of a CTO by measuring code output is a very, very poor metric. Productivity, for me, means enabling people to function uninterrupted and helping to break up work into components that are easily handled.

Communication is key. It’s my job to be able to communicate our company’s technical aspects to people who perhaps don’t come from a great technical background. Learning to cross that language barrier has been tough, especially when you’re working in an industry like machine learning. Learning to communicate some of the same concepts in everyday language is critical.

Communication is always a huge barrier for any new idea.

I found that there are a lot of analogs between software architecture and good methods of running a company. A lot of the red flags in architecture are similar to the red flags in an organization.




Like the idea of spaghetti code — the idea that there are too many components responsible for a single function as opposed to having each component be responsible for a single action. That was a pain we felt in the early days of indico. It wasn’t clear who was responsible for what aspects of the company. It took a while for us to figure out how to effectively route tasks throughout the organization and to ensure that the proper people contributed to decisions. Having everyone have a say in every decision doesn’t necessarily contribute to a good decision-making process. It just contributes to confusion.

There are also similarities in the principle of redundancy. We have been working hard on knowledge transfer, so that no knowledge resides in the head of a single person. Knowledge should spread out across our organization so we can continue to function in the absence of an individual.

If you could go back in time, what advice would you give yourself?

I would want to communicate to my former self that you should approach urgency in a different way. Early on, I felt a lot of urgency to get things done and to ship products, to make rapid decisions. That feels rewarding because rapid decisions lead to rapid change. But in reality, it’s also very important to make well-informed decisions. Don’t confuse short term growth with long term sustainability. The things that are good indicators in the very short term are not necessarily good indicators in the longer term.




Thanks, Madison!


over a year ago

Codeship News - Aug 17

Improving Shipscope Performance with WebSockets


Nearly a year ago, we released Shipscope, a Chrome extension for monitoring your projects on Codeship. We’ve received some really good feedback from our users, and I want to share some of that insight in this post.

In order to provide frequent status updates, Shipscope requested project status from our API server every 10 seconds. Multiply this by a rapidly growing userbase, and it really amounted to something.

There were two consequences of Shipscope’s frequent polling:

  1. As Shipscope became more popular, the load on our API server grew. This was not a critical problem, but if we can reduce the number of requests to our API server while providing timely updates to users, then we should really try to do that.

  2. Shipscope stayed pretty busy in the user’s browser, frequently requesting updates which most often would report no change to any build status. This is just wasteful use of CPU.




In our web-based UI, we have been using Pusher for a while to provide build progress updates to the browser via WebSockets. Using Pusher, our web client never needs to poll the server for build progress updates. The server sends a message on the socket when progress occurs in the build. The client then requests build info from the server and updates the UI. This provides immediate updates without any unnecessary polling.

For security reasons, we don’t send full build information across the WebSocket. We don’t want to send customer build info over a connection that could be unencrypted. Pusher allows payloads to be encrypted between the server and Pusher and then from Pusher to the client over the WebSocket.

However, that transfer between the HTTP port and the WebSocket port is not encrypted. Rather than trying to encrypt the build info before we send it, we chose to limit the data that we send over the WebSocket. So, the client is notified of an update, and the full update info is fetched securely over an HTTPS connection.


This approach has worked very well for us with our web client, so we decided we should give it a try with Shipscope. The first step is to get a Pusher app key, which is easy enough once you sign up at Pusher’s website.

After installing Pusher’s JavaScript client, Shipscope initializes a Pusher instance like so:

pusher = new Pusher(PUSHER_APP_KEY, {
    authEndpoint: 'https://codeship.com/pusher/auth',
    auth: {
      params: {
        api_key: options.api_key
      }
    },
    encrypted: true
})

The authEndpoint and auth properties allow us to verify that Shipscope can only subscribe to updates for which the api_key authorizes it. encrypted ensures that messages to and from Pusher are not in plain text.

Once we have a Pusher instance, we only need to subscribe to a channel. Each project has its own channel for publishing updates. So, if we are monitoring a project with id 9876, then we’ll subscribe like this:

var channel = pusher.subscribe("private-project-9876")

and finally,

channel.bind(UPDATE_EVENT, onUpdate.bind(projectInfo))

will call onUpdate() with projectInfo bound to this every time a build starts or finishes in our project.

That saves a lot of effort for both Shipscope running in your browser and our API server, which now only gets requests when something interesting has happened.

One more thing that we get from Pusher is offline notification. If you somehow find yourself away from a wifi connection, the Pusher client fires a state_change event to let Shipscope know. We use this currently to change the Shipscope icon to red to let you know that Shipscope is offline.

Setting this up is as simple as:

pusher.connection.bind('state_change', function() {
  if (pusher.connection.state == 'connected') {
    chrome.browserAction.setIcon({path: 'img/shipscope_icon_19.png'})
    getShipscopeSummary()
  } else {
    chrome.browserAction.setIcon({path: 'img/shipscope_icon_19_error.png'})
    chrome.browserAction.setBadgeText({text: ''})
  }
})

WebSockets have made a big difference to both our server and the Shipscope Chrome extension. Pusher made WebSockets easy. You can install Shipscope from the Chrome Web Store. Shipscope is open source, so feel free to review the code at https://github.com/codeship/shipscope.



over a year ago

Codeship News - Aug 13

Creating a Consistent Cross-platform Docker Development Environment


How many times have you read this statement:

“The great thing about Docker is that your developers run the exact same container as what runs in production.”

Docker is all the hype these days, and with statements like that, many are wondering how they can get on board and take advantage of whatever it is that makes Docker so popular.

That was us just six months ago when we started playing with Docker and trying to fit it into our processes. After just a few months we knew we liked it and wanted to run apps this way, but we were struggling with some of the Docker development workflow.

As the manager of a development team, I like using the same processes and technologies through the whole lifecycle of our applications. When we were running apps on AWS OpsWorks using Chef to provision servers and deploy applications, we used Vagrant with Chef to run the same recipes locally to build our development environment.

Try Codeship – The simplest Continuous Delivery service out there.

Challenges with a Docker development environment

It didn’t take long after we started developing with Docker to realize that the common statement isn’t as easy to achieve as it sounds.

This article highlights the top six challenges we faced when trying to create a consistent Docker development environment across Windows, Mac, and Linux:

  1. Running Docker on three different platforms
  2. Docker Compose issues on Windows
  3. Running minimal OS in vagrant (boot2docker doesn’t support guest additions)
  4. Write access to volumes on Mac and Linux
  5. Running multiple containers on the same host port
  6. Downloading multiple copies of docker images

Running Docker on multiple operating systems

Docker requires Linux. If everyone runs Linux this really isn’t an issue, but when a team uses multiple OSes, it creates a significant difference in process and technology. The developers on our team happen to use Windows, Mac, and Linux, so we needed a solution that would work consistently across these three platforms.




Docker provides a solution for running on Windows and Mac called boot2docker. boot2docker is a minimal Linux virtual machine with just enough installed to run Docker. It also provides shell initialization scripts to enable use of the Docker command line tools from the host OS (Windows or Mac), mapping them into the Docker host process running inside the boot2docker VM. Combined with VirtualBox, this provides an easy way to get Docker up and running on Windows or Mac.
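
As a rough sketch, a typical boot2docker session on a Mac host looks something like this (output omitted; paths and the Windows equivalent vary by installation):

# Create and start the boot2docker VM in VirtualBox
boot2docker init
boot2docker up

# Point the Docker CLI on the host at the daemon inside the VM
eval "$(boot2docker shellinit)"

# Docker commands now run against the VM's Docker daemon
docker version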

While boot2docker works well for simple use cases, it makes certain conditions difficult to work with. I’ll get into those in the following challenges. This topic can be hard to understand at first, so here’s a simple illustration of the three main options for running Docker locally:

Three options for running Docker locally

Using Docker Compose on Windows

Docker Compose is a fantastic tool for orchestrating multiple container environments and in many ways actually makes Docker usable for development. If one had to run all the normal Docker CLI commands and flags to spin up their environments and link them properly, it would be more work than many of us are willing to do.

Compose is still relatively new though, like Docker itself really, and as a result it does not work very well on Windows yet. There are so many issues on Windows in fact, that there is an epic on the project just to deal with them: https://github.com/docker/compose/issues/1085. Some good news though is Docker Toolbox claims Compose support for Windows is coming soon.

(re)Enter Vagrant

I mentioned earlier that boot2docker works well for creating a Linux VM to run Docker in, but that it doesn’t work well under all conditions.

Vagrant has been a fantastic tool for development teams for the past few years, and when I started working with Docker I was even a little sad to be moving away from it. After a couple months of struggling to get everything working with boot2docker though, we brought Vagrant back into the equation.

We liked how small boot2docker was since we didn’t need a full-featured Docker host, but unfortunately it doesn’t support the VirtualBox guest additions required for synced folders. Thankfully, we found the Vagrant box AlbanMontaigu/boot2docker, a version of boot2docker with the guest additions installed that weighs in at a light 28M. Compare that with a minimal Ubuntu 14.04 box at 363M.
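
As a sketch, bringing up that box is just the usual Vagrant workflow (the box name is as written above; the generated Vagrantfile will need project-specific settings):

# Generate a Vagrantfile that uses the lightweight boot2docker box
vagrant init AlbanMontaigu/boot2docker

# Boot the VM with VirtualBox and check that Docker is available inside it
vagrant up --provider virtualbox
vagrant ssh -c "docker version"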

Write access on volumes

Docker can mount the host filesystem into containers as volumes. This is great when the container only needs to read the files, but if the container needs to write changes to the files there can be a problem.

On Windows, VirtualBox synced folders are world-writable with Linux permissions of 777. So on Windows, write access is not an issue. However, on Linux and Mac, there are file ownership and permissions to deal with. For example, when I’m writing code on my Mac, my username is shipley, and my uid/gid is 1000. However, in my container, Apache runs as www-data with a uid/gid of 33.

So when I want Apache to generate files that I can access on my host machine to continue development with, Apache is not allowed to write them because it runs as a different user. My options would be either to change the ownership/permissions of the files on the Mac filesystem, or to change the user and uid/gid Apache runs as in the container to shipley and 1000. Both options are pretty sloppy, though, and neither works for team development.

With VirtualBox, you can change the user/group and permissions that synced folders are mounted as, but it’s not really easy or convenient by default. Vagrant provides a very convenient way to do this though. This was one of the biggest motivators for us to go back to Vagrant. With Vagrant, all we need to add to our Vagrantfile is:

config.vm.synced_folder "./application", "/data", mount_options: ["uid=33","gid=33"]

With the extra mount_options, the /data folder inside the VM will be owned by uid/gid 33, which, inside an Apache container based on Ubuntu, maps to user/group www-data.

Funny thing, though: as I mentioned earlier, by default the filesystem permissions on Windows are 777, so write access there isn’t an issue. However, we found that when using Docker volumes to mount a custom my.cnf file into a mariadb container, mariadb doesn’t like it when the configuration file is world-writable. So again, Vagrant helps us out by making it simple to also set file permissions in the mount:

config.vm.synced_folder "./application", "/data", mount_options: ["uid=33","gid=33","dmode=755","fmode=644"]

Running multiple containers that expose same port

My team primarily develops web applications, so for us each of our projects/applications expose port 80 for HTTP access during development.

While boot2docker for Windows/Mac and native Docker on Linux makes getting started quick and easy, you can only have one container bound to a given port on the host. So when we’re developing multiple applications or multiple components of an application that expose the same port, it doesn’t work. This really isn’t a show stopper, but it is an inconvenience as it requires running apps on non-standard ports, which just gets awkward to work with and hard to remember.

Running each app in its own VM via Vagrant, however, solves this problem. Of course, it introduces a couple more issues of its own; now you have to access the app via an IP address or map a hostname to it in your hosts file, as shown below. This really isn’t that bad, though, since you should only have to do it once per app.
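
For example, assuming the VM is given a private IP of 192.168.33.10 in its Vagrantfile (the IP and hostname here are made up), mapping a hostname is a one-time step:

# Map a memorable hostname to the VM's private IP (once per app)
echo "192.168.33.10 myapp.local" | sudo tee -a /etc/hosts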

Another problem this solution introduces is that running multiple VMs requires a lot more memory. It also seems a bit counterproductive, since Docker is supposed to remove the burden of running full VMs. Anyway, it’s a small price to pay to have multiple apps running at the same time and accessible on the same ports.

Downloading multiple copies of Docker images

The most annoying problem created by this solution though is, now that Docker is running in multiple VMs, each one needs to download any dependent Docker images. This just takes more time and bandwidth, and if we developers hate one thing, it’s waiting.

We were able to get creative though: on bootup of the VM, we check the host machine folder defined by the environment variable DOCKER_IMAGEDIR_PATH for any Docker images, and if any are found we docker load them. Then, after docker-compose up -d completes, any new images that have been downloaded are copied into the DOCKER_IMAGEDIR_PATH folder.
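
A minimal sketch of what that caching amounts to, assuming DOCKER_IMAGEDIR_PATH points at a synced host folder (the image name is only an example):

# On VM bootup: load any cached images from the shared host folder
for image in "$DOCKER_IMAGEDIR_PATH"/*.tar; do
  [ -e "$image" ] && docker load -i "$image"
done

# After docker-compose up -d: save newly pulled images back into the cache
docker save -o "$DOCKER_IMAGEDIR_PATH/mariadb.tar" mariadb:latest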

Bingo, only need to download each image once now.

Conclusion

After running into all these challenges and finding solutions to them, we now have a simple Vagrantfile we can copy into each of our projects in order to provide a consistent development experience regardless of what operating system the developer is using. We’re using it in multiple projects today and have even gotten to the stage of continuous integration and blue/green deployment to Amazon Elastic Container Service in production (but those are topics for another article).

I expect we’ll face more challenges as time goes by and our projects change, and as we do we’ll continue to evolve our solutions to account for them. Our Vagrantfile is open source, and we welcome suggestions and contributions.

Feel free to post your questions and comments below to make this article more meaningful, and hopefully we can address issues you’re facing too.

References

Docker
Docker Compose
Docker Toolbox
VirtualBox
Vagrant
boot2docker
Custom boot2docker vagrant box with Guest Additions installed
Our vagrant-boot2docker project





over a year ago

Codeship News - Aug 12

An Inside Look with Codeship: Peter van Hardenberg, founding developer of Heroku Postgres

Peter van Hardenberg is the founding developer of Heroku Postgres. We recently had a chance to catch up with Peter and talk about the challenges facing industry, the Heroku team, and how Mario influenced their software development. An Inside Look with Codeship is a regular series providing an insider’s perspective on founding and building a […]

The post An Inside Look with Codeship: over a year ago

Peter van Hardenberg is the founding developer of Heroku Postgres. We recently had a chance to catch up with Peter and talk about the challenges facing industry, the Heroku team, and how Mario influenced their software development.

An Inside Look with Codeship is a regular series providing an insider’s perspective on founding and building a tech company. Each session, we chat with some of the most exciting voices in tech and ask them where they’ve been, where they’re going, and what we could all be doing together. You can read all Inside Look interviews here.

Hi, Peter! Can you introduce yourself and what you’ve been working on at Heroku?

Sure thing. I’ve been part of Heroku for about six years, since we were about a dozen employees. I run product at the moment for our ecosystem and data products. I was the founding developer for the Heroku Postgres product.

When you started building Heroku Postgres, was there one major problem for your users you wanted to solve?

Heroku had been building this development platform and application runtime platform, and we realized at some point that, in order to do that, we were going to need a really strong story around data as well as around running people’s code. So the founder Adam Wiggins came to me, and he said, “Peter, I want you to look at the databases.” And I said, “I hate databases, but I agree that this is really broken, and so I will spend six months on it.” And that was five and a half years ago now.

Ultimately your code is important to your business, right? Your code is how you express value, but your real business is your data. If you lose your data, your business is over.




I had a friend who worked for a startup where, during a data center move, the servers literally fell off the truck. The truck went over a bump, the servers fell out of the back, and all the disks were lost. That was the end of the company. For us, our primary goal has always been durability and ensuring business continuity for our customers, but that’s really just table stakes.

We observed from the beginning that databases were always something that intimidated developers and that had become governed by a priesthood of specialists who, by and large, were not well-aligned to business objectives. Not to speak ill of DBAs, but I think that the relationship between DBAs and developers was always very challenging. Our goal has always been to make every developer able to run applications at a very large scale, no matter how much experience they have with the data.

How did you approach problems you encountered by growing so rapidly?

Fundamentally, when you’re going through exponential growth, you need to be getting better faster than you’re growing. This is a real challenge that we understood from the beginning could only be solved by basically becoming the best at automation, anywhere.




We’ve got millions of databases and many, many thousands of servers; and we do it all with no ops team. Unless you count, obviously, our infrastructure as a service provider. Our team has always been integrated between development, operations, and user support. Everybody on the team does on-call. Everybody on the team does support, and everybody on the team does development of different varieties.

Obviously, as we’ve grown, we’ve specialized somewhat. But it’s still a guiding principle that everyone should be exposed to customers as directly as possible. This is a little bit controversial. I think it’s quite common for people to try and shelter their developers from users, but I think that you really only can build that true empathy by actually having to talk to and deal with users on a regular basis.

You talk about having your developers not so siloed away from the company. Does that change what you look for in your developers when you’re hiring new members of your team?

I think what we really look for is grit, the ability to learn, and gallows humor.

I think that gallows humor is highly correlated with the ability to survive in a high-energy startup. You know, you need to recognize that everything is miserable, and it’s unlikely to get better; but that, ultimately, that’s what we’re doing for our users. We suffer so that, hopefully, other people won’t have to. I think that if you can’t laugh at that, you’re really going to have a hard time. Databases in particular are miserable, you know?

One way we talk about our goal internally is to just try and claw back the misery of using databases just a little bit. Databases are still terrible to work with, but I think we have managed to make them more approachable and more joyful for developers to use. And, ultimately, that’s what we’re in the business of doing.

You’ve previously talked about the necessity of wearing many different hats while leading a product, but knowing how to take off the hats when it comes to people issues and culture. How did you come to that lesson as a manager?

It was advice given to me by Jim Lindenbaum, one of the Heroku founders. I’d been working with the team, and I was trying to figure out how to balance people’s individual desires for their own career development with what the team needed.

He gave me a really good piece of advice. He said, “No. You need to figure out what the team needs and get that. But when you’re working with people, you need to stop thinking about what the team needs and just focus on what that person needs.”

That sometimes creates a tension, obviously, between those things. If a person wants to go in a particular direction, but that’s not what the team needs right now, that can be challenging to manage. But I think that it’s important to be able to take off your “what does the product need right now?” hat and really focus on “what does the person that I’m working with here in this situation need?” It’s the same when you’re working with customers versus your business. I think the ability to view things through those different lenses is really essential.

I understand Heroku works with remote teams.

Absolutely. Heroku is more than half distributed. We have about 200 employees. About 55 percent work from somewhere other than San Francisco. This was a real challenge for us to overcome, initially. In the early days, I actually relocated to San Francisco because we were an all-local company. And as we grew, we realized just the incredible potential that was available by taking advantage of the global market, as opposed to just the Bay Area market.

How do you make sure important knowledge and information doesn’t get lost and reaches your entire team?

We use a whole variety of different tools to manage that. Everything we do is in a Google Hangout or is in email. It’s unsexy, but mailing lists are fundamentally one of the most key tools we use to run our business – and, of course, tools like HipChat and GitHub and everything else that everybody uses.

People talk in Hangouts, but decisions are made in mailing lists and on pull requests, so we always have a written record of these things. When I say “always,” I’m sort of rounding up, of course. The simple truth is that if you’re not in the office you’re going to miss some things. If you weren’t in the Hangout, you’re going to miss some things, and that’s just a fact of life. But that’s true whether you’re in the office or not to some extent. If you weren’t at the meeting, or if you were out at a conference that week. Ultimately, put things into writing and make sure that as much as possible is communicated.

Is there a unique technological difficulty your projects have had to overcome?

When we started this, no one had done this before. Heroku Postgres predates Amazon’s RDS service, and so at the time, there weren’t really any database services out there except for Amazon’s SimpleDB, which was not as robust or functional as was required for most consumer applications. Some of the difficulty is scale and availability. Some of it fundamentally is about understanding the user experience. It’s really about investing deeply and understanding what a developer needs to understand, thinking about what do they need to know and then automating that.

I think the greatest personal technological advantage is that my background is diverse. I’ve done video game development. I’ve worked in research laboratories. I’ve done Shakespearean scholarship. When I came to Heroku, I brought sort of a pretty diverse set of experiences with me.

The best trick that we pulled in was actually drawn from video games, which was to treat our databases a little bit like the enemies in a Mario game. All characters in video games are little state machines that travel around, and when they reach the edge of a platform, they turn around. Or, if they see Mario, they throw a boomerang. That kind of thing. We really treat our databases kind of the same way, which is that if a database has an interruption, then it spawns a little state machine, and it says, “Okay, I need to find a new server,” “I need to repair this disk,” whatever the task of the day is. And so really what we have is a very large, distributed job processing queue, where we’re continually taking every database that we have and saying, “What does this thing need next?” We’re feeding all the events that we get from the world into it, so we can think about it a bit like a very large game where the goal is to maximize the availability of our service. And I think we play it pretty well.

Is there a way that Heroku grows the team skills or mentors learning?

I think participation in a company like Heroku really exposes you to a lot of challenges. We don’t have any formal mentoring programs in place. Actually, we have historically biased towards bringing in fairly experienced engineers. One of the things I’ve seen in some of our teams is that bringing in a more junior engineer can really give everybody else a lot more focus and discipline, because it forces them: 1) to work with that person and explain what they’re doing and 2) to be really conscious of how somebody who’s less experienced might perceive and understand a thing.

You know, you don’t want to have a team that’s unbalanced towards all junior people, but I think that having all senior people can actually be a bit of a crutch as well. Building a team with a blend of skills is really valuable, not just in terms of cost management, but in terms of having a few people who are newer in the industry and are really passionate about growing their skills from nothing. They don’t have bad habits to unlearn. They see things with really clear eyes.

Is there a way that you and Heroku try to give back to developers outside your company?

Our mandate is to try and make databases less awful. We have a saying, “The worst code is code you write yourself, the second-worst code is code somebody else maintains, and the best code is no code at all.” So we always take the position that if you’re writing code, that’s probably a mistake. If you can get someone else to maintain that code, or get that code maintained by a community, that’s way better.




So to some extent, we upstream and open-source a lot of the changes we make in Postgres and in other tools. We try to open-source and release tools and get a community built around them. We see that, just from a business strategy perspective, as awareness that will come back to Heroku. But it’s also about fundamentally aligning to our goal, which is to make building software better, less miserable, more effective.

What do you think is the biggest challenge for the industry right now?

I think we have a lot of technical and social problems.

On the technical front, we’re going through a really Cambrian period right now, which is wonderful. If you remember, the stagnant — maybe “stagnant” is unfair, but there was a time when everything was Java. I wouldn’t call it monoculture, but it was certainly a less diverse technical world, say, 15 years ago than it is now.

Today, there are multiple package managers for Node. It feels like there’s a new programming language of the week that people are getting into. Is it Go, Rust, Crystal? I think that’s great, but it also introduces a lot of new challenges in terms of managing the complexity of projects over time. It’s wonderful that we have these new tools that make it possible to digest this complexity. It’s easier to build your product in multiple languages if you use tools like Heroku, which allow you to deploy all your applications the same way, no matter what language they’re written in. On the other hand, I think it’s getting harder and harder for developers to pick tools, because there are so many options which appear viable but that may or may not survive in the market.

But if you want to talk about what really worries me in technology, it’s that we have a relatively small group of undiverse individuals making decisions that will impact the direction of our society for a long time. There’s poor representation from women. There’s poor representation from a lot of visible minority communities, and we are not doing a good job of expanding that.




If you look at the Gamergate controversies of the last year, it’s tied into that. I see areas where we’re making progress. I want to give some credit to the Ada Initiative. I’m a big fan of the work they’re doing improving gender representation, but it really feels like we’re still in the Stone Age and that this is still a boys’ club. That needs to be fixed.

Is there something you’d like to see the industry as a whole doing more of in a practical sense?

We’re still losing ground in terms of the number of women in engineering, and I think that there are a lot of people doing great work there. I’ll give credit to organizations like Hackbright, who are trying to shift the ratio, but I definitely want to see greater investment there.

I’d encourage any company to get involved with organizations like Hackbright and see what they can do to help include communities that are not well-represented in technology today in their company.

Is there a recent advancement in the industry right now that will transform the future of your work?

This is an industry that is in constant flux. It’s hard to pick out anything — as William Gibson said, “The future is already here. It’s just not evenly distributed yet.”

Broadly speaking, I’m really excited about the consumerization of developer technologies. Heroku has been a big part of this. That’s why I’ve been so passionate and excited about working at Heroku. If you go back a few years, developers were really treated like experts, and the idea was that, “We build tools to be powerful, and developers will figure them out.” I think that the new crop of companies coming up, Codeship among them, take a fundamentally different perspective, which is to say that developers are people who deserve to have great tools to use, and that those tools need to be approachable and easy to use and intelligible. Whatever we can take off a developer’s mind is something that won’t distract them from solving the problem that they’re there to solve.

Our goal is not to build user-friendly tools precisely, but it’s to make developers more able to build better software by reducing the number of distractions and frustrations in doing so. Five years ago, there were very few companies doing this. Today, there are dozens, hundreds, even, focused on so many different parts of that stack.

Try Codeship – The simplest Continuous Delivery service out there.

The other big trend supporting this is the move to the cloud. Now these services can collaborate much more effectively than they could a few years ago, because people are mostly hosted on Amazon. It means that low-latency interconnection between those services is possible, so you can buy your database from one person, you can buy your CI service from another, and you can buy your runtime from a third. You can run your own ersatz, custom stuff alongside as well, and those things can all interoperate. I think that that kind of marketplace and focus on developer experience is huge.

What do you think are the key skills required to be an effective product manager or engineering lead?

Know how to ask questions. The first question is always, “What problem are you trying to solve?” If you don’t know what problem you’re trying to solve — whether it’s a technical, product, or people problem — you won’t know if you’ve accomplished your goal. So many people get excited about an implementation, or an idea; but they run ahead of making sure that they’ve documented what problem they’re solving and also what problems they’re not solving.

A good example might be if you’re building a new feature to reduce the amount of time for a log-on. You know you’re building a new log-in page. Why are you building it? Are you trying to reduce the friction in sign-up? Okay. How are you going to know that you did that at the end? You’ve got to make sure that whenever you set out to do something, you know its use. Is it actually going to be useful? Is it going to be used? What problem does a user have that this solves?

Number two is empathy. You need to have empathy for the user, for the team members, for management, for your reports. Ultimately, software is hard, and people are harder. And people make software, so you’re ultimately going to be solving people problems, whether you’re an engineer, a product manager, or a manager.

If you could go back in time to the start of your career, what one piece of advice would you give yourself?

I’ve been very fortunate. My whole career, I’ve basically followed a strategy of never being afraid to say yes to an interesting opportunity. Nothing I’ve ever done has been planned far in advance. It’s always been about recognizing a new opportunity and following it when you see it. That used to worry me more than it does now, but if I had tried to plan my career, I certainly wouldn’t be where I am right now.

So, I would say: take opportunities, feel comfortable with the uncertainty. It’ll all work out.





over a year ago

Docker News - Aug 12

Introducing Docker Content Trust

written by Diogo Mónica, Docker   Image Signing and Verification using The Update Framework (TUF)   A common request that we’ve heard from the Docker community is the need to have strong cryptographic guarantees over what code and what versions of … Continued over a year ago
written by Diogo Mónica, Docker   Image Signing and Verification using The Update Framework (TUF)   A common request that we’ve heard from the Docker community is the need to have strong cryptographic guarantees over what code and what versions of … Continued over a year ago

Codeship News - Aug 11

How to Use Heroku PGBackups

Backing up your data is one of the most critical activities for your application. Heroku PGBackups makes the entire experience pretty simple but comes with a lot of flexibility too. Setup a Test App Heroku provides a free PostgreSQL database to every Rails application, because Heroku loves you. To walk through the backups functionality, let’s […]

The post How to Use Heroku PGB over a year ago

Backing up your data is one of the most critical activities for your application. Heroku PGBackups makes the entire experience pretty simple but comes with a lot of flexibility too.

Set Up a Test App

Heroku provides a free PostgreSQL database to every Rails application, because Heroku loves you. To walk through the backups functionality, let’s start by creating a new Heroku app. If you’ve never set up a Heroku app before, check out their Getting Started page for the language of your choice.




Heroku provides a getting-started Ruby app that already has PostgreSQL configured, which is what I’ll be using.

git clone https://github.com/heroku/ruby-getting-started.git
cd ruby-getting-started
heroku create
git push heroku master
heroku run rake db:migrate
heroku open

Alright, now that we’ve got our test app working, let’s load in some sample data.

heroku run rails console
100.times do |i|
  Widget.create(
    name: "Widget #{i}",
    description: "This is my widget. There are many like it, but this one is mine.",
    stock: i
    )
end

Heroku PGBackups Basics

Now that we have some critical data loaded into our system, we want to make sure that if something bad happens we can get it back to its original state. Heroku PGBackups provides a lot of options for doing this, but we’ll step through them all.

Create a manual backup

heroku pg:backups capture

This will begin a backup immediately with a progress meter. You can press CTRL + C to get your prompt back, and the backup will continue in the background. It defaults to running against the main DATABASE_URL but if you have multiple databases you can specify the environment variable that the connection string is stored in to trigger a backup there.
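
For example, if a second database is attached under a config var such as HEROKU_POSTGRESQL_PINK_URL (a made-up name here), you can target it explicitly:

heroku pg:backups capture HEROKU_POSTGRESQL_PINK_URL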


Additionally, if you visit the Heroku Apps web interface and click on your Heroku PostgreSQL database you’ll notice a button called PG Backups. From here you can quickly create a database snapshot and view all of your existing snapshots to download or delete them.


Scheduling backups

Manual backups are great if you need to pull a copy of the database to run some tests or save things as they are before doing something major with the data. But for most applications we’re going to want those backups to be scheduled regularly. Heroku PGBackups lets you set those schedules from the command line like so:

heroku pg:backups schedule DATABASE_URL --at '01:00 America/New_York'

That just scheduled a backup for 1 a.m. EST every night. Heroku keeps seven daily backups and one weekly backup for every plan. As your plan increases so does your backup retention. Premium plans will include 12-month retention with monthly backups in addition to the daily and weekly backups.

You can see a list of your scheduled backups with:

heroku pg:backups schedules
=== Backup Schedules
DATABASE_URL: daily at 1:00 (America/New_York)

And cancel them with:

heroku pg:backups unschedule DATABASE_URL

Download your backups

Whether it’s a manual or scheduled backup, at some point you may want to save it outside of Heroku. The PG Backups interface described earlier will give you point and click access to download any of them. But we’re all about automation, so ideally that’s something we’d want to do from the command line.

Heroku provides a command to give you an expiring link. You can either use the command with no arguments to get the latest backup, or you can specify the name of a specific snapshot. To use that command to automatically download our backup, we can use curl and pass in the URL that the Heroku command gives us.

curl -o latest.dump `heroku pg:backups public-url`

Restoring from backups

Loading data back in from one of these is fairly straightforward as well. Just reference a backup name, an app name::backup name to pull from another one of your apps (like production to staging), or a public URL of a pg_dump, followed by the config reference for the database to restore the data into. Here are a few examples:

heroku pg:backups restore b101 DATABASE_URL
heroku pg:backups restore myapp-production::b101 DATABASE_URL
heroku pg:backups restore 'https://mys3url/that-leads-to-db.dump' DATABASE_URL --app sushi

Be careful when doing restores this way; it will completely overwrite the target database.

The database dumps are just standard PostgreSQL pg_dump files, so you can easily load them into any other non-Heroku PostgreSQL database using pg_restore. More details on how to do that can be found in the Heroku import/export docs.
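
For instance, to load the latest.dump we downloaded above into a local database (the host, user, and database name here are placeholders), something along these lines should work:

pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydb latest.dump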

You can also make direct copies from one database to another using almost the same syntax (assuming the colors listed below are references to database environment variables).

heroku pg:copy COBALT GREEN
heroku pg:copy myapp-production::ORANGE GREEN

Advanced Backups for Heroku PGBackups

In a production application, depending solely on nightly backups probably isn’t going to be good enough. What happens if there’s a problem in the middle of the day? Manually triggering a backup every hour might seem like a good idea at first, but that could get seriously out of hand and put a lot of extra stress on the database. Worse yet, what happens when even the nightly backups cause enough stress on the database that you’d like to avoid them?




The Write Ahead Log (WAL) records commits before the commits are actually written to the database. The records of these commits are used to stream read replicas as well as backups to allow point-in-time restoration. There are a number of different ways to take advantage of this to solve our backup problems.

Schedule backups from a follower

If you’re on a database plan at the Standard level or up, you can create followers/read replicas. A follower will receive streaming updates; if nightly backups against your primary database are out of the question because of the stress involved, scheduling those backups against a follower will provide the same benefit without degrading application performance.

Try Codeship – The simplest Continuous Delivery service out there.

Point-in-time recovery

Now, imagine a scenario where some very important data gets accidentally deleted, overwritten, changed, or corrupted somehow. It doesn’t matter how it happens (user error, developer error, a security breach); all that matters is that it happened.

This is where Heroku’s Continuous Protection comes into play. To rollback to a previous point in time, you can visit the web interface and create a rollback — a clone of your database but at a specific point in time. Just specify the date and time, and poof… instant recovery (relative to the size of your database).


You can also create a rollback from the command line. Continuous Protection is set up using the open source tool WAL-E, which continuously archives the PostgreSQL WAL to Amazon S3.
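
From the CLI, a rollback is provisioned as a new database add-on flagged as a rollback of an existing one; roughly like this (the plan, color, and timestamp are placeholders, and the exact flags may differ between CLI versions):

heroku addons:create heroku-postgresql:standard-0 --rollback HEROKU_POSTGRESQL_BRONZE --to '2015-08-11 14:00+00'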

Real World

Now let’s look at a real world scenario. You need to get data from a backup without taking down the running application. Running pg:copy (pg_restore) isn’t going to be good enough, because that will overwrite the existing database, costing you all of your new data since the backup. A point-in-time restore would have the same problem, although less pronounced.

Realistically, in such a situation you restore a copy of your backup or rollback database and then write code to find and merge in the data that was missing. We’ll simulate that now.

If you haven’t already captured a backup with our sample data, do so now. This is a real world situation so I’m going to be working with a production tier database (Standard or higher) and that will mean utilizing functionality that isn’t available to Hobby and Free tier databases to handle the restore.

If you want to follow along, here are the instructions to upgrade.

Now open up a heroku run rails console and run the following to delete half of our data in a semi-random fashion:

Widget.all.each_with_index do |widget, i|
  widget.delete if i % 2 == 0
end

Oh no! Now, using one of the above techniques, either create a rollback or create a new database and restore a backup to it. I’m going with the rollback, so I’ll visit the web interface and choose the version of my database from a few minutes ago.

Now, let’s connect to our database and make sure it has everything.

$ heroku pg:psql HEROKU_POSTGRESQL_CYAN_URL
=> SELECT COUNT(*) FROM widgets;
100

Perfect! Now how do we get the data merged? There are a lot of ways to do that. You could write a script that connected to both databases and inserted the missing records or use a desktop client, like Navicat, that included data syncing. But there’s a much easier way.

PostgreSQL includes something called Foreign Data Wrappers that allow one PostgreSQL database to connect to a number of external data sources like Redis or another PostgreSQL database. Heroku makes this even easier with data links. You can read more about data links in Heroku’s documentation, but for our purposes we want to link our main database (BRONZE) with our restored backup (CYAN) like so:

$ heroku pg:links create HEROKU_POSTGRESQL_CYAN HEROKU_POSTGRESQL_BRONZE
New link 'sighing_duly_2312' successfully created.
$ heroku pg:links

The latter command will show you all of the links that are configured. If for some reason you get an error after the second command, make sure you’ve installed the pg-extras plugin:

heroku plugins:install git://github.com/heroku/heroku-pg-extras.git

Now that we’re connected, open up your psql prompt and let’s merge that data with a query.

heroku pg:psql
INSERT INTO widgets (id, name, description, stock, created_at, updated_at) (
  SELECT * 
  FROM sighing_duly_2312.widgets backup 
  WHERE backup.id NOT IN (SELECT id FROM widgets)
);

That query selects, from the backup database we restored, all of the rows whose ids aren’t in our main table, then inserts them into our main table for us.

Conclusion

PostgreSQL can manage your data integrity, and Heroku PostgreSQL makes sure it’s safe in case of emergency with almost no effort on your part. Restoring is a more complicated beast, but thanks to Foreign Data Wrappers (pg:links), merging in just what we need gets a lot easier. Always back up your data.





over a year ago

Docker News - Aug 10

Docker Demo Webinar: FAQ

Thanks again to everyone who joined last week’s Docker Demo webinar! There were more questions than time to answer them – and we wanted to share the Q&A publicly so everyone had access to this information on their Docker journey. Don’t … Continued over a year ago
Thanks again to everyone who joined last week’s Docker Demo webinar! There were more questions than time to answer them – and we wanted to share the Q&A publicly so everyone had access to this information on their Docker journey. Don’t … Continued over a year ago

Open Shift (by Red Hat) News | Ruby - Aug 07

CA Technologies Transforms App Development with OpenShift Enterprise – Video

With OpenShift Enterprise by Red Hat, CA Technologies provides enterprise Platform-as-a-Service (PaaS) to its development teams to more quickly build and deploy web-based apps for customers. over a year ago
With OpenShift Enterprise by Red Hat, CA Technologies provides enterprise Platform-as-a-Service (PaaS) to its development teams to more quickly build and deploy web-based apps for customers. over a year ago

Codeship News - Aug 06

How We Manage Passwords and Data Sharing at Codeship

Sharing passwords and secret data with the people on your team securely is painful. You want to limit the passwords that a specific person has while being able to give them access to more at any point. All of the above should be shared in a completely secure manner that’s easy to use for tech […]

The post How We Manage Passwords and Data Sharing at Codeship appeared first on via @co over a year ago

Sharing passwords and secret data with the people on your team securely is painful. You want to limit the passwords that a specific person has while being able to give them access to more at any point. All of the above should be shared in a completely secure manner that’s easy to use for tech and non-tech team members alike.

As we have grown our team at Codeship over the last few months and are using more and more services, we’ve started to struggle with this ourselves. There are many services that provide good user management (incidentally, we’ve just launched our organizations feature recently), but there are also many that don’t.




Sharing Strategy

Some of us were already using 1Password for our personal passwords, so we decided to use it company wide. It runs on Macs and mobile devices. Some of our developers are using Linux laptops, but as all the services we use in our engineering team have great user management, they can rely on Linux native secure key management.

We created an admin 1Password vault that is shared between Moritz (our CEO), Jim (our VPE), and me (CTO here at Codeship). This vault contains usernames and passwords to the admin accounts of various services and anything we need to onboard or offboard somebody from Codeship (those accounts are additionally secured with 2FA where we can). As this vault is shared between the three of us, somebody should always be available to manage any service.

Next, we created separate vaults for each department in the company. These are encrypted and synced via Dropbox. The main passwords for those vaults are stored in the admin vault I mentioned before. Thus Jim, Mo, and I have access to all company-wide admin accounts. Obviously, our machines and mobile devices are strictly locked down.

The department vaults can then be shared easily with anyone on those specific teams, so that they have easy access to everything they need. Additionally, we now have a secure way to transfer small pieces of data between people in one department; they can just add the data to their department vault, get it synced, and remove it after it was shared. This removes any need for sharing anything through unencrypted channels like Slack.

To have a really secure system, we need to make sure the main passwords that everyone uses are strong. This mostly comes down to proper education and following up with people regularly. In the next section, I’ll explain our strategy for creating good and secure passwords that people can remember easily. This is taken directly from our internal wiki, so you should be able to copy it verbatim and use it internally if you like.

Try Codeship – The simplest Continuous Delivery service out there.

Creating a Good Master Password

A good master password should be random while at the same time memorable. Passwords selected by humans are typically very easy to crack automatically; humans limit their selection of words and characters to make a password easier to remember.

And even adding a special character here or there doesn’t necessarily solve that. To counter this, randomly select words by throwing dice and then connect those words with special characters. You can use this method to create a complex master password that you can then use in connection with 1Password.




1Password

1Password helps you manage your passwords so you only need to remember one strong master password. You can still have unique and very strong passwords for each service you use, of course. 1Password stores all of your passwords in encrypted vaults. You will have a main vault and can create or import additional vaults (which you can use for storing team-specific passwords).

Download 1Password from the AgileBits download page and take a look at their Getting Started guide.

How to create a strong master password

Diceware is a great tool for creating strong, random master passwords. It’s a list of 7,776 preselected words. Pick five of these words by rolling dice multiple times. Let’s walk through how to set up a master password using Diceware.

Note: Don’t use fewer than five words. It makes brute forcing the password far too easy. More words are always better.

  • Get some dice or go to https://www.random.org/dice/?num=1
  • Look at the Diceware list: http://world.std.com/~reinhold/diceware.wordlist.asc
  • Roll the dice five times and write down the numbers (e.g., 61353).
  • Look through the list to find the word associated with that dice roll (e.g., if you rolled 61353, the word would be “today”).
  • Roll the dice 25 times to get five words. Don’t throw out any of the words that were selected at random. If you use some but not others, you’re again limiting the choice of words, which makes the password much easier to crack.
  • Put the words together and have either whitespace or special characters in between each word.
  • If you feel you might forget the password, write the password on a piece of paper by hand (do not print from your laptop or store on any electronic device) and store it somewhere safe in your home where you can get to it. If you forget your password, you will not be able to log into your 1Password again. This strong password is to keep your digital data secure. The chance of somebody discovering/stealing your password paper and using it to break your accounts is very low if you keep it well stored at home.

EXAMPLE MASTER PASSWORD:

  • Dice rolls: 14364 23346 61556 34523 21322
  • Five words: blonde dove tram jl comet
  • Capitalization: Blonde Dove Tram Jl Comet
  • Special characters/numbers: Blonde@Dove2Tram*Jl%Comet

This method will give you a password where you should be able to remember the five main words easily, and the four special characters/numbers are then not a large hurdle. You don’t have to capitalize the first letter, or you can also capitalize the last one or capitalize every second letter, if you’d like other options.
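
If you want to automate the word selection instead of rolling physical dice, here is an illustrative sketch that picks five words from a local copy of the Diceware list using the system's randomness (the filename is an assumption; the list's lines are "<roll> <word>"):

# Illustrative only -- the steps above use physical dice or random.org.
# Pick five random words and print them on one line.
shuf -n 5 diceware.wordlist.txt | awk '{ printf "%s ", $2 }'; echo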

Conclusions

As security is very important for us at Codeship, we need to make sure we keep access to internal services secure. At the same time, access and sharing need to be easy so as not to create incentives to circumvent the secure system in any way.

With our new 1Password-based system, we’ve got a great new strategy in place that will allow us to grow our respective teams while keeping our customers’ data secure.

I hope sharing this with you helps with some of the pain of managing your own passwords that every team, and especially every startup, deals with. If you have other strategies, please let us know in the comments.

Additional resources:

1Password Blog

Bruce Schneier on passwords

Diceware

StackExchange Discussion





over a year ago

Codeship News - Aug 04

Using the gdb debugger with Go

Troubleshooting an application can be fairly complex, especially when dealing with highly concurrent languages like Go. It can be fairly simple to add print statements to determine subjective application state at specific intervals, however it’s much more difficult to respond dynamically to conditions developing in your code with this method. Debuggers provide an incredibly powerful […] over a year ago

Troubleshooting an application can be fairly complex, especially when dealing with highly concurrent languages like Go. It can be fairly simple to add print statements to determine subjective application state at specific intervals, however it’s much more difficult to respond dynamically to conditions developing in your code with this method.

Debuggers provide an incredibly powerful troubleshooting mechanism. Adding code for troubleshooting can subtly affect how an application runs. Debuggers can give a much more accurate view of your code in the wild.

A number of debuggers exist for Go; some inject code at compile time to support an interactive terminal, which negates some of the benefit of using a debugger. The gdb debugger allows you to inspect your compiled binaries, provided they were linked with debug information, without altering the source code. This is quite a powerful feature, since you can pull a build artifact from your deployment pipeline and debug it interactively. You can read more about this in the official Golang docs, so this guide will give a quick overview of the basic usage of the gdb debugger for Go applications.

There seem to have been a number of changes in gdb since this was announced, most notably the replacement of the -> operator with . for accessing object attributes. Keep in mind that there may be subtle changes like this between versions of gdb and Go. This guide was written using gdb version 7.7.1 and Go version 1.5beta2.

Getting Started with gdb Debugging

To experiment with gdb I’m using a test application, the complete source code for which can be found in the gdb_sandbox repo on GitHub. Let’s start with a really simple application:

package main

import ( 
    "fmt" 
)

func main() { 
    for i := 0; i < 5; i++ {
        fmt.Println("looping") 
    } 
    fmt.Println("Done") 
}

We can run this code and see some very predictable output:

$ go run main.go
looping
looping
looping
looping
looping
Done

Let’s debug the application. First, build the Go binary and then execute gdb with the binary path as an argument. Depending on your setup, you’ll also need to load Go runtime support via a source command. At this point we’ll be in the gdb shell, and we can set up breakpoints before executing our binary.

$ go build -gcflags "-N -l" -o gdb_sandbox main.go 
$ ls
gdb_sandbox  main.go  README.md
$ gdb gdb_sandbox
....
(gdb) source /usr/local/src/go/src/runtime/runtime-gdb.py
Loading Go Runtime support.

First off, let’s put a breakpoint (b) inside the for loop and take a look at what state our code has in each loop execution. We can then use the print (p) command to inspect a variable from the current context and the list (l) and backtrace (bt) commands to take a look at the code around the current step. The application execution can be stepped using next (n) or we can just continue to the next breakpoint (c).

(gdb) b main.go:9 
Breakpoint 1 at 0x400d35: file /home/bfosberry/workspace/gdb_sandbox/main.go, line 9. 
(gdb) run 
Starting program: /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/gdb_sandbox Breakpoint 1, main.main () at 
/home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:9 
9         fmt.Println("looping") 
(gdb) l 
4         "fmt" 
5         ) 
6  
7 func main() {
8         for i := 0; i < 5; i++ { 
9         fmt.Println("looping") 
10        } 
11        fmt.Println("Done") 
12 } 
(gdb) p i 
$1 = 0 
(gdb) n 
looping 
Breakpoint 1, main.main () at 
/home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:9 
9        fmt.Println("looping") 
(gdb) p i 
$2 = 1 
(gdb) bt
#0  main.main () at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:9

You can set breakpoints using a relative file and line number reference, a GOPATH file and line number reference, or a package and function reference. The following are also valid breakpoints:

(gdb) b github.com/bfosberry/gdb_sandbox/main.go:9
(gdb) b 'main.main'

Structs

We can make the code a little more complex to show the benefits of live debugging. We will use the function f to generate a simple pair, x and y, where y = f(x) when x is even, and y = x otherwise.

type pair struct { 
    x int 
    y int 
}

func handleNumber(i int) *pair { 
    val := i 
    if i%2 == 0 { 
        val = f(i) 
    } 
    return &pair{ 
       x: i, 
       y: val, 
    } 
}

func f(x int) int { 
    return x*x + x 
}

Also we can change the looping code to call these new functions.

    p := handleNumber(i)
    fmt.Printf("%+v\n", p)
    fmt.Println("looping")

Let’s say we need to debug the value of y. We can start by setting a breakpoint where y is being set and then step through the code. Using info args we can check function parameters, and as before bt gives us the current backtrace.

(gdb) b 'main.f' 
(gdb) run 
Starting program: /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/gdb_sandbox

Breakpoint 1, main.f (x=0, ~anon1=833492132160) 
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:33 
33       return x*x + x 
(gdb) info args 
x = 0 
(gdb) continue 
Breakpoint 1, main.f (x=0, ~anon1=833492132160) 
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:33 
33       return x*x + x 
(gdb) info args 
x = 2 
(gdb) bt
#0 main.f (x=2, ~anon1=1) 
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:33
#1 0x0000000000400f0e in main.handleNumber (i=2, ~anon1=0x1)
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:24
#2 0x0000000000400c47 in main.main ()
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:14

Since we’re in a condition where the value of y is being set based on the function f, we can step out of this function context and examine code farther up the stack. While the application is running, we can set another breakpoint at a higher level and examine the state there.

(gdb) b main.go:26 
Breakpoint 2 at 0x400f22: file 
/home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go, line 26. 
(gdb) continue 
Continuing.

Breakpoint 2, main.handleNumber (i=2, ~anon1=0x1) 
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:28 
28             y: val, 
(gdb) l 
23         if i%2 == 0 { 
24             val = f(i) 
25         } 
26         return &pair{ 
27             x: i, 
28             y: val, 
29         } 
30     } 
31  
32 func f(x int) int { 
(gdb) p val 
$1 = 6 
(gdb) p i 
$2 = 2

If we continue at this point we will bypass breakpoint 1 in function f, and we will trigger the breakpoint in the handleNumber function immediately since the function f is only executed for every second value of i. We can avoid this by disabling breakpoint 2 temporarily.

(gdb) disable breakpoint 2 
(gdb) continue 
Continuing. 
&{x:2 y:6} 
looping 
&{x:3 y:3} 
looping 
[New LWP 15200] 
[Switching to LWP 15200]

Breakpoint 1, main.f (x=4, ~anon1=1) 
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:33 
33         return x*x + x 
(gdb)

We can also clear and delete breakpoints using clear and delete breakpoint NUMBER respectively. By dynamically creating and toggling breakpoints we can efficiently traverse application flow.

Slices and Pointers

Few applications will be as simple as pure numerical or string values, so let’s make the code a little more complex. By adding a slice of pointers to the main function and storing generated pairs, we could potentially plot them later.

    var pairs []*pair
    for i := 0; i < 10; i++ {
        p := handleNumber(i)
        fmt.Printf("%+v\n", p)
        pairs = append(pairs, p)
        fmt.Println("looping")
    }

This time around, let’s examine the slice of pairs as it gets built. First of all, we’ll need to examine the slice by converting it to an array. Since handleNumber returns a *pair type, we’ll need to dereference the pointers and access the struct attributes.

(gdb) b main.go:18 
Breakpoint 1 at 0x400e14: file /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go, line 18. 
(gdb) run 
Starting program: /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/gdb_sandbox &{x:0 y:0}

Breakpoint 1, main.main () at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:18 
18         fmt.Println("looping") 
(gdb) p pairs 
$1 = []*main.pair = {0xc82000a3a0} 
(gdb) p pairs[0] 
Structure has no component named operator[]. 
(gdb) p pairs.array 
$2 = (struct main.pair **) 0xc820030028 
(gdb) p pairs.array[0] 
$3 = (struct main.pair *) 0xc82000a3a0 
(gdb) p *pairs.array[0] 
$4 = {x = 0, y = 0} 
(gdb) p (*pairs.array[0]).x 
$5 = 0 
(gdb) p (*pairs.array[0]).y 
$6 = 0 
(gdb) continue 
Continuing. 
looping 
&{x:1 y:1}

Breakpoint 1, main.main () at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:18 
18         fmt.Println("looping") 
(gdb) p (*pairs.array[1]).y 
$7 = 1 
(gdb) continue 
Continuing. 
looping 
&{x:2 y:6}

Breakpoint 1, main.main () at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:18 
18         fmt.Println("looping") 
(gdb) p (*pairs.array[2]).y 
$8 = 6 
(gdb)

You’ll notice that while gdb does identify pairs as a slice, we can’t index it directly. In order to access the members of the slice, we need to go through its backing array via pairs.array. We can check the length and capacity of the slice, though:

(gdb) p $len(pairs)
$12 = 3
(gdb) p $cap(pairs)
$13 = 4

At this point we can run the loop a few times and monitor the increasing value of x and y across different members of the slice. Something to note here is that struct attributes can be accessed via the pointer, so p pairs.array[2].y works just as well.
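
The reason pairs.array, $len, and $cap work is that a Go slice is represented as a small header struct in the runtime. A rough sketch of that layout (field names match what gdb exposes; shown here purely for illustration):

    import "unsafe"

    // Approximate layout of a slice header in the Go runtime. gdb surfaces
    // these fields, which is why pairs.array, $len(pairs), and $cap(pairs) work.
    type sliceHeader struct {
        array unsafe.Pointer // pointer to the backing array
        len   int            // number of elements currently in use
        cap   int            // size of the backing array
    }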


Goroutines

Now that we can access structs and slices, let’s make the application even more complicated. Let’s add some goroutines into the mix by updating our main function to process each number in parallel and pass the results back through a channel:

    pairs := []*pair{}
    pairChan := make(chan *pair)
    wg := sync.WaitGroup{}
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(val int) {
            p := handleNumber(val)
            fmt.Printf("%+v\n", p)
            pairChan <- p
            wg.Done()
        }(i)
    }
    go func() {
        for p := range pairChan {
            pairs = append(pairs, p)
        }
    }()
    wg.Wait()
    close(pairChan)
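
Note that this version also needs the sync package imported alongside fmt; the import block would now look something like:

    import (
        "fmt"
        "sync"
    )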

If we wait for the WaitGroup to complete and inspect the resulting pairs slice, we can expect the contents to be exactly the same, although perhaps in a different order. The real power of gdb here comes from inspecting goroutines in flight:

(gdb) b main.go:43 
Breakpoint 1 at 0x400f7f: file /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go, line 43. 
(gdb) run 
Starting program: /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/gdb_sandbox

Breakpoint 1, main.handleNumber (i=0, ~r1=0x0) 
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:43 
43         y: val, 
(gdb) l 
38     if i%2 == 0 { 
39         val = f(i) 
40     } 
41     return &pair{ 
42         x: i, 
43         y: val, 
44     } 
45 } 
46  
47 func f(x int) int { 
(gdb) info args 
i = 0 
~r1 = 0x0 
(gdb) p val 
$1 = 0

You’ll notice that we placed a breakpoint in a section of code which is executed within a goroutine. From here we can inspect local variables as well as look at other goroutines in progress:

(gdb) info goroutines 
  1 waiting runtime.gopark 
  2 waiting runtime.gopark 
  3 waiting runtime.gopark 
  4 waiting runtime.gopark 
* 5 running main.main.func1 
  6 runnable main.main.func1 
  7 runnable main.main.func1 
  8 runnable main.main.func1 
  9 runnable main.main.func1 
* 10 running main.main.func1 
  11 runnable main.main.func1 
  12 runnable main.main.func1 
  13 runnable main.main.func1 
  14 runnable main.main.func1 
  15 waiting runtime.gopark 
(gdb) goroutine 11 bt
#0 main.main.func1 (val=6, pairChan=0xc82001a180, &wg=0xc82000a3a0)
    at /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go:19
#1 0x0000000000454991 in runtime.goexit () at /usr/local/go/src/runtime/asm_amd64.s:1696
#2 0x0000000000000006 in ?? ()
#3 0x000000c82001a180 in ?? ()
#4 0x000000c82000a3a0 in ?? ()
#5 0x0000000000000000 in ?? ()
(gdb) goroutine 11 l 
48         return x*x + x 
49     } 
(gdb) goroutine 11 info args 
val = 6 
pairChan = 0xc82001a180 
&wg = 0xc82000a3a0 
(gdb) goroutine 11 p val 
$2 = 6

The first thing we do here is list all running goroutines and identify one of our handlers. We can then view a backtrace and, essentially, send any debug command to that goroutine. The backtrace and the listing clearly do not match, though the backtrace does seem to be accurate. Running info args on that goroutine shows us its local variables, as well as variables captured from the main function outside the scope of the goroutine function, which are prefixed with an &.
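
To make that mapping concrete, here is the goroutine closure from above again, annotated with how gdb reports each variable (the comments are illustrative, not part of the original source):

    go func(val int) { // val is a real parameter: info args reports it as "val = 6"
        p := handleNumber(val)
        fmt.Printf("%+v\n", p)
        pairChan <- p  // captured from the enclosing scope: reported as "pairChan = <address>"
        wg.Done()      // captured by reference: reported with an & prefix, as "&wg = <address>"
    }(i)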

Conclusion

When it comes to debugging applications, gdb can be incredibly powerful. This is still a fairly fresh integration, and not everything works perfectly. Using the latest stable gdb, with go1.5beta2, many things are still broken:

Interfaces

According to the Go blog post, Go interfaces should be supported, allowing you to dynamically cast them to their concrete types in gdb. This seems to be broken.

Interface{} types

There is currently no way to convert an interface{} value to its underlying concrete type.
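
For context, this is the kind of conversion that is straightforward in Go source but, as of this writing, has no gdb equivalent. A hypothetical helper using a type switch (assumes fmt is imported):

    // describe is a hypothetical helper: Go code can recover the concrete
    // type behind an interface{} with a type switch, but gdb cannot.
    func describe(v interface{}) string {
        switch t := v.(type) {
        case int:
            return fmt.Sprintf("int: %d", t)
        case string:
            return fmt.Sprintf("string: %q", t)
        default:
            return fmt.Sprintf("unhandled type: %T", t)
        }
    }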

Listing a different goroutine

Listing surrounding code from within another goroutine causes the line number to drift, eventually resulting in gdb thinking the current line is beyond the bounds of the file and throwing an error:

(gdb) goroutine 11 l 
48         return x*x + x 
49     } 
(gdb) goroutine 11 l 
Python Exception <class 'gdb.error'> Line number 50 out of range; /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go has 49 lines.: 
Error occurred in Python command: Line number 50 out of range; /home/bfosberry/.go/src/github.com/bfosberry/gdb_sandbox/main.go has 49 lines.

Goroutine debugging is unstable

Handling goroutines in general tends to be unstable; I managed to cause a number of segfaults by executing simple commands. At this stage, you should be prepared to deal with some issues.

Configuring gdb with Go support can be troublesome

Getting gdb to work with Go can be troublesome: it requires the right combination of paths and build flags, and gdb’s auto-loading functionality doesn’t seem to work correctly. In particular, loading the Go runtime support via a gdb init file initializes incorrectly; it may need to be loaded manually via a source command once the debugging shell has been initialized, as described in this guide.

When should I use a debugger?

So when is it useful to use gdb? Adding print statements and debugging code is usually a much more targeted approach, but gdb is still worth reaching for:

  • When changing the code is not an option.
  • When debugging a problem where the source is not known, and dynamic breakpoints may be beneficial.
  • When working with many goroutines where the ability to pause and inspect program state would be beneficial.




The post Using the gdb debugger with Go appeared first on the Codeship blog.

over a year ago

Docker News - Aug 04

The Docker Governance Advisory Board (DGAB) Grows and Evolves

written by Jeff Borek (IBM), Chair of DGAB Just Like the Docker Project   The Conference DockerCon 2015 was June 22 – 23 in San Francisco with over 2100 attendees. Last year at this time, about 500 people attended to … Continued over a year ago
written by Jeff Borek (IBM), Chair of DGAB Just Like the Docker Project   The Conference DockerCon 2015 was June 22 – 23 in San Francisco with over 2100 attendees. Last year at this time, about 500 people attended to … Continued over a year ago

Docker News - Jul 31

Your Docker Agenda for August

Excited for Docker Global Hack Day #3 and DockerCon 2015 Europe? Us too! As you wait for these bigger Docker events, you can participate in awesome Docker meetup events and attend other Docker talks and conferences happening globally. Below is your Docker agenda for … Continued over a year ago
Excited for Docker Global Hack Day #3 and DockerCon 2015 Europe? Us too! As you wait for these bigger Docker events, you can participate in awesome Docker meetup events and attend other Docker talks and conferences happening globally. Below is your Docker agenda for … Continued over a year ago

Docker News - Jul 28

New Tool to Migrate From V1 Registry to Docker Trusted Registry or V2 Open Source Registry

written by Matt Bentley, Solutions Engineer at Docker, Inc.   Harder, Better, Faster, Stronger The distribution and management of Docker images are made possible by various registry services including: Docker Hub, Docker Trusted Registry and Docker Open Source Registry. Both … Continued over a year ago
written by Matt Bentley, Solutions Engineer at Docker, Inc.   Harder, Better, Faster, Stronger The distribution and management of Docker images are made possible by various registry services including: Docker Hub, Docker Trusted Registry and Docker Open Source Registry. Both … Continued over a year ago