Planet Hosting & Delivery

the unofficial alterslash-style digest

Docker Global Hack Day #3: Update on Local Editions, Prizes and More!

Docker News Monday August 17, 2015 @ 15:24
Join us for the next Docker Global Hack Day from Wednesday, September 16th through Monday, September 21st! Submit Your Hack Ideas!   We are excited to announce the first round of cities committed to participating in this global event along with the prizes … Continued

Improving Shipscope Performance with WebSockets

Codeship News Monday August 17, 2015 @ 15:15

Nearly a year ago, we released Shipscope, a Chrome extension for monitoring your projects on Codeship. We’ve received some really good feedback from our users, and I want to share some of that insight in this post.

In order to provide frequent status updates, Shipscope requested project status from our API server every 10 seconds. Multiply this by a rapidly growing user base, and it really added up.

There were two consequences of Shipscope’s frequent polling:

  1. As Shipscope became more popular, the load on our API server grew. This was not a critical problem, but if we can reduce the number of requests to our API server while providing timely updates to users, then we should really try to do that.

  2. Shipscope stayed pretty busy in the user’s browser, frequently requesting updates that most often would report no change to any build status. This is just a wasteful use of CPU.
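At one request every 10 seconds, the arithmetic is easy to sketch (the user count in this example is our own, purely for illustration):

```javascript
// Back-of-the-envelope cost of polling: one status request per user
// every 10 seconds, around the clock.
const POLL_INTERVAL_SECONDS = 10;
const SECONDS_PER_DAY = 24 * 60 * 60;

const requestsPerUserPerDay = SECONDS_PER_DAY / POLL_INTERVAL_SECONDS;
console.log(requestsPerUserPerDay); // 8640

// A hypothetical 10,000 installed extensions would generate:
const users = 10000;
const requestsPerDay = requestsPerUserPerDay * users;
console.log(requestsPerDay); // 86400000
```

Almost all of those requests report no change, which is exactly the waste described above.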

“Stop wasteful polling. Use WebSockets in your browser clients.” – via @codeship

In our web-based UI, we have been using Pusher for a while to provide build progress updates to the browser via WebSockets. Using Pusher, our web client never needs to poll the server for build progress updates. The server sends a message on the socket when progress occurs in the build. The client then requests build info from the server and updates the UI. This provides immediate updates without any unnecessary polling.

For security reasons, we don’t send full build information across the WebSocket. We don’t want to send customer build info over a connection that could be unencrypted. Pusher allows payloads to be encrypted between the server and Pusher and then from Pusher to the client over the WebSocket.

However, that transfer between the HTTP port and the WebSocket port is not encrypted. Rather than trying to encrypt the build info before we send it, we choose to limit the data that we send over the WebSocket. So, the client is notified of an update, and the full update info is fetched securely over an HTTPS connection.
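As a sketch of that idea (the field names below are our own illustration, not Codeship’s actual payload schema), the message published over the WebSocket can be stripped down to bare identifiers, with everything sensitive left to the HTTPS fetch:

```javascript
// Hypothetical example: reduce a full build record to the minimal
// notification that is safe to publish over the WebSocket.
function toNotification(build) {
  return {
    project_id: build.project_id,
    build_id: build.id
  };
}

const build = {
  id: 42,
  project_id: 9876,
  status: 'success',
  commit_message: 'customer details that should stay off the socket'
};

const note = toNotification(build);
console.log(note); // { project_id: 9876, build_id: 42 }
// On receiving `note`, the client fetches the full build over HTTPS.
```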

Try Codeship – The simplest Continuous Delivery service out there.

This approach has worked very well for us with our web client, so we decided we should give it a try with Shipscope. The first step is to get a Pusher app key, which is easy enough once you sign up at Pusher’s website.

After installing Pusher’s JavaScript client, Shipscope initializes a Pusher instance like so:

pusher = new Pusher(PUSHER_APP_KEY, {
    authEndpoint: '',
    auth: {
      params: {
        api_key: options.api_key
      }
    },
    encrypted: true
})
The authEndpoint and auth properties allow us to verify that Shipscope can only subscribe to updates that the api_key authorizes. encrypted ensures that messages to and from Pusher are not sent in plain text.

Once we have a Pusher instance, we only need to subscribe to a channel. Each project has its own channel for publishing updates. So, if we are monitoring a project with id 9876, then we’ll subscribe like this:

var channel = pusher.subscribe("private-project-9876")

and finally,

channel.bind(UPDATE_EVENT, onUpdate.bind(projectInfo))

will call onUpdate() with projectInfo bound to this every time a build starts or finishes in our project.
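A minimal sketch of what such a handler could look like (the endpoint URL and field names are our assumptions for illustration, not Shipscope’s actual code):

```javascript
// Hypothetical onUpdate handler. The event is only a nudge; the handler
// fetches the full, current project state over HTTPS.
function updateUrl(projectInfo) {
  // Illustrative endpoint; the real API path may differ.
  return 'https://codeship.com/api/v1/projects/' + projectInfo.id + '.json';
}

function onUpdate() {
  // `this` is projectInfo, thanks to onUpdate.bind(projectInfo)
  var url = updateUrl(this);
  // In the real extension this would be an authenticated XHR/fetch;
  // here we only compute where the request would go.
  console.log('fetching ' + url);
  return url;
}

var projectInfo = { id: 9876 };
onUpdate.call(projectInfo); // fetching https://codeship.com/api/v1/projects/9876.json
```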

That saves a lot of effort for both Shipscope running in your browser and our API server, which now only gets requests when something interesting has happened.

One more thing that we get from Pusher is offline notification. If you somehow find yourself away from a Wi-Fi connection, the Pusher client fires a state_change event to let Shipscope know. We currently use this to change the Shipscope icon to red to let you know that Shipscope is offline.

Setting this up is as simple as:

pusher.connection.bind('state_change', function() {
  if (pusher.connection.state == 'connected') {
    chrome.browserAction.setIcon({path: 'img/shipscope_icon_19.png'})
  } else {
    chrome.browserAction.setIcon({path: 'img/shipscope_icon_19_error.png'})
    chrome.browserAction.setBadgeText({text: ''})
  }
})

WebSockets have made a big difference to both our server and the Shipscope Chrome extension. Pusher made WebSockets easy. You can install Shipscope from the Chrome Web Store. Shipscope is open source, so feel free to review the code.

“Improving Shipscope performance with WebSockets” – via @codeship

The post Improving Shipscope Performance with WebSockets appeared first on Codeship.

Your Docker Agenda for LinuxCon

Docker News Sunday August 16, 2015 @ 20:46
Docker and containers are making a big splash in Seattle next week! LinuxCon runs from Monday, August 17 through Wednesday, August 19 but there are plenty of co-located events during the week including ContainerCon, MesosCon and the Linux Plumbers Conference. … Continued

Creating a Consistent Cross-platform Docker Development Environment

Codeship News Thursday August 13, 2015 @ 16:34

How many times have you read this statement:

“The great thing about Docker is that your developers run the exact same container as what runs in production.”

Docker is all the hype these days, and with statements like that, many are wondering how they can get on board and take advantage of whatever it is that makes Docker so popular.

That was us just six months ago when we started playing with Docker and trying to fit it into our processes. After just a few months we knew we liked it and wanted to run apps this way, but we were struggling with some of the Docker development workflow.

As the manager of a development team, I like using the same processes and technologies through the whole lifecycle of our applications. When we were running apps on AWS OpsWorks using Chef to provision servers and deploy applications, we used Vagrant with Chef to run the same recipes locally to build our development environment.


Challenges with a Docker development environment

It didn’t take long developing with Docker for us to realize that the common statement isn’t as easy to achieve as it sounds.

This article highlights the top six challenges we faced when trying to create a consistent Docker development environment across Windows, Mac, and Linux:

  1. Running Docker on three different platforms
  2. Docker Compose issues on Windows
  3. Running a minimal OS in Vagrant (boot2docker doesn’t support guest additions)
  4. Write access to volumes on Mac and Linux
  5. Running multiple containers on the same host port
  6. Downloading multiple copies of Docker images

Running Docker on multiple operating systems

Docker requires Linux. If everyone runs Linux this really isn’t an issue, but when a team uses multiple OSes, it creates a significant difference in process and technology. The developers on our team happen to use Windows, Mac, and Linux, so we needed a solution that would work consistently across these three platforms.


Docker provides a solution for running on Windows and Mac called boot2docker. boot2docker is a minimal Linux virtual machine with just enough installed to run Docker. It also provides shell initialization scripts to enable use of the Docker command line tools from the host OS (Windows or Mac), mapping them to the Docker host process running inside the boot2docker VM. Combined with VirtualBox, this provides an easy way to get Docker up and running on Windows or Mac.

While boot2docker works well for simple use cases, it makes certain conditions difficult to work with. I’ll get into those in the following challenges. This topic can be hard to understand at first, so here’s a simple illustration of the three main options for running Docker locally:

Three options for running Docker locally

Using Docker Compose on Windows

Docker Compose is a fantastic tool for orchestrating multiple container environments and in many ways actually makes Docker usable for development. If one had to run all the normal Docker CLI commands and flags to spin up their environments and link them properly, it would be more work than many of us are willing to do.

Compose is still relatively new though, like Docker itself really, and as a result it does not work very well on Windows yet. There are so many issues on Windows, in fact, that there is an epic on the project just to deal with them. Some good news, though: Docker Toolbox claims Compose support for Windows is coming soon.

(re)Enter Vagrant

I mentioned earlier that boot2docker works well to create a Linux VM for running Docker in but that it did not work well for all conditions.

Vagrant has been a fantastic tool for development teams for the past few years, and when I started working with Docker I was even a little sad to be moving away from it. After a couple months of struggling to get everything working with boot2docker though, we brought Vagrant back into the equation.

We liked how small boot2docker was, since we didn’t need a full-featured Docker host, but unfortunately it doesn’t support the VirtualBox guest additions required for synced folders. Thankfully, we found the vagrant box AlbanMontaigu/boot2docker, a version of boot2docker with guest additions installed that weighs in at a light 28M. Compare that with a minimal Ubuntu 14.04 box at 363M.

Write access on volumes

Docker can mount the host filesystem into containers as volumes. This is great when the container only needs to read the files, but if the container needs to write changes to the files there can be a problem.

On Windows VirtualBox, synced folders are world-writeable with Linux permissions of 777. So on Windows, write access is not an issue. However on Linux and Mac, there are file ownership and permissions to work with. For example, when I’m writing code on my Mac, my username is shipley, and my uid/gid is 1000. However, in my container, Apache runs as www-data with uid/gid of 33.

So when I want Apache to generate files that I can access on my host machine to continue development with, Apache is not allowed to write them because it runs as a different user. My options would be either to change the ownership/permissions of the files on the Mac filesystem, or to change the user and uid/gid Apache runs as in the container to shipley and 1000. Either option is pretty sloppy, and neither works for team development.

With VirtualBox, you can change the user/group and permissions that synced folders are mounted as, but it’s not really easy or convenient by default. Vagrant provides a very convenient way to do this though. This was one of the biggest motivators for us to go back to Vagrant. With Vagrant, all we need to add to our Vagrantfile is:

config.vm.synced_folder "./application", "/data", mount_options: ["uid=33","gid=33"]

With the extra mount_options, the /data folder inside the VM will be owned by uid/gid 33, which, inside an Apache container based on Ubuntu, maps to user/group www-data.

Funny thing though: as I mentioned earlier, by default the filesystem permissions on Windows are 777, so write access there isn’t an issue. However, we found that when using Docker volumes to mount a custom my.cnf file into a mariadb container, mariadb doesn’t like it when the configuration file is world-writeable. So again, Vagrant helps us out by making it simple to also set file permissions in the mount:

config.vm.synced_folder "./application", "/data", mount_options: ["uid=33","gid=33","dmode=755","fmode=644"]

Running multiple containers that expose the same port

My team primarily develops web applications, so each of our projects/applications exposes port 80 for HTTP access during development.

While boot2docker for Windows/Mac and native Docker on Linux make getting started quick and easy, you can only have one container bound to a given port on the host. So when we’re developing multiple applications, or multiple components of an application that expose the same port, it doesn’t work. This really isn’t a showstopper, but it is an inconvenience: it requires running apps on non-standard ports, which gets awkward to work with and hard to remember.

Running each app in its own VM via Vagrant, however, solves this problem. Of course, it introduces a couple more issues: now you have to access the app via an IP address or map a hostname to it in your hosts file. This really isn’t that bad, since you should only have to do it once per app.

Another problem this solution introduces is that running multiple VMs requires a lot more memory. It also seems a bit counterproductive, since Docker is supposed to remove the burden of running full VMs. Anyway, it’s a small price to pay to have multiple apps running at the same time and accessible on the same ports.

Downloading multiple copies of Docker images

The most annoying problem created by this solution, though, is that with Docker running in multiple VMs, each one needs to download any dependent Docker images. This just takes more time and bandwidth, and if we developers hate one thing, it’s waiting.

We were able to get creative though: on bootup, the VM checks the host machine folder defined by the environment variable DOCKER_IMAGEDIR_PATH for any Docker images, and if it finds any, it will docker load them. Then, after docker-compose up -d completes, any newly downloaded images are copied into the DOCKER_IMAGEDIR_PATH folder.

Bingo: each image only needs to be downloaded once now.


After running into all these challenges and finding solutions to them, we now have a simple Vagrantfile we can copy into each of our projects in order to provide a consistent development experience regardless of what operating system the developer is using. We’re using it in multiple projects today and have even gotten to the stage of continuous integration and blue/green deployment to Amazon Elastic Container Service in production (but those are topics for another article).

I expect we’ll face more challenges as time goes by and our projects change, and as we do we’ll continue to evolve our solutions to account for them. Our Vagrantfile is open source, and we welcome suggestions and contributions.

Feel free to post your questions and comments below to make this article more meaningful, and hopefully we can address issues you’re facing too.


Resources:

  - Docker
  - Docker Compose
  - Docker Toolbox
  - VirtualBox
  - Vagrant
  - boot2docker
  - Custom boot2docker vagrant box with Guest Additions installed
  - Our vagrant-boot2docker project

“Creating a consistent cross-platform Docker development environment” via @codeship

The post Creating a Consistent Cross-platform Docker Development Environment appeared first on Codeship.

Part 1: From App to OpenShift (Runtimes and Templates)

OpenShift (by Red Hat) News | Ruby Thursday August 13, 2015 @ 10:58
This is Part 1 of a two-part series of blogs that will help you bring your applications into OpenShift. OpenShift 3 allows you to deploy your application to the cloud, and the great thing is it doesn’t matter whether your cloud is public, private, or even hybrid. Typically, the PaaS platform (OpenShift in this case) will provide a set of predefined runtimes that a developer can deploy an application on top of. The developer does not need to worry about the infrastructure, the runtime setup, or the configuration; they just need to focus on their application and what runtime to use. The PaaS platform will take care of sewing it all together and running it.

An Inside Look with Codeship: Peter van Hardenberg of Heroku Postgres

Peter van Hardenberg is the founding developer of Heroku Postgres. We recently had a chance to catch up with Peter and talk about the challenges facing the industry, the Heroku team, and how Mario influenced their software development.

An Inside Look with Codeship is a regular series providing an insider’s perspective on founding and building a tech company. Each session, we chat with some of the most exciting voices in tech and ask them where they’ve been, where they’re going, and what we could all be doing together. You can read all Inside Look interviews here.

Hi, Peter! Can you introduce yourself and what you’ve been working on at Heroku?

Sure thing. I’ve been part of Heroku for about six years, since we were about a dozen employees. I run product at the moment for our ecosystem and data products. I was the founding developer for the Heroku Postgres product.

When you started building Heroku Postgres, was there one major problem for your users you wanted to solve?

Heroku had been building this development platform and application runtime platform, and we realized at some point that, in order to do that, we were going to need a really strong story around data as well as around running people’s code. So the founder Adam Wiggins came to me, and he said, “Peter, I want you to look at the databases.” And I said, “I hate databases, but I agree that this is really broken, and so I will spend six months on it.” And that was five and a half years ago now.

Ultimately your code is important to your business, right? Your code is how you express value, but your real business is your data. If you lose your data, your business is over.

“Your code is how you express value, but your real business is your data.” @pvh interview

I had a friend who worked for a startup that, during a data center move, the servers literally fell off the truck. The truck went over a bump, the server fell out of the back, and all the disks were lost. That was the end of the company. For us, our primary goal has always been durability and ensuring business continuity for our customers, but that’s really just table stakes.

We observed from the beginning that databases were always something that intimidated developers and that had become governed by a priesthood of specialists who, by and large, were not well-aligned to business objectives. Not to speak ill of DBAs, but I think that the relationship between DBAs and developers was always very challenging. Our goal has always been to make every developer able to run applications at a very large scale, no matter how much experience they have with the data.

How did you approach problems you encountered by growing so rapidly?

Fundamentally, when you’re going through exponential growth, you need to be getting better faster than you’re growing. This is a real challenge that we understood from the beginning could only be solved by basically becoming the best at automation, anywhere.

“You need to be getting better faster than you’re growing.” @pvh interview on @codeship

We’ve got millions of databases and many, many thousands of servers; and we do it all with no ops team. Unless you count, obviously, our infrastructure as a service provider. Our team has always been integrated between development, operations, and user support. Everybody on the team does on-call. Everybody on the team does support, and everybody on the team does development of different varieties.

Obviously, as we’ve grown, we’ve specialized somewhat. But it’s still a guiding principle that everyone should be exposed to customers as directly as possible. This is a little bit controversial. I think it’s quite common for people to try and shelter their developers from users, but I think that you really only can build that true empathy by actually having to talk to and deal with users on a regular basis.

You talk about having your developers not so siloed away from the company. Does that change what you look for in your developers when you’re hiring new members of your team?

I think what we really look for is grit, the ability to learn, and gallows humor.

I think that gallows humor is highly correlated with the ability to survive in a high-energy startup. You know, you need to recognize that everything is miserable, and it’s unlikely to get better; but that, ultimately, that’s what we’re doing for our users. We suffer so that, hopefully, other people won’t have to. I think that if you can’t laugh at that, you’re really going to have a hard time. Databases in particular are miserable, you know?

One way we talk about our goal internally is to just try and claw back the misery of using databases just a little bit. Databases are still terrible to work with, but I think we have managed to make them more approachable and more joyful for developers to use. And, ultimately, that’s what we’re in the business of doing.

You’ve previously talked about the necessity of wearing many different hats while leading a product, but knowing how to take off the hats when it comes to people issues and culture. How did you come to that lesson as a manager?

It was advice given to me by Jim Lindenbaum, one of the Heroku founders. I’d been working with the team, and I was trying to figure out how to balance people’s individual desires for their own career development with what the team needed.

He gave me a really good piece of advice. He said, “No. You need to figure out what the team needs and get that. But when you’re working with people, you need to stop thinking about what the team needs and just focus on what that person needs.”

That sometimes creates a tension, obviously, between those things. If a person wants to go in a particular direction, but that’s not what the team needs right now, that can be challenging to manage. But I think that it’s important to be able to take off your “what does the product need right now?” hat and really focus on “what does the person that I’m working with here in this situation need?” It’s the same when you’re working with customers versus your business. I think the ability to view things through those different lenses is really essential.

I understand Heroku works with remote teams.

Absolutely. Heroku is more than half distributed. We have about 200 employees. About 55 percent work from somewhere other than San Francisco. This was a real challenge for us to overcome, initially. In the early days, I actually relocated to San Francisco because we were an all-local company. And as we grew, we realized just the incredible potential that was available by taking advantage of the global market, as opposed to just the Bay Area market.

How do you make sure important knowledge and information doesn’t get lost and reaches your entire team?

We use a whole variety of different tools to manage that. Everything we do is in a Google Hangout or is in email. It’s unsexy, but mailing lists are fundamentally one of the most key tools we use to run our business – and, of course, tools like HipChat and GitHub and everything else that everybody uses.

People talk in Hangouts, but decisions are made in mailing lists and on pull requests, so we always have a written record of these things. When I say “always,” I’m sort of rounding up, of course. The simple truth is that if you’re not in the office you’re going to miss some things. If you weren’t in the Hangout, you’re going to miss some things, and that’s just a fact of life. But that’s true whether you’re in the office or not to some extent. If you weren’t at the meeting, or if you were out at a conference that week. Ultimately, put things into writing and make sure that as much as possible is communicated.

Is there a unique technological difficulty your projects have had to overcome?

When we started this, no one had done this before. Heroku Postgres predates Amazon’s RDS service, and so at the time, there weren’t really any database services out there except for Amazon’s SimpleDB, which was not as robust or functional as was required for most consumer applications. Some of the difficulty is scale and availability. Some of it fundamentally is about understanding the user experience. It’s really about investing deeply and understanding what a developer needs to understand, thinking about what do they need to know and then automating that.

I think the greatest personal technological advantage is that my background is diverse. I’ve done video game development. I’ve worked in research laboratories. I’ve done Shakespearean scholarship. When I came to Heroku, I brought sort of a pretty diverse set of experiences with me.

The best trick that we pulled in was actually drawn from video games, which was to treat our databases a little bit like the enemies in a Mario game. All characters in video games are little state machines that travel around, and when they reach the edge of a platform, they turn around. Or, if they see Mario, they throw a boomerang. That kind of thing. We really treat our databases kind of the same way, which is that if a database has an interruption, then it spawns a little state machine, and it says, “Okay, I need to find a new server,” “I need to repair this disk,” whatever the task of the day is. And so really what we have is a very large, distributed job processing queue, where we’re continually taking every database that we have and saying, “What does this thing need next?” We’re feeding all the events that we get from the world into it, so we can think about it a bit like a very large game where the goal is to maximize the availability of our service. And I think we play it pretty well.

Is there a way that Heroku grows the team skills or mentors learning?

I think participation in a company like Heroku really exposes you to a lot of challenges. We don’t have any formal mentoring programs in place. Actually, we have historically biased towards bringing in fairly experienced engineers. One of the things I’ve seen in some of our teams is that bringing in a more junior engineer can really give everybody else a lot more focus and discipline, because it forces them: 1) to work with that person and explain what they’re doing and 2) to be really conscious of how somebody who’s less experienced might perceive and understand a thing.

You know, you don’t want to have a team that’s unbalanced towards all junior people, but I think that having all senior people can actually be a bit of a crutch as well. Building a team with a blend of skills is really valuable, not just in terms of cost management, but in terms of having a few people who are newer in the industry and are really passionate about growing their skills from nothing. They don’t have bad habits to unlearn. They see things with really clear eyes.

Is there a way that you and Heroku try to give back to developers outside your company?

Our mandate is to try and make databases less awful. We have a saying: “The worst code is code you write yourself, the second-worst code is code somebody else maintains, and the best code is no code at all.” So we always take the position that if you’re writing code, that’s probably a mistake. If you can get someone else to maintain that code, or get that code maintained by a community, that’s way better.

“Our mandate is to try and make databases less awful.” @pvh interview on @codeship

So to some extent, we upstream and open-source a lot of the changes we make in Postgres and in other tools. We try to open-source and release tools and get a community built around them. We see that, just from a business strategy perspective, as awareness that will come back to Heroku. But it’s also about fundamentally aligning to our goal, which is to make building software better, less miserable, more effective.

What do you think is the biggest challenge for the industry right now?

I think we have a lot of technical and social problems.

On the technical front, we’re going through a really Cambrian period right now, which is wonderful. If you remember, the stagnant — maybe “stagnant” is unfair, but there was a time when everything was Java. I wouldn’t call it monoculture, but it was certainly a less diverse technical world, say, 15 years ago than it is now.

Today, there are multiple package managers for Node. It feels like there’s a new programming language of the week that people are getting into. Is it Go, Rust, Crystal? I think that’s great, but it also introduces a lot of new challenges in terms of managing the complexity of projects over time. It’s wonderful that we have these new tools that make it possible to digest this complexity. It’s easier to build your product in multiple languages if you use tools like Heroku, which allow you to deploy all your applications the same way, no matter what language they’re written in. On the other hand, I think it’s getting harder and harder for developers to pick tools, because there are so many options which appear viable but that may or may not survive in the market.

But if you want to talk about what really worries me in technology, it’s that we have a relatively small group of undiverse individuals making decisions that will impact the direction of our society for a long time. There’s poor representation from women. There’s poor representation from a lot of visible minority communities, and we are not doing a good job of expanding that.

“We have a relatively small group of undiverse individuals making impactful decisions.” @pvh

If you look at the Gamergate controversies of the last year, it’s tied into that. I see areas where we’re making progress. I want to give some credit to the Ada Initiative. I’m a big fan of the work they’re doing improving gender representation, but it really feels like we’re still in the Stone Age and that this is still a boys’ club. That needs to be fixed.

Is there something you’d like to see the industry as a whole doing more of in a practical sense?

We’re still losing ground in terms of the number of women in engineering, and I think that there’s a lot of people doing great work there. I’ll give credit to organizations like Hackbright, who are trying to shift the ratio, but I definitely want to see greater investment there.

I’d encourage any company to get involved with organizations like Hackbright and see what they can do to help include communities that are not well-represented in technology today in their company.

Is there a recent advancement in the industry right now that will transform the future of your work?

This is an industry that is in constant flux. It’s hard to pick out anything — as William Gibson said, “The future is already here. It’s just not evenly distributed yet.”

Broadly speaking, I’m really excited about the consumerization of developer technologies. Heroku has been a big part of this. That’s why I’ve been so passionate and excited about working at Heroku. If you go back a few years, developers were really treated like experts, and the idea was that, “We build tools to be powerful, and developers will figure them out.” I think that the new crop of companies coming up, Codeship among them, take a fundamentally different perspective, which is to say that developers are people who deserve to have great tools to use, and that those tools need to be approachable and easy to use and intelligible. Whatever we can take off a developer’s mind is something that won’t distract them from solving the problem that they’re there to solve.

Our goal is not to build user-friendly tools precisely, but it’s to make developers more able to build better software by reducing the number of distractions and frustrations in doing so. Five years ago, there were very few companies doing this. Today, there are dozens, hundreds, even, focused on so many different parts of that stack.

Try Codeship – The simplest Continuous Delivery service out there.

The other big trend supporting this is the move to the cloud. Now these services can collaborate much more effectively than they could a few years ago, because people are mostly hosted in Amazon. It means that low-latency interconnection between those services is possible, so you can buy your database from one person, you can buy your CI service from another, and you can buy your runtime from a third. You can run your own ersatz, custom stuff alongside as well, and those things can all interoperate. I think that that kind of marketplace and focus on developer experience is huge.

What do you think are the key skills required to be an effective product manager or engineering lead?

Know how to ask questions. The first question is always, “What problem are you trying to solve?” If you don’t know what problem you’re trying to solve — whether it’s a technical, product, or people problem — you won’t know if you’ve accomplished your goal. So many people get excited about an implementation, or an idea; but they run ahead of making sure that they’ve documented what problem they’re solving and also what problems they’re not solving.

A good example might be if you’re building a new feature to reduce the amount of time for a log-on. You know you’re building a new log-in page. Why are you building it? Are you trying to reduce the friction in sign-up? Okay. How are you going to know that you did that at the end? You’ve got to make sure that whenever you set out to do something, you know its use. Is it actually going to be useful? Is it going to be used? What problem does a user have that this solves?

Number two is empathy. You need to have empathy for the user, for the team members, for management, for your reports. Ultimately, software is hard, and people are harder. And people make software, so you’re ultimately going to be solving people problems, whether you’re an engineer, a product manager, or a manager.

If you could go back in time to the start of your career, what one piece of advice would you give yourself?

I’ve been very fortunate. My whole career, I’ve basically followed a strategy of never being afraid to say yes to an interesting opportunity. Nothing I’ve ever done has been planned far in advance. It’s always been about recognizing a new opportunity and following it when you see it. That used to worry me more than it does now, but if I had tried to plan my career, I certainly wouldn’t be where I am right now.

So, I would say: take opportunities, feel comfortable with the uncertainty. It’ll all work out.

“Take opportunities, feel comfortable with the uncertainty.” @pvh interview on @codeship
Click To Tweet

The post An Inside Look with Codeship: Peter van Hardenberg, founding developer of Heroku Postgres appeared first on Codeship via @codeship.

Announcing Docker Toolbox

Docker News Wednesday August 12, 2015 @ 13:04 • over a year ago
The fastest way to get Docker running in development written by Michael Chiang, Docker Today we’re announcing a new installer for Mac OS X and Windows called Docker Toolbox. We’ve been hearing again and again that it can be difficult to … Continued

Introducing Docker Content Trust

Docker News Wednesday August 12, 2015 @ 13:01 • over a year ago
written by Diogo Mónica, Docker   Image Signing and Verification using The Update Framework (TUF)   A common request that we’ve heard from the Docker community is the need to have strong cryptographic guarantees over what code and what versions of … Continued
written by Ben Firshman, Docker   We’re thrilled to announce Docker 1.8 with support for image signing, a new installer, as well as incremental improvements to Engine, Compose, Swarm, Machine and Registry. You’ve been telling us that you want Docker to … Continued

How to Use Heroku PGBackups

Codeship News Tuesday August 11, 2015 @ 15:52 • over a year ago

Backing up your data is one of the most critical activities for your application. Heroku PGBackups makes the entire experience pretty simple but comes with a lot of flexibility too.

Set Up a Test App

Heroku provides a free PostgreSQL database to every Rails application, because Heroku loves you. To walk through the backups functionality, let’s start by creating a new Heroku app. If you’ve never set up a Heroku app before, check out their Getting Started page for the language of your choice.

Heroku provides a free PostgreSQL database to every Rails application, because Heroku loves you.
Click To Tweet

Heroku provides a getting-started Ruby app that already has PostgreSQL configured, which is what I’ll be using.

git clone
cd ruby-getting-started
heroku create
git push heroku master
heroku run rake db:migrate
heroku open

Alright, now that we’ve got our test app working, let’s load in some sample data.

heroku run rails console
100.times do |i|
  Widget.create(
    name: "Widget #{i}",
    description: "This is my widget. There are many like it, but this one is mine.",
    stock: i
  )
end

Heroku PGBackups Basics

Now that we have some critical data loaded into our system, we want to make sure that if something bad happens we can get it back to its original state. Heroku PGBackups provides a lot of options for doing this, so we’ll step through them all.

Create a manual backup

heroku pg:backups capture

This will begin a backup immediately with a progress meter. You can press CTRL + C to get your prompt back, and the backup will continue in the background. It defaults to running against the main DATABASE_URL, but if you have multiple databases, you can specify the environment variable that holds the connection string to trigger a backup there.


Additionally, if you visit the Heroku Apps web interface and click on your Heroku PostgreSQL database you’ll notice a button called PG Backups. From here you can quickly create a database snapshot and view all of your existing snapshots to download or delete them.


Scheduling backups

Manual backups are great if you need to pull a copy of the database to run some tests or save things as they are before doing something major with the data. But for most applications we’re going to want those backups to be scheduled regularly. Heroku PGBackups lets you set those schedules from the command line like so:

heroku pg:backups schedule DATABASE_URL --at '01:00 America/New_York'

That just scheduled a backup for 1 a.m. Eastern time every night. Heroku keeps seven daily backups and one weekly backup on every plan. As your plan level increases, so does your backup retention: premium plans include 12-month retention with monthly backups in addition to the daily and weekly ones.

You can see a list of your scheduled backups with:

heroku pg:backups schedules
=== Backup Schedules
DATABASE_URL: daily at 1:00 (America/New_York)

And cancel them with:

heroku pg:backups unschedule DATABASE_URL

Download your backups

Whether it’s a manual or scheduled backup, at some point you may want to save it outside of Heroku. The PG Backups interface described earlier will give you point and click access to download any of them. But we’re all about automation, so ideally that’s something we’d want to do from the command line.

Heroku provides a command to give you an expiring link. Use the command with no arguments to get the latest backup, or specify the name of a specific snapshot. To download the backup automatically, we can use curl and pass in the URL that the Heroku command gives us.

curl -o latest.dump `heroku pg:backups public-url`

Restoring from backups

Loading data back in from one of these is fairly straightforward as well. Reference a backup name; an app name::backup name to pull from another one of your apps (like production to staging); or a public URL of a pg_dump, followed by the config reference for the database to restore the data into. Here are a few examples:

heroku pg:backups restore b101 DATABASE_URL
heroku pg:backups restore myapp-production::b101 DATABASE_URL
heroku pg:backups restore 'https://mys3url/that-leads-to-db.dump' DATABASE_URL --app sushi

Be careful when doing restores this way; it will completely overwrite the target database.

The database dumps are standard PostgreSQL pg_dump files, so you can easily load them into any other non-Heroku PostgreSQL database using pg_restore. More details on how to do that can be found in the Heroku import/export docs.

You can also make direct copies from one database to another using almost the same syntax (assuming the colors listed below are references to database environment variables).

heroku pg:copy COBALT GREEN
heroku pg:copy myapp-production::ORANGE GREEN

Advanced Backups for Heroku PGBackups

In a production application, depending solely on nightly backups probably isn’t going to be good enough. What happens if there’s a problem in the middle of the day? Manually triggering a backup every hour might seem like a good idea at first, but that could get seriously out of hand and put a lot of extra stress on the database. Worse yet, what happens when even the nightly backups cause enough stress on the database that you’d like to avoid them?

What to do when nightly backups put enough stress on the database that you’d like to avoid them.
Click To Tweet

The Write Ahead Log (WAL) records commits before they are actually written to the database. These records are streamed to read replicas and also archived for backups, allowing point-in-time restoration. There are a number of ways to take advantage of this to solve our backup problems.
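The WAL idea above can be sketched in a few lines of Ruby. This is purely illustrative (the concept, not Postgres’s on-disk format): every commit is appended to the log before it is applied, so replaying a prefix of the log reconstructs the database at an earlier point in time.

```ruby
# Conceptual write-ahead log: record each change first, then apply it.
wal = []
db  = {}

commit = lambda do |t, key, value|
  wal << [t, key, value] # append to the log before touching the database
  db[key] = value        # then apply the change
end

commit.call(1, :a, "x")
commit.call(2, :b, "y")
commit.call(3, :a, "z")

# Point-in-time recovery: replay only the WAL entries up to t = 2.
restored = {}
wal.take_while { |t, _, _| t <= 2 }.each { |_, k, v| restored[k] = v }
```

Replaying only a prefix of the log is exactly what makes point-in-time restoration possible.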

Schedule backups from a follower

If you’re on a database plan at the Standard level or up, you can create followers/read replicas. A follower will receive streaming updates; if nightly backups against your primary database are out of the question because of the stress involved, scheduling those backups against a follower will provide the same benefit without degrading application performance.

Try Codeship – The simplest Continuous Delivery service out there.

Point-in-time recovery

Now, imagine a scenario where some very important data gets accidentally deleted, overwritten, changed, or corrupted. It doesn’t matter how it happens (user error, developer error, security breach); all that matters is that it happened.

This is where Heroku’s Continuous Protection comes into play. To rollback to a previous point in time, you can visit the web interface and create a rollback — a clone of your database but at a specific point in time. Just specify the date and time, and poof… instant recovery (relative to the size of your database).


You can also create a rollback from the command line. Continuous Protection is set up using the open source tool WAL-E that allows for continuously archiving the PostgreSQL WAL to Amazon S3.

Real World

Now let’s look at a real-world scenario. You need to get data from a backup without taking down the running application. Running pg:copy (pg_restore) isn’t going to be good enough, because that will overwrite the existing database, costing you all of your new data since the backup. A point-in-time restore would have the same problem, although less pronounced.

Realistically, in such a situation you restore a copy of your backup or rollback database, then write code to find and merge in the missing data. We’ll simulate that now.

If you haven’t already captured a backup with our sample data, do so now. This is a real-world situation, so I’m going to be working with a production-tier database (Standard or higher), which means utilizing functionality that isn’t available to Hobby and Free tier databases to handle the restore.

If you want to follow along, here are the instructions to upgrade.

Now open up a heroku run rails console and run the following to delete half of our data in a semi-random fashion:

Widget.all.each_with_index do |widget, i|
  widget.delete if i % 2 == 0
end

Oh no! Now, using one of the above techniques, either create a rollback or create a new database and restore a backup to it. I’m going with the rollback, so I’ll visit the web interface and choose the version of my database from a few minutes ago.

Now, let’s connect to our database and make sure it has everything.

=> SELECT COUNT(*) FROM widgets;
 count
-------
   100

Perfect! Now how do we get the data merged? There are a lot of ways to do that. You could write a script that connects to both databases and inserts the missing records, or use a desktop client like Navicat that includes data syncing. But there’s a much easier way.

PostgreSQL includes something called Foreign Data Wrappers that allow one PostgreSQL database to connect to a number of external data sources like Redis or another PostgreSQL database. Heroku makes this even easier with data links. You can read more about data links in Heroku’s docs, but for our purposes we want to link our main database (BRONZE) with our restored backup (CYAN) like so:

New link 'sighing_duly_2312' successfully created.
$ heroku pg:links

The latter command will show you all of the links that are configured. If for some reason you get an error after the second command, make sure you’ve installed the pg-extras plugin:

heroku plugins:install git://

Now that we’re connected, open up your psql prompt and let’s merge that data with a query.

heroku pg:psql
INSERT INTO widgets (id, name, description, stock, created_at, updated_at) (
  SELECT id, name, description, stock, created_at, updated_at
  FROM sighing_duly_2312.widgets backup
  WHERE backup.id NOT IN (SELECT id FROM widgets)
);

That query selects, from the backup database we restored, all of the rows whose ids aren’t in our main table, then inserts them into our main table for us.
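The same merge can be sketched in plain Ruby, with hashes standing in for the two tables (a conceptual sketch of the logic, not the SQL itself):

```ruby
# Hashes keyed by id stand in for the main and backup widget tables.
main   = { 1 => "Widget 1", 3 => "Widget 3" }
backup = { 1 => "Widget 1", 2 => "Widget 2",
           3 => "Widget 3", 4 => "Widget 4" }

# Take only the backup rows whose ids are missing from the main table...
missing = backup.reject { |id, _| main.key?(id) }
# ...and insert them into the main table.
main.merge!(missing)
```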


PostgreSQL can manage your data integrity, and Heroku PostgreSQL makes sure it’s safe in case of emergency with almost no effort on your part. Restoring is a more complicated beast, but thanks to Foreign Data Wrappers (pg:links), merging in just what we need gets a lot easier. Always back up your data.

“How to use Heroku PGBackups” – via @codeship
Click To Tweet

The post How to Use Heroku PGBackups appeared first on Codeship via @codeship.

When Travis CI first started in February 2012 with 5 male founders, they didn't put much thought into where the next employee would live, and they certainly didn't think about policies like vacation or conference attendance, least of all maternity/paternity leave. We're now in our 4th year and currently have 12 male and 11 female Builders (that's what we call employees), 8 of whom are living in the US (initially, our American employees were hired as contractors due to our company being in Germany, but to us they're all employees).

As our company has grown over the years and we've increased our US coverage, we've had the challenge of finding a balance between the German and American startup work cultures. At the same time, with 3 female employees being pregnant at the beginning of the year, all of whom we hired knowing they were already pregnant, we needed to figure out what, if anything, we were offering for maternity leave. In Germany the maternity laws are quite generous, with 14 months of paid leave covered by the government. One of our employees was covered by this. The other two were based in the US, and while we try to offer the same benefits to our American employees as we do to the German ones, we didn't have anything in place for parental leave.

Let me digress for a bit and say that I am an American woman living in Berlin. I moved here in March of 2015 to seek a better life for myself and my future (far far future!) family. After living in San Francisco and working for startups the past 7 years, I have been disappointed time and time again by the offers made (or lack thereof) to pregnant employees. Once I even worked for a company that created a Dogs In The Office policy before creating a Maternity Leave policy... So when our CEO asked me to do some research and make a proposal for our US Maternity/Paternity Leave policy, I took it very seriously and wanted to offer something beyond the bare minimum.

You would be amazed at what I discovered during my research. The offerings ranged from a small amount of money to help cover the time away, to somewhat generous paid time off plus cash bonuses, to as little as 2 weeks. TWO WEEKS! Another astounding thing was that most of the offerings were not equal for mothers and fathers. While I can understand the need for the mother to have more time away with her baby, having her partner around to not only assist her but also bond with the baby is crucial. So after a couple of weeks of research, we decided to offer our expectant mothers AND fathers:

  • 2 weeks before the due date paid at 100% (optional, but recommended)
  • 20 weeks for normal births paid at 100%
  • 24 weeks for births with complications paid at 100%
  • Flexible working hours after the 20/24 weeks are complete (part-time arrangements can be made)
  • Your job will be here for you when you return

When we relayed this information to the two US employees, one became a little teary because her last employer (a much bigger and older company), didn't offer anything. This being her second child, it was a huge relief to know she was going to have paid time off with flexibility upon return. While it was a great reaction, it shouldn't happen this way. If you value your employees, you should value their need for time away. At the same time, if you want to hire someone, whether or not they are already pregnant should be irrelevant.

We're continuously working on making Travis CI an inclusive and family-friendly place to work, and offering all of our employees time off with their newborns is an important part of that.

Docker Demo Webinar: FAQ

Docker News Monday August 10, 2015 @ 16:56 • over a year ago
Thanks again to everyone who joined last week’s Docker Demo webinar! There were more questions than time to answer them – and we wanted to share the Q&A publicly so everyone had access to this information on their Docker journey. Don’t … Continued

A Survey of Non-Rails Frameworks in Ruby: Cuba, Sinatra, Padrino, Lotus

Codeship News Monday August 10, 2015 @ 15:15 • over a year ago

It’s common for a Ruby developer to describe themselves as a Rails developer. It’s also common for someone’s entire Ruby experience to be through Rails. Rails was created in 2003 by David Heinemeier Hansson and quickly became the most popular web framework, also serving as an introduction to Ruby for many developers.

Rails is great. I use it every day for my job, it was my introduction to Ruby, and this is the most fun I’ve ever had as a developer. That being said, there are a number of great framework options out there that aren’t Rails.

This article intends to highlight the differences among Cuba, Sinatra, Padrino, and Lotus, and how they compare to Rails.

The Format

First things first, we’re going to take a quick look at Rack, which all of these frameworks (including Rails) are built on top of. After that, we’ll compare them using MVC as the basis of comparison, but we’ll also talk about their routers. Finally, we’ll quickly discuss when it might be appropriate to choose one over the other.

The Foundation: Rack

Rack provides a minimal interface between web servers that support Ruby and Ruby frameworks.

Rack is a big part of why the Ruby community has such a large number of both webservers and frameworks. It allows them to communicate with each other in a standard way. A webserver like Puma, Unicorn, or Passenger only has to be built for one interface: Rack. As long as the framework is also built upon Rack, it’s able to work with any one of the webservers out there. All of the frameworks I’ll be discussing in this article are built on top of Rack, and Rails is no different.

A minimal Rack application contains an object which responds to the call method. This method should return an array with three elements: an HTTP status code, a hash of HTTP headers, and the body of the response. Essentially, a framework helps you organize and manage the creation of responses that get converted into this format.

Here is the smallest Rack application:

run lambda { |env| ['200', {'Content-Type' => 'text/html'}, ['Hello, Rack']] }

By saving this in a config.ru file and running the rackup command (which will start a webserver), we can actually reach this endpoint from the browser or via curl.


Now we can reach it from curl:

$ curl http://localhost:9292
Hello, Rack
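To make the contract concrete, here is a server-free sketch: any plain Ruby object that responds to call and returns the status/headers/body triplet is a valid Rack app, and you can exercise it directly.

```ruby
# Any object responding to #call(env) and returning the
# [status, headers, body] triplet satisfies the Rack contract;
# no webserver is required to exercise it.
class HelloApp
  def call(env)
    ['200', { 'Content-Type' => 'text/html' }, ['Hello, Rack']]
  end
end

status, headers, body = HelloApp.new.call({})
```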

Cuba
The tagline for Cuba is “Ceci n’est pas un framework,” which translates to “This isn’t a framework.” You might ask yourself why it’s even in the list, but luckily just below that phrase it says that Cuba is in fact a micro framework. That’s why I have it listed first, just above Rack.

Cuba was written by Michel Martens with the goal of following a minimalist philosophy and only providing what’s needed rather than some of the bloat and unused features that come along with much larger frameworks. Cuba is small, lightweight, and fast.

Router
Cuba provides a small router that allows you to define routes using its DSL.

Cuba.define do
  on root do
    res.write("Hello World!")
  end
end

This code is small but surprisingly powerful. The on method is looking for a clause that returns true. So in this case, when a request is made to /, or root, it will yield the block of code. This block of code has access to res, which is a Cuba::Response class. Here you can tell it what status code to use, which text to render, etc.
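The dispatch idea behind on can be sketched in a few lines of plain Ruby (an assumed simplification for illustration, not Cuba’s actual internals):

```ruby
# A clause that evaluates truthy yields its block; otherwise the
# block is skipped. Nesting such clauses gives route matching.
def on(condition)
  yield if condition
end

visited = []
path = "/about"

on(path == "/")      { visited << :root }
on(path == "/about") { visited << :about }
```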

Because of how on works, we can nest routes together like so, where we group all of our GET requests together:

Cuba.define do
  on get do
    on "about" do
      res.write("About us")
    end

    on root do
      res.write("Hello World!")
    end
  end
end

Model and controller

Cuba, as mentioned at the outset, is a micro framework, and it has chosen not to include what might traditionally be the model or controller. If you are in need of a model for your app, you are welcome to bring in a solution from another framework, like ActiveRecord or Lotus::Model. What would be in the controller in Rails is put inside of the router directly.

Views
Cuba comes with a plugin called Cuba::Render which helps you with the View layer. It allows you to use ERB templates by default but can easily be configured to support Haml, Sass, CoffeeScript, etc. by using the Tilt gem.

To take advantage of the views you can use three methods: partial, view, or render. In this example, I’ll use render, which looks for an ERB file inside of the views folder and passes that as a content variable to a layout.erb file.

require "cuba"
require "cuba/render"
require "erb"

Cuba.plugin Cuba::Render

Cuba.define do
  on get do
    on "about" do
      render("about")
    end
  end
end

Here is our layout file (views/layout.erb):


<html>
  <body>
    <%= content %>
  </body>
</html>

And lastly our actual view template (views/about.erb):

<h1>About Us</h1>
<h2>Welcome, friends</h2>

Sinatra
Going up the ladder of complexity a little bit, we have Sinatra. Sinatra was created by Blake Mizerany in 2007, about four years after Rails began. Sinatra is probably the second most popular Ruby framework out there; it’s used by many coding bootcamps to give students their first introduction to building a Ruby app.

Router
Sinatra comes with a very capable router which is centered around two things:

  • the HTTP verb (GET, PUT, POST, etc.)
  • the path of the HTTP request.

Using these, you can match the home page by using get and "/".

The main job of one of these blocks of code is either to respond with the text (or JSON or HTML) to be rendered, or to redirect to another page. In this example, I’m redirecting the root URL to "/hello", which renders some text.

get "/" do
  redirect "/hello"
end

get "/hello" do
  "Hello, World"
end
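The verb-plus-path dispatch described above can be sketched as a plain Ruby lookup table (a toy model for illustration, not Sinatra’s implementation):

```ruby
# Routes keyed on exactly the two things Sinatra's router keys on:
# the HTTP verb and the request path. A redirect is modeled as a
# 302 response with a Location header.
routes = {}
route  = ->(verb, path, handler) { routes[[verb, path]] = handler }

route.call("GET", "/",      -> { [302, { "Location" => "/hello" }, []] })
route.call("GET", "/hello", -> { [200, {}, ["Hello, World"]] })

# Dispatching "GET /" hits the redirect handler.
status, headers, body = routes[["GET", "/"]].call
```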

Model and controller

Like Cuba, Sinatra doesn’t come with a model or controller layer out of the box, and you’re welcome to use Active Record, Lotus::Model, or another ORM of your choosing.

Views
Sinatra comes with a built-in view layer which allows you to use ERB, Haml, and Sass, among other templating engines. Each of these templating engines exposes itself to you in the router via its own rendering method. To render an ERB file, we can simply say erb :hello, which renders the views/hello.erb file embedded into a views/layout.erb file via a yield.

get "/hello" do
  erb :hello
end
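The layout-plus-yield behavior can be approximated with the stdlib ERB class alone (a hedged sketch; Sinatra’s real rendering pipeline does more than this):

```ruby
require "erb"

# The template's output becomes a local that the layout embeds,
# mimicking how a view is inserted into a layout via yield.
template = ERB.new("<h1>Hello</h1>")
layout   = ERB.new("<html><body><%= content %></body></html>")

content = template.result(binding)
page    = layout.result(binding)
```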

Another feature that Sinatra has is the ability to define helper methods which can be used inside of the templates. To do that, you use the helpers method and define your own methods inside of the block.

helpers do
  def title
    "Sinatra Demo App"
  end
end

In our views/layout.erb file we can access the title method we defined as a helper.

<!doctype html>
<html>
  <head>
    <title><%= title %></title>
  </head>
  <body>
    <%= yield %>
  </body>
</html>

Padrino
Our goal with Padrino is to stay true to the core principles of Sinatra while at the same time creating a standard library of tools, helpers, and functions that will make Sinatra suitable for increasingly complex applications.

Padrino and Sinatra go hand in hand. Padrino is based on Sinatra but adds many additional tools, such as generators, tag helpers, caching, localization, and mailers. It takes Sinatra, which could be said to be on the lighter side of what a framework is, and adds the things needed to make it a full-stack framework.

Router
In Padrino apps, the routes and controllers are combined in one place. Instead of having a routes file where all the routes go for the whole application (although this too is possible), the controllers essentially contain the routes. The users controller contains all routes related to a User. Anything you can do with routing in Sinatra you can do here, plus some extra features like route nesting.

Bookshelf::App.controllers :books do
  get :index do # /books
    @books = Book.all
    render 'index', layout: 'application'
  end
end

Model
Padrino doesn’t have its own model layer but rather comes with support for a large number of established ORMs that either live on their own or come from other frameworks.

Padrino supports Mongoid, Active Record, MiniRecord, DataMapper, CouchRest, Mongomatic, MongoMapper, Ohm, Ripple, Sequel, and Dynamoid. While generating an app you can choose which one you’re going to use (if any), and then the model generators will help you generate models according to the ORM that you chose.

padrino g model Book title:string code:string published_on:date

Views
Padrino has a view layer which is rendered from within the controller. It supports ERB, Haml, Slim, and Liquid out of the box. There isn’t much to say here other than it works well and as expected. Variables can be passed to the view by setting an instance variable in the controller. There are also helpers generated for each controller which can also be used in the view.

Controllers
Controllers and routes in Padrino are essentially the same thing. The controller defines the routes and decides how to handle the response: whether to find data to render or to redirect elsewhere.

Try Codeship – The simplest Continuous Delivery service out there.

Lotus
Lotus is a Ruby MVC web framework comprised of many micro-libraries. It has a simple, stable API, a minimal DSL, and prioritizes the use of plain objects over magical, over-complicated classes with too much responsibility.

Lotus is a full-stack MVC framework created by Luca Guidi that began in 2013. The philosophy behind it, as mentioned in a blog post by Luca, is that its goal is simplicity, aiming to be built in a modular way, relying more on plain old Ruby objects (POROs) rather than DSLs.

Lotus is actually comprised of seven different modules (or “micro-libraries”):

  • Lotus::Model
  • Lotus::Router
  • Lotus::Utils
  • Lotus::View
  • Lotus::Controller
  • Lotus::Validations
  • Lotus::Helpers

These can be used individually or brought together in a complete full-stack framework under the Lotus gem itself.

Router
Lotus comes with a very clean and capable router. It feels very similar to the router that comes with Rails.

To map a GET request for the root path to the Index action of our Home controller, we write:

get '/', to: 'home#index'

The Lotus Router also supports RESTful resources right out of the box.

resources :books, except: [:destroy]

Because Lotus is Rack compatible, we can respond with a Proc (because it has a call method), and we can even mount an entire Sinatra application inside our routes.

get '/proc', to: ->(env) { [200, {}, ['Hello from Lotus!']] }
mount SinatraApp, at: '/sinatra' # SinatraApp: any Rack-compatible Sinatra application (name illustrative)

Model
Lotus::Model follows a Domain Driven Design approach (see Domain Driven Design by Eric Evans), which implements the following concepts:

  • Entity
  • Repository
  • Data Mapper
  • Adapter
  • Query

An Entity is an object that is defined by its identity. In other terms, it’s the “noun” or “thing” in your app: a User, a Book, a Library, etc.

class Book
  include Lotus::Entity
  attributes :author_id, :price, :title, :code
end

A Repository is the next major object in Lotus::Model, whose job it is to mediate between an Entity and the persistence layer (PostgreSQL, MySQL, etc.). Entities aren’t responsible for querying themselves or persisting themselves to the database. That’s the job of the Repository, and it allows for the separation of concerns.

The Repository is where you’ll put all of your queries. They are actually private methods of this object, meaning you’re forced to keep them organized in one place. Controllers (or even worse, Views) no longer have intimate knowledge of how to query data.
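The separation the Repository provides can be sketched with plain Ruby objects (names here are assumed for illustration; this is not Lotus::Repository):

```ruby
# The entity is plain data; every query lives on the repository,
# not the entity, so persistence knowledge stays in one place.
Book = Struct.new(:id, :title, :author_id)

class InMemoryBookRepository
  def initialize
    @rows = {}
  end

  def persist(book)
    @rows[book.id] = book
    book
  end

  def most_recent_by_author(author_id, limit: 8)
    @rows.values
         .select { |b| b.author_id == author_id }
         .sort_by(&:id)
         .reverse
         .first(limit)
  end
end

repo = InMemoryBookRepository.new
repo.persist(Book.new(1, "1984", 7))
repo.persist(Book.new(2, "Animal Farm", 7))
repo.persist(Book.new(3, "Dune", 9))
```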

In Active Record, which is an implementation of the Active record pattern, you might write your query like this, which could exist anywhere in your application (and is quite commonly found in Controllers):

Book.where(author_id: author.id).order(id: :desc).limit(8)
But in Lotus it would live inside the repository.

class BookRepository
  include Lotus::Repository

  def self.most_recent_by_author(author, limit: 8)
    query do
      where(author_id: author.id).desc(:id).limit(limit)
    end
  end
end

The job of the Data Mapper is to map our fields in the database to attributes on our Entity.
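The mapping step can be sketched in plain Ruby (column and attribute names are assumed for the example; this is not Lotus::Model’s implementation):

```ruby
# Translate a database row hash into entity attributes via an
# explicit column-to-attribute mapping.
Book = Struct.new(:id, :title, :price)

COLUMN_TO_ATTRIBUTE = { "id" => :id, "title" => :title, "price" => :price }

def map_row(row)
  attrs = COLUMN_TO_ATTRIBUTE.each_with_object({}) do |(col, attr), h|
    h[attr] = row[col]
  end
  Book.new(attrs[:id], attrs[:title], attrs[:price])
end

book = map_row({ "id" => 1, "title" => "1984", "price" => 100 })
```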

collection :books do
  entity Book
  repository BookRepository

  attribute :id, Integer
  attribute :author_id, Integer
  attribute :title, String
  attribute :price, Integer
  attribute :code, String
end

Lotus also comes with migrations to help you manage the schema of your database. Migrations in Lotus are quite similar to those in Rails. A series of command line commands will help you generate a new migration. Once you’re done writing it, you can run the migration, check the current migration version of the database, or roll it back.

bundle exec lotus generate migration create_books
# db/migrations/20150724114442_create_books.rb
Lotus::Model.migration do
  change do
    create_table :books do
      primary_key :id
      foreign_key :author_id, :authors, on_delete: :cascade, null: false

      column :code,  String,  null: false, unique: true, size: 128
      column :title, String,  null: false
      column :price, Integer, null: false, default: 100
    end
  end
end

Lotus didn’t feel the need to reinvent the wheel here; it uses Sequel under the hood for migrations and for communicating with the database.

bundle exec lotus db migrate
bundle exec lotus db version # 20150724114442

Here is how you would create and persist a Book:

author = Author.new(name: "George Orwell")
author = AuthorRepository.persist(author)

book = Book.new(title: "1984", code: "abc123", author_id: author.id, price: 1000)
book = BookRepository.persist(book)

Views
In Lotus, the View is an actual object responsible for rendering a template. This differs from Rails, where the controller renders the template directly.

# web/views/books/index.rb
module Web::Views::Books
  class Index
    include Web::View

    def title
      "All the books"
    end
  end
end

Inside of the template, we are now able to call books (which was exposed to us from the Controller/Action) and title to get the page title. Lotus comes with ERB templates by default, but it supports many different rendering engines such as Haml and Slim.

<h1><%= title %></h1>

<ul>
  <% books.each do |book| %>
    <li><%= book.title %></li>
  <% end %>
</ul>

Controllers
One major difference between Lotus and Rails in the Controller layer is that each Action in Lotus is its own file and class. Another difference is that @ (instance) variables aren’t exposed to the View by default. We must explicitly tell the Action which variables we want to expose.

# web/controllers/books/index.rb
module Web::Controllers::Books
  class Index
    include Web::Action
    expose :books

    def call(params)
      @books = BookRepository.all
    end
  end
end
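To see why explicit exposure matters, here is a plain-Ruby sketch of how an `expose` macro can work (an illustrative assumption, not Lotus's implementation): only declared names get readers, so everything else stays private to the action.

```ruby
# Sketch of an expose macro: declared names become readable from outside
# (e.g. by the view layer); undeclared instance variables stay hidden.
module Action
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def expose(*names)
      exposures.concat(names)
      names.each do |name|
        define_method(name) { instance_variable_get("@#{name}") }
      end
    end

    def exposures
      @exposures ||= []
    end
  end
end

class Index
  include Action
  expose :books

  def call(params)
    @books  = ["1984"]  # exposed: readable via #books
    @hidden = "secret"  # not exposed: no reader is defined
  end
end

action = Index.new
action.call({})
puts action.books                 # => ["1984"]
puts action.respond_to?(:hidden)  # => false
```

This inverts the Rails default: instead of every instance variable leaking into the template, the action opts in to exactly what the view may see.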


It should be said that each of these frameworks has its uses and features that differentiate it from the others. So which one is best? Here’s an answer that you’ll hate:

It depends.

It depends on the requirements of your project, or, when all else is equal, on developer preference.

Here’s a summary of when it might be a good idea to choose one of these frameworks over another:

  • Cuba: Very close to Rack with very low overhead. I think its best use is for small endpoints where speed is crucial or for those who want full control over their entire stack, adding additional gems and complexity as needed.
  • Sinatra: Not as close to Rack, yet still far from being a full-stack framework such as Rails or Lotus. I think it’s best used when Cuba is too light, and Rails/Lotus are too heavy. It’s also a great teaching tool because of its small interface.
  • Padrino: For those who have an existing Sinatra app that is becoming more complex and warranting things that come in a full-stack framework. You can start with Sinatra and graduate to Padrino if needed.
  • Lotus: A great Rails alternative with a simple and explicit architecture. For those that find themselves disagreeing with “The Rails Way,” or for those that really enjoy the Domain Driven Design approach.

“A survey of non-Rails frameworks in Ruby: Cuba, Sinatra, Padrino, Lotus” – via @codeship
Click To Tweet

The post A Survey of Non-Rails Frameworks in Ruby: Cuba, Sinatra, Padrino, Lotus appeared first on Codeship.

With OpenShift Enterprise by Red Hat, CA Technologies provides enterprise Platform-as-a-Service (PaaS) to its development teams to more quickly build and deploy web-based apps for customers.

Reactive Containers: How Kitematic works hand-in-hand with the Docker CLI

Docker News Friday August 07, 2015 @ 05:28 • over a year ago
written by Jeff Morgan, Software Engineer at Docker, Inc.  Introduction Kitematic is a desktop app for Mac & Windows with the purpose of helping new Docker users run their first container in minutes. Kitematic streamlines the install process to setup … Continued