Planet Hosting & Delivery

Updated Saturday, 21 April 2018 07:30
Docker News ( Feed )
Tuesday, 06 March 2018
5 years later, where are you on your Docker journey?

Docker is turning five the week of March 19-25, 2018 and in celebration of this event, we would like to turn the spotlight to our users – the customers, partners, individuals and organizations that are using the Docker platform every day. From scientific use cases in the field of medical and space research to more traditional use cases in large banks and insurance companies, Docker’s adoption is only accelerating with everyone from individual developers to global corporations. Here are some of the key figures showing the widespread adoption of Docker across the community and within enterprises.

 

Docker has evolved from a container runtime to Docker Enterprise Edition (EE), a secure enterprise-ready container platform that brings unmatched freedom of choice to enterprises, while providing a foundation for their digital and multi-cloud initiatives. Millions of users rely on Docker, downloading 100M container images a day, and over 450 companies have turned to Docker Enterprise Edition – including hundreds of the largest enterprises in the world. With such vast adoption, the range of stories to tell and the diverse set of use cases continues to grow. So where are some of these users and enterprise customers on their Docker journey?

Docker users have deployed Docker containers for a variety of use cases. One research center is using Docker to create a scalable, parallelizable and rapidly deployable test infrastructure for DART, NASA’s mission to hit an asteroid at 6 km/s. Another use case involves Pinterest, a user that deployed Docker to improve the production machine learning systems powering its home feed. Still, some of our commercial customers are using Docker EE for a variety of mission-critical applications:

GE Digital – The company leveraged the Docker platform and Windows containers to repackage one of its main Asset Performance Management (APM) on-premises solutions and embrace a new fog computing approach with greater intelligence and more computing power at the edge.

GlaxoSmithKline – The global pharmaceutical company is using Docker EE to help power its new research environment for identifying new medical discoveries more rapidly.

MetLife – By using Docker EE for legacy application modernization, MetLife projects savings of 66% across nearly 600 of its applications, enabling the 150-year-old company to self-fund innovation projects.

We are thrilled that MetLife will be hosting an onsite birthday event to celebrate their containerization journey, welcome new Docker users and share their first-hand experiences with how the Docker platform has impacted their business.

Docker Birthday events worldwide – RSVP Today

Come learn, mentor, celebrate, eat cake, and take an epic #dockerselfie! No matter where you are in your Docker journey, the birthday events are meant to help demystify some Docker concepts and guide users along a specific learning path depending on their use case and expertise level. Advanced Docker users and partners will be available on-site and act as mentors to help everyone complete a series of Docker hands-on labs.

Docker Birthday #5 Hands-on Labs will include the following:

  • Docker 101 for Linux users
  • Docker 101 for Windows users
  • Kubernetes on Docker for Mac and Docker for Windows
  • Modernizing Traditional Java Apps
  • Modernizing Traditional .NET / Windows Apps
  • Deploying Multi-OS applications with Docker EE and Kubernetes

The main event will take place at Docker HQ in San Francisco on March 22nd, but thanks to the hard work of awesome Community Leaders, we now have 65 events (and counting) scheduled worldwide! Check out the map below to confirm your RSVP or reach out to your community leaders to organize an event in your city.


The post 5 years later, where are you on your Docker journey? appeared first on Docker Blog.

Using Genetic Algorithms in Ruby

Reading Time: 5 minutes

Did you know that there’s a way to use the power of natural selection to solve programming challenges? With genetic algorithms (GA), you can solve optimization problems using the same concepts that you find in nature:

  • Reproduction
  • Survival of the fittest
  • Adaptation to the environment

So what’s an optimization problem? It’s when you want to find not just a valid solution but the solution that will give you the best results.

For example, if you have a backpack that only fits a certain amount of stuff and you want to maximize the amount of stuff you can bring, then you could use a genetic algorithm to find the best solution. This is also known as the knapsack problem.

The genetic algorithm is not the only way to solve this kind of problem, but it’s an interesting one because it’s modeled after real-world behavior. So let’s learn how they work and how you can implement your own using Ruby.


The Initial Population

The first thing you need in a genetic algorithm is the initial population. This is just a pool of potential solutions that are initially generated at random. A population is made of chromosomes.

Here’s a key point: Every chromosome represents a potential solution to the problem.

A chromosome can encode a solution in different ways; one is to use a binary string, a string composed of 0s and 1s. Here’s part of the Chromosome class:

class Chromosome
  SIZE = 10

  attr_reader :value

  def initialize(value = nil)
    # Use the genes passed in (for example by crossover); otherwise
    # generate a random binary string of SIZE genes
    @value = value || Array.new(SIZE) { ["0", "1"].sample }
  end
end

With Array.new(size), you can create a prefilled array with the results from the block, which in this case is a random "0" or "1" string for each position.

This is what a chromosome looks like:

"0010010101"

We use a 1 to represent an item inside the backpack and a 0 to represent an item that is not in the backpack.
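
To make the encoding concrete, here is a small illustrative sketch (using the weights array that appears later in the article) of how a binary string maps back to the items it selects:

weights    = [2, 3, 6, 7, 5, 9, 4]
chromosome = "0010011".chars

# Keep only the weights whose gene is "1" (the item is in the backpack)
selected = weights.select.with_index { |_, idx| chromosome[idx] == "1" }
# => [6, 9, 4]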

Now that we have a chromosome, we can generate the initial population:

population = Array.new(100) { Chromosome.new }

Survival of The Fittest

In this step, we want to select the strongest chromosomes (potential solutions) from our population and use them to create the next generation.

There are two components to this:

  • The fitness function
  • The selection algorithm

The fitness function is used to ‘score’ every chromosome to see how close it is to the optimal solution. This of course depends on the problem we are trying to solve. For the backpack problem, we could use a fitness function that returns a higher score for every item that we are able to fit in.

Here is an example:

CAPACITY = 20

def fitness
  weights = [2, 3, 6, 7, 5, 9, 4]
  values  = [6, 5, 8, 9, 6, 7, 3]

  # Total weight of the selected items (genes set to "1")
  w = weights
      .map
      .with_index { |weight, idx| value[idx].to_i * weight }
      .inject(:+)

  # Total value of the selected items
  v = values
      .map
      .with_index { |val, idx| value[idx].to_i * val }
      .inject(:+)

  # An overweight backpack is an invalid solution, so it scores 0
  w > CAPACITY ? 0 : v
end

First, we calculate the total weight of the items to see if we have gone over capacity. Then if we go over capacity, we are going to return a fitness of 0 because this solution is invalid. Otherwise we are going to return the total value of the items that we were able to fit in, because that’s what we are optimizing for.

For example, with the chromosome "0010011" and the values and weights given above, we have the items with weights [6, 9, 4] inside our backpack, for a total weight of 19. Since that is within capacity, we are going to return the total value of these items, which is 8 + 7 + 3 = 18.

That becomes the fitness score for this particular chromosome.

Selection Algorithm

Now let’s go over the selection algorithm. This decides which two chromosomes to evolve at any given time.

There are different ways to implement a selection algorithm, like the roulette wheel selection algorithm and the group selection algorithm.

Or we can simply pick two random chromosomes. I found this to be good enough as long as you apply elitism, which is to keep the best fit chromosomes after every generation.

Here’s the code:

def select(population)
  population.sample(2)
end
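
Elitism itself is straightforward. As a rough sketch (the apply_elitism helper and the elite_count value are illustrative, not from the original project), it can be as simple as copying the top-scoring chromosomes into the next generation untouched:

def apply_elitism(current_generation, next_generation, elite_count = 2)
  # Carry the best-scoring chromosomes over unchanged so the best
  # solution found so far is never lost between generations
  elite = current_generation.max_by(elite_count) { |ch| ch.fitness }
  next_generation.concat(elite)
end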

Next we will learn how we can evolve the selected chromosomes so we can create the next generation and get closer to the optimal solution.

Genetic Algorithm Evolution

To evolve our selected chromosomes, we can apply two operations: crossover and mutation.

Crossover

In the crossover operation, you cross two chromosomes at some random point to generate two new chromosomes, which will form part of the next generation.

Crossover illustration (source: Wikipedia)

Here’s the crossover method:

def crossover(selection, index, chromosome)
  # Swap the tails of the two parent chromosomes at the given index
  # (this assumes Chromosome delegates [] to its underlying value array)
  cr1 = selection[0][0...index] + selection[1][index..-1]
  cr2 = selection[1][0...index] + selection[0][index..-1]

  [chromosome.new(cr1), chromosome.new(cr2)]
end

We don’t always apply this crossover operation because we want some of the current population to carry over.

Mutation

The other evolutionary operation we can perform is mutation. Mutation is only applied with a small probability because we don’t want to drift off too much from the current solution.

The purpose of mutation is to avoid getting stuck at a local optimum.

Implementation:

def mutate(probability_of_mutation)
  # Flip each gene independently with the given (small) probability
  @value = value.map { |ch| rand < probability_of_mutation ? invert(ch) : ch }
end

def invert(binary)
  binary == "0" ? "1" : "0"
end

Now that we have all the components, we can make them work together.

The Run Method

This method generates the initial population and contains the main loop of the algorithm. It will also find the best-fit solution and return it at the end. It looks something like this:

def run(chromosome, p_crossover, p_mutation, iterations)
  # initial population
  population = Array.new(100) { chromosome.new }

  current_generation = population
  next_generation    = []

  iterations.times {
    (population.size / 2).times {
      # selection
      selection = select(current_generation)

      # crossover (only applied at the crossover rate)
      if rand < p_crossover
        selection = crossover(selection, rand(0..chromosome::SIZE), chromosome)
      end

      # mutation
      selection[0].mutate(p_mutation)
      selection[1].mutate(p_mutation)

      next_generation.concat(selection)
    }

    current_generation = next_generation
    next_generation = []
  }

  current_generation.max_by { |ch| ch.fitness }
end

This run method is defined inside the GeneticAlgorithm class, which we can use like this:

ga = GeneticAlgorithm.new
puts ga.run(Chromosome, 0.2, 0.01, 100)

The first argument is the chromosome class we are going to use, the second argument is the crossover rate, the third argument is the mutation rate, and the last argument is the number of generations.

How Do You Know That You Got The Best Solution?

Long story short, you can’t know for sure. What you can do is run a good number of iterations and trust that the result is either the optimal solution or very close to it.

Another option is to keep track of the best-fit chromosome and stop if it doesn’t improve after a certain number of iterations.
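
As a rough sketch of that idea (the evolve_one_generation helper and the patience threshold of 20 generations are purely illustrative):

best  = nil
stale = 0

loop do
  # evolve_one_generation is a hypothetical helper that builds and
  # returns the next generation of chromosomes
  champion = evolve_one_generation.max_by { |ch| ch.fitness }

  if best.nil? || champion.fitness > best.fitness
    best  = champion
    stale = 0
  else
    stale += 1
  end

  # Stop once the best fitness hasn't improved for 20 generations
  break if stale >= 20
end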

Conclusion

In this article, you learned that genetic algorithms are used to solve optimization problems. You also learned how they work and what components they’re made of (initial population, selection, and evolution). You can find the finished project on GitHub.

If you found this article interesting, do us a favor and share this post with as many people as you can so they can enjoy it, too!


The post Using Genetic Algorithms in Ruby appeared first on via @codeship.

Travis News ( Feed )
Monday, 05 March 2018
dpl 1.9.0 is coming

Deployment is a critical component of Travis CI, and it is one of our major goals to ensure that it works well at all times.

It is carried out with dpl, our deployment tool, written in Ruby, with contributions from our community.

A major update (version 1.9.0) to dpl is coming soon, and we would like you to help us ensure a smooth transition by testing it before the public release.

This will affect all our users, on https://travis-ci.org, https://travis-ci.com, and all the Travis Enterprise customers.

What’s new?

The new version 1.9.0 will introduce the following breaking changes:

  1. It requires Ruby 2.2 or later
  2. It drops support for Appfog

Ruby 2.2 is required

The requirement of Ruby 2.2 may sound scary for those using older versions of Ruby for builds. Please rest assured that this is not a concern, because we use a version of Ruby pre-installed on the build image to ensure that the deployment is carried out correctly. This version is already Ruby 2.2 on our hosted services and newer Travis Enterprise releases.

In the rare cases where the deployment is carried out with Ruby 1.9 for whatever reason, we choose dpl 1.8.x to deploy.

One case where a complication may arise is older Travis Enterprise releases, where we fail to specify dpl 1.8.x when the deployment uses Ruby 1.9. If this applies to you, please test deployment as shown below, and please get in touch with us at enterprise@travis-ci.com if you encounter problems.

Additional changes

In addition, dpl 1.9.0 splits each deployment provider’s code into a separate gem. This allows for more focused deployment approaches, where one deployment provider’s requirements are not affected by another’s.

The Chef Supermarket provider, for example, can now use version 13.8.0, whereas dpl 1.8.47 used chef 11.18.12 (due to dependency resolution unrelated to it).

Testing dpl 1.9.0 before the release

To use the pre-release version of dpl, add the following to your existing deployment configuration:

deploy:
  edge: true
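
For context, here is a hedged sketch of how edge: true might sit inside a full provider entry (the provider, app name, and credential below are placeholders, not part of this announcement):

deploy:
  provider: heroku          # placeholder provider
  api_key: $HEROKU_API_KEY  # placeholder credential stored in an env var
  app: my-app               # placeholder app name
  edge: true                # opt in to the dpl pre-release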

If you are already using deploy.edge: true, you may have experienced deployment difficulties over the last week or so. We apologize for the unexpected difficulties, and we thank you for reporting the issues; we continue to work on addressing them as they come in. (Many have already been addressed, so if you encountered a problem last week, please try restarting the builds.)

Release schedule

Barring major issues, dpl 1.9.0 will be released on March 8, 2018.

As always, please report issues to our GitHub repository, including a link to your build log. You can also email us at support@travis-ci.com.

Thank you!

Codeship News ( Feed )
Thursday, 01 March 2018
Setting Up HAProxy for a SaaS Application

Reading Time: 4 minutes

At Shopblocks, each customer receives several temporary subdomains so that they can access their website, admin system, and static assets. Part of the challenge of building Shopblocks was in providing all customers, by default, an SSL certificate.

During our prelaunch stage, our system was based heavily on Apache Virtual Hosts, with each customer getting their own Virtual Host file. This was necessary because of the SSL certificate configuration being required on a per-customer domain basis.


Addressing Limitations When Scaling

However, a problem with using Apache Virtual Hosts started to crop up when we grew our customer base and started getting into the thousands and tens of thousands of virtual host files.

We were finding the memory usage (without any request throughput) was increasing with the number of Virtual Host files loaded; graceful reloads were taking longer to perform. Each time a customer registered, a process was set off to create their config files and then to reload Apache.

Further issues appeared when we started building out our infrastructure for a hosted application, such as reloading multiple servers, ensuring that all servers are capable of serving all sites, handling a server that’s gone offline, and ensuring server configurations were synchronized when they came back online.

Shifting SSL Termination Responsibility to HAProxy

Many of these issues were solved by removing the requirement of a Virtual Host per customer, but this left open the issue of SSL termination. We solved this by moving the responsibility for terminating SSL further up the chain to the HAProxy load balancer and away from Apache itself.

In HAProxy, we use map files and header rewrites to handle all domains without a change to any Apache configuration for new customers.

When you sign up to Shopblocks, you actually receive three subdomains. For example, if you registered with the name bowersbros, you will receive bowersbros.myshopblocks.com, bowersbros-admin.myshopblocks.com, and bowersbros-static.myshopblocks.com.

This will create a few entries in the map file for our HAProxy configuration. These lines look like this:

...
bowersbros.myshopblocks.com <id>
bowersbros-admin.myshopblocks.com <id>
bowersbros-static.myshopblocks.com <id>
...

Replacing <id> with a specific identifier for your customer, you can reference the correct config/database as necessary. This will be passed through to your application as an HTTP header, X-Customer-ID.
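
As an illustrative sketch (the helper name and numeric ID are assumptions, not from Shopblocks), generating those three map-file lines for a new customer could look like this:

# Hypothetical helper: builds the three HAProxy map entries created for a
# new customer at signup, using the subdomain patterns described above
def map_entries(name, customer_id)
  ["", "-admin", "-static"].map do |suffix|
    "#{name}#{suffix}.myshopblocks.com #{customer_id}"
  end.join("\n")
end

puts map_entries("bowersbros", 42)
# bowersbros.myshopblocks.com 42
# bowersbros-admin.myshopblocks.com 42
# bowersbros-static.myshopblocks.com 42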

When registering, you cannot use a dash in your name, so we can safely assume that any hostname with an -admin or -static suffix is intended to hit those specific routes in the application (and one with no hyphen is the customer's public website).

Our HAProxy configuration now looks something like this.

frontend http
    bind :80
    option forwardfor
    
    # Redirect all http requests to be https with a 301 redirect
    redirect scheme https code 301
    
frontend https
    # Check for your PEM certificates
    bind :443 ssl crt /path/to/certificates
    
    # Check if the hostname is recognised in the map file
    acl is_customer hdr(host),lower,map_str(/path/to/map/file.map) -m found
    
    # If the domain is not recognised, then silently drop the connection
    # You have the option of deny if you want to immediately reject this connection
    http-request silent-drop unless is_customer
    
    # Delete the X-Customer-ID header in case a header is sent through unexpectedly
    http-request del-header X-Customer-ID
    
    # Add a X-Forwarded-Host header with the original hostname
    http-request add-header X-Forwarded-Host %[req.hdr(Host)]
    
    http-request add-header X-Customer-ID %[hdr(host),lower,map_str(/path/to/map/file.map)]
    
    default_backend core
    
backend core
    balance roundrobin
    
    http-request set-header Host app.<your domain>.com

This is a simplified version of the configuration we use, of course, and will need to be modified for your use case.

All requests that hit the HTTPS frontend get their SSL certificate checked to decrypt the request, then we check their domain against the map file. If we do not find them, they are dropped silently from the request.

We then add their ID to the request under X-Customer-ID, which will be sent with the request as it continues to your web server. Inside of your web server, you can then use the correct config and database connections based on the value of that header.
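
The article doesn't show the application side, but as an illustration only, a minimal Rack-style sketch (the TenantSelector middleware and tenant lookup are assumptions) of acting on that header could look like this:

# Hypothetical Rack middleware: looks up tenant configuration using the
# X-Customer-ID header that HAProxy injected further up the chain
class TenantSelector
  def initialize(app, tenants)
    @app = app
    @tenants = tenants # e.g. { "42" => { database: "customer_42" } }
  end

  def call(env)
    customer_id = env["HTTP_X_CUSTOMER_ID"]
    unless @tenants.key?(customer_id)
      return [403, { "Content-Type" => "text/plain" }, ["Unknown customer"]]
    end

    env["tenant.config"] = @tenants[customer_id]
    @app.call(env)
  end
end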

Conclusion

The Host header is also overwritten so that the correct Apache Virtual Host can be used, with a generic server name in Apache. This now allows your Apache Virtual Host configuration to be generic with one per application, rather than one per server. The only requirement when adding a new customer, domain, or SSL certificate is to modify the file.map and perform a reload in HAProxy, which is a zero-downtime action.

And there we have it, an overview of using HAProxy to set up a SaaS app simply.


The post Setting Up HAProxy for a SaaS Application appeared first on via @codeship.

The Deep End Podcast Ep. 14: Making Sense of It All with Attentive

There’s such a thing as “too much information”, especially for companies scaling out their sales operations. That’s why Attentive was born in 2015: to help sales teams make their increasing pipelines simpler to manage. Indeed, the small, Portugal-based team is itself focused on scaling, having participated in accelerator programs like Techstars.

In this episode, Attentive founder and CTO Pedro Araújo talks about what it takes to build a tech product from the ground up. Discover their approach to running an engineering team, from adopting new open source technologies, to onboarding junior developers and learning about cloud infrastructure.

Subscribe to The Deep End Podcast on iTunes and Spotify, or listen to the latest episode on SoundCloud below:

Hollie Haggans heads up Global Partnerships for DigitalOcean’s Hatch program. She is passionate about startups and cold brew coffee. Get in touch with questions at hatch@digitalocean.com.

Travis News ( Feed )
New Enterprise Build Environments - Ubuntu 14.04, Trusty

We’re super excited to announce new build environments available for Travis CI Enterprise 🎉 These are the stable images used for Trusty container-based builds on hosted Travis CI, and they are up-to-date with the latest tools and features.

Upgrading Build Environments

The new build environments are supported on Travis CI Enterprise installations at version 2.1.9 or later (here’s how to upgrade your Travis CI Enterprise Platform instance). The worker will also need to be installed on a different instance than the legacy Precise build environment workers, but the process is very similar. Full installation docs can be found here: docs.travis-ci.com/user/enterprise/trusty

After upgrading your Travis CI Enterprise installation, builds that should be routed to the new build environments will need to have a dist: trusty in their .travis.yml. From there, everything should be good to go.
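
For example, the relevant .travis.yml addition is a single line:

# Route this build to the new Trusty (Ubuntu 14.04) build environment
dist: trusty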

Of course, if you have any questions or feedback about upgrading, please get in touch at enterprise@travis-ci.com. We’d love to help out!

Support for Legacy Precise Build Environments

Although we encourage everyone to upgrade to the new build environments, we certainly understand these things take time. We’re planning to fully support the legacy (Precise) build environments for the next six months, though we will not update the images. You may update or customize them yourself, if you would like. Please let us know if you have any concerns on this process.

Thank You to Build Environment Beta Testers

Last of all, all of us on the Travis CI Enterprise Team want to send a big shout-out to our Trusty Build Environment beta testers. Thank you for your help and input! 💖

Docker News ( Feed )
Wednesday, 28 February 2018
A Secure Supply Chain for Kubernetes

The beta release of the Docker Enterprise Edition (Docker EE) container platform last month integrates Kubernetes orchestration, running alongside Swarm, to provide a single container platform that supports both legacy and new applications running on-premises or in the cloud. For organizations that are exploring Kubernetes or deploying it in production, Docker EE offers integrated security for the entire lifecycle of a containerized application, providing an additional layer of security before the workload is deployed by Kubernetes and continuing to secure the application while it is running.

Mike Coleman previously discussed access controls for Kubernetes. This week we’ll begin discussing how Docker EE secures the Kubernetes supply chain.

What is a Software Supply Chain?

When you purchase something from a retail store, there is an entire supply chain that gets the product from raw materials to the manufacturer to you. Similarly, there is a software supply chain that takes an application from code on a developer’s laptop to production.

Every company’s software supply chain may be slightly different; some outsource software development, some have adopted Continuous Integration and Continuous Delivery processes, and some deploy production applications across multiple clouds, some on-premises. Regardless of what the software supply chain consists of, Docker EE provides a set of solutions that integrates with your workflows while ensuring that applications remain secure, trusted and safe through all of these steps using both Kubernetes and Swarm.

In this week’s blog, we’ll take a closer look at one part of this solution – image scanning and policy-based image promotions.

Secure Automation of Workflows for Kubernetes

Before an application is deployed in production, organizations typically want to know that it does not have any known vulnerabilities that often come from older releases or unpatched versions of software. It’s also difficult for large organizations to keep a full inventory of every application they have running that may be affected by a new vulnerability.

Docker EE provides image security scanning to help organizations both identify vulnerabilities before the applications are deployed in production and to alert you when new vulnerabilities affect existing applications. This is done by executing a binary-level scan of your images against the NIST list of known vulnerabilities. As shown below, each layer of an image can be thoroughly scanned to provide insight into the workload.

Docker EE also has the ability to define policies to automate the movement of images between repositories. These image promotion policies can be combined with the results of security scanning to create a secure, automated workflow for images moving to production.

For example, a developer is working on a new Kubernetes project with access to the ‘dev’ repository from which they can push and pull images. The repository is set up with image scanning to automatically scan all images when they are pushed to the repository. When the developer is ready to move this into production, they add a specific tag like “latest” to the image. Their repository is set up with an image promotion policy that states that if an image has the “latest” tag and has no critical vulnerabilities, it gets automatically copied or promoted to the ‘QA’ repository.

In this example, only the QA team has access to the QA folder, limiting access to only those who require it. This policy also ensures that developers are responsible for fixing any vulnerabilities before they are passed on to the QA team.

By combining these Docker EE capabilities, organizations can:

  • Automate the movement of images between repositories at scale
  • Enforce security scanning practices at certain stages of development
  • Prevent applications with known vulnerabilities from being deployed in production
  • Limit the access to sensitive repositories (like ‘production’) to only those who require it, while still removing bottlenecks in the process by defining appropriate policies

These are all critical workflows that happen prior to the app being deployed in production with Kubernetes. With Docker EE you get the only container platform with integrated security across the entire supply chain. For more information about Docker’s Secure Supply Chain for Kubernetes watch the following on-demand video: 

 


 Learn more about Docker Enterprise Edition with Kubernetes integration:

The post A Secure Supply Chain for Kubernetes appeared first on Docker Blog.

Scaling Your Containers With Kubernetes

Reading Time: 4 minutes

With the growing adoption of containerization, many developers are no longer struggling with the woes of code environmental inconsistencies. While containers have solved important problems, new questions have arisen over how to effectively manage and scale them. Products like Kubernetes have been born out of an attempt to solve these issues. Yet, what exactly is Kubernetes?

Kubernetes is essentially a solution that coordinates a group of containerized applications and orchestrates them to work together as a single unit. Having a cohesive group of containers allows us to explicitly schedule the number of container instances we have running at once and also provides an automated way to scale them in case of the unexpected.

Let’s look at the technical specifics of how Kubernetes actually does this.


How Kubernetes Works

Kubernetes instances are referred to as clusters. Each cluster consists of three main units:

  • Pods, which are units that control one or more containers.
  • Node components that are associated with specific pods. These nodes can either be a physical or virtual machine.
  • Master components that coordinate all of the node components to work as one.

These three components work with each other to scale and schedule containers properly. The master component dictates the scheduling and scaling commands. The nodes then receive these commands and orchestrate the pods accordingly.

Each of these units is a self-contained service in its own right. They all communicate with each other over a series of specialized APIs. This means that Kubernetes is actually a series of microservices itself.

Another thing to note about the technical structure of Kubernetes is that it’s open source! Kubernetes’ open-source nature means that we can extend it to meet any need that might not be covered in its core feature set. Many organizations have taken the base Kubernetes project and extended it to create products that meet their organization’s unique needs.

However, if we focus on the core aspects of Kubernetes, what exactly about it is unique? What’s the secret sauce behind Kubernetes’ success?

What’s Unique About Kubernetes?

Kubernetes’ early success was mainly due to its creation and subsequent backing by Google. With an endorsement from a company like Google, many developers immediately became attracted to the idea of using it. Yet corporate hype isn’t the only reason it’s popular.

One of the biggest technical features that Kubernetes executes really well is autoscaling. If your production app is experiencing an unexpected spike in traffic, Kubernetes will automatically scale to handle the new workload. Compared to its competitors, it has the most maturity in this feature.

Kubernetes also isn’t bound to one specific container solution for its pods. Many competitors to Kubernetes are bound a bit more closely to the container solutions they support.

Another point that piques folks’ interest in Kubernetes is the open-source nature of the project. As I mentioned earlier, being open source allows infinite customization of the core product. Many of Kubernetes’ competing solutions offer no or limited customization of their products.

We’ve spent a few minutes talking up the good parts of Kubernetes, but what are some drawbacks or immature aspects of it?

What Are the Drawbacks to Kubernetes?

To be frank, Kubernetes has a reputation for being hard to set up. Once it is set up, the complaints seem to fall off; the journey to get there, however, has a reputation for being fairly tough.

Guides like Kubernetes: The Hard Way argue that understanding why Kubernetes is so hard to set up actually leads to a deeper understanding and mastery of the product. However, despite its reputation, many tools have started to emerge that make Kubernetes a bit more accessible.

Another thing that doesn’t work in Kubernetes’ favor is that its core feature set isn’t that unique compared to other products. There are a lot of other tools that have a similar feature set and are a bit easier to use. The lack of a completely unique feature set could allow other products with an easier learning curve to eat into Kubernetes’ market share.

While Kubernetes has its triumphs and its downfalls, what exactly is next for its development?

What’s Next for Kubernetes?

Moving forward, Kubernetes is trying to smooth out its learning curve by working more closely with container providers like Docker to pave the way for easier integration with different types of containers.

Another cornerstone of Kubernetes’ mission moving forward is to make container orchestration a much more popular idea than it already is. Numerous companies have adopted the ideas of containerization in their products. However, the problems that Kubernetes solves haven’t exactly been adopted into the mainstream of tech development yet. Kubernetes’ mission moving forward is to make sure people understand its purpose.

Finally, there’s a plan to address Kubernetes’ competition. Providers like AWS and IBM have their own container management products. Kubernetes is working to better integrate into these platforms for increased adoption, whether that be a supported option for Kubernetes or an entire custom feature built around it. Kubernetes is simply trying to become more embedded in the products we use every day.

Overall, if you’re interested in finding a solution to container scaling and management, Kubernetes is a great place to start. Take the time to learn what it’s about, and you’ll unlock a massive amount of headroom for your organization to scale and manage applications in production environments.


The post Scaling Your Containers With Kubernetes appeared first on via @codeship.

Docker News ( Feed )
Tuesday, 27 February 2018
First Look at the DockerCon San Francisco 2018 Agenda

From June 12th – 15th, San Francisco will welcome 6,000+ developers, sysadmins, architects, VPs of Apps and other IT leaders to get hands-on with the latest innovations in the container ecosystem at DockerCon 2018. Today, we are excited to share a first look at the DockerCon agenda. We’ve brought back some of your favorites from past DockerCons and are also thrilled to welcome many first-time DockerCon speakers to the stage. Here is a first look at some of our favorite sessions:

 

Customers in Production  

Use case sessions highlight how companies are using Docker to modernize their infrastructure and build, manage, and secure distributed applications. These sessions are heavy on business value, ROI, production implementation advice, and learnings.

  • Building your NoSQL ship: How an Enterprise transitioned from a RDBMS to NoSQL DB using Agile and Docker by Jonell Taylor, Metlife
  • Black Friday and 100K Deployments Per Year by Srikanth Bulusu & Sanjoy Mukherjee, JCPenney
  • Packaging Software for Distribution on the Edge with Docker and Windows Server by Peter Ngai, GE Digital

Using Docker

Using Docker sessions are introductory sessions for Docker users, dev and ops alike. Filled with practical advice, learnings, and insight, these sessions will help you get started with Docker or better implement Docker into your workflow.

  • Taking Kubernetes to Production with Docker by Scott Coulton, Puppet
  • Don’t have a Meltdown! Practical Steps for Defending your Apps by Liz Rice, Aqua and Justin Cormack, Docker
  • Demystifying Container Connectivity with Kubernetes in the Docker Platform by Nicola Kabar, Docker and Karthik Prabhakar, Tigera
  • Proactive Ops for Container Orchestration Environments by John Harris, Docker

Black Belt

One way to achieve a deep understanding of a complex system is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. This is what we do in the Black Belt track! It features deeply technical talks covering not only container technology but also related projects.

  • Kubernetes in Docker by Alex Mavrogiannis + Guillamine Rose, Docker
  • Windows Container Security by David Lawrence, Docker and Saruhan Karademir, Microsoft
  • Istio: Weaving, Securing and Observing the Microservices by Lin Sun, IBM and Zack Butcher, Google
  • Kubernetes Extensibility by Tim Hockin + Eric Tune, Google

Docker Docker Docker

Docker Docker Docker sessions provide a deeper dive into Docker tooling, implementation, and real world production use recommendations. If you are ready to get to the next level with your Docker usage, join this track for product updates and best practices from the Docker team.

  • Product Overview: Docker Desktop for Developers by Gareth Rushgrove
  • Product Overview: Docker Enterprise Edition by Vivek Saraswat
  • Docker Storage: Designing a Platform for Persistent Data by Daniel Finneran, Docker

Innovation

The Innovation Track shows how containers are redefining our technology toolbox, from solving old problems in a new way to pushing the boundaries of what we can accomplish with software. Sessions in this track provide a glimpse into the new container frontier as it relates to IoT, Machine Learning and distributed systems.

  • Hardware Testing with Docker for Space by Christopher Heistand, Johns Hopkins Applied Physics Lab
  • Democratizing Machine Learning on Kubernetes by Joy Qiao + Lachlan Evenson, Microsoft
  • A Vision of Persistence by Justin Cormack, Docker and Luke Marsden, DotMesh

Transform

The transform track focuses on the impact of change – both for organizations and ourselves as individuals and communities. Filled with inspiration, insights and new perspectives, these stories will leave you energized and equipped to drive innovation.

  • Depend on Docker (DoD) – Imagine a world where the only dependency you need is Docker! by Alex Iankoulski and Arun Subramaniyan, Baker Hughes
  • Shaving my Head Made me a Better Programmer by Alex Qin, GAKKO
  • #bigwhale: An Unexpected Journey into Containerization @ Lockheed Martin – Part 1 by Arjuna Rivera, Lockheed Martin
  • A Strong Belief, Loosely Held: Bringing Empathy to IT by Nirmal Mehta, Booz Allen Hamilton

In addition to the tracks above, DockerCon also includes an Ecosystem Track which showcases the variety of container products and solutions developed by sponsoring partners from the container ecosystem. Please reach out to sponsors@docker.com if you’re interested in speaking opportunities.

We were humbled by the quantity and quality of submissions we received as part of the CFP and wanted to extend a sincere thank you to the Docker community for sharing all of the amazing things they are doing with Docker. Feel free to reach out to meetups@docker.com if you’d like to give a talk at your local Docker user group.

Useful DockerCon Links:

 


The post First Look at the DockerCon San Francisco 2018 Agenda appeared first on Docker Blog.

Using Apache Kafka for Asynchronous Communication in Microservices

Reading Time: 4 minutes

While microservice architecture might not be a silver bullet for all systems, it definitely has its advantages, especially when building a complex system with a lot of different components. Of course, if you’re considering microservices, you have to give serious thought to how the different services will communicate.

In this post, we’ll look at how to set up an Apache Kafka instance, create a user service to publish data to topics, and build a notification service to consume data from those topics. Specifically, we’re going to build a two-factor authentication app where a user will register, receive a mail with a verification code, and complete their registration using the code. The source code can be found here.


Why Apache Kafka?

Kafka is a distributed streaming platform created by LinkedIn in 2011 to handle high-throughput, low-latency transmission and processing of streams of records in real time. Its three major capabilities make it ideal for this use case:

  • Publishing and subscribing to streams of records. In this respect, it is similar to a message queue or enterprise messaging system.
  • Storing streams of records in a fault-tolerant way.
  • Processing streams of records as they occur.

Setting Up Apache Kafka

Before starting this tutorial, the following will be required:

  • Docker for Mac or Docker for Windows
  • Knowledge of Docker Compose
  • Knowledge of Node.js

We will be using the Wurstmeister Kafka Docker image. Note that Kafka uses Zookeeper for coordination between different Kafka nodes.

A docker-compose.yml similar to the one below is used to pull the images for Kafka and Zookeeper. One of the required configuration options for the Kafka service is KAFKA_ZOOKEEPER_CONNECT, which tells Kafka where to find the Zookeeper instance.

version: '2.1'
services:
  zookeeper:
    container_name: zookeeper
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    container_name: kafka
    image: wurstmeister/kafka
    ports:
      - "9092"
    depends_on:
      - "zookeeper"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

Publishing Data to Kafka Topics

To publish data to a Kafka topic, we are going to create a user service that provides two endpoints:

  • /api/register – Stores user details in an In-Memory store node-cache and publishes user data to a Kafka topic, user_account_created.
  • /api/verify – Verifies that the provided code is correct and publishes user data to a Kafka topic, user_account_verified.

We use the node-rdkafka NPM package to create a producer that connects to Kafka from our node app:

  let producerReady;
  producer = new kafka.Producer({
    debug: 'all',
    'client.id': 'user-api',
    'metadata.broker.list': KAFKA_BROKER_LIST,
    'compression.codec': 'gzip',
    'retry.backoff.ms': 200,
    'message.send.max.retries': 10,
    'socket.keepalive.enable': true,
    'queue.buffering.max.messages': 100000,
    'queue.buffering.max.ms': 1000,
    'batch.num.messages': 1000000,
    dr_cb: true
  });
  producer.connect({}, err => {
    if (err) {
      logger.error('connect', err);
    }
  });
  producerReady = new Promise((resolve, reject) => {
    producer.on('ready', () => {
      logger.info('producer ready');
      resolve(producer);
    });
  });

We create a new promise object that resolves to a producer that is ready to start publishing data. This is used in our sendMessage function, which publishes data to a Kafka topic partition:

  KafkaService.prototype.sendMessage = function sendMessage(
    topic,
    payload,
    partition = 0
  ) {
    return producerReady
      .then(producer => {
        const message = Buffer.from(JSON.stringify(payload));
        producer.produce(topic, partition, message);
      })
      .catch(error => logger.error('unable to send message', error));
  };

Consuming Data from Kafka Topics

To consume data from our Kafka topic, we are going to create a notification service that listens for data coming from our topics and sends an email with either a verification code or success message depending on the topic it got the data from.

We create a consumer that connects to Kafka, where KAFKA_BROKER_LIST is a comma-separated list of all Kafka instances.

  process.stdin.resume(); // keep process alive

  require('dotenv').config();

  const Kafka = require('node-rdkafka');

  const logger = require('./logger');

  const sendMail = require('./email');

  const KAFKA_BROKER_LIST = process.env.KAFKA_BROKER_LIST;

  const consumer = new Kafka.KafkaConsumer({
    //'debug': 'all',
    'metadata.broker.list': KAFKA_BROKER_LIST,
    'group.id': 'notification-service',
    'enable.auto.commit': false
  });

The consumer object returned by node-rdkafka is an instance of a readable stream. We wait for the ready event to subscribe to our topics, user_account_created and user_account_verified, and listen for data in those topics:

  const topics = [
    'user_account_created',
    'user_account_verified'
  ];

  //counter to commit offsets every numMessages are received
  let counter = 0;
  let numMessages = 5;

  consumer.on('ready', function(arg) {
    logger.info('consumer ready.' + JSON.stringify(arg));

    consumer.subscribe(topics);
    //start consuming messages
    consumer.consume();
  });

  consumer.on('data', function(metadata) {
    counter++;

    //committing offsets every numMessages
    if (counter % numMessages === 0) {
      logger.info('calling commit');
      consumer.commit(metadata);
    }

    // Output the actual message contents
    const data = JSON.parse(metadata.value.toString());
    logger.info('data value', data);

    if(metadata.topic === 'user_account_created'){
      const to = data.email;
      const subject = 'Verify Account';
      const content = `Hello ${data.first_name}, 
      Please use this code ${data.code} to complete your verification`;
      sendMail(subject, content,to);
    }else if(metadata.topic === 'user_account_verified') {
      const to = data.email;
      const subject = 'Account Verified';
      const content = `Hello ${data.first_name}, 
      You have successfully been verified`;
      sendMail(subject, content,to);
    }

  });

  consumer.on('disconnected', function(arg) {
    logger.info('consumer disconnected. ' + JSON.stringify(arg));
  });

  //logging all errors
  consumer.on('event.error', function(err) {
    logger.error('Error from consumer', err, 'code: ', err.code);
  });

  //starting the consumer
  consumer.connect();

The data event handler is called when a message is published to any of the topics we are listening to. Here we parse the incoming message and check the metadata object to know which topic the received data is for, so we can carry out the appropriate action.

Conclusion

Our two-factor authentication app demonstrates the communication pattern between only two microservices using Apache Kafka (there are other systems, like RabbitMQ and ZeroMQ), but by decoupling communication between those services, we add flexibility for the future. For example, let’s say we add a recommendation service in the future that needs to send out recommendations whenever a new user is signed on; it simply subscribes to the user_account_verified topic, and there would be no need to change the user service.

Resources


The post Using Apache Kafka for Asynchronous Communication in Microservices appeared first on via @codeship.

Digitial Ocean News ( Feed )
Thursday, 22 February 2018
Onward and Upward Together

As we turn the page on 2017, I’m proud to share that DigitalOcean had another tremendous year of rapid growth and strong profitability, a combination which few tech companies have achieved at our scale. We are rapidly approaching $200M in annual recurring revenue and are looking forward to celebrating our 6th anniversary next month. The key to our success is our disruptive offering — a cloud computing platform that is engineered with simplicity at the core — and our vibrant, growing developer community. We see a substantial and growing market need, and believe that DigitalOcean is perfectly positioned to lead this category in the years ahead.

While we have enjoyed great success since I co-founded the company in 2012, I believe we have barely scratched the surface. I’ve been reflecting on our next phase of growth and what it will take to reach our full potential, and it’s become clear to me that now is the right time to identify my successor as CEO of DigitalOcean.

I recognize where my strengths lie and where others will have more experience to give. With all of the exciting opportunities in front of us, including the possibility of an IPO — a long-term goal we have frequently discussed internally — I feel a new seasoned executive will be best to guide the company through the next chapter of our journey. We have engaged a leading search firm to help us find a great leader. One that will be inspirational, able to scale our operations beyond 1,000 people, evolve our go-to-market strategy, and help us reach our audacious vision. Someone who can build a global brand that could potentially help us become a publicly-traded company with the simplest cloud platform for developers to run applications of any size.

Once we’ve identified this person, I’ll be taking on a new role as Chairman of the Board, which will allow me to support our company vision and strategy while working closely with the new CEO.

When Moisey, Mitch, Alec, Jeff, and I started the company in 2012, we left our families and friends in New York to join the Techstars program in Colorado. We slept on bunk beds and worked relentlessly pretty much every day until midnight. Finding product-market fit didn’t happen overnight and it took months of iterating and refining our product offering. We had 400 users when we graduated from the Techstars program, and while we knew we had developed something special, trying to raise venture capital at that time was a real uphill battle. We heard many “no’s” from investors along the way, but believed in our long-term vision.

After returning to a small office in New York City, we launched the first SSD virtual machine service with unprecedented price-to-performance on January 15th, 2013. We instantly went from signing up a couple of new users per day to more than 100. I vividly remember sitting at our kitchen table with the co-founding team, having to manually install SSDs into our servers to keep up with the demand. It’s been a humbling journey to say the least, and I could not have imagined the growth, success, and scale we would achieve only five years later. DigitalOcean has accomplished so many incredible things over the years and I know that our product, people, and operations have never been stronger.

Aug 9, 2012 - Mitch, Alec, Moisey, me and Jeff walking on stage for Techstars demo day

We have raised $123M from some of the world’s leading VCs that share our belief that the developer will lead the continuing technology revolution. Today, we have a team of 400-plus employees around the world with growing offices in New York, Cambridge, Mass., and Bangalore. Our user base has grown with us and last year we crossed one million users from almost every country in the world. Over the last few years, our product went from a single offering, Droplet, to a complete cloud platform. We are extremely proud to be one of the largest and fastest-growing cloud providers in the world.

I’ve always said that putting the business first and doing what is right for DigitalOcean is my highest priority. I’m making this decision knowing that DigitalOcean’s best days are still to come. We have never been in a better position to begin this transition. We have a great leadership team in place, the business has very strong momentum, and we are a clear leader in our industry. I’m confident that our new CEO will be able to rapidly build on this strong foundation.

No matter who our next leader is, one thing that definitely won’t change is our unwavering commitment to delivering the industry’s simplest cloud computing platform, while building one of the world’s largest developer communities. All of the core elements that have contributed to our success — the powerful simplicity of the product, the dedication and talent of the team, and the passionate community of developers that we serve — will remain the same.

I am tremendously excited about DigitalOcean’s future and the milestones ahead. I want to thank everyone who has helped turn our dream and passion into reality. The skills I have learned and friendships I have made while helping to build this company will last me a lifetime, for which I will be forever grateful and I couldn’t be more excited for the journey ahead.

Onward and upward together,
Ben Uretsky

Embracing the Chaos of Chaos Engineering

Reading Time: 5 minutes

Modern applications are increasingly complex, with a dizzying number of moving parts, layers of abstraction, reliance on external systems, and distribution, all resulting in a stack that few truly understand.

Any developer worth hiring now knows the merits of a thorough testing regime, but one of the issues with testing is that you are often testing for predictable outcomes. Despite our ‘logical systems,’ show-stopping issues are typically unexpected: situations that no one foresaw.

These unforeseen eventualities are what chaos engineering attempts to account for. It’s a reasonably new principle, practiced by Netflix for several years and then formalized in 2015, setting out its principles in a time-honored manifesto.

Naturally, there are critics of the practice, and the comments at the bottom of this TechCrunch article summarize some of them. The typical counterarguments are that the principle is a band-aid for applications that were poorly planned and architected in the first place, or that it’s another buzzword-laden excuse to invent shiny new tools that no one knew they needed.

Still, its proponents are a friendly bunch, so in this article, I summarize my findings on the practice and let you decide.


Organized Chaos

In many ways, while the term ‘chaos’ is a good eye-catching phrase, it’s misleading, summoning images of burning servers and hapless engineers running around an office screaming. A better term is experimental engineering, but I agree that is less likely to get tech blog or conference attention.

The core principles of chaos engineering follow similar lines to those you followed in school or university science classes:

  1. Form a hypothesis.
  2. Communicate to your team.
  3. Run experiments.
  4. Analyze the results.
  5. Increase the scope.
  6. Automate experiments.

Early in the lifetime of chaos engineering at Netflix, most engineers thought chaos engineering was about “breaking things in production,” and it is in part. But while breaking things is great fun, it’s not a useful activity unless you learn something from it.

These principles encourage you to introduce real-world events, and events you expect to be able to handle. I wonder whether fully embracing the “chaos” (true randomness and extremity, measuring the worst that could happen) might surface even more insightful results and observations.

Let’s look at each of these steps in more detail.

1 – Form a hypothesis

To begin, you want to make an educated guess about what will happen in which scenarios. The key word here is “educated”; you need to gather data to support the hypothesis that you’ll share with your team.

Decide on your steady state

What is “steady” depends on your application and use case, but decide on a set of metrics that are important to you and what variance in those metrics is acceptable. For example:

  • When completing checkout, the majority of customers should have a successful payment processed.
  • Users should experience latency below a particular rate.
  • A process should complete within a time frame.

When deciding on these metrics, also consider external factors such as SLAs and KPIs for your team or product(s).
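To make this concrete, here is a minimal sketch of a steady-state probe written as a shell script. The endpoint and the 500ms threshold are assumptions for illustration only; substitute the metrics and limits you actually care about.

#!/bin/sh
# steady_state_check.sh -- a hypothetical probe that samples request latency
# and flags a steady-state violation. ENDPOINT and MAX_MS are placeholders.
set -e -u

ENDPOINT="https://example.com/health"
MAX_MS=500

# curl reports total time in seconds; convert it to whole milliseconds.
elapsed=$(curl -s -o /dev/null -w '%{time_total}' "$ENDPOINT")
elapsed_ms=$(awk -v t="$elapsed" 'BEGIN { printf "%d", t * 1000 }')

if [ "$elapsed_ms" -gt "$MAX_MS" ]; then
  echo "steady state violated: ${elapsed_ms}ms > ${MAX_MS}ms"
  exit 1
fi
echo "steady state ok: ${elapsed_ms}ms"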

Introduce real-world events

The sorts of events to test vary depending on your use case, but common to most applications are:

  • Hardware/VM failure
  • State inconsistency
  • Running out of CPU, memory, or storage space
  • Dependency issues
  • Race conditions
  • Traffic spikes
  • Service unavailability

Run in production

“Testing in production” has long been a tongue-in-cheek reference to an untested code base, but as chaos engineering is likely run in collaboration with a properly pre-tested code base, it takes on a different meaning.

The principles we’re working with here encourage you to undertake tests in production or, if you have a genuine reason not to, as close to production as possible. Chaos engineering is designed to identify weakness, so its proponents argue that running in production is fundamentally a good thing.

Some banks are already following these principles, and while engineers behind safety-critical systems should be confident of their setup before embarking on chaos engineering, the principles also recommend you design each experiment to have minimal impact and ensure you can abort at any time.

Metrics

While the most tempting hypothesis is “let’s see what happens” (much like “let’s just break things”), it’s not a constructive one. Try to concoct a hypothesis based on your steady state, for example:

  • If PayPal is unavailable, successful payments will drop by 20 percent.
  • During high traffic, latency will increase by 500ms.
  • If an entire AWS region is unavailable, a process will take 1 second longer to complete.

2 – Communicate to your team

As a technical communicator, this is perhaps the most important step to me. If you have a team of engineers running experiments on production systems, then relevant people (if not everyone) deserve to know. It’s easy to remember engineers, but don’t forget people who deal with the public, too, such as support and community staff who may start receiving questions from customers.

3 – Run your experiments

The way you introduce your experiments varies: some are triggered by code deployments, others by injecting calls you know will fail, or by simple scripts. There are myriad tools available to help simulate these; I’ve provided links to find them below.

Make sure you have alerting and reporting in place to stop an experiment if needed, but also to analyze results later.
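As an illustration only, the sketch below stops one container at random and then checks a health endpoint, aborting (and restoring the victim) if the steady state is violated. The “app=demo” label and the health URL are assumptions rather than part of any real system, and real experiments should start with a much smaller blast radius.

#!/bin/sh
# chaos_experiment.sh -- a minimal, hypothetical fault-injection sketch.
# The "app=demo" label and HEALTH_URL are placeholders for illustration.
set -e -u

HEALTH_URL="https://staging.example.com/health"

# Pick one running container belonging to the application, at random.
victim=$(docker ps --filter "label=app=demo" --format '{{.ID}}' | shuf -n 1)
if [ -z "$victim" ]; then
  echo "no matching containers found; nothing to do"
  exit 1
fi

echo "stopping container $victim"
docker stop "$victim"

# Give the system a moment to react, then check the steady state.
sleep 10
status=$(curl -s -o /dev/null -w '%{http_code}' "$HEALTH_URL")

if [ "$status" != "200" ]; then
  echo "steady state violated (HTTP $status); aborting and restarting victim"
  docker start "$victim"
  exit 1
fi
echo "system absorbed the failure (HTTP $status)"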

4 – Analyze the results

There’s no point in running an experiment if you don’t take time to reflect on what data you gathered and to learn from it. There are many tools you probably already use to help with this stage, but make sure you involve input from any teams whose services were involved in the experiment.

5 – Increase the scope

After defining your ideal metrics and the potential effects on them, it’s time to start testing your hypothesis. Much like other aspects of modern software development, be sure to iterate these events, changing parameters or the events you test for.

Once you’ve tried one experiment, learned from it, and potentially fixed issues it identified, then move on to the next one. This may be introducing a new experiment or increasing the metrics of an existing one to find out where a system really starts to break down.

6 – Automate the experiments

The first time or two you run an experiment, doing so manually is fine: you can monitor the outcome and abort it if necessary. But you should (especially on teams that follow continuous deployment) automate your experiments as quickly as possible. This means the experiment can run whenever new factors are introduced into an application, and it also makes it easier to change the input parameters and scope of your experiments.
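How you automate depends entirely on your tooling, but as one illustrative example, a script like the hypothetical chaos_experiment.sh sketched earlier could be scheduled with cron and paired with an alerting hook. Both paths below are placeholders.

# Example crontab entry: run the experiment every weekday at 10:00 and
# trigger an (assumed) notification script if it reports a violation.
0 10 * * 1-5  /opt/chaos/chaos_experiment.sh || /opt/chaos/notify_failure.sh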

Again, the resources section below lists places to find tools to help with this.

Quietly Chaotic

While engineers and developers are divided on the usefulness of chaos engineering, the most interesting aspects to me are not the technical ones, but rather that it tests and checks ego.

The principles state in many places that if you are truly confident in your application, then you shouldn’t fear what it proposes. They force you to put your money where your mouth is and (albeit in a careful and controlled way) prove your application is as robust as you believe it to be. I can imagine many insightful debriefing sessions after a chaos engineering experiment.

Tools and Resources


“Embracing the Chaos of Chaos Engineering” via @ChrisChinch
Click To Tweet


The post Embracing the Chaos of Chaos Engineering appeared first on the Codeship Blog.

{}
Codeship News ( Feed )
Wednesday, 21 February 2018
Announcing Codeship’s IP Whitelisting Feature

Reading Time: 4 minutesWe’re pleased to announce that IP whitelisting and two-factor authentication are now generally available for all Codeship users — this is especially exciting for our Codeship Enterprise hybrid cloud CI/CD feature set, as they were the last two pieces of that puzzle. Ov

Reading Time: 4 minutes

We’re pleased to announce that IP whitelisting and two-factor authentication are now generally available for all Codeship users — this is especially exciting for our Codeship Enterprise hybrid cloud CI/CD feature set, as they were the last two pieces of that puzzle.

Over the last couple of months, we’ve added support for self-hosted GitHub Enterprise, GitLab Community Edition and Enterprise Edition, as well as Bitbucket Server. However, a missing piece for many has been how to grant Codeship infrastructure access to those internal systems in a limited way.

With our IP whitelisting feature released today, this is now a much more manageable problem — you only need to open your firewall for traffic from eight fixed IP addresses.

A second piece of the puzzle has been to provide two-factor authentication (2FA) for those who sign up by email or cannot authenticate via their self-hosted git server. Read more about 2FA at the bottom of this post.


“We’re very excited to say that IP whitelisting and 2FA are now generally available.” via @codeship
Click To Tweet


What Is IP Whitelisting?

Our IP whitelisting feature is a way for you to tell Codeship to only contact your servers from a small set of IP addresses. Without it, requests from Codeship’s build infrastructure could come from virtually any IP address in the range assigned by AWS to its us-east-1 data centers. Granting access to such a large range of potentially unknown sources presents an unacceptable security concern for many.

On a more technical level, enabling the IP whitelisting feature forces all outgoing traffic from Codeship services and build machines through a proxy or network address translation (NAT). Codeship services (connecting to your SCM, sending build status back, etc.) will go through a proxy, while all requests made from a build machine will go through NAT.

Why Should I Use IP Whitelisting?

There are a couple of scenarios where you’d want to use the IP whitelisting feature, but honestly, most users won’t need it. If you’re not hosting your git server behind a firewall, or trying to access other services behind one, you can most likely skip it.

The scenarios where you will want to enable IP whitelisting are those where your git server is behind a firewall (i.e., has no publicly accessible IP address) or you’re using services that are not reachable via a public IP address. This could be deploying code to production servers in a VPC or your own data center, sending build artifacts to internal systems or file servers, running commands via SSH on servers behind a firewall, or a range of other tasks that require the Codeship build server to communicate with something that doesn’t have a public IP address.

How Do I Enable IP Whitelisting?

Enabling the IP whitelisting feature is very simple.

  1. Go to your organization’s Settings page.
  2. Check the box next to Whitelisting.
  3. If you don’t have access to the Settings page, contact the organization owner (usually the one who set up the account) and ask to have it enabled.

With whitelisting enabled, you’ll see a list of IP addresses; these are the ones you’ll want to allow access to your non-public systems. Exactly how to configure your firewall depends on your firewall and network setup, so it’s best to work with your network administrator on that.

Organization Settings

When allowing access to the eight IP addresses listed in the box, you should also be mindful of what ports you want to give access to. For accessing a self-hosted git server, check the Self-hosted SCM documentation for details of the ports. For accessing other services, it really depends on your service, so unfortunately you’re a bit on your own there.
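As a purely illustrative example, if your firewall happens to be managed with ufw, rules along these lines would admit traffic from the whitelisted addresses to a self-hosted git server on port 443. The addresses below are documentation placeholders, not Codeship’s actual list; copy the real ones from your Settings page and use the ports your server actually needs.

# Placeholder addresses -- replace with the eight IPs shown in your
# organization's Settings page, and the port your git server actually uses.
sudo ufw allow from 192.0.2.10 to any port 443 proto tcp
sudo ufw allow from 192.0.2.11 to any port 443 proto tcp
# ...repeat for each listed address, then reload the firewall:
sudo ufw reload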

You’re welcome to ask for help in our community Slack channel though, and we’ll try our best to help you figure out what you need.

What Are the Risks of Whitelisting?

There are always some risks associated with allowing external traffic through a firewall. The firewall is there to keep internet traffic out, so naturally any “hole” through the firewall carries a risk.

In this particular case, the risk is extremely low. Only Codeship services can use the IP addresses listed, and even though another Codeship customer could theoretically call your firewall from their builds, they would still need to know what to call and how.

That said, the risk is there, so make sure that you secure your internal services adequately and don’t blindly trust any traffic that attempts to access your internal services. This is not specific to Codeship, but generally best practice when allowing outside traffic in through your firewall.

Two Factor Authentication

As mentioned earlier, one last piece of the hybrid cloud puzzle is making sure that users in a hybrid setup have the same security options as those on cloud git servers.

Two-factor authentication (or 2FA) is already a common feature and has been offered by GitHub, GitLab, and Bitbucket for some time. The main premise is that you’ll need your username and password, plus a temporary code that only you can access. This way, even if hackers were to get hold of your username and password, no one but you will be able to access your account.

At Codeship, we’ve decided to go the route of requiring an Authenticator app on your personal device (phone, tablet, etc.), which will generate unique, one-time-use codes that act as the second factor in authenticating you. To learn how to set up and configure 2FA for Codeship, see our 2FA documentation.


“Announcing Codeship’s IP Whitelisting Feature” via @codeship
Click To Tweet


The post Announcing Codeship’s IP Whitelisting Feature appeared first on the Codeship Blog.

{}
Docker News ( Feed )
Tuesday, 20 February 2018
Video Series: Modernizing .NET Apps for Developers

 Docker Enterprise Edition (EE)  is the container platform for modernizing your existing applications, and running them in the cloud or on-premises. You can take monoliths and run them in containers with no code changes, and that gets you portability, security and efficiency. Running in Docker is

MTA for Devs

 Docker Enterprise Edition (EE)  is the container platform for modernizing your existing applications, and running them in the cloud or on-premises. You can take monoliths and run them in containers with no code changes, and that gets you portability, security and efficiency.

Running in Docker is also a great starting point for modernizing the application architecture. You can break down the monolith into smaller, independent components, which makes it easier to deploy updates, manage scale, and introduce new technologies.

This new video series covers app modernization for .NET developers and architects. It walks through the evolution of a monolithic ASP.NET 3.5 app into a distributed application running across multiple containers, using the Docker platform to plug everything together and adding features with great open-source software from the Docker ecosystem.

This is not a full re-architecture to microservices – for large .NET apps that would be a 12 month project. This series uses a feature-driven approach, taking key features out of the monolith to fix performance issues, add new functionality and support fast application updates.

Part 1 introduces the series, talks about what “modernization” means and then gets started – this is a very demo-heavy video series, where you’ll see lots of code. You’ll learn in Part 1 how to run .NET Framework apps in containers on Windows – using Docker to compile and package the app, and how to share your apps on a registry like Docker Hub.
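For a rough idea of what that workflow looks like (the image name here is a placeholder, and this assumes the repository contains a Dockerfile that compiles and packages the app, as shown in the video), the commands on a Windows Docker host are the familiar build, run, and push:

# Build the Windows container image for the app (image name is a placeholder):
docker image build -t my-hub-id/mta-demo:part1 .

# Run it locally, publishing the site on port 80:
docker container run -d -p 80:80 my-hub-id/mta-demo:part1

# Share the image on a registry such as Docker Hub:
docker image push my-hub-id/mta-demo:part1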

The demo application is an ASP.NET WebForms app which uses SQL Server for storage. Part 2 shows you how to run SQL Server databases in Docker containers on Windows, packaging the schema so containers start with the database already deployed. You’ll also see how to optimize Dockerfiles for deploying .NET apps, and how to integrate logging and configuration with the Docker platform – still without changing code.

Part 3 starts modernizing the application architecture, extracting the web application’s save feature into its own container. The web app publishes an event to a message queue when users save data, and a message handler listens for those events and saves the data to SQL Server. The monolith now has an event-driven architecture, with all the components running in Docker containers, and that supports independent scaling and updates for different features.

Part 4 takes advantage of the new event publishing mechanism to add self-service analytics for users, powered by the open-source Elasticsearch stack. The new reporting database and analytics UI are run in Docker containers, and there’s a new message handler which stores data in Elasticsearch. You’ll learn how Docker EE  lets you add features like this with a zero-downtime release to production.

The series ends with part 5, which looks at breaking up the web UI. The homepage for the app is taken out and run in a separate container, which means changes to the UI and UX of the homepage can be released without regression testing the monolith. You’ll learn how to break up UI components and see how Docker enables fast, safe application releases.

Modernizing .NET Apps for Developers shows you how to take an existing .NET application, run it in a container, and use Docker EE  to modernize the architecture. The source code for the app is on GitHub and all you need to follow along is Docker for Windows on Windows 10, or Docker EE on Windows Server 2016.

Every part of the series shows you how to run the app locally on your dev environment, and also shows you the path to production – deploying to a highly-available Docker Enterprise Edition cluster in the cloud. By the end of the series you’ll be ready to start using Docker to modernize your own .NET applications.

Learn More about Modernizing Traditional Apps with Docker EE:


Learn how to modernize @dotnet apps on @Windows w/ the #Docker EE #MTA #dotnet
Click To Tweet


The post Video Series: Modernizing .NET Apps for Developers appeared first on Docker Blog.

Continuous Delivery for Static Sites

Reading Time: 15 minutesThis article was originally published on Pete Hodgson’s personal site. With their kind permission, we’re sharing it here for Codeship readers. In this article, I’ll show that the typical approach for deploying sophisticated static sites is not sufficient and explain h

Reading Time: 15 minutes

This article was originally published on Pete Hodgson’s personal site. With their kind permission, we’re sharing it here for Codeship readers.

In this article, I’ll show that the typical approach for deploying sophisticated static sites is not sufficient and explain how we can apply the principles of continuous delivery to improve the situation. We’ll then walk through a worked example, building a fully functional continuous delivery pipeline for a single page React application.


“Using the principles of continuous delivery to improve deployment of static sites” via @ph1
Click To Tweet


The Rise of the Static Site

The client-side capabilities of modern browsers have made great strides in recent years. We’ve also seen an explosion in PaaS products aimed at replacing functionality which would previously have been delivered via custom backends. It’s become more and more feasible to deliver an app as a static site, where all functionality is implemented via HTML, CSS, client-side JavaScript and third-party APIs, with no custom backend code at all.

This provides a very attractive option for building out an application without the operational complexity of deploying and managing server-side infrastructure. There are fewer moving parts, and operational worries become Someone Else’s Problem when you’re delivering your static site using something like Amazon S3.

Change Management for Static Sites

With these modern capabilities, client-side apps have evolved from a set of simple static HTML pages into sophisticated single-page web apps. However, change management for these static sites has not evolved at the same pace. Deploying a production change is often still done in a very unstructured way, perhaps with a developer running through a set of manual build steps and then pushing some artifacts directly to a production S3 bucket.

These types of ad hoc processes might make sense for deploying a simple change to some static content, but for a full-fledged app they are problematic. Any production change involves manual work from one of a small set of “experts” who are able to perform the correct incantation of steps. Code changes are not always robustly tested and not always consistently applied. There’s usually no way of knowing what’s been deployed into a given environment, or when it was deployed.

The Case for Continuous Delivery

A decade or so ago, this situation was the norm for software deployment generally. What changed in the deployment of backend systems was a move toward continuous delivery practices. A strong focus on automation along with the formalization of a delivery pipeline helped to solve a lot of these issues for backend code. The same ideas can be easily applied to static sites.

We can replace manual processes with automated ones and replace poorly documented deployment steps with a declarative software delivery pipeline. In doing so we achieve a predictable, consistent deployment process which performs the exact same sequence of steps every time. We are able to test a code change in a pre-production environment with confidence that it will behave the same in our production environment.

By including automated testing as part of our delivery pipeline, we ensure that our changes are always tested before being deployed. By having machines rather than humans perform deployment steps, we reduce the time taken to deploy and can increase the rate at which we deliver changes to our users.

Applying CD

In this post, I’ll present an example of how we can do all of the above for a simple static site. We’ll build a simple but functional continuous delivery pipeline for a small single-page app hosted out of Amazon S3. We’ll end up with a fully automated deployment process which is:

  • FAST: Changes show up in a live environment within seconds of being pushed to master.
  • ROBUST: No code change can get into production without first proving its worth by passing our automated tests.
  • REPEATABLE: Every deployment is carried out in the same way, with no room for human error.
  • CONSISTENT: The code we test in staging is always the exact same code that gets deployed into production.
  • AUDITABLE: We have a record of every deployment, courtesy of our CD tooling. We know where every change came from, courtesy of version control. There’s no way for someone to make a sneaky manual change in order to “fix” something in production.

A Deployment Pipeline For nxtbrt.com

We’ll be working on the deployment infrastructure for a real site. We’ll be deploying nxtbrt.com, a side project of mine. It’s a simple transit schedule app for the San Francisco Bay Area’s BART system, implemented as a single-page React application. This app runs entirely client-side, pulling live schedule information directly from BART’s real-time API. There’s no need for a backend service, which allows our app to be deployed as a static website hosted out of an S3 bucket.

You can browse the source code for nxtbrt.com, as well as its CD configuration, on GitHub.

The pipeline

Any code change for this app goes through a fairly typical sequence of steps, which eventually lead to a new deployment of nxtbrt.com in production. First we run automated tests against the code. We then build a deployable version of the app, and then deploy it to a staging environment. After some light manual acceptance testing we then deploy the same build to our production environment.

We’ll be creating a continuous delivery pipeline that models this delivery process as the following sequence of pipeline steps:

  • Test
  • Build
  • Deploy to staging
  • Deploy to production

Our target pipeline

Any code change that lands on our master branch will be moved automatically through each step in this pipeline, although we’ll introduce a manual gate before the production deployment. This will allow us to perform manual testing of the code change in staging before it’s signed off and deployed to prod.

Implementing with Codeship

We’ll be implementing this CD pipeline using Codeship, a hosted continuous delivery platform. Specifically, we’ll be using Codeship Pro, which uses Docker containers to isolate each step of the build pipeline.

This provides a nice repeatable build environment — our steps run in the exact same computational environment each time. It also makes it easy to run a pipeline step locally in almost the exact same environment it would run in within the CD platform.

Codeship Pro follows the pattern of representing the entire pipeline configuration via configuration files that live in the same repo as the app itself. This gives us version control of our build pipeline and allows us to change both the app and the way it’s built within a single, atomic code commit.

Defining our CD pipeline

Step 1 – Test

To get started, we’ll set up a really basic single-step pipeline which runs our app’s automated tests whenever we check in code. We start by adding a simple test.sh script to our codebase:

codeship/steps/test.sh
#!/bin/sh
set -e -u

yarn install
yarn test -- --coverage

Our script starts with a shebang, which indicates that this script should be run using the sh interpreter. We use set -e to ensure that any individual failure within the script will cause the entire script to fail. We use set -u to ensure that we fail fast if our script contains references to undeclared variables. See this awesome blog post for more details on these flags, along with additional tricks for creating safer shell scripts.

With our scripting administrivia out of the way, the next line in our script runs yarn install to install our application’s dependencies. Then we run yarn test to actually execute our automated tests, using a --coverage flag to also capture code coverage information.

This script encapsulates all that’s necessary to run our automated tests. Next we need to configure Codeship to run it as a build step. We configure Codeship’s build steps by placing a codeship-steps.yml file in the root of our code repo:

codeship-steps.yml

- name: test
  service: workshop
  command: sh ./codeship/steps/test.sh

This file defines a build step called test, which Codeship will automatically run whenever it sees a new code commit land. That build step is configured to execute the test.sh script we just created.

Codeship runs every build step within a Docker container, which it refers to as a service. We’ve specified that our test build step should run within the workshop service, but haven’t yet told Codeship what that service looks like. We do that via a separate codeship-services.yml configuration file, also placed in the root of our code repo:

codeship-services.yml

workshop:
  image: node:alpine
  volumes:
    - ./:/usr/src/app
    - ./tmp/artifacts:/artifacts
  working_dir: /usr/src/app

This tells Codeship that our workshop service uses the node:alpine Docker image, and configures some mapping from locations in our code repo to locations within the running container.

With these three files (test.sh, codeship-steps.yml, and codeship-services.yml) added to our code repo, we have successfully set up a simple CI pipeline for our app! Every time we push changes to our code repository those changes will automatically be validated against our test suite.

Next we will set up a full deployment pipeline, where any change which lands on our master branch will automatically be deployed to a staging environment (assuming it passes our test suite of course).

Step 2 – Build

The first thing we’ll do to automate our deployment is create an additional pipeline step which will package up our app into a set of static files optimized for deployment into a production setting.

Our app is a simple single-page React app, and because it was created using create-react-app, we already have most of what we need to create an optimized set of static files. We just need to do some simple orchestration, via another shell script:

codeship/steps/build.sh

#!/bin/sh
set -e -u

yarn install
yarn build

rm -rf /artifacts/build
mv ./build /artifacts/build

echo "$CI_COMMIT_ID/$CI_BUILD_ID" > /artifacts/build/VERSION.txt

This script installs our app dependencies, then uses the yarn build command that create-react-app provides to place an optimized static build of our app within the ./build directory. We then move those files into /artifacts/build. /artifacts is a shared volume which we are using to expose the artifacts created in this pipeline step to subsequent steps (you can read more details on this technique here).

Finally we add a VERSION.txt file to the set of build artifacts which will be deployed. This file is a handy way of easily determining the provenance of a given deployment — we can hit /VERSION.txt in a live deployment of the site to find out the exact code commit and build pipeline which produced the currently deployed app.
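For example, once a deployment has gone out, something as simple as the following confirms exactly which commit and build produced it (this assumes the staging bucket is served at staging.nxtbrt.com, as configured later in this article):

# Check the provenance of whatever is currently deployed to staging:
curl -s http://staging.nxtbrt.com/VERSION.txt
# prints "<commit SHA>/<build id>" for the currently deployed build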

We want to run this script as a new pipeline step. That’s as simple as adding a new section to our codeship-steps.yml configuration file:

codeship-steps.yml
- name: test
  service: workshop
  command: sh ./codeship/steps/test.sh

- name: build
  service: workshop
  command: sh ./codeship/steps/build.sh

With these two changes checked in, our pipeline will now not only run tests but also create a deployable production build of our application. But we’re not yet deploying those files anywhere. Let’s work on that now.

Step 3 – Deploy to staging

Just like with our other pipeline steps, the first thing we’ll do is create a script:

codeship/steps/deploy_staging.sh
#!/bin/sh
set -e -u

TARGET_BUCKET="staging.nxtbrt.com"

cd /artifacts/build

echo "*******************************************************"
echo "* "
echo "* deploying the following files to $TARGET_BUCKET..."
echo "* "
echo "*******************************************************"

ls -lR

aws s3 sync --acl public-read --delete . "s3://$TARGET_BUCKET/"

echo "deployment complete"

The heavy lifting in this script is done via the aws command-line tool, which we use to push the current contents of the /artifacts/build directory up into an S3 bucket called staging.nxtbrt.com which we’ve configured to host a static website.

Now let’s add that script as a pipeline step:

codeship-steps.yml

- name: test
  service: workshop
  command: sh ./codeship/steps/test.sh

- name: build
  service: workshop
  command: sh ./codeship/steps/build.sh

- name: deploy staging
  service: deployer
  command: sh ./codeship/steps/deploy_staging.sh

Note that we’re using a new Codeship service called deployer to run this new pipeline step. Let’s take a look at how that’s configured, in codeship-services.yml:

codeship-services.yml
workshop:
  image: node:alpine
  volumes:
    - ./:/usr/src/app
    - ./tmp/artifacts:/artifacts
  working_dir: /usr/src/app

deployer:
  image: codeship/aws-deployment
  volumes:
    - ./codeship/steps:/codeship/steps
    - ./tmp/artifacts:/artifacts
  encrypted_env_file: codeship/deployer.env.encrypted

There are a few things going on with this new service. It’s configured to use a Docker image called codeship/aws-deployment. As the name suggests, this is a Docker image provided by the engineers at Codeship which comes pre-installed with AWS automation tooling, such as the aws CLI tool which our script requires.

If we’d used the workshop service, we wouldn’t have had that command available for use by our deploy script. The deployer service is also configured to mount the same artifacts directory that the workshop service uses. This allows the deployable artifacts created during the previous build step in the pipeline to be available during the subsequent deploy step.

Last, we’re using encrypted environment variables to pass AWS credentials to the aws tool.

As an aside, for the sake of security, I strongly suggest using credentials for a specifically created IAM user that only has permissions for the S3 bucket upload needed for this deployment task.
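As a sketch of what that might look like (the IAM user name is hypothetical, and the exact set of actions depends on how you call aws s3 sync), an inline policy restricted to the staging bucket could be attached like this:

# A minimal, illustrative policy for a deploy-only IAM user. The user name
# is hypothetical; the bucket matches the staging target used above.
cat > deploy-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::staging.nxtbrt.com"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::staging.nxtbrt.com/*"
    }
  ]
}
EOF

aws iam put-user-policy \
  --user-name nxtbrt-deployer \
  --policy-name s3-deploy-only \
  --policy-document file://deploy-policy.json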

With these new configuration changes, our CD pipeline is now set up to automatically test our code and then deploy it to our staging environment. There’s one little problem though.

By default, Codeship will run the pipeline against any commit which is pushed up to our code repository. But we don’t want any old branch with a pushed change to be deployed to staging; we only want staging to contain the latest from our master branch, and no other branches.

This is easy to fix. Codeship allows us to limit a step to only run for commits on a specific branch, like so:

codeship-steps.yml

- name: test
  service: workshop
  command: sh ./codeship/steps/test.sh

- name: build
  service: workshop
  command: sh ./codeship/steps/build.sh

- name: deploy staging
  tag: master
  service: deployer
  command: sh ./codeship/steps/deploy_staging.sh

Note the new tag parameter in the deploy staging step. Now our test and build steps will run for commits pushed on any branch, but we will only deploy to staging if a commit is on our master branch.

Step 4 – Deploy to production

Let’s get our code deployed into production. We’ve already configured a separate S3 bucket as the static host for our production site. We could just copy-paste our deploy_staging.sh script to a new deploy_prod.sh script and modify the bucket name, but it’s cleaner to instead move to a general purpose deploy.sh script, making the bucket name configurable:

codeship/steps/deploy.sh
#!/bin/sh
set -e -u

if [ -z ${1+x} ]; then
  echo "a target bucket must be specified (e.g. \`deploy.sh staging.foo.com\`)"
  exit 1
fi

TARGET_BUCKET="$1"

cd /artifacts/build

echo "*******************************************************"
echo "* "
echo "* deploying the following files to $TARGET_BUCKET..."
echo "* "
echo "*******************************************************"

ls -lR

aws s3 sync --acl public-read --delete . "s3://$TARGET_BUCKET/"

echo "deployment complete"

We now simply need to modify our step configuration to use this new generalized script for both staging and production deployment:

codeship-steps.yml

- name: test
  service: workshop
  command: sh ./codeship/steps/test.sh

- name: build
  service: workshop
  command: sh ./codeship/steps/build.sh

- name: deploy staging
  tag: master
  service: deployer
  command: sh ./codeship/steps/deploy.sh staging.nxtbrt.com

- name: deploy prod
  tag: 'MARKER/prod'
  service: deployer
  command: sh ./codeship/steps/deploy.sh www.nxtbrt.com

Simple enough.

You may have noticed the strange MARKER/prod branch filter we’ve specified for the deploy prod step; I’ll discuss that next.

Continuous Deployment Versus Manual Acceptance Testing

For this app, we aren’t confident enough to implement full continuous deployment, where every code change that passes automated tests is automatically pushed all the way through to production.

Instead we want to do some manual acceptance testing of a change (or batch of changes) in our staging environment, and then manually push them through to prod once they’ve passed muster. However we still want all of our change control to be managed via our CD pipeline — we don’t want deployments to prod to be run from some separate system.

Implementing this type of manual UAT (user acceptance testing) from within a CD pipeline is usually done via a manual gate sitting in front of our production deployment. After manual testing is done, an operator would push a button in the CD system to mark the change as approved, triggering the pending production deployment step.

Codeship doesn’t have that capability, so we are using a somewhat hacky workaround in the form of a marker branch. Once a set of changes has been manually approved, an operator can trigger a production deployment by simply fast-forwarding the MARKER/prod branch to point to the commit that is being approved. Our CD system will notice that the branch has moved and rerun the pipeline, including a production deploy at the end. Definitely a bit of a hack, but a functional one.
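In practice, promoting a verified change is just a couple of git commands run from any clone of the repo. The example below assumes the verified commit is the current tip of master; any other commit SHA would work as the source as well.

# Fast-forward the marker branch to the commit verified in staging,
# which triggers the pipeline run that deploys to production.
git fetch origin
git push origin origin/master:refs/heads/MARKER/prod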

Victory!

With this final set of file changes, we are done. We have created a fully functional continuous delivery pipeline for our little static site. Any code change that lands on our master branch is automatically tested and deployed into a staging environment within a few seconds. After someone has manually verified a change, it is a one-line command to promote the exact same code into production.

Our deployment process is now:

  • FAST: Changes show up in staging within a few seconds of being pushed to master.
  • ROBUST: Every change that is heading to production must first prove its worth by passing our automated tests.
  • REPEATABLE: Every deployment is done the same way, with no room for human error.
  • CONSISTENT: The code we test in staging is always the exact same code that gets deployed into production.
  • AUDITABLE: We have a record of every deployment, courtesy of our CD tooling. We know where every change came from, courtesy of version control. There’s no way for someone to make a sneaky manual change in order to “fix” something in production.

Pro Tips for Maintainable CD Infrastructure

This worked example has included some tricks and techniques which I’ve picked up while building CI/CD pipelines over the years. I’ll briefly highlight a few here.

Separate build from deploy

It’s a good idea to have a distinct step in your pipeline which is solely responsible for creating a deployable artifact, along with separate steps which are responsible for taking that artifact and deploying it into a specific environment.

In our simple example, our build step created the artifact in the form of an /artifacts/build directory full of production-optimized static files.

Having a distinct build step forces you to create a well-defined deployable artifact that is not specific to any one environment. With that artifact defined, you can then create deployment steps that all work off of the same deployable artifact. This provides confidence that the artifact you’re deploying to production is the exact code that has been validated and tested earlier in the pipeline — one of the big wins from CD.

Version-stamp your artifacts

A related tip is to always include something in every deployable artifact which lets you trace its provenance. I typically do this via a VERSION file which is baked into the deployable artifact and contains metadata such as a build number, a commit SHA, perhaps a timestamp and a repo name. This allows you to always answer the question “where exactly did the bits currently running in this environment come from?”

Prefer scripts to pipeline configuration

I always strive to keep my CD tooling configuration as minimal as possible, particularly when it comes to the actual work being done by each step in the pipeline. The ideal is what we have in this worked example — the CD configuration just contains a reference to a script, and all the real work is encapsulated behind that script.

The main motivation for this is to keep your build infrastructure testable and debuggable. I want to minimize the number of situations where the only way I can test a build change or debug a build breakage is by running something within my CD infrastructure. That’s tedious, forces a really slow feedback loop, and is prone to error. It’s much better to be able to run the relevant script on a local workstation (or in a Docker container) in order to reproduce an issue or test out a local change before committing.
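For instance, because the test step is just a script run inside the workshop service’s image, it can be reproduced locally in almost the same environment with a single docker run (the paths here match the codeship-services.yml shown earlier):

# Run the "test" pipeline step locally, using the same image and working
# directory as the workshop service defined in codeship-services.yml.
docker run --rm -it \
  -v "$(pwd)":/usr/src/app \
  -w /usr/src/app \
  node:alpine sh ./codeship/steps/test.sh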

Treat your infra code with respect

It’s tempting to succumb to the fallacy that your build pipeline isn’t “real” software, it’s “just” scripts and configuration. In fact, this is code that will be maintained and extended over time, just like “regular” production software. You can apply the same principles which keep “regular” code maintainable to your infrastructure code.

Strive for small, well-documented modules. Organize your scripts into directories and give them good names. Extract shared functionality into helper functions. Treat copy-pasted code as a smell.

Least privilege

By their nature, CD pipelines involve low-level access to production infrastructure. Along with this power comes the responsibility to minimize the potential for a badly behaved script to do damage or expose sensitive data.

We can partially mitigate this risk by applying the principle of least privilege, along with fine-grained access control. The authentication credentials provided to a deployment process should provide the minimum set of permissions necessary for that process. A deployment script that updates content in an S3 bucket shouldn’t have permission to modify network settings, read databases, or create EC2 instances. A deployment script which is updating a staging environment shouldn’t have any access to production systems.

Going Further

We’ve created a basic CD pipeline for nxtbrt.com in this post. There are plenty more details I could go into — how to configure S3 buckets correctly, how to configure CloudFront, DNS, and SSL, how to create an appropriately restrictive IAM role for your deployment scripts, etc, etc. However there are also a lot of other articles out there on the internet covering the details. My focus was on how to drive this type of deployment from a CD pipeline.


“Continuous Delivery for Static Sites” via @ph1
Click To Tweet


The post Continuous Delivery for Static Sites appeared first on the Codeship Blog.

Meet the DigitalOcean Brand Design Team

As a company, we’ve always cared about contributing to developer culture in an authentic way, and one of the ways we do that is by adding moments of visual delight to everything we do, whether it's a Community tutorial, an interaction in the control panel, or a T-shirt at

Meet the DigitalOcean Brand Design Team

As a company, we’ve always cared about contributing to developer culture in an authentic way, and one of the ways we do that is by adding moments of visual delight to everything we do, whether it's a Community tutorial, an interaction in the control panel, or a T-shirt at a conference. That is why, from the very beginning, DigitalOcean put an emphasis on building out a Brand Design team comprised of not just proficient graphic designers, but brilliant illustrators as well.

The Brand Designers at DigitalOcean are challenged every single day to transform extremely technical and esoteric content into approachable and friendly touch points. Lead Visual Designer Masami Kubo says, “We believe these technologies should be accessible to everyone, and a part of that is acknowledging and celebrating the diverse and quirky personality behind the humans that build these amazing things. Visuals and branding throughout the cloud computing industry are often disregarded or unconsidered, so it’s a unique opportunity for us as designers to bring that culture to life.”

We interviewed DO’s Brand (Visual) Designers Kasia Bojanowska, Masami Kubo, Pat Raubo, and Alex Mostov to learn more about their design process, how they illustrate technical concepts, and where they turn to for inspiration.

How do you approach technical topics as illustrators?

Masami: We’ve been illustrating technical topics for years, so the challenge now is how to keep it fresh and relevant. However, if we push the imagery too conceptual or meta, we run the risk of none of it making any sense to our audience. My approach now is to identify the primary action or message behind complex concepts, and focus on making that one thing really clear. I like to start minimal, then add elements sparingly to not distract from the primary message.

Alex: I came to the DigitalOcean team without much technical knowledge. In some ways I think this has actually been an advantage in creating conceptual illustrations. I create images that help me understand the concepts. I think and hope that inherently makes them more intuitive to others, too.

Where do you draw inspiration from for your designs?

Kasia: When starting a new project I definitely try to spend a good chunk of time looking for inspirations. Google image search, Pinterest, Dribbble, Behance are all wonderful resources for that. We have a few shared pinterest boards with stuff we like. I also get really inspired when I see great work being made by others on our team.

Pat: One of the benefits of working with a team of such enormously talented designers is that I draw inspiration from them and their work all the time. Masami and Kasia both do amazing work, and I’ve learned a great deal from both of them, as well as from Alex. I try to seek out inspiration from a number of things. Some have a pretty clear association with the kind of work we do at DO, like design and illustration done specifically for tech, but I also draw from editorial illustration, film, comics, and book covers, among other sources.

Illustrations by Kasia Bojanowska, Patricia Raubo, & Alex Mostov

How do you come up with new ideas for similar technical topics?

Masami: I think it actually helps for imagery with similar technical topics to have a common thread of imagery, so as to build a visual association. We have strict style guides for most of our platforms and campaigns, but some of these style guides allow for permutation in aesthetics to avoid looking too repetitive over time.

Pat: I like to first do some research to understand the basic concept of what I’m going to illustrate, and then add to my notes with simple schematics and/or sketches to see if there’s anything I can pull from those for the final visuals.

Alex: I will often try to think about representing a topic in a different kind of space or world. For examples if I create an image for a topic in a 2D space, the next time I will try to figure out how I could represent that same concept in a 3D space or from a different perspective.

What is one of your favorite projects you’ve worked on at DO thus far?

Pat: I worked on a series of illustrations for our Employee Handbook, which meant drawing a team of cute sea creatures in an office setting. I really enjoyed working on that project, and it was great to see people respond to the illustrations in such a positive way.

Masami: My favorite projects are often also the most challenging ones. And usually the more ambitious they are, the more compromises on vision I’ve had to make. But some of the most exciting stuff I’ve worked on here is the art direction and design of our office spaces, in collaboration with architects, fabricators, and our People team. I was expected to transform the space into a branded and navigable experience. It’s still a work in progress, but I love the challenge of designing for physical spaces.

Murals by Alex Mostov & Masami Kubo

What was one of the most challenging projects you’ve worked on at DO?

Kasia: Redesigning the DO logo was definitely the biggest challenge for me. The process was pretty high pressure but I was allowed enough time to really let myself explore and dig in deep. In this case having a supportive team to brainstorm and keep motivation high through all of the iterations was essential.

Masami: We did a design refresh of the marketing site a year ago, and it went through a lot of changes and push backs. The task was simple—refresh the designs and clean up the performance—but it involved approval from every department and stakeholder in the company. I was doing everything from art direction, web design layouts, and spot illustration. I learned a ton about project management and designing within web accessibility standards, thanks to Una Kravets. I felt creatively drained after the project was finished, and didn’t think it would be possible to revisit it with new ideas. Surprisingly, I am now leading a complete design overhaul for the marketing site, and I feel more equipped than ever to tackle all the challenges and make something more beautiful and smart than last year.


Sometimes you create visual assets that are targeted at a very specific audience, and you have to balance things like humor with cultural sensitivities. How does localization factor into your designs?

Masami: Part of our job is being aware and sensitive to any imagery that might have harmful or negative impacts to our community. We are fortunate to have a diverse employee base that cares about these things, so the more opinions we can gather, the better. We try to treat branding the same in any other countries as we do here. However, we do want to highlight our growing global coverage, so one way we approach this is to celebrate the unique design culture local to these countries. For example, the Frankfurt datacenter launch campaign featured designs inspired by Bauhaus Constructivist design. For the Bangalore datacenter launch, we created stylized renditions of local architecture. Being a developer from another country doesn’t necessarily mean you have vastly different tastes or interests, so it’s important for companies and designers to address these things authentically.

How do you create different kinds of content while maintaining brand consistency?

Kasia: For illustrations, we keep a consistent color palette. We have a list of prompts to help us throughout the process, but we do not have a very strict style guide when it comes to editorial illustration. We tend to have more fun and variation with all of our community and conference designs. However, we are definitely more strict about stylistic consistency when it comes to our website design.

Like much of DO, the Brand Design team is distributed across the world. What systems or processes do you have in place that allow for open communication and collaboration?

Pat: One of our team members, Kasia, is based in Poland, so we have a time difference of six hours between us. We started to make a habit of doing our daily stand ups and critiques early in the day to make sure we were all able to benefit from them. We have a private Slack channel which we use to stay in contact, to brainstorm, and to share ideas on projects.

Where do you see the DO brand going?

Masami: When I first joined DigitalOcean in 2014, the company was breaking into the cloud computing world by differentiating itself as friendly and accessible. At the time that meant being extra illustrative and bubbly with our designs. We wanted to let the developer community know that their content and culture deserves this kind of attention. That attitude and core value is still what drives every decision, but our aesthetics have matured and evolved just as our products and features have grown. The brand now has a diverse voice ranging from playful and young to mature and sophisticated, all under the same goal of enabling the developer community. I think this range directly reflects the diversity of users we want to speak to.

Alex: I really like DO’s brand evolution because I feel like the changes are made based on need and effectiveness rather than just trying to make a splash. I think the brand will continue to change in this deliberate way as the community and product develop. I also hope it will always maintain the sense of playfulness that I think makes DO special.

What is your best advice for designers just starting out?

Pat: I would encourage aspiring creative folks of any stripe to always stay curious (as cliched as it may sound, it’s advice I’ve followed that I feel has served me well) and seek out inspiration from a range of sources (museums, books, online communities, whatever floats your boat!), because you never know what’s going to be the seed that becomes the root of a fantastic idea. Feeding your mind will give you perspective and enrich your work.

That said, don’t wait around for inspiration to strike, either! It’s best not to be too precious about your work. Just sit down, make the thing, and make it to suit your standards. Then, when you think it’s done, work on it just a little bit more. Keep learning, and push yourself a bit more with each new project.


Do you enjoy our designers' creations? Download desktop wallpapers from some of their favorite illustrations.

{}
Travis News ( Feed )
Security advisory: Log token allowing access to other endpoints

In November last year the token issued for displaying job logs on Travis CI was incorrectly valid for two other endpoints. This issue was discovered on Wednesday 14 February, 2018 and fixed the next day. Further investigation into historical data found no successful attempts at fetching non-log en

In November last year the token issued for displaying job logs on Travis CI was incorrectly valid for two other endpoints. This issue was discovered on Wednesday 14 February, 2018 and fixed the next day. Further investigation into historical data found no successful attempts at fetching non-log endpoints with the log token. This indicates the issue was not exploited in any way.

When displaying a job’s log on Travis CI, we also show a button to retrieve a raw version of the log. If a repository is private, the link to the raw log contains a log token for convenience so that, for example, the log can be opened in a new browser tab. The token is valid for 24 hours and should only be valid for that specific log.

In November last year we were updating the V3 version of our API to include the log token in the /job/:id/log endpoint. Unfortunately, the token issued for the log was also valid for two other endpoints: /job/:id and /repository/:id/jobs. This meant an attacker could get information about a job, a job’s config, or the job’s commit info, which in some cases may be sensitive.

The problem was caused by using our permissions system incorrectly. Instead of checking log visibility for a user, we reused a function that checked job visibility. On top of that, the log token isn’t scoped to specific endpoints, which allowed it to be used on any endpoint that only checked for job visibility. We’ve fixed this by using a fine-grained log visibility check and by adding more tests covering this area of the code.

As mentioned above, the issue was discovered on 14 February and resolved the following day. Going forward, we plan to revisit our code so that tokens can be scoped to work only on specific endpoints, which would make it much harder to introduce such issues in the future. We will also review the permissions system to confirm that there are no other places where permissions checks are too generic.

If you’re concerned about the issue, don’t hesitate to contact us at support@travis-ci.com.

{}
Docker News ( Feed )
Thursday, 15 February 2018
Docker Compose and Kubernetes with Docker for Desktop

If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows Desktop), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been u

If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows Desktop), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been using Docker on their Macbook or Windows laptop because they now have a fully compliant Kubernetes cluster at their fingertips without installing any other tools.

Developers using Docker to build containerized applications often build Docker Compose files to deploy them. With the integration of Kubernetes into the Docker product line, some developers may want to leverage their existing Compose files but deploy these applications in Kubernetes. There is, of course, Kompose, but that’s a translation layer which causes you to have two separate artifacts to manage. Is there a way to keep a native Docker-based workflow?

With Docker on the desktop (as well as Docker Enterprise Edition) you can use Docker Compose to deploy an application directly onto a Kubernetes cluster.

Here’s how it works:

Let’s assume I have a simple Docker Compose file like the one below that describes a three-tier app: a web front end, a worker process (words), and a database.

Notice that our web front end is set to route traffic from port 80 on the host to port 80 on the service (and subsequently the underlying containers). Also, our words service is going to launch with 5 replicas.

 

version: "3.3"

services:
  web:
    build: web
    image: dockerdemos/lab-web
    volumes:
     - "./web/static:/static"
    ports:
     - "80:80"

  words:
    build: words
    image: dockerdemos/lab-words
    deploy:
      replicas: 5
      endpoint_mode: dnsrr
      resources:
        limits:
          memory: 16M
        reservations:
          memory: 16M

  db:
    build: db
    image: dockerdemos/lab-db

I’m using Docker for Mac, and Kubernetes is set as my default orchestrator. To deploy this application I simply use docker stack deploy providing the name of our compose file (words.yaml) and the name of the stack (words). What’s really cool is that this would be the exact same command you would use with Docker Swarm:

$ docker stack deploy --compose-file words.yaml words
Stack words was created
Waiting for the stack to be stable and running...
 - Service db has one container running
 - Service words has one container running
 - Service web has one container running
Stack words is stable and running

 

Under the covers the compose file has created a set of deployments, pods, and services which can be viewed using kubectl.

 
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
db        1         1         1            1           2m
web       1         1         1            1           2m
words     5         5         5            5           2m

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
db-5489494997-2krr2      1/1       Running   0          2m
web-dd5755876-dhnkh      1/1       Running   0          2m
words-86645d96b7-8whpw   1/1       Running   0          2m
words-86645d96b7-dqwxp   1/1       Running   0          2m
words-86645d96b7-nxgbb   1/1       Running   0          2m
words-86645d96b7-p5qxh   1/1       Running   0          2m
words-86645d96b7-vs8x5   1/1       Running   0          2m

$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
db              ClusterIP      None            <none>        55555/TCP      2m
web             ClusterIP      None            <none>        55555/TCP      2m
web-published   LoadBalancer   10.104.198.84   <pending>     80:32315/TCP   2m
words           ClusterIP      None            <none>        55555/TCP      2m

If you look at the list of services you might notice something that seems a bit odd at first glance. There are services for both web and web-published. The web service allows for intra-application communication, whereas the web-published service (which is a load balancer backed by vpnkit in Docker for Mac) exposes our web front end out to the rest of the world.

So if we visit http://localhost:80 we can see the application running. You can actually see the whole process in this video that Elton recorded.

Now, if you want to remove the application, you might think you would remove the deployments using kubectl (I know I did). But what you actually do is use docker stack rm, which removes all the components created when we brought the stack up.

$ docker stack rm words
Removing stack: words

$ kubectl get deployment
No resources found

And, to me, the cool thing is that this same process can be used with Docker EE – I simply take my Compose file and deploy it directly in the UI of Docker Enterprise Edition (EE) – but that’s another post.

Want to try it for yourself? Grab Docker for Mac or Docker for Windows, but be sure to check out the documentation (Mac and Windows) for more info.

Learn more:


Use Docker Compose to deploy a multiservice app on #Kubernetes w/ #Docker for Mac by @mikegcoleman
Click To Tweet


The post Docker Compose and Kubernetes with Docker for Desktop appeared first on Docker Blog.

4 Tips for Successful Remote 1-1s

Reading Time: 7 minutesRemote work is commonplace, especially among development teams and startups. The effects of skipping the commute and controlling your workspace are usually reflected in a productivity boost and a better work-life balance, but these advantages do come with some drawbacks. Tea

Reading Time: 7 minutes

Remote work is commonplace, especially among development teams and startups. The effects of skipping the commute and controlling your workspace are usually reflected in a productivity boost and a better work-life balance, but these advantages do come with some drawbacks.

Team dynamics are complicated enough even when everyone can grab lunch together. It’s often the “relationship” side of your team that will need some different techniques in a remote setting. If you’re a manager, building strong relationships with your direct reports is paramount for both their success and yours.

At Codeship, I manage a fully remote team, and our employees are scattered across Europe and North America. I found that much of the usual advice for 1-1s — hold them biweekly or monthly, make it a walk-and-talk, keep it tech-free — just doesn’t work for remote teams. Our set of constraints necessarily leads to some interesting solutions.

So don’t let the limitations of wifi bandwidth, your video camera, and your laptop screen get you down. Here are four tips for leading successful 1-1s with your remote teams.


“Much of the usual advice for 1-1s just doesn’t work for remote teams.” via @rhein_wein
Click To Tweet


1 – Video Is a Must

There’s no substitute for sitting down next to someone over a coffee, but a video call is a pretty good approximation! Facial expressions and nonverbal cues give you the chance to respond to or correct something before it gets out of control. It’s easy enough to hide emotions over a voice call (we’ve all seen Ferris Bueller’s Day Off, right?), but video gives a deeper window into what’s really going on with someone.

Video also helps humanize your team. When teams interact with each other mainly over text, it’s sometimes easy to forget there’s a human on the other end of that screen. Simply seeing the faces of our team goes a long way toward shared understanding, cooperation, and empathy.

It also avoids miscommunications when talking about more contentious subjects. If someone tells me they’re disappointed in the outcome of a project and that we need to correct course, it’s much easier to be at ease if I can see their calm expression on video. It’s sometimes hard to assume best intentions without a face to back it up.


“Video for remote 1-1s minimizes miscommunication on contentious subjects” via @rhein_wein
Click To Tweet


While lots of remote employees love that they don’t have to commute, it doesn’t mean that they also don’t miss the social aspects of working in an office. Some remote employees still crave those in-person interactions and feel disconnected from their team if they don’t share some non-transactional human experiences on a regular basis. While not a perfect replacement for the watercooler, a video call can help strengthen relationships and build camaraderie by being able to see each others’ faces.

I’ve worked on teams where we had a “no video” policy, which sometimes helped us feel more at ease with spontaneous calls or pairing sessions (after all, it might be easy to throw on an office hoodie for some of my coworkers, but my grooming routine is a bit more complex, so spontaneous video calls aren’t really my thing!). Our engineering team at Codeship uses the opposite policy, where video is an expectation. It might mean that we need a bit more lead time for calls, but the quality of the communication is significantly better with video on.

I don’t mean to suggest that video can completely replace the need for in-person meetings. Relationships grow stronger when people have a chance to interact in person, so team meetings and retreats are crucial. But once you have a solid foundation for a relationship, whether it’s with your direct report, a peer, or your own boss, using video will help to keep it strong.

2 – Embrace Your Laptop Screen

So many best practices for 1-1s encourage you to close your laptops and be “in the moment.” For remote teams, this simply isn’t an option.

By their nature, it’s not possible for your remote video 1-1 to be a tech-free zone. But choosing a great video conferencing tool can make a big difference when it comes to eliminating distractions.

My team recently made the switch from Google Meet to Zoom, and it’s made a measurable positive impact on both the technical quality of the call and what we’re able to discuss. Each attendee can decide which layout they prefer for video display (a Hollywood Squares-esque grid or just having the speaker on the jumbotron). The big change-maker is that if you share your screen, you stay on video. This makes discussions around documents more fruitful, since all participants are still on camera, and it eliminates that awkward feeling where one person feels oddly monitored while the other’s video is hidden.

I don’t love Slack calls for 1-1s, mostly because Slack is the center of the distraction universe. Zoom has built-in chat functionality, so I make it a habit to close Slack during 1-1s and rely on that instead.

And if your report has eyes bouncing all over the screen (or you can see they’re browsing Twitter by the screen reflection in their glasses, which is my favorite managerial secret), it’s a pretty sturdy indicator that they perceive the 1-1 as a waste of time. It’s on you to call attention to it and figure out how to fix it.


“It’s up to the manager to ensure that remote 1-1s are useful.” via @rhein_wein
Click To Tweet


3 – Keep Shared Notes

The chances for miscommunication over video chat are higher because you’re quite literally only getting part of the picture. To avoid “What, you never told me that!” conversations in the future, make it a point to keep shared notes during your 1-1s.

Google Docs is a good format for shared notes. I usually set them up by date and then use the “outline” feature to see an overview of all the meetings. For the notes themselves, just a list is fine, but make sure to mark which person is suggesting the topic. Not only does having an agenda help you keep the discussion balanced, but it can also show you trends developing over time.

Has it been a couple weeks since your direct report came to the meeting with anything to talk about? It’s time to start a conversation about it, hopefully made easier by the fact the evidence is right there in front of you.


“Shared notes let you see who has said what – and who hasn’t said anything for a while.”
Click To Tweet


Since you’ll both need to have your laptops open for a video call (and you’re definitely doing your 1-1s over video, right?), weaving in notetaking with conversation can be pretty natural, but only if you’re using a tool that won’t prevent you from seeing the other person while you’re typing or reading the doc.

Shared notes are especially important when delivering critical feedback, setting goals, or having any other exchange where an action item is communicated. They also give you instant peer review: if your report writes something that isn’t what you thought was agreed on, it’s easy to spot and correct it before a week goes by.

4 – Engage More Often Than You Think You Need To

Especially in a remote team, it’s easy for all kinds of things to be left unsaid because the “communication tax” is higher. Blocking off time for your reports on a weekly basis keeps your office door open.

This applies to positive and unpleasant things alike. Having more frequent 1-1s over voice and video reduces the overhead for quality communication and gives you insight into how your reports are doing. What might be uncomfortable or awkward to bring up over Slack is easily detectable via facial expressions, tone of voice, and body language.

It’s hard to hide emotions when you’re face to face — even if that face is on a computer screen and 2,000 miles away.

When everyone’s not co-located in the same office, it’s often easy to miss out on the non-work aspects of someone’s life that impact their workday in a significant way. There’s no way to tell that your colleague is stepping away every hour to hear an update about an ailing grandparent, or that they’re really not feeling well and trying to stay at work for the sake of the team.


“Frequent 1-1s help remote managers understand when non-work life impacts someone’s workday.”
Click To Tweet


Part of your job as a remote manager is to create a relationship where your direct report would come to you and share those details, but that won’t happen overnight. Having frequent 1-1s makes it easier to strengthen that relationship and to pick up on those nonverbal cues that lead to deeper conversations.

And this “face-to-face” communication doesn’t need to stick to the confines of a 1-1. At Codeship, we use video for every call, whether it’s an all-hands meeting, sprint planning, or a project meeting. Chat conversations can turn into spontaneous 1-1s as well, and that “presence” with your team helps reinforce your availability and willingness to jump in and assist wherever needed.

If you’re having a 1-1 with someone who isn’t operating within your normal business hours, it’s best to balance the inconvenience of working a bit outside of the normal workday. Working in the EU, I’m usually game for a meeting that goes until 7:30 pm, because I know that my US counterparts will also jump on a call at 7:30 am if they need to.

Curious for more tips on remote work? Zapier has a rich guide for managing remote teams available on their site. Of course, remember that these tips might need some adaptation to work with your team. Teams are immutable human infrastructures, and each one is different!


“4 Tips for Successful Remote 1-1s” via @rhein_wein
Click To Tweet


The post 4 Tips for Successful Remote 1-1s appeared first on via @codeship.

{}
Codeship News ( Feed )
Wednesday, 14 February 2018
Understanding Rust Loops

Reading Time: 8 minutesWhen you come from another language to learn Rust, some things may come across the same, but many things will be very different. Writing Rust loops can have the appearance of familiarity, but behind the scenes, Rust is translating those loops to its own syntax. If you learn

Reading Time: 8 minutes

When you come from another language to learn Rust, some things may come across the same, but many things will be very different. Writing Rust loops can have the appearance of familiarity, but behind the scenes, Rust is translating those loops to its own syntax. If you learn that style for yourself, you will find Rust loops to be far more capable and useful in your day-to-day programming.

Rust uses a form of pattern matching to account for every possible result of each step of your loop. This means you don’t need to write as many tests; Rust gives you several guarantees about the types and boundaries of the values being iterated over. That lets your tests focus on more relevant things and makes them more sensible.

Let’s have a look at how Rust works with loops.


“Rust loops can look familiar, but check out what goes on behind the scenes.” via @6ftdan
Click To Tweet


Revealing a For Loop on a Range

A for loop may at first look much as it does in many other languages. You tell it to use a variable for each step of a numbered range and simply use that value within the loop.

for value in 1..10 {
  println!("The value is {}", value);
}

This will print out nine lines, each starting with “The value is” followed by a number from 1 to 9. When writing code like that, I used to simply accept that the language was written to work this way. But if you look at how many other loops are written in Rust, you’ll notice they seem very different. It’s as if they’re designed to operate in a different way.

So let’s dig into how this for loop is implemented and see why that isn’t so.

The first piece to look at is the range syntax used: 1..10. This creates an instance of the struct Range, with 1 given to the struct’s start field and 10 to its end field.

pub struct Range<Idx> {
  pub start: Idx,
  pub end: Idx,
}

The for loop then calls IntoIterator::into_iter on that range, producing an iterator that takes ownership of each item it brings into the loop. Iterators in Rust have one requirement: define a next method when you implement the trait Iterator for your type. For Range, the implementation of Iterator looks roughly like this:

impl<A> Iterator for ops::Range<A> {
  type Item = A;

  #[inline]
  fn next(&mut self) -> Option<A> {
    if self.start < self.end {
      if let Some(mut n) = self.start.add_usize(1) {
        mem::swap(&mut n, &mut self.start);
        Some(n)
      } else {
        None
      }
    } else {
      None
    }
  }
}

This code is a little more verbose than it needs to be for this post; some of the extra structure is there to help the compiler optimize.

In essence, it performs a start-to-end boundary check (which also helps LLVM, the compiler backend, optimize), then tries to add 1 to our start value, provided that’s possible given the limits of the numeric type (i.e., as long as the maximum value for that type hasn’t been reached). If it can add 1, it uses a memory swap to move the new value into start and hand back the old one.

And last, it returns either Some(value) or None (where the value inside Some is the number that start was at). Some and None are the two variants of the enum Option<_>, and they are what most of the loops we write will pattern match against.
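
For reference, Option is defined in the standard library as exactly this kind of two-variant enum (simplified here):

enum Option<T> {
  Some(T),
  None,
}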

You can see one case of pattern matching used above with if let. When using match or let, you can perform the pattern matching known as destructuring assignment.

// structuring assignment
let number = Some(1);

// destructuring assignment (wrapped in `if let`, because the
// `Some(...)` pattern is refutable and a bare `let` won't compile)
if let Some(result) = number {
  println!("The result is {}", result);
}
// The result is 1

When using if just before let, it performs a conditional destructuring assignment, executing the code block only if the pattern matches.

let number = None;

if let Some(value) = number {
  println!("This doesn't run.  You won't see this.");
}

let number = Some(12);

if let Some(value) = number {
  println!("This prints!  The value is {}", value);
}
// This prints!  The value is 12

You can add an else after an if let if you want to handle two possible paths (just as in the Iterator example above). What is perhaps more common is to use match for cases when you have two or more result paths to work with.

let x = Some(7);

match x {
  Some(v) => println!("A value has been produced.  It's {}", v),
  None => println!("No value."),
}
// A value has been produced.  It's 7

Now, back on the subject of the for loop: we’ve covered that it creates a Range and that into_iter is used to produce an iterator whose next method yields an Option<_> value on each call. Each call returns either Some(value) or, once the end of the range is reached, None.

In this case, the value inside the Option is an integer. With nothing else constraining it, Rust infers its default integer type, i32.
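
Putting the pieces together, here is a rough, hand-written sketch of what the original for loop boils down to (the real desugaring goes through IntoIterator and is a bit more involved, but the shape is the same):

// Roughly what `for value in 1..10 { ... }` expands to
let mut iter = (1..10).into_iter();

loop {
  match iter.next() {
    Some(value) => println!("The value is {}", value),
    None => break,
  }
}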

Ownership

The block of code that prints each of the values does so from the destructured result of Some(value). Since into_iter consumes ownership of each item passed in, the collection of values from the range will not be available after the loop. Assigning the range to a variable and trying to use it later won’t work in this case.

let range = 1..10;  
for value in range {
  println!("The value is {}", value);
}

println!("{:?}", range);

This produces an error:

error[E0382]: use of moved value: `range`
 --> src/main.rs:7:20
  |
3 |   for value in range {
  |                ----- value moved here
...
7 |   println!("{:?}", range);
  |                    ^^^^^ value used here after move
  |
  = note: move occurs because `range` has type `std::ops::Range<i32>`, which does not implement the `Copy` trait

When looping over a collection, you have a few different ways to choose ownership of the items used.

  • iter() — iterates over &T (a borrowed reference to the item)
  • iter_mut() — iterates over &mut T (an editable borrowed reference to the item)
  • into_iter() — iterates over T (takes/consumes ownership)

Here we’ll look at a while loop that uses the let destructuring feature and only borrows the items as we run through the loop, leaving the collection available afterward.

let range = (1..4).collect::<Vec<usize>>();
let mut range_iterator = range.iter();

while let Some(value) = range_iterator.next() {
  println!("The value is {}", value);
}

println!("{:?}", range);

And this outputs:

The value is 1
The value is 2
The value is 3
[1, 2, 3]

The last line is the debug output from the final println!, and we still have access to the vector named range because we used iter() to borrow its items rather than consume them.
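
The list of ownership options above also mentions iter_mut(). As a minimal sketch, it hands out mutable borrows, so you can change the items in place and still use the collection afterward:

let mut values = vec![1, 2, 3];

for value in values.iter_mut() {
  *value *= 10;    // each `value` is an &mut reference into the vector
}

println!("{:?}", values);
// [10, 20, 30]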

Typical Loops in Rust

One very common use of loops in Rust is the loop/match combination. It may look something like this:

let values = vec![1,2,3,4,5];
let mut iterator = values.into_iter();

loop {
  match iterator.next() {
    Some(number) => print!("{}", number),
    None => break,
  }
}
// 12345

Rust’s type system makes sure the match accounts for all the possible outputs that the next method can produce from your iterator. When the collection reaches the end, next produces a None, the None arm runs, and we break out of the loop.

match can handle more advanced pattern matching.

let values = vec![1,2,3,4,5,6,7];
let mut iterator = values.into_iter();

loop {
  match iterator.next() {
    Some(4) | Some(6) => { println!("Even!"); },
    Some(number) => { println!("Prime! {}", number); },
    None => { break },
  }
}

And with guards:

let values = vec![1,2,3,4,5,6,7];
let mut iterator = values.into_iter();

loop {
  match iterator.next() {
    Some(number) if (number == 4) | (number == 6) => {
      println!("Not prime {}", number);
    },
    Some(number) => { println!("Prime! {}", number); },
    None => { break },
  }
}

And match also has bindings:

let values = 1..8;

for value in values {
  match value {
    num @ 1...3 => println!("Lower Range: {}", num),
    num @ 3...6 => println!("Upper Range: {}", num),
    _ => println!("Not in range."),
  }
}

And the above outputs:

Lower Range: 1
Lower Range: 2
Lower Range: 3
Upper Range: 4
Upper Range: 5
Upper Range: 6
Not in range.

There are many methods on the Iterator trait that help you do most of what you want to do while iterating over a collection. With them, you can daisy-chain methods like so:

// `contents` is the String read from the resume key file, and ResumeKey
// implements TryFrom for the text of a single entry (so the TryFrom trait
// needs to be in scope for try_from to resolve).
let mut db: Vec<ResumeKey> = vec![];

contents
  .split("\n\n")                    // entries are separated by blank lines
  .map(|s| s.to_string())
  .filter(|s| !s.is_empty())        // skip empty chunks
  .for_each(|item| {
    let rk = ResumeKey::try_from(item);
    if let Ok(key) = rk {           // keep only entries that parsed cleanly
      db.push(key);
    }
  });

This is some code from an app of mine that processes text from a resume key file. contents is the text of the file, and entries are separated by double newlines. From there, I use try_from to generate a ResumeKey from each section; if the result pattern matches as Ok() (one of the two variants of Result), I push it into my dataset of resume keys.

As you get more familiar with Rust, it’s likely you’ll prefer the daisy-chained style, as it often comes across as more readable. Even so, the same pattern matching still happens behind the scenes where it applies.
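
As a small, self-contained illustration of the same style (using made-up sample data rather than the resume key app), here is a chain that keeps only the chunks that parse cleanly into numbers:

let contents = "1\n\n2\n\nnot a number\n\n4";

let numbers: Vec<i32> = contents
  .split("\n\n")
  .filter_map(|s| s.trim().parse::<i32>().ok())  // Err values are dropped here
  .collect();

println!("{:?}", numbers);
// [1, 2, 4]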

Creating an Iterator

To create an iterator, we need to implement the trait Iterator with the method next(&mut self) for our type. It needs &mut self because the iterator has to update some state on the value it’s implemented for, so that the next time next is called, it has moved forward based on what it saved last time. The template for Iterator is as follows:

pub trait Iterator {
  type Item;
  fn next(&mut self) -> Option<Self::Item>;
}

The type keyword here declares an associated type, which names the type of item we’ll be iterating over. Since this is just the trait definition, the concrete type hasn’t been chosen yet; implementors fill in Item when they implement the trait.

struct Pairs {
  pairs: Vec<(usize,usize)>,
}

impl Iterator for Pairs {
  type Item = (usize, usize);

  fn next(&mut self) -> Option<Self::Item> {
      self.pairs.pop()
  }
}

fn main() {
  let set = Pairs {
    pairs: vec![(1,2), (2,3), (4,5)]
  };

  for pair in set {
    println!("Pair: {:?}", pair);
  }
}

The above will output:

Pair: (4, 5)
Pair: (2, 3)
Pair: (1, 2)

When we implement Iterator for our type, we automatically get the into_iter method through a blanket IntoIterator implementation in the standard library, which is what lets the type work in for loops. Since into_iter consumes ownership of the value, implementing next the way we did is all this type needs.
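
That blanket implementation looks roughly like this in the standard library; every Iterator is trivially its own IntoIterator:

impl<I: Iterator> IntoIterator for I {
  type Item = I::Item;
  type IntoIter = I;

  fn into_iter(self) -> I {
    self
  }
}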

If you would like to learn how to implement the iter and iter_mut versions as well, I highly recommend looking at the well-documented source code for vec_deque. There you will find that the structs Iter, IterMut, and IntoIter for VecDeque each implement the Iterator trait with their own behavior in the next function.

Summary

Rust has a fantastic system that manages your types and ownership very well, and this shines in loops as well. Where you would otherwise worry about types and bounds in other languages, Rust takes a load off your mind with compile-time checks and points you in the right direction.

I have found Rust removes most of the need for TDD when working with Rust-to-Rust code. Where TDD and testing really come into play is any time you work with something outside the ecosystem. Then all the normal practice guidelines apply.

Rust will spoil you with its helpful compile-time error messages, and your loops will scarcely ever run into the typical issues found in other languages.


“Understanding Rust Loops” via @6ftdan
Click To Tweet


The post Understanding Rust Loops appeared first on via @codeship.

{}
Docker News ( Feed )
Top 3 reasons to attend DockerCon 2018

In case you missed it, DockerCon 2018 will take place at Moscone Center in San Francisco, CA on June 13-15, 2018. DockerCon is where the Docker community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate and advanced users who are all looking to level up their

In case you missed it, DockerCon 2018 will take place at Moscone Center in San Francisco, CA on June 13-15, 2018. DockerCon is where the Docker community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate and advanced users who are all looking to level up their skills and go home inspired. With 2 full days of training, more than 100 sessions, free workshops and hands-on labs, and the wealth of experience brought by each attendee, DockerCon is the place to be if you’re looking to learn Docker in 2018.

Want to go but need information to convince your manager? Here is a document to help you build a case for it including content, budget and reasons why you should attend.

 

Register for DockerCon 2018

 

Reason #1: Inspiring and informative breakout sessions

From beginners to experts, DockerCon brings together the brightest minds to talk about all things containers, including the Docker Platform, Kubernetes, Digital Transformation in the Enterprise, Moby and CNCF projects, Container Security, Service Mesh and more. Although the full schedule won’t be announced until the end of the month, below is a sneak peek at some of the sessions we have lined up:

  • Kubernetes extensibility by Tim Hockin and Eric Tune (Google)
  • Accelerating Development Velocity of Production ML Systems with Docker by Kinnary Jangla (Pinterest)
  • Digital Transformation with Docker, Cloud and DevOps: How JCPenney Handles Black Friday and 100K Deployments Per Year by Sanjoy Mukherjee, (JCPenney)
  • Don’t have a Meltdown! Practical Steps for Defending your Apps by Liz Rice (Aqua) and Justin Cormack (Docker)
  • Creating Effective Docker Images by Abby Fuller (AWS)
  • App Transformation with Docker: 5 Patterns for Success by Elton Stoneman (Docker)

DockerCon 2018

 

Reason #2: Concrete learnings through Hands-on Labs and Workshops

Hands-on Labs will provide conference attendees with the opportunity to gain experience with various Docker features and use cases. Each lab runs around 45 minutes, covering both introductory and more advanced topics. Labs are self-paced, so attendees can learn at their own speed when they have time during the day.

Conference attendees will also get the opportunity to attend free two-hour, hands-on, instructor-led workshops. These run as their own track and are free on a first-come, first-served basis.

 

Reason #3: Effective networking with the Docker Hallway Track

Docker Hallway Track is an innovative platform that helps you find like-minded people to meet one-on-one and share knowledge in a structured way, so you get tangible results from networking.

Register for DockerCon 2018


Top 3 reasons to attend @DockerCon 2018 including a sneak peek at the speaker lineup
Click To Tweet


The post Top 3 reasons to attend DockerCon 2018 appeared first on Docker Blog.

Make a Lasting Impact with "Write for DOnations"

“Our community is bigger than just us” — As DigitalOcean (DO) employees, we aim to keep this value at the front of our minds in all our work. Since the company was founded in 2012, we’ve worked hard to build a vibrant, engaging Community where everybody from beginners to professionals

Make a Lasting Impact with "Write for DOnations"

“Our community is bigger than just us” — As DigitalOcean (DO) employees, we aim to keep this value at the front of our minds in all our work. Since the company was founded in 2012, we’ve worked hard to build a vibrant, engaging Community where everybody from beginners to professionals can learn from one another about working in the cloud.

It’s important to us that the Community emulates the best that tech has to offer by serving as a welcoming place where members can share their ideas and experiences. This is what led us to introduce the Write for DigitalOcean program. Write for DO gives Community members an opportunity to build their brand, develop their writing skills, and get paid for contributing to DigitalOcean’s collection of tutorials on open-source software deployment, configuration, and development.

We’re always looking for new ways to give back to the Community. To that end, we’re excited to announce some updates to the Write for DigitalOcean program and reintroduce it as “Write for DOnations” (currently in beta — the full program launch is coming later this year).

There are two main changes that we are excited to share:

  • DigitalOcean will match the payout to Community authors in the form of a donation to a tech-focused nonprofit, which they can choose from a predetermined list. We hope to add more organizations to this list over time, but as of the beta launch the available organizations fall into the following categories: Free and Open Source, Tech Education, Diversity and Inclusion in Tech, and organizations promoting a Free and Open Internet.

  • The typical payout for new tutorial content from Community authors will increase to $300, to be paid either via PayPal or as DO credit.

The Write for DOnations beta program will follow the same editorial structure as Write for DO:

  • Anyone interested in becoming a DO Community author can apply by submitting a sample tutorial which showcases their ability to explain technical concepts to others.
  • One of DigitalOcean’s editors will reach out to approved applicants and the two will work together to find a topic for an original, first-run article that would be exciting to the author and valuable to the broader DO Community.
  • The author will write and submit their first draft, then collaborate one-on-one with their editor to revise their work to align with the DigitalOcean Style Guide.

At the end of this review process, the author’s tutorial will be published on the Community website and they will receive their payout. The author will then get to choose the nonprofit(s) that will receive their matching donation. Donations will be processed through Bright Funds, and authors’ donations can either go to a single tech-focused nonprofit or be evenly split between a group of nonprofits that share similar missions. Please note that the charitable contributions made by DigitalOcean through this program are not tax-deductible to the authors.

Since its launch, the Write for DigitalOcean program has allowed authors to share their diverse technical knowledge with the world while also improving their writing skills and growing their personal brand. Our team is always on the lookout for fresh content our community will love. To get a sense of which tutorial topics we’re particularly interested in, take a look at our suggested topics page.

Although Write for DOnations is still in development, we’re excited to help our Community authors make a real impact by donating to fantastic organizations that are working to shape the world of tech for the better.

We are actively seeking feedback to inform the full release of the new Write for DOnations program. Check out the program’s FAQ page for more details, and please share any questions or comments about the Write for DOnations beta launch in the comments below, or reach out to us directly at writefordonations@digitalocean.com.

{}
Codeship News ( Feed )
Tuesday, 13 February 2018
How We Built the Codeship API v2

Reading Time: 5 minutesWe started work on our API v2 at the beginning of 2017. We knew that implementing it could have significant implications for our architecture as well as our customers’ workflows, so we wanted to spend the time to get it right rather than rushing to deliver something and then

Reading Time: 5 minutes

We started work on our API v2 at the beginning of 2017. We knew that implementing it could have significant implications for our architecture as well as our customers’ workflows, so we wanted to spend the time to get it right rather than rushing to deliver something and then having to live with the consequences.

Click to Tweet:


“Examining how v2 of the Codeship API was brought into being.” via @codeship
Click To Tweet


API Gateway Versus the Majestic Monolith

The first thing we looked at was how it would fit into our current architecture and our vision for the system going forward. Our existing architecture is a Rails monolith with a couple of supporting services around it.

Our long-term vision is to move to a more services-based architecture, so we decided to first explore the idea of using an API gateway. This approach would allow us to reduce the number of requests going to our monolith frontend and would enable us to separate our UI load from our API load. Additionally, it would provide a central gatekeeper for all of our systems.

As appealing as this option was, once we started to investigate we found that the types of API calls we wanted would almost always end up hitting our monolith anyway. This, coupled with the overhead of adding an API call to the gateway only to turn around and pass the same request to our monolith, seemed like a lot of extra work for an unknown future payoff.

In the end, we decided to build our API into our monolith as a Rails engine. A Rails engine is just an embeddable Rails application that operates in a namespace isolated from the main application. This separation helped prevent us from inadvertently using code from our monolith, but it still required a little discipline on our part not to reach into the monolith explicitly.

Interaction between API Rails Engine and the monolith

The above diagram highlights the interaction between our API Rails Engine and our monolith while retrieving a build object.

  1. An ApiV2::BuildRequest Ruby object is created using our validated request parameters.
  2. The ApiV2::BuildRequest object is passed to the ApiV2::BuildService, which currently serves as the interface into our monolith. As we move to a more service-oriented architecture, this could point to a separate service.
  3. The BuildProvider in our monolith then performs the necessary business logic to look up the resulting Active Record build object.
  4. The Active Record build object is then converted into a plain old Ruby object (PORO), which is then passed back to the API engine.
  5. The API Engine converts our PORO into JSON and sends it back to the client.

If you are interested in learning more about Rails Engines, I would highly recommend reading the Rails Guide on Getting Started with Engines.

Authentication

The next significant design consideration was authenticating requests. Currently, all of our authentication and authorization occurs in our monolith. This would not have been a big deal if we kept our API in an embedded Rails engine, but if we ever extract it into an external service, it could amount to a lot of network traffic.

In searching for a solution, we found out that several of our developers had worked with JSON Web Tokens (JWT) in the past and suggested it as a possible solution. Using JWT, with the initial authentication request, we authenticate the user and generate a token containing the following information:

{
  "user_id": 1,
  "scopes": {
    "09ddc0f0-7ae2-0135-addd-745c899e7aa9": [
      "project.read",
      "project.write",
      "build.read",
      "build.write"
    ],
    "09fec460-7ae2-0135-addd-745c899e7aa9": [
      "project.read",
      "build.read",
   ]
  },
  "exp": 1516220059,
  "iss": "https://api.codeship.com/v2/",
  "aud": "client",
  "iat": 1516216459,
  "jti": "692e3d0a906870b604ae25aea02e038d"
}

Our example shows what a decoded JWT looks like for Codeship. This particular token is for User 1, who has access to two different organizations. In the first organization, the user can both read and write project and build resources; in the second, they can only read them.

This bearer token is passed with every request in the Authorization HTTP header. Upon receiving the request, our monolith verifies that the user has the correct scope for the resource that they are trying to access. If the user is authorized, we process the request.

If you are interested in learning more about JWT, there is an excellent introduction available at https://jwt.io/.

REST Versus GraphQL

Even more important than the architecture issues was the API itself. We felt that there were only two choices to consider: REST and GraphQL.

GraphQL is an API query language that allows you to retrieve the exact data you are looking for in a single server request. A theoretical GraphQL query to find the build status and commit message for a build might look like this:

{
  organization(uuid: "09fec460-7ae2-0135-addd-745c899e7aa9") {
    project(uuid: "d7c7ea93-fe9c-4b5d-8943-25e83e22b60d") {
      build(uuid: "95639b6e-0af5-4f72-9671-060177b8b6cb") {
        message
        status
      }
    }
  }
}

We loved how powerful these queries are, but it didn’t feel quite as natural to trigger a build.

Taking a REST approach, our build would live at the following location:

http://api.codeship.com/v2/organizations/<organization_id>/projects/<project_id>/builds/<build_id>

We could retrieve the build information with a GET request against this URL. A POST request to the same URL would trigger the build.

Ultimately, we felt that our customers, as well as ourselves, were more experienced with REST, and we decided to take that approach. As we learn more about how users are interacting with our API, we may revisit GraphQL in the future.

If you haven’t yet looked at our API, you can find all the information you need to get started here. In case you get stuck working with the API or have feedback on how to make it better, feel free to reach out to our help desk or connect with us in our Community Slack.


“How We Built the Codeship API v2” via @codeship
Click To Tweet


The post How We Built the Codeship API v2 appeared first on via @codeship.

The Deep End Podcast Ep #13: From Prototype to Internet of Things with Muzzley

A vision, a small prototype, and a PowerPoint presentation: that’s how Muzzley, a platform for interacting between Internet of Things (IoT) devices, was born three years ago. Today the Muzzley team works to solve a pain point for smart home consumers: managing their IoT devices from one interface,

The Deep End Podcast Ep #13: From Prototype to Internet of Things with Muzzley

A vision, a small prototype, and a PowerPoint presentation: that’s how Muzzley, a platform for interacting between Internet of Things (IoT) devices, was born three years ago. Today the Muzzley team works to solve a pain point for smart home consumers: managing their IoT devices from one interface, with minimum hassle. But they also place importance on transparency, privacy, and protecting their customers’ data.

In this episode, Muzzley co-founders Domingo Bruges and Sasha Dewitt discuss how Muzzley’s tech stack evolved to support a product that integrates with different vendors. They share insight into how they manage the data generated by consumer IoT devices, and how they approach consumer privacy and data protection.

Subscribe to The Deep End Podcast on iTunes, and listen to the latest episode on SoundCloud below:

Hollie Haggans heads up Global Partnerships for DigitalOcean’s Hatch program. She is passionate about startups and cold brew coffee. Get in touch with questions at hatch@digitalocean.com.


pluto.models/1.4.0, feed.parser/1.0.0, feed.filter/1.1.1 - Ruby/2.0.0 (2014-11-13/x86_64-linux) on Rails/4.2.0 (production)