|Updated Saturday, 21 April 2018 07:30|
|5 years later, where are you on your Docker journey?|
Docker is turning five the week of March 19-25, 2018 and in celebration of this event, we would like to turn the spotlight to our users – the customers, partners, individuals and organizations that are using the Docker platform every day. From scientific use cases in the field of medical and space research to more traditional use cases in large banks and insurance companies, Docker’s adoption is only accelerating with everyone from individual developers to global corporations. Here are some of the key figures showing the widespread adoption of Docker across the community and within enterprises.
Docker has evolved from a container runtime to Docker Enterprise Edition (EE), a secure enterprise-ready container platform that brings unmatched freedom of choice to enterprises, while providing a foundation for their digital and multi-cloud initiatives. Millions of users rely on Docker, downloading 100M container images a day, and over 450 companies have turned to Docker Enterprise Edition – including hundreds of the largest enterprises in the world. With such vast adoption, the range of stories to tell and the diverse set of use cases continues to grow. So where are some of these users and enterprise customers on their Docker journey?
Docker users have deployed Docker containers for a variety of use cases. One research center is using Docker to create a scalable, parallelizable and rapidly deployable test infrastructure for DART, NASA’s mission to hit an asteroid at 6 km/s. Another use case involves Pinterest, a user that deployed Docker to improve the production Machine Learning Systems powering their home feed. Still, some of our commercial customers are using Docker EE for a variety of mission-critical applications:
GE Digital – The company leveraged the Docker platform and Windows containers to repackage one of its main Asset Performance Management (APM) on-premises solutions and embrace a new fog computing approach with greater intelligence and more computing power at the edge.
GlaxoSmithKline – The global pharmaceutical company is using Docker EE to help power its new research environment for identifying new medical discoveries more rapidly.
MetLife – By using Docker EE for legacy application modernization, MetLife projects savings of 66% across nearly 600 of its applications, enabling the 150-year old company to self-fund innovation projects.
We are thrilled that MetLife will be hosting an onsite birthday event to celebrate their containerization journey, welcome new Docker users and share their first-hand experiences with how the Docker platform has impacted their business.
Docker Birthday events worldwide – RSVP Today
Come learn, mentor, celebrate, eat cake, and take an epic #dockerselfie! No matter where you are in your Docker journey, the birthday events are meant to help demystify some Docker concepts and guide users along a specific learning path depending on their use case and expertise level. Advanced Docker users and partners will be available on-site and act as mentors to help everyone complete a series of Docker hands-on labs.
Docker Birthday #5 Hands-on Labs will include the following:
The main event will take place at Docker HQ in San Francisco on March 22nd, but thanks to the hard work of awesome Community Leaders, we now have 65 events (and counting) scheduled worldwide! Check out the map below to confirm your RSVP or reach out to your community leaders to organize an event in your city.
5 years later, where are you on your #docker journey? #dockerbday
|Using Genetic Algorithms in Ruby|
Reading Time: 5 minutes
Did you know that there’s a way to use the power of natural selection to solve programming challenges? With genetic algorithms (GA), you can solve optimization problems using the same concepts that you find in nature:
Reproduction
Survival of the fittest
Adaptation to the environment
So what’s an optimization problem? It’s when you want to find not just a valid solution but the solution that will give you the best results.
For example, if you have a backpack that only fits a certain amount of stuff and you want to maximize the amount of stuff you can bring, then you could use a genetic algorithm to find the best solution. This is also known as the knapsack problem.
The genetic algorithm is not the only way to solve this kind of problem, but it’s an interesting one because it’s modeled after real-world behavior. So let’s learn how they work and how you can implement your own using Ruby.
“Experiment with genetic algorithms for optimization problems in Ruby.” via @matugm
The Initial Population
The first thing you need in a genetic algorithm is the initial population. This is just a pool of potential solutions that are initially generated at random. A population is made of chromosomes.
A chromosome can encode a solution in different ways; one is to use a binary string, a string composed of 0s and 1s.
This is what a chromosome looks like:
We use a 1 to represent an item inside the backpack and a 0 to represent an item that is not in the backpack.
Now that we have a chromosome, we can generate the initial population:
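One possible sketch (the class and method names here are just illustrative, not the article’s exact code) models a chromosome as a small class wrapping a random binary string:

```ruby
# A chromosome wraps a binary string; 1 = item in the backpack, 0 = left out
class Chromosome
  attr_reader :genes

  def initialize(genes)
    @genes = genes
  end

  # Build a chromosome with a random binary string of the given length
  def self.random(size)
    new(Array.new(size) { rand(0..1) }.join)
  end
end

# The initial population is just an array of random chromosomes
def initial_population(population_size, chromosome_size)
  Array.new(population_size) { Chromosome.random(chromosome_size) }
end
```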
Survival of The Fittest
In this step, we want to select the strongest chromosomes (potential solutions) from our population and use them to create the next generation.
There are two components to this:
The fitness function is used to ‘score’ every chromosome to see how close it is to the optimal solution. This of course depends on the problem we are trying to solve. For the backpack problem, we could use a fitness function that returns a higher score for every item that we are able to fit in.
Here is an example:
First, we calculate the total weight of the items to see if we have gone over capacity. Then if we go over capacity, we are going to return a fitness of 0 because this solution is invalid. Otherwise we are going to return the total value of the items that we were able to fit in, because that’s what we are optimizing for.
For example, the score of a given chromosome is the total value of the items selected by its 1s, as long as their combined weight stays within capacity. That total value becomes the fitness score for this particular chromosome.
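A fitness function along these lines might look like the following sketch (the item weights, values, and capacity are made-up example data):

```ruby
# Made-up example data: three items and a backpack capacity of 50
ITEMS    = [{ weight: 10, value: 60 },
            { weight: 20, value: 100 },
            { weight: 30, value: 120 }]
CAPACITY = 50

# Score a binary-string chromosome: 0 if over capacity,
# otherwise the total value of the selected items
def fitness(genes)
  total_weight = 0
  total_value  = 0
  genes.each_char.with_index do |bit, i|
    next unless bit == '1'
    total_weight += ITEMS[i][:weight]
    total_value  += ITEMS[i][:value]
  end
  total_weight > CAPACITY ? 0 : total_value
end
```

With these numbers, the chromosome "110" fits (weight 30 of 50) and scores 160, while "111" is over capacity and scores 0.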
Now let’s go over the selection algorithm. This decides which two chromosomes to evolve at any given time.
There are different ways to implement a selection algorithm, like the roulette wheel selection algorithm and the group selection algorithm.
Or we can simply pick two random chromosomes. I found this to be good enough as long as you apply elitism, which means keeping the best-fit chromosomes after every generation.
Here’s the code:
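A sketch of how that could look, with the fitness passed in as a block so the helpers stay generic:

```ruby
# Pick two chromosomes at random from the population
def select_pair(population)
  [population.sample, population.sample]
end

# Elitism: find the best-fit chromosome so it can survive into the next generation
def elite(population, &fitness)
  population.max_by(&fitness)
end
```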
Next we will learn how we can evolve the selected chromosomes so we can create the next generation and get closer to the optimal solution.
Genetic Algorithm Evolution
To evolve our selected chromosomes, we can apply two operations: crossover and mutation.
In the crossover operation, you cross two chromosomes at some random point to generate two new chromosomes, which will form part of the next generation.
Here’s the crossover method:
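A single-point crossover for binary-string chromosomes might be sketched like this:

```ruby
# Cross two equal-length binary strings at a random point;
# the two children swap tails after that point
def crossover(a, b)
  point = rand(1...a.length)
  [a[0...point] + b[point..-1],
   b[0...point] + a[point..-1]]
end
```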
We don’t always apply this crossover operation because we want some of the current population to carry over.
The other evolutionary operation we can perform is mutation. Mutation is only applied with a small probability because we don’t want to drift off too much from the current solution.
The purpose of mutation is to avoid getting stuck at a local optimum.
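A mutation step can be as simple as flipping each bit with a small probability (the default rate here is just an illustrative value):

```ruby
# Flip each bit of a binary string with a small probability
def mutate(genes, rate = 0.05)
  genes.each_char.map { |bit|
    rand < rate ? (bit == '0' ? '1' : '0') : bit
  }.join
end
```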
Now that we have all the components, we can make them work together.
The Run Method
This method generates the initial population and contains the main loop of the algorithm. It will also find the best-fit solution and return it at the end. It looks something like this:
The first argument is the chromosome class we are going to use, the second is the crossover rate, the third is the mutation rate, and the last is the number of generations.
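As a self-contained sketch, here is what such a loop could look like; it optimizes a stand-in problem (maximizing the number of 1s in a 10-bit string), so the helper methods are simplified, hypothetical versions of the ones described above:

```ruby
GENE_SIZE = 10

def random_genes
  Array.new(GENE_SIZE) { rand(0..1) }.join
end

# Stand-in fitness: count the 1s (a real run would use the knapsack fitness)
def fitness(genes)
  genes.count("1")
end

# Single-point crossover producing one child
def crossover(a, b)
  point = rand(1...GENE_SIZE)
  a[0...point] + b[point..-1]
end

def mutate(genes, rate)
  genes.each_char.map { |bit| rand < rate ? (bit == '0' ? '1' : '0') : bit }.join
end

def run(crossover_rate: 0.7, mutation_rate: 0.01, generations: 100)
  population = Array.new(50) { random_genes }
  generations.times do
    best = population.max_by { |g| fitness(g) }   # elitism: carry the best over
    next_gen = [best]
    while next_gen.size < population.size
      a = population.sample
      b = population.sample
      child = rand < crossover_rate ? crossover(a, b) : a
      next_gen << mutate(child, mutation_rate)
    end
    population = next_gen
  end
  # Return the best-fit solution found
  population.max_by { |g| fitness(g) }
end
```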
How Do You Know That You Got The Best Solution?
Long story short, you can’t know for sure. What you can do is run a good amount of iterations and trust that the result is either the optimal solution or very close to the optimal solution.
Another option is to keep track of the best-fit chromosome and stop if it doesn’t improve after a certain number of iterations.
In this article, you learned that genetic algorithms are used to solve optimization problems. You also learned how they work and what components they’re made of (initial population, selection, and evolution). You can find the finished project on GitHub.
If you found this article interesting, do us a favor and share this post with as many people as you can so they can enjoy it, too!
“Using Genetic Algorithms in Ruby” via @matugm
|dpl 1.9.0 is coming|
Deployment is a critical component of Travis CI, and it is one of our major goals to ensure that it works well at all times. It is carried out with dpl.
A major update (version 1.9.0) to dpl is coming.
This will affect all our users, on https://travis-ci.org, https://travis-ci.com, and all the Travis Enterprise customers.
The new version 1.9.0 will introduce the following breaking changes:
The requirement of Ruby 2.2 may sound scary for those using older versions of Ruby for builds. Please rest assured that this is not a concern, because we use a version of Ruby pre-installed on the build image to ensure that the deployment is carried out correctly. This version is already Ruby 2.2 on our hosted services and newer Travis Enterprise releases.
In the rare cases where the deployment is carried out with Ruby 1.9 for whatever reason,
One case where a complication may arise is older Travis Enterprise releases,
where we fail to specify
The Chef Supermarket
provider, for example, can now use version 13.8.0, whereas
To use the pre-release version of
If you are already using
Barring major issues,
|Setting Up HAProxy for a SaaS Application|
Reading Time: 4 minutes
At Shopblocks, each customer receives several temporary subdomains so that they can access their website, admin system, and static assets. Part of the challenge of building Shopblocks was in providing all customers, by default, an SSL certificate.
During our prelaunch stage, our system was based heavily on Apache Virtual Hosts, with each customer getting their own Virtual Host file. This was necessary because of the SSL certificate configuration being required on a per-customer domain basis.
“Working with HAProxy to address issues with SaaS scaling” via @bowersbros
Addressing Limitations When Scaling
However, a problem with using Apache Virtual Hosts started to crop up when we grew our customer base and started getting into the thousands and tens of thousands of virtual host files.
We were finding the memory usage (without any request throughput) was increasing with the number of Virtual Host files loaded; graceful reloads were taking longer to perform. Each time a customer registered, a process was set off to create their config files and then to reload Apache.
Further issues appeared when we started building out our infrastructure for a hosted application, such as reloading multiple servers, ensuring that all servers are capable of serving all sites, handling a server that’s gone offline, and ensuring server configurations were synchronized when they came back online.
Shifting SSL Termination Responsibility to HAProxy
Many of these issues were solved by removing the requirement of a Virtual Host per customer, but this left open the issue of SSL termination. We solved this by moving the responsibility for terminating SSL further up the chain to the HAProxy load balancer and away from Apache itself.
In HAProxy, we use map files and header rewrites to handle all domains without a change to any Apache configuration for new customers.
When you sign up to Shopblocks, you actually receive three subdomains. For example, if you registered with the name
This will create a few entries in the map file for our HAProxy configuration. These lines look like this:
When registering, you cannot register with a dash in your name, so we can safely assume that any hyphens with
Our HAProxy configuration now looks something like this.
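As an illustration only (the domain names, paths, customer IDs, and backend names below are invented, not our actual configuration), the map file and frontend wiring could look roughly like this:

```haproxy
# /etc/haproxy/file.map — one line per customer subdomain, mapping host to an ID
#   example-shop.ourplatform.com     42
#   example-admin.ourplatform.com    42
#   example-static.ourplatform.com   42

frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/
    # Look the Host header up in the map; silently drop unknown domains
    acl known_customer hdr(host),lower,map(/etc/haproxy/file.map) -m found
    http-request silent-drop unless known_customer
    # Pass the customer ID upstream and normalise the Host header
    http-request set-header X-Customer-ID %[req.hdr(host),lower,map(/etc/haproxy/file.map)]
    http-request set-header Host app.internal
    default_backend apache-servers

backend apache-servers
    server web1 10.0.0.10:80 check
```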
All requests that hit the HTTPS frontend are decrypted using the matching SSL certificate; then we check their domain against the map file. If we do not find a match, the request is silently dropped.
We then add their ID to the request under
The Host header is also overwritten so that the correct Apache Virtual Host can be used, with a generic server name in Apache. This now allows your Apache Virtual Host configuration to be generic with one per application, rather than one per server. The only requirement when adding a new customer, domain, or SSL certificate is to modify the file.map and perform a reload in HAProxy, which is a zero-downtime action.
And there we have it: an overview of using HAProxy to set up a SaaS app simply.
“Setting Up HAProxy for a SaaS Application” via @bowersbros
|The Deep End Podcast Ep. 14: Making Sense of It All with Attentive|
There’s such a thing as “too much information”, especially for companies scaling out their sales operations. That’s why Attentive was born in 2015: to help sales teams make their increasing pipelines simpler to manage. Indeed, the small, Portugal-based team is itself focused on scaling, having participated in accelerator programs like Techstars.
In this episode, Attentive founder and CTO Pedro Araújo talks about what it takes to build a tech product from the ground up. Discover their approach to running an engineering team, from adopting new open source technologies, to onboarding junior developers and learning about cloud infrastructure.
Hollie Haggans heads up Global Partnerships for DigitalOcean’s Hatch program. She is passionate about startups and cold brew coffee. Get in touch with questions at firstname.lastname@example.org.
|New Enterprise Build Environments - Ubuntu 14.04, Trusty|
We’re super excited to announce new build environments available for Travis CI Enterprise 🎉 These are the stable images used for Trusty container-based builds on Travis CI Hosted and are up-to-date with the latest tools and features.
Upgrading Build Environments
The new build environments are supported on Travis CI Enterprise installations at version 2.1.9 or later (here’s how to upgrade your Travis CI Enterprise Platform instance). The worker will also need to be installed on a different instance than the legacy Precise build environment workers, but the process is very similar. Full installation docs can be found here: docs.travis-ci.com/user/enterprise/trusty
After upgrading your Travis CI Enterprise installation, builds that should be routed to the new build environments will need to have a
Of course, if you have any questions or feedback about upgrading, please get in touch at email@example.com. We’d love to help out!
Support for Legacy Precise Build Environments
Although we encourage everyone to upgrade to the new build environments, we certainly understand these things take time. We’re planning to fully support the legacy (Precise) build environments for the next six months, though we will not update the images. You may update or customize them yourself, if you would like. Please let us know if you have any concerns on this process.
Thank You to Build Environment Beta Testers
Last of all, all of us on the Travis CI Enterprise Team want to send a big shout-out to our Trusty Build Environment beta testers. Thank you for your help and input! 💖
|A Secure Supply Chain for Kubernetes|
The beta release of the Docker Enterprise Edition (Docker EE) container platform last month integrates Kubernetes orchestration, running alongside Swarm, to provide a single container platform that supports both legacy and new applications running on-premises or in the cloud. For organizations that are exploring Kubernetes or deploying it in production, Docker EE offers integrated security for the entire lifecycle of a containerized application, providing an additional layer of security before the workload is deployed by Kubernetes and continuing to secure the application while it is running.
What is a Software Supply Chain?
When you purchase something from a retail store, there is an entire supply chain that gets the product from raw materials to the manufacturer to you. Similarly, there is a software supply chain that takes an application from code on a developer’s laptop to production.
Every company’s software supply chain may be slightly different; some outsource software development, some have adopted Continuous Integration and Continuous Delivery processes, and some deploy production applications across multiple clouds, some on-premises. Regardless of what the software supply chain consists of, Docker EE provides a set of solutions that integrates with your workflows while ensuring that applications remain secure, trusted and safe through all of these steps using both Kubernetes and Swarm.
In this week’s blog, we’ll take a closer look at one part of this solution – image scanning and policy-based image promotions.
Secure Automation of Workflows for Kubernetes
Before an application is deployed in production, organizations typically want to know that it does not have any known vulnerabilities that often come from older releases or unpatched versions of software. It’s also difficult for large organizations to keep a full inventory of every application they have running that may be affected by a new vulnerability.
Docker EE provides image security scanning to help organizations both identify vulnerabilities before the applications are deployed in production and to alert you when new vulnerabilities affect existing applications. This is done by executing a binary-level scan of your images against the NIST list of known vulnerabilities. As shown below, each layer of an image can be thoroughly scanned to provide insight into the workload.
Docker EE also has the ability to define policies to automate the movement of images between repositories. These image promotion policies can be combined with the results of security scanning to create a secure, automated workflow for images moving to production.
For example, a developer is working on a new Kubernetes project with access to the ‘dev’ repository from which they can push and pull images. The repository is set up with image scanning to automatically scan all images when they are pushed to the repository. When the developer is ready to move this into production, they add a specific tag like “latest” to the image. Their repository is set up with an image promotion policy that states that if an image has the “latest” tag and has no critical vulnerabilities, it gets automatically copied or promoted to the ‘QA’ repository.
In this example, only the QA team has access to the QA folder, limiting access to only those who require it. This policy also ensures that developers are responsible for fixing any vulnerabilities before they are passed on to the QA team.
By combining these Docker EE capabilities, organizations can:
These are all critical workflows that happen prior to the app being deployed in production with Kubernetes. With Docker EE you get the only container platform with integrated security across the entire supply chain. For more information about Docker’s Secure Supply Chain for Kubernetes watch the following on-demand video:
Learn how #Docker EE security scanning and image promotions can secure the #Kubernetes supply chain
Learn more about Docker Enterprise Edition with Kubernetes integration:
|Scaling Your Containers With Kubernetes|
Reading Time: 4 minutes
With the growing adoption of containerization, many developers are no longer struggling with the woes of code environmental inconsistencies. While containers have solved important problems, new questions have arisen over how to effectively manage and scale them. Products like Kubernetes have been born out of an attempt to solve these issues. Yet, what exactly is Kubernetes?
Kubernetes is essentially a solution that coordinates a group of containerized applications and orchestrates them to work together as a single unit. Having a cohesive group of containers allows us to explicitly schedule the number of container instances we have running at once and also provides an automated way to scale them in case of the unexpected.
Let’s look at the technical specifics of how Kubernetes actually does this.
“So devs are into containers now. But how do you manage and scale them?” via @hiimtaylorjones
How Kubernetes Works
Kubernetes instances are referred to as clusters. Each cluster consists of three main units:
These three components work with each other to scale and schedule containers properly. The master component dictates the scheduling and scaling commands. The nodes then receive these commands and orchestrate the pods accordingly.
Each of these units is a self-contained service in its own right. They all communicate with each other over a series of specialized APIs. This means that Kubernetes is actually a series of microservices itself.
Another thing to note about the technical structure of Kubernetes is that it’s open source! Kubernetes’ open-source nature means that we can extend it to meet any need that might not be covered in its core feature set. Many organizations have taken the base Kubernetes project and extended it to create products that meet their organization’s unique needs.
However, if we focus on the core aspects of Kubernetes, what exactly about it is unique? What’s the secret sauce behind Kubernetes’ success?
What’s Unique About Kubernetes?
Kubernetes’ early success was mainly due to its creation and subsequent backing by Google. With an endorsement from a company like Google, many developers immediately became attracted to the idea of using it. Yet corporate hype isn’t the only reason it’s popular.
One of the biggest technical features that Kubernetes executes really well is autoscaling. If your production app is experiencing an unexpected spike in traffic, Kubernetes will automatically scale to handle the new workload. Compared to its competitors, it has more maturity in this area.
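For instance, autoscaling a deployment can be declared with a HorizontalPodAutoscaler manifest along these lines (the name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2          # never drop below two pods
  maxReplicas: 10         # cap the scale-out
  targetCPUUtilizationPercentage: 80   # add pods when average CPU exceeds 80%
```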
Kubernetes also isn’t bound to one specific container solution for its pods. Many of Kubernetes’ competitors are tied more closely to the particular container solutions they support.
Another point that piques folks’ interest about Kubernetes is the open-source nature of the project. As I mentioned earlier, being open source allows infinite customization of the core product. Many of Kubernetes’ competing solutions offer no or limited customization of their products.
We’ve spent a few minutes talking up the good parts of Kubernetes, but what are some drawbacks or immature aspects of it?
What Are the Drawbacks to Kubernetes?
To be frank, Kubernetes has a reputation of being hard to set up. Once set up, the complaints seem to fall off. However, the journey getting to that point has a reputation of being fairly tough.
Guides like Kubernetes: The Hard Way argue that understanding why Kubernetes is so hard to set up leads to a deeper understanding and mastery of the product. However, despite its reputation, many tools have started to emerge that make Kubernetes a bit more accessible.
Another thing that doesn’t work in Kubernetes’ favor is that its core feature set isn’t that unique compared to other products. There are a lot of other tools with a similar feature set that are a bit easier to use. The lack of a completely unique feature set may allow other products with an easier learning curve to eat at Kubernetes’ market share.
While Kubernetes has its triumphs and its downfalls, what exactly is next for its development?
What’s Next for Kubernetes?
Moving forward, Kubernetes is trying to smooth out its learning curve by working closer with container providers like Docker to pave the way for easier integration with different types of containers.
Another cornerstone of Kubernetes’ mission moving forward is to make container orchestration a much more popular idea than it already is. Numerous companies have adopted the ideas of containerization in their products. However, the problems that Kubernetes solves haven’t exactly been adopted into the mainstream of tech development yet. Kubernetes’ mission moving forward is to make sure people understand its purpose.
Finally, there’s a plan to address Kubernetes’ competition. Providers like AWS and IBM have their own container management products. Kubernetes is working to better integrate into these platforms for increased adoption, whether that be a supported option for Kubernetes or an entire custom feature built around it. Kubernetes is simply trying to become more embedded in the products we use every day.
Overall, if you’re interested in finding a solution to container scaling and management, Kubernetes is a great place to start. Take the time to learn what it’s about, and you’ll unlock a massive amount of headroom for your organization to scale and manage applications in production environments.
“Scaling Your Containers With Kubernetes” via @hiimtaylorjones
|First Look at the DockerCon San Francisco 2018 Agenda|
From June 12th – 15th, San Francisco will welcome 6,000+ developers, sysadmins, architects, VPs of Apps and other IT leaders to get hands-on with the latest innovations in the container ecosystem at DockerCon 2018. Today, we are excited to share a first look at the DockerCon agenda. We’ve brought back some of your favorites from past DockerCons and are also thrilled to welcome many first-time DockerCon speakers to the stage. Here is a first look at some of our favorite sessions:
Customers in Production
Use case sessions highlight how companies are using Docker to modernize their infrastructure and build, manage and secure distributed applications. These sessions are heavy on business value, ROI and production implementation advice, and learnings.
Using Docker sessions are introductory sessions for Docker users, dev and ops alike. Filled with practical advice, learnings, and insight, these sessions will help you get started with Docker or better implement Docker into your workflow.
One way to achieve a deep understanding of a complex system is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. This is what we do in the Black Belt track! It features deeply technical talks covering not only container technology but also related projects.
Docker Docker Docker
Docker Docker Docker sessions provide a deeper dive into Docker tooling, implementation, and real world production use recommendations. If you are ready to get to the next level with your Docker usage, join this track for product updates and best practices from the Docker team.
The Innovation Track shows how containers are redefining our technology toolbox, from solving old problems in a new way to pushing the boundaries of what we can accomplish with software. Sessions in this track provide a glimpse into the new container frontier as it relates to IoT, Machine Learning and distributed systems.
The Transform track focuses on the impact of change, both for organizations and for ourselves as individuals and communities. Filled with inspiration, insights and new perspectives, these stories will leave you energized and equipped to drive innovation.
In addition to the tracks above, DockerCon also includes an Ecosystem Track which showcases the variety of container products and solutions developed by sponsoring partners from the container ecosystem. Please reach out to firstname.lastname@example.org if you’re interested in speaking opportunities.
We were humbled by the quantity and quality of submissions we received as part of the CFP and wanted to extend a sincere thank you to the Docker community for sharing all of the amazing things they are doing with Docker. Feel free to reach out to email@example.com if you’d like to give a talk at your local Docker user group.
Useful DockerCon Links:
First Look at the @DockerCon San Francisco 2018 Agenda #dockercon
|Using Apache Kafka for Asynchronous Communication in Microservices|
Reading Time: 4 minutes
While microservice architecture might not be a silver bullet for all systems, it definitely has its advantages, especially when building a complex system with a lot of different components. Of course, if you’re considering microservices, you have to give serious thought to how the different services will communicate.
In this post, we’ll look at how to set up an Apache Kafka instance, create a user service to publish data to topics, and build a notification service to consume data from those topics. Specifically, we’re going to build a two-factor authentication app where a user will register, receive a mail with a verification code, and complete their registration using the code. The source code can be found here.
Why Apache Kafka?
Kafka is a distributed streaming platform created by LinkedIn in 2011 to handle high-throughput, low-latency transmission and processing of streams of records in real time. Its three major capabilities make it ideal for this use case:
Setting Up Apache Kafka
Before starting this tutorial, the following will be required:
Publishing Data to Kafka Topics
To publish data to a Kafka topic, we are going to create a user service that provides two endpoints:
We use the
We create a new promise object that resolves to a producer that is ready to start publishing data. This is used in our
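The promise-wrapping pattern described here can be sketched roughly as follows. Any EventEmitter works as a stand-in for the producer (kafka-node's Producer emits 'ready' and 'error' events in the same way), and the topic name and payload shape are assumptions for illustration, not the post's actual code:

```javascript
// Wrap the producer's 'ready' event in a promise, so callers can simply
// await a producer that is connected and ready to publish.
function producerReady(producer) {
  return new Promise((resolve, reject) => {
    producer.once('ready', () => resolve(producer));
    producer.once('error', reject);
  });
}

// Hypothetical register-endpoint helper: await the ready producer, then
// publish the verification payload to a topic (names are illustrative).
function publishVerification(readyProducer, email, code) {
  return readyProducer.then(
    producer =>
      new Promise((resolve, reject) =>
        producer.send(
          [{ topic: 'user_registered', messages: JSON.stringify({ email, code }) }],
          (err, result) => (err ? reject(err) : resolve(result))
        )
      )
  );
}
```

Because the producer is awaited rather than used directly, endpoint handlers never race against the Kafka connection being established.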
Consuming Data from Kafka Topics
To consume data from our Kafka topic, we are going to create a notification service that listens for data coming from our topics and sends an email with either a verification code or success message depending on the topic it got the data from.
We create a consumer that connects to Kafka, where
The consumer object returned by
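The topic-based dispatch the notification service performs might look roughly like this; topic names and message shape are assumptions for illustration:

```javascript
// Build the email for an incoming Kafka message based on the topic it
// arrived on: a verification code for new registrations, a success
// message once verification completes.
function buildEmail(message) {
  const data = JSON.parse(message.value);
  switch (message.topic) {
    case 'user_registered':
      // first step of 2FA registration: mail out the verification code
      return { to: data.email, subject: 'Your verification code', body: `Your code is ${data.code}` };
    case 'user_verified':
      // registration completed successfully
      return { to: data.email, subject: 'Registration complete', body: 'Your account is ready.' };
    default:
      return null; // ignore topics this service does not handle
  }
}
```

Keeping this dispatch logic as a pure function makes the notification service easy to unit test without a running Kafka broker.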
Our two-factor authentication app demonstrates the communication pattern between only two microservices using Apache Kafka (there are other systems, such as RabbitMQ and ZeroMQ), but by decoupling communication between those services, we add flexibility for the future. For example, say we later add a recommendation service that needs to send out recommendations whenever a new user signs up; it simply subscribes to the
The post Using Apache Kafka for Asynchronous Communication in Microservices appeared first on the Codeship Blog.
|Onward and Upward Together|
As we turn the page on 2017, I’m proud to share that DigitalOcean had another tremendous year of rapid growth and strong profitability, a combination which few tech companies have achieved at our scale. We are rapidly approaching $200M in annual recurring revenue and are looking forward to celebrating our 6th anniversary next month. The key to our success is our disruptive offering — a cloud computing platform that is engineered with simplicity at the core — and our vibrant, growing developer community. We see a substantial and growing market need, and believe that DigitalOcean is perfectly positioned to lead this category in the years ahead.
While we have enjoyed great success since I co-founded the company in 2012, I believe we have barely scratched the surface. I’ve been reflecting on our next phase of growth and what it will take to reach our full potential, and it’s become clear to me that now is the right time to identify my successor as CEO of DigitalOcean.
I recognize where my strengths lie and where others will have more experience to give. With all of the exciting opportunities in front of us, including the possibility of an IPO (a long-term goal we have frequently discussed internally), I feel a seasoned executive will be best suited to guide the company through the next chapter of our journey. We have engaged a leading search firm to help us find a great leader: one who is inspirational, able to scale our operations beyond 1,000 people, evolve our go-to-market strategy, and help us reach our audacious vision; someone who can build a global brand and potentially help us become a publicly traded company with the simplest cloud platform for developers to run applications of any size.
Once we’ve identified this person, I’ll be taking on a new role as Chairman of the Board, which will allow me to support our company vision and strategy while working closely with the new CEO.
When Moisey, Mitch, Alec, Jeff, and I started the company in 2012, we left our families and friends in New York to join the Techstars program in Colorado. We slept on bunk beds and worked relentlessly pretty much every day until midnight. Finding product-market fit didn’t happen overnight and it took months of iterating and refining our product offering. We had 400 users when we graduated from the Techstars program, and while we knew we had developed something special, trying to raise venture capital at that time was a real uphill battle. We heard many “no’s” from investors along the way, but believed in our long-term vision.
After returning to a small office in New York City, we launched the first SSD virtual machine service with unprecedented price-to-performance on January 15th, 2013. We instantly went from signing up a couple of new users per day to more than 100. I vividly remember sitting at our kitchen table with the co-founding team, having to manually install SSDs into our servers to keep up with the demand. It’s been a humbling journey to say the least, and I could not have imagined the growth, success, and scale we would achieve only five years later. DigitalOcean has accomplished so many incredible things over the years and I know that our product, people, and operations have never been stronger.
Aug 9, 2012 - Mitch, Alec, Moisey, me and Jeff walking on stage for Techstars demo day
We have raised $123M from some of the world’s leading VCs that share our belief that the developer will lead the continuing technology revolution. Today, we have a team of 400-plus employees around the world with growing offices in New York, Cambridge, Mass., and Bangalore. Our user base has grown with us and last year we crossed one million users from almost every country in the world. Over the last few years, our product went from a single offering, Droplet, to a complete cloud platform. We are extremely proud to be one of the largest and fastest-growing cloud providers in the world.
I’ve always said that putting the business first and doing what is right for DigitalOcean is my highest priority. I’m making this decision knowing that DigitalOcean’s best days are still to come. We have never been in a better position to begin this transition. We have a great leadership team in place, the business has very strong momentum, and we are a clear leader in our industry. I’m confident that our new CEO will be able to rapidly build on this strong foundation.
No matter who our next leader is, one thing that definitely won’t change is our unwavering commitment to delivering the industry’s simplest cloud computing platform, while building one of the world’s largest developer communities. All of the core elements that have contributed to our success — the powerful simplicity of the product, the dedication and talent of the team, and the passionate community of developers that we serve — will remain the same.
I am tremendously excited about DigitalOcean’s future and the milestones ahead. I want to thank everyone who has helped turn our dream and passion into reality. The skills I have learned and friendships I have made while helping to build this company will last me a lifetime, for which I will be forever grateful and I couldn’t be more excited for the journey ahead.
Onward and upward together,
|Embracing the Chaos of Chaos Engineering|
Reading Time: 5 minutes
Modern applications are growing ever more complex, adding a dizzying number of moving parts, layers of abstraction, reliance on external systems, and distribution, all of which result in a stack that few truly understand.
Any developer worth hiring now knows the merits of a thorough testing regime, but one of the issues with testing is that you are often testing for predictable outcomes. Despite our ‘logical systems,’ show-stopping issues are typically unexpected; situations that no one foresaw.
These unforeseen eventualities are what chaos engineering attempts to account for. It’s a reasonably new principle, practiced by Netflix for several years and then formalized in 2015, setting out its principles in a time-honored manifesto.
Naturally, there are critics of the practice, and the comments at the bottom of this TechCrunch article summarize some of them. The typical counterarguments are that the principle is a band-aid for applications that were poorly planned and architected in the first place, or that it’s another buzzword-laden excuse to invent shiny new tools that no one knew they needed.
Still, its proponents are a friendly bunch, so in this article, I summarize my findings on the practice and let you decide.
In many ways, while the term ‘chaos’ is a good eye-catching phrase, it’s misleading, summoning images of burning servers and hapless engineers running around an office screaming. A better term is experimental engineering, but I agree that is less likely to get tech blog or conference attention.
The core principles of chaos engineering follow similar lines to those you followed in school or university science classes:
Early in the lifetime of chaos engineering at Netflix, most engineers thought chaos engineering was about “breaking things in production,” and it is in part. But while breaking things is great fun, it’s not a useful activity unless you learn something from it.
These principles encourage you to introduce real-world events and events you expect to be able to handle. I wonder if fully embracing the “chaos” might result in more interesting results, ie, measuring the worst that could happen. True randomness and extremity could surface even more insightful results and observations.
Let’s look at each of these steps in more detail.
1 – Form a hypothesis
To begin, you want to make an educated guess about what will happen in which scenarios. The key word here is “educated”; you need to gather data to support the hypothesis that you’ll share with your team.
Decide on your steady state
What is “steady” depends on your application and use case, but decide on a set of metrics that are important to you and what variance in those metrics is acceptable. For example:
When deciding on these metrics, also consider external factors such as SLAs and KPIs for your team or product(s).
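One way to make the steady state concrete is to express it as a set of metrics, each with a baseline value and an acceptable variance, and check observations against it. The metric names, baselines, and tolerances below are invented for illustration:

```javascript
// Check observed metrics against a declared steady state.
// Returns the names of metrics that drifted outside their allowed variance.
function steadyStateViolations(baseline, observed) {
  return Object.keys(baseline).filter(name => {
    const { value, tolerance } = baseline[name]; // tolerance is a fraction of the baseline
    return Math.abs(observed[name] - value) > value * tolerance;
  });
}

const baseline = {
  requestsPerSecond: { value: 1000, tolerance: 0.1 }, // +/- 10% is acceptable
  errorRate:         { value: 0.01, tolerance: 0.5 }, // +/- 50% of a 1% error rate
  p99LatencyMs:      { value: 250,  tolerance: 0.2 }, // +/- 20%
};
```

An experiment's abort condition can then be as simple as "the violation list stays non-empty for longer than N seconds."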
Introduce real-world events
The sorts of events to test vary depending on your use case, but common to most applications are:
Run in production
“Testing in production” has long been a tongue-in-cheek reference to an untested code base, but as chaos engineering is likely run in collaboration with a properly pre-tested code base, it takes on a different meaning.
The principles we’re working with here encourage you to undertake tests in production, or, if you have a genuine reason for not doing so, as close to production as possible. Chaos engineering principles are designed to identify weaknesses, so proponents argue that running in production is fundamentally a good thing.
Some banks are already following these principles, and while engineers behind safety-critical systems should be confident of their setup before embarking on chaos engineering, the principles also recommend you design each experiment to have minimal impact and ensure you can abort at any time.
While the most tempting hypothesis is “let’s see what happens” (much like “let’s just break things”), it’s not a constructive one. Try to concoct a hypothesis based on your steady state, for example:
2 – Communicate to your team
As a technical communicator, this is perhaps the most important step to me. If you have a team of engineers running experiments on production systems, then relevant people (if not everyone) deserve to know. It’s easy to remember engineers, but don’t forget people who deal with the public, too, such as support and community staff who may start receiving questions from customers.
3 – Run your experiments
The way you introduce your experiments varies: some come from code deployments, others from injecting calls you know will fail, or from simple scripts. There are myriad tools available to help simulate these; I’ve provided links to find them below.
Make sure you have alerting and reporting in place to stop an experiment if needed, but also to analyze results later.
4 – Analyze the results
There’s no point in running an experiment if you don’t take time to reflect on what data you gathered and to learn from it. There are many tools you probably already use to help with this stage, but make sure you involve input from any teams whose services were involved in the experiment.
5 – Increase the scope
After defining your ideal metrics and the potential effects on them, it’s time to start testing your hypothesis. Much like other aspects of modern software development, be sure to iterate these events, changing parameters or the events you test for.
Once you’ve tried one experiment, learned from it, and potentially fixed issues it identified, then move on to the next one. This may be introducing a new experiment or increasing the metrics of an existing one to find out where a system really starts to break down.
6 – Automate the experiments
The first few times you run an experiment, doing so manually is fine; you can monitor the outcome and abort if necessary. But you should automate your experiments as soon as possible, especially with teams that practice continuous deployment. This means the experiment can run whenever new factors are introduced into an application, and it also makes it easier to change input parameters for the scope of your experiments.
Again, the resources section below lists places to find tools to help with this.
While engineers and developers are divided on the usefulness of chaos engineering, the most interesting aspects to me are not the technical ones, but rather that it tests and checks ego.
The principles state in many places that if you are truly confident in your application, then you shouldn’t fear what they propose. They force you to put your money where your mouth is and (albeit in a careful and controlled way) prove your application is as solid as your confidence suggests. I can imagine many insightful debriefing sessions after a chaos engineering experiment.
Tools and Resources
|Announcing Codeship’s IP Whitelisting Feature|
Reading Time: 4 minutes
We’re pleased to announce that IP whitelisting and two-factor authentication are now generally available for all Codeship users — this is especially exciting for our Codeship Enterprise hybrid cloud CI/CD feature set, as they were the last two pieces of that puzzle.
Over the last couple of months, we’ve added support for self-hosted GitHub Enterprise, GitLab Community Edition and Enterprise Edition, as well as Bitbucket Server. However, a missing piece for many has been how to grant Codeship infrastructure access to those internal systems in a limited way.
With our IP whitelisting feature released today, this is now a much more manageable problem — you only need to open your firewall for traffic from eight fixed IP addresses.
A second piece of the puzzle has been to provide two-factor authentication (2FA) for those who sign up by email or cannot authenticate via their self-hosted git server. Read more about 2FA at the bottom of this post.
What Is IP Whitelisting?
Our IP whitelisting feature is a way for you to tell Codeship to only contact your servers from a small, fixed set of IP addresses. Without it, requests from Codeship’s build infrastructure could come from virtually any IP address in the range assigned by AWS to their us-east-1 data centers. Granting access to such a large range of potentially unknown sources presents an unacceptable security concern for many.
On a more technical level, enabling the IP whitelisting feature forces all outgoing traffic from Codeship services and build machines through a proxy or network address translation (NAT). Codeship services (connecting to your SCM, sending build status back, etc.) go through the proxy, while all requests made from a build machine go through NAT.
Why Should I Use IP Whitelisting?
There are a couple of scenarios where you’d want to use the IP whitelisting feature, but honestly, most users likely won’t need it. If you’re not hosting your git server behind a firewall or trying to access other services behind a firewall, you most likely won’t need it.
The scenarios where you will want to enable IP whitelisting are those where your git server is behind a firewall (ie, it has no publicly accessible IP address) or you’re using services that are not reachable via a public IP address. This could be deploying code to production servers in a VPC or your own data center, sending build artifacts to internal systems or file servers, running commands via SSH on servers behind a firewall, or a large range of other tasks that require the Codeship build server to communicate with something that doesn’t have a public IP address.
How Do I Enable IP Whitelisting?
Enabling the IP whitelisting feature is very simple.
With whitelisting enabled, you’ll see a list of IP addresses; these are the ones you’d want to allow access to your non-public systems. Exactly how to configure that access depends on your firewall setup, so it’s best to work with your network administrator.
When allowing access to the eight IP addresses listed in the box, you should also be mindful of what ports you want to give access to. For accessing a self-hosted git server, check the Self-hosted SCM documentation for details of the ports. For accessing other services, it really depends on your service, so unfortunately you’re a bit on your own there.
You’re welcome to ask for help in our community Slack channel though, and we’ll try our best to help you figure out what you need.
What Are the Risks of Whitelisting?
There are always some risks associated with allowing external traffic through a firewall. The firewall is there to keep internet traffic out, so naturally any “hole” through the firewall carries a risk.
In this particular case, the risk is extremely low. Only Codeship services can use the IP addresses listed, and even though another Codeship customer could theoretically call your firewall from their builds, they would still need to know what to call and how.
That said, the risk is there, so make sure that you secure your internal services adequately and don’t blindly trust any traffic that attempts to access your internal services. This is not specific to Codeship, but generally best practice when allowing outside traffic in through your firewall.
As mentioned earlier, one last piece of the hybrid cloud puzzle is making sure that users in a hybrid setup have the same security options as those on cloud git servers.
Two-factor authentication (or 2FA) is already a common feature and has been offered by GitHub, GitLab, and Bitbucket for some time. The main premise is that you’ll need your username and password, plus a temporary code that only you can access. This way, even if hackers were to get hold of your username and password, no one but you will be able to access your account.
At Codeship, we’ve decided to go the route of requiring an Authenticator app on your personal device (phone, tablet, etc.), which will generate unique, one-time-use codes that act as the second factor in authenticating you. To learn how to set up and configure 2FA for Codeship, see our 2FA documentation.
|Video Series: Modernizing .NET Apps for Developers|
Docker Enterprise Edition (EE) is the container platform for modernizing your existing applications, and running them in the cloud or on-premises. You can take monoliths and run them in containers with no code changes, and that gets you portability, security and efficiency.
Running in Docker is also a great starting point for modernizing the application architecture. You can break down the monolith into smaller, independent components, which makes it easier to deploy updates, manage scale, and introduce new technologies.
This new video series covers app modernization, for .NET developers and architects. It walks through the evolution of a monolithic ASP.NET 3.5 app to a distributed application running across multiple containers, using the Docker platform to plug everything together and adding features with great open-source software from the Docker ecosystem.
This is not a full re-architecture to microservices; for large .NET apps, that would be a 12-month project. This series uses a feature-driven approach, taking key features out of the monolith to fix performance issues, add new functionality, and support fast application updates.
Part 1 introduces the series, talks about what “modernization” means and then gets started – this is a very demo-heavy video series, where you’ll see lots of code. You’ll learn in Part 1 how to run .NET Framework apps in containers on Windows – using Docker to compile and package the app, and how to share your apps on a registry like Docker Hub.
The demo application is an ASP.NET WebForms app which uses SQL Server for storage. Part 2 shows you how to run SQL Server databases in Docker containers on Windows, packaging the schema so containers start with the database already deployed. You’ll also see how to optimize Dockerfiles for deploying .NET apps, and how to integrate logging and configuration with the Docker platform – still without changing code.
Part 3 starts modernizing the application architecture, extracting the web application’s save feature into its own container. The web app publishes an event to a message queue when users save data, and a message handler listens for those events and saves the data to SQL Server. The monolith now has an event-driven architecture, with all the components running in Docker containers, and that supports independent scaling and updates for different features.
Part 4 takes advantage of the new event publishing mechanism to add self-service analytics for users, powered by the open-source Elasticsearch stack. The new reporting database and analytics UI are run in Docker containers, and there’s a new message handler which stores data in Elasticsearch. You’ll learn how Docker EE lets you add features like this with a zero-downtime release to production.
The series ends with part 5, which looks at breaking up the web UI. The homepage for the app is taken out and run in a separate container, which means changes to the UI and UX of the homepage can be released without regression testing the monolith. You’ll learn how to break up UI components and see how Docker enables fast, safe application releases.
Modernizing .NET Apps for Developers shows you how to take an existing .NET application, run it in a container, and use Docker EE to modernize the architecture. The source code for the app is on GitHub and all you need to follow along is Docker for Windows on Windows 10, or Docker EE on Windows Server 2016.
Every part of the series shows you how to run the app locally on your dev environment, and also shows you the path to production – deploying to a highly-available Docker Enterprise Edition cluster in the cloud. By the end of the series you’ll be ready to start using Docker to modernize your own .NET applications.
Learn More about Modernizing Traditional Apps with Docker EE:
The post Video Series: Modernizing .NET Apps for Developers appeared first on Docker Blog.
|Continuous Delivery for Static Sites|
Reading Time: 15 minutes
This article was originally published on Pete Hodgson’s personal site. With their kind permission, we’re sharing it here for Codeship readers.
In this article, I’ll show that the typical approach for deploying sophisticated static sites is not sufficient and explain how we can apply the principles of continuous delivery to improve the situation. We’ll then walk through a worked example, building a fully functional continuous delivery pipeline for a single page React application.
The Rise of the Static Site
This provides a very attractive option for building out an application without the operational complexity of deploying and managing server-side infrastructure. There are fewer moving parts, and operational worries become Someone Else’s Problem when you’re delivering your static site using something like Amazon S3.
Change Management for Static Sites
With these modern capabilities, client-side apps have evolved from a set of simple static HTML pages into sophisticated single-page web apps. However, change management for these static sites has not evolved at the same pace. Deploying a production change is often still done in a very unstructured way. Perhaps with a developer running through a set of manual build steps and then pushing some artifacts directly to a production S3 bucket.
These types of ad-hoc processes might make sense for deploying a simple change to some static content, but for a full-fledged app they are problematic. Any production change involves manual work from one of a small set of “experts” who are able to perform the correct incantation of processes. Code changes are not always robustly tested and not always consistently applied. There’s usually no way of knowing what’s been deployed into a given environment, or when.
The Case for Continuous Delivery
A decade or so ago, this situation was the norm for software deployment generally. What changed in the deployment of backend systems was a move toward continuous delivery practices. A strong focus on automation along with the formalization of a delivery pipeline helped to solve a lot of these issues for backend code. The same ideas can be easily applied to static sites.
We can replace manual processes with automated ones and replace poorly documented deployment steps with a declarative software delivery pipeline. In doing so we achieve a predictable, consistent deployment process which performs the exact same sequence of steps every time. We are able to test a code change in a pre-production environment with confidence that it will behave the same in our production environment.
By including automated testing as part of our delivery pipeline, we ensure that our changes are always tested before being deployed. By having machines rather than humans perform deployment steps, we reduce the time taken to deploy and can increase the rate at which we deliver changes to our users.
In this post, I’ll present an example of how we can do all of the above for a simple static site. We’ll build a simple but functional continuous delivery pipeline for a small single-page app hosted out of Amazon S3. We’ll end up with a fully automated deployment process which is:
A Deployment Pipeline For nxtbrt.com
We’ll be working on the deployment infrastructure for a real site. We’ll be deploying nxtbrt.com, a side project of mine. It’s a simple transit schedule app for the San Francisco Bay Area’s BART system, implemented as a single-page React application. This app runs entirely client-side, pulling live schedule information directly from BART’s real-time API. There’s no need for a backend service, which allows our app to be deployed as a static website hosted out of an S3 bucket.
You can browse the source code for nxtbrt.com, as well as its CD configuration, on GitHub.
Any code change for this app goes through a fairly typical sequence of steps, which eventually lead to a new deployment of nxtbrt.com in production. First we run automated tests against the code. We then build a deployable version of the app, and then deploy it to a staging environment. After some light manual acceptance testing we then deploy the same build to our production environment.
We’ll be creating a continuous delivery pipeline that models this delivery process as the following sequence of pipeline steps:
Any code change that lands on our master branch will be moved automatically through each step in this pipeline, although we’ll introduce a manual gate before the production deployment. This will allow us to perform manual testing of the code change in staging before it’s signed off and deployed to prod.
Implementing with Codeship
We’ll be implementing this CD pipeline using Codeship, a hosted continuous delivery platform. Specifically, we’ll be using Codeship Pro, which uses Docker containers to isolate each step of the build pipeline.
This provides a nice repeatable build environment — our steps run in the exact same computational environment each time. It also makes it easy to run a pipeline step locally in almost the exact same environment it would run in within the CD platform.
Codeship Pro follows the pattern of representing the entire pipeline configuration via configuration files that live in the same repo as the app itself. This gives us version control of our build pipeline and allows us to change both the app and the way it’s built within a single, atomic code commit.
Defining our CD pipeline
Step 1 – Test
To get started, we’ll set up a really basic single-step pipeline which runs our app’s automated tests whenever we check in code. We start by adding a simple test.sh script to our codebase:
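The inline script itself is not shown here, but based on the surrounding description, a minimal version, assuming a create-react-app project (the exact commands are a reconstruction), would be:

```shell
#!/bin/bash
set -eo pipefail   # administrivia: fail the build step if any command fails
CI=true npm test   # run the test suite once, non-interactively
```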
Our script starts with a shebang, which indicates that this script should be run using the
With our scripting administrivia out of the way, the next line in our script runs
This script encapsulates all that’s necessary to run our automated tests. Next we need to configure Codeship to run it as a build step. We configure Codeship’s build steps by placing a
This file defines a build step called
Codeship runs every build step within a Docker container, which it refers to as a service. We’ve specified that our
This tells Codeship that our
With these three files (
Next we will set up a full deployment pipeline, where any change which lands on our master branch will automatically be deployed to a staging environment (assuming it passes our test suite of course).
Step 2 – Build
The first thing we’ll do to automate our deployment is create an additional pipeline step which will package up our app into a set of static files optimized for deployment into a production setting.
Our app is a simple single-page react app, and because it was created using create-react-app we already have most of what we need to create an optimized set of static files. We just need to do some simple orchestration, via another shell script:
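The script itself is missing from this copy. A plausible sketch, assuming a create-react-app project (the file name build.sh is also an assumption):

```shell
#!/bin/bash
# Hypothetical sketch of the build script described above.
set -e

create_production_build() {
  npm install      # install the app's dependencies
  # create-react-app's build script writes minified, fingerprint-hashed
  # static files into ./build
  npm run build
}
```

The real script would end by calling create_production_build; the function wrapper just keeps this sketch side-effect free.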
This script installs our app dependencies, then uses the
Finally we add a
We want to run this script as a new pipeline step. That’s as simple as adding a new section to our
With these two changes checked in, our pipeline will now not only run tests but also create a deployable production build of our application. But we’re not yet deploying those files anywhere. Let’s work on that now.
Step 3 – Deploy to staging
Just like with our other pipeline steps, the first thing we’ll do is create a script:
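The deploy script's contents were also lost from this copy. Given that the site is deployed as static files to S3, a sketch might look like the following (the bucket name and file layout are assumptions):

```shell
#!/bin/bash
# Hypothetical sketch of the staging deploy script; assumes the AWS CLI is
# available in the build container and that "staging.nxtbrt.com" is the
# staging bucket (an assumed name).
set -e

deploy_to_staging() {
  # --delete removes files from the bucket that are no longer in the
  # build output, so the bucket exactly mirrors the deployed artifact
  aws s3 sync ./build "s3://staging.nxtbrt.com" --delete
}
```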
The heavy lifting in this script is done via the
Now let’s add that script as a pipeline step:
Note that we’re using a new Codeship service called
There are a few things going on with this new service. It’s configured to use a Docker image called
If we had used the
Last, we’re using encrypted environment variables to pass AWS credentials to the
With these new configuration changes, our CD pipeline is now set up to automatically test our code and then deploy it to our staging environment. There’s one little problem though.
By default, Codeship will run the pipeline against any commit that is pushed to our code repository. But we don’t want every branch that receives a push to be deployed to staging; we only want staging to contain the latest from our master branch, and no other branches.
This is easy to fix. Codeship allows us to limit a step to only run for commits on a specific branch, like so:
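In Codeship Pro this is expressed with the "tag" key on a step, which matches branch (or tag) names by regex. A sketch, with the step and script names being assumptions:

```yaml
# codeship-steps.yml (sketch; step and script names are assumptions)
- name: deploy-staging
  service: deployer
  tag: master            # only run this step for commits on master
  command: ./deploy/deploy_staging.sh
```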
Note the new
Step 4 – Deploy to production
Let’s get our code deployed into production. We’ve already configured a separate S3 bucket as the static host for our production site. We could just copy-paste our
We now simply need to modify our step configuration to use this new generalized script for both staging and production deployment:
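The modified steps configuration was lost from this copy. A sketch of what it might look like, with the generalized script taking the target bucket as an argument (step names and bucket names are assumptions):

```yaml
# codeship-steps.yml (sketch; names are assumptions)
- name: deploy-staging
  service: deployer
  tag: master                # commits on master deploy to staging
  command: ./deploy/deploy.sh staging.nxtbrt.com
- name: deploy-production
  service: deployer
  tag: production            # commits on the production marker branch
  command: ./deploy/deploy.sh www.nxtbrt.com
```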
You may have noticed the strange
Continuous Deployment Versus Manual Acceptance Testing
For this app, we aren’t confident enough to implement full continuous deployment, where every code change that passes automated tests is automatically pushed all the way through to production.
Instead we want to do some manual acceptance testing of a change (or batch of changes) in our staging environment, and then manually push them through to prod once they’ve passed muster. However we still want all of our change control to be managed via our CD pipeline — we don’t want deployments to prod to be run from some separate system.
Implementing this type of manual UAT (user acceptance testing) from within a CD pipeline is usually done via a manual gate sitting in front of our production deployment. After manual testing is done, an operator would push a button in the CD system to mark the change as approved, triggering the pending production deployment step.
Codeship doesn’t have that capability, so we are using a somewhat hacky workaround in the form of a marker branch. Once a set of changes has been manually approved, an operator can trigger a production deployment by simply fast-forwarding the
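The branch name did not survive in this copy; assuming the marker branch is called "production", the promotion is a one-liner:

```shell
# Hypothetical sketch: fast-forward the "production" marker branch
# (assumed name) to the commit currently at the tip of master, which
# triggers the pending production deployment step in the pipeline.
promote_to_production() {
  git fetch origin
  git push origin origin/master:production
}
```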
With this final set of file changes, we are done. We have created a fully functional continuous delivery pipeline for our little static site. Any code change that lands on our master branch is automatically tested and deployed into a staging environment within a few seconds. After someone has manually verified a change, it is a one-line command to promote the exact same code into production.
Our deployment process is now:
Pro Tips for Maintainable CD Infrastructure
This worked example has included some tricks and techniques which I’ve picked up while building CI/CD pipelines over the years. I’ll briefly highlight a few here.
Separate build from deploy
It’s a good idea to have a distinct step in your pipeline which is solely responsible for creating a deployable artifact, along with separate steps which are responsible for taking that artifact and deploying it into a specific environment.
In our simple example, our
Having a distinct build step forces you to create a well-defined deployable artifact that is not specific to any one environment. With that artifact defined, you can then create deployment steps that all work off of the same deployable artifact. This provides confidence that the artifact you’re deploying to production is the exact code that has been validated and tested earlier in the pipeline — one of the big wins from CD.
Version-stamp your artifacts
A related tip is to always include something in every deployable artifact which lets you trace its provenance. I typically do this via a
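The specific mechanism is elided in this copy. One common approach, sketched here with an assumed file name, is to write a small version manifest into the build output during the build step:

```shell
#!/bin/bash
# Hypothetical version-stamp step: records the artifact's provenance.
# The file name build/version.json is my choice, not the article's.
set -e

mkdir -p build
# Fall back to "unknown" when not running inside a git checkout.
GIT_SHA="$(git rev-parse --short HEAD 2>/dev/null || echo unknown)"
BUILD_TIME="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
printf '{"gitSha":"%s","builtAt":"%s"}\n' "$GIT_SHA" "$BUILD_TIME" > build/version.json
```

With this in place, fetching /version.json from any environment tells you exactly which commit is deployed there.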
Prefer scripts to pipeline configuration
I always strive to keep my CD tooling configuration as minimal as possible, particularly when it comes to the actual work being done by each step in the pipeline. The ideal is what we have in this worked example — the CD configuration just contains a reference to a script, and all the real work is encapsulated behind that script.
The main motivation for this is to keep your build infrastructure testable and debuggable. I want to minimize the number of instances where the only way to test a build change or debug a build breakage is by running something within my CD infrastructure. That’s tedious, forces a really slow feedback loop, and is prone to error. It’s much better to be able to run the relevant script on a local workstation (or in a Docker container) to reproduce an issue or test out a local change before committing.
Treat your infra code with respect
It’s tempting to succumb to the fallacy that your build pipeline isn’t “real” software, it’s “just” scripts and configuration. In fact, this is code that will be maintained and extended over time, just like “regular” production software. You can apply the same principles which keep “regular” code maintainable to your infrastructure code.
Strive for small, well-documented modules. Organize your scripts into directories and give them good names. Extract shared functionality into helper functions. Treat copy-pasted code as a smell.
By their nature, CD pipelines involve low-level access to production infrastructure. Along with this power comes the responsibility to minimize the potential for a badly behaved script to do damage or expose sensitive data.
We can partially mitigate this risk by applying the principle of least privilege, along with fine-grained access control. The authentication credentials provided to a deployment process should provide the minimum set of permissions necessary for that process. A deployment script that updates content in an S3 bucket shouldn’t have permission to modify network settings, read databases, or create EC2 instances. A deployment script which is updating a staging environment shouldn’t have any access to production systems.
We’ve created a basic CD pipeline for nxtbrt.com in this post. There are plenty more details I could go into — how to configure S3 buckets correctly, how to configure CloudFront, DNS, and SSL, how to create an appropriately restrictive IAM role for your deployment scripts, etc, etc. However there are also a lot of other articles out there on the internet covering the details. My focus was on how to drive this type of deployment from a CD pipeline.
“Continuous Delivery for Static Sites” via @ph1
|Meet the DigitalOcean Brand Design Team|
As a company, we’ve always cared about contributing to developer culture in an authentic way, and one of the ways we do that is by adding moments of visual delight to everything we do, whether it's a Community tutorial, an interaction in the control panel, or a T-shirt at a conference. That is why, from the very beginning, DigitalOcean put an emphasis on building out a Brand Design team comprised of not just proficient graphic designers, but brilliant illustrators as well.
The Brand Designers at DigitalOcean are challenged every single day to transform extremely technical and esoteric content into approachable and friendly touch points. Lead Visual Designer Masami Kubo says, “We believe these technologies should be accessible to everyone, and a part of that is acknowledging and celebrating the diverse and quirky personality behind the humans that build these amazing things. Visuals and branding throughout the cloud computing industry are often disregarded or unconsidered, so it’s a unique opportunity for us as designers to bring that culture to life.”
We interviewed DO’s Brand (Visual) Designers Kasia Bojanowska, Masami Kubo, Pat Raubo, and Alex Mostov to learn more about their design process, how they illustrate technical concepts, and where they turn to for inspiration.
How do you approach technical topics as illustrators?
Masami: We’ve been illustrating technical topics for years, so the challenge now is how to keep it fresh and relevant. However, if we make the imagery too conceptual or meta, we run the risk of none of it making any sense to our audience. My approach now is to identify the primary action or message behind complex concepts, and focus on making that one thing really clear. I like to start minimal, then add elements sparingly to not distract from the primary message.
Alex: I came to the DigitalOcean team without much technical knowledge. In some ways I think this has actually been an advantage in creating conceptual illustrations. I create images that help me understand the concepts. I think and hope that inherently makes them more intuitive to others, too.
Where do you draw inspiration from for your designs?
Kasia: When starting a new project I definitely try to spend a good chunk of time looking for inspiration. Google image search, Pinterest, Dribbble, and Behance are all wonderful resources for that. We have a few shared Pinterest boards with stuff we like. I also get really inspired when I see great work being made by others on our team.
Pat: One of the benefits of working with a team of such enormously talented designers is that I draw inspiration from them and their work all the time. Masami and Kasia both do amazing work, and I’ve learned a great deal from both of them, as well as from Alex. I try to seek out inspiration from a number of things. Some have a pretty clear association with the kind of work we do at DO, like design and illustration done specifically for tech, but I also draw from editorial illustration, film, comics, and book covers, among other sources.
Illustrations by Kasia Bojanowska, Patricia Raubo, & Alex Mostov
How do you come up with new ideas for similar technical topics?
Masami: I think it actually helps for imagery with similar technical topics to have a common thread of imagery, so as to build a visual association. We have strict style guides for most of our platforms and campaigns, but some of these style guides allow for permutation in aesthetics to avoid looking too repetitive over time.
Pat: I like to first do some research to understand the basic concept of what I’m going to illustrate, and then add to my notes with simple schematics and/or sketches to see if there’s anything I can pull from those for the final visuals.
Alex: I will often try to think about representing a topic in a different kind of space or world. For example, if I create an image for a topic in a 2D space, the next time I will try to figure out how I could represent that same concept in a 3D space or from a different perspective.
What is one of your favorite projects you’ve worked on at DO thus far?
Pat: I worked on a series of illustrations for our Employee Handbook, which meant drawing a team of cute sea creatures in an office setting. I really enjoyed working on that project, and it was great to see people respond to the illustrations in such a positive way.
Masami: My favorite projects are often also the most challenging ones. And usually the more ambitious they are, the more compromises on vision I’ve had to make. But some of the most exciting stuff I’ve worked on here is the art direction and design of our office spaces, in collaboration with architects, fabricators, and our People team. I was expected to transform the space into a branded and navigable experience. It’s still a work in progress, but I love the challenge of designing for physical spaces.
Murals by Alex Mostov & Masami Kubo
What was one of the most challenging projects you’ve worked on at DO?
Kasia: Redesigning the DO logo was definitely the biggest challenge for me. The process was pretty high pressure but I was allowed enough time to really let myself explore and dig in deep. In this case having a supportive team to brainstorm and keep motivation high through all of the iterations was essential.
Masami: We did a design refresh of the marketing site a year ago, and it went through a lot of changes and pushback. The task was simple—refresh the designs and clean up the performance—but it involved approval from every department and stakeholder in the company. I was doing everything from art direction to web design layouts to spot illustration. I learned a ton about project management and designing within web accessibility standards, thanks to Una Kravets. I felt creatively drained after the project was finished, and didn’t think it would be possible to revisit it with new ideas. Surprisingly, I am now leading a complete design overhaul for the marketing site, and I feel more equipped than ever to tackle all the challenges and make something more beautiful and smart than last year.
Sometimes you create visual assets that are targeted at a very specific audience, and you have to balance things like humor with cultural sensitivities. How does localization factor into your designs?
Masami: Part of our job is being aware of and sensitive to any imagery that might have harmful or negative impacts on our community. We are fortunate to have a diverse employee base that cares about these things, so the more opinions we can gather, the better. We try to treat branding in other countries the same as we do here. However, we do want to highlight our growing global coverage, so one way we approach this is to celebrate the unique design culture local to these countries. For example, the Frankfurt datacenter launch campaign featured designs inspired by Bauhaus Constructivist design. For the Bangalore datacenter launch, we created stylized renditions of local architecture. Being a developer from another country doesn’t necessarily mean you have vastly different tastes or interests, so it’s important for companies and designers to address these things authentically.
How do you create different kinds of content while maintaining brand consistency?
Kasia: For illustrations, we keep a consistent color palette. We have a list of prompts to help us throughout the process, but we do not have a very strict style guide when it comes to editorial illustration. We tend to have more fun and variation with all of our community and conference designs. However, we are definitely more strict about stylistic consistency when it comes to our website design.
Like much of DO, the Brand Design team is distributed across the world. What systems or processes do you have in place that allow for open communication and collaboration?
Where do you see the DO brand going?
Masami: When I first joined DigitalOcean in 2014, the company was breaking into the cloud computing world by differentiating itself as friendly and accessible. At the time that meant being extra illustrative and bubbly with our designs. We wanted to let the developer community know that their content and culture deserves this kind of attention. That attitude and core value is still what drives every decision, but our aesthetics have matured and evolved just as our products and features have grown. The brand now has a diverse voice ranging from playful and young to mature and sophisticated, all under the same goal of enabling the developer community. I think this range directly reflects the diversity of users we want to speak to.
Alex: I really like DO’s brand evolution because I feel like the changes are made based on need and effectiveness rather than just trying to make a splash. I think the brand will continue to change in this deliberate way as the community and product develop. I also hope it will always maintain the sense of playfulness that I think makes DO special.
What is your best advice for designers just starting out?
Pat: I would encourage aspiring creative folks of any stripe to always stay curious (as cliched as it may sound, it’s advice I’ve followed that I feel has served me well) and seek out inspiration from a range of sources (museums, books, online communities, whatever floats your boat!), because you never know what’s going to be the seed that becomes the root of a fantastic idea. Feeding your mind will give you perspective and enrich your work.
That said, don’t wait around for inspiration to strike, either! It’s best not to be too precious about your work. Just sit down, make the thing, and make it to suit your standards. Then, when you think it’s done, work on it just a little bit more. Keep learning, and push yourself a bit more with each new project.
Do you enjoy our designers' creations? Download desktop wallpapers from some of their favorite illustrations.
|Security advisory: Log token allowing access to other endpoints|
In November last year the token issued for displaying job logs on Travis CI was incorrectly valid for two other endpoints. This issue was discovered on Wednesday 14 February, 2018 and fixed the next day. Further investigation into historical data found no successful attempts at fetching non-log endpoints with the log token. This indicates the issue was not exploited in any way.
When displaying a job’s log on Travis CI we also show a button to retrieve a raw version of the log. If a repository is private, the link to the raw log contains a log token for convenience so that, for example, the log can be opened in a new browser tab. The token is valid for 24 hours and should only be valid for that specific log.
In November last year we were updating the V3 version of our API to include the
log token in the
The problem was caused by using our permissions system in the wrong way. Instead of checking the log’s visibility for a user, we reused a function that checked the job’s visibility. On top of that, the log token isn’t scoped to specific endpoints, which allowed it to be used on any endpoint that only checked for job visibility. We’ve fixed this by using a fine-grained log visibility check and by adding more tests covering this area of the code.
As mentioned above, the issue was discovered on 14 February and resolved the following day. Going forward, we plan to revisit our code to allow tokens to be scoped to specific endpoints, which would make it much harder to introduce such issues in the future. We will also review the permissions system to confirm that there are no other places where permissions checks are too generic.
If you’re concerned about the issue, don’t hesitate to contact us at firstname.lastname@example.org.
|Docker Compose and Kubernetes with Docker for Desktop|
If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows Desktop), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been using Docker on their Macbook or Windows laptop because they now have a fully compliant Kubernetes cluster at their fingertips without installing any other tools.
Developers using Docker to build containerized applications often build Docker Compose files to deploy them. With the integration of Kubernetes into the Docker product line, some developers may want to leverage their existing Compose files but deploy these applications in Kubernetes. There is, of course, Kompose, but that’s a translation layer which causes you to have two separate artifacts to manage. Is there a way to keep a native Docker-based workflow?
With Docker on the desktop (as well as Docker Enterprise Edition) you can use Docker Compose to directly deploy an application onto a Kubernetes cluster.
Here’s how it works:
Let’s assume I have a simple Docker Compose file like the one below that describes a three-tier app: a web front end, a worker process (
Notice that our web front end is set to route traffic from port 80 on the host to port 80 on the service (and subsequently the underlying containers). Also, our words service is going to launch with 5 replicas.
services:
  web:
    build: web
    image: dockerdemos/lab-web
    volumes:
      - "./web/static:/static"
    ports:
      - "80:80"
  words:
    build: words
    image: dockerdemos/lab-words
    deploy:
      replicas: 5
      endpoint_mode: dnsrr
      resources:
        limits:
          memory: 16M
        reservations:
          memory: 16M
  db:
    build: db
    image: dockerdemos/lab-db
I’m using Docker for Mac, and Kubernetes is set as my default orchestrator. To deploy this application I simply use docker stack deploy providing the name of our compose file (words.yaml) and the name of the stack (words). What’s really cool is that this would be the exact same command you would use with Docker Swarm:
$ docker stack deploy --compose-file words.yaml words
Stack words was created
Waiting for the stack to be stable and running...
 - Service db has one container running
 - Service words has one container running
 - Service web has one container running
Stack words is stable and running
Under the covers the compose file has created a set of deployments, pods, and services which can be viewed using kubectl.
$ kubectl get deployment
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
db      1         1         1            1           2m
web     1         1         1            1           2m
words   5         5         5            5           2m

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
db-5489494997-2krr2      1/1     Running   0          2m
web-dd5755876-dhnkh      1/1     Running   0          2m
words-86645d96b7-8whpw   1/1     Running   0          2m
words-86645d96b7-dqwxp   1/1     Running   0          2m
words-86645d96b7-nxgbb   1/1     Running   0          2m
words-86645d96b7-p5qxh   1/1     Running   0          2m
words-86645d96b7-vs8x5   1/1     Running   0          2m

$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
db              ClusterIP      None            <none>        55555/TCP      2m
web             ClusterIP      None            <none>        55555/TCP      2m
web-published   LoadBalancer   10.104.198.84   <pending>     80:32315/TCP   2m
words           ClusterIP      None            <none>        55555/TCP      2m
If you look at the list of services you might notice something that seems a bit odd at first glance. There are services for both web and web-published. The web service allows for intra-application communication, whereas the web-published service (which is a load balancer backed by vpnkit in Docker for Mac) exposes our web front end out to the rest of the world.
Now, if we wanted to remove the application, you might think you would remove the deployments using kubectl (I know I did). But what you actually do is use docker stack rm, and that will remove all the components created when we brought the stack up.
$ docker stack rm words
Removing stack: words
$ kubectl get deployment
No resources found
And, to me, the cool thing is that this same process can be used with Docker EE – I simply take my Compose file and deploy it directly in the UI of Docker Enterprise Edition (EE) – but that’s another post.
Use Docker Compose to deploy a multiservice app on #Kubernetes w/ #Docker for Mac by @mikegcoleman
The post Docker Compose and Kubernetes with Docker for Desktop appeared first on Docker Blog.
|4 Tips for Successful Remote 1-1s|
Reading Time: 7 minutes
Remote work is commonplace, especially among development teams and startups. The effects of skipping the commute and controlling your workspace are usually reflected in a productivity boost and a better work-life balance, but these advantages do come with some drawbacks.
Team dynamics are complicated enough even when everyone can grab lunch together. It’s often the “relationship” side of your team that will need some different techniques in a remote setting. If you’re a manager, building strong relationships with your direct reports is paramount for both their success and yours.
At Codeship, I manage a fully remote team, and our employees are scattered across Europe and North America. I found that much of the usual advice for 1-1s — hold them biweekly or monthly, make it a walk-and-talk, keep it tech-free — just doesn’t work for remote teams. Our set of constraints necessarily leads to some interesting solutions.
So don’t let the limitations of wifi bandwidth, your video camera, and your laptop screen get you down. Here are four tips for leading successful 1-1s with your remote teams.
“Much of the usual advice for 1-1s just doesn’t work for remote teams.” via @rhein_wein
1 – Video Is a Must
There’s no substitute for sitting down next to someone over a coffee, but a video call is a pretty good approximation! Facial expressions and nonverbal cues can give you the chance to respond to or correct something before it gets out of control. It’s easy enough to hide emotions over a voice call (we’ve all seen Ferris Bueller’s Day Off, right?), but video gives a deeper window into what’s really going on with someone.
Video also helps humanize your team. When teams interact with each other mainly over text, it’s sometimes easy to forget there’s a human on the other end of that screen. Simply seeing the faces of our team goes a long way for shared understanding, cooperation, and empathy.
It also avoids miscommunications when talking about more contentious subjects. If someone tells me they’re disappointed in the outcome of a project and that we need to correct course, it’s much easier to be at ease if I can see their calm expression on video. It’s sometimes hard to assume best intentions without a face to back it up.
“Video for remote 1-1s minimizes miscommunication on contentious subjects” via @rhein_wein
While lots of remote employees love that they don’t have to commute, that doesn’t mean they don’t miss the social aspects of working in an office. Some remote employees still crave those in-person interactions and feel disconnected from their team if they don’t share some non-transactional human experiences on a regular basis. While not a perfect replacement for the watercooler, a video call can help strengthen relationships and build camaraderie by letting everyone see each other’s faces.
I’ve worked on teams where we had a “no video” policy, which sometimes helped us feel more at ease with spontaneous calls or pairing sessions (after all, it might be easy to throw on an office hoodie for some of my coworkers, but my grooming routine is a bit more complex, so spontaneous video calls aren’t really my thing!). Our engineering team at Codeship uses the opposite policy, where video is an expectation. It might mean that we need a bit more lead time for calls, but the quality of the communication is significantly better with video on.
I don’t mean to suggest that video can completely replace the need for in-person meetings. Relationships grow stronger when people have a chance to interact in person, so team meetings and retreats are crucial. But once you have a solid foundation for a relationship, whether it’s with your direct report, a peer, or your own boss, using video will help to keep it strong.
2 – Embrace Your Laptop Screen
So many best practices for 1-1s encourage you to close your laptops and be “in the moment.” For remote teams, this simply isn’t an option.
By their nature, it’s not possible for your remote video 1-1 to be a tech-free zone. But choosing a great video conferencing tool can make a big difference when it comes to eliminating distractions.
My team recently made the switch from Google Meet to Zoom, and it’s made a measurable positive impact on both the technical quality of the call and what we’re able to discuss. Each attendee can decide which layout they prefer for video display (a Hollywood Squares-esque grid or just having the speaker on jumbotron). The big change-maker is that if you share your screen, you stay on video. This makes discussions around documents more fruitful since all participants are still on camera, and it eliminates that awkward feeling where one person feels oddly monitored while the other’s video is hidden.
I don’t love Slack calls for 1-1s, mostly because Slack is the center of the distraction universe. Zoom has built-in chat functionality, so I make it a habit to close Slack during 1-1s and rely on that instead.
And if your report’s eyes are bouncing all over the screen (or you can see they’re browsing Twitter in the screen reflection in their glasses, which is my favorite managerial secret), it’s a pretty reliable indicator that they perceive the 1-1 as a waste of time. It’s on you to call attention to it and figure out how to fix it.
“It’s up to the manager to ensure that remote 1-1s are useful.” via @rhein_wein
3 – Keep Shared Notes
The chances for miscommunication over video chat are higher because you’re quite literally only getting part of the picture. To avoid “What, you never told me that!” conversations in the future, make it a point to keep shared notes during your 1-1s.
Google Docs is a good format for shared notes. I usually set them up by date and then use the “outline” feature to see an overview of all the meetings. For the notes themselves, just a list is fine, but make sure to mark which person is suggesting the topic. Not only does having an agenda help you keep the discussion balanced, but it can also show you trends developing over time.
Has it been a couple weeks since your direct report came to the meeting with anything to talk about? It’s time to start a conversation about it, hopefully made easier by the fact the evidence is right there in front of you.
“Shared notes let you see who has said what–and who hasn’t said anything for a while.”
Since you’ll both need to have your laptops open for a video call (and you’re definitely doing your 1-1s over video, right?), weaving in notetaking with conversation can be pretty natural, but only if you’re using a tool that won’t prevent you from seeing the other person while you’re typing or reading the doc.
It’s especially important when delivering critical feedback, setting goals, or any other exchange where an action item is communicated. It also gives instant peer review. If your report writes something that isn’t what you thought was agreed on, it’s easy to spot and correct it before a week goes by.
4 – Engage More Often Than You Think You Need To
Especially in a remote team, it’s easy for all kinds of things to be left unsaid because the “communication tax” is higher. Blocking off time for your reports on a weekly basis keeps your office door open.
This applies to positive and unpleasant things alike. Having more frequent 1-1s over voice and video can reduce the overhead for quality communication and gives you insight into how your reports are doing. What might be uncomfortable or awkward to bring up over Slack can be easily detectable via facial expressions, tone of voice, and body language.
It’s hard to hide emotions when you’re face to face — even if that face is on a computer screen and 2,000 miles away.
When everyone’s not co-located in the same office, it’s often easy to miss out on the non-work aspects of someone’s life that impact their workday in a significant way. There’s no way to tell that your colleague is stepping away every hour to hear an update about an ailing grandparent, or that they’re really not feeling well and trying to stay at work for the sake of the team.
“Frequent 1-1s help remote managers understand when non-work life impacts someone’s workday.”
Part of your job as a remote manager is to create a relationship where your direct report would come to you and share those details, but that won’t happen overnight. Having frequent 1-1s makes it easier to strengthen that relationship and to pick up on those nonverbal cues that lead to deeper conversations.
And this “face-to-face” communication doesn’t need to stick to the confines of a 1-1. At Codeship, we use video for every call, whether it’s an all-hands meeting, sprint planning, or a project meeting. Chat conversations can turn into spontaneous 1-1s as well, and that “presence” with your team is helpful to enforce your availability and willingness to jump in to assist wherever needed.
If you’re having a 1-1 with someone who isn’t operating within your normal business hours, it’s best to share the inconvenience of working a bit outside the normal workday. Working in the EU, I’m usually game for a meeting that goes until 7:30 pm, because I know that my US counterparts will also jump on a call at 7:30 am if they need to.
Curious for more tips on remote work? Zapier has a rich guide for managing remote teams available on their site. Of course, remember that these tips might need some adaptation to work with your team. Teams are immutable human infrastructures, and each one is different!
“4 Tips for Successful Remote 1-1s” via @rhein_wein
|Understanding Rust Loops|
Reading Time: 8 minutes
When you come from another language to learn Rust, some things may come across the same, but many things will be very different. Writing Rust loops can have the appearance of familiarity, but behind the scenes, Rust is translating those loops to its own syntax. If you learn that style for yourself, you will find Rust loops to be far more capable and useful in your day-to-day programming.
Rust uses a form of pattern matching to account for every possible result of each step of your loop. This means you don’t need to write as many tests: Rust gives you guarantees about the types and bounds of the values being iterated over, which lets your tests focus on more relevant things and makes them more sensible.
Let’s have a look at how Rust works with loops.
“Rust loops can look familiar, but check out what goes on behind the scenes.” via @6ftdan
Revealing a For Loop on a Range
A for loop may at first look as it does in many other languages. You tell it to use a variable for each step of a numbered range and simply use that value within the loop.
This will print out nine lines, one for each value in the range, which makes it look like an ordinary counting loop.
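The code sample didn’t survive in this copy of the post; a minimal loop consistent with the description (a numbered range that prints nine lines) might be:

```rust
fn main() {
    // 1..10 is an exclusive range: it yields 1 through 9, nine values.
    for x in 1..10 {
        println!("{}", x);
    }
}
```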
So let’s dig into how this for loop is implemented and see why that isn’t so.
The first piece to look at is the range syntax used: in Rust, a numbered range like 1..10 is shorthand for a Range struct, with a start and an exclusive end, and Range implements the Iterator trait.
The for loop then calls the iterator’s next method once per pass through the loop, stopping when next returns None.
This code is a little more verbose than it needs to be for this post; parts of its implementation exist purely to help the compiler optimize.
In essence, what is being done is a start-to-end boundary check that LLVM (the compiler backend) can optimize on, then it tries to add 1 to our start value, if that’s possible given the number type’s limits (i.e., the maximum value for that numeric type hasn’t been reached). If it can add 1, it uses a memory swap to update the start value.
And last, it returns either a Some containing the current value or a None once the range is exhausted.
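A simplified sketch of that logic, assuming a struct shaped like the standard library’s Range (this is not the real std source, which carries extra optimization detail):

```rust
use std::mem;

// A hand-rolled stand-in for Range<u32>, to show the shape of next().
struct MyRange {
    start: u32,
    end: u32,
}

impl Iterator for MyRange {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        // Boundary check: is there anything left to yield?
        if self.start < self.end {
            // checked_add returns None if adding 1 would overflow the type.
            if let Some(n) = self.start.checked_add(1) {
                // Swap the incremented value in, handing back the old start.
                Some(mem::replace(&mut self.start, n))
            } else {
                None
            }
        } else {
            None
        }
    }
}

fn main() {
    let values: Vec<u32> = MyRange { start: 1, end: 10 }.collect();
    println!("{:?}", values);
}
```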
You can see one case of pattern matching used above with Option’s two variants, Some and None: the result of the checked addition is matched before the value is used.
You can add an else branch to an if-let expression to handle the case where the pattern doesn’t match.
Now back on the subject of the for loop: we’ve covered that it creates an iterator from the range and calls next on it for every pass through the loop.
The block of code that prints each of the values does so from the value inside the Some that next returns; the loop unwraps it for you.
Trying to use the range again after the loop produces an error: the for loop takes ownership of the iterator, so the value has been moved.
When looping over a collection, you have a few different ways to choose ownership of the items used.
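As an illustrative sketch (not code from the post), the three usual choices look like this:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // iter(): borrow each item immutably; v stays usable afterwards.
    for item in v.iter() {
        println!("{}", item);
    }

    // iter_mut(): borrow each item mutably so it can be changed in place.
    for item in v.iter_mut() {
        *item += 10;
    }

    // into_iter(): take ownership of each item; v is moved and
    // can't be used again after this loop.
    for item in v.into_iter() {
        println!("{}", item);
    }
}
```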
Here we’ll look at a while loop that uses the while-let form to call next on the iterator by hand.
And this outputs each value on its own line, followed by a debug print of the whole vector.
The last line is the debug output we asked for on the last line of code, and we still had access to the vector named range because we used iter(), which only borrows the collection instead of taking ownership of it.
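The original code and output aren’t preserved in this copy; a sketch consistent with the description (a vector named range, iterated by hand, then debug-printed) might be:

```rust
fn main() {
    let range = vec![1, 2, 3];
    // iter() only borrows the vector, so range remains usable afterwards.
    let mut iter = range.iter();
    while let Some(value) = iter.next() {
        println!("{}", value);
    }
    // Debug-print the vector to show we still own it.
    println!("{:?}", range);
}
```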
Typical Loops in Rust
One very common use of loops in Rust is the bare loop keyword paired with a match on each call to the iterator’s next method.
Rust’s type system will make sure your match accounts for all the possible outputs that the next method produces from your iterator. When the collection reaches the end, it will produce a None, and matching on that variant is how you know to break out of the loop.
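A small sketch of that loop/match shape (illustrative, not the post’s original code):

```rust
fn main() {
    let mut iter = vec![1, 2, 3].into_iter();
    loop {
        match iter.next() {
            Some(value) => println!("{}", value),
            // Once the collection is exhausted, next() yields None.
            None => break,
        }
    }
}
```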
Match arms can also take guards, extra boolean conditions that must hold for an arm to match, as well as bindings, which let you name the value a pattern matched so you can use it in the arm’s body.
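Since the post’s original examples aren’t preserved here, an illustrative sketch of a guard and a binding inside a loop/match:

```rust
fn main() {
    let mut iter = vec![1, 5, 9].into_iter();
    loop {
        match iter.next() {
            // A guard: this arm only matches when the condition holds.
            Some(x) if x > 4 => println!("big: {}", x),
            // A binding: `n @ pattern` names whatever the pattern matched.
            Some(n @ 0..=4) => println!("small: {}", n),
            Some(other) => println!("other: {}", other),
            None => break,
        }
    }
}
```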
There are many methods on the Iterator trait that can help you do many of the things you want to do while iterating over a collection. With them, you can daisy-chain one method call into the next.
This is some code from my app that I used for processing text from a resume key text file. The contents are the text from the file, and entries are split on double newlines. From there, I just use chained iterator methods to tidy up each entry and collect the ones I need.
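The author’s original code isn’t preserved in this copy; a sketch along the lines of that description (the contents and names are illustrative) might be:

```rust
fn main() {
    // Entries in the file are separated by double newlines.
    let contents = "Name: Ada\n\nSkills: Rust\n\n\nLocation: Earth";
    let entries: Vec<&str> = contents
        .split("\n\n")             // break the text into raw entries
        .map(|entry| entry.trim()) // strip stray whitespace
        .filter(|entry| !entry.is_empty()) // drop blanks from extra newlines
        .collect();
    println!("{:?}", entries);
}
```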
As you get more familiar with Rust, it’s likely you’ll use the daisy-chained style, as it often comes across as more readable. Even so, the same pattern matching system will still happen behind the scenes where it is applicable.
Creating an Iterator
To create an iterator, we need to implement the Iterator trait: choose an Item type and write a next method that returns an Option of that item type.
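The post’s example isn’t preserved here; a minimal custom iterator, a counter from 1 to 5, might look like this:

```rust
// A tiny iterator that yields 1 through 5.
struct Counter {
    count: u32,
}

impl Iterator for Counter {
    // Item is the type each call to next() produces.
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.count < 5 {
            self.count += 1;
            Some(self.count)
        } else {
            None
        }
    }
}

fn main() {
    // Because Counter implements Iterator, a for loop just works.
    for n in (Counter { count: 0 }) {
        println!("{}", n);
    }
}
```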
Running it prints each value the iterator produces, in order, until next returns None.
When we define the next method, the rest of the Iterator trait’s methods, such as map and filter, come along for free, because they all have default implementations built on top of next.
If you would like to learn how to implement the Iterator trait in more depth, the official Rust documentation and book are good places to start.
Rust has a fantastic system that manages your types and ownership very well, and this shines in loops as well. When you otherwise would worry about types and bounds with other languages, Rust takes a load off your mind with compile time checks and points you in the right direction.
I have found Rust removes most of the need for TDD when working with Rust-to-Rust code. Where TDD and testing really come into play is any time you work with something outside the ecosystem. Then all the normal practice guidelines apply.
Rust will spoil you with its helpful compile-time error messages, and your loops will scarcely ever run into the typical issues found in other languages.
“Understanding Rust Loops” via @6ftdan
|Top 3 reasons to attend DockerCon 2018|
In case you missed it, DockerCon 2018 will take place at Moscone Center in San Francisco, CA on June 13-15, 2018. DockerCon is where the Docker community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate and advanced users who are all looking to level up their skills and go home inspired. With 2 full days of training, more than 100 sessions, free workshops and hands-on labs, and the wealth of experience brought by each attendee, DockerCon is the place to be if you’re looking to learn Docker in 2018.
Want to go but need information to convince your manager? Here is a document to help you build a case for it including content, budget and reasons why you should attend.
Reason #1: Inspiring and informative breakout sessions
From beginners to experts, DockerCon brings together the brightest minds to talk about all things containers including Docker Platform, Kubernetes, Digital Transformation in the Enterprise, Moby and CNCF projects, Container Security, Service Mesh and more. Although the full schedule won’t be announced until the end of the month, below is a sneak peek of some of the sessions we have lined up:
Reason #2: Concrete learnings through Hands-on Labs and Workshops
Hands-on Labs will provide conference attendees with the opportunity to gain experience with various Docker features and use cases. Each lab runs around 45 minutes, covering both introductory and more advanced topics. Labs are self-paced, so attendees can learn at their own speed whenever they have time during the day.
Conference attendees will also get the opportunity to attend free two-hour, hands-on, instructor-led workshops. Workshops will run as a track, with seats available on a first-come, first-served basis.
Reason #3: Effective networking with the Docker Hallway Track
Docker Hallway Track is an innovative platform that helps you find like-minded people to meet one-on-one and share knowledge in a structured way, so you get tangible results from networking.
Top 3 reasons to attend @DockerCon 2018 including a sneak peek at the speaker lineup
Useful DockerCon Links:
|Make a Lasting Impact with "Write for DOnations"|
“Our community is bigger than just us” — As DigitalOcean (DO) employees, we aim to keep this value at the front of our minds in all our work. Since the company was founded in 2012, we’ve worked hard to build a vibrant, engaging Community where everybody from beginners to professionals can learn from one another about working in the cloud.
It’s important to us that the Community emulates the best that tech has to offer by serving as a welcoming place where members can share their ideas and experiences. This is what led us to introduce the Write for DigitalOcean program. Write for DO gives Community members an opportunity to build their brand, develop their writing skills, and get paid for contributing to DigitalOcean’s collection of tutorials on open-source software deployment, configuration, and development.
We’re always looking for new ways to give back to the Community. To that end, we’re excited to announce some updates to the Write for DigitalOcean program and reintroduce it as “Write for DOnations” (currently in beta — the full program launch is coming later this year).
There are two main changes that we are excited to share:
The Write for DOnations beta program will follow the same editorial structure as Write for DO:
At the end of this review process, the author’s tutorial will be published on the Community website and they will receive their payout. The author will then get to choose the nonprofit(s) that will receive their matching donation. Donations will be processed through Bright Funds, and authors’ donations can either go to a single tech-focused nonprofit or be evenly split between a group of nonprofits that share similar missions. Please note that the charitable contributions made by DigitalOcean through this program are not tax-deductible to the authors.
Since its launch, the Write for DigitalOcean program has allowed authors to share their diverse technical knowledge with the world while also improving their writing skills and growing their personal brand. Our team is always on the lookout for fresh content our community will love. To get a sense of which tutorial topics we’re particularly interested in, take a look at our suggested topics page.
Although Write for DOnations is still in development, we’re excited to help our Community authors make a real impact by donating to fantastic organizations that are working to shape the world of tech for the better.
We are actively seeking feedback to inform the full release of the new Write for DOnations program. Check out the program’s FAQ page for more details, and please share any questions or comments about the Write for DOnations beta launch in the comments below or reach out to us directly at email@example.com.
|How We Built the Codeship API v2|
Reading Time: 5 minutes
We started work on our API v2 at the beginning of 2017. We knew that implementing it could have significant implications for our architecture as well as our customers’ workflows, so we wanted to spend the time to get it right rather than rushing to deliver something and then having to live with the consequences.
“Examining how v2 of the Codeship API was brought into being.” via @codeship
API Gateway Versus the Majestic Monolith
The first thing we looked at was how it would fit into our current architecture and our vision for the system going forward. Our existing architecture is a Rails monolith surrounded by a couple of supporting services.
Our long-term vision is to move to a more services-based architecture, so we decided to first explore the idea of using an API gateway. This approach would allow us to reduce the number of requests going to our monolith frontend and would enable us to separate our UI load from our API load. Additionally, it would provide a central gatekeeper for all of our systems.
As appealing as this option was, we found as we started to investigate that the types of API calls we wanted would almost always end up hitting our monolith anyway. This, coupled with the overhead of having to add an API call to the gateway and then turn around and pass the same request to our monolith, seemed like a lot of extra work for an unknown future payoff.
In the end, we decided to build our API into our monolith as a Rails engine. A Rails engine is an embeddable Rails application that operates in a namespace isolated from the main Rails application. This separation helped prevent us from inadvertently using code from our monolith, but it still required a little bit of discipline on our part not to explicitly reach into the monolith.
The above diagram highlights the interaction between our API Rails Engine and our monolith while retrieving a build object.
If you are interested in learning more about Rails Engines, I would highly recommend reading the Rails Guide on Getting Started with Engines.
The next significant design consideration was authenticating requests. Currently, all of our authentication and authorization occurs in our monolith. This would not be a big deal while the API lives in an embedded Rails engine, but if we ever extract it into an external service, authenticating every request against the monolith could amount to a lot of network traffic.
In searching for a solution, we found that several of our developers had worked with JSON Web Tokens (JWT) in the past and suggested them as a possible solution. Using JWT, with the initial authentication request we authenticate the user and generate a token encoding the user’s identity and the scopes they hold in each organization.
Our example shows what a decoded JWT looks like for Codeship. This particular token is for User 1, who has access to two different organizations. In the first organization, the user can both read and write project and build attributes; in the second, they can only read values.
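The token sample itself isn’t preserved in this copy; a decoded payload matching that description could look something like this (field and scope names are illustrative, not Codeship’s actual claim schema):

```json
{
  "user_id": 1,
  "scopes": {
    "organization_one": ["project.read", "project.write", "build.read", "build.write"],
    "organization_two": ["project.read", "build.read"]
  },
  "exp": 1521504000
}
```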
This bearer token is passed with every request in the Authorization HTTP header. Upon receiving the request, our monolith verifies that the user has the correct scope for the resource that they are trying to access. If the user is authorized, we process the request.
If you are interested in learning more about JWT, there is an excellent introduction available at https://jwt.io/.
REST Versus GraphQL
Even more important than the architecture issues was the API itself. We felt that there were only two choices to consider: REST and GraphQL.
GraphQL is an API query language that allows you to retrieve the exact data you are looking for in a single server request. A theoretical GraphQL query to find the build status and commit message for a build would ask for just those two fields and nothing else.
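The query didn’t survive extraction; a hypothetical version (field names are illustrative) might read:

```graphql
{
  build(uuid: "<build-uuid>") {
    status
    commitMessage
  }
}
```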
We loved how powerful these queries are, but it didn’t feel quite as natural to trigger a build.
Taking a REST approach, our build would live at a URL that identifies it as a nested resource.
We could retrieve the build information with a GET request against this URL. A POST request to the same URL would trigger the build.
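The URL itself isn’t preserved here; a plausible RESTful shape for it (hypothetical paths; see Codeship’s API documentation for the real ones) would be:

```
GET  /v2/organizations/{organization_uuid}/projects/{project_uuid}/builds/{build_uuid}
POST /v2/organizations/{organization_uuid}/projects/{project_uuid}/builds/{build_uuid}
```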
Ultimately, we felt that our customers, as well as ourselves, were more experienced with REST, and we decided to take that approach. As we learn more about how users are interacting with our API, we may revisit GraphQL in the future.
If you haven’t yet looked at our API, you can find all the information you need to get started here. In case you get stuck working with the API or have feedback on how to make it better, feel free to reach out to our help desk or connect with us in our Community Slack.
“How We Built the Codeship API v2” via @codeship
|The Deep End Podcast Ep #13: From Prototype to Internet of Things with Muzzley|
A vision, a small prototype, and a PowerPoint presentation: that’s how Muzzley, a platform for interacting between Internet of Things (IoT) devices, was born three years ago. Today the Muzzley team works to solve a pain point for smart home consumers: managing their IoT devices from one interface, with minimum hassle. But they also place importance on transparency, privacy, and protecting their customers’ data.
In this episode, Muzzley co-founders Domingo Bruges and Sasha Dewitt discuss how Muzzley’s tech stack evolved to support a product that integrates with different vendors. They share insight into how they manage the data generated by consumer IoT devices, and how they approach consumer privacy and data protection.
Subscribe to The Deep End Podcast on iTunes, and listen to the latest episode on SoundCloud below:
Hollie Haggans heads up Global Partnerships for DigitalOcean’s Hatch program. She is passionate about startups and cold brew coffee. Get in touch with questions at firstname.lastname@example.org.