Planet Open Source Ruby & Rails Apps

Updated Friday, 14 August 2020 07:30
GitLab News ( Feed )
Friday, 20 April 2018
New names for GitLab self-hosted pricing tiers

At GitLab, iteration is one of our core values. We’ve recently iterated on the names of our self-hosted pricing tiers, so Marcia and I got together and wrote this post to catch you up on the current options. We’ll explain each tier, and share how to figure out which features your subscription give

At GitLab, iteration is one of our core values. We’ve recently iterated on the names of our self-hosted pricing tiers, so Marcia and I got together and wrote this post to catch you up on the current options. We’ll explain each tier, and share how to figure out which features your subscription gives you access to.

GitLab deployment options

To use GitLab, you have two options:

  • GitLab self-hosted: Install, administer, and maintain your own GitLab instance.
  • GitLab.com: GitLab's SaaS offering. You don't need to install anything to use GitLab.com; just sign up and start using GitLab straight away.

GitLab self-hosted

With GitLab self-hosted, you deploy your own GitLab instance on-premises or in the cloud. From bare metal to Kubernetes, you can install GitLab almost anywhere. GitLab self-hosted has both free and paid options: Core, Starter, Premium, and Ultimate.

You can see a full list of features in each self-hosted tier on the self-hosted feature comparison page.

GitLab.com

GitLab.com is hosted, managed, and administered by GitLab, Inc., with free and paid options for individuals and teams: Free, Bronze, Silver, and Gold.

To support the open source community and encourage the development of open source projects, GitLab grants access to Gold features for all GitLab.com public projects, regardless of the subscription.

You can see a full list of features in each GitLab.com tier on the GitLab.com feature comparison page.

Repository architecture

We develop GitLab from two repositories, one for GitLab Community Edition (CE) and another for GitLab Enterprise Edition (EE):

GitLab EE unlocks additional features when you install a license key. You can also install GitLab EE and run it for free without a license key, which gives you access to the same features as CE. This makes it easier to upgrade later on.

Visit the CE vs EE page to see which GitLab installation method to choose.

Subscription model

GitLab Core contains all of the open source features of GitLab. Whether you are running GitLab CE or GitLab EE without a license key, you'll get access to the same Core features. The proprietary features of EE are unlocked by purchasing a license key.

Tiers are additive:

  • Starter contains all the features of Core
  • Premium contains all the features of Starter and Core
  • Ultimate contains all of the features of Premium, Starter, and Core

GitLab Core, Starter, Premium, Ultimate

Examples

  • Consider a user of GitLab Premium who wants to contribute to a given feature present in GitLab Core, e.g. Issue Boards. The code is submitted to the CE repo; therefore, it's open source code. The master branch of GitLab CE is then merged into GitLab EE, and the CE code will be available to this Premium user in the next release.
  • Consider a user of GitLab Premium who wants to contribute to a given feature present only in Premium, e.g. Geo. The code is submitted directly to the EE repo; therefore, it's proprietary. The same is valid for Starter and Ultimate features.

Use cases

GitLab self-hosted use cases

  • I installed GitLab CE: I’m a Core user. I have access to Core features. The software I’m using is 100 percent open source.
  • I installed GitLab EE: the software I’m using is open core – it includes both open source and proprietary code.
    • I don't have a subscription: I have access to Core features.
    • I have a Starter subscription: I have access to Starter features.
    • I have a Premium subscription: I have access to Premium features.
    • I have an Ultimate subscription: I have access to Ultimate features.
  • I have a trial installation: I installed GitLab EE, and I’m an Ultimate user during the valid period of the trial. If the trial period expires and I don’t get a paid subscription (Starter, Premium, or Ultimate), I’ll become a Core user, with access to Core features.

GitLab.com use cases

  • I use GitLab.com, a huge installation of GitLab EE. I’m using proprietary software.
  • I don’t have access to administration features as GitLab.com is administered by GitLab, Inc.
  • Subscriptions:
    • I have a Bronze subscription: my private projects get access to Bronze features. My public projects get access to Gold features.
    • I have a Silver subscription: my private projects get access to Silver features. My public projects get access to Gold features.
    • I have a Gold subscription: both my private and public projects get access to Gold features.
    • I don’t have any paid subscriptions: I’m a Free GitLab.com user:
      • I have access to Free features for private projects.
      • I have access to Gold features for public projects.

Questions, comments? Let us know what you think below.

Discourse News ( Feed )
Thursday, 19 April 2018
HTTPS by Default
Here at Discourse World HQ, we’re firm believers in the value of security. We fund a public bug bounty program, and we document our security policy and procedures right in the repo. Securing traffic between a forum’s server and its users is important, too, and we’ve had first-class support for integr

Here at Discourse World HQ, we’re firm believers in the value of security. We fund a public bug bounty program, and we document our security policy and procedures right in the repo. Securing traffic between a forum’s server and its users is important, too, and we’ve had first-class support for integrating Let’s Encrypt into a self-hosted Discourse server since virtually day one of their being generally available.

All of this is to explain why we’re so very proud to be able to announce that every new hosted Discourse instance now comes with HTTPS configured (and enforced via HSTS policy) by default, and the majority of our existing customers have been transparently migrated to enforced HTTPS. For some customers we can’t enforce HTTPS automatically; if you see your site is still loading over HTTP, please contact our support team so we can walk you through the changes that need to be made.
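
For readers running their own Rails-based forum, a minimal sketch of what enforcing HTTPS plus an HSTS policy can look like at the application layer is Rails' built-in SSL middleware. This is illustrative only and isn't necessarily how our hosted platform is configured; the exact max-age and subdomain options are up to you.

# config/environments/production.rb -- illustrative sketch only, not
# necessarily how Discourse's hosting does it.
Rails.application.configure do
  # Redirect all HTTP requests to HTTPS and flag cookies as secure.
  config.force_ssl = true

  # Send a Strict-Transport-Security header so browsers keep using HTTPS.
  config.ssl_options = { hsts: { expires: 1.year, subdomains: true } }
end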

The Story So Far

HTTPS has always been supported by our hosting, but for a long time it wasn’t feasible to automatically roll it out to everyone. Back in the Dark Times (up to 2015 or so), SSL was a headache to do at scale, unless you were big enough to be able to wangle a special deal with a CA. Sure, certificates cost money, but that wasn’t the biggest hassle. The thing that stopped us, honestly, was the painful story around automation. Basically, there wasn’t any – not end-to-end, anyway.

It’s fun to track the evolution of our SSL procedures by looking at the titles of our runbooks over time. In the Dark Times, we went through several iterations – there was “How to install a customer’s SSL certificate”, then “Customer SSL workflow (Manual Edition)”, “Semi-Automated Customer SSL workflow”, and finally, “Customer SSL workflow (99.5% Automated Edition)”. Whilst we tried very hard to automate as much of the process as we could, getting an SSL certificate always seemed to involve a human in the loop somewhere, and when you provide automated trial signups, you can’t put a human in the middle of that process without making everything terrible.

Thankfully, Let’s Encrypt came along to bring us into the age of certificate Enlightenment. We’ve been big supporters of Let’s Encrypt for a long time; we’ve been donating the hosting for the Let’s Encrypt community forums practically since the initial public announcement, and we donate what we would have otherwise spent on certificates in cash, as well. When Let’s Encrypt became generally available, we re-implemented our SSL certificate issuance pipeline on top of that, and everything was reasonably peachy.

Reality Ensues

So, Let’s Encrypt made it easy for us to fully-automate getting SSL certificates for our customers, as they asked for it. Except, there’s a big difference between requesting a certificate for a running site, when a customer asks for it, and making sure you’re getting a certificate issued as soon as the site’s DNS is configured. Even detecting that DNS is correctly configured, given the wide variety of, shall we say, “interesting” configuration choices some people make, can be a challenge. This took a lot more time and ingenuity than getting issuance working. Requesting a certificate from Let’s Encrypt and dealing with validation is about 10 lines of Ruby (with the help of the acme-client gem) and a bit of HAProxy magic. Dealing with all the special cases and exceptions in people’s DNS and proxy setups is quite a bit more code.
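
To give a sense of what those "10 lines of Ruby" can look like, here is a rough sketch of issuing a certificate with the acme-client gem against the ACME v2 API. The domain, contact address, and file handling are placeholders, and real issuance code needs error handling, retries, and persistence of keys and orders.

require 'acme-client'
require 'openssl'

# Rough sketch only: issue a certificate for a single (placeholder) domain.
account_key = OpenSSL::PKey::RSA.new(2048)
client = Acme::Client.new(private_key: account_key,
                          directory: 'https://acme-v02.api.letsencrypt.org/directory')
client.new_account(contact: 'mailto:ops@example.com', terms_of_service_agreed: true)

order = client.new_order(identifiers: ['forum.example.com'])
challenge = order.authorizations.first.http

# Serve challenge.file_content at /.well-known/acme-challenge/#{challenge.token}
# (serving that is where a bit of HAProxy magic can help), then request validation.
challenge.request_validation
while challenge.status == 'pending'
  sleep(2)
  challenge.reload
end

csr = Acme::Client::CertificateRequest.new(
  private_key: OpenSSL::PKey::RSA.new(2048),
  subject: { common_name: 'forum.example.com' }
)
order.finalize(csr: csr)
while order.status == 'processing'
  sleep(1)
  order.reload
end
File.write('forum.example.com.pem', order.certificate) # PEM-encoded chain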

As an added challenge, a large part of our team has been hard at work since mid last year building out a completely custom, hyper-scale AWS-based hosting environment for a large game company. That work, which has now borne fruit, diverted time and attention away from building out universal SSL as quickly as we would have liked.

The Final Hurdle

The final hurdle is the problem of migration. We want all our existing customers to automatically have the benefits of our SSL labours. There are a few things that get in the way of just flicking the switch for customers who have been with us for some time.

First off, forcing everyone to use HTTPS, via redirects and HSTS config, breaks some things. The biggest issue is the lack of a universally supported way to say to an HTTP client, “transparently make this exact request again, but to this other URL”. There are newly standardised HTTP response codes (307 and 308) for “redirect with the same HTTP verb”, but they’re not supported by all browsers and HTTP libraries, and they sometimes prompt “do you want to do this again?”, which isn’t what you want for a smooth transition.
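
As a rough illustration of the difference (a generic sketch, not necessarily how our hosting implements its redirects), a minimal Rack app issuing the method-preserving variant looks like this:

# config.ru -- minimal sketch only.
require 'rack'

run lambda { |env|
  req = Rack::Request.new(env)
  if req.ssl?
    [200, { 'Content-Type' => 'text/plain' }, ["served over HTTPS\n"]]
  else
    # 308 Permanent Redirect tells the client to repeat the same verb and body
    # against the HTTPS URL; a 301/302 may be replayed as a plain GET instead.
    [308, { 'Location' => "https://#{req.host}#{req.fullpath}" }, []]
  end
}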

External authentication providers get in the way, too. They often whitelist the return URLs they’ll accept. When you call out to an authentication service, you typically send along a link which the browser should be redirected back to after authentication is complete. Sending an HTTPS link, when the authentication provider expects an HTTP one, results in authentication errors. Very not cool. We can’t fix this ourselves, either – we don’t have access to our customers’ Google, Facebook, and Twitter applications to update the whitelisted URL.

Finally, there’s the age-old problem of mixed-content warnings, which pretty much everyone who has stood up an HTTPS site has dealt with at one time or another. Thankfully, the vast majority of those we can fix on our customers’ behalf, by making sure the site being linked to supports HTTPS, and modifying the site config to add a strategically-placed “s”.

Security is no longer optional

Our deployment of HTTPS by default is coming at just the right time. Popular browsers such as Chrome and Firefox are in the process of rolling out changes to mark HTTP-only sites as “Not Secure”, like so:

Google Chrome marking an HTTP site as “Not secure”

It’s reasonable to assume that this trend will continue, with other browsers following suit, and the warnings getting more and more scary over time. So, regardless of where you’re hosted, now would be a good time to start making sure all your web properties are being served over HTTPS.

Full of win

It has been a bit of a journey, but in the end, we’ve gotten to where we want to be: new customers get HTTPS enforced by default, most existing customers have HTTPS enforced by default without their even noticing, and for the remainder, we know exactly what they need to do to complete the upgrade.

GitLab News ( Feed )
Wednesday, 18 April 2018
Merge requests are coming to your desktop with Tower

The concept of "merge requests" has changed the way we work – by redefining the way we collaborate. Although it's a relatively new tool, it's already hard to think back to how we worked without them.

Very recently, they have even burst out of the browser and are now, finally, accessible ri

The concept of "merge requests" has changed the way we work – by redefining the way we collaborate. Although it's a relatively new tool, it's already hard to think back to how we worked without them.

Very recently, they have even burst out of the browser and are now, finally, accessible right on your desktop: the upcoming new major version of Tower, a Git desktop client for Mac and Windows, brings native support for GitLab merge requests!

This means that working with merge requests has become even more comfortable and easy: the most common tasks can now be performed directly from your desktop! You can create, merge, comment, inspect, and close merge requests in Tower. And, since you can of course work on your MRs in Tower and in the browser side by side, we've included a quick link so you can access the browser version with just a click.

At the moment, you can use the new Tower for free during our Public Beta. Simply sign up on our beta page and give it a try. You'll see that we've worked very hard to make GitLab merge requests feel at home in Tower.

Check out how the integration works in the demo below:

And please let us know if you have feedback for us: we're eager to help GitLab users become even more productive through Tower!

About the guest author

Tobias Günther is the founder and CEO of Tower, the popular Git client for Mac and Windows.

GitLab News ( Feed )
Tuesday, 17 April 2018
The people have spoken: Remote work creates a stronger DevOps culture

According to our 2018 Global Developer Report, remote teams tend to trend higher in visibility and DevOps satisfaction compared to in-office teams, suggesting that a remote workplace culture is more conducive to DevOps adoption.

As a remote-only company, this finding naturally pi

According to our 2018 Global Developer Report, remote teams tend to trend higher in visibility and DevOps satisfaction compared to in-office teams, suggesting that a remote workplace culture is more conducive to DevOps adoption.

The differences between remote and in-office teams

As a remote-only company, this finding naturally piqued our interest. We started thinking about the traits of a remote team and how these characteristics set up operations and development teams for success.

The challenges of DevOps adoption

One of the greatest difficulties an organization faces when adopting a DevOps model is a resistance to culture change. Because DevOps requires teams to collaborate and communicate in new ways (and at an increasing frequency), historically siloed teams may have trouble adjusting. This type of radical shift in culture can be too difficult for a team to handle and may result in an increase in friction and frustration.

How can teams that have traditionally worked alongside each other – but not together – suddenly adopt a model that encourages them to contribute to a single conversation across every stage?

Remote work paves the way to a smoother transition

In our survey we learned that 20 percent of respondents say most or all of their development team works remotely. Every remote worker knows the importance of communicating effectively and frequently to ensure that others are aware of decisions and progress. Without the convenience of physical proximity, working remotely requires a commitment to open discussion and an understanding that team members must be able to easily view projects and receive updates. Furthermore, remote teams use tools to work concurrently, decreasing the challenges of siloed workflows.

An effective remote culture embraces:

  • efficiency
  • collaboration
  • visibility

When operations and development teams already have a culture founded on trust and transparency, they can more easily adopt a model that fosters cross-functional communication and workflows.

Remote teams are already accustomed to transparency, collaboration, and visibility, making DevOps adoption a seamless transition. Because remote teams must document their discussions and conclusions, working remotely inherently provides complete, real-time visibility into all projects and activities – an advantage of the DevOps model.

How can in-office teams ease DevOps adoption?

While a remote workplace culture appears to create a solid foundation upon which a DevOps model can thrive, we concede that remote teams can still encounter challenges to adoption. Poor communication, internal conflict, and a lack of defined processes can hinder any team. However, there are insights that in-office teams can gain from these findings. Because culture is the underpinning of successful DevOps adoption, in-office teams can ease challenges by encouraging teams to work concurrently and by transparently documenting conversations and decisions. Furthermore, a shift towards empathy can help teams gain respect for the work that others accomplish, a change that can increase collaboration and decrease friction.

By creating a collaborative culture, an organization can facilitate a smoother transition to a DevOps model.

Does your development team work remotely? Let’s chat about DevOps and remote working! Tweet us @gitlab.

Cover image licensed under CC X

GitLab News ( Feed )
Monday, 16 April 2018
Telstra Ventures invests in GitLab to boost innovation and collaboration

Telstra Ventures, the investment arm of Australia’s leading telecommunications and technology company, has chosen to invest in GitLab for our open core DevOps philosophy that supports the entire development and operations lifecycle.

“Customers are increasingly demanding better digital expe

Telstra Ventures, the investment arm of Australia’s leading telecommunications and technology company, has chosen to invest in GitLab for our open core DevOps philosophy that supports the entire development and operations lifecycle.

“Customers are increasingly demanding better digital experiences, and DevOps is becoming the leading way for companies to develop, deliver, and support applications that drive great customer experiences,” said Mark Sherman, Managing Director at Telstra Ventures. “One of the reasons we decided to invest is because GitLab is committed to continuously improving its application, which is key to helping companies rapidly take their best ideas from development to market.”

“One of the reasons we decided to invest is because GitLab is committed to continuously improving its application, which is key to helping companies rapidly take their best ideas from development to market.”

We believe that a collaborative environment is necessary to take your best ideas to market. We know from our 2018 Global Developer Report that a collaborative environment is important to you (94 percent of respondents said so!), but that visibility and transparency have some catching up to do. In addition, 55 percent of respondents are still using at least five tools for their development processes and 62 percent of respondents acknowledged losing time due to context switching between tools on a typical work day. This isn’t a good use of anyone’s time, which is why it’s our mission to deliver a single application that meets everyone’s needs.

Our focus on a collaborative approach gives development, quality assurance, security, and operations teams the ability to concurrently work on the same project within a single application and to see the entire workflow from their own point of view. The same information’s all there – just presented in a way that’s relevant to each team. We published a blog post last year with more details of our DevOps vision. This investment from Telstra is affirmation that we’re on the right track and will help accelerate our progress towards realizing this vision.

“We look forward to partnering with Telstra to support its large application team and to aid the company in its vision of connecting people through technology,” said Sid Sijbrandij, our CEO and co-founder. “DevOps is increasingly being adopted by organizations around the globe to radically improve productivity and the pace at which software moves from idea to market.”

As the only single software application that supports the entire DevOps lifecycle, GitLab is built from the ground up to enable collaboration amongst teams adopting the methodology. We’re happy that Telstra believes in our vision and our capability to enable software development teams to achieve faster DevOps lifecycles.

5 things I wish I'd known about Kubernetes before I started

I first encountered Kubernetes in January 2017 when our CEO Sid Sijbrandij challenged me and five other team members to get a live install functional on Kubernetes for an Idea to Production demo during the company summit in Cancún.

Prior to the challenge I had never touched Kubernetes. Non

I first encountered Kubernetes in January 2017 when our CEO Sid Sijbrandij challenged me and five other team members to get a live install functional on Kubernetes for an Idea to Production demo during the company summit in Cancún.

Prior to the challenge I had never touched Kubernetes. Nonetheless, my team members and I conquered the challenge, completing the task a day before deadline to boot. You can watch the demo here.

Now, a little more than a year later, I've taken a deeper dive into the container orchestration platform, leading my team in building and releasing the alpha version of the cloud native GitLab helm chart, which allows for the deployment of GitLab on Kubernetes. With that experience fresh in mind, I've got a bit of advice for those looking to move into the world of Kubernetes:

The internet is your friend. Check out the documentation, online courses and walkthroughs.

First things first, there are a couple of really good sets of documentation out there, and even a solid course on edX. These are all good choices. You don’t have to go through all of the courses to really get a running start with what’s going on. But if you want to get into the nitty-gritty, I would strongly suggest taking some of the courses. If all you want to do is see it work, be able to play with it and kind of get an idea of what it is, then you can get a free trial with GKE (Google Kubernetes Engine), set up a little cluster and do a deployment that way. And if all you want to do is deploy a couple of your applications into the same cluster, we (GitLab) already have Auto DevOps that can hook everything together for you, and then you can use your entire workflow, do your deployments, and pop right in there. We’ll even help you spin up a GKE cluster with all the requirements right from the UI.

But if you want to do it by hand the first time, that’s one of those things where you should start with the tutorial walkthroughs. Install the tools. They are all straightforward to get your hands on. Pull down one of the charts, try it, change some configuration options and retry it. Just play with it.

Be clear on how you will use Kubernetes.

The challenges you encounter in Kubernetes really depend on what you’re trying to do with it. Are you using it as a test round, are you using it as a staging environment, or are you going all the way in and going for production? Just using it for a development environment is not really complicated. You need to understand some basic concepts, like namespaces. You need to know what a secret is, what a configuration is, and what a deployment is. These core concepts will get you a very long way.

Beyond that, you start getting into the more involved steps. That’s where you need to understand what didn’t exist before, like role-based access control (RBAC), which now applies to both Kubernetes and Helm. Those features did not exist a year ago, and now they do. They are becoming ever-present and even more involved. This is good for people doing production work – engineers, SREs (site reliability engineers), deployments, customers, etc. – because now you’re making sure that things aren’t touching other things they shouldn’t. It’s no longer one open, flat network.

Now you have fine-grained controls via RBAC: multiple namespaces, with per-namespace controls on who can access or create secrets and configuration. This allows you to have production-grade multi-tenant clusters where you are not concerned about neighbors stepping on each other or poking their noses where they don't belong. This is a big step compared to the state of Kubernetes as a whole in early 2017.

The thing I wish I knew was how fast it was going to develop. I walked into Kubernetes in January and then I walked away from it in February. When I came back to it in September, I was surprised by how much had changed. And then the same thing keeps happening every single release.

Don’t expect the same version on every service provider.

I think the biggest thing that people should understand is that not all cloud providers provide the exact same version of Kubernetes. They’re all very close, they’re all almost identical, but the way in which certain features are implemented is slightly different. So, the way you get it on Azure’s container services and the way you get it on Amazon’s container services or GKE won't be exactly the same. Everybody’s implementation is slightly different. Perhaps the available version of the base functionality is going to be a little different, but the real difference will be between each of these providers' own product integrations.

Then there’s the whole ‘roll your own’ approach, at which point you get to use really nifty plugins and other components that you can’t use out of the box with a cloud provider today. Play with it, but it still comes down to this: there are differences between the providers. Target mainline or vanilla, and it will work everywhere. Target a provider, and you’re now a part of that provider.

Be nimble. Change is constant, but don’t follow along blindly in an attempt to keep up.

Wow, there is just so much development. In the year from when I first touched Kubernetes to where I’m at now, the feature set has expanded quite a bit. And the controls that are required for large enterprises are now in place. These can bite you if you’re not paying attention, but they’re not horribly hard to understand if you’re willing to just take a moment and read. Also, everybody and their brother is now doing this and playing with this. Just because you see somebody else do it doesn’t mean it’s an industry best practice.

Last bit of sage advice: Seriously. DO NOT sleep on the releases.

The thing I wish I knew was how fast it was going to develop. I walked into Kubernetes in January and then I walked away from it in February. When I came back to it in September, I was surprised by how much had changed. And then the same thing keeps happening every single release.

It is a production-ready system. However, new feature sets and capabilities are evolving at such a pace that it can be hard to keep up. You’re not breaking anything, but now there are all these new, nifty features. All the shinies keep coming.

This is not software on a six-month release cycle. I’m not going to install Kubernetes, walk away for a year and come back thinking I’ll simply be able to go to the next LTS (long-term support). You have to be present. You have to be paying attention. It doesn’t matter if you only check in once a month – you’ve got to check in once a month.


Kubernetes summit challenge demo

Photo by Frank Mckenna on Unsplash

GitLab News ( Feed )
Friday, 13 April 2018
How to set up advanced monitoring for your GitLab environment with Logz.io and the ELK stack

GitLab comes with some built-in monitoring and visualization capabilities, such as Cycle Analytics and the per-project contributors and repository graphs, as well as integration with Prometheus to monitor your GitLab instance at the server level. Cycle Analytics is especially useful as it enables

GitLab comes with some built-in monitoring and visualization capabilities, such as Cycle Analytics and the per-project contributors and repository graphs, as well as integration with Prometheus to monitor your GitLab instance at the server level. Cycle Analytics is especially useful as it enables teams to analyze their efficiency. However, if you want to analyze the data by searching and querying, or if you want to visualize the data yourself, you might find it helpful to adopt a more centralized methodology by integrating with the ELK Stack (Elasticsearch, Logstash and Kibana).

ELK provides powerful log aggregation, analysis and visualization capabilities that, used in tandem with GitLab’s extensive logging framework, will give organizations an accurate and comprehensive bird's eye view of the system for monitoring, troubleshooting, and analyzing team activity. Using GitLab’s log data, for example, rich dashboards can be created to monitor not only the system’s general health but also specific team metrics, such as the number of commits, issues opened and closed, and so forth.

Logz.io users can benefit from a built-in integration with GitLab and the additional analysis tools provided by the service, but if you’re using your own ELK deployment you’ll be able to set up the described integration as well.

How to integrate GitLab and Logz.io

The steps outlined below presume the following:

  • You have an Omnibus GitLab installation up and running. If you haven't installed GitLab already, visit the installation page.
  • You have an ELK Stack up and running (either your own ELK deployment or a Logz.io account). We will be using Filebeat to ship the logs into Elasticsearch, so Logstash is only required if you want to apply advanced parsing to the data.

GitLab logs

As mentioned above, GitLab has an advanced logging framework that ships a variety of different system logs.

Of course, what log data you want to ship is entirely up to you. You can ship all the log data, or you can be a bit more selective. These logs can be pretty verbose, so depending on storage and retention considerations, it’s good practice to understand which logs you actually need to monitor before you start shipping.

The Filebeat configurations provided below are designed for shipping the following logs.

production_json.log

This JSON-formatted log records requests sent by GitLab to the Ruby controllers. Here is a sample log:

{"method":"GET","path":"/-/metrics","format":"html","controller":
"MetricsController","action":"index","status":200,"duration":1.69,
"view":0.23,"db":0.0,"time":"2017-12-26T14:47:49.505Z","params":{},
"remote_ip":null,"user_id":null,"username":null}

As you can see, the information in the log includes the request method, the controller, the action performed, the request status, duration, remote IP, and more.

The location of the file will vary according to your installation type. In the case of the Omnibus GitLab packages (recommended installation), the file will reside at:

/var/log/gitlab/gitlab-rails/production_json.log

production.log

This is a plain text log file that contains information about all performed requests. It includes the request URL, type, and origin IP, as well as the parts of code that serviced it. The log also provides details on all SQL requests and how long they took. Here is a sample log:

Completed 200 OK in 1ms (Views: 0.2ms | ActiveRecord: 3.2ms |
Elasticsearch: 1.5ms)

Again, the location of the file varies. In the case of the GitLab Omnibus packages, the file resides at:

/var/log/gitlab/gitlab-rails/production.log

api_json.log

A JSON-formatted log file dedicated to API requests only. Here is a sample log:

{"time":"2017-12-10T18:30:11.219Z","severity":"INFO","duration":5.22,
"db":0.82,"view":10.11,"status":200,"method":"POST","path":"/api/v4/
internal/allowed","params":{"action":"git-upload-pack","changes":"_any",
"project":"hello-world","protocol":"ssh","env":"{}","key_id":"[FILTERED]",
"secret_token":"[FILTERED]"},"host":"127.0.0.1","ip":"127.0.0.1","ua":"Ruby"}

Location:

 /var/log/gitlab/gitlab-rails/api_json.log

application.log

This plain text log file tracks GitLab actions such as adding a new user, creating a new project or group, and so forth. It can act as an audit trail for monitoring user activity.

Example:

December 24, 2017 15:10: User Created: username=dbirtin email=xxx@gmail.com
ip=xx.xx.xxx.xx confirmed:true

Location:

/var/log/gitlab/gitlab-rails/application.log

In any case, I recommend reading GitLab’s excellent logging documentation to familiarize yourself with these log files and the information they contain before commencing.

Configuring Filebeat

Filebeat is a log shipper belonging to the Beats family of shippers. Written in Go and extremely lightweight, Filebeat is the easiest and most cost-efficient way of shipping log files into the ELK Stack.

If you haven’t already installed Filebeat, here are some instructions (for Debian):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.1.1-amd64.deb
sudo dpkg -i filebeat-6.1.1-amd64.deb

Open up the Filebeat configuration file at /etc/filebeat/filebeat.yml:

sudo vim /etc/filebeat/filebeat.yml

The following configuration defines the different GitLab files to track and ship into ELK. I’ve defined a prospector for each log type so I can add custom fields to each. Alternatively, I could have defined one prospector for all of the files.

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/gitlab/gitlab-rails/production_json.log
  fields:
    log: production_json
  json.keys_under_root: true
- type: log
  enabled: true
  paths:
    - /var/log/gitlab/gitlab-rails/production.log
  fields:
    log: production
- type: log
  enabled: true
  paths:
    - /var/log/gitlab/gitlab-rails/api_json.log
  fields:
    log: api_json
  json.keys_under_root: true
- type: log
  enabled: true
  paths:
    - /var/log/gitlab/gitlab-rails/application.log
  fields:
    log: application
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

Start Filebeat with:

sudo service filebeat start

After a while, a new index will be created and you can define a new index pattern (filebeat-*) in Kibana to begin analyzing the data.
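
If you want to confirm that the index exists before heading into Kibana, a quick check against Elasticsearch's cat API does the trick. Here is a small Ruby sketch, assuming Elasticsearch is listening on localhost:9200 as in the Filebeat output above:

require 'net/http'
require 'uri'

# List any Filebeat indices on the local Elasticsearch node.
uri = URI('http://localhost:9200/_cat/indices/filebeat-*?v')
puts Net::HTTP.get(uri)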

Shipping to Logz.io

If you are using Logz.io, a few small modifications need to be applied to establish the logging pipeline.

First, you will need to download an SSL certificate to use encryption:

wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt

sudo mkdir -p /etc/pki/tls/certs

sudo cp COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/

You can now edit the Filebeat configuration file. If you like, you can make use of the Logz.io Filebeat wizard to generate the Filebeat YAML file automatically (available in the Filebeat section, under Log Shipping in the UI).

Either way, the configurations should look something like this:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/gitlab/gitlab-rails/production_json.log
      fields:
        logzio_codec: json
        token: <yourToken>
        type: gitlab-production-json
      fields_under_root: true
      encoding: utf-8
      ignore_older: 3h
    -
      paths:
        - /var/log/gitlab/gitlab-rails/production.log
      fields:
        logzio_codec: plain
        token: <yourToken>
        type: gitlab-production
      fields_under_root: true
      encoding: utf-8
      ignore_older: 3h
    -
      paths:
        - /var/log/gitlab/gitlab-rails/api_json.log
      fields:
        logzio_codec: json
        token: <yourToken>
        type: gitlab-api-json
      fields_under_root: true
      encoding: utf-8
      ignore_older: 3h
    -
      paths:
        - /var/log/gitlab/gitlab-rails/application.log
      fields:
        logzio_codec: plain
        token: <yourToken>
        type: gitlab-application
      fields_under_root: true
      encoding: utf-8
      ignore_older: 3h
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["listener.logz.io:5015"]     
    ssl:
      certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']

The main differences are:

  • Logz.io-specific fields are added to each prospector. Replace <yourToken> with your Logz.io account token (it can be found in the Logz.io UI, under Settings).
  • The output section defines the Logz.io listener and the SSL certificate to use.

Once you start (or restart) Filebeat, the GitLab logs will begin to show up in Logz.io.

Analyzing the GitLab logs

Now that your logging pipeline is up and running, it’s time to look into the data with some simple analysis operations in Kibana.

Some of the fields can be used to get some visibility into the logs. Adding, for example, the ‘type’ field (the ‘log’ field in case you are using your own ELK), helps give the logs some context.

We can use Kibana queries to search for specific log data. Say, for example, you want to take a look at failed logins into the system. To do this, we would use this combination of a field-level and free-text search:

type:gitlab-application AND "failed"

Analyzing logs

Another example could be querying Elasticsearch for error responses for GitLab requests:

type:gitlab-production-json AND status:[400 TO *]

GitLab requests

Using Kibana’s visualization capabilities, you can create a series of simple charts and metric visualizations to give you a nice overview of your GitLab environment. Here are a few examples.

Visualizing commits

What organization does not want to monitor its team’s productivity? A simple metric visualization will give you a counter on how many commits were performed by your team:

Fourteen commits

Likewise, we can create a line chart visualization that gives us an overview of commits over time, per user:

Commits over time, per user

Visualizing issues

In a similar fashion, you can use Kibana to keep track of opened and closed issues. A simple data table visualization gives us a breakdown of the issues opened:

Visualize issues

A line chart can give us a depiction of how many issues were opened over time:

Line chart

The list goes on. You can monitor projects created, merges, user activity, CI/CD processes, and more. The logs generated by GitLab include a wealth of information that can be tapped into for monitoring, and adding these visualizations into one Kibana dashboard gives you a nice overview of your environment.

End dashboard

End notes

The ELK Stack offers built-in storage, search and visualization features that complement GitLab’s rich logging capabilities. Using Filebeat, building a logging pipeline for shipping data into ELK is simple. If you want to further process the logs, you might want to consider adding Logstash into your pipeline setup.

Logz.io provides some tools to help you hit the ground running – easy integration steps, as well as the monitoring dashboard above. To install the dashboard, simply search for ‘GitLab’ in ELK Apps and hit the install button.

Enjoy!

About the guest author

Daniel Berman is Product Evangelist at Logz.io. He is passionate about log analytics, big data, cloud, and family and loves running, Liverpool FC, and writing about disruptive tech stuff. Follow him @proudboffin.

Discourse News ( Feed )
Tuesday, 10 April 2018
Effectively using Discourse together with group chat
Most modern businesses and organisations today are using some kind of team chat application. The usual suspects are Slack, HipChat, Discord, Mattermost, Rocket Chat, Riot, and Gitter, to name a few. While chat is immediate and primarily synchronous, communication in Discourse is gradual and asynchron

Courtesy of Claudio Nichele and Jon Worth - https://www.flickr.com/photos/cnichele65/14060880832/in/photostream/
Most modern businesses and organisations today are using some kind of team chat application. The usual suspects are Slack, HipChat, Discord, Mattermost, Rocket Chat, Riot, and Gitter, to name a few.

While chat is immediate and primarily synchronous, communication in Discourse is gradual and asynchronous. We’ve seen far too many community managers treat these two modes of communication as competitors. Quite the contrary: chat and forum communities can complement one another beautifully, and we aim to show exactly how by breaking it down into three different levels of understanding.

  1. Ephemeral vs Permanent
    What is the difference between a chat and a forum community
  2. Strengths and Weaknesses
    When to use which tool
  3. Complementary Workflows
    How to use both tools together most effectively

Ephemeral vs Permanent

The discussion that takes place in a chatroom is best described as ephemeral, meaning:

Something which lasts for a short period of time.

While a chat platform might keep a searchable record of all your conversations, the highly unstructured nature of chat makes it unsuitable as a long-term storage of knowledge.

In direct contrast to the inherently ephemeral nature of chat, forum discussions are permanent:

  1. Without end, eternal. Nothing in this world is truly permanent.
  2. Lasting for an indefinitely long time. The countries are now locked in a permanent state of conflict.

One mode of discussion is not better than the other. They both have unique benefits that go hand in hand with best-practice use cases.

Strengths and weaknesses

Chat is an essential communication tool, but when used excessively it can have a negative influence on community health. With forum communities, on the other hand, perhaps the most common misstep is to start one before your project has the necessary momentum, leaving you with a ghost town.

At Discourse we like to think of chat and forum as your extended team’s collective “short term” and “long term” memory, respectively.

Group chat is great for…

 

  1. Minimum viable communities
    Get two people in a chat room together and you’ve got yourself the beginnings of a healthy community. As long as there’s some chatter on a regular basis, the room will come off as lively and inviting to other prospective participants. This is a great onboarding strategy in the early days of a community, but there is a hard limit on how far it can scale. Doing things that don’t scale can be a winning strategy for startups and burgeoning communities alike; the key is knowing when you’ve outgrown your initial growth strategy.
  2. Real-time resolutions
    “where’s our invoice template again?”
    “I need fresh eyes on this weird query”
    “welp, I think I just broke something”
  3. Urgent notifications
    Servers are on fire!
    An urgent email was just received!
  4. Socialising
    While social chatter is a rare occurrence on a forum, this is the norm in chat, and it happens in every room, not just a designated #off-topic channel. Give it a little time and you won’t be able to avoid it: it’s about to get personal. There’s something about being present together in the same slice of time, the here and now, that lends itself to Getting Real. Getting personal on a forum feels more like broadcasting, made even weirder by metrics (Likes, Replies, Views) that can unintentionally imply that Bob’s piece of personal news didn’t “perform” as well as someone else’s. Chat imposes much less scrutiny on your shared content.

You’ll find this list to be nearly identical to that of Jason Fried’s excellent writeup on group chat on behalf of Basecamp. While eerily similar, we really did come upon these findings independently! The proof happens to be permanently recorded on meta.discourse.org and in many other communities where we discussed this.

Forum discussion is great for…

 

  1. All-inclusive dialogue

    The asynchronous nature of a forum community effectively lowers the bar about as far down as it can go. You’ll get a much greater diversity of input if you solicit feedback from anyone who’s available some time in the next 24, 72 or 168 hours as opposed to right now.

    Another little discussed benefit of “slow” asynchronous conversations (fun fact: very few things in life, especially in business, are actually URGENT) is that it encourages walking away from the discussion for a while, which is scientifically proven to improve critical thinking.

  2. Communities of scale

    Similar to what version control did for code and wikis did for encyclopaedias, community platforms like Discourse have long since solved the “too many chefs” problem for discussion at scale. Hundreds or even thousands of people can discuss an equal amount of topics simultaneously on Discourse because (1) discussions are broken up into logical topic blobs and (2) long-form input is strongly encouraged over rapid-fire back-and-forth debating.

  3. Knowledge storage & distribution

    The permanence of a Discourse topic makes it an excellent store of knowledge. Some will argue that forum posts become outdated, but we’ve found that when a particular solution stops working, users will promptly resume the topic discussion until it arrives at a satisfactory solution once more.

    This is further helped by

    • Search-friendly content
    • Discoverable content, helped by Categories, Titles, Participants, Top rankings and strictly linear discussions with minimal digressions and noise.
    • Extra exposure for content with many Likes
    • Marking solutions as the official answer
    • Collaborative editing with full revision history
  4. Civilized discussion

    Most current group chats have very limited moderation controls. And while they may eventually catch up, the experience will always be sub-par simply because it’s real-time. If you hold someone’s comment for moderation in chat, that’s incredibly frustrating because you’re expecting live discussion. On a forum, on the other hand, the expectation is that you’ll get a reply within a few hours or even days after posting, so if your post gets flagged? No biggie, you can wait.

Complementary workflows

So let’s assume you’ve already made the wise decision to use both chat and Discourse for your community and you understand when to use which tool. But how do you best use them together?

Continually enforce your communication policies

It’s important that your project’s community leaders (which should ideally translate to every major contributor to your project) actively enforce the norms that you’re striving for. Lead by example and politely nudge newcomers in the right direction when they’re asking for guidance in the wrong place or in the wrong way.

New user: How can I do X?
Helpful user: Good question. Please re-post this to our public forum so that any answers you receive can be searched for and read by anyone else who might be asking the same thing.

We’ve heard of some communities utilising bots with some success for this type of best-practice enforcement. If anyone has any such stories to share with us, please send us an email at team@discourse.org!

Feed highlights from Discourse into chat

Important discussions in Discourse should be fed into your chat for extra exposure. There are many ways to do this:

It’s important to note that you shouldn’t be mindlessly feeding all of your Discourse discussions into your chat. Make sure you only cross-post the major highlights, i.e. just a few select categories or tags and the occasional manual curation.

Let chat conversations run their course, then post a summary

When a conversation takes place in chat that is related to an existing thread on Discourse, point out the link to the Discourse thread in chat. Don’t try to move the conversation over while it’s in flight though – that’s often disruptive. Instead, if anything new came out of the chat, encourage the primary chat participants to summarize it in a follow-up post on the Discourse topic.

Similarly, any chat content with lasting value to other community members should be exported over to Discourse once the initial burst of chatter comes to an end. Examples include:

  • team members walking through a problem together and arriving at a solution
  • deep conversation that is sure to continue for multiple days
  • new user who asked a simple yet frequently asked question
  • minutes of a meeting

Specifically for Slack, we have a very handy tool to help with this.

/discourse post 20 will create a draft topic containing the last 20 Slack messages. See this how-to for more info.

This is currently only possible with our Slack integration. We welcome other group chat services to talk to us about how to make this possible for their platform. Please contact team@discourse.org.


For some further reading, check these out.

Must-reads:

Also interesting:

diaspora* security release 0.7.4.1

We just released diaspora* version 0.7.4.1 which fixes a possible cross-site scripting issue with maliciously crafted OpenGraph metadata.

Updating

Please update as soon as possible. Update instructions are available as usual in the wiki.

We just released diaspora* version 0.7.4.1 which fixes a possible cross-site scripting issue with maliciously crafted OpenGraph metadata.

Updating

Please update as soon as possible. Update instructions are available as usual in the wiki.

GitLab News ( Feed )
Monday, 09 April 2018
GitLab Patch Release: 10.6.4

Today we are releasing version 10.6.4 for GitLab Community Edition and Enterprise Edition.

This version resolves a number of regressions and bugs in this month's 10.6 release and prior versions.

GitLab Community Edition and Enterprise Edition

Available in GitLab Core, Starter, P

Today we are releasing version 10.6.4 for GitLab Community Edition and Enterprise Edition.

This version resolves a number of regressions and bugs in this month's 10.6 release and prior versions.

GitLab Community Edition and Enterprise Edition

Available in GitLab Core, Starter, Premium, and Ultimate:

Available in GitLab Starter, Premium, and Ultimate:

Available in GitLab Premium and Ultimate:

Upgrade barometer

This version does not include any new migrations, and should not require any downtime.

Please be aware that by default the Omnibus packages will stop, run migrations, and start again, no matter how “big” or “small” the upgrade is. This behavior can be changed by adding a /etc/gitlab/skip-auto-migrations file, which is only used for updates.

Updating

To update, check out our update page.

GitLab subscriptions

Access to GitLab Starter, Premium, and Ultimate features is granted by a paid subscription.

Alternatively, sign up for GitLab.com to use GitLab's own infrastructure.

Redmine News ++ Jean-Philippe Lang ( Feed )
Saturday, 07 April 2018
Redmine 3.4.5 and 3.3.7 released

These maintenance releases are available for download.
You can review the changes in the Changelog.

Thanks to all the contributors who worked on these releases!

These maintenance releases are available for download.
You can review the changes in the Changelog.

Thanks to all the contributors who worked on these releases!

GitLab News ( Feed )
Friday, 06 April 2018
How contributing to open source can help you land your first job

Contributing to open source can significantly boost your chances of getting a job. And even though this is true for all developers, regardless of their level of experience, it's especially important for entry-level ones.

Let me make this perfectly clear: contributing to open source is the

Contributing to open source can significantly boost your chances of getting a job. And even though this is true for all developers, regardless of their level of experience, it's especially important for entry-level ones.

Let me make this perfectly clear: contributing to open source is the most effective job-seeking hack you can take advantage of right now.

Even better, by contributing to open source you won't only improve your chances of getting a job, but you will also give back to the community, meet amazing and talented people, and feel incredibly accomplished when your first contribution gets accepted.

At Microverse, the company I founded, we train remote software developers from all around the world, and we ask them to contribute to open source, starting from their first day in the program.

Here are six reasons why contributing to open source will help you too.

Reason 1: Work as part of a (distributed) team

When looking for a job, experience counts. However, experience limited to coding and the language syntax is not enough. You need to know how to work as part of a team, collaborating with others to build large and complex applications.

How do you get that kind of collaborative and at-scale experience if you can't get a job first? The answer is open source.

Large, open source projects are almost always built by a large team. Sometimes the people in those teams even work for large organizations (e.g. GitLab, React/Facebook, etc.). By becoming a contributor you get the chance to work with those exceptional teams without having to be hired by those companies.

You will sharpen your written communication skills, understand how to pick and negotiate things to work on, perfect your Git Flow/GitLab Flow, and many other things that are as valuable as understanding the language syntax.

Reason 2: Work in a complex and large application

If you join a company, you will most likely work on an existing application. And, probably, it will be a large and complex one. As a coding student you rarely have the chance to do that, but when you join an open source project, that's exactly the scenario that you will face.

You will first need to set up your local development environment following the contributing guidelines provided by the project. You will then start by refactoring existing code to correct typos and fix small bugs, the same way you would at a regular job! Finally, you will start understanding how all the pieces of a large application fit together, how it was architected, and where the code for each functionality lives.

These are not things you could experience working on small learning projects, but you need this kind of experience if you want to land a job.

Reason 3: Get a lot of good feedback

Every time you pick an open source issue to work on, you will start by forking the project and creating a feature branch. You will write tests and code until you are happy with your solution, and then submit a merge request to the original code.

However, this is just the first step in the process. One of the project's main developers will review your merge request and will tell you if it's ready to be merged. Most likely it won't be. But that's fine, because she will also provide feedback about what you need to fix before your code can be merged.

Can you imagine getting this kind of direct feedback from a seasoned developer at GitLab or Facebook? Think about it… they really want your help, but they also need to keep the quality of the code at a high level. They will help you, and you will end up learning a lot in the process.

Reason 4: Build an online reputation

Getting experience working as part of a team and contributing to large and complex applications is really important, but it won't help you land a job unless companies can find you and want to interview you.

Contributing to open source will help you with that too. After quickly reading your resume, employers will want to find you online, and they will want to see your code. GitLab and GitHub profiles are the new resumes.

If employers can see that you are an active member of large open source projects, that will tell them something else that is very important: software is not just what you do for a living, but it's also your passion and hence what you do in your free time.

What do employers currently find when they search your name on Google? Open source will make you look great!

Reason 5: Network with the community

Open source projects often have large organizations behind them who are constantly hiring new developers. Wouldn't it be great for those organizations if they could hire people who love their product? What if their new hires knew the product so well already that they could be productive contributors from the moment they join the company?

Well, that's exactly the value you offer as an active member of an open source community. You know the product, you know the code, and the people behind the project know you. Chances are that you will eventually be offered a job with them. In fact, almost a third of the first 40 engineers that GitLab hired were contributors to its codebase first.

Reason 6: Stay motivated

Last, but not least, we all know the single most important piece of advice for succeeding at anything: perseverance. However, staying motivated and focused while learning to code and applying for jobs is not easy. There are a lot of things to learn, a lot of different paths to take, and many rejections on the path to landing your first job.

Joining an open source project will give you real-world encouragement and a community to support you throughout the journey.

Are you convinced that contributing to open source is the best thing you can do right now to help you on your way to landing your first job? I'm pretty sure you are. Go ahead and start now!

About the guest author

Ariel Camus is the founder of Microverse, a company finding the world's untapped talent and training it to become remote software developers. Ariel was previously the co-founder and CEO of TouristEye, a travel startup that he grew to a million users and sold to Lonely Planet in 2013.

Cover photo by Maik Jonietz on Unsplash

{}
GitLab News ( Feed )
Thursday, 05 April 2018
GitLab + Google Cloud Platform = simplified, scalable deployment

Get super-simple deployment for your app with GitLab and Google Cloud Platform (GCP): thanks to our integration with Google Kubernetes Engine (GKE), you can now get CI/CD and Kubernetes deployment set up with just a few clicks, and $500 credit to get you started.

Now everyone can get automatic code quality, security testing, and no-configuration deployment

With increasing adoption of cloud native practices, the use of microservices and containers has become critical to modern software development. Kubernetes has emerged as the first choice for container orchestration, allowing apps to scale elastically from a couple of users to millions. It's been possible to deploy to Kubernetes from GitLab for quite a while, but the process of setting up and managing everything was manual and time intensive.

Today, we’re happy to announce we've been collaborating with Google to make Kubernetes easy to set up on GitLab. Now, with our native Google Kubernetes Engine integration, you can automatically spin up a cluster to deploy applications, with just a few clicks. Simply connect your Google account, enter a few details, and you're good to go! GitLab will create the clusters for you. The clusters are fully managed by Google and run on Google Cloud Platform's best-in-class infrastructure.

This also means you can easily take advantage of GitLab Auto DevOps. This feature does all the hard work for you, by automatically configuring CI/CD pipelines to build, test, and deploy your application. To make use of Auto DevOps, it used to be necessary to have an in-depth understanding of Kubernetes, and you had to manage your own clusters. Not any more!

With the integration between GitLab and GKE, we’ve made it simple to set up a managed deployment environment on Google Cloud Platform and access our robust DevOps capabilities. That’s all the benefits of fully automated code quality, security testing, and deployment, with none of the headache of managing and updating your clusters (Google does that all for you!). More than half of developers and 78 percent of managers in our 2018 Global Developer Report agreed that automating more of the software development lifecycle is a top priority for their organization. We hope that this integration gives you a head start, by offering automation out of the box with Kubernetes and Auto DevOps.

What’s next for GitLab?

We’re not just excited about offering this integration for you to use, we’re excited to use it ourselves! We’re already in the process of migrating GitLab.com to Google Cloud Platform. For us, the primary reason to migrate was because it has the most mature Kubernetes platform. By moving, we get access to security functionality like default encrypted data at rest, a broad, ever-expanding list of localities served globally, and tight integration with our existing CDN for faster caching. Be on the lookout for more information on our migration as it progresses.

Get seamless integration with GKE and $500 credit for your project

Every new Google Cloud Platform account receives $300 in credit upon signup. In partnership with Google, GitLab is able to offer an additional $200 for both new and existing GCP accounts to get started with GitLab’s GKE integration. Here's a link to apply for your $200 credit.

Join Google and GitLab for a live demo

On April 26th, join Google’s William Denniss and GitLab’s William Chia for a walkthrough of the new GKE integration. You’ll learn how easy it is to set up a Kubernetes cluster, how to deploy your app using GitLab CI/CD, and how GKE enables you to deploy, update, and manage containerized applications at scale.

Register today!

{}
GitLab News ( Feed )
Wednesday, 04 April 2018
GitLab Security Release: 10.6.3, 10.5.7, and 10.4.7

Today we are releasing versions 10.6.3, 10.5.7, and 10.4.7 for GitLab Community Edition (CE) and Enterprise Edition (EE).

These versions contain a number of important security fixes, and we strongly recommend that all GitLab installations be upgraded to one of these versions immediately.

The vulnerability details will be made public on our issue tracker in approximately 30 days.

Please read on for more information regarding this release.

Confidential issue comments in Slack, Mattermost, and webhook integrations

Comments on confidential issues were previously sent to webhooks and integrations when notifications were configured to send notes or comments. This applied to custom webhooks, Slack, and Mattermost notifications.

We've introduced a new option to control the sending of confidential notes as well as an option for specifying a different channel for Slack and Mattermost.

Versions Affected

Affects GitLab CE/EE 8.6 and up.

Remediation

We strongly recommend that all installations running an affected version be upgraded to the latest version as soon as possible.

Persistent XSS in milestones data-milestone-id

The milestone dropdown feature contained a persistent XSS issue that is now resolved in the latest release. This issue has been assigned CVE-2018-9244.

Thanks to fransrosen for responsibly reporting this vulnerability to us.

Versions Affected

Affects GitLab CE/EE 9.2 and up.

Remediation

We strongly recommend that all installations running an affected version be upgraded to the latest version as soon as possible.

Persistent XSS in filename of merge request

Filenames in the changes tab contained a persistent XSS issue that is now resolved in the latest release. This issue has been assigned CVE-2018-9243.

Thanks to fransrosen for responsibly reporting this vulnerability to us.

Versions Affected

Affects GitLab CE/EE 8.4 and up.

Remediation

We strongly recommend that all installations running an affected version be upgraded to the latest version as soon as possible.

Upgrade barometer

This release includes one database migration, which can be run without downtime. This migration adds a column to the services table. Another background migration is launched to populate this value.

Updating

To update, check out our update page.

{}
GitLab News ( Feed )
Thursday, 29 March 2018
GitLab Patch Release: 10.6.2

Today we are releasing version 10.6.2 for GitLab Community Edition and Enterprise Edition.

This version resolves one regression in this month's 10.6 release and prior versions.

GitLab Community Edition and Enterprise Edition

Available in GitLab Core, Starter, Premium, and Ultimate:

Upgrade barometer

This version does not include any new migrations, and should not require any downtime.

Please be aware that by default the Omnibus packages will stop, run migrations, and start again, no matter how “big” or “small” the upgrade is. This behavior can be changed by adding a /etc/gitlab/skip-auto-migrations file, which is only used for updates.
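For example (assuming an Omnibus installation), creating that file with something like sudo touch /etc/gitlab/skip-auto-migrations before upgrading should cause the package to skip the automatic migration step, letting you run migrations yourself at a time of your choosing.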

Updating

To update, check out our update page.

GitLab subscriptions

Access to GitLab Starter, Premium, and Ultimate features is granted by a paid subscription.

Alternatively, sign up for GitLab.com to use GitLab's own infrastructure.

Polishing GitLab’s UI: A new color system

We receive a lot of feedback from our users and the broader community. After hearing that there is a perceived lack of consistency and quality in GitLab’s UI, we decided to take a look at our color palette.

Aesthetic aspects like this are a fundamental part of the UI. If we don’t get these right, everything else in the UI won’t feel, look, or behave correctly. Like a house, these aesthetics are the foundation upon which everything else is built.

Our color palette had various issues, so that's where we started.

Why start with colors?

There are many aesthetic aspects to a UI. So why tackle colors first? Well…

  • Colors are easy to change: it’s just a matter of changing simple values in our variables.scss file.
  • Color changes don’t affect layout: we weren’t reinventing the wheel, so these changes wouldn’t influence the layout and spacing between elements like typography can.

And, more subjectively, colors have a huge impact on the perception of a UI. It’s said that 90 percent of information entering the brain is visual and color is an attention-grabbing device.

Issues with the previous color palette

Previous color palette

It didn’t extend the brand colors

They weren’t in line with our brand colors, with the most obvious example being the pinkish-red normally associated with negative aspects like errors or irreversible actions. We already have a red from our brand, so why use a different one?

There were too many similar colors

With so many colors, it wasn’t easy to tell them apart. They were so similar that they no longer brought value to the table, just more guesswork and maintenance.

There wasn’t enough contrast

Many of our color combinations did not meet the contrast ratios defined in the Web Content Accessibility Guidelines (WCAG).

Note that some of these issues were also applicable to grayscale colors (also called “achromatic”).

Building a better palette

At GitLab, we’ve done a lot of things while standing on the shoulders of giants, aligning with our company value of boring solutions. As such, one of our initial thoughts was to use an existing color palette, something that could save us time and maybe serve as the basis for our work.

We soon found Open color, an open source color scheme optimized for UI. It has 13 hues, each with 10 levels of brightness, totaling 130 different colors. All of the values are there, and it would have been easy for our Frontend team to get started by importing it as a dependency. This was starting to look very promising, and we were getting excited about this quick start.

However, the more we thought about our current needs and goals, the more we realized that this approach wasn't going to work for us. Existing color palettes usually had too many colors for our needs, and the ones we did need would have to be tweaked to align with our brand colors. All of the upsides of using an existing color palette were now irrelevant.

We went back to the drawing board, starting with defining the goals we wanted our new color palette to achieve:

  • Align with and extend our brand colors
  • Have only the hues that we need, the colors that have meaning in the UI
  • Be accessible by passing the WCAG

1. Extending the brand

The first step in creating our new color palette was inspired by “Add Colors To Your Palette With Color Mixing,” where we used ColorSchemer Studio to generate this color wheel from the three brand colors and the primary purple used on this site:

Color wheel generated from the brand colors

Initial colors were separated by even intervals of hue and manually tweaked. In the image above, the matching brand colors are next to the wheel for reference.

2. Cutting the rainbow

Then, we generated tints and shades for some of the hues in that color wheel: green, blue, purple, red and orange.

Tints and shades

These were first obtained from the Material Design Palette Generator and then tweaked manually using Colorizer and Eric Meyer’s Color Blender. The dark orange colors are a good example of manual tweaking as they initially looked very “muddy.”

It’s important to consider the number of tints and shades that you need, as that affects the flexibility when applying those colors. Our guiding principle here was to provide clear and visible contrast between each step of the scale. If we had steps that were too similar, the difference wouldn’t be noticeable, which meant that there was no value in having those colors.

We didn’t want all of the colors of the rainbow, just the ones that carry meaning effectively. We want to be able to communicate states and actions by applying colors to elements in the UI (e.g. informational elements are associated with blue). If you have too many similar colors in a UI, like green and lime, you’re expecting too much not only of your users but also of your team. On the one hand, most of your users won’t notice the difference between colors when placed in a complex UI, so they also won’t pick up the different meanings. On the other hand, your team will have more work learning, working with, and maintaining unnecessary colors.

Additionally, we shouldn’t rely on color alone to communicate something, so that’s also another point for not having too many similar colors. This is actually one of the success criteria of the WCAG about the use of color:

Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.

3. Colors for everyone

Using a small set of colors, which allows for better memorization and recognition, is already a good step towards a more usable product, but it's not enough.

Evaluating, testing, and prioritizing accessibility problems is one of our main initiatives here at GitLab. Establishing contrast between text and background is one of the key aspects of accessibility and, as we saw before, our previous color palette didn’t meet the WCAG contrast ratios. So, as we were defining our new color palette, we continually tested the colors using the WebAIM Color Contrast Checker.

Along the way, we hit a problem: combinations of white text over green or orange backgrounds did not pass WCAG level AA for small text. This was an issue because we wanted to keep a uniform “vibrancy” and “pop” throughout all colors. While the colors looked uniform to our human eye, the WCAG test didn’t “see” them as we did. Would we be forced to “break” this visual consistency and use darker shades for those colors? Not only that, but this would render them too dark to carry meaning effectively. In the following example, the “success” meaning of green or the “warning” meaning of orange become less immediate as their contrast increases.

Warning and success elements can be more or less noticeable but that affects the result of the WCAG contrast tests

We found an interesting take on this at the Google Design website, which intentionally uses colors that at least pass AA for large text:

Due to this site’s purpose being a source for visual design reference and inspiration, we felt it was acceptable not to target a stronger color contrast level. — Behind the Code — Google Slash Design Accessibility

Considering our audience and user base, should we be rigid and enforce the AA level for small text? As a first step towards better color contrast, we decided to set our minimum at the AA level for large text, even when the text is small. For grays, we tested and tweaked their contrast against light gray backgrounds, as that is a common color used to differentiate regions in the UI.

All tints and shades with corresponding WCAG levels, including grays

Color priorities

So, after all this work, we introduced a wide range of color tints and shades with the new color palette. The problem was that there was no guidance for using them. Some color decisions are fairly quick and intuitive, but we wanted to standardize and make the color selection process as objective as possible for everyone, developers included. We want to give people the chance to make a decision without requiring approval or reviews from the UX team. We want to be lean, efficient, and focused on results.

Some questions that we should be able to answer:

  • “I need to use one blue, which shade should I pick?”
  • “This UI component needs three contrasting shades of green. Can I pick whichever I want?”

The Material Design colors have been a great source of inspiration for us. They follow the numeric naming conventions used by the CSS font-weight property, where a higher value equals a higher degree of blackness. So, we’ve named our colors from the lightest (50) to the darkest (950).

On top of this naming scheme, we’ve defined a system of color priorities. This is similar to how different font weights are used to create contrasting typography that communicates hierarchy.

We can apply this same logic to colors, as seen in the image below, by tagging them according to their priority: from 1 to 4. If you need guidance, the priorities can help you make better choices. When choosing how to apply color to a UI component:

  • You start at priority 1, which is the medium weight 500. There’s only one shade with priority 1 per color (the “default” shade).
  • For more shades of the same color, you could then choose from the next priority level, number 2, which can either be 300 (lighter) or 700 (darker). And so forth for even lighter or darker shades.
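For example, a hypothetical alert component that needs three contrasting shades of orange would start with orange 500 (priority 1) and then reach for 700 and 300 (priority 2), rather than picking arbitrary neighbors from the scale.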

All tints and shades with corresponding priorities, names, and WCAG levels, including grays

What’s next

Along the way, we’ve learned that mixing colors and defining color palettes is neither purely science nor purely art; it’s a subjective balance in the human mind. Color harmony depends on many factors, like culture, age, social status, or even the designer’s intent.

We’ll have to see how people use the 11 tints and shades and how they’re applied in our Design System. This is a constant evolution, and we’re always iterating (as we should be).

Next, we’re going to review our color meaning guidelines and be more active in their usage, not only in the product but also in our Design System and pattern library.

A new color palette and a color priority system are seemingly small steps towards a better user experience throughout GitLab, but they make a big difference for our users, our team, and every contributor. This is the first initiative to polish our UI styles; next, we’re implementing our new type scale, which will deserve a dedicated blog post.

If you have any questions, feel free to post a comment below, tweet at us, or join the discussion on the following issues:


Cover image by David Clode on Unsplash.

{}
GitLab News ( Feed )
Wednesday, 28 March 2018
GitLab Patch Release: 10.6.1

Today we are releasing version 10.6.1 for GitLab Community Edition and Enterprise Edition.

This version resolves a number of regressions and bugs in this month's 10.6 release and prior versions.

GitLab Community Edition and Enterprise Edition

Available in GitLab Core, Starter, Premium, and Ultimate:

Available in GitLab Premium and Ultimate:

Omnibus GitLab

Upgrade barometer

This version does not include any new migrations, and should not require any downtime.

Please be aware that by default the Omnibus packages will stop, run migrations, and start again, no matter how “big” or “small” the upgrade is. This behavior can be changed by adding a /etc/gitlab/skip-auto-migrations file, which is only used for updates.

Updating

To update, check out our update page.

GitLab subscriptions

Access to GitLab Starter, Premium, and Ultimate features is granted by a paid subscription.

Alternatively, sign up for GitLab.com to use GitLab's own infrastructure.

{}
GitLab News ( Feed )
Tuesday, 27 March 2018
High-efficiency innovation: 3 lessons to learn from GitLab's culture of rapid execution

All companies have different ways of creating new products and services. Despite that, there are a few patterns that show up consistently. At Jump, we like to call those patterns the different "cultures" of innovation. One such pattern has to do with execution. Great executors (like GE and FedEx) are masters of sharp focus and efficient machine-making.

Many of the Fortune 500 companies that we work with do their best innovation this way. They've built infrastructure that excels at launching products globally, coordinating thousands of employees and operating at massive scale. These companies often ask us what they can learn from what's going on in Silicon Valley. There's much to learn, of course, from the startups and entrepreneurial ecosystem here.

The important question is not "How do they do things in Silicon Valley?" Instead, it's "What can I learn that would work well in my organization?" It's always exciting to come across a startup that's doing what these big companies do best – execute at scale – and doing it in a completely different way.

GitLab is one such company. They're an open source software company powering many of the world's largest corporations. They've developed a surprising – and strong – culture of innovation. They're a remote-only company. There's no physical headquarters or office space for their 200+ employees located worldwide. They proudly admit that they value "boring solutions." Their entire business strategy is available for the public and their competitors to see. They're respected for their product, their culture, and their results.

Many companies pride themselves on their ability to iterate quickly and answer yes/no decisions rapidly. Even they might be surprised at the scope and scale of GitLab's efficiency. GitLab drives high-efficiency innovation through a culture of rapid execution. They weave speed directly into the fabric of who they are and what they do. Do you want to learn how they do it? I recently shadowed GitLab's CEO, Sid Sijbrandij, and his team for a day.

Here's how they make it happen.

When the answer is clear, build for speed. Speed wins.

Why build a culture of rapid execution?

With such a unique team culture and set of business practices, the first thing I wanted to learn from Sid was why GitLab operates the way it does. What became clear was that it's all very intentional.

A few key beliefs are central to the decisions they've made:

Belief 1: The solution required to win is already super clear to everyone.

They're operating in a market called DevOps, which is about creating platforms and tools for software developers to use in their work. It's a market where both the unmet customer need and the ideal solution are clear to everyone.

They were newer to the game than some brand name and legacy competitors, so they chose to prioritize speed over invention to get to the finish line first.

Belief 2: If you don't do anything new, you can do things faster, bigger and better.

The folks at GitLab believe that it's better to be boring. They value "boring solutions." It's not because boring is better in and of itself. It's because boring is efficient. It's faster. And faster can become bigger. And when you add in collaboration with a global open source community, bigger can become better.

If there's a market standard, they don't try to create something different. They get on board. As Sid says, "It's about convention over conviction. We make sure everyone [in the open source community] is enticed to participate. If the rest of the world is doing it in some way, we should be doing it in that way."

Belief 3: It's OK not to make everyone happy.

It's hard for most companies – and most people – to change what made them successful in the first place. For GitLab, making those kinds of changes is critical to achieving the growth they seek. So on a daily basis, they choose to act quickly, make mistakes quickly, and learn from those mistakes quickly.

That can lead to decisions – big and small – that might not make everyone happy.

When they launch a completely new version of GitLab (they're on version 10.6 right now), they always add some things that will frustrate some existing customers, and they often take away things that other customers love.

"There's way more people not using GitLab than that are. So we should always optimize for those future customers, not your current ones. That's why companies slow down. They start listening. Engineers want to fix the current bugs. Sales wants to keep the old deck that works for them. You start listening to your customers and what they need you to maintain or fix. The natural motion of any company is to slow down. So as CEO you need to get the company beyond that."

So what do high-efficiency innovation and rapid execution look like at GitLab?

Here are a few examples of the pace at which they operate:

  1. They release a new version of GitLab every single month.
  2. Everything is in draft and subject to change. It's always under construction.
  3. They don't repeat themselves. GitLab documents how it does things in a handbook. It's 1,000 pages long. If it's in the handbook, don't repeat it.
  4. Every conference call starts on time. No wasted minutes. Sid checks 15-30 action items off the list in each of his 25-minute 1-on-1 meetings.
  5. They trust their team to multi-task appropriately. If you want to check email during a meeting, it's probably more important than the meeting is to you.

There's a final, often-overlooked value of speed: it's exciting. Workplaces that manage to pair speed with evident progress allow their teams to feel accomplished, motivated, and on the edge of their seats. It's an easy hack for maintaining employee engagement.

Don't sacrifice long-term vision for short-term speed. Be accountable for both.

What is GitLab rapidly executing on?

Many companies who prize execution do a great job at sustaining and growing their existing products. They're often quite efficient – though they could learn something from the speed at which GitLab operates. But they're more likely to struggle with thinking far out into the future.

To paraphrase Stephen Covey, there's a big difference between efficiency and effectiveness. A jet flying 1,000 miles per hour is efficient; a jet flying 1,000 miles per hour in the right direction is effective.

So if GitLab as an organization is a jet built for speed – where is it going?

Sid wants GitLab to help multiply the potential for progress that humanity can drive into the world. "Our mission is 'Everyone can contribute.' That's a long-term vision. That's 10 years. It means changing all of our culture to read-write. Think Wikipedia. They allow everyone to contribute. Imagine if we can do that. You release a lot of progress. You 10x the progress. [Multipliers like that are] thrown around so easily in Silicon Valley that you have to be cautious. But if you look at 100,000 companies using GitLab, and really being able to get their software out faster. I'm willing to stand behind that."

That means that not only is GitLab thinking about efficiency and effectiveness, but it's also thinking about impact. Impact on the scale of human progress and global culture.

That's pretty big and pretty far out. So how do they make sure the pilots keep looking way out there on the horizon while flying at supersonic speeds and maneuvering around today's obstacles?

First, you set the mission and vision. Everything starts with that mission in mind. Everyone knows it, and Sid talks about it every chance he gets.

Next, you draw that vision back into today's actions with cascading plans. Create a three-to-five-year strategy about how to get there. Craft a yearly plan and product vision – one that's concrete enough that you could show screenshots of what it will look like a year from now. Define quarterly goals (GitLab's OKRs are public), monthly targets, and smaller sprints to get you there.

Third, you make each of these regular goals highly ambitious, close-in, unambiguous, and concrete. "Setting high goals pushes people beyond their comfort zone," Sid told me. At Y Combinator, he says they taught GitLab that "20 percent is the new 10 percent." That's 20 percent growth, every single week. It's a high number, and it forces them to make completely different types of decisions.

Finally, because the short-term goals are incredibly high, you focus on iteration. Iteration is one of GitLab's core values. They define it clearly: "We do the smallest possible thing and get it out as quickly as possible." And they don't just ask developers and designers to work this way. "We put the whole company on that diet. It made sense for the product. But for marketing, sales, etc., we've gotten them there. If you say 'Grow XYZ in the next two weeks,' you do completely different things. I don't know why that is, but you do."

Encode culture and values to keep the company moving faster.

How does GitLab do what they do?

It was GitLab's strong culture and values orientation that first drew me to them as an organization. I'm often on the lookout for how leaders drive values through their organizations – from Jon Stewart on "The Daily Show" to the frontline teams at Starbucks and Zappos.

The best values-oriented organizations draw explicit links between their values, their competitive advantages, and their daily activities.

Here's where GitLab stands out.

In just one day of shadowing GitLab's staff, the team talked about values during a product meeting, two interviews with prospective employees, an analyst call and a 1-on-1 with a teammate. The whole team is drawing causal links between what it does (its business activities) and how it does them (the values they live by).

So how does that work? It requires leaders choosing to identify not just the values that matter, but also how to organize around them. Sid told us "I didn't do a very good job coding GitLab [when he and his co-founders all started back in 2011]. But I think I'm doing a good job coding GitLab the company."

As a remote-only company, "coding the company" means (1) writing things down, (2) referencing back to what's been written and (3) reinforcing it through rewards.

All of this "GitLab the company" code is captured in its handbook. The handbook is referenced in almost every conversation. The handbook consists of over 1,000 pages of text. It's a tool that GitLab uses to capture and detail out decisions that have already been made about all of its core business practices – marketing, sales, product, team operations, finance, and more. It's a constant practice for Sid and the team to reference the handbook in meetings, and to send people to look there first before continuing the conversation.

The values take a prime place in the handbook. There, values are defined, not just described. Words can mean different things in different contexts – and these values indicate a particular thing at GitLab. The definitions are brought to life with 5-15 concrete actions that employees often take for each of the six values. As Sid says, "The culture got stronger because it is written down. And because it improves and is edited over time." And then they're reinforced every day through hiring, coaching, performance reviews and casual conversations.

It's rare that companies think about linking their values with their competitive advantage. It's rarer still that a company brings its values to life through the day-to-day work. What GitLab has unlocked with its values orientation is not just good and meaningful work. It has also opened the most important competitive advantage in its business model – speed.

It says it right there in the 'Why have values' section of the handbook: "Values are a framework for distributed decision-making; they allow you to determine what to do without asking your manager." By encoding values deep into everyday activities of the company, everyone on GitLab's team can make decisions faster.

In DevOps, winning is about getting there first. GitLab coded values right into its organizational design to make sure it could always be the fastest to market.

Parting thoughts: Will high-efficiency innovation work for you?

Although they weren't thinking about large corporations, the oracles of Delphi were right. The most important maxim is to "know thyself." The GitLab prescription isn't right for every company. What's most important is to build a culture of innovation that reflects your strengths and your values.

GitLab is a company of executors, of coders and of people who aren't afraid to work out in the open and make mistakes. They see clear problems. Then they attack. GitLab built a method of innovation that works well for them, but it's not a one-size-fits-all approach. It won't work for everyone, but it might work for you.

Here are the questions you should ask:

  1. Is the problem you're facing clear to you and your competitors?
  2. Would the people on your team prioritize efficiency over novelty if it'll get you there first?
  3. Do you know how to make trade-offs between what works for your existing customers and what might work better for future customers?

If you answered yes, pay close attention to what GitLab is doing. Their unrelentingly quick iterative process might be just what the doctor ordered to scale your innovation.

If not, the GitLab system isn't the right fit for you. You'll want to organize your innovation in a different way.

As one example, we built Jump to handle an entirely different type of highly ambiguous problem. So it makes sense that some of Jump's values (Passion, Curiosity, Enthusiasm, Intention, Acuity, Initiative and Play) look very much the opposite of GitLab's values (Collaboration, Results, Efficiency, Diversity, Iteration and Transparency).

Jump and GitLab are both deeply values-oriented companies with rich and collaborative cultures focused on innovation. And yet we value different things, have different org structures, hire different types of people and work on very different types of problems.

So what if you're like me and your company's approach or market situation is quite different than GitLab's? Take this as an opportunity to learn from seeing your mirror image.

First, test parts of their approach. See what works for you and your team. Then, consider the polar opposites. Find the points where you value distinctly different things, and ask why. Learn why their method works for them, and why it wouldn't work for you. Then flip the script – what's an approach to innovation that GitLab would never do that would be a difference maker for you if you did it?

Either way, take note of what GitLab is doing and how they're doing it. It's amazing, effective, growing like crazy and a great place to work. And ask yourself – should my team be innovating like that?

About the guest author

Jay Newman is Director of Strategy at Jump Associates, a leading strategy and innovation firm. Learn more at jumpassociates.com and connect directly with Jay on LinkedIn.

Photo by Karsten Würth on Unsplash

{}
GitLab News ( Feed )
Monday, 26 March 2018
Summary of limited GitLab credentials exposed in an internal logging system
Summary

From February 19, 2018 to March 19, 2018, some GitLab.com personal access tokens and third-party credentials were inadvertently exposed publicly via an unprotected logging dashboard. The affected users represent approximately 0.5 percent of our GitLab.com user base. While the dashboard was originally protected from unauthorized access, a recent configuration change made this internal system publicly accessible.

Upon further review, we discovered a limited number of tokens and third-party credentials were incorrectly stored in this system. We’ve already mitigated all issues that led to this exposure. Our audit logs only cover a portion of the time the dashboard was exposed, but only one login performed a deep query that exposed a personal access token. This activity was positively linked to the user who immediately disclosed the issue to us. As such, we have no evidence that any of the affected credentials were used in a malicious manner.

We have notified the affected users via email to update their credentials and are implementing a series of security enhancements to prevent such issues from happening again. Note that self-hosted GitLab customers are entirely unaffected, as their credentials are stored in their own database and this logging functionality is off by default in self-hosted instances.

Credentials written to logs

On February 28, 2018 we discovered internally that GitLab’s Gitaly service was, in certain circumstances, mishandling the sanitization of private third-party credentials for push-mirrored and imported repositories. These unscrubbed credentials were then sent to our logging system (ELK) and our exception management system (Sentry).

During the investigation, we realized that some other credentials were also leaking into our HAProxy load balancers logs. These were personal access tokens used for GitLab API access when passed via query parameters in an HTTP request. Additionally, credentials that allow access to some third-party Git repositories were leaked into the logs. These have been fixed in the following:

Exposure of logs

On March 19, 2018, GitLab user Ryan Hefner discovered an authentication misconfiguration on our internal logging dashboard. A configuration change was made incorrectly on February 19, 2018, which resulted in potential public read-only access to the centralized, internal logging server for GitLab. This misconfiguration was resolved immediately upon notification.

GitLab uses an OAuth proxy service to authenticate all access to our internal logging infrastructure. Previously, this proxy service had been correctly configured to only allow access to authorized GitLab engineers. However, in the course of migrating this proxy service to our new cloud infrastructure, we inadvertently introduced a misconfiguration that allowed anyone with a valid Google account read-only access to our internal logging server, instead of the intended behavior of allowing only authorized GitLab engineering accounts.

Impact

There is no evidence of user credential abuse. However, the audit logs for the dashboard only cover seven of the 30 days in question. The only audit log activity accessing sensitive information was from the user who immediately reported the issue to GitLab. We found no evidence of abuse against the API itself. Regardless, we are taking every precaution and strongly recommending that all affected GitLab.com users rotate their personal access tokens. Affected users have received further instructions via email notification.

Although there is no evidence to suggest it happened, in the worst case an attacker could have gained access and obtained personal access tokens used in API calls between February 19, 2018 and March 19, 2018.

Mitigations

Since the discovery, we have worked to investigate and mitigate all of these related security issues. We are continually improving our security processes and logging mechanisms to ensure that similar incidents will not occur again. These improvements are:

  • Increasing logging retention periods
  • Adding fine-grained access controls to all logging infrastructure
  • Conducting regular, periodic audits of all our logs
  • Adding mandatory review of any security-relevant changes by a GitLab Application Security engineer
  • Documenting this process in our runbooks to ensure repeatability
  • Implementing an additional measure we won't disclose because informing adversaries could render it less effective

If your project or account is affected by this security issue, you will receive an email notification listing affected projects with steps to rotate your personal access tokens. To proactively change them, visit this page: https://gitlab.com/profile/personal_access_tokens.

We apologize for the impact this issue has caused our users. GitLab takes your information and your data extremely seriously. We have quadrupled the size of our internal security team in the last six months and have further plans to grow. We will learn from this incident and use it to improve upon our security even further.

In keeping with our company value of transparency we also believe in communicating about such incidents clearly and promptly. If you have any questions, please contact security@gitlab.com.

{}
GitLab News ( Feed )
Friday, 23 March 2018
GitLab’s Global Compensation Calculator: The next iteration

We know many of you have thoughts about our Compensation Calculator! We see your comments on Hacker News; we are listening and continually working on improving it. In line with our value of iteration, we have made additional changes to our Compensation Calculator. In January 2018, we released a new version to align the calculator closer to market rates, and adjust all current team members’ pay to be in line with the outputs of the iterated version. Here’s how it works.

What is our new formula?

Your compensation = SF benchmark x (0.7 x (max (0.2, Rent Index + Hot Market Adjustment) / 1.26) + 0.30) x Level Factor x Experience Factor x Contract Type Factor x Country Factor

SF benchmark

This is the employee salary at the 50th percentile for the role in San Francisco (SF), which we determine using various sources of market data including Comptryx.

Rent Index

This is taken from Numbeo, which expresses the ratio of cost of rent in many metro areas. Since we are using San Francisco benchmarks, we divide by 1.26 to normalize the rent index to San Francisco. A minimum Rent Index of 0.2 is applied so no one is paid less than 41 percent of San Francisco's market (0.7 x (0.2 / 1.26) + 0.3 ≈ 0.41).

We multiply the normalized Rent Index by 0.7 and then add 0.3, so the result equals 1 for San Francisco itself (i.e. we pay San Francisco rates in San Francisco).

Hot Market Adjustment

This is an adjustment to any US-based metro area where the geographical area Rent Index is less than the Hot Market Adjustment plus the Numbeo Rent Index, to recognize that "hot markets" tend to have a Rent Index that is trailing (i.e. lower than) what one would expect based on compensation rates in the area.

Level Factor

This is currently defined as junior (0.8), intermediate (1.0), senior (1.2), staff (1.4), or manager (1.4), and will be defined as II (0.8), III (1.0), Senior (1.2), Staff (1.4), or Manager (1.4).

Experience Factor

This falls between 0.8 - 1.2 based on our Experience Factor Guidelines:

  • 0.8: New to the position requirements
  • 0.9: Learning the position requirements
  • 1: Comfortable with the requirements
  • 1.1: Thriving with the requirements
  • 1.2: Expert in the requirements

Country Factor

This is a ratio of the calculator to market data. We determine this ratio by looking at how our calculator aligns to market in the region. If the calculator comes in higher than market, a factor lower than 1 is applied. If the calculator is in line with market, the factor stays at 1.

Contract Type Factor

This distinguishes between employee (1) and contractor (1.17). A contractor may carry the costs of their own health insurance, social security taxes, etc., so contractors receive 17 percent higher compensation to account for these extra expenses.
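As a purely illustrative example with hypothetical numbers: for a role with a $160,000 SF benchmark, a metro area with a Rent Index of 0.63 and no Hot Market Adjustment, and a senior (1.2) employee (1) with an Experience Factor of 1.0 and a Country Factor of 1, the formula gives $160,000 x (0.7 x (max(0.2, 0.63) / 1.26) + 0.30) x 1.2 x 1.0 x 1 x 1 = $160,000 x 0.65 x 1.2 = $124,800.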

The calculator can be found on each position description. For example, take a look at our Compensation Calculator for Developers.

Using San Francisco Market Data

The first step in this iteration was to gather market data and incorporate it as the benchmarks for each role. After obtaining a global data set to map to our positions, we needed to decide if New York was still the right city to pivot the benchmarks around. After some analysis, we determined that San Francisco was a better source of data, so we adjusted the formula. We also analyzed and adjusted the parameters around rent index to ensure in San Francisco you make San Francisco's benchmark.

Instituting a Minimum Rent Index

Earlier in 2017, we instituted a Geographical Areas iteration to the compensation calculator to ensure that there are not large pay differences in regions that have a similar job market. We looked at the rent indexes by region, determined any outliers on the high or low end of the rent index, and set the regional rent index at the highest of the remaining data set. With the January iteration of the compensation calculator, we also set a Minimum Rent Index so no one would be paid less than 41 percent of San Francisco’s market.

Adjusting our team’s pay

With this iteration of the compensation calculator, we wanted to align our team’s salaries according to market. We first looked at how experienced the team member is in their role by having the manager conduct an Experience Factor Review. This review verified we are paying our team in line with their experience, and not determining their experience to fit compensation. This review generates an output which is applied in the compensation calculator, but is also a great way to start the conversation around growth within each role. Managers and direct reports were able to review the experience factors and have constructive conversations around experience. Once we had all of the calculator inputs, including the up-to-date Experience Factor, our People Ops team reviewed all salaries to match the new compensation calculator. At the same time as the calculator was released, the increases to pay were also communicated.

What’s next, and why we think the compensation calculator is a powerful tool

We’ll continue to add more countries to our Country Factors list, review adding an additional factor for specialization within Development roles, review how the levels overlap when it comes to promotions, and review the Rent Indexes for countries with many data points (like the United States and United Kingdom).

We want to continue to make the calculator as reflective of market in as many locations as we can, given possible data constraints. This will go some way towards eliminating pay inequality among underrepresented groups, promote salary transparency on what each team member and candidate’s market value is, and save valuable recruiting time.

We also want to hear from you on where this calculator can continue to improve! Please let us know what you think in the comments.

Cover image by Antoine Dautry on Unsplash

{}
GitLab News ( Feed )
Thursday, 22 March 2018
GitLab 10.6 released with CI/CD for GitHub and deeper Kubernetes integration

Continuous integration, continuous delivery, and continuous deployment form the backbone of modern DevOps. GitLab features built-in CI/CD that has received a lot of positive attention from developers, enterprises, and analysts alike.

But one thing was missing: you couldn't use GitLab CI/CD with GitHub. Well, today we've fixed that.

Introducing GitLab CI/CD for GitHub

While GitLab works best when you use it all end-to-end, we also seek to play well with others. In this spirit, we've added CI/CD integration with GitHub and, more generally, the ability to integrate CI/CD with other external repos like Bitbucket. We see four primary audiences that this functionality is designed for.

Open source projects

If you have a public, open source project on GitHub you can now take advantage of free CI/CD on GitLab.com. As part of our commitment to open source, we offer all public projects our highest tier features (Gold) for free. While other CI/CD vendors limit you to running a handful of concurrent jobs, GitLab.com gives open source projects hundreds of concurrent jobs with 50,000 free CI pipeline minutes per month.

Large Enterprises

When we talk to our largest customers they tell us that they often have many teams using many different tools. They want to standardize on GitLab for CI/CD but code is stored in GitLab, GitHub, and other repos. This feature now allows enterprises to use common CI/CD pipelines across all of their different repos. This is a key audience and why we’ve made CI/CD for GitHub part of our self-hosted Premium plan.

Anyone using GitHub.com

While GitLab is designed to use SCM & CI/CD in the same application, we understand the appeal of using GitLab CI/CD with GitHub version control. So, for the next year we are making the GitLab CI/CD for GitHub feature a part of our GitLab.com Free tier. That means anyone using GitHub, from personal projects and startups to SMBs, can use GitLab CI/CD for free. Starting with 2,000 free CI pipeline minutes per month, folks can also add their own Runners or upgrade their plan to get more.

Gemnasium customers

We recently acquired Gemnasium. While we are super excited about having such a great team join our ranks, we also want to take care of folks that were using Gemnasium and provide them a migration path. We’ve already shipped Gemnasium features as part of our built-in security scanning. Now, GitLab CI/CD for GitHub allows Gemnasium customers that were using GitHub + Gemnasium to begin using GitLab CI/CD for their security needs without needing to migrate their code.

Kubernetes on GitLab keeps getting better

GitLab continues to invest in integrating with containerization. In 10.4 we released Kubernetes Cluster Integration and GKE Integration to General Availability. With this release, we make it even easier for users to use Kubernetes with GitLab. You can now deploy a GitLab Runner to your connected Kubernetes cluster with a single click. You can also monitor your connected Kubernetes cluster from within GitLab itself. And you can now also see the IP address of an Ingress controller connected to your Kubernetes cluster, again, right inside GitLab!

Join us for an upcoming event

GitLab MVP badge

This month's Most Valuable Person (MVP) is Takuya Noguchi

This release’s MVP is Takuya Noguchi. Takuya implemented a re-designed and extremely useful branches page in GitLab, making it much easier to manage active versus stale branches, especially in large projects with many ongoing branches.

Thank you, Takuya, for your contribution! We’ve sent Takuya some GitLab swag as a thank you, including a hoodie, socks, and a handmade tanuki.

GitLab CI/CD for external repos

In 2011, GitLab started out as a code repo alone. Since then, we’ve built an application for the complete DevOps lifecycle that includes rich capabilities for testing, security, packaging, deployment, and monitoring. With this newest release, you can now use GitLab for CI, or even CD and monitoring, all while your application code is hosted in an external repo.

To use GitLab CI/CD with a GitHub repository, create a new GitLab project. On the CI/CD for external repo tab, click GitHub to sign in and select your GitHub repo. Once you add a .gitlab-ci.yml file to your repo (or enable Auto DevOps), GitLab will automatically run pipelines and update the commit status in GitHub.
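As a rough sketch of what that file can look like (the job name, image, and commands here are hypothetical and purely for illustration, not taken from any particular project):

  stages:
    - test

  # A single job in the "test" stage; GitLab runs it for each push
  # and reports the result back to GitHub as a commit status.
  rspec:
    stage: test
    image: ruby:2.4
    script:
      - bundle install
      - bundle exec rspec

Commit a file like this to the repository and GitLab should pick it up and run the pipeline on the next push.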

You can also connect to any Git repo via URL and configure status webhooks manually. For example, if you’re using Bitbucket, read how to manually enable GitLab CI/CD.

To celebrate this release, we’re offering this feature promotionally as part of GitLab.com Free through March 2019.

GitLab CI/CD for external repos

Read through the documentation on CI/CD for external repos


Quick deploy of GitLab Runner to Kubernetes cluster

GitLab gives you the ability to interact with Kubernetes clusters, and it also allows easy installation of applications that can be leveraged by your project.

In GitLab 10.6 we add the ability to deploy a GitLab Runner directly on your cluster with a single click. It will be automatically available to run jobs for your project, without any further configuration needed.

Read through the documentation on installing a runner on Kubernetes

Quick deploy of GitLab Runner to Kubernetes cluster

Kubernetes cluster monitoring

Kubernetes provides a great way for developers to easily deploy and manage applications, without worrying about how or where their software is running. It is still important, however, to manage overall cluster capacity, balancing room for growth against underutilized compute costs.

GitLab 10.6 makes this easy by directly showing both the current and available compute resources for a connected cluster. For example, if a deploy board shows a pod that is failing to start, a user can simply check the cluster metrics to confirm whether resources have been exhausted.

Read through the documentation on monitoring Kubernetes clusters

Kubernetes cluster monitoring

Ingress IP address on Kubernetes cluster page

In GitLab 10.2 we released the ability to install an Ingress in your Kubernetes cluster. Once installed, the Ingress provides a public IP address that allows external access to your deployed applications.

In GitLab 10.6, you can see the IP address assigned to your Ingress controller directly from the Kubernetes page in the UI, and use it to configure a domain name to access your applications from the internet.

Read through the documentation on Ingress address for Kubernetes clusters

Ingress IP address on Kubernetes cluster page

Maintainers can push to MR from fork

Forking workflows are common in open source projects like GitLab, where contributors submit merge requests from their fork of the project back to the upstream project.

When reviewing merge requests from forks, maintainers of the upstream project can now make small fixes or rebase before merging, reducing the back and forth of accepting community contributions. Of course, maintainers aren’t limited to small fixes and can help out by adding large commits to the merge request too!

Prior to this release, maintainers could not directly contribute to a merge request from a fork, since maintainers do not automatically receive write permissions to forks. With this release, a merge request author who has write access to the source branch can grant maintainers write access to that branch by enabling Allow edits from maintainers on the merge request. When enabled, users with merge permissions to the target branch of the upstream project will be able to push to the source branch of the merge request. This setting is off by default.

Read through the documentation on maintainers access

Maintainers can push to MR from fork

Single Group Issue Board in Core and Free

GitLab’s Group Issue Board allows you to manage issues from multiple projects all at once. You can see issues from all projects within a group in a single interface, and move them across workflow stages.

This feature was previously available exclusively in the Premium and Ultimate tiers, and users in those tiers have found it very useful. GitLab Core users have also asked for this feature, saying that even one group issue board would be a great addition to their workflows. So that’s what we have shipped in this release: Core and Starter instances now have one group issue board per group, while multiple group issue boards remain reserved for Premium and Ultimate. Correspondingly, GitLab.com Free and Bronze also have one group issue board per group, with multiple group issue boards remaining in GitLab.com Silver and Gold. We think this adds significant value to the Core and GitLab.com Free tiers, and helps even more users evaluate and provide feedback on the feature itself.

Read through the documentation on Issue Boards

Single Group Issue Board in Core and Free

Other Improvements in GitLab 10.6

SAST security report on pipelines view

A few releases ago, we shipped Static Application Security Testing (SAST), which automatically finds vulnerabilities in any new code changes in a merge request. This allows you to fix them before merging, ensuring these security problems are not introduced into master and not released to production.

With this release, this same information is available in a complete SAST security report in the CI/CD > Pipelines page. This allows developers, production/systems engineers, and any other security stakeholders to have even more visibility into any security risks as your code progresses through CI/CD.

SAST security report on pipelines view

Read through the documentation on SAST security reports in the pipeline view

External Authorization Control

In some regulated environments, project classification systems are used to control access to projects, and can now be used with GitLab. When enabled, admins can set the classification of each project. In addition to GitLab access controls, access to projects will also require approval from the external authorization service.

Read through the documentation on External Authorization Control

External CI/CD configuration in Starter and Bronze

In GitLab 10.5 we added the ability to include external CI/CD configuration files into the main .gitlab-ci.yml for your project. This feature was available only to Premium users on self-hosted GitLab and Silver users on GitLab.com.

We received a lot of feedback from customers asking us to move this to a lower tier, and we are excited to bring this feature to even more users in this release by making it available to Starter users on self-hosted GitLab and Bronze users on GitLab.com. The ability to have centralized control over pipeline configuration and to reuse the same definition in multiple projects is valuable for enterprises and smaller businesses alike.

Note that as part of our commitment to open source, public projects on Free GitLab.com have features equivalent to a Gold level subscription. So those public projects will continue to have this feature.

Read through the documentation on including external CI/CD configuration

Navigate to external issue tracker

Some teams use GitLab integrated with an external issue tracker. For example, Jira issues integrated with GitLab merge requests is a popular workflow for many teams. In this scenario, GitLab issues still function as normal, and teams are free to use them, for example, in separate one-off scenarios where a team wants everything just in GitLab.

To streamline this integration, we’ve added a new link to the project navigation. If you have configured any external issue tracker (Redmine, Jira, Bugzilla, or the Custom Issue Tracker), there will be a separate link in the project navigation that allows you to quickly navigate to that external system. The GitLab issues link also remains, so there’s no confusion, and you can still use both issue trackers if you want.

Navigate to external issue tracker

Read through the documentation on External Issue Tracker

Labels in Epics

GitLab issues and merge requests support labels to enable flexible and highly customizable management of these objects. It’s an effective design that we’ve also brought to Epics in this release.

You can now assign group labels to epics from the sidebar of an epic, exactly as with issues and merge requests. And you can filter by labels on the epics list page in a group, again like issues and merge requests. Users of GitLab will thus find this feature immediately recognizable. This allows you to easily mix and match epics into different categories using the powerful search and filter bar.

Labels in Epics

Read through the documentation on Epics

Epics API

Along with the labels support mentioned above, we have kept the Epics API at parity with the web UI. You can get a list of epics using the same search and filter parameters available in the search and filter bar on the epics page. This includes searching by the epic title and description, filtering by author and labels, and ordering by “created at” and “updated at” timestamps.
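
As a quick illustration, here is a minimal sketch of listing epics via the API with Python and the requests library. The instance URL, group ID, token, and label names are placeholders, and the Epics API is available on tiers that include Epics.

```python
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
GROUP_ID = 42                               # placeholder group ID (or URL-encoded group path)
HEADERS = {"PRIVATE-TOKEN": "<your_access_token>"}

# List epics in a group, filtered and ordered much like the search and filter bar in the web UI.
params = {
    "search": "performance",        # matches epic title and description
    "labels": "backend,priority",   # comma-separated label names
    "order_by": "updated_at",       # or "created_at"
    "sort": "desc",
}
resp = requests.get(f"{GITLAB_URL}/api/v4/groups/{GROUP_ID}/epics",
                    headers=HEADERS, params=params)
resp.raise_for_status()
for epic in resp.json():
    print(epic["iid"], epic["title"], epic["labels"])
```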

Read through the documentation on Epics API

Discussions API

With this release, we have brought API support to discussions in issues, snippets, and epics. This means that all comments and discussions on these objects are now accessible via the API. Teams can leverage this API for flexible, customized, and specific workflows that are not necessarily available in the main GitLab web UI.

API support for comments and discussions in merge requests will also come in a future release.
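
In the meantime, here is a minimal sketch of reading and adding issue discussions through the API with Python and requests; the instance URL, project ID, issue IID, and token are placeholders.

```python
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 123                            # placeholder project ID
ISSUE_IID = 7                               # placeholder issue IID (the number shown in the UI)
HEADERS = {"PRIVATE-TOKEN": "<your_access_token>"}

# Fetch all discussions (threads of notes) on an issue and print each note.
url = f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/issues/{ISSUE_IID}/discussions"
discussions = requests.get(url, headers=HEADERS).json()
for discussion in discussions:
    for note in discussion["notes"]:
        print(note["author"]["username"], ":", note["body"])

# Add a comment by creating a new discussion on the same issue.
requests.post(url, headers=HEADERS,
              data={"body": "Thanks for the report, looking into it."})
```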

Read through the documentation on Discussions API

Filipino, Indonesian, and Turkish language support

As part of our ongoing effort to internationalize GitLab, we have now added support for Filipino, Indonesian, and Turkish translations.

We have also externalized strings on the Repository Locked Files (Premium and above) list page, allowing our translation community to add more languages and strings to GitLab.

If you are interested in contributing to GitLab’s internationalization efforts, we welcome you to join our translation community.

Read through the documentation on translating GitLab

SAST for Java-Maven apps

Prior to this release, GitLab already supported popular languages such as Ruby, Python, and JavaScript as part of the Static Application Security Testing (SAST) feature.

In GitLab 10.6, we are adding Maven, a common build automation tool for Java. If you are already using SAST, you don’t need to change anything in your configuration to get the new checks; they will be automatically available.

See the complete list of supported languages and frameworks.

SAST for Java-Maven apps

Read through the documentation on SAST

Authentication support for DAST

A few releases ago, we shipped Dynamic Application Security Testing (DAST), allowing you to check for security vulnerabilities dynamically and automatically in a Review App version of your work-in-progress code, before it is merged into master and released to production.

Previously, this feature was limited to public pages. With this release, you can now specify credentials that DAST will use to authenticate into your web app and to simulate an attacker that is able to access sections protected with a login process.

Authentication support for DAST

Read through the documentation on Authentication support for DAST

Branches overview

As projects and teams grow, so do the number of branches. The new branches overview and filtered branches lists make it easy to quickly find the branch you’re looking for. Branches with a commit added in the last three months are shown as active.

Thank you Takuya Noguchi for the contribution!

Branches overview

Read through the documentation on Branches

Project import/export API

Projects are extremely important in GitLab, since they contain all the valuable work (including the Git repo) and organization (including issues and merge requests) of your team. Using the existing project export and import features of GitLab, projects can easily be transferred within and between instances.

Up to now, this was a manual process. With this release, project exports and imports are now part of the GitLab API, allowing even more automated and flexible workflows when you need to move your projects within or between GitLab instances.
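
A sketch of a full export-then-import round trip could look like this in Python with requests; the instance URL, project ID, token, and target path are placeholders.

```python
import time
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 123                            # placeholder ID of the project to export
HEADERS = {"PRIVATE-TOKEN": "<your_access_token>"}
api = f"{GITLAB_URL}/api/v4"

# 1. Schedule the export.
requests.post(f"{api}/projects/{PROJECT_ID}/export", headers=HEADERS).raise_for_status()

# 2. Poll until the export has finished.
while True:
    status = requests.get(f"{api}/projects/{PROJECT_ID}/export", headers=HEADERS).json()
    if status.get("export_status") == "finished":
        break
    time.sleep(10)

# 3. Download the archive.
archive = requests.get(f"{api}/projects/{PROJECT_ID}/export/download", headers=HEADERS)
with open("project_export.tar.gz", "wb") as f:
    f.write(archive.content)

# 4. Import it elsewhere (same or another instance) under a new path.
with open("project_export.tar.gz", "rb") as f:
    requests.post(f"{api}/projects/import", headers=HEADERS,
                  data={"path": "imported-project"}, files={"file": f})
```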

Thank you Travis Miller for the contribution!

Read through the documentation on Project Import/Export API

GitLab ChatOps (alpha)

For many organizations, much of their communication, including their operations and troubleshooting discussions, is moving to chat. There is also typically an “operations toolbox,” containing frequently used commands to check on the health of an environment or to perform routine actions.

With GitLab 10.6 we wanted to make it easy to automate these routine actions and bring them into Slack itself. Getting started is as easy as adding a job to your GitLab CI YML and enabling the Slack slash commands integration. Users will then be able to interact with it by typing the slash command, the CI job name, and any relevant arguments. The job will be executed on a runner, with the output sent right back to Slack.

Read through the documentation on GitLab ChatOps

Merge Request Approvals API

Prior to this release, the Merge Request Approvals API was limited to approving and unapproving a merge request only. With this release, you can now fully configure approvals at the project level and at the merge request level, giving users feature parity with the GitLab web UI.

With the Approvals API, teams can now create more elaborate code review and approval workflows that are specific to their needs. You can use as much or as little of the API as needed to customize which parts of your workflow happen inside the GitLab web UI, and which parts happen outside.
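
For example, a workflow that combines project-level configuration, a per-merge-request override, and an approval could look roughly like this in Python with requests. The instance URL, IDs, and token are placeholders, and the Approvals API requires a tier that includes merge request approvals.

```python
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 123                            # placeholder project ID
MR_IID = 42                                 # placeholder merge request IID
HEADERS = {"PRIVATE-TOKEN": "<your_access_token>"}
api = f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}"

# Project-level configuration: require two approvals before merging.
requests.post(f"{api}/approvals", headers=HEADERS,
              data={"approvals_before_merge": 2})

# Merge-request-level override: this particular MR only needs one approval.
requests.post(f"{api}/merge_requests/{MR_IID}/approvals", headers=HEADERS,
              data={"approvals_before_merge": 1})

# Approve the merge request as the authenticated user, then inspect its approval state.
requests.post(f"{api}/merge_requests/{MR_IID}/approve", headers=HEADERS)
state = requests.get(f"{api}/merge_requests/{MR_IID}/approvals", headers=HEADERS).json()
print(state["approvals_left"], "approvals still required")
```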

Read through the documentation on Merge Request Approvals API

Business and other custom metrics

Since GitLab 9.0, developers have been able to monitor critical system and response metrics of their deployed apps, like throughput, latency, and CPU/memory utilization.

This provided a great baseline understanding of both the user experience your customers were receiving and resource utilization, directly in the tool developers use every day.

With GitLab 10.6 we have added the ability to add your own metrics, allowing deeper introspection of your application and business. For example, metrics from a credit card processing module can be added, tracking not just success rates but also revenue and order size. This can help surface failures that may not result in HTTP errors, as well as the ultimate impact on business performance.

To get started, simply provide a Prometheus PromQL query and the metric will begin to display in the dashboard.
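
GitLab itself only needs the PromQL query, but as an illustration, here is one way the hypothetical credit card processing module could expose such metrics to Prometheus using the Python prometheus_client library. The metric names, port, and simulated payment logic are illustrative, not prescribed by GitLab.

```python
import random
import time

from prometheus_client import Counter, start_http_server

# Illustrative business metrics; choose names that fit your own application.
orders_total = Counter("payment_orders_total", "Processed card payments", ["status"])
revenue_total = Counter("payment_revenue_dollars_total", "Gross revenue in dollars")

def process_payment(amount):
    # Stand-in for a real credit card charge; randomly fails to simulate declines.
    if random.random() < 0.95:
        orders_total.labels(status="success").inc()
        revenue_total.inc(amount)
    else:
        orders_total.labels(status="failure").inc()

if __name__ == "__main__":
    start_http_server(8000)          # exposes /metrics for Prometheus to scrape
    while True:
        process_payment(amount=random.uniform(5, 250))
        time.sleep(1)
```

With the application instrumented like this, a query such as sum(rate(payment_revenue_dollars_total[5m])) is the kind of PromQL expression you could then add as a custom metric in GitLab.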

Business and other custom metrics

Read through the documentation on adding business and other custom metrics

Cloud native GitLab Helm chart (alpha)

We are excited to announce that the cloud native GitLab Helm chart is now in alpha, and available for testing. This chart features a more cloud native architecture, with a container for each component of GitLab and no requirement for shared storage. These changes result in increased resilience, scalability, and performance of GitLab on Kubernetes.

It is important to note that the chart and containers are still in active development, contain known issues and limitations, and should not be used for production. For this release GitLab Premium is required, while we work to bring Object Storage support to Core.

Read through the documentation on GitLab Helm chart

Omnibus improvements

  • GitLab Mattermost 4.7 includes enhanced image preview and thumbnails, faster load times, upgraded desktop app, and security updates. Upgrading is recommended.
  • Chef has been updated to 13.6.4.
  • Omnibus has been updated to 5.6.10.
  • PostgreSQL has been updated to 9.6.8.
  • Python has been updated to 3.4.8.
  • jemalloc has been updated to 5.0.1.
  • announce-ip and announce-port are now configurable for Redis/Sentinel, to better support HA in Docker environments.

Read through the documentation on Omnibus GitLab

Deprecations

Mattermost configuration changes

With the release of GitLab 11.0, the number of Mattermost configuration options supported within gitlab.rb will be reduced. We will continue to support the core configuration settings necessary to run Mattermost, and set up the integration with GitLab. Going forward, other configuration settings should be set directly within the Mattermost console, or passed as environment variables.

Presently, with two applications attempting to write to the same config file, changes can be lost.

Due: GitLab 11.0.

The legacy gitlab Helm chart

The legacy gitlab Helm chart is deprecated. For installations on Kubernetes today, we recommend the beta gitlab-omnibus Helm chart.

A new cloud native GitLab chart is in development with increased scalability, resilience, and other benefits. This chart will replace both existing charts when generally available later this year.

For more information on GitLab Helm charts, please read the documentation on installing GitLab on Kubernetes.

Due: March 22, 2018.

Upgrade barometer

To upgrade to GitLab 10.6 from the latest 10.5 version, no downtime is required.

For this release we have migrations and post-deploy migrations.

GitLab.com migrations took approximately 30 minutes and post-deploy migrations accounted for about 10 minutes.

GitLab Geo users, please consult the documentation on upgrading Geo.

{}
GitLab News ( Feed )
Wednesday, 21 March 2018
Avoiding the DevOps tax

With the influx of DevOps-related products and services on the market, today’s application delivery toolchain has become complex and fragmented, resulting in more time spent on integrating tools instead of software innovation. Mark Pundsack, Head of Product at GitLab, and guest speaker Christopher Condo, Senior Analyst at Forrester, recently met to discuss the current state of DevOps automation and how IT leaders can unlock themselves from today’s toolchain to avoid the “DevOps tax.”

What is the DevOps tax?

In a typical DevOps toolchain, lots of different tools are tied together to deliver DevOps. You have different tools for planning, code creation, CI and security testing, packaging, release and deploy, configuration management, and monitoring.

But administering all these products and connecting them together is complex. For example, your CI needs to talk to your version control, your code review, your security testing, your container registry, and your configuration management. The permutations are staggering, and it’s not just a one-time configuration – each new project needs to reconnect all these pieces together.

That's the DevOps tax: time spent on integrating and maintaining complicated toolchains, limiting your efficiency.

What's in the webcast

In the webcast, we start by looking at digital transformation and current trends in DevOps, then dive into the DevOps tax and how to avoid it, and close with some best practices for reducing friction.

Watch the recording

Key takeaways

The digital transformation imperative

Customer experience is key

The people with the bad customer experience, their stock is lagging those companies that have an excellent customer experience. That's showing you that customer experience really matters - Christopher Condo

Expect disruption

The common thread is placing the customer first. If there's a place where the customer's not being placed first, and some company can come along with an innovative way to do it, it seems like the government is open to it and customers are certainly open to it as well - Christopher Condo

Better integration of tools

I just ran a Wave on continuous integration tools and customers told us loud and clear that they are looking for a complete, integrated toolchain because they're tired of integrating their own toolchain. It's great to have the integrated tool chain but it comes at a cost - Christopher Condo

Better integration of teams

They want to be able to check in with the security expert and say, "Here's our design, here's our architecture, here's how we're handling these problems. What are we missing? What do we need to be doing next?" All of those teams sort of act as shared resources, they don't act as blockers on a particular project - Christopher Condo

Containers are critical

What is the DevOps tax?

When it's a pain to integrate security, how many teams just don't bother? Or when it's a pain to share information between teams, how many organizations overcome that burden and find a way to work together? How much impact does this tax have on collaboration? With separate tools and separate processes, we're naturally encouraging separate silos where functional teams work in isolation - Mark Pundsack

Concurrent DevOps

DevOps best practices

  • To maximize your digital transformation, you need to optimize your CI/CD pipeline, create integrated product teams, and modernize your application architecture with microservices and a cloud native approach.
  • Avoid the DevOps tax by reducing the number of integration points in your toolchain, integrating as deeply as you can, and striving for a single conversation across development, operations, security, and business.
  • If you’re just getting started, start with continuous integration. Automating tests and building confidence in your code will pay dividends many times over.
  • If you already have CI, then move on to continuous delivery. Automate deployments and make them less scary. If you’ve already started the DevOps transformation, then embrace the culture. You can only go so far when there’s a wall between dev and ops.

{}
GitLab News ( Feed )
Tuesday, 20 March 2018
GitLab Critical Security Release: 10.5.6, 10.4.6, and 10.3.9

Today we are releasing versions 10.5.6, 10.4.6, and 10.3.9 for GitLab Community Edition (CE) and Enterprise Edition (EE).

These versions contain a number of important security fixes, and we strongly recommend that all GitLab installations be upgraded to one of these versions immediately.

The vulnerability details will be made public on our issue tracker in approximately 30 days.

Please read on for more information regarding this release.

SSRF in services and web hooks

There were multiple server-side request forgery issues in the Services feature. An attacker could make requests to servers within the same network as the GitLab instance. This could lead to information disclosure, authentication bypass, or potentially code execution. This issue has been assigned CVE-2018-8801.

Thanks to @jobert from HackerOne for reporting this.

Versions Affected

  • Affects GitLab CE/EE 8.3 and up

Remediation

We strongly recommend that all installations running an affected version above be upgraded to the latest version as soon as possible.

GitLab Auth0 integration issue

There was an issue with the GitLab omniauth-auth0 configuration which resulted in the Auth0 integration signing in the wrong users.

Thanks to Trond Hindenes for reporting this issue.

Versions Affected

  • Affects GitLab CE 8.6 and up

Remediation

We strongly recommend that all installations running an affected version above be upgraded to the latest version as soon as possible.

Updating

To update, check out our update page.

Update (2018-03-21 7:00PM UTC)

In order to address the SSRF issue, we created a new checkbox setting to allow outbound requests to local networks (IPv4 and IPv6 private address ranges). This is currently unchecked by default.

The setting is located in Admin area->Settings->Outbound Requests. If you need to allow outbound requests to your local network for hooks and services, please enable this checkbox. Note that by checking this, your GitLab instance will be vulnerable to the SSRF issue mentioned above.

To provide a more flexible and improved solution, we may add a configurable whitelist at a future date.

Update (2018-03-23 9:00AM UTC)

If you are currently using Auth0, the configuration will need to be updated slightly.

Check the Auth0 integration documentation for the correct syntax.

