Organizations that successfully implement a phased approach to continuous integration and continuous delivery (CI/CD) gain many operational advantages, which we'll outline throughout this white paper. As a vital tool for managing every stage of the DevOps lifecycle, GitLab nonetheless introduces organizational challenges around initial implementation and workflow. Notwithstanding those concerns, integration with extensible data platforms like Splunk gives GitLab CI/CD genuine business value. Yet organizations struggle to deliver that value to the enterprise as a whole. Creating a strategic plan for the deployment is how organizations can close the gap and improve time-to-value.
The focus of this white paper is therefore on optimizing Splunk and GitLab specifically, although many of the concepts we discuss apply to similar software tools. We have not yet vetted those alternatives, since this paper is limited to software already approved in the DHS Technical Reference Model (TRM).
We will first present the current lay of the land, touching on the pre-integration stage where developers select software tools and related orchestration platforms (e.g., Kubernetes for Docker). We will then walk through a phased approach to GitLab CI/CD and ultimately provide a transparent process for a multi-phase CI/CD deployment.
In this section we discuss the tools used for the methodologies described in the following sections.
The focus of this document is on supporting Splunk, but much of this guide may be applied to other tools.
These tools can be substituted for others that provide similar functionality, but such alternatives have not been evaluated by the author. The tools listed in this report have also been approved in the DHS TRM.
GitLab is a full-fledged application that features solutions for every stage of the DevOps lifecycle (https://about.gitlab.com/stages-devops-lifecycle/).
Beyond checking in all configurations related to Splunk, it allows us to track issues, submit feature requests, maintain a wiki for commonly used searches, and automate parts of our deployment.
At the core of the integration pipeline are the associated GitLab Runner(s). A GitLab Runner can execute jobs using a number of executors, including Kubernetes, Docker, and shell (https://docs.gitlab.com/runner/).
Other important considerations for GitLab were:
- The ability to integrate with LDAP for authentication and authorization, leveraging the existing enterprise Active Directory structure.
- The ability to send email alerts from our CI/CD pipeline.
- A decently documented API for integrations with Splunk (a minimal example follows this list).
- System hooks (integration with a tool like Mattermost or Slack).
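To illustrate that API consideration, here is a minimal sketch (not a vetted integration) that uses the GitLab v4 REST API to pull a project's open issues, which could then be indexed into Splunk or reported on. The host, project ID, and token below are placeholders.

```python
import requests

# Hypothetical values: substitute your own GitLab host, project ID, and access token.
GITLAB_URL = "https://gitlab.example.com/api/v4"
PROJECT_ID = 42  # numeric ID or URL-encoded "group%2Fproject"
HEADERS = {"PRIVATE-TOKEN": "REDACTED_PERSONAL_ACCESS_TOKEN"}

def open_issues(project_id):
    """Return the open issues for a project via the GitLab v4 REST API."""
    resp = requests.get(
        f"{GITLAB_URL}/projects/{project_id}/issues",
        headers=HEADERS,
        params={"state": "opened", "per_page": 100},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for issue in open_issues(PROJECT_ID):
        print(f"#{issue['iid']}: {issue['title']}")
```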
Docker has won the war on containerization. It allows system administrators and developers to capture a complex setup as an image that adheres to strict configurations and to share that configuration across environments. Red Hat, IBM, Rancher Labs, and Pivotal are a few of the players that have adopted Docker, both by developing images that run on Docker and by offering minimalistic platforms to run containers on.
Docker containers offer the following:
- Standardization: Vendors create a supported image for their software.
- Lightweight: Because containers share the host OS kernel, they are much more efficient than full virtual machines.
- Secure: Containers provide isolation from other containers and the host.
In addition, containerization allows us to persist data if we wish, or to keep it ephemeral to the container session.
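As a rough sketch of that choice, the snippet below uses the Docker SDK for Python to start the same Splunk image two ways; the image tag, admin password, and volume name are assumptions, not a hardened configuration.

```python
import docker

client = docker.from_env()

# Common settings for the splunk/splunk image (tag and password are placeholders).
env = {"SPLUNK_START_ARGS": "--accept-license", "SPLUNK_PASSWORD": "ChangeMe123!"}

# Persistent: mount a named volume over /opt/splunk/etc so configuration survives restarts.
persistent = client.containers.run(
    "splunk/splunk:latest",
    detach=True,
    name="splunk-persistent",
    environment=env,
    ports={"8000/tcp": 8000},
    volumes={"splunk-etc": {"bind": "/opt/splunk/etc", "mode": "rw"}},
)

# Ephemeral: no volume and auto-removal, so all state disappears with the container.
ephemeral = client.containers.run(
    "splunk/splunk:latest",
    detach=True,
    auto_remove=True,
    name="splunk-ephemeral",
    environment=env,
)
```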
Kubernetes is a distributed orchestration platform for Docker. It allows for highly available, distributed Docker containers with role-based access control.
Kubernetes makes it quick and easy to scale Docker containers from standard images, which can take on different configurations to customize them to the environment without any changes to the image itself.
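For example, scaling can be driven programmatically with the official Kubernetes Python client; the deployment name and namespace below are hypothetical.

```python
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of an existing Deployment."""
    config.load_kube_config()  # or config.load_incluster_config() when running in a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Hypothetical example: scale a heavy forwarder deployment to three replicas.
    scale_deployment("splunk-hf", "splunk-dev", 3)
```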
Outlined in this section is a multi-phase approach to CI/CD adoption.
The phases are subject to change and are not hard boundaries, but rather levels of maturity on the path to full CI/CD.
This is the introductory phase. This is where you install GitLab, integrate it with Active Directory (AD) and map the appropriate groups, assign a few administrators, and begin building out the projects and groups that work best for you.
The structure is meant to be simple and easy to understand for people who are unfamiliar with toolsets such as this.
In a former customer's rapidly expanding Splunk environment, one full-time Splunk architect and one government representative supported 100+ subcomponents, 50 clustered indexers, two search head clusters (ad hoc and Enterprise Security), and thousands of universal forwarders, with talks of expanding to workstations just beginning. It was imperative that we had a supporting structure for feature requests, nightly backups, and documentation of proper searching.
The structure was developed as follows:
Production Splunk Group
This provides the ability to restore any part of our Splunk environment, with nightly snapshots of search head local knowledge objects.
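A simplified sketch of such a nightly snapshot job is shown below: it copies each app's local knowledge objects and the user-level objects into a working copy of the GitLab project and commits the result. The paths and branch name are placeholders; this is an illustration rather than the exact job used in that environment.

```python
import datetime
import shutil
import subprocess
from pathlib import Path

SPLUNK_HOME = Path("/opt/splunk")          # placeholder
REPO = Path("/opt/git/splunk-sh-backup")   # placeholder working copy of the GitLab project

def snapshot_knowledge_objects() -> None:
    """Copy each app's local/ directory and all user-level objects into the Git working copy."""
    for local in (SPLUNK_HOME / "etc" / "apps").glob("*/local"):
        dest = REPO / "etc" / "apps" / local.parent.name / "local"
        if dest.exists():
            shutil.rmtree(dest)
        shutil.copytree(local, dest)
    users_dest = REPO / "etc" / "users"
    if users_dest.exists():
        shutil.rmtree(users_dest)
    shutil.copytree(SPLUNK_HOME / "etc" / "users", users_dest)

def commit_and_push() -> None:
    """Commit and push only if the snapshot actually changed anything."""
    stamp = datetime.date.today().isoformat()
    subprocess.run(["git", "-C", str(REPO), "add", "-A"], check=True)
    diff = subprocess.run(["git", "-C", str(REPO), "diff", "--cached", "--quiet"])
    if diff.returncode != 0:
        subprocess.run(["git", "-C", str(REPO), "commit", "-m", f"Nightly snapshot {stamp}"], check=True)
        subprocess.run(["git", "-C", str(REPO), "push", "origin", "master"], check=True)

if __name__ == "__main__":
    snapshot_knowledge_objects()
    commit_and_push()
```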
This structure was also important for issue creation. It allows us to track issues at the top-level group and see any sub-project issues, giving a single pane of glass for issues within Production Splunk while still allowing us to assign those issues where applicable.
Phase 1 begins the CI/CD piece. At this point, GitLab is stood up, nightly backups are fully implemented, and we want to introduce some level of change control. This excludes front-end changes and local knowledge object creation at the search head level: we want a good user experience, which means dashboard development and content creation happen right on the search head, but we want to control changes outside of that.
In environments I maintain, I prefer to set up the deployment server so that most Splunk changes happen there first and roll down to the rest of the environment (the deployment server deploys directly to master-apps and shcluster/apps). This gives us a single place to implement most changes.
Phase 1 is the most basic: changes made at this level gain visibility, approval workflows, and backups of previous versions, ensuring every change is reviewed before it rolls out.
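One example of an automated check that could back those approvals (a hypothetical pre-merge job, not something prescribed by GitLab or Splunk) is a rough lint of the .conf files in a merge request. Splunk .conf files are INI-like but not strictly INI, so a real pipeline might instead run `splunk btool check` inside a container; the sketch below only catches gross syntax problems.

```python
import configparser
import sys
from pathlib import Path
from typing import Optional

def lint_conf(path: Path) -> Optional[str]:
    """Return an error message if the .conf file fails a rough INI-style parse, else None."""
    parser = configparser.ConfigParser(interpolation=None, strict=False)
    try:
        parser.read(path, encoding="utf-8")
    except configparser.Error as exc:
        return str(exc)
    return None

def main(root: str = ".") -> int:
    failures = 0
    for conf in Path(root).rglob("*.conf"):
        error = lint_conf(conf)
        if error:
            failures += 1
            print(f"FAIL {conf}: {error}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))
```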
After we have mastered change control and have a single integration point, it is important to begin proper testing before implementation into production. At this phase, we want to make changes in development/test environments only, and it is critical to have a solid development environment that mimics production. The architecture at this point is not critical, since we are mostly focused on configuration changes and are not yet ready to worry about scaling. This is where Docker and Kubernetes are appropriate: ephemeral instances can be spun up dynamically to mimic production. Forking a portion of live production data would be an easy way to ensure we cover real use cases without spending a great deal of effort recreating test scenarios.
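A hedged sketch of that ephemeral approach with the Kubernetes Python client: create a throwaway namespace per pipeline run, deploy the test stack into it (manifests not shown), and delete the namespace when the run finishes. The naming convention is an assumption.

```python
from kubernetes import client, config

def create_ephemeral_namespace(pipeline_id: str) -> str:
    """Create a disposable namespace for one CI pipeline run and return its name."""
    config.load_kube_config()
    core = client.CoreV1Api()
    name = f"splunk-test-{pipeline_id}"  # assumed naming convention
    body = client.V1Namespace(metadata=client.V1ObjectMeta(name=name))
    core.create_namespace(body=body)
    return name

def delete_ephemeral_namespace(name: str) -> None:
    """Tear the environment down once the pipeline completes."""
    config.load_kube_config()
    core = client.CoreV1Api()
    core.delete_namespace(name=name)
```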
As more engineers begin working in Splunk, changes need to be vetted before they are integrated into production. "Sprints" allow us to stand up a development environment that mirrors production as it stands; many developers can make changes there while we ensure none of those changes collide. Changes are pushed to Git at a given interval, and once they reach a "release" state, an approver can merge them to the master branch.
Once changes reach the master branch, we can leverage a GitLab Runner to tell the production deployment server to pull the latest branch, reload the deployment server for specified server classes, or perform a full reload of the deployment server. The runner could also execute a search head cluster bundle push or indexer cluster bundle push after a specified wait period, if you so choose.
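A minimal sketch of what such a runner job might execute (assuming the repository is already checked out on the deployment server and the splunk CLI is available locally; paths, server class names, and targets are placeholders):

```python
import subprocess

REPO_PATH = "/opt/splunk/etc/deployment-apps"  # placeholder: working copy on the deployment server
SPLUNK = "/opt/splunk/bin/splunk"              # placeholder path to the splunk CLI

def deploy(serverclass=None):
    """Pull the latest master branch and reload the deployment server."""
    subprocess.run(["git", "-C", REPO_PATH, "pull", "origin", "master"], check=True)
    cmd = [SPLUNK, "reload", "deploy-server"]
    if serverclass:
        cmd += ["-class", serverclass]  # reload only the specified server class
    subprocess.run(cmd, check=True)

def push_shcluster_bundle(target):
    """Push the search head cluster bundle (run on the deployer)."""
    # Depending on how the deployer is secured, -auth credentials may also be required.
    subprocess.run(
        [SPLUNK, "apply", "shcluster-bundle", "--answer-yes", "-target", target],
        check=True,
    )

if __name__ == "__main__":
    deploy(serverclass="all_indexers")            # hypothetical server class name
    push_shcluster_bundle("https://sh1.example.com:8089")
```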
We now have the capability to deploy a development environment from the master branch of Git. Since all changes flow through Git, this should not present any conflicts. It is at this phase that we want to establish capabilities for multiple development environments and phased approval steps.
Since we are leveraging Docker/Kubernetes or another means of automated, ephemeral dev standup, we should be able to run multiple instances side by side so that each administrator has his or her own playground ahead of an integration environment (referred to from here on as TEST).
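A sketch of the per-administrator playground idea using the Docker SDK for Python; the image tag, credential, and naming scheme are assumptions, and each administrator gets an isolated, disposable instance on its own port.

```python
import docker

client = docker.from_env()

def start_dev_instance(admin: str, web_port: int):
    """Start a disposable, per-admin Splunk dev container on its own web port."""
    return client.containers.run(
        "splunk/splunk:latest",                 # assumed image tag
        detach=True,
        auto_remove=True,                       # throw the instance away when stopped
        name=f"splunk-dev-{admin}",             # assumed naming scheme
        environment={
            "SPLUNK_START_ARGS": "--accept-license",
            "SPLUNK_PASSWORD": "ChangeMe123!",  # placeholder credential
        },
        ports={"8000/tcp": web_port},
    )

# Example: two administrators working side by side.
start_dev_instance("alice", 8001)
start_dev_instance("bob", 8002)
```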
Administrators will stand up their own dev environment, make their changes, and then check those changes into the dev branch. The new test environment will mimic production and fork 10% of production data, allowing you to see results against production data before the changes affect production. Check out our blog post from Matt Cimino on how to achieve forking of production data here.
Optimizing GitLab CI/CD alongside the Splunk data platform can deliver genuine value to the enterprise. The challenge facing DevOps professionals is creating an implementation process from scratch. Indeed, using a new tool for the sake of using a new tool isn't the best way to maximize results; it requires a concerted, carefully planned approach instead, which we have laid out in this white paper.
Without software to improve DevOps efficiency, organizations may struggle to deliver new services and products to market on time and under budget. The result is lackluster performance even when alternatives are available but not capitalized upon.
We first introduced a slate of DevOps tools: GitLab Runner, Docker, and Kubernetes, all of which integrate with the Splunk extensible data platform. We then proposed a continuous integration process in multiple phases, including use cases and examples. We ask that the reader keep in mind that this white paper focused on Splunk integration, although many of the takeaways carry over to similar data platforms.
Creating a roadmap for improved cyber resiliency requires alignment between business operations and IT infrastructure, and a multi-phase approach is the high-level strategy we recommend at True Zero Technologies. The challenges surrounding CI/CD demand a more efficient means of integrating DevOps while keeping security top of mind. Contact our security experts today for more details on optimizing your organization's CI/CD pipeline.