Using Git as your SCM- Digital Transformation with IBM API Connect
There have been many version control systems used in corporate environments. They usually required a centralized server; developers would write code and check it in when it was complete. Examples include Subversion and Team Foundation Server (now called Azure DevOps Server). Often, the check-in happened after various deployments had already occurred. Even with more rigid check-in procedures, relying on a centralized server could cost a team its agility whenever the version control server was down.
In 2005, Linus Torvalds authored Git to support the massive Linux open source community after its previous version control system became commercialized. Because that community was distributed, it needed a version control system that was fully distributed, simple in design, and supported parallel branches. And thus, Git became a reality.
Git is very powerful and has many options. You’ll be introduced to the features that we will be using in the pipeline next.
Git capabilities
Minimally, you should know the capabilities of Git so you know how to incorporate it within your DevOps pipeline. As a developer, you should understand how to use Git to ensure you are able to create and modify code while maintaining the integrity of the main branch. The features you should be capable of doing are as follows:
- Create and configure a local Git repository: git init, git config.
- Add files to your local repository: git add [file].
- Associate a remote repository (an example is GitHub): git config --global user.name "[your name]", git remote add origin [URL for your GitHub repository].
- Commit your files with minimal documentation: git commit -m "description".
- Pull and push changes to your remote repository: git pull origin master, git push origin master.
- Rebase your repository: git rebase master.
- Create branches and merge branches: git branch [new branch name], git checkout [branch name], git merge [branch name].
- Clone the repository: git clone -b [branch] [repository URL] [target directory].
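The commands in the preceding list can be strung together into a minimal local workflow. The following is a sketch only; the repository name, file contents, identity values, and the branch name feature are all placeholders:

```shell
# Create and configure a local repository
mkdir demo-repo && cd demo-repo
git init
git checkout -b master                 # ensure the initial branch is named master
git config user.name "Your Name"       # placeholder identity
git config user.email "you@example.com"

# Add a file and commit it with a short message
echo "hello" > README.md
git add README.md
git commit -m "Initial commit"

# Create a branch, switch to it, and commit a change there
git branch feature
git checkout feature
echo "feature work" >> README.md
git commit -am "Add feature work"

# Merge the branch back into master
git checkout master
git merge feature
```

After the merge, git log on master shows both commits, since the merge fast-forwards master to the tip of feature.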
Figure 14.14 shows, at a high level, the activities you will perform when interfacing with a Git repository:
Figure 14.14 – Basic Git capabilities you should learn
As you can see in Figure 14.14, Git supports all the capabilities necessary to work within teams. You can create multiple repositories and allow multiple developers to branch code streams, merge changes, and rebase. When you need to synchronize your local repository with GitHub, you can push to, pull from, and fetch from the origin.
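Synchronizing with a remote can be sketched locally by letting a bare repository stand in for a hosted remote such as GitHub. All the paths and names below are illustrative:

```shell
# A bare repository stands in for the remote (in practice, a GitHub URL)
git init --bare remote-repo.git

# Create a local working repository and point it at the "remote"
mkdir work && cd work
git init
git checkout -b master
git config user.name "Your Name"        # placeholder identity
git config user.email "you@example.com"
git remote add origin ../remote-repo.git

echo "hello" > README.md
git add README.md
git commit -m "Initial commit"

git push -u origin master               # push local commits to the remote
git pull origin master                  # pull any remote changes back down
```

With a real GitHub repository, only the origin URL changes; the push and pull commands stay the same.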
Now that you have reviewed how a developer interacts with Git, you are ready to move on to your pipeline construction using Jenkins.
Constructing the Jenkins pipeline
With all the background information, you are ready to start developing the stages required for your Jenkins pipeline. While we are utilizing Jenkins to implement our DevOps pipeline, you should be aware that other products you may come across can perform similar tasks. Since API Connect runs on top of Red Hat OpenShift, you should be aware that Tekton provides a cloud-native, open source CI/CD platform that makes it easier to deploy across multiple clouds. Tekton runs on Kubernetes, so it is a natural fit for companies moving to containers and microservices. To learn more about Tekton, you can visit the following URL: https://www.openshift.com/learn/topics/pipelines.
While there are choices in terms of CI/CD, you will be learning how to implement using Jenkins in this chapter. Let’s review some of the Jenkins configurations and prepare you for building the Jenkins pipeline.
Jenkins in a nutshell
Jenkins is an open source CI/CD automation tool built on Java. Learning Jenkins will help you build and test your APIs and deliver software more easily and more often. Jenkins will help you build, test, stage, publish, and deploy all the way up through production.
When you install Jenkins, you will immediately become aware of the many plugins built to run on Jenkins. These plugins will help Jenkins integrate with various tools, such as Git and Ansible. Refer to the following screenshot:
Figure 14.15 – Jenkins Dashboard
When you install Jenkins, you install the master server first. From within the master server, you can perform all CI/CD stages, but that could eventually tax the system. It is the master server that schedules jobs, records build results, and provides status. To alleviate that concern, you can create additional server nodes (historically called slaves, now referred to as agents) that work on the build stages. When building your pipeline, you can direct execution to specific nodes. The master server dispatches builds to the additional nodes and monitors the execution of the builds on those nodes.
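Directing execution to a specific node is typically done with an agent label in the pipeline definition. The following is a minimal sketch; the label linux-agent and the build command are placeholders, not values from this book's environment:

```groovy
pipeline {
    agent none                          // no default node for the whole pipeline
    stages {
        stage('Build') {
            agent { label 'linux-agent' }   // run only on nodes tagged 'linux-agent'
            steps {
                sh 'make build'             // placeholder build command
            }
        }
    }
}
```

The master schedules the Build stage, but the work itself runs on whichever node carries the matching label.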
With Jenkins, you create various stages of your builds that run either on the master or one of the nodes. Within each stage, you define a number of steps to accomplish.
In Figure 14.16, you see a Build step. In this step, you can define what you want Jenkins to do.
Figure 14.16 – The Build step defines the task to execute
In Figure 14.16, you see Jenkins executing a shell script running three shell commands (echo, pwd, and date). The Build step was initially created by clicking on New Item shown in Figure 14.15. Within a stage, you can create simple commands or complex processing. The combination of the stages and steps creates what is called a Jenkinsfile.
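The same three shell commands shown in Figure 14.16 can be expressed as a stage in a Jenkinsfile. This is a minimal sketch in the declarative style (one of the two methods discussed next); the stage name is illustrative:

```groovy
pipeline {
    agent any                       // run on any available node
    stages {
        stage('Build') {
            steps {
                sh 'echo "Running the Build stage"'
                sh 'pwd'            // print the workspace directory
                sh 'date'           // print the build timestamp
            }
        }
    }
}
```

Checking a file like this into your Git repository lets Jenkins pick up the pipeline definition alongside your code.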
To start putting your pipeline together, you will need to decide between the two methods of coding your Jenkinsfile. You have a choice between the scripted or declarative methods to create your Jenkinsfile. Let’s understand each of these next.