Wednesday, April 1, 2020

Integrate Jenkins with Azure Key Vault


Jenkins has been one of the most widely used CI/CD tools. For every tool we use in our daily work, handling secret information becomes a real challenge. There are lots of secret-management tools available, both as PaaS offerings and as in-house hosted solutions, but we need them to integrate with different toolsets without much effort.

In this particular blog, we will be discussing the integration of Jenkins with Azure Key Vault. Thanks to everyone who keeps contributing to these communities and spends time making the products more flexible and capable.


We are going to use the Azure Key Vault plugin for this. There are multiple ways to use it, but in this post we'll go through the integration and then test it using a declarative pipeline.

Pre-Requisites-

  • Make sure you have a running Jenkins setup
  • You have a valid Azure subscription
Implementation Steps-

     1. Create an Azure Key Vault and a service principal for Jenkins using the below steps (log in, create a service principal, create a resource group and the Key Vault, then grant the service principal "get" and "list" permissions on secrets):


kulsharm2@WKMIN5257929:~$ ⚙️  $az login
You have logged in. Now let us find all the subscriptions to which you have access...
[
  {
    "cloudName": "AzureCloud",
    "id": "dd019fb5-db8a-4e4f-96ec-fc8decd2db8b",
    "isDefault": true,
    "name": "<>",
    "state": "Enabled",
    "tenantId": "d52c9ea1-7c21-47b1-82a3-33a74b1f74b8",
    "user": {
      "name": "<>",
      "type": "user"
    }
  }
]



kulsharm2@WKMIN5257929:~$ ⚙️  $az ad sp create-for-rbac --name http://local-jenkins
Found an existing application instance of "7e575c9b-b902-4510-8a06-8cbe1639aba3". We will patch it
Creating a role assignment under the scope of "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b"
  Role assignment already exits.

{
  "appId": "7e575c9b-b902-4510-8a06-8cbe1639aba3",
  "displayName": "local-jenkins",
  "name": "http://local-jenkins",
  "password": "e7157115-6e35-46f9-a811-c856ba9bb5c0",
  "tenant": "d52c9ea1-7c21-47b1-82a3-33a74b1f74b8"
}
kulsharm2@WKMIN5257929:~$ ⚙️  $RESOURCE_GROUP_NAME=my-resource-group
kulsharm2@WKMIN5257929:~$ ⚙️  $az group create  --name $RESOURCE_GROUP_NAME -l "East US"
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group",
  "location": "eastus",
  "managedBy": null,
  "name": "my-resource-group",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
kulsharm2@WKMIN5257929:~$ ⚙️  $az group show --name $RESOURCE_GROUP_NAME -o table
Location    Name
----------  -----------------
eastus      my-resource-group

kulsharm2@WKMIN5257929:~$ ⚙️  $VAULT=jenkins-local
kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault create --resource-group $RESOURCE_GROUP_NAME --name $VAULT
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/jenkins-local",
  "location": "eastus",
  "name": "jenkins-local",
  "properties": {
    "accessPolicies": [
      {
        "applicationId": null,
        "objectId": "fd5bcd48-13d1-40c5-98a3-d46442c5194e",
        "permissions": {
          "certificates": [
  .          
  .       
  <>

kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault list -o table
Location    Name           ResourceGroup
----------  -------------  -----------------
eastus      jenkins-local  my-resource-group
kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault set-policy --resource-group $RESOURCE_GROUP_NAME --name $VAULT    --secret-permissions get list --spn http://local-jenkins
{
  "id": "/subscriptions/dd019fb5-db8a-4e4f-96ec-fc8decd2db8b/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/jenkins-local",
  "location": "eastus",
  "name": "jenkins-local",
  "properties": {
    "accessPolicies": [
      {
        "applicationId": null,
        "objectId": "fd5bcd48-13d1-40c5-98a3-d46442c5194e",
        "permissions": {
          "certificates": [
            "get",
            "list",
            "delete",
            "create",
            "import",
            "update",
            "managecontacts",
            "getissuers",
            "listissuers",
            "setissuers",
            "deleteissuers",
            "manageissuers",
 <>
      2. Create a secret in the Azure Key Vault:

kulsharm2@WKMIN5257929:~$ ⚙️  $az keyvault secret set --vault-name $VAULT --name secret-key --value my-super-secret
{
  "attributes": {
    "created": "2020-04-01T05:18:37+00:00",
    "enabled": true,
    "expires": null,
    "notBefore": null,
    "recoveryLevel": "Purgeable",
    "updated": "2020-04-01T05:18:37+00:00"
  },
  "contentType": null,
  "id": "https://jenkins-local.vault.azure.net/secrets/secret-key/85a36fe61ba34f53b60217c5e08f1774",
  "kid": null,
  "managed": null,
  "tags": {
    "file-encoding": "utf-8"
  },
  "value": "my-super-secret"
}

      3. Let's make the changes on the Jenkins side to complete the integration:
          1. Install the Azure Key Vault plugin as below:
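If you prefer the CLI over the UI, a plugin install along these lines should also work; the plugin ID "azure-keyvault", the Jenkins URL, and the admin credentials are assumptions for a default local setup:

# install the Azure Key Vault plugin via the Jenkins CLI and restart so it gets loaded
$ java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:<api-token> install-plugin azure-keyvault -restart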



        2. Add the Azure Key Vault URL (here https://jenkins-local.vault.azure.net) to the Jenkins configuration under "Manage Jenkins --> Configure System" as below:


       
       4. Add credentials by navigating to "Credentials --> System --> Global Credentials (unrestricted)" as below:

       
       5. Create a new credential (e.g. an Azure Service Principal using the appId, password, and tenant from the output above) as below -
 

    6. Now, let's create a pipeline and try to fetch the secret we stored in AKV:


*** Pipeline Code ***
pipeline {
  agent any
  environment {
    SECRET_KEY = credentials('secret-key')
  }
  stages {
    stage('Foo') {
      steps {
        echo SECRET_KEY
        echo SECRET_KEY.substring(0, SECRET_KEY.size() -1) // shows the right secret was loaded, don't do this for real secrets unless you're debugging 
      }
    }
  }
}






Happy Learning!!

Sunday, March 22, 2020

How to handle packaging in python using __init__.py


Keeping in mind the current situation across the world, I hope everyone is doing well. Please take precautions, stay at home, and keep yourself busy in whatever way works for you.

I was reading the book "Python for DevOps" and came across the topic "Packaging". In every business, packaging plays a big role when it comes to product distribution.

When it comes to IT software, below are a few things we should take care of:

  • Descriptive Versioning 
    • In Python packages, the following two variants are used:
      • major.minor
      • major.minor.micro
    • major - for backward-incompatible changes
    • minor - adds features that are also backward compatible
    • micro - adds backward-compatible bug fixes.

  • The Changelog
    • This is a simple file that keeps track of all the changes we make for each version upgrade (a small example follows this list).
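A small, purely hypothetical changelog sketch for the version bump we do later in this post could look like this:

CHANGELOG
  0.0.2 - hello functions now mention the package version
  0.0.1 - initial release with hellopython() and helloworld()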

I won't go into much detail here; let's come directly to the implementation of how we can handle packaging in Python using the "__init__.py" file.

The tool used here for packaging is the "setuptools" Python module.
Now we'll create a Python virtual environment and add "setuptools" to it as below -

$ python3 -m venv /tmp/packaging
$ source /tmp/packaging/bin/activate
$ pip3 install setuptools

Tip - you can cross-check the list of installed modules using pip3 as below.
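For example (the output below just mirrors the versions used later in this post; yours may differ):

$ pip3 list --format=columns
Package    Version
---------- -------
pip        20.0.2
setuptools 41.2.0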

Now, let's see the code. I have a simple hello-world example as below.

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $tree .
.
├── README
├── hello_world
│   ├── __init__.py
│   ├── hello_python.py
│   └── hello_world.py
└── setup.py

1 directory, 5 files
Note -
  • README - simple instructions
  • hello_world(directory) - module name
  • __init__.py - organizes the modules kept in the directory and exposes them as a package
  • hello_*.py - two different modules with different functionality
  • setup.py - required by "setuptools" to build a package.

The source code is available on my GitHub page as well.
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_world.py 
def helloworld():
    return "HELLO WORLD"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_python.py 
def hellopython():
    return "HELLO PYTHON3" 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/__init__.py 
from .hello_python import hellopython
from .hello_world import helloworld
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat setup.py 
from setuptools import setup, find_packages

setup(
    name="hello_example",
    version="0.0.1",
    author="Example Author",
    author_email="author@example.com",
    url="example.com",
    description="A hello-world example package",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
)
Now, let's start packaging our hello-world program:
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 setup.py sdist
running sdist
running egg_info
creating hello_example.egg-info
writing hello_example.egg-info/PKG-INFO
writing dependency_links to hello_example.egg-info/dependency_links.txt
writing top-level names to hello_example.egg-info/top_level.txt
writing manifest file 'hello_example.egg-info/SOURCES.txt'
reading manifest file 'hello_example.egg-info/SOURCES.txt'
writing manifest file 'hello_example.egg-info/SOURCES.txt'
running check
creating hello_example-0.0.1
creating hello_example-0.0.1/hello_example.egg-info
creating hello_example-0.0.1/hello_world
copying files to hello_example-0.0.1...
copying README -> hello_example-0.0.1
copying setup.py -> hello_example-0.0.1
copying hello_example.egg-info/PKG-INFO -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/SOURCES.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/dependency_links.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_example.egg-info/top_level.txt -> hello_example-0.0.1/hello_example.egg-info
copying hello_world/__init__.py -> hello_example-0.0.1/hello_world
copying hello_world/hello_python.py -> hello_example-0.0.1/hello_world
copying hello_world/hello_world.py -> hello_example-0.0.1/hello_world
Writing hello_example-0.0.1/setup.cfg
creating dist
Creating tar archive
removing 'hello_example-0.0.1' (and everything under it)

After this, you will see that the above command has created some other folders as well, and our packaged module has been stored in the "dist" folder.

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $tree .
.
├── README
├── dist
│   └── hello_example-0.0.1.tar.gz
├── hello_example.egg-info
│   ├── PKG-INFO
│   ├── SOURCES.txt
│   ├── dependency_links.txt
│   └── top_level.txt
├── hello_world
│   ├── __init__.py
│   ├── hello_python.py
│   └── hello_world.py
└── setup.py

3 directories, 10 files
Now let's install the package and list the installed modules using "pip3".


(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $pip3 install dist/hello_example-0.0.1.tar.gz 
Processing ./dist/hello_example-0.0.1.tar.gz
Installing collected packages: hello-example
    Running setup.py install for hello-example ... done
Successfully installed hello-example-0.0.1
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ pip3 list --format=columns
Package       Version
------------- -------
hello-example 0.0.1  
pip           20.0.2 
setuptools    41.2.0 
As the module has been installed, let's test it first using the "ipython" console and then using a Python program.

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ipython3
/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:931: UserWarning: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.
  warn("Attempting to work in a virtualenv. If you encounter problems, please "
Python 3.7.7 (default, Mar 10 2020, 15:43:03) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import hello_world as hw                                                                                       

In [2]: hw.hellopython()                                                                                               
Out[2]: 'HELLO PYTHON3'

In [3]: hw.helloworld()                                                                                                
Out[3]: 'HELLO WORLD'

In [4]:
Here you can see that I can call the functions through my custom module "hello_world".

Using this module in the program-

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat example.py 
import hello_world as hw

print("Calling Hello Python Function: "+hw.hellopython())
print("Calling Hello World Function: "+hw.helloworld())
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 example.py 
Calling Hello Python Function: HELLO PYTHON3
Calling Hello World Function: HELLO WORLD
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $

Now, suppose you want to upgrade to version "0.0.2"; we will follow the steps below -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_python.py 
def hellopython():
    return "HELLO PYTHON3 with version **0.0.2**"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat hello_world/hello_world.py 
def helloworld():
    return "HELLO WORLD with version ** 0.0.2 **"
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $cat setup.py 
from setuptools import setup, find_packages

setup(
    name="hello_example",
    version="0.0.2",
    author="Example Author",
    author_email="author@example.com",
    url="example.com",
    description="A hello-world example package",
    packages=find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
)
Package it again - 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ python3 setup.py sdist
running sdist
running egg_info
writing hello_example.egg-info/PKG-INFO
writing dependency_links to hello_example.egg-info/dependency_links.txt
writing top-level names to hello_example.egg-info/top_level.txt
reading manifest file 'hello_example.egg-info/SOURCES.txt'
writing manifest file 'hello_example.egg-info/SOURCES.txt'
running check
creating hello_example-0.0.2
creating hello_example-0.0.2/hello_example.egg-info
creating hello_example-0.0.2/hello_world
copying files to hello_example-0.0.2...
copying README -> hello_example-0.0.2
copying setup.py -> hello_example-0.0.2
copying hello_example.egg-info/PKG-INFO -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/SOURCES.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/dependency_links.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_example.egg-info/top_level.txt -> hello_example-0.0.2/hello_example.egg-info
copying hello_world/__init__.py -> hello_example-0.0.2/hello_world
copying hello_world/hello_python.py -> hello_example-0.0.2/hello_world
copying hello_world/hello_world.py -> hello_example-0.0.2/hello_world
Writing hello_example-0.0.2/setup.cfg
Creating tar archive
removing 'hello_example-0.0.2' (and everything under it)
Install new version -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $pip3 install dist/hello_example-0.0.2.tar.gz 
Processing ./dist/hello_example-0.0.2.tar.gz
Installing collected packages: hello-example
  Attempting uninstall: hello-example
    Found existing installation: hello-example 0.0.1
    Uninstalling hello-example-0.0.1:
      Successfully uninstalled hello-example-0.0.1
    Running setup.py install for hello-example ... done
Successfully installed hello-example-0.0.2
Verify the installation - 
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $ pip3 list --format=columns
Package       Version
------------- -------
hello-example 0.0.2  
pip           20.0.2 
setuptools    41.2.0 
Test again using the python program -

(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $python3 example.py 
Calling Hello Python Function: HELLO PYTHON3 with version **0.0.2**
Calling Hello World Function: HELLO WORLD with version ** 0.0.2 **
(packaging) kulsharm2@WKMIN5257929:/tmp/hello_world$ ⚙️  $


Wednesday, November 27, 2019

Deploy and Scale Kubernetes Application using Spinnaker



In my last post, Getting started with Spinnaker, we completed the installation and setup of Spinnaker. In this post, I'll go through deploying and scaling an application on Kubernetes using Spinnaker.

In this particular exercise, we'll create a simple "nginx" deployment on Kubernetes and expose it as a service. After that, we'll see how we can easily scale the deployment up and down from the Spinnaker dashboard itself.

For this, first make sure we have done the port-forwarding for the required pods and are able to access the Spinnaker dashboard.

Note - Before moving ahead, please make sure that the "kubernetes" provider is enabled. You can check this in the "Halyard" configuration as below.

$ kubectl exec -it  spinnaker-local-spinnake-halyard-0 /bin/bash -n spinnaker
$ hal config list | grep -A 37 kubernetes




After that, click on "Create Application" in the Applications tab to open the popup below.



After filling in the required information and hitting the Create button, you'll land on the screen below.

There are a few other terms, like "Clusters", "Load Balancers", and "Server Groups", which you can read about in the documentation.

Here, we'll create a Server Group, which will basically contain our deployment manifest (below).

---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: nginx

Once you click on "Create Server Group", you'll see the screen below, where we need to paste the above YAML and hit Create.



Below will be the end state, if everything goes well.


Now, let's inspect our deployment.
On the Clusters tab we can see the deployment and the number of replicas (pods) available in it, as below -


Please verify the same from the CLI using kubectl -
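For example, assuming the manifest went to the default namespace (otherwise add -n <namespace>):

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx -o wide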


Check for the service in the Load Balancers section -

Verify the same using the CLI -
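Again, a quick check from the CLI (the service name comes from the manifest above):

$ kubectl get svc nginx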


Access the service -
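Since the service is exposed as a NodePort on 30000 and this cluster is the minikube setup from the previous post (my assumption), something like this should reach it:

$ curl http://$(minikube ip):30000
# or let minikube print the URL for you
$ minikube service nginx --url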




Scale up the Deployment from 1 to 4 pods and verify the results-
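After scaling from the Spinnaker UI, the same CLI checks should now report 4 replicas:

$ kubectl get deployment nginx-deployment   # should show 4 ready replicas once the rollout finishes
$ kubectl get pods -l app=nginx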






So, this was a simple how-to for managing K8s manifests with Spinnaker. In the next post, I will try to explore the integration of Jenkins with Spinnaker and auto-triggering Spinnaker deployments based on Jenkins events.



Tuesday, November 26, 2019

Getting Started with Spinnaker locally using minikube(local Kubernetes)

Before jumping to the installation and setup part, let's briefly summarize what Spinnaker is.



Spinnaker : 

            Spinnaker is an open-source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.

          I am not going into many details about the functionality here, but I would like to highlight the main architectural components, which I think we should know before we start playing with this. This will help you with troubleshooting if you get stuck in between.

So, Spinnaker is composed of multiple components. You will be able to see all of these after we complete the setup. The list of components is as below (currently just copy-pasting from the official site) -
  1. Deck is the browser-based UI.
  2. Gate is the API gateway.
    The Spinnaker UI and all api callers communicate with Spinnaker via Gate.
  3. Orca is the orchestration engine. It handles all ad-hoc operations and pipelines. Read more on the Orca Service Overview.
  4. Clouddriver is responsible for all mutating calls to the cloud providers and for indexing/caching all deployed resources.
  5. Front50 is used to persist the metadata of applications, pipelines, projects and notifications.
  6. Rosco is the bakery. It produces immutable VM images (or image templates) for various cloud providers.
    It is used to produce machine images (for example GCE images, AWS AMIs, Azure VM images). It currently wraps packer, but will be expanded to support additional mechanisms for producing images.
  7. Igor is used to trigger pipelines via continuous integration jobs in systems like Jenkins and Travis CI, and it allows Jenkins/Travis stages to be used in pipelines.
  8. Echo is Spinnaker’s eventing bus.
    It supports sending notifications (e.g. Slack, email, SMS), and acts on incoming webhooks from services like Github.
  9. Fiat is Spinnaker’s authorization service. 
    It is used to query a user’s access permissions for accounts, applications and service accounts.
  10. Kayenta provides automated canary analysis for Spinnaker.
  11. Halyard is Spinnaker’s configuration service.
Halyard manages the lifecycle of each of the above services. It only interacts with these services during Spinnaker startup, updates, and rollbacks.

Note - In our setup, "Fiat" and "Kayenta" will not be present, as they are not available in the Helm chart that we install on minikube.
Along with the architecture, I guess we should know the port mappings as well.


Minikube - 

        Minikube provides a way to set up Kubernetes locally for development purposes. I am not going into the details of the installation; please go through my previous blog post if you want to install minikube.

After installation, let's start the minikube cluster. I am starting with a custom configuration so that it is able to handle the load.
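The screenshot with my start command is gone; a start along these lines should work, where the exact CPU/memory/disk values are my assumption (Spinnaker needs a fairly beefy node):

$ minikube start --cpus 4 --memory 8192 --disk-size 30g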



Other Tools -

     Apart from minikube, below are the other tools that we need; I am assuming these are already installed.
  1. helm
  2. kubectl



Install Spinnaker -

        Now we have minikube running with Helm installed, and we are ready to install Spinnaker using its Helm chart.
Helm is a templating engine for K8s deployments; we need to provide values for those templates. To start with, the chart ships a default set of values, which we are going to use.


Download the default values file from the above Helm repo.
$ curl -Lo values.yaml https://raw.githubusercontent.com/kubernetes/charts/master/stable/spinnaker/values.yaml



Now, let's install Spinnaker on the K8s cluster.
$ helm install -n spinnaker-local stable/spinnaker -f values.yaml --timeout 300   --namespace spinnaker


Tip - In case you get a timed-out exception on the first run (like below), delete the Helm release using "helm del --purge {release-name}" and re-run the same command.


After a successful installation, check the pods in the "spinnaker" namespace. All should be in the Running state.
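The check itself is simply:

$ kubectl get pods -n spinnaker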



P.S. - Please ignore the hal pod status in the above output; it takes some time to start :).

To access the Spinnaker UI, follow the instructions below. If you notice, we are doing a port-forward for two pods. As per the architecture, these two components are responsible for the following functionalities:
  • The first one is "deck" - which provides the UI dashboard
  • The second one is "gate" - which is responsible for exposing the APIs
#export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
#export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
#echo $DECK_POD
#echo $GATE_POD
#alias ui='kubectl port-forward --namespace spinnaker $DECK_POD 9000'
#alias api='kubectl port-forward --namespace spinnaker     $GATE_POD 8084'
#ui & api &




Access the Spinnaker dashboard - once the port-forwards are running, the UI should be available at http://localhost:9000.




In the next post, we'll try to create pipelines which will deploy entities on Kubernetes. In later posts, we'll also explore more of the integration with different providers, e.g. Jenkins and cloud vendors.


Friday, November 8, 2019

Taints and Tolerations in Kubernetes


              We all know that Kubernetes is a powerful orchestration tool in the world of containers. The whole complexity of managing and distributing multiple containers across the cluster is taken care of by Kubernetes out of the box. In short, it does all the heavy and complex lifting for us.

Since it is K8s that takes care of all the distribution and scheduling of pods across the different nodes in the cluster, what if we want to run a specific pod on a specific node only? Luckily, we have an option to manage this as well; in K8s it is called "taints and tolerations".

In general terms:
        - A taint is applied to a node and stops any pod from being scheduled on it unless the pod explicitly tolerates the taint.
        - A toleration, on the other hand, is applied to a pod and allows that pod to be scheduled on a node with a matching taint.

To summarise, taints and tolerations are used to set restrictions on what pods can be scheduled on a node.

Let us suppose we have a 3-node cluster as below, and this is the state when pods are running in the normal scenario.


Now suppose we have a requirement where only specific pods should be scheduled on Node1 and nothing else should land there. For this, let's add a taint called "taint=blue" on Node1. After this, no pod will be able to schedule on this node until we add a matching toleration to the specific pod we want on Node1.
Below, we add the "blue" toleration to pod "D", after which this will be the status.


Demo -
             In the fresh setup below, we'll see that we don't have any taint set on the worker node, though we do have a taint set on the master node. That is the reason why, by default, nothing is scheduled on the master node.
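The screenshots are missing here; checking the taints from the CLI looks like this (the node names are from my two-node setup, yours may differ):

$ kubectl describe node master | grep -i taint
$ kubectl describe node node01 | grep -i taint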






Now, let's add a taint "Taint=Dog" to the worker node and try to schedule a pod on it.
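The command for that would be along these lines; the NoSchedule effect is my assumption, since the original screenshot is gone:

$ kubectl taint nodes node01 Taint=Dog:NoSchedule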




Create a pod and see its status -
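Any plain pod will do for this test; the pod name and image here are just illustrative:

$ kubectl run test-pod --image=nginx --restart=Never
$ kubectl get pods    # test-pod should sit in Pending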





You'll notice that the status is Pending; let's see what the pod events say. The last line says "0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate".




Now let's create a new pod, "dog", which tolerates the taint on the worker node.
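A pod spec with a matching toleration looks roughly like this (the effect matches the NoSchedule taint assumed above):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dog
spec:
  containers:
  - name: dog
    image: nginx
  tolerations:
  - key: "Taint"
    operator: "Equal"
    value: "Dog"
    effect: "NoSchedule"
EOF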




You'll see that after adding the toleration, the pod got scheduled on Node01, while the other pod is still in the Pending state.

I hope this clears up the concept of taints and tolerations in K8s.

Wednesday, August 21, 2019

Terraform setting up clustered web server !! Getting Started Part-3!!

In the last two posts, we first saw the basics of Terraform in "Part-1" and then created a simple web server in "Part-2".

Now, as we know, running a single web server in production is never a good idea. We always need services that are highly available as well as scalable per the requirements.

Creating and managing a cluster is always going to be a pain point. Fortunately, with new cloud technologies it is now possible to automate all of this, and things become much easier to manage. In this tutorial we'll use AWS's "Auto Scaling Group (ASG)".

    An ASG takes care of everything automatically, including launching a cluster of EC2 instances, monitoring the health of each instance, replacing failed instances, and adjusting the size of the cluster in response to the load.





A fully working ASG stack includes multiple resources. It starts with a "launch configuration", which basically specifies how each EC2 instance will be configured.

Now, in Fig-1, you saw that we have two EC2 instances, each with its own IP address. The problem is: what endpoint will you provide to your users? Also, if we later have issues with any of the servers, the ASG can destroy the faulty server and launch a new one with a new IP. It would be difficult to handle such a situation.

     One way to solve this issue is to use a load balancer to distribute traffic to the backend servers and give the LB's IP/DNS to all the users to access the services.




AWS offers three type of Load Balancers-

  1.) Application Load balancer(ALB) :
         Best suited for load balancing of HTTP and HTTPS traffic. Operates at the application layer (Layer 7) of the OSI model.
  
  2.) Network Load Balancer(NLB) :
         Best suited for load balancing of TCP, UDP and TLS traffic. Operates at the transport layer (Layer 4) of the OSI model.

  3.) Classic Load Balancer(CLB) :
         This is the legacy load balancer that predates both ALB and NLB. It can handle all the types of traffic that ALB and NLB can handle.

Nowadays most applications use either ALB or NLB. In our case we are going to handle HTTP traffic, so we will use an ALB.

Again, ALB consists of several parts:

1.) Listener - listens on a specific port and protocol.
2.) Listener Rule - takes requests that come to the listener and sends those that match specific paths, e.g. /foo or /bar, to a specific target group.
3.) Target Group - one or more servers that receive requests from the load balancer. The target group also performs health checks on those servers and only sends requests to the healthy ones.





You can get the code from my GitHub repo here.

Please see all the steps in the screenshots below -
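The screenshots are gone here, but the workflow with the code from the repo is the standard Terraform one:

$ terraform init    # download the AWS provider plugins
$ terraform plan    # review the resources to be created (launch configuration, ASG, ALB, listener, target group)
$ terraform apply   # build the whole cluster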


Check the status in your AWS console and access the site from a browser as well:




Now, let's destroy the whole setup with one command :).
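That one command is Terraform's destroy, which tears down everything it created:

$ terraform destroy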








