Devops-05-Oct-2024

To create environment variables in pod manifest files:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
syntax in spec block is:
 
spec:
  containers:
  - name: <container-name>
    image: <image-name>
    env:
    - name: <variable_name>
      value: <variable_value>
  
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
 
# create a pod with below configurations.
  - pod name ==> tomcatpod3 , image tomee & container tomeecontainer3 
  - labels ==> type: webapptool & author: 
  - environment variables==> sports football
 
vi pod-definition2.yml
 
---
apiVersion: v1
kind: Pod
metadata:
  name: tomcatpod3
  labels: 
    type: webapptool
spec:
  containers:
  - name: tomeecontainer3
    image: tomee
    env:
    - name: sports
      value: football
...
 
kubectl apply -f pod-definition2.yml
 
 
 
to see env variable details from the k8s master node use ==>
kubectl describe pod <pod_name> (observe the env variables listed here)
 
to see env variable details inside the pod/container use ==>
log in to the pod
kubectl exec -it <pod_name> -- /bin/bash
 
echo $sports (from the output, observe the variable substitution ==> football)
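If you only want to check the variable without opening a shell, a quick alternative (a sketch, using the tomcatpod3 pod from the example above) is:
 
kubectl exec tomcatpod3 -- printenv sports     # prints "football" if the variable was set
kubectl exec tomcatpod3 -- env                 # lists all environment variables in the container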
 
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
Port Mapping:
 
Syntax / Template to write pod manifest file
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
apiVersion: v1
kind: Pod
metadata:
  name: <NameofPod>
  labels:
    <label-name>: <label-value>           
spec: #<Technical section, how our container needs to get created>
  containers:
  - name: <NameOfTheContainer>
    image: <imageName>
    ports:
    - containerPort: <portOfContainer>
      hostPort: <hostport>
      
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
apiVersion: v1
kind: Pod
metadata:
  name: apachetomcatpod
spec:
  containers:
  - name: tomcat-container-v2
    image: tomee  # Replace with your desired Tomcat image
    ports:
    - containerPort: 8080
      hostPort: 30010
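A quick way to verify the mapping (a sketch; it assumes you know the IP of the node the pod landed on and that port 30010 is open in that node's firewall / security group):
 
kubectl get pod apachetomcatpod -o wide        # note which node the pod is running on
curl http://<node-ip>:30010                    # should return the TomEE welcome page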
=============================================================================
 
Commands recap:
---------------
kubectl create -f <object_file_name>.yaml --> To create an object from a definition file
kubectl get pods --> To display all the pods on the cluster
kubectl get pods -o wide --> To display more information about the pods
 
kubectl describe <object_kind> <object_name> --> To see the information about a particular object
kubectl delete <object_kind> <object_name> --> To delete a k8s object
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
Note:
A pod is the smallest object which we can create in k8s; similar to pods, Kubernetes also has other objects like ReplicaSets, replication controllers, Deployments, Services etc...
 
 
ReplicaSet:
----------
is a higher-level object,
using which we can run multiple instances (multiple pods) of our application.
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
 
** important interview question **
what are labels in k8s?
- labels are user-provided key-value pairs; using labels we can organize (group) Kubernetes objects
- using labels we can filter different Kubernetes objects
- the same label (key/value) can be assigned to multiple Kubernetes objects
 
ex: create a pod named my-pod-2 with container name c2, image nginx, and labels environment: production & app: nginx
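A possible manifest for this exercise (a sketch; the file name is up to you):
 
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-2
  labels:
    environment: production
    app: nginx
spec:
  containers:
  - name: c2
    image: nginx
...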
 
 
observation on filtering pods using labels:
if we have multiple pods in our cluster & we want to list only the pods which have the label ==> app: nginx
kubectl get pods -l app=nginx
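To see which labels each pod carries before filtering, a handy companion command (a sketch):
 
kubectl get pods --show-labels                 # lists every pod together with its labels
kubectl get pods -l environment=production     # filtering works the same way on any label key/value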
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
Controller manager:
--------------------
the controller manager is the brain behind the orchestration; it constantly monitors the actual state of objects against their desired state.
 
Controller manager ==> ( actual state == desired state)
 
controller-manager objects include the following
1. Replication controller --> older feature (not used nowadays)
2. ReplicaSets --> newer feature
3. Deployments.
4. DaemonSets.
 
 
ReplicaSet:
-----------
if we want to run multiple pods then we can use ReplicaSets.
 
A pod is the smallest Kubernetes object, which we have already worked on. The next level is ReplicaSets.
- A ReplicaSet is used for creating multiple replicas (copies) of a specific pod very easily.
- A ReplicaSet ensures that the "NUMBER OF PODS (replicas)" specified in the manifest file always exists.
- using ReplicaSets we can achieve high availability, load balancing and scaling.
- if a pod gets stopped / deleted then the ReplicaSet will automatically recreate a new pod with the same configuration (auto-healing).
 
 
 
Note:
- A ReplicaSet uses the "replicas", "selector" & "template" fields in its 'spec' section.
- the template contains all pod-related information, so it can also be called the pod template
- copy the pod definition file contents ==> under "template" ==> without apiVersion & kind
 
SYNTAX of ReplicaSet manifest file:
------------------------------------
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: <replicaSetName>
  labels:
    <key>: <value>
spec:
  replicas: <noOfReplicas>
  selector:
    matchLabels:
      <key>: <value>
  template: # POD Template
    metadata:
      name: <PODName>
      labels:
        <key>: <value>
    spec:
      containers:
      - name: <nameOfTheContainer>
        image: <imageName>
 
...
 
 
 
ex1: create a replicaset object with 5 replicas of tomcat pod 
 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-tomee-replicaset
  labels:
    author: bharath
spec:
  replicas: 5
  selector:
    matchLabels:
      author: bharath
  template:
    metadata:
      name: tomcat-pod-v2
      labels:
        author: bharath
    spec:
      containers:
      - name: tomcat-container-v2
        image: tomee
        
...
 
 
create a replicaset with the name my-second-rs and image nginx:1.24.0-alpine
 
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-second-rs
  labels:
    creator: mohan
spec:
  replicas: 3
  selector:
    matchLabels:
      creator: mohan
  template:
  #paste the pod manifest file contents here, without apiVersion & kind
    metadata:
      name: mypodx
      labels:
        creator: mohan
    spec:
      containers:
      - name: c1
        image: nginx:1.24.0-alpine
...
 
 
kubectl create -f <object_file_name>.yaml --> To create an rs object from a definition file
kubectl get pods  ( observe all the pods created by the replicaset )
 
 
to list replicasets:
kubectl get replicasets (or) kubectl get rs
 
to delete replicasets:
kubectl delete rs <replicasets-name>
 
 
Note: 
delete a few pods created by the replicaset & observe that new pods get created automatically by the controller-manager component via the replicaset
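for example (a sketch; the pod name is whatever kubectl get pods shows for your replicaset):
 
kubectl get pods                            # note one of the replicaset's pod names
kubectl delete pod <one-of-the-rs-pods>     # delete it manually
kubectl get pods                            # a replacement pod appears almost immediately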
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Assignment:
1. create a pod manifest file with below configurations.
  - pod name ==> jenkinspod , image jenkins/jenkins & container jenkinscontainer 
  - labels ==> use: cicdtool
  - environment variables==> stage1 build, stage2 test, stage3 deploy
 
 
 
2.  create a pod manifest file 
   - pod name ==>  apachetomcatpod. 
   - name the container as tomcat-container-v2. 
   - also map container port 8080 to host machine on 4040
 
 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
Deployment Object:
------------------
This is also a high-level k8s object,
which can be used for running multiple replicas of a pod with other features like scaling, load balancing and rolling updates.
 
A Deployment makes sure that the desired number of pods (replicas) specified in the manifest file are always up and running. If a pod fails to run, the deployment will remove that pod and replace it with a new one.
 
deployments come with advanced features like:
- update the application to a newer version without downtime.
- roll back to older deployment versions.
- scale the deployment up or down.
 
 
 
How does a Deployment ensure high availability?
----------------------------------------------
deployment ==creates==> replicaset ==creates==> pods ==creates==> containers (our application runs inside the container)
 
Deployments maintain high availability by managing ReplicaSets.
If a pod fails for any reason, the Deployment replaces the failed pod with a new one & maintains the desired state, ensuring high availability.
 
deployment strategies used in kubernetes?
-----------------------------------------
 
   i. Recreate strategy: 
       - delete all pods of the old version at once & create new pods with the new version.
       - in this strategy we can observe application downtime.
   
   ii. Rolling update strategy: 
       - it gradually deletes 1 pod of the old version & brings up 1 pod of the new version, until all old pods are replaced
       - in this strategy we will not see any application downtime.
       - this is the default strategy used in k8s
 
 
Syntax / Template to write deployment manifest file
---------------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DeploymentName>
  labels:
    <key>: <value>
spec:
  replicas: <noOfReplicas>
  selector:
    matchLabels:
      <key>: <value>
  template: # POD Template
    metadata:
      name: <PODName>
      labels:
        <key>: <value>
    spec:
      containers:
      - name: <nameOfTheContainer>
        image: <imageName>
 
...
 
 
create a deployment-definition file with the nginx:1.7 image and 4 replicas:
--------------------------------------------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
  labels:
    author: bharath
spec:
  replicas: 4
  selector:
    matchLabels:
      author: bharath   
  template:
  # here paste content of pod manifest file without apiversion & kind 
    metadata:
      name: nginx-pod
      labels:
        author: bharath
    spec:
      containers:
      - name: container2
        image: nginx:1.7
 
 
 
kubectl create -f <object_file_name>.yaml --> To create a deployment object from a definition file
kubectl get pods  ( observe all the pods created by the deployment )
 
 
to list deployment:
kubectl get deployments (or) kubectl get deploy
 
to delete deployment:
kubectl delete deployment <deployment-name>
 
to see complete details about a deployment:
kubectl describe deployment <deployment-name> 
 
it will show details about the number of replicas used, strategy type, image used etc...
 
 
Note:
-----
 
1. Auto-Healing feature with the Deployment controller:
---------------------------------------------------
whenever we delete a pod that is managed by a controller object (i.e. rs / deployment), a new pod gets created; this feature is called auto-healing
 
Delete one of the pods manually (in an rs / deployment) and observe the auto-healing behaviour of the deployment
 
 
 
important interview question
2. what Deployment strategies are used in kubernetes:
-----------------------------------------------------
   i. Recreate strategy: 
       delete all pods of the old version at once & create new pods with the new version; in this strategy we can observe application downtime.
   
   ii. Rolling update strategy: 
       - it deletes 1 pod of the old version & brings up 1 pod of the new version 
       - in this strategy we will not see any downtime.
       - this is the default strategy used in k8s
    
 
 
 
3. How to scale a deployment in k8s (scale up --> increasing the replicas of a pod, scale down --> decreasing the replicas of a pod):
-------------------------------------------------------------------------------------------------------------------------------------
    kubectl scale deployment <deployment_name> --replicas=<num_of_replicas>
 
if the traffic coming to our application increases, then we need to increase the number of replicas & when traffic reduces we can decrease the number of replicas
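for example, scaling the earlier deployment up and back down (a sketch using the my-nginx-deployment name from above):
 
kubectl scale deployment my-nginx-deployment --replicas=8   # scale up from 4 to 8 pods
kubectl get pods                                             # observe 8 pods running
kubectl scale deployment my-nginx-deployment --replicas=2   # scale down to 2 pods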
 
 
SERVICES
Agenda:
----------
- Deployment objects
  -- increase replicas for a deployment
  -- rolling update deployments
  -- rollback a deployment
 
++++++++++++++++++++++++++++++++++++++++++++++++
replicas: 10
strategy:
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 30%
  type: RollingUpdate
--------------------------------------------------
 
maxSurge:
1.Controls the maximum additional pods during a rolling update.
2.Specifies the number or percentage above the desired replica count.
3.Prevents the total pod count from exceeding a specified limit.
 
maxUnavailable:
1.Determines the maximum unavailable pods during a rolling update.
2.Specifies the maximum number or percentage simultaneously removed from service.
3.Default behavior terminates one pod at a time while creating new pods to maintain the desired replica count.
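Worked example with the values above (an illustration, not output from a real cluster):
with replicas: 10, maxSurge: 25% allows ceil(10 x 0.25) = 3 extra pods, so at most 10 + 3 = 13 pods exist during the update;
maxUnavailable: 30% allows floor(10 x 0.30) = 3 pods to be unavailable, so at least 10 - 3 = 7 pods stay in service at all times.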
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++
 
if we are updating our application to the next version, we create a new docker image and use that updated docker image in our manifest files
 
nginx:1.7 ==> nginx:1.8 ==> nginx:1.9 ==> nginx:1.10
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
Deployment ( continued......)
-----------------------------
create deployment with image ==> nginx:1.7 & 3 replicas
 
apiVersion: apps/v1
kind: Deployment 
metadata: 
  name: nginx-deployment
  labels:
    author: bharath
    environments: staging
spec:
  replicas: 3
  selector:
    matchLabels:
      author: bharath
      environments: staging
  template:
    metadata:
      name: nginx-deployment-v1
      labels: 
        author: bharath
        environments: staging 
    spec:
      containers:
      - name: my-nginx-container
        image: nginx:1.7
        ports:
        - containerPort: 80   
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
RollingUpdate is the default strategy type used in kubernetes
 
Deployment based Practical scenarios:
 
case 1: deploy a new application version (new image ==> nginx:1.8) using the rolling deployment strategy?
---------------------------------------------------------------------------------------
 
 
developers ==> nginx:1.7 ==> nginx:1.8 ==> nginx:1.9
 
if we update from one version of our application to another, only the image name & tag (version) need to be updated in the deployment manifest file.
 
 
syntax:
kubectl set image deployment <deployment_name> <container_name>=<image_to_be_updated> 
 
 
kubectl set image deployment nginx-deployment  my-nginx-container=nginx:1.8 --record=true
 
 
#To check the revision history
 
kubectl rollout history deployment  <deployment_name>
 
kubectl rollout history deployment  nginx-deployment
 
case 2: deploy a new application version again (from nginx:1.8 to the new image ==> nginx:1.9) using the rolling deployment strategy?
---------------------------------------------------------------------------------------
 
kubectl set image deployment nginx-deployment  my-nginx-container=nginx:1.9 --record=true
 
 
case 3: the code deployed in version nginx:1.9 has issues; we need to roll back to the previous version (from the latest version nginx:1.9 to the older version ==> nginx:1.8)
---------------------------------------------------------------------------------------
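The rollback itself uses the rollout commands covered in the next block (a sketch):
 
kubectl rollout undo deployment nginx-deployment       # rolls back to the previous revision (nginx:1.8)
kubectl rollout status deployment nginx-deployment     # watch the rollback complete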
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
Kubectl rollout options:
=======================
1. How to check a deployment's history?
-------------------------------------------
 
kubectl rollout history deployment <deployment_name>
 
kubectl rollout history  deployment  nginx-deployment
 
 
2. How to roll back a deployment to the previous version?
-----------------------------------------------
 
kubectl rollout undo deployment <deployment_name> 
 
kubectl rollout undo  deployment  nginx-deployment
 
 
(or) < to rollback to specific version >
kubectl rollout undo deployment <deployment_name> --to-revision=<revision_number>
kubectl rollout undo deployment nginx-deployment --to-revision=1
 
 
3. How to check the status of the rollout using the status command?
-------------------------------------------------------------
kubectl rollout status deployment <deployment_name>
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Note:
----
How to expose a GUI-based application in docker 
 In Dockerfile ==> EXPOSE 8080
 docker run --name c1 -p <DockerHostPort>:<containerPort> <image_name> 
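for example (a sketch using the tomee image from earlier; host port 8888 is just an assumption):
 
docker run --name c1 -d -p 8888:8080 tomee     # maps host port 8888 to container port 8080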
 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
 
Service Object:
---------------
 
the service object is used to expose our application (port mapping)
the service object provides a static ip address to our application running in pods.
 
 
In the Service object we use 3 ports in the service manifest file, which are
1. targetPort - this is the port on the container
2. port - refers to the service's own reference port.
3. nodePort - refers to the worker-node port that makes the application accessible from an external network.
 
 
important interview question
 
What are different types of Service & difference between those ?:
----------------------------------------------------------------
1. ClusterIP:
    - It exposes the service within the Kubernetes cluster only.
    - ClusterIP is used when we want the pods in the cluster to communicate with each other and not with an external network (the internet or a browser).
    - This is the default type of service object used in kubernetes.
  * - using a ClusterIP-type service, pod-to-pod communication happens within the cluster
 
 
2. NodePort: 
   - It exposes the service both inside and outside the cluster
   - It exposes the service on each worker node's IP at a static port (which is called the NodePort).
   - NodePort can be used if we want to access the pods from an external network (the internet or a browser).
   - the NodePort must be within the range 30000-32767
 
 
3. LoadBalancer:    
   - It exposes the service both inside and outside the cluster; it is the most preferred way of exposing a service in k8s.
   - It exposes the service externally using the cloud provider's load balancer (AWS - ELB -> Elastic Load Balancer).
   - whenever a LoadBalancer service gets created it will also automatically create NodePort and ClusterIP services.
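A minimal LoadBalancer service manifest would look like this (a sketch; it assumes the cluster runs on a cloud provider that can provision an external load balancer, and the name and the app: my-web-app selector are hypothetical):
 
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80      # container port
    port: 80            # service port
  selector:
    app: my-web-app     # must match the labels of the pods to expose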
 
 
 
 
how does pod-to-pod communication happen in a kubernetes cluster?
----------------------------------------------------------------
using services (type --> ClusterIP), pod-to-pod communication happens within the cluster
 
 
 
 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
 
 
Ex: create a deployment using the jenkins image with 1 replica & label environment: staging
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    environment: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      environment: staging
  template:
    metadata:
      labels:
        environment: staging
    spec:
      containers:
      - name: jenkins-container
        image: jenkins/jenkins
 
 
 
Expose the jenkins pods created as part of the above deployment ?
create a service object of type NodePort & expose the application
 
vi my-jenkins-service.yaml
 
---
apiVersion: v1 
kind: Service 
metadata:
  name: my-jenkins-service
spec:
  type: NodePort
  ports:
  - targetPort: 8080     #this is the container port ==> jenkins container
    port: 8080           #this is the service object's reference port
    nodePort: 30001      #this is the port assigned on each worker node
  selector:             
    environment: staging
 
Important Note on services:
--------------------------
a service object never creates pods.
a service object searches for all pods which have the labels mentioned under the selector keyword of the service manifest file & exposes those pods.
in the above example it will look for pods which have the label environment: staging; as the jenkins pods have that label (environment: staging), it will expose all the jenkins pods on nodePort 30001. 
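To confirm which pods a service has actually matched (a sketch using the service created above):
 
kubectl get service my-jenkins-service         # shows type NodePort and the 8080:30001 port mapping
kubectl get endpoints my-jenkins-service       # lists the pod IPs selected via environment: staging
kubectl describe service my-jenkins-service    # full details, including the selector and endpoints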
 
How to access the application from the GUI?
-----------------------------------
open http://<worker-node-public-ip>:30001 in a browser (assuming the node's firewall / security group allows inbound traffic on port 30001)
 
*******************************************************************************************************
 deploying zomato-like application as k8s deployment
*******************************************************************************************************
Note: before starting this exercise, delete all pods/deployments present in your cluster, as this application needs more resources 
 
 
# create a deployment named zomato-deployment, replicas 1, image ==> acecloudacademy/zomato-app-image:latest
---
apiVersion: apps/v1
kind: Deployment 
metadata: 
  name: zomato-deployment
  labels:
    type: food-delivery-app
spec:
  replicas: 1
  selector:
    matchLabels:
      type: food-delivery-app
  template:
    metadata:
      name: zomato-pod
      labels: 
        type: food-delivery-app
    spec:
      containers:
      - name: my-zomato-container
        image: acecloudacademy/zomato-app-image:latest
...
 
create a service of type NodePort to expose the zomato application
 
apiVersion: v1
kind: Service
metadata:
  name: my-zomato-service
spec:
  type: NodePort
  ports:
  - targetPort: 3000     #this is the container port ==> zomato listening port in the container
    port: 3000           #this is the service object's reference port
    nodePort: 30003      #this is the node port assigned on all worker nodes
  selector:
    type: food-delivery-app
 
create the service; once the service matches the pods & the pods get exposed, access the zomato application from the GUI
 
 
Assignment:
----------
- Create a deployment of acecloudacademy webapplication
    image=acecloudacademy/myelevendevopsimage
    replicas=3
    acecloudacademy webapplication runs on port 8080
    expose this application to internet @ port 30008
 
- Create a deployment of the netflix app
    image=acecloudacademy/netflix-clone-app:v1
    replicas=2
    netflix application runs on port 80 (container port)
    expose this application to internet @ port 31111

 

 
 
 