
Kubernetes Probes

Kubernetes probes are health checks that the kubelet (the node agent) performs on containers running inside a Pod. Their primary purpose is to let Kubernetes know the state of your application so it can make intelligent decisions, like restarting a failing container or not sending traffic to a Pod that isn't ready. There are three distinct types of probes, each serving a different purpose:

Liveness Probe (livenessProbe)
- Purpose: Answers the question, "Is the application alive and running?"
- Action: If a liveness probe fails, the kubelet restarts the container.
- Use Case: To recover from a deadlock or a situation where the application is running but is unable to make any progress. A classic example is a web server that has stopped responding to HTTP requests.

Readiness Probe (readinessProbe)
- Purpose: Answers the question, "Is the application ready to accept traffic?"
- Action: If a readiness probe fails, th...
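As a minimal sketch, the two probes described above might be configured like this in a Pod spec (the Pod name, image, port, and endpoint paths are illustrative assumptions, not from the post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app              # hypothetical Pod name
spec:
  containers:
    - name: web
      image: nginx:1.25      # illustrative image
      ports:
        - containerPort: 80
      livenessProbe:          # kubelet restarts the container if this fails
        httpGet:
          path: /healthz      # assumed health endpoint
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
      readinessProbe:         # Pod stops receiving Service traffic if this fails
        httpGet:
          path: /ready        # assumed readiness endpoint
          port: 80
        periodSeconds: 5
```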

Auto scaling and Metrics Server

Benefits of Autoscaling:
- High/improved availability of the application
- Elasticity
- Better resource utilization
- Seamless load management

There are 2 types of autoscaling:
- Vertical scaling: increasing the capacity of the same single system
- Horizontal scaling: increasing the number of instances/servers/pods

In the world of DevOps, horizontal scaling is preferred.

HPA: Horizontal Pod Autoscaling --> used to scale the number of Pod replicas up/down based on observed metrics (CPU or memory utilization).
- It observes all the required metrics and, based on them, adds Pods.
- It can track multiple metrics and adjust the Pods accordingly.
- HPA interacts with the Metrics Server to identify CPU/memory utilization.

VPA: Vertical Pod Autoscaling

Metrics Server is an application that collects CPU and memory metrics from Pods and nodes.
- The Metrics Server is not present by default in a K8S cluster.
- The Metrics Server is a scalable, efficient sour...
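A minimal HPA manifest sketch for the flow described above, assuming a Deployment named my-app already exists and the Metrics Server is installed in the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU crosses 70%
```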

K8S Services

A Service is used to expose Pods. We have 3 types of services in K8S:
- Cluster IP
- Node Port
- Load Balancer

Cluster IP:
- If we want to expose Pods within the cluster, the Cluster IP service is used.
- A Pod is a short-lived object. If a Pod is damaged/deleted/crashed, K8S will replace it with a new Pod (self-healing).
- When a Pod is re-created, its IP changes (it is not recommended to access Pods using the Pod IP).
- The Cluster IP service is used to link all Pods under a single IP.
- The Cluster IP is a static IP to access Pods.

Node Port:
- If we want to expose Pods outside the cluster, the Node Port service is used.
- Using a node port we can access our app with the worker node's public IP.
- When we use one worker node IP to access our Pod, all requests go to the same worker node (the burden on that node increases).
- To distribute load across multiple worker nodes we use the Load Balancer service.

Load Balancer:
- It is used to expose our Pods outside the cluster (e.g., an AWS load balancer).
- When we access the load balancer URL, requests wil...
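A sketch of the first two service types as manifests (service names, labels, and ports here are made-up examples):

```yaml
# ClusterIP: expose Pods inside the cluster under one stable IP
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc            # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: my-app               # assumed Pod label
  ports:
    - port: 80
      targetPort: 8080
---
# NodePort: expose the same Pods on every worker node's IP
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080         # must fall in the default 30000-32767 range
```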

K8S installation in a Linux VM (Ubuntu)

-> Create a Linux VM (Ubuntu) EC2 instance for the host machine

Step 1: Install kubectl using the commands below

# Download the latest stable release
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Download specific version (example: v1.28.0)
# curl -LO https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl   -- NA

# Make the binary executable
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify installation
kubectl version --client

Step 2: Install AWS CLI

# Download the AWS CLI installer
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

# Install unzip if not available
sudo apt update && sudo apt install -y unzip  # Ubuntu/Debian
# sudo yum install -y unzip    # CentOS/RHEL/Amazon Linux

# Extract the installer
unzip awscliv2.zip

# Run the installer
sudo ./aws/install

# Clean up
rm -rf awscliv2.zip aws/

# Verify i...

Kubernetes architecture

Kubernetes architecture:
Kubernetes is mostly used for orchestration of containers. It is open-source software developed by Google.
-> The Go programming language is used to develop K8S.
-> It is used to manage containers (create/start/stop/delete/scale-up/scale-down).
-> It provides a framework for managing the complex tasks of deploying, scaling, and operating applications in containers.

Advantages:
1) Self-healing: if any container crashes, it will be replaced with a new container.
2) Auto scaling: based on demand, the container count will be increased or decreased.
3) Load balancing: load will be distributed across all containers which are up and running.

Control node:
API Server:
Scheduler: It will identify the pending requests in etcd. With the help of the kubelet, it will schedule the task.
Controller manager: It will verify whether all the tasks which have been scheduled are working as expected or not.
etcd: etcd is the distributed configuration store for the entire cluster. It holds the s...

Spring AOP ( Aspect Oriented Programming )

Cross-cutting concern: separating the service logic from the business logic is called a cross-cutting concern; a clear-cut separation will be there.

Terminologies of AOP:
1. Aspect: the class that provides additional services to the project, like transaction management, security, encode/decode, logging, etc. Separating the business logic from the 3rd-party service and keeping it in a separate class; that class we call an Aspect.
2. Advices: inside an aspect, we write the logic to give us the
3. Pointcut
4. JoinPoint
5. Target
6. Weaving
7. Proxy

============= Need to dump few topics...coming soon ===========

1. BeforeAdvice: before the business method runs, that particular advice should be selected and executed.
2. AfterAdvice: after the execution of the business logic method, that particular advice should be executed.
3. AroundAdvice: there will be 3 portions: before executing b.methods...some part of the advice will be executed, then the b.methods will execute, and then...
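Spring applies these advices through proxies under the hood. As a rough self-contained sketch of an around advice using a plain JDK dynamic proxy (no Spring; the interface, class, and method names here are all hypothetical):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface PaymentService {                         // hypothetical business interface
    String pay(String user);
}

class PaymentServiceImpl implements PaymentService {
    public String pay(String user) {               // the business method
        return "Paid for " + user;
    }
}

// Around advice: runs logic before AND after the business method
class LoggingAdvice implements InvocationHandler {
    private final Object target;
    LoggingAdvice(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        System.out.println("Before: " + m.getName());  // before portion
        Object result = m.invoke(target, args);        // business method runs here
        System.out.println("After: " + m.getName());   // after portion
        return result;
    }
}

public class AopSketch {
    public static void main(String[] args) {
        PaymentService service = (PaymentService) Proxy.newProxyInstance(
                PaymentService.class.getClassLoader(),
                new Class<?>[] { PaymentService.class },
                new LoggingAdvice(new PaymentServiceImpl()));
        System.out.println(service.pay("user1"));
    }
}
```

Here the business class never mentions logging; the cross-cutting concern lives entirely in the advice class, which is the separation the section describes.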

Load Balancers and Load Balancers Algorithms

Load Balancers

When all components are in the same application and all requests come to the same server, the burden on that server increases. The more requests the server gets, the slower it processes them, so the response time for clients is high.

Eg: YouTube, Myntra, Flipkart, ... (lakhs of users send requests, yet the response time stays uniform for clients)

When the burden on a server increases, it processes requests slowly, and sometimes the server might even crash. To reduce the bu...
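The excerpt is cut off before the algorithms themselves. As a sketch of the simplest one, round robin hands each incoming request to the next server in turn (the class and server names below are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round robin: requests 1..n go to servers 1..n, then the cycle repeats,
// so no single server carries the whole burden
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    String next() {
        // floorMod keeps the index valid even after the counter overflows
        int i = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}

public class LbDemo {
    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("server-1", "server-2", "server-3"));  // hypothetical servers
        for (int r = 1; r <= 5; r++) {
            System.out.println("request " + r + " -> " + lb.next());
        }
    }
}
```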