Tuesday, January 18, 2022

12 factors: Stateless process

Keep your microservices stateless so that they can take advantage of cloud-native benefits such as scaling.


Characteristics:

  • The process running in the environment should be stateless. 
  • Should not share anything with other processes. 
  • No sticky sessions



What you should or shouldn't do: 

  • Don't store any data in the local file system. 
  • Sessions: 
    • Store sessions, if any, in a distributed session store such as Redis (see the sketch below). 
    • No sticky sessions. 
  • Create stateless services. 
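
A minimal sketch of the Redis-backed session store mentioned above, assuming a Spring Boot service with spring-session-data-redis on the classpath (the host name and exact property names are illustrative and vary slightly by Spring Boot version):

```yaml
# application.yml (sketch): keep HTTP session state in Redis instead of the JVM
spring:
  session:
    store-type: redis          # back HTTP sessions with Spring Session + Redis
  redis:
    host: redis.internal       # assumed shared Redis host
    port: 6379
server:
  servlet:
    session:
      timeout: 30m             # session timeout applied by Spring Session
```

With sessions held in Redis, any instance of the service can handle any request, so sticky sessions are not needed.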


12 factors: Build - release - run

 Summary: 

Build: Application code gets converted into an artifact such as a war file. 

Release: the artifact is combined with the configuration for a specific environment - Dev, QA, Prod. 

Run: the release is executed in that specific environment. 



Characteristics

  • Strict separation between the build, release and run stages of the application. 
  • Every release must have a release id such as a timestamp or an incrementing number. 
  • One-click release - using a CI/CD tool. 


What should you do: 

  • Use a CI/CD tool such as Jenkins or Concourse (a pipeline sketch follows at the end of this section).


Best Practices:

  • Create smoke tests to ensure the app is running fine after deployment. 
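
A purely illustrative, tool-agnostic pipeline sketch (not real Jenkins or Concourse syntax) showing the strict separation of the three stages, with a release id and a post-deployment smoke test:

```yaml
# Hypothetical pipeline description - only illustrates the build -> release -> run separation
stages:
  - name: build
    steps:
      - run: mvn clean package                 # produces the immutable artifact, e.g. target/app.war
  - name: release
    steps:
      - run: tag-artifact --id 2022.01.18-42   # hypothetical step: stamp a release id (timestamp / counter)
      - run: attach-config --env qa            # hypothetical step: combine the artifact with QA configuration
  - name: run
    steps:
      - run: deploy --env qa                   # hypothetical step: start the release in the target environment
      - run: smoke-test --env qa               # verify the app is running fine after deployment
```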

12-factor (Externalize configurations)

Externalize configuration (environment-specific settings) out of the source so that the application follows a build-once, deploy-anywhere approach. 

 Motivation: 

  • How do we manage properties today: application.properties, application.yml, system.properties, 
    • environment variables, e.g. export SERVER_PORT=8080 
    • passing via the command line, e.g. -Dserver.port=8080
  • Some values (e.g. server.port) are environment-specific and should not live in the local source. 


What is Cloud Config server: 

It's a way to externalize property values by moving them out of the project. It is used primarily for properties that you normally override per environment and for secret values that should be kept out of the project source. 


Motivation:

  • Externalize config data from the applications across all environments. 
  • Store crucial information such as password for DB in a centralized place. 
  • Share data between different services. 
  • See who changed which property and when and why. 


What you should do to use Spring Cloud Config:

  1. Application configuration: 
    • Manage the app configuration in GitHub or on a local file system. 
  2. Config server: 
    • Point it at the Git repo or local file system where the app configuration is located (i.e., the Git URI) - see the YAML sketch below. 
    • Annotate the main class with @EnableConfigServer. 
  3. Config client: 
    • Use it as you normally would when reading from a properties file, 
      • via @Value or @ConfigurationProperties 
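
A minimal sketch of the two configuration files involved (repository URI, port, and application name are illustrative; this assumes the classic bootstrap.yml-based Spring Cloud Config client):

```yaml
# Config server: application.yml (the main class carries @EnableConfigServer)
server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/app-config   # illustrative repo holding <app>-<profile>.yml files
---
# Config client: bootstrap.yml (a separate file in each client service)
spring:
  application:
    name: order-service            # matched against order-service.yml in the config repo
  cloud:
    config:
      uri: http://localhost:8888   # where the config server runs
```

The client then reads the resolved values exactly as it would from a local properties file, via @Value or @ConfigurationProperties.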


Features

  • Many storage options including Git, Subversion. 
  • Pull model
  • Traceability
  • @RefreshScope allows properties to be refreshed at runtime; annotated beans are re-initialized on the next access after a refresh (see the sketch after this list). 
  • @ConfigurationProperties is an alternative to @RefreshScope: beans annotated with @ConfigurationProperties are re-bound on refresh without needing @RefreshScope. 
  • Encrypt and decrypt property values. 
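
For the refresh to be triggerable over HTTP, the client has to expose the actuator refresh endpoint; a minimal sketch assuming Spring Boot 2.x with the actuator on the classpath:

```yaml
# Config client: application.yml - expose the refresh endpoint
management:
  endpoints:
    web:
      exposure:
        include: refresh   # POST /actuator/refresh re-binds @RefreshScope / @ConfigurationProperties beans
```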


Use cases for Cloud Config: 

  • Change log levels via properties (an illustrative snippet follows this list). 
  • Environment config - including secrets such as DB passwords. 
  • Toggle a feature on or off. 
  • Refresh configuration at runtime using @RefreshScope or @ConfigurationProperties. 
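
As an illustration of the first and third use cases, a file in the config repository might contain entries like these (the application and property names are made up):

```yaml
# order-service.yml in the config Git repo (illustrative)
logging:
  level:
    com.example.order: DEBUG   # change the log level at runtime, then refresh
features:
  new-checkout-flow: false     # custom property read via @Value / @ConfigurationProperties as a feature toggle
```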


Order in which Cloud Config looks up property sources: 

  • Config server. 
  • Command line parameters
  • System properties.
  • Classpath: application.yml file
  • Classpath: bootstrap.yml


Things that you can try: 

  • A simple example that keeps properties in Git and reads them through the config server and a client. 
  • Using Maven-profile-specific properties with a GitHub-backed Config server. 
  • Managing log levels in the Config server. 
  • Using @RefreshScope and @ConfigurationProperties. 


Best practices: 

  • The Config server pulls from GitHub every time a client requests properties; consider a Jenkins job that pushes the properties to a local clone that the Config server serves from. 
  • Use a single repository for all teams, with subdirectories for different environments. 
  • For source maintenance, decide between having a branch per release and always using master. 
  • Manage encrypted values for secure data, e.g. using JWT with the Config server. 
  • Use Spring Cloud Bus to refresh all service instances that listen on RabbitMQ (a sketch follows this list). 
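
A minimal sketch of the Spring Cloud Bus idea from the last bullet, assuming spring-cloud-starter-bus-amqp is on each client's classpath and a reachable RabbitMQ broker (the host name is illustrative; the endpoint id is bus-refresh or busrefresh depending on the Spring Cloud version):

```yaml
# Each config client's application.yml (sketch)
spring:
  rabbitmq:
    host: rabbitmq.internal    # assumed broker host
    port: 5672
management:
  endpoints:
    web:
      exposure:
        include: bus-refresh   # POST /actuator/bus-refresh on one instance broadcasts the refresh to all
```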


Note: 

  • Store only things that genuinely need to be externalized; e.g., resource bundles for internationalization should stay at the project level. 


Questions: 

  • How do I refresh all the instances of a service when a configuration is changed? 
  • Should dev and prod use different repositories? 
  • What should I do so that I don't have to trigger a refresh on every service? 
  • What are the encrypt and decrypt endpoints in the Config server? 
  • A real-world example of what the config properties look like and how they are maintained. 
  • What is the difference between Spring Cloud and Spring Cloud Services in Maven? 

Wednesday, March 25, 2020

Kubernetes installation in local systems (Multi-node)

This explains how to set up Kubernetes across multiple nodes: 

1. Master node on one machine. 
2. Worker nodes on other machines. 

Steps: 

## Set up all nodes

The following needs to be done on all master and worker nodes.
#### Download gpg key for docker 
```shell script
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
#### Add docker repository
```shell script
sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
```

#### Add gpg key for kubernetes
```shell script
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
```
#### Add kubernetes repository
```shell script
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
```

#### Install Docker and Kubernetes packages (versions pinned to work around a defect in k8s 1.13.4)
```shell script
sudo apt-get update

sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.5-00 kubeadm=1.13.5-00 kubectl=1.13.5-00
```
#### Prevent the following packages from automatically upgrading:
- docker-ce
- kubelet
- kubeadm
- kubectl

```shell script
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
```

#### Enable iptables bridge call: 
```shell script
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

## Master node setup

#### Initialize the cluster
```shell script
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```
#### Set up kubectl
```shell script
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
#### Install flannel networking
```shell script
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```

## Worker node setup

On each worker node, run the join command printed by kubeadm init:
```shell script
sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash
```

The workers should now have joined the master. Go to the master node and issue these commands to view the nodes:
```shell script
kubectl get nodes
kubectl get pods --all-namespaces
```

 

Sunday, September 29, 2019

Kubernetes multi-container pods

You can have more than one container per pod. OK, so what is a multi-container pod?

# What is a multi-container pod:

  • A pod that has more than one container. 
  • The containers should not be totally unrelated; they must be related in some way so that together they provide a single unit of work. 

Why multi-container pods: the main container no longer has to carry networking or cross-cutting logic such as log handling; another container in the same pod takes care of that. The developer can concentrate on the business logic while the other container acts as a proxy/sidecar and handles everything else.

How can containers in the pod communicate with each other:

1. Shared network namespace: a container can interact with another container using localhost.

2. Shared storage volume: one container can write to a shared storage volume while the other container reads from it (a YAML sketch appears at the end of this post).

3. Shared process namespace: containers can interact with one another's processes using a shared process namespace.



# How to create a multi-container pod


```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.15.8
    ports:
    - containerPort: 80
  - name: busybox-sidecar
    image: busybox
    command: ['sh', '-c', 'while true; do sleep 30; done;']
```

In the above YAML, the nginx container listens on port 80, and the busybox sidecar can reach it at localhost:80 because both containers share the pod's network namespace.
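
A sketch of the shared-storage-volume pattern mentioned earlier (image names and paths are illustrative): one container writes to an emptyDir volume while the other reads from it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                 # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox
    command: ['sh', '-c', 'while true; do date >> /data/out.log; sleep 5; done']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ['sh', '-c', 'tail -F /data/out.log']
    volumeMounts:
    - name: shared-data
      mountPath: /data
```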


Sunday, September 22, 2019

HTTP Status codes for debugging (Refresh)


1xx (Transitional / informational - you may not use these much for debugging)
- 100 (Continue)
- 101 (Switching Protocols)
- 103 (Early Hints)


2xx (Success - everything went well - these are the responses you want)
- 200 (OK)
- 201 (Created)
- 202 (Accepted)
- 205 (Reset Content)
- 206 (Partial Content)

3xx (Redirection - you asked for something and were redirected somewhere else)
- 301 (Moved Permanently - redirected from the old URL to the new URL)
- 302 (Found)
- 304 (Not Modified - the file has not been modified since it was last fetched)
- 305 (Use Proxy)
- 307 (Temporary Redirect)

4xx (Client errors)
- 401 (Unauthorized - the login credentials are missing or incorrect)
- 403 (Forbidden - the server knows who you are, but you are not allowed to access the resource)
- 404 (Not Found - the requested URL was not found)
- 410 (Gone - the page is truly gone, no longer available and not coming back)

5xx (Server errors)
- 500 (Internal Server Error - unexpected; the server does not know what the problem is, but something went wrong on its side)
- 503 (Service Unavailable - expected; the server is temporarily not available)
- 504 (Gateway Timeout - the server made a call to another server and it timed out)

Saturday, September 14, 2019

Dockerfile - some basic best practices



Create a new Docker file named Dockerfile:

```dockerfile
FROM alpine:3.4
MAINTAINER Deiveehan Nallazhagappan deiveehan@gmail.com

RUN apk update
RUN apk add vim
```

Go to the prompt and build the image:

```shell script
docker build -t deiveehan/alpine-extend .
```

This should create the image.

What is the image cache: 
Docker builds and caches an image layer for every instruction in the Dockerfile. 
This is done so that, if you add more instructions to the Dockerfile later, the build does not have to start from scratch. 
For example, if I add one more line to install git:

```dockerfile
FROM alpine:3.4
MAINTAINER Deiveehan Nallazhagappan deiveehan@gmail.com

RUN apk update
RUN apk add vim
RUN apk add git
```

then Docker reuses the cached layers up to "apk add vim" and builds only the "apk add git" step. 

Best practices:
1. Keep the RUN commands (or any instructions) that don't change frequently at the top of the Dockerfile, and the ones that change frequently at the bottom, so that the cached layers can be reused. 

You can also combine related commands into a single RUN:
```dockerfile
RUN apk update && \
    apk add curl && \
    apk add vim && \
    apk add git
```
so that a single layer is created instead of several image cache entries. 

2. Pick the right base image (a slim image)

3. Do it yourself: go to a shell and try out the commands that build up the image step by step, then include those steps in the Dockerfile. 
This is better than blindly following some websites.