Sunday, September 29, 2019

Kubernetes multi-container pods

You can have more than one container per pod. So what exactly is a multi-container pod?

# What is a multi-container pod:

  • A pod that has more than one container. 
  • The containers should not be totally unrelated; they must be related in some way so the pod provides a single unit of work. 

Why multi-container pods: The main container no longer needs to contain networking or cross-cutting logic such as log handling; another container in the same pod can carry that logic. This way the developer can concentrate on the business logic while the other container acts as a proxy and handles everything else.

How can containers in a pod communicate with each other?

1. Shared network space: a container can interact with another container in the same pod using localhost.

2. Shared storage volume: one container can write to a shared storage volume while the other container reads from it.

3. Shared process namespace: containers can interact with one another's processes using a shared process namespace.



# How to create a multi-container pod


apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.15.8
    ports:
    - containerPort: 80
  - name: busybox-sidecar
    image: busybox
    command: ['sh', '-c', 'while true; do sleep 30; done;']


In the above YAML, the nginx container listens on port 80, and the busybox sidecar can reach it via localhost on port 80.
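The shared storage volume pattern can be sketched in the same way. In this illustrative spec (the pod, volume, and container names are made up), one container appends to a file on an emptyDir volume while the other tails it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod          # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                   # scratch volume that lives as long as the pod
  containers:
  - name: writer
    image: busybox
    command: ['sh', '-c', 'while true; do date >> /data/out.log; sleep 5; done']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ['sh', '-c', 'tail -f /data/out.log']
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers see the same /data directory, which is the shared storage volume mechanism listed above.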


Sunday, September 22, 2019

HTTP Status codes for debugging (Refresh)




1xx (Informational / transitional responses - you may not use these much for debugging)
100 - Continue
101 - Switching Protocols
103 - Early Hints (previously used unofficially as Checkpoint)


2xx (Success - everything went well; these are the responses you want)
- 200 (OK)
- 201 (Created)
- 202 (Accepted)
- 205 (Reset Content)
- 206 (Partial Content)

3xx (Redirection - you asked for something and were redirected elsewhere)
- 301: Moved Permanently - the system redirected from the old URL to a new URL.
- 302: Found
- 304: Not Modified - the resource has not changed since it was cached.
- 305: Use Proxy
- 307: Temporary Redirect

4xx (Client errors)
- 401 (Unauthorized - authentication is missing or the credentials are incorrect).
- 403 (Forbidden - the server knows who you are, but you are not allowed access).
- 404 (Not Found - the requested URL was not found).
- 410 (Gone - the page is truly gone, no longer available, and not coming back).

5xx (Server errors)
- 500 (Internal Server Error - unexpected: the server does not know what the problem is, but something went wrong server-side)
- 503 (Service Unavailable - expected: the server is not available)
- 504 (Gateway Timeout - the server made a call to another server and it timed out)

Saturday, September 14, 2019

Dockerfile - some basic best practices



Create a new file named Dockerfile:

FROM alpine:3.4
MAINTAINER Deiveehan Nallazhagappan deiveehan@gmail.com

RUN apk update
RUN apk add vim

Go to the prompt and run: docker build -t deiveehan/alpine-extend .


This should create the image.

What is the image cache: 
Docker builds and caches an image layer for every command in the Dockerfile. 
This is done so that if you add more to the Dockerfile, the build won't take long by starting from scratch. 
For example, if I add one more line to install git: 

FROM alpine:3.4
MAINTAINER Deiveehan Nallazhagappan deiveehan@gmail.com

RUN apk update
RUN apk add vim
RUN apk add git

then it assembles everything up to "add vim" from the local image cache and builds only the "add git" step. 

Best practices:
1. Keep the RUN commands (or any lines in the Dockerfile) that don't change frequently at the top, and the ones that change frequently at the bottom. 

You can also combine commands like this:
RUN apk update && \
    apk add curl && \
    apk add vim && \
    apk add git
so that a single cached layer is created instead of several. 

2. Pick the right image (a slim image)

3. Do it yourself: go to a shell in the base image, type the commands that build the image step by step, and then include those steps in the Dockerfile.
This is better than blindly following some website. 
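Putting these practices together, the earlier Dockerfile could be sketched like this (same packages as above; the single RUN keeps the package installs in one cached layer, and LABEL replaces the deprecated MAINTAINER instruction):

```dockerfile
FROM alpine:3.4
LABEL maintainer="Deiveehan Nallazhagappan <deiveehan@gmail.com>"

# Rarely-changing setup first, combined into one layer
RUN apk update && \
    apk add curl && \
    apk add vim && \
    apk add git

# Frequently-changing steps (app code, config) would go below this line
```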



Friday, September 13, 2019

Kubernetes kubectl cheatsheet


Cheatsheets:



Kubectl commands
  • Apply/create: create resources
  • Run: run the pod from an image. 
  • Explain: documentation of resources. 
  • Delete: a resource. 
  • Get
    • Deployments
    • Pods
    • Events
    • Nodes
  • Describe: display detailed information. 
  • exec: similar to docker exec (executes a command on the container). 
  • Logs: view logs of a container. 
  • Config
  • Cluster-info
  • Expose
  • Scale

Minikube start
minikube start --cpus 4 --memory 8192

minikube stop
minikube delete
minikube ip
minikube status
minikube dashboard

#namespaces
kubectl get ns
kubectl get po -n default
kubectl get all -n kube-system

nslookup <servicename>.<namespace>

#ssh
kubectl exec -it pod/webapp-669ddb74b6-gbxhl sh


kubectl apply -f first-pod.yml
kubectl get all
kubectl exec webapp ls
kubectl -it exec webapp sh
kubectl describe pod webapp

# delete pod
kubectl delete pods --all
kubectl delete pod webapp-release-0-5
kubectl delete rs webapp

#rollouts
kubectl rollout history deploy webapp
kubectl rollout undo deploy webapp --to-revision=2

# describe:
kubectl describe replicaset webapp

#Logs
kubectl logs <pod-name>

kubectl describe service fleetman-webapp
kubectl delete po webapp-release-0-5


Kubernetes basic building blocks


Building blocks of Kubernetes: 
  • Pods
  • ReplicaSets
  • Deployments
  • Namespaces
#Pods:
  • A wrapper around the container that Kubernetes understands and uses to maintain the cluster state. 
  • Can run multiple containers. 
  • The smallest and simplest deployment unit. 
  • Pods
    • Scheduled on the same host
    • Shares same network namespace
  • Can run different configurations
    • Homogeneous pods: multiple versions of the same deployment
    • Heterogeneous pods: multiple pods of different configurations. 
#ReplicationController
  • The pod manager; makes sure the required number of replicas of a pod is running. If fewer, it creates pods; if more, it kills pods.
#ReplicaSets
  • Next generation Replication controller that manages pod to maintain desired state. 
  • Supports equality-based and set-based selectors. 
#Namespaces:
  • Are used to group resources and assign privileges based on the namespaces.
#Labels and selectors: 
  • Metadata that can be assigned to pods or any objects as labels; a way to glue multiple objects together. 
  • These are used by Kubernetes to perform operations on groups of objects based on labels. 
#Services: 
  • Essentially endpoints by which pods communicate with other pods internally/externally (for example, web server / cache / db need to talk to each other)
  • Exposed through endpoints (Internal and external). 
    • Internal: example: don’t have to expose database to outside world, but accessible internally. 
    • External: web UI to be exposed outside. 

Kubernetes - installation options


Installing Kubernetes.

Types of installation: 
  1. All in one Single node installation: master/worker are installed in single node, useful for learning. 
  2. Single-node etcd, single master, multi-worker installation. 
  3. Single-node etcd, multi-master, multi-worker installation
  4. Multi-node etcd, multi-master, multi-worker installation. 

Where can you install Kubernetes: 
  • Cloud
    • IaaS: VMs inside an IaaS provider such as Amazon. 
    • PaaS: Kubernetes as a managed service. 
      • PKS in Pivotal Cloud. 
  • On-Prem
    • On Prem VMs
    • On Prem bare metal
  • Local installation: 
    • Minikube
  • Hosted solution
    • GCP: using Google Kubernetes Engine (GKE)
    • PCF: using PKS
    • Microsoft Azure: using Azure Container Service
    • AWS: using EKS
    • Openshift dedicated
    • Platform9


The easiest way to get started with Kubernetes is Minikube; make sure you have a good amount of memory locally (8 GB, 16 GB preferred).

You can use GKE on GCP; you get a $300 credit on GCP if you are a new user. You can create the cluster using the Google Cloud console or the CLI.

You can use the EKS option to install a Kubernetes cluster in AWS, via either the EKS console or the EKS CLI. The CLI option is easier and does most of the complex work for you, such as VPCs, subnets, security groups, etc.

Kubernetes - getting started using Minikube


Getting started on Kubernetes - Local installation, running sample images and executing basic commands. 

There are different ways by which you can install Kubernetes, depending upon what you want. 
  • Google cloud
  • On Prem 
  • Local installation (Minikube)

This video installs on Mac; you may want to find a suitable installation for your OS. 

  1. Install VirtualBox. 
sudo apt-get install virtualbox

  2. Install Minikube. 
brew cask install minikube

  3. Start minikube. 
minikube start
minikube status

  4. Install kubectl. 
brew install kubectl

You can access Kubernetes cluster using the following ways:
  1. CLI - kubectl
  2. Dashboard
  3. API

minikube dashboard
This opens a dashboard which you can use to view the pods, replica sets, and other information about the Kubernetes cluster. 

minikube stop



--------------
Create a new sample app in minikube based on an existing image.

$kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
deployment.apps "hello-minikube" created

Exposing the deployment as a NodePort:
$kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed

Get the pod:
$kubectl get pod
NAME                              READY     STATUS             RESTARTS   AGE
hello-minikube-7f45dd544b-qcwk2   0/1       ImagePullBackOff   0          2m

Delete a deployment:
~ $kubectl delete deployment hello-minikube
deployment.extensions "hello-minikube" deleted


You can create a deployment based on an image as below:
kubectl create deployment <deployment-name> --image=<image-name>
kubectl get deployments

Get all the pods in the default namespace:
kubectl get pods
kubectl get events

Expose the deployment:
kubectl expose deployment <deployment-name> --type=LoadBalancer --port=80
kubectl get services

minikube service <service-name>
kubectl delete service hello-node

Friday, June 14, 2019

12 factor apps



#1. One codebase, 1 application
  • The code base should be backed by a version control system such as Git, Subversion, etc. 
  • 1 code base = 1 app. 
  • Multiple apps sharing the same code is a violation of the 12-factor. 


#2. Dependencies

Manage dependencies in your application manifest. 
  • Database, image processing libraries. 
Characteristics:
  • Dependencies are managed in the app manifests such as maven, gradle etc., 

What should you do or don’t do: 
  • Don’t rely on software pre-installed in each environment; that prevents fully automated deployment. 
  • Don’t assume that related dependencies will be present in the environment where you deploy; you are responsible for wiring in the dependencies. 

#3. Externalize configuration

Application configuration referred to here means values such as:
  • Credentials to access a database or services such as S3. 
  • Environment-specific properties

Motivation: 
  • Environment properties stored in the code are a violation, since the code has to be redeployed whenever the properties change. 
  • It is also not a good idea to store environment values in the code.
#4. Backing services

Are services that the app talks to 

Characteristics
  • Services can be attached or detached whenever required. 

Objective:
  • A different SMTP server should be able to be configured without the need for code change. 

#5. Build release run

Summary: 
Build: Application code gets converted into an artifact such as a war file. 
Release: the artifact is combined with environment-specific configuration - QA, Dev, Prod
Run: the app gets released in the specific environment. 

Characteristics
  • Strict separation between the build, release and run stages of the application. 
  • Every release must have a release ID, such as a timestamp or an incrementing number
  • 1-click release - using CI/CD tool. 
What should you do: 
  • Use a CI/CD tool, Jenkins, Concourse etc.,
Best Practices:
  • Create smoke tests to ensure the app is running fine after deployment. 
#6. Stateless Processes

Characteristics:
  • The process running in the environment should be stateless. 
  • Should not share anything with other process. 
  • No sticky sessions
What you should or shouldn’t do: 
  • Don’t store any data in a local file system. 
  • Sessions: 
    • Store your sessions if any in a distributed session storage db, such as Redis. 
    • No sticky sessions. 
  • Create stateless services. 

#7. Port binding
    The port the application binds to should be configurable and should not depend on a particular infrastructure setup. 
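As a sketch of this factor (Spring Boot flavored; the application.yml and the PORT variable are illustrative assumptions, not from the post), the bind port can come from the environment with a sensible default:

```yaml
# application.yml (hypothetical): the bind port comes from the PORT
# environment variable, falling back to 8080 when it is not set
server:
  port: ${PORT:8080}
```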

#8. Concurrency
#9. Disposability

Your application should work fine even when one or more app instances die. 

#10. Dev/Prod parity

Keep the development environment as close as possible to the production environment. 


#11. Logs

Ensure it is easy to view and debug logs even when multiple instances of a service exist. 


What should you do or don’t do:
  • No System.out.printlns
  • Write to log files on the web server using log4j-like frameworks. 
  • Stream logs to an external server where it can be viewed easily. Send the logs to a centralized logging facility. 
#12. Admin processes

Characteristics: 
  • Execute any migration scripts on deployment or at startup, if any. 
What should you do or don’t do: 
  • Store migration scripts in the repository. 

Friday, May 24, 2019

Docker - getting started (very basics)


Docker-Overview

Docker makes it much easier to deploy applications. It helps solve one main problem: the “works on my machine” problem. 

Getting started
  • Download docker from here for your OS: 
  • Install the docker DMG if Mac.
  • Login to docker once it is installed. 
  • Open terminal and execute the following: 
    • docker --version
    • docker run hello-world
    • docker run --name deiveehello hello-world
    • docker run -it --name mylinux-container ubuntu bash
For a hands on video - click here

——————————————————————
Some basic docker commands
——————————————————————

docker images // lists all images
docker ps // lists running containers
docker run --help

———————————————————————
Running a docker using simple ubuntu image
———————————————————————

docker ps -a -f status=exited -q
docker rm $(docker ps -a -f status=exited -q)
docker run -it --name my-linux-container-1 ubuntu bash


docker run -it --name my-linux-container1 --rm -v /Users/deiveehannallazhagappan/Documents/volumes/project1:/my-data ubuntu bash
cd my-data
touch aa.txt
ls

———————————————————————
Creating image from a local docker file.
———————————————————————

  • Create a docker file with the following content.
FROM ubuntu
CMD echo "Hello Deivee"
  • Run the following command. 

docker $docker build -t deivee-tracker-image .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM ubuntu
 ---> f975c5035748
Step 2/2 : CMD echo "Hello Deivee"
 ---> Running in 10c68c4eb454
Removing intermediate container 10c68c4eb454
 ---> b235e94e39b4
Successfully built b235e94e39b4
Successfully tagged deivee-tracker-image:latest

docker $docker images
REPOSITORY             TAG                 IMAGE ID            CREATED              SIZE
deivee-tracker-image   latest              b235e94e39b4        About a minute ago   112MB
ubuntu                 latest              f975c5035748        4 weeks ago          112MB
hello-world            latest              f2a91732366c        4 months ago         1.85kB

———————————————————————
Run the docker image
———————————————————————
docker $docker run deivee-tracker-image
Hello Deivee
docker $docker ps -a
CONTAINER ID        IMAGE                  COMMAND                   CREATED             STATUS                      PORTS               NAMES
9a65c141ec46        deivee-tracker-image   "/bin/sh -c 'echo \"H…"   5 seconds ago       Exited (0) 4 seconds ago                        optimistic_stonebraker
0e922f16f866        ubuntu                 "bash"                    12 minutes ago      Exited (0) 11 minutes ago                       tracker
be1311055d15        ubuntu                 "bash"                    23 minutes ago      Created                                         my-linux-container
77447d4d6c02        hello-world            "--name deiveehello"      12 hours ago        Created                                         elastic_wozniak

For a more hands on video on getting started with Docker, click here.
https://www.youtube.com/watch?v=hfL0USkCmZ0&t=305s


Java 8 - new important features


Java 8 includes a lot of new features; this post explains a few important ones. 
  1. Lambda Expressions. 
  2. Functional Interfaces. 
  3. Streams
  4. Nashorn
Other features: 
  • Date and Time API changes. 
  • forEach. 
——————————————
Lambda expressions
——————————————

A lambda expression is a function with no name but with an implementation. 
A basic expression format would be:
(x, y) -> x + y

Here (x, y) are the function's input parameters, and x + y is the expression evaluated by the function and returned. 

Example: 
FileFilter filterLambda = (File pathName) -> pathName.getName().endsWith(".png");

File dir = new File("/Users/deiveehannallazhagappan/workspace/xplore-java/supporting");
File[] files = dir.listFiles(filterLambda);

for (File f: files) {
    System.out.println("f = " + f);
}


——————————————
Functional Interfaces
——————————————
Functional interfaces are interfaces that have only one abstract method in it. 

Example: 
@FunctionalInterface
public interface TestFuncInterface {
    public void testmethod();

    default void testdefaultmethod() {
        System.out.println("default method");
    }
}

Lambda expressions can be used to provide the instance of a functional interface. 
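For instance, with a made-up Greeter interface (not from the post), a lambda supplies the implementation of the single abstract method:

```java
@FunctionalInterface
interface Greeter {
    // the single abstract method; a lambda can implement it
    String greet(String name);
}

public class LambdaDemo {
    static String greetWith(Greeter g, String name) {
        return g.greet(name);
    }

    public static void main(String[] args) {
        // the lambda below acts as the instance of the functional interface
        Greeter greeter = name -> "Hello " + name;
        System.out.println(greetWith(greeter, "Deivee")); // prints "Hello Deivee"
    }
}
```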

Note: 
  • You can add default methods.


——————————————
Streams
——————————————
Streams are a new feature introduced in Java 8 to simplify the logic required to process data - filtering, transforming, etc. 

For example, the code below does the following:

  • Creates a new array list containing 1 to 29. 
  • Creates a stream of integers. 
  • Filters the stream, keeping only numbers divisible by 3, and converts them into an Integer array. 

Stream example: 
// requires java.util.* and java.util.stream.Stream
List<Integer> list = new ArrayList<>();
for (int i = 1; i < 30; i++) {
    list.add(i);
}
Stream<Integer> stream = list.stream();
Integer[] multiplesOfThree = stream.filter(i -> i % 3 == 0).toArray(Integer[]::new);
System.out.println(Arrays.toString(multiplesOfThree));

Why streams: reduce complexity, less code. 

——————————————
Nashorn
——————————————
Nashorn is a JavaScript engine that allows Java applications to interact with JS code. 

ScriptEngineManager sem = new ScriptEngineManager();
ScriptEngine engine = sem.getEngineByName("nashorn");
engine.eval("print('deiveehan')");

You can also execute an external script instead of using inline JS code: 
engine.eval(new FileReader("/Users/deiveehannallazhagappan/workspace/xplore-java/supporting/js/deiveescript.js"));

You will get the output of the JS code when you run the Java application. 



Refreshing - Git basics

This post explains some of the basics of Git (a simple refresher on Git commands).


git clone <git url>: downloads a project from a remote Git repository

init
add <file> or .
commit -m <commit message> (commits files to the local repository)

status (lists all new and modified files)
diff (shows modified files)

fetch: fetches the remote repo into the local repo, but does not merge
pull: does a fetch and also merges into the local repo. 
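The fetch/pull difference can be seen offline with two clones of a local bare repository (all paths below are throwaway temp directories, not real remotes):

```shell
set -e
work=$(mktemp -d)

# A bare repo stands in for the remote ("origin")
git init -q --bare "$work/origin.git"

# First clone: our local repo
git clone -q "$work/origin.git" "$work/clone"
cd "$work/clone"
git config user.email you@example.com
git config user.name "You"
echo v1 > file.txt
git add file.txt
git commit -qm "v1"
git push -qu origin HEAD

# A second clone simulates a teammate pushing a newer commit
git clone -q "$work/origin.git" "$work/mate"
cd "$work/mate"
git config user.email mate@example.com
git config user.name "Mate"
echo v2 > file.txt
git add file.txt
git commit -qm "v2"
git push -q origin HEAD

# Back in our clone: fetch downloads the commit but does not touch the working tree
cd "$work/clone"
git fetch -q
grep -q v1 file.txt          # still v1 locally

# pull = fetch + merge, so the working tree moves to v2
git pull -q --ff-only
grep -q v2 file.txt
```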

remote: lists the remotes
git remote
git remote -v

log: shows the commit history
git log
git log --all
git log --all --oneline --decorate --graph



SCENARIOS:


———————————————————
# Local folder to Git remote. 
————————————————————

Steps to commit an already existing local project to Git: 
git init
git add README.md
git commit -m "Initial commit"
Create a repository on GitHub and copy the remote URL.
git remote add origin git@github.com:deiveehan/<reponame>.git
git push -u origin master

————————————————————
# Create a branch from command line.
————————————————————

git branch --list (shows all the branches)
git checkout -b develop (creates a local branch called develop)
Change a file.
Perform git add and git commit on the local code.
git push --set-upstream origin develop

————————————————————
# Switching branches.
————————————————————
git branch (shows local branches)
git checkout develop

————————————————————
# Merging with parent branch (say develop)
————————————————————
Master
  • Develop
  • A

commit A changes to local repository
git checkout develop
git merge A
git push

————————————————————
# Deleting remote branch
————————————————————
git branch -d A (deletes the local branch reference)
git push origin -d A (deletes the remote branch)
————————————————————
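The branch, merge, and delete scenarios above can be rehearsed offline in a scratch repository (the branch name A and file names are made up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

echo hello > README.md
git add README.md
git commit -qm "Initial commit"

git checkout -qb A           # create and switch to branch A
echo feature > feature.txt
git add feature.txt
git commit -qm "Add feature on A"

git checkout -q -            # switch back to the parent branch
git merge -q A               # fast-forward merge of A
git branch -d A              # delete the local branch reference
ls                           # both README.md and feature.txt are present
```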




Monday, March 11, 2019

Garbage collection



Terms:
  • Young generation (Minor GC). 
    • Eden: 
      • New objects gets allocated here. 
    • From survivor
      • Once the eden space is full, a minor GC occurs and moves surviving objects from eden and the from-survivor space into the to-survivor space. 
    • To Survivor
  • Old generation (Tenured) (Major GC) 
    • Oldest objects to old generation
  • Perm generation 
Note: 
  • SerialGen operates on the young gen. 
  • Concurrent gen operates on the old gen. 
-XX:+PrintCommandLineFlags -version TestSystemGC

Concepts: 
Minor GC:

Garbage collection algorithms,
  • Minor GC: 
    • Serial Copy collector:  -XX:+UseSerialGC
      • -XX:+UseSerialGC
      • When to use: 
    • Parallel scavenge collector: -XX:+UseParallelGC
    • Parallel copy collector: -XX:+UseParNewGC
    • Garbage first collector: -XX:+UseG1GC
  • Major GC:
    • Mark-sweep compact collector: -XX:+UseSerialGC
    • Concurrent mark-sweep (CMS) collector: -XX:+UseConcMarkSweepGC
    • Garbage first collector: -XX:+UseG1GC

Full GC:
  • When a full GC occurs, the pause time is normally longer. 

What you need to know
  • Frequency in which minor GC. 
    • Object allocation rate
    • Size of the eden space. 
  • Frequency of object promotion to old generation
    • Frequency of minor GC (how quickly objects age)
    • Size of survivor space. 
Steps: 
  • Check the frequency of Minor GC: (Defined by allocation rate and size of Eden)
    • Higher allocation rate and smaller eden space: More frequent minor GC
    • Lower allocation rate or larger eden space: less frequent minor GC. 
  • Check the frequency of Full GC: (Defined by promotion rate and size of old generation space)
    • For Parallel GC 
      • Higher promotion rate and small old generation space: frequent full GC
      • Lower promotion rate and larger old gen: less frequent full GC.
    • For CMS & G1
  • G1: 
    • Avoids fragmentation. 
  • CMS: 

Notes:
  • The longer an object lives, the greater the impact on throughput, latency, and footprint.
  • Object retention can degrade performance more than object allocation. 
  • GC only visits live objects. GCs love small immutable objects. 
  • Object allocation is very cheap (10 CPU instructions). 


Takeaways:
  • It's better to use short-lived immutable objects than long-lived mutable objects. 
  • Start with -XX:+UseParallelOldGC and avoid full GC. 
  • If there are frequent full GC then move to CMS or G1 if needed (for old gen collections). 

Best practices: 
  • Avoid data structure resizing. 
  • Avoid large allocation. 
  • Don't use finalizers (they require extra GC cycles and make GC slower); if required, use reference objects as an alternative. 
  • Avoid soft references. 

Externalizing configurations.




Motivation for externalizing configuration:
  • Externalize config data from the applications across all environments. 
  • Store crucial information such as password for DB in a centralized place. 
  • Share data between different services. 
  • See who changed which property, when, and why. 
  • Build once deploy anywhere. 

What is Spring Cloud Config server: 

It is a way to externalize property values by moving them out of the project, primarily for properties that you normally override or for secret values you want to keep out of the project source. 
What should you do to use Spring Cloud Config:
  1. Application configuration: 
    • Manage app configuration in a git hub or a local file system. 
  2. Config server
    • Point it at the location of the Git repo or local file system where the app config is located (i.e., the Git URI)
    • @EnableConfigServer
  3. Config client
    • Use it normally as how you would read from the properties file 
      • @Value or @ConfigurationProperties

Features
  • Many storage options including Git, Subversion. 
  • Pull model
  • Traceability
  • @RefreshScope allows properties to refresh; it calls the constructors of the beans and reinitializes them. 
  • @ConfigurationProperties is an alternative to @RefreshScope; beans annotated with @ConfigurationProperties are reinitialized on refresh. 
  • Encrypt and decrypt property values. 

Usecases for Cloud config: 
  • Change log level in the properties. 
  • Environment config - containing passwords. 
  • Toggle a feature on or off. 
  • Using @RefreshScope
  • @ConfigurationProperties. 

Order in which Spring Cloud Config looks at property sources: 
  • Config server. 
  • Command line parameters
  • System properties.
  • Classpath: application.yml file
  • Classpath: bootstrap.yml
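On the client side, a minimal sketch of a bootstrap.yml that points at a config server (the URL and application name are placeholder assumptions):

```yaml
# bootstrap.yml (hypothetical values)
spring:
  application:
    name: my-service               # maps to my-service.yml in the config repo
  cloud:
    config:
      uri: http://localhost:8888   # where the config server is running
```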

Things that you can try: 
  • Simple example to demonstrate properties in Git and reading from config server and client 
  • Using maven profiles specific properties in Config server in Github. 
  • Managing log levels in the Config server
  • Using @RefreshScope and @ConfigurationProperties. 

Best practices: 
  • The config server pulls from GitHub every time a client requests properties; consider a Jenkins job that pushes the properties to a clone that the config server serves. 
  • Single repository for all teams, subdirectories for different environments. 
  • Source code maintenance: having a branch per release vs. always using master. 
  • Managing encrypted values for secure data, using JWT with Config server.
  • Use Spring Cloud Bus to refresh all the services listening on RabbitMQ.

Note: 
  • Store only things that need to be externalized; e.g., resource bundles for internationalization should stay at the project level.