
Thursday 1 September 2022

Push image to OpenShift internal registry

Follow the steps below.


Enable internal registry

oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge

Get the internal registry route

oc get routes -n openshift-image-registry


Export the internal registry URL to an environment variable

export REGISTRY=<The Registry URL you get from previous Command>

Log in to the internal registry

echo $(oc whoami -t) | docker login $REGISTRY -u $(oc whoami) --password-stdin

Build the Docker image

docker build -t myimage:latest .

Tag your Docker image for the OpenShift registry

docker tag myimage:latest $REGISTRY/openshift/myimage:latest

Push your docker image to OpenShift Internal Registry

docker push $REGISTRY/openshift/myimage:latest

Tag your image with OpenShift ImageStream

oc tag openshift/myimage:latest myimage:latest

List your ImageStream tags

oc get is

Tuesday 7 June 2022

Kubernetes basics and cheatsheet

Before we start with Kubernetes, let us first cover some basics of containers and the benefits of containerization.

A container is an executable package of software that includes everything needed to run it. Containerization is the packaging of software code with just the operating system (OS) libraries and dependencies required to run the code, creating a single lightweight executable (a container) that runs consistently on any infrastructure.

Executable unit of software

  • Encapsulate everything necessary to run
  • Can be run anywhere

OS Virtualization:

  • Isolates processes
  • Controls resources allocated to those processes

Small, fast, and portable

  • Doesn’t include guest OS in every instance
  • Leverages host OS

Benefits of container:

  • Portability
  • Agility: rapid application development
  • Speed: 
    • Lightweight
    • Don’t include a guest OS
    • Spin up quickly and horizontally scalable
  • Fault isolation
    • The failure of one container does not affect the continued operation of any other containers
  • Efficiency / cost effective
  • Ease of management
  • Security

The Open Container Initiative (OCI), established in June 2015 by Docker and other industry leaders, is promoting common, minimal, open standards and specifications around container technology.
The ecosystem is standardizing on containerd, with other alternatives including CoreOS rkt, Mesos Containerizer, LXC Linux Containers, OpenVZ, and CRI-O.


Docker is a platform for building and running containers. A Dockerfile serves as the blueprint for an image.
  • Image: An image is an immutable file that contains everything necessary to run an application.
  • Container: a running instance of an image.
  • Each Dockerfile instruction creates a new read-only layer. A writable layer is added when an image is run as a container.
Note: The main difference between ADD and COPY in a Dockerfile is that COPY can only copy local files or directories, whereas ADD can also add files from remote URLs.
CMD specifies the default command to execute and conventionally appears last in the Dockerfile.
Naming: hostname/repository:tag
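A minimal Dockerfile sketch illustrating these points (the base image, file names, and remote URL are hypothetical):

```dockerfile
# Each instruction below creates a new read-only layer.
FROM python:3.11-slim                         # base image; the host OS kernel is shared, not bundled
WORKDIR /app
COPY requirements.txt .                       # COPY: local files or directories only
RUN pip install -r requirements.txt
ADD https://example.com/assets.tar.gz /tmp/   # ADD: can also fetch files from remote URLs
COPY . .
CMD ["python", "app.py"]                      # default command; conventionally the last instruction
```

Build and tag it following the hostname/repository:tag convention, e.g. docker build -t myregistry.example.com/myapp:latest .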

Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. Wikipedia

Managing the lifecycle of containers, especially in large, dynamic environments
  • Provisioning and deployment
  • Availability
  • Scaling
  • Scheduling to infrastructure
  • Rolling updates
  • Health checks
Kubernetes is “a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.”

Kubernetes is not a PaaS. It:
  • Does not limit the types of applications
  • Does not deploy source code or build applications
  • Does not provide built-in middleware, databases, or other services
A Kubernetes cluster is a set of nodes that run containerized applications. When you deploy Kubernetes you get a cluster. A Kubernetes cluster consists of two types of nodes:
  • Control plane (master node)
  • Nodes (worker nodes)


Control plane (master node):
It makes decisions about the cluster and detects and responds to events in the cluster.
  • Kubernetes API: All communication in the cluster utilizes this API.
  • Kubernetes scheduler:
    • The Kubernetes scheduler assigns newly created Pods to nodes. This means that the scheduler determines where your workloads should run within the cluster.
  • etcd:
    • A highly available key-value store that contains all the cluster data. When you tell Kubernetes to deploy your application, that deployment configuration is stored in etcd. etcd is thus the source of truth for the state of a Kubernetes cluster, and the system works to make the actual state match it.
  • Kubernetes controller manager
    • The Kubernetes controller manager runs all the controller processes that monitor the cluster state and ensure that the actual state of a cluster matches the desired state. 
  • Cloud controller manager:
    • Runs controllers that interact with the underlying cloud providers. These controllers effectively link clusters into a cloud provider’s API. Since Kubernetes is open-source software and would ideally be adopted by a variety of cloud providers and organizations, it strives to be as cloud-agnostic as possible.
Kubernetes worker nodes
  • Nodes
    • Nodes are the worker machines in a Kubernetes cluster. In other words, user applications run on nodes. A node can be a physical or a virtual machine. Nodes are managed by the control plane and contain the services needed to run applications.
  • Kube proxy:
    • Network proxy
    • Maintains network rules that allow communication to pods
  • Kubelet:
    • Communicates with the API server
    • Ensures that Pods and their associated containers are running
    • Reports to the control plane on health and status
A control loop is defined as a non-terminating loop that regulates the state of a system.

Kubernetes Objects are persistent entities in Kubernetes. "Persistent" means that when you create an object, Kubernetes continually works to ensure that the object exists in the system, until and unless you modify or remove it.
  • Persistent entities in Kubernetes
  • Define the desired state of your workload
  • Use the Kubernetes API (e.g. via kubectl) to work with them
Kubernetes objects consist of two main fields.
  • The first is the object "spec," which is provided by the user. The spec dictates the desired state for this object.
  • The second field is the "status," which is provided by Kubernetes. The status describes the current state of the object—its actual state as opposed to its desired state. The status is updated if at any time the status of the object changes.
  1. Namespaces: namespaces can be used to provide logical separation of a cluster into virtual clusters.
  2. Labels: Labels are key/value pairs that can be attached to objects in order to identify those objects.
  3. Pods: The simplest unit in Kubernetes; a Pod represents a process running in the cluster and encapsulates one or more containers. Pods serve to scale an app horizontally.
  4. ReplicaSet: A ReplicaSet is a group of identical Pods that are running. A ReplicaSet encapsulates a Pod definition and adds the additional information needed to replicate it.
  5. Deployment: a higher-level object that in turn manages ReplicaSets. A Deployment provides declarative updates for both Pods and ReplicaSets.
    • Provides updates for Pods and ReplicaSets
    • Runs multiple replicas of your application
    • Suitable for stateless applications
    • Updates trigger a rollout
  • Autoscaling:
    • ReplicaSet works with a set number of pods
    • Horizontal Pod Autoscaler (HPA) enables scaling up and down as needed.
      • Kind: HorizontalPodAutoscaler
      • In the spec you define the scaling attributes
      • Behind the scenes it uses a ReplicaSet to create objects
    • Can be configured based on the desired state of CPU, memory, etc.
  • Rolling Update:
    • ReplicaSets and autoscaling are important to minimize service interruption
    • Rolling updates are a way to roll out app changes in an automated and controlled fashion across your Pods
    • Rolling updates give us a way to publish changes to our applications without noticeable interruption for users
    • Additionally, rolling updates give us a way to roll back changes to the application
    • kubectl rollout status deployments/hello-kubernetes
    • kubectl rollout undo deployments/hello-kubernetes
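A minimal Deployment manifest sketching these ideas, paired with a HorizontalPodAutoscaler targeting it (the app name, image, and limits are illustrative):

```yaml
# Deployment: manages ReplicaSets, which in turn replicate Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3                       # the ReplicaSet keeps 3 identical Pods running
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: myimage:latest     # changing this triggers a rolling update
---
# HorizontalPodAutoscaler: scales the Deployment's ReplicaSet up and down.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-kubernetes
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-kubernetes
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```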
ConfigMaps give us a way to provide configuration data to pods and deployments so we don't have to hard-code that data in the application code. You can also reuse these ConfigMaps and Secrets for multiple deployments.
  • Used to provide configuration for deployments
  • Reusable across deployments
  • Created in a couple of different ways:
    • using string literals
    • Using an existing properties or key=value file
    • Providing a ConfigMap YAML descriptor file. The first two methods can help us create such a file.
  • A ConfigMap is not for sensitive data and has a 1 MB size limit.
$kubectl create configmap my-config --from-literal=MESSAGE="hello world config map"
$kubectl create configmap my-config --from-file=my.properties

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

$kubectl create secret generic api-creds --from-literal=key=mycred
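The ConfigMap and Secret created above can be consumed from a Pod spec, for example as environment variables (the Pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: demo
      image: myimage:latest
      envFrom:
        - configMapRef:
            name: my-config        # exposes MESSAGE as an environment variable
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:          # pulls the value from the Secret, not from app code
              name: api-creds
              key: key
```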

Why do we need services?
Service: responsible for enabling network access to a set of Pods. Each Pod has its own IP address, but Pods are ephemeral and destroyed frequently; each time a Pod is recreated, it gets a new IP.

A Service, by contrast, has a stable IP address and provides load balancing. It is loosely coupled to the Pods and helps route traffic within and outside the cluster. A selector (a set of key/value label pairs) identifies which Pods to forward requests to.

ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service reachable only from within the cluster. This is the default ServiceType. You cannot make requests to the Service (Pods) from outside the cluster.
  • Inter-service communication within the cluster. For example, communication between the front-end and back-end components of your app.
  • Kubernetes creates an Endpoints object with the same name as the Service to keep track of which Pods are the members/endpoints of the Service: $kubectl get endpoints -n myapp
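A sketch of a ClusterIP Service (all names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  type: ClusterIP          # the default; omitting "type" has the same effect
  selector:
    app: my-backend        # forwards traffic to Pods carrying this label
  ports:
    - port: 80             # stable service port
      targetPort: 8080     # container port on the selected Pods
```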



Headless Services: A client wants to communicate with one specific Pod instead of going via the Service, or Pods want to talk directly to a specific Pod (for example, a database master and its replicas). This is mostly used with StatefulSet objects.

A DNS lookup for a regular Service returns a single IP address (the cluster IP). Setting clusterIP to None returns the Pod IP addresses instead; no cluster IP is assigned to the Service.
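A headless Service is the same manifest with clusterIP set to None (names and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None          # headless: DNS lookups return the Pod IPs directly
  selector:
    app: db
  ports:
    - port: 5432
```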



NodePort Services: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>

Node port must be in the range of 30000–32767. Manually allocating a port to the service is optional. If it is undefined, Kubernetes will automatically assign one.

It is less secure than ClusterIP, since it opens a port on every node to the outside world.

Use Cases
  • When you want to enable external connectivity to your service.
  • Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes’ IPs directly.
  • Prefer placing a load balancer above your nodes to guard against node failure.
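A NodePort Service sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080      # optional; must be in 30000-32767 if set, else auto-assigned
```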

LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. The NodePort and ClusterIP Services to which the external load balancer routes are created automatically.

LoadBalancer is an extension of the NodePort service. Do not use a NodePort service for external exposure in production; configure an Ingress or LoadBalancer instead.


ExternalName
  • Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service.
  • You specify these Services with the `spec.externalName` parameter.
  • It maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.
  • No proxying of any kind is established.
Use Cases
  • This is commonly used to create a service within Kubernetes to represent an external datastore like a database that runs externally to Kubernetes.
  • You can use an ExternalName service (as a local service) when Pods in one namespace need to talk to a service in another namespace.
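An ExternalName sketch (the service and DNS names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # DNS lookups for external-db return a CNAME to this name
```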

Ingress: Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. With Ingress, you can easily set up rules for routing traffic without creating a bunch of load balancers or exposing each service on the node.

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.




Host:
  • A valid domain address
  • Map domain name to Node’s IP address which is the entry point
  • Or you can map the domain to an external entry point IP address
Ingress Controller:
  • Evaluates all the rules
  • Manages redirections
  • Entrypoint to cluster
  • Exposes HTTP/HTTPS routes for a cluster  
  • Provides route-based load balancing 
  • Can terminate TLS 
  • Provides name-based virtual hosting
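A sketch of an Ingress resource tying these pieces together (the host, service, and secret names are hypothetical, and an ingress controller must be installed for it to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: myapp.example.com        # the Host described above
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-frontend    # routes matching traffic to this Service
                port:
                  number: 80
  tls:                               # optional: TLS termination at the controller
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
```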


Kubernetes cheatsheet commands:
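A few commonly used kubectl commands as a starting point (resource names are placeholders; these need a running cluster):

```shell
kubectl get pods -n mynamespace          # list pods in a namespace
kubectl describe pod mypod               # detailed pod state and events
kubectl logs -f mypod                    # stream container logs
kubectl exec -it mypod -- /bin/sh        # open a shell in a running container
kubectl apply -f deployment.yaml         # create/update resources declaratively
kubectl delete -f deployment.yaml        # remove them again
kubectl scale deployment myapp --replicas=5
kubectl rollout status deployment/myapp
kubectl rollout undo deployment/myapp
kubectl get events --sort-by=.metadata.creationTimestamp
```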




Friday 29 April 2022

Erlang start nodes and rebar3 package

Create rebar3 app:

Create:  $rebar3 new release <app name>
Compile: $rebar3 compile
Run: $rebar3 shell
Update dependencies: $rebar3 update
Package: $rebar3 as prod release
Run the package: $/<app name>/bin/<app name> foreground
Publishing to hex: $rebar3 hex publish -r <test_repo>
Ref: https://rebar3.readme.io/docs/commands


Start a node in Erlang:


Creating nodes:
    $erl -name server@192.168.0.149 -setcookie mathcluster
    $erl -name client@192.168.0.90 -setcookie mathcluster
Test the connection:
    >net_adm:ping('server@192.168.0.149').
List the nodes: >nodes().

Thursday 7 April 2022

Erlang cheatsheet

Erlang

Erlang is a programming language used to build massively scalable soft real-time systems with requirements on high availability. Some of its uses are in telecoms, banking, e-commerce, computer telephony and instant messaging. Erlang's runtime system has built-in support for concurrency, distribution and fault tolerance.


Installing erlang:

https://www.erlang-solutions.com/downloads/

https://tecadmin.net/install-erlang-on-centos/


Erlang basic syntax:
Compiling and running:
Start the Erlang shell by typing erl at the command line.
Compile: >c(module_name).
Run: >module_name:function().

String concat:

> "erl" ++ "ang".
> string:concat("erl", "ang").

 

Lists:

1> Insert = [2,4,5,6].
     [2,4,5,6]
2> Full = [1, Insert, 16,32].
     [1,[2,4,5,6],16,32]
3> Neat = lists:flatten(Full).
    [1,2,4,5,6,16,32]

Combine/merge:
> A = [1,3,5].
[1,3,5]
> B = [2,4,6].
[2,4,6]
> C = A ++ B.
[1,3,5,2,4,6]
> D = lists:append(A,B).
[1,3,5,2,4,6] 


Split by head and tail:
> [Head | Tail] = [1,2,4].
> Head.
1
> Tail.
[2,4]

Adding to a list:

> Y=[1,2 | [3]].
[1,2,3]
> Z=[1,2 | 3].
[1,2|3]

zip/unzip mixing list and tuples:

> List1=[1,2,4,8,16].
[1,2,4,8,16]
> List2=[a,b,c,d,e].
[a,b,c,d,e]
> TupleList=lists:zip(List1,List2).
[{1,a},{2,b},{4,c},{8,d},{16,e}]
> SeparateLists=lists:unzip(TupleList).
{[1,2,4,8,16],[a,b,c,d,e]}

keystore / keyreplace:

> Initial=[{1,tiger}, {3,bear}, {5,lion}].
[{1,tiger},{3,bear},{5,lion}]
> Second=lists:keystore(7,1,Initial,{7,panther}).
[{1,tiger},{3,bear},{5,lion},{7,panther}]
> Third=lists:keystore(7,1,Second,{7,leopard}).
[{1,tiger},{3,bear},{5,lion},{7,leopard}]
> Fourth=lists:keyreplace(6,1,Third,{6,chipmunk}).
[{1,tiger},{3,bear},{5,lion},{7,leopard}]
> Animal5=lists:keyfind(5,1,Third).
{5,lion}
> Animal6=lists:keyfind(6,1,Third).
false

Higher-order functions, functions that accept other functions as arguments

lists:foreach

1> Print = fun(Value) -> io:format("~p~n",[Value]) end.

#Fun<erl_eval.44.65746770>

2> List = [1,2,3,4,5,6].

[1,2,3,4,5,6]

3> lists:foreach(Print, List).

1

2

3

4

5

6

ok


lists:map

> Square = fun(Value)->Value*Value end.

#Fun<erl_eval.44.65746770>

> lists:map(Square, List).

[1,4,9,16,25,36]

 

OR

> [Square(Value) || Value <- List].

> [Value * Value || Value <- List].

> List2 = [10 | List].                           

[10,1,2,3,4,5,6]

In lists, the difference between | and || is: | is the cons operator used to build or split a list, while || introduces a list comprehension (a map/filter operation).

 

Filter:

> LessThanFive = fun(Value)-> (Value<5) and (Value>=0) end. 

#Fun<erl_eval.44.65746770>

> lists:filter(LessThanFive,List).                         

[1,2,3,4]

> [Value || Value <- List, Value<5, Value>=0].

[1,2,3,4]


> Weather = [{toronto, rain}, {montreal, storms}, {london, fog},  

> {paris, sun}, {boston, fog}, {vancouver, snow}].              

[{toronto,rain},

 {montreal,storms},

 {london,fog},

 {paris,sun},

 {boston,fog},

 {vancouver,snow}]

> 

> FoggyPlaces = [X || {X, fog} <- Weather].

[london,boston]


Maps:

Module function    Map syntax
maps:new/0         #{}
maps:put/3         Map#{Key => Val}
maps:update/3      Map#{Key := Val}
maps:get/2         Map#{Key}
maps:find/2        #{Key := Val} = Map


>Pets = #{"dog" => "al", "fish" => "dory"}.
>Pets#{"cat" => "tiger"}.
   #{"cat" => "tiger","dog" => "al","fish" => "dory"}
> Pets#{"dog":="Pok"}.
  #{"dog" => "Pok","fish" => "dory"}
> maps:get("dog", Pets).
"al"
> #{"fish" := CatName, "dog" := DogName} = Pets.
#{"dog" => "al","fish" => "dory"}

Process:

Sending message to a process using !

> self() ! test1.

The test1 message will be waiting in the mailbox.

You can retrieve it with a receive block:

> receive X -> X end.

 

Spawning a process:

> Pid=spawn(bounce,report,[]).

> Pid ! "Hello World".

 

receive XXX -> XXX end.

exit(whereis(registered pid), kill).

flush().

erlang:process_info(self(), messages)

spawn(?MODULE, <function name>, <argument list>)


Registering a process:

register(<name to be registered>, <process id>).

whereis/1 finds a registered process.

unregister/1 unregisters a process.


OpenTelemetry span context (setting a trace id on the current span):

Parent = ?current_span_ctx,

io:format("Parent: ~p~n",[Parent]),

Span2Ctx = Parent#span_ctx{trace_id=TraceId},

?set_current_span(Span2Ctx),


Record:

Records let you create data structures that use names, rather than order, to connect with data.

For example

-record(planemo, {name, gravity, diameter, distance_from_sun}).

-record(tower, {location, height=20, planemo=earth, name}).


The command rr (for read records) lets you bring this into the shell:

 

1> rr("records.hrl").

[planemo,tower]

 

> Tower1=#tower{location="NYC", height=241, name="Woolworth Building"}.

 

Accessing value from a record

> Tower1#tower.planemo.

 

Pattern matching to extract value:

 

#tower{location=L5, height=H5} = Tower1.

 

Update record value:

 

Tower1a=Tower1#tower{height=512}.


ETS:

Erlang Term Storage (ETS) is a simple but powerful in-memory collection store.

 

Creating and Populating a Table

PlanemoTable=ets:new(planemos, [named_table, {keypos, #planemo.name}]),

ets:info(PlanemoTable).

 

To see what’s in the table

 

ets:tab2list(<table name, e.g. planemos>).

Lookup: ets:lookup(planemos,eris).

 

11> Result=hd(ets:lookup(planemos,eris)).

#planemo{name = eris,gravity = 0.8,diameter = 2400,

         distance_from_sun = 10210.0}

12> Result#planemo.gravity.

 

Overwriting Values

ets:insert(planemos, #planemo{ name=mercury,

 gravity=3.9, diameter=4878, distance_from_sun=57.9 }).

true

  

ets:fun2ms 

ets:match

ets:select

ets:delete

ets:first

ets:next

ets:last


Mnesia:

mnesia:create_schema([node()]).

mnesia:start().

mnesia:table_info/2


mnesia:transaction(fun() -> mnesia:read(planemo,neptune) end).

mnesia:first

mnesia:next

 

(If you want to change where Mnesia stores data, you can start Erlang with some extra options: erl -mnesia dir " path ". The path will be the location Mnesia keeps any disk-based storage.)

Apart from the setup, the key thing to note is that all of the writes are contained in a fun that is then passed to mnesia:transaction to be executed as a transaction. Mnesia will restart the transaction if there is other activity blocking it, so the code may get executed repeatedly before the transaction happens. Because of this, do not include any calls that create side effects to the function you’ll be passing to mnesia:transaction, and don’t try to catch exceptions on Mnesia functions within a transaction. If your function calls mnesia:abort/1 (probably because some condition for executing it wasn’t met), the transaction will be rolled back, returning a tuple beginning with aborted instead of atomic.


Query list:

 

mnesia:transaction(

  fun() ->

    qlc:e(

      qlc:q( [X || X <- mnesia:table(planemo)] )

    )

  end

) 

mnesia:transaction(

  fun() ->

    qlc:e(

      qlc:q( [{X#planemo.name, X#planemo.gravity} ||

               X <- mnesia:table(planemo),

               X#planemo.gravity < 9.8] )

    )

  end

)

mnesia:transaction(

  fun() ->

    qlc:e(

      qlc:q( [X || X <- mnesia:table(planemo),

                   X#planemo.gravity < 9.8] )

    )

  end

)


OTP

OTP is a set of Erlang libraries and design principles providing middleware to develop these systems. It includes its own distributed database, applications to interface with other languages, and debugging and release-handling tools.

 

OTP Gen server:

  • Generic server-specific behaviour
  • Supports server-like components
  • Business logic lives in an app-specific callback module

OTP formalizes those activities, and a few more, into a set of behaviors (or behaviours—this was originally created with British spelling). The most common behaviors are gen_server (generic server) and supervisor, though gen_fsm (finite state machine) and gen_event are also available. The application behavior lets you package your OTP code into a single runnable (and updatable) system.
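As a sketch of the gen_server behaviour described above (the module name and API are invented for illustration; the business logic lives entirely in the callback functions):

```erlang
%% A minimal gen_server sketch: a counter server.
-module(counter_server).
-behaviour(gen_server).

%% API
-export([start_link/0, increment/0, value/0]).
%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

increment() ->
    gen_server:cast(?MODULE, increment).   % asynchronous: no reply

value() ->
    gen_server:call(?MODULE, value).       % synchronous: waits for a reply

%% Callbacks: the app-specific business logic lives here.
init([]) ->
    {ok, 0}.                               % initial state is the count 0

handle_call(value, _From, Count) ->
    {reply, Count, Count};
handle_call(_Other, _From, Count) ->
    {reply, {error, unknown}, Count}.

handle_cast(increment, Count) ->
    {noreply, Count + 1}.
```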

 

OTP Building blocks:

  • process spawning
  • sending and receiving messages
  • process linking and monitoring

Application Behavior

  • application provides an entry point for an OTP-compliant app
  • Allows multiple Erlang components to be combined into a system
  • Erlang apps can declare their dependencies on other apps

Async Events

  • send_event: asynchronously sends an event into a gen_fsm process
  • calls Module:StateName/2, where StateName is a function named for the current state
  • send_all_state_event: asynchronously sends an event into a gen_fsm process; calls Module:handle_event, which allows handling an event regardless of the current state. The current state name is passed into handle_event.

Sync Events

  • sync_send_event: synchronously sends an event into a gen_fsm process; calls Module:StateName/3, where StateName is a function named for the current state
  • sync_send_all_state_event: synchronously sends an event into a gen_fsm process
  • calls Module:handle_sync_event, which allows handling an event regardless of the current state. The current state name is passed into handle_sync_event.

{ok, Pid} = drop_sup:start_link().

1> c(drop_app).

{ok,drop_app}

2> code:add_path("ebin/").

true

3> application:load(drop).

ok

4> application:loaded_applications().

[{kernel,"ERTS  CXC 138 10","2.15.2"},

 {drop,"Dropping objects from towers","0.0.1"},

 {stdlib,"ERTS  CXC 138 10","1.18.2"}]

5> application:start(drop).

ok

6> gen_server:call(drop, 60).

{ok,34.292856398964496}


REBAR3 

Rebar3 is an Erlang tool that makes it easy to create, develop, and release Erlang libraries, applications, and systems in a repeatable manner. If you come from Java, it is like Maven. Hex is the repository for publishing Erlang libraries and for searching and downloading them.

For installing rebar3 

https://github.com/erlang/rebar3

https://rebar3.readme.io/docs/getting-started

 

PATH=$PATH:$HOME/bin

export PATH=$PATH:~/.cache/rebar3/bin

 

Compile with rebar3: $rebar3 compile

Running with rebar3: $rebar3 shell

Publishing to hex: $ rebar3 hex publish -r test_repo


Erlang commands:

q()

Quits the shell and the Erlang runtime.

c(file)

Compiles the specified Erlang file.

b()

Displays all variable bindings.

f()

Clears all variable bindings.

f(X)

Clears specified variable binding.

h()

Prints the history list of commands.

e(N)

Repeats the command on line N.

v(N)

The return value of line N.

catch_exception(boolean)

Sets how strict the shell will be in passing errors.

rd(Name, Definition)

Defines a record type Name with contents specified by Definition.

rr(File)

Defines record types based on the contents of File.

rf()

Clears all record definitions. Can also clear specific definitions.

rl()

Lists all current record definitions.

pwd()

Gets the present working directory.

ls()

Lists files at the current location.


RESOURCES:

Dockerizing:

Git 2.0 installation:

Install java 11: yum -y install  java-11-openjdk java-11-openjdk-devel