Home Navigation

Friday, 10 July 2020

Spring bean life cycle overview

Spring beans are components that are created and managed by the Spring IoC container. When a bean is created, some initialization may be required to make it usable, and when a bean is no longer needed and is removed from the container, some cleanup may be required.

We can define beans in three ways
  • XML config (load XML definitions) -> instantiation & constructor injection -> property injection
  • Annotation config (@Component scanning) -> instantiation & @Autowired constructor injection -> injection of @Autowired methods & fields
  • Java config (read @Bean method signatures) -> call @Bean method implementations

We can divide the life cycle of a bean as below

Callback Interfaces
  • InitializingBean.afterPropertiesSet() called after properties are set
  • DisposableBean.destroy() called during bean destruction in shutdown
Life Cycle Annotations
  • @PostConstruct annotated methods are called after the bean has been constructed, but before it is returned to the requesting object
  • @PreDestroy annotated methods are called just before the bean is destroyed by the container
Bean Post Processors
  • Gives you a means to tap into the Spring context life cycle and interact with beans as they are processed
  • postProcessBeforeInitialization - Called before bean initialization method
  • postProcessAfterInitialization - Called after bean initialization
‘Aware’ Interfaces
  • Spring has over 14 ‘Aware’ interfaces.
  • These are used to access the Spring Framework infrastructure
  • These are largely used within the framework
  • Rarely used by Spring developers


Note: BeanFactoryPostProcessor implementations are called during startup of the Spring context, after all bean definitions have been loaded, while BeanPostProcessor implementations are called when the Spring IoC container instantiates a bean.

In other words, BeanFactoryPostProcessor kicks in at the point in the container life cycle when no bean has been created yet but the bean definitions have already been parsed, while BeanPostProcessor comes into play after bean creation.
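The callback ordering above can be sketched in plain Java. This is a simulation of what the container does, not Spring code; the class is hypothetical and the method names simply mirror the real callbacks:

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleDemo {
    static List<String> calls = new ArrayList<>();

    // Hypothetical bean whose methods stand in for the real callbacks
    static class MyBean {
        void postConstruct()      { calls.add("@PostConstruct"); }      // annotation callback
        void afterPropertiesSet() { calls.add("afterPropertiesSet"); }  // InitializingBean
        void preDestroy()         { calls.add("@PreDestroy"); }         // annotation callback
        void destroy()            { calls.add("destroy"); }             // DisposableBean
    }

    // Simulates the order in which the container invokes the callbacks
    static List<String> run() {
        MyBean bean = new MyBean();                    // instantiation + injection happen first
        calls.add("postProcessBeforeInitialization");  // BeanPostProcessor
        bean.postConstruct();                          // @PostConstruct
        bean.afterPropertiesSet();                     // InitializingBean
        calls.add("postProcessAfterInitialization");   // BeanPostProcessor
        // ... bean is in use ...
        bean.preDestroy();                             // @PreDestroy on shutdown
        bean.destroy();                                // DisposableBean
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```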

Thursday, 9 July 2020

Spring mvc life cycle summary



  • The request first passes through various servlet filters that apply to all requests, e.g. application-provided filters, authentication filters, etc.
  • The DispatcherServlet consults the HandlerMapping.
  • The HandlerMapping resolves the controller to invoke. Usually org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping is used. This class reads the @RequestMapping annotations on controllers and selects the controller method whose mapping matches the URL as the handler.
  • The HandlerAdapter calls the appropriate controller. Usually org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter is used; it invokes the handler (controller) method selected by the HandlerMapping.
  • HandlerInterceptors can perform actions before handling, after handling, or after completion (when the view is rendered) of a request.
  • The controller executes the business logic.
  • The controller returns the model and the logical view name.
  • The DispatcherServlet sends the view name to a ViewResolver to find the actual View to invoke. The ViewResolver resolves the view according to its implementation and configuration; for JSP, org.springframework.web.servlet.view.InternalResourceViewResolver is normally used. Other view resolvers include FreeMarkerViewResolver, TilesViewResolver, ThymeleafViewResolver, BeanNameViewResolver, etc.
  • Finally, the DispatcherServlet passes the model object to the View to render the result.
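The flow above can be sketched as a tiny plain-Java front-controller simulation. This is not actual Spring MVC code; the URL, handler, and view names are made up for illustration:

```java
import java.util.Map;
import java.util.function.Function;

public class DispatchDemo {
    // HandlerMapping sketch: URL -> handler (a controller method);
    // the controller returns a logical view name
    static Map<String, Function<String, String>> handlerMapping =
        Map.of("/greeting", request -> "greetingView");

    // ViewResolver + render sketch: logical view name -> output
    static String resolveView(String viewName, String model) {
        return "<html>" + viewName + ":" + model + "</html>";
    }

    // DispatcherServlet sketch: the front controller ties the steps together
    static String dispatch(String url) {
        Function<String, String> handler = handlerMapping.get(url); // 1. consult HandlerMapping
        String viewName = handler.apply(url);                       // 2. HandlerAdapter invokes controller
        return resolveView(viewName, "model");                      // 3. ViewResolver resolves and renders
    }

    public static void main(String[] args) {
        System.out.println(dispatch("/greeting"));
    }
}
```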


Thursday, 30 April 2020

SOLID Principle

  • The principles are from Robert "Uncle Bob" Martin
  • Michael Feathers is credited with coming up with the SOLID acronym
    • S = Single Responsibility Principle
    • O = Open/Closed Principle
    • L = Liskov Substitution principle
    • I = Interface Segregation Principle
    • D = Dependency Inversion Principle

Why?


  • OOP does not always lead to quality software
  • Poor dependency management leads to code that is brittle, fragile, and hard to change
  • Proper dependency management leads to quality code that is easy to maintain.
  • The 5 principles focus on dependency management
The one thing you can always count on in software engineering is CHANGE. No matter how well you design an application, over time it must grow and change or it will die.

Single Responsibility Principle

  • Every Class should have a single responsibility.
  • There should never be more than one reason for a class to change.
  • Your classes should be small. No more than a screen full of code.
  • Avoid ‘god’ classes.
  • Split big classes into smaller classes.
You can avoid these problems by asking a simple question before you make any changes: What is the responsibility of your class/component?

If your answer includes the word “and”, you’re most likely breaking the single responsibility principle. Then it’s better to take a step back and rethink your current approach.

Example: see the GitHub link for bad and good examples; the explanation is in the class comments.


Real world example:
  • The Java Persistence API (JPA) specification, which defines a standardized way to manage data persisted in a relational database using object-relational mapping.
  • Spring Data Repository
  • Logging
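A minimal sketch of the principle, using hypothetical Report classes: formatting and persistence are split out so each class has exactly one reason to change.

```java
import java.util.ArrayList;
import java.util.List;

public class SrpDemo {
    // Holds report data only
    static class Report {
        private final String body;
        Report(String body) { this.body = body; }
        String getBody() { return body; }
    }

    // Responsible only for formatting
    static class ReportFormatter {
        static String toHtml(Report r) { return "<html>" + r.getBody() + "</html>"; }
    }

    // Responsible only for persistence (an in-memory stand-in for a real store)
    static class ReportRepository {
        private final List<String> store = new ArrayList<>();
        void save(Report r) { store.add(r.getBody()); }
        int count() { return store.size(); }
    }

    public static void main(String[] args) {
        Report r = new Report("sales");
        System.out.println(ReportFormatter.toHtml(r));
        ReportRepository repo = new ReportRepository();
        repo.save(r);
    }
}
```

If Report also formatted and saved itself, a change to the storage format and a change to the HTML layout would both force edits to the same class.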

Open/Closed Principle

  • Your classes should be open for extension but closed for modification
  • You should be able to extend a class's behavior without modifying it.
  • Use private variables with getters and setters - ONLY when you need them.
  • Use abstract/interface base classes
  • The design should be polymorphic, allowing different implementations that you can easily substitute without changing the code that uses them
Example: see the GitHub link for bad and good examples; the explanation is in the class comments.
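A minimal sketch with hypothetical Shape types: totalArea is closed for modification, while new shapes extend the design without touching existing code.

```java
import java.util.Arrays;
import java.util.List;

public class OcpDemo {
    // The abstraction the rest of the code depends on
    interface Shape { double area(); }

    // Extensions: add new shapes without editing totalArea
    static class Rect implements Shape {
        private final double w, h;
        Rect(double w, double h) { this.w = w; this.h = h; }
        public double area() { return w * h; }
    }
    static class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    // Closed for modification: works for any current or future Shape
    static double totalArea(List<Shape> shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(Arrays.asList(new Rect(2, 3), new Rect(1, 1))));
    }
}
```

The anti-pattern would be a totalArea full of if/else branches on concrete types, which must be edited every time a shape is added.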


Liskov Substitution Principle

  • Introduced by Barbara Liskov in 1988
  • Objects in a program would be replaceable with instances of their subtypes WITHOUT altering the correctness of the program.
  • Violations will often fail the “Is a” test.
  • A Square “Is a” Rectangle
  • However, a Rectangle “Is Not” a Square
  • Violations can usually be recognized by a method that does nothing, or that can't be implemented.
  • The solution to these problems is a correct inheritance hierarchy / correct interfaces
  • If it looks like a Duck, Quacks like a Duck but needs batteries. You probably Have the wrong abstraction.
Example: see the GitHub link for bad and good examples; the explanation is in the class comments.
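The Rectangle/Square point above, sketched in Java: the Square subclass silently breaks the contract that callers of Rectangle rely on.

```java
public class LspDemo {
    static class Rectangle {
        protected int w, h;
        void setW(int w) { this.w = w; }
        void setH(int h) { this.h = h; }
        int area() { return w * h; }
    }

    // Violation: Square keeps its sides equal, so it is not substitutable
    // for Rectangle in code written against Rectangle's contract.
    static class Square extends Rectangle {
        @Override void setW(int w) { this.w = w; this.h = w; }
        @Override void setH(int h) { this.w = h; this.h = h; }
    }

    // A caller that trusts the Rectangle contract: expects 5 * 4 = 20
    static int resize(Rectangle r) {
        r.setW(5);
        r.setH(4);
        return r.area();
    }

    public static void main(String[] args) {
        System.out.println(resize(new Rectangle())); // 20, as expected
        System.out.println(resize(new Square()));    // 16 -- LSP violated
    }
}
```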


Interface Segregation Principle

  • A client should never be forced to implement an interface it does not use, or be obliged to depend on methods it does not call
  • Make fine grained interfaces that are client specific
  • Many client specific interfaces are better than one “general purpose” interface
  • Keep your components focused and minimize dependencies between them
  • Changing one method in a class should not affect classes that don't depend on it
  • Notice the relationship to the Single Responsibility Principle?
  • i.e. avoid ‘god’ interfaces
Example: see the GitHub link for bad and good examples; the explanation is in the class comments.
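A minimal sketch with hypothetical device interfaces: instead of one "general purpose" MultiFunctionDevice interface, each client depends only on the small interface it actually uses.

```java
public class IspDemo {
    // Fine-grained, client-specific interfaces
    interface Printer { String print(String doc); }
    interface Scanner { String scan(); }

    // A simple device implements only what it supports -- it is never
    // forced to stub out a scan() it cannot perform
    static class BasicPrinter implements Printer {
        public String print(String doc) { return "printed:" + doc; }
    }

    // A multifunction device opts into both interfaces
    static class MultiFunction implements Printer, Scanner {
        public String print(String doc) { return "printed:" + doc; }
        public String scan() { return "scanned"; }
    }

    public static void main(String[] args) {
        System.out.println(new BasicPrinter().print("a"));
        System.out.println(new MultiFunction().scan());
    }
}
```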

Dependency Inversion Principle

  • Abstractions should not depend upon details
  • Details should depend upon abstractions
  • Important that higher level and lower level objects depend on the same abstract interaction
  • Able to change an implementation easily without altering the high level code.
  • This is not the same as Dependency Injection - which is how objects obtain dependent objects
Example: see the GitHub link for bad and good examples; the explanation is in the class comments.
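A minimal sketch with a hypothetical MessageSender abstraction: the high-level Notifier and the low-level senders both depend on the same interface, so implementations can be swapped without touching the high-level code.

```java
public class DipDemo {
    // The shared abstraction both levels depend on
    interface MessageSender { String send(String msg); }

    // Low-level details depend on the abstraction
    static class EmailSender implements MessageSender {
        public String send(String msg) { return "email:" + msg; }
    }
    static class SmsSender implements MessageSender {
        public String send(String msg) { return "sms:" + msg; }
    }

    // High-level policy: depends on MessageSender, never on a concrete sender
    static class Notifier {
        private final MessageSender sender;
        Notifier(MessageSender sender) { this.sender = sender; } // dependency injected
        String notifyUser(String msg) { return sender.send(msg); }
    }

    public static void main(String[] args) {
        System.out.println(new Notifier(new EmailSender()).notifyUser("hi"));
        System.out.println(new Notifier(new SmsSender()).notifyUser("hi"));
    }
}
```

Note the distinction the bullet makes: the principle is about the direction of dependencies; the constructor injection here is merely how the Notifier obtains its dependency.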

Some of other design principles
  • Encapsulate what varies (identify the aspects of your application that vary and separate them from what stays the same)
  • Program to an interface, not an implementation
  • Favor composition over inheritance (HAS-A can be better than IS-A)
  • Composition gives you a lot more flexibility. Not only does it let you encapsulate a family of algorithms into their own set of classes, but it also lets you change behavior at runtime, as long as the object you are composing with implements the correct behavior interface.
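The composition point can be sketched with the classic duck example (names are illustrative): the quacking behavior is a composed object that can be swapped at runtime, instead of being baked into a subclass.

```java
public class CompositionDemo {
    // The encapsulated family of algorithms
    interface QuackBehavior { String quack(); }

    static class LoudQuack implements QuackBehavior { public String quack() { return "QUACK!"; } }
    static class Mute      implements QuackBehavior { public String quack() { return "..."; } }

    // Duck HAS-A QuackBehavior rather than IS-A LoudDuck/MuteDuck
    static class Duck {
        private QuackBehavior behavior;
        Duck(QuackBehavior b) { this.behavior = b; }
        void setBehavior(QuackBehavior b) { this.behavior = b; } // swap at runtime
        String performQuack() { return behavior.quack(); }
    }

    public static void main(String[] args) {
        Duck d = new Duck(new LoudQuack());
        System.out.println(d.performQuack());
        d.setBehavior(new Mute());            // behavior changed without a new subclass
        System.out.println(d.performQuack());
    }
}
```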

Friday, 7 February 2020

Docker in a nutshell

Docker is a platform or ecosystem around creating and running containers.

Docker has two components 
             Docker CLI: the Docker client, the tool we use to issue commands.
             Docker server: the Docker daemon, the tool responsible for creating images, running
             containers, etc.

What is a container?
  • Two containers do not share their filesystem; there is no sharing of data between them.
  • Namespaces: segmenting/isolating resources per process or group of processes
  • Control groups (cgroups): limiting the amount of resources used per process



Life cycle of a container:

It has two steps: create a container from an image, then start that container.
  • Create a container:  docker create <image name>
  • Start a container:   docker start -a <container id> // -a option means give me the output
Once a container has been created and started, if you want to run it again with docker start, you cannot override the default startup command.

Image: a single file with all the dependencies and configuration required to run a program; a filesystem snapshot with a very specific set of files. When we run docker, an image turns into a container.


Docker some basic commands:

docker run = docker create + docker start

$docker create <image name>
$ docker start -a <container id>          
// -a option means give me the output

$docker ps                                          
// to list all the currently running container
$docker ps --all                                   
// list all containers ever created (like a history)

$docker system prune                         
// remove all stopped containers, dangling images, networks, and the build cache

$docker logs <container id>                 
// getting all the logs that container generated

$docker stop  <container id> 
// stop a container; sends SIGTERM so the process can stop and clean up

$docker kill  <container id> 
// sends SIGKILL; kills the process right now

$docker rm <container id>
// remove a stopped container (use docker rmi to remove an image)

$docker attach <containerId> 
// attaches to the container's stdin, stdout, stderr

$docker image ls 
// to see list of all docker images on your system

Execute an additional command in a running container

$docker exec -it <containerid> <command> 
// the -it flags attach your terminal's input to the container (-i) and allocate a TTY (-t)
// gets into the container and runs the command there, since you cannot run it from the host computer

Example:
first command: $docker run redis
second command: $docker exec -it <container-id> redis-cli

How to open terminal/shell inside a container
$docker exec -it <container-id> sh
$docker run -it busybox sh 
// you can open terminal at the time of starting your container.

$docker run -d redis
// -d option runs the container in the background (detached mode)

Creating a Docker File:
  • Specify a base image
  • Run some commands to install additional programs
  • Specify a command to run on container startup
FROM -> RUN -> CMD

The naming convention for a docker file is "Dockerfile"
A sample Dockerfile

#Use a base docker image
FROM node:alpine

#download and install a dependency
WORKDIR /usr/app
COPY ./package.json ./
RUN npm install
COPY ./ ./

#start up command
CMD ["npm", "start"]


Two steps to run a Dockerfile:

  • Building the Dockerfile to create an image
  • Running the Image to run a container

$docker build . 
// it will output an id
$docker run <build image id>

Building docker image with a name:

$docker build -t <docker id, user name in docker hub>/<name of the image>:latest .
// -t indicates tag or name
$docker run  <docker id, user name in docker hub>/<name of the image>

With docker commit create image from a container:
$docker run -it alpine sh
$apk add --update redis

$docker ps // get id of the container
$docker commit -c 'CMD ["redis-server"]' <container id>
$docker run <new image id>

Push docker image to docker hub:
$docker push <tag name of the image>:version

If you run docker in detached mode, you can inspect the container's details with
$docker inspect <container id>


Problems you might face when working with a Docker script for the first time:
  • Make sure to use the right base Docker image
  • Make sure you saved the file with the name Dockerfile
  • When you run the docker build command, make sure to use the . operator at the end of the command
  • The container filesystem is completely isolated, so make sure to copy your working directory code into the container filesystem. 
            use 
            WORKDIR /usr/app 
            and COPY ./ ./
  • Once you run a web application using Docker, you will not have direct access to it using the port you assigned, as it is running in its own container. A port mapping needs to be set so that you can access the web application. 
$docker run -p 8080:8080 <imageid>
$docker run -it <docker id>/<image name> sh
//you can look inside the container

What is docker compose?
  • Separate CLI that gets installed along with Docker
  • Used to start up multiple docker containers at the same time
  • Automates some of the long winded arguments we are passing to 'docker run'
docker run <myimage>                   =  docker-compose up
docker build . + docker run <myimage>  =  docker-compose up --build

Launch in background: $docker-compose up -d
// -d for running in background
Stop containers:         $docker-compose down

To check container status, run this command inside the directory containing the docker-compose file:
$docker-compose ps

Building from a Dockerfile with a custom file name:
$docker build -f Dockerfile.dev .

How do you change source code and have the change reflected inside the Docker container without building the image every time? A Docker volume is an option.

Docker volume: instead of copying, put a reference from the container filesystem to the local machine filesystem.

$docker run -p 3000:3000 -v ${pwd}:/app <image id>  // Windows (PowerShell)
$docker run -p 3000:3000 -v $(pwd):/app <image id>  // macOS/Linux

Example:
$docker run -p 3000:3000 -v $(pwd):/app <image id>
$docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image id>
// -v /app/node_modules (no colon) means don't map this folder to the local machine

Equivalent docker-compose file of the above:

version: '3'
services:
    web:
        build:
            context: .
            dockerfile: Dockerfile.dev
        ports: 
            - "3000:3000"
        volumes: 
            - /app/node_modules
            - .:/app

How to restart a container if some error occurs inside it:

Restart policies:

  • "no": never attempt to restart this container if it stops or crashes
  • always: if this container stops for any reason always attempt to restart it
  • on-failure: only restart if the container stops with an error code
  • unless-stopped: always restart unless we forcibly stop it.
Add under the service node in docker-compose.yml: restart: always / "no" / on-failure / unless-stopped

Tuesday, 26 November 2019

Implement PWA Service worker with google WorkBox

What is Workbox?

From the Google Workbox site: Workbox is a library that bakes in a set of best practices and removes the boilerplate every developer writes when working with service workers.
  • Precaching
  • Runtime caching
  • Strategies
  • Request routing
  • Background sync
  • Helpful debugging
  • Greater flexibility and feature set than sw-precache and sw-toolbox
To create service worker with workbox follow the below steps

Step 1:
Create a React app with your preferred tool, e.g. create-react-app (via npx or yarn)

Step 2:
        install workbox cli
        $npm install workbox-cli --global

Step 3:
       Go to the react project directory and then run the below commands

       $npm run build // it will compile and create the build folder

       $workbox wizard

       Then follow the options it asks. (If you are not sure what to choose, pick the default option and hit enter.)

       You will be presented the below options

? What is the root of your web app (i.e. which directory do you deploy)? (Use ar
row keys)
> build/
  public/
  src/
  ──────────────
  Manually enter path

? Which file types would you like to precache? (Press <space> to select, <a> to
toggle all, <i> to invert selection)
>(*) json
 (*) ico
 (*) html
 (*) png
 (*) js
 (*) txt
 (*) css
(Move up and down to reveal more choices)
  
? Where would you like your service worker file to be saved? (build\sw.js)  
? Where would you like to save these configuration options? (workbox-config.js)

Step 4:
       To generate service worker, run

       $workbox generateSW workbox-config.js

Step 5:
Create a service worker file in the /src directory named workbox-sw.js and add the below contents

importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");

const precacheManifest = [];


console.log("[Workbox] ######################## Installing ############################")

if (workbox) {
    console.log('[Workbox] Yay! Workbox is loaded 🎉');
} else {
    console.log('[Workbox] Boo! Workbox did not load 😬');
}

console.log("[Workbox] #################################################################")


workbox.precaching.precacheAndRoute(precacheManifest);



Step 6:
Modify the workbox-config.js located in the root directory of the project


module.exports = {
  "globDirectory": "build/",
  "globPatterns": [
    "**/*.{json,ico,html,js,css}"
  ],
  "swDest": "build/sw.js",
  "swSrc": "src/workbox-sw.js",
  "injectionPointRegexp": /(const precacheManifest = )\[\](;)/
};


Step 7:
      Register service worker, edit index.html file in public/ directory and add the below scripts

<script>
    console.log('%NODE_ENV%');

    const isProduction = '%NODE_ENV%' === 'production';
    if (isProduction) {
      console.log('This is a production environment :-|');
    } else {
      console.log('This is a development environment o-o');
    }

    if (isProduction && 'serviceWorker' in navigator) {
      navigator.serviceWorker.register('sw.js')
        .then(registration => console.log('[ service worker ] - Service Worker registered'))
        .catch(err => console.log('[ service worker ] - SW registration failed', err));
    }
  </script>


Step 8:
Modify package.json and add the below script line start-sw


"scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject",
    "start-sw": "react-scripts build && workbox copyLibraries build/ && workbox injectManifest workbox-config.js"
  }


Step 9:
Run the service worker script, which builds the project, copies the Workbox libraries, and injects the precache manifest.

$npm run start-sw

Step 10:
Run the compiled and generated project (if you don't have serve installed, run
        $npm install serve -g )

$serve -s build

Step 11:
Open your project at http://localhost:5000, turn off the network, and reload the page to see the magic: it works offline.

To see all cached contents, go to the Application tab in your browser's developer tools.





Additional: 
Add your caching strategy in src/workbox-sw.js , for reference how to add strategy follow the below links

https://developers.google.com/web/tools/workbox/modules/workbox-strategies
https://developers.google.com/web/tools/workbox/guides/common-recipes

 A sample workbox-sw.js with a GraphQL implementation

importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");
const precacheManifest = [];

// Cache names used by the routes below
const IMAGE_CACHE = 'image-cache';
const GOOGLE_FONT_STYLE_CACHE = 'google-fonts-stylesheets';
const GOOGLE_FONT_WEBAPI_CACHE = 'google-fonts-webfonts';

// The custom GraphQL cache further below also relies on CryptoJS (for MD5) and
// idb-keyval being loaded via importScripts, plus an idb-keyval store, e.g.:
// const store = new idbKeyval.Store('graphql-cache', 'responses');
console.log("[Workbox] ############## Installing #############################")
if (workbox) {
    console.log('[Workbox] Yay! Workbox is loaded 🎉');
} else {
    console.log('[Workbox] Boo! Workbox did not load 😬');
}
console.log("[Workbox] ########################################################")
workbox.precaching.precacheAndRoute(precacheManifest);

// You might want to use a cache-first strategy for images
workbox.routing.registerRoute(
    /\.(?:png|gif|jpg|jpeg|webp|svg)$/,
    new workbox.strategies.CacheFirst({
        cacheName: IMAGE_CACHE,
        plugins: [
            new workbox.expiration.Plugin({
                maxEntries: 60,
                maxAgeSeconds: 30 * 24 * 60 * 60, // 30 Days
            }),
        ],
    })
);

// Cache the Google Fonts stylesheets with a stale-while-revalidate strategy.
workbox.routing.registerRoute(
    /^https:\/\/fonts\.googleapis\.com/,
    new workbox.strategies.StaleWhileRevalidate({
        cacheName: GOOGLE_FONT_STYLE_CACHE,
    })
);

// Cache the underlying font files with a cache-first strategy for 1 year.
workbox.routing.registerRoute(
    /^https:\/\/fonts\.gstatic\.com/,
    new workbox.strategies.CacheFirst({
        cacheName: GOOGLE_FONT_WEBAPI_CACHE,
        plugins: [
            new workbox.cacheableResponse.Plugin({
                statuses: [0, 200],
            }),
            new workbox.expiration.Plugin({
                maxAgeSeconds: 60 * 60 * 24 * 365,
                maxEntries: 30,
            }),
        ],
    })
);

// broadcast channel to load new updates
self.addEventListener('install', (event) => {
    const updateChannel = new BroadcastChannel('sw-precache-channel');
    updateChannel.postMessage({ promptToReload: true });

    updateChannel.onmessage = (message) => {
        if(message.data.skipWaiting){
            self.skipWaiting();
        }
    };
});

// Workbox with custom handler to use IndexedDB for cache.

workbox.routing.registerRoute(
    new RegExp('/graphql(/)?'),
    async ({ event }) => {
        return staleWhileRevalidate(event);
    },
    'POST'
);

// Return the cached response when possible, fetch new results from the server
// in the background, and update the cache.
self.addEventListener('fetch', async (event) => {
    if (event.request.method === 'POST') {
        event.respondWith(staleWhileRevalidate(event));
    }
    // TODO: Handles other types of requests.
});

async function staleWhileRevalidate(event) {
    let cachedResponse = await getCache(event.request.clone());
    let fetchPromise = fetch(event.request.clone())
        .then((response) => {
            setCache(event.request.clone(), response.clone());
            return response;
        })
        .catch((err) => {
            console.error(err);
        });
    return cachedResponse ? Promise.resolve(cachedResponse) : fetchPromise;
}

async function serializeResponse(response) {
    let serializedHeaders = {};
    for (var entry of response.headers.entries()) {
        serializedHeaders[entry[0]] = entry[1];
    }
    let serialized = {
        headers: serializedHeaders,
        status: response.status,
        statusText: response.statusText
    };
    serialized.body = await response.json();
    return serialized;
}

async function setCache(request, response) {
    let body = await request.json();
    let id = CryptoJS.MD5(body.query).toString();

    var entry = {
        query: body.query,
        response: await serializeResponse(response),
        timestamp: Date.now()
    };
    idbKeyval.set(id, entry, store);
}

async function getCache(request) {
    let data;
    try {
        let body = await request.json();
        let id = CryptoJS.MD5(body.query).toString();
        data = await idbKeyval.get(id, store);
        if (!data) return null;

        // Check cache max age.
        let cacheControl = request.headers.get('Cache-Control');
        let maxAge = cacheControl ? parseInt(cacheControl.split('=')[1]) : 3600;
        if (Date.now() - data.timestamp > maxAge * 1000) {
            console.log(`Cache expired. Load from API endpoint.`);
            return null;
        }

        console.log(`Load response from cache.`);
        return new Response(JSON.stringify(data.response.body), data.response);
    } catch (err) {
        return null;
    }
}

async function getPostKey(request) {
    let body = await request.json();
    return JSON.stringify(body);
}

Monday, 25 November 2019

React manage different environment variable with .env file



A React web application's NODE_ENV environment variable has two values: production or development. You cannot modify NODE_ENV; this is an intentional setting to protect the production environment from accidentally being built with development settings.

  "scripts": {
    "start": "react-scripts start", // the value of NODE_ENV is development
    "build": "react-scripts build", // the value of NODE_ENV is production
...
}


.env: Default.
.env.local: Local overrides. This file is loaded for all environments except test.
.env.development, .env.test, .env.staging, .env.production: Environment-specific settings.
.env.development.local, .env.test.local, .env.production.local: Local overrides of environment-specific settings.

.env will be used by default
.env.development will be used when running npm start
.env.production will be used when running npm run build

To create different environment variables and use them in React code, create the below files in the root directory of the project

filename: .env
contents:  REACT_APP_PAGE_TITLE = "My React app application"

filename: .env.development
contents:  REACT_APP_MY_API = "https://development-my-api.com/"
  REACT_APP_ENV=dev

filename: .env.staging
contents:  REACT_APP_MY_API = "https://staging-my-api.com/"
  REACT_APP_ENV=staging

filename: .env.production
contents:  REACT_APP_MY_API = "https://prod-my-api.com/"
  REACT_APP_ENV=prod

install the below package:

$ npm install env-cmd --save
or
$ yarn add env-cmd


Modify the scripts in package.json so they look like the below

"scripts": {
    "start": "react-scripts start", // the value of NODE_ENV is development
    "build": "react-scripts build", // the value of NODE_ENV is production
    "build:staging": "env-cmd -f .env.staging react-scripts build", // the value of NODE_ENV is still production
...
}

To test the application if it works, add the below tags in your app.js

<div>
      <h1>{process.env.REACT_APP_PAGE_TITLE}</h1> 
      <small>You are running this application in <b>{process.env.REACT_APP_ENV}</b> mode.</small>
      <p>{process.env.REACT_APP_MY_API}</p>
  </div>


Run application
development:
npm start


Staging:
npm run build:staging // build the application for staging
serve -s build // run the application compiled for staging


production:
npm run build // build the application for production
serve -s build // run the application compiled for production


Tuesday, 14 May 2019

Getting started with react native on Mac

What is React Native?
React Native is an open-source mobile application framework created by Facebook. It is used to develop applications for Android, iOS and UWP by enabling developers to use React along with native platform capabilities. React Native lets you build mobile apps using only JavaScript. It uses the same design as React, letting you compose a rich mobile UI using declarative components.

What is Electrode?
Electrode Native is a platform for integrating React Native into your existing apps. It is built on top of React Native and other tools such as Yarn and CodePush, and does not contain any code modifications to these tools and frameworks. Electrode provides the ability to integrate multiple different React Native applications into a single native app.


Installation:
install xcode ( https://developer.apple.com/xcode/)
install homebrew ( if it is not installed)  go to https://brew.sh/ to get instruction
NODE/npm:$ brew install node 
Watchman:$ brew install watchman
react native:$ npm install -g react-native-cli

Create react native project:
react-native init <project name>

Code Editor:
Atom (https://atom.io/ )
Open the Atom code editor: atom .
Debug window: press command + D in the simulator
A debugger; statement is equivalent to a breakpoint
Visual studio code (https://code.visualstudio.com/)

Configure Editor compiler:
ATOM:
ESLint (parses JavaScript code and reports errors):
install lint globally:$ npm install -g eslint

Install linter-eslint plugin in ATOM code editor, Menu -> Preferences -> Install ( search for linter-eslint )
go to your project directory, then run the command to install the lint config:
       $npm install --save-dev eslint-config-rallycoding
under your project create a file: .eslintrc
and copy the below content and save it.
{
"extends": "rallycoding"
}

VSCODE:
$npm install --save-dev eslint-config-rallycoding
{
"extends": "rallycoding"
}

Running react native: 
IOS: react-native run-ios
Android: react-native run-android

Troubleshooting after running the command:

Problem: xcrun: error: unable to find utility "instrument" xcode
Solutions: You need to launch XCode and agree to the terms first. Then go to Preferences > Locations and you'll see a select tag for Command Line Tools. Click this select box and choose the version of XCode you'll be using.
After this you can go back to the command line and run react-native run-ios

Problem: Unable to resolve module “events” React-Native
solutions: npm install events --save

React library:
Axios: Axios is a Javascript library used to make HTTP requests from node.js or XMLHttpRequests from the browser that also supports the ES6 Promise API
npm install --save axios
Flexbox layout

Components some key elements:
props for communication from parent to child
state for a component's internal record keeping; only used in class-based components, not in functional components
don't use this.state = ; use the this.setState method
Class based component and functional based component
Only the 'root' component uses 'AppRegistry'
component nesting

React vs react native:
React:
- knows how a component should behave
- knows how to take a bunch of components and make them work together

React-native:
- knows how to take the output from a component and place it on the screen
- provides default core components (Image, Text)

Some useful commands:
Clearing metro cache (restarting metro bundler react native):
if npm cache clean --force doesn't work run the below commands
rm -rf $TMPDIR/metro-* && rm -rf $TMPDIR/react-* && rm -rf $TMPDIR/haste-*
watchman watch-del-all

react-native run-android -- --reset-cache
On Windows: go to C:\Users\<Username>\AppData\Local\Temp and delete metro-cache