Blog Archives

Spring Batch industrial strength tutorial – part 2

This assumes that you have read the Spring Batch beginner tutorial & industrial strength part 1. This is the final part. Step 1: The annotated Java classes are referenced directly due to the following line in the batch-context.xml.
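The line referred to is typically a component-scan entry; a minimal sketch, in which the base-package value is a hypothetical example, not from the original post:

```xml
<!-- batch-context.xml: picks up the @Component/@Service annotated classes;
     the base-package shown here is a hypothetical example -->
<context:component-scan base-package="com.example.batch" />
```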

Step 2: You define the batch job as shown below. It can…

Read more ...

00: Create a simple REST API with Spring Boot

Q1. What is the key benefit of using Spring Boot?
A1. The key benefit is that you can “build a production ready application from scratch in a matter of minutes”.

Over the years since its inception, Spring has grown to be very complex in terms of the amount of configuration an application requires. This is where Spring Boot comes in handy: it simplifies the configuration with opinionated defaults & includes the libraries you need to get started quickly.

1) Spring jar dependency management and versioning are simplified, as demonstrated below.
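A minimal sketch of what this looks like in a Maven POM; the version number here is an assumed example, not taken from the original post:

```xml
<!-- Inheriting from the starter parent pins consistent versions for all
     Spring Boot managed dependencies; the version is an assumed example -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.5</version>
</parent>

<dependencies>
    <!-- A starter pulls in a curated, version-compatible set of jars;
         no version tag is needed because the parent manages it -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
```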

Spring Boot’s main benefit is its ability to configure resources based on what it finds in your classpath. If your Maven POM includes JPA dependencies and a PostgreSQL driver, then Spring Boot will setup a persistence unit based on PostgreSQL. If you’ve added a web dependency, then you get Spring MVC configured with sensible defaults.

2) Spring Boot runs on an embedded HTTP server. It embeds Tomcat by default, but gives you a way to opt for the Jetty server if you wish.

Step 1: Go to to create a skeleton Spring Boot application. Add “Spring Web“, “Spring Boot Actuator“, “Spring HATEOAS” and “Spring REST Docs“.

Generate & download the project skeleton artefacts by clicking on the “Generate” button. Copy the “” to the projects folder and unzip it.… Read more ...

00: ⏯ MySQL database beginner video tutorial

Step-by-step MySQL video tutorial to get started with the MySQL database. Any decent self-taught project requires a database to store & retrieve data.

Related Links

1. Getting started with MySQL database beginner tutorial.

SQL Interview Q&As

1. 14 FAQ SQL Interview Questions & Answers.

2. 9 SQL scenarios based interview questions answered. … Read more ...

01 : Spring Cloud with Eureka Discovery Server Tutorial

Q1. What is Spring Cloud? A1. Spring Boot is widely used to develop microservices. As many organisations deploy these services on clouds like AWS, you need to take care of various aspects to make them cloud native, hence Spring Cloud was created. Spring Cloud is an implementation of…

Read more ...

01: Getting started with Zookeeper tutorial

Installing ZooKeeper on Windows

Step 1: Download ZooKeeper from . At the time of writing this was zookeeper-3.4.11.tar.gz.

Step 2: Using 7-Zip on Windows, unpack the gzipped tar file into a folder, e.g. c:\development\zookeeper-3.4.11. You can see “zkServer.cmd” in the bin folder for Windows & “” for Unix.

Starting the Zookeeper Server

Step 3: Copy conf/zoo_sample.cfg to conf/zoo.cfg; its contents should look like the following for the standalone mode.
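A minimal sketch of a standalone zoo.cfg; the dataDir path is a hypothetical example and should point at a writable folder on your machine:

```
# Basic time unit in milliseconds used by ZooKeeper for heartbeats
tickTime=2000
# Directory where ZooKeeper stores its in-memory database snapshots
# (hypothetical example path)
dataDir=c:/development/zookeeper-3.4.11/data
# Port on which clients connect
clientPort=2181
```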

In production, you should run ZooKeeper in replicated mode. A replicated group of servers in the same application is called a quorum, and in replicated mode, all servers in the quorum have copies of the same configuration file. The file is similar to the one used in standalone mode, but with a few differences. Here is an example:

Read more ...

01: 14 Unix must-know interview questions & answers

Q1. How do you remove the Control-M characters from a file?
A1. Control-M is the carriage return character; the ^M on the keyboard is the equivalent of \r. In a file that originated from DOS/Windows, \r\n marks the end of each line, whereas in Unix it is just \n.

So, if you created a file in DOS/Windows and copied it to a Unix machine, you need to convert the line endings from \r\n to \n, i.e. remove the \r.

Using the sed command that replaces Control-M with nothing

Note: The ^M is typed on the command line with ctrl+v and ctrl+M
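A minimal sketch of the sed command described above; the file names are hypothetical examples, and with GNU sed the escape \r can be used in place of a literal ^M:

```shell
# Replace the trailing carriage return (\r, a.k.a. Control-M / ^M) on each
# line with nothing. On the command line a literal ^M is typed as ctrl+v
# then ctrl+M; GNU sed also accepts the \r escape shown here.
sed 's/\r$//' dosfile.txt > unixfile.txt
```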

You can also use a vi editor and type :%s/^M//g to remove the control-M characters.

Q2. How will you search for a property named “inbox” within a number of .properties files including all sub-folders?
A2. Using the find and grep commands. You can use the xargs command or the -exec option; xargs is faster.
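The two variants can be sketched as follows; both search every .properties file under the current directory, including sub-folders:

```shell
# -exec runs one grep per file that find matches:
find . -name '*.properties' -exec grep -l 'inbox' {} \;

# xargs batches file names into far fewer grep invocations,
# which is why it is usually faster on large trees:
find . -name '*.properties' -print0 | xargs -0 grep -l 'inbox'
```

Using -print0 with xargs -0 keeps file names with spaces intact.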

Read more ...

01: Apache Flume with JMS source (Websphere MQ) and HDFS sink

Apache Flume is used in the Hadoop ecosystem for ingesting data. In this example, let’s ingest data from Websphere MQ. Step 1: Apache Flume is config driven via a hierarchical config file, e.g. flumeWebsphereMQQueue.conf. You need to define the “source”, the “channel”, and the “sink”. There are different types of sources (e.g. JMS…

Read more ...

01: Apache Hadoop HDFS Tutorial

Step 1: Download the latest version of “Apache Hadoop common” from using wget, curl or a browser. This tutorial uses “”.

Step 2: You can set the Hadoop environment variables by appending the following commands to the ~/.bashrc file.
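A minimal sketch of typical ~/.bashrc entries; the install path is a hypothetical example and must match where you actually unpacked Hadoop:

```shell
# Root of the unpacked Hadoop distribution (hypothetical example path)
export HADOOP_HOME=/usr/local/hadoop
# Point the component homes at the same install
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
# Put the hadoop and hdfs launch scripts on the PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```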

You can run this in a Unix command prompt as

Step 3: You can verify if Hadoop has been setup properly with

Step 4: The Hadoop environment file in $HADOOP_HOME/etc/hadoop/ has the JAVA_HOME setting.… Read more ...

01: Apache Kafka example with Java – getting started tutorial

Apache Kafka with Java getting started tutorial demonstrates how quickly you can get started with Kafka using Docker.

Step 1: Make sure the Docker engine is installed on your computer, e.g. on Mac OS via $ brew cask install docker, or via the equivalent installer on Windows.

Step 2: Start the Docker engine on your operating system.

Kafka services on Docker

Step 3: Create the docker-compose.yml file below to run your Kafka, ZooKeeper & Apache Kafka Cluster Visualization (AKHQ) services. The images for these services are sourced from Docker Hub.
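A minimal sketch of such a docker-compose.yml; the image names, versions and environment values here are assumptions based on commonly used Docker Hub images, not taken from the original post:

```yaml
version: '3'
services:
  zookeeper:
    image: zookeeper:3.8        # assumed image/version
    ports:
      - "2181:2181"

  kafka:
    image: confluentinc/cp-kafka:7.3.0   # assumed image/version
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      # Single-broker setup, so the internal topic needs only 1 replica
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  akhq:
    image: tchiotludo/akhq      # assumed image
    depends_on:
      - kafka
    ports:
      - "8080:8080"
    environment:
      AKHQ_CONFIGURATION: |
        akhq:
          connections:
            docker-kafka:
              properties:
                bootstrap.servers: "kafka:9092"
```

With this sketch, `docker-compose up` would start all three services and the AKHQ UI would be reachable on port 8080.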

Read more ...

01: Databricks getting started – PySpark, Shell, and SQL

Step 1:
Sign up to the Databricks community edition – fill in the details; you can leave your mobile number blank. Select “COMMUNITY EDITION” => “GET STARTED“.

If you have a Cloud account then you can use it.

Step 2: Check your email and click the “link” in the email & reset your password.

Step 3: Login to Databricks notebook:

Step 4: Create a CLUSTER and it will take a few minutes to come up. This cluster will go down after 2 hours.

Step 5: Select “DATA“, and upload a file named “employee.csv”.

Step 6: “Create Table With UI” as shown below:

Note: Please check the “First row is header” check box on the LHS so that column names appear from the file.

Click on “Create Table“.

Step 7: Click on the “databricks” icon on the LHS menu, and then “Create a Blank Notebook“.

Spark in Python (i.e. PySpark)

Since we created the notebook as “python“, we don’t have to do “%python” as it is the default language.… Read more ...


