Blog Archives

08: Spark writing RDDs to multiple text files & HAR to solve small files issue

We know that the following code snippet in Spark writes all the JavaRDD elements to a single text file.
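The snippet itself is in the full post; a minimal sketch of that idea (an assumption, not the post's exact code) coalesces to one partition so saveAsTextFile produces a single part file:

```java
// Minimal sketch: one partition => one output part file under the directory.
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SingleFileWrite {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SingleFileWrite");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Illustrative employee-history records (assumed data, not from the post).
            JavaRDD<String> employeeHistory = sc.parallelize(
                    Arrays.asList("emp1,2015,London", "emp2,2016,Sydney"));

            // coalesce(1) forces a single partition, so saveAsTextFile writes one part file.
            employeeHistory.coalesce(1).saveAsTextFile("hdfs:///tmp/employee-history");
        }
    }
}
```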

What if you want to write each employee's history to a separate file?

Step 1: Create a JavaPairRDD from the JavaRDD.

Step 2: Create a MultipleOutputFormat, which allows you to write
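As a hedged sketch of Steps 1 and 2 (not necessarily the post's exact code), MultipleTextOutputFormat, the text-oriented subclass of MultipleOutputFormat, lets the key decide the output file name:

```java
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import scala.Tuple2;

public class PerEmployeeWriter {

    // Step 2: route each record to a file named after its key (the employee id).
    public static class KeyBasedOutputFormat extends MultipleTextOutputFormat<String, String> {
        @Override
        protected String generateFileNameForKeyValue(String key, String value, String name) {
            return key; // one output file per employee id
        }
    }

    // Step 1: key the history lines by employee id (assumes "id,rest-of-record" CSV lines).
    public static void writePerEmployee(JavaRDD<String> employeeHistory, String outputDir) {
        JavaPairRDD<String, String> byEmployee = employeeHistory.mapToPair(
                line -> new Tuple2<>(line.split(",")[0], line));

        byEmployee.saveAsHadoopFile(outputDir,
                String.class, String.class, KeyBasedOutputFormat.class);
    }
}
```

In practice you may also want to partition by the employee key first so that all records for one employee land in the same task before the files are written.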

Read more ›

Posted in Spark Tutorials

07: spark-xml to split & read very large XML files

Processing very large XML files can be tricky, as they cannot be processed line by line in parallel the way CSV files can. Each XML record has to be kept intact between its matching start and end entity tags, and if the tags are distributed in parts…...
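The full walk-through is members-only. As a hedged sketch of the idea (assuming the com.databricks:spark-xml package is on the classpath and that each record sits inside a <record> element, which is an assumption about the data):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadLargeXml {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("ReadLargeXml").getOrCreate();

        // spark-xml splits the file on the start/end tags of each row element,
        // so individual records can be parsed in parallel.
        Dataset<Row> records = spark.read()
                .format("com.databricks.spark.xml")
                .option("rowTag", "record")                // assumed record tag
                .load("hdfs:///data/large-file.xml");      // assumed path

        records.printSchema();
        records.show(10, false);

        spark.stop();
    }
}
```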

Members Only Content
Posted in member-paid, Spark Tutorials

01B: Spark tutorial – writing to HDFS from Spark using Hadoop API

Step 1: The “pom.xml” that defines the dependencies for Spark & Hadoop APIs.

Step 2: The Spark job that writes numbers 1 to 10 to 10 different files on HDFS.

Step 3: Build the “jar” file.

Step 4: Run the “spark-submit” job.

Step 5: You can…...
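The remaining steps are members-only. As a hedged sketch (not the tutorial's code) of the kind of job Step 2 describes, using the Hadoop FileSystem API from inside a Spark foreach; the output path and the reliance on the cluster's default fs.defaultFS are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class WriteNumbersToHdfs {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WriteNumbersToHdfs");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            List<Integer> numbers = new ArrayList<>();
            for (int i = 1; i <= 10; i++) {
                numbers.add(i);
            }

            // 10 partitions, so each number is handled by its own task.
            sc.parallelize(numbers, 10).foreach(number -> {
                Configuration hadoopConf = new Configuration(); // picks up fs.defaultFS on the cluster
                try (FileSystem fs = FileSystem.newInstance(hadoopConf);
                     FSDataOutputStream out =
                             fs.create(new Path("/tmp/numbers/number-" + number + ".txt"))) {
                    // Each element is written to its own HDFS file via the Hadoop FileSystem API.
                    out.writeBytes(String.valueOf(number));
                }
            });
        }
    }
}
```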

Members Only Content
Posted in member-paid, Spark Tutorials

06: Spark Streaming with Flume Avro Sink Tutorial

This extends Running a Simple Spark Job in local & cluster modes and Apache Flume with JMS source (Websphere MQ) and HDFS sink. In this tutorial a Flume agent ingests data from a source like JMS, HDFS, etc. and passes it to an “Avro Sink” that pushes data…...
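The full tutorial is members-only. A minimal sketch of the receiving side, using the spark-streaming-flume module's FlumeUtils (the host, port, and 10-second batch interval below are assumptions):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.flume.FlumeUtils;
import org.apache.spark.streaming.flume.SparkFlumeEvent;

public class FlumeAvroSinkStreaming {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("FlumeAvroSinkStreaming");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Flume's Avro sink pushes events to this host/port (the push-based approach).
        JavaReceiverInputDStream<SparkFlumeEvent> flumeStream =
                FlumeUtils.createStream(jssc, "localhost", 41414);

        flumeStream.count().print();  // how many Flume events arrived per batch

        jssc.start();
        jssc.awaitTermination();
    }
}
```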

Members Only Content
Posted in member-paid, Spark Tutorials

05: Spark SQL & CSV with DataFrame Tutorial

Step 1: Create a simple maven project.

Step 2: Import the “simple-spark” maven project into Eclipse or the IDE of your choice.

Step 3: Modify the pom.xml file to include 1) the relevant Spark libraries and 2) the shade plugin to create a single jar (i.e. an uber jar) with the Spark and other…...
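The rest is members-only. As a hedged sketch of where those steps lead (the file path and column names are assumptions, not taken from the post), reading a CSV into a DataFrame and querying it with Spark SQL looks like this:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CsvSparkSql {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("CsvSparkSql").getOrCreate();

        // Read the CSV with a header row and let Spark infer the column types.
        Dataset<Row> employees = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("hdfs:///data/employees.csv");   // assumed path

        // Register as a temp view so it can be queried with Spark SQL.
        employees.createOrReplaceTempView("employees");
        spark.sql("SELECT department, COUNT(*) AS cnt FROM employees GROUP BY department")
             .show();

        spark.stop();
    }
}
```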

Members Only Content
Posted in member-paid, Spark Tutorials

04: Running a Simple Spark Job in local & cluster modes

Step 1: Create a simple maven Spark project using “-B” for non-interactive mode.

Step 2: Import the maven project “simple-spark” into Eclipse.

Step 3: The pom.xml file should have the relevant dependency jars as shown below.

Step 4: Write the simple Spark job “SimpleSparkJob.java” that prints numbers from…...
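The full listing is members-only. A minimal sketch of a “SimpleSparkJob” along the lines Step 4 describes (an assumption, not the tutorial's code); the master is left to spark-submit so the same jar runs in both local and cluster modes:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SimpleSparkJob {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SimpleSparkJob");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            List<Integer> numbers = new ArrayList<>();
            for (int i = 1; i <= 10; i++) {
                numbers.add(i);
            }

            // collect() brings the results back to the driver so they print in the driver log.
            sc.parallelize(numbers).collect().forEach(System.out::println);
        }
    }
}
```

The same jar can then be submitted with spark-submit --master local[2] for local mode or --master yarn for cluster mode, which is the contrast the post's title refers to.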

Members Only Content
Posted in member-paid, Spark Tutorials

03: Spark tutorial – reading a Sequence File from HDFS

This extends Spark submit – reading a file from HDFS. A SequenceFile is a flat file consisting of binary key/value pairs, and it is extensively used in MapReduce as an input/output format. Like CSV, SequenceFiles do not store metadata, hence the only schema evolution option is appending new fields to the end…...
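The full tutorial is members-only. As a hedged illustration (the key/value types and the path are assumptions), reading a SequenceFile into a JavaPairRDD typically looks like this:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class ReadSequenceFile {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("ReadSequenceFile");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaPairRDD<Text, IntWritable> pairs =
                    sc.sequenceFile("hdfs:///data/events.seq", Text.class, IntWritable.class);

            // Convert the Hadoop Writables to plain Java types before collecting,
            // because the reader reuses the Writable instances.
            pairs.mapToPair(kv -> new Tuple2<>(kv._1().toString(), kv._2().get()))
                 .take(10)
                 .forEach(pair -> System.out.println(pair._1() + " -> " + pair._2()));
        }
    }
}
```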

Members Only Content
Posted in member-paid, Spark Tutorials

02: ♥ Spark tutorial – reading a file from HDFS

This extends Spark tutorial – writing a file from a local file system to HDFS. This tutorial assumes that you have set up Cloudera as per the “cloudera quickstart vm tutorial installation” YouTube videos, which you can find by searching Google or YouTube. You can install it on VMWare (non-commercial use) or
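The setup details continue in the full post. As a rough sketch of what the post's title describes (the HDFS URI below matches common Cloudera QuickStart VM defaults and is an assumption):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ReadFromHdfs {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("ReadFromHdfs");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Read the file line by line from HDFS (adjust host, port, and path to your setup).
            JavaRDD<String> lines =
                    sc.textFile("hdfs://quickstart.cloudera:8020/user/cloudera/input.txt");

            System.out.println("Line count: " + lines.count());
            lines.take(5).forEach(System.out::println);
        }
    }
}
```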

Read more ›

Posted in Spark Tutorials
Page 1 of 2
