Blog Archives

06: Learn how to access Hive from Spark via SparkSQL & Dataframes by example

These Hadoop tutorials assume that you have installed Cloudera QuickStart, which bundles the Hadoop ecosystem components like HDFS, Spark, Hive, HBase, YARN, etc. This example extends Learn Hive to write to and read from AVRO & Parquet files by examples to access the Hive metastore via Spark SQL. … Read more...
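The gist of the post is illustrated by the minimal sketch below, assuming a Spark build with Hive support (e.g. on the Cloudera QuickStart VM) and a hypothetical Hive table default.customers:

```python
# Minimal sketch: query a Hive table from Spark SQL.
# Assumes Hive support is on the classpath and the hypothetical table
# default.customers already exists in the Hive metastore.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-from-spark")
         .enableHiveSupport()          # lets Spark SQL talk to the Hive metastore
         .getOrCreate())

# Query the Hive table with Spark SQL; the result is a DataFrame
df = spark.sql("SELECT * FROM default.customers")
df.show(5)

# The same table can also be loaded via the DataFrame API
spark.table("default.customers").printSchema()
```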



10 Spark SQL Interview Q&As

Q1. What is Spark SQL?
A1. Apache Spark SQL is a module for structured data processing in Spark. Spark SQL integrates relational processing (i.e. SQL) with Spark's functional programming in Scala, Java, etc., letting you weave SQL queries with DataFrame/Dataset based transformations. It provides support for a variety of data sources such as Hive, Avro, Parquet, JSON, and JDBC.
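A minimal sketch of mixing SQL with DataFrame transformations, assuming an existing SparkSession named spark and a hypothetical JSON file /tmp/employees.json:

```python
# Assumes an existing SparkSession `spark` and a hypothetical JSON data source.
df = spark.read.json("/tmp/employees.json")      # DataFrame from a JSON data source

# Register the DataFrame as a temporary view so it can be queried with SQL
df.createOrReplaceTempView("employees")

# Weave a SQL query with DataFrame transformations
high_earners = (spark.sql("SELECT name, salary FROM employees")
                .filter("salary > 100000")
                .orderBy("salary", ascending=False))
high_earners.show()
```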

Q2.

Read more ›



17: Spark interview Q&As with coding examples in pyspark (i.e. python)

Q01. How will you create a Spark context? A01. Q02. How will you create a Dataframe by reading a file from AWS S3 bucket? A02. Q03. How will you create a Dataframe by reading a table in a database? … Read more ›...
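A condensed sketch of Q01 to Q03 in pyspark, assuming valid AWS credentials are configured and that the S3 path, JDBC URL, table and login details below are hypothetical placeholders:

```python
# Assumes the hadoop-aws connector and the relevant JDBC driver are on the classpath.
from pyspark.sql import SparkSession

# Q01: create a SparkSession (the modern entry point, which wraps SparkContext)
spark = SparkSession.builder.appName("pyspark-examples").getOrCreate()
sc = spark.sparkContext                       # the underlying SparkContext

# Q02: DataFrame from a CSV file in an S3 bucket (hypothetical path)
s3_df = spark.read.option("header", "true").csv("s3a://my-bucket/data/trades.csv")

# Q03: DataFrame from a database table via JDBC (hypothetical connection details)
jdbc_df = (spark.read.format("jdbc")
           .option("url", "jdbc:postgresql://localhost:5432/mydb")
           .option("dbtable", "public.accounts")
           .option("user", "spark")
           .option("password", "secret")
           .load())
```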



Apache Spark SQL join types interview Q&As

Q1. What are the different Spark SQL join types?
A1. Spark SQL supports several join types: inner join, left/right outer join, full outer join, left semi-join, left anti-join and self-join, as sketched below.
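A minimal sketch of the main join types with two hypothetical DataFrames, assuming an existing SparkSession named spark:

```python
# Two small hypothetical tables: employees and departments.
emp = spark.createDataFrame(
    [(1, "Ann", 10), (2, "Bob", 20), (3, "Cat", 99)],
    ["emp_id", "name", "dept_id"])
dept = spark.createDataFrame(
    [(10, "Sales"), (20, "IT"), (30, "HR")],
    ["dept_id", "dept_name"])

emp.join(dept, "dept_id", "inner").show()        # only matching dept_ids
emp.join(dept, "dept_id", "left_outer").show()   # all employees, nulls where no match
emp.join(dept, "dept_id", "full_outer").show()   # all rows from both sides
emp.join(dept, "dept_id", "left_semi").show()    # employees that have a matching dept
emp.join(dept, "dept_id", "left_anti").show()    # employees with no matching dept
```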

Q2. Given the below tables, can you give examples of the above join types?

Read more ›


