Hands-On Beginner’s Guide on Big Data and Hadoop 3

English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 3h 02m | 1.07 GB

Effectively store, manage, and analyze large datasets with HDFS, Sqoop, YARN, and MapReduce

Do you struggle to store and handle big data sets? This course will teach you to handle big data sets smoothly using Hadoop 3.

The course starts by covering the basic commands big data developers use on a daily basis. Then you'll focus on the HDFS architecture and the command lines a developer uses frequently. Next, you'll use Flume to import data from other ecosystems into the Hadoop ecosystem, making that data available for storage and analysis with MapReduce. You'll also learn to import and export data between an RDBMS and HDFS using Sqoop. Then you'll learn about Apache Pig, which is used to process the data brought in with Flume and Sqoop; here you'll also learn to load, transform, and store data in a Pig relation. Finally, you'll dive into Hive functionality and learn to load, update, and delete content in Hive.
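The Sqoop transfers described above boil down to two CLI invocations. The sketch below is illustrative only: the JDBC URL, database, user, and table names are hypothetical placeholders, and the commands assume a running Hadoop cluster with Sqoop installed.

```shell
# Import a table from a MySQL database into HDFS
# (connection details and table names are hypothetical)
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username dbuser -P \
  --table orders \
  --target-dir /user/hadoop/orders \
  --num-mappers 1

# Export processed results from HDFS back to the RDBMS
sqoop export \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username dbuser -P \
  --table orders_summary \
  --export-dir /user/hadoop/orders_summary
```

The `-P` flag prompts for the database password interactively rather than exposing it on the command line.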

By the end of the course, you’ll have gained enough knowledge to work with big data using Hadoop. So, grab the course and handle big data sets with ease.

This hands-on course gets you started with HDFS to store data efficiently, Sqoop to transfer bulk data, and YARN to manage cluster resources. You will gain practical experience analyzing and processing big data sets with MapReduce functions.
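The HDFS-to-MapReduce workflow the course walks through can be sketched with the standard `hdfs dfs` file commands plus the bundled word-count example job. Directory names and the local file are hypothetical; the commands assume a Hadoop 3 installation with `HADOOP_HOME` set.

```shell
# Create an HDFS directory and copy a local file into it
hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -put data.txt /user/hadoop/input/

# List and inspect the stored file
hdfs dfs -ls /user/hadoop/input
hdfs dfs -cat /user/hadoop/input/data.txt

# Run the bundled MapReduce word-count example over the input
hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /user/hadoop/input /user/hadoop/output

# View the job's output
hdfs dfs -cat /user/hadoop/output/part-r-00000
```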

What You Will Learn

  • Focus on the Hadoop ecosystem to understand big data and how to manage it
  • Learn the basic commands used by big data developers and the structure of the Unix OS
  • Understand the HDFS architecture and command line to deal with HDFS files and directories
  • Import data using Flume and analyze it using MapReduce
  • Import and export data between an RDBMS and HDFS using Sqoop
  • Use the Pig Latin scripting language for data transformation operations
  • Deal with stored data and learn to load, update, and delete data using Hive
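The Flume workflow in the list above (start an agent, configure a memory channel, shut it down) centers on a properties file. Here is a minimal sketch: the agent, source, channel, and sink names are hypothetical, and it uses a netcat source with a logger sink purely for demonstration.

```shell
# Write a minimal Flume agent configuration (names are hypothetical)
cat > agent1.conf <<'EOF'
agent1.sources  = src1
agent1.channels = ch1
agent1.sinks    = sink1

# Netcat source listening on a local port
agent1.sources.src1.type     = netcat
agent1.sources.src1.bind     = localhost
agent1.sources.src1.port     = 44444
agent1.sources.src1.channels = ch1

# In-memory channel buffering events between source and sink
agent1.channels.ch1.type     = memory
agent1.channels.ch1.capacity = 1000

# Logger sink prints events to the agent's log
agent1.sinks.sink1.type    = logger
agent1.sinks.sink1.channel = ch1
EOF

# Start the agent in the foreground, logging to the console
flume-ng agent --conf ./conf --conf-file agent1.conf --name agent1 \
  -Dflume.root.logger=INFO,console
```

Stopping the agent is simply a matter of terminating the foreground process (Ctrl-C).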

Table of Contents

01 The Course Overview
02 Introduction to Unix OS
03 Unix Commands
04 Unix Commands (Continued)
05 HDFS Overview
06 HDFS Architecture
07 HDFS Commands
08 HDFS Commands (Continued)
09 Introduction to Flume
10 How to Start a Flume Agent
11 How to Configure a Flume Memory Channel
12 How to Close a Flume Agent
13 Introduction to Apache Sqoop
14 Sqoop Import RDBMS to HDFS
15 Sqoop Import Using SQL Query
16 Sqoop Import RDBMS to Hive
17 Sqoop Export HDFS to RDBMS
18 Introduction to Pig
19 Load Data in Pig Relation
20 Data Transformation Using Pig
21 Export Data from Pig Relation
22 JOIN Operation Using Pig
23 Introduction to Hive
24 Load Data into Hive Table
25 Load Data into Hive Table (Continued)
26 Update/Delete Contents of Hive Table
27 Optimization in Hive
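The Hive videos (24-27) cover loading and modifying table contents. The sketch below shows the kind of HiveQL involved, run via the `hive -e` CLI; the table name and values are hypothetical, and it assumes a Hive installation with ACID transactions enabled, since UPDATE and DELETE require a transactional table.

```shell
hive -e "
  -- UPDATE/DELETE in Hive require a transactional (ACID) table
  CREATE TABLE orders (id INT, amount DOUBLE)
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');

  INSERT INTO orders VALUES (1, 10.0), (2, 0.0);

  UPDATE orders SET amount = 5.0 WHERE id = 2;
  DELETE FROM orders WHERE amount = 0.0;
"
```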