Big Data Hadoop Course

Course Overview

1. Why Big Data Hadoop?

Through this Big Data & Hadoop training course, participants will gain all the necessary skills in this area. They will learn how data is stored in Hadoop, understand the basic and advanced concepts of MapReduce and YARN, and work with the tools used by Big Data analysts such as Pig, Hive, Flume, Sqoop and Oozie.
Whether you are an established professional or a student, the course is designed to give you the skills to understand core Big Data concepts and the tools needed to make sound decisions and set the direction for your project, company, or product line.

2. What are the requirements?

This is a self-paced course delivered through recorded videos, so you can learn in your own time. Faculty support is available through forums, email or calls to clear your doubts.

3. What am I going to get from this course?

  • Learn the Big Data & Hadoop software, how to write Hadoop programs and become employable in data analytics roles.
  • Get prepared for the certification exam.
  • 24 hours of in-depth training in core Hadoop along with hands-on practice using Big Data software.
  • Get access to certification exam oriented course material and 4 full-length mock exams to test yourself.
  • Also get Edvancer's own certificate at the end of the course.

Modules Overview

  • What is Big Data?
  • Characteristics of big data
  • Traditional data management systems and their limitations
  • What is Hadoop?
  • Why is Hadoop used?
  • The Hadoop eco-system
  • Big data/Hadoop use cases
  • HDFS internals and use cases
  • HDFS Daemons
  • Files and blocks
  • Namenode memory concerns
  • Secondary namenode
  • HDFS access options
  • Installing and configuring Hadoop
  • Hadoop daemons
  • Basic Hadoop commands
  • HDFS API
  • How to use the Configuration class
  • Using HDFS in MapReduce and programmatically (see the HDFS API sketch after this list)
  • HDFS permission and security
  • Additional HDFS tasks
  • HDFS web-interface
  • Hands-on exercise
  • Cloud computing overview
  • Learn the fundamentals of cloud computing and how Hadoop can be installed on a server cluster in the cloud
  • SaaS/PaaS/IaaS
  • Characteristics of cloud computing
  • Cluster configurations
  • Configuring Masters and Slaves
  • MapReduce basics
  • Functional programming concepts
  • List processing
  • Mapping and reducing lists
  • Putting them together in MapReduce
  • Word Count example application (see the WordCount sketch after this list)
  • Understanding the driver, mapper and reducer
  • Additional MapReduce functionality
  • Fault tolerance
  • Chaining, listing and killing jobs
  • Understanding counters
  • Distributed cache
  • Understand input and output formats
  • Learn advanced MapReduce algorithms to manage and manipulate data, including unstructured data
  • Understand combiners and partitioners (see the custom partitioner sketch after this list)
  • Joins & filtering using Pig
  • Group & co-group
  • Schema merging and redefining functions
  • Pig functions
  • Understanding Hive
  • Using Hive command line interface
  • Data types and file formats
  • Basic DDL operations
  • Schema design
  • Hands-on examples
  • HBase overview, architecture & installation
  • HBase admin: test
  • HBase data access
  • Overview of Zookeeper
  • Sqoop overview and installation
  • Importing and exporting data in Sqoop
  • Hands-on exercise
  • Overview of Oozie and Flume
  • Oozie features and challenges
  • How does Flume work
  • Connecting Flume with HDFS
  • YARN
  • HDFS Federation
  • Authentication and high availability in Hadoop
  • Designing structures for POC
  • Developing MapReduce code
  • Push data using Flume into HDFS
  • Run MapReduce code
  • Analyse the output of the project
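
To give a flavour of the HDFS API module, here is a minimal sketch of writing and reading a file through the Configuration and FileSystem classes. It assumes the Hadoop client libraries are on the classpath and that fs.defaultFS points to a reachable cluster; the class name HdfsApiDemo and the path /tmp/hdfs-api-demo.txt are illustrative, not part of the course material.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    // Illustrative class name; any HDFS path can be substituted for the one below.
    public class HdfsApiDemo {
        public static void main(String[] args) throws Exception {
            // Configuration loads core-site.xml / hdfs-site.xml from the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/tmp/hdfs-api-demo.txt");

            // Write a small file to HDFS (overwrite if it already exists).
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read the same file back and print its first line.
            try (FSDataInputStream in = fs.open(file);
                 BufferedReader reader = new BufferedReader(
                         new InputStreamReader(in, StandardCharsets.UTF_8))) {
                System.out.println(reader.readLine());
            }

            fs.close();
        }
    }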
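
The MapReduce module is built around the classic Word Count application. The sketch below shows the three pieces covered in the module (driver, mapper and reducer) in Java; it follows the standard Hadoop example rather than the course's exact code, and it also registers the reducer as a combiner for local pre-aggregation on the map side.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (word, 1) for every token in the input line.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer tokens = new StringTokenizer(value.toString());
                while (tokens.hasMoreTokens()) {
                    word.set(tokens.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer (also used as combiner): sums the counts for each word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        // Driver: configures and submits the job; args[0] is the input path, args[1] the output path.
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }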
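
For the combiners and partitioners topic, a custom Partitioner decides which reducer receives each intermediate key. The example below is a hypothetical AlphabetPartitioner that sends words starting with a-m to one reducer and everything else to another; it would be wired into a job such as the WordCount above with job.setPartitionerClass(AlphabetPartitioner.class) and job.setNumReduceTasks(2).

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Hypothetical example: routes keys starting with a-m to reducer 0 and the rest to reducer 1.
    public class AlphabetPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            String s = key.toString();
            if (numPartitions < 2 || s.isEmpty()) {
                return 0;
            }
            char first = Character.toLowerCase(s.charAt(0));
            return (first >= 'a' && first <= 'm') ? 0 : 1;
        }
    }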