Apache Zeppelin - Big Data Visualization Tool
Apache Zeppelin - Big Data Visualization Tool for Big Data Engineers: An Open-Source (Free) Tool for Data Visualization
Course Description
Learn the latest big data technology - Apache Zeppelin - and learn to use it as a data visualization tool with some of the most popular programming languages!
One of the most valuable technology skills is the ability to analyze huge data sets, and this course is specifically designed to bring you up to speed on one of the best technologies for this task, Apache Zeppelin! The top technology companies like Google, Facebook, Netflix, Airbnb, Amazon, NASA, and more are all using Apache Zeppelin to solve their big data problems!
Master big data visualization with Apache Zeppelin.
It offers various types of interpreters to integrate with the wider big data ecosystem.
Apache Zeppelin provides a web-based notebook, along with 20+ interpreters to interact with, and facilitates collaboration from the web UI. Zeppelin supports data ingestion, data discovery, data analysis, and data visualization.
Integrating and switching between interpreters is simple and seamless.
The resulting data can be exported to or stored in various sources, explored with the built-in visualizations, and analyzed with a pivot-chart-style setup.
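As a rough illustration of how this interpreter mixing looks in practice, a single Zeppelin note can combine a Spark (Scala) paragraph with a SQL paragraph. A minimal sketch (the file path /tmp/bank.csv and the column names are illustrative placeholders, not course material):

%spark
// Spark (Scala) paragraph: load a CSV into a DataFrame and expose it to SQL as a temp view
val bank = spark.read.option("header", "true").option("inferSchema", "true").csv("/tmp/bank.csv")
bank.createOrReplaceTempView("bank")

%sql
-- SQL paragraph: query the same data; the result grid can be switched to bar, pie, or pivot charts in the UI
SELECT age, count(*) AS customers FROM bank GROUP BY age ORDER BY age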
This course introduces every aspect of visualization, from story to numbers, to architecture, to code. Tell your story with charts on the web. Visualization always reflects the reality of the data.
We will Learn:
Data Ingestion in Zeppelin environment
Configuring Interpreter
How to use Zeppelin to process data in Spark (Scala), Python, SQL, and MySQL
Data Discovery
Data Analytics in Zeppelin
Data Visualization
Pivot Chart
Dynamic Forms (see the sketch after this list)
Various types of interpreters to integrate with the wider big data ecosystem
Visualization of results from big data
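The Dynamic Forms item above, for example, refers to Zeppelin's ability to render input fields directly inside a paragraph. A minimal Spark SQL sketch, reusing the hypothetical bank view from the earlier example (the form name maxAge and its default value of 30 are illustrative):

%sql
-- ${maxAge=30} renders a text-input form above the result; the entered value is substituted into the query on each run
SELECT age, count(1) AS customers FROM bank WHERE age < ${maxAge=30} GROUP BY age ORDER BY age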
Goals
What will you learn in this course:
- Data Ingestion in Zeppelin environment
- Configuring Interpreter in Zeppelin (see the configuration sketch after this list)
- How to use Zeppelin to process data in Spark (Scala), Spark (Python), SQL, and MySQL
- Data Discovery
- Data Analytics in Zeppelin
- Data Visualization
- Pivot Chart
- Dynamic Forms
- Various types of interpreters to integrate with the wider big data ecosystem
- Visualization of results from big data
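To give a rough idea of what configuring an interpreter involves, here is a sketch of the JDBC interpreter pointed at a MySQL database, as in the SQL-support lectures. In the Interpreter settings page you would set properties along these lines and add the MySQL JDBC driver as a dependency; the host, database name, credentials, and driver version below are placeholders, not values from the course:

default.driver     com.mysql.jdbc.Driver
default.url        jdbc:mysql://localhost:3306/testdb
default.user       zeppelin_user
default.password   ********
Dependency (artifact): mysql:mysql-connector-java:5.1.47

A paragraph bound to that interpreter (for example %jdbc, or a custom interpreter name created in the same group) can then run SQL against the remote database directly from the notebook.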
Prerequisites
What are the prerequisites for this course?
- Basics of Data Analytics
- Big data basics will be an added advantage
- Basics of SQL queries
- Basics of different data visualizations

Curriculum
Check out the detailed breakdown of what’s inside the course
Introduction - 23 Lectures
- Introduction (03:26)
- What is Apache Zeppelin (04:08)
- Installation Steps on Linux machines
- (Latest) Installing Apache Zeppelin (0.10.1)
- (Hands on) Installation Steps on Ubuntu 20.04 (05:31)
- Regarding IBM Skills Network (01:06)
- (Optional) Free Account creation in IBM Skills Network Labs (01:51)
- (Optional) Launch Apache Zeppelin in IBM Skills Network Labs (02:32)
- Explore UI (12:19)
- (Optional) Loading Data into IBM Skills Developer Lab (01:52)
- Spark with Zeppelin (Hands on Demo) (07:44)
- SQL Support in Zeppelin Part 1 (MySQL Remote Database Connectivity Hands on Demo) (06:11)
- SQL Support in Zeppelin Part 2 (MySQL Remote Database Connectivity Hands on Demo) (05:14)
- (Hands On) Configure Hive Interpreter in Apache Zeppelin (03:45)
- Configure Hive Interpreter in Apache Zeppelin
- Hadoop Configuration Setting
- Starting Hadoop, Hive, Zeppelin (10:31)
- Hive with Zeppelin (08:03)
- Python with Zeppelin (02:20)
- Types of Default Chart in Zeppelin (04:12)
- Dynamic Forms (Spark SQL) in Zeppelin (08:18)
- Mini Project on Twitter Data Analysis (17:29)
- Thank you (00:20)
Instructor Details

Bigdata Engineer
I am a Solution Architect with 12+ years of experience in the Banking, Telecommunications and Financial Services industries, across a diverse range of roles in Credit Card, Payments, Data Warehouse and Data Center programmes.
My role as Big Data and Cloud Architect is to work as part of the Big Data team to provide software solutions.
Responsibilities include:
- Support all Hadoop-related issues
- Benchmark existing systems, analyse system challenges/bottlenecks, and propose the right solutions to eliminate them based on various Big Data technologies
- Analyse and Define pros and cons of various technologies and platforms
- Define use cases, solutions and recommendations
- Define Big Data strategy
- Perform detailed analysis of business problems and technical environments
- Define pragmatic Big Data solution based on customer requirements analysis
- Define pragmatic Big Data Cluster recommendations
- Educate customers on various Big Data technologies to help them understand pros and cons of Big Data
- Data Governance
- Build Tools to improve developer productivity and implement standard practices
I am sure the knowledge in these courses can give you extra power to win in life.
All the best!!
Course Certificate
Use your certification to make a career change or to advance in your current career. Salaries are among the highest in the world.
