An Evaluation of Big Data Challenges Techniques with Hadoop Components

File Size: 1.39 MB
Volume: Volume 4, Issue 1 (January, 2018)
Publication No: IJTC201801001
Author: Kaljot Sharma, Dr. Raman Chadha
Downloads: 22

Abstract:
Big Data refers to data sets so large that they cannot be processed using traditional data processing methods. With the widespread use of computing devices such as smartphones, laptops, and wearable devices, the volume of data generated over the internet has grown beyond what conventional computers can handle. This rapid growth gave rise to the term Big Data. Today, data is increasing in volume, variety, and velocity. Managing it requires databases with massively parallel software running on tens, hundreds, or even thousands of servers, so Big Data platforms are used to acquire, organize, and analyze such data. This paper addresses the challenges and issues that must be tackled to realize the full potential of Big Data. It also discusses the characteristics of Big Data and the Hadoop platform along with its various components. Data is acquired from social media using Flume, which can take log files as a source and, after collecting the data, store it directly in a file system such as HDFS or GFS. The data is then organized using a distributed file system such as the Google File System or the Hadoop Distributed File System. Finally, the data is analyzed using MapReduce through Pig, Hive, and Jaql. Components such as Pig, Hive, and Jaql perform the analysis so that the data can be accessed more quickly and easily, and query responses become faster.
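To make the analysis step of the pipeline concrete, below is a minimal sketch of a Hadoop MapReduce job in Java, assuming Flume has already deposited the collected log or social-media records as plain text in HDFS. The class name LogWordCount, the token-counting logic, and the input/output paths are illustrative assumptions, not taken from the paper; the job simply counts token occurrences across the stored records.

```java
// Minimal MapReduce sketch (Hadoop Java API). Assumes the input directory in
// HDFS holds plain-text records collected by Flume; paths are passed as args.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LogWordCount {

  // Mapper: split each input line into tokens and emit (token, 1).
  public static class TokenMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the partial counts for each token.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "log word count");
    job.setJarByClass(LogWordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Example HDFS paths; point these at wherever Flume sinks the data.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

In practice the same aggregation can be expressed more compactly as a Hive query or a Pig Latin script, which compile down to MapReduce jobs of this form, which is the role the abstract assigns to Pig, Hive, and Jaql.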

Keywords:
Big Data, Hadoop, MapReduce, Flume, Pig, Hive.

Tags Associated: Big Data, Hadoop, MapReduce, Hive