Abstract: Big Data is now ubiquitous, with datasets ranging from terabytes to petabytes generated continuously. Storing and managing data at this scale is a demanding task: conventional relational database systems struggle with such large and complex datasets, whereas the Hadoop approach can handle this volume of information efficiently. The Hadoop framework comprises several components, the principal ones being HDFS and MapReduce. HDFS is an open-source, fault-tolerant distributed storage system, while MapReduce is a programming model through which useful knowledge is extracted from the data. The auxiliary components of the Hadoop ecosystem are also examined at length. Finally, the WordCount algorithm is presented to illustrate how this procedure is carried out by the different Hadoop components; it serves as a canonical example of mapping and reducing a dataset.

Keywords: Big Data, MapReduce, Hadoop and its components, HDFS, RDBMS, WordCount, Pig, Hive.
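To make the mapping and reducing stages concrete, the following is a minimal, self-contained sketch of WordCount in the MapReduce style. It is illustrative only and not the paper's implementation: the function names (`map_phase`, `shuffle`, `reduce_phase`) are chosen for this example, and the shuffle step that Hadoop performs between the map and reduce phases is simulated in memory.

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle/sort: group all values by key, as the Hadoop framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

if __name__ == "__main__":
    data = ["big data needs big tools", "hadoop handles big data"]
    print(reduce_phase(shuffle(map_phase(data))))
    # prints {'big': 3, 'data': 2, 'needs': 1, 'tools': 1,
    #         'hadoop': 1, 'handles': 1}
```

In a real Hadoop job the mapper and reducer run as separate distributed tasks over HDFS blocks, but the division of labour is the same as in this sketch.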