Hadoop multinode cluster too slow. How do I increase speed of data processing?

I have a 6-node cluster: 5 DNs and 1 NN. All nodes have 32 GB RAM. Each slave has 8.7 TB of HDD; the NN has 1.1 TB. Here are links to my core-site.xml, hdfs-site.xml, and yarn-site.xml.

After running an MR job, I checked the RAM usage, shown below:

Namenode:

free -g
          total        used        free      shared  buff/cache   available
Mem:      31           7          15           0           8          22
Swap:     31           0          31

Datanodes:

Slave1:

free -g
          total        used        free      shared  buff/cache   available
Mem:      31           6           6           0          18          24
Swap:     31           3          28

Slave2:

free -g
          total        used        free      shared  buff/cache   available
Mem:      31           2           4           0          24          28
Swap:     31           1          30

The other slaves show similar RAM usage. Moreover, even while only a single job is running, any other submitted jobs sit in the ACCEPTED state and start only after the first job finishes.

Here is the ps output for the JAR that I submitted to execute the MR job:

/opt/jdk1.8.0_77//bin/java -Dproc_jar -Xmx1000m 
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs 
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log 
-Dyarn.home.dir= -Dyarn.id.str= -Dhadoop.root.logger=INFO,console 
-Dyarn.root.logger=INFO,console -Dyarn.policy.file=hadoop-policy.xml 
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs 
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log 
-Dyarn.home.dir=/home/hduser/hadoop -Dhadoop.home.dir=/home/hduser/hadoop 
-Dhadoop.root.logger=INFO,console -Dyarn.root.logger=INFO,console 
-classpath --classpath of jars
 org.apache.hadoop.util.RunJar abc.jar abc.mydriver2 /raw_data /mr_output/02

Are there any settings that I can change or add to allow multiple jobs to run simultaneously and to speed up the current data processing? I am using Hadoop 2.5.2. The cluster is in a PROD environment, and I cannot take it down to upgrade the Hadoop version.

EDIT 1: I started a new MR job with 362 GB of data, and the RAM usage is still around 8 GB, with 22 GB of RAM free. Here is my job submission command:

nohup yarn jar abc.jar def.mydriver1 /raw_data /mr_output/01 &

Here is some more information:

18/11/22 14:09:07 INFO input.FileInputFormat: Total input paths to process : 130363
18/11/22 14:09:10 INFO mapreduce.JobSubmitter: number of splits:130372

Are there additional memory parameters we can set when submitting the job so that memory is used more efficiently?


Based on your yarn-site.xml, your yarn.scheduler.minimum-allocation-mb setting of 10240 is too high. It effectively means you have, at best, 18 vcores available. That might be the right setting for a cluster where each node has tons of memory, but for 32 GB it is far too large. Drop it to 1 or 2 GB.

Remember, an HDFS block is what each mapper typically consumes, so 1-2 GB of memory for 128 MB of data is far more reasonable. The added benefit is that you could have up to 180 vcores available, which will process jobs roughly 10x faster than 18 vcores. A sketch of the config entries is below.
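For example, a minimal sketch of what that change could look like, assuming the default 128 MB block size and stock Hadoop 2.x property names; the values are illustrative, not taken from your actual configs:

<!-- yarn-site.xml (illustrative values) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>  <!-- down from 10240: the smallest container YARN will grant -->
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>28672</value> <!-- ~28 GB per node for containers, leaving ~4 GB for OS and daemons -->
</property>

<!-- mapred-site.xml (illustrative values) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>  <!-- one 2 GB container per 128 MB split -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value> <!-- JVM heap ~80% of the container size -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>
</property>

Under these assumed numbers, the 5 DNs offer about 140 GB to YARN in total, so dozens of 2 GB map containers can run concurrently instead of the handful that 10 GB containers allow.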

To give you an idea, here is how a 4-node cluster with 32 cores and 128 GB RAM per node is set up:

For Tez: divide RAM by cores to get the max Tez container size. So in my case: 128 GB / 32 cores = 4 GB per container.

The corresponding TEZ and YARN settings are sketched below.
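A minimal sketch of those two blocks, assuming stock Tez-on-YARN property names in tez-site.xml and yarn-site.xml; the values are illustrative, derived from the 4 GB container size above:

<!-- tez-site.xml -->
<property>
  <name>tez.am.resource.memory.mb</name>
  <value>4096</value>  <!-- Tez ApplicationMaster container = RAM/cores from above -->
</property>
<property>
  <name>tez.task.resource.memory.mb</name>
  <value>4096</value>  <!-- each Tez task container -->
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>114688</value> <!-- ~112 of the 128 GB per node handed to YARN -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>4096</value>   <!-- align the minimum container with the Tez container size -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>114688</value>
</property>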
