MapReduce job fails due to memory setting issues
Article ID: KB0077179

Product: Spotfire Data Science Team Studio
Version: 6.5.0

Description

MapReduce job fails with the following error: 

================================================================================================

 ERROR [pool-610-thread-2] com.alpine.datamining.workflow.AnalyticNodeThread.run(AnalyticNodeThread.java:82) - Unable to recreate exception from backed error: Container [pid=231883,containerID=container_e50_1564730610291_80507_01_000003] is running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container. Dump of the process-tree for container_e50_1564730610291_80507_01_000003 
===============================================================================================

Environment

Linux

Resolution


Check the following parameters on the Hadoop cluster and update the data source configuration with the appropriate values:
  • mapreduce.map.memory.mb
  • mapreduce.reduce.memory.mb
  • mapreduce.map.java.opts
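
As a sketch of what the updated settings might look like (the values below are examples only and should be sized to the workload and the cluster's available memory), the parameters above can be set as Hadoop configuration properties, for example in mapred-site.xml or as overrides in the data source configuration:

```xml
<!-- Example values only; tune to your workload and cluster capacity. -->
<!-- In the error above, the container used 1.1 GB against a 1 GB limit, -->
<!-- so the container memory must be raised above actual usage. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <!-- The JVM heap must stay below the container limit; roughly 80% of
       mapreduce.map.memory.mb is a common rule of thumb. -->
  <value>-Xmx1638m</value>
</property>
```

The same properties can also be passed per job at submit time with generic options, e.g. -Dmapreduce.map.memory.mb=2048.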

Issue/Introduction

MapReduce job fails due to memory setting issues

Additional Information

Note: TIBCO Spotfire Data Science was renamed to TIBCO Data Science Team Studio as of version 6.5.0.