Hadoop - Pig based Alpine operators fail to run on HA MapR4.1 cluster - ArrayIndexOutOfBoundsException



Article ID: KB0082673

Products: Spotfire Data Science
Versions: 6.x

Description

Pig-based Alpine operators fail to run on a high-availability (HA) MapR 4.1 cluster with java.lang.ArrayIndexOutOfBoundsException.


Resolution

Pig-based Alpine operators fail on an HA MapR 4.1 cluster with ArrayIndexOutOfBoundsException. The steps below reproduce the issue and show the client-side configuration that resolves it.

1. See the attached screenshot for the regular data source connection parameters for MapR 4.1 with NameNode HA.
2. The following additional parameters are configured (also see the attached screenshot):

mapreduce.jobhistory.address=mapr4c.alpinenow.local:10020
mapreduce.jobhistory.webapp.address=mapr4c.alpinenow.local:19888
yarn.app.mapreduce.am.staging-dir=/var/mapr/cluster/yarn/rm/staging
yarn.resourcemanager.admin.address=mapr4b.alpinenow.local:8033
yarn.resourcemanager.resource-tracker.address=mapr4b.alpinenow.local:8031
yarn.resourcemanager.scheduler.address=mapr4b.alpinenow.local:8030
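As a hedged aid (not part of the original procedure), the host:port endpoints in this list can be extracted so that each one can be reachability-checked from the Alpine server, for example with "nc -z host port":

```shell
# Sketch: pull the host:port endpoints out of the additional parameters
# above (the staging-dir entry has no port and is filtered out).
PARAMS='mapreduce.jobhistory.address=mapr4c.alpinenow.local:10020
mapreduce.jobhistory.webapp.address=mapr4c.alpinenow.local:19888
yarn.app.mapreduce.am.staging-dir=/var/mapr/cluster/yarn/rm/staging
yarn.resourcemanager.admin.address=mapr4b.alpinenow.local:8033
yarn.resourcemanager.resource-tracker.address=mapr4b.alpinenow.local:8031
yarn.resourcemanager.scheduler.address=mapr4b.alpinenow.local:8030'
# Keep only values that end in :<port>.
ENDPOINTS=$(echo "$PARAMS" | awk -F= '$2 ~ /:[0-9]+$/ {print $2}')
echo "$ENDPOINTS"
# Each printed line is a host:port pair; check it with: nc -z -w 2 host port
```

Each endpoint should be reachable from the Alpine server before the job is run; an unreachable JobHistory or ResourceManager address points to a configuration or network problem rather than the client-side issue described below.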
3. In this example, HA is configured for the NameNode and for the ResourceManager by running this command on every node of the cluster (3 nodes in this case):
/opt/mapr/server/configure.sh \
-N mapr41.alpinenow.local \
-C mapr4a.alpinenow.local:7222,mapr4b.alpinenow.local:7222,mapr4c.alpinenow.local:7222 \
-HS mapr4c.alpinenow.local \
-RM mapr4a.alpinenow.local,mapr4b.alpinenow.local,mapr4c.alpinenow.local \
-Z mapr4a.alpinenow.local:5181,mapr4b.alpinenow.local:5181,mapr4c.alpinenow.local:5181

This command sets:
- the cluster name: -N mapr41.alpinenow.local
- the CLDB hosts and ports: -C mapr4a.alpinenow.local:7222,mapr4b.alpinenow.local:7222,mapr4c.alpinenow.local:7222
- the JobHistory service hostname: -HS mapr4c.alpinenow.local
- the ResourceManager hosts: -RM mapr4a.alpinenow.local,mapr4b.alpinenow.local,mapr4c.alpinenow.local
- the ZooKeeper hosts and ports: -Z mapr4a.alpinenow.local:5181,mapr4b.alpinenow.local:5181,mapr4c.alpinenow.local:5181

Note: There must be no spaces after the commas in the "configure.sh" parameter lists.
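As a hedged convenience sketch (not from the original procedure), the comma-separated host lists can be built in variables first, which makes it easy to confirm there is no stray whitespace before passing them to "configure.sh":

```shell
# Sketch: build the configure.sh host lists from the node names used in
# this example, then check for stray spaces before using them.
NODES="mapr4a.alpinenow.local mapr4b.alpinenow.local mapr4c.alpinenow.local"
CLDB=$(printf '%s:7222\n' $NODES | paste -sd, -)   # value for -C
ZK=$(printf '%s:5181\n' $NODES | paste -sd, -)     # value for -Z
RM=$(printf '%s\n' $NODES | paste -sd, -)          # value for -RM
case "$CLDB$ZK$RM" in
  *' '*) echo "ERROR: space found in a host list" ;;
  *)     echo "OK: no spaces in host lists" ;;
esac
echo "$CLDB"
```

The resulting variables can then be substituted into the "configure.sh" invocation shown above.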

4. However, Pig-based Alpine operators fail to run (see the attached screenshot), and the Hadoop job logs show this error:

2015-09-15 17:02:26,557 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1442358961110_0004_000002
2015-09-15 17:02:27,112 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-09-15 17:02:27,113 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-09-15 17:02:27,158 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2015-09-15 17:02:27,158 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@27b1603b)
2015-09-15 17:02:27,195 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2 for application: 4. Attempt num: 2 is last retry: true
2015-09-15 17:02:27,201 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2015-09-15 17:02:27,296 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-09-15 17:02:27,296 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-09-15 17:02:27,363 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2015-09-15 17:02:27,542 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputCommitter
2015-09-15 17:02:27,565 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2015-09-15 17:02:27,567 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2015-09-15 17:02:27,568 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2015-09-15 17:02:27,569 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2015-09-15 17:02:27,569 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2015-09-15 17:02:27,575 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2015-09-15 17:02:27,575 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2015-09-15 17:02:27,577 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2015-09-15 17:02:27,593 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Will not try to recover. recoveryEnabled: true recoverySupportedByCommitter: false numReduceTasks: 1 shuffleKeyValidForRecovery: true ApplicationAttemptID: 2
2015-09-15 17:02:27,624 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at maprfs://mapr4a.alpinenow.local/var/mapr/cluster/yarn/rm/staging/chorus/.staging/job_1442358961110_0004/job_1442358961110_0004_1.jhist
2015-09-15 17:02:27,988 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2015-09-15 17:02:28,084 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-09-15 17:02:28,330 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-09-15 17:02:28,330 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2015-09-15 17:02:28,347 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1442358961110_0004 to jobTokenSecretManager
2015-09-15 17:02:28,392 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1442358961110_0004 because: not enabled; too much RAM;
2015-09-15 17:02:28,427 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1442358961110_0004 = 1787886. Number of splits = 1
2015-09-15 17:02:28,430 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1442358961110_0004 = 1
2015-09-15 17:02:28,430 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1442358961110_0004Job Transitioned from NEW to INITED
2015-09-15 17:02:28,432 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1442358961110_0004.
2015-09-15 17:02:28,503 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-09-15 17:02:28,525 INFO [Socket Reader #1 for port 55172] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 55172
2015-09-15 17:02:28,547 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2015-09-15 17:02:28,548 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-09-15 17:02:28,548 INFO [IPC Server listener on 55172] org.apache.hadoop.ipc.Server: IPC Server listener on 55172: starting
2015-09-15 17:02:28,551 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at mapr4a.alpinenow.local/172.27.0.4:55172
2015-09-15 17:02:28,601 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-09-15 17:02:28,605 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2015-09-15 17:02:28,617 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-09-15 17:02:28,622 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2015-09-15 17:02:28,622 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2015-09-15 17:02:28,625 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2015-09-15 17:02:28,626 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2015-09-15 17:02:28,636 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 50926
2015-09-15 17:02:28,636 INFO [main] org.mortbay.log: jetty-6.1.26
2015-09-15 17:02:28,662 INFO [main] org.mortbay.log: Extract jar:file:/opt/mapr/hadoop/hadoop-2.5.1/share/hadoop/yarn/hadoop-yarn-common-2.5.1-mapr-1503.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_50926_mapreduce____ppco53/webapp
2015-09-15 17:02:28,931 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50926
2015-09-15 17:02:28,931 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 50926
2015-09-15 17:02:29,316 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2015-09-15 17:02:29,319 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE job_1442358961110_0004
2015-09-15 17:02:29,321 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-09-15 17:02:29,321 INFO [Socket Reader #1 for port 41647] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 41647
2015-09-15 17:02:29,329 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-09-15 17:02:29,329 INFO [IPC Server listener on 41647] org.apache.hadoop.ipc.Server: IPC Server listener on 41647: starting
2015-09-15 17:02:29,348 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2015-09-15 17:02:29,348 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2015-09-15 17:02:29,348 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2015-09-15 17:02:29,410 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-09-15 17:02:29,410 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-09-15 17:02:29,420 INFO [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator failed in state STARTED; cause: java.lang.ArrayIndexOutOfBoundsException: 0
java.lang.ArrayIndexOutOfBoundsException: 0
    at org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:62)
    at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:157)
    at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:87)
    at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:70)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.createSchedulerProxy(RMCommunicator.java:297)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:112)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:229)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:818)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1081)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1489)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1485)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1418)
2015-09-15 17:02:29,422 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2015-09-15 17:02:29,422 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter failed in state STARTED; cause: java.lang.ArrayIndexOutOfBoundsException: 0
java.lang.ArrayIndexOutOfBoundsException: 0
    at org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:62)
    at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:157)
    at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:87)
    at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:70)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.createSchedulerProxy(RMCommunicator.java:297)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:112)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:229)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:818)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1081)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1489)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1485)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1418)
2015-09-15 17:02:29,423 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state STARTED; cause: java.lang.ArrayIndexOutOfBoundsException: 0
java.lang.ArrayIndexOutOfBoundsException: 0
    at org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.init(ConfiguredRMFailoverProxyProvider.java:62)
    at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:157)
    at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:87)
    at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:70)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.createSchedulerProxy(RMCommunicator.java:297)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:112)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:229)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:818)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1081)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1489)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1485)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1418)
2015-09-15 17:02:29,423 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 3
2015-09-15 17:02:29,424 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event AM_STARTED
2015-09-15 17:02:29,478 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1442358961110_0004, File: maprfs://mapr4a.alpinenow.local/var/mapr/cluster/yarn/rm/staging/chorus/.staging/job_1442358961110_0004/job_1442358961110_0004_2.jhist
2015-09-15 17:02:29,783 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event AM_STARTED
2015-09-15 17:02:29,784 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_SUBMITTED
2015-09-15 17:02:29,788 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2015-09-15 17:02:29,789 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory maprfs://mapr4a.alpinenow.local /var/mapr/cluster/yarn/rm/staging/chorus/.staging/job_1442358961110_0004
2015-09-15 17:02:29,804 INFO [main] org.apache.hadoop.ipc.Server: Stopping server on 41647
2015-09-15 17:02:29,805 INFO [IPC Server listener on 41647] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 41647
2015-09-15 17:02:29,808 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2015-09-15 17:02:29,808 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
5. The ArrayIndexOutOfBoundsException is thrown from ConfiguredRMFailoverProxyProvider.init because the client side has ResourceManager HA enabled but resolves an empty list of ResourceManager ids, so the failover proxy provider indexes into an empty array. To fix the issue, configure the following two files on the Alpine machine where the MapR 4.0.x client is running:

a) Configure the /opt/mapr/hadoop/hadoop-2.4.1/etc/hadoop/mapred-site.xml file with the correct JobHistory hostname:
<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>mapr4c.alpinenow.local:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>mapr4c.alpinenow.local:19888</value>
  </property>
</configuration>
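As a hedged sanity check (an addition, not part of the original procedure), the edited file can be read back with plain grep/sed. A temp copy keeps the snippet self-contained; on the real Alpine machine, point F at /opt/mapr/hadoop/hadoop-2.4.1/etc/hadoop/mapred-site.xml instead:

```shell
# Sketch: read back the JobHistory address from mapred-site.xml.
# F points at a temp copy here purely so the example is self-contained.
F=$(mktemp)
cat > "$F" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>mapr4c.alpinenow.local:10020</value>
  </property>
</configuration>
EOF
# Grab the <value> line immediately following the matching <name> line.
JH=$(grep -A1 '<name>mapreduce.jobhistory.address</name>' "$F" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "JobHistory address: $JH"
```

If the printed address does not match the JobHistory host set by "configure.sh" (-HS), the file was not updated correctly.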

b) Configure the /opt/mapr/hadoop/hadoop-2.4.1/etc/hadoop/yarn-site.xml file with the correct HA settings:

Note: The /opt/mapr/hadoop/hadoop-2.4.1/etc/hadoop/yarn-site.xml file can also be populated by running the above-mentioned "configure.sh" command with the "-c" flag added to the previous parameter list.
<configuration>
  <!-- Resource Manager HA Configs -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-mapr41.alpinenow.local</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2,rm3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.id</name>
    <value>rm1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>mapr4a.alpinenow.local:5181,mapr4b.alpinenow.local:5181,mapr4c.alpinenow.local:5181</value>
  </property>
 
  <!-- Configuration for rm1 -->
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>mapr4a.alpinenow.local:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>mapr4a.alpinenow.local:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>mapr4a.alpinenow.local:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm1</name>
    <value>mapr4a.alpinenow.local:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>mapr4a.alpinenow.local:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm1</name>
    <value>mapr4a.alpinenow.local:8090</value>
  </property>
  <!-- Configuration for rm2 -->
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>mapr4b.alpinenow.local:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>mapr4b.alpinenow.local:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>mapr4b.alpinenow.local:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm2</name>
    <value>mapr4b.alpinenow.local:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>mapr4b.alpinenow.local:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm2</name>
    <value>mapr4b.alpinenow.local:8090</value>
  </property>
  <!-- Configuration for rm3 -->
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm3</name>
    <value>mapr4c.alpinenow.local:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm3</name>
    <value>mapr4c.alpinenow.local:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm3</name>
    <value>mapr4c.alpinenow.local:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm3</name>
    <value>mapr4c.alpinenow.local:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm3</name>
    <value>mapr4c.alpinenow.local:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm3</name>
    <value>mapr4c.alpinenow.local:8090</value>
  </property>
</configuration>
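Because the failure above stems from the client resolving an empty ResourceManager list, it is worth confirming that every id in yarn.resourcemanager.ha.rm-ids has a matching per-RM address entry. The following is a hedged, self-contained sketch using awk; it runs against a trimmed temp copy, so on the real Alpine machine point F at /opt/mapr/hadoop/hadoop-2.4.1/etc/hadoop/yarn-site.xml instead:

```shell
# Sketch: for each id listed in yarn.resourcemanager.ha.rm-ids, print the
# corresponding yarn.resourcemanager.address.<id> value. A missing address
# prints an empty field and indicates an incomplete yarn-site.xml.
F=$(mktemp)
cat > "$F" <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2,rm3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>mapr4a.alpinenow.local:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>mapr4b.alpinenow.local:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm3</name>
    <value>mapr4c.alpinenow.local:8032</value>
  </property>
</configuration>
EOF
RESULT=$(awk -F'[<>]' '
  /<name>/  { n = $3 }           # remember the most recent property name
  /<value>/ { v[n] = $3 }        # map name -> value
  END {
    cnt = split(v["yarn.resourcemanager.ha.rm-ids"], ids, ",")
    for (i = 1; i <= cnt; i++)
      print ids[i], v["yarn.resourcemanager.address." ids[i]]
  }' "$F")
echo "$RESULT"
```

Every rm id should print with a non-empty host:port; an id with no address is exactly the condition that leaves ConfiguredRMFailoverProxyProvider with an empty array.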

Pig-based Alpine jobs will run successfully after these configurations are applied.

Attachments

Three screenshots are attached: the regular data source connection parameters, the configured additional parameters, and the failing Pig-based operators.