I’m working on a project where I need to run aggregations over an HBase table scan with MapReduce and write the results to another HBase table. To that end, I’ve set up a Hadoop cluster running HBase, HDFS, and YARN in pseudo-distributed mode.
The job is submitted from a Java client whenever a request arrives from another service via gRPC. However, the submission fails with the following error:
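The job wiring looks roughly like the sketch below (table names, the column family, and the aggregation itself are placeholders for my actual code; this version just counts rows per group):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class AggregationJob {

    // Emits (groupKey, 1) for every row scanned from the source table.
    static class CountMapper extends TableMapper<Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);

        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            byte[] group = value.getValue(Bytes.toBytes("cf"), Bytes.toBytes("group"));
            if (group != null) {
                context.write(new Text(group), ONE);
            }
        }
    }

    // Sums the counts per group and writes one row per group to the target table.
    static class CountReducer extends TableReducer<Text, LongWritable, ImmutableBytesWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) {
                sum += v.get();
            }
            Put put = new Put(Bytes.toBytes(key.toString()));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(sum));
            context.write(null, put); // TableOutputFormat ignores the key
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-aggregation");
        job.setJarByClass(AggregationJob.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // larger scanner cache for MR scans
        scan.setCacheBlocks(false);  // recommended off for MR jobs

        // This also ships the HBase dependency jars to the job staging dir --
        // the libjars upload that fails in the error below.
        TableMapReduceUtil.initTableMapperJob(
                "source_table", scan, CountMapper.class,
                Text.class, LongWritable.class, job);
        TableMapReduceUtil.initTableReducerJob(
                "target_table", CountReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}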
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/root/.staging/job_1714655341519_0001/libjars/hbase-client.jar could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
Upon examining the Namenode logs, I noticed the following sequence of events:
2024-05-02 13:10:29,045 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741858_1040, replicas=127.0.0.1:9866 for /hbase/data/default/tmp_984a107027fd49e590ae77dda740be82/.tabledesc/.tableinfo.0000000001.386
2024-05-02 13:10:29,077 INFO BlockStateChange: BLOCK* addStoredBlock: 127.0.0.1:9866 is added to blk_1073741858_1040 (size=386)
2024-05-02 13:10:29,078 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/data/default/tmp_984a107027fd49e590ae77dda740be82/.tabledesc/.tableinfo.0000000001.386 is closed by DFSClient_NONMAPREDUCE_27241901_1
2024-05-02 13:10:29,097 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741859_1041, replicas=127.0.0.1:9866 for /hbase/data/default/tmp_984a107027fd49e590ae77dda740be82/b57fd033d101d53c7d5afd77a72f78de/.regioninfo
2024-05-02 13:10:29,114 INFO BlockStateChange: BLOCK* addStoredBlock: 127.0.0.1:9866 is added to blk_1073741859_1041 (size=71)
2024-05-02 13:10:29,119 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/data/default/tmp_984a107027fd49e590ae77dda740be82/b57fd033d101d53c7d5afd77a72f78de/.regioninfo is closed by DFSClient_NONMAPREDUCE_27241901_1
2024-05-02 13:10:29,921 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /hbase/data/default/tmp_984a107027fd49e590ae77dda740be82/b57fd033d101d53c7d5afd77a72f78de/recovered.edits/1.seqid is closed by DFSClient_NONMAPREDUCE_1365210381_1
2024-05-02 13:10:33,334 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741860_1042, replicas=127.0.0.1:9866 for /tmp/hadoop-yarn/staging/root/.staging/job_1714655341519_0001/libjars/hbase-client.jar
2024-05-02 13:10:33,448 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2024-05-02 13:10:33,451 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2024-05-02 13:10:33,451 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2024-05-02 13:10:33,455 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on default port 9000, call Call#12 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from stockbroker-bdnr-hbase-client-1.stockbroker-bdnr_default:38876 / 172.20.0.2:38876
java.io.IOException: File /tmp/hadoop-yarn/staging/root/.staging/job_1714655341519_0001/libjars/hbase-client.jar could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
The Datanode logs, however, show nothing relevant around that time. One thing I do notice in the Namenode log above: the HBase writes succeed, while the failing write is the one coming from my client container (172.20.0.2), and the single datanode is registered at the loopback address 127.0.0.1:9866.
Here’s my current HDFS configuration (hdfs-site.xml):
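For reference, this is roughly how I can query the namenode’s view of the datanodes from the Java client, to check whether the datanode is registered and what address it advertises (a sketch; it assumes the client’s Configuration picks up the cluster’s fs.defaultFS):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeReport {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // loads core-site.xml / hdfs-site.xml from the classpath
        try (FileSystem fs = FileSystem.get(conf)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Each entry shows the transfer address the datanode registered with --
            // the same address DFS clients must be able to reach for block writes.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.printf("%s state=%s remaining=%d%n",
                        dn.getXferAddr(), dn.getAdminState(), dn.getRemaining());
            }
        }
    }
}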
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-bind-host</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.storage.policy.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.use.dfs.network.topology</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.replication.considerLoad</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.replication.considerLoad.factor</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.cluster.administrators</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
Some parts of the configuration may look redundant; they’re leftovers from debugging this for a while. Please note that this setup is for a school project and won’t be put into production; I’m only running in pseudo-distributed mode to observe how data is distributed in HBase and how the MapReduce jobs behave.
Can someone help me understand what’s causing this issue and suggest possible fixes? Any insights would be greatly appreciated.