I'm learning Hadoop using Docker Compose. After adding entries to my local hosts file, I can't reach the namenode/datanode by hostname, but access works fine via 127.0.0.1:9870 or localhost:9864. I don't know why this happens or how to fix it. Please help me.
Here are my configurations.
docker-compose.yaml
version: "2"
services:
  namenode:
    image: apache/hadoop:3
    hostname: namenode
    command: ["hdfs", "namenode"]
    ports:
      - 8020:8020
      - 9870:9870
    env_file:
      - ./config
    environment:
      ENSURE_NAMENODE_DIR: "/opt/hadoop/name"
    networks:
      - hadoop
  datanode:
    image: apache/hadoop:3
    hostname: datanode
    depends_on:
      - namenode
    command: ["hdfs", "datanode"]
    ports:
      - 9864:9864
      - 9865:9865
      - 9866:9866
    env_file:
      - ./config
    networks:
      - hadoop
  resourcemanager:
    image: apache/hadoop:3
    hostname: resourcemanager
    command: ["yarn", "resourcemanager"]
    ports:
      - 8088:8088
      - 19888:19888
      - 19890:19890
    env_file:
      - ./config
    volumes:
      - ./test.sh:/opt/test.sh
    networks:
      - hadoop
  nodemanager:
    image: apache/hadoop:3
    command: ["yarn", "nodemanager"]
    env_file:
      - ./config
    ports:
      - 8040:8040 # NodeManager
    networks:
      - hadoop
networks:
  hadoop:
    driver: bridge
hosts
127.0.0.1 namenode
127.0.0.1 datanode
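To narrow down whether this is a name-resolution problem at all, the hosts entries can be checked independently of Hadoop with a small sketch (the names "namenode" and "datanode" are assumed to resolve only on a machine whose hosts file contains the entries above):

```python
import socket

# Check what each name resolves to on this machine. With the hosts
# entries above, "namenode" and "datanode" should map to 127.0.0.1;
# if they do not, the hosts file change has not taken effect.
for name in ("localhost", "namenode", "datanode"):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror:
        print(name, "-> not resolvable")
```

If the names do resolve to 127.0.0.1 here, then the problem is not name resolution itself but something about how the services respond to those hostnames.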
I want to know the reason for this behavior. Thank you.