We have a Dockerized Hadoop setup in our environment (one master + three worker nodes), each running in a separate container on a single physical server. The Hadoop file system inside Docker is mapped to an NFS share on the physical server using Docker volumes. Below is the configuration showing how we mapped it:
volumes:
  - "/opt/hadoop/dfs:/opt/hadoop/dfs"
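For context, the relevant part of our compose file looks roughly like this (the service names and image tag here are placeholders, not our exact file; only the volume mapping above is verbatim):

services:
  namenode:
    image: hadoop:3.3            # illustrative image tag
    volumes:
      - "/opt/hadoop/dfs:/opt/hadoop/dfs"
  worker1:
    image: hadoop:3.3
    volumes:
      - "/opt/hadoop/dfs:/opt/hadoop/dfs"
  # worker2 and worker3 are defined the same way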
We created a few files in the Hadoop file system inside the Docker containers and were able to access them without any issue. Everything works fine until a container restarts: after a restart, all the data in the file system gets corrupted, and we see the error below in the NameNode UI.
There are 272 missing blocks. The following files may be corrupted:
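The same report can be reproduced from the command line with the standard HDFS fsck tool (running it against the cluster root here):

hdfs fsck / -list-corruptfileblocks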
Is this expected behavior, or am I missing something? Can anyone please help?
I have gone through a few articles, but none of them matches my issue.