I am able to start the HDFS and YARN services but not able to run HDFS commands: connection refused

Discussion in 'Masters Program - Customers only' started by Manish Kukreja, Dec 24, 2016.

  1. Manish Kukreja

    Alumni

    Joined:
    Nov 23, 2016
    Messages:
    10
    Likes Received:
    1
    Hello,

    Yesterday, I installed Hadoop on an Ubuntu 14 Linux server and was able to execute HDFS commands on it.

    Today, I am not able to run any hdfs command as I get the following error:

    hadoop fs -ls /

    ls: Call from Ubuntu/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused;

    I have already started the HDFS and YARN services.
    I tried telnetting to the server:

    telnet localhost 9000

    but I get the following error:

    telnet: could not resolve localhost:9000/telnet: Name or service not known.
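Incidentally, that particular "could not resolve" message usually appears when host and port are passed to telnet as a single argument (telnet localhost:9000); telnet expects them separated by a space. A quick diagnostic sketch, using the port from this thread (jps and ss are assumptions about the environment, so the checks are guarded to degrade gracefully where they are missing):

```shell
# "could not resolve localhost:9000" usually means host and port were
# passed as one argument; telnet wants them separated by a space:
#   telnet localhost 9000
# Guarded checks that also work on machines without Hadoop or a JDK:
if command -v jps >/dev/null 2>&1; then
    jps                              # NameNode should be listed if it is up
else
    echo "jps not found (JDK not on PATH)"
fi
if command -v ss >/dev/null 2>&1; then
    ss -ltn | grep ':9000' || echo "nothing listening on port 9000"
fi
```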

    Can someone assist?

    Thanks,
    Manish
     
    #1
    Last edited: Dec 25, 2016
  2. DeshDeep Singh

    DeshDeep Singh Well-Known Member
    Simplilearn Support Alumni

    Joined:
    Sep 22, 2015
    Messages:
    935
    Likes Received:
    66

    Hi Manish,

    Edit your conf/core-site.xml and change localhost to 0.0.0.0, using the conf below. That should work.

    Code:
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://0.0.0.0:9000</value>
      </property>
    </configuration>
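Two notes on this: on Hadoop 2.x the property is officially named fs.defaultFS (fs.default.name still works as a deprecated alias), and the change only takes effect after the daemons are restarted. A minimal restart sketch, assuming the Hadoop sbin scripts are on the PATH and guarded so it is a no-op elsewhere:

```shell
# Restart HDFS so the edited core-site.xml is picked up.
# Guarded: does nothing on machines where Hadoop is not installed.
if command -v stop-dfs.sh >/dev/null 2>&1; then
    stop-dfs.sh
    start-dfs.sh
    jps    # NameNode should now appear in the listing
else
    echo "Hadoop scripts not on PATH; skipping restart"
fi
```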
     
    #2
  3. Manish Kukreja

    Alumni

    Joined:
    Nov 23, 2016
    Messages:
    10
    Likes Received:
    1
    I get the same error. I am unable to attach a screenshot.
    Call from ubuntu/127.0.1.1 to 0.0.0.0:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: wiki
     
    #3
    Last edited: Dec 28, 2016
  4. Manish Kukreja

    Alumni

    Joined:
    Nov 23, 2016
    Messages:
    10
    Likes Received:
    1
    Hi. It finally worked. It seems the primary NameNode won't start after a system restart until you format it, so I had to run hadoop namenode -format. That cleaned out all my data, but the HDFS commands run again now that I have started all the services. Is there a way to start the primary NameNode after a system restart without formatting it, since formatting makes me lose my data?
     
    #4
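A likely explanation for why a format was needed after every reboot: by default the NameNode keeps its metadata under hadoop.tmp.dir, which points into /tmp (/tmp/hadoop-${user.name}), and most Linux systems clear /tmp on reboot, so the metadata vanishes. Pointing the storage directories at a persistent location fixes this. One alternative sketch is overriding hadoop.tmp.dir itself in core-site.xml (the path reuses the one that appears elsewhere in this thread):

```xml
<!-- core-site.xml fragment: keep Hadoop's working data out of /tmp so it
     survives reboots. Path matches the one used elsewhere in this thread. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hduser/hadoop_tmp</value>
</property>
```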
  5. Manish Kukreja

    Alumni

    Joined:
    Nov 23, 2016
    Messages:
    10
    Likes Received:
    1
    I got the solution. :)

    I used the configuration below in my hdfs-site.xml.
    I no longer have to format the NameNode after a restart in order to start the primary NameNode.

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hduser/hadoop_tmp/hdfs/namenode</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hduser/hadoop_tmp/hdfs/datanode</value>
      </property>
    </configuration>
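One practical note: the two directories above must exist and be writable by the Hadoop user before HDFS is restarted, and one last format is needed so the NameNode initializes the new location. A sketch of those steps, using $HOME so it works for any user where the post hardcodes /home/hduser; the hdfs/stop/start calls are guarded so the snippet does nothing where Hadoop is not installed:

```shell
# Create persistent NameNode/DataNode directories (mirrors the
# hdfs-site.xml values above, but under $HOME for portability).
BASE="$HOME/hadoop_tmp/hdfs"
mkdir -p "$BASE/namenode" "$BASE/datanode"
echo "created $BASE/namenode and $BASE/datanode"

# One-time format so the NameNode initializes the new directory,
# then restart HDFS. Guarded: skipped where Hadoop is absent.
if command -v hdfs >/dev/null 2>&1; then
    hdfs namenode -format -nonInteractive
    stop-dfs.sh
    start-dfs.sh
fi
```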
     
    #5
  6. Got the same problem, and your solution worked for me. :)
     
    #6
