Unable To Install Hadoop on a single node cluster

Discussion in 'Masters Program - Customers only' started by Bhoja Ramakrishna Sharma, Aug 1, 2018.

  1. Bhoja Ramakrishna Sharma

Hello Sir/Madam,
I am currently pursuing the Big Data Administrator course, part of the Masters program.
I am facing an issue while installing Hadoop as a single-node cluster. Here is my problem description with the screenshots; please help me resolve this query so that I can complete my course.

Problem:
When I run the following command, I get an error while formatting the NameNode, as shown in the screenshot "HelpHDFS_Installation1.jpg":
sudo -u hdfs hdfs namenode -format

Also, when I try to access the HDFS file system with the following command, I get an error:
hdfs dfs -ls /
"WARN ipc.Client: Failed to connect to server: 10.142.202.50/10.142.202.50:8020: try once and fail.
java.net.ConnectException: Connection refused"

You can find the screenshot of the above problem in the attached file "hdfs.jpg".
    Thanks,
    Bhoja.
     

  2. Neha_Pandey

    Simplilearn Support Alumni

Hi Learner,
Kindly follow the steps below:
To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors.
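For example (a sketch, assuming release 2.9.1 to match the examples jar used later in this reply; pick whichever stable release you prefer), the download and unpack steps would look like this:

$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.9.1/hadoop-2.9.1.tar.gz
$ tar -xzf hadoop-2.9.1.tar.gz
$ cd hadoop-2.9.1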
> In the unpacked distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
    # set to the root of your Java installation
    export JAVA_HOME=/usr/java/latest
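If you are not sure what to set JAVA_HOME to, one quick way to find it (assuming a Linux machine with java on the PATH) is:

$ readlink -f $(which java)
# prints e.g. /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
# drop the trailing /bin/java and use the remainder as JAVA_HOME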
    Try the following command:
    $ bin/hadoop
    This will display the usage documentation for the hadoop script.

Now you are ready to start your Hadoop cluster in one of the three supported modes: Local (Standalone) Mode, Pseudo-Distributed Mode, or Fully-Distributed Mode.

The following example runs Hadoop in standalone mode: it copies the unpacked conf directory to use as input and then finds and displays every match of the given regular expression. Output is written to the given output directory.

    $ mkdir input
    $ cp etc/hadoop/*.xml input
    $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar grep input output 'dfs[a-z.]+'
    $ cat output/*
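About the specific errors in your screenshots: "java.net.ConnectException: Connection refused" on 10.142.202.50:8020 almost always means that no NameNode process is listening at the address fs.defaultFS points to, usually because the NameNode was never started or failed to come up after formatting. For a single-node cluster you want pseudo-distributed mode. Here is a minimal sketch following the Apache single-node setup guide (hdfs://localhost:9000 is the guide's default value and an assumption here; if your environment is configured for 10.142.202.50:8020, keep that address and focus on getting the NameNode running):

In etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

In etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Then format the NameNode once and start the HDFS daemons:

$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
$ jps
# jps should list NameNode, DataNode and SecondaryNameNode
$ bin/hdfs dfs -ls /

If jps does not show a NameNode, check the NameNode log under the logs/ directory for the underlying error from your first screenshot.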

    I hope this helps.

    Regards,
    Neha Pandey
     
