Error in WordCount program execution

Discussion in 'Big Data and Analytics' started by Monika Goel_1, Dec 11, 2015.

  1. Monika Goel_1

    Monika Goel_1 Member
    Alumni

    Joined:
    Nov 2, 2015
    Messages:
    2
    Likes Received:
    0
    Hi, I have written an MR program in Eclipse on my local (Windows) machine and imported all the required Hadoop libraries. I am getting the below error while executing the program.

    Code:
    package org.apache.hadoop.examples;

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            //job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

    Error:
    15/12/11 13:43:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    15/12/11 13:43:56 ERROR security.UserGroupInformation: PriviledgedActionException as:Monika_G cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Monika_G\mapred\staging\Monika_G1772588335\.staging to 0700
    Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Monika_G\mapred\staging\Monika_G1772588335\.staging to 0700
    at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Unknown Source)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:62)

    Can you please let me know how to resolve this error?
     
    #1
  2. Parminder Sohal(2946)

    Parminder Sohal(2946) Active Member
    Trainer

    Joined:
    Aug 25, 2014
    Messages:
    39
    Likes Received:
    6
    Please compare your code to the following example:

    package com.parminder;

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class WordCountTesting {

        public static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private Text word = new Text();

            public void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                System.out.println("Starting of Map function");
                String line = value.toString();
                System.out.println("Line:" + line);
                StringTokenizer token = new StringTokenizer(line);
                System.out.println("Tokens:" + token);
                while (token.hasMoreTokens()) {
                    word.set(token.nextToken());
                    context.write(word, new IntWritable(1));
                }
                System.out.println("End of Map function");
            }
        }

        public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                System.out.println("Starting reducer class");
                for (IntWritable val : values) {
                    sum += val.get();
                    System.out.println("Values:" + val.get());
                }
                System.out.println("Sum:" + sum);
                context.write(key, new IntWritable(sum));
                System.out.println("End of reducer class");
            }
        }

        /**
         * @param args
         */
        public static void main(String[] args) throws Exception {
            System.out.println("Starting Main or Driver class");
            Configuration conf = new Configuration();
            String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();

            Job job = new Job(conf, "Word Counter");
            job.setJarByClass(WordCountTesting.class);
            job.setMapperClass(MyMapper.class);
            job.setReducerClass(MyReducer.class);

            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
            FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

            System.out.println(job.waitForCompletion(true) ? 0 : 1);
            System.out.println("End of Main or Driver class");
        }
    }
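
    One note on running this version: because it goes through GenericOptionsParser, any generic Hadoop options (such as -fs or -jt) are consumed first and only the remaining arguments are used as the input and output paths, so those paths should come last on the command line.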
     
    #2
  3. Parminder Sohal(2946)

    Parminder Sohal(2946) Active Member
    Trainer

    Joined:
    Aug 25, 2014
    Messages:
    39
    Likes Received:
    6
    If you still have questions, please let me know...
     
    #3
    Last edited: Dec 12, 2015
  4. Monika Goel_1

    Monika Goel_1 Member
    Alumni

    Joined:
    Nov 2, 2015
    Messages:
    2
    Likes Received:
    0
    Hi Param. The code seems to be the same; there are only some extra SOP (System.out.println) statements in yours. I even tried with your code and am getting the same error.
     
    #4
  5. Parminder Sohal(2946)

    Parminder Sohal(2946) Active Member
    Trainer

    Joined:
    Aug 25, 2014
    Messages:
    39
    Likes Received:
    6
    Hi Monika,

    Maybe try the two options below...
    1) You may be using a VMware image to run the underlying OS for your Hadoop.
    The problem with an image is that you unintentionally stop the running namenode whenever you close the image,
    so this error can simply mean that your namenode is not running.
    Please start your namenode before you do any HDFS-based operation.

    2) Or, instead of passing HDFS file locations as arguments as specified in the tutorial, you may have directly mentioned the local path of the input file (see the sketch below)....
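
    To make option 2 concrete, here is a minimal sketch, not part of the course code, that forces the configuration to point at HDFS and checks that the input path is visible before any job is submitted. The class name HdfsCheck and the address hdfs://localhost:9000 are placeholders for illustration; adjust them to your setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point at HDFS explicitly instead of the local file system.
            // hdfs://localhost:9000 is a placeholder; use your namenode's address.
            conf.set("fs.default.name", "hdfs://localhost:9000");

            FileSystem fs = FileSystem.get(conf);
            // These calls fail with a connection error if the namenode
            // is not running (option 1 above).
            System.out.println("Connected to: " + fs.getUri());
            System.out.println("Input path exists: " + fs.exists(new Path(args[0])));
        }
    }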

    Let me know if this helps....
     
    #5
  6. Karthik Shivanna

    Karthik Shivanna Well-Known Member
    Staff Member Simplilearn Support

    Joined:
    Oct 6, 2014
    Messages:
    88
    Likes Received:
    7
    Hi Monika,

    This is because of file permissions: only the root user is able to read, write, and execute; the group and other users don't have any permissions. Please change the file permissions to 777 and then try again, using the command below:

    chmod -R 777 filename
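
    If you would rather do the same thing from Java instead of the shell, here is a minimal sketch using Hadoop's FileSystem API, assuming the path to fix is passed as the first argument (the class name OpenUpPermissions is only an illustration, and note this opens the path to all users just like chmod 777):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class OpenUpPermissions {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // 0777 = read/write/execute for owner, group, and others,
            // the single-path equivalent of `chmod 777`.
            // Note: unlike `chmod -R`, setPermission is not recursive.
            fs.setPermission(new Path(args[0]), new FsPermission((short) 0777));
        }
    }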

    Regards
    Karthik
     
    #6
  7. Parminder Sohal(2946)

    Parminder Sohal(2946) Active Member
    Trainer

    Joined:
    Aug 25, 2014
    Messages:
    39
    Likes Received:
    6
    Thanks, Karthik...
     
    #7
