Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

Asked: 2017-05-28 11:22:19

Tags: java hadoop jar executable-jar hadoop2

My Hadoop application fails while running: as soon as the job reaches map 0% reduce 0%, every map attempt fails with this error:

17/06/02 16:21:44 INFO mapreduce.Job: Task Id : attempt_1496396027749_0015_m_000000_0, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

I am stuck here; any help would be appreciated. The full console output is below:

hduser@master:/home/mnh/Desktop$ hadoop jar  13.jar /usr/local/hadoop/input/cars.mp4 /usr/local/hadoop/cars9
17/06/02 16:07:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/02 16:07:37 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.137.52:8050
17/06/02 16:07:38 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/06/02 16:08:35 INFO input.FileInputFormat: Total input paths to process : 1
17/06/02 16:08:35 INFO mapreduce.JobSubmitter: number of splits:1
17/06/02 16:08:35 INFO Configuration.deprecation: mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout
17/06/02 16:08:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1496396027749_0012
17/06/02 16:08:36 INFO impl.YarnClientImpl: Submitted application application_1496396027749_0012
17/06/02 16:08:36 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1496396027749_0012/
17/06/02 16:08:36 INFO mapreduce.Job: Running job: job_1496396027749_0012
17/06/02 16:08:46 INFO mapreduce.Job: Job job_1496396027749_0012 running in uber mode : false
17/06/02 16:08:46 INFO mapreduce.Job:  map 0% reduce 0%
17/06/02 16:08:53 INFO mapreduce.Job: Task Id : attempt_1496396027749_0012_m_000000_0, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
17/06/02 16:09:00 INFO mapreduce.Job: Task Id : attempt_1496396027749_0012_m_000000_1, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
17/06/02 16:09:06 INFO mapreduce.Job: Task Id : attempt_1496396027749_0012_m_000000_2, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
17/06/02 16:09:14 INFO mapreduce.Job:  map 100% reduce 100%
17/06/02 16:09:15 INFO mapreduce.Job: Job job_1496396027749_0012 failed with state FAILED due to: Task failed task_1496396027749_0012_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

17/06/02 16:09:15 INFO mapreduce.Job: Counters: 12
    Job Counters 
        Failed map tasks=4
        Launched map tasks=4
        Other local map tasks=3
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=19779
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=19779
        Total vcore-seconds taken by all map tasks=19779
        Total megabyte-seconds taken by all map tasks=20253696
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0

My main class:

package fypusinghadoop;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import output.VideoOutputFormat;
import input.VideoInputFormat;

public class FypUsingHadoop {
    private static final Log LOG = LogFactory.getLog(FypUsingHadoop.class);

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 30-minute task timeout (mapreduce.task.timeout replaces the deprecated mapred.task.timeout key)
        long milliSeconds = 1800000;
        conf.setLong("mapreduce.task.timeout", milliSeconds);

        Job job = Job.getInstance(conf);
        job.setJarByClass(FypUsingHadoop.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(VideoObject.class);
        job.setMapperClass(VidMapper.class);
        job.setReducerClass(VidReducer.class);
        job.setInputFormatClass(VideoInputFormat.class);
        job.setOutputFormatClass(VideoOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
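
Side note: the console output above also warns "Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this." A driver along the following lines would address that warning; this is only a sketch that reuses the class names from the code above, not the original driver:

package fypusinghadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import output.VideoOutputFormat;
import input.VideoInputFormat;

// Sketch of a Tool/ToolRunner driver; the job setup mirrors the main() above.
public class FypUsingHadoopTool extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects any generic options (-D, -files, ...) parsed by ToolRunner.
        Job job = Job.getInstance(getConf(), "video processing");
        job.setJarByClass(FypUsingHadoopTool.class);
        job.setMapperClass(VidMapper.class);
        job.setReducerClass(VidReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(VideoObject.class);
        job.setInputFormatClass(VideoInputFormat.class);
        job.setOutputFormatClass(VideoOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new FypUsingHadoopTool(), args));
    }
}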

Here is my Mapper class:

package fypusinghadoop;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.lang.reflect.Field;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Mapper.Context;

import static org.bytedeco.javacpp.helper.opencv_objdetect.cvHaarDetectObjects;

import org.bytedeco.javacpp.Loader;
import org.bytedeco.javacpp.opencv_core;
import org.bytedeco.javacpp.opencv_core.CvMemStorage;
import org.bytedeco.javacpp.opencv_core.CvRect;
import org.bytedeco.javacpp.opencv_core.CvScalar;
import org.bytedeco.javacpp.opencv_core.CvSeq;
import org.bytedeco.javacpp.opencv_core.CvSize;
import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8U;
import org.bytedeco.javacpp.opencv_core.IplImage;
import static org.bytedeco.javacpp.opencv_core.cvClearMemStorage;
import static org.bytedeco.javacpp.opencv_core.cvClearSeq;
import static org.bytedeco.javacpp.opencv_core.cvCreateImage;
import static org.bytedeco.javacpp.opencv_core.cvGetSeqElem;
import static org.bytedeco.javacpp.opencv_core.cvLoad;
import static org.bytedeco.javacpp.opencv_core.cvPoint;
import static org.bytedeco.javacpp.opencv_imgproc.CV_AA;
import static org.bytedeco.javacpp.opencv_imgproc.CV_BGR2GRAY;
import static org.bytedeco.javacpp.opencv_imgproc.cvCvtColor;
import static org.bytedeco.javacpp.opencv_imgproc.cvRectangle;
import static org.bytedeco.javacpp.opencv_objdetect.CV_HAAR_DO_CANNY_PRUNING;
import org.bytedeco.javacpp.opencv_objdetect.CvHaarClassifierCascade;
import org.bytedeco.javacv.CanvasFrame;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;
import org.bytedeco.javacv.FrameRecorder;
import org.bytedeco.javacv.OpenCVFrameConverter;
import org.bytedeco.javacv.OpenCVFrameGrabber;
import org.bytedeco.javacv.OpenCVFrameRecorder;
import org.opencv.core.Core;

public class VidMapper extends Mapper<Text, VideoObject, Text, VideoObject> {

    private static final Log LOG = LogFactory.getLog(VidMapper.class);
    private static FrameGrabber grabber;
    private static Frame currentFrame;

    public void map(Text key, VideoObject value, Context context)
            throws IOException, InterruptedException {
        {
            System.out.println("hamzaaj  : " + key);
            ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(
                    value.getVideoByteArray());
            LOG.info("Log__VideoConverter__byteArray: "
                    + byteArrayInputStream.available());

            String fileName = key.toString();
            int id = value.getId();
            LocalFileSystem fs = FileSystem
                    .getLocal(context.getConfiguration());
            Path filePath = new Path("/usr/local/hadoop/ia3/newVideo", fileName);
            Path resFile = new Path("/usr/local/hadoop/ia3/", "res_" + fileName);
            System.out.println("File to Process :" + filePath.toString());
            FSDataOutputStream out = fs.create(filePath, true);
            out.write(value.getVideoByteArray());
            out.close();
            try {

                System.out.println("Setting Properties");
                System.setProperty("java.library.path",
                        "/home/mnh/Documents/OpenCV/opencv-3.2.0/build/lib");
                 System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                System.load("/home/mnh/Documents/OpenCV/opencv-3.2.0/build/lib/libopencv_core.so");

                System.out.println("Loading classifier");
                CvHaarClassifierCascade classifier = new CvHaarClassifierCascade(
                        cvLoad("/home/mnh/Desktop/haarcascade_frontalface_alt.xml"));
                if (classifier.isNull()) {
                    System.err.println("Error loading classifier file");
                }
                grabber = new FFmpegFrameGrabber(
                        "/usr/local/hadoop/input/cars.mp4");
                grabber.start();
                OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();

                IplImage grabbedImage = converter.convert(grabber.grab());
                int width = grabbedImage.width();
                int height = grabbedImage.height();
                IplImage grayImage = IplImage.create(width, height,
                        IPL_DEPTH_8U, 1);
                IplImage rotatedImage = grabbedImage.clone();

                CvMemStorage storage = CvMemStorage.create();
                CvSize frameSize = new CvSize(grabber.getImageWidth(),
                        grabber.getImageHeight());
                CvSeq faces = null;
                FrameRecorder recorder = FrameRecorder.createDefault(
                        resFile.toString(), width, height);
                recorder.start();
                System.out.println("Video processing .........started");
                // CanvasFrame frame = new CanvasFrame("Some Title",
                // CanvasFrame.getDefaultGamma()/grabber.getGamma());
                CanvasFrame frame = new CanvasFrame("Some Title",
                        CanvasFrame.getDefaultGamma() / grabber.getGamma());
                int i = 0;
                while (((grabbedImage = converter.convert(grabber.grab())) != null)) {
                    i++;
                    cvClearMemStorage(storage);
                    // Let's try to detect some faces! but we need a grayscale
                    // image...
                    cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
                    faces = cvHaarDetectObjects(grayImage, classifier, storage,
                            1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
                    int total = faces.total();
                    for (int j = 0; j < total; j++) {
                        CvRect r = new CvRect(cvGetSeqElem(faces, j));
                        int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                        cvRectangle(grabbedImage, cvPoint(x, y),
                                cvPoint(x + w, y + h), CvScalar.RED, 1, CV_AA,
                                0);
                    }
                    cvClearSeq(faces);
                    Frame rotatedFrame = converter.convert(grabbedImage);
                    recorder.record(rotatedFrame);
                    System.out.println("Hello" + i);
                }
                grabber.stop();
                recorder.stop();
                System.out.println("Video processing .........Completed");

            } catch (Exception e) {
                e.printStackTrace();
            }
            // available() is not guaranteed to be the file length; size the buffer
            // from the file status before reading the result back.
            FSDataInputStream fin = fs.open(resFile);
            byte[] b = new byte[(int) fs.getFileStatus(resFile).getLen()];
            fin.readFully(b);
            fin.close();
            VideoObject vres = new VideoObject(b);
            vres.setId(id);
            context.write(key, vres);
            // fs.delete(new Path(resFile.toString()),false);
            fs.delete(filePath, false);
        }

    }
}

Here is my Reducer class:

package fypusinghadoop;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class VidReducer extends Reducer<Text, VideoObject, Text, VideoObject> {

    public void reduce(Text key, Iterable<VideoObject> values, Context context)
            throws IOException, InterruptedException {
        Iterator<VideoObject> it = values.iterator();
        while (it.hasNext()) {
            // Consume each value exactly once so none are skipped.
            VideoObject v = it.next();
            System.out.println("Reducer   " + v);
            context.write(key, v);
        }
    }
}

Please point me to the correct way to build a runnable JAR that executes properly.

1 Answer:

Answer 0 (score: 0)

You are running a jar that was compiled against the Hadoop 1 libraries on a Hadoop 2+ installation. In Hadoop 1.x, org.apache.hadoop.mapreduce.TaskAttemptContext is a class, but in Hadoop 2.x it became an interface, so bytecode compiled against the old API fails at runtime with exactly this "Found interface ... but class was expected" message.

When compiling your Hadoop code, replace hadoop-core-1.x.y.jar (for example, hadoop-core-1.2.1.jar) with the following two jars:
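
(The jar list itself is cut off in this excerpt. For what it's worth, and as an assumption about what was intended rather than a quote from the original answer: Hadoop 2 builds are normally compiled against org.apache.hadoop:hadoop-common and org.apache.hadoop:hadoop-mapreduce-client-core, with the version matching the cluster, instead of the old monolithic hadoop-core. After rebuilding the jar against the Hadoop 2 artifacts, and making sure no Hadoop 1 classes are bundled into the fat jar, the "Found interface ... but class was expected" error should go away.)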