This might be something obvious I am overlooking, but I cannot figure out why Apache Crunch will not write output to a file for a very simple program I am writing to learn Crunch.
The code is as follows:
import org.apache.crunch.Pipeline;
import org.apache.hadoop.conf.Configuration;
....
private Pipeline pipeline;
private Configuration etlConf;
....
this.etlConf = getConf();
this.pipeline = new MRPipeline(TestETL.class, etlConf);
....
// Read file
logger.info("Reading input file: " + inputFileURI.toString());
PCollection<String> input = pipeline.readTextFile(inputFileURI.toString());
System.out.println("INPUT SIZE = " + input.asCollection().getValue().size());
// Write file
logger.info("Writing Final output to file: " + outputFileURI.toString());
input.write(
    To.textFile(outputFileURI.toString()),
    WriteMode.OVERWRITE
);
The input file itself is very simple. This is the logging I see when I execute this jar using hadoop:
18/12/31 09:41:51 INFO etl.TestClass: Executing Test run
18/12/31 09:41:51 INFO etl.TestETL: Reading input file: /user/sw029693/process_analyzer/input/input.txt
INPUT SIZE = 3
18/12/31 09:41:51 INFO etl.TestETL: Writing Final output to file: /user/sw029693/process_analyzer/output/occurences
18/12/31 09:41:51 INFO impl.FileTargetImpl: Will write output files to new path: /user/sw029693/process_analyzer/output/occurences
18/12/31 09:41:51 INFO etl.TestETL: Cleaning-up TestETL run
18/12/31 09:41:51 INFO etl.TestETL: ETL completed with status 0.
Although the logging suggests the output location should be written to, I cannot see any files being created. Any ideas?
Answer 0 (score: 0)
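The likely cause is that a Crunch pipeline is lazy: input.write(...) only registers a target, and no job actually runs until pipeline.run() or pipeline.done() is called, and the question's snippet never shows such a call. A minimal sketch of that fix, assuming the question's TestETL class and its inputFileURI/outputFileURI variables:

    // Sketch: the same read/write as the question, plus the call that actually runs the job.
    Pipeline pipeline = new MRPipeline(TestETL.class, getConf());
    PCollection<String> input = pipeline.readTextFile(inputFileURI.toString());
    input.write(To.textFile(outputFileURI.toString()), Target.WriteMode.OVERWRITE);
    // Without this call, the write above is never executed:
    PipelineResult result = pipeline.done();

The standalone example below takes a different route: it materializes the PCollection on the client and writes the output file itself.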
package com.hadoop.crunch;

import java.io.*;
import java.util.Collection;
import java.util.Iterator;

import org.apache.crunch.*;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache.crunch.io.From;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.util.*;
import org.apache.log4j.Logger;

public class App extends Configured implements Tool, Serializable {
    private static final long serialVersionUID = 1L;
    private static Logger LOG = Logger.getLogger(App.class);

    @Override
    public int run(String[] args) throws Exception {
        final Path fileSource = new Path(args[0]);
        final Path outFileName = new Path(args[1], "event-" + System.currentTimeMillis() + ".txt");

        // MRPipeline translates the overall pipeline into one or more MapReduce jobs
        Pipeline pipeline = new MRPipeline(App.class, getConf());

        // Specify the input data to the pipeline.
        // The input data is contained in a PCollection
        PCollection<String> inDataPipe = pipeline.read(From.textFile(fileSource));

        // Inject an operation into the crunch data pipeline
        PObject<Collection<String>> dataCollection = inDataPipe.asCollection();

        // Iterate over the collection; getValue() forces the pipeline to materialize it
        Iterator<String> iterator = dataCollection.getValue().iterator();
        FileSystem fs = FileSystem.getLocal(getConf());
        BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(fs.create(outFileName, true)));
        while (iterator.hasNext()) {
            String data = iterator.next();
            bufferedWriter.write(data);
            bufferedWriter.newLine();
        }
        bufferedWriter.close();

        // Finish the crunch pipeline and clean up any remaining resources
        PipelineResult result = pipeline.done();
        return result.succeeded() ? 0 : 1;
    }

    public static void main(String[] args) {
        if (args.length != 2) throw new RuntimeException("Usage: hadoop jar <inputPath> <outputPath>");
        try {
            ToolRunner.run(new Configuration(), new App(), args);
        } catch (Exception e) {
            LOG.error(e.getLocalizedMessage());
        }
    }
}
Usage: run the Java program with two arguments: the first arg is the input file name or directory, the second arg is the output file directory. The output file name is event-<Timestamp>.txt. Remember there is only a single space between args{0} and args{1}:
/user/sw029693/process_analyzer/input/input.txt /user/sw029693/process_analyzer/input/
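For example, an invocation might look like this (the jar name crunch-example.jar is hypothetical; the paths are the ones above, and the main class must be given explicitly if it is not set in the jar's manifest):

    hadoop jar crunch-example.jar com.hadoop.crunch.App \
        /user/sw029693/process_analyzer/input/input.txt \
        /user/sw029693/process_analyzer/input/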