Question:
I'm creating a MapReduce application for HDInsight in C#. I need to process whole input files (one file per mapper).
As far as I know, Hadoop has two ways to achieve this:
I can't figure out how to implement either of these options with C# on HDInsight.
Details:
I would either
use Microsoft.Hadoop.MapReduce and run the job via hadoop.MapReduceJob.ExecuteJob<MyJob>();
or simply create a console application and launch it from Azure PowerShell via

$mrJobDef = New-AzureHDInsightStreamingMapReduceJobDefinition -JobName MyJob -StatusFolder $mrStatusOutput -Mapper $mrMapper -Reducer $mrReducer -InputPath $mrInput -OutputPath $mrOutput
$mrJobDef.Files.Add($mrMapperFile)
$mrJob = Start-AzureHDInsightJob -Cluster $clusterName -JobDefinition $mrJobDef
A solution using either approach would be a great help.
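For context, a job built on Microsoft.Hadoop.MapReduce typically looks like the sketch below. This is not the asker's code: the mapper logic, class names, and input/output paths are all placeholders, and the structure is only a sketch based on the usual shape of that SDK's samples.

```csharp
using Microsoft.Hadoop.MapReduce;

// Hypothetical mapper: the SDK calls Map once per input line.
public class MyMapper : MapperBase
{
    public override void Map(string inputLine, MapperContext context)
    {
        // Placeholder logic: emit each line under a constant key.
        context.EmitKeyValue("line", inputLine);
    }
}

// Hypothetical job definition tying the mapper to input/output locations.
public class MyJob : HadoopJob<MyMapper>
{
    public override HadoopJobConfiguration Configure(ExecutorContext context)
    {
        var config = new HadoopJobConfiguration
        {
            InputPath = "/example/input",     // placeholder path
            OutputFolder = "/example/output"  // placeholder path
        };
        return config;
    }
}

// Submission, matching the ExecuteJob call mentioned above:
// var hadoop = Hadoop.Connect();
// hadoop.MapReduceJob.ExecuteJob<MyJob>();
```

Note that because the SDK hands the mapper one line at a time, this shape alone does not give you whole-file processing, which is exactly the difficulty the question describes.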
Answer (score: 1):
You can set min_splitsize using the -Defines parameter in PowerShell:

$clusterName = "YourClusterName"
$jobConfig = @{ "min_splitsize"="512mb"; "mapred.output.compression.codec"="org.apache.hadoop.io.compress.GzipCodec" }
$myWordCountJob = New-AzureHDInsightMapReduceJobDefinition -JarFile "/example/jars/hadoop-examples.jar" -ClassName "wordcount" -JobName "WordCountJob" -StatusFolder "/MyMRJobs/WordCountJobStatus" -Defines $jobConfig
Or in C#:
var mapReduceJob = new MapReduceJobCreateParameters()
{
ClassName = "wordcount", // required
JobName = "MyWordCountJob", //optional
JarFile = "/example/jars/hadoop-examples.jar", // Required, alternative syntax: wasb://hdijobs@azimasv2.blob.core.windows.net/example/jar/hadoop-examples.jar
StatusFolder = "/AzimMRJobs/WordCountJobStatus" //Optional, but good to use to know where logs are uploaded in Azure Storage
};
mapReduceJob.Defines.Add("min_splitsize", "512mb");
Although I don't think this guarantees that each file will be read in its entirety. For that you may need the Java SDK, as explained here: http://www.andrewsmoll.com/3-hacks-for-hadoop-and-hdinsight-clusters/
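The C# snippet above only builds the job parameters; it never submits them. Submission in that era of the HDInsight .NET SDK looked roughly like the sketch below. The cluster URI, user name, and password are placeholders, and the exact credential type is an assumption on my part rather than something stated in the answer.

```csharp
using System;
using Microsoft.Hadoop.Client;

// Hypothetical credentials; replace with your cluster's actual values.
var credentials = new BasicAuthCredential
{
    Server = new Uri("https://YourClusterName.azurehdinsight.net"),
    UserName = "admin",
    Password = "YourPassword"
};

// Connect and submit the MapReduceJobCreateParameters built above.
var client = JobSubmissionClientFactory.Connect(credentials);
var jobResults = client.CreateMapReduceJob(mapReduceJob);
```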