MapReduce-KNN for Hadoop: running multiple test cases from a single data file

Date: 2018-08-28 08:00:40

Tags: java linked-list treemap knn

Background: [skip to the next section for the exact problem]

I am currently working with Hadoop as a small project at university (not a mandatory project; I am doing it because I wanted to).

My plan is to use 5 PCs in one of the labs (master + 4 slaves) to run a KNN algorithm on a large dataset and measure the running time, among other things.

I knew I could find basic code on the internet, and I did (https://github.com/matt-hicks/MapReduce-KNN). It runs fine for a single test case, but what I have is a very large file containing hundreds of test cases. Therefore, I need to repeat the same code for each one.

The problem

tl;dr: I have a KNN program that takes only one test case at a time, but I want to make it iterative so that it can work on multiple test cases.

My solution:

I am not very experienced with this, and from the basics I know, I decided to turn the variables and maps into arrays of variables and arrays of maps.

So this:

    public static class KnnMapper extends Mapper<Object, Text, NullWritable, DoubleString>
    {
        DoubleString distanceAndModel = new DoubleString();
        TreeMap<Double, String> KnnMap = new TreeMap<Double, String>();

        // Declaring some variables which will be used throughout the mapper
        int K;

        double normalisedSAge;
        double normalisedSIncome;
        String sStatus;
        String sGender;
        double normalisedSChildren;

became:

    DoubleString distanceAndModel = new DoubleString();
    TreeMap<Double, String>[] KnnMap = new TreeMap<Double, String>[1000];

    // Declaring some variables which will be used throughout the mapper
    int[] K = new int[1000];

    double[] normalisedSAge = new double[1000];
    double[] normalisedSIncome = new double[1000];
    String[] sStatus = new String[1000];
    String[] sGender = new String[1000];
    double[] normalisedSChildren = new double[1000];
    int n = 0;

and this:

    protected void setup(Context context) throws IOException, InterruptedException
    {
        if (context.getCacheFiles() != null && context.getCacheFiles().length > 0)
        {
            // Read parameter file using alias established in main()
            String knnParams = FileUtils.readFileToString(new File("./knnParamFile"));
            StringTokenizer st = new StringTokenizer(knnParams, ",");

            // Using the variables declared earlier, values are assigned to K and to the test dataset, S.
            // These values will remain unchanged throughout the mapper
            K = Integer.parseInt(st.nextToken());
            normalisedSAge = normalisedDouble(st.nextToken(), minAge, maxAge);
            normalisedSIncome = normalisedDouble(st.nextToken(), minIncome, maxIncome);
            sStatus = st.nextToken();
            sGender = st.nextToken();
            normalisedSChildren = normalisedDouble(st.nextToken(), minChildren, maxChildren);
        }

    }

became this:

    protected void setup(Context context) throws IOException, InterruptedException
    {
        if (context.getCacheFiles() != null && context.getCacheFiles().length > 0)
        {
            // Read parameter file using alias established in main()
            String knnParams = FileUtils.readFileToString(new File("./knnParamFile"));
            // Splitting the input file on newline / carriage return characters
            // (i.e., a Windows-style return key in the input)
            StringTokenizer lineSt = new StringTokenizer(knnParams, "\n\r");

            // Running a loop to tokenize each line of input, i.e. each test case
            while (lineSt.hasMoreTokens()) {
                String nextLine = lineSt.nextToken();                    // Current line as a string
                StringTokenizer st = new StringTokenizer(nextLine, ","); // Tokenizing the current line into its fields

                // Using the variables declared earlier, values are assigned to K and to the test dataset, S.
                // These values will remain unchanged throughout the mapper
                K[n] = Integer.parseInt(st.nextToken());
                normalisedSAge[n] = normalisedDouble(st.nextToken(), minAge, maxAge);
                normalisedSIncome[n] = normalisedDouble(st.nextToken(), minIncome, maxIncome);
                sStatus[n] = st.nextToken();
                sGender[n] = st.nextToken();
                normalisedSChildren[n] = normalisedDouble(st.nextToken(), minChildren, maxChildren);
                n++;
            }
        }
    }

The same goes for the reducer class.
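The two-level tokenizing in the setup method above can be sketched in isolation. The following standalone example (the class and method names are illustrative, not from the original code) parses the same kind of multi-line parameter input, but collects values into a growable list instead of a fixed-size array of 1000; only the K column is extracted, since the remaining fields per line would be handled the same way:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

// Minimal sketch of the per-line parameter parsing used in setup().
public class ParamParser {
    public static List<Integer> parseKValues(String knnParams) {
        List<Integer> kValues = new ArrayList<>();
        // Outer tokenizer: one token per line (handles \n and \r\n endings,
        // since consecutive delimiter characters are skipped together)
        StringTokenizer lineSt = new StringTokenizer(knnParams, "\n\r");
        while (lineSt.hasMoreTokens()) {
            // Inner tokenizer: comma-separated fields of the current test case
            StringTokenizer st = new StringTokenizer(lineSt.nextToken(), ",");
            kValues.add(Integer.parseInt(st.nextToken())); // first field is K
        }
        return kValues;
    }

    public static void main(String[] args) {
        System.out.println(parseKValues("3,25,50000\n5,30,60000\r\n7,40,70000"));
        // -> [3, 5, 7]
    }
}
```

A growable list also removes the hard-coded limit of 1000 test cases.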

This is my first time working with TreeMaps. I have studied and used trees before, but not maps or TreeMaps. I still tried to turn them into arrays as well, but that produced errors:

    /home/hduser/Desktop/knn/KnnPattern.java:81: error: generic array creation
        TreeMap<Double, String>[] KnnMap = new TreeMap<Double, String>[1000];
                                           ^

    /home/hduser/Desktop/knn/KnnPattern.java:198: error: incompatible types: double[] cannot be converted to double
            normalisedRChildren, normalisedSAge, normalisedSIncome, sStatus, sGender, normalisedSChildren);
                                 ^

    /home/hduser/Desktop/knn/KnnPattern.java:238: error: generic array creation
        TreeMap<Double, String>[] KnnMap = new TreeMap<Double, String>[1000];
                                           ^

    /home/hduser/Desktop/knn/KnnPattern.java:283: error: bad operand types for binary operator '>'
        if (KnnMap[num].size() > K)
                               ^
        first type:  int
        second type: int[]

Now I thought that if I tried a linked list of TreeMaps, it might work.

However, so far at uni I have mostly worked with C/C++ and Python. OOP here seems to make people's lives easier, but I am not 100% sure how to use it.

My questions:

Is it possible to make a linked list of TreeMaps?

Is there a linked-list alternative to:

TreeMap<Double, String>[] KnnMap = new TreeMap<Double, String>[1000];

Is my approach to the problem correct? Making the code iterative should help it run over all the test cases, right?

I will keep trying and work through the errors from there, but this is something I have been stuck on for a few days now.

I apologise if someone has already asked this, but I could not find anything, so I had to write a question. If you think this has been answered before, please share a link to any relevant answer.

Thank you! Also, one more thing: is there anything else I should keep in mind when working with TreeMaps, and especially with linked lists of TreeMaps?

1 answer:

Answer 0 (score: 0)

Regarding the error messages

    /home/hduser/Desktop/knn/KnnPattern.java:81: error: generic array creation
        TreeMap<Double, String>[] KnnMap = new TreeMap<Double, String>[1000];
                                           ^

    /home/hduser/Desktop/knn/KnnPattern.java:238: error: generic array creation
        TreeMap<Double, String>[] KnnMap = new TreeMap<Double, String>[1000];
                                           ^

These errors occur because you are trying to create an array with a generic component type, which Java does not support since generic type information is lost at runtime (type erasure). One workaround (if you really need an array) is to create a List of TreeMap objects and convert it to an array afterwards:

// TreeMap<Double, String>[] KnnMap = new TreeMap<Double, String>[1000];
List<TreeMap<Double, String>> KnnMapList = new LinkedList<>();
// Note: toArray() with no argument returns Object[], which cannot be cast to
// TreeMap[]; passing a typed array avoids a ClassCastException at runtime.
TreeMap<Double, String>[] KnnMap = KnnMapList.toArray(new TreeMap[0]);

See this question for more information.
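As a side note, if an array is not strictly required, a linked list of TreeMaps can be used directly, which sidesteps the generic array problem entirely. Here is a minimal standalone sketch (class and variable names are illustrative, not taken from the original KnnPattern code):

```java
import java.util.LinkedList;
import java.util.List;
import java.util.TreeMap;

public class TreeMapListDemo {
    public static void main(String[] args) {
        // A linked list of TreeMaps: no generic array creation involved.
        List<TreeMap<Double, String>> knnMaps = new LinkedList<>();
        for (int i = 0; i < 3; i++) {
            knnMaps.add(new TreeMap<Double, String>());
        }

        // TreeMap keeps its keys sorted, so for distance -> model entries
        // firstKey() always yields the smallest distance inserted so far.
        knnMaps.get(0).put(0.5, "modelA");
        knnMaps.get(0).put(0.2, "modelB");

        System.out.println(knnMaps.get(0).firstKey()); // 0.2
        System.out.println(knnMaps.size());            // 3
    }
}
```

Because get(i) on a LinkedList walks the list in O(n), an ArrayList would be the more idiomatic choice when elements are accessed by index, as they are in the modified mapper.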


    /home/hduser/Desktop/knn/KnnPattern.java:198: error: incompatible types: double[] cannot be converted to double
            normalisedRChildren, normalisedSAge, normalisedSIncome, sStatus, sGender, normalisedSChildren);
                                 ^

Looking at the source code on GitHub, I realised that you probably did not modify the following method call in KnnMapper#map(Object, Text, Context):

double tDist = totalSquaredDistance(normalisedRAge, normalisedRIncome, rStatus, rGender,
                    normalisedRChildren, normalisedSAge, normalisedSIncome, sStatus, sGender, normalisedSChildren);

It should probably be

double tDist = totalSquaredDistance(normalisedRAge, normalisedRIncome, rStatus, rGender,
                    normalisedRChildren, normalisedSAge[n], normalisedSIncome[n], sStatus[n], sGender[n], normalisedSChildren[n]);

But I guess these modifications will not give you the desired functionality, because KnnMapper#map(Object, Text, Context) is called only once per key/value pair (as described here), whereas you probably want to invoke it n times.


The specific problem

To prevent further trouble, I suggest you leave the upper code of the GitHub class unchanged and only modify the KnnPattern#main(String[]) method so that it invokes the job n times, as described in this question.


Edit: Example

Here is a modified KnnPattern#main(String[]) method that reads the data file line by line, creates a temporary file whose content is the current line, and starts a job with that temporary file as the cache file.
(This assumes you are using at least Java 7.)

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
...
public class KnnPattern
{
  ...
    public static void main(String[] args) throws Exception {
        // Create configuration
        Configuration conf = new Configuration();

        if (args.length != 3) {
            System.err.println("Usage: KnnPattern <in> <out> <parameter file>");
            System.exit(2);
        }

        try (final BufferedReader br = new BufferedReader(new FileReader(args[2]))) {
            int n = 1;
            String line;
            while ((line = br.readLine()) != null) {
                // create temporary file with content of current line
                final File tmpDataFile = File.createTempFile("hadoop-test-", null);
                try (BufferedWriter tmpDataWriter = new BufferedWriter(new FileWriter(tmpDataFile))) {
                    tmpDataWriter.write(line);
                    tmpDataWriter.flush();
                }

                // Create job
                Job job = Job.getInstance(conf, "Find K-Nearest Neighbour #" + n);
                job.setJarByClass(KnnPattern.class);
                // Set the third parameter when running the job to be the parameter file and give it an alias
                job.addCacheFile(new URI(tmpDataFile.getAbsolutePath() + "#knnParamFile")); // Parameter file containing test data

                // Setup MapReduce job
                job.setMapperClass(KnnMapper.class);
                job.setReducerClass(KnnReducer.class);
                job.setNumReduceTasks(1); // Only one reducer in this design

                // Specify key / value
                job.setMapOutputKeyClass(NullWritable.class);
                job.setMapOutputValueClass(DoubleString.class);
                job.setOutputKeyClass(NullWritable.class);
                job.setOutputValueClass(Text.class);

                // Input (the data file) and Output (the resulting classification)
                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1] + "_" + n));

                // Execute job
                final boolean jobSucceeded = job.waitForCompletion(true);

                // clean up
                tmpDataFile.delete();

                if (!jobSucceeded) {
                    // return error status if job failed
                    System.exit(1);
                }

                ++n;
            }
        }
    }

}