Java Spark: matching words between two strings

Date: 2016-11-18 12:27:38

Tags: java apache-spark parallel-processing sparkapi

I want to find out whether two different long strings have any words in common, using Spark (Java API).

String string1 = "car bike bus ..."; // about 100 words
String string2 = "boat plane car ..."; // about 100 words

How can I do this?

I have already written one approach, but I don't think it is efficient (too many iterations):

final List<String> a1 = new ArrayList<>();
final List<String> a2 = new ArrayList<>();

a1.add("car");
a1.add("boat");
a1.add("bike");

a2.add("car");
a2.add("nada");
a2.add("otro");

JavaRDD<String> rdd = jsc.parallelize(a1);
// Keep only the words of a1 that also occur (case-insensitively) in a2
JavaRDD<String> counts = rdd.filter(new Function<String, Boolean>() {
    @Override
    public Boolean call(String s) throws Exception {
        for (String word : a2) {
            if (StringUtils.containsIgnoreCase(s, word)) {
                System.out.println("found");
                return true;
            }
        }
        return false;
    }
});
System.out.println(counts.count());
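For comparison, the overlap this filter computes is just a case-insensitive set intersection, which can be expressed in plain Java without Spark. A minimal sketch (the lower-casing and whitespace split are assumptions to keep the comparison case-insensitive and word-based):

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

public class WordOverlap {
    public static void main(String[] args) {
        String string1 = "car bike bus";
        String string2 = "boat plane car";

        // Lower-case each string and split it into a set of words
        Set<String> words1 = Arrays.stream(string1.toLowerCase().split("\\s+"))
                .collect(Collectors.toSet());
        Set<String> words2 = Arrays.stream(string2.toLowerCase().split("\\s+"))
                .collect(Collectors.toSet());

        // retainAll keeps only the elements that are also present in words2
        words1.retainAll(words2);
        System.out.println(words1);            // prints [car]
        System.out.println(!words1.isEmpty()); // prints true: there is an overlap
    }
}
```

For two ~100-word strings this runs in microseconds on a single machine; Spark only pays off once the inputs are far larger than that.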

1 Answer:

Answer 0 (score: 0)

You can use the intersect method, which is available for both RDDs and Datasets. Below is an example using Spark 2.0, Java, and Dataset.

import java.util.Arrays;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class SparkIntersection {
    public static void main(String[] args) {
        // SparkSession
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkIntersection")
                .config("spark.sql.warehouse.dir", "file:///C:/temp")
                .master("local[*]")
                .getOrCreate();
        // Word lists
        List<String> data1 = Arrays.asList("one", "two", "three", "four", "five");
        List<String> data2 = Arrays.asList("one", "six", "three", "nine", "ten");
        // Datasets
        Dataset<String> ds1 = spark.createDataset(data1, Encoders.STRING());
        Dataset<String> ds2 = spark.createDataset(data2, Encoders.STRING());
        // Intersect: keeps only the rows present in both Datasets
        Dataset<String> ds = ds1.intersect(ds2);
        ds.show();
        // Stop the session
        spark.stop();
    }
}
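The example above intersects two hard-coded lists, while the question starts from two long strings. The missing step is tokenizing each string into a word list before handing it to Spark; a plain-Java sketch of just that step (the whitespace split pattern is an assumption):

```java
import java.util.Arrays;
import java.util.List;

public class Tokenize {
    public static void main(String[] args) {
        String string1 = "car bike bus";
        String string2 = "boat plane car";

        // Split on runs of whitespace to get one word per element
        List<String> data1 = Arrays.asList(string1.split("\\s+"));
        List<String> data2 = Arrays.asList(string2.split("\\s+"));

        System.out.println(data1); // prints [car, bike, bus]
        System.out.println(data2); // prints [boat, plane, car]
        // These lists can then be passed to spark.createDataset(...) as above
    }
}
```

Note that on Datasets the method is `intersect`, while the equivalent on a `JavaRDD` is called `intersection`.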