In my code, I'm trying to come up with a way to use a Map as a reference for certain operations. An example (pseudocode):
JavaDStream<String[]> miao = create new DStream;
Map<String, String[]> dictionary = new HashMap<String, String[]>(); // would I need a Hashtable in this case?
miao.foreachRDD(rdd -> {
    rdd.foreach(line -> { // line is a String[]
        if (dictionary.containsKey(line[0])) {
            if (dictionary.get(line[0])[INDEX].equals(line[INDEX])) {
                append line to the csv file;
                dictionary.put(line[0], line);
            } else {
                append line to another file;
            }
        } else {
            dictionary.put(line[0], line);
        }
    });
});
This situation comes up often in my application: check whether something has already been processed, act one way in one case and another way in the other. So I need to find a way to do this.
I've read a lot today about broadcast variables and checkpointing.
If I delegate the Map to another class, a serializable one, and keep it there as a static field, would I get a collection that works across the stream? From what I understand, I think not: it would be changed "locally", but the other workers would never receive any update.
Edit: as promised, even if late, here is what I do:
private static final Map<String, String> alertsAlreadySent = new Hashtable<String, String>(); // map of device id and timestamp
public static void sendMail(String whoTo, String[] whoIsDying) { // email address, string array enriched with person data
    if (!alertsAlreadySent.containsKey(whoIsDying[CSVExampleDevice.DEVICE_ID]) // if it's not already in the map
        || // OR, short-circuited
        (Long.parseLong(whoIsDying[CSVExampleDevice.TIMESTAMP])
            - Long.parseLong(alertsAlreadySent.get(whoIsDying[CSVExampleDevice.DEVICE_ID])) > 3600000)
    ) { // he was already dying, but an hour has already passed, so it may be a new alert
        indeedSendMail(whoTo, whoIsDying); // a function that actually sends the mail
        alertsAlreadySent.put(whoIsDying[CSVExampleDevice.DEVICE_ID], whoIsDying[CSVExampleDevice.TIMESTAMP]);
        // the email was sent; update the timestamp in the map
    }
}
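The decision in sendMail can be isolated into a pure helper, which makes the one-hour rule easy to test without any mail or Spark machinery. This is only a sketch with hypothetical names (`AlertGate`, `shouldAlert`); it assumes timestamps are epoch milliseconds, as in the code above:

```java
import java.util.HashMap;
import java.util.Map;

public class AlertGate {
    private static final long ONE_HOUR_MS = 3600000L;
    // device id -> timestamp of the last alert we sent
    private final Map<String, Long> alertsAlreadySent = new HashMap<>();

    // Returns true (and records the timestamp) when an alert should go out:
    // either the device was never seen, or more than an hour has passed.
    public boolean shouldAlert(String deviceId, long timestampMs) {
        Long last = alertsAlreadySent.get(deviceId);
        if (last == null || timestampMs - last > ONE_HOUR_MS) {
            alertsAlreadySent.put(deviceId, timestampMs);
            return true;
        }
        return false;
    }
}
```

Keeping the side effect (the actual mail) outside the helper is what makes the rule testable; the map, however, is still a single-JVM structure.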
I have other cases like this one. Would a stateful DStream easily replace these approaches?
Answer (score: 2)
I think the intention here is to keep some state while the stream is being processed and, at the same time, to classify the data in the stream into different groups:
Q: Can we use a Java Map that the workers can read and write?
NO
In Spark Streaming, a distributed system, we cannot use a mutable collection across executors and expect changes to that structure to be propagated. Those objects live in the JVM where they were created, and all changes stay local to that JVM. There are ways to achieve this (e.g. CRDTs), but they would require additional messaging infrastructure between executors. Another option would be a centralized store, such as a distributed cache or a database.
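To see why a driver-side map never picks up executor-side updates, here is a small Spark-free sketch: it round-trips a task object through Java serialization (roughly the way Spark ships closures to executors), mutates the deserialized copy's map, and shows the original is untouched. All names here (`ClosureCopyDemo`, `Task`, `shipToExecutor`) are hypothetical, for illustration only:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

public class ClosureCopyDemo {
    // Stands in for a closure that captures a map
    static class Task implements Serializable {
        final HashMap<String, String> dictionary = new HashMap<>();
    }

    // Round-trip through Java serialization, like shipping a task to an executor
    static Task shipToExecutor(Task t) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(t);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Task) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Task driverSide = new Task();
        Task executorSide = shipToExecutor(driverSide);
        executorSide.dictionary.put("id-1", "seen"); // the "executor" updates its copy
        System.out.println(driverSide.dictionary.containsKey("id-1")); // prints "false"
    }
}
```

The deserialized copy is a distinct object graph, which is exactly why a `static` map on the driver sees nothing of what the workers do.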
Q: Is there another way to do it?
YES
Spark Streaming supports stateful transformations that let us model this kind of process. We do need to change the approach to make it work: instead of checking a condition and acting on it, as in the original question, we label each entry, build up our state, and group entries so that the I/O operations can be optimized. (I'll use Scala; the Java approach uses essentially the same API, with extra verbosity and without pattern matching.)
val dstream = ??? // my dstream here
// We define a state update function that implements our business logic in dealing with changing values
def stateUpdateFunction(
key: String,
value: Option[Array[String]],
state: State[String]): Option[(String,String, Array[String])] = {
val stateValue = state.getOption() // Get current value for the given key
val label = (stateValue, value) match {
case (None, Some(newValue)) => // new key!
state.update(newValue(0)) // Update session data
"NEW_KEY" // this is the resulting label for this state
case (Some(oldValue), Some(newValue)) if (oldValue == newValue(0)) => // we know the key. The value is the same. In this case we don't update the state
"SAME_VALUE"
case (Some(oldValue), Some(newValue)) if (oldValue != newValue(0)) => // the value is not the same, so we store the new value
state.update(newValue(0))
"NEW_VALUE"
case (None, None) => "NOP" // do nothing
case (Some(oldValue), None) => "NOP" // do nothing
}
value.map(v => (label, key, v)) // emit the computed label for this key together with the given value
}
val stateSpec = StateSpec.function(stateUpdateFunction _)
// transform the original stream into a key/value dstream that preserves the original data in the value
val keyedDstream = dstream.map(elem => (elem(0), elem))
// Add labels to the data using a stateful transformation
val mappedDstream = keyedDstream.mapWithState(stateSpec)
// drop the "None" entries and unwrap the values
val labeledDataDStream = mappedDstream.flatMap(option => option)
// Now, labeledDataDStream contains our data labeled, we can proceed to filter it out in the different required subsets
val changedValuesDStream = labeledDataDStream.filter{case (label, key, value) => label == "NEW_VALUE"}
val sameValuesDStream = labeledDataDStream.filter{case (label, key, value) => label == "SAME_VALUE"}
val newKeyDStream = labeledDataDStream.filter{case (label, key, value) => label == "NEW_KEY"}
// we can save those datasets to disk (or store in a db, ...)
changedValuesDStream.saveAsTextFiles("...")
sameValuesDStream.saveAsTextFiles("...")
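The labeling logic of stateUpdateFunction can be exercised without a Spark cluster by replacing State with a plain map. The Java sketch below is an assumption-laden analogue, not the Spark API: `LabelingState` and its `label` method are hypothetical, and the map plays the role of the per-key state store:

```java
import java.util.HashMap;
import java.util.Map;

public class LabelingState {
    private final Map<String, String> state = new HashMap<>();

    // Mirrors the match in stateUpdateFunction: label each (key, value)
    // pair and update the per-key state only when it changes.
    public String label(String key, String value) {
        String old = state.get(key);
        if (old == null) {
            state.put(key, value);   // new key: remember it
            return "NEW_KEY";
        } else if (old.equals(value)) {
            return "SAME_VALUE";     // state unchanged
        } else {
            state.put(key, value);   // value changed: store the new one
            return "NEW_VALUE";
        }
    }
}
```

Once the labels are checked here, the same three-way split carries over to the mapWithState version, where Spark manages the state store and its checkpointing for you.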