Tracker.autorun(function() {
  DATA.find().observeChanges({
    added: function(id, doc) {
      console.log(doc);
    }
  });
});
This code is called on the server. Every time the Meteor server starts, the added
function fires for every item in the database. Is there a way to make the added
callback fire only when a new item is added?
Answer 0 (score: 21)
When observeChanges first runs, added is called for every document in the result set. The trick is to ignore those callbacks during initialization. There is an extended example in my answer to this question, but this code should work for you:
(function() {
  var initializing = true;
  DATA.find().observeChanges({
    added: function(id, doc) {
      if (!initializing) {
        console.log(doc);
      }
    }
  });
  initializing = false;
})();
Note that Tracker.autorun is a client-only function. On the server, I believe it executes only once.
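On the server a reactive computation is not needed at all. A minimal sketch of the same guard, assuming the observer is simply set up once inside Meteor.startup, could look like this:

Meteor.startup(function () {
  var initializing = true;
  DATA.find().observeChanges({
    added: function (id, doc) {
      if (!initializing) {
        console.log(doc); // only documents added after startup reach this point
      }
    }
  });
  // observeChanges returns only after the initial added calls have run,
  // so flipping the flag here is safe.
  initializing = false;
});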
Answer 1 (score: 8)
I struggled with this for a long time. For some reason, David's answer did not work for me: the callback still fired after the initializing variable had been set to false.
This pattern from Avi worked for me:
var usersLoaded = false;

Meteor.subscribe("profiles", function () {
  // at this point all new users sent down are legitimately new ones
  usersLoaded = true;
});

Meteor.users.find().observe({
  added: function(user) {
    if (usersLoaded) {
      console.log("New user created: ", user);
    }
  }
});
Answer 2 (score: 2)
Since it is an initialization problem, you can do it like this.
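A minimal sketch of one such initialization guard, assuming the flag is simply the handle returned by observeChanges itself (the initial added calls run synchronously before observeChanges returns, while the handle variable is still undefined):

var handle = DATA.find().observeChanges({
  added: function (id, doc) {
    // undefined during the initial synchronous delivery,
    // truthy for every document added afterwards
    if (handle) {
      console.log(doc);
    }
  }
});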
This is actually more elegant.
Answer 3 (score: 2)
Give the query a selector that does not match the old items. If you use a Mongo ObjectID as the _id, you can query for items whose _id is greater than that of the latest item:
const latest = DATA.findOne({}, {sort: {_id: -1}})
DATA.find({_id: {$gt: latest._id}}).observeChanges({
  added: function() { ... }
})
Or with a createdAt timestamp:
const currentTime = new Date()
DATA.find({createdAt: {$gt: currentTime}}).observeChanges({
  added: function() { ... }
})
Answer 4 (score: 1)
Here is another way to approach this problem:
Meteor.subscribe('messages', function() {
  var messages = Messages.find();
  var msgCount = messages.count();
  messages.observe({
    addedAt: function(doc, atIndex) {
      if (atIndex > (msgCount - 1)) console.log('added');
    }
  });
});
This should only fire for documents added after the existing set has been delivered. Importantly, msgCount is set inside the onReady callback of Meteor.subscribe, so it is recalculated whenever your subscription changes, for example if you are paginating your subscription.