Reducing events into time intervals

Date: 2017-09-14 15:16:39

Tags: events logging mapreduce reducing

Scenario: I have a service that logs events, as in this CSV example:

#TimeStamp, Name, ColorOfPullover
TimeStamp01, Peter, Green
TimeStamp02, Bob, Blue
TimeStamp03, Peter, Green
TimeStamp04, Peter, Red
TimeStamp05, Peter, Green

For example, "Peter wearing Green" will often occur in consecutive lines.

I have two goals:

  1. Keep the data as small as possible.
  2. Keep all *relevant* data.

Relevant means: I need to know for which time span a person wore which color. E.g.:

    #StartTime, EndTime, Name, ColorOfPullover
    TimeStamp01, TimeStamp03, Peter, Green
    TimeStamp02, TimeStamp02, Bob, Blue
    TimeStamp03, TimeStamp03, Peter, Green
    TimeStamp04, TimeStamp04, Peter, Red
    TimeStamp05, TimeStamp05, Peter, Green
    

In this format I can answer questions like: which color was Peter wearing at TimeStamp02? (I can safely assume that each person wears the same color between two recorded events of the same color.)

Main question: can I use existing technology to achieve this? I.e., can I feed it a continuous stream of events, and have it extract and store the relevant data?

To be precise, I need to implement an algorithm like the following (pseudocode). The OnNewEvent method is called for each line of the CSV example; the parameter event already contains the data from that line as member variables.

    def OnNewEvent(event)
        entry = Database.getLatestEntryFor(event.personName)
        if (entry.pulloverColor == event.pulloverColor)
            entry.setIntervalEndDate(event.date)
            Database.store(entry)
        else
            newEntry = new Entry
            newEntry.setIntervalStartDate(event.date)
            newEntry.setIntervalEndDate(event.date)
            newEntry.setPulloverColor(event.pulloverColor)
            newEntry.setName(event.personName)
            Database.createNewEntry(newEntry)
        end
    end
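Translated into runnable Python, the pseudocode above might look like the sketch below. The `InMemoryDB` class is a purely illustrative stand-in for the real database, and all names (`Entry`, `on_new_event`) are assumptions, not from any specific library:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    start: str   # interval start timestamp
    end: str     # interval end timestamp
    name: str
    color: str

class InMemoryDB:
    """Illustrative stand-in for Database in the pseudocode."""
    def __init__(self):
        self.entries = []  # append-only list of intervals

    def get_latest_entry_for(self, name):
        # scan backwards for the most recent entry of this person
        for entry in reversed(self.entries):
            if entry.name == name:
                return entry
        return None

    def create_new_entry(self, entry):
        self.entries.append(entry)

db = InMemoryDB()

def on_new_event(timestamp, name, color):
    entry = db.get_latest_entry_for(name)
    if entry is not None and entry.color == color:
        entry.end = timestamp  # same color: extend the open interval
    else:
        db.create_new_entry(Entry(timestamp, timestamp, name, color))
```

Feeding it the five CSV lines from the question collapses Peter's first two Green events into a single TimeStamp01–TimeStamp03 interval.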
    

2 answers:

Answer 0 (score: 0)

This is a typical scenario for any streaming architecture.

There are multiple existing technologies that work in tandem to achieve what you want:


1.  A NoSQL database (HBase, Aerospike, Cassandra)
2.  A streaming framework such as Spark Streaming (micro-batch) or Storm
3.  MapReduce jobs run in micro-batches to insert into the NoSQL database
4.  Kafka as a distributed queue

The end-to-end flow:

Data -> streaming framework -> NoSql Database. 
OR 
Data -> Kafka -> streaming framework -> NoSql Database. 


In the NoSQL database there are two ways to model your data.
1. Key by "Name" and insert every event for that key into the database.
   When fetching, you get back all events corresponding to that key.

2. Key by "Name"; every time an event arrives for a key, do an UPSERT into an existing blob (an object saved as binary). Inside the blob you maintain the time ranges and colors seen.
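A minimal sketch of modeling option 2, with a plain dict standing in for the NoSQL store (HBase/Aerospike/Cassandra). Serialization to a binary blob is elided, and all names here are illustrative assumptions:

```python
# name -> list of [start_ts, end_ts, color]; the list is the "blob"
store = {}

def upsert(name, timestamp, color):
    """Extend the last interval if the color is unchanged, else open a new one."""
    intervals = store.setdefault(name, [])
    if intervals and intervals[-1][2] == color:
        intervals[-1][1] = timestamp  # same color: extend last interval
    else:
        intervals.append([timestamp, timestamp, color])

def color_at(name, timestamp):
    """Answer 'which color did this person wear at this time?'.
    Timestamps are compared lexicographically in this sketch."""
    for start, end, color in store.get(name, []):
        if start <= timestamp <= end:
            return color
    return None
```

With the question's five events loaded, `color_at("Peter", "TimeStamp02")` returns `"Green"` because TimeStamp02 falls inside the merged TimeStamp01–TimeStamp03 interval.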

Code samples for reading from and writing to HBase and Aerospike:

HBase: http://bytepadding.com/hbase/

Aerospike: http://bytepadding.com/aerospike/

Answer 1 (score: 0)

One approach is to use HiveMQ. HiveMQ is an MQTT-based message queue technology. The nice thing about it is that you can write custom plugins to process incoming messages. To get the latest entry for a person's events, a hash table in the HiveMQ plugin would work fine. If the number of distinct people is very large, I would consider using a cache like Redis to cache the latest event for each person.

Your service publishes events to HiveMQ. A HiveMQ plugin processes the incoming events and updates the database.
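The per-person lookup this answer describes could be sketched as follows. The dict plays the role of the plugin's in-memory hash table (or a Redis cache), and `db_update` is a hypothetical callback standing in for the actual database write; none of this is HiveMQ's real plugin API:

```python
# name -> latest open interval for that person
latest_cache = {}

def handle_incoming(name, timestamp, color, db_update):
    """Process one incoming event, using the cache to avoid a DB read."""
    prev = latest_cache.get(name)
    if prev is not None and prev["color"] == color:
        prev["end"] = timestamp        # same color: extend the cached interval
        db_update(name, prev)          # persist the extended row
    else:
        entry = {"start": timestamp, "end": timestamp, "color": color}
        latest_cache[name] = entry     # new interval becomes the latest
        db_update(name, entry)         # persist the new row
```

The cache turns the "get latest entry" step of the question's pseudocode into an O(1) in-memory lookup, at the cost of needing to rebuild or warm the cache on restart.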

HiveMQ Plugin

Redis