Plotting a continuous graph of Snort alert counts over time

Date: 2019-06-20 20:38:29

Tags: python-3.x redis snort syslog-ng

I have Snort logging DDoS alerts to a file. I use syslog-ng to parse the logs and write them to Redis in JSON format (intended as a buffer; I use the 'setex' command with an expiry of 70 seconds).
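
For reference, a minimal sketch of what such a syslog-ng configuration might look like; the log path, key pattern, and template are assumptions, not the poster's actual config:

# hypothetical sketch of the syslog-ng -> Redis buffer described above
source s_snort {
    file("/var/log/snort/alert.json" flags(no-parse));   # assumed path
};

destination d_redis {
    redis(
        host("127.0.0.1")
        port(6379)
        # SETEX key ttl value: each alert expires after 70 seconds
        command("SETEX", "snort:${UNIXTIME}", "70",
                "$(format-json --scope nv-pairs)")
    );
};

log { source(s_snort); destination(d_redis); };

Note that a key derived from ${UNIXTIME} alone would overwrite alerts arriving within the same second, so a real setup needs a more unique key.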

The whole thing doesn't seem to be working well; any ideas to make it easier are welcome.

I wrote a simple Python script that listens for Redis keyspace events and counts the number of Snort alerts per second. I tried creating two other threads: one retrieves the JSON-formatted alerts from Snort, and the second counts the alerts. The third is supposed to plot the graph using matplotlib.pyplot.

from redis import StrictRedis as sr
import os
import json
import matplotlib.pyplot as plt
import threading as th
import time


redis = sr(host='localhost', port=6379, decode_responses=True)


#file = open('/home/lucidvis/vis_app_py/log.json','w+')

# This function is still being worked on
def do_plot():
    print('do_plot loop running')
    # NOTE: most matplotlib GUI backends expect to run in the main thread,
    # so drawing from a worker thread like this may still misbehave
    while True:
        if accumulated_data:
            counts = [int(x['time_count']) for x in accumulated_data]
            dates = [y['date'] for y in accumulated_data]

            plt.title('Attack Alerts per Time Period')
            plt.xlabel('Time', fontsize=14)
            plt.ylabel('Snort Alerts/sec')
            plt.tick_params(axis='both', labelsize=14)

            plt.plot(dates, counts, linewidth=5)
            # plt.show() blocks the loop; draw and pause briefly instead
            plt.pause(0.5)
        else:
            time.sleep(0.01)




def accumulator():
    # Compare the 'sec' field of each new alert against the current bucket
    # (pointer_data): same second -> bump time_count; new second -> flush
    # the finished bucket into accumulated_data and start a new one.
    pointer_data = {}

    print('accumulator loop running')

    while True:
        # received_from_redis is a list shared with the main thread
        if not received_from_redis:
            time.sleep(0.01)
            continue

        # new_data is the latest json-formatted alert received
        new_data = received_from_redis.pop(0)

        if not pointer_data:
            # first alert seen: start the first bucket from it
            pointer_data = new_data.copy()
            pointer_data['time_count'] = 1
            continue

        print(">>", type(pointer_data), " >> ", pointer_data)

        if pointer_data['sec'] == new_data['sec']:
            pointer_data['time_count'] += 1
        else:
            accumulated_data.append(pointer_data)
            pointer_data = new_data.copy()
            pointer_data['time_count'] = 1




# main() subscribes to Redis keyspace events and feeds the alerts it
# retrieves into received_from_redis for the worker threads to consume
def main():
    p = redis.pubsub()
    p.psubscribe('__keyspace@0__*')

    print('Starting message loop')

    while True:
        try:
            time.sleep(2)
            message = p.get_message()

            # Redis emits events as dicts; the channel name has the form
            # '__keyspace@0__:<key>', so the key that was set is the last
            # field after splitting on '__:'
            if message and message['data'] == 'set':
                key = message['channel'].split('__:')[-1]

                data_redis = json.loads(redis.get(str(key)))
                received_from_redis.append(data_redis)
        except Exception as e:
            print(e)
            continue




if __name__ == "__main__":
    accumulated_data = []
    received_from_redis = []

    # create and start the worker threads once; start() may only be
    # called once per Thread object, so it must not sit inside a loop
    thread_accumulator = th.Thread(target=accumulator, name='accumulator')
    do_plot_thread = th.Thread(target=do_plot, name='do_plot')

    thread_accumulator.start()
    do_plot_thread.start()

    main()   # blocks in its own message loop

    thread_accumulator.join()
    do_plot_thread.join()
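
As an aside, a common alternative to sharing a bare list between threads is queue.Queue, whose get() blocks until data arrives, so the consumer needs no sleep/poll loop. A small self-contained sketch (not the poster's code; names are illustrative):

import json
import queue
import threading

alert_queue = queue.Queue()

def consume():
    while True:
        alert = alert_queue.get()   # blocks until an alert arrives
        print(alert['sec'], alert['msg'])
        alert_queue.task_done()

threading.Thread(target=consume, daemon=True).start()
alert_queue.put(json.loads('{"msg": "Ping_Flood_Attack_Detected", "sec": "13"}'))
alert_queue.join()   # wait until the queued alert has been processed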








I do get errors at the moment; I just don't know whether the threads are even being created or running properly. I need some ideas to make this better.
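
One thing worth checking before debugging the threads: Redis keyspace notifications are disabled by default, so the '__keyspace@0__*' subscription above receives nothing unless they are switched on, for example:

# enable keyspace (K) and keyevent (E) notifications for all event classes (A)
redis-cli config set notify-keyspace-events KEA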

A sample alert in JSON format, as retrieved from Redis:


{"victim_port":"","victim":"192.168.204.130","protocol":"ICMP","msg":"Ping_Flood_Attack_Detected","key":"1000","date":"06/01-09:26:13","attacker_port":"","attacker":"192.168.30.129","sec":"13"}

1 Answer:

Answer 0 (score: 1)

I'm not sure I fully understand your situation, but if you want to count events that are essentially log messages, you can do that in syslog-ng itself: either in a Python destination (since you are already working in Python), or even without any extra programming by using the grouping-by parser. A sketch of the latter follows.
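
For illustration, a minimal sketch of the grouping-by() route; it assumes the JSON fields from the sample alert above have been extracted with json-parser() under a '.snort.' prefix, and the key and timeout are placeholders:

parser p_json { json-parser(prefix(".snort.")); };

parser p_count {
    grouping-by(
        key("${.snort.attacker}")
        aggregate(
            value("MESSAGE" "${.snort.attacker} sent $(context-length) alerts")
        )
        timeout(2)   # close the context after 2 idle seconds
    );
};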