Plotting large time series

Date: 2018-02-17 10:44:07

Tags: plot time-series

Problem summary

Is there any easy-to-implement algorithm that reduces the number of points needed to represent a time series without changing how it appears in a plot?

Motivating problem

I am trying to interactively visualize 10 to 15 data channels logged from an embedded system at ~20 kHz. A log can cover more than an hour, which means I am dealing with between 1e8 and 1e9 points. Moreover, I care about small anomalies that may be very short-lived (i.e. less than 1 ms), so simple decimation is not an option.

Unsurprisingly, most plotting libraries get a little unhappy if you do the naive thing and try to hand them data sets larger than dedicated GPU memory. It is actually worse than that on my system: using a vector of random floats as a test case, I only get to about 5e7 points with Matlab's plot function and with Python + matplotlib before the refresh rate drops below 1 FPS.

Existing questions and solutions:

This question is somewhat similar to several existing questions, such as:

but deals with larger data sets and/or is stricter about fidelity at the cost of interactivity (60 FPS silky-smooth panning and zooming would be great, but realistically I would be satisfied with 1 FPS).

Clearly, some form of data reduction is needed. While searching for existing tools that solve my problem, I found two paradigms:

  • Decimate, but keep track of outliers: Matlab + dsplot is a good example (i.e. the tool suggested in the accepted answer of the first question I linked above). dsplot decimates down to a fixed number of evenly spaced points, but then adds back outliers identified using the standard deviation of a high-pass FIR filter. While this is probably a viable solution for several classes of data, it may struggle if there is a lot of frequency content above the filter cutoff, and will probably require tuning.

  • Plot min and max: with this approach, you divide the time series into intervals corresponding to each horizontal pixel and plot only the minimum and maximum values in each interval. Matlab + Plot (Big) is a good example, but it uses an O(n) computation of the min and max, which gets a bit slow once you reach 1e8 or 1e9 points. A binary search tree in a mex function or in Python could fix that, but it is complicated to implement.
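To make the first paradigm concrete, here is a minimal NumPy sketch of uniform decimation with outlier add-back. The function name is mine, and a simple first difference stands in for dsplot's high-pass FIR filter, so this illustrates the idea rather than dsplot's actual algorithm:

```python
import numpy as np

def decimate_with_outliers(y, n_target, k=4.0):
    """Keep ~n_target evenly spaced samples, plus any outlier samples.

    An outlier is a sample whose high-pass residual (here a crude first
    difference, standing in for a proper FIR filter) exceeds k standard
    deviations. Returns the indices of the samples to keep.
    """
    y = np.asarray(y, float)
    step = max(1, len(y) // n_target)
    keep = np.zeros(len(y), dtype=bool)
    keep[::step] = True                       # uniform decimation grid
    resid = np.abs(np.diff(y, prepend=y[0]))  # crude high-pass filter
    keep |= resid > k * resid.std()           # add back the outliers
    return np.flatnonzero(keep)

# a 1e5-point sine with a single-sample spike
y = np.sin(np.linspace(0, 4 * np.pi, 100_000))
y[50_000] = 5.0                               # the anomaly to preserve
idx = decimate_with_outliers(y, 1000)
print(50_000 in idx)                          # prints True: spike is kept
```

As with dsplot, the threshold k needs tuning: too low keeps too many points, too high misses short anomalies.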
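The second paradigm is even easier to sketch. A naive O(n) NumPy version of per-pixel min/max reduction (not the tree-accelerated variant) might look like this; the function name is illustrative:

```python
import numpy as np

def minmax_decimate(y, n_bins):
    """Reduce y to 2*n_bins points by keeping the min and max of each bin.

    Splits the series into n_bins equal intervals (one per horizontal
    pixel) and keeps only the extremes of each, so short spikes survive.
    """
    n = len(y) // n_bins * n_bins             # drop the ragged tail for simplicity
    bins = np.asarray(y[:n], dtype=float).reshape(n_bins, -1)
    lo = bins.min(axis=1)
    hi = bins.max(axis=1)
    # interleave min and max so the reduced series plots directly
    out = np.empty(2 * n_bins)
    out[0::2] = lo
    out[1::2] = hi
    return out

# a 1e6-point sine with one short spike
y = np.sin(np.linspace(0, 20 * np.pi, 1_000_000))
y[123_456] = 10.0                             # the anomaly to preserve
reduced = minmax_decimate(y, 1000)            # 2000 points instead of 1e6
print(len(reduced), reduced.max())            # prints: 2000 10.0
```

Because each pixel interval keeps its extremes, a sub-millisecond spike survives no matter how aggressive the reduction, which is why this paradigm suits the anomalies described above.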

Is there a simpler solution that meets my needs?

Edit (2018-02-18): Reworded the question to focus on the algorithm rather than tools implementing it.

1 Answer:

Answer 0: (score: 3)

I've had the same problem, displaying pressure time series from hundreds of sensors with samples every few minutes. In some cases (e.g. when cleaning the data) I wanted to see all the outliers; in others I was more interested in the trends. So I wrote a function that can reduce the number of data points using two methods: Visvalingam and Douglas-Peucker. The first tends to remove outliers, the second keeps them. I optimized the function to handle large data sets. I wrote it after realizing that no plotting method could handle that many points, and that those that tried were decimating the data set in ways I couldn't control. The function is as follows:

function [X, Y, indices, relevance] = lineSimplificationI(X,Y,N,method,option)
%lineSimplificationI Reduce the number of points of the line described by X
%and Y to N, preserving the most relevant ones.
%   Uses methods adapted from the Visvalingam and Douglas-Peucker algorithms.
%   The number of points of the line is reduced iteratively until reaching
%   N non-NaN points. Repeated NaN points in original data are deleted but
%   non-repeated NaNs are preserved to keep line breaks.
%   The two available methods are
%
%   Visvalingam: The relevance of a point is proportional to the area of
%   the triangle defined by the point and its two neighbors.
%   
%   Douglas-Peucker: The relevance of a point is proportional to the
%   distance between it and the straight line defined by its two neighbors.
%   Note that the implementation here is iterative, NOT recursive as in
%   the original algorithm. This allows better handling of large data sets.
%
%   DIFFERENCES: Visvalingam tends to remove outliers while Douglas-Peucker
%   keeps them.
%
%   INPUTS:
%         X: X coordinates of the line points
%         Y: Y coordinates of the line points
%         N: Target number of non-NaN points after simplification
%    method: Either 'Visvalingam' or 'DouglasPeucker' (default)
%    option: Either 'silent' (default) or 'verbose' if additional outputs
%            of the calculations are desired.
%
% OUTPUTS:
%         X: X coordinates of the simplified line points
%         Y: Y coordinates of the simplified line points
%   indices: Indices to the positions of the points preserved in the
%            original X and Y. Therefore Output X is equal to the input
%            X(indices).
% relevance: Relevance of the returned points. It can be used to further
%            simplify the line dynamically by keeping only points with
%            higher relevance. But this will produce bigger distortions of
%            the line shape than calling lineSimplificationI again with a
%            smaller value for N, as removing a point changes the relevance
%            of its neighbors.
%
% Implementation by Camilo Rada - camilo@rada.cl
%

    if nargin < 3
        error('Line points positions X, Y and target point count N MUST be specified');
    end
    if nargin < 4
        method='DouglasPeucker';
    end
    if nargin < 5
        option='silent';
    end

    doDisplay=strcmp(option,'verbose');

    X=double(X(:));
    Y=double(Y(:));
    indices=1:length(Y);

    if length(X)~=length(Y)
        error('Vectors X and Y MUST have the same number of elements');
    end

    if N>=length(Y)
        relevance=ones(length(Y),1);
        if doDisplay
            disp('N is greater than or equal to the number of points in the line. Original X,Y were returned. Relevances were not computed.')
        end
        return
    end
    % Removing repeated NaN from Y
    % We find all the NaNs with another NaN to the left
    repeatedNaNs= isnan(Y(2:end)) & isnan(Y(1:end-1));
    %We also treat the first element as a repeated NaN if it is NaN
    repeatedNaNs=[isnan(Y(1)); repeatedNaNs(:)];
    Y=Y(~repeatedNaNs);
    X=X(~repeatedNaNs);
    indices=indices(~repeatedNaNs);

    %Removing trailing NaN if any
    if isnan(Y(end))
        Y=Y(1:end-1);
        X=X(1:end-1);
        indices=indices(1:end-1);
    end

    pCount=length(X);

    if doDisplay
        disp(['Initial point count = ' num2str(pCount)])
        disp(['Non repeated NaN count in data = ' num2str(sum(isnan(Y)))])
    end

    iterCount=0;

    while pCount>N
        iterCount=iterCount+1;
        % If the vertices of a triangle are at the points (x1,y1), (x2,y2)
        % and (x3,y3), the area of that triangle is
        % area = abs((x1*(y2-y3)+x2*(y3-y1)+x3*(y1-y2))/2)
        % so twice the areas of the triangles defined by each point of X,Y
        % and its two neighbors are

        twiceTriangleArea =abs((X(1:end-2).*(Y(2:end-1)-Y(3:end))+X(2:end-1).*(Y(3:end)-Y(1:end-2))+X(3:end).*(Y(1:end-2)-Y(2:end-1))));

        switch method
            case 'Visvalingam'
                % In this case the relevance is given by the area of the
                % triangle formed by each point and its two neighbors
                relevance=twiceTriangleArea/2;
            case 'DouglasPeucker'
                % In this case the relevance is given by the minimum distance
                % from the point to the line formed by its two neighbors
                neighborDistances=ppDistance([X(1:end-2) Y(1:end-2)],[X(3:end) Y(3:end)]);
                relevance=twiceTriangleArea./neighborDistances;
            otherwise
                error(['Unknown method: ' method]);
        end
        relevance=[Inf; relevance; Inf];
        %We remove the pCount-N least relevant points as long as they are not contiguous

        [srelevance, sortorder]= sort(relevance,'descend');
        firstFinite=find(isfinite(srelevance),1,'first');
        startPos=uint32(firstFinite+N+1);
        toRemove=sort(sortorder(startPos:end));
        if isempty(toRemove)
            break;
        end

        %Now we have to deal with contiguous elements, as removing one will
        %change the relevance of its neighbors. Therefore we have to
        %identify pairs of contiguous points and only remove the one with
        %lesser relevance

        %contiguousToKeep will be true for an element if the next or the
        %previous element is also flagged for removal
        contiguousToKeep=[diff(toRemove(:))==1; false] | [false; (toRemove(1:end-1)-toRemove(2:end))==-1];
        notContiguous=~contiguousToKeep;

        %And the relevances associated to the elements flagged for removal
        contRel=relevance(toRemove);

        % Now we rearrange the contiguous flags into two rows, so that
        % if both rows are true in a given column, we have a pair of
        % contiguous points that are both flagged for removal.
        % This process depends on the rearrangement, as contiguous
        % elements can end up in different columns, so it has to be done
        % twice to make sure no contiguous elements are removed
         nContiguous=length(contiguousToKeep);

        for paddingMode=1:2
            %The rearrangement is only possible if we have an even number
            %of elements, so we add one dummy element at the end if needed
            if paddingMode==1
                if mod(nContiguous,2)
                    pcontiguous=[contiguousToKeep; false];
                    pcontRel=[contRel; -Inf];
                else
                    pcontiguous=contiguousToKeep;
                    pcontRel=contRel;
                end
            else
                if mod(nContiguous,2)
                    pcontiguous=[false; contiguousToKeep];
                    pcontRel=[-Inf; contRel];
                else
                    pcontiguous=[false; contiguousToKeep(1:end-1)];
                    pcontRel=[-Inf; contRel(1:end-1)];                    
                end
            end

            contiguousPairs=reshape(pcontiguous,2,[]);
            pcontRel=reshape(pcontRel,2,[]);

            %finding columns holding a contiguous pair
            contCols=all(contiguousPairs);
            if ~any(contCols) && paddingMode==2
                break;
            end
            %finding the row of the MORE relevant element of each column
            %(despite its name, lesserElementRow indexes the element to keep)
            [~, lesserElementRow]=max(pcontRel);

            %The index in contiguousToKeep of the first element of each pair is
            if paddingMode==1
                firstElementIdx=((1:size(contiguousPairs,2))*2)-1;
            else
                firstElementIdx=((1:size(contiguousPairs,2))*2)-2;
            end            

            % and the index in contiguousToKeep of the more relevant element
            % of each pair is
            lesserElementIdx=firstElementIdx+lesserElementRow-1;

            %now we clear the removal flag of the more relevant element of
            %each pair, so it is kept and only the lesser one is removed
            contiguousToKeep(lesserElementIdx(contCols))=false;
        end
        %and now we drop from the toRemove list the contiguous points whose
        %removal flag was cleared
        toRemove=toRemove(contiguousToKeep | notContiguous);

        if any(diff(toRemove(:))==1) && doDisplay
            warning([num2str(sum(diff(toRemove(:))==1)) ' contiguous elements removed in one iteration.'])
        end
        toRemoveLogical=false(pCount,1);
        toRemoveLogical(toRemove)=true;

        X=X(~toRemoveLogical);
        Y=Y(~toRemoveLogical);
        indices=indices(~toRemoveLogical);

        pCount=length(X);
        nRemoved=sum(toRemoveLogical);
        if doDisplay
            disp(['Iteration ' num2str(iterCount) ', Point count = ' num2str(pCount) ' (' num2str(nRemoved) ' removed)'])
        end
        if nRemoved==0
            break;
        end
    end
end

function d = ppDistance(p1,p2)
    d=sqrt((p1(:,1)-p2(:,1)).^2+(p1(:,2)-p2(:,2)).^2);
end