I am trying to develop a pipeline in Python with OpenCV to stabilize images from a fluid experiment. Here is an example raw image (actual size: 1920x1460).
The pipeline should be able to stabilize both low-frequency drift and the occasional high-frequency "jitters" that occur when valves are opened/closed during the experiment. My current approach, following the example here, is to apply a bilateral filter followed by adaptive thresholding to bring out the channels in the image. I then use goodFeaturesToTrack to find corners in the thresholded image. However, there is a lot of noise in the image due to low contrast and some optical effects in the corners of the frame. Although I can find the corners of the channels, as shown here, they move around a lot from frame to frame, as seen here. I tracked the amount of x- and y-pixel shift in every frame relative to the first frame, computed from calcOpticalFlowPyrLK, and calculated a rigid transform with estimateRigidTransform, shown here. In this plot I can see the low-frequency drift over frames 0:200 as well as the sharp jumps around frame ~225. These jumps match what is observed in the video. However, the large amount of noise (amplitude ~5-10 pixels) does not match anything observed in the video. If I apply these transforms to the image stack, the jitter gets worse instead of being stabilized. Moreover, if I try to compute the transform from one frame to the next (rather than from every frame back to the first), then after a few frames I get a return value of None for the rigid transform matrix, presumably because the noise prevents a rigid transform from being computed.
Here is an example of how I calculate the transforms:
# Load required libraries
import numpy as np
from skimage.external import tifffile as tif
import os
import cv2
import matplotlib.pyplot as plt
from sklearn.externals._pilutil import bytescale
#Read in file and convert to 8-bit so it can be processed
os.chdir(r"C:\Path\to\my\processingfolder\inputstack")
inputfilename = "mytestfile.tif"
input_image = tif.imread(inputfilename)
input_image_8 = bytescale(input_image)
n_frames, vid_height, vid_width = np.shape(input_image_8)
transforms = np.zeros((n_frames-1,3),np.float32)
prev_image = input_image_8[0] #use the first frame as the reference
prev_f = cv2.bilateralFilter(prev_image,9,75,75)
prev_t = cv2.adaptiveThreshold(prev_f,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,49,2)
prev_pts = cv2.goodFeaturesToTrack(prev_t,maxCorners=100,qualityLevel=0.5,minDistance=10,blockSize=25,mask=None)
for i in range(1,n_frames-2):
    curr_image = input_image_8[i]
    curr_f = cv2.bilateralFilter(curr_image,9,75,75)
    curr_t = cv2.adaptiveThreshold(curr_f,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,49,2)
    #Detect features through optical flow:
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_t,curr_t,prev_pts,None)
    #Sanity check
    assert len(prev_pts) == len(curr_pts)
    #Filter to only the valid points
    idx = np.where(status==1)[0]
    prev_pts = prev_pts[idx]
    curr_pts = curr_pts[idx]
    #Find transformation matrix
    m = cv2.estimateRigidTransform(prev_pts,curr_pts, fullAffine=False) #will only work with OpenCV-3 or less
    # Extract translation
    dx = m[0,2]
    dy = m[1,2]
    # Extract rotation angle
    da = np.arctan2(m[1,0], m[0,0])
    # Store transformation
    transforms[i] = [dx,dy,da]
    print("Frame: " + str(i) + "/" + str(n_frames) + " - Tracked points : " + str(len(prev_pts)))
How can I process the images differently so that I pick out the lines of these channels without the noise that interferes with corner detection? This stabilization/alignment does not need to happen on the fly; it can be applied to the whole stack after the fact.