Problem description
I'm detecting persons and vehicles using TensorFlow and Python. I compute their trajectories, predict them with a Kalman filter, and fit a line to each trajectory to extrapolate it.
My problem is: how do I find the intersection point and the time of collision between two trajectories?
I tried line-to-line intersection, but the fitted line is not always a two-point line; it can be a polyline. Here is my attempt:
detections = tracker.update(np.array(z_box))
for trk in detections[0]:
    trk = trk.astype(np.int32)
    helpers.draw_box_label(img, trk, trk[4])  # draw the bounding box on the image
    centerCoord = ((trk[1] + trk[3]) / 2, (trk[0] + trk[2]) / 2)
    point_lists[trk[4]].append(centerCoord)
    x = [i[0] for i in point_lists[trk[4]]]
    y = [i[1] for i in point_lists[trk[4]]]
    p = np.polyfit(x, y, deg=1)           # p[0] = slope, p[1] = intercept
    y = p[1] + p[0] * np.array(x)
    fitted = list(zip(x, y))
    cv2.polylines(img, np.int32([fitted]), False, color=(255, 0, 0))
    for other in detections[0]:
        other = other.astype(np.int32)
        if other[4] != trk[4]:            # skip our own track ID
            x2 = [i[0] for i in point_lists[other[4]]]
            y2 = [i[1] for i in point_lists[other[4]]]
            p2 = np.polyfit(x2, y2, deg=1)
            y2 = p2[1] + p2[0] * np.array(x2)
            other_fitted = list(zip(x2, y2))
            if line_intersection(fitted, other_fitted):
                print("intersection")
            else:
                print("not intersection")
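The `line_intersection` helper is not shown above, so here is a minimal sketch of what a polyline-vs-polyline test could look like: every segment of one polyline is checked against every segment of the other with the standard orientation (cross-product) test. The function names are my own, and collinear/touching edge cases are deliberately ignored:

```python
def cross2(u, v):
    """z-component of the 2D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4."""
    d1 = cross2((p4[0] - p3[0], p4[1] - p3[1]), (p1[0] - p3[0], p1[1] - p3[1]))
    d2 = cross2((p4[0] - p3[0], p4[1] - p3[1]), (p2[0] - p3[0], p2[1] - p3[1]))
    d3 = cross2((p2[0] - p1[0], p2[1] - p1[1]), (p3[0] - p1[0], p3[1] - p1[1]))
    d4 = cross2((p2[0] - p1[0], p2[1] - p1[1]), (p4[0] - p1[0], p4[1] - p1[1]))
    # the endpoints of each segment must lie on opposite sides of the other segment
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def polyline_intersection(poly_a, poly_b):
    """Test every segment of poly_a against every segment of poly_b."""
    for a0, a1 in zip(poly_a, poly_a[1:]):
        for b0, b1 in zip(poly_b, poly_b[1:]):
            if segments_intersect(a0, a1, b0, b1):
                return True
    return False
```

Note this only tells you *whether* the fitted polylines cross; the answer below addresses *when* the objects would actually meet.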
This is a bit broader topic, so I will focus only on the math/physics part, as I get the feeling the CV/DIP part is already handled by both of you askers (andre ahmed and chris burgees).
For simplicity I am assuming linear movement with constant speed. So how to do this:
obtain the 2D position of each object in 2 separate frames separated by a known time dt
So obtain the 2D center (or corner, or whatever) position on the image for each object in question.
convert them to 3D
Using known camera parameters or known background info about the scene, you can un-project the 2D position on screen into a 3D position relative to the camera. This gets rid of the non-linear interpolation that would otherwise be needed if this were handled as a purely 2D case.
There are more options for how to obtain the 3D position, depending on what you have at your disposal. For example:
- Transformation of 3D objects related to vanishing points and horizon line
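As a concrete (and heavily simplified) illustration of such an un-projection, the sketch below assumes a pinhole camera with known intrinsics `K`, mounted at a known height above a flat ground plane and looking parallel to it; all of these values and assumptions are mine, not something given in the question:

```python
import numpy as np

# hypothetical intrinsics: focal lengths fx = fy = 800 px, principal point (640, 360)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def unproject_to_ground(u, v, K, cam_height):
    """Back-project pixel (u, v) onto a flat ground plane, assuming the camera
    is cam_height above it with its optical axis parallel to the ground.
    Camera coordinates: x right, y down, z forward."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray direction
    if ray[1] <= 0:
        return None          # pixel at or above the horizon: no ground intersection
    t = cam_height / ray[1]  # scale the ray until it reaches the ground plane
    return t * ray           # 3D point in camera coordinates

pt = unproject_to_ground(640, 760, K, cam_height=5.0)  # point on the image's center column
```

With these numbers the pixel 400 px below the principal point maps to a ground point 10 units in front of the camera; any similar back-projection would serve as the `pos` input for the following steps.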
obtain the actual speed of the objects
the speed vector is simply:
vel = ( pos(t+dt) - pos(t) ) / dt
so simply subtract the positions of the same object in 2 consecutive frames and divide by the framerate period (or the interval between the frames used).
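In code, with made-up positions standing in for the 3D points recovered in the previous step, this is just:

```python
import numpy as np

fps = 30.0
dt = 1.0 / fps                           # interval between the two frames

pos_t  = np.array([10.0, 0.0, 40.0])     # object position at time t   (illustrative values)
pos_t2 = np.array([10.0, 0.0, 39.5])     # same object at time t + dt

vel = (pos_t2 - pos_t) / dt              # vel = ( pos(t+dt) - pos(t) ) / dt
```

Here the object closes 0.5 units in one frame, i.e. 15 units per second towards the camera.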
test each pair of objects for collision
This is the funny stuff. Yes, you can solve a system of inequalities like:
| ( pos0 + vel0 * t ) - ( pos1 + vel1 * t ) | <= threshold
but there is a simpler way I used here:
- Collision detection between 2 "linearly" moving objects in WGS84
The idea is to compute the time t where the tested objects are closest together (if they are nearing each other). So we can extrapolate the future position of each object like this:

pos(t) = pos(t0) + vel*(t-t0)

where t is the actual time and t0 is some start time (for example t0 = 0). Let's assume we have 2 objects (pos0, vel0, pos1, vel1) we want to test, so compute the first 2 iterations of their distance:

pos0(0)  = pos0;            pos1(0)  = pos1;            dis0 = | pos1(0)  - pos0(0)  |
pos0(dt) = pos0 + vel0*dt;  pos1(dt) = pos1 + vel1*dt;  dis1 = | pos1(dt) - pos0(dt) |

where dt is some small enough time (to avoid skipping through a collision). Now if (dis0 < dis1) then the objects are moving away, so no collision; if (dis0 == dis1) the objects are not moving, or are moving parallel to each other; and only if (dis0 > dis1) are the objects nearing each other, so we can estimate:

dis(t) = dis0 + (dis1-dis0)*t

and a collision means dis(t) = 0, so we can extrapolate again:

0 = dis0 + (dis1-dis0)*t
(dis0-dis1)*t = dis0
t = dis0 / (dis0-dis1)

where t is the estimated time of collision (measured in multiples of dt, since dis(1) = dis1). Of course all this treats the movement as linear and extrapolates a lot, so it is not accurate, but you can redo it on each consecutive frame, and the result gets more accurate as the time nears the collision... Also, to be sure, you should extrapolate the position of each object at the estimated collision time to verify the result (if they do not actually collide there, the extrapolation was just numerical: the objects were merely nearing each other for a time).
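Putting this bullet together as a small sketch (the function name and the default `dt` are my own choices; note the t above is measured in steps of dt, so it is scaled back to seconds at the end):

```python
import numpy as np

def estimate_collision_time(pos0, vel0, pos1, vel1, dt=0.1):
    """Two-sample distance extrapolation: dis(t) = dis0 + (dis1-dis0)*t,
    collision when dis(t) = 0, i.e. t = dis0 / (dis0-dis1) steps of dt.
    Returns None when the objects are not nearing each other."""
    pos0, vel0 = np.asarray(pos0, float), np.asarray(vel0, float)
    pos1, vel1 = np.asarray(pos1, float), np.asarray(vel1, float)
    dis0 = np.linalg.norm(pos1 - pos0)                              # distance now
    dis1 = np.linalg.norm((pos1 + vel1 * dt) - (pos0 + vel0 * dt))  # distance after dt
    if dis0 <= dis1:
        return None                       # moving apart (or parallel / standing still)
    return dt * dis0 / (dis0 - dis1)      # convert steps of dt back to time units
```

For two objects 10 units apart approaching head-on at 1 unit/s each, this yields about 5 seconds, which you can then verify by extrapolating both positions to that time as suggested above.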
As mentioned before, the conversion to 3D (bullet #2) is not necessary, but it gets rid of the nonlinearities, so simple linear interpolation/extrapolation can be used later on, which greatly simplifies things.