How a LiDAR Sensor's Point Cloud Is Projected into a 2D Image

The "front view" projection. To flatten the LiDAR sensor's "front view" into a 2D image, the points in 3D space must be projected onto a cylindrical surface that can then be unrolled into a plane.
The problem is that doing this naively places the seam of the image directly to the right of the car. It makes more sense to put the seam at the very rear of the car, so that the more important regions to the front and sides remain uninterrupted. Keeping those regions unbroken makes it easier for a convolutional neural network to recognize whole objects within them.
The code below addresses this problem.
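The key trick, sketched here with a few made-up sample points, is to negate y inside np.arctan2 so that the horizontal angle is 0 straight ahead of the car and ±180° directly behind it, which puts the seam of the unrolled cylinder at the rear (the full function later in the article uses exactly this expression):

```python
import numpy as np

# Hypothetical sample points: one directly in front of the car (+x),
# one directly behind it (-x), and one to its left (+y).
x = np.array([10.0, -10.0,  0.0])
y = np.array([ 0.0,   0.0, 10.0])

# Horizontal angle in degrees. Negating y makes the angle 0 straight
# ahead and +/-180 degrees directly behind the car, so the image seam
# falls at the rear rather than at the side.
h_angle = np.degrees(np.arctan2(-y, x))
```

Here `h_angle` comes out as 0 for the front point, ±180° for the rear point, and -90° for the left point, confirming the seam sits behind the car.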
Configuring the scale along each axis. The variables h_res and v_res depend heavily on the LiDAR sensor used. The KITTI dataset was captured with a Velodyne HDL-64E. According to the Velodyne HDL-64E spec sheet, it has the following important characteristics:
A vertical field of view of 26.9°, with a resolution of 0.4°. The vertical FOV is split into +2° above the sensor and -24.9° below it.
A 360° horizontal field of view, with a resolution of 0.08°-0.35° (depending on rotation rate).
A rotation rate selectable between 5 and 20 Hz.
The code can be updated accordingly.
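A sketch of that update, using the HDL-64E numbers (h_res = 0.35 assumes the 20 Hz rotation setting; the point coordinates below are made up for illustration): each projection angle is divided by the sensor's angular resolution, converted to radians, to obtain pixel coordinates.

```python
import numpy as np

h_res = 0.35  # horizontal resolution, assuming the 20 Hz rotation setting
v_res = 0.4   # vertical resolution

# Hypothetical point cloud columns
x_lidar = np.array([10.0, 10.0])
y_lidar = np.array([ 0.0,  5.0])
z_lidar = np.array([-1.0,  0.5])

d_lidar = np.sqrt(x_lidar ** 2 + y_lidar ** 2)  # distance seen from the top

# Convert the resolutions to radians, then divide the projection
# angles by them to get image coordinates.
h_res_rad = np.radians(h_res)
v_res_rad = np.radians(v_res)
x_img = np.arctan2(-y_lidar, x_lidar) / h_res_rad
y_img = np.arctan2(z_lidar, d_lidar) / v_res_rad
```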
However, this leaves roughly half of the points at negative x values, and most of them at negative y values. To project onto a 2D image, the minimum must be moved to (0, 0), so a couple of changes are needed:
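The shift can be derived from the spec sheet's theoretical minimums, as in this sketch (x_img and y_img stand in for the projected coordinates; the values here are illustrative):

```python
import numpy as np

h_res, v_res = 0.35, 0.4
v_fov = (-24.9, 2.0)

# Example projected coordinates (roughly half negative before shifting)
x_img = np.array([-514.0, 0.0, 514.0])
y_img = np.array([-62.0, 0.0, 5.0])

# Theoretical minimums based on the sensor specs
x_min = -360.0 / h_res / 2  # seam at the rear => half the sweep is negative
y_min = v_fov[0] / v_res    # lowest laser angle, in pixels

x_img -= x_min  # shift so the minimum x becomes 0
y_img -= y_min  # shift so the minimum y becomes 0
```

After the shift, every coordinate is non-negative and can be used directly as an image position.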
Plotting the 2D image. Once the 3D points have been projected to 2D coordinates with a minimum of (0, 0), the point data can be plotted as a 2D image.
Full code. All of the code above, combined into a single function.
def lidar_to_2d_front_view(points,
                           v_res,
                           h_res,
                           v_fov,
                           val="depth",
                           cmap="jet",
                           saveto=None,
                           y_fudge=0.0):
    """ Takes points in 3D space from LiDAR data and projects them to a
        2D "front view" image, and saves that image.

    Args:
        points: (np array)
            The numpy array containing the lidar points.
            The shape should be Nx4
            - Where N is the number of points, and
            - each point is specified by 4 values (x, y, z, reflectance)
        v_res: (float)
            Vertical resolution of the lidar sensor used.
        h_res: (float)
            Horizontal resolution of the lidar sensor used.
        v_fov: (tuple of two floats)
            (minimum_negative_angle, max_positive_angle)
        val: (str)
            What value to use to encode the points that get plotted.
            One of {"depth", "height", "reflectance"}
        cmap: (str)
            Color map to use to color code the `val` values.
            NOTE: Must be a value accepted by matplotlib's scatter function
            Examples: "jet", "gray"
        saveto: (str or None)
            If a string is provided, it saves the image as this filename.
            If None, then it just shows the image.
        y_fudge: (float)
            A hacky fudge factor to use if the theoretical calculations of
            vertical range do not match the actual data.
            For a Velodyne HDL 64E, set this value to 5.
    """
    # DUMMY PROOFING
    assert len(v_fov) == 2, "v_fov must be list/tuple of length 2"
    assert v_fov[0] <= 0, "first element in v_fov must be 0 or negative"
    assert val in {"depth", "height", "reflectance"}, \
        'val must be one of {"depth", "height", "reflectance"}'

    x_lidar = points[:, 0]
    y_lidar = points[:, 1]
    z_lidar = points[:, 2]
    r_lidar = points[:, 3]  # Reflectance
    # Distance relative to origin when looked from top
    d_lidar = np.sqrt(x_lidar ** 2 + y_lidar ** 2)
    # Absolute distance relative to origin
    # d_lidar = np.sqrt(x_lidar ** 2 + y_lidar ** 2 + z_lidar ** 2)

    v_fov_total = -v_fov[0] + v_fov[1]

    # Convert to Radians
    v_res_rad = v_res * (np.pi / 180)
    h_res_rad = h_res * (np.pi / 180)

    # PROJECT INTO IMAGE COORDINATES
    x_img = np.arctan2(-y_lidar, x_lidar) / h_res_rad
    y_img = np.arctan2(z_lidar, d_lidar) / v_res_rad

    # SHIFT COORDINATES TO MAKE 0,0 THE MINIMUM
    x_min = -360.0 / h_res / 2  # Theoretical min x value based on sensor specs
    x_img -= x_min              # Shift
    x_max = 360.0 / h_res       # Theoretical max x value after shifting

    y_min = v_fov[0] / v_res    # Theoretical min y value based on sensor specs
    y_img -= y_min              # Shift
    y_max = v_fov_total / v_res # Theoretical max y value after shifting

    y_max += y_fudge            # Fudge factor if the calculations based on
                                # spec sheet do not match the range of
                                # angles collected by the data.

    # WHAT DATA TO USE TO ENCODE THE VALUE FOR EACH PIXEL
    if val == "reflectance":
        pixel_values = r_lidar
    elif val == "height":
        pixel_values = z_lidar
    else:
        pixel_values = -d_lidar  # Depth

    # PLOT THE IMAGE
    dpi = 100  # Image resolution
    fig, ax = plt.subplots(figsize=(x_max / dpi, y_max / dpi), dpi=dpi)
    ax.scatter(x_img, y_img, s=1, c=pixel_values, linewidths=0, alpha=1,
               cmap=cmap)
    ax.set_facecolor((0, 0, 0))  # Set regions with no points to black
    ax.axis('scaled')            # {equal, scaled}
    ax.xaxis.set_visible(False)  # Do not draw axis tick marks
    ax.yaxis.set_visible(False)  # Do not draw axis tick marks
    plt.xlim([0, x_max])  # Prevent drawing empty space outside horizontal FOV
    plt.ylim([0, y_max])  # Prevent drawing empty space outside vertical FOV

    if saveto is not None:
        fig.savefig(saveto, dpi=dpi, bbox_inches='tight', pad_inches=0.0)
    else:
        fig.show()
Here are some usage examples:
import matplotlib.pyplot as plt
import numpy as np

hres = 0.35          # horizontal resolution (assuming 20Hz setting)
vres = 0.4           # vertical res
vfov = (-24.9, 2.0)  # Field of view (-ve, +ve) along vertical axis
y_fudge = 5          # y fudge factor for velodyne HDL 64E

lidar_to_2d_front_view(lidar, v_res=vres, h_res=hres, v_fov=vfov, val="depth",
                       saveto="/tmp/lidar_depth.png", y_fudge=y_fudge)

lidar_to_2d_front_view(lidar, v_res=vres, h_res=hres, v_fov=vfov, val="height",
                       saveto="/tmp/lidar_height.png", y_fudge=y_fudge)

lidar_to_2d_front_view(lidar, v_res=vres, h_res=hres, v_fov=vfov,
                       val="reflectance",
                       saveto="/tmp/lidar_reflectance.png", y_fudge=y_fudge)
This produces the following three images:
depth
height
reflectance
Next steps
Creating each image is currently very slow, probably because of matplotlib, which does not cope well with huge numbers of scatter points.
An implementation based on numpy or PIL is therefore needed.
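One possible direction, sketched here under the assumption of an Nx4 float input array (this is my own sketch, not the article's code), is to skip matplotlib entirely: quantize the projected coordinates to integer pixel indices and write depth-encoded values straight into a numpy image array, which PIL or matplotlib's imsave can then write to disk.

```python
import numpy as np

def lidar_to_2d_front_view_fast(points, v_res=0.4, h_res=0.35,
                                v_fov=(-24.9, 2.0)):
    """Sketch of a matplotlib-free projection: returns a 2D uint8 image
    whose pixel intensities encode depth (near = bright)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d = np.sqrt(x ** 2 + y ** 2)  # distance seen from the top

    # Angle -> integer pixel index (negated vertical angle so that the
    # highest laser ends up at image row 0).
    x_img = (np.arctan2(-y, x) / np.radians(h_res)).astype(np.int32)
    y_img = (-np.arctan2(z, d) / np.radians(v_res)).astype(np.int32)

    # Shift the theoretical minimums to 0
    x_img -= int(np.floor(-360.0 / h_res / 2))
    y_img -= int(np.floor(-v_fov[1] / v_res))

    # Allocate the image from the theoretical angular extents
    w = int(np.ceil(360.0 / h_res)) + 1
    h = int(np.ceil((v_fov[1] - v_fov[0]) / v_res)) + 1
    img = np.zeros((h, w), dtype=np.uint8)

    # Clip indices and write depth as brightness, saturating at 80 m
    x_img = np.clip(x_img, 0, w - 1)
    y_img = np.clip(y_img, 0, h - 1)
    pixel = (255.0 * (1.0 - np.minimum(d, 80.0) / 80.0)).astype(np.uint8)
    img[y_img, x_img] = pixel
    return img
```

Because this is pure array indexing, it avoids matplotlib's per-point scatter overhead entirely; the 80 m saturation distance is an arbitrary choice for the depth encoding.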
Testing. To load the PCD files, python-pcl needs to be installed.
sudo apt-get install python-pip
sudo apt-get install python-dev
sudo pip install cython==0.25.2
sudo pip install numpy
sudo apt-get install git
git clone https://github.com/strawlab/python-pcl.git
cd python-pcl/
python setup.py build_ext -i
python setup.py install
Unfortunately, the sudo pip install cython==0.25.2 step fails with an error:
"Cannot uninstall 'Cython'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall."
As a workaround, install pypcd instead:
pip install pypcd
See https://pypi.org/project/pypcd/ , which gives the following usage example:
Example:

import pypcd
# also can read from file handles.
pc = pypcd.PointCloud.from_path('foo.pcd')
# pc.pc_data has the data as a structured array
# pc.fields, pc.count, etc have the metadata

# center the x field
pc.pc_data['x'] -= pc.pc_data['x'].mean()

# save as binary compressed
pc.save_pcd('bar.pcd', compression='binary_compressed')
Inspecting the data structure:
>>> lidar = pypcd.PointCloud.from_path('~/pointcloud-processing/000000.pcd')
>>> lidar.pc_data
array([(18.323999404907227, 0.04899999871850014, 0.8289999961853027, 0.0),
       (18.3439998626709, 0.10599999874830246, 0.8289999961853027, 0.0),
       (51.29899978637695, 0.5049999952316284, 1.944000005722046, 0.0),
       ...,
       (3.7139999866485596, -1.3910000324249268, -1.7330000400543213, 0.4099999964237213),
       (3.9670000076293945, -1.4739999771118164, -1.8569999933242798, 0.0),
       (0.0, 0.0, 0.0, 0.0)],
      dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4'), ('intensity', '<f4')])
>>> lidar.pc_data['x']
array([ 18.3239994 ,  18.34399986,  51.29899979, ...,   3.71399999,
         3.96700001,   0.        ], dtype=float32)
Loading the PCD file:
import pypcd
lidar = pypcd.PointCloud.from_path('000000.pcd')
Since pc_data is a structured array, x_lidar is read by field name rather than by column index:
x_lidar = points['x']
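Alternatively, since pypcd returns a structured array rather than the Nx4 matrix the earlier function expects, the named fields can be stacked into a plain float array first so that lidar_to_2d_front_view() works unchanged (this conversion is my own suggestion, not from the original article):

```python
import numpy as np

# Hypothetical structured array shaped like pypcd's pc_data
pc_data = np.array([(18.3, 0.05, 0.83, 0.0),
                    (3.71, -1.39, -1.73, 0.41)],
                   dtype=[('x', '<f4'), ('y', '<f4'),
                          ('z', '<f4'), ('intensity', '<f4')])

# Stack the named fields into the Nx4 layout that
# lidar_to_2d_front_view() expects
points = np.stack([pc_data[f] for f in ('x', 'y', 'z', 'intensity')],
                  axis=-1)
```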
Results:
depth
height
reflectance

