Human Detection in Depth Map Created from Point Cloud
dc.contributor.author | Ligocki, Adam | cs |
dc.contributor.author | Žalud, Luděk | cs |
dc.date.accessioned | 2022-02-14T11:55:41Z | |
dc.date.available | 2022-02-14T11:55:41Z | |
dc.date.issued | 2022-04-02 | cs |
dc.description.abstract | This paper deals with human detection in LiDAR data using the YOLO object detection neural network architecture. RGB-based object detection is the most studied topic in the field of neural networks and autonomous agents. However, these models are very sensitive to even minor changes in weather or lighting conditions if the training data do not cover such situations. This paper proposes using LiDAR data as a redundant, more condition-invariant source of object detections around the autonomous agent. We used a publicly available real-traffic dataset that simultaneously captures data from an RGB camera and 3D LiDAR sensors during a clear-sky day and a rainy night, and we aggregated the LiDAR data over short periods to increase the density of the point cloud. We then projected these point clouds into 2D image frames using several projection models, namely the pinhole camera model, cylindrical projection, and bird's-eye-view projection, and annotated all the images. As the main experiment, we trained several YOLOv5 neural networks on data captured during the day and validated the models on mixed day and night data to study their robustness and the information gain under changing conditions of the input data. The results show that the LiDAR-based models perform significantly better under changed weather conditions than the RGB-based models. | en |
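The abstract describes projecting an aggregated LiDAR point cloud into 2D depth images via several projection models before feeding them to YOLOv5. The following is a minimal illustrative sketch of two of these projections (pinhole and cylindrical), not the authors' implementation; the intrinsic parameters, image sizes, and vertical field of view are hypothetical placeholders and would need to be taken from the actual sensors and dataset.

```python
# Sketch only: project a LiDAR point cloud (N, 3) into 2D depth images.
# All camera intrinsics, image sizes, and FOV values below are assumed, not from the paper.
import numpy as np

def pinhole_depth_map(points, fx=600.0, fy=600.0, cx=320.0, cy=240.0, w=640, h=480):
    """Pinhole projection: points in camera coordinates (x right, y down, z forward)."""
    depth = np.zeros((h, w), dtype=np.float32)
    pts = points[points[:, 2] > 0]                      # keep points in front of the camera
    u = np.round(fx * pts[:, 0] / pts[:, 2] + cx).astype(int)
    v = np.round(fy * pts[:, 1] / pts[:, 2] + cy).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for uu, vv, z in zip(u[valid], v[valid], pts[valid, 2]):
        if depth[vv, uu] == 0 or z < depth[vv, uu]:     # keep the nearest return per pixel
            depth[vv, uu] = z
    return depth

def cylindrical_depth_map(points, w=1024, h=64, v_fov=(-25.0, 3.0)):
    """Cylindrical (range-image) projection: points in sensor coordinates
    (x forward, y left, z up); azimuth maps to columns, elevation to rows."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                      # [-pi, pi]
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))
    u = (((np.pi - azimuth) / (2 * np.pi)) * w).astype(int) % w
    v = (v_fov[1] - elevation) / (v_fov[1] - v_fov[0]) * (h - 1)
    v = np.clip(np.round(v).astype(int), 0, h - 1)
    depth = np.zeros((h, w), dtype=np.float32)
    for uu, vv, rr in zip(u, v, r):
        if depth[vv, uu] == 0 or rr < depth[vv, uu]:
            depth[vv, uu] = rr
    return depth
```

Either depth image could then be saved as a grayscale frame and annotated for YOLO training in the usual way; the paper's actual projection and annotation pipeline may differ in details.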
dc.format | text | cs |
dc.format.extent | 1-12 | cs |
dc.format.mimetype | application/pdf | cs |
dc.identifier.citation | Lecture Notes in Computer Science. 2022, p. 1-12. | en |
dc.identifier.doi | 10.1007/978-3-030-98260-7_16 | cs |
dc.identifier.isbn | 9783030982607 | cs |
dc.identifier.issn | 0302-9743 | cs |
dc.identifier.other | 172819 | cs |
dc.identifier.uri | http://hdl.handle.net/11012/203906 | |
dc.language.iso | en | cs |
dc.relation | "European Union (EU)" & "Horizon 2020" | en |
dc.relation.ispartof | Lecture Notes in Computer Science | cs |
dc.relation.projectId | info:eu-repo/grantAgreement/EC/H2020/857306/EU//RICAIP | en |
dc.relation.projectId | info:eu-repo/grantAgreement/EC/H2020/877539/EU//ArchitectECA2030 | en |
dc.rights | Creative Commons Attribution 4.0 International | cs |
dc.rights.access | openAccess | cs |
dc.rights.sherpa | http://www.sherpa.ac.uk/romeo/issn/0302-9743/ | cs |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | cs |
dc.subject | LiDAR data | en |
dc.subject | RGB camera | en |
dc.subject | Point Cloud | en |
dc.subject | projection | en |
dc.subject | YOLO | en |
dc.subject | Object Detection | en |
dc.subject | Neural Network | en |
dc.subject | DCNN | en |
dc.title | Human Detection in Depth Map Created from Point Cloud | en |
dc.type.driver | conferenceObject | en |
dc.type.status | Peer-reviewed | en |
dc.type.version | submittedVersion | en |
sync.item.dbid | VAV-172819 | en |
sync.item.dbtype | VAV | en |
sync.item.insts | 2022.06.14 16:54:21 | en |
sync.item.modts | 2022.06.14 16:14:14 | en |
thesis.grantor | Vysoké učení technické v Brně. Středoevropský technologický institut VUT. Kybernetika a robotika | cs |
thesis.grantor | Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií. Ústav automatizace a měřicí techniky | cs |
Files
Original bundle
- Name: MESAS2021.pdf
- Size: 10.8 MB
- Format: Adobe Portable Document Format
- Description: MESAS2021.pdf