2021 · DOI: 10.1007/978-3-030-87672-2_11

Chances of Interpretable Transfer Learning for Human Activity Recognition in Warehousing

Cited by 3 publications (3 citation statements) · References 25 publications
“…
Domain | Task | Representative datasets | Related work
… | Replacing People for Multiple Operations and Services | Roboturk [189], PaLI-X [190], Robonet [191] | [192], [193], [194]
V2X | Collaborative Sensing and Decision Making based on Wireless Communication | Dair-V2X [195], V2X-Sim [196], OPV2V [197] | [198], [199], [200]
UAV | Aircraft with Multiple Sensors for Aerial Operations | UAV123 [201], Blackbird [202], UAVid [203] | [204], [205], [206]
USV | Operation and Control of the Unmanned Surface Vehicles | MODS [207], MaSTr1325 [208], USVInland [209] | [210], [211]
Transportation | Allocation and Coordination of Products, Raw Materials, and Services in the Supply Chain | NEO Benchmark Datasets [212], LARa, Divvy data [213] | [214], [215], [216]

…reaction. Closed-loop evaluation requires a real road environment and real-time interactive feedback from the vehicle; because the cycle from model development to deployment is long and vehicle safety cannot be guaranteed during development, a truly closed-loop evaluation is infeasible. As a fallback, simulation-based closed-loop evaluation is a current research focus: a virtual road environment, or a digital twin of a real road scene, is built through computer graphics or implicit representations, and the model-driven vehicle is then run on that road. In nuPlan [18], the closed-loop evaluation metrics include traffic-rule violations, similarity to human driving, vehicle-dynamics cost, completion of the driving goal, and scenario-specific evaluations. However, in simulation-based closed-loop evaluation the traffic flow detaches completely from the dataset once the simulation starts, and the surrounding vehicles are usually controlled by rule-based, conservatively tuned models, so the realism of the interaction between the ego vehicle and the surrounding vehicles is limited. Under a reinforcement-learning paradigm, closed-loop evaluation can also be used to help an autonomous driving system learn to drive, but the lack of sufficiently realistic simulation environments keeps RL-based driving systems some distance from real-world deployment [188]. The robotics field has many public datasets; among the most influential are Roboturk [189], the Cornell Grasp Dataset [219], and RoboNet [191]. Roboturk [189] contains 111 hours of robot manipulation data for three challenging manipulation tasks, enabling a robot to learn an optimal policy for completing each task by interacting with its environment; the Cornell Grasp Dataset [219] provides 885 grasping images captured from multiple viewpoints and serves as a benchmark for computer-vision grasping, planning, and control methods; RoboNet [191] contains 15 million video frames collected by different robots interacting with various objects in tabletop environments, including images, arm poses, force-sensor readings, and gripper states. Vehicle-to-everything (V2X) networking is a convergence of new-generation communication technology with autonomous driving; its earliest designs built on Dedicated Short-Range Communications (DSRC) [220], promoted in the United States, and Cooperative Intelligent Transport Systems (C-ITS) [221], promoted in Europe, to provide vehicles and other devices with…”
Section: Robotics (unclassified)
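The nuPlan-style closed-loop metrics quoted above (rule violations, similarity to human driving, dynamics cost, goal completion) are typically combined into a single per-scenario score. Below is a minimal sketch of such a weighted aggregation; the metric names, weights, and the hard gate on rule compliance are illustrative assumptions, not nuPlan's actual scoring formula.

```python
from dataclasses import dataclass

@dataclass
class ClosedLoopMetrics:
    """Per-scenario metrics normalized to [0, 1]; names are illustrative."""
    rule_compliance: float    # 1.0 = no traffic-rule violations
    human_similarity: float   # closeness to the human driving log
    dynamics_comfort: float   # low jerk/acceleration cost mapped to [0, 1]
    goal_completion: float    # fraction of the route/goal achieved

def closed_loop_score(m: ClosedLoopMetrics) -> float:
    """Weighted aggregate that hard-fails on safety-critical infractions,
    as closed-loop benchmarks commonly gate the score on rule compliance."""
    if m.rule_compliance == 0.0:  # e.g. collision or red-light violation
        return 0.0
    weights = {"human_similarity": 0.3, "dynamics_comfort": 0.3, "goal_completion": 0.4}
    base = (weights["human_similarity"] * m.human_similarity
            + weights["dynamics_comfort"] * m.dynamics_comfort
            + weights["goal_completion"] * m.goal_completion)
    return base * m.rule_compliance

print(closed_loop_score(ClosedLoopMetrics(1.0, 0.8, 0.9, 1.0)))  # 0.91
```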
“…Thus, explaining the reasonableness of the prediction decisions is essential. Recently, the theory of explainable and interpretable ML/DL has been attracting growing interest among academic researchers, not only for speech processing but also for other applications [192,193]. For instance, the study in [194] presents the first attempt to introduce interpretable explanations for DTL in sequential tasks.…”
Section: Interpretation Of Dtl Models (mentioning, confidence: 99%)
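As a concrete illustration of the kind of post-hoc interpretation discussed above, one model-agnostic option for a transferred (fine-tuned) sensor model is permutation importance: shuffle one input channel at a time across windows and measure the accuracy drop. This is a minimal sketch of that general technique, not the specific method of [194]; the `model.predict` interface returning label predictions is an assumption.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=5, seed=None):
    """Channel-level importance for a transferred activity classifier.

    X: (n_windows, n_timesteps, n_channels) sensor windows
    y: (n_windows,) activity labels
    Returns the mean accuracy drop per channel when that channel is shuffled.
    """
    rng = np.random.default_rng(seed)
    base_acc = np.mean(model.predict(X) == y)
    drops = np.zeros(X.shape[2])
    for c in range(X.shape[2]):
        for _ in range(n_repeats):
            Xp = X.copy()
            # Shuffle channel c across windows to break its link to the labels.
            Xp[:, :, c] = Xp[rng.permutation(len(Xp)), :, c]
            drops[c] += base_acc - np.mean(model.predict(Xp) == y)
    return drops / n_repeats  # larger drop => the model relies more on that channel
```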
“…Sensor-based HAR analyzes raw data from biosensors and remote monitoring [15], [16], whereas vision-based HAR analyzes pictures or videos captured by optical components [17], [18]. Wearable devices are exemplary instances of sensor-based HAR, since they are worn on the body to automatically detect and track actions such as sitting, walking, jumping, and relaxing [19], [20].…”
Section: Har Has Been An Active Area Of Research In Computer Vision A... (mentioning, confidence: 99%)
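The sensor-based pipeline described in that statement is commonly implemented by segmenting the raw wearable signal into fixed-length windows, extracting simple per-window statistics, and training a classifier. A minimal sketch with scikit-learn follows; the window length, feature set, and synthetic two-activity stream are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windows(signal, labels, size=128, step=64):
    """Slice a (n_samples, n_channels) accelerometer stream into overlapping
    windows, labelling each window by its majority activity label."""
    X, y = [], []
    for start in range(0, len(signal) - size + 1, step):
        w = signal[start:start + size]
        lab = labels[start:start + size]
        # Per-channel mean/std/min/max as simple time-domain features.
        X.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
        y.append(np.bincount(lab).argmax())
    return np.array(X), np.array(y)

# Toy stream: 3-axis accelerometer with alternating activities
# (0 = sitting, 1 = walking); purely synthetic placeholder data.
rng = np.random.default_rng(0)
sig = rng.normal(size=(10_000, 3))
lab = (np.arange(10_000) // 2_500 % 2).astype(int)

X, y = windows(sig, lab)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```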