Autoware: cross-compile with CUDA support
Hi everyone! I have been working with Autoware on my Jetson TX2 board, and everything works fine when I compile natively. Now I want to cross-compile Autoware on my desktop computer. I followed all the steps I could find on the Autoware GitLab and in the forums, and I was able to cross-compile Autoware 1.12 with the Docker image. However, I noticed that CUDA is not enabled, and I really need it.
Is there any way to add CUDA support to the Docker cross-compilation? I'm fairly new to Docker, so it's hard for me to tell whether a dependency is missing or whether I need to set an environment variable. When I add AUTOWARE_COMPILE_WITH_CUDA=1 to the cross-compile script, I get the following error:
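For reference, this is a sketch of how the flag can be passed on the command line instead of editing the script, assuming the cross-compile wrapper propagates environment variables down to the colcon invocation (that propagation is an assumption, not something I have verified for this script):

```shell
# Hypothetical alternative to editing the script: export the flag in the
# environment before invoking the cross-compile wrapper. Whether the
# wrapper forwards it into the build container is an assumption.
AUTOWARE_COMPILE_WITH_CUDA=1 ./colcon_release_cross kinetic generic-aarch64
```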
~/autoware.ai/ros$ ./colcon_release_cross kinetic generic-aarch64
Starting >>> autoware_build_flags
Starting >>> autoware_msgs
Starting >>> vector_map_msgs
Starting >>> autoware_config_msgs
Starting >>> tablet_socket_msgs
Starting >>> autoware_system_msgs
Starting >>> autoware_can_msgs
Starting >>> gnss
Finished <<< autoware_build_flags [16.3s]
Finished <<< gnss [16.4s]
Starting >>> ds4_msgs
Starting >>> custom_msgs
Finished <<< autoware_can_msgs [24.4s]
Finished <<< autoware_system_msgs [24.4s]
Finished <<< tablet_socket_msgs [24.4s]
Starting >>> ds4_driver
Starting >>> autoware_health_checker
Starting >>> gazebo_camera_description
Finished <<< autoware_config_msgs [31.0s]
Starting >>> gazebo_imu_description
Finished <<< autoware_msgs [31.0s]
Starting >>> amathutils_lib
Finished <<< ds4_msgs [16.9s]
Starting >>> astar_search
Finished <<< vector_map_msgs [35.8s]
Starting >>> vector_map
Finished <<< gazebo_camera_description [11.4s]
Finished <<< ds4_driver [11.4s]
Starting >>> ndt_cpu
Starting >>> ndt_tku
Finished <<< gazebo_imu_description [16.4s]
Finished <<< custom_msgs [30.8s]
Starting >>> pcl_omp_registration
Starting >>> fastvirtualscan
Finished <<< amathutils_lib [45.3s]
Starting >>> kitti_box_publisher
Finished <<< ndt_tku [47.3s]
Finished <<< vector_map [47.3s]
Starting >>> vector_map_server
Starting >>> map_file
--- stderr: astar_search
/home/autoware/Autoware/src/autoware/core_planning/astar_search/test/src/test_class.cpp: In constructor 'TestClass::TestClass()':
/home/autoware/Autoware/src/autoware/core_planning/astar_search/test/src/test_class.cpp:54:25: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int idx = 0; idx < costmap_.info.width*costmap_.info.height; ++idx)
^
/home/autoware/Autoware/src/autoware/core_planning/astar_search/test/src/test_class.cpp:60:25: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int row = 0; row < costmap_.info.height; ++row)
^
/home/autoware/Autoware/src/autoware/core_planning/astar_search/test/src/test_class.cpp:62:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int col = 0; col < costmap_.info.width; ++col)
^
/home/autoware/Autoware/src/autoware/core_planning/astar_search/test/src/test_astar_search.cpp: In member function 'virtual void TestSuite_checkSetPath_Test::TestBody()':
/home/autoware/Autoware/src/autoware/core_planning/astar_search/test/src/test_astar_search.cpp:143:25: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int row = 0; row < test_obj_.costmap_.info.height; ++row)
^
/home/autoware/Autoware/src/autoware/core_planning/astar_search/test/src/test_astar_search.cpp:145:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int col = 0; col < test_obj_.costmap_.info.width; ++col)
^
---
Finished <<< astar_search [57.9s]
--- stderr: ndt_cpu
** WARNING ** io features related to openni2 will be disabled
** WARNING ...
I don't see any CUDA tags for the Autoware arm64v8 repo, although the (implicit) amd64 repo does seem to have CUDA tags. It looks like the amd64 images get CUDA by inheriting FROM NVIDIA's opengl parent image. There appears to be beta arm64v8 support in the nvidia-docker plugin, though, so you could try swapping in an appropriate NVIDIA parent image.
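A minimal sketch of the parent-image swap described above. All image names, tags, and the build-arg name here are assumptions to illustrate the idea; check Docker Hub / NGC for the actual arm64 CUDA-capable bases and inspect the Autoware Dockerfile for how its base image is selected:

```shell
# Hedged sketch: rebuild the arm64 cross-compile image on top of an
# NVIDIA base that ships CUDA libraries, instead of the plain arm64v8 base.
# Image tags below are assumptions, not verified Autoware tags.

# If the Autoware Dockerfile exposes its parent via a build arg
# (the arg name FROM_IMAGE is hypothetical), override it at build time:
docker build \
    --build-arg FROM_IMAGE=nvcr.io/nvidia/l4t-base:r32.2 \
    -t autoware/build:arm64-kinetic-cuda .

# If there is no such build arg, edit the FROM line in the Dockerfile
# directly, then rebuild and re-run the cross-compile with CUDA enabled:
AUTOWARE_COMPILE_WITH_CUDA=1 ./colcon_release_cross kinetic generic-aarch64
```

Either way, the key point is the one in the answer: the CUDA libraries have to come in through the parent image, since the arm64v8 Autoware images don't ship them.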