Warmup: Awesome RoboSkill for Offline Visual Navigation #2

chch9907 opened this issue Jan 16, 2024 · 2 comments

@chch9907

Existing robotic navigation methods have become proficient at simultaneous localization and mapping (SLAM) and path planning, not only in large-scale environments but also in dense-crowd scenarios, and AI-powered methods further aim to improve algorithmic efficiency and exploit semantic information. However, several challenges remain to be solved:

  1. Emergent scene representation for embodied AI: Scene (map) representations have long been confined to occupancy grid maps and signed distance fields, but both require rich 3D geometric features and cannot explain unobserved views of the scene. Neural radiance fields and 3D Gaussian splatting have turned out to be promising and effective techniques for handling these issues, benefiting downstream embodied interaction and navigation. However, training such a neural map requires a training dataset, and if the scene is very large the computation cost can be drastically high. Fortunately, KubeEdge is well suited to this: it can curate datasets from different edge devices while feeding abundant cloud computation resources back to those devices.

  2. Goal searching in unknown environments: Given a pre-built map, existing methods handle navigation tasks maturely, even with highly dynamic obstacle avoidance and lifelong SLAM, whereas goal-oriented navigation in unknown real-world environments is still an open problem that needs to be explored. Existing work mainly considers exploiting common sense about the topological relations between objects or rooms in household scenarios. In real-world cases, however, the object arrangement can change dynamically, and the scene can be large and free of objects (e.g., outdoor scenarios). By utilizing the cloud capability of KubeEdge to run large language model (LLM) inference, the edge device can resort to an LLM-based policy to adaptively understand the unknown world (see the first sketch after this list).

  3. Facilitating the deployment of learning-based methods: The sim-to-real gap is a long-standing issue that hinders deploying learning-based methods in real-world application scenarios. This comes from the fact that existing open-source simulation data mainly cover indoor cases and lack outdoor data. Although scene reconstruction can bring a world model of the deployment environment into simulation, it is often troubled by texture defects and the fidelity gap. Some current efforts are therefore dedicated to learning directly from offline data, i.e., from experience, to capture the geometric attributes of the real world. Thanks to the edge-cloud communication capability of KubeEdge, I have also contributed a work on outdoor visual navigation based on offline reinforcement learning, which uses only a visual sensor and trains on a self-collected offline dataset. It also handles the localization-lost issue during navigation with a novel self-correcting method that generates future states in the latent space and estimates their novelty as guidance for searching familiar places (see the second sketch after this list). This work has been submitted to ICRA 2024; once accepted, I will apply for it to become a subproject of KubeEdge. As a preprint version, one can check the progress at https://github.com/KubeEdge4Robotics/ScaleNav.
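
To make point 2 more concrete, below is a minimal sketch of how an edge device might query a cloud-hosted LLM for a goal-search hint. The endpoint URL, payload, and response schema are purely illustrative assumptions, not KubeEdge or ScaleNav APIs.

```python
# Hypothetical sketch: the edge robot asks a cloud-hosted LLM (served behind a
# KubeEdge-managed gateway; the endpoint name is assumed) which candidate
# frontier to explore next while searching for a goal object.
import json
import requests

CLOUD_LLM_URL = "http://cloud-gateway.example:8080/llm/infer"  # assumed endpoint


def suggest_frontier(detected_objects, frontiers, goal):
    """Ask the LLM to pick the most promising frontier given current detections."""
    prompt = (
        f"The robot is searching for '{goal}'. It currently sees: "
        f"{', '.join(detected_objects)}. Candidate frontiers: "
        f"{json.dumps(frontiers)}. Reply with the index of the most promising one."
    )
    resp = requests.post(CLOUD_LLM_URL, json={"prompt": prompt}, timeout=5.0)
    resp.raise_for_status()
    return int(resp.json()["choice"])  # assumed response schema


# Example usage on the edge device:
# idx = suggest_frontier(["sofa", "tv"],
#                        [{"id": 0, "heading": 0.0}, {"id": 1, "heading": 1.57}],
#                        "kitchen")
```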
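
And a minimal sketch of the self-correcting idea in point 3, assuming a learned visual encoder, a latent dynamics model, and a memory of visited latent states; all names are placeholders rather than the actual ScaleNav implementation.

```python
# Minimal sketch: imagine future states in latent space for each candidate
# action, score them by novelty (distance to the nearest visited latent state),
# and steer toward the most familiar one to recover from localization loss.
import numpy as np


def novelty(latent, visited_memory):
    """Novelty = distance to the nearest previously visited latent state."""
    return np.linalg.norm(visited_memory - latent, axis=1).min()


def recover_heading(obs, candidate_actions, encoder, dynamics, visited_memory):
    """Pick the action whose imagined future state looks most familiar."""
    z = encoder(obs)  # encode the current observation into the latent space
    scores = [novelty(dynamics(z, a), visited_memory) for a in candidate_actions]
    return candidate_actions[int(np.argmin(scores))]  # least novel = most familiar
```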

The above are my preliminary thoughts on the potential advantages and applications of KubeEdge. I am eager to hear comments from the community. Thanks!

@JoeyHwong-gk

Thank you for sharing your insights and potential applications of KubeEdge in the field of Offline Visual Navigation.

The idea of leveraging KubeEdge to curate datasets from diverse edge devices aligns with our SIG's goals and could be a valuable addition to the realm of robotic navigation.

We appreciate your dedication and look forward to the progress of your work. I fully support your application for it to be a subproject of KubeEdge. Feel free to contact me if you need any assistance or clarification.

Community members, please share your thoughts and feedback on this issue~

/PTAL @kubeedge/sig-robotics-repo-admins

/assign @Poorunga

@Poorunga
Member

Sounds great! Would you like to share it at the next SIG meeting?
