MultiSensor Data Collection is a mobile app that gathers data from multiple sensors, including GPS, camera, and audio, letting you build a comprehensive dataset locally and transmit it to the cloud for further analysis and storage.
- Alyssa Florence / IME-USP
- Rafael Jeferson Pezzuto Damaceno / IME-USP
- Roberto Marcondes Cesar Jr. / IME-USP
Screenshots: Main Screen, Camera Resolution Settings, Sensor Frequency Settings, General Settings.
The MultiSensor Data Collection app, currently available for Android devices, is part of a larger project called SideSeeing.
The SideSeeing project aims to develop methods based on Computer Vision and Machine Learning for Urban Informatics applications. Its goal is to devise strategies for obtaining and analyzing data related to urban accessibility. The project is expected to consist of six modules:
- Collection and generation of multimodal datasets;
- Preprocessing;
- Labeling;
- Visualization;
- Application of artificial intelligence tasks; and
- Analysis of information for decision-making.
As part of the initial module, the MultiSensor Data Collection app is currently in the testing phase and is available for internal testing only. The app generates multimodal datasets using a phone's video camera and sensors such as the accelerometer, gyroscope, and magnetometer. Current efforts focus on collecting data in regions of Brazil and the United States of America.
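As a rough illustration of how such sensor streams can be captured on Android, the sketch below registers listeners for the accelerometer, gyroscope, and magnetometer through the standard `SensorManager` API. This is a minimal, hedged example and not the app's actual implementation; the `SensorRecorder` class name and the in-memory sample buffer are hypothetical.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Hypothetical helper that streams accelerometer, gyroscope, and magnetometer
// readings into an in-memory buffer. A real app would persist them to storage.
class SensorRecorder(context: Context) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager

    // Each sample: (timestamp in ns, sensor type, x/y/z values)
    private val samples = mutableListOf<Triple<Long, Int, FloatArray>>()

    fun start(samplingPeriodUs: Int = SensorManager.SENSOR_DELAY_GAME) {
        listOf(
            Sensor.TYPE_ACCELEROMETER,
            Sensor.TYPE_GYROSCOPE,
            Sensor.TYPE_MAGNETIC_FIELD
        ).forEach { type ->
            // getDefaultSensor returns null if the device lacks that sensor
            sensorManager.getDefaultSensor(type)?.let { sensor ->
                sensorManager.registerListener(this, sensor, samplingPeriodUs)
            }
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // values holds x, y, z for these three sensor types
        samples.add(Triple(event.timestamp, event.sensor.type, event.values.clone()))
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {
        // Accuracy changes are ignored in this sketch
    }
}
```

In a typical activity, `start()` would be called in `onResume()` and `stop()` in `onPause()` so that sensors are not left running while the app is in the background.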
For more details about the project, visit SideSeeing.