Hi all,
I purchased a Seeed Grove Vision AI Module V2 EVK and tried to upload a custom AI model from the SenseCraft AI web page, but it failed. The log reported: Failed to resize buffer. Requested: 11373728, available 1135736, missing: 10237992. I suspect the SRAM size is the limit, but how can we use the flash for the AI model?
We trained a yolov8n model for 0-9 digit detection (yolov8n_digits.tflite, attached, about 2.9 MB).
We used the example vela config ini file below to optimize the model and generated yolov8n_digits_vela.tflite, about 2.6 MB (also attached).
Running the vela tool with --verbose-allocation shows:
Total SRAM used 11107.03 KiB
Total Off-chip Flash used 2663.14 KiB
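As a quick sanity check (assuming the firmware's error message reports sizes in bytes), the numbers in the upload error line up with vela's SRAM figure:

```shell
# Numbers copied from the upload error log (assumed to be bytes).
requested=11373728   # buffer the model asked for (tensor arena)
available=1135736    # SRAM the firmware could offer
echo "missing:   $((requested - available)) bytes"   # 10237992, matches the log
echo "requested: $((requested / 1024)) KiB"          # ~11107 KiB, matching vela's SRAM total
```

So the device is short by roughly 10 MiB of SRAM; that is an order-of-magnitude gap, not something a different config file alone can bridge.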
Is this a problem with our AI model, or are we using an incorrect vela config ini file?
I also tried uploading this model from the repo via SenseCraft AI: Seeed_Grove_Vision_AI_Module_V2-main\model_zoo\tflm_yolov8_pose\yolov8n_pose_256_vela_3_9_0x3BB000.tflite, and it works well. However, no allocation info is shown by:
vela yolov8n_pose_256_vela_3_9_0x3BB000.tflite --verbose-allocation
The Grove Vision AI Module V2 runs SSCMA (Seeed SenseCraft Model Assistant) firmware by default. The main problem with your model is that its SRAM requirement is too large: the tensor arena for the model on this device must be smaller than about 900 KB. If you want to use the Seeed "no code" environment, you can ask Seeed for help: https://sensecraft.seeed.cc/ai/#/home
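Since the arena mostly holds activations, its size scales roughly with the input area (H×W). A rough sketch of what smaller input sizes would buy, assuming the digits model was exported at the YOLOv8 default 640×640 input (the working pose model from the model zoo uses 256×256):

```shell
# Back-of-envelope estimate: arena SRAM scales roughly with input area.
# 11107.03 KiB is vela's "Total SRAM used" for the digits model at the
# assumed 640x640 input.
for imgsz in 320 256 192; do
  awk -v kib=11107.03 -v n="$imgsz" 'BEGIN {
    printf "%d px -> ~%.1f KiB SRAM (rough estimate)\n", n, kib * (n / 640) ^ 2
  }'
done
```

Even at 192×192 the estimate stays above the ~900 KB budget, so reducing input resolution alone may not be enough; combining a smaller input size with full int8 quantization or a smaller backbone is likely needed.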
Please support us, thanks.
yolov8n model.zip
; file: my_vela_cfg.ini
; -----------------------------------------------------------------------------
; Vela configuration file
; -----------------------------------------------------------------------------
; System Configuration
; My_Sys_Cfg
[System_Config.My_Sys_Cfg]
core_clock=400e6
axi0_port=Sram
axi1_port=OffChipFlash
Sram_clock_scale=1.0
Sram_burst_length=32
Sram_read_latency=16
Sram_write_latency=16
Dram_clock_scale=0.75
Dram_burst_length=128
Dram_read_latency=500
Dram_write_latency=250
OnChipFlash_clock_scale=0.25
OffChipFlash_clock_scale=0.015625
OffChipFlash_burst_length=32
OffChipFlash_read_latency=64
OffChipFlash_write_latency=64
; -----------------------------------------------------------------------------
; Memory Mode
; My_Mem_Mode_Parent
[Memory_Mode.My_Mem_Mode_Parent]
const_mem_area=Axi1
arena_mem_area=Axi0
cache_mem_area=Axi0
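For reference, a config like this is passed to vela on the command line. With const_mem_area=Axi1 this config already places the read-only weights in off-chip flash (the 2663.14 KiB figure above); the SRAM total is the activation arena, which cannot be moved to flash. A typical invocation might look like the following (hypothetical file names; assumes the ethos-u-vela package is installed and that the module's NPU is an Ethos-U55 with 64 MACs):

```shell
vela yolov8n_digits.tflite \
  --config my_vela_cfg.ini \
  --system-config My_Sys_Cfg \
  --memory-mode My_Mem_Mode_Parent \
  --accelerator-config ethos-u55-64 \
  --verbose-allocation \
  --output-dir ./vela_out
```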