-
Hello! I am creating a CSV editor for files with millions of records, and I ran into the following problem: because a CSV file can have a variable number of columns (my variables), I end up creating a different struct for each number of variables, like the following:
And so on, up to 100 variables. It works with great performance, but it seems like a weird approach and I'm not sure if there's another solution I'm not seeing right now. Any help with this is greatly appreciated!
Replies: 3 comments 4 replies
-
You can use only the largest struct and cast to that, as long as the base type in the supplied NativeArray stays float and you don't access the fields that aren't mapped (they will contain the next record, which is very handy while debugging). If you use a NativeArray and fill it with all the data, you could use a construct like:
This is far from pretty, but it wraps up nicely in a utility function.
-
Future work should include utilities for this, but you always run into compile-time constraints with fixed structures, which you're almost required to use in Burst. It's difficult to reach a solution that works for any situation and is also performant, and I hope that input like yours will lead to one.
-
I found the following article: I'm not sure how efficient it is, but the following is handy for me:
And when using it, I go for the largest data and cast to the one I want to use later: