I wonder if it’s possible to have a GPU-native implementation, with the tree stored as one-hot vectors whose dimension is given by the total number of operators + value + feature + degree. You would evaluate all operators at each node in the tree and mask the outputs that aren’t used.
However, I’m not sure this would actually work: with deeply nested trees you would need O(2^n) evaluated nodes for a depth of O(n), whereas dynamic expressions would need just n evaluations.
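To make the trade-off concrete, here is a minimal sketch (not from this thread) of the evaluate-everything-and-mask idea, using NumPy as a stand-in for GPU arrays. The operator set, heap layout, and all names (`OPS`, `node_type`, `evaluate`) are assumptions for illustration. A perfect binary tree of depth `d` stored densely this way has `2^(d+1) - 1` node slots, which is the exponential blowup described above:

```python
import numpy as np

# Assumed operator/leaf set for illustration: [+, *, constant, feature].
OPS = ["add", "mul", "const", "feat"]
DEPTH = 3                       # dense tree depth; node count is 2^(DEPTH+1) - 1
N_NODES = 2 ** (DEPTH + 1) - 1  # heap layout: children of node i are 2i+1, 2i+2

rng = np.random.default_rng(0)
# one-hot node-type vector at every node slot
node_type = np.eye(len(OPS))[rng.integers(0, len(OPS), N_NODES)]
const_val = rng.normal(size=N_NODES)    # used when type == const
feat_idx = rng.integers(0, 2, N_NODES)  # used when type == feat

def evaluate(X):
    """X: (batch, n_features). Returns the root node's value per batch row.

    Every operator is computed at every node; the one-hot type vector
    masks out the unused results (a dot product selects one candidate).
    """
    batch = X.shape[0]
    out = np.zeros((N_NODES, batch))
    # bottom-up: in heap layout, children always have larger indices,
    # so iterating from the last slot to the root is a valid order
    for i in range(N_NODES - 1, -1, -1):
        l, r = 2 * i + 1, 2 * i + 2
        lv = out[l] if l < N_NODES else np.zeros(batch)
        rv = out[r] if r < N_NODES else np.zeros(batch)
        candidates = np.stack([
            lv + rv,                       # add
            lv * rv,                       # mul
            np.full(batch, const_val[i]),  # constant leaf
            X[:, feat_idx[i]],             # feature leaf
        ])
        out[i] = node_type[i] @ candidates  # mask via one-hot selection
    return out[0]

X = rng.normal(size=(5, 2))
print(evaluate(X).shape)
```

On a GPU, each heap level could be processed in one batched kernel launch, but the dense layout still allocates and evaluates all `2^(d+1) - 1` slots even when the actual expression is a thin chain of depth `d`, which is exactly the O(2^n)-vs-n concern raised above.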