Is HyperNEAT training recurrent policies? #155
-
(This is a question regarding the TensorNEAT framework; I'm not totally sure whether I should be posting here or in the TensorNEAT GitHub issues.)

I am a bit confused by the implementation of HyperNEAT. If we run the example provided here, do we train a recurrent or a feedforward network? When I look at the substrate, the policy looks feedforward, as `FullSubstrate` defines `query_coors` for input to hidden, hidden to hidden, and hidden to output. But the genome of HyperNEAT is the `RecurrentGenome`, which seems to be capable of connecting any neuron to any other. So one of the following must be true: a) HyperNEAT policies are always recurrent, or b) the substrate restricts the connectivity so the policy is effectively feedforward.

I am also curious about the bias neuron: the implementation requires us to define a bias in the substrate as an input coordinate when calling `FullSubstrate`. Is the first element of `input_coors` interpreted as a bias? And since each node has its own bias, isn't this bias unnecessary?

I also wanted to add that I tried replacing `RecurrentGenome` with `DefaultGenome` in `hyperneat.py` to see what would happen, and got only empty CPPN and policy networks. I am wondering whether this is because something in the implementation prohibits using `DefaultGenome`, or whether it's a matter of retuning.
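For concreteness, the setup I am referring to looks roughly like this (the coordinates are made up, and the import path and exact parameter names are my best reading of the TensorNEAT examples, not verified):

```python
# Sketch only: import path and argument names are assumptions.
from tensorneat.algorithm.hyperneat import HyperNEAT, FullSubstrate  # assumed path

substrate = FullSubstrate(
    # Made-up coordinates; the point is that one extra input coordinate
    # is required, which seems to act as the bias node.
    input_coors=((-1.0, -1.0), (0.0, -1.0), (1.0, -1.0), (0.0, -0.5)),
    hidden_coors=((-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)),
    output_coors=((0.0, 1.0),),
)
```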
Replies: 1 comment
-
Hello, thank you for your interest in EvoX and TensorNEAT.

Actually, the `FullSubstrate` used in the examples is recurrent. Its connection pattern is indeed input to hidden, hidden to hidden, and hidden to output. However, in the hidden-to-hidden connections, it connects all hidden nodes pairwise (recurrently). Therefore, the currently implemented HyperNEAT is always recurrent.

You can find the code representing this behavior here:

```python
aux_coors, aux_keys = cartesian_product(
    hidden_idx, hidden_idx, hidden_coors, hidden_coors
)  # use cartesian product, that means pairwise connections
query_coors[si * sh : si * sh + sh * sh, :] = aux_coors
correspond_keys[si * sh : si * sh + sh * sh, :] = aux_keys
```
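To make the pairwise behavior concrete, here is a small standalone illustration (plain Python, not TensorNEAT code; it mimics what the `cartesian_product` call above does for the hidden coordinates):

```python
# Taking the cartesian product of the hidden coordinates with themselves
# yields every ordered pair, including self-loops, so every hidden node
# can feed every hidden node -- which is what makes the layer recurrent.
hidden_coors = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]

hidden_pairs = [(src, dst) for src in hidden_coors for dst in hidden_coors]
print(len(hidden_pairs))   # 9 = 3 * 3 ordered pairs, not a feedforward pattern
print(hidden_pairs[0])     # ((-1.0, 0.0), (-1.0, 0.0)) -- a self-loop
```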
This is also why you cannot replace `RecurrentGenome` with `DefaultGenome`: the pairwise hidden-to-hidden queries produce recurrent connections, which a feedforward genome cannot represent, so you end up with empty networks.

Regarding your question about bias, the last node in the substrate will become the bias, which can be seen in the following code:
```python
def forward(self, state, transformed, inputs):
    # add bias
    inputs_with_bias = jnp.concatenate([inputs, jnp.array([1])])  # bias is added as the last element here
    res = self.hyper_genome.forward(state, transformed, inputs_with_bias)
    return res
```
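To see why this single bias input suffices, here is a tiny standalone sketch (values are illustrative, not from the library): once a constant 1 is appended, the CPPN-queried weight on that last input plays exactly the role of a per-node bias term.

```python
import jax.numpy as jnp

# Illustrative only: one node receiving two real inputs plus the constant
# bias input. The weight on the last position acts as the node's bias.
inputs = jnp.array([0.3, -0.7])
weights = jnp.array([0.5, 0.2, 0.9])  # 0.9 is the queried "bias weight"

inputs_with_bias = jnp.concatenate([inputs, jnp.array([1.0])])
pre_activation = weights @ inputs_with_bias
# equals 0.5 * 0.3 + 0.2 * (-0.7) + 0.9, i.e. a weighted sum plus a bias
```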
Currently, nodes in the HyperNEAT network (the `hyper_genome` above) do not carry their own bias; only the connection weights are queried from the CPPN, so the bias node in the substrate is not redundant.

That’s my full response to your question. Thank you again for using EvoX and TensorNEAT. If you have any further questions about TensorNEAT, feel free to post them in the TensorNEAT issues section (this ensures I receive email notifications promptly). I’d be happy to assist!