```python
self.attention_relu = tf.reduce_sum(tf.multiply(self.weights['attention_p'],
    tf.nn.relu(self.attention_mul + self.weights['attention_b'])), 2, keep_dims=True)  # None * (M'*(M'-1)) * 1
self.attention_out = tf.nn.softmax(self.attention_relu)
```
I think keep_dims should be False.
If keep_dims is True, attention_relu has shape (batch, m*(m-1)/2, 1), so the subsequent softmax normalizes over the last axis of attention_relu, which is wrong.
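A minimal sketch of the problem, in the repo's TF 1.x style with hypothetical toy logits (the tensor names below are illustrative, not the repo's variables): with the singleton last axis kept, the softmax runs over an axis of size 1 and every attention weight collapses to 1.0.

```python
import tensorflow as tf  # TF 1.x style, matching the repo

# Hypothetical toy logits: batch=1, three interaction pairs.
logits = tf.constant([[1.0, 2.0, 3.0]])       # (batch, num_pairs)
logits_keepdims = tf.expand_dims(logits, -1)  # (batch, num_pairs, 1), as with keep_dims=True

# As written in the repo: softmax over the last axis of size 1 -> all weights become 1.0.
wrong = tf.nn.softmax(logits_keepdims)

# With keep_dims=False the pair axis is the last axis, so the default softmax
# normalizes across the interaction pairs as intended.
right = tf.nn.softmax(logits)

with tf.Session() as sess:
    print(sess.run(wrong))  # [[[1.], [1.], [1.]]]
    print(sess.run(right))  # [[~0.09, ~0.24, ~0.67]], sums to 1 over the pairs axis
```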
Simply changing it to False is not enough; the downstream computation also has to be adjusted accordingly.
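One way the downstream step could be adapted, as a sketch only (assuming the attention weights are used for a weighted sum over the pairwise interaction tensor; the variable names and shapes here are hypothetical, not taken from the repo): compute the softmax over the pairs axis, then re-expand the singleton dimension before multiplying.

```python
import tensorflow as tf  # TF 1.x style sketch

# Hypothetical tensors: a pairwise interaction tensor of shape (batch, num_pairs, K).
batch, num_pairs, K = 2, 3, 4
element_wise_product = tf.random_normal([batch, num_pairs, K])
attention_logits = tf.random_normal([batch, num_pairs])  # result with keep_dims=False

attention_out = tf.nn.softmax(attention_logits)       # normalize over the pairs axis
attention_out = tf.expand_dims(attention_out, -1)     # back to (batch, num_pairs, 1)

# Attention-weighted sum over the interaction pairs -> (batch, K)
afm = tf.reduce_sum(attention_out * element_wise_product, axis=1)

with tf.Session() as sess:
    print(sess.run(afm).shape)  # (2, 4)
```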
Is this repo dead?