Commit b76a7b1

Replace a non-ASCII character with its ASCII characters
paulshealy1 committed Sep 28, 2017
1 parent 912d5bd commit b76a7b1
Showing 1 changed file with 1 addition and 1 deletion.
README.md (2 changes: 1 addition & 1 deletion)
@@ -27,7 +27,7 @@ Using a document length of 300 words and an embedding dimensionality equal to 10
  # Hierarchical Attention Network
  
  This is the architecture proposed in
- [Hierarchical Attention Networks for Document Classification, Yang et al. 2016](https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf). One of its main features is the hierarchical structure, which consists of two levels of bidirectional GRU layers, one for the sequence of words in each sentence, the second for the sequence of sentences in each document. Another feature of the architecture is that it uses an *attention* layer at both the sentence and word levels. The attention mechanism is the one proposed in [Bahdanau et al. 2014](https://arxiv.org/pdf/1409.0473.pdf) and allows for weighting words in each sentence (and sentences in each document) with different degrees of importance according to the context.
+ [Hierarchical Attention Networks for Document Classification, Yang et al. 2016](https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf). One of its main features is the hierarchical structure, which consists of two levels of bidirectional GRU layers, one for the sequence of words in each sentence, the second for the sequence of sentences in each document. Another feature of the architecture is that it uses an *attention* layer at both the sentence and word levels. The attention mechanism is the one proposed in [Bahdanau et al. 2014](https://arxiv.org/pdf/1409.0473.pdf) and allows for weighting words in each sentence (and sentences in each document) with different degrees of importance according to the context.
  
  We have implemented the Hierarchical Attention Network in Keras and Theano by adapting
  [Richard Liao's implementation](https://github.com/richliao/textClassifier/blob/master/textClassifierHATT.py).
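
To make the two-level structure described in the hunk above concrete, here is a minimal sketch of a hierarchical encoder with word- and sentence-level attention in the Keras functional API (shown with the TensorFlow backend for convenience; the repository pairs Keras with Theano). The shapes, hyperparameters, and the `Attention` layer here are illustrative assumptions for this sketch, not code from the repository or from Richard Liao's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative hyperparameters (assumptions, not the repository's values).
MAX_SENTS, MAX_WORDS = 15, 20      # assumed document shape: sentences x words
VOCAB, EMB_DIM, GRU_DIM = 20000, 100, 50

class Attention(layers.Layer):
    """Bahdanau-style additive attention: score each timestep against a
    learned context vector u, softmax the scores, return the weighted sum."""
    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(d, d), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(d,), initializer="zeros")
        self.u = self.add_weight(name="u", shape=(d,), initializer="glorot_uniform")

    def call(self, h):                                    # h: (batch, steps, d)
        uit = tf.tanh(tf.tensordot(h, self.W, axes=1) + self.b)
        ait = tf.nn.softmax(tf.tensordot(uit, self.u, axes=1), axis=-1)
        return tf.reduce_sum(h * tf.expand_dims(ait, -1), axis=1)

# Word level: embed and encode one sentence with a bidirectional GRU,
# then attend over the word representations to get a sentence vector.
sent_in = layers.Input(shape=(MAX_WORDS,), dtype="int32")
x = layers.Embedding(VOCAB, EMB_DIM)(sent_in)
x = layers.Bidirectional(layers.GRU(GRU_DIM, return_sequences=True))(x)
sent_encoder = models.Model(sent_in, Attention()(x))

# Sentence level: apply the word encoder to every sentence, then attend
# over the resulting sentence vectors to produce one document vector.
doc_in = layers.Input(shape=(MAX_SENTS, MAX_WORDS), dtype="int32")
y = layers.TimeDistributed(sent_encoder)(doc_in)
y = layers.Bidirectional(layers.GRU(GRU_DIM, return_sequences=True))(y)
out = layers.Dense(2, activation="softmax")(Attention()(y))  # e.g. 2 classes

model = models.Model(doc_in, out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Wrapping the word-level encoder in `TimeDistributed` runs it once per sentence, which is the usual way to express the document/sentence/word hierarchy in Keras.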