Tracking
- Read about neural language models in:
  - A Neural Probabilistic Language Model
  - Learning simultaneously: distributed word vector representations and a statistical language model (given all previous words, what is the probability distribution for the next word?)
  - Shallow model, but can process large datasets
  - The N previous words are translated via a 1-of-V mapping (see the sketch below)
  - Efficient Estimation of Word Representations in Vector Space
- Read https://code.google.com/p/word2vec/
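A note on the 1-of-V mapping above: each of the N context words is represented as a vector of vocabulary size V that is zero everywhere except at the word's index. A minimal sketch, assuming NumPy and a word-to-index lookup built elsewhere:

```python
import numpy as np

def one_of_v(word_index, vocab_size):
    """1-of-V coding: a length-V vector with a single 1 at the word's index."""
    vec = np.zeros(vocab_size, dtype=np.float32)
    vec[word_index] = 1.0
    return vec

# e.g. a vocabulary of 5 words, encoding the word at index 2:
# one_of_v(2, 5) -> array([0., 0., 1., 0., 0.], dtype=float32)
```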
- A Neural Probabilistic Language Model
- Read through `distance.c` in word2vec to understand the word2vec binary format
- Write Python program to read word vectors with Joseph (see the sketch after this list)
- Set up virtual Python environment on server
- Basic network and experiment infrastructure
- Write glue code connecting sentence files, sliding window, and LevelDB creation
- Create databases, run first experiment on server
- Caffe Multi-class Precision and Recall
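For the `distance.c` / binary-format items above: word2vec's `-binary 1` output starts with a text header (vocabulary size and vector dimensionality), followed by each word as a space-terminated string and its vector as raw 32-bit floats. A minimal Python sketch of reading that format, assuming NumPy (not the project's actual reader):

```python
import numpy as np

def read_word2vec_binary(path):
    """Read word vectors from word2vec's binary output format (-binary 1)."""
    vectors = {}
    with open(path, 'rb') as f:
        vocab_size, dim = (int(x) for x in f.readline().split())
        for _ in range(vocab_size):
            # The word is a byte string terminated by a single space.
            word = b''
            while True:
                ch = f.read(1)
                if ch == b' ':
                    break
                if ch != b'\n':  # skip the newline left over from the previous entry
                    word += ch
            # The vector itself is stored as dim raw 32-bit floats (not float64).
            vec = np.frombuffer(f.read(4 * dim), dtype=np.float32)
            vectors[word.decode('utf-8', errors='replace')] = vec
    return vectors

# vectors = read_word2vec_binary('vectors.bin')  # hypothetical file name
```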
- Read https://code.google.com/p/word2vec/
- Understand `distance.c`, implement Python program to read word vectors
- Explore how to use NLTK for POS tagging (see the sketch after this list)
- Write Python script for demo showcases
- Documentation of demo script
- Net configuration script
- PlainText parser
- Unify Python paths and working directory
- Help messages + README for Python script folder
- Generator usage to reduce memory consumption
- Adapted progress reporting for training instance generation and parsing
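For the NLTK/POS-tagging exploration above, a minimal sketch of tagging a tokenized sentence with NLTK's default English tagger (the resource names are assumptions for a standard NLTK install):

```python
import nltk

# One-time resource downloads (uncomment on a fresh install):
# nltk.download('punkt')
# nltk.download('averaged_perceptron_tagger')

tokens = nltk.word_tokenize("this is a sentence from an ASR transcript without punctuation")
print(nltk.pos_tag(tokens))
# e.g. [('this', 'DT'), ('is', 'VBZ'), ('a', 'DT'), ('sentence', 'NN'), ...]
```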
- Read https://code.google.com/p/word2vec/
- Read some papers
- Write Python script to parse XML and ASR transcript files
- Write Python script to create basic training instances using a sliding window
- Write script to store training instances in LevelDB (see the sketch after this list)
- Ensure valid train and test split
- Caffe Multi-class Precision and Recall
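For the LevelDB script above, a rough sketch of writing serialized training instances into a LevelDB database, assuming the `plyvel` binding; `instances` and `serialize` stand in for the project's sliding-window output and its value encoding (e.g. a serialized Caffe Datum):

```python
import plyvel

def write_leveldb(db_path, instances, serialize):
    """Write (features, label) instances to a LevelDB database under fixed-width keys."""
    db = plyvel.DB(db_path, create_if_missing=True)
    for i, (features, label) in enumerate(instances):
        key = '{:08d}'.format(i).encode('ascii')   # zero-padded keys keep entries ordered
        db.put(key, serialize(features, label))    # value must be bytes
    db.close()
```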
- Pipeline work
- Use POS tags as features
- Introduced a flag to turn POS tagging on/off
- Use parameters from config file
- Refactoring the input parser
- Web Demo
- Presentation
- Debugging our net/architecture/code for issues regarding our low precision and recall
- Several training runs to establish the baseline
- Converting XML and TXT files into line format; POS tags can be preprocessed and written to disk
- Main program gets only a config file as argument (see the sketch after this list)
- Refactoring of line parser
- Preprocessing of POS tagging: write data files with POS tags
- Refactoring of sliding window: punctuation can occur at any position
- Debugging of word2vec: use float32 instead of float64
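For the config-file items above, a small sketch of a main program that takes only a config file as its argument, assuming Python's `configparser`; the section and option names are hypothetical:

```python
import sys
import configparser

config = configparser.ConfigParser()
config.read(sys.argv[1])  # the main program receives only the config file path

# Hypothetical options; the real names are defined by the project's config layout.
window_size = config.getint('window', 'size', fallback=5)
use_pos_tags = config.getboolean('features', 'pos_tags', fallback=False)
train_db_path = config.get('output', 'train_db', fallback='train_db')
```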
- Read https://code.google.com/p/word2vec/
- Read papers to get familiar with deep learning
- Write Python script to parse XML and ASR transcript files
- Write Python script to create basic training instances using a sliding window
- Refactor script for creating training instances and work on pipeline to create instances
- Write Python script for demo showcases
- Use POS tags as features
- LineParser implemented
- Generator usage to reduce memory consumption
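The generator-usage and sliding-window items refer to producing training instances lazily instead of building one large list in memory. A minimal sketch, with the line-based file format and window size as assumptions:

```python
def training_windows(line_file, window_size=5):
    """Lazily yield token windows from a file with one sentence per line.

    Because this is a generator, the full set of training instances is never
    held in memory at once.
    """
    with open(line_file) as f:
        for line in f:
            tokens = line.split()
            for i in range(len(tokens) - window_size + 1):
                yield tokens[i:i + window_size]

# for window in training_windows('sentences.txt'):  # hypothetical file name
#     handle(window)
```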