
Commit 4a705e0

Fix broken links in the models repo (tensorflow#2445)
1 parent 0e09477 commit 4a705e0

16 files changed (+39, -39 lines)


official/resnet/imagenet.py (+2, -2)
@@ -83,11 +83,11 @@ def create_readable_names_for_imagenet_labels():
 (since 0 is reserved for the background class).
 
 Code is based on
-https://github.com/tensorflow/models/blob/master/inception/inception/data/build_imagenet_data.py#L463
+https://github.com/tensorflow/models/blob/master/research/inception/inception/data/build_imagenet_data.py
 """
 
 # pylint: disable=g-line-too-long
-base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/inception/inception/data/'
+base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/research/inception/inception/data/'
 synset_url = '{}/imagenet_lsvrc_2015_synsets.txt'.format(base_url)
 synset_to_human_url = '{}/imagenet_metadata.txt'.format(base_url)
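For orientation, the `base_url` this hunk repoints feeds the two download URLs built just below it. A minimal sketch of how such synset files are typically fetched and parsed, assuming `six.moves.urllib` (an illustration, not the function's verbatim body):

```python
from six.moves import urllib

base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/research/inception/inception/data/'
synset_url = '{}/imagenet_lsvrc_2015_synsets.txt'.format(base_url)

# Download the synset list; each line is a WordNet ID such as 'n01440764'.
filename, _ = urllib.request.urlretrieve(synset_url)
synset_list = [s.strip() for s in open(filename).readlines()]
assert len(synset_list) == 1000  # 1000 foreground classes; 0 is background.
```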

research/adv_imagenet_models/README.md (+2, -2)
@@ -17,7 +17,7 @@ Ensure that you have installed TensorFlow 1.1 or greater
 
 You also need copy of ImageNet dataset if you want to run provided example.
 Follow
-[Preparing the dataset](https://github.com/tensorflow/models/tree/master/slim#Data)
+[Preparing the dataset](https://github.com/tensorflow/models/tree/master/research/slim#Data)
 instructions in TF-Slim library to get and preprocess ImageNet data.
 
 ## Available models
@@ -32,7 +32,7 @@ Inception v3 | Step L.L. on ensemble of 4 models| [ens4_adv_inception_v3_2017_08
 Inception ResNet v2 | Step L.L. on ensemble of 3 models | [ens_adv_inception_resnet_v2_2017_08_18.tar.gz](http://download.tensorflow.org/models/ens_adv_inception_resnet_v2_2017_08_18.tar.gz)
 
 All checkpoints are compatible with
-[TF-Slim](https://github.com/tensorflow/models/tree/master/slim)
+[TF-Slim](https://github.com/tensorflow/models/tree/master/research/slim)
 implementation of Inception v3 and Inception Resnet v2.
 
 ## How to evaluate models on ImageNet test data

research/adversarial_text/README.md (+6, -6)
@@ -135,20 +135,20 @@ adversarial training losses). The training loop itself is defined in
 ### Command-Line Flags
 
 Flags related to distributed training and the training loop itself are defined
-in [`train_utils.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/train_utils.py).
+in [`train_utils.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/train_utils.py).
 
-Flags related to model hyperparameters are defined in [`graphs.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/graphs.py).
+Flags related to model hyperparameters are defined in [`graphs.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/graphs.py).
 
-Flags related to adversarial training are defined in [`adversarial_losses.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/adversarial_losses.py).
+Flags related to adversarial training are defined in [`adversarial_losses.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/adversarial_losses.py).
 
 Flags particular to each job are defined in the main binary files.
 
 ### Data Generation
 
-* Vocabulary generation: [`gen_vocab.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/data/gen_vocab.py)
-* Data generation: [`gen_data.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/data/gen_data.py)
+* Vocabulary generation: [`gen_vocab.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/gen_vocab.py)
+* Data generation: [`gen_data.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/gen_data.py)
 
-Command-line flags defined in [`document_generators.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/data/document_generators.py)
+Command-line flags defined in [`document_generators.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/document_generators.py)
 control which dataset is processed and how.
 
 ## Contact for Issues

research/attention_ocr/README.md (+7, -7)
@@ -43,7 +43,7 @@ cd ..
 4. `train.py` works with both CPU and GPU, though using GPU is preferable. It has been tested with a Titan X and with a GTX980.
 
 [TF]: https://www.tensorflow.org/install/
-[FSNS]: https://github.com/tensorflow/models/tree/master/street
+[FSNS]: https://github.com/tensorflow/models/tree/master/research/street
 
 ## How to use this code
 
@@ -81,7 +81,7 @@ python train.py --checkpoint=model.ckpt-399731
 You need to define a new dataset. There are two options:
 
 1. Store data in the same format as the FSNS dataset and just reuse the
-[python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py)
+[python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/datasets/fsns.py)
 module. E.g., create a file datasets/newtextdataset.py:
 ```
 import fsns
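The hunk stops at `import fsns`, but option 1 can be sketched: wrap the FSNS reader with your own paths and split sizes. Everything below is a hedged illustration with placeholder values, not the README's verbatim example:

```python
# datasets/newtextdataset.py -- hypothetical module reusing the FSNS reader.
import fsns

DEFAULT_DATASET_DIR = 'datasets/data/newtextdataset'  # placeholder path

DEFAULT_CONFIG = {
    'name': 'newtextdataset',
    'splits': {
        'train': {'size': 10000, 'pattern': 'train*'},  # placeholder sizes
        'test': {'size': 1000, 'pattern': 'test*'},
    },
    'charset_filename': 'charset_size=134.txt',
    'image_shape': (150, 600, 3),
    'num_of_views': 4,
    'max_sequence_length': 37,
    'null_code': 42,
}


def get_split(split_name, dataset_dir=None, config=None):
  # Delegate to the FSNS module, substituting our defaults where unset.
  return fsns.get_split(split_name,
                        dataset_dir or DEFAULT_DATASET_DIR,
                        config or DEFAULT_CONFIG)
```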
@@ -151,8 +151,8 @@ To learn how to store a data in the FSNS
 - labels: ground truth label ids, shape=[batch_size x seq_length];
 - labels_one_hot: labels in one-hot encoding, shape [batch_size x seq_length x num_char_classes];
 
-Refer to [python/data_provider.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/data_provider.py#L33)
-for more details. You can use [python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py)
+Refer to [python/data_provider.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/data_provider.py#L33)
+for more details. You can use [python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/datasets/fsns.py)
 as the example.
 
 ## How to use a pre-trained model
@@ -164,14 +164,14 @@ The recommended way is to use the [Serving infrastructure][serving].
 
 Alternatively you can:
 1. define a placeholder for images (or use directly an numpy array)
-2. [create a graph ](https://github.com/tensorflow/models/blob/master/attention_ocr/python/eval.py#L60)
+2. [create a graph ](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/eval.py#L60)
 ```
 endpoints = model.create_base(images_placeholder, labels_one_hot=None)
 ```
-3. [load a pretrained model](https://github.com/tensorflow/models/blob/master/attention_ocr/python/model.py#L494)
+3. [load a pretrained model](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/model.py#L494)
 4. run computations through the graph:
 ```
-predictions = sess.run(endpoints.predicted_chars, 
+predictions = sess.run(endpoints.predicted_chars,
 feed_dict={images_placeholder:images_actual_data})
 ```
 5. Convert character IDs (predictions) to UTF8 using the provided charset file.
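Pulled together, steps 1-4 look roughly like the following sketch (TF 1.x assumed; the image shape and checkpoint path are placeholders, and `model` is assumed to be an attention_ocr model object built beforehand, not verbatim repo code):

```python
import numpy as np
import tensorflow as tf

# 1. Define a placeholder for images (or feed a numpy array directly).
images_placeholder = tf.placeholder(tf.float32, shape=[1, 150, 600, 3])

# 2. Create the graph (per the README snippet above); `model` is assumed
#    to be a pre-built attention_ocr model instance.
endpoints = model.create_base(images_placeholder, labels_one_hot=None)

# 3. Load a pretrained checkpoint into the session.
saver = tf.train.Saver()
with tf.Session() as sess:
  saver.restore(sess, 'model.ckpt-399731')

  # 4. Run computations through the graph.
  images_actual_data = np.random.rand(1, 150, 600, 3).astype(np.float32)
  predictions = sess.run(endpoints.predicted_chars,
                         feed_dict={images_placeholder: images_actual_data})
```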

research/audioset/vggish_slim.py (+1, -1)
@@ -27,7 +27,7 @@
 internally.
 
 For comparison, here is TF-Slim's VGG definition:
-https://github.com/tensorflow/models/blob/master/slim/nets/vgg.py
+https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py
 """
 
 import tensorflow as tf
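For readers who do not follow the link, the TF-Slim VGG style being compared against is built from a few slim idioms; a from-memory sketch, not the file's verbatim contents:

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim


def vgg_block(inputs):
  # arg_scope sets shared layer defaults, repeat stacks identical convs,
  # max_pool2d downsamples -- the pattern slim/nets/vgg.py is written in.
  with slim.arg_scope([slim.conv2d], activation_fn=tf.nn.relu):
    net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
    net = slim.max_pool2d(net, [2, 2], scope='pool1')
  return net
```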

research/im2txt/README.md (+2, -2)
@@ -168,10 +168,10 @@ The *Show and Tell* model requires a pretrained *Inception v3* checkpoint file
 to initialize the parameters of its image encoder submodel.
 
 This checkpoint file is provided by the
-[TensorFlow-Slim image classification library](https://github.com/tensorflow/models/tree/master/slim#tensorflow-slim-image-classification-library)
+[TensorFlow-Slim image classification library](https://github.com/tensorflow/models/tree/master/research/slim#tensorflow-slim-image-classification-library)
 which provides a suite of pre-trained image classification models. You can read
 more about the models provided by the library
-[here](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models).
+[here](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models).
 
 
 Run the following commands to download the *Inception v3* checkpoint.

research/inception/README.md (+1, -1)
@@ -1,4 +1,4 @@
-**NOTE**: For the most part, you will find a newer version of this code at [models/slim](https://github.com/tensorflow/models/tree/master/slim). In particular:
+**NOTE**: For the most part, you will find a newer version of this code at [models/research/slim](https://github.com/tensorflow/models/tree/master/research/slim). In particular:
 
 * `inception_train.py` and `imagenet_train.py` should no longer be used. The slim editions for running on multiple GPUs are the current best examples.
 * `inception_distributed_train.py` and `imagenet_distributed_train.py` are still valid examples of distributed training.

research/object_detection/object_detection_tutorial.ipynb (+2, -2)
@@ -5,7 +5,7 @@
 "metadata": {},
 "source": [
 "# Object Detection Demo\n",
-"Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/installation.md) before you start."
+"Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start."
 ]
 },
 {
@@ -96,7 +96,7 @@
 "\n",
 "Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. \n",
 "\n",
-"By default we use an \"SSD with Mobilenet\" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies."
+"By default we use an \"SSD with Mobilenet\" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies."
 ]
 },
 {
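The loading cell itself is not part of this diff, but swapping `PATH_TO_CKPT` works because the notebook deserializes a frozen `GraphDef` from that path. A sketch of that pattern (TF 1.x assumed; the path below is a placeholder):

```python
import tensorflow as tf

# Any .pb produced by export_inference_graph.py can be pointed to here.
PATH_TO_CKPT = 'ssd_mobilenet_v1_coco/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  # Read the serialized graph and merge it into the default graph.
  with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
    od_graph_def.ParseFromString(fid.read())
    tf.import_graph_def(od_graph_def, name='')
```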

research/ptn/nets/perspective_transform.py (+1, -1)
@@ -26,7 +26,7 @@
 (2) Official implementation in Torch: https://github.com/xcyan/ptnbhwd
 
 (3) 2D Transformer implementation in TF:
-github.com/tensorflow/models/tree/master/transformer
+github.com/tensorflow/models/tree/master/research/transformer
 
 """

research/slim/README.md (+6, -6)
@@ -67,7 +67,7 @@ git clone https://github.com/tensorflow/models/
 
 This will put the TF-Slim image models library in `$HOME/workspace/models/research/slim`.
 (It will also create a directory called
-[models/inception](https://github.com/tensorflow/models/tree/master/inception),
+[models/inception](https://github.com/tensorflow/models/tree/master/research/inception),
 which contains an older version of slim; you can safely ignore this.)
 
 To verify that this has worked, execute the following commands; it should run
@@ -127,7 +127,7 @@ from integer labels to class names.
 
 You can use the same script to create the mnist and cifar10 datasets.
 However, for ImageNet, you have to follow the instructions
-[here](https://github.com/tensorflow/models/blob/master/inception/README.md#getting-started).
+[here](https://github.com/tensorflow/models/blob/master/research/inception/README.md#getting-started).
 Note that you first have to sign up for an account at image-net.org.
 Also, the download can take several hours, and could use up to 500GB.
 
@@ -464,17 +464,17 @@ bazel-bin/tensorflow/examples/label_image/label_image \
 #### The model runs out of CPU memory.
 
 See
-[Model Runs out of CPU memory](https://github.com/tensorflow/models/tree/master/inception#the-model-runs-out-of-cpu-memory).
+[Model Runs out of CPU memory](https://github.com/tensorflow/models/tree/master/research/inception#the-model-runs-out-of-cpu-memory).
 
 #### The model runs out of GPU memory.
 
 See
-[Adjusting Memory Demands](https://github.com/tensorflow/models/tree/master/inception#adjusting-memory-demands).
+[Adjusting Memory Demands](https://github.com/tensorflow/models/tree/master/research/inception#adjusting-memory-demands).
 
 #### The model training results in NaN's.
 
 See
-[Model Resulting in NaNs](https://github.com/tensorflow/models/tree/master/inception#the-model-training-results-in-nans).
+[Model Resulting in NaNs](https://github.com/tensorflow/models/tree/master/research/inception#the-model-training-results-in-nans).
 
 #### The ResNet and VGG Models have 1000 classes but the ImageNet dataset has 1001
 
@@ -509,4 +509,4 @@ image_preprocessing_fn = preprocessing_factory.get_preprocessing(
 #### What hardware specification are these hyper-parameters targeted for?
 
 See
-[Hardware Specifications](https://github.com/tensorflow/models/tree/master/inception#what-hardware-specification-are-these-hyper-parameters-targeted-for).
+[Hardware Specifications](https://github.com/tensorflow/models/tree/master/research/inception#what-hardware-specification-are-these-hyper-parameters-targeted-for).
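The context line of the last hunk header shows a call into slim's `preprocessing_factory`; for orientation, its typical usage is along these lines (a sketch; `raw_image` is a stand-in for a decoded image, and 299x299 matches Inception v3's input size):

```python
import tensorflow as tf
from preprocessing import preprocessing_factory

raw_image = tf.zeros([500, 500, 3], dtype=tf.uint8)  # stand-in decoded JPEG

# Look up the preprocessing function registered for a given model name.
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
    'inception_v3', is_training=False)

# The returned fn crops, resizes, and normalizes a single image tensor.
processed_image = image_preprocessing_fn(raw_image, 299, 299)
```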

research/slim/slim_walkthrough.ipynb (+2, -2)
@@ -36,7 +36,7 @@
 "python -c \"import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once\"\n",
 "```\n",
 "\n",
-"Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from [here](https://github.com/tensorflow/models/tree/master/slim). Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim **before** running this notebook, so that these files are in your python path.\n",
+"Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from [here](https://github.com/tensorflow/models/tree/master/research/slim). Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/research/slim **before** running this notebook, so that these files are in your python path.\n",
 "\n",
 "To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.\n"
 ]
@@ -757,7 +757,7 @@
 "<a id='Pretrained'></a>\n",
 "\n",
 "Neural nets work best when they have many parameters, making them very flexible function approximators.\n",
-"However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list [here](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models).\n",
+"However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list [here](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models).\n",
 "\n",
 "\n",
 "You can either use these models as-is, or you can perform \"surgery\" on them, to modify them for some other task. For example, it is common to \"chop off\" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.\n",

research/syntaxnet/README.md (+3, -3)
@@ -19,7 +19,7 @@ languages.
 This repository is largely divided into two sub-packages:
 
 1. **DRAGNN:
-[code](https://github.com/tensorflow/models/tree/master/syntaxnet/dragnn),
+[code](https://github.com/tensorflow/models/tree/master/research/syntaxnet/dragnn),
 [documentation](g3doc/DRAGNN.md),
 [paper](https://arxiv.org/pdf/1703.04474.pdf)** implements Dynamic Recurrent
 Acyclic Graphical Neural Networks (DRAGNN), a framework for building
@@ -31,7 +31,7 @@ This repository is largely divided into two sub-packages:
 easier to use than the original SyntaxNet implementation.*
 
 1. **SyntaxNet:
-[code](https://github.com/tensorflow/models/tree/master/syntaxnet/syntaxnet),
+[code](https://github.com/tensorflow/models/tree/master/research/syntaxnet/syntaxnet),
 [documentation](g3doc/syntaxnet-tutorial.md)** is a transition-based
 framework for natural language processing, with core functionality for
 feature extraction, representing annotated data, and evaluation. As of the
@@ -95,7 +95,7 @@ following commands:
 
 ```shell
 git clone --recursive https://github.com/tensorflow/models.git
-cd models/syntaxnet/tensorflow
+cd models/research/syntaxnet/tensorflow
 ./configure
 cd ..
 bazel test ...

research/syntaxnet/dragnn/tools/oss_setup.py (+1, -1)
@@ -56,7 +56,7 @@ def has_ext_modules(self):
 version='0.2',
 description='SyntaxNet: Neural Models of Syntax',
 long_description='',
-url='https://github.com/tensorflow/models/tree/master/syntaxnet',
+url='https://github.com/tensorflow/models/tree/master/research/syntaxnet',
 author='Google Inc.',
 author_email='[email protected]',

research/syntaxnet/g3doc/dragnn_ops.md (+1, -1)
@@ -3,7 +3,7 @@
 ### Module `dragnn_ops`
 
 Defined in
-[`tensorflow/dragnn/python/dragnn_ops.py`](https://github.com/tensorflow/models/blob/master/syntaxnet/dragnn/python/dragnn_ops.py).
+[`tensorflow/dragnn/python/dragnn_ops.py`](https://github.com/tensorflow/models/blob/master/research/syntaxnet/dragnn/python/dragnn_ops.py).
 
 Groups the DRAGNN TensorFlow ops in one module.

research/syntaxnet/syntaxnet/models/parsey_universal/parse.sh (+1, -1)
@@ -15,7 +15,7 @@
 # Models can be downloaded from
 # http://download.tensorflow.org/models/parsey_universal/<language>.zip
 # for the languages listed at
-# https://github.com/tensorflow/models/blob/master/syntaxnet/universal.md
+# https://github.com/tensorflow/models/blob/master/research/syntaxnet/universal.md
 #
 
 PARSER_EVAL=bazel-bin/syntaxnet/parser_eval

research/syntaxnet/syntaxnet/models/parsey_universal/tokenize.sh (+1, -1)
@@ -9,7 +9,7 @@
 # Models can be downloaded from
 # http://download.tensorflow.org/models/parsey_universal/<language>.zip
 # for the languages listed at
-# https://github.com/tensorflow/models/blob/master/syntaxnet/universal.md
+# https://github.com/tensorflow/models/blob/master/research/syntaxnet/universal.md
 #
 
 PARSER_EVAL=bazel-bin/syntaxnet/parser_eval
