research/adversarial_text/README.md (+6 -6)
@@ -135,20 +135,20 @@ adversarial training losses). The training loop itself is defined in
 ### Command-Line Flags

 Flags related to distributed training and the training loop itself are defined
-in [`train_utils.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/train_utils.py).
+in [`train_utils.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/train_utils.py).

-Flags related to model hyperparameters are defined in [`graphs.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/graphs.py).
+Flags related to model hyperparameters are defined in [`graphs.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/graphs.py).

-Flags related to adversarial training are defined in [`adversarial_losses.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/adversarial_losses.py).
+Flags related to adversarial training are defined in [`adversarial_losses.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/adversarial_losses.py).

 Flags particular to each job are defined in the main binary files.
 * Data generation: [`gen_data.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/gen_data.py)

-Command-line flags defined in [`document_generators.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/data/document_generators.py)
+Command-line flags defined in [`document_generators.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/document_generators.py)
research/attention_ocr/README.md

 module. E.g., create a file datasets/newtextdataset.py:

 ```
 import fsns
 ```
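The pattern the README describes is a thin module that reuses the FSNS reader with a different config. A sketch under that assumption (the dataset name, split sizes, patterns, and paths below are made-up placeholders; only `fsns.get_split` comes from the repository):

```python
# datasets/newtextdataset.py -- hypothetical example dataset module.
import fsns

DEFAULT_DATASET_DIR = 'datasets/data/newtextdataset'  # placeholder path

# FSNS-style settings; override whatever differs for the new data.
DEFAULT_CONFIG = {
    'name': 'NewTextDataset',
    'splits': {
        'train': {'size': 10000, 'pattern': 'train*'},
        'test': {'size': 1000, 'pattern': 'test*'},
    },
    'charset_filename': 'charset_size=134.txt',
    'image_shape': (150, 600, 3),
    'num_of_views': 4,
    'max_sequence_length': 37,
    'null_code': 42,
}

def get_split(split_name, dataset_dir=None, config=None):
    # Delegate to the FSNS reader with this dataset's directory and config.
    if not dataset_dir:
        dataset_dir = DEFAULT_DATASET_DIR
    if not config:
        config = DEFAULT_CONFIG
    return fsns.get_split(split_name, dataset_dir, config)
```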
@@ -151,8 +151,8 @@ To learn how to store a data in the FSNS
 - labels: ground truth label ids, shape=[batch_size x seq_length];
 - labels_one_hot: labels in one-hot encoding, shape [batch_size x seq_length x num_char_classes];

-Refer to [python/data_provider.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/data_provider.py#L33)
-for more details. You can use [python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py)
+Refer to [python/data_provider.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/data_provider.py#L33)
+for more details. You can use [python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/datasets/fsns.py)
 as the example.

 ## How to use a pre-trained model
@@ -164,14 +164,14 @@ The recommended way is to use the [Serving infrastructure][serving].
 Alternatively you can:
 1. define a placeholder for images (or use directly a numpy array)
-2. [create a graph](https://github.com/tensorflow/models/blob/master/attention_ocr/python/eval.py#L60)
+2. [create a graph](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/eval.py#L60)
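A minimal sketch of that alternative flow, assuming TF 1.x. The graph builder here is a trivial stand-in so the snippet runs end to end; the repository's real model construction is behind the eval.py link above:

```python
import numpy as np
import tensorflow as tf

# 1. A placeholder for input images (150x600 RGB matches the FSNS crops;
#    adjust the shape for your own dataset).
images = tf.placeholder(tf.float32, shape=[None, 150, 600, 3], name='images')

# 2. Build an inference graph on top of the placeholder. `build_model` is a
#    stand-in, not the repository's API.
def build_model(inputs):
    return tf.layers.conv2d(inputs, filters=8, kernel_size=3)

logits = build_model(images)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # With a real pre-trained model you would restore its checkpoint instead:
    # tf.train.Saver().restore(sess, '/path/to/model.ckpt')
    out = sess.run(logits, {images: np.zeros((1, 150, 600, 3), np.float32)})
    print(out.shape)  # (1, 148, 598, 8) with 'valid' padding
```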
research/inception/README.md (+1 -1)
@@ -1,4 +1,4 @@
-**NOTE**: For the most part, you will find a newer version of this code at [models/slim](https://github.com/tensorflow/models/tree/master/slim). In particular:
+**NOTE**: For the most part, you will find a newer version of this code at [models/research/slim](https://github.com/tensorflow/models/tree/master/research/slim). In particular:

 * `inception_train.py` and `imagenet_train.py` should no longer be used. The slim editions for running on multiple GPUs are the current best examples.
 * `inception_distributed_train.py` and `imagenet_distributed_train.py` are still valid examples of distributed training.
research/object_detection/object_detection_tutorial.ipynb (+2 -2)
@@ -5,7 +5,7 @@
 "metadata": {},
 "source": [
 "# Object Detection Demo\n",
-"Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/installation.md) before you start."
+"Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start."
 ]
 },
 {
@@ -96,7 +96,7 @@
 "\n",
 "Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. \n",
 "\n",
-"By default we use an \"SSD with Mobilenet\" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies."
+"By default we use an \"SSD with Mobilenet\" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies."
research/slim/slim_walkthrough.ipynb (+2 -2)
@@ -36,7 +36,7 @@
 "python -c \"import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once\"\n",
 "```\n",
 "\n",
-"Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from [here](https://github.com/tensorflow/models/tree/master/slim). Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim **before** running this notebook, so that these files are in your python path.\n",
+"Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from [here](https://github.com/tensorflow/models/tree/master/research/slim). Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/research/slim **before** running this notebook, so that these files are in your python path.\n",
 "\n",
 "To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.\n"
 ]
@@ -757,7 +757,7 @@
 "<a id='Pretrained'></a>\n",
 "\n",
 "Neural nets work best when they have many parameters, making them very flexible function approximators.\n",
-"However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list [here](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models).\n",
+"However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list [here](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models).\n",
 "\n",
 "\n",
 "You can either use these models as-is, or you can perform \"surgery\" on them, to modify them for some other task. For example, it is common to \"chop off\" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.\n",