
Commit d65d0c3

docs: fix url

Signed-off-by: Han Xiao <[email protected]>

1 parent 99f4384

File tree

26 files changed: +143 −133 lines
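The bulk of this commit is a mechanical host-to-path swap (`docs.jina.ai` → `jina.ai/serve`). A change of this shape is commonly scripted; the sketch below is a hypothetical reproduction (not necessarily how this commit was produced), demonstrated on a throwaway file, using a `|` delimiter in `sed` so the URLs need no escaping:

```shell
# Demo file standing in for the repo's docs/config files.
tmpfile=$(mktemp)
printf 'doc_url: https://docs.jina.ai\nsitemap: https://docs.jina.ai/sitemap.xml\n' > "$tmpfile"

# Rewrite the old docs host to the new path. On a real checkout you would
# target files via something like: git grep -l 'docs\.jina\.ai'
sed -i.bak 's|docs\.jina\.ai|jina.ai/serve|g' "$tmpfile"

# Count lines mentioning the new and the old URL after the rewrite.
new_count=$(grep -c 'jina\.ai/serve' "$tmpfile")
old_count=$(grep -c 'docs\.jina\.ai' "$tmpfile" || true)
echo "new=$new_count old=$old_count"   # new=2 old=0

rm -f "$tmpfile" "$tmpfile.bak"
```

Note that a pure substitution would not reproduce this commit exactly: a few prose lines (e.g. the CHANGELOG entry below) also changed wording.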

Diff for: .github/ISSUE_TEMPLATE/config.yml (+1 −1)

````diff
@@ -1,7 +1,7 @@
 blank_issues_enabled: false
 contact_links:
   - name: "📚 Read docs"
-    url: https://docs.jina.ai/
+    url: https://jina.ai/serve/
     about: Find your solution from our documenations
   - name: "😊 Join us"
     url: https://career.jina.ai
````

Diff for: .github/slack-pypi.json (+1 −1)

````diff
@@ -16,7 +16,7 @@
   },
   "accessory": {
     "type": "image",
-    "image_url": "https://docs.jina.ai/_static/favicon.png",
+    "image_url": "https://jina.ai/serve/_static/favicon.png",
     "alt_text": "cute cat"
   }
 },
````

Diff for: CHANGELOG.md (+1 −1)

````diff
@@ -9569,7 +9569,7 @@ Jina is released on every Friday evening. The PyPi package and Docker Image will
 - [[```4273af8d```](https://github.com/jina-ai/jina/commit/4273af8d46394f476423fd53c6bc4054050fd9cf)] __-__ remove hub-builder success (*Han Xiao*)
 - [[```73457b17```](https://github.com/jina-ai/jina/commit/73457b17909b68c4415613ed8da78f2e6f9774a3)] __-__ hide my exec collide with other test (#2654) (*Joan Fontanals*)
 - [[```e01ed315```](https://github.com/jina-ai/jina/commit/e01ed3152509b47a896d05d1d6d59ae41acb0515)] __-__ latency-tracking adapt new release (#2595) (*Alan Zhisheng Niu*)
-- [[```7651bb44```](https://github.com/jina-ai/jina/commit/7651bb44e725002da65bda8a10d3b4477d692935)] __-__ replace docs2.jina.ai to docs.jina.ai (*Han Xiao*)
+- [[```7651bb44```](https://github.com/jina-ai/jina/commit/7651bb44e725002da65bda8a10d3b4477d692935)] __-__ replace docs2.jina.ai to jina.ai/serve (*Han Xiao*)
 - [[```26403122```](https://github.com/jina-ai/jina/commit/264031226563e6b84073c4b3a168fa5c1e2de1d0)] __-__ fix 404 page generation in ci (*Han Xiao*)

 ### 🍹 Other Improvements
````

Diff for: CONTRIBUTING.md (+3 −3)

````diff
@@ -256,9 +256,9 @@ Bonus: **Know when to break the rules**. Documentation writing is as much art as

 [MyST](https://myst-parser.readthedocs.io/en/latest/) Elements Usage

-1. Use the `{tab}` element to show multiple ways of doing one thing. [Example](https://docs.jina.ai/concepts/flow/basics/#start-and-stop)
-2. Use the `{admonition}` boxes with care. We recommend restricting yourself to [Hint](https://docs.jina.ai/concepts/flow/basics/#create), [Caution](https://docs.jina.ai/concepts/gateway/customize-http-endpoints/#enable-graphql-endpoint) and [See Also](https://docs.jina.ai/concepts/gateway/customize-http-endpoints/#enable-graphql-endpoint).
-3. Use `{dropdown}` to hide optional content, such as long code snippets or console output. [Example](https://docs.jina.ai/concepts/client/third-party-clients/#use-curl)
+1. Use the `{tab}` element to show multiple ways of doing one thing. [Example](https://jina.ai/serve/concepts/flow/basics/#start-and-stop)
+2. Use the `{admonition}` boxes with care. We recommend restricting yourself to [Hint](https://jina.ai/serve/concepts/flow/basics/#create), [Caution](https://jina.ai/serve/concepts/gateway/customize-http-endpoints/#enable-graphql-endpoint) and [See Also](https://jina.ai/serve/concepts/gateway/customize-http-endpoints/#enable-graphql-endpoint).
+3. Use `{dropdown}` to hide optional content, such as long code snippets or console output. [Example](https://jina.ai/serve/concepts/client/third-party-clients/#use-curl)

 ### Building documentation on your local machine
````

Diff for: Dockerfiles/debianx.Dockerfile (+1 −1)

````diff
@@ -27,7 +27,7 @@ LABEL org.opencontainers.image.vendor="Jina AI Limited" \
       org.opencontainers.image.description="Build multimodal AI services via cloud native technologies" \
       org.opencontainers.image.authors="[email protected]" \
       org.opencontainers.image.url="https://github.com/jina-ai/jina" \
-      org.opencontainers.image.documentation="https://docs.jina.ai"
+      org.opencontainers.image.documentation="https://jina.ai/serve"

 # constant, wont invalidate cache
 ENV PIP_NO_CACHE_DIR=1 \
````

Diff for: README.md (+59 −56)

````diff
@@ -32,7 +32,7 @@ Key advantages over FastAPI:
 pip install jina
 ```

-See guides for [Apple Silicon](https://docs.jina.ai/get-started/install/apple-silicon-m1-m2/) and [Windows](https://docs.jina.ai/get-started/install/windows/).
+See guides for [Apple Silicon](https://jina.ai/serve/get-started/install/apple-silicon-m1-m2/) and [Windows](https://jina.ai/serve/get-started/install/windows/).

 ## Core Concepts

@@ -50,28 +50,31 @@ from jina import Executor, requests
 from docarray import DocList, BaseDoc
 from transformers import pipeline

+
 class Prompt(BaseDoc):
-  text: str
+    text: str
+

 class Generation(BaseDoc):
-  prompt: str
-  text: str
+    prompt: str
+    text: str
+

 class StableLM(Executor):
-  def __init__(self, **kwargs):
-    super().__init__(**kwargs)
-    self.generator = pipeline(
-      'text-generation', model='stabilityai/stablelm-base-alpha-3b'
-    )
-
-  @requests
-  def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:
-    generations = DocList[Generation]()
-    prompts = docs.text
-    llm_outputs = self.generator(prompts)
-    for prompt, output in zip(prompts, llm_outputs):
-      generations.append(Generation(prompt=prompt, text=output))
-    return generations
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        self.generator = pipeline(
+            'text-generation', model='stabilityai/stablelm-base-alpha-3b'
+        )
+
+    @requests
+    def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:
+        generations = DocList[Generation]()
+        prompts = docs.text
+        llm_outputs = self.generator(prompts)
+        for prompt, output in zip(prompts, llm_outputs):
+            generations.append(Generation(prompt=prompt, text=output))
+        return generations
 ```

 Deploy with Python or YAML:
@@ -83,7 +86,7 @@ from executor import StableLM
 dep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)

 with dep:
-  dep.block()
+    dep.block()
 ```

 ```yaml
@@ -115,14 +118,10 @@ Chain services into a Flow:
 ```python
 from jina import Flow

-flow = (
-    Flow(port=12345)
-    .add(uses=StableLM)
-    .add(uses=TextToImage)
-)
+flow = Flow(port=12345).add(uses=StableLM).add(uses=TextToImage)

 with flow:
-  flow.block()
+    flow.block()
 ```

 ## Scaling and Deployment
@@ -207,62 +206,66 @@ Enable token-by-token streaming for responsive LLM applications:
 ```python
 from docarray import BaseDoc

+
 class PromptDocument(BaseDoc):
-  prompt: str
-  max_tokens: int
+    prompt: str
+    max_tokens: int
+

 class ModelOutputDocument(BaseDoc):
-  token_id: int
-  generated_text: str
+    token_id: int
+    generated_text: str
 ```

 2. Initialize service:
 ```python
 from transformers import GPT2Tokenizer, GPT2LMHeadModel

+
 class TokenStreamingExecutor(Executor):
-  def __init__(self, **kwargs):
-    super().__init__(**kwargs)
-    self.model = GPT2LMHeadModel.from_pretrained('gpt2')
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        self.model = GPT2LMHeadModel.from_pretrained('gpt2')
 ```

 3. Implement streaming:
 ```python
 @requests(on='/stream')
 async def task(self, doc: PromptDocument, **kwargs) -> ModelOutputDocument:
-  input = tokenizer(doc.prompt, return_tensors='pt')
-  input_len = input['input_ids'].shape[1]
-  for _ in range(doc.max_tokens):
-    output = self.model.generate(**input, max_new_tokens=1)
-    if output[0][-1] == tokenizer.eos_token_id:
-      break
-    yield ModelOutputDocument(
-      token_id=output[0][-1],
-      generated_text=tokenizer.decode(
-        output[0][input_len:], skip_special_tokens=True
-      ),
-    )
-    input = {
-      'input_ids': output,
-      'attention_mask': torch.ones(1, len(output[0])),
-    }
+    input = tokenizer(doc.prompt, return_tensors='pt')
+    input_len = input['input_ids'].shape[1]
+    for _ in range(doc.max_tokens):
+        output = self.model.generate(**input, max_new_tokens=1)
+        if output[0][-1] == tokenizer.eos_token_id:
+            break
+        yield ModelOutputDocument(
+            token_id=output[0][-1],
+            generated_text=tokenizer.decode(
+                output[0][input_len:], skip_special_tokens=True
+            ),
+        )
+        input = {
+            'input_ids': output,
+            'attention_mask': torch.ones(1, len(output[0])),
+        }
 ```

 4. Serve and use:
 ```python
 # Server
 with Deployment(uses=TokenStreamingExecutor, port=12345, protocol='grpc') as dep:
-  dep.block()
+    dep.block()
+

 # Client
 async def main():
-  client = Client(port=12345, protocol='grpc', asyncio=True)
-  async for doc in client.stream_doc(
-    on='/stream',
-    inputs=PromptDocument(prompt='what is the capital of France ?', max_tokens=10),
-    return_type=ModelOutputDocument,
-  ):
-    print(doc.generated_text)
+    client = Client(port=12345, protocol='grpc', asyncio=True)
+    async for doc in client.stream_doc(
+        on='/stream',
+        inputs=PromptDocument(prompt='what is the capital of France ?', max_tokens=10),
+        return_type=ModelOutputDocument,
+    ):
+        print(doc.generated_text)
 ```

 ## Support
````

Diff for: conda/meta.yaml (+1 −1)

````diff
@@ -147,7 +147,7 @@ about:
   license_family: Apache
   license_file: LICENSE
   summary: "Build multimodal AI services via cloud native technologies \xB7 Neural Search \xB7 Generative AI \xB7 Cloud Native"
-  doc_url: https://docs.jina.ai
+  doc_url: https://jina.ai/serve

 extra:
   recipe-maintainers:
````

Diff for: docs/concepts/jcloud/configuration.md (+4 −4)

````diff
@@ -180,7 +180,7 @@ If shards/replicas are used, we will multiply credits further by the number of s

 ## Scale out Executors

-On JCloud, demand-based autoscaling functionality is naturally offered thanks to the underlying Kubernetes architecture. This means that you can maintain [serverless](https://en.wikipedia.org/wiki/Serverless_computing) deployments in a cost-effective way with no headache of setting the [right number of replicas](https://docs.jina.ai/how-to/scale-out/#scale-out-your-executor) anymore!
+On JCloud, demand-based autoscaling functionality is naturally offered thanks to the underlying Kubernetes architecture. This means that you can maintain [serverless](https://en.wikipedia.org/wiki/Serverless_computing) deployments in a cost-effective way with no headache of setting the [right number of replicas](https://jina.ai/serve/how-to/scale-out/#scale-out-your-executor) anymore!


 ### Autoscaling with `jinaai+serverless://`
@@ -266,8 +266,8 @@ The JCloud parameters `minAvailable` and `maxUnavailable` ensure that Executors

 | Name | Default | Allowed | Description |
 | :--------------- | :-----: | :---------------------------------------------------------------------------------------: | :------------------------------------------------------- |
-| `minAvailable` | N/A | Lower than number of [replicas](https://docs.jina.ai/concepts/flow/scale-out/#scale-out) | Minimum number of replicas available during disruption |
-| `maxUnavailable` | N/A | Lower than numbers of [replicas](https://docs.jina.ai/concepts/flow/scale-out/#scale-out) | Maximum number of replicas unavailable during disruption |
+| `minAvailable` | N/A | Lower than number of [replicas](https://jina.ai/serve/concepts/flow/scale-out/#scale-out) | Minimum number of replicas available during disruption |
+| `maxUnavailable` | N/A | Lower than numbers of [replicas](https://jina.ai/serve/concepts/flow/scale-out/#scale-out) | Maximum number of replicas unavailable during disruption |

 ```{code-block} yaml
 ---
@@ -459,7 +459,7 @@ Keys in `labels` have the following restrictions:

 ### Monitoring

-To enable [tracing support](https://docs.jina.ai/cloud-nativeness/opentelemetry/) in Flows, you can pass `enable: true` argument in the Flow YAML. (Tracing support is not enabled by default in JCloud)
+To enable [tracing support](https://jina.ai/serve/cloud-nativeness/opentelemetry/) in Flows, you can pass `enable: true` argument in the Flow YAML. (Tracing support is not enabled by default in JCloud)

 ```{code-block} yaml
 ---
````

Diff for: docs/concepts/jcloud/index.md (+7 −7)

````diff
@@ -8,7 +8,7 @@
 configuration
 ```

-```{figure} https://docs.jina.ai/_images/jcloud-banner.png
+```{figure} https://jina.ai/serve/_images/jcloud-banner.png
 :width: 0 %
 :scale: 0 %
 ```
@@ -50,13 +50,13 @@ For the rest of this section, we use `jc` or `jcloud`. But again they are interc

 ### Deploy

-In Jina's idiom, a project is a [Flow](https://docs.jina.ai/concepts/orchestration/flow/), which represents an end-to-end task such as indexing, searching or recommending. In this document, we use "project" and "Flow" interchangeably.
+In Jina's idiom, a project is a [Flow](https://jina.ai/serve/concepts/orchestration/flow/), which represents an end-to-end task such as indexing, searching or recommending. In this document, we use "project" and "Flow" interchangeably.

 A Flow can have two types of file structure: a single YAML file or a project folder.

 #### Single YAML file

-A self-contained YAML file, consisting of all configuration at the [Flow](https://docs.jina.ai/concepts/orchestration/flow/)-level and [Executor](https://docs.jina.ai/concepts/serving/executor/)-level.
+A self-contained YAML file, consisting of all configuration at the [Flow](https://jina.ai/serve/concepts/orchestration/flow/)-level and [Executor](https://jina.ai/serve/concepts/serving/executor/)-level.

 > All Executors' `uses` must follow the format `jinaai+docker://<username>/MyExecutor` (from [Executor Hub](https://cloud.jina.ai)) to avoid any local file dependencies:

@@ -123,7 +123,7 @@ hello/
 Where:

 - `hello/` is your top-level project folder.
-- `executor1` directory has all Executor related code/configuration. You can read the best practices for [file structures](https://docs.jina.ai/concepts/serving/executor/file-structure/). Multiple Executor directories can be created.
+- `executor1` directory has all Executor related code/configuration. You can read the best practices for [file structures](https://jina.ai/serve/concepts/serving/executor/file-structure/). Multiple Executor directories can be created.
 - `flow.yml` Your Flow YAML.
 - `.env` All environment variables used during deployment.

@@ -374,7 +374,7 @@ jc secret create mysecret rich-husky-af14064067 --from-literal "{'env-name': 'se
 ```

 ```{tip}
-You can optionally pass the `--update` flag to automatically update the Flow spec with the updated secret information. This flag will update the Flow which is hosted on the cloud. Finally, you can also optionally pass a Flow's yaml file path with `--path` to update the yaml file locally. Refer to [this](https://docs.jina.ai/cloud-nativeness/kubernetes/#deploy-flow-with-custom-environment-variables-and-secrets) section for more information.
+You can optionally pass the `--update` flag to automatically update the Flow spec with the updated secret information. This flag will update the Flow which is hosted on the cloud. Finally, you can also optionally pass a Flow's yaml file path with `--path` to update the yaml file locally. Refer to [this](https://jina.ai/serve/cloud-nativeness/kubernetes/#deploy-flow-with-custom-environment-variables-and-secrets) section for more information.
 ```

 ```{caution}
@@ -419,7 +419,7 @@ jc secret update rich-husky-af14064067 mysecret --from-literal "{'env-name': 'se
 ```

 ```{tip}
-You can optionally pass the `--update` flag to automatically update the Flow spec with the updated secret information. This flag will update the Flow which is hosted on the cloud. Finally, you can also optionally pass a Flow's yaml file path with `--path` to update the yaml file locally. Refer to [this](https://docs.jina.ai/cloud-nativeness/kubernetes/#deploy-flow-with-custom-environment-variables-and-secrets) section for more information.
+You can optionally pass the `--update` flag to automatically update the Flow spec with the updated secret information. This flag will update the Flow which is hosted on the cloud. Finally, you can also optionally pass a Flow's yaml file path with `--path` to update the yaml file locally. Refer to [this](https://jina.ai/serve/cloud-nativeness/kubernetes/#deploy-flow-with-custom-environment-variables-and-secrets) section for more information.
 ```

 ```{caution}
@@ -498,7 +498,7 @@ jcloud:

 #### Single YAML file

-A self-contained YAML file, consisting of all configuration information at the [Deployment](https://docs.jina.ai/concepts/orchestration/deployment/)-level and [Executor](https://docs.jina.ai/concepts/serving/executor/)-level.
+A self-contained YAML file, consisting of all configuration information at the [Deployment](https://jina.ai/serve/concepts/orchestration/deployment/)-level and [Executor](https://jina.ai/serve/concepts/serving/executor/)-level.

 > A Deployment's `uses` parameter must follow the format `jinaai+docker://<username>/MyExecutor` (from [Executor Hub](https://cloud.jina.ai)) to avoid any local file dependencies:
````

Diff for: docs/concepts/orchestration/flow.md (+1 −1)

````diff
@@ -303,7 +303,7 @@ Please follow the walkthrough and enjoy the free GPU/TPU!


 ```{tip}
-Hosing services on Google Colab is not recommended if your server aims to be long-lived or permanent. It is often used for quick experiments, demonstrations or leveraging its free GPU/TPU. For stable, secure and free hosting of your Flow, check out [JCloud](https://docs.jina.ai/concepts/jcloud/).
+Hosing services on Google Colab is not recommended if your server aims to be long-lived or permanent. It is often used for quick experiments, demonstrations or leveraging its free GPU/TPU. For stable, secure and free hosting of your Flow, check out [JCloud](https://jina.ai/serve/concepts/jcloud/).
 ```

 ## Export
````

Diff for: docs/conf.py (+3 −3)

````diff
@@ -49,7 +49,7 @@
 html_theme = 'furo'

 base_url = '/'
-html_baseurl = 'https://docs.jina.ai'
+html_baseurl = 'https://jina.ai/serve'
 sitemap_url_scheme = '{link}'
 sitemap_locales = [None]
 sitemap_filename = "sitemap.xml"
@@ -167,8 +167,8 @@
 linkcheck_retries = 2
 linkcheck_anchors = False

-ogp_site_url = 'https://docs.jina.ai/'
-ogp_image = 'https://docs.jina.ai/_static/banner.png'
+ogp_site_url = 'https://jina.ai/serve/'
+ogp_image = 'https://jina.ai/serve/_static/banner.png'
 ogp_use_first_image = True
 ogp_description_length = 300
 ogp_type = 'website'
````
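For context on why `html_baseurl` matters in this file: the sphinx-sitemap extension composes each sitemap entry from `html_baseurl` plus the formatted `sitemap_url_scheme` (here just `{link}`), so changing the base URL rewrites every entry in `sitemap.xml`. A rough illustration of that composition (a simplified model with a hypothetical helper name, not the extension's actual code):

```python
def sitemap_entry(html_baseurl: str, scheme: str, link: str) -> str:
    # Simplified model: base URL joined with the formatted URL scheme.
    return html_baseurl.rstrip("/") + "/" + scheme.format(link=link)


# With the new base URL from this commit:
print(sitemap_entry("https://jina.ai/serve", "{link}", "concepts/orchestration/flow/"))
# https://jina.ai/serve/concepts/orchestration/flow/
```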

Diff for: docs/html_extra/robots.txt (+1 −1)

````diff
@@ -1,2 +1,2 @@
 User-agent: *
-sitemap: https://docs.jina.ai/sitemap.xml
+sitemap: https://jina.ai/serve/sitemap.xml
````

Diff for: docs/tutorials/deploy-model.md (+1 −1)

````diff
@@ -42,7 +42,7 @@ When you build a model or service in Jina-serve, it's always in the form of an E

 In this example we need to install:

-- The [Jina-serve framework](https://docs.jina.ai/) itself
+- The [Jina-serve framework](https://jina.ai/serve/) itself
 - The dependencies of the specific model we want to serve and deploy

 ```shell
````
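A sweep across 26 files is easy to leave incomplete. As a sanity check after a rename like this one, a small helper along these lines (hypothetical, not part of the repo) can list any files that still mention the old host:

```python
from pathlib import Path

OLD_HOST = "docs.jina.ai"


def find_stale_links(root: Path, old_host: str = OLD_HOST) -> list[Path]:
    """Return text files under `root` that still mention `old_host`."""
    stale = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        if old_host in text:
            stale.append(path)
    return sorted(stale)


# Demo on a throwaway directory standing in for a repo checkout:
import tempfile

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "ok.md").write_text("see https://jina.ai/serve/", encoding="utf-8")
    (root / "stale.md").write_text("see https://docs.jina.ai/", encoding="utf-8")
    print([p.name for p in find_stale_links(root)])  # ['stale.md']
```

An empty result means no stale `docs.jina.ai` links remain under the scanned tree.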
