[FLINK-24099][docs] Refer to nightlies.apache.org
zentol committed Sep 1, 2021
1 parent d2de244 commit d09ef68
Showing 47 changed files with 104 additions and 104 deletions.
README.md: 2 changes (1 addition, 1 deletion)
@@ -100,7 +100,7 @@ The IntelliJ IDE supports Maven out of the box and offers a plugin for Scala development
* IntelliJ download: [https://www.jetbrains.com/idea/](https://www.jetbrains.com/idea/)
* IntelliJ Scala Plugin: [https://plugins.jetbrains.com/plugin/?id=1347](https://plugins.jetbrains.com/plugin/?id=1347)

-Check out our [Setting up IntelliJ](https://ci.apache.org/projects/flink/flink-docs-master/flinkDev/ide_setup.html#intellij-idea) guide for details.
+Check out our [Setting up IntelliJ](https://nightlies.apache.org/flink/flink-docs-master/flinkDev/ide_setup.html#intellij-idea) guide for details.

### Eclipse Scala IDE

docs/config.toml: 42 changes (21 additions, 21 deletions)
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-baseURL = '//ci.apache.org/projects/flink/flink-docs-master'
+baseURL = '//nightlies.apache.org/flink/flink-docs-master'
languageCode = "en-us"
title = "Apache Flink"
enableGitInfo = false
@@ -60,36 +60,36 @@ pygmentsUseClasses = true

ZhDownloadPage = "//flink.apache.org/zh/downloads.html"

-JavaDocs = "//ci.apache.org/projects/flink/flink-docs-master/api/java/"
+JavaDocs = "//nightlies.apache.org/flink/flink-docs-master/api/java/"

-ScalaDocs = "//ci.apache.org/projects/flink/flink-docs-master/api/scala/index.html#org.apache.flink.api.scala.package"
+ScalaDocs = "//nightlies.apache.org/flink/flink-docs-master/api/scala/index.html#org.apache.flink.api.scala.package"

-PyDocs = "//ci.apache.org/projects/flink/flink-docs-master/api/python/"
+PyDocs = "//nightlies.apache.org/flink/flink-docs-master/api/python/"

# External links at the bottom
# of the menu
MenuLinks = [
["Project Homepage", "//flink.apache.org"],
-["JavaDocs", "//ci.apache.org/projects/flink/flink-docs-master/api/java/"],
-["ScalaDocs", "//ci.apache.org/projects/flink/flink-docs-master/api/scala/index.html#org.apache.flink.api.scala.package"],
-["PyDocs", "//ci.apache.org/projects/flink/flink-docs-master/api/python/"]
+["JavaDocs", "//nightlies.apache.org/flink/flink-docs-master/api/java/"],
+["ScalaDocs", "//nightlies.apache.org/flink/flink-docs-master/api/scala/index.html#org.apache.flink.api.scala.package"],
+["PyDocs", "//nightlies.apache.org/flink/flink-docs-master/api/python/"]
]

PreviousDocs = [
-["1.13", "http://ci.apache.org/projects/flink/flink-docs-release-1.13"],
-["1.12", "http://ci.apache.org/projects/flink/flink-docs-release-1.12"],
-["1.11", "http://ci.apache.org/projects/flink/flink-docs-release-1.11"],
-["1.10", "http://ci.apache.org/projects/flink/flink-docs-release-1.10"],
-["1.9", "http://ci.apache.org/projects/flink/flink-docs-release-1.9"],
-["1.8", "http://ci.apache.org/projects/flink/flink-docs-release-1.8"],
-["1.7", "http://ci.apache.org/projects/flink/flink-docs-release-1.7"],
-["1.6", "http://ci.apache.org/projects/flink/flink-docs-release-1.6"],
-["1.5", "http://ci.apache.org/projects/flink/flink-docs-release-1.5"],
-["1.4", "http://ci.apache.org/projects/flink/flink-docs-release-1.4"],
-["1.3", "http://ci.apache.org/projects/flink/flink-docs-release-1.3"],
-["1.2", "http://ci.apache.org/projects/flink/flink-docs-release-1.2"],
-["1.1", "http://ci.apache.org/projects/flink/flink-docs-release-1.1"],
-["1.0", "http://ci.apache.org/projects/flink/flink-docs-release-1.0"]
+["1.13", "http://nightlies.apache.org/flink/flink-docs-release-1.13"],
+["1.12", "http://nightlies.apache.org/flink/flink-docs-release-1.12"],
+["1.11", "http://nightlies.apache.org/flink/flink-docs-release-1.11"],
+["1.10", "http://nightlies.apache.org/flink/flink-docs-release-1.10"],
+["1.9", "http://nightlies.apache.org/flink/flink-docs-release-1.9"],
+["1.8", "http://nightlies.apache.org/flink/flink-docs-release-1.8"],
+["1.7", "http://nightlies.apache.org/flink/flink-docs-release-1.7"],
+["1.6", "http://nightlies.apache.org/flink/flink-docs-release-1.6"],
+["1.5", "http://nightlies.apache.org/flink/flink-docs-release-1.5"],
+["1.4", "http://nightlies.apache.org/flink/flink-docs-release-1.4"],
+["1.3", "http://nightlies.apache.org/flink/flink-docs-release-1.3"],
+["1.2", "http://nightlies.apache.org/flink/flink-docs-release-1.2"],
+["1.1", "http://nightlies.apache.org/flink/flink-docs-release-1.1"],
+["1.0", "http://nightlies.apache.org/flink/flink-docs-release-1.0"]
]

[markup]
docs/content.zh/_index.md: 2 changes (1 addition, 1 deletion)
@@ -63,7 +63,7 @@ under the License.
{{< columns >}}
* [DataStream API]({{< ref "docs/dev/datastream/overview" >}})
* [Table API & SQL]({{< ref "docs/dev/table/overview" >}})
-* [Stateful Functions](https://ci.apache.org/projects/flink/flink-statefun-docs-stable/)
+* [Stateful Functions](https://nightlies.apache.org/flink/flink-statefun-docs-stable/)

<--->

docs/content.zh/docs/deployment/filesystems/s3.md: 2 changes (1 addition, 1 deletion)
@@ -125,7 +125,7 @@ s3.path.style.access: true

If entropy injection is enabled, the configured substring in the path is replaced with random characters. For example, the path `s3://my-bucket/checkpoints/_entropy_/dashboard-job/` would be rewritten to something like `s3://my-bucket/checkpoints/gf36ikvg/dashboard-job/`.
**This happens only when files are created with the entropy injection option!**
-Otherwise the entropy key is removed from the file path entirely. For more details, see [FileSystem.create(Path, WriteOption)](https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java/org/apache/flink/core/fs/FileSystem.html#create-org.apache.flink.core.fs.Path-org.apache.flink.core.fs.FileSystem.WriteOptions-).
+Otherwise the entropy key is removed from the file path entirely. For more details, see [FileSystem.create(Path, WriteOption)](https://nightlies.apache.org/flink/flink-docs-release-1.6/api/java/org/apache/flink/core/fs/FileSystem.html#create-org.apache.flink.core.fs.Path-org.apache.flink.core.fs.FileSystem.WriteOptions-).

{{< hint info >}}
Currently, the Flink runtime applies the entropy injection option only to checkpoint data files. All other files, including checkpoint metadata and external URIs, do not use entropy injection, so that checkpoint URIs remain predictable.
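As context for the entropy-injection passage above, the feature is driven by two settings in `flink-conf.yaml`; a minimal sketch (the `s3.entropy.*` option names follow the Flink S3 filesystem documentation, the values here are illustrative):

```yaml
# Marker substring in checkpoint paths that gets replaced with random characters
s3.entropy.key: _entropy_
# Number of random characters substituted for the marker
s3.entropy.length: 4
```

With this in place, a checkpoint path containing the marker, such as `s3://my-bucket/checkpoints/_entropy_/dashboard-job/`, is rewritten with the marker replaced by random characters, as in the example above.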
docs/content.zh/docs/deployment/memory/mem_migration.md: 2 changes (1 addition, 1 deletion)
@@ -29,7 +29,7 @@ under the License.

In versions *1.10* and *1.11*, Flink significantly changed how memory is configured for the [TaskManager]({{< ref "docs/deployment/memory/mem_setup_tm" >}}) and the [JobManager]({{< ref "docs/deployment/memory/mem_setup_jobmanager" >}}), respectively.
Some configuration options were removed, and others changed their semantics.
-This migration guide explains how to upgrade a TaskManager memory configuration from [*Flink 1.9 and earlier*](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/mem_setup.html) to *Flink 1.10 and later*,
+This migration guide explains how to upgrade a TaskManager memory configuration from [*Flink 1.9 and earlier*](https://nightlies.apache.org/flink/flink-docs-release-1.9/ops/mem_setup.html) to *Flink 1.10 and later*,
and how to upgrade a JobManager memory configuration from *Flink 1.10 and earlier* to *Flink 1.11 and later*.

* toc
@@ -111,7 +111,7 @@ Java and Scala classes are treated by Flink as a special POJO data type if they
POJOs are generally represented with a `PojoTypeInfo` and serialized with the `PojoSerializer` (using [Kryo](https://github.com/EsotericSoftware/kryo) as configurable fallback).
The exception is when the POJOs are actually Avro types (Avro Specific Records) or produced as "Avro Reflect Types".
In that case the POJOs are represented by an `AvroTypeInfo` and serialized with the `AvroSerializer`.
-You can also register your own custom serializer if required; see [Serialization](https://ci.apache.org/projects/flink/flink-docs-stable/dev/types_serialization.html#serialization-of-pojo-types) for further information.
+You can also register your own custom serializer if required; see [Serialization](https://nightlies.apache.org/flink/flink-docs-stable/dev/types_serialization.html#serialization-of-pojo-types) for further information.

Flink analyzes the structure of POJO types, i.e., it learns about the fields of a POJO. As a result POJO types are easier to use than general types. Moreover, Flink can process POJOs more efficiently than general types.

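To make the POJO rules from the section above concrete, here is a minimal sketch of a class that Flink would classify as a POJO (the class and field names are invented for illustration): the class is public, has a public no-argument constructor, and every field is either public or reachable through conventional getters and setters.

```java
// Treated by Flink as a POJO: public class, public no-arg constructor,
// and all fields either public or exposed via getters/setters.
public class WordCount {
    public String word;    // public field: accessed directly
    private long count;    // non-public field: needs a getter/setter pair

    public WordCount() {}  // required public no-arg constructor

    public WordCount(String word, long count) {
        this.word = word;
        this.count = count;
    }

    public long getCount() { return count; }

    public void setCount(long count) { this.count = count; }
}
```

If any of these conditions is not met (for example, a private field with no setter), Flink falls back to treating the class as a generic type and serializes it with Kryo instead of the `PojoSerializer`.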
docs/content.zh/docs/libs/state_processor_api.md: 2 changes (1 addition, 1 deletion)
@@ -27,7 +27,7 @@ under the License.
# State Processor API

Apache Flink's State Processor API provides powerful functionality for reading, writing, and modifying savepoints and checkpoints using Flink's batch DataSet API.
-Due to the [interoperability of DataSet and Table API](https://ci.apache.org/projects/flink/flink-docs-master/dev/table/common.html#integration-with-datastream-and-dataset-api), you can even use relational Table API or SQL queries to analyze and process state data.
+Due to the [interoperability of DataSet and Table API](https://nightlies.apache.org/flink/flink-docs-master/dev/table/common.html#integration-with-datastream-and-dataset-api), you can even use relational Table API or SQL queries to analyze and process state data.

For example, you can take a savepoint of a running stream processing application and analyze it with a DataSet batch program to verify that the application behaves correctly.
Or you can read a batch of data from any store, preprocess it, and write the result to a savepoint that you use to bootstrap the state of a streaming application.
docs/content.zh/release-notes/flink-1.11.md: 32 changes (16 additions, 16 deletions)
@@ -32,7 +32,7 @@ these notes carefully if you are planning to upgrade your Flink version to 1.11.

The user can now submit applications and choose to execute their `main()` method on the cluster rather than the client.
This allows for more light-weight application submission. For more details,
-see the [Application Mode documentation](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/#application-mode).
+see the [Application Mode documentation](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/#application-mode).

#### Web Submission behaves the same as detached mode.

@@ -80,47 +80,47 @@ The examples of `Dockerfiles` and docker image `build.sh` scripts have been removed from:
- `flink-container/docker`
- `flink-container/kubernetes`

-Check the updated user documentation for [Flink Docker integration](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html) instead. It now describes in detail how to [use](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#how-to-run-a-flink-image) and [customize](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#customize-flink-image) [the Flink official docker image](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#docker-hub-flink-images): configuration options, logging, plugins, adding more dependencies and installing software. The documentation also includes examples for Session and Job cluster deployments with:
-- [docker run](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#how-to-run-flink-image)
-- [docker compose](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#flink-with-docker-compose)
-- [docker swarm](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#flink-with-docker-swarm)
-- [standalone Kubernetes](https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/standalone/kubernetes.html)
+Check the updated user documentation for [Flink Docker integration](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html) instead. It now describes in detail how to [use](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#how-to-run-a-flink-image) and [customize](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#customize-flink-image) [the Flink official docker image](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#docker-hub-flink-images): configuration options, logging, plugins, adding more dependencies and installing software. The documentation also includes examples for Session and Job cluster deployments with:
+- [docker run](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#how-to-run-flink-image)
+- [docker compose](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#flink-with-docker-compose)
+- [docker swarm](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/docker.html#flink-with-docker-swarm)
+- [standalone Kubernetes](https://nightlies.apache.org/flink/flink-docs-master/deployment/resource-providers/standalone/kubernetes.html)

### Memory Management
#### New JobManager Memory Model
##### Overview
With [FLIP-116](https://cwiki.apache.org/confluence/display/FLINK/FLIP-116%3A+Unified+Memory+Configuration+for+Job+Managers), a new memory model has been introduced for the JobManager. New configuration options have been introduced to control the memory consumption of the JobManager process. This affects all types of deployments: standalone, YARN, Mesos, and the new active Kubernetes integration.

-Please check the user documentation for [more details](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html).
+Please check the user documentation for [more details](https://nightlies.apache.org/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html).

If you try to reuse your previous Flink configuration without any adjustments, the new memory model can result in differently computed memory parameters for the JVM and, thus, performance changes or even failures.
-In order to start the JobManager process, you have to specify at least one of the following options: [`jobmanager.memory.flink.size`](https://ci.apache.org/projects/flink/flink-docs-master/deployment/config.html#jobmanager-memory-flink-size), [`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/deployment/config.html#jobmanager-memory-process-size) or [`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/deployment/config.html#jobmanager-memory-heap-size).
-See also [the migration guide](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_migration.html#migrate-job-manager-memory-configuration) for more information.
+In order to start the JobManager process, you have to specify at least one of the following options: [`jobmanager.memory.flink.size`](https://nightlies.apache.org/flink/flink-docs-master/deployment/config.html#jobmanager-memory-flink-size), [`jobmanager.memory.process.size`](https://nightlies.apache.org/flink/flink-docs-master/deployment/config.html#jobmanager-memory-process-size) or [`jobmanager.memory.heap.size`](https://nightlies.apache.org/flink/flink-docs-master/deployment/config.html#jobmanager-memory-heap-size).
+See also [the migration guide](https://nightlies.apache.org/flink/flink-docs-master/deployment/memory/mem_migration.html#migrate-job-manager-memory-configuration) for more information.

##### Deprecation and breaking changes
The following options are deprecated:
* `jobmanager.heap.size`
* `jobmanager.heap.mb`

If these deprecated options are still used, they will be interpreted as one of the following new options in order to maintain backwards compatibility:
-* [JVM Heap](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-jvm-heap) ([`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/deployment/config.html#jobmanager-memory-heap-size)) for standalone and Mesos deployments
-* [Total Process Memory](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-total-memory) ([`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/deployment/config.html#jobmanager-memory-process-size)) for containerized deployments (Kubernetes and Yarn)
+* [JVM Heap](https://nightlies.apache.org/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-jvm-heap) ([`jobmanager.memory.heap.size`](https://nightlies.apache.org/flink/flink-docs-master/deployment/config.html#jobmanager-memory-heap-size)) for standalone and Mesos deployments
+* [Total Process Memory](https://nightlies.apache.org/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-total-memory) ([`jobmanager.memory.process.size`](https://nightlies.apache.org/flink/flink-docs-master/deployment/config.html#jobmanager-memory-process-size)) for containerized deployments (Kubernetes and Yarn)

The following options have been removed and have no effect anymore:
* `containerized.heap-cutoff-ratio`
* `containerized.heap-cutoff-min`

-There is [no container cut-off](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_migration.html#container-cut-off-memory) anymore.
+There is [no container cut-off](https://nightlies.apache.org/flink/flink-docs-master/deployment/memory/mem_migration.html#container-cut-off-memory) anymore.

##### JVM arguments
The `direct` and `metaspace` memory of the JobManager's JVM process are now limited by configurable values:
-* [`jobmanager.memory.off-heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/deployment/config.html#jobmanager-memory-off-heap-size)
-* [`jobmanager.memory.jvm-metaspace.size`](https://ci.apache.org/projects/flink/flink-docs-master/deployment/config.html#jobmanager-memory-jvm-metaspace-size)
+* [`jobmanager.memory.off-heap.size`](https://nightlies.apache.org/flink/flink-docs-master/deployment/config.html#jobmanager-memory-off-heap-size)
+* [`jobmanager.memory.jvm-metaspace.size`](https://nightlies.apache.org/flink/flink-docs-master/deployment/config.html#jobmanager-memory-jvm-metaspace-size)

-See also [JVM Parameters](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup.html#jvm-parameters).
+See also [JVM Parameters](https://nightlies.apache.org/flink/flink-docs-master/deployment/memory/mem_setup.html#jvm-parameters).

{{< hint warning >}}
-These new limits can produce the respective `OutOfMemoryError` exceptions if they are not configured properly or there is a respective memory leak. See also [the troubleshooting guide](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_trouble.html#outofmemoryerror-direct-buffer-memory).
+These new limits can produce the respective `OutOfMemoryError` exceptions if they are not configured properly or there is a respective memory leak. See also [the troubleshooting guide](https://nightlies.apache.org/flink/flink-docs-master/deployment/memory/mem_trouble.html#outofmemoryerror-direct-buffer-memory).
{{< /hint >}}

#### Removal of deprecated mesos.resourcemanager.tasks.mem
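Pulling together the options from the release-note section above, a minimal `flink-conf.yaml` sketch for the new JobManager memory model (option names are those cited above; the values are illustrative, not recommendations):

```yaml
# At least one of these three must be set for the JobManager process to start:
jobmanager.memory.process.size: 1600m    # total process memory (containerized deployments)
# jobmanager.memory.flink.size: 1280m    # alternative: total Flink memory
# jobmanager.memory.heap.size: 1024m     # alternative: JVM heap (standalone and Mesos)

# New explicit limits on direct and metaspace memory:
jobmanager.memory.off-heap.size: 128m
jobmanager.memory.jvm-metaspace.size: 256m
```

Misconfiguring the two limits at the bottom can surface as the `OutOfMemoryError` exceptions mentioned in the warning hint above.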
docs/content.zh/release-notes/flink-1.13.md: 2 changes (1 addition, 1 deletion)
@@ -50,7 +50,7 @@ In 1.13, checkpointing configurations have been extracted into their own interface
This change does not affect the runtime behavior and simply provides a better mental model to users.
Pipelines can be updated to use the new abstractions without losing state, consistency, or changing semantics.

-Please follow the [migration guide](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/#migrating-from-legacy-backends) or the JavaDoc on the deprecated state backend classes - `MemoryStateBackend`, `FsStateBackend` and `RocksDBStateBackend` for migration details.
+Please follow the [migration guide](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/ops/state/state_backends/#migrating-from-legacy-backends) or the JavaDoc on the deprecated state backend classes - `MemoryStateBackend`, `FsStateBackend` and `RocksDBStateBackend` for migration details.

#### Unify binary format for Keyed State savepoints
