docs/en/connector-v2/sink/ObsFile.md — 1 addition, 2 deletions

@@ -34,12 +34,11 @@ By default, we use 2PC commit to ensure `exactly-once`
 Output data to huawei cloud obs file system.
-If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
+If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 3.4.
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
 We made some trade-offs in order to support more file types, so we used the HDFS protocol for internal access to OBS and this connector need some hadoop dependencies.
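As a reading aid only, a minimal ObsFile sink block in SeaTunnel's HOCON config style might look like the sketch below. The option names (`path`, `bucket`, `access_key`, `access_secret`, `endpoint`, `file_format_type`) are assumptions taken from the ObsFile sink page, not from this hunk.

```hocon
sink {
  ObsFile {
    # Hypothetical values; only the Hadoop/OBS prerequisites above come from this diff.
    path = "/seatunnel/obs/text"
    bucket = "obs://obs-bucket-name"
    access_key = "xxxxxxxxxxxx"
    access_secret = "xxxxxxxxxxxx"
    endpoint = "obs.xxxxxx.myhuaweicloud.com"
    file_format_type = "text"
  }
}
```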
docs/en/connector-v2/sink/OssFile.md — 2 additions, 2 deletions

@@ -15,11 +15,11 @@ import ChangeLog from '../changelog/connector-file-oss.md';
 ### For Spark/Flink Engine
 1. You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
-2. You must ensure `hadoop-aliyun-xx.jar`, `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` in `${SEATUNNEL_HOME}/plugins/` dir and the version of `hadoop-aliyun` jar need equals your hadoop version which used in spark/flink and `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` version needs to be the version corresponding to the `hadoop-aliyun` version. Eg: `hadoop-aliyun-3.1.4.jar` dependency `aliyun-sdk-oss-3.4.1.jar` and `jdom-1.1.jar`.
+2. You must ensure `hadoop-aliyun-xx.jar`, `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` in `${SEATUNNEL_HOME}/plugins/` dir and the version of `hadoop-aliyun` jar need equals your hadoop version which used in spark/flink and `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` version needs to be the version corresponding to the `hadoop-aliyun` version. Eg: `hadoop-aliyun-3.4.1.jar` dependency `aliyun-sdk-oss-3.4.1.jar` and `jdom-1.1.jar`.
 ### For SeaTunnel Zeta Engine
-1. You must ensure `seatunnel-hadoop3-3.1.4-uber.jar`, `aliyun-sdk-oss-3.4.1.jar`, `hadoop-aliyun-3.1.4.jar` and `jdom-1.1.jar` in `${SEATUNNEL_HOME}/lib/` dir.
+1. You must ensure `seatunnel-hadoop3-3.4.1-uber.jar`, `aliyun-sdk-oss-3.4.1.jar`, `hadoop-aliyun-3.4.1.jar` and `jdom-1.1.jar` in `${SEATUNNEL_HOME}/lib/` dir.
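The hunk above only covers the jar prerequisites; for context, a minimal OssFile sink block in SeaTunnel's HOCON style is sketched below, assuming the option names documented on the OssFile pages (`path`, `bucket`, `access_key`, `access_secret`, `endpoint`, `file_format_type`), none of which appear in this diff.

```hocon
sink {
  OssFile {
    # Hypothetical values; the jars listed above must already be in place for this to run.
    path = "/seatunnel/oss/text"
    bucket = "oss://my-bucket"
    access_key = "xxxxxxxxxxxx"
    access_secret = "xxxxxxxxxxxx"
    endpoint = "oss-cn-beijing.aliyuncs.com"
    file_format_type = "text"
  }
}
```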
docs/en/connector-v2/sink/OssJindoFile.md — 1 addition, 2 deletions

@@ -13,12 +13,11 @@ Output data to oss file system using jindo api.
 You need to download [jindosdk-4.6.1.tar.gz](https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/4.6.1/jindosdk-4.6.1.tar.gz)
 and then unzip it, copy jindo-sdk-4.6.1.jar and jindo-core-4.6.1.jar from lib to ${SEATUNNEL_HOME}/lib.
-If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
+If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 3.4.
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
 We made some trade-offs in order to support more file types, so we used the HDFS protocol for internal access to OSS and this connector need some hadoop dependencies.
docs/en/connector-v2/sink/S3File.md — 5 additions, 5 deletions

@@ -48,7 +48,7 @@ Output data to aws s3 file system.
 > If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
 >
 > If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under `${SEATUNNEL_HOME}/lib` to confirm this.
-> To use this connector you need put `hadoop-aws-3.1.4.jar` and `aws-java-sdk-bundle-1.12.692.jar`in `${SEATUNNEL_HOME}/lib` dir.
+> To use this connector, place the seatunnel-hadoop-aws.jar (which includes the bundled bundle.jar with AWS SDK v2) in the ${SEATUNNEL_HOME}/lib directory.
 ## Data Type Mapping

@@ -106,7 +106,7 @@ If write to `csv`, `text` file type, All column will be string.
 | tmp_path | string | no | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. Need a S3 dir. |
 | bucket | string | yes | - ||
 | fs.s3a.endpoint | string | yes | - ||
-| fs.s3a.aws.credentials.provider | string | yes | com.amazonaws.auth.InstanceProfileCredentialsProvider | The way to authenticate s3a. We only support `org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider` and `com.amazonaws.auth.InstanceProfileCredentialsProvider` now. |
+| fs.s3a.aws.credentials.provider | string | yes | software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider | The way to authenticate s3a. We only support `org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider` and `software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider` now. |
 | access_key | string | no | - | Only used when fs.s3a.aws.credentials.provider = org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider |
 | secret_key | string | no | - | Only used when fs.s3a.aws.credentials.provider = org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider |
 | custom_filename | boolean | no | false | Whether you need custom the filename |
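The credential-provider change above is easier to read next to a config sketch. In the HOCON block below, the keys `bucket`, `fs.s3a.endpoint`, `fs.s3a.aws.credentials.provider`, `access_key` and `secret_key` come from the option table in this hunk; `path`, `file_format_type` and all values are assumptions for illustration only.

```hocon
sink {
  S3File {
    bucket = "s3a://my-bucket"
    fs.s3a.endpoint = "s3.cn-north-1.amazonaws.com.cn"
    # SimpleAWSCredentialsProvider pairs with access_key/secret_key; the
    # InstanceProfileCredentialsProvider variant (the new default above) needs neither.
    fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
    access_key = "xxxxxxxxxxxx"
    secret_key = "xxxxxxxxxxxx"
    # path and file_format_type are assumed from the wider S3File docs, not this hunk.
    path = "/seatunnel/s3/text"
    file_format_type = "text"
  }
}
```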
docs/en/connector-v2/source/ObsFile.md — 1 addition, 2 deletions

@@ -39,12 +39,11 @@ import ChangeLog from '../changelog/connector-file-obs.md';
 Read data from huawei cloud obs file system.
-If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
+If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 3.4.
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
 We made some trade-offs in order to support more file types, so we used the HDFS protocol for internal access to OBS and this connector need some hadoop dependencies.
docs/en/connector-v2/source/OssFile.md — 2 additions, 2 deletions

@@ -15,11 +15,11 @@ import ChangeLog from '../changelog/connector-file-oss.md';
 ### For Spark/Flink Engine
 1. You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
-2. You must ensure `hadoop-aliyun-xx.jar`, `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` in `${SEATUNNEL_HOME}/plugins/` dir and the version of `hadoop-aliyun` jar need equals your hadoop version which used in spark/flink and `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` version needs to be the version corresponding to the `hadoop-aliyun` version. Eg: `hadoop-aliyun-3.1.4.jar` dependency `aliyun-sdk-oss-3.4.1.jar` and `jdom-1.1.jar`.
+2. You must ensure `hadoop-aliyun-xx.jar`, `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` in `${SEATUNNEL_HOME}/plugins/` dir and the version of `hadoop-aliyun` jar need equals your hadoop version which used in spark/flink and `aliyun-sdk-oss-xx.jar` and `jdom-xx.jar` version needs to be the version corresponding to the `hadoop-aliyun` version. Eg: `hadoop-aliyun-3.4.1.jar` dependency `aliyun-sdk-oss-3.4.1.jar` and `jdom-1.1.jar`.
 ### For SeaTunnel Zeta Engine
-1. You must ensure `seatunnel-hadoop3-3.1.4-uber.jar`, `aliyun-sdk-oss-3.4.1.jar`, `hadoop-aliyun-3.1.4.jar` and `jdom-1.1.jar` in `${SEATUNNEL_HOME}/lib/` dir.
+1. You must ensure `seatunnel-hadoop3-3.4.1-uber.jar`, `aliyun-sdk-oss-3.4.1.jar`, `hadoop-aliyun-3.4.1.jar` and `jdom-1.1.jar` in `${SEATUNNEL_HOME}/lib/` dir.
docs/en/connector-v2/source/OssJindoFile.md — 1 addition, 2 deletions

@@ -44,12 +44,11 @@ Read data from aliyun oss file system using jindo api.
 You need to download [jindosdk-4.6.1.tar.gz](https://jindodata-binary.oss-cn-shanghai.aliyuncs.com/release/4.6.1/jindosdk-4.6.1.tar.gz)
 and then unzip it, copy jindo-sdk-4.6.1.jar and jindo-core-4.6.1.jar from lib to ${SEATUNNEL_HOME}/lib.
-If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 2.x.
+If you use spark/flink, In order to use this connector, You must ensure your spark/flink cluster already integrated hadoop. The tested hadoop version is 3.4.
 If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you download and install SeaTunnel Engine. You can check the jar package under ${SEATUNNEL_HOME}/lib to confirm this.
 We made some trade-offs in order to support more file types, so we used the HDFS protocol for internal access to OSS and this connector need some hadoop dependencies.