Releases: DataLinkDC/dinky
Dinky v1.2.0
Feature
- Added npm profiles
- Added a version number field to the bug report template
- Added a built-in Flink History Server to reduce UNKNOWN states, making the final Flink task information more accurate
- Added support for Flink 1.20 and updated dependencies on other Flink versions
- Support task export
- Add a global token
- Support physical deletion of resources
- Support Paimon HDFS/Hive data sources
- Obtain job information using the ingress address
- Flink SQL tasks support INSERT result preview
- Add a welcome/init page
- FlinkSQL Studio supports real-time task status updates
- Added a form for Flink JAR tasks
- Provide initialization tools
- Support a PostgreSQL-backed Flink catalog
- Added an E2E test workflow
Fix
- Fix the error when executing SHOW statements
- Fix JSON serialization and deserialization issues
- Fix a Flink 1.19 CLI bug
- Fix the "all ip port is not available" bug
- Fix the issue where the enable button in Git Project forms has no default value
- Fix the saveOrUpdate method in the Git project module
- Fix SavePoint path logic and adjust how Flink configuration is acquired
- Resolve the "Exceeding storage quota" issue when too many job tabs are open
- Fix several Git build bugs
- Fix unsupported global variable substitution when fetching field-level lineage
- Fix thumbnail display in the code editor
- Fix the SQL auto-initialization issue on PostgreSQL
- Fix an Oracle column type conversion error
- Fix bugs that occurred when Flink jobs were submitted in local mode
- Fix the issue where Flyway does not support MySQL 5.7
- Fix abnormal data in PostgreSQL queries
- Fix the exception caused by a missing instance when clicking a job in the Ops workbench
- Fix an execution failure
- Fix the bug when querying Oracle primary key columns
- Fix the issue where the task tree cannot be sorted
- Fix the endless refresh of Git project pages
- Fix the NullPointerException when configuring DingTalk alerts in Dinky
- Fix some minor bugs
- Fix the array out-of-bounds issue when fetching lineage information
- Fix the SQL injection error caused by upgrading the Druid version
- Fix the catalog display field bug
- Fix EXECUTE JAR submission in yarn-application mode
- Fix a NullPointerException in alerts
- Fix the issue where a hyphen in a table name prevents task execution
- Fix a configuration key error
- Fix the Dinky address URL in job alerts
- Fix the menu mapper
- Fix the "job id is null" exception in query mode
- Fix Kerberos-related bugs and the issue where SQL SET values do not take effect
- Fix job instances not being saved in query mode
- Fix a WebSocket bug
- Fix web packaging
- Fix the Dinky backend CI workflow with Flink 1.20
- Fix the primary key generation strategy
- Fix the "Object not found" issue when mocking statements
- Fix the data studio footer state
- Fix incomplete dependencies in the docs module
- Fix the floating button when closing the data development page
- Fix the data development page when system configuration is enabled
- Fix the K8s form ingress bug
- Fix the route redirection error on the welcome page
- Fix Flink task submission in session mode
- Fix a web NullPointerException
- Fix a web clear bug
- Fixed an error when using the copy button in the Resource Center
- Fix creating a new task in a subdirectory of the same name
- Fix task name restrictions when running in Kubernetes mode
- Fix a K8s test bug
- Fix global CSS style confusion caused by introducing LESS in data development
- Fix the Flink JAR task toolbar display in data development
- Fix a PostgreSQL bug
- Fix concurrent execution exceptions when DolphinScheduler invokes Dinky tasks
- Fix the failure to obtain task status from the Yarn Web UI when submitting a Flink task with Kerberos authentication enabled
- Fix the issue where the submitted job name remains unchanged after renaming the job
- Fix alert serialization
- Fix a login bug
- Fix Flink JAR submission
- Fix an automation script path issue
- Fix Git code build errors
- Fix parallel Yarn submission
- Fix the NPE when executing a query statement on a PostgreSQL table
- Fix the issue where FlinkJar tasks cannot use global variables
Optimize
- Optimize version update logic to solve cache issues caused by upgrades
- Optimize the workspace page display
- Refactor metrics requests
- Refactor the method of obtaining user.dir
- Switch SSE to a global WebSocket and the web container from Tomcat to Undertow
- Add getSchemas and getTables APIs
- Delete the dinky_cluster index
- Optimize mapper queries
- Optimize class attribute type issues
- Delete the prompt message on the UDF registration management page
- Optimize some web layouts so they are more user-friendly on small screens
- Optimize virtual scrolling in the data source detail list
- Optimize the login page
- Optimize doc actions
- Upgrade some doc dependencies
- Improve retrieval of table info from the schema
- Optimize program startup
- Optimize cluster configuration and starting session clusters for manual registration
- Improve the introduction and layout of configuration items in the Configuration Center
- Optimize role permission hints
- Add a loading effect when fetching lineage
- Unify JSON (Jackson) serialization as much as possible
- Add a hint that roles and tenants are bound
- Optimize some page layouts, update web dependencies, and fix some bugs
- Modify and upgrade the SQL file version number
- Optimize the display of the Flink operator graph in the Ops Center
- Optimize the Dinky Flink Web UI
- Change the Oracle TIMESTAMP column type ordering to precede TIME columns
- Optimize the task list layout
- Optimize some code
- Support repeated task import
- Limit the maximum percentage of container memory used by the JVM via -XX:MaxRAMPercentage
- Optimize K8s log printing
- Optimize Flink application mode status refresh
- Refactor the data development interface to enhance the user experience
- Remove the restriction on underscores in job names
- Change the token key name
- Remove quotation marks when building FlinkSQL
- Upgrade CDC to 3.2.0
- Add package-lock.json
- Refactor the get-version function
- Add a tag right-click menu
- Optimize the new UI
- Optimize debug tasks to preview data
- Optimize the FlinkDDL execution order
- Remove the old data development page and fix some minor details
- Uniformly use '/' as the file separator
- Optimize explain and add tests
- Move DataStudioNew to DataStudio
- Refactor result queries
- Add WebSocket PING/PONG
- Add last update time to the footer
- Optimize the IDE style
- Remove the old lineage implementation
- Optimize the data studio theme
- Optimize CDCSOURCE and support print and mock sinks
- Optimize the offline button icon
- Optimize web icons
- Improve the display of printed table data
- Optimize the status of running tasks and beautify the UI
- Optimize the logic for constructing role menus
- Add the missing exception message when uploading files in the Resource Center
- Optimize error logging for task submission
- Switch to service synchronization when clicking the Tasks tab
- Delete the previously failed cluster when resubmitting a task
- Optimize the Flink JAR form select
- Optimize the app package size
- Optimize variable suggestions
- Add Deployment status monitoring
- Add resource management to the data studio page
- Optimize some scripts
- Add a default jobmanager.memory.process.size parameter
- Optimize the scheduler request error assert method
- Refactor UDF execution
- Optimize lineage acquisition, add Savepoint support, and improve UDF class name display
- Optimize the DevOps page UI
- Modify the SQLite data location
- Change Chinese comments to English
- Auto-size the welcome page width
- Add pushing tasks to DolphinScheduler
Document
- Refine documentation for submitting tasks in K8s mode
- Add Datasophon integration with Dinky
- Add a Flink CLI doc
- Update ICP in the documentation
- Update deploy guide references
- Fix the deploy doc
- Update the debug data preview doc
- Update images in the Quick Start documentation
- Optimize the case where Dinky cannot start without the Flink dependency
- Optimize the App package size and the rs protocol
- Fix the wrong links about source code deployment in README.md and README_zh_CN.md
Contributors
@aiwenmo
@binggana
@chenhaipeng
@dagenjun
@emmanuel-ferdman
@gaoyan1998
@gphwxhq
@hashmapybx
@Jam804
@javaht
@jianjun159
@leechor
@MactavishCui
@maikouliujian
@MaoMiMao
@miaoze8
@RainHXXXX
@stevenkitter
@soulmz
@suger-bl
@suxinshuo
@yuxiqian
@zackyoungh
@zhuangchong
@zhuxt2015
@Zzm0809
@18216499322
v1.1.0
Dinky-1.1.0 Release Note
Incompatible Changes
- v1.1.0 supports the automatic schema upgrade framework (Flyway), using the table structure/data of v1.0.2 as the default base version. If your version is below v1.0.2, you must first upgrade to the v1.0.2 table structure according to the official upgrade tutorial. If your version is v1.0.2+, you can upgrade directly and the program will execute automatically without affecting historical data. If you are deploying from scratch, please ignore this note.
- Due to flink-cdc being contributed to the Apache Foundation, the package name changes in the new version and no compatibility shim is possible. From dinky-v1.1.0 on, dinky uses the new package name dependencies, which requires your flink-cdc dependencies to be upgraded to flink-cdc v3.1+; otherwise it will not work.
- Removed the Scala version distinction when packaging; only Scala-2.12 is used, and Scala-2.11.x is no longer supported.
New Features
- Added Flyway schema upgrade framework.
- Task directory supports flexible sorting.
- Implemented task-level permission control and supports different permission control strategies.
- Optimized the automatic addition of administrator user association when adding tenants.
- Added the function to directly kill the process in the case of task submission deadlock.
- Support k8s deployment of dinky.
- Implement data preview.
- New support for UDF injection configuration in data development.
- Added sink-side table name mapping to the whole-library synchronization function (cdcsource), supporting regex-based name rewriting.
- Added Dashboard page.
- Added Paimon data source type.
- Added SQL-Cli.
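The whole-library synchronization feature above is driven by a single EXECUTE CDCSOURCE statement. A minimal sketch follows, assuming a MySQL source; the connection values are placeholders, and the exact option keys (including the sink table-name mapping options added in this release) should be checked against the cdcsource documentation:

```sql
-- Hedged sketch of Dinky whole-library synchronization (values are illustrative)
EXECUTE CDCSOURCE sync_demo WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'dinky',
  'password' = '******',
  'checkpoint' = '10000',
  'parallelism' = '1',
  'table-name' = 'app_db\..*',   -- regex: match every table in app_db
  'sink.connector' = 'print'     -- print sink for quick verification
);
```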
Fixes
- Fixed the k8s account.name value issue and the Conf initialization problem when deleting a cluster.
- Fixed the issue of flink-cdc losing SQL in application mode.
- Fixed the issue where the task creation time was not reset when copying tasks.
- Fixed the task list positioning problem.
- Solved the problem of user-defined classes in user Jars not being compiled when submitting Jar tasks.
- Fixed the incorrect alarm information in the enterprise WeChat-app mode.
- Fixed the problem of flink-1.19 not being able to submit tasks.
- Fixed the startup script not supporting jdk11.
- Fixed the problem of cluster instances not being deleted.
- Fixed the problem of UDF not finding the class in Flink SQL tasks.
- Fixed the problem of the data development page not updating the state when the size changes.
- Fixed the problem of not being able to get the latest high availability address defined in custom configuration.
- Fixed the problem of not recognizing the manual configuration of rest.address and rest.port.
Optimizations
- Optimized the prompt words in resource configuration.
- Optimized the DDL generation logic of the MySQL data source type.
- Optimized some front-end dependencies and front-end prompt information.
- Optimized the copy path function of the resource center, supporting multiple application scenarios within dinky.
- Optimized the monitoring function, using the monitoring function switch in dinky's configuration center to control all monitoring within dinky.
- Optimized some front-end judgment logic.
Restructuring
- Moved the alarm rules to the alarm route under the registration center.
- Removed Paimon as the monitoring storage medium, changed to SQLite, no longer strongly depending on the hadoop-uber package (except in Hadoop environments), and support periodic cleaning.
- Restructured the monitoring page, removing some built-in service monitoring.
Documentation
- Added documentation for deploying dinky on k8s.
- Optimized the Docker deployment documentation.
- Added documentation related to whole library synchronization function (cdcsource) sink end table name mapping.
v1.0.3
Dinky-1.0.3 Release Note
Upgrade Instructions
1.0.3 is a bug-fix version with no table structure changes; no additional SQL scripts need to be executed during the upgrade. Just overwrite and install, paying attention to configuration file changes and dependency placement.
About the Scala version: the release uses Scala-2.12. If your environment must use Scala-2.11, please compile it yourself; refer to Compile and Deploy and change the profile from scala-2.12 to scala-2.11.
New Features
- Added the ability to manually kill the process when a task gets stuck during operation
Fixes
- Fix the problem that Yarn Application mode cannot execute tasks on Flink 1.19
- Fix start/stop script problems and adapt to JDK 11 GC parameters
- Fix the problem that a UDF class cannot be found after publishing
- Fixed the priority problem where SET statements cannot override configuration in Application task SQL
Optimization
- Optimize high CPU load and unreleased threads in the Dinky service during monitoring
- Optimize the Dinky monitoring configuration: the Configuration Center->Global Configuration->Metrics Configuration->**Dinky JVM Monitor Switch** now controls whether Flink task monitoring is enabled
- Optimize the data type conversion logic of Oracle whole-database synchronization
- Optimize the front-end rendering performance and display effect of monitoring data
v1.0.2
Dinky-1.0.2 Release Note
Upgrade Instructions
- 1.0.2 is a bug-fix version with table structure/data changes. Please execute DINKY_HOME/sql/upgrade/1.0.2_schema/&lt;data source type&gt;/dinky_dml.sql during the upgrade.
About the Scala version: the release uses Scala-2.12. If your environment must use Scala-2.11, please compile it yourself; refer to Compile Deployment and change the profile from scala-2.12 to scala-2.11.
New Feature
- Adapt to various Rest SvcTypes in KubernetesApplicationOperator mode and modify the JobId acquisition logic
- Added SSE heartbeat mechanism
- Added the function of automatically retrieving the latest highly available JobManager address (currently implemented in Yarn; not yet implemented in K8s)
- Added the function of clearing logs in the console during data development
- Support Flink 1.19
- Add task group related configuration when pushing to Apache DolphinScheduler
- Support submitting YarnApplication tasks as a user-specified user
- The startup script adds GC related startup parameters and supports configuring the DINKY_HOME environment variable
- Implement FlinkSQL configuration item in cluster configuration to support RS protocol (Yarn mode only)
Fix
- Fixed the problem of global variables not being recognized in YarnApplication mode, and reconstructed the YarnApplication submission method
- Fixed the problem of data source heartbeat detection feedback error
- Fix the possible 404 issue in front-end route jump
- Fixed the issue of incorrect error prompt when global variable does not exist
- Fixed the issue of cursor movement and flickering in the editor during front-end data development
- Fixed the path error in the DockerfileDinkyFlink Docker file
- Fixed the problem of unrecognized configuration Python options
- Fixed null pointer exception in role user list
- Fixed some issues when submitting K8s tasks
- Fixed Oracle's Time type conversion problem when synchronizing the entire database
- Fixed the problem that k8s pod template cannot be parsed correctly
- Fixed the issue where SPI failed to load CodeGeneratorImpl
- Fixed an issue where numeric columns declared with UNSIGNED / ZEROFILL keywords would cause parsing mismatches
- Fixed the issue where the status of batch tasks is still unknown after completion
- Fixed some unsafe interfaces that can be accessed without login authentication
- Fixed the problem of unknown status in Pre-Job mode
- Fixed the problem of retrieving multiple job instances due to duplicate Jid
- Fixed the problem that the user list cannot be searched using worknum
- Fixed the problem that the query data button on the right side of the result Tag page cannot be correctly rendered when querying data.
- Fixed issues with print table syntax
- Fixed the problem that the resource list cannot be refreshed after adding or modifying it
- Fixed the issue of incorrect console rolling update task status for data development
- Fixed the problem of occasional packaging failure
- Fixed problems when building Git projects
Optimization
- Optimize start and stop scripts
- Optimize partial overflow of the global configuration page
- Optimize UDF management hints
- Optimize the user experience of the operation and maintenance center list page and support sorting by time
- Optimize the warehouse address of default data in Git projects
- Optimize flink jar task submission to support batch tasks
- Optimize the problem that the right-click menu cannot be clicked when it overflows the visible area
- Optimize the primary key of the Ops Center list component
- When modifying tasks, the template is changed from modifiable to unmodifiable
- Optimize the display method and type of cluster configuration
- Optimize the logic of deleting clusters in K8s mode
- Fixed the problem that the cluster is not automatically released in Application mode
- Remove the logic of using Paimon for data source caching and change it to the default memory cache, which can be configured as redis cache
- Removed the automatic pop-up of Console when switching tasks
- Optimize the rendering logic of resource management. The resource management function cannot be used when resources are not turned on.
- Optimize the detection logic of login status
- Optimize login page feedback prompts
- Removed some useless code on the front end
- Optimize the issue where building operator graphs multiple times during whole-database synchronization yields an inconsistent operator order, preventing recovery from a savepoint
- Optimize some resource configuration hints
- Optimize and improve the replication function of the resource center, supporting all reference scenarios within Dinky currently
Safety
- Exclude some high-risk jmx exposed endpoints
Document
- Optimize expression variable expansion documentation
- Optimize some practical documents for synchronization of the entire database
- Add JDBC FAQ about tinyint type
- Added a carousel image on the home page of the official document website
- Fixed the description problem of resource configuration in document global configuration
- Added documents related to environment configuration in global configuration
- Delete some configuration items of Flink configuration in the global configuration
- Added document configuration description for alarm type email
v1.0.1
Dinky-1.0.1 Release Note
1.0.1 is a bug-fix version with no database upgrade changes; it can be upgraded directly.
About the Scala version: the release uses Scala-2.12. If your environment must use Scala-2.11, please compile it yourself; refer to Compile Deployment and change the profile from scala-2.12 to scala-2.11.
New Feature
- Add some Flink Options classes to trigger shortcut prompts
- Implement automatic scrolling of console logs during data development
Fix
- Fixed the problem that the SMS alarm plug-in was not packaged
- Fixed NPE exception and some other issues when creating UDF
- Fixed job type rendering exception when creating tasks
- Fixed the issue of page crash when viewing Catalog during data development
- Fixed parameter configuration problems when using `add jar` with S3
- Fix some issues with the `rs` protocol
- Fixed the routing jump error in the quick navigation in data development
- Fixed the issue that the console was not closed when selecting the UDF task type
- Fixed the issue where the `decimal` data type exceeds 38 digits (values over 38 digits are converted to string)
- Fixed the problem that some pop-up boxes could not be closed
- Fixed the problem that global variables cannot be recognized in application mode
- Fixed the array out-of-bounds problem when obtaining the container in application mode
- Fix the problem that `add file` cannot be parsed
Optimization
- Optimize some front-end request URLs into agreed constants
- Optimize the startup script and remove the FLINK_HOME environment variable loading
- Optimize the prompt message when the password is incorrect
- Optimize tag display of data development tasks
- Turn off automatic preview in the data development editor
- Optimize the expression variable definition method, changing from file definition to system configuration definition
- Optimize the prompt message that query statements are not supported in application mode
- Optimize the rendering effect of the `FlinkSQL environment` list
- Optimize the environment check exception prompt when building Git projects
- Optimize the cluster for NPE issues that may occur during heartbeat detection
Document
- Added built-in variable documents for synchronization of the entire library
- Optimize document version
- Add an `EXECUTE JAR` task demo
- Optimize some copy when creating cluster configurations
- Optimize some paths in the entire database synchronization document
v1.0.0
Dinky-1.0.0 Release Note
Upgrade Instructions
- Dinky 1.0 is a refactored version that restructures existing functions, adds several enterprise-level functions, and fixes some limitations of 0.7. There is currently no direct upgrade from 0.7 to 1.0. It is recommended to redeploy version 1.0.
- Starting from Dinky 1.0, the Dinky community will no longer maintain all versions before 1.0.
- Starting from Dinky version 1.0, the Dinky community will provide support for Flink 1.14.x and above, and will no longer maintain Flink versions below 1.14. At the same time, Flink has added some new features, which Dinky will gradually support.
- Dinky 1.0 and later versions, every time Flink adds a new major version, Dinky will also add a new major version, and at the same time, a Dinky-Client version will be eliminated depending on the situation. Deleted versions may be subject to a vote, and the results of the vote determine the deleted version.
- Four RC versions were released during the refactoring process. RC versions can be upgraded, but it is still recommended to redeploy the 1.0-RELEASE version to avoid some positioning issues.
- Users of Dinky version 0.7 can continue to use version 0.7, but no maintenance and support will be provided. It is recommended to install version 1.0 as soon as possible.
The changes from version 0.7 to version 1.0 are relatively large, and there are some incompatible changes. Users using version 0.7 cannot directly upgrade to version 1.0. It is recommended to redeploy version 1.0.
Incompatible changes
- CDCSOURCE dynamic variable definition changed from `${}` to `#{}`
- Global variables such as `_CURRENT_DATE_` are removed and replaced by expression variables
- Flink Jar task definition is changed from a form to EXECUTE JAR syntax
- The definition of dinky-app-xxxx.jar in Application mode is moved to the cluster configuration
- The database DDL part is not compatible with upgrades
- The type attribute of Dinky's built-in Catalog is changed from `dlink_catalog` to `dinky_catalog`
Refactoring
- Reconstruct data development
- Reconstruct the operation and maintenance center
- Reconstruct the registration center
- Reconstruct the Flink task submission process
- Reconstruct the Flink Jar task submission method
- Reconstruct CDCSOURCE entire library synchronization code architecture
- Reconstruct Flink task monitoring and alarming
- Reconstruct permission management
- Reconstruct system configuration to online configuration
- Refactor push DolphinScheduler
- Reconstruct the packaging method
New Features
- Data development supports code snippet prompts
- Support real-time printing of Flink table data
- Console real-time printing task submission log
- Support Flink CDC 3.0 entire database synchronization
- Support custom alarm rules and custom alarm templates
- Support Flink k8s operator submission
- Support proxy Flink webui access
- Added Flink task Metrics to monitor custom charts
- Support Dinky jvm monitoring
- Added resource center functions (local, hdfs, oss) and expanded rs protocol
- Added Git UDF/JAR project hosting and overall construction process
- Supports full-mode Flink jar task submission
- Added ADD CUSTOMJAR syntax to dynamically load dependencies
- Added ADD FILE syntax to dynamically load files
- openapi supports custom parameter submission
- Permission system upgrade to support tenants, roles, tokens, and menu permissions
- Support LDAP
- Added new widget function to the data development page
- Support pushing dependent tasks to DolphinScheduler
- Implement the Flink instance stopping function
- Implement CDCSOURCE synchronization of the entire database and ordering of data under multiple degrees of parallelism
- Implement configurable alarm retransmission prevention function
- Implement ordinary SQL that can be scheduled and executed by DolphinScheduler
- Added the ability to obtain dependent JARs loaded in the system and group them into groups to facilitate troubleshooting JAR related issues
- Implement cluster configuration test connection function
- Support H2, MySQL, and PostgreSQL deployment; the default is H2
New syntax
- CREATE TEMPORAL FUNCTION is used to define temporary table functions
- ADD FILE is used to dynamically load class/configuration and other files
- ADD CUSTOMJAR is used to dynamically load JAR dependencies
- PRINT TABLE for real-time preview of table data
- EXECUTE JAR is used to define Flink Jar tasks
- EXECUTE PIPELINE is used to define Flink CDC 3.x entire library synchronization tasks
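Taken together, the new statements might be combined as follows. This is a sketch based only on the syntax names listed above; the argument forms (paths and WITH-clause keys) are assumptions and may differ from the actual grammar:

```sql
-- Dynamically load a dependency and a config file (paths are illustrative)
ADD CUSTOMJAR 'rs:/jar/flink/udf/my-udf.jar';
ADD FILE '/opt/conf/app.properties';

-- Preview table data in real time during development
PRINT TABLE my_sink_table;

-- Define a Flink Jar task instead of the old form-based definition
EXECUTE JAR WITH (
  'uri' = 'rs:/jar/flink/app/my-app.jar',
  'main-class' = 'com.example.Main'
);
```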
Fix
- Fixed the problem of missing extends path in CLASS_PATH of auto.sh
- Fixed the problem that the job list life cycle status value was not re-rendered after release/offline
- Fixed Flink 1.18 set syntax not working and producing null error
- Fixed the save point mechanism issue of submission history
- Fixed the problem of creating views in Dinky Catalog
- Fixed Flink application not throwing exception
- Fixed incorrect rendering of alarm options
- Fixed job life cycle issues
- Fixed the problem that k8s YAML cannot be displayed in cluster configuration
- Fixed a time-consuming formatting error in the operation and maintenance center job list
- Fixed the problem of Flink dag prompt box
- Fixed checkpoint path not found
- Fixed node location error when pushing jobs to Dolphin Scheduler
- Fixed the problem that job parameters did not take effect when the set configuration contained single quotes
- Upgrade jmx_prometheus_javaagent to 0.20.0 to resolve some CVEs
- Fixed checkpoint display problem
- Fixed job instances always showing as running
- Fixed the problem of missing log printing after Yarn Application failed to submit a task
- Fixed the problem that job configuration cannot render yarn prejob cluster
- Fixed URL misspelling causing request failure
- Fixed the problem of inserting the same token value when multiple users log in
- Fixed alarm instance form rendering issue
- Fixed the problem that FlinkSQLEnv could not be checked
- Fixed the problem that set statement could not take effect
- Fixed the problem of invalid yarn cluster configuration, customized Flink and hadoop configuration
- Fixed the problem that the checkpoint information of the operation and maintenance center cannot be obtained
- Fixed the problem that the status cannot be detected after the Yarn Application job is completed
- Fixed the problem of no printing in the console log when yarn job submission failed
- Fixed the issue where Flink instances started from cluster configuration cannot be selected in job configuration
- Fixed RECONNECT status job status recognition error
- Fixed an issue with FlinkJar tasks being submitted to PreJob mode
- Fixed Dinky startup detection pid problem
- Fixed the problem that caused conflicts when the built-in Paimon version was inconsistent with the user integrated version (implemented using shader)
- Fixed the problem that the CheckPoint parameter of the FlinkJar task does not take effect in Application mode
- Fixed the issue where the name and remark information were updated incorrectly when modifying the Task job
- Fixed the issue where password is required when registering data source
- Fixed the problem of incorrect heartbeat detection of cluster instances
- Fixed the problem that Jar task submission cannot use set syntax
- Fixed an issue where data development->job list cannot be folded in some cases
- Fixed the problem of repeated sending of alarm information under multi-threading
- Fixed the problem of tag height of data development->open job
- Fixed the problem that the jobmanager log of the operation and maintenance center job details could not be displayed normally in some cases
- Fixed Catalog NPE issues
- Fixed the problem of incorrect prejob task status
- Fixed add customjar syntax problem
- Fixed the problem that the jar task could not be monitored
- Fixed Token invalid exception
- Fixed a series of problems caused by statement delimiters and removed the system configuration
- Fixed the problem of task status rendering in the operation and maintenance center
- Fixed the problem of failure to delete tasks when the job instance does not exist
- Fixed duplicate exception alarm
- Fixed some issues submitted by PythonFlink
- Fixed the problem that Application Mode cannot use global variables
- Fixed the problem that K8s task could not start due to uninitialized resource type
- Fixed the pipeline acquisition error of the Jar task causing the front end to not work properly
- Fix SqlServer timestamp conversion to string
- Fixed NPE issue when publishing tasks with UDF
- Fixed the problem of Jar task being unable to obtain execution history
- Fixed the problem of front-end crash caused by NPE when Doris data source obtains DDL and queries
Optimization
- Added key width for job configuration items
- Optimize query job directory tree
- Optimize Flink on yarn app submission
- Optimize Explainer class to use builder pattern to build results
- Optimize document management
- Implement operator via SPI
- Optimize document form pop-up layer
- Optimize type rendering of Flink instances
- Optimize the data source details search box
- The method of obtaining the version is optimized to be returned by the backend interface
- Optimize CANCEL job logic, and can forcefully stop the lost connection job
- Optimize the detection reference logic when part of the registration center is deleted
- You can specify a job template when creating an optimization job
- Optimize Task deletion logic
- Optimize some front-end internationalization
- Optimize automatic switching between console and result tag during execution preview
- Optimize the UDF download logic of K8S
- Optimize the synchronization of the entire database and sub-databases and tables
- Optimize the registration center->d...
Dinky v1.0.0-rc4
Feature:
- Implement ordered data during whole-database synchronization under multiple degrees of parallelism
- Implement HDFS HA in the resource center
- Implement permission control for global configuration in the configuration center
- Implement configurable alarm anti retransmission function
- Implement DB SQL tasks that can be scheduled by DolphinScheduler
- New resource center synchronization directory based on configured resource storage type (currently implemented as oss)
Fix:
- Fix incorrect heartbeat detection for cluster instances
- Fix delimiter issues
- Fix Jar task submission not being able to use SET syntax
- Fix an NPE when obtaining user information through LDAP
- Fix the parent ID not being carried when assigning menu permissions
- Fix version history not updating properly when switching keys in data development
- Fix some default value issues in the PG SQL files
- Fix Dinky failing to start due to resource configuration errors
- Fix default route redirection in permission control
- Fix the task list in data development not being collapsible
- Fix duplicate alarm messages being sent under multithreading
- Fix tab height when opening a job in data development
- Fix authentication issues when integrating GitLab
- Fix JobManager logs in the operation and maintenance center's job details not displaying properly
- Fix a Catalog NPE
- Fix Yarn's port being 0
- Fix front-end form status issues with data sources
- Fix kubeconfig acquisition
- Fix task status errors in Per-Job mode
- Fix syntax issues with ADD CUSTOMJAR
- Fix some web NPE exceptions
- Fix a bug when enabling SSL for email-type alarm instances
- Fix the inability to monitor Jar tasks
Optimization & Improve:
- Optimize the UDF download logic for K8s
- Optimize CDC 3.0 related logic
- Optimize whole-database synchronization across sharded databases and tables
- Optimize and integrate LDAP logic
- Optimize redirection from the data source list to the details page in the registry
- Optimize job configuration logic (job configuration cannot be edited while the job is published)
- Optimize cluster instance rendering for job configuration in data development
- Optimize the startup script to allow configuring environment variables at startup
Document:
- Optimize documentation on sharded database and table synchronization
- Optimize the regular deployment documentation
- Add documentation on alarm de-duplication (anti-resend)
- Optimize the OpenAPI documentation
- Add an HDFS HA configuration document
Contributors
@aiwenmo
@gaoyan1998
@izouxv
@JiaLiangC
@kylinmac
@leechor
@yangzehan
@yqwoe
@zackyoungh
Dinky v1.0.0-rc3
New Feature
- Change the default Flink version to 1.16
- Implement a line-break button in the CodeShow component
- Implement a Flink instance stop function
- Support deleting defined task monitoring layouts
Optimization
- Version information is now returned by a backend interface
- Optimize CANCEL job logic so that jobs with lost connections can be forcefully stopped
- Optimize reference-detection logic when part of the registry is deleted
- Optimize job creation: a job template can be specified when creating a job
- Optimize task deletion logic
- Optimize some front-end internationalization
- Optimize Dinky process PID detection logic
- Optimize automatic switching between the console and result tabs during execution preview
Fix
- Fixed alarm instance form rendering
- Fixed FlinkSQLEnv not being checkable
- Fixed SET statements not taking effect
- Fixed invalid Yarn cluster configuration with customized Flink and Hadoop configuration
- Fixed some problems in Per-Job mode
- Fixed checkpoint information in the operation and maintenance center not being obtainable
- Fixed status not being detected after a Yarn Application job completes
- Fixed console log printing failures when Yarn job submission fails
- Fixed a 404 when getting the savepoint list
- Fixed Flink instances started from cluster configuration not being selectable in job configuration
- Fixed status recognition errors for jobs in the RECONNECT state
- Fixed the end time in the operation and maintenance center list showing 1970-01-01
- Fixed submission of Flink Jar tasks in Per-Job mode
- Fixed dependency conflicts caused by duplicate imports in the alarm module
- Fixed Dinky startup PID detection
- Fixed conflicts between the built-in Paimon version and user-integrated versions (resolved via shading)
- Fixed the syntax regex for EXECUTE JAR
- Fixed the CheckPoint parameter not taking effect for Flink Jar tasks in Application mode
- Fixed name and remark information being updated incorrectly when modifying a task
- Fixed password being required when registering a data source
Document
- Add some data development related documents
- Optimize some documents of the registration center
- Remove some deprecated/wrong documentation
- Adjust some document structures
- Add quick start document
- Add deployment documents
@aiwenmo
@drgnchan
@gaoyan1998
@gitfortian
@gitjxm
@leechor
@leeoo
@Logout-y
@MaoMiMao
@Pandas886
@yangzehan
@YardStrong
@zackyoungh
@Zzm0809
Dinky v1.0.0-rc2
Fix:
[Fix-2739] Complete the missing path in auto.sh's CLASS_PATH
[Fix-2740] Fixed re-rendering of the task list after publishing or taking offline
[Fix] Fix the Flink 1.18 SET statement not working and a null-configuration error
[Fix] Fix the save_point_strategy bug in submission history
[Fix] Fix printing of Flink tables
[Fix] Fix the CREATE VIEW to DDL bug in the catalog
[Fix] Fix Flink Application mode not throwing exceptions
[Fix] Fix an incorrect alert option
[Fix] Fix a job life cycle bug
[Fix-2754] Fix the K8s cluster form's YAML not being displayed
[Fix-2756] Fix the duration format error in the DevOps job list
[Fix-2777] Fix the Flink DAG tooltip
[Fix-2782] Fix checkpoint path not found
[Fix] Fix the locations bug when pushing tasks to DolphinScheduler
[Fix-2806] Fix job parameters not taking effect when SET keys or values contain single quotes
[Fix-2811] Upgrade jmx_prometheus_javaagent to 0.20.0 to fix some CVEs
[Fix-2814] Fix a checkpoint overview error
[Fix] Fix the Flink catalog not taking effect with add_jar
[Fix] Fix some DevOps bugs
[Fix-2832] Fix the H2 driver not being packaged by default
[Fix] Fix a SQL bug
[Fix] Fixed jobInstance always staying in the running state
[Fix-2843] Fix Yarn Application mode submission failures and missing log output
[Fix] Fix a UDF bug in H2
[Fix-2823] Fix job configuration failing to render the Yarn Per-Job cluster
[Fix] Fix a misspelled URL causing requests to fail
[Fix-2855] Fix the savepoint table params bug
[Fix-2776] Fix an insert error when multiple users log in with the same token value
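For Fix-2806 above, the tricky case is a single quote embedded in a SET key or value; in standard SQL string literals an embedded quote is written by doubling it. A minimal illustration (the parameter name is just an example):

```sql
-- Standard SQL escaping: '' inside a quoted literal yields one quote character,
-- so this sets pipeline.name to: tom's nightly job
SET 'pipeline.name' = 'tom''s nightly job';
```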
Optimization & Improve:
[Improve] Improve YAML extraction from the EXECUTE PIPELINE command
[Optimization] Add key width for job configuration items
[Optimization] Add a Dinky port configuration in PrintNetSink
[Improve] Improve the query catalog tree
[Optimization-2773] Fix the data source directory tree showing two scroll bars
[Optimization-2822] Optimize the metrics page tips
[Optimization] Optimize Flink on Yarn application submission
[Optimization] Optimize the Explainer class to build results with the builder pattern
[Optimization] Optimize document management
[Optimization] Implement operators via SPI
[Improve] Improve the document form layout
[Optimization-2757] Optimize type rendering of Flink instances
[Optimization-2755] Optimize the data source details search box
[Optimization] Add a resource implementation for DinkyClassLoader
Document:
[Document] Improve the cluster instance list document for the registration center
[Document] Improve the alert document for the registration center
[Document] Improve the git project document for the registration center
[Document] Improve the k8s document for the quick start
[Document] Modify domain name
[Document] Improve documents in registration center and authentication center
[Document] Improve documents in developer guide
[Document] Add parameter description in CDCSOURCE and example for debezium.*
[Document-2830] Update download
[Document] Modify document struct
Contributors:
@aiwenmo
@gaoyan1998
@gitfortian
@leeoo
@leechor
@stdnt-xiao
@yangzehan
@zackyoungh
@Zzm0809
Dinky v1.0.0-rc1
Introduction
Dinky is a data development platform based on Apache Flink, which enables agile data development and deployment.
Upgrade instructions
Dinky 1.0 is a refactored version that restructures existing functions, adds several enterprise-level functions, and fixes some limitations of 0.7. Currently, it is not possible to directly upgrade from 0.7 to 1.0. An upgrade plan will be provided in the future.
Function
Its main functions are as follows:
- FlinkSQL data development: automatic prompt completion, syntax highlighting, statement beautification, syntax verification, execution plan, MetaStore, lineage analysis, version comparison, etc.
- Support FlinkSQL multi-version development and multiple execution modes: Local, Standalone, Yarn/Kubernetes Session, Yarn Per-Job, Yarn/Kubernetes Application
- Support Apache Flink ecosystem: Connector, FlinkCDC, Paimon, etc.
- Support FlinkSQL syntax enhancement: whole database synchronization, execution environment, global variables, statement merging, table value aggregation function, loading dependencies, row-level permissions, Jar submission, etc.
- Support FlinkCDC real-time whole-database ingestion into warehouses and lakes: multi-sink output, automatic table creation, schema evolution, sharded databases and tables
- Supports SQL job development and metadata browsing: ClickHouse, Doris, Hive, MySQL, Oracle, Phoenix, PostgreSQL, Presto, SqlServer, StarRocks, etc.
- Support Flink real-time online debugging preview TableData, ChangeLog, Operator, Catalog
- Support Flink job custom monitoring statistical analysis and custom alarm rules.
- Support real-time task operation and maintenance: online and offline, job information (supports obtaining checkpoint), job log, version information, job snapshot, monitoring, SQL lineage, alarm record, etc.
- Support real-time job alarms and alarm groups: DingTalk, WeChat Work, Feishu, email, SMS, etc.
- Supports automatically hosted SavePoint/CheckPoint recovery and triggering mechanisms: latest, earliest, specified, etc.
- Supports multiple resource management: cluster instances, cluster configurations, data sources, alarms, documents, global variables, Git projects, UDFs, system configurations, etc.
- Support enterprise-level management: tenants, users, roles, menus, tokens, data permissions
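The whole-database synchronization listed above is driven by Dinky's CDCSOURCE syntax extension. A minimal sketch, assuming a MySQL source and a Doris sink; the option keys and values here are illustrative, not a verified configuration:

```sql
-- Illustrative sketch only: option names loosely follow the MySQL CDC connector,
-- but the exact keys and sink options should be checked against the Dinky docs.
EXECUTE CDCSOURCE demo_sync WITH (
  'connector' = 'mysql-cdc',
  'hostname' = '127.0.0.1',
  'port' = '3306',
  'username' = 'dinky',
  'password' = '***',
  'database-name' = 'app_db',
  'table-name' = 'app_db\..*',     -- synchronize every table in the database
  'sink.connector' = 'doris',      -- hypothetical sink; swap in your target
  'sink.fenodes' = '127.0.0.1:8030'
);
```

One statement fans out to per-table pipelines, which is what makes the statement-merging and automatic table creation features above possible.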
New Feature
- Added a new homepage dashboard
- Data development supports code hints
- Supports real-time printing of Flink table data
- The console supports real-time printing of task submission logs
- Support Flink CDC 3.0 whole-database synchronization
- Support customized alarm rules and customized alarm message templates
- Comprehensive revision of the operation and maintenance center
- Support for K8s and the Flink K8s operator
- Support proxying Flink Web UI access
- Support Flink task monitoring
- Support Dinky JVM monitoring
- New resource center function and an extended rs protocol
- New Git UDF/JAR project hosting and an end-to-end build process
- Support custom jar submission in Application mode across all deployment modes
- OpenAPI supports custom parameter submission
- Permission system upgrade with support for tenants, roles, tokens, and menu permissions
- LDAP authentication support
- New widget functionality on the data development page
- Support pushing dependent tasks to DolphinScheduler
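The rs protocol and custom jar submission above can be combined roughly as follows. This is a hedged sketch: the paths, option keys, and main class are hypothetical placeholders, so consult the Dinky documentation for the exact syntax:

```sql
-- Hypothetical sketch: load a dependency from the resource center (rs protocol),
-- then submit a user jar; keys like 'uri' and 'main-class' are assumptions.
ADD CUSTOMJAR 'rs:/udf/my-functions.jar';
EXECUTE JAR WITH (
  'uri' = 'rs:/jars/my-app.jar',
  'main-class' = 'com.example.MainJob',
  'args' = '--env prod'
);
```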