@@ -0,0 +1,226 @@
/*
* SPDX-License-Identifier: Apache-2.0
* Copyright Red Hat Inc. and Hibernate Authors
*/
package org.hibernate.community.dialect;

import org.hibernate.community.dialect.sequence.SpannerPostgreSQLSequenceSupport;
import org.hibernate.dialect.DatabaseVersion;
import org.hibernate.dialect.PostgreSQLDialect;
import org.hibernate.dialect.SimpleDatabaseVersion;
import org.hibernate.dialect.sequence.SequenceSupport;
import org.hibernate.dialect.unique.AlterTableUniqueIndexDelegate;
import org.hibernate.dialect.unique.UniqueDelegate;
import org.hibernate.engine.jdbc.dialect.spi.DialectResolutionInfo;
import org.hibernate.procedure.internal.StandardCallableStatementSupport;
import org.hibernate.procedure.spi.CallableStatementSupport;
import org.hibernate.tool.schema.internal.StandardTableExporter;

public class SpannerPostgreSQLDialect extends PostgreSQLDialect {
Member

With all the overrides here, and potentially missed cases in the SqlAstTranslator, I'm starting to think that extending PostgreSQLDialect is not a good idea.
The use of the Spanner JDBC driver in Database#SPANNER_PG also makes me think that it might be better to extend SpannerDialect here and override the functions, DDL types, or whatever else differs from standard Spanner.
Finally, I would prefer to move this dialect to the hibernate-community-dialects project until we can figure out a way to test it regularly.

Contributor Author

@beikov Spanner PG is more closely aligned with PostgreSQL than with the Spanner dialect. I agree that we override a lot of methods; we plan to add support for these features so that we can get rid of the overridden methods. The current dialect shows the gist of what differs from OSS PostgreSQL.

I am trying to onboard the dialect in phases so that it is easier to review (though I have fixed all the tests locally). I am planning to onboard testing in CI as soon as I finish all the changes related to the dialect. Do you suggest we move to the community dialects module first and later bring it into core once we configure the testing pipeline?

Member

I am trying to onboard the dialect in phases so that it is easier to review (though I have fixed all the tests locally).

What do you mean by "fixed all the tests"? I would hope that we don't require many @SkipDialect uses.

I am planning to onboard testing in CI as soon as I finish all the changes related to the dialect. Do you suggest we move to the community dialects module first and later bring it into core once we configure the testing pipeline?

Yes please. Until we can properly and regularly test this new dialect, I would prefer to have it in the community dialects module.

Contributor Author

@beikov

What do you mean by "fixed all the tests"? I would hope that we don't require many @SkipDialect uses.

Spanner doesn't support integer sequences due to its nature as a distributed database (to avoid hotspots). We might have to disable some tests that use an integer identity column (@Id with Integer).
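
To make that concrete, here is a minimal sketch of the kind of mapping that works with this dialect. The entity and generator names are invented for illustration; the point is only that the identifier is a Long backed by a sequence, since the 64-bit, bit-reversed values Spanner produces do not fit into an Integer @Id.

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;

// Hypothetical mapping: Spanner's bit-reversed sequences return 64-bit values,
// so the identifier must be a Long rather than an Integer.
@Entity
public class Account {
	@Id
	@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "account_seq")
	@SequenceGenerator(name = "account_seq", sequenceName = "account_id_seq", allocationSize = 1)
	private Long id;

	private String name;

	// constructors, getters and setters omitted
}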

Contributor Author

@beikov Thanks for the suggestions. I have moved the dialect into the community dialects module.
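
For readers following along, a rough sketch of how the relocated dialect could be selected explicitly, assuming the hibernate-community-dialects artifact and the Spanner JDBC driver are on the classpath; the project, instance, and database names in the connection URL are placeholders, and dialect auto-detection should normally make the explicit setting unnecessary.

import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.AvailableSettings;

public class SpannerPgBootstrapSketch {
	public static void main(String[] args) {
		// Sketch only: pin the community dialect explicitly instead of relying on auto-detection.
		StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
				.applySetting( AvailableSettings.DIALECT,
						"org.hibernate.community.dialect.SpannerPostgreSQLDialect" )
				.applySetting( AvailableSettings.JAKARTA_JDBC_URL,
						"jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-db" )
				.build();
		StandardServiceRegistryBuilder.destroy( registry );
	}
}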

Member

Spanner doesn't support integer sequences due to its nature as a distributed database (to avoid hotspots). We might have to disable some tests that use an integer identity column (@Id with Integer).

Cockroach has a similar problem and solved it in the past by always emitting serial8 or int8 as the column type. See CockroachDBIdentityColumnSupport.
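
For context, a sketch of that idea follows. This is not the actual CockroachDBIdentityColumnSupport source, only an illustration of the trick: whatever type the mapped identifier has, the dialect reports a 64-bit identity column, so Integer identifiers still end up on a bigint-backed column in DDL. A dialect would expose such a class from its getIdentityColumnSupport() override.

import org.hibernate.dialect.identity.IdentityColumnSupportImpl;

// Illustrative sketch, not the real CockroachDBIdentityColumnSupport:
// always describe identity columns as 64-bit, regardless of the requested type.
public class AlwaysBigintIdentityColumnSupport extends IdentityColumnSupportImpl {

	@Override
	public boolean supportsIdentityColumns() {
		return true;
	}

	@Override
	public String getIdentityColumnString(int type) {
		// Ignore the requested JDBC type code and emit a 64-bit serial column.
		return "serial8";
	}
}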

Contributor Author

@beikov I think CockroachDB's serial type has the capability to produce auto-incrementing sequences of int type (to avoid hotspots). In Spanner, the serial type also produces Long values, which can't fit into an Integer.

Member
@mbellade Jan 14, 2026

@sakthivelmanii this is fine for now, though it'll need to be addressed once we aim to set up testing and move this into core.


private final UniqueDelegate SPANNER_UNIQUE_DELEGATE = new AlterTableUniqueIndexDelegate( this );
private final StandardTableExporter SPANNER_TABLE_EXPORTER = new SpannerPostgreSQLTableExporter( this );

public SpannerPostgreSQLDialect() {
super();
}

public SpannerPostgreSQLDialect(DialectResolutionInfo info) {
super( info );
}

public SpannerPostgreSQLDialect(DatabaseVersion version) {
super( version );
}

@Override
protected DatabaseVersion getMinimumSupportedVersion() {
return SimpleDatabaseVersion.ZERO_VERSION;
}

@Override
public StandardTableExporter getTableExporter() {
return SPANNER_TABLE_EXPORTER;
}

@Override
public UniqueDelegate getUniqueDelegate() {
return SPANNER_UNIQUE_DELEGATE;
}

@Override
public SequenceSupport getSequenceSupport() {
return SpannerPostgreSQLSequenceSupport.INSTANCE;
}

@Override
public boolean supportsUserDefinedTypes() {
return false;
}

@Override
public boolean supportsFilterClause() {
return false;
}

@Override
public boolean supportsRecursiveCycleUsingClause() {
return false;
}

@Override
public boolean supportsRecursiveSearchClause() {
return false;
}

@Override
public boolean supportsUniqueConstraints() {
return false;
}

@Override
public boolean supportsRowValueConstructorGtLtSyntax() {
return false;
}

// Quantified (ALL) subqueries with operators other than <>/!= are not supported
@Override
public boolean supportsRowValueConstructorSyntaxInQuantifiedPredicates() {
return false;
}

@Override
public boolean supportsRowValueConstructorSyntaxInInSubQuery() {
return false;
}

@Override
public boolean supportsCaseInsensitiveLike() {
return false;
}

@Override
public String currentTimestamp() {
return currentTimestampWithTimeZone();
}

@Override
public String currentTime() {
return currentTimestampWithTimeZone();
}

@Override
public boolean supportsLateral() {
return false;
}

@Override
public boolean supportsFromClauseInUpdate() {
return false;
}

@Override
public int getMaxVarcharLength() {
// the maximum varchar length supported by Spanner is 2,621,440 characters
return 2_621_440;
}

@Override
public int getMaxVarbinaryLength() {
// the maximum is 10 MiB
return 10_485_760;
}

@Override
public String getCurrentSchemaCommand() {
return "";
}

@Override
public boolean supportsCommentOn() {
return false;
}

@Override
public boolean supportsWindowFunctions() {
return false;
}

@Override
public String getAddForeignKeyConstraintString(
String constraintName,
String[] foreignKey,
String referencedTable,
String[] primaryKey,
boolean referencesPrimaryKey) {
if ( !referencesPrimaryKey ) {
throw new UnsupportedOperationException(
"Cannot add foreign keys that do not reference the referenced table's primary key" );
}
return super.getAddForeignKeyConstraintString( constraintName, foreignKey, referencedTable, primaryKey, referencesPrimaryKey );
}

@Override
public boolean canBatchTruncate() {
return false;
}

@Override
public String rowId(String rowId) {
return null;
}

@Override
public boolean supportsRowConstructor() {
return false;
}

@Override
public String getTruncateTableStatement(String tableName) {
// Spanner has no TRUNCATE TABLE statement, so fall back to a plain DELETE
return "delete from " + tableName;
}

@Override
public String getBeforeDropStatement() {
return null;
}

@Override
public String getCascadeConstraintsString() {
return "";
}

@Override
public boolean supportsIfExistsBeforeConstraintName() {
return false;
}

@Override
public boolean supportsIfExistsAfterAlterTable() {
return false;
}

@Override
public boolean supportsDistinctFromPredicate() {
return false;
}

@Override
public boolean supportsPartitionBy() {
return false;
}

@Override
public boolean supportsNonQueryWithCTE() {
return false;
}

@Override
public boolean supportsRecursiveCTE() {
return false;
}

@Override
public CallableStatementSupport getCallableStatementSupport() {
return StandardCallableStatementSupport.NO_REF_CURSOR_INSTANCE;
}
}
@@ -0,0 +1,82 @@
/*
* SPDX-License-Identifier: Apache-2.0
* Copyright Red Hat Inc. and Hibernate Authors
*/
package org.hibernate.community.dialect;

import org.hibernate.boot.Metadata;
import org.hibernate.boot.model.relational.SqlStringGenerationContext;
import org.hibernate.dialect.Dialect;
import org.hibernate.mapping.Column;
import org.hibernate.tool.schema.internal.ColumnValue;
import org.hibernate.mapping.Index;
import org.hibernate.mapping.PrimaryKey;
import org.hibernate.mapping.Table;
import org.hibernate.mapping.UniqueKey;
import org.hibernate.tool.schema.internal.StandardTableExporter;

import java.sql.Types;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class SpannerPostgreSQLTableExporter extends StandardTableExporter {

public SpannerPostgreSQLTableExporter(Dialect dialect) {
super( dialect );
}

@Override
public String[] getSqlCreateStrings(Table table, Metadata metadata, SqlStringGenerationContext context) {
// Spanner mandates that a primary key be present in every table, but element collection
// tables have none. To satisfy this requirement, we add a hidden, auto-generated ID column
// backed by a BIT_REVERSED_POSITIVE sequence.
if ( !table.hasPrimaryKey() && !table.getForeignKeyCollection().isEmpty() ) {
Column column = getAutoGeneratedPrimaryKeyColumn( table, metadata );
table.addColumn( column );

PrimaryKey primaryKey = new PrimaryKey( table );
primaryKey.addColumn( column );

table.setPrimaryKey( primaryKey );
}

return super.getSqlCreateStrings( table, metadata, context );
}

@Override
public String[] getSqlDropStrings(Table table, Metadata metadata, SqlStringGenerationContext context) {
// Spanner requires the indexes to be dropped before dropping the table
List<String> sqlDropIndexStrings = new ArrayList<>();
for ( Index index : table.getIndexes().values() ) {
sqlDropIndexStrings.add( sqlDropIndexString(index.getName()) );
}
// Spanner requires all the unique indexes to be dropped before dropping the table
for ( UniqueKey uniqueKey : table.getUniqueKeys().values() ) {
sqlDropIndexStrings.add( sqlDropIndexString(uniqueKey.getName()) );
}
for ( Column column : table.getColumns() ) {
if ( column.isUnique() ) {
sqlDropIndexStrings.add( sqlDropIndexString(column.getUniqueKeyName()) );
}
}
String[] sqlDropStrings = super.getSqlDropStrings( table, metadata, context );
return Stream.concat( sqlDropIndexStrings.stream(), Stream.of( sqlDropStrings ) )
.toArray( String[]::new );
}

private String sqlDropIndexString(String indexName) {
return "drop index if exists " + indexName;
}

private Column getAutoGeneratedPrimaryKeyColumn(Table table, Metadata metadata) {
Column column = new Column( "rowid" );
column.setSqlTypeCode( Types.BIGINT );
column.setNullable( false );
column.setSqlType( "bigint" );
column.setOptions( "hidden" );
column.setIdentity( true );
column.setValue( new ColumnValue( metadata.getDatabase(), table, column, metadata.getDatabase().getTypeConfiguration().getBasicTypeForJavaType( Long.class ) ) );
return column;
}
}
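
To show when the hidden-rowid path in getSqlCreateStrings above kicks in, here is a hypothetical element collection mapping; its collection table carries foreign key columns but no primary key of its own, which is exactly the case the exporter patches up.

import java.util.List;
import jakarta.persistence.ElementCollection;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;

// Hypothetical entity: the "Person_nicknames" collection table generated for the
// @ElementCollection below has no primary key, so the exporter adds the hidden
// "rowid" identity column before emitting the create statement.
@Entity
public class Person {
	@Id
	@GeneratedValue(strategy = GenerationType.SEQUENCE)
	private Long id;

	@ElementCollection
	private List<String> nicknames;
}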
@@ -0,0 +1,29 @@
/*
* SPDX-License-Identifier: Apache-2.0
* Copyright Red Hat Inc. and Hibernate Authors
*/
package org.hibernate.community.dialect.sequence;

import org.hibernate.MappingException;
import org.hibernate.dialect.sequence.PostgreSQLSequenceSupport;
import org.hibernate.dialect.sequence.SequenceSupport;

public class SpannerPostgreSQLSequenceSupport extends PostgreSQLSequenceSupport {

public static final SequenceSupport INSTANCE = new SpannerPostgreSQLSequenceSupport();

@Override
public String getCreateSequenceString(String sequenceName, int initialValue, int incrementSize) throws MappingException {
if ( incrementSize == 0 ) {
throw new MappingException( "Unable to create the sequence [" + sequenceName + "]: the increment size must not be 0" );
}
return getCreateSequenceString( sequenceName )
+ startingValue( initialValue, incrementSize )
+ " start counter with " + initialValue;
}

@Override
public String getRestartSequenceString(String sequenceName, long startWith) {
return "alter sequence " + sequenceName + " restart counter with " + startWith;
}
}
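
For reference, a sketch of the strings these overrides produce, assuming the single-argument getCreateSequenceString and the startingValue default inherited via PostgreSQLSequenceSupport (which contributes nothing for a positive increment); the sequence names are arbitrary.

import org.hibernate.community.dialect.sequence.SpannerPostgreSQLSequenceSupport;
import org.hibernate.dialect.sequence.SequenceSupport;

public class SpannerSequenceDdlSketch {
	public static void main(String[] args) {
		SequenceSupport support = SpannerPostgreSQLSequenceSupport.INSTANCE;
		// Expected, assuming the inherited defaults:
		// "create sequence hibernate_sequence start counter with 1"
		System.out.println( support.getCreateSequenceString( "hibernate_sequence", 1, 1 ) );
		// "alter sequence hibernate_sequence restart counter with 50"
		System.out.println( support.getRestartSequenceString( "hibernate_sequence", 50 ) );
	}
}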
@@ -231,7 +231,7 @@ public Dialect createDialect(DialectResolutionInfo info) {
}
@Override
public boolean productNameMatches(String databaseName) {
- return databaseName.startsWith( "Google Cloud Spanner" );
+ return databaseName.equals( "Google Cloud Spanner" );
}
@Override
public String getDriverClassName(String jdbcUrl) {