v6.0.0 Release
This version of the Pinecone Java SDK introduces Bring Your Own Cloud (BYOC), dedicated read capacity, metadata indexing, the ability to create namespaces explicitly, enhanced namespace listing with prefix filtering, fetch and update operations by metadata filters, and support for version 2025-10 of the Pinecone API. You can read more about versioning here.
Features
Bring Your Own Cloud (BYOC)
This release adds support for creating BYOC (Bring Your Own Cloud) indexes. BYOC indexes allow you to deploy Pinecone indexes in your own cloud infrastructure. You must have a BYOC environment set up with Pinecone before creating a BYOC index. The BYOC environment name is provided during BYOC onboarding.
The following methods were added for creating BYOC indexes:
- createByocIndex(String indexName, String metric, int dimension, String environment) - Create a BYOC index with minimal required parameters
- createByocIndex(String indexName, String metric, int dimension, String environment, String deletionProtection, Map<String, String> tags, BackupModelSchema schema) - Create a BYOC index with all options, including deletion protection, tags, and metadata schema
The following example shows how to create BYOC indexes:
import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import org.openapitools.db_control.client.model.BackupModelSchema;
import org.openapitools.db_control.client.model.BackupModelSchemaFieldsValue;
import java.util.HashMap;
import java.util.Map;
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1538;
String byocEnvironment = "your-byoc-environment";
// Create BYOC index with minimal parameters
IndexModel indexModel = pinecone.createByocIndex(indexName, similarityMetric, dimension, byocEnvironment);
// Create BYOC index with metadata schema
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "production");
Map<String, BackupModelSchemaFieldsValue> fields = new HashMap<>();
fields.put("genre", new BackupModelSchemaFieldsValue().filterable(true));
fields.put("year", new BackupModelSchemaFieldsValue().filterable(true));
BackupModelSchema schema = new BackupModelSchema().fields(fields);
IndexModel indexModelWithSchema = pinecone.createByocIndex(
    "example-index-with-schema", similarityMetric, dimension, byocEnvironment, "enabled", tags, schema);
Dedicated Read Capacity
This release adds support for configuring dedicated read capacity nodes for serverless indexes, providing better performance and cost predictability.
The following methods were enhanced to support read capacity:
- createServerlessIndex() - Added overloads that accept a ReadCapacity parameter
- createIndexForModel() - Added overloads that accept a ReadCapacity parameter
- configureServerlessIndex() - Enhanced to accept flattened parameters for configuring read capacity on existing indexes
The following example shows how to create a serverless index with dedicated read capacity:
import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import org.openapitools.db_control.client.model.ReadCapacity;
import org.openapitools.db_control.client.model.ReadCapacityDedicatedSpec;
import org.openapitools.db_control.client.model.ReadCapacityDedicatedConfig;
import org.openapitools.db_control.client.model.ScalingConfigManual;
import java.util.HashMap;
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1538;
String cloud = "aws";
String region = "us-west-2";
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "test");
// Configure dedicated read capacity with manual scaling
ScalingConfigManual manual = new ScalingConfigManual().shards(2).replicas(2);
ReadCapacityDedicatedConfig dedicated = new ReadCapacityDedicatedConfig()
.nodeType("t1")
.scaling("Manual")
.manual(manual);
ReadCapacity readCapacity = new ReadCapacity(
new ReadCapacityDedicatedSpec().mode("Dedicated").dedicated(dedicated));
// Create index with dedicated read capacity
IndexModel indexModel = pinecone.createServerlessIndex(indexName, similarityMetric, dimension,
    cloud, region, "enabled", tags, readCapacity, null);
Configure Read Capacity on Existing Serverless Index
The configureServerlessIndex() method was enhanced to accept flattened parameters for easier configuration of read capacity on existing indexes. You can switch between OnDemand and Dedicated modes, or scale dedicated read nodes.
Note: Read capacity settings can only be updated once per hour per index.
The following method was added:
- configureServerlessIndex(String indexName, String deletionProtection, Map<String, String> tags, ConfigureIndexRequestEmbed embed, String readCapacityMode, String nodeType, Integer shards, Integer replicas) - Configure read capacity on an existing serverless index
The following example shows how to configure read capacity on an existing serverless index:
import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import java.util.HashMap;
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
String indexName = "example-index";
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "test");
// Switch to Dedicated read capacity with manual scaling
// Parameters: indexName, deletionProtection, tags, embed, readCapacityMode, nodeType, shards, replicas
IndexModel indexModel = pinecone.configureServerlessIndex(
indexName, "enabled", tags, null, "Dedicated", "t1", 3, 2);
// Switch to OnDemand read capacity
IndexModel onDemandIndex = pinecone.configureServerlessIndex(
    indexName, "enabled", tags, null, "OnDemand", null, null, null);
Metadata Indexing
This release adds support for configuring metadata schema for serverless indexes, allowing you to limit metadata indexing to specific fields for improved performance.
The following methods were enhanced to support metadata schema:
- createServerlessIndex() - Added overloads that accept a BackupModelSchema parameter
- createIndexForModel() - Added overloads that accept a BackupModelSchema parameter
- createByocIndex() - Added overloads that accept a BackupModelSchema parameter
The following example shows how to create a serverless index with a metadata schema:
import io.pinecone.clients.Pinecone;
import org.openapitools.db_control.client.model.IndexModel;
import org.openapitools.db_control.client.model.BackupModelSchema;
import org.openapitools.db_control.client.model.BackupModelSchemaFieldsValue;
import java.util.HashMap;
import java.util.Map;
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1538;
String cloud = "aws";
String region = "us-west-2";
HashMap<String, String> tags = new HashMap<>();
tags.put("env", "test");
// Configure metadata schema to only index specific fields
Map<String, BackupModelSchemaFieldsValue> fields = new HashMap<>();
fields.put("genre", new BackupModelSchemaFieldsValue().filterable(true));
fields.put("year", new BackupModelSchemaFieldsValue().filterable(true));
BackupModelSchema schema = new BackupModelSchema().fields(fields);
// Create index with metadata schema
IndexModel indexModel = pinecone.createServerlessIndex(indexName, similarityMetric, dimension,
    cloud, region, "enabled", tags, null, schema);
Create Namespaces
This release adds the ability to create namespaces explicitly within an index. Previously, namespaces were created implicitly when vectors were upserted. Now you can create namespaces ahead of time, optionally with a metadata schema to control which metadata fields are indexed for filtering.
The following methods were added for creating namespaces:
- createNamespace(String name) - Create a namespace with the specified name
- createNamespace(String name, MetadataSchema schema) - Create a namespace with a metadata schema
The following example shows how to create namespaces:
import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.MetadataFieldProperties;
import io.pinecone.proto.MetadataSchema;
import io.pinecone.proto.NamespaceDescription;
String indexName = "PINECONE_INDEX_NAME";
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection(indexName);
// create a namespace
NamespaceDescription namespaceDescription = index.createNamespace("some-namespace");
// create a namespace with metadata schema
MetadataSchema schema = MetadataSchema.newBuilder()
.putFields("genre", MetadataFieldProperties.newBuilder().setFilterable(true).build())
.putFields("year", MetadataFieldProperties.newBuilder().setFilterable(true).build())
.build();
NamespaceDescription namespaceWithSchema = index.createNamespace("some-namespace-with-schema", schema);
Async Support
The createNamespace() method is also available for AsyncIndex:
import io.pinecone.clients.AsyncIndex;
AsyncIndex asyncIndex = pinecone.getAsyncIndexConnection(indexName);
// create a namespace with metadata schema
NamespaceDescription asyncNamespaceWithSchema = asyncIndex.createNamespace("some-namespace", schema).get();
Enhanced Namespace Listing
The listNamespaces() method has been enhanced with prefix filtering and total count support. You can now filter namespaces by prefix and get the total count of namespaces matching your filter criteria.
The following method signature was added:
- listNamespaces(String prefix, String paginationToken, int limit) - List namespaces with prefix filtering, pagination, and limit
The totalCount field in the response indicates the total number of namespaces in the index or, when a prefix is provided, the number of namespaces matching that prefix.
The following example shows the enhanced namespace listing functionality with prefix filtering:
import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.ListNamespacesResponse;
String indexName = "PINECONE_INDEX_NAME";
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection(indexName);
// list namespaces with prefix filtering
// Prefix filtering allows you to filter namespaces that start with a specific prefix
ListNamespacesResponse listNamespacesResponse = index.listNamespaces("test-", null, 10);
int totalCount = listNamespacesResponse.getTotalCount(); // Total count of namespaces matching "test-" prefix
// fetch the next page using the prefix, pagination token, and limit
if (listNamespacesResponse.hasPagination() && listNamespacesResponse.getPagination().getNext() != null) {
ListNamespacesResponse nextPage = index.listNamespaces("test-", listNamespacesResponse.getPagination().getNext(), 10);
}
Async Support
The listNamespaces() method is also available for AsyncIndex:
import io.pinecone.clients.AsyncIndex;
AsyncIndex asyncIndex = pinecone.getAsyncIndexConnection(indexName);
// list namespaces with prefix filtering
ListNamespacesResponse asyncListNamespacesResponse = asyncIndex.listNamespaces("test-", null, 10).get();
int asyncTotalCount = asyncListNamespacesResponse.getTotalCount();
Fetch and Update by Metadata
This release adds fetchByMetadata and updateByMetadata methods for both Index and AsyncIndex clients, enabling fetching and updating vectors by metadata filters.
Fetch By Metadata
The fetchByMetadata() method allows you to fetch vectors matching a metadata filter with optional limit and pagination support.
The following method was added:
- fetchByMetadata(String namespace, Struct filter, Integer limit, String paginationToken) - Fetch vectors by metadata filter
The following example shows how to fetch vectors by metadata:
import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.FetchByMetadataResponse;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection("example-index");
// Create a metadata filter
Struct filter = Struct.newBuilder()
.putFields("genre", Value.newBuilder()
.setStructValue(Struct.newBuilder()
.putFields("$eq", Value.newBuilder()
.setStringValue("action")
.build()))
.build())
.build();
// Fetch vectors by metadata with limit
FetchByMetadataResponse response = index.fetchByMetadata("example-namespace", filter, 10, null);
// Access fetched vectors
response.getVectorsMap().forEach((id, vector) -> {
System.out.println("Vector ID: " + id);
});
Update By Metadata
The updateByMetadata() method allows you to update vectors matching a metadata filter with new metadata. It supports dry run mode to preview how many records would be updated.
The following method was added:
- updateByMetadata(Struct filter, Struct metadata, String namespace, boolean dryRun) - Update vectors by metadata filter
The following example shows how to update vectors by metadata:
import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.UpdateResponse;
import com.google.protobuf.Struct;
import com.google.protobuf.Value;
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Index index = pinecone.getIndexConnection("example-index");
// Create a filter to match vectors
Struct filter = Struct.newBuilder()
.putFields("genre", Value.newBuilder()
.setStructValue(Struct.newBuilder()
.putFields("$eq", Value.newBuilder()
.setStringValue("action")
.build()))
.build())
.build();
// Create new metadata to apply
Struct newMetadata = Struct.newBuilder()
.putFields("updated", Value.newBuilder().setStringValue("true").build())
.putFields("year", Value.newBuilder().setStringValue("2024").build())
.build();
// Dry run to check how many records would be updated
UpdateResponse dryRunResponse = index.updateByMetadata(filter, newMetadata, "example-namespace", true);
System.out.println("Records that would be updated: " + dryRunResponse.getMatchedRecords());
// Actually perform the update
UpdateResponse updateResponse = index.updateByMetadata(filter, newMetadata, "example-namespace", false);
System.out.println("Records updated: " + updateResponse.getMatchedRecords());
Async Support
Both methods are available for AsyncIndex:
import io.pinecone.clients.AsyncIndex;
import com.google.common.util.concurrent.ListenableFuture;
AsyncIndex asyncIndex = pinecone.getAsyncIndexConnection("example-index");
// Fetch by metadata asynchronously
ListenableFuture<FetchByMetadataResponse> fetchFuture =
asyncIndex.fetchByMetadata("example-namespace", filter, 10, null);
FetchByMetadataResponse fetchResponse = fetchFuture.get();
// Update by metadata asynchronously
ListenableFuture<UpdateResponse> updateFuture =
asyncIndex.updateByMetadata(filter, newMetadata, "example-namespace", false);
UpdateResponse updateResponse = updateFuture.get();
Note: These operations are supported for serverless indexes.
Breaking Changes
This release includes several breaking changes due to API updates in version 2025-10. The following changes require code updates when migrating from v5.x to v6.0.0:
Pinecone.java
Changed deletionProtection parameter type
The following methods now accept a String instead of the DeletionProtection enum:
- createServerlessIndex() - now accepts String deletionProtection (e.g., "enabled" or "disabled")
- createSparseServelessIndex() - now accepts String deletionProtection
- createIndexForModel() - now accepts String deletionProtection and String cloud (was CloudEnum)
- createPodsIndex() - all overloads now accept String deletionProtection
- configurePodsIndex() - overloads now accept String deletionProtection
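As a migration sketch, the enum argument becomes a plain string. The helper below is hypothetical (not part of the SDK) and simply shows the string values the v6.0.0 methods expect; the commented-out SDK calls are illustrative:

```java
public class DeletionProtectionMigration {
    // Hypothetical helper (not part of the SDK) mapping a boolean flag to the
    // plain strings that v6.0.0 methods accept in place of the old enum.
    static String deletionProtection(boolean enabled) {
        // v5.x:   pinecone.createServerlessIndex(..., DeletionProtection.ENABLED, ...)
        // v6.0.0: pinecone.createServerlessIndex(..., "enabled", ...)
        return enabled ? "enabled" : "disabled";
    }

    public static void main(String[] args) {
        System.out.println(deletionProtection(true));
        System.out.println(deletionProtection(false));
    }
}
```

Because the parameter is now a free-form String, an invalid value is no longer caught at compile time, so centralizing the value in one place like this can reduce typos.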
Changed cloud parameter type
createIndexForModel() now accepts String cloud instead of CreateIndexForModelRequest.CloudEnum
Removed method overload
- Removed one createPodsIndex() overload that took (String, Integer, String, String, String) parameters. The similarity metric now defaults to "cosine" when not specified.
Index.java and AsyncIndex.java
Changed errorMode parameter type
The startImport() method now accepts String errorMode instead of ImportErrorMode.OnErrorEnum, with "abort" or "continue" as the valid string values.
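A minimal migration sketch: the guard below is a hypothetical helper (not part of the SDK) that validates the new string values early, since a typo is no longer a compile-time error; the commented-out startImport() calls are illustrative:

```java
import java.util.Set;

public class ErrorModeMigration {
    // Hypothetical guard (not part of the SDK): errorMode is now a free-form
    // String, so validating before the call surfaces typos immediately.
    static final Set<String> VALID_ERROR_MODES = Set.of("abort", "continue");

    static String requireValidErrorMode(String mode) {
        if (!VALID_ERROR_MODES.contains(mode)) {
            throw new IllegalArgumentException(
                "errorMode must be \"abort\" or \"continue\", got: " + mode);
        }
        // v5.x:   startImport(uri, integrationId, ImportErrorMode.OnErrorEnum.CONTINUE)
        // v6.0.0: startImport(uri, integrationId, "continue")
        return mode;
    }

    public static void main(String[] args) {
        System.out.println(requireValidErrorMode("continue"));
    }
}
```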
Model Classes (If Directly Used)
Split model classes
IndexModel, IndexSpec, and ConfigureIndexRequest have been split into type-specific variants:
- IndexModelPodBased, IndexModelServerless, IndexModelBYOC
- IndexSpecPodBased, IndexSpecServerless, IndexSpecBYOC
- ConfigureIndexRequestPodBased, ConfigureIndexRequestServerless
Note: This only affects code that directly imports or casts these types. If you're only using the return values from high-level methods, this may not require changes.
Additional Enum to String Changes
The following enum types have also been replaced with string values throughout the SDK:
- IndexModel.MetricEnum → String (e.g., "cosine", "euclidean", "dotproduct")
- CollectionModel.StatusEnum → String (e.g., "ready")
- IndexModelStatus.StateEnum → String (e.g., "ready")
- ServerlessSpec.CloudEnum → String (e.g., "aws", "gcp", "azure")
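In practice, this means enum identity comparisons become String.equals comparisons. A small sketch, where the string literal stands in for a value such as indexModel.getMetric():

```java
public class MetricComparison {
    // With v6.0.0 the metric is a plain String, so comparing against an enum
    // constant becomes a String.equals check.
    static boolean isCosine(String metric) {
        // v5.x:   indexModel.getMetric() == IndexModel.MetricEnum.COSINE
        // v6.0.0: "cosine".equals(indexModel.getMetric())
        return "cosine".equals(metric);
    }

    public static void main(String[] args) {
        System.out.println(isCosine("cosine"));
        System.out.println(isCosine("euclidean"));
    }
}
```

Putting the expected constant first ("cosine".equals(metric)) also keeps the check null-safe if the field is absent.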
What's Changed
- Update gradle version, fix flaky integration tests, and add cleanup job by @rohanshah18 in #197
- 2025-10 features by @rohanshah18 in #200
- Prepare to release v6.0.0 by @rohanshah18 in #207
Full Changelog: v5.1.0...v6.0.0