
Commit 638d554

chore(docs): move to direct tcp for most things (#7549)

* chore(docs): move to direct tcp for most things
* Apply suggestions from code review
* chore(docs): clean up local links

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

1 parent 1d46f3c commit 638d554

9 files changed

Lines changed: 40 additions & 233 deletions


apps/docs/content/docs/accelerate/more/troubleshoot.mdx

Lines changed: 12 additions & 12 deletions
@@ -10,51 +10,51 @@ When working with Accelerate, you may encounter errors often highlighted by spec
 
 ## `P6009` (`ResponseSizeLimitExceeded`)
 
-This error is triggered when the response size from a database query exceeds [the configured query response size limit](/postgres/database/connection-pooling#response-size). We've implemented this restriction to safeguard your application performance, as retrieving data over 5MB can significantly slow down your application due to multiple network layers. Typically, transmitting more than 5MB of data is common when conducting ETL (Extract, Transform, Load) operations. However, for other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, it should generally be avoided. These use cases, while essential, can often be optimized to work within [the configured query response size limit](/postgres/database/connection-pooling#response-size), ensuring smoother performance and a better user experience.
+This error is triggered when the response size from a database query exceeds the configured query response size limit. We've implemented this restriction to safeguard your application performance, as retrieving data over 5MB can significantly slow down your application due to multiple network layers. Typically, transmitting more than 5MB of data is common when conducting ETL (Extract, Transform, Load) operations. However, for other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, it should generally be avoided. These use cases, while essential, can often be optimized to work within the configured query response size limit, ensuring smoother performance and a better user experience.
 
 ### Possible causes for [`P6009`](/orm/reference/error-reference#p6009-responsesizelimitexceeded)
 
 #### Transmitting images/files in response
 
 This error may arise if images or files stored within your table are being fetched, resulting in a large response size. Storing assets directly in the database is generally discouraged because it significantly impacts database performance and scalability. In addition to performance, it makes database backups slow and significantly increases the cost of storing routine backups.
 
-**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size.
+**Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size.
 
 #### Over-fetching of data
 
-In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding [the configured query response size limit](/postgres/database/connection-pooling#response-size). This could happen when the [`where`](/orm/reference/prisma-client-reference#where) clause in the query is incorrect or entirely missing.
+In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding the configured query response size limit. This could happen when the [`where`](/orm/reference/prisma-client-reference#where) clause in the query is incorrect or entirely missing.
 
-**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size.
+**Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size.
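The pagination advice in this hunk boils down to `skip`/`take` arithmetic. A minimal sketch of that arithmetic; the `pageArgs` helper and its default page size are illustrative, not part of the Prisma docs:

```typescript
// Compute Prisma-style pagination arguments for a 1-based page number.
// The result would be spread into a query, e.g.
//   prisma.user.findMany({ ...pageArgs(page), select: { id: true, name: true } })
// so each response stays well under the configured size limit.
function pageArgs(page: number, pageSize = 50): { skip: number; take: number } {
  if (page < 1 || !Number.isInteger(page)) {
    throw new RangeError(`page must be a positive integer, got ${page}`);
  }
  return { skip: (page - 1) * pageSize, take: pageSize };
}
```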
 
 #### Fetching a large volume of data
 
 In many data processing workflows, especially those involving ETL (Extract-Transform-Load) processes or scheduled CRON jobs, there's a need to extract large amounts of data from data sources (like databases, APIs, or file systems) for analysis, reporting, or further processing. If you are running an ETL/CRON workload that fetches a huge chunk of data for analytical processing then you might run into this limit.
 
-**Suggested solution:** Configure the [query response size limit](/postgres/database/connection-pooling#response-size) to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation.
+**Suggested solution:** Configure the query response size limit to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation.
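The batch-splitting strategy this hunk recommends can be sketched generically. The `chunk` helper below and the commented `findMany` usage are hypothetical illustrations, assuming you batch by record ID:

```typescript
// Split a list of record IDs into fixed-size batches so that each
// individual query returns only a slice of the full result set.
function chunk<T>(items: T[], size: number): T[][] {
  if (size < 1 || !Number.isInteger(size)) {
    throw new RangeError(`size must be a positive integer, got ${size}`);
  }
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Hypothetical ETL usage with Prisma Client:
// for (const ids of chunk(allIds, 1000)) {
//   const rows = await prisma.order.findMany({ where: { id: { in: ids } } });
//   processBatch(rows); // each batch stays under the response size limit
// }
```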
 
 ## `P6004` (`QueryTimeout`)
 
-This error occurs when a database query fails to return a response within [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout). The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources.
+This error occurs when a database query fails to return a response within the configured query timeout limit. The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources.
 
-> The time for Accelerate's cross-region networking is excluded from [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) limit.
+> The time for Accelerate's cross-region networking is excluded from the configured query timeout limit.
 
 ### Possible causes for [`P6004`](/orm/reference/error-reference#p6004-querytimeout)
 
 This error could be caused by numerous reasons. Some of the prominent ones are:
 
 #### High traffic and insufficient connections
 
-If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) for a connection, ultimately triggering a timeout error if they do not get serviced within this duration.
+If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than the configured query timeout limit for a connection, ultimately triggering a timeout error if they do not get serviced within this duration.
 
-**Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment ([reference](/postgres/database/connection-pooling#connection-pool-size)). This limit should align with your database's maximum number of connections.
+**Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment. This limit should align with your database's maximum number of connections.
 
 By default, the connection limit is set to 10 unless a different `connection_limit` is specified in your database connection string.
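The `connection_limit` parameter mentioned above is passed in the connection string itself. A sketch of what that looks like; the host, credentials, and database name are placeholders:

```bash
# Raises the pool size from the default of 10 to 20 connections.
# Keep this at or below your database's own max_connections setting.
DATABASE_URL="postgresql://user:password@db.example.com:5432/mydb?connection_limit=20"
```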
 
 #### Long-running queries
 
-Queries may be slow to respond, hitting [the configured query timeout limit](/postgres/database/connection-pooling#query-timeout) even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table.
+Queries may be slow to respond, hitting the configured query timeout limit even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table.
 
-**Suggested solution**: Configure the [query timeout limit](/postgres/database/connection-pooling#query-timeout) to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries.
+**Suggested solution**: Configure the query timeout limit to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries.
 
 #### Database resource contention
 
@@ -79,7 +79,7 @@ Additionally, direct connections could have a significant impact on your databas
 If your application's runtime environment supports Prisma ORM natively and you're considering this strategy to circumvent P6009 and P6004 errors, you might create two `PrismaClient` instances:
 
 1. An instance using the Accelerate connection string (prefixed with `prisma://`) for general operations.
-2. Another instance with the direct database connection string (e.g., prefixed with `postgres://`, `mysql://`, etc.) for specific operations anticipated to exceed [the configured query limit timeout](/postgres/database/connection-pooling#query-timeout) or to result in responses larger than [the configured query response size limit](/postgres/database/connection-pooling#response-size).
+2. Another instance with the direct database connection string (e.g., prefixed with `postgres://`, `mysql://`, etc.) for specific operations anticipated to exceed the configured query timeout limit or to result in responses larger than the configured query response size limit.
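The two-client setup described in this hunk implies a per-operation routing decision. A minimal sketch of that decision logic only; the function name, the byte estimate, and the 5MB default (taken from the response size discussion earlier on this page) are illustrative, and the real limits are enforced server-side, not in application code:

```typescript
// Pick which PrismaClient instance an operation should use, based on the
// caller's estimate of the response size. "accelerate" maps to the client
// built from the prisma:// URL, "direct" to the postgres://-style URL.
type Route = "accelerate" | "direct";

function routeQuery(
  expectedResponseBytes: number,
  responseSizeLimitBytes = 5 * 1024 * 1024, // illustrative 5MB default
): Route {
  return expectedResponseBytes > responseSizeLimitBytes ? "direct" : "accelerate";
}
```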
 
 ```ts
 export const prisma = new PrismaClient({

apps/docs/content/docs/console/index.mdx

Lines changed: 4 additions & 4 deletions
@@ -10,8 +10,8 @@ metaDescription: Learn about the Console to integrate the Prisma Data Platform p
 
 The [Console](https://console.prisma.io/login) enables you to manage and configure your projects that use Prisma products, and helps you integrate them into your application:
 
-- [Accelerate](/accelerate): Speeds up your queries with a global database cache with scalable connection pooling.
-- [Optimize](/optimize): Provides you recommendations that can help you make your database queries faster.
+- [Optimize](/optimize): Provides you with recommendations that can help you make your database queries faster.
 - [Prisma Postgres](/postgres): A managed PostgreSQL database that is optimized for Prisma ORM.
 
 ## Getting started
@@ -32,7 +32,7 @@ The Console is organized around four main concepts:
 - **[User account](/console/concepts#user-account)**: Your personal account to manage workspaces and projects
 - **[Workspaces](/console/concepts#workspace)**: Team-level container where billing is managed
 - **[Projects](/console/concepts#project)**: Application-level container within a workspace
-- **[Resources](/console/concepts#resources)**: Actual services or databases within a project (databases for Prisma Postgres, environments for Accelerate)
+- **[Resources](/console/concepts#resources)**: Actual services or databases within a project (databases for Prisma Postgres)
 
 Read more about [Console concepts](/console/concepts).
 
@@ -44,6 +44,6 @@ Learn more about the [Console CLI commands](/cli/console).
 
 ## API keys
 
-An API key is required to authenticate requests from your Prisma Client to products such as Prisma Accelerate and Prisma Optimize. API keys are generated and managed at the resource level.
+An API key is required to authenticate Prisma Client requests to Prisma Data Platform resources. API keys are generated and managed at the resource level.
 
 Learn more about [API keys](/console/features/api-keys).

apps/docs/content/docs/orm/more/comparisons/prisma-and-drizzle.mdx

Lines changed: 0 additions & 8 deletions
@@ -285,14 +285,6 @@ const posts = await db.select().from(posts).where(ilike(posts.title, "%Hello Wor
 
 Both Drizzle and Prisma ORM have the ability to log queries and the underlying SQL generated.
 
-## Additional products
-
-Both Drizzle and Prisma offer products alongside an ORM. Prisma Studio was released to allow users to interact with their database via a GUI and also allows for limited self-hosting for use within a team. Drizzle Studio was released to accomplish the same tasks.
-
-In addition to Prisma Studio, Prisma offers commercial products via the Prisma Data Platform:
-
-- [Prisma Accelerate](https://www.prisma.io/accelerate?utm_source=docs&utm_medium=orm-docs): A connection pooler and global cache that integrates with Prisma ORM. Users can take advantage of connection pooling immediately and can control caching at an individual query level.
-- [Prisma Optimize](https://www.prisma.io/optimize?utm_source=docs&utm_medium=orm-docs): A query analytics tool that provides deep insights, actionable recommendations, and allows you to interact with Prisma AI for further insights and optimizing your database queries.
 
 These products work hand-in-hand with Prisma ORM to offer comprehensive data tooling, making building data-driven applications easy by following [Data DX](https://www.datadx.io/) principles.

apps/docs/content/docs/orm/prisma-client/client-extensions/extension-examples.mdx

Lines changed: 0 additions & 1 deletion
@@ -12,7 +12,6 @@ The following is a list of extensions we've built at Prisma:
 
 | Extension | Description |
 | :------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------- |
-| [`@prisma/extension-accelerate`](https://www.npmjs.com/package/@prisma/extension-accelerate) | Enables [Accelerate](https://www.prisma.io/accelerate), a global database cache available in 300+ locations with built-in connection pooling |
 | [`@prisma/extension-read-replicas`](https://github.com/prisma/extension-read-replicas) | Adds read replica support to Prisma Client |
 
 ## Extensions made by Prisma's community

apps/docs/content/docs/orm/prisma-client/deployment/edge/deploy-to-cloudflare.mdx

Lines changed: 3 additions & 46 deletions
@@ -39,49 +39,9 @@ This command:
 
 - Connects your CLI to your [Prisma Data Platform](https://console.prisma.io) account. If you're not logged in or don't have an account, your browser will open to guide you through creating a new account or signing into your existing one.
 - Creates a `prisma` directory containing a `schema.prisma` file for your database models.
-- Creates a `.env` file with your `DATABASE_URL` (e.g., for Prisma Postgres it should have something similar to `DATABASE_URL="prisma+postgres://accelerate.prisma-data.net/?api_key=eyJhbGciOiJIUzI..."`).
+- Creates a `.env` file with your `DATABASE_URL`.
 
-You'll need to install the Client extension required to use Prisma Postgres:
-
-```npm
-npm i @prisma/extension-accelerate
-```
-
-And extend `PrismaClient` with the extension in your application code:
-
-```typescript
-import { PrismaClient } from "./generated/client";
-import { withAccelerate } from "@prisma/extension-accelerate";
-
-export interface Env {
-  DATABASE_URL: string;
-}
-
-export default {
-  async fetch(request, env, ctx) {
-    const prisma = new PrismaClient({
-      datasourceUrl: env.DATABASE_URL,
-    }).$extends(withAccelerate());
-
-    const users = await prisma.user.findMany();
-    const result = JSON.stringify(users);
-    ctx.waitUntil(prisma.$disconnect());
-    return new Response(result);
-  },
-} satisfies ExportedHandler<Env>;
-```
-
-:::note
-Call `ctx.waitUntil(prisma.$disconnect())` before returning so the Worker releases the database connection when the response is done. Otherwise the Worker may not disconnect in time and can run out of memory.
-:::
-
-Then setup helper scripts to perform migrations and generate `PrismaClient` as [shown in this section](/orm/prisma-client/deployment/edge/deploy-to-cloudflare#development).
-
-:::note
-
-You need to have the `dotenv-cli` package installed as Cloudflare Workers does not support `.env` files. You can do this by running the following command to install the package locally in your project: `npm install -D dotenv-cli`.
-
-:::
 
 ### Using an edge-compatible driver
 
@@ -97,11 +57,8 @@ The edge-compatible drivers for Cloudflare Workers and Pages are:
 
 There's [also work being done](https://github.com/sidorares/node-mysql2/pull/2289) on the `node-mysql2` driver which will enable access to traditional MySQL databases from Cloudflare Workers and Pages in the future as well.
 
-:::note
-
-If your application uses PostgreSQL, we recommend using [Prisma Postgres](/postgres). It is fully supported on edge runtimes and does not require a specialized edge-compatible driver. For other databases, [Prisma Accelerate](/accelerate) extends edge compatibility so you can connect to _any_ database from _any_ edge function provider.
+If your application uses PostgreSQL, we recommend using [Prisma Postgres](/postgres). It is fully supported on edge runtimes and does not require a specialized edge-compatible driver. Review the [Prisma Postgres limitations](/postgres/database/limitations) to understand current constraints.
 
-:::
 
 ### Setting your database connection URL as an environment variable
 
@@ -181,7 +138,7 @@ This command requires you to be authenticated, and will ask you to log in to you
 
 ### Size limits on free accounts
 
-Cloudflare has a [size limit of 3 MB for Workers on the free plan](https://developers.cloudflare.com/workers/platform/limits/). If your application bundle with Prisma ORM exceeds that size, we recommend upgrading to a paid Worker plan or using Prisma Accelerate to deploy your application.
+Cloudflare has a [size limit of 3 MB for Workers on the free plan](https://developers.cloudflare.com/workers/platform/limits/). If your application bundle with Prisma ORM exceeds that size, we recommend upgrading to a paid Worker plan.
 
 ### Deploying a Next.js app to Cloudflare Pages with `@cloudflare/next-on-pages`
