diff --git a/docs/getting_started.md b/docs/getting_started.md index f1cf05d..50ac453 100644 --- a/docs/getting_started.md +++ b/docs/getting_started.md @@ -46,19 +46,16 @@ We will use Supabase as our database (with vector search, pgvector), authenticat -6. From there, go to the SQL Editor tab () and paste the [schema.sql](/supabase/schema.sql) from this repo, and execute. This will enable all the relevant extensions (pgvector) and create the two tables: +6. By now, you should have 4 things: `email` & `password` for your Supabase user, and the `Supabase URL` and `API Anon Key`. - +7. If so, go to your terminal, and cd to the supabase folder: `cd ./supabase` -7. By now, you should have 4 things: `email` & `password` for your supabase user, and the `Supabase URL` and `API Anon Key`. - -8. If so, go to your terminal, and cd to the supabase folder: `cd ./supabase` - -9. Install Supabase and set up the CLI. You should follow thier [guide here](https://supabase.com/../guides/cli/getting-started?platform=macos#installing-the-supabase-cli), but in short: +8. Install Supabase and set up the CLI. You should follow their [guide here](https://supabase.com/../guides/cli/getting-started?platform=macos#installing-the-supabase-cli), but in short: - run `brew install supabase/tap/supabase` to install the CLI (or [check other options](https://supabase.com/../guides/cli/getting-started)) - Install [Docker Desktop](https://www.docker.com/products/docker-desktop/) on your computer (we won't use it, we just need the Docker daemon to run in the background for deploying Supabase functions) -10. Now when we have the CLI, we need to login with oour Supabase account, running `supabase login` - this should pop up a browser window, which should prompt you through the auth -11. 
And link our Supabase CLI to a specific project, our newly created one, by running `supabase link --project-ref ` (you can check what the project id is from the Supabase web UI, or by running `supabase projects list`, and it will be under "reference id") - you can skip (enter) the database password, it's not needed. +9. Now that we have the CLI, we need to log in with our Supabase account by running `supabase login` - this should pop up a browser window, which will prompt you through the auth +10. And link our Supabase CLI to a specific project, our newly created one, by running `supabase link --project-ref ` (you can check what the project id is from the Supabase web UI, or by running `supabase projects list`, and it will be under "reference id") - you can skip (enter) the database password, it's not needed. +11. Now we need to apply the Adeus DB schema on our newly created, empty database. We can do this by simply running `supabase db push`. We can verify it worked by going to the Supabase project -> Tables and seeing that the new tables were created. 12. Now let's deploy our functions! ([see guide for more details](https://supabase.com/../guides/functions/deploy)) `supabase functions deploy --no-verify-jwt` (see [issue re:security](https://github.com/adamcohenhillel/AdDeus/issues/3)) 13. If you're planning to first use OpenAI as your Foundation model provider, then you'd need to also run the following command, to make sure the functions have everything they need to run properly: `supabase secrets set OPENAI_API_KEY=` (Ollama setup guide is coming out soon) 14. If you want access to tons of AI Models, both Open & Closed Source, set up your OpenRouter API Key. Go to [OpenRouter](https://openrouter.ai/) to get your API Key, then run `supabase secrets set OPENROUTER_API_KEY=`. 
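The CLI steps above boil down to a short terminal session. This is a sketch, not a script to paste blindly: `<project-id>` and the two API key values are placeholders for your own project's values, and the Homebrew install line is the macOS option (see the linked guide for other platforms):

```
# From the repo root, enter the supabase folder
cd ./supabase

# Install the Supabase CLI (macOS/Homebrew; other install options in the guide)
brew install supabase/tap/supabase

# Authenticate, then link the CLI to your project
supabase login
supabase link --project-ref <project-id>   # find the id via `supabase projects list`

# Apply the schema, deploy the edge functions, and set provider secrets
supabase db push
supabase functions deploy --no-verify-jwt
supabase secrets set OPENAI_API_KEY=<your-openai-key>
supabase secrets set OPENROUTER_API_KEY=<your-openrouter-key>
```

The `supabase login` step opens a browser window for auth, and `supabase db push` requires the Docker daemon to be running in the background.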
diff --git a/docs/guides/make_db_migration.md b/docs/guides/make_db_migration.md new file mode 100644 index 0000000..6ccb254 --- /dev/null +++ b/docs/guides/make_db_migration.md @@ -0,0 +1,79 @@ +--- +title: Make a DB Migration +description: How to generate a migration file from your Supabase database changes +layout: default +parent: How to Guides +--- + +# Make a DB Migration +{: .no_toc } + +## Table of contents +{: .no_toc .text-delta } + +1. TOC +{:toc} + +--- + +## Intro +If you're working on a new feature that requires changes to the database, then you need to generate a migration file for those changes, so that when your feature is merged into the main branch and starts being used by other people, they will be able to update their databases accordingly. + +This guide provides step-by-step instructions for how to make a migration file from your Supabase database changes. + + +## Create the migration + +Let's say you edited the database in your Supabase project. You added the column "new_data" to one of the tables. + +Now you need to make sure others will have that column as well. + + +1. Go to the supabase folder in your local cloned repo
```bash
cd supabase
```

2. Make sure you're linked to the right Supabase project:
```bash
supabase link --project-ref
```

3. Create a new migration from the remote Supabase instance:
```bash
supabase db pull
```

This will generate a new file in the folder `supabase/migrations` named _remote_commit.sql


Add it to your branch, and push it with the rest of the feature code to your PR.


## Sync your database with all existing migrations

In case there are new migrations for Adeus and you need to sync your own database with the latest migrations, follow these instructions:


1. Go to the supabase folder in your local cloned repo
```bash
cd supabase
```

2. Make sure you're linked to the right Supabase project:
```bash
supabase link --project-ref
```

3. 
Have a dry run: +
+```bash
+supabase db push --dry-run
+```
+This will tell you which migrations need to run, but without executing them. This is a useful way to see upfront what the migration changes are.
+
+4. Push to prod!
+```bash
+supabase db push
+```
+ diff --git a/supabase/schema.sql b/supabase/migrations/20240214211830_remote_schema.sql similarity index 80% rename from supabase/schema.sql rename to supabase/migrations/20240214211830_remote_schema.sql index 864ecb7..338341b 100644 --- a/supabase/schema.sql +++ b/supabase/migrations/20240214211830_remote_schema.sql @@ -26,7 +26,7 @@ CREATE EXTENSION IF NOT EXISTS "uuid-ossp" WITH SCHEMA "extensions"; CREATE EXTENSION IF NOT EXISTS "vector" WITH SCHEMA "extensions"; -CREATE OR REPLACE FUNCTION "public"."match_records_embeddings_similarity"("query_embedding" "extensions"."vector", "match_threshold" double precision, "match_count" integer) RETURNS TABLE("id" integer, "raw_text" "text", "similarity" double precision) +CREATE OR REPLACE FUNCTION "public"."match_records_embeddings_similarity"(query_embedding extensions.vector, match_threshold double precision, match_count integer) RETURNS TABLE(id integer, raw_text text, similarity double precision) LANGUAGE "sql" STABLE AS $$ select @@ -39,7 +39,7 @@ CREATE OR REPLACE FUNCTION "public"."match_records_embeddings_similarity"("query limit match_count; $$; -ALTER FUNCTION "public"."match_records_embeddings_similarity"("query_embedding" "extensions"."vector", "match_threshold" double precision, "match_count" integer) OWNER TO "postgres"; +ALTER FUNCTION "public"."match_records_embeddings_similarity"(query_embedding extensions.vector, match_threshold double precision, match_count integer) OWNER TO "postgres"; SET default_tablespace = ''; @@ -47,8 +47,8 @@ SET default_table_access_method = "heap"; CREATE TABLE IF NOT EXISTS "public"."conversations" ( "id" bigint NOT NULL, - "created_at" timestamp with time zone DEFAULT "now"() NOT NULL, - "context" "json" DEFAULT 
'[]'::"json" + "created_at" timestamp with time zone DEFAULT now() NOT NULL, + "context" json DEFAULT '[]'::json ); ALTER TABLE "public"."conversations" OWNER TO "postgres"; @@ -64,9 +64,9 @@ ALTER TABLE "public"."conversations" ALTER COLUMN "id" ADD GENERATED BY DEFAULT CREATE TABLE IF NOT EXISTS "public"."records" ( "id" bigint NOT NULL, - "created_at" timestamp with time zone DEFAULT "now"() NOT NULL, - "raw_text" "text", - "embeddings" "extensions"."vector" + "created_at" timestamp with time zone DEFAULT now() NOT NULL, + "raw_text" text, + "embeddings" extensions.vector ); ALTER TABLE "public"."records" OWNER TO "postgres"; @@ -86,7 +86,7 @@ ALTER TABLE ONLY "public"."conversations" ALTER TABLE ONLY "public"."records" ADD CONSTRAINT "records_pkey" PRIMARY KEY ("id"); -CREATE POLICY "Enable access for all authed" ON "public"."conversations" TO "authenticated" USING (true); +CREATE POLICY "Enable access for all authed" ON "public"."conversations" TO authenticated USING (true); ALTER TABLE "public"."conversations" ENABLE ROW LEVEL SECURITY; @@ -95,6 +95,10 @@ GRANT USAGE ON SCHEMA "public" TO "anon"; GRANT USAGE ON SCHEMA "public" TO "authenticated"; GRANT USAGE ON SCHEMA "public" TO "service_role"; +GRANT ALL ON FUNCTION "public"."match_records_embeddings_similarity"(query_embedding extensions.vector, match_threshold double precision, match_count integer) TO "anon"; +GRANT ALL ON FUNCTION "public"."match_records_embeddings_similarity"(query_embedding extensions.vector, match_threshold double precision, match_count integer) TO "authenticated"; +GRANT ALL ON FUNCTION "public"."match_records_embeddings_similarity"(query_embedding extensions.vector, match_threshold double precision, match_count integer) TO "service_role"; + GRANT ALL ON TABLE "public"."conversations" TO "anon"; GRANT ALL ON TABLE "public"."conversations" TO "authenticated"; GRANT ALL ON TABLE "public"."conversations" TO "service_role";
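Since the renamed migration keeps the `match_records_embeddings_similarity` function signature and grants it to `anon`, `authenticated`, and `service_role`, the RPC remains callable through Supabase's PostgREST interface. A sketch of such a call, assuming hypothetical `<project-id>` and `<anon-key>` placeholders and a toy 3-dimensional query embedding (a real query vector must match the dimension of the embeddings stored in `records.embeddings`):

```
# Hypothetical RPC call; <project-id> and <anon-key> are placeholders.
curl -s -X POST \
  "https://<project-id>.supabase.co/rest/v1/rpc/match_records_embeddings_similarity" \
  -H "apikey: <anon-key>" \
  -H "Authorization: Bearer <anon-key>" \
  -H "Content-Type: application/json" \
  -d '{"query_embedding": "[0.01,0.02,0.03]", "match_threshold": 0.75, "match_count": 5}'
```

The response is a JSON array of `id` / `raw_text` / `similarity` rows, matching the function's declared return table.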