diff --git a/PR_DESCRIPTION.md b/PR_DESCRIPTION.md new file mode 100644 index 0000000..0ef6703 --- /dev/null +++ b/PR_DESCRIPTION.md @@ -0,0 +1,184 @@ +# PR: Add CubeSchema - Generate Ecto schemas for querying Cube cubes + +## Summary + +This PR adds `PowerOfThree.CubeSchema`, a new macro that generates Ecto schemas for querying Cube cubes via the PostgreSQL wire protocol. This completes the bidirectional flow between Ecto and Cube. + +## Motivation + +Power of Three originally provided one direction: +- **Ecto Schema → Cube Config**: Generate Cube YAML configurations from existing Ecto schemas + +This PR adds the reverse direction: +- **Cube Config → Ecto Schema**: Generate Ecto schemas that can query existing Cube cubes + +This enables Elixir developers to query Cube using familiar Ecto patterns without learning a new query API. + +## Dependency + +This feature requires [cube-js/cube#10308](https://github.com/cube-js/cube/pull/10308) which fixes Postgrex/Ecto type bootstrap in Cube SQL API. + +## New Module: `PowerOfThree.CubeSchema` + +### Two Ways to Define Schemas + +**1. Explicit definition with DSL:** +```elixir +defmodule MyCubes.Orders do + use PowerOfThree.CubeSchema + + cube_schema :orders_no_preagg do + dimension :brand_code, :string + dimension :market_code, :string + dimension :updated_at, :utc_datetime + + measure :count, :integer + measure :total_amount_sum, :float + end +end +``` + +**2. 
Auto-generation from YAML:** +```elixir +defmodule MyCubes.Customers do + use PowerOfThree.CubeSchema + + # Reads from model/cubes/of_customers.yaml at compile time + cube_schema :of_customers +end +``` + +### Usage with Ecto.Query + +```elixir +import Ecto.Query + +# Simple query +Cubes.Repo.all(MyCubes.Orders) + +# Filtering +query = from o in MyCubes.Orders, + where: o.brand_code == "Heineken", + limit: 10 +Cubes.Repo.all(query) + +# Aggregation +query = from o in MyCubes.Orders, + group_by: o.brand_code, + select: {o.brand_code, sum(o.total_amount_sum)}, + order_by: [desc: 2], + limit: 10 +Cubes.Repo.all(query) +# => [{"Delirium Tremens", 35058016.0}, {"Sierra Nevada", 35043373.0}, ...] +``` + +## Type Mapping + +| Cube Type | Ecto Type | Notes | +|-----------|-----------|-------| +| `string` | `:string` | | +| `number` | `:float` | Cube uses floats for most numerics | +| `time` | `:utc_datetime` | | +| `boolean` | `:boolean` | | +| `count` measure | `:integer` | | +| `count_distinct` | `:integer` | | +| `sum`/`avg`/`min`/`max` | `:float` | | + +## Supported Ecto Operations + +| Feature | Status | Notes | +|---------|--------|-------| +| `Repo.all/one` | ✅ | Full struct or custom select | +| `where:` with literals | ✅ | `where: o.brand == "X"` | +| `where:` with params | ✅ | `where: o.brand == ^var` (strings) | +| `where:` with AND/OR | ✅ | Multiple conditions | +| `where:` with != | ✅ | Exclusion filtering | +| `limit:` / `offset:` | ✅ | Pagination supported | +| `order_by:` asc/desc | ✅ | By dimension or measure | +| `group_by:` single | ✅ | Single dimension | +| `group_by:` multi | ✅ | Multiple dimensions | +| `sum()`, `count()` | ✅ | Aggregation functions | +| `select:` tuple | ✅ | `{o.brand, sum(o.total)}` | +| `select:` map | ✅ | `%{brand: o.brand, total: sum(o.total)}` | +| `select:` list | ✅ | `[o.brand, sum(o.total)]` | +| Composable queries | ✅ | Pipe-style building | + +## Known Limitations + +### Query Syntax Constraints + +| Pattern | Issue | 
Workaround | +|---------|-------|------------| +| `where: x in ^list` | Parameterized IN arrays not supported | Use OR conditions: `where: x == "a" or x == "b"` | +| `where: x not in ^list` | Same as above | Use AND with !=: `where: x != "a" and x != "b"` | +| `fragment(...)` | SQL fragments not supported | Compute in Elixir post-query | +| `having: count() > ^param` | HAVING with params limited | Filter results in Elixir | + +### Measure Aggregation Rules + +- **Measures in GROUP BY context must be aggregated**: Use `sum(o.count)` not just `o.count` +- **count_distinct measures**: Cannot use `SUM()` on them - use only with count-compatible aggregations +- Parameterized float values may fail (use literal values or string params) +- Scientific notation casts (`1.0e3::float`) are not supported by Cube SQL + +### Example Patterns + +```elixir +# ❌ Won't work - IN with parameter +from(o in Orders, where: o.brand_code in ^brands) + +# ✅ Works - OR conditions +from(o in Orders, where: o.brand_code == "Heineken" or o.brand_code == "Corona Extra") + +# ❌ Won't work - raw measure in GROUP BY select +from(o in Orders, group_by: o.brand_code, select: {o.brand_code, o.count}) + +# ✅ Works - aggregated measure +from(o in Orders, group_by: o.brand_code, select: {o.brand_code, sum(o.count)}) +``` + +## The Complete Vision + +``` +Ecto Schema ──PowerOfThree──> Cube YAML Config + │ + ▼ + Cube Runtime + │ +Ecto Schema <──CubeSchema─── Cube YAML Config +``` + +Nothing is duplicated. Nothing is reinterpreted. Intellectual economy applied to analytics architecture. 
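
### Composable Query Example

The "Composable queries" row in the support table above refers to ordinary pipe-style `Ecto.Query` building. A sketch using the cube and field names from this PR (the specific filter values are illustrative):

```elixir
import Ecto.Query

# Start from a filtered base query; each stage returns a new query struct,
# so the base can be reused and refined without mutation.
base = from(o in MyCubes.Orders, where: o.market_code == "AU")

top_brands =
  base
  |> group_by([o], o.brand_code)
  |> select([o], {o.brand_code, sum(o.total_amount_sum)})
  |> order_by([o], desc: sum(o.total_amount_sum))
  |> limit(5)

Cubes.Repo.all(top_brands)
```

Because the pipeline only builds query structs, nothing hits Cube until `Repo.all/1` runs, so intermediate stages are cheap to share between queries.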
+ +## Files Changed + +- `lib/power_of_three/cube_schema.ex` (new) - The CubeSchema macro module +- `test/cube_schema_live_test.exs` (new) - Live integration tests +- `test/cube_schema_extended_live_test.exs` (new) - Extended live tests +- `mix.exs` - Added `postgrex` dependency for live tests + +## Testing + +**45 live integration tests** against Cube SQL API on port 9432: + +| Test Category | Count | Coverage | +|---------------|-------|----------| +| Basic queries | 5 | `Repo.all`, `Repo.one`, limit, offset | +| WHERE filtering | 8 | String literals, params, AND/OR conditions, NOT | +| ORDER BY | 3 | Ascending, descending, by measure | +| GROUP BY aggregation | 6 | Single/multi-dimension, sum, count | +| Composable queries | 2 | Step-by-step building, filter + aggregation | +| Select formats | 4 | Maps, tuples, lists, computed names | +| Edge cases | 3 | Empty results, single result, large limit | +| Real analytics | 6 | Revenue analysis, market penetration, zodiac distribution | +| Multi-stage queries | 2 | Two-stage patterns with Elixir filtering | +| Pagination | 2 | OFFSET, grouped pagination | +| Numeric comparisons | 2 | Range filters, threshold filtering | +| Count aggregations | 2 | Orders per brand, customers per market | + +Run tests with: +```bash +mix test --include live_cube +``` + +Requires Cube SQL API running on localhost:9432. 
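
For reference, the live tests talk to Cube through a plain `Ecto.Repo` on the Postgres adapter; a minimal sketch with the connection settings the tests assume (the `cube`/`cube` credentials and port 9432 are local defaults, adjust for your deployment):

```elixir
defmodule Cubes.Repo do
  use Ecto.Repo,
    otp_app: :power_of_3,
    adapter: Ecto.Adapters.Postgres
end

# Started manually here for illustration; in an application you would add
# Cubes.Repo to your supervision tree and move these settings into config.
{:ok, _pid} =
  Cubes.Repo.start_link(
    hostname: "localhost",
    port: 9432,
    database: "cube",
    username: "cube",
    password: "cube",
    pool_size: 2
  )
```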
diff --git a/lib/power_of_three.ex b/lib/power_of_three.ex index 068eb89..cf26ece 100644 --- a/lib/power_of_three.ex +++ b/lib/power_of_three.ex @@ -534,7 +534,6 @@ defmodule PowerOfThree do {sql_table, legit_opts} = legit_opts |> Keyword.pop(:sql_table) # |> IO.inspect(label: :cube_opts) cube_opts = Enum.into(legit_opts, %{}) - # TODO must match Ecto schema source case Module.get_attribute(__MODULE__, :ecto_fields, []) do [id: {:id, :always}] -> @@ -601,8 +600,12 @@ defmodule PowerOfThree do dimensions ) + # sql_table should be provided explicitly via cube option + # If not provided, it will be nil and should be set by the user + resolved_sql_table = sql_table || "unknown" + a_cube_config = [ - %{name: cube_name, sql_table: sql_table} + %{name: cube_name, sql_table: resolved_sql_table} |> Map.merge(cube_opts) |> Map.merge(%{dimensions: dimensions ++ time_dimensions, measures: measures}) ] diff --git a/lib/power_of_three/cube_schema.ex b/lib/power_of_three/cube_schema.ex new file mode 100644 index 0000000..fc1b46d --- /dev/null +++ b/lib/power_of_three/cube_schema.ex @@ -0,0 +1,278 @@ +defmodule PowerOfThree.CubeSchema do + @moduledoc """ + Generates Ecto.Schema modules for querying Cube cubes via the PostgreSQL wire protocol. + + This is the reverse of `PowerOfThree` - instead of generating Cube configs from Ecto schemas, + this generates Ecto schemas that can query existing Cube cubes. 
+ + ## Usage + + ```elixir + defmodule MyCubes.Orders do + use PowerOfThree.CubeSchema + + # Generate schema from cube YAML (reads at compile time) + cube_schema :orders_no_preagg + + # Or with explicit dimensions and measures + cube_schema :orders_no_preagg do + dimension :brand_code, :string + dimension :market_code, :string + dimension :updated_at, :utc_datetime + dimension :inserted_at, :utc_datetime + + measure :count, :integer + measure :total_amount_sum, :float + measure :tax_amount_sum, :float + end + end + ``` + + The generated schema can then be used with a Cube-connected Ecto.Repo: + + ```elixir + import Ecto.Query + + # Simple query + Cubes.Repo.all(MyCubes.Orders) + + # With filters + query = from o in MyCubes.Orders, + where: o.brand_code == "Heineken", + limit: 10 + Cubes.Repo.all(query) + + # With aggregation + query = from o in MyCubes.Orders, + group_by: o.brand_code, + select: {o.brand_code, sum(o.total_amount_sum)}, + order_by: [desc: 2], + limit: 10 + Cubes.Repo.all(query) + ``` + + ## Type Mapping + + Cube types are mapped to Ecto types as follows: + + | Cube Type | Ecto Type | Notes | + |-----------|-----------|-------| + | `string` | `:string` | | + | `number` | `:float` | Cube uses floats for most numerics | + | `time` | `:utc_datetime` | | + | `boolean` | `:boolean` | | + | `count` measure | `:integer` | | + | `count_distinct` | `:integer` | | + | `sum` measure | `:float` | | + + ## Primary Key + + Since Cube cubes don't have traditional primary keys, the schema uses a synthetic + `:id` field of type `:float` (matching Cube's numeric ID handling). 
+ + ## Limitations + + When using Ecto queries with Cube, be aware of these limitations: + + - Parameterized float values may fail (use literal values) + - Scientific notation casts (`1.0e3::float`) are not supported + - HAVING clauses with table aliases may not work + - Filtering on measures requires GROUP BY + """ + + @doc false + defmacro __using__(_opts) do + quote do + import PowerOfThree.CubeSchema, only: [cube_schema: 1, cube_schema: 2] + Module.register_attribute(__MODULE__, :cube_dimensions, accumulate: true) + Module.register_attribute(__MODULE__, :cube_measures, accumulate: true) + end + end + + @doc """ + Defines an Ecto schema for a Cube cube. + + ## Without block (auto-generate from YAML) + + ```elixir + cube_schema :orders_no_preagg + ``` + + This reads the cube definition from `model/cubes/.yaml` at compile time. + + ## With block (explicit definition) + + ```elixir + cube_schema :orders_no_preagg do + dimension :brand_code, :string + measure :count, :integer + end + ``` + """ + defmacro cube_schema(cube_name) do + quote do + cube_name = unquote(cube_name) + + # Try to read from YAML file + yaml_path = Path.join(["model", "cubes", "#{cube_name}.yaml"]) + + case File.read(yaml_path) do + {:ok, content} -> + case YamlElixir.read_from_string(content) do + {:ok, %{"cubes" => [cube | _]}} -> + # Extract dimensions and measures from YAML + dimensions = Map.get(cube, "dimensions", []) + measures = Map.get(cube, "measures", []) + + # Generate fields + dimension_fields = + for dim <- dimensions do + name = dim["name"] |> to_string() |> String.to_atom() + type = PowerOfThree.CubeSchema.cube_type_to_ecto(dim["type"]) + {name, type} + end + + measure_fields = + for m <- measures do + name = m["name"] |> to_string() |> String.to_atom() + type = PowerOfThree.CubeSchema.measure_type_to_ecto(m["type"]) + {name, type} + end + + # Store for schema generation + Module.put_attribute(__MODULE__, :cube_schema_fields, dimension_fields ++ measure_fields) + 
Module.put_attribute(__MODULE__, :cube_schema_name, cube_name) + + _ -> + raise "Failed to parse cube YAML from #{yaml_path}" + end + + {:error, _} -> + raise "Cube YAML not found at #{yaml_path}. Define fields explicitly with cube_schema/2." + end + + # Generate the Ecto schema + PowerOfThree.CubeSchema.__generate_schema__(__MODULE__) + end + end + + defmacro cube_schema(cube_name, do: block) do + quote do + Module.put_attribute(__MODULE__, :cube_schema_name, unquote(cube_name)) + + # Import dimension and measure macros for the block + import PowerOfThree.CubeSchema, only: [dimension: 2, measure: 2] + + # Process the block to collect dimensions and measures + unquote(block) + + # Generate the Ecto schema + PowerOfThree.CubeSchema.__generate_schema__(__MODULE__) + end + end + + @doc false + defmacro dimension(name, type) do + quote do + Module.put_attribute(__MODULE__, :cube_dimensions, {unquote(name), unquote(type)}) + end + end + + @doc false + defmacro measure(name, type) do + quote do + Module.put_attribute(__MODULE__, :cube_measures, {unquote(name), unquote(type)}) + end + end + + @doc false + def __generate_schema__(module) do + cube_name = Module.get_attribute(module, :cube_schema_name) + + # Get fields from either YAML parsing or explicit definitions + fields = + case Module.get_attribute(module, :cube_schema_fields) do + nil -> + dimensions = Module.get_attribute(module, :cube_dimensions) |> Enum.reverse() + measures = Module.get_attribute(module, :cube_measures) |> Enum.reverse() + dimensions ++ measures + + fields -> + fields + end + + # Check if there's an :id field in dimensions + has_id = Enum.any?(fields, fn {name, _type} -> name == :id end) + + # Filter out :id from fields if it's being used as primary key + # (primary key is defined separately via @primary_key) + fields_for_schema = + if has_id do + Enum.reject(fields, fn {name, _type} -> name == :id end) + else + fields + end + + # Generate the schema AST + field_asts = + for {name, type} <- 
fields_for_schema do + quote do + field(unquote(name), unquote(type)) + end + end + + schema_ast = + if has_id do + quote do + use Ecto.Schema + + @primary_key {:id, :float, autogenerate: false} + + schema unquote(to_string(cube_name)) do + unquote_splicing(field_asts) + end + end + else + quote do + use Ecto.Schema + + @primary_key false + + schema unquote(to_string(cube_name)) do + unquote_splicing(field_asts) + end + end + end + + Module.eval_quoted(module, schema_ast) + end + + @doc """ + Converts Cube dimension type to Ecto type. + """ + def cube_type_to_ecto(type) when is_binary(type) do + cube_type_to_ecto(String.to_atom(type)) + end + + def cube_type_to_ecto(:string), do: :string + def cube_type_to_ecto(:number), do: :float + def cube_type_to_ecto(:time), do: :utc_datetime + def cube_type_to_ecto(:boolean), do: :boolean + def cube_type_to_ecto(_), do: :string + + @doc """ + Converts Cube measure type to Ecto type. + """ + def measure_type_to_ecto(type) when is_binary(type) do + measure_type_to_ecto(String.to_atom(type)) + end + + def measure_type_to_ecto(:count), do: :integer + def measure_type_to_ecto(:count_distinct), do: :integer + def measure_type_to_ecto(:sum), do: :float + def measure_type_to_ecto(:avg), do: :float + def measure_type_to_ecto(:min), do: :float + def measure_type_to_ecto(:max), do: :float + def measure_type_to_ecto(:number), do: :float + def measure_type_to_ecto(_), do: :float +end diff --git a/mix.exs b/mix.exs index e4e9f78..2707597 100644 --- a/mix.exs +++ b/mix.exs @@ -41,6 +41,7 @@ defmodule PowerOfThree.MixProject do [ {:ymlr, "~> 5.0"}, {:ecto_sql, "~> 3.10"}, + {:postgrex, "~> 0.19", only: [:dev, :test]}, {:explorer, "~> 0.11.1"}, {:adbc, github: "borodark/adbc", branch: "cleanup-take-II", override: true, optional: true, only: [:dev, :test]}, {:req, "~> 0.5"}, diff --git a/mix.lock b/mix.lock index 6f3081d..64afffe 100644 --- a/mix.lock +++ b/mix.lock @@ -28,6 +28,7 @@ "nimble_options": {:hex, :nimble_options, "1.1.1", 
"e3a492d54d85fc3fd7c5baf411d9d2852922f66e69476317787a7b2bb000a61b", [:mix], [], "hexpm", "821b2470ca9442c4b6984882fe9bb0389371b8ddec4d45a9504f00a66f650b44"}, "nimble_parsec": {:hex, :nimble_parsec, "1.4.2", "8efba0122db06df95bfaa78f791344a89352ba04baedd3849593bfce4d0dc1c6", [:mix], [], "hexpm", "4b21398942dda052b403bbe1da991ccd03a053668d147d53fb8c4e0efe09c973"}, "nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"}, + "postgrex": {:hex, :postgrex, "0.21.1", "2c5cc830ec11e7a0067dd4d623c049b3ef807e9507a424985b8dcf921224cd88", [:mix], [{:db_connection, "~> 2.1", [hex: :db_connection, repo: "hexpm", optional: false]}, {:decimal, "~> 1.5 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:table, "~> 0.1.0", [hex: :table, repo: "hexpm", optional: true]}], "hexpm", "27d8d21c103c3cc68851b533ff99eef353e6a0ff98dc444ea751de43eb48bdac"}, "req": {:hex, :req, "0.5.16", "99ba6a36b014458e52a8b9a0543bfa752cb0344b2a9d756651db1281d4ba4450", [:mix], [{:brotli, "~> 0.3.1", [hex: :brotli, repo: "hexpm", optional: true]}, {:ezstd, "~> 1.0", [hex: :ezstd, repo: "hexpm", optional: true]}, {:finch, "~> 0.17", [hex: :finch, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:mime, "~> 2.0.6 or ~> 2.1", [hex: :mime, repo: "hexpm", optional: false]}, {:nimble_csv, "~> 1.0", [hex: :nimble_csv, repo: "hexpm", optional: true]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: true]}], "hexpm", "974a7a27982b9b791df84e8f6687d21483795882a7840e8309abdbe08bb06f09"}, "rustler_precompiled": {:hex, :rustler_precompiled, "0.8.4", "700a878312acfac79fb6c572bb8b57f5aae05fe1cf70d34b5974850bbf2c05bf", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: false]}, {:rustler, "~> 0.23", [hex: :rustler, 
repo: "hexpm", optional: true]}], "hexpm", "3b33d99b540b15f142ba47944f7a163a25069f6d608783c321029bc1ffb09514"}, "table": {:hex, :table, "0.1.2", "87ad1125f5b70c5dea0307aa633194083eb5182ec537efc94e96af08937e14a8", [:mix], [], "hexpm", "7e99bc7efef806315c7e65640724bf165c3061cdc5d854060f74468367065029"}, diff --git a/test/cube_schema_extended_live_test.exs b/test/cube_schema_extended_live_test.exs new file mode 100644 index 0000000..c53dd28 --- /dev/null +++ b/test/cube_schema_extended_live_test.exs @@ -0,0 +1,576 @@ +defmodule CubeSchemaExtendedLiveTest do + @moduledoc """ + Extended live integration tests for CubeSchema with Cube SQL API on port 9432. + + These tests cover additional query patterns and edge cases. + + Run with: mix test --include live_cube + """ + + use ExUnit.Case, async: false + + @moduletag :live_cube + + # Define a test Repo for connecting to Cube + defmodule CubeRepo do + use Ecto.Repo, + otp_app: :power_of_3, + adapter: Ecto.Adapters.Postgres + end + + # CubeSchema for orders_no_preagg cube + defmodule Orders do + use PowerOfThree.CubeSchema + + cube_schema :orders_no_preagg do + dimension :id, :float + dimension :brand_code, :string + dimension :market_code, :string + dimension :updated_at, :utc_datetime + dimension :inserted_at, :utc_datetime + + measure :count, :integer + measure :total_amount_sum, :float + measure :tax_amount_sum, :float + measure :subtotal_amount_sum, :float + measure :customer_id_distinct, :integer + end + end + + # CubeSchema for of_customers cube + defmodule Customers do + use PowerOfThree.CubeSchema + + cube_schema :of_customers do + dimension :email_per_brand_per_market, :string + dimension :given_name, :string + dimension :zodiac, :string + dimension :star_sector, :float + dimension :bm_code, :string + dimension :brand, :string + dimension :market, :string + dimension :updated, :utc_datetime + dimension :inserted_at, :utc_datetime + + measure :count, :integer + measure :emails_distinct, :integer + measure :aquarii, 
:integer + end + end + + setup_all do + repo_config = [ + hostname: "localhost", + port: 9432, + database: "cube", + username: "cube", + password: "cube", + pool_size: 2 + ] + + {:ok, _pid} = CubeRepo.start_link(repo_config) + + :ok + end + + describe "IN clause filtering" do + test "filters orders by multiple brand codes using OR" do + import Ecto.Query + + # Cube doesn't support IN with parameterized arrays, use OR instead + query = + from(o in Orders, + where: o.brand_code == "Heineken" or o.brand_code == "Corona Extra" or o.brand_code == "Budweiser", + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.brand_code in ["Heineken", "Corona Extra", "Budweiser"] end) + end + + test "filters customers by multiple markets using OR" do + import Ecto.Query + + # Cube doesn't support IN with parameterized arrays, use OR instead + query = + from(c in Customers, + where: c.market == "AU" or c.market == "US" or c.market == "UK", + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.market in ["AU", "US", "UK"] end) + end + end + + describe "multiple WHERE conditions" do + test "filters with AND conditions" do + import Ecto.Query + + query = + from(o in Orders, + where: o.brand_code == "Heineken" and o.market_code == "AU", + limit: 5 + ) + + results = CubeRepo.all(query) + + assert Enum.all?(results, fn r -> + r.brand_code == "Heineken" and r.market_code == "AU" + end) + end + + test "filters customers by brand and zodiac" do + import Ecto.Query + + query = + from(c in Customers, + where: c.brand == "Guinness" and c.zodiac == "Leo", + limit: 5 + ) + + results = CubeRepo.all(query) + + assert Enum.all?(results, fn r -> + r.brand == "Guinness" and r.zodiac == "Leo" + end) + end + end + + describe "OFFSET and pagination" do + test "paginates orders with offset" do + import Ecto.Query + + # First page + page1_query = + from(o in Orders, + order_by: 
[asc: o.id], + limit: 5, + offset: 0 + ) + + page1 = CubeRepo.all(page1_query) + + # Second page + page2_query = + from(o in Orders, + order_by: [asc: o.id], + limit: 5, + offset: 5 + ) + + page2 = CubeRepo.all(page2_query) + + # Pages should be different + assert length(page1) == 5 + assert length(page2) == 5 + + page1_ids = Enum.map(page1, & &1.id) + page2_ids = Enum.map(page2, & &1.id) + + assert MapSet.disjoint?(MapSet.new(page1_ids), MapSet.new(page2_ids)) + end + + test "paginates grouped results" do + import Ecto.Query + + query = + from(o in Orders, + group_by: o.brand_code, + select: {o.brand_code, count(o.id)}, + order_by: [desc: 2], + limit: 3, + offset: 2 + ) + + results = CubeRepo.all(query) + + assert length(results) == 3 + end + end + + describe "NOT conditions" do + test "excludes specific brand" do + import Ecto.Query + + query = + from(o in Orders, + where: o.brand_code != "Heineken", + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.brand_code != "Heineken" end) + end + + test "excludes multiple brands using AND not equals" do + import Ecto.Query + + # Cube doesn't support NOT IN with parameterized arrays, use AND with != instead + query = + from(o in Orders, + where: o.brand_code != "Heineken" and o.brand_code != "Corona Extra", + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.brand_code not in ["Heineken", "Corona Extra"] end) + end + end + + describe "numeric comparisons on measures" do + test "filters grouped results by sum threshold" do + import Ecto.Query + + # Get all brands with their sums first, then filter in Elixir + # since Cube HAVING support is limited + query = + from(o in Orders, + group_by: o.brand_code, + select: {o.brand_code, sum(o.total_amount_sum)}, + order_by: [desc: 2], + limit: 20 + ) + + results = CubeRepo.all(query) + # Filter for high revenue brands + high_revenue = 
Enum.filter(results, fn {_brand, total} -> total > 1_000_000.0 end) + + assert length(high_revenue) > 0 + assert Enum.all?(high_revenue, fn {_brand, total} -> total > 1_000_000.0 end) + end + + test "filters customers by star sector range" do + import Ecto.Query + + # Star sector is 0-11, widen range to ensure we get results + query = + from(c in Customers, + where: c.star_sector >= 0.0 and c.star_sector <= 11.0, + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> + r.star_sector >= 0.0 and r.star_sector <= 11.0 + end) + end + end + + describe "count and distinct aggregations" do + test "counts orders per brand" do + import Ecto.Query + + # Use sum of count measure for aggregation in GROUP BY queries + query = + from(o in Orders, + group_by: o.brand_code, + select: {o.brand_code, sum(o.count)}, + order_by: [desc: 2], + limit: 5 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn {brand, count} -> + is_binary(brand) and is_integer(count) and count > 0 + end) + end + + test "counts customers per market" do + import Ecto.Query + + # Use count measure with sum for GROUP BY aggregation + query = + from(c in Customers, + group_by: c.market, + select: %{market: c.market, customer_count: sum(c.count)}, + order_by: [desc: sum(c.count)], + limit: 5 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.customer_count > 0 end) + end + end + + describe "complex multi-level grouping" do + test "groups by brand, market, and zodiac" do + import Ecto.Query + + query = + from(c in Customers, + group_by: [c.brand, c.market, c.zodiac], + select: %{ + brand: c.brand, + market: c.market, + zodiac: c.zodiac, + customer_count: sum(c.count) + }, + order_by: [desc: sum(c.count)], + limit: 20 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> + is_binary(r.brand) and + 
is_binary(r.market) and + is_binary(r.zodiac) and + is_integer(r.customer_count) + end) + end + + test "calculates revenue breakdown by brand and market" do + import Ecto.Query + + query = + from(o in Orders, + group_by: [o.brand_code, o.market_code], + select: %{ + brand: o.brand_code, + market: o.market_code, + revenue: sum(o.total_amount_sum), + tax: sum(o.tax_amount_sum), + order_count: sum(o.count) + }, + order_by: [desc: sum(o.total_amount_sum)], + limit: 15 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + # Verify structure and data types + first = hd(results) + assert is_binary(first.brand) + assert is_binary(first.market) + assert is_float(first.revenue) + assert is_float(first.tax) + assert is_integer(first.order_count) + end + end + + describe "subquery-style patterns" do + test "performs two-stage query with filtering" do + import Ecto.Query + + # First query: get all brand revenues + brand_query = + from(o in Orders, + group_by: o.brand_code, + select: {o.brand_code, sum(o.total_amount_sum)}, + order_by: [desc: 2] + ) + + all_results = CubeRepo.all(brand_query) + + # Verify we got results + assert length(all_results) > 0 + + # Second stage: use results to do further analysis + # Get top 3 brands + top_brands = all_results |> Enum.take(3) |> Enum.map(fn {brand, _} -> brand end) + + # Verify the two-stage pattern works + assert length(top_brands) == 3 + + # Query orders for these top brands + top_brands_details = + from(o in Orders, + where: o.brand_code == ^hd(top_brands), + limit: 5 + ) + + brand_orders = CubeRepo.all(top_brands_details) + assert length(brand_orders) > 0 + assert Enum.all?(brand_orders, fn o -> o.brand_code == hd(top_brands) end) + end + end + + describe "field aliasing and transformations" do + test "selects with computed field names" do + import Ecto.Query + + # Use sum() on sum-compatible measures only + query = + from(o in Orders, + group_by: o.brand_code, + select: %{ + beer_brand: o.brand_code, + 
total_sales: sum(o.total_amount_sum), + order_count: sum(o.count) + }, + order_by: [desc: sum(o.total_amount_sum)], + limit: 5 + ) + + results = CubeRepo.all(query) + + assert length(results) == 5 + first = hd(results) + assert Map.has_key?(first, :beer_brand) + assert Map.has_key?(first, :total_sales) + assert Map.has_key?(first, :order_count) + end + + test "returns list format results" do + import Ecto.Query + + # Use sum(c.count) instead of count() for Cube compatibility + query = + from(c in Customers, + group_by: c.zodiac, + select: [c.zodiac, sum(c.count)], + order_by: [desc: 2], + limit: 12 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn [zodiac, count] -> + is_binary(zodiac) and is_integer(count) + end) + end + end + + describe "edge cases and boundary conditions" do + test "handles empty result set gracefully" do + import Ecto.Query + + query = + from(o in Orders, + where: o.brand_code == "NonExistentBrand12345", + limit: 10 + ) + + results = CubeRepo.all(query) + + assert results == [] + end + + test "handles single result" do + import Ecto.Query + + query = + from(o in Orders, + limit: 1 + ) + + results = CubeRepo.all(query) + + assert length(results) == 1 + end + + test "handles large limit" do + import Ecto.Query + + query = + from(o in Orders, + group_by: [o.brand_code, o.market_code], + select: {o.brand_code, o.market_code, count(o.id)}, + limit: 1000 + ) + + results = CubeRepo.all(query) + + # Should return whatever is available up to limit + assert length(results) > 0 + assert length(results) <= 1000 + end + end + + describe "real-world analytics scenarios" do + test "calculates customer lifetime value metrics" do + import Ecto.Query + + # Cube doesn't support SQL fragments, use separate calculations + query = + from(o in Orders, + group_by: o.brand_code, + select: %{ + brand: o.brand_code, + total_revenue: sum(o.total_amount_sum), + order_count: count(o.id) + }, + order_by: [desc: 
sum(o.total_amount_sum)], + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + first = hd(results) + assert first.total_revenue > 0 + assert first.order_count > 0 + # Calculate avg_order_value in Elixir + avg_order_value = first.total_revenue / first.order_count + assert avg_order_value > 0 + end + + test "analyzes market penetration" do + import Ecto.Query + + # Get market/brand combinations with counts + query = + from(c in Customers, + group_by: [c.market, c.brand], + select: %{ + market: c.market, + brand: c.brand, + customer_count: sum(c.count) + }, + order_by: [asc: c.market, desc: sum(c.count)], + limit: 100 + ) + + all_results = CubeRepo.all(query) + + # Should have results + assert length(all_results) > 0 + # Verify structure + first = hd(all_results) + assert is_binary(first.market) + assert is_binary(first.brand) + assert is_integer(first.customer_count) + end + + test "zodiac distribution analysis" do + import Ecto.Query + + query = + from(c in Customers, + group_by: [c.zodiac, c.star_sector], + select: %{ + sign: c.zodiac, + sector: c.star_sector, + total: count() + }, + order_by: [asc: c.star_sector], + limit: 15 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + # Verify zodiac signs are present + signs = Enum.map(results, & &1.sign) |> Enum.uniq() + assert length(signs) > 1 + end + end +end diff --git a/test/cube_schema_live_test.exs b/test/cube_schema_live_test.exs new file mode 100644 index 0000000..616a53b --- /dev/null +++ b/test/cube_schema_live_test.exs @@ -0,0 +1,446 @@ +defmodule CubeSchemaLiveTest do + @moduledoc """ + Live integration tests for CubeSchema with Cube SQL API on port 9432. 
+ + These tests require: + - Cube SQL API running on localhost:9432 + - The orders_no_preagg cube configured + + Run with: mix test --include live_cube + """ + + use ExUnit.Case, async: false + + @moduletag :live_cube + + # Define a test Repo for connecting to Cube + defmodule CubeRepo do + use Ecto.Repo, + otp_app: :power_of_3, + adapter: Ecto.Adapters.Postgres + end + + # CubeSchema for orders_no_preagg cube + defmodule Orders do + use PowerOfThree.CubeSchema + + cube_schema :orders_no_preagg do + dimension :id, :float + dimension :brand_code, :string + dimension :market_code, :string + dimension :updated_at, :utc_datetime + dimension :inserted_at, :utc_datetime + + measure :count, :integer + measure :total_amount_sum, :float + measure :tax_amount_sum, :float + measure :subtotal_amount_sum, :float + measure :customer_id_distinct, :integer + end + end + + # CubeSchema for of_customers cube (no id field) + defmodule Customers do + use PowerOfThree.CubeSchema + + cube_schema :of_customers do + dimension :email_per_brand_per_market, :string + dimension :given_name, :string + dimension :zodiac, :string + dimension :star_sector, :float + dimension :bm_code, :string + dimension :brand, :string + dimension :market, :string + dimension :updated, :utc_datetime + dimension :inserted_at, :utc_datetime + + measure :count, :integer + measure :emails_distinct, :integer + measure :aquarii, :integer + end + end + + setup_all do + # Start the Repo with Cube SQL connection + repo_config = [ + hostname: "localhost", + port: 9432, + database: "cube", + username: "cube", + password: "cube", + pool_size: 2 + ] + + {:ok, _pid} = CubeRepo.start_link(repo_config) + + :ok + end + + describe "basic queries with Repo.all/2" do + test "fetches all orders with limit" do + import Ecto.Query + + query = from(o in Orders, limit: 5) + results = CubeRepo.all(query) + + assert length(results) == 5 + assert Enum.all?(results, fn r -> is_struct(r, Orders) end) + end + + test "fetches orders with 
specific fields" do + import Ecto.Query + + query = + from(o in Orders, + select: {o.brand_code, o.total_amount_sum}, + limit: 3 + ) + + results = CubeRepo.all(query) + + assert length(results) == 3 + assert Enum.all?(results, fn {brand, total} -> + is_binary(brand) and is_float(total) + end) + end + + test "fetches customers without primary key" do + import Ecto.Query + + query = from(c in Customers, limit: 3) + results = CubeRepo.all(query) + + assert length(results) == 3 + assert Enum.all?(results, fn r -> is_struct(r, Customers) end) + end + end + + describe "Repo.one/2 queries" do + test "fetches single order by id" do + import Ecto.Query + + query = from(o in Orders, where: o.id == 1.0) + result = CubeRepo.one(query) + + assert is_struct(result, Orders) + assert result.id == 1.0 + end + + test "returns nil for non-existent id" do + import Ecto.Query + + query = from(o in Orders, where: o.id == -999.0) + result = CubeRepo.one(query) + + assert is_nil(result) + end + end + + describe "WHERE clause filtering" do + test "filters by string dimension with literal" do + import Ecto.Query + + query = + from(o in Orders, + where: o.brand_code == "Heineken", + limit: 5 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.brand_code == "Heineken" end) + end + + test "filters by string dimension with parameter" do + import Ecto.Query + + brand = "Corona Extra" + + query = + from(o in Orders, + where: o.brand_code == ^brand, + limit: 5 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.brand_code == "Corona Extra" end) + end + + test "filters customers by brand" do + import Ecto.Query + + query = + from(c in Customers, + where: c.brand == "Guinness", + limit: 3 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn r -> r.brand == "Guinness" end) + end + end + + describe "ORDER BY queries" do + test "orders by dimension 
ascending" do + import Ecto.Query + + query = + from(o in Orders, + order_by: [asc: o.brand_code], + limit: 10 + ) + + results = CubeRepo.all(query) + brands = Enum.map(results, & &1.brand_code) + + assert brands == Enum.sort(brands) + end + + test "orders by dimension descending" do + import Ecto.Query + + query = + from(o in Orders, + order_by: [desc: o.brand_code], + limit: 10 + ) + + results = CubeRepo.all(query) + brands = Enum.map(results, & &1.brand_code) + + assert brands == Enum.sort(brands, :desc) + end + + test "orders by measure descending" do + import Ecto.Query + + query = + from(o in Orders, + order_by: [desc: o.total_amount_sum], + limit: 5 + ) + + results = CubeRepo.all(query) + totals = Enum.map(results, & &1.total_amount_sum) + + assert totals == Enum.sort(totals, :desc) + end + end + + describe "GROUP BY with aggregation" do + test "groups by brand with count" do + import Ecto.Query + + query = + from(o in Orders, + group_by: o.brand_code, + select: {o.brand_code, count(o.id)}, + order_by: [desc: 2], + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn {brand, count} -> + is_binary(brand) and is_integer(count) and count > 0 + end) + end + + test "groups by brand with sum aggregation" do + import Ecto.Query + + query = + from(o in Orders, + group_by: o.brand_code, + select: {o.brand_code, sum(o.total_amount_sum)}, + order_by: [desc: 2], + limit: 5 + ) + + results = CubeRepo.all(query) + + assert length(results) == 5 + # Check descending order + sums = Enum.map(results, fn {_brand, sum} -> sum end) + assert sums == Enum.sort(sums, :desc) + end + + test "groups by multiple dimensions" do + import Ecto.Query + + query = + from(o in Orders, + group_by: [o.brand_code, o.market_code], + select: {o.brand_code, o.market_code, count(o.id)}, + order_by: [desc: 3], + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + assert Enum.all?(results, fn {brand, market, count} 
-> + is_binary(brand) and is_binary(market) and is_integer(count) + end) + end + + test "groups customers by zodiac sign" do + import Ecto.Query + + query = + from(c in Customers, + group_by: c.zodiac, + select: {c.zodiac, count()}, + order_by: [desc: 2], + limit: 12 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + # Should have zodiac signs + zodiacs = Enum.map(results, fn {zodiac, _count} -> zodiac end) + assert Enum.any?(zodiacs, &(&1 in ["Aquarius", "Pisces", "Aries", "Leo", "Virgo"])) + end + end + + describe "composable queries" do + test "builds query step by step" do + import Ecto.Query + + base = from(o in Orders) + filtered = where(base, [o], o.brand_code == "Heineken") + ordered = order_by(filtered, [o], desc: o.total_amount_sum) + limited = limit(ordered, 5) + + results = CubeRepo.all(limited) + + assert length(results) == 5 + assert Enum.all?(results, fn r -> r.brand_code == "Heineken" end) + end + + test "combines filter with aggregation" do + import Ecto.Query + + query = + from(o in Orders, + where: o.brand_code == "Budweiser", + group_by: o.market_code, + select: {o.market_code, sum(o.total_amount_sum)}, + order_by: [desc: 2], + limit: 5 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + end + end + + describe "select with maps and tuples" do + test "selects into map" do + import Ecto.Query + + query = + from(o in Orders, + select: %{brand: o.brand_code, total: o.total_amount_sum}, + limit: 3 + ) + + results = CubeRepo.all(query) + + assert length(results) == 3 + assert Enum.all?(results, fn r -> + is_map(r) and Map.has_key?(r, :brand) and Map.has_key?(r, :total) + end) + end + + test "selects into tuple" do + import Ecto.Query + + query = + from(o in Orders, + select: {o.brand_code, o.market_code, o.count}, + limit: 3 + ) + + results = CubeRepo.all(query) + + assert length(results) == 3 + assert Enum.all?(results, fn {brand, market, count} -> + is_binary(brand) and is_binary(market) and is_integer(count) + 
end) + end + end + + describe "real analytics scenarios" do + test "top brands by revenue" do + import Ecto.Query + + query = + from(o in Orders, + group_by: o.brand_code, + select: %{ + brand: o.brand_code, + total_revenue: sum(o.total_amount_sum), + order_count: count(o.id) + }, + order_by: [desc: sum(o.total_amount_sum)], + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) == 10 + # Check top brand has significant revenue + top_brand = hd(results) + assert top_brand.total_revenue > 1_000_000 + end + + test "market performance analysis" do + import Ecto.Query + + query = + from(o in Orders, + group_by: o.market_code, + select: {o.market_code, count(o.id), sum(o.total_amount_sum)}, + order_by: [desc: 2], + limit: 10 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + # Verify structure + {market, count, total} = hd(results) + assert is_binary(market) + assert is_integer(count) and count > 0 + assert is_float(total) and total > 0 + end + + test "customer distribution by zodiac" do + import Ecto.Query + + query = + from(c in Customers, + group_by: c.zodiac, + select: %{sign: c.zodiac, customers: count()}, + order_by: [desc: count()], + limit: 13 + ) + + results = CubeRepo.all(query) + + assert length(results) > 0 + # Should have multiple zodiac signs + signs = Enum.map(results, & &1.sign) + assert length(signs) > 1 + end + end +end diff --git a/test/cube_schema_test.exs b/test/cube_schema_test.exs new file mode 100644 index 0000000..85f0557 --- /dev/null +++ b/test/cube_schema_test.exs @@ -0,0 +1,272 @@ +defmodule CubeSchemaTest do + use ExUnit.Case, async: true + + alias PowerOfThree.CubeSchema + + describe "cube_schema/2 with explicit definitions" do + defmodule OrdersSchema do + @moduledoc false + use PowerOfThree.CubeSchema + + cube_schema :orders_test do + dimension :brand_code, :string + dimension :market_code, :string + dimension :updated_at, :utc_datetime + dimension :inserted_at, :utc_datetime + + measure :count, 
:integer + measure :total_amount_sum, :float + measure :tax_amount_sum, :float + end + end + + test "generates schema with correct table name" do + assert OrdersSchema.__schema__(:source) == "orders_test" + end + + test "generates schema with all dimension fields" do + fields = OrdersSchema.__schema__(:fields) + + assert :brand_code in fields + assert :market_code in fields + assert :updated_at in fields + assert :inserted_at in fields + end + + test "generates schema with all measure fields" do + fields = OrdersSchema.__schema__(:fields) + + assert :count in fields + assert :total_amount_sum in fields + assert :tax_amount_sum in fields + end + + test "maps dimension types correctly" do + assert OrdersSchema.__schema__(:type, :brand_code) == :string + assert OrdersSchema.__schema__(:type, :market_code) == :string + assert OrdersSchema.__schema__(:type, :updated_at) == :utc_datetime + assert OrdersSchema.__schema__(:type, :inserted_at) == :utc_datetime + end + + test "maps measure types correctly" do + assert OrdersSchema.__schema__(:type, :count) == :integer + assert OrdersSchema.__schema__(:type, :total_amount_sum) == :float + assert OrdersSchema.__schema__(:type, :tax_amount_sum) == :float + end + + test "schema without id field has no primary key" do + assert OrdersSchema.__schema__(:primary_key) == [] + end + end + + describe "cube_schema/2 with id dimension" do + defmodule OrdersWithIdSchema do + @moduledoc false + use PowerOfThree.CubeSchema + + cube_schema :orders_with_id do + dimension :id, :float + dimension :brand_code, :string + + measure :count, :integer + end + end + + test "schema with id field uses it as primary key" do + assert OrdersWithIdSchema.__schema__(:primary_key) == [:id] + end + + test "id field has correct type" do + assert OrdersWithIdSchema.__schema__(:type, :id) == :float + end + end + + describe "cube_schema/2 with various measure types" do + defmodule MeasuresSchema do + @moduledoc false + use PowerOfThree.CubeSchema + + cube_schema 
:measures_test do + dimension :category, :string + + measure :total_count, :integer + measure :distinct_count, :integer + measure :sum_amount, :float + measure :avg_amount, :float + measure :min_value, :float + measure :max_value, :float + end + end + + test "all measure types are mapped correctly" do + assert MeasuresSchema.__schema__(:type, :total_count) == :integer + assert MeasuresSchema.__schema__(:type, :distinct_count) == :integer + assert MeasuresSchema.__schema__(:type, :sum_amount) == :float + assert MeasuresSchema.__schema__(:type, :avg_amount) == :float + assert MeasuresSchema.__schema__(:type, :min_value) == :float + assert MeasuresSchema.__schema__(:type, :max_value) == :float + end + end + + describe "cube_schema/2 with various dimension types" do + defmodule DimensionsSchema do + @moduledoc false + use PowerOfThree.CubeSchema + + cube_schema :dimensions_test do + dimension :name, :string + dimension :amount, :float + dimension :created_at, :utc_datetime + dimension :is_active, :boolean + end + end + + test "all dimension types are mapped correctly" do + assert DimensionsSchema.__schema__(:type, :name) == :string + assert DimensionsSchema.__schema__(:type, :amount) == :float + assert DimensionsSchema.__schema__(:type, :created_at) == :utc_datetime + assert DimensionsSchema.__schema__(:type, :is_active) == :boolean + end + end + + describe "type conversion functions" do + test "cube_type_to_ecto/1 converts string types" do + assert CubeSchema.cube_type_to_ecto(:string) == :string + assert CubeSchema.cube_type_to_ecto("string") == :string + end + + test "cube_type_to_ecto/1 converts number types" do + assert CubeSchema.cube_type_to_ecto(:number) == :float + assert CubeSchema.cube_type_to_ecto("number") == :float + end + + test "cube_type_to_ecto/1 converts time types" do + assert CubeSchema.cube_type_to_ecto(:time) == :utc_datetime + assert CubeSchema.cube_type_to_ecto("time") == :utc_datetime + end + + test "cube_type_to_ecto/1 converts boolean types" 
do + assert CubeSchema.cube_type_to_ecto(:boolean) == :boolean + assert CubeSchema.cube_type_to_ecto("boolean") == :boolean + end + + test "cube_type_to_ecto/1 defaults unknown types to string" do + assert CubeSchema.cube_type_to_ecto(:unknown) == :string + assert CubeSchema.cube_type_to_ecto("custom") == :string + end + + test "measure_type_to_ecto/1 converts count types" do + assert CubeSchema.measure_type_to_ecto(:count) == :integer + assert CubeSchema.measure_type_to_ecto("count") == :integer + end + + test "measure_type_to_ecto/1 converts count_distinct types" do + assert CubeSchema.measure_type_to_ecto(:count_distinct) == :integer + assert CubeSchema.measure_type_to_ecto("count_distinct") == :integer + end + + test "measure_type_to_ecto/1 converts aggregation types to float" do + assert CubeSchema.measure_type_to_ecto(:sum) == :float + assert CubeSchema.measure_type_to_ecto(:avg) == :float + assert CubeSchema.measure_type_to_ecto(:min) == :float + assert CubeSchema.measure_type_to_ecto(:max) == :float + end + + test "measure_type_to_ecto/1 converts number type to float" do + assert CubeSchema.measure_type_to_ecto(:number) == :float + assert CubeSchema.measure_type_to_ecto("number") == :float + end + + test "measure_type_to_ecto/1 defaults unknown types to float" do + assert CubeSchema.measure_type_to_ecto(:unknown) == :float + assert CubeSchema.measure_type_to_ecto("custom") == :float + end + end + + describe "schema struct creation" do + defmodule StructTestSchema do + @moduledoc false + use PowerOfThree.CubeSchema + + cube_schema :struct_test do + dimension :name, :string + dimension :value, :float + + measure :total, :integer + end + end + + test "can create struct with default values" do + struct = %StructTestSchema{} + + assert struct.name == nil + assert struct.value == nil + assert struct.total == nil + end + + test "can create struct with values" do + struct = %StructTestSchema{name: "test", value: 42.0, total: 100} + + assert struct.name == "test" + 
assert struct.value == 42.0 + assert struct.total == 100 + end + + test "struct has __meta__ field for Ecto" do + struct = %StructTestSchema{} + + assert Map.has_key?(struct, :__meta__) + end + end + + describe "complex schema with many fields" do + defmodule ComplexSchema do + @moduledoc false + use PowerOfThree.CubeSchema + + cube_schema :complex_cube do + # Dimensions + dimension :id, :float + dimension :brand_code, :string + dimension :market_code, :string + dimension :customer_email, :string + dimension :product_category, :string + dimension :region, :string + dimension :created_at, :utc_datetime + dimension :updated_at, :utc_datetime + dimension :is_premium, :boolean + + # Measures + measure :order_count, :integer + measure :customer_count, :integer + measure :total_revenue, :float + measure :average_order_value, :float + measure :min_order_value, :float + measure :max_order_value, :float + end + end + + test "all fields are present" do + fields = ComplexSchema.__schema__(:fields) + + # 9 dimensions + 6 measures = 15 fields (the :id primary key is included in :fields) + assert length(fields) == 15 + end + + test "has id as primary key" do + assert ComplexSchema.__schema__(:primary_key) == [:id] + end + + test "dimension types are correct" do + assert ComplexSchema.__schema__(:type, :brand_code) == :string + assert ComplexSchema.__schema__(:type, :is_premium) == :boolean + assert ComplexSchema.__schema__(:type, :created_at) == :utc_datetime + end + + test "measure types are correct" do + assert ComplexSchema.__schema__(:type, :order_count) == :integer + assert ComplexSchema.__schema__(:type, :total_revenue) == :float + assert ComplexSchema.__schema__(:type, :average_order_value) == :float + end + end +end