Commit 093ead5

[DOCS] Fix dense vector list indentation
1 parent 993cf2c commit 093ead5

1 file changed: +5 −5 lines changed

docs/reference/elasticsearch/mapping-reference/dense-vector.md

Lines changed: 5 additions & 5 deletions
@@ -308,8 +308,8 @@ $$$dense-vector-similarity$$$
::::{dropdown} Valid values for similarity
`l2_norm`
: Computes similarity based on the L2 distance (also known as Euclidean distance) between the vectors. The document `_score` is computed as `1 / (1 + l2_norm(query, vector)^2)`.
-
-For `bit` vectors, instead of using `l2_norm`, the `hamming` distance between the vectors is used. The `_score` transformation is `(numBits - hamming(a, b)) / numBits`.
+
+For `bit` vectors, instead of using `l2_norm`, the `hamming` distance between the vectors is used. The `_score` transformation is `(numBits - hamming(a, b)) / numBits`.

`dot_product`
: Computes the dot product of two unit vectors. This option provides an optimized way to perform cosine similarity. The constraints and computed score are defined by `element_type`.
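The two score transformations in this hunk are plain arithmetic and can be sanity-checked outside Elasticsearch. A minimal Python sketch (illustrative only; the function names are mine, not Elasticsearch APIs), assuming float vectors for `l2_norm` and packed `uint8` bytes for `bit` vectors:

```python
import numpy as np

def l2_norm_score(query, vector):
    """_score for `l2_norm` similarity: 1 / (1 + l2_norm(query, vector)^2)."""
    l2 = np.linalg.norm(np.asarray(query, dtype=float) - np.asarray(vector, dtype=float))
    return 1.0 / (1.0 + l2 ** 2)

def hamming_score(a_bytes, b_bytes):
    """_score for `bit` vectors: (numBits - hamming(a, b)) / numBits."""
    a = np.unpackbits(np.asarray(a_bytes, dtype=np.uint8))
    b = np.unpackbits(np.asarray(b_bytes, dtype=np.uint8))
    num_bits = a.size
    hamming = int(np.count_nonzero(a != b))
    return (num_bits - hamming) / num_bits

print(l2_norm_score([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))   # 1 / (1 + 1^2) = 0.5
print(hamming_score([0b10110000], [0b10010000]))          # (8 - 1) / 8 = 0.875
```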
@@ -342,14 +342,14 @@ $$$dense-vector-index-options$$$
: (Required, string) The type of kNN algorithm to use. Can be any of:
* `hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) for scalable approximate kNN search. This supports all `element_type` values.
* `int8_hnsw` - The default index type for some float vectors:
-
+
  * {applies_to}`stack: ga 9.1` Default for float vectors with less than 384 dimensions.
  * {applies_to}`stack: ga 9.0` Default for all float vectors.
-
+
  This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatic scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 4x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
* `int4_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatic scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 8x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
* `bbq_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatic binary quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 32x at the cost of accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
-
+
  {applies_to}`stack: ga 9.1` `bbq_hnsw` is the default index type for float vectors with 384 or more dimensions.
* `flat` - This utilizes a brute-force search algorithm for exact kNN search. This supports all `element_type` values.
* `int8_flat` - This utilizes a brute-force search algorithm in addition to automatic scalar quantization. Only supports `element_type` of `float`.
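The 4x/8x/32x memory figures quoted for the quantized index types follow from the per-dimension storage width (4 bytes for `float32`, 1 byte for `int8`, half a byte for `int4`, 1 bit for BBQ). A rough back-of-the-envelope sketch of that arithmetic in Python (raw vector storage only; HNSW graph structures and any per-vector correction data are ignored, so these are upper-bound savings):

```python
# Approximate in-memory size of one indexed vector per index type.
# Assumption: widths are the nominal per-dimension storage costs implied by
# the 4x/8x/32x reduction factors above; graph/metadata overhead is ignored.
DIMS = 384

bytes_per_dim = {
    "hnsw (float32)": 4.0,   # 4 bytes per dimension (baseline)
    "int8_hnsw": 1.0,        # 1 byte per dimension  -> ~4x smaller
    "int4_hnsw": 0.5,        # 4 bits per dimension  -> ~8x smaller
    "bbq_hnsw": 1.0 / 8.0,   # 1 bit per dimension   -> ~32x smaller
}

baseline = DIMS * bytes_per_dim["hnsw (float32)"]
for index_type, width in bytes_per_dim.items():
    size = DIMS * width
    print(f"{index_type:16s} {size:8.1f} bytes/vector  (~{baseline / size:.0f}x vs float32)")
```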
