
Commit

Merge branch 'main' into ms/add-vercel-adapter
marcusschiesser authored Dec 10, 2024
2 parents bc3e840 + d99d598 commit 1b60005
Showing 6 changed files with 107 additions and 4 deletions.
5 changes: 5 additions & 0 deletions .changeset/new-cups-dress.md
@@ -0,0 +1,5 @@
---
"@llamaindex/core": patch
---

The compact and refine response synthesizer (retrieved by using `getResponseSynthesizer('compact')`) has been fixed to return the original source nodes that were provided to it in its response. Previously it returned the compacted text chunk documents instead.
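For context, a minimal sketch of the fixed behavior, assuming `llm` is any already-configured LLM instance and that the `@llamaindex/core` subpath imports shown here match your setup:

```ts
import { getResponseSynthesizer } from "@llamaindex/core/response-synthesizers";
import { Document } from "@llamaindex/core/schema";
import { llm } from "./my-llm"; // assumption: any configured LLM instance

const synthesizer = getResponseSynthesizer("compact", { llm });
const sourceNode = { node: new Document({ text: "source text" }), score: 1 };

const response = await synthesizer.synthesize(
  { query: "what does the source say?", nodes: [sourceNode] },
  false,
);

// After this patch the response references the nodes passed in above,
// not the repacked chunks built internally during compaction.
console.log(response.sourceNodes); // [sourceNode]
```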
5 changes: 5 additions & 0 deletions .changeset/weak-cats-smash.md
@@ -0,0 +1,5 @@
---
"llamaindex": patch
---

withLlamaIndex now passes the webpack options through to a user-supplied custom Next.js webpack config. Previously it passed through only the config object.
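As a sketch of what this fixes, a hypothetical user config along these lines now receives Next.js's `options` argument (e.g. `isServer`) in its custom webpack function:

```js
// next.config.mjs — a minimal sketch; the webpack hook below is the user's own.
import withLlamaIndex from "llamaindex/next";

const nextConfig = {
  webpack(config, options) {
    // Before this patch, withLlamaIndex invoked this function with only
    // `config`, so `options` (isServer, dev, ...) arrived as undefined.
    if (options.isServer) {
      // ...server-only adjustments
    }
    return config;
  },
};

export default withLlamaIndex(nextConfig);
```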
2 changes: 1 addition & 1 deletion apps/next/src/content/docs/llamaindex/setup/typescript.mdx
@@ -84,7 +84,7 @@ Imaging you put output file into `/dist/openai.js` but you are importing `llamai
}
```

- In old module resolution, TypeScript will not be able to find the module because it is not follow the file structure, even you run `node index.js` successfully. (on Node.js >=16)
+ In old module resolution, TypeScript will not be able to find the module because it is not following the file structure, even you run `node index.js` successfully. (on Node.js >=16)

See more about [moduleResolution](https://www.typescriptlang.org/docs/handbook/modules/theory.html#module-resolution) or
[TypeScript 5.0 blog](https://devblogs.microsoft.com/typescript/announcing-typescript-5-0/#--moduleresolution-bundler7).
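For reference, the remedy those links describe is switching to the newer resolution mode; a minimal `tsconfig.json` sketch:

```jsonc
{
  "compilerOptions": {
    // "bundler" (TypeScript >= 5.0) follows package.json `exports` maps,
    // so subpath imports resolve even when files don't mirror the paths.
    "moduleResolution": "bundler"
  }
}
```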
31 changes: 29 additions & 2 deletions packages/core/src/response-synthesizers/factory.ts
@@ -77,6 +77,16 @@ class Refine extends BaseSynthesizer {
    }
  }

  async getResponse(
    query: MessageContent,
    nodes: NodeWithScore[],
    stream: true,
  ): Promise<AsyncIterable<EngineResponse>>;
  async getResponse(
    query: MessageContent,
    nodes: NodeWithScore[],
    stream: false,
  ): Promise<EngineResponse>;
  async getResponse(
    query: MessageContent,
    nodes: NodeWithScore[],
@@ -197,6 +207,16 @@ class Refine extends BaseSynthesizer {
 * CompactAndRefine is a slight variation of Refine that first compacts the text chunks into the smallest possible number of chunks.
 */
class CompactAndRefine extends Refine {
  async getResponse(
    query: MessageContent,
    nodes: NodeWithScore[],
    stream: true,
  ): Promise<AsyncIterable<EngineResponse>>;
  async getResponse(
    query: MessageContent,
    nodes: NodeWithScore[],
    stream: false,
  ): Promise<EngineResponse>;
  async getResponse(
    query: MessageContent,
    nodes: NodeWithScore[],
@@ -216,17 +236,24 @@ class CompactAndRefine extends Refine {
    const newTexts = this.promptHelper.repack(maxPrompt, textChunks);
    const newNodes = newTexts.map((text) => new TextNode({ text }));
    if (stream) {
-      return super.getResponse(
+      const streamResponse = await super.getResponse(
        query,
        newNodes.map((node) => ({ node })),
        true,
      );
+      return streamConverter(streamResponse, (chunk) => {
+        chunk.sourceNodes = nodes;
+        return chunk;
+      });
    }
-    return super.getResponse(
+
+    const originalResponse = await super.getResponse(
      query,
      newNodes.map((node) => ({ node })),
      false,
    );
+    originalResponse.sourceNodes = nodes;
+    return originalResponse;
  }
}
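The streaming branch above leans on `streamConverter`, which, as used here, maps a function over every chunk of an async iterable so each emitted `EngineResponse` can carry the original `sourceNodes`. A rough sketch of those semantics (not the library's exact implementation):

```ts
// Sketch: lazily apply `converter` to each value of an async stream.
async function* streamConverter<S, D>(
  stream: AsyncIterable<S>,
  converter: (value: S) => D,
): AsyncIterable<D> {
  for await (const value of stream) {
    yield converter(value);
  }
}
```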

66 changes: 66 additions & 0 deletions (new test file)
@@ -0,0 +1,66 @@
import { describe, expect, test, vi } from "vitest";
import type { LLMMetadata } from "../../llms/dist/index.js";
import { getResponseSynthesizer } from "../../response-synthesizers/dist/index.js";
import { Document } from "../../schema/dist/index.js";

const mockLllm = () => ({
  complete: vi.fn().mockImplementation(({ stream }) => {
    const response = { text: "unimportant" };
    if (!stream) {
      return response;
    }

    function* gen() {
      // yield a few times to make sure each chunk has the sourceNodes
      yield response;
      yield response;
      yield response;
    }

    return gen();
  }),
  chat: vi.fn(),
  metadata: {} as unknown as LLMMetadata,
});

describe("compact and refine response synthesizer", () => {
describe("synthesize", () => {
test("should return original sourceNodes with response when stream = false", async () => {
const synthesizer = getResponseSynthesizer("compact", {
llm: mockLllm(),
});

const sourceNode = { node: new Document({}), score: 1 };

const response = await synthesizer.synthesize(
{
query: "test",
nodes: [sourceNode],
},
false,
);

expect(response.sourceNodes).toEqual([sourceNode]);
});

test("should return original sourceNodes with response when stream = true", async () => {
const synthesizer = getResponseSynthesizer("compact", {
llm: mockLllm(),
});

const sourceNode = { node: new Document({}), score: 1 };

const response = await synthesizer.synthesize(
{
query: "test",
nodes: [sourceNode],
},
true,
);

for await (const chunk of response) {
expect(chunk.sourceNodes).toEqual([sourceNode]);
}
});
});
});
2 changes: 1 addition & 1 deletion packages/llamaindex/src/next.ts
@@ -41,7 +41,7 @@ export default function withLlamaIndex(config: any) {
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  config.webpack = function (webpackConfig: any, options: any) {
    if (userWebpack) {
-      webpackConfig = userWebpack(webpackConfig);
+      webpackConfig = userWebpack(webpackConfig, options);
    }
    webpackConfig.resolve.alias = {
      ...webpackConfig.resolve.alias,
