
Commit 8246eb6

docs: review the articles
1 parent b8b423d commit 8246eb6

File tree

4 files changed: +130, -165 lines changed

interactivity/ai-powered-insights-overview.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: AI-Powered Insights
+title: AI-Powered Insights Overview
 page_title: AI-Powered Insights in Report Preview
 description: "Learn about the AI insights feature of Reporting, which allows users to execute predefined or custom prompts on the core data of the previewed report, receiving responses from an AI model."
 slug: telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights

interactivity/configuring-ai-powered-insights.md

Lines changed: 7 additions & 46 deletions
@@ -1,5 +1,5 @@
 ---
-title: Customizing the AI-Powered Insights
+title: Customizing AI-Powered Insights
 page_title: How to Customize the AI-Powered Insights
 description: "Learn how to configure the AI-powered insights functionality to handle common and less common use cases."
 slug: telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights
@@ -8,7 +8,9 @@ published: True
 position: 3
 ---

-This article outlines the different ways to customize the AI-powered insights functionality to handle different use cases. They are listed as follows:
+# Customizing AI-Powered Insights
+
+This article explains how to customize the AI-powered insights functionality for different use cases. There are two distinct ways to achieve this:
 - [Configuring the Report Engine](#configuring-the-report-engine) - Declarative configuration through application settings.
 - [Overriding ReportsControllerBase Methods](#overriding-reportscontrollerbase-methods) - Programmatic customization with custom logic.
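As a minimal sketch of the second approach, an override in a controller deriving from `ReportsControllerBase` follows the pattern below. The `GetAIResponse` signature matches the examples later in the article; the class name is illustrative and constructor wiring and usings are omitted:

````C#
public class CustomReportsController : ReportsControllerBase
{
    // Constructor and IReportServiceConfiguration wiring omitted for brevity.

    public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
    {
        // Add custom validation or logging here before delegating to the base implementation.
        return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
    }
}
````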

@@ -45,7 +47,7 @@ This is a base configuration, but it can be further extended to handle specific

 By default, the **AI Prompt** dialog requests explicit consent from users before sending prompts to the AI model. This ensures transparency about data being sent to external AI services and gives users control over their data privacy.

 <img src="images/user-consent.png" style="border: 1px solid lightgray; width: 500px" alt="User Consent for AI Summaries" />

 In enterprise environments where AI usage policies are already established or when working with trusted internal models, you may want to streamline the user experience by disabling this consent requirement. In these cases, you can set the `requireConsent` option to `false`:
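As a rough illustration only (not verbatim from the article), a `requireConsent` setting of this kind would typically live in the report engine's `AIClient` configuration in `appsettings.json`; the surrounding property names and values here are assumptions:

````JSON
{
  "telerikReporting": {
    "AIClient": {
      "friendlyName": "MicrosoftExtensionsAzureOpenAI",
      "model": "gpt-4o-mini",
      "endpoint": "https://your-azure-openai-resource.openai.azure.com/",
      "credential": "YOUR_API_KEY",
      "requireConsent": false
    }
  }
}
````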

@@ -70,9 +72,9 @@ In enterprise environments where AI usage policies are already established

 ### Prompts Configuration

-By default, users can create their own custom prompts to ask any questions about their reports. While this provides maximum flexibility, it can lead to unpredictable token usage costs and potentially inconsistent results. In these cases, you might want to provide the users with predefined prompts that are designed to handle specific tasks.
+By default, users can create their own custom prompts to ask any questions about their reports. While this provides maximum flexibility, it can lead to unpredictable token usage costs and potentially inconsistent results. In these cases, you can provide the users with predefined prompts that are designed to handle specific tasks.

-To restrict users to predefined prompts only, you can set `allowCustomPrompts` to `false` and add the predefined prompts through the `predefinedPrompts` option:
+To restrict users to predefined prompts only, you set `allowCustomPrompts` to `false` and add the predefined prompts through the `predefinedPrompts` option:

 ````JSON
 {
@@ -396,47 +398,6 @@ public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
     return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
 }
 ````
-````Token Usage Validation
-/// <summary>
-/// Examines the approximate tokens count and determines whether the prompt should be sent to the LLM.
-/// </summary>
-/// <returns></returns>
-public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
-{
-    const int MAX_TOKEN_COUNT = 500;
-    args.ConfirmationCallBack = (AIRequestInfo info) =>
-    {
-        if (info.EstimatedTokensCount > MAX_TOKEN_COUNT)
-        {
-            return ConfirmationResult.CancelResult($"The estimated token count exceeds the allowed limit of {MAX_TOKEN_COUNT} tokens.");
-        }
-
-        return ConfirmationResult.ContinueResult();
-    };
-
-    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
-}
-````
-````RAG Optimization Monitoring
-/// <summary>
-/// Examines whether the RAG optimization is applied for the current prompt.
-/// </summary>
-/// <returns></returns>
-public override async Task<HttpResponseMessage> GetAIResponse(string clientID, string instanceID, string documentID, string threadID, AIQueryArgs args)
-{
-    args.ConfirmationCallBack = (AIRequestInfo info) =>
-    {
-        if (info.Origin == AIRequestInfo.AIRequestOrigin.Client)
-        {
-            System.Diagnostics.Trace.TraceInformation($"RAG optimization is {info.RAGOptimization} for this prompt.");
-        }
-
-        return ConfirmationResult.ContinueResult();
-    };
-
-    return await base.GetAIResponse(clientID, instanceID, documentID, threadID, args);
-}
-````

 ## See Also

interactivity/custom-iclient.md

Lines changed: 92 additions & 90 deletions
@@ -18,123 +18,123 @@ To enable a custom AI client implementation, follow these steps:

1. Create a class that implements the `Telerik.Reporting.AI.IClient` interface. The following example demonstrates an Azure OpenAI integration for illustration purposes, though you can use any LLM provider:

````C#
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;
using System.ClientModel;
using Telerik.Reporting.AI;

namespace WebApplication1.AI;

public class CustomAIClient : IClient
{
    public string Model { get; } = "gpt-4o-mini";

    public bool SupportsSystemPrompts => false;

    private readonly IChatClient chatClient;

    public CustomAIClient()
    {
        string endpoint = "https://ai-explorations.openai.azure.com/";
        string credential = "YOUR_API_KEY";
        string model = "gpt-4o-mini";

        chatClient = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(credential))
            .GetChatClient(model)
            .AsIChatClient();
    }

    public async Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken)
    {
        // Convert Telerik.Reporting.AI IMessage to Microsoft.Extensions.AI ChatMessage
        var chatMessages = new List<ChatMessage>();
        foreach (var message in query)
        {
            ChatRole chatRole = message.Role switch
            {
                MessageRole.System => ChatRole.System,
                MessageRole.Assistant => ChatRole.Assistant,
                MessageRole.User => ChatRole.User,
                _ => throw new ArgumentException($"Invalid MessageRole: {message.Role}")
            };

            // Convert text contents from Telerik.Reporting.AI to Microsoft.Extensions.AI
            var textContents = message.Contents
                .OfType<Telerik.Reporting.AI.TextContent>()
                .Select(textContent => new Microsoft.Extensions.AI.TextContent(textContent.Text))
                .Cast<AIContent>()
                .ToList();

            chatMessages.Add(new ChatMessage(chatRole, textContents));
        }

        // Call Azure OpenAI
        var response = await chatClient.GetResponseAsync(chatMessages, new ChatOptions(), cancellationToken);

        // Convert response back to Telerik.Reporting.AI IMessage
        var resultMessages = new List<IMessage>();
        foreach (var responseMessage in response.Messages)
        {
            MessageRole messageRole = responseMessage.Role.Value switch
            {
                "system" => MessageRole.System,
                "assistant" => MessageRole.Assistant,
                "user" => MessageRole.User,
                _ => throw new ArgumentException($"Invalid ChatRole: {responseMessage.Role}")
            };

            // Convert back to Telerik.Reporting.AI content
            var contents = responseMessage.Contents
                .OfType<Microsoft.Extensions.AI.TextContent>()
                .Select(tc => new Telerik.Reporting.AI.TextContent(tc.Text))
                .Cast<IContent>()
                .ToList();

            resultMessages.Add(new Message(messageRole, contents));
        }

        return resultMessages;
    }

    public static IClient GetCustomAIClient()
    {
        return new CustomAIClient();
    }
}
````

1. Register the custom client in your `ReportServiceConfiguration`:

* .NET

````C#
builder.Services.TryAddSingleton<IReportServiceConfiguration>(sp => new ReportServiceConfiguration
{
    HostAppId = "MyApp",
    AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient,
    // ...
});
````

* .NET Framework

````C#
public class CustomResolverReportsController : ReportsControllerBase
{
    static ReportServiceConfiguration configurationInstance;

    static CustomResolverReportsController()
    {
        configurationInstance = new ReportServiceConfiguration
        {
            HostAppId = "MyApp",
            AIClientFactory = WebApplication1.AI.CustomAIClient.GetCustomAIClient,
            // ...
        };
    }
}
````

You can further customize the AI client to enable additional features like RAG optimization, predefined prompts, and user consent settings. For more details, refer to [Configuring the AI-Powered Insights]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/configuring-ai-powered-insights%}).

@@ -170,6 +170,8 @@ This dual-call approach optimizes token usage by first determining RAG suitability

 When RAG is disabled, the method is called only once without the report metadata being pre-filtered.

+> RAG is available only in .NET and .NET Standard.
+
 ## See Also

 * [AI-Powered Insights Overview]({%slug telerikreporting/designing-reports/adding-interactivity-to-reports/ai-powered-insights%})
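
To observe the dual-call behavior described above during development, one option is a logging decorator around the custom client. The following is a minimal sketch that assumes only the `IClient` members shown in the first step; the `TracingAIClient` name and the tracing itself are illustrative, not part of the documented API:

````C#
using Telerik.Reporting.AI;

namespace WebApplication1.AI;

// Hypothetical wrapper that logs every call the report engine makes to the AI client.
public class TracingAIClient : IClient
{
    private readonly IClient inner;

    public TracingAIClient(IClient inner) => this.inner = inner;

    public string Model => inner.Model;

    public bool SupportsSystemPrompts => inner.SupportsSystemPrompts;

    public async Task<IReadOnlyCollection<IMessage>> GetResponseAsync(IReadOnlyCollection<IMessage> query, CancellationToken cancellationToken)
    {
        // With RAG enabled, a single user prompt should surface here twice:
        // once for the metadata pre-filtering pass and once for the final answer.
        System.Diagnostics.Trace.TraceInformation($"AI client called with {query.Count} message(s).");
        return await inner.GetResponseAsync(query, cancellationToken);
    }
}
````

It could then be registered through the same factory shown in the registration step, for example `AIClientFactory = () => new TracingAIClient(CustomAIClient.GetCustomAIClient())`.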
