[C#] bump: dotnet to v1.8.1 #2180
Merged
Conversation
## Linked issues

closes: #2103

## Details

Added support for `o1-preview` and `o1-mini` models.

* Bumped `OpenAI` and `Azure.AI.OpenAI` to `2.1.0-beta.1`.
* Tested o1 support with the light bot sample using monologue augmentation.
* Updated `teamsChefBot-streaming` to use the deployed `Microsoft.Teams.AI` nuget package.
* Fixed a `LayoutSection` bug with incorrect ordering of sections.

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes
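A minimal configuration sketch for the o1 support described above, assuming the `OpenAIModel`/`OpenAIModelOptions` constructors used in the library's existing samples; the API key is a placeholder and option names may differ slightly in your version:

```csharp
using Microsoft.Teams.AI.AI.Models;

// Sketch only: swap the default model in the sample setup for an o1-series model.
OpenAIModel model = new(
    new OpenAIModelOptions("<openai-api-key>", "o1-mini") // placeholder key
    {
        LogRequests = true
    });

// For Azure OpenAI, the AzureOpenAIModelOptions variant (key, deployment name, endpoint)
// is the equivalent entry point; the model then plugs into the ActionPlanner as usual.
```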
(#2122)

## Linked issues

closes: #minor

## Details

Exceptions thrown in `DrainQueue()` are being suppressed because the method runs on a worker thread and the corresponding `Task` is never awaited by the main thread. With the fix, when `this._queueSync` is awaited in `EndStream()`, an exception will be thrown if the corresponding `Task` is faulted (i.e. an exception was thrown from `DrainQueue()`).

#### Change details

* Removed the code that set `this._queueSync` to null when the queue is empty. This means that `this._queueSync` can be either null or assigned a Task at any given time.
* If `this._queueSync` is not null, an additional check, `this._queueSync.isTaskCompleted`, determines whether the `DrainQueue` Task has completed.

If the streaming operation fails, the exception is now thrown:

![image](https://github.com/user-attachments/assets/403b6003-7df4-4684-b433-dd507b91cb4c)

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes
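A self-contained sketch of the pattern above, with hypothetical class and method names (not the library's actual streaming code): the drain task is kept around instead of being nulled out, so awaiting it at the end of the stream rethrows any exception that faulted the worker.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class StreamingQueueSketch
{
    private readonly ConcurrentQueue<string> _queue = new();

    // Null until the first chunk is queued; never reset to null afterwards,
    // so a faulted drain task stays observable.
    private Task? _queueSync;

    public void QueueChunk(string chunk)
    {
        _queue.Enqueue(chunk);

        // Start a background drain if none is running. A faulted task is left in place
        // so its exception is rethrown when EndStream awaits it.
        if (_queueSync == null || _queueSync.IsCompletedSuccessfully)
        {
            _queueSync = Task.Run(DrainQueue);
        }
    }

    public async Task EndStream()
    {
        if (_queueSync != null)
        {
            // Awaiting a faulted Task rethrows its exception, so failures inside
            // DrainQueue surface to the caller here instead of being swallowed.
            await _queueSync;
        }
    }

    private async Task DrainQueue()
    {
        while (_queue.TryDequeue(out string? chunk))
        {
            await SendChunkAsync(chunk); // any exception here faults _queueSync
        }
    }

    // Placeholder for the real "send partial message" call.
    private Task SendChunkAsync(string chunk) => Task.CompletedTask;
}
```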
## Linked issues

closes: #minor

## Details

Deprecated the public preview of the Azure content safety moderator.

#### Change details

* Updated `Azure.AI.ContentSafety` to `v1.0.0` (see the sketch below)
* Added a chat moderation sample

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes
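For reference, a minimal sketch of calling the GA `Azure.AI.ContentSafety` 1.0.0 client directly; the endpoint, key, and input text are placeholders, and the library's moderator performs a similar analysis under the hood:

```csharp
using System;
using Azure;
using Azure.AI.ContentSafety;

// Placeholders: use your own Content Safety resource endpoint and key.
var client = new ContentSafetyClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<content-safety-key>"));

// Analyze a piece of chat text against the built-in harm categories.
Response<AnalyzeTextResult> response = client.AnalyzeText(new AnalyzeTextOptions("some user message"));

foreach (TextCategoriesAnalysis analysis in response.Value.CategoriesAnalysis)
{
    // With the default output type, severity values are 0, 2, 4, or 6.
    Console.WriteLine($"{analysis.Category}: {analysis.Severity}");
}
```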
…ror Propagation, Entities Metadata (#2139)

## Linked issues

closes: #1970

## Details

- Added a temporary 1.5 second buffer to adhere to the backend service's 1 RPS requirement (see the sketch after this message)
- Added support for Feedback Loop
- Added support for the Generated by AI label
- Added reject/catch handling for errors
- Added entities metadata to match GA requirements

**screenshots**:

![image](https://github.com/user-attachments/assets/78eb652e-65c2-4544-9e30-36d656a8a9e3)

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes

---------

Co-authored-by: lilydu <[email protected]>
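A simplified sketch of the 1.5 second / 1 RPS buffer idea referenced above; `UpdateRateLimiterSketch` and `SendThrottledAsync` are hypothetical names, not the library's actual streaming code:

```csharp
using System;
using System.Threading.Tasks;

public class UpdateRateLimiterSketch
{
    // The backend allows roughly 1 request per second; 1.5 s leaves some headroom.
    private static readonly TimeSpan MinInterval = TimeSpan.FromSeconds(1.5);
    private DateTimeOffset _lastSend = DateTimeOffset.MinValue;

    public async Task SendThrottledAsync(Func<Task> sendUpdate)
    {
        TimeSpan sinceLast = DateTimeOffset.UtcNow - _lastSend;
        if (sinceLast < MinInterval)
        {
            // Wait out the remainder of the window before issuing the next update.
            await Task.Delay(MinInterval - sinceLast);
        }

        await sendUpdate();
        _lastSend = DateTimeOffset.UtcNow;
    }
}
```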
## Linked issues

closes: #1970

## Details

This works slightly differently than the previous non-streaming Plan flow. Citations and their respective sensitivity labels are added per text chunk queued, but they are only rendered in the final message (once the full message has been received). Rather than exposing the `SensitivityUsageInfo` object as an override on the `PredictedSayCommand`, the label can now be set directly as `usageInfo` in the `AIEntity` object, along with the AI Generated label and the citations.

**screenshots**:

![image](https://github.com/user-attachments/assets/81c0f3dd-f958-4b4b-8cbf-56bf36f24f78)

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes

---------

Co-authored-by: lilydu <[email protected]>
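An illustrative sketch of the entity shape attached to the final message (AI Generated label, sensitivity label as `usageInfo`, and citations). The property names are assumptions based on the schema.org `Message` wire format that Teams reads; the library's `AIEntity` type builds an equivalent object for you, and all values here are made up:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Illustrative only: names follow the assumed wire format, not the library's C# types.
var aiEntity = new Dictionary<string, object?>
{
    ["type"] = "https://schema.org/Message",
    ["@type"] = "Message",
    ["@context"] = "https://schema.org",
    ["additionalType"] = new[] { "AIGeneratedContent" },   // renders the "Generated by AI" label
    ["usageInfo"] = new Dictionary<string, object?>        // sensitivity label (usageInfo)
    {
        ["@type"] = "CreativeWork",
        ["name"] = "Confidential",
        ["description"] = "Do not share outside the team."
    },
    ["citation"] = new object[]
    {
        new Dictionary<string, object?>
        {
            ["@type"] = "Claim",
            ["position"] = 1,                              // ties to the [1] marker in the text
            ["appearance"] = new Dictionary<string, object?>
            {
                ["@type"] = "DigitalDocument",
                ["name"] = "Expense policy",
                ["abstract"] = "Cited text chunk..."
            }
        }
    }
};

Console.WriteLine(JsonSerializer.Serialize(aiEntity, new JsonSerializerOptions { WriteIndented = true }));
```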
(#2150)

## Linked issues

closes: #minor

## Details

The `OpenAI` SDK automatically maps the `max_tokens` configuration to the `MaxOutputTokenCount` property (i.e. `max_completion_tokens`). However, this does not work with the Azure OpenAI service, which expects `max_tokens` for non-o1 models.

#### Change details

* In `OpenAIModel.cs`, if the model is not in the o1 series, use the `max_tokens` field by default (see the sketch below).

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes
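A simplified illustration of that branching; the real change goes through the `OpenAI` SDK's request options in `OpenAIModel.cs` rather than a raw dictionary, and `IsO1Series`/`BuildTokenLimit` are hypothetical helpers:

```csharp
using System;
using System.Collections.Generic;

static bool IsO1Series(string model) =>
    model.StartsWith("o1", StringComparison.OrdinalIgnoreCase);

static Dictionary<string, object> BuildTokenLimit(string model, int maxTokens)
{
    // o1-series models only accept max_completion_tokens; Azure OpenAI still expects
    // max_tokens for everything else, so default to the legacy field for non-o1 models.
    string field = IsO1Series(model) ? "max_completion_tokens" : "max_tokens";
    return new Dictionary<string, object> { [field] = maxTokens };
}

Console.WriteLine(string.Join(", ", BuildTokenLimit("gpt-4o", 1000)));  // [max_tokens, 1000]
Console.WriteLine(string.Join(", ", BuildTokenLimit("o1-mini", 1000))); // [max_completion_tokens, 1000]
```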
## Linked issues

closes: #minor

## Details

* Bump `Microsoft.Teams.AI` to version 1.8.0
* Update all the samples to point to this version.

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes
## Linked issues

closes: #minor

## Details

For the `entities` object, update the `streamType` enum conversion so it serializes as a string instead of an int.

## Attestation Checklist

- [x] My code follows the style guidelines of this project

- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
  - Local testing
  - E2E testing in Teams
- New and existing unit tests pass locally with my changes
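An illustration of string-versus-int enum serialization; the enum, class, and property names below are hypothetical stand-ins for the stream info entity, and System.Text.Json is used for the demo even though the library's own serializer setup may differ:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public enum StreamType
{
    Informative,
    Streaming,
    Final
}

public class StreamInfoEntitySketch
{
    [JsonPropertyName("streamType")]
    public StreamType StreamType { get; set; }

    [JsonPropertyName("streamSequence")]
    public int StreamSequence { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var entity = new StreamInfoEntitySketch { StreamType = StreamType.Streaming, StreamSequence = 3 };

        // Without a converter the enum serializes as its underlying int: {"streamType":1,...}
        Console.WriteLine(JsonSerializer.Serialize(entity));

        // With a string enum converter it serializes as its name: {"streamType":"streaming",...}
        var options = new JsonSerializerOptions();
        options.Converters.Add(new JsonStringEnumConverter(JsonNamingPolicy.CamelCase));
        Console.WriteLine(JsonSerializer.Serialize(entity, options));
    }
}
```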
rajan-chari approved these changes on Nov 12, 2024
LGTM
## Linked issues

closes: #minor

## Details

Introduce fix:

## Attestation Checklist

- My code follows the style guidelines of this project
- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
- New and existing unit tests pass locally with my changes

## Additional information