
style(weave): Revert "Revert "style weave: Refactor Playground LLM dropdown (#3774)" (#3862)" #3906

Open
wants to merge 2 commits into master

Conversation

jwlee64
Contributor

@jwlee64 jwlee64 commented Mar 19, 2025

…)" (#3862)"

This reverts commit 0d3cd07.

Description

Un-reverts the previous revert now that the corresponding backend change has landed.

Testing

How was this PR tested?

Summary by CodeRabbit

  • New Features
    • Provider selection now dynamically loads available options with an intuitive loading indicator.
    • Options are clearly grouped, displaying enabled choices and providing helpful tooltips for disabled selections.
    • Provider selection now receives entity and project context, so options reflect the current team's configuration.
    • A default LLM model is now defined for the Playground.

@jwlee64 jwlee64 requested review from a team as code owners March 19, 2025 02:49
@jwlee64 jwlee64 changed the title style(weave): Revert "Revert "style(weave): Refactor Playground LLM dropdown (#3774… style(weave): Revert "Revert "style weave: Refactor Playground LLM dropdown (#3774… Mar 19, 2025
Contributor

coderabbitai bot commented Mar 19, 2025

Walkthrough

This pull request refactors the LLM provider selection components by introducing dynamic provider fetching through a new custom hook and restructuring dropdown logic. The changes update the main dropdown component to retrieve provider statuses based on available API secrets, add new subcomponents for enhanced option handling, and introduce additional props (entity and project) for context propagation. Constants for default LLM models and provider secret requirements are also established, ensuring that the UI adjusts dynamically as provider configurations change.
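As a rough illustration of the consuming side, the dropdown might classify providers like the sketch below; the configuredProvidersLoading name and the status/missingSecrets fields follow the review comments further down, while everything else is assumed.

// Inside LLMDropdown({entity, project, ...}) -- a sketch, not the actual code.
const {result: configuredProviders, loading: configuredProvidersLoading} =
  useConfiguredProviders(entity);

// Providers with all required secrets become selectable options; the rest are
// rendered disabled, with the missing secrets surfaced in a tooltip.
const options = Object.entries(configuredProviders)
  .filter(([, provider]) => provider.status)
  .map(([name]) => ({label: name, value: name}));

const disabledOptions = Object.entries(configuredProviders)
  .filter(([, provider]) => !provider.status)
  .map(([name, provider]) => ({
    label: name,
    value: name,
    missingSecrets: provider.missingSecrets,
  }));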

Changes

File(s) | Change Summary

  • weave-js/src/components/.../PlaygroundChat/LLMDropdown.tsx & weave-js/src/components/.../PlaygroundChat/PlaygroundChatTopBar.tsx: Updated LLMDropdownProps and passed new entity and project props; modified the onChange handler to select the first LLM from the chosen provider; adapted dynamic option filtering.
  • weave-js/src/components/.../PlaygroundChat/LLMDropdownOptions.tsx: Introduced a new custom dropdown component with interfaces (ProviderOption, CustomOptionProps) and subcomponents (DisabledProviderTooltip, SubMenu, SubMenuOption, CustomOption) for searchable, dynamic LLM option rendering and error handling.
  • weave-js/src/components/.../PlaygroundPage/llmMaxTokens.ts: Added a DEFAULT_LLM_MODEL constant and restructured provider configuration by replacing the static LLM_PROVIDERS list with LLM_PROVIDER_SECRETS (mapping each provider to its required API keys) and deriving the provider list from that mapping (see the sketch after this list).
  • weave-js/src/components/.../PlaygroundPage/useConfiguredProviders.ts: Created a new hook that uses useSecrets to determine provider status through utility functions (hasAllSecrets, missingSecrets), returning provider configuration along with a loading state.
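A minimal sketch of what the llmMaxTokens.ts constants described above could look like. DEFAULT_LLM_MODEL's value and the OpenAI/Anthropic key names come from the review comments below; the remaining entries and exact export shapes are assumptions.

export const DEFAULT_LLM_MODEL = 'gpt-4o-mini-2024-07-18';

// Each provider maps to the API secrets it needs before it can be enabled.
export const LLM_PROVIDER_SECRETS: Record<string, string[]> = {
  openai: ['OPENAI_API_KEY'],
  anthropic: ['ANTHROPIC_API_KEY'],
  // ...one entry per supported provider (placeholder)
};

// The provider list is derived from the secret mapping, so adding a provider
// only touches one place.
export const LLM_PROVIDERS = Object.keys(LLM_PROVIDER_SECRETS);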

Sequence Diagram(s)

sequenceDiagram
    participant U as User
    participant T as PlaygroundChatTopBar
    participant D as LLMDropdown
    participant H as useConfiguredProviders
    participant S as useSecrets

    U->>T: Load chat interface
    T->>D: Render dropdown with entity & project
    D->>H: Request configured provider statuses
    H->>S: Fetch API secrets & config data
    S-->>H: Return secrets and loading state
    H-->>D: Provide provider statuses (enabled/disabled)
    D->>U: Display dynamic dropdown options with tooltips and search features

Possibly related PRs

Suggested reviewers

  • jamie-rasmussen
  • tssweeney

Poem

I’m a bunny with code so bright,
Hopping through hooks in pure delight,
Dynamic providers now take the stage,
With secrets handled, I turn the page,
Leaping through changes with a joyful byte! 🐰💻


@circle-job-mirror

circle-job-mirror bot commented Mar 19, 2025

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (6)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdown.tsx (2)

33-44: Building options & disabledOptions
Good separation of logic: you classify providers based on status. This structure is straightforward when adding or removing providers in the future.

You might consider sorting providers alphabetically to help users locate them quickly.
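
For instance, a one-line sort on the option label would do it (a sketch, assuming the options carry a string label):

const sortedOptions = [...options].sort((a, b) => a.label.localeCompare(b.label));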


90-95: Selecting the first LLM after user selects a provider
This approach simplifies user flows. However, consider prompting the user if multiple LLMs exist under a provider.
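
The behavior being discussed is roughly the following (a sketch; the ProviderOption shape comes from LLMDropdownOptions.tsx, the handler name is made up):

const handleProviderChange = (option: ProviderOption) => {
  // Fall back to the provider's first LLM rather than leaving the selection empty.
  const firstLLM = option.llms[0];
  if (firstLLM) {
    onChange(firstLLM.value, firstLLM.max_tokens);
  }
};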

weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdownOptions.tsx (4)

80-133: Add viewport overflow handling for the submenu

The submenu portal implementation is good for preventing z-index issues, but it lacks checks for viewport boundaries which could cause UI problems on smaller screens or when positioned near screen edges.

const SubMenu = ({
  llms,
  onChange,
  position,
  onSelect,
}: {
  llms: Array<{label: string; value: LLMMaxTokensKey; max_tokens: number}>;
  onChange: (value: LLMMaxTokensKey, maxTokens: number) => void;
  position: {top: number; left: number};
  onSelect: () => void;
}) => {
+  // Adjust position to ensure menu stays within viewport
+  const adjustedPosition = React.useMemo(() => {
+    // Default to the provided position
+    const pos = {...position};
+    
+    // Check if we have access to window
+    if (typeof window !== 'undefined') {
+      const viewportWidth = window.innerWidth;
+      const viewportHeight = window.innerHeight;
+      
+      // Adjust if too close to right edge (300px is the menu width from sx)
+      if (pos.left + 300 > viewportWidth) {
+        pos.left = Math.max(0, viewportWidth - 310); // Accounting for some padding
+      }
+      
+      // Adjust if too close to bottom edge (assume max height of 400px from sx)
+      if (pos.top + 400 > viewportHeight) {
+        pos.top = Math.max(0, viewportHeight - 410); // Accounting for some padding
+      }
+    }
+    
+    return pos;
+  }, [position]);

  return ReactDOM.createPortal(
    <Box
      sx={{
        position: 'fixed',
-        left: position.left - 4,
-        top: position.top - 6,
+        left: adjustedPosition.left - 4,
+        top: adjustedPosition.top - 6,
        backgroundColor: 'white',
        boxShadow: '0 2px 8px ' + hexToRGB(OBLIVION, 0.15),
        borderRadius: '4px',
        width: '300px',
        maxHeight: '400px',
        overflowY: 'auto',
        border: '1px solid ' + hexToRGB(OBLIVION, 0.1),
        p: '6px',
      }}>

135-211: Optimize the useMemo dependency array

The useMemo dependency array includes the entire props object, which will cause unnecessary re-renderings since object references change on each render. Consider including only the specific props properties that are used within the memoized component.

  const optionContent = React.useMemo(
    () => (
      // Component JSX...
    ),
-    [children, isDisabled, isHovered, llms, onChange, position, props]
+    [
+      children, 
+      isDisabled, 
+      isHovered, 
+      llms, 
+      onChange, 
+      position, 
+      props.selectProps.onInputChange, 
+      props.selectProps.onMenuClose,
+      props.selectProps.inputValue
+    ]
  );

213-295: Add empty state handling for search results

When filtering LLMs based on search input, there's no feedback when no matches are found. Consider adding a "No matches found" message to improve user experience.

    return (
      <Box>
        <Box
          sx={{
            padding: '4px 12px 0',
            color: MOON_800,
            fontWeight: 600,
            cursor: 'default',
            borderRadius: '4px',
            display: 'flex',
            alignItems: 'center',
            wordBreak: 'break-all',
            wordWrap: 'break-word',
            whiteSpace: 'normal',
          }}>
          {props.data.label}
        </Box>
        <Box
          sx={{
            px: '4px',
            wordBreak: 'break-all',
            wordWrap: 'break-word',
            whiteSpace: 'normal',
          }}>
+          {filteredLLMs.length === 0 && (
+            <Box
+              sx={{
+                padding: '8px 12px',
+                color: MOON_800,
+                fontStyle: 'italic',
+              }}>
+              No matches found
+            </Box>
+          )}
          {filteredLLMs.map(llm => (
            <Box
              key={llm.value}
              onClick={() => {
                onChange(llm.value as LLMMaxTokensKey, llm.max_tokens);
                props.selectProps.onInputChange?.('', {
                  action: 'set-value',
                  prevInputValue: props.selectProps.inputValue,
                });
                props.selectProps.onMenuClose?.();
              }}
              sx={{
                padding: '8px 12px',
                cursor: 'pointer',
                borderRadius: '4px',
                '&:hover': {
                  backgroundColor: MOON_100,
                },
              }}>
              {llm.label}
            </Box>
          ))}
        </Box>
      </Box>
    );

221-233: Consider improving search to match against label as well as value

The current implementation only filters LLMs based on their value property. Consider also including the label in the search to make it easier for users to find models by their display name.

    const filteredLLMs = llms.filter(llm =>
-      llm.value.toLowerCase().includes(inputValue.toLowerCase())
+      llm.value.toLowerCase().includes(inputValue.toLowerCase()) || 
+      llm.label.toLowerCase().includes(inputValue.toLowerCase())
    );
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 239037b and b73bc42.

📒 Files selected for processing (5)
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdown.tsx (1 hunks)
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdownOptions.tsx (1 hunks)
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/PlaygroundChatTopBar.tsx (1 hunks)
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/llmMaxTokens.ts (1 hunks)
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/useConfiguredProviders.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{js,jsx,ts,tsx}: Focus on architectural and logical issues rather than style (assuming ESLint is in place).
Flag potential memory leaks and performance bottlenecks.
Check for proper error handling and async/await usage.
Avoid strict enforcement of try/catch blocks - accept Promise chains, early returns, and other clear error handling patterns. These are acceptable as long as they maintain clarity and predictability.
Ensure proper type usage in TypeScript files.
Look for security vulnerabilities in data handling.
Don't comment on formatting if prettier is configured.
Verify proper React hooks usage and component lifecycle.
Check for proper state management patterns.

  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/PlaygroundChatTopBar.tsx
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/useConfiguredProviders.ts
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdown.tsx
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdownOptions.tsx
  • weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/llmMaxTokens.ts
🧬 Code Definitions (2)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/useConfiguredProviders.ts (1)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/llmMaxTokens.ts (2)
  • LLM_PROVIDERS (530-532)
  • LLM_PROVIDER_SECRETS (519-528)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdown.tsx (3)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/llmMaxTokens.ts (3)
  • LLMMaxTokensKey (513-513)
  • LLM_MAX_TOKENS (4-509)
  • LLM_PROVIDER_LABELS (534-546)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/useConfiguredProviders.ts (1)
  • useConfiguredProviders (18-38)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdownOptions.tsx (2)
  • ProviderOption (19-28)
  • CustomOption (213-295)
⏰ Context from checks skipped due to timeout of 90000ms (40)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
  • GitHub Check: Trace nox tests (3, 13, trace)
  • GitHub Check: Trace nox tests (3, 12, trace)
  • GitHub Check: Trace nox tests (3, 11, trace)
  • GitHub Check: Trace nox tests (3, 10, trace)
🔇 Additional comments (24)
weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/PlaygroundChatTopBar.tsx (1)

127-128: Pass entity and project props to LLMDropdown carefully
These props are now passed to LLMDropdown, which enables provider configuration. Ensure any upstream callers validate or default these props to avoid potential undefined values.

weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/llmMaxTokens.ts (3)

511-512: Confirm the newly set default LLM model
Setting the default to 'gpt-4o-mini-2024-07-18' might have downstream implications. Verify this aligns with expected usage and that all references to a default model remain consistent elsewhere.


519-528: Validate completeness of LLM_PROVIDER_SECRETS
Mapping providers to their required keys helps ensure correct authentication. Confirm that all environment variables (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY) are indeed required, and watch for any future expansions or changes in providers.


530-532: Leverage LLM_PROVIDERS for dynamic iteration
Generating the provider list from the secrets object is a clean approach, reducing duplication.

weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/useConfiguredProviders.ts (6)

1-2: Ensure imports are correct
You correctly import useSecrets from the designated hooks path, which is vital for retrieving user secrets.


3-4: Dynamic imports of LLM constants
Import of LLM_PROVIDER_SECRETS and LLM_PROVIDERS sets up a flexible structure for mapping providers to secrets.


5-7: hasAllSecrets function
Straightforward check to ensure each required key is present in the provided secrets array. This logic seems correct and concise.


9-11: missingSecrets function
Spelling out missing secrets helps with debugging. Good job providing a comma-separated list.
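
Plausible shapes for the two helpers, based on the descriptions above (a sketch; the actual signatures in useConfiguredProviders.ts may differ):

const hasAllSecrets = (secrets: string[], requiredKeys: string[]): boolean =>
  requiredKeys.every(key => secrets.includes(key));

const missingSecrets = (secrets: string[], requiredKeys: string[]): string =>
  requiredKeys.filter(key => !secrets.includes(key)).join(', ');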


13-16: ProviderStatus type
Clear definition including a boolean status and a missingSecrets string. This type is practical for UI feedback on provider readiness.


18-38: useConfiguredProviders hook
• Retrieves secrets and constructs a status object for each provider.
• Returning an empty result while loading is a helpful pattern to prevent race conditions.
• Overall logic is straightforward and effective.
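
Putting those points together, the hook might look roughly like this (a sketch; useSecrets' call signature and the exact return shape are assumptions):

export const useConfiguredProviders = (entity: string) => {
  const {secrets, loading} = useSecrets({entityName: entity}); // assumed signature

  // While secrets are loading, return an empty result so callers can show a
  // loading state instead of acting on partial data.
  const result = loading
    ? {}
    : Object.fromEntries(
        LLM_PROVIDERS.map(provider => {
          const required = LLM_PROVIDER_SECRETS[provider];
          return [
            provider,
            {
              status: hasAllSecrets(secrets, required),
              missingSecrets: missingSecrets(secrets, required),
            },
          ];
        })
      );

  return {result, loading};
};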

weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdown.tsx (11)

2-2: Importing useViewerInfo
Using this hook to determine user roles (e.g. admin) is a neat touch for conditional UI in the dropdown.


17-18: New props: entity and project
These fields allow dynamic provider fetching via useConfiguredProviders. Ensure consistent usage and non-empty strings to avoid empty entity scenarios.


21-26: Component signature updated
The signature now includes entity and project for reactivity to user contexts. Implementation is well-typed.


27-28: Invoke useConfiguredProviders with entity
Proper approach for hooking into dynamic secrets. Confirm that references to project aren’t needed here if you only require entity-based secrets.


30-31: useViewerInfo usage
You gracefully handle loading states and store the admin flag. This ensures you can properly distinguish which options to display.


52-65: Handling loaded vs. not-loaded states
Displaying "Loading providers..." is a solid user-friendly approach. The fallback for disabledOptions is consistent with the rest of the logic.


68-69: Combine enabled and disabled arrays
Merging these arrays is a tidy approach, preserving order while grouping the final provider list.


74-82: Disable select when loading
isDisabled={configuredProvidersLoading} prevents partial selection. Good approach to avoid user confusion mid-load.


88-88: Format callback close brace
No functional change here, just ensuring it’s properly tied to the block.


99-111: Custom Option component usage
Passing extra props (entity, project, isAdmin) to CustomOption fosters a flexible design. Looks great for nested LLM selection.


114-122: filterOption
Includes a thorough check for both provider label and LLM labels. Ensures the search is comprehensive.
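
In spirit, that filter does something like the following (a sketch, assuming a react-select style (option, inputValue) signature and a string label on ProviderOption):

const filterOption = (option: {data: ProviderOption}, inputValue: string): boolean => {
  const search = inputValue.toLowerCase();
  return (
    option.data.label.toLowerCase().includes(search) ||
    option.data.llms.some(llm => llm.label.toLowerCase().includes(search))
  );
};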

weave-js/src/components/PagePanelComponents/Home/Browse3/pages/PlaygroundPage/PlaygroundChat/LLMDropdownOptions.tsx (3)

19-35: Clean and well-defined TypeScript interfaces

The ProviderOption and CustomOptionProps interfaces are well-structured and provide good type safety. The detailed typing for the LLM options will help prevent potential errors during development.


37-78: Admin-specific UI handling looks good

The DisabledProviderTooltip component elegantly handles different UI states based on admin status, providing better UX by showing clickable links only to users who can actually configure the missing secrets.
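
The described behavior amounts to branching the tooltip content on isAdmin, roughly like this sketch (component and prop names follow the review; the Tooltip import, copy, and link handling are assumptions and simplified):

import React from 'react';
import {Tooltip} from '@mui/material';

// Sketch: admins get actionable copy, non-admins are told whom to ask.
const DisabledProviderTooltip = ({
  isAdmin,
  missingSecrets,
  children,
}: {
  isAdmin: boolean;
  missingSecrets: string;
  children: React.ReactElement;
}) => (
  <Tooltip
    title={
      isAdmin
        ? `Missing secrets: ${missingSecrets}. Configure them in team settings.`
        : `Missing secrets: ${missingSecrets}. Ask a team admin to configure them.`
    }>
    {children}
  </Tooltip>
);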


1-18: Well-structured imports

The imports are well-organized, separating external libraries from internal components. The color constants are appropriately imported from the common styles directory.

@jwlee64 jwlee64 requested a review from jamie-rasmussen March 19, 2025 19:16
Copy link
Collaborator

@jamie-rasmussen jamie-rasmussen left a comment


Could clean up the PR title/description a bit.
