
Improve System Prompting #7

@tyrellshawn

Description

Problem

With smaller-scale or local LLMs (roughly 7B–13B parameters), Bolt’s “code window” sometimes fails to start. The default system prompt is too long and complex for these models, so they omit critical steps, such as running the start command to launch the development server, and produce incomplete outputs. Shorter prompts generally yield better results on code-generation tasks, and the maintainers have noted that improving prompts is an ongoing focus for better support of local models.

Proposed Solution

  • Simplify the system prompt for smaller models: Create a condensed version of the system prompt for models with limited context windows. This shorter prompt should clearly emphasize the essential steps (e.g., “after making code changes, always start the dev server using the start command so the preview is live”) and avoid unnecessary role descriptions or extraneous context; a sketch follows this list.
  • Explicit reminders: Add explicit instructions to run the development server after edits. Include these reminders in the small-model prompt template so the model reliably issues a start action.
  • Reduce cognitive load: Trim superfluous content (project descriptions, long diffs) in prompts targeted at smaller models. Where possible, summarize context or provide a shorter diff summary to stay within the model’s context limit.
  • Fallback prompting: Implement a mechanism in the UI/backend to detect when the server has not started after a response. Prompt the model (or the user) to start the server if necessary.
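
A minimal sketch of what such a condensed prompt could look like. The wording and the SMALL_MODEL_SYSTEM_PROMPT name are illustrative only, not Bolt’s actual prompt text:

    // Hypothetical condensed system prompt for smaller/local models (7B–13B).
    // The wording and constant name are illustrative, not Bolt's actual prompt.
    export const SMALL_MODEL_SYSTEM_PROMPT = `
    You are a coding assistant working in a browser-based dev environment.

    Rules:
    1. Make the requested code changes using file actions.
    2. After every set of code changes, ALWAYS run the start command
       (for example "npm run dev") so the dev server launches and the preview is live.
    3. Keep responses short: file contents and shell commands only, no long explanations.
    `.trim();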

Tasks

  1. Implement dynamic prompt templates based on the selected model size. Use a shorter, task-focused template for smaller/local models (see the sketch after this list).
  2. Add explicit instructions in the small-model prompt to remind the model to run start after code changes.
  3. Update the prompt generator to trim unnecessary context when addressing smaller models.
  4. Implement a fallback in the UI/backend: if the server isn’t running after a response, prompt the model to start the server.
  5. Document the new prompt templates and provide usage guidance for local models.
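
A rough sketch of how tasks 1 and 4 could fit together. The './prompts' module path, the name-based size heuristic, and the 'start' action type are assumptions for illustration, not existing Bolt APIs:

    // Hypothetical wiring for tasks 1 and 4; names and heuristics are illustrative.
    import { DEFAULT_SYSTEM_PROMPT, SMALL_MODEL_SYSTEM_PROMPT } from './prompts';

    const SMALL_MODEL_HINTS = ['7b', '8b', '13b', 'phi', 'gemma', 'mistral'];

    // Crude size check based on the model name; a real implementation might use
    // provider metadata or an explicit user setting instead.
    function isSmallModel(modelName: string): boolean {
      const name = modelName.toLowerCase();
      return SMALL_MODEL_HINTS.some((hint) => name.includes(hint));
    }

    // Task 1: pick the condensed template for small/local models,
    // the full default prompt otherwise.
    export function getSystemPrompt(modelName: string): string {
      return isSmallModel(modelName) ? SMALL_MODEL_SYSTEM_PROMPT : DEFAULT_SYSTEM_PROMPT;
    }

    // Task 4: if the response contained no start action, return a follow-up
    // message asking the model (or the user) to launch the dev server.
    export function devServerReminder(actions: Array<{ type: string }>): string | null {
      const started = actions.some((action) => action.type === 'start');
      return started
        ? null
        : 'The dev server was not started. Run the start command (e.g. `npm run dev`) so the preview loads.';
    }

Keeping the size check name-based keeps the change small; a per-provider or per-model setting would be more robust if providers expose that metadata.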

Improving the prompting strategy in this way will make the experience smoother for users of smaller models and ensure the code window reliably starts across different environments.

Metadata

Assignees: none
Labels: duplicate (this issue or pull request already exists)
