Commit e91cd28

Merge pull request #296 from posit-dev/fix-typos
fix(docs): Fix typos
2 parents ce92e7a + 55b572b commit e91cd28

5 files changed, +36 -37 lines changed

docs/genai-chatbots.qmd

Lines changed: 4 additions & 5 deletions

@@ -62,18 +62,17 @@ shiny create --template chat-ai-langchain
 
 ### Other
 
-`chatlas`'s supports a [wide variety](https://posit-dev.github.io/chatlas/#model-providers) of LLM providers including Vertex, Snowflake, Groq, Perplexity, and more.
+`chatlas` supports a [wide variety](https://posit-dev.github.io/chatlas/#model-providers) of LLM providers including Vertex, Snowflake, Groq, Perplexity, and more.
 In this case, you can start from any template and swap out the `chat_client` with the relevant chat constructor (e.g., `ChatVertex()`).
 
 ### Help me choose!
 
 If you're not sure which provider to choose, `chatlas` provides a [great guide](https://posit-dev.github.io/chatlas/#model-choice) to help you decide.
 :::
 
+When you run the `shiny create` command, you'll be provided with some tips on where to obtain the necessary API keys (if any) and how to securely add them to your app.
 
-When you run the `shiny create` command, you'll be provided some tips on where to go to obtain the necessary API keys (if any) and how to securely get them into your app.
-
-Also, if you're not ready to sign up for a cloud provider (e.g., Anthropic, OpenAI, etc), you can run models locally (for free!) with the Ollama template.
+Also, if you're not ready to sign up for a cloud provider (e.g., Anthropic, OpenAI, etc.), you can run models locally (for free!) with the Ollama template.
 This is a great way to get started and learn about LLMs without any cost, and without sharing your data with a cloud provider.
 
 Once your credentials (if any) are in place, [run the app](https://shiny.posit.co/py/get-started/create-run.html#run-your-shiny-application). Congrats, you now have a streaming chat interface powered by an LLM of your choice! 🎉
@@ -177,7 +176,7 @@ shiny create --template chat-ai-playground
 Show message(s) when the chat first loads by providing `messages` to `chat.ui()`.
 Messages are interpreted as markdown, so you can use markdown (or HTML) to format the text as you like.
 
-Startup messages are a great place to introduce the chatbot with a brief description of what it can do and optionally some [input suggestions](#suggest-input) to help the user get started quickly.
+Startup messages are a great place to introduce the chatbot with a brief description of what it can do and, optionally, some [input suggestions](#suggest-input) to help the user get started quickly.
 Messages can also contain arbitrary Shiny UI [components](../components/index.qmd), so you could even include something like a [tooltip](../components/display-messages/tooltips/index.qmd) to provide additional details on demand.
 
 ::: {.panel-tabset .panel-pills}
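For reference, the startup-message pattern described in this hunk can be as small as the following sketch (Shiny Express; the component id and message text are illustrative, not from the commit):

```python
# Minimal sketch: show a markdown-formatted message when the chat loads.
from shiny.express import ui

chat = ui.Chat(id="chat")

chat.ui(
    messages=[
        "**Hello!** I'm a chatbot that can answer questions about penguins. "
        "Ask me anything to get started."
    ],
)
```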

docs/genai-inspiration.qmd

Lines changed: 10 additions & 10 deletions

@@ -33,7 +33,7 @@ Chatbots are the most familiar interface to Generative AI, and can be used for a
 
 LLMs excel when they are instructed to focus on particular task(s), and provided the context necessary to complete them accurately.
 This is especially true for coding assistants, such as the [Shiny Assistant](https://shiny.posit.co/blog/posts/shiny-assistant/) which leverages an LLM to help you build Shiny apps faster.
-Just describe the app you want to build, and Shiny Assistant does it's best to give you a complete working example that runs in your browser.
+Just describe the app you want to build, and Shiny Assistant does its best to give you a complete working example that runs in your browser.
 
 {{< video https://shiny.posit.co/blog/posts/shiny-assistant/mortgage-calculator-1.mp4 title="Building a mortgage calculator with Shiny Assistant" >}}
 
@@ -48,14 +48,14 @@ One such example includes the [chatlas assistant](https://github.com/cpsievert/c
 
 ### Enhanced dashboards 📊
 
-LLMs are also very good at extracting [structured data](genai-structured-data.qmd) from unstructured text, is useful for a wide variety of tasks.
+LLMs are also very good at extracting [structured data](genai-structured-data.qmd) from unstructured text, which is useful for a wide variety of tasks.
 One interesting application is translating a user's natural language query into a SQL query.
 Combining this ability with [tools](genai-tools.qmd) to actually run the SQL query on the data and [reactively](reactive-foundations.qmd) update relevant views makes for a powerful way to "drill down" into your data.
 Moreover, by making the SQL query accessible to the user, you can enhance the verifiability and reproducibility of the LLM's response.
 
 #### Query chat
 
-The [`querychat` package](https://github.com/posit-dev/querychat) provides some tools to help you more easily leverage this idea in your own Shiny apps.
+The [`querychat` package](https://github.com/posit-dev/querychat) provides tools to help you more easily leverage this idea in your own Shiny apps.
 A straightforward use of querychat is shown below, where the user can ask a natural language question about the `titanic` dataset, and the LLM generates a SQL query that can be run on the data:
 
 ![Screenshot of the "querychat" app, which leverages LLMs to generate SQL queries that match a user's natural language query.](/images/genai-querychat.png){class="rounded shadow lightbox mt-3"}
@@ -75,7 +75,7 @@ A more advanced application of this concept is to drive multiple views of the da
 An implementation of this idea is available in the [sidebot](https://github.com/jcheng5/py-sidebot) repo.
 It defaults to the `tips` dataset, but without much effort, you can adapt it to another dataset of your choosing.
 
-![Screenshot of the "sidebot" app, which leverages LLMs to translate nature language to SQL, and tools to reactively update the dashboard.](/images/genai-sidebot.png){class="rounded shadow lightbox mt-3"}
+![Screenshot of the "sidebot" app, which leverages LLMs to translate natural language to SQL, and tools to reactively update the dashboard.](/images/genai-sidebot.png){class="rounded shadow lightbox mt-3"}
 
 ::: callout-note
 The app above is available as a [template](../templates/sidebot/index.qmd):
@@ -86,19 +86,19 @@ shiny create --template querychat \
 ```
 :::
 
-Sidebot also demonstrates how one could lean into an LLM's ability to "see" images and generate natural language descriptions of them.
+Sidebot also demonstrates how one can leverage an LLM's ability to "see" images and generate natural language descriptions of them.
 Specifically, by clicking on the ✨ icon, the user is provided with a natural language description of the visualization, which can be useful for accessibility or for users who are not as familiar with the data.
 
 ![Screenshot of the "sidebot" app with a tooltip describing the visualization.](/images/genai-sidebot-tooltip.png){class="rounded shadow lightbox mt-3"}
 
 
 ### Guided exploration 🧭
 
-Chatbots are also a great way to guide users through an experience, like a story, game, or learning activity.
-The `Chat()` component's [input suggestion](genai-chatbots.qmd#suggest-input) feature provides a particularly useful interface for this, as it makes it super easy on the user to 'choose their own adventure' with little to no typing.
+Chatbots are also a great way to guide users through an experience, such as a story, game, or learning activity.
+The `Chat()` component's [input suggestion](genai-chatbots.qmd#suggest-input) feature provides a particularly useful interface for this, as it makes it very easy for users to 'choose their own adventure' with little to no typing.
 
 For example, this "Choose your own Data Science Adventure" app starts by collecting some basic user information, then generates relevant hypothetical data science scenarios.
-Based on what scenario the user chooses, the app then guides the user through a series of questions, ultimately leading to a data science project idea and deliverable:
+Based on the scenario the user chooses, the app then guides the user through a series of questions, ultimately leading to a data science project idea and deliverable:
 
 ![Screenshot of the "Choose your own Data Science Adventure" app.](/images/genai-data-science-adventure.png){class="rounded shadow lightbox mt-3"}
 
@@ -177,10 +177,10 @@ The app below uses an LLM to generate a description of an image based on a user-
 
 ![Screenshot of an app that generates an image description.](/images/genai-image-describer.png){class="rounded shadow lightbox mt-3"}
 
-When the user clicks 'Describe Image', the app passes the image URL to the LLM, which generates a overall description, tag keywords, as well as estimates on location, photographer, etc.
+When the user clicks 'Describe Image', the app passes the image URL to the LLM, which generates an overall description, tag keywords, as well as estimates on location, photographer, etc.
 This content is then streamed into the `MarkdownStream()` component (inside of a card) as it's being produced.
 
-This slightly more advanced example also demonstrates how to route the same response stream to multiple output views: namely both the `MarkdownStream()` and a `Chat()` component.
+This slightly more advanced example also demonstrates how to route the same response stream to multiple output views: namely, both the `MarkdownStream()` and a `Chat()` component.
 This allows the user to make follow-up requests or ask questions about the image description.
 
 ![Screenshot of the image description app with the offcanvas chat made visible.](/images/genai-image-describer-chat.png){class="rounded shadow lightbox mt-3"}

docs/genai-rag.qmd

Lines changed: 4 additions & 4 deletions

@@ -9,12 +9,12 @@ lightbox:
 callout-appearance: simple
 ---
 
-Large language models (LLMs) are trained on public data, and have a training cutoff date, so they aren't inherently aware of private, or the latest, information.
+Large language models (LLMs) are trained on public data and have a training cutoff date, so they aren't inherently aware of private or the latest information.
 As a result, LLMs may not have the necessary information to answer a user's question, even though they might pretend to (a.k.a. hallucinate).
 This is a common problem, especially in an enterprise setting, where the information is proprietary and/or constantly changing.
 Unfortunately, this is also an environment where plausible but inaccurate answers can have serious consequences.
 
-There are rougly three general approaches to addressing this problem. Going from the least to most complex:
+There are roughly three general approaches to addressing this problem, going from the least to most complex:
 
 1. **System prompt**: If the information that the model needs to perform well can fit within a [system prompt](https://posit-dev.github.io/chatlas/get-started.html#what-is-a-prompt) (i.e., fit within the relevant [context window](https://posit-dev.github.io/chatlas/get-started.html#what-is-a-token)), you should consider that first.
 2. **Tool calling**: Provide the LLM with [tools](genai-tools.qmd) it can use to retrieve the information that it needs. Compared to RAG, this has the benefit of not needing to pre-fetch/maintain an information database, compute document/query similarities, and can even be combined with RAG.
@@ -26,7 +26,7 @@ The last section provides some tips on how to scale up your RAG implementation.
 ## RAG basics
 
 The core concept of RAG is fairly simple, yet general: given a set of documents and a user query, find the document(s) that are the most similar to the query and supply those documents as additional context to the LLM.
-This requires choosing a numerical technique to compute similarity; of which there are many, each with its own strengths and weaknesses.
+This requires choosing a numerical technique to compute similarity, of which there are many, each with its own strengths and weaknesses.
 The often tricky part of doing RAG well is finding the similarity measure that is both performant and effective for your use case.
 
 To demonstrate, let's use a basic example derived from `chatlas`'s article on [RAG](https://posit-dev.github.io/chatlas/rag.html).
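It may help to see the retrieval step this hunk describes in isolation. A minimal sketch, assuming a placeholder `embed()` function standing in for a real embedding model:

```python
# Minimal sketch of RAG retrieval: rank documents by cosine similarity
# to the query, then pass the top matches to the LLM as extra context.
import numpy as np

def cosine_similarity(a, b) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, documents: list[str], embed, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine_similarity(q, embed(d)), reverse=True)
    return ranked[:top_k]

# The retrieved documents are then included in the prompt, e.g.:
#   f"{query}\n\nRelevant context:\n\n" + "\n\n".join(retrieved)
```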
@@ -130,5 +130,5 @@ To scale this basic example up to your use case, you'll not only want to conside
 
 Nowadays, there are many options for efficient storage/retrieval of documents (i.e., vector databases).
 That said, [`duckdb`'s vector extension](https://duckdb.org/docs/stable/extensions/vss.html) comes highly recommended, and here is a great [blog post](https://blog.brunk.io/posts/similarity-search-with-duckdb/) on building a database and retrieving from it with a custom embedding model.
-Many of the these options will offer both a local and cloud-based solution, so you can choose the one that best fits your needs.
+Many of these options will offer both a local and cloud-based solution, so you can choose the one that best fits your needs.
 For example, with `duckdb`, you can leverage [MotherDuck](https://motherduck.com/) for your hosting needs, as well as others like [Pinecone](https://www.pinecone.io/) and [Weaviate](https://weaviate.io/).

docs/genai-structured-data.qmd

Lines changed: 3 additions & 3 deletions

@@ -8,8 +8,8 @@ lightbox:
 callout-appearance: simple
 ---
 
-LLMs are quite good at extracting structured data from unstructured text, images, etc.
-Though not always perfect, this can be a very helpful way to reduce the amount of manual work needed to extract information from a large amount of text or documents.
+LLMs are quite good at extracting structured data from unstructured text, images, and more.
+Although not always perfect, they can greatly reduce the manual work needed to extract information from large amounts of text or documents.
 Here are just a few scenarios where this can be useful:
 
 1. **Form processing**: Extract structured field-value pairs from scanned documents, invoices, and forms to reduce manual data entry.
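In `chatlas`, extraction along these lines is typically driven by a pydantic model passed to `.extract_data()`. A hedged sketch, where the `Invoice` model and the input text are hypothetical:

```python
# Hedged sketch: extract structured fields from free text.
# The Invoice model and the example invoice text are made up.
from chatlas import ChatOpenAI
from pydantic import BaseModel

class Invoice(BaseModel):
    vendor: str
    total: float
    due_date: str

chat = ChatOpenAI()
data = chat.extract_data(
    "Invoice from Acme Corp for $1,200.00, due 2025-01-31.",
    data_model=Invoice,
)
print(data)  # e.g., {'vendor': 'Acme Corp', 'total': 1200.0, 'due_date': '2025-01-31'}
```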
@@ -121,7 +121,7 @@ async def data():
 ## Editable data
 
 Remember that the LLM is not perfect -- you may want to manually correct or refine the extracted data.
-In this scenario, it may be useful to allow the user to edit the extracted data, and download it when they are done.
+In this scenario, it may be useful to allow the user to edit the extracted data and download it when done.
 Here's an example of how to do this in a named entity extraction app.
 
 <details>

docs/genai-tools.qmd

Lines changed: 15 additions & 15 deletions

@@ -98,8 +98,8 @@ See the [chatlas docs](https://posit-dev.github.io/chatlas/tool-calling.html) to
 ### Basic chatbot {#basic-chatbot}
 
 To embed our `chat_client` in a Shiny [chatbot](genai-chatbots.qmd), let's put it in a `client.py` module and use it for response generation.
-And to display the tool call results, just set `content="all"` in the `.stream_async()` method.
-This way, `chatlas` will include tool call content objects in the stream, and since those content objects know how to display themselves in Shiny, we get a generic display of the tool request, response, and/or any errors that occurred.
+To display the tool call results, just set `content="all"` in the `.stream_async()` method.
+This way, `chatlas` will include tool call content objects in the stream, and since those content objects know how to display themselves in Shiny, we get a generic display of the tool request, response, and any errors that occurred.
 
 <details>
 <summary> client.py </summary>
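For context, the Shiny side of that `content="all"` stream might be wired up as follows; a hedged sketch that assumes the `chat_client` defined in the `client.py` module mentioned above:

```python
# Hedged sketch: stream a chatlas response, including tool-call
# content objects, into a Shiny chat component.
from shiny.express import ui

from client import chat_client  # the module described above

chat = ui.Chat(id="chat")
chat.ui()

@chat.on_user_submit
async def _(user_input: str):
    # content="all" includes tool requests/results in the stream so
    # Shiny can render them alongside the text response.
    stream = await chat_client.stream_async(user_input, content="all")
    await chat.append_message_stream(stream)
```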
@@ -159,20 +159,20 @@ And, in the case of an error, the user is also notified of the error.
 
 ![Screenshot of a tool call error.](/images/genai-tool-call-error-ui.png){class="rounded shadow lightbox"}
 
-In general, these default displays should be enough to let your users know what the LLM is request/receiving to help general their responses.
+In general, these default displays should be enough to let your users know what the LLM is requesting/receiving to help generate their responses.
 
 
 ## Reactivity
 
 Combining tool calling with [reactivity](reactive-foundations.qmd) is a powerful technique that can effectively let the LLM interact with the app.
-Here we'll explore a few general patterns for doing this.
+Here, we'll explore a few general patterns for doing this.
 
 
 ### Updating inputs
 
 The most basic way to hand over control to the LLM is to have it update reactive `input`(s).
-The core idea is to wrap a `ui.update_*()` call into a tool function, and register that function with the `chat_client`.
-Then, when a user asks the LLM to update an input, it's able to do so.
+The core idea is to wrap a `ui.update_*()` call in a tool function and register that function with the `chat_client`.
+Then, when a user asks the LLM to update an input, it is able to do so.
 
 <details>
 <summary> client.py </summary>
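A hedged sketch of that core idea (the slider id is illustrative; the tool's docstring is what the LLM sees as its description):

```python
# Hedged sketch: wrap ui.update_slider() in a tool so the LLM can
# change the "n" slider on request.
from chatlas import ChatAnthropic
from shiny import ui

chat_client = ChatAnthropic()

def update_slider(value: int):
    """Update the 'n' slider to the given value."""
    ui.update_slider("n", value=value)

chat_client.register_tool(update_slider)
```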
@@ -244,9 +244,9 @@ For brevity sake, we won't fully explore compelling applications here, but in ge
 
 ### Managing state
 
-In Shiny, a reactive value can derive from either a input [component](../components/index.qmd) (e.g., `ui.input_select()`, etc) or an entirely server-side `reactive.value()`.
-Generally speaking, the latter approach is useful for tracking state that may not exist in the UI (e.g., authentication, user activity, etc).
-Similar to how we can equipped the LLM to update an input component, we can also equip it to update a reactive value to have it drive the app's state.
+In Shiny, a reactive value can derive from either an input [component](../components/index.qmd) (e.g., `ui.input_select()`, etc.) or an entirely server-side `reactive.value()`.
+Generally speaking, the latter approach is useful for tracking state that may not exist in the UI (e.g., authentication, user activity, etc.).
+Similar to how we can equip the LLM to update an input component, we can also equip it to update a reactive value to have it drive the app's state.
 
 The sidebot template ([mentioned at the top](#why-tool-calling) of this article) illustrates a particularly powerful application of managing state.
 In this case, the state is an SQL query.
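A hedged, simplified sketch of such a tool (names are illustrative; the locking shown here is explained by the callout in the hunk below):

```python
# Hedged sketch: a tool that drives app state via a server-side
# reactive value. The lock/flush is needed because the tool runs
# inside a non-blocking message stream.
from shiny import reactive

current_query = reactive.value("")

async def update_dashboard(query: str):
    """Update the SQL query that filters the dashboard's data."""
    async with reactive.lock():
        current_query.set(query)
        await reactive.flush()
```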
@@ -285,27 +285,27 @@ async def update_dashboard(
 ::: callout-note
 ### Reactive locking
 
-Since this tool runs within a [non-blocking message stream](genai-tools.qmd#non-blocking-streams) (i.e., `.append_message_stream()`), in order to prevent race conditions, it must lock reactivity graph when updating reactive value(s).
-If the tool was, instead, running in a [blocking stream](genai-chatbots.qmd#message-stream-context), the `reactive.lock()` and `reactive.flush()` wouldn't be necessary.
+Since this tool runs within a [non-blocking message stream](genai-tools.qmd#non-blocking-streams) (i.e., `.append_message_stream()`), in order to prevent race conditions, it must lock the reactivity graph when updating reactive value(s).
+If the tool were, instead, running in a [blocking stream](genai-chatbots.qmd#message-stream-context), the `reactive.lock()` and `reactive.flush()` wouldn't be necessary.
 :::
 
 The final crucial piece is that, in order for the LLM to generate accurate SQL, it needs to know the schema of the dataset.
 This is done by passing the table schema to the LLM's [system prompt](genai-chatbots.qmd#models--prompts).
 
-Since the general pattern of having a tool to update a reactive data frame via SQL is so useful, the[querychat](../templates/querychat/index.qmd) package generalizes this pattern to make it more accessible and easier to use.
+Since the general pattern of having a tool to update a reactive data frame via SQL is so useful, the [querychat](../templates/querychat/index.qmd) package generalizes this pattern to make it more accessible and easier to use.
 
 ## Custom tool display {#custom-display}
 
 Customizing how tool results are displayed can be useful for a variety of reasons.
-For example, you may want to simply style results differently, or something much more sophisticated like displaying a map or a table.
+For example, you may want to style results differently, or implement something more sophisticated, such as displaying a map or a table.
 
 To customize the result display, you can:
 
-1. Subclass the [`chatlas.ContentToolResult` class](https://posit-dev.github.io/chatlas/reference/types.ContentToolResult.html)
+1. Subclass the [`chatlas.ContentToolResult` class](https://posit-dev.github.io/chatlas/reference/types.ContentToolResult.html).
 2. Override the `tagify()` method. This can return any valid `ui.Chat()` message content (i.e., a markdown string or Shiny UI).
 3. Return an instance of this subclass from your tool function.
 
-This basic example below would just style the tool result differently than the default:
+The basic example below would just style the tool result differently than the default:
 
 ```python
 from chatlas import ContentToolResult
