<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
type="image/gif">
<title>QuikChat Examples</title>
<style>
body {
width: 100%;
height: 100%;
font-family: sans-serif;
}
#container {
width: 90%;
margin: 0 auto;
padding: 1rem;
}
h1 {
font-size: 2rem;
margin-bottom: 1rem;
font-weight: 700;
}
h2 {
font-size: 1.25rem;
font-weight: 600;
}
a {
display: inline-block;
text-decoration: underline;
}
a:hover {
font-weight: 700;
color: #0622dd;
}
</style>
</head>
<body>
<!-- example page explains how to use quikchat -->
<div id="container">
<h1>QuikChat Examples</h1>
<p>These examples demonstrate how to use QuikChat in your projects. Included are basic themes for light, dark,
and debugging.</p>
<h2>Basic Usage as Module</h2>
<p>This example demonstrates how to create a basic chat widget with QuikChatJS using an ESM module import.</p>
<a href="./example_esm.html">View Example ESM</a>
<h2>Basic Usage as UMD</h2>
<p>This example demonstrates how to create a basic chat widget with QuikChatJS using a UMD script-tag import.</p>
<a href="./example_umd.html">View Example UMD</a>
<h2>Dual Chatrooms</h2>
<p>This example demonstrates how to create two chatrooms that can send messages to each other.</p>
<a href="./dual-chatrooms.html">View Example Dual Chatrooms</a>
<h2>History Demo</h2>
<p>This demo shows saving and restoring the full chat message history. It is useful for apps where the history needs to be restored from a previous session.</p>
<a href="./historyDemo.html">View Example History Demo</a>
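<p>The demo's own source shows the exact QuikChat calls; the sketch below illustrates only the save/restore pattern. The <code>{ user, content }</code> message shape and the storage key are illustrative assumptions, not QuikChat's API — substitute whatever your QuikChat version returns from its history getter.</p>

```javascript
// Persist and restore a chat history as JSON.
// `storage` is any object with getItem/setItem (e.g. window.localStorage).
const HISTORY_KEY = "quikchat-history"; // illustrative key, not mandated by QuikChat

function saveHistory(storage, messages) {
  // Serialize the whole message array in one write.
  storage.setItem(HISTORY_KEY, JSON.stringify(messages));
}

function loadHistory(storage) {
  // Return an empty history when nothing was saved yet.
  const raw = storage.getItem(HISTORY_KEY);
  return raw ? JSON.parse(raw) : [];
}
```

<p>On page load, call <code>loadHistory</code> and re-add each message to the widget; save again whenever a message is added.</p>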
<h2>Simple Ollama</h2>
<p>This example shows how to use QuikChat with a local LLM via <a href="https://ollama.ai">Ollama</a>.</p>
<a href="./simple_ollama.html">View Example Ollama</a>
<h2>LLM with Conversational Memory</h2>
<p>This example demonstrates how to use QuikChat with a local LLM via Ollama, where QuikChat provides the chat history and Ollama provides the LLM model. This allows the chat to "remember" what is being discussed.</p>
<a href="./ollama_with_memory.html">View Example Ollama with Memory</a>
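<p>The widget wiring is in the example's source; the sketch below shows only the memory pattern. It resends the full transcript to Ollama's <code>/api/chat</code> endpoint on each turn; the default port 11434 and the model name are assumptions to adjust for your setup.</p>

```javascript
// Keep the whole conversation and send it to Ollama on every turn,
// so the model can "remember" earlier messages.
const messages = [];

function buildChatRequest(model, history, userText) {
  // Ollama's /api/chat expects the complete message list each call.
  return {
    model,
    messages: [...history, { role: "user", content: userText }],
    stream: false,
  };
}

async function askOllama(userText) {
  const body = buildChatRequest("llama3.1", messages, userText);
  messages.push({ role: "user", content: userText });
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  messages.push(data.message); // { role: "assistant", content: "..." }
  return data.message.content;
}
```

<p>Because the history grows every turn, a real app would eventually truncate or summarize older messages to stay within the model's context window.</p>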
<h2>LLM with Conversational Memory using LMStudio</h2>
<p>This example demonstrates how to use QuikChat with a local LLM via LMStudio, where QuikChat provides the chat history and LMStudio provides the LLM model. This allows the chat to "remember" what is being discussed.
<br>This example assumes <a href='https://lmstudio.ai/'>LMStudio</a> is installed and running locally on port 1234, and that the llama3.1 model is loaded. It will not work on the GitHub Pages demo site; you must run it locally.</p>
<a href="./lmstudio_with_memory.html">View Example LMStudio with Memory</a>
<h2>OpenAI</h2>
<p>This example demonstrates how to use QuikChat with OpenAI's GPT-4o model. Because it relies only on token streaming, it can be adapted to any API that streams responses token by token.</p>
<a href="./openai.html">View Example OpenAI</a>
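<p>The example's source shows how each token is appended to the widget; the sketch below covers just the parsing step, assuming the OpenAI-style server-sent-events format (<code>data:</code> lines carrying JSON deltas, terminated by <code>data: [DONE]</code>).</p>

```javascript
// Pull streamed token text out of an OpenAI-style SSE chunk.
// Each event line looks like: data: {"choices":[{"delta":{"content":"Hi"}}]}
// and the stream ends with:   data: [DONE]
function extractTokens(sseChunk) {
  const tokens = [];
  for (const line of sseChunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    const delta = JSON.parse(payload).choices?.[0]?.delta;
    if (delta && typeof delta.content === "string") tokens.push(delta.content);
  }
  return tokens;
}
```

<p>Appending each extracted token to the current chat message as it arrives gives the familiar "typing" effect.</p>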
<br>
<h2>QuikChat with Python FastAPI Backend</h2>
<p>This example demonstrates how to use QuikChat with a FastAPI server that uses a local LLM to generate responses to user prompts.</p>
<p><strong>Note:</strong> This example requires the FastAPI server to be running locally and does not run on GitHub Pages.</p>
<a href="./fastapi_llm/index.html">View Example FastAPI Instructions</a>
<br>
<h2>QuikChat with Node.js Express Backend</h2>
<p>This example demonstrates how to use QuikChat with an Express server that uses a local LLM to generate responses to user prompts.</p>
<p><strong>Note:</strong> This example requires the Express server to be running locally and does not run on GitHub Pages.</p>
<a href="./npm_express/index.html">View Example Express Instructions</a>
</div>
</body>
</html>