
updating script #47

Open · wants to merge 14 commits into main
2 changes: 1 addition & 1 deletion in client/script.js

@@ -117,4 +117,4 @@ form.addEventListener('keyup', (e) => {
   if (e.keyCode === 13) {
     handleSubmit(e)
   }
-})
+})
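The listener above fires `handleSubmit` when Enter is pressed, keyed on `e.keyCode === 13`. That works, but `KeyboardEvent.keyCode` is deprecated; a sketch of the modern equivalent using `e.key` (the `isSubmitKey` helper name is hypothetical, not part of this PR):

```javascript
// Hypothetical helper: checks for the Enter key via the non-deprecated
// `key` property instead of the numeric `keyCode === 13` comparison.
function isSubmitKey(e) {
  // Plain Enter submits; Shift+Enter is left free for e.g. a newline.
  return e.key === 'Enter' && !e.shiftKey
}

// Wiring it into the existing listener would look like:
// form.addEventListener('keyup', (e) => {
//   if (isSubmitKey(e)) handleSubmit(e)
// })
```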
8 changes: 4 additions & 4 deletions in server/server.js

@@ -27,9 +27,9 @@ app.post('/', async (req, res) => {
 
   const response = await openai.createCompletion({
     model: "text-davinci-003",
-    prompt: `${prompt}`,
+    prompt: `keep all context of response around health, fitness, eating healthy, healthy meals, training and mindfulness. do not answer question on other topics. For any memberships or prices direct to PLM website. For any billing questions email [email protected]. ${prompt}`,
     temperature: 0, // Higher values means the model will take more risks.
-    max_tokens: 3000, // The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
+    max_tokens: 200, // The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
     top_p: 1, // alternative to sampling with temperature, called nucleus sampling
     frequency_penalty: 0.5, // Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
     presence_penalty: 0, // Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
@@ -41,8 +41,8 @@ app.post('/', async (req, res) => {
 
   } catch (error) {
     console.error(error)
-    res.status(500).send(error || 'Something went wrong');
+    res.status(500).send(error || 'Something went wrong, try again shortly');
   }
 })
 
-app.listen(5000, () => console.log('AI server started on http://localhost:5000'))
+app.listen(5000, () => console.log('AI server started on http://localhost:5000'))
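The main change in this PR prepends a fixed instruction block to the user's prompt to keep responses on-topic. A sketch of how that preamble could be factored out of the template literal for easier maintenance (the `SYSTEM_PREAMBLE` constant and `buildPrompt` helper are hypothetical, not part of the PR):

```javascript
// Sketch only: the PR inlines this text directly in the `prompt` template
// literal; extracting it to a named constant keeps the API call readable.
const SYSTEM_PREAMBLE =
  'keep all context of response around health, fitness, eating healthy, ' +
  'healthy meals, training and mindfulness. do not answer question on other ' +
  'topics. For any memberships or prices direct to PLM website. For any ' +
  'billing questions email [email protected].'

// Hypothetical helper that prefixes the guardrail text onto the user prompt,
// matching what the diff builds inline.
function buildPrompt(userPrompt) {
  return `${SYSTEM_PREAMBLE} ${userPrompt}`
}
```

With this, the call site would become `prompt: buildPrompt(prompt)`. Note that prompt-level instructions like these are a soft constraint only: a determined user can often steer a completion model off-topic, so server-side filtering may still be worth considering.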