Updated olive meta to be more accurate.
MaanavD committed Nov 20, 2024
1 parent 8e15094 commit bfabd01
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion src/routes/blogs/+page.svelte
@@ -54,7 +54,7 @@
     'Is it better to quantize before or after finetuning?',
   date: '19th November, 2024',
   blurb:
-    'Learn how Olive helps optimize models for efficient, accurate deployment.',
+    'Learn how to quickly and easily experiment in your model optimization workflow using Olive.',
   link: 'blogs/olive-quant-ft',
   image: QuantizeFinetune,
   imgalt: 'Quantize or finetune first for better model performance?'
2 changes: 1 addition & 1 deletion src/routes/blogs/olive-quant-ft/+page.svx
@@ -1,7 +1,7 @@
 ---
 title: 'Is it better to quantize before or after finetuning?'
 date: '19th November, 2024'
-description: 'Learn how Olive helps optimize models for efficient, accurate deployment.'
+description: 'Learn how to quickly and easily experiment in your model optimization workflow using Olive.'
 keywords: 'quantization, fine-tuning, Olive toolkit, model optimization, ONNX runtime, AI model efficiency, AWQ, GPTQ, model deployment, low-precision, LoRA, language models, quantize before fine-tune, quantization sequence, Phi-3.5, Llama, memory reduction'
 authors:
 [
