From a7b515534d739f6ebb66c5fe2595862ad7118edb Mon Sep 17 00:00:00 2001
From: Ren Xuancheng <jklj077@users.noreply.github.com>
Date: Wed, 4 Dec 2024 18:19:25 +0800
Subject: [PATCH] Update awq.md

---
 docs/source/quantization/awq.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/docs/source/quantization/awq.md b/docs/source/quantization/awq.md
index ac274fe..2d9b7c2 100644
--- a/docs/source/quantization/awq.md
+++ b/docs/source/quantization/awq.md
@@ -111,12 +111,9 @@ print("Chat response:", chat_response)
 ## Quantize Your Own Model with AutoAWQ
 
 If you want to quantize your own model to AWQ quantized models, we advise you to use AutoAWQ.
-It is suggested installing the latest version of the package by installing from source code:
 
 ```bash
-git clone https://github.com/casper-hansen/AutoAWQ.git
-cd AutoAWQ
-pip install -e .
+pip install "autoawq<0.2.7"
 ```
 
 Suppose you have finetuned a model based on `Qwen2.5-7B`, which is named `Qwen2.5-7B-finetuned`, with your own dataset, e.g., Alpaca.
--
GitLab
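
For reference, a minimal sketch of the quantization step this patched section leads into, assuming the pinned `autoawq<0.2.7` install from the diff. The `quant_config` values are AutoAWQ's commonly used defaults and the output path `Qwen2.5-7B-finetuned-AWQ` is illustrative, not taken from the doc:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Paths: the finetuned model name comes from the doc; the output path is a placeholder.
model_path = "Qwen2.5-7B-finetuned"
quant_path = "Qwen2.5-7B-finetuned-AWQ"

# Typical AutoAWQ settings: 4-bit weights, group size 128, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the finetuned model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration and quantization (uses AutoAWQ's default calibration set here;
# a custom dataset such as Alpaca can be supplied via the calib_data argument).
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized weights and tokenizer for later serving.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```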