diff --git a/docs/locales/zh_CN/LC_MESSAGES/framework/Langchain.po b/docs/locales/zh_CN/LC_MESSAGES/framework/Langchain.po
index 856eb3fab39d6880f4262dd9288d6f146da08d3b..6f3e6f604f2b793f1248109e0c7f6795d98758fa 100644
--- a/docs/locales/zh_CN/LC_MESSAGES/framework/Langchain.po
+++ b/docs/locales/zh_CN/LC_MESSAGES/framework/Langchain.po
@@ -32,7 +32,7 @@ msgstr "基础用法"
 
 #: ../../source/framework/Langchain.rst:11 b93bd8165fbe4340970f3942884a91dd
 msgid "The implementation process of this project includes loading files -> reading text -> segmenting text -> vectorizing text -> vectorizing questions -> matching the top k most similar text vectors with the question vectors -> incorporating the matched text as context along with the question into the prompt -> submitting to the Qwen2.5-7B-Instruct to generate an answer. Below is an example:"
-msgstr "您可以仅使用您的文档配合``langchain``来构建一个问答应用。该项目的实现流程包括加载文件 -> 阅读文本 -> 文本分段 -> 文本向量化 -> 问题向量化 -> 将最相似的前k个文本向量与问题向量匹配 -> 将匹配的文本作为上下文连同问题一起纳入提示 -> 提交给Qwen2.5-7B-Instruct生成答案。以下是一个示例:"
+msgstr "您可以仅使用您的文档配合 ``langchain`` 来构建一个问答应用。该项目的实现流程包括加载文件 -> 阅读文本 -> 文本分段 -> 文本向量化 -> 问题向量化 -> 将最相似的前k个文本向量与问题向量匹配 -> 将匹配的文本作为上下文连同问题一起纳入提示 -> 提交给Qwen2.5-7B-Instruct生成答案。以下是一个示例:"
 
 #: ../../source/framework/Langchain.rst:95 db8fe123a81d481c91f22710ead3993a
 msgid "After loading the Qwen2.5-7B-Instruct model, you should specify the txt file for retrieval."
diff --git a/docs/locales/zh_CN/LC_MESSAGES/run_locally/llama.cpp.po b/docs/locales/zh_CN/LC_MESSAGES/run_locally/llama.cpp.po
index 85df91c380dd686cab6a748428857db7f7b99bba..74124d9f318479e6b8c0cfefce16c7800fbd8f0d 100644
--- a/docs/locales/zh_CN/LC_MESSAGES/run_locally/llama.cpp.po
+++ b/docs/locales/zh_CN/LC_MESSAGES/run_locally/llama.cpp.po
@@ -495,8 +495,8 @@ msgid "Enter interactive mode. You can interrupt model generation and append new
 msgstr "进入互动模式。你可以中断模型生成并添加新文本。"
 
 #: ../../source/run_locally/llama.cpp.md:309 fa961800b1584d93b9315ae358c0d70d
-msgid "-i or --interactive-first"
-msgstr "-i 或 --interactive-first"
+msgid "-if or --interactive-first"
+msgstr "-if 或 --interactive-first"
 
 #: ../../source/run_locally/llama.cpp.md:309 ec896aaf5dfc44f99f2033044df8f4a0
 msgid "Immediately wait for user input. Otherwise, the model will run at once and generate based on the prompt."
diff --git a/docs/locales/zh_CN/LC_MESSAGES/run_locally/ollama.po b/docs/locales/zh_CN/LC_MESSAGES/run_locally/ollama.po
index 5d37a41432a76a5ec3f0815b6f66bbda4e3411d6..d268105466bd449dc8c2aa70953690afd47009be 100644
--- a/docs/locales/zh_CN/LC_MESSAGES/run_locally/ollama.po
+++ b/docs/locales/zh_CN/LC_MESSAGES/run_locally/ollama.po
@@ -74,7 +74,7 @@ msgstr "用Ollama运行你自己的GGUF文件"
 
 #: ../../source/run_locally/ollama.md:34 a45b6bcaab944f00ae23384aaf4bebfe
 msgid "Sometimes you don't want to pull models and you just want to use Ollama with your own GGUF files. Suppose you have a GGUF file of Qwen2.5, `qwen2.5-7b-instruct-q5_0.gguf`. For the first step, you need to create a file called `Modelfile`. The content of the file is shown below:"
-msgstr "有时您可能不想拉取模型,而是希望直接使用自己的GGUF文件来配合Ollama。假设您有一个名为`qwen2.5-7b-instruct-q5_0.gguf`的Qwen2.5的GGUF文件。在第一步中,您需要创建一个名为`Modelfile``的文件。该文件的内容如下所示:"
+msgstr "有时您可能不想拉取模型,而是希望直接使用自己的GGUF文件来配合Ollama。假设您有一个名为`qwen2.5-7b-instruct-q5_0.gguf`的Qwen2.5的GGUF文件。在第一步中,您需要创建一个名为`Modelfile`的文件。该文件的内容如下所示:"
 
 #: ../../source/run_locally/ollama.md:97 0300ccc8902641e689c5214717fb588d
 msgid "Then create the ollama model by running:"
diff --git a/docs/source/run_locally/llama.cpp.md b/docs/source/run_locally/llama.cpp.md
index 1adb4784d49ef35cf1ee73f31a28114d70ee7922..9efeb67943e74790080bd41e148ddaf50785a0bc 100644
--- a/docs/source/run_locally/llama.cpp.md
+++ b/docs/source/run_locally/llama.cpp.md
@@ -175,12 +175,12 @@ We provide a series of GGUF models in our Hugging Face organization, and to sear
 
 Download the GGUF model that you want with `huggingface-cli` (you need to install it first with `pip install huggingface_hub`):
 ```bash
-huggingface-cli download <model_repo> <gguf_file> --local-dir <local_dir> --local-dir-use-symlinks False
+huggingface-cli download <model_repo> <gguf_file> --local-dir <local_dir>
 ```
 
 For example:
 ```bash
-huggingface-cli download Qwen/Qwen2.5-7B-Instruct-GGUF qwen2.5-7b-instruct-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download Qwen/Qwen2.5-7B-Instruct-GGUF qwen2.5-7b-instruct-q5_k_m.gguf --local-dir .
 ```
 
 This will download the Qwen2.5-7B-Instruct model in GGUF format quantized with the scheme Q5_K_M.
@@ -306,7 +306,7 @@ We use some new options here:
 
 :`-sp` or `--special`: Show the special tokens.
 :`-i` or `--interactive`: Enter interactive mode. You can interrupt model generation and append new texts.
-:`-i` or `--interactive-first`: Immediately wait for user input. Otherwise, the model will run at once and generate based on the prompt.
+:`-if` or `--interactive-first`: Immediately wait for user input. Otherwise, the model will run at once and generate based on the prompt.
 :`-p` or `--prompt`: In interactive mode, it is the contexts based on which the model predicts the continuation.
 :`--in-prefix`: String to prefix user inputs with.
 :`--in-suffix`: String to suffix after user inputs with.
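In the last hunk, `-i` is the short form of `--interactive`, while `--interactive-first` is abbreviated `-if`; the patched option list reflects that. The sketch below shows how these options might combine in an interactive-first session with the GGUF file downloaded earlier. It is illustrative only: the `llama-cli` binary name, the ChatML prompt strings, and the `-ngl`/`-n` values are assumptions, not taken from the patched page.

```bash
# Illustrative sketch: interactive-first chat with the downloaded GGUF.
# Prompt strings and numeric values are assumptions, not from the docs.
./llama-cli -m qwen2.5-7b-instruct-q5_k_m.gguf \
    -co -sp -if \
    -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n" \
    --in-prefix "<|im_start|>user\n" \
    --in-suffix "<|im_end|>\n<|im_start|>assistant\n" \
    -ngl 80 -n 512   # drop -ngl on a CPU-only build
```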