Llama.cpp
The Llama.cpp AI connection allows using the llama.cpp LLM inference project (written in C/C++) with i-net CoWork. Thanks to its ease of use, llama.cpp can quickly be hosted locally. The following settings can be made here:
- AI Connection Name: The name used to refer to this specific AI connection.
- Llama.cpp URL: The API URL to be used for requests. Please refer to https://github.com/ggerganov/llama.cpp for additional information.
- nPredict (in tokens): The maximum size of the response, measured in tokens. Larger values will decrease performance.
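The nPredict setting corresponds to the `n_predict` parameter of the llama.cpp HTTP server's `/completion` endpoint. As a minimal sketch of what a request against the configured Llama.cpp URL looks like (the server address `http://localhost:8080` is an assumption, matching the llama.cpp server default):

```python
import json
import urllib.request

# Assumption: a llama.cpp server running locally on its default port.
LLAMA_CPP_URL = "http://localhost:8080"

def build_completion_payload(prompt: str, n_predict: int = 128) -> dict:
    """Build the JSON body for llama.cpp's /completion endpoint.

    n_predict caps the response length in tokens (the "nPredict" setting).
    """
    return {"prompt": prompt, "n_predict": n_predict}

def complete(prompt: str, n_predict: int = 128) -> str:
    """Send a completion request to a running llama.cpp server."""
    payload = json.dumps(build_completion_payload(prompt, n_predict)).encode("utf-8")
    request = urllib.request.Request(
        LLAMA_CPP_URL + "/completion",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # The server returns a JSON object whose "content" field holds the text.
        return json.loads(response.read())["content"]
```

Calling `complete("Hello", n_predict=64)` requires a running server; the response is then truncated to at most 64 tokens, which is why larger nPredict values increase response time.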
Installing Llama.cpp
Llama.cpp can be installed either by downloading and building it according to the instructions on its GitHub page, or by using a prepared llamafile. Please follow the most recent instructions on the respective project page.
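As a sketch of the build-from-source route (assuming git, CMake, and a C/C++ toolchain are available; the model file name is a placeholder, and the exact commands may change, so the GitHub page remains authoritative):

```shell
# Clone and build llama.cpp from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Start the bundled HTTP server with a GGUF model of your choice.
# The resulting URL (default http://localhost:8080) is what goes into
# the "Llama.cpp URL" setting above.
./build/bin/llama-server -m path/to/model.gguf --port 8080
```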