
Unable to install llama-cpp-python with poetry #9656

Closed
PierreCarceller opened this issue Aug 29, 2024 · 5 comments
Labels
kind/feature Feature requests/implementations status/triage This issue needs to be triaged

Comments

@PierreCarceller

Issue Kind

Brand new capability

Description

Based on the llama-cpp-python installation documentation, if we want to install the lib with CUDA support (for example), we have two options:

Pass a CMake environment variable:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

Or use the --config-settings argument of pip, like this:

pip install llama-cpp-python --config-settings cmake.args="-DGGML_CUDA=on"

As far as I know, it's not possible to do something equivalent with Poetry, because:

  • It's not possible to pass environment variables to Poetry
  • Poetry has no equivalent to --config-settings

I saw that there had already been conversations on a similar subject here, but they date back a while, and maybe things have changed in the meantime?

I understand that pip and Poetry are two different projects with different objectives, but it would be really useful (from my point of view) to be able to handle this kind of installation.

Impact

As I see it, llama-cpp-python will become an important library in the Python ecosystem (it probably already is, to some extent).

In addition, llama-cpp-python is not the only package that uses the --config-settings mechanism for installation.

That's why I think it would be worthwhile to support a smooth installation with Poetry.

Workarounds

There is a workaround, as explained here:

poetry run pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir

But it's not very practical, because it breaks the Poetry workflow.
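To make the workaround reproducible, the two steps can be wrapped in a small script. This is only a sketch, under the assumption that a CUDA toolchain is available and that force-reinstalling inside Poetry's environment is acceptable:

```shell
#!/usr/bin/env sh
# Sketch of the workaround: let Poetry resolve and install everything,
# then rebuild llama-cpp-python from source with CUDA enabled via pip.
set -eu

# Install the project's dependencies as usual.
poetry install

# Rebuild llama-cpp-python inside Poetry's virtualenv; --no-cache-dir
# avoids reusing a previously built CPU-only wheel.
CMAKE_ARGS="-DGGML_CUDA=on" poetry run pip install llama-cpp-python \
    --upgrade --force-reinstall --no-cache-dir
```

The drawback remains as described above: the pip-installed build is not tracked by poetry.lock, so a later `poetry install` or `poetry update` may replace it.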

@PierreCarceller PierreCarceller added kind/feature Feature requests/implementations status/triage This issue needs to be triaged labels Aug 29, 2024
@dimbleby
Contributor

"It's not possible to pass env var to poetry"

But the conclusion in #9323 was that environment variables are passed through, and so this already worked?

The request for --config-settings duplicates #8909 and python-poetry/poetry-core#715.

@PierreCarceller
Author

First of all, sorry for the duplicate; I missed that.

Based on my tests, and on what you can read here, it doesn't work.

These are my experiments:

With pip:

conda activate base && conda env remove --name llmdoc -y && conda create --name llmdoc -y python=3.10 && conda activate llmdoc

CMAKE_ARGS="-DLLAVA_BUILD=OFF -DGGML_CUDA=ON" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir

python -c "from llama_cpp.llama_cpp import _load_shared_library; print(bool(_load_shared_library('llama').llama_supports_gpu_offload()))"

Response: True

With Poetry:

conda activate base && conda env remove --name llmdoc -y && conda create --name llmdoc -y python=3.10 && conda activate llmdoc && pip install poetry

CMAKE_ARGS="-DLLAVA_BUILD=OFF -DGGML_CUDA=ON" poetry add llama-cpp-python --no-cache

python -c "from llama_cpp.llama_cpp import _load_shared_library; print(bool(_load_shared_library('llama').llama_supports_gpu_offload()))"

Response: False

The last line checks whether GPU support is enabled.
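That one-liner can be unpacked into a small helper for readability. This is a sketch that relies on the private `_load_shared_library` helper used in the commands above (private, so it may change between releases); it returns False rather than raising when the package is missing:

```python
def gpu_offload_enabled() -> bool:
    """Report whether the installed llama.cpp library was built with GPU offload."""
    try:
        # Private helper used in the one-liners above; may change between releases.
        from llama_cpp.llama_cpp import _load_shared_library
    except ImportError:
        # llama-cpp-python is not installed in this environment.
        return False
    return bool(_load_shared_library("llama").llama_supports_gpu_offload())

print(gpu_offload_enabled())
```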

I hope I haven't missed any details...

@dimbleby
Contributor

As in #9323, I retried it myself, and it is clear from the error messages that the environment variable is indeed passed to the build.

Also as in #9323, you perhaps have an already-built wheel in your cache.

@PierreCarceller
Author

"perhaps have an already-built wheel in your cache" -> I thought that resetting the environment and setting the "--no-cache" flag would prevent that. Can you tell me more?

I will test it with Docker to be sure, and post my results here in the next few days.

@dimbleby
Contributor

--no-cache does not affect the artifact cache.

This is now just a straight duplicate of #9323.
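Since --no-cache leaves the artifact cache alone, clearing it explicitly before retrying should force a rebuild. This is a sketch assuming Poetry's default cache name and location:

```shell
# List Poetry's caches to find the right name (usually "PyPI").
poetry cache list

# Clear everything cached for the PyPI source, including built artifacts.
poetry cache clear PyPI --all

# Blunter alternative: delete the artifact directory directly. The path
# below is the Linux default; check `poetry config cache-dir` on your system.
rm -rf ~/.cache/pypoetry/artifacts
```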

@Secrus closed this as not planned (won't fix, can't repro, duplicate, stale) on Sep 15, 2024.