When I run pip install xyz
on a Linux machine (using Debian or Ubuntu or a derived distro), I get this error:
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.

    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python
installation or OS distribution provider. You can override this, at
the risk of breaking your Python installation or OS, by passing
--break-system-packages.
hint: See PEP 668 for the detailed specification.
What does this error mean? How do I avoid it? Why doesn’t pip install xyz work like it did before I upgraded my system using sudo apt upgrade?
2
The proper way to install Python libraries and applications is to install them in a Python virtual environment whenever possible (the exceptions to this rule are quite rare).
As the error message indicates, there are two common solutions: either use a tool that creates and manages a virtual environment for you, or create a virtual environment yourself.
If what you are trying to install is an application, the strong recommendation is to use a tool like pipx. pipx is available as a system package on Debian and Debian-based systems such as Ubuntu:
apt install pipx
pipx install some-python-application
To create a virtual environment yourself you can use Python’s venv:
python -m venv my-venv
my-venv/bin/pip install some-python-library
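The same two steps can also be driven from Python itself via the stdlib venv module — a minimal sketch, using the placeholder name my-venv from the commands above:

```python
import venv
from pathlib import Path

# Create ./my-venv with pip bootstrapped into it
# (equivalent to: python -m venv my-venv)
builder = venv.EnvBuilder(with_pip=True)
builder.create("my-venv")

# The environment's own pip lives under its bin/ directory
# (Scripts\ on Windows); use it instead of the system pip,
# e.g.  my-venv/bin/pip install some-python-library
pip = Path("my-venv") / "bin" / "pip"
print(pip)
```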
However, if you have considered the options carefully but still decide that you really want to install packages “system-wide” at the risk of breaking your system, there are a couple of solutions:

- pass pip’s --break-system-packages argument, or
- add the following lines to ~/.config/pip/pip.conf:
[global]
break-system-packages = true
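If you want to sanity-check that the file you wrote parses as intended, pip.conf is plain INI, so the standard-library configparser can read the same content — a small sketch (pip uses its own reader, but the INI basics match):

```python
import configparser

# The exact content the answer adds to ~/.config/pip/pip.conf
conf_text = "[global]\nbreak-system-packages = true\n"

parser = configparser.ConfigParser()
parser.read_string(conf_text)

# "true" is parsed as a boolean by getboolean()
print(parser.getboolean("global", "break-system-packages"))  # True
```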
10
I’ve been getting this error since Python 3.11. Consider the relevant comments received on this post from Alok and JackLeEmmerdeur:

Deleting this file is not safe. It can lead to broken package management, conflicting installations, and permission issues.

So, below is the updated answer that allowed me to resolve this issue without risking the system, by renaming the file instead of deleting it:
sudo mv /usr/lib/python3.11/EXTERNALLY-MANAGED /usr/lib/python3.11/EXTERNALLY-MANAGED.old
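Renaming the file works because, per PEP 668, installers only check for a marker file named EXTERNALLY-MANAGED next to the standard library. A short sketch showing where that check points on your interpreter:

```python
import sysconfig
from pathlib import Path

# PEP 668: installers refuse system-wide installs when this marker
# file exists in the interpreter's stdlib directory.
marker = Path(sysconfig.get_path("stdlib")) / "EXTERNALLY-MANAGED"
print(marker)
print("externally managed:", marker.exists())
```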
9
The --break-system-packages flag in pip allows you to override the externally-managed-environment error and install Python packages system-wide.
pip install package_name --break-system-packages
Note: this flag shouldn’t be abused.
3
Just run:

python3 -m venv ~/.local --system-site-packages

Be sure ~/.local/bin is in your $PATH. Then use:

~/.local/bin/pip install ... # --user is implied ;)

You could probably also create your own ~/py directory and initialize everything from there. However, I think ~/.local is already picked up by PATH and the import directories.
3
I installed pipx first:
apt install pipx
Then I used pipx to install radian:
pipx install radian
Later to confirm the installation location (in my case to configure Visual Studio Code), I ran:
pipx list
3
Set this environment variable in your OS:

export PIP_BREAK_SYSTEM_PACKAGES=1

Or write it in your Dockerfile:

ENV PIP_BREAK_SYSTEM_PACKAGES=1
Reference: Python 3.11, pip and (breaking) system packages
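This variable works because of pip’s documented environment-variable convention: any long option can be supplied as a PIP_-prefixed variable, uppercased, with dashes turned into underscores. A tiny helper illustrating the mapping (pip_env_var is a name made up for this sketch):

```python
# pip reads any of its long options from the environment as
# PIP_<OPTION_NAME>: uppercased, dashes replaced by underscores.
def pip_env_var(option: str) -> str:
    return "PIP_" + option.lstrip("-").replace("-", "_").upper()

print(pip_env_var("--break-system-packages"))  # PIP_BREAK_SYSTEM_PACKAGES
```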
1
# rm /usr/lib/python3.11/EXTERNALLY-MANAGED
1
Python is like hell for system administrators… Different software uses different versions of many different things.
A few times, I’ve used pip3 to install things… that broke other things.
Sometimes I mixed it with “apt-get install”.
This error message is just like heaven… because it forces us to do things right. It means the package manager (Ubuntu, Debian) is responsible for handling dependencies, not pip.
It’s why we have Conda, or Miniconda.
You can create an environment using something like
conda create --name thenameofmyapp python=3.8
Activate your environment using
conda activate thenameofmyapp
Then you can run pip install -r requirements.txt and it will not break your system 🙂 It will just install things into that specific environment.
4
Use:

- Open Terminal
- Run sudo nano /etc/pip.conf
- Add the following lines:
  [global]
  break-system-packages = true
- Press Ctrl + X, then Y, then Enter (to save in the nano editor)

Everything is updated; now you can run pip install <package_name>.
4
Currently, some of the most upvoted answers are teaching you ways to ignore this problem. This is like telling you to take pain-killers to stop feeling the pain of broken glass in your throat, rather than telling you to stop eating broken glass. Stop eating broken glass. There are much better ways to install packages from PyPI than using the --break-system-packages
flag, or worse, deleting the EXTERNALLY-MANAGED
file.
The error is telling you that the environment is externally managed. Your Debian distribution already handles installation of Python libraries using APT. For example, if you wanted to install the requests
Python library, you can run:
sudo apt install python3-requests
These files get installed in /usr/lib/python3/dist-packages/, as you can see from the output of the dpkg -L command:
$ dpkg -L python3-requests
/usr/lib/python3/dist-packages/requests
/usr/lib/python3/dist-packages/requests/__init__.py
/usr/lib/python3/dist-packages/requests/__version__.py
/usr/lib/python3/dist-packages/requests/_internal_utils.py
# ...
If you run pip install requests, where should the files be installed? Should they be installed in /usr/lib/python3, or ~/.local/lib/python3/site-packages/, or somewhere else? The version installed from PyPI using pip might not be the same version included in the Debian package. What if the overwrite doesn’t succeed? What if you have two requests packages installed system-wide? Should pip learn how to uninstall APT packages? You probably have hundreds of Debian packages that depend on Python; what if one of them breaks because of these conflicting versions of requests? Can you easily undo the pip installation? Will you even realise that the weird errors you are experiencing when you launch gedit or something are because of this? This seems like a recipe for disaster. You used to be able to use pip to install Python packages system-wide, and it caused so many problems that it now throws this error message instead.
So, what can you do instead?
1. Install packages using APT
You can install Python packages system-wide using APT. For example, you can install requests
like this:
sudo apt install python3-requests
This version might not be the latest version found in PyPI. And not all packages on PyPI have been packaged for the Debian repositories. But fear not, there are other solutions.
Or: 2. Use pip in a virtual environment
If you haven’t yet learned a tool to set up a virtual environment, I highly recommend it. All Python programmers should learn one. I recommend venv or virtualenv to beginners. To install venv, run:
sudo apt install python3-venv
Then create a virtual environment in your project directory like this:
python3 -m venv .venv
Now activate your virtual environment by running:
source .venv/bin/activate
This modifies your PATH environment variable to include .venv/bin/. Now you can install PyPI packages using pip into your virtual environment (in this case into .venv/), like this:
pip install requests
If you don’t want to activate and deactivate virtual environments, you can run pip
and python
directly from the virtual environment, like this:
$ .venv/bin/pip install requests
$ .venv/bin/python
>>> import requests
>>>
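A quick way to confirm which interpreter you are in: inside a venv, sys.prefix points into the environment while sys.base_prefix still points at the base interpreter. A minimal check that works both for an activated environment and for .venv/bin/python invoked directly:

```python
import sys

# True inside a venv/virtualenv, False for the system interpreter
in_venv = sys.prefix != sys.base_prefix
print("in virtual environment:", in_venv)
```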
Or: 3. Use pipx
pipx is a great tool for installing command-line applications straight from PyPI. To install pipx, run:
sudo apt install pipx
Make sure ~/.local/bin/
is in your PATH
environment variable, by running this command:
pipx ensurepath
Close your terminal and open it again for the changes to take effect.
Now you can install a command-line application from PyPI. Behind the scenes, pipx will set up a virtual environment for each application, with its dependencies, completely isolated from the rest of the system to prevent problems. It’s brilliant. Here’s an example:
$ pipx install ruff
$ ruff --help
Or: 4. Pass --break-system-packages
If you absolutely must eat broken glass, then you can pass the --break-system-packages
option, like this:
pip install --break-system-packages requests
Never remove or rename /usr/lib/python3.12/EXTERNALLY-MANAGED
. It is there to stop you from breaking your system. You may not notice that your system is broken until weeks or months later, and you won’t understand at that point why it broke. If you must ignore these protections, you can do it on a one-off basis using --break-system-packages
.
2
That issue is from pip itself. Running the following command downgrades it to a version that predates this check:

pip install pip==22.3.1 --break-system-packages

That should help.
2
This works everywhere – Windows, Linux, macOS, Android, Raspberry Pi.

SOLUTION (put this at the end of your pip command):

--break-system-packages

pip install package-name --break-system-packages
1
To fix that error, you can use a Python virtual environment. Here is how you can do that.

Install a Python virtual environment tool. Then move into a directory of your choice and create a virtual environment using virtualenv your_folder_name.

From inside the environment directory, type source bin/activate.

Here is an easy way (video).

That is it.
1
In my case, when I was trying to install mkdocs-material this error happened and the solution to it was:
Linux/MAC:
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip3 --version
pip 21.2.3 from .../python3.10/site-packages/pip (python 3.10)
(venv) $ pip --version
pip 21.2.3 from .../python3.10/site-packages/pip (python 3.10)
Windows:

C:\> python -m venv venv
C:\> venv\Scripts\activate.bat
(venv) C:\> pip3 --version
pip 21.2.3 from ...\lib\site-packages\pip (python 3.10)
(venv) C:\> pip --version
pip 21.2.3 from ...\lib\site-packages\pip (python 3.10)
Please read: https://realpython.com/what-is-pip/#using-pip-in-a-python-virtual-environment
Working with virtual environment.
Create a Virtual Environment:
$ python3 -m venv ~/myVirtualEnv
Access Virtual Environment directory:
$ cd ~/myVirtualEnv
Activate (start) this Virtual Environment
$ source bin/activate
-> your shell prompt changes to something like:
(myVirtualEnv) jose@nigriventer:~/myVirtualEnv$
If you are inside myVirtualEnv you can run pip3 directly to install packages. Of course, these packages will stay “locked” inside this virtual environment.
(myVirtualEnv) jose@nigriventer:~/myVirtualEnv$ pip3 install adafruit-ampy
Do the following:
cd /usr/lib/python3.11
sudo rm EXTERNALLY-MANAGED
If you choose to restore this mechanism, create the same file again with the touch command:
sudo touch EXTERNALLY-MANAGED
3
Try these to avoid the externally managed env error:

python3 -m venv path/to/venv
source path/to/venv/bin/activate
python3 -m pip install xyz

(xyz being any package)
I was able to circumvent the functionality by simply installing Anaconda.
1
Take a look at this. It fixes it without using venv, in Bash:

- Open Terminal on Mac, and make sure your shell is Bash
- Type nano ~/.bash_profile
- Use the arrow keys to go to the bottom of .bash_profile
- Paste export PATH=".:/Library/Frameworks/Python.framework/Versions/3.12/bin:/usr/local/bin:${PATH}" at the bottom of .bash_profile
- Press Control + X, then Y, and then Enter
- Type source ~/.bash_profile
- Enjoy!
1
Context:
I am ultimately building a Node.js image with a C++ dependency (node-libcurl), but I need python3 installed in order for node-gyp to build that C++ dependency for multiple architectures. I wrote this answer because too many other answers are taking the easy route of disabling the warning, where you risk punting a problem down the road to bite you later.
My Dockerfile was working fine until a recent build of python3 where setuptools was no longer explicitly included in apk install. This Dockerfile was working last year, but had since started to break. It simply wouldn’t build. The node-gyp package uses gyp in python3, and it simply wasn’t there due to a missing setuptools package.
ARG NODE_TAG=20-alpine
ARG PLATFORM_ARCH=linux/amd64
FROM --platform=${PLATFORM_ARCH} node:${NODE_TAG} as devBuild
WORKDIR /home
# ------------------
# dev dependencies
# ------------------
# node-libcurl build from source (works on x64 / arm64 as of Oct 2023)
RUN apk add --update --no-cache libcurl python3 alpine-sdk curl-dev \
 && npm install node-libcurl --build-from-source \
 && rm -rf /var/cache/apk/*
# ----------------
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
# ... and so on
Based on this SO answer, I could see that getting setuptools was the solution. So I tried python3 -m pip install setuptools:
# ------------------
# dev dependencies
# ------------------
# node-libcurl build from source (works on x64 / arm64 as of Oct 2023)
RUN apk add --update --no-cache libcurl python3 alpine-sdk curl-dev \
 && python3 -m pip install setuptools \
 && npm install node-libcurl --build-from-source \
 && rm -rf /var/cache/apk/*
# ----------------
But it wasn’t working, and I was getting the above message about an “externally-managed-environment”, which led me to here on SO.
This message was advising me to use apk to install setuptools, since python3 was controlled by that tooling. So I finally realized (thanks to hints from other answers here) that I should add py3-setuptools in the dev dependencies section:
# ------------------
# dev dependencies
# ------------------
# node-libcurl build from source (works on x64 / arm64 as of Oct 2023)
RUN apk add --update --no-cache libcurl python3 py3-setuptools alpine-sdk curl-dev \
 && npm install node-libcurl --build-from-source \
 && rm -rf /var/cache/apk/*
# ----------------
And now I’m able to build my Node-based Docker image for both x64 and arm architectures again!
1
I am not sure about the author’s environment and which package they’re trying to install, but perhaps this will help someone else.
I just got this error while using the Python extension for Visual Studio Code. It requires the installation of Pylint in WSL, and when attempting to do it, I got the same error. This can be resolved by installing Pylint using APT:
sudo apt install python3-pylint-common
One other way to install packages that are not available via the distribution’s package manager may be pip’s prefix option, as documented at packaging.python.org
pip install --prefix=/some/path
Calls sysconfig.get_preferred_scheme('prefix').
The prefix is distribution dependent. E.g., Debian uses /usr/local for packages not installed via the system package manager.
Some gotchas are also possible. On Devuan (so possibly also on Debian itself and other derivatives), the prefix needed to target /usr/local is /usr:
pip install --prefix=/usr some_package
installs some_package in /usr/local where it is visible to applications installed by the system package manager.
However,
pip install --prefix=/usr/local some_package
installs some_package in /usr/local/local, which does not work.
I ran across this problem while running some tasks on a pipeline job that uses the Docker image. I hadn’t had this problem before, but since I am not using any specific tag for the Docker image, I am also not very surprised.
I was running the command:
python3 -m pip install --upgrade pip
which I replaced with:
python3 -m pip install --upgrade pip --break-system-packages
And things worked as usual.
1
Simply go with:
sudo apt install python3-django
If you’re using venv and you still get this error, try
pip cache purge
1
Navigate to the directory:

cd /usr/lib/python3.11

Then run:
sudo rm EXTERNALLY-MANAGED
I ran into it while trying to install numpy in a google/cloud-sdk docker image. E.g.:
$ docker run --rm -it google/cloud-sdk:477.0.0-alpine
/ # apk add py3-pip
/ # pip install numpy
<the externally managed message>
To overcome this I did:
$ docker run --rm -it google/cloud-sdk:477.0.0-alpine
/ # apk add py3-pip
/ # python -m venv env
/ # env/bin/pip install numpy
/ # export PYTHONPATH=/env/lib/python3.11/site-packages
PYTHONPATH adds the directory where /env/bin/pip installs packages (/env/lib/python3.11/site-packages) to the search path.
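This works because entries of PYTHONPATH are prepended to sys.path at interpreter startup, even if the directory does not exist yet. A quick demonstration, reusing the /env path from the container above (hypothetical on your machine):

```python
import os
import subprocess
import sys

# Entries in PYTHONPATH appear in sys.path of a freshly started interpreter
site = "/env/lib/python3.11/site-packages"
env = dict(os.environ, PYTHONPATH=site)
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True,
).stdout
print(site in out)  # True
```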
And judging from the gcloud compute ssh INSTANCE output it worked. Without numpy it says:
WARNING:
To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth
Or the other way to confirm this:
/ # python -c 'import numpy'
You can find the resulting image in the following gist.
When running in a docker container use:

RUN printf '[global]\nbreak-system-packages = true\n' > /etc/pip.conf

It fixes this by using the global pip config. Virtual environments and other such machinery are pointless in docker containers, and the obsession with Python virtual environments and the “breaking packages” warning likely comes from the “Java/Maven experience” with its serious version issues. Write your Python code to be 10 years backwards compatible instead, and skip all the venv nonsense. Only rarely do new libs of good quality cause issues for old code, and when it does happen the issues can be patched.
Similar to other answers, I set the break-system-packages flag for running in a docker container. But I suggest doing it with pip itself, to avoid having to figure out where pip.conf is. After installing pip, run:
python -m pip config --global set global.break-system-packages true
1