I have a pdm-based Python app (let’s call it “myapp”) and I want it installed on a Debian target system via a Debian package. The Debian package should be built in a GitLab CI process.
An illustration, just to keep it in mind – this is what should happen in the GitLab CI process:
+--------------------+ +-----------+ +---------------------------+
| python app "myapp" | -> | pdm build | -> | deb package "myapp...deb" |
+--------------------+ +-----------+ +---------------------------+
The problem is that generating the Debian package like this works flawlessly when I perform the steps from `.gitlab-ci.yml` interactively in a Docker container – but the GitLab CI process always fails with:
```
[...]
Processing /builds/aye/_build_dir
Installing build dependencies: started
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
[...]
make: *** [debian/rules:4: binary] Error 1
dpkg-buildpackage: error: debian/rules binary subprocess returned exit status 2
debuild: fatal error at line 1182:
dpkg-buildpackage -us -uc -ui -b failed
```
The error message (pip failure) seems to point to an Internet connection problem, i.e. pip is unable to fetch what it needs.
Here is the `.gitlab-ci.yml` file used:
```yaml
variables:
  http_proxy: $CODE_PROXY
  https_proxy: $CODE_PROXY
  no_proxy: $CODE_NO_PROXY
  HTTP_PROXY: $http_proxy
  HTTPS_PROXY: $http_proxy
  NO_PROXY: $no_proxy

build_deb:
  image: debian:12
  script:
    - apt update && apt-get install -y dh-virtualenv devscripts python3-full python3-pip build-essential python3-dev python3-venv python3-pdm
    - mkdir -p _build_dir
    - cd _build_dir
    - cp -r ../* . || true # ignore error from copying _build_dir into itself
    - debuild -b -us -uc
    - cd ..
  artifacts:
    paths:
      - myapp*
  rules:
    - if: $CI_COMMIT_TAG
```
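A quick way to narrow this down would be a debug step in the job’s `script`, right before `debuild`, to confirm that the proxy variables actually reach the build shell. A minimal sketch – the exported URL is a hypothetical stand-in for `$CODE_PROXY`:

```shell
#!/bin/sh
# Print the proxy-related variables the build shell actually sees.
# The exported value here is a hypothetical stand-in for $CODE_PROXY.
export http_proxy="http://proxy.example.corp:3128"
env | grep '^http_proxy=' || echo "no http_proxy set"
```

In the real job you would of course not re-export anything, just run the `env | grep` line to inspect what GitLab already injected.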
Well, if I perform these steps manually in a “debian:12” Docker container, everything, including the last step `debuild`, works like a charm.
But if I manipulate the proxy settings, the interactive build also fails, showing the same behavior as the failed GitLab CI pipeline. For me this is proof that the proxy settings are the cause of the problem.
However, as you can see, I do set the proxy environment variables at the beginning of the `.gitlab-ci.yml` script. The variables used (`$CODE_PROXY` etc.) are predefined variables from the GitLab installation. The shared runners used have no direct connection to the Internet, hence the proxy settings. The values of the (corporate) proxy settings are proven correct and work – just not in this case, as it seems.
So, the difference between the interactive build (working) and the automated CI build (not working) is:
- the “debian:12” Docker container on my PC does not need any proxy settings – so none are set
- the GitLab CI process relies on a (corporate) proxy; this is set, but the build fails anyway
The `debian/rules` file – which dictates how the Debian package is built – is:
```make
#!/usr/bin/make -f
%:
	dh $@ --with python-virtualenv --python /usr/bin/python3
```
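To see what environment the build steps actually inherit at build time, the rules file could temporarily get a debug hook. A sketch using the standard debhelper `override_dh_auto_build` mechanism – the `grep` line is purely diagnostic and would be removed afterwards:

```make
#!/usr/bin/make -f
%:
	dh $@ --with python-virtualenv --python /usr/bin/python3

# Hypothetical debug hook: dump proxy-related variables right before the
# Python build step runs, then continue with the normal build.
override_dh_auto_build:
	env | grep -i proxy || true
	dh_auto_build
```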
So the entire “debhelper” magic is within the `dh` command (more exactly, in the `dh binary` step invoked by `dh`).
All the information I could find states that pdm (invoked by `dh`’s `dh_auto_build` feature) honors such environment variables.
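That matches how the Python tooling behaves in general: pip (which the build backend resolution ultimately shells out to) picks up the lowercase proxy variables from the environment – the same lookup Python itself exposes via `urllib.request.getproxies()`. A minimal check, using a hypothetical proxy URL:

```shell
#!/bin/sh
# The exported value is a hypothetical placeholder, not a real proxy.
export http_proxy="http://proxy.example.corp:3128"
python3 -c 'import urllib.request; print(urllib.request.getproxies().get("http"))'
```

If this prints the proxy URL in the environment where pip later fails, the variables are at least visible at that shell level.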
What could be the problem? I am stuck …
UPDATE 2024-08-31 14:15 UTC:
I tested some variants – the proxy behavior seems a bit weird to me, because it looks inconsistent. Look at this:
I created a GitLab CI pipeline with 3 jobs. Each job performs the task of building the Debian package differently:
Job 1: call `debuild`

A simple `debuild -b -us -ui`

Job 2: call `dpkg-buildpackage`

`debuild` calls `dpkg-buildpackage` and then `lintian` – so in this variant the former is called directly instead of using `debuild`:

```
dpkg-buildpackage -us -uc -ui -b
```

Job 3: call the `dh` scripts

`dpkg-buildpackage` invokes several scripts. So in this variant the necessary `dh` steps are called one after the other explicitly:

```
dh clean --with python-virtualenv --python /usr/bin/python3
dh build --with python-virtualenv --python /usr/bin/python3
dh binary --with python-virtualenv --python /usr/bin/python3
```
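Put together, the experiment pipeline looks roughly like this (a sketch: the job names and the shared setup anchor are my own, and the `apt` line is shortened):

```yaml
.setup: &setup
  - apt update && apt-get install -y dh-virtualenv devscripts build-essential python3-full python3-pdm

job_debuild:
  image: debian:12
  script:
    - *setup
    - debuild -b -us -ui

job_dpkg_buildpackage:
  image: debian:12
  script:
    - *setup
    - dpkg-buildpackage -us -uc -ui -b

job_dh_scripts:
  image: debian:12
  script:
    - *setup
    - dh clean --with python-virtualenv --python /usr/bin/python3
    - dh build --with python-virtualenv --python /usr/bin/python3
    - dh binary --with python-virtualenv --python /usr/bin/python3
```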
Result from the GitLab CI pipeline:

- Job `debuild`: FAIL
- Job `dpkg-buildpackage`: SUCCESS
- Job `dh scripts`: SUCCESS
How can it be that `debuild` fails at the stage of calling `dh binary`, when this step works both in the `dpkg-buildpackage` job and in the job which calls the `dh` steps one after the other?
What does `debuild` do differently (with respect to proxy settings, as it seems …)?