I am working on a Django project with 20+ apps, which means running the tests of 20+ apps, 3000+ tests in total. So I split them into subjobs in my pipeline, and every commit now runs 20 parallel unit-test jobs. The issue is that when more than one developer commits and the tests run for both, the pipeline gets stuck. The runners are registered on a single VM with 32 CPU cores and 24 GB of RAM; CPU utilization hits 100% and the jobs hang (effectively a deadlock).
How can I optimize my pipeline subjobs so that the pipeline doesn't get stuck when multiple developers are working at the same time?
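For reference, I suspect the relevant knob is the runner's concurrency configuration. A minimal sketch of a GitLab Runner `config.toml` (illustrative values and names, not copied from my actual file):

```toml
# Illustrative sketch only, not my real config.toml.
# `concurrent` caps how many jobs this runner process executes at once
# across ALL pipelines; `limit` caps jobs per registered runner entry.
concurrent = 40                    # high enough that 2 pipelines x 20 jobs all start together

[[runners]]
  name = "workstream-unit-test"    # hypothetical name
  limit = 0                        # 0 = no per-runner limit
  executor = "docker"              # assumption; could equally be shell
```

With settings like these, nothing stops 40 test jobs from landing on the same 32-core VM at once.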
My current job definition:

```yaml
unit_test:
  stage: test
  rules:
    - changes:
        - backend/
  tags:
    - workstream-unit-test
  when: manual
  allow_failure: false
  parallel:
    matrix:
      - APP_NAME:
          - apps.app1
          - apps.app2
          # ... apps.app3 through apps.app19 ...
          - apps.app20
  before_script:
    - apt-get update
    - apt-get install -y unixodbc unixodbc-dev
    - pip install virtualenv
    - virtualenv venv
    - source venv/bin/activate
    - pip install -r requirements.txt
  script:
    # turn e.g. apps.app1 into apps_app1_coverage.xml
    - COVERAGE_FILENAME=$(echo "$APP_NAME" | tr '.' '_')_coverage.xml
    - coverage run --source=$APP_NAME ./manage.py test $APP_NAME --settings=hrdb.test_settings --keepdb --noinput
    - coverage xml -i -o $COVERAGE_FILENAME
    - coverage report -m --fail-under=90 --skip-covered
  coverage: '/(?i)TOTAL.*? (100(?:\.0+)?%|[1-9]?\d(?:\.\d+)?%)$/'
  artifacts:
    expire_in: 2 hours
    paths:
      # glob, because the script replaces the dots in $APP_NAME with
      # underscores, so $APP_NAME_coverage.xml never matches the file produced
      - "*_coverage.xml"
```
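One option I have looked at (untested) is GitLab's `resource_group` keyed per app, which the docs show combined with `parallel:matrix`. It would stop two pipelines from running the same app's tests at the same time, though jobs for different apps could still pile up:

```yaml
# Untested sketch: serialize each app's test job across pipelines.
# resource_group with a variable key alongside parallel:matrix follows
# the pattern shown in the GitLab CI docs.
unit_test:
  # ...same job definition as above, plus:
  resource_group: unit-test-$APP_NAME
```

I am not sure whether this, lowering the runner's `concurrent` value, or restructuring the jobs themselves is the right approach.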
I expect to keep using the same VM with the same specs; I am looking for pipeline-level optimization so that pipelines run smoothly even when several developers commit at once.