I am using Azure DevOps self-hosted agents to run some tests.
Pipeline file:
name: $(Date:yyyyMMdd)$(Rev:.r)
trigger: none
pr:
- master
- main
- release*
pool:
  name: sonarcube-agents
variables:
  AZURE_SUBSCRIPTION: testsub-dev
  IMAGE_NAME: $(Build.DefinitionName)
  SOURCE_DIRECTORY: $(System.DefaultWorkingDirectory)
#specifying resources #test change to trigger 15
jobs:
- job: BuildAndTest
  displayName: 'Build and Test Job'
  steps:
  - script: |
      echo "Running on self-hosted agent"
      echo "Step 1: Checking out the repository"
    displayName: 'Checkout Code'
  - checkout: self
    #clean: true
  - script: |
      echo "Step 2: Building the project"
      # Add your build commands here
      # For example, for a .NET project, it might be:
      # dotnet build
    displayName: 'Build Project'
  - script: |
      echo "Step 3: Running tests"
    displayName: 'Run Tests'
  - script: |
      echo "Step 4: SonarQube analysis"
      # Add your SonarQube analysis commands here
      # For example, if using SonarScanner CLI:
      # sonar-scanner -Dsonar.projectKey=my_project -Dsonar.sources=.
    displayName: 'SonarQube Analysis'
  #more steps here that run the tests and generate some files in the working directory.
  workspace: # Workspace options on the agent.
    clean: all
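For completeness: the commented-out clean: true under the checkout step is the step's own clean input, which is separate from the job-level workspace setting. Enabled, that step would look like this (as far as I know it only cleans the repo before fetching):

- checkout: self
  clean: true  # as I understand it, runs git clean -ffdx and git reset --hard HEAD before fetching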
In theory, the part

  workspace: # Workspace options on the agent.
    clean: all

should clean the entire working directory before a new run starts, but when we trigger a new pipeline it fails because the working directory is not empty:
##[error]One or more errors occurred. (One or more errors occurred. (Access to the path '/data/vsts-agent/workspace/1/s/__pycache__/app.cpython-39.pyc' is denied.) (Access to the path '/data/vsts-agent/workspace/1/s/migrations/__pycache__/env.cpython-39.pyc' is denied.)) (Access to the path '/data/vsts-agent/workspace/1/s/__pycache__/app.cpython-39.pyc' is denied.) (Access to the path '/data/vsts-agent/workspace/1/s/migrations/__pycache__/env.cpython-39.pyc' is denied.)
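When this happens I look at the offending paths directly on the agent box, roughly like this (paths copied from the error above; these are just the ad-hoc commands I run, not part of the pipeline):

# inspect the files that block the clean-up, and their owner/permissions
ls -la /data/vsts-agent/workspace/1/s/__pycache__/
ls -la /data/vsts-agent/workspace/1/s/migrations/__pycache__/
stat /data/vsts-agent/workspace/1/s/__pycache__/app.cpython-39.pyc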
This is the actual working directory of the pipeline:
/data/vsts-agent/workspace/1/s/
and
__pycache__/
is one of the folders created by the previous pipeline run while the tests were running.
If I SSH into the runner and clean up the working directory by hand, the pipeline runs fine the first time, but it fails again on the second run because of the output directories and files produced by the previous successful run.
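For reference, the manual cleanup I do over SSH is essentially this (the workspace path is the one shown above; the exact commands are just what I happen to run):

# wipe the sources directory of workspace 1 on the agent
rm -rf /data/vsts-agent/workspace/1/s/*
# or, more selectively, drop only the Python bytecode caches the tests leave behind
find /data/vsts-agent/workspace/1/s -type d -name '__pycache__' -prune -exec rm -rf {} +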