I have a Jenkins multibranch pipeline that stores the status of branches in a shared Groovy object on S3.
The file looks something like this:
[["name":"def-123","current_status":"success"],["name":"abc-234","current_status":"success"]]
Here the name field identifies one of our feature Git branches and current_status is that branch's state: success means the feature branch is deployed, and deleted means the feature branch's environment has been destroyed (the branch itself still exists).
The pipeline updates this file mainly in the deployment stage and in the destroy stage (which runs after successful testing).
We run into problems when two Jenkins branch jobs update this file concurrently.
For example, both jobs read the file at the same time, but one reads it in its deploy stage and the other in its destroy stage.
The destroy job sets its branch's status to deleted, but if the deploy job writes its own copy of the file a little later, that entry is overwritten back to success.
How can I prevent this? I know S3 now supports conditional writes, but I'm not sure how to use them here.
One way to implement a mutex in Jenkins is the lockable-resources plugin. From its documentation page:
echo 'Starting'
lock('my-resource-name') {
    echo 'Do something here that requires unique access to the resource'
    // any other build will wait until the one locking the resource leaves this block
}
echo 'Finish'
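Applied to your case, the entire read-modify-write of the S3 file has to happen inside the lock, and every branch job (deploy and destroy alike) must use the same resource name so they serialize against each other. A rough declarative-pipeline sketch, where the bucket, file, and stage names are placeholders for your own:

```groovy
pipeline {
    agent any
    stages {
        stage('Update status file') {
            steps {
                // Same resource name in every branch job, so only one
                // read-modify-write of the shared file runs at a time.
                lock('branch-status-file') {
                    sh 'aws s3 cp s3://my-bucket/status.groovy status.groovy'
                    // ... update the entry for this branch in the local copy ...
                    sh 'aws s3 cp status.groovy s3://my-bucket/status.groovy'
                }
            }
        }
    }
}
```

Note the lock only helps if no job ever touches the file outside a lock('branch-status-file') block.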
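Alternatively, since the question mentions S3 conditional writes: they let you make the update itself safe without a Jenkins-side lock, by turning the read-modify-write into a compare-and-swap on the object's ETag. A minimal sketch in Python with boto3 (the bucket/key names and the update_branch_status helper are illustrative, and passing IfMatch to put_object requires a recent boto3 that exposes S3's conditional-write parameters):

```python
import json

try:
    from botocore.exceptions import ClientError
except ImportError:
    # Fallback so the sketch can be exercised without botocore installed;
    # it mirrors ClientError's (error_response, operation_name) signature.
    class ClientError(Exception):
        def __init__(self, error_response, operation_name):
            super().__init__(operation_name)
            self.response = error_response


def update_branch_status(s3, bucket, key, branch, new_status, max_retries=5):
    """Read-modify-write the shared status file with an ETag precondition.

    Returns True once the write succeeds, False if max_retries conflicts
    occurred. `s3` is a boto3 S3 client (or anything with the same API).
    """
    for _ in range(max_retries):
        resp = s3.get_object(Bucket=bucket, Key=key)
        etag = resp["ETag"]
        entries = json.loads(resp["Body"].read())
        for entry in entries:
            if entry["name"] == branch:
                entry["current_status"] = new_status
                break
        else:
            entries.append({"name": branch, "current_status": new_status})
        try:
            # The write only succeeds if the object still carries the ETag
            # we read. If a concurrent job wrote in between, S3 answers
            # 412 Precondition Failed and we re-read instead of clobbering
            # the other job's update.
            s3.put_object(Bucket=bucket, Key=key,
                          Body=json.dumps(entries).encode(),
                          IfMatch=etag)
            return True
        except ClientError as e:
            if e.response["Error"]["Code"] != "PreconditionFailed":
                raise
    return False
```

With this approach the destroy job's deleted status can no longer be silently overwritten: the deploy job's stale write fails with a 412 and it retries against the fresh file, which still contains the deleted entry for the other branch.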