I have a PySpark job that runs on an EMR cluster. Is there any way, from within the script itself, to fail the job and then restart it when a certain condition is met? Currently I throw an exception, but that just fails the job and stops it. I want it to start again automatically.
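To illustrate, this is roughly what I mean by restarting from within the script: a minimal sketch of a retry loop around the driver logic, assuming the work can be safely re-run from the top. `run_job` and `TransientJobError` are hypothetical placeholders, not my actual code:

```python
import time


class TransientJobError(Exception):
    """Raised when the job hits a condition that should trigger a retry."""


def run_job(attempt):
    # Hypothetical stand-in for the real PySpark driver logic.
    # Fails on the first attempt to demonstrate the retry path.
    if attempt < 2:
        raise TransientJobError("simulated transient failure")
    return "success"


def run_with_retries(max_attempts=3, backoff_seconds=0):
    for attempt in range(1, max_attempts + 1):
        try:
            return run_job(attempt)
        except TransientJobError:
            if attempt == max_attempts:
                raise  # out of retries: let the EMR step fail for real
            time.sleep(backoff_seconds)
```

Is something like this loop the right approach, or is there an EMR-level mechanism (step retries, or resubmitting the step) that is better suited?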