I have an automation program that controls other applications (such as a browser or a desktop app).
I’ve written a function with a while loop that determines the application’s current page or state and decides the next action to execute.
void OuterMostFunc()
{
    while (!exit)
    {
        // InitSomethings...
        InnerFunc();
        // HandleSomethings...
        Thread.Sleep(milliseconds);
    }
}
void InnerFunc()
{
    if (A_Page_Cond)
        A_Func();
    else if (B_Page_Cond)
        B_Func();
    ...
}
However, these applications may also be operated manually or hit unexpected events that change the page or state.
Therefore, I must check for status changes at many key points and break out to return to the outermost function to reassess the situation.
A relatively elegant solution is to use try-catch together with a custom BreakException, which lets me return to the outermost function from anywhere in the code.
Additionally, I reset all critical variables at the outermost level, so I don’t need to worry about issues caused by interruptions.
void OuterMostFunc()
{
    while (!exit)
    {
        // InitSomethings...
        try
        {
            InnerFunc();
        }
        catch (BreakException ex)
        {
        }
        // HandleSomethings...
        Thread.Sleep(milliseconds);
    }
}
void Any_Deep_Level_Func()
{
    // DoSomethings.
    if (CheckPageIsChanged())
        throw new BreakException();
    // DoSomethings.
}
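For completeness, the custom exception used above can be a minimal Exception subclass; a sketch (only the class name comes from the snippets above, the rest is assumed):

```csharp
using System;

// Minimal control-flow exception used only to unwind back to OuterMostFunc.
// Sealed and parameterless: it carries no payload, it just signals "re-evaluate".
public sealed class BreakException : Exception
{
    public BreakException() { }
    public BreakException(string message) : base(message) { }
}
```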
For a long time, this approach has been useful and reliable.
However, because the program’s application context demands utmost speed, the performance overhead of try-catch has become difficult to ignore.
For example, a throw-catch round trip can take tens of milliseconds and imposes a noticeable CPU burden.
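For reference, the raw throw/catch cost can be measured with a rough Stopwatch loop (not a rigorous benchmark; the very first throw also pays one-time JIT cost, so results vary by environment):

```csharp
using System;
using System.Diagnostics;

class ExceptionCostDemo
{
    static void Main()
    {
        const int iterations = 10_000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            try { throw new InvalidOperationException(); }
            catch (InvalidOperationException) { /* swallowed, like BreakException */ }
        }
        sw.Stop();
        Console.WriteLine($"Avg per throw/catch: {sw.Elapsed.TotalMilliseconds / iterations:F4} ms");
    }
}
```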
If every function returned bool (or used an out bool), and we checked after each call whether to return or continue, we could indeed get back to the outermost function quickly.
However, this approach seems disastrous for code readability, especially given the actual complexity and deeply nested call structure of the program.
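A sketch of what that bool-returning style looks like (all step names here are hypothetical stubs), illustrating why every call site turns into a check-and-return:

```csharp
using System;

// Hypothetical sketch of the bool-returning alternative: every step's
// result must be checked and propagated, so control-flow noise dominates.
class BoolStyleDemo
{
    static bool CheckPageIsChanged() => false;   // stub: pretend the page is stable
    static bool ClickLoginButton() => true;      // hypothetical step
    static bool FillLoginForm() => true;         // hypothetical step

    // Returns false as soon as the page changed or a step failed;
    // every caller up the stack must repeat the same dance.
    static bool A_Func()
    {
        if (!ClickLoginButton()) return false;
        if (CheckPageIsChanged()) return false;
        if (!FillLoginForm()) return false;
        return true;
    }

    static void Main() => Console.WriteLine(A_Func()); // prints "True" with these stubs
}
```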
Supplementary Note:
- For example, I might need to perform a series of operations on Page A, but before executing certain critical steps I need to re-check the status for changes.
- Responsiveness after a state change takes priority; raw performance is secondary.
Since this runs synchronously, a continuously running while loop is itself detrimental to performance, especially with Thread.Sleep calls. Don’t do that. Switch to an asynchronous implementation.
Here is how a long-running application (such as a Windows service or web service) is commonly implemented in recent .NET versions using BackgroundService:
public sealed class AppService : BackgroundService
{
    // change the update interval as you like, or read it from config
    private static readonly TimeSpan _interval = TimeSpan.FromMilliseconds(20);

    private readonly ILogger<AppService> _logger;

    public AppService(ILogger<AppService> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                // use an asynchronous check if possible
                // (a synchronous one might also be fine here)
                if (await CheckChangedAsync(stoppingToken))
                {
                    // do something
                }
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                // was canceled
                break;
            }
            catch (Exception exception)
            {
                _logger.LogError(exception, "Unexpected exception occurred");
                // throw or exit gracefully
                throw;
            }

            // pause to free resources before the next update
            await Task.Delay(_interval, stoppingToken);
        }
    }
}
Notes:
- Thread.Sleep will still tie up a thread, so the async approach is better.
- Async itself is very fast, fast enough for whatever you are possibly doing here.
- If you need to make the checks often, and you are able to actually perform them asynchronously, you could also get rid of the _interval altogether and just rely on the asynchronous nature of CheckChangedAsync.
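Building on that last point: if changes can be pushed rather than polled, the check can be backed by a System.Threading.Channels.Channel and the loop simply awaits the next notification. A minimal self-contained sketch (the channel setup and message strings are assumptions, not part of the original design):

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

class ChannelDemo
{
    static async Task Main()
    {
        // Change notifications are pushed into the channel by watchers/hooks.
        var changes = Channel.CreateUnbounded<string>();
        using var cts = new CancellationTokenSource();

        var consumer = Task.Run(async () =>
        {
            // The loop awaits the next change directly: no polling, no Task.Delay.
            await foreach (var change in changes.Reader.ReadAllAsync(cts.Token))
            {
                Console.WriteLine($"Handling change: {change}");
                if (change == "exit") break;
            }
        });

        changes.Writer.TryWrite("page A -> page B"); // simulated state change
        changes.Writer.TryWrite("exit");             // simulated shutdown signal
        await consumer;
    }
}
```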