Inside my SwiftUI views I trigger network requests to fetch data. No single API call returns all the data I need, so I have to make a few requests. I want to avoid hitting the API's rate limit, which is pretty low: 1 request per second.
I currently have:
```swift
struct SomeView: View {
    let id: String

    private func loadData() async {
        do {
            async let result1: () = api.loadUserComments(id)
            async let result2: () = api.loadUserData(id)
            async let result3: () = api.loadUserPosts(id)
            (_, _, _) = try await (result1, result2, result3)
        } catch {
            print("handle error \(error)")
        }
    }

    var body: some View {
        Text("foo")
            // TODO: read from SwiftData and render data
            .task(id: id) {
                await loadData()
            }
    }
}
```
which isn’t great because it doesn’t handle outbound rate limits and if a request fails then all of them are cancelled.
I tried using `AsyncChannel` from AsyncAlgorithms:
```swift
import AsyncAlgorithms

@Observable
class SharedViewModel {
    private let channel = AsyncChannel<() async throws -> Void>()
    private let taskInterval: Duration = .seconds(1)

    func monitorChannel() async throws {
        for await block in channel {
            let start = ContinuousClock.now
            try await block()
            let elapsed = ContinuousClock.now - start
            let remaining = taskInterval - elapsed
            if remaining > .zero {
                try await Task.sleep(for: remaining)
            }
        }
    }

    func performOperation(name: String) async throws {
        let id = UUID()
        print("\(id): \(name): sent to channel...")
        await channel.send {
            print("\(id): \(name): starting...")
            // TODO: actual API calls
            try await Task.sleep(for: .seconds(20))
            print("\(id): \(name): done!")
        }
    }
}
```
```swift
@main
struct MyApp: App {
    @State private var sharedViewModel = SharedViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .environment(sharedViewModel)
    }
}
```
and then in the view I have:
```swift
struct SomeView: View {
    let id: String
    @Environment(SharedViewModel.self) private var sharedViewModel

    private func loadData() async {
        do {
            async let result1: () = sharedViewModel.performOperation(name: "profile comments")
            async let result2: () = sharedViewModel.performOperation(name: "profile user data")
            async let result3: () = sharedViewModel.performOperation(name: "stories")
            (_, _, _) = try await (result1, result2, result3)
        } catch {
            print("handle error \(error)")
        }
    }

    var body: some View {
        Text("foo")
            // TODO: read from SwiftData and render data
            .task(id: id) {
                await loadData()
            }
            .task {
                do {
                    try await sharedViewModel.monitorChannel()
                } catch {
                    print("error monitoring channel \(error)")
                }
            }
    }
}
```
but this isn’t really what I want because the tasks get cancelled as I navigate between views that use `sharedViewModel`.

I’m having trouble building a queue that is shared across the app and that also cancels the view-specific tasks on navigation (I don’t want them holding up the queue when they’re no longer needed).
The basic issue is that you do not want to call `monitorChannel` in the `task` view modifier of the `View`. You might do that in, for example, the `App`. In the view’s `task` modifier, you only want to `send` to the channel, not monitor the channel.
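For example, a minimal sketch of hoisting the monitoring into the `App` (reusing the `SharedViewModel` from the question; the `try?` is just to keep the sketch short):

```swift
@main
struct MyApp: App {
    @State private var sharedViewModel = SharedViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(sharedViewModel)
                .task {
                    // Lives as long as the root content, not any one view,
                    // so navigation no longer cancels the monitoring loop.
                    try? await sharedViewModel.monitorChannel()
                }
        }
    }
}
```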
You also do not want to `try` in the `for await` loop of `monitorChannel` (because you do not want the channel to stop just because one of the requests failed or was canceled).
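To illustrate the difference, here is a stand-in demonstration (a plain array of work items instead of a channel; all the names are mine): with `try?`, the consuming loop survives a thrown error and keeps processing later items.

```swift
enum FetchError: Error { case failed }

// Actor to collect results safely from Sendable closures.
actor Log {
    var entries: [String] = []
    func add(_ s: String) { entries.append(s) }
}

let log = Log()

// Three simulated work items; the second one fails.
let blocks: [@Sendable () async throws -> Void] = [
    { await log.add("a") },
    { await log.add("b"); throw FetchError.failed },
    { await log.add("c") },
]

for block in blocks {
    try? await block()  // `try?`: the loop continues past the failure
}

let entries = await log.entries
print(entries)  // ["a", "b", "c"] — "c" still ran after the failure
```

Had the loop used `try`, the error from the second block would have propagated and `"c"` would never have run.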
FWIW, this is one of the few cases where I might cut the Gordian knot and avoid channels altogether (because they make it really hard to provide proper cancelation support), and instead just `await` the prior task:
```swift
private var previousFiredAt: ContinuousClock.Instant?
private var task: Task<Void, Error>?

func perform(block: @escaping () async throws -> Void) async throws {
    let task = Task { [previousTask = task] in
        // `try?` because this task should continue regardless of whether
        // the previous one succeeded or not
        try? await previousTask?.value
        if let previousFiredAt {
            let nextFireAt = previousFiredAt.advanced(by: .seconds(1))
            if nextFireAt > .now {
                try await Task.sleep(until: nextFireAt)
            }
        }
        previousFiredAt = .now
        try await block()
    }
    self.task = task

    try await withTaskCancellationHandler {
        try await task.value
    } onCancel: {
        task.cancel()
    }
}
```
This allows you to cancel requests of particular views, while letting other requests proceed unimpeded.
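A self-contained version of that pattern, wrapped in an actor so the mutable state is safe to touch from concurrent callers (the `RequestThrottler` name and the configurable interval are mine, added for demonstration):

```swift
actor RequestThrottler {
    private var previousFiredAt: ContinuousClock.Instant?
    private var task: Task<Void, Error>?
    private let interval: Duration

    init(interval: Duration = .seconds(1)) {
        self.interval = interval
    }

    func perform(_ block: @escaping @Sendable () async throws -> Void) async throws {
        let task = Task { [previousTask = task] in
            // Proceed regardless of how the previous task ended.
            try? await previousTask?.value
            if let previousFiredAt {
                let nextFireAt = previousFiredAt.advanced(by: interval)
                if nextFireAt > .now {
                    try await Task.sleep(until: nextFireAt)
                }
            }
            previousFiredAt = .now
            try await block()
        }
        self.task = task

        // Canceling the caller cancels only this link in the chain.
        try await withTaskCancellationHandler {
            try await task.value
        } onCancel: {
            task.cancel()
        }
    }
}

// Usage: two back-to-back operations are forced at least `interval` apart.
let throttler = RequestThrottler(interval: .milliseconds(200))
let clock = ContinuousClock()
let elapsed = try await clock.measure {
    try await throttler.perform { /* first request */ }
    try await throttler.perform { /* second request, delayed */ }
}
print(elapsed >= .milliseconds(200))  // true
```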
I would also be inclined to put this logic at the lowest layer possible. E.g., I would do this at the network layer (where the requests are being submitted), not the API layer. You do not want the serialization (entailed by channels or my pattern above) to unintentionally include any subsequent parsing you might need. If you are doing simple parsing of JSON responses, that is not a big deal, but if you are doing things like creating images from large raw assets, it might make a difference.
Also, as a minor point, I think you want to move the `sleep` to the start of the task. Say you have two tasks that happen less than a second apart: you do not want the completion of the first task to wait one second; rather, you want the start of the second task to `sleep`. E.g., if you do something immediately after the first request, such as showing the results in the UI, you do not want that delayed because of network throttling. Put the delay where it is needed, namely in front of the subsequent requests, not after the prior requests.
But as you can see, this throttles the requests to one per second and supports cancelation (e.g., the view was dismissed at the ⓢ signpost), while not interfering with other requests that haven’t been canceled.
(All of this having been said, I am hoping that someone can propose a more elegant solution. This is pretty ugly for just “throttle my network activity.” Both channels and the above are enforcing sequential execution of network requests, but if you start moving around huge assets, that might be a substandard solution. All the other patterns I contemplated to avoid this problem were substantially more complicated.)