-
You could do that, but a separate process doesn't inherently give you anything in this case that wouldn't be possible with a thread, other than that clearer boundary. (Chrome uses processes specifically for secure sandboxing.) But I guess the complicated part is that you potentially want the view and the background thread to access the same, potentially complicated, pipeline. That part is easy with just an […]
-
I could use some help choosing the right path for an asynchronous or threaded design of my Iced-based app.
I'm designing a DAW (digital audio workstation). If that's an unfamiliar term to you, the main part is a long-running, near-real-time task that generates a stream of audio samples (typically 44.1 kHz). This design is interactive (not all DAWs are), so it shouldn't buffer more than a few milliseconds of samples. The audio is generated by an arbitrarily complex set of user-specified components. Those components are also viewable in real time; they update as the project renders. Some components' UI can be fairly rich; for example, they might display an animation calculated from thousands of audio samples, and the update frequency could be similar to a video game's.
This project started out with a completely synchronous, single-threaded backend with nothing but a CLI. Now I'm tacking a frontend onto it. I've implemented `update()` and `view()` for most of the components, as well as for the master "orchestrator" that assembles all the components into a coherent audio stream. I didn't want to dive into threading or asynchronous programming while I was vetting Iced, so I naively added a `Tick` message to the app, much like the Game of Life example, to give the backend a chance to work. Every few milliseconds it generates enough audio samples to stay ahead of the machine's audio output. This approach was enough to unblock progress, but I'm now ready to do this right.

Some of the (failed) approaches I have taken:
- Run the backend in its own thread, and route the components' `view()` methods back up to the Iced portion of the app. I started by wrapping an `Arc<Mutex<>>` around the orchestrator (which owns all the audio components and is in turn owned by the Iced `Application` instance), and updated the backend loop to grab and then release the mutex each time. Unfortunately, I then discovered that the borrow checker doesn't like it when the `Element` that `view()` returns came from a temporary receiver (`if let Ok(o) = orchestrator.lock() { o.view() }`). I think I can work around this by changing the `view()` method signature to indicate that the lifetime of the returned `Element` corresponds to something other than the temporary receiver, but no other Iced examples seem to need to do that, which seems like a smell that this approach is wrong.
- Make the work an `async` function and schedule it through `Command::perform(self.worker.work(), Message::WorkDone)` (or have it return an `Option<Future>`, which I think is equivalent). I haven't been able to get this to compile, because I'm too new to Rust to figure out how to do that without a "borrowed data escapes outside of associated function" error. Most of Iced's `Command::perform()` examples call argument-free functions, not methods on `self`, so it seems `perform()` is more for context-less setup or initialization, or Lisp-y pure-function work, than for continuation of long-running work. (I did see that Game of Life manages to call `perform()` with arguments, but as far as I can tell it clones `Life` to avoid retaining any references outside the closure. That wouldn't work for my application, because the `Orchestrator` owns too much data, and the inner cycle time is too short.)
- Use a `Subscription`: can I call `view()` on the thing providing the subscription stream? I don't think I can, at least not without getting into the same issues that the backend-in-a-thread approach found.

Intuitively, it doesn't seem that far-fetched to design a system that says "do your work as fast as you can, but check this [thing] for incoming requests, post to that [thing] to provide status updates, and be ready to stop the world from time to time and handle a `view()`."

Other ideas I'm thinking of:
- Stop `view()` being implemented by the components, and move it closer to the frontend, making it generate the view using component getters. This makes me sad, but maybe that's the same sadness that anyone raised on OO feels when they move to Rust. It does feel like an enormous price to pay just because I can't figure out how to call `view()` on a wrapped struct. But I might be falling into the "Actively trying to make components is a recipe for disaster in Elm" trap, and it's actually fine to put the `view()` logic elsewhere.
- Move the backend into a separate process. It doesn't solve the `view()` separation problem, but it sure makes it a lot clearer where the boundaries between model and view should be. It works for Chrome, so speed ought not to be an issue.

Sorry this question is so long, and I'm especially sorry if these are more Rust than Iced questions. I hope the detail I'm providing is useful for recommending an appropriate direction.