Hi everyone, I have the following need for my use case: suppose I have a number $n$. I want to find the values $n, n+1, n+2, \ldots, n+k-1$.

I want to split this work into chunks, where each thread $t$ starts at some value $n+t_i$ and computes its "chunk" in parallel.

In doing so, each thread will remember its last computed result, e.g. while computing $n+k$ it will simply add 1 to $n+k-1$, which it computed an iteration ago within its own chunk. I do not care about the order in which these values are returned; I just want the computations to memoize as much as possible.
For context, I'm mining elliptic curve points that map to nice-looking libp2p peer IDs. Generating keys at random works well, but it redoes a lot of redundant work when computing the public key from a secret key.
Example: $k=6$ with 2 threads
Thread 1: $[n, n+1, n+2]$
Thread 2: $[n+3, n+4, n+5]$
Example: $k=6$ with 3 threads
Thread 1: $[n, n+1]$
Thread 2: $[n+2, n+3]$
Thread 3: $[n+4, n+5]$
Basically, I will have a struct that stores the number of values ($k$ here) and want to implement `ParallelIterator` for this struct, so that I can call `into_par_iter` and have this logic happen in the background.
Thanks in advance for your help!
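One way to get that shape without hand-implementing `ParallelIterator` is to parallelize over chunk starts and walk each chunk sequentially, so only the first value of each chunk is computed from scratch. A minimal sketch, where `Point`, `point_for`, `add_one`, and the `PointSearch` struct are hypothetical stand-ins for the real curve library, and the chunk size is just a tuning guess:

```rust
use rayon::prelude::*;

// Stand-ins for the real curve library (assumed names, not an actual API):
// `Point` would be a curve point, `point_for(s)` the expensive scalar
// multiplication for scalar `s`, and `add_one` one cheap point addition.
type Point = u128;
fn point_for(s: u64) -> Point { s as Point }
fn add_one(p: Point) -> Point { p + 1 }

// The struct from the question: the base value `n` and the count `k`.
struct PointSearch { n: u64, k: usize }

impl PointSearch {
    // Parallelize over chunk starts; each chunk is walked sequentially,
    // so every value after the first reuses its predecessor.
    fn par_points(&self) -> impl ParallelIterator<Item = (u64, Point)> {
        let (n, k) = (self.n, self.k);
        let chunk = 1024; // tuning knob: consecutive values per job
        (0..k).into_par_iter().step_by(chunk).flat_map_iter(move |start| {
            let end = (start + chunk).min(k);
            let mut p = point_for(n + start as u64); // expensive, once per chunk
            (start..end).map(move |i| {
                if i > start {
                    p = add_one(p); // cheap incremental step
                }
                (n + i as u64, p)
            })
        })
    }
}
```

`flat_map_iter` (rather than `flat_map`) matters here: the inner iterator is consumed sequentially on one thread, which is what makes the stateful `p` safe.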
> e.g. while computing $n+k$ they will simply add 1 to $n+k-1$ which it computed an iteration ago within its own chunk.
Is this an oversimplified example? Because for this little computation, you could just zip with a range.
For more complicated cases, map_with or map_init might do the trick, possibly with enumerate() first to know where you are, assuming you have enough context to start memoizing from arbitrary points in each instance. If it's randomized, that's probably trivial.
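A minimal sketch of the `map_init` route, with the same kind of hypothetical stand-ins (`Point`, `point_for`, `add_one`) in place of the real curve library. With a parallel range the item is already the index, so `enumerate()` isn't needed here; the cache starts empty whenever rayon begins a new sequential batch:

```rust
use rayon::prelude::*;

// Hypothetical stand-ins for the curve operations (not a real API):
type Point = u128;
fn point_for(s: u64) -> Point { s as Point }
fn add_one(p: Point) -> Point { p + 1 }

fn points(n: u64, k: usize) -> Vec<(u64, Point)> {
    (0..k)
        .into_par_iter()
        .map_init(
            || None, // fresh cache for each sequential batch rayon runs
            |cache: &mut Option<(usize, Point)>, i| {
                let p = match cache.take() {
                    // the previous item in this batch was i - 1: reuse it
                    Some((prev, q)) if prev + 1 == i => add_one(q),
                    // first item of a batch: compute from scratch
                    _ => point_for(n + i as u64),
                };
                *cache = Some((i, p));
                (n + i as u64, p)
            },
        )
        .collect()
}
```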
> Is this an oversimplified example? Because for this little computation, you could just zip with a range.
Yep, it's oversimplified: the + there is elliptic curve point addition, and I will be doing a very large number of those (256-bit scalars). I'm using a library for the addition, though, and it uses the efficient method (double-and-add), so it's still not bad, but I want to memoize as much as I can.
I came across the said map functions, but I believe the init values there are created/cloned per job, whereas in my case I want the values created within a job to be shared with that job's later steps. I'm wondering if I should use https://docs.rs/rayon/latest/rayon/fn.scope.html instead, spawning as many jobs as I have threads, so that each "thread" remembers its result within that job.
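A sketch of that `scope`-based idea, again with hypothetical stand-ins for the curve operations. Each spawned job computes its chunk's first point from scratch and derives the rest incrementally:

```rust
use std::sync::Mutex;

// Hypothetical stand-ins for the curve operations (not a real API):
type Point = u128;
fn point_for(s: u64) -> Point { s as Point }
fn add_one(p: Point) -> Point { p + 1 }

fn points_scoped(n: u64, k: usize, jobs: usize) -> Vec<(u64, Point)> {
    let results = Mutex::new(Vec::with_capacity(k));
    let chunk = k.div_ceil(jobs).max(1); // last job absorbs any remainder
    rayon::scope(|s| {
        for start in (0..k).step_by(chunk) {
            let results = &results;
            s.spawn(move |_| {
                let end = (start + chunk).min(k);
                let mut p = point_for(n + start as u64); // expensive, once per job
                let mut local = Vec::with_capacity(end - start);
                local.push((n + start as u64, p));
                for i in start + 1..end {
                    p = add_one(p); // each value reuses the previous one
                    local.push((n + i as u64, p));
                }
                results.lock().unwrap().extend(local);
            });
        }
    });
    results.into_inner().unwrap()
}
```

Passing `rayon::current_num_threads()` as `jobs` matches the "as many jobs as threads" idea.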
Would love a par_iter implementation regardless, though. I haven't tried it and am still new to async stuff, but what if I use the DashMap that I have in my struct (which implements par_iter) and have each thread access its previous result, with the thread ID as key and the result as value?
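For what it's worth, a sketch of that `DashMap` idea, keyed by `rayon::current_thread_index()` rather than an OS thread ID, with the same hypothetical stand-ins. Because of work stealing a thread can see non-contiguous indices, so the cache entry records the last index and is only reused when it is exactly the predecessor:

```rust
use dashmap::DashMap;
use rayon::prelude::*;

// Hypothetical stand-ins for the curve operations (not a real API):
type Point = u128;
fn point_for(s: u64) -> Point { s as Point }
fn add_one(p: Point) -> Point { p + 1 }

fn points_dashmap(n: u64, k: usize) -> Vec<(u64, Point)> {
    // One cache slot per rayon worker thread, keyed by thread index.
    let cache: DashMap<usize, (usize, Point)> = DashMap::new();
    (0..k)
        .into_par_iter()
        .map(|i| {
            let tid = rayon::current_thread_index().unwrap_or(0);
            let p = match cache.get(&tid).map(|e| *e) {
                // this thread's previous item was i - 1: reuse its result
                Some((prev, q)) if prev + 1 == i => add_one(q),
                // first item, or work stealing handed us a far-away index
                _ => point_for(n + i as u64),
            };
            cache.insert(tid, (i, p));
            (n + i as u64, p)
        })
        .collect()
}
```

Note that `map_init` gets the same per-thread caching without the shared map, so the `DashMap` mostly buys convenience if the map already lives in your struct.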