change compile_all to parallel #9
Thanks for this @felixschurk. Yes, it would certainly be an enhancement. Please feel free to open a PR and we can refine it before the next release. I've thought of this before, but as you note, it can be tricky to ensure that it is done correctly. I think we'll have to do something along the lines of:
Finally, if we can generalise this logic as a separate function, it could perhaps also be used for other bits in the script that process multiple files, like the encryption/decryption bits. What do you think?
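A hedged sketch of what such a generalised helper might look like. The name `run_over_files` and its interface are hypothetical, not part of calliope: it runs one command over many files, via GNU parallel when it is installed, and via a plain loop otherwise.

```shell
#!/bin/sh
# Hypothetical helper: apply $1 (a command) to each remaining argument
# (a file), in parallel when GNU parallel is available.
run_over_files() {
    cmd=$1; shift
    if command -v parallel >/dev/null 2>&1; then
        # -k keeps output in input order, so both branches behave alike.
        printf '%s\n' "$@" | parallel -k "$cmd" {}
    else
        # Fallback: sequential loop that keeps going past failures
        # but remembers that something failed.
        status=0
        for f in "$@"; do
            "$cmd" "$f" || status=1
        done
        return $status
    fi
}

# Demo with `cat` standing in for the compile/encrypt/decrypt step.
printf 'alpha\n' > a.txt
printf 'beta\n'  > b.txt
run_over_files cat a.txt b.txt
```

The compile, encrypt, and decrypt paths could then all call the same helper with a different command.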
From chatgpt: There are a few ways to determine which particular task failed when using GNU parallel. One way is to run it with `--joblog`, which creates a log file that records the exit status and command of each task run. By looking at the log file, you can determine which command failed and its corresponding exit status. I have no clue whether any of this would be helpful but I thought I'd put it here for you to decide.
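A small, self-contained sketch of how such a log could be inspected. GNU parallel's `--joblog` writes a tab-separated file whose `Exitval` column (column 7) holds each job's exit status and whose last column holds the command line. The sample log below is fabricated for illustration; the `pdflatex` commands and file names are assumptions.

```shell
#!/bin/sh
# Fabricate a tiny joblog in GNU parallel's format, so the parsing can
# be shown without parallel installed. A real log would come from e.g.:
#     ls *.tex | parallel --joblog compile.log pdflatex {}
printf 'Seq\tHost\tStarttime\tJobRuntime\tSend\tReceive\tExitval\tSignal\tCommand\n' >  compile.log
printf '1\t:\t0.0\t1.2\t0\t0\t0\t0\tpdflatex ok.tex\n'                              >> compile.log
printf '2\t:\t0.0\t0.3\t0\t0\t1\t0\tpdflatex broken.tex\n'                          >> compile.log

# Print the command of every job whose exit status was non-zero.
awk -F '\t' 'NR > 1 && $7 != 0 { print $NF }' compile.log
```

If I remember correctly, GNU parallel can also re-run just the failed jobs from such a log with `--retry-failed`, which might be worth checking in its documentation.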
Thanks @MarkLeakos: unfortunately, chatGPT is not known for its accuracy, so I'd rather not depend on what it says when it comes to things like this (especially if it does not provide references). PS: #10 is on my list of things to do, I just have to find the time to work on it :)
Thank you @MarkLeakos, I did not know before what exactly I was searching for :D But now with the `--joblog` there is a proper output of what parallel did. The current PR #10 now produces an output which can be checked for the failed documents. I thought it is more desirable that parallel continues to work on all files and does not stop when one gives an error, since the documents should usually be independent. The `--bar` I also added since, when there are quite some files to process, it gives some overview.
You are welcome @felixschurk.
ChatGPT is usually good for stimulating ideas.
Mark
Hi,
this is more of an enhancement than an issue.
When compiling everything with
./calliope -p 2022
and there are quite a few files in the directory, it takes a long time. My idea is to pass all the files to the GNU
parallel
command (https://www.gnu.org/software/parallel/), which would then run the compilation on as many CPUs as the machine has. A downside is that I have currently not figured out how to stop if a file could not be compiled.
But this only causes a problem for the "bad" file; the others will continue to be compiled.
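A hedged sketch of the idea. The `parallel` one-liner in the comment is only illustrative (the exact compile command used by calliope is an assumption); the runnable part below uses a stand-in "compile" step to show the keep-going-on-failure behaviour described above.

```shell
#!/bin/sh
# The proposed change would be roughly a one-liner like (assuming GNU
# parallel is installed):
#
#     ls *.tex | parallel pdflatex -interaction=nonstopmode {}
#
# parallel runs one job per CPU core by default and keeps going past
# failures; its exit status reports how many jobs failed. The stand-in
# below mimics that: broken.tex "fails", the rest are "compiled".
failed=""
for f in good1.tex broken.tex good2.tex; do
    if [ "$f" = "broken.tex" ]; then
        failed="$failed $f"        # record the failure, don't stop
    else
        echo "compiled $f"
    fi
done
echo "failed:$failed"
```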
I compared the timing with
time
for 59 tex files (parallel vs. sequential): the parallel run took roughly half the time.
If you think this would be a useful enhancement, I could create a pull request for it.