[ML] Framework for testing effect of various MKL settings #2714
Checkpointing current status for visibility.
Contains the changes to the build system required to compile and link code referring to MKL functions, along with various scripts etc. to exercise those functions and gather data on the impact they may have.
Tools and packages needed for testing MKL settings with `pytorch_inference`:

- `/usr/local/gcc103`
- `yum install python3` (for running scripts for testing inference)
- `intel-oneapi-mkl-devel-2024.0`, as per the `linux_image` Dockerfile
- `pprof` (see https://gperftools.github.io/gperftools/heapprofile.html) - and do `yum install ghostscript`, `yum install graphviz`
- `yum install libunwind` - for `heaptrack`
- `heaptrack`
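The yum-installable pieces above can be pulled in with one command; a sketch, assuming a yum-based build image (the compiler and MKL come from the `linux_image` Dockerfile, not yum):

```sh
# Helper tools for the profiling workflow described below:
# python3     - runs the inference test scripts
# ghostscript - used by pprof when rendering PDF output
# graphviz    - used by pprof to draw call graphs
# libunwind   - stack unwinding support for heaptrack
yum install -y python3 ghostscript graphviz libunwind
```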
Compiling the code
Check out the code in this PR on a Linux x86_64 machine and configure CMake as normal, but ensure that `pytorch_inference` is linked against `libtcmalloc`. This can be done with e.g. the configure-time tweak sketched below.
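A minimal sketch of one way to force the link; the build directory name is illustrative and the linker-flag approach is an assumption, not necessarily what this PR's build changes do:

```sh
# Configure with tcmalloc added to the executable link line, then
# build just the pytorch_inference target.
cmake -B cmake-build-relwithdebinfo \
      -DCMAKE_EXE_LINKER_FLAGS="-ltcmalloc"
cmake --build cmake-build-relwithdebinfo --target pytorch_inference
```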
Running `pytorch_inference`
There are several Python scripts in the `bin/pytorch_inference` directory that are capable of running `pytorch_inference` on various models; `evaluate.py` is one example. These scripts can be tweaked in various ways before running. In the case of `evaluate.py`, edit the script to control, for example, whether an `mkl_free_buffers` control request is sent (MKL's `mkl_free_buffers` call releases memory cached in MKL's internal buffers, so it is one of the settings whose memory impact is of interest here).
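With the `libtcmalloc` link in place, the gperftools heap profiler (the `heapprof` referred to below) is switched on via an environment variable, which propagates to the `pytorch_inference` process the script launches; a sketch, with the script's arguments omitted:

```sh
# HEAPPROFILE sets the dump-file prefix; profiles appear as
# /tmp/heapprof.0001.heap, /tmp/heapprof.0002.heap, ...
export HEAPPROFILE=/tmp/heapprof
# Arguments omitted - see the script itself for what it expects.
python3 bin/pytorch_inference/evaluate.py
```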
Viewing results
If running `pytorch_inference` under `heapprof` there will be a reasonably large number of output files generated, e.g. `/tmp/heapprof.0040.heap`. These files need to be post-processed by a tool called `pprof`, e.g. as sketched below, to generate a PDF file of the `heapprof` results (other output formats are available).
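A sketch of the post-processing step, assuming gperftools' `pprof` is on the PATH; the binary path is illustrative:

```sh
# Render one heap profile as a PDF; this needs the ghostscript and
# graphviz packages listed above.
pprof --pdf ./pytorch_inference /tmp/heapprof.0040.heap > heapprof-0040.pdf

# Or dump a plain-text summary of the heaviest allocation sites.
pprof --text ./pytorch_inference /tmp/heapprof.0040.heap
```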
`heaptrack` has its own GUI especially for viewing results - https://github.com/KDE/heaptrack?tab=readme-ov-file#heaptrack_gui - but can also display results as plain text.
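For comparison, a sketch of the `heaptrack` workflow; the recording file name is illustrative (heaptrack prints the actual name when it finishes), and the binary's arguments are omitted:

```sh
# Record a heap profile; heaptrack writes heaptrack.<binary>.<pid>.*
heaptrack ./pytorch_inference

# Inspect the recording in the GUI, or as plain text.
heaptrack_gui heaptrack.pytorch_inference.12345.zst
heaptrack --analyze heaptrack.pytorch_inference.12345.zst
```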