How to get GPU used? #5716
blueforest03 asked this question in Q&A (Unanswered)
I'm running the "Hello, NAS!" demo, and I changed the code in the "Launch an Experiment" section so that the algorithm uses the GPU. But it only returned "Config is not provided. Will try to infer." and "WARNING: GPU found but will not be used. Please set experiment.config.trial_gpu_number to the number of GPUs you want to use for each trial."

I'm using Python 3.11, PyTorch 2.0.1, CUDA Toolkit 11.7, and cuDNN 8.9, and I run the code in Jupyter via Anaconda.

Is there a way to make the code use the GPU without going through the shell or a local config file? (I want to run the entire experiment from Jupyter.)
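For reference, a minimal sketch of what the launch cell could look like with the GPU settings applied, assuming the NasExperiment API used by the "Hello, NAS!" tutorial; model_space, evaluator, and search_strategy stand for the objects built in the earlier tutorial cells, and the port number is only an example:

```python
from nni.nas.experiment import NasExperiment

# model_space, evaluator, and search_strategy are placeholders for the objects
# built in the earlier "Hello, NAS!" cells (model space, evaluator, search strategy).
experiment = NasExperiment(model_space, evaluator, search_strategy)

# Allocate one GPU per trial, as the warning message asks for.
experiment.config.trial_gpu_number = 1
# On a local machine whose GPU also drives the display, let NNI schedule
# trials on a GPU that is already in use.
experiment.config.training_service.use_active_gpu = True

# Launch everything from the notebook; no shell commands or config files needed.
experiment.run(port=8081)
```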
Replies: 1 comment

- I got the same problem, and I failed to run it in the end.