Torch optimizer performance #482
Conversation
Thanks for the PR!
@@ -21,7 +21,7 @@ def test_config(self):
    def test_single_step(self):
        optimizer = SGD(learning_rate=0.5)
        self.assertEqual(len(optimizer.variables), 2)
        grads = np.array([1.0, 6.0, 7.0, 2.0])
Do we need to support `np.array` for `apply_gradients`? @fchollet
No we don't.
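For illustration, a hedged sketch of what the test's gradient construction could look like under the torch backend, given that NumPy arrays need not be supported (the PR's actual test change is not shown in this excerpt):

```python
import torch

# Hypothetical adjustment: pass a native torch tensor instead of np.array,
# since apply_gradients is not required to accept NumPy arrays.
grads = torch.tensor([1.0, 6.0, 7.0, 2.0])
```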
LGTM, thanks!
* add torch optimizers
* addressing comments

Co-authored-by: Haifeng Jin <[email protected]>
Reimplemented the SGD optimizer for the torch backend using torch's parallel functions, so that all variables are updated in parallel. Before this PR, when executed eagerly, the variables were updated serially, one at a time. A sketch of the idea follows.
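The sketch below illustrates the parallel-update idea with torch's `_foreach_*` kernels. It is a minimal, hypothetical helper (the name `sgd_step` and its signature are not the PR's actual code), assuming that "torch parallel functions" refers to these fused list operations:

```python
import torch

def sgd_step(variables, grads, learning_rate=0.5, momentums=None, momentum=0.0):
    # Hypothetical helper: one SGD step applied to all variables at once.
    if momentums is not None and momentum != 0.0:
        # m <- momentum * m - lr * g, one fused call per op across all buffers
        torch._foreach_mul_(momentums, momentum)
        torch._foreach_add_(momentums, grads, alpha=-learning_rate)
        # v <- v + m
        torch._foreach_add_(variables, momentums)
    else:
        # v <- v - lr * g for every variable in a single fused call
        torch._foreach_add_(variables, grads, alpha=-learning_rate)

# Example with two variables (hypothetical values):
variables = [torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0])]
grads = [torch.tensor([1.0, 6.0]), torch.tensor([7.0, 2.0])]
sgd_step(variables, grads, learning_rate=0.5)
# variables is now [tensor([ 0.5000, -1.0000]), tensor([-0.5000,  3.0000])]
```

Each `_foreach_` call dispatches the update for the whole list of parameters at once, rather than looping over parameters in Python, which is where the eager-mode serial overhead came from.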