
Can't run Spatial Transformation example #48

Open
urinieto opened this issue Nov 22, 2016 · 10 comments

urinieto commented Nov 22, 2016

It seems like Keras's build function for layers has changed its signature and now takes input_shape as a parameter, so this fails (see the Keras documentation).
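For reference, this is the kind of signature change I mean (a minimal sketch of a Keras 1.x custom layer, not Seya's actual code):

```python
from keras.engine.topology import Layer

class ExampleLayer(Layer):
    # Keras 1.x passes the input shape into build(); in Keras 0.3.x, build()
    # took no arguments and the layer read self.input_shape instead, which is
    # why Seya's layers break on newer Keras versions.
    def build(self, input_shape):
        pass  # create any weights here from input_shape

    def call(self, x, mask=None):
        return x
```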

Regardless, once I fix that, I get an error when the SpatialTransformer layer tries to set its self.input attribute from locnet's input. This is what I get when I try to run the example:

AttributeError                            Traceback (most recent call last)
<ipython-input-6-8756311b2bac> in <module>()
      2 
      3 model.add(SpatialTransformer(localization_net=locnet,
----> 4                              downsample_factor=3, input_shape=input_shape))
      5 
      6 model.add(Convolution2D(32, 3, 3, border_mode='same'))

/home/onieto/local/anaconda3/lib/python3.5/site-packages/keras/models.py in add(self, layer)
    278                 else:
    279                     input_dtype = None
--> 280                 layer.create_input_layer(batch_input_shape, input_dtype)
    281 
    282             if len(layer.inbound_nodes) != 1:

/home/onieto/local/anaconda3/lib/python3.5/site-packages/keras/engine/topology.py in create_input_layer(self, batch_input_shape, input_dtype, name)
    368         # and create the node connecting the current layer
    369         # to the input layer we just created.
--> 370         self(x)
    371 
    372     def assert_input_compatibility(self, input):

/home/onieto/local/anaconda3/lib/python3.5/site-packages/keras/engine/topology.py in __call__(self, x, mask)
    485                                     '`layer.build(batch_input_shape)`')
    486             if len(input_shapes) == 1:
--> 487                 self.build(input_shapes[0])
    488             else:
    489                 self.build(input_shapes)

/home/onieto/local/seya/seya/layers/attention.py in build(self, input_shape)
     48         self.regularizers = self.locnet.regularizers
     49         self.constraints = self.locnet.constraints
---> 50         self.input = self.locnet.input  # This must be T.tensor4()
     51 
     52     @property

AttributeError: can't set attribute

Any idea how to fix this? Thanks!

UPDATE: I'm on Keras 1.1.1, using Python 3.5.2 on Anaconda 4.2.0.
UPDATE 2: After trying several Keras versions, it seems like the Jupyter example only works with Keras 0.3.2.

@EderSantana
Owner

Yeah, Seya is based on Keras 0.3,
but a few layers were adapted in the branch called keras1. The STN is one of them; try that one for the latest versions of Keras.

@urinieto
Author

Thanks for your reply, I will try out the keras1 branch.

redsphinx commented Nov 30, 2016

I fixed the build function issue by updating the attention.py file to the keras1 version: https://github.com/EderSantana/seya/blob/keras1/seya/layers/attention.py

But now I have this issue; any idea what fan_in and fan_out are used for?

File "/home/gabi/PycharmProjects/fish/fish_detection/spatial_trafo/mnist.py", line 91, in
model.add(Convolution2D(32, 3, 3, border_mode='same'))

File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 324, in add
output_tensor = layer(self.outputs[0])

File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 491, in call
self.build(input_shapes[0])

File "/usr/local/lib/python2.7/dist-packages/keras/layers/convolutional.py", line 409, in build
self.W = self.init(self.W_shape, name='{}_W'.format(self.name))

File "/usr/local/lib/python2.7/dist-packages/keras/initializations.py", line 59, in glorot_uniform
s = np.sqrt(6. / (fan_in + fan_out))

ZeroDivisionError: float division by zero

@EderSantana
Owner

That error is weird. fan_in and fan_out are the sizes of the layer's input and output, and I'm sure you didn't declare a layer with zero connections...
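For what it's worth, this is roughly what glorot_uniform computes (a sketch of the formula, not Keras's exact code):

```python
import numpy as np

def glorot_uniform_limit(fan_in, fan_out):
    # Glorot/Xavier uniform init draws weights from U(-s, s) with this scale.
    return np.sqrt(6.0 / (fan_in + fan_out))

# For a Convolution2D kernel of shape (nb_filter, stack_size, nb_row, nb_col),
# fan_in is roughly stack_size * nb_row * nb_col and fan_out is roughly
# nb_filter * nb_row * nb_col, so the division only blows up if the layer's
# incoming shape has a zero (or unknown) dimension.
print(glorot_uniform_limit(1 * 3 * 3, 32 * 3 * 3))   # normal case, ~0.14
# glorot_uniform_limit(0, 0)  # -> ZeroDivisionError: float division by zero
```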

@redsphinx

Yeah, I couldn't figure it out. I'm just running it with Keras 0.3.3 now, with no issues.

@9thDimension

I couldn't get the SpatialTransformer to work in my normal environment, so I set up a virtualenv with Keras 0.3.3 and installed Seya into it.

I'm trying to run the ST example notebook for cluttered MNIST, but I'm hitting a problem: the net learns nothing, and the loss stays exactly the same.

[screenshot: spatial_transofmer_net_problem]

@oarriaga

@9thDimension @redsphinx @urinieto I have a working IPython notebook with the very same example here: https://github.com/oarriaga/spatial_transformer_networks, using Keras 1.1.1 with TensorFlow 0.12.0 as the backend.

@EderSantana if you want, I can make a pull request with the TensorFlow implementation; just tell me where I should put the file containing the layer.

@EderSantana
Owner

Hi @oarriaga, thanks for all the help!
I think you can write it all in the attention.py file.
Check whether the user is on Theano or TF and use the appropriate code.
Do you have other ideas?

Also, if your code is keras1-compatible, make sure you PR to that branch.
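Something like this is what I have in mind (just a sketch; the _transform_* helpers are hypothetical):

```python
from keras import backend as K

def _transform(theta, x, downsample_factor):
    # Dispatch to a backend-specific sampler depending on which backend
    # the user has configured for Keras.
    if K.backend() == 'tensorflow':
        return _transform_tensorflow(theta, x, downsample_factor)  # hypothetical helper
    return _transform_theano(theta, x, downsample_factor)          # hypothetical helper
```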

urinieto commented Feb 17, 2017 via email

@iamsiva11

Hi everyone,
I got the MNIST example running with the help of @oarriaga's notebook. I then tried to load a custom image dataset from disk to run the spatial transformation on, but got stuck on an issue.

Specifically, after inspecting the code for a while, there seems to be an issue with the dtype of the numpy image array.
When the images are:
uint8 - the images display fine, but training does not work: the images shown during training are blank and the loss drops to zero, similar to @9thDimension's comment above.

float32 - the images get deformed (pixellated, random noise), but training seems to work with these noisy images. The noisy image looks like this:
[screenshot: image-load-noise]
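Should the images be normalized before training, something like this (a minimal sketch, assuming they load as a raw uint8 array)?

```python
import numpy as np

def preprocess(images):
    # Cast uint8 pixels to float32 and rescale from [0, 255] to [0, 1]
    # before handing the batch to the network.
    return images.astype('float32') / 255.0

# e.g. a dummy batch of 4 single-channel 60x60 images
dummy = np.random.randint(0, 256, size=(4, 1, 60, 60)).astype('uint8')
X_train = preprocess(dummy)
```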

Any idea how to fix this? Thanks!
