
Some questions about autoencoder! #38

Open
Imorton-zd opened this issue Jul 4, 2016 · 3 comments

Comments

@Imorton-zd

Hi, I hope I'm not bothering you. Recently I implemented a simple autoencoder with Keras for text classification to do domain adaptation, but it performs worse than the original document representation, about 10% lower. As shown in the paper, a stacked denoising autoencoder should improve performance for domain adaptation. Would you help me check for errors and give me some suggestions? Thanks!

from keras.layers import Input, Dense
from keras.models import Model

def autoencode(data):
    n_obs = data.shape[0]
    input_size = data.shape[1]
    encoding_size = 1000

    # Encoder: input -> 2000 -> 2000 -> 1000-dimensional code z
    x = Input(shape=(input_size,))
    intermed = Dense(encoding_size * 2, activation='relu')(x)
    intermed = Dense(encoding_size * 2, activation='relu')(intermed)
    z = Dense(encoding_size, activation='relu', name='z')(intermed)

    # Decoder: code -> 2000 -> 2000 -> reconstruction of the input
    intermed = Dense(encoding_size * 2, activation='relu')(z)
    intermed = Dense(encoding_size * 2, activation='relu')(intermed)
    x_reconstruction = Dense(input_size, activation='relu', name='x_reconstruction')(intermed)

    # train_model reconstructs the input; test_model exposes the bottleneck code z
    train_model = Model(input=[x], output=[x_reconstruction])
    test_model = Model(input=[x], output=[z])
    train_model.compile(optimizer='Adam', loss='mse')
    test_model.compile(optimizer='Adam', loss='mse')

    train_model.fit(x=data, y=data, nb_epoch=8)

    encoded1 = train_model.predict(data)  # reconstruction of the input
    encoded2 = test_model.predict(data)   # learned representation z
    return encoded1, encoded2
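
A rough sketch of how the two representations could be compared, assuming a scikit-learn LogisticRegression classifier and pre-split source/target data (X_source, y_source, X_target, y_target are placeholders, not the actual setup):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical comparison: fit the autoencoder on source + target documents
# together, then train a classifier on source labels and score it on target,
# once with the raw features and once with the autoencoder code.
X_all = np.vstack([X_source, X_target])
_, z_all = autoencode(X_all)
z_source, z_target = z_all[:len(X_source)], z_all[len(X_source):]

for name, (train_x, test_x) in [('raw', (X_source, X_target)),
                                ('autoencoder', (z_source, z_target))]:
    clf = LogisticRegression()
    clf.fit(train_x, y_source)
    print(name, clf.score(test_x, y_target))
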
@EderSantana
Owner

Does the paper recommend training the entire model all at once? Did you try layer-wise pre-training?

@Imorton-zd
Author

@EderSantana The paper doesn't mention whether the entire model is trained at once. Could you briefly explain how to do layer-wise pre-training? I'm sorry, I have never tried it. Thanks!

@EderSantana
Owner

It's called greedy layer-wise pretraining. It's a little bit of a pain to do, but you can find tutorials online. Essentially, you train an autoencoder for one layer at a time.
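
A minimal sketch of the idea, in the same Keras 1.x-style API as the snippet above (layer sizes and epoch count below are placeholders, not a recommendation from the paper):

from keras.layers import Input, Dense
from keras.models import Model

def greedy_pretrain(data, layer_sizes=(2000, 2000, 1000), nb_epoch=8):
    current = data
    encoders = []
    for size in layer_sizes:
        inp = Input(shape=(current.shape[1],))
        code = Dense(size, activation='relu')(inp)
        recon = Dense(current.shape[1], activation='relu')(code)

        # Train a one-hidden-layer autoencoder to reconstruct its own input.
        ae = Model(input=[inp], output=[recon])
        ae.compile(optimizer='Adam', loss='mse')
        ae.fit(current, current, nb_epoch=nb_epoch)

        # Keep the encoder half; its output becomes the next layer's input.
        encoder = Model(input=[inp], output=[code])
        encoders.append(encoder)
        current = encoder.predict(current)

    # The per-layer encoder weights can then initialise a full stacked
    # autoencoder, which is fine-tuned end to end on the same data.
    return encoders, current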

Better than that, I just noticed that Keras has an implementation of variational autoencoders in the examples folder. I think you should try that first; with that one you don't need layer-wise pretraining as much.
https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py
