Hi, I hope I'm not bothering you. Recently, I implemented a simple autoencoder with Keras for text classification to do domain adaptation, but it performs worse than the original document representation, about 10% lower. As the paper shows, a stacked denoising autoencoder should improve performance for domain adaptation. Would you help me check for errors and give me some suggestions? Thanks!
@EderSantana The paper doesn't mention whether the entire model is trained at once. Could you explain simply how to do layer-wise pretraining? I'm sorry, I have never tried it. Thanks!
It is called greedy layer-wise pretraining. It's a little bit of a pain to do, but you can find tutorials online. Essentially, you train an autoencoder for each layer, one at a time.
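A minimal sketch of what that could look like in Keras. Layer sizes, noise level, activations, and training settings below are illustrative assumptions, not taken from the paper or this thread: each denoising autoencoder is trained to reconstruct its clean input from a corrupted copy, and its encoder output becomes the training data for the next layer.

```python
# Greedy layer-wise pretraining of a stacked denoising autoencoder.
# All hyperparameters here are placeholder assumptions for illustration.
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def pretrain_stack(x, layer_sizes, noise_std=0.3, epochs=5):
    """Train one denoising autoencoder per layer, feeding each layer's
    encoding to the next. Returns the list of trained encoder models."""
    encoders = []
    current = x
    for size in layer_sizes:
        inp = Input(shape=(current.shape[1],))
        hidden = Dense(size, activation="relu")(inp)
        out = Dense(current.shape[1], activation="linear")(hidden)
        dae = Model(inp, out)
        dae.compile(optimizer="adam", loss="mse")
        # Denoising step: corrupt the input, reconstruct the clean version.
        noisy = current + noise_std * np.random.randn(*current.shape)
        dae.fit(noisy, current, epochs=epochs, verbose=0)
        # Keep only the encoder half; its output feeds the next layer.
        encoder = Model(inp, hidden)
        encoders.append(encoder)
        current = encoder.predict(current, verbose=0)
    return encoders

# Usage: pretrain a two-layer stack on random data, then encode documents
# by passing them through each trained encoder in turn.
x = np.random.rand(64, 100).astype("float32")
encoders = pretrain_stack(x, [32, 16], epochs=1)
codes = x
for enc in encoders:
    codes = enc.predict(codes, verbose=0)
print(codes.shape)  # (64, 16)
```

After pretraining, the encoder layers are typically stacked into one network and fine-tuned end to end on the supervised task, rather than used frozen.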