
Support for Keras "Add" layer #112

Open
yynst2 opened this issue Sep 10, 2020 · 2 comments
yynst2 commented Sep 10, 2020

Hi,

My network uses simple residual connections built with the Keras Add layer, and this type of structure appears to be supported, judging by your recent "BPNet" paper. Do you have an updated version of DeepLIFT that actually supports it?
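For reference, a minimal NumPy sketch of the residual structure in question (the branch output is added element-wise back to its input, which is what keras.layers.Add() expresses in a Keras model; the weights here are arbitrary illustrations):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w):
    # Toy residual connection: transform the input, then add the
    # original input back in -- the role keras.layers.Add() plays.
    branch = relu(x @ w)
    return branch + x  # element-wise add, same shape as x

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4))
w = rng.normal(size=(4, 4))
y = residual_block(x, w)
```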

The current version reports that the Add layer is unsupported (it is not in the conversion mapping) and raises the following error:

/usr/local/lib/python3.6/dist-packages/deeplift/conversion/kerasapi_conversion.py in layer_name_to_conversion_function(layer_name)
338 # lowercase to create resistance to capitalization changes
339 # was a problem with previous Keras versions
--> 340 return name_dict[layer_name.lower()]
341
342

KeyError: 'add'
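A hypothetical sketch of the lookup that raises this error (names and stub functions are illustrative, not the actual deeplift code): the converter keeps a dict from lowercased layer-class names to conversion functions, and any layer type absent from that dict raises KeyError.

```python
# Stub conversion mapping; the real one lives in kerasapi_conversion.py.
name_dict = {
    "dense": lambda cfg: ("convert dense", cfg),
    "conv1d": lambda cfg: ("convert conv1d", cfg),
    # "add" is absent, so residual Add layers cannot be converted
}

def layer_name_to_conversion_function(layer_name):
    # lowercase to be robust to capitalization changes across Keras versions
    return name_dict[layer_name.lower()]

try:
    layer_name_to_conversion_function("Add")
except KeyError as e:
    print("unsupported layer:", e)
```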

Thanks a lot!

AvantiShri (Collaborator) commented Sep 10, 2020

Hi @yynst2, the BPNet paper actually used the implementation of DeepLIFT from the DeepExplain repository (our branch is forked from marcoancona/DeepExplain): https://github.com/kundajelab/DeepExplain. We used this implementation precisely because it is more flexible as it is built by overriding tensorflow gradient operators. As I recall, the difference between our fork of DeepExplain and the original DeepExplain repository is a few modifications that facilitate calculation of the hypothetical importance scores.

It is also possible to use DeepSHAP’s implementation of DeepLIFT to compute the importance scores. This is what we have been moving towards for BPNet, as DeepSHAP allows the use of multiple references per sequence (DeepExplain only supports one reference per sequence; this is discussed in the FAQ). For convolutional architectures, DeepSHAP is similar to DeepLIFT with the Rescale rule (this is also discussed in the FAQ). In benchmarking, I have found that the Rescale rule performs equivalently to the combination of Rescale & RevealCancel when used with shuffled sequences as the reference. Here are some slides that cover how to use DeepSHAP to get explanations from models trained on genomic data (including BPNet-style models): https://docs.google.com/presentation/d/1JCLMTW7ppA3Oaz9YA2ldDgx8ItW9XHASXM1B3regxPw/edit
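A toy illustration (not the shap API) of why multiple references matter: for a linear model f(x) = w @ x, the DeepLIFT/Rescale contribution of feature i against a reference r is exactly w[i] * (x[i] - r[i]). DeepSHAP-style usage averages these per-reference contributions over many references (e.g. shuffled sequences) instead of relying on a single one.

```python
import numpy as np

def linear_contribs(w, x, ref):
    # Rescale-rule contributions for a linear model reduce to this form.
    return w * (x - ref)

def multi_reference_contribs(w, x, refs):
    # Average contributions over a set of references, as DeepSHAP does.
    return np.mean([linear_contribs(w, x, r) for r in refs], axis=0)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.8, 1.0])
rng = np.random.default_rng(0)
refs = [rng.permutation(x) for _ in range(20)]  # shuffled references

contribs = multi_reference_contribs(w, x, refs)
# Completeness holds: the contributions sum to f(x) minus the mean f(ref).
```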

Let me know if you have additional questions.

yynst2 (Author) commented Sep 10, 2020

Thanks a lot, that's very helpful.
