
proposal: implement unified interface for invoking different foundry implementations #9227

Open
popzxc opened this issue Oct 30, 2024 · 3 comments
Labels
A-compatibility Area: compatibility A-extensions Area: extensions T-post-V1 Area: to tackle after V1 T-to-discuss Type: requires discussion

Comments

popzxc commented Oct 30, 2024

Component

Other (please describe)

Describe the feature you would like

Problem

Right now, Foundry provides wide coverage of the Web3 ecosystem, but this comes with a few nuances:

  • Some networks, such as ZKsync and Starknet, have their own implementations of Foundry.
  • Some networks, such as Optimism and Odyssey, have support managed upstream via CLI flags (e.g. anvil --optimism).

This creates a mixed environment where users have to choose between binaries or CLI flags, often in different ways, which can be troublesome.

The issue will likely become even more complex now that several companies are working on their own stacks. I can imagine that eventually features like OP's supersim, and similar ones for ZKsync/Polygon/Arbitrum, will be wanted out of the box (especially once interop becomes an industry standard). Based on our interactions with teams building on ZKsync, not being able to easily reuse the same setup is a big adoption barrier, even when support is "kind of" integrated (e.g. even having to pass a --zksync flag was reported as an inconvenience, and understandably so).

Solution

I propose creating a unified way for users to explicitly specify which network they want to use in upstream Foundry, as well as a way for developers to create "hooks" into a particular implementation.
Users would always invoke forge/cast/anvil/etc., but based on some form of configuration (covered below) the tool could modify its behavior to match the network's expectations, e.g.:

  • Redirect execution to foundry-zksync
  • Set the --optimism flag implicitly
  • In the future, it can be used to set up plugins for specific chains (relevant to option 3 mentioned here).

Technically, I see several options to achieve this, but they all revolve around a single prerequisite: all tools in the Foundry suite must load some configuration before actual execution.
Let's assume we have a variable network_family, with supported options such as zk_stack, op_stack, and starknet. How it is handled might vary. For example, if forge is invoked with network_family=zk_stack, execution is simply forwarded to the forge-zksync binary with the same arguments. If anvil is invoked with network_family=op_stack, it implies the --optimism CLI flag.
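To make the prerequisite concrete, here is a minimal sketch (in Python, purely illustrative; the rule table, function name, and actions are assumptions of this proposal, not existing Foundry code) of how a tool could resolve its invocation from network_family before executing:

```python
# Hypothetical dispatch table: which action a given tool takes for a
# given network_family. Everything here is illustrative.
NETWORK_FAMILY_RULES = {
    "zk_stack": {"forge": ("exec", "forge-zksync")},
    "op_stack": {"anvil": ("flag", "--optimism")},
}

def resolve_invocation(tool, args, network_family):
    """Return the (binary, argv) that should actually run."""
    rule = NETWORK_FAMILY_RULES.get(network_family, {}).get(tool)
    if rule is None:
        return tool, args           # no override: run upstream as-is
    kind, value = rule
    if kind == "exec":
        return value, args          # forward to an alternative binary
    return tool, [value] + args     # imply an extra CLI flag
```

The same resolution step would run at the very start of every tool in the suite, so the rest of the codebase never needs to know which family was selected.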

Option 1 - foundry.toml profiles

Foundry already has a similar mechanism for altering behavior: profiles. We could add a network_family variable there, and users could then reuse their existing workflows by changing the FOUNDRY_PROFILE variable.
The drawback I see here is that users may need up to N*M profiles, where N is the number of profiles they normally have (default/CI) and M is the number of networks they support (l1/op/zksync). On the other hand, it is not unlikely that profiles for different networks will differ anyway, especially for ZKsync and Starknet.
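A foundry.toml sketch of this option might look as follows (network_family is the proposed key, not an existing Foundry option; profile names are illustrative):

```toml
# Hypothetical sketch: `network_family` is the proposed key.
[profile.default]
src = "src"

[profile.zksync]
network_family = "zk_stack"

[profile.op]
network_family = "op_stack"
```

Users would then select a network through their existing workflow, e.g. FOUNDRY_PROFILE=zksync forge test.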

Option 2 - FOUNDRY_NETWORK_FAMILY env variable

In this option, there would be one more environment variable to choose a network family. For now, it looks like we don't need to add anything to foundry.toml in this case, but in the future it could be extended with something similar to profiles (e.g. [network_family.zksync] and [network_family.optimism] sections). I'm not sure what the use cases would be there, but it may be relevant for interop (e.g. supersim configuration).
The main drawback here is that we would now have two environment variables, which carries higher cognitive complexity.

Option 3 - Less upstream support

If having logic upstream to handle the differences of particular chains feels troublesome, there is a more lightweight approach: we could introduce a binary_mappings variable, so that e.g. for ZKsync it would be { "forge" = "forge-zksync", "cast" = "cast-zksync" } and for Starknet it would map to snforge/sncast. This variable would simply tell Foundry which binary execution should be forwarded to, with the same arguments. This way, no "custom" logic is added upstream, though it feels less extensible for networks with upstream support.
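A foundry.toml sketch of this option (binary_mappings is the proposed key; all values are illustrative):

```toml
# Hypothetical sketch: `binary_mappings` is the proposed key.
[profile.zksync]
binary_mappings = { forge = "forge-zksync", cast = "cast-zksync" }

[profile.starknet]
binary_mappings = { forge = "snforge", cast = "sncast" }
```

Upstream would only need the forwarding logic itself; everything network-specific lives in the mapped binaries.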


If (hopefully) we decide that this proposal makes sense and agree on a particular option, the ZKsync community will be happy to submit PRs for the implementation. We see it as a first step towards "feat(compatibility): add zkSync support", as well as toward greater integrity of the ecosystem.

Additional context

No response

@popzxc popzxc added T-feature Type: feature T-needs-triage Type: this issue needs to be labelled labels Oct 30, 2024
@github-project-automation github-project-automation bot moved this to Todo in Foundry Oct 30, 2024
@zerosnacks zerosnacks added T-post-V1 Area: to tackle after V1 A-compatibility Area: compatibility A-extensions Area: extensions and removed T-needs-triage Type: this issue needs to be labelled labels Oct 30, 2024
@zerosnacks zerosnacks changed the title feat(proposal): Unified interface for invoking different foundry implementations proposal: implement unified interface for invoking different foundry implementations Oct 30, 2024
@zerosnacks zerosnacks added T-to-discuss Type: requires discussion and removed T-feature Type: feature labels Oct 30, 2024
sakulstra (Contributor) commented Oct 31, 2024

As a user of both foundry and foundry-zksync, I'm looking forward to a solution to this!

My 2ct:
Options 1 and 3, or even a combination of the two, could make sense imo (i.e. specifying binaries as part of the profile).
Using different profiles on different chains is already something people do a lot, therefore imo

The drawback I can see here is that users may potentially need up to N*M profiles, where N stands for the profiles they normally have (default/CI)

is not a real drawback. Different EVMs aside, it's already the case that e.g. on Linea you can only run paris, while on mainnet you might want to run cancun.


In an ideal scenario, imo, I should be able to run a script or all tests without actively choosing a profile.
It would be great if, in a multichain repo, I could just run forge test and each test would automatically run in the appropriate environment.
There should be the possibility to have inline configs (or similar) for the profile, similar to how we can inline-configure fuzzing, so we can have tests for different chains side by side and execute them without custom scripting to select the correct environment.
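By analogy with Foundry's existing inline `forge-config` annotations (used today for things like fuzz settings), a per-test network selector might look like this sketch; the network_family key is part of this proposal and is not an existing Foundry option:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "forge-std/Test.sol";

contract ZkOnlyTest is Test {
    // Hypothetical inline config: `network_family` does not exist today.
    /// forge-config: default.network_family = 'zk_stack'
    function testZkSpecificBehavior() public {
        // would run under the zk_stack environment
    }
}
```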

grandizzy (Collaborator) commented
@popzxc not sure if this would still be relevant given that we'd be happy to move forward with #7624 wdyt? thank you

popzxc (Author) commented Nov 13, 2024

I think both issues are relevant: even with the strategy outlined in #7624, there will likely still be two binaries, since foundry-zksync currently has an unoptimized dependency graph (e.g. it has git dependencies and currently requires a nightly compiler). These issues will eventually be fixed, but we will likely have a fairly long transition period in which the foundry codebase is optimized for upstream ZKsync support while foundry and foundry-zksync still exist as separate binaries.

So, mid-term I see #7624 as an issue about simplifying maintenance and development (e.g. making ZKsync code easily pluggable into Foundry), while this issue is mostly about user convenience (given two sets of binaries, users can still easily interact with whatever network they want).
