
[Yul Optimizer] Consider moving statically sized memory allocations to static offsets while adjusting the memoryguard #13046

Closed
ekpyron opened this issue May 20, 2022 · 5 comments
Labels: closed due inactivity, optimizer, stale

Comments


ekpyron commented May 20, 2022

{
  mstore(0x40, memoryguard(0x80))

  switch shr(224, calldataload(0))
  case 0x... {
     let m := allocate(0x20)
     let n := allocate(0x20)
     ...
  }
  case 0x... {
     let m := allocate(0x40)
     ...
  }
  default {
    revert(0,0)
  }
}

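Here, allocate is presumably the usual free-memory-pointer bump allocator; a minimal sketch (the helper body is assumed, not part of the original snippet):

function allocate(size) -> ptr {
    ptr := mload(0x40)            // read the current free memory pointer
    mstore(0x40, add(ptr, size))  // bump it past the newly reserved region
}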
The optimizer could, in principle, realize that all non-reverting branches need 64 bytes of static memory, shift the memoryguard accordingly and transform this to:

{
  mstore(0x40, memoryguard(0xC0))

  switch shr(224, calldataload(0))
  case 0x... {
     let m := 0x80
     let n := 0xA0
     ...
  }
  case 0x... {
     let m := 0x80
     ...
  }
  default {
    revert(0,0)
  }
}

The logic would basically be similar to the allocation of memory slots in the stack-to-memory process.

However, there'd be some subtleties involved, e.g. we'd need to decide how to trade off between branches requiring more or less static memory (also considering whether we want to ignore reverting branches).
Also, this would mean special handling for allocate and would benefit from having it as a builtin in an initial dialect or something similar.

A huge advantage, though, would be that after such a transformation it would be much easier to reason about those allocations, since they'd have well-known static offsets.


ekpyron commented May 20, 2022

Thinking about it a bit more: this could be similar to what the stack limit evader does, but it's not entirely the same.
Since the statically allocated memory chunks could be stored somewhere during loops, we'd basically need to treat loops similarly to how the stack-to-memory process treats recursion, i.e. not do this for allocations in loops, or else ensure that the memory is only used locally within the loop.
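For illustration, a hypothetical loop (assuming the allocate sketch above and a loop bound n) where pinning the allocation to a single static offset would be unsafe, because the pointer outlives its iteration:

let ptrs := allocate(mul(n, 0x20))                 // one pointer slot per iteration
for { let i := 0 } lt(i, n) { i := add(i, 1) } {
    let p := allocate(0x20)                        // fresh chunk each iteration
    mstore(p, calldataload(mul(i, 0x20)))
    mstore(add(ptrs, mul(i, 0x20)), p)             // p escapes the iteration, so all chunks
                                                   // must stay distinct; one static offset
                                                   // for p would make them alias
}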

The logic would also have to differ for function calls, since multiple calls to the same function may, in general, need distinct memory chunks. So this would need to rely on arguing that the allocated memory is only used function-locally...
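Again for illustration, a hypothetical allocating function called twice, where the two call sites need distinct chunks, so a fixed offset inside the callee wouldn't work:

function freshWord() -> p {
    p := allocate(0x20)    // each call would need its own offset
}
let a := freshWord()
let b := freshWord()
mstore(a, 1)
mstore(b, 2)               // must not clobber the word at a, so a and b cannot share one static offset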

But it might still be worthwhile, even if we have to restrict the cases in which this is allowed significantly at first.


ekpyron commented May 20, 2022

So basically the issue will be to determine which allocations can safely be turned into static ones... which may lead back to full memory objects... but at least the rest, i.e. assigning distinct static offsets to such allocations, could probably be shared with the current stack-to-memory logic...


ekpyron commented May 20, 2022

Related to #5107

@github-actions

This issue has been marked as stale due to inactivity for the last 90 days.
It will be automatically closed in 7 days.

github-actions bot added the stale label on Mar 27, 2023

github-actions bot commented Apr 4, 2023

Hi everyone! This issue has been automatically closed due to inactivity.
If you think this issue is still relevant in the latest Solidity version and you have something to contribute, feel free to reopen.
However, unless the issue is a concrete proposal that can be implemented, we recommend starting a language discussion on the forum instead.

github-actions bot added the closed due inactivity label on Apr 4, 2023
github-actions bot closed this as not planned on Apr 4, 2023