forked from across-protocol/relayer
Sync with upstream repo #83
Merged
Conversation
Signed-off-by: james-a-morris <[email protected]>
…ing PoolRebalanceLeaves (#1933)

* fix(Dataworker): Update balanceAllocator properly when executing PoolRebalanceLeaves

## Context

The dataworker executor functionality is supposed to detect when to call `sync()` before executing L1 PoolRebalance and RelayerRefund leaves, depending on the `liquidReserves` value of L1 tokens before executing those leaves. We pass around the `balanceAllocator` when simulating execution of the [PoolRebalanceLeaves](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1491) and the [RelayerRefundLeaves](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1512) in order to keep track of how many L1 tokens are withdrawn from and deposited to the HubPool following hub-chain leaf executions. This way, we can use the `balanceAllocator` in [this function](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1531) to detect when we're not going to have enough funds in LP reserves to execute a leaf [here](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1823).

## Problem

The problem is that when accounting for PoolRebalanceLeaf executions, we were ADDING, not subtracting, balance to the balanceAllocator's count of the HubPool's reserves. This means that if the current `liquidReserves` were sufficient to cover execution of the Ethereum PoolRebalanceLeaf [here](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1484), then the balance allocator would accidentally inflate the HubPool's balance [here](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1488).

[This function](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1529C40-L1529C92) would then misbehave. Within it, if any individual PoolRebalanceLeaf's `netSendAmount` was less than the liquid reserves, a `sync` would be skipped [here](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1802). This [line](https://github.com/across-protocol/relayer/blob/3e9c1e4108568b39fe007f8fcd71721db4bbe090/src/dataworker/Dataworker.ts#L1822) would then artificially inflate the hub pool's balance in the balance allocator, leading to a much more downstream simulation error when the pool rebalance leaf execution fails for unknown reasons.

## Examples of problems

We saw a bundle of PoolRebalanceLeaves today fail to execute because three of the leaves, one Ethereum leaf and two non-Ethereum leaves, had a total `netSendAmount` greater than the HubPool's `liquidReserves`, while each individually had a `netSendAmount` less than the `liquidReserves`. For example, the three leaves had `netSendAmounts` of:

- 40
- 90
- 70

While the HubPool's `liquidReserves` was 180:

- 40 + 90 + 70 = 200 > 180
- 40 < 180
- 90 < 180
- 70 < 180

If you run these numbers through the `executePoolRebalanceLeaves` code above, you'll see how a PoolRebalanceLeaf execution could be submitted but then fail in simulation, without the leaf executions being preceded by a `sync` transaction.
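The failure mode above can be sketched in a few lines. This is an illustrative reduction of the check, not the relayer's actual code; the variable names are hypothetical and the values mirror the example:

```typescript
// Sketch of the per-leaf vs. aggregate reserve check described above.
// Three leaves with netSendAmounts 40, 90, 70 against liquidReserves of 180.

const liquidReserves = 180;
const netSendAmounts = [40, 90, 70];

// Per-leaf check (the buggy behavior): every leaf individually passes,
// so no sync() is scheduled before execution.
const eachLeafCovered = netSendAmounts.every((amt) => amt <= liquidReserves);

// Aggregate check (the required behavior): the sum exceeds reserves,
// so a sync() must precede the leaf executions.
const totalNeeded = netSendAmounts.reduce((a, b) => a + b, 0);
const aggregateCovered = totalNeeded <= liquidReserves;

console.log(eachLeafCovered);  // true  -> sync incorrectly skipped
console.log(totalNeeded);      // 200
console.log(aggregateCovered); // false -> sync actually required
```

Tracking only per-leaf sufficiency (and inflating rather than decrementing the allocator's HubPool balance) is exactly what let the third leaf's execution reach simulation without a preceding `sync`.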
* fix issue
* Update Dataworker.executePoolRebalances.ts
* Add more test cases
* Update Dataworker.executePoolRebalances.ts
* Update Monitor.ts
* comment on tests
* make test better
* Add orbit-fee handling, remove balance allocator
* Fix balanceAllocator call in _executePoolRebalanceLeaves
* throw error if can't fund the DonationBox or loadEthForL2Calls call
* add lifecycle test

Signed-off-by: nicholaspai <[email protected]>

* Exit early if aggregate net send amount == 0
* Update Dataworker.executePoolRebalances.ts
* Update log in _updateExchangeRatesBeforeExecutingNonHubChainLeaves when skipping exchange rate update early
* Update Dataworker.ts
* Fund more AZERO whenever we're short
* remove hardcodes
* Improve logs about lookback window being too short
* Improve logs on funding orbit chain message
* Update Dataworker.customSpokePoolClients.ts
* Update index.ts
* Update index.ts
* Add invariant unit test
* Remove l1 tokens with 0 net send amounts from _updateOldExchangeRates
* Rename to amountWei
* Refactor blockRangesAreInvalid to internal helper func
* Squash feeData
* Update src/dataworker/Dataworker.ts (Co-authored-by: Paul <[email protected]>)
* Update src/dataworker/Dataworker.ts (Co-authored-by: Paul <[email protected]>)
* Update src/dataworker/Dataworker.ts (Co-authored-by: Paul <[email protected]>)
* result
* Add unit testing about exiting early if leaves are already executed
* Add ability for some nonHubChain leaves to be executed even if they all cannot
* Skip mainnet leaf execution if we cannot execute instead of throwing
* Skip sync in _updateExchangeRatesBeforeExecutingNonHubChainLeaves if liquid reserves won't increase
* refactor block range pretty printing
* update comments
* Add assert error messages
* Add _getSpokeBalanceForL2Tokens helper and add to logs
* Re-add balance allocator
* Update Dataworker.executeRelayerRefunds.ts
* Update Dataworker.ts
* Remove canExecute return value from _updateExchangeRatesBeforeExecutingHubChainLeaves
* Update Dataworker.executePoolRebalances.ts
* Update Dataworker.executePoolRebalances.ts
* Refactor error log
* Clean up logs
* Consider state of liquid reserves following eth pool rebalance leaf executions
* Improve tests
* Update name
* Add unit test, split executePoolRebalanceLeaf tests in two files to take advantage of parallel test runs in CI
* Remove SIMULATE_L1_EXECUTION
* Add test for amountToReturn
* add tests
* Add test about hub chain slow fill leaves
* Update BalanceAllocator.ts (Co-authored-by: Paul <[email protected]>)
* Update Dataworker.ts (Co-authored-by: Paul <[email protected]>)
* change blockRangesAreInvalidForSpokeClients to return list of chain IDs that are invalid; add DISABLED_CHAINS unit tests to BundleDataClient unit test files

Signed-off-by: nicholaspai <[email protected]>

---------

Signed-off-by: nicholaspai <[email protected]>
Co-authored-by: Paul <[email protected]>
* improve(LineaFinalizer): Force RPC calls through custom provider

The functions that we use in the Linea SDK make event queries from block 0 to "latest" by default, which means that some of our RPCs can't be used with the Finalizer. Additionally, we're missing out on our custom provider retry and caching logic. This PR removes all instances of making RPC calls via the Linea SDK's provider and replaces them with the same calls through our custom Provider. The latest [Linea SDK](https://github.com/Consensys/linea-monorepo/blob/3fbe660683a318b6fa1b63ec518f948791536352/sdk/src/sdk/LineaSDK.ts#L56) code looks like it will support injecting custom providers, but it's not yet released on NPM, so this code should be removed once a later SDK version is released.

* Refactor
* fix
* fix
* Update common.ts
* Update l2ToL1.ts
* Update l2ToL1.ts
* Update arbStack.ts
* Update arbStack.ts
* Add fix for getFinalizationMessagingInfo
* Floor arbStack calc
* Use .connect(provider) syntax for brevity
* Move all imports to single file
* Update l1ToL2.ts
* Infer lookback dynamically

* fix(LineaFinalizer): Fix spokepool client
* Update l2ToL1.ts
* Update l2ToL1.ts
* Update l2ToL1.ts
…s fill (#1949)

* improve(Relayer): Add total gas price to `info` log when relayer sends fill

This will help debugging and give us broad oversight into how accurate our gas price estimates are. We should probably refactor the overall relay fee calculation flow, but this PR uses the data we already have (native and token gas costs) to derive the gas price easily. This PR also adds a simple change to the ProfitClient's `updateGasCosts` function to print out the gas price estimate per chain. A more helpful fix might be to add the `gasPrice` field to the `TransactionCostEstimate` type in the SDK and force functions like `sdk.common.estimateTotalGasRequiredByUnsignedTransaction()` to return the priority fee and base fee broken down, so the relayer can log them.

* Update ProfitClient.ts
* revert relayer
* Add gasPrice as required elem to profit client
* add log by reading from profit client
* lint
* import from sdk
* Update ProfitClient.ConsiderProfitability.ts
* Update ProfitClient.ConsiderProfitability.ts
* Update ProfitClient.ConsiderProfitability.ts
* Update Relayer.ts
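The derivation the commit describes, working back to an effective gas price from data already on hand, can be sketched as follows. This assumes the native gas cost is a gas-unit estimate and the token gas cost is the total fee in wei, so their quotient is the effective gas price; the names and values here are illustrative, not the SDK's actual types:

```typescript
// Hypothetical sketch: derive an effective gas price from the two cost
// figures the relayer already tracks. Assumes nativeGasCost is gas units
// and tokenGasCost is the total fee in wei.

const nativeGasCost = 250_000n;              // estimated gas units for the fill
const tokenGasCost = 2_500_000_000_000_000n; // estimated total fee, in wei

// Effective gas price in wei per unit of gas (integer BigInt division).
const gasPriceWei = tokenGasCost / nativeGasCost;

console.log(gasPriceWei); // 10000000000n, i.e. 10 gwei
```

A breakdown into base fee and priority fee, as the commit suggests, would need the SDK to return both components rather than a single total.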
This function is unused.
…1928)

* improve: wrap finalization calldata generation in a try/catch

Signed-off-by: bennett <[email protected]>

---------

Signed-off-by: bennett <[email protected]>
Co-authored-by: Paul <[email protected]>
Signed-off-by: withbest <[email protected]>
* chore(deps): bump across protocol dependencies to latest versions

Signed-off-by: james-a-morris <[email protected]>

* feat: enable Ink in the relayer

Signed-off-by: james-a-morris <[email protected]>

* improve: multipliers

Signed-off-by: james-a-morris <[email protected]>

* chore: reduce timing on expected bridge

Signed-off-by: james-a-morris <[email protected]>

---------

Signed-off-by: james-a-morris <[email protected]>
Signed-off-by: james-a-morris <[email protected]>
This is necessary to support Ink chain in the immediate future. This reverts commit e683053. Co-authored-by: James Morris, MS <[email protected]>
Apply a provider-related quorum change such that `eth_getBlockByNumber("pending")` is not subjected to the normal quorum config.
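The dispatch idea can be sketched as below. The type and function names are hypothetical, not the relayer's provider code; the point is only that "pending" block queries are routed around quorum comparison:

```typescript
// Illustrative sketch: decide whether an RPC call should go through the
// normal quorum comparison across providers.

type RpcCall = { method: string; params: unknown[] };

function requiresQuorum(call: RpcCall): boolean {
  // A "pending" block can legitimately differ between providers at any
  // instant, so comparing responses for quorum would routinely disagree.
  if (call.method === "eth_getBlockByNumber" && call.params[0] === "pending") {
    return false;
  }
  return true;
}

console.log(requiresQuorum({ method: "eth_getBlockByNumber", params: ["pending", false] })); // false
console.log(requiresQuorum({ method: "eth_getBlockByNumber", params: ["latest", false] }));  // true
```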
The existing defaults are set so conservatively that third-party operators are pushed into overriding them with custom values. This introduces a greater risk of loss of funds. Instead, update the defaults with lower confirmation requirements that should permit new operators to make fills successfully without having to discard the defaults.
Permit certain config objects (i.e. MAX_BLOCK_LOOK_BACK) to be selectively overridden, rather than requiring a complete definition. This will slightly improve new chain automation. Also simplify a few instances of config parsing, adding some basic type validation along the way.
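A selective override of this kind can be sketched as a merge of user-supplied entries onto defaults, with basic type validation. The names and shapes here are illustrative assumptions, not the relayer's actual config-parsing code:

```typescript
// Sketch: selectively override a per-chain config object (e.g. a
// MAX_BLOCK_LOOK_BACK-style map) instead of requiring a full definition.

type PerChainNumber = Record<number, number>;

const DEFAULT_MAX_BLOCK_LOOKBACK: PerChainNumber = { 1: 10_000, 10: 10_000, 42161: 10_000 };

function parsePerChainOverride(raw: string | undefined, defaults: PerChainNumber): PerChainNumber {
  if (raw === undefined) return { ...defaults };
  const parsed: unknown = JSON.parse(raw);
  // Basic type validation: must be a plain object of chainId -> number.
  if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) {
    throw new Error("Override must be a JSON object mapping chainId to number");
  }
  for (const [chainId, value] of Object.entries(parsed as Record<string, unknown>)) {
    if (!Number.isInteger(Number(chainId)) || typeof value !== "number") {
      throw new Error(`Invalid override entry: ${chainId} -> ${value}`);
    }
  }
  // Partial override: only the supplied chains change; defaults fill the rest.
  return { ...defaults, ...(parsed as PerChainNumber) };
}

const merged = parsePerChainOverride('{"10": 5000}', DEFAULT_MAX_BLOCK_LOOKBACK);
console.log(merged[10]); // 5000 (overridden)
console.log(merged[1]);  // 10000 (default retained)
```

Spreading the override after the defaults is what makes a one-key config legal while still covering every chain.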
… slow fills (#1957)

* improve(Dataworker): Remove 0-value refunds and 0-value empty-message slow fills

Implements optimizations to [UMIP-179](UMAprotocol/UMIPs#606). The goal here is to protect the dataworker from performing extra computation where no on-chain state changes can happen, in the case of 0-value refunds and 0-value empty-message slow fills.

* WIP
* Update src/dataworker/DataworkerUtils.ts
* Update DataworkerUtils.ts
* Use SDK isZeroValueDeposit
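The pruning rule can be sketched as two predicates applied before bundle construction. The types and field names below are simplified assumptions, not the actual Across data structures:

```typescript
// Sketch: drop 0-value refunds and 0-value empty-message slow fills,
// since neither can produce an on-chain state change.

type Refund = { amount: bigint };
type SlowFill = { amount: bigint; message: string };

const isZeroValueRefund = (r: Refund) => r.amount === 0n;
const isZeroValueEmptySlowFill = (s: SlowFill) => s.amount === 0n && s.message === "0x";

const refunds: Refund[] = [{ amount: 0n }, { amount: 100n }];
const slowFills: SlowFill[] = [
  { amount: 0n, message: "0x" },     // pruned: nothing to execute on-chain
  { amount: 0n, message: "0xdead" }, // kept: message execution may still matter
  { amount: 5n, message: "0x" },     // kept: nonzero value
];

const keptRefunds = refunds.filter((r) => !isZeroValueRefund(r));
const keptSlowFills = slowFills.filter((s) => !isZeroValueEmptySlowFill(s));

console.log(keptRefunds.length);   // 1
console.log(keptSlowFills.length); // 2
```

Note that only the combination of zero value *and* an empty message makes a slow fill safely prunable; either condition alone is insufficient.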
sameersubudhi
approved these changes
Jan 3, 2025
Tested locally, works as expected!
matjazv
approved these changes
Jan 3, 2025
Merge latest changes from upstream repo