
Commit

Merge pull request #1281 from algorandfoundation/staging
staging => master
nullun authored Jan 10, 2025
2 parents 87effed + f127574 commit cf0082a
Showing 11 changed files with 70 additions and 20 deletions.
2 changes: 1 addition & 1 deletion .go-algorand-beta.version
@@ -1 +1 @@
v3.26.0-beta
v4.0.1-beta
2 changes: 1 addition & 1 deletion .go-algorand-stable.version
@@ -1 +1 @@
v3.26.0-stable
v3.27.0-stable
2 changes: 1 addition & 1 deletion .indexer.version
@@ -1 +1 @@
3.6.0
3.7.1
@@ -302,7 +302,7 @@ In order to provide confidence in the correctness of a smart contract it's impor
There are a few possible approaches that could be taken by AlgoKit to help facilitate this:

1. **Documentation** - Include documentation that recommends this kind of testing and provides examples for how to implement it
2. **Manual testing** - Encourage a manual testing approach using (for example) the ABI user interface in dAppFlow, by providing an AlgoKit CLI command that sends the user there along with the ABI definition and contract ID resulting in a manual testing experience for the deployed contract with low friction
2. **Manual testing** - Encourage a manual testing approach using (for example) the App Lab in Lora, by providing an AlgoKit CLI command that sends the user there along with the ABI definition and contract ID resulting in a manual testing experience for the deployed contract with low friction
3. **Automated integration tests** - Facilitate automated testing by issuing real transactions against a LocalNet and/or TestNet network
4. **Automated dry run tests** - Facilitate automated testing using the [Dry Run endpoint](https://developer.algorand.org/docs/rest-apis/algod/v2/#post-v2tealdryrun) to simulate what would happen when executing the contract under certain scenarios (e.g. [Graviton](https://github.com/algorand/graviton/blob/main/graviton/README.md))
5. **TEAL emulator** - Facilitate automated testing against a TEAL emulator (e.g. [Algo Builder Runtime](https://algobuilder.dev/api/runtime/index.html))
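To illustrate the shape of option 3, here is a minimal, hypothetical sketch. `HelloWorldClient` is an in-memory stand-in, not an AlgoKit API; in a real test a generated typed app client would issue ABI calls against a LocalNet deployment instead:

```python
# Hypothetical sketch of an automated integration test (option 3). The client
# below is a stand-in stub; with AlgoKit, a generated typed app client would
# send real transactions to a LocalNet network and return the ABI result.
class HelloWorldClient:
    """Stand-in stub for a generated typed app client (assumption)."""

    def hello(self, name: str) -> str:
        return f"Hello, {name}"


def test_hello_returns_greeting() -> None:
    client = HelloWorldClient()
    # Assert on the ABI method's return value, as a LocalNet test would.
    assert client.hello("world") == "Hello, world"


test_hello_returns_greeting()
```

The point is the test shape, not the stub: the same assertion style carries over once real transactions are issued against LocalNet.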
@@ -372,4 +372,4 @@ There are a few possible approaches that could be taken by AlgoKit to help facil

Based on all of this, the suggested option for AlgoKit v1 is **Automated integration tests**, since it conforms to the principles well, has prior art across TypeScript and Python that can be utilised, and provides developers with a lot of confidence.

Post v1, it's recommended that dAppFlow integration for exploratory testing and Graviton (or similar) support should be explored to provide a range of options to empower developers with a full suite of techniques they can use.
Post v1, it's recommended that Lora integration for exploratory testing and Graviton (or similar) support should be explored to provide a range of options to empower developers with a full suite of techniques they can use.
4 changes: 2 additions & 2 deletions docs/get-details/algorand-networks/betanet.md
@@ -4,10 +4,10 @@ title: BetaNet 🔷
🔷 = BetaNet availability only

# Version
`v3.26.0-beta`
`v4.0.1-beta`

# Release Version
https://github.com/algorand/go-algorand/releases/tag/v3.26.0-beta
https://github.com/algorand/go-algorand/releases/tag/v4.0.1-beta

# Genesis ID
`betanet-v1.0`
4 changes: 2 additions & 2 deletions docs/get-details/algorand-networks/mainnet.md
@@ -1,10 +1,10 @@
title: MainNet

# Version
`v3.26.0-stable`
`v3.27.0-stable`

# Release Version
https://github.com/algorand/go-algorand/releases/tag/v3.26.0-stable
https://github.com/algorand/go-algorand/releases/tag/v3.27.0-stable

# Genesis ID
`mainnet-v1.0`
4 changes: 2 additions & 2 deletions docs/get-details/algorand-networks/testnet.md
@@ -1,10 +1,10 @@
title: TestNet

# Version
`v3.26.0-stable`
`v3.27.0-stable`

# Release Version
https://github.com/algorand/go-algorand/releases/tag/v3.26.0-stable
https://github.com/algorand/go-algorand/releases/tag/v3.27.0-stable

# Genesis ID
`testnet-v1.0`
2 changes: 1 addition & 1 deletion docs/get-started/algokit.md
@@ -153,7 +153,7 @@ If you would like to manually build and deploy the `HelloWorld` smart contract r

```shell
algokit project run build
algokit project run deploy
algokit project deploy
```

This should produce something similar to the following in the VSCode terminal.
47 changes: 47 additions & 0 deletions docs/rest-apis/algod.md
@@ -751,6 +751,53 @@ GET /v2/blocks/{round}/hash
* public


<a name="getblockheader"></a>
### GET /v2/blocks/{round}/header
Get the block header for the block on the given round.
```
GET /v2/blocks/{round}/header
```


**Parameters**

|Type|Name|Description|Schema|
|---|---|---|---|
|**Path**|**round** <br>*required*|The round from which to fetch block header information.|integer|
|**Query**|**format** <br>*optional*|Configures whether the response object is JSON or MessagePack encoded. If not provided, defaults to JSON.|enum (json, msgpack)|


**Responses**

|HTTP Code|Description|Schema|
|---|---|---|
|**200**|Block header.|[Response 200](#getblockheader-response-200)|
|**400**|Bad Request - Non integer number|[ErrorResponse](#errorresponse)|
|**401**|Invalid API Token|[ErrorResponse](#errorresponse)|
|**404**|Non-existent block|[ErrorResponse](#errorresponse)|
|**500**|Internal Error|[ErrorResponse](#errorresponse)|
|**default**|Unknown Error|No Content|

<a name="getblockheader-response-200"></a>
**Response 200**

|Name|Description|Schema|
|---|---|---|
|**blockHeader** <br>*required*|Block header data.|object|


**Produces**

* `application/json`
* `application/msgpack`


**Tags**

* nonparticipating
* public
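As a non-authoritative sketch, the new endpoint can be called like this; the localhost address, port, and the `ALGOD_TOKEN` environment variable are assumptions about your node setup:

```python
# Sketch: build and (optionally) issue a GET /v2/blocks/{round}/header request
# against a local algod node. Address, port, and token source are assumptions.
import os
import urllib.request


def block_header_url(base: str, round_: int, fmt: str = "json") -> str:
    # The optional `format` query selects JSON or MessagePack encoding.
    return f"{base}/v2/blocks/{round_}/header?format={fmt}"


url = block_header_url("http://localhost:8080", 1000)
req = urllib.request.Request(
    url, headers={"X-Algo-API-Token": os.environ.get("ALGOD_TOKEN", "")}
)
# with urllib.request.urlopen(req) as resp:  # uncomment against a running node
#     print(resp.read())
print(url)  # http://localhost:8080/v2/blocks/1000/header?format=json
```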


<a name="getlightblockheaderproof"></a>
### GET /v2/blocks/{round}/lightheader/proof
Gets a proof for a given light block header inside a state proof commitment
6 changes: 4 additions & 2 deletions docs/run-a-node/reference/config.md
@@ -32,13 +32,13 @@ The `algod` process configuration parameters are shown in the table below.

| Property| Description | Default Value |
|------|------|------|
| Version | Version tracks the current version of the defaults so we can migrate old -> new<br>This is specifically important whenever we decide to change the default value<br>for an existing parameter. This field tag must be updated any time we add a new version. | 34 |
| Version | Version tracks the current version of the defaults so we can migrate old -> new<br>This is specifically important whenever we decide to change the default value<br>for an existing parameter. This field tag must be updated any time we add a new version. | 35 |
| Archival | Archival nodes retain a full copy of the block history. Non-Archival nodes will delete old blocks and only retain what's needed to properly validate blockchain messages (the precise number of recent blocks depends on the consensus parameters; currently the last 1321 blocks are required). This means that non-Archival nodes require significantly less storage than Archival nodes. If setting this to true for the first time, the existing ledger may need to be deleted to get the historical values stored as the setting only affects current blocks forward. To do this, shut down the node and delete all .sqlite files within the data/testnet-version directory, except the crash.sqlite file. Restart the node and wait for the node to sync. | false |
| GossipFanout | GossipFanout sets the maximum number of peers the node will connect to with outgoing connections. If the list of peers is less than this setting, fewer connections will be made. The node will not connect to the same peer multiple times (with outgoing connections). | 4 |
| NetAddress | NetAddress is the address and/or port on which a node listens for incoming connections, or blank to ignore incoming connections. Specify an IP and port or just a port. For example, 127.0.0.1:0 will listen on a random port on the localhost. | |
| ReconnectTime | ReconnectTime is deprecated and unused. | 60000000000 |
| PublicAddress | PublicAddress is the public address to connect to that is advertised to other nodes.<br>For MainNet relays, make sure this entry includes the full SRV host name<br>plus the publicly-accessible port number.<br>A valid entry will avoid "self-gossip" and is used for identity exchange<br>to de-duplicate redundant connections | |
| MaxConnectionsPerIP | MaxConnectionsPerIP is the maximum number of connections allowed per IP address. | 15 |
| MaxConnectionsPerIP | MaxConnectionsPerIP is the maximum number of connections allowed per IP address. | 8 |
| PeerPingPeriodSeconds | PeerPingPeriodSeconds is deprecated and unused. | 0 |
| TLSCertFile | TLSCertFile is the certificate file used for the websocket network if provided. | |
| TLSKeyFile | TLSKeyFile is the key file used for the websocket network if provided. | |
@@ -61,6 +61,7 @@ The `algod` process configuration parameters are shown in the table below.
| PriorityPeers | PriorityPeers specifies peer IP addresses that should always get<br>outgoing broadcast messages from this node. | |
| ReservedFDs | ReservedFDs is used to make sure the algod process does not run out of file descriptors (FDs). Algod ensures<br>that RLIMIT_NOFILE >= IncomingConnectionsLimit + RestConnectionsHardLimit +<br>ReservedFDs. ReservedFDs are meant to leave room for short-lived FDs like<br>DNS queries, SQLite files, etc. This parameter shouldn't be changed.<br>If RLIMIT_NOFILE < IncomingConnectionsLimit + RestConnectionsHardLimit + ReservedFDs<br>then either RestConnectionsHardLimit or IncomingConnectionsLimit is decreased. | 256 |
| EndpointAddress | EndpointAddress configures the address the node listens to for REST API calls. Specify an IP and port or just port. For example, 127.0.0.1:0 will listen on a random port on the localhost (preferring 8080). | 127.0.0.1 |
| EnablePrivateNetworkAccessHeader | Respond to Private Network Access preflight requests sent to the node. Useful when a public website is trying to access a node that's hosted on a local network. | false |
| RestReadTimeoutSeconds | RestReadTimeoutSeconds is passed to the API server's REST http.Server implementation. | 15 |
| RestWriteTimeoutSeconds | RestWriteTimeoutSeconds is passed to the API server's REST http.Server implementation. | 120 |
| DNSBootstrapID | DNSBootstrapID specifies the names of a set of DNS SRV records that identify the set of nodes available to connect to.<br>This is applicable to both relay and archival nodes - they are assumed to use the same DNSBootstrapID today.<br>When resolving the bootstrap ID <network> will be replaced by the genesis block's network name. This string uses a URL<br>parsing library and supports optional backup and dedup parameters. 'backup' is used to provide a second DNS entry to use<br>in case the primary is unavailable. dedup is intended to be used to deduplicate SRV records returned from the primary<br>and backup DNS address. If the <name> macro is used in the dedup mask, it must be at the beginning of the expression.<br>This is not typically something a user would configure. For more information see config/dnsbootstrap.go. | &lt;network&gt;.algorand.network?backup=&lt;network&gt;.algorand.net&amp;dedup=&lt;name&gt;.algorand-&lt;network&gt;.(network|net) |
@@ -83,6 +84,7 @@ The `algod` process configuration parameters are shown in the table below.
| TxBacklogAppTxPerSecondRate | TxBacklogAppTxPerSecondRate determines a target app per second rate for the app tx rate limiter | 100 |
| TxBacklogRateLimitingCongestionPct | TxBacklogRateLimitingCongestionRatio determines the backlog filling threshold percentage at which the app limiter kicks in<br>or the tx backlog rate limiter kicks off. | 50 |
| EnableTxBacklogAppRateLimiting | EnableTxBacklogAppRateLimiting controls if an app rate limiter should be attached to the tx backlog enqueue process | true |
| TxBacklogAppRateLimitingCountERLDrops | TxBacklogAppRateLimitingCountERLDrops feeds messages dropped by the ERL congestion manager & rate limiter (enabled by<br>EnableTxBacklogRateLimiting) to the app rate limiter (enabled by EnableTxBacklogAppRateLimiting), so that all TX messages<br>are counted. This provides more accurate rate limiting for the app rate limiter, at the potential expense of additional<br>deserialization overhead. | false |
| EnableTxBacklogRateLimiting | EnableTxBacklogRateLimiting controls if a rate limiter and congestion manager should be attached to the tx backlog enqueue process<br>if enabled, the over-all TXBacklog Size will be larger by MAX_PEERS*TxBacklogReservedCapacityPerPeer | true |
| TxBacklogSize | TxBacklogSize is the queue size used for receiving transactions. default of 26000 to approximate 1 block of transactions<br>if EnableTxBacklogRateLimiting enabled, the over-all size will be larger by MAX_PEERS*TxBacklogReservedCapacityPerPeer | 26000 |
| TxPoolSize | TxPoolSize is the number of transactions in the transaction pool buffer. | 75000 |
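As an illustrative, non-authoritative sketch, defaults from the table above are overridden by placing only the changed keys in a `config.json` file in the node's data directory; the values below are examples, not recommendations:

```json
{
  "Version": 35,
  "MaxConnectionsPerIP": 8,
  "EnablePrivateNetworkAccessHeader": true,
  "EndpointAddress": "127.0.0.1:8080"
}
```

Omitted keys keep their defaults, so a minimal file like this is preferable to copying the whole configuration.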
13 changes: 7 additions & 6 deletions docs/run-a-node/setup/install.md
Expand Up @@ -486,7 +486,7 @@ Genesis hash: SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=

# Sync Node Network using Fast Catchup

Fast Catchup is a new feature and will rapidly update a node using catchpoint snapshots. A new command on goal node is now available for catchup. The entire process should sync a node in minutes rather than hours or days. As an example, the results for a BetaNet fast catchup, at the time of writing this, was a couple minutes to get to the sync point and a few more minutes to sync the remaining blocks since the snapshot. The total blocks synced was around 4.2 million blocks and it finished syncing in under 6 minutes. Actual sync times may vary depending on the number of accounts, number of blocks and the network. Here are the links to get the most recent catchup point snapshot per network. The results include a round to catchup to and the provided catchpoint. Paste into the `goal node catchup` command.
Fast Catchup rapidly updates a node using catchpoint snapshots. The entire process should sync a node in minutes or hours rather than days or weeks. As an example, a MainNet fast catchup at the time of writing took approximately 60 minutes to get to the sync point and a few more minutes to sync the remaining blocks since the snapshot. The total blocks synced was around 45 million, and syncing finished in under 90 minutes. Actual sync times may vary depending on the number of accounts, number of blocks and the network. Here are the links to get the most recent catchup point snapshot per network. The result includes a round to catch up to and the provided catchpoint; paste it into the `goal node catchup` command.

BetaNet
https://algorand-catchpoints.s3.us-east-2.amazonaws.com/channel/betanet/latest.catchpoint
@@ -498,13 +498,10 @@ MainNet
https://algorand-catchpoints.s3.us-east-2.amazonaws.com/channel/mainnet/latest.catchpoint

The results will look similar to this:
`4420000#Q7T2RRTDIRTYESIXKAAFJYFQWG4A3WRA3JIUZVCJ3F4AQ2G2HZRA`
`44850000#AGPHGZ4YJKIFMF3GXRZMPCWFKRX2CYZUXCN2DAGDONV3UVST46UQ`
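The label has the form `ROUND#HASH`. As a sketch, a script can split off the round for logging before handing the full label to `goal` (the label below is the MainNet example above):

```shell
# Split a catchpoint label of the form ROUND#HASH.
CATCHPOINT='44850000#AGPHGZ4YJKIFMF3GXRZMPCWFKRX2CYZUXCN2DAGDONV3UVST46UQ'
ROUND=${CATCHPOINT%%#*}            # everything before the first '#'
echo "catching up to round ${ROUND}"
# goal node catchup "$CATCHPOINT"  # run this against your node
```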

!!! warning
Do **NOT** use fast catchup on an *archival* or relay node. If you ever do it, you need to reset your node and start from scratch.

!!! warning
Fast catchup requires trust in the entity providing the catchpoint. An attacker controlling enough relays and proposing a malicious catchpoint can in theory allow a node to sync to an incorrect state of the blockchain. For full decentralization and no trust requirement, either use a catchpoint generated by one of your archival node (you can read the catchpoint using `goal node status`) or catch up from scratch without fast catchup.
Do **NOT** use fast catchup on an *archival* or non-light relay node. If you do, you will need to reset your node and start from scratch.

Steps:

@@ -532,6 +529,10 @@ Genesis hash: mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=

`goal node catchup 4420000#Q7T2RRTDIRTYESIXKAAFJYFQWG4A3WRA3JIUZVCJ3F4AQ2G2HZRA`

or let the node retrieve the latest catchpoint itself using

`goal node catchup --force`

3) Run another status and results should look something like this showing a Catchpoint status:
`goal node status`

