diff --git a/.go-algorand-beta.version b/.go-algorand-beta.version
index 144af87a..7eff26cb 100644
--- a/.go-algorand-beta.version
+++ b/.go-algorand-beta.version
@@ -1 +1 @@
-v3.26.0-beta
+v4.0.1-beta
diff --git a/.go-algorand-stable.version b/.go-algorand-stable.version
index db69d9be..ad0c0083 100644
--- a/.go-algorand-stable.version
+++ b/.go-algorand-stable.version
@@ -1 +1 @@
-v3.26.0-stable
+v3.27.0-stable
diff --git a/.indexer.version b/.indexer.version
index 40c341bd..a76ccff2 100644
--- a/.indexer.version
+++ b/.indexer.version
@@ -1 +1 @@
-3.6.0
+3.7.1
diff --git a/docs/get-details/algokit/architecture-decisions/2023-01-12_smart-contract-deployment.md b/docs/get-details/algokit/architecture-decisions/2023-01-12_smart-contract-deployment.md
index b906f4b7..0a4fd5db 100644
--- a/docs/get-details/algokit/architecture-decisions/2023-01-12_smart-contract-deployment.md
+++ b/docs/get-details/algokit/architecture-decisions/2023-01-12_smart-contract-deployment.md
@@ -302,7 +302,7 @@ In order to provide confidence in the correctness of a smart contract it's impor
There are a few possible approaches that could be taken by AlgoKit to help facilitate this:
1. **Documentation** - Include documentation that recommends this kind of testing and provides examples for how to implement it
-2. **Manual testing** - Encourage a manual testing approach using (for example) the ABI user interface in dAppFlow, by providing an AlgoKit CLI command that sends the user there along with the ABI definition and contract ID resulting in a manual testing experience for the deployed contract with low friction
+2. **Manual testing** - Encourage a manual testing approach using (for example) the App Lab in Lora, by providing an AlgoKit CLI command that sends the user there along with the ABI definition and contract ID, resulting in a low-friction manual testing experience for the deployed contract
3. **Automated integration tests** - Facilitate automated testing by issuing real transactions against a LocalNet and/or TestNet network
4. **Automated dry run tests** - Facilitate automated testing using the [Dry Run endpoint](https://developer.algorand.org/docs/rest-apis/algod/v2/#post-v2tealdryrun) to simulate what would happen when executing the contract under certain scenarios (e.g. [Graviton](https://github.com/algorand/graviton/blob/main/graviton/README.md))
5. **TEAL emulator** - Facilitate automated testing against a TEAL emulator (e.g. [Algo Builder Runtime](https://algobuilder.dev/api/runtime/index.html))
@@ -372,4 +372,4 @@ There are a few possible approaches that could be taken by AlgoKit to help facil
Based on all of this the suggested option for AlgoKit v1 is **Automated integration tests** since it conforms to the principles well, has prior art across TypeScript and Python that can be utilised and provides developers with a lot of confidence.
-Post v1, it's recommended that dAppFlow integration for exploratory testing and Graviton (or similar) support should be explored to provide a range of options to empower developers with a full suite of techniques they can use.
+Post v1, it's recommended that Lora integration for exploratory testing and Graviton (or similar) support should be explored to provide a range of options to empower developers with a full suite of techniques they can use.
diff --git a/docs/get-details/algorand-networks/betanet.md b/docs/get-details/algorand-networks/betanet.md
index 683a069c..a469ff76 100644
--- a/docs/get-details/algorand-networks/betanet.md
+++ b/docs/get-details/algorand-networks/betanet.md
@@ -4,10 +4,10 @@ title: BetaNet 🔷
🔷 = BetaNet availability only
# Version
-`v3.26.0-beta`
+`v4.0.1-beta`
# Release Version
-https://github.com/algorand/go-algorand/releases/tag/v3.26.0-beta
+https://github.com/algorand/go-algorand/releases/tag/v4.0.1-beta
# Genesis ID
`betanet-v1.0`
diff --git a/docs/get-details/algorand-networks/mainnet.md b/docs/get-details/algorand-networks/mainnet.md
index 14b5fc06..6481cc61 100644
--- a/docs/get-details/algorand-networks/mainnet.md
+++ b/docs/get-details/algorand-networks/mainnet.md
@@ -1,10 +1,10 @@
title: MainNet
# Version
-`v3.26.0-stable`
+`v3.27.0-stable`
# Release Version
-https://github.com/algorand/go-algorand/releases/tag/v3.26.0-stable
+https://github.com/algorand/go-algorand/releases/tag/v3.27.0-stable
# Genesis ID
`mainnet-v1.0`
diff --git a/docs/get-details/algorand-networks/testnet.md b/docs/get-details/algorand-networks/testnet.md
index e94c76c2..81052031 100644
--- a/docs/get-details/algorand-networks/testnet.md
+++ b/docs/get-details/algorand-networks/testnet.md
@@ -1,10 +1,10 @@
title: TestNet
# Version
-`v3.26.0-stable`
+`v3.27.0-stable`
# Release Version
-https://github.com/algorand/go-algorand/releases/tag/v3.26.0-stable
+https://github.com/algorand/go-algorand/releases/tag/v3.27.0-stable
# Genesis ID
`testnet-v1.0`
diff --git a/docs/get-started/algokit.md b/docs/get-started/algokit.md
index b16f3477..2c6c23da 100644
--- a/docs/get-started/algokit.md
+++ b/docs/get-started/algokit.md
@@ -153,7 +153,7 @@ If you would like to manually build and deploy the `HelloWorld` smart contract r
```shell
algokit project run build
-algokit project run deploy
+algokit project deploy
```
This should produce something similar to the following in the VSCode terminal.
diff --git a/docs/rest-apis/algod.md b/docs/rest-apis/algod.md
index 2b7cd770..1adf9cd1 100644
--- a/docs/rest-apis/algod.md
+++ b/docs/rest-apis/algod.md
@@ -751,6 +751,53 @@ GET /v2/blocks/{round}/hash
* public
+
+### GET /v2/blocks/{round}/header
+Get the block header for the block on the given round.
+```
+GET /v2/blocks/{round}/header
+```
+
+
+**Parameters**
+
+|Type|Name|Description|Schema|
+|---|---|---|---|
+|**Path**|**round** <br>*required*|The round from which to fetch block header information.|integer|
+|**Query**|**format** <br>*optional*|Configures whether the response object is JSON or MessagePack encoded. If not provided, defaults to JSON.|enum (json, msgpack)|
+
+
+**Responses**
+
+|HTTP Code|Description|Schema|
+|---|---|---|
+|**200**|Block header.|[Response 200](#getblockheader-response-200)|
+|**400**|Bad Request - Non integer number|[ErrorResponse](#errorresponse)|
+|**401**|Invalid API Token|[ErrorResponse](#errorresponse)|
+|**404**|Non-existent block|[ErrorResponse](#errorresponse)|
+|**500**|Internal Error|[ErrorResponse](#errorresponse)|
+|**default**|Unknown Error|No Content|
+
+
+**Response 200**
+
+|Name|Description|Schema|
+|---|---|---|
+|**blockHeader** <br>*required*|Block header data.|object|
+
+
+**Produces**
+
+* `application/json`
+* `application/msgpack`
+
+
+**Tags**
+
+* nonparticipating
+* public
+
+
### GET /v2/blocks/{round}/lightheader/proof
Gets a proof for a given light block header inside a state proof commitment
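The new block-header endpoint added above is a simple GET; a minimal sketch of building the request URL, assuming a node at `http://localhost:8080` (the address, round, and helper name are illustrative, not part of the API):

```shell
# Illustrative helper: build the algod block-header URL for a given round.
block_header_url() {
  local node="$1" round="$2" format="${3:-json}"
  echo "${node}/v2/blocks/${round}/header?format=${format}"
}

# With a running node, pass the URL to curl along with the API token:
#   curl -H "X-Algo-API-Token: $TOKEN" "$(block_header_url http://localhost:8080 1000)"
block_header_url http://localhost:8080 1000
# prints http://localhost:8080/v2/blocks/1000/header?format=json
```

The `format` parameter defaults to `json` as in the table above; pass `msgpack` as a third argument to request MessagePack encoding.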
diff --git a/docs/run-a-node/reference/config.md b/docs/run-a-node/reference/config.md
index 1a6ceea3..3b82274c 100644
--- a/docs/run-a-node/reference/config.md
+++ b/docs/run-a-node/reference/config.md
@@ -32,13 +32,13 @@ The `algod` process configuration parameters are shown in the table below.
| Property| Description | Default Value |
|------|------|------|
-| Version | Version tracks the current version of the defaults so we can migrate old -> new<br>This is specifically important whenever we decide to change the default value<br>for an existing parameter. This field tag must be updated any time we add a new version. | 34 |
+| Version | Version tracks the current version of the defaults so we can migrate old -> new<br>This is specifically important whenever we decide to change the default value<br>for an existing parameter. This field tag must be updated any time we add a new version. | 35 |
| Archival | Archival nodes retain a full copy of the block history. Non-Archival nodes will delete old blocks and only retain what's need to properly validate blockchain messages (the precise number of recent blocks depends on the consensus parameters. Currently the last 1321 blocks are required). This means that non-Archival nodes require significantly less storage than Archival nodes. If setting this to true for the first time, the existing ledger may need to be deleted to get the historical values stored as the setting only affects current blocks forward. To do this, shutdown the node and delete all .sqlite files within the data/testnet-version directory, except the crash.sqlite file. Restart the node and wait for the node to sync. | false |
| GossipFanout | GossipFanout sets the maximum number of peers the node will connect to with outgoing connections. If the list of peers is less than this setting, fewer connections will be made. The node will not connect to the same peer multiple times (with outgoing connections). | 4 |
| NetAddress | NetAddress is the address and/or port on which a node listens for incoming connections, or blank to ignore incoming connections. Specify an IP and port or just a port. For example, 127.0.0.1:0 will listen on a random port on the localhost. | |
| ReconnectTime | ReconnectTime is deprecated and unused. | 60000000000 |
| PublicAddress | PublicAddress is the public address to connect to that is advertised to other nodes.<br>For MainNet relays, make sure this entry includes the full SRV host name<br>plus the publicly-accessible port number.<br>A valid entry will avoid "self-gossip" and is used for identity exchange<br>to de-duplicate redundant connections | |
-| MaxConnectionsPerIP | MaxConnectionsPerIP is the maximum number of connections allowed per IP address. | 15 |
+| MaxConnectionsPerIP | MaxConnectionsPerIP is the maximum number of connections allowed per IP address. | 8 |
| PeerPingPeriodSeconds | PeerPingPeriodSeconds is deprecated and unused. | 0 |
| TLSCertFile | TLSCertFile is the certificate file used for the websocket network if povided. | |
| TLSKeyFile | TLSKeyFile is the key file used for the websocket network if povided. | |
@@ -61,6 +61,7 @@ The `algod` process configuration parameters are shown in the table below.
| PriorityPeers | PriorityPeers specifies peer IP addresses that should always get<br>outgoing broadcast messages from this node. | |
| ReservedFDs | ReservedFDs is used to make sure the algod process does not run out of file descriptors (FDs). Algod ensures<br>that RLIMIT_NOFILE >= IncomingConnectionsLimit + RestConnectionsHardLimit +<br>ReservedFDs. ReservedFDs are meant to leave room for short-lived FDs like<br>DNS queries, SQLite files, etc. This parameter shouldn't be changed.<br>If RLIMIT_NOFILE < IncomingConnectionsLimit + RestConnectionsHardLimit + ReservedFDs<br>then either RestConnectionsHardLimit or IncomingConnectionsLimit decreased. | 256 |
| EndpointAddress | EndpointAddress configures the address the node listens to for REST API calls. Specify an IP and port or just port. For example, 127.0.0.1:0 will listen on a random port on the localhost (preferring 8080). | 127.0.0.1 |
+| EnablePrivateNetworkAccessHeader | EnablePrivateNetworkAccessHeader controls whether the node responds to Private Network Access preflight requests. Useful when a public website is trying to access a node that's hosted on a local network. | false |
| RestReadTimeoutSeconds | RestReadTimeoutSeconds is passed to the API servers rest http.Server implementation. | 15 |
| RestWriteTimeoutSeconds | RestWriteTimeoutSeconds is passed to the API servers rest http.Server implementation. | 120 |
| DNSBootstrapID | DNSBootstrapID specifies the names of a set of DNS SRV records that identify the set of nodes available to connect to.<br>This is applicable to both relay and archival nodes - they are assumed to use the same DNSBootstrapID today.<br>When resolving the bootstrap ID will be replaced by the genesis block's network name. This string uses a URL<br>parsing library and supports optional backup and dedup parameters. 'backup' is used to provide a second DNS entry to use<br>in case the primary is unavailable. dedup is intended to be used to deduplicate SRV records returned from the primary<br>and backup DNS address. If the macro is used in the dedup mask, it must be at the beginning of the expression.<br>This is not typically something a user would configure. For more information see config/dnsbootstrap.go. | <network>.algorand.network?backup=<network>.algorand.net&dedup=<name>.algorand-<network>.(network|net) |
@@ -83,6 +84,7 @@ The `algod` process configuration parameters are shown in the table below.
| TxBacklogAppTxPerSecondRate | TxBacklogAppTxPerSecondRate determines a target app per second rate for the app tx rate limiter | 100 |
| TxBacklogRateLimitingCongestionPct | TxBacklogRateLimitingCongestionRatio determines the backlog filling threshold percentage at which the app limiter kicks in<br>or the tx backlog rate limiter kicks off. | 50 |
| EnableTxBacklogAppRateLimiting | EnableTxBacklogAppRateLimiting controls if an app rate limiter should be attached to the tx backlog enqueue process | true |
+| TxBacklogAppRateLimitingCountERLDrops | TxBacklogAppRateLimitingCountERLDrops feeds messages dropped by the ERL congestion manager & rate limiter (enabled by<br>EnableTxBacklogRateLimiting) to the app rate limiter (enabled by EnableTxBacklogAppRateLimiting), so that all TX messages<br>are counted. This provides more accurate rate limiting for the app rate limiter, at the potential expense of additional<br>deserialization overhead. | false |
| EnableTxBacklogRateLimiting | EnableTxBacklogRateLimiting controls if a rate limiter and congestion manager should be attached to the tx backlog enqueue process<br>if enabled, the over-all TXBacklog Size will be larger by MAX_PEERS*TxBacklogReservedCapacityPerPeer | true |
| TxBacklogSize | TxBacklogSize is the queue size used for receiving transactions. default of 26000 to approximate 1 block of transactions<br>if EnableTxBacklogRateLimiting enabled, the over-all size will be larger by MAX_PEERS*TxBacklogReservedCapacityPerPeer | 26000 |
| TxPoolSize | TxPoolSize is the number of transactions in the transaction pool buffer. | 75000 |
diff --git a/docs/run-a-node/setup/install.md b/docs/run-a-node/setup/install.md
index a0de4a99..7deeeaea 100644
--- a/docs/run-a-node/setup/install.md
+++ b/docs/run-a-node/setup/install.md
@@ -486,7 +486,7 @@ Genesis hash: SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI=
# Sync Node Network using Fast Catchup
-Fast Catchup is a new feature and will rapidly update a node using catchpoint snapshots. A new command on goal node is now available for catchup. The entire process should sync a node in minutes rather than hours or days. As an example, the results for a BetaNet fast catchup, at the time of writing this, was a couple minutes to get to the sync point and a few more minutes to sync the remaining blocks since the snapshot. The total blocks synced was around 4.2 million blocks and it finished syncing in under 6 minutes. Actual sync times may vary depending on the number of accounts, number of blocks and the network. Here are the links to get the most recent catchup point snapshot per network. The results include a round to catchup to and the provided catchpoint. Paste into the `goal node catchup` command.
+Fast Catchup rapidly updates a node using catchpoint snapshots. The entire process should sync a node in minutes or hours rather than days or weeks. As an example, a MainNet fast catchup at the time of writing took approximately 60 minutes to reach the sync point and a few more minutes to sync the remaining blocks since the snapshot. In total, around 45 million blocks were synced in under 90 minutes. Actual sync times may vary depending on the number of accounts, the number of blocks, and the network. Here are the links to get the most recent catchup point snapshot per network. The result includes a round to catch up to and the corresponding catchpoint. Paste it into the `goal node catchup` command.
BetaNet
https://algorand-catchpoints.s3.us-east-2.amazonaws.com/channel/betanet/latest.catchpoint
@@ -498,13 +498,10 @@ MainNet
https://algorand-catchpoints.s3.us-east-2.amazonaws.com/channel/mainnet/latest.catchpoint
The results will look similar to this:
-`4420000#Q7T2RRTDIRTYESIXKAAFJYFQWG4A3WRA3JIUZVCJ3F4AQ2G2HZRA`
+`44850000#AGPHGZ4YJKIFMF3GXRZMPCWFKRX2CYZUXCN2DAGDONV3UVST46UQ`
!!! warning
- Do **NOT** use fast catchup on an *archival* or relay node. If you ever do it, you need to reset your node and start from scratch.
-
-!!! warning
- Fast catchup requires trust in the entity providing the catchpoint. An attacker controlling enough relays and proposing a malicious catchpoint can in theory allow a node to sync to an incorrect state of the blockchain. For full decentralization and no trust requirement, either use a catchpoint generated by one of your archival node (you can read the catchpoint using `goal node status`) or catch up from scratch without fast catchup.
+ Do **NOT** use fast catchup on an *archival* or non-light relay node. If you do, you will need to reset your node and start from scratch.
Steps:
@@ -532,6 +529,10 @@ Genesis hash: mFgazF+2uRS1tMiL9dsj01hJGySEmPN28B/TjjvpVW0=
`goal node catchup 4420000#Q7T2RRTDIRTYESIXKAAFJYFQWG4A3WRA3JIUZVCJ3F4AQ2G2HZRA`
+or let the node retrieve the latest catchpoint itself using
+
+`goal node catchup --force`
+
3) Run another status and results should look something like this showing a Catchpoint status:
`goal node status`
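The catchpoint labels shown above follow a `<round>#<hash>` shape; a small sketch of splitting one with shell parameter expansion, using the MainNet example value from this page:

```shell
# A catchpoint is "<round>#<hash>" (value taken from the example above).
CATCHPOINT='44850000#AGPHGZ4YJKIFMF3GXRZMPCWFKRX2CYZUXCN2DAGDONV3UVST46UQ'
ROUND="${CATCHPOINT%%#*}"   # everything before the '#': the round to catch up to
HASH="${CATCHPOINT#*#}"     # everything after the '#': the catchpoint hash
echo "round=$ROUND"
echo "hash=$HASH"
# With a running node, the whole label is passed as-is:
#   goal node catchup "$CATCHPOINT"
```

The round portion is useful for a quick sanity check against `goal node status` before starting the catchup.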