Compare commits

...

30 Commits

Author SHA1 Message Date
l0rinc eb74e70f86
Merge 70991e683d into 919e6d01e9 2025-10-07 06:33:32 +02:00
merge-script 919e6d01e9
Merge bitcoin/bitcoin#33489: build: Drop support for EOL macOS 13
1aaaaa078b fuzz: Drop unused workaround after Apple-Clang bump (MarcoFalke)
fadad7a494 Drop support for EOL macOS 13 (MarcoFalke)

Pull request description:

  Now that macOS 13 is EOL (https://en.wikipedia.org/wiki/MacOS_Ventura), it seems odd to still support it.

  (macOS Ventura 13.7.8 received its final security update on 20 Aug 2025: https://support.apple.com/en-us/100100)

  This patch will only be released in version 31.x, another 6 months out from now.

  So:

  * Update the depends build and release note template to drop EOL macOS 13.
  * As a result, update the earliest Xcode to version 16 in CI.
  * Also, bump the macOS CI runner to version 15, to avoid issues when version 14 reaches its EOL in about a year.

  This also allows dropping a small workaround in the fuzz tests and unlocks libcpp hardening (https://github.com/bitcoin/bitcoin/pull/33462).

ACKs for top commit:
  stickies-v:
    re-ACK 1aaaaa078b
  l0rinc:
    code review ACK 1aaaaa078b
  hodlinator:
    re-ACK 1aaaaa078b
  hebasto:
    ACK 1aaaaa078b.

Tree-SHA512: 6d247a8432ef8ea8c6ff2a221472b278f8344346b172980299507f9898bb9e8e16480c128b1f4ca692bcbcc393da2b2fd6895ac5f118bc09e0f30f910529d20c
2025-10-06 12:48:00 -04:00
merge-script 452ea59281
Merge bitcoin/bitcoin#33454: net: support overriding the proxy selection in ConnectNode()
c76de2eea1 net: support overriding the proxy selection in ConnectNode() (Vasil Dimov)

Pull request description:

  Normally `ConnectNode()` would choose whether to use a proxy and which one. Make it possible to override this from the callers, and do the same for `OpenNetworkConnection()`: pass the proxy down to `ConnectNode()`.

  Document both functions.

  This is useful if we want to open connections to IPv4 or IPv6 peers through the Tor SOCKS5 proxy.

  Also have `OpenNetworkConnection()` return whether the connection succeeded or not. This can be used when the caller needs to keep track of how many (successful) connections were opened.
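
  A minimal sketch of how a caller might use the new parameters (hypothetical call site: `connman`, `addr`, `tor_proxy` and the counter are assumed; only the parameter list and the bool return come from this PR):

  ```cpp
  // Hypothetical caller: open an outbound connection through a specific proxy
  // (e.g. the Tor SOCKS5 proxy) regardless of the target address's network.
  const bool opened{connman.OpenNetworkConnection(addr,
                                                  /*fCountFailure=*/false,
                                                  /*grant_outbound=*/{},
                                                  /*pszDest=*/nullptr,
                                                  ConnectionType::OUTBOUND_FULL_RELAY,
                                                  /*use_v2transport=*/true,
                                                  /*proxy_override=*/tor_proxy)};
  if (opened) ++successful_connections; // the new bool return lets callers count successes
  ```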

  ---

  This is part of [#29415 Broadcast own transactions only via short-lived Tor or I2P connections](https://github.com/bitcoin/bitcoin/pull/29415). Putting it in its own PR to reduce the size of #29415 and because it does not depend on the other commits from there.

ACKs for top commit:
  stratospher:
    ACK c76de2e.
  optout21:
    ACK c76de2eea1
  mzumsande:
    Code Review ACK c76de2eea1
  andrewtoth:
    ACK c76de2eea1

Tree-SHA512: 1d266e4280cdb1d0599971fa8b5da58b1b7451635be46abb15c0b823a1e18cf6e7bcba4a365ad198e6fd1afee4097d81a54253fa680c8b386ca6b9d68d795ff0
2025-10-06 12:43:14 -04:00
merge-script a33bd767a3
Merge bitcoin/bitcoin#33464: p2p: Use network-dependent timers for inbound inv scheduling
0f7d4ee4e8 p2p: Use different inbound inv timer per network (Martin Zumsande)
94db966a3b net: use generic network key for addrcache (Martin Zumsande)

Pull request description:

  Currently, `NextInvToInbounds` schedules each round of `inv` at the same time for all inbound peers. It is done this way because, with a separate timer per peer (as is done for outbounds), an attacker could make multiple connections to learn when a transaction arrived (#13298).

  However, having a single timer for inbounds of all networks is also an obvious fingerprinting vector: Connecting to a suspected pair of privacy-network and clearnet addresses and observing the `inv` pattern makes it trivial to confirm or refute that they are the same node.

  This PR changes that so a separate timer is used for each network.
  It generalizes the existing method from `getaddr` caching into a new `CNode` field, `m_network_key`, which is used for both `getaddr` caching and `inv` scheduling and can also be used for any future anti-fingerprinting measures.

ACKs for top commit:
  sipa:
    utACK 0f7d4ee4e8
  stratospher:
    reACK 0f7d4ee.
  naiyoma:
    Tested ACK 0f7d4ee4e8
  danielabrozzoni:
    reACK 0f7d4ee4e8

Tree-SHA512: e197c3005b2522051db432948874320b74c23e01e66988ee1ee11917dac0923f58c1252fa47da24e68b08d7a355d8e5e0a3ccdfa6e4324cb901f21dfa880cd9c
2025-10-03 23:45:17 +01:00
merge-script 2578da69f4
Merge bitcoin/bitcoin#33485: test: set par=2 in default config for functional test framework
dda5228e02 test: set par=2 in default config for functional test framework (Andrew Toth)

Pull request description:

  Depending on the host machine, a default `par` value can spawn up to 15 script verification threads for each node. Running the functional test suite with default `par` can exhaust file descriptors or hit other resource limits when many threads are spawned. These threads are mostly idle and the same code paths are executed with a value of `par=2`. Limit this to 2 for functional tests that do not override the default option.
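
  A hedged sketch of the kind of default this adds (hypothetical helper; the real change lives in the functional test framework's node configuration):

  ```python
  # Hypothetical helper illustrating the default only; not the framework's code.
  DEFAULT_NODE_OPTIONS = {"par": 2}  # cap script-verification threads per node

  def node_args(extra_args=None):
      """Build bitcoind arguments, letting a test override the par=2 default."""
      options = dict(DEFAULT_NODE_OPTIONS)
      for arg in (extra_args or []):           # e.g. ["-par=8"] in a test that needs it
          key, _, value = arg.lstrip("-").partition("=")
          options[key] = value
      return [f"-{k}={v}" for k, v in options.items()]

  print(node_args())            # ['-par=2']
  print(node_args(["-par=8"]))  # ['-par=8']
  ```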

ACKs for top commit:
  maflcko:
    lgtm ACK dda5228e02
  pablomartin4btc:
    ACK dda5228e02
  l0rinc:
    Code review ACK dda5228e02
  theStack:
    ACK dda5228e02

Tree-SHA512: 4459972330ff50ac7391141db6382579de09d84e68959eaeb5f20972bb9daf9aac1bd68355028ded9ee65e838c12dbd53e6f3bb6cdc375d269f666c19a19eaec
2025-10-03 22:36:34 +01:00
merge-script 25dbe4bc86
Merge bitcoin/bitcoin#33533: test: addrman: check isTerrible when time is more than 10min in the future
8e47ed6906 test: addrman: check isTerrible when time is more than 10min in the future (brunoerg)

Pull request description:

  This PR adds test coverage to kill the following mutant (https://corecheck.dev/mutation/src/addrman.cpp#L76):
  ```diff
  diff --git a/src/addrman.cpp b/src/addrman.cpp
  index 9c3a24db90..0ffd349315 100644
  --- a/src/addrman.cpp
  +++ b/src/addrman.cpp
  @@ -73,7 +73,7 @@ bool AddrInfo::IsTerrible(NodeSeconds now) const
       }

       if (nTime > now + 10min) { // came in a flying DeLorean
  -        return true;
  +        return false;
       }
  ```

  When `nTime` is set more than 10 minutes in the future, the addr should be marked as terrible.
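
  A minimal sketch of the property being covered, assuming direct access to `AddrInfo` (the PR's actual test goes through `AddrMan::GetAddr` filtering, as shown in the addrman test diff further down this page):

  ```cpp
  // Sketch only: build an address whose nTime is more than 10 minutes ahead and
  // check that IsTerrible() reports it as terrible ("flying DeLorean").
  CAddress addr{ResolveService("250.252.2.4", 9997), NODE_NONE};
  addr.nTime = Now<NodeSeconds>() + 11min;
  AddrInfo info{addr, /*addrSource=*/ResolveIP("250.1.2.1")};
  BOOST_CHECK(info.IsTerrible(Now<NodeSeconds>()));
  ```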

ACKs for top commit:
  Crypt-iQ:
    crACK 8e47ed6906
  danielabrozzoni:
    tACK 8e47ed6906
  marcofleon:
    Nice, code review ACK 8e47ed6906

Tree-SHA512: b53b3aa234a73ec7808cb1555916ac64dd707f230ec290a1712493ece8e274a060e16d862b31df0f744804ebd3c0c2825c49becb7d3040cc358e48c4002524cb
2025-10-03 20:20:46 +01:00
merge-script cfb0d74698
Merge bitcoin/bitcoin#33121: test: fix p2p_leak_tx.py
14ae71f323 test: make notfound_on_unannounced more reliable (David Gumberg)
99bc552980 test: fix (w)txid confusion in p2p_leak_tx.py (Martin Zumsande)
576dd97cb9 test: increase timeout in p2p_leak_tx.py (Martin Zumsande)

Pull request description:

  This fixes two issues with `p2p_leak_tx.py`:

  1.) #33090: As far as I can see, this is just the randomness of `NextInvToInbounds`/`rand_exp_duration`, which has a probability of `e^-(60s/5s) = 6.14×10^−6` of producing a period > 60s (our waiting time), so the test would fail roughly once every 160k runs. Doubling the timeout should be sufficient to lower the probability drastically (see the sketch after this list).

  2.) The subtest `test_notfound_on_unannounced_tx` has some (w)txid confusion: we send a `MSG_TX`-type getdata with a `wtxid` in it, which necessarily always results in a NOTFOUND. Fixed this, and changed the subtest to be more deterministic based on `mocktime`.
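
  A quick numeric check of the probability quoted in item 1 (assuming, per the PR, an exponentially distributed interval with a 5s mean and a wait of 60s, doubled to 120s):

  ```python
  # Back-of-the-envelope check of the failure probability quoted above.
  import math

  mean_interval_s = 5.0                          # inbound inv broadcast interval
  for wait_s in (60.0, 120.0):                   # old and doubled test timeouts
      p = math.exp(-wait_s / mean_interval_s)    # P(exponential interval > wait)
      print(f"P(interval > {wait_s:.0f}s) = {p:.2e} (~1 in {1 / p:,.0f} runs)")
  # P(interval > 60s) = 6.14e-06 (~1 in 162,755 runs)
  # P(interval > 120s) = 3.78e-11 (~1 in 26,489,122,130 runs)
  ```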

ACKs for top commit:
  stratospher:
    ACK 14ae71f. nice restructuring using mocktime!
  davidgumberg:
    reACK 14ae71f323
  vasild:
    ACK 14ae71f323

Tree-SHA512: be5a4ca7bf56f82b6fa04d90ef9312dc2e6f8ff7ddf70b39d979dc42fbdd823157109b8b5dc46eb7f81ac1e816f40e6966b3c8a7d384aadee01e2189c20d3e3a
2025-10-03 20:06:50 +01:00
merge-script 86eaa4d6cd
Merge bitcoin/bitcoin#33482: contrib: fix macOS deployment with no translations
7b5261f7ef contrib: fix using macdeploy script without translations. (amisha)

Pull request description:

  **Description**
  From what I can tell from https://github.com/bitcoin/bitcoin/blob/master/contrib/macdeploy/macdeployqtplus#L390, Qt translations are optional, so we should be able to build without them; however, when the `translations_dir` flag falls back to its default null value, the script raises an error.

  The script's help text also mentions that adding translation files is optional.

  ```
  ./macdeployqtplus --help
  usage: macdeployqtplus [-h] [-verbose [VERBOSE]] [-no-plugins] [-no-strip] [-translations-dir path] [-zip zip] app-bundle

  Improved version of macdeployqt. Outputs a ready-to-deploy app in a folder "dist" and optionally wraps it in a .zip file. Note, that the "dist" folder will be deleted before deploying on each run. Optionally, Qt translation files
  (.qm) can be added to the bundle.
  ```

  **Steps to reproduce**
  I was following the general steps to set up the app on macOS, but did not download any Qt translations, presuming they were optional per the comment linked above. To reproduce: if you already have translation directories in place, delete them and then try to build; otherwise, don't download them at all and try to build. The build should fail on that flag, since the translations dir was never downloaded.

  **Approach taken**
  I moved the code that adds the language files under the if statement that checks the flag is not null before referencing it (see the sketch below).
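
  A minimal sketch of that move, using names from the macdeployqtplus excerpt shown further down this page:

  ```python
  # Sketch of the reordered logic; config, verbose and applicationBundle are the
  # script's existing objects (see the macdeployqtplus diff below).
  if config.translations_dir:
      translations = Path(config.translations_dir[0])
      if not translations.exists():
          sys.stderr.write(f'Error: Could not find translation dir "{translations}"\n')
          sys.exit(1)
      print("+ Adding Qt translations +")
      regex = re.compile(r'qt_[a-z]*(.qm|_[A-Z]*.qm)')
      lang_files = [x for x in translations.iterdir() if regex.match(x.name)]
      for file in lang_files:
          if verbose:
              print(file.as_posix(), "->", os.path.join(applicationBundle.resourcesPath, file.name))
          shutil.copy2(file.as_posix(), os.path.join(applicationBundle.resourcesPath, file.name))
  # With no -translations-dir given, the whole block is skipped instead of
  # indexing into a None value.
  ```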

ACKs for top commit:
  ismaelsadeeq:
    ACK 7b5261f7ef

Tree-SHA512: 8d51b17569e42c9feb95e1be17b1551c708a05eb44b82c74db0b25e07006b4ee223d64484f8bdb2ee1420f6e571686561ae1c09bd3362f77dcbb507bc5085f86
2025-10-03 15:40:38 +01:00
merge-script 007900ee9b
Merge bitcoin/bitcoin#33434: depends: static libxcb-cursor
eca50854e1 depends: static libxcb_cursor (fanquake)

Pull request description:

  Remove the runtime requirement of `libxcb-cursor`. This library is no longer present on modern Ubuntu.
  Fixes #33432.
  Also related to #32097.

ACKs for top commit:
  davidgumberg:
    Addendum ACK eca50854e1
  willcl-ark:
    Code review ACK eca50854e1

Tree-SHA512: d545a03baf5030de64874b79add87b6ef5f95eb5ca31aa66007ee03554103d2eda5e56dfd4395d0a12e24b2e489457e4f19ed9e6d390351c72a0da630f03cc42
2025-10-03 15:26:20 +01:00
brunoerg 8e47ed6906 test: addrman: check isTerrible when time is more than 10min in the future 2025-10-03 10:24:29 -03:00
merge-script 1ed00a0d39
Merge bitcoin/bitcoin#33504: Mempool: Do not enforce TRUC checks on reorg
06df14ba75 test: add more TRUC reorg coverage (Greg Sanders)
26e71c237d Mempool: Do not enforce TRUC checks on reorg (Greg Sanders)
bbe8e9063c fuzz: don't bypass_limits for most mempool harnesses (Greg Sanders)

Pull request description:

  This was the intended behavior, but our tests didn't cover the scenario where in-block transactions themselves violate TRUC topological constraints.

  The behavior in master can lead to many erroneous evictions during a reorg: the evicted TRUC packages may be very high feerate, may make sense to mine all together in the next block, and are well within the normal anti-DoS chain limits.

  This issue has existed since the merge of https://github.com/bitcoin/bitcoin/pull/28948/files#diff-97c3a52bc5fad452d82670a7fd291800bae20c7bc35bb82686c2c0a4ea7b5b98R956

ACKs for top commit:
  sdaftuar:
    ACK 06df14ba75
  glozow:
    ACK 06df14ba75
  ismaelsadeeq:
    Code review ACK 06df14ba75

Tree-SHA512: bdb6e4dd622ed8b0b11866263fff559fcca6e0ca1c56a884cca9ac4572f0026528a63a9f4c8a0660df2f5efe0766310a30e5df1d6c560f31e4324ea5d4b3c1a8
2025-10-02 13:22:22 +01:00
Vasil Dimov c76de2eea1
net: support overriding the proxy selection in ConnectNode()
Normally `ConnectNode()` would choose whether to use a proxy and which
one. Make it possible to override this from the callers, and do the same
for `OpenNetworkConnection()`: pass the proxy down to `ConnectNode()`.

Document both functions.

This is useful if we want to open connections to IPv4 or IPv6 peers
through the Tor SOCKS5 proxy.

Also have `OpenNetworkConnection()` return whether the connection
succeeded or not. This can be used when the caller needs to keep track
of how many (successful) connections were opened.
2025-10-02 08:39:26 +02:00
Ava Chow 75353a0163
Merge bitcoin/bitcoin#32326: net: improve the interface around FindNode() and avoid a recursive mutex lock
87e7f37918 doc: clarify peer address in getpeerinfo and addnode RPC help (Vasil Dimov)
2a4450ccbb net: change FindNode() to not return a node and rename it (Vasil Dimov)
4268abae1a net: avoid recursive m_nodes_mutex lock in DisconnectNode() (Vasil Dimov)
3a4d1a25cf net: merge AlreadyConnectedToAddress() and FindNode(CNetAddr) (Vasil Dimov)

Pull request description:

  `CConnman::FindNode()` would lock `m_nodes_mutex`, find the node in `m_nodes`, release the mutex and return the node. The current code is safe but it is a dangerous interface where a caller may end up using the node returned from `FindNode()` without owning `m_nodes_mutex` and without having that node's reference count incremented.

  Change `FindNode()` to return a boolean since all but one of its callers used its return value to check whether a node exists and did not do anything else with the return value.

  Remove a recursive lock on `m_nodes_mutex`.

  Rename `FindNode()` to better describe what it does.
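
  A hedged sketch of the hazard and the resulting API shape (hypothetical caller; the real signatures appear in the net.{h,cpp} diffs below):

  ```cpp
  // Old interface: the returned pointer escapes the scope of m_nodes_mutex and
  // carries no extra reference count, so the node could be freed underneath us.
  CNode* peer = connman.FindNode(addr);
  if (peer) peer->fDisconnect = true;   // potential use-after-free

  // New interface: callers only get the yes/no answer most of them needed.
  if (connman.AlreadyConnectedToAddress(addr)) {
      // handle "already connected" without ever touching a raw CNode*
  }
  ```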

ACKs for top commit:
  achow101:
    ACK 87e7f37918
  furszy:
    Code review ACK 87e7f37918
  hodlinator:
    re-ACK 87e7f37918

Tree-SHA512: 44fb64cd1226eca124ed1f447b4a1ebc42cc5c9e8561fc91949bbeaeaa7fa16fcfd664e85ce142e5abe62cb64197c178ca4ca93b3b3217b913e3c498d0b7d1c9
2025-10-01 14:17:22 -07:00
Vasil Dimov 87e7f37918
doc: clarify peer address in getpeerinfo and addnode RPC help
The returned value in `getpeerinfo/addr` could be a hostname as well as
an IP address and the `:port` part could be missing. It is displayed
from `CNode::m_addr_name` which could have been set from RPC `addnode`
where the argument is allowed to be a hostname and an optional port.
2025-10-01 16:39:56 +02:00
Vasil Dimov 2a4450ccbb
net: change FindNode() to not return a node and rename it
All callers of `CConnman::FindNode()` use its return value `CNode*` only
as a boolean null/notnull. So change that method to return `bool`.

This removes the dangerous pattern of handling a `CNode` object (the
return value of `FindNode()`) without holding `CConnman::m_nodes_mutex`
and without having that object's reference count incremented for the
duration of the usage.

Also rename the method to better describe what it does.
2025-10-01 16:39:56 +02:00
Vasil Dimov 4268abae1a
net: avoid recursive m_nodes_mutex lock in DisconnectNode()
Have `CConnman::DisconnectNode()` iterate `m_nodes` itself instead of
using `FindNode()`. This avoids a recursive mutex lock and drops the only
caller of `FindNode()` that used the return value for something other
than a boolean found/not-found.
2025-10-01 16:39:55 +02:00
MarcoFalke 1aaaaa078b
fuzz: Drop unused workaround after Apple-Clang bump 2025-10-01 08:09:34 +02:00
MarcoFalke fadad7a494
Drop support for EOL macOS 13 2025-10-01 08:09:30 +02:00
David Gumberg 14ae71f323 test: make notfound_on_unannounced more reliable
By using mocktime, we will always hit both the notfound
branch and the tx sent branch.
The previous version didn't achieve that due to timing
issues.

Co-authored-by: Martin Zumsande <mzumsande@gmail.com>
2025-09-30 15:57:31 -04:00
Martin Zumsande 99bc552980 test: fix (w)txid confusion in p2p_leak_tx.py
Before, we'd send a MSG_TX with a wtxid in it, which
would always result in a notfound answer
2025-09-30 15:57:31 -04:00
Martin Zumsande 576dd97cb9 test: increase timeout in p2p_leak_tx.py
With a low but not negligible probability on the order
of 10^-6, the exponential timer NextInvToInbounds can lead
to an interval >60s, making the test fail.
Also uses mocktime to speed up the test and fixes a
non-matching on_inv override.

Co-authored-by: Vasil Dimov <vd@FreeBSD.org>
2025-09-30 15:56:17 -04:00
Martin Zumsande 0f7d4ee4e8 p2p: Use different inbound inv timer per network
Currently nodes schedule their invs to all inbound peers at the same time.
It is trivial to make use of this timing pattern for fingerprinting
identities on different networks. Using a separate timer for each network
will make the fingerprinting harder.
2025-09-30 11:17:17 -04:00
Greg Sanders 06df14ba75 test: add more TRUC reorg coverage 2025-09-29 16:25:54 -04:00
Greg Sanders 26e71c237d Mempool: Do not enforce TRUC checks on reorg
Not enforcing TRUC topology on reorg was the intended
behavior, but the appropriate bypass argument was not
checked.

This mistake means we could potentially invalidate a long
chain of perfectly incentive-compatible transactions that
were made historically, including subsequent non-TRUC
transactions, all of which may have been very high feerate.

Lastly, it wastes CPU cycles doing topology checks since
this behavior cannot actually enforce the topology in
general for the reorg setting.
2025-09-29 16:25:54 -04:00
Greg Sanders bbe8e9063c fuzz: don't bypass_limits for most mempool harnesses
Using bypass_limits=true essentially fuzzes only part of a
reorg, and means the TRUC invariants cannot be checked.
Remove most instances of bypassing limits, leaving one
harness able to do so.
2025-09-29 16:25:54 -04:00
Vasil Dimov 3a4d1a25cf
net: merge AlreadyConnectedToAddress() and FindNode(CNetAddr)
`CConnman::AlreadyConnectedToAddress()` is the only caller of
`CConnman::FindNode(CNetAddr)`, so merge the two in one function.

The unit test that checked whether `AlreadyConnectedToAddress()` ignores
the port is now unnecessary, because the function now takes a `CNetAddr`
argument and has no access to the port.
2025-09-29 12:51:52 +02:00
Andrew Toth dda5228e02 test: set par=2 in default config for functional test framework
Depending on the host machine, a default `par` value can spawn up to 15 script verification threads for each node.
Running the functional test suite with default `par` can exhaust file descriptors or hit other resource limits when many threads are spawned.
These threads are mostly idle and the same code paths are executed with a value of `par=2`.
Limit this to 2 for functional tests that do not override the default option.

Co-authored-by: maflcko <6399679+maflcko@users.noreply.github.com>
2025-09-27 16:31:01 -04:00
amisha 7b5261f7ef contrib: fix using macdeploy script without translations.
Qt translations are optional, but the script would error when
'translations_dir' falls back to its default value NULL.

This PR fixes it by moving the set-up of QT translations under
the check for 'translations_dir' presence.
2025-09-26 10:09:30 +05:30
Martin Zumsande 94db966a3b net: use generic network key for addrcache
The generic key can also be used in other places
where behavior between different network identities should
be uncorrelated to avoid fingerprinting.
This also changes RANDOMIZER_ID - since it is not
being persisted to disk, there are no compatibility issues.
2025-09-23 10:56:44 -04:00
fanquake eca50854e1
depends: static libxcb_cursor
Modern Ubuntu isn't shipping with this library installed by default.
Statically link it to remove the need for end-users to install it.

Closes #33432.
2025-09-23 10:43:55 -04:00
28 changed files with 303 additions and 193 deletions

View File

@ -105,7 +105,7 @@ jobs:
name: ${{ matrix.job-name }}
# Use any image to support the xcode-select below, but hardcode version to avoid silent upgrades (and breaks).
# See: https://github.com/actions/runner-images#available-images.
runs-on: macos-14
runs-on: macos-15
# When a contributor maintains a fork of the repo, any pull request they make
# to their own fork, or to the main repository, will trigger two CI runs:
@ -123,10 +123,10 @@ jobs:
include:
- job-type: standard
file-env: './ci/test/00_setup_env_mac_native.sh'
job-name: 'macOS 14 native, arm64, no depends, sqlite only, gui'
job-name: 'macOS native, no depends, sqlite only, gui'
- job-type: fuzz
file-env: './ci/test/00_setup_env_mac_native_fuzz.sh'
job-name: 'macOS 14 native, arm64, fuzz'
job-name: 'macOS native, fuzz'
env:
DANGER_RUN_CI_ON_HOST: 1
@ -145,8 +145,8 @@ jobs:
# Use the earliest Xcode supported by the version of macOS denoted in
# doc/release-notes-empty-template.md and providing at least the
# minimum clang version denoted in doc/dependencies.md.
# See: https://developer.apple.com/documentation/xcode-release-notes/xcode-15-release-notes
sudo xcode-select --switch /Applications/Xcode_15.0.app
# See: https://developer.apple.com/documentation/xcode-release-notes/xcode-16-release-notes
sudo xcode-select --switch /Applications/Xcode_16.0.app
clang --version
- name: Install Homebrew packages

View File

@ -112,7 +112,6 @@ ELF_ALLOWED_LIBRARIES = {
'libfontconfig.so.1', # font support
'libfreetype.so.6', # font parsing
'libdl.so.2', # programming interface to dynamic linker
'libxcb-cursor.so.0',
'libxcb-icccm.so.4',
'libxcb-image.so.0',
'libxcb-shm.so.0',
@ -249,7 +248,7 @@ def check_MACHO_libraries(binary) -> bool:
return ok
def check_MACHO_min_os(binary) -> bool:
if binary.build_version.minos == [13,0,0]:
if binary.build_version.minos == [14,0,0]:
return True
return False

View File

@ -460,18 +460,18 @@ if config.translations_dir:
sys.stderr.write(f"Error: Could not find translation dir \"{config.translations_dir[0]}\"\n")
sys.exit(1)
print("+ Adding Qt translations +")
print("+ Adding Qt translations +")
translations = Path(config.translations_dir[0])
translations = Path(config.translations_dir[0])
regex = re.compile('qt_[a-z]*(.qm|_[A-Z]*.qm)')
regex = re.compile('qt_[a-z]*(.qm|_[A-Z]*.qm)')
lang_files = [x for x in translations.iterdir() if regex.match(x.name)]
lang_files = [x for x in translations.iterdir() if regex.match(x.name)]
for file in lang_files:
if verbose:
print(file.as_posix(), "->", os.path.join(applicationBundle.resourcesPath, file.name))
shutil.copy2(file.as_posix(), os.path.join(applicationBundle.resourcesPath, file.name))
for file in lang_files:
if verbose:
print(file.as_posix(), "->", os.path.join(applicationBundle.resourcesPath, file.name))
shutil.copy2(file.as_posix(), os.path.join(applicationBundle.resourcesPath, file.name))
# ------------------------------------------------

View File

@ -1,4 +1,4 @@
OSX_MIN_VERSION=13.0
OSX_MIN_VERSION=14.0
OSX_SDK_VERSION=14.0
XCODE_VERSION=15.0
XCODE_BUILD_ID=15A240d

View File

@ -6,7 +6,7 @@ $(package)_sha256_hash=0e9c5446dc6f3beb8af6ebfcc9e27bcc6da6fe2860f7fc07b99144dfa
$(package)_dependencies=libxcb libxcb_util_render libxcb_util_image
define $(package)_set_vars
$(package)_config_opts = --disable-static
$(package)_config_opts = --disable-shared
$(package)_config_opts += --disable-dependency-tracking --enable-option-checking
endef

View File

@ -81,8 +81,6 @@ the necessary parts of Qt, the libqrencode and pass `-DBUILD_GUI=ON`. Skip if yo
sudo apt-get install qt6-base-dev qt6-tools-dev qt6-l10n-tools qt6-tools-dev-tools libgl-dev
For Qt 6.5 and later, the `libxcb-cursor0` package must be installed at runtime.
Additionally, to support Wayland protocol for modern desktop environments:
sudo apt install qt6-wayland
@ -133,8 +131,6 @@ the necessary parts of Qt, the libqrencode and pass `-DBUILD_GUI=ON`. Skip if yo
sudo dnf install qt6-qtbase-devel qt6-qttools-devel
For Qt 6.5 and later, the `xcb-util-cursor` package must be installed at runtime.
Additionally, to support Wayland protocol for modern desktop environments:
sudo dnf install qt6-qtwayland
@ -182,8 +178,6 @@ the necessary parts of Qt, the libqrencode and pass `-DBUILD_GUI=ON`. Skip if yo
apk add qt6-qtbase-dev qt6-qttools-dev
For Qt 6.5 and later, the `xcb-util-cursor` package must be installed at runtime.
The GUI will be able to encode addresses in QR codes unless this feature is explicitly disabled. To install libqrencode, run:
apk add libqrencode-dev

View File

@ -36,7 +36,7 @@ Compatibility
==============
Bitcoin Core is supported and tested on operating systems using the
Linux Kernel 3.17+, macOS 13+, and Windows 10+. Bitcoin
Linux Kernel 3.17+, macOS 14+, and Windows 10+. Bitcoin
Core should also work on most other Unix-like systems but is not as
frequently tested on them. It is not recommended to use Bitcoin Core on
unsupported systems.

View File

@ -3,7 +3,7 @@
<plist version="0.9">
<dict>
<key>LSMinimumSystemVersion</key>
<string>13</string>
<string>14</string>
<key>LSArchitecturePriority</key>
<array>

View File

@ -108,7 +108,7 @@ const std::string NET_MESSAGE_TYPE_OTHER = "*other*";
static const uint64_t RANDOMIZER_ID_NETGROUP = 0x6c0edd8036ef4036ULL; // SHA256("netgroup")[0:8]
static const uint64_t RANDOMIZER_ID_LOCALHOSTNONCE = 0xd93e69e2bbfa5735ULL; // SHA256("localhostnonce")[0:8]
static const uint64_t RANDOMIZER_ID_ADDRCACHE = 0x1cf2e4ddd306dda9ULL; // SHA256("addrcache")[0:8]
static const uint64_t RANDOMIZER_ID_NETWORKKEY = 0x0e8a2b136c592a7dULL; // SHA256("networkkey")[0:8]
//
// Global state variables
//
@ -331,42 +331,22 @@ bool IsLocal(const CService& addr)
return mapLocalHost.count(addr) > 0;
}
CNode* CConnman::FindNode(const CNetAddr& ip)
bool CConnman::AlreadyConnectedToHost(const std::string& host) const
{
LOCK(m_nodes_mutex);
for (CNode* pnode : m_nodes) {
if (static_cast<CNetAddr>(pnode->addr) == ip) {
return pnode;
}
}
return nullptr;
return std::ranges::any_of(m_nodes, [&host](CNode* node) { return node->m_addr_name == host; });
}
CNode* CConnman::FindNode(const std::string& addrName)
bool CConnman::AlreadyConnectedToAddressPort(const CService& addr_port) const
{
LOCK(m_nodes_mutex);
for (CNode* pnode : m_nodes) {
if (pnode->m_addr_name == addrName) {
return pnode;
}
}
return nullptr;
return std::ranges::any_of(m_nodes, [&addr_port](CNode* node) { return node->addr == addr_port; });
}
CNode* CConnman::FindNode(const CService& addr)
bool CConnman::AlreadyConnectedToAddress(const CNetAddr& addr) const
{
LOCK(m_nodes_mutex);
for (CNode* pnode : m_nodes) {
if (static_cast<CService>(pnode->addr) == addr) {
return pnode;
}
}
return nullptr;
}
bool CConnman::AlreadyConnectedToAddress(const CAddress& addr)
{
return FindNode(static_cast<CNetAddr>(addr));
return std::ranges::any_of(m_nodes, [&addr](CNode* node) { return node->addr == addr; });
}
bool CConnman::CheckIncomingNonce(uint64_t nonce)
@ -393,7 +373,12 @@ static CService GetBindAddress(const Sock& sock)
return addr_bind;
}
CNode* CConnman::ConnectNode(CAddress addrConnect, const char *pszDest, bool fCountFailure, ConnectionType conn_type, bool use_v2transport)
CNode* CConnman::ConnectNode(CAddress addrConnect,
const char* pszDest,
bool fCountFailure,
ConnectionType conn_type,
bool use_v2transport,
const std::optional<Proxy>& proxy_override)
{
AssertLockNotHeld(m_unused_i2p_sessions_mutex);
assert(conn_type != ConnectionType::INBOUND);
@ -403,10 +388,8 @@ CNode* CConnman::ConnectNode(CAddress addrConnect, const char *pszDest, bool fCo
return nullptr;
// Look for an existing connection
CNode* pnode = FindNode(static_cast<CService>(addrConnect));
if (pnode)
{
LogPrintf("Failed to open new connection, already connected\n");
if (AlreadyConnectedToAddressPort(addrConnect)) {
LogInfo("Failed to open new connection to %s, already connected", addrConnect.ToStringAddrPort());
return nullptr;
}
}
@ -436,9 +419,7 @@ CNode* CConnman::ConnectNode(CAddress addrConnect, const char *pszDest, bool fCo
}
// It is possible that we already have a connection to the IP/port pszDest resolved to.
// In that case, drop the connection that was just created.
LOCK(m_nodes_mutex);
CNode* pnode = FindNode(static_cast<CService>(addrConnect));
if (pnode) {
if (AlreadyConnectedToAddressPort(addrConnect)) {
LogPrintf("Not opening a connection to %s, already connected to %s\n", pszDest, addrConnect.ToStringAddrPort());
return nullptr;
}
@ -463,7 +444,13 @@ CNode* CConnman::ConnectNode(CAddress addrConnect, const char *pszDest, bool fCo
for (auto& target_addr: connect_to) {
if (target_addr.IsValid()) {
const bool use_proxy{GetProxy(target_addr.GetNetwork(), proxy)};
bool use_proxy;
if (proxy_override.has_value()) {
use_proxy = true;
proxy = proxy_override.value();
} else {
use_proxy = GetProxy(target_addr.GetNetwork(), proxy);
}
bool proxyConnectionFailed = false;
if (target_addr.IsI2P() && use_proxy) {
@ -530,6 +517,13 @@ CNode* CConnman::ConnectNode(CAddress addrConnect, const char *pszDest, bool fCo
if (!addr_bind.IsValid()) {
addr_bind = GetBindAddress(*sock);
}
uint64_t network_id = GetDeterministicRandomizer(RANDOMIZER_ID_NETWORKKEY)
.Write(target_addr.GetNetClass())
.Write(addr_bind.GetAddrBytes())
// For outbound connections, the port of the bound address is randomly
// assigned by the OS and would therefore not be useful for seeding.
.Write(0)
.Finalize();
CNode* pnode = new CNode(id,
std::move(sock),
target_addr,
@ -539,6 +533,7 @@ CNode* CConnman::ConnectNode(CAddress addrConnect, const char *pszDest, bool fCo
pszDest ? pszDest : "",
conn_type,
/*inbound_onion=*/false,
network_id,
CNodeOptions{
.permission_flags = permission_flags,
.i2p_sam_session = std::move(i2p_transient_session),
@ -1832,6 +1827,11 @@ void CConnman::CreateNodeFromAcceptedSocket(std::unique_ptr<Sock>&& sock,
ServiceFlags local_services = GetLocalServices();
const bool use_v2transport(local_services & NODE_P2P_V2);
uint64_t network_id = GetDeterministicRandomizer(RANDOMIZER_ID_NETWORKKEY)
.Write(inbound_onion ? NET_ONION : addr.GetNetClass())
.Write(addr_bind.GetAddrBytes())
.Write(addr_bind.GetPort()) // inbound connections use bind port
.Finalize();
CNode* pnode = new CNode(id,
std::move(sock),
CAddress{addr, NODE_NONE},
@ -1841,6 +1841,7 @@ void CConnman::CreateNodeFromAcceptedSocket(std::unique_ptr<Sock>&& sock,
/*addrNameIn=*/"",
ConnectionType::INBOUND,
inbound_onion,
network_id,
CNodeOptions{
.permission_flags = permission_flags,
.prefer_evict = discouraged,
@ -2882,7 +2883,7 @@ void CConnman::ThreadOpenConnections(const std::vector<std::string> connect, std
const bool count_failures{((int)outbound_ipv46_peer_netgroups.size() + outbound_privacy_network_peers) >= std::min(m_max_automatic_connections - 1, 2)};
// Use BIP324 transport when both us and them have NODE_V2_P2P set.
const bool use_v2transport(addrConnect.nServices & GetLocalServices() & NODE_P2P_V2);
OpenNetworkConnection(addrConnect, count_failures, std::move(grant), /*strDest=*/nullptr, conn_type, use_v2transport);
OpenNetworkConnection(addrConnect, count_failures, std::move(grant), /*pszDest=*/nullptr, conn_type, use_v2transport);
}
}
}
@ -2991,7 +2992,13 @@ void CConnman::ThreadOpenAddedConnections()
}
// if successful, this moves the passed grant to the constructed node
void CConnman::OpenNetworkConnection(const CAddress& addrConnect, bool fCountFailure, CountingSemaphoreGrant<>&& grant_outbound, const char *pszDest, ConnectionType conn_type, bool use_v2transport)
bool CConnman::OpenNetworkConnection(const CAddress& addrConnect,
bool fCountFailure,
CountingSemaphoreGrant<>&& grant_outbound,
const char* pszDest,
ConnectionType conn_type,
bool use_v2transport,
const std::optional<Proxy>& proxy_override)
{
AssertLockNotHeld(m_unused_i2p_sessions_mutex);
assert(conn_type != ConnectionType::INBOUND);
@ -3000,23 +3007,24 @@ void CConnman::OpenNetworkConnection(const CAddress& addrConnect, bool fCountFai
// Initiate outbound network connection
//
if (m_interrupt_net->interrupted()) {
return;
return false;
}
if (!fNetworkActive) {
return;
return false;
}
if (!pszDest) {
bool banned_or_discouraged = m_banman && (m_banman->IsDiscouraged(addrConnect) || m_banman->IsBanned(addrConnect));
if (IsLocal(addrConnect) || banned_or_discouraged || AlreadyConnectedToAddress(addrConnect)) {
return;
return false;
}
} else if (FindNode(std::string(pszDest)))
return;
} else if (AlreadyConnectedToHost(pszDest)) {
return false;
}
CNode* pnode = ConnectNode(addrConnect, pszDest, fCountFailure, conn_type, use_v2transport);
CNode* pnode = ConnectNode(addrConnect, pszDest, fCountFailure, conn_type, use_v2transport, proxy_override);
if (!pnode)
return;
return false;
pnode->grantOutbound = std::move(grant_outbound);
m_msgproc->InitializeNode(*pnode, m_local_services);
@ -3034,6 +3042,8 @@ void CConnman::OpenNetworkConnection(const CAddress& addrConnect, bool fCountFai
pnode->ConnectionTypeAsString().c_str(),
pnode->ConnectedThroughNetwork(),
GetNodeCount(ConnectionDirection::Out));
return true;
}
Mutex NetEventsInterface::g_msgproc_mutex;
@ -3529,15 +3539,9 @@ std::vector<CAddress> CConnman::GetAddressesUnsafe(size_t max_addresses, size_t
std::vector<CAddress> CConnman::GetAddresses(CNode& requestor, size_t max_addresses, size_t max_pct)
{
auto local_socket_bytes = requestor.addrBind.GetAddrBytes();
uint64_t cache_id = GetDeterministicRandomizer(RANDOMIZER_ID_ADDRCACHE)
.Write(requestor.ConnectedThroughNetwork())
.Write(local_socket_bytes)
// For outbound connections, the port of the bound address is randomly
// assigned by the OS and would therefore not be useful for seeding.
.Write(requestor.IsInboundConn() ? requestor.addrBind.GetPort() : 0)
.Finalize();
uint64_t network_id = requestor.m_network_key;
const auto current_time = GetTime<std::chrono::microseconds>();
auto r = m_addr_response_caches.emplace(cache_id, CachedAddrResponse{});
auto r = m_addr_response_caches.emplace(network_id, CachedAddrResponse{});
CachedAddrResponse& cache_entry = r.first->second;
if (cache_entry.m_cache_entry_expiration < current_time) { // If emplace() added new one it has expiration 0.
cache_entry.m_addrs_response_cache = GetAddressesUnsafe(max_addresses, max_pct, /*network=*/std::nullopt);
@ -3651,9 +3655,11 @@ void CConnman::GetNodeStats(std::vector<CNodeStats>& vstats) const
bool CConnman::DisconnectNode(const std::string& strNode)
{
LOCK(m_nodes_mutex);
if (CNode* pnode = FindNode(strNode)) {
LogDebug(BCLog::NET, "disconnect by address%s match, %s", (fLogIPs ? strprintf("=%s", strNode) : ""), pnode->DisconnectMsg(fLogIPs));
pnode->fDisconnect = true;
auto it = std::ranges::find_if(m_nodes, [&strNode](CNode* node) { return node->m_addr_name == strNode; });
if (it != m_nodes.end()) {
CNode* node{*it};
LogDebug(BCLog::NET, "disconnect by address%s match, %s", (fLogIPs ? strprintf("=%s", strNode) : ""), node->DisconnectMsg(fLogIPs));
node->fDisconnect = true;
return true;
}
return false;
@ -3814,6 +3820,7 @@ CNode::CNode(NodeId idIn,
const std::string& addrNameIn,
ConnectionType conn_type_in,
bool inbound_onion,
uint64_t network_key,
CNodeOptions&& node_opts)
: m_transport{MakeTransport(idIn, node_opts.use_v2transport, conn_type_in == ConnectionType::INBOUND)},
m_permission_flags{node_opts.permission_flags},
@ -3826,6 +3833,7 @@ CNode::CNode(NodeId idIn,
m_inbound_onion{inbound_onion},
m_prefer_evict{node_opts.prefer_evict},
nKeyedNetGroup{nKeyedNetGroupIn},
m_network_key{network_key},
m_conn_type{conn_type_in},
id{idIn},
nLocalHostNonce{nLocalHostNonceIn},

View File

@ -738,6 +738,10 @@ public:
std::atomic_bool fPauseRecv{false};
std::atomic_bool fPauseSend{false};
/** Network key used to prevent fingerprinting our node across networks.
* Influenced by the network and the bind address (+ bind port for inbounds) */
const uint64_t m_network_key;
const ConnectionType m_conn_type;
/** Move all messages from the received queue to the processing queue. */
@ -889,6 +893,7 @@ public:
const std::string& addrNameIn,
ConnectionType conn_type_in,
bool inbound_onion,
uint64_t network_key,
CNodeOptions&& node_opts = {});
CNode(const CNode&) = delete;
CNode& operator=(const CNode&) = delete;
@ -1143,7 +1148,28 @@ public:
bool GetNetworkActive() const { return fNetworkActive; };
bool GetUseAddrmanOutgoing() const { return m_use_addrman_outgoing; };
void SetNetworkActive(bool active);
void OpenNetworkConnection(const CAddress& addrConnect, bool fCountFailure, CountingSemaphoreGrant<>&& grant_outbound, const char* strDest, ConnectionType conn_type, bool use_v2transport) EXCLUSIVE_LOCKS_REQUIRED(!m_unused_i2p_sessions_mutex);
/**
* Open a new P2P connection and initialize it with the PeerManager at `m_msgproc`.
* @param[in] addrConnect Address to connect to, if `pszDest` is `nullptr`.
* @param[in] fCountFailure Increment the number of connection attempts to this address in Addrman.
* @param[in] grant_outbound Take ownership of this grant, to be released later when the connection is closed.
* @param[in] pszDest Address to resolve and connect to.
* @param[in] conn_type Type of the connection to open, must not be `ConnectionType::INBOUND`.
* @param[in] use_v2transport Use P2P encryption, (aka V2 transport, BIP324).
* @param[in] proxy_override Optional proxy to use and override normal proxy selection.
* @retval true The connection was opened successfully.
* @retval false The connection attempt failed.
*/
bool OpenNetworkConnection(const CAddress& addrConnect,
bool fCountFailure,
CountingSemaphoreGrant<>&& grant_outbound,
const char* pszDest,
ConnectionType conn_type,
bool use_v2transport,
const std::optional<Proxy>& proxy_override = std::nullopt)
EXCLUSIVE_LOCKS_REQUIRED(!m_unused_i2p_sessions_mutex);
bool CheckIncomingNonce(uint64_t nonce);
void ASMapHealthCheck();
@ -1370,18 +1396,49 @@ private:
uint64_t CalculateKeyedNetGroup(const CNetAddr& ad) const;
CNode* FindNode(const CNetAddr& ip);
CNode* FindNode(const std::string& addrName);
CNode* FindNode(const CService& addr);
/**
* Determine whether we're already connected to a given "host:port".
* Note that for inbound connections, the peer is likely using a random outbound
* port on their side, so this will likely not match any inbound connections.
* @param[in] host String of the form "host[:port]", e.g. "localhost" or "localhost:8333" or "1.2.3.4:8333".
* @return true if connected to `host`.
*/
bool AlreadyConnectedToHost(const std::string& host) const;
/**
* Determine whether we're already connected to a given address, in order to
* avoid initiating duplicate connections.
* Determine whether we're already connected to a given address:port.
* Note that for inbound connections, the peer is likely using a random outbound
* port on their side, so this will likely not match any inbound connections.
* @param[in] addr_port Address and port to check.
* @return true if connected to addr_port.
*/
bool AlreadyConnectedToAddress(const CAddress& addr);
bool AlreadyConnectedToAddressPort(const CService& addr_port) const;
/**
* Determine whether we're already connected to a given address.
*/
bool AlreadyConnectedToAddress(const CNetAddr& addr) const;
bool AttemptToEvictConnection();
CNode* ConnectNode(CAddress addrConnect, const char *pszDest, bool fCountFailure, ConnectionType conn_type, bool use_v2transport) EXCLUSIVE_LOCKS_REQUIRED(!m_unused_i2p_sessions_mutex);
/**
* Open a new P2P connection.
* @param[in] addrConnect Address to connect to, if `pszDest` is `nullptr`.
* @param[in] pszDest Address to resolve and connect to.
* @param[in] fCountFailure Increment the number of connection attempts to this address in Addrman.
* @param[in] conn_type Type of the connection to open, must not be `ConnectionType::INBOUND`.
* @param[in] use_v2transport Use P2P encryption, (aka V2 transport, BIP324).
* @param[in] proxy_override Optional proxy to use and override normal proxy selection.
* @return Newly created CNode object or nullptr if the connection failed.
*/
CNode* ConnectNode(CAddress addrConnect,
const char* pszDest,
bool fCountFailure,
ConnectionType conn_type,
bool use_v2transport,
const std::optional<Proxy>& proxy_override)
EXCLUSIVE_LOCKS_REQUIRED(!m_unused_i2p_sessions_mutex);
void AddWhitelistPermissionFlags(NetPermissionFlags& flags, std::optional<CNetAddr> addr, const std::vector<NetWhitelistPermissions>& ranges) const;
void DeleteNode(CNode* pnode);

View File

@ -807,7 +807,7 @@ private:
uint32_t GetFetchFlags(const Peer& peer) const;
std::atomic<std::chrono::microseconds> m_next_inv_to_inbounds{0us};
std::map<uint64_t, std::chrono::microseconds> m_next_inv_to_inbounds_per_network_key GUARDED_BY(g_msgproc_mutex);
/** Number of nodes with fSyncStarted. */
int nSyncStarted GUARDED_BY(cs_main) = 0;
@ -837,12 +837,14 @@ private:
/**
* For sending `inv`s to inbound peers, we use a single (exponentially
* distributed) timer for all peers. If we used a separate timer for each
* distributed) timer for all peers with the same network key. If we used a separate timer for each
* peer, a spy node could make multiple inbound connections to us to
* accurately determine when we received the transaction (and potentially
* determine the transaction's origin). */
* accurately determine when we received a transaction (and potentially
* determine the transaction's origin). Each network key has its own timer
* to make fingerprinting harder. */
std::chrono::microseconds NextInvToInbounds(std::chrono::microseconds now,
std::chrono::seconds average_interval) EXCLUSIVE_LOCKS_REQUIRED(g_msgproc_mutex);
std::chrono::seconds average_interval,
uint64_t network_key) EXCLUSIVE_LOCKS_REQUIRED(g_msgproc_mutex);
// All of the following cache a recent block, and are protected by m_most_recent_block_mutex
@ -1143,15 +1145,15 @@ static bool CanServeWitnesses(const Peer& peer)
}
std::chrono::microseconds PeerManagerImpl::NextInvToInbounds(std::chrono::microseconds now,
std::chrono::seconds average_interval)
std::chrono::seconds average_interval,
uint64_t network_key)
{
if (m_next_inv_to_inbounds.load() < now) {
// If this function were called from multiple threads simultaneously
// it would possible that both update the next send variable, and return a different result to their caller.
// This is not possible in practice as only the net processing thread invokes this function.
m_next_inv_to_inbounds = now + m_rng.rand_exp_duration(average_interval);
auto [it, inserted] = m_next_inv_to_inbounds_per_network_key.try_emplace(network_key, 0us);
auto& timer{it->second};
if (timer < now) {
timer = now + m_rng.rand_exp_duration(average_interval);
}
return m_next_inv_to_inbounds;
return timer;
}
bool PeerManagerImpl::IsBlockRequested(const uint256& hash)
@ -5715,7 +5717,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
if (tx_relay->m_next_inv_send_time < current_time) {
fSendTrickle = true;
if (pto->IsInboundConn()) {
tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL);
tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL, pto->m_network_key);
} else {
tx_relay->m_next_inv_send_time = current_time + m_rng.rand_exp_duration(OUTBOUND_INVENTORY_BROADCAST_INTERVAL);
}

View File

@ -130,7 +130,7 @@ static RPCHelpMan getpeerinfo()
{
{
{RPCResult::Type::NUM, "id", "Peer index"},
{RPCResult::Type::STR, "addr", "(host:port) The IP address and port of the peer"},
{RPCResult::Type::STR, "addr", "(host:port) The IP address/hostname optionally followed by :port of the peer"},
{RPCResult::Type::STR, "addrbind", /*optional=*/true, "(ip:port) Bind address of the connection to the peer"},
{RPCResult::Type::STR, "addrlocal", /*optional=*/true, "(ip:port) Local address as reported by the peer"},
{RPCResult::Type::STR, "network", "Network (" + Join(GetNetworkNames(/*append_unroutable=*/true), ", ") + ")"},
@ -322,7 +322,7 @@ static RPCHelpMan addnode()
strprintf("Addnode connections are limited to %u at a time", MAX_ADDNODE_CONNECTIONS) +
" and are counted separately from the -maxconnections limit.\n",
{
{"node", RPCArg::Type::STR, RPCArg::Optional::NO, "The address of the peer to connect to"},
{"node", RPCArg::Type::STR, RPCArg::Optional::NO, "The IP address/hostname optionally followed by :port of the peer to connect to"},
{"command", RPCArg::Type::STR, RPCArg::Optional::NO, "'add' to add a node to the list, 'remove' to remove a node from the list, 'onetry' to try a connection to the node once"},
{"v2transport", RPCArg::Type::BOOL, RPCArg::DefaultHint{"set by -v2transport"}, "Attempt to connect using BIP324 v2 transport protocol (ignored for 'remove' command)"},
},

View File

@ -459,10 +459,16 @@ BOOST_AUTO_TEST_CASE(getaddr_unfiltered)
addrman->Attempt(addr3, /*fCountFailure=*/true, /*time=*/Now<NodeSeconds>() - 61s);
}
// Set time more than 10 minutes in the future (flying DeLorean), so this
// addr should be isTerrible = true
CAddress addr4 = CAddress(ResolveService("250.252.2.4", 9997), NODE_NONE);
addr4.nTime = Now<NodeSeconds>() + 11min;
BOOST_CHECK(addrman->Add({addr4}, source));
// GetAddr filtered by quality (i.e. not IsTerrible) should only return addr1
BOOST_CHECK_EQUAL(addrman->GetAddr(/*max_addresses=*/0, /*max_pct=*/0, /*network=*/std::nullopt).size(), 1U);
// Unfiltered GetAddr should return all addrs
BOOST_CHECK_EQUAL(addrman->GetAddr(/*max_addresses=*/0, /*max_pct=*/0, /*network=*/std::nullopt, /*filtered=*/false).size(), 3U);
BOOST_CHECK_EQUAL(addrman->GetAddr(/*max_addresses=*/0, /*max_pct=*/0, /*network=*/std::nullopt, /*filtered=*/false).size(), 4U);
}
BOOST_AUTO_TEST_CASE(caddrinfo_get_tried_bucket_legacy)

View File

@ -62,7 +62,8 @@ BOOST_AUTO_TEST_CASE(outbound_slow_chain_eviction)
CAddress(),
/*addrNameIn=*/"",
ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/0};
connman.Handshake(
/*node=*/dummyNode1,
@ -128,7 +129,8 @@ void AddRandomOutboundPeer(NodeId& id, std::vector<CNode*>& vNodes, PeerManager&
CAddress(),
/*addrNameIn=*/"",
connType,
/*inbound_onion=*/false});
/*inbound_onion=*/false,
/*network_key=*/0});
CNode &node = *vNodes.back();
node.SetCommonVersion(PROTOCOL_VERSION);
@ -327,7 +329,8 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
CAddress(),
/*addrNameIn=*/"",
ConnectionType::INBOUND,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/1};
nodes[0]->SetCommonVersion(PROTOCOL_VERSION);
peerLogic->InitializeNode(*nodes[0], NODE_NETWORK);
nodes[0]->fSuccessfullyConnected = true;
@ -347,7 +350,8 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
CAddress(),
/*addrNameIn=*/"",
ConnectionType::INBOUND,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/1};
nodes[1]->SetCommonVersion(PROTOCOL_VERSION);
peerLogic->InitializeNode(*nodes[1], NODE_NETWORK);
nodes[1]->fSuccessfullyConnected = true;
@ -377,7 +381,8 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
CAddress(),
/*addrNameIn=*/"",
ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/2};
nodes[2]->SetCommonVersion(PROTOCOL_VERSION);
peerLogic->InitializeNode(*nodes[2], NODE_NETWORK);
nodes[2]->fSuccessfullyConnected = true;
@ -419,7 +424,8 @@ BOOST_AUTO_TEST_CASE(DoS_bantime)
CAddress(),
/*addrNameIn=*/"",
ConnectionType::INBOUND,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/1};
dummyNode.SetCommonVersion(PROTOCOL_VERSION);
peerLogic->InitializeNode(dummyNode, NODE_NETWORK);
dummyNode.fSuccessfullyConnected = true;

View File

@ -177,7 +177,7 @@ FUZZ_TARGET(connman, .init = initialize_connman)
/*addrConnect=*/random_address,
/*fCountFailure=*/fuzzed_data_provider.ConsumeBool(),
/*grant_outbound=*/{},
/*strDest=*/fuzzed_data_provider.ConsumeBool() ? nullptr : random_string.c_str(),
/*pszDest=*/fuzzed_data_provider.ConsumeBool() ? nullptr : random_string.c_str(),
/*conn_type=*/conn_type,
/*use_v2transport=*/fuzzed_data_provider.ConsumeBool());
},

View File

@ -75,7 +75,7 @@ auto& FuzzTargets()
void FuzzFrameworkRegisterTarget(std::string_view name, TypeTestOneInput target, FuzzTargetOptions opts)
{
const auto [it, ins]{FuzzTargets().try_emplace(name, FuzzTarget /* temporary can be dropped after Apple-Clang-16 ? */ {std::move(target), std::move(opts)})};
const auto [it, ins]{FuzzTargets().try_emplace(name, std::move(target), std::move(opts))};
Assert(ins);
}

View File

@ -70,7 +70,7 @@ void HeadersSyncSetup::ResetAndInitialize()
for (auto conn_type : conn_types) {
CAddress addr{};
m_connections.push_back(new CNode(id++, nullptr, addr, 0, 0, addr, "", conn_type, false));
m_connections.push_back(new CNode(id++, nullptr, addr, 0, 0, addr, "", conn_type, false, 0));
CNode& p2p_node = *m_connections.back();
connman.Handshake(

View File

@ -325,7 +325,7 @@ FUZZ_TARGET(ephemeral_package_eval, .init = initialize_tx_pool)
return ProcessNewPackage(chainstate, tx_pool, txs, /*test_accept=*/single_submit, /*client_maxfeerate=*/{}));
const auto res = WITH_LOCK(::cs_main, return AcceptToMemoryPool(chainstate, txs.back(), GetTime(),
/*bypass_limits=*/fuzzed_data_provider.ConsumeBool(), /*test_accept=*/!single_submit));
/*bypass_limits=*/false, /*test_accept=*/!single_submit));
if (!single_submit && result_package.m_state.GetResult() != PackageValidationResult::PCKG_POLICY) {
// We don't know anything about the validity since transactions were randomly generated, so

View File

@ -296,7 +296,6 @@ FUZZ_TARGET(tx_pool_standard, .init = initialize_tx_pool)
std::set<CTransactionRef> added;
auto txr = std::make_shared<TransactionsDelta>(removed, added);
node.validation_signals->RegisterSharedValidationInterface(txr);
const bool bypass_limits = fuzzed_data_provider.ConsumeBool();
// Make sure ProcessNewPackage on one transaction works.
// The result is not guaranteed to be the same as what is returned by ATMP.
@ -311,7 +310,7 @@ FUZZ_TARGET(tx_pool_standard, .init = initialize_tx_pool)
it->second.m_result_type == MempoolAcceptResult::ResultType::INVALID);
}
const auto res = WITH_LOCK(::cs_main, return AcceptToMemoryPool(chainstate, tx, GetTime(), bypass_limits, /*test_accept=*/false));
const auto res = WITH_LOCK(::cs_main, return AcceptToMemoryPool(chainstate, tx, GetTime(), /*bypass_limits=*/false, /*test_accept=*/false));
const bool accepted = res.m_result_type == MempoolAcceptResult::ResultType::VALID;
node.validation_signals->SyncWithValidationInterfaceQueue();
node.validation_signals->UnregisterSharedValidationInterface(txr);
@ -394,6 +393,9 @@ FUZZ_TARGET(tx_pool, .init = initialize_tx_pool)
chainstate.SetMempool(&tx_pool);
// If we ever bypass limits, do not do TRUC invariants checks
bool ever_bypassed_limits{false};
LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 300)
{
const auto mut_tx = ConsumeTransaction(fuzzed_data_provider, txids);
@ -412,13 +414,17 @@ FUZZ_TARGET(tx_pool, .init = initialize_tx_pool)
tx_pool.PrioritiseTransaction(txid, delta);
}
const bool bypass_limits{fuzzed_data_provider.ConsumeBool()};
ever_bypassed_limits |= bypass_limits;
const auto tx = MakeTransactionRef(mut_tx);
const bool bypass_limits = fuzzed_data_provider.ConsumeBool();
const auto res = WITH_LOCK(::cs_main, return AcceptToMemoryPool(chainstate, tx, GetTime(), bypass_limits, /*test_accept=*/false));
const bool accepted = res.m_result_type == MempoolAcceptResult::ResultType::VALID;
if (accepted) {
txids.push_back(tx->GetHash());
CheckMempoolTRUCInvariants(tx_pool);
if (!ever_bypassed_limits) {
CheckMempoolTRUCInvariants(tx_pool);
}
}
}
Finish(fuzzed_data_provider, tx_pool, chainstate);

View File

@ -275,6 +275,8 @@ auto ConsumeNode(FuzzedDataProvider& fuzzed_data_provider, const std::optional<N
const std::string addr_name = fuzzed_data_provider.ConsumeRandomLengthString(64);
const ConnectionType conn_type = fuzzed_data_provider.PickValueInArray(ALL_CONNECTION_TYPES);
const bool inbound_onion{conn_type == ConnectionType::INBOUND ? fuzzed_data_provider.ConsumeBool() : false};
const uint64_t network_id = fuzzed_data_provider.ConsumeIntegral<uint64_t>();
NetPermissionFlags permission_flags = ConsumeWeakEnum(fuzzed_data_provider, ALL_NET_PERMISSION_FLAGS);
if constexpr (ReturnUniquePtr) {
return std::make_unique<CNode>(node_id,
@ -286,6 +288,7 @@ auto ConsumeNode(FuzzedDataProvider& fuzzed_data_provider, const std::optional<N
addr_name,
conn_type,
inbound_onion,
network_id,
CNodeOptions{ .permission_flags = permission_flags });
} else {
return CNode{node_id,
@ -297,6 +300,7 @@ auto ConsumeNode(FuzzedDataProvider& fuzzed_data_provider, const std::optional<N
addr_name,
conn_type,
inbound_onion,
network_id,
CNodeOptions{ .permission_flags = permission_flags }};
}
}

View File

@ -72,7 +72,8 @@ void AddPeer(NodeId& id, std::vector<CNode*>& nodes, PeerManager& peerman, Connm
CAddress{},
/*addrNameIn=*/"",
conn_type,
/*inbound_onion=*/inbound_onion});
/*inbound_onion=*/inbound_onion,
/*network_key=*/0});
CNode& node = *nodes.back();
node.SetCommonVersion(PROTOCOL_VERSION);
@ -151,15 +152,8 @@ BOOST_FIXTURE_TEST_CASE(test_addnode_getaddednodeinfo_and_connection_detection,
}
BOOST_TEST_MESSAGE("\nCheck that all connected peers are correctly detected as connected");
for (auto node : connman->TestNodes()) {
BOOST_CHECK(connman->AlreadyConnectedPublic(node->addr));
}
BOOST_TEST_MESSAGE("\nCheck that peers with the same addresses as connected peers but different ports are detected as connected.");
for (auto node : connman->TestNodes()) {
uint16_t changed_port = node->addr.GetPort() + 1;
CService address_with_changed_port{node->addr, changed_port};
BOOST_CHECK(connman->AlreadyConnectedPublic(CAddress{address_with_changed_port, NODE_NONE}));
for (const auto& node : connman->TestNodes()) {
BOOST_CHECK(connman->AlreadyConnectedToAddressPublic(node->addr));
}
// Clean up

View File

@ -67,7 +67,8 @@ BOOST_AUTO_TEST_CASE(cnode_simple_test)
CAddress(),
pszDest,
ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false);
/*inbound_onion=*/false,
/*network_key=*/0);
BOOST_CHECK(pnode1->IsFullOutboundConn() == true);
BOOST_CHECK(pnode1->IsManualConn() == false);
BOOST_CHECK(pnode1->IsBlockOnlyConn() == false);
@ -85,7 +86,8 @@ BOOST_AUTO_TEST_CASE(cnode_simple_test)
CAddress(),
pszDest,
ConnectionType::INBOUND,
/*inbound_onion=*/false);
/*inbound_onion=*/false,
/*network_key=*/1);
BOOST_CHECK(pnode2->IsFullOutboundConn() == false);
BOOST_CHECK(pnode2->IsManualConn() == false);
BOOST_CHECK(pnode2->IsBlockOnlyConn() == false);
@ -103,7 +105,8 @@ BOOST_AUTO_TEST_CASE(cnode_simple_test)
CAddress(),
pszDest,
ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false);
/*inbound_onion=*/false,
/*network_key=*/2);
BOOST_CHECK(pnode3->IsFullOutboundConn() == true);
BOOST_CHECK(pnode3->IsManualConn() == false);
BOOST_CHECK(pnode3->IsBlockOnlyConn() == false);
@ -121,7 +124,8 @@ BOOST_AUTO_TEST_CASE(cnode_simple_test)
CAddress(),
pszDest,
ConnectionType::INBOUND,
/*inbound_onion=*/true);
/*inbound_onion=*/true,
/*network_key=*/3);
BOOST_CHECK(pnode4->IsFullOutboundConn() == false);
BOOST_CHECK(pnode4->IsManualConn() == false);
BOOST_CHECK(pnode4->IsBlockOnlyConn() == false);
@ -613,7 +617,8 @@ BOOST_AUTO_TEST_CASE(ipv4_peer_with_ipv6_addrMe_test)
CAddress{},
/*pszDest=*/std::string{},
ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false);
/*inbound_onion=*/false,
/*network_key=*/0);
pnode->fSuccessfullyConnected.store(true);
// the peer claims to be reaching us via IPv6
@ -667,7 +672,8 @@ BOOST_AUTO_TEST_CASE(get_local_addr_for_peer_port)
/*addrBindIn=*/CService{},
/*addrNameIn=*/std::string{},
/*conn_type_in=*/ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/0};
peer_out.fSuccessfullyConnected = true;
peer_out.SetAddrLocal(peer_us);
@ -688,7 +694,8 @@ BOOST_AUTO_TEST_CASE(get_local_addr_for_peer_port)
/*addrBindIn=*/CService{},
/*addrNameIn=*/std::string{},
/*conn_type_in=*/ConnectionType::INBOUND,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/1};
peer_in.fSuccessfullyConnected = true;
peer_in.SetAddrLocal(peer_us);
@ -825,7 +832,8 @@ BOOST_AUTO_TEST_CASE(initial_advertise_from_version_message)
/*addrBindIn=*/CService{},
/*addrNameIn=*/std::string{},
/*conn_type_in=*/ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false};
/*inbound_onion=*/false,
/*network_key=*/2};
const uint64_t services{NODE_NETWORK | NODE_WITNESS};
const int64_t time{0};
@ -900,7 +908,8 @@ BOOST_AUTO_TEST_CASE(advertise_local_address)
CAddress{},
/*pszDest=*/std::string{},
ConnectionType::OUTBOUND_FULL_RELAY,
/*inbound_onion=*/false);
/*inbound_onion=*/false,
/*network_key=*/0);
};
g_reachable_nets.Add(NET_CJDNS);

View File

@ -116,7 +116,7 @@ bool ConnmanTestMsg::ReceiveMsgFrom(CNode& node, CSerializedNetMsg&& ser_msg) co
CNode* ConnmanTestMsg::ConnectNodePublic(PeerManager& peerman, const char* pszDest, ConnectionType conn_type)
{
CNode* node = ConnectNode(CAddress{}, pszDest, /*fCountFailure=*/false, conn_type, /*use_v2transport=*/true);
CNode* node = ConnectNode(CAddress{}, pszDest, /*fCountFailure=*/false, conn_type, /*use_v2transport=*/true, /*proxy_override=*/std::nullopt);
if (!node) return nullptr;
node->SetCommonVersion(PROTOCOL_VERSION);
peerman.InitializeNode(*node, ServiceFlags(NODE_NETWORK | NODE_WITNESS));

View File

@ -107,7 +107,7 @@ struct ConnmanTestMsg : public CConnman {
bool ReceiveMsgFrom(CNode& node, CSerializedNetMsg&& ser_msg) const;
void FlushSendBuffer(CNode& node) const;
bool AlreadyConnectedPublic(const CAddress& addr) { return AlreadyConnectedToAddress(addr); };
bool AlreadyConnectedToAddressPublic(const CNetAddr& addr) { return AlreadyConnectedToAddress(addr); };
CNode* ConnectNodePublic(PeerManager& peerman, const char* pszDest, ConnectionType conn_type)
EXCLUSIVE_LOCKS_REQUIRED(!m_unused_i2p_sessions_mutex);

View File

@ -1042,26 +1042,28 @@ bool MemPoolAccept::PreChecks(ATMPArgs& args, Workspace& ws)
// Even though just checking direct mempool parents for inheritance would be sufficient, we
// check using the full ancestor set here because it's more convenient to use what we have
// already calculated.
if (const auto err{SingleTRUCChecks(ws.m_ptx, ws.m_ancestors, ws.m_conflicts, ws.m_vsize)}) {
// Single transaction contexts only.
if (args.m_allow_sibling_eviction && err->second != nullptr) {
// We should only be considering where replacement is considered valid as well.
Assume(args.m_allow_replacement);
if (!args.m_bypass_limits) {
if (const auto err{SingleTRUCChecks(ws.m_ptx, ws.m_ancestors, ws.m_conflicts, ws.m_vsize)}) {
// Single transaction contexts only.
if (args.m_allow_sibling_eviction && err->second != nullptr) {
// We should only be considering where replacement is considered valid as well.
Assume(args.m_allow_replacement);
// Potential sibling eviction. Add the sibling to our list of mempool conflicts to be
// included in RBF checks.
ws.m_conflicts.insert(err->second->GetHash());
// Adding the sibling to m_iters_conflicting here means that it doesn't count towards
// RBF Carve Out above. This is correct, since removing to-be-replaced transactions from
// the descendant count is done separately in SingleTRUCChecks for TRUC transactions.
ws.m_iters_conflicting.insert(m_pool.GetIter(err->second->GetHash()).value());
ws.m_sibling_eviction = true;
// The sibling will be treated as part of the to-be-replaced set in ReplacementChecks.
// Note that we are not checking whether it opts in to replaceability via BIP125 or TRUC
// (which is normally done in PreChecks). However, the only way a TRUC transaction can
// have a non-TRUC and non-BIP125 descendant is due to a reorg.
} else {
return state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "TRUC-violation", err->first);
// Potential sibling eviction. Add the sibling to our list of mempool conflicts to be
// included in RBF checks.
ws.m_conflicts.insert(err->second->GetHash());
// Adding the sibling to m_iters_conflicting here means that it doesn't count towards
// RBF Carve Out above. This is correct, since removing to-be-replaced transactions from
// the descendant count is done separately in SingleTRUCChecks for TRUC transactions.
ws.m_iters_conflicting.insert(m_pool.GetIter(err->second->GetHash()).value());
ws.m_sibling_eviction = true;
// The sibling will be treated as part of the to-be-replaced set in ReplacementChecks.
// Note that we are not checking whether it opts in to replaceability via BIP125 or TRUC
// (which is normally done in PreChecks). However, the only way a TRUC transaction can
// have a non-TRUC and non-BIP125 descendant is due to a reorg.
} else {
return state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "TRUC-violation", err->first);
}
}
}
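The hunk above wraps the TRUC checks in if (!args.m_bypass_limits), so submissions that bypass mempool limits (as when transactions are re-added after a reorg) are no longer rejected with "TRUC-violation". A schematic sketch of the guard, with stand-in types rather than the real MemPoolAccept code:

#include <optional>
#include <string>

struct ATMPArgs {
    bool m_bypass_limits{false}; // set when transactions are re-added after a reorg
};

// Stand-in for the TRUC topology checks; returns an error string on a violation.
std::optional<std::string> SingleTRUCChecksStub()
{
    return std::nullopt;
}

bool PreChecksSketch(const ATMPArgs& args)
{
    if (!args.m_bypass_limits) {
        if (const auto err = SingleTRUCChecksStub()) {
            return false; // would map to TX_MEMPOOL_POLICY / "TRUC-violation"
        }
    }
    return true; // with limits bypassed, the topology checks are skipped entirely
}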

View File

@ -165,23 +165,36 @@ class MempoolTRUC(BitcoinTestFramework):
def test_truc_reorg(self):
node = self.nodes[0]
self.log.info("Test that, during a reorg, TRUC rules are not enforced")
tx_v2_block = self.wallet.send_self_transfer(from_node=node, version=2)
tx_v3_block = self.wallet.send_self_transfer(from_node=node, version=3)
tx_v3_block2 = self.wallet.send_self_transfer(from_node=node, version=3)
self.check_mempool([tx_v3_block["txid"], tx_v2_block["txid"], tx_v3_block2["txid"]])
self.check_mempool([])
# Testing 2<-3 versions allowed
tx_v2_block = self.wallet.create_self_transfer(version=2)
# Testing 3<-2 versions allowed
tx_v3_block = self.wallet.create_self_transfer(version=3)
# Testing overly-large child size
tx_v3_block2 = self.wallet.create_self_transfer(version=3)
# Also create a linear chain of 3 TRUC transactions that will be mined directly, followed by one v2 child added to the mempool after the block is made
tx_chain_1 = self.wallet.create_self_transfer(version=3)
tx_chain_2 = self.wallet.create_self_transfer(utxo_to_spend=tx_chain_1["new_utxo"], version=3)
tx_chain_3 = self.wallet.create_self_transfer(utxo_to_spend=tx_chain_2["new_utxo"], version=3)
tx_to_mine = [tx_v3_block["hex"], tx_v2_block["hex"], tx_v3_block2["hex"], tx_chain_1["hex"], tx_chain_2["hex"], tx_chain_3["hex"]]
block = self.generateblock(node, output="raw(42)", transactions=tx_to_mine)
block = self.generate(node, 1)
self.check_mempool([])
tx_v2_from_v3 = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=tx_v3_block["new_utxo"], version=2)
tx_v3_from_v2 = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=tx_v2_block["new_utxo"], version=3)
tx_v3_child_large = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=tx_v3_block2["new_utxo"], target_vsize=1250, version=3)
assert_greater_than(node.getmempoolentry(tx_v3_child_large["txid"])["vsize"], TRUC_CHILD_MAX_VSIZE)
self.check_mempool([tx_v2_from_v3["txid"], tx_v3_from_v2["txid"], tx_v3_child_large["txid"]])
node.invalidateblock(block[0])
self.check_mempool([tx_v3_block["txid"], tx_v2_block["txid"], tx_v3_block2["txid"], tx_v2_from_v3["txid"], tx_v3_from_v2["txid"], tx_v3_child_large["txid"]])
# This is needed because generate() will create the exact same block again.
node.reconsiderblock(block[0])
tx_chain_4 = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=tx_chain_3["new_utxo"], version=2)
self.check_mempool([tx_v2_from_v3["txid"], tx_v3_from_v2["txid"], tx_v3_child_large["txid"], tx_chain_4["txid"]])
# Reorg should have all block transactions re-accepted, ignoring TRUC enforcement
node.invalidateblock(block["hash"])
self.check_mempool([tx_v3_block["txid"], tx_v2_block["txid"], tx_v3_block2["txid"], tx_v2_from_v3["txid"], tx_v3_from_v2["txid"], tx_v3_child_large["txid"], tx_chain_1["txid"], tx_chain_2["txid"], tx_chain_3["txid"], tx_chain_4["txid"]])
@cleanup(extra_args=["-limitdescendantsize=10"])
def test_nondefault_package_limits(self):

View File

@ -15,7 +15,7 @@ from test_framework.wallet import MiniWallet
import time
class P2PNode(P2PDataStore):
def on_inv(self, msg):
def on_inv(self, message):
pass
@ -26,6 +26,7 @@ class P2PLeakTxTest(BitcoinTestFramework):
def run_test(self):
self.gen_node = self.nodes[0] # The block and tx generating node
self.miniwallet = MiniWallet(self.gen_node)
self.mocktime = int(time.time())
self.test_tx_in_block()
self.test_notfound_on_replaced_tx()
@ -33,20 +34,20 @@ class P2PLeakTxTest(BitcoinTestFramework):
def test_tx_in_block(self):
self.log.info("Check that a transaction in the last block is uploaded (beneficial for compact block relay)")
self.gen_node.setmocktime(self.mocktime)
inbound_peer = self.gen_node.add_p2p_connection(P2PNode())
self.log.debug("Generate transaction and block")
inbound_peer.last_message.pop("inv", None)
self.gen_node.setmocktime(int(time.time())) # pause time based activities
wtxid = self.miniwallet.send_self_transfer(from_node=self.gen_node)["wtxid"]
rawmp = self.gen_node.getrawmempool(False, True)
pi = self.gen_node.getpeerinfo()[0]
assert_equal(rawmp["mempool_sequence"], 2) # our tx causes mempool activity
assert_equal(pi["last_inv_sequence"], 1) # which happened after the last inv was sent
assert_equal(pi["inv_to_send"], 1) # and our tx has been queued
self.gen_node.setmocktime(0)
self.mocktime += 120
self.gen_node.setmocktime(self.mocktime)
inbound_peer.wait_until(lambda: "inv" in inbound_peer.last_message and inbound_peer.last_message.get("inv").inv[0].hash == int(wtxid, 16))
rawmp = self.gen_node.getrawmempool(False, True)
@ -65,15 +66,20 @@ class P2PLeakTxTest(BitcoinTestFramework):
def test_notfound_on_replaced_tx(self):
self.gen_node.disconnect_p2ps()
self.gen_node.setmocktime(self.mocktime)
inbound_peer = self.gen_node.add_p2p_connection(P2PTxInvStore())
self.log.info("Transaction tx_a is broadcast")
tx_a = self.miniwallet.send_self_transfer(from_node=self.gen_node)
self.mocktime += 120
self.gen_node.setmocktime(self.mocktime)
inbound_peer.wait_for_broadcast(txns=[tx_a["wtxid"]])
tx_b = tx_a["tx"]
tx_b.vout[0].nValue -= 9000
self.gen_node.sendrawtransaction(tx_b.serialize().hex())
self.mocktime += 120
self.gen_node.setmocktime(self.mocktime)
inbound_peer.wait_until(lambda: "tx" in inbound_peer.last_message and inbound_peer.last_message.get("tx").tx.wtxid_hex == tx_b.wtxid_hex)
self.log.info("Re-request of tx_a after replacement is answered with notfound")
@ -96,28 +102,31 @@ class P2PLeakTxTest(BitcoinTestFramework):
self.gen_node.disconnect_p2ps()
inbound_peer = self.gen_node.add_p2p_connection(P2PNode()) # An "attacking" inbound peer
MAX_REPEATS = 100
self.log.info("Running test up to {} times.".format(MAX_REPEATS))
for i in range(MAX_REPEATS):
self.log.info('Run repeat {}'.format(i + 1))
txid = self.miniwallet.send_self_transfer(from_node=self.gen_node)["wtxid"]
# Set a mock time so that time does not pass, and gen_node never announces the transaction
self.gen_node.setmocktime(self.mocktime)
wtxid = int(self.miniwallet.send_self_transfer(from_node=self.gen_node)["wtxid"], 16)
want_tx = msg_getdata()
want_tx.inv.append(CInv(t=MSG_TX, h=int(txid, 16)))
with p2p_lock:
inbound_peer.last_message.pop('notfound', None)
inbound_peer.send_and_ping(want_tx)
want_tx = msg_getdata()
want_tx.inv.append(CInv(t=MSG_WTX, h=wtxid))
with p2p_lock:
inbound_peer.last_message.pop('notfound', None)
inbound_peer.send_and_ping(want_tx)
inbound_peer.wait_until(lambda: "notfound" in inbound_peer.last_message)
with p2p_lock:
assert_equal(inbound_peer.last_message.get("notfound").vec[0].hash, wtxid)
inbound_peer.last_message.pop('notfound')
if inbound_peer.last_message.get('notfound'):
self.log.debug('tx {} was not yet announced to us.'.format(txid))
self.log.debug("node has responded with a notfound message. End test.")
assert_equal(inbound_peer.last_message['notfound'].vec[0].hash, int(txid, 16))
with p2p_lock:
inbound_peer.last_message.pop('notfound')
break
else:
self.log.debug('tx {} was already announced to us. Try test again.'.format(txid))
assert int(txid, 16) in [inv.hash for inv in inbound_peer.last_message['inv'].inv]
# Move mocktime forward and wait for the announcement.
inbound_peer.last_message.pop('inv', None)
self.mocktime += 120
self.gen_node.setmocktime(self.mocktime)
inbound_peer.wait_for_inv([CInv(t=MSG_WTX, h=wtxid)], timeout=120)
# Send the getdata again, this time the node should send us a TX message.
inbound_peer.last_message.pop('tx', None)
inbound_peer.send_and_ping(want_tx)
self.wait_until(lambda: "tx" in inbound_peer.last_message)
assert_equal(wtxid, int(inbound_peer.last_message["tx"].tx.wtxid_hex, 16))
if __name__ == '__main__':
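The reworked test pins the node's clock with setmocktime and advances it explicitly, so the notfound/inv/tx sequence it checks is deterministic: a getdata for a transaction that has not yet been inv'ed to the peer is answered with notfound, and the tx is only served after the announcement. A condensed sketch of that serving rule, with hypothetical types rather than the actual PeerManager logic:

#include <cstdint>
#include <set>

enum class GetdataReply { TX, NOTFOUND };

struct PeerView {
    std::set<uint64_t> announced_wtxids; // wtxids already inv'ed to this peer
};

// Only transactions that were announced to this peer are served, so an inbound
// peer cannot use getdata to probe for transactions it has not been told about.
GetdataReply HandleTxGetdata(const PeerView& peer, uint64_t wtxid, bool in_mempool)
{
    if (!in_mempool) return GetdataReply::NOTFOUND;
    return peer.announced_wtxids.count(wtxid) ? GetdataReply::TX : GetdataReply::NOTFOUND;
}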

View File

@ -474,6 +474,7 @@ def write_config(config_path, *, n, chain, extra_config="", disable_autoconnect=
# min_required_fds = MIN_CORE_FDS + MAX_ADDNODE_CONNECTIONS + nBind = 151 + 8 + 3 = 162;
# nMaxConnections = available_fds - min_required_fds = 256 - 162 = 94;
f.write("maxconnections=94\n")
f.write("par=" + str(min(2, os.cpu_count())) + "\n")
f.write(extra_config)
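The maxconnections value follows from the descriptor budget in the comment: 151 + 8 + 3 = 162 reserved descriptors leave 256 - 162 = 94 for peer connections. A tiny self-check of that arithmetic, using only the constants quoted in the comment:

#include <cassert>

int main()
{
    const int available_fds = 256;
    const int min_core_fds = 151;
    const int max_addnode_connections = 8;
    const int n_bind = 3;

    const int min_required_fds = min_core_fds + max_addnode_connections + n_bind; // 162
    const int max_connections = available_fds - min_required_fds;                 // 94
    assert(min_required_fds == 162 && max_connections == 94);
}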