Conversation

CodeAnt AI is reviewing your PR.
Sequence Diagram

Shows the new paginated witness-list API flow added in this PR. The RPC layer delegates to Wallet.getPaginatedNowWitnessList, which guards during maintenance, aggregates in-epoch vote deltas from VotesStore, applies them to witness vote counts, sorts the witnesses, and returns the paginated result (or a maintenance error if applicable).

```mermaid
sequenceDiagram
    participant Client
    participant RpcApiService as RPC
    participant WalletBackend as Wallet
    participant VotesStore
    participant WitnessStore
    Client->>RPC: getPaginatedNowWitnessList(offset, limit)
    RPC->>WalletBackend: getPaginatedNowWitnessList(offset, limit)
    alt Maintenance and not solidity node with HEAD cursor
        WalletBackend-->>RPC: throw MaintenanceUnavailableException
        RPC-->>Client: 503 Service Unavailable (error)
    else Normal path
        WalletBackend->>VotesStore: iterate votes -> countVote()
        VotesStore-->>WalletBackend: Map<witness, deltaVotes>
        WalletBackend->>WitnessStore: getAllWitnesses()
        WalletBackend->>WitnessStore: apply deltas & sortWitnesses()
        WitnessStore-->>WalletBackend: sorted paginated list
        WalletBackend-->>RPC: WitnessList (paginated)
        RPC-->>Client: 200 OK (WitnessList)
    end
```

Generated by CodeAnt AI
```java
long frozenPeriod = next.getFrozenDays() * FROZEN_PERIOD;
try {
```
Suggestion: The new overflow check only guards the addition of the start time and the frozen period, but `frozenPeriod` itself is computed with a plain multiplication. If `next.getFrozenDays() * FROZEN_PERIOD` overflows the long range, the overflow happens before the check and may go undetected, allowing an incorrect `expireTime` to pass validation. Perform the multiplication with an overflow-checked method inside the try block so that both the multiplication and the addition are verified. [logic error]
Severity Level: Major ⚠️
- ❌ Asset issue validation may accept invalid expire times.
- ⚠️ Frozen-supply expire time calculations unreliable.
- ⚠️ Token issuance correctness affected for extreme inputs.
- ⚠️ Suggestion prevents subtle overflow bugs.

Existing code:

```java
long frozenPeriod = next.getFrozenDays() * FROZEN_PERIOD;
try {
```

Improved code:

```java
try {
  long frozenPeriod = StrictMathWrapper.multiplyExact(next.getFrozenDays(), FROZEN_PERIOD);
```
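For reference, here is a minimal, self-contained sketch of the checked arithmetic the suggestion describes. It uses `java.lang.Math.multiplyExact`/`addExact` in place of the project's `StrictMathWrapper` (assumed to wrap the same JDK semantics); the `FROZEN_PERIOD` value and the exception mapping are illustrative only.

```java
// Minimal sketch of the suggested fix, using java.lang.Math in place of the
// project's StrictMathWrapper. Values are illustrative, not taken from the PR.
public class FrozenPeriodOverflowSketch {

  private static final long FROZEN_PERIOD = 86_400_000L; // hypothetical: one day in ms

  static long checkedExpireTime(long startTime, long frozenDays) {
    try {
      // Both the multiplication and the addition are overflow-checked,
      // so a huge frozenDays value cannot wrap around before the check.
      long frozenPeriod = Math.multiplyExact(frozenDays, FROZEN_PERIOD);
      return Math.addExact(startTime, frozenPeriod);
    } catch (ArithmeticException e) {
      // In the actuator this would become a ContractValidateException.
      throw new IllegalArgumentException(
          "Start time and frozen days would cause expire time overflow", e);
    }
  }

  public static void main(String[] args) {
    System.out.println(checkedExpireTime(System.currentTimeMillis(), 30)); // fine
    try {
      checkedExpireTime(System.currentTimeMillis(), Long.MAX_VALUE / 1000); // multiply overflows
    } catch (IllegalArgumentException expected) {
      System.out.println("rejected: " + expected.getMessage());
    }
  }
}
```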
Steps of Reproduction ✅
1. Locate validation logic in file
`actuator/src/main/java/org/tron/core/actuator/AssetIssueActuator.java` within method
`validate()` (see the code block starting at line 268 in the PR). The snippet that
performs the frozen period computation is at lines 268-277.
2. Construct an AssetIssueContract with a FrozenSupply entry whose `frozenDays` value is
large enough that `next.getFrozenDays() * FROZEN_PERIOD` would overflow a Java long when
computed as a plain multiplication. The validation path is triggered when the node
processes an asset-issue contract and calls `AssetIssueActuator.validate()` (the method
containing the shown snippet).
3. When `validate()` executes, the current code computes `long frozenPeriod =
next.getFrozenDays() * FROZEN_PERIOD;` (line 269). That multiplication happens before the
try/catch that only wraps the subsequent addition (lines 270-275). If the multiplication
overflows, `frozenPeriod` will hold a wrapped value and the later
`StrictMathWrapper.addExact(...)` call may not detect the intended overflow condition.
4. Observable outcome: the code will continue past this check because the overflowed
multiplication produced a value that, when added, may not trigger ArithmeticException; the
invalid expireTime would thus be accepted and the contract considered valid. This can be
confirmed by invoking the asset-issue flow and observing that validation does not throw
the "Start time and frozen days would cause expire time overflow"
ContractValidateException even when the intended true expire time would exceed long range.
Note: The reproduction requires creating a contract with a very large `frozenDays`. If the
node's dynamic properties (e.g., `getMaxFrozenSupplyTime()`) already cap `frozenDays` to
safe values, this path will be hard to reach. The suggested change moves the
multiplication into the try block using an overflow-checked multiply so both operations
are protected.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** actuator/src/main/java/org/tron/core/actuator/AssetIssueActuator.java
**Line:** 270:271
**Comment:**
Logic Error: The new overflow check only guards the addition of start time and frozen period but computes `frozenPeriod` with a plain multiplication, so if `next.getFrozenDays() * FROZEN_PERIOD` itself overflows the long range, the overflow happens before the check and may not be detected, allowing an incorrect `expireTime` to pass validation; perform the multiplication with an overflow-checked method inside the try block so that both multiplication and addition are verified.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
.getInstance().isECKeyCryptoEngine(), combine);

byte[][] signatures = extractBytesArray(
    words, words[3].intValueSafe() / WORD_SIZE, rawData);
if (VMConfig.allowTvmSelfdestructRestriction()) {
  int sigArraySize = words[words[3].intValueSafe() / WORD_SIZE].intValueSafe();
```
Suggestion: In the multi-sign validation precompile, the code reads words[words[3].intValueSafe() / WORD_SIZE] to derive the signature array size without first checking that the computed index is within words bounds, so a malformed or malicious rawData can cause an ArrayIndexOutOfBoundsException and unexpectedly abort execution instead of returning a safe DATA_FALSE result. [logic error]
Severity Level: Major ⚠️
- ❌ ValidateMultiSign precompile may crash on malformed input.
- ⚠️ Smart contracts calling this precompile get unexpected aborts.
- ⚠️ Node stability affected during invalid calldata processing.

Existing code:

```java
.getInstance().isECKeyCryptoEngine(), combine);
byte[][] signatures = extractBytesArray(
    words, words[3].intValueSafe() / WORD_SIZE, rawData);
if (VMConfig.allowTvmSelfdestructRestriction()) {
  int sigArraySize = words[words[3].intValueSafe() / WORD_SIZE].intValueSafe();
```

Improved code:

```java
int sigOffsetIndex = words[3].intValueSafe() / WORD_SIZE;
if (sigOffsetIndex > words.length - 1) {
  return Pair.of(true, DATA_FALSE);
}
int sigArraySize = words[sigOffsetIndex].intValueSafe();
```
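A standalone sketch of the bounds check, with `DataWord`/`words` simplified to an `int[]` so it runs without the java-tron VM types; only the guard logic mirrors the suggestion.

```java
// Standalone sketch of the suggested bounds check. DataWord and the words
// array are simplified to ints; only the guard logic is the point.
public class SigOffsetBoundsSketch {

  static final int WORD_SIZE = 32; // matches the TVM 32-byte word size

  /** Returns the signature array size, or -1 to signal "treat as invalid input". */
  static int readSigArraySize(int[] words) {
    if (words.length < 4) {
      return -1; // words[3] itself is missing
    }
    int sigOffsetIndex = words[3] / WORD_SIZE;
    if (sigOffsetIndex < 0 || sigOffsetIndex > words.length - 1) {
      return -1; // malformed offset: would index outside words[]
    }
    return words[sigOffsetIndex];
  }

  public static void main(String[] args) {
    // Well-formed: words[3] points (in bytes) at index 4, which holds size 2.
    System.out.println(readSigArraySize(new int[] {0, 0, 0, 4 * WORD_SIZE, 2})); // 2
    // Malformed: offset points far beyond the array -> rejected, no AIOOBE.
    System.out.println(readSigArraySize(new int[] {0, 0, 0, 9999 * WORD_SIZE})); // -1
  }
}
```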
Steps of Reproduction ✅
1. Construct a transaction that calls the ValidateMultiSign precompile. Entry point:
PrecompiledContracts.ValidateMultiSign.execute(byte[] rawData) in
actuator/src/main/java/org/tron/core/vm/PrecompiledContracts.java. The validation block
begins at line 951 in the PR final file.
2. Provide malformed calldata such that DataWord.parseArray(rawData) yields a words[]
array where words.length is small (for example length <= 3) but words[3] exists as a
DataWord containing an offset pointing beyond the words array. The code on line 951
computes sigArraySize by indexing words[words[3].intValueSafe() / WORD_SIZE] without
bounds check.
3. When execute() reaches the block at lines 951-955, the computed index
(words[3].intValueSafe() / WORD_SIZE) can be greater than words.length-1, causing Java to
throw ArrayIndexOutOfBoundsException at runtime in
PrecompiledContracts.ValidateMultiSign.execute (lines shown above) instead of returning
Pair.of(true, DATA_FALSE).
4. Observe the thrown exception aborting precompile execution. The improved code adds an
explicit bounds check for sigOffsetIndex (lines in improved snippet), preventing the
exception and returning DATA_FALSE for malformed inputs.
Note: I verified the exact code location and flow by inspecting
PrecompiledContracts.ValidateMultiSign.execute in the final file (the relevant block
appears around the ValidateMultiSign.execute method starting at line ~944 with the
validation check starting at line 951).

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** actuator/src/main/java/org/tron/core/vm/PrecompiledContracts.java
**Line:** 952:955
**Comment:**
Logic Error: In the multi-sign validation precompile, the code reads `words[words[3].intValueSafe() / WORD_SIZE]` to derive the signature array size without first checking that the computed index is within `words` bounds, so malformed or malicious `rawData` can cause an `ArrayIndexOutOfBoundsException` and unexpectedly abort execution instead of returning a safe `DATA_FALSE` result.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
words, words[1].intValueSafe() / WORD_SIZE, data);

if (VMConfig.allowTvmSelfdestructRestriction()) {
  int sigArraySize = words[words[1].intValueSafe() / WORD_SIZE].intValueSafe();
```
Suggestion: In the batch validate sign precompile, the code computes indices from untrusted calldata (words[1].intValueSafe() / WORD_SIZE and words[2].intValueSafe() / WORD_SIZE) and uses them directly to index into words to read the signature and address array sizes, so malformed data can lead to ArrayIndexOutOfBoundsException and terminate execution instead of returning a safe DATA_FALSE result. [logic error]
Severity Level: Major ⚠️
- ❌ BatchValidateSign precompile can crash on malicious calldata.
- ⚠️ Contracts relying on batch signature validation fail unpredictably.
- ⚠️ Potential DoS vector against node processing precompiles.

Existing code:

```java
int sigArraySize = words[words[1].intValueSafe() / WORD_SIZE].intValueSafe();
```

Improved code:

```java
int sigOffsetIndex = words[1].intValueSafe() / WORD_SIZE;
int addrOffsetIndex = words[2].intValueSafe() / WORD_SIZE;
if (sigOffsetIndex > words.length - 1 || addrOffsetIndex > words.length - 1) {
  return Pair.of(true, DATA_FALSE);
}
int sigArraySize = words[sigOffsetIndex].intValueSafe();
int addrArraySize = words[addrOffsetIndex].intValueSafe();
```
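The same guard, generalized to both offsets, as a sketch. `readWordAt` is a hypothetical helper, not an existing java-tron method; `words` is again simplified to an `int[]`.

```java
import java.util.OptionalInt;

// Sketch: one bounds-checked accessor covering both header slots the batch
// precompile reads; an empty result maps to DATA_FALSE at the call site.
public class BatchOffsetBoundsSketch {

  static final int WORD_SIZE = 32;

  /** Bounds-checked read of words[words[headerIndex] / WORD_SIZE]. */
  static OptionalInt readWordAt(int[] words, int headerIndex) {
    if (headerIndex > words.length - 1) {
      return OptionalInt.empty();
    }
    int target = words[headerIndex] / WORD_SIZE;
    if (target < 0 || target > words.length - 1) {
      return OptionalInt.empty(); // hostile offset: reject instead of throwing
    }
    return OptionalInt.of(words[target]);
  }

  public static void main(String[] args) {
    int[] words = {0, 2 * WORD_SIZE, 3 * WORD_SIZE, 7, 9};
    OptionalInt sigArraySize = readWordAt(words, 1);  // -> words[2] = 96
    OptionalInt addrArraySize = readWordAt(words, 2); // -> words[3] = 7
    if (!sigArraySize.isPresent() || !addrArraySize.isPresent()) {
      System.out.println("malformed calldata -> return DATA_FALSE");
      return;
    }
    System.out.println(sigArraySize.getAsInt() + " / " + addrArraySize.getAsInt());
    // A hostile offset is rejected instead of throwing AIOOBE:
    System.out.println(readWordAt(new int[] {0, 9999 * WORD_SIZE}, 1).isPresent()); // false
  }
}
```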
Steps of Reproduction ✅
1. Call the BatchValidateSign precompile by invoking
PrecompiledContracts.BatchValidateSign.doExecute(byte[] data) — method body present in
actuator/src/main/java/org/tron/core/vm/PrecompiledContracts.java starting around line
1043. The self-destruct restriction check is at lines ~1047-1052.
2. Supply calldata where DataWord.parseArray(data) yields a words[] array and words[1]
and/or words[2] contain offsets that compute to indices outside words bounds (e.g.,
words.length small but words[1].intValueSafe()/WORD_SIZE is large).
3. The existing code computes sigArraySize and addrArraySize by indexing words at those
computed indices without verifying sigOffsetIndex or addrOffsetIndex are < words.length.
This triggers ArrayIndexOutOfBoundsException inside doExecute (lines shown), aborting
precompile execution instead of returning Pair.of(true, DATA_FALSE).
4. The improved code adds explicit bounds checks for sigOffsetIndex and addrOffsetIndex
before dereferencing words[], causing the method to return DATA_FALSE for malformed
calldata and avoiding exceptions.
Note: I inspected the final file to confirm the doExecute flow (signature and address
extraction follow this validation block and rely on these sizes), so this bounds check
prevents exception cascades in subsequent extraction calls.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** actuator/src/main/java/org/tron/core/vm/PrecompiledContracts.java
**Line:** 1048:1048
**Comment:**
Logic Error: In the batch validate sign precompile, the code computes indices from untrusted calldata (`words[1].intValueSafe() / WORD_SIZE` and `words[2].intValueSafe() / WORD_SIZE`) and uses them directly to index into `words` to read the signature and address array sizes, so malformed data can lead to `ArrayIndexOutOfBoundsException` and terminate execution instead of returning a safe `DATA_FALSE` result.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
  if (deposit != null) {
    newContractCache.forEach(key -> deposit.putNewContract(key.getData()));
  }
}
```
Suggestion: The newContractCache set is never cleared, and commitNewContractCache only propagates entries upward without removing them locally, so every RepositoryImpl instance accumulates all "new" contract keys for its entire lifetime. In a long-lived repository (e.g., a root repository reused across many transactions), this causes unbounded growth of newContractCache and stale "new" flags, leading to potential memory/resource exhaustion and incorrect classification of contracts as "new" long after their creation. Clearing the cache after propagation in commitNewContractCache confines both memory usage and "new" status to the appropriate commit scope. [resource leak]
Severity Level: Critical 🚨
- ❌ Contract creation bookkeeping becomes incorrect.
- ❌ Long-lived root repository memory leak.
- ⚠️ isNewContract() returns stale true values.
- ⚠️ Transaction processing retains stale keys.

Improved code:

```java
  }
  // Clear local cache after propagation to avoid unbounded growth and stale "new" flags
  newContractCache.clear();
```
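To illustrate the commit-scope behavior the suggestion asks for, here is a toy model using plain JDK collections; the string keys and the parent field (standing in for the `deposit` parent repository) are illustrative, not the real `RepositoryImpl` types.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the proposed commitNewContractCache behavior: propagate
// "new contract" keys to the parent, then clear the local set so the cache
// is scoped to a single commit.
public class NewContractCacheSketch {

  private final Set<String> newContractCache = new HashSet<>();
  private final NewContractCacheSketch parent; // plays the role of "deposit"

  NewContractCacheSketch(NewContractCacheSketch parent) {
    this.parent = parent;
  }

  void putNewContract(String address) {
    newContractCache.add(address);
  }

  boolean isNewContract(String address) {
    return newContractCache.contains(address)
        || (parent != null && parent.isNewContract(address));
  }

  void commitNewContractCache() {
    if (parent != null) {
      newContractCache.forEach(parent::putNewContract);
    }
    // The one-line fix: drop local entries once propagated, bounding memory
    // and confining the "new" flag to this commit scope.
    newContractCache.clear();
  }

  public static void main(String[] args) {
    NewContractCacheSketch root = new NewContractCacheSketch(null);
    NewContractCacheSketch child = new NewContractCacheSketch(root);
    child.putNewContract("41abc...");
    child.commitNewContractCache();
    System.out.println(child.newContractCache.isEmpty()); // true: no unbounded growth
    System.out.println(root.isNewContract("41abc..."));   // true: still propagated
  }
}
```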
Steps of Reproduction ✅
1. Start a long-lived RepositoryImpl (root) and reuse it across many operations. The
repository class is defined at
actuator/src/main/java/org/tron/core/vm/repository/RepositoryImpl.java.
2. Create contracts repeatedly by calling createContract(...). The createContract method
calls putNewContract(address) (createContract shown at lines 481-485 in the PR:
"createContract(..)" then "putNewContract(address);"), which adds Key.create(address) into
newContractCache.
3. Call commit() on the same RepositoryImpl instance. commit() (modified at lines 766-770)
calls commitNewContractCache(repository), which executes the current implementation at
lines 1090-1094: it propagates each key to the parent via deposit.putNewContract(...) but
does not remove entries from the local newContractCache.
4. Repeat steps 2–3 many times: newContractCache will accumulate all created Keys because
commitNewContractCache does not clear them. Over time the root RepositoryImpl's
newContractCache grows unbounded, and isNewContract(...) (implementation at lines
~543-559) will continue returning true for those addresses in this repository even after
they were propagated. Observe memory growth and stale "is new" responses when calling
isNewContract(byte[]) on the same RepositoryImpl.
Note: The reproduction is based on the actual call sites and lines in this file:
createContract -> putNewContract (lines 481-485 and 539-542 respectively), commit()
calling commitNewContractCache (lines 766-770), and commitNewContractCache implementation
(lines 1090-1094) which currently lacks a clear().

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** actuator/src/main/java/org/tron/core/vm/repository/RepositoryImpl.java
**Line:** 1094:1094
**Comment:**
Resource Leak: The `newContractCache` set is never cleared, and `commitNewContractCache` only propagates entries upward without removing them locally, so every `RepositoryImpl` instance accumulates all "new" contract keys for its entire lifetime. In a long-lived repository (e.g., a root repository reused across many transactions), this causes unbounded growth of `newContractCache` and stale "new" flags, leading to potential memory/resource exhaustion and incorrect classification of contracts as "new" long after their creation. Clearing the cache after propagation in `commitNewContractCache` confines both memory usage and "new" status to the appropriate commit scope.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
increaseNonce();

InternalTransaction internalTx = addInternalTx(null, owner, obtainer, balance, null,
    "suicide", nonce, getContractState().getAccount(owner).getAssetMapV2());

if (FastByteComparisons.isEqual(owner, obtainer)) {
  return;
```
Suggestion: In the new suicide2 implementation, when the beneficiary address equals the contract address, the method still increments the nonce and records an internal transaction before returning early, without transferring any balance or changing state. That leaves an internal transaction log entry and a nonce advancement that correspond to no actual transfer, which can confuse consumers of internal transactions and makes the nonce progression inconsistent for what is effectively a no-op. The early-return check should run before increaseNonce() and addInternalTx(...) so that no internal transaction is created and the nonce is not advanced when nothing is going to happen. [logic error]
Severity Level: Major ⚠️
- ⚠️ Internal transaction logs contain no-op suicide entries.
- ⚠️ Nonce advancement inconsistent for no-op selfdestruct.
- ⚠️ Auditing tools may misreport transfers.

Existing code:

```java
increaseNonce();
InternalTransaction internalTx = addInternalTx(null, owner, obtainer, balance, null,
    "suicide", nonce, getContractState().getAccount(owner).getAssetMapV2());
if (FastByteComparisons.isEqual(owner, obtainer)) {
  return;
```

Improved code:

```java
if (FastByteComparisons.isEqual(owner, obtainer)) {
  return;
}
increaseNonce();
InternalTransaction internalTx = addInternalTx(null, owner, obtainer, balance, null,
    "suicide", nonce, getContractState().getAccount(owner).getAssetMapV2());
```
Steps of Reproduction ✅
1. Locate the suicide2 implementation at
actuator/src/main/java/org/tron/core/vm/program/Program.java:518-582 (suicide2 method
begins at line 518 in the file). Verified via file read of Program.java.
2. Observe the code path: at line 537 the method calls increaseNonce(), at lines 538-540
it creates an InternalTransaction via addInternalTx(...), and only afterwards (lines
542-543) it checks FastByteComparisons.isEqual(owner, obtainer) and returns early.
(References: Program.java lines 537-543.)
3. Reproduce by invoking suicide2 with obtainerAddress equal to the contract's context
address:
- Entry point: any contract code path which calls the VM suicide2 operation (this
method is in Program and will be called when VM executes the selfdestruct variant
mapped to suicide2). In a unit test or integration test, construct a
Message/Instruction that invokes suicide2 with obtainer == getContextAddress() (the
owner).
- Expected observable behavior: nonce is incremented (increaseNonce() at line 537), an
InternalTransaction is added to ProgramResult (addInternalTx call at 538-540), and then
the method returns without performing any balance transfer or resource changes (return
at 542-543).
4. Verify effect by inspecting Program.getNonce() (observed increment) and ProgramResult's
internal transactions list (an entry for "suicide" with value=balance) even though no
transfer/state-change occurred. This demonstrates the internalTx/log and nonce advancement
are recorded for a no-op early-return path.
Note: The reproduction uses direct execution of Program.suicide2 in tests or triggers in
contracts that call the new selfdestruct path. The file/line references above are taken
directly from Program.java contents.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** actuator/src/main/java/org/tron/core/vm/program/Program.java
**Line:** 537:543
**Comment:**
Logic Error: In the new `suicide2` implementation, when the beneficiary address equals the contract address, the method still increments the nonce and records an internal transaction before returning early, without transferring any balance or changing state. This leaves an internal transaction log entry and a nonce advancement that do not correspond to any actual transfer, which can confuse consumers of internal transactions and makes the nonce progression inconsistent for what is effectively a no-op. The early-return check should run before `increaseNonce()` and `addInternalTx(...)` so that no internal transaction is created and the nonce is not advanced when nothing is going to happen.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
writeOptions.close();
dbSource.closeDB();
```
Suggestion: Closing the shared write options object before closing the underlying database introduces a race condition: another thread may still be executing a batched write that uses this options instance when close() is called, causing the options to be closed while the write is in progress and potentially leading to undefined or native-level failures in RocksDB. To avoid this, the database should be closed (which is already protected by its own locks) before the shared write options are closed, ensuring no in-flight writes are using them when they are released. [race condition]
Severity Level: Major ⚠️
- ❌ Native DB errors or JVM crashes during shutdown.
- ⚠️ In-flight batched writes may fail silently.
- ⚠️ Shutdown sequence can corrupt or lose pending writes.

Existing code:

```java
writeOptions.close();
dbSource.closeDB();
```

Improved code:

```java
dbSource.closeDB();
writeOptions.close();
```
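A minimal sketch of the corrected shutdown order, with `AutoCloseable` stand-ins for `dbSource` and the shared `WriteOptionsWrapper`; it assumes, as the suggestion does, that `closeDB()` serializes against in-flight `updateByBatch` calls.

```java
// Shutdown-order sketch: close the database (which excludes concurrent
// batched writes) before releasing the shared write-options handle.
public class CloseOrderSketch {

  static class NativeHandle implements AutoCloseable {
    private final String name;
    NativeHandle(String name) { this.name = name; }
    @Override public void close() { System.out.println("closed " + name); }
  }

  private final NativeHandle dbSource = new NativeHandle("dbSource");
  private final NativeHandle writeOptions = new NativeHandle("writeOptions");

  void close() {
    // 1) closeDB() is protected by its own locks, so after it returns no
    //    in-flight write can still be using writeOptions.
    dbSource.close();
    // 2) Only now is it safe to release the shared options object.
    writeOptions.close();
  }

  public static void main(String[] args) {
    new CloseOrderSketch().close(); // prints dbSource first, then writeOptions
  }
}
```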
Steps of Reproduction ✅
1. Start application that opens a TronDatabase instance (class at
chainbase/src/main/java/org/tron/core/db/TronDatabase.java). The close() implementation is
in that file's close() method (invoked when shutting down the DB).
2. While the application is running, trigger a concurrent batched write that calls
updateByBatch(...) which delegates to dbSource.updateByBatch(rows, writeOptions). The
method updateByBatch is defined in the same class (TronDatabase.updateByBatch) and located
immediately above close() in the file (calls dbSource.updateByBatch with the shared
writeOptions instance).
3. From another thread initiate shutdown that calls TronDatabase.close() (close() method
contains writeOptions.close(); followed by dbSource.closeDB(); at lines shown in the PR:
"80 writeOptions.close();" and "81 dbSource.closeDB();").
4. If close() executes writeOptions.close() before the in-flight dbSource.updateByBatch
has finished using writeOptions, the underlying native/IO resources referenced by
writeOptions may be released while the write is still running, causing the in-flight write
to fail with native exceptions, data loss, or process-level failures (observed as RocksDB
native errors or JVM crashes). This reproduces the race because updateByBatch uses the
same shared WriteOptionsWrapper instance and close() currently closes that instance first.
5. Confirm reproduction by running a small test: spawn N threads performing frequent
updateByBatch calls and, while they are active, call close() on the same TronDatabase
instance. Observe failing writes or native errors when close() closes writeOptions first.
Note: The reproduction is grounded in real code paths in this file: updateByBatch(...)
(calls dbSource.updateByBatch(rows, writeOptions)) is defined in the same class just above
close(), and close() currently closes writeOptions before dbSource.closeDB(), enabling the
race.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** chainbase/src/main/java/org/tron/core/db/TronDatabase.java
**Line:** 80:81
**Comment:**
Race Condition: Closing the shared write options object before closing the underlying database introduces a race condition: another thread may still be executing a batched write that uses this options instance when `close()` is called, causing the options to be closed while the write is in progress and potentially leading to undefined or native-level failures in RocksDB. To avoid this, the database should be closed first (it is already protected by its own locks) and the shared write options closed afterwards, ensuring no in-flight writes are using them when they are released.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
checkPointStore.updateByBatch(batch.entrySet().stream()
    .map(e -> Maps.immutableEntry(e.getKey().getBytes(), e.getValue().getBytes()))
    .collect(HashMap::new, (m, k) -> m.put(k.getKey(), k.getValue()), HashMap::putAll));
```
Suggestion: If checkTmpStore is not injected (which the added null check in deleteCheckpoint() explicitly allows in mock tests), createCheckpoint() will still set checkPointStore = checkTmpStore when V2 is disabled and then unconditionally call updateByBatch on a null reference, causing a NullPointerException during flush(). Guarding the updateByBatch call against a null checkPointStore prevents this crash in such environments. [null pointer]
Severity Level: Major ⚠️
- ❌ Database checkpoint creation can crash during flush.
- ❌ SnapshotManager.flush() failure halts checkpoint pipeline.
- ⚠️ Unit tests using mock SnapshotManager may hit this path.
- ⚠️ hitDown flag set causing degraded behavior.

Existing code:

```java
checkPointStore.updateByBatch(batch.entrySet().stream()
    .map(e -> Maps.immutableEntry(e.getKey().getBytes(), e.getValue().getBytes()))
    .collect(HashMap::new, (m, k) -> m.put(k.getKey(), k.getValue()), HashMap::putAll));
```

Improved code:

```java
if (checkPointStore != null) {
  checkPointStore.updateByBatch(batch.entrySet().stream()
      .map(e -> Maps.immutableEntry(e.getKey().getBytes(), e.getValue().getBytes()))
      .collect(HashMap::new, (m, k) -> m.put(k.getKey(), k.getValue()), HashMap::putAll));
}
```
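A self-contained sketch of the guarded flush step; the batch and store types are simplified to JDK types, and the guard mirrors the tolerance `deleteCheckpoint()` already shows for a missing `checkTmpStore`.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the guarded checkpoint write: skip it entirely when the store
// was never injected (as the deleteCheckpoint() null guard already allows).
public class CheckpointGuardSketch {

  interface CheckPointStore {
    void updateByBatch(Map<byte[], byte[]> rows);
  }

  private CheckPointStore checkPointStore; // may legitimately be null in mock tests

  void createCheckpoint(Map<String, String> batch) {
    if (checkPointStore == null) {
      return; // mirror deleteCheckpoint(): tolerate a missing checkTmpStore
    }
    Map<byte[], byte[]> rows = new HashMap<>();
    batch.forEach((k, v) -> rows.put(k.getBytes(), v.getBytes()));
    checkPointStore.updateByBatch(rows);
  }

  public static void main(String[] args) {
    Map<String, String> batch = new HashMap<>();
    batch.put("key", "value");
    new CheckpointGuardSketch().createCheckpoint(batch); // no NPE with null store
    System.out.println("flush survived a null checkPointStore");
  }
}
```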
Steps of Reproduction ✅
1. Ensure SnapshotManager is running with checkpointVersion != 2 (v1 path). See
SnapshotManager.checkpointVersion usage at
chainbase/src/main/java/org/tron/core/db2/core/SnapshotManager.java:552-554 and
createCheckpoint V1 assignment at :391.
2. Trigger a flush path where shouldBeRefreshed() returns true (flushCount >=
maxFlushCount) so SnapshotManager.flush() calls createCheckpoint().
- flush() calls deleteCheckpoint() then createCheckpoint() at
chainbase/.../SnapshotManager.java:336-339 and 328-339.
3. If checkTmpStore was not injected (checkTmpStore == null), deleteCheckpoint() currently
returns early (guard at line 424-428),
but createCheckpoint() still sets checkPointStore = checkTmpStore for V1 (assignment at
line 391).
4. createCheckpoint() then invokes updateByBatch on the possibly-null checkPointStore at
chainbase/.../SnapshotManager.java:394-396,
causing a NullPointerException during flush() execution.
5. Observed behavior: flush() will throw an uncaught NullPointerException wrapped as a
TronDBException and escalate to TronError in flush() catch
at chainbase/.../SnapshotManager.java:349-353, setting hitDown=true and aborting DB
flush operation.
Explanation: deleteCheckpoint()'s null guard (line 424-428) documents mock/test scenarios
where checkTmpStore may be null, but createCheckpoint()
does not mirror that guard; the improved code adds a null check before calling
updateByBatch to avoid the NPE in those scenarios.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** chainbase/src/main/java/org/tron/core/db2/core/SnapshotManager.java
**Line:** 394:396
**Comment:**
Null Pointer: If `checkTmpStore` is not injected (which the added null check in `deleteCheckpoint()` explicitly allows in mock tests), `createCheckpoint()` will still set `checkPointStore = checkTmpStore` when V2 is disabled and then unconditionally call `updateByBatch` on a null reference, causing a `NullPointerException` during `flush()`. Guarding the `updateByBatch` call against a null `checkPointStore` prevents this crash in such environments.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
public AccountCapsule getBlackhole() {
  return getUnchecked(assertsAddress.get(ACCOUNT_BLACKHOLE));
```
Suggestion: getBlackhole() directly passes assertsAddress.get(ACCOUNT_BLACKHOLE) into getUnchecked, so if the blackhole account is missing or setAccount was not invoked, a null key will be used and can cause a null pointer exception or undefined behavior in the underlying store. [null pointer]
Severity Level: Major ⚠️
- ⚠️ getBlackhole() may throw NPE if key missing.
- ❌ Affects any code retrieving Blackhole account.
- ⚠️ Existing setAccount() prevents normal occurrence.

Existing code:

```java
return getUnchecked(assertsAddress.get(ACCOUNT_BLACKHOLE));
```

Improved code:

```java
byte[] blackholeAddress = assertsAddress.get(ACCOUNT_BLACKHOLE);
if (blackholeAddress == null) {
  throw new TronError("Account[Blackhole] is not configured.", TronError.ErrCode.GENESIS_BLOCK_INIT);
}
return getUnchecked(blackholeAddress);
```
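Both blackhole accessors could share one fail-fast lookup, sketched below; `TronError` is replaced by `IllegalStateException` so the snippet compiles standalone, and the message copies the one `setAccount()` already uses.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a single fail-fast helper usable by getBlackhole() and
// getBlackholeAddress(). Types are simplified for a runnable example.
public class BlackholeLookupSketch {

  private static final String ACCOUNT_BLACKHOLE = "Blackhole";
  private final Map<String, byte[]> assertsAddress = new HashMap<>();

  private byte[] requireBlackholeAddress() {
    byte[] address = assertsAddress.get(ACCOUNT_BLACKHOLE);
    if (address == null) {
      // In AccountStore this would be: throw new TronError(
      //     "Account[Blackhole] is not configured.", TronError.ErrCode.GENESIS_BLOCK_INIT);
      throw new IllegalStateException("Account[Blackhole] is not configured.");
    }
    return address;
  }

  public byte[] getBlackholeAddress() {
    return requireBlackholeAddress();
  }

  public static void main(String[] args) {
    try {
      new BlackholeLookupSketch().getBlackholeAddress(); // map never populated
    } catch (IllegalStateException e) {
      System.out.println("deterministic failure: " + e.getMessage());
    }
  }
}
```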
Steps of Reproduction ✅
1. Review the file: `chainbase/src/main/java/org/tron/core/store/AccountStore.java`. The
current `getBlackhole()` method returns
`getUnchecked(assertsAddress.get(ACCOUNT_BLACKHOLE));` at line 118.
2. Check `AccountStore.setAccount(...)` which populates `assertsAddress` and enforces
presence: lines around 53-58 perform population and then explicitly throw `new
TronError("Account[Blackhole] is not configured.", TronError.ErrCode.GENESIS_BLOCK_INIT);`
if `assertsAddress.get(ACCOUNT_BLACKHOLE)` is null.
3. Because `setAccount()` already throws when the Blackhole entry is missing during
configuration, a null value is unlikely in normal startup. To reproduce a null-key usage
you'd need to:
- a) bypass or avoid calling `AccountStore.setAccount()` during initialization, or
- b) mutate `assertsAddress` (or remove the key) after initialization (e.g., via
reflection or another bug).
4. If either of the above happens, calling `getBlackhole()` will call `getUnchecked(null)`
which may cause a NullPointerException or undefined behavior in the underlying store
implementation at runtime.
5. Therefore the improved-code defensive check will convert a possible NPE/undefined
behavior into a deterministic `TronError` with the existing error code already used by
`setAccount()`, making the failure explicit and consistent.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** chainbase/src/main/java/org/tron/core/store/AccountStore.java
**Line:** 118:118
**Comment:**
Null Pointer: `getBlackhole()` directly passes `assertsAddress.get(ACCOUNT_BLACKHOLE)` into `getUnchecked`, so if the blackhole account is missing or `setAccount` was not invoked, a null key will be used and can cause a null pointer exception or undefined behavior in the underlying store.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
public byte[] getBlackholeAddress() {
  return assertsAddress.get(ACCOUNT_BLACKHOLE);
```
Suggestion: getBlackholeAddress() returns assertsAddress.get(ACCOUNT_BLACKHOLE) without validation, so if the blackhole account was not configured or the map was not initialized correctly, callers will unexpectedly receive null and are likely to hit null pointer exceptions when using the address. [null pointer]
Severity Level: Major ⚠️
- ⚠️ getBlackholeAddress() may return null unexpectedly.
- ❌ Downstream callers using address may NPE.
- ⚠️ Normal startup validation already prevents most cases.

Existing code:

```java
return assertsAddress.get(ACCOUNT_BLACKHOLE);
```

Improved code:

```java
byte[] blackholeAddress = assertsAddress.get(ACCOUNT_BLACKHOLE);
if (blackholeAddress == null) {
  throw new TronError("Account[Blackhole] is not configured.", TronError.ErrCode.GENESIS_BLOCK_INIT);
}
return blackholeAddress;
```
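To see why the unguarded return matters to callers, here is a tiny demonstration of the failure mode: the NPE surfaces at the first dereference, far from the misconfigured lookup. All names here are illustrative, not AccountStore code.

```java
// Demonstration only: a null address returned by an unguarded lookup blows
// up later, at the caller, which is harder to diagnose than a fail-fast throw.
public class NullAddressCallerSketch {

  static byte[] getBlackholeAddress() {
    return null; // simulates the map lookup missing its "Blackhole" entry
  }

  static String toHex(byte[] address) {
    StringBuilder sb = new StringBuilder();
    for (byte b : address) { // NPE here, not at the lookup site
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    try {
      System.out.println(toHex(getBlackholeAddress()));
    } catch (NullPointerException e) {
      System.out.println("NPE surfaced at the caller, not in AccountStore");
    }
  }
}
```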
Steps of Reproduction ✅
1. Inspect `getBlackholeAddress()` in
`chainbase/src/main/java/org/tron/core/store/AccountStore.java`: it returns
`assertsAddress.get(ACCOUNT_BLACKHOLE)` at line 123.
2. The static initializer `setAccount(...)` (lines ~53-58) validates presence and throws
`TronError` if the "Blackhole" entry is missing during configuration, reducing chance of
null return in normal operation.
3. To reproduce a null return you'd need to either skip calling `setAccount()` at startup
or remove/mutate the `assertsAddress` entry at runtime (for example, via reflection or
another bug).
4. If a caller expects a non-null address and uses the returned byte[] (from
`getBlackholeAddress()` line 123) without checking, a null will lead to a
NullPointerException at the caller site (for example when the returned address is passed
to methods that dereference it, or to store accessors like `getUnchecked()`).
5. Adding the explicit null-check and throwing the same `TronError` used in `setAccount()`
yields a clearer, earlier failure mode and keeps behavior consistent with the class's
initialization-time validation.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** chainbase/src/main/java/org/tron/core/store/AccountStore.java
**Line:** 123:123
**Comment:**
Null Pointer: `getBlackholeAddress()` returns `assertsAddress.get(ACCOUNT_BLACKHOLE)` without validation, so if the blackhole account was not configured or the map was not initialized correctly, callers will unexpectedly receive `null` and are likely to hit null pointer exceptions when using the address.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```java
writeOptions.close();
dbSource.closeDB();
```
Suggestion: The base class already allocates its own WriteOptionsWrapper and closes it in its close() method, but this subclass overrides close(), closes only its own WriteOptionsWrapper, and never delegates to super.close(). As a result the base WriteOptionsWrapper (which holds native resources) is never closed and will leak. The fix is to close the subclass-specific options if needed and then call super.close() instead of closing the dbSource directly. [resource leak]
Severity Level: Major ⚠️
- ❌ Resource leak on DB native options during shutdown.
- ⚠️ Repeated start/stop increases file descriptors.
- ⚠️ Node shutdown may leave DB in inconsistent state.
- ⚠️ Affects store lifecycle and maintenance scripts.

Existing code:

```java
writeOptions.close();
dbSource.closeDB();
```

Improved code:

```java
  // Close the checkpoint-specific write options first
  writeOptions.close();
} catch (Exception e) {
  logger.warn("Failed to close checkpoint write options for {}.", getName(), e);
} finally {
  logger.debug("******** End to close {}. ********", getName());
  // Delegate to parent to close its own WriteOptionsWrapper and the db source
  super.close();
```
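A sketch of the delegation pattern the fix relies on; the class names are stand-ins for `TronDatabase` and `CheckPointV2Store`, and the print statements just make the release order observable.

```java
// Sketch: the subclass releases only what it owns, then calls super.close()
// so the base class can release its own write options and the db source.
public class CloseDelegationSketch {

  static class BaseDatabase implements AutoCloseable {
    protected void closeDbSource() { System.out.println("db source closed"); }
    private void closeBaseWriteOptions() { System.out.println("base writeOptions closed"); }

    @Override
    public void close() {
      closeDbSource();
      closeBaseWriteOptions(); // would leak if a subclass never delegated here
    }
  }

  static class CheckpointStore extends BaseDatabase {
    private void closeOwnWriteOptions() { System.out.println("checkpoint writeOptions closed"); }

    @Override
    public void close() {
      closeOwnWriteOptions(); // subclass-specific resource first
      super.close();          // then let the parent finish its own cleanup
    }
  }

  public static void main(String[] args) {
    new CheckpointStore().close(); // all three resources are released
  }
}
```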
Steps of Reproduction ✅
1. Instantiate the store: create CheckPointV2Store via its constructor in file
chainbase/src/main/java/org/tron/core/store/CheckPointV2Store.java (constructor at lines
~21-23 in the final file) and ensure the application initializes any base TronDatabase
resources.
2. Trigger normal shutdown path that calls close() on the store instance (e.g.,
application shutdown hook or explicit call to CheckPointV2Store.close()). The method
invoked is CheckPointV2Store.close() at
chainbase/src/main/java/org/tron/core/store/CheckPointV2Store.java:68.
3. Observe runtime behavior: CheckPointV2Store.close() executes and calls
writeOptions.close() and dbSource.closeDB() (lines 71-73 in existing code). Because this
override does not call super.close(), any closing logic in TronDatabase.close() (parent
class) is not executed; if TronDatabase holds its own WriteOptionsWrapper or other native
resources, they remain unclosed.
4. Reproduce resource leak: under repeated start/stop cycles (create new
CheckPointV2Store, then call close()), native resources (file handles, native DB options)
held by the parent may accumulate. Evidence of leak would be thrown errors or process
resource growth in logs or OS tool (file descriptor growth). The faulty code path is
precisely chainbase/src/main/java/org/tron/core/store/CheckPointV2Store.java:68-75 where
super.close() is not called.
Note: The suggestion is to delegate to super.close() after closing subclass-specific
resources so that any cleanup implemented in TronDatabase.close() is executed. The final
file shows overriding close() without calling super.close(), so this reproduction is
achievable by invoking the close method as above.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** chainbase/src/main/java/org/tron/core/store/CheckPointV2Store.java
**Line:** 72:73
**Comment:**
Resource Leak: The base class already allocates its own WriteOptionsWrapper and closes it in its close() method, but this subclass overrides close() and only closes its own WriteOptionsWrapper, never delegating to super.close(), so the base WriteOptionsWrapper (which holds native resources) is never closed and will leak; the fix is to close the subclass-specific options if needed and then call super.close() instead of directly closing the dbSource.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
Nitpicks 🔍
CodeAnt-AI Description

Paginated real-time witness listing, safer shielded APIs, improved DB iterators and JSON-RPC receipts.

What Changed

Impact

- ✅ Clearer maintenance error for witness pagination
- ✅ Fewer JSON-RPC internal errors and more informative receipt responses
- ✅ Safer DB iteration to avoid native resource leaks during scans