…pache#3888) ### Motivation In `checkAllLedgers`, when a ledger has no missing fragments, `lh.closeAsync()` is never invoked to close the ledger handle, so autorecovery never calls `unregisterLedgerMetadataListener` to release the ledger metadata listeners. Heap usage keeps growing and may eventually cause an OOM: <img width="1567" alt="image" src="https://user-images.githubusercontent.com/84127069/227937422-1113af68-9bf3-4466-97fa-d9b7cc5d72be.png"> ### Changes 1. Invoke `lh.closeAsync()` when there are no missing fragments.
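The leak pattern behind this fix can be sketched with plain Java. The types below (`MetadataListenerRegistry`, `LedgerHandle`, `checkLedger`) are hypothetical stand-ins, not the real BookKeeper API; the point is only that closing the handle is what unregisters the metadata listener, so every exit path of the check must close it:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch (hypothetical types, not the real BookKeeper API) of why
// skipping close leaks listeners: closing the handle is what unregisters
// the ledger metadata listener.
public class CloseOnAllPaths {
    static class MetadataListenerRegistry {
        final AtomicInteger registered = new AtomicInteger();
        void register()   { registered.incrementAndGet(); }
        void unregister() { registered.decrementAndGet(); }
    }

    static class LedgerHandle {
        final MetadataListenerRegistry registry;
        LedgerHandle(MetadataListenerRegistry r) { registry = r; r.register(); }
        CompletableFuture<Void> closeAsync() {
            registry.unregister(); // release the metadata listener
            return CompletableFuture.completedFuture(null);
        }
    }

    static void checkLedger(MetadataListenerRegistry registry, boolean hasMissingFragments) {
        LedgerHandle lh = new LedgerHandle(registry);
        if (hasMissingFragments) {
            // ... schedule fragment recovery ...
        }
        // The fix: close on the "no missing fragments" path too; before the
        // patch this call was skipped and the listener stayed registered.
        lh.closeAsync().join();
    }

    public static void main(String[] args) {
        MetadataListenerRegistry registry = new MetadataListenerRegistry();
        checkLedger(registry, false);
        checkLedger(registry, true);
        System.out.println(registry.registered.get()); // 0: no leaked listeners
    }
}
```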
### Motivation There is a critical CVE-2019-10202 in `org.codehaus.jackson:jackson-mapper-asl`.

Detailed paths, introduced through:
- org.apache.distributedlog:dlfs@4.16.0-SNAPSHOT › org.apache.hadoop:hadoop-common@3.3.4 › org.apache.avro:avro@1.7.7 › org.codehaus.jackson:jackson-mapper-asl@1.9.2 — Fix: no remediation path available.
- org.apache.distributedlog:dlfs@4.16.0-SNAPSHOT › org.apache.hadoop:hadoop-common@3.3.4 › com.sun.jersey:jersey-json@1.19 › org.codehaus.jackson:jackson-mapper-asl@1.9.2 — Fix: no remediation path available.
- org.apache.distributedlog:dlfs@4.16.0-SNAPSHOT › org.apache.hadoop:hadoop-common@3.3.4 › com.sun.jersey:jersey-json@1.19 › org.codehaus.jackson:jackson-jaxrs@1.9.2 › org.codehaus.jackson:jackson-mapper-asl@1.9.2 — Fix: no remediation path available.
- org.apache.distributedlog:dlfs@4.16.0-SNAPSHOT › org.apache.hadoop:hadoop-common@3.3.4 › com.sun.jersey:jersey-json@1.19 › org.codehaus.jackson:jackson-xc@1.9.2 › org.codehaus.jackson:jackson-mapper-asl@1.9.2 — Fix: no remediation path available.

### Changes Upgrade the hadoop-common version from 3.3.4 to 3.3.5 to resolve this CVE.
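Assuming `hadoop-common` is declared directly in the module's `pom.xml` (the actual project may manage the version through a property or a parent POM instead), the change amounts to a one-line version bump along these lines:

```xml
<!-- Illustrative fragment; the real declaration location may differ. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.3.5</version> <!-- was 3.3.4; 3.3.5 drops the vulnerable jackson-mapper-asl path -->
</dependency>
```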
### Motivation
When the BookKeeper client writes an entry to the BookKeeper server, the request runs through the following steps in the channel pipeline: `bookieProtoEncoder`, `lengthBasedFrameDecoder`, and flush consolidation. If the bookie client writes small entries at a high operation rate, Netty's pending write queue fills up and the Netty thread stays busy processing entries and flushing them into the socket channel, so the CPU switches between user mode and kernel mode at high frequency.

apache#3383 introduced Netty channel flush consolidation to mitigate the syscall overhead, but it cannot reduce the overhead on the Netty threads themselves.

We can tune the third step to group the small entries into one ByteBuf and flush it into the Netty pending queue only when conditions are met.

### Design
When a new entry arrives at the bookie client channel, we append it to one ByteBuf and check whether the ByteBuf exceeds the maximum threshold (1 MB by default).
To avoid an entry sitting in the bookie client channel's ByteBuf for a long time and causing high write latency, we schedule a timer task to flush the ByteBuf every 1 ms.
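The two flush triggers described above (size threshold and periodic timer) can be sketched with the JDK alone. This is an illustration of the batching idea, not the actual BookKeeper/Netty implementation: `EntryBatcher` and its parameters are hypothetical, a `ByteArrayOutputStream` stands in for the ByteBuf, and the real design would hand each batch to the Netty channel in `flush()`:

```java
import java.io.ByteArrayOutputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hedged sketch of the batching design (stdlib only, no Netty): accumulate
// small entries in one buffer, flush when the buffer crosses the size
// threshold (1 MB by default in the proposal) or when the timer fires.
public class EntryBatcher {
    private final int maxBatchBytes;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private int flushes = 0;

    EntryBatcher(int maxBatchBytes, long flushIntervalMicros) {
        this.maxBatchBytes = maxBatchBytes;
        // The periodic flush bounds how long an entry can wait in the buffer,
        // capping the extra write latency introduced by batching.
        timer.scheduleAtFixedRate(this::flush, flushIntervalMicros,
                flushIntervalMicros, TimeUnit.MICROSECONDS);
    }

    synchronized void addEntry(byte[] entry) {
        buffer.write(entry, 0, entry.length);
        if (buffer.size() >= maxBatchBytes) {
            flush(); // size threshold reached: emit the batch immediately
        }
    }

    synchronized void flush() {
        if (buffer.size() == 0) {
            return; // nothing buffered; the timer tick is a no-op
        }
        // In the real design this would be a single write of the batched
        // ByteBuf into the Netty channel's pending queue.
        buffer.reset();
        flushes++;
    }

    synchronized int flushCount() { return flushes; }

    void shutdown() { timer.shutdownNow(); }
}
```

With a 1 KB threshold, writing four 512-byte entries produces at most four and at least two batches (2048 bytes total, each batch at most 1024 bytes), instead of four separate channel writes.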
### Performance
I tested the write performance on my laptop with the following command.
The performance results: