Oracle perf #4187
Conversation
 ActiveSupport.on_load(:active_record) do
   if System::Database.oracle?
-    require 'arel/visitors/oracle12_hack'
+    require 'arel/visitors/oracle12_hack' || next # once done, we can skip setup
I believe on application reload we don't need any of this code to be executed again. I'm using this require statement as a state-tracking tool to know whether we have ever run it. If your local development environment breaks on code reload, this would be the culprit. Be my guest any time!
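For anyone surprised by the pattern, a minimal, self-contained sketch of the mechanism (using 'csv' purely as a stand-in feature, not the PR's file): Kernel#require returns true the first time a feature is loaded and false on every later call, so require(...) || next acts as a run-once guard inside a block that gets re-executed on reload. Note the parentheses, so that || applies to require's return value rather than to the string argument.

```ruby
# Kernel#require returns true on first load, false when the feature is already loaded.
3.times do |i|
  require('csv') || next          # false after the first iteration, so `next` skips the setup
  puts "one-time setup ran on iteration #{i}"
end
# => "one-time setup ran on iteration 0" (assuming 'csv' was not loaded before this snippet)
```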
 After do
-  ActiveRecord::Base.clear_active_connections!
+  ActiveRecord::Base.connection_handler.clear_active_connections!
avoid DEPRECATION warning
 # NOTE: Use ENV['DB'] only to install oracle dependencies
 group :oracle do
   oracle = -> { (ENV['ORACLE'] == '1') || ENV.fetch('DATABASE_URL', ENV['DB'])&.start_with?('oracle') }
+  ENV['NLS_LANG'] ||= 'AMERICAN_AMERICA.UTF8' if oracle
Another attempt at setting NLS_LANG reliably before the Oracle connection is established, and only when using Oracle. It might be a hack, but I'm not sure any other place would be less of a hack, and I'm tired of trying to figure out where that other place might be. Here I'm sure it's going to be set prior to loading the oracle gems.
Did we have encoding problems before this?
You always see a complaint that we fall back to some 7-bit ASCII encoding if you don't set this variable separately in your environment. We have always tried to set a default in the initializer, but that has been too late. Setting it here should always be early enough.
You can see that line removed from the initializer.
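For reference, a minimal sketch of why the placement matters, assuming ruby-oci8's OCI8.encoding accessor (the fallback behaviour is the one described in the thread above, not something verified here): the Oracle client reads NLS_LANG when its environment is initialized, so the default has to be in place before the oracle gems are loaded and the first connection is made.

```ruby
# Assumption: ruby-oci8 is installed and OCI8.encoding reflects the client charset.
ENV['NLS_LANG'] ||= 'AMERICAN_AMERICA.UTF8'  # must happen before the driver is set up

require 'oci8'
puts OCI8.encoding  # expected: UTF-8; with NLS_LANG unset, the thread above reports a
                    # fallback to a 7-bit ASCII encoding instead
```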
❌ 8 blocking issues (8 total)
jlledom left a comment
I would ask you to add a better description to complicated PRs like this. The amount of investigation required to understand any single line in that Oracle initializer is big enough to make the PR unreviewable in practice. Now that you have Claude, you can ask it to explain what you did, verify it's correct, and then paste that into the PR description.
Also, creating a single commit for each conceptually different piece of work would make the reviewer's life easier as well.
See the errors in the Oracle cucumbers: they might be related to the changes 🤕.
Could be, although I have no idea what the cause might be, as we don't change the connection management. I wonder if this closing of cursors might take too long sometimes. But then, that could also explain issues we have previously seen with random slowdowns. I will need to try with and without it, and then whatever else is needed. This is a nightmare.
My thoughts are with you.
Force-pushed from c4f6939 to 6768407.
def close_and_clear_statements
  start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  statement_count = @statements&.length || 0
  @statements&.clear
Some AI explanation of why clearing cursors from the app end may not be much of a performance penalty: the Oracle server still caches them for reuse.
Why Oracle Keeps "Closed" Cursors
Oracle's V$OPEN_CURSOR view is somewhat misleadingly named - it shows cursors Oracle is tracking for the session, not just actively open ones. Oracle keeps them for performance:
- Session Cursor Cache: When you "close" a cursor from the application side, Oracle often keeps it cached in case you run the same SQL again. This is controlled by the SESSION_CACHED_CURSORS parameter.
- Soft vs Hard Close:
- Soft close: Application closes the cursor, but Oracle keeps it in its cache
- Hard close: Oracle actually deallocates it
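For the curious, a rough way to observe this from the application side (a sketch, not part of the PR; it needs SELECT privileges on the V$ views, and the Ruby wrapper is only illustrative):

```ruby
# Count the cursors Oracle still tracks for the current session.
count = ActiveRecord::Base.connection.select_value(<<~SQL)
  SELECT COUNT(*) FROM v$open_cursor
  WHERE sid = SYS_CONTEXT('USERENV', 'SID')
SQL
puts "cursors tracked for this session: #{count}"
# The count typically stays non-zero even after the statement pool is cleared, because
# soft-closed cursors remain in the session cursor cache (SESSION_CACHED_CURSORS).
```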
Force-pushed from 6768407 to 0044b2f.
@jlledom, now I'm not sure the changes are related to the failures. I can't reproduce the exact errors, but other failure types appear randomly.
Currently there's only one cuke failing and it's the usual
Force-pushed from 547356e to ec0ab9e.
Force-pushed from ec0ab9e to ca93f51.
I don't know why I didn't check that earlier, but I see this error in master too, so it's not related to any of the changes. I think it is more related to not using the performance plan, maybe.
Force-pushed from 5abae26 to c698c64.
@jlledom thank you for the review. I believe that besides rebasing I only added the last two commits. As I said earlier, I see the same transient issues without my changes in CircleCI.
jlledom left a comment
I can't understand everything here, but if tests pass I approve.
   end
 end

+ActiveRecord::Base.skip_callback(:update, :after, :enhanced_write_lobs)
Please explain this.
We skip the redundant enhanced_write_lobs callback. It is never needed by our usage, and not needed for the wider project either, especially with the quoting fixes.
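If anyone wants to double-check this in a console, a small sketch (Account is just an example model; the callback-chain introspection is plain ActiveSupport::Callbacks):

```ruby
# After the initializer runs, the callback should no longer be in the update chain.
Account._update_callbacks.map(&:filter).include?(:enhanced_write_lobs)
# => false once skip_callback(:update, :after, :enhanced_write_lobs) has taken effect
```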
SQL_UTF8_CHUNK_CHARS = 8191     # (32767÷4), 4 bytes max character; 1000 without MAX_STRING_SIZE=EXTENDED
BLOB_INLINE_LIMIT = 16383       # (32767÷2); 2000 without MAX_STRING_SIZE=EXTENDED
PLSQL_BASE64_CHUNK_SIZE = 24_573
Why these values?
We can fit only so many real bytes, encoded in base64, within the 32767-byte value limit.
Now I don't remember whether I tested empirically that any more bytes can be processed, but I'm too tired to go back to it. 24k is a big enough chunk size not to impose significant overhead. Moreover, this is something people shouldn't rely on: it is only for huge values within inline queries, and I see no practical reason to use it except in dev mode, to see the SQL being run, in which case using huge values won't be productive anyway.
Hope it makes sense.
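For the record, the arithmetic behind 24_573 (a worked check, not code from the PR): base64 emits 4 output characters for every 3 raw bytes, and an inline chunk may be at most 32,767 bytes.

```ruby
require 'base64'

max_sql_bytes = 32_767
groups        = max_sql_bytes / 4       # => 8_191 complete base64 groups fit in one chunk
raw_bytes     = groups * 3              # => 24_573, i.e. PLSQL_BASE64_CHUNK_SIZE

chunk = ("\xAB".b) * raw_bytes          # any 24_573 raw bytes
Base64.strict_encode64(chunk).bytesize  # => 32_764, safely under the 32_767 limit
```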
# Generate a scalar subquery with PL/SQL function to build large BLOBs.
# Uses DBMS_LOB.WRITEAPPEND with base64-encoded chunks for efficiency.
# Testing showed hextoraw() unusable for being more than 100x slower.
Then why are you using hextoraw() instead of this in other scenarios?
For a single value, it doesn't show a noticeable delay; it only becomes a problem when used with big data within PL/SQL. Since it was already working, I didn't find value in checking whether the base64 functions can be used outside the PL/SQL context. And it is better to avoid the complicated PL/SQL for small values, using either function, if for no other reason than that one uses to_sql for readability, and a very popular, standard Oracle SQL function is much more readable than the ugly PL/SQL insertion.
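To illustrate the readability point, a minimal sketch (the helper name is hypothetical, not from the PR): a small binary value quoted inline with HEXTORAW() stays legible in to_sql output, whereas the PL/SQL block is only worth it for large values.

```ruby
# Quote a small binary value as an inline Oracle HEXTORAW() literal.
def quote_small_binary(bytes)
  "HEXTORAW('#{bytes.unpack1('H*').upcase}')"
end

quote_small_binary("\x01\x02\xFF".b)  # => "HEXTORAW('0102FF')"
```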
In fact, tests passed without a rerun this time. I'm going to merge the PR and update it with an even better implementation if Oracle support provides a solution later.
Improve oracle performance by
Also added a couple of small fixes.
Supersedes #4176