Feed aggregator

sql trace and tkprof

Tom Kyte - 4 hours 12 min ago
Hi Tom, I have seen a lot of answers from you regarding SQL trace and tkprof for SQL performance tuning, but I am still not clear on a few things. I hope you can explain them clearly. 1. How do SQL trace and tkprof differ from SET AUTOTRACE ON? 2. If we use SET AUTOTRACE ON and explain plan, how do we analyze a particular query and tell whether the query is tuned or not? 3. For SQL trace and tkprof: if I do 'ALTER SESSION SET SQL_TRACE = TRUE', where does the trace file get created (my Oracle server is running on Unix)? After formatting the trace file using tkprof, how do I analyze it to tell whether the query is tuned or not? 4. Does the order of tables in a query matter for performance? For this question I got an answer that it matters for the RBO but not for the CBO, but who decides whether to go for the RBO or the CBO? Can we tell Oracle to use the CBO or the RBO, and if we can decide, when should we go for each? Sorry for the flow of questions, but I am confused on the above issues. Please answer the above. Thanks a lot. nk
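
A hedged sketch for the trace-file part of question 3 (assuming a reasonably recent version where v$diag_info exists; on older releases the user_dump_dest parameter points at the trace directory instead):

<code>
alter session set tracefile_identifier = 'nk_test';  -- makes the file easy to find
alter session set sql_trace = true;
-- run the statements to be traced, then:
alter session set sql_trace = false;

select value
from   v$diag_info
where  name = 'Default Trace File';  -- full path of this session's trace file
</code>

The file named by that query is what you would feed to tkprof.
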
Categories: DBA Blogs

ORA-06531: Reference to uninitialized collection

Tom Kyte - Fri, 2020-07-10 17:26
Hi Tom, I am getting "ORA-06531: Reference to uninitialized collection" even after initializing the collection to assign a value. Could you please help? Please find the code below.

<code>
create or replace TYPE TYP_GRP_ID is object (grp_id number);
create or replace TYPE TAB_GROUP is table of TYP_GRP_ID;
create or replace PACKAGE TEST AS
  PROCEDURE TEST;
END;
create or replace PACKAGE BODY TEST AS
  PROCEDURE TEST IS
    acc_arr TAB_GROUP;
  BEGIN
    acc_arr.EXTEND;
    acc_arr := new TAB_GROUP(null);
    acc_arr(acc_arr.LAST).grp_id := 1000;
    dbms_output.put_line(acc_arr(acc_arr.LAST).grp_id);
  END;
END;
</code>

Thanks in advance Mathew
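
A hedged sketch of the likely fix (not an official AskTOM answer): acc_arr.EXTEND runs before the collection is initialized – which is exactly what raises ORA-06531 – and the constructor call then creates a single NULL element. Initializing first, then extending and storing a constructed object, avoids both problems:

<code>
create or replace PACKAGE BODY TEST AS
  PROCEDURE TEST IS
    acc_arr TAB_GROUP := TAB_GROUP();           -- initialize before use
  BEGIN
    acc_arr.EXTEND;                             -- legal now that the collection exists
    acc_arr(acc_arr.LAST) := TYP_GRP_ID(1000);  -- store a constructed object, not NULL
    dbms_output.put_line(acc_arr(acc_arr.LAST).grp_id);
  END;
END;
</code>
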
Categories: DBA Blogs

Oracle APEX 20.2 IE 11 Deprecated

Tom Kyte - Fri, 2020-07-10 17:26
Hi, I saw the following deprecated features in the Oracle APEX 20.1 release notes: "6.1.4 Support for Internet Explorer 11 Deprecated. Support for Internet Explorer (IE) 11 is deprecated. Starting with release 20.2, only the current and prior major release of Microsoft Edge along with Google Chrome, Mozilla Firefox, Apple Safari will be supported." Does it mean that Oracle APEX 20.2 will not work correctly in Microsoft IE 11? Does this statement apply only to the APEX builder, or does it also apply to all applications in the workspace? We have a lot of PCs that are still running IE 11. Thanks!
Categories: DBA Blogs

Oracle JET Paging Control in Oracle APEX

Tom Kyte - Fri, 2020-07-10 17:26
Hi, In the APEX SOD (Statement of Direction), it says there will be additional Oracle JET-based components. May I know if these will include the following paging control from Oracle JET? https://www.oracle.com/webfolder/technetwork/jet/jetCookbook.html?component=pagingControl&demo=basicPagingTable I am particularly interested in replacing the pagination in classic reports with the Oracle JET paging control. I haven't seen any information about how to implement these Oracle JET components in Oracle APEX. Thank you!
Categories: DBA Blogs

How can I audit the individual functions/procedures inside package?

Tom Kyte - Fri, 2020-07-10 17:26
The Oracle audit statement "audit execute on [package] by access;" works at the package level. How can I do something like this: "audit execute on [package.function] by access;"? I need to audit the execution of individual functions inside packages with the Oracle audit options. Is it possible to audit the arguments sent to these functions too? And is it possible to select this info from SYS.AUD$ or other views? Best regards, Jery
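
A hedged sketch of what is expressible today: unified auditing (12c and later) can capture executions and the SQL text, but the granularity is still the package, not an individual function inside it (the policy and object names below are made up for illustration):

<code>
create audit policy pkg_exec_pol
  actions execute on hr.my_package;

audit policy pkg_exec_pol;

-- audited executions, including the calling SQL text, then appear in:
select event_timestamp, dbusername, object_schema, object_name, sql_text
from   unified_audit_trail
where  unified_audit_policies = 'PKG_EXEC_POL';
</code>
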
Categories: DBA Blogs

An instance named dvil?

Tom Kyte - Fri, 2020-07-10 17:26
Hello Masters, I hope you are OK; it was impossible to ask TOM for many days... Two weeks ago I was testing, with RMAN, the "RECOVER TABLE" command. It helped me to restore a dropped table from RMAN backups: a very powerful and useful feature, thanks to Oracle for that tool. I dropped the table HR.ZZRMAN01 of the PDB ORCL and, to restore it, my command was: <code>RMAN> RECOVER TABLE HR.ZZRMAN01 OF PLUGGABLE DATABASE ORCL UNTIL SCN 2331673 AUXILIARY DESTINATION '/u01/app/oracle/RMAN_TEMP' REMAP TABLE 'HR'.'ZZRMAN01':'ZZRMAN01_PREV';</code> And, surprise, on the screen I saw that Oracle created an instance named dvil...
<code>
Starting recover at 20-JUN-20
using channel ORA_DISK_1
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace ORCL:SYSTEM
Tablespace ORCL:UNDOTBS1
Tablespace UNDOTBS2
</code>
Here!
<code>
Creating automatic instance, with SID='dvil'

initialization parameters used for automatic instance:
db_name=ORCL12C
db_unique_name=dvil_pitr_ORCL_ORCL12C
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=800M
processes=200
db_create_file_dest=/u01/app/oracle/RMAN_TEMP
log_archive_dest_1='location=/u01/app/oracle/RMAN_TEMP'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

starting up automatic instance ORCL12C

Oracle instance started

Total System Global Area     838860800 bytes
Fixed Size                     8798312 bytes
Variable Size                239079320 bytes
Database Buffers             583008256 bytes
Redo Buffers                   7974912 bytes

Automatic instance created
...
</code>
Well, my little question is just: does dvil mean devil? Do you agree with me, or am I wrong? I was so amused that I wanted to talk about it with Oracle experts :-) Have a nice day, David D.
Categories: DBA Blogs

Session wise rank ,change in value should lead to new rank

Tom Kyte - Fri, 2020-07-10 17:26
Hi Tom, I need help to get a sequence number or rank whenever there is a change in a continuous sequence. Below is an example.

Input (seq, id, value):
1,1,200
2,1,200
3,1,300
4,1,200
5,1,200
6,1,500
7,1,500
8,1,700

Expected output: the same combination of (id, value), taken in sequence order, should be assigned one value. For id 1 and value 200 the rnk should be 1; then there is a change in the 3rd row, so rnk = 2; then the 4th row should be assigned a new rnk of 3, not 1, and so on.

seq,id,value,rnk
1,1,200,1
2,1,200,1
3,1,300,2
4,1,200,3
5,1,200,3
6,1,500,4
7,1,500,4
8,1,700,5
9,1,800,6

I have tried the LEAD, LAG, FIRST_VALUE, LAST_VALUE, RANK, DENSE_RANK and ROW_NUMBER analytic functions, but I could not achieve the requirement.
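
A common way to get this numbering (a sketch; the table name t is assumed) is to flag each row whose value differs from the previous row in seq order, then take a running total of the flags:

<code>
select seq, id, value,
       sum(chg) over (partition by id order by seq) as rnk
from (
  select seq, id, value,
         case
           when value = lag(value) over (partition by id order by seq)
           then 0
           else 1                -- first row of each new run starts a new rank
         end as chg
  from   t
);
</code>
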
Categories: DBA Blogs

View raise ORA-28094 based on "dbms_redact" base-table

Tom Kyte - Fri, 2020-07-10 17:26
Dear Ask-Tom-Team, I have to apologize up front because I think I've missed something in the documentation regarding the dbms_redact setup, or the restrictions that come with it. If so, I'm sorry for wasting your time just to point me in the right direction. I have a simple test case on livesql (but you cannot run it there, because you have no execute right on dbms_redact); please have a look at: https://livesql.oracle.com/apex/livesql/s/kcq634fgexodc6m6a8n4esb0l My question is regarding the ORA-28094 raised by the "tom_redact_view_28094" view (at statement 13). The restriction I couldn't find, and to be honest don't understand, is this: the above mentioned view just selects all columns from the base table (which has one redacted column - ssn), and just adds two columns together (nr1 + nr2). And this addition seems to be enough to raise the ORA-28094. If you do not select the "sum" column from this view, it works just fine. I've tested this on AIX with Oracle 19.5, and on Oracle 12.2 and 18.3 on Oracle Linux. All of the above show the same behaviour, which is another reason why I think I've just missed a restriction in the documentation. I know I could redact the column in the view, but I think this would defeat the purpose, because I would like to redact the 'base table' and rather not the 10 views based on this table. In the end my tests would go a step further, because my initial setup started with dbms_tsdp_* using dbms_redact. But I think that dbms_redact is the restricting part, so I've simplified this test case. I hope the test case and my explanation are sufficient for you to reproduce the behaviour. And as stated above, if I've missed something in the documentation and you can point me in the right direction, I'm grateful and sorry! Thank you in advance for your time and help! best regards, Tom
Categories: DBA Blogs

Inserting data into destination table from source if data is not present in destination table, taking more time due to large volume of data

Tom Kyte - Fri, 2020-07-10 17:26
sample example: /* This procedure is taking approx 20-25 mins to complete because it selects a large volume of data. Is there any way to reduce execution time? */
<code>
procedure sample (a_in IN varchar2)
is
  v_row  number;
  v1_row number;
  v2_row number;
  cursor c1 is
    select a_value, b_value, ...
    from   source_table <where condition>;
  /* cursor c1 selects 46 million records, but only a few records are inserted
     into the two destination tables below based on conditions;
     source_table is a force view */
begin
  for i in c1 loop
    v_row := 0;
    select count(1) into v_row
    from   table_item
    where  item = i.a_value||'_'||a_in;
    if v_row > 0 then
      select count(1) into v1_row
      from   destination_table1
      where  item1 = (i.b_value||'_'||a_in);
      if v1_row = 0 then
        insert into destination_table1 (a_value, b_value)
        values (i.a_value, i.b_value);
        commit;
      end if;
      if i.b_value is not null then
        v2_row := 0;
        select count(1) into v2_row
        from   destination_table2
        where  item2 = (i.a_value||'_'||a_in)
        and    item3 = (i.b_value||'_'||a_in);
        if v2_row = 0 then
          insert into destination_table2 (item2, item3)
          values (i.a_value||'_'||a_in, i.b_value||'_'||a_in);
          commit;
        end if;
      end if;
    end if;
  end loop;
end sample;
/* this procedure is taking approx. 20 mins to complete */
</code>
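
A hedged sketch of the usual cure (using the question's table and column names, which are assumptions; a_in is shown as a bind variable): replace the row-by-row loop and per-row COUNT lookups with set-based INSERT ... SELECT statements, and commit once at the end. Depending on duplicates in the source, a DISTINCT may also be needed:

<code>
insert into destination_table1 (a_value, b_value)
select s.a_value, s.b_value
from   source_table s
where  exists (select null
               from   table_item t
               where  t.item = s.a_value || '_' || :a_in)
and    not exists (select null
                   from   destination_table1 d
                   where  d.item1 = s.b_value || '_' || :a_in);

insert into destination_table2 (item2, item3)
select s.a_value || '_' || :a_in, s.b_value || '_' || :a_in
from   source_table s
where  s.b_value is not null
and    exists (select null
               from   table_item t
               where  t.item = s.a_value || '_' || :a_in)
and    not exists (select null
                   from   destination_table2 d
                   where  d.item2 = s.a_value || '_' || :a_in
                   and    d.item3 = s.b_value || '_' || :a_in);

commit;
</code>
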
Categories: DBA Blogs

Recursive WITH upgrade

Jonathan Lewis - Fri, 2020-07-10 10:19

There’s a notable change in the way the optimizer does cost and cardinality calculations for recursive subquery factoring that may make some of your execution plans change – with a massive impact on performance – as you upgrade to any version of Oracle from 12.2.0.1 onwards. The problem appeared in a question on the Oracle Developer Community forum a little while ago, with a demonstration script to model the issue.

I’ve copied the script – with a little editing – and reproduced the change in execution plan described by the OP. Here’s my copy of the script, with the insert statements that generate the data (all 1,580 of them) removed.

rem
rem     Script:         recursive_with_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2020
rem     Purpose:        
rem
rem     Last tested 
rem             12.2.0.1
rem             12.1.0.2
rem
rem     Notes:
rem     https://community.oracle.com/thread/4338248
rem
rem     The upgrade to 12.2.0.1 made this query much slower (on 15,000 rows)
rem     Setting OFE to 12.1.0.1 is a first possible fix for the issue.
rem     The scale is too small to see much difference in this case
rem

drop table test_folder purge;

create table test_folder(
        fldr_key                number(16,0)            not null        enable,                 
        fldr_id                 varchar2(255 byte)      not null        enable,                 
        fldr_desc_tx            varchar2(255 byte),                     
        par_fldr_key            number(16,0),                   
        seus_key                number(16,0)            not null        enable,                 
        fldr_private_flg        varchar2(1 byte)        not null        enable,                 
        last_updt_dt            date                    not null        enable,                 
        last_upby_seus_key      number(16,0)            not null        enable,                 
        lock_seq_nbr            number(9,0) default 0   not null        enable,                 
        content_guid            raw(16),                
        constraint test_folder_pk primary key (fldr_key)                
)       
;              

-- list of insert statements

alter table test_folder add constraint test_folder_fk  
        foreign key (par_fldr_key) references test_folder(fldr_key)
;  
  
create or replace force editionable view test_folder_vw (fldr_key) as   
with rec_path(fldr_key)  as (
        select  tf.fldr_key  
        from    test_folder tf where tf.par_fldr_key is null  
        union all  
        select  tf.fldr_key  
        from    test_folder tf, rec_path  
        where   rec_path.fldr_key = tf.par_fldr_key
)  
select fldr_key  
from rec_path   
; 

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'TEST_FOLDER',
                method_opt  => 'for all columns size 1'
        );
end;
/


select * from test_folder_vw where fldr_key = -41;  

I’ve run the test 3 times. First in 12.2.0.1 with no tweaking; then in 12.2.0.1 with the hint /*+ optimizer_features_enable(‘12.1.0.2’) */ and finally in a genuine 12.1.0.2 environment. In all three cases I enabled rowsource execution stats (‘alter session set statistics_level = all’) and pulled the plans from memory – with the following results

First, the base result from 12.1.0.2

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.03 |     604 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.03 |     604 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.03 |     604 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      68 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.02 |     536 |  1696K|  1696K| 1488K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|*  6 |     TABLE ACCESS FULL                     | TEST_FOLDER |      8 |   1419 |  11352 |00:00:00.01 |     536 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

Two points to note, in particular. First that the hash join has the recursive with pump as its first (build table) child and the table access full of test_folder as its second child (probe table); secondly that there is no value given for E-Rows for the recursive with pump.

Now the 12.2.0.1 plan:

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |   2524K|      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |   2524K|   1419 |00:00:00.01 |      23 |  1743K|  1743K| 1632K (0)|
|   5 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  6 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
|   7 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41)
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   6 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

There are three changes to notice in this plan – which (for the OP) was much slower than the 12.1.0.2 plan. First, the order of the hash join has changed: the recursive with pump is now the second child (probe table) in the join (and again shows no value for E-Rows); secondly, Oracle has introduced an extra operation – the buffer sort (reuse), populated by the table access full – as the build table; thirdly (and presumably the point of the buffer sort (reuse) operation) the number of buffer visits has dropped from a few hundred to a couple of dozen.

Finally let’s check what happens if we set the OFE (optimizer_features_enable) to 12.1.0.2 while running 12.2.0.1

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |             |      1 |        |      1 |00:00:00.01 |      47 |       |       |          |
|*  1 |  VIEW                                     |             |      1 |    801 |      1 |00:00:00.01 |      47 |       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|             |      1 |        |   1580 |00:00:00.01 |      47 | 36864 | 36864 |  102K (0)|
|*  3 |    TABLE ACCESS FULL                      | TEST_FOLDER |      1 |    161 |    161 |00:00:00.01 |      24 |       |       |          |
|*  4 |    HASH JOIN                              |             |      8 |    640 |   1419 |00:00:00.01 |      23 |  1797K|  1797K| 1573K (0)|
|   5 |     RECURSIVE WITH PUMP                   |             |      8 |        |   1580 |00:00:00.01 |       0 |       |       |          |
|   6 |     BUFFER SORT (REUSE)                   |             |      8 |        |  11352 |00:00:00.01 |      23 | 73728 | 73728 |          |
|*  7 |      TABLE ACCESS FULL                    | TEST_FOLDER |      1 |   1419 |   1419 |00:00:00.01 |      23 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLDR_KEY"=(-41))
   3 - filter("TF"."PAR_FLDR_KEY" IS NULL)
   4 - access("REC_PATH"."FLDR_KEY"="TF"."PAR_FLDR_KEY")
   7 - filter("TF"."PAR_FLDR_KEY" IS NOT NULL)

In these conditions the recursive with pump has gone back to being the build table (first child); but it's worth noting that the 12.2 buffer sort (reuse) is still in place – saving us a few hundred buffer gets (and, for a bigger table, possibly a number of disc reads). Downgrading the optimizer_features_enable has given us the plan we needed, but with this we've got an example showing that hacking the parameter isn't a guarantee that we will get exactly the path we used to get in the older version.

The story so far.

It seems that we can address the performance problem that the OP had by setting optimizer_features_enable to the older version – possibly through a hint embedded in the SQL, perhaps through an SQL Baseline or SQL Patch. Maybe we'll have to have a logon trigger that sets the parameter for particular users or, worst case scenario, maybe we'll have to set the parameter at the system level. Given how undesirable the last option could be it would be nice to know exactly what is causing the change in plan.

As a basic clue – if the order of tables in a hash join reverses itself this usually means that the byte (not row) estimates have changed. The optimizer will use the table with the lower byte count as the build table in a hash join. So the recursive with pump – whose row and byte estimates don’t appear – must have produced larger numbers in 12.2.0.1.

A change in the 12.2 plan that I haven’t yet mentioned is the E-rows for the hash join; it’s gone up from 640 (12.1.0.2) to 2.5 million! So let’s repeat the tests with the CBO (10053) trace file enabled and see if we can find a number like 2524K appearing as a join estimate in the trace file. Having created the two trace files (in 12.2.0.1, one with the OFE set backwards) I executed the following grep command against the trace files:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc

I’d set the tracefile_identifier to ‘ofe’ and ‘base’ respectively for the 12.1.0.2 and 12.2.0.1 tests, and here are the results:

grep -n "^Join Card - Rounded" orcl12c_ora_5524_ofe.trc
1166:Join Card - Rounded: 640 Computed: 639.941176

grep -n "^Join Card - Rounded" orcl12c_ora_5524_base.trc
1195:Join Card - Rounded: 640 Computed: 639.941176
1391:Join Card - Rounded: 2544 Computed: 2543.865546
1576:Join Card - Rounded: 10112 Computed: 10111.865546
1737:Join Card - Rounded: 40193 Computed: 40193.075630
1898:Join Card - Rounded: 159759 Computed: 159758.731092
2059:Join Card - Rounded: 635008 Computed: 635008.462185
2220:Join Card - Rounded: 2524023 Computed: 2524023.394958
2269:Join Card - Rounded: 2524023 Computed: 2524023.394958

That’s an interesting clue. Something seems to be calculating a larger and larger value in the 12.2.0.1 trace, starting with the hash join cardinality that appeared in 12.1.0.2 had, growing by a factor of nearly 4 each time, and ending with the hash join cardinality we saw in the 12.2.0.1 plan.

Taking a closer look at the content of the 12.2.0.1 trace file it turned out that every stage in that escalation was Oracle recalculating the cost and cardinality of joining test_folder (the table) and rec_path (the “with” subquery) using the figures from the previous join calculation as the new base figures for rec_path. In effect the optimizer was calculating the cost of following the recursive subquery down to its 7th level of recursion.

Side note: in agreement with my comment about the smaller (in bytes) rowsource being used as the build table, the initial join order started as (test_folder, rec_path) in the first two iterations, but switched to (rec_path, test_folder) from the 3rd iteration onwards.

So we’ve identified the mechanics that cause the change in plan; the question now is: why 7 iterations to the final cost? (Briefly I did a quick check to see how many circles of hell there were in Dante’s Inferno – but it’s 9 (or 10 depending how you count). A quick check of v$parameter (and the x$ tables for the hidden parameters) revealed the following:

Name                                     Value
------------------------------------ ---------
_recursive_with_branch_iterations            7
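
For reference, a typical way to check a hidden parameter like this – a sketch, to be run as SYS since it uses the x$ structures – is:

select  p.ksppinm  name,
        v.ksppstvl value
from    x$ksppi  p,
        x$ksppcv v
where   p.indx = v.indx
and     p.ksppinm = '_recursive_with_branch_iterations';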

Setting this parameter to 1 in the session, or adding the hint /*+ opt_param('_recursive_with_branch_iterations' 1) */ to the query, resulted in the 12.1.0.2 plan appearing in 12.2.0.1 – and this is a much less intrusive way of getting the plan we're interested in than setting the entire OFE back to 12.1.0.2. One might even set the parameter in the spfile (after receiving approval from Oracle Corp., of course) given how precisely targeted it is (and knowing that it doesn't switch off that nice little buffering trick).
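
In concrete terms the two least intrusive options look like this (a sketch – hidden parameters need the double quotes at the session level, and the query is the one from the test script):

alter session set "_recursive_with_branch_iterations" = 1;

-- or, per statement:
select  /*+ opt_param('_recursive_with_branch_iterations' 1) */
        *
from    test_folder_vw
where   fldr_key = -41;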

Summary

From 12.2 onwards the optimizer does recursive recosting of recursive “with” subqueries. This means the cost and cardinality estimates of a plan may change and the impact may cause a significant change in performance – it certainly did for the OP.

The change seems to be driven by the hidden parameter _recursive_with_branch_iterations, which was introduced in 12.2.0.0 with a default value of 7. Setting this parameter to 1 reverts the optimizer to the pre-12.2 behaviour. If you run into a problem of recursive “with” subqueries changing plans and causing performance problems on an upgrade from pre-12.2 to a newer version of Oracle then it would be worth investigating this parameter as the least intrusive way of reverting back to the original plans.

 

Multi-Factor Authentication (MFA) using OKTA with Spring Boot and Tanzu Application Service

Pas Apicella - Thu, 2020-07-09 23:22
Recently I was asked to build a quick demo showing how to use MFA with OKTA and a Spring Boot application running on Tanzu Application Service. Here is the demo application, plus how to set it up and run it yourself.

Steps

1. Clone the existing repo as shown below

$ git clone https://github.com/papicella/mfa-boot-fsi
Cloning into 'mfa-boot-fsi'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 2), reused 47 (delta 2), pack-reused 0
Unpacking objects: 100% (47/47), done.



2. Create a free account at https://developer.okta.com/

Once created, log in to the dev account. Your account URL will look something like the following:

https://dev-{ID}-admin.okta.com



3. You will need your default authorization server settings. From the top menu in the developer.okta.com dashboard, go to API -> Authorization Servers and click on the default server


You will need this data shortly. The image above is an example; those details won't work for your own setup.

4. From the top menu, go to Applications and click the Add Application button. Click on the Web button and click Next. Name your app whatever you like. I named mine "pas-okta-springapp". Otherwise the default settings are fine. Click Done.

From this screenshot you can see that the defaults refer to localhost, which for DEV purposes is fine.


You will need the Client ID and Client secret from the final screen, so make a note of these.

5. Edit the "./mfa-boot-fsi/src/main/resources/application-DEV.yml" to include the details as per #3 and #4 above.

You will need to edit

  • issuer
  • client-id
  • client-secret


application-DEV.yaml

spring:
  security:
    oauth2:
      client:
        provider:
          okta:
            user-name-attribute: email

okta:
  oauth2:
    issuer: https://dev-213269.okta.com/oauth2/default
    redirect-uri: /authorization-code/callback
    scopes:
      - profile
      - email
      - openid
    client-id: ....
    client-secret: ....

6. In order to pick up this application-DEV.yaml we have to set the spring profile correctly. That can be done using a JVM property as follows.

-Dspring.profiles.active=DEV

In my example I use IntelliJ IDEA so I set it on the run configurations dialog as follows
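
If you prefer the command line, the equivalent with the project's Maven wrapper (assuming the standard spring-boot-maven-plugin) would be:

$ ./mvnw spring-boot:run -Dspring-boot.run.jvmArguments="-Dspring.profiles.active=DEV"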



7. Finally, let's set up MFA, which we do by switching to the classic UI as shown below



8. Click on Security -> Multifactor and set up another multifactor policy. In the screenshot below I select "Email Policy" and make sure it is "Required" along with the default policy



9. Now run the application making sure you set the spring active profile to DEV.

...
2020-07-10 13:34:57.528  INFO 55990 --- [  restartedMain] pas.apa.apj.mfa.demo.DemoApplication     : The following profiles are active: DEV
...

10. Navigate to http://localhost:8080/



11. Click on the "Login" button

Verify you are taken to the default OKTA login page


12. Once logged in, the second factor should ask for a verification code to be sent to your email. Press the "Send me the code" button




13. Once you enter the code sent to your email you will be granted access to the application endpoints







14. Finally, to deploy the application to Tanzu Application Service, perform the steps below

- Create a manifest.yaml as follows

---
applications:
- name: pas-okta-boot-app 
  memory: 1024M
  buildpack: https://github.com/cloudfoundry/java-buildpack.git#v4.16
  instances: 2
  path: ./target/demo-0.0.1-SNAPSHOT.jar
  env:
    JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 11.+}}'

- Package the application as follows

$ ./mvnw -DskipTests package

- In the DEV OKTA console create a second application, which will be for the deployed application on Tanzu Application Service and refers to its FQDN rather than localhost, as shown below



- Edit "application.yml" to ensure you set the following correctly for the new "Application" we created above.

You will need to edit

  • issuer
  • client-id
  • client-secret
- Push the application using "cf push -f manifest.yaml"

$ cf apps
Getting apps in org papicella-org / space apple as papicella@pivotal.io...
OK

name                requested state   instances   memory   disk   urls
pas-okta-boot-app   started           1/1         1G       1G     pas-okta-boot-app.cfapps.io


That's It!!!!

Categories: Fusion Middleware

Engage!

Scott Spendolini - Thu, 2020-07-09 21:26
One month in at Oracle, and I've been moving at what feels like warp speed – both literally and figuratively.  My team has been hard at work building the intake survey that will solicit volunteers for the Phase 3 efficacy trials.  We're also building all of the backend applications that will be used to run analytics to select qualified participants and assign them to a local clinic to receive the actual vaccine.  This project is actually a part of the US Government's Operation Warp Speed program – a program whose goal is to deliver 300 million doses of a safe and effective vaccine by January 2021.  

It should come as no surprise that we’re building 100% of this solution with Oracle’s low code platform, Oracle APEX.  Oracle APEX was the natural choice here, given the rapidly changing requirements, the need for a 100% responsive and custom UI, and the ability to deliver something substantial in just a couple of weeks.  The entire infrastructure – and it’s a lot more than just APEX & ORDS – all runs on the Oracle Cloud.

This is just one of four active projects that we have going on.  And thus the reason for this post – we're growing.  Fast.  Each of the four current projects has multiple phases, many of which have yet to even start.  And there's a bunch of new and exciting work on the not-too-distant horizon as we continue to reach out and assist those in need during this global pandemic.  Pandemic systems are also just the beginning.  Look for more exciting announcements about what we're up to as we can share them.

We’re already well on our way putting together a world-class team to work on these projects.  You may have already seen Tweets from Doug Gault and Dan Mcghan about their roles on this new team.  These are just two of the several new team members that we’ve already recruited for this team.  

Despite this star-studded cast, we still need more help.  This is where you come in.  If you’re a seasoned APEX developer looking for a chance to make an actual difference in the world, we should talk.  We have a number of open slots that we need filled ASAP.  Know any good DevOps, Infrastructure or even DBA-type folks?  We’ll need them, too.

Interested in learning more or applying?  Drop me a note, hit me up on Twitter or hop on over here to fill out an application to get things started.

Execution Plans

Jonathan Lewis - Thu, 2020-07-09 10:54

This is an example from the Oracle Developer Community of using the output of SQL Monitor to detect a problem with object statistics that resulted in an extremely poor choice of execution plan.

A short time after posting the original statement of the problem the OP identified where he thought the problem was and the general principle of why he thought he had a problem – so I didn’t have to read the entire execution plan to work out a strategy that would be (at least) a step in the right direction of solving the performance problem.

This note, then, is just a summary of the five minutes that I spent confirming the OP's hypothesis and explaining how to work around the problem he had identified. It does, however, give a little lead-in to the comments I made to the OP in order to give a more rounded picture of what his execution plan was telling us.

So here’s the top half of the plan (which is the first subquery of a “concatenation”) with the first few predicates:

===============================================================================================================================================================================================================================  
| Id |                 Operation                  |            Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Cell   |  Mem  | Activity |             Activity Detail             |  
|    |                                            |                            | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Offload | (Max) |   (%)    |               (# samples)               |  
===============================================================================================================================================================================================================================  
|  0 | SELECT STATEMENT                           |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  1 |   CONCATENATION                            |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |          |                                         |  
|  2 |    FILTER                                  |                            |         |      |     12191 |     +4 |     1 |     864K |      |       |         |       |     0.03 | Cpu (4)                                 |  
|  3 |     FILTER                                 |                            |         |      |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  4 |      NESTED LOOPS                          |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.02 | Cpu (3)                                 |  
|  5 |       NESTED LOOPS                         |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.07 | Cpu (8)                                 |  
|  6 |        NESTED LOOPS                        |                            |     241 | 251K |     12232 |     +4 |     1 |      26M |      |       |         |       |     0.05 | Cpu (6)                                 |  
|  7 |         NESTED LOOPS                       |                            |    5407 | 233K |     12242 |     +4 |     1 |      86M |      |       |         |       |          |                                         |  
|  8 |          MERGE JOIN CARTESIAN              |                            |       1 |   35 |     12242 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
|  9 |           TABLE ACCESS BY INDEX ROWID      | REF1                       |       1 |    3 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 10 |            INDEX RANGE SCAN                | REF1_PK                    |       1 |    2 |     12242 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 11 |           BUFFER SORT                      |                            |      84 |   32 |     12242 |     +4 |     1 |     1000 |      |       |         |  104K |          |                                         |  
| 12 |            TABLE ACCESS BY INDEX ROWID     | STAGE                      |      84 |   32 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 13 |             INDEX RANGE SCAN               | STAGE_IDX1                 |      84 |    4 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 14 |          PARTITION RANGE ITERATOR          |                            |    8292 | 232K |     12232 |     +4 |  1000 |      86M |      |       |         |       |          |                                         |  
| 15 |           TABLE ACCESS STORAGE FULL        | TAB1                       |    8292 | 232K |     12245 |     +1 |  1000 |      86M | 103M | 521GB |   1.96% |    7M |    51.81 | gc buffer busy acquire (1)              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: cache buffers chains (1)         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (1196)                              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (2) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | reliable message (5)                    |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (2827)  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell smart table scan (1977)            |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | read by other session (304)             |  
| 16 |         PARTITION RANGE ITERATOR           |                            |       1 |   12 |     12191 |     +4 |   86M |      26M |      |       |         |       |     0.42 | Cpu (51)                                |  
| 17 |          TABLE ACCESS BY LOCAL INDEX ROWID | TAB2                       |       1 |   12 |     12191 |     +4 |   86M |      26M |   4M |  28GB |         |       |    32.14 | gc cr grant 2-way (20)                  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc cr request (2)                       |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc remaster (6)                         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (319)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (4) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: gc element (2)                   |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (3563)  |  
| 18 |           INDEX RANGE SCAN                 | TAB2_IX1                   |     166 |    3 |     12210 |     +2 |   86M |      26M |   1M |  11GB |         |       |    15.17 | Cpu (292)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1557)  |  
| 19 |        INDEX UNIQUE SCAN                   | MTD_PK                     |       1 |    1 |     12242 |     +4 |   26M |      26M |  292 |   2MB |         |       |     0.17 | Cpu (20)                                |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1)     |  
| 20 |       TABLE ACCESS BY INDEX ROWID          | REF2                       |       1 |    2 |     12191 |     +4 |   26M |      26M |    7 | 57344 |         |       |     0.11 | Cpu (13)                                |  
| 21 |      TABLE ACCESS BY INDEX ROWID           | CONTROLTAB                 |       1 |    1 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 22 |       INDEX UNIQUE SCAN                    | CONTROLTAB_PK              |       1 |      |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 23 |     MINUS                                  |                            |         |      |       102 |     +4 |    25 |        3 |      |       |         |       |          |                                         |  
| 24 |      TABLE ACCESS BY INDEX ROWID           | CUST_ORG_PK                |       1 |    3 |       942 |     +4 |    25 |       10 |      |       |         |       |          |                                         |  
| 25 |       INDEX UNIQUE SCAN                    | MC_PK                      |       1 |    2 |       942 |     +4 |    25 |       25 |      |       |         |       |          |                                         |  
| 26 |      SORT UNIQUE NOSORT                    |                            |       1 |    4 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 27 |       TABLE ACCESS BY INDEX ROWID          | REF1                       |       1 |    3 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 28 |        INDEX RANGE SCAN                    | REF1_PK                    |       1 |    2 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  


Predicate Information (identified by operation id):  
---------------------------------------------------  
   2 - filter( EXISTS ( (SELECT /*+ INDEX_RS_ASC ("CUST_ORG_PK" "MC_PK") */ "CUST_ID" FROM "USER1"."CUST_ORG_PK"  "CUST_ORG_PK" WHERE "CUST_ID"=:B1 AND "CUST_ORG_PK"."INDICATR"='Y')MINUS (SELECT /*+ INDEX_RS_ASC  
              ("REF1" "REF1_PK") */ TO_NUMBER("VAL") FROM "USER1"."REF1" "REF1" WHERE "PUSER"='ZZZ' AND "EDATE">=TRUNC(SYSDATE@!) AND TO_NUMBER("VAL")=:B2  
              AND "SDATE"<=TRUNC(SYSDATE@!)))) 3 - filter( EXISTS (SELECT /*+ INDEX_RS_ASC ("CONTROLTAB" "CONTROLTAB_PK") */ 0 FROM "USER2"."CONTROLTAB" "CONTROLTAB" WHERE "CONTROLTAB"."CNTRLID"=9999 AND NVL("CONTROLTAB"."STATUS",'F')='S')) 9 - filter("REF1"."EDATE">=TRUNC(SYSDATE@!))  
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE"<=TRUNC(SYSDATE@!))  
  13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')  

The first thing to note is that the “Time (active)” shown at the top line is about 12,000 seconds – so it’s a long running query. Your next observation – before you look at the shape of the plan – might be to note that operations 15, 17 and 18 between them record thousands of seconds of time, mostly I/O time but including 1,200 seconds of CPU time. This might draw your eye to the part of the plan that tells you what you are doing at these operations and why you are doing it.

Looking at the detail – operation 15 is a full tablescan that is the child of a partition range iterator (operation 14), and that iterator is the second child of a nested loop join (operation 7). Why is the optimizer so out of balance that it thinks a table scan of (possibly) multiple partitions of a partitioned table is a good candidate for the second child of a nested loop?! The answer comes from the first child – the  Merge Join Cartesian at operation 8 has been given a cardinality estimate of just one row. When the cardinality estimate is one for the first table in a join then it doesn’t matter whether Oracle uses a nested loop join or a hash join, whatever happens next is only supposed to happen once.

Unfortunately when we compare "Rows (Estim)" and "Rows (Actual)" for the operation we see that the Merge Join Cartesian produced 1,000 rows – so the partition tablescan was going to happen 1,000 times (which we can confirm from the "Execs" column of operation 14). As a first step, then, we need to ensure that the optimizer gets a better estimate of the number of rows produced by the Cartesian join. So let's look at its child operations.

  • Operation 9 (Table Access of REF1) is predicted to return one row – with “Rows (actual)” = 1.
  • Operation 11 (Buffer Sort of data from STAGE1) is predicted to return 84 rows – with “Rows (actual)” = 1,000

Since the number of rows produced by a Cartesian join should be the product of the number of rows of the two inputs, this tells us that the optimizer's estimate of the number of rows from REF1 has been rounded up to 1 from a very small fraction (less than 1/84). If we can correct this then we may get Oracle to change the awful nested loop to an acceptable hash join. Even if we can't fix this mis-estimate we may be able to do something that improves the estimate for STAGE1 to something sufficiently large that it will trigger the switch to a hash join. So let's look at the predicates for these two tables.

REF1 predicates
   9 - filter("REF1"."EDATE">=TRUNC(SYSDATE@!))  
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE"<=TRUNC(SYSDATE@!))  

The index range scan is based on an access predicate (with no filter predicate), so it looks like there’s a nearly perfect three-column index on REF1, but the optimizer can’t use the number of distinct keys in the index to get a good estimate of cardinality because one of the predicates is range-based. So the arithmetic will look at the three predicates separately and multiply up their selectivities. (It’s possible, of course, that this might be the first three columns of a 4, or more, column index.)

It’s a reasonable guess that the number of distinct combinations of (puser, name) will be much smaller than num_distinct(puser) * num_distinct(name) – so one strategy that might help increase the table’s cardinality estimate is to create extended statistics on the column group (puser, name).

Another reasonable guess is that the number of distinct values for the two columns is (relatively) small, with some skew to the distribution (name = ‘CODE’ looks particularly susceptible to being a commonly occurring value) – so perhaps we need a histogram on one or both of the columns (which would then require a histogram to be created on the column group as well if we wanted the optimizer to use the column group). We’d also have to make sure that the queried values didn’t fall outside the known low/high values for the columns if we wanted the column group to be used.
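
As a sketch of the mechanics (the USER1/REF1 names come from the predicate section above; the OP's real names and the exact method_opt are a judgment call), the column group and an accompanying histogram might be created like this:

-- Create the (puser, name) column group on REF1; gathering with a
-- "size 254" method_opt also builds a histogram on the group
select dbms_stats.create_extended_stats(
               ownname   => 'USER1',
               tabname   => 'REF1',
               extension => '(puser, name)'
       ) as extension_name
from   dual;

begin
        dbms_stats.gather_table_stats(
                ownname    => 'USER1',
                tabname    => 'REF1',
                method_opt => 'for columns (puser, name) size 254'
        );
end;
/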

STAGE1 Predicates
13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')

This is the access(-only) predicate for the index stage_idx1, and there are no filter predicates when we reach the table. So stage_idx1 might be a two-column index on the table that we are using completely, or it may be an index with more columns that we are using only incompletely. We can see that the cardinality estimate is out by a factor of 12 (84 predicted, 1,000 actual) so if this is the complete index (which would allow Oracle to use the distinct_keys value to estimate cardinality) there must be an uneven data distribution in the values; but if this is just the first two columns of a longer index then we might benefit from extended stats (viz: another column group) on this pair of columns.

Again, even if we create a column group, or take automatic advantage of the distinct_keys figure, the predicate STAT='I' (is that state, or status?) looks a little dangerous – status columns tend to have a small number of distinct values with a significant skew to the distribution of values – so we may want to create a histogram on the STAT column, which would then require us to create a histogram on the column group if we also wanted the column group stats to have an effect.

What Happened Next?

I made the suggestions about column groups and histograms to the OP – without the level of detail that I've given in the explanations above – and got the following response:

You are spot on.

There does exist a frequency histogram on column NAME (having 14 distinct values) and STAT (having 7 distinct values) of table STAGE. There also already exists a frequency histogram on column PUSER and a height-balanced histogram on column NAME of table REF1. But still they were not helping the existing query.

After creating a column group on both ref1 (puser, name) and stage (name, stat), with a histogram for both (i.e. 'FOR COLUMNS SIZE 254'), the merge Cartesian was removed automatically from the path by the optimizer and it is now doing a hash join with TAB1, finishing the query in quick time.

Summary

When the cardinality (rows) estimate for an operation drops to one (which might mean much less than one and rounded up) then all hell can break loose and the choice of parent operation – and its cardinality estimate – might be insanely stupid, leading to a catastrophically poor execution plan.

Very low cardinality estimates are often the result of multiplying individual column selectivities to produce an estimated selectivity that is unrealistic (much too small) when compared with the actual number of distinct combinations that exist in the table. In such cases creating a column group, (possibly with a histogram) may be all you need to do to get a better cardinality estimate and a vastly improved execution plan.

 

Adding a Function-Based Index

Hemant K Chitale - Thu, 2020-07-09 10:51
What happens when you create a Function Based Index ?

{This demo is in 12.2}

I start with my "large" table.

SQL> create table my_data_table 
2 as select * from dba_objects
3 union all select * from dba_objects;

Table created.

SQL> select count(*) from my_data_table;

COUNT(*)
----------
157408

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL> select num_rows, avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

NUM_ROWS AVG_ROW_LEN
---------- -----------
157408 132

SQL>


I then add a Function Based Index on it.

SQL> create index my_data_fbi   
2 on my_data_table(lower(OBJECT_NAME))
3 /

Index created.

SQL>


I now look at Table and Column statistics

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> desc my_data_table
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)
OBJECT_ID NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(23)
CREATED DATE
LAST_DDL_TIME DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
NAMESPACE NUMBER
EDITION_NAME VARCHAR2(128)
SHARING VARCHAR2(18)
EDITIONABLE VARCHAR2(1)
ORACLE_MAINTAINED VARCHAR2(1)
APPLICATION VARCHAR2(1)
DEFAULT_COLLATION VARCHAR2(100)
DUPLICATED VARCHAR2(1)
SHARDED VARCHAR2(1)
CREATED_APPID NUMBER
CREATED_VSNID NUMBER
MODIFIED_APPID NUMBER
MODIFIED_VSNID NUMBER

SQL>
SQL> l
1 select column_name, num_nulls,to_char(last_analyzed,'DD-MON HH24:MI')
2 from user_tab_columns
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME NUM_NULLS TO_CHAR(LAST_ANALYZED
------------------------------ ---------- ---------------------
OWNER 0 09-JUL 22:43
OBJECT_NAME 0 09-JUL 22:43
SUBOBJECT_NAME 155954 09-JUL 22:43
OBJECT_ID 2 09-JUL 22:43
DATA_OBJECT_ID 139120 09-JUL 22:43
OBJECT_TYPE 0 09-JUL 22:43
CREATED 0 09-JUL 22:43
LAST_DDL_TIME 2 09-JUL 22:43
TIMESTAMP 2 09-JUL 22:43
STATUS 0 09-JUL 22:43
TEMPORARY 0 09-JUL 22:43
GENERATED 0 09-JUL 22:43
SECONDARY 0 09-JUL 22:43
NAMESPACE 2 09-JUL 22:43
EDITION_NAME 157408 09-JUL 22:43
SHARING 0 09-JUL 22:43
EDITIONABLE 101970 09-JUL 22:43
ORACLE_MAINTAINED 0 09-JUL 22:43
APPLICATION 0 09-JUL 22:43
DEFAULT_COLLATION 125494 09-JUL 22:43
DUPLICATED 0 09-JUL 22:43
SHARDED 0 09-JUL 22:43
CREATED_APPID 157408 09-JUL 22:43
CREATED_VSNID 157408 09-JUL 22:43
MODIFIED_APPID 157408 09-JUL 22:43
MODIFIED_VSNID 157408 09-JUL 22:43

26 rows selected.

SQL>
SQL> l
1 select column_name, num_nulls,to_char(last_analyzed,'DD-MON HH24:MI')
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME NUM_NULLS TO_CHAR(LAST_ANALYZED
------------------------------ ---------- ---------------------
OWNER 0 09-JUL 22:43
OBJECT_NAME 0 09-JUL 22:43
SUBOBJECT_NAME 155954 09-JUL 22:43
OBJECT_ID 2 09-JUL 22:43
DATA_OBJECT_ID 139120 09-JUL 22:43
OBJECT_TYPE 0 09-JUL 22:43
CREATED 0 09-JUL 22:43
LAST_DDL_TIME 2 09-JUL 22:43
TIMESTAMP 2 09-JUL 22:43
STATUS 0 09-JUL 22:43
TEMPORARY 0 09-JUL 22:43
GENERATED 0 09-JUL 22:43
SECONDARY 0 09-JUL 22:43
NAMESPACE 2 09-JUL 22:43
EDITION_NAME 157408 09-JUL 22:43
SHARING 0 09-JUL 22:43
EDITIONABLE 101970 09-JUL 22:43
ORACLE_MAINTAINED 0 09-JUL 22:43
APPLICATION 0 09-JUL 22:43
DEFAULT_COLLATION 125494 09-JUL 22:43
DUPLICATED 0 09-JUL 22:43
SHARDED 0 09-JUL 22:43
CREATED_APPID 157408 09-JUL 22:43
CREATED_VSNID 157408 09-JUL 22:43
MODIFIED_APPID 157408 09-JUL 22:43
MODIFIED_VSNID 157408 09-JUL 22:43
SYS_NC00027$ 0 09-JUL 22:43

27 rows selected.

SQL>

SQL> l
1 select column_name, avg_col_len
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME AVG_COL_LEN
------------------------------ -----------
OWNER 6
OBJECT_NAME 34
SUBOBJECT_NAME 2
OBJECT_ID 5
DATA_OBJECT_ID 2
OBJECT_TYPE 10
CREATED 8
LAST_DDL_TIME 8
TIMESTAMP 20
STATUS 7
TEMPORARY 2
GENERATED 2
SECONDARY 2
NAMESPACE 3
EDITION_NAME 0
SHARING 13
EDITIONABLE 2
ORACLE_MAINTAINED 2
APPLICATION 2
DEFAULT_COLLATION 4
DUPLICATED 2
SHARDED 2
CREATED_APPID 0
CREATED_VSNID 0
MODIFIED_APPID 0
MODIFIED_VSNID 0
SYS_NC00027$ 34

27 rows selected.

SQL>
SQL> select num_rows, avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

NUM_ROWS AVG_ROW_LEN
---------- -----------
157408 166

SQL>


A new column "SYS_NC00027$" appears in USER_TAB_COLS but not in USER_TAB_COLUMNS.  The new column is not also not visible when I run a "DESCRIBE" command.

Also, the AVG_ROW_LEN has increased by 34, which matches the AVG_COL_LEN of the new virtual column (none of my objects have long names).
But did all the blocks actually get rewritten? Are the "lower(OBJECT_NAME)" values written into each table block as well?
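
To confirm what the dictionary records about the new column, USER_TAB_COLS can be queried for its hidden/virtual flags and the expression it carries (a quick cross-check; HIDDEN_COLUMN, VIRTUAL_COLUMN and DATA_DEFAULT are the relevant dictionary columns):

select column_name, hidden_column, virtual_column, data_default
from user_tab_cols
where table_name = 'MY_DATA_TABLE'
and hidden_column = 'YES';

This should report SYS_NC00027$ with HIDDEN_COLUMN = 'YES' and VIRTUAL_COLUMN = 'YES', and DATA_DEFAULT holding the LOWER("OBJECT_NAME") expression.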

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> l
1 select obj#, savtime, rowcnt, blkcnt, avgrln,samplesize,to_char(analyzetime,'DD-MON-RR HH24:MI:SS')
2 from sys.WRI$_OPTSTAT_TAB_HISTORY
3 where obj# =
4 (select object_id
5 from dba_objects
6 where owner = 'HEMANT'
7 and object_type = 'TABLE'
8 and object_name = 'MY_DATA_TABLE')
9* order by 1,2
SQL> /

OBJ# SAVTIME ROWCNT BLKCNT AVGRLN SAMPLESIZE TO_CHAR(ANALYZETIME,'DD-MON
---------- ---------------------------- ---------- ---------- ---------- ---------- ---------------------------
79843 09-JUL-20 10.39.03.789392 PM 157408 3106 132 157408 09-JUL-20 22:37:53
79843 09-JUL-20 10.43.59.424420 PM 157408 3106 132 157408 09-JUL-20 22:39:03
79843 09-JUL-20 11.02.35.088733 PM 157408 3106 166 157408 09-JUL-20 22:43:59

SQL>


Apparently, the actual number of blocks did not increase.

SQL> l
1 select segment_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_DATA%'
4* order by 1,2
SQL> /

SEGMENT_NAME SEGMENT_TYPE BYTES/1024
------------------------------ ------------------ ----------
MY_DATA_FBI INDEX 8192
MY_DATA_TABLE TABLE 25600

SQL>
SQL>
SQL> l
1 select index_name, column_name
2 from user_ind_columns
3* where index_name = 'MY_DATA_FBI'
SQL> /

INDEX_NAME COLUMN_NAME
------------------------------ ------------------------------
MY_DATA_FBI SYS_NC00027$

SQL>


So, the Index does take up space as an Index segment, but the Virtual Column ("lower(OBJECT_NAME)") created on the Table does not grow the table. What seems to happen is that the "computed" value is stored in the Index segment (leaf blocks) but not in the Table segment (blocks).
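
The expression behind SYS_NC00027$ can also be read from USER_IND_EXPRESSIONS (a small sketch; COLUMN_EXPRESSION is a LONG, so a SET LONG may be needed in SQL*Plus to see all of it):

set long 4000
select index_name, column_expression
from user_ind_expressions
where index_name = 'MY_DATA_FBI';

This should show LOWER("OBJECT_NAME") as the indexed expression.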


What happens if I drop the Function Based Index?

SQL> drop index my_data_fbi;

Index dropped.

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*)
2 from user_tab_columns
3 where table_name = 'MY_DATA_TABLE'
4 /

COUNT(*)
----------
26

SQL> select count(*)
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4 /

COUNT(*)
----------
26

SQL> select avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

AVG_ROW_LEN
-----------
132

SQL>


The additional column and the 34 bytes it represents have disappeared from the Table and Column statistics.
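
For comparison (a sketch only, not part of the test above), an explicitly declared virtual column behaves the same way at the storage level: it is computed on access and adds nothing to the table blocks, although unlike the hidden column created by the Function Based Index it is visible in USER_TAB_COLUMNS and in a DESCRIBE:

alter table my_data_table
  add (object_name_lower as (lower(object_name)));  -- virtual column: metadata only, no stored data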



Categories: DBA Blogs

Oracle Awarded Framework Agreement from Government of Canada

Oracle Press Releases - Thu, 2020-07-09 07:00
Press Release
Oracle Awarded Framework Agreement from Government of Canada New initiative expands cloud options for Oracle-based workloads including Oracle Database, Weblogic, Oracle E-Business Suite, PeopleSoft, and Siebel

Redwood Shores, Calif.—Jul 9, 2020

Oracle announced today the award of a Framework Agreement for the provision of secure cloud services to the Government of Canada. This agreement gives Oracle's Canadian federal government customers the ability to securely store, manage and process sensitive data and applications in Oracle Cloud Infrastructure.

Focused on a “cloud-first” strategy, the Government of Canada uses cloud services as its preferred delivery option when initiating information technology investments, initiatives, strategies and projects. The access to Oracle Cloud regions in Toronto and Montreal afforded by this agreement will help support Canadian government and affiliated public sector entities in their digital transformation efforts.

Oracle Cloud is ideal for mission-critical workloads like financials and other back office applications, as well as newer workloads like simulations, machine learning, and cloud native applications. Oracle’s Canadian Cloud regions today offer Oracle Cloud services that include Compute (CPU and GPU virtual machine and bare metal), Storage (block, object, and archive storage, including new immutable object storage capabilities), Oracle Database and Oracle Exadata Cloud Services, FastConnect, Identity and Access Management, Key Management Service, and Load Balancer.

“This Framework Agreement for Oracle Cloud Infrastructure is a critical step in meeting the growing demands and compliance requirements of our public sector customers,” said Walter Dann, Area Vice President, Oracle Public Sector. “By achieving this milestone, we are able to help the federal government and Canadian public sector tap into our second generation cloud to not just keep pace, but also stay ahead of the evolving business and technology landscape. Oracle’s outstanding price-performance, Oracle Autonomous Database, machine learning and AI innovations along with Oracle Cloud Infrastructure secure by design will be key to enabling our customers’ enterprise workloads in the cloud.”

Oracle is pleased to work with the Government of Canada to support the evaluation of Oracle Cloud. Working with the Government of Canada, Oracle assessed Oracle Cloud across a robust set of controls and global certifications, including SOC and ISO. Oracle’s support of these controls and certifications gives customers confidence that workloads across government and the private sector meet a high level of assurance.

Oracle Cloud was developed with a particular focus on addressing government cloud needs with capabilities including strong encryption, in-depth auditing, and robust security controls and tools.  With this new agreement, the Government of Canada now has access to a compelling array of advanced technology, including machine learning-enabled applications, integrated security tools, automated analytics, and Oracle Autonomous Database.

Oracle Cloud is a second generation enterprise cloud platform that delivers powerful compute and networking performance and includes a comprehensive portfolio of infrastructure and platform cloud services. With this new agreement, the Government of Canada now has access to the only second generation Cloud built to run Oracle Autonomous Database, the industry’s first and only self-driving database with performance SLAs.

Today, thousands of public sector customers worldwide use Oracle’s digital solutions to transform how their constituents engage with government. Now, these cloud services will be extended to Oracle’s largest customer in the Canadian region. For more information, go to https://www.oracle.com/communitydevelopment.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Nicole Maloney
Oracle
+1.650.606.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Nicole Maloney

  • +1.650.606.0806

Lawrence J. Ellison Institute for Transformative Medicine of USC Explores Breakthrough Tissue "Fingerprints" Concept for Cancer Diagnostics Using Oracle Cloud Technology

Oracle Press Releases - Thu, 2020-07-09 07:00
Press Release
Lawrence J. Ellison Institute for Transformative Medicine of USC Explores Breakthrough Tissue "Fingerprints" Concept for Cancer Diagnostics Using Oracle Cloud Technology Tissue “fingerprinting” enables deep-learning algorithm to predict biomarkers, prognosis and theragnosis, set to improve understanding of cancer biology and accelerate clinical workflows, new research finds.

Los Angeles and Redwood Shores, CA—Jul 9, 2020

The Lawrence J. Ellison Institute for Transformative Medicine of USC (“Ellison Institute”) and Oracle reveal a promising two-step technique to train a high-confidence predictive algorithm for enhanced cancer diagnostics. The study uses novel tissue “fingerprints”—discriminating microscopic hematoxylin and eosin (H&E) histologic features—of tumors paired with correct diagnoses to facilitate deep learning in the classification of breast cancer ER/PR/HER2 status.

The approach was able to achieve unprecedented diagnostic accuracy for an algorithm of its type and purpose, while using fewer than a thousand annotated breast cancer pathology slides. The findings suggest that the algorithm’s ability to correlate a tumor’s architectural pattern with a correct diagnosis can ultimately help clinicians determine how a tumor will respond to a given treatment.

The study was facilitated by Oracle for Research, Oracle’s global program that provides selected researchers with access to Oracle Cloud technology, consulting and support, and participation in the Oracle research user community.

The research appears in Scientific Reports.

Challenges of medical machine learning

The challenge of developing artificial intelligence (AI) tools to diagnose cancer is that machine learning algorithms require clinically annotated data from tens of thousands of patients to analyze before they can recognize meaningful relationships in the data with consistency and high confidence. A dataset of that size is nearly impossible to gather in cancer pathology. Researchers training computers to diagnose cancer typically only have access to hundreds or low thousands of pathology slides annotated with correct diagnoses.

To overcome this limitation, the Ellison Institute scientists introduced a two-step process of priming the algorithm to identify unique patterns in cancerous tissue before teaching it the correct diagnoses.

“If you train a computer to reproduce what a person knows how to do, it’s never going to get far beyond human performance,” said lead author Rishi Rawat, PhD. “But if you train it on a task 10 times harder than anything a person could do you give it a chance to go beyond human capability. With tissue fingerprinting, we can train a computer to look through thousands of tumor images and recognize the visual features to identify an individual tumor. Through training, we have essentially evolved a computer eye that’s optimized to look at cancer patterns.”

The first step in the process introduces the concept of tissue “fingerprints,” or distinguishing architectural patterns in a tumor’s tissue, that an algorithm can use to discriminate between samples because no two patients’ tumors are identical.  These fingerprints are the result of biological variations such as the presence of signaling molecules and receptors that influence the 3D organization of a tumor. The study shows that AI spotted these fine, structural differentiations on pathology slides with greater accuracy and reliability than the human eye, and was able to recognize these variations without human guidance.

In this study, the research team took digital pathology images, split them in half and prompted a machine learning algorithm to pair them back together based on their molecular fingerprints.  This practice showcased the algorithm’s ability to group “same” and “different” pathology slides without paired diagnoses, which allowed the team to train the algorithm on large, unannotated datasets (a technique known as self-supervised learning).

“With clinically annotated pathology data in short supply, we must use it wisely when building classifiers,” said corresponding author Dan Ruderman, PhD, director of analytics and machine learning at the Ellison Institute. “Our work leveraged abundant unannotated data to find a reduced set of tumor features that can represent unique biology. Building classifiers upon the biology that these features represent enables us to efficiently focus the precious annotated data on clinical aspects.”

Once the model was trained to identify breast cancer tissue structure that distinguishes patients, the second step called upon its established grouping ability to learn which of those known patterns correlated to a particular diagnosis.  The discovery training set of 939 cases obtained from The Cancer Genome Atlas enabled the algorithm to accurately assign diagnostic categories of ER, PR, and Her2 status to whole slide H&E images with 0.89 AUC (ER), 0.81 AUC (PR), and 0.79 AUC (HER2) on a large independent test set of 2531 breast cancer cases from the Australian Breast Cancer Tissue Bank.

While using Oracle Cloud technology, the study’s groundbreaking technique creates a new paradigm in medical machine learning, which may allow the future use of machine learning to process unannotated or unlabeled tissue specimens, as well as variably processed tissue samples, to assist pathologists in cancer diagnostics.

“Oracle for Research is thrilled to support and accelerate the Ellison Institute’s trailblazing discoveries through advanced cloud technology,” said Mamei Sun, Vice President, Oracle. “The Ellison Institute’s innovative use of machine learning and AI can revolutionize cancer research, treatment, and patient care – and ultimately improve the lives of many.”

Technique democratizes cancer diagnosis

In breast cancer, tumors that express a molecule called estrogen receptor look unique at the cellular level and fall into their own diagnostic category because they typically respond to anti-estrogen therapies.  Currently, pathologists must use chemical stains to probe biopsy samples for the presence of the estrogen receptor to make that diagnosis, and the process is time-consuming, expensive and variable. 

The established algorithm aims to improve pathologists’ accuracy and efficiency in a digital pathology workflow by directly analyzing tumor images to diagnose them as “estrogen receptor positive” without staining specifically for estrogen receptor. The study’s results support the notion that the use of tissue “fingerprints” may allow for a direct treatment response prediction, potentially obviating the need for molecular staining approaches currently utilized in cancer theragnosis.

An exciting application of this technology lies in the possibility of deploying computer-assisted diagnostics in medically underserved regions and developing nations that lack expert pathologists, specialists and the laboratory infrastructure to stain for molecular markers.

While the study suggests additional investigation is warranted to gain a deeper understanding of AI’s ability to determine molecular status based on tissue architecture, it sets the stage for future applications where the technique could potentially aid in troubleshooting challenging tumor classification issues and enhance human pathologists’ abilities to arrive at correct diagnoses and better inform treatment decisions.

About this study

In addition to Rawat and Ruderman, other study authors include Itzel Ortega, Preeyam Roy and David Agus of the Ellison Institute; along with Ellison Institute affiliate Fei Sha of USC Michelson Center for Convergent Bioscience; and USC collaborator Darryl Shibata of the Norris Comprehensive Cancer Center at Keck School of Medicine.

The study’s computing resources were facilitated by Oracle Cloud Infrastructure through Oracle for Research, Oracle’s global program providing free cloud credits and technical support to researchers, and was supported in part by the Breast Cancer Research Foundation grant BCRF-18-002.  

In addition to his appointment at the Ellison Institute, Ruderman is an assistant professor of research medicine at USC’s Keck School of Medicine.

Contact Info
Julie Sugishita Cantor
Oracle Corporate Communications
(650) 506-0076
julie.sugishita@oracle.com
Ballantines PR
(310) 454-3080
ellison@ballantinespr.com
About Lawrence J. Ellison Institute for Transformative Medicine of USC

The Lawrence J. Ellison Institute for Transformative Medicine was founded in 2016 to leverage technology, spark innovation, and drive interdisciplinary, evidence-based research to reimagine and redefine cancer treatment, enhance health, and transform lives. Under the leadership of Dr. David B. Agus, MD, the Ellison Institute was designed to tackle the difficult questions in health care and research to push the boundaries of medicine forward.  The objective of the Ellison Institute is the rigorous and rapid translation of novel technologies into practice for use in clinical, diagnostic, and laboratory settings. The Institute is comprised of dedicated clinicians, experts and thought-leaders from disparate backgrounds who have come together to make a meaningful, positive impact on the lives of patients. This one-of-a-kind Institute hopes to serve as a powerful catalyst for innovation and reimagining the status quo in medical research and cancer treatments. For more information, visit Ellison.usc.edu.

About Oracle for Research

Oracle for Research is a global community that is working to address complex problems and drive meaningful change in the world. The program provides scientists, researchers, and university innovators with high-value, cost-effective Cloud technologies, participation in Oracle research user community, and access to Oracle’s technical support network. Through the program’s free cloud credits, users can leverage Oracle’s proven technology and infrastructure while keeping research-developed IP private and secure. Learn more at https://www.oracle.com/oracle-for-research/.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Julie Sugishita Cantor

  • (650) 506-0076

Ballantines PR

  • (310) 454-3080

Oracle Announces Oracle Dedicated Region Cloud@Customer

Oracle Press Releases - Wed, 2020-07-08 02:04
Press Release
Oracle Announces Oracle Dedicated Region Cloud@Customer Industry’s first fully managed cloud region brings 100% of Oracle’s public cloud services, including Oracle Autonomous Database and Cloud applications, to customer datacenters

Pricing starts at only $500K per month, a small fraction of what customers would have to pay for a one-off region from other cloud providers

Customers include Nomura Research Institute (NRI) in Japan and Oman Information and Communications Technology Group

Redwood Shores, Calif.—Jul 8, 2020

Driven by strong customer demand, Oracle today announced Oracle Dedicated Region Cloud@Customer, the industry’s first fully-managed cloud region that brings all of Oracle’s second-generation cloud services, including Autonomous Database and Oracle Cloud applications, to customer datacenters, starting at only $500K a month.

With this offering, enterprises get the exact same complete set of modern cloud services, APIs, industry-leading SLAs, superior price-performance, and highest levels of security available from Oracle’s public cloud regions in their own datacenters. This is ideal for highly regulated or security-focused businesses needing to meet demanding latency and data residency requirements, reduce operational costs, and modernize legacy applications.

Over the past few years, enterprise adoption of public clouds has gone mainstream as companies took advantage of the pay-as-you-go economics, scale, and agility of cloud computing. However, most enterprises expect to continue to run a portion of their workloads in on-premises datacenters for the foreseeable future. This has resulted in strong demand from customers for a hybrid architecture where the same services, same functionality, and easy portability of applications exists between their public and on-premises cloud environments. But until today, no solution was able to bridge the gap between cloud and on-premises environments. On-premises offerings from other cloud providers offer a very small subset of the services available in their public cloud regions. With today’s announcement, Oracle is making all of its cloud services—more than 50 services—available on-premises so enterprises can use Oracle’s cloud services wherever they need them—in the cloud or on-premises via Cloud@Customer.

“Enterprise customers have told us that they want the full experience of a public cloud on-premises, including access to all of Oracle’s cloud services, to run their most important workloads,” said Clay Magouyrk, executive vice president of engineering, Oracle Cloud Infrastructure. “With Oracle Dedicated Region Cloud@Customer, enterprises get all of our second-generation cloud services, including Autonomous Database, in their datacenters. Our major competitors can’t offer customers a comparable dedicated cloud region running on-premises.”

Oracle Dedicated Region Cloud@Customer includes full management capabilities and access to new features and functions the moment they become available in Oracle’s public cloud. It provides strong isolation of customer data, including all API operations, which remain local to customer datacenters and provide the highest levels of security. Additionally, Oracle Dedicated Region Cloud@Customer is certified to seamlessly run Oracle Cloud applications, including Oracle Fusion Cloud Applications (Cloud ERP, Cloud HCM, Cloud SCM, and Cloud CX), making it a completely integrated cloud experience on-premises. Customers only pay for services they consume using the same predictable low pricing offered in Oracle’s public cloud regions.

“With Dedicated Region Cloud@Customer, Oracle delivers a slice of its public cloud experience into customer datacenters, with no changes in pricing or capabilities,” said Deepak Mohan, Research Director at IDC. “This represents a new direction for public cloud providers, who have historically offered only limited versions of their services to customer premises. Oracle Dedicated Region Cloud@Customer brings the full capabilities of Oracle Cloud Infrastructure and Oracle Fusion Cloud Applications, including over 50 services, to customer premises. This brings together public cloud service capability with the compliance, latency and co-location benefits of on premises—which can be a game changer for large scale digital transformation efforts at enterprises.”

Global Organizations Adopt Oracle Dedicated Region Cloud@Customer

Nomura Research Institute (NRI), Ltd. is the largest consulting firm and IT solutions provider in Japan. “With Oracle Dedicated Region Cloud@Customer, we can use Oracle Exadata as a cloud service and achieve greater agility, such as seamless expansion, while maintaining high availability at the same level as on-premises,” said Tomoshiro Takemoto, Senior Corporate Managing Director, NRI. “Built in our own datacenter, it enables us to not only provide SOC2 reports based on Japanese security standards in financial industries, but it also allows us to access broader cloud services and tools provided by Oracle and further increase our business value for our customers.”

Oman Information and Communications Technology Group is an entity owned by State General Reserve Fund (SGRF) of the Government of Oman. “Oman Information and Communications Technology Group (OICTG), is committed to propel and shape the Sultanate’s ICT sector. Our investments, focused initiatives and bespoke partnerships aim to unlock the full potential of ICT. So, by fostering Omani talents, we will actively support the ongoing diversification of Oman’s economic development as outlined in His Majesty Sultan Haitham bin Tarik’s Oman 2040 Vision,” said HH Sayyid Kamil Fahad Al-Said; Chairman of Oman ICT Group. “Furthermore, and to meet the Sultanate’s needs of digital transformation, our centralized, innovatively structured digital framework will support the ownership and management of cloud-ready IT services. The OICTG’s sustainable, economically sensitive budget and attainable, realistic time-bound goals will guarantee maximum operational performance, flexible scalability and a secure data residency file under the accountable canopy of the Oman Investment Authority.”

“Oracle Dedicated Region Cloud@Customer enables a variety of use cases, from migrating Oracle ERP and CRM applications to deploying custom developed applications using Oracle Database, as well as implementing Digital Innovation Services (Blockchain, AI, Big Data) and High-Performance Computing (HPC), all while following the country regulations regarding data sovereignty,” said Said Al-Mandhari, CEO, Oman ICT Group.

Contact Info
Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Future Product Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Oracle Autonomous Database Now Available in Customer Datacenters

Oracle Press Releases - Wed, 2020-07-08 02:02
Press Release
Oracle Autonomous Database Now Available in Customer Datacenters Addresses data sovereignty, security, and performance concerns that prevent some enterprise workloads from moving to the public cloud

Crédit Agricole, Entel, and Samsung SDS welcome Autonomous Database on Exadata Cloud@Customer

Redwood Shores, Calif.—Jul 8, 2020

Building on the success of Oracle’s Exadata Cloud@Customer service over the last three years, Oracle announced the availability of Oracle Autonomous Database on Exadata Cloud@Customer. This new offering combines the latest Oracle Database with the fastest Oracle Database platform—Exadata—delivered as a cloud service in customer datacenters. It eliminates database management and capital expenditures while enabling pay-per-use and elastic consumption of database cloud resources. Now, Autonomous Database is available to run in customer data centers both as a standalone offering and as part of Oracle Dedicated Region Cloud@Customer, the industry’s first on-premises cloud region, which was also announced today.

Oracle Autonomous Database on Exadata Cloud@Customer is the simplest and fastest transition to a cloud model with typical deployments taking less than a week. Existing applications in a datacenter can simply connect and run without requiring any application changes—while data never leaves the customer’s datacenter. This is ideal for enterprises that find it challenging to move their mission-critical database workloads to the public cloud due to data sovereignty and regulatory requirements, security and performance concerns, or because their on-premises applications and databases are tightly coupled. 

“Exadata Cloud@Customer has been successfully deployed at hundreds of customers, including large financial services companies, telecoms, healthcare providers, insurers, and pharmaceutical companies worldwide to modernize their infrastructure and lower costs by up to 50 percent,” said Juan Loaiza, executive vice president, mission-critical database technologies, Oracle. “We are now bringing Oracle Autonomous Database to customer datacenters—freeing DBAs and developers from mundane maintenance tasks and enabling them to innovate and create more business value.”

Oracle Autonomous Database on Exadata Cloud@Customer enables organizations to move to an environment where everything is automated and managed by Oracle. Autonomous operations include: database provisioning, tuning, clustering, disaster protection, elastic scaling, securing and patching, which eliminates manual processes and human error while reducing costs and increasing performance, security and availability. The serverless architecture automatically scales to match changing workloads, providing true pay-per-use.

“Oracle Autonomous Database on Exadata Cloud@Customer combines the game changing capabilities of the revolutionary Exadata X8M platform with Oracle’s most advanced machine-learning-powered database and its second-generation cloud control plane for a true enterprise-grade database cloud experience on-premises,” said Carl Olofson, Research Vice President, Data Management Software, IDC. “Every business has a set of ISV and home grown applications that they depend on to run all aspects of their business from finance to manufacturing, HR, orders, procurement, and operations. For companies serious about running these types of critical Oracle-based applications in an on-premises enterprise database cloud, Oracle Autonomous Database on Exadata Cloud@Customer is currently the most advanced offering in the market today.”

Customers can leverage Oracle Autonomous Database on Exadata Cloud@Customer to consolidate thousands of databases and run the converged, open Oracle Database for multiple data types and workloads including Machine Learning, JSON, Graph, spatial, IOT and In-Memory, instead of deploying fragmented special-purpose databases. With Oracle Autonomous Database on Oracle Exadata Cloud@Customer, organizations can work with up to 7x larger databases, achieve greater database consolidation, and improve performance with up to 12x more SQL IOPS, 10x more SQL throughput, and 98 percent lower SQL latency than RDS on AWS Outposts. Oracle Autonomous Database on Exadata Cloud@Customer reduces customers’ infrastructure and database management by up to 90 percent because they only have to focus on the schemas and data inside their databases, not on running the underlying database infrastructure.

In addition to the new Cloud@Customer offerings, Oracle continues to enhance the capabilities of the Autonomous Database. Oracle today announced the certification of Oracle’s Siebel, PeopleSoft, and JD Edwards running on Oracle Autonomous Database. By using Autonomous Database, Oracle’s Siebel, PeopleSoft, and JD Edwards customers will lower their costs while improving security, performance, and availability. The company also announced Oracle Autonomous Data Guard which delivers an autonomously managed high availability and disaster recovery solution protecting against database and site failures. Oracle Autonomous Data Guard provides near zero data loss (RPO) and recovery time (RTO) objectives in the face of catastrophic failures.

Global Organizations Welcome New Cloud@Customer Offerings

Samsung SDS is the largest enterprise cloud solutions provider in Korea, delivering data-driven digital innovations to customers in 41 countries worldwide. “Back in 2010, we adopted the first Exadata platform to improve a display manufacturing system,” said Dr. WP Hong, CEO, Samsung SDS. “Now 10 years later, we have implemented nearly 300 Exadata systems for our customers in manufacturing, financial services, construction and engineering, and public and private sector services. Aligning with our digital innovation strategy and our journey to enterprise cloud, we have now adopted the first Exadata Cloud@Customer in one of our datacenters and look forward to deploying Autonomous Database.”

NTT DoCoMo is the number one mobile carrier in Japan with the largest customer base. “Oracle Exadata is implemented as our core engine to process the call, communication, and billing information of 80M users in real-time,” said Taku Hasegawa, Senior Vice President, General Manager of Information Systems Department, NTT DoCoMo. “Thanks to Exadata, we could cut operation and maintenance costs in half, while realizing 10x performance. As the core infrastructure for DoCoMo’s digital transformation and further business growth, I look forward to the continuous evolution of Oracle Exadata and the novel technology innovation driven by Autonomous Database on Exadata Cloud@Customer.”

Crédit Agricole CIB is the Corporate and Investment Banking arm of the Crédit Agricole Group, one of the world’s largest banks. “Moving to Exadata Cloud@Customer has significantly improved our accounting information systems performance, which has enabled us to carry out our accounting closing process with much greater agility and to reduce our operational costs,” said Pierre-Yves Bollard, Global Head of Finance IT, Crédit Agricole Corporate & Investment Bank. “The high value provided by the Exadata Cloud@Customer infrastructure has been recognized by all IT and business teams.”

Entel is the largest telecom provider in Chile and the third largest in Peru. “We have used Exadata systems for the past five years to support many applications across dozens of lines of business, including crucial billing and network management systems,” said Helder Branco, Head of IT Operations, Entel. “By using Exadata, we improved mission-critical Oracle Database performance by up to 3x, and reduced our security exposure. We are taking our digital transformations to the next level by moving over 30 databases to Oracle Autonomous Database on Exadata Cloud@Customer and improving their security with its self-securing capabilities.”

RKK Computer Service is an IT consultancy based in Japan, focusing on local governments and financial institutions. “RKK Computer Service selected Oracle Exadata Cloud@Customer to host our shared platform that runs core business systems for 100 municipalities,” said Chihiro Sato, Deputy General Manager, Public Sector Planning and Development Division, RKK Computer Service. “Compared to our previous on-premises solution, we have 24 percent cost savings and more than 70 percent IO performance improvement, which enables us to run concurrent batch processes for multiple municipalities. High availability is achieved with RAC and Data Guard. We believe that Oracle’s second-generation Exadata Cloud@Customer is a promising cloud platform for municipalities. RKKCS will continuously enhance our cloud infrastructure for municipalities by exploring Autonomous Database on Exadata Cloud@Customer to improve operational efficiency.”

The State of Queretaro is located in central Mexico. “Based on a directive from the state governor and state secretary to address the COVID-19 crisis, we were asked to develop an application that would allow the citizens and patients of the State of Querétaro, Mexico, to carry out a self-diagnosis to help avoid the spread of infections,” said Pedro Gonzalez, Director CIAS, Queretaro State Government, Mexico. “With Oracle Database on Exadata Cloud@Customer, we were able to react quickly and develop a mobile application in less than three weeks — plus we were able to adhere to state regulations to maintain the sensitive data of citizens and patients in our facilities. We look forward to investing in Oracle Autonomous Database this year, which will free up our staff and resources to focus on developing new business applications without spending any time on patching, tuning, and maintaining the database.” 

Siav is an enterprise content management software and IT services company based in Italy. “We chose Oracle Exadata Cloud@Customer to help us manage the constant growth of our business in cloud services and solutions,” said Nicola Voltan, CEO, Siav S.p.A. “Exadata Cloud@Customer provides the performance, scalability and security we need to offer the highest quality service to our customers. It’s managed by Oracle in our datacenter, enabling us to comply with the Italian legislation related to the geographical location of the service provided.”

New Exadata Cloud@Customer Enhancements

In addition to the Autonomous Database, Oracle is announcing the following Exadata Cloud@Customer enhancements:

  • Oracle Exadata Database Machine X8M Technology, which combines Intel® Optane™ DC Persistent Memory and 100 gigabit remote direct memory access (RDMA) over Converged Ethernet (RoCE) to remove storage bottlenecks and dramatically increase performance for the most demanding workloads such as Online Transaction Processing (OLTP), IoT, fraud detection, and high frequency trading. Direct database access to shared persistent memory increases peak performance to 12 million SQL read IOPS, 2.5X greater than the prior generation offering powered by Exadata Database Machine X8. Additionally, Exadata X8M dramatically reduces the latency of critical database IOs by enabling remote IO latencies below 19 microseconds—more than 10X faster than the prior generation offering. These ultra-low latencies are achieved even for workloads requiring millions of IOs per second.

  • Multiple VM Clusters per Exadata Rack, which enables organizations to share an Exadata system for production, DR and dev/test and provide isolation across departments and use cases.

  • PCI-DSS Certification: Exadata Cloud@Customer now supports and meets Payment Card Industry Data Security Standard requirements and can be implemented as part of a highly secure financial processing environment.

Contact Info
Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Future Product Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

NRI Selects Oracle Dedicated Region Cloud@Customer

Oracle Press Releases - Wed, 2020-07-08 02:00
Press Release
NRI Selects Oracle Dedicated Region Cloud@Customer First customer to adopt new Oracle Cloud offering

Will be used for mission-critical, high-volume business platform to accelerate digital transformation

Redwood Shores, Calif.—Jul 8, 2020

Oracle today announced that Nomura Research Institute (NRI) has selected Oracle Dedicated Region Cloud@Customer to run its critical, high-volume business platform used in various industries, including financial, retail, and distribution. NRI will start by moving its SaaS applications for the financial industry, which are used by about 70 percent of the capital markets firms in Japan, from on-premises to an Oracle Dedicated Region Cloud@Customer in its own datacenter. This will provide NRI with access to all of Oracle’s second-generation public cloud services while ensuring control and governance because the cloud services are in NRI’s own datacenter. During NRI and Oracle’s 30-year partnership, NRI has been an early adopter of Oracle technology and is now the first global customer to move to Oracle Dedicated Region Cloud@Customer. 

“With Oracle Dedicated Region Cloud@Customer, we can use Oracle Exadata as a cloud service and achieve greater agility, such as seamless expansion, while maintaining high availability at the same level as on-premises,” said Tomoshiro Takemoto, Senior Corporate Managing Director, NRI. “Built in our own datacenter, it enables us to not only provide SOC2 reports based on Japanese security standards in financial industries, but it also allows us to access broader cloud services and tools provided by Oracle and further increase our business value for our customers. With Oracle’s Dedicated Region, we can significantly reduce our on-premises costs and invest more in our digital transformation.”

“Oracle is excited to extend our strategic relationship with NRI to modernize and support the financial services market in Japan and globally,” said Clay Magouyrk, executive vice president of engineering, Oracle Cloud Infrastructure. “Only Oracle Cloud enables customers to utilize our full-featured cloud services anywhere, whether it’s our global public cloud region portfolio or the new dedicated regions for customers with data sovereignty, latency, regulatory, security and compliance requirements. With the differentiated model of our Dedicated Region, NRI will be able to modernize and truly deliver a first-class, cloud-native experience to their customers while meeting all of their requirements.”

Oracle Dedicated Region Cloud@Customer enables customers to build Oracle’s public cloud regions in customers’ own datacenters. Customers can access all of Oracle’s second-generation cloud services, including Bare Metal Compute, VMs and GPUs, Autonomous Database, and Exadata Cloud Service; container-based services like Container Engine for Kubernetes; and analytics services like Data Flow, while also having control and governance of their systems and services. With Oracle Cloud@Customer, enterprises can seamlessly lift and shift legacy workloads and consolidate on a single platform, enabling them to dramatically improve TCO without requiring re-architecture. Customers also have access to a full set of development services, like API Gateway and Events Services that will help them incrementally modernize their stack in-place, reducing the risk and expense of adopting new technologies.

Contact Info
Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

PL/SQL to GetDDL for all user accounts except system accounts using cursor and loop

Tom Kyte - Tue, 2020-07-07 16:06
I am not sure what I am doing wrong with this statement:
<code>
Declare
cursor cur_dba_users IS select username from dba_users where username not in
('ANONYMOUS','APEX_040200','APEX_PUBLIC_USER','APPQOSSYS','AUDSYS','BI','CTXSYS','DBSNMP','DIP','DVF','DVSYS','EXFSYS','FLOWS_FILES','GSMADMIN_INTERNAL','GSMCATUSER','GSMUSER','HR','IX','LBACSYS','MDDATA','MDSYS','OE','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS','OUTLN','PM','SCOTT','SH','SI_INFORMTN_SCHEMA','SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR','SYS','SYSBACKUP','SYSDG','SYSKM','SYSTEM','WMSYS','XDB','SYSMAN','RMAN','RMAN_BACKUP','OWBSYS','OWBSYS_AUDIT','APEX_030200','MGMT_VIEW','OJVMSYS');
Begin
For cur_username in cur_dba_users Loop
SELECT dbms_metadata.get_ddl('USER',cur_username) FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT',cur_username) from dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT',cur_username) from dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT',cur_username) from dual;
END LOOP;
CLOSE cur_dba_users;
End;

Error report -
ORA-06550: line 6, column 39:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 6, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 7, column 55:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 7, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 8, column 53:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 8, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 9, column 55:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 9, column 1:
PL/SQL: SQL Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
</code>
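For reference, a minimal corrected sketch (not authoritative; the exclusion list is abbreviated and output handling is simplified): the loop variable of a cursor FOR loop is a record, so the column must be referenced as cur_username.username; a bare SELECT is not allowed inside PL/SQL (assign the function result instead); and a cursor FOR loop must not be explicitly CLOSEd. DBMS_METADATA.GET_GRANTED_DDL raises ORA-31608 when a user has no grants of the requested type, so those calls are guarded below.
<code>
set serveroutput on
Declare
  v_ddl clob;
  e_not_found exception;
  pragma exception_init(e_not_found, -31608);  -- raised when no grants of the type exist
  procedure print_grants(p_type varchar2, p_user varchar2) is
  begin
    dbms_output.put_line(dbms_metadata.get_granted_ddl(p_type, p_user));
  exception
    when e_not_found then null;  -- skip users without grants of this type
  end;
Begin
  For u in (select username
            from dba_users
            where username not in ('SYS','SYSTEM','DBSNMP' /* ... rest of the exclusion list ... */))
  Loop
    v_ddl := dbms_metadata.get_ddl('USER', u.username);  -- record field, not the record itself
    dbms_output.put_line(v_ddl);  -- very long DDL may need chunking or spooling instead
    print_grants('SYSTEM_GRANT', u.username);
    print_grants('ROLE_GRANT',   u.username);
    print_grants('OBJECT_GRANT', u.username);
  End Loop;  -- the cursor FOR loop opens and closes the cursor implicitly
End;
/
</code>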
Categories: DBA Blogs
