Feed aggregator

How prefix_index and substring_index improve wildcard searches in Oracle Text

Tom Kyte - Tue, 2020-08-04 21:46
Hi Team, I'm doing some Oracle Text work related to prefix_index and substring_index (we use Oracle 11g). I'm interested in why: (1) prefix indexing improves performance for right-truncated wildcard searches such as TO%, and (2) a substring index improves left-truncated and double-truncated wildcard queries such as %ing or %benz%. Could you help to check whether my following understanding is right or wrong? Thanks! (Maybe the following understanding and questions are not very clear. I just want to know the search logic and process of queries like %abc, abc%, %abc% using prefix_index and substring_index in Oracle Text.)

(1) I want to confirm the principles of token extraction in Oracle Text. After tokens are extracted from the text (and stored in table $I), the keywords a user enters are extracted into tokens, too. If at least one keyword token equals a text token, the user gets the matching text. For example, text1 is extracted into token1 and token2 (stored in table $I). A user's keywords are then extracted into token2 and token3; since token2 is stored in $I, the search result contains text1. However, if the keywords are extracted into token3 and token4, where token3 is a prefix of token2, the user still can't get a result containing text1, because token2 != token3. Is my understanding right or wrong? Thanks.

(2) When we use prefix_index (and set prefix_length_min=2, prefix_length_max=3), some prefix tokens (token_type = 6) are stored in $I. When we use right-truncated wildcard searches, for example: (a) Search to%. The length of "to" is 2, so the search goes directly against the tokens with token_type = 6 in the $I table. If token "to" is in $I with token_type 6, the user gets results matching to%; if token "to" is not in $I, the result is empty. But we find that sometimes a prefix token (token_type = 6) is stored in the $I table (e.g. the token "??"), yet when we search ??%, the result is empty. Do you know the reason? Thanks. (b) Search toto%. The length of "toto" is 4 (prefix_length_max is 3), so the search scans every ordinary token (token_type = 0) in the $I table. Is my understanding right or wrong? Thanks.

(3) When we use substring_index, some substring tokens are stored in $P. When we use left-truncated wildcard searches, for example: (a) Search %abc: the search goes directly against the $P table using column PAT_PART2. If PAT_PART2 contains token abc, the user gets results matching %abc; if not, the result is empty. (b) Search %a (the length of "a" is 1): since all tokens in column PAT_PART2 of the $P table are longer than 1 character, the search scans every ordinary token (token_type = 0) in the $I table. Is my understanding right or wrong? Thanks. I also want to know why substring_index improves double-truncated wildcard queries like %benz% (I understand substring_index can improve left-truncated wildcard queries, but I want to know the process and principle by which Oracle Text evaluates %benz%). Could you help to explain it? Thanks!
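
For reference, a minimal sketch of the wordlist setup the question describes (table, index, and preference names are made up for illustration; note the actual BASIC_WORDLIST attribute names are PREFIX_MIN_LENGTH and PREFIX_MAX_LENGTH):

begin
  ctx_ddl.create_preference('my_wordlist', 'BASIC_WORDLIST');
  ctx_ddl.set_attribute('my_wordlist', 'PREFIX_INDEX',      'TRUE');
  ctx_ddl.set_attribute('my_wordlist', 'PREFIX_MIN_LENGTH', '2');
  ctx_ddl.set_attribute('my_wordlist', 'PREFIX_MAX_LENGTH', '3');
  ctx_ddl.set_attribute('my_wordlist', 'SUBSTRING_INDEX',   'TRUE');
end;
/

create index doc_idx on docs(text) indextype is ctxsys.context
  parameters ('wordlist my_wordlist');

-- inspect the tokens the question refers to; prefix tokens have token_type = 6
select token_text, token_type from dr$doc_idx$i order by token_type, token_text;

-- a right-truncated wildcard query that can use the prefix tokens
select count(*) from docs where contains(text, 'to%') > 0;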
Categories: DBA Blogs

Oracle Cloud Shell

Hemant K Chitale - Tue, 2020-08-04 10:28
The Oracle Cloud (accessible even with a free account) now includes a Cloud Shell feature with 5GB of storage.

Here is a quick demo of this feature.


Categories: DBA Blogs

Object Detection And Tracking In Azure Machine Learning

Online Apps DBA - Tue, 2020-08-04 07:05

Machine learning is a subset of artificial intelligence. Object Detection and Tracking in Machine Learning is widely used in various IT fields. Do you know about Object Detection and Tracking in Machine Learning? Check our blog post at http://k21academy.com/dp10012 and know it all. The blog post covers all the essential information like: • Overview of Object […]

The post Object Detection And Tracking In Azure Machine Learning appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

CI/CD Pipeline | Continuous Integration | Continuous Deployment

Online Apps DBA - Tue, 2020-08-04 04:28

A CI/CD Pipeline, or Continuous Integration and Continuous Deployment, is the backbone of the modern DevOps environment, bridging the gap between development and operations teams by automating the building, testing, and deployment of applications. Check out this blog at k21academy.com/devops18 to know more about CI/CD Pipeline. This blog post covers: Continuous Integration Continuous Testing Continuous […]

The post CI/CD Pipeline | Continuous Integration | Continuous Deployment appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

SQLLDR Process is showing password

Tom Kyte - Tue, 2020-08-04 03:26
Hi Tom, we have a risk of exposing our APPS schema password for host-based concurrent programs in Oracle Apps. The ps -ef command exposes the password unless it is encrypted. We have gone back and added the ENCRYPT option to the Options field of the concurrent program definitions. We have also used $FCP_LOGIN instead of $1 when invoking SQL*Plus or SQL*Loader. As part of these changes, we had a test case where we tried to observe the sub-process itself, either SQL*Plus or SQL*Loader, in the ps -ef output. We observed that SQL*Plus does not display the password in the ps -ef output; however, SQL*Loader does. Can you please let us know a way to fix this, or any workarounds? Thanks in advance.
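
A common workaround (a sketch only; file names here are hypothetical) is to keep the credentials off the command line entirely by passing them to SQL*Loader through a parameter file with restricted permissions, so that ps -ef only shows the parfile name:

$ cat creds.par
userid=apps/xxxxx

$ chmod 600 creds.par
$ sqlldr parfile=creds.par control=load.ctl log=load.log

$ ps -ef | grep sqlldr    # now shows parfile=creds.par, not the password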
Categories: DBA Blogs

DevOps Best Practices

Online Apps DBA - Tue, 2020-08-04 02:24

A DevOps Engineer should always try to implement or enforce the best practices in their organization. Check out the blog post at k21academy.com/devops17 to know about various DevOps Best Practices that should be implemented by every organization in the industry. This blog post covers: Chaos monkey Blue/Green deployment Andon cords The Cloud Embedded teams Blameless […]

The post DevOps Best Practices appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DevOps Engineer | DevOps Roles and Responsibilities

Online Apps DBA - Tue, 2020-08-04 02:15

Want to become a DevOps Engineer, but still wondering what will be your roles, and responsibilities after completing the DevOps certification? Check out this blog at k21academy.com/devops16 to know all about DevOps Engineer, Skills of a DevOps Professionals, and much more. This blog post covers: Who is a DevOps Engineer DevOps Engineer Roles and Responsibilities […]

The post DevOps Engineer | DevOps Roles and Responsibilities appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Managing Blockchain tables in Oracle Database 20c

Oracle's multi-model database 20c is a yearly short-term support release; users and developers will benefit from the converged database approach of managing all data in one single database...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Using CNCF Sandbox Project Strimzi for Kafka Clusters on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Pas Apicella - Sun, 2020-08-02 22:45
Strimzi, a CNCF sandbox project, provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations. In this post we will take a look at how to get this running on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and consume the Kafka cluster from a Spring Boot application.

If you have a K8s cluster, that's all you need to follow along in this example. I am using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), but you can use any K8s cluster you have, such as GKE, AKS, EKS, etc.

Steps

1. Installing Strimzi is pretty straightforward, so we can do that as follows. I am using the namespace "kafka", which needs to be created prior to running this command.
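
For reference, the namespace can be created beforehand with:

kubectl create namespace kafka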

kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

2. Verify that the operator was installed correctly and we have a running POD as shown below
  
$ kubectl get pods -n kafka
NAME                                        READY   STATUS    RESTARTS   AGE
strimzi-cluster-operator-6c9d899778-4mdtg   1/1     Running   0          6d22h

3. Next let's ensure we have a default storage class for the cluster as shown below.

$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   47d

4. Now at this point we are ready to create a Kafka cluster. For this example we will create a 3 node cluster defined in YML as follows.

kafka-persistent-MULTI_NODE.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: apples-kafka-cluster
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      external:
        type: loadbalancer
        tls: false
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

Few things to note:
  • We have enabled access to the cluster using the type LoadBalancer, which means your K8s cluster needs to support such a type
  • We need to create dynamic persistent volume claims in the cluster, so ensure #3 above is in place
  • We have disabled TLS given this is a demo
5. Create the Kafka cluster as shown below ensuring we target the namespace "kafka"

$ kubectl apply -f kafka-persistent-MULTI_NODE.yaml -n kafka

6. Now we can view the status/creation of our cluster in one of two ways, as shown below. You will need to wait a few minutes for everything to start up.

Option 1:
  
$ kubectl get Kafka -n kafka
NAME                   DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
apples-kafka-cluster   3                        3

Option 2:
  
$ kubectl get all -n kafka
NAME                                                        READY   STATUS    RESTARTS   AGE
pod/apples-kafka-cluster-entity-operator-58685b8fbd-r4wxc   3/3     Running   0          6d21h
pod/apples-kafka-cluster-kafka-0                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-1                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-2                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-0                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-1                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-2                        1/1     Running   0          6d21h
pod/strimzi-cluster-operator-6c9d899778-4mdtg               1/1     Running   0          6d23h

NAME                                                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/apples-kafka-cluster-kafka-0                    LoadBalancer   10.100.200.90    10.195.93.200   9094:30362/TCP               6d21h
service/apples-kafka-cluster-kafka-1                    LoadBalancer   10.100.200.179   10.195.93.197   9094:32022/TCP               6d21h
service/apples-kafka-cluster-kafka-2                    LoadBalancer   10.100.200.155   10.195.93.201   9094:32277/TCP               6d21h
service/apples-kafka-cluster-kafka-bootstrap            ClusterIP      10.100.200.77    <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-brokers              ClusterIP      None             <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-external-bootstrap   LoadBalancer   10.100.200.58    10.195.93.196   9094:30735/TCP               6d21h
service/apples-kafka-cluster-zookeeper-client           ClusterIP      10.100.200.22    <none>          2181/TCP                     6d21h
service/apples-kafka-cluster-zookeeper-nodes            ClusterIP      None             <none>          2181/TCP,2888/TCP,3888/TCP   6d21h

NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apples-kafka-cluster-entity-operator   1/1     1            1           6d21h
deployment.apps/strimzi-cluster-operator               1/1     1            1           6d23h

NAME                                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/apples-kafka-cluster-entity-operator-58685b8fbd   1         1         1       6d21h
replicaset.apps/strimzi-cluster-operator-6c9d899778               1         1         1       6d23h

NAME                                              READY   AGE
statefulset.apps/apples-kafka-cluster-kafka       3/3     6d21h
statefulset.apps/apples-kafka-cluster-zookeeper   3/3     6d21h

7. Our entry point into the cluster is a service of type LoadBalancer, which we asked for in our Kafka cluster YML config. To find its IP address we can run the following command, using the cluster name from above.

$ kubectl get service -n kafka apples-kafka-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.195.93.196

Note: Make a note of this IP address, as we will need it shortly.

8. Let's create a Kafka topic defined in YML as follows. In this YML we explicitly set the namespace "kafka".

create-kafka-topic.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: apples-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: apples-kafka-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824


9. Create a Kafka topic as shown below.

$ kubectl apply -f create-kafka-topic.yaml

10. We can view the Kafka topics as shown below.
  
$ kubectl get KafkaTopic -n kafka
NAME           PARTITIONS   REPLICATION FACTOR
apples-topic   1            1

11. Now at this point we are ready to send some messages to our topic "apples-topic", as well as consume them. To do that we are going to use two Spring Boot applications, which exist on GitHub.


Download or clone those onto your file system. 

12. With both downloaded, you will need to set spring.kafka.bootstrap-servers to the IP address we retrieved in #7 above. That needs to be done in both of the repos downloaded/cloned from GitHub above. The file we need to edit in both repos is as follows.

File: src/main/resources/application.yml 

Example:

spring:
  kafka:
    bootstrap-servers: IP-ADDRESS:9094

Note: Make sure you do this for both downloaded repo application.yml files

13. Now let's run the producer and consumer Spring Boot applications using the following command, in separate terminal windows. One will use port 8080 while the other uses port 8081.

$ ./mvnw spring-boot:run

Producer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-producer$ ./mvnw spring-boot:run

...
2020-08-03 11:41:46.742  INFO 34025 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-03 11:41:46.754  INFO 34025 --- [           main] a.a.t.k.DemoKafkaProducerApplication     : Started DemoKafkaProducerApplication in 1.775 seconds (JVM running for 2.102)

Consumer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-consumer$ ./mvnw spring-boot:run

...
2020-08-03 11:43:53.423  INFO 34056 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path ''
2020-08-03 11:43:53.440  INFO 34056 --- [           main] a.a.t.k.DemoKafkaConsumerApplication     : Started DemoKafkaConsumerApplication in 1.666 seconds (JVM running for 1.936)

14. Start by opening up the Producer UI by navigating to http://localhost:8080/



15. Now, without adding any messages yet, also open up the Consumer UI by navigating to http://localhost:8081/



Note: This application will automatically refresh the page every 2 seconds to show which messages have been sent to the Kafka Topic

16. Return to the Producer UI http://localhost:8080/ and add two messages using whatever text you like as shown below.


17. Return to the Consumer UI http://localhost:8081/ to verify the two messages sent to the Kafka topic have been consumed



18. Both of these Spring Boot applications use "Spring for Apache Kafka".

Both applications use an application.yml file to bootstrap access to the Kafka cluster.

The Producer Spring Boot application uses a KafkaTemplate to send messages to our Kafka topic as shown below.
  
@Controller
@Slf4j
public class TopicMessageController {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public TopicMessageController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    final private String topicName = "apples-topic";

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessageAddSuccess", "N");
        return "home";
    }

    @PostMapping("/addentry")
    public String addNewTopicMessage (@RequestParam(value="message") String message, Model model){

        kafkaTemplate.send(topicName, message);

        log.info("Sent single message: " + message);
        model.addAttribute("message", message);
        model.addAttribute("topicMessageAddSuccess", "Y");

        return "home";
    }
}

The Consumer Spring Boot application is configured with a KafkaListener as shown below.
  
@Controller
@Slf4j
public class TopicConsumerController {

    private static ArrayList<String> topicMessages = new ArrayList<String>();

    @GetMapping("/")
    public String indexPage (Model model){
        model.addAttribute("topicMessages", topicMessages);
        model.addAttribute("topicMessagesCount", topicMessages.size());

        return "home";
    }

    @KafkaListener(topics = "apples-topic")
    public void listen(String message) {
        log.info("Received Message: " + message);
        topicMessages.add(message);
    }
}

In this post we did not set up any client authentication against the cluster for the producer or consumer, given this was just a demo.





More Information

Spring for Apache Kafka

CNCF Sandbox projects

Strimzi
Categories: Fusion Middleware

Last Time You Tested Data Guard Was?

Michael Dinh - Sun, 2020-08-02 19:05

Data Guard is like a spare tire. Don’t wait to find there is no air when you need it.

[oracle@ol7-112-dg1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sun Aug 2 16:57:28 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@hawk
Password:
Connected to "hawk"
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 52 seconds ago)

DGMGRL> switchover to hawk_stby
Performing switchover NOW, please wait...
Operation requires a connection to database "hawk_stby"
Connecting ...
Connected to "hawk_stby"
Connected as SYSDBA.
New primary database "hawk_stby" is opening...
Operation requires start up of instance "hawk" on database "hawk"
Starting instance "hawk"...
Connected to an idle instance.
ORACLE instance started.
Connected to "hawk"
Database mounted.
Connected to "hawk"
Switchover succeeded, new primary is "hawk_stby"
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk_stby - Primary database
    hawk      - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 57 seconds ago)

DGMGRL> exit
[oracle@ol7-112-dg1 ~]$

==================================================

[oracle@ol7-112-dg1 upgrade19c]$ sqlplus / as sysdba @ status

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Aug 2 16:59:45 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


Session altered.

SQL> select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2  ;

NAME      DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE    PROTECTION_MODE      FLASHBACK_ON
--------- ------------------------------ -------------------- ---------------- -------------------- ------------------
HAWK      hawk                           MOUNTED              PHYSICAL STANDBY MAXIMUM PERFORMANCE  YES

SQL> ;
  1  select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2*
SQL> select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6  ;

PID                           INST   THREAD# CLIENT_P PROCESS   STATUS       SEQUENCE#    BLOCK# DELAY_MINS
------------------------ --------- --------- -------- --------- ------------ --------- --------- ----------
3118                             1         1 LGWR     RFS       RECEIVING          169      3237          0
3151                             1         1 N/A      MRP0      APPLYING_LOG       169      3237          0

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@ol7-112-dg1 upgrade19c]$

==================================================

[oracle@ol7-112-dg1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sun Aug 2 17:00:03 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@hawk_stby
Password:
Connected to "hawk_stby"
Connected as SYSDBA.
DGMGRL> switchover to hawk
Performing switchover NOW, please wait...
Operation requires a connection to database "hawk"
Connecting ...
Connected to "hawk"
Connected as SYSDBA.
New primary database "hawk" is opening...
Operation requires start up of instance "hawk" on database "hawk_stby"
Starting instance "hawk"...
Connected to an idle instance.
ORACLE instance started.
Connected to "hawk_stby"
Database mounted.
Connected to "hawk_stby"
Switchover succeeded, new primary is "hawk"
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 59 seconds ago)

DGMGRL> exit
[oracle@ol7-112-dg1 ~]$

==================================================

[oracle@ol7-112-dg1 upgrade19c]$ sqlplus / as sysdba @ status

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Aug 2 17:02:06 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


Session altered.

SQL> select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2  ;

NAME      DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE    PROTECTION_MODE      FLASHBACK_ON
--------- ------------------------------ -------------------- ---------------- -------------------- ------------------
HAWK      hawk                           READ WRITE           PRIMARY          MAXIMUM PERFORMANCE  YES

SQL> ;
  1  select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2*
SQL> select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6  ;

PID                           INST   THREAD# CLIENT_P PROCESS   STATUS       SEQUENCE#    BLOCK# DELAY_MINS
------------------------ --------- --------- -------- --------- ------------ --------- --------- ----------
3328                             1         1 LNS      LNS       WRITING            172      3252          0

SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@ol7-112-dg1 upgrade19c]$

Helm, Tiller, and Helm Charts with Kubernetes

Online Apps DBA - Sun, 2020-08-02 04:34

Containers are the future of deploying applications and, for now, Kubernetes is the one in the spotlight. Kubernetes is an opensource system to deploy, scale, and manage containerized applications anywhere. Helm is a package manager for Kubernetes (like yum and APT). Helm manages Kubernetes resource packages through charts, called Helm Charts. If you are interested […]

The post Helm, Tiller, and Helm Charts with Kubernetes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[AZ-900] Privacy, Compliance and Data Protection

Online Apps DBA - Sun, 2020-08-02 04:23

Microsoft provides a comprehensive set of documents detailing its commitment to a transparent and regulatory compliant way of working. Check out the blog post at k21academy.com/az90027 for a better understanding of Microsoft’s Security, Privacy, Compliance related SLA’s and practices. In this blog post, we will cover: o Compliance Terms and Requirements o Microsoft Privacy Statement […]

The post [AZ-900] Privacy, Compliance and Data Protection appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Terraform Installation Overview

Online Apps DBA - Sun, 2020-08-02 03:10

Terraform enables the users to define and provision a data center infrastructure of different cloud providers using a high-level configuration language known as Hashicorp Configuration Language (HCL), or optionally JSON but you cannot directly provision anything without installing it first on your machine. If you want to know how to install Terraform on your machine? […]

The post Terraform Installation Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Rolling Upgrades Using Physical Standby and physru_v3.sh

Michael Dinh - Sat, 2020-08-01 11:12

I am not going to reiterate what is already out there; however, what is out there covers 12.1.0.2.0, not 19.3.0.0.0.

Here is the syntax; what seems to be ambiguous is the target version.

$./physru.sh <sysdba user> <primary TNS alias> <physical standby TNS alias> <primary db unique name> <physical standby db unique name> <target version> 

NOTE: These are NOT all the steps for the upgrade, but only the steps applicable when running physru.sh.

First physru execution completed successfully.

### First physru execution:
Verifies that Data Guard Broker is disabled and FRA is configured.
Creates a guaranteed restore point
Converts the existing Physical Standby to a Logical Standby

oracle@ol7-112-dg2:hawk:/sf_working/upgrade19c
$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.3.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Jul 30 20:06:30 2020 [0-1] Identifying rdbms software version
Jul 30 20:06:31 2020 [0-1] database hawk is at version 11.2.0.4.0
Jul 30 20:06:31 2020 [0-1] database hawk_stby is at version 11.2.0.4.0
Jul 30 20:06:31 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Jul 30 20:06:31 2020 [0-1] verifying backup location at hawk and hawk_stby
Jul 30 20:06:31 2020 [0-1] verifying available flashback restore points
Jul 30 20:06:31 2020 [0-1] verifying DG Broker is disabled
Jul 30 20:06:31 2020 [0-1] looking up prior execution history
Jul 30 20:06:31 2020 [0-1] purging script execution state from database hawk
Jul 30 20:06:31 2020 [0-1] purging script execution state from database hawk_stby
Jul 30 20:06:31 2020 [0-1] starting new execution of script

### Stage 1: Backup user environment in case rolling upgrade is aborted
Jul 30 20:06:31 2020 [1-1] database hawk location for backup controlfile is /u01/app/oracle/fast_recovery_area
Jul 30 20:06:32 2020 [1-1] database hawk_stby location for backup controlfile is /u01/app/oracle/fast_recovery_area
Jul 30 20:06:32 2020 [1-1] creating restore point PRU_0000_0003 on database hawk_stby
Jul 30 20:06:32 2020 [1-1] backing up current control file on hawk_stby
Jul 30 20:06:32 2020 [1-1] created backup control file /u01/app/oracle/fast_recovery_area/PRU_0003_hawk_stby_f.f
Jul 30 20:06:32 2020 [1-1] creating restore point PRU_0000_0003 on database hawk
Jul 30 20:06:32 2020 [1-1] backing up current control file on hawk
Jul 30 20:06:32 2020 [1-1] created backup control file /u01/app/oracle/fast_recovery_area/PRU_0003_hawk_f.f

NOTE: Restore point PRU_0000_0001 and backup control file PRU_0003_hawk_stby_f.f
      can be used to restore hawk_stby back to its original state as a
      physical standby, in case the rolling upgrade operation needs to be aborted
      prior to the first switchover done in Stage 4.

### Stage 2: Create transient logical standby from existing physical standby
Jul 30 20:06:32 2020 [2-1] verifying RAC is disabled at hawk_stby
Jul 30 20:06:32 2020 [2-1] verifying database roles
Jul 30 20:06:32 2020 [2-1] verifying physical standby is mounted
Jul 30 20:06:32 2020 [2-1] verifying database protection mode
Jul 30 20:06:32 2020 [2-1] verifying transient logical standby datatype support
Jul 30 20:06:33 2020 [2-2] starting media recovery on hawk_stby
Jul 30 20:06:39 2020 [2-2] confirming media recovery is running
Jul 30 20:06:39 2020 [2-2] waiting for apply lag to fall under 30 seconds
Jul 30 20:06:52 2020 [2-2] apply lag measured at 13 seconds
Jul 30 20:06:52 2020 [2-2] stopping media recovery on hawk_stby
Jul 30 20:06:53 2020 [2-2] executing dbms_logstdby.build on database hawk
Jul 30 20:06:59 2020 [2-2] converting physical standby into transient logical standby
Jul 30 20:07:03 2020 [2-3] opening database hawk_stby
Jul 30 20:07:05 2020 [2-4] configuring transient logical standby parameters for rolling upgrade
Jul 30 20:07:05 2020 [2-4] starting logical standby on database hawk_stby
Jul 30 20:07:10 2020 [2-4] enabling log archive destination to database hawk_stby
Jul 30 20:07:11 2020 [2-4] waiting until logminer dictionary has fully loaded
Jul 30 20:07:51 2020 [2-4] dictionary load 03% complete
Jul 30 20:08:01 2020 [2-4] dictionary load 62% complete
Jul 30 20:08:11 2020 [2-4] dictionary load is complete
Jul 30 20:08:11 2020 [2-4] waiting for apply lag to fall under 30 seconds
Jul 30 20:08:15 2020 [2-4] apply lag measured at 3 seconds

NOTE: Database hawk_stby is now ready to be upgraded.  This script has left the
      database open in case you want to perform any further tasks before
      upgrading the database.  Once the upgrade is complete, the database must
      opened in READ WRITE mode before this script can be called to resume the
      rolling upgrade.

NOTE: If hawk_stby was previously a RAC database that was disabled, it may be
      reverted back to a RAC database upon completion of the rdbms upgrade.
      This can be accomplished by performing the following steps:

          1) On instance hawk, set the cluster_database parameter to TRUE.
          eg: SQL> alter system set cluster_database=true scope=spfile;

          2) Shutdown instance hawk.
          eg: SQL> shutdown abort;

          3) Startup and open all instances for database hawk_stby.
          eg: srvctl start database -d hawk_stby

oracle@ol7-112-dg2:hawk:/sf_working/upgrade19c

Second physru execution FAILED.

### Second physru execution to switchover (APPLICATION BROWNOUT):
Executes a switchover making the upgraded standby database the primary database.
Executes a flashback of the original primary database to the guaranteed restore point from step 1 and shuts it down.

[oracle@ol7-112-dg2 upgrade19c]$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.3.0.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Aug 01 01:55:56 2020 [0-1] Identifying rdbms software version
Aug 01 01:55:56 2020 [0-1] database hawk is at version 11.2.0.4.0
Aug 01 01:55:57 2020 [0-1] database hawk_stby is at version 19.0.0.0.0
Aug 01 01:56:00 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Aug 01 01:56:02 2020 [0-1] verifying backup location at hawk and hawk_stby
Aug 01 01:56:03 2020 [0-1] verifying available flashback restore points
Aug 01 01:56:04 2020 [0-1] verifying DG Broker is disabled
Aug 01 01:56:05 2020 [0-1] looking up prior execution history
Aug 01 01:56:08 2020 [0-1] last completed stage [2-4] using script version 0003
Aug 01 01:56:08 2020 [0-1] resuming execution of script

### Stage 3: Validate upgraded transient logical standby
Aug 01 01:56:09 2020 [3-1] database hawk_stby is no longer in OPEN MIGRATE mode
Aug 01 01:56:09 2020 [3-1] ERROR: hawk_stby is not at version 19.3.0.0.0
[oracle@ol7-112-dg2 upgrade19c]$

Second physru execution SUCCEEDED when 19.0.0.0.0 (the base release) is used as the target version instead of 19.3.0.0.0 (the actual release).

[oracle@ol7-112-dg2 upgrade19c]$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.0.0.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Aug 01 02:48:40 2020 [0-1] Identifying rdbms software version
Aug 01 02:48:40 2020 [0-1] database hawk is at version 11.2.0.4.0
Aug 01 02:48:41 2020 [0-1] database hawk_stby is at version 19.0.0.0.0
Aug 01 02:48:45 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Aug 01 02:48:47 2020 [0-1] verifying backup location at hawk and hawk_stby
Aug 01 02:48:48 2020 [0-1] verifying available flashback restore points
Aug 01 02:48:49 2020 [0-1] verifying DG Broker is disabled
Aug 01 02:48:50 2020 [0-1] looking up prior execution history
Aug 01 02:48:53 2020 [0-1] last completed stage [2-4] using script version 0003
Aug 01 02:48:53 2020 [0-1] resuming execution of script

### Stage 3: Validate upgraded transient logical standby
Aug 01 02:48:54 2020 [3-1] database hawk_stby is no longer in OPEN MIGRATE mode
Aug 01 02:48:54 2020 [3-1] database hawk_stby is at version 19.0.0.0.0

### Stage 4: Switch the transient logical standby to be the new primary
Aug 01 02:48:59 2020 [4-1] waiting for hawk_stby to catch up (this could take a while)
Aug 01 02:49:00 2020 [4-1] waiting for apply lag to fall under 30 seconds
Aug 01 02:49:12 2020 [4-1] apply lag measured at 9 seconds
Aug 01 02:49:16 2020 [4-2] using fast switchover optimizations

NOTE: A switchover is about to be performed which will incur a brief outage
      of the primary database.  If you answer 'y' to the question below,
      database hawk_stby will become the new primary database, and database hawk
      will be converted into a standby in preparation for upgrade.  If you answer
      'n' to the question below, the script will exit, leaving the databases in
      their current roles.
Are you ready to proceed with the switchover? (y/n): y

Aug 01 02:49:31 2020 [4-2] continuing
Aug 01 02:49:31 2020 [4-2] switching hawk to become a logical standby
Aug 01 02:49:39 2020 [4-2] hawk is now a logical standby
Aug 01 02:49:39 2020 [4-2] waiting for standby hawk_stby to process end-of-redo from primary
Aug 01 02:49:44 2020 [4-2] switching hawk_stby to become the new primary
Aug 01 02:49:45 2020 [4-2] hawk_stby is now the new primary

### Stage 5: Flashback former primary to pre-upgrade restore point and convert to physical
Aug 01 02:49:49 2020 [5-1] shutting down database hawk
Aug 01 02:50:03 2020 [5-1] mounting database hawk
Aug 01 02:50:12 2020 [5-2] flashing back database hawk to restore point PRU_0000_0003
Aug 01 02:50:15 2020 [5-3] converting hawk into physical standby
Aug 01 02:50:17 2020 [5-4] shutting down database hawk

NOTE: Database hawk has been shutdown, and is now ready to be started
      using the newer version Oracle binary.  This script requires the
      database to be mounted (on all active instances, if RAC) before calling
      this script to resume the rolling upgrade.

[oracle@ol7-112-dg2 upgrade19c]$

Third and Final physru execution completed successfully.

### Execute physru for the third and final time. 
Start redo apply
Prompt whether to switch back to original configuration
Remove guaranteed restore points

[oracle@ol7-112-dg2 upgrade19c]$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.0.0.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Aug 01 03:26:16 2020 [0-1] Identifying rdbms software version
Aug 01 03:26:17 2020 [0-1] database hawk is at version 19.0.0.0.0
Aug 01 03:26:18 2020 [0-1] database hawk_stby is at version 19.0.0.0.0
Aug 01 03:26:26 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Aug 01 03:26:29 2020 [0-1] verifying backup location at hawk and hawk_stby
Aug 01 03:26:31 2020 [0-1] verifying available flashback restore points
Aug 01 03:26:34 2020 [0-1] verifying DG Broker is disabled
Aug 01 03:26:36 2020 [0-1] looking up prior execution history
Aug 01 03:26:39 2020 [0-1] last completed stage [5-4] using script version 0003
Aug 01 03:26:39 2020 [0-1] resuming execution of script

### Stage 6: Run media recovery through upgrade redo
Aug 01 03:26:47 2020 [6-1] upgrade redo region identified as scn range [995261, 2453907]
Aug 01 03:26:47 2020 [6-1] enabling log archive destination to database hawk
Aug 01 03:26:51 2020 [6-1] starting media recovery on hawk
Aug 01 03:26:57 2020 [6-1] confirming media recovery is running
Aug 01 03:26:59 2020 [6-1] waiting for media recovery to initialize v$recovery_progress
Aug 01 03:27:20 2020 [6-1] monitoring media recovery's progress
Aug 01 03:27:32 2020 [6-3] recovery of upgrade redo at 07% - estimated complete at Aug 01 03:31:24
Aug 01 03:27:57 2020 [6-3] recovery of upgrade redo at 26% - estimated complete at Aug 01 03:30:06
Aug 01 03:28:21 2020 [6-3] recovery of upgrade redo at 42% - estimated complete at Aug 01 03:30:00
Aug 01 03:28:45 2020 [6-3] recovery of upgrade redo at 52% - estimated complete at Aug 01 03:30:10
Aug 01 03:29:10 2020 [6-3] recovery of upgrade redo at 61% - estimated complete at Aug 01 03:30:25
Aug 01 03:29:36 2020 [6-3] recovery of upgrade redo at 73% - estimated complete at Aug 01 03:30:27
Aug 01 03:30:00 2020 [6-3] recovery of upgrade redo at 82% - estimated complete at Aug 01 03:30:35
Aug 01 03:30:24 2020 [6-3] recovery of upgrade redo at 90% - estimated complete at Aug 01 03:30:42
Aug 01 03:30:51 2020 [6-3] recovery of upgrade redo at 96% - estimated complete at Aug 01 03:30:55
Aug 01 03:31:12 2020 [6-4] media recovery has finished recovering through upgrade

### Stage 7: Switch back to the original roles prior to the rolling upgrade

NOTE: At this point, you have the option to perform a switchover
     which will restore hawk back to a primary database and
     hawk_stby back to a physical standby database.  If you answer 'n'
     to the question below, hawk will remain a physical standby
     database and hawk_stby will remain a primary database.

Do you want to perform a switchover? (y/n): y

Aug 01 03:31:26 2020 [7-1] continuing
Aug 01 03:31:36 2020 [7-2] waiting for apply lag to fall under 30 seconds
Aug 01 03:31:44 2020 [7-2] apply lag measured at 5 seconds
Aug 01 03:31:48 2020 [7-3] switching hawk_stby to become a physical standby
Aug 01 03:31:55 2020 [7-3] hawk_stby is now a physical standby
Aug 01 03:31:55 2020 [7-3] shutting down database hawk_stby
Aug 01 03:31:57 2020 [7-3] mounting database hawk_stby
Aug 01 03:32:08 2020 [7-3] starting media recovery on hawk_stby
Aug 01 03:32:15 2020 [7-3] confirming media recovery is running
Aug 01 03:32:16 2020 [7-3] waiting for standby hawk to process end-of-redo from primary
Aug 01 03:32:21 2020 [7-3] switching hawk to become the new primary
Aug 01 03:32:23 2020 [7-3] hawk is now the new primary
Aug 01 03:32:23 2020 [7-3] opening database hawk

### Stage 8: Statistics
script start time:                                           31-Jul-20 23:54:44
script finish time:                                          01-Aug-20 03:32:36
total script execution time:                                       +00 03:37:52
wait time for user upgrade:                                        +00 02:52:39
active script execution time:                                      +00 00:45:13
transient logical creation start time:                       31-Jul-20 23:54:46
transient logical creation finish time:                      31-Jul-20 23:55:14
primary to logical switchover start time:                    01-Aug-20 02:49:14
logical to primary switchover finish time:                   01-Aug-20 02:49:47
primary services offline for:                                      +00 00:00:33
total time former primary in physical role:                        +00 00:40:57
time to reach upgrade redo:
time to recover upgrade redo:                                      +00 00:03:44
primary to physical switchover start time:                   01-Aug-20 03:31:25
physical to primary switchover finish time:                  01-Aug-20 03:32:34
primary services offline for:                                      +00 00:01:09

SUCCESS: The physical rolling upgrade is complete

[oracle@ol7-112-dg2 upgrade19c]$

References:

Oracle11g Data Guard: Database Rolling Upgrade Shell Script (Doc ID 949322.1)

Data Guard physru_v3.sh Script Errors ORA-01403 ORA-06512 (Doc ID 2570572.1)

Oracle Database Rolling Upgrades

Agile Methodology and DevOps | DevOps and Agile Relationship

Online Apps DBA - Sat, 2020-08-01 04:13

DevOps has brought a lot of changes in the IT industry. By Combining Agile in DevOps, imagine how much faster and feasible delivery DevOps will provide to a business. Check out this blog at k21academy.com/devops15 to know more about Agile and how does it support DevOps. This blog post covers: What is Agile Agile Framework […]

The post Agile Methodology and DevOps | DevOps and Agile Relationship appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

I Will Buy You Lunch To Solve dbua -silent [FATAL] [DBT-20061]

Michael Dinh - Fri, 2020-07-31 14:22

Currently upgrading 11.2 to 19.3 database using dbua silent.

Run preupgrade.jar and no issues detected.

Run preupgrade_fixups.sql

Run dbua -silent; it resulted in error [FATAL] [DBT-20061]

Per Doc ID 2246770.1, the suggested workaround is to ignore the error, which may be hazardous.

Actually, I have already figured out the root cause; however, the method I used may not always be available in all environments.

Will post solution in a week.

HINT: the probable cause is in the output below, which did not occur on the first run.

$ env|grep HOME
OLD_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
NEW_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1
HOME=/home/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1

$ $OLD_HOME/jdk/bin/java -jar $NEW_HOME/rdbms/admin/preupgrade.jar TERMINAL TEXT FILE
==================
PREUPGRADE SUMMARY
==================
  /u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade.log
  /u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade_fixups.sql
  /u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/postupgrade_fixups.sql

Execute fixup scripts as indicated below:

Before upgrade:

Log into the database and execute the preupgrade fixups
@/u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade_fixups.sql

After the upgrade:

Log into the database and execute the postupgrade fixups
@/u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/postupgrade_fixups.sql

Preupgrade complete: 2020-07-31T18:50:38
oracle@ol7-112-dg2:hawk:/home/oracle
$
	
*******************************************************

$ sqlplus / as sysdba @/u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade_fixups.sql

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jul 31 18:52:21 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Executing Oracle PRE-Upgrade Fixup Script

Auto-Generated by:       Oracle Preupgrade Script
                         Version: 19.0.0.0.0 Build: 7
Generated on:            2020-07-31 18:50:36

For Source Database:     HAWK
Source Database Version: 11.2.0.4.0
For Upgrade to Version:  19.0.0.0.0

Preup                             Preupgrade
Action                            Issue Is
Number  Preupgrade Check Name     Remedied    Further DBA Action
------  ------------------------  ----------  --------------------------------
    1.  min_recovery_area_size    NO          Manual fixup required.
    2.  parameter_min_val         NO          Manual fixup recommended.
    3.  em_present                NO          Manual fixup recommended.
    4.  amd_exists                NO          Manual fixup recommended.
    5.  apex_manual_upgrade       NO          Manual fixup recommended.
    6.  tablespaces_info          NO          Informational only.
                                              Further action is optional.
    7.  exf_rul_exists            NO          Informational only.
                                              Further action is optional.
    8.  rman_recovery_version     NO          Informational only.
                                              Further action is optional.

The fixup scripts have been run and resolved what they can. However,
there are still issues originally identified by the preupgrade that
have not been remedied and are still present in the database.
Depending on the severity of the specific issue, and the nature of
the issue itself, that could mean that your database is not ready
for upgrade.  To resolve the outstanding issues, start by reviewing
the preupgrade_fixups.sql and searching it for the name of
the failed CHECK NAME or Preupgrade Action Number listed above.
There you will find the original corresponding diagnostic message
from the preupgrade which explains in more detail what still needs
to be done.
OL7-112-DG2:(SYS@hawk_stby:LOGICAL STANDBY>
	
*******************************************************
	
$ $NEW_HOME/bin/dbua -silent \
> -sid $ORACLE_SID \
> -oracleHome $ORACLE_HOME \
> -recompile_invalid_objects TRUE \
> -upgradeTimezone TRUE \
> -emConfiguration NONE \
> -upgrade_parallelism 2
Logs directory:  /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM
Performing Pre-Upgrade Checks...
============================
PRE- and POST- FIXUP ACTIONS
=============================
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk/upgrade.xml
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk/preupgrade_fixups.sql
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk/postupgrade_fixups.sql
[FATAL] [DBT-20061] One or more of the pre-upgrade checks on the database have resulted into error conditions that must be addressed before proceeding.
   ACTION: Refer to the pre-upgrade results location for details: /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk

*******************************************************

$ ls -l /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk
total 692
-rw-r-----. 1 oracle oinstall      1 Jul 31 18:53 checksBuffer.tmp
-rw-r-----. 1 oracle oinstall  41134 Jul 31 18:53 components.properties
-rw-r-----. 1 oracle oinstall  15085 Jul 31 18:53 dbms_registry_extended.sql
drwxr-x---. 3 oracle oinstall     21 Jul 31 18:53 oracle
-rw-r-----. 1 oracle oinstall  14051 Jul 31 18:53 parameters.properties
-rw-r-----. 1 oracle oinstall   8584 Jul 31 18:53 postupgrade_fixups.sql
-rw-r-----. 1 oracle oinstall   7884 Jul 31 18:53 preupgrade_driver.sql
-rw-r-----. 1 oracle oinstall  13082 Jul 31 18:53 preupgrade_fixups.sql
-rw-r-----. 1 oracle oinstall 101702 Jul 31 18:53 preupgrade_messages.properties
-rw-r-----. 1 oracle oinstall 461660 Jul 31 18:53 preupgrade_package.sql
-rw-r-----. 1 oracle oinstall   5215 Jul 31 18:53 sqls.log
drwxr-x---. 3 oracle oinstall     24 Jul 31 18:53 upgrade
-rw-r-----. 1 oracle oinstall  15082 Jul 31 18:53 upgrade.xml

*******************************************************

DBUA Silent : Director Service Registration Error DBT-20060 DBT-08010 (Doc ID 2246770.1)	

Case 2 :
SEVERE: [FATAL] [DBT-20061] One or more of the pre-upgrade checks on the database have resulted into error conditions that must be addressed before proceeding.

./dbua -silent -sid <SID> -oracleHome $ORACLE_HOME -J-Doracle.install.dbua.validate.all=false
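
Per Case 2 of the note, a rerun with pre-upgrade validation disabled would look something like the following (a sketch based on the earlier invocation; bypassing the checks should be done with caution):

$ $NEW_HOME/bin/dbua -silent \
> -sid $ORACLE_SID \
> -oracleHome $ORACLE_HOME \
> -recompile_invalid_objects TRUE \
> -upgradeTimezone TRUE \
> -emConfiguration NONE \
> -upgrade_parallelism 2 \
> -J-Doracle.install.dbua.validate.all=false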

Query Statspack’s “SQL ordered by” sections over a time period

Pythian Group - Fri, 2020-07-31 09:26

In my previous blog post <link>, I presented the statspack_load_trends.sql script, which provides a high-level overview of workload dynamics on DB instance(s) over time. In this post, I’ll present the statspack_top_sqls.sql script, which returns SQL performance statistics by mining the Statspack repository. You can download it <here>.

The script provides the same SQLs with the same performance statistics as Statspack's various "SQL ordered by" sections. However, it reports the figures for all categories on a single line and, more importantly, does so over a time range, considering each available consecutive pair of snapshots. It thus provides a way to quickly identify the SQLs contributing most to a particular "SQL ordered by" category over a time period. Alternatively, we can also check the behavior of specific SQLs over time.

Its core logic is based on Oracle's $ORACLE_HOME/rdbms/admin/sprepins.sql (Statspack Report Instance) script, but it doesn't rely on its existence to run. As with statspack_load_trends.sql, it queries the Statspack repository directly. It doesn't create or use any (temporary) objects, not even the global temporary table that sprepins.sql uses when producing the report. We can also use it to analyze a repository imported from another DB, and it handles periods spanning instance restart(s).
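
To give an idea of the kind of logic involved, here is a simplified sketch (not the actual script) of an elapsed-time delta between two snapshots :b and :e of one instance. The column names are those of STATS$SQL_SUMMARY; the sketch assumes one row per hash value per snapshot and no instance restart in between:

-- elapsed time and executions accumulated between snapshots :b and :e, per SQL
select e.old_hash_value,
       round((e.elapsed_time - nvl(b.elapsed_time, 0)) / 1e6, 2) elapsed_sec,
       e.executions - nvl(b.executions, 0) execs
  from stats$sql_summary e
  left join stats$sql_summary b
    on b.dbid            = e.dbid
   and b.instance_number = e.instance_number
   and b.old_hash_value  = e.old_hash_value
   and b.snap_id         = :b
 where e.snap_id         = :e
 order by elapsed_sec desc;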

Important note in case you want to compare the results with Statspack reports

Testing the script by comparing its output to regular Statspack reports (created by running $ORACLE_HOME/rdbms/admin/spreport.sql or sprepins.sql), I noticed that sometimes the numbers in the “SQL ordered by” sections didn’t match between the two. Examples include SQLs reported by my script, but not by Statspack reports. Or even Statspack reports reporting the same SQL (same hash value) multiple times in the same “SQL ordered by” section.
The root cause of those anomalies is described in the MOS note “Statspack Reports Show Different Results In “SQL ordered by …” When Taken At Different Times with Same Snapid (Doc ID 2258762.1)”: “When more than one statspack reports are taken repeatedly in the same session by non-perfstat user, some data might get mixed up in the temporary table used by the reporting script, and the result may get corrupted.”.

The problem was that I was not connecting as the owner of the Statspack repository (usually PERFSTAT) when generating multiple consecutive reports looping over a snapshot range. The same was true when creating a single Statspack report.
The takeaway is to always connect as the Statspack repository owner when running spreport.sql, especially if you use any helper scripts which generate Statspack reports for a series of snapshots.
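
In practice that simply means something like this (the connect string is illustrative; spreport.sql ships under $ORACLE_HOME):

$ sqlplus perfstat@orcl
SQL> @?/rdbms/admin/spreport.sql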

Usage

Starting the script

Let’s see the script in action analyzing a sample Swingbench run on a 2 node RAC database. The output is wide, so I suggest to spool it to a file for easier viewing/plotting:

SQL> spool top_sqls.txt
SQL> @statspack_top_sqls.sql

List SQL by [elapsed_time | cpu_time | buffer_gets | disk_reads | executions | parse_calls | max_sharable_mem | max_version_count | cluster_wait_time]:

Enter a value - default "elapsed_time" :

First, we specify the category by which we want the SQLs to be ordered. We can choose one of the possibilities listed above, which are the same categories the Statspack report's "SQL ordered by" sections display. The script reports the same SQLs in the same order as they appear in the selected Statspack report category.

Suppose we want to order SQLs by "cpu_time", and that the corresponding Statspack report lists 10 SQLs in the "SQL ordered by CPU" section. The script lists the same ones. However, the added benefit of the script is that it reports values which the Statspack report doesn't display. For example, the "SQL ordered by CPU" section of the Statspack report doesn't display the "Physical Reads" statistic; that one is listed in the "SQL ordered by Elapsed time" section. So if a SQL doesn't qualify for the "SQL ordered by Elapsed time" section, we can't get those values from the Statspack report.

Next, we provide the DBID and instance number we want to be analyzed. If we don’t provide an instance number, the script considers all which are present in the repository:

Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id    Inst Num DB Name      Instance     Host
----------- -------- ------------ ------------ --------------------------------
 1558102526        1 ORCL         orcl1        ol7-122-rac1.localdomain
 1558102526        2 ORCL         orcl2        ol7-122-rac2.localdomain


Enter DBID to analyze - default "1558102526" :
Enter instance number or "all" to analyze all instancs for DBID = 1558102526 - default "all" :

Finally, we specify the time range we’d like to analyze:

Enter begin time for report [DD-MON-YYYY HH24:MI] - default "30-APR-2020 10:54" : 15-FEB-2020 12:30
Enter end time for report [DD-MON-YYYY HH24:MI] - default "30-APR-2020 22:54" : 15-FEB-2020 13:00
Script output

Let’s check what the script output looks like. Because of the output width, I have broken it up in several sections. The whole file containing the spool, and the respective Statspack reports for comparison, can be obtained here .

Since the original spool file contains 280 lines, I'm showing an abbreviated version of the first 8 columns; the full output lists the SQLs between each pair of snapshots. Since we didn't specify which instance number we're interested in, both instances are considered. Additionally, we see that it's normal and expected that consecutive snapshots don't have consecutive numbers: snapshots 4 and 21 are two consecutive snapshots on instance 1, as we can confirm by checking their snap times. The other columns are self-explanatory:

INSTANCE_NUMBER  B_SNAP_ID  E_SNAP_ID B_SNAP_TIME        E_SNAP_TIME        INTERVAL_MIN  DBTIMEMIN        AAS
--------------- ---------- ---------- ------------------ ------------------ ------------ ---------- ----------
              1          1          2 15-FEB-20 12:30:00 15-FEB-20 12:40:00           10       2.30       0.23
<removed 16 lines listing SQLs>
                         2          3 15-FEB-20 12:40:00 15-FEB-20 12:50:00           10       1.60       0.16
<removed 25 lines listing SQLs>
                         3          4 15-FEB-20 12:50:00 15-FEB-20 13:00:00           10       1.55       0.15
<etc>
                         4         21 15-FEB-20 13:00:00 15-FEB-20 13:10:00           10       1.66       0.17

                        21         22 15-FEB-20 13:10:00 15-FEB-20 13:20:00           10       1.30       0.13

                        22         23 15-FEB-20 13:20:00 15-FEB-20 13:30:00           10       1.18       0.12

              2         11         12 15-FEB-20 12:30:00 15-FEB-20 12:40:00           10       3.81       0.38

                        12         13 15-FEB-20 12:40:00 15-FEB-20 12:50:00           10       2.70       0.27

                        13         14 15-FEB-20 12:50:00 15-FEB-20 13:00:00           10       2.50       0.25

                        14         15 15-FEB-20 13:00:00 15-FEB-20 13:10:00           10       2.94       0.29

                        15         16 15-FEB-20 13:10:00 15-FEB-20 13:20:00           10       2.18       0.22

                        16         17 15-FEB-20 13:20:00 15-FEB-20 13:30:00           10       1.98       0.20

 

Let’s check an excerpt of the output for snapshots 1-2 and 2-3. Apart from the “HV” column (the SQL’s old hash value), the columns are self-explanatory. For brevity, I’m showing only the first 10 SQLs per snapshot pair.

B_SNAP_ID  E_SNAP_ID         HV ELAPSED_TIME_SEC EXECUTIONS ELAPSED_PER_EXEC_SEC PERCENT_OF_DBTIME_USED CPU_TIME_SEC CPU_TIME_MS_PER_EXEC
--------- ---------- ---------- ---------------- ---------- -------------------- ---------------------- ------------ --------------------
        1          2 3565022785            80,55        483                  ,17                  58,47        14,63                 30,3
                     2319948924             55,8       5931                  ,01                  40,51         7,69                  1,3
                     1852190137            14,22       1024                  ,01                  10,32         7,75                 7,57
                     1113394757             8,17      12332                    0                   5,93         2,97                  ,24
                     4194254847              6,4        483                  ,01                   4,64          ,84                 1,73
                     1283549268             4,55        169                  ,03                    3,3          ,89                 5,28
                     2588369535             4,21         24                  ,18                   3,06         1,12                46,55
                     4212635381             4,18         24                  ,17                   3,04         1,09                45,39
                     4219272024             3,97       1396                    0                   2,88          ,86                  ,62
                     2835506982             3,74        173                  ,02                   2,71          ,57                 3,32
                     (..)                                                                                                                                         
        2          3 3565022785            46,93        956                  ,05                  48,79        18,87                19,73
                     2319948924            22,85      11550                    0                  23,75         7,52                  ,65
                     1852190137            15,35       2158                  ,01                  15,95        11,98                 5,55
                     1283549268             6,36        380                  ,02                   6,61         1,65                 4,33
                     2835506982                6        377                  ,02                   6,24         1,03                 2,72
                     1822227481             5,32       7742                    0                   5,53         1,26                  ,16
                     4194254847             4,69        957                    0                   4,87         1,22                 1,28
                     3463613875             4,61        380                  ,01                   4,79          ,62                 1,62
                     1113394757             4,07      25794                    0                   4,23         3,28                  ,13
                     4219272024             3,89       2945                    0                   4,04         1,43                  ,49
                     (..)
B_SNAP_ID  E_SNAP_ID         HV PHYSICAL_READS PHYSICAL_READS_PER_EXECUTION BUFFER_GETS GETS_PER_EXECUTION ROWS_PROCESSED ROWS_PROCESSED_PER_EXECUTION PARSE_CALLS
--------- ---------- ---------- -------------- ---------------------------- ----------- ------------------ -------------- ---------------------------- -----------
        1          2 3565022785           5860                        12,13      261329             541,05            483                            1         483
                     2319948924           4614                          ,78      205925              34,72          26467                         4,46           1
                     1852190137            394                          ,38      131100             128,03           1024                            1        1025
                     1113394757            336                          ,03      124291              10,08         172648                           14           1
                     4194254847            396                          ,82        7760              16,07            483                            1           1
                     1283549268            262                         1,55        6188              36,62            169                            1         169
                     2588369535             76                         3,17       13104                546             24                            1          24
                     4212635381             76                         3,17       13104                546            737                        30,71           1
                     4219272024            167                          ,12       19979              14,31           1396                            1           1
                     2835506982            255                         1,47        3579              20,69            173                            1         173
                                                                                                                                                                  
        2          3 3565022785           1138                         1,19      511742             535,29            956                            1         957
                     2319948924            487                          ,04      402425              34,84          51879                         4,49           0
                     1852190137            164                          ,08      274493              127,2           2158                            1        2157
                     1283549268            303                           ,8       13726              36,12            380                            1         380
                     2835506982            448                         1,19        7218              19,15            377                            1         377
                     1822227481            259                          ,03       23226                  3           7742                            1           0
                     4194254847            150                          ,16       14371              15,02            957                            1           0
                     3463613875            298                          ,78        5844              15,38            380                            1           0
                     1113394757              0                            0      260084              10,08         361116                           14           0
                     4219272024             76                          ,03       42277              14,36           2945                            1           0
B_SNAP_ID  E_SNAP_ID         HV MAX_SHARABLE_MEM_KB LAST_SHARABLE_MEM_KB MAX_VERSION_COUNT LAST_VERSION_COUNT DELTA_VERSION_COUNT CLUSTER_WAIT_TIME_SEC
--------- ---------- ---------- ------------------- -------------------- ----------------- ------------------ ------------------- ---------------------
        1          2 3565022785               55,39                55,39                 1                  1                   1                 13,67
                     2319948924               47,44                47,44                 1                  1                   1                  9,38
                     1852190137               55,38                55,38                 1                  1                   1                  2,65
                     1113394757               43,41                43,41                 1                  1                   1                  2,17
                     4194254847               47,64                47,64                 1                  1                   1                  1,26
                     1283549268               59,46                59,46                 1                  1                   1                   ,83
                     2588369535               55,38                55,38                 1                  1                   1                  2,21
                     4212635381               35,51                35,51                 1                  1                   1                  2,21
                     4219272024               27,42                27,42                 1                  1                   1                  1,15
                     2835506982               55,38                55,38                 1                  1                   1                   ,39
                                                                                                                                                       
        2          3 3565022785              113,27               113,27                 2                  2                   1                 15,91
                     2319948924               47,44                47,44                 1                  1                   0                     9
                     1852190137              113,25               113,25                 2                  2                   1                  1,32
                     1283549268              121,42               121,42                 2                  2                   1                  1,42
                     2835506982              113,27               113,27                 2                  2                   1                   ,72
                     1822227481                31,5                 31,5                 1                  1                   0                   1,5
                     4194254847               47,64                47,64                 1                  1                   0                  1,76
                     3463613875               59,63                59,63                 1                  1                   0                   ,81
                     1113394757               43,41                43,41                 1                  1                   0                   ,41
                     4219272024               27,42                27,42                 1                  1                   0                  1,77
B_SNAP_ID  E_SNAP_ID         HV CWT_PERCENT_OF_ELAPSED_TIME AVG_HARD_PARSE_TIME_MS MODULE                      SQL_TEXT
--------- ---------- ---------- --------------------------- ---------------------- --------------------------- ----------------------------------------------------------------
        1          2 3565022785                       16,97                        JDBC Thin Client            BEGIN :1 := orderentry.neworder(:2 ,:3 ,:4 ); END;
                     2319948924                        16,8                        New Order                   SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     1852190137                       18,61                                                    BEGIN :1 := orderentry.browseproducts(:2 ,:3 ,:4 ); END;
                     1113394757                       26,55                        Browse Products             SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     4194254847                       19,72                        New Order                   INSERT INTO ORDERS ( ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_
                     1283549268                       18,16                                                    BEGIN :1 := orderentry.newcustomer(:2 ,:3 ,:4 ,:5 ,:6 ,:7 ,:8 ,:
                     2588369535                       52,52                                                    BEGIN :1 := orderentry.SalesRepsQuery(:2 ,:3 ,:4 ); END;
                     4212635381                        52,9                        Sales Rep Query             SELECT TT.ORDER_TOTAL, TT.SALES_REP_ID, TT.ORDER_DATE, CUSTOMERS
                     4219272024                       28,85                        New Order                   INSERT INTO ORDER_ITEMS ( ORDER_ID, LINE_ITEM_ID, PRODUCT_ID, UN
                     2835506982                       10,53                                                    BEGIN :1 := orderentry.browseandupdateorders(:2 ,:3 ,:4 ); END;
                                                                                                              
        2          3 3565022785                       33,89                        JDBC Thin Client            BEGIN :1 := orderentry.neworder(:2 ,:3 ,:4 ); END;
                     2319948924                       39,39                        New Order                   SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     1852190137                        8,63                                                    BEGIN :1 := orderentry.browseproducts(:2 ,:3 ,:4 ); END;
                     1283549268                       22,26                                                    BEGIN :1 := orderentry.newcustomer(:2 ,:3 ,:4 ,:5 ,:6 ,:7 ,:8 ,:
                     2835506982                       12,06                                                    BEGIN :1 := orderentry.browseandupdateorders(:2 ,:3 ,:4 ); END;
                     1822227481                       28,14                        New Order                   SELECT CUSTOMER_ID, CUST_FIRST_NAME, CUST_LAST_NAME, NLS_LANGUAG
                     4194254847                       37,54                        New Order                   INSERT INTO ORDERS ( ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_
                     3463613875                       17,49                        Update Customer Details     INSERT INTO CUSTOMERS ( CUSTOMER_ID , CUST_FIRST_NAME , CUST_LAS
                     1113394757                        9,98                        Browse Products             SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     4219272024                       45,53                        New Order                   INSERT INTO ORDER_ITEMS ( ORDER_ID, LINE_ITEM_ID, PRODUCT_ID, UN


Final note

Oracle’s sprepins.sql script has a /*+ first_rows */ hint in the inline view containing the analytic functions used to compute the current and previous row values from the stats$sql_summary table. The hint is present, but not enabled, in statspack_top_sqls.sql. If for some reason you need to re-enable it, just search for it in the script and enable it (or use its alternative, first_rows(0)).

Categories: DBA Blogs

Generating and Storing a Report in the Database

Tom Kyte - Fri, 2020-07-31 08:06
Hi Tom's team, I am happy with your quick response to my old question: https://asktom.oracle.com/pls/apex/asktom.search?tag=storing-and-accessing-reports-in-the-database-on-oracle-apex-of-version-20. I am opening a new question for my present problem. These are the steps I took: (a) I created the table from the file report_archive_tbl.sql, in my database or directly in APEX. (b) I created a process named store_blobquery_report that fires when the Print button is pressed; the Print button redirects to the URL I created for a report query:

declare
  l_report blob;
begin
  l_report := apex_util.get_print_document (
    p_application_id     => :APP_ID,
    p_report_query_name  => 'multiquery',  -- name of the report query
    p_report_layout_name => 'multiquery',  -- name of the report layout
    p_report_layout_type => 'rtf',         -- report layout type
    p_document_format    => 'pdf'          -- output document format
  );
  insert into report_archive (
    filename, mimetype, report, created_date, created_by
  ) values (
    'Multi Query Search Results ('||to_char(sysdate,'DDMonYYYY')||')',
    'application/pdf',
    l_report,
    sysdate,
    :USER
  );
end;

(c) Each time I print a report, one row should be inserted into report_archive. However, when I press the Print button, the report is downloaded but the report_archive table contains no rows. Thank you in advance ^_^.
Categories: DBA Blogs

Save compressed BLOB to OS file

Tom Kyte - Fri, 2020-07-31 08:06
Hello - We're doing the following in the Oracle database: we extract a comma-delimited file from a Siebel database using a SQL query, store it in a BLOB, compress the BLOB using UTL_COMPRESS.LZ_COMPRESS, and send the data to a REST API with an HTTP POST. The REST service errors out because it can't read the Oracle-compressed data. It works fine when I zip the file manually with WinZip and post it using Postman. The REST API owner would like to know the specification of Oracle's UTL_COMPRESS output, and is also asking us to send the Oracle-compressed data as an OS file. I need to know the following: (1) What specification does Oracle use for UTL_COMPRESS? (2) How can I create an OS file from an Oracle-compressed BLOB?
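
For (2), I imagine something along these lines using UTL_FILE (DUMP_DIR, the file name, and all other names here are placeholders, not our actual code) - is this the right approach?

declare
  l_data  blob;                            -- the compressed BLOB, populated elsewhere
  l_file  utl_file.file_type;
  l_pos   pls_integer := 1;
  l_chunk constant pls_integer := 32767;   -- max RAW bytes per write
  l_len   pls_integer;
begin
  -- l_data := utl_compress.lz_compress(...);  -- as in the existing code
  l_len  := dbms_lob.getlength(l_data);
  l_file := utl_file.fopen('DUMP_DIR', 'extract.gz', 'wb', l_chunk);
  while l_pos <= l_len loop
    -- write the BLOB out in 32K chunks
    utl_file.put_raw(l_file, dbms_lob.substr(l_data, l_chunk, l_pos), true);
    l_pos := l_pos + l_chunk;
  end loop;
  utl_file.fclose(l_file);
end;
/

Thanks in advance.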
Categories: DBA Blogs

Archive Log Deletion Policy with a Standby Database

Hemant K Chitale - Thu, 2020-07-30 23:15
Previously, I have blogged about the Archive Log Deletion Policy even when a Standby database is not present.

Here's a 19c example with a Standby database.

I first set the policy to "APPLIED ON ALL STANDBY", meaning that an ArchiveLog can be deleted only if the Primary has confirmed that the ArchiveLog (i.e. all the Redo in that ArchiveLog) has been applied on every Standby database configured for this Primary.

I then attempt to use the DELETE command to delete all recent ArchiveLogs.


oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Jul 31 11:45:40 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name ORCLCDB are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/product/19c/dbhome_1/dbs/snapcf_ORCLCDB.f'; # default

RMAN> configure archivelog deletion policy to applied on all standby;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters are successfully stored

RMAN>
RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> list archivelog all completed after "sysdate-1/24";

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
126 1 41 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_41_1036108814.dbf

128 1 42 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_42_1036108814.dbf

130 1 43 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf

131 1 44 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf


RMAN>
RMAN> delete archivelog all;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf thread=1 sequence=43
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf thread=1 sequence=44
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
118 1 37 A 12-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_37_1036108814.dbf

119 1 38 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_38_1036108814.dbf

120 1 39 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_39_1036108814.dbf

125 1 40 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_40_1036108814.dbf

126 1 41 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_41_1036108814.dbf

128 1 42 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_42_1036108814.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_37_1036108814.dbf RECID=118 STAMP=1045695910
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_38_1036108814.dbf RECID=119 STAMP=1045696247
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_39_1036108814.dbf RECID=120 STAMP=1045696315
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_40_1036108814.dbf RECID=125 STAMP=1045696378
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_41_1036108814.dbf RECID=126 STAMP=1047209331
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_42_1036108814.dbf RECID=128 STAMP=1047209610
Deleted 6 objects


RMAN>
RMAN> list archivelog all completed after "sysdate-1/24";

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
130 1 43 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf

131 1 44 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf


RMAN>


RMAN refused to delete the recently generated ArchiveLogs 43 and 44 because they haven't been applied to the Standby database(s).
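
On the Primary, you can check what has been applied on the Standby with a query along these lines (a sketch; it assumes the Standby is the archive destination with DEST_ID = 2):

select sequence#, applied
from   v$archived_log
where  dest_id = 2
and    sequence# in (43, 44);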

Once the Standby confirms that it has received and applied all the Redo in 43 and 44:


2020-07-31T11:51:53.314269+08:00
PR00 (PID:3718): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_43_1036108814.dbf
2020-07-31T11:51:53.676981+08:00
PR00 (PID:3718): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_44_1036108814.dbf
PR00 (PID:3718): Media Recovery Waiting for T-1.S-45 (in transit)
2020-07-31T11:51:53.868134+08:00


The Primary can now Delete these ArchiveLogs (even though they haven't been backed up).


RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
130 1 43 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf

131 1 44 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf RECID=130 STAMP=1047210443
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf RECID=131 STAMP=1047210452
Deleted 2 objects


RMAN> list archivelog all;

specification does not match any archived log in the repository

RMAN>


If you want to prevent RMAN from deleting ArchiveLogs that have not been backed up, you can either add the "backed up 'n' times to disk" clause to the DELETE command:


RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
134 1 45 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf

136 1 46 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf


RMAN> delete archivelog all completed after "sysdate-1/24" backed up 1 times to disk;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf thread=1 sequence=45
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf thread=1 sequence=46

RMAN>


OR use a combined Archive Log Deletion Policy:


RMAN> configure archivelog deletion policy to applied on all standby backed up 1 times to disk;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
new RMAN configuration parameters are successfully stored

RMAN>
RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf thread=1 sequence=45
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf thread=1 sequence=46
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf thread=1 sequence=47
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_48_1036108814.dbf thread=1 sequence=48

RMAN>


After I have confirmed that the Standby has applied the recent ArchiveLogs:


PR00 (PID:4569): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_47_1036108814.dbf
2020-07-31T12:04:40.251269+08:00
PR00 (PID:4569): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_48_1036108814.dbf
PR00 (PID:4569): Media Recovery Waiting for T-1.S-49 (in transit)
2020-07-31T12:04:40.609327+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 49 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log


I try the DELETE again on the Primary:


RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf thread=1 sequence=45
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf thread=1 sequence=46
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf thread=1 sequence=47
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_48_1036108814.dbf thread=1 sequence=48

RMAN>
RMAN> backup as compressed backupset archivelog all;

Starting backup at 31-JUL-20
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=45 RECID=134 STAMP=1047210945
input archived log thread=1 sequence=46 RECID=136 STAMP=1047210952
input archived log thread=1 sequence=47 RECID=138 STAMP=1047211344
input archived log thread=1 sequence=48 RECID=139 STAMP=1047211353
input archived log thread=1 sequence=49 RECID=142 STAMP=1047211650
channel ORA_DISK_1: starting piece 1 at 31-JUL-20
channel ORA_DISK_1: finished piece 1 at 31-JUL-20
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_07_31/o1_mf_annnn_TAG20200731T120730_hl7682or_.bkp tag=TAG20200731T120730 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 31-JUL-20

Starting Control File and SPFILE Autobackup at 31-JUL-20
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_07_31/o1_mf_s_1047211651_hl768490_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 31-JUL-20

RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_48_1036108814.dbf thread=1 sequence=48
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_49_1036108814.dbf thread=1 sequence=49
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
134 1 45 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf

136 1 46 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf

138 1 47 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf RECID=134 STAMP=1047210945
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf RECID=136 STAMP=1047210952
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf RECID=138 STAMP=1047211344
Deleted 3 objects


RMAN>


At first, I cannot delete any of the ArchiveLogs.  Then, after I back up 45 to 49, I am able to delete 45 to 47, but not 48 and 49, as they have not yet been applied to the Standby (although they have been backed up locally on the Primary server).

Thus, the Archive Log Deletion Policy protects against accidental deletion with the DELETE ARCHIVELOG command.

HOWEVER, the DELETE OBSOLETE command ignores this Policy.
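
So, for example, a housekeeping sequence like this (illustrative only) would delete ArchiveLogs deemed obsolete by the retention policy, regardless of whether the Standby has applied them:

RMAN> report obsolete;
RMAN> delete obsolete;

Keep that in mind if any of your maintenance scripts rely on DELETE OBSOLETE.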


Notes:
1.  Archive Log Deletion Policy has been available since 10g.  However, if you are on 10g or 11g, see Oracle Support Document ID 728053.1.
2.  Instead of "APPLIED ON [ALL] STANDBY", you can also use the "SHIPPED TO [ALL] STANDBY" subclause.  This is useful if you have Standby databases that deliberately lag the Primary -- e.g. a Standby that applies the Redo 4 hours after it is generated on the Primary.
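
For example:

RMAN> configure archivelog deletion policy to shipped to all standby;

With this policy, an ArchiveLog becomes eligible for deletion once it has been transferred to every Standby, even if it has not yet been applied there.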
Categories: DBA Blogs
