Oracle ACE Pro
Oracle Solution Architect
Oracle E-Business Suite
Oracle Cloud Infrastructure
Oracle Fusion Middleware
Oracle Database Administration
Oracle Weblogic Administration
When Oracle set out to create our PaaS cloud, more specifically the Database Cloud Service, one element was at the forefront: the software in the cloud had to be the same software our customers use on premises, with no difference whatsoever. The same tools, skills, and software you use to manage, monitor, and tune your on-premises databases had to move seamlessly and transparently between platforms. A DBA should be able to monitor a cloud database just as they would an on-premises database, with no exceptions. Oracle's Database Cloud enables you to create an enterprise-class, highly available database in less than an hour and manage it just as you would on premises.
With this same-software-in-the-cloud-as-on-premises paradigm, Oracle's Database Cloud allows migration and data loading with the ease you are already familiar with. To start, all Oracle Database Enterprise Cloud Service instances give you operating system access as well as SQL*Net access. This enables users and systems with the correct access to stage and load data into the database using multiple methods. SQL*Loader is the most basic, allowing data to be loaded from a local file or over SQL*Net.
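As a rough sketch of the SQL*Loader approach, a minimal control file and invocation might look like the following; the table name, file names, and connect string are all hypothetical, and your column list and format masks will differ:

-- employees.ctl: load a comma-separated file into an existing table
LOAD DATA
INFILE 'employees.csv'
APPEND INTO TABLE employees
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(emp_id, emp_name, hire_date DATE "YYYY-MM-DD")

The load can then be run from an on-premises client straight over SQL*Net against the cloud service:

sqlldr userid=scott@dbcs_service control=employees.ctl log=employees.log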
The original database export and import utilities (exp and imp) can still be used today to move on-premises databases into the cloud.
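As a hedged illustration, exporting a single schema on premises might look like this (the schema name, dump file name, and connect strings are assumptions):

exp system/<password>@onprem_db file=hr_schema.dmp owner=HR log=hr_exp.log

After copying hr_schema.dmp to the cloud virtual machine, the matching import is run against the cloud database:

imp system/<password>@cloud_db file=hr_schema.dmp fromuser=HR touser=HR log=hr_imp.log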
Superseding export/import in Oracle 10g Release 1 was Data Pump. Oracle Data Pump enables very high-speed movement of data and metadata from one database to another, much faster than the original export/import tools. Data Pump features such as exporting and importing over a network and the ability to restart jobs make it a perfect fit for moving data to the Oracle Cloud. Other features, such as the ability to estimate how much space an export job would consume without actually performing the export, help with cost estimates when purchasing cloud storage.
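For example, assuming the default DATA_PUMP_DIR directory object and a database link named onprem_link pointing back to the source database (both assumptions for this sketch), a space estimate and a network-mode import might look like this:

expdp system/<password> schemas=HR estimate_only=YES estimate=BLOCKS

The estimate prints the approximate dump file size without writing one. The network-mode import then pulls the schema directly into the cloud database with no dump file staged at all:

impdp system/<password>@cloud_service schemas=HR network_link=onprem_link directory=DATA_PUMP_DIR logfile=hr_import.log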
If the source database is 8i or later, transportable tablespaces give you a very fast method for moving your database to the cloud: all you need to do is move the metadata and the datafiles. Starting with Oracle9i, the transported tablespaces are also no longer required to be the same block size as the destination database, which helps with the migration process.
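As a sketch with a hypothetical tablespace name, the metadata-only export looks roughly like this; the tablespace must remain read only while its datafiles are copied:

ALTER TABLESPACE users READ ONLY;

expdp system/<password> transport_tablespaces=USERS dumpfile=tts_users.dmp directory=DATA_PUMP_DIR

The datafiles and the small metadata dump file are then copied to the cloud host, and the tablespace is plugged in there with impdp using the transport_datafiles parameter.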
Another important aspect of transportable tablespaces is that you can migrate across operating system platforms. To see which platforms you can migrate to, check the V$TRANSPORTABLE_PLATFORM view in your database; it lists the supported platforms and each platform's endian format. What is endianness? It is the way computers store multibyte data types. Think of it this way: a big-endian platform stores the number 2134 as 2134, while a little-endian platform stores it as 4312. This simplifies it a bit, but you can see the basic issue. If the source and cloud database endianness match, you can move the tablespaces to the cloud platform with no conversion necessary. If the endianness does not match, an additional step is required: you can convert the tablespace being transported to the target endian format on either the source or the cloud database. Doc ID 371556.1 on support.oracle.com will guide you through this process. Also remember that when using transportable tablespaces, the source and target databases must use compatible database character sets and compatible national character sets.
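For example, to list the supported platforms and their endian formats:

SELECT platform_name, endian_format
FROM v$transportable_platform
ORDER BY platform_name;

If the endian formats differ, the datafiles can be converted with RMAN before or after they are copied. A hedged example of a source-side conversion, with a hypothetical tablespace and output location, would be:

RMAN> CONVERT TABLESPACE users
      TO PLATFORM 'Linux x86 64-bit'
      FORMAT '/tmp/users_converted_%U';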
With the latest cloud release, we can now create a database from a customer's on-premises backup with a single click. The Oracle Database Cloud now includes a feature to create, or replace, an existing database with a backup from our Database Backup Service. Just tell the UI where the backup lives and the cloud takes care of the rest: the database is created and then replaced with that backup from the cloud. This jumpstarts many use cases, such as disaster recovery in the cloud using Data Guard, or a test/development environment built from a backup of an on-premises production database.
The last method of migrating to the cloud we will discuss in this article is moving PDBs. With 12c, Oracle introduced the multitenant architecture, which allows a database to function as a multitenant container database (CDB). Within this CDB, we can have zero, one, or many pluggable databases (PDBs). The multitenant feature represents one of the biggest architectural changes in the Oracle Database. Grouping multiple PDBs into a single CDB allows us to manage, patch, upgrade, and back up all of our databases as a single unit, letting us consolidate multiple databases into one.
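As a quick illustration, connecting to the CDB root and querying V$PDBS shows the pluggable databases the container currently holds:

SELECT con_id, name, open_mode
FROM v$pdbs;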
So how does the Oracle Cloud utilize the multitenant feature? To start, the ability to move local PDBs straight into the Oracle Cloud is built right into Oracle SQL Developer. As long as that PDB is on the same instance as SQL Developer, we can unplug, copy, and plug that PDB into a cloud 12c database with a single button click. This is a great way to move development and test instances into the cloud quickly and easily.
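Behind the scenes, the unplug-and-plug operation is roughly equivalent to the following SQL; the PDB name, manifest path, and file paths are all hypothetical:

ALTER PLUGGABLE DATABASE dev_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE dev_pdb UNPLUG INTO '/tmp/dev_pdb.xml';

After the XML manifest and the PDB's datafiles are copied to the cloud host, the PDB is plugged into the cloud CDB and opened:

CREATE PLUGGABLE DATABASE dev_pdb USING '/tmp/dev_pdb.xml'
  COPY FILE_NAME_CONVERT = ('/u01/oradata/', '/u02/app/oracle/oradata/');
ALTER PLUGGABLE DATABASE dev_pdb OPEN;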
With Enterprise Manager 12cR5 and 13c, we can do even more with PDBs and the Oracle Cloud. Enterprise Manager (EM) has the facilities to remotely move any 12c PDB in your monitored fleet to the Oracle Cloud. Not only can we move these PDBs to the cloud, but we can also move them from the cloud back to an on-premises 12c database. Using the Data Masking and Subsetting Pack, EM will mask or scramble sensitive data as it is moved from an on-premises PDB to the Oracle Cloud. Want an even quicker way to migrate PDBs? Use the PDB remote cloning feature to clone a PDB instantly over a database link.
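A hedged example of such a remote clone, run on the cloud CDB with an assumed database link (onprem_cdb_link) pointing back to the on-premises CDB; note that in 12c Release 1 the source PDB must be open read only while it is cloned:

CREATE PLUGGABLE DATABASE hr_pdb_clone FROM hr_pdb@onprem_cdb_link;
ALTER PLUGGABLE DATABASE hr_pdb_clone OPEN;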
Back to the original thought: because the software in the Oracle Database Cloud is the same as on premises, all the tools and features from previous and current versions of the database continue to work, with no need to purchase anything extra or retrain on a cloud-specific variant. With this familiarity, you can leverage your existing skills and knowledge to make the migration to the Oracle Database Cloud even easier. For more information, please take a look at the resources provided below.
Brian Spendolini
Senior Principal Product Manager
Resources:
In addition to helping customers resolve issues via Service Requests, Oracle Support also builds over 60 free diagnostic tools for Oracle E-Business Suite 12.2, 12.1, 12.0, and 11i. These Support Analyzers are non-invasive scripts that run health checks on your EBS environments. They look for common issues and generate standardized reports that summarize their findings, provide solutions for known issues, and recommend best practices.
Here’s an index to these tools:
Spotlight on BIP Analyzer
BI Publisher for EBS (BIP, previously called XML Publisher) is integrated into the E-Business Suite technology stack. The BIP Analyzer is available here:
The BI Publisher Analyzer reviews BIP configurations and compares them against Oracle’s best practices. It provides troubleshooting advice for common issues, such as:
This tool can be run manually or configured to run as a concurrent request, so it can be scheduled to be run periodically and included in regular Workflow Maintenance cycles.
Can this script be run against Production?
Yes. There is no DML in the Analyzer script, so it is safe to run against Production instances to get an analysis of a specific environment. As always, it is recommended to test all suggestions against a TEST instance before applying them to Production.
The nologging feature of the Oracle Database is used to enhance performance in certain areas of Oracle E-Business Suite. For example, it may be used during patch installation and when building summary data for Business Intelligence.
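As a minimal sketch of the kind of operation involved (the table names here are hypothetical, not taken from EBS), a direct-path insert into a NOLOGGING table generates only minimal redo:

ALTER TABLE sales_summary NOLOGGING;

-- direct-path load; redo for the newly loaded blocks is suppressed while the table is NOLOGGING
INSERT /*+ APPEND */ INTO sales_summary
SELECT product_id, SUM(amount)
FROM sales
GROUP BY product_id;

COMMIT;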
What are the tradeoffs of using nologging?
Use of nologging in an operation means that the database redo logs will contain incomplete information about the changes made, with any data blocks updated during the nologging operation being marked as invalid. As a result, a database restoration to a point in time (whether from a hot backup or a cold backup) may require additional steps to bring the affected data blocks up to date and make the restored database usable. These additional steps may involve taking new backups of the associated datafiles or dropping and rebuilding the affected objects. The same applies to activation of a standby database.
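V$DATAFILE records the time of the most recent nologging (unrecoverable) operation against each datafile, and RMAN's REPORT UNRECOVERABLE command lists the files that need a fresh backup as a result. A simple check, for example:

SELECT file#, unrecoverable_change#, unrecoverable_time
FROM v$datafile
WHERE unrecoverable_time IS NOT NULL
ORDER BY unrecoverable_time;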
Can nologging be enabled for EBS 12.2?
Yes. See the following documentation for additional considerations and operational implications of using nologging:
Source: https://blogs.oracle.com/stevenchan/can-nologging-be-enabled-for-ebs-122
First, find the invalid objects in the database with the query below:
SELECT COUNT(*)
FROM DBA_OBJECTS
WHERE STATUS = 'INVALID';
For a more detailed breakdown by owner and object type, use the following query:
SELECT OWNER, OBJECT_TYPE, COUNT(*)
FROM DBA_OBJECTS
WHERE STATUS = 'INVALID'
GROUP BY OWNER, OBJECT_TYPE;
To recompile an individual object, connect to SQL*Plus as the owner of the object (generally APPS) and use one of the following, depending on the object type:
alter package <package_name> compile; (package specification)
alter package <package_name> compile body; (package body)
alter view <view_name> compile; (view)
If the object compiles with warnings, use either of the following to see the errors that caused the warnings:
show errors
OR
select * from user_errors where name = '<OBJECT_NAME>';
1. Log in to the OS as the APPS owner.
2. Start the ADADMIN utility from the Unix prompt with this command:
$adadmin
3. Under the Maintain Applications Database Objects menu, select Compile APPS schema(s).