Wednesday, December 5, 2007

SQL Performance in ERP 11i

SQL Statement Flow

  • Create Cursor - The cursor is the area in memory that holds the definition of a currently open SQL statement.
  • Parse SQL - Parsing SQL involves checking the SQL for syntax and object references. A check for a matching statement that is already in the shared pool is performed. If none is found, then an entry in the shared pool is created and an execution plan for the SQL is determined.
  • Bind Variables - Before the SQL statement can be executed, the values of any Bind variables- placeholders that represent values to be supplied by the calling program - must be associated with the SQL.
  • Execute SQL - If the SQL statement is not a query, then execute involves processing the DML or DDL statement in its entirety. If the SQL statement is a query, then execute prepares the statement for fetch operations.
  • Fetch Rows - For queries, fetching involves retrieving one or more rows from the result set.
  • Close Cursor - This involves releasing all the memory and resources associated with the cursor.
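The flow above can be made explicit with the DBMS_SQL package, which exposes each phase as a separate call. A minimal sketch, assuming a classic EMP table (the table and columns are illustrative only):

```sql
DECLARE
  l_cursor INTEGER;
  l_rows   INTEGER;
  l_name   VARCHAR2(100);
BEGIN
  -- Create cursor
  l_cursor := DBMS_SQL.OPEN_CURSOR;
  -- Parse SQL (syntax/object checks, shared pool lookup)
  DBMS_SQL.PARSE(l_cursor,
                 'SELECT ename FROM emp WHERE deptno = :dept',
                 DBMS_SQL.NATIVE);
  -- Bind variables
  DBMS_SQL.BIND_VARIABLE(l_cursor, ':dept', 10);
  DBMS_SQL.DEFINE_COLUMN(l_cursor, 1, l_name, 100);
  -- Execute SQL
  l_rows := DBMS_SQL.EXECUTE(l_cursor);
  -- Fetch rows
  WHILE DBMS_SQL.FETCH_ROWS(l_cursor) > 0 LOOP
    DBMS_SQL.COLUMN_VALUE(l_cursor, 1, l_name);
    DBMS_OUTPUT.PUT_LINE(l_name);
  END LOOP;
  -- Close cursor (release memory and resources)
  DBMS_SQL.CLOSE_CURSOR(l_cursor);
END;
/
```

Ordinary static SQL in PL/SQL performs the same phases implicitly; DBMS_SQL just makes them visible.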


Components of a SQL Statement
–Parse
–Execute
–Fetch


Parsing is the process of preparing your SQL statement for execution. This process is analogous to the process a language compiler or interpreter must undertake in order to translate high-level statements into machine code.
The parse process will check that the SQL statement is syntactically valid, that the SQL conforms to the rules of the SQL language, and that all keywords and operators are valid and correctly used.


Parse’s Duties:
• Translating a SQL statement, verifying it to be a valid statement
• Performing data dictionary lookups to check table and column definitions
• Acquiring parse locks on required objects so that their definitions do not change during the statement’s parsing
• Checking privileges to access referenced schema objects
• Determining the optimal execution plan for the statement
• Loading it into a shared SQL area
• Routing all or part of distributed statements to remote nodes that contain referenced data

Reducing Parse overheads:

  1. Always use Bind variables in preference to literals for SQL statements that will be reexecuted.
  2. Don't needlessly discard cursors that may be reexecuted later in your program.
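The bind-variable recommendation can be illustrated with a minimal sketch (the table and column names are hypothetical):

```sql
-- Literal values: each distinct order_id produces a new statement
-- that must be hard-parsed and stored separately in the shared pool.
EXECUTE IMMEDIATE
  'UPDATE orders SET status = ''SHIPPED'' WHERE order_id = ' || l_order_id;

-- Bind variable: one shared cursor is parsed once and reexecuted,
-- regardless of the order_id value supplied.
EXECUTE IMMEDIATE
  'UPDATE orders SET status = ''SHIPPED'' WHERE order_id = :id'
  USING l_order_id;
```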


Execute is the processing phase of query flow.
•If the SQL statement is not a query, then execute involves processing the DDL or DML statement in its entirety.
•If the SQL statement is a query, then execute prepares the statement for fetch operations.


Performance Considerations: Reviewing the number of joins, whether each join is necessary, and the selection criteria will ultimately determine the run time of this phase.


Fetch: In the fetch stage, rows are selected and ordered (if requested by the query), and each successive fetch retrieves another row of the result until the last row has been fetched.


Cost Based Optimizer

•The CBO is the decision maker on how a SQL query is actually processed.
•It uses information from within the database and the SQL statement itself to come up with the best plan.


The CBO performs the following steps:
•The optimizer generates a set of potential plans for the SQL statement based on available access paths and hints.
•The optimizer estimates the cost of each plan based on statistics in the data dictionary for the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the statement. The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of access paths and join orders based on the estimated computer resources, which includes I/O, CPU, and memory. Serial plans with higher costs take more time to execute than those with smaller costs. When using a parallel plan, however, resource use is not directly related to elapsed time.
•The optimizer compares the costs of the plans and chooses the one with the lowest cost.


The CBO is the decision maker on how a SQL query is going to be processed. It decides which tables will be joined and in what order. It also decides which selection criteria within a SQL statement are used and in what order.
The CBO takes the query, along with information from a number of different sources within the database, to decide the best plan of attack for a query. The CBO can review over a thousand plan combinations, costing each one based on the information it pulled from the database, and then selects the best plan according to its costing algorithm.
Within this framework, the CBO can be wrong for a number of reasons, ranging from incorrect input (e.g. old statistics) to not reviewing enough plans on a complex query and missing the optimal plan.


Query Transformer:
The input to the query transformer is a parsed query, which is represented by a set of query blocks. The query blocks are nested or interrelated to each other, and the form of the query determines how. The main objective of the query transformer is to determine whether it is advantageous to change the form of the query so that a better query plan can be generated.


STATISTICS


–Statistics are the primary tool the CBO uses to decide how to break down the query and process the table interactions.
–The CBO has other tools, including histograms, which provide more information built on top of the basic statistics.

  • A developer should be able to tell when the last time statistics were run on a table and how many rows the CBO thinks are in a table.
  • Can look in DBA_TABLES at num_rows and last_analyzed
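For example, a quick check against DBA_TABLES (the owner and table name are illustrative):

```sql
SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'OKC'
AND    table_name = 'OKC_K_LINES_B';
```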


What if the statistics are incorrect?

• Statistics can be refreshed
• Request DBA team to generate statistics and let them know the specifics:
–Do you need just one object regenerated, or the whole schema?
• It is possible to generate statistics yourself, but it is generally not recommended even if you have the access, due to the potential system performance issues.
• Since 8i, Oracle has provided a package, DBMS_STATS, that can be used to gather statistics for tables, indexes, and schemas. If interested, review Metalink for further details.
• Also within the Apps, there are certain concurrent programs that can also perform the function.
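As a sketch, a single-table gather with DBMS_STATS might look like the following; the schema and table names are illustrative, and in production this should normally go through the DBA team:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'OKC',
    tabname          => 'OKC_K_LINES_B',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- also gather statistics on the table's indexes
END;
/
```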


Families of tools

SQL Toolset : Those that can be implemented in SQL and SQL tools

The following tools are from the family of SQL tools and are the most utilized when gathering data on a performance issue. The tools many times are used in conjunction with each other to solve/analyze a problem.
•EXPLAIN PLAN
•METHODS OF TRACE
•TKPROF
•SQL QUERIES FOR PERFORMANCE


•SQL – explain plan
•Allows you to see what the CBO is seeing
•Can modify code and see how it changes the way the CBO will process
•Can let you know what indexes and tables are going to be used and how the data connections between tables are going to be handled


–When to use it?
•It gives a quick snapshot on a query to determine how it will run
•Lets you see if you are missing an index you thought you would hit
•Can be run in an environment without having to compile code or having access to compile code


–What it cannot do?
•It cannot tell you what adding an index will do. There is no way with these tools to do hypothetical analysis.
•There are other tools via OEM like virtual index and Hotsos Laredo that can allow some forms of impact analysis.


SQL – Explain Plan

–What is underlying structure?
A script called utlxplan.sql creates a table that defaults to PLAN_TABLE. SQL Navigator usually utilizes a plan table called SQLN_EXPLAIN_PLAN; this table may not exist in all environments.

Explain Plan - Manual


1. Create plan for your query:


EXPLAIN PLAN

SET statement_id=id

INTO table_name

FOR sql_statement


Example: -- Use apps.SQLN_EXPLAIN_PLAN for the table_name and anything you would like for the id.

EXPLAIN plan set statement_id = 'JPG123'
INTO apps.SQLN_EXPLAIN_PLAN
FOR
SELECT count(*)
FROM OKC_K_LINES_B A, OKC_K_LINES_B B
WHERE B.LSE_ID = 10 AND A.ID = B.CLE_ID
AND A.ID = 10;

2. Select explain text from database:

set heading off;
set pagesize 0;
SELECT rtrim(ltrim(plan_table_output,' '),' ')
FROM TABLE (DBMS_XPLAN.DISPLAY('SQLN_EXPLAIN_PLAN', statement_id, 'TYPICAL'));

Example: --Statement id from above was JPG123....
set heading off;
set pagesize 0;
SELECT rtrim(ltrim(plan_table_output,' '),' ')
FROM TABLE (DBMS_XPLAN.DISPLAY('SQLN_EXPLAIN_PLAN','JPG123','TYPICAL'));


SQL – Trace


–Data gathering tool that records information to be used by other tools
–In certain situations the raw trace can provide valuable information, including bind values.
–Trace records only the SQL, not any PL/SQL. This is acceptable for most cases, since SQL accounts for roughly 90% of all performance issues.


–Types of Tracing (trace, trace w/ binds, trace w/ waits, trace w/ binds and waits)
•You can initiate with different info …
•What is a wait?
•What is a bind value?
•You can rebuild a query with its binds to see what data was actually run


–Tips/Tricks with Trace files?
•Can find out what values used (binds)
–Important note - the bind numbers in the trace do not correspond directly between the query's placeholders and the listed bind values
•Can use it to estimate how fast code is moving
–Take a 15-minute trace, find a query for which you know how many rows need processing, and see how many it got done in 15 minutes; this can help you extrapolate a completion time.
•Can use it to find out where a process is at
–If you have the whole trace going, then run tkprof at any time and review how many times a specific query has run. For example, if I know I have a query that runs once per line, and I have 10,000 lines, and when I run trace/tkprof I see 7,200 executions, then I know it has completed 7200/10000.


Methods of Turning on Trace
•Current SQL Session
•Another SQL Session
•Concurrent Program
•Individual instance of Concurrent Program
•Form


Tracing your own session is simple.
SET IDENTIFIER FOR SESSION (Optional)
SQL> alter session set tracefile_identifier='MYSESSION';
ENABLE TRACE
SQL> alter session set events '10046 trace name context forever, level 8'
- sets extended SQL trace to level 8.
DISABLE TRACE
SQL> alter session set events '10046 trace name context off'


Alter session set tracefile_identifier='Garringer';
•Up to 64 characters will be tacked on to the end of your trace file name.
•These traces are created in the User Dump Destination.
•Check the User Dump Destination parameter on your system for the location of these trace files.


DBMS_SUPPORT package is preferred over DBMS_SYSTEM and is fully supported by Oracle.
The base install script is dbmssupp.sql script and is run as sysdba. Located in $ORACLE_HOME/rdbms/admin directory.
Start Tracing -- equivalent to level 8 tracing; binds=>true would be equivalent to level 12 tracing
SQL> execute sys.dbms_support.start_trace_in_session(&&SID, &&SERIAL, waits=>true, binds=>false);
Stop Tracing SQL> execute sys.dbms_support.stop_trace_in_session(&&SID, &&SERIAL);


How do I find my trace file?
General Method
Find server you are on
select instance_number, instance_name, host_name from v$instance;
Find directory for trace files
select value from v$parameter where name = 'user_dump_dest';

From Unix Box
cd to $TRACE_TOP
Important: Need to make sure you are on right Unix server


WAIT ANALYSIS
•For timed wait event collecting you must have the parameter timed_statistics=true. You can do this at the system level or session level.

TRACE FILE SIZE
•You should also verify the max_dump_file_size. If it is too small, you will receive a message like the following in your trace file:

WAIT #7: nam='db file sequential read' ela= 88 p1=6 p2=37500 p3=1 WAIT #7: nam='db fil *** DUMP FILE SIZE IS LIMITED TO 10000000 BYTES ***
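Both settings can be adjusted at the session level before tracing, for example:

```sql
ALTER SESSION SET timed_statistics = TRUE;
-- 'unlimited' avoids truncated trace files; a size in OS blocks can also be given
ALTER SESSION SET max_dump_file_size = unlimited;
```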


SQL Trace & Concurrent Program

How to Start/Stop for all instances of a concurrent program
Steps:
1.Login to System Administrator Responsibility
2.Go to Concurrent : Program : Define
3.Query up concurrent program
4.Click Enable Trace Button
Note: This enables trace for all instances of this concurrent program run after the button is selected
In production, you will need to request this from the DBA team.


How to Start/Stop for a particular instance of a concurrent program
1.Verify that Profile value for “Concurrent: Allow Debugging” is set to Yes
2.When submitting a concurrent program, if the profile is Yes, then the debug options will be enabled
3. Click on Debug Options
4. Check SQL Trace and select type of trace and then click on OK

Concurrent Program - How to locate the trace file
1.Determine the instance the program is running on
select Logfile_node_name
from fnd_concurrent_requests
where request_id = l_request_id;
2.Translate the instance name into the inst_id
– select * from gv$instance shows mapping between Inst_id and the instance name
IF l_logfile_node_name = 'SUN4769' THEN l_inst := 1;
ELSIF l_logfile_node_name = 'SUN4229' THEN l_inst := 2;
ELSIF l_logfile_node_name = 'SUN4676' THEN l_inst := 3;
ELSE l_inst:= 4;
END IF;
3. Go to $TRACE_TOP on that server and grep for your request id

Form Trace:

Initiate a Trace
1.Select Help
2.Select Diagnostics
3.Select Trace
4.Select type of Trace you would like

Stop a Trace
To stop the trace perform the same steps
as the initiate but select “No Trace”


How to locate trace file
– When Trace is initiated a message window will appear with the trace file information. In the location, the server name and the file name are provided.


SQL – TKPROF


Tkprof is an executable that 'parses' Oracle trace files to produce more readable output. Remember that all the information in TkProf is available from the base trace file.
If you have a system that is performing badly, a good way to identify problem SQL statements is to trace a typical user session and then use TkProf to format the output using the sort functions on the tkprof command line.
There are a huge number of sort options that can be accessed by simply typing 'TkProf' at the command prompt. A useful starting point is the 'fchela' sort option which orders the output by elapsed time fetching (remember that timing information is only available with timed_statistics set to true in the "init.ora" file). The resultant .prf file will contain the most time consuming SQL statement at the start of the file.
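A typical invocation using the 'fchela' sort might look like this (file names are illustrative):

```shell
# Format a raw trace, ordering statements by elapsed time spent fetching;
# sys=no suppresses recursive SYS statements.
tkprof mysid_ora_12345_MYSESSION.trc mysession.prf sort=fchela sys=no
```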

Column Meanings
===============
call : Statistics for each cursor's activity are divided into 3 areas:
Parse: statistics from parsing the cursor. This includes information for plan generation etc.
Execute: statistics for the execution phase of a cursor
Fetch : statistics for actually fetching the rows
count : number of times we have performed a particular activity on this particular cursor
cpu: cpu time used by this cursor
elapsed: elapsed time for this cursor
disk: This indicates the number of blocks read from disk. Generally you want to see blocks being read from the buffer cache rather than disk.
query : This column is incremented if a buffer is read in Consistent mode. A Consistent mode buffer is one that has been generated to give a consistent read snapshot for a long running transaction. The buffer actually contains this status in its header.
current: This column is incremented if a buffer found in the buffer cache that is new enough for the current transaction and is in current mode (and it is not a CR buffer). This applies to buffers that have been read in to the cache as well as buffers that already exist in the cache in current mode.
rows: Rows retrieved by this step


How to Run a TKPROF?
TKPROF COMMAND LINE
Usage: tkprof tracefile outputfile [explain= ] [table= ]
[print= ] [insert= ] [sys= ] [sort= ]
table=schema.tablename Use 'schema.tablename' with 'explain=' option.
explain=user/password Connect to ORACLE and issue EXPLAIN PLAN.
print=integer List only the first 'integer' SQL statements.
aggregate=yesno
insert=filename List SQL statements and data inside INSERT statements.
sys=no TKPROF does not list SQL statements run as user SYS.
record=filename Record non-recursive statements found in the trace file.
sort=option Set of zero or more sort options

–Potential Issues
•Run . oraenv first, or timing will be incorrect
•Explain plans are generated as of now, not as of when the statements actually ran


SQL QUERIES FOR PERFORMANCE

Difference between v$ and gv$ tables –
In the database, oracle provides a number of tables that begin with v$ and gv$. An example is v$session and gv$session. The main difference between the v$ and the gv$ tables is the scope that they view. The v$ tables only return information for the server it is executed from.
If you are on sun1089 and select * from v$session you will only see information on sessions from sun1089. On the other hand, if you select * from gv$session, you will get session information on all sessions on all servers in the RAC’d environment. Each record will contain an instance id that differentiates the server.
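For example, the same session lookup at both scopes:

```sql
-- Current instance only
SELECT sid, serial#, username FROM v$session;

-- All instances in the RAC'd environment; inst_id identifies the server
SELECT inst_id, sid, serial#, username FROM gv$session;
```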


–Useful Queries
•General Query for bad explain plans
–sort by elapsed time
–search for full table scans
–can search by schema
–can search by table
–can search by a specific module or specific query
•Query that looks at concurrent requests
•Query to find large number of reparses
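A general starting query of this kind, ordered by elapsed time, might look like the following sketch (the row limit is arbitrary):

```sql
-- Top SQL by elapsed time, with parse counts to spot heavy reparsing
SELECT *
FROM  (SELECT inst_id, sql_text, module, executions,
              parse_calls, elapsed_time, disk_reads
       FROM   gv$sqlarea
       ORDER  BY elapsed_time DESC)
WHERE ROWNUM <= 25;
```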

Find out what SQL a concurrent program is currently running
1.Determine the oracle process id and the instance the program is running on
select oracle_process_id , Logfile_node_name
into l_oracle_process_id, l_logfile_node_name
from fnd_concurrent_requests
where request_id = l_request_id;
dbms_output.put_line('****************************************************');
dbms_output.put_line('Request ID: ' || l_request_id);
dbms_output.put_line('Oracle Process ID(SPID): ' || l_oracle_process_id);
dbms_output.put_line('Logfile Node: ' || l_logfile_node_name);
2.Translate the instance name into the inst_id
– select * from gv$instance shows mapping between Inst_id and the instance name
IF l_logfile_node_name = 'SUN4769' THEN l_inst := 1;
ELSIF l_logfile_node_name = 'SUN4229' THEN l_inst := 2;
ELSIF l_logfile_node_name = 'SUN4676' THEN l_inst := 3;
ELSE l_inst:= 4;
END IF;
3. Use oracle process id and instance id to get the SID value
select to_char(b.sid)
into l_sid
from gv$process a, gv$session b
where addr = paddr
and a.inst_id = b.inst_id
and b.inst_id = l_inst
and a.spid = l_oracle_process_id;
dbms_output.put_line('SID: ' || l_sid);
4 Use Sid and Inst_id to determine sql statement that is currently running….
BEGIN
SELECT a.sql_text, a.hash_value, a.address, a.executions
Into l_sql_text, l_hash_value, l_address, l_executions
from gv$session s, gv$sqlarea a
where s.sql_address = a.address
and a.inst_id = s.inst_id
and a.inst_id = l_inst
and s.sid = l_sid;
dbms_output.put_line('Hash value: ' || l_hash_value);
dbms_output.put_line('Address: ' || l_address);
dbms_output.put_line('Executions: ' || l_executions);
dbms_output.put_line('sql_text: ' || l_sql_text);


•--To find your session's sid and oracle spid
select to_char(b.sid), a.spid
from gv$process a, gv$session b
where addr = paddr
and a.inst_id = b.inst_id
and b.sid in (select sid from v$session
where audsid =sys_context('userenv','SESSIONID'))

PROFILER TOOL

•Since SQL is usually attributable to over 90% of the performance problems a developer fixes, this tool performs basically the same function as SQL trace.
•The major advantage is that it provides a quicker path to the bad queries, using an algorithm based on percentage of time spent.

Profiler Comprised of
–Oracle DBMS_PROFILER package which is installed by a DBA and is owned by the SYS user
–Three tables that are created either in the user's schema or in a shared schema with synonyms. This is where the information from the profiling goes.

Profiler Table Structure

  1. PLSQL_PROFILER_UNITS
    -- UNIT_NAME (NAME)
    -- UNIT_TYPE (TYPE)
    -- UNIT_OWNER (OWNER)
  2. PLSQL_PROFILER_DATA
    -- Total Occurrences
    -- Total Time
    -- Minimum Time
    -- Maximum Time
    -- Line# (Line)
  3. PLSQL_PROFILER_RUNS
    -- Run Id
    -- Run Date
    -- Run Comment
  4. ALL_SOURCE

Standard setup for DBMS_Profiler
Only use dbms_profiler.start_profiler and dbms_profiler.stop_profiler.
Do not use the other commands, including flush; this limits the size of the code that can be profiled, but in our testing it has not inhibited our ability to perform a profile.

Profiler Example:


declare
TheRetcode varchar2 (1);
TheErrbuf varchar2 (2000);
err number;
BEGIN
err := DBMS_PROFILER.START_PROFILER ('PrevalTest: ' || to_char(sysdate,'dd-Mon-YYYY hh:mi:ss'));
dbms_output.put_line('PROFILER START= ' || err);
ASC_CFS_IB_INTF_VALIDAPI_PKG.ASC_CFS_IB_INTF_MAINBODY_RUN (TheErrbuf, TheRetcode);
dbms_output.put_line('ErrBuf= ' || TheErrbuf);
dbms_output.put_line('Retcode= ' || TheRetcode);
err := DBMS_PROFILER.STOP_PROFILER;
dbms_output.put_line('PROFILER STOP= ' || err);
END;


DBMS_Profiler.Start

•Command starts the profiler run
•The passed-in parameter gives a user-designated name to the profiler run



DBMS_Profiler.Stop


•This command stops the profile being run
•It also flushes the information stored in memory into the three tables.


Forms Runtime Diagnostic (FRD)


JSP Diagnostic Trace

1.Set profile FND : Diagnostics to Yes at user level
2.Login to Self Service as the above user
3.Click on Diagnostics icon at the top of page. Select Set Trace Level and click Go. Select the desired trace level (trace with binds and waits) and click Save
4.Perform the activity that you want to trace (pick up owner)
5.Exit application
6.Go to user_dump_dest to find out trace file


Performance Analysis


–Analysis
•Define/determine the issue. It may not be possible to understand why there is an issue, but you should be able to narrow in on it using the data tools.
•Locate the poor-performing statement within the code and try to understand the context in which it is used.
•Determine whether it is an issue with the code or with the SQL

Assumptions –
statistics are accurate....
Baseline data has been gathered and analyzed


Improving the code statements:

Add indexes –
Adding indexes has no impact on what rows are returned from the query, just how the data is gathered.
This can be risky, as it has the potential to impact other statements on the same table.
Beyond impacting other statements, indexes will slow Insert, Update, and Delete operations.
Therefore, on any table that is heavily pounded with Inserts, Updates, and Deletes, a new index could cause some code to slow down.
Also, an index will not necessarily improve the situation if you are dealing with a heavily skewed column.


Adding hints --
Many times bad performance can be attributed to the cost based optimizer choosing a bad explain plan.
The hints have the capability of helping to direct Oracle on what path is the best. The reason you might have a better idea is that you know the data makeup better than Oracle.
One benefit of this approach is that it is localized to a particular statement. However, there is no guarantee that the optimizer will use the hints.
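As a sketch, a hint directing the optimizer to a specific index looks like this (the index name and bind variable are illustrative):

```sql
SELECT /*+ INDEX(l OKC_K_LINES_B_N1) */
       l.id, l.lse_id
FROM   okc_k_lines_b l
WHERE  l.chr_id = :contract_id;
```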


Rewriting Queries –
If, after working with the indexes and the hints, you still determine that the query is too slow, then you can attempt to rewrite the query using performance tips and tricks.
These tricks range from utilizing EXISTS instead of IN, using decode and sign functions instead of multiple OR loops, and using the SQL WITH clause to reduce redundant processing in queries. There are many more ways to modify a query, without changing the fundamental joins, that will cause it to run faster.
Rewriting the query also means evaluating the actual tables involved and determining whether they are all necessary, and whether there are better tables.
A good example: you are utilizing a view in a query that has 5 table joins in it, and all you are looking for is a small subset from the smallest table in the view. In these cases, a lot of times the view was chosen for ease of coding, and the whole query can be rewritten to go against the single table; that alone will provide a large improvement.
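The EXISTS-instead-of-IN trick can be sketched as follows (tables and predicates are illustrative):

```sql
-- Original: IN with a subquery
SELECT h.contract_number
FROM   okc_k_headers_b h
WHERE  h.id IN (SELECT l.chr_id FROM okc_k_lines_b l WHERE l.lse_id = 10);

-- Rewrite: correlated EXISTS, which can stop at the first matching line
SELECT h.contract_number
FROM   okc_k_headers_b h
WHERE  EXISTS (SELECT 1 FROM okc_k_lines_b l
               WHERE  l.chr_id = h.id AND l.lse_id = 10);
```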


Redesigning the User perception of performance –
We will look at the contracts QA process. Currently the QA process requires users to move a contract from QA hold to Active on the screen and forces them to be active participants in the process. Until the QA finishes the user cannot do anything else on their pc. If it takes 15 minutes to perform a QA then they wait 15 minutes and lose that from other productive activities.
The performance question is how do I make this passive instead of active, or how do I move it to the background. A user would be much happier with the 15 minutes of processing time, if it is running in the background and they can go on with other activities and return to it in a half hour.
Therefore, a performance gain can be obtained by a shift in the paradigm of the user activity. By moving the QA to a concurrent program we shift the impact away from the user, increasing their willingness to wait and in that regard decreasing the amount of pain they receive from 15-minute QAs.
Although performance has not really been improved, it has been shifted in such a way, that the end result is a performance improvement at the user level for their experience with the software they are working with.


Reducing Number of times a query runs --

In the coding process, there are many instances where the code itself is efficient, but because of the PL/SQL design it is run an exorbitant number of times. In these cases, you can attempt to tune the query all day, but you are just not going to get enough incremental gain to warrant the effort. What really needs to happen is a review of the whole code block to determine a more efficient way of running through the code, or of reducing the number of times it is necessary.
There are two quick examples of this type of activity. The first is embedded for loops. They are very easy to develop and follow logically, but many times they produce a multiplier on the innermost for loop's queries. For example, you have a for loop that has a for loop in it that has a for loop in it, and in that one is a query. That query is going to run (outer loop count * 1st inner loop count * 2nd inner loop count) times. Many times it is slightly more complex, but thousands of times more efficient, to get all the data in one cursor loop and then loop through it programmatically in PL/SQL. Calls to the database account for over 90% of the time a piece of code runs for, and are the most likely cause of performance issues.
The second example is a higher-level overall design example. Currently in Oracle Contracts we run a QA process, a set of PL/SQL scripts that validate that a contract is OK, when we take a contract from Entered status to Active status, or when we make changes to a contract from QA Hold status back to Active status. If someone goes in and updates a few rows on a contract or adds a few rows, the current QA process not only interrogates those rows, it also revalidates every other row on the contract. For example, a contract with 100,000 rows where 5,000 are added would cause the QA to revalidate all 105,000 rows. This is excessive, since the original 100,000 did not change. If we take a step back and ask how we can validate only the 5,000, that will produce a magnitude of performance improvement exceeding anything we can do by improving a couple of queries, because in Contracts skipping the 100,000 unchanged rows is equivalent to not performing 1,000,000 queries.
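The nested-loop multiplier from the first example can be sketched as follows (all table and column names are hypothetical):

```sql
-- Inefficient: the innermost query runs once per header x line x sub-line
FOR h IN (SELECT id FROM headers) LOOP
  FOR l IN (SELECT id FROM lines WHERE hdr_id = h.id) LOOP
    FOR s IN (SELECT id FROM sub_lines WHERE line_id = l.id) LOOP
      SELECT status INTO l_status FROM details WHERE sub_line_id = s.id;
    END LOOP;
  END LOOP;
END LOOP;

-- Better: one joined cursor, then process the rows in PL/SQL
FOR r IN (SELECT d.sub_line_id, d.status
          FROM   headers h, lines l, sub_lines s, details d
          WHERE  l.hdr_id = h.id
          AND    s.line_id = l.id
          AND    d.sub_line_id = s.id) LOOP
  NULL;  -- process r programmatically here
END LOOP;
```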


Analysis


Review possible modifications that can improve things. Tend to peel back layers and ask lots of questions about what the code is trying to do and what it is actually doing.
–Full table scans are not always bad, but if one is unexpected it probably is
–Using views with too many extra tables
–Implicit datatype conversions
–Selectivity of data skewed – Histogram
–Indexes not as selective as you thought – just because an index is used does not mean it is the best it can be
–Bad SQL – can you rewrite to improve
–Index Potential – can I help with existing indexes
–Index Potential -- can I add a new index
–Can I avoid doing this query altogether
–Hints – benefits and dangers
•Can improve a query, but is susceptible to CBO changes
•How to implement
•Different kinds of hints

The Process:


•Step #1 – Understand the problem

-- Usually something is too slow… but what do they mean by that?
Example: Going from A to B is slow
-- Is it always the case or just during certain times? When is it bad? Does it happen in test or just in production?
-- Does everyone experience it, or just particular business units or locations?


Step #2 – gather the data for analysis
-- Need trace information
-- Understand what is happening in GV$SQL_PLAN
This is dependent on the problem but the goal is to acquire all relevant information using methods described earlier
Example: A long running concurrent request set… I tend to work from the worst timings inward… so I will look at the concurrent programs and determine which one is the worst and get analysis for that one….


Step #3 – Review the Data asking lots of questions
-- The information here is in the context of what you found in Steps #1 and #2.
This is where we start analyzing the explain plan, the trace file, tkprof, and the internal perf tables like gv$sql_plan.
Some key elements:
-- Look for the obvious: full table scans when the query has an equality predicate, e.g. line_number = 123456.
-- Look for mismatches between the number of rows evaluated versus the plan estimate, e.g. skewed data that causes an index access to return a large number of rows when it should have been a small number.


Step #4 -- Brainstorm on solution

I will try to come up with some different possibilities. Usually this is actually the hardest part of the process and the one requiring the most creativity.
The first question is usually how intrusive the solution will be, which usually relates directly to the risk of impacting other areas or data integrity.
This tends to be very problem specific, but the usual list I go through is:
1.Will an index help
2.Can I add a hint to the query
3.Would a histogram help
4.Can I rewrite the query ie. Change dnz_chr_id to chr_id or remove a view
5.Can I rewrite the code to avoid problem
6.Can I change the way this code runs…


DBA team tasks:
–Can initiate a trace in a different session
–Have access to other tables that can help find overall bad queries
–Monitor System wide setups and settings

Oracle parameters for trace
•Init.ora
–udump
–max size of tracefile
•V$session

•There are many tools that can be utilized to help better understand what is happening inside of code.
•Proper use of these tools will point you in the right direction and help you narrow down to the problem.
•Basic tuning concepts can alleviate a large number of performance problems
