Sunday, May 18, 2008

Oracle Mgmt

Take Control of Oracle Monitoring



Most business-critical applications are database-driven. The Oracle database management capability helps database administrators seamlessly detect, diagnose, and resolve Oracle performance issues and monitor Oracle 24x7. The database server monitoring tool is agentless monitoring software that provides out-of-the-box performance metrics and helps you visualize the health and availability of an Oracle database server farm. Database administrators can log in to the web client and visualize the status and Oracle performance metrics.


Applications Manager also provides out-of-the-box reports that help analyze the database server usage, Oracle database availability and database server health.

Additionally, the grouping capability lets you group your databases based on the business process they support, helping the operations team prioritize alerts as they are received.

Some of the components that are monitored in an Oracle database are:

Response Time
User Activity
Status
Table Space Usage
Table Space Details
Table Space Status
SGA Performance
SGA Details
SGA Status
Performance of Data Files
Session Details
Session Waits
Buffer Gets
Disk Reads
Rollback Segment
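To make the table space metrics above concrete, here is a minimal sketch of the kind of threshold check such a monitor performs. The tablespace names, sizes, and threshold percentages are made-up illustrations, not values from any real Oracle instance or from Applications Manager itself.

```python
# Hypothetical tablespace-usage check: classify each tablespace by percent used.
# All figures below are illustrative, not taken from a real Oracle database.

def tablespace_alert(used_mb, total_mb, warn_pct=80, crit_pct=90):
    """Return (severity, percent_used) for one tablespace."""
    pct = 100.0 * used_mb / total_mb
    if pct >= crit_pct:
        return "critical", pct
    if pct >= warn_pct:
        return "warning", pct
    return "ok", pct

# Sample (invented) tablespace figures: name -> (used MB, total MB)
tablespaces = {"SYSTEM": (850, 1000), "USERS": (420, 1000)}
for name, (used, total) in tablespaces.items():
    level, pct = tablespace_alert(used, total)
    print(f"{name}: {pct:.0f}% used -> {level}")
```

In a real deployment the used/total figures would come from the database's data dictionary views rather than a hard-coded dictionary.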


Note: Oracle Application Server performance monitoring is also possible in Applications Manager.
Oracle Management Capabilities
Out-of-the-box management of Oracle availability and performance.
Monitors performance statistics such as user activity, status, table space, SGA performance, session details, etc. Alerts can be configured for these parameters.
Based on the thresholds configured, notifications and alerts are generated. Actions are executed automatically based on configurations.
Performance graphs and reports are available instantly. Reports can be grouped and displayed based on availability, health, and connection time.
Delivers both historical and current Oracle performance metrics, providing insight into performance over a period of time.

WebSphere Monitoring

Take Control of WebSphere Management


WebSphere Server is one of the leading J2EE™ application servers in today’s marketplace. Applications Manager, a tool for monitoring the performance and availability of applications and servers, helps with IBM WebSphere management.


Applications Manager automatically diagnoses, notifies, and corrects performance and availability problems not only with WebSphere Servers, but also with the servers and applications in the entire IT infrastructure.

WebSphere monitoring involves delivering comprehensive fault management and proactive alert notifications, checking for impending problems, triggering appropriate actions, and gathering performance data for planning, analysis, and reporting.

Some of the components that can be monitored in WebSphere are:

JVM Memory Usage
Server Response Time
CPU Utilization
Metrics of all web applications
User Sessions and Details
Enterprise JavaBeans (EJBs)
Thread Pools
Java Database Connectivity (JDBC) Pools
Custom Application MBeans (JMX) attributes
WebSphere Management Capabilities
Out-of-the-box management of WebSphere availability and performance - checks whether it is running and executing requests.
WebSphere monitoring in Network Deployment mode is also provided.
Monitors performance statistics such as database connection pool, JVM memory usage, user sessions, etc. Alerts can be configured for these parameters.
Based on the thresholds configured, notifications and alerts are generated. Actions are executed automatically based on configurations.
Performance graphs and reports are available instantly. Grouping of reports, customized reports and graphs based on date is available.
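The threshold-driven alerting described above can be sketched as a small rule table: when a monitored WebSphere statistic crosses its configured threshold, an alert is raised and any associated action runs. The metric names, limits, and the action hook are hypothetical illustrations, not Applications Manager's actual configuration model.

```python
# Illustrative threshold-to-alert evaluation. Metric names and limits are
# invented for the example; a real monitor would poll these from the server.

THRESHOLDS = {
    "jvm_heap_used_pct": 85,
    "db_pool_in_use_pct": 90,
    "active_sessions": 500,
}

def evaluate(metrics, actions=None):
    """Return the names of breached metrics, invoking any configured action."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(name)
            if actions and name in actions:
                actions[name](value)   # e.g. send an e-mail or run a script
    return alerts

sample = {"jvm_heap_used_pct": 91, "db_pool_in_use_pct": 40, "active_sessions": 120}
print(evaluate(sample))   # only the heap metric exceeds its limit
```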

Start the Tivoli Performance Viewer

1. Start the Tivoli Performance Viewer from the command line:
tperfviewer.bat (Windows) or tperfviewer.sh (UNIX) host_name port_number connector_type
For example:
tperfviewer.bat localhost 8879 SOAP
connector_type can be either SOAP or RMI. The port numbers for the SOAP/RMI connectors can be configured in the Administrative Console under Servers > Application Servers > server_name > End Points.
If you are connecting to WebSphere Application Server, use the appserver host and connector port. If additional servers have been created, use the port of the server whose data is required. Tivoli Performance Viewer will only display data from one server at a time when connecting to WebSphere Application Server.
If you are connecting to WebSphere Application Server Network Deployment, use the deployment manager host and connector port. Tivoli Performance Viewer will display data from all the servers in the cell. Tivoli Performance Viewer cannot connect to an individual server in WebSphere Application Server Network Deployment.

Default Port   Description
8879           SOAP connector port for WebSphere Application Server Network Deployment.
8880           SOAP connector port for WebSphere Application Server.
9809           RMI connector port for WebSphere Application Server Network Deployment.
2809           RMI connector port for WebSphere Application Server.

You can also start the Tivoli Performance Viewer with security enabled.
On iSeries, you can connect the Tivoli Performance Viewer to an iSeries instance from a Windows, AIX, or UNIX client as described above. To discover the RMI or SOAP port for the iSeries instance, start Qshell and enter the following command:
WAS_HOME/bin/dspwasinst -instance myInstance
2. On Windows, you can instead click Start > Programs > IBM WebSphere Application Server v5.0 > Tivoli Performance Viewer. Tivoli Performance Viewer detects which package of WebSphere Application Server you are using and connects using the default SOAP connector port. If the connection fails, a dialog is displayed so that you can provide new connection parameters.
To connect to a remote host or a different port number, start the performance viewer from the command line.
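The default-port table above can be captured as a small lookup that builds the tperfviewer command line. The port values come from the text; the function itself (and the "base"/"ND" package labels) are only an illustration.

```python
# Default connector ports from the table above; "ND" means Network Deployment,
# "base" means standalone WebSphere Application Server. The helper function
# is a hypothetical convenience, not part of any IBM tool.

DEFAULT_PORTS = {
    ("SOAP", "ND"):   8879,
    ("SOAP", "base"): 8880,
    ("RMI",  "ND"):   9809,
    ("RMI",  "base"): 2809,
}

def tperfviewer_command(host, connector="SOAP", package="ND"):
    """Build the tperfviewer command line using the default connector port."""
    port = DEFAULT_PORTS[(connector, package)]
    return f"tperfviewer.bat {host} {port} {connector}"

print(tperfviewer_command("localhost"))   # matches the worked example above
```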

Monitoring performance with Tivoli Performance Viewer


Overview

The Resource Analyzer has been renamed Tivoli Performance Viewer.
Tivoli Performance Viewer (which is shipped with WebSphere) is a Graphical User Interface (GUI) performance monitor for WebSphere Application Server. Tivoli Performance Viewer can connect to a local or to a remote host. Connecting to a remote host minimizes the performance impact on the appserver environment.
Monitor and analyze the data with Tivoli Performance Viewer with these tasks:
1. Start the Tivoli Performance Viewer.
2. Set performance monitoring levels.
3. View summary reports.
4. (Optional) Store data to a log file.
5. (Optional) Replay a performance data log file.
6. (Optional) View and modify performance chart data.
7. (Optional) Scale the performance data chart display.
8. (Optional) Refresh data.
9. (Optional) Clear values from tables and charts.
10. (Optional) Reset counters to zero.

AIX 6 and WPAR based system virtualization

With the release of AIX 6.1 in the last part of 2007, IBM introduced a new
virtualization capability called workload partitions (WPAR). WPAR is a purely
software partitioning solution that is provided by the operating system. It has no
dependencies on hardware features.
AIX 6 is available for POWER4, POWER5, POWER5+, and POWER6. WPAR
can be created in all these hardware environments.
WPAR provides a solution for partitioning one AIX operating system instance into multiple
environments: each environment, called a workload partition, can host
applications and isolate them from applications executing within other WPARs.
Figure 1-2 shows that workload partitions can be created within multiple AIX
instances of the same physical server, whether they execute in dedicated LPARs
or micropartitions.


System WPARs

A system WPAR is similar to a typical AIX environment. Each system WPAR has
dedicated writable file systems, although it can share the global environment's /usr
and /opt filesystems in read-only mode. When a system WPAR is started, an init
process is created for this WPAR, which in turn spawns other processes and
daemons. For example, a system WPAR contains an inetd daemon to allow
complete networking capability, making it possible to remotely log into a system
WPAR. It also runs a cron daemon, so that execution of processes can be
scheduled.

Chapter 1. Introduction to Workload Partitions (WPAR) Technology in AIX 6
Draft Document for Review August 6, 2007 12:52 pm 7431CH_INTRODUCTION.fm

Application WPARs

There are two different types of workload partitions. The simplest is the application
WPAR. It can be viewed as a shell which spawns an application and can be
launched from the global environment. This is a lightweight application resource:
it does not provide remote login capabilities for end users, it only contains a small
number of processes, all related to the application, and it uses the services of the
global environment's daemons and processes.
It shares the operating system filesystems with the global environment. It can be
set up to receive its application filesystem resources from disks owned by the
hosting AIX instance, or from an NFS server.
Figure 2-3 shows the relationship of an application WPAR's
filesystems to the default global environment filesystems. The filesystems that
are visible to processes executing within the application WPAR are the global
environment filesystems, as shown by the relationships in the figure.
If an application WPAR accesses data on an NFS-mounted filesystem, this
filesystem must be mounted in the global environment directory tree. The mount
point is the same whether viewed from within the WPAR or from
the global environment. The system administrator of the NFS server must
configure the /etc/exports file so that filesystems are exported to both the global
environment IP address and the application WPAR IP address.
Processes executing within an application WPAR can only see processes that are
executing within the same WPAR. In other words, the use of Inter-Process
Communication (IPC) by application software is limited to the set of processes
within the boundary of the WPAR.
Application WPARs are temporary objects. The life-span of an application
WPAR is the life-span of the application it hosts. An application WPAR is created
at the time the application process is instantiated, and is
destroyed when the last process running within the application partition exits. An
application WPAR is a candidate for mobility: it can be started in one LPAR and
relocated to other LPARs during the life of its hosted application process.
Figure 2-3 File system relationships from the global environment to the Application WPAR
2.5 System WPARs
The second type of WPAR is a system WPAR. A system WPAR provides a typical
AIX environment for executing applications, with some restrictions. A system
WPAR has its own runtime resources. It contains an init process that can spawn
daemons. For example, it has its own inetd daemon to provide networking
services, and its own System Resource Controller (SRC).
Every system WPAR has its own unique set of users, groups, and network
interface addresses. The users and groups defined within a system WPAR are
completely independent from the users and groups defined at the global
environment level. In particular, the root user of the WPAR only has superuser
privileges within this WPAR, and has no privilege in the global environment (in
fact, the root and other users defined within the WPAR cannot even access the
global environment). In the case of a system partition hosting a database server,
the DB administrator can, for example, be given root privilege within the DB
WPAR without being given any global environment privilege.
The environment provided by a system WPAR to its hosted applications and
processes is a complete chroot'ed AIX environment, with access to all the AIX system
files that are available in a native AIX environment. The creation of a system
WPAR includes the creation of a base directory, referred to as the base directory
in the WPAR documentation. This base directory is the root of the chroot system
WPAR environment. By default, the path to this base directory is
/wpars/<wpar_name> in the global environment.
By default, the base directory contains 7 filesystems:

/, /home, /tmp and /var are real filesystems, dedicated to the system partition's use.
/opt and /usr are read-only namefs mounts over the global environment's /opt and /usr.
The /proc pseudo-filesystem maps to the global environment's /proc pseudo-filesystem (/proc in a WPAR only makes available process information for that WPAR).
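The base-directory mapping described above means that a path seen inside a system WPAR corresponds to the same path under /wpars/<wpar_name> in the global environment. A tiny sketch of that translation, using the WPAR name titian from the examples that follow (the function itself is purely illustrative):

```python
# Translate a path as seen inside a system WPAR to the path the global
# environment uses for it. Illustrative only; AIX performs this via chroot.
import posixpath

def global_path(wpar_name, wpar_path):
    """Map a WPAR-visible path to its location under the WPAR base directory."""
    return posixpath.join("/wpars", wpar_name, wpar_path.lstrip("/"))

print(global_path("titian", "/var/log"))   # /wpars/titian/var/log
print(global_path("titian", "/home"))      # /wpars/titian/home
```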
Figure 2-4 depicts an overview of these filesystems, viewed from the global
environment and from within the system WPAR. In this example, a WPAR called
titian is hosted in an LPAR called saturn. Although the diagram shows the global
environment using a VIOS with two vscsi adapters and virtual disks, with AIX
native MPIO for a highly available rootvg, the system could equally be set up
and supported with physical adapters and disks.
Figure 2-4 Filesystems relationship from the Global Environment to the System WPAR
In this figure, boxes with a white background symbolize real filesystems, while boxes
with orange backgrounds symbolize links. The gray box labeled titian shows the
pathnames of the filesystems as they appear to processes executing within the
system WPAR. The gray box labeled saturn shows the pathnames of the
filesystems used within the global environment, as well as the basedir mount
point below which the system WPAR partitions are created.
Example 2-1 shows the /wpars directory created within the global environment to
host the base directories of the WPARs created in that environment.
Example 2-1 Listing files in the global environment
root: saturn:/ --> ls -ald /wpars
drwx------ 5 root system 256 May 15 14:40 /wpars
root: saturn:/ -->
Looking inside the /wpars directory, there is now a directory for
titian, as shown in Example 2-2.
Example 2-2 Listing /wpars in the global environment
root: saturn:/wpars --> ls -al /wpars
drwx------ 3 root system 512 May 1 16:36 .
drwxr-xr-x 23 root system 1024 May 3 18:06 ..
drwxr-xr-x 17 root system 4096 May 3 18:01 titian
In Example 2-3, we see the mount points for the operating system filesystems of
titian, as created from saturn to generate this system WPAR.
Example 2-3 Listing the contents of /wpars/titian in the global environment
root: epp182:/wpars/titian --> ls -al /wpars/titian
drwxr-xr-x 17 root system 4096 May 3 18:01 .
drwx------ 3 root system 512 May 1 16:36 ..
-rw------- 1 root system 654 May 3 18:18 .sh_history
drwxr-x--- 2 root audit 256 Mar 28 17:52 audit
lrwxrwxrwx 1 bin bin 8 Apr 30 21:20 bin -> /usr/bin
drwxrwxr-x 5 root system 4096 May 3 16:41 dev
drwxr-xr-x 28 root system 8192 May 2 23:26 etc
drwxr-xr-x 4 bin bin 256 Apr 30 21:20 home
lrwxrwxrwx 1 bin bin 8 Apr 30 21:20 lib -> /usr/lib
drwx------ 2 root system 256 Apr 30 21:20 lost+found
drwxr-xr-x 142 bin bin 8192 Apr 30 21:23 lpp
drwxr-xr-x 2 bin bin 256 Mar 28 17:52 mnt
drwxr-xr-x 14 root system 512 Apr 10 20:22 opt
dr-xr-xr-x 1 root system 0 May 7 14:46 proc
drwxr-xr-x 3 bin bin 256 Mar 28 17:52 sbin
drwxrwxr-x 2 root system 256 Apr 30 21:22 tftpboot
drwxrwxrwt 3 bin bin 4096 May 7 14:30 tmp
lrwxrwxrwx 1 bin bin 5 Apr 30 21:20 u -> /home
lrwxrwxrwx 1 root system 21 May 2 23:26 unix -> /usr/lib/boot/unix_64
drwxr-xr-x 43 bin bin 1024 Apr 27 14:31 usr
drwxr-xr-x 24 bin bin 4096 Apr 30 21:24 var
drwxr-xr-x 2 root system 256 Apr 30 21:20 wpars
Example 2-4 shows the output of df executed from the saturn global
environment. It shows that one system WPAR is hosted within saturn, with its
filesystems mounted under the /wpars/titian base directory. The example shows
that the /, /home, /tmp and /var filesystems of the system WPAR are created on
logical volumes of the global environment. It also shows that the /opt and /usr
filesystems of the WPAR are namefs mounts over the global environment's /opt
and /usr.
Example 2-4 Listing mounted filesystem in the global environment
root: saturn:/wpars/titan --> df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 131072 66376 50% 1858 6% /
/dev/hd2 3801088 646624 83% 32033 7% /usr
/dev/hd9var 524288 155432 71% 4933 8% /var
/dev/hd3 917504 233904 75% 476 1% /tmp
/dev/hd1 2621440 2145648 19% 263 1% /home
/proc - - - - - /proc
/dev/hd10opt 1572864 254888 84% 7510 4% /opt
glear.austin.ibm.com:/demofs/sfs 2097152 1489272 29% 551 1% /sfs
/dev/fslv00 131072 81528 38% 1631 16% /wpars/titian
/dev/fslv01 131072 128312 3% 5 1% /wpars/titian/home
/opt 1572864 254888 84% 7510 4% /wpars/titian/opt
/proc - - - - - /wpars/titian/proc
/dev/fslv02 262144 256832 3% 12 1% /wpars/titian/tmp
/usr 3801088 646624 83% 32033 7% /wpars/titian/usr
/dev/fslv03 262144 229496 13% 1216 5% /wpars/titian/var
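Output like Example 2-4 can be scanned programmatically to pick out the filesystems mounted under a WPAR's base directory. The sample lines below are copied from the example; the parsing function is an illustrative sketch, not any IBM-supplied tool.

```python
# Pick out filesystems mounted at or below a WPAR base directory from df-style
# output. The SAMPLE lines are taken from Example 2-4 above.

SAMPLE = """\
/dev/hd4 131072 66376 50% 1858 6% /
/dev/fslv00 131072 81528 38% 1631 16% /wpars/titian
/opt 1572864 254888 84% 7510 4% /wpars/titian/opt
"""

def wpar_mounts(df_text, basedir="/wpars/titian"):
    """Return (device, mount_point) pairs mounted at or below basedir."""
    result = []
    for line in df_text.splitlines():
        fields = line.split()
        device, mount = fields[0], fields[-1]
        if mount == basedir or mount.startswith(basedir + "/"):
            result.append((device, mount))
    return result

print(wpar_mounts(SAMPLE))
```

Run against the full Example 2-4 output, this would list all seven titian filesystems while skipping the global environment's own mounts.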