Planning for performance is the single most
important indicator of whether a J2EE project will
achieve acceptable performance.
J2EE
profiling needs more than a J2SE profiler: the profiler must be J2EE-aware,
so that J2EE requests can be
followed and logged, and so that communications, sessions, transactions, and
bean life cycles can be monitored.
Enterprise performance problems tend to
come about equally from four main areas: databases, web servers,
application servers, and the network.
Common database problems are
insufficient indexing, fragmented databases, out-of-date statistics,
and faulty application design. The corresponding solutions are adding or
tuning indexes, compacting the database, updating the database statistics,
and rewriting the application so that the database server controls the
query process.
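As a sketch of that last point, the following JDBC fragment (the ORDERS table, its columns, and the DataSource are hypothetical) pushes the filtering and counting into the SQL statement so the database server, rather than the application, controls the query process; with an index on CUSTOMER_ID the server can avoid a full table scan.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class OrderLookup {
        private final DataSource dataSource;

        public OrderLookup(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Anti-pattern: SELECT every row and filter in Java, which forces a
        // full table scan and ships all the data across the network.
        //
        // Better: let the database do the filtering and counting. With an
        // index on ORDERS(CUSTOMER_ID), this query avoids a table scan.
        public int countOpenOrders(long customerId) throws SQLException {
            Connection con = null;
            PreparedStatement ps = null;
            ResultSet rs = null;
            try {
                con = dataSource.getConnection();
                ps = con.prepareStatement(
                    "SELECT COUNT(*) FROM ORDERS WHERE CUSTOMER_ID = ? AND STATUS = 'OPEN'");
                ps.setLong(1, customerId);
                rs = ps.executeQuery();
                rs.next();
                return rs.getInt(1);
            } finally {
                // Always release JDBC resources, even on failure.
                if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
                if (ps != null) try { ps.close(); } catch (SQLException ignored) {}
                if (con != null) try { con.close(); } catch (SQLException ignored) {}
            }
        }
    }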
Common web-server problems are poorly
designed algorithms, incorrect configurations, poorly written code,
memory problems, and overloaded CPUs.
Common application-server problems are poor cache management,
unoptimized database queries, incorrect software configuration, and
poor handling of concurrent client requests.
Common network
problems are inadequate bandwidth somewhere along the communication
route, and undersized, misconfigured, or incompatible routers,
switches, firewalls, and load balancers.
Monitor JVM heap sizes, request response times, request
service times, JDBC requests, RMI communications, file descriptors,
bean life cycles, transaction boundaries, cache sizes, CPU
utilization, stack traces, GC pauses, and network bandwidth.
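For the JVM-level items in that list, a minimal in-process sampler might look like the sketch below (assuming a Java 5 or later JVM, where the java.lang.management platform MXBeans are available); a production system would more likely rely on a JMX console or a dedicated monitoring tool.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class JvmMonitor implements Runnable {
        public void run() {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                // Sum the cumulative GC time across all collectors.
                long gcMillis = 0;
                for (GarbageCollectorMXBean gc :
                        ManagementFactory.getGarbageCollectorMXBeans()) {
                    gcMillis += gc.getCollectionTime();
                }
                System.out.println("heap used=" + heap.getUsed()
                    + " committed=" + heap.getCommitted()
                    + " max=" + heap.getMax()
                    + " cumulative GC time(ms)=" + gcMillis);
                try {
                    Thread.sleep(60000); // sample once a minute
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }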
Watch out for slow response times, excessive database table scans,
database deadlocks, unavailable pages, memory leaks, and high CPU
usage (consistently over 85%).
Load testing should be
repeatable. Tests should include the expected peak loads, should run against
an environment as close to the expected deployed system as possible, and
should be able to run for a long period of time.
One testing methodology is to determine the maximum acceptable
response time for a page download, estimate the maximum number of
simultaneous users, increase the number of simulated users until the
application's response delay becomes unacceptable, and then tune until you
reach an acceptable response time for the desired number of users.
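One possible mechanization of that methodology is sketched below; the target URL, the thresholds, and the single-request-per-user simplification are all illustrative assumptions, and a real load test would drive each simulated user through realistic scenarios for a sustained period.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SimpleLoadTest {
        // Hypothetical target page; in practice drive the real deployment.
        private static final String TARGET = "http://appserver.example.com/app/home";
        private static final long MAX_ACCEPTABLE_MILLIS = 2000;

        public static void main(String[] args) throws Exception {
            // Ramp up simulated users until the average response time
            // exceeds the maximum acceptable response time.
            for (int users = 10; users <= 500; users += 10) {
                long avg = runWithUsers(users);
                System.out.println(users + " users: avg response " + avg + " ms");
                if (avg > MAX_ACCEPTABLE_MILLIS) {
                    System.out.println("Unacceptable at " + users + " users; tune and repeat.");
                    break;
                }
            }
        }

        private static long runWithUsers(int users) throws InterruptedException {
            final long[] totals = new long[users];
            Thread[] threads = new Thread[users];
            for (int u = 0; u < users; u++) {
                final int id = u;
                threads[u] = new Thread(new Runnable() {
                    public void run() {
                        totals[id] = timeOneRequest();
                    }
                });
                threads[u].start();
            }
            long sum = 0;
            for (int u = 0; u < users; u++) {
                threads[u].join();
                sum += totals[u];
            }
            return sum / users;
        }

        private static long timeOneRequest() {
            long start = System.currentTimeMillis();
            try {
                HttpURLConnection con =
                    (HttpURLConnection) new URL(TARGET).openConnection();
                InputStream in = con.getInputStream();
                byte[] buf = new byte[4096];
                while (in.read(buf) != -1) {
                    // Drain the response so the full download is measured.
                }
                in.close();
            } catch (Exception e) {
                // Count a failed request as an extremely slow one.
                return Long.MAX_VALUE / 1000000;
            }
            return System.currentTimeMillis() - start;
        }
    }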
Page display should be as fast
as possible. Use simple pages with static layouts where possible. Let
users get to their destination page quickly. Work with the
browser's capabilities.
Use priority queues to provide different levels of service.
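One way to implement this in the application tier is a priority-ordered work queue, as in the sketch below (the Request class and its service levels are hypothetical, and java.util.concurrent.PriorityBlockingQueue assumes a Java 5 or later JVM).

    import java.util.concurrent.PriorityBlockingQueue;

    public class TieredRequestQueue {

        /** A hypothetical unit of work tagged with a service level. */
        public static class Request implements Comparable<Request> {
            static final int GOLD = 0, SILVER = 1, BRONZE = 2;
            final int priority;
            final Runnable work;

            Request(int priority, Runnable work) {
                this.priority = priority;
                this.work = work;
            }

            public int compareTo(Request other) {
                return this.priority - other.priority; // lower value is served first
            }
        }

        private final PriorityBlockingQueue<Request> queue =
            new PriorityBlockingQueue<Request>();

        public void submit(Request request) {
            queue.put(request);
        }

        /** Worker threads call this; gold requests are always dequeued first. */
        public void serviceOne() throws InterruptedException {
            queue.take().work.run();
        }
    }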
Be prepared to handle network congestion and communication failures.
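There is no standard API for this, but the usual shape is a bounded retry with backoff around remote calls, roughly as sketched below (the RetryingCall and RemoteCall names are hypothetical).

    public class RetryingCall {

        /** Hypothetical callback wrapping one remote invocation. */
        public interface RemoteCall {
            Object invoke() throws Exception;
        }

        /** Retry a remote call a few times, backing off between attempts. */
        public static Object callWithRetry(RemoteCall call) throws Exception {
            int attempts = 0;
            long delay = 500; // start with half a second
            while (true) {
                try {
                    return call.invoke();
                } catch (java.rmi.RemoteException e) {
                    // Only communication failures are retried; other
                    // exceptions propagate to the caller immediately.
                    attempts++;
                    if (attempts >= 3) {
                        throw e; // give up and let the caller degrade gracefully
                    }
                    Thread.sleep(delay);
                    delay *= 2; // back off to avoid adding to the congestion
                }
            }
        }
    }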
High-performance applications probably need clustering and load
balancing.
Close JMS resources when
you finish with them.
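A minimal producer sketch illustrating the close-in-finally pattern follows (it assumes the JMS 1.1 unified API and uses placeholder JNDI names); it also applies the nonpersistent-delivery and TimeToLive tips that appear below.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class NotificationSender {
        public void send(String text) throws Exception {
            InitialContext ctx = new InitialContext();
            // JNDI names are deployment-specific; these are placeholders.
            ConnectionFactory factory =
                (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/NotificationQueue");

            Connection connection = null;
            Session session = null;
            try {
                connection = factory.createConnection();
                session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // Nonpersistent, expiring messages are cheaper (see the tips below).
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
                producer.setTimeToLive(60000); // one minute
                TextMessage message = session.createTextMessage(text);
                producer.send(message);
            } finally {
                // Always release JMS resources; closing the connection also
                // closes its sessions, producers, and consumers.
                if (session != null) try { session.close(); } catch (JMSException ignored) {}
                if (connection != null) try { connection.close(); } catch (JMSException ignored) {}
            }
        }
    }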
Start the consumer before the producer.
Separate nontransactional and transactional sessions.
Use nonpersistent messages.
Use shorter or compressed messages.
Tune the redelivery count, the delivery TimeToLive, and the delivery
capacity.
Use asynchronous processing (MessageListener),
parallel processing (ConnectionConsumers and
ServerSessionPools), flow control, load-balancing
message queues, and the duplicates-permitted acknowledgment mode
(Session.DUPS_OK_ACKNOWLEDGE), as sketched below. Avoid
Session.CLIENT_ACKNOWLEDGE.
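Under the same assumptions as the producer sketch above, an asynchronous consumer with lazy acknowledgment might look like this for a standalone client (inside an EJB container, a message-driven bean plays this role):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class AsyncNotificationReceiver implements MessageListener {

        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    // Process the message; with DUPS_OK_ACKNOWLEDGE the code
                    // must tolerate the occasional duplicate delivery.
                    String text = ((TextMessage) message).getText();
                    System.out.println("received: " + text);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public static void start() throws Exception {
            InitialContext ctx = new InitialContext();
            // Placeholder JNDI names, as in the producer sketch above.
            ConnectionFactory factory =
                (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/NotificationQueue");

            Connection connection = factory.createConnection();
            // Lazy acknowledgment reduces session overhead.
            Session session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(new AsyncNotificationReceiver());
            connection.start(); // start delivery only after the listener is registered
        }
    }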
Use publish-and-subscribe when dealing with many active listeners, and
point-to-point when there are only a few active listeners.