
Description of an MQPerf standard report

The MQPerf standard report is a document in OpenDocument format. It is written in English. This page details its structure and content.

report outline

Here is the typical table of contents of an MQPerf standard report:

Technical specifications
Performance tests results
  Maximum throughput per scenario
  Comparison with reference platforms
  Positioning in the community scale
  Reference platforms description
Tests configuration
  Server configuration
  Client configuration
  Configuration in a J2EE application
Detailed durability tests results
  Reading guide
  local, queue, transient
  tcp, queue, transient
  local, topic, transient
  tcp, topic, transient
  local, queue, persistent
  tcp, queue, persistent
  local, topic, persistent
  tcp, topic, persistent

The first two parts roughly match an MQPerf community report. The next two parts belong to the standard report only.

Technical specifications

This short part summarizes the technical specifications of the tested platform: the processors, the memory size, the free disk space, the type and version of the operating system, and the version of the Java environment.

CPU          2 x amd64
Memory size  2002 MB
Disk size    39.440 GB
OS version   Linux 2.6.18-128.el5
JRE version  1.6
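
As an aside, comparable figures can be read from a running JVM with standard Java calls. The minimal sketch below only illustrates where such values live; note that Runtime.maxMemory() reports the configured heap limit rather than the physical memory size, and MQPerf may collect its values differently.

    // Minimal sketch: reading platform data from a JVM (illustrative only).
    public class PlatformSpecs {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("CPU cores:   " + rt.availableProcessors());
            // Heap limit of this JVM, not the physical memory size.
            System.out.println("Max heap:    " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("Free disk:   "
                    + new java.io.File("/").getFreeSpace() / (1024L * 1024 * 1024) + " GB");
            System.out.println("OS:          " + System.getProperty("os.name")
                    + " " + System.getProperty("os.version"));
            System.out.println("JRE version: " + System.getProperty("java.version"));
        }
    }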

Performance tests results

This part details the optimal performance of JORAM for a number of classic use cases of a MOM.

Five axes are explored, with two or three possible values each, for a total of 48 scenarios. The axes are:

  • destination type: queue or topic
  • persistency: persistent or transient messages
  • connection type: local (inVM) or tcp connection factory
  • message size: small (100 B/msg), large (10 kB/msg), or a size of your choice [1]
  • parallel JORAM servers number: 1, 2, or 4 servers [2]

A first group of 24 tests analyses the combinations of the destination type, persistency, connection type, and message size axes with a single JORAM server. A second group of 24 tests analyses the combinations of the destination type, persistency, connection type, and parallel servers number axes for a fixed message size.
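
To make the scenario count concrete, here is an illustrative sketch enumerating the two groups of 24 tests; the axis value names are ours, not identifiers used by MQPerf.

    // Rough sketch of the scenario matrix described above.
    public class ScenarioMatrix {
        public static void main(String[] args) {
            String[] destinations = {"queue", "topic"};
            String[] persistency  = {"persistent", "transient"};
            String[] connections  = {"local", "tcp"};
            String[] sizes        = {"small", "large", "custom"};
            int[]    servers      = {1, 2, 4};

            int count = 0;
            // Group 1: message size axis, single server (2 x 2 x 2 x 3 = 24 tests).
            for (String d : destinations)
                for (String p : persistency)
                    for (String c : connections)
                        for (String s : sizes)
                            System.out.println(++count + ": " + d + ", " + c + ", " + p + ", " + s + ", 1 server");
            // Group 2: parallel servers axis, fixed message size (2 x 2 x 2 x 3 = 24 tests).
            for (String d : destinations)
                for (String p : persistency)
                    for (String c : connections)
                        for (int n : servers)
                            System.out.println(++count + ": " + d + ", " + c + ", " + p + ", " + n + " servers");
            System.out.println("Total scenarios: " + count); // 48
        }
    }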

Maximum throughput per scenario

The first table holds the raw maximum throughput value for the 24 tests from the message size group. The unit is messages per second; actual data throughput can be obtained by multiplying by the relevant message size.

This optimum value is a sustainable target, i.e. a continuous throughput that can be processed by JORAM over time. When a variable throughput is to be considered, temporary peak values over this optimum may be obtained at both producer and consumer sides.
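
As a side note, converting a message rate into a data rate is a simple multiplication. The figures in the following sketch are hypothetical, not taken from an actual report.

    // Hypothetical conversion of a message rate into a data rate.
    public class DataRate {
        public static void main(String[] args) {
            long messagesPerSecond = 20_000;  // value read from the throughput table (hypothetical)
            long messageSizeBytes  = 10_000;  // "large" message size, 10 kB/msg
            double mbPerSecond = messagesPerSecond * (double) messageSizeBytes / 1_000_000;
            System.out.println(mbPerSecond + " MB/s"); // 200.0 MB/s
        }
    }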


The second table refers to the same message size group of tests. It displays three values for each test which characterize the latency of the messages.

The mean latency over all sent messages is given first, in µs. It should stay very low. The second value hints at the statistical dispersion of the latency values: because the mean is very low, the usually provided standard deviation is error-prone to interpret, so we provide instead the percentage of values lower than twice the mean. The third value is the maximum latency observed during the test.
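
For readers who want to reproduce these three indicators on their own measurements, here is a small illustrative sketch; the latency samples are hypothetical and the computation is ours, not the exact MQPerf code.

    import java.util.Arrays;

    // Sketch of the three latency indicators: mean, percentage below twice
    // the mean, and maximum (values in microseconds are hypothetical).
    public class LatencyStats {
        public static void main(String[] args) {
            long[] latenciesMicros = {120, 95, 110, 4000, 130, 105, 98, 150};

            double mean = Arrays.stream(latenciesMicros).average().orElse(0);
            long max = Arrays.stream(latenciesMicros).max().orElse(0);
            double belowTwiceMean = 100.0
                    * Arrays.stream(latenciesMicros).filter(l -> l < 2 * mean).count()
                    / latenciesMicros.length;

            System.out.printf("mean = %.1f us, below 2 x mean = %.1f %%, max = %d us%n",
                    mean, belowTwiceMean, max);
        }
    }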


The third and last table holds the raw maximum throughput value for the 24 tests from the parallel servers group. The value sums the throughputs of all the servers deployed for each test.

A server must here be understood as a JORAM server, that is a separate JVM running JORAM. The objective is to actually run several JORAM servers on the same physical hardware, not to simulate hardware scalability, which is another matter. Running parallel JORAM servers on a single machine is a neat way to optimize its resource consumption.


Comparison with reference platforms

The results for the tested platform are compared with the results for related reference platforms. They are displayed on two radar diagrams with 8 axes, each axis standing for a particular scenario.

The first diagram analyses the combinations of the axes destination type, persistency, and connection type, for the small message size. The second diagram analyses the same combinations for the large message size.


The key for each axis is a compact description of the test. The "Q, local, P" label thus refers to the test with destination type = Queue, connection type = local (inVM), and persistency = persistent messages.

The tested machine is represented by the red line. It is surrounded by three blue lines standing for the reference platforms. You can then compare your machine with the reference platforms, separately for each scenario.

The scenarios are displayed on the same diagram but they do not relate to each other. Each axis defines its own scale, even though all values are numbers of messages per second. The center always stands for a null value, but the maximum values on the axes differ. The diagram is meant to facilitate comparisons, all scales being linear. The four inner circles display successive increments of 20% of the maximum value on the axis.

Two lines may overlap on an axis. It means that their values differ by less than 5%.

Positioning in the community scale

The results of your machine on the 16 basic scenarios are averaged into a mean throughput value. This global performance index is then positioned in a scale summarizing all the tests submitted to the MQPerf service.


Besides the evaluation aspect of this index, the diagram exhibits the variety of hardware used to run JORAM. You directly get a good idea of the potential performance gain from a hardware upgrade.

Of course the diagram will be all the more meaningful as more JORAM users contribute through the MQPerf community. We expect such contributions to match the actual size of the JORAM user community.

Reference platforms description

A number of platforms have been selected as reference platforms to be used in the comparison diagrams. Three platforms with results close to your own are chosen for each diagram.

The reference platforms are Amazon EC2 host types, on which we executed MQPerf, plus a growing set of other machines. We plan to enrich this set as MQPerf gets more use from the community.

The non-EC2 hosts used in the diagrams are described here, with the following properties: processor, memory, disk size, and operating system.

Tests configuration

A MOM may be used in quite different situations, so JORAM is configured specifically for each test in order to optimize the performance. In this section of the standard report you will find the JORAM parameters which are set in MQPerf, their function, and how you may configure them.

Server configuration

The server configuration is the same for all the MQPerf tests: in order to minimize the overall execution duration, the server is configured and started only once. We have selected a mixed-usage configuration, specifying the transactional persistence module and tuning its parameters.
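
As a hedged illustration only: in a typical JORAM setup the persistence module is selected through the Transaction property when the server is started. The class, property, and argument names below reflect JORAM's usual packaging and should be checked against the version under test; they are not quoted from the report.

    // Hedged sketch: selecting JORAM's persistence module when starting a server.
    public class ServerLauncher {
        public static void main(String[] args) throws Exception {
            // The "Transaction" property selects the transactional persistence module.
            // NTransaction is the usual persistent module; NullTransaction disables persistence.
            System.setProperty("Transaction", "fr.dyade.aaa.util.NTransaction");

            // Start server 0 with its persistence directory, as one would from the
            // command line: java -DTransaction=... fr.dyade.aaa.agent.AgentServer 0 ./s0
            fr.dyade.aaa.agent.AgentServer.main(new String[] {"0", "./s0"});
        }
    }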

Client configuration

This section describes the parameters set on the client side of the tests. The various ways of setting the parameters are explained, but not the values themselves: as they are specific to each test, the values are provided in the sections dedicated to each test, just before the first charts.
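
The following sketch merely illustrates where the client-side choices of the test axes (connection factory type, persistency, destination) show up in a JMS client. The JORAM factory class, port, and destination name are assumptions based on JORAM's usual client API and defaults, not values from the report.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Illustrative JMS producer showing where the test axes appear on the client side.
    public class ProducerSketch {
        public static void main(String[] args) throws Exception {
            // "tcp" axis: a TCP connection factory; the "local" (inVM) axis would use
            // JORAM's LocalConnectionFactory instead, with the client collocated with the server.
            ConnectionFactory cf =
                org.objectweb.joram.client.jms.tcp.TcpConnectionFactory.create("localhost", 16010);

            Connection connection = cf.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The destination would normally be retrieved through JNDI or the JORAM admin API.
            Queue queue = session.createQueue("testQueue");

            MessageProducer producer = session.createProducer(queue);
            // "persistency" axis: persistent vs transient messages.
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            connection.start();
            producer.send(session.createTextMessage("hello"));
            connection.close();
        }
    }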

Configuration in a J2EE application

A JORAM user in a J2EE application server may not always set the parameters as a standalone JORAM user would. This section explains how JORAM is configured in a J2EE environment, notably at the level of Message Driven Beans.
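
As a generic illustration, a Message Driven Bean consuming from a queue might look like the sketch below, using EJB 3 annotations (older J2EE versions declare the same in ejb-jar.xml). The destination name is hypothetical, and the JORAM- or server-specific activation and resource adapter properties discussed in this section depend on the application server and are not reproduced here.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Generic sketch of a Message Driven Bean consuming from a queue.
    // Property names other than destinationType may vary by application server.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "testQueue")
    })
    public class ConsumerBean implements MessageListener {
        public void onMessage(Message message) {
            // Process the incoming message here.
        }
    }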

Detailed durability tests results

The detailed durability tests are performed only by MQPerf standard. They analyze in more detail the behavior of JORAM during the 24 tests from the message size group.

The objective is no longer to find the performance optimum, but instead to analyze the behavior of JORAM stressed with a fixed load profile around the optimum (a small sketch of the profile follows the list below). The load profile is more precisely:

  • during the first half of the test, continuous load at 100% of the optimum,
  • during half of the remaining time, overload at 110% of the optimum,
  • during the remaining quarter, reduced load at 90% of the optimum.
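
For illustration, the profile above can be expressed as a simple function of the elapsed fraction of the test; the optimum value used in the sketch is hypothetical.

    // Sketch of the nominal load profile as a function of the elapsed
    // fraction of the test (0.0 to 1.0) and of the measured optimum.
    public class LoadProfile {
        static double nominalRate(double elapsedFraction, double optimum) {
            if (elapsedFraction < 0.50) return 1.00 * optimum; // first half: 100%
            if (elapsedFraction < 0.75) return 1.10 * optimum; // next quarter: overload at 110%
            return 0.90 * optimum;                             // last quarter: reduced load at 90%
        }

        public static void main(String[] args) {
            double optimum = 10_000; // hypothetical optimum, in messages per second
            for (double f : new double[] {0.25, 0.60, 0.90})
                System.out.printf("t = %.2f -> %.0f msg/s%n", f, nominalRate(f, optimum));
        }
    }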

The analysis is organized into 8 sub-parts according to the combinations of the 3 basic axes: destination type, persistency, and connection type. Each section details the 3 tests corresponding to the 3 message sizes. In this documentation only the first section is detailed, i.e. the tests with destination type Queue, connection type local (inVM), and transient messages.

Reading guide

The organization of the durability test results is detailed in this section, with an in-depth explanation of the provided diagrams. It also gives a number of keys for understanding the charts.

local, queue, transient

Three tests are described in this section, one for each message size. Each test is described first separately, from a set of chosen indicators. A cross analysis follows, which highlights the impact of the message size on each indicator.

analysis of the test with a small message size

The first chart displays the nominal load profile and the actual instantaneous message throughput for the test with a small message size (100 B/msg). Three lines may be found on this chart:

  • the nominal input profile (yellow line), clearly shows the overload phase,
  • the actual message production rate (blue line), expected to closely follow the nominal input,
  • the actual consumption rate (red line).

All values are measured in thousands of messages per second.

Figure: load profile & throughput

You can already get a good idea of the stability of the system from this chart. If the consumption rate were to drift noticeably from the production rate, a gap would form between the blue and red lines. In most cases the two lines are merged, the red line on top hiding the blue one below.

The second chart displays the evolution of critical system indicators which relate to the health of JORAM during the test execution. Four lines may be found on this chart:

  • the rate of received messages (red line), similar to the red line in the previous diagram but measured differently,
  • the number of pending messages in the destination (blue line), indicator of messages accumulating in the MOM,
  • an indicator of the persistence module load (yellow line), particularly sensitive to the MOM executing under heavy load conditions,
  • the average load of the JORAM engine over the last minute (green line), comparable to the uptime command of a Unix system.

Lines in this chart are not printed for null values, which improves its readability. The time axes of both charts match, so that significant events which may have occurred during the test can be correlated with the matching section of the other diagram.

Figure: global system status

The blue line displays the messages accumulating in the MOM. It is quite normal for this value not to be null, as store & forward is the primary function of a MOM. In our test it should remain stable during the first half, while the system runs under the nominal load. The mean value of this indicator during this phase should be compared to the number of messages concurrently sent by the producer on each call. The value may rise during the overloaded input phase, and should go down when the load is reduced.

The yellow line monitors the activity of a specific part in the persistence module which, properly configured and under normal load conditions, should stay low. The indicator may stay at a null value throughout the test, resulting in a missing yellow line. This is good. The line could also appear during the overload phase, and should decline afterwards.

The green line is a pretty good indicator of the load in the long term. It implements the exact computation algorithm of the widely used Unix uptime indicator. As the tests individually run in a short time to keep the overall execution time acceptable, this indicator is not easy to interpret. The steepness of the slope matches the increase in load: as a reference, a steady gain of 0.3 over 20 s is the sign of a mean load increase of 1. A "good" curve should rise steadily over the first half of the test toward a mean value of 1, then steepen its slope during the overload phase, and finally flatten out when the load declines.
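
As a sketch of this rule of thumb: the classic Unix 1-minute load average is an exponential moving average with a 60-second time constant, so starting from 0 under a constant load of 1 it reaches about 0.28 after 20 seconds.

    // Sketch of the 1-minute exponential moving average behind the green line,
    // following the classic Unix load-average formula.
    public class LoadAverage {
        public static void main(String[] args) {
            double load = 0.0;    // current 1-minute load average
            double active = 1.0;  // constant load applied during the period
            double dt = 1.0;      // sampling period, in seconds
            double decay = Math.exp(-dt / 60.0);

            for (int t = 1; t <= 20; t++)
                load = load * decay + active * (1 - decay);

            System.out.printf("load after 20 s: %.2f%n", load); // about 0.28
        }
    }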

analysis of the test with a large message size

This section is quite similar to the previous one, this time with 10 kB sized messages.

The scales of some curves differ from the previous section, notably the message throughput scale. Keep this in mind when comparing them.

analysis of the test with a specific message size

Again this part replicates the section analyzing the test with a small message size. Take care with the chart scales.

comparative analysis

The four indicators displayed in the global system status diagrams are specifically analyzed in this section. A dedicated diagram is built for each indicator to compare its evolution in the three tests depending on the message size. The blue line refers to the small message size test, the red line to the large message size test, and the green line to the specific message size test.


other tests

The 7 other groups of tests are analyzed similarly.

The study of the various charts may actually help the project leader in choosing a preferred use case of JORAM, or in better understanding the behavior of an already developed JORAM application.


[1] chosen message size only in MQPerf standard

[2] parallel servers number axis only in MQPerf standard
