Technical specifications
The following data describes the key components of the tested platform. No other client data is collected by MQPerf.
CPU            2 x amd64
Memory size    2002 MB
Disk size      39.442 GB
OS version     Linux 2.6.18-128.el5
JRE version    1.6
Maximum throughput per scenario
MQPerf performs 16 different tests, optimizing the JORAM configuration to obtain the highest message throughput for each test. Four axes are explored, with two possible values per axis (a brief JMS sketch illustrating these axes follows the list):
  • destination type axis: JMS queue or topic
  • persistence axis: persistent or transient messages
  • message size axis: small (100 B/msg) or large (10 kB/msg) messages
  • connection type axis: local (inVM) or TCP connection factory (JORAM-specific).
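The following minimal JMS sketch (illustrative code only, not part of MQPerf; the JNDI names localCF, tcpCF, perfQueue and perfTopic, as well as the surrounding JNDI configuration, are assumptions) shows where each axis appears in client code:

    import javax.jms.BytesMessage;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class ScenarioSketch {

        // Sends one message for one of the 16 scenarios.
        //   cfName      - connection type axis: JNDI name of a local (inVM) or TCP
        //                 connection factory ("localCF" / "tcpCF" are assumed names)
        //   useQueue    - destination type axis: queue (true) or topic (false)
        //   persistent  - persistence axis: persistent or transient messages
        //   messageSize - message size axis: 100 (bytes) or 10 * 1024
        static void runScenario(String cfName, boolean useQueue,
                                boolean persistent, int messageSize) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup(cfName);
            jndi.close();

            Connection connection = cf.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Destination dest = useQueue ? session.createQueue("perfQueue")
                                            : session.createTopic("perfTopic");
                MessageProducer producer = session.createProducer(dest);
                producer.setDeliveryMode(persistent ? DeliveryMode.PERSISTENT
                                                    : DeliveryMode.NON_PERSISTENT);

                BytesMessage msg = session.createBytesMessage();
                msg.writeBytes(new byte[messageSize]); // 100 B or 10 kB payload
                producer.send(msg);
            } finally {
                connection.close();
            }
        }
    }

For instance, runScenario("tcpCF", true, true, 100) corresponds to the queue / tcpCF / persistent / 100 B/msg cell of the throughput table below; a real benchmark would additionally send messages in a loop and measure the elapsed time.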
The optimal throughput measured on your platform is given below for each test, as a number of messages per second. This optimum is a target sustainable over time. The corresponding data throughput can be obtained by multiplying by the relevant message size, as illustrated below.
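For example, with the queue / localCF / persistent / 100 B/msg figure from the table below, the sustained data rate works out to

\[
  62\,004~\mathrm{msg/s} \times 100~\mathrm{B/msg} = 6\,200\,400~\mathrm{B/s} \approx 6.2~\mathrm{MB/s}.
\]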
You may benefit from these results in several ways. If you already use JORAM in your application, you can find the optimal throughput of JORAM for the specific scenarios involved and compare it with your own application results. You may also consider using alternative scenarios, when functionally compatible, to improve performance. If you are at the design phase of your application, you directly obtain the performance potential of JORAM under the scenarios you plan to use. Again, the other results may lead you to consider alternative scenarios.
throughput (msg/s)            persistent                  transient
                         100 B/msg   10 kB/msg      100 B/msg   10 kB/msg
queue   localCF             62 004       8 631        132 133      18 824
        tcpCF               37 255       4 428         63 383       6 250
topic   localCF             65 641       8 623        139 705      18 407
        tcpCF               37 246       4 313         58 869       5 995
The latency of message delivery has also been collected during the tests. It is represented by three values in the table below, as follows:
[mean value] | [dispersion]% | [max value].
All latency values are used to compute these indicators; no potentially aberrant value is discarded, as is usually done in statistical measurements. Keep this in mind when analyzing a high max value, which is therefore likely to vary when the test is run again.
The dispersion indicator gives the proportion of measures that are lower than twice the mean value. It is expressed as a percentage; the higher the value, the better the result. A dispersion of 100% means that fewer than 1 in 100 messages have a latency higher than twice the mean value.
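For reference, the three indicators can be recomputed from raw samples as follows. This is an illustrative sketch, not MQPerf's own code; it simply applies the definitions above, keeping every sample:

    import java.util.Arrays;

    public class LatencyIndicators {

        // Formats "[mean value] | [dispersion]% | [max value]" from all measured
        // delivery latencies, without discarding any sample.
        static String summarize(double[] latencies) {
            double sum = 0;
            double max = Double.NEGATIVE_INFINITY;
            for (double l : latencies) {
                sum += l;
                max = Math.max(max, l);
            }
            double mean = sum / latencies.length;

            // Dispersion: percentage of measures lower than twice the mean
            // (the higher, the better).
            long belowTwiceMean = Arrays.stream(latencies)
                                        .filter(v -> v < 2 * mean)
                                        .count();
            long dispersion = Math.round(100.0 * belowTwiceMean / latencies.length);

            return String.format("%.2f | %d | %.0f", mean, dispersion, max);
        }
    }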
latency (µs | % | ms)          persistent                            transient
                       100 B/msg        10 kB/msg         100 B/msg        10 kB/msg
queue   localCF     1.59 | 99 | 666   0.54 | 99 | 688   1.03 | 97 |  79   0.34 | 81 | 259
        tcpCF       3.17 | 99 | 392   1.51 | 99 | 316      3 | 96 | 117   1.58 | 97 | 328
topic   localCF     1.18 | 99 |  60   0.38 | 63 | 271   0.77 | 98 | 176   0.28 | 77 | 327
        tcpCF       2.35 | 99 | 293   0.93 | 98 |   5   2.34 | 95 | 206   1.04 | 98 | 231
Comparison with reference platforms
The results for the tested platform are compared with results for related reference platforms. They are displayed on two radar (net) diagrams with 8 axes, each axis standing for a particular scenario. Each axis has its own scale, proportional to a relative maximum value. These results help you identify the scenarios for which your platform is best suited. They also help you estimate the potential of JORAM on other reference platforms.
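One plausible reading of this per-axis scaling (an assumption, since the report does not spell out the formula) is that each plotted value is the platform's throughput for the scenario divided by the maximum throughput observed for that scenario across the compared platforms:

\[
  v(p, s) = \frac{T(p, s)}{\max_{p'} T(p', s)}
\]

where T(p, s) is the optimal throughput of platform p for scenario s; the best platform on an axis then reaches the outer edge of the diagram.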
[Two radar diagrams, one for 100 B/msg and one for 10 kB/msg, compare your platform with the ec2-medium, ec2-large and ec2-xlarge reference platforms. Axis labels: Q = queue, T = topic, Tr = transient, P = persistent.]
Positioning in the community scale
With the MQPerf community we aim to demonstrate the wide use of JORAM and to reassure JORAM users about the capabilities of the middleware over a wide range of platforms. The results of all MQPerf analyses are averaged into a mean throughput value over the 16 scenarios. Your platform is positioned on this scale, together with our reference platforms.
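Assuming an unweighted arithmetic mean over the 16 scenarios (the report does not state any weighting), the community score of a platform is

\[
  \bar{T} = \frac{1}{16} \sum_{i=1}^{16} T_i
\]

where T_i is the optimal throughput (msg/s) measured for scenario i.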
[Community scale positioning your platform alongside the ec2-medium, ec2-large and ec2-xlarge reference platforms.]
Results provided by ScalAgent DT, the main contributor to JORAM, through its MQPerf service.