I got asked this question on Twitter the other day, so I thought it would make an interesting blog post.

Let’s look at each piece of the question.


You can use the DISPLAY QMSTATUS command to see how many connections there are currently made into the queue manager. This is a count of the number of applications (or some queue manager processes too) that have made an MQCONN(X) to the queue manager. It is also the same number of responses you should see returned by DISPLAY CONN(*) – if you don’t want to count them, find an MQ admin tool that counts them for you. These connections might be local or client connections – both contribute to the total.
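As a minimal sketch, typed into runmqsc (nothing here is specific to any one queue manager), the check looks like this:

```
* Ask the queue manager how many connections it currently has
DISPLAY QMSTATUS CONNS
* Each response returned by this command represents one of those connections
DISPLAY CONN(*)
```

Comparing the CONNS value with the number of DISPLAY CONN responses is a quick sanity check that you are counting the same thing.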

To see the local ones use command:-

DISPLAY CONN(*) WHERE(CHANNEL EQ ' ')

To see the remote ones use command:-

DISPLAY CONN(*) WHERE(CHANNEL NE ' ')

Client Connections and MaxChannels

So, having ruled out the local connections, what should you think if the number of connections coming in over a channel is more than MaxChannels? As the question asks, “Shouldn’t that be failing?”

The other thing to remember here is that, since MQ V7, one SVRCONN channel can relay several client MQCONNs over to the queue manager.

To see this take a look at the DISPLAY CHSTATUS command. There is a status attribute CURSHCNV that shows the number of conversations currently being shared over that one SVRCONN instance.
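For example, a sketch of that enquiry in runmqsc – CHLTYPE, CURSHCNV and MAXSHCNV are real CHSTATUS attributes, but the channel name MY.SVRCONN is just a placeholder:

```
* For each channel instance, show the channel type and how many
* conversations are currently sharing it
DISPLAY CHSTATUS(*) CHLTYPE CURSHCNV
* Or focus on one server-connection channel, including the
* negotiated maximum number of shared conversations
DISPLAY CHSTATUS(MY.SVRCONN) CURSHCNV MAXSHCNV
```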

To see the number of running channels, use the command:-

DISPLAY CHSTATUS(*) WHERE(STATUS EQ RUNNING)

The number of responses will show you how many running channel instances there are – which is the number to compare against MaxChannels. If you add up the total of all the numbers shown in CURSHCNV, this total will be less than (or equal to) the above number of channel-based connections shown when you used the DISPLAY CONN command. Both queue manager channels and client channels contribute to that total.

HINT: If you want an easy way to total up all the numbers shown in CURSHCNV, try out MQSCX with this single line:-

@total=0;foreach(DISPLAY CHSTATUS(*) CURSHCNV);@total=@total+CURSHCNV;endfor;print @total

Or you could make a little import file to print out all the various numbers:-

=echo cmds(NO)
=echo resp(NO)
print 'Show all the connections into queue manager',_connqmgr
print _sep
DISPLAY QMSTATUS CONNS
print 'Total connections ',CONNS
DISPLAY CONN(*) WHERE(CHANNEL EQ ' ')
print 'Local connections ',_matches
DISPLAY CONN(*) WHERE(CHANNEL NE ' ')
print 'Remote connections',_matches
@total = 0
@svrcn = 0
foreach(DISPLAY CHSTATUS(*) CHLTYPE CURSHCNV)
  if (CHLTYPE = 'SVRCONN')
    @svrcn = @svrcn + 1
    @total = @total + CURSHCNV
  endif
endfor
print 'Total Channel instances ',_matches
print 'QMgr Channel instances  ',_matches - @svrcn
print 'Client Channel instances',@svrcn
print 'Client connections',@total
=echo cmds(YES)
=echo resp(YES)

UPDATE: This script evolved further with the release of MQSCX V9.0.0 and the use of functions – see more in MQSCX Functions.


Morag Hughson
IBM Champion 2016 – Middleware
IBM Certified System Administrator – MQ V8.0


12 thoughts on “MaxChannels vs DIS QMSTATUS CONNS”

  1. I’m using DataPower (an xi52) to connect to MQ and I keep seeing the odd 2025 error so I thought I was hitting maxchannels but it doesn’t appear to be the case based on what you’ve explained above (which is excellent!).


  2. This is weird. I see the following errors in the DataPower logs but nothing in the MQ logs that corresponds. In DataPower, I have max connections set as 4, but the channel on the broker’s MQ is set at 8 max and 8 max per client.

    The messages come in hot and heavy in my load test but shouldn’t it just be slow and not fail?

    12:32:37 PM mq error 38521063 0x80e00648 mpgw (mpg): A new connection could not be opened (Reason Code 2025), MQ Reason Code = 2025, MQ URL = dpmq://IIBMQ/?RequestQueue=MYQUEUE;Transactional=true
    12:32:37 PM mq error 38521063 0x01330011 mpgw (mpg): A new connection could not be opened (Reason Code 2025)
    12:32:37 PM mpgw error 38521063 0x80e00616 mpgw (mpg): Network Error (Connection timed out) on Back interface (URL: dpmq://IIBMQ/?RequestQueue=MYQUEUE;Transactional=true) when processing the server response


  3. Honestly, it’s been so long that I no longer remember. But here are some things to try to see if it gets resolved:
    1. Turn off shared conversations on both the DataPower side and the MQ side (set shared conversations to 0). DataPower doesn’t like them.
    2. Have different queue manager objects for sending and receiving. Then you’ll narrow down whether you’re having issues with sending vs. receiving messages.
    3. Turn off SSL if you have it enabled for MQ.
    4. Check to make sure your MQ channel and DataPower heartbeat settings match.
    5. The usual: upgrade your DataPower firmware to the current fixpack, as well as MQ.


    • Thanks Jeff.

      In DP the shared conversation is 0 (even though in MQ it is the default 10).
      And SSL is mandatory in our test environment. We are using queue manager objects only for sending the messages from DP to MQ.

      When the transactions per sec are increased to around 400 or 500, I can see a sudden increase in the number of connections in MQ. Then in no time it is reaching the maximum value set in DataPower, and then DataPower starts getting these 2025 errors. 😦
      Looks like the connections are not released or reused… but I don’t know why, and I’m not sure whether to make any changes on DataPower or in MQ.


      • What are the settings for your svrconn channel? Max connections, max number per inst? And what’s the max for your datapower mq object?

        Is anything else sharing that svrconn channel?

        Honestly, I’d load test your system without SSL on. I realize it’s a requirement for you, but MQ, SSL, and DataPower have caused us no end of strange issues. It would at least show you one way or the other whether it’s causing an issue, and then you’d have something to put in an IBM PMR.

        What firmware of DP and version of MQ are you on?


      • Hi,

        DP (v7.2) (2 instances connecting to a single queue manager, using the MQ Queue Manager object); values in each instance:
        MaxConnections : 1000
        Sharing conversations : 0

        MQ (V8.0.0.4)
        MaxchannelInstances/Maxchannel : 4500
        Server connection channel params
        MaxInstances and MaxInstancesPerClient : 999999999
        Sharing conversations : 10
        DISCINT : 1800
        HBINT : 300

            SSL : required
            SSLCIPH : ECDHE_RSA_AES_256_GCM_SHA384

        When the transactions per sec is increased there is a sudden increase in the connections shown in MQ, and it is showing as 2000 (1000 from each DP instance).

        Are there any params that can be configured either in DP or in MQ that will make sure the connections will be reused/released?


      • I’d make sure you are on the latest firmware for 7.2. A few MQ SSL related things were fixed.
        I’d also make sure your DataPower heartbeat setting matches your HBINT MQ setting. And specify a cache timeout on your DataPower MQ setting too. This should generally be greater than your heartbeat setting. We have ours set at 15 seconds more than our HB setting.


The team at MQGem would love to hear what you think. Leave your comments here.
