What average performance change is guaranteed to happen, assuming all cluster nodes are fully operational?

Refer to the exhibit. An organization uses a 2-node Mule runtime cluster to host one stateless API implementation. The API is accessed over HTTPS through a load balancer that uses round-robin for load distribution.
Two additional nodes have been added to the cluster and the load balancer has been configured to recognize the new nodes with no other change to the load balancer.
What average performance change is guaranteed to happen, assuming all cluster nodes are fully operational?

A. 50% reduction in the response time of the API
B. 100% increase in the throughput of the API
C. 50% reduction in the JVM heap memory consumed by each node
D. 50% reduction in the number of requests being received by each node


9 thoughts on "What average performance change is guaranteed to happen, assuming all cluster nodes are fully operational?"

  1. A. 50% reduction in the response time of the API
    — If the original 2 nodes are already handling each request as fast as possible, the response time would stay the same.

    B. 100% increase in the throughput of the API
    — If the original throughput already matches the maximum load the clients generate, the throughput would stay the same.

    C. 50% reduction in the JVM heap memory consumed by each node
    — If requests (almost) always arrive one at a time throughout the day, the heap memory consumption would stay the same.

    D. 50% reduction in the number of requests being received by each node
    — This is the only outcome that can be guaranteed.

  2. B might look correct, as the number of requests processed per second might increase, but is it guaranteed to increase by 100%?
    The question asks about guaranteed behavior, and it is explicitly stated that the load balancer uses round-robin. So it is guaranteed that each node will see a 50% reduction in load. I go for D.

    1. If D is the correct answer, then the total throughput (mentioned in B) would not change (4 nodes × 50% load = 2 nodes × 100% load).

      But the throughput might increase, so the answer is B.

  3. B is the correct answer.

    A. 50% reduction in the response time of the API – is not a guarantee

    C and D are traps: a "50% reduction … each node" cannot apply to nodes 3 and 4, which did not exist before.

    C. 50% reduction in the JVM heap memory consumed by each node
    D. 50% reduction in the number of requests being received by each node

    In terms of performance, horizontal scaling increases throughput.

  4. C is correct because JVM heap memory utilization increases mainly due to garbage collection. Say that initially there were 2 servers handling load 'A', which caused the GC to run twice. Now, for the same load 'A', there are 2 more JVMs, which reduces the load processed at each node and hence the GC activity.
    GC runs when the JVM heap fills with Java objects and grows toward its size limit. Less load means fewer Java objects created on the heap.

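The round-robin guarantee discussed in the comments above can be checked with a short simulation (a minimal sketch; the request total and node counts are illustrative, not from the exhibit):

```python
from collections import Counter

def round_robin_counts(num_requests, num_nodes):
    """Distribute requests across nodes in round-robin order
    and count how many each node receives."""
    counts = Counter()
    for i in range(num_requests):
        counts[i % num_nodes] += 1
    return counts

total = 1000
before = round_robin_counts(total, 2)  # original 2-node cluster
after = round_robin_counts(total, 4)   # after adding 2 nodes

# Each of the original nodes goes from 500 to 250 requests:
# a 50% reduction per node, regardless of response time,
# throughput, or heap behavior.
print(before[0], after[0])  # 500 250
```

Note that this only demonstrates the per-node request count (option D); whether throughput or response time improves depends on whether the original nodes were saturated, which the question does not state.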
