Dynamics 365: Performance Benchmark

The performance benchmark for Dynamics 365 (D365) is a familiar concept to most of its users and partners. At its core, it simulates an interactive workload with concurrent users, allowing you to evaluate how a particular solution performs under load.

It’s essential to understand the objectives of the performance benchmark: it provides evidence that the solution you’ve developed meets the intended business performance objectives and constraints.

Key questions to consider include:

  1. How does your solution cater to real-world workloads?
  2. Can the implemented solution manage 1,000 users simultaneously?
  3. Will the solution maintain its current performance levels over the next three years?
  4. Is the performance of the solution at the initial Go-Live sufficient for subsequent larger implementations in other countries?

At its heart, the goal is to validate that the solution can handle the intended transaction or user volume within an acceptable duration or response time, starting from a defined data baseline.
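One way to make that goal concrete is to write the targets down as structured data before any test is designed. The sketch below is purely illustrative; the scenario names and figures are assumptions, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class PerformanceObjective:
    """One measurable target for a benchmark scenario (illustrative fields)."""
    scenario: str               # business process being exercised
    concurrent_users: int       # peak simultaneous users expected
    target_p95_seconds: float   # acceptable 95th-percentile response time
    hourly_volume: int          # transactions expected per hour

# Hypothetical targets -- replace with figures agreed with the business.
objectives = [
    PerformanceObjective("Create sales order", 1000, 3.0, 5000),
    PerformanceObjective("On-hand stock inquiry", 300, 2.0, 12000),
]
```

Capturing the objectives in this form makes it straightforward to compare measured results against the agreed targets after each benchmark run.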

Outcomes/Results

Typically, the results of a performance benchmark encompass three primary elements:

  1. Performance benchmark report, highlighting varied scenarios (such as sales orders, stock verification, and so forth).
  2. Detection and resolution of issues across different iterations.
  3. Iterative optimization processes and improvements.

This benchmarking process ensures that the solution effectively addresses the essential business scenarios as intended.

Timing of the Performance Benchmark

The scheduling of the benchmark largely depends on your specific needs. You might opt to conduct the benchmark:

  • Early in the project during the analysis phase, particularly if your goal is to validate the standard solution and its capabilities.
  • During the design and development phase, especially if you’re testing an Independent Software Vendor (ISV) solution, since that solution is already available from the ISV.
  • As an element of performance testing, to confirm that the end solution, comprising standard components from Microsoft, functions optimally and aligns with business goals.
  • Post Go-Live, when you can run the performance benchmark against the final solution, starting from a new data reference point.

Methodology

When devising a methodology for the performance benchmark, it is crucial to have a well-thought-out approach. A common oversight on many projects is a project manager simply instructing a technical lead to execute a performance benchmark. Without clear guidance, the technical lead resorts to whatever scenarios seem relevant, operating without specific targets. This often culminates in inconclusive or non-existent results.

It’s imperative to home in on the business-critical aspects. Concentrating on performance requirements and setting realistic objectives are vital components of this process. It is equally important to document these requirements and objectives in Business Requirements Documents (BRD) or Functional Requirements Documents (FRD).

Always keep the overarching goals and specific objectives at the forefront. This involves analyzing traces, rectifying any detected issues, and continually monitoring progress.

For a structured approach to performance benchmarking, consider the following sequence:

  1. Define performance objectives.
  2. Identify relevant scenarios.
  3. Design and develop the system.
  4. Design the required tests.
  5. Create test simulations and assemble the necessary data.
  6. Configure the test environment.
  7. Execute the tests (see the sketch after this list).
  8. Iteratively tune the system and retest as necessary.
  9. Compile and present the report.
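As a purely illustrative sketch of steps 5 through 7, the snippet below fans a single scenario out across simulated concurrent users and reports response-time percentiles. The endpoint URL, entity, and token handling are assumptions; in practice a dedicated load-testing tool is normally used rather than a hand-rolled script.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party; pip install requests

# Assumptions: the URL and bearer token are placeholders for whatever your
# test harness actually targets (for example, a D365 OData entity endpoint).
BASE_URL = "https://yourorg.operations.dynamics.com/data/SalesOrderHeadersV2"
TOKEN = "<access-token-obtained-via-azure-ad>"

def run_scenario() -> float:
    """Execute one iteration of the scenario and return its response time."""
    start = time.perf_counter()
    response = requests.get(
        BASE_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"$top": 10},
        timeout=30,
    )
    response.raise_for_status()
    return time.perf_counter() - start

def run_benchmark(concurrent_users: int, iterations_per_user: int) -> None:
    """Fan the scenario out across simulated users and report percentiles."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [
            pool.submit(run_scenario)
            for _ in range(concurrent_users * iterations_per_user)
        ]
        timings = sorted(f.result() for f in futures)
    p95 = timings[int(len(timings) * 0.95) - 1]
    print(f"median={statistics.median(timings):.2f}s  p95={p95:.2f}s")

if __name__ == "__main__":
    run_benchmark(concurrent_users=50, iterations_per_user=10)
```

The measured median and 95th-percentile figures can then be compared against the documented objectives, and the tune-and-retest loop in step 8 repeats until the targets are met.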

Common Types of Scenarios

When undertaking a performance benchmark, the primary focus often centers on the interactive workload. This primarily involves concurrent users engaged in a specific process or a set of simultaneous processes.

To elevate the realism and comprehensiveness of the benchmark, integrating diverse types of workloads is essential. This approach mirrors the multifaceted demands of actual operational environments. Some suggestions include the following (a mixed-workload sketch follows the list):

  1. Batch Jobs: Incorporate batch jobs that reflect routine tasks such as posting transactions or handling incremental Material Requirements Planning (MRP) workloads.
  2. Integrations: Simulate integrations that would be active during typical working hours once the system is live. This could cover anything from data syncs with other platforms to real-time communication with external systems.
  3. Reporting and Analytics: Emulate the generation and processing of reports and analytics. Given the intensive computational nature of some analytics tasks, this ensures the system can manage data-driven decision-making processes under load.
  4. Further Considerations: Depending on the operational landscape, other workload simulations could be introduced. These might include automated system backups, simultaneous data imports/exports, or the concurrent use of mobile and web applications.
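As a purely illustrative example of blending these workload types, the sketch below interleaves a weighted mix of placeholder tasks for a fixed test window. The functions and weights are assumptions standing in for whatever interactive scenarios, batch jobs, and integration feeds apply to your solution:

```python
import random
import threading
import time

# Placeholder drivers -- each simulates one workload type with a sleep.
def interactive_user() -> None:
    time.sleep(random.uniform(0.5, 2.0))   # think time plus response

def batch_posting_job() -> None:
    time.sleep(5.0)                         # long-running batch run

def integration_sync() -> None:
    time.sleep(1.0)                         # inbound/outbound message

# Relative weights approximating how often each workload fires during the
# test window; these ratios are assumptions, to be replaced with real data.
WORKLOADS = [
    (interactive_user, 0.80),
    (integration_sync, 0.15),
    (batch_posting_job, 0.05),
]

def mixed_workload(duration_seconds: int, worker_count: int) -> None:
    """Run a weighted mix of workloads for a fixed window."""
    deadline = time.monotonic() + duration_seconds

    def worker() -> None:
        while time.monotonic() < deadline:
            fn = random.choices(
                [w[0] for w in WORKLOADS],
                weights=[w[1] for w in WORKLOADS],
            )[0]
            fn()

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    mixed_workload(duration_seconds=60, worker_count=20)
```

The weighting should be derived from observed or forecast production activity rather than guessed, so that the benchmark reflects the load the system will actually face.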