Implementation Optimization Resources: D365 Customer Service

Let’s begin by discussing some of the types of testing you’re likely to encounter during a project or implementation. While there are many different kinds of testing, we’ll focus on the primary ones that we commonly work with when helping customers with their implementations.

Unit Testing

The first type of testing is Unit Testing. When business requirements aren’t fully met by the out-of-the-box functionality of Dynamics 365, you may need to customize or extend the platform to meet those specific needs. This often involves client scripting, web resources, custom applications, or plugins.

We strongly recommend implementing unit testing, particularly for plugins. Unit tests allow early detection of regressions when business requirements or user-experience needs evolve and the code must change. Integrating unit tests into your development process ensures that every code change is automatically checked for regressions.

Unit testing should be an iterative process. If regressions, bugs, or issues arise in custom code, they should be incorporated into your unit test coverage. This ensures that similar issues are detected and prevented in the future.
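
To make the iterative loop above concrete, here is a minimal sketch in Python. The `should_escalate` rule is entirely hypothetical (real D365 plugin logic would live in C# and be tested with a .NET framework); the point is the pattern: extract the business rule into a plain function, test it, and pin every fixed regression with a new test case.

```python
import unittest


# Hypothetical business rule extracted from a plugin: escalate a case
# when the customer is premium-tier or the case has been open too long.
def should_escalate(customer_tier: str, days_open: int) -> bool:
    return customer_tier == "premium" or days_open > 7


class ShouldEscalateTests(unittest.TestCase):
    def test_premium_customer_escalates_immediately(self):
        self.assertTrue(should_escalate("premium", 0))

    def test_standard_customer_under_threshold(self):
        self.assertFalse(should_escalate("standard", 3))

    def test_standard_customer_over_threshold(self):
        # A boundary bug found in testing stays in the suite permanently,
        # so the same regression is caught if it ever reappears.
        self.assertTrue(should_escalate("standard", 8))


# Run the suite explicitly and fail loudly if anything regresses.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShouldEscalateTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Wired into a build pipeline, a suite like this turns every code change into an automatic regression check.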

Whether you’re working with plugins, integrations, or custom apps within Dynamics, unit testing is essential for maintaining a stable and reliable system.

User Testing

User testing is one of the more traditional forms of testing. While automation has become increasingly complex and widespread, manual User Testing still plays a critical role, especially in scenarios that are too complex for automation or involve third-party applications and integrations.

Customers will still rely on manual user testing for certain portions of their testing process, especially when there are complex scenarios or integrations that require validation of data across systems. These types of tests can be difficult or impractical to automate.

User testing is particularly important for smoke testing, regression testing, and validating integrations. Although automation is widely used today for functional and regression testing, there will always be some complex scenarios where manual testing remains necessary.

Automation Testing

Automation Testing is an essential component of any project, and if it’s not part of your implementation strategy, you’re missing out on significant benefits. Automation allows for testing of both the user interface and functionality in a repeatable and efficient manner.

With automation, you can simulate user actions and run tests more frequently, which helps to identify potential issues early in the process without the need for manual intervention. This is particularly useful for detecting functional issues and regressions. Automation can also cover some aspects of performance testing, such as validating how long it takes for forms to load or pages to refresh.
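
The timing checks mentioned above can be sketched as a simple wrapper around whatever action your UI-automation tool performs. This is illustrative only: `load_form` is a placeholder for a real form-load call (driven by Selenium, Playwright, EasyRepro, or similar), and the two-second budget is an assumed value, not a product requirement.

```python
import time


def within_budget(action, budget_seconds: float):
    """Time an arbitrary action and compare it against a performance budget."""
    start = time.perf_counter()
    action()
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds, elapsed


def load_form():
    # Placeholder for a real form-load call in your automation tool.
    time.sleep(0.05)


ok, elapsed = within_budget(load_form, budget_seconds=2.0)
assert ok, f"form load took {elapsed:.2f}s, over the 2.0s budget"
```

Because the wrapper takes any callable, the same budget check can be reused across form loads, page refreshes, and save operations.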

Additionally, automation allows for light integration testing, ensuring that all components within the app are functioning as expected, without the need for continuous manual testing.

Performance Testing

Finally, we have Performance Testing. This type of testing is focused on ensuring the platform performs efficiently at scale. In other words, performance testing measures how the system handles load and stress to meet Service Level Agreements (SLAs).

For example, if your go-live involves 500 users with complex customizations, you’ll want to simulate that user load before deployment. This allows you to evaluate how well the system performs under realistic conditions, ensuring that everything will hold up once all users are online. By testing in advance, you can avoid unexpected failures and confidently meet the demands of your end-users.
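
As a minimal sketch of that idea, the snippet below fans out simulated user sessions concurrently and reports percentile timings. Everything here is illustrative: `user_session` stands in for a scripted user journey (log in, open a case, save), and a real load test would drive the actual APIs or UI with a dedicated tool rather than thread sleeps.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor


def user_session(user_id: int) -> float:
    """Hypothetical stand-in for one scripted user journey; returns duration."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.03))  # placeholder for real work
    return time.perf_counter() - start


def run_load_test(concurrent_users: int) -> dict:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(user_session, range(concurrent_users)))
    durations.sort()
    return {
        "users": concurrent_users,
        "p95_seconds": durations[int(len(durations) * 0.95) - 1],
        "max_seconds": durations[-1],
    }


stats = run_load_test(50)
print(stats)
```

Reporting p95 rather than the average matters here: SLAs are usually about what the slowest users experience, and averages hide the tail.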

Things to Consider

Now that we’ve covered the various types of testing, let’s discuss some important factors to consider before laying out your testing plan and timeline.

  • Environments: Establish a solid environment strategy. Typically, this includes development, testing, and production environments.
  • Client Types: Understand how your end users will interact with Dynamics 365. Testing should reflect the different client types and devices they’ll use.
  • Data: Ensure the data in your testing environment closely resembles production data. This helps to identify potential issues that could arise with real-world data.
  • Test Scenarios: Be mindful of the scenarios you’re testing. Sometimes, regression issues can go undetected if the test data doesn’t reflect real-world conditions.
  • Security: Ensure that security roles, user personas, and the overall business unit structure are properly configured and ready for testing.

Clients

Next, let’s talk about Clients. There are various clients through which users can interact with Dynamics 365, with the primary one being the web client, accessible via browsers. This includes popular browsers like Chrome, Firefox, and Edge, among others.

It’s crucial to understand the browser preferences of your user base and what your organization has standardized on. This will allow you to tailor your testing or automation efforts to these specific browsers and their versions. Ensuring thorough testing on the browsers your users rely on is essential for a smooth experience.
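
One lightweight way to keep that browser coverage explicit is a small matrix that your automation crosses with every scenario. The browser names, versions, and scenario labels below are illustrative assumptions; the real matrix should come from what your organization has standardized on.

```python
# Illustrative browser matrix; replace with your org's supported set.
BROWSER_MATRIX = [
    {"name": "chrome", "min_version": 120},
    {"name": "edge", "min_version": 120},
    {"name": "firefox", "min_version": 115},
]


def plan_runs(matrix, scenarios):
    """Cross every scenario with every supported browser."""
    return [(browser["name"], scenario)
            for browser in matrix
            for scenario in scenarios]


runs = plan_runs(BROWSER_MATRIX, ["smoke_login", "case_create"])
assert len(runs) == len(BROWSER_MATRIX) * 2
```

Keeping the matrix in one place also makes it obvious when a browser is added or dropped from your support policy.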

In addition to the web client, there are also mobile and tablet clients. Depending on the nature of your user base—such as those in field service or on-the-go employees—the mobile client might play a significant role. In these cases, it’s important to determine whether mobile testing, including automation, is necessary for these user experiences.

Another key client is Outlook, which is widely used by organizations. Many users interact with Dynamics 365 directly from their Outlook inbox through the Outlook client. It’s important to understand the integration between Dynamics and Outlook and ensure that you account for any unique user scenarios. If users switch between the web client and Outlook, both experiences should be tested to catch any potential issues or inconsistencies.

Lastly, there’s the Unified Service Desk (USD). While this is slowly being phased out as the browser client becomes session-aware, some scenarios still require USD. If your project involves USD, you should ensure that your automation testing includes this client, using the available tools designed for USD testing.

Environments

Now let’s move on to environments. It’s important to differentiate between the various environments you’ll be working in, such as the First Release environment, sandbox, and production environments.

The First Release environment provides some customers with an early preview of upcoming Microsoft changes. This could include bug fixes, patches, and other updates. If your organization is eligible for First Release, you may receive these updates up to four or five weeks before they are deployed to your production environment.

One key point to note is that your sandbox environments—which include development (Dev) and testing (Test) environments—operate on the same release cycle as your production environment. This means you won’t get early access to new updates in your sandbox. To gain early access, you must be on the First Release environment.

Data

Finally, let’s discuss the importance of data in your testing environments. It’s crucial to ensure that the data you’re using for testing closely mirrors your production data. Avoid using random strings or numbers—having realistic data is essential, especially when testing functionalities like search.

For example, if your production environment has users named “John Doe,” your test environment should reflect similar real-world data. The same principle applies to phone numbers and email addresses. The closer your test data is to production data, the more accurate and relevant your testing will be.

Investing time to build a high-quality test environment is critical. There are frameworks available, such as those on GitHub, that can help you generate test data or mock production data. These tools can streamline the process and ensure that your test data is both comprehensive and realistic.
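
As a minimal sketch of that approach, the snippet below generates contact records that look plausible rather than random. The names, phone format, and `contoso.example` domain are illustrative assumptions; in practice a data-generation library (such as Faker) or a masked extract of production data gives broader coverage.

```python
import random

# Small illustrative name pools; a real generator would draw from much
# larger, locale-aware lists or from masked production data.
FIRST_NAMES = ["John", "Maria", "Wei", "Aisha", "Carlos"]
LAST_NAMES = ["Doe", "Garcia", "Chen", "Khan", "Smith"]


def make_contact(rng: random.Random) -> dict:
    first, last = rng.choice(FIRST_NAMES), rng.choice(LAST_NAMES)
    return {
        "fullname": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@contoso.example",
        "phone": f"+1-555-{rng.randint(0, 9999):04d}",
    }


rng = random.Random(42)  # seeded so test data is reproducible run to run
contacts = [make_contact(rng) for _ in range(100)]
print(contacts[0]["fullname"])
```

Seeding the generator is a deliberate choice: reproducible data means a failing search or duplicate-detection test can be re-run against the exact same records.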

Another important aspect of data management is avoiding a net negative impact on your environment. For instance, if you’re creating cases through automation tests but also deleting some during the process, make sure you’re not deleting more than you create. This imbalance could lead to running out of data over time.

Similarly, if you’re creating cases during testing but not cleaning them up afterward, your environment could accumulate excess data. For example, while your production environment may only have one million cases, after six months of testing, you might find three million cases in your test environment. This data bloat can misalign your test environment with production and potentially degrade performance as the data grows.
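
A simple ledger can make both failure modes above visible at the end of an automation run. This is a minimal sketch, assuming your test harness can report create and delete counts; the `max_leftover` threshold is an illustrative knob, not a recommended value.

```python
class RecordLedger:
    """Track record churn during a test run to catch drains and bloat."""

    def __init__(self):
        self.created = 0
        self.deleted = 0

    def record_create(self, n: int = 1):
        self.created += n

    def record_delete(self, n: int = 1):
        self.deleted += n

    @property
    def net(self) -> int:
        return self.created - self.deleted

    def check(self, max_leftover: int = 0):
        # Deleting more than we create drains seed data over time...
        assert self.net >= 0, f"net loss of {-self.net} records"
        # ...while leaving too many behind bloats the environment.
        assert self.net <= max_leftover, f"{self.net} records left behind"


ledger = RecordLedger()
ledger.record_create(10)
ledger.record_delete(10)
ledger.check()  # balanced run: nothing drained, nothing left behind
```

Failing the build when the ledger is out of balance keeps the test environment's data volume aligned with production month after month.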