Dynamics 365 F&O: Data Archival and Long-Term Retention (LTR)

Business Challenge

For any ERP system, one significant challenge is the exponential growth of data. The data model in Dynamics 365 Finance and Operations (D365 F&O) is highly normalized, so everyday transactional processes write rows across many tables, resulting in a substantial amount of data.

Customers using Dynamics 365 Finance and Operations generate business data continuously. On average, medium to upper-mid-market businesses accumulate over 1 TB of data in less than five years.

Data growth scales with both the age of the business and the volume of transactions made year over year (YoY).

This leads to several challenges:

  • Maintaining a massive volume of historical data alongside live data creates performance issues.
  • Existing processes degrade over time as older data accumulates.
  • SQL Server indexing helps only up to a point.
  • Database expansion raises concerns about the ERP/CRM system's ability to support anticipated transaction growth in the near future.

Data Clean-Up in F&O

When discussing database growth, it’s crucial to consider what actions can be taken independently, without involving Microsoft. Here are some effective strategies:

  1. Utilize Existing Clean-Up Routines: Leverage the standard clean-up routines that Microsoft provides out of the box.
  2. Make the Easy Way the Right Way:
    • Default Enablement of Clean-Up Routines: Ensure certain clean-up routines are enabled by default.
    • SysDataBaseLog: A feature released in version 10.0.32 that shows a database log clean-up reminder when configuring database logging.
    • Automatic Deletion: Job history entries and related staging table data older than 90 days are automatically deleted by default (introduced in version 10.0.36).
    • Batch Job Clean-Up Routine: Implemented in version 10.0.39, this routine helps in cleaning up batch jobs efficiently.

By following these steps, you can manage database growth more effectively and maintain optimal system performance.
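To make the age-based rules above concrete, here is a minimal sketch of the kind of clean-up the 10.0.36 change describes: job-history entries older than a cutoff are deleted, recent ones are kept. The record shape and function name are illustrative assumptions, not F&O APIs; only the 90-day default comes from the release note.

```python
from datetime import date, timedelta

RETENTION_DAYS = 90  # default from the 10.0.36 release note

def purge_old_job_history(entries, today=None):
    """Return only the entries that survive an age-based clean-up pass."""
    today = today or date.today()
    cutoff = today - timedelta(days=RETENTION_DAYS)
    # Entries created before the cutoff are dropped (i.e., "deleted").
    return [e for e in entries if e["created"] >= cutoff]

history = [
    {"job": "ImportOrders", "created": date(2024, 1, 5)},
    {"job": "PostInvoices", "created": date(2024, 5, 20)},
]
kept = purge_old_job_history(history, today=date(2024, 6, 1))
# With a 2024-06-01 run date the cutoff is 2024-03-03, so only the
# May entry survives.
```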

Importance of Data Archival and LTR

Understanding the importance of data archival and long-term retention (LTR) is crucial, especially given the inevitable growth in data volume and the corresponding impact on system performance. After performing initial clean-up routines, there might still be essential data, such as inventory transactions, that you want to retain in your system.

In such cases, data archiving becomes the primary solution. Archiving offers several benefits:

  • Database Growth Management: Helps manage and control the exponential growth of data.
  • System Performance: Enhances system performance by reducing the volume of active data.
  • Cost Efficiency: Mitigates costs associated with data storage. Starting in April 2024, Microsoft enforces new storage cost policies, making efficient data management even more critical.

Archive Framework

Here’s a quick look at the high-level architecture of the data flow in the archive data framework.

The architecture consists of two primary components: the F&O Database and the Dataverse Managed Data Lake. These components communicate to ensure effective data archiving and retention.

Defining Retention Policy

When defining your retention policy, consider the following example:

  • You have a sales line with five years’ worth of data.
  • Your retention policy states that you want to keep two years of data in the live environment (active data) within F&O.

To implement this policy:

  1. Active Data: Retain the most recent two years of data in the F&O database.
  2. Archived Data: Archive the remaining three years of data using the archiving solution.

Once the retention policy is defined, the framework manages all five years of data by transferring it from the F&O database to the Dataverse Managed Data Lake. Two years' worth of data remains active in F&O, while the remaining three years are securely archived in the Dataverse Managed Data Lake.

This approach ensures that the live database is not overloaded with historical data, thereby maintaining system performance and reducing storage costs.
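The two-year split in the example above boils down to a single cutoff-date comparison. The sketch below illustrates the idea only; the row shape and function name are assumptions, not part of the F&O archive framework.

```python
from datetime import date

RETENTION_YEARS = 2  # policy from the example: keep two years live

def split_by_retention(rows, today):
    """Partition rows into (active, archive) buckets by invoice date."""
    cutoff = date(today.year - RETENTION_YEARS, today.month, today.day)
    active = [r for r in rows if r["invoice_date"] >= cutoff]
    archive = [r for r in rows if r["invoice_date"] < cutoff]
    return active, archive

# Five years of sales lines, one per year from 2019 to 2023.
rows = [{"id": i, "invoice_date": date(2019 + i, 6, 1)} for i in range(5)]
active, archive = split_by_retention(rows, today=date(2024, 6, 1))
# Two years stay live in F&O; the other three are routed to the archive.
```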

Archival Job Process

Application administrators can schedule archival jobs and specify criteria for supported functional scenarios. The data from the tables for these scenarios is archived in Dataverse for long-term retention.

When an archival job is initiated from the Finance and Operations (F&O) archive workspace, it follows these stages:

  1. Replication to Dataverse: Data from the live application tables in the functional scenario being archived is replicated to Dataverse for long-term retention.
  2. Marking Data for Archival: Data that meets the archival criteria is marked as ready for archiving in the live F&O application tables.
  3. Retention Marking: The live table records are marked as retained (archived) in Dataverse long-term retention.
  4. Reconciliation Process: A reconciliation process verifies that all the live application table records marked as ready for archiving are available in Dataverse long-term retention.
  5. Data Movement:
    • Live application data previously marked as ready for archiving is moved to history tables in the F&O apps database.
    • This data is then deleted from the live application tables.

Specific inquiry pages in Dynamics 365 F&O apps can access this history table data. Data from history tables can be either restored to the live table or permanently purged. The permanent purge functionality will be supported in a future release.

Restoring Data from History Tables to Live Tables

Data from history tables can be restored back to live tables through the archive workspace. When data is restored from history tables to live tables, the corresponding archived data in Dataverse long-term retention also undergoes a status change from inactive to active, indicating that the data is no longer considered archived.

This restoration process ensures that any necessary historical data can be reintegrated into the live environment when required, maintaining data continuity and accessibility. The system updates the status of the data in Dataverse to reflect its current state, ensuring accurate data management across both live and archived environments.
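The restore path is the mirror image of the archival move: the record returns from the history table to the live table, and its Dataverse LTR copy flips from inactive back to active. This sketch continues the illustrative data shapes used above; the field names and statuses are assumptions, not documented F&O fields.

```python
def restore_from_history(record_id, live_table, history_table, dataverse_ltr):
    """Move a record back to the live table and reactivate its LTR copy."""
    restored = [r for r in history_table if r["id"] == record_id]
    history_table[:] = [r for r in history_table if r["id"] != record_id]
    live_table.extend(restored)
    # Flip the archived copy's status so it is no longer considered archived.
    for r in dataverse_ltr:
        if r["id"] == record_id:
            r["status"] = "active"
    return live_table

live, hist = [], [{"id": 7, "amount": 100}]
ltr = [{"id": 7, "status": "inactive"}]
restore_from_history(7, live, hist, ltr)
# Record 7 is live again, the history table is empty, and its LTR copy
# is marked active.
```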


Customizations in the Archival Framework

The archival framework supports custom fields and custom tables within the functional scenarios. This allows customers to build their own archival scenarios for custom tables. Before initiating an archival job, customers must configure their table customizations.

This customization capability ensures that the archival framework can be tailored to meet specific business needs, providing flexibility and control over how data is managed and archived. By configuring custom tables and fields appropriately, businesses can ensure their unique data requirements are met within the archival process.