
Getting Started

3 Posts authored by: burked Support

In the support organization, we frequently receive customer requests for how to properly refresh a Test/Dev environment with Production data. The steps below are basic guidelines for migrating data from a Production farm to a Test farm. Because all SharePoint farms are different, you may need to adjust these steps to meet your specific business needs.


Production Farm Steps

Backup Your Databases

If you wish to preserve the current workflow history and state, it is recommended that you stop the target Web Application Pool(s) in IIS, along with the SharePoint Timer Service in Services.msc, prior to performing backup operations.

  • SharePoint
    • SharePoint Content Databases
  • Nintex Forms
    • Unless this is a straight migration where the old farm will no longer be used, it is recommended that you generate a new database and a new Nintex Live ID for the farm. This means the Live / Mobile forms stored in the Nintex Forms database will need to be republished on your sites in the test farm
  • Nintex Workflow
    • Workflow content databases
    • Workflow configuration database
      • If you are not using User Defined Actions, Workflow Constants, or Scheduled workflows, and there is no Workflow Progress data in the Nintex Configuration Database that needs to be retained on the target farm, you can create a new Nintex Configuration Database
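The backups themselves can be taken in SQL Server Management Studio or scripted in T-SQL. A minimal sketch is below; the database names and backup path are placeholders for your own content and configuration databases:

```sql
-- Placeholder database names and backup path; substitute your own.
-- COPY_ONLY avoids interfering with the production backup chain.
BACKUP DATABASE [WSS_Content] TO DISK = N'D:\Backups\WSS_Content.bak' WITH COPY_ONLY;
BACKUP DATABASE [NW2013DB]    TO DISK = N'D:\Backups\NW2013DB.bak'    WITH COPY_ONLY;
BACKUP DATABASE [NintexForms] TO DISK = N'D:\Backups\NintexForms.bak' WITH COPY_ONLY;
```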


Development Farm Steps 

Install / Verify Nintex Workflow and Forms Builds on Target Environment


Disable Jobs / Services on the farm

  • Disable the Nintex Workflow Scheduler Timer Job
    • Find the job under SharePoint Central Administration > Monitoring > Timer Jobs > Review Job Definitions
  • Disable the SharePoint Timer Service on all servers in the Farm
    • This can be found in Services.msc
  • Disable Target Web Application Pool(s) in IIS on all servers in the farm (leave SharePoint Central Administration in a ‘Started State’)
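The manual steps above can also be scripted. A rough PowerShell sketch, run from the SharePoint Management Shell; the timer job name filter is an assumption (the display name varies by version), so verify it in Central Administration first:

```powershell
# Disable the Nintex Workflow Scheduler timer job
# (the "*NintexWorkflowScheduler*" name filter is an assumption; verify in Central Administration)
Get-SPTimerJob | Where-Object { $_.Name -like "*NintexWorkflowScheduler*" } |
    ForEach-Object { $_.IsDisabled = $true; $_.Update() }

# Stop and disable the SharePoint Timer Service on this server (repeat on each server in the farm)
Stop-Service -Name "SPTimerV4"
Set-Service  -Name "SPTimerV4" -StartupType Disabled
```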


Restore and Attach Databases

  • Restore SharePoint and Nintex Databases in SQL Server Management Studio
  • Attach the SharePoint content database(s) to the farm using the Mount-SPContentDatabase PowerShell cmdlet
  • Execute the "_PrepareForNewEnvironment" stored procedure on the Nintex Configuration Database
    This procedure removes all existing connections to Nintex databases, along with any pending Nintex Live requests. This is a key step, as it prevents any connection strings from referencing production servers.
    • Launch SSMS and expand the configuration database
    • Expand Programmability, then the Stored Procedures folder
    • Right-click "_PrepareForNewEnvironment" and select Execute Stored Procedure
  • Connect the Nintex Configuration Database to the farm. Go to Central Administration > Nintex Workflow Management > Manage Databases
    • Click the Create button (or the Edit button if a configuration database is already in place)
    • Enter the name of your server and the database
    • Select the "Connect to existing database" option
    • Click "OK"
  • Connect the Nintex Content Databases to the farm
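For the content database attach step, a hedged example of the Mount-SPContentDatabase cmdlet; the database name, server, and URL are placeholders for your environment:

```powershell
# Attach a restored SharePoint content database to its web application.
# "WSS_Content", "SQLDEV01", and the URL are placeholders for your environment.
Mount-SPContentDatabase -Name "WSS_Content" -DatabaseServer "SQLDEV01" -WebApplication "http://dev-sharepoint"
```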

Additional Configuration Steps

  • Ensure All workflow actions have been enabled
    • In Nintex Workflow Management, go to Manage Allowed Actions
    • Select the check box at the top of the first column (ensure all items have been checked)
    • Click Ok
  • Temporarily disable outgoing email in the farm
    **This step is entirely optional, but the workflows will be live once all of the services are brought back online, and the majority of customers we have worked with want to avoid the confusion of emails coming from workflows in dev. This is recommended until you can confirm that all workflows have been stopped or are no longer sending emails.
    • Go to Central Administration > Server Settings > Configure outgoing email settings
      • You can place a test server or a dummy relay in the outbound email
    • Go to Central Administration > Nintex Workflow Management > Global Settings
      • Place a test exchange server or a dummy relay in the outbound email.

Bring the sites and forms back online

  • Reactivate the Web Application Pool(s), the SharePoint Timer Service, and the Nintex Workflow Scheduler timer job that were disabled during the ‘Disable Jobs / Services on the farm’ step
  • Republish any forms using Nintex Mobile / Live (if you did not bring over the live ID / Database)


Common Issues / Tips and Tricks

Explicit References

  • If you are using explicit links rather than the {Common:URL} token in your actions, these will need to be cleaned up prior to running your workflows.  The explicit links will still point at the production environment and could potentially alter production data if in use in a 'Call Web Service' or 'Web Request' action.  This could also extend to Workflow Constants if a URL is defined within a constant that is used across actions.
  • If you are using the 'Execute SQL' command to query or alter data within your production farm, you will need to update the connection string to reflect the proper location after moving your data.

Now that it's 2016, we felt it was about time to revisit one of the most popular posts in community history.


In Part 2, we would like to go deeper into how workflow history affects the performance of your farm, and some best practices on keeping your Nintex database(s) clean and properly maintained.


Before going into specifics around how to maintain a database, it is important to know how Nintex Workflow databases work and how they can affect workflow history and performance. The database itself affects all aspects of workflows. Below is a quick description of what happens when you start a workflow:


  • Workflow is initiated by a user or programmatically
    • A lookup is done to see what database is configured for the history to be placed in. (which DB is mapped to the site collection where the workflow is being run)


  • The SharePoint Workflow Engine compiles and starts the workflow
    • An initial entry is added to the database tables logging the workflow instance ID, siteid, etc.


  • The workflow Engine begins to process the workflow actions
    • Two Records are created for each action that is processed inside of the workflow (start and finish)
    • Records are created/referenced for tasks (if applicable)


  • The workflow Engine finishes processing the workflow
    • Final progress records are added to tables
    • The state of the workflow is updated to reflect the outcome (Completed, cancelled, error)
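The record growth described above can be observed directly. The sketch query below runs against a Nintex content database; the table and column names reflect common Nintex Workflow schemas but may vary by version:

```sql
-- Count progress records per workflow (two per action processed, per the steps above).
-- dbo.WorkflowInstance / dbo.WorkflowProgress names may vary by Nintex version.
SELECT i.WorkflowName, COUNT(*) AS ProgressRecords
FROM dbo.WorkflowInstance AS i
INNER JOIN dbo.WorkflowProgress AS p ON p.InstanceID = i.InstanceID
GROUP BY i.WorkflowName
ORDER BY ProgressRecords DESC;
```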


During all of the above steps, Nintex and the SharePoint workflow engine are utilising the Nintex database(s). Because both are dependent on the database, if the database becomes unavailable, or if its performance degrades to the point where back-end queries time out, the workflow will fail.


Issues related to the database typically first show up as long-running Workflow Timer Jobs, unexplained intermittent workflow failures, or errors when viewing workflow history. Sometimes the problem goes unnoticed for a while; other times it rapidly gets worse.


Some of the more common errors include:


  • Workflow failed to start (intermittent)
  • An error occurred in workflow 'Workflow_Name' (intermittent)
  • Error: Cannot find a corresponding human workflow task ID for this task (intermittent, in workflow history as well as the ULS logs)
  • A timeout occurred (in workflow history as well as the ULS logs)


Please keep in mind that it will not always be an issue with the database when you see these errors. Most will happen intermittently and restarting the workflow will sometimes allow for successful completion. If you are seeing errors such as "An error occurred in workflow 'Workflow_Name' " on a consistent basis, this could be a sign that there is an underlying issue with the workflow itself.


One of the more common questions we get asked is: "How many records can the Nintex database handle before there are performance issues?"


This is one of the hardest questions for support to answer, because many factors go into providing an accurate number. In the past we have typically advised between 5 and 10 million records in the dbo.WorkflowProgress table, because that is the most common range we see when we ask for the report (see Part 1). But this raises the question: why can some environments run hundreds of millions of records before seeing any performance degradation? The answer is: 1) hardware and 2) maintenance.


When we see an environment with tens or hundreds of millions of records chugging along, it is typically backed by large SQL servers (ample memory and high-performance I/O storage) and by processes/jobs that clean, trim, and reindex the databases.
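A quick way to check where your environment sits in that range is the metadata sketch below, which avoids scanning the table itself:

```sql
-- Approximate row count of dbo.WorkflowProgress without a full table scan
SELECT SUM(p.rows) AS ApproxRowCount
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.WorkflowProgress')
  AND p.index_id IN (0, 1);  -- heap or clustered index only
```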


Because keeping the database healthy is an important part of ensuring workflows run as expected, it's crucial that the database(s) be properly maintained.


Outside of adding hardware, below are some of the most common maintenance steps to ensure optimal database performance:

Clean-up Fragmented Indexes


Index fragmentation can occur when a large number of read, write, and delete operations hit a database, leaving logical pages out of order within the data file. Fragmented indexes can cause slow query times and SQL performance degradation. Index fragmentation should be kept under 10%. If fragmentation is between 5% and 30%, you can reorganize the index; if it is above 30%, you will need to rebuild it.


The T-SQL below will retrieve the fragmentation of the indexes for a database named DATABASENAME. Replace 'DATABASENAME' with the name of your database.


  USE DATABASENAME
  GO
  SELECT OBJECT_NAME(i.object_id) AS TableName, i.name AS TableIndexName, phystat.avg_fragmentation_in_percent
  FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') phystat
  INNER JOIN sys.indexes i ON i.object_id = phystat.object_id AND i.index_id = phystat.index_id
  WHERE phystat.avg_fragmentation_in_percent > 10


You can also use the PowerShell Script written by Aaron Labiosa at the link below:


How to quickly check the health of your Nintex Workflow SQL Indexes


Once the fragmentation has been found, you can plan accordingly for reorganizing / rebuilding the indexes.  The link below will provide further information on how to reorganize / rebuild these indexes.


MSDN: Reorganize and Rebuild Indexes
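As a hedged illustration of the two operations, the index and table names below are placeholders; use the names returned by the fragmentation query above:

```sql
-- Reorganize for 5-30% fragmentation (online, lightweight)
ALTER INDEX [IX_Example] ON dbo.WorkflowProgress REORGANIZE;

-- Rebuild for >30% fragmentation (heavier; schedule during a maintenance window)
ALTER INDEX [IX_Example] ON dbo.WorkflowProgress REBUILD;
```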


Keep a clean dbo.WorkflowLog table

This table can grow very rapidly when Verbose logging is enabled for the Nintex Workflow category in the Diagnostic Logging section of Central Administration. It is recommended that Verbose logging only be enabled on this category while actively troubleshooting issues with Nintex Workflow. If you are not troubleshooting an issue but have Verbose logging enabled for this category, disable it and then truncate the table.
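In its simplest form the clean-up is a one-line operation; only run it after Verbose logging has been disabled, as truncation cannot be undone:

```sql
-- Irreversibly clears diagnostic rows from the Nintex configuration database
TRUNCATE TABLE dbo.WorkflowLog;
```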


The exact steps for truncating this table can be found on Pavel Svetleachni's blog post below:


How to purge dbo.WorkflowLog table

Backup / Shrink transaction logs


If your database is in the Full or Bulk-Logged recovery model, it is important that you also back up and shrink your transaction logs to prevent excessive growth. When performing the transaction log backup, ensure that the default option "Truncate the transaction log by removing inactive entries" is selected. This removes any entries that are not currently in the tail (active portion) of the log.
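The same backup can be scripted; the database name and path below are placeholders:

```sql
-- Back up (and thereby truncate the inactive portion of) the transaction log
BACKUP LOG [NW2013DB] TO DISK = N'D:\Backups\NW2013DB_log.trn';
```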



Once the backup has completed, you can shrink the log using the DBCC SHRINKFILE T-SQL command. This is where some legwork from your DBA is required, as you will want to shrink the log file to a size that allows for the expected log growth between backups. There is no hard-set number for log file size, but a good rule of thumb is to keep it between 10% and 25% of your database size (10% if there is minimal growth between backups; 25% if there is large growth).


As an example, the command below will shrink the log file for the DATABASENAME database to 1000 MB. DATABASENAME and 1000 can be altered to match your environment. In order to shrink the log file, you will need to set the database's recovery model to Simple prior to running the shrink operation; you can set it back to Full recovery immediately afterwards.


  USE DATABASENAME;
  GO
  ALTER DATABASE DATABASENAME SET RECOVERY SIMPLE;
  GO
  -- Replace DATABASENAME_log with the logical name of your transaction log file
  DBCC SHRINKFILE (DATABASENAME_log, 1000);
  GO
  ALTER DATABASE DATABASENAME SET RECOVERY FULL;
  GO


Set your file growth and database size limits appropriately


By default, SQL Server will have the following autogrowth and maximum size settings for your database and log files:


File             Autogrowth    Maximum file size
Database         By 1 MB       Unlimited
Transaction Log  By 10%        Limited to 2,097,152 MB


In a SQL environment with a large amount of write operations, it is important that these numbers are managed appropriately. Setting autogrowth to 1 MB can lead to heavy fragmentation on disk and less-than-optimal SQL performance when accessing data. Autogrowth should be set in megabytes; anywhere from 50 to 100 MB is appropriate for a database with a large amount of I/O. The Maxsize setting will not have an impact on performance, but it is recommended to monitor the growth and overall size of these files. Again, there are no hard limits on maxsize, as the behaviour of the Nintex databases will rely on the hardware and overall maintenance of the SQL environment.
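Autogrowth can be adjusted in SSMS (database Properties > Files) or with T-SQL; the logical file names below are placeholders:

```sql
-- Set data and log autogrowth to 100 MB increments (logical file names are placeholders)
ALTER DATABASE [NW2013DB] MODIFY FILE (NAME = N'NW2013DB',     FILEGROWTH = 100MB);
ALTER DATABASE [NW2013DB] MODIFY FILE (NAME = N'NW2013DB_log', FILEGROWTH = 100MB);
```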


Stay tuned for Part 3 where we wake Chad Austin from his hypersleep to provide more information on the relationship between Nintex Databases and the SharePoint workflow engine.



Demystifying Workflow History (Part 1)

How to purge dbo.WorkflowLog table

MSDN: Reorganize and Rebuild Indexes

MSDN: Back Up a Transaction Log

Technet: Shrinking the transaction log


Related products: Nintex Workflow 2016, Nintex Forms 2016, Nintex Workflow 2013, Nintex Forms 2013, Nintex Workflow 2010, Nintex Forms 2010, Nintex Live


Typically, when an error comes up around a missing Feature or a duplicate Content Type or Field within SharePoint, it can be difficult to diagnose exactly what is being referenced. To assist with troubleshooting issues specifically related to Nintex products, below are the Nintex Features, Content Types, and Fields along with their associated IDs.

**Please note that some of the Features, Content Types, and Fields are hidden. Hidden items will not be visible within the SharePoint UI.


Nintex Features

**Note: The feature NintexWorkflowClaimsMigration is only included in SharePoint 2013 environments


Feature                                               Feature ID


Nintex Content Types


Content Type                                           ID
Nintex Biztalk Task                                    0x010801005CC0A86910A24687A76ECAC954D3E3F3
Nintex Workflow Multi Outcome Task                     0x0108010064E42B14ADA442C78E98D686760A8493
Nintex Workflow Multi Outcome Task using InfoPath      0x0108010064E42B14ADA442C78E98D686760A8493006EBD0FA4731041D9804386A7FEA568DC
Nintex Workflow Multi Outcome Task using Nintex Forms  0x0108010064E42B14ADA442C78E98D686760A8493000568DBB766D0491684897A230753AAF9
Nintex Workflow Task                                   0x0108010079DBDE612F7B46928C6A2516BA2CAE37
Nintex Workflow Task using InfoPath                    0x0108010079DBDE612F7B46928C6A2516BA2CAE3700E0B65C5281234030AA8CA4D8F8910E72
Nintex Workflow Task using Nintex Forms                0x0108010079DBDE612F7B46928C6A2516BA2CAE3700D4A837248A47E040ABD1A569613E898B
Workflow Snippet                                       0x010100F815D979DC2B4F48A9DBCA64AED3C636
Workflow Template                                      0x010100F8376F5313D041EF85718B229F4FBFA8


Nintex Fields (Site Columns)


Internal Name               ID                                     Group
ApproverComments            819e6cf2-36c3-4013-8aef-c99712c26036   Nintex Workflow
AssociatedContentType       4a07d815-9c92-4b43-a527-29c3b76a65bf   Nintex Workflow
NintexWorkflowDescription   d97c0a58-77fd-48cf-8a7a-9ffdfdd11a6e   Nintex Workflow
TemplateCategory            6abeaa08-87c5-4385-9c81-39cbb72f99a6   Nintex Workflow
TemplateLcid                837c2b97-e338-446d-a993-c96fd0c4b5d7   Nintex Workflow
WorkflowCategory            d0d7bbf9-95cb-4661-b7a0-9bec8e968c3e   Nintex Workflow
WorkflowPartDescription     607ec2f6-48eb-4a14-9e1e-bc48043e157e   Nintex Workflow
WorkflowPartID              7f3814a0-6ef1-4856-bf1d-1a06f8437dc4   Nintex Workflow
