


Every night, Technical Community Manager Frank Field logs onto Nintex Connect before heading to bed and opens up a tab for everything posted since he last visited – new blog posts, new queries, new responses to questions.


When he arrives in the Nintex office each morning, he does the same thing – reviewing anything new. With 8,731 active Nintex Connect users, that task can take two to three hours, but Frank considers it invaluable to managing the Nintex Connect community site.


Plus, he loves the jolt of satisfaction he gets from seeing how involved people are in the Nintex community.


“It’s immensely gratifying to come into work and open up 50 bazillion tabs,” Frank says. “I look at everything. I want to know what’s happening in the community. Members make our community vibrant. And it’s a really vibrant community.”


January 25 is Community Manager Appreciation Day. This makes it the perfect time to take you behind the scenes of the Nintex Connect community site. It’s also an ideal time to share our appreciation for the fantastic job that Frank does managing the site.


Read What’s It Like to Be Nintex Connect Community Manager on the Nintex Blog for the scoop on what it’s like running Nintex Connect and to read what some of your fellow community members think of Frank.

This tutorial describes how to enable a scenario for "deep linking" to automatically submit scanned codes using Nintex Mobile. The tutorial also describes the user experience when accessing the link through the QR code.


Forms designer steps

This section describes how to enable the scenario from Getting even deeper linking with Nintex Mobile and Nintex Mobile Enterprise by creating Nintex Mobile links that log confirmations for a security protocol. The process involves creating the form, building the links for each door, generating the QR codes for the links, and implementing the security system.


The Nintex Mobile links in this tutorial do the following:

  • Customize the loading message so staff members know that the Security Check form is loading
  • Point to the Security Check form
  • Pre-populate the Title field with the door name
  • Enable automatic submission of the form
  • Enable display of the successful submission message so staff members know that the scanned QR code was successfully submitted


To create the form:

  1. On your SharePoint site, create a list named "Security Check" and add a column named "Checkin" using the type Date and Time with the default value of Today's Date.
  2. In Nintex Forms designer, add the Nintex Mobile layouts and then publish the default form.


To build the links for each door:

  • Use the following examples based on the door names Back Roller Door East, Back Door West, and Front Door.
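
As a rough illustration, the per-door links can be assembled programmatically. Note that the `nintex-mobile://example-form` base and the `Title`/`AutoSubmit` parameter names below are placeholders, not the documented Nintex Mobile link syntax (see the Nintex Forms for Office 365 help for the real format):

```javascript
// Hypothetical sketch only: the base link and parameter names are
// placeholders, not the documented Nintex Mobile link syntax.
var doors = ["Back Roller Door East", "Back Door West", "Front Door"];

function buildLink(door) {
  // Door names contain spaces, so they must be URL-encoded.
  return "nintex-mobile://example-form" +
    "?Title=" + encodeURIComponent(door) +
    "&AutoSubmit=true";
}

var links = doors.map(buildLink);
```

Whatever the real syntax turns out to be, the key point is the same: one link per door, with the door name URL-encoded into the pre-populated Title field.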






To generate the QR codes:

  • Use any QR code generator that accepts a URL to convert each of the Nintex Mobile links into a QR code image.

To implement the new security check system:

  1. Print out the QR codes and affix them to the appropriate doors.
  2. Make sure all staff have the Nintex Mobile app and a QR reader on their phones.


User experience

This section describes the user experience when accessing the above Nintex Mobile links.


  1. The staff member uses the QR reader app on the phone to scan the QR code on the door.
    The customized loading message briefly appears: "Loading Security Form."
    If the staff member is not logged into the Nintex Mobile app, then the sign in screen appears.
  2. If needed, the staff member signs in to the Nintex Mobile app.
    The default message for a successful form submission appears, indicating that the Security Check form was successfully submitted with the QR code.
  3. The staff member dismisses the message.


For more information about Nintex Mobile links, see the Nintex Forms for Office 365 help.

This tutorial describes how to use range and regular expression validation in Nintex Mobile, using a lead generation scenario that validates email addresses and follow-up dates. In this scenario, email addresses are validated for inclusion of the "@" symbol and a period, and follow-up dates are validated to occur within three months of the trade show at which the forms are filled out. The trade show takes place in August 2016.


To achieve this scenario:

  1. Create a SharePoint list called "Leads" and add the following columns: Name, Region, Email, Phone, Followup date.
  2. In Nintex Forms designer, double-click the Single Line Textbox control for Email.

  3. In the form control configuration dialog box, expand the Validation section and then select Yes for Use a regular expression.
  4. In the Regular expression field, copy and paste the following expression.
  5. In the Regular expression error message field, copy and paste the following text or enter a similar message.
    Invalid format for email address. Please reenter.
    Example screenshot of validation configuration:

  6. Click Save to save changes and close the configuration dialog box.
  7. In Nintex Forms designer, double-click the Date/Time control for Followup date.

  8. In the form control configuration dialog box, expand the Validation section and then select Yes for Use range validation.
  9. In the Minimum value field, insert the reference "Current Date."
  10. In the Maximum value field, specify a date occurring several months after the trade show.
    Example: 2017/01/01
  11. In the Range validation error message field, copy and paste the following text or enter a similar message.
    Please enter a date in the future.
    Example screenshot of validation configuration:

  12. Click Save to save changes and close the configuration dialog box.
  13. Add the Nintex Mobile layouts and then publish the form.
    Now the form displays errors if users attempt to submit with an email address or follow-up date that does not pass validation.
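
The two validations can be sketched in plain JavaScript outside of the designer. The email pattern below is an assumption standing in for the expression used in step 4 (it simply requires an "@" and a period), and the date check mirrors the range configured in steps 9 and 10:

```javascript
// Sketch of the two validations in plain JavaScript. The email pattern
// is an assumption, not the tutorial's exact expression: it requires an
// "@" and a period in the domain part.
var emailPattern = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;

function isValidEmail(value) {
  return emailPattern.test(value);
}

// Range validation: the follow-up date must fall between "today" and the
// configured maximum (2017/01/01 in the example above).
function isValidFollowup(followup, today) {
  var max = new Date(2017, 0, 1); // JavaScript months are 0-based
  return followup >= today && followup <= max;
}
```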

Products : Nintex Forms 2013, Nintex Forms 2010


Most forms use information stored within the site that contains the form. Sometimes, though, they need to validate against external data (business databases).

Here is a sample that can help you achieve this. I used a Northwind OData service so that you can test the solution directly in your environment.


In the "Custom JavaScript" section of the form settings, insert these two functions:

//Defines whether input is a Northwind territory
function IsNorthwindTerritory(input) {
  var isValid = false;
  //Synchronous request so the result can be returned directly.
  //Prepend your Northwind OData service endpoint to the url below.
  $.ajax({
    url: "$filter=TerritoryDescription eq '" + input + "'",
    async: false
  }).always(function (data) {
    isValid = (data.value.length > 0);
  });
  return isValid;
}

//Uses the previous function to perform a custom validation on a control
function IsNorthwindTerritoryValidation(source, arguments) {
  var isKnownTerritory = IsNorthwindTerritory(arguments.Value);
  //errorMessage could be set here, but it is better to use the Nintex Forms settings
  //source.errormessage = "This is not a Northwind territory";
  arguments.IsValid = isKnownTerritory;
}

The first one defines the business rule. It can be used directly in a Nintex Forms rule.

The second one wraps the first and sets the objects expected by a custom validation.


First example: Use the business rule to show/hide a panel

Choose a panel that you would like to hide when the input does not match the business rule.

Add a rule from the ribbon button and use the previous "IsNorthwindTerritory" function in the condition formula.

You should see the second panel only if you type a Northwind territory (e.g. Bedford).


Second example: Use the business rule as a custom field validator

Select the "Territory" field and, in its settings, set a custom validation based on the second JavaScript function, "IsNorthwindTerritoryValidation".

You should see behavior like this when the form is validated:


Now you just have to replace the business logic with your own to perform validations against external data.

Now that it's 2016, we felt it was about time to revisit one of the most popular posts in community history.


In Part 2, we would like to go deeper into how workflow history affects the performance of your farm, and some best practices on keeping your Nintex database(s) clean and properly maintained.


Before going into specifics around how to maintain a database, it is important to know how Nintex Workflow databases work and how they can affect workflow history and performance. The database itself affects all aspects of workflows. Below is a quick description of what happens when you start a workflow:


  • Workflow is initiated by a user or programmatically
    • A lookup is done to see which database is configured to store the history (i.e., which database is mapped to the site collection where the workflow is being run)


  • The SharePoint Workflow Engine compiles and starts the workflow
    • An initial entry is added to the database tables logging the workflow instance ID, siteid, etc.


  • The workflow engine begins to process the workflow actions
    • Two records are created for each action that is processed inside the workflow (start and finish)
    • Records are created/referenced for tasks (if applicable)


  • The workflow engine finishes processing the workflow
    • Final progress records are added to tables
    • The state of the workflow is updated to reflect the outcome (Completed, cancelled, error)


During all of the above steps, Nintex and the SharePoint workflow engine are utilising the Nintex database(s).  Because both are dependent on the database, if the database becomes unavailable or if the performance of the database is impacted to a point where the back-end queries timeout, the workflow will fail.


Issues related to the database typically start showing up as long-running Workflow Timer Jobs, unexplained intermittent workflow failures, or errors when viewing workflow history. Sometimes these issues go unnoticed for a while; other times they rapidly get worse.


Some of the more common errors include:


  • Workflow failed to start (intermittent)
  • An error occurred in workflow 'Workflow_Name' (intermittent)
  • Error: Cannot find a corresponding human workflow task ID for this task (intermittent, in workflow history as well as the ULS logs)
  • A timeout occurred (in workflow history as well as the ULS logs)


Please keep in mind that it will not always be an issue with the database when you see these errors. Most will happen intermittently and restarting the workflow will sometimes allow for successful completion. If you are seeing errors such as "An error occurred in workflow 'Workflow_Name' " on a consistent basis, this could be a sign that there is an underlying issue with the workflow itself.


One of the more common questions we get asked is: "How many records can the Nintex database handle before there are performance issues?"


This is one of the hardest questions support can be asked, because many factors go into providing an accurate answer. Typically in the past we have advised between 5-10 million records in the dbo.workflowprogress table, because that is the most common range we see when we ask for the report (see Part 1). But this raises the question: why can some environments run hundreds of millions of records before seeing any performance degradation? The answer is: 1.) Hardware and 2.) Maintenance.


When we see an environment with tens or hundreds of millions of records chugging along, it is typically backed by large SQL servers (memory and high-performance I/O storage) and by processes/jobs to clean/trim/reindex the databases.


Because keeping the database healthy is an important part of ensuring workflows run as expected, it's crucial that the database(s) be properly maintained.


Outside of adding hardware, below are some of the most common maintenance steps to ensure optimal database performance:

Clean-up Fragmented Indexes


Fragmentation of indexes can occur when there are a large number of read/write/delete operations occurring on a database.  This can cause logical pages to be out of place relative to where they should be located within the data file.  Fragmented databases can cause slow query times and SQL performance degradation.  Index fragmentation should be kept under 10%.  If an index is 5%-30% fragmented, you can reorganize it; if it is more than 30% fragmented, you will need to rebuild it.
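
Once you know the fragmentation percentage, the corresponding maintenance commands look roughly like this; the index and table names below are examples only, so use the names returned by the fragmentation query for your environment:

```sql
-- Index and table names are examples only.

-- 5%-30% fragmentation: reorganize
ALTER INDEX IX_Example ON dbo.WorkflowProgress REORGANIZE;

-- More than 30% fragmentation: rebuild
ALTER INDEX IX_Example ON dbo.WorkflowProgress REBUILD;
```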


The T-SQL below retrieves the fragmentation of the indexes in a database named DATABASENAME.  Replace 'DATABASENAME' with the name of your database.


  USE DATABASENAME
  GO
  SELECT OBJECT_NAME(i.object_id) AS TableName, i.name AS TableIndexName, phystat.avg_fragmentation_in_percent
  FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') phystat
  INNER JOIN sys.indexes i ON i.object_id = phystat.object_id AND i.index_id = phystat.index_id
  WHERE phystat.avg_fragmentation_in_percent > 10


You can also use the PowerShell Script written by Aaron Labiosa at the link below:


How to quickly check the health of your Nintex Workflow SQL Indexes


Once the fragmentation has been found, you can plan accordingly for reorganizing / rebuilding the indexes.  The link below will provide further information on how to reorganize / rebuild these indexes.


MSDN: Reorganize and Rebuild Indexes


Keep a clean dbo.WorkflowLog table

This table can grow very rapidly when Verbose logging is enabled for the Nintex Workflow category in the Diagnostic Logging section of Central Administration.  It is recommended that Verbose logging be enabled on this category only when troubleshooting issues with Nintex Workflow.  If you are not actively troubleshooting an issue but have Verbose logging enabled for this category, disable it and then truncate the table.


The exact steps for truncating this table can be found on Pavel Svetleachni's blog post below:


How to purge dbo.WorkflowLog table

Backup / Shrink transaction logs


If your database is in the Full or Bulk-Logged recovery model, it is important that you also back up and shrink your transaction logs to prevent excessive growth. When performing the transaction log backup, ensure that the default selection "Truncate the transaction log by removing inactive entries" is selected.  This removes any entries that are not currently in the tail of the log (the active log).



Once the backup has completed, you will want to shrink the log file using the DBCC SHRINKFILE T-SQL command.  This is where it will take some legwork from your DBA, as you will want to make sure you shrink the log file to a size that allows for the expected log-file growth between backups.  There is no hard and fast number for the log-file size, but a good rule of thumb is to keep it anywhere from 10-25% of your database size (10% if there is minimal growth between backups; 25% if there is large growth between backups).


As an example, the command below will shrink the log file for the DATABASENAME database to 1000MB.  DATABASENAME and 1000 can be altered to match your environment.  In order to shrink the log file, you will need to set the recovery model for the database to Simple prior to running the shrink operation.  You can set the database back to Full recovery immediately after shrinking.


  -- "DATABASENAME_log" is the usual logical name of the log file;
  -- confirm the actual name in sys.master_files before running.
  USE DATABASENAME;
  GO
  ALTER DATABASE DATABASENAME
  SET RECOVERY SIMPLE;
  GO
  DBCC SHRINKFILE (DATABASENAME_log, 1000);
  GO
  ALTER DATABASE DATABASENAME
  SET RECOVERY FULL;
  GO


Set your file growth and database size limits appropriately


By default, SQL Server has the following settings for your database and log files:


  • Database: autogrowth by 1MB; maximum size unlimited
  • Transaction log: autogrowth by 10%; maximum size limited to 2,097,152MB


On a SQL environment where there are large amounts of write operations, it is important that these numbers be managed appropriately.  Setting the autogrowth to 1MB can lead to a large amount of fragmentation on the disk and to less than optimal SQL performance when accessing data. Autogrowth should be set in megabytes; anywhere from 50-100MB would be appropriate for a database with a large amount of I/O.   The Maxsize setting will not have an impact on performance, but it is recommended to monitor the growth and overall size of these files.  Again, there are no hard limits on the maxsize of these files, as the functionality of the Nintex databases relies on the hardware and overall maintenance of the SQL environment.
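
For example, the autogrowth values can be changed with ALTER DATABASE. The logical file names below (DATABASENAME, DATABASENAME_log) are assumptions, so check sys.master_files for the real names first:

```sql
-- Logical file names are assumptions; confirm them in sys.master_files.
ALTER DATABASE DATABASENAME
MODIFY FILE (NAME = DATABASENAME, FILEGROWTH = 100MB);

ALTER DATABASE DATABASENAME
MODIFY FILE (NAME = DATABASENAME_log, FILEGROWTH = 50MB);
```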


Stay tuned for Part 3 where we wake Chad Austin from his hypersleep to provide more information on the relationship between Nintex Databases and the SharePoint workflow engine.



Demystifying Workflow History (Part 1)

How to purge dbo.WorkflowLog table

MSDN: Reorganize and Rebuild Indexes

MSDN: Back Up a Transaction Log

Technet: Shrinking the transaction log


Many people have posted this question, and some time ago the external posts that answered it disappeared. So now the answer will be kept safe in this post.




Create workflow variable varUserGroupXML type=Multiple Lines of Text

Create workflow variable varUserGroupNames type=Collection

Create workflow variable varIsInGroup type=Yes/No


To make it easy to reuse, create an Action Set action, and label it "Is user in group?"

Put the following within that Action Set.


Call Web Service action. Configure it as follows:

  • URL: Web_URL/_vti_bin/UserGroup.asmx (supply the necessary username/password credentials)

  • Web Method: GetGroupCollectionFromUser
  • Editor Mode: SOAP builder
  • userLoginName(string): Initiator
  • Web Service Output: Specify elements "../m:GetGroupCollectionFromUserResult" > varUserGroupXML

Use Run Now to verify things work.

Query XML action. Configure as follows:

  • XML Source: XML
  • XML: {WorkflowVariable:varUserGroupXML}
  • Output 1: Process using: XPath.
  • Return results as: Text
  • Store result in: varUserGroupNames

Collection Operation action. Configure as follows:

  • Target collection: varUserGroupNames
  • Exists: checked
  • Value: select "Value" from the dropdown menu, then type the group name you are interested in into the text input
  • Store result in: varIsInGroup


Now, the value of varIsInGroup will be a yes/no.


You could probably make this cleaner by putting the group name to search for in a workflow variable as well.
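
What the Query XML and Collection Operation steps compute can be sketched like this. The XML below is a simplified stand-in for the GetGroupCollectionFromUser response, which in reality is wrapped in additional SOAP envelope elements:

```javascript
// Simplified stand-in for the GetGroupCollectionFromUser response.
var xml =
  '<Groups>' +
  '<Group ID="3" Name="Approvers" />' +
  '<Group ID="7" Name="Site Owners" />' +
  '</Groups>';

// Equivalent of the Query XML step: collect every group Name attribute.
function groupNames(groupsXml) {
  var names = [];
  var re = /Name="([^"]+)"/g;
  var match;
  while ((match = re.exec(groupsXml)) !== null) {
    names.push(match[1]);
  }
  return names;
}

// Equivalent of the Collection Operation "Exists" check.
var varIsInGroup = groupNames(xml).indexOf("Approvers") !== -1;
```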
