
Hi all,

Wondering how you keep track of the number of workflow executions against your licensed allocation? 

Are you on the per-execution license model or the old per-workflow model?

We’ve had two Nintex consultants try to create a workflow that would tell us how many workflow executions have occurred during a license period (i.e. a year). Neither has been successful.

The only way we are currently able to check how we’re tracking against our allocated number of executions is manually, using Nintex Analytics - a really poor solution for an automation platform!

It was suggested we do a once-per-day report so we can at least know if any of our workflows are going crazy - which has happened to us twice. The first time we exceeded our license allocation by 10x, and the second time we used 50% of our allocation in four days. As it stands, there’s no way of being alerted to this or of limiting the number of times a workflow executes.

Has anyone else done a daily workflow execution report or similar?

Really keen to hear how others are addressing this, as it seems there is a huge risk of exceeding your license allocation and not knowing about it until it is way too late.

 

Hi @MarkPCarroll 

 

This information should be available via the Customer Central portal:
 

http://customer.nintex.com
 

If this doesn’t provide the desired information, then your Nintex account manager is the best person to speak with about getting accurate numbers.


If you need a specific daily report, this information is easily accessible via an API call:
 

https://developer.nintex.com/docs/nc-api-docs/d7adf56de0029-list-instances
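
As a rough illustration (not official Nintex code), a daily report could be a small script against that endpoint. In the sketch below the base URL, path, query parameter, and auth header are placeholders; take the real values for your tenant and region from the linked docs.

```python
# Minimal sketch of a daily instance-count report against the "list instances" endpoint
# linked above. The base URL, path, query parameter, and auth scheme are placeholders --
# replace them with the real values from the developer docs for your tenant/region.
import datetime
import requests

API_BASE = "https://YOUR-REGION.example-nintex-host"   # placeholder base URL
API_TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"                # placeholder token

def count_instances_since(start: datetime.datetime) -> int:
    """Page through the list-instances endpoint and count instances started after `start`."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    # Placeholder path and filter parameter; check the linked docs for the real ones.
    url = f"{API_BASE}/workflows/v2/instances?from={start.isoformat()}"
    total = 0
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        total += len(data.get("instances", []))
        url = data.get("nextLink")          # follow pagination if the response provides it
    return total

if __name__ == "__main__":
    yesterday = datetime.datetime.utcnow() - datetime.timedelta(days=1)
    print(f"Workflow instances started in the last 24 hours: {count_instances_since(yesterday)}")
```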
 

You can also export workflow instance data from Analytics to an OData feed. This is a little slower than using the API, but much easier in terms of setting up a Power BI dashboard to track usage.


Thanks Jake. The data on the customer portal is only helpful if you are on the old per-workflow license model, not the per-execution model that we are on.  According to our account manager everyone is being transitioned to the per-execution model.

We’ve also been told by Nintex that presently the only way of knowing how many workflow executions an account has made is manually, using Nintex Analytics. Then you need to adjust the reporting period and match that (manually) against your license period/year and number of licensed executions. It’s clunky and very manual - particularly for an automation platform!

At the moment there is basically no way of mitigating the risk of a workflow rapidly exceeding license allocation because:

  1. there is no ability to set limits on how many times a workflow executes – either overall, or over a time period;
  2. there is no warning when a workflow may be executing excessively;
  3. there is no warning/notice when a set number of workflows has been executed against an account;
  4. there is no warning/notice when you approach or exceed license allocations (in fact it appears there is no upper limit on executions);
  5. there is no ability to receive alerts/automated updates on how many workflows have been executed across an account (note we have had two Nintex specialists try to construct a workflow that does this, and neither has succeeded); and
  6. the only way to check how many workflows have been executed across an account is to manually check in Nintex Analytics.

 


Hi @MarkPCarroll 

 

Understood.

Please allow me to work on these points and come back to you soon.
 

Jake


Hi @MarkPCarroll 

Thank you for giving me some time to look into these points, each of which is valid.

I have looked into this in detail and conversed with some people internally, and the ball is rolling to address all of your points in an official Nintex capacity. For points 1-5 I have started the process of engaging the product teams internally to see what we can do to better support you; in the meantime I wanted to give you some advice and interim solutions.

For point 6, this appears to be something already planned as an update to Nintex Customer Central, and I am engaging the teams responsible to find out more, including any timelines.

In the meantime I wanted to help by showing how you can implement controls using existing capability, as your points all seem to relate to the possibility of creating an infinite loop.

Without knowing a little more detail on the specific scenario, here are some steps you can take to ensure infinite loops do not happen; at the end I will explain how you can set up temporary tracking of instances with a limit.

I understand that it is very likely you have already implemented the following, but I just want to be prudent in case you have not.

Preventing infinite loops

There are a couple of ways to ensure infinite loops do not happen. They mostly revolve around setting conditions on the start event to ensure a workflow is not already running, and then updating the tracking record in some way that invalidates a new start event. In my examples I will use Salesforce, but the same applies to SharePoint or any other tracked application.

It is very easy to create an infinite loop in an application such as Salesforce due to the way the API reports changes, so it is important to ensure the workflow knows when it should and shouldn’t run. Please forgive me if I am going into too much detail or explaining things you already know; I’d rather err on the side of caution and ensure you have as much detail as possible.

Understanding how an infinite loop can be created

The easiest way to make an infinite loop would be to create a workflow that watches a Salesforce object for an update to an item without any conditions. As there are no conditions, the workflow is configured to run any time that object is edited and the change is reported to the API. There are scenarios where such a configuration is valid, but in those cases the workflow should never update the Salesforce record itself, as this is where the loop is completed.
 


Adding an update action that changes the same record in any way will complete the loop. The first time the item is genuinely edited, the workflow will run, and the update action will trigger a new workflow, after which the workflow will keep calling itself endlessly. I do believe that we have measures in place to prevent such behaviour happening very rapidly, but if the workflow runs at a slow pace - because many actions take place before the re-trigger, or because the API reporting from Salesforce is slow (typically around 5 minutes) - then this might go under the radar of those measures and may be difficult to discern from genuine use. (I am still looking into the specifics of how this works, so I will see if this is something I can find out and share with you.)
 


 

How do I prevent looping through workflow design?

Generally my advice is that if you have a workflow watching an update event (including new and updated items for SharePoint), then you must either not complete the loop in that workflow by updating the item again, as mentioned before, or add a condition-breaking action and ensure that condition is applied to the start event.

Changing our existing infinitely looping workflow, let's add a start condition that the NWC Instance field must be empty.
 

 

Now, the first action in the workflow (please do ensure it is the first action, just in case) updates the record and fills the NWC Instance field, which breaks the condition, as NWC Instance is no longer empty.

Publishing this workflow and testing it, only one instance is created and the field is updated.

We can see the change event successfully called my test workflow, Prevent Loops (SF test); the other event shown is not related.
 

That instance was called and completed, correctly updating the item.
 

 

This will always trigger a new event; however, because a condition is set, the workflow will not run, and 5 minutes later we can see this in the Events tab:
 


I hope this helps explain a basic way to prevent loops.

You may notice that such a setup prevents any possibility of new instances being created for that record, even genuine ones, and you would be correct. The easiest way to deal with this is to remove the limiting condition at the right point, by clearing the value from NWC Instance or by using some kind of status field instead.
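
To make the guard pattern concrete outside the designer, here is a rough Python sketch of the logic the start condition and first action enforce; none of the names below are Nintex APIs, they are purely illustrative.

```python
# Illustrative sketch of the start-condition guard described above. Nothing here is a
# Nintex API; it just models the logic the start condition and first action enforce.

def should_start(record: dict) -> bool:
    """Start condition: only run when the NWC Instance tracking field is empty."""
    return not record.get("NWC_Instance")

def run_workflow(record: dict, instance_id: str) -> None:
    # First action: stamp the record so the update we are about to make can no longer
    # satisfy the start condition and re-trigger the workflow.
    record["NWC_Instance"] = instance_id
    record["Status"] = "Processed"          # ... any further updates to the record ...

def on_change_event(record: dict, instance_id: str) -> None:
    if should_start(record):
        run_workflow(record, instance_id)   # genuine edit: one instance runs
    # else: the re-triggered event is ignored, which breaks the loop

if __name__ == "__main__":
    item = {"Name": "Example record"}
    on_change_event(item, "instance-1")     # genuine edit triggers the workflow
    on_change_event(item, "instance-2")     # the workflow's own update does not
    print(item)                             # NWC_Instance is set only once
```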


Gaining more visibility of workflows nearing the usage limit and doing something to prevent exceeding it

This is a more complex task, as you know from engaging with consultants. As mentioned, I will be exploring an official approach to this, but in the meantime you can potentially implement the following:

First, let's add the Nintex Cloud API V2 extension from the gallery:
https://gallery.nintex.com/t/NintexWorkflowCloudAPI-v

This will give us the ability to use a workflow to deactivate workflows if we need to, and to track instances.

Before we get started, let's create our own tracking table for our workflows and another table to track limits.


In the workflow tracking table, let's add Workflow Name, Workflow ID, Instance Count, and Limit.


Now, to support this, we need our workflow to update this table each time it runs, so let's add that as an action. To do this, we get the current instance count, add 1, and store the value back into the table.

First, let's retrieve the table record based on the workflow ID, which can be found in the workflow context variables:
 


Storing that in an instance tracker variable, we can do two things: update the total instances value, and, if we exceed the allowed amount, turn off the workflow so no more instances can run. To do this, let's add a conditional branch that checks whether the instance count exceeds the current instance limit:

 

If this is true, then on the Yes branch we will add the action to deactivate the workflow and send an email informing me of this. I won't end the current instance, as this might be the last genuine instance.


On the No branch we want to take the current instance count and increase it by 1.
 


Then we want to store that back into the table for future instances (I forgot to include ID in the query table action earlier, but we use this to update the correct record).
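
Pulled out of the designer, the branch logic amounts to something like this Python sketch; the tracking table, deactivate-workflow, and send-email pieces are stubbed out here and are not real Nintex calls.

```python
# Sketch of the instance-tracking branch described above, using an in-memory dict in
# place of the tracking table and print statements in place of the deactivate-workflow
# and send-email actions. None of these helpers are real Nintex APIs; they only mirror
# the branch logic.

TRACKING_TABLE = {
    # workflow_id: {"name": ..., "instance_count": ..., "limit": ...}
    "wf-123": {"name": "Prevent Loops (SF test)", "instance_count": 1, "limit": 10},
}

def deactivate_workflow(workflow_id: str) -> None:
    print(f"[stub] deactivating workflow {workflow_id}")   # Nintex Cloud API V2 action in reality

def send_email(to: str, body: str) -> None:
    print(f"[stub] email to {to}: {body}")

def track_instance(workflow_id: str) -> None:
    record = TRACKING_TABLE[workflow_id]                    # "query table" by workflow ID
    if record["instance_count"] >= record["limit"]:
        # Yes branch: stop future instances, but let this one finish --
        # it might be the last genuine instance.
        deactivate_workflow(workflow_id)
        send_email("me@example.com",
                   f"Workflow {record['name']} hit its limit of {record['limit']} instances.")
    else:
        # No branch: increment the counter and store it back for future instances.
        record["instance_count"] += 1

if __name__ == "__main__":
    for _ in range(10):                                     # simulate repeated triggers
        track_instance("wf-123")
    print(TRACKING_TABLE["wf-123"])
```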
 


I am now going to manually kick off multiple instances using the resubmit function, bypassing my looping conditions, and see what happens:


 


We can see the new instance checked the table and went down the No branch, as the limit isn't yet reached:


If we look at the table, we can see it increased the value from 1 to 2:


Let's manually set it to 10 and see what it does now when we resubmit:

 


This time it went down the branch to deactivate the workflow and send me an email, then went on to complete the workflow:
 


This is the email I got:


And if we check the workflows view, it is now paused:
 


There are probably a number of ways you can implement this with better capability, but I think this solves the main issue. If you want to apply this to more workflows, you'll need to add new entries to the tracker for each workflow you want to track and use the same steps in each workflow to handle the tracker and deactivation. It would be possible to create a component workflow for ease of administration, but that would mean running two instances for every workflow, which I don't think will help at all!

As this workflow tracking table tracks the number of instances just by existing, it will give you insight into how many you have run from a specific point. You can easily build a workflow that runs on a schedule, checks the whole table, and sends you a report if you are reaching the limits, or just add another step to the No branch that sends an email once the instance count reaches 80% of the limit.
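
For example, a scheduled check over the whole table could look something like this sketch (again with the table and email action stubbed out, and an illustrative 80% threshold):

```python
# Sketch of a scheduled check over the whole tracking table, alerting when any workflow
# is at or above 80% of its limit. The table and the email action are stubbed out;
# nothing here is a real Nintex API.

TRACKING_TABLE = [
    {"name": "Invoice approval", "instance_count": 82, "limit": 100},
    {"name": "Leave requests",   "instance_count": 15, "limit": 100},
]

ALERT_THRESHOLD = 0.8   # alert at 80% of the limit

def send_email(to: str, body: str) -> None:
    print(f"[stub] email to {to}:\n{body}")

def daily_usage_report() -> None:
    nearing_limit = [r for r in TRACKING_TABLE
                     if r["instance_count"] >= ALERT_THRESHOLD * r["limit"]]
    if nearing_limit:
        lines = [f"- {r['name']}: {r['instance_count']}/{r['limit']} instances"
                 for r in nearing_limit]
        send_email("me@example.com",
                   "Workflows nearing their instance limit:\n" + "\n".join(lines))

if __name__ == "__main__":
    daily_usage_report()   # run this on a schedule, e.g. from a daily scheduled start event
```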

I do appreciate you reading this far, and I do hope this goes some way to help with your concerns. This is definitely not the end, nor do I intend any of this to be a long-term viable solution. I will push as much as I can to ensure all of your points are officially addressed in some way and these instructions become unnecessary; I would love nothing more.

If you have any questions, please let me know.

Jake


Wow Jake, this is an amazingly comprehensive reply. We really appreciate you following up internally, and for this detailed guidance. It will certainly be valuable in mitigating the risks we’ve realised. Thank you, and we look forward to hearing more as you find out from colleagues.

