
I am a long-time user of Nintex RPA LE (Foxtrot) looking for other users who are migrating to Nintex RPA and who also use Fiserv Premier (Navigator). I have recently had some success figuring out how to handle some of Fiserv’s nuances with the help of Professional Services, but I would love to have a Fiserv peer or peers to discuss best practices with.

One thing I’m struggling with: if you are working a list from Excel and you run into an error after the loop has already completed several items, how do you restart where you left off? One approach I’m planning to incorporate is to keep all the data in a SQL table, pull in the first item in “Ready” status, immediately mark it “In Process”, and then mark it “Complete” once the process finishes. The next pass picks up the next available item in “Ready” status, since the prior one is now marked “Complete”. The issue is that several users are hesitant to use a SQL source and prefer to import Excel lists.
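For what it’s worth, a sketch of how that pickup could work in T-SQL (the table and column names here are hypothetical, not from an actual build). The single UPDATE claims the next row and returns it in one atomic statement, so two robots can’t grab the same item:

UPDATE TOP (1) WorkQueue
SET Status = 'In Process'
OUTPUT inserted.*          -- returns the claimed row to the caller
WHERE Status = 'Ready';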

One little tidbit I want to share: the distinguishing Selector for data fields in Navigator seems to be the 4th one down. When you’re dealing with Flex Fields, the 3rd-to-last ID within that 4th Selector is the distinguisher.

Anyway, anyone else who uses Fiserv Navigator, please feel free to DM.

 

Thanks!

Adding @nicknep to thread. If this is the same Nick that I know, he is very familiar with Fiserv Premier.



My suggestion is to use a SQL table.

The main reason is concurrency.

Excel often “locks” when something goes wrong, when another user opens the file, or when another robot is accessing it at the same time, making it hard to restart from where you left off.

The easiest scenario would be to simply add a new status column to the Excel file and set it to “Ready”… but again, you may face the concurrency/lock issues explained above.

So, let’s assume you are using a SQL Server DB table:

 

Importing an Excel file into a SQL table is really simple:

  1. Create a new table with the same structure as the Excel column headers. The table can be created either via the MS management console or at runtime using the Execute Custom Query Advanced Command.
  2. Add a DB column to store the status. Make sure it is nullable.
  3. I also typically add a column called ID, a GUID (globally unique identifier) that is auto-filled when the row is inserted. It helps a lot during processing, as it becomes your primary key to the record and keeps it unique in case some records contain exactly the same data.
  4. Optionally, you can add columns like Process Start, Process End, etc. for further audit capability, if needed. (A table sketch covering steps 1–4 appears after the notes below.)
  5. Once the table is created and contains no data, simply run the following:

 

INSERT INTO $DestinationTable$
SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=$ExcelPath$\$ExcelFileName$;HDR=YES', [$worksheetname$$]);

Don’t forget to set the following variables:

  • $DestinationTable$ = your SQL Server DB table
  • $ExcelPath$ = directory where the Excel file is located (you can use UNC paths as well, e.g. \\myserver\myshare\)
  • $ExcelFileName$ = name of the Excel file
  • $worksheetname$ = name of the Excel worksheet where the data is located

Also, make sure that:

  • the layout (column order and names) matches the column headers in Excel
  • the SQL Server database “can see” the file; that is, $ExcelPath$ and the Excel file are accessible from the SQL Server machine
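
As a concrete sketch of steps 1–4 (all table and column names here are illustrative, not from an actual build):

CREATE TABLE dbo.ImportQueue (
    ID uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,  -- step 3: auto-filled GUID
    AccountNumber nvarchar(50) NULL,   -- step 1: columns mirroring the Excel headers
    CustomerName  nvarchar(100) NULL,
    Status        nvarchar(20) NULL,   -- step 2: nullable status column
    ProcessStart  datetime NULL,       -- step 4: optional audit columns
    ProcessEnd    datetime NULL
);

Note that with extra columns like ID and Status, the INSERT above needs an explicit column list, e.g. INSERT INTO $DestinationTable$ (AccountNumber, CustomerName) SELECT * FROM OPENROWSET(...), since the SELECT * only supplies the Excel columns.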

After the command runs and you have the data imported, you can set all rows to “Ready”.

Then simply select each row with “Ready” status from the DB and run your automation, setting the status to “In Process” first.

Make sure to update that column once each row is processed, to something like “Completed”.
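
A sketch of those status updates, using the same $variable$ convention and the ID column from step 3 ($currentID$ is a hypothetical variable holding the claimed row’s GUID):

-- after the import, mark everything available
UPDATE $DestinationTable$ SET Status = 'Ready';

-- pick up the next available row and claim it before automating
SELECT TOP (1) * FROM $DestinationTable$ WHERE Status = 'Ready';
UPDATE $DestinationTable$ SET Status = 'In Process' WHERE ID = '$currentID$';

-- once the automation finishes for this row
UPDATE $DestinationTable$ SET Status = 'Completed' WHERE ID = '$currentID$';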

 

Last but not least, regarding performance: importing about 100k rows takes less than 30 seconds on a SQL Express instance (the same SQL Express that runs on the Kryon RPA Server).

I hope this helps.

 

If you run the query and face the following error:

“The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine”

 

It means that the Microsoft Access Database Engine is not installed.

You can install it from here:

https://www.microsoft.com/en-us/download/confirmation.aspx?id=13255

 

You may also face the following error:

Msg 7399, Level 16, State 1, Line 2

The OLE DB provider "Microsoft.Ace.OLEDB.12.0" for linked server "(null)" reported an error. Access denied.

Msg 7350, Level 16, State 2, Line 2

Cannot get the column information from OLE DB provider "Microsoft.Ace.OLEDB.12.0" for linked server "(null)".

Run the following code on the SQL Server instance:

USE [master]
GO

EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1
GO

EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1
GO

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO

 

Bonus: At the end of processing, you can run the following Execute Query Advanced command:

Full Report:

INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=$ReportPath$\$ReportFilename$', [$worksheetname$$dollar$])
SELECT $columnNames$ FROM $DBTable$


If a filter is needed:

INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=$ReportPath$\$ReportFilename$', [$worksheetname$$dollar$])
SELECT $columnNames$ FROM $DBTable$ WHERE Status = '$status$' $And Clause$

  • $DBTable$ = your SQL Server DB source
  • $columnNames$ = list of columns to extract from the DB, comma-separated in SQL notation, e.g. “column1, column2, Status, ID”
  • $ReportPath$ = directory where the Excel file will be generated (you can use UNC paths as well, e.g. \\myserver\myshare\)
  • $ReportFileName$ = name of the Excel file
  • $worksheetname$ = name of the Excel worksheet where the data will be written

This will generate an Excel report that can be emailed or saved somewhere before purging the data in the SQL table.
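
If you do purge afterwards, something like this clears only the exported rows (assuming the “Completed” status value from earlier; adjust the filter if you used different statuses):

DELETE FROM $DBTable$ WHERE Status = 'Completed';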

I have also attached a sample report .lwiz under SampleReport.zip.

 

Best Regards,


@fguaragna  - amazing!


We’re using the outline @fguaragna provided to create a global function to accomplish this import to SQL from either CSV or Excel files (a sketch of the CSV variant is below). It seems to be working OK, but the UI for global functions (and embedded wizards) could be greatly improved to make their deployment much quicker. Specifically, it should be possible to mark variables in the global function as required, so that when a user adds that action, the required variables are automatically surfaced in the wizard to be quickly assigned values.
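
For reference, a sketch of the CSV flavor of the same import. The ACE text-driver connection string is our assumption (we haven’t confirmed the exact options on every setup), and $CsvPath$/$CsvFileName$ are placeholder variables in the same style as above:

INSERT INTO $DestinationTable$
SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Text;Database=$CsvPath$;HDR=YES;FORMAT=Delimited',  -- text driver instead of Excel 12.0
    [$CsvFileName$]);  -- the file name (with extension) takes the place of the worksheet name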

However, one of the annoying things we keep hitting is NAC’s error handling/reporting. When there’s an issue with the SQL command, rather than reporting the error that SQL would have returned, NAC just returns “DBError”, or whatever is defined for that section of Error Handling. Using SQL seems to be one of the better ways to manage data, but it’s difficult to troubleshoot when NAC isn’t returning the actual problem, just that there was a problem.


Quick follow-up: we did just find that if you set up preconfigured SQL queries through Nintex Admin, the parameters required for that query do show up when defining the action in Kryon. This is the kind of UI we were hoping to see for Embedded Wizards and Global Functions.

