Tech Tips

Tip 17: ScanTrip Cloud Can Be Configured to Use a Custom Form During MFP Panel Interactions

ScanTrip Cloud provides cloud-based automation that interfaces with the devices in your environment to enable a variety of scanning options. ScanTrip Cloud can also be configured to provide the user with options during the scanning process.


One of the most common requests from users scanning documents is the ability to name the document something other than the default scan name. You can create and use forms in ScanTrip Cloud to achieve this.


Building the Form

ScanTrip Cloud provides a Form Management tool to create forms with the necessary fields and options for use in a workflow.



Form Management page within Dispatcher ScanTrip Cloud


The steps here will create a simple form with a single fillable field.


  1. Select the Forms menu option.
  2. On the Form Management page, select "Add New Form".
  3. On the dialog that pops up, name your form and choose the Blank template.
  4. On the new Form canvas that opens, drag a text field from the list of standard index fields.
  5. Give the new field a custom variable name and a friendly name.
  6. Check the "Required" box.
  7. Choose Save and Close.


Your form should look similar to this:


Preview view of new form



Add the form to your workflow

Now that you’ve created the form, you need to publish it. This tells ScanTrip Cloud that the form is usable by a workflow Form node.


Publish your form by clicking on the check mark in the form list.


Actions section with publish icon


Now that our form is published, it’s time to add it to the workflow.


  1. Open your desired workflow for editing.
  2. Drag a Form Selector node to the canvas.
  3. Open the Form Selector node configuration.
  4. On the left-hand side of the window, there is a list of custom forms. The form you created in the previous step is now available and can be added to the Form Selector by checking its box.

  5. Form selector with custom form chosen
  6. Save the node configuration.


The Form Selector will now display the rename form at scan time! Now that we are displaying a form for the user to provide a file name, we can configure our Rename node!


  1. Add or open a Rename node in your workflow.
  2. Open the Rename node configuration.
  3. Add a custom metadata field to the naming components.
  4. Select the metadata field that you created when making your form.

  5. Metadata Browser showing the field selected
  6. Make sure to include the file extension. Your final configuration will look like this.

  7. Metadata showing the filename and file extension components and values
  8. Save this node.
  9. Save and verify your workflow. If you have other nodes to configure, such as output nodes or other processing nodes, please do so.
  10. Run your validated workflow!


Using the form at the MFP panel

When a user interacts with this workflow, they will now be provided with the text field for a custom filename.


  1. Choose the workflow from the currently running workflows in the ScanTrip Cloud app.
  2. Scan your document.
  3. When the document has been uploaded, you will be presented with your form.

  4. MFP screen showing the custom form you created

  5. Since you made it "Required", the file name must be filled out. Do so, and then choose the next arrow to fill out any remaining custom or system forms that may be necessary (such as a folder browser selection for SharePoint).
  6. Your document will now be transferred to its destination and renamed according to the field on the form! Well done!


Tip 16: Two for the Price of One: Dispatcher ISO and Online Help

Today’s Tech Tip actually has two tips!


Tip #1 – Dispatcher Phoenix ISO Image (Dispatcher Phoenix Suite)

An ISO image is essentially a complete copy of everything stored on a physical disk, USB drive, etc., compiled into a single file. The ISO image for Dispatcher Phoenix can be found online. Click on this URL to download the ISO.


This URL is permanent and ALWAYS points to the most current version of Dispatcher Phoenix.


This ISO includes the complete Dispatcher Phoenix suite of applications (Dispatcher Phoenix, Dispatcher ScanTrip, and Dispatcher Release2Me), so no matter what you are looking for, you can find it on this ISO. The ISO is over 7 GB in size, so make sure that you have enough space to download it. You can save it to a USB Memory Stick and then "mount it" as a drive on the server on which you plan to install Dispatcher Phoenix.


Benefits of using an ISO image:

  • Install Dispatcher Phoenix without an internet connection.
  • Easily transfer or share the ISO across different desktops.


This ISO can also be used for updating Dispatcher Phoenix. The same component pool found on SEC servers is also contained within the ISO. Once mounted, you can start Add-In Manager as an Administrator and choose to install updates. Make sure to select the mounted ISO as the update source.


Add-In Manager will automatically detect the mounted ISO.


Tip #2 – Search Dispatcher Phoenix Help from any desktop or mobile device

For many years, Dispatcher Phoenix Help has been globally available via a cloud repository. Thus, you can access Dispatcher Phoenix Help from any computer connected to the internet. The help site is always up to date with the latest information, and you can download or print a PDF copy from the site for offline access. To access Dispatcher Phoenix Help, visit the following URL:


https://docs.dispatcher-phoenix.com/


Dispatcher Phoenix Help is also available in Japanese. This translation is available from the help site instantly, at the click of a button.


The help site’s powerful search engine uses “type-ahead” functionality, returning results after entering just two letters. Results update as you enter more characters, listing all topics in the help that contain the search string.


Searching Dispatcher Phoenix online help

Tip 15: An Environment Checklist Prior to Installing Dispatcher Phoenix

When planning to deploy Dispatcher Phoenix in a customer environment, there are a number of “checks” that you can do to help streamline the installation.


The following checklist encompasses information that is available in the Dispatcher Phoenix Online Help, as well as some helpful tips and tricks to streamline the installation.


Confirm hardware requirements are met.

Minimum hardware requirements can be found in the Online Help. Additional resources may be required for processing-intensive workflows.

Confirm OS version is supported and fully updated.

Dispatcher Phoenix OS support is in line with the operating systems and OS versions currently supported by Microsoft.


NOTE: A system that is not fully updated may have problems installing prerequisites or other components with OS dependency.


NOTE: If your customer is using WSUS, please contact SEC for additional information related to this service.


Confirm that prerequisites are installed.

The Dispatcher Phoenix installer (Add-In Manager) will attempt to install any necessary prerequisites for the software that can be obtained from Microsoft.


In environments where there are restrictions or limitations due to policy or security, we provide a list of prerequisites that can be installed prior to installing Phoenix.


Confirm that Print Spooler is installed and active during installation.

During installation, the platform installs print tools that require the print spooler to be active on the system.

Confirm permissions of the installing user.

The user account that logs into Windows to run the Phoenix installation will, by default, also be the account used to run the workflows. This user must be an Administrator on the local system.


We recommend the use of a service account so that individual user account permissions and passwords do not need to be managed for Phoenix services.


In environments where user-based service accounts are not allowed, it is possible to use Microsoft managed service accounts.


Confirm availability of components for Dispatcher Phoenix Web.

Dispatcher Phoenix Web uses Windows Internet Information Services to provide access to the Dispatcher Phoenix Web interface. These items will be installed and configured if not already on the system.

Confirm internet connectivity and firewall access.

Dispatcher Phoenix’s default installation method downloads components from the web and requires access to the internet during the installation process.


In addition, depending on the customer environment, ports may need to be made available.


If the system being configured is not allowed access to the internet, please see Tech Tip 14 for more information about offline registration.


Confirm antivirus, anti-malware, etc. are disabled prior to installation.

Dispatcher Phoenix integrates heavily with Windows and .NET. When Dispatcher Phoenix writes to system directories, some antivirus and malware detection tools will block installation or use of key components.

Confirm user account access for any environment integrations, such as database connections or other services.

If a workflow is utilizing database connections or other systems that require credential access, the account being used to run the workflows will need access. If alternative accounts need to be used, this can be planned prior to installation.


Considerations for Enterprise and Cloud-Hosted Phoenix Installations

Confirm connectivity between the client location and cloud services.

When installing in a cloud-based VM, Phoenix requires a method of communication with the customer’s on-prem environment in order to provide web experiences and integration with on-premises services such as file shares and other line-of-business applications.

Confirm that all Phoenix servers in the enterprise can communicate with each other.

In enterprise configurations for Failover and Offloading, the servers in the cluster require network access to each other in order to offload job information.



Additional Tips and Tricks

  • If installing a 30-day demo of Dispatcher Phoenix, please note that all components will be downloaded and installed. If the demo system has limited bandwidth, consider downloading the ISO and installing from it instead.
  • When installing with a valid purchase code, only the licensed components (in addition to required items) will be installed. If you would like to avoid installing everything, utilize the customer purchase code during initial installation.
  • The Dispatcher Phoenix installer will always install the latest version of Dispatcher Phoenix.


Tip 14: Installing and Registering Dispatcher Phoenix Without an Internet Connection

One of the most tried-and-true features of Dispatcher Phoenix is its ability to be installed and registered in environments that do not have access to the Internet. This has allowed Dispatcher Phoenix to be installed in very secure environments or environments that do not allow Internet connections.


To install Dispatcher Phoenix without an Internet connection, you can download the Dispatcher Phoenix ISO using this link.


The ISO image contains everything needed to install Dispatcher Phoenix; however, be aware that Windows components will also be installed which, depending on the environment, may require a Windows Update to be performed. The Dispatcher Phoenix ISO is a 7 GB download, so allow sufficient time and disk space for the download.


Installing Dispatcher Phoenix

Copy the ISO to a USB Memory Stick and then mount the ISO on the PC where Dispatcher Phoenix is to be installed. Open the ISO and choose the appropriate installer, for example:


"Dispatcher–Phoenix-Setup-x64.exe"


Then, run the installer and continue this procedure once the Registration Window is displayed.


Registering Dispatcher Phoenix

In order to register Dispatcher Phoenix you will need:

  1. A Free SEC User Account
  2. A PC connected to the Internet
  3. A Purchase Code (if registering the "Full" version)
  4. The Lock Code from the installation of Dispatcher Phoenix

To begin the registration process, choose the desired registration type on the Register Dispatcher Phoenix screen. During this process, please make note of the "Lock Code".


Dispatcher Phoenix registration screen

Using a PC that is connected to the Internet, log into the SEC User Account that will be used to register Dispatcher Phoenix. From the menu, choose "Register". Enter the Purchase Code and Lock Code, choose the version of Dispatcher Phoenix to register, and press "Submit" as shown below:


Registration screen on SEC site

The Dispatcher Phoenix installation will be registered and the confirmation panel will be displayed.


Registration complete screen on SEC site

Click the link to download the new License File and transfer the License File to the PC where Dispatcher Phoenix is installed.


File browser showing downloaded license file

Return to the Dispatcher Phoenix Registration Panel on the server where Dispatcher Phoenix has been installed and select “Manual Registration”. Then, browse to where the License File is located and choose the file, then click the “Activate” button.


Dispatcher Phoenix manual unlock

A confirmation screen will be displayed that the license has been installed.


Dispatcher Phoenix confirmation screen

Please note that Manual Registration works for registering Add-In Licenses, including License Reactivations, Enterprise Server Registrations, and License Transfers. In addition, for applications or add-ins that are free, such as Dispatcher Phoenix DEMO, an additional Purchase Code is not required.


Other SEC Applications such as Corporate Announcements can also be registered manually.


Tip 13: Streamline Print Operations with Automated Tray Calls in Dispatcher Phoenix

Dispatcher Phoenix workflow metadata is a powerful tool that can be used to control workflows as well as MFP features. For example, we can use Dispatcher Phoenix to simply and effectively facilitate paper tray calls when printing documents, using metadata retrieved with Advanced OCR from the contents of a document.


Use Case

The customer receives a PDF document that consists of multiple single-page documents. This "mixed" PDF document contains various single-page document types, such as credit reports and packing lists. The customer wants to automate the process of splitting the large PDF file into separate documents and then printing the pages on either plain white paper media or “pink” packing list paper media. The customer normally loads plain white paper media into Tray 4 of their Konica Minolta MFP and the "pink" paper media into Tray 5.


The source PDF file is delivered into a network folder. The workflow solution should monitor this folder, delete the source PDF file once processed, split the PDF into separate pages, and print non-packing-list pages on plain white paper media from Tray 4 and packing list pages on "pink" paper media from Tray 5.

Solution

The following is the workflow that was created for this customer.



Full workflow

Workflow Configuration

  1. Input Folder
    This node is set to the folder that is to be monitored by the workflow. The Input Folder Node will delete the retrieved file.
  2. Advanced OCR
    The Advanced OCR node is configured to capture the text that is used to determine the Tray Call. In this case, we are looking for the text "PACKING LIST" in the upper-right section of the page, so we define the zone that will contain the text identifying the page as a Packing List page.
  3. Advanced OCR screen

  4. Split
    The Split Node separates each document page into a separate file.
  5. Metadata Scripting Node
    This node converts the captured text to the appropriate Tray Call metadata value. The text retrieved by the Advanced OCR node is passed to the Metadata Scripting node and checked to see whether it is the text that signals a Packing List page has been detected. The "set_tray_call" function sets the Tray Call metadata value to either "TRAY4" or "TRAY5". The "get_tray_call" function then copies the Tray Call value into the metadata variable "{script:tray_call}", which is used by the PJL Print Preferences node. (A sketch of this logic appears after this list.)
  6. Scripting code

  7. PJL Print Preferences
    This node accepts the Tray Call metadata value so that the page is printed on the correct media. In the node, the "Advanced Settings" button displays the "Additional PJL Headers" entry. Here we enter the "MEDIASOURCE" PJL command and, using the {script:tray_call} metadata variable, set the media source to the appropriate tray for printing the document.
  8. PJL Headers settings

  9. Printer
    The Printer node sends the modified PDF file to the MFP for printing on the appropriate media.
  10. Metadata to File Node and Output Folder
    These two nodes are used to capture the Workflow Metadata during debugging and testing of the workflow and are disabled for normal operation.
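
For reference, here is a minimal Lua sketch of the kind of logic the "set_tray_call" function in the Metadata Scripting node (item 5 above) might contain. This is an illustration only: the parameter name and the OCR metadata key that feeds it are placeholders, and in the actual workflow the returned value is copied into "{script:tray_call}" by the "get_tray_call" function (or the rule's output key) as described above.

    -- Illustrative sketch only: returns the tray call value based on the OCR zone text.
    -- 'zone_text' stands for the text captured by the Advanced OCR node's zone.
    function set_tray_call(zone_text)
        if zone_text ~= nil and string.find(string.upper(zone_text), "PACKING LIST", 1, true) then
            return "TRAY5"  -- "pink" packing list paper media
        end
        return "TRAY4"      -- plain white paper media
    end

The PJL Print Preferences node (item 7 above) then consumes the resulting "{script:tray_call}" value in its MEDIASOURCE header.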

Conclusion

Using the built-in features of Dispatcher Phoenix, we can very easily create a simple yet powerful workflow that automates and simplifies a customer process. This workflow is easily expandable to other tray calls by adding zones for other identifying text and then using the retrieved text, with additional code in the Metadata Scripting node, to set other tray calls as desired.



Whether you're looking to get up and running quickly or trying to solve a specific customer need, you can find many great ideas and examples on our Sample Workflows page.



Tip 12: Intelligent Forms Processing Workflow for Medical Billing

Challenge

A customer processing 50 to 100 pieces of mail every day has to scan this mail and then route it to the appropriate OneDrive or SharePoint Folders. The mail is not uniform, so key phrases or identifying text is located in various locations on a page, which leads to employee fatigue and a high rate of entry error. This is currently a manual process and requires three employees about 4 hours per day each to process. Due to COVID-19 and related manpower shortages, the customer wants to automate the mail scanning process, but has not been able to find a solution that works for them because of the lack of a standard format of the incoming mail.


Solution

Using Dispatcher Phoenix Forms Processing and its flexible data extraction features, the lack of any standard format was not an issue. Combined with the built-in ODBC Processing Node and custom Lua scripting for easy parsing of text data, SEC was able to create a self-learning workflow solution that was taught how to recognize the scanned mail and route it to the appropriate OneDrive or SharePoint folder.


Forms processing workflow

This workflow was broken down into four main areas, as follows:

  1. Scanning
    The first part of the workflow is where the user scans the mail document at the MFP. The user has the option to manually choose the desired OneDrive and/or SharePoint Folder to route the documentation to.
  2. Reading
    In this part of the workflow, the scanned mail document is “read” by the Forms Processing Node, which extracts key phrases and other information necessary to determine where the scanned mail document will be routed. The “read” block of data from the scanned mail document is then parsed against the stored key words and phrases in the workflow database. If a match is found, the scanned mail document is converted to searchable PDF and then routed to its final destination.
  3. Learning
    If no matching key word or phrase is found, an email notification is sent to a group of employees, prompting one to review the scanned mail document and data extractions in a web-based interface via Dispatcher Phoenix's Batch Indexing module. Within the web-interface, the user can then edit the keyword phrase, if desired, and then choose the correct OneDrive/SharePoint folder to associate with the keyword phrase. Thus, the Workflow ‘learns’ new keyword phrases and proper folder routing with as little human intervention in the workflow as possible.
  4. Routing
    Once a match is found between the stored keywords and phrases, or when the user chooses a desired folder, the workflow will convert the scanned mail document to searchable PDF and then route it into the OneDrive/SharePoint folder.

Benefits

This customer was able to gain the following benefits from their automated workflow:

  • Received a high rating during a HIPAA Compliance audit due to the built-in document security of Dispatcher Phoenix.
  • Greatly reduced the possibility of accidental exposure of sensitive patient data and proprietary pricing data.
  • Reduced total weekly man hours from 60 person hours per week to less than 2 hours per week.
  • Improved responsiveness to inquiries from all stakeholders including patients and doctors.

Conclusion

This complex workflow depends heavily on the advanced features of Dispatcher Phoenix to provide a robust, powerful, self-learning workflow that requires little or no interaction from the user, except when associating a new scanned document with either a new or existing OneDrive/SharePoint folder.



Tip 11: Automate Your Printing Operations with Dispatcher Phoenix

Recently, a client from the Education sector was struggling with waste from printing reports that were not needed, or with too few reports being produced, which delayed meetings. Multiple administrative assistants had to spend days preparing and printing out the correct number of collated reports for each meeting attendee.


With Dispatcher Phoenix, the administrator now only needs to specify the number of copies needed as part of the file name and store the file in a folder. Dispatcher Phoenix will automatically collect the files, evaluate their file names, and quickly print out the correct number of reports, efficiently automating what would otherwise be a tedious, manual process, and freeing up employees’ time to focus on more strategic initiatives.


The workflow was set up as follows:


Collated copies workflow

Here’s how it works:


  1. File Collection - All reports are stored in a network folder. In this workflow, Dispatcher Phoenix is set up to monitor that folder, collecting any documents that come in. In this case, the workflow is not started until all documents to be printed are in the monitored input folder.
  2. File Parsing - Each file name includes the number of copies that should be made. The Parser node is used to extract that number value as metadata for use later in the workflow. If a file does not follow that naming convention, then it is ignored by the rest of the workflow.
  3. Print Settings / Printing - After the Parser node, all appropriate files are sent to the PJL Print Preferences node, which applies print settings (1 copy, single-sided, and stapled) before sending the file to the printer. At this point, a copy of the document is sent to the printer and another copy is sent to the Metadata Route node.
  4. Checking For Counter - The first Metadata Route node checks if a Counter variable exists for that file. If it is the file’s first time through the workflow, this Counter variable does not exist, so the file is passed to the Metadata Scripting node (“MAKE Counter”) to create the counter. If a Counter variable already exists, the file is sent to the next Metadata Route node (“Copy Limit Reached?”).
  5. Print Document Again - The “Copy Limit Reached” Metadata Route node is set up to check whether or not the file should be printed again by comparing the Counter variable value to the Parser node metadata that was created earlier in the workflow. If the values are equal, the file is stored in the Output folder and does not need to be reprinted. If the values do not match, the file moves on to the last Metadata Scripting node (“Increment”), which increments the counter and sends the document to be printed again. (A sketch of the counter functions appears after this list.)
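
For reference, here is a minimal Lua sketch of what the two Metadata Scripting functions (“MAKE Counter” and “Increment”) might contain. The function names and return values are illustrative; in the actual workflow the counter is stored as a metadata value and compared against the copy count parsed from the file name.

    -- Illustrative sketch only: creates the counter on the file's first pass through the workflow.
    function make_counter()
        return "1"
    end

    -- Illustrative sketch only: increments the counter on each later pass.
    -- 'counter' stands for the current counter value, passed in as a string.
    function increment(counter)
        return tostring(tonumber(counter) + 1)
    end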


With this workflow, the customer is now able to prepare and print the correct number of reports for each attendee of their meetings, with very little manual effort required. Dispatcher Phoenix helped this customer save time and optimize their printing operations — an instant return on investment!


Tip 10: New Ways to Automate Document Processing Based on Type

Recently added features in Dispatcher Phoenix provide new methods for automating the processing of documents based on document type.


The new Dispatcher Phoenix Doc-Classifier node provides a programmable and flexible method to identify and classify documents in the workflow and then route the documents to the appropriate processing leg of the workflow. The Doc-Classifier node automatically categorizes scanned documents, electronic files, etc., into pre-defined classes using Dispatcher Phoenix’s Optical Character Recognition (OCR) capability. The node extracts information, matches it against pre-defined classification definitions, and produces the following metadata values:


  • Classification - The detected Classification Type or, if not detected, “other”.
  • Confidence Level - A percentage rating indicating the probability the document was classified correctly.


For example, let's say that you have a customer who needs to process purchase orders and invoices in different ways, and the customer needs to filter out all other document types that may be included. Your customer has a low tolerance for error, so you must ensure that the workflow only processes documents that have a high degree of confidence of being a purchase order or an invoice.


The Workflow

For this article, we will be using the Doc-Classifier Sample Workflow “Accounts Payable Classification” that can be found in the SEC Samples Library.


Doc-Classifier workflow

This sample workflow accepts input from three sources (a Desktop/Network Folder, an Email Inbox, or scanned from the MFP) and then uses the Doc-Classifier node to detect the document type and assign a confidence level prior to routing the document through to the correct distribution. Please note the following configuration steps:

  1. The Input Folder should be configured to collect files from the folder on the network.
  2. The Email In node should be configured to collect files from your email server before running the workflow.
  3. The MFP Panel node has been configured to run using the MFP Simulator.
  4. The Doc-Classifier node has been configured for Standard classification of Purchase Orders and Invoices.
  5. The Metadata Route nodes search for the metadata created by the Doc-Classifier node and route the documents accordingly.
  6. The SharePoint Connector should be configured to distribute Purchase Orders to your SharePoint folder structure.
  7. The Output Folder node is configured to collect Invoices and route them to the output folder on your desktop.
  8. The Microsoft Exchange node should be configured to send email notifications to the desired recipient when Doc-Classifier identifies a document as "Other". This node must also be configured for your email server before running this workflow.
  9. A Default Error Node has been defined for this workflow. All files that go out of the workflow on error will be distributed to the Default Error Node folder on your desktop.

Configuring for Document Classification

Let’s take a deeper dive into how we’ve configured Doc-Classifier to facilitate document classification for purchase orders and invoices.


Using the Doc-Classifier node, we first choose the Purchase Order and Invoice classification categories from the standard templates included with the node. This configuration will classify documents into three categories (Purchase Order, Invoice, or Other) and apply the appropriate metadata key to the document's metadata to enable further processing and routing. The Doc-Classifier node also calculates the confidence level - a measure of accuracy - associated with the document’s classification and stores that value as document metadata.


Classification types within the node

Using the Metadata Route nodes, we check the Document Classification metadata and then the Confidence Level metadata to successfully route the document to the correct location. In this case, if the confidence level associated with the document is greater than or equal to 75%, the document is routed further through the Purchase Order distribution process.


Configuration of the Metadata Route node

The Online Help for the Doc-Classifier node includes the regular expressions to use for other confidence percentages.


Regular expressions
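
As an illustration only, assuming the Confidence Level metadata is stored as a whole-number percentage (for example, 87), a pattern along the following lines would match values of 75 or greater. Refer to the Online Help for the exact expressions recommended for the node.

    ^(7[5-9]|[89][0-9]|100)$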

This is an example of a fully automated document classification system that can be easily modified as the customer’s needs change or expand. For example, if the customer decides they want to process only documents with a higher degree of confidence and email the less confident documents to an administrator, that can be easily accomplished in this workflow.


Please refer to the Dispatcher Phoenix Help for the Doc-Classifier Node (Doc-Classifier) for more information about this new feature of Dispatcher Phoenix. If you have any questions please contact the ISS Group of SEC at sec@kmbs.konicaminolta.us.


Tip 09: Sending Email Notifications via Dispatcher Phoenix

As more and more business processes become automated, employees may not always have direct access to the server where the automated processes are running. In this case, users may want to receive feedback about the automated processes so that they can be monitored. Dispatcher Phoenix has several built-in tools that allow for workflow monitoring, including the ability to receive automated notifications. A workflow that has been designed with automated notifications built-in provides even more value to the user. This Tech Tip will go over some ways you can add automated notifications to any workflow and provide suggestions for best practices when doing so.


Overview Of Dispatcher Phoenix Transitions

First, let's look at how the workflow can be designed using Dispatcher Phoenix’s Workflow Builder tool. Each Dispatcher Phoenix node has at least 2 (two) output paths: one is called “Normal” and the other is called “Error”. In addition, there are some nodes that are used to make decisions; these kinds of nodes have 3 (three) output paths: “Yes”, “No” and “Error”. Because of this flexibility, the designer of the workflow can choose how and where files transition through the workflow. Since each node has an Error path, the workflow can be set up with error handling paths. Depending on the design of the workflow, this error handling can be as simple as informing an administrator that an error has occurred via email or performing a sophisticated error recovery task when an error occurs.First, let’s look at how the workflow can be designed using Dispatcher Phoenix’s Workflow Builder tool. Each Dispatcher Phoenix node has at least 2 (two) output paths: one is called “Normal” and the other is called “Error”. In addition, there are some nodes that are used to make decisions; these kinds of nodes have 3 (three) output paths: “Yes”, “No” and “Error”. Because of this flexibility, the designer of the workflow can choose how and where files transition through the workflow. Since each node has an Error path, the workflow can be set up with error handling paths. Depending on the design of the workflow, this error handling can be as simple as informing an administrator that an error has occurred via email or performing a sophisticated error recovery task when an error occurs.


The Default Error Node

In addition, the workflow has an overall output path called the Default Error node. The Default Error node provides a default exit path when an unexpected error occurs within a workflow. Any Dispatcher Phoenix node can be assigned as the Default Error node; it is called by the workflow when a node within the workflow fails or has an error and nothing is connected to the Error path of the affected node. Typically, the Output Folder node is used; however, in some workflow designs another node is more helpful, such as the SMTP Out node, which sends an email to a specified recipient when an error occurs, with the file being processed at the time of the error attached. Not only does this notify a designated administrator that an error occurred, but the file being processed at the time of the error is preserved so that the administrator can take appropriate action.


The Default Error Node can be set up via the Canvas Properties in the Workflow Builder tool, as shown in the following illustration:


Default error node selection

Sending Error Notifications from a Specific Node

Although the Default Error node is a convenient feature for error handling, there may be instances when it is necessary to have an Error path from a specific node. For example, it may be necessary to create a specific notification with a customized message and other details to send to an administrator when an error occurs from a specific process. That would not be possible when using only the Default Error node.


For example, if an Annotation node, which uses metadata to automatically create the annotation, fails for some reason, the metadata can be included in the message sent to the administrator.


See the following illustration for how you could set up an SMTP Out node to send an email with Annotation metadata if the Annotation node fails:



Email setup

Choosing the Error Path from a node is easy using the Workflow Builder tool. Select the connector and access the Connector Properties panel on the right-hand side of the Workflow Builder. By default, the “Normal” path is selected. To change this, you should do the following:

  1. Unselect the “Normal” icon.
  2. Click on the “Error” icon as illustrated below. NOTICE that when the “Error” path is selected on a connector, a circle with an “X” appears on the connector.
  3. Error transitions


Sending Informational Notifications

You can also send notifications that are informational at any point in the workflow. For example, you can add an SMTP Out node on the “Normal” path of an Output Folder node, allowing email notifications to be sent when files are successfully saved to their destination folder. This can be very useful for sending a notification of success to users when documents have been scanned, processed, and stored (e.g., a user scans a large document and wants to be notified when the workflow has finished processing the document). And since the SMTP Out node can use metadata in the workflow, the notification sent to the user can also include information like the file name, folder path, and any other useful details.


See the illustration below for an example of this workflow:


Workflow

Best Practices

  1. Always set a Default Error node for all of your workflows. This will help with debugging workflows you have written. In addition, you’ll never have to worry about losing a document due to an error in the workflow.
  2. If using the Output Folder node as the Default Error node, make sure the folder path is permanently available. It is strongly suggested that the folder path be local to the PC running Dispatcher Phoenix and NOT a Network resource. If the Network goes down for some reason, the Default Error node will likely fail as well. We recommend using “C:\Users\Public\Documents\ERROR-FOLDER” since this is a safe location on the PC running Dispatcher Phoenix. It will always be available and will never have permissions issues, regardless of the user permissions of the running workflow.
  3. Give each node in your workflow a unique name that is associated with the work that the node is performing. The name assigned to a node is what will appear in the Workflow Log. This can really save you time when trying to determine which node is failing.
  4. Any critical process or node in your workflow should have a separate Error output so that additional metadata can be captured. This can really save you time and effort when debugging a workflow.


For a sample workflow that sends out email notifications, please click here.


Tip 08: Metadata Scripting for Advanced Workflows (Part 3)

To create a custom script for your workflow, you can use Dispatcher Phoenix's Metadata Scripting node. This Tech Tip walks you through the basics of creating a Lua script. Please note that the Dispatcher Phoenix Online Help documentation covers the Metadata Scripting node in detail. Please review the Dispatcher Phoenix Online Help before reading further.


To create a script, follow these steps:

  1. In the Dispatcher Phoenix Workflow Builder Tool, open the Metadata Scripting node; then click on the Add/Edit Functions button in the Tool Bar, as illustrated below:


  2. The Add/Edit Functions text editor opens. This simple editor is where you create/update your script functions. This node comes with a list of built-in functions (listed in the Function Reference area on the right-hand side of the node configuration window). When you click on a built-in function, the function code is displayed in the code window below the list. Using the buttons below the code window, you can 'Insert' the selected code into the editor, or copy the selected function to the clipboard.

    Let's begin with the str_length function, which returns the number of characters in a string. Select the 'str_length' function in the Function Reference area; then click the Insert button.


  3. Let's look closer at this function and see what is happening (the complete function is reproduced after this list for reference).
    1. Looking at Line 1, the double hyphen (--) indicates that this line is a 'Comment'. Comments allow you to enter an explanation of the function and the code that the function executes. This can be very helpful, as it lets you document what the function does and how it works.
    2. On Line 3, the function is defined. The syntax is the word 'function' followed by the name of the function (in this case, 'str_length') and then the parameter list in parentheses '(str)'. Note that the function name cannot contain spaces, which is why we use the underscore character between 'str' and 'length' (i.e., 'str_length').
    3. Next, skip to Line 5 and you see the word 'end'. This is a signal to the LUA scripting engine that this is the end of the 'str_length' script function. The lines between the function definition (Line 3) and the 'end' (Line 5) are the statements that the LUA Engine will execute when the function is called by the Metadata Scripting node. This is a very simple function; it only has one statement ('return string.len(str)'). But there is a lot going on in this one statement. Here is an overview:
      1. 'return' tells the LUA scripting engine to return the value that follows the word 'return' to the Metadata Scripting node.
      2. 'string' is a collection of functions that work with string values.
      3. '.len' is a function in the collection 'string' that returns the length of the string that is passed to it ('str').

      The variable 'str' is passed from the Metadata Scripting node into the 'str_length' function. The variable 'str' contains the string that is in metadata and the 'str_length' function returns the length of the string to the Metadata Scripting node.

    If you are wondering what a variable is, think of it as a place where some arbitrary data is stored. Variables are used to pass data into the function, and hold data as the function performs some task. In this example, 'str' is the only variable used and it is an input variable.
  4. You can test your function in the editor by clicking on the Test button in the toolbar. In the Test window, do the following:
    1. Select the down arrow in the Function field; then choose your function from the User Defined Functions list. See the illustration below for an example:
    2. Enter a sample string in the Sample Data field.
    3. Click the Run Test button.
  5. The Output/Console window will display the results of the test. As shown in the illustration below, the sample data is the string 'This is a test' and the return value ('Result') is '14'.
  6. Once you have created and tested your function, you can then click the Save button in the toolbar to save your work and make the function available to the Metadata Scripting node.
  7. In the Metadata Scripting node, you can choose the type of rule you want to create. For example, you may want to copy existing metadata to a new metadata key. Do the following:
    1. Choose Copy Metadata from the Add New Rule drop-down list.
    2. Select the ellipsis button next to the Metadata Key field to open up the Metadata Browser and choose the metadata variable that you would like to use as the source of the string to get the length of.
    3. Select the arrow next to the Function field to choose your User Defined Function (i.e., str_length).
    4. In the Output Key field, enter a new metadata tag variable that you would like to create (i.e., '{script:str_length}').
    5. Keep the default selection for the Range field as "Document,All".
    6. Check the Enable verbose logging for rules box at the bottom of the node configuration window.
    See the following illustration for an example:
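
For reference, the complete function discussed in step 3 looks like the following. This is a reconstruction based on the description above; the comment text on Line 1 may differ from the built-in version.

    -- Returns the number of characters in the string passed in as 'str'.

    function str_length(str)
        return string.len(str)
    end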

Workflow Results



I created a simple workflow to test this. In the workflow, users can enter a string at the MFP panel via a Dispatcher Phoenix Index Form and then scan a document. Once the document is scanned, the Metadata Scripting node calls the 'str_length' script, which counts the length of the string that was entered in the Index Form. The workflow uses the Metadata to File node to capture and store the metadata in a separate text file so that we can review how the script worked.


  1. First, let's review the workflow log. See below:



    As you can see by the highlighted text, the Metadata Scripting node read the string from the Index Form and created a new metadata tag with the value of 38.
  2. To further show that the Metadata Scripting node provided the output we are expecting, see the results from the Metadata To File node below.


In Summary



We’ve walked through a very simple example, using the Metadata Scripting node to show how easy it is to create a script function to create metadata automatically. As you can see, no real programming experience was necessary!


In the next article, we'll show you how to create a slightly more complex script with more than one function. We'll also talk about troubleshooting scripts, how to use more than a single variable, and how to deal with page level and document level metadata variables. If you have any questions, please let us know at sec@kmbs.konicaminolta.us.


Tip 07: Let Metadata Scripting Help. Overview of the Dispatcher Phoenix Metadata Scripting Node (Part 2)

Welcome back! In the last email newsletter, we gave you a brief overview of the many ways Metadata Scripting can be used to help automate document processing and routing tasks. This time, we will go into how to create a metadata script. If you missed Part 1, you can find it here: Overview of the Dispatcher Phoenix Metadata Scripting Node (Part 1)


Dispatcher Phoenix's advanced Metadata Scripting node allows you to manage, modify, copy, delete, and add metadata associated with the files in your workflow. To show how to use this node, we will teach you how to create a metadata key that records the processed document's character count. This processing capability is extremely useful, especially for file storage systems that implement a character limit for documents and file names.


Please note that when creating a LUA Script for the Metadata Scripting node, you can use the editor found within the node. For more in-depth information on how to use the Metadata Scripting node, please review the online help documentation prior to configuring the node.


Before learning more about the functionality Dispatcher Phoenix's Metadata Scripting node offers, please do the following:


  1. Create a new workflow by opening Dispatcher Phoenix's workflow builder.
  2. Drag-and-drop the following nodes (in this order) and connect each: bEST, Metadata Scripting, Metadata to File, and Output folder.
  3. Open the bEST node and:
    1. Add the MFP Simulator
    2. Attach a new (blank) Index Form. Within the Index Form, drag the Text field, and give it a friendly name.
    3. Select 'Save.'
  4. Open the Metadata to File node and check the 'bEST', 'Index Form', and 'Script' boxes.
  5. Open the Output folder and select the directory you want to distribute the processed file to. Your workflow should look like this:


    View of workflow


Now you are ready to create a script, which is actually very simple. Please follow these steps:


  1. Open the Metadata Scripting node and select the Add/Edit Functions button on the Tool Bar.

  2. Add/Edit Functions


  3. The Add/Edit Functions text editor will open. This simple Editor can be used to create/update the script functions that this node will use. On the right is a list of the built-in functions that come with the Metadata Scripting node. When you click on one, the function code is displayed in the code window below the list. Using the buttons below the code window, you can insert the selected code into the Editor, or copy the selected function to the clipboard. Let's select the str_length function and then click the Insert button.

  4. str_length Function


  5. Let's look closer at this function and see what is happening.
    1. On line 1, the double hyphen '--' indicates that this line is a comment. Comments allow you to enter a detailed explanation of the function and the code the function executes. This can be very helpful as it allows you to document what the function does and how the function works.
    2. Line 3 is the definition of the function. It is the word 'function' followed by the name of the function 'str_length' and then the parameter list in parentheses '(str)'. Note that the function name cannot contain spaces, which is why we use the underscore character between 'str' and 'length' (e.g., 'str_length').
    3. Next, skip to Line 5 where you see the word 'end'. 'End' is a marker to the LUA Scripting Engine that this is the END of the script function, 'str_length'. The lines between the function definition (Line 3) and the 'end' (Line 5) are the statements that the LUA Engine will execute when the function is called by the Metadata Scripting node. This is a very simple function; it only has one statement 'return string.len(str)'. But there is a lot going on in this one statement:
      1. 'return' tells the LUA Scripting Engine to return the value that follows the word 'return' to the Metadata Scripting Node.
      2. 'string' is a collection of functions that work with string values.
      3. '.len' is a function in the collection 'string' that returns the length of the string passed to it.
      4. The variable ‘str’ is passed from the Metadata Scripting Node into the function 'str_length'. The variable 'str' contains the string that is available in Metadata and the function 'str_length' returns the corresponding value to the Metadata Scripting Node.
    4. So what is a variable? Think of a variable as a place where some arbitrary data is stored. Variables are used to pass data into the function, and hold data as the function performs some task. In this example, 'str' is the only variable used and it is an Input Variable.
  6. Using the Editor, you can test your function by clicking on the Test button in the toolbar and then choosing your function from the User Defined Functions list. Enter a sample string into the Sample Data field, such as "This is a test," and then click the Run Test button. The Output/Console window will display the results of the test. As you can see, the test data is the string 'This is a test' and the return value ('Result') is '14'.

  7. Testing the function


  8. Once you have created and tested your function, you can then click the Save button in the toolbar to save your work and make the function available to the Metadata Scripting Node.
  9. In the Metadata Scripting Node, you then choose the type of rule you want to create. In this example, do the following:
    1. In the Add New Rule drop-down, select Copy Metadata.
    2. Next, choose the Metadata Variable to be the source of the string to get the length of using the Metadata Browser.
    3. From the Function drop-down, choose your User Defined Function (e.g., str_length).
    4. Create a new metadata tag variable. For this example, enter '{script:str_length}'. Leave the Range set at the default Document,All and at the bottom select the Enable verbose logging for rules checkbox.

  10. Testing the function


  11. Before leaving the Metadata Scripting node, select Save and start your workflow.
  12. The workflow you created allows you to enter a string with the index form during scanning time. Open the MFP Simulator, and enter any label in the field you created (step 3b of the initial setup). Then input the file you would like to test with.
  13. After the file processes through the workflow, access the folder your output is pointing to. Here you will find two files: the original document and an XML file. Open the XML file and you will see how the Metadata Scripting Node uses the str_length script by measuring the length of the scanned document's string. We then use the Metadata to File Node to capture the metadata to a file, and then we can review how this script worked. Let's review the workflow log:


    View of workflow log


    As you can see, the Metadata Scripting Node reads the string from the Index Form and then creates a new metadata tag with the value of 38.


Conclusion

We have walked through a very simple example of using the Metadata Scripting Node to show how easy it is to create a script function, which performs tasks that are not possible from other nodes. In addition, no real programming experience is necessary, as we created a new script based on an existing script. A sample workflow created for this Tech Tip is attached, so you can review the workflow and the nodes and then experiment on your own.


In Part 3 of this Tech Tip, we will conclude our discussion of the Metadata Scripting Node by creating a slightly more complex script with more than one function, talking about troubleshooting scripts, using more than a single variable, and dealing with page-level metadata variables. Again, if you have any questions, please let us know at sec@kmbs.konicaminolta.us.


Tip 06: Let Metadata Scripting Help. Overview of the Dispatcher Phoenix Metadata Scripting Node (Part 1)

The Metadata Scripting node is one feature that seems to intimidate users of Dispatcher Phoenix. When our engineers are asked if Dispatcher Phoenix can perform a specific, complex operation for a customer, we often respond, "Yes, that can be done with the Metadata Scripting Node." And the reaction we get is "That is too difficult!" Or, "I can't do that. I'm not a programmer."


But, the truth is, this node does not require a lot of advanced programming expertise. In fact, although some programming experience is helpful, it is possible to create scripts using the Metadata Scripting node with very little programming skill.


What Can the Metadata Scripting Node Do?

Here are some examples of some of the things that can be done with Metadata Scripting:


  1. Convert Page Level Variables to Document Level Variables. This makes access to variables easier and less error prone.
  2. Manipulate metadata values (e.g., modify metadata variable values), along with splitting or merging metadata values. If a file has many variables (name, date, number of copies), these variables can be easily modified so that specific information can be extracted. On the flipside, variables can be merged for files that include a lot of information.
  3. Create new metadata from existing metadata values. This allows the user to make decisions based on a metadata value. For example, if the user wants to change the format of “Yes/No” to “True/False,” this feature can do it easily (see the sketch following this list).
  4. Count pages and count documents. Allows the user to automatically determine how many pages and/or documents there are after a print job.
  5. Reformat metadata values for other nodes in the workflow. The MFP cannot determine numerals that are written out (one, two, three) vs. those that are written in a number format (1, 2, 3). Reformatting converts one value into another so that the MFP can recognize it.
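
To give a sense of how simple such a script can be, here is a minimal Lua sketch for the Yes/No example in item 3. The function name is illustrative; Parts 2 and 3 of this series show how a function like this is wired into the Metadata Scripting node.

    -- Illustrative only: converts a "Yes"/"No" metadata value to "True"/"False".
    function yes_no_to_true_false(value)
        if value == "Yes" then
            return "True"
        else
            return "False"
        end
    end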


If there is a need to edit, update or create metadata values in a workflow, that is a perfect job for the Metadata Scripting Node.


Where Can I Get More Information?

Dispatcher Phoenix uses a modified Lua Scripting engine to support scripts created in the Metadata Scripting Node. For more information about Lua scripting, you can:

  • Go to the Lua scripting website at http://www.lua.org. This website includes documentation and examples of Lua scripts.
  • Take Lua tutorials at the following website: https://www.tutorialspoint.com/lua/ This website is specifically designed for beginners.
  • You can also get help from SEC's International Service and Support (ISS) group by emailing sec@kmbs.konicaminolta.us.

Just remember that Dispatcher Phoenix does not support all of the features of Lua, such as File and System functions.


In Part 2 of this Metadata Scripting Node series we will take a closer look at how you create Metadata Scripting functions and scripts.


Tip 05: Recommended Scan Settings for Best Barcode Recognition

Dispatcher Phoenix offers powerful barcode processing features for both standard and 2D barcodes. With an automated workflow, files can be automatically split, renamed, annotated, routed, indexed and more based on the barcode that is detected. There may be occasions, however, when the barcode on a document is hard to read. In this case, you should follow best scanning practices to increase the accuracy of the barcode recognition.


Recommended Color Modes

When scanning your document, you can choose from three color modes: black and white, grayscale, or color. For best barcode recognition, we recommend scanning the document in black and white. If a document is scanned in full color or grayscale mode, the MFP will try to match the scanned color by softening the edges of straight lines, making the barcode more difficult to detect. Scanning in black and white will result in sharp edges and clear bars in the barcodes.


Recommended Resolution

If you must scan in full color or grayscale, choose a higher resolution, such as 300x300 or 400x400. Although higher resolutions result in large image files and longer processing time, barcode recognition will improve.



Barcode Processing Workflows

With Dispatcher Phoenix's Barcode Processing, files can be automatically renamed, indexed, annotated, split, routed, and more. Visit our sample workflow library and search for "barcode" to download and start using a sample Barcode Processing workflow today.



Note: Barcode Processing is included with Healthcare, Finance, Government and ECM editions. It is available as an option for all other editions of Dispatcher Phoenix.

Tip 04: Quick Way to Test Your "Scan to Email" Workflow

Scanning business documents, such as contracts and proposals, and then emailing them as attachments helps reduce paper and mailing costs. And with Dispatcher Phoenix, you can easily create a powerful workflow to scan, process, index, and email documents to specific email recipients. But what if you want to test your "Scan to Email" workflow without having to connect to an email server? With Dispatcher Phoenix's SMTP In node, this is easy to configure!


A typical Dispatcher Phoenix "Scan to Email" workflow uses the SMTP Out node, which requires specific connection information to be specified, such as the IP address for the outgoing SMTP email server, the Port used by the server for SMTP communication, the username and password of the email server account, and more. And if the SMTP Out node is not configured properly, errors will occur when the workflow runs.


Using SMTP In To Act As Email Server

However, Dispatcher Phoenix also has an SMTP In node that can be configured to act like an email server. The SMTP In node would accept email from your "Scan to Email" workflow. To begin, you should create a simple workflow with an SMTP In node and an Output node. See the following illustration for an example of a simple SMTP In workflow that you could create:

In this workflow, the SMTP In node would be configured with the Local Address 127.0.0.1. See the following illustration for an example:


Note that this SMTP In workflow must be running when you set up the SMTP Out workflow next.
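
Optionally, before building the Scan to Email workflow, you can confirm that the SMTP In workflow is accepting messages by sending it a test email directly from the Dispatcher Phoenix PC. The following PowerShell one-liner is only a sketch: the port, addresses, and message text are placeholders, and it assumes the SMTP In node is listening on the port configured in the node and does not require authentication.

Send-MailMessage -SmtpServer 127.0.0.1 -Port 25 -From "scanner@example.com" -To "recipient@example.com" -Subject "SMTP In test" -Body "Test message for the SMTP In workflow"

If the node accepts the message, it should arrive at the Output node of the SMTP In workflow.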

Setting Up Scan to Email

Now, with the SMTP In node set up to act as an email server, you can create a Scan to Email workflow in which the SMTP Out node sends emails to the SMTP In node in the previous workflow. See the following illustration for an example:

In this particular workflow, an Index Form is set up to prompt the MFP user to enter the email address to send the scanned document(s) to. When the workflow is run, the scanned document is converted to PDF and then sent out as an email attachment...all without connecting to an email server!

 

Tip 03: Sending Dispatcher Phoenix Feedback?

Here's how to include a list of running processes on your system

There are several resources available to help you identify or resolve any unexpected behavior when using Dispatcher Phoenix. One of them is a Windows command called TaskList, which lists the processes running on your system (including any that are not responding) and allows you to output the list to a text file. This is ideal in situations when you are unable to open the Windows Task Manager or you want to print out the list of processes. The text file that is created can then be attached to Dispatcher Phoenix Customer Feedback to provide additional information about your system.

 

Here are the steps to take to create a detailed report of the processes on your system:

  • Open an Administrator Command Prompt by right-clicking the cmd.exe file and selecting “Run as administrator.”
  • Change the directory to the Desktop folder (for example, cd %USERPROFILE%\Desktop).
  • Enter the following commands one at a time at the Command Prompt, allowing each command to complete before running the next one (or run them all in sequence using the batch file sketch after this list). Each command will append more details to the tasklist.txt file.
    1. tasklist /apps /fo csv >> tasklist.txt
    2. tasklist /svc /fo csv >> tasklist.txt
    3. tasklist /m /fo csv >> tasklist.txt
    4. tasklist /v /fo csv >> tasklist.txt
  • Open Customer Feedback and click the “Options” button (on the lower left side of the window).
  • Enable all of the additional information options and click the “OK” button.
  • Enter “Additional log and task list information capture” into the Description field of the Customer Feedback.
  • Drag and drop the “tasklist.txt” file from the Desktop into the Customer Feedback files area. Include an exported copy of the workflow and any sample files as well.
  • Save the Customer Feedback to the Desktop; then, attach this file to a Support Ticket for further review.
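
As mentioned in the steps above, if you prefer to run the four TaskList commands in a single step, the following minimal batch file sketch simply executes them in order. The file name and location are only examples; save it, for instance, as tasklist_report.bat on the Desktop and run it from an elevated Command Prompt:

cd /d "%USERPROFILE%\Desktop"
tasklist /apps /fo csv >> tasklist.txt
tasklist /svc /fo csv >> tasklist.txt
tasklist /m /fo csv >> tasklist.txt
tasklist /v /fo csv >> tasklist.txt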

 

Tip 02: Tips for bEST Server/MFP Connections

If you receive an error on the MFP panel about failing to connect to the workflow, please try the following steps to resolve the issue:

 

 

  1. Confirm the bEST Server settings on the Defaults window (accessible from the MFP Registration Tool). The bEST Server IP Address must match the IP Address of the PC running Dispatcher Phoenix. If the IP Address does not match, the MFP will not be able to connect. See the following illustration for an example of the Defaults window:

  2. Check your Firewall and Anti-virus settings. In the MFP's Web Browser, enter the IP Address of the bEST Server followed by port 50808; you should receive an "Access Denied" message. If you do not, the MFP is being blocked from connecting. (You can also check from the Dispatcher Phoenix PC that the bEST Server is listening on this port; see the command sketch after this list.)

     URL Example: http://11.22.33.44:50808/
     (where 11.22.33.44 is the IP Address of the PC running the bEST Server)

  3. Check that the MFP is not using a SHA-1 SSL Certificate.
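
As referenced in step 2, you can also verify from the PC running Dispatcher Phoenix that the bEST Server is listening. This is a minimal Command Prompt sketch, assuming the port 50808 shown in the URL example above:

netstat -an | find "50808"

If the output includes a line for port 50808 in the LISTENING state, the bEST Server is up and the connection problem is more likely a firewall or network issue between the MFP and the PC.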

 


Active Directory Domain User Privileges

When running conopsd as a domain user, the domain user conopsd must have specific privileges on the PC that will be running Dispatcher Phoenix in order to work correctly. After setting up the domain user conopsd, make it a local administrator of the PC running Dispatcher Phoenix and then go into Local Security Policy (Local Policies => User Rights Assignment) to assign the following privileges manually. This is most often required for Worldox GX3 and GX4 in an Active Directory environment.

The privileges are as follows:

 

  • Access this computer from the network
  • Act as part of the operating system
  • Adjust memory quotas for a process
  • Back up files and directories
  • Bypass traverse checking
  • Create a token object
  • Debug programs
  • Enable computer and user accounts to be trusted for delegation
  • Impersonate a client after authentication
  • Log on as a service
  • Replace a process level token
  • Restore files and directories

 

Be aware that, depending on how your Active Directory Domain and Domain Group Policy are configured, these privileges may not stay set. Your Domain Administrator should configure these settings for the Domain User conopsd so that they remain in effect.
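
If you want to confirm which user rights are currently assigned on the PC, one option (a sketch only, and the output path is just an example) is to export the local User Rights Assignment with the built-in secedit tool from an elevated Command Prompt and review the resulting file:

secedit /export /cfg "%USERPROFILE%\Desktop\user_rights.inf" /areas USER_RIGHTS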

Tip 01: How To Add An Icon To Your MFP Workflow

Have you ever wanted to improve the look of the workflows that are run at the MFP? Does the displayed workflow look bad, or does it simply not convey the message you want? The following Tech Tip shows you how to display any image you want as the workflow image at the MFP.

 

 

The first step is to create your workflow with the proper size and dimensions. The image that is displayed within the workflow screen (see image above) is square. Therefore, you want your workflow to be square to best fit that space. For best results with respect to scaling, set your workflow to be 800 pixels square.

 

Under Page Settings in the workflow builder, set the Size to "Custom Size". Then select "pixels" for one of the dimension units (the other will change automatically to match) and set both Width and Height to "800".

 

Now that your workflow is sized properly, your first thought is probably to start adding nodes. Not yet! Before you add any nodes, find the image that you want displayed next to your workflow on the MFP. It could be anything from a conceptual image to a company logo ... or even something completely unrelated that you just happen to like. Use whatever image editing program you might have to size that image to 800px by 800px.

 

Once you have your 800x800 image, set it as the background for your workflow.

  1. Under Background Image, check the "Enable" box.
  2. Click the "Select Image" button and choose your image.

 

You can see in the image below that we have sized our workflow and set our image as the background.

 

 

Now we need a place to build the workflow. Rather than clutter the nice, new image, let's add a second page. In the top menu, select Insert > New Page (or press CTRL+Ins) to add a second page.

 

The new page has the same background as the first, which might not display the workflow very well. To help, place a square shape over the entire page; white works well for building upon. If you want the background image to show through slightly, lower the shape's Opacity (transparency) setting until you achieve the effect you want.

 

 

Now you are ready to build your workflow! Just add, connect and configure your nodes on page two. The resulting workflow will have your custom image displayed with it on the MFP.

 

 


Relocating Temporary Files

In many server environments, a user may configure the primary Windows drive with only enough space to run Windows, or may be using a Solid State Drive. In Dispatcher Phoenix workflows that use the OCR Engine to process files (e.g., Advanced OCR, Forms Processing, Convert to PDF), the workflow can create a large number of temporary files. Furthermore, these temporary files can be as large as three times the size of the file being processed.

 

All of these temporary files can quickly use up a lot of hard drive space, resulting in issues with the Dispatcher Phoenix threshold monitor. The threshold monitor prevents Dispatcher Phoenix workflows from consuming all of the available hard drive space and RAM Memory space and will stop running workflows when the threshold limits are reached.

 

In such situations the user may want to move the location of Dispatcher Phoenix temporary files to a different hard drive, thus preventing issues with Windows and improving overall performance.

 

The Dispatcher Phoenix Workflow Services (erl, conopsd, xmpp_cluster) read a configuration variable from a file named “config.ini”, which is located in the following folder:

 

%programdata%\Konica Minolta\blox

 

The variable is:

 

[blox]
data = %ALLUSERSPROFILE%\\Konica Minolta\\conopsd\\var

 

Setting data to another location and then restarting the Workflow Services using the Workflow Services Manager will change the location where most, but not all, temporary files are written.
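
For example, to move the temporary files to a folder on a secondary drive, the entry could look like the following. The folder name D:\DispatcherPhoenixTemp is purely an example; note the doubled backslashes, matching the default entry shown above.

[blox]
data = D:\\DispatcherPhoenixTemp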

 

Note: Data should never be set to a network share or a non-permanent location (e.g., a USB drive). You should only use a local drive other than the Windows System Drive (e.g., Drive D:).

 

The new folder must already exist before you change the config.ini file and restart the services.

 

Set the folder permissions to give “Everyone” the “Full Control” right so that the folder structure can be accessed; failure to do so will result in “Access Denied” errors in the workflow log and the workflow will fail. If you do not want to grant access to “Everyone”, then as an alternative grant “Full Control” to the user ID .\conopsd and to the Windows user profile that is used to create the Dispatcher Phoenix workflows.
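
If you prefer, these permissions can also be granted from an elevated Command Prompt. This is a minimal sketch using the same example folder as above; (OI)(CI)F grants Full Control and makes it inheritable by files and subfolders:

icacls "D:\DispatcherPhoenixTemp" /grant Everyone:(OI)(CI)F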
