Tech Tips

October 2020

Sending Email Notifications via Dispatcher Phoenix

As more and more business processes become automated, employees may not always have direct access to the server where those processes run. In that case, users still need feedback so the processes can be monitored. Dispatcher Phoenix has several built-in tools for workflow monitoring, including the ability to receive automated notifications. A workflow designed with automated notifications built in provides even more value to the user. This Tech Tip covers some ways you can add automated notifications to any workflow and offers best-practice suggestions for doing so.


Overview Of Dispatcher Phoenix Transitions

First, let's look at how a workflow can be designed using Dispatcher Phoenix’s Workflow Builder tool. Each Dispatcher Phoenix node has at least two output paths: one called “Normal” and one called “Error”. Some nodes are used to make decisions; these nodes have three output paths: “Yes”, “No”, and “Error”. Because of this flexibility, the workflow designer can choose how and where files transition through the workflow. Since each node has an Error path, the workflow can be set up with error handling paths. Depending on the design, this error handling can be as simple as emailing an administrator that an error has occurred or as sophisticated as performing an error recovery task when an error occurs.


The Default Error Node

In addition, the workflow has an overall output path called the Default Error node. The Default Error node provides a default exit path when an unexpected error occurs within a workflow. Any Dispatcher Phoenix Distribution node can be assigned as the Default Error node; it is called by the workflow when a node fails or has an error and nothing is connected to that node's Error path. Typically, the Output Folder node is used; however, in some workflow designs another node is more helpful, such as the SMTP Out node, which emails a specific recipient when an error occurs and attaches the file that was being processed at the time. Not only does this notify a designated administrator that an error occurred, but the file being processed at the time of the error is preserved so that the administrator can take appropriate action.


The Default Error Node can be set up via the Canvas Properties in the Workflow Builder tool, as shown in the following illustration:


Default error node selection

Sending Error Notifications from a Specific Node

Although the Default Error node is a convenient feature for error handling, there may be instances when it is necessary to have an Error path from a specific node. For example, you may want a notification with a customized message and other details sent to an administrator when an error occurs in a particular process; that would not be possible using only the Default Error node.


For example, if an Annotation node, which uses metadata to automatically create the annotation, fails for some reason, the metadata can be included in the message sent to the administrator.


See the following illustration for how you could set up an SMTP Out node to send an email with Annotation metadata if the Annotation node fails:



Email setup

Choosing the Error path from a node is easy using the Workflow Builder tool. Select the connector and open the Connector Properties panel on the right-hand side of the Workflow Builder. By default, the “Normal” path is selected. To change this, do the following:

  1. Unselect the “Normal” icon.
  2. Click on the “Error” icon, as illustrated below. Notice that when the “Error” path is selected on a connector, a circle with an “X” appears on the connector.

  Error transitions


Sending Informational Notifications

You can also send informational notifications at any point in the workflow. For example, you can add an SMTP Out node on the “Normal” path of an Output Folder node, so that email notifications are sent when files are successfully saved to their destination folder. This can be very useful for notifying users of success when documents have been scanned, processed, and stored (e.g., a user scans a large document and wants to be notified when the workflow has finished processing it). And since the SMTP Out node can use metadata in the workflow, the notification sent to the user can also include information such as the file name, folder path, and any other useful details.


See the illustration below for an example of this workflow:


Workflow

Best Practices

  1. Always set a Default Error node for all of your workflows. This will help with debugging workflows you have written. In addition, you’ll never have to worry about losing a document due to an error in the workflow.
  2. If using the Output Folder node as the Default Error node, make sure the folder path is always available. It is strongly suggested that the folder path be local to the PC running Dispatcher Phoenix and NOT a network resource; if the network goes down, the Default Error node will likely fail as well. We recommend using “C:\Users\Public\Documents\ERROR-FOLDER”, since this is a safe location on the PC running Dispatcher Phoenix: it is always available and avoids permissions issues, regardless of the user permissions of the running workflow.
  3. Give each node in your workflow a unique name that is associated with the work that the node is performing. The name assigned to a node is what will appear in the Workflow Log. This can really save you time when trying to determine which node is failing.
  4. Any critical process or node in your workflow should have a separate Error output so that additional metadata can be captured. This can really save you time and effort when debugging a workflow.


For a sample workflow that sends out email notifications, please click here.


June 2020

Metadata Scripting for Advanced Workflows (Part 3)

To create a custom script for your workflow, you can use Dispatcher Phoenix's Metadata Scripting node. This Tech Tip walks you through the basics of creating a LUA script. Please note that the Dispatcher Phoenix Online Help documentation covers the Metadata Scripting node in detail; we recommend reviewing it before reading further.


To create a script, follow these steps:

  1. In the Dispatcher Phoenix Workflow Builder Tool, open the Metadata Scripting node; then click on the Add/Edit Functions button in the Tool Bar, as illustrated below:


  2. The Add/Edit Functions text editor opens. This simple editor is where you create/update your script functions. This node comes with a list of built-in functions (listed in the Function Reference area on the right-hand side of the node configuration window). When you click on a built-in function, the function code is displayed in the code window below the list. Using the buttons below the code window, you can 'Insert' the selected code into the editor, or copy the selected function to the clipboard.

    Let's begin with the str_length function, which returns the number of characters in a string. Select the 'str_length' function in the Function Reference area; then click the Insert button.


  3. Let's look closer at this function and see what is happening. (A sketch of the full function as it appears after clicking Insert follows this list.)
    1. Looking at Line 1, the double hyphen (--) indicates that this line is a 'Comment'. Comments allow you to enter an explanation of the function and the code that the function executes. This can be very helpful, as it lets you document exactly what the function does and how the function works.
    2. On Line 3, the function is defined. The syntax is the word 'function' followed by the name of the function (in this case, 'str_length') and then the parameter list in parentheses '(str)'. Note that the function name cannot contain spaces, which is why we use the underscore character between 'str' and 'length' (i.e., 'str_length').
    3. Next, skip to Line 5 and you see the word 'end'. This signals to the LUA scripting engine that this is the end of the 'str_length' script function. Every line between the function definition (Line 3) and the 'end' (Line 5) contains the statements that the LUA Engine will execute when the function is called by the Metadata Scripting node. This is a very simple function; it only has one statement ('return string.len(str)'). But there is a lot going on in this one statement. Here is an overview:
      1. 'return' tells the LUA scripting engine to return the value that follows the word 'return' to the Metadata Scripting node.
      2. 'string' is a collection of functions that work with string values.
      3. '.len' is a function in the collection 'string' that returns the length of the string that is passed to it ('str').

      The variable 'str' is passed from the Metadata Scripting node into the 'str_length' function. The variable 'str' contains the string that is in metadata and the 'str_length' function returns the length of the string to the Metadata Scripting node.

    If you are wondering what a variable is, think of it as a place where some arbitrary data is stored. Variables are used to pass data into the function, and hold data as the function performs some task. In this example, 'str' is the only variable used and it is an input variable.
  4. You can test your function in the editor by clicking on the Test button in the toolbar. In the Test window, do the following:
    1. Select the down arrow in the Function field; then choose your function from the User Defined Functions list. See the illustration below for an example:
    2. Enter a sample string in the Sample Data field.
    3. Click the Run Test button.
  5. The Output/Console window will display the results of the test. As shown in the illustration below, the sample data is the string 'This is a test' and the return value ('Result') is '14' (14 characters, counting the spaces).
  6. Once you have created and tested your function, you can then click the Save button in the toolbar to save your work and make the function available to the Metadata Scripting node.
  7. In the Metadata Scripting node, you can choose the type of rule you want to create. For example, you may want to copy existing metadata to a new metadata key. Do the following:
    1. Choose Copy Metadata from the Add New Rule drop-down list.
    2. Select the ellipsis button next to the Metadata Key field to open the Metadata Browser and choose the metadata variable whose string length you want to measure.
    3. Select the arrow next to the Function field to choose your User Defined Function (i.e., str_length).
    4. In the Output Key field, enter a new metadata tag variable that you would like to create (i.e., '{script:str_length}').
    5. Keep the default selection for the Range field as "Document,All".
    6. Check the Enable verbose logging for rules box at the bottom of the node configuration window.
    See the following illustration for an example:
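
As mentioned in step 3, here is a sketch of what the inserted 'str_length' function looks like, based on the line-by-line description above; the exact comment wording in the built-in function may differ:

    -- Returns the number of characters in the string 'str'   (Line 1: the comment)

    function str_length(str)       -- Line 3: the function definition, with one parameter, 'str'
        return string.len(str)     -- Line 4: the single statement; returns the length of 'str'
    end                            -- Line 5: marks the end of the function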

Workflow Results



I created a simple workflow to test this. In the workflow, users can enter a string at the MFP panel via a Dispatcher Phoenix Index Form and then scan a document. Once the document is scanned, the Metadata Scripting node calls the 'str_length' script, which counts the length of the string that was entered in the Index Form. The workflow uses the Metadata to File node to capture and store the metadata in a separate text file so that we can review how the script worked.


  1. First, let's review the workflow log. See below:



    As you can see from the highlighted text, the Metadata Scripting node read the string from the Index Form and created a new metadata tag with the value of 38 (the length of the string entered in the Index Form).
  2. To further show that the Metadata Scripting node provided the output we are expecting, see the results from the Metadata To File node below.


In Summary



We’ve walked through a very simple example, using the Metadata Scripting node to show how easy it is to create a script function that generates metadata automatically. As you can see, no real programming experience was necessary!


In the next article, we'll show you how to create a slightly more complex script with more than one function. We'll also talk about troubleshooting scripts, how to use more than a single variable, and how to deal with page level and document level metadata variables. If you have any questions, please let us know at sec@kmbs.konicaminolta.us.


August 2019

Let Metadata Scripting Help. Overview of the Dispatcher Phoenix Metadata Scripting Node (Part 2)

Welcome back! In the last email newsletter, we gave you a brief overview of the many ways Metadata Scripting can be used to help automate document processing and routing tasks. This time, we will go into how to create a metadata script. If you missed Part 1, you can find it here: Overview of the Dispatcher Phoenix Metadata Scripting Node (Part 1)


Dispatcher Phoenix's advanced Metadata Scripting node allows you to manage, modify, copy, delete, and add metadata associated with the files in your workflow. To show how to use this node, we will teach you how to create a metadata key that records the processed document's character count. This processing capability is extremely useful, especially for file storage systems that implement a character limit for documents and file names.


Please note that when creating a LUA Script for the Metadata Scripting node, you can use the editor found within the node. For more in-depth information on how to use the Metadata Scripting node, please review the online help documentation prior to configuring the node.


Before learning more about the functionality Dispatcher Phoenix's Metadata Scripting node offers, please do the following:


  1. Create a new workflow by opening Dispatcher Phoenix's workflow builder.
  2. Drag-and-drop the following nodes (in this order) and connect each: bEST, Metadata Scripting, Metadata to File, and Output folder.
  3. Open the bEST node and:
    1. Add the MFP Simulator
    2. Attach a new (blank) Index Form. Within the Index Form, drag the Text field, and give it a friendly name.
    3. Select 'Save.'
  4. Open the Metadata to File node and check the 'bEST', 'Index Form', and 'Script' boxes.
  5. Open the Output Folder node and select the directory you want to distribute the processed file to. Your workflow should look like this:


    View of workflow


Now you are ready to create a script, which is actually very simple. Please follow these steps:


  1. Open the Metadata Scripting node and select the Add/Edit Functions button on the Tool Bar.

  2. Add/Edit Functions


  3. The Add/Edit Functions text editor will open. This simple editor can be used to create or update the script functions that this node will use. On the right is a list of the built-in functions that come with the Metadata Scripting node. When you click on one, the function code is displayed in the code window below the list. Using the buttons below the code window, you can insert the selected code into the editor or copy the selected function to the clipboard. Let's select the str_length function and then click the Insert button.

  4. str_length Function


  5. Let's look closer at this function and see what is happening.
    1. On line 1, the double hyphen '--' indicates that this line is a comment. Comments allow you to enter an explanation of the function and the code that the function executes. This can be very helpful, as it lets you document exactly what the function does and how the function works.
    2. Line 3 is the definition of the function. It is the word 'function' followed by the name of the function 'str_length' and then the parameter list in parentheses '(str)'. Note that the function name cannot contain spaces, which is why we use the underscore character between 'str' and 'length' (i.e., 'str_length').
    3. Next, skip to Line 5, where you see the word 'end'. 'End' is a marker to the LUA Scripting Engine that this is the end of the script function 'str_length'. Every line between the function definition (Line 3) and the 'end' (Line 5) contains the statements that the LUA Engine will execute when the function is called by the Metadata Scripting node. This is a very simple function; it only has one statement, 'return string.len(str)'. But there is a lot going on in this one statement:
      1. 'return' tells the LUA Scripting Engine to return the value that follows the word 'return' to the Metadata Scripting Node.
      2. 'string' is a collection of functions that work with string values.
      3. '.len' is a function in the collection 'string' that returns the length of the string that is passed to it ('str').
      4. The variable 'str' is passed from the Metadata Scripting Node into the function 'str_length'. The variable 'str' contains the string that is available in metadata, and the function 'str_length' returns its length to the Metadata Scripting Node.
    4. So what is a variable? Think of a variable as a place where some arbitrary data is stored. Variables are used to pass data into the function, and hold data as the function performs some task. In this example, 'str' is the only variable used and it is an Input Variable.
  6. Using the editor, you can test your function by clicking on the Test button in the toolbar and then choosing your function from the User Defined Functions list. Enter a sample string into the Sample Data field, such as "This is a test," and then click the Run Test button. The Output/Console window will display the results of the test. As you can see, the test data is the string 'This is a test' and the return value ('Result') is '14' (14 characters, counting the spaces). A plain-Lua sketch of this function and test appears after these steps.

  7. Testing the function


  8. Once you have created and tested your function, you can then click the Save button in the toolbar to save your work and make the function available to the Metadata Scripting Node.
  9. In the Metadata Scripting Node, you then choose the type of rule you want to create. In this example, do the following:
    1. In the Add New Rule drop-down, select Copy Metadata.
    2. Next, use the Metadata Browser to choose the Metadata Variable whose string length you want to measure.
    3. From the Function drop-down, choose your User Defined Function (e.g., str_length).
    4. Create a new metadata tag variable. For this example, enter '{script:str_length}'. Leave the Range set at the default Document,All and at the bottom select the Enable verbose logging for rules checkbox.

  10. Testing the function


  11. Before leaving the Metadata Scripting node, select Save and start your workflow.
  12. The workflow you created allows you to enter a string via the Index Form at scan time. Open the MFP Simulator and enter any text in the field you created (step 3b of the initial setup). Then input the file you would like to test with.
  13. After the file processes through the workflow, access the folder your output is pointing to. Here you will find two files: the original document and an XML file. Open the XML file and you will see how the Metadata Scripting Node used the str_length script to measure the length of the string entered on the Index Form. We then use the Metadata to File Node to capture the metadata to a file so that we can review how the script worked. Let's review the workflow log:


    View of workflow log


    As you can see, the Metadata Scripting Node reads the string from the Index Form and then creates a new metadata tag with the value of 38.
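
As mentioned in step 6, here is a rough plain-Lua sketch of the same function with a couple of test calls. The print statements are only for illustration outside of Dispatcher Phoenix; inside the workflow, the Metadata Scripting Node passes the Index Form string in and stores the returned value as metadata:

    -- Returns the number of characters in the string 'str'
    function str_length(str)
        return string.len(str)
    end

    print(str_length("This is a test"))                          -- prints 14 (spaces are counted)
    print(str_length("This sample string has 38 characters!!"))  -- prints 38; any 38-character entry produces the value seen in the workflow log

(One caveat: Lua's string.len counts bytes, which matches the character count for plain ASCII text.)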


Conclusion

We have walked through a very simple example of using the Metadata Scripting Node to show how easy it is to create a script function that performs tasks that are not possible with other nodes. In addition, no real programming experience is necessary, since we created the new script from an existing one. Attached to this Tech Tip is the sample workflow created for it, so you can review the workflow and its nodes and then experiment on your own.


In Part 3 of this Tech Tip, we will conclude our discussion of the Metadata Scripting Node by creating a slightly more complex script with more than one function. We will also talk about troubleshooting scripts, using more than a single variable, and how to deal with Page Level metadata variables. Again, if you have any questions, please let us know at sec@kmbs.konicaminolta.us.


March 2019

Let Metadata Scripting Help. Overview of the Dispatcher Phoenix Metadata Scripting Node (Part 1)

The Metadata Scripting node is one feature that seems to intimidate users of Dispatcher Phoenix. When our engineers are asked if Dispatcher Phoenix can perform a specific, complex operation for a customer, we often respond, "Yes, that can be done with the Metadata Scripting Node." And the reaction we get is "That is too difficult!" Or, "I can't do that. I'm not a programmer."


But the truth is, this node does not require a lot of advanced programming expertise. In fact, although some programming experience is helpful, it is possible to create scripts using the Metadata Scripting node with very little programming skill.


What Can the Metadata Scripting Node Do?

Here are some examples of what can be done with Metadata Scripting:


  1. Convert Page Level Variables to Document Level Variables. This makes access to variables easier and less error prone.
  2. Manipulate metadata values (e.g., modify metadata variable values), along with splitting or merging metadata values. If a file has many variables (name, date, number of copies), these variables can be easily modified so that specific information can be extracted. On the flip side, variables can be merged for files that include a lot of information.
  3. Create new metadata from existing metadata values. This allows the user to make decisions based on a metadata value. For example, if the user wants to change the format of “Yes/No” to “True/False,” this feature can do it easily (see the sketch after this list).
  4. Count pages and count documents. This allows the user to automatically determine how many pages and/or documents there are after a print job.
  5. Reformat metadata values for other nodes in the workflow. The MFP cannot distinguish numerals that are written out (one, two, three) from those written in a number format (1, 2, 3). Reformatting converts one value into another so that the MFP can recognize it.
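
As a rough illustration of items 3 and 5, here is a minimal Lua sketch of the kind of function you might add in the Metadata Scripting Node to convert a “Yes/No” metadata value into “True/False”. The function name and values are only examples, not a built-in function:

    -- yes_no_to_bool: converts a "Yes"/"No" metadata value into "True"/"False"
    function yes_no_to_bool(value)
        local v = string.lower(value)   -- make the comparison case-insensitive
        if v == "yes" then
            return "True"
        elseif v == "no" then
            return "False"
        end
        return value                    -- leave any other value unchanged
    end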


If there is a need to edit, update or create metadata values in a workflow, that is a perfect job for the Metadata Scripting Node.


Where Can I Get More Information?

Dispatcher Phoenix uses a modified Lua Scripting engine to support scripts created in the Metadata Scripting Node. For more information about Lua scripting, you can:

  • Go to the Lua scripting website at http://www.lua.org. This website includes documentation and examples of Lua scripts.
  • Take Lua tutorials at the following website, which is specifically designed for beginners: https://www.tutorialspoint.com/lua/
  • You can also get help from SEC's International Service and Support (ISS) group by emailing sec@kmbs.konicaminolta.us.

Just remember that Dispatcher Phoenix does not support all of the features of Lua, such as File and System functions.


In Part 2 of this Metadata Scripting Node series we will take a closer look at how you create Metadata Scripting functions and scripts.


January 2019

Recommended Scan Settings for Best Barcode Recognition

Dispatcher Phoenix offers powerful barcode processing features for both standard and 2D barcodes. With an automated workflow, files can be automatically split, renamed, annotated, routed, indexed and more based on the barcode that is detected. There may be occasions, however, when the barcode on a document is hard to read. In this case, you should follow best scanning practices to increase the accuracy of the barcode recognition.


Recommended Color Modes

When scanning your document, you can choose from three color modes: black and white, grayscale, or color. For best barcode recognition, we recommend scanning the document in black and white. If a document is scanned in full color or grayscale mode, the MFP will try to match the scanned color by softening the edges of straight lines, making the barcode more difficult to detect. Scanning in black and white results in sharp edges and clear bars in the barcode.


Recommended Resolution

If you must scan in full color or grayscale, choose a higher resolution, such as 300x300 or 400x400 dpi. Although higher resolutions result in larger image files and longer processing times, barcode recognition will improve.



Barcode Processing Workflows

With Dispatcher Phoenix's Barcode Processing, files can be automatically renamed, indexed, annotated, split, routed, and more. Visit our sample workflow library and search for "barcode" to download and start using a sample Barcode Processing workflow today.



Note: Barcode Processing is included with Healthcare, Finance, Government and ECM editions. It is available as an option for all other editions of Dispatcher Phoenix.

October 2018

Quick Way to Test Your "Scan to Email" Workflow

Scanning business documents, such as contracts and proposals, and then emailing them as attachments helps reduce paper and mailing costs. And with Dispatcher Phoenix, you can easily create a powerful workflow to scan, process, index, and email documents to specific email recipients. But what if you want to test your "Scan to Email" workflow without having to connect to an email server? With Dispatcher Phoenix's SMTP In node, this is easy to configure!


A typical Dispatcher Phoenix "Scan to Email" workflow uses the SMTP Out node, which requires specific connection information to be specified, such as the IP address for the outgoing SMTP email server, the Port used by the server for SMTP communication, the username and password of the email server account, and more. And if the SMTP Out node is not configured properly, errors will occur when the workflow runs.


Using SMTP In To Act As Email Server

However, Dispatcher Phoenix also has an SMTP In node that can be configured to act like an email server. The SMTP In node will accept email from your "Scan to Email" workflow. To begin, create a simple workflow with an SMTP In node and an Output node. See the following illustration for an example of a simple SMTP In workflow that you could create:

In this workflow, the SMTP In node would be configured with the Local Address 127.0.0.1. See the following illustration for an example:


Note that this SMTP In workflow must be running when you set up the SMTP Out workflow next.

Setting Up Scan to Email

Now, with the SMTP In node set up to act as an email server, you can create a Scan to Email workflow in which the SMTP Out node sends emails to the SMTP In node in the previous workflow. See the following illustration for an example:

In this particular workflow, an Index Form is set up to prompt the MFP user to enter the email address to send the scanned document(s) to. When the workflow is run, the scanned document is converted to PDF and then sent out as an email attachment...all without connecting to an email server!

 

September 2017

Sending Dispatcher Phoenix Feedback?

Here's how to include a list of running processes on your system

There are several resources available to help you identify or resolve unexpected behavior when using Dispatcher Phoenix. One of them is the Windows TaskList command, which lists the processes running on your system and allows you to output the list to a text file. This is ideal in situations when you are unable to open the Windows Task Manager or you want to print out the list of processes. The text file that is created can then be attached to Dispatcher Phoenix Customer Feedback to provide additional information about your system.

 

Here are the steps to take to create a detailed report of the processes on your system:

  • Open an Administrator Command Prompt by right-clicking the cmd.exe file and selecting “Run as administrator.”
  • Change the directory to the Desktop Folder.
  • Enter the following commands one at a time at the Command Prompt. Allow each command to complete before running the next one. Each command will add more details to the tasklist.txt file.
    1. tasklist /apps /fo csv >> tasklist.txt   (Microsoft Store apps and their processes)
    2. tasklist /svc /fo csv >> tasklist.txt    (services hosted in each process)
    3. tasklist /m /fo csv >> tasklist.txt      (modules/DLLs loaded by each process)
    4. tasklist /v /fo csv >> tasklist.txt      (verbose process details)
  • Open Customer Feedback and click the “Options” button (on the lower left side of the window).
  • Enable all of the additional information options and click the “OK” button.
  • Enter “Additional log and task list information capture” into the Description field of the Customer Feedback.
  • Drag and drop the “tasklist.txt” file from the Desktop into the Customer Feedback files area. Include an exported copy of the workflow and any sample files as well.
  • Save the Customer Feedback to the Desktop; then, attach this file to a Support Ticket for further review.

 

July 2017

Tips For bEST Server/MFP Connections

If you receive an error on the MFP panel about failing to connect to the workflow, please try the following steps to resolve the issue:

 

 

  1. Confirm the bEST Server settings on the Defaults window (accessible from the MFP Registration Tool). The bEST Server IP Address must match the IP Address of the PC running Dispatcher Phoenix; if it does not, the MFP will not be able to connect. See the following illustration for an example of the Defaults window:

  2. Check your firewall and anti-virus settings. From the MFP's web browser, enter the IP Address of the bEST Server and port 50808; you should get an "Access Denied" message. If you do not, the MFP is being blocked from connecting.

    URL Example: http://11.22.33.44:50808/
    (where 11.22.33.44 is the IP Address of the PC running the bEST Server)

  3. Check that the MFP is not using a SHA-1 SSL Certificate.

 


Active Directory Domain User Privileges

When setting up a domain user version of conopsd, the conopsd domain user must have specific privileges on the PC that will be running Dispatcher Phoenix in order to work correctly. After creating the domain user conopsd, make it an administrator of the PC running Dispatcher Phoenix, and then go into Local Security Policy (Local Policies => User Rights Assignment) to assign these privileges manually. This is most often required for Worldox GX3 and GX4 in an Active Directory environment.

The privileges are as follows:

 

  • Access this computer from the network
  • Act as part of the operating system
  • Adjust memory quotas for a process
  • Back up files and directories
  • Bypass traverse checking
  • Create a token object
  • Debug programs
  • Enable computer and user accounts to be trusted for delegation
  • Impersonate a client after authentication
  • Log on as a service
  • Replace a process level token
  • Restore files and directories

 

Be aware that, depending on how your Active Directory Domain is configured and on the Domain Group Policy settings, these privileges may not stay set. Your Domain Administrator should configure them for the Domain User conopsd so that the user profile retains these settings.

June 2017

How To Add An Icon To Your MFP Workflow

Have you ever wanted to improve the look of the workflows that are run at the MFP? Does the displayed workflow look bad or simply fail to convey the message you want? This Tech Tip will show you how to use any image you want as the workflow image displayed at the MFP.

 

 

The first step is to create your workflow with the proper size and dimensions. The image that is displayed within the workflow screen (see image above) is square. Therefore, you want your workflow to be square to best fit that space. For best results with respect to scaling, set your workflow to be 800 pixels square.

 

Under Page Settings in the workflow builder, set the Size as "Custom Size". Then, select "pixels" for one of the dimension units (the other will change automatically to match) and set both Width and Height to "800".

 

Now that your workflow is sized properly, your first thought is probably to start adding nodes. Not yet! Before you add any nodes, find the image that you want displayed next to your workflow on the MFP. It could be anything from a conceptual image to a company logo ... or even something completely unrelated that you just happen to like. Use whatever image editing program you might have to size that image to 800px by 800px.

 

Once you have your 800x800 image, set it as the background for your workflow.

  1. Under Background Image, check the "Enable" box.
  2. Click the "Select Image" button and choose your image.

 

You can see in the image below that we have sized our workflow and set our image as the background.

 

 

Now we need a place to build the workflow. Rather than clutter the nice, new image, let's add a second page. In the top menu, select Insert > New Page (or press CTRL+Ins) to add a second page.

 

The new page has the same background as the first, which might not display the workflow very well. To fix this, place a square shape over the entire page; white works well as a surface to build on. If you want the background image to show through slightly, lower the shape's Opacity (transparency) setting until you achieve the desired effect.

 

 

Now you are ready to build your workflow! Just add, connect and configure your nodes on page two. The resulting workflow will have your custom image displayed with it on the MFP.

 

 


Relocating Temporary Files

In many server environments, a user may configure the primary Windows drive with only enough space to run Windows, or may be using a Solid State Drive. In Dispatcher Phoenix workflow configurations where the workflow uses the OCR Engine to process files (e.g., Advanced OCR, Forms Processing, Convert to PDF), the workflow can create a large number of temporary files. Furthermore, these temporary files can be as large as three times the size of the file being processed.

 

All of these temporary files can quickly use up a lot of hard drive space, resulting in issues with the Dispatcher Phoenix threshold monitor. The threshold monitor prevents Dispatcher Phoenix workflows from consuming all of the available hard drive space and RAM, and will stop running workflows when the threshold limits are reached.

 

In such situations the user may want to move the location of Dispatcher Phoenix temporary files to a different hard drive, thus preventing issues with Windows and improving overall performance.

 

The Dispatcher Phoenix Workflow Services (erl, conopsd, xmpp_cluster) use a configuration variable in the “config.ini” file, which is located in the following folder:

 

%programdata%\Konica Minolta\blox

 

The variable is:

 

[blox]
data = %ALLUSERSPROFILE%\\Konica Minolta\\conopsd\\var

 

Setting data to another location and then restarting the Workflow Services using the Workflow Services Manager changes the location where most, but not all, temporary files are written.
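
For example, assuming you have created a folder named D:\DP-Temp\var on a secondary local drive (the folder name here is only an example), the [blox] section of config.ini would look like this:

[blox]
data = D:\\DP-Temp\\var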

 

Note: data should never be set to a network share or a non-permanent location (e.g., a USB drive). Use only a local drive other than the Windows System Drive (e.g., Drive D:).

 

The new folder location must already exist before you change the config.ini file and restart the services.

 

Set the folder permissions to “Everyone” with “Full Control” to make sure the folder structure can be accessed. Failure to do so will result in “Access Denied” errors in the workflow log, and the workflow will fail. If you do not want to grant access to “Everyone”, then as an alternative grant “Full Control” to the .\conopsd user ID and to the Windows user profile that is used to create the Dispatcher Phoenix workflows.
