Slow PSQL Relational Engine Performance

Release Date: 12/15/2017

Many users encounter situations where a particular relational engine SQL query can be slow.  This can be complicated to diagnose for many reasons. Sometimes it's because of a SQL SELECT statement that's not optimized.  On occasion, we receive a general question on how to write a SELECT statement that's optimized for performance.  Unfortunately, that is really a million-dollar question.  It is like asking how to write a program without bugs. We can only help you optimize a SELECT statement on a case-by-case basis.

In this article, we will give you some commonly seen scenarios that can cause SQL query performance problems.  It is far from a comprehensive list. We just hope that you can draw lessons from the scenarios we provide to help you improve your relational engine performance.

Un-Optimized SQL SELECT Statement

There can be many reasons for un-optimized SQL SELECT statements. Sometimes, to do it right, you need to be a DBA (Database Administrator) who understands the Elliott databases really well. Therefore, do not be frustrated if you can't get this right on your own. Netcellent is here to help if you run into a slow-running SQL query.  The following are some examples of why a SELECT statement can be slow.

Not Using an Index, or Not Using It Correctly

When you retrieve data from a table, you should use an index whenever possible. For example, the following SELECT statement will not yield good results:
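A statement of this shape illustrates the problem (the customer name value here is hypothetical):

```sql
-- Filters on CUS_CORR_NAME, which is not part of any index on ARCUSFIL,
-- so the engine must scan the whole table
SELECT * FROM ARCUSFIL
 WHERE CUS_CORR_NAME = 'Morrison Supply'
```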
The reason is that CUS_CORR_NAME is not part of any index on ARCUSFIL (A/R Customer File). To find the indexes of a table, you can use the PCC (Pervasive Control Center): right-click on the table, choose Properties, then go to the Indexes tab. The following is the index list for ARCUSFIL:

Therefore, you can improve performance by changing the above SELECT statement to:
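For example, searching on the indexed CUS_NAME column instead (the customer name value is again hypothetical):

```sql
-- CUS_NAME is an indexed column, so PSQL can seek directly to the records
SELECT * FROM ARCUSFIL
 WHERE CUS_NAME = 'Morrison Supply'
```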
One more trick you should be aware of: the name saved in CUS_NAME is case-sensitive.  Therefore, if the customer name saved in CUS_NAME is "Morrison...", the above SELECT statement won't match unless the search value uses exactly the same case.  The good news is that the values saved in CUS_SEARCH_NAME are all converted to upper case; therefore, the best strategy in this case is to use the following SELECT statement:
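For example (the search value is hypothetical):

```sql
-- CUS_SEARCH_NAME values are stored in upper case, so search in upper case;
-- a begin-with LIKE pattern still allows PSQL to use the index
SELECT * FROM ARCUSFIL
 WHERE CUS_SEARCH_NAME LIKE 'MORRISON%'
```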
While PSQL can take advantage of an index when searching with begin-with matching, it will not be able to use the index for a contains search like:
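For example:

```sql
-- A leading wildcard defeats the index and forces a full table scan
SELECT * FROM ARCUSFIL
 WHERE CUS_SEARCH_NAME LIKE '%MORRISON%'
```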

Keep in mind that some of these performance problems due to a lack of an index can be mitigated significantly by giving the server more memory so the PSQL engine can cache the entire database in memory.  As long as the table is cached in memory, the above operation can still finish reasonably quickly even if there is no index to operate on.

Be Aware of Combo Keys
Many indexes in Elliott are combo keys that involve multiple columns. If you specify a value for the second column of such an index, but not the first column, then PSQL will not be able to utilize the index. The following is an example of the indexes of CPINVLIN (COP Invoice Line Item File):

As you can see, the invoice date column, INV_ITM_INV_DATE, is part of index KEY01, but it is the second segment of that index.  If we use the following SELECT statement, it will not yield good performance:
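For example:

```sql
-- INV_ITM_INV_DATE is only the second segment of KEY01,
-- so without the first segment the index cannot be used
SELECT * FROM CPINVLIN
 WHERE INV_ITM_INV_DATE = 20171201
```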
This will cause the PSQL relational engine to scan through the entire CPINVLIN table in order to return the result.  If CPINVLIN is not currently cached by the PSQL engine, this can be a very slow operation. If it is cached in server memory, the performance will probably be acceptable, but still not the best, because PSQL can't use KEY01 to locate the necessary records.

If you want to take advantage of the index, you will need to specify a value for the first segment of the index.  Therefore, if you have the item information, you can use the following SELECT statement:
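A sketch of such a statement follows; the item number column name and value are assumptions for illustration:

```sql
-- Supplying the first segment of KEY01 (the item number) lets PSQL use the index
SELECT * FROM CPINVLIN
 WHERE INV_ITM_ITEM_NO = 'ITEM-100'
   AND INV_ITM_INV_DATE = 20171201
```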
This SELECT statement will perform very well because PSQL can use KEY01 to locate the records.  But this may not be what you want.

Let's say you know that the invoice numbers for 20171201 should be greater than 150000.  Then you can use the following SELECT statement:
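For example:

```sql
-- KEY00 (invoice number) narrows the rows first;
-- the date condition filters the remainder
SELECT * FROM CPINVLIN
 WHERE INV_ITM_INV_NO > 150000
   AND INV_ITM_INV_DATE = 20171201
```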
This will cause the PSQL relational engine to use the KEY00 index first to find any records where INV_ITM_INV_NO is greater than 150000, and then match on the secondary condition INV_ITM_INV_DATE = 20171201. If there are not a lot of records after invoice number 150000, this will be fast.  If there are still a lot of records, you can specify a range by using BETWEEN:
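For example (the upper bound of 160000 is a hypothetical value):

```sql
-- BETWEEN bounds the KEY00 index scan from both ends
SELECT * FROM CPINVLIN
 WHERE INV_ITM_INV_NO BETWEEN 150000 AND 160000
   AND INV_ITM_INV_DATE = 20171201
```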

The Strategy of Using a Join
If the above alternative solution does not work well for you and you just want to see all the invoice line items that were invoiced on 12/01/2017, then you should explore using a different strategy. The following is the index for CPINVHDR (Invoice Header):

As you can see, the KEY03 index is on INV_DATE.  Therefore, if you want to get all the invoice records for a date from CPINVHDR, you can easily do so by using:
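For example:

```sql
-- INV_DATE is indexed (KEY03) on CPINVHDR, so this is a fast index lookup
SELECT * FROM CPINVHDR
 WHERE INV_DATE = 20171201
```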
This one will run very quickly.  But wait: we want the records from CPINVLIN, not CPINVHDR.  Therefore, we need to use a JOIN operation to get records from both CPINVHDR and CPINVLIN.  The following is how we join the two tables together:
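A sketch of the join (the selected columns are illustrative):

```sql
-- CPINVHDR is listed first so its KEY03 index narrows the rows by date;
-- each matching header then joins to its line items by invoice number
SELECT CPINVLIN.*
  FROM CPINVHDR, CPINVLIN
 WHERE INV_DATE = 20171201
   AND INV_ITM_INV_NO = INV_NO
```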
Note that we list the two tables as "CPINVHDR, CPINVLIN," not "CPINVLIN, CPINVHDR."  This is because we want PSQL to use the index of CPINVHDR first to narrow down the records with INV_DATE = 20171201.  Then, based on that small subset of records, we join to CPINVLIN using the condition "INV_ITM_INV_NO = INV_NO."

Note that we prefer to inner join the two tables through the WHERE clause, instead of using the INNER JOIN ... ON syntax.  You should put the table that can quickly yield a small subset of records on the left, then join to the other table on the right.

The following is an example of a SELECT statement that accomplishes the same thing as the previous one, but its performance will not be as good:
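A sketch of the slower form:

```sql
-- The INNER JOIN ... ON form joins the two large tables in full
-- before the date condition takes effect
SELECT CPINVLIN.*
  FROM CPINVLIN INNER JOIN CPINVHDR
    ON INV_ITM_INV_NO = INV_NO
 WHERE INV_DATE = 20171201
```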
The reason for the slowdown is that the system performs the inner join from CPINVLIN to CPINVHDR in its entirety first. Since both of these tables tend to be big, this takes time even though the system performs the join using an index.  It can be especially slow if the two tables are not yet cached in memory.

Using Views
Note that there is a view, CPINVLIN_VIEW, which joins CPINVLIN with the CPINVHDR table.  When accessing large tables, our recommendation is that performing the join yourself, instead of relying on the view, will yield better results.  This is because you can optimize the join when you write it yourself, which may not be the case when you use a view.  To find out how we join tables to create the predefined views, please look for the following files:
  • V7.x - <ElliottRoot>\DDF40\Eli7View.SQL
  • V8.x - <ElliottRoot>\Bin\DDF40\Eli8View.SQL
The following is an article on how a user was trying to retrieve Notes and Invoice header data through NOTE_INV_VIEW and could not get good performance.  The problem was resolved by doing the join manually:


Damaged DDF

It has come to our attention that in one case a user had poor PSQL relational engine performance because the DDF had been altered (e.g., the index portion of the DDF).  The problem was confirmed when a new database was created using the standard Elliott DDF and the same query was run against it: the query ran significantly faster on the new database than on the damaged one.  To create a new database, see the following article:

Insufficient Memory

It would be best if your server has enough memory to cache your entire database.  To figure out your Elliott database size, look at the total size of all the *.BTR files in your <ElliottRoot>\DATA folder.  You need to count all companies' database sizes if you are actively using multiple companies. Let's say your data files total 20GB and your server has 32GB of memory; then you have enough memory to cache all your data. In that case, your PSQL server can perform reasonably well even when you filter a query on a non-indexed column.  On the other hand, if you don't have enough server memory to cache your entire Elliott database, then filtering a query on a non-indexed column can perform poorly.

Level 1 vs. Level 2 Cache Memory

By default, PSQL will use 20% of the server memory for the Level 1 Cache, and up to 60% for the Level 2 Cache.  The Level 1 Cache is non-compressed and the Level 2 Cache is compressed.  While the Level 2 Cache allows you to fit more data in the cache, because it is compressed, it is slower.  If you have enough server memory, we suggest that you set 80% of your server memory for the Level 1 Cache and 0% for the Level 2 Cache.  See the following Knowledge Base article for more details:

Insufficient CPU

Some Elliott users run on virtual servers.  Some IT departments like to start a virtual server with a low number of CPUs, with the thinking that, in a virtualization environment, more can easily be added if that is not enough.  In that case, you should monitor your PSQL server performance and watch the CPU utilization rate.  If your CPU utilization rate is constantly over 50%, you should consider adding more CPUs to handle load fluctuation better.  As an example, for a user with 150 Elliott user licenses that also has websites constantly accessing Elliott data, as well as running Crystal Reports, we have noticed that 4 CPUs is somewhat on the low side for the PSQL server.  Six to eight CPUs would be more appropriate in that case.

