Welcome To The Home Of The Visual FoxPro Experts  
From: Rene Relucio (Quezon City)
To: Chris Counts (West Virginia - United States)
Subject: Segmenting the file with RushMore
Thread ID: 22205 Message ID: 22241 # Views: 48 # Ratings: 0
Version: Visual FoxPro 6 Category: Databases, Tables and SQL Server
Date: Saturday, April 12, 2003 2:04:20 PM         

> I'm sorry to bother you guys again, but I have a problem that I need to fix, and I don't know where to start. I wrote a database program using Visual FoxPro 6.0 that runs great at about 98 percent of our clients' establishments. However, we have one client with a significant amount of data, including some tables with close to a million records. The performance of this client's program is drastically worse than our other clients'. Is there anything that I can try that will increase the speed of the program? If it helps, our client is using an NT 4.0 server with 15 Windows 98 workstations. I really appreciate any help that you guys can offer.
> Thanks,
> Chris Counts

VFP boasts of its speed in returning rows with SQL statements; it even displays the number of seconds it takes to return the rows.
I have observed that it returns rows fast when the WHERE clause (and the LEFT, RIGHT, or INNER JOINs) uses a field that happens to be an index key.
Surely, that client does not need a million rows at one time.

Segmenting the file (without chopping the file itself into many physical tables) is one solution I know.

You retrieve a segment of rows, then collect the rows you need out of that segment.

For instance, take a Customer File with a million customers.
If you segment this file by the letters of the alphabet using the customer's name (LastName, FirstName MI),
then you will have 26 segments of roughly 38,000 rows each:
1. Create a field CustName1stLetter, C, 1.
2. Fill it with the first letter of the CustomerName field.
3. INDEX ON CustName1stLetter TAG CustName1stLetter

SELECT * FROM CustomerFile WHERE CustName1stLetter = "R"

It is faster to collect specific rows from the 38,000 rows returned than from a million rows.

This is just an example of course.
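The three steps above can be sketched in VFP like this (a minimal sketch; the table name CustomerFile, the field CustomerName, and the cursor name crsSegment are assumptions from the example, not from a real schema):

```foxpro
* Assumes a table CustomerFile with a character field CustomerName.
* ALTER TABLE and INDEX ON need exclusive use of the table.
USE CustomerFile EXCLUSIVE

* 1. Create the one-character segment field.
ALTER TABLE CustomerFile ADD COLUMN CustName1stLetter C(1)

* 2. Fill it with the first letter of each customer's name.
REPLACE ALL CustName1stLetter WITH UPPER(LEFT(CustomerName, 1))

* 3. Index on it so Rushmore can use the tag.
INDEX ON CustName1stLetter TAG CustName1stLetter

* Retrieve one segment; Rushmore optimizes the WHERE
* because CustName1stLetter is an index key.
SELECT * FROM CustomerFile ;
    WHERE CustName1stLetter = "R" ;
    INTO CURSOR crsSegment
```

Steps 1 and 2 are a one-time setup (plus maintaining the field on inserts and updates); after that every query on a single letter touches only its segment.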

One applicable example:
if I have a Customer File with fields LastName, FirstName, MiddleName, and OrderDate,

I need not create another field to hold the value of rtrim(LastName)+", "+rtrim(FirstName)+" "+MiddleName as CustomerName,
nor create an index with that expression.

This is good enough for me:
INDEX ON LastName TAG LastName

SELECT * FROM CustomerFile WHERE LastName = "SMITH" AND FirstName = "JOEY" ORDER BY OrderDate

Fewer than 38,000 rows will be returned in seconds for a LastName of "SMITH".
The rows with a FirstName of "JOEY" take a few seconds more to collect, and are displayed by OrderDate.

It's a way to segment the file.
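The same example as a runnable sketch (assuming the field names above; the cursor name crsSmithJoey is made up for illustration):

```foxpro
* Assumes CustomerFile has character fields LastName and FirstName
* and a date field OrderDate.
USE CustomerFile EXCLUSIVE

* One tag on LastName is enough; no combined CustomerName field needed.
INDEX ON LastName TAG LastName

* Rushmore optimizes the LastName condition through the tag;
* the FirstName filter and the ORDER BY then work on the
* already-small "SMITH" segment instead of a million rows.
SELECT * FROM CustomerFile ;
    WHERE LastName = "SMITH" AND FirstName = "JOEY" ;
    ORDER BY OrderDate ;
    INTO CURSOR crsSmithJoey
```

The design point is that the existing LastName values already segment the file, so an expression index on the concatenated name would add maintenance cost without speeding up this query.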


