Wrox Programmer Forums
October 13th, 2003, 02:19 PM
Authorized User

Join Date: Sep 2003
Location: Canada.
Posts: 52
Thanks: 0
Thanked 0 Times in 0 Posts
Major performance loss on split DB

Hi all,

I have been working on a DB for use by 2-10 people in our office. After developing a prototype that seemed to cover all our preliminary needs, I used the DB splitter so we could share a common back end, each with our own front-end client. The back end is hosted in a shared directory on a computer that is seldom used, but it is not specifically a DB server.

The one-piece version was extremely fast and worked great, yet the split DB runs so slowly it actually cannot be used. It can take a minute to load a seemingly simple form that used to load instantaneously. Is Access unable to perform in a multi-user environment?

No book seems to indicate that there will be an intense performance loss when the DB is split. There is quite a bit of VB code in the DB: about 18 tables, 20 forms, and 10 reports. Some tables that rarely change were moved to reside locally on the front end in an effort to increase performance; it didn't help much.

The loss occurs immediately, even when only one person is logged on. We have never tried with more than two.

-Roni

Roni Estein
Roni.Estein@e-drugscanada.com
https://www.e-drugsCanada.com
 
October 13th, 2003, 02:39 PM
sal
Friend of Wrox

Join Date: Oct 2003
Location: Clarksville, TN, USA.
Posts: 702
Thanks: 0
Thanked 0 Times in 0 Posts

Unfortunately, that is the way that Access behaves in a networked environment. There are a few things that you can do to improve performance, but it will not improve much.
Remember how Jet works: it brings all of the data in a table across the network when you query that table, even if you only want one record back.
First, make sure that the back end is compacted and repaired. This will make it a smaller file. Do the same for the front end.
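Something like this (just a sketch - the paths are made up, and every user must be out of the back end while it runs) can do the compact from code:

Code:
' Sketch only - made-up paths; all users must have the back end closed.
Sub CompactBackEnd()
    Dim strSrc As String, strTmp As String
    strSrc = "\\server\share\backend.mdb"       ' your back-end file
    strTmp = "\\server\share\backend_tmp.mdb"   ' temporary target
    DBEngine.CompactDatabase strSrc, strTmp     ' writes a compacted copy
    Kill strSrc                                 ' remove the bloated original
    Name strTmp As strSrc                       ' put the compacted copy in its place
End Sub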
Compile your databases; this will help make the file even smaller.
Your network has a lot to do with this: if there is not sufficient bandwidth, performance will degrade as well.
Make sure that the computer housing this database is in top shape (memory, defragmented hard disk, all that).
Most important: if you are running queries with multiple joined tables, make sure that you create summary tables that update daily. Remember, Access brings all the data for each table to the client; with multiple clients, your network dies.
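For instance, a nightly routine along these lines (a sketch only - the table and field names are invented) rebuilds the summary table so the join and grouping run once a day instead of every time a form or report opens:

Code:
' Sketch only - tblOrders, tblOrderDetails and tblOrderSummary are invented names.
Sub RebuildOrderSummary()
    Dim db As DAO.Database
    Set db = CurrentDb
    On Error Resume Next
    db.Execute "DROP TABLE tblOrderSummary"   ' ignore the error if it does not exist yet
    On Error GoTo 0
    ' Make-table query: the join and aggregation happen once, up front
    db.Execute "SELECT o.OrderDate, Sum(d.Quantity * d.UnitPrice) AS TotalSales " & _
               "INTO tblOrderSummary " & _
               "FROM tblOrders AS o INNER JOIN tblOrderDetails AS d " & _
               "ON o.OrderID = d.OrderID " & _
               "GROUP BY o.OrderDate", dbFailOnError
End Sub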

Look into upgrading your database to SQL Server, or at least MSDE; the latter is free.

For your reports especially, you can use summary tables that pre-aggregate the data so the expensive joins are done ahead of time.
Be careful, because if one of these databases crashes on a client you may lose your data.

Your database may be better off on a server if you have one.
Does your company have SQL Server?




Sal
 
October 13th, 2003, 03:07 PM
Authorized User

Join Date: Sep 2003
Location: Canada.
Posts: 52
Thanks: 0
Thanked 0 Times in 0 Posts

Hi Sal,

Thanks for the advice; I wish I had known that at the beginning.
We have two different versions of SQL Server. We have the free SQL Server 2000 that can be used for development. We also have a version of SQL Server which comes with Microsoft Small Business Server; I would assume that it's the regular SQL Server 2000. Neither of them is currently set up. We do have the server that is running Small Business Server for all our mail and fax services.

Forgive my ignorance, but what is a summary table?
Would it be a temporary table created with only the fields that I require?
Would I also want to be using summary tables in SQL Server 2000?

Thanks again for the help
-Roni


Roni Estein
Roni.Estein@e-drugscanada.com
https://www.e-drugsCanada.com
 
October 13th, 2003, 03:19 PM
Friend of Wrox

Join Date: Jun 2003
Location: Oxford, United Kingdom.
Posts: 120
Thanks: 0
Thanked 0 Times in 0 Posts

Roni

Whilst everything Sal says is true, the picture isn't quite as bad as he paints it.

Personally, I've designed and managed a lottery management database for a major UK charity. This was a pure Access database, contained 250,000 name & address records and was used concurrently by 5-10 people for about 6 weeks a year. It was a standard split database with the data held on a shared drive over an old 10 Mbit/sec network. It was never the fastest of beasts, but it was usable.

My first suspicion would be your network - is it generally slow? I'd suggest testing the speed by transferring a few hefty files and timing how long they take.
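Something along these lines (a rough sketch - the paths are placeholders for your own machine and share) will give you a crude number in the Immediate window:

Code:
' Rough timing sketch - placeholder paths; substitute your own share.
Sub TimeNetworkCopy()
    Dim sngStart As Single
    sngStart = Timer
    FileCopy "C:\Temp\testfile.dat", "\\server\share\testfile.dat"   ' push to the server
    FileCopy "\\server\share\testfile.dat", "C:\Temp\testcopy.dat"   ' pull it back
    Debug.Print "Round trip took " & Format(Timer - sngStart, "0.0") & " seconds"
End Sub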


Brian Skelton
Braxis Computer Services Ltd.
 
October 13th, 2003, 03:48 PM
Authorized User

Join Date: Sep 2003
Location: Canada.
Posts: 52
Thanks: 0
Thanked 0 Times in 0 Posts

Hi Brian, Sal and all

The network seems fine as far as networks go. I'm no guru, but here is the test I've repeated a few times just now: I passed a file of 721,627 bytes across the network 3 times, from my machine to the serving computer and back again. The average time was about 65 seconds, so 721,627 / 65 ≈ 11,102 bytes per second.

I am using a lot of joins in queries and updates, and that may be slowing everything down when forms and reports are loading. But I can't think of how to construct our system without them.

I'll take all the advice I can get, or read anything that is deemed worth reading. I've gone through several Access books and none of them seem to touch this topic.

The largest table we have so far is only about 4,000 records with 15 or so fields, so it shouldn't be that much data. The whole back end of the DB is only about 4 MB.

I have both the front and back ends set to compact and repair automatically every time they are closed, and I make sure that happens at least once a day.

If I can provide any other information to clarify what might be causing this, or elaborate in any way, please let me know.

Thanks for your time.

- Roni


Roni Estein
Roni.Estein@e-drugscanada.com
https://www.e-drugsCanada.com
 
October 13th, 2003, 05:16 PM
Authorized User

Join Date: Jun 2003
Location: FL, USA.
Posts: 91
Thanks: 0
Thanked 0 Times in 0 Posts

I have another bit of advice that may help you. I also created a DB to use in a multi-user environment (currently about 100 users) and noticed some performance issues like you explained. With a split DB I have the BE and FE on a common shared server (both in different folders). All users use the same FE via a shortcut that I had IT put on each user's desktop at login.

What I did to resolve my issue was to ensure that each table had a PK, with a FK in all applicable tables. I removed all relationships and created joins only when necessary. You may run into sharing issues if you try the same setup, so I have some code that sets the FE to shared every time it opens (even though this can be set in Options - I like to be certain). I would be happy to share that with you (no pun intended).

I have to agree with Brian that this very well may be your network, regardless of the test that you ran; a network is a dynamic environment, and I suggest running your test when the DB in question is really running slow. Hope this helps, or at least gives you another avenue to investigate.
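The shared-mode code itself is nothing fancy; stripped down, it looks something like this (the option name below is from memory, so verify it against your version of Access):

Code:
' Sketch of forcing shared mode at startup; 0 = Shared, 1 = Exclusive.
' The option string may vary between Access versions - check yours.
Sub ForceSharedMode()
    Application.SetOption "Default Open Mode for Databases", 0
End Sub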

Kenny Alligood
 
October 13th, 2003, 06:37 PM
Authorized User

Join Date: Sep 2003
Location: Canada.
Posts: 52
Thanks: 0
Thanked 0 Times in 0 Posts

Kenny,

I am amazed that you could accomplish such a feat with Access. Some friends have been telling me what a putz I was for even trying to use Access since this issue started. Forgive my ignorance, but what are the PK and FK for exactly? Would they act like a semaphore around the table? I ask because I thought that was built into the DB environment.

I will definitely try to remove all set relations and see if there is a performance gain.

I have been trying to analyze what other issues could be affecting the DB. There are at least two places where I am switching a recordset with each keystroke, and the recordset being switched joins more than one table, plus a few VB functions for formatting.

However, functions like this exist in many of today's applications, which all seem to revolve around a DB in one sense or another.

I shudder to think of rewriting everything in SQL Server only to find out that it won't make a difference, or that it's just a bad DB setup.

Thanks for your advice.
-Roni


Roni Estein
Roni.Estein@e-drugscanada.com
https://www.e-drugsCanada.com
 
October 13th, 2003, 07:50 PM
Friend of Wrox

Join Date: Jun 2003
Location: USA.
Posts: 1,093
Thanks: 1
Thanked 12 Times in 11 Posts

Quote:
Forgive my ignorance but what are the PK and FK for exactly? ...I will definitely try to remove all set relations and see if there is a performance gain.
STOP!!!! Pardon the interruption, but I smell disaster brewing here. I would strongly recommend against removing any referential integrity you may have established in your db without a solid understanding of referential integrity, which would include, at a minimum, knowing what primary keys and foreign keys are for. Using a database design that lacks defined relationships between tables is legal, though it's a pretty drastic step in any "normalize 'till it hurts, denormalize 'till it works" strategy. You can use joins to do some of the work of relationships, but not all of it. If you decide to go without defined referential integrity, be sure that your user interface is sufficiently bulletproofed to perform all of your data validation tasks. You'll lose any of the data validation functionality that referential integrity can provide, and all sorts of bad things can happen if your interface doesn't maintain strict control over what the user can and can't enter into your db.

Regards,

Bob


 
October 13th, 2003, 09:59 PM
Authorized User

Join Date: Jun 2003
Location: FL, USA.
Posts: 91
Thanks: 0
Thanked 0 Times in 0 Posts

Bob has a point - and I would like to apologize for getting him so excited. I must admit, though, that what I proposed does indeed go against the 'norm' for a DB. I suppose I should have been a bit more descriptive in my last post, but PK & FK are primary and foreign keys respectively. In the most basic explanation, a PK defines a record and is unique; a FK is an anchor in another table that relates a record to the PK in the first table. Please read up on PKs & FKs prior to enacting what I previously posted.

As Bob states, there are many advantages to referential integrity, but none that can't be overcome in code (correct me, Bob, if I am mistaken). In my application I control every aspect, from opening the DB, through adding, modifying, and deleting data, to closing the application; nothing happens without validation and solid error checking - not to mention a great deal of forethought. But the result of my toil is an application that works exceptionally well in a multi-user environment.
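Purely as an illustration (the table names below are invented), this is how a PK and FK can be declared in Jet DDL. Note that the FOREIGN KEY constraint is exactly what creates the enforced relationship, so under my earlier suggestion you would keep the PKs, skip the constraint, and simply index the FK field instead:

Code:
' Illustration only - tblCustomers and tblOrders are invented names.
Sub CreateKeyedTables()
    Dim db As DAO.Database
    Set db = CurrentDb
    ' PK: uniquely identifies each customer record
    db.Execute "CREATE TABLE tblCustomers (" & _
               "CustomerID LONG NOT NULL CONSTRAINT pkCust PRIMARY KEY, " & _
               "CustomerName TEXT(50))"
    ' FK: anchors each order to a customer; this CONSTRAINT is what
    ' creates the enforced relationship
    db.Execute "CREATE TABLE tblOrders (" & _
               "OrderID LONG NOT NULL CONSTRAINT pkOrd PRIMARY KEY, " & _
               "CustomerID LONG NOT NULL, " & _
               "CONSTRAINT fkOrdCust FOREIGN KEY (CustomerID) " & _
               "REFERENCES tblCustomers (CustomerID))"
End Sub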

Kenny Alligood
 
October 13th, 2003, 11:59 PM
Friend of Wrox

Join Date: Jun 2003
Location: USA.
Posts: 1,093
Thanks: 1
Thanked 12 Times in 11 Posts

No problem, Kenny. I was just really enjoying watching the Yankees lose, and I guess my enthusiasm carried over.

Quote:
there are many advantages to referential integrity but none that can't be overcome in code
Yup. Saved relationships are always optional (while highly recommended under normal circumstances). Relational joins established using SQL WHERE clauses - combined with transactions - can do everything "enforced" (saved) referential integrity can do. In fact, versions of MySQL prior to 3.23.44 didn't even support foreign keys, so extensive coding was required.
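As a sketch of what that looks like in practice (the table names are invented), the parent-row check an enforced foreign key would otherwise do for you can be performed in code, inside a transaction:

Code:
' Sketch only - tblCustomers and tblOrders are invented names.
Sub InsertOrderChecked(lngCustomerID As Long)
    Dim ws As DAO.Workspace
    Set ws = DBEngine.Workspaces(0)   ' CurrentDb runs in the default workspace
    ws.BeginTrans
    On Error GoTo RollItBack
    ' The check an FK constraint would otherwise perform
    If DCount("*", "tblCustomers", "CustomerID=" & lngCustomerID) = 0 Then
        Err.Raise vbObjectError + 513, , "No such customer"
    End If
    CurrentDb.Execute "INSERT INTO tblOrders (CustomerID) " & _
                      "VALUES (" & lngCustomerID & ")", dbFailOnError
    ws.CommitTrans
    Exit Sub
RollItBack:
    ws.Rollback
    MsgBox "Insert failed: " & Err.Description
End Sub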

I'd just never really thought much about what's required to maintain saved referential integrity from a performance standpoint. If you enforce referential integrity through conventional means, Jet automatically places an index on the foreign key field, even though this index isn't displayed in the Indexes dialog. But I assume you'd want to place an index on fields involved in a join anyway, so the extra index maintenance alone couldn't be where the extra overhead comes in. I guess the performance hit comes from the need to scan these indexes repeatedly, along with the Relations object Jet creates, in order to flag attempts to violate referential integrity as errors. Remove this error-checking mechanism by removing foreign key constraints, and apparently performance improves pretty dramatically. At least I guess something like that is going on. Interesting idea.

Thanks,

Bob










