I am producing an MS Access 2003 application with tables linked to MS SQL Server 8.0 Enterprise Edition. The server holds a range of tables populated with extensive amounts of legacy data on a view-only basis. I have gone down the .mdb route as opposed to the .adp route.
Bill of Materials
=========
In this particular instance I have created two normalised, related tables in MS SQL Server from a flat file containing 340,000 records of variable quality and meaning, in order to present a searchable Bill of Materials on an MS Access form. Using the Linked Table Manager, I have linked MS Access 2003 to these tables.
tbl_BOM_Header
Header ID (PK)
Catalogue_Numb
Catalogue_Desc
In an MS Access form - frm_BillOfMaterials - I have represented all 34,000 header records in a listbox (lstbox_BOMHeaders) based on a query, qry_BOMHeaders. The listbox is populated by the result of the query according to what the user types into a text box at the top of the form.
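The filtering works roughly along these lines (the text box name txtSearch is my placeholder; the field names come from tbl_BOM_Header above):

Private Sub txtSearch_Change()
    ' Re-filter the listbox as the user types; .Text holds the in-progress value
    Me.lstbox_BOMHeaders.RowSource = _
        "SELECT [Header ID], Catalogue_Numb, Catalogue_Desc " & _
        "FROM tbl_BOM_Header " & _
        "WHERE Catalogue_Numb LIKE '" & Me.txtSearch.Text & "*' " & _
        "ORDER BY Catalogue_Numb"
End Sub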
I have created a continuous subform for the BOM items and inserted subfrm_BOMHeaderItems into frm_BillOfMaterials. There are 340,000 BOMHeaderItems records (ten per header).
When the user selects a BOM header record from lstbox_BOMHeaders, the OnClick() event executes code to load that header's items into the subform.
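A minimal sketch of what that handler might look like (the original listing wasn't shown; the items table name tbl_BOM_HeaderItems and its [Header ID] link field are my assumptions):

Private Sub lstbox_BOMHeaders_Click()
    ' Repoint the subform at just the selected header's items,
    ' so only ~10 rows are fetched rather than all 340,000
    Me.subfrm_BOMHeaderItems.Form.RecordSource = _
        "SELECT * FROM tbl_BOM_HeaderItems " & _
        "WHERE [Header ID] = " & Me.lstbox_BOMHeaders.Value
End Sub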
Basically my problem (if you haven't guessed it already) is that the retrieval of BOMHeaderItems for each BOM header record is at best slow and at worst crashes my MS Access application.
Please can anyone advise me, based on their experience, of either:
1) a modification to this approach that will yield quicker retrieval and stop my application from crashing.
2) a different approach that will yield quicker retrieval and stop my application from crashing.
I have reviewed a couple of the standard articles such as:
http://support.microsoft.com/?kbid=209091
Any technical advice, guidance or support would be very much appreciated.
I have an Access 2010 database and will be adding about 25,000 records a day to it, so it will get pretty big fast: about 9 million a year. Is this too big for Access? Or is it more an issue of file size? I know that Access 2010 can go up to 2 GB. I am interested in how I can maximize performance and not have the database bog down as it grows. The data is as follows: one table that will really grow; the other tables will stay small.
Id - Autonumber (primary key)
When - Date/Time
Radio - Integer (indexed, duplicates OK)
Group - Integer (indexed, duplicates OK)
Type - Text(4)
TransType - Text(2)
I am considering a purge of old records if needed, e.g. only keeping the most recent 5 million records or some reasonable number. I would then think about numbering my records backwards (using a Long Integer rather than an Autonumber, and numbering the records backwards on import), i.e. record 1 would be the most recent one, and only keeping the last X number of them for performance reasons.
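A sketch of that purge (table and field names are placeholders; 5,000,000 is the keep-count from above), which would also avoid any need to renumber records backwards:

Dim db As DAO.Database
Dim rs As DAO.Recordset
Set db = CurrentDb
' Find the lowest Id among the newest 5,000,000 rows...
Set rs = db.OpenRecordset( _
    "SELECT Min(Id) AS Cutoff FROM " & _
    "(SELECT TOP 5000000 Id FROM tblLog ORDER BY Id DESC) AS Newest")
If Not IsNull(rs!Cutoff) Then
    ' ...and delete everything older than that
    db.Execute "DELETE FROM tblLog WHERE Id < " & rs!Cutoff, dbFailOnError
End If
rs.Close

A compact afterwards would reclaim the freed space, since deleting rows doesn't shrink the file on its own.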
Could having two lookup queries (one on a form and the other on a table), both of which take info from the same field and store it in a different field in a different table, slow the DB? :)
case 1: Link the SQL Server tables to MS Access (mdb file) - 50 concurrent users accessing the same file (network share drive).
case 2: Create an adp file - 50 concurrent users accessing the same file (network share drive).
I know that if each client has its own copy on the local machine, there will not be any problem. But if we want to put only one file on the shared drive, and either the client or server computer (or the connection between them) fails during a transaction or another operation that writes to the database file, which case copes better?
I'm starting a year-long university project which will be written in C#. The client app will run on a LAN, with 35-40 users accessing the database concurrently, connecting to a database held on the server.
Could anyone tell me if Access can withstand 35-40 users at the same time? I found out that the maximum number of users is 255, but what kind of performance impact (if any) will there be with 35-40?
I've searched all over the Internet for this, but I can only find references to using Access behind a website, which isn't quite what I need. Any help would be great.
I'm hoping someone can offer me some advice on performance for a FE/BE database that will eventually be accessed by up to 60 people during the same day (usually only 3-4 people searching/writing at the same time).
The system is to process complaints at a call centre - built from the ground up, and my first true Access DB application. I have noticed extremely slow form-loading times - which is a pain considering we are a performance-oriented workplace.
The FE is on the desktop, the BE on the server. Currently it takes up to 10 seconds to open a form (even when the form contains virtually no information looked up from tables).
Could anyone offer any tips on how to improve performance?
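I've read about keeping a persistent connection to the back end open for the life of the front end, so Jet isn't re-establishing the connection and re-creating the .ldb lock file every time a form opens. Would something like this be worth trying (path and names are placeholders)?

' In a standard module
Private dbPersist As DAO.Database

Public Sub OpenPersistentConnection()
    ' Call once at startup, e.g. from the main menu form's Open event
    Set dbPersist = DBEngine.OpenDatabase("\\server\share\Complaints_BE.mdb")
End Sub

Public Sub ClosePersistentConnection()
    ' Call once at shutdown
    If Not dbPersist Is Nothing Then
        dbPersist.Close
        Set dbPersist = Nothing
    End If
End Sub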
I work in a medium-sized (80-100 people) Government Contact Centre. As some people may know from previous posts, for several months on and off I have been developing a database which could best be described as a 'ticket of work' system for many of our transaction channels.
I am currently rolling out the 'Beta' version of the database, and am noticing some slowdown in performance.
The database is Access 2K, one F/E linked to two B/Es (writing to one back end 90% of the time) sitting on one of the networked drives (the F/E is local).
I have addressed some of the issues that relate to performance (such as persistent locking of the B/E) and am seeking some further advice. Unfortunately, as our I.S. department does not support Access use, I am unable to seek advice at work.
1. Does anyone know of any tools or methods for measuring database performance?
I am particularly interested in response times and how they are affected by traffic/load. It would be particularly beneficial if there were a tool available that recorded data automatically (e.g. in an Excel worksheet) for later analysis.
2. A question re: code efficiency. I am primarily using ADO to open and manipulate recordsets. This may be a silly question, but being self-taught I have missed lots of the obvious stuff along the way. With ADO, I can either open an entire table or use an SQL SELECT to open only a specific record. Do the two methods differ greatly with respect to performance (more so from the perspective of other users)?
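To make the comparison concrete, the two approaches I mean look like this (table and field names invented):

Dim cn As ADODB.Connection
Dim rs As ADODB.Recordset
Set cn = CurrentProject.Connection
Set rs = New ADODB.Recordset

' Method 1: open the whole table - every row comes across the wire
'rs.Open "tblTickets", cn, adOpenKeyset, adLockOptimistic, adCmdTable

' Method 2: let the WHERE clause (and the index on TicketID) limit
' what is fetched and locked to the one record needed
rs.Open "SELECT * FROM tblTickets WHERE TicketID = 12345", _
        cn, adOpenKeyset, adLockOptimistic, adCmdText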
Sorry about the long post, and thank you in advance for any advice you can offer!
I'd like to know whether other users have had performance problems with different versions of Access in a single environment.
In my office, two users are on A2007, and the rest of us are on A2003 or earlier. After some nasty conflicts in the (un-split) database when A2007 and A2003 users were in the same file, I split the database. Now the A2007 and A2003 people have individual frontends. Other than a garbled LDB file, I haven't had any further conflicts between the two programs.
However, I have had some massively irritating problems ever since I split the database.
1) The Design View on my Front End runs very, very slowly (like, five beats after every action to process).
2) If I try to edit one particular, very simple form, it will not return to Form View... it is stuck in Design View. (That sounds like the form has been corrupted.)
3) The auto-save doesn't seem to be working... Access (2003) crashed on me yesterday and I lost an hour's work on a new form.
I've tried some of the database-construction suggestions for speeding performance from other posts - forms based on queries, record locking, short paths, etc. - and have seen some improvement in overall performance, but none of these should affect Design View. USING the database tends to be speedy-quick.
So, did our dalliance with A2007 mess up my database? Does anyone else have experience with this? Or are these symptoms common with a slow network connection (the back end is on a server, front ends are on individual desktops)?
Bear with me on this one, as the query looks a mouthful although it's fairly simple. I started with the following query, which was working very quickly, bringing back results almost instantaneously. Essentially, it is a number of nested SELECT statements bringing back data and joining on the first table, Structure, to group and filter the results. I had to do it this way as there are no distinct relationships between the zarinvreg, zarageddebt, or baddebt tables (not 100% anyway).
Attached as SQL_Ok.txt
I then had to add some extra joins, and all of a sudden the query slowed to 10 seconds. There's nothing particularly heavy about these extra joins, but they have a couple of WHERE clauses in them. I tried indexing all the fields in every table in the DB and that didn't help at all.
Attached as SQL_SLOW.txt
Any ideas on how to improve this, some things to try, or why the massive delay in processing? The same query is pretty fast on SQL Server, though... just not in Access.
We're going live with a database today, and while running through some testing, some of the forms seem to freeze. It's only happened a couple of times, but my question is: what is the best way to distribute it?
At the moment it's just in a location on the network and the users in the team access it directly. Can anyone give me any suggestions? My neck is on the line here...
I have searched this forum and the DB examples page for other threads like this, but didn't find anything. I have made a performance evaluation DB in MSA 2003. It works fine except for one part. I need to score the individual on about 20 different criteria, each ranging from 1-5. I am having problems getting all the entries to sum when I run the report. Should I use combo boxes, check boxes, radio buttons or what? :confused: And how do I get the individual scores to sum up when I am finished entering them and run the report to print it? I have looked around the net for a Performance Evaluation template to see how it is done, but couldn't find one anywhere. Microsoft doesn't have one in their list of templates either. If anyone knows where to find one, I'd appreciate that too. TIA for your help! I appreciate it.
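One layout that would make the summing easy (table and field names below are only illustrative, not from my DB): store each criterion's score as its own row in a scores table rather than as 20 separate fields, then let a totals query add them up:

SELECT EmployeeID, Sum(Score) AS TotalScore
FROM tblScores
GROUP BY EmployeeID

On the report, a text box in the group footer with a ControlSource of =Sum([Score]) produces the same per-person total.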
I'm developing a project database. A normal project will need 10,000 records in the biggest table. Does this affect performance? I mean, when they have done 8 projects, there will be something like 80,000 records in one table. Is this too much? Does it influence the performance very much?
The database is going to be placed on a SQL Server.
Thanks for your reply, since this is a very important matter!!!
I have split my database application, which was approaching 20 MB in size, into a front end (approx. 8 MB) with linked tables to a back end database (approx. 12 MB).
Network is 100Mb Ethernet.
However, since doing this, end users have noticed that scrolling through records, and especially running reports, takes significantly longer - sometimes 3x/4x longer. I understood that splitting the DB would be beneficial from a development/application-release point of view, and that if I were to create an MDE file of the front end I might also benefit from reduced network traffic, given that end users would be running a compiled executable, etc.
With the speed issues I have been experiencing, I have had no choice but to roll back to the original application format, with everything in the one MDB file.
Has anyone else had to do the same - given similar speed degradation issues?
Last week, my Access 97 DB, with back end and front end both residing on a network as they have been for the last 2 years, decided to start running at 1/4 of its usual speed. The DB is used by 16 users, and roughly 5 are on at any given time (operating system: XP). Both front end and back end were compacted without any change in performance. No changes in programming or number of records were introduced of late. Checking with our IT department indicated that the performance of the network and drive have not changed and are up to snuff. I moved a copy of the FE and BE to my hard drive and found performance returned to normal speed, although I am not sure if it always ran faster on a PC. Any experience with this irregularity, and options to check, would be greatly appreciated.
I have an Access application which is split into two bits, with the back end on the network. The thing is, it's very slow performance-wise and I want to try to optimise it...
I have a few questions after running the Access Performance Analyzer. Are its suggestions actually good ideas, or just some generic recommendations? I don't know if I should take care of all of them or not.
Does anyone know whether I should do all these things, and how I should go about them?
I have designed an Access DB with various forms to display data populated by queries. This runs fairly efficiently when everything is on the same system. As soon as I split the DB into a front and back end and place the data part in a network folder (across a WAN), the performance is incredibly slow.
The strange thing is that when testing the queries on SQL Server (just as a test), they run quickly, and the data can't be more than a few KB in size.
As it was explained to me the other day: "Access is a file-oriented database. There is no client-server code, so all data manipulation is done on the client side anyway. Access has to load the data across the network."
This sounds to me like Access would be loading all 20 MB of data across the WAN and processing it on the client end, rather than running the query at the back end first and only sending through 10 KB of data. Is this true?
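From what I've read, a server back end can do the filtering before anything crosses the wire, e.g. via a pass-through query along these lines (connection details and names invented):

Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.CreateQueryDef("qryPassThrough")
' The server runs this SQL; only the result rows travel over the WAN
qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes"
qdf.SQL = "SELECT CustomerID, CustomerName FROM Customers WHERE Region = 'West'"
qdf.ReturnsRecords = True

With a Jet back end on a share there is no server process, so the restriction is applied by the local Jet engine - although, as I understand it, with proper indexes it reads index pages rather than the whole file.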
A DB is split (FE/BE) with several FE users and the BE sitting on a network share. The FE is Access 2003 (runtime). The subform has its recordset type set to Snapshot.
Which of the two scenarios will perform faster?
Scenario 1: the FE queries a linked table and displays the results on a subform (datasheet format).
Scenario 2: the BE table is copied to the FE (a new table), the query is run against the new table, and the results are displayed on a subform (datasheet format).
The reason for this question is to attempt to reduce network traffic and further improve the speed of a split database.
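For reference, Scenario 2 would be implemented roughly like this (table, form, and control names invented for illustration):

' Refresh the local cache table, then bind the subform to it
CurrentDb.Execute "DELETE FROM tblLocalCache", dbFailOnError
CurrentDb.Execute _
    "INSERT INTO tblLocalCache SELECT * FROM tblLinkedOrders " & _
    "WHERE CustomerID = " & Me.cboCustomer, dbFailOnError
Me.subOrders.Form.RecordSource = "SELECT * FROM tblLocalCache"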
Hello, I am attempting to convert some Access XP MDB files to Access 2007, and I am experiencing some horrendous performance problems. We are a bit unusual in that we use Access as a RAD tool, and almost all of our data resides in Oracle tables linked to by our Access front ends.
The issue I am experiencing concerns opening tables in the UI. If I open a table by double-clicking it, a datasheet view is displayed (rather quickly) as usual, filling the page with information. The UI is responsive while Access populates its recordset in the background (e.g. the record count on the record navigation bar is not yet displayed). As soon as I click in a column defined as a date, the UI hangs, network traffic and CPU usage rise dramatically, and the Access window displays "Not Responding". It appears that Access is attempting to retrieve all the rows in the table. After 3 minutes or so, the UI becomes responsive, CPU and network traffic return to normal, and the record count is displayed (roughly 900K records).
This is "bad" (and also not what Office XP did). Please can anyone help?
In basic terms, the Contract table has basic contract info, resources can be assigned to a contract (via the Assign table), and we feed in exchange-rate info from another table.
Everything is so slow as soon as I put the BE on a share drive. I have done everything I can think of: I have changed to tLookups, changed the Auto options as recommended, the analyser shows no issues, and I've compacted the DB. It is unusable!
I notice on one form that it takes 1 second to calculate a field. The field basically uses a tSum to find the total cost of a contract (it looks up Assign to sum all the attached resources). This seems to slow it down, but it does not explain the huge time it takes to load.
I am considering that the issue may be down to the share where it is located having too high a latency (it is in another country, and it feels slow just browsing through it).
Any other general ideas, or do you need more info?
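For reference, the tSum is doing per record what a single grouped query could do in one pass over the Assign table (my real field names differ):

SELECT ContractID, Sum(Cost) AS TotalCost
FROM Assign
GROUP BY ContractID

Joining that query into the form's record source would fetch every contract's total in one hit instead of one lookup per displayed row.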
I've lately come across some posts that condemn using lookup fields in tables. But how bad is this really? I mean, it's quite a nifty feature, and it'd be really too bad if it hogs resources, thus forcing us to leave it alone for performance reasons.
I'd appreciate it if any of you guys/gals with more experience on the topic would like to share some, so I can take a better stance for future development.
I have run across a performance issue and am looking to see if anyone knows why it's happening. First of all, I am using an ODBC connection to an AS400 system, querying a table with roughly 1,000,000 records. If I hard-code the criterion for a particular field, the query completes in less than 5 seconds. If I set the criterion to be a field in a local table, the same query runs for over a minute and a half. Any ideas?
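For illustration (object names changed): the slow version joins the linked AS400 table to the local criteria table, while something like this reads the local value first and hands the AS400 a simple literal restriction:

Dim crit As String
Dim rs As DAO.Recordset
' Resolve the criterion from the local table before touching the AS400
crit = Nz(DLookup("PartNo", "tblLocalCriteria"), "")
' With a literal in the WHERE clause, the restriction can be passed to the
' AS400 rather than being resolved by Jet on the client
Set rs = CurrentDb.OpenRecordset( _
    "SELECT * FROM lnkAS400Parts WHERE PART_NO = '" & crit & "'", dbOpenSnapshot)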
For each customer, I only store the address and zip code. In other words I do not record city and state. Therefore, when I need the city and state I INNER JOIN to a zipcode table.
We have 40,000 customers. I wanted to run a search for all Florida customers, so I first went to the zip code table to get all Florida zip codes, storing them in a table called FloridaZips. There are 1,400 of them. I then do:
SELECT * FROM Customers INNER JOIN FloridaZips ON FloridaZips.Zip = Customers.Zip
(well, my table names are actually a bit different).
(I am running this query from VB.Net using the OleDb provider.) The Access database is local (no network travel is involved). There are indexes on all tables, all columns.
In both tables, the ZIP column is of type TEXT. I avoided "Number" only because zip codes that begin with zero (04056) tend to lose the leading zero if formatted as number.
The performance of this query seems poor: it takes 2 to 3 minutes to get a list of Florida customers. Am I doing something wrong, or is this time span normal for Access?
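For comparison, a direct join to the full zip code table (assuming it carries a State column; mine may differ) would skip building FloridaZips entirely, and selecting Customers.* rather than * keeps the zip table's columns out of the result set:

SELECT Customers.*
FROM Customers INNER JOIN ZipCodes ON Customers.Zip = ZipCodes.Zip
WHERE ZipCodes.State = 'FL'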
So, I split up my database, housed the tables half of the split on a shared network drive, and am experimenting with the front end, which I've housed on my local computer. I've tried most of the recommendations: shortened the name of the DB, changed the FE to an MDE file, set the linked tables' Subdatasheet Name to [None], but my forms (only 2) are still taking a while to load. The forms are pretty substantial, and each contains a subform. Can anyone offer any recommendations to improve the speed?
I've built a macro that runs through about 12 queries one after the other, which I run every day. They basically bring in data from a data warehouse held on a Pervasive server through an ODBC link, then perform calculations based on other criteria in tables held within the Access database. Up until about 10 days ago, the whole thing took about an hour to run. Then all of a sudden this changed, and it now takes around 5 hours. I haven't made any changes to the macro or any of the queries it runs. I was wondering if anyone has any ideas what might have caused this (as it's now almost unusable). I've compacted the database to remove bloat, added more RAM, moved the Access DB onto my C: drive instead of the server, stopped my antivirus program from looking at either my Access DB or the data warehouse where the data comes from, and pestered the network guy to find out if they've changed anything to do with the server (he says no). Any ideas for other things to try would be great, as I'm on the verge of throwing my PC out of the window in case that helps!