General :: Archiving Data - Improve Response Time Of Database?
Apr 30, 2015

Will archiving older data improve response time of a database?
What is the best way to archive data knowing that older records will still be accessed twice a year?
Hi, I have a form which is made up of 3 tables, and I am trying to create an append query for each one in order to keep records of data before it is updated. The append queries seem to work, but they append all data rather than just the one selected record. I know I will next need to create a macro which can be used each time a record has been updated so that a copy is sent to the archive. Can anyone help me with this, or have any useful suggestions?
Thanks.
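A minimal sketch of one way to append only the record currently on the form rather than the whole table; the table names (tblOrders, tblOrdersArchive) and key field (OrderID) are assumptions, and the same pattern would be repeated for each of the 3 tables.

Code:
'Sketch: archive just the record shown on the form, before the edit is saved.
'Table and field names are assumptions.
Private Sub Form_BeforeUpdate(Cancel As Integer)
    Dim strSQL As String
    'Copy the single row whose key matches the record on screen;
    'in BeforeUpdate the table still holds the pre-edit values.
    strSQL = "INSERT INTO tblOrdersArchive SELECT * FROM tblOrders " & _
             "WHERE OrderID = " & Me!OrderID
    CurrentDb.Execute strSQL, dbFailOnError
End Sub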
I have an 18.8MB Access application consisting of the following:
140 tables
410 Queries
67 Forms
5 Macros
26 Modules
It's not a lot of data storage, but it is very computationally and mathematically intensive. As we continue to develop and expand the application, I noticed last week that we suddenly experienced a massive falloff in performance. Access is taking an incredibly long time to traverse the QueryDefs collection, find objects, etc. It seems that the object collections themselves are not indexed, meaning that it takes much longer to run a compiled and saved query than it does to simply build the SQL string and execute it from within code. I continue to hunt for coding bottlenecks and any other efficiency problems I can find. Has anyone else experienced this? Is there a "critical mass" for the object containers?
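If it helps to quantify the gap described above, a rough timing sketch is below; qrySavedCalc, tblResults and BatchID are placeholder names, not objects from the application.

Code:
'Sketch: compare a saved query against the equivalent ad-hoc SQL string.
'Assumes both return at least one row.
Dim db As DAO.Database
Dim rs As DAO.Recordset
Dim sngStart As Single

Set db = CurrentDb

sngStart = Timer
Set rs = db.OpenRecordset("qrySavedCalc", dbOpenSnapshot)
rs.MoveLast                              'force the recordset to populate fully
Debug.Print "Saved query: "; Timer - sngStart

sngStart = Timer
Set rs = db.OpenRecordset("SELECT * FROM tblResults WHERE BatchID = 42", dbOpenSnapshot)
rs.MoveLast
Debug.Print "SQL string:  "; Timer - sngStart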
I have no idea how to even begin an archive. Can anyone give some direction? I've spent two days searching the web and trying to understand the Microsoft website directions along with Access for dummies. The only thing I can find is something about using Products_Appends and Products_Delete queries.
The database response is significantly slow when our application requests data over a network. Are there any settings to adjust network response? The network is connected via a T1 line.
According to information I've found online, our application is the front end to the database and the database sits open on the server.
Hi, All--
I am designing a database to capture the data of returned surveys. I want to design the database to facilitate data analysis through crosstabs or other aggregation queries.
If I design a table where each record is the complete survey responses to all survey items in a returned survey, this is not friendly for such query analysis. (In this, each field would be a survey item). Call this the horizontal method.
The other way would be to have a reference table containing the survey items, and have responses entered in a separate table linked by item id and response id (from a third table containing a record for each submitted survey). Call this the vertical method. This would take more time to set up but would probably be easier to query.
The item response table would become quite long, containing every item response for every survey returned, although each record is short.
Does anyone have any opinion on this, or perhaps a completely different approach that I haven't thought of that would be easy to set up but also easy to query?
Thanks.
Paul
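A rough sketch of the vertical (one row per item response) layout described above, created from VBA so the structure is explicit; all table, field and type choices here are assumptions, not an existing schema.

Code:
'Sketch of the "vertical" design.  Names and types are assumptions.
Dim db As DAO.Database
Set db = CurrentDb

'One row per returned survey
db.Execute "CREATE TABLE tblSurveys (SurveyID COUNTER PRIMARY KEY, " & _
           "DateReturned DATETIME)", dbFailOnError

'One row per survey question
db.Execute "CREATE TABLE tblItems (ItemID COUNTER PRIMARY KEY, " & _
           "ItemText TEXT(255))", dbFailOnError

'One row per answer: links a survey to an item
db.Execute "CREATE TABLE tblResponses (ResponseID COUNTER PRIMARY KEY, " & _
           "SurveyID LONG REFERENCES tblSurveys (SurveyID), " & _
           "ItemID LONG REFERENCES tblItems (ItemID), " & _
           "Answer LONG)", dbFailOnError

A crosstab can then pivot ItemID across the top and aggregate Answer, which is exactly the analysis the horizontal layout makes awkward.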
Hi all,
Currently my db stores data on various projects, and these projects are sorted by a status of on hold, ongoing, or finished. What I'm trying to do is move only the projects that are finished, while still keeping a record of them so we can view them in the future.
I was thinking maybe I could move the finished projects into another db, but I'm not sure how to do that. Or is there a better way to achieve this?
thank you
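One possible approach, sketched below with assumed names (a tblProjects table with a Status field, and a made-up archive path): append the finished rows to the other database, then delete them from the live table.

Code:
'Sketch: push finished projects to an external archive file, then
'remove them from the live table.  Names and path are assumptions;
'the archive .accdb must already contain a tblProjects with the same structure.
Dim db As DAO.Database
Set db = CurrentDb

db.Execute "INSERT INTO tblProjects IN 'C:\Archive\ProjectsArchive.accdb' " & _
           "SELECT * FROM tblProjects WHERE Status = 'finished'", dbFailOnError

db.Execute "DELETE FROM tblProjects WHERE Status = 'finished'", dbFailOnError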
I have a database with employees. The tables are as follows:
Deptdatatble
Depttble
Emptble
HRtble
Servicetble
Servicedatatble
Archivetble
Classestble
Classdatatble
At certain times, I want to archive employees out (let's say they are terminated). When I do this, something strange happens. If an employee has 4 records in the Servicedata table and 4 records in the Classdata table, then it exports out 16 records (4 x 4). I would expect it to export out 8 records.
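The 4 x 4 = 16 result usually means the export query joins Servicedatatble and Classdatatble together in one query, so every service row pairs with every class row (a cross product). Archiving each child table with its own append keeps it at 4 + 4 = 8. A sketch, where the archive table names and the EmpID key are assumptions:

Code:
'Sketch: one append per child table, never both joined in one query.
'Archive table names and the EmpID key field are assumptions.
Dim db As DAO.Database
Set db = CurrentDb

db.Execute "INSERT INTO ServicedataArchive SELECT * FROM Servicedatatble " & _
           "WHERE EmpID = 123", dbFailOnError
db.Execute "INSERT INTO ClassdataArchive SELECT * FROM Classdatatble " & _
           "WHERE EmpID = 123", dbFailOnError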
I'm trying to create an archiving routine as my database is becoming very large. For about 10 tables I want to shift certain records to an external database which would have the required 10 tables with the same table names and structure.
So far so good. I now want to automate everything using vba. I can see how to use the INSERT INTO statement but I don't want to have to name every field as there are hundreds. I just can't see how to do this.
If the table structures are identical, how do I neatly shift a bunch of records from one to the other using code?
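When the source and archive tables have identical structures, SELECT * avoids naming any fields. A sketch, where the archive path, the RecordDate field and the two-year cutoff are all assumptions:

Code:
'Sketch: move old rows to the archive copy of a table without
'listing any field names.  Path, field name and cutoff are assumptions.
Public Sub ArchiveTable(ByVal strTable As String)
    Dim db As DAO.Database
    Set db = CurrentDb

    db.Execute "INSERT INTO [" & strTable & "] " & _
               "IN 'C:\Archive\ArchiveDB.accdb' " & _
               "SELECT * FROM [" & strTable & "] " & _
               "WHERE RecordDate < DateAdd('yyyy', -2, Date())", dbFailOnError

    db.Execute "DELETE FROM [" & strTable & "] " & _
               "WHERE RecordDate < DateAdd('yyyy', -2, Date())", dbFailOnError
End Sub

Calling ArchiveTable once per table name would cover the 10 tables without writing 10 field lists, provided each table has a comparable date field.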
I'm trying to create an archiving system, where i use a simple Append Query followed by a Delete Query.
A typical criteria for the Append Query is less than Date()-30...so any records older than 30 days can be appended to an archive table. This works fine when i enter it in the Query Design criteria row.
But, I would like to make this user-defined. I have set up an unbound form as shown in the first attachment...and made a global variable entitled 'ArchiveDays'. I am hoping to use the variable to act as the criteria for the append criteria. (Please note that in the screendump...they can select an option button if they just want to stick to 1 month old. I also show you my assignment operations there).
My question is... how do I get the variable 'ArchiveDays' value to be the criteria for my append query?
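A saved query cannot read a VBA variable directly, but the SQL can be built in code using it. A sketch, where tblOrders, tblOrdersArchive and OrderDate are placeholder names and ArchiveDays is the global variable from the post:

Code:
'Sketch: use the global ArchiveDays variable to build the append
'(and matching delete) at run time.  Table/field names are assumptions.
Public Sub RunArchive()
    Dim db As DAO.Database
    Dim strWhere As String

    Set db = CurrentDb
    strWhere = " WHERE OrderDate < Date() - " & ArchiveDays

    db.Execute "INSERT INTO tblOrdersArchive SELECT * FROM tblOrders" & strWhere, dbFailOnError
    db.Execute "DELETE FROM tblOrders" & strWhere, dbFailOnError
End Sub

An alternative that needs no code at all is to leave the saved query alone and put a reference to the unbound textbox (e.g. Forms!frmArchive!txtArchiveDays) straight into the criteria row.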
I am sending an email using SendObject. Sometimes it works, and sometimes it makes the computer freeze up with no error message. I have tried this with Outlook running or not running, seems to make no difference.
Code:
'The sub procedure below sends e-mail in response to a click on the Send button.
Private Sub SendMessagesButton_Click()
'For Access, define some object variables and make connections.
Dim myConnection As ADODB.Connection
Set myConnection = CurrentProject.Connection
Dim myRecordSet As New ADODB.Recordset
myRecordSet.ActiveConnection = myConnection
[Code] ....
I have added some MsgBox() calls to narrow down where it crashes. It is after 'Five' and before 'Six', on the line:
Set appOutlookRecip = .Recipients.Add(eMailAddress)
I am mystified as to why it works OK sometimes, and not others. The email address being used is valid.
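One hedged suggestion for narrowing this down: resolve the recipient explicitly and bail out with a message instead of letting Outlook hang. This assumes a reference to the Microsoft Outlook object library, and that appOutlookMsg is the MailItem created in the elided [Code] section.

Code:
'Sketch: add the recipient, then check that Outlook can resolve it.
'appOutlookMsg and eMailAddress come from the existing code.
Dim appOutlookRecip As Outlook.Recipient

Set appOutlookRecip = appOutlookMsg.Recipients.Add(eMailAddress)
appOutlookRecip.Type = olTo

If Not appOutlookRecip.Resolve Then
    MsgBox "Outlook could not resolve " & eMailAddress, vbExclamation
    Exit Sub
End If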
Is there a way to make a shared (split) database automatically log/time users out if they leave it open/idle for too long (ie: more than 30 minutes)?
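There is no built-in idle timeout, but a common pattern is a hidden form whose Timer event closes the application after a period of no activity. A rough sketch follows; the form name, the one-minute check and the use of the active control as a crude proxy for "idle" are all assumptions.

Code:
'Sketch: code behind a hidden form (frmIdleWatch) opened at startup.
'If the active control has not changed for 30 consecutive checks,
'the front end is closed.
Private mstrLastControl As String
Private mlngIdleMinutes As Long

Private Sub Form_Load()
    Me.TimerInterval = 60000        'fire once per minute
End Sub

Private Sub Form_Timer()
    Dim strCurrent As String
    On Error Resume Next            'no active control raises an error
    strCurrent = Screen.ActiveControl.Name
    On Error GoTo 0

    If strCurrent = mstrLastControl Then
        mlngIdleMinutes = mlngIdleMinutes + 1
    Else
        mstrLastControl = strCurrent
        mlngIdleMinutes = 0
    End If

    If mlngIdleMinutes >= 30 Then Application.Quit acQuitSaveAll
End Sub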
I need code for saving the Access database with a date and time stamp when the database is closed, e.g. saving it as 'database 11:11am 11082015' when closed at 11:11am on 11082015.
How can I set this up?
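One way to read this request: make a date/time-stamped copy of the back-end file when the front end closes. A sketch, with both paths as assumptions; FileCopy may fail if the file is locked, so this is best run when no one else has the back end open.

Code:
'Sketch: copy the back-end to a name like Backup_20150811_1111.accdb
'when the front end closes (call from the main form's Close event).
Public Sub BackupOnClose()
    Dim strSource As String
    Dim strTarget As String

    strSource = "C:\Data\MyDatabase_BE.accdb"
    strTarget = "C:\Backups\Backup_" & Format(Now(), "yyyymmdd_hhnn") & ".accdb"

    FileCopy strSource, strTarget
End Sub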
Twice a year, a database of mine is accessed and put to use by various staff within a time range of 1 week. The database is on a shared drive, in a location which can be accessed by all.
The staff access the database from different workstations and in some instances at the same time.
This has led to issues with the database being copied, which then confuses staff about which database to click on. As a result I end up with 2 databases, which I then have to sift through and copy/paste into the correct one.
I want to know the best way i can:
1) Prevent multiple users accessing the database at the same time.
2) Prevent staff making a copy of the original and typing into a separate database.
How do you loop through and insert selected data from a listbox one at a time? For example, let's say you have an insert statement that has a firstname, lastname, CarsID (foreign key) and address field. Let's say you had another table that has ID and CarsID (primary key) fields. In the listbox, you have populated it with all the cars and they are all selected.
Example:
INSERT INTO PEOPLE(firstname, lastname, CarsID) VALUES('John','Smith','Honda')
INSERT INTO PEOPLE(firstname, lastname, CarsID) VALUES('John','Smith','FORD')
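A sketch of looping over the selected rows of a listbox and running one INSERT per row; the listbox name lstCars and the column index holding CarsID are assumptions, while the table and field names follow the example above.

Code:
'Sketch: one INSERT per selected listbox row.
Dim db As DAO.Database
Dim varItem As Variant
Dim strCar As String

Set db = CurrentDb

For Each varItem In Me.lstCars.ItemsSelected
    'Column 1 is assumed to hold the CarsID value
    strCar = Me.lstCars.Column(1, varItem)
    db.Execute "INSERT INTO PEOPLE (firstname, lastname, CarsID) " & _
               "VALUES ('John', 'Smith', '" & strCar & "')", dbFailOnError
Next varItem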
Is it possible to pull the data in real time? I have this Access database, and I need to pull the data every time it is updated.
The process name is given; I need to pull the time according to the process name and the volume.
The attached file shows the output. The output should be in a form.
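Access will not push changes into an open form by itself, but a form Timer can re-query at a short interval, which looks close to real time. A short sketch; the 10-second interval is an assumption.

Code:
'Sketch: code behind the output form; re-reads the data every 10 seconds.
Private Sub Form_Load()
    Me.TimerInterval = 10000
End Sub

Private Sub Form_Timer()
    Me.Requery      'pick up rows added or changed since the last check
End Sub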
I am trying to find an algorithm to identify patterns in my data.
My task is to determine whether the data shows a very sharp decline and whether or not it follows previous fluctuation.
If it declines sharply and doesn't follow previous fluctuations, it will indicate a production problem.
My time series data is as follows. The sharp decline is highlighted in the data below.
Data
-0.027663709
-0.057051957
-0.077941988
-0.070009989
-0.033860193
[code]....
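One simple heuristic (an assumption, not the only possible algorithm): treat a drop as "sharp" when the latest change is more than a few standard deviations below the mean of the earlier changes, i.e. it does not follow the previous fluctuation. A sketch over an array of the values above:

Code:
'Sketch: flag the latest point if its change from the previous point
'falls more than 3 standard deviations below the earlier changes.
Public Function IsSharpDecline(dblValues() As Double) As Boolean
    Dim i As Long, n As Long
    Dim dblSum As Double, dblSumSq As Double
    Dim dblMean As Double, dblVar As Double, dblSD As Double
    Dim dblLastChange As Double

    n = UBound(dblValues) - LBound(dblValues)   'number of changes
    If n < 3 Then Exit Function                 'not enough history

    'statistics of all changes except the most recent one
    For i = LBound(dblValues) + 1 To UBound(dblValues) - 1
        dblSum = dblSum + (dblValues(i) - dblValues(i - 1))
        dblSumSq = dblSumSq + (dblValues(i) - dblValues(i - 1)) ^ 2
    Next i
    dblMean = dblSum / (n - 1)
    dblVar = dblSumSq / (n - 1) - dblMean ^ 2
    If dblVar < 0 Then dblVar = 0
    dblSD = Sqr(dblVar)

    dblLastChange = dblValues(UBound(dblValues)) - dblValues(UBound(dblValues) - 1)
    IsSharpDecline = (dblLastChange < dblMean - 3 * dblSD)
End Function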
I have a project that runs on MS Access as the front end, with the database linked to MS SQL Server. Some table columns contain date-time data stored in General Date format (i.e. 01/01/2005 08:00:00). I created a form for my staff to key in lab test data, where they key in only the time without a date value. On the form, I show this value as time only too. But I want to use this data with a date value for some calculations as a background process.
So...
For new data, the database should store the current date with the time that my staff key in.
For updated data, the database should keep the existing date with the time that my staff may update.
What should I do to solve this?
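One possible way to do this is to combine the date with the typed time in the form's BeforeUpdate event, so the full date/time goes to SQL Server while the user only ever types a time. In this sketch txtTestTime (an unbound time-only textbox) and TestDateTime (the bound date-time field) are assumed names.

Code:
'Sketch: store the current date + keyed-in time for new rows, and keep
'the existing date when only the time is edited.  Names are assumptions.
Private Sub Form_BeforeUpdate(Cancel As Integer)
    If Me.NewRecord Then
        'new data: today's date plus the time the user typed
        Me!TestDateTime = Date + TimeValue(Me!txtTestTime)
    Else
        'update: keep the date already stored, replace only the time
        Me!TestDateTime = DateValue(Me!TestDateTime) + TimeValue(Me!txtTestTime)
    End If
End Sub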
I work on a pre-created Access database, and the other day I was working on it, and was trying to export something to Excel to sort it and do some Pivot analysis.
Anyway, I must have pressed something, because now every time I open the database, rather than saying "record 1 of 20463" and showing the data from record 1, it shows "record 1 of 1" and all the data fields are blank. If I go to "Records" and "Show All Records" they'll all come up, but I don't want to have to do that every time, and as I import and export all the time, I'm worried that the next time I try it it'll mess up the years of data I have.
I have a (quite) specific question, but I think it covers something I simply cannot answer.
I have three UPDATE queries running on linked tables in Microsoft Access (2000/XP).
My main data table (the one to be updated) has almost 1 million records.
My three information tables ALL have primary keys (which are used to link to the main table) and vary in size.
I have attached the three UPDATE queries plus descriptions of the field names used.
Table             Records    Time
Main DataTable    900000
Mask nomenk       130        2 hours
Mask media        900        15 minutes
Mask brand        4000       ?????
Query A
UPDATE [Main DataTable] AS z
INNER JOIN [Mask nomenk] AS mn ON (z.nomCode1 = mn.nomCode1) AND (z.nomCode2 = mn.nomCode2) AND (z.nomCode3 = mn.nomCode3) AND (z.nomCode4 = mn.nomCode4)
SET z.NomenkMask1 = mn.NomenkMask1;
Query B
UPDATE [Main DataTable] AS z
INNER JOIN [Mask media] AS mm ON (z.couCode = mm.couCode) AND (z.nomCode1 = mm.nomCode1) AND (z.pubCode = mm.pubCode)
SET z.MediaMask1 = mm!MediaMask1;
Query C
UPDATE [Main DataTable] AS z
INNER JOIN [Mask brand] AS mb ON (z.couCode = mb.couCode) AND (z.nomCode1 = mb.nomCode1) AND (z.brCode1 = mb.brCode1) AND (z.brCode2 = mb.brCode2)
SET z.BrandMask1 = mb!BrandMask1;
Fieldname    FieldType
couCode      Text
pubCode      Text
nomCode1     Long Integer
nomCode2     Long Integer
nomCode3     Long Integer
nomCode4     Long Integer
brCode1      Long Integer
brCode2      Long Integer
My problem, quite simply, is the speed involved in running these queries. I know that query B is the quickest, with query A a distant second (I could not even complete the running of query C, and killed it after 6 hours).
What I need to know is WHY query C is so much slower than query B, when the only real difference that I can see between them is that query C has an extra field to join on.
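One thing worth checking first (a guess, since the index definitions aren't shown): whether [Mask brand] and [Main DataTable] are indexed on all four join fields. Query C joins on brCode1 and brCode2, which the other two queries never touch, and an unindexed join against a 900,000-row table behaves much as described. A sketch of adding composite indexes from code; the index names are arbitrary.

Code:
'Sketch: composite indexes covering the four fields Query C joins on.
'Run once, not per query.
Dim db As DAO.Database
Set db = CurrentDb

db.Execute "CREATE INDEX idxBrandJoin ON [Mask brand] " & _
           "(couCode, nomCode1, brCode1, brCode2)", dbFailOnError

db.Execute "CREATE INDEX idxMainBrandJoin ON [Main DataTable] " & _
           "(couCode, nomCode1, brCode1, brCode2)", dbFailOnError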
I'm downloading the Excel data from the site and connecting it to Access.
In Excel, the particular column (Time Taken) is in the format "00:12:26".
After connecting it to Access and appending it to the table, the format changed to "12:12:26": the first two digits changed to "12" and the remaining digits are as they look in Excel. I need to change it back to the format it has in Excel.
Hello,
First time poster here so I hope this doesn’t sound too remedial. Here’s my situation…
I work for a large industrial company that has locations throughout the world. We have a DB that tracks product concepts and ideas and associated metrics for those ideas. The DB resides on a file server in North America (Raleigh, North Carolina to be exact). North American users have no trouble with the performance of the DB. It takes a moment to open (several seconds), but once it has opened there is virtually no lag time to add or edit records, run reports, view graphs, etc. However, users in Germany and the Netherlands encounter substantial lag time not only in opening, but also in updating and entering records, running reports, and viewing graphs. This is true even after they have waited for the DB to open.
The size of the DB is only around 2MB so I don’t think overall size is the issue.
There are probably no more than 3 or 4 users in the DB at the same time with most occasions being a single user so I don’t think we are having a multiple user issue.
The DB is self contained – no references to external data or splitting of any kind.
So my questions are:
1. Do you think the poor performance is a function of our network or of Access or the DB design?
2. If it is the network, is there anything that I can do in Access to help get around the hardware/network issues?
I have 2 queries to check if there is any "double quote" character in any of the 12 month columns in a month table of 125K records. I use 2 queries since the maximum number of criteria in a query is only 8, so I use 8 criteria in the 1st query and then 4 criteria to check the remaining months in the 2nd query.

The month table is refreshed and created every month. The 12 month columns change from month to month, since they run from -13 months to -1 month at the time the month table is created. For example, if the month table is refreshed and created in May 2013, then the 12 month columns are "May 2012", "Jun 2012" ... and "Apr 2013". If the month table is created in Jun 2013, then the 12 month columns are "Jun 2012", "Aug 2012" ... and "May 2013".

The end user has little knowledge of Access Database, so he seems confused about how to update the 2 queries for the new month table. I am trying to see if there is a way to use 12 criteria in just one query, so the end user only has to deal with updating one query. If there is a way to automate/improve the update of the queries/query, then that would be best.
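The 8-criteria ceiling only applies to separate criteria rows in the design grid; a single WHERE clause can OR together all 12 tests, and the SQL can be rebuilt each month in code so the end user never edits the query at all. A sketch, where qryFindQuotes and tblMonths are assumed names and the month columns are assumed to be the table's Text fields:

Code:
'Sketch: rewrite one saved query so it checks every text column of the
'month table for a double-quote character.  Names are assumptions;
'exclude non-month text columns as needed.
Public Sub RebuildQuoteCheck()
    Dim db As DAO.Database
    Dim fld As DAO.Field
    Dim strWhere As String

    Set db = CurrentDb
    For Each fld In db.TableDefs("tblMonths").Fields
        If fld.Type = dbText Then
            If Len(strWhere) > 0 Then strWhere = strWhere & " OR "
            strWhere = strWhere & "InStr([" & fld.Name & "], Chr(34)) > 0"
        End If
    Next fld

    db.QueryDefs("qryFindQuotes").SQL = _
        "SELECT * FROM tblMonths WHERE " & strWhere & ";"
End Sub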
I am exporting a large query to a delimited text file. I'm finding that it takes more than 5 minutes just to get the Export Text Wizard to load, and I'm guessing that's because Access is running the query as it loads the wizard.
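If the goal is just the delimited file, DoCmd.TransferText skips the wizard entirely (the query still has to run once, but not again for the wizard's preview). A one-line sketch with assumed query name and path:

Code:
'Sketch: export the saved query straight to a delimited text file.
'Query name and path are assumptions; the final True includes field names.
DoCmd.TransferText acExportDelim, , "qryLargeExport", "C:\Export\LargeExport.txt", True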
I need to merge data from one DB into another. I have a split database with front end DBcompanyFE and back end DBcompanyBE. The BE is on the server so users at the company (3 users) can access it with their own FEs. I also have 2 users who are working at another location (geographically) and they have an identical BE of the database (DBcompanyBE) and their own FEs.
Now, my problem is that different data is entered at each location, but all the data is needed at both locations. What would be the easiest and maybe most automated way to merge/combine that data?