I'm trying to work out a database design to make it quicker for my client
program to read and display updates to the data set. Currently it reads in
the entire data set again after each change, which was acceptable when the
data set was small but now it's large enough to start causing noticeable
delays. I've come up with a possible solution but am looking for others'
input on its suitability to the problem.
Here is the DDL for one of the tables:
create table epl_packages
(
customer varchar(8) not null, --
package_type char not null, -- primary key
package_no int not null, -- /
dimensions varchar(50) not null default(0),
weight_kg int not null,
despatch_id int, -- filled in on despatch
loaded bit not null default(0),
item_count int not null default(0)
)
My first thought was to add a datetime column to each table to record the
time of the last change, but that would only work for inserts and updates.
So I figured that a separate table for deletions would make this complete.
DDL would be something like:
create table epl_packages
(
customer varchar(8) not null,
package_type char not null,
package_no int not null,
dimensions varchar(50) not null default(0),
weight_kg int not null,
despatch_id int,
loaded bit not null default(0),
item_count int not null default(0),
last_update_time datetime default(getdate()) -- new column
)
create table epl_packages_deletions
(
delete_time datetime,
customer varchar(8) not null,
package_type char not null,
package_no int not null
)
And then these triggers on update and delete (insert is handled automatically
by the default constraint on last_update_time):
create trigger tr_upd_epl_packages
on epl_packages
for update
as
-- check for primary key change
if (columns_updated() & 7) > 0 -- first three columns: 1+2+4
insert epl_packages_deletions
select
getdate(),
customer,
package_type,
package_no
from deleted
update A
set last_update_time = getdate()
from epl_packages A
join inserted B
on A.customer = B.customer and
A.package_type = B.package_type and
A.package_no = B.package_no
go
create trigger tr_del_epl_packages
on epl_packages
for delete
as
insert epl_packages_deletions
select
getdate(),
customer,
package_type,
package_no
from deleted
go
The client program would then do the initial read as follows:
select getdate()
select
customer,
package_type,
package_no,
dimensions,
weight_kg,
despatch_id,
loaded,
item_count
from epl_packages
where
customer = {current customer}
order by
customer,
package_type,
package_no
It would store the output of getdate() to be used in subsequent updates,
which would be read from the server as follows:
select getdate()
select
customer,
package_type,
package_no,
dimensions,
weight_kg,
despatch_id,
loaded,
item_count
from epl_packages
where
customer = {current customer} and
last_update_time > {output of getdate() from previous read}
order by
customer,
package_type,
package_no
select
customer,
package_type,
package_no
from epl_packages_deletions
where
customer = {current customer} and
delete_time > {output of getdate() from previous read}
The client program will then apply the deletions and the updated/inserted
rows, in that order. This would be done for each table displayed in the
client.
Any critical comments on this approach and any improvements that could
be made would be much appreciated!
Does anyone know how to upload (bulk) data from a client (written in Excel VBA) to a remote SQL 2000 database? Of course I tried "INSERT INTO" and rst.AddNew, but I noticed this is much, much slower than downloading from the same remote database.
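One pattern that is usually far faster than row-by-row INSERT or rst.AddNew calls (a sketch only, not tested against your setup; the share path, delimiters and table name are placeholders): have the VBA code write the rows to a delimited text file on a share the server can read, then load it with a single set-based statement:

BULK INSERT dbo.UploadTarget
FROM '\\fileserver\uploads\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000)

The client then only sends this one statement (or calls a stored procedure that wraps it) over its existing ADO connection, so the per-row round trips disappear.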
I have a very unusual situation - we converted a client from DB2 7.2 to MS SQL Server 2000, SP3. There is one report that runs very quickly when run on the Database Server, but it takes a long time to complete when it is run from a client system. This query is run from within the application and not from within Query Analyzer.
Has anyone else here ever encountered this issue? What did it turn out to be? I am leaning away from it being a network issue.
I have the DB structure below in MSSQL for a small application that follows a relational approach. Data retrieval (for Hostels) will need several joins; maybe a key-value approach would make data retrieval faster.
I'm looking for ideas on how to write SQL scripts for updates that are pushed out to clients for product updates. Obviously, we could just keep track of the changes on a pad, or write a database that requires us to input those changes and eventually hand-write the update scripts. I was wondering if anybody has any solutions that may help automate this process. Is there a way to write an application that will compare a current (updated) database structure against the last release and give us the fields that need to be changed? As far as creating the scripts for the initial install, that's easy - we can do that right from the SQL Enterprise Manager. Call me lazy! Any ideas? Thanks
I have many sql 2000 DTS packages that I support from my development workstation running v2000 sp4. Packages are altered on the development machine and then go through a normal release mechanisms to production via testing servers etc.
I have recently installed the client tools for SQL Server 2005 on my desktop to evaluate the product. The 2005 DB instance is running on a separate server.
So, I have the dev edition of SQL 2000 and the 2005 client tools (including BI Dev Studio etc.) on my workstation.
I have recently had to make changes to a 2000 DTS package and used my 2000 Enterprise Manager to do so. No problem - it saved and tested fine on my workstation.
But when I try to release it to another server, or open the package using Enterprise Manager from another machine that does not have SQL 2005 installed, I get an error message 'Unspecified error'. This I've seen before when trying to open packages created in v2000 using v7, or where the service packs are different between machines.
Digging around my workstation and comparing some of the DLLs I know to be required to distribute DTS packages (from RDIST.txt) it seems that some of the SQL 2000 dll files have been updated by my 2005 installation.
E.g. DTSFFILE.DLL on my machine is 2000.85.1054.0, whilst on any 'clean' 2000 machine it is at version 2000.80.760.0.
Surely it can't be right that SQL 2005 has newer versions of components for SQL 2000 than are available with the latest SP for the actual product! Especially considering that the installation of 2005 does not even allow you to edit 2000 DTS packages through Management Studio without a 'special' download of the feature pack (which, by the way, does not work very well either).
So am I to conclude that you cannot run side-by-side installations of SQL 2000 and 2005 on a single machine and expect 2000 to run as it did previously?!
What should I have done? Is there anything that can be done other than restoring from backup? How does one know if the database is really recovering or is EM just joking? I can wait 2 hours before starting the restore.
I was BCPing 12 million rows into a staging table. I used the '-b' option to commit every 20K rows, which I thought would clear the log in batches. After the process, EM appeared to show the transaction log as empty. Upon inspecting the BCP output file I discovered the message that the BCP did not complete because syslogs was full. I could not do a truncate transaction log or a dump database. I tried to do a truncate transaction with no_log and it appeared to just hang. I stopped the SQL Server thinking I could dump the transaction log, but could not start the SQL Server again. I then stopped the NT Server because 'if all else fails'. The SQL Server started but the user database is marked as recovering.
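For reference, a sketch of the kind of batched load and log truncation being described here - the database, table, file and server names are placeholders, and the log-truncation statement is the old DUMP syntax that later releases deprecate:

-- load in batches of 20,000 rows so each batch commits separately
exec master..xp_cmdshell
    'bcp StagingDB..staging_table in C:\data\rows.dat -c -b 20000 -S MYSERVER -T'

-- old-style log truncation; note that -b only allows the log to be cleared
-- between batches, it does not clear it by itself, so the database needs
-- 'truncate log on checkpoint' set or periodic log dumps like this one
dump transaction StagingDB with no_log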
My DB size went from 500MB to 10GB between 8/1998 and 12/2004, but it is now 16GB (from 1/2005 to 5/2005). I don't know why the data size is growing so fast (it has grown by 6GB in five months)?
ssql.Append("INSERT INTO FINDINGS (Facility) ") ssql.Append("VALUES ('" & Facility & "')") Try Dim NewRow As Integer = dba.ExecuteSQL_Affected(ssql.ToString) Catch ex As Exception MsgBox("There was an error saving records.", MsgBoxStyle.Information, "No Key") Exit Sub End Try
Assessment.dtblFindings_Initialize()
Code Block (DBAccess):

Public Function ExecuteSQL_Affected(ByVal sSql As String) As Integer
    '// Execute the query like Insert, Update and Delete
    Dim RowsAffected As Integer
    Try
        If Conn.State = ConnectionState.Closed Then
            Conn.ConnectionString = "Data Source=" & oDBConfig.LocalDBLocation & "" & oDBConfig.LocalDBName & ";"
            Conn.Open()
        End If
        Dim cmd As New SqlCeCommand(sSql, Conn)
        cmd.CommandType = CommandType.Text
        RowsAffected = cmd.ExecuteNonQuery()
        cmd.Dispose()
        Conn.Close()
        Return RowsAffected
    Catch err As SqlCeException
        MsgBox(Utility.ComposeSqlErrorMessage(err))
    Catch ComErr As Exception
        MsgBox(ComErr.ToString, MsgBoxStyle.Information)
    Finally
    End Try
End Function
Code Block (Assessment Form):

Public Sub dtblFindings_Initialize()
    Dim rdr As SqlCeDataReader
    Dim dba As New DBAccess
    Dim ssql As StringBuilder = New StringBuilder
    ssql.Append("SELECT Facility FROM FINDINGS")
    rdr = dba.OpenResultSet(ssql.ToString)
    Try
        rdr.Read()
        While rdr.Read
            ...
So here is the problem. The normal flow is to initiate the insert by pressing a button. That should go through all the steps, then hit the dtblFindings_Initialize call and rebuild the datatable. However, when it happens for the first time (i.e. the first facility going into the database), the SELECT statement always returns nothing.
If I stop the application and pull the database to the desktop, the row has been inserted. So I feel that I am somehow doing something wrong - not closing something, not initializing something... argh! Please help!!
The primary database I'm responsible for has started to grow super fast - every couple of days it grows by 10% (which matches the db settings). But the recent growth doesn't match the historical growth: it took a couple of months to grow from 7 to 8 GB, but it has grown to about 24 GB in the last 2 months. Bottom line - trust my assertion that it's growing alarmingly fast.
I need help determining which objects are fueling the growth. If I know the objects, I can probably determine the cause. On the flip side, it might be legitimate data stored very poorly. I'm open to any ideas... but I need to get ahead of this problem in the next week or so, or I'm going to run out of room on the hard drive and it could start to affect my users.
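One low-tech way to see which objects are taking the space (a sketch, assuming SQL Server 2000; sp_MSforeachtable is undocumented but widely used): capture sp_spaceused output for every table into a work table, and keep the results from each run so they can be compared a few days later.

-- collect reserved/data/index sizes for every user table, largest first
create table #sizes
(
    table_name sysname,
    row_count  varchar(20),
    reserved   varchar(20),
    data       varchar(20),
    index_size varchar(20),
    unused     varchar(20)
)

insert #sizes
exec sp_MSforeachtable 'exec sp_spaceused ''?'''

select * from #sizes
order by cast(replace(reserved, ' KB', '') as int) desc

drop table #sizes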
Ok, I'm doing a football database for fixtures and stuff. The problem I am having is that in a fixture there is both a home and an away team. The tables as a result are something like this:
It's not exactly like that, but you get the point. The question is: can I do a fixture query which results in one record per fixture, showing both teams' details, the first in a home team field and the second in an away team field?
Fixture contains the details about the fixture like date and fixture id and has it been played
Team contains team info like team id, name, associated graphic
TeamFixture is the table which links the fixture to its home and away teams.
TeamFixture exists to prevent a many to many type relationship.
Make sense? Sorry if this turns out to be really easy, just can't get my head around it at the mo!
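For what it's worth, a query along these lines should do it - a sketch only, since it assumes hypothetical columns TeamFixture(FixtureID, TeamID, IsHome) and Team(TeamID, TeamName), so the names need adjusting to the real schema. The trick is joining Team (via TeamFixture) twice, once for the home side and once for the away side:

select
    f.FixtureID,
    f.FixtureDate,
    home.TeamName as HomeTeam,
    away.TeamName as AwayTeam
from Fixture f
join TeamFixture tfh on tfh.FixtureID = f.FixtureID and tfh.IsHome = 1
join Team home       on home.TeamID   = tfh.TeamID
join TeamFixture tfa on tfa.FixtureID = f.FixtureID and tfa.IsHome = 0
join Team away       on away.TeamID   = tfa.TeamID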
I am interested in adding a new row to a table 'Table05' that exists in a SQL Server 2005 database whenever a table 'Table00' in another SQL Server 2000 database has a row added to it. Can someone tell me a way to implement the above solution?
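One way to do this (a sketch; it assumes a linked server to the 2005 instance has already been created with sp_addlinkedserver, and SQL2005SRV, TargetDB, col1 and col2 are placeholder names) is a trigger on the 2000 table that pushes each new row across:

create trigger tr_ins_Table00
on Table00
for insert
as
    insert into SQL2005SRV.TargetDB.dbo.Table05 (col1, col2)
    select col1, col2
    from inserted
go

Bear in mind that the insert into Table00 then depends on the remote server being reachable (the trigger's remote insert typically runs as a distributed transaction), so a queue-and-forward approach may be safer if the link is unreliable.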
I wanted to confirm that, when using Database Mirroring, DDL updates such as table and stored procedure updates on the principal server are replicated to the mirror server.
I thought I read that non-logged operations would not be replicated to the mirror server. What would be some examples of that? At the moment you would think every entry in the database would be saved into a table, and that would be replicated.
No error is thrown, but the update is not made. Possibly due to the dreaded inline SQL being used? The data structure for these 2 servers is horrific (but that is a story in itself). The current structure (which needs immediate change) is that each employee, all 10 of them, has their own database. If the userid is 6 or higher they are on server2; if the userid is 5 or below they are on server1. Then they each have their own table that stores the data. (A story for another day, but it definitely needs an overhaul.) However, here is the sql -- what needs to be altered to work with the current data structure and still get the updates needed? Or we can scratch the update statements entirely if we can get the correct counts needed.
I will have many client databases that will be updated. When they are updated I need to transfer some of that data to a central Server database somewhere on the Internet. Note, schemas do not necessarily match.
Transferring the data (Web Services, Remoting) is not a problem.
What I am looking for is a cool, correct, advisable way, for the client database to notify me of an Insert, Update or Delete. I can then initiate the connection to the Server and 'push' the data (maybe pull some back from the Server too).
Obviously Triggers may be a place to start... But I need to know 'external to the database' that an update has happened or can I send the data from within the SQL Server assembly?
Anyone have any ideas or a technique, maybe something new in SQL Server 2005 (all databases will be 2005), for achieving this? Just so I don't go down the 'wrong' path...
There are some BLOB's involved (1-2 page PDF's), if that makes a difference.
I envisage that the process transferring the data will be a Windows Service running on each client. The connection may not always be available, so some kind of 'to do' list of outstanding data to be transferred is required.
I'm just starting on this, so any pointers would be great, I'm sure it's all been done before ;)
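To make the trigger idea concrete, a minimal sketch (SQL Server 2005 syntax; dbo.Orders, order_id and the queue layout are all hypothetical): each table gets a trigger that writes a row into a local 'to do' table, the Windows Service polls that table, pushes the referenced rows (and their BLOBs) to the central server when a connection is available, and deletes the queue entries it has handled.

create table dbo.SyncQueue
(
    queue_id    int identity primary key,
    table_name  sysname      not null,
    row_key     varchar(100) not null,
    change_type char(1)      not null,   -- I, U or D
    change_time datetime     not null default(getdate())
)
go

create trigger tr_Orders_Sync
on dbo.Orders
for insert, update, delete
as
    -- rows only in deleted are deletes
    insert dbo.SyncQueue (table_name, row_key, change_type)
    select 'Orders', cast(order_id as varchar(100)), 'D'
    from deleted
    where order_id not in (select order_id from inserted)

    -- rows in inserted are inserts or updates
    insert dbo.SyncQueue (table_name, row_key, change_type)
    select 'Orders', cast(order_id as varchar(100)),
           case when order_id in (select order_id from deleted) then 'U' else 'I' end
    from inserted
go

Service Broker is the other 2005 feature worth a look here, since it provides a reliable queue and an activation mechanism out of the box.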
We're in a business where customers often ask for a foolproof scheme that even prevents folks with DB privileges from fraudulently inserting, updating or deleting data. The scheme must be so airtight that a judge can be convinced of its reliability.
Hi,

I have two linked SQL Servers and I am trying to get remote writes working correctly (fast). I have configured the DB link on both machines to point at each other's DB. I have security set up to map each other's server logins, and the Server Options Collation Compatible, Data Access, RPC, RPC Out and Use Remote Collation are all checked.

My problem is that when an SP performs

Begin Transaction
Update Local Table
Update Remote Table
Commit Tran

it takes several seconds to complete (about 7 seconds - not acceptable to us). This is due to the remote update - how can I improve the response time?

Example of a stored procedure that takes time, where ACSMSM is a remote (linked) SQL Server:

procedure [psm].ams_Update_VFE
    @strResult varchar(8) = 'Failure' output,
    @strErrorDesc varchar(512) = 'SP Not Executed' output,
    @strVFEID varchar(16),
    @strDescription varchar(64),
    @strVFEVirtualRoot varchar(255),
    @strVFEPhysicalRoot varchar(255),
    @strAuditPath varchar(255),
    @strDefaultBranding varchar(16),
    @strIPAddress varchar(23)
as
declare @strStep varchar(32)
declare @trancount int

Set XACT_ABORT ON
set @trancount = @@trancount
set @strStep = 'Start of Stored Proc'

if (@trancount = 0)
    BEGIN TRANSACTION mytran
else
    save tran mytran

/* start insert sp code here */
set @strStep = 'Write VFE to MSM'

update ACSMSM.msmprim.msm.VFECONFIG
set DESCRIPTION = @strDescription,
    VFEVIRTUALROOT = @strVFEVirtualRoot,
    VFEPHYSICALROOT = @strVFEPhysicalRoot,
    AUDITPATH = @strAuditPath,
    DEFAULTBRANDING = @strDefaultBranding,
    IPADDRESS = @strIPAddress
where VFEID = @strVFEID;

set @strStep = 'Write VFE to PSM'

update ACSPSM.psmprim.psm.VFECONFIG
set DESCRIPTION = @strDescription,
    VFEVIRTUALROOT = @strVFEVirtualRoot,
    VFEPHYSICALROOT = @strVFEPhysicalRoot,
    AUDITPATH = @strAuditPath,
    DEFAULTBRANDING = @strDefaultBranding,
    IPADDRESS = @strIPAddress
where VFEID = @strVFEID

/* end insert sp code here */
if (@@error <> 0)
begin
    rollback tran mytran
    set @strResult = 'Failure'
    set @strErrorDesc = 'Fail @ Step :' + @strStep + ' Error : ' + @@Error
    return -1969
end
else
begin
    set @strResult = 'Success'
    set @strErrorDesc = ''
end

-- commit tran if we started it
if (@trancount = 0)
    commit tran
return 0
Hello,
Can someone point me to getting the total number of inserts and updates on a table over a period of time? I just want to measure the insert and update activity on the tables.
Thanks.
- Vish
This managed application was written to run on a Symbol 3090 Win CE 5.0 scanning device. We are using the Symbol-provided classes to access the scanning interface, and a SQL Compact database on the device to collect the scanned data, then using merge replication to synchronize the scanned data when the device is docked. The problem we have experienced seems to be related to the performance when inserting and updating records in the database.
We have tested inserting/updating 1000 randomly generated records into the database. At first the time to commit a record increases while the database is flushing (the Flush Interval in the connection string property is 10 seconds by default), and then as the database size grows the time to commit every single record keeps increasing, which is causing the application to perform slowly as items are scanned into the database. However, the device program memory remains consistent as items are scanned. From our tests, I found that the time to execute either an update or insert command on a 2MB SQL Mobile database (up to 10,000 records, depending on the size of the columns) is taking nearly 2 to 2.5 seconds to complete. Below is the only code I am executing,
BEGIN TRANSACTION OUTERTXN

    BEGIN TRANSACTION
        Copy records from live to archive
    END TRANSACTION with commit or rollback
    Execute sproc to write audit log with success or fail

    IF transaction was committed
        BEGIN TRANSACTION
            Delete the archived records from live
        END TRANSACTION with commit or rollback
        Execute sproc to write audit log with success or fail
    END IF

END TRANSACTION OUTERTXN with commit if both inner transactions were successful, or rollback if either failed

If either inner transaction rolled back, execute sproc to write an audit log entry saying the whole process is rolling back.

My problem is that if the outer transaction rolls back then I am losing the two audit records because they are part of the transaction scope. I want these executes to commit even if the master transaction fails.
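One workaround that is often used for this (a sketch; it assumes SQL Server 2005 for TRY/CATCH, and dbo.AuditLog is a placeholder table): collect the audit rows in a table variable, because table variables are not affected by ROLLBACK, and write them to the real audit table after the transaction has finished either way.

declare @audit table (log_time datetime, step varchar(100))

begin transaction OuterTxn
begin try
    -- ... copy records from live to archive ...
    insert @audit values (getdate(), 'archive copy succeeded')
    -- ... delete the archived records from live ...
    insert @audit values (getdate(), 'live delete succeeded')
    commit transaction OuterTxn
end try
begin catch
    rollback transaction OuterTxn
    insert @audit values (getdate(), 'whole process rolled back')
end catch

-- the table variable keeps its rows even after the rollback, so this write
-- persists regardless of how the outer transaction ended
insert dbo.AuditLog (log_time, step)
select log_time, step from @audit

On SQL 2000 the same idea works with @@ERROR checks in place of TRY/CATCH.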
I have a SQL 2005 Express database uploaded to my website with important information in it.
Now, I have had to make some table changes and need to update the online database.
I am not sure if the 'Copy Website' function in Visual Studio 2005 will update the database structure and data or will simply overwrite it.
Does anybody know the answer? If it overwrites it, would you please point me to information on how can I update the database structure and data without ruining it?
I will be getting data in either Excel or Access form on a daily basis. I would like to automate the process of converting this (Excel or Access) data to a table in an existing SQL database. Since this conversion needs to be performed on a daily basis, note that I need to update the table that contains data from the day before.
Is it possible to do this, and if it is possible, can someone tell me how to do it?
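A sketch of one way to do the daily load for the Excel case (it assumes the Jet OLE DB provider is available on the SQL Server machine; the file path, sheet name, column names and target table are placeholders), which could then be scheduled as a DTS package step or SQL Agent job:

-- replace yesterday's rows with today's spreadsheet contents
delete from dbo.DailyImport

insert into dbo.DailyImport (Col1, Col2, Col3)
select Col1, Col2, Col3
from openrowset('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\imports\daily.xls;HDR=YES',
                'select * from [Sheet1$]')

An Access .mdb source works the same way with the Jet provider, just with a different connection string and a table name in place of [Sheet1$].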
I have a project that consists of a SQL db with an Access front end as the user interface. Here is the structure of the table on which this question is based:
Code Block

create table #IncomeAndExpenseData
(
    recordID       nvarchar(5)    NOT NULL,
    itemID         int            NOT NULL,
    itemvalue      decimal(18, 2) NULL,
    monthitemvalue decimal(18, 2) NULL
)

The itemvalue field is where the user enters his/her numbers via Access. There is an IncomeAndExpenseCodes table as well which holds item information, including the itemID and entry unit of measure. Some itemIDs have an entry unit of measure of $/mo, while others are entered in terms of $/yr, others in %/yr.
For itemvalues of itemIDs with entry units of measure that are not $/mo a stored procedure performs calculations which converts them into numbers that has a unit of measure of $/mo and updates IncomeAndExpenseData putting these numbers in the monthitemvalue field. This stored procedure is written to only calculate values for monthitemvalue fields which are null in order to avoid recalculating every single row in the table.
If the user edits the itemvalue field there is a trigger on IncomeAndExpenseData which sets the monthitemvalue to null so the stored procedure recalculates the monthitemvalue for the changed rows. However, it appears this trigger is also setting monthitemvalue to null after the stored procedure updates the IncomeAndExpenseData table with the recalculated monthitemvalues, thus wiping out the answers.
How do I write a trigger that sets the monthitemvalue to null only when the user edits the itemvalue field, not when the stored procedure puts the recalculated monthitemvalue into the IncomeAndExpenseData table?
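A sketch of the usual pattern for this (assuming the real table is the permanent IncomeAndExpenseData keyed on recordID and itemID; adjust names as needed): test IF UPDATE(itemvalue) inside the trigger, which is true only when the itemvalue column appears in the UPDATE statement's SET list. The stored procedure only sets monthitemvalue, so it will not trip the reset.

create trigger tr_upd_IncomeAndExpenseData
on IncomeAndExpenseData
for update
as
if update(itemvalue)
begin
    update d
    set monthitemvalue = null
    from IncomeAndExpenseData d
    join inserted i
      on d.recordID = i.recordID
     and d.itemID = i.itemID
end
go

Note that UPDATE(itemvalue) is also true when itemvalue is set to the same value it already had; if that matters, compare inserted.itemvalue with deleted.itemvalue in the WHERE clause as well.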
I work in an organisation and we have to find a solution for data consistency in the database. Our partners send details to the organisation which are inserted directly into the database, so we want to create a new database as a buffer database, insert the information from the partners there, and then update the main database from it. Is there a better solution than that?
Hello everyone,
I have a web control that uses database structures a lot; it uses the system tables in SQL to read column information from tables. To ease the load on the SQL server I have a property that stores this information in a cache, and everything works fine. I am doing some research to find out whether there is any way to get information from the SQL server that the structure of a table has changed. I want to know if a column or table has changed any values, like datatype, name, properties, etc.
Any suggestions out there?!
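One way to detect a structure change without re-reading all of the metadata on every request (a sketch, assuming SQL Server 2000; 'MyTable' is a placeholder): take a checksum over the column metadata, cache it alongside the column information, and re-read only when the checksum differs.

select checksum_agg(binary_checksum(column_name, data_type,
                                    isnull(character_maximum_length, -1)))
from information_schema.columns
where table_name = 'MyTable'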
I have a system that basically stores a database within a database (I'm sure lots of you have done this before in some form or another). At the end of the day, I'm storing the actual data generically in a column of type nvarchar(4000), but I want to add support for unlimited text. I want to do this in a smart fashion. Right now I am leaning towards putting in 2 nullable Value fields:

ValueLong ntext nullable
ValueShort nvarchar(4000) nullable

and dynamically storing the info in one or the other depending on the size. ASP.NET does this exact very thing in its Session State model; look at the ASPStateTempSessions table. This table has both a SessionItemShort of type varbinary(7000) and a SessionItemLong of type image.

My question is: is it better to use varbinary(7000) and image? I'm thinking maybe I should go down this path, simply because ASP.NET does, but I don't really know why. Does anyone know what would be the benefit of using the varbinary and image datatypes? If it's just to allow saving of binary data, then I don't really need that right now (and I don't think ASP.NET does either). Are there any other reasons?

thanks,
dave
Hi All,
Can you please suggest me some books for relational database design or database modelling (knowledgeable yet simple), i.e. from which we could learn database relationships (one to many, many to one, etc.), building ER diagrams, proper usage of ER diagrams in our database (primary key / foreign key relations), designing small modules, relating tables and everything that relates to database design... Because I think database design is the crucial part of databases, and we must know the design part first before starting up with databases.
Thanks, and very grateful to all of you.
Vikas
Hi... I have data that I am getting through a dbf file, and I am dumping that data to a SQL Server. Then, after scrubbing it, I take the data from the SQL Server and put it into the production database. Right now my stored procedure handles a single plan only, but now there may be two or more plans together in the same SQL Server database which I need to scrub, and then update if that particular plan already exists or insert if it doesn't...
This is my sproc:

ALTER PROCEDURE [dbo].[usp_Import_Plan]
    @ClientId int,
    @UserId int = NULL,
    @HistoryId int,
    @ShowStatus bit = 0 -- Indicates whether status messages should be returned during the import.
AS

SET NOCOUNT ON

DECLARE @Count int, @Sproc varchar(50), @Status varchar(200), @TotalCount int

SET @Sproc = OBJECT_NAME(@@ProcId)

SET @Status = 'Updating plan information in Plan table.'
UPDATE Statements..Plan
SET PlanName = PlanName1,
    Description = PlanName2
FROM Statements..Plan cp
JOIN (
    SELECT DISTINCT PlanId, PlanName1, PlanName2
    FROM Census
) c ON cp.CPlanId = c.PlanId
WHERE cp.ClientId = @ClientId
  AND ( IsNull(cp.PlanName,'') <> IsNull(c.PlanName1,'')
     OR IsNull(cp.Description,'') <> IsNull(c.PlanName2,'') )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Updated ' + Cast(@Count AS varchar(10)) + ' record(s) in ClientPlan.'
END
ELSE
BEGIN
    SET @Status = 'No records were updated in Plan.'
END

SET @Status = 'Adding plan information to Plan table.'
INSERT INTO Statements..Plan ( ClientId, ClientPlanId, UserId, PlanName, Description )
SELECT DISTINCT @ClientId, CPlanId, @UserId, PlanName1, PlanName2
FROM Census
WHERE PlanId NOT IN (
    SELECT DISTINCT CPlanId
    FROM Statements..Plan
    WHERE ClientId = @ClientId
      AND ClientPlanId IS NOT NULL
)

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Added ' + Cast(@Count AS varchar(10)) + ' record(s) to Plan.'
END
ELSE
BEGIN
    SET @Status = 'No information was added Plan.'
END

SET NOCOUNT OFF
So how do I do multiple inserts and updates using this stored procedure?
Hi All, I am designing a database where a few of the master tables will reside in a different database or, in some cases, on a different server. The scenario is: Server "A" with Database "A" may host the "Accounts" table, and Server "B" with Database "B" may host the "Product" table. I am designing the database "Project", which will be hosted on Server "A". My application requires these master tables (read-only access), as data inserted in my application refers to them, and there are also reports to be generated which refer to them. How do I design my database and SQL queries? I am thinking of the approach of having equivalent tables created in my database and writing a service which keeps the tables in my database in sync. This will ensure good performance during transactions and reports, as they will refer to these tables locally, as opposed to a different database or a different server.
Any thoughts on the above approach, or any better/standard way for such scenarios?
Thanks in Advance. Your inputs will be of great help.
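For illustration, a minimal sketch of the refresh the sync service could run for one of the master tables (it assumes a linked server named SRV_B pointing at Server "B"; all object and column names are placeholders), perhaps scheduled as a SQL Agent job:

begin transaction
    truncate table dbo.Product_Local
    insert dbo.Product_Local (ProductId, ProductName, UnitPrice)
    select ProductId, ProductName, UnitPrice
    from SRV_B.DatabaseB.dbo.Product
commit transaction

Wrapping the truncate and reload in one transaction keeps readers from ever seeing an empty local copy mid-refresh.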
Online US searchable map of the 50 US states. Users search with the following criteria: query records by selecting state, county, then record. Each county database has 10-20 tables. All databases combined = 500MB and TLogs = 100MB.
How would you re-design a relational DB where users could query data by state-county-record? Currently the DBs are created by the county of each state, which creates hundreds of DBs in SQL Server with no relationship to each US state. What would be the best design to ensure good performance, data integrity and maintenance? Would you create 1 DB with all 50 states, create 4 DBs and divide by region (N, S, E, W), 50 DBs (one per state), or leave it as is with each county having its own DB? Any suggestions would be appreciated.