I have SQL Server 2000 (SP1) on Windows 2000, and a database of about 4GB. The server has 1GB of physical memory, about 900MB of which is allocated to SQL Server through the dynamic memory configuration screen.
A scheduled database maintenance job was created using the standard maintenance plan tool; only Update Data Optimization was included in the job. Every time it ran, this job froze the whole server - all screens, SQL client access, even the keyboard - and the machine had to be rebooted.
My understanding of Optimization is that it reindexes each index of each table in the database. So I picked the largest table in the database - about 1.5 million rows - and ran DBCC DBREINDEX manually against each index, one at a time, in Query Analyzer. They all ran through without locking up the server. However, I noticed that each DBREINDEX statement pushed CPU and physical I/O extremely high. I suspect that, since the Optimization job runs DBCC DBREINDEX on all tables together, it may exhaust system resources and cause the server problem, but I am not 100% sure, nor do I know how to confirm this.
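For reference, this is the shape of what I ran manually in Query Analyzer, one index at a time (the table and index names here are stand-ins for my real ones):

    -- rebuild a single index, keeping the original fill factor (0)
    DBCC DBREINDEX ('dbo.MyLargeTable', 'IX_MyLargeTable_Col1', 0)

Each statement like this completed fine on its own; it was only the all-tables-at-once Optimization job that hung the server.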
Confusingly enough, other databases on the same server don't have the problem; they all have maintenance plans and run fine. I ran all sorts of DBCC checks against the problem database and no errors were reported. I even restored the database from backup onto another SQL 2000 server and ran Optimization there, and it had no problem.
I am not sure whether this is a server issue, a database issue, or a SQL Server bug. Has anybody had the same experience?
In SQL 2000, working with the Maintenance Plan Wizard, what are the best settings and values to choose in the "Update Data Optimization information" window?
I have a small, tricky problem here and need the help of all you experts.
Let me explain in detail. I have three tables:
1. Emp table: columns EMPID and DeptID
2. Dept table: columns DeptName and DeptID
3. Team table: columns Date, EmpID1, EmpID2, DeptNo
There is a stored procedure that runs every day and, for EVERY DeptID that exists in the Dept table, selects two employees from the Emp table and puts them in the Team table. Assuming there are several thousand departments in the Dept table, a tremendous amount of data is entered into the Team table every day.
If I continue to run the stored proc for a month, the Team table will have a huge number of rows, and I have to retain all the records.
The real problem is when I want to retrieve data for an employee (EmpID1 or EmpID2) from the Team table and view the related details like Date, DeptNo, and EmpID1 or EmpID2. How do we optimize data retrieval and storage for the Team table? I cannot use partitioning, as I have SQL Server 2005 Standard Edition.
Please help me to optimize the query and data retrieval time from Team table.
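In case a sketch helps the discussion: what I have been experimenting with is an index on each employee column plus a UNION ALL query, so each branch can seek on its own index. The index names, the INT type for the IDs, and the @EmpID variable are my own assumptions:

    DECLARE @EmpID INT;
    SET @EmpID = 42;

    -- one nonclustered index per employee column (SQL Server 2005 supports INCLUDE)
    CREATE NONCLUSTERED INDEX IX_Team_EmpID1 ON dbo.Team (EmpID1) INCLUDE ([Date], DeptNo);
    CREATE NONCLUSTERED INDEX IX_Team_EmpID2 ON dbo.Team (EmpID2) INCLUDE ([Date], DeptNo);

    -- UNION ALL instead of OR, so each branch can seek its own index
    SELECT [Date], DeptNo, EmpID1, EmpID2 FROM dbo.Team WHERE EmpID1 = @EmpID
    UNION ALL
    SELECT [Date], DeptNo, EmpID1, EmpID2 FROM dbo.Team WHERE EmpID2 = @EmpID;

I am not sure this is the right approach, so corrections welcome.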
I am not sure this is the best forum, but I could not find anything that fits perfectly, so let's just try...
Problem: I have a GUI with a bunch of data. Users can insert or edit data. Once they hit Save, a method goes row by row and queries the database to see if the row already exists - that is one roundtrip. Then, depending on the result, the code issues either an INSERT or an UPDATE statement - a second roundtrip. This is the best I have come up with, but I feel it is too slow.
In some other applications I have DataGridViews that let users insert/edit data, and the save is done with the .NET TableAdapter.Update(dataTable) API. I don't know what this API does under the hood, but it seems much faster. Unfortunately, I cannot go that route. Does anyone have suggestions on how to optimize the approach described in the paragraph above?
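The only alternative I have come up with so far is to push all the grid rows to the server in one go (e.g., with SqlBulkCopy into a staging table) and let two set-based statements do the exist-check, update, and insert there - two roundtrips total instead of two per row. This is just a sketch; #Stage, dbo.Target, and the columns are made-up names:

    -- staging table filled in one shot from the client (e.g., SqlBulkCopy)
    CREATE TABLE #Stage (Id INT NOT NULL PRIMARY KEY, Val VARCHAR(100) NULL);

    -- update every row that already exists...
    UPDATE t
    SET t.Val = s.Val
    FROM dbo.Target AS t
    JOIN #Stage AS s ON s.Id = t.Id;

    -- ...then insert the ones that don't
    INSERT INTO dbo.Target (Id, Val)
    SELECT s.Id, s.Val
    FROM #Stage AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);

Is that a reasonable pattern, or is there something better?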
Would any of you give me ideas on how we could optimize a report on data mining models? (As we know, for a data mining report, we have to select the mining model and the case table.)
I hope that is clear enough for your advice and help.
Thanks a lot in advance and I am looking forward to hearing from you shortly.
Hi all. Where I work we have a standalone system that writes information to an event log. Currently this event log is in .mdb (MS Access) format. The problem is that the .mdb gets very slow to access after 100,000 rows or so, so it needs to be cleared out regularly. We have long discussed logging the events to a SQL Server instead of an .mdb file.
I have written a VB program to test the two DB formats, and I expected MS SQL Server 2005 to be faster at reading/writing than the .mdb. Both the server and the .mdb are local to the system (it's a standalone system), so we know it's not the network making the SQL Server slower. So here is my question: does anyone know any good tips/tricks in the server configuration options to speed it up or generally improve performance?
The table definitions are the same in both SQL Server and the .mdb file:

Table: event_log_0000_000000
Module - Text
Event_date - Text
Event_Time - Text
Event - Text
Record_Number - int, primary key

I know it would probably be better to have Event_date and Event_Time as datetime types, but I'm not in charge of that decision. The data/table doesn't matter too much; I just need to prove that the SQL Server is better (and faster) than an .mdb file.
The VB program uses DAO to access the .mdb and ADODB to access the SQL Server - this is the only difference in how the DBs are accessed, and I don't think it would account for the slowness of the SQL Server.
This is my first post here, so I’ve probably missed out some vital information, so please ask.
Also, sorry if this is the wrong place to post this question; it sort of covers Access, SQL Server 2005, and database programming, so I wasn't sure.
I ran the following statement and it will not update beyond 7 million or so rows, out of about 38 million total. I keep checking the updated row counts, and after half a day they are still the same, so I know something is wrong, because it was rolling through with no problem when I initiated it. I need to complete this ASAP, which adds to my frustration. The Acct_Num_CH field is an encrypted field, FYI.
SET ROWCOUNT 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL

WHILE @@ROWCOUNT > 0
BEGIN
    SET ROWCOUNT 10000
    UPDATE [dbo].[CC_Info_T]
    SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    WHERE [Acct_Num_CH] IS NOT NULL
END
SET ROWCOUNT 0
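One thing I am starting to suspect: because the SET assigns a non-NULL constant, every already-updated row still satisfies WHERE [Acct_Num_CH] IS NOT NULL, so each batch can keep rewriting the same rows and the loop never runs out of work. If that is right, excluding already-converted rows should fix it (sketch):

    SET ROWCOUNT 10000
    UPDATE [dbo].[CC_Info_T]
    SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    WHERE [Acct_Num_CH] IS NOT NULL
      AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'  -- skip rows already done

    WHILE @@ROWCOUNT > 0
    BEGIN
        SET ROWCOUNT 10000
        UPDATE [dbo].[CC_Info_T]
        SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
        WHERE [Acct_Num_CH] IS NOT NULL
          AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    END
    SET ROWCOUNT 0

Can anyone confirm whether that is the actual cause?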
Hi all, I was wondering if you could help me out. We run SQL optimization jobs on 2 of our databases, and when they run they crash our server - the system log reports an unexpected shutdown. When we take these jobs out of action, the server does not crash.
I'm not sure whether this could be a CPU or memory related issue, or a SQL configuration issue.
It is said that, for the same task, different SQL SELECT statements may have different efficiency. Microsoft provides a tool, Query Analyzer, but I have no idea how to use it for this.
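From what I have gathered so far, Query Analyzer can at least show the cost of each candidate statement, along these lines (the pubs sample table is just for illustration):

    -- show per-statement I/O and CPU figures in the Messages pane
    SET STATISTICS IO ON
    SET STATISTICS TIME ON

    SELECT COUNT(*) FROM pubs.dbo.authors   -- candidate statement goes here

    SET STATISTICS IO OFF
    SET STATISTICS TIME OFF

As I understand it, comparing the logical reads and CPU time of each variant (or the graphical execution plan) is how people pick the cheaper statement, but I would welcome a proper explanation.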
I posted this on the .NET Framework inside Sql Server forum as well. Sorry if the cross-post offends anybody.
I upgraded my primary production server this morning to SQL 2005. Everything went fairly smoothly, but a couple of hours after my installation was complete, I found the following error in my event log:
Source: .NET Runtime Optimization Service
EventID: 1101
.NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: Microsoft.ReportingServices.QueryDesigners, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91. Error code = 0x80070002
I am a little stumped since we did not install Reporting Services. We only installed Database Services, Integration Services and Workstation Components. I'm open to any suggestions on this. This does not seem to be negatively affecting our server, but I do want to resolve it as soon as possible.
Msg 209, Level 16, State 1, Line 8
Ambiguous column name 'PartNrFabrikant'.
Msg 209, Level 16, State 1, Line 8
Ambiguous column name 'omschrijving'.
Msg 209, Level 16, State 1, Line 8
Ambiguous column name 'verkoopprijs'.
Msg 116, Level 16, State 1, Line 13
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
I need to update the Denominator column in one row with the value from the Numerator column in a different row. For example, the last row in the table is:

MeasureID = c010, GroupID = A, Numerator = 92, Denominator = NULL
I need to update the Denominator, which is currently NULL, with the value from the Numerator where the MeasureID=c001 and GroupID=A.
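If it helps, the shape I am trying to write is an UPDATE with a self-join, roughly like this (dbo.Measures is a stand-in for my real table name):

    UPDATE tgt
    SET tgt.Denominator = src.Numerator
    FROM dbo.Measures AS tgt
    JOIN dbo.Measures AS src
        ON  src.MeasureID = 'c001'
        AND src.GroupID   = 'A'
    WHERE tgt.MeasureID = 'c010'
      AND tgt.GroupID   = 'A'
      AND tgt.Denominator IS NULL;

Is that the right construction, or is there a cleaner way?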
Hi, I am new to this. How should I save user-input values into a SQL Server database? I am using ASP.NET and C#. The fields are: userId, name, description, startTime, endTime, audiencePassword, presenterPassword.
I know it must be simple, but I haven't worked on this before.
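What I have pieced together so far is a parameterized INSERT like the one below, which the C# code would execute through a SqlCommand, adding each @parameter with Parameters.AddWithValue. The table name dbo.Presentations is just my guess:

    INSERT INTO dbo.Presentations
        (userId, name, description, startTime, endTime, audiencePassword, presenterPassword)
    VALUES
        (@userId, @name, @description, @startTime, @endTime, @audiencePassword, @presenterPassword)

Is that the recommended way, or am I missing something?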
Hi, I have a question about client-server architecture. I have a VB application with an ADO connection to a table in a database on SQL Server 7.0. When a client updates a field in my table, is it possible for the SERVER to notify all clients connected to that table that the data has changed, without the clients doing anything - for example, without a client-side timer polling the data on the server? Thanks.
I'm having issues with bulk updates in SQL Server. I'm using SAP BODS as the ETL tool and have some 20,000 updates. The target table has approximately 0.5 million records and has a clustered index on the id column. I have selected the upsert option in BODS. The same setup also exists for Sybase IQ; IQ has a bulk update option that gives very good performance.
In IQ the same update load finishes in about 9 minutes, where SQL Server takes more than 2 hours, which doesn't seem right. When I look into it, the update is what slows the whole package down. Sybase generates a query of the form: where the ID is present, update; else insert. Is there any way to make bulk updates work faster in the SQL Server environment?
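The workaround I am currently testing is to land the 20,000 changed rows in a staging table and run one set-based MERGE on the SQL Server side, instead of letting the tool update row by row. This is a sketch only - it assumes SQL Server 2008 or later and placeholder table/column names:

    -- staging table bulk-loaded by the ETL tool
    CREATE TABLE dbo.stage_updates (id INT NOT NULL PRIMARY KEY, payload VARCHAR(200) NULL);

    -- one MERGE, joined on the clustered index key of the target
    MERGE dbo.target_table AS t
    USING dbo.stage_updates AS s
        ON t.id = s.id
    WHEN MATCHED THEN
        UPDATE SET t.payload = s.payload
    WHEN NOT MATCHED THEN
        INSERT (id, payload) VALUES (s.id, s.payload);

Would that be expected to close the gap with IQ?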
I have a table dbo.Sales that contains all sales records. There is a column in that table called ItemNumber that I'd like to match with ItemNumber in a flat file and update the ItemCost based on the ItemCost column in the flat file.
So while there will be many sales records for each ItemNumber, I need to go through and update the ItemCost in each sales record based on the corresponding ItemCost in the flat file. Does this make sense? I really need this for court and I can't figure out how to do it. I took a SQL course about 7 years ago but have forgotten everything.
There will be many sales records for each ItemNumber in the database table. I need to update each one with correct cost based on the item number and cost mapping from flat file.
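From what I remember of set-based SQL, no loop should be needed: bulk-load the flat file into a temp table, then join it to dbo.Sales in a single UPDATE. The file path, delimiter, and data types below are guesses on my part:

    -- load the flat file into a temp table
    CREATE TABLE #ItemCosts (ItemNumber VARCHAR(50) NOT NULL, ItemCost MONEY NOT NULL);

    BULK INSERT #ItemCosts
    FROM 'C:\data\itemcosts.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- one UPDATE fixes every sales record for every item number
    UPDATE s
    SET s.ItemCost = c.ItemCost
    FROM dbo.Sales AS s
    JOIN #ItemCosts AS c ON c.ItemNumber = s.ItemNumber;

Is that roughly right?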
I am working in SSIS, where I have a flat file as a source and need to import it into a database. If the destination table already has the record, I need to update it; if it does not exist, I just need to insert the whole row.
Hi, I have an Excel file with 400 rows of old values and the corresponding new values. My table currently has 10 columns, out of which 3 columns use the old values specified in the Excel file. I need to update those old values in the columns with the new values from the Excel file. Please guide me on how to proceed. Thanks in advance!
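A sketch of one way I can imagine doing it, assuming the 400 Excel rows are first imported into a small mapping table (for instance with the Import/Export Wizard); dbo.ValueMap, dbo.MyTable, and Col1-Col3 are placeholder names:

    -- mapping table loaded from the Excel file
    CREATE TABLE dbo.ValueMap (OldValue VARCHAR(100) NOT NULL PRIMARY KEY, NewValue VARCHAR(100) NOT NULL);

    -- one UPDATE per affected column
    UPDATE t SET t.Col1 = m.NewValue FROM dbo.MyTable AS t JOIN dbo.ValueMap AS m ON m.OldValue = t.Col1;
    UPDATE t SET t.Col2 = m.NewValue FROM dbo.MyTable AS t JOIN dbo.ValueMap AS m ON m.OldValue = t.Col2;
    UPDATE t SET t.Col3 = m.NewValue FROM dbo.MyTable AS t JOIN dbo.ValueMap AS m ON m.OldValue = t.Col3;

Is that a sensible way, or is there a neater approach?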
Hi, I have a table in SQL Server 2005 which has [Id] and [Name] as its columns. I also have an Oracle database which has a similar table.
What I want to do is as follows: in an SSIS package, I want to pick up details from SQL Server and update the Oracle table. And this should be done without using a linked server connection.
Can someone guide me on how to specify an update statement in the destination data flow?
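For what it is worth, what I was imagining is an OLE DB Command transformation pointed at the Oracle connection, with a parameterized statement like the one below; the ? markers would map to the Id and Name input columns (the Oracle schema/table name is illustrative):

    UPDATE MY_SCHEMA.MY_TABLE
    SET NAME = ?
    WHERE ID = ?

Is that the right component to use for this?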
I have two SQL Server databases on different servers. I need to do a select from one database and, based on the results, update a record in the other. Thanks.
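The shape I think I need, assuming a linked server to the other machine is already set up (all names below are placeholders), is a single joined UPDATE:

    UPDATE t
    SET t.SomeColumn = s.SomeColumn
    FROM dbo.TargetTable AS t
    JOIN [OtherServer].RemoteDb.dbo.SourceTable AS s
        ON s.KeyCol = t.KeyCol;

Is that the standard way, or should the select and update be kept separate?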
I'm presented with a problem where a database table must be migrated via a "custom tool" into a new table that has special-character requirements which didn't exist in the source database. My data resides in a SQL Server 2008 R2 instance.
I envision a one-time query that loops through the selected records and replaces the offending characters with --, but I'm having trouble understanding how this works.
There are roughly 2,500 records that meet the criteria of "contains bad characters", frequently containing multiple distinct bad characters, and the table contains roughly 100,000 rows.
Special Characters are defined as #%&*:<>?/{}|~ and ..
While the field is called "Filename", it isn't always a filename; it is a parent/child table where folder names are also stored.
The examples I'm finding are all oriented around SELECT statements, changing the output of what is returned, whereas I'd rather just fix the entire column using an UPDATE. Initial testing with REPLACE fell short because there isn't always just a single offending character in a string.
As a better solution, I found an example using a user-defined function to modify the output of a SELECT, but I cannot use that UDF in an UPDATE.
My alternative is to learn enough C# to modify the "migration tool" to do this in transit, but I know even less about C# than I do about SQL.
I gather I want to use @@ROWCOUNT to loop through the rows but I really can't put it all together in a cohesive way.
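This is as far as I have gotten - a position-driven loop over the bad-character list rather than @@ROWCOUNT, with one UPDATE per character (dbo.FileNames and [Filename] stand in for my real table and column; unverified):

    DECLARE @bad VARCHAR(32);
    DECLARE @i INT;
    SET @bad = '#%&*:<>?/{}|~';
    SET @i = 1;

    WHILE @i <= LEN(@bad)
    BEGIN
        -- swap every occurrence of one offending character for --
        UPDATE dbo.FileNames
        SET [Filename] = REPLACE([Filename], SUBSTRING(@bad, @i, 1), '--')
        WHERE CHARINDEX(SUBSTRING(@bad, @i, 1), [Filename]) > 0;

        SET @i = @i + 1;
    END

    -- the two-character '..' sequence handled on its own
    UPDATE dbo.FileNames
    SET [Filename] = REPLACE([Filename], '..', '--')
    WHERE CHARINDEX('..', [Filename]) > 0;

Can someone confirm or correct this?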
When I enter over 4,000 characters into any ntext field in my SQL Server 2005 database (directly in the database or through the application), I get an error saying the data could not be updated because string or binary data would be truncated. Has anyone ever seen this? I cannot figure out what is causing it; ntext should be able to hold a lot more data than this...
I want to make data changes in a read_only database, so I must first set the database to read_write. While the database is in read_write mode, I want to be sure that no one else makes changes to it.
To that end, I wrote the code below, but I suspect that after setting the database to read_write and before setting it to single_user, another user could slip in DML. Is the code below enough for this operation, or is there a better way?
A reminder: a read_only database cannot be set to single_user mode; that's why you must set the database to read_write first.
The code:

use master
alter database xxx set read_write with rollback immediate
alter database xxx set single_user with rollback immediate

use xxx
update tablexxx set columnxxx = yyy

use master
alter database xxx set read_only with rollback immediate
alter database xxx set multi_user with rollback immediate
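One variation I have been wondering about, to close the gap between the two ALTER statements: the SET clause accepts a comma-separated list of options, so read_write and single_user can, as far as I can tell, be applied in one statement. This is an untested sketch:

use master
alter database xxx set read_write, single_user with rollback immediate

use xxx
update tablexxx set columnxxx = yyy

use master
alter database xxx set read_only, multi_user with rollback immediate

Can anyone confirm whether combining the options like this actually removes the window?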
Run directly on the server, this update completes almost instantly:

update xxx_TableName_xxx
set d_50 = 'DE', modify_timestamp = getdate(), modified_by = 1159
where enc_id in
('C24E6640-D2CC-45C6-8C74-74F6466FA262',
'762E6B26-AE4A-4FDB-A6FB-77B4782566C3',
'D7FBD152-F7AE-449C-A875-C85B5F6BB462')
But from the linked server, the same update takes 8 minutes:
update [xxx_servername_xxxx].xxx_DatabaseName_xxx.dbo.xxx_TableName_xxx
set d_50 = 'DE', modify_timestamp = getdate(), modified_by = 1159
where enc_id in
('C24E6640-D2CC-45C6-8C74-74F6466FA262',
'762E6B26-AE4A-4FDB-A6FB-77B4782566C3',
'D7FBD152-F7AE-449C-A875-C85B5F6BB462')
What settings or whatever would cause this to take so much longer from the linked server?
Edit: Other queries from the linked server do not behave this way. In the stored procedure, where we have measured how long each query/update takes, this particular query is the culprit for the time eaten. I thought it had to do specifically with this table; however, as stated, when a query window is opened directly on that server, the update takes no time at all.
2nd Edit: Could it have to do with the linked server setting Collation Compatible? Right now it is set to false. I also asked this in a message below, but figured I should put it up here.
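One workaround I plan to test: executing the statement on the remote server itself, so the WHERE clause is evaluated there rather than across the link. Something like this, assuming SQL Server 2005 or later and RPC Out enabled for the linked server (sketch only):

    EXEC ('update xxx_DatabaseName_xxx.dbo.xxx_TableName_xxx
           set d_50 = ''DE'', modify_timestamp = getdate(), modified_by = 1159
           where enc_id in (''C24E6640-D2CC-45C6-8C74-74F6466FA262'',
                            ''762E6B26-AE4A-4FDB-A6FB-77B4782566C3'',
                            ''D7FBD152-F7AE-449C-A875-C85B5F6BB462'')')
    AT [xxx_servername_xxxx]

Would that be expected to avoid the row-by-row behavior over the link?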
I am hoping someone can shed light on odd behavior I am seeing when running a simple UPDATE statement against a table in SQL Server 2000. I have two tables - call them Table1 and Table2 (among many) - that need certain columns updated as part of a single transaction. Each table has many columns; I have purposely limited the update to just ONE column while trying to isolate the issue.

The UPDATE runs fine against Table1, both at runtime in code and as a manual query in Query Analyzer or the query window of SQL Server Management Studio. However, when I run the UPDATE against Table2 at runtime, I get rows affected = 0, which forces the code to throw an exception (logically). When I take the SQL statement out and run it manually in Query Analyzer, it runs, BUT this is the output in the results pane:

(0 row(s) affected)
(1 row(s) affected)

How does one get two answers for a single query? I have never seen such behavior, and it is a real frustration - it makes no sense. There is only ONE row in the table containing the key value passed in, and it is the same key value as on Table1, where the SQL returns only the ONE message you expect: (1 row(s) affected). If anyone has any ideas where to look next, I'd appreciate it. Thanks.
If I have a table with one or more nullable fields, and an INSERT or UPDATE leaves one or more of these fields NULL, either explicitly or implicitly, is there a way I can set them to non-null values without interfering with the rest of the INSERT or UPDATE, as far as the other fields in the table are concerned?
EXAMPLE:
CREATE TABLE dbo.MYTABLE(
    ID NUMERIC(18,0) IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50) NULL,
    LastName VARCHAR(50) NULL,
[Code] ....
If an INSERT looks like any of the following, what can I do to change the NULL being assigned to DateAdded into a real date, preferably the value of GETDATE() at the time of the insert? I've heard of INSTEAD OF triggers, but I'm not trying to override the entire INSERT or UPDATE, just the one (maybe two) fields that are left null or explicitly set to null. The same applies to any UPDATE where DateModified is not specified or is explicitly set to NULL: I want DateModified to never end up null on an UPDATE.
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) VALUES('John','Smith',NULL)
INSERT INTO dbo.MYTABLE( FirstName, LastName) VALUES('John','Smith')
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) SELECT FirstName, LastName, NULL FROM MYOTHERTABLE
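The closest I have come up with myself is a DEFAULT for the implicit case plus a small AFTER trigger that only touches the two audit columns, so the rest of the statement is untouched. I would appreciate confirmation that this is sane; it assumes the default setting where recursive triggers are off, so the trigger's own UPDATE does not re-fire it:

    -- covers inserts that simply omit DateAdded
    ALTER TABLE dbo.MYTABLE
    ADD CONSTRAINT DF_MYTABLE_DateAdded DEFAULT (GETDATE()) FOR DateAdded;
    GO

    -- covers explicit NULLs, and stamps DateModified on every insert/update
    CREATE TRIGGER trg_MYTABLE_AuditDates ON dbo.MYTABLE
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE t
        SET t.DateAdded    = COALESCE(t.DateAdded, GETDATE()),
            t.DateModified = GETDATE()
        FROM dbo.MYTABLE AS t
        JOIN inserted AS i ON i.ID = t.ID;
    END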
Hi, first post, so apologies if this sounds a bit confusing! On a weekly basis I want to insert the IDs of all active users from a users table into a timesheets table, along with the last day of the week and a submitted flag set to 0. I then plan to create a scheduled job so the script runs weekly. The 3 queries I plan to use are below.

Insert statement:
INSERT INTO TBL_TIMESHEETS (USER_ID, WEEK_ENDING, IS_SUBMITTED)
VALUES ('user ids', 'week end date', '0')

Get user IDs:
SELECT USER_ID FROM TBL_USERS WHERE IS_ACTIVE = '1'

Get the last date of the week:
SELECT DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 6)

I'm having trouble combining them, as I'm pretty new to this. Is the best approach to use a cursor? If you need any more info, let me know. Thanks in advance.
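For what it is worth, I think the three queries collapse into a single set-based INSERT ... SELECT, with no cursor needed (my own attempt, unverified):

INSERT INTO TBL_TIMESHEETS (USER_ID, WEEK_ENDING, IS_SUBMITTED)
SELECT USER_ID,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 6),  -- last day of the current week
       0
FROM TBL_USERS
WHERE IS_ACTIVE = '1'

Does that look right, or is the cursor route still better for some reason?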
I am really in need of help. I have a text file containing some data, and I want to update my database from that text file periodically, say every 12 hours. The text file itself is updated by another server program every 12 hours. Can anyone help me with this scenario? I am lost. Please help...
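A sketch of what I was imagining, in case it is the right direction: a BULK INSERT into a staging table, wrapped in a script that a SQL Server Agent job runs every 12 hours. The file path, delimiters, and column layout are all guesses on my part:

    -- staging table matching the text file layout
    CREATE TABLE dbo.stage_import (col1 VARCHAR(255) NULL, col2 VARCHAR(255) NULL);

    -- empty the stage, reload it from the file, then refresh the real table
    TRUNCATE TABLE dbo.stage_import;

    BULK INSERT dbo.stage_import
    FROM 'C:\feeds\update.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- ...then UPDATE/INSERT the real table from dbo.stage_import here...

Is that a workable approach?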
Dear all, I use one stored procedure to retrieve 3 different result sets, and in the code-behind I separate them: from the DataSet I extract three different DataTables and then show the data as needed. The main problem is that after retrieving the data from the database, I have to use a foreach loop to bind the column data to my custom classes. For example:

foreach (DataRow oDrow in MyDataTable.Rows)
{
    oClass = new Class();
    oClass.Name1 = oDrow["Name1"].ToString();
    oClass.Name2 = oDrow["Name2"].ToString();
    ....
}

1. So my first question: is any optimization possible here?
2. My result set is very long, so should I keep to just one hit to the database, or hit it more than once? Currently I am optimizing my web application; in the previous version I had to hit the database 3 or 4 times for different purposes, but now it hits only one time - yet it takes time in the code-behind to perform the different operations. Any suggestions?
I have an SP that calls about 10 stored procedures sequentially. The 10 SPs are basically complex update statements, each one individual. Is there any way to optimize this? I know putting the 10 into 1 SP would make it compile faster, but that's about it. Are there any execution tricks for stored procedures firing off sequentially, or anything else I should know?