I'm using Microsoft SQL Server Management Studio. I've created a view (a very basic two-field select with an inner join) -- I expect the query to take some time, as there are about 1.7 million records in one table and about 3 million in the join table. Problem is, I always get a timeout error when I try to run the query.
I've checked the server settings and "Execution time-out" is 0 (no time-out), both at the server and in Microsoft SQL Server Management Studio. So my question is: why am I getting a time-out when I have it set to never time out?
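One thing worth checking: the Tools > Options execution time-out applies to query windows, and if the view is being opened through the view Designer, that window may be hitting its own, shorter Designer transaction time-out instead. A minimal sanity check (dbo.MyView is a placeholder name for your view):
-- run the view's SELECT from a plain query window, which honours the
-- "Execution time-out" of 0 (unlimited) rather than any Designer limit
SELECT COUNT(*) FROM dbo.MyView;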
What is the best method for ignoring the time portion in datetime comparisons? Say I want all records on 07/08/1996 regardless of their time, or all records between 01/01/1999 and 04/01/1999, even if one of the records on 04/01/1999 has a time of 16:32:22.
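One common pattern is to compare against half-open date ranges so the time portion never matters. A sketch, assuming US-style month/day/year dates and a placeholder Orders/OrderDate table:
-- all records on 07/08/1996, regardless of time of day
SELECT * FROM Orders
WHERE OrderDate >= '19960708' AND OrderDate < '19960709'

-- all records from 01/01/1999 up to and including every time on 04/01/1999
SELECT * FROM Orders
WHERE OrderDate >= '19990101' AND OrderDate < DATEADD(d, 1, '19990401')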
I have a PHP page where the user enters a date that represents the last day of a timesheet (ts_end) and the hours worked on that timesheet. That is then written into a table where the date is a datetime type. Because the user just enters a date, the time portion of the field is set to 00:00:00. In another place, I need to sum the columns for reports submitted between the beginning of a timesheet (ts_end -6 days) and the ts_end date.
The problem is that chartreviewed values entered on the ts_end date are getting lost, because the time part of the ts_end field is 00:00:00 and the time part of dateentered for the chartreviewed value is not. For instance, using 2/4/2004 as the ts_end date loses the 192 charts.
I know I can revise the query to look for charts where dateentered is less than dateadd(d,1,ts_end) and get the right values. It seems like there has to be a way, though, to tell SQL Server to ignore the time part of a datetime field when querying.
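A sketch of that half-open range approach for the report query; the table name and summed column are taken from the description above, so treat them as placeholders:
SELECT SUM(chartreviewed)
FROM reports
WHERE dateentered >= DATEADD(d, -6, @ts_end)   -- start of the timesheet week
  AND dateentered <  DATEADD(d,  1, @ts_end)   -- anything up to midnight after ts_end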
I am trying to insert rows to a table with a unique index that has the ignore duplicate property.
My program is running fine if running locally but the entire transaction failed if it is over a linked server.
The following is an example:
create table t1 (c1 int,c2 int)
create unique clustered index i1 on t1 (c1) with IGNORE_DUP_KEY
if running locally :
select count(*) from t1
go
insert into t1 values (1,2)
insert into t1 values (1,2)
insert into t1 values (2,2)
insert into t1 values (1,2)
go
select count(*) from t1
go
The output is:
(1 row(s) affected)
(1 row(s) affected)
Server: Msg 3604, Level 16, State 1, Line 3 Duplicate key was ignored.
(1 row(s) affected)
Server: Msg 3604, Level 16, State 1, Line 5 Duplicate key was ignored.
(1 row(s) affected)
and the count(*) returns 2 at the end.
If running over a linked server:
select count(*) from linkserver.db.dbo.t1
go
insert into linkserver.dbarchive.dbo.t1 values (1,2)
insert into linkserver.db.dbo.t1 values (1,2)
insert into linkserver.db.dbo.t1 values (2,2)
insert into linkserver.db.dbo.t1 values (1,2)
go
select count(*) from linkserver.db.dbo.t1
go
The output is:
(1 row(s) affected)
(1 row(s) affected)
Server: Msg 3604, Level 16, State 1, Line 3 Duplicate key was ignored.
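If the goal is simply to get the non-duplicate rows across the linked server, one workaround (a sketch, not a fix for the IGNORE_DUP_KEY behaviour itself) is to filter the duplicates out on the sending side so the remote index never has to reject anything:
-- only insert keys that don't already exist on the remote table
INSERT INTO linkserver.db.dbo.t1 (c1, c2)
SELECT s.c1, s.c2
FROM (SELECT 1 AS c1, 2 AS c2) AS s            -- the row(s) you want to add
WHERE NOT EXISTS (SELECT 1
                  FROM linkserver.db.dbo.t1 AS t
                  WHERE t.c1 = s.c1)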
Hello - the very nature of this question seems to make no sense I know - but we received a huge volume of data (29 tables) in flat file format. I first imported them into MS Access because of its portability and it seemed to be more forgiving on imports. Now I have a complete MS Access DB with all tables, so I figured importing to SQL server should be a snap. However, on the import, I had 14 tables import successfully, and 15 failed!
Here is an example of one of the error messages I received: "Insert Error, Column 3 - status 6; Data Overflow". This was on a date/time field in Access, and here is the data contained in the referenced row/column: "8/19/4999".
The year "4999" is obviously the problem (at least I think so), and I have no idea why this successfully imported to MS Access but not to SQL Server.
What I'd like to be able to do for now (not the best practice, I know) is ignore these types of errors and just force SQL Server to take the data straight from MS Access and replicate it. We received this data from a third party, and there's no telling how many data entry errors like this could be in each table - many of the tables have over 500,000 rows, and I don't want to have to go through fixing each of these errors by hand. Anyone have any ideas?
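One way to avoid hand-fixing rows is to land the Access data in a staging table with varchar columns first, then convert only the values that are valid for the target type. A sketch, with placeholder names (date_col, staging_table) and an arbitrary cutoff year:
-- convert only values that are valid dates in a plausible range; everything else becomes NULL
SELECT CASE WHEN ISDATE(date_col) = 1 THEN
                CASE WHEN CONVERT(datetime, date_col) < '20790101'  -- adjust to what the target column can hold
                     THEN CONVERT(datetime, date_col)
                END
       END AS date_col_clean
FROM staging_table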
Is there a way to keep track in real time of how long a stored procedure has been running? What I want to do is fire off a trace from a stored procedure if that stored procedure has been running for more than, say, 5 minutes.
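Assuming SQL Server 2005 or later, one way to watch this from the outside (a sketch; the 5-minute threshold is just the example figure above) is to poll the DMVs for long-running requests, for instance from an Agent job that then starts whatever trace or alert you want:
-- requests that have been executing for more than 5 minutes
SELECT r.session_id,
       r.start_time,
       DATEDIFF(minute, r.start_time, GETDATE()) AS running_minutes,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE DATEDIFF(minute, r.start_time, GETDATE()) >= 5;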
We are using SQL Server 2008 as our database and use Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screen shot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the transaction punch record data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screen shot.
I am not super strong in SQL, but I figure I can do it using a formatting table of some sort. Is there a quick and easy way to move transaction records into a more horizontally oriented record?
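A rough sketch of the pivot idea; the column names are guessed from the description (one row per punch with a punch type), so treat them all as placeholders:
SELECT employee_id, punch_date, [IN] AS time_in, [OUT] AS time_out
FROM (SELECT employee_id,
             CONVERT(date, punch_datetime) AS punch_date,
             punch_type,                      -- e.g. 'IN' / 'OUT'
             punch_datetime
      FROM employee_punch_record) AS src
PIVOT (MIN(punch_datetime) FOR punch_type IN ([IN], [OUT])) AS p;
This spreads each punch type across its own column per employee/day, which is usually easier to bind to a horizontal form than the raw transaction rows.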
When I checked the machine this morning I saw that the MS DTC service was stopped, and when I started it, it ran for a second or two and then stopped automatically. I wasn't sure what was going on.
Then I checked the Event Viewer; it said the log file is full. I tried to find out how to reset or increase the size of the MS DTC log but couldn't. Where can I find this?
One more thing: what is the purpose of DTC? I know in SQL 6.5 we had everything in Enterprise Manager, but I can't see the same in either 7.0 or 2000. Has the name changed, or is it linked to some other component?
Also, do linked servers depend on MS DTC? What is its basic purpose, and how does it work? Kindly tell me what I should do now.
Is there a way I can perform an update on a table but ignore the trigger (or disable the trigger) each time I run a particular update script from a DTS package? I would like subsequent DTS steps to use the trigger, except for my last update statement. Is this possible?
Thanks,
Frank
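If disabling the trigger for just that step is acceptable, a minimal sketch looks like this (MyTable, MyTrigger and the column are placeholder names):
ALTER TABLE MyTable DISABLE TRIGGER MyTrigger   -- turn the trigger off
UPDATE MyTable SET some_col = 'x'               -- the update that should skip the trigger
ALTER TABLE MyTable ENABLE TRIGGER MyTrigger    -- turn it back on for later DTS steps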
In my stored procedure I'm calling a buggy and flaky stored procedure that comes from a third party. When I run my stored procedure from QA, I get a whole bunch of errors raised inside the third-party one. Is there any way I could just ignore them, so that if I run my SP from QA, only errors from my code, if any, show up? TIA
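If you are on SQL Server 2005 or later, the call can be wrapped in TRY/CATCH so its errors are swallowed; a sketch (dbo.ThirdPartyProc is a placeholder name, and this won't suppress informational messages or compile-level errors from the other proc):
BEGIN TRY
    EXEC dbo.ThirdPartyProc;
END TRY
BEGIN CATCH
    -- deliberately ignore whatever the third-party proc raised
    DECLARE @msg nvarchar(4000);
    SET @msg = ERROR_MESSAGE();
END CATCH;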
Hi, I'm trying to upload a large number of log entries, currently stored as text files, into a database table using bcp. For a few rows I get a "right truncation" error and the offending rows are not uploaded to the table. I don't want to increase the size of the table's varchar fields because only about a dozen out of almost a million rows have this problem. I want to provide an override - i.e. if a row would result in truncated data, truncate it but still bulk copy the offending row. Is that possible? I couldn't find such an option in the documentation. Any help is greatly appreciated. Thanks, Mudassir Latif
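I'm not aware of a bcp switch that truncates and loads anyway, so one workaround (a sketch, assuming you can add a staging table with wider columns; all names and lengths are placeholders) is to load everything, then trim on the way into the real table:
-- 1) bcp into log_staging, whose varchar columns are wide enough to never truncate
-- 2) then copy across, truncating explicitly to the real column's length
INSERT INTO log_entries (message)
SELECT LEFT(message, 255)
FROM log_staging;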
Is there a way to make an SP ignore an error? E.g. I'm looping through each database on a server, checking if a table exists and then selecting a value from that table. Now a database has been put onto the server where the table exists but all the column names are different. My SP is not interested in this database, so when it errors with "invalid column name" I want it to move on to the next database and not display any error message.
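One way to avoid the error entirely is to check for the expected column before running the SELECT in each database. A sketch with placeholder names, run in the context of each database in the loop:
IF EXISTS (SELECT 1
           FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'MyTable'
             AND COLUMN_NAME = 'MyColumn')
BEGIN
    EXEC ('SELECT MyColumn FROM MyTable')   -- dynamic SQL, so a missing column can't break the proc at compile time
END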
I'm writing a stored procedure to accept search strings from users on my site. Currently, this is what I have.
@schoolID int = NULL,
@scholarship varchar(250) = NULL,
@major varchar(250) = NULL,
@requirement varchar(250) = NULL
--@debug bit = 0
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    SELECT *
    FROM [scholarship]
    WHERE ([sectionID] = @schoolID OR @schoolID IS NULL)
      AND ([schlrPrefix] LIKE '%' + @scholarship + '%'
           OR [schlrName] LIKE '%' + @scholarship + '%'
           OR [schlrSufix] LIKE '%' + @scholarship + '%'
           OR @scholarship IS NULL)
      AND ([Specification] LIKE '%' + @major + '%' OR @major IS NULL)
      AND ([reqr1] LIKE '%' + @requirement + '%'
           OR [reqr2] LIKE '%' + @requirement + '%'
           OR [reqr3] LIKE '%' + @requirement + '%'
           OR [reqr4] LIKE '%' + @requirement + '%'
           OR [reqr5] LIKE '%' + @requirement + '%'
           OR @requirement IS NULL)
The problem is, sometimes the search doesn't work if there is a space before or after the search string. I wonder if there is a way to ignore any spaces and go right to whatever character comes next. If so, how do I implement that?
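Trimming the parameters at the top of the procedure is usually enough; a minimal sketch against the parameters above:
-- strip leading/trailing spaces before they reach the LIKE patterns
SET @scholarship = LTRIM(RTRIM(@scholarship));
SET @major       = LTRIM(RTRIM(@major));
SET @requirement = LTRIM(RTRIM(@requirement));
Note that LTRIM(RTRIM(NULL)) is still NULL, so the "OR ... IS NULL" branches keep working when a parameter is omitted.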
I have a stored procedure that does several steps. During the stored proc, error messages are produced when certain conditions warrant, but I want to continue anyway. For example, in a loop in the proc:
SET @strSQL = 'Update Table1 SET col1 = ''' + @strVariable + ''''
EXEC (@strSQL)
--ERROR created by the exec statement.....
When I schedule this as a job, the first error message makes the job fail. How can I force the proc to run completely, even if an error occurs?
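On SQL Server 2000 the usual approach is to check @@ERROR after each statement and carry on; on 2005 and later, TRY/CATCH reads more cleanly. A sketch of the TRY/CATCH form, assuming 2005 or later:
BEGIN TRY
    EXEC (@strSQL)                           -- the dynamic update built in the loop
END TRY
BEGIN CATCH
    PRINT 'Skipped: ' + ERROR_MESSAGE()      -- note the failure, but keep looping
END CATCH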
Hi! I'm wondering if there is a way to have AVG() not include zero amounts as part of the calculation. I'm looking at the fields PurchPrice and RepairCost that are used by the AVG() function.
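AVG() already skips NULLs, so one trick is to turn the zeros into NULLs first. A sketch using the column names from the question and a placeholder table name:
SELECT AVG(NULLIF(PurchPrice, 0)) AS AvgPurchPrice,   -- zeros become NULL, so they're excluded
       AVG(NULLIF(RepairCost, 0)) AS AvgRepairCost
FROM SomeTable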
I am importing data into a SQL table and there is a potential for duplicate records to be coming in. How do I simply ignore the duplicates and add only the records that do not violate the keys?
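A sketch of the filtered insert, assuming the incoming rows sit in a staging table; all names are placeholders:
INSERT INTO target_table (key_col, other_col)
SELECT s.key_col, s.other_col
FROM staging_table AS s
WHERE NOT EXISTS (SELECT 1
                  FROM target_table AS t
                  WHERE t.key_col = s.key_col)   -- skip rows that would violate the key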
SELECT expense_id, CAST(expense_id AS char(10)) + ' - ' + CAST(trip_km AS char(5))+ ' - ' + CAST(expense_amount AS char(5)) + ' - ' + charge_centre AS ExpenseDesc
If charge_centre is null, I need to ignore this field. How can I achieve this? The reason is that if any of the fields is null, it will return ExpenseDesc as null.
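Wrapping the nullable segment in ISNULL (or COALESCE) keeps the whole concatenation from going NULL; a sketch against the query above, with a placeholder table name:
SELECT expense_id,
       CAST(expense_id AS char(10)) + ' - ' +
       CAST(trip_km AS char(5)) + ' - ' +
       CAST(expense_amount AS char(5)) +
       ISNULL(' - ' + charge_centre, '') AS ExpenseDesc   -- drop the separator and value when NULL
FROM expenses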
If there is an error in the trigger, then the update to the table does not happen. Is there a way to make SQL ignore errors in a trigger and still update the table?
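If you are on SQL Server 2005 or later, wrapping the trigger body in TRY/CATCH is one option; a sketch with placeholder names, and it only helps when the error doesn't doom the whole transaction:
CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    BEGIN TRY
        -- existing trigger logic goes here
        PRINT 'trigger body';
    END TRY
    BEGIN CATCH
        -- swallow the error so the outer UPDATE can still complete
        DECLARE @err nvarchar(4000);
        SET @err = ERROR_MESSAGE();
    END CATCH
END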
Hi, I am trying to import data from a text file into a database table using the SQL Server BCP utility. I am able to do that when all the records in my text file are new, but I get a primary key violation error when I try to import a record which already exists in the table. That is correct, but I want my program to ignore these errors and import only those records which are fine. I tried the [-m maxerrors] option, but it is not working; my BCP program is interrupted at the first error, even if I give the [-m100] option. My command looks something like this:
bcp pub..employee in C:\data.txt -b1 -m100 -c -t, -Sdatabase -Uuser -Ppassword
Here -b1 means processing 1 row per batch transaction, and -m100 means ignoring the first 100 errors.
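Another route that avoids the key violations altogether (a sketch, assuming you can add a staging table; the column names are placeholders) is to bcp into a key-less staging table and then insert only the new rows:
-- 1) bcp the file into pub..employee_staging (same columns, but no primary key), e.g.
--    bcp pub..employee_staging in C:\data.txt -c -t, -Sdatabase -Uuser -Ppassword
-- 2) move across only the rows that aren't already present
INSERT INTO pub..employee (emp_id, emp_name)
SELECT s.emp_id, s.emp_name
FROM pub..employee_staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM pub..employee AS e
                  WHERE e.emp_id = s.emp_id);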
I am looking for the best practice when passing a parameter to a stored procedure that is not needed. For example, sometimes the users will want the list filtered to a certain state; other times the user wants all states. How can I make the SP ignore the WHERE clause if users want all states?
CREATE PROCEDURE usp_Example
    @State nvarchar(2)
AS
SELECT FirstName, LastName, State
FROM SomeTable
WHERE State = @State;
GO
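A common pattern, sketched against the procedure above, is to give the parameter a NULL default and neutralise the filter when it isn't supplied:
CREATE PROCEDURE usp_Example
    @State nvarchar(2) = NULL          -- NULL (or omitted) means "all states"
AS
SELECT FirstName, LastName, State
FROM SomeTable
WHERE (State = @State OR @State IS NULL);
GO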
Using SQL 2000. According to Books Online, the AVG aggregate function ignores null values. ((3+3+3+3+Null)/5) predictably returns Null. Is there a function to ignore the Null entry, adjust the divisor, and return a value of 3? For example, ((3+3+3+3)/4) after ignoring the Null entry. If there's more than one null value, then adjust the divisor accordingly; for example, ((5+5+5+4+Null+5+5+Null)/8) would become ((5+5+5+4+5+5)/6) after nulls are ignored. Thanks for any help or advice.
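For what it's worth, AVG() over a column already behaves this way: NULLs are left out of both the sum and the divisor. A small check with throwaway data:
-- AVG ignores the NULL row: (3+3+3+3)/4 = 3
SELECT AVG(val) AS avg_val
FROM (SELECT 3 AS val UNION ALL SELECT 3 UNION ALL
      SELECT 3 UNION ALL SELECT 3 UNION ALL SELECT NULL) AS t;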
Hi, I believe my SQL Server was configured as case-sensitive. I have a number of stored procedures which were moved from a non-case-sensitive SQL Server. Because of the case sensitivity, I have to do a lot of editing in those stored procedures. Is there a quick way to avoid the editing? Something like ignoring the case in one statement? Thanks in advance, your advice will be greatly appreciated.
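For data comparisons, the collation can be overridden per expression with COLLATE; a sketch, using a placeholder table and one common case-insensitive collation name. Note this only affects comparisons of values, not the resolution of table and column names, which follow the database's own collation:
SELECT *
FROM Customers
WHERE LastName = 'smith' COLLATE SQL_Latin1_General_CP1_CI_AS  -- case-insensitive compare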
I connect, but when I try to access any table from this database I get an error indicating that the object doesn't exist; if I use the full name xx.table I get no errors. What may be happening?
I want to bulk copy a pipe-delimited text file that has two header records to SQL Server using ADO / C#. The first record contains the datetime the file was recreated and the second record contains the column definitions; both records start with a # character (see below).
# May 15, 2008 12:12:12345
# col1|col2|col3
col1rec1data|col2rec1data|col3rec1data
col1rec2data|col2rec2data|col3rec2data
Is there a way to ignore the two header records when building the select statement for the text file or in the schema.ini?
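If loading through T-SQL is an option instead, BULK INSERT can simply skip the header lines; a sketch where the path, table name and options are placeholders:
BULK INSERT dbo.ImportTarget
FROM 'C:\data\logfile.txt'
WITH (FIELDTERMINATOR = '|',
      ROWTERMINATOR   = '\n',
      FIRSTROW        = 3);   -- skip the two '#' header records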
That works until it hits a date formatted incorrectly. The date could have blank spaces, double zeros or an invalid month/day/year (e.g. 033986). The command stops with a conversion error and no values are written to the CorrectedDate fields.
Is there a way to have the command skip the invalid dates and continue to write the datetime conversion for the dates that are formatted correctly?
Edit: Forgot to mention this is on SQL Server 2005 (not Express).
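One approach on 2005 is to gate the conversion on ISDATE so bad values are left NULL instead of killing the statement; a sketch where the table and the raw text column are placeholder names:
UPDATE dbo.SourceTable
SET CorrectedDate = CASE WHEN ISDATE(raw_date) = 1
                         THEN CONVERT(datetime, raw_date)
                         ELSE NULL            -- leave blanks, double zeros and invalid dates unconverted
                    END;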
When I write this expression in a textbox in Reporting Services 2005:
=IIf(DateAdd("h", Parameters!t.Value, Fields!completionTime.Value), "", Fields!completionTime.Value)