Transact SQL :: How To Find Whether A Column Lies In Range Of Smallint / Int / Bigint
May 12, 2015
UPDATE P
SET
P.IsError=1
,P.IsDrawingRevNo=1
,ErrorMessage=ISNULL(ErrorMessage,'')+'| DrawingRevisionNumber DataType Is Not Valid, smallint expected(-32768 AND 32767)'
FROM ZPTSMGR.ProjectDrawingRaw P
WHERE P.LogId=@LogId AND P.ProjectId=@ProjectId AND P.Revision NOT BETWEEN -32768 AND 32767 --SMALLINT range: -32768 to 32767.
--DataType Range
--tinyint DataType (MinVal: 0, MaxVal: 255). Its storage size is 1 byte.
--smallint DataType from -2^15 (-32,768) through 2^15 - 1 (32,767) and its storage size is 2 bytes.
--int DataType -2^31(-2,147,483,648) to 2 ^31-1(2,147,483,647). Its storage size is 4 bytes.
--bigint DataType from -2^63 (-9,223,372,036,854,775,808) through 2^63-1 (9,223,372,036,854,775,807). Its storage size is 8 bytes.
The SQL statement fails and is not able to perform the update. The IsError flag needs to be set whenever the value does not lie in the smallint range -- say 457896523, which is not a valid smallint value.
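A minimal sketch of this kind of range validation, assuming the same ZPTSMGR.ProjectDrawingRaw table and that Revision arrives as a wider type (bigint or varchar); TRY_CAST needs SQL Server 2012+ and returns NULL when the value will not fit:

UPDATE P
SET    P.IsError = 1,
       P.IsDrawingRevNo = 1,
       P.ErrorMessage = ISNULL(P.ErrorMessage, '')
                        + '| DrawingRevisionNumber DataType Is Not Valid, smallint expected(-32768 AND 32767)'
FROM   ZPTSMGR.ProjectDrawingRaw P
WHERE  P.LogId = @LogId
  AND  P.ProjectId = @ProjectId
  AND  TRY_CAST(P.Revision AS smallint) IS NULL   -- NULL when outside -32768..32767 or not numeric
  AND  P.Revision IS NOT NULL                     -- drop this line if NULL should also be flagged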
SELECT uri, evFieldUri, evFieldVal , CAST(evFieldVal AS BIGINT) FROM TSEXFIELDV
[Code] ....
And it returns this error:
Msg 8114, Level 16, State 5, Line 1 Error converting data type varchar to bigint.
So, I tried again, and this worked…
SELECT uri, evFieldUri, evFieldVal,CAST(evFieldVal AS BIGINT), ISNUMERIC(evFieldVal) FROM TSEXFIELDV WHERE URI > 0 AND evFieldUri IN ( SELECT URI FROM TSEXFIELD WHERE exFieldFormat IN (1,11))
I logged out and came back and tried again, and it still worked. So then I tried…
SELECT uri, evFieldUri, evFieldVal,CAST(evFieldVal AS BIGINT) FROM TSEXFIELDV WHERE URI > 0
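As an aside, to see exactly which evFieldVal rows break the conversion, a hedged check (TRY_CONVERT needs SQL Server 2012+; ISNUMERIC alone is unreliable because it accepts strings such as '.', '$' and '1e5'):

-- Sketch: list the rows that cannot be converted to BIGINT
SELECT uri, evFieldUri, evFieldVal
FROM   TSEXFIELDV
WHERE  TRY_CONVERT(BIGINT, evFieldVal) IS NULL
  AND  evFieldVal IS NOT NULL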
So, help me figure this one out. I can still access the database. I did an sp_helpdb and it says that I should still have 15 megs of blank data space (I have a 1.5 gig db with 700 meg of log). So, OK, a 2.2 gig device with 15 meg free is nothing, but still, when I double-click on the database it reports the available data space to be 0.00. Surprisingly enough, the log is empty and the index is larger than the data. Would a DBCC CHECKDB clean this mess up?
I am having issues trying to write a query that would give me the unique GUIDs associated with a distinct PID when the count of unique GUIDs is greater than 1. To summarize, I need a query that just shows which PIDs have more than one unique GUID. A PID could have multiple GUIDs that are the same; I'm looking for the PIDs that have multiple GUIDs that are different/unique.
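A minimal sketch of that check, assuming a hypothetical table dbo.YourTable with columns PID and GUID:

-- PIDs that have more than one distinct GUID
SELECT PID,
       COUNT(DISTINCT [GUID]) AS UniqueGuidCount
FROM   dbo.YourTable
GROUP BY PID
HAVING COUNT(DISTINCT [GUID]) > 1

-- ...and the individual GUIDs for those PIDs
SELECT DISTINCT t.PID, t.[GUID]
FROM   dbo.YourTable t
WHERE  t.PID IN (SELECT PID FROM dbo.YourTable GROUP BY PID HAVING COUNT(DISTINCT [GUID]) > 1)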
I have looked around quite a bit, but mostly what I have found is looking to see if a table is used or if a column is in a stored procedure and honestly most of what I have seen does not work.
I want to reduce our nightly import by removing any columns that are not being used. We insert into our staging tables, Stage1 for example. And say Stage1 has column1 and column2. If those columns are not being used, then I want to remove them from Stage1. The only catch is that every Stage1 table has a v_Stage1. v_Stage1 should have all the columns from Stage1, but doesn't always. So I need to know what columns from Stage1 are used somewhere other than v_Stage1 and what columns from v_Stage1 are not used.
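One hedged starting point is sys.dm_sql_referenced_entities, which returns column-level references for a given object. The sketch below only covers the Stage1 vs. v_Stage1 piece (columns of Stage1 that the view never touches); extending it to "referenced anywhere other than v_Stage1" would mean running the same check across every referencing object. Schema and object names are assumed:

-- Columns of dbo.Stage1 that dbo.v_Stage1 does NOT reference
SELECT c.name AS UnreferencedColumn
FROM   sys.columns c
WHERE  c.object_id = OBJECT_ID('dbo.Stage1')
  AND  c.name NOT IN (
           SELECT r.referenced_minor_name
           FROM   sys.dm_sql_referenced_entities('dbo.v_Stage1', 'OBJECT') r
           WHERE  r.referenced_entity_name = 'Stage1'
             AND  r.referenced_minor_name IS NOT NULL
       )
-- Caveat: a view defined with SELECT * reports is_select_all = 1 rather than individual columns.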
I have a SQL text column from SP_who2 in table #SqlStatement:
like the one row shown below:
"update Panel set PanelValue=7286 where PanelFirmwareID=4 and PanelSettingID=9004000"
I want to find which table and column names appear in the text.
I tried something like the below:
Select B.Statement from #sp_who2 A LEFT JOIN #SqlStatement B ON A.spid = B.spid where B.Statement IN ( SELECT T.name, C.name FROM sys.tables T JOIN sys.columns C ON T.object_id=C.object_id WHERE T.type='U' )
Something like this: find the column names and table names.
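IN cannot compare against a two-column subquery, which is why that attempt fails. A hedged alternative is a LIKE-based join against the catalog views, using the spid and Statement columns mentioned above; note that plain substring matching can produce false positives when a name happens to appear inside another word:

-- Sketch: table/column pairs whose names appear in each captured statement
SELECT s.spid,
       s.Statement,
       T.name AS TableName,
       C.name AS ColumnName
FROM   #SqlStatement s
JOIN   sys.tables  T ON T.type = 'U'
JOIN   sys.columns C ON C.object_id = T.object_id
WHERE  s.Statement LIKE '%' + T.name + '%'
  AND  s.Statement LIKE '%' + C.name + '%'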
I wanted to find all occurrences of ADRSCODE in a Database where ADRSCODE is in either an Index or a Primary Key.
I know how to get all of the occurrences of ADRSCODE in a database and the table associated with it; I just want to tack on the index and/or primary key.
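A sketch using the catalog views; sys.index_columns ties a column to the indexes (including primary key indexes) that contain it:

-- Tables and indexes/primary keys that include a column named ADRSCODE
SELECT t.name AS TableName,
       i.name AS IndexName,
       i.is_primary_key,
       ic.is_included_column
FROM   sys.tables t
JOIN   sys.indexes i        ON i.object_id = t.object_id
JOIN   sys.index_columns ic ON ic.object_id = i.object_id
                           AND ic.index_id  = i.index_id
JOIN   sys.columns c        ON c.object_id = ic.object_id
                           AND c.column_id = ic.column_id
WHERE  c.name = 'ADRSCODE'
ORDER BY t.name, i.name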
This is a common error for SQL Server, but I got it in an uncommon way. I have a table called tblIDNumber where there are two columns - IDN_Number [NVarchar(200)] and Temp [BigInt].
If I run, SELECT * FROM dbo.tblIDNumber WHERE IDN_IDNumberTypeStaticValue = 33 AND IDN_Removed = 0 AND CAST(IDN_Number AS BIGINT) = 1
SQL Server give me the error: Msg 8114, Level 16, State 5, Line 1 Error converting data type nvarchar to bigint.
I first thought the IDN_Number values for type 33 had characters in them, but they don't, because the query below works:
UPDATE dbo.tblIDNumber SET Temp = CAST(IDN_Number AS BIGINT) WHERE IDN_IDNumberTypeStaticValue = 33 AND IDN_Removed = 0
To work around it, I ran this query:
UPDATE dbo.tblIDNumber SET IDN_Number = '123' WHERE IDN_IDNumberTypeStaticValue = 33 AND IDN_Removed = 0
and then I ran the first query, and SQL Server does NOT give me the same error - Msg 8114, Level 16, State 5, Line 1 Error converting data type nvarchar to bigint.
The second query proved there is nothing wrong with converting the values in IDN_Number to a bigint, but the third query hinted that the data might be the cause.
Finally, I found the root cause to be an index that the first query uses:
CREATE NONCLUSTERED INDEX [IX_tblIDNumber_Covering] ON [dbo].[tblIDNumber]
(
    [IDN_Removed] ASC,
    [IDNumberCode] ASC
)
INCLUDE ([IDN_Number], [IDN_Reference])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF,
      ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 85) ON [PRIMARY]
GO
If I remove the index, the first query works without the error.
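Because the chosen plan (here driven by the covering index) can evaluate the CAST before the filtering predicates, one defensive rewrite is to guard the conversion itself rather than rely on predicate order. A sketch; TRY_CAST needs SQL Server 2012+, and the CASE version works on older builds:

-- Option 1 (SQL Server 2012+): convert safely, NULL when not numeric
SELECT *
FROM   dbo.tblIDNumber
WHERE  IDN_IDNumberTypeStaticValue = 33
  AND  IDN_Removed = 0
  AND  TRY_CAST(IDN_Number AS BIGINT) = 1

-- Option 2 (older versions): only attempt the cast when the value is all digits
SELECT *
FROM   dbo.tblIDNumber
WHERE  IDN_IDNumberTypeStaticValue = 33
  AND  IDN_Removed = 0
  AND  CASE WHEN IDN_Number NOT LIKE '%[^0-9]%' AND IDN_Number <> ''
            THEN CAST(IDN_Number AS BIGINT) END = 1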
For our ETL process, we maintain a TransformationList table that has the source view and the destination table. Data is copied from the view into the table (INSERT INTO). I am trying to find column names in the Views that are not column names in the associated Table.
In the example below, I want to end up with three records.
I have it almost working, except that there is a table, ChangeColPrefix, that is used by the ETL process to change some of the view column name prefixes. Some of the source views have column names with prefixes that do not match the destination table column names. Say view SouthBase has all the column names prefixed with SB - like SBAcct, SBName. And the destination table of Area District has ADAcct, ADName. There would be a row in ChangeColPrefix for SouthBase, SB, AD, 1, 2 that would be used by the ETL process to create the INSERT INTO Area District FROM SouthBase.
I need to use this ChangeColPrefix table to find the unmatched columns between my source views and destination tables. Without that table, SBAcct and SBName from SouthBase would not appear to match the columns ADAcct and ADName, even though they do match.
I want to end up with these three records as non-matching:
View1, Column4
View2, Column4
View2, Column5
View1 has Salumn2 and View2 has Salumn5, and they must be changed to Column2 and Column5 as per the ChangeColPrefix table before running the Select from INFORMATION_SCHEMA.COLUMNS EXCEPT Select from INFORMATION_SCHEMA.COLUMNS looking for unmatched columns.
/***** Set Up Test Data *****/
-- Create 2 test views
IF EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[View1]'))
    DROP VIEW dbo.[View1]
GO
CREATE VIEW View1 AS
    SELECT '1' AS Column1
         , '2' AS Salumn2
         , '4' AS Column4;
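A hedged sketch of the comparison itself, assuming hypothetical layouts TransformationList(SourceView, DestTable) and ChangeColPrefix(ViewName, ViewPrefix, TablePrefix); it translates the view's prefix before checking whether the column exists on the destination table:

-- View columns that have no matching destination-table column (after prefix translation)
SELECT tl.SourceView, vc.COLUMN_NAME
FROM   dbo.TransformationList tl
JOIN   INFORMATION_SCHEMA.COLUMNS vc
       ON vc.TABLE_NAME = tl.SourceView
LEFT JOIN dbo.ChangeColPrefix cp
       ON cp.ViewName = tl.SourceView
WHERE NOT EXISTS (
        SELECT 1
        FROM   INFORMATION_SCHEMA.COLUMNS tc
        WHERE  tc.TABLE_NAME  = tl.DestTable
          AND  tc.COLUMN_NAME = CASE
                   WHEN cp.ViewName IS NOT NULL
                    AND vc.COLUMN_NAME LIKE cp.ViewPrefix + '%'
                   THEN cp.TablePrefix + STUFF(vc.COLUMN_NAME, 1, LEN(cp.ViewPrefix), '')
                   ELSE vc.COLUMN_NAME
               END
      )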
I have a database that has a table with a field that auto-increments as a primary key, meaning that the field type is BigInteger and it is set up as my identity column. Now when I insert a new record that field gets populated automatically. How can I get this value in the same operation as my insert? Meaning, in one sub I insert a new record but then need to retrieve the identity value, all in the same procedure. What is the way to achieve this please? Marc
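SCOPE_IDENTITY() (or an OUTPUT clause) returns the identity value generated by an insert in the same scope. A sketch with hypothetical table and column names:

-- Hypothetical table dbo.MyTable with a bigint IDENTITY primary key MyTableId
DECLARE @NewId bigint

INSERT INTO dbo.MyTable (SomeColumn)
VALUES ('example')

SET @NewId = SCOPE_IDENTITY()      -- identity produced by the insert above, in this scope

-- Alternative: capture it directly from the INSERT
INSERT INTO dbo.MyTable (SomeColumn)
OUTPUT inserted.MyTableId          -- the identity column
VALUES ('another example')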
I recently converted a column that was once an int to a bigint on one of my tables. The modified column provides generic row id information and there are duplicates within this column. I am trying to perform a self join via the following:
SELECT a.row_id FROM test_db a INNER JOIN test_db b ON b.row_id < a.row_id.
This code used to work when the column was an int, but now I am getting high CPU issues since I converted it to bigint. I am unsure why the change to bigint would cause such an issue. The OS/SQL is 64-bit.
This one's a bit tricky. I want to be able to take an IP address of a request on my website and find it in a table. Specifically, I need to be able to record IP address RANGES for search engines so that when they attempt to find a page that isn't there, I can programmatically send a 404 header instead of give them the human-friendly page (please don't make suggestions on this - I'm using PHP, etc., and can't send both a 404 header and a human-friendly page).
Anyway. Let's say a search engine has the IP range 33.33.0.0 through 33.33.255.255 (I made that up). If I receive a request from IP 33.33.101.221, how can I store ONE record (as either a regular expression or maybe multiple fields or somehow else) so that I can match that IP? I'm normally pretty good at figuring stuff like this out, but this one has me stumped for the moment.
All help is greatly appreciated.
I'll be posting a similar request on the PHP forum - I hope that's not considered cross-posting.
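One common approach, sketched below with a hypothetical SearchEngineRanges table, is to store each range as two numeric columns (start and end of the block, each octet weighted by powers of 256) and convert the incoming address the same way before comparing:

-- Hypothetical range table: one row per search-engine IP block
-- CREATE TABLE dbo.SearchEngineRanges (EngineName varchar(50), IpStart bigint, IpEnd bigint)

DECLARE @ip varchar(15)
SET @ip = '33.33.101.221'

-- Convert dotted-quad text to a single number (a.b.c.d -> a*16777216 + b*65536 + c*256 + d)
DECLARE @ipNum bigint
SET @ipNum = CAST(PARSENAME(@ip, 4) AS bigint) * 16777216
           + CAST(PARSENAME(@ip, 3) AS bigint) * 65536
           + CAST(PARSENAME(@ip, 2) AS bigint) * 256
           + CAST(PARSENAME(@ip, 1) AS bigint)

SELECT EngineName
FROM   dbo.SearchEngineRanges
WHERE  @ipNum BETWEEN IpStart AND IpEnd   -- 33.33.0.0 .. 33.33.255.255 stored as one row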
I have a database where a client can have many addresses. Clients may be at one address or another at different times of the year. The table allows for entering a start date and an end date for each address. I'm trying to figure out the best way to filter on this to return only the current address. I have tried the WHERE clause below, but I'm not sure this is what I should use. The year is not needed, but the data type is datetime. I think I need to use the start date also, but I cannot seem to work out how to filter this. If anyone has ideas I would like to hear them.
Thank you,
A.Selected=1 AND A.EndDate Is Null OR DatePart(m,A.Enddate) >= DatePart(m,GETDATE()) AND DatePart(d,A.Enddate) >= DatePart(d,GETDATE())
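Since only the month and day matter here, a hedged alternative (hypothetical table name dbo.ClientAddress) is to compare a month*100+day ordinal on both sides and handle seasonal ranges that wrap past December, treating a NULL EndDate as open-ended:

DECLARE @today int
SET @today = MONTH(GETDATE()) * 100 + DAY(GETDATE())

SELECT A.*
FROM   dbo.ClientAddress A
WHERE  A.Selected = 1
  AND (
        A.EndDate IS NULL
        OR (
             -- normal range, e.g. 15 Apr .. 30 Sep
             MONTH(A.StartDate) * 100 + DAY(A.StartDate) <= MONTH(A.EndDate) * 100 + DAY(A.EndDate)
             AND @today BETWEEN MONTH(A.StartDate) * 100 + DAY(A.StartDate)
                            AND MONTH(A.EndDate)  * 100 + DAY(A.EndDate)
           )
        OR (
             -- range that wraps the year end, e.g. 01 Nov .. 31 Mar
             MONTH(A.StartDate) * 100 + DAY(A.StartDate) > MONTH(A.EndDate) * 100 + DAY(A.EndDate)
             AND (@today >= MONTH(A.StartDate) * 100 + DAY(A.StartDate)
                  OR @today <= MONTH(A.EndDate) * 100 + DAY(A.EndDate))
           )
      )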
I have a field which is currently of the datetime data type. I want to convert it to smalldatetime, but every time I try, I get an error of the "The conversion from datetime data type to smalldatetime data type resulted in a smalldatetime overflow error." sort.
I have tried to find the values which are out of the smalldatetime range, with the following query
SELECT *
FROM midmar
WHERE bday BETWEEN '01/01/1900' AND '06/05/2079'
Which doesn't quite work, and gives me values that are actually between those two values listed.
I have also tried having the WHERE clause read:
WHERE bday <'06/06/2079' AND
bday>'01/01/1900'
and that doesn't really work either.
What's going on? Is there likely another problem besides the structuring of my queries?
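smalldatetime covers 1900-01-01 through 2079-06-06, so the rows blocking the conversion are the ones outside that window, and the BETWEEN query above returns the opposite set (the rows that are fine). A hedged sketch of the check:

-- Rows in midmar whose bday cannot be stored as smalldatetime
SELECT *
FROM   midmar
WHERE  bday < '19000101'
   OR  bday >= '20790607'   -- anything on/after 2079-06-07 overflows
                            -- (values very late on 2079-06-06 can also round up and overflow)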
CHECK constraints are not needed for the partitioned view to return the correct results. However, if the CHECK constraints have not been defined, the query optimizer must search all the tables instead of only those that cover the search condition on the partitioning column. Without the CHECK constraints, the view operates like any other view with UNION ALL. The query optimizer cannot make any assumptions about the values stored in different tables and it cannot skip searching the tables that participate in the view definition.
Then why am I getting index scans on my partitioning column on tables that fail the search value based on their check constraint?
Not looking for an answer because I know the query optimizer is a fickle b*tch and I did not post any code, but I needed to rant.
Hi all, how do I find the continuous dates for a given date range in SQL Server 2005? E.g. given 2007-01-27 and 2007-02-02 the output should be 2007-01-27, 2007-01-28, 2007-01-29, 2007-01-30, 2007-01-31, 2007-02-01, 2007-02-02. Any suggestion?
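A recursive CTE is one way to do it on SQL Server 2005; a sketch:

DECLARE @StartDate datetime, @EndDate datetime
SET @StartDate = '20070127'
SET @EndDate   = '20070202';

WITH Dates AS
(
    SELECT @StartDate AS TheDate
    UNION ALL
    SELECT DATEADD(DAY, 1, TheDate)
    FROM   Dates
    WHERE  TheDate < @EndDate
)
SELECT TheDate
FROM   Dates
OPTION (MAXRECURSION 0)   -- lift the default 100-level limit for longer ranges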
We have records in one table that are marked as accepted/rejected based on eligibility start and end dates in another table. We're loading new eligibility data into an ETL table and if the start or end date is going to change, I want to report any records that need to be reviewed to see if their status should change. The new dates could be before or after the existing dates, and the new or existing end date could also be NULL. Currently I'm using 4 > < statements and it seems to catch any scenario, but I'm wondering if there's a better way:
DECLARE @RECORDS TABLE (RecordDate date, ID varchar(8))
INSERT INTO @RECORDS (RecordDate, ID)
VALUES ('20100101','99'),('20110101','99'),('20120101','99'),('20130101','99'),('20140101','99')

DECLARE @ORIGINALDATES TABLE (StartDate date, EndDate date, ID varchar(8))
INSERT INTO @ORIGINALDATES (StartDate, EndDate, ID)
VALUES ('20100101',NULL,99)
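One compact alternative to the four comparisons is to flag the records whose in-range status differs between the old and new eligibility dates. A sketch using the table variables above plus a hypothetical @NEWDATES staging table, with NULL end dates treated as open-ended:

DECLARE @NEWDATES TABLE (StartDate date, EndDate date, ID varchar(8))
INSERT INTO @NEWDATES (StartDate, EndDate, ID)
VALUES ('20120101', '20131231', '99')

-- Records whose accepted/rejected status may need review because the range change
-- moves them into or out of the eligibility window
SELECT r.ID, r.RecordDate
FROM   @RECORDS r
JOIN   @ORIGINALDATES o ON o.ID = r.ID
JOIN   @NEWDATES      n ON n.ID = r.ID
WHERE  CASE WHEN r.RecordDate BETWEEN o.StartDate AND ISNULL(o.EndDate, '99991231') THEN 1 ELSE 0 END
    <> CASE WHEN r.RecordDate BETWEEN n.StartDate AND ISNULL(n.EndDate, '99991231') THEN 1 ELSE 0 END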
I'm exporting out a number of tables via BCP for our Cognos group to report on. Almost all tables can export and import successfully. I'm using the following command to export out:
SET OUTPUT=K:\BCP_FIN_Test
SET ERRORLOG=K:\BCP_FIN_Test\BCP_Error_Log
SET TIMINGS=K:\BCP_FIN_Test\BCP_Timings
bcp "SELECT TOP 100 * FROM FS84RPT.dbo.PS_VOUCHER INNER JOIN FS84RPT.[dbo].[PS_VCHR_ACCTG_LINE] ON PS_VOUCHER.VOUCHER_ID = PS_VCHR_ACCTG_LINE.VOUCHER_ID AND PS_VOUCHER.BUSINESS_UNIT = PS_VCHR_ACCTG_LINE.BUSINESS_UNIT WHERE PS_VCHR_ACCTG_LINE.FISCAL_YEAR = '2014' AND PS_VCHR_ACCTG_LINE.ACCOUNTING_PERIOD BETWEEN '9' AND '11'" queryout %OUTPUT%\PS_VOUCHER.txt -e %ERRORLOG%\PS_VOUCHER.err -o %TIMINGS%\PS_VOUCHER.txt -T -N
The Query returns data fine in a SQL editor. We have tried exporting/importing both with Native and Character settings.
Below are the errors on import:
Here is the table structure:
Column            Type  Computed  Length
BUSINESS_UNIT     char  no        5
VOUCHER_ID        char  no        8
VOUCHER_LINE_NUM  int   no        4
TOTAL_DISTRIBS    int   no        4
The result of the query I'd like should look something like this
1 2 5 7 8
So basically I'd like to leave records 3 and 4 out because they fall within 24 hours of record 2, and I'd like to leave record 6 out because it falls within 24 hours of record 5. I tried working with a CTE, adding a dateadd(d, 1, recorddate), joining it on itself and using a BETWEEN From/To filter on the join, but that didn't work. I don't think NTILE will work for this?
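The snag is that whether a row is kept depends on the previously kept row, not just the previous row, so a simple self-join or LAG comparison is not enough. A procedural sketch with assumed names dbo.Records(RecordId, RecordDate):

-- Keep a row only when it is more than 24 hours after the most recently KEPT row
DECLARE @Keep TABLE (RecordId int, RecordDate datetime)
DECLARE @LastKept datetime
DECLARE @Id int, @Date datetime

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT RecordId, RecordDate FROM dbo.Records ORDER BY RecordDate
OPEN c
FETCH NEXT FROM c INTO @Id, @Date
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @LastKept IS NULL OR @Date > DATEADD(HOUR, 24, @LastKept)
    BEGIN
        INSERT INTO @Keep (RecordId, RecordDate) VALUES (@Id, @Date)
        SET @LastKept = @Date
    END
    FETCH NEXT FROM c INTO @Id, @Date
END
CLOSE c
DEALLOCATE c

SELECT RecordId, RecordDate FROM @Keep ORDER BY RecordDate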
I'm trying to move some logic that I have currently within a program and putting it into SQL instead.
My table has the following 3 fields that are of interest to me: StartDate (DateTime), StopDate(DateTime), Length (int)
Length is calculated based on StopDate - StartDate and is expressed to the nearest minute.
What I want to do is query the Db giving a start date and end date and return all records that fall within that date range. I then want to present that data such that the earliest date is set to the start date criteria, the last to the end date criteria and the length recalculated.
Say for example I query for results between 01/02/2015 07:00:00 and 01/02/2015 19:00:00 and I get the following:
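A sketch of the clamping and recalculation, assuming a hypothetical table dbo.Sessions with the three columns above and Length expressed in minutes:

DECLARE @RangeStart datetime, @RangeEnd datetime
SET @RangeStart = '20150201 07:00:00'
SET @RangeEnd   = '20150201 19:00:00'

SELECT  CASE WHEN StartDate < @RangeStart THEN @RangeStart ELSE StartDate END AS ClampedStart,
        CASE WHEN StopDate  > @RangeEnd   THEN @RangeEnd   ELSE StopDate  END AS ClampedStop,
        DATEDIFF(MINUTE,
                 CASE WHEN StartDate < @RangeStart THEN @RangeStart ELSE StartDate END,
                 CASE WHEN StopDate  > @RangeEnd   THEN @RangeEnd   ELSE StopDate  END) AS RecalculatedLength
FROM    dbo.Sessions
WHERE   StartDate <= @RangeEnd
  AND   StopDate  >= @RangeStart    -- any record overlapping the requested window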
Basically, I have a membership table that lists each member with an effective period, Eff_Period, that indicates a month when a member was active. So, if a member is active from Jan to Mar, there will be three rows with Eff_Periods of 201501, 201502 and 201503.
All well and good. But a member may not necessarily have continuous months for active membership. They might have only been active for Jan, Feb and Jun. That would still give them three rows, but with noncontinuous Eff_Periods; they'd be 201501, 201502 and 201506. There is also a table that logs member activity. It has an Activity_Date that holds the date of the activity - betcha didn't see that comin'. What I'm trying to do is determine if an activity took place during a period when the member was active.
My original thought was to count how many rows a member has in the Membership table and compare that number to the number of months between the MIN(Eff_Period) and the MAX(Eff_Period). If the numbers didn't match up, then I knew that the member had a disconnect somewhere; he became inactive, then active again. But then I thought of the scenario I detailed above and realized that the counts could match but still have a discontinuity. So, is there a nifty little SQL shortcut that could determine if a target month is contained within a continuous or discontinuous list of months?
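Rather than proving the month list is continuous, one option is to check whether the activity's own month exists as an Eff_Period row. A sketch with assumed table/column names (dbo.Membership, dbo.MemberActivity, MemberID) and Eff_Period stored as a six-character yyyymm string:

-- Flag each activity according to whether its month appears as an active Eff_Period
SELECT  a.MemberID,
        a.Activity_Date,
        CASE WHEN EXISTS (
                 SELECT 1
                 FROM   dbo.Membership m
                 WHERE  m.MemberID   = a.MemberID
                   AND  m.Eff_Period = CONVERT(char(6), a.Activity_Date, 112)  -- yyyymm
             )
             THEN 'Active month' ELSE 'Inactive month' END AS ActivityStatus
FROM    dbo.MemberActivity a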
I have a situation where an agent has number of activities for a certain date range. If an agent has multiple activities within certain date range, I would like BALANCE BEFORE from the first activity and BALANCE AFTER from the last activity. Here is my current SQL query that returns the following data:
DECLARE @BeginDate Datetime
DECLARE @EndDate Datetime
SET @BeginDate = '05-1-2015'
SET @EndDate = '05-31-2015'

SELECT a.AgentName, R.BALANCEBEFORE,
[Code] ....
AGENTNAME  BALANCE BEFORE  BALANCE AFTER  DATE
DOUGLAS    9738.75         9782.75        2015-05-11
DOUGLAS    9782.75         9804.75        2015-05-12
DOUGLAS    9804.75         9837.75        2015-05-13
In the sample data above, ideally I would like my query to return data as follow:
AGENTNAME  BALANCE BEFORE                 BALANCE AFTER
DOUGLAS    9738.75 (from first activity)  9837.75 (from last activity)
Not sure how I can write sql query to accomplish this.
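One hedged way is to number each agent's activities in both directions and join the first row against the last. The sketch below assumes the joined source behind the query above is available as dbo.AgentActivity with an ActivityDate column:

;WITH Ordered AS
(
    SELECT  AgentName, BALANCEBEFORE, BALANCEAFTER, ActivityDate,
            ROW_NUMBER() OVER (PARTITION BY AgentName ORDER BY ActivityDate ASC)  AS rnFirst,
            ROW_NUMBER() OVER (PARTITION BY AgentName ORDER BY ActivityDate DESC) AS rnLast
    FROM    dbo.AgentActivity          -- assumed: the joined source behind the query above
    WHERE   ActivityDate BETWEEN @BeginDate AND @EndDate
)
SELECT  f.AgentName,
        f.BALANCEBEFORE AS BalanceBefore,   -- from the first activity in the range
        l.BALANCEAFTER  AS BalanceAfter     -- from the last activity in the range
FROM    Ordered f
JOIN    Ordered l
  ON    l.AgentName = f.AgentName
 AND    l.rnLast    = 1
WHERE   f.rnFirst   = 1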
This one is making my head hurt! I'm trying to figure out how to query for records within a date range. The records have a start_date and an end_date field. The end_date field may be null.
For example, say you wanted to see the records of everyone checked into a hotel during a given date range. You need to account for the people that checked in before your @start_date parameter and may check out after your @end_date parameter.
FYI: as for the null end_date field, think of it as they have checked in and it is not yet known when they will check out.
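The standard overlap test covers all of those cases, with a NULL end_date treated as "still checked in". A sketch with an assumed dbo.Stays table:

-- Everyone whose stay overlaps the requested window
SELECT *
FROM   dbo.Stays
WHERE  start_date <= @end_date
  AND  (end_date IS NULL OR end_date >= @start_date)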
I want to frame a range of data based on a particular group of columns.
IF OBJECT_ID('tempdb..#ResellerRange') IS NOT NULL
    DROP TABLE #ResellerRange

CREATE TABLE #ResellerRange
(
    ResID varchar(10),
    amt decimal(18,2),
    serialno int
)
insert into #ResellerRange ( ResID,amt, serialno ) values ('Raja',10,67),('raja',10,68),('raja',10,89),('Prabu',20,56)
I want the output below:
resid  amt  min  max
--------------------------
raja   10   67   68
raja   10   89   89
Prabu  20   56   56
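This looks like gaps-and-islands on serialno within each ResID/amt group; the classic ROW_NUMBER difference trick produces that output (assuming a case-insensitive collation, so 'Raja' and 'raja' group together):

;WITH Numbered AS
(
    SELECT  ResID, amt, serialno,
            serialno - ROW_NUMBER() OVER (PARTITION BY ResID, amt ORDER BY serialno) AS grp
    FROM    #ResellerRange
)
SELECT  ResID AS resid,
        amt,
        MIN(serialno) AS [min],
        MAX(serialno) AS [max]
FROM    Numbered
GROUP BY ResID, amt, grp
ORDER BY resid, [min]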
I am trying to write a query where I need to loop over each month in a specified date range. Inside the loop I need to return a result set of data for each month and then update the table with the returned data. How do I update a field inside the loop? Here's my query:
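A generic skeleton of this kind of month loop, separate from the query above (all table and column names are placeholders, and DATEFROMPARTS needs SQL Server 2012+):

DECLARE @StartDate date = '20150101'
DECLARE @EndDate   date = '20151231'
DECLARE @MonthStart date = DATEFROMPARTS(YEAR(@StartDate), MONTH(@StartDate), 1)

WHILE @MonthStart <= @EndDate
BEGIN
    DECLARE @NextMonth date = DATEADD(MONTH, 1, @MonthStart)

    -- placeholder UPDATE: flag/summarize the rows belonging to this month
    UPDATE t
    SET    t.MonthlyFlag = 1
    FROM   dbo.MyTable t
    WHERE  t.SomeDate >= @MonthStart
      AND  t.SomeDate <  @NextMonth

    SET @MonthStart = @NextMonth
END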
Now, I have to get active data for a particular date range. Let me explain the definition of active data below:
StartDate : 01-Jul-2015 EndDate : 31-Dec-2015
It should return all the data which was active for that date range, even if it was only for one day. If no data is found for that date range, check the last record before the start date, and if it's active then it should be returned, else not.
I thought of creating a function, passing the primary key with the date range and returning the final status, but that doesn't seem like an optimized query.
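It can also be done in one set-based statement: take everything that overlaps the range, and union in the latest pre-range record only when nothing overlaps and it is still active. A sketch with an assumed layout dbo.StatusHistory(RecordId, ValidFrom, ValidTo, IsActive):

DECLARE @StartDate date, @EndDate date
SET @StartDate = '20150701'
SET @EndDate   = '20151231'

SELECT  h.RecordId, h.ValidFrom, h.ValidTo, h.IsActive
FROM    dbo.StatusHistory h
WHERE   h.ValidFrom <= @EndDate
  AND   ISNULL(h.ValidTo, '99991231') >= @StartDate      -- active at any point in the range

UNION ALL

SELECT  p.RecordId, p.ValidFrom, p.ValidTo, p.IsActive
FROM   (
         SELECT TOP (1) RecordId, ValidFrom, ValidTo, IsActive
         FROM   dbo.StatusHistory
         WHERE  ValidFrom < @StartDate
         ORDER BY ValidFrom DESC                          -- last record before the range
       ) p
WHERE   p.IsActive = 1
  AND   NOT EXISTS (SELECT 1
                    FROM   dbo.StatusHistory x
                    WHERE  x.ValidFrom <= @EndDate
                      AND  ISNULL(x.ValidTo, '99991231') >= @StartDate)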
I have a table of errors with a DateTime field for when the error occurred. I want to query the table for a given date range omitting the time portion. What is the most efficient way to perform this query?
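The usual pattern is a half-open range, so no function is applied to the datetime column and an index on it can still be used. A sketch with an assumed dbo.ErrorLog table:

DECLARE @FromDate date, @ToDate date
SET @FromDate = '20150201'
SET @ToDate   = '20150228'

SELECT *
FROM   dbo.ErrorLog
WHERE  ErrorDate >= @FromDate                  -- midnight at the start of the range
  AND  ErrorDate <  DATEADD(DAY, 1, @ToDate)   -- up to, but not including, the day after the end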
I tried to ask a similar question yesterday and got shot down, so I'll try again in a different way. I have been looking online at the gaps and islands approach, and it always seems to reference a single field, so I can't find anything clear enough to get my head around it. In the context of a hotel (people checking in and out), I would like to identify how long someone has been staying at the hotel (the island?) regardless of whether they checked out and back in the following day.
Data example:
DECLARE @LengthOfStay TABLE
(
    Person VARCHAR(8) NOT NULL,
    CheckIn DATE NOT NULL,
    CheckOut DATE NULL
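A gaps-and-islands sketch for date ranges, assuming the @LengthOfStay declaration above is completed and populated, that CheckOut is filled in for past stays, and SQL Server 2012+ for the window frames; a check-in on the day after the previous check-out is treated as the same stay:

;WITH Flagged AS
(
    SELECT  Person, CheckIn, CheckOut,
            CASE WHEN CheckIn <= DATEADD(DAY, 1,
                      MAX(CheckOut) OVER (PARTITION BY Person
                                          ORDER BY CheckIn
                                          ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING))
                 THEN 0 ELSE 1 END AS IsNewStay     -- 1 = this check-in starts a new island
    FROM    @LengthOfStay
),
Grouped AS
(
    SELECT  *,
            SUM(IsNewStay) OVER (PARTITION BY Person ORDER BY CheckIn
                                 ROWS UNBOUNDED PRECEDING) AS StayGroup
    FROM    Flagged
)
SELECT  Person,
        MIN(CheckIn)  AS StayStart,
        MAX(CheckOut) AS StayEnd,
        DATEDIFF(DAY, MIN(CheckIn), MAX(CheckOut)) AS Nights
FROM    Grouped
GROUP BY Person, StayGroup
ORDER BY Person, StayStart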