Hey. I need to substitute a value from a table when the input variable is null. This works fine if the value coming from the table is not null, but if the table value is also null, it doesn't work. The problem I'm getting is in the ISNULL line (the one in dark green): @inFileVersion is set to null explicitly, and when the ISNULL function evaluates, the value returned from DR.FileVersion is also null, which is correct. I want null = null to return true, which is why I set ANSI_NULLS off. But it doesn't return anything: the select statement should return something, but in my case it returns null. If I comment out the ISNULL line in the where clause, everything works fine. Please tell me what I am doing wrong. Is it possible to do this without setting ANSI_NULLS off? Thank you.
set ansi_nulls off
go

declare
    @inFileName    varchar(100),
    @inFileSize    int,
    @Id            int,
    @inlanguageid  int,
    @inFileVersion varchar(100),
    @ExeState      int

set @inFileName    = 'A0006337.EXE'
set @inFileSize    = 28796
set @Id            = 1
set @inlanguageid  = null
set @inFileVersion = null
set @ExeState      = 0

select DR.StateID
from table1 DR
where DR.[FileName] = @inFileName
  and DR.FileSize = @inFileSize
  and DR.FileVersion = isnull(@inFileVersion, DR.FileVersion)
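For what it's worth, a NULL-safe rewrite of that predicate avoids SET ANSI_NULLS OFF entirely (a sketch against the same table and variables):

select DR.StateID
from table1 DR
where DR.[FileName] = @inFileName
  and DR.FileSize = @inFileSize
  and (@inFileVersion is null or DR.FileVersion = @inFileVersion)

When @inFileVersion is null, the last condition is simply true for every row, which is the behavior the ISNULL version was trying to get; when it has a value, it is an ordinary equality test.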
Hi, is it possible to run a certain SQL statement against SQL Server and ask it not to fire any triggers? Or would it be better to disable the trigger and then re-enable it afterward? If so, how? Thanks, Ed
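For reference, the disable/re-enable route is short (a sketch; dbo.Orders and the UPDATE are placeholders for the real table and statement):

-- turn off every trigger on the table
ALTER TABLE dbo.Orders DISABLE TRIGGER ALL

-- run the statement that should not fire triggers
UPDATE dbo.Orders SET Status = 'X' WHERE OrderID = 1

-- turn the triggers back on
ALTER TABLE dbo.Orders ENABLE TRIGGER ALL

Note that while the triggers are disabled they are disabled for every session, not just yours, so this is usually wrapped in a transaction or done in a maintenance window.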
I have a device application that simply needs to upload data to a server. The preferred DB server is Oracle, but I've made it work using RDA and SQL Server. The problem I'm having is that it just needs to upload data, which I send using the RDA.Push() method. The data arrives just fine, the first time. With every subsequent upload, all of the previous data is deleted from the server. Apparently RDA is tracking the deletion of the previously uploaded data locally, and on the next .Push it deletes that data from the server.
My question is: Is it possible to prevent RDA from deleting data on SQL Server? I attempted to delete the rows from the __sysDeletedRows/__sysRowTrack tables but got a "Data is read only" error.
--Environment: SQL Server 2000
I am running the following query to find who reaches age 65/75 in May 2008, but the results include Member_DOB / Spouse_DOB values from other months. The month should be '05', because I only want people reaching age 65/75 in May 2008. Please help in this regard.

--Query:
select m.empid,
       m.dob "Member_DOB",
       round(datediff(dd, m.dob, '05/30/2008') / 365.25, 1) Member_Age,
       d.dob "Spouse_DOB",
       round(datediff(dd, d.dob, '05/30/2008') / 365.25, 1) Spouse_Age
from member m
left outer join (select * from depend where depcode = 'S' and activestatus = 1) d
       on m.empid = d.empid
where (datepart(yy, m.dob) in (1933, 1943) or datepart(yy, d.dob) in (1933, 1943))
  and round(datediff(dd, m.dob, '05/30/2008') / 365.25, 1) in (65, 75)
   or round(datediff(dd, d.dob, '05/30/2008') / 365.25, 1) in (65, 75)
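One likely culprit (an observation on the query as posted, not a tested fix): AND binds more tightly than OR, so the final OR branch escapes the year filter entirely and matches spouses born in any month. Parenthesizing the two age tests as a unit, and adding an explicit month test, would look like this:

where (datepart(yy, m.dob) in (1933, 1943) or datepart(yy, d.dob) in (1933, 1943))
  and (   (round(datediff(dd, m.dob, '05/30/2008') / 365.25, 1) in (65, 75) and datepart(mm, m.dob) = 5)
       or (round(datediff(dd, d.dob, '05/30/2008') / 365.25, 1) in (65, 75) and datepart(mm, d.dob) = 5))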
Does anyone know if there is an easy way to keep the "Select All" option from appearing on reports with multi-selects? I am going to have a hard time getting the development staff to update all of our reports AGAIN after making them conform to SP1.
Please let me know if there is a way before I install SP2.
I need to turn off validation and I've seen some threads saying this is not possible but my situation has a twist.
A customer needs the package to connect through different modem dial-up connections to reach different servers (they use dial-up for security reasons). We have written two VB script tasks, one at the beginning and one at the end of a loop, with data flows in between. Before the loop, the dial-up connection info is read into a recordset along with the Data Source connection information. The first script uses this information to dial up, and the last script hangs up the connection. The problem is that the package tries to validate the data connections before it has dialed up, so it fails.
We managed to confirm it works in a test environment by putting a break in the first script, manually VPNing into the test network (to allow validation of the data flow to work), and then manually disconnecting from VPN during the break. The script dials in and pumps the data. But this won't be an option in production.
So if anyone has figured out a way to turn off validation, great. Otherwise, any ideas to make this work? I was thinking about setting up a dummy connection that would be connected outside the package before running, just for validation (the script would then disconnect at the start), but I would prefer to handle all of this within SSIS.
Any help? While I see the point of validation it's a bummer that MSFT didn't put this in the hands of the user.
I am in a scenario where my tables are refreshed every morning by a batch update. I have built a few views off of one table. To increase speed, I would like to take all the rows from one of the views and insert them into their own table. I know this can be done with some T-SQL, but I'm new to it and don't know how to do it specifically. Any detailed help would be greatly appreciated. -Nate
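For reference, both common patterns are one statement each (a sketch; dbo.MyView and dbo.MyViewSnapshot are placeholder names):

-- create a brand-new table from the view's current rows
SELECT * INTO dbo.MyViewSnapshot FROM dbo.MyView

-- or, if the snapshot table already exists, empty it and reload it
TRUNCATE TABLE dbo.MyViewSnapshot
INSERT INTO dbo.MyViewSnapshot SELECT * FROM dbo.MyView

SELECT ... INTO builds the table for you; the TRUNCATE/INSERT pair is the usual choice once the table exists, and fits nicely into a morning batch job.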
I've constructed the SELECT statement to show the rows I want - and it shows 189 rows. Now I want to delete these rows. Here is the SELECT statement:
SELECT tblinqty.*
FROM tblinqty
LEFT JOIN tblmporder ON tblinqty.linkidsub = tblmporder.orderno
WHERE tblmporder.orderno IS NULL
  AND tblinqty.transtype = '0'
  AND tblinqty.linkid = 'MP'
If I change the statement to "select * from tblinqty where exists ()", putting the above command inside the (), it returns over 12,000 rows! My intention is to change the SELECT into a DELETE by replacing the "select *" with "DELETE" - but if I do that it will delete the wrong rows. What is the easiest way to turn the above successful SELECT statement, which yields 189 rows, into a DELETE statement that deletes the same 189 rows?
I've tried changing the statement to a WHERE, thinking it would be easier to change to a DELETE, but the following yields 0 rows:
SELECT tblinqty.*
FROM tblinqty
WHERE tblinqty.linkidsub = tblmporder.orderno
  AND tblmporder.orderno IS NULL
  AND tblinqty.transtype = '0'
  AND tblinqty.linkid = 'MP'
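For reference, T-SQL lets a DELETE carry the same FROM/JOIN clause as the working SELECT, so the 189-row result can be deleted directly (a sketch built from the query above; run it inside a transaction first if you want to double-check the row count):

DELETE tblinqty
FROM tblinqty
LEFT JOIN tblmporder ON tblinqty.linkidsub = tblmporder.orderno
WHERE tblmporder.orderno IS NULL
  AND tblinqty.transtype = '0'
  AND tblinqty.linkid = 'MP'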
I'm using SQL 2008. I want to essentially turn rows into columns. The source table has a variable number of rows and a fixed number of columns - the magical, elusive SQL query will yield a result that has a variable number of columns and fixed number of rows. A slight twist is that there is grouping by Territory, and in this example the first two rows should be reduced to one, with the SlsPerson concatenated to AA/BB.
The table, represented by RC_DataTable:
Territory     State  Est   SlsPerson
-----------   -----  ----  ---------
Chicago       IL     2004  AA
Chicago       IL     2004  BB
New York      NY     1989  CC
Los Angeles   CA     2007  DD
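The grouping/concatenation half of this has a standard idiom; here is a sketch against RC_DataTable (it assumes Territory determines State and Est, as in the sample, and the variable-column half would still need a dynamic PIVOT on top):

SELECT Territory, State, Est,
       STUFF((SELECT '/' + SlsPerson
              FROM RC_DataTable t2
              WHERE t2.Territory = t1.Territory
              ORDER BY SlsPerson
              FOR XML PATH('')), 1, 1, '') AS SlsPerson
FROM RC_DataTable t1
GROUP BY Territory, State, Est

This collapses the two Chicago rows into one with SlsPerson = 'AA/BB'.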
I have a summary table with a number of columns that give all the information I need to build a report. What I would like to do is create a view specific to a single report that organizes the data so that each row represents one metric. The only way I know of to do this would be with a series of UNION queries, but that would require querying what is basically the same data multiple times. Is there some way to gather the data in one pass and then split it into the metric rows? Since I doubt I'm explaining this well, I'll just try an example.
Let say I have a summary table with the following columns: Location, Severity, Date_Day, Number_Dispatch, Dispatch_Duration, Dispatch_Goal, Number_Dispatch_Met_Goal, and Dispatch_Met_Goal.
Now I want to turn this into a table with the following columns: Metric, Location, Severity, Goal, Value.
The only way I know how to do that is with the following SQL:
--Number Dispatched Yesterday
SELECT 'Tickets Dispatched Yesterday' AS Metric, Location, Severity, 'N/A' AS Goal,
       sum(Number_Dispatch) AS Value
FROM Summary_Table
WHERE ...

UNION
--Average Dispatch Duration
SELECT 'Average Dispatch Duration' AS Metric, Location, Severity, Dispatch_Goal AS Goal,
       sum(Dispatch_Duration) / sum(Number_Dispatch) AS Value
FROM Summary_Table
WHERE ...

UNION
--Percent Dispatch Duration Met Goal
SELECT 'Percent Dispatch Duration Met Goal' AS Metric, Location, Severity, Dispatch_Met_Goal AS Goal,
       sum(Number_Dispatch_Met_Goal) / sum(Number_Dispatch) AS Value
FROM Summary_Table
WHERE ...
Now I don't have a problem writing a statement for each metric, but it seems like a rather wasteful query, as each SELECT would repeat the same WHERE clause. What I'd like is some way to either do all of the above in one pass (some kind of CASE statement, perhaps?) or some way to pull the data for all the UNION queries in one pass.
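One way to make it a single pass (a sketch; it assumes SQL Server 2008 for the VALUES row constructor, and that the shared WHERE/GROUP BY is per Location and Severity): aggregate once in a derived table, then fan each aggregated row out into one row per metric with CROSS APPLY.

SELECT m.Metric, a.Location, a.Severity, m.Goal, m.Value
FROM (
    SELECT Location, Severity,
           MAX(Dispatch_Goal)            AS Dispatch_Goal,
           MAX(Dispatch_Met_Goal)        AS Dispatch_Met_Goal,
           SUM(Number_Dispatch)          AS Number_Dispatch,
           SUM(Dispatch_Duration)        AS Dispatch_Duration,
           SUM(Number_Dispatch_Met_Goal) AS Number_Dispatch_Met_Goal
    FROM Summary_Table
    -- WHERE ...   (the shared filter, evaluated once)
    GROUP BY Location, Severity
) a
CROSS APPLY (VALUES
    ('Tickets Dispatched Yesterday',       'N/A',                                    CAST(a.Number_Dispatch AS float)),
    ('Average Dispatch Duration',          CAST(a.Dispatch_Goal AS varchar(20)),     a.Dispatch_Duration * 1.0 / NULLIF(a.Number_Dispatch, 0)),
    ('Percent Dispatch Duration Met Goal', CAST(a.Dispatch_Met_Goal AS varchar(20)), a.Number_Dispatch_Met_Goal * 1.0 / NULLIF(a.Number_Dispatch, 0))
) m (Metric, Goal, Value)

On SQL Server 2005 the VALUES block can be replaced with three SELECTs glued together by UNION ALL inside the APPLY.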
I am pretty new to MS SQL Server; I actually work as an SAP BASIS admin, and one of our clients is on MS SQL 2005 with SP1 as the DB and Windows 2003 as the server.
I just wanted to know
1) How can we log in to the database from the command prompt?
2) How can we alter the database to non-archive mode and back to archive mode?
I just want to execute a few SAP activities for which I don't want the database to generate archive logs, so I need some assistance.
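For reference, here is roughly what both pieces look like (a sketch; server, database, and credentials are placeholders, and SQL Server's closest analog to Oracle's ARCHIVELOG/NOARCHIVELOG is the recovery model):

1) From the command prompt, the sqlcmd utility ships with SQL Server 2005:

sqlcmd -S myserver\myinstance -d mydb -E
sqlcmd -S myserver\myinstance -d mydb -U someuser -P somepassword

(the first uses Windows authentication, the second SQL Server authentication)

2) To stop full transaction logging and switch back afterward:

ALTER DATABASE mydb SET RECOVERY SIMPLE
GO
-- ... run the SAP activities ...
ALTER DATABASE mydb SET RECOVERY FULL
GO

One caveat worth knowing: after switching back to FULL, take a full or differential backup, since the trip through SIMPLE breaks the log backup chain.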
Hoping to START GOOD here in this forum... Thanks a million in advance to all
Say I have a table of data containing something like Region | County | Year | Month | Value for some sort of value (int). I want to re-arrange this data so that it comes out like this: Region | County | Year | J | F | M | A | M | J | J | A | S | O | N | D, where the letters are obviously the months in order. How would I go about this / what's the best way? I attempted to use 12 INNER JOINs on the table itself; sadly, that failed miserably. Also, this doesn't seem very efficient. Before you ask, I got rid of my original code (gave up!)
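For reference, the usual alternative to self-joins is conditional aggregation, one CASE per month in a single scan. A sketch, assuming Month is stored as a number 1-12 and dbo.MonthlyValues is a placeholder name (full month abbreviations are used because the single-letter headers collide - J and M each repeat):

SELECT Region, County, [Year],
       SUM(CASE WHEN [Month] =  1 THEN Value ELSE 0 END) AS Jan,
       SUM(CASE WHEN [Month] =  2 THEN Value ELSE 0 END) AS Feb,
       SUM(CASE WHEN [Month] =  3 THEN Value ELSE 0 END) AS Mar,
       SUM(CASE WHEN [Month] =  4 THEN Value ELSE 0 END) AS Apr,
       SUM(CASE WHEN [Month] =  5 THEN Value ELSE 0 END) AS May,
       SUM(CASE WHEN [Month] =  6 THEN Value ELSE 0 END) AS Jun,
       SUM(CASE WHEN [Month] =  7 THEN Value ELSE 0 END) AS Jul,
       SUM(CASE WHEN [Month] =  8 THEN Value ELSE 0 END) AS Aug,
       SUM(CASE WHEN [Month] =  9 THEN Value ELSE 0 END) AS Sep,
       SUM(CASE WHEN [Month] = 10 THEN Value ELSE 0 END) AS Oct,
       SUM(CASE WHEN [Month] = 11 THEN Value ELSE 0 END) AS [Nov],
       SUM(CASE WHEN [Month] = 12 THEN Value ELSE 0 END) AS [Dec]
FROM dbo.MonthlyValues
GROUP BY Region, County, [Year]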
Hello everyone, can anyone update me on performance tuning of SSIS, and what difference it would make if I changed the default values of these two parameters in each Data Flow: DefaultBufferMaxRows and DefaultBufferSize?
Also, what are these parameters used for?
I have a denormalization question that seems fairly fundamental but I haven't found the answer in BOL. I have data stored in a normalized transaction oriented database that I would like to denormalize to do some queries/analysis. Many tables contain attributes that are virtual columns driven by configuration. I am struggling with how to take those rows of data and turn them into columns of data.
Example Source:
Column1: CustomerId
Column2: AttributeType
Column3: AttributeValue
Ex Data:
123, ShoeSize, 9
123, Age, 45
123, Gender, Male
I would like to turn that into a table with one row, many columns:
CustomerId, ShoeSize, Age, Gender
123, 9, 45, Male
Also, I have other tables keyed off of the CustomerId that I would like to append to my output table as more columns - for example, a customer's address.
Example Source:
Column1: CustomerId
Column2: AddressLine1
Column3: AddressLine2
Column4: City
Column5: State
Column6: Zip
If I need to combine several tables, should I nest several merge transformations?
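If the reshape can happen on the source side instead, the whole thing is one query (a sketch; table and column names are assumed from the examples above):

SELECT a.CustomerId,
       MAX(CASE WHEN a.AttributeType = 'ShoeSize' THEN a.AttributeValue END) AS ShoeSize,
       MAX(CASE WHEN a.AttributeType = 'Age'      THEN a.AttributeValue END) AS Age,
       MAX(CASE WHEN a.AttributeType = 'Gender'   THEN a.AttributeValue END) AS Gender,
       ad.AddressLine1, ad.AddressLine2, ad.City, ad.State, ad.Zip
FROM dbo.CustomerAttributes a
LEFT JOIN dbo.CustomerAddress ad ON ad.CustomerId = a.CustomerId
GROUP BY a.CustomerId, ad.AddressLine1, ad.AddressLine2, ad.City, ad.State, ad.Zip

The MAX(CASE ...) pattern turns each attribute row into its own column, and additional keyed tables (like the address) come in as plain joins rather than nested merge transformations.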
When creating a package, SSIS assumes varchar columns as Unicode (DT_WSTR) so before loading data into the target tables, I have to perform a data conversion from DT_WSTR to DT_STR.
Is there any way to turn UNICODE off? So I do not need to do the conversion? Please advise...
I have a multidimensional cube. Among the dimensions I have, there is one dimension with one hierarchy defined. When I view any of my measures on this hierarchy, I don't want the measures to be aggregated. I read some threads here having to do with semi-additive behaviors, but I'm not sure whether that applies to my case (or maybe it does and I'm just not getting it).
The purpose of "AnotherPersonID" is to have a self join that describes a certain relationship. So, in this case, Person B and C are related to Person A. I describe this relationship using a hierarchy where A is a parent of B and C. However, I don't want the measures for Person_A to be replaced by the sum of the measures in Person_B and Person_C.
Why do I have a hierarchy if I don't want to sum the numbers? This is motivated by the need to dynamically report on A's numbers when B or C's numbers are reported. I'm just using persons as an example, but, in my case I have a bunch of members that can be paired with another member in that dimension. And when I report on a member that joins to another member in that dimension, I need to dynamically report on that member's measures as well.
Lastly, If the use of a hierarchy is not the best approach for this, would you recommend another approach?
Hello, I'm loading a fact table of more than 8 million records. The SQL Server database is taking a very long time to get through these inserts and updates. Can anyone guide me on tuning the SQL Server 2005 database?
Hello, I need a little help turning this:

SELECT RequestNum FROM Tickets WHERE ReceiptDate>='" & FromDate & "' AND ReceiptDate<='" & ToDate & "'"

into a sproc, because of the two different values (FromDate and ToDate) for the ReceiptDate field in the database. I have this so far (problem areas are ??):

Dim AuditConnection As New SqlConnection(ConnString)
Dim AuditCommand As New SqlCommand("CreateAudit", AuditConnection)
AuditCommand.CommandType = CommandType.StoredProcedure
AuditCommand.Parameters.Add(New SqlParameter("@??", SqlDbType.NVarChar)).Value = FromDate
AuditCommand.Parameters.Add(New SqlParameter("@??", SqlDbType.NVarChar)).Value = ToDate
AuditConnection.Open()
Dim AuditResult As SqlDataReader = AuditCommand.ExecuteReader()
AuditGrid.DataSource = AuditResult
AuditGrid.DataBind()
AuditConnection.Close()

and:

CREATE PROCEDURE CreateAudit
??
??
AS
SELECT RequestNum
FROM Tickets
WHERE ??
AND ??
GO

I know I'm an idiot and this should be something simple. Arrrgh. Any help is appreciated immensely!!! :)
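For reference, a filled-in version of the sproc might look like this (a sketch; the parameter names @FromDate/@ToDate are invented, and datetime is assumed for ReceiptDate):

CREATE PROCEDURE CreateAudit
    @FromDate datetime,
    @ToDate   datetime
AS
SELECT RequestNum
FROM Tickets
WHERE ReceiptDate >= @FromDate
  AND ReceiptDate <= @ToDate
GO

The VB side would then add SqlDbType.DateTime parameters named "@FromDate" and "@ToDate" in place of the two NVarChar ones.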
I am trying to enable service broker by issuing this command:
USE master ;
GO
ALTER DATABASE msdb SET ENABLE_BROKER ;
GO
It is taking a while, and I am wondering whether msdb needs to be in single-user mode; some smaller DBs completed right away. Going through Surface Area Configuration I got a message that I need a Service Broker endpoint. I looked at another DB that has Database Mail functioning, and it gives the same message saying this instance needs an endpoint in Surface Area Configuration. What do you think is wrong?
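A known wrinkle that may explain the wait: SET ENABLE_BROKER needs an exclusive lock on the database, and msdb almost always has active sessions (SQL Agent, Database Mail), so the ALTER can block indefinitely. The usual workaround is to either stop SQL Server Agent first or force the other sessions off as part of the change:

USE master ;
GO
ALTER DATABASE msdb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE ;
GO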
I have inherited a database which started life under SQL 7 (where Torn page Detection was OFF by default), and I'd like to turn it on. Will this reshuffle all the pages to make room for the extra check-sum, or is that stored in a single block somewhere else such that it can easily be added?
Is the change going to block access for long? (DB = between 2~5GB)
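For reference, the switch itself is a single statement (a sketch; the database name is a placeholder). As I understand it, the torn-page bits live in each page's header and are stamped as pages are subsequently written, so there is no up-front reshuffle and the ALTER itself should not block access for long:

ALTER DATABASE MyDb SET TORN_PAGE_DETECTION ON ;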
I need to set:

SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET NUMERIC_ROUNDABORT OFF

before I do a bulk load, because the table I am inserting into has an indexed view created on it. What's the best way to set these options prior to a bulk load?
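For what it's worth, these options are session-scoped, so setting them at the top of the same session or script that performs the load is enough; T-SQL also lets you set several in one statement. A sketch (file and table names are placeholders):

SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON ;
SET NUMERIC_ROUNDABORT OFF ;

BULK INSERT dbo.TargetTable
FROM 'C:\load\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n') ;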
How can I alter a table, turning the IDENTITY property of a field on or off?
For example: if my DB had Client_ID as an IDENTITY field and for some reason it was changed to a plain INT (with no IDENTITY), how can I make it an IDENTITY field again?
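As far as I know, ALTER TABLE ... ALTER COLUMN cannot add or remove IDENTITY, so the usual route is to rebuild the table and copy the data across. A sketch that preserves the existing key values (the two-column schema is invented for illustration):

CREATE TABLE dbo.Clients_new (
    Client_ID  int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ClientName varchar(100) NOT NULL
) ;

SET IDENTITY_INSERT dbo.Clients_new ON ;
INSERT INTO dbo.Clients_new (Client_ID, ClientName)
SELECT Client_ID, ClientName FROM dbo.Clients ;
SET IDENTITY_INSERT dbo.Clients_new OFF ;

DROP TABLE dbo.Clients ;
EXEC sp_rename 'dbo.Clients_new', 'Clients' ;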
Also, does anyone know of an article on database planning? (I want to know when I should use an IDENTITY field.)
I have a table which stores all prices for companies' daily trading. Companies ABC & XYZ have information available for the high & low values on each DateID.
CREATE TABLE [dbo].[TestGrid] (
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [CompanyName] [varchar](200) NOT NULL,
    [DateID] [int] NOT NULL,
    [High] [float] NOT NULL,
    [Low] [float] NOT NULL
[code]...
What I'm trying to do is turn the DateID values into columns, and then, as an additional change, have those columns represent the actual day of the week for each DateID. How would I know that 20121201 is equal to, say, Monday and 20121202 is Tuesday?
END RESULT:
NAME  TYPE  MONDAY  TUESDAY  WEDNESDAY
ABC   HIGH  0.5     0.6      1
ABC   LOW   0.1     1.5      0.6
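A sketch of one route there (it assumes SQL Server 2008 for the date type and the VALUES constructor): convert the int DateID to a date via style 112, take DATENAME(weekday, ...), unpivot High/Low into rows, then conditionally aggregate per weekday:

;WITH g AS (
    SELECT CompanyName,
           DATENAME(weekday, CONVERT(date, CAST(DateID AS char(8)), 112)) AS DayName,
           v.PriceType, v.Val
    FROM dbo.TestGrid
    CROSS APPLY (VALUES ('HIGH', High), ('LOW', Low)) v (PriceType, Val)
)
SELECT CompanyName AS [NAME], PriceType AS [TYPE],
       MAX(CASE WHEN DayName = 'Monday'    THEN Val END) AS MONDAY,
       MAX(CASE WHEN DayName = 'Tuesday'   THEN Val END) AS TUESDAY,
       MAX(CASE WHEN DayName = 'Wednesday' THEN Val END) AS WEDNESDAY
FROM g
GROUP BY CompanyName, PriceType ;

DATENAME is what answers the "how would I know 20121201 is a Monday" part; the weekday comes straight from the calendar date.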
I am going to put in the whole query, but my question is about changing a DATEDIFF minutes amount into DD:HH:MM.
Query is:

select c.APPLICANT_ID as [Applicant ID],
       aetc.EVENT_TYPE as [Event Type],
       cast(aetr.CREATE_DATE as date) as [Registration Date],
       cast(aetc.CREATE_DATE as date) as [C Creation Date],
       datediff(mi, cast(aetr.CREATE_DATE as datetime), cast(aetc.CREATE_DATE as datetime)) as [time diff],
[Code] .....
I want this as dd:hh:mm rather than just minutes.
Currently this field is just minutes, so it shows a figure such as 20, depending on the time difference between the two dates.
I have tried the convert statement - convert(char(5) ..........., 108) - within this, but that's not working, and I have used FLOOR before to do this type of thing, but I'm not sure where it should go.
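For reference, one way to format a minute count as DD:HH:MM with plain integer math (a sketch; @mins stands in for the DATEDIFF expression, and days over 99 would need a wider field):

DECLARE @mins int ;
SET @mins = 3125 ;  -- e.g. the datediff(mi, ...) value

SELECT RIGHT('0' + CAST(@mins / 1440 AS varchar(10)), 2) + ':'        -- days
     + RIGHT('0' + CAST((@mins % 1440) / 60 AS varchar(2)), 2) + ':'  -- hours
     + RIGHT('0' + CAST(@mins % 60 AS varchar(2)), 2) ;               -- minutes; gives '02:04:05'

convert(char(5), ..., 108) only formats actual date/time values and wraps at 24 hours, which is why it falls over on a raw minute count.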
Subject says it all, really. I want to start using Token Replacement, but do I break anything by enabling it? Do jobs that don't use tokens require any changes? I saw somewhere that it can't be turned off, so I'm paranoid about enabling it. Anything that I should be aware of? Many thanks.
I have heard that turning off primary key-to-foreign key relationships between tables helps boost performance in production environments. Is this really true?
Do you have a general rule of thumb for breaking a complex query into temp tables? For someone who is not a SQL specialist, a query with more than a few table joins can be complex, and one with 10+ joins can be overwhelming.
One strategy is to break the problem into pieces: group closely related tables into temp tables, then join those temp tables together. This simplifies complex SQL; it is not as performant as one big query, but it is much easier to understand. So do you have a rule of thumb for how many joins you allow in a query before you break it into temp tables?
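For illustration, the pattern in miniature (all table and column names are hypothetical):

-- stage one cluster of closely related tables
SELECT o.OrderID, o.OrderDate, c.CustomerName, c.Region
INTO #OrderCustomer
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID ;

-- stage another cluster
SELECT od.OrderID, SUM(od.Qty * od.UnitPrice) AS OrderTotal
INTO #OrderTotals
FROM dbo.OrderDetails od
GROUP BY od.OrderID ;

-- the final query then joins two intermediates instead of four base tables
SELECT oc.OrderID, oc.OrderDate, oc.CustomerName, oc.Region, ot.OrderTotal
FROM #OrderCustomer oc
JOIN #OrderTotals ot ON ot.OrderID = oc.OrderID ;

As for a threshold, there is no hard rule; what matters more than the raw join count is whether each intermediate has a clear grain (one row per what?) that you can sanity-check on its own.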