I have a Union All component with 7 inputs (and it will grow to 13 inputs shortly). Is there any way to correlate each "Union All Input X" with its actual input?
When I get an incompatible-datatype message for one of the inputs, how do I know which input stream to pick from?
Is this a silly question, or is it a feature request?
It seems strange to me that the UNION ALL component waits for at least one row from each input buffer before it puts anything into the output buffer (that's the behaviour I have observed, anyway).
I have an SSIS package I've been working on that has been working just fine until now.
There are four inputs to this package, all OLEDB Inputs based on SQL Statement expressions.
These four inputs are then merged with a Union All step.
One of the merged columns is called "unit", and now suddenly this column cannot be added to the Union All step for one of the inputs.
I get the following error: Error at Import mat [Union All [330]]: The metadata for "input column "unit" (8979)" does not match the metadata for the associated output column.
Error at Import Mat [Union All [330]]: Failed to set property "OutputColumnLineageID" on "input column "unit" (8979)".
My question is: can anyone suggest how I can diagnose this issue? I can't see the metadata in question, but when I physically inspect the results of the source queries, they all seem consistent with each other. The unit column is derived in a similar fashion in each query. I'm a bit lost with this: I can't seem to shake off the error, and nothing has changed in the database to cause it.
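One way to rule out a type/length mismatch (a sketch, not necessarily the fix): force identical metadata by casting "unit" to one agreed type in every one of the four source queries, then delete the unit mapping in the Union All editor and re-add it so the stale lineage ID is rebuilt. The table, key column and varchar(20) length below are placeholders:

-- applied to each of the four OLE DB source queries (varchar(20) is an assumed target type)
SELECT SourceKey,
       CAST(unit AS varchar(20)) AS unit   -- identical type and length in every input
FROM dbo.SourceTable1;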
I have a query which does 3 selects and UNION ALLs them to get a final result set. The performance is unacceptable: it takes around a minute to run. If I remove the UNION ALL so that the result sets are returned individually, the query returns all 3 in around 6 seconds (acceptable performance).
Is there any way to join the result sets together without using UNION ALL?
Each result set has exactly the same structure returned...
Query below [for reference]...
WITH cte AS (
    SELECT A.[PoleID],
           ISNULL(B.[IsSpanClear], 0) AS [IsSpanClear],
           B.[SurveyDate],
           ROW_NUMBER() OVER (PARTITION BY A.[PoleID] ORDER BY B.[SurveyDate] DESC) rownum
    FROM [UT_Pole] A
    LEFT OUTER JOIN [UT_Surveyed_Pole] B ON A.[PoleID] = B.[PoleID]
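One workaround sketch, assuming the three SELECTs really do return the same columns: materialise each branch into a temp table so the optimizer plans each one on its own, then read the combined rows once. The column types and the Branch1/2/3 placeholders below are assumptions based on the fragment above.

CREATE TABLE #combined
(
    PoleID      int,
    IsSpanClear int,
    SurveyDate  datetime
);

-- one INSERT per branch; dbo.Branch1/2/3 stand in for the three SELECT statements
INSERT INTO #combined (PoleID, IsSpanClear, SurveyDate)
SELECT PoleID, IsSpanClear, SurveyDate FROM dbo.Branch1;

INSERT INTO #combined (PoleID, IsSpanClear, SurveyDate)
SELECT PoleID, IsSpanClear, SurveyDate FROM dbo.Branch2;

INSERT INTO #combined (PoleID, IsSpanClear, SurveyDate)
SELECT PoleID, IsSpanClear, SurveyDate FROM dbo.Branch3;

SELECT PoleID, IsSpanClear, SurveyDate FROM #combined;

DROP TABLE #combined;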
I have a Union All transformation with 4 inputs and one output. When I debug the package, the sum of the rows from the different inputs does not match the row count in the output.
I don't understand, I've used the Union All transform many times and I've never seen this.
I'm currently at a company with a MS SQL Server 2000 instance. I'm looking into the indexes and tables to see if there are some bottlenecks there, but the LogicalFragmentation is very low in the indexes I have checked.
However, this table has a LogicalFragmentation of 99.9215698242188, which I get when I do DBCC SHOWCONTIG ([TInsurance]) WITH TABLERESULTS. Is that a value to be trusted or not, since this does not check an index? If it is, how do I defrag a table? I only know how to defrag an index (example: DBCC INDEXDEFRAG (MFSSEK, [TInsurance], PK_InsuranceID)).
Tips, suggestions, help: all very welcome! :-)
DBCC SHOWCONTIG scanning 'TInsurance' table...
Table: 'TInsurance' (2051694557); index ID: 0, database ID: 17
TABLE level scan performed.
- Pages Scanned................................: 1275
- Extents Scanned..............................: 225
- Extent Switches..............................: 224
- Avg. Pages per Extent........................: 5.7
- Scan Density [Best Count:Actual Count].......: 71.11% [160:225]
- Extent Scan Fragmentation ...................: 74.67%
- Avg. Bytes Free per Page.....................: 520.0
- Avg. Page Density (full).....................: 93.58%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
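A heap (index ID 0, as in the output above) cannot be defragmented with DBCC INDEXDEFRAG. One common workaround on SQL Server 2000 (a sketch; the column choice is an assumption) is to build a clustered index, which physically rebuilds and compacts the table, and then drop it only if the table really must stay a heap:

CREATE CLUSTERED INDEX IX_TInsurance_Defrag ON TInsurance (InsuranceID);
-- DROP INDEX TInsurance.IX_TInsurance_Defrag;   -- optional: only if the table must remain a heap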
I have seen several examples explaining the fact that a table containing a field for each day of the week is for the most part an array. A specific example is where data representing worked hours is stored in a table.

CREATE TABLE [hoursWorked] (
    [id]          [int]     NOT NULL,
    [location_id] [tinyint] NOT NULL,
    [sunday]      [int]     NULL,
    [monday]      [int]     NULL,
    [tuesday]     [int]     NULL,
    [wednesday]   [int]     NULL,
    [thursday]    [int]     NULL,
    [friday]      [int]     NULL,
    [saturday]    [int]     NULL
)

I had to work with a table with a similar structure about 7 years ago, and I remember that writing code against the table was pretty close to Hell on earth. I am now looking at a table that is similar in nature, but different.

CREATE TABLE [blah] (
    [concat_1_id] [int]          NOT NULL,
    [concat_2_id] [int]          NOT NULL,
    [code_1]      [varchar] (30) NOT NULL,
    [code_2]      [varchar] (20) NULL,
    [code_3]      [varchar] (20) NULL,
    [some_flg]    [char] (1)     NOT NULL
) ON [PRIMARY]

The values for code_2 and code_3 will be dependently null, and they will represent similar data in both records (i.e. the value "abc" can exist in both fields). For example, if code_2 contains data then code_3 will probably not contain data. I do not think that this is an array, but with so many rows where code_2 and code_3 will be NULL, something just does not feel right. I will appreciate your input.
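If code_2 and code_3 really hold the same kind of value and only one of them tends to be populated per row, one normalised alternative (just a sketch; the names are made up) is to move the codes into a child table with a slot/discriminator column, so the mostly-NULL columns disappear entirely:

CREATE TABLE [blah_code] (
    [concat_1_id] [int]          NOT NULL,
    [concat_2_id] [int]          NOT NULL,
    [code_slot]   [tinyint]      NOT NULL,   -- 2 or 3: which original column the value came from
    [code_value]  [varchar] (20) NOT NULL
)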
Does anyone know how to identify the hottest, most active tables in a database? We have hundreds of users hitting a PeopleSoft database with hundreds of tables. We are I/O bound on our SAN, and are thinking of putting the hottest tables on a solid state (RAM) drive for improved performance. Problem is: which are the hottest tables? Would like to do this based on hard data instead of developer/vendor guesses. Any suggestions are much appreciated.
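If the instance is SQL Server 2005 or later (an assumption), one rough starting point is the index usage DMV aggregated per table; the counters reset whenever the instance restarts, so collect them over a representative period:

SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates)                               AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()          -- run inside the PeopleSoft database
GROUP BY OBJECT_NAME(s.[object_id])
ORDER BY reads DESC;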
Hi, for SQL Server 2000, how can we find the fix pack (service pack) level installed on the server? Is there any command, or any GUI tool, to identify the level?
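SERVERPROPERTY works on SQL Server 2000 and reports the service pack level directly; @@VERSION shows the same information as text:

SELECT SERVERPROPERTY('ProductVersion') AS product_version,  -- e.g. 8.00.2039
       SERVERPROPERTY('ProductLevel')   AS service_pack;     -- e.g. SP4
SELECT @@VERSION;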
I have a huge DB with many services, users and applications hitting it. Suddenly one of our columns was set to NULL, and we are not able to track who did it or how it was done.
Can anyone tell me the best way to identify this? A trace (and if so, which events to select), a trigger, or something else?
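A trace works (RPC:Completed and SQL:BatchCompleted, filtered on the table name), but if the column keeps getting nullified, a narrow audit trigger is often simpler. A sketch, with placeholder table/column names:

CREATE TABLE dbo.ColumnNullAudit
(
    AuditID     int IDENTITY(1,1) PRIMARY KEY,
    KeyValue    int,
    ChangedAt   datetime      DEFAULT GETDATE(),
    LoginName   nvarchar(128) DEFAULT SUSER_SNAME(),
    HostName    nvarchar(128) DEFAULT HOST_NAME(),
    ProgramName nvarchar(128) DEFAULT APP_NAME()
);
GO
CREATE TRIGGER trg_WatchColumn ON dbo.YourTable
AFTER UPDATE
AS
IF UPDATE(YourColumn)
    INSERT INTO dbo.ColumnNullAudit (KeyValue)
    SELECT i.YourKey
    FROM inserted AS i
    JOIN deleted  AS d ON d.YourKey = i.YourKey
    WHERE i.YourColumn IS NULL
      AND d.YourColumn IS NOT NULL;   -- log only the transition to NULL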
There is a Crystal Report run from the front-end application. The DB used here is SQL Server 2008. I need to know the stored procedure used by the report that is being run from the front end. How shall I do it?
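One low-tech option on SQL Server 2008 (a sketch): run the report from the front end and, while it is executing, look at the active requests; a Profiler trace on RPC:Completed filtered by the application name gives the same answer after the fact:

SELECT r.session_id,
       s.program_name,
       t.text AS running_sql          -- will show the EXEC of the stored procedure
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;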
INNER JOIN qryPRDGroupDets on Prdct.ProdGrp=qryPRDGroupDets.PGCode
where supersed ='' And OrigPr Not Like '9%' And OrigPr Not Like '%MDM%' And LISTPR1>'0' And STANCOST>'0'
This works fine, but what I need to do is reference the "OrigPr" field and mark it as "Valid" or "Invalid". The "OrigPr" field contains alphanumeric data, e.g. A000, A001, A002 through ZZ99 and so on. Amongst all of the potential different types of codes, we have codes that end in treble zero (0), e.g. A000, which are valid; but if they end in double zero, e.g. AA00, then the code is invalid. The problem I have is that I can't just add
Code: 'Marker' = CASE WHEN RIGHT(OrigPr, 2) = '00' THEN 'Invalid' ELSE 'Valid' END
because it will mark A000 as invalid. Is there a way of getting around this?
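Checking for the treble zero before the double zero keeps A000 valid; a sketch based only on the rule described above:

'Marker' = CASE
               WHEN RIGHT(OrigPr, 3) = '000' THEN 'Valid'    -- treble zero: valid
               WHEN RIGHT(OrigPr, 2) = '00'  THEN 'Invalid'  -- double (but not treble) zero: invalid
               ELSE 'Valid'
           END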
There is one report to identify potential duplicates in a table, and it is performing poorly. I'm now tuning the existing SP and got stuck modifying it; I need to rewrite the query in a better way. I've pasted below an example of the query that is currently in the report. The report is run every week. Currently the table has 10 million records, and every week 5k to 10k more are added, so for those 5k to 10k new rows we have to check against all 10 million rows for duplicates. The duplicate logic is (surname = surname OR forename = forename OR DOB = DOB).
Create table #employee
(
    ID                int,
    empid             varchar(100),
    surname           varchar(100),
    forename          varchar(100),
    DOB               datetime,
    empregistereddate datetime,
    Createdate        datetime
)
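One sketch of limiting the weekly check to the new rows only (the 7-day filter on Createdate is an assumption about how new rows are identified, and #employee stands in for the real table). Splitting the OR into separate EXISTS probes lets each comparison use its own index on surname, forename and DOB:

SELECT n.ID, n.empid, n.surname, n.forename, n.DOB
FROM #employee AS n
WHERE n.Createdate >= DATEADD(day, -7, GETDATE())          -- only this week's 5k-10k rows
  AND (   EXISTS (SELECT 1 FROM #employee AS e WHERE e.ID <> n.ID AND e.surname  = n.surname)
       OR EXISTS (SELECT 1 FROM #employee AS e WHERE e.ID <> n.ID AND e.forename = n.forename)
       OR EXISTS (SELECT 1 FROM #employee AS e WHERE e.ID <> n.ID AND e.DOB      = n.DOB));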
The database I'm currently working with is very old, and some of the tables, SPs, and views are not being used. I'm looking for a way to identify which items are no longer in use, or which items are currently in use.
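On SQL Server 2005 or later (an assumption), the index usage DMV gives a first cut for tables and indexes; the counters reset on every restart, so collect them over a full business cycle before calling anything unused:

SELECT OBJECT_NAME(us.[object_id]) AS object_name,
       SUM(us.user_seeks + us.user_scans + us.user_lookups + us.user_updates) AS activity_since_restart
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
GROUP BY OBJECT_NAME(us.[object_id])
ORDER BY activity_since_restart;
-- tables that never appear here have had no recorded activity since the last restart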
Hi all. I use a 64-bit 2005 server with 8 CPUs and 8 GB of memory. This server is accessed by a large number of programs, some intensive and some not. I have eliminated all inefficient queries by means of SQL Profiler. What I see now is 30 or so procs running in 1 second. They are all pretty simple and, as I said, use indexes. The cpu column for most shows 0, and reads show 10-50, which is pretty good. But... my CPU utilization is 75% on average across all 8 CPUs. I really can't find an answer for it. If the procs run so efficiently, where does the CPU go? Disk queue length is 0.04 or less, which seems very good. Task Manager shows that all of that 75% is attributed to SQL Server. So which resources besides SQL queries use so much CPU? Do I have to look at some other areas, and which ones, where CPU could be used besides the SQL queries themselves?
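One place to look (a sketch using the 2005 DMVs): cheap statements executed thousands of times per second still add up, and the plan cache records cumulative CPU per statement. total_worker_time is reported in microseconds:

SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time / 1000               AS total_cpu_ms,
       qs.total_worker_time / qs.execution_count AS avg_cpu_us_per_exec,
       st.text                                   AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;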
Okay, I now have some dynamic SQL working. This is the SQL statement I have for a report in Reporting Services:
DECLARE @SQL nvarchar(4000)
SET @SQL=(SELECT AdHocSQL
FROM RptValueTypeMap
WHERE RptValueTypeMap.SectionCd in ('ITEM0010'))
EXECUTE (@SQL)
We have a table set up that actually holds different SQL statements based on the report items. This reads the SQL statement from AdHocSQL for report item #0010 and returns the results. It does return the correct value, but under (No Column Name). I have tried to incorporate an "AS", but I get errors when I try this.
I am familiar with, but new to, SQL statements, and I would like this to return a named field so I can use the value in the report. What do I need to do?
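Two common ways around (No Column Name): either store the alias inside the AdHocSQL text itself (e.g. have the stored statement end with AS ReportValue), or capture the EXEC output into a named column. A sketch of the second option; the ReportValue name and its nvarchar(200) type are assumptions:

DECLARE @SQL nvarchar(4000);
SELECT @SQL = AdHocSQL
FROM RptValueTypeMap
WHERE RptValueTypeMap.SectionCd IN ('ITEM0010');

DECLARE @Result TABLE (ReportValue nvarchar(200));
INSERT INTO @Result (ReportValue)
EXECUTE (@SQL);

SELECT ReportValue FROM @Result;   -- the report dataset now has a named field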
I am trying to look up a dialog from conversation_endpoints; however, if a dialog was created with the encryption setting ON and there is no master key in the database, then the record put into conversation_endpoints is the same as one created without encryption.
How can I distinguish between a dialog requested with no encryption and one requested with encryption but set up without it due to the lack of a key?
I have multiple queues with the same activated stored procedure (for various reasons we are trying this scenario).
My biggest obstacle is that I cannot figure out a way to determine, within the activated SP, which queue caused it to activate.
Basically I need to make the SP dynamic, so that no matter which queue activated it, the SP can determine the queue name and use it dynamically to issue the RECEIVE command against the right queue.
I am sure it is possible, since sys.dm_broker_activated_tasks shows how many SPs are activated by each queue; however, the SP name is the same for all queues, so that does not help me.
How do i determine within an activated sp which queue caused it to activate?
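Inside the activated procedure, @@SPID identifies the current activation task, and sys.dm_broker_activated_tasks maps it back to the queue that fired it. A sketch:

DECLARE @queue sysname;

SELECT @queue = q.name
FROM sys.dm_broker_activated_tasks AS t
JOIN sys.service_queues AS q ON q.object_id = t.queue_id
WHERE t.spid = @@SPID;

-- @queue can then be spliced, via QUOTENAME(@queue), into a dynamic RECEIVE statement.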
I don't want SQL's identity column format (which is 1, 2, 3, ...). I want my Prikey column to look like it starts at 0000000001, 0000000002, 0000000003, and so on. I set the Prikey column's type to char(10) NOT NULL. Is it possible to set up my identity column the way I want?
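IDENTITY itself only produces numbers, so the padding has to be added on top. One sketch is to keep an int IDENTITY for the real key and expose the zero-padded char(10) form as a computed column (names are made up):

CREATE TABLE dbo.Demo
(
    ID     int IDENTITY(1,1) NOT NULL,
    Prikey AS RIGHT('0000000000' + CAST(ID AS varchar(10)), 10)   -- '0000000001', '0000000002', ...
);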
Hi, I want to know how to remove the identity property from a column without recreating the whole table. When I do it in Enterprise Manager, it actually drops and recreates the table in the background. I would just like to know if there is another way that does not recreate the table. Thanks! Xiao Tan
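One workaround sketch that avoids dropping and recreating the whole table (names are placeholders; it still adds and drops a column, and any constraints or indexes on the old column must be moved first):

ALTER TABLE dbo.MyTable ADD ID_new int NULL;
GO
UPDATE dbo.MyTable SET ID_new = ID;
GO
ALTER TABLE dbo.MyTable ALTER COLUMN ID_new int NOT NULL;
GO
ALTER TABLE dbo.MyTable DROP COLUMN ID;
GO
EXEC sp_rename 'dbo.MyTable.ID_new', 'ID', 'COLUMN';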
:eek: I have a lot of locks in a SQL Server, and I'd like to identify which SPs are locking which tables. I've been trying with sp_who, sp_who2 and sp_lock, so I can identify the process number, but I don't have any idea what each process is doing (which SP is running, and what command?) or which table is locked by it. Can anybody send me some queries to get this information?
I'm a novice with SQL Server, I don't have access to the Enterprise Manager console, and I only have privileges to read data from the database.
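A sketch combining the tools already mentioned: sp_lock lists the spid/ObjId pairs, OBJECT_NAME translates the ObjId (run it in the database given by sp_lock's dbid column), and DBCC INPUTBUFFER shows what that spid is executing. Note that DBCC INPUTBUFFER on someone else's spid may need more than read-only rights:

EXEC sp_lock;                      -- note the spid and ObjId of the blocking locks
SELECT OBJECT_NAME(123456789);     -- 123456789 = ObjId copied from sp_lock (placeholder)
DBCC INPUTBUFFER (53);             -- 53 = the spid holding the lock (placeholder)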
In a job of migrating from an old database to a new one (with another structure, another server, another version), I'm copying from the old source tables and inserting into the new destination tables. The problem is that some records have inconsistencies (of any kind) and thus are not inserted due to foreign key, NOT NULL, etc. validation. When a problem occurs, no records are copied at all, and there is my question: how can I perform the copy so that it copies the good records (without inconsistencies) and leaves aside the bad records? I also want to know which rows were not copied, ideally by putting them into a temp table during the copy process, or exporting them to Excel, for further analysis of their data.
I'm using this model of "migration":
BEGIN TRY
INSERT INTO DESTINATION_TABLE ( col_d1, col_d2, col_d3, ...)
SELECT col_s1, dbo.some_function(col_s2), col_s3 * 100, ... FROM SOURCE_TABLE join <other_table> ... where <some filters>
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();
END CATCH
(for now, with TRY/CATCH I only get to know the error that occurred, if any)
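One sketch of splitting good rows from bad ones up front, instead of letting the whole INSERT fail (the NOT NULL and foreign-key checks, PARENT_TABLE, parent_id and the col_s1 key are all placeholders for whatever the destination actually enforces):

SELECT s.*
INTO #rejects                                            -- kept for analysis or export to Excel
FROM SOURCE_TABLE AS s
WHERE s.col_s1 IS NULL                                    -- would violate a NOT NULL constraint
   OR NOT EXISTS (SELECT 1 FROM PARENT_TABLE AS p         -- would violate a foreign key
                  WHERE p.parent_id = s.col_s3);

INSERT INTO DESTINATION_TABLE (col_d1, col_d2, col_d3)
SELECT s.col_s1, dbo.some_function(s.col_s2), s.col_s3 * 100
FROM SOURCE_TABLE AS s
WHERE NOT EXISTS (SELECT 1 FROM #rejects AS r WHERE r.col_s1 = s.col_s1);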
I have an odd one: a SQL job that doesn't have a schedule but is being run each morning. It is a legacy system, and I am trying to document the data flow process, but I am having a hard time tracking down where/what is starting the job. I can see which user executed the job:
SELECT message FROM sysjobhistory WHERE job_id = 'jobid' AND run_date > 'yesterday'
Which is useful, but I want to know what is starting the job.
I am trying to use LIKE / NOT LIKE to identify values that contain any characters outside of A-Z, e.g. £%$^&*_-{[@ etc.
The field should contain only values in the A-G range followed by a numeric part, e.g. ABCD1234567, but some rows have characters such as the above, some have spaces (weeps), and some have letters outside the A-G range.
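T-SQL pattern ranges handle this without a function: anything containing a character outside A-G or 0-9 (symbols, spaces, H-Z) is flagged by the negated class. A sketch with placeholder table/column names; add a case-sensitive collation if lower case should also count as invalid:

SELECT MyCode
FROM dbo.MyTable
WHERE MyCode LIKE '%[^A-G0-9]%';        -- contains at least one disallowed character

-- the complement: rows built only from A-G and digits
-- WHERE MyCode NOT LIKE '%[^A-G0-9]%'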
There are too many indexes built on the DB. Going by the naming convention, it seems the indexes were built from the suggestions provided by the execution plan. I presume most of the indexes are used only once a month for the reports, but they are hampering the performance of the daily queries. They are also occupying a lot of space.
To confirm this I have used the query below to identify the unused indexes. I recorded the counters before and after some huge operations and observed NO CHANGE in any of the values.
What exactly do the values below indicate, and when do they change? Is it safe to delete the indexes having low USER_SEEKS, USER_SCANS and USER_LOOKUPS?
Query: SELECT OBJECT_NAME(S.[OBJECT_ID]) AS [OBJECT NAME], I.[NAME] AS [INDEX NAME], USER_SEEKS, USER_SCANS, USER_LOOKUPS,
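The counters only move when a query plan actually reads (seeks, scans, lookups) or modifies (user_updates) that index, and they reset when the instance restarts or the index is rebuilt; indexes that never appear in the DMV at all have had no recorded use. A sketch of the completed query, assuming SQL Server 2005+; capture it across a whole reporting cycle (including the monthly reports) before dropping anything:

SELECT OBJECT_NAME(S.[OBJECT_ID]) AS [OBJECT NAME],
       I.[NAME]                   AS [INDEX NAME],
       S.USER_SEEKS, S.USER_SCANS, S.USER_LOOKUPS, S.USER_UPDATES
FROM sys.dm_db_index_usage_stats AS S
JOIN sys.indexes AS I
  ON I.[OBJECT_ID] = S.[OBJECT_ID] AND I.INDEX_ID = S.INDEX_ID
WHERE S.DATABASE_ID = DB_ID()
  AND OBJECTPROPERTY(S.[OBJECT_ID], 'IsUserTable') = 1
ORDER BY S.USER_SEEKS + S.USER_SCANS + S.USER_LOOKUPS;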
DECLARE @EffLevels TABLE (ChangePoint int, Value Int)
INSERT @EffLevels
SELECT '1000', '767' UNION ALL  -- Changed
SELECT '1000', '675' UNION ALL
SELECT '1001', '600' UNION ALL  -- Changed
SELECT '1001', '545' UNION ALL
SELECT '1001', '765' UNION ALL
SELECT '1000', '673' UNION ALL  -- Changed
SELECT '1002', '343' UNION ALL  -- Changed
SELECT '1002', '413' UNION ALL
SELECT '1002', '334' UNION ALL
SELECT '1001', '823'            -- Changed
-- My result should be:
-- ChangePoint  PrevChangePoint  Value
-- 1000         NULL             767
-- 1001         1000             675
-- 1000         1001             765
-- 1002         1000             343
-- 1001         1002             823
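A sketch under two assumptions: the rows need an explicit ordering column (a table variable has no guaranteed insert order), and a "change" means the ChangePoint differs from the immediately preceding row. Note that the Value returned here is the changed row's own value, which matches some but not all rows of the expected output above, so treat this as one interpretation only:

DECLARE @Eff TABLE (Seq int IDENTITY(1,1), ChangePoint int, Value int);

INSERT @Eff (ChangePoint, Value)
SELECT 1000, 767 UNION ALL
SELECT 1000, 675 UNION ALL
SELECT 1001, 600 UNION ALL
SELECT 1001, 545 UNION ALL
SELECT 1001, 765 UNION ALL
SELECT 1000, 673 UNION ALL
SELECT 1002, 343 UNION ALL
SELECT 1002, 413 UNION ALL
SELECT 1002, 334 UNION ALL
SELECT 1001, 823;

WITH ordered AS
(
    SELECT Seq, ChangePoint, Value,
           ROW_NUMBER() OVER (ORDER BY Seq) AS rn
    FROM @Eff
)
SELECT c.ChangePoint,
       p.ChangePoint AS PrevChangePoint,
       c.Value
FROM ordered AS c
LEFT JOIN ordered AS p ON p.rn = c.rn - 1
WHERE p.ChangePoint IS NULL              -- first row
   OR p.ChangePoint <> c.ChangePoint;    -- ChangePoint switched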