SQL 2012 :: How To Merge Data Into One Row
Jun 24, 2015
I have a query that pulled data in the following format:
First_Name  Last_Name  Drug     Reason
Jim         Smith      Aspirin
Jim         Smith               Headache
Here's what I would like for it to display:
Jim Smith Aspirin Headache
Our fields are weird; I had to use a CASE statement to get Drug and Reason. Would putting a MAX() on them put everything on one row?
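For what it's worth, MAX() ignores NULLs, so grouping on the name columns should collapse the two partial rows into one. A minimal sketch, assuming a hypothetical table name matching the sample output:

SELECT First_Name,
       Last_Name,
       MAX(Drug)   AS Drug,
       MAX(Reason) AS Reason
FROM   dbo.Prescriptions   -- hypothetical table name
GROUP  BY First_Name, Last_Name;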
View 3 Replies
Nov 3, 2015
I am currently writing a query to show the quote number and the brand names inside that quote, which should be separated with "/" when there are different brands in a quote. See below.
QuoteDetail Table:
QuoteNum | Brand
10047 | NISSAN
[Code]....
I need to find the correct query. I tried some functions I found on the internet, but they didn't work.
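One pattern that works on SQL Server 2012 (STRING_AGG only exists from SQL Server 2017 onwards) is the FOR XML PATH trick; a sketch against the QuoteDetail table from the post:

SELECT q.QuoteNum,
       STUFF((SELECT DISTINCT '/' + q2.Brand
              FROM   QuoteDetail q2
              WHERE  q2.QuoteNum = q.QuoteNum
              FOR XML PATH('')), 1, 1, '') AS Brands
FROM   QuoteDetail q
GROUP  BY q.QuoteNum;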
View 1 Replies
Feb 3, 2014
Merging tables:
--------Dummy TABLE
create table #Tbl1 (date1 date,WSH varchar(10),ITN int,Executions int)
insert into #Tbl1 (date1 , WSH , ITN , Executions)
select '20130202' ,'ABC', 1 , 100
union all
select '20130203' ,'DEF', 1 , 200
[Code] .....
I want Result like this :
date1       WSH   ITN   Executions  MCG   Positions
2013-02-02  ABC   1     100         2     500
2013-02-03  DEF   1     200         NULL  NULL
2013-02-05  NULL  NULL  NULL        2     600
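The second table is elided in the post, so the sketch below assumes a #Tbl2 holding the MCG/Positions columns. A FULL OUTER JOIN on date1 then reproduces the sample output, keeping rows that exist on only one side:

create table #Tbl2 (date1 date, MCG int, Positions int)
insert into #Tbl2 (date1, MCG, Positions)
select '20130202', 2, 500
union all
select '20130205', 2, 600

SELECT COALESCE(t1.date1, t2.date1) AS date1,
       t1.WSH, t1.ITN, t1.Executions,
       t2.MCG, t2.Positions
FROM   #Tbl1 t1
FULL OUTER JOIN #Tbl2 t2 ON t1.date1 = t2.date1;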
View 2 Replies
Aug 12, 2015
How do I resolve data inconsistency errors in merge replication?
View 0 Replies
Jul 2, 2015
I want to combine the upper two tables' data into result sets like the one below: rows should be grouped by bsns_id, with the descriptions from the 2nd table combined into a comma-separated list. This is in SQL Server 2012.
This is the image path:
[URL]
View 3 Replies
Jan 7, 2014
I have a table with a field called SeqId which is not an identity nor a sequence but a kind of autonumber field (max(SeqId) + 1). Now I have to do a MERGE between 2 tables where the one with SeqId is the target.
How can I get the next SeqId for every row added? I tried this:
MERGE dbo.CRM_MNP_ORIGINAL_NRN AS T
USING dbo.seriesnacionales AS S
ON (T.RANGE1 = S.RANGOINI )
WHEN NOT MATCHED THEN
INSERT (SeqId, Range1, Range2, OPERATORCODE, NRN, StartDate, CreateDate)
VALUES((SELECT dbo.FN_GetNextSeqId4CRM_MNP_ORIGINAL_NRN()), S.RangoIni, S.RangoFin, S.IdOperador,
'TEST_M', Convert(DATETIME, FECASIGNA , 103), SYSDATETIME())
WHEN MATCHED THEN
UPDATE SET T.Range1 = S.RangoIni, T.Range2 = S.RangoFin, T.OPERATORCODE = S.IdOperador
OUTPUT $action, Inserted.*, Deleted.*;
where the function just returns MAX(SeqId) + 1, but I always get the same value for that field.
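That is expected: the scalar UDF in the VALUES clause is evaluated against a single snapshot of the target table, so every inserted row gets the same MAX(SeqId) + 1. One workaround (a sketch, not the only option) is to compute the offset once and add ROW_NUMBER() per source row:

DECLARE @base INT = (SELECT ISNULL(MAX(SeqId), 0) FROM dbo.CRM_MNP_ORIGINAL_NRN);

MERGE dbo.CRM_MNP_ORIGINAL_NRN AS T
USING (SELECT *, ROW_NUMBER() OVER (ORDER BY RANGOINI) AS rn
       FROM dbo.seriesnacionales) AS S
ON (T.RANGE1 = S.RANGOINI)
WHEN NOT MATCHED THEN
INSERT (SeqId, Range1, Range2, OPERATORCODE, NRN, StartDate, CreateDate)
VALUES (@base + S.rn,   -- rn also counts matched rows, so gaps in SeqId can occur
        S.RangoIni, S.RangoFin, S.IdOperador,
        'TEST_M', CONVERT(DATETIME, S.FECASIGNA, 103), SYSDATETIME())
WHEN MATCHED THEN
UPDATE SET T.Range1 = S.RangoIni, T.Range2 = S.RangoFin, T.OPERATORCODE = S.IdOperador
OUTPUT $action, Inserted.*, Deleted.*;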
View 2 Replies
Feb 18, 2014
I have two totally different tables with completely different data fields, and there is no common relationship between them. However, I want to pick a few data fields from each table and merge them into a new table. Is this even possible?
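It is possible, but with no common key the rows can only be paired artificially. A sketch that pairs them by row position using ROW_NUMBER() (table and column names are placeholders):

SELECT a.ColFromA, b.ColFromB
INTO   dbo.MergedTable
FROM  (SELECT ColFromA, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
       FROM dbo.TableA) a
FULL OUTER JOIN
      (SELECT ColFromB, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
       FROM dbo.TableB) b
ON     a.rn = b.rn;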
View 9 Replies
Oct 20, 2014
I have a problem with merge replication: I'm trying to merge Database A to Database B on a local server.
Since only the data in the tables changes, I chose to publish 100% of them. The snapshot was generated after that (the "not a valid Windows user" problem I had already figured out). After that I created a local subscription on the same server (pull subscription, subscription type: Client). Now this error is thrown: "The schema script 'vwBuyADT_513.sch' could not be propagated to the subscriber."
I researched this many times on Google, but none of the information I found was useful for this problem. That problem can still be ignored and synchronization keeps running, but after 4-5 hours of running this message is thrown: "The Publisher failed to allocate a new set of identity ranges for the subscription. This can occur when a Publisher or a republishing Subscriber has run out of identity ranges to allocate to its own Subscribers or when an identity column data type does not support an additional identity range allocation"
My question is: is there any way to solve these 2 problems?
1. "The schema script 'vwBuyADT_513.sch' could not be propagated to the subscriber."
2. "The Publisher failed to allocate a new set of identity ranges for the subscription. This can occur when a Publisher or a republishing Subscriber has run out of identity ranges to allocate to its own Subscribers or when an identity column data type does not support an additional identity range allocation"
View 7 Replies
May 10, 2015
We are working in a merge replication environment with SQL Server 2005, 11 publications and 2 subscribers. We used to get a lot of blocking incidents from the application owner; recently we faced a situation where the lead blocker was in a sleeping state and the session was used by the merge agent. We checked the query the session was running: it was sys.sp_MSenumgenerations90;1.
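A minimal sketch of a DMV query for seeing what a sleeping lead blocker last executed (standard DMVs available since SQL Server 2005, nothing merge-specific):

SELECT s.session_id, s.status, t.text AS last_sql
FROM   sys.dm_exec_sessions s
JOIN   sys.dm_exec_connections c ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE  s.session_id IN (SELECT blocking_session_id
                        FROM   sys.dm_exec_requests
                        WHERE  blocking_session_id <> 0);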
View 1 Replies
Oct 7, 2015
We have a publisher sending data to two remote subscribers. Each of these sites is updating a different field in a particular table with its site name and the current date stamp. This data should then sync to each database to show how up to date the last data change was. This lets us keep an eye on whether sync is good or not.
The problem I've got is that one subscriber isn't copying its row to the other servers anymore. It gets the row updates from the other sites in the same table, but its own updates to this field aren't getting sent across. Nothing shows up in conflict manager for it, nor should it, as no other subscriber should be updating this field. If I validate the subscription, the field does then get synced, but again no updates after the validation will transfer. The other problem, which may be related or may indicate another issue, is that the data transfer rate shown in replication monitor is less than 0.1 rows/sec. Reinitializing isn't an option.
View 0 Replies
Nov 2, 2015
We have two queries that run nightly and we'd like to combine them so we get only one result set instead of two. What's the best way to combine these? The only difference is the table the information is being pulled from.
Query 1:
set nocount on
select
case
when datalength(MICRACCTNUMBER) = 4 then convert(char(20),('001 000000000000'+MICRACCTNUMBER))
when datalength(MICRACCTNUMBER) = 5 then convert(char(20),('001 00000000000'+MICRACCTNUMBER))
when datalength(MICRACCTNUMBER) = 6 then convert(char(20),('001 0000000000'+MICRACCTNUMBER))
[code].....
Again, the only difference is the Table the info is coming from...
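Since the two queries differ only in the source table, a UNION ALL yields a single result set; the CASE ladder can also collapse into a RIGHT() pad. A sketch with TableA/TableB standing in for the two real tables:

SET NOCOUNT ON;
SELECT CONVERT(char(20), '001 ' + RIGHT(REPLICATE('0', 16) + MICRACCTNUMBER, 16)) AS acct
FROM   dbo.TableA
UNION ALL
SELECT CONVERT(char(20), '001 ' + RIGHT(REPLICATE('0', 16) + MICRACCTNUMBER, 16)) AS acct
FROM   dbo.TableB;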
View 3 Replies
Nov 26, 2014
Basically, I have data like this:
order_key comment
1 A
1 B
1 C
2 B
2 D
and I need the data to end up like this:
order_key comment
1 A,B,C
2 B,D
View 3 Replies
Dec 24, 2014
I have a table that looks like this ...
id  type_code  phone_num
1   1          111-111-1111
1   2          222-222-2222
2   1          111-111-1111
3   2          222-222-2222
I want to merge the data to look like this ...
id  phone1        phone2
1   111-111-1111  222-222-2222
2   111-111-1111  NULL
3   NULL          222-222-2222
Basically if the type code is 1 one then move the data to column phone1, if the type is 2 then move it to column phone2.
This would be fairly simple if we always have type codes 1 and 2. But sometimes we can have type 1 and not type 2, or we could have type 2 and not type1.
Right now we only have 2 type codes. But, in the future we could be adding a 3rd type. So that would add a 3rd column (phone3).
Below is my code that I have written. I move the data into a temp table then list it. I am thinking of making this a view to my table. It works just fine. My question is, is there a better and more efficient way of doing this?
CREATE TABLE #Contacts (
id INT PRIMARY KEY,
phone1 VARCHAR(15),
phone2 VARCHAR(15)
)
-- Insert the records for type 1
INSERT INTO #Contacts
SELECT id,
phone_num,
NULL
FROM test1
WHERE type_code = '1'
-- Insert the records for type 2, if the id does not exist for type 1
INSERT INTO #Contacts
SELECT id,
NULL,
phone_num
FROM test1
WHERE NOT EXISTS (
SELECT 1
FROM #Contacts
WHERE #Contacts.id = test1.id
)
AND test1.type_code = '2'
-- if the id has both type 1 and 2, update the phone2 column with the data from type 2
UPDATE #Contacts
SET phone2 = test1.phone_num
FROM #contacts
JOIN test1 ON test1.id = #Contacts.id
WHERE type_code = '2'
SELECT id, phone1, phone2
FROM #Contacts
DROP TABLE #Contacts
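A simpler alternative worth considering: conditional aggregation pivots the type codes in one pass over the same test1 table, and a future type 3 becomes a one-line addition:

SELECT id,
       MAX(CASE WHEN type_code = '1' THEN phone_num END) AS phone1,
       MAX(CASE WHEN type_code = '2' THEN phone_num END) AS phone2
FROM   test1
GROUP  BY id;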
View 2 Replies
Feb 8, 2015
I am trying out merge replication using web synchronization. However, I am worried that I am missing something, because the way it is set up strikes me as a bit too insecure.
According to the best practices and security articles on Technet, I am given to understand that:
The SQL Replication Listener (read: the application pool account that will be running replisapi.dll) has to be db_owner of both the distribution and publisher databases and be on the PAL list. Windows authentication should be used. That means the merge agents wouldn't need to know the password for those logins.
Basic authentication can be used (with SSL) to authenticate into a Windows user account, which then connects to replisapi.dll.
Here's the rub: I assumed that all I needed was a basic no-rights user account that is then given execute permission on replisapi.dll and read permissions to kick off the process. When I browse to replisapi.dll and authenticate using the no-rights user, I get the expected "SQL Server WebSync ISAPI" message. But when I then run the merge agent, it fails, saying that login to the distribution database failed for the no-rights user. If I use the application pool's account, then I am able to run the merge agent successfully.
But that means I am now looking at storing the password of the application pool account on the client. Might I have missed a crucial step that would ensure the logins to the distribution and publication databases are done using the application pool account, not the user authenticated via IIS basic authentication?
View 0 Replies
Sep 28, 2015
I inherited an SSIS package that is rather simple. It grabs data with a SQL query and then loads it into a SQL table. The first step of this process TRUNCATES the destination table and then reloads it for the current year. This table has over a million rows, and the DB SOURCE we are pulling from is not in our domain, so one can imagine how long this takes.
This process is working fine (given the 45 minutes it takes to repopulate the DESTINATION table), but what I really need is a way to load only the rows that are NEW and UPDATED. I would also need functionality to DELETE the rows that have been removed (sounds like a MERGE, right?). I tried using the MERGE and MERGE JOIN transformations, but these transformations seem to be different from the T-SQL MERGE statement: MERGE seems like a slow UNION, and MERGE JOIN only seems to work with SELECTs.
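The usual workaround is to land the source rows in a staging table with a Data Flow and then run a real T-SQL MERGE in an Execute SQL Task. A sketch with placeholder table and column names:

MERGE dbo.DestinationTable AS tgt
USING dbo.StagingTable AS src
      ON tgt.BusinessKey = src.BusinessKey
WHEN MATCHED AND tgt.SomeCol <> src.SomeCol THEN
     UPDATE SET tgt.SomeCol = src.SomeCol
WHEN NOT MATCHED BY TARGET THEN
     INSERT (BusinessKey, SomeCol) VALUES (src.BusinessKey, src.SomeCol)
WHEN NOT MATCHED BY SOURCE THEN
     DELETE;   -- removes rows deleted at the source, as described above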
View 3 Replies
Nov 3, 2015
I was reading this:
"Adding an identity column to a published table is not supported, because it can result in non-convergence when the column is replicated to the Subscriber. The values in the identity column at the Publisher depend on the order in which the rows for the affected table are physically stored. The rows might be stored differently at the Subscriber; therefore the value for the identity column can be different for the same rows."
I don't understand. If I create a table with an identity column and publish it, can the values on the subscriber be different when the data is replicated?
Suppose I have this table:
1 Name1
2 Name2
3 Name3
Column 1 is the identity field and column 2 is the employee name.
If I publish this table, could the rows be inserted on the subscriber with different identity values, e.g. 2 for Name1, 1 for Name3, and so on?
What about if the identity field is a primary key?
View 6 Replies
Mar 6, 2014
If the partitioning MERGE command attempts to drop historic data at the wrong boundary point, then data movement between file groups may be necessary before or during the next index rebuild. The script below creates 2 test tables, one using a range right function and the other using range left. The partitioning key is a number between 0 - 59; an empty partition is maintained at the start and end of the ranges, and 4 partitions contain data in the ranges 0-14, 15-29, 30-44 and 45-59. Data in the lowest range (0 - 14) is switched out and a merge command is run. Edit the script to try the different merge boundaries, and edit the variables at the start to suit the runtime environment's 'Data Drive' and 'Log Drive' paths. Variables are redeclared, but commented out, at the start of code blocks to allow stepping through if desired.
--=================================================================================
-- PartitionLabSetup_20140330.sql - TAKES ABOUT 1 MINUTE TO EXECUTE
-- Creates a test database (workspace)
-- Adds file groups and files
-- Creates partition functions and schema's
-- Creates and populates 2 partitioned tables (PartitionedRight & PartitionedLeft)
[Code] ....
The T-SQL code below illustrates one of the problems caused by MERGE at the wrong boundary point. File group 3 of the range right table is empty according to the data space views, yet it cannot be dropped. File group 2 contains data according to the views, but you are allowed to drop its file.
USE workspace;
DROP TABLE dbo.PartitionedRightOut;
USE master;
ALTER DATABASE workspace
REMOVE FILE PartitionedRight_f3 ;
--Msg 5042, Level 16, State 1, Line 2
--The file 'PartitionedRight_f3 ' cannot be removed because it is not empty.
ALTER DATABASE workspace
REMOVE FILE PartitionedRight_f2 ;
-- Works surprisingly although contains data according to system views.
If the wrong boundary point is used, the system 'Data Space' views show where the data should be (FG2), not where it actually still is (FG3). You can't tell whether data movement between file groups is pending, and the file group's files are not protected from deletion by the OS.
I'm not sure this is worth raising a Connect item for, but it would be useful to know where data physically resides after a MERGE RANGE and before an INDEX REBUILD; the data space views reflect the logical rather than the physical location if a data movement is pending.
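A sketch of a query that reads the allocation units directly, which reflect the physical rather than the logical location (the join condition assumes in-row and row-overflow data, allocation unit types 1 and 3):

SELECT p.partition_number,
       fg.name AS filegroup_name,
       au.type_desc,
       au.total_pages
FROM   sys.partitions p
JOIN   sys.allocation_units au ON au.container_id = p.hobt_id
                              AND au.type IN (1, 3)   -- container_id = hobt_id for these types
JOIN   sys.filegroups fg ON fg.data_space_id = au.data_space_id
WHERE  p.object_id = OBJECT_ID('dbo.PartitionedRight');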
View 0 Replies
Dec 3, 2014
I have a database with merge replication enabled.
The problem is that an update query is taking a long time, but when I disable the merge triggers it updates quickly.
View 1 Replies
Dec 11, 2014
We have an SSIS package which loads data from CSV files to the DB. It only loads new entries, i.e. if a row already exists in the tables then it isn't inserted. For this we load the CSV into temp tables for the respective schemas, which are then compared with the base tables of the respective schemas, and the new rows are inserted. For this we use a MERGE statement.
View 1 Replies
Feb 13, 2015
There is an error in one of my merge publications. The error is:
The change for the row with article nickname 2336003 (test), rowguidcol {436456F0-F5AD-E411-80CF-5CF3FC1D2D76} could not be applied at the destination. Further information about the failure reason can be found in the conflict logging tables.
When I checked my tables I got the following values in the rowguid column:
publication      436456F0-F5AD-E411-80CF-5CF3FC1D2D76
subscription     D824D120-23AD-E411-80E3-00155D0E1001
conflict tables  689C6A61-5359-4BB5-BECD-B03F5F94D79A
View 0 Replies
Mar 18, 2015
I have created a MERGE statement using SCD2, where I insert the data if the BBxKey doesn't match the target, and update the rows if the BBxKey matches and the Rowchecksum is different.
How the stored procedure works:
There are 2 scenarios covered in this procedure, and the ETL happens on the basis of them.
There are 2 columns derived from the source table at run time: one is the BBxKey, which is nothing but a combination of one or more columns (or parts of columns), and the other is the Rowchecksum column, which is nothing but a hash value over all the columns of a row.
Merge case 1: WHEN NOT MATCHED THEN INSERT
If the source BBxKey is not in the Archive table (that is, if the BBxKey match is null), those records are new and are inserted directly into the Archive table.
Merge case 2: WHEN MATCHED THEN UPDATE
If Source.BBxKey = Target.BBxKey and Source.Rowchecksum <> Target.Rowchecksum, the source records are available in the Archive table but the data has changed; in this case it updates the old record to latestversion 0 and inserts the new record with latestversion 1.
My SP fails when the source has more than one row with the same BBxKey. The error is:
[Execute SQL Task] Error: Executing the query "EXEC dbo.ETL_STAGE_ARCHIVE ?" failed with the following error: "The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.".
Sample stored procedure:
DECLARE @Merge_Out TABLE (Action_Taken varchar(8),
TimeIn datetime,
BBXKey varchar(100),
RowChecksum nvarchar(4000),Col001 varchar(8000),Col002 varchar(8000),
Col003 varchar(8000),Col004 varchar(8000),Col005 varchar(8000),
[code].....
How can I avoid such a failure in my SP?
I also want to handle the case where S.BBxKey = T.BBxKey and S.Rowchecksum = T.Rowchecksum.
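The standard fix for that error is to de-duplicate the source so each BBxKey feeds the MERGE at most once, e.g. keeping the newest row per key. A sketch with placeholder table names and a trimmed column list:

;WITH src AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY BBXKey ORDER BY TimeIn DESC) AS rn
    FROM   dbo.StageTable          -- placeholder for the real source
)
MERGE dbo.ArchiveTable AS T        -- placeholder for the real target
USING (SELECT * FROM src WHERE rn = 1) AS S
      ON T.BBXKey = S.BBXKey
WHEN NOT MATCHED THEN
     INSERT (BBXKey, RowChecksum) VALUES (S.BBXKey, S.RowChecksum)
WHEN MATCHED AND T.RowChecksum <> S.RowChecksum THEN
     UPDATE SET T.RowChecksum = S.RowChecksum;
-- rows where the checksums are equal match but hit no WHEN branch, i.e. no action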
View 2 Replies
Mar 31, 2015
I need to move the log file off one disk and onto another disk. The log belongs to a merge subscription database.
I was going to stop/disable the merge jobs on the distributor, detach the database, move the log file to the other drive, reattach, and re-enable the merge jobs on the distributor.
Does that sound OK, or should I employ some other method?
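That should work; an alternative that avoids detaching (a sketch, with MySubDB and the paths as placeholders) is to repoint the log with ALTER DATABASE while the database is offline:

ALTER DATABASE MySubDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MySubDB
    MODIFY FILE (NAME = MySubDB_log, FILENAME = 'E:\NewPath\MySubDB_log.ldf');
-- move the physical .ldf file to E:\NewPath at the OS level here
ALTER DATABASE MySubDB SET ONLINE;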
View 6 Replies
May 9, 2015
I'm trying out a setup as follows:
SQL Server 2008 standard as the publisher
SQL Server 2012 express as the subscriber
and I intend to use web synchronization; there is no domain trust.
After following the instructions at MSDN for configuring web synchronization, I have an error that I can't get past - after creating the initial snapshot on the publisher, I try to run replmerg.exe at the subscriber and I always get this error:
"The subscription to publication 'TestReplication' has expired or does not exist."
If I refresh the publisher's "Local Publications" and look within "TestReplication", it does show that the subscriber is a known subscriber. Likewise, if I refresh the subscriber's "Local Subscriptions", it has an entry for the TestReplication publication.
I already verified that the user used by Replisapi.dll has read permission on the snapshot folder, is a member of the PAL, and is db_owner of the publishing and distribution databases. I am using a self-signed certificate for this test and have already installed the certificate on the subscriber machine so that HTTPS is trusted. I can run diagnostics from the subscriber, so I know the subscriber can reach the publisher, and logs are being left in the publisher's IIS.
View 1 Replies
Jul 26, 2015
We have a table in an SQL Server 2012 database that stores tree-like structures. Simplified for the purpose of my question, it has the following format:
Id int identity,
ParentId int,
GroupId int
Each record of the table represents an object identified by Id. An object may or may not have a parent in the same table, such that object.ParentId = parentObject.Id. A root object has ParentId = NULL. There are multiple root objects, so the table in fact stores multiple trees. What’s important is that the tree depth is not fixed, i.e. theoretically there can be any number of ancestor generations for an object. GroupId is a property of a root object; in theory none of the children of a root object has to have GroupId <> NULL; it can be assumed that any child has the same GroupId value as its root object.
A sample table having two roots (one grandparent and one parent), one non-root parent/child and 4 children:
Id ParentId GroupId
----------------------------------------------------------
1 NULL 200 root grandparent
2 1 NULL non-root parent/child
3 2 NULL child
4 2 NULL child
5 NULL 300 root parent
6 5 NULL child
7 5 NULL child
The table is not normalised, i.e. there’s no separate {root_object : group} table. However I don’t think normalising the table would solve the problem.
Now the problem. We need to set up merge replication from the table above (Master table) to the table of the same format in another DB. We need to replicate only those rows of the Master table that have a certain fixed GroupId value, e.g. 200 in the example above. If we ensure that GroupId in all descendant objects of a root object has the same value in the table as the root object itself that would be trivial. The table would look like this:
Id ParentId GroupId
----------------------------------------------------------
1 NULL 200 root grandparent
2 1 200 non-root parent/child
3 2 200 child
4 2 200 child
5 NULL 300 root parent
6 5 300 child
7 5 300 child
And the filter would look like this:
WHERE GroupId = 200
However, out of performance considerations, we would like to avoid filling GroupId for the descendant objects if possible because, as should be clear from the above, GroupId for a descendant object is easily deducible via a stored procedure or UDF (just go up the tree until ParentId = NULL). The problem is, I don't know how to achieve this in a merge replication filter: it only allows WHERE conditions and joins. I have not had much luck with joins for merge replication in general, but here we have an even more complex algorithm, because the number of tree levels can differ for every object. And merge replication does not allow using a UDF…
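For what it's worth, a sketch of the derivation described above, walking up the tree until ParentId IS NULL with a recursive CTE (the table name is a placeholder; as noted, this cannot be used directly in a merge filter):

;WITH walk AS (
    SELECT Id, ParentId, GroupId, Id AS StartId
    FROM   dbo.Objects             -- placeholder table name
    UNION ALL
    SELECT p.Id, p.ParentId, p.GroupId, w.StartId
    FROM   dbo.Objects p
    JOIN   walk w ON w.ParentId = p.Id
)
SELECT StartId AS Id, GroupId      -- each object paired with its root's GroupId
FROM   walk
WHERE  ParentId IS NULL;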
View 2 Replies
Aug 12, 2015
I need to merge column values (#Status.Status), based on OrderID, into the #Orders.NewStausCombined field, separated by commas.
CREATE TABLE #Status
(
ID INT IDENTITY (1,1) PRIMARY KEY,
OrderID INT,
Status VARCHAR(20)
)
INSERT INTO #Status ( OrderID, Status )
[code].....
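A sketch of one way to fill that field with the FOR XML PATH idiom, since STRING_AGG is not available before SQL Server 2017 (the #Orders layout is assumed, as its DDL is elided):

UPDATE o
SET    NewStausCombined = STUFF((SELECT ',' + s.Status
                                 FROM   #Status s
                                 WHERE  s.OrderID = o.OrderID
                                 FOR XML PATH('')), 1, 1, '')
FROM   #Orders o;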
View 3 Replies
Aug 29, 2015
Using merge replication + web synchronization, I have a situation where there is a large amount of data changes to upload to the publisher, so the merge agent creates a large request and sends it over. The publisher gets it and is able to work on it. After a few minutes it has finished, but (I assume) the connection has been dropped. At the subscriber's side it appears that the merge agent is hung. The output looks something like this:
Upload request size is XXX bytes.
Uploaded a total of 100 chunks.
Uploaded a total of 200 chunks.
Uploaded a total of 211 chunks.
The request message was sent to [URL] ....
Normally, when the publisher finishes working, the merge agent continues processing. But when it takes more than a few minutes (it seems to break at about 2 or 3 minutes), the merge agent hangs for as long as the InternetTimeout setting allows (currently 20 minutes) before finally failing and retrying.
But that's not right. The publisher was done and couldn't communicate back to the merge agent (presumably because the connection was dropped). As a result, the merge agent tries to re-enumerate changes, on top of giving the appearance that it's hung.
I've already fiddled with settings such as MaxUploadChanges, UploadGenerationsPerBatch, UploadReadChangesPerBatch, and UploadWriteChangesPerBatch. However, none of those settings actually ensures that the request message doesn't get too large. They have worked in breaking up the changes into separate batches (e.g. processing a single table rather than all tables), which results in more frequent updates and thus avoids the problem.
However, when a single table has several changes, they are still lumped into one large request, which then takes more than 2-3 minutes to process on the publisher's side, and thus I still end up with the same symptom of the merge agent hanging.
Is there anything else I could try to get the merge agent to keep its connection alive even while processing a large request?
View 0 Replies
Oct 7, 2015
In the T-SQL 2012 MERGE statement listed below, the INSERT branch is not working; the UPDATE branch works, though.
Merge test.dbo.LockCombination AS LKC1
USING
(select LKC.lockID,LKC.seq,A.lockCombo1,A.schoolnumber
from
[Inputtb] A
JOIN test.dbo.School SCH ON A.schoolnumber = SCH.type
[Code] ....
Would you tell me what I need to do to make the INSERT branch of the MERGE statement listed above work?
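A guess at the usual culprit, based on the visible fragment: the USING source itself joins back to the target table (the LKC alias), so every source row finds a match in the ON clause and the WHEN NOT MATCHED branch never fires. A sketch of the shape to check, with the joins and column names beyond the fragment being assumptions:

MERGE test.dbo.LockCombination AS LKC1
USING (select LKC.lockID, LKC.seq, A.lockCombo1, A.schoolnumber
       from [Inputtb] A
       JOIN test.dbo.School SCH ON A.schoolnumber = SCH.type
       JOIN test.dbo.LockCombination LKC    -- joining the target here guarantees a match...
            ON LKC.lockID = SCH.lockID) AS S
ON (LKC1.lockID = S.lockID AND LKC1.seq = S.seq)
WHEN MATCHED THEN
     UPDATE SET LKC1.combo = S.lockCombo1
WHEN NOT MATCHED THEN                        -- ...so this branch can never fire
     INSERT (lockID, seq, combo) VALUES (S.lockID, S.seq, S.lockCombo1);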
View 6 Replies
Nov 17, 2014
I have resulting rows from a query similar to the following.
The data is coming from a single table that contains only one coverage code column and one coverage code date, but the end user wants the two coverage code types and their dates combined into a single row. So the SELECT looks something like this:
SELECT
[Employee ID] = emp.employee_id,
[Coverage Code 1] = enr.coverage_code,
[Coverage Date 1] = enr.coverage_date,
[Coverage Code 2] = case when enr.product_type = 'Accident.Accident'
then enr.coverage_code else NULL end,
[Code] ....
I basically want to merge rows with the same Employee ID together into a single row, like the following. I know I have done this before and it is probably pretty simple.
View 4 Replies
Feb 13, 2015
I am working on converting my static stored procedure to a dynamic one.
I have created a stored procedure with a MERGE statement which inserts new records and updates existing records.
I will use this SP in SSIS: instead of a Data Flow Task, I will run it in an Execute SQL Task.
Now my biggest problem is that I don't know how to convert the static code to dynamic.
Below is my stored procedure code; you can see my source query there.
I have a filemaster table, as shown below, which consists of the input filename, source table, destination table and BBx key expression.
Input_Filename  SourceTableName  DestinationTableName  BBxKeyDerExpr
CCTFB           ImportBBxCctfb   ArchiveBBxCctfb       SUBSTRING(Col001,1,6)
CEMXR           ImportBBxCemxr   ArchiveBBxCemxr       SUBSTRING(Col001,1,10)
In my source query I want to change the source table, destination table and BBx expression dynamically on the basis of the input file.
The purpose of making it dynamic is that I had created a separate SP for each input; my clients want a single dynamic SP which executes on the basis of the input file. This input file name I will get from a variable which I have created in the SSIS package.
Let's say @File_name is the package variable which stores the file name.
If the file name is CCTFB, then my query should take the source table, destination table and BBx expression values from the filemaster table.
I have around 100 source queries like that, and every query has a different number of columns. How can I change the column list in the update and insert statements dynamically at run time?
CAST(SUBSTRING(Col001,1,6) + SUBSTRING(Col002,1,10) AS varchar(100)) creates a key for comparing; this value I can take from the filemaster.
HASHBYTES('MD5', CAST(CHECKSUM(Col001, Col002, Col003, Col004) AS varchar(max))): here the number of columns needs to change per file.
(SUBSTRING(SOURCE.Col001,1,6) + SUBSTRING(SOURCE.Col002,1,10)): this condition I can also take from the filemaster.
[Code] ....
I am able to get the inserted and updated row counts, but not the count of matched records.
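A sketch of the dynamic skeleton, using the filemaster table from the post to drive sp_executesql (the fixed Col001 column list is a placeholder; a real version would build the column lists from metadata too, and the OUTPUT into a temp table gives the matched-vs-inserted counts):

CREATE PROCEDURE dbo.ETL_STAGE_ARCHIVE_DYNAMIC @File_name varchar(50)
AS
BEGIN
    DECLARE @src sysname, @dst sysname, @keyExpr nvarchar(400), @sql nvarchar(max);

    SELECT @src = SourceTableName,
           @dst = DestinationTableName,
           @keyExpr = BBxKeyDerExpr
    FROM   dbo.FileMaster
    WHERE  Input_Filename = @File_name;

    CREATE TABLE #actions (act nvarchar(10));   -- $action per row: INSERT or UPDATE

    SET @sql = N'MERGE ' + QUOTENAME(@dst) + N' AS T '
             + N'USING (SELECT *, ' + @keyExpr + N' AS BBXKey FROM ' + QUOTENAME(@src) + N') AS S '
             + N'ON T.BBXKey = S.BBXKey '
             + N'WHEN MATCHED THEN UPDATE SET T.Col001 = S.Col001 '
             + N'WHEN NOT MATCHED THEN INSERT (BBXKey, Col001) VALUES (S.BBXKey, S.Col001) '
             + N'OUTPUT $action INTO #actions (act);';

    EXEC sys.sp_executesql @sql;

    SELECT act, COUNT(*) AS cnt FROM #actions GROUP BY act;   -- UPDATE count = matched records
END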
View 0 Replies
Mar 12, 2015
I have created a dynamic MERGE statement in an SCD2 stored procedure which inserts records when there is no match, and when the BBxKey matches from the source table to the destination table it updates the old record to latestversion 0 and inserts a new record with latestversion 1.
I get the error below when I have more than one row with the same BBxKey in my source table. How can I avoid this?
The BBxKey is nothing but a key I derive by combining 2 columns.
Msg 8672, Level 16, State 1, Line 6
The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.
View 4 Replies
Mar 5, 2015
I have a table which is updated daily using a MERGE statement. As records are inserted, updated and deleted, I save the OUTPUT of the MERGE statement into a history table, with a timestamp and an action$ column appended to each record.
Using this history table, I'd like to rebuild the data as of a specific past date. I was able to create a stored procedure that inspects each record in the history table and applies it to the data in a temp table. That stored procedure solution uses multiple queries to rebuild the data at a point in time. I was curious whether there is an easier and more efficient solution using a table function.
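A sketch of a set-based alternative as an inline table function: take each key's most recent history row at or before the requested date, and drop keys whose last action was a DELETE (the history column names are assumptions):

CREATE FUNCTION dbo.fn_TableAsOf (@AsOf datetime)
RETURNS TABLE
AS RETURN
    SELECT x.KeyCol, x.Col1, x.Col2
    FROM  (SELECT h.*,
                  ROW_NUMBER() OVER (PARTITION BY h.KeyCol
                                     ORDER BY h.ChangeTS DESC) AS rn
           FROM   dbo.HistoryTable h
           WHERE  h.ChangeTS <= @AsOf) x
    WHERE  x.rn = 1
      AND  x.[action$] <> 'DELETE';   -- the OUTPUT $action column described above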
View 2 Replies
Mar 1, 2006
Hi,
Does anyone know if it is possible to point data that underwent the "merge join" transformation (in one data flow) to the following data flow? I don't want to recreate all that merging, sorting and calling of the same sources again in the following data flow if the data I am using already exists in the previous data flow. The merged data is simply too big to export to an Excel file, so does anyone have any ideas? Thanks!
View 8 Replies
May 26, 2015
SQL Server 2012 Data Tools was working fine for me, but something must have changed: now every time I try to create a new SSIS project I get:
The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT)).
When I try to open an existing project I get:
Exception has been thrown by the target of an invocation
External component has thrown an exception (SSISUpgrade)
The issue seems to arise only with SSIS projects. I have already uninstalled SQL Server 2012 and reinstalled it, and that didn't work. I tried to install Visual Studio 2012 Data Tools with BI, and that also crashes when I try to create an SSIS project. The output of SELECT @@VERSION is:
Microsoft SQL Server 2012 - 11.0.2100.60 (X64)
Feb 10 2012 19:39:15
Copyright (c) Microsoft Corporation
Enterprise Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
SQL Data Tools page info:
Microsoft SQL Server Integration Services Designer
Version 11.0.2100.60
Microsoft Visual Studio 2010
Version 10.0.40219.1 SP1Rel
Microsoft .NET Framework
Version 4.5.51641 SP1Rel
View 5 Replies