DB Design :: Insert / Update FACT Table From Staging Table
May 6, 2015
We need to insert/update a fact table from a staging table. Currently we use a stored procedure that updates the fact table for each region; the process is scheduled so that a job runs every 5 minutes and updates the fact table. The problem is that the insert/update from staging to fact takes too long. We use a MERGE statement for the insert and update, and inside the SP a WHILE loop iterates over the regions, updating a single region at a time.
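A minimal sketch of a set-based alternative: rather than looping regions in a WHILE loop, one MERGE can cover all regions in a single pass. Table and column names below (FactSales, StagingSales, SalesKey, RegionID, Amount, UpdatedAt) are assumptions, not the poster's actual schema.

-- Hedged sketch: names are placeholders for the real fact/staging schema.
MERGE dbo.FactSales AS tgt
USING dbo.StagingSales AS src
    ON  tgt.SalesKey = src.SalesKey
    AND tgt.RegionID = src.RegionID          -- all regions handled in one pass
WHEN MATCHED AND (tgt.Amount <> src.Amount)  -- update only rows that actually changed
    THEN UPDATE SET tgt.Amount    = src.Amount,
                    tgt.UpdatedAt = SYSDATETIME()
WHEN NOT MATCHED BY TARGET
    THEN INSERT (SalesKey, RegionID, Amount, UpdatedAt)
         VALUES (src.SalesKey, src.RegionID, src.Amount, SYSDATETIME());

Limiting the UPDATE branch to rows whose values actually changed usually cuts the write volume considerably compared with updating every matched row per region.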
View 7 Replies
Nov 20, 2007
Hi,
Is there any way to find updated/deleted records using other data flow transformations, without resorting to an Execute SQL task?
I can find the new records using the Merge Join transformation.
Is there a better way to merge the master table from the staging table?
Thanks in advance.
DF.
View 1 Replies
View Related
Mar 9, 2007
Hi, I have a production table in SQL Server 2005 with approximately 500,000 records. Every 6 hours this table needs to be truncated and refilled.
The basic SSIS package uses a script component to import the data into a staging table with the same structure as the production table. A final Execute SQL Task truncates the production table and runs an INSERT INTO production SELECT * FROM the staging table.
This last Execute SQL Task takes around 30 seconds to run, and the problem is that there is a risk our web service will query the table while the task is running and return incorrect results.
Q1: If I wrap this last Execute SQL Task in a BEGIN TRANSACTION and COMMIT TRANSACTION, will it be any quicker?
Q2: In this last Execute SQL Task, would it be better to use a rename-table technique in T-SQL? Any code examples?
Q3: Is there any way in T-SQL to compare every field in two tables (staging and production, which have identical structures) and update only the records that changed? Is SQL Server replication the best way to do this?
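For Q2, a hedged sketch of the swap technique: assuming the SSIS package loads a shadow table dbo.Production_Stage with the same structure as dbo.Production, the final step is just a rename swap inside a short transaction, so readers see either the old or the new data and the switch is near-instant.

-- dbo.Production_Stage has just been loaded by the package
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.Production',       'Production_Old';
    EXEC sp_rename 'dbo.Production_Stage', 'Production';
    EXEC sp_rename 'dbo.Production_Old',   'Production_Stage';
COMMIT TRANSACTION;
-- The next run truncates dbo.Production_Stage (the former live table) and reloads it.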
thanks kindly
Dave
View 9 Replies
View Related
Oct 26, 2015
Say you have a fact table with a few columns that all reference the same key column in a dimension table, and you want to write a view that returns the dimension information for each of those keys. What is the best way to structure it?
USE MyTestDB;
GO
SET NOCOUNT ON;
IF OBJECT_ID ('dbo.FactTemp' ,'U') IS NOT NULL
DROP TABLE dbo.FactTemp;
[Code] ....
I'm using very small data at the moment, so the query plan and statistics don't clearly favour one approach over another.
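A minimal sketch of the usual view shape: the dimension is joined once per role under a different alias. dbo.FactTemp is from the post; the dimension and column names (DimCustomer, ShipToCustomerKey, BillToCustomerKey) are assumptions.

CREATE VIEW dbo.vFactWithCustomers
AS
SELECT  f.FactKey,
        f.Amount,
        ship.CustomerName AS ShipToCustomer,
        bill.CustomerName AS BillToCustomer
FROM    dbo.FactTemp    AS f
JOIN    dbo.DimCustomer AS ship ON ship.CustomerKey = f.ShipToCustomerKey
JOIN    dbo.DimCustomer AS bill ON bill.CustomerKey = f.BillToCustomerKey;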
View 2 Replies
View Related
Mar 19, 2012
I am developing a BI solution on SQL Server 2008 R2 and need to know how to handle multiple references to the same dimension from a fact table.
Here is the scenario;
Fact_Contracts (# M)
ServiceProvider_CompanyID, Client_CompanyID, Amount_USD
Dim_Company( hundreds)
ID, CityID, ProfessionID, CompanyName
Dim_City
ID, CityName
Dim_Profession
ID, ProfessionName
As you can see, there are two company references in my fact table, and the schema is a snowflake. My customer's requirements state that contract amounts must be aggregated/filtered by ServiceProviderCompany (and its city/profession) or by ClientCompany (and its city/profession).
The first thing that came to mind was to duplicate the whole dimension structure (one copy for service providers, one for clients), but I suspect there must be a better way.
View 5 Replies
View Related
Nov 28, 2006
Hi
I have two tables: the first contains the actual time values, and the second is the time master. The time master is a dimension table holding every date at a one-hour interval; in other words, for a given day there are 24 rows, one per hour. I want to update the TimeID in Table 1 with the corresponding TimeID from Table 2.
How do I write an update statement to achieve this? Sample data for the two tables is given below.
Regards
Jtamil
Table 1
RID  LogTime                  Type     TimeID
1    2006-10-14 11:13:42.297  Success  6876
2    2006-10-14 11:14:00.530  Error    6876
3    2006-10-14 11:14:00.530  Success  6876
4    2006-10-14 11:14:23.670  Success  6876
5    2006-10-14 11:14:29.467  Error    6876
6    2006-10-14 11:14:50.420  Error    6876
7    2006-10-14 11:14:50.480  Success  6876
8    2006-10-14 11:25:39.950  Success  6876
9    2006-10-14 11:25:42.560  Error    6876
10   2006-10-14 11:34:58.357  Error    6876
Table 2
TimeID  the_date                 the_day
6875    2005-10-14 11:00:00.000  Friday
6876    2005-10-14 12:00:00.000  Friday
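A hedged sketch of the update, assuming Table1(TimeID, LogTime) and Table2(TimeID, the_date) as shown: each log row is matched to the hour bucket that contains its LogTime.

UPDATE t1
SET    t1.TimeID = t2.TimeID
FROM   dbo.Table1 AS t1
JOIN   dbo.Table2 AS t2
       ON t1.LogTime >= t2.the_date
      AND t1.LogTime <  DATEADD(HOUR, 1, t2.the_date);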
View 7 Replies
View Related
May 7, 2008
Dear all,
I am currently building a data warehouse for my client and use SSIS heavily for the ETL process. My problem is that some fact tables need to be updatable, and they contain a lot of data, so I need an efficient way to load them into the data warehouse.
I have read your article about SCD in SSIS (Slowly Changing Dimensions in SQL Server 2005).
I understand that SCD is intended for dimension tables. If I have a fact table whose rows need to be updatable, can you give me an example, a best practice, or the most efficient/fastest way to load such a fact table?
If you have a link about this problem, please reply to my email. Thanks.
My datasource from ORACLE and my datawarehouse in mssql2005
Regards,
Hendrik Gunawan
View 2 Replies
View Related
Jul 30, 2015
I have two tables: landing table "A" (where the data load happens; no primary keys are defined) and table "B". I want to move the data from A to B, and so far I have used the query INSERT INTO B SELECT * FROM A. Landing table "A" has a huge number of records, so SQL Server takes a very long time. Is there an alternative way to make this insert faster?
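One common approach (a sketch, assuming table A and B as described, an empty or heap target, and a simple/bulk-logged recovery model): take a table lock on the target so the insert can be minimally logged, and keep indexes off B until after the load.

-- Minimal-logging sketch; table names A and B are from the post, everything else is an assumption.
INSERT INTO dbo.B WITH (TABLOCK)
SELECT *
FROM   dbo.A;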
View 6 Replies
View Related
Aug 31, 2007
Hello,
Maybe anyone have done that before?
I have a table storing SOURCE_TABLE_NAME and DESTINATION_TABLE_NAME for about 120+ tables.
I need to make an SSIS package that selects from SOURCE_TABLE_NAME in the source OLE DB connection and loads it into DESTINATION_TABLE_NAME in the destination OLE DB connection.
I built such a package: I set the OLE DB source data access mode to "Table name or view name variable",
set the OLE DB destination data access mode to "Table name or view name variable", and gave the variables default values (names of existing tables).
But when the loop changes the table names, the package reports an error that it cannot map the columns, because the new tables have different columns.
How do I solve that problem?
View 5 Replies
View Related
Jan 14, 2014
What is the best way to transfer data from the staging table into the main table.
Example:
Staging Table Name: TableA_satge (# of rows - millions)
Main Table Name: TableA_main (# of rows - billions)
Note: The staging table may contain some of the same data as the main table.
Currently I am doing:
- Load data into staging table (TableA_stage)
- Remove any duplication of rows from the staging table (TableA_stage)
- Disable all indexes on main table (TableA_main)
- Insert into main table (TableA_main) from staging table (TableA_stage)
- Remove any duplication of rows from the main table using CTE (TableA_main)
- Rebuild indexes on main_table (TableA_main)
The problem with the above method is that it takes a lot of time and the log file grows very large.
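A hedged sketch of loading in batches so each transaction (and therefore the log) stays small. It assumes an integer key column Id on TableA_stage and a placeholder column list; the real batching column and columns depend on the actual schema.

DECLARE @batch INT = 100000, @rows INT = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TableA_main (Id, Col1, Col2)   -- column list is an assumption
    SELECT TOP (@batch) s.Id, s.Col1, s.Col2
    FROM   dbo.TableA_stage AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.TableA_main AS m WHERE m.Id = s.Id)
    ORDER  BY s.Id;

    SET @rows = @@ROWCOUNT;
END

Because each iteration skips rows already present in the main table, this also removes the separate de-duplication pass against TableA_main, and frequent log backups (or simple recovery) keep the log from growing between batches.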
View 9 Replies
View Related
Feb 23, 2015
I am trying to bulk insert data from a staging table into a main table in SQL Server 2012. If any error occurs, the whole operation is rolled back. I don't want that to happen: I want to identify the records that cause problems, while the rest still get inserted.
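One hedged pattern (all names and validation rules below are placeholders): insert the rows that pass validation in one statement, and route the rest to an error table for review instead of rolling everything back.

-- Rows that satisfy the main table's constraints
INSERT INTO dbo.MainTable (Id, Amount)
SELECT s.Id, s.Amount
FROM   dbo.StagingTable AS s
WHERE  s.Id IS NOT NULL
  AND  s.Amount IS NOT NULL;           -- validation rules are placeholders

-- Everything else is kept aside rather than aborting the load
INSERT INTO dbo.StagingErrors (Id, Amount)
SELECT s.Id, s.Amount
FROM   dbo.StagingTable AS s
WHERE  s.Id IS NULL
   OR  s.Amount IS NULL;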
View 2 Replies
View Related
Apr 30, 2015
table2 is initially populated (it will serve as the historical table for the view); temptable and table2 are similar, except that table2 has two extra columns, insertdt and updatedt.
process:
1. get data from an existing view and insert in temptable
2. truncate/delete contents of table1
3. insert data into table1 by comparing temptable vs table2 (values that exist in temptable but not in table2 are inserted)
4. insert into table2 the rows that are not yet present (comparing ID between table2 and temptable)
5. UPDATE table2 where a column value no longer matches temptable
* For #5: if a value in table2 (the historical table) has changed compared to temptable (the new result of the view), it must be updated, along with the updatedt field.
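A hedged sketch of steps 4 and 5, assuming ID is the key and Value is one tracked column; the real column lists will differ.

-- Step 4: rows present in temptable but not yet in table2
INSERT INTO dbo.table2 (ID, Value, insertdt, updatedt)
SELECT t.ID, t.Value, GETDATE(), GETDATE()
FROM   dbo.temptable AS t
WHERE  NOT EXISTS (SELECT 1 FROM dbo.table2 AS h WHERE h.ID = t.ID);

-- Step 5: rows whose value changed get the new value and a fresh updatedt
UPDATE h
SET    h.Value    = t.Value,
       h.updatedt = GETDATE()
FROM   dbo.table2    AS h
JOIN   dbo.temptable AS t ON t.ID = h.ID
WHERE  h.Value <> t.Value;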
View 2 Replies
View Related
Sep 17, 2015
I cannot create a measure that returns results for dates that do not exist in the fact table, despite the fact that the components included in the measure return valid results for those same dates. The goal is a measure that counts the number of days where the "stock qty" is below the "avg monthly sales qty for the last 12 months" (a rolling measure). Here is the DAX code I have tried (note that the filter explicitly refers to the date table, called Calendar, and not the fact table):
StkOutCnt:=CALCULATE (
COUNTROWS ( VALUES ( Calendar[DateKey] ) ),
FILTER (
Calendar,
[Stock qty] < [Avg Monthly Sales Qty L12M@SKU]
)
)
Below, the sub-measures (circled in red) return results for all days in the calendar, while the dates highlighted in yellow are those for which the StkOutCnt measure returns nothing. Having investigated these blank dates, I am fairly confident they are dates with no transactions in the fact table (weekends, public holidays, etc.). Why am I getting an "inner join" with my fact table dates when this is not requested anywhere in the DAX code and the two sub-measures behave normally?
View 6 Replies
View Related
Aug 11, 2015
I am working on a model where I have a sales fact table. Each fact record has four different customer fields (ship-to, sold-to, payer, and bill-to customer). I have one customer dimension table that joins to the sales fact table four times (once for each of the customer fields above). When viewing the data in Excel, I would like to have four hierarchies (ship-to, sold-to, payer, and bill-to customer) within Customer.
Is there a way to build hierarchies within my Customer dimension based on the same Customer table? What I want is to view the data in Excel and see the Customer dimension. Within Customer, I want four hierarchies.
View 2 Replies
View Related
Sep 24, 2015
I have a transaction table with about 40 crore (400 million) rows in the source. It has no timestamp or unique key columns, only Bill_Month and Bill_Year columns. For loading this table into staging I have added a new datetime column, Bill_Date, defaulting the day to 01. Then:
* First we delete the last 3 months of data from the staging tables.
* Get the last 3 months of data from the source table.
* Load those 3 months of data from source to the staging table.
We do this because we only receive updates for the last three months of data. Now I have to include this transaction table as a fact table in the DW. What is the best practice for loading the fact table from the staging table? We also have to look up the dimensions for the foreign keys.
* Should I implement the same method of deleting the last 3 months of records and loading them again?
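A hedged sketch of the load step with dimension lookups, reusing the same delete-and-reload window already applied to staging. The fact table, staging table, dimension, and column names (FactBilling, stg.Transactions, DimCustomer, CustomerCode) are assumptions.

DECLARE @cutoff DATE = DATEADD(MONTH, -3, CAST(GETDATE() AS DATE));

-- Reload only the 3-month window that can change
DELETE FROM dbo.FactBilling WHERE Bill_Date >= @cutoff;

INSERT INTO dbo.FactBilling (CustomerKey, Bill_Date, Amount)
SELECT  d.CustomerKey,                              -- surrogate key from the dimension
        s.Bill_Date,
        s.Amount
FROM    stg.Transactions AS s
JOIN    dbo.DimCustomer  AS d
        ON d.CustomerBusinessKey = s.CustomerCode   -- lookup on the business key
WHERE   s.Bill_Date >= @cutoff;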
View 3 Replies
View Related
Jun 15, 2015
I have a table called acc1152 with the field accno. Depending on the value of this field, I need to replace it with a new value. These are the values I need to update:
old value new value
7007 4007
7008 4008
4008 7
7009 4009
7011 4011
4011 ' '
7010 4010
4010 1
7016 4016
4016 1
4506 4006
4512 4012
How do I write one query that will accomplish this?
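A hedged sketch of a single-pass update using CASE; because every row is evaluated once against its original value, the 7008→4008 and 4008→7 mappings do not interfere with each other. This assumes accno is a character column (the blank replacement for 4011 suggests it is).

UPDATE dbo.acc1152
SET accno = CASE accno
                WHEN '7007' THEN '4007'
                WHEN '7008' THEN '4008'
                WHEN '4008' THEN '7'
                WHEN '7009' THEN '4009'
                WHEN '7011' THEN '4011'
                WHEN '4011' THEN ' '
                WHEN '7010' THEN '4010'
                WHEN '4010' THEN '1'
                WHEN '7016' THEN '4016'
                WHEN '4016' THEN '1'
                WHEN '4506' THEN '4006'
                WHEN '4512' THEN '4012'
                ELSE accno
            END
WHERE accno IN ('7007','7008','4008','7009','7011','4011',
                '7010','4010','7016','4016','4506','4512');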
View 3 Replies
View Related
May 27, 2004
Hi,
we have a problem with one-to-many relations between a fact table and a dimension table. Take the example of table "LOGGEDFLAW", which is related one-to-many to the table "LOGGEDREASON". "LOGGEDFLAW" includes the column "FLAWKEY", and "LOGGEDREASON" includes the column "REASONKEY" plus, essentially, the column "FLAWKEY" as a foreign key. Now assume we have the following records:
LOGGEDFLAW
1) FLAW1
2) FLAW2
LOGGEDREASON
1) REASON1,FLAW1
2) REASON2,FLAW1
3) REASON3,FLAW2
Now assume that "LOGGEDFLAW" is the fact table and "FLAWCOUNT" is the measure with source column "FLAWKEY", in which we want to count the number of flaws. As you can see in the example, the count is 1 for each of "FLAW1" and "FLAW2". Microsoft Analysis Services, however, reports a count of 2 for "FLAW1" because of the one-to-many relationship to the table "LOGGEDREASON". In the attached ZIP file you will find:
- a MDB File with the described example
- a screenshot from the cube constructed in AS
- a screenshot from the result table generated with AS.
The question: How is it possible to calculate the measure "FLAWCOUNT" correctly, ignoring the records generated by the one-to-many relationship?
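In plain SQL terms the effect looks like the hedged query below: joining LOGGEDFLAW to LOGGEDREASON repeats each flaw row once per reason, so a plain count is inflated while a distinct count on FLAWKEY is not. In Analysis Services the usual equivalent is a Distinct Count measure on FLAWKEY (or a measure group built at the flaw grain).

SELECT  COUNT(f.FLAWKEY)          AS CountAfterJoin,    -- returns 3 for the sample data
        COUNT(DISTINCT f.FLAWKEY) AS DistinctFlawCount  -- returns 2
FROM    LOGGEDFLAW   AS f
JOIN    LOGGEDREASON AS r ON r.FLAWKEY = f.FLAWKEY;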
Best regards,
Thorsten
View 5 Replies
View Related
Mar 16, 2007
I have picked an exmple from this forum, to help me explain my current problem...
"I'm looking for a solution to import data from a flat file into a normalized data model. To explain it a little more simply, think about the following:
The data source is a CSV file with FirstName, LastName and Category. Sample data could be:
Dirk; Bauer; sailing
Peter; Bauer; fishing
Marc; Bauer; reading
In my data model I have defined the two tables "Person" and "Category":
Table "Person"
----------------
[PersonID] [int] IDENTITY(1,1) NOT NULL
[CategoryID] [int] NOT NULL
[FirstName] [nvarchar](50)
[LastName] [nvarchar](50)
Table "Category"
----------------
[CategoryID] [int] IDENTITY(1,1) NOT NULL
[CategoryName] [nvarchar](50)
Now I would like to read the first row from the source and look up the CategoryID for "sailing". As my tables are empty right now, the lookup cannot find a value for "sailing". So I would like to insert a new row into "Category" for the value "sailing", receive the new CategoryID, and insert my values into "Person" INCLUDING that new CategoryID.
I think this is a normal way of reading data from a source and performing lookups. In my real-world scenario I have to look up about 20 foreign keys before I can insert the row read from the flat file.
I really can't believe this is a "special" case, and I also can't believe there is no simple way to solve it with SSIS. The solution from Thomas works, but it is very complex for such a small problem. Any help would be appreciated...
Thanks,
Dirk"
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=74752&SiteID=1
Could someone help me creating the dimension table?
Thanks!!
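A hedged T-SQL sketch of the lookup-or-insert step for the quoted scenario, assuming the staged rows sit in a table dbo.PersonStage(FirstName, LastName, CategoryName): add any unseen categories first, then insert people by joining back to Category for the generated keys.

-- 1. Create any categories that do not exist yet
INSERT INTO dbo.Category (CategoryName)
SELECT DISTINCT s.CategoryName
FROM   dbo.PersonStage AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Category AS c
                   WHERE c.CategoryName = s.CategoryName);

-- 2. Insert persons, resolving CategoryID via a join instead of row-by-row lookups
INSERT INTO dbo.Person (CategoryID, FirstName, LastName)
SELECT c.CategoryID, s.FirstName, s.LastName
FROM   dbo.PersonStage AS s
JOIN   dbo.Category    AS c ON c.CategoryName = s.CategoryName;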
View 7 Replies
View Related
Apr 2, 2007
I receive a large flat file. I first import the flat data into a SQL table for ease of use, then move it into a more permanent table with the proper references to dimension tables. I want to build a dimension table from the information in my flat file. I have a dimension table with columns [Org Client] and [Client#], where [Org Client] is the client's name. Both columns appear in my flat file, but I only want to use [Client#] in my permanent table. How do I extract distinct values of [Client#] and [Org Client] into a dimension table?
My idea was to select distinct values of [Client#] and then use some kind of foreach loop to go through each one, selecting the TOP (1) value of [Org Client] where [Client#] = x. Would this work, and if so, how do I set it up?
I'm really hoping there is a simpler way than this. Thank you all for your time.
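A hedged sketch of the set-based version (no foreach loop), assuming the flat data is already in a staging table dbo.FlatStage and the dimension is dbo.DimClient; names are placeholders.

INSERT INTO dbo.DimClient ([Client#], [Org Client])
SELECT s.[Client#],
       MIN(s.[Org Client])           -- pick one name per client number, like the TOP (1) idea
FROM   dbo.FlatStage AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.DimClient AS d
                   WHERE d.[Client#] = s.[Client#])
GROUP  BY s.[Client#];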
View 3 Replies
View Related
Mar 31, 2008
Hi Guys,
I'm trying to import the contents of a CSV file into a staging table I've created in SQL Server 2005. To perform the import I use the BCP utility with a BCP format file.
The problem is that the data in some fields of the CSV file is longer than the corresponding column in my staging table, so when I try to import I get the following error:
[Microsoft][ODBC SQL Server Driver]String data, right truncation
And the record with the error does not import.
My question is: is there a way of telling BCP to trim the data before importing it into the staging table? Can I specify in the BCP format file that the data should be trimmed, or is there a switch on the BCP command that tells it to trim data to the length of the destination column?
If it helps I'm using the command below to run BCP.
bcp S_OLTPDEV.dbo.CX_LOTS_STG in %path1%CTS_SiebelLotsTo_Siebel_From_CTSlotupdates.csv -f %path1%CTS_SiebelLotsTo_Siebel_From_CTSLots_Format.fmt -F 1 -S %dbsrvr% -U %duname% -P %dpasswd%
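BCP itself has no trim option, so one hedged workaround is to declare the staging columns wide enough to accept the raw data, then trim while moving rows into the final table. The final table and column names below are placeholders; CX_LOTS_STG is from the BCP command above.

-- Staging columns declared wider than the target (e.g. varchar(500)) so BCP never truncates
INSERT INTO dbo.CX_LOTS (LotName, LotDescription)
SELECT LEFT(LTRIM(RTRIM(stg.LotName)),        50),   -- cut to the target length
       LEFT(LTRIM(RTRIM(stg.LotDescription)), 200)
FROM   dbo.CX_LOTS_STG AS stg;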
Thanks in advance, any help would be really appreciated.
View 3 Replies
View Related
Oct 6, 2006
Hi,
I have a flat file with columns from a geographical hierarchy such as:
Country Zone State County City Store Sub Store , etc.
The file also has data columns for months to the right of the above columns such as:
Jul, Aug, Sep, ... basically 25 of these columns covering two years of data for one product, and another set of 25 columns for a second product. A typical record in the file looks like:
Country Zone State County City Store Substore
USA Southeast FL Hillsborough Tampa walmart Fletcher
May04_ProdA ....Jun06_prodA May04_ProdB...Jun06ProdB
144 160 180 158
I need to upload this data into a staging table in SQL Server 2005 using SSIS. I created a table with the geographical hierarchy columns but am trying to figure out how to load the monthly data. I could create 50 columns for the 50 months (25 per product), but that would be very crude.
Is there a better way of inserting data from this flat file into a destination table? I need all the data in the staging table in one upload.
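A hedged sketch of unpivoting the month columns after a plain load into a wide staging table (only three month columns shown; the real statement would list all 25 per product, and the table/column names are assumptions based on the sample record).

SELECT  u.Country, u.Zone, u.State, u.County, u.City, u.Store, u.SubStore,
        u.MonthCol, u.Qty
FROM    dbo.WideStage
UNPIVOT (Qty FOR MonthCol IN (May04_ProdA, Jun04_ProdA, Jul04_ProdA)) AS u;

The result has one row per location per month, which is usually a friendlier shape for the destination table than 50 measure columns. The SSIS Unpivot transformation achieves the same thing inside the data flow.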
Thanks.
View 11 Replies
View Related
Feb 15, 2008
Hello
I have to write a trigger for the following action.
When a row is entered in the table Adoscat79 with the value 1 in the field statut_tiers and a date in date_cloture for a customer xyz,
all the rows in the same table with the same no_tiers as the entered row (there can be many) should have both fields updated:
statut_tiers to 1
and date_cloture to the same date as the one entered.
The same action has to happen on an update, when statut_tiers is set to 1 and a date is entered in the field date_cloture.
Thank you for your help.
I've never written a trigger before.
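A hedged sketch of an AFTER INSERT, UPDATE trigger using the column names as described (statut_tiers, date_cloture, no_tiers); it propagates the closing status and date to every row that shares the same no_tiers. Note that direct trigger recursion is off by default (RECURSIVE_TRIGGERS), so the self-update does not re-fire it.

CREATE TRIGGER trg_Adoscat79_Close
ON dbo.Adoscat79
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE a
    SET    a.statut_tiers = 1,
           a.date_cloture = i.date_cloture
    FROM   dbo.Adoscat79 AS a
    JOIN   inserted      AS i
           ON i.no_tiers = a.no_tiers
    WHERE  i.statut_tiers = 1
      AND  i.date_cloture IS NOT NULL;
END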
View 14 Replies
View Related
Jun 11, 2008
Hi All,
I am extracting source data from a txt file to an OLE DB destination, but I want to save each day's data in a different staging table, e.g. tblProduct20081206, tblProduct20081207. How can this be done? I have seen lots of postings and scripts where the destination is a text file. I want to use the same staging table structure, but create a different table for each day by appending the date.
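A hedged sketch of the statement an Execute SQL Task could run before the data flow to create the day's table; it assumes a base table dbo.tblProduct exists to copy the structure from, and the data flow destination would then use a variable holding the same generated name.

DECLARE @sql NVARCHAR(MAX) =
    N'SELECT TOP (0) * INTO dbo.tblProduct' +
    CONVERT(CHAR(8), GETDATE(), 112) +      -- yyyymmdd suffix, e.g. tblProduct20081206
    N' FROM dbo.tblProduct;';
EXEC sys.sp_executesql @sql;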
Please Help
View 2 Replies
View Related
Apr 12, 2015
I am trying to insert data from a staging table into a production table. In the staging table I only have a period/date but no primary key.
This is my staging table
CREATE TABLE StagingTable(
[Period] [char](7) NOT NULL,
[CompanyCode] [varchar](100) NOT NULL,
[total] [int] NULL,
[status] [varchar](50) NULL
)
CREATE TABLE Production(
[Period] [char](7) NOT NULL,
[CompanyCode] [varchar](100) NOT NULL,
[total] [int] NULL,
[status] [varchar](50) NULL
)
I receive this data every month. What can I do to make sure only unique records are loaded into the production table, with no duplicates from previous months?
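With no primary key, the natural-key comparison has to be explicit. A hedged sketch, assuming Period plus CompanyCode identify a row:

INSERT INTO Production ([Period], CompanyCode, total, [status])
SELECT s.[Period], s.CompanyCode, s.total, s.[status]
FROM   StagingTable AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM   Production AS p
                   WHERE  p.[Period]    = s.[Period]
                     AND  p.CompanyCode = s.CompanyCode);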
View 5 Replies
View Related
Nov 6, 2007
I have been working on this without any success. Anybody out there to help?
1. New files are uploaded to the FTP site every day by other regions: 3 regions, 1 file each, which means 3 files at 3 different times. The file name contains that day's date. For example, today's files are A_20071106.csv, B_20071106.csv and C_20071106.csv; tomorrow's will be A_20071107.csv, B_20071107.csv and C_20071107.csv. How do I make this run every day and pick up only that day's files from FTP straight into a SQL staging table?
2. Every day, immediately after each file is uploaded, a marker file (e.g. 20071106.dummy) appears in the same FTP folder to indicate the upload completed successfully, and likewise for all three files. The package must run only when the marker file for that day's date has arrived, and then process the .csv file; otherwise it should check again after 30 minutes, repeating until the marker file appears, then run the load for that file, and then do the same for the other two files for that day. The upload times are 4pm for A, 6pm for B and 8:30pm for C.
3. Then, if there is a new row it should INSERT; if there are changes (based on 5 columns) it should UPDATE; and if a row existed yesterday but is not present today, DELETE.
4. All this must be automated.
View 4 Replies
View Related
Apr 24, 2008
Hi All,
In one of my interfaces, I am trying to stage data from a flat file in a ##table, so that I can iterate through the data in a foreach loop afterwards.
I have written a stored procedure to do this, but what I am observing is that only the last row is persisted in the ##table.
The SP looks like this
Code Snippet
CREATE PROCEDURE USP_SVInsS093Data_V001
@DetailRECTYPE nvarchar(5),
@StoreNumber int,
@Product nvarchar(20),
@Planogramref nvarchar(20),
@DisplayGroup nvarchar(5),
@ModSequence int,
@Shelfnumber int,
@Productposition int,
@StartModnumber nvarchar(20)
AS
SET NOCOUNT ON
BEGIN
if object_id('tempdb..##S093_TempTable') is not null
drop table ##S093_TempTable
CREATE TABLE ##S093_TempTable(
[DetailRECTYPE] [nvarchar](5) NULL,
[StoreNumber] [int] NULL,
[Product] [nvarchar](20) NULL,
[Planogramref] [nvarchar](20) NULL,
[DisplayGroup] [nvarchar](5) NULL,
[ModSequence] [int] NULL,
[Shelfnumber] [int] NULL,
[Productposition] [int] NULL,
[StartModnumber] [nvarchar](20) NULL
)
INSERT INTO ##S093_TempTable
([DetailRECTYPE]
,[StoreNumber]
,[Product]
,[Planogramref]
,[DisplayGroup]
,[ModSequence]
,[Shelfnumber]
,[Productposition]
,[StartModnumber])
VALUES
( @DetailRECTYPE ,
@StoreNumber ,
@Product ,
@Planogramref ,
@DisplayGroup ,
@ModSequence,
@Shelfnumber ,
@Productposition ,
@StartModnumber
)
END
If I run a SELECT query after the Data Flow task, only the last row is returned.
How do I make the ##table keep all the rows?
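One hedged fix: only create the ##table when it does not already exist, so each call of the procedure appends instead of dropping everything inserted by the previous row.

-- Replace the unconditional DROP/CREATE at the top of the procedure with a guard:
IF OBJECT_ID('tempdb..##S093_TempTable') IS NULL
BEGIN
    CREATE TABLE ##S093_TempTable(
        [DetailRECTYPE] [nvarchar](5) NULL,
        [StoreNumber]   [int]         NULL
        -- ... remaining columns as in the original definition
    );
END

The table then needs to be dropped (or truncated) once, before the Data Flow task starts, rather than inside the per-row procedure.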
Any help will appreciated.
Thanks in Advance...
Cheers
Srikanth Katte
View 8 Replies
View Related
May 28, 2015
I'm attempting to load some data into an explicit hierarchy in MDS 2012 via the staging table and struggling with the HierarchyName field. Specifically I'm loading data into stg.[Entity Name]_Consolidated and using the exact name of the explicit hierarchy I've set up in the front end web application.
Originally my hierarchy was labelled "Reporting Hierarchy" and when loading the data into staging using this name then running the batch from the Import Data screen I can see the error message "Error - The HierarchyName is missing or is not valid.". I've checked the table mdm.tblHierarchy and can see that the name there is exactly as it was in the staging table and have since renamed the hierarchy as "Reporting_Hierarchy" with the same results.
View 0 Replies
View Related
Jun 23, 2015
I have a large table with 100 Million records that has around 1 million duplicate records that need to be deleted.
I am running a script that creates a staging table called DuplicateTable that collects all the duplicates, and now I want to write an efficient delete statement.
Is it possible to write something like:
delete from OrigTable O join DuplicateTable D
on O.Key = D.key
Or do I have to run a loop on the DuplicateTable and run a delete statement record by record ?
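The DELETE...JOIN form is close; in T-SQL the alias of the table being deleted goes right after DELETE. A hedged sketch, assuming Key is the join column:

DELETE O
FROM   dbo.OrigTable      AS O
JOIN   dbo.DuplicateTable AS D
       ON O.[Key] = D.[Key];

For a million rows out of 100 million, running this in batches (e.g. DELETE TOP (100000) ... in a loop) keeps each transaction and the log manageable.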
View 4 Replies
View Related
Jul 27, 2007
I'm looking at SSIS and SqlBulkCopy as a possible method to quickly insert and process large amounts of data. My current method uses sp_xml_preparedocument and OPENXML to parse an XML document of the data I want to process and insert into the database, but I'm noticing that SQL Server's performance parsing the XML document isn't very good.
I found the C# SqlBulkCopy method (new in .NET 2.0) and I was thinking it would be faster to use that to load my data into a staging table and then use SSIS to extract the data from the staging table, process and transform it as necessary and finally load it into the final destination tables. I was able to create an Integration Services Project that selects the data from the staging table, does a bit of processing on one of the fields (by calling a stored procedure), and finally loads the processed data into the final table.
The problem with this is that I need to clean out the rows that were extracted from the staging table, and I'm not sure how to accomplish this (I can't just issue a "delete from staging_table" because there may be new records in the staging table that were not processed). Perhaps I can delete each record as it is processed, or somehow get the last processed identity id from the staging table and delete all records less than or equal to that id.
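A hedged sketch of the "last processed identity" idea mentioned above, assuming an identity column Id on the staging table: capture the high-water mark before processing, restrict the SSIS extraction to rows at or below it, and delete only up to it afterwards, so rows that arrive mid-run survive for the next cycle.

DECLARE @maxId INT;
SELECT @maxId = MAX(Id) FROM dbo.staging_table;

-- ... run the SSIS processing over rows WHERE Id <= @maxId ...

DELETE FROM dbo.staging_table
WHERE  Id <= @maxId;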
thanks in advance for any help you can provide, maybe there is an easy way to accomplish what I'm trying to do.
-Matt Palmerlee
View 3 Replies
View Related
Jan 31, 2008
I'm importing an XML file into a DataTable and need to insert the data into a SQL table.
I'm not sure if it's possible to take a populated DataTable and pass it to a DataAdapter, and from there update SQL Server using a TableAdapter.
Any Tips?
Thanks,
View 4 Replies
View Related
Jun 8, 2005
This should be easy for someone, but I just can't seem to find a sample to do this. I have created a table:
CREATE TABLE dbo.test (
    oId int NOT NULL UNIQUE,
    test1 varchar(50) NOT NULL PRIMARY KEY
)
Now I need to go back and simply add another column to the table, such as test2 varchar(50). I'm not sure if an insert is the way to go; I've been playing around with various statements but with no luck.
Suggestions?
Thanks all,
Zath
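Adding a column afterwards is an ALTER TABLE rather than an INSERT; a minimal example:

ALTER TABLE dbo.test
ADD test2 varchar(50) NULL;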
View 2 Replies
View Related
Mar 11, 2004
I am new to triggers and need help on the following:
I have an hourly table that gets new rows every hour, and I need to either insert into or update a daily table with the sum of the readings from the hourly table. If a row already exists in the daily table for the date of the hourly rows, I need to update that row; if it doesn't exist, I need to insert it.
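One hedged approach (placeholder table and column names, and assuming a SQL Server version with the DATE type): an AFTER INSERT trigger on the hourly table that re-aggregates each day touched by the insert, updating the daily row if it exists and inserting it otherwise.

CREATE TRIGGER trg_Hourly_ToDaily
ON dbo.HourlyReadings
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Update daily rows that already exist for the affected days
    UPDATE d
    SET    d.Reading = x.Total
    FROM   dbo.DailyReadings AS d
    JOIN  (SELECT CAST(h.ReadingTime AS DATE) AS DayDate, SUM(h.Reading) AS Total
           FROM   dbo.HourlyReadings AS h
           WHERE  CAST(h.ReadingTime AS DATE) IN
                  (SELECT CAST(i.ReadingTime AS DATE) FROM inserted AS i)
           GROUP  BY CAST(h.ReadingTime AS DATE)) AS x
           ON x.DayDate = d.ReadingDate;

    -- Insert daily rows for days that are not there yet
    INSERT INTO dbo.DailyReadings (ReadingDate, Reading)
    SELECT CAST(h.ReadingTime AS DATE), SUM(h.Reading)
    FROM   dbo.HourlyReadings AS h
    WHERE  CAST(h.ReadingTime AS DATE) IN
           (SELECT CAST(i.ReadingTime AS DATE) FROM inserted AS i)
      AND  NOT EXISTS (SELECT 1 FROM dbo.DailyReadings AS d
                       WHERE d.ReadingDate = CAST(h.ReadingTime AS DATE))
    GROUP  BY CAST(h.ReadingTime AS DATE);
END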
Thanks for any suggestions...
Alan
View 4 Replies
View Related
Jul 23, 2005
I currently have 2 tables as follows:

CREATE TABLE [CRPDTA].[F55MRKT119](
    mhan8 int,
    mhac02 varchar(5),
    mhmot varchar(5),
    mhupmj int)
GO
CREATE TABLE [CRPDTA].[F55MRKT11](
    mdan8 int,
    mdac02 varchar(5),
    mdmot varchar(5),
    mdmail int,
    mdmag int,
    mdupmj int)

What I would like to do is place a trigger on F55MRKT119 which will insert records into F55MRKT11 if they do not exist in that table based on the [mdan8] field. If the record does exist, I would like to update the corresponding record and increment either [MDMAIL] or [MDMAG] based on the inserted [MHMOT]. What I have so far is as follows:

TRIGGER #1:
CREATE TRIGGER trgIns_Summary ON [CRPDTA].[F55MRKT119]
FOR INSERT
AS
BEGIN
    INSERT INTO CRPDTA.F55MRKT11
    SELECT INS.MHAN8, INS.MHAC02, INS.MHMOT,
           CASE WHEN INS.MHMOT = 'MAG' THEN 0 ELSE 1 END,
           CASE WHEN INS.MHMOT = 'MAG' THEN 1 ELSE 0 END,
           '0'
    FROM INSERTED INS
    WHERE INS.MHAN8 NOT IN (SELECT MDAN8 FROM CRPDTA.F55MRKT11)
END

TRIGGER #2:
CREATE TRIGGER trgUpd_Summary ON [CRPDTA].[F55MRKT119]
FOR UPDATE
AS
BEGIN
    UPDATE CRPDTA.F55MRKT11
    SET MDMAIL = CASE WHEN INS.MHMOT = 'MAG' THEN 0 + MDMAIL
                      WHEN INS.MHMOT <> 'MAG' THEN 1 + MDMAIL END,
        MDMAG  = CASE WHEN INS.MHMOT = 'MAG' THEN 1 + MDMAG
                      WHEN INS.MHMOT <> 'MAG' THEN 0 + MDMAG END
    FROM INSERTED INS JOIN CRPDTA.F55MRKT11 ON (INS.MHAN8 = MDAN8)
END

For instance, if I do the following insert:
INSERT INTO CRPDTA.F55MRKT119 VALUES('212131','VK4','AL4','0')
then
INSERT INTO CRPDTA.F55MRKT119 VALUES('212131','VK4','MAG','0')

This is what I expect in both tables:

[CRPDTA.F55MRKT119] (2 records)
MHAN8   MHAC02  MHMOT  MHUPMJ
212131  VK4     AL4    0
212131  VK4     MAG    0

[CRPDTA.F55MRKT11] (1 record)
MDAN8   MDAC02  MDMOT  MDMAIL  MDMAG  MDUPMJ
212131  VK4     AL4    1       1      0

The insert part works fine, in that it inserts into both tables with the correct values. However, it seems the update portion is failing for some reason. What I have tried so far is setting the trigger order so the update runs first, and vice versa, but still no luck. Any help would be appreciated.
View 1 Replies
View Related