I want to replicate a database to a subscriber that will be used as a read-only copy. The data has to be replicated as close to instantly as possible. To do this I set up a database export of objects and data to populate the subscriber, then I set up transactional replication. To verify that replication is working successfully, I count the rows in each table; there are 3 tables in total. For one of the tables, the replication completes, but almost immediately afterward the table starts to shrink, and after several hours the record count is zero. This isn't happening to the other two tables, and I can't figure out why.
If you have no idea what might be causing this, perhaps you can suggest
some places to start looking. This is Win2k SP4 with SQL 2000 SP3.
Hi. I tried posting this query in microsoft.public.sqlserver.programming but got no response. I am new to replication, but I am trying to set up transactional replication of tables from one database to another in MSSQL 2000 (SP2). My target tables have primary keys defined. Under publication properties I go to the Snapshot tab and for each table I clear the check boxes that say "Drop the existing table and re-create it" and "Clustered indexes," so nothing is checked on this page for any table. Whenever the subscription is reinitialized it drops the primary keys on my target tables and replaces them with a unique clustered index on the column that used to be the primary key. Is this normal? Is there any way to stop it from doing this? I don't plan to send the snapshot more than once and will let transactional replication take over for keeping my source and target in sync, but if I ever have to reinitialize the subscription, it would seem that I (or someone) will have to take the second step of manually dropping these clustered indexes and recreating the primary keys on the target tables. Thanks in advance. -- Dick Christoph, 612-724-9282
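If you do end up having to put a primary key back by hand after a reinitialize, that second step is just a drop of the generated index and an ALTER TABLE; a minimal sketch, where the table, column, and index names are all placeholders for whatever the snapshot actually created:

-- Drop the unique clustered index the snapshot left behind (name is an assumption)
DROP INDEX OrderDetail.unc1OrderDetail
-- Recreate the original primary key
ALTER TABLE OrderDetail
    ADD CONSTRAINT PK_OrderDetail PRIMARY KEY CLUSTERED (OrderID)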
I have documentation in the form of extended properties for tables which are subscribers in a replication scheme. The documentation describes the tables in reference to their replication scheme, so I don't want to apply the properties to the source and have them published. I can't apply the extended properties on the subscriber; I receive a 'don't have permission' error, yet I am db creator on all systems. The theory is that I can't modify the subscription, which makes sense. Can I turn off replication, apply the extended properties, then turn replication back on without causing harm?
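For reference, the kind of call involved is sp_addextendedproperty; the object names here are made up, and on SQL 2000 the owner level is given as 'USER' rather than the 'SCHEMA' used from 2005 onwards:

EXEC sp_addextendedproperty
    @name  = N'Description',
    @value = N'Subscriber copy maintained by replication',    -- placeholder text
    @level0type = N'USER',  @level0name = N'dbo',
    @level1type = N'TABLE', @level1name = N'Orders'           -- placeholder table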
Hi... I was hoping someone could share some thoughts on the issue that I am having at the moment.
Problem: When I run the package on my local machine and update a local SQL Server DB/table, new records write OK to the table. BUT when I change my destination, meaning I write records into another physical SQL Server DB/table, no INSERT occurs. AND when I move/copy that same package over to the other server (i.e. the server that was not receiving records earlier) and run it locally there, IT WORKS fine too.
What I am trying to do is very simple - add new records to a SQL Server table using SSIS. I only care about new rows, not even changed rows. Here is my logic - 1. Create an OLE DB source against RemoteSERVER using a SELECT statement. 2. I have a Lookup component that will look for NEW records - it directs all rows that don't find a match to the error output. 3. Since I don't care about any rows that are matched in my lookup, I do nothing with them or trash them. 4. I send the error rows (the NEW rows) to an OLE DB destination.
RESULTS: when I run the package locally and the destination table is also local, it WORKS FINE; but when I run the package locally and the destination table is on another SQL Server (remote), no rows are written.
The package is run through BIDS manually, so there are no security restrictions attached to it.
I am not sure what I am missing, and I do not see an error in my package either. It is not failing.
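For what it's worth, the set-based equivalent of that Lookup pattern is to insert only the unmatched rows; this is just a sketch with made-up table and key names (dbo.SourceTable, dbo.DestTable, ID), not a piece of the package itself:

-- Insert only rows whose key does not already exist in the destination
INSERT INTO dbo.DestTable (ID, Col1, Col2)
SELECT s.ID, s.Col1, s.Col2
FROM dbo.SourceTable AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.DestTable AS d WHERE d.ID = s.ID)

Running something like this directly against the remote destination (for example from an Execute SQL Task) can also help confirm whether the problem is in the package or in the connection/permissions to the remote server.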
I've got a situation where the columns in a table we're grabbing from a source database keep changing as we need more information from that database. As new columns are added to the source table, I would like to dynamically look for those new columns and add them to our local database's schema if new ones exist. We're dropping and creating our target db table each time right now based on a pre-defined known schema, but what we really want is to drop and recreate it based on a dynamic schema, and then import all of the records from the source table to ours. It looks like a starting point might be EXEC sp_columns_rowset 'tablename' and then creating some kind of dynamic SQL statement based on that. However, I'm hoping someone might have a resource that already handles this that they might be able to steer me towards. Sincerely, Bryan Ax
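One hedged sketch of that approach, reading the column metadata from INFORMATION_SCHEMA.COLUMNS rather than sp_columns_rowset, and using made-up names (SourceDb, SourceTable, dbo.TargetTable); it is simplified and doesn't handle decimal precision, identity, or max-length types:

DECLARE @sql nvarchar(4000)
SET @sql = 'CREATE TABLE dbo.TargetTable ('
SELECT @sql = @sql + QUOTENAME(COLUMN_NAME) + ' ' + DATA_TYPE
            + CASE WHEN CHARACTER_MAXIMUM_LENGTH > 0
                   THEN '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(10)) + ')'
                   ELSE '' END + ', '
FROM SourceDb.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SourceTable'
ORDER BY ORDINAL_POSITION
SET @sql = LEFT(@sql, LEN(@sql) - 1) + ')'   -- trim the trailing comma
EXEC (@sql)

You would drop the existing target table first and follow this with an INSERT ... SELECT from the source.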
Hi, I have a DB where I would like to maintain a fixed size and control it by myself. I do not have the options "Auto-Grow" and "Auto-Shrink" enabled. From time to time, without notice or any logging, the database files get smaller, and this causes a database full error. As I wrote, I would like to maintain the size of the database myself and not have it managed automatically by the DB server. Please help me find out what can cause this problem and how to solve this issue. Thanks, - Ze'ev
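One quick thing to confirm, assuming the database is named MyDb (placeholder), is whether the server really sees auto-shrink as off, since a stray ALTER DATABASE or a maintenance plan with a shrink step would also explain files getting smaller:

SELECT DATABASEPROPERTYEX('MyDb', 'IsAutoShrink') AS IsAutoShrink   -- 1 = auto-shrink on, 0 = off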
A huge (and never used) database log was taking up about 4 GB of HD space. We want the data for historical capacity; however, we don't care about the transaction log. After a bit of research I ran the script at http://support.microsoft.com/defaul...&NoWebContent=1 (which works just fine on SQL Server 2k) and then DBCC SHRINKFILE(RamdomDataData_Log, 2). This shrank the log file from 4ish gigs to 2 MB. Of course my boss did backflips and wanted me to do it to *all* the databases. I told him that it was probably a bad idea, since we do want the transaction logs in case something crashes, so we can recreate the DB from (for example) a week-old DB backup. So my question is this: when I shrink it to 2 MB (or 200 MB as I am suggesting), what are we actually "losing" and "keeping"? Does it keep the most recent transactions (in which case I need to figure out how much we add each day), or the earliest records, or random ones, or are they all just "compressed"? I don't want to lose the transaction logs for the last week or two, but now that this shrinking has become the holy grail I need to show there will be bad things happening if I just make these logs all reset to the size of a floppy disk on a regular basis.
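A minimal sketch of the usual pattern, with the database name (RamdomData) guessed from the log file name and the backup path only a placeholder: back the log up first, so the transactions are preserved on disk, and only then shrink the file to the size you actually want:

-- Preserve the log records before releasing the space
BACKUP LOG RamdomData TO DISK = 'D:\Backups\RamdomData_log.trn'
-- Then shrink the physical log file to roughly 200 MB
DBCC SHRINKFILE(RamdomDataData_Log, 200)

With regular log backups in place, shrinking only releases empty space in the file; the transactions themselves live in the backup chain, which is what you would actually restore from after a crash.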
I want to update @Stop.UserField with the value from @UpdateSource where @UpdateSource.HasPathway = @Stop.UserField... but I need to use the @FieldDescription table to determine how to map the columns.
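Without the full column lists this can only be a rough sketch of the UPDATE ... FROM shape involved; every join/mapping column below (StopID, SourceField, TargetField) is an assumption:

UPDATE s
SET    s.UserField = u.HasPathway
FROM   @Stop AS s
JOIN   @UpdateSource AS u      ON u.StopID = s.StopID             -- assumed join key
JOIN   @FieldDescription AS fd ON fd.SourceField = 'HasPathway'
                              AND fd.TargetField = 'UserField'    -- mapping row that says which column feeds which

The point is only that table variables can be updated through an alias and joined like any other table; the actual mapping logic has to come from how @FieldDescription is laid out.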
I'm using a free barcode font so I can create scannable tickets via Reporting Services 2k5. When I print the tickets, everything seems fine. But when I export to PDF, it looks like the barcode font shrunk. All the lines are pulled together, making scanning impossible. Is there a certain setting I can use to preserve the font's width?
-Update- The weird thing is: when I export it from inside Visual Studio, the barcode is shown in the PDF as it is supposed to be!?
We need to pull from a table that is named tablename_mmddyy and populate a table with the same format, tablename_mmddyy. The date will be different every month, so I want to be able to build the table names every month. Is there a way to do this in SSIS? Thank you.
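One hedged way to build the name, which could equally be written as an expression on an SSIS string variable and used via the "table name from variable" access mode on the source/destination; using GETDATE() (the current date) is an assumption about which month you want:

DECLARE @table sysname
SET @table = 'tablename_'
           + RIGHT('0' + CAST(MONTH(GETDATE()) AS varchar(2)), 2)
           + RIGHT('0' + CAST(DAY(GETDATE()) AS varchar(2)), 2)
           + RIGHT(CAST(YEAR(GETDATE()) AS varchar(4)), 2)
-- @table now holds e.g. tablename_070107; use it to build dynamic SQL,
-- or put the equivalent expression on a package variable used by the OLE DB source/destination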
Control Flow: Load Data Flow Task to Copy Data Flow Task to Script Task
The Data Flow under the Load Data Flow Task has Flat File Source to Row Count1 to OLE DB Destination (ODS database). The variable name for Row Count1 is RowCount.
The Data Flow under the Copy Data Flow Task has OLE DB Source to Row Count2 to OLE DB Destination (WIMS database). The variable name for Row Count2 is RowNumber.
The code under the Script Task is:
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.

' The execution engine calls this method when the task executes.
' To access the object model, use the Dts object. Connections, variables, events,
' and logging features are available as static members of the Dts class.
' Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
'
' To open Code and Text Editor Help, press F1.
' To open Object Browser, press Ctrl+Alt+J.

Public Sub Main()
    Dim varMyRowCount As Variable = Dts.Variables("RowCount")
    Dim varMyRowNumber As Variable = Dts.Variables("RowNumber")
    Dim varPackageName As Variable = Dts.Variables("PackageName")
    Dim varStartTime As Variable = Dts.Variables("StartTime")
    Dim varInstanceID As Variable = Dts.Variables("ExecutionInstanceGUID")
    Dim varMailMsgtext As Variable = Dts.Variables("MailMsgText")
    Dim PackageDuration As Long
    Dim Filenum As Integer
    Dim FilNam As String

    '<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    ' Event log needs
    '>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Dim sSource As String
    Dim sLog As String
    Dim sEventMessage As String
    Dim sMachine As String
    '<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
We want to verify that the flat file count is the same as the data loaded into the WIMS database. Now if I run this job I get the error message: Error: Failed to lock variable "RowCount" for read access with error 0xC0010001 "The variable cannot be found. This occurs when an attempt is made to retrieve a variable from the Variables collection on a container during execution of the package, and the variable is not there. The variable name may have changed or the variable is not being created.".
My co-worker said that even if the counts from Row Count1 and Row Count2 balance, I still need to query the table in the WIMS database, because with SQL Server the process counts are sometimes not the same as the data actually loaded into the table, so I need a SELECT COUNT(*) from the table. How and where can I put that in the job?
My design is: put the row count from the flat file into one variable, put the row count from the final table into the other variable, then send an e-mail to me showing both variables for testing. For the real job I need: if both variables are the same, OK; else send me an e-mail.
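For the co-worker's SELECT COUNT(*) check, one hedged option is to add an Execute SQL Task (after the Copy Data Flow Task, using the WIMS connection) that writes the table count into a third package variable; the table name dbo.FinalTable and the variable name TableCount below are only placeholders:

-- Query for an Execute SQL Task; map the single-row result set to a package variable such as TableCount
SELECT COUNT(*) AS TableCount FROM dbo.FinalTable

The Script Task (or a precedence-constraint expression) can then compare RowCount, RowNumber, and that count and branch to a Send Mail Task when they disagree. Also make sure every variable the script touches is listed in the Script Task's ReadOnlyVariables (or ReadWriteVariables) property and spelled with the same case; that is the usual cause of the "Failed to lock variable for read access" error.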
I am trying to insert new records into the target table if no records exist in the source table. I am passing user-specified values for the insert, but it does not insert any values, nor does it throw any errors. The insert needs to occur in the LOAN_GROUP_INFO table, i.e. the target table.
MERGE INTO LOAN_GROUP_INFO AS TARGET
USING (SELECT LGI_GROUPID
       FROM LOAN_GROUPING
       WHERE LG_LOANID = 22720
         AND LG_ISACTIVE = 1) AS SOURCE
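The snippet stops before the ON and WHEN clauses, so here is a hedged sketch of how the rest typically looks; the join column and the inserted column list are assumptions. One thing worth noting: MERGE drives off the source query, so if the USING SELECT returns no rows there is nothing to insert and no error, and WHEN NOT MATCHED [BY TARGET] fires for source rows missing from the target, not for targets missing from the source:

MERGE INTO LOAN_GROUP_INFO AS TARGET
USING (SELECT LGI_GROUPID
       FROM LOAN_GROUPING
       WHERE LG_LOANID = 22720
         AND LG_ISACTIVE = 1) AS SOURCE
   ON TARGET.LGI_GROUPID = SOURCE.LGI_GROUPID      -- join column is an assumption
WHEN NOT MATCHED BY TARGET THEN
     INSERT (LGI_GROUPID)                           -- add the user-specified columns/values here
     VALUES (SOURCE.LGI_GROUPID);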
I just opened a large table with about 800 columns and 300,000 rows. Doing the right-click Open on the table displays the message: Exception has been thrown by the target of an invocation. Please help me determine what the error is and how to solve it. I have googled it for days now and no one has a situation similar to mine. Many have the same error, but their fix is not relevant to my issue. If you know about some SQL query limit, please let me know. The funny thing is that if I right-click the table and do Script ---> SELECT, then it does pull the data. It only doesn't work when I do "Open Table."
I have an SSRS 2012 report which has a few columns with long text. They appear fine when viewed in the browser. However, when I export the report to Excel, the data is shrinking. How can I avoid the data shrinking in Excel?
I am new to using the MERGE statement. The MERGE cannot find a matching CardNumber in the target table, so it tries to insert a row that already exists on the target table, causing SQL to reject it because duplicate keys are not allowed. The CardNumber is defined as a primary key on the target table with no duplicates allowed. The snippet below stops when MERGE inserts a row that already exists on the target. The source table contains multiple rows with the same CardNumber because it is a transactional table with multiple redemptions.
If MERGE cannot handle a many (source) to one (target) relationship, what other method can I change to in order to update the target GiftCard table, which keeps track of gift card balances?
Below is the error message:
Msg 2627, Level 14, State 1, Line 5
Violation of PRIMARY KEY constraint 'PK_GiftCard'. Cannot insert duplicate key in object 'dbo.GiftCard'. The duplicate key value is (63027768).
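Since the GiftCard balance is one row per card, the usual fix is to collapse the redemptions per CardNumber before the MERGE sees them, so each target row is matched by at most one source row. A hedged sketch; the source table name and the amount/balance columns are assumptions:

MERGE dbo.GiftCard AS t
USING (SELECT CardNumber, SUM(Amount) AS TotalRedeemed    -- one row per card
       FROM dbo.Redemption                                -- source table name is an assumption
       GROUP BY CardNumber) AS s
   ON t.CardNumber = s.CardNumber
WHEN MATCHED THEN
     UPDATE SET t.Balance = t.Balance - s.TotalRedeemed
WHEN NOT MATCHED BY TARGET THEN
     INSERT (CardNumber, Balance)
     VALUES (s.CardNumber, s.TotalRedeemed);               -- initial-balance rule is an assumption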
I am having a problem updating data in a DB2 target table.
I followed BJ Custard's post (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1058272&SiteID=1&mode=1) (thanks a million!) and configured an OLE DB destination to insert data. But I also have to update or delete data in the target table based on a flag from the source.
I tried using an OLE DB Command which uses the OLE DB connection created by following the steps posted in the above link.
Trial 1 (the real requirement): When I used the SQL query:
delete from table where Col1=? and Col2=?
I am unable to map the parameters. When I click the Refresh button after writing the query, I get the message "There is a data source column with no name. Each data source column must have a name." On top of that, there are no parameters to map to.
Trial 2: When I hard-code the parameters:
delete from table1 where Col1='abc' and Col2='xyz'
no parameters come up, so there is no mapping. So when I execute it I get the following error:
Error: 0xC0202009 at Load .....................................................: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E00. Looking up the above error codes shows they relate more to the target DB2 database.
I am sure someone must have used the OLE DB Command task for more than just inserts.
I need to update the ilocationid from Table 1 on all Table 2 records related to Table 1, but there is no direct relation from Table 1 to Table 2. I need Table 3 to make the connection from Table 1 to Table 2.
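A hedged sketch of that kind of update, joining through the bridge table; all of the table and key names here (Table1/Table2/Table3, Table1ID, Table2ID) are placeholders for whatever the real relations are:

UPDATE t2
SET    t2.ilocationid = t1.ilocationid
FROM   Table2 AS t2
JOIN   Table3 AS t3 ON t3.Table2ID = t2.Table2ID     -- bridge from Table2 to Table3
JOIN   Table1 AS t1 ON t1.Table1ID = t3.Table1ID     -- bridge from Table3 to Table1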
declare tableName table
(
    uniqueid  int identity(1,1),
    id        int,
    starttime datetime2(0),
    endtime   datetime2(0),
    parameter int
)
A stored procedure has a new set of values for a given id. Sometimes the starttime and endtime are the same, in which case I update the value of parameter. Sometimes I add a new time range (insert statement), and sometimes I delete a time range (delete statement).
I had a question on merge, with insert, delete and update and I got that resolved. However I have a different question regarding performance of the merge statement.
If my target table has hundreds of millions of records and I want to delete/update/insert a handful of records, will SQL server scan the entire target table? I can't have:
merge ( select * from tableName where id = 10 ) as target using ...
and I can't have:
merge tableName as target
using [my query] as source
   on source.id = target.id
  and source.starttime = target.starttime
  and source.endtime = target.endtime
where target.id = 10 ...
This means I cannot filter the set of rows in the target table to a handful of records where id = 10.
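One workaround I've seen, assuming the schema sketched above, is to make the target itself a filtered common table expression; SQL Server accepts a CTE as the MERGE target as long as it is updatable (one base table), and the WHEN NOT MATCHED BY SOURCE branch then only considers the filtered slice. The @newValues source is a placeholder for whatever the stored procedure supplies, and an index on (id, starttime, endtime) still matters for performance:

;WITH target AS
(
    SELECT id, starttime, endtime, parameter
    FROM tableName
    WHERE id = 10                                    -- restrict the target up front
)
MERGE target
USING (SELECT id, starttime, endtime, parameter FROM @newValues) AS source
   ON source.id = target.id
  AND source.starttime = target.starttime
  AND source.endtime = target.endtime
WHEN MATCHED THEN
     UPDATE SET parameter = source.parameter
WHEN NOT MATCHED BY TARGET THEN
     INSERT (id, starttime, endtime, parameter)
     VALUES (source.id, source.starttime, source.endtime, source.parameter)
WHEN NOT MATCHED BY SOURCE THEN
     DELETE;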
I am new to SSIS. Does anyone know how to verify the number of records that I load from a CSV file into a SQL database table?
For example, the source file is called product.csv and the target table in the database named DSS is called PRODUCT. I load data from the flat file to the table, then I need verification: if the count between source and target does not match, send an e-mail to me.
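One hedged way to wire that up: capture the flat-file count with a Row Count transform into a package variable, then run something like the query below in an Execute SQL Task against DSS, mapping that variable to the ? parameter; route the task's failure path (or a precedence-constraint expression comparing the two counts) to a Send Mail Task:

-- ? is mapped to the SSIS variable holding the row count read from product.csv
DECLARE @fileCount int
SET @fileCount = ?
IF (SELECT COUNT(*) FROM dbo.PRODUCT) <> @fileCount
    RAISERROR('Row count mismatch between product.csv and PRODUCT', 16, 1)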
In the past, I created a DTS package on the 2000 version that imports a TXT file into a SQL 2000 table.
Now I have migrated the DTS package to a DTSX (SSIS) package, and all is working fine, but I cannot find how to edit or modify the target table name in the DTSX (SSIS) package in BI Studio.
My production box is running SQL 7.0 with merge replication. I want to add one more table, and I want to add one more column to an existing replicated table. Can anybody guide me on how to add these? This is very urgent. Regards, Don
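For the new table, the usual route is to add it as a new article to the existing merge publication and then rerun the snapshot agent; this is only a hedged sketch with placeholder names:

EXEC sp_addmergearticle
    @publication   = N'MyMergePublication',   -- placeholder
    @article       = N'NewTable',
    @source_owner  = N'dbo',
    @source_object = N'NewTable'

Adding a column to a table that is already published is trickier on 7.0; I believe sp_repladdcolumn only arrived with SQL 2000, so on 7.0 it generally means taking the table out of the publication, altering it, and re-adding and re-snapshotting it.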
Hello everyone,

I am involved in a scenario where there is a huge (SQL Server 2005) production database containing tables that are updated multiple times per second. End-user reports need to be generated against the data in this database, and so the powers-that-be came to the conclusion that a reporting database is necessary in order to offload report processing from production; of course, this means that data will have to be replicated to the reporting database. However, we do not need all of the data in the production database, and perhaps a filtering criteria can be established where only certain rows are replicated over to the reporting database as they're inserted (and possibly updated/deleted).

The current thought process is that the programmers designing the queries/reports will know exactly what data they need from production and be able to modify the replication criteria as needed. For example, programmer A might write a report where the data he needs can be expressed in a simple replication criteria for table T where column X = "WOOD" and column Y = "MAHOGANY". Programmer B might come along a month later and write a report that relies on the same table T where column X = "METAL" and column Z in (12, 24, 36). Programmer B will have to modify programmer A's replication criteria in such a way as to accommodate both reports, in this case something like "Copy rows from table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X = "METAL" and col Z in (12, 24, 36))". The example I gave is really trivial, of course, but is sufficient to give you an idea of what the current thought process is.

I assume that this is a requirement that many of you may have encountered in the past and I am wondering what solutions you were able to come up with. Personally, I believe that the above method is prone to error (in this case the use of triggers to specify replication criteria) and I'd much rather use replication services to copy tables in their entirety. However, this does not seem to be an option in my case due to the sheer size of certain tables. Is there anything out there that performs replication based on complex programmer-defined criteria? Are triggers a viable alternative? Any alternative out-of-the-box solutions?

Any feedback would be appreciated.

Regards!
Anthony
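For what it's worth, built-in transactional replication does support a row filter per article, so the per-report criteria could live in the publication rather than in trigger code. A hedged sketch, with the publication name a placeholder and the filter lifted straight from your example:

EXEC sp_addarticle
    @publication   = N'ReportingPub',          -- placeholder
    @article       = N'T',
    @source_owner  = N'dbo',
    @source_object = N'T',
    @filter_clause = N'(X = ''WOOD'' AND Y = ''MAHOGANY'') OR (X = ''METAL'' AND Z IN (12, 24, 36))'

Changing the filter later (via sp_changearticle) typically forces a reinitialization of subscriptions, so it doesn't remove the coordination problem between programmers, but it does keep the criteria declarative and out of triggers.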
I am writing an insert statement that looks like:
INSERT INTO SppTarget (IndicatorNumber, Part, Years, Target, CompareMethod)
SELECT '8', 'B', '20052006', '0.682', '1' UNION ALL
SELECT '8', 'B', '20062007', '0.688', '1' UNION ALL
SELECT '8', 'B', '20072008', '0.692', '1'
What if I want to set Target = NULL in this statement, how can I do that?
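Assuming the Target column is nullable, you can put the NULL literal, unquoted, in the select list for the rows that need it, for example:

INSERT INTO SppTarget (IndicatorNumber, Part, Years, Target, CompareMethod)
SELECT '8', 'B', '20052006', NULL, '1' UNION ALL
SELECT '8', 'B', '20062007', '0.688', '1'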
I have set up an MSX server for managing all of our backup jobs. However, when I try to create a DB maintenance plan I can only see the system DBs, not user-created ones!