Is there a way to hardcode the IP address instead of the server name in a replication script? One of our servers is registering two IP addresses for itself in DNS, one valid and one invalid, which is causing replication to fail.
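A minimal check, assuming a fairly default setup: replication agents connect by the registered server/network name rather than by IP, so the usual workaround is to leave the script as-is and make sure that name resolves to the valid address (for example via a hosts-file entry on the machines running the agents). The queries below only confirm which name replication is actually using; nothing here is specific to your environment.

-- name the replication agents will use for this server
SELECT @@SERVERNAME AS ReplicationServerName;

-- names/network names this instance knows about (publisher, subscribers, linked servers)
EXEC sp_helpserver;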
So here's the deal. I just started with a new company as a SQL Developer/Analyst. I've got a couple years of experience with SQL Server, mostly 2000, on some fairly large and complex databases (or so I thought).
So I get to this new company and the database structure is just wacky. I've never seen anything like this before. After a few google searches I find a bunch of articles on EAV. Yup, that's what I've gotten myself into. On top of that, it seems to be some exploded EAV hybrid, possibly EAV/CR or something I saw. I've dubbed it the ESEASAADSADAVSAVCRS Schema (Entity Subentity Attribute subattribute attributedata subattributedata attributevalue subattributevalue circular reference system). Gotta laugh so you don't cry, right?
As far as I can tell, all of the data they have is submitted from clients, cleaned, aggregated and then used to generate reports which clients in turn subscribe to. From what I've read and seen here, EAV is horrible for reporting (among other things) and they are having performance problems. The guy 2 times before me made a reporting table structure that does all of the aggregations and spits the data into new tables for the reports to run off of. The problem? The reporting table structure is also EAV!
As the original designer left the company, he said the word CUBE. Then comes in the next "SQL Guru" (she was only here for 6 months, can't imagine why). So they start doing upgrades to SQL 2005 and she takes her predecessor's advice and starts designing a dimensional model in SSAS based off the reporting database structure. EAV + CUBES = WTF? Did I happen to mention she didn't have any OLAP experience when she started? (neither do I, at least not in a production environment)
So now there's me sitting here 2 weeks in with an EAV database, a pseudo-EAV reporting database, some unfinished cubes, not wanting to touch anything for fear of the whole thing imploding.
Here are some of the factors I must take into account:
1. The company website serves as the client UI and is tightly integrated with the EAV schema.
2. New data sources come and go quite often, which means lots of attribute changes to the data.
3. After looking at the data with what little SSAS knowledge I have, it seems that going this direction might just be useless. I believe all of the fact/measure data is stored in these dynamic attributes, and it seems like I would be changing them on every load.
4. I thought about dropping the cube idea, redesigning the reporting database structure to 3NF, and then pumping the data from EAV to 3NF for reporting (see the sketch after this list), but... uhhh... damn, my mind went blank.
5. My brain is fried from looking at this thing so I can't remember what other points I was going to bring up... please give some advice.
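For what it's worth, a minimal sketch of the kind of EAV-to-columns pivot that an EAV-to-3NF pump usually boils down to. Every name here (EntityValue, Attribute, Color, Weight) is made up for illustration and is not from the actual schema.

-- pivot EAV rows into one column per attribute (hypothetical names)
SELECT v.EntityID,
       MAX(CASE WHEN a.AttributeName = 'Color'  THEN v.AttributeValue END) AS Color,
       MAX(CASE WHEN a.AttributeName = 'Weight' THEN v.AttributeValue END) AS Weight
FROM dbo.EntityValue v
JOIN dbo.Attribute a ON a.AttributeID = v.AttributeID
GROUP BY v.EntityID;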
When the data is in .mdb format, the query below works:

SELECT *
FROM [rating & px Tgt History]
WHERE ((([from] Like "*Init*" And [action] Like "*Target*")=False and deleted=false));

But when Access is linked to a SQL Server back end via ODBC, I get this:

ODBC call failed. [Microsoft][ODBC SQL Server Driver][SQL Server] Line 1: Incorrect syntax near '='. (#170)
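For reference, a sketch of what that WHERE clause would look like in T-SQL (for a pass-through query or a SQL Server view), assuming deleted is a bit column; T-SQL uses % wildcards rather than *, and has no boolean =False comparison:

SELECT *
FROM [rating & px Tgt History]
WHERE NOT ([from] LIKE '%Init%' AND [action] LIKE '%Target%')
  AND deleted = 0;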
Hi, I'm in SQL hell right now. I'm having a hard time learning this SQL thing...
The thing is this: I'm currently using the book ASP.NET Unleashed, and most of the examples there are based on SQL Server. What I was trying to do before was convert everything to OleDb to fit the MS Access database I have right now.
Unfortunately, some of the code doesn't seem to work properly. Maybe it's because I'm using OleDb...
So what I did was download the MSDE SP3 package and install it on my PC. Now that I have a SQL Server for my WebMatrix, I just don't know what to do next. I mean, where do I put the SQL sample databases like Northwind and pubs?
I'm really confused about this SQL thing. I really hate it.
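A sketch of the usual route, assuming an MSDE/SQL 2000-era instance: the Northwind and pubs sample databases are created by running the instnwnd.sql and instpubs.sql scripts (from the SQL Server 2000 sample database download) against the instance with osql, rather than by copying files anywhere. Alternatively, if you already have the .mdf/.ldf files, they can be attached; the file paths below are placeholders, not real locations.

-- attach an existing copy of Northwind (hypothetical file paths)
EXEC sp_attach_db @dbname = N'Northwind',
     @filename1 = N'C:\Data\northwnd.mdf',
     @filename2 = N'C:\Data\northwnd.ldf';

-- confirm which databases the instance can see
SELECT name FROM master.dbo.sysdatabases;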
When I delete a row in table1, a cascade delete relationship deletes the appropriate table2 row(s). Since I have a trigger on table2 that updates a few fields in table1 (field3, field4), when I try to delete a row from table1 I get an error: the cascade delete fires the trigger on table2, which in turn tries to update the table1 fields and thus fails. How do I stop the trigger from firing?
Triggers look something like this:
CREATE TRIGGER trg_delete_table1_field3_field4 ON dbo.table2
FOR DELETE
AS
BEGIN
    -- recompute the table1 totals for every key affected by the delete;
    -- joining to the deleted pseudo-table handles multi-row deletes
    UPDATE t1
    SET field3 = ISNULL(t2.sum_field3, 0),
        field4 = ISNULL(t2.sum_field4, 0)
    FROM table1 t1
    LEFT JOIN (SELECT [key], SUM(field3) AS sum_field3, SUM(field4) AS sum_field4
               FROM table2
               GROUP BY [key]) t2 ON t2.[key] = t1.[key]
    WHERE t1.[key] IN (SELECT [key] FROM deleted)
END
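On the question of stopping the trigger from firing during the cascade, a minimal sketch, assuming you control the delete and the trigger name is as shown; disabling a trigger affects every session using the table for that window, so it is normally wrapped tightly around the statement:

-- temporarily suppress the table2 trigger while the cascading delete runs
ALTER TABLE dbo.table2 DISABLE TRIGGER trg_delete_table1_field3_field4;

DECLARE @keyToDelete int;
SET @keyToDelete = 42;   -- illustrative key value and type
DELETE FROM dbo.table1 WHERE [key] = @keyToDelete;

ALTER TABLE dbo.table2 ENABLE TRIGGER trg_delete_table1_field3_field4;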
I have several smallish databases running on an MPC (www.mpccorp.com) server. Device Manager says it has an LSI Logic 1020/1030 Ultra320 SCSI Adapter and a MegaRAID SATA 150-6 RAID controller. It doesn't have any kind of Windows-accessible RAID management interface.
Several months ago I started getting corrupt databases. They would get errors that a DBCC CHECKDB couldn't fix. I never found specific help on this, but most of the similar issues I saw pointed toward the RAID controller. We contacted MPC, who had updated RAID firmware for us to try. We flashed the RAID card, reformatted the disks, and restored everything from the last good backup (it had been throwing errors for a couple of weeks before I noticed them).
All was good for about a month, but now I'm back to the same situation. I have several corrupt databases. I have good backups, but can't even restore them because I get errors on the restore. My next step is to pay for an incident with Microsoft, but I suspect they'll just point me back to the hardware. If you have any suggestions for problem determination or resolution, I'd sure appreciate them!
Cheers, Martin Nickel
Sample corruption error: SQL Server detected a logical consistency-based I/O error: torn page (expected signature: 0x0; actual signature: 0x3f380c2c). It occurred during a read of page (1:9) in database ID 9 at offset 0x00000000012000 in file 'E:\Program Files\Microsoft SQL Server\MSSQL\Data\MyDB.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
Sample error during DBCC CHECKDB: Msg 8939, Level 16, State 98, Line 1 Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID -9156028125792763904 (type Unknown), page (34262:2139451659). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 29362185 and -4. Repairing this error requires other errors to be corrected first.
Sample database restore error: Msg 3283, Level 16, State 1, Line 1 The file "MyDB_log" failed to initialize correctly. Examine the error logs for more details. Msg 3013, Level 16, State 1, Line 1 RESTORE DATABASE is terminating abnormally.
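Two hedged options while the hardware question gets sorted out, assuming this is SQL 2005 (the error wording suggests it is); neither fixes the underlying I/O problem, and the restore option is strictly a last resort to salvage readable pages:

-- catch future I/O corruption earlier than TORN_PAGE_DETECTION does (SQL 2005+)
ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;

-- last resort: restore what can be read even if some pages are damaged
RESTORE DATABASE MyDB
FROM DISK = N'E:\Backups\MyDB.bak'   -- hypothetical backup path
WITH REPLACE, CONTINUE_AFTER_ERROR;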
Ok, I posted here recently and received helpful replies which allowed me to work around a problem. The original question was posted here: http://forums.asp.net/t/1112669.aspx

But because I'm learning both ASP.NET 2.0 AND VB 2005, I sort of want to get to the bottom of stuff. I've found out what was going wrong, but I don't understand it. The problem related to retrieving an output parameter from a stored procedure. I was adding the parameter to the command object I was using as follows:

cmd.Parameters.Add(New SqlParameter("@memberid", Data.SqlDbType.Int, 0, Data.ParameterDirection.Output))

but it wasn't working (i.e. I wasn't seeing the return value). A helpful poster's workaround was to instead do this:

Dim pMemberId As New SqlParameter("@memberid", SqlDbType.Int)
pMemberId.Direction = ParameterDirection.Output
cmd.Parameters.Add(pMemberId)

Having poked around some more, I've discovered that if I use the original code and then type:

?cmd.Parameters("@memberid").Direction

I get the value:

Input {1}

This even happens if I explicitly use 2 instead of Data.ParameterDirection.Output. Can anyone explain why this is happening? What's the point of allowing me to pass a parameter into a constructor if it's just going to ignore it?
I'm trying to use DTS to import a space delimited file. One column uses " as a text qualifier so I set this in the options. The problem arises when a " shows up between the 2 text qualifiers. It's seen as a set of qualifiers with a 2nd qualifier with no end. I obviously get an error at this point. Anyone have any good advice on how to squash this one?
Does anybody know where SSIS Data Connections are stored? Whenever one creates a Connection Manager, a list of all created Data Connections appears. It's very quick and easy to create a Connection Manager from an existing Data Connection, so really the latter are in essence the Connection Managers and are thus part of the application. It is therefore important to back them up if for example one wants to migrate the application to another computer. I have looked everywhere in Documents and Settings and Program Files and I can't find any folder or file where these Data Connections are stored! It's annoying to have this mysterious black-box behaviour!
Ok, I have tried everything I can think of, but I am still getting errors to do with the SQL Server 2005 beta. Since the beta expired on all of the VS.NET 2005 components, I thought it would be a good idea to uninstall the lot to save some hard disk space... how wrong I was!!
I originally developed my application using SQL 2005 Developer Edition, but want to switch to using an XCOPY-deployed DB on SQL Express for deployment.
I have successfully copied the database .mdf/.ldf files over to my project, and can connect using the Data Source=.\SQLExpress and AttachDbFilename=|DataDirectory|\[database].mdf attributes.
The question is, how do I enable CLR integration for my C# SP's?
I've tried executing the following in various places:
sp_configure 'clr enabled', 1
go
reconfigure
go
But I obviously haven't hit the spot because I'm getting the following error when VS deploys my SP library:
Error: starting database upload transaction failed. Error: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
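For what it's worth, a small check worth running while connected to the SQL Express instance the project actually deploys to (not the Developer Edition instance), to confirm the setting really took effect; the Access is denied error itself looks more like a permissions problem during deployment, so this is only part of the picture:

-- show config_value vs run_value; run_value must be 1 after RECONFIGURE
EXEC sp_configure 'clr enabled';

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;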
I have a brand-new Toshiba laptop, running Vista Business, that I installed SQL Express onto. Prior to installation, I was sure to install all the requisite IIS components so SSRS would install.
The installation ran fine -- installed all components. The configuration ran fine. Everything that is supposed to be green shows green.
But, when I go to http://localhost/ReportServer, I get:
Server Error in Application "Default Web Site/ReportServer"
HTTP Error 404.2 - Not Found Description: The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server.
- No handler mapping for this request was found. A feature may have to be installed.
- The Web service extension for the requested resource is not enabled on the server.
- The mapping for the extension points to the incorrect location.
- The extension was misspelled in the browser or the Web server.

What you can try:

- Install the feature that handles this request. For example, if you get this error for an .ASPX page, you may have to install ASP.NET via IIS setup.
- Verify that the Web service extension requested is enabled on the server. Open the IIS Manager and navigate to the server level. In the Features view, double-click ISAPI and CGI Restrictions to verify that the Web service extension is set to Allowed. If the extension is not in the list, click Add in the Actions pane. In the Add ISAPI and CGI Restrictions dialog box, type the path of the .dll or .exe file in the ISAPI or CGI Path box, or click Browse to navigate to the location of the file. In the Description box, type a brief description of the restriction. (Optional) Check "Allow extension path to execute" to allow the restriction to run automatically. If you do not check this option, the restriction status is Not Allowed, which is the default. You can allow the restriction later by selecting it and clicking Allow on the Actions pane. Click OK. NOTE: Make sure that this Web service extension or CGI is needed for your Web server before adding it to the list.
- Verify that the location of the extension is correct.
- Verify that the URL for the extension is spelled correctly both in the browser and the Web server.
- Create a tracing rule to track failed requests for this HTTP status code.

This error occurs when the necessary Web service extension is not enabled, or the location or the name of the extension is misspelled or incorrectly entered.
-------------------------------------------------------------------------------- Server Version Information: Internet Information Services 7.0.
The only lead I could find when I googled this error was a reference to running appcmd to ensure that asp.net was enabled. It sure looks like it is:
Honestly... I'm out of ideas. I've been messing with this for 8 hours now, and I'm ready to fling the laptop out the window. I've completely UNinstalled SQL Server, IIS, reinstalled both, repeated the uninstall/reinstall after double-checking all files were deleted, and so on.
Does *anyone* know how to resolve this error? I checked IIS.NET and although they have a few references to it (not within the Reporting Services context) there never seems to be a definitive answer as to what the solution is.
I'm getting this after upgrading from 2000 to 2005:

Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist.

The only suggestions I've seen are to dump all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only suggestion? Thanks in advance.
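A hedged starting point, assuming the default distribution database name: the '(null)' agent usually means the distributor still holds subscription rows that no longer match a publication after the upgrade. The statements below only surface what the distributor and a published database think exists, so you can see what needs dropping or re-scripting; in SSMS you can also right-click a publication and use Generate Scripts rather than writing everything by hand.

-- on the distributor: publications and subscriptions it still tracks
USE distribution;
SELECT p.publication, p.publisher_db, s.subscriber_id, s.status
FROM dbo.MSpublications p
LEFT JOIN dbo.MSsubscriptions s ON s.publication_id = p.publication_id;

-- in each published database: what the publisher thinks exists
EXEC sp_helppublication;
EXEC sp_helpsubscription;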
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers.

Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore. When I tried to set up the replication through the script, it created the publication with the following error message:

Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.

It seems the previous replication set up these system views, such as SYNC_FCR To GPRPTS_GL00100. I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.

My question is: how do I remove these system views, or how do I make the replication work without using these views, or create new views? Why is it creating those system views in the first place?

I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance.

Regards,
Aravin Rajendra.
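A minimal sketch of how to hunt down the leftovers, assuming the names all start with SYNC_FCR as in the error; these synchronization views are created per article by the publication, and once the old publication is really gone they can be dropped so the new script can recreate them:

-- list leftover synchronization views from the old publication
SELECT name FROM sysobjects
WHERE type = 'V' AND name LIKE 'SYNC_FCR%';

-- drop the one named in the error (note the embedded spaces)
DROP VIEW [SYNC_FCR To GPRPTS_GL00100];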
My production box is running SQL 7.0 with merge replication. I want to add one more table, and I also want to add one more column to an existing replicated table. Can anybody guide me on how to do this? This is very urgent. Regards, Don
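A rough sketch, with hypothetical publication and table names, of how a new table is normally added to an existing merge publication. Note that sp_repladdcolumn (for adding a column to a published table without reinitializing) only appeared in SQL 2000, so on 7.0 adding the column usually means altering the table and re-running the snapshot for that article.

-- add a new table as an article to an existing merge publication (hypothetical names)
EXEC sp_addmergearticle
     @publication   = 'MyMergePub',
     @article       = 'NewTable',
     @source_object = 'NewTable';

-- then re-run the Snapshot Agent for the publication so subscribers pick up the new article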
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB) with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
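The usual pattern for this situation (a leftover replication flag holding the log), sketched below with a placeholder database name; the obvious caveat is that sp_repldone with @reset should only be run when the database genuinely is not replicated, because it marks everything as already distributed.

-- run in the affected database; 'MyDatabase' is a placeholder name
USE MyDatabase;
-- if this errors with "the database is not published", temporarily mark it published
-- with sp_replicationdboption 'MyDatabase', 'publish', 'true' first, then set it back
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
     @numtrans = 0, @time = 0, @reset = 1;

-- clear any leftover replication metadata in the database
EXEC sp_removedbreplication 'MyDatabase';

-- with Simple recovery the log space should free up at the next checkpoint
CHECKPOINT;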
Hello, I'm getting the following error message when I try to add a row using a stored procedure:

"The identity range managed by replication is full and must be updated by a replication agent."

I read up on the subject and have tried the following solutions according to MSDN without any luck (http://support.microsoft.com/kb/304706). sp_adjustpublisheridentityrange (http://msdn2.microsoft.com/en-us/library/aa239401(SQL.80).aspx) has no effect.

For testing, I've reloaded everything from scratch, created the publications by running the generated SQL scripts, created the replication snapshots, and started the agents.

I've checked the current identity values in the Agent table:

DBCC CHECKIDENT ('Agent', NORESEED)
Checking identity information: current identity value '18606', current column value '18606'.

I checked the table to make sure there will be no conflicts with the primary key:

SELECT AgentID FROM Agent ORDER BY AgentID DESC

18603 is the largest AgentID in the table.

Using the Table Article Properties in the Publication Properties dialog, I can see values of:
Range Size at Publisher: 100,000
Range Size at Subscribers: 100
New range @ percentage: 80

In my mind this means that the Publisher will assign a new range when the current identity value goes over 80,000? The identity range for this table cannot be exhausted! I'm not sure what to try next.

Please! Any insight will be of great help.

Regards,
Bm
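One thing worth looking at, sketched below with the table name from the post: the error comes from the repl_identity_range check constraint that replication adds to the identity column, so comparing that constraint's allowed range with the current identity value often explains the message, and sp_adjustpublisheridentityrange (run at the publisher, and only effective once the relevant agents have run) should then hand out a fresh range.

-- show the repl_identity_range check constraint and its current bounds
EXEC sp_helpconstraint 'dbo.Agent';

-- current identity value for comparison
DBCC CHECKIDENT ('Agent', NORESEED);

-- ask the publisher to allocate a new range for this article
EXEC sp_adjustpublisheridentityrange @table_name = 'Agent', @table_owner = 'dbo';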
I have a VB.net app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.net app. I use RMO in the VB app to connect to the publisher. My problem is I am getting some strange behaviour, as follows:
- if I run the app and invoke the pull subscription it works fine. If I then close my app and go back in, I can access my data without any problem
- If I run the app and try to access data in my SQL Express database, it works fine. I can then close the app, reopen it, and run the pull subscription, and it works fine
however.......
- if I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without firstly closing and reopening the app, I get a login error
- if I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription, I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.NET 2 code to access my SQL Express data in the app, and I am closing all connections etc.
I have recently set up transactional replication in MS SQL 2000. After setting up the replication, the client's TempDB grew by almost 60GB. Now the client is blaming me for the TempDB growth and saying that it's because of the replication being set up. I tried to convince them otherwise, but they are not satisfied yet. Can anybody please tell me whether replication causes tempdb to grow, and if so, how? Can you suggest any good link for getting to know the internal workings of SQL Server replication?
I know that adding a column using ALTER TABLE automatically allows SQL Server 2005 to replicate the schema change to the subscribers. However, I would like to add a new column to an existing article that is being used for merge replication, and I don't want this column to be replicated. Re-initialising the subscriptions is not an option. Help would be appreciated.
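A sketch of the usual approach in SQL 2005, with made-up publication and table names: temporarily turn off DDL replication for the publication, add the column, then turn it back on; the new column then exists only at the publisher and never becomes part of the article.

-- stop merge replication from propagating the next schema change
EXEC sp_changemergepublication @publication = 'MyMergePub',
     @property = 'replicate_ddl', @value = '0';

ALTER TABLE dbo.MyArticleTable ADD LocalOnlyColumn int NULL;   -- hypothetical names

-- restore the default behaviour
EXEC sp_changemergepublication @publication = 'MyMergePub',
     @property = 'replicate_ddl', @value = '1';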
I have been researching on the proper steps or sequence to follow to completely remove SQL Server 2012 Transactional Replication. I have read articles about using SSMS as well as using replication stored procedures and some procedures use SQLCMD or just regular TSQL executed in SSMS. I have also read articles where people said all you really need is connect to the Publisher instance, find the publication you want to remove and choose "Delete" and everything will be taken care of behind the scene. I have three SQL servers that participate in transactional replication. SQL-P (publisher),
SQL-D (distributor), and SQL-S (subscriber). Do I need to connect to the distributor instance and the subscriber instance when removing transactional replication, or is it really just connecting to the publisher and clicking Delete on the publication? I want everything gone, including any metadata, system tables, the distribution db, and any other replication objects created during the initial configuration.
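Deleting the publication in SSMS does a lot of it, but it does not remove everything (the distribution database and the distributor configuration stay behind). A hedged sketch of the full sequence using the replication procedures, with placeholder publication and database names; each block is run on the server named in the comment:

-- on SQL-P (publisher), in the published database, for each publication:
EXEC sp_dropsubscription @publication = 'MyPub', @article = 'all', @subscriber = 'all';
EXEC sp_droppublication @publication = 'MyPub';
-- once all publications are gone, disable publishing for the database:
EXEC sp_replicationdboption @dbname = 'MyPubDb', @optname = 'publish', @value = 'false';

-- on SQL-S (subscriber), clean up pull-subscription metadata if a pull subscription was used:
EXEC sp_droppullsubscription @publisher = 'SQL-P', @publisher_db = 'MyPubDb', @publication = 'MyPub';

-- on SQL-D (distributor), remove distribution (this drops the distribution database):
EXEC sp_dropdistpublisher @publisher = 'SQL-P';
EXEC sp_dropdistributiondb @database = 'distribution';
EXEC sp_dropdistributor;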
Hello everyone,

I am involved in a scenario where there is a huge (SQL Server 2005) production database containing tables that are updated multiple times per second. End-user reports need to be generated against the data in this database, and so the powers-that-be came to the conclusion that a reporting database is necessary in order to offload report processing from production; of course, this means that data will have to be replicated to the reporting database. However, we do not need all of the data in the production database, and perhaps filtering criteria can be established where only certain rows are replicated over to the reporting database as they're inserted (and possibly updated/deleted).

The current thought process is that the programmers designing the queries/reports will know exactly what data they need from production and will be able to modify the replication criteria as needed. For example, programmer A might write a report where the data he needs can be expressed in a simple replication criterion for table T where column X = "WOOD" and column Y = "MAHOGANY". Programmer B might come along a month later and write a report that relies on the same table T where column X = "METAL" and column Z in (12, 24, 36). Programmer B will have to modify programmer A's replication criteria in such a way as to accommodate both reports, in this case something like "Copy rows from table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X = "METAL" and col Z in (12, 24, 36))". The example I gave is really trivial of course but is sufficient to give you an idea of what the current thought process is.

I assume that this is a requirement that many of you may have encountered in the past, and I am wondering what solutions you were able to come up with. Personally, I believe that the above method is prone to error (in this case the use of triggers to specify replication criteria) and I'd much rather use replication services to copy tables in their entirety. However, this does not seem to be an option in my case due to the sheer size of certain tables. Is there anything out there that performs replication based on complex programmer-defined criteria? Are triggers a viable alternative? Any alternative out-of-the-box solutions?

Any feedback would be appreciated.

Regards!
Anthony
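For reference, built-in transactional replication does support row filters on an article, so criteria like the example could live in the publication itself rather than in triggers. A minimal sketch with hypothetical publication and article names; note that after changing a filter, the article's synchronization view has to be regenerated with sp_articleview using the same clause, and the subscription typically has to be reinitialized, which may or may not be acceptable for tables this size.

-- attach a row filter to article T in a transactional publication (hypothetical names)
EXEC sp_articlefilter
     @publication   = 'ReportingPub',
     @article       = 'T',
     @filter_name   = 'flt_T',
     @filter_clause = N'(colX = ''WOOD'' AND colY = ''MAHOGANY'')
                        OR (colX = ''METAL'' AND colZ IN (12, 24, 36))';
-- then regenerate the article view with sp_articleview, passing the same filter clause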
I am working on bringing our disaster recovery site up as a live site. Currently we replicate to one of our servers (server B) with merge replication (from server A). Server A also does one-way transactional replication from some tables to several other servers, including servers at the DR site.
This setup is not going to be fast enough for what we need, so I am wondering: if a table is receiving merge replication, will the merge updates also replicate down the transactional path?
Example... Server B updates a row and merges it to Server A. Will this update then replicate (via transactional) to Server C?
I have a weird situation! I set up transactional replication on one of my development servers (SQL 2000 Dev Edition with SP4). It was running fine without any issues, and all of a sudden I noticed that in my Replication Monitor tab, under Publishers, where I usually see the publication, it is now empty.

I do see the Snapshot Agent, Log Reader, and Distribution Agent under my agents inside the Replication Monitor. But it was useful to see all 3 agents in one window under the publisher before. What happened? Is there any way to get that back inside the monitor? Has someone encountered this situation before? Please advise....

After that I tried to create a new set of replication on a different database on the same server, and I don't see those either under Replication Monitor - Publishers.... All it says is (No Items)....

I would appreciate any help to correct this issue... Thanks in advance.
I have set up transactional replication with everything on one box. Later (two or three weeks later), Replication Monitor is showing a red X under my publishers (the publication is disconnected). This is SQL 2005.
I have a SQL Server in an internal network and need to "replicate" to an identical server and database in a DMZ. The DMZ can only receive files sent by a custom component and no port(s) can be opened in the firewall or standard FTP used.
I also need to minimise traffic (i.e. sending whole tables is not an option, as some contain millions of rows), replicate approximately once hourly, and allow record locking only (as opposed to DB locking/exclusive access or table locking).
Any thoughts / experiences would be greatly appreciated; otherwise I'm looking at putting triggers on all the tables to monitor changes and generate SQL statements for execution against the DMZ server.
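In case it helps with the trigger route, a minimal sketch of the change-capture ("outbox") pattern, with entirely made-up table and column names: each trigger records which keys changed, and an hourly job reads the outbox, generates the SQL statements (or a data file) to ship through the custom file-transfer component, and clears the rows it has sent.

-- hypothetical change-capture table
CREATE TABLE dbo.ChangeOutbox (
    ChangeID   int IDENTITY(1,1) PRIMARY KEY,
    TableName  sysname       NOT NULL,
    KeyValue   nvarchar(100) NOT NULL,
    ChangeType char(1)       NOT NULL,          -- 'I', 'U' or 'D'
    ChangedAt  datetime      NOT NULL DEFAULT GETDATE()
);
GO
-- one trigger per monitored table (Orders/OrderID are illustrative)
CREATE TRIGGER trg_Orders_Capture ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserts and updates
    INSERT dbo.ChangeOutbox (TableName, KeyValue, ChangeType)
    SELECT 'Orders', CAST(i.OrderID AS nvarchar(100)),
           CASE WHEN EXISTS (SELECT 1 FROM deleted d WHERE d.OrderID = i.OrderID)
                THEN 'U' ELSE 'I' END
    FROM inserted i;
    -- deletes
    INSERT dbo.ChangeOutbox (TableName, KeyValue, ChangeType)
    SELECT 'Orders', CAST(d.OrderID AS nvarchar(100)), 'D'
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.OrderID = d.OrderID);
END;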
I'm interested in combining the Peer-to-Peer Transactional Replication and Standard Transactional Replication to provide a scale out solution of SQL Server 2005. The condition is as follows:
We may have 10 SQL Server 2005 instances (1 Publisher + 9 Subscribers) running transactional replication in the production environment, with updates allowed at the subscribers. To offload the load on the publisher, we plan to have 2 publishers (PubNode1 and PubNode2) using peer-to-peer transactional replication, and the remaining 8 subscribers will be divided into 2 groups. Subscribers 1-4 (SubNode1, SubNode2, SubNode3, and SubNode4) will be set up as standard transactional replication subscribers of PubNode1, and the remaining 4 subscribers (SubNode5, ..., SubNode8) will be set up as standard transactional replication subscribers of PubNode2.
Is it possible to set up the above 2 Publisher + 8 Subscriber topology? Also, could we set up the 8 subscribers with updatable subscriptions so that each node is updatable?
We do not plan to set up all 10 nodes using peer-to-peer transactional replication, as we would have to make sure n*(n-1)/2 (i.e. 45) peer-to-peer connections are reliable. The maintenance cost seems high if the servers are not on a LAN, and the topology is very tightly coupled. So we prefer to divide the 10 nodes into 2 groups and reduce the cost for each node of maintaining connections to all the other sites.
I was working on a project that uses local and remote SQL Servers. In order to keep the databases up to date, I wanted to implement replication on the SQL Servers. But unfortunately, transactional replication, which meets my requirements best, is disabled in the replication configuration module (snapshot and merge replication are active). Is there any way I can get transactional replication enabled? I know it was supposed to be enabled by default. I'm using Windows 2003 Server and SQL Server 2000. Sincerely
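One thing to rule out first, as a hedged guess: in SQL Server 2000, MSDE and Personal Edition can subscribe to transactional publications but cannot publish them, which shows up exactly as the transactional option being greyed out while snapshot and merge remain available. A quick check of what edition is actually installed:

SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ServicePack;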
NOT FOR REPLICATION: indicates that the IDENTITY property should not be enforced when a replication login such as sqlrepl inserts data into the table. What is the purpose of the statement above?
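A small illustration with a made-up table, in case it helps: with NOT FOR REPLICATION on the IDENTITY column, ordinary user inserts get automatically generated values as usual, but when a replication agent (connecting under a replication login such as sqlrepl) inserts rows, it supplies the original identity values from the publisher and the column's current seed is not advanced by those inserts.

-- hypothetical table published for replication
CREATE TABLE dbo.Orders (
    OrderID   int IDENTITY(1, 1) NOT FOR REPLICATION PRIMARY KEY,
    Customer  varchar(50) NOT NULL,
    OrderDate datetime    NOT NULL DEFAULT GETDATE()
);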
I have a fairly complicated problem, or at least I think it's complicated.
I'm working with a database that is not very normalized, and I was asked to make a portion of the dataset available to web users.
Rather than build a complicated mess on top of the existing database, I opted to create an entirely new database inspired by the original, only with the data normalized. So, where I had 3 tables in the old system, I now have around 10 nice and neat normalized tables.
Now the trick is to get the data from the original database to the new one. I could write my own application to do this, but I wasn't sure if SQL Server's built-in replication would be of any help.
I think I'm a little bit confused because the two DB schemas are very different from one another, and I'm not sure if built-in replication can work with this sort of setup.
Luckily, the replication pretty much only needs to go one way, from the original database to the new one. The original database will still be in use, but for the most part only inserts will be happening there, and a few updates. In the new database, there will be no inserts, but there will be updates, and they don't have to be replicated back to the original DB. So, that seems pretty easy to me.
If I were to use replication rather than write my own tool to sync up the data, how would I go about this, and is it even possible?
One thing I thought of was creating a bunch of views in the original database that look just like the tables in the new system, and having replication work with those... the only downside I see is that the views will have no relationships, while in the new DB all the tables have valid relationships, so I may run into situations where child records are getting replicated before parent records, and that will cause issues.
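If it ends up being a home-grown sync instead of replication, a minimal sketch of the kind of pump that sidesteps the parent/child ordering worry by simply loading parents before children inside one transaction. Every object name here (OldDb, NewDb, BigFlatTable, Customer, [Order]) is made up for illustration.

-- illustrative one-way pump: parents first, then children, inside one transaction
BEGIN TRANSACTION;

INSERT INTO NewDb.dbo.Customer (CustomerID, CustomerName)
SELECT DISTINCT src.CustomerID, src.CustomerName
FROM   OldDb.dbo.BigFlatTable AS src
LEFT JOIN NewDb.dbo.Customer AS tgt ON tgt.CustomerID = src.CustomerID
WHERE  tgt.CustomerID IS NULL;                 -- only rows not copied yet

INSERT INTO NewDb.dbo.[Order] (OrderID, CustomerID, OrderDate)
SELECT src.OrderID, src.CustomerID, src.OrderDate
FROM   OldDb.dbo.BigFlatTable AS src
LEFT JOIN NewDb.dbo.[Order] AS tgt ON tgt.OrderID = src.OrderID
WHERE  tgt.OrderID IS NULL;                    -- children after their parents exist

COMMIT TRANSACTION;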