MSDE Replication Scenario
Aug 23, 2005
Greetings to fellow fleetfingers:
I have a huge replication task I need to perform. The source table has over 250,000,000 records, and approximately 400,000 records are added to it every day.
Currently I am running snapshot replication, and it takes 10 to 11 hours to complete (the internet connection between the production and the report server is slow). The reason I am using this method is that the source table does not have a timestamp column (which would let me run an incremental replication), and I cannot modify that table because it belongs to third-party software. It does, however, have a field that links to another table holding the timestamp.
Here is the source table and the other table's structure for reference:
DataLog
Name Datatype Size
DataLogStampID int 4
Value float 8
QuantityID smallint 2
DataLogStamp
ID (foreign key to the DataLogStampID field in the table above) int 4
SourceID smallint 2
TimestampSourceLT datetime 8
What I would like to know is: what is the best practice for handling this kind of replication with MS SQL Server 2000?
I am thinking of developing a procedure that follows these steps, using a temp table:
This procedure will run (say) every night.
The temp table will contain a timestamp field.
For one time only, I will run a snapshot replication, so that the existing data is copied into the destination db.
I will record this timestamp into a repl_status table which holds the timestamp of the last replication.
The procedure will read the timestamp from repl_status and select all records from the source table which were added since that date, put them into a new temp table (this table will have the same structure as the source and destination tables), and then a snapshot replication will "add" only these new records to the destination table every night. By using this method, I will only transfer the records added since the last replication (last night), which is far fewer records. A rough sketch of the selection step is below.
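As a rough illustration only: a minimal T-SQL sketch of the nightly incremental copy, assuming repl_status has a single LastReplicated datetime column and DataLog_new is the staging/temp table with the same layout as DataLog (both of those names are assumptions, not from the original system):

DECLARE @last datetime, @now datetime
SELECT @last = LastReplicated FROM repl_status
SET @now = GETDATE()

-- copy only the rows stamped after the previous run
INSERT INTO DataLog_new (DataLogStampID, Value, QuantityID)
SELECT d.DataLogStampID, d.Value, d.QuantityID
FROM DataLog d
INNER JOIN DataLogStamp s ON s.ID = d.DataLogStampID
WHERE s.TimestampSourceLT > @last
  AND s.TimestampSourceLT <= @now

-- remember where this run stopped
UPDATE repl_status SET LastReplicated = @now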
Any comments would be greatly appreciated.
Thanks for your time in advance,
Sinan Topuz
I have a product that sits on a main server and wish to implement functionality to allow salesmen to come along, pick up a snapshot of the database, go away and maybe modify/add to it, and then come back and "synchronise" their data. I'm reading up on Merge Replication for this purpose. Anyway, I created a publisher on my server and it went away and generated a "rowguid" column on all of my tables (my tables all have an Identity column key field). Now of course my "Inserts" no longer work, as they expect a GUID. I would have expected SQL Server to automatically generate a guid for new inserts (in a similar way to its TIMESTAMP), but it appears it doesn't, despite the fact I have "(newid())" as the default for the column. It always inserts the same value: {00000000-0000-0000-0000-000000000000}. So, back to basics: now that I have a guid field on each record, how do I manage inserts? Thanks.
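For what it's worth, an all-zero GUID usually suggests the INSERT is explicitly supplying an empty GUID value rather than leaving the column out and letting the (newid()) default fire. A minimal sketch with a hypothetical table (the table and column names are only illustrative):

-- hypothetical table mirroring the situation described above
CREATE TABLE Customers (
    CustomerID   int IDENTITY(1,1) PRIMARY KEY,
    CustomerName varchar(50),
    rowguid      uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT newid()
)

-- leave rowguid out of the column list so the default generates it
INSERT INTO Customers (CustomerName) VALUES ('Acme Ltd')

-- or supply a value explicitly from the client
INSERT INTO Customers (CustomerName, rowguid) VALUES ('Globex', NEWID())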
I have two sites, Site A and Site B.
Each site has two databases
Site A
Db1
Db2
Site B
Db1
Db2
Site A Db1 has to perform transactional replication to Site A Db2 and to Site B Db1 and Db2.
I set up Site A as publisher and distributor, and both Site A and Site B as subscribers.
Site B is in a different geographical area (state).
----------
Please suggest the best scenario to save bandwidth and reduce the load on the Publisher and Distributor.
-------
Earlier I thought I would implement local replication at Site B, between Db1 and Db2, but SQL Server does not let me set Db1 as publisher and distributor for its local database Db2.
-------
P.S. All my databases need the same transactions even though they are connected to different hardware in different places, so please don't ask why I need four similar databases.
On the national Server: SQL 2005 Enterprise
On the mobile clients: SQL 2005 Workgroup.
I was asked to find a solution for the following scenario:
2 mobile subscribers, S1 and S2, to the same simple merge publication P1 on server N.
Day 1,
S1 and S2 both synch up with N and both go off to do fieldwork
Day 3,
S1 and S2 both synch up with N.
S1 goes back to work
S2 shutdown the laptop and goes on 2-week vacation.
2 weeks later,
S1 synchs up with the server and goes off to do fieldwork.
S2 meets S1 in the field. Their workfield is in the North Pole.
S2 has the laptop with data 2-weeks old but no longer can have access to the master publisher N to synch the replica and get latest changes.
S2 will have to sync with S1, since S1's database is fresh. The challenge is to make the S2 and S1 replicas identical.
The Questions:
Is it possible for S2 to sync with S1?
If yes, how do we go about it? We need S1 and S2 to have identical replicas on their machines.
Now that S1 and S2 have identical databases and are both doing their fieldwork at the North Pole, can they both sync back with the national publisher N when they have access?
Keep in mind that S2 got its data updated from the replica on S1.
Thank you!
Dear all,
Since Windows Integrated Authentication does not work over proxy for Replication, could I still use SSL or SQL authentication over proxy? Thanks for any advice.
I have a distributed inventory control database that I am going to migrate from Access to SQL Server. I am going to use SQL Server Replication to keep the data current. There will be one SQL Server 2000 database at HeadOffice, and about 200 branches will each have MSDE 2000.
For tables that are the same for each branch (e.g. StockItems, Suppliers), I'm going to use transactional replication with HeadOffice as Publisher. There is no need for merge replication, because only HeadOffice will be changing this data.
For tables that are updated at the branches (e.g. deliveries, sales), there shouldn't be any conflicts either, as each branch will only update data that is specific to it. So transactional replication with the branches as Publishers seems like the answer. BUT, MSDE 2000 only handles merge replication as a publisher, so it looks like I'm going to have to use merge replication if I am going to use MSDE.
Question: do you think it is worth buying a full SQL Server licence for each branch so that I can use transactional replication with the branches as publishers, or should I stick with MSDE 2000 and merge replication? Also, are there any other reasons why MSDE won't be adequate as the database at the branches?
Thanks,
Craig
Summary:
Detail rows fail to insert in a master-detail scenario during merge replication. It happens sometimes, not always.
Topology:
I've got 5 servers running MSDE, one of them is central publisher/distributor for merge type of replication.
There is no row or column filtering: all subscribers have all data. The central publisher resolves conflicts with the default resolvers.
Some tables have master-detail relations (i.e. Orders-OrdersDetails, etc.). The relations are defined in the tables as FKs.
My application, which fills in the data, uses datasets with relations between tables defined the same way as in SQL Server, so the app first works with a dataset that cannot hold details without a master record. When the data is written to MSDE it is done without errors, meaning the master and detail tables are updated correctly (the tables in MSDE have the relations too).
The application runs at 4 different locations and fills data into the local db (a subscriber to the central publisher).
Sync occurs every 15 minutes in the following order (merge agents run at publisher/distributor):
subscriber1: every 15 minutes, starts at 00.00h
subscriber2: every 15 minutes, starts at 00.03h
subscriber3: every 15 minutes, starts at 00.06h
subscriber4: every 15 minutes, starts at 00.09h
A sync lasts about 15 seconds on average, never as long as 3 minutes.
Problem details:
Message:
The row was inserted at 'CentPub.myDB' but could not be inserted at 'Sub2.MyDB'. INSERT statement conflicted
with COLUMN FOREIGN KEY constraint 'FK_OrdersDetails_Orders'. The conflict occurred in database 'MyDB',
table 'Orders', column 'OrderID'.
Description
CentPub is the central publisher, which just collects data from the subscribers. So the order was made on one of the subscribers other than Sub2, meaning a user at location3 inserted an order with details into the local database. After a while CentPub pulled this order into its database (MyDB); after that CentPub tried to sync with some of the other subscribers (i.e. Sub2 == location2), and then for some unknown reason the order details failed to insert into Sub2's OrdersDetails table because of a conflict on constraint 'FK_OrdersDetails_Orders'.
After that happens, things go as described in http://support.microsoft.com/kb/307482 (generally: in the next session, the failed order details are deleted from CentPub, which means they are deleted from all subscribers after the syncs).
The problem is that my subscribers' tables do have the relations, so the cause isn't as described in the MS KB article, because my tables have relations on all servers.
After replication finishes I have Orders without detail records at all subscribers, so I assume the merge agents sometimes (not always) try to insert the details before the master rows during the same sync session.
In the resolution section of the MS KB article they say 'Mark the subscriber foreign key constraints as NOT FOR REPLICATION'.
In the relation definitions I see the 'Enforce relationship for replication' option, which is enabled on my tables.
If I turn that option off, is it possible for my subscribers to receive details without a master record? (My observation of the behavior says that will not happen; if it did, it would lead to problems in my app because my datasets enforce the relations too.)
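For reference, a sketch of how the subscriber-side constraint could be recreated with NOT FOR REPLICATION (the constraint and table names are taken from the error above; the column list is an assumption about the schema):

ALTER TABLE OrdersDetails DROP CONSTRAINT FK_OrdersDetails_Orders

ALTER TABLE OrdersDetails WITH CHECK
ADD CONSTRAINT FK_OrdersDetails_Orders
    FOREIGN KEY (OrderID) REFERENCES Orders (OrderID)
    NOT FOR REPLICATION

With NOT FOR REPLICATION the constraint is still enforced for ordinary user inserts; it is only skipped when a replication agent applies changes, which is what the KB article relies on.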
I bring the deleted records back to life with the conflict manager in EM, but I'd rather it never happened.
Is this a bug, a side effect, or am I doing something the wrong way?
Thanks and regards
I am using MSDE 2000 for my replication. The problem is that although creating the publication adds a GUID column to every table, if a table on 2 nodes has the same primary key value, only one of the rows is kept (according to priority). There are some conflict resolver methods that can be used for this purpose; I wanted to ask whether there is any other way for me to resolve this conflict. I am asking for a new way because my database schema has already been created and a lot of coding has been done on top of that schema.
I'd be thankful if you guide me.
Regards,
Hello.
I will be building a client application with MSDE.
I am facing a difficult question.
It is about keeping SQL 2000 Server and MSDE in sync.
The number of clients is 5000.
I am in a dilemma: should I use replication, or build my own DB sync application?
jongwoo
Hello!
I have a problem with creating merge replication between two instances of MSDE 2000. The article http://support.microsoft.com/kb/324992/, published by Microsoft, describes that this is possible.
I am creating Merge Publication at MSDE 2000, with Distributor and Publisher configured at same machine, with options "Allow Pull Subscriptions" and "Allow Anonymous Subscriptions". There is no problem with creating of Publication. Even snapshot generation finishes successfully. Also I create a login with SQL Server Authentication and "System Administrators" database role at the publisher in order to connect to it from Subscriber.
The problem occurs, when creating anonymous pull subscription to this publication at another instance of MSDE. During initial synchronization an error occurs:
The process could not connect to Distributor '<publisher_server_name>'. Login failed for user '<login_created_at_publisher>'. Reason: Not associated with a trusted SQL Server connection. The step failed.
I didn't use Windows Authentication at all, though, so the Subscriber shouldn't need to connect to the Distributor using a trusted SQL Server connection.
What is the problem and how can I work around it?
Note: the same setup works correctly if MS SQL Server is used as the Publisher instead of MSDE.
Please, help!
I can provide publication and subscription creation scripts, if required.
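For reference, the distributor connection used by the merge agent of a pull subscription is configured when the subscription agent is created, and the error commonly points to the Distributor instance accepting Windows authentication only (MSDE setups often default to that), so mixed mode has to be enabled there as well. A sketch of the subscriber-side calls with placeholder server, database, publication and login names (the exact parameters of sp_addmergepullsubscription_agent should be checked in Books Online):

exec sp_addmergepullsubscription
    @publisher = N'PUBSERVER', @publisher_db = N'PubDb',
    @publication = N'MyPublication', @subscriber_type = N'anonymous'

exec sp_addmergepullsubscription_agent
    @publisher = N'PUBSERVER', @publisher_db = N'PubDb', @publication = N'MyPublication',
    @distributor = N'PUBSERVER',
    @distributor_security_mode = 0,      -- 0 = SQL Server authentication
    @distributor_login = N'repl_login', @distributor_password = N'repl_password',
    @publisher_security_mode = 0,
    @publisher_login = N'repl_login', @publisher_password = N'repl_password'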
Hi,
Our current environment is: SQL Server 2000 backend; XP/Windows 2000 clients with MSDE loaded; a Microsoft Access 2002 app using DSN to connect to the local MSDE database; Merge replication which is a job within the MSDE engine that we execute in VBA code using the following DMO code:
' Find the replication job to execute (job names contain "LEXData" and the server name)
lngJobCount = oServer.JobServer.Jobs.Count
lngJobIndex = 1
blFoundJob = False
While lngJobIndex <= lngJobCount And blFoundJob = False
    strJobName = oServer.JobServer.Jobs.Item(lngJobIndex).Name
    Me.lstJobs.AddItem strJobName, 0
    If InStr(1, strJobName, "LEXData", vbTextCompare) > 0 Then
        If InStr(1, strJobName, strServerName, vbTextCompare) > 0 Then
            blFoundJob = True
        End If
    End If
    lngJobIndex = lngJobIndex + 1
Wend

' Execute the job
If blFoundJob Then
    Set oServer1 = New SQLDMO.SQLServer
    With oServer1
        .LoginSecure = True
        .Connect strServerName
    End With
    Set oJob = oServer1.JobServer.Jobs(strJobName)
    oJob.Start    ' start the replication job
End If
We are now planning a move to a SQL Server 2005 backend and Express edition on the clients and the app would remain as Access 2002. Any pointers to documents that will be useful would be appreciated but I have a couple of specific questions as well.
1) As I understand it we can load Express side by side with MSDE so we will probably go down that path and load the data from MSDE into an Express version of the database. Once that's done we will create a DSN pointing to the new database and name it the same as the DSN which is currently used for the MSDE database. If we do this will our Access app simply work as before (ignoring replication)?
2) When we have our app happily running against a DSN that points to an Express database, would we expect our existing DMO code (as above) to work, e.g. find a particular job in a list of jobs and execute it? I can see that the replication job itself may have to be rejigged for SQL Server 2005.
TIA - Peter
Hello,
I have the following scenario. An application using SQL Server 2000 SP3 on the server side and clients using MSDE 2000A (SP3a) on the client machine. Since the client is offline quite often merge replication is used to keep the clients in sync.
Now we try to upgrade to SQL Server 2005 SP1. The publisher and distributor upgrade (on the same box) worked fine and all clients could still synchronize. Fine :-)
Now we try to upgrade the clients to SQL Express SP1. Now the problems start :-(
1) After the upgrade the entry within the Synchronization Manager is gone (we can overcome this by using sp_MSregistersubscription or by manually disabling and re-enabling Synchronization Manager in the subscription properties).
2) The initial synchronization works fine (it takes a long time, but it works). However, if I apply changes to the subscriber or force a reinitialize, all following synchronizations fail with the following messages:
Error messages:
The merge process could not clean up the subscription to 'tstvmw23':'TestBase:'TestBase'. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200965)
Get help: http://help/MSSQL_REPL-2147200965
New request is not allowed to start because it should come with valid transaction descriptor. (Source: MSSQLServer, Error number: 3989)
Get help: http://help/3989
Remark: the test system I use is a nearly empty db (only the full schema and a few lookup tables), about 15MB. The same error occurs if I create a new database and subscribe to the same publication.
Please help, otherwise I'm forced to go back to MSDE.
Thanks in advance,
Thomas Hotz
I went through the documentation but I was not clear on the following. Here is the scenario:
A central server running SQL Server 2000.
3 remote locations, each with 4-5 computers connected on a LAN, and MSDE 2000 running on one of the machines at each location.
My questions are:
1. Can the remote locations update data locally and send the changes (say, every hour) to the central server? If yes, then how?
2. In the same way, can they receive updates from the central server for whatever was updated on the central server or at the other remote locations? If yes, then how?
Any help will be highly appreciated.
Consider the following scenario:
I got 2 webservers (2003) using NLB.
I want some kind of common data store (SQL Server).
I need this data store to be fault tolerant.
I don't have any more machines.
I don't have an enterprise licence for windows.
How do I solve this???
I have a company with 5 offices connected with a dedicated line (500+ employees).
They have five databases, one in each office.
Now they want a new application which will replace the old (VB6 on MySQL will now be VB.NET 2.0 with MS SQL Server 2005).
They want all the data available in all the offices. We will still have 5 databases in 5 offices, but all with the same data.
Sometimes the dedicated line is lost.
Now I have a few questions:
Will the databases still be running while the line is down?
Will the databases still be running if the replication is of type merge (instead of continuous)?
Is it good practice to use unique identifiers versus Numeric keys?
Is it good practice when the application uses the "master" database to save global things like "Suppliers" and the local database to add and change Orders?
Do you have some recommended reads?
Thx for your time!
Henri
~~~~
There's no place like 127.0.0.1
For High Protection mode (2 servers, no witness, FULL safety), in the event of a catastrophic failure of the principal, can a remote (but well connected) mirror be forced to assume the principal role? The MS manual "Database Mirroring in SQL Server 2005" on p2 says the failover type for HP mode is manual only (as opposed to forced). Elsewhere it seems to suggest otherwise, but there is no clear and unambiguous discussion describing the exact procedure.
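For reference, the statement usually cited for taking over on the mirror when the principal is gone is forced service (with possible data loss); a minimal sketch, assuming the mirrored database is called SalesDb (a placeholder) and the command is run on the mirror while the principal is unreachable. Whether forced service is permitted in this exact operating mode is worth confirming against the documentation:

-- run on the mirror server
ALTER DATABASE SalesDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

-- alternative sometimes used instead: break the partnership and recover the mirror copy
-- ALTER DATABASE SalesDb SET PARTNER OFF
-- RESTORE DATABASE SalesDb WITH RECOVERY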
Can anyone clarify this matter? Thanks,
Bud
I currently have a workflow-like application which uses MSMQ to pass messages between steps transactionally. I'd like to take advantage of something like SSB to make the app scale. Here's how it works today:
Start Workflow WebService puts message on queue.
Windows Service has threaded listeners on each queue for the workflow.
The listeners do a transactional Receive off the queue. By doing this in a transaction, we can enforce that only one listener will pick up and process the message. Once they have a message they call a .Net assembly to perform some work. Once the work is performed successfully, the listeners put the work context on the next queue using the same transaction as the receive and commit the transaction, effectively removing the message from the receive queue permanently. If a fatal error occurred during processing, the work context is moved to an error queue transactionally. If the process blew up for other reasons, the transaction is rolled back and the message reappears on the original queue so that another instance of the service can pick it up. Currently all this is running in one instance of the Windows service and cannot span more than one host machine, since MSMQ did not support transactional reads off public queues without using DTC.
I saw the external activator sample and it looks great, but I have a few questions. First, I'd have to modify it to call assemblies instead of external processes. No biggie there. One other requirement is that one and only one receiver should process the message, and that when the next message is sent to the next queue, it's done within the same transaction as the receive. I hope that this can be done without any deadlocks. I also saw mentioned somewhere that this messaging could take advantage of the local SQL Express instances so that even if the main SQL Server instance is down, the messages can still go out. The last requirement is that our message is a blob (a serialized .Net object), so how do we define that kind of message?
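On the blob question: a hedged sketch of how this is commonly modelled in Service Broker is a message type with VALIDATION = NONE carrying a varbinary(max) body, with a transactional RECEIVE giving the one-and-only-one-processor behaviour. All object names below are made up for illustration:

-- one-time setup
CREATE MESSAGE TYPE [//Workflow/WorkItem] VALIDATION = NONE
CREATE CONTRACT [//Workflow/WorkContract] ([//Workflow/WorkItem] SENT BY INITIATOR)
CREATE QUEUE WorkStepQueue
CREATE SERVICE [//Workflow/WorkStepService] ON QUEUE WorkStepQueue ([//Workflow/WorkContract])

-- sending: the serialized .NET object travels as a varbinary(max) payload
DECLARE @handle uniqueidentifier, @body varbinary(max)
SET @body = 0x0102030405   -- placeholder for the serialized object bytes

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Workflow/WorkStepService]
    TO SERVICE '//Workflow/WorkStepService'
    ON CONTRACT [//Workflow/WorkContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle MESSAGE TYPE [//Workflow/WorkItem] (@body);

-- receiving: one message per transaction; a rollback puts it back on the queue
BEGIN TRANSACTION;
DECLARE @msg varbinary(max), @rcvHandle uniqueidentifier;
RECEIVE TOP (1) @msg = message_body, @rcvHandle = conversation_handle FROM WorkStepQueue;
-- ... do the work here, then SEND the context to the next queue within the same transaction ...
COMMIT TRANSACTION;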
Thanks
Costas
Table1: WriterID, WriterName, WriterSurname
Table2: BookID, WriterID, BookName, Category
Table3: CategoryID, CategoryName
There will be one form. Which way should I use to enter data from one form into multiple tables with the same WriterID?
1.) Should i use SqlCommand class and a Stored Procedure?
( Like that: http://www.aspnet101.com/aspnet101/tutorials.aspx?id=13 )
2.) Is there another way without a stored procedure? I can't imagine how to insert the same form into multiple tables. In the real scenario there are lots of tables, and all of them contain a column that holds the same value, such as WriterID.
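For option 1, a minimal sketch of a stored procedure that inserts into two of the tables and reuses the generated key (this assumes WriterID in Table1 is an IDENTITY column; the parameter types are guesses):

CREATE PROCEDURE InsertWriterWithBook
    @WriterName    nvarchar(50),
    @WriterSurname nvarchar(50),
    @BookName      nvarchar(100),
    @Category      nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @WriterID int

    BEGIN TRANSACTION
        -- insert the writer and capture the new WriterID
        INSERT INTO Table1 (WriterName, WriterSurname)
        VALUES (@WriterName, @WriterSurname)
        SET @WriterID = SCOPE_IDENTITY()

        -- reuse the same WriterID for the related row
        INSERT INTO Table2 (WriterID, BookName, Category)
        VALUES (@WriterID, @BookName, @Category)
    COMMIT TRANSACTION

    SELECT @WriterID AS NewWriterID
END

From the page this can be called with a single SqlCommand (CommandType.StoredProcedure); without a stored procedure the same thing can be done by sending both INSERT statements in one command text inside a transaction.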
Hi, I want to know a solution for my synchronization scenario.
I have several client databases which are SQL Server 2005 Express, and a master database which is SQL Server 2000 containing all the individual client databases. All the individual client databases are kept separately at the master location. I need to synchronize each client database with its copy in the master database (something like merge replication). Both the client copy and the master copy could be publishers and subscribers.
Now the problem: because of security and firewall issues, only the client should have the ability to schedule and initiate the synchronization process with the master copy. Unfortunately SQL Server 2005 Express has only a subscriber agent and not a publisher agent.
Any help on how to achieve this would be appreciated . Thank You
Hello All ..
This is the scenario I'm facing:
-- I'm a beginner, so bear with the way I'm putting it... sorry!
* I have a database with tables
- company: CompanyID, CompanyName
- Person: PersonID, PersonName, CompanyID (fk)
- Supplier: SupplierID, SupplierCode, SupplierName, CompanyID (fk)
In the associated stored procedures (insertCompany, insertPerson, insertSupplier), I want to check the existence of the SupplierID, which should be the 'output'.
There could be different ways to do it like:
1) - In the supplier stored procedure I can read the ID (SELECT) and :
if it exists (I save the existing SupplierID - to 'return' it at the end).
if it doesn't (I insert the Company, the Person and save the new SupplierID - to 'return' it at the end)
------------------------------------
2) - Another way is to use multiple stored procedures:
. one SP that checks,
. another SP that does the inserts,
. and a main SP that calls the check SP, gets the values, and branches on the results according to conditions (if - else).
3) it could be done (maybe) using Functions in SQL SERVER...
There should be some reasons to go for one of these methods over another!
I want to know the best practice for this scenario in terms of performance and other issues - and consider a similar but much bigger scenario!
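A minimal sketch of option 1 as a single procedure with an OUTPUT parameter (it assumes the ID columns are IDENTITY, that a supplier is looked up by SupplierCode, and that the parameter types are guesses):

CREATE PROCEDURE insertSupplier
    @SupplierCode nvarchar(20),
    @SupplierName nvarchar(100),
    @CompanyName  nvarchar(100),
    @PersonName   nvarchar(100),
    @SupplierID   int OUTPUT
AS
BEGIN
    SET NOCOUNT ON

    -- does the supplier already exist?
    SELECT @SupplierID = SupplierID
    FROM Supplier
    WHERE SupplierCode = @SupplierCode

    IF @SupplierID IS NULL
    BEGIN
        DECLARE @CompanyID int
        BEGIN TRANSACTION
            INSERT INTO Company (CompanyName) VALUES (@CompanyName)
            SET @CompanyID = SCOPE_IDENTITY()

            INSERT INTO Person (PersonName, CompanyID) VALUES (@PersonName, @CompanyID)

            INSERT INTO Supplier (SupplierCode, SupplierName, CompanyID)
            VALUES (@SupplierCode, @SupplierName, @CompanyID)
            SET @SupplierID = SCOPE_IDENTITY()
        COMMIT TRANSACTION
    END
END

Whether this beats option 2 (separate check/insert/controller procedures) is mostly a maintainability question; a single round trip like this is usually at least as fast.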
I'll appreciate your help ...
Thanks in Advance . ! .
How can I use "NewReceivedTime" as if it were a variable?
Creating a new column with a CASE statement in SQL means the new column name cannot be referenced elsewhere in the same query; instead you get an error such as "Invalid column name: NewReceivedTime".
(case
when <condition> then <value>
else <value>
end) as NewReceivedTime
I'm asking this because I would want to compare the "NewReceivedTime" I've created against another time column, something like
NewReceivedTime = LogDateTime.
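One common workaround is to compute the CASE expression in a derived table (or repeat the expression), so the alias exists by the time the outer query filters on it. A sketch with made-up table and column names (MyLogTable, ReceivedTime):

SELECT t.*
FROM (
    SELECT LogDateTime,
           CASE WHEN ReceivedTime IS NULL THEN LogDateTime
                ELSE ReceivedTime
           END AS NewReceivedTime
    FROM MyLogTable
) AS t
WHERE t.NewReceivedTime = t.LogDateTime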
Thanks.
Hi,
I have a text file in this format:
currency,exchangerate(INR),date
dollar,45,20/04/2006
dollar,46,22/04/2006
britishpound,65,20/04/06
dirham,12,20/04/06..etc..
Now, using this file as the source, I want 2 tables to be created and filled with the appropriate data:
CurrencyMaster..Currencyid(PK),Currencyname
CurrencyDailyRate..ID,Currencyid(FK),rate,date
How can I do this using SSIS?
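One common layout is a Data Flow that loads the flat file into a staging table, followed by an Execute SQL Task that populates the two targets. A sketch of that SQL, assuming a staging table CurrencyStage(currency, exchangerate, ratedate) (the staging table is an assumption, the target names follow the post, and Currencyid is assumed to be an IDENTITY):

-- 1) add any currencies not yet in the master table
INSERT INTO CurrencyMaster (Currencyname)
SELECT DISTINCT s.currency
FROM CurrencyStage s
WHERE NOT EXISTS (SELECT 1 FROM CurrencyMaster m WHERE m.Currencyname = s.currency)

-- 2) load the daily rates, resolving the foreign key by name
INSERT INTO CurrencyDailyRate (Currencyid, rate, [date])
SELECT m.Currencyid, s.exchangerate, s.ratedate
FROM CurrencyStage s
INNER JOIN CurrencyMaster m ON m.Currencyname = s.currency

The same lookup could also be done inside the data flow with a Lookup transformation instead of the staging step.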
thks
What I'd like to do is use SQL Express as a back-end for an existing Access 2003 application. The application is currently a single-file Access solution. I'm just investigating options to separate the front-end from the back-end so that the users can access a single data store from multiple PCs.
We're only talking about maybe 5 users total, and really no more than 2 accessing it simultaneously at any given time (although the same setup exists in several offices). Right now they are just opening the Access db from a fileshare. This has resulted in corruption on a few occasions, which is part of the reason for wanting to replace the current solution with something that will be a little more robust.
I'm wondering if there is a way to deploy a SQL Express db on a fileshare so that it can be connected to by the Access front-end. While we can install things on their desktops, we can't install anything on the file server, we can just put files there. Is there any way that I could make that solution work, or should I just stick with separating the Access front-end and back-end?
Thanks,
Kris
Hi,
I am faced with a scenario where I have to predict the time period that will be required to solve a particular problem, depending on various factors such as Problem Type, Severity Level, Resource Availability, etc.
I would like to know which algorithm is best suited for the
above scenario and the reasons for the same.
Hi,
I'm about to implement DB mirroring on my production server, and I'm wondering about a few what-if scenarios:
1. If my primary server suddenly dies, how can I bring the mirror DB into operation without losing data?
2. When the primary server comes back to life, how do I make it the principal again for the DB mirroring (do I need to set up all the DB mirroring procedures again)?
3. How can I be sure that the DB is indeed being mirrored to the DRP server?
4. What about alerting - can I be notified via email if the DB mirroring isn't in sync, or if the DRP server is down, and about other issues that can occur?
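A few commands that relate to questions 1-3, as a sketch only (the database name MyDb is a placeholder, and the exact failover/failback sequence should be checked against Books Online for your operating mode):

-- 1. if the principal is lost, force the mirror into service (unsent transactions may be lost)
ALTER DATABASE MyDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

-- 2. when the old principal reconnects it becomes the mirror with the session suspended;
--    resume the session, let it catch up, then fail back manually from the current principal
ALTER DATABASE MyDb SET PARTNER RESUME
ALTER DATABASE MyDb SET PARTNER FAILOVER

-- 3. check the mirroring role and state on either partner
SELECT DB_NAME(database_id) AS db, mirroring_role_desc, mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL

For question 4, Database Mirroring Monitor and the mirroring state-change events can be hooked up to SQL Agent alerts and operators for email notification.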
Thx
Hi,
I have a product basket scenario where I have to recommend contracts to a customer based on the product and the quantity he/she buys. Product quantity is an important factor that guides the user toward the purchase of a particular contract.
I have the following tables with me:
a customer-product transaction table and a customer-contract transaction table, but there is no direct relationship between contract and product in the database. The only way the two can be linked is through the customer.
If I create a mining structure with the customer-product information as one nested table and the customer-contract information as another nested table, with the customer being the link between the two, the model shows some irrelevant contract recommendations.
What is the solution for the above problem? Is it because there is no direct relationship between the product and the contract?
How can I overcome this problem?
Hello All
We have Asynchronous DB Mirroring established for our Production database
which runs on SQL Server 2005.
For the DR Test we plan to do the following during the period of low activity :
1. Pause the mirroring on the Principal Server.
2. Break the mirror on the Mirror Server
3. Take an offline backup on the Mirror.
4. Bring the Mirror Database up.
5. Run the DR Tests on the Mirrored Database
6. Restore the Offline Backup taken in step 3.
7. Reestablish the mirror from the Principal to Mirror.
8. Resume Mirroring on the Principal Server.
9. Verify Mirroring is happening on the Mirror Server
Can you please let me know if this plan is feasible and whether any modifications to the plan are required? Any other suggestions/input are appreciated.
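For reference, a sketch of the commands those steps usually map to (MyDb is a placeholder; note that once the session is broken with SET PARTNER OFF, step 7 means re-establishing mirroring from a fresh backup/restore rather than a simple resume):

-- step 1: pause mirroring (run on the principal)
ALTER DATABASE MyDb SET PARTNER SUSPEND

-- steps 2 and 4: break the session and bring the mirror copy online (run on the mirror)
ALTER DATABASE MyDb SET PARTNER OFF
RESTORE DATABASE MyDb WITH RECOVERY

-- step 8: once the partnership has been re-created, resume a suspended session
ALTER DATABASE MyDb SET PARTNER RESUME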
TIA
Best,
Jay
We have a pretty vast T-SQL statement that queries our production db and inserts the results into a reporting table in the same db. The problem we're having is that during this process several kinds of locks are taken (key, table, schema, etc.). I thought that by moving this entire process to a separate (reporting) database I could bypass the locking problem; however, the same problem exists in the reporting database. Is there a way to disable all locking? (This is just a reporting database, so I'm not worried about data integrity.) Any suggestions to disable all locking? By the way, it's a SQL 2000 server.
example:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
//Select cnttrtydayspast
insert into tampa_pmr ( hprop, sCategory, sSubCategory1, sSubCategory2, sSubCategory3, dtMonth, sMonth, sYear, iValue, iSort1, iSort2, iSort3, iSort4, iLevel)
(select d.hprop, d.sCategory, d.sSubCategory1, d.sSubCategory2, d.sSubCategory3, d.dtMonth, d.sMonth, d.sYear, d.iValue, d.iSort1, d.iSort2, d.iSort3, d.iSort4, d.iLevel
from
(select
p.hmy hProp,
'Avg Days For Outstanding Receivable' sCategory,
'Number of Households' SSUBCATEGORY1,
'z' SSUBCATEGORY2,
'z' SSUBCATEGORY3,
'#begMonth#' DTMONTH,
month('#begMonth#') SMONTH,
year('#begMonth#') SYEAR,
isnull(SUM(isnull(cnt.cntHmy,0)), 0) IVALUE,
9 ISORT1,
80 ISORT2,
0 ISORT3,
0 ISORT4,
2 ILEVEL
from property p
left join
(select amt.property_hmy phmy, count (distinct amt.thmy) cntHmy from (
select
p.hmy property_hmy
,t.hmyperson thmy
,tr.stotalamount - (sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))-trr.upostdate))when 1 then d.samount when 0 then d.samount else 0 end))
remain_amount
from property p
inner join trans tr on(p.hmy = tr.hprop and tr.hAccrualAcct is not null)
inner join tenant t on(t.hmyperson = tr.hperson and tr.upostdate < DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0)))
left outer join detail d on (d.hchkorchg = tr.hmy )
left outer join trans trr on (trr.hmy = d.hinvorrec )
where
tr.itype = 7
and datediff("dd",tr.sdateoccurred,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))) <=30
#Conditions#
group by
p.hmy
,t.hmyperson
,tr.hmy-700000000
,tr.stotalamount
having tr.stotalamount <> sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))-trr.upostdate)) when 1 then d.samount when 0 then d.samount else 0 end )
AND SIGN (tr.stotalamount - (sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))-trr.upostdate))when 1 then d.samount when 0 then d.samount else 0 end)) - ISNULL(( SELECT /* available prepays */
sum(isnull(d.samount,0)) tran_amountpaid
from property p
inner join tenant t1 on t1.hproperty = p.hmy
inner join unit u on u.hmy = t1.hUnit
inner join trans tr on tr.hperson = t1.hmyperson and tr.hmy between 600000000 and 699999999
and tr.upostDate <= '#begmonth#'
inner join detail d on d.hinvorrec = tr.hmy
inner join acct a on d.hacct = a.hmy and d.hAcct IS NOT NULL
left outer join trans trc on trc.hmy = d.hchkorchg
Where 1=1
and exists (select detail.hmy from detail
left outer join trans trc on trc.hmy = hchkorchg
where hinvorrec = tr.hmy
and (hchkorchg is null or trc.upostdate > '#begmonth#'))
and (d.hchkorchg is null or trc.upostdate > '#begmonth#')
AND T1.HMYPERSON = T.HMYPERSON),0)) = 1
) amt
where amt.thmy not in
(
select
isnull(t.hmyperson,0) thmy
from property p
inner join trans tr on(p.hmy = tr.hprop and tr.hAccrualAcct is not null)
inner join tenant t on(t.hmyperson = tr.hperson and tr.upostdate < DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0)))
left outer join detail d on (d.hchkorchg = tr.hmy )
left outer join trans trr on (trr.hmy = d.hinvorrec )
where
tr.itype = 7
and datediff("dd",tr.sdateoccurred,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))) > 30
#Conditions#
group by
p.hmy
,t.hmyperson
,tr.hmy-700000000
,tr.stotalamount
having tr.stotalamount <> sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))-trr.upostdate)) when 1 then d.samount when 0 then d.samount else 0 end )
AND SIGN (tr.stotalamount - (sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))-trr.upostdate))when 1 then d.samount when 0 then d.samount else 0 end)) - ISNULL(( SELECT /* available prepays */
sum(isnull(d.samount,0)) tran_amountpaid
from property p
inner join tenant t1 on t1.hproperty = p.hmy
inner join unit u on u.hmy = t1.hUnit
inner join trans tr on tr.hperson = t1.hmyperson and tr.hmy between 600000000 and 699999999
and tr.upostDate <= '#begmonth#'
inner join detail d on d.hinvorrec = tr.hmy
inner join acct a on d.hacct = a.hmy and d.hAcct IS NOT NULL
left outer join trans trc on trc.hmy = d.hchkorchg
Where 1=1
and exists (select detail.hmy from detail
left outer join trans trc on trc.hmy = hchkorchg
where hinvorrec = tr.hmy
and (hchkorchg is null or trc.upostdate > '#begmonth#'))
and (d.hchkorchg is null or trc.upostdate > '#begmonth#')
AND T1.HMYPERSON = T.HMYPERSON),0)) = 1
)
and amt.remain_amount > 0
group by amt.property_hmy
)cnt on cnt.pHmy = p.hmy
where 1=1
and p.itype = 3
#Conditions#
Group By p.hmy
) d
where 1=1)
//End cnttrtydayspast
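A hedged alternative to the session-level READ UNCOMMITTED setting shown above is to put NOLOCK hints on the individual tables of the reporting query, which gives the same effect (dirty reads, no shared locks on the data) for just those tables. For example, the joins at the top of the query could be written as:

SELECT p.hmy
FROM property p WITH (NOLOCK)
INNER JOIN trans tr WITH (NOLOCK) ON p.hmy = tr.hprop
WHERE tr.itype = 7

Neither approach removes the locks taken by the INSERT into the reporting table itself; only the reads become lock-free.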
Hi all,
We are about to start a project that will involve three websites, each of which will have its own public domain name and two of which will act as e-commerce applications. Both e-commerce applications will only contain relatively small amounts of data (few thousand products, up to 15000 customer records with basic order history) and will be synchronised with a central data server at another site that will be responsible for handling payments, order processing, invoicing, etc.
The sites will take approx 500 hits every 24 hours and we will need to manage the DNS on the hosting server.
My question is, would we be ok to host these sites on a Web Edition of Windows 2003 Server with SQL Express, or is there any clear reason that we should go with the Standard editions of Windows Server and/or SQL?
Regards,
Stephen.
Hello,
I'm not sure if it's my setup that is wrong, but I can't seem to get my text datatype in my database to store more than 900 characters.
I'm trying to set up a news database for my website, which will populate the information into a datagrid. To test, I manually added a news item to the database through the Visual Studio 2003 GUI. I immediately noticed a problem, as I was getting an error after entering a long news item:
"The value you entered is not consistent with the data type or length of the column, or over grid buffer limit."
I couldn't find anything to set the buffer limit, and the datatype is "text", filled with simple text in the column. As a further test, I simply entered 1234567890123... up to 900 characters and still received the error.
I would appreciate someone leading me in the right direction on this one.
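One way to rule out the column itself is to insert a long value from a query window rather than through the grid editor, which is what the "grid buffer limit" in the error refers to. A small sketch with a hypothetical News table:

-- hypothetical table for the test
CREATE TABLE News (NewsID int IDENTITY(1,1), NewsBody text)

-- insert 2000 characters directly, bypassing the grid editor
INSERT INTO News (NewsBody) VALUES (REPLICATE('x', 2000))

-- confirm how much was actually stored
SELECT NewsID, DATALENGTH(NewsBody) AS StoredBytes FROM News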
Thanks a lot.
Server: Msg 9004, Level 21, State 1, Line 1
The log for database ' ' is corrupt.
Connection Broken
How do I continue and perform a successful restore if one of the logs is corrupt? Do I run DBCC?
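If the corruption is in one of the transaction log backups, one option that is usually suggested is to recover only up to the last intact log backup (losing the transactions after it). A sketch with placeholder file and database names:

-- restore the full backup and each intact log backup without recovering
RESTORE DATABASE MyDb FROM DISK = 'C:\backups\MyDb_full.bak' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'C:\backups\MyDb_log1.trn' WITH NORECOVERY

-- stop before the corrupt log backup and bring the database online
RESTORE DATABASE MyDb WITH RECOVERY

DBCC CHECKDB can report the damage, but repair options such as REPAIR_ALLOW_DATA_LOSS can discard data, so restoring from the last good backup is generally preferable.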
Hello !
Please bear with me, I am new to SQL Server.
1. Is the following scenario possible?
I want to implement an application with SQL Server Express installed on a PC acting as the server. Other PCs - on the local network or coming in over the internet - could connect to this SSE database (my intention is to open a port on the server so it can be reached over the internet).
If yes, please give me an example of an adequate connection string to use in the application.
If not, how much of this scenario could be implemented? (And also, if you can, a sample connection string.)
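For reference, a typical ADO.NET connection string for reaching a remote SQL Server Express instance over TCP/IP looks like the following (the server name, instance name, port, database and login are all placeholders; TCP/IP must be enabled on the instance and, for a named instance, the SQL Browser service must be running or a fixed port used):

Server=myserver.mydomain.com\SQLEXPRESS;Database=MyAppDb;User Id=appuser;Password=appPassword;

or, with a fixed TCP port opened in the firewall:

Server=myserver.mydomain.com,1433;Database=MyAppDb;User Id=appuser;Password=appPassword;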
2. Do you know what the limitations are of using SQL Server Developer Edition (~$50) in this scenario?
Thank you very much !