Transact SQL :: How To Deduct Retention Amount From Arrears Payment
Aug 27, 2015
We have a retention policy under which retention is paid on completion of each year. The policy has now changed from yearly to monthly payment, with effect from April-15.
If I calculate the pay, the system will generate the arrear payment for the employee from April onward, but I have already paid the retention amount for two months (April and May). I need to deduct that amount, otherwise the employee will be paid twice.
I want to frame a range of data based on a particular group of columns.
IF OBJECT_ID('tempdb..#ResellerRange') IS NOT NULL
    DROP TABLE #ResellerRange

CREATE TABLE #ResellerRange
(
    ResID    varchar(10),
    amt      decimal(18,2),
    serialno int
)

INSERT INTO #ResellerRange (ResID, amt, serialno)
VALUES ('Raja', 10, 67), ('raja', 10, 68), ('raja', 10, 89), ('Prabu', 20, 56)
I want the output below:

resid   amt   min   max
raja    10    67    68
raja    10    89    89
Prabu   20    56    56
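A hedged sketch of one way to produce those ranges on SQL Server 2005 or later (assuming the intent is to collapse runs of consecutive serialno values per ResID and amt) is the classic gaps-and-islands grouping with ROW_NUMBER:

SELECT ResID, amt, MIN(serialno) AS [min], MAX(serialno) AS [max]
FROM (SELECT ResID, amt, serialno,
             -- consecutive serial numbers produce the same grp value
             serialno - ROW_NUMBER() OVER (PARTITION BY ResID, amt ORDER BY serialno) AS grp
      FROM #ResellerRange) AS x
GROUP BY ResID, amt, grp
ORDER BY ResID, MIN(serialno)

Here 67 and 68 fall into the same group and collapse into one row, while 89 starts a new one.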
I have written a query combining multiple tables to show the sum of TotalReceivedAmount by MRN. It also has a payment mode, which is either cash or credit card. At the moment the query shows the sum per MRN combining all payments made by credit card and cash. How am I supposed to show the amounts paid by credit card and by cash separately?
Here is the query:
SELECT DISTINCT
    dbo.CA_Payment.MRN,
    dbo.CA_Patient.FirstName,
    dbo.CA_PaymentModeMaster.Description AS PaymentMode,
    CASE (dbo.CA_Payment.PaymentModeID)
        WHEN 2 THEN dbo.CA_Payment.CardNumber
    END AS CreditCardNumber,
    CASE (CA_Registration.PatientTypeID)
        WHEN 1 THEN dbo.CA_PatientTypeMaster.Description
        WHEN 2 THEN dbo.CA_DebtorMaster.DebtorName
    END AS Debtor,
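One hedged way to split the total by payment mode (a sketch only; the TotalReceivedAmount column and the PaymentModeID values 1 = cash, 2 = credit card are assumptions based on the description) is conditional aggregation:

SELECT
    p.MRN,
    SUM(CASE WHEN p.PaymentModeID = 1 THEN p.TotalReceivedAmount ELSE 0 END) AS CashAmount,       -- assumed cash mode
    SUM(CASE WHEN p.PaymentModeID = 2 THEN p.TotalReceivedAmount ELSE 0 END) AS CreditCardAmount, -- assumed card mode
    SUM(p.TotalReceivedAmount) AS TotalAmount
FROM dbo.CA_Payment AS p
GROUP BY p.MRN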
Currently we have some shipping software that has a MySQL database locally. There are filters in the program, and when an item is shipped from our warehouse it goes into a "Shipped" filter. We have a SKU and also a quantity for the product.
We also have an inventory program with its own MySQL database, which uses the same SKUs as the shipping software database. What we hope to do is that when a SKU is shipped in our shipping program (i.e. it goes into the "Shipped" folder), the quantity is deducted from the matching SKU in the inventory program's database.
Note that the two databases are independent of each other, but we would like them to be, in effect, linked to each other.
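If, and only if, the two MySQL schemas can be reached from the same MySQL server, a hedged sketch of the deduction could look like the statement below (shipping_db.shipments, inventory_db.stock and the sku/qty/status columns are all hypothetical; for genuinely separate servers the same logic would need FEDERATED tables or application code):

-- Deduct shipped quantities from inventory, matching on SKU
UPDATE inventory_db.stock AS inv
JOIN shipping_db.shipments AS sh
    ON sh.sku = inv.sku
SET inv.qty = inv.qty - sh.qty
WHERE sh.status = 'Shipped';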
I need to create an output from a T-SQL query that picks a numeric variable and uses the PRINT function to output it with leading zeroes if it is less than three characters long when converted to a string. For example, if the variable is 12 the output should be 012, and if the variable is 3 the output should be 003.
Presently the syntax I am using is PRINT STR(@CLUSTER, 3), but if @CLUSTER (which is numeric) is less than three digits I get spaces in front.
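One common workaround (a minimal sketch; @CLUSTER is assumed to hold an integer value of at most three digits) is to cast the value to a string and left-pad it with zeroes:

DECLARE @CLUSTER int
SET @CLUSTER = 12

-- Prepend zeroes, then keep only the rightmost three characters
PRINT RIGHT('000' + CAST(@CLUSTER AS varchar(10)), 3)   -- prints 012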
I have two tables, CostTable (Id, ResourceId, Amount, Date) and ResourceTable (ResourceId, Name), which produce the output shown below.
I want to show a 0 amount for the rest of the names in the case of September. For example, if the remaining resources do not appear in the cost table, they should still appear with 0 in the Amount column.
My Desired output
My current query
SELECT
    RG.Id AS Id,
    RG.Name AS Name,
    ISNULL(SUM(AC.Amount), 0) AS Amount,
    RIGHT(CONVERT(varchar(10), AC.[Date], 105), 7) AS [YearMonth]
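One way to keep every resource in the result (a sketch only; the query above is only a fragment, so the FROM/JOIN and the month filter below are assumptions built from the table names in the post) is to drive the query from ResourceTable with a LEFT JOIN and move the month condition into the join:

DECLARE @YearMonth char(7)
SET @YearMonth = '09-2015'   -- hypothetical target month, in the mm-yyyy form produced by style 105 above

SELECT
    RG.ResourceId,
    RG.Name,
    ISNULL(SUM(AC.Amount), 0) AS Amount,   -- resources with no cost rows get 0
    @YearMonth AS [YearMonth]
FROM ResourceTable AS RG
LEFT JOIN CostTable AS AC
    ON AC.ResourceId = RG.ResourceId
   AND RIGHT(CONVERT(varchar(10), AC.[Date], 105), 7) = @YearMonth
GROUP BY RG.ResourceId, RG.Name

Putting the month test in the WHERE clause instead would turn the LEFT JOIN back into an inner join and drop the zero rows.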
This will get me what I need based on an entered client ID, but what if I want it to return the last payment date and amount for all loans? I tried removing the two WHERE clauses, but it only returned the last payment entered, not the last payment for each loan.

SELECT
    dbo.tblLoan.Client_ID,
    MAX(dbo.tblPayments.PaymentDate) AS [Last Pay Date],
    SUM(dbo.tblPayments.AmountPaid) AS [Last Pay Amt]
FROM dbo.tblLoan
INNER JOIN dbo.tblPayments ON dbo.tblLoan.Loan_ID = dbo.tblPayments.Loan_ID
WHERE (dbo.tblLoan.Client_ID = @Client_ID)
  AND dbo.tblPayments.PaymentDate =
      (SELECT TOP 1 p.PaymentDate
       FROM dbo.tblPayments p
       INNER JOIN dbo.tblLoan l ON l.Loan_ID = p.Loan_ID
       WHERE l.Client_ID = @Client_ID
       ORDER BY p.PaymentDate DESC)
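For all loans regardless of client, a hedged sketch (same tables as above) is to correlate the latest-payment lookup per loan rather than per client, so each loan gets its own last date:

SELECT
    l.Client_ID,
    l.Loan_ID,
    MAX(p.PaymentDate) AS [Last Pay Date],
    SUM(p.AmountPaid)  AS [Last Pay Amt]
FROM dbo.tblLoan AS l
INNER JOIN dbo.tblPayments AS p
    ON p.Loan_ID = l.Loan_ID
WHERE p.PaymentDate = (SELECT MAX(p2.PaymentDate)      -- latest payment date for this loan
                       FROM dbo.tblPayments AS p2
                       WHERE p2.Loan_ID = l.Loan_ID)
GROUP BY l.Client_ID, l.Loan_ID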
I need to get the client ID, last payment date, and last payment amount. I have tried using MAX, but this does not work; I get all payments and all dates. Can anyone see what I'm doing wrong? I'm summing the payment amount in case they made two payments on the same day (not likely, but...).

SELECT
    dbo.tblLoan.Client_ID,
    MAX(dbo.tblPayments.PaymentDate) AS [Last Pay Date],
    SUM(dbo.tblPayments.AmountPaid) AS [Last Pay Amt]
FROM dbo.tblLoan
INNER JOIN dbo.tblPayments ON dbo.tblLoan.Loan_ID = dbo.tblPayments.Loan_ID
GROUP BY dbo.tblLoan.Client_ID
HAVING (dbo.tblLoan.Client_ID = @Client_ID)
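A hedged sketch of one fix (same table names as the query above): derive the latest payment date per client first, then join back and sum only the payments made on that date:

SELECT
    l.Client_ID,
    lastp.LastPayDate AS [Last Pay Date],
    SUM(p.AmountPaid) AS [Last Pay Amt]
FROM dbo.tblLoan AS l
INNER JOIN dbo.tblPayments AS p
    ON p.Loan_ID = l.Loan_ID
INNER JOIN (SELECT l2.Client_ID, MAX(p2.PaymentDate) AS LastPayDate
            FROM dbo.tblLoan AS l2
            INNER JOIN dbo.tblPayments AS p2 ON p2.Loan_ID = l2.Loan_ID
            GROUP BY l2.Client_ID) AS lastp
    ON lastp.Client_ID = l.Client_ID
   AND lastp.LastPayDate = p.PaymentDate
WHERE l.Client_ID = @Client_ID
GROUP BY l.Client_ID, lastp.LastPayDate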
I'm creating a temporary table in a SQL Server 2005 stored procedure that contains the transaction amounts entered in periods <= the period the user enters. I can return that amount in my result set, but I also need to separate out, by account, the amounts entered just in the period = the period the user enters. There can be many entries or no entries in any period. I populate the temporary table this way:
SELECT
    t.gl7accountsid,
    a.accountnumber,
    a.description,
    a.category,
    t.POSTDATE,
    t.poststatus,
    t.TRANSACTIONTYPE,
    t.AMOUNT,
    CASE WHEN t.transactiontype = 2 THEN amount * (-1) ELSE amount END AS transamount,
    t.ENCUMBRANCESTATUS,
    t.gl7fiscalperiodsid
FROM UrsinusCollege.dbo.gl7accounts a
JOIN UrsinusCollege.dbo.gl7transactions t ON a.gl7accountsid = t.gl7accountsid
WHERE (t.gl7fiscalperiodsid >= 97 AND t.gl7fiscalperiodsid <= @FiscalPeriod_identifier)
  AND poststatus IN (2, 3)
  AND LEFT(a.accountnumber, 5) BETWEEN '2-110' AND '2-999'
  AND RIGHT(a.accountnumber, 4) > 7149
  AND RIGHT(a.accountnumber, 4) NOT IN ('7171', '7897')
ORDER BY a.accountnumber
Later I create a temporary table that contains budget information. I join these two temporary tables to produce my result set. But I don't know how to get the information for just one period. For example, if the user enters 99 as the FiscalPeriod_identifier, I need a separate field that contains only those amounts (if any) that were entered for each account in period 99.
Can anyone help? It may be that I am not seeing the forest for the trees, but I can't figure it out.
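A minimal sketch of one way to carve out the single period (column names follow the query above; the account-level grouping is an assumption): aggregate once and use CASE inside SUM, so one column carries the cumulative amount and another only the amounts posted in @FiscalPeriod_identifier:

SELECT
    a.accountnumber,
    SUM(CASE WHEN t.transactiontype = 2 THEN t.amount * -1 ELSE t.amount END) AS CumulativeAmount,
    SUM(CASE WHEN t.gl7fiscalperiodsid = @FiscalPeriod_identifier
             THEN CASE WHEN t.transactiontype = 2 THEN t.amount * -1 ELSE t.amount END
             ELSE 0 END) AS CurrentPeriodAmount      -- only the period the user entered
FROM UrsinusCollege.dbo.gl7accounts a
JOIN UrsinusCollege.dbo.gl7transactions t ON a.gl7accountsid = t.gl7accountsid
WHERE t.gl7fiscalperiodsid >= 97
  AND t.gl7fiscalperiodsid <= @FiscalPeriod_identifier
  AND t.poststatus IN (2, 3)
GROUP BY a.accountnumber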
I'm trying to learn how to handle a payment schedule for small loan payments. I have the following structure:
tblClient: ClientID
tblLoan: LoanID, ClientID
tblPayments: PayID, PayDate, PayAmt, LoanID
tblLoanPaymentSchedule: ScheduleID, LoanID, PayDueDate, PayDueAmt, ReminderPrintDate, ReminderSentDate, PaymentRecDate
I have a page that batches (holds) all the payments in a work table, and I can post them to the correct loan record, but I'm just not sure how to get a stored procedure to mark tblLoanPaymentSchedule with the PaymentRecDate for each payment posted. The schedule is pre-generated so that a payment-due reminder can be sent out. I appreciate any suggestions.
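As a hedged sketch only (it assumes each posted payment should stamp the earliest open schedule row for its loan, and the work table name #PostedPayments is hypothetical), the posting procedure could mark the schedule like this:

UPDATE s
SET s.PaymentRecDate = p.PayDate
FROM dbo.tblLoanPaymentSchedule AS s
INNER JOIN #PostedPayments AS p                         -- hypothetical work table of payments being posted
    ON p.LoanID = s.LoanID
WHERE s.PaymentRecDate IS NULL
  AND s.ScheduleID = (SELECT MIN(s2.ScheduleID)         -- earliest schedule row for the loan not yet paid
                      FROM dbo.tblLoanPaymentSchedule AS s2
                      WHERE s2.LoanID = s.LoanID
                        AND s2.PaymentRecDate IS NULL)

If a single batch can contain more than one payment for the same loan, the procedure would need to loop or rank the payments so each one stamps its own schedule row.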
I use the following SQL in a view to return the last payment date and amount made by clients. What I need is a way to return the payment date and amount for the payment prior to the last one. Any help is appreciated very much.

SELECT
    dbo.tblPaymentReceipts.Client_ID,
    dbo.tblPaymentReceipts.PaymentDate AS LastPayDate,
    SUM(dbo.tblPaymentReceipts.AmountPaid) AS LastPayAmt
FROM dbo.tblPaymentReceipts
INNER JOIN (SELECT Client_ID, MAX(PaymentDate) AS LastPayDate
            FROM dbo.tblPaymentReceipts AS tblPaymentReceipts_1
            GROUP BY Client_ID) AS A
    ON dbo.tblPaymentReceipts.Client_ID = A.Client_ID
   AND dbo.tblPaymentReceipts.PaymentDate = A.LastPayDate
GROUP BY dbo.tblPaymentReceipts.Client_ID, dbo.tblPaymentReceipts.PaymentDate
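On SQL Server 2005 or later, a hedged sketch (still summing per day, as the original does) is to rank each client's payment days with ROW_NUMBER and keep rank 2:

;WITH DailyTotals AS
(
    SELECT
        Client_ID,
        PaymentDate,
        SUM(AmountPaid) AS PayAmt,
        ROW_NUMBER() OVER (PARTITION BY Client_ID ORDER BY PaymentDate DESC) AS rn
    FROM dbo.tblPaymentReceipts
    GROUP BY Client_ID, PaymentDate
)
SELECT Client_ID, PaymentDate AS PriorPayDate, PayAmt AS PriorPayAmt
FROM DailyTotals
WHERE rn = 2      -- rn = 1 is the last payment, rn = 2 the one before it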
I have a problem keeping track of a value within a query. I have a table that tracks invoices received and payments made. For each invoice number there may be multiple payments made against it. I need something that will check and make sure that each invoice number's payments add up to its received amount.
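A minimal sketch under assumed names (dbo.InvoiceActivity, InvoiceNumber, ReceivedAmount and PaymentAmount are hypothetical; adjust them to the real table): group by invoice and flag any invoice whose payments do not add up to the received amount:

SELECT
    InvoiceNumber,
    MAX(ReceivedAmount) AS ReceivedAmount,   -- assumed to repeat on every row for the invoice
    SUM(PaymentAmount)  AS TotalPaid
FROM dbo.InvoiceActivity                     -- hypothetical table name
GROUP BY InvoiceNumber
HAVING SUM(PaymentAmount) <> MAX(ReceivedAmount)   -- only invoices that do not balance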
Hi all, I have a problem trying to generate the list of customers described below. I am trying to generate a list of customers whose last commence date is Jan 04 to current. It is part of a billing system in which customers come in and pay for their season parking in a carpark. They can pay for various periods, the shortest being 1 week, so I will have customers paying for 1 week, 1 month, 2 months or even 1 year. Every time a customer comes in to pay, a new line is generated on the invoice.
My DB structure is as follows:
Customer table: Cust_Acc_No (primary key), Customer Name, Customer Address
Invoice: Cust_acc_no (link to customer table), Invoice_no (primary key)
Invoice details: invoice_no (link to invoice table), commence_date, expiry_date, amount_paid
If I do a
SELECT *
FROM customer a, invoice b, invoice_details c
WHERE a.cust_acc_no = b.cust_acc_no
  AND b.invoice_no = c.invoice_no
  AND c.commence_date > '1/1/04'
it doesn't work, as it will show
john, 1/1/04 - 31/1/04
john, 1/2/04 - 29/2/04
I do not want repetitive customer numbers, just the latest commence date. Can anyone help? Thanks
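A hedged sketch using the structure above (column names are taken from the post, with customer_name standing in for the actual name column): take the latest commence date per customer first, then join back to the customer table:

SELECT a.cust_acc_no, a.customer_name, x.last_commence_date
FROM customer a
INNER JOIN (SELECT b.cust_acc_no, MAX(c.commence_date) AS last_commence_date
            FROM invoice b
            INNER JOIN invoice_details c ON c.invoice_no = b.invoice_no
            GROUP BY b.cust_acc_no) AS x
    ON x.cust_acc_no = a.cust_acc_no
WHERE x.last_commence_date >= '20040101'   -- last commence date from Jan 04 onward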
I have members in a database who have paid-through dates. I am creating retention reports.
I created a cross-tab in Crystal (using SQL) that counts records paid within a certain year. I need to create a script that will let me find when members skip payment for a year. Any ideas?
I was thinking of running a count of all paid (Activity) records, but I'm still kind of stuck.
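As a hedged sketch (dbo.Payments with MemberID and PaidYear are hypothetical stand-ins for the paid Activity records): members who paid in one year but not the next can be found with NOT EXISTS:

SELECT p.MemberID, p.PaidYear AS LastPaidYear
FROM dbo.Payments AS p                          -- hypothetical table of paid (Activity) records
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Payments AS nextp
                  WHERE nextp.MemberID = p.MemberID
                    AND nextp.PaidYear = p.PaidYear + 1)   -- no payment in the following year
  AND p.PaidYear < YEAR(GETDATE())              -- ignore the current, possibly incomplete, year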
I have a table named Prescription that has attributes such as PatientId, MedicineCode, MedicineName, the prices of the different drugs, the quantity of each drug (e.g. 1, 2, 3, 10), and a date.
I would like to get a summary of the total number and amount of the different drugs in a specific period, and the total amount for each type of drug.
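A minimal sketch (the Price and Quantity column names and the date range are assumptions based on the description above):

DECLARE @From datetime, @To datetime
SET @From = '20150101'
SET @To   = '20150331'

SELECT
    MedicineCode,
    MedicineName,
    SUM(Quantity)         AS TotalQuantity,   -- total number of units dispensed
    SUM(Quantity * Price) AS TotalAmount      -- total value per drug
FROM dbo.Prescription
WHERE [Date] >= @From AND [Date] <= @To
GROUP BY MedicineCode, MedicineName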
I currently have a simple transactional replication setup for a database. My publisher and distributor are on the same box. The subscription is set up using a push agent.
My question is related to recovery of the subscriber.
So let's say replication is set up and working fine. Suddenly we have a failure on the subscriber database. Now I could just reconfigure the subscription, and the subscribing database would be back up and good to go, but the problem is that over time we have made some changes to the subscribing database that were not made in the publisher. For example, the tables have different indexes. Just reconfiguring the subscription would not recover these objects.
So I have to actually restore the subscriber database. So I do that, and apply transaction logs up to the most recent transaction log backup. Now, consider that my transaction log backups on the subscriber happen every 4 hours, and the most recent transaction log backup I had was from 3 hours ago. So at this point, my subscribing database is 3 hours behind my publisher.
Now, will the distribution agent resend the missing 3 hours of transactions?
In the distribution agent properties, there are two settings for transaction retention, "at least" and "but not more than". Currently they are set to 0 and 72 hours respectively. Now I would assume that if I set the "at least" setting to the subscriber's transaction log backup period, in this case 4 hours, I would be covered, and the distribution agent would indeed re-replicate the transactions that happened since the recovery point 3 hours ago.
I just wanted to verify that this is actually what these settings refer to, and that if I set the "at least" setting to 4 hours, I would be covered.
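For reference, the same two retention values can be inspected or changed in T-SQL as well; a hedged sketch, assuming the default distribution database name and values expressed in hours:

-- Run on the distributor; 'min_distretention' maps to "at least",
-- 'max_distretention' to "but not more than" (both in hours)
EXEC sp_changedistributiondb
    @database = 'distribution',
    @property = 'min_distretention',
    @value    = '4'

EXEC sp_changedistributiondb
    @database = 'distribution',
    @property = 'max_distretention',
    @value    = '72'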
I'm curious what the considerations are for choosing a good transaction retention time. The default SQL Server uses is 0 to 72 hours. With this setting I found that cleanup was taking upwards of 30 minutes (for a process that defaults to running every 10 minutes). I've read that lowering it can improve performance, and also that you don't want it running too long because of deadlock issues between the cleanup and the log reader. So how short is too short? Since the system this runs on is under heavy use, I'd like to optimize this as much as possible, which makes me think the smaller the retention the better, but is something like 1 or 2 hours too short? What are the possible consequences of such a short period of time?
I have just started in the scary world of SQL Server admin and am trying to unravel the mysteries of backups etc. If I run 'BACKUP DATABASE xxx TO DISK = 'D:\DB_Backups\xxx.bak' WITH RETAINDAYS = 7' each day, each db backup is appended to the same '.bak' file and RETAINDAYS protects the backup from being deleted by SQL Server. OK so far. But does anyone understand what criteria are used to decide when to overwrite the older backups? My backup file is getting bigger every day, with no sign of any of the old data being deleted. Do I have to wait for the entire disk to become full before they start to get overwritten? Or should I just not worry and trust that it will do it all correctly? Any ideas would be much appreciated.
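For what it's worth, a hedged sketch of the usual behaviour: RETAINDAYS never deletes anything on its own; it only blocks an overwrite attempted with INIT until the retention period has passed, so appending with NOINIT grows the file indefinitely:

-- Appends a new backup set to the file; nothing is ever removed
BACKUP DATABASE xxx TO DISK = 'D:\DB_Backups\xxx.bak'
WITH NOINIT, RETAINDAYS = 7

-- Reinitializes (overwrites) the file, but only if the backup sets in it
-- have passed their RETAINDAYS/EXPIREDATE; otherwise the backup fails
BACKUP DATABASE xxx TO DISK = 'D:\DB_Backups\xxx.bak'
WITH INIT, RETAINDAYS = 7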
In SQL 2005 a database backup retention setting has been added under Database Settings in the SQL Server properties.
In 2000 we had a comfortable option to set retention in the maintenance plan, based on files and also our space availability. It helped DBAs a lot, but it has been removed in SQL 2005.
Is that server-level setting the only retention period setting, or do we have to set it in any other tabs?
I want to change the history retention time, because the history stores about 1 GB of detail per database per day in msdb.
Some of the log shipped databases have a monitor server option with a setting for retention time, but most of the log shipped databases are not using a monitor server, since that option was unavailable at setup.
So is there a way to change the history retention time?
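Not a retention setting as such, but as a hedged sketch, the backup/restore history that accumulates in msdb can be pruned on a schedule with sp_delete_backuphistory (this does not change log shipping monitor retention):

-- Delete backup/restore history in msdb older than 30 days
DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -30, GETDATE())

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff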
I am running a couple of SQL 2000 SP3a servers with merge and snapshot replication, one server acting as publisher and distributor and the rest as subscribers. On one of the servers I have got the error below and have tried most of the suggestions from MSDN. This server has never crashed before and has no hardware problems. It has been running for a couple of months with no problems, and this has not happened on any of the other servers. Any suggestions would be greatly appreciated, as the only resolution I have left is to bring up a new instance, set up replication and see if that resolves the issue. Stopping and starting the agents doesn't work.
Server: EASTSRV3
DBMS: Microsoft SQL Server
Version: 08.00.0760
user name: dbo
API conformance: 2
SQL conformance: 1
transaction capable: 2
read only: N
identifier quote char: "
non_nullable_columns: 1
owner usage: 31
max table name len: 128
max column name len: 128
need long data len: Y
max columns in table: 1024
max columns in index: 16
max char literal len: 524288
max statement len: 524288
max row size: 524288

[4/18/2005 11:59:27 AM] EASTSRV3.ICASData: {call sp_MSgetversion}
Percent Complete: 2  Connecting to Subscriber 'EASTSRV3'
Percent Complete: 3  Retrieving publication information
Percent Complete: 4  Retrieving subscription information
Percent Complete: 4  The merge process is cleaning up meta data in database 'HO_Master'.
Percent Complete: 4  The merge process cleaned up 0 row(s) in MSmerge_genhistory, 0 row(s) in MSmerge_contents, and 0 row(s) in MSmerge_tombstone.
Percent Complete: 4  The merge process is cleaning up meta data in database 'ICASData'.
The merge process could not perform retention-based meta data cleanup in database 'ICASData'.
Percent Complete: 0  The merge process could not perform retention-based meta data cleanup in database 'ICASData'.
Percent Complete: 0  Category: NULL; Source: Merge Replication Provider; Number: -2147199467; Message: The merge process could not perform retention-based meta data cleanup in database 'ICASData'.
Percent Complete: 0  Category: COMMAND; Source: Failed Command; Number: 0; Message: {call sp_mergemetadataretentioncleanup(?, ?, ?)}
Percent Complete: 0  Category: SQLSERVER; Source: EASTSRV3; Number: 11; Message: General network error. Check your network documentation.
I want to store data warehouse source tables and files in an Archive schema and then delete / drop them after a specified period of time.
Is there a table property that I can set (I can't find one), or some other mechanism, so that I can easily identify these tables with a script?
If there is no such property or feature within the database engine I will define a metadata table and record it there, but a property or similar that I can set at archive time would be very handy.
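One built-in mechanism that may fit (a sketch only; the property name 'ArchiveAfter', its value, and the schema/table names are illustrative) is an extended property set at archive time and read back later from sys.extended_properties:

-- Tag a table at archive time
EXEC sys.sp_addextendedproperty
     @name = N'ArchiveAfter', @value = N'2016-01-01',
     @level0type = N'SCHEMA', @level0name = N'Archive',
     @level1type = N'TABLE',  @level1name = N'SourceTable1'

-- Find the tagged tables later, e.g. from a cleanup script
SELECT s.name AS SchemaName, t.name AS TableName, ep.value AS ArchiveAfter
FROM sys.extended_properties AS ep
JOIN sys.tables  AS t ON t.object_id = ep.major_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE ep.name = N'ArchiveAfter' AND ep.minor_id = 0   -- minor_id = 0 means the property is on the table itself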
I currently use 7 days for the subscription expiration setting for my two merge publications, which causes metadata to be cleaned up every 7 days. Now I need to increase the retention period to 14 days. How can I avoid missing metadata after cleanup? Microsoft ms151188 (http://msdn2.microsoft.com/en-us/library/ms151188.aspx) warns that the publisher may not have enough metadata, which may lead to non-convergence. I want to change this setting without causing any data loss.
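For reference, a hedged sketch of the change itself (the publication name is a placeholder, and exact parameters can vary by version); the retention property is changed per publication, and the article linked above covers the convergence caveats:

-- Run at the publisher, in the published database, once per merge publication
EXEC sp_changemergepublication
    @publication = N'MyMergePublication',   -- placeholder publication name
    @property    = N'retention',
    @value       = N'14'                    -- new retention period in days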
As a best practice I issue full, differential and transaction log backups for my SQL Server databases. I have set up a process to back up to local disks and then also copy the files to a centralized set of storage. On a weekly basis the centralized file system is backed up to a tape backup device. I know I can get data off of the tapes, but that process is time consuming, not well tested from my perspective, and I am not in control of the overall process. Can you offer some recommendations from a SQL Server backup retention perspective?
I have got some issues in my production environment, so please help me out. The following are the messages I got from the replication monitor, and I don't know what to do at this point.
The merge process could not perform retention-based meta data cleanup in database 'TT'. (Source: Merge Replication Provider, Error number: -2147199467) Get help: http://help/-2147199467
Transaction (Process ID 73) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. (Source: ply-db-svr1, Error number: 1205) Get help: http://help/1205
In the full recovery model, if I run a transaction that inserts 10 MB of data into a table, then 10 MB of data is added to the data file. Does this mean that the log file will grow by exactly 10 MB as well?
I understand that all transactions are logged to the log file to enable rollback and point-in-time recovery, but what is actually physically stored in the log file for this transaction's record? Is it the text of the command from the transaction or the actual physical data from that transaction?
I ask because, say I have two drives, one with a 5 MB/s write speed for the log file and one with a 10 MB/s write speed for the data file: if I start trying to insert 10 MB of data per second into the table, am I going to be limited to 5 MB/s by the log file drive, or is SQL Server not going to try to log all 10 MB each second to the log file?
How can I tell how often a checkpoint is being executed?
I have a lot of data to insert into a table (via a SQL insert from another database on the same server), and I do not want to fill the log. So I will do a SET ROWCOUNT 100000, insert records, wait for the checkpoint to run (which will clear the log), and repeat this process until all the records are inserted.
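A hedged sketch of that batching idea (SourceDB.dbo.SourceTable, dbo.TargetTable and the Id/Col1 columns are placeholders; in the SIMPLE recovery model a CHECKPOINT between batches lets the inactive log space be reused):

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    -- Copy the next batch of rows that are not in the target yet
    INSERT INTO dbo.TargetTable (Id, Col1)
    SELECT TOP (100000) s.Id, s.Col1
    FROM SourceDB.dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id)

    SET @rows = @@ROWCOUNT

    CHECKPOINT   -- in SIMPLE recovery this allows the inactive log space to be reused
END

In the FULL recovery model a log backup between batches, rather than a CHECKPOINT, is what frees the log space.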
I have a report which lists sales figures by salesperson, and I need the report to highlight the maximum amount (i.e. the person with the highest sales figures). How can I do this?
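If the highlighting can be driven from the query, a hedged sketch (SalesPerson, SalesAmount and dbo.SalesFigures are placeholders) is to return a flag column via a windowed MAX that the report can use for conditional formatting:

SELECT
    SalesPerson,
    SalesAmount,
    CASE WHEN SalesAmount = MAX(SalesAmount) OVER () THEN 1 ELSE 0 END AS IsTopSeller   -- 1 marks the highest amount
FROM dbo.SalesFigures   -- placeholder table of per-salesperson totals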