sql = sql & ",-1*CDbl(funds.fundamount),0)) AS outgo"
sql = sql & " FROM funds "
sql = sql & "GROUP BY funds.itemdisc"
What I am doing here is collecting the columns and using status to decide between credit and debt: if it's debt I subtract the amount from outgo, and if it's credit I add it to credit. This should also only select records so that there are no double listings. Here is an example.
DB:

desc    status    fundamount
bill    debt      10.00
bill    debt      15.00
in      credit    10.00
bills   debt       5.00
in      credit     5.00
paper   debt       5.00
When I run the query I should get this:
desc    fundamount
bill    -25.00
bills    -5.00
paper    -5.00
in       15.00
What I want to do in another column is keep track of the sequence number for each distinct invoice, like:
SeqNo
1
2
3
1
2
1
I am working in a stored proc and I can't get past adding the numbers up at each line as a whole instead of resetting when the next invoice number is present. Any help would be greatly appreciated.
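If you are on SQL Server 2005 or later, a counter that restarts for each invoice is usually easier to get from ROW_NUMBER partitioned by the invoice column than from a running total inside the procedure. A minimal sketch, where dbo.FundLines, InvoiceNumber and LineID are placeholder names for whatever the real table and columns are:

SELECT InvoiceNumber,
       ROW_NUMBER() OVER (PARTITION BY InvoiceNumber
                          ORDER BY LineID) AS SeqNo
FROM   dbo.FundLines;

PARTITION BY makes the numbering start again at 1 for every distinct invoice, and the ORDER BY inside the window decides which row within the invoice gets 1, 2, 3 and so on.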
I have a question for you. I have a product database with a lookup that pulls by product number. All my product numbers are made up the same way, e.g. N59840, N00951, etc.
I have a stored procedure that looks up by that product number with a "LIKE" statement that looks like this.
WHERE ([Product#] LIKE '%' + @PRODUCTNUM + '%')
Which has this problem: if someone types in "852" it returns
N00852
N05852
N98852
Is there any way that I can have SQL pad with zeros to fill up the 5 number spots, so "852" brings up "00852" and "5852" brings up "05852"?
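One common approach is to left-pad the search term to five digits before it goes into the LIKE pattern, so the match is anchored to the end of the product number instead of floating anywhere inside it. A sketch, assuming @PRODUCTNUM holds just the digits the user typed (dbo.Products is a placeholder table name):

DECLARE @Padded varchar(5) = RIGHT(REPLICATE('0', 5) + @PRODUCTNUM, 5);

SELECT *
FROM   dbo.Products
WHERE  [Product#] LIKE '%' + @Padded;

With this, "852" becomes "00852" and only N00852 comes back, while "5852" becomes "05852" and matches N05852.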
I was wondering if it was possible to add an identity column to a flat file data source as it is being processed in a data flow. I need to know the record number of each row in the file. Can this be done with the derived column task or is it possible to return the value of row count on each row of the data?
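Inside the data flow itself this is normally done with a Script Component that increments a counter per row, which is .NET rather than SQL. If the file lands in a staging table first, a plain T-SQL alternative is to number the rows after the load; a sketch, with dbo.StagingFlatFile as a placeholder staging table:

SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RecordNumber,
       s.*
FROM   dbo.StagingFlatFile AS s;

Note that without a column reflecting the original file order, the numbering here is arbitrary, which is why the in-pipeline counter is preferred when the exact file line number matters.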
Let’s say in one field there is the "year" as an integer 2010, and in another field is the "month" as an integer 11. How can you concatenate them and not add them?
Essentially, the result I'm looking for based on the example would be 201011, but I still want this to be an integer and not a string.
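Because the month always occupies the last two decimal places, this can stay in integer arithmetic: multiply the year by 100 and add the month. A quick sketch:

DECLARE @year int = 2010, @month int = 11;

SELECT @year * 100 + @month AS YearMonth;   -- 201011, still an int

The same expression works directly against columns, e.g. [Year] * 100 + [Month].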
I'm developing a Lottery system. Here is some background information:
- Total 48 numbers
- Draw 6 numbers + 1 extra number
- Each bet selects 6 numbers
And the Prize:
Prize    Selected Matched    Extra Number Matched
1st      6                   No
2nd      5                   Yes
[Code] ...
Below is my proposed table design (simplified):
Draw table

Column           Datatype
DrawDate         date (Primary Key)
ResultNumber1    tinyint
[Code] ....
There will be millions of bets for a draw. I need to write a stored procedure to check which bets won the 1st to 7th prizes. How do I write a query to match the bets with the draw result? The query should run within 1 minute. And should I change my table design?
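One workable pattern is to unpivot each bet into one row per selected number, join those rows to the draw's winning numbers, and count the matches per bet; the extra number is checked separately. A rough sketch only: the Bet table columns (BetID, DrawDate, Number1..Number6) and the Draw table's ExtraNumber column are assumed names, not the actual schema:

DECLARE @DrawDate date = '2015-01-01';
DECLARE @ExtraNumber tinyint =
        (SELECT ExtraNumber FROM dbo.Draw WHERE DrawDate = @DrawDate);

;WITH BetNumbers AS (
    -- one row per number a bet selected
    SELECT b.BetID, v.Number
    FROM   dbo.Bet AS b
    CROSS APPLY (VALUES (b.Number1), (b.Number2), (b.Number3),
                        (b.Number4), (b.Number5), (b.Number6)) AS v(Number)
    WHERE  b.DrawDate = @DrawDate
),
DrawNumbers AS (
    -- one row per main winning number
    SELECT v.Number
    FROM   dbo.Draw AS d
    CROSS APPLY (VALUES (d.ResultNumber1), (d.ResultNumber2), (d.ResultNumber3),
                        (d.ResultNumber4), (d.ResultNumber5), (d.ResultNumber6)) AS v(Number)
    WHERE  d.DrawDate = @DrawDate
)
SELECT bn.BetID,
       SUM(CASE WHEN dn.Number IS NOT NULL THEN 1 ELSE 0 END)    AS MatchedMain,
       MAX(CASE WHEN bn.Number = @ExtraNumber THEN 1 ELSE 0 END) AS MatchedExtra
FROM   BetNumbers AS bn
LEFT JOIN DrawNumbers AS dn ON dn.Number = bn.Number
GROUP BY bn.BetID;

The (MatchedMain, MatchedExtra) pair then maps onto the prize table with a simple CASE. Each bet is scanned once, which, together with an index on Bet.DrawDate, is usually what keeps this kind of query inside a minute even with millions of bets.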
I have a separate list of calendar years with radiocarbon year equivalents in SQL Server, but no conversion equation. Most, but not all, of the data I have is in radiocarbon years. I thought at first I could just link the tables, but I don't want the data that is already in calendar years to be linked to this conversion table. Is there any way I can either link the two tables with criteria for which data gets linked (only ages that are in radiocarbon years), or query all the ages that are in radiocarbon years and do something similar to a find and replace against a large list of numbers?
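If the main table has some way of flagging which unit each age is in, a joined UPDATE restricted by that flag behaves like a bulk find-and-replace. A sketch where dbo.Samples, Age, AgeUnit and the conversion table's RadiocarbonYear/CalendarYear columns are all assumed names:

UPDATE s
SET    s.Age     = c.CalendarYear,
       s.AgeUnit = 'calendar'
FROM   dbo.Samples AS s
JOIN   dbo.RadiocarbonConversion AS c
       ON c.RadiocarbonYear = s.Age
WHERE  s.AgeUnit = 'radiocarbon';   -- only touch rows still in radiocarbon years

If you would rather not overwrite the stored values, the same AgeUnit condition can go into the ON clause of a LEFT JOIN so that only radiocarbon rows pick up a converted value.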
In the T-SQL 2012 update script listed below, the update only works for a few records, since the value of TST.dbo.LockCombination.seq only contains the value 1 in most cases. Basically, for every join listed below there should be 5 records, where each record has a distinct seq value of 1, 2, 3, 4, and 5. Thus my goal is to determine how to add the missing rows to TST.dbo.LockCombination where there are no rows for seq values between 2 and 5. I would like to know how to insert the missing rows and then run the following update statement. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
UPDATE LKC
SET    LKC.combo = lockCombo2
FROM   [LockerPopulation] A
JOIN   TST.dbo.School SCH ON A.schoolnumber = SCH.type
JOIN   TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID
                         AND A.lockerNumber = LKR.number
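To generate the missing seq rows, one option is to cross join each locker with the values 2 through 5 and insert only the combinations that do not already exist. A sketch that assumes LockCombination has lockerID, seq and combo columns; adjust the names and the INSERT column list to the real table:

INSERT INTO TST.dbo.LockCombination (lockerID, seq, combo)
SELECT LKR.lockerID, v.seq, NULL          -- combo gets filled in by the UPDATE afterwards
FROM   TST.dbo.Locker AS LKR
CROSS JOIN (VALUES (2), (3), (4), (5)) AS v(seq)
WHERE  NOT EXISTS (SELECT 1
                   FROM   TST.dbo.LockCombination AS LKC
                   WHERE  LKC.lockerID = LKR.lockerID
                     AND  LKC.seq      = v.seq);

Once every locker has rows for seq 1 through 5, the UPDATE above can join to LockCombination on seq and set a different combo per sequence number.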
Here:
1. For all ReportId values, the child items' ItemOrder = 0.
2. For example, for ReportId = 5, both child items ("Item1-Child1" & "Item1-Child2") of parent "Item1" have ItemOrder = 0.
I need to:
1. Update all child items with ascending numbers starting at 1 within each parent and each report.
2. For each different parent or different report, the numbering should start at 1 again.
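An updatable CTE with ROW_NUMBER partitioned by report and parent does this in one statement. A sketch assuming a table with ItemID, ReportId, ParentItemID and ItemOrder columns (dbo.ReportItems and the column names are placeholders):

;WITH Ordered AS (
    SELECT ItemOrder,
           ROW_NUMBER() OVER (PARTITION BY ReportId, ParentItemID
                              ORDER BY ItemID) AS NewOrder
    FROM   dbo.ReportItems
    WHERE  ParentItemID IS NOT NULL       -- child rows only
)
UPDATE Ordered
SET    ItemOrder = NewOrder;

Changing the ORDER BY inside the window controls which child becomes 1 within each parent.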
I have a database with entries that I want sorted by date order. Each entry has an auto ID number allocated (primary key with auto-sequencing), which I want to change to reflect the sorting (so the earliest date has the lowest auto ID number and so on). I've gone into the database and sorted the entries as I want them. Then I've gone into Design View to delete and re-establish the primary key auto-sequence. However, it is not keeping the date order in the database (e.g. entry ID 3140 has the date 12/06/2015, but 3141 is 02/02/2012). How do I get it to maintain the order?
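Re-creating the identity column only numbers the rows in whatever order they happen to be reinserted; it does not follow a sort, and an identity value cannot be updated afterwards. If the goal is simply an ID that follows the date, one option is to renumber into a separate column with ROW_NUMBER. A sketch with placeholder names (dbo.Entries, EntryID, EntryDate):

ALTER TABLE dbo.Entries ADD DateOrderID int NULL;   -- plain int, not an identity
GO

;WITH Renumbered AS (
    SELECT DateOrderID,
           ROW_NUMBER() OVER (ORDER BY EntryDate, EntryID) AS NewID
    FROM   dbo.Entries
)
UPDATE Renumbered
SET    DateOrderID = NewID;

In practice it is usually easier to leave the auto ID as a meaningless surrogate and rely on ORDER BY (or a column like this) wherever the date order matters.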
I have a report with a column which contains either a string such as "N/A" or a number such as 12. A user exports the report to Excel. In Excel the numbers are formatted as text.
I already tried setting the value with CDbl, which returns an error for the cells containing a string.
The requirement is to export the column to Excel with the numbers formatted as numbers and strings such as "N/A" in the same column as strings.
I insert/update the figure information in this table with daily figures, i.e. if no record exists, insert a new name and figure; if a figure exists, update the figure. I have been asked to add logic to the insert/update SP to add a fee of 0.25% to any daily figure that results in the total value in the base table going over 1000.
For example: the base table value is 900 before the update. The daily figure is 200, so 900 + 200 = 1100 after the update of the base data. The new logic dictates that 0.25% must be added to 100 of this daily figure, as the first 100 brings the total up to 1000 and the other 100 (which makes up the 200) takes it over the 1000 threshold. 100 + 0.25% = 0.25, so the total value to add to the base table = 100 + 100 + 0.25 = 200.25. I am keen to avoid WHILE loops and cursors.
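The portion of the daily figure that falls above the 1000 threshold can be worked out arithmetically, so no loop or cursor is needed. A sketch of just the calculation, with @BaseValue and @DailyFigure standing in for the current base-table value and the incoming daily figure:

DECLARE @BaseValue   decimal(18,2) = 900.00;
DECLARE @DailyFigure decimal(18,2) = 200.00;

-- how much of the daily figure sits above the 1000 threshold
DECLARE @OverThreshold decimal(18,2) =
    CASE
        WHEN @BaseValue + @DailyFigure <= 1000 THEN 0
        WHEN @BaseValue >= 1000                THEN @DailyFigure
        ELSE @BaseValue + @DailyFigure - 1000
    END;

-- daily figure plus the 0.25% fee on the portion over the threshold
SELECT @DailyFigure + @OverThreshold * 0.0025 AS AmountToAdd;   -- 200.25 for the example

The same CASE expression can sit inline in the UPDATE/INSERT statements of the SP, with the current base value read from the base table.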
I've tried a few ways and I can get this to work at the end in the WHEN part. I am just struggling to put this together so that it is accepted as a CASE WHEN statement; I am probably missing the obvious.
Case when Postcode like '%[abcdefghijklmnopqrstuvwxyz%]' then 'Lowercase Postcode' else 'Postcode OK' end as [DQPostcode]
collate Latin1_General_CS_AS
In simple terms, I am looking for all instances of lowercase characters in the Postcode field.
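Two small changes usually make this work: the % wildcard belongs outside the character class (in the pattern above the class is closed after the %, so it never matches as intended), and the case-sensitive collation has to be applied inside the comparison rather than after the column alias. A sketch (dbo.Addresses is a placeholder table name):

SELECT Postcode,
       CASE
           WHEN Postcode COLLATE Latin1_General_CS_AS
                LIKE '%[abcdefghijklmnopqrstuvwxyz]%'
           THEN 'Lowercase Postcode'
           ELSE 'Postcode OK'
       END AS [DQPostcode]
FROM   dbo.Addresses;

Listing the letters explicitly (rather than an a-z range) avoids the collation-ordering quirk where a range can also pick up uppercase letters.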
I have a subquery I wanted to add as a fourth column to my original query (the one below the subquery). How do I combine the two queries into one statement? I tried, but was getting the error "Subquery returned more than 1 value".
select COUNT(*) FreeReduced
from   students s
join   Buildings b on s.Building_ID = b.Building_ID
where  s.Activeness = 1
  and  (Eligibility = 3 or Eligibility = 2)
group by building_number, Building_Name
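Since the count is per building, the usual fix for "Subquery returned more than 1 value" is to turn the grouped subquery into a derived table and join it back to the main query on the building key, instead of putting it directly in the select list. A sketch; the outer part is only a placeholder because the original query isn't shown here:

SELECT m.building_number,
       m.Building_Name,
       m.TotalStudents,                    -- whatever columns the original query returns
       fr.FreeReduced                      -- the new fourth column
FROM   dbo.YourOriginalQuery AS m          -- placeholder for the original query or view
LEFT JOIN (
       SELECT b.Building_ID,
              COUNT(*) AS FreeReduced
       FROM   students  s
       JOIN   Buildings b ON s.Building_ID = b.Building_ID
       WHERE  s.Activeness = 1
         AND  Eligibility IN (2, 3)
       GROUP BY b.Building_ID
) AS fr ON fr.Building_ID = m.Building_ID;

Grouping the derived table by Building_ID (rather than by name and number) gives the outer join a single, unambiguous key to match on.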
Below is my SQL, which just counts the number of appointments grouped by clinic. This is great, but what I'd like to add is the percentage within each clinic.
For example, clinic BRESRAD1 has a total of 61 appointments, of which 75.41% are Normal Appointments and 24.59% are Diagnostic. Ideally I would like the percentage in the next column.
BRESRAD1    Normal Appointment        46
BRESRAD1    Diagnostic Appointment    15
BRESRAD2    Normal Appointment        17
BRESRAD2    Diagnostic Appointment    12
BRESRAD3    Normal Appointment        34
BRESRAD3    Diagnostic Appointment    43
My SQL is as follows:
SELECT ClinicCode,
       CASE WHEN [ApptTypeDesc] LIKE '%Diag%' THEN 'Diagnostic Appointment'
            ELSE 'Normal Appointment' END AS [Diagnostic Appt],
       COUNT(OPAppointmentID) AS CountOfOPAppointmentID
FROM   dbo.OP_APPOINTMENT
WHERE  (AttendStatusNatCode IN ('5', '6'))
  AND  (ApptFinYr = '2014/15')
GROUP BY ClinicCode,
         CASE WHEN [ApptTypeDesc] LIKE '%Diag%' THEN 'Diagnostic Appointment'
              ELSE 'Normal Appointment' END
ORDER BY ClinicCode
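A windowed SUM over the per-clinic counts gives the percentage without another level of grouping or a self-join. A sketch built directly on the query above; only the last column is new:

SELECT ClinicCode,
       CASE WHEN [ApptTypeDesc] LIKE '%Diag%' THEN 'Diagnostic Appointment'
            ELSE 'Normal Appointment' END AS [Diagnostic Appt],
       COUNT(OPAppointmentID) AS CountOfOPAppointmentID,
       CAST(100.0 * COUNT(OPAppointmentID)
            / SUM(COUNT(OPAppointmentID)) OVER (PARTITION BY ClinicCode)
            AS decimal(5, 2)) AS PercentOfClinic
FROM   dbo.OP_APPOINTMENT
WHERE  (AttendStatusNatCode IN ('5', '6'))
  AND  (ApptFinYr = '2014/15')
GROUP BY ClinicCode,
         CASE WHEN [ApptTypeDesc] LIKE '%Diag%' THEN 'Diagnostic Appointment'
              ELSE 'Normal Appointment' END
ORDER BY ClinicCode;

SUM(...) OVER (PARTITION BY ClinicCode) re-totals the grouped counts per clinic, so 46 out of 61 comes back as 75.41.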
ALTER TABLE [dbo].[bkrm_lendcoll]
ADD CONSTRAINT ReqdCovgAmt_constraint33 CHECK (
    case when CovgAmt = 0 and ReqdCovgAmt = 0 then 1
         when CovgAmt = 0 and ReqdCovgAmt = 1 then 1
         when CovgAmt = 1 and ReqdCovgAmt = 0 then 1
         when CovgAmt = 1 and ReqdCovgAmt = 1 then 0
    end = 1
)
This is my first attempt to add a constraint for business logic. The desired behavior is that the two columns together behave like a pair of radio buttons (one can be true, the other can be true, both can be false, but both cannot be true). I get this error when I attempt to add it.
The ALTER TABLE statement conflicted with the CHECK constraint "ReqdCovgAmt_constraint33". The conflict occurred in database "Std", table "dbo.lendcoll".
So, basically my question is: when you have two bit columns and want them to follow the truth table described above, how can I set up a CHECK constraint to enforce this?
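The CASE expression itself already encodes the truth table correctly; the error means some existing rows have both columns set to 1, so SQL Server refuses to validate the new constraint against them. Either fix those rows first, or add the constraint without checking existing data. A sketch, using the simpler NOT (... AND ...) form, which is equivalent to the CASE:

-- find the rows that currently violate the rule
SELECT *
FROM   dbo.bkrm_lendcoll
WHERE  CovgAmt = 1 AND ReqdCovgAmt = 1;

-- WITH NOCHECK skips validating existing rows; omit it once the data is clean
ALTER TABLE dbo.bkrm_lendcoll WITH NOCHECK
ADD CONSTRAINT ReqdCovgAmt_constraint33
CHECK (NOT (CovgAmt = 1 AND ReqdCovgAmt = 1));

Be aware that a constraint added WITH NOCHECK is marked as not trusted by the optimizer until it is re-checked, so cleaning the data is the better long-term fix.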
I'm trying to write a stored procedure that performs a select statement of the RequestID column and the total disk size for that row, i.e. the values of RequestAdditionalDisk1Size + RequestAdditionalDisk2Size + RequestAdditionalDisk3Size, where the Requester equals a certain value. I can perform the select statement fine on the individual values, but not add the disk sizes together and present that back in the select statement. So far the code looks like this, but it is giving me an error around the line performing a SUM.
-- Author:      <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
ALTER PROCEDURE [dbo].[spMyRequests]
    -- Add the parameters for the stored procedure here
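SUM() aggregates a value down a set of rows, so it tends to produce errors (or demands for GROUP BY) when what is really wanted is adding three columns on the same row; plain + with ISNULL to guard against NULLs does that. A sketch of the body, with the table name, parameter and exact column names assumed from the description:

ALTER PROCEDURE [dbo].[spMyRequests]
    @Requester nvarchar(100)                -- assumed parameter
AS
BEGIN
    SET NOCOUNT ON;

    SELECT RequestID,
           ISNULL(RequestAdditionalDisk1Size, 0)
         + ISNULL(RequestAdditionalDisk2Size, 0)
         + ISNULL(RequestAdditionalDisk3Size, 0) AS TotalAdditionalDiskSize
    FROM   dbo.Requests                      -- placeholder table name
    WHERE  Requester = @Requester;
END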
I have a SQL server instance being used as our data warehousing environment. The instance consists of several databases that I am snapshotting as part of our high availability strategy for data. I've created a stored procedure that takes the source database as an argument and that will create a new snapshot when a new one needs to be created and will also automatically remove the old snapshot. It also updates some synonym tables that point to the new snapshot but that might not be an important detail.
I would like to have the stored procedure stored some place global to all of the databases that I am routinely snapshotting, but that would mean putting it in the master database. Although having it there makes things significantly better in terms of usability, it seems like there's something wrong with putting any stored procedures in the master database. Am I wrong? Is it OK to put stored procedures there in situations like this?
I have a dw schema in the database, owned by user dw. The login name is dw. The login had db_owner rights in the database. The default schema for the login on the database is dw. Now, once I assign the sysadmin server role to the dw login, I start seeing "stored proc not found" errors if I try to execute a stored proc without qualifying it as dw.spname. I am also seeing "table not found" errors while querying tables under the dw schema after the change.
I am planning to add some new columns to an existing SQL Server 2012 table. I know that I need to use an ALTER statement to accomplish this. However, my question is about the location where the new columns go in the table. It would make more sense to add the new columns to the middle of the table, since they have a similar meaning to other columns in the middle of the table. However, is it better to add these new columns at the end of the table? I am asking because I am thinking I might need some SQL to move the values of existing columns around. So is it better to add new columns in the middle of the table or at the end of the table? And can you tell me why one location is better than the other?
I have a table of raw data where each column can be null. The thought was to create an identity key (1,1) and set it as the primary key for each row (name/address/zip/country/joindate/spending), with the surrogate key "pkid". However, other queries will not use this primary key. So for instance they may count the number of people at a zip code, or select all names, addresses etc. The queries may order by join date, or select all the people that joined on a specific date. No other code would logically use the surrogate primary key, so would it still have any performance benefits? At this time the table would have no clustered or nonclustered indexes or keys. I'm curious about the case where there are millions of records.
Why does M$ Query Analyzer display all numbers as positive, no matter whether they are truly positive or negative? I am having to cast each column to varchar to find out if there are any negative numbers being hidden from me :( I tried checking Tools/Options/Connections/Use Regional Settings both on and off, stopping and restarting M$ Query Analyzer in betwixt, but no improvement. Am I missing some other option somewhere?
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns
               WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
    ALTER TABLE [dbo].[Employee]
        ADD [DoNotCall] bit NOT NULL
        CONSTRAINT DoNot_Call_Default DEFAULT 0
    IF ( @@ERROR <> 0 ) GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time as well. I am using SQL Server 2008.
I have a Group By query which is working fine aggregating records by city. Now I have a requirement to focus on one city and then group the other cities into 'Other'. Here is the query which works:
Select [City] = CASE WHEN [City] = 'St. Louis' THEN 'St. Louis'
                     ELSE 'Other Missouri City' END,
       SUM([Cars]) AS 'Total Cars'
From   [Output-MarketAnalysis]
Where  [City] IN ('St. Louis', 'Kansas City', 'Columbia', 'Jefferson City', 'Joplin')
  AND  [Status] = 'Active'
Group by [City]
Here is the result:
St. Louis         1000
Kansas City        800
Columbia           700
Jefferson City     650
Joplin             300
When I add this CASE WHEN statement to roll up the city information, it changes the name of the city to 'Other Missouri City'; however, it does not aggregate all cities with the value 'Other Missouri City':
Select [City] = CASE WHEN [City] = 'St. Louis' THEN 'St. Louis'
                     ELSE 'Other Missouri City' END,
       SUM([Cars]) AS 'Total Cars'
From   [Output-MarketAnalysis]
Where  [City] IN ('St. Louis', 'Kansas City', 'Columbia', 'Jefferson City', 'Joplin')
  AND  [Status] = 'Active'
Group by [City]
Here is the result:
St. Louis             1000
Other Missouri City    800
Other Missouri City    700
Other Missouri City    650
Other Missouri City    300
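Grouping by the raw [City] column keeps one group per original city even though the SELECT relabels them; grouping by the same CASE expression (or wrapping the query and grouping at the outer level) collapses them into a single 'Other Missouri City' row. A sketch of the first option:

SELECT CASE WHEN [City] = 'St. Louis' THEN 'St. Louis'
            ELSE 'Other Missouri City' END AS [City],
       SUM([Cars]) AS 'Total Cars'
FROM   [Output-MarketAnalysis]
WHERE  [City] IN ('St. Louis', 'Kansas City', 'Columbia', 'Jefferson City', 'Joplin')
  AND  [Status] = 'Active'
GROUP BY CASE WHEN [City] = 'St. Louis' THEN 'St. Louis'
              ELSE 'Other Missouri City' END;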
I have a table with an ID column named ContentID. The values in that column are all NULL. I need a way to change those NULLs to numbers; it does not matter what kind of number, as long as they are all different. Can someone point me to a piece of T-SQL that I could use to do that? There are over 24,000 rows, so a cursor-based change will not be very efficient.
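A set-based way to do this is ROW_NUMBER in an updatable CTE, which hands every row a distinct number in one UPDATE, no cursor required. A sketch with the table name assumed (the column name ContentID comes from the question):

;WITH Numbered AS (
    SELECT ContentID,
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM   dbo.Content                 -- placeholder table name
    WHERE  ContentID IS NULL
)
UPDATE Numbered
SET    ContentID = rn;

Since every value in the column is currently NULL, the generated numbers 1, 2, 3, ... cannot collide with anything already there.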
If I just use a simple select statement, I find that I have 8286 records within a specified date range.
If I use the select statement to pull records that were created from 5pm and later and then add it to another select statement with records created before 5pm, I get a different count: 7521 + 756 = 8277
Is there something I am doing incorrectly in the following sql?
DECLARE @startdate date = '03-06-2015'
DECLARE @enddate date = '10-31-2015'
DECLARE @afterTime time = '17:00'

SELECT General_Count = (SELECT COUNT(*) as General
                        FROM Unidata.CrumsTicket ct
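One way to see where the missing nine rows go is to count both halves and the overall total in a single pass, using predicates that are exact complements of each other (>= in one, < in the other, on the same expression). If the two halves still do not add up to the total, NULL timestamps or a different boundary operator in the original two queries are the usual suspects. A sketch, with ct.CreateDate as an assumed column name:

DECLARE @afterTime time = '17:00';

SELECT AfterFive  = SUM(CASE WHEN CAST(ct.CreateDate AS time) >= @afterTime THEN 1 ELSE 0 END),
       BeforeFive = SUM(CASE WHEN CAST(ct.CreateDate AS time) <  @afterTime THEN 1 ELSE 0 END),
       Total      = COUNT(*)
FROM   Unidata.CrumsTicket AS ct;     -- add the same date-range filter used in the original counts

Rows whose timestamp is NULL are counted in Total but in neither half, which shows up immediately in this layout.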