This is kind of a follow-up to my previous post "Get 15th and last day trans". Our client has asked for open sales order dollars for a given date range, including order dollars active as of the 15th of the month and the last day of the month.
Below is what I started; I hard-coded Jan15Open and JanEndOpen so you could see what I am trying to accomplish. So, for example, if I pass a @StartDate of '01-01-2013' and an @EndDate of '12-31-2014' I would get back results for 01-15-2013, 01-31-2013, 02-15-2013, 02-28-2013, etc., all the way to 12-31-2014. Note that the results will depend on the TransDate datetime column. Once I get the query correct I plan to create a temp table to hold the results so I can report them back in a web page.
ALTER PROCEDURE [dbo].[afa_selSalesOrdersOpen]
@StartDate datetime = null,
@EndDate datetime = null
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
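A hedged sketch of the shape the finished query could take, assuming the orders live in a table such as dbo.SalesOrders with OrderDollars, TransDate and CloseDate columns (assumed names, not from the post): generate the 15th and month-end snapshot dates between @StartDate and @EndDate, then sum the orders still open on each snapshot date.

-- Hedged sketch only: dbo.SalesOrders, OrderDollars and CloseDate are assumed names.
DECLARE @StartDate datetime = '2013-01-01',
        @EndDate   datetime = '2014-12-31';

WITH Months AS
(
    SELECT DATEADD(DAY, 1, EOMONTH(@StartDate, -1)) AS FirstOfMonth   -- EOMONTH needs SQL 2012+
    UNION ALL
    SELECT DATEADD(MONTH, 1, FirstOfMonth)
    FROM Months
    WHERE DATEADD(MONTH, 1, FirstOfMonth) <= @EndDate
),
SnapshotDates AS
(
    SELECT DATEADD(DAY, 14, FirstOfMonth) AS SnapshotDate FROM Months  -- the 15th
    UNION ALL
    SELECT EOMONTH(FirstOfMonth) FROM Months                           -- the last day
)
SELECT d.SnapshotDate,
       SUM(so.OrderDollars) AS OpenDollars
FROM SnapshotDates AS d
JOIN dbo.SalesOrders AS so
      ON so.TransDate <= d.SnapshotDate                                -- order existed by then
     AND (so.CloseDate IS NULL OR so.CloseDate > d.SnapshotDate)       -- and was still open
GROUP BY d.SnapshotDate
ORDER BY d.SnapshotDate
OPTION (MAXRECURSION 1000);

The same snapshot list could just as easily feed an INSERT into the temp table mentioned above.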
I have a Windows 2003 server with SQL 2005. How do I change the culture/regional options for the database? On my local test machine (which seems to be exactly the same in terms of Windows regional settings and SQL Server settings) the currency fields show with a £ sign as expected; however, on my production box they show $. I know this should be simple, but it's the night before a deadline and my brain is screwed. Please help!!
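Two things worth checking, hedged because the £/$ symbol is normally applied by whatever formats the value (the app or report) rather than stored by the database: the language of the current connection and the login's default language.

-- What language is the current connection using, and what languages does the server know about?
SELECT @@LANGUAGE AS session_language;
EXEC sp_helplanguage;

-- If the application login's default language differs between the two boxes,
-- it can be changed (MyAppLogin is an assumed name):
-- ALTER LOGIN [MyAppLogin] WITH DEFAULT_LANGUAGE = British;

If both servers report the same language, the difference almost certainly lives in the web/app tier's culture settings rather than in SQL Server itself.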
My insert statement is for #Data; I only need to process each @EmployeeID one time, which is why I thought a loop would be sufficient, but I let this process run for 2 hours and it still had not completed, so I feel I must have set something up incorrectly!
This is my syntax: I am creating a table of active users, then getting data for each of those active users, but only getting the data for each active user one time.
Declare @EmployeeID varchar(50)

CREATE TABLE #ActiveUsers (
    ID INT IDENTITY NOT NULL,
    EmployeeID varchar(50),
    processed int
)

Create Table #Data
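Before debugging the loop, it may be worth trying a single set-based insert; a rough sketch, where dbo.EmployeeActivity and the #Data column list are assumptions standing in for the real source:

-- Hedged sketch: dbo.EmployeeActivity and the #Data columns are assumed names.
-- One INSERT covers every active employee exactly once, with no row-by-row loop.
INSERT INTO #Data (EmployeeID, ActivityDate, ActivityCount)
SELECT ea.EmployeeID,
       ea.ActivityDate,
       COUNT(*) AS ActivityCount
FROM dbo.EmployeeActivity AS ea
INNER JOIN #ActiveUsers AS au
        ON au.EmployeeID = ea.EmployeeID      -- restrict to the active users
GROUP BY ea.EmployeeID, ea.ActivityDate;

If a loop really is required, an index on #ActiveUsers.EmployeeID (and on the join column of the source table) is usually what turns a two-hour run into seconds.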
Hi, I have a report which displays monetary amounts. I have set the format of the table cells to C0, and everything is showing in dollars. However, I need this to show as pounds and can't seem to find out how to do it. I have checked the regional settings on my PC, which are correctly set to English (United Kingdom). I have also tried going to Tools > Options in the report and changing the language to "Same as Microsoft Windows" under International Settings, but this does not seem to have done the trick. Also, I have to enter date parameters as mmddyy as opposed to ddmmyy. Does anybody know how I can change these things? Please help, it's driving me up the wall!!
I have a select statement that returns data on a construction project. I have a start date, an end date, and a forecasted cost for each task in the project. I need to create a table that spreads the dollars in a linear fashion, broken down by fiscal period.
I can do the math to calculate the dollar amounts correctly; what I am having trouble with is writing the statement that will process each row of my select statement and insert multiple rows into the new table.
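One way to avoid going row by row is to join every task to a fiscal-period calendar and let a single INSERT...SELECT fan each task out into one row per period; a sketch, where dbo.Tasks, dbo.FiscalPeriods and dbo.TaskCostByPeriod are all assumed names:

-- Hedged sketch: table and column names are assumptions.
-- Each task is matched to every fiscal period it overlaps and the forecast
-- is split evenly across those periods (the "linear" spread).
INSERT INTO dbo.TaskCostByPeriod (TaskID, PeriodID, PeriodCost)
SELECT t.TaskID,
       fp.PeriodID,
       t.ForecastCost * 1.0
           / COUNT(*) OVER (PARTITION BY t.TaskID) AS PeriodCost
FROM dbo.Tasks AS t
INNER JOIN dbo.FiscalPeriods AS fp
        ON fp.PeriodEnd   >= t.StartDate          -- period overlaps the task window
       AND fp.PeriodStart <= t.EndDate;

Swapping the even split for a day-weighted one only means replacing the COUNT(*) OVER () divisor with the number of overlapping days per period.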
I have a table where, on all business days of each month, a price is uploaded for different trades. A simplified extract:
Trade   PublishingDate   Price
10      2Jan15           100
:
10      31Jan15          150
20      2Jan15           75
:
20      31Jan15          110
:
I'm trying to write a view that will:
a) Give me the average for each trade, per month.
b) Give me the average of the last 7 days, for each trade, per month.
The first one is easy:
SELECT TradeWSRate.Trade,
       CASE MONTH(TradeWSRate.PublishedDate)
            WHEN 1 THEN 'Jan'
            WHEN 2 THEN 'Feb'
            WHEN 3 THEN 'Mar'
            WHEN 4 THEN 'Apr'
            WHEN 5 THEN 'May'
[code]...
Column "TradeWSRate.PublishedDate" is invalid in the ORDER BY clause because it is not contained in either an aggregate function or the GROUP BY clause.How can I change the above select to work? Ideally I want to do this all in one view, and not use functions.
I am using the query below and I want to order it by month so that the data is displayed in month order.
Select count(I.Colour) as Litres, DATEName(MONTH, ShipDate) as [Loaded Date]
from InkData I
group by DATEName(MONTH, ShipDate)
order by DATEName(MONTH, ShipDate)
Right now it is not ordering by month; it is ordering alphabetically. I want it ordered chronologically by month.
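DATENAME returns a string, which is why the ORDER BY comes back alphabetical. Grouping on the month number as well and ordering by that fixes it:

-- Order by the month number instead of the month name.
Select count(I.Colour) as Litres,
       DATEName(MONTH, I.ShipDate) as [Loaded Date]
from InkData I
group by DATEName(MONTH, I.ShipDate), MONTH(I.ShipDate)
order by MONTH(I.ShipDate)

If the data spans more than one year, add YEAR(I.ShipDate) to both the GROUP BY and the ORDER BY so January of one year doesn't get lumped in with January of another.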
I have two tables, one is called (questions), the second one (answers).
questions columns are (ID,questionTitle)
answers columns are (ID,questionID,answer, answerDate)
I use this query to load the data: "SELECT q.questionTitle, COUNT(a.ID), a.answerDate FROM questions q LEFT JOIN answers a ON q.ID = a.questionID". The query is easy, but the problem I can't solve is how to fetch the data ordered by the answerDate column; I mean I want the first record to be the one which has the most recent answer, and so on.
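Assuming you want one row per question with its answer count, grouping by the question and ordering by the latest answerDate does it; a sketch:

-- One row per question; the question with the newest answer comes first.
SELECT q.questionTitle,
       COUNT(a.ID)       AS answerCount,
       MAX(a.answerDate) AS latestAnswer
FROM questions AS q
LEFT JOIN answers AS a
       ON q.ID = a.questionID
GROUP BY q.ID, q.questionTitle
ORDER BY MAX(a.answerDate) DESC;    -- unanswered questions (NULL) drop to the bottom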
I have the following query in a user-defined function. It accepts a single string parameter, but for the sake of simplicity, I have substituted actual strings in the query. It basically checks the passed string. If it ends with "Id", it strips off the "Id" and returns the resulting string. If it ends with "Id" followed by a digit, it strips that off and returns the string.
SELECT CASE
           WHEN LEN('IncidentViolationId') > 2 AND RIGHT('IncidentViolationId', 2) = 'Id'
               THEN LEFT('IncidentViolationId', LEN('IncidentViolationId') - 2)
           WHEN PATINDEX('%Id[0-9]', 'IncidentViolationId') > 1
               THEN LEFT('IncidentViolationId', PATINDEX('%Id[0-9]', 'IncidentViolationId') - 1)
           ELSE 'IncidentViolationId'
       END
This code has worked flawlessly for quite some time, and all of a sudden I get "Invalid length parameter passed to left function". I understand why LEFT() would normally fail if I passed it a -1 for the second parameter, but in this scenario, as I understand it, it never should have reached the second WHEN condition since the first one evaluates to true. Why, and why all of a sudden?
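The usual explanation is that SQL Server does not always guarantee the WHEN branches are evaluated strictly in order once the expression is folded into a bigger plan (aggregates and computed expressions are the classic triggers), so the LEFT in the second branch can be handed a negative length even though the first branch "should" have won. A defensive sketch that can never pass LEFT a negative number, shown with a stand-in variable @s for the function's parameter:

-- @s stands in for the parameter the function receives.
DECLARE @s varchar(200) = 'IncidentViolationId';

SELECT CASE
           WHEN LEN(@s) > 2 AND RIGHT(@s, 2) = 'Id'
               THEN LEFT(@s, LEN(@s) - 2)
           WHEN PATINDEX('%Id[0-9]', @s) > 1
               -- guard the length so this branch is safe even if it is
               -- evaluated for a value the first branch already matched
               THEN LEFT(@s, CASE WHEN PATINDEX('%Id[0-9]', @s) > 1
                                  THEN PATINDEX('%Id[0-9]', @s) - 1
                                  ELSE LEN(@s) END)
           ELSE @s
       END AS TrimmedName;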
select row_number() over (partition by merrickid order by recorddate asc) as rn,
       merrickid,
       recorddate,
       allocestgasvolmcf,
       sum(allocestgasvolmcf) over (partition by merrickid order by recorddate asc) as xxxxxxxx
from dbo.completiondailytb
where merrickid = 1965
I get the following message after I insert the sum() over function:
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near 'order'.
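That is exactly the error SQL Server 2008 R2 and earlier return for this syntax: SUM() OVER (... ORDER BY ...) running totals were only added in SQL Server 2012. On an older instance a correlated subquery is one workaround; a sketch against the same table:

-- Running total without the 2012+ windowed ORDER BY support.
select row_number() over (partition by c.merrickid order by c.recorddate asc) as rn,
       c.merrickid,
       c.recorddate,
       c.allocestgasvolmcf,
       ( select sum(c2.allocestgasvolmcf)
         from dbo.completiondailytb as c2
         where c2.merrickid  = c.merrickid
           and c2.recorddate <= c.recorddate ) as running_total
from dbo.completiondailytb as c
where c.merrickid = 1965;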
I am using SQL 2012 SE. I have 2 databases, say A and B, with the same structure and relationships. There are 65 tables in each database. A is already replicating data to database C for 35 tables. Now I need to move data from A to B that is greater than getdate()-1, every day, for all the tables, and once the move is done I need to delete that data from A. And the same thing the next day. Since this covers 65 tables it is challenging to identify the insert order. Once the insert order is identified, the delete order will be the reverse of it.
Is there a tool or any SP that could generate the insert-order script? Generate Scripts with data only generates the entire data, and these databases are almost 400 GB. So I thought of generating a schema-only script, creating an empty database with it, and then generating a data-only script to identify the insert order, but it won't generate anything since there is no data.
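One way to get the order without scripting any data at all is to walk sys.foreign_keys and assign every table a dependency level, parents before children; insert in ascending level and delete in descending level. A sketch (it assumes the FK graph has no circular references):

-- Assigns each user table a level such that every FK parent has a lower level.
WITH fk AS
(
    SELECT f.parent_object_id     AS child_id,    -- table that holds the FK
           f.referenced_object_id AS parent_id    -- table it references
    FROM sys.foreign_keys AS f
    WHERE f.parent_object_id <> f.referenced_object_id   -- ignore self-references
),
lvls AS
(
    SELECT t.object_id, 0 AS lvl
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = t.object_id)
    UNION ALL
    SELECT fk.child_id, l.lvl + 1
    FROM lvls AS l
    JOIN fk ON fk.parent_id = l.object_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS table_name,
       MAX(lvl)                      AS insert_order
FROM lvls
GROUP BY object_id
ORDER BY insert_order, table_name;

Deleting in descending order of the same level list gives the reverse (children first).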
I am trying to find a solution for building a pivot dynamically. One of my departments loads all the sales figures into one table every month, and I need to pick up the last two archived months in order to build a pivot and see whether anything has changed. What I am trying to do is get these last two months dynamically.
create table forum (customer varchar(50), nmonth varchar(6), tot int, archived datetime)
insert into forum values
('Pepsi','201503',100,'2015-04-28'),
('Pepsi','201504',200,'2015-04-28'),
('Texaco','201503',600,'2015-04-28'),
('Texaco','201504',300,'2015-04-28'),
[code]...
As you can see, I have to change the underlined values manually every month, but that is only a temporary solution. How can I set up the last two months in a dynamic way?
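A sketch of doing it dynamically: read the two most recent nmonth values out of the table itself, then build the PIVOT column list with dynamic SQL (this uses the forum table from the post):

-- Pick the two most recent months present in the table, then pivot on them.
DECLARE @cols nvarchar(200),
        @sql  nvarchar(max);

SELECT @cols = STUFF((SELECT TOP (2) ',' + QUOTENAME(nmonth)
                      FROM forum
                      GROUP BY nmonth
                      ORDER BY nmonth DESC
                      FOR XML PATH('')), 1, 1, '');

SET @sql = N'
SELECT customer, ' + @cols + N'
FROM (SELECT customer, nmonth, tot FROM forum) AS src
PIVOT (SUM(tot) FOR nmonth IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;

The month columns come out newest-first ([201504], [201503]); reverse the ORDER BY inside the STUFF for oldest-first.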
Using T-SQL, how do I list all views in dependency order? I'm trying to script out the creation of the views, but I need to make sure they are in the correct order when I execute them.
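A sketch that leans on sys.sql_expression_dependencies: views that reference no other views get level 0, views built on views get higher levels, so scripting them in ascending level order keeps every CREATE VIEW ahead of the views that use it (it assumes no circular references):

-- Give every view a dependency level; create in ascending level order.
WITH view_deps AS
(
    SELECT d.referencing_id AS view_id,
           d.referenced_id  AS referenced_view_id
    FROM sys.sql_expression_dependencies AS d
    JOIN sys.views AS v  ON v.object_id  = d.referencing_id
    JOIN sys.views AS rv ON rv.object_id = d.referenced_id    -- keep only view-on-view edges
),
lvls AS
(
    SELECT v.object_id, 0 AS lvl
    FROM sys.views AS v
    WHERE NOT EXISTS (SELECT 1 FROM view_deps AS vd WHERE vd.view_id = v.object_id)
    UNION ALL
    SELECT vd.view_id, l.lvl + 1
    FROM lvls AS l
    JOIN view_deps AS vd ON vd.referenced_view_id = l.object_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS view_name,
       MAX(lvl)                      AS create_order
FROM lvls
GROUP BY object_id
ORDER BY create_order, view_name;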
SELECT XmlField.value('(//step/@userName)[1]', 'VARCHAR(100)') AS userName FROM theTable
I would like to have some kind of indicator in my TSQL results of the number of each <step> node. For instance, saf2 would be a 3. I think this is similar to a ROW_NUMBER() kind of function in normal table data.
Is there a good way to get a 1, 2, 3, and 4 value for this sample data?
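nodes() plus ROW_NUMBER() is the usual way to get that ordinal; the sketch below numbers each <step> in document order (in practice nodes() returns them in document order, though that isn't a documented guarantee):

-- Number each <step> node and pull its userName attribute.
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS stepNumber,
       s.stp.value('@userName', 'VARCHAR(100)')   AS userName
FROM theTable AS t
CROSS APPLY t.XmlField.nodes('//step') AS s(stp);

If theTable holds more than one row, add a PARTITION BY on the table's key column (whatever that is in your schema) so the numbering restarts per row.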
We have 2 different U_ID (1, 2) and I want a SELECT query to display,
1. For U_ID = 1, we have 2 parent U_NM values (Design & Plan), and Plan has 2 children (Cust Plan 1 & Cust Plan 2).
2. I want to display the parent U_NM rows ORDER BY U_ORD.
3. If a parent has child elements, they need to show immediately under that parent, ORDER BY U_DT.
4. For U_ID = 2, we don't have any children, so display ORDER BY U_ORD (see the sketch after this list).
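A hedged sketch of the usual pattern, with dbo.UnitList, UnitKey and ParentKey standing in for whatever table and parent/child link actually exists (they are not from the post): sort every row by its parent's U_ORD first, put the parent row itself ahead of its children, then sort children by U_DT.

-- Hedged sketch: dbo.UnitList, UnitKey and ParentKey are assumed names.
SELECT u.U_ID, u.U_NM, u.U_ORD, u.U_DT
FROM dbo.UnitList AS u
LEFT JOIN dbo.UnitList AS p
       ON p.UnitKey = u.ParentKey                          -- child -> parent link
ORDER BY u.U_ID,
         COALESCE(p.U_ORD, u.U_ORD),                       -- keep children with their parent
         CASE WHEN u.ParentKey IS NULL THEN 0 ELSE 1 END,  -- parent row first
         u.U_DT;                                           -- children by date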
I have a table that has the records as shown below
Id   From        To
1    Delhi       Noida
2    Faridabad   Kanpur
3    Noida       Delhi
4    Kanpur      Faridabad
5    Banglore    Mumbai
6    G           H
7    I           J
8    Mumbai      Banglore
I want the results in the following order
5    Banglore    Mumbai
8    Mumbai      Banglore
1    Delhi       Noida
3    Noida       Delhi
2    Faridabad   Kanpur
4    Kanpur      Faridabad
6    G           H
7    I           J
As in the records above, there is a complete round trip from Banglore to Mumbai and from Mumbai to Banglore, and the two legs should appear together.
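One way is to build a direction-independent route key from the two city names (smaller name first), so both legs of a round trip share the same key and sort next to each other; a sketch (dbo.Trips is an assumed table name):

-- Both directions of a trip share the same route key, so they sort together;
-- within a route the lower Id comes first.
SELECT t.Id, t.[From], t.[To]
FROM dbo.Trips AS t
ORDER BY
    CASE WHEN t.[From] < t.[To] THEN t.[From] ELSE t.[To] END,   -- route key, part 1
    CASE WHEN t.[From] < t.[To] THEN t.[To]   ELSE t.[From] END, -- route key, part 2
    t.Id;

This matches the sample output, which happens to be alphabetical by route; if the real rule is "order routes by the first trip entered", order by the route's minimum Id (a windowed MIN over the same two CASE keys) instead.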
I have a table with the following columns: employeeSessionID, OpDate, OpHour, sessionStartTime, sessionCloseTime. I need to see how many users remain active per hour. I can calculate how many logged in per hour, but I am stumped on how to count how many are active per hour. I have a single table that stores login data, and I have created a query that pulls only the data needed from that table into a temp table (below). Also note it is possible that sessionCloseTime is null if the device has not been logged out; that would need to be counted as active.
TABLE NAME: #empSessionLog (contains the time stamp data OpDate, sessionStartTime and sessionCloseTime)

OpDate        sessionStartTime             sessionCloseTime
2015-01-20    2015-01-20 14:32:59.130      2015-01-20 14:33:14.6299166
2015-01-20    2015-01-20 06:58:33.730      2015-01-20 15:27:16.9133442
2015-01-20    2015-01-20 09:56:22.840      2015-01-20 17:56:29.7555853
2015-01-20    2015-01-20 05:59:18.613      2015-01-20 14:05:19.0426707
[code]....
I can see how many sessions logged in per hour with the following statement:
SELECT opDate,
       FORMAT(DATEPART(HOUR, sessionStartTime), '00') AS opHour,
       COUNT(*) AS Total
FROM #empSessionLog
GROUP BY opDate, FORMAT(DATEPART(HOUR, sessionStartTime), '00')
ORDER BY opDate, FORMAT(DATEPART(HOUR, sessionStartTime), '00') ASC

Results:
opDate        opHour    Total
2015-01-20    04        1
[code]....
Where I am stuck is how do I count the sessions that remain active per hour until the session is closed with the sessionCloseTime.
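The usual trick is to join the sessions to a list of hour buckets and count a session in every bucket it overlaps, treating a NULL sessionCloseTime as still active; a sketch against #empSessionLog:

-- Count each session in every hour bucket it overlaps on its OpDate.
WITH Hrs AS
(
    SELECT TOP (24) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS opHour
    FROM sys.all_objects                       -- any handy row source for 24 rows
)
SELECT s.OpDate,
       h.opHour,
       COUNT(*) AS ActiveSessions
FROM #empSessionLog AS s
JOIN Hrs AS h
      ON s.sessionStartTime <  DATEADD(HOUR, h.opHour + 1, CAST(s.OpDate AS datetime))   -- started before the bucket ends
     AND (s.sessionCloseTime IS NULL
          OR s.sessionCloseTime >= DATEADD(HOUR, h.opHour, CAST(s.OpDate AS datetime)))  -- still open when it starts
GROUP BY s.OpDate, h.opHour
ORDER BY s.OpDate, h.opHour;

Sessions that run past midnight only count against their own OpDate here; spanning days would need the buckets generated per calendar date rather than per OpDate.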
I am having some teething problems while installing SQL on a 3 node cluster. Within the Cluster configuration I have 3 Cluster Groups with each of them having their associated disk resources. All these disk resources physically exist on a SAN.
The actual cluster is running absolutely fine and I can access all the disks from their respective owner node. The problem only starts when I start installing SQL Server 2005 on this cluster. I specify the Cluster group from the Cluster Group Selection and choose the desired partition and then the error message pops up
"There is not enough diskspace on the destination disk for the current SQL Server data files. To proceed, free up disk space to make room for data files, or install the data files to a different drive"
But the disk I am trying to install it on is 264Gb and none of it is used. I have also tried to change it to a different disk within the same Cluster group but to no avail. I have even tried to install it in a different cluster group all together but I get the same error message.
I have googled around but haven't found anything so far. The disks have got full permissions for the account I am installing SQL with.
I have used Aasim Abdullah's (link below) stored procedure to dynamically generate code for deletion from child tables based on a parent with a certain filter condition. But I am getting output that is not correct (Query 1). I would like to have the output shown in Query 2.
Link: [URL]
--[Patient] is the Parent table, [Case] is child table and [ChartInstanceCase] is grand child
--When I am deleting a grand child table, it should be linked to child table first followed by Parent
--- Query 1
DELETE Top(100000) FROM [dbo].[ChartInstanceCase]
FROM [dbo].[Patient]
INNER JOIN [dbo].[Case] ON [Patient].[PatientID] = [Case].[PatientID]
INNER JOIN [dbo].[ChartInstanceCase] ON [Case].[CaseID] = [ChartInstanceCase].[CaseId]
WHERE [Patient].PracticeID = '55';
--Query 2
DELETE Top(100000) [dbo].[ChartInstanceCase]
FROM [dbo].[ChartInstanceCase]
INNER JOIN [dbo].[Case] ON [ChartInstanceCase].[CaseId] = [Case].[CaseID]
INNER JOIN [dbo].[Patient] ON [Patient].[PatientID] = [Case].[PatientID]
WHERE [Patient].PracticeID = '55';
How do I modify the SP 'dbo.uspCascadeDelete' to get the output as in Query 2?
I am newbie in SQL Clustering. I have set up a Windows Server Cluster with 2 nodes and am having the following problem with Physical Disk resource for cluster groups:
My Default Cluster Group (named Cluster Group) has IP Address, Network Name, Physical Disk and MSDTC resources. In addition to that my Default SQL Server instance resources are also in this group. I had this initial set up for Active/Passive mode.
Now I am trying to set up a SQL cluster in Active/Active mode. For this I have to install another instance of SQL Server in the existing cluster and make a separate cluster group for its resources. I made a new cluster group (SQL Instance Group) with an IP Address and a Network Name resource for this new instance, but I don't have any Physical Disk resource to allocate to it. As such, while installing the SQL Server instance I get stuck when I'm asked to select the quorum disk to be used.
Is it possible to configure two quorum disks, one for each group? What's the concept of dedicated disks resource for each sql instance in a group? Is this same as the quorum disk? If this is not a shared disk how do I configure a dedicated disk resource for my second cluster group (SQL Instance Group)?
We are going through the process of scoping an active/active cluster at one site. I was wondering whether there would be any issues with mirroring (DB by DB) off the cluster onto a non-clustered server at an alternate DRP site.
My SQL Server tables have millions of records. I don't want to drop the tables.
My requirement is to change the column order of an existing table. For some tables I am able to save the change in the design window, like in the image below.
I have also generated a script and tried executing it in Management Studio, but I am unable to save the new column order; I get a 'Timeout expired' error after a couple of minutes. How can we save the new column order without dropping the table?
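Worth knowing: the engine has no in-place column reorder, so the designer (and any script it generates) is rebuilding the whole table behind the scenes, which is why it times out on millions of rows; the designer transaction time-out can be raised under Tools > Options > Designers. If the order only matters for how the data is presented, a view is the cheap alternative; a sketch with assumed names:

-- Hedged sketch: dbo.BigTable and its columns are assumed names.
-- The base table keeps its physical column order; consumers query the view.
CREATE VIEW dbo.vw_BigTable
AS
SELECT ColC,    -- listed in the order you actually want to see
       ColA,
       ColB
FROM dbo.BigTable;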
Goal: I want the result returned from sp_Test to be 8, 2, 4, 1, 3 (take a look at picture 1), based on the chronological list in the user-defined table type dbo.tvf_id.
Problem: When I execute the stored procedure sp_Test I retrieve the list ordered from 1 to 8. I don't know how to get the order I want.
Information: I'm using SQL Server 2012.
create table datatable (
    id int,
    name varchar(100),
    email varchar(10),
    phone varchar(10),
    cellphone varchar(10),
    none varchar(10)
);
insert into datatable values
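A table-valued parameter has no guaranteed order of its own, so the usual fix is to give the table type an explicit sequence column and ORDER BY it inside the procedure. The sketch below is an assumption-heavy rewrite (dbo.tvf_id_ordered and the procedure body are stand-ins, not the real definitions):

-- Hedged sketch: the type and procedure bodies are assumptions.
-- 'seq' records the position of each id in the caller's list (8, 2, 4, 1, 3 -> 1..5).
CREATE TYPE dbo.tvf_id_ordered AS TABLE (seq int NOT NULL, id int NOT NULL);
GO
CREATE PROCEDURE dbo.sp_Test_ordered
    @ids dbo.tvf_id_ordered READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT d.*
    FROM @ids AS i
    JOIN dbo.datatable AS d
          ON d.id = i.id
    ORDER BY i.seq;          -- rows come back in the caller's order: 8, 2, 4, 1, 3
END

The caller simply fills seq with 1, 2, 3... as it adds rows to the parameter.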
In SQL 2012: a query that joins 2 tables with an ORDER BY clause doesn't get sorted and the result set is not ordered. This happens when some of the columns in the WHERE criteria are in a unique index, that index is the one used for the join between the 2 tables, and all the columns of the unique index are in the WHERE criteria. In the query plan there is no sort operator. The workaround was to drop the unique index, or change it to a non-unique index. Once this was done, the execution plan changed to add the sort operator (even when the index was changed to non-unique and the join still used this index).
I am really puzzled by an apparent difference between a table's index key column order and its statistics column order. I was under the impression that index statistics mirror the index definition. However, in my db 2470 index ordinal definitions match their statistics definitions but 66 do not. I can also reproduce this discrepancy on 2008 R2, 2012 and 2014.
As per the definition of sys.stats_columns:
stats_column_id (int): 1-based ordinal within set of stats columns
This script duplicates this for me.
BEGIN TRAN
GO
use tempdb
GO
CREATE TABLE [dbo].[ItemProperties](
    [itmID] [int] NOT NULL,
    [cpID] [smallint] NOT NULL,
    [ipuID] [tinyint] NOT NULL,
[Code] ....
The result I get is this:
object_id    stats_name                           stats_column_list
1525580473   PK_ItemProperties_itmID_ipuID_cpID   itmID, cpID, ipuID,
and
object_id    index_name                           index_column_list
1525580473   PK_ItemProperties_itmID_ipuID_cpID   itmID, ipuID, cpID,
Also a query I used to discover this in my db is:
WITH stat AS
(
    SELECT s.object_id,
           s.name as stats_name,
           ( SELECT c.name + ', ' as [data()]
             FROM sys.stats_columns as sc
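For comparing the two orderings without the XML PATH concatenation, a flat query along these lines also works; it lists the index position and the statistics position for every key column so the mismatches stand out:

-- One row per index key column: key_ordinal vs. stats_column_id.
SELECT OBJECT_NAME(i.object_id)              AS table_name,
       i.name                                AS index_name,
       COL_NAME(ic.object_id, ic.column_id)  AS column_name,
       ic.key_ordinal                        AS index_position,
       sc.stats_column_id                    AS stats_position
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
      ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.stats AS s
      ON s.object_id = i.object_id AND s.name = i.name   -- index-backed stats share the index name
JOIN sys.stats_columns AS sc
      ON sc.object_id = s.object_id
     AND sc.stats_id  = s.stats_id
     AND sc.column_id = ic.column_id
WHERE ic.key_ordinal > 0                                  -- key columns only
ORDER BY table_name, index_name, ic.key_ordinal;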
I want to get the list of items present in an order based on the confidentiality code of the product/item and the confidentiality code of the user.
I display the list of orders in the first grid; by selecting an order in the first grid I display the items present in that order, based on the confidentiality code of each item.
Whenever an order in the first grid is selected, I want to display only the items whose confidentiality code is less than or equal to the confidentiality code of the logged-in user; other items should not display.
If all the items present in an order have a confidentiality code greater than the logged-in user's, then that order number should not display in the first grid.
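A hedged sketch of both filters, with dbo.Orders, dbo.OrderItems, ConfCode and the two variables all standing in for the real schema:

-- All object and column names here are assumptions.
DECLARE @UserConfCode    int = 3,      -- confidentiality code of the logged-in user
        @SelectedOrderNo int = 1001;   -- order picked in the first grid

-- First grid: only orders containing at least one item the user may see.
SELECT o.OrderNo
FROM dbo.Orders AS o
WHERE EXISTS (SELECT 1
              FROM dbo.OrderItems AS oi
              WHERE oi.OrderNo  = o.OrderNo
                AND oi.ConfCode <= @UserConfCode);

-- Second grid: items of the selected order the user may see.
SELECT oi.OrderNo, oi.ItemCode, oi.ConfCode
FROM dbo.OrderItems AS oi
WHERE oi.OrderNo  = @SelectedOrderNo
  AND oi.ConfCode <= @UserConfCode;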
We are trying to set up an active/active configuration of a SQL Server cluster, and we had a few questions.
Initially, we want to have 2 database servers that would share the same database (both reading/writing to the same tables). However, from reading the MS docs, we find that what they call an "active/active" configuration using a cluster needs 2 different disk sets, i.e. 2 separate databases. If this assumption is correct, how does the data get synchronised between the 2 databases (which are on the 2 different disk sets)?