Partitioning with a single filegroup or multiple filegroups: which one is best?
We have some reports running from a partitioned table. A few reports don't use the partition key, and after creating 400 partitions with 400 filegroups they are slow. What is the best practice: create 400 filegroups or a single filegroup?
Is this possible? I have tried using ROW_NUMBER() OVER (PARTITION BY ...) but I'm not sure how to group it correctly. Basically, every time there is a new 1 in new_commsstream within a personid, the row number should go up by one.
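A minimal sketch of one way to get that numbering, assuming a hypothetical table comms with an ordering column comm_date (only personid and new_commsstream come from the post): a running SUM of the 1-flags, instead of ROW_NUMBER, increments the counter only when a new stream starts.

-- Sketch only: comms and comm_date are assumed names; the running total needs SQL Server 2012+.
SELECT personid,
       comm_date,
       new_commsstream,
       SUM(CASE WHEN new_commsstream = 1 THEN 1 ELSE 0 END)
           OVER (PARTITION BY personid
                 ORDER BY comm_date
                 ROWS UNBOUNDED PRECEDING) AS stream_no
FROM comms;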
I have names in the database which I want to partition by last name. For example, last names starting with A, B, C, D should go to filegroup 1, and last names starting with E, F, G, H should go to filegroup 2.
I am trying to use the following function, but how do I specify in the function that last names starting with A, B, C, D should go to filegroup 1?
CREATE PARTITION FUNCTION myRangePF3 (char(20)) AS RANGE RIGHT FOR VALUES ('EX', 'RXE', 'XR');
Is there any way to modify the partition function to accomplish this?
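A minimal sketch of how the boundary values could be chosen, assuming filegroups named FG1, FG2 and FG3 (the names are placeholders): with RANGE RIGHT each boundary value belongs to the partition on its right, so everything below 'E' (names starting A-D) lands in the first partition and everything from 'E' up to but not including 'I' (names starting E-H) lands in the second.

-- Sketch only: FG1/FG2/FG3 are hypothetical filegroup names.
CREATE PARTITION FUNCTION pfLastName (char(20))
    AS RANGE RIGHT FOR VALUES ('E', 'I');   -- partition 1: < 'E', partition 2: 'E' up to 'I', partition 3: >= 'I'

CREATE PARTITION SCHEME psLastName
    AS PARTITION pfLastName TO ([FG1], [FG2], [FG3]);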
I can't seem to find a way to do the following:

create table part_table (col1 int, col2 datetime) on psX (datename(week, col2))

I want to partition based on the week number of a date field. So if I enter data like the following in my part_table, (1, 1/1/2007) should go into partition 1 for week #1 and (52, 12/21/2007) should go into partition 52 for week #52 of the year. I tried adding a computed column, but it says it's nondeterministic.
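DATENAME(week, ...) depends on the session's DATEFIRST setting, which is why the computed column is rejected as nondeterministic. One hedged workaround is to persist a deterministic 7-day bucket counted from a fixed anchor date and partition on that; the names below (pfWeek, week_bucket) and the 2007 anchor are assumptions for illustration.

-- Sketch only: buckets are 7-day ranges counted from 2007-01-01, not ISO week numbers.
CREATE PARTITION FUNCTION pfWeek (int)
    AS RANGE LEFT FOR VALUES (1, 2, 3 /* ... continue up to 52 */);

CREATE PARTITION SCHEME psX
    AS PARTITION pfWeek ALL TO ([PRIMARY]);

CREATE TABLE part_table
(
    col1 int,
    col2 datetime NOT NULL,
    -- DATEDIFF and CONVERT with style 112 are deterministic, so the column can be PERSISTED
    week_bucket AS (DATEDIFF(DAY, CONVERT(datetime, '20070101', 112), col2) / 7 + 1) PERSISTED
) ON psX (week_bucket);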
I am facing an issue generating a total sum and a daily sum from the table ThresholdData.
DailyTransactionAmount should be the sum of today's amounts in the table, and TransactionAmount should be the sum of all amounts in the table.
Basically,
1. I don't want to scan the ThresholdData table twice.
2. I don't want to create a temporary table, table variable, or CTE for this.
3. Is there any way to do this in a single query?
I assume a WHERE condition is not possible inside the PARTITION BY clause. I am trying a query something like the one given below:
SELECT TransactionDate,
       TransactionAmount,
       ROW_NUMBER() OVER (ORDER BY TransactionDate) AS TransactionCount,
       SUM(TransactionAmount) OVER (PARTITION BY id) AS TransactionAmount,
       SUM(TransactionAmount) OVER (PARTITION BY id, CONVERT(DATE, @TodaysTransactionDate)) AS DailyTransactionAmount
FROM ThresholdData
WHERE id = @id
  AND transactiondate >= DATEADD(d, -@TransactionDaysLimit, @TodaysTransactionDate)
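One hedged way to get both figures in a single scan is conditional window aggregation: keep the plain SUM() OVER for the overall total and wrap a CASE inside the SUM for today's rows. A sketch using the column names from the post (the alias TotalTransactionAmount is mine):

SELECT TransactionDate,
       TransactionAmount,
       ROW_NUMBER() OVER (ORDER BY TransactionDate) AS TransactionCount,
       SUM(TransactionAmount) OVER (PARTITION BY id) AS TotalTransactionAmount,
       SUM(CASE WHEN CONVERT(date, TransactionDate) = CONVERT(date, @TodaysTransactionDate)
                THEN TransactionAmount ELSE 0 END)
           OVER (PARTITION BY id) AS DailyTransactionAmount
FROM ThresholdData
WHERE id = @id
  AND TransactionDate >= DATEADD(DAY, -@TransactionDaysLimit, @TodaysTransactionDate);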
Hi, I need to create a partitioned table, but the column on which I need to create the partition may not have any logical ranges, so while creating or defining the partition function I cannot use a range like:

CREATE PARTITION FUNCTION my_part_func (NUMERIC(7)) AS RANGE LEFT FOR VALUES (1, 100, 1000);

Is there any way to define a partition function in SQL Server something like Oracle HASH partitions, where the logical range is unknown? Thanks, Sameer
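SQL Server has no HASH partitioning as such, but a commonly suggested workaround is to partition on a persisted computed column that buckets the key with CHECKSUM and a modulo. A sketch under assumed names (my_table, id, payload, hash_bucket, four buckets):

-- Sketch only: 4 hash-style buckets; all names other than the partition function are assumptions.
CREATE PARTITION FUNCTION my_part_func (int)
    AS RANGE LEFT FOR VALUES (0, 1, 2);          -- buckets 0..3

CREATE PARTITION SCHEME my_part_scheme
    AS PARTITION my_part_func ALL TO ([PRIMARY]);

CREATE TABLE my_table
(
    id          numeric(7)   NOT NULL,
    payload     varchar(100) NULL,
    hash_bucket AS (ABS(CHECKSUM(id)) % 4) PERSISTED   -- CHECKSUM is deterministic, so it can be persisted
) ON my_part_scheme (hash_bucket);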
DECLARE @DatePartitionFunction nvarchar(max) =
    N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime) AS RANGE RIGHT FOR VALUES (';
DECLARE @i datetime = '2007-09-01 00:00:00.000';
WHILE @i < '2008-10-01 00:00:00.000'
BEGIN
    SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10)) + '''' + N', ';
[Code] ....
Msg 7705, Level 16, State 2, Line 1 Could not implicitly convert range values type specified at ordinal 1 to partition function parameter type.
However, if I change it to datetime2, it works:
DECLARE @DatePartitionFunction nvarchar(max) =
    N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime2) AS RANGE RIGHT FOR VALUES (';
DECLARE @i datetime2 = '2007-09-01 00:00:00.000';
WHILE @i < '2008-10-01 00:00:00.000'
BEGIN
    SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10)) + '''' + N', ';
[Code] ...
Is the data type of the column used for partitioning. All data types are valid for use as partitioning columns, except text, ntext, image, xml, timestamp, varchar(max), nvarchar(max), varbinary(max), alias data types, or CLR user-defined data types.
In this case, why doesn't datetime work?
The version is as follows:
Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64) Dec 28 2012 20:23:12 Copyright (c) Microsoft Corporation Enterprise Evaluation Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
from [URL] .....
Table and index partitioning is supported in this edition
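A likely explanation (hedged, not confirmed by the error message alone): CAST(@i AS nvarchar(10)) uses the default datetime-to-string style ('Sep  1 2007 ...') and truncates it, producing literals the partition function cannot convert, while datetime2 converts to an ISO-style string whose first 10 characters ('2007-09-01') are still a valid date. Forcing an explicit style should let the datetime version work as well; a sketch reusing the variable names from the post, with an assumed monthly increment:

DECLARE @DatePartitionFunction nvarchar(max) =
    N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime) AS RANGE RIGHT FOR VALUES (';
DECLARE @i datetime = '2007-09-01 00:00:00.000';

WHILE @i < '2008-10-01 00:00:00.000'
BEGIN
    -- style 121 (yyyy-mm-dd hh:mi:ss.mmm) keeps the literal unambiguous
    SET @DatePartitionFunction += '''' + CONVERT(nvarchar(23), @i, 121) + ''', ';
    SET @i = DATEADD(MONTH, 1, @i);   -- assumed monthly boundaries
END;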
I have a heavy database, more than 100 GB for only six months of data. Every query on it takes a long time, and I don't have enough space to add more indexes, so I decided to try partitioning. I created a partition function on a date field so that all records for each month are assigned to a separate file. Is partitioning only applied to future data entry?
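Partitioning is not only for future rows: once the table (usually its clustered index) is rebuilt on the partition scheme, the existing rows are redistributed into the monthly partitions as well. A hedged sketch with placeholder names (dbo.BigTable, DateField, psMonthly, IX_BigTable_Date):

-- Sketch only: rebuilding the existing clustered index onto the partition scheme
-- moves the rows already in the table into their monthly partitions.
CREATE CLUSTERED INDEX IX_BigTable_Date
    ON dbo.BigTable (DateField)
    WITH (DROP_EXISTING = ON)
    ON psMonthly (DateField);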
I'm currently stuck with a table that has 350 million records. Querying this table is insanely slow, so I had a better look at the existing yearly partitioning. I already managed to partition at a month level, which improved querying a lot. I did this on the staging table, where I used an ALTER statement to split the 2015 partition into 12 months.
However, in our project we use Data Vault. This means that we have 4 tables (hub, sathub, link, satlink), all carrying 350 million records. The problem is that altering the partition function does not work; the server cannot handle this action. What is the best way to do this without having to drop and reload all the tables?
SELECT Node_ID, Day, Operation, AA, BB
FROM (SELECT CASE WHEN Operation LIKE 'NOTIFY' THEN SUM(Total_request) ELSE 0 END AS AA,
             CASE WHEN OPERATION LIKE 'SEARCH' THEN SUM(Total_requests) ELSE 0 END AS BB,
             Node_ID, DAY, Operation
[code]....
So I want to make two columns named after the operation; in the real code AA and BB are calculated with many counters. My code doesn't work, and I get the error: "not a single-group group function".
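The usual fix for that error is to put the CASE inside the aggregate rather than around it, so nothing non-aggregated is left outside the GROUP BY. A sketch using the column names from the fragment (the table name stats_table and the spelling Total_requests are assumptions):

SELECT Node_ID,
       [Day],
       SUM(CASE WHEN Operation = 'NOTIFY' THEN Total_requests ELSE 0 END) AS AA,
       SUM(CASE WHEN Operation = 'SEARCH' THEN Total_requests ELSE 0 END) AS BB
FROM stats_table           -- hypothetical table name
GROUP BY Node_ID, [Day];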
How do I use GROUP BY for the query below? I am getting duplicate results when I execute it, and I am not sure how to apply GROUP BY here. Please help.
SELECT C.USER_ID,
       A.comm_Name,
       B.UU_TITLE,
       FILENAME_URL = (CASE WHEN UU_Type = 'U' THEN UU_url ELSE UU_Filename END),
       C.User_FirstName + ' ' + C.User_LastName AS Author,
       CONVERT(CHAR(10), UU_TimeStamp, 101)     AS DateCreated,
       CONVERT(CHAR(10), UU_LastModified, 101)  AS DateModified,
       uu_AssocAuID,
       DOWNLOADS = (SELECT COUNT(*)
                    FROM Vportaldevstats.[dbo].stats_download A
                    WHERE A.stat_AuID = b.UU_AssocAuID)
FROM vportaldev.dbo.community A
INNER JOIN vportaldev.[dbo].user_upload B ON A.comm_CreatorUserID = B.UU_USERID
INNER JOIN vportaldev.[dbo].[user] C ON C.user_id = B.UU_USERID
I would like to know whether the SUM() function can use more than one group in the scope portion. An example is below:
Sum(Fields.Hour.Value, ("grp1", "grp2"))

This would help as I am doing a matrix report. I need to do a calculation in the measure area so that it will calculate based on the row and column grouping. Is this possible?
Can anyone tell me what the use of GROUP BY is in RS tables and how to use GROUP BY in tables (and when we have to use it)? If possible, please give me some examples.
I want to run a much larger SQL statement, but for example's sake this is a good starting point:
Code:
SELECT efName, elName, eAddress,
       SUM(CONVERT(money, bonus1) + CONVERT(money, bonus2) + CONVERT(money, bonus3)) AS TotalBonus,
       ePay
FROM tableEInfo
It is telling me that I have to use GROUP BY, but the problem is that most of my fields are text fields, which it looks like have to be converted in order to be used in a GROUP BY clause. Is it possible to use the SUM function with no GROUP BY clause?
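Yes, as long as the SELECT list contains only aggregates; once non-aggregated columns appear, they must be grouped. A sketch against the posted column names:

-- Grand total only: no GROUP BY needed because every selected expression is an aggregate.
SELECT SUM(CONVERT(money, bonus1) + CONVERT(money, bonus2) + CONVERT(money, bonus3)) AS TotalBonus
FROM tableEInfo;

-- Per-employee totals: the non-aggregated columns must appear in GROUP BY.
-- If those columns really are of type text, they would first need converting, e.g. CONVERT(varchar(100), efName).
SELECT efName, elName, eAddress, ePay,
       SUM(CONVERT(money, bonus1) + CONVERT(money, bonus2) + CONVERT(money, bonus3)) AS TotalBonus
FROM tableEInfo
GROUP BY efName, elName, eAddress, ePay;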
SELECT empid, YEAR(orderdate)
FROM [Sales].[Orders]
WHERE custid = 71
GROUP BY empid, YEAR(orderdate)
sample data resultset:
empid   orderdate
1       2006
1       2007
1       2008
Why does YEAR() return only one row per year when used with GROUP BY? When empid = 1, the year 2007 has 2 records and 2008 has 2 records in the first SELECT, but when I use YEAR(orderdate) in the second SELECT, only one row per year is returned.
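GROUP BY collapses all rows sharing the same (empid, year) into a single output row; the detail rows are still counted into the group, they just are not shown individually. Adding COUNT(*) to the posted query makes that visible:

SELECT empid,
       YEAR(orderdate) AS orderyear,
       COUNT(*)        AS orders_in_year   -- shows how many detail rows each group covers
FROM [Sales].[Orders]
WHERE custid = 71
GROUP BY empid, YEAR(orderdate);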
How can I group my data to show up like the following? I need to get the wr, the sample and the most current date in MS Access.

0805393, 01, 5/16/08
0805393, 02, 5/16/08
0805393, 03, 5/17/08
I tried Max(Y), and also grouping them and it's driving me crazy!... Thanks
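A hedged sketch of the usual pattern, with hypothetical names (samples, wr, sample_no, sample_date) since the real ones are not shown: group on the first two columns and take MAX of the date; the same idea works in Access SQL.

SELECT wr, sample_no, MAX(sample_date) AS latest_date
FROM samples                -- hypothetical table name
GROUP BY wr, sample_no;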
I'm trying to update a varchar field using SUM. I keep getting the error that the subquery returns more than one value.
UPDATE CIRSUB_M
SET TRM_DMO = SUBSTRING(TRM_DMO, 1, 11) +
              (SELECT CAST(SUM(COPIES) AS VARCHAR(5))
               FROM CIRSUB_M
               WHERE BIL_ORG = '02'
                 AND CRC_STS IN ('R', 'P', 'Q', 'T')
               GROUP BY PUB_CDE, DNR_NBR)
WHERE BIL_ORG = '02'
  AND CRC_STS IN ('R', 'P', 'Q', 'T')
Example
PUB_CDE  DNR_NBR       COPIES  TRM_DMO
THN      000000092637  100     A
THN      000000092637  200     B
THN      000000082455  100     A
THN      000000082455  200     B
THN      000000051779  100     A
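The subquery returns one row per (PUB_CDE, DNR_NBR) group, which is why it fails as a scalar. One hedged way around it is to aggregate in a derived table and join it back, so each row picks up the total for its own group; a sketch using the posted column names:

UPDATE m
SET    TRM_DMO = SUBSTRING(m.TRM_DMO, 1, 11) + CAST(t.TotalCopies AS varchar(5))
FROM   CIRSUB_M AS m
JOIN  (SELECT PUB_CDE, DNR_NBR, SUM(COPIES) AS TotalCopies
       FROM CIRSUB_M
       WHERE BIL_ORG = '02' AND CRC_STS IN ('R', 'P', 'Q', 'T')
       GROUP BY PUB_CDE, DNR_NBR) AS t
       ON t.PUB_CDE = m.PUB_CDE AND t.DNR_NBR = m.DNR_NBR
WHERE  m.BIL_ORG = '02' AND m.CRC_STS IN ('R', 'P', 'Q', 'T');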
SELECT bicycle_shop.bicycle_shop_id,
       bicycle_shop.company_name,
       order_.unit_price
FROM order_
INNER JOIN bicycle_shop
        ON order_.order_serial_number = bicycle_shop.order_serial_number
ORDER BY order_.unit_price DESC;
This query brings up a few records for each shop, one for each bike they bought along with each bike's price. How do I have these come out as a sum of the prices, so that each bike shop has a sum of revenue? Someone told me I need to use GROUP BY and the SUM function, but I don't know how.
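A sketch of what that could look like against the posted query: group by the shop columns and SUM the price (total_revenue is an assumed alias).

SELECT bs.bicycle_shop_id,
       bs.company_name,
       SUM(o.unit_price) AS total_revenue
FROM order_ AS o
INNER JOIN bicycle_shop AS bs
        ON o.order_serial_number = bs.order_serial_number
GROUP BY bs.bicycle_shop_id, bs.company_name
ORDER BY total_revenue DESC;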
For a database, we have 4 data files in a particular file group and the file sizes are almost 70 GB each.
Do I come across any performance issues if I create/pre-allocate an additional data file in the same file group so that the existing files don't grow too much?
On one server we had file growth, and we had to add a new hard drive and a new file on it. Now we have a new server with a huge hard drive, but all the files remain. Can I consolidate these files into one data file or not?
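A hedged sketch of the usual approach, with placeholder names (MyDatabase, ExtraDataFile): emptying a secondary file pushes its extents into the remaining files of the same filegroup, after which the now-empty file can be removed. The primary data file itself cannot be removed this way.

USE MyDatabase;
GO
DBCC SHRINKFILE (N'ExtraDataFile', EMPTYFILE);   -- move all extents out of the file
GO
ALTER DATABASE MyDatabase REMOVE FILE ExtraDataFile;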
I have a problem with an SQL-query and I don't know what the best solution would be to solve the problem.
/* INSERT INTO WERKS (WERKS.Z8601, WERKS.Z8602, WERKS.Z8603, WERKS.Z8604, WERKS.Z8605,
   WERKS.Z8606, WERKS.Z8607, WERKS.Z8608, WERKS.Z8609, WERKS.Z8610, WERKS.Z8611,
   WERKS.Z8621, WERKS.Z8622, WERKS.Z8623, WERKS.Z8624, WERKS.Z8625, WERKS.Z8626,
   WERKS.Z8627, WERKS.Z8628, WERKS.Z8629, WERKS.Z8630, WERKS.Z8631, WERKS.Z8632) */
SELECT 0,
       Stati.z4414,
       Stati.z4402,
       '', '', '',
       IsNull((SELECT Srtas.z02 FROM Srtas WHERE Srtas.z00 = Stati.z4400 AND Srtas.z01 = Stati.z4414), ''),
       IsNull((SELECT Klant.z0102 FROM Klant WHERE Klant.z0101 = Stati.z4402), ''),
       '', '', '',
       SUM(CASE WHEN Stati.z4407 = 200609 THEN Stati.z4409 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 = 200609 THEN Stati.z4410 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 = 200509 THEN Stati.z4409 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 = 200509 THEN Stati.z4410 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 BETWEEN 200510 AND 200609 THEN Stati.z4409 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 BETWEEN 200510 AND 200609 THEN Stati.z4410 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 BETWEEN 200410 AND 200509 THEN Stati.z4409 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 BETWEEN 200410 AND 200509 THEN Stati.z4410 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 = 200609 THEN Stati.z4411 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 = 200509 THEN Stati.z4411 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 BETWEEN 200510 AND 200609 THEN Stati.z4411 ELSE 0 END),
       SUM(CASE WHEN Stati.z4407 BETWEEN 200410 AND 200509 THEN Stati.z4411 ELSE 0 END)
FROM STATI
WHERE (Stati.z4402 BETWEEN '40000' AND 'ZONE6')
  AND (Stati.z4414 BETWEEN '2005028' AND '2005028')
  AND (Stati.z4417 = 'A')
GROUP BY Stati.z4414, Stati.z4402
I get the following error:
Msg 8120, Level 16, State 1, Line 25 Column 'STATI.Z4400' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I know it has something to do with the SELECT on the table SRTAS, but what's the best way to solve this problem without the risk of getting a wrong result?
The SELECT on SRTAS is there to get the "description" of STATI.Z4414, which is stored in the table SRTAS. I only want to group on the fields that will be inserted into WERKS.Z8602, WERKS.Z8603, WERKS.Z8604, WERKS.Z8605, WERKS.Z8606. So would adding STATI.Z4400 to the GROUP BY give me wrong results?
This query is dynamically generated from within my program based on what the user selected.
Also, if there are better ways to write the query, I would be happy to get some hints and tips, but if possible without stored procedures.
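One hedged option that avoids adding STATI.Z4400 to the GROUP BY is to replace the correlated subqueries with LEFT JOINs and wrap the description columns in MAX(); assuming each group has at most one matching SRTAS/KLANT row, MAX simply picks that value without changing the sums. A trimmed sketch (the aliases srtas_descr/klant_descr are mine, and the SUM(CASE ...) columns stay exactly as in the original):

SELECT 0,
       Stati.z4414,
       Stati.z4402,
       ISNULL(MAX(Srtas.z02), '')   AS srtas_descr,   -- replaces the correlated subquery on SRTAS
       ISNULL(MAX(Klant.z0102), '') AS klant_descr,   -- replaces the correlated subquery on KLANT
       SUM(CASE WHEN Stati.z4407 = 200609 THEN Stati.z4409 ELSE 0 END)
       -- ... remaining SUM(CASE ...) columns unchanged ...
FROM STATI
LEFT JOIN SRTAS ON Srtas.z00 = Stati.z4400 AND Srtas.z01 = Stati.z4414
LEFT JOIN KLANT ON Klant.z0101 = Stati.z4402
WHERE (Stati.z4402 BETWEEN '40000' AND 'ZONE6')
  AND (Stati.z4414 BETWEEN '2005028' AND '2005028')
  AND (Stati.z4417 = 'A')
GROUP BY Stati.z4414, Stati.z4402;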
Well, adding it to a GROUP BY or an aggregate function skews the result set. How do I write this query so it displays as I need it to? This is what I have thus far, and it works as it should UNTIL I add in the line:
cast(cte.[C] As float)/cast(sum(cte.[C]) over() as float)*100 As [Rate1],
It presents the error: Msg 8120, Level 16, State 1, Line 35 Column 'cte.[C]' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
This is my full query, with 3 CTEs involved to get the actual result set I am after:
;WITH cte AS
(
    SELECT [state],
           CASE WHEN EXISTS (SELECT 1 FROM table2 R WHERE R.centername = d.centername) THEN 1 ELSE 0 END AS [L],
           CASE WHEN EXISTS (SELECT 1 FROM table3 C WHERE C.centername = d.centername) THEN 1 ELSE 0 END AS [C]
    FROM maintable d
),
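Since the full query is not shown, the following is only a hedged sketch of the standard fix for Msg 8120 in this situation: when the outer query has a GROUP BY, the grand total has to be built from the grouped aggregate, i.e. SUM(...) nested inside SUM(...) OVER (), instead of referencing cte.[C] directly. The tiny inline CTE is mock data just to make the pattern runnable.

;WITH cte AS
(
    SELECT 'NY' AS [state], 1 AS [C] UNION ALL
    SELECT 'NY', 0 UNION ALL
    SELECT 'CA', 1
)
SELECT [state],
       SUM(cte.[C]) AS [C],
       CAST(SUM(cte.[C]) AS float)
       / CAST(SUM(SUM(cte.[C])) OVER () AS float) * 100 AS [Rate1]   -- window over the grouped aggregate
FROM cte
GROUP BY [state];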
Why does DBCC SHRINKFILE with EMPTYFILE not redistribute data evenly across multiple files in the primary filegroup?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB.
2) Create a table called TEST on PRIMARY.
3) Insert 40MB of data into TEST.
4) Create another file called temp in PRIMARY, size 200MB.
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from FGTest into the temp file.
6) Add another 2 files called DATA2 and DATA3, both 200MB.
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3.
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly.
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
What is the best solution for this stored procedure query? I'm getting the following error:
Column 'dbo.Applicants.submitted' is invalid in the select list because it is not contained in an aggregate function and there is no GROUP BY clause. Here is my select query:
Code:
SELECT DISTINCT DATENAME(MONTH, submitted) AS mon,
       MAX(submitted) AS SubmitDate
FROM dbo.Applicants
WHERE ((CASE WHEN MONTH(submitted) > 8 THEN YEAR(submitted) + 1 ELSE YEAR(submitted) END) = @AcYr)
ORDER BY SubmitDate
The submitted field is a date field. I don't want to add the submitted field to the GROUP BY, as I want to group by month, not date. Is there any solution that avoids grouping by date?
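One hedged option is to group on the month expression itself rather than on the date, which also removes the need for DISTINCT; a sketch against the posted query:

SELECT DATENAME(MONTH, submitted) AS mon,
       MAX(submitted)             AS SubmitDate
FROM dbo.Applicants
WHERE ((CASE WHEN MONTH(submitted) > 8 THEN YEAR(submitted) + 1 ELSE YEAR(submitted) END) = @AcYr)
GROUP BY DATENAME(MONTH, submitted)
ORDER BY SubmitDate;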
COL1 | COL2 | COL3  | COL4
1    | FD   | DR. A | Y
2    | FD   | DR. A | Y
3    | FD   | DR. A | N
4    | FD   | DR. A | Y
5    | FD   | DR. A | Y
6    | PF   | DR. A | Y
7    | FD   | DR. B | Y
8    | PF   | DR. B | N
Consider the script below:
SELECT COL2, COL3, COUNT(COL1) AS TOTALS
FROM CASES
GROUP BY COL2, COL3
ORDER BY COL3, COL2
The script above produces the following output:
COL2 | COL3  | TOTALS
FD   | DR. A | 5
PF   | DR. A | 1
FD   | DR. B | 1
PF   | DR. B | 1
I need to add one more column to the script that counts records with 'Y' in COL4 for each COL2 category (FD, PF). The final dataset would look like this:
COL2 | COL3  | TOTALS | NEWCOL
FD   | DR. A | 5      | 4
PF   | DR. A | 1      | 1
FD   | DR. B | 1      | 1
PF   | DR. B | 1      | 0
I am having a hard time trying to use COUNT() on multiple columns with the GROUP BY restrictions that exist.
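A conditional aggregate sidesteps that restriction: a SUM over a CASE expression counts the 'Y' rows per group without COL4 ever appearing in the GROUP BY. A sketch against the posted script:

SELECT COL2,
       COL3,
       COUNT(COL1)                                 AS TOTALS,
       SUM(CASE WHEN COL4 = 'Y' THEN 1 ELSE 0 END) AS NEWCOL   -- counts only the 'Y' rows
FROM CASES
GROUP BY COL2, COL3
ORDER BY COL3, COL2;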
While running the query below, I get the following error. Am I missing any columns that should be included in the SELECT column list?
Msg 8120, Level 16, State 1, Line 1
Column 'sys.master_files.database_id' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
SELECT a.[Database Name], a.[Type], a.[Size in MB], b.LastUserUpdate
FROM (SELECT database_id,
             [Database Name] = DB_NAME(database_id),
             [Type] = CASE WHEN Type_Desc = 'ROWS' THEN 'Data File(s)'
                           WHEN Type_Desc = 'LOG'  THEN 'Log File(s)'
                           ELSE Type_Desc END,
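Since the rest of the query is cut off, this is only a hedged guess at the intent of the inner query: once SUM(size) (or any other aggregate) is used, every non-aggregated column, including database_id, must appear in the GROUP BY. A minimal sketch:

SELECT database_id,
       DB_NAME(database_id) AS [Database Name],
       [Type] = CASE WHEN type_desc = 'ROWS' THEN 'Data File(s)'
                     WHEN type_desc = 'LOG'  THEN 'Log File(s)'
                     ELSE type_desc END,
       SUM(size) * 8 / 1024 AS [Size in MB]   -- size is reported in 8 KB pages
FROM sys.master_files
GROUP BY database_id, type_desc;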