SQL Server 2012 :: Why Partition Function Works For Datetime2 But Not For Datetime
Dec 20, 2013
DECLARE @DatePartitionFunction nvarchar(max) = N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime) AS RANGE RIGHT FOR VALUES (';
DECLARE @i datetime = '2007-09-01 00:00:00.000';
WHILE @i < '2008-10-01 00:00:00.000'
BEGIN
SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10)) + '''' + N', ';
[Code] ....
Msg 7705, Level 16, State 2, Line 1
Could not implicitly convert range values type specified at ordinal 1 to partition function parameter type.
However, if I change it to datetime2, it works:
DECLARE @DatePartitionFunction nvarchar(max) = N'CREATE PARTITION FUNCTION DatePartitionFunction (datetime2) AS RANGE RIGHT FOR VALUES (';
DECLARE @i datetime2 = '2007-09-01 00:00:00.000';
WHILE @i < '2008-10-01 00:00:00.000'
BEGIN
SET @DatePartitionFunction += '''' + CAST(@i as nvarchar(10)) + '''' + N', ';
[Code] ...
The documentation describes the partition function's input parameter type as follows: "Is the data type of the column used for partitioning. All data types are valid for use as partitioning columns, except text, ntext, image, xml, timestamp, varchar(max), nvarchar(max), varbinary(max), alias data types, or CLR user-defined data types."
In this case, why doesn't datetime work?
The version is as follows:
Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)
Dec 28 2012 20:23:12
Copyright (c) Microsoft Corporation
Enterprise Evaluation Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
from [URL] .....
Table and index partitioning is supported in this edition
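A likely factor (not confirmed in the thread, so treat it as an assumption): the two types convert to strings differently inside the dynamic SQL, and it is the truncated string, not the partition function itself, that fails to convert back. A quick check:

DECLARE @d1 datetime  = '2007-09-01 00:00:00.000';
DECLARE @d2 datetime2 = '2007-09-01 00:00:00.000';

-- datetime uses the legacy default style "mon dd yyyy hh:miAM", so 10 characters
-- truncate it to 'Sep  1 200', which cannot be converted back into a valid boundary value
SELECT CAST(@d1 AS nvarchar(10)) AS datetime_as_string;

-- datetime2 converts in ISO format 'yyyy-mm-dd hh:mm:ss.nnnnnnn', so the first
-- 10 characters are exactly '2007-09-01', which converts back cleanly
SELECT CAST(@d2 AS nvarchar(10)) AS datetime2_as_string;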
I have a heavy database, more than 100 GB for only six months of data. Every query on it takes a long time, and I don't have enough space to add more indexes, so I decided to do partitioning. I created a partition function on a date field, and each month's records are assigned to a separate file. Is partitioning only for future data entry, or does it also apply to the existing data?
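For reference, a minimal sketch of what a monthly partition function and scheme can look like (the names, boundary dates, and filegroups are hypothetical). When an existing table is rebuilt on the scheme, the rows already in the table are distributed into their partitions as well, so it is not only for future inserts:

-- One boundary per month; RANGE RIGHT means each boundary starts a new month's partition
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2014-02-01', '2014-03-01', '2014-04-01');

-- One filegroup per partition (number of boundaries + 1)
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly TO (FG_2014_01, FG_2014_02, FG_2014_03, FG_2014_04);

-- Rebuilding the clustered index on the scheme moves the existing rows into their partitions
-- (add WITH (DROP_EXISTING = ON) if a clustered index with this name already exists)
CREATE CLUSTERED INDEX cix_MyTable ON dbo.MyTable (DateField) ON psMonthly (DateField);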
I have one stored proc that uses the Row_number over partition that looks like this:
Select TargetID, Academic_Year_id, Course_Mode, UK_Enrol, Int_Enrol, Notes, Revision_Number
from (
    SELECT ROW_NUMBER() OVER (partition by [Academic_Year_id] order by [Revision_Number] DESC) as [RevNum],
           TargetID, Academic_Year_id, Course_Mode, Target_Year, UK_Enrol, Int_Enrol, Notes, Revision_Number
    FROM tbl_targets
    where course_mode = @course_mode
) RV
where (RV.RevNum = 1)
Now the next stored proc needs to use the above, but I need to add the Academic_year from the tbl_acyear_lookup table and also filter on target_year = 'year 1'.
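One possible shape for that second proc (a sketch only; it assumes tbl_acyear_lookup has Academic_Year_id and Academic_Year columns, which is not confirmed in the post):

SELECT t.TargetID, t.Academic_Year_id, ay.Academic_Year, t.Course_Mode,
       t.UK_Enrol, t.Int_Enrol, t.Notes, t.Revision_Number
FROM (
    SELECT ROW_NUMBER() OVER (PARTITION BY Academic_Year_id ORDER BY Revision_Number DESC) AS RevNum,
           TargetID, Academic_Year_id, Course_Mode, Target_Year, UK_Enrol, Int_Enrol, Notes, Revision_Number
    FROM tbl_targets
    WHERE course_mode = @course_mode
      AND Target_Year = 'year 1'                 -- the additional filter
) t
JOIN tbl_acyear_lookup ay                         -- assumed lookup columns
  ON ay.Academic_Year_id = t.Academic_Year_id
WHERE t.RevNum = 1;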
I'm looking for clarity on partition switching. The idea is to run many BULK INSERT statements into tables dbo.X_n in parallel and, when the BULK INSERT for table dbo.X_n is completed, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.
In learning about partition switching (in part) from The Data Loading Performance Guide under Partition SWITCH, I read the instruction to copy the main table exactly to become a target. But in that same step (#1), I also read that we need to change the filegroup of the target (dbo.X_n) from the default filegroup. Then it says I need to match the indexes, and it lists the filegroup as something we need to match with the main table.
As an overview of the partition switching strategy, I think the whole point of BULK INSERT with partitioning is to have separate files (in the same group) to enable concurrent uploading, where each table has its own file. Once the upload to a table (dbo.X_n) is complete, we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn't actually move, just the metadata for it.
So the guidance seems contradictory: "Don't have the same filegroup on your target as the main table" versus "You must have the same filegroup on your target as the main table."
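For what it's worth, SWITCH does require the source and the target partition to live on the same filegroup, and it is a metadata-only operation. A minimal sketch of one staging table's load-and-switch step (hypothetical names and file path; the staging table must match the main table's columns, indexes, and constraints, and carry a CHECK constraint matching the target partition's range):

-- Load one staging table; TABLOCK helps get minimal logging under the right recovery model
BULK INSERT dbo.X_1
FROM 'C:\load\file_1.dat'          -- hypothetical file path
WITH (TABLOCK);

-- Metadata-only switch: the rows do not physically move
ALTER TABLE dbo.X_1 SWITCH TO dbo.bigdaddy PARTITION 1;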
I'm a relative SQL Server newbie and have developed a function that converts mm/dd/yyyy to yyyy/mm/dd for use in a DT_DBDATE format, for insert into a column of type smalldatetime.
I receive the following errors when using the function in the Derived Column Transformation Editor. First the function, then the error when using it as the expression in the Derived Column Transformation Editor.
Can anyone explain how I can make this transformation work in this context, or suggest a way to either do the transformation more easily or avoid it altogether?
expression "lipper.dbo.convdate(eomdate)" failed. The token "." at line number "1", character number "11" was not recognized. The expression cannot be parsed because it contains invalid elements at the location specified.
Error at Data Flow Task [Derived Column [111]]: Cannot parse the expression "lipper.dbo.convdate(eomdate)". The expression was not valid, or there is an out-of-memory error.
Error at Data Flow Task [Derived Column [111]]: The expression "lipper.dbo.convdate(eomdate)" on "input column "eomdate" (165)" is not valid.
Error at Data Flow Task [Derived Column [111]]: Failed to set property "Expression" on "input column "eomdate" (165)".
(Microsoft Visual Studio)
===================================
Exception from HRESULT: 0xC0204006 (Microsoft.SqlServer.DTSPipelineWrap)
------------------------------ Program Location:
at Microsoft.SqlServer.Dts.Pipeline.Wrapper.CManagedComponentWrapperClass.SetInputColumnProperty(Int32 lInputID, Int32 lInputColumnID, String PropertyName, Object vValue)
at Microsoft.DataTransformationServices.Design.DtsDerivedColumnComponentUI.SaveColumns(ColumnInfo[] colNames, String[] inputColumnNames, String[] expressions, String[] dataTypes, String[] lengths, String[] precisions, String[] scales, String[] codePages)
at Microsoft.DataTransformationServices.Design.DtsDerivedColumnFrameForm.SaveAll()
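For context, the SSIS expression language used by the Derived Column transformation cannot call a T-SQL user-defined function such as lipper.dbo.convdate, which is why the expression fails to parse. One alternative (a sketch; the source table name is assumed) is to do the conversion in T-SQL in the data-flow source query instead, using CONVERT styles 101 (mm/dd/yyyy) and 111 (yyyy/mm/dd):

-- Convert the mm/dd/yyyy text to yyyy/mm/dd text in the source query,
-- so no Derived Column expression is needed
SELECT CONVERT(varchar(10), CONVERT(datetime, eomdate, 101), 111) AS eomdate_converted
FROM dbo.SourceTable;   -- hypothetical source table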
Create table #test
(
    id int primary key,
    Name varchar(100)
)
insert into #test values (1, 'John')
insert into #test values (2, 'Walker')
[Code] ....
-- Query 1:
update #test set name = 'Joney' where id = 1

-- Query 2:
set rowcount 1
update #test set name = 'Joney' where id = 1
set rowcount 0
1. The #test table has a primary key and a clustered index.
2. Obviously only one row will be available for a given id.
3. In query 1, will SQL Server keep looking for matching rows even after it has found one row?
4. Will query 2 really gain any performance?
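One way to check this for yourself (a sketch): compare the I/O and timing statistics, and the actual execution plans, of the two forms.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Query 1: the unique clustered index on id already limits the seek to a single row
update #test set name = 'Joney' where id = 1;

-- Query 2: ROWCOUNT adds a row limit on top of the same single-row seek
set rowcount 1;
update #test set name = 'Joney' where id = 1;
set rowcount 0;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;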
I have names in the database which I want to partition by last name - for example, last names starting with A, B, C, D should go to file group 1, and last names starting with E, F, G, H should go to file group 2.
I am trying to use the following function - but how do I specify in the function that last names starting with A, B, C, D should go to file group 1?
CREATE PARTITION FUNCTION myRangePF3 (char(20)) AS RANGE RIGHT FOR VALUES ('EX', 'RXE', 'XR');
Is there any way to modify the partition function to accomplish this?
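A sketch of one way to express those letter groups with a range function (the scheme and filegroup names are hypothetical): with RANGE RIGHT, everything that sorts below the first boundary falls into the first partition, so A-D land on the first filegroup without needing their own boundary values.

-- Each boundary starts a new letter group
CREATE PARTITION FUNCTION pfLastName (char(20))
AS RANGE RIGHT FOR VALUES ('E', 'I', 'M', 'Q', 'U');

-- 5 boundaries => 6 partitions: A-D, E-H, I-L, M-P, Q-T, U-Z
CREATE PARTITION SCHEME psLastName
AS PARTITION pfLastName TO (FG1, FG2, FG3, FG4, FG5, FG6);

Note that what sorts where also depends on the column's collation (case sensitivity, accents), so the boundary values may need adjusting for your collation.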
I can't seem to find a way to do the following:

create table part_table
(
    col1 int,
    col2 datetime
) on psX (datename(week, col2))

I want to partition based on the week number of a date field. So if I enter data like the following into my part_table:
(1, 1/1/2007) should go into partition 1 for week #1
(52, 12/21/2007) should go into partition 52 for week #52 of the year
I tried adding a computed column, but it says it is nondeterministic.
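DATENAME(week, ...) depends on language and SET DATEFIRST settings, which is why a persisted computed column based on it is rejected as nondeterministic. One possible workaround (a sketch; it assumes DATEPART(ISO_WEEK, ...) is accepted as deterministic on your build, and ISO week numbers differ slightly from calendar weeks):

-- psX is assumed to be a partition scheme over an int partition function with weekly boundaries
create table part_table
(
    col1    int,
    col2    datetime,
    week_no AS DATEPART(ISO_WEEK, col2) PERSISTED   -- deterministic, so it can be persisted and used as the partitioning column
) on psX (week_no)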
I am facing an issue generating a total sum and a daily sum from the table ThresholdData.
DailyTransactionAmount should be the sum of today's amounts in the table; TransactionAmount should be the sum of all amounts in the table.
Basically,
1. I don't want to scan the ThresholdData table twice.
2. I don't want to create a temporary table, table variable, or CTE for this.
3. Is there any way to get this done in a single query?
I assume a WHERE-style filter is not possible inside the PARTITION BY clause. I am trying a query something like the one given below:
SELECT TransactionDate,
       TransactionAmount,
       ROW_NUMBER() over (order by TransactionDate) AS TransactionCount,
       SUM(TransactionAmount) over (partition by id) AS TransactionAmount,
       SUM(TransactionAmount) over (partition by id, CONVERT(DATE, @TodaysTransactionDate)) AS DailyTransactionAmount
FROM ThresholdData
WHERE id = @id
  AND transactiondate >= dateadd(d, -@TransactionDaysLimit, @TodaysTransactionDate)
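One pattern that keeps this to a single scan with no temporary objects (a sketch): move the date condition into a CASE expression inside the windowed SUM, so rows from other days contribute zero to the daily total.

SELECT TransactionDate,
       TransactionAmount,
       ROW_NUMBER() OVER (ORDER BY TransactionDate) AS TransactionCount,
       SUM(TransactionAmount) OVER (PARTITION BY id) AS TotalTransactionAmount,
       SUM(CASE WHEN CONVERT(date, TransactionDate) = CONVERT(date, @TodaysTransactionDate)
                THEN TransactionAmount ELSE 0 END)
           OVER (PARTITION BY id) AS DailyTransactionAmount
FROM ThresholdData
WHERE id = @id
  AND TransactionDate >= DATEADD(d, -@TransactionDaysLimit, @TodaysTransactionDate);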
Hi, I need to create a partitioned table, but the column on which I need to partition may not have any logical ranges, so while creating or defining the partition function I cannot use a range, like:

CREATE PARTITION FUNCTION my_part_func (NUMERIC(7)) AS RANGE LEFT FOR VALUES (1, 100, 1000);

Is there any way to define a partition function in SQL Server that works something like Oracle HASH partitions, where the logical range is unknown?

Thanks
Sameer
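SQL Server partition functions are range-based only, so there is no direct HASH equivalent. A common workaround (a sketch, with hypothetical names and bucket count) is to partition on a persisted computed column that buckets the key, for example with a modulo:

-- 7 boundaries => 8 partitions, one per bucket value 0..7
CREATE PARTITION FUNCTION pfHashBucket (int)
AS RANGE LEFT FOR VALUES (0, 1, 2, 3, 4, 5, 6);

CREATE PARTITION SCHEME psHashBucket
AS PARTITION pfHashBucket ALL TO ([PRIMARY]);   -- or one filegroup per bucket

CREATE TABLE my_part_table
(
    id      numeric(7)   NOT NULL,
    payload varchar(100) NULL,
    bucket  AS (CAST(id AS int) % 8) PERSISTED  -- pseudo-hash of the key
) ON psHashBucket (bucket);

The trade-off is that queries filtering on id alone won't get partition elimination unless they also filter on the bucket expression.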
I've got a common problem: I get a message that server access is denied.
But here is what is different about my error: when I use the same settings, the same database, and the same connection string (on an MSDE server) there is no problem.
On the SQL Server I have Windows authentication, but I added the accounts ASPNET and sa, and I try to connect by the following (RETTO is the name of my server):
server=RETTO;uid=sa;pwd=password;database=db1;
or, by Integrated Security=SSPI: server=RETTO;uid=RETTO\ASPNET;database=db1;
I can browse records, so there are no problems with the connection, but when I try to update, insert, or delete something in the database I get this error that access is denied or that the server does not exist.
Please help, I've been fighting with this for over 5 days.
I have given my accounts (sa, ASPNET) permission to do everything, such as executing stored procedures and accessing the data tables with insert, delete, and update queries. Where is the problem?
I'm currently stuck with a table that has 350 million records. Querying this table is insanely slow, so I had a closer look at the existing yearly partitioning. I already managed to partition at a month level, which improved querying performance a lot. I did this on the staging table, where I used an ALTER statement to split the 2015 partition into 12 months.
However, in our project we use Data Vault. This means that we have 4 tables (hub, sathub, link, satlink), all carrying 350 million records. The problem is that altering the partition function does not work; the server cannot handle this action. What is the best way to do this without having to drop and reload all the tables?
How can I partition a table on particular values and ranges together?
For example, for customer id 12345 I need a separate partition, for 56789 I need a separate partition, and if I have a range of values like 1000 to 1020 then I need a separate partition for that range.
For certain ids I need an individual partition, and for certain ids I need ranges.
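Partition functions only understand ranges, but a single value can be isolated by placing a boundary on each side of it. A sketch with the ids from the question (RANGE RIGHT, hypothetical function name):

CREATE PARTITION FUNCTION pfCustomerId (int)
AS RANGE RIGHT FOR VALUES (1000, 1021, 12345, 12346, 56789, 56790);
-- Resulting partitions:
--   < 1000
--   1000-1020          (the requested range)
--   1021-12344
--   12345              (isolated single value)
--   12346-56788
--   56789              (isolated single value)
--   >= 56790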
How do I add some more ranges to an existing partition scheme and function?
My table is already partitioned on date ranges:
6 partitions, each partition containing 6 months of data, so 3 years of data in total.
1st partition: Jan 2012 to Jun 2012
2nd partition: Jul 2012 to Dec 2012
3rd partition: Jan 2013 to Jun 2013
4th partition: Jul 2013 to Dec 2013
5th partition: Jan 2014 to Jun 2014
6th partition: Jul 2014 to Dec 2014
After Jan 2015, data goes to the PRIMARY filegroup (the default).
Now the customer wants to add two more filegroups for these ranges, i.e. Jan 2015 to Jun 2015 and Jul 2015 to Dec 2015.
Adding the filegroups and ndf files is OK, but
how do I alter the partition scheme and partition function to add these additional ranges to the existing ones?
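The usual mechanics are one ALTER PARTITION SCHEME ... NEXT USED plus one ALTER PARTITION FUNCTION ... SPLIT RANGE per new boundary. A sketch with hypothetical names, assuming RANGE RIGHT and that neither new boundary exists yet (check sys.partition_range_values first); with RANGE RIGHT, the partition that starts at the new boundary is created on the filegroup marked NEXT USED, while existing partitions stay where they are. Splitting a range that already contains rows causes data movement, so it is best done while the range is still empty.

-- First new range
ALTER PARTITION SCHEME psMyScheme NEXT USED [FG_2015_H1];
ALTER PARTITION FUNCTION pfMyFunction() SPLIT RANGE ('2015-01-01');

-- Second new range
ALTER PARTITION SCHEME psMyScheme NEXT USED [FG_2015_H2];
ALTER PARTITION FUNCTION pfMyFunction() SPLIT RANGE ('2015-07-01');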
I need to fetch the date from the parent table into the child table. This is the script:
Case When @AccountingDate IS NULL THEN NULL ELSE CONVERT(varchar, @AccountingDate, 101) END,
Case When @InventoryDate IS NULL THEN NULL ELSE CONVERT(varchar, @InventoryDate, 101) END,
Case When @StatusDate IS NULL THEN NULL ELSE CONVERT(varchar, @StatusDate, 101) END,
Case When @LastInstallmentDate IS NULL THEN NULL ELSE CONVERT(varchar, @LastInstallmentDate, 101) END,
Case When @RetailFirstPayDate IS NULL THEN NULL ELSE CONVERT(varchar, @RetailFirstPayDate, 101) END,
Case When @LeaseFirstPayDate IS NULL THEN NULL ELSE CONVERT(varchar, @LeaseFirstPayDate, 101) END,
Case When @DayToFirstPayment IS NULL THEN NULL ELSE CONVERT(varchar, @DayToFirstPayment, 101) END,
Case When @EntryDate IS NULL THEN NULL ELSE CONVERT(varchar, @EntryDate, 101) END,
Case When @DealBookDate IS NULL THEN NULL ELSE CONVERT(varchar, @DealBookDate, 101) END,
Case When @RealBookDate IS NULL THEN NULL ELSE CONVERT(varchar, @RealBookDate, 101) END
Can an INSERT statement be used inside a CASE WHEN expression?
For example:
case when condition1 = true then (first insert statement based on some condition) when condition2 = true then (second insert statement based on some other condition) end
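CASE in T-SQL is an expression that returns a value, so it cannot contain statements such as INSERT; the statement-level equivalent is IF ... ELSE. A sketch with hypothetical conditions and target tables:

IF @condition1 = 1
    INSERT INTO dbo.TargetA (col1) VALUES (@value);      -- first insert
ELSE IF @condition2 = 1
    INSERT INTO dbo.TargetB (col1) VALUES (@value);      -- second insert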
I have a scalar function which calculates the similarity of two strings. I use the following query to compare the entries of one table against the value 'Test' and return the entries which have a value > 50:
;WITH cte1 AS
(
    SELECT b.FirstName,
           (SELECT fn_similarity('Test', b.FirstName)) AS [Value],
           b.LastName
    FROM [AdventureWorks2012].[Person].[Person] b
)
SELECT *
FROM cte1
WHERE [Value] > 50.00
ORDER BY [Value] DESC
Now I want to use this query against the first 50 entries of the [Person] table, so that the result set includes all the values of the first 50 persons and the entries which are similar to them.
At the moment I use a WHILE loop and write the individual result sets to a temporary table. Is there another way / a better way, maybe via a join?
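One set-based alternative to the WHILE loop (a sketch): take the first 50 persons as the outer set and use CROSS APPLY to run the similarity comparison for each of them against the whole table. It assumes the function is dbo.fn_similarity taking the two strings to compare, as in the query above.

SELECT p50.BusinessEntityID,
       p50.FirstName AS BaseFirstName,
       s.FirstName   AS MatchedFirstName,
       s.LastName,
       s.[Value]
FROM (SELECT TOP (50) BusinessEntityID, FirstName
      FROM [AdventureWorks2012].[Person].[Person]
      ORDER BY BusinessEntityID) AS p50
CROSS APPLY (SELECT b.FirstName, b.LastName,
                    dbo.fn_similarity(p50.FirstName, b.FirstName) AS [Value]
             FROM [AdventureWorks2012].[Person].[Person] AS b) AS s
WHERE s.[Value] > 50.00
ORDER BY p50.BusinessEntityID, s.[Value] DESC;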
I have set up CDC on 50 tables, and in one stored procedure I am calling all the CDC functions as shown below. The issue is that I am getting the error "an insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes ...". The error does not say which capture instance it occurs for, so I am not able to find it.
select * from cdc.fn_cdc_get_all_changes_<capture_instance>(@from_lsn, @to_lsn, 'all update old')
union all
select * from cdc.fn_cdc_get_all_changes_<capture_instance>(@from_lsn, @to_lsn, 'all update old')
union all
select * from cdc.fn_cdc_get_all_changes_<capture_instance>(@from_lsn, @to_lsn, 'all update old')
union all
select * from cdc.fn_cdc_get_all_changes_<capture_instance>(@from_lsn, @to_lsn, 'all update old')
union all
select * from cdc.fn_cdc_get_all_changes_<capture_instance>(@from_lsn, @to_lsn, 'all update old')
union all
select * from cdc.fn_cdc_get_all_changes_<capture_instance>(@from_lsn, @to_lsn, 'all update old')
union all
select * from cdc.fn_cdc_get_all_changes_<capture_instance>(@from_lsn, @to_lsn, 'all update old')
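This error is commonly raised when the LSN range passed to a capture instance's function is outside that instance's valid range, rather than because of a genuinely missing argument. One way to narrow down which instance is failing (a sketch, with a placeholder instance name): compare @from_lsn / @to_lsn against each instance's own valid range before calling it.

-- Repeat per capture instance (placeholder name below)
DECLARE @min_lsn binary(10) = sys.fn_cdc_get_min_lsn('capture_instance');
DECLARE @max_lsn binary(10) = sys.fn_cdc_get_max_lsn();

-- If @from_lsn is below @min_lsn (or the range is otherwise invalid for this instance),
-- cdc.fn_cdc_get_all_changes_<capture_instance> fails with the misleading
-- "insufficient number of arguments" error.
SELECT @min_lsn AS min_valid_lsn, @max_lsn AS max_valid_lsn;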
I have an existing function and need to alter it to return the values multiplied together until its parent is reached. I need two separate functions, one for the city column and one for the amt column. I also need to display the parent description.
I have the following query that is supposed to merge multiple results into a single one and put it into a temporary table:
SELECT DISTINCT
       [AlphaExtension],
       STUFF((SELECT A.[NoteText] + '< BR />'
              FROM #temp A
              WHERE A.[AlphaExtension] = B.[AlphaExtension]
              FOR XML PATH('')), 1, 1, '') As [NoteText]
FROM #temp B
GROUP BY [AlphaExtension], [NoteText]
It is working fine except for one detail. If you look at the second line of the query, you will see that I am stuffing in a < BR /> tag (line break) because the contents of the field are going to be written directly to the screen, and I want the multiple results to be displayed on separate lines.
OK, the issue is that it is stuffing &lt; BR /&gt; instead of < BR />, and therefore the browser displays the tag instead of breaking the line.
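FOR XML PATH entitizes markup characters, which is why the < and > come back as &lt; and &gt;. A common fix (a sketch based on the query above) is to add the TYPE directive and pull the text back out with .value(), which decodes the entities; putting the separator in front and stripping its length with STUFF is the usual companion pattern.

SELECT DISTINCT
       B.[AlphaExtension],
       STUFF((SELECT '<BR />' + A.[NoteText]
              FROM #temp A
              WHERE A.[AlphaExtension] = B.[AlphaExtension]
              FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)')   -- TYPE + .value() un-escapes &lt; and &gt;
             , 1, 6, '') AS [NoteText]                              -- 6 = length of '<BR />', strips the leading separator
FROM #temp B;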
I have a staging table with data separated by '¯'. I want to split the data into a given number of columns; I have given 31, meaning my main table has 31 columns. It should also handle rows with fewer or more columns.
I have a function that accepts a date parameter and uses getdate() as its default value. If a date is passed in, I have to find records using the DATEDIFF method based on the input. If no date is passed, I want to bypass the DATEDIFF logic and search for records based on a column called "is_current", which reduces the query time.
However, I don't know how to tell whether the date value in the function came from an input or was the default.
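T-SQL cannot tell inside a function whether the caller supplied a value or used the default (and when calling a function the DEFAULT keyword has to be written explicitly anyway). A common pattern (a sketch with hypothetical object names) is to default the parameter to NULL and treat NULL as "no date supplied":

CREATE FUNCTION dbo.fn_GetRecords (@AsOfDate datetime = NULL)   -- hypothetical function
RETURNS TABLE
AS
RETURN
(
    SELECT r.*
    FROM dbo.Records AS r                                        -- hypothetical table
    WHERE (@AsOfDate IS NULL AND r.is_current = 1)                                    -- default path: cheap flag lookup
       OR (@AsOfDate IS NOT NULL AND DATEDIFF(day, r.record_date, @AsOfDate) >= 0)    -- explicit-date path
);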
I'm having a strange problem with this, but I know (and admit) that the problem is on my PC and nowhere else. My firewall was causing a problem because I was unable to PING the database server; switching it off gets a successful PING immediately. The most useful utility to date is running netstat -an in the command window. This lists all the connections that are live and the ports that are being listened on. I can establish a connection both by running:
TCP 81.105.102.47:1134 217.194.210.169:1433 ESTABLISHED
TCP 81.105.102.47:1135 217.194.210.169:1433 ESTABLISHED
TCP 127.0.0.1:1031 0.0.0.0:0 LISTENING
TCP 127.0.0.1:5354 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51114 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51201 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51202 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51203 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51204 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51206 0.0.0.0:0 LISTENING
UDP 0.0.0.0:445 *:*
UDP 0.0.0.0:500 *:*
UDP 0.0.0.0:1025 *:*
UDP 0.0.0.0:1030 *:*
UDP 0.0.0.0:3456 *:*
UDP 0.0.0.0:4500 *:*
UDP 81.105.102.47:123 *:*
UDP 81.105.102.47:1900 *:*
UDP 81.105.102.47:5353 *:*
UDP 127.0.0.1:123 *:*
UDP 127.0.0.1:1086 *:*
UDP 127.0.0.1:1900 *:*
Both these utilities show as establishing a connection in netstat, so I am able to connect to the database server every time; this worked throughout yesterday and has continued this morning.
The problem is when I attempt to use SQL Server Management Studio. When I attempt to connect to tcp:sql5.hostinguk.net, 1433, nothing shows in netstat at all. There is an option to encrypt the connection in the connection properties tab in Management Studio; when I enable this I do get an entry in netstat -an, see below:
Almost as if it's trying the different ports, but you get this TIME_WAIT thing. The error message is more meaningful and hopeful, because I get:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.) (.Net SqlClient Data Provider)
I would expect this, as the DNS has not been advised to encrypt the connection.
This is much better than the "Login failed for user 'COX10289'. (.Net SqlClient Data Provider)" that I get otherwise, irrespective of whether I enter a password or not.
This is on an XP machine trying to connect to the remote web hosting company via the internet.
I can ping the server.
I have enabled shared memory and TCP/IP in protocols; named pipes and VIA are disabled.
I do not have any aliases set up.
No, I do not force encryption.
I wonder if you have any further suggestions to this problem?
I have a function that accepts a string and a delimiter and returns the results in a temp table. I am using the function for one of the columns in my view that needs to be split and displayed as separate columns. The view takes forever to run, and in the end it doesn't split and doesn't display the column.
Function:
-----------------------------------
ALTER FUNCTION [dbo].[func_Split]
(
    @DelimitedString varchar(8000),
[Code].....
Not sure what I am missing in the above view, or why it doesn't split the string.
I have a question regarding window functions. I have a sales order table with the columns "orderid", "customerid", "order_date" and "amount". I use the following query to get the total amount of every customer as an additional column:
Select customerid, orderid, order_date, amount, SUM(amount) OVER (PARTITION BY customerid) FROM sales_orders
My question is whether there is a good way to add another column which contains the SUM(amount) of the customerid where the order_date > 2012-01-15, something like this:
Select customerid, orderid, order_date, amount, SUM(amount) OVER (PARTITION BY customerid), SUM(amount) OVER (PARTITION BY customerid WHERE order_date > 2012-01-15) FROM sales_orders
I know this is not a valid method, so do you know a way to achieve this? Can I maybe use CROSS APPLY or something similar? I know that I could use a subquery to get this, but is there maybe a way / a better way via window functions?
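One way that stays within window functions (a sketch based on the query above): wrap the condition in a CASE expression inside the SUM, so rows outside the date range contribute zero to the second total.

SELECT customerid,
       orderid,
       order_date,
       amount,
       SUM(amount) OVER (PARTITION BY customerid) AS total_per_customer,
       SUM(CASE WHEN order_date > '2012-01-15' THEN amount ELSE 0 END)
           OVER (PARTITION BY customerid) AS total_after_2012_01_15
FROM sales_orders;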