I apologize in advance if this is something obvious I've missed ... fresh eyes/brain and all that.
If I have a table that is using a particular partition scheme/function, is there a quick and easy way to determine which column of that table is being used for partitioning? We're examining a number of legacy structures and we're hoping to reduce the time it's going to take us to get the report management wants.
I want to find a way to get partition info for all the tables in all the databases on a server, showing: database name, table name, schema name, partition type (maybe year, month, day, number, alpha), the column used for partitioning, the current active partition, and the last partition (for date partitions I want to know if the range only goes until 2007, so I can add 2008).
All I've come up with so far is:
Code Block
SELECT DISTINCT o.name
FROM sys.partitions p
INNER JOIN sys.objects o ON o.object_id = p.object_id
WHERE o.type_desc = 'USER_TABLE'
  AND p.partition_number > 1
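Building on that, here is a sketch against the SQL 2005 catalog views (treat it as a starting point, not gospel) that surfaces the partitioning column, scheme and function per table. For the "last partition" piece, sys.partition_range_values joined on pf.function_id lists the boundary values, so the MAX of those shows how far a date range currently goes. Run it per database, or wrap it in the undocumented sp_MSforeachdb to cover the whole server.
Code Snippet
-- partitioned tables with their partitioning column, scheme and function
SELECT o.name  AS table_name,
       s.name  AS schema_name,
       c.name  AS partition_column,
       ps.name AS partition_scheme,
       pf.name AS partition_function
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = o.schema_id
JOIN sys.index_columns ic ON ic.object_id = i.object_id
                         AND ic.index_id = i.index_id
                         AND ic.partition_ordinal >= 1  -- the partitioning column
JOIN sys.columns c ON c.object_id = i.object_id AND c.column_id = ic.column_id
JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
WHERE o.type = 'U'
  AND i.index_id <= 1;  -- heap or clustered index carries the table's data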
Does anyone have a script that will roll through the tables in a database and identify tables without primary keys defined? I did not see any in the online script database.
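A sketch that should get close on SQL 2005 (on SQL 2000, sysobjects works the same way with OBJECTPROPERTY):
Code Snippet
-- list user tables that have no primary key defined
SELECT name
FROM sys.tables
WHERE OBJECTPROPERTY(object_id, 'TableHasPrimaryKey') = 0
ORDER BY name;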
I am implementing table partitioning on our database with T-SQL. At the moment (it is under development) the data are correctly located in the relevant filegroups. Our target is to make it so the oldest partitions/filegroups can be backed up and removed from the database, to reduce the size of the DB (a time period is used for partitioning). Then, if the need arises, we would restore a filegroup for reporting or analysis. Note that data are continuously added, and thus new filegroups are added to represent each new time period (e.g. a new filegroup for each new month). Based on your experience, is a solution like that possible?
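Based on what I know of the engine, truly removing old filegroups is awkward; the closest supported pattern is to mark a closed-off period's filegroup read-only, back it up, and lean on piecemeal restore when it is needed again (Enterprise Edition, full recovery model). A hedged sketch with hypothetical names:
Code Snippet
-- freeze and back up a closed-off monthly filegroup
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_2006_12 READ_ONLY;
BACKUP DATABASE SalesDB
    FILEGROUP = 'FG_2006_12'
    TO DISK = 'D:\Backups\SalesDB_FG_2006_12.bak';

-- later, bring just that filegroup back for reporting
RESTORE DATABASE SalesDB
    FILEGROUP = 'FG_2006_12'
    FROM DISK = 'D:\Backups\SalesDB_FG_2006_12.bak'
    WITH RECOVERY;
Actually reclaiming the disk space generally means switching the old partition out to an archive table and dropping the files, which is a bigger operation.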
When do partitioned tables/indexes become beneficial? When a table has several million rows? Hundreds of millions of rows?
My tables all have clustered indexes based on the bigint identity PK. I am considering partitioning some of the larger tables by year. If the field I use is not part of the current clustered index, does that mean I can't use CREATE INDEX to create my partitions, and that I need to create an empty table for each year and then use ALTER TABLE ... SWITCH? I have header/detail/sub-detail tables; as long as I create the partition function using a similar date field, will the partitions be able to be joined? How do I ensure my indexes will be aligned? Once I set up the partitions, I assume new rows will be stored in the proper partitions based on the value of the date field.
I've read BOL, etc. and they are good sources for theory, but I need a "Building Partitions for Dummies" type paper with step-by-step explanations. Is there anything out there like that?
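I have not seen a true "for Dummies" walkthrough, but a minimal end-to-end sketch (hypothetical names, one partition per year) covers the mechanics, and it touches the alignment question above: any unique index on a partitioned table, the clustered PK included, must carry the partitioning column in its key, which is why the PK below is widened to include the date. Rows inserted afterwards land in the matching partition automatically.
Code Snippet
-- 1) function: boundaries between years (RANGE RIGHT puts each boundary in the upper partition)
CREATE PARTITION FUNCTION pfYear (datetime)
AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01');

-- 2) scheme: map every partition to a filegroup (ALL TO PRIMARY keeps the demo simple)
CREATE PARTITION SCHEME psYear
AS PARTITION pfYear ALL TO ([PRIMARY]);

-- 3) table: created directly on the scheme, keyed by the partitioning column
CREATE TABLE OrdersByYear (
    OrderID   bigint IDENTITY(1,1) NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money NULL,
    CONSTRAINT PK_OrdersByYear PRIMARY KEY CLUSTERED (OrderID, OrderDate)
) ON psYear (OrderDate);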
Hello all, I was wondering if anyone else ran into this and, if so, how you got around it. In a nutshell, the SQL optimizer is NOT pruning the additional partitions from the execution plan as would be expected when applying a constraint directly against the partitioned table's partition key; instead it scans every partition that you have set up in your partition function range. Yet when you apply the actual value against the table, the plan returns as expected.
Hmm.... strange......ghost...ooooooo?
I have created a simple test to reproduce:
Code Snippet
CREATE PARTITION FUNCTION [PTFunction](int) AS RANGE LEFT FOR VALUES (1,2,3)
GO
CREATE PARTITION SCHEME [PTDataScheme] AS PARTITION [PTFunction] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY])
GO
CREATE TABLE tblPartitionTest(
ID int identity(1,1) ,
PartitionKey int,
Sales money)
ON PTDataScheme(PartitionKey)
GO
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,50.00);
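For what it is worth, a hedged guess at the shape of the repro: when the predicate arrives via a variable (or a join) rather than a literal, the SQL 2005 plan lists every partition, while the literal gets static pruning.
Code Snippet
-- plan appears to touch all four partitions
DECLARE @key int;
SET @key = 2;
SELECT SUM(Sales) FROM tblPartitionTest WHERE PartitionKey = @key;

-- plan seeks into a single partition
SELECT SUM(Sales) FROM tblPartitionTest WHERE PartitionKey = 2;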
We are using partitions and all the tables are properly aligned on the partition keys. When this particular stored procedure, which inserts data into a table from a different table based on the partition keys, is called from a web UI that uses threading, a deadlock appears.
Let me make it clearer.
ThreadOne: Insert into table A(partitionKey,BatchId,...) select * from table B where partitionkey = 1
ThreadTwo: Insert into table A(partitionKey,BatchId,...) select * from table B where partitionkey = 2
I can see that this procedure sometimes deadlocks. I am not sure of the reason; my guess is that since the tables are partitioned and lock escalation is set to Auto, the deadlock should not occur.
We have a partitioned view with 4 underlying tables. The view and each of the underlying tables are in separate databases on the same server. Inserts and deletes on the view work fine. We then add insert and delete triggers to each of the underlying tables. The triggers modify a different set of tables in the same database as the view (different than the underlying table). The problem is those triggers aren't fired when inserting or deleting via the view. Inserting into or deleting from the underlying table directly causes the triggers to fire, but not when the tables are accessed as a result of using the view. Am I missing something? The triggers are 'for insert' and 'for delete'. No 'instead of' or 'after' triggers.
My database's design is set out here. In summary, I'm trying to model a stock exchange for a Technical Analysis application written using Visual C++. In order to create the hierarchy I'm using a Nested Set Model. I'm now trying to write code to add and delete equities (or, more generically, nodes) in the database using a form presented to the user in my application. I have example SQL code to create the necessary add and delete procedures that calculate the changes to the values in the lft and rgt columns, but these examples focus on a single table, whereas my design aggregates rows from multiple tables using UNION ALL:
Code Snippet
CREATE VIEW vw_NSM_DBHierarchy -- Nested Set Model Database Hierarchy
AS
SELECT clmStockExchange, clmLeft, clmRight FROM tblStockExchange_
UNION ALL
SELECT clmMarkets, clmLeft, clmRight FROM tblMarkets_
UNION ALL
SELECT clmSectors, clmLeft, clmRight FROM tblSectors_
UNION ALL
SELECT clmEPIC, clmLeft, clmRight FROM tblEquities_
Essentially, I'm trying to create an updatable view, but I receive the error "UNION ALL View is not updatable because a partitioning column was not found". I suspect that my design is wrong or lacking and this problem is highlighting the design flaws, so any suggestions would be greatly appreciated.
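If it helps, my understanding of the requirement (sketched here with a hypothetical clmNodeType column) is that each member table needs a partitioning column: a CHECK-constrained constant that is part of the primary key and is selected by the view, so the engine can route each INSERT to exactly one table.
Code Snippet
-- one member table; repeat with constants 1, 3, 4 for the other three
CREATE TABLE tblMarkets_ (
    clmMarkets  int NOT NULL,
    clmNodeType int NOT NULL CHECK (clmNodeType = 2),  -- disjoint per table
    clmLeft     int NOT NULL,
    clmRight    int NOT NULL,
    CONSTRAINT PK_tblMarkets PRIMARY KEY (clmNodeType, clmMarkets)
);
-- the view must then expose clmNodeType in every SELECT of the UNION ALL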
I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.
The first query completed in around 7s and has 47,000 logical reads.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.
My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?
One possible workaround that I could use is to execute five individual queries, one for each monitor_id and then union the results together but this would require significant code changes to my stored procs.
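A hedged middle ground that avoids the five-way union: stage the IDs in a table variable and join to it, which often nudges the optimizer into one seek per ID. No guarantees; the sketch below reuses the names from the queries above and trims the SUM and site_id lists for brevity.
Code Snippet
DECLARE @ids TABLE (monitor_id int PRIMARY KEY);
INSERT INTO @ids (monitor_id) VALUES (152682);
INSERT INTO @ids (monitor_id) VALUES (5339);
INSERT INTO @ids (monitor_id) VALUES (5341);
INSERT INTO @ids (monitor_id) VALUES (5342);
INSERT INTO @ids (monitor_id) VALUES (268080);

SELECT mo.monitor_id, mo.site_id, mo.testtime,
       SUM(mo.NumBytes)  -- ...remaining SUMs as in the original
FROM @ids i
JOIN monitor_raw mr ON mr.monitor_id = i.monitor_id
JOIN monitor_object mo ON mo.monitor_id = mr.monitor_id
                      AND mo.testtime = mr.testtime
                      AND mo.site_id = mr.site_id
WHERE mr.testtime BETWEEN 'Oct 31 2007 3:00:00:000PM' AND 'Nov 30 2007 3:00:00:000PM'
  AND mo.returncode = 200
  AND mr.escalationlevel = 0
  -- add the site_id IN (...) filter as in the original
GROUP BY mo.monitor_id, mo.site_id, mo.testtime;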
Pros
----
* Can optimize the hell out of the database because it does not have to accommodate different OSes
* Code is probably cleaner - not littered with 80 million #ifdefs
* No need to write the code to deal with various filesystem conventions

Cons
----
* Most cheap webhosts are unix-based (I think). You can use Oracle as a professional and still do your personal home hacking on Oracle regardless of whether personal hosting is done on Windows or unix. Not so with MS SQL.
I have a table with an identity column in a SQL Server 7 database. Now I need to update this identity column. I can't update the column directly since it is an identity column, so I would like to drop the identity property first, after which the update is easy. For this purpose, I need a Transact-SQL script. Please let me know if you have any thoughts on this. Thanks.
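A common pattern, sketched with hypothetical names: the IDENTITY property cannot be dropped in place, so copy the values into a plain column, drop the original, and rename. Drop or rebuild any keys or indexes that reference the column first.
Code Snippet
ALTER TABLE MyTable ADD NewCol int NULL
GO
UPDATE MyTable SET NewCol = IdentCol
GO
ALTER TABLE MyTable DROP COLUMN IdentCol
GO
EXEC sp_rename 'MyTable.NewCol', 'IdentCol', 'COLUMN'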
Hi, I have a few things in my databases which seem to be neither true system objects nor user objects - notably a table called 'dtproperties' (created by Enterprise Manager as I understand it, relating to relationship graphing or something) and some stored procs beginning with "dt_" (some kind of source control stuff, possibly Visual Studio related). These show up when I use "exec sp_help 'databaseName'" but not in Ent. Mgr. or in Query Analyzer's object browser, and also not in a third party tool I use called AdeptSQL. I am wondering how those tools know to differentiate between these types of quasi-system objects and my real user data. (This is for the purpose of a customized schema generator I am writing.) I'd prefer to determine this info with system stored procs (i.e. sp_help, sp_helptext, sp_...etc) but will dip into the system tables if needed. Thanks, Dave
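My best guess at what those tools key off is the IsMSShipped flag, which is set on dtproperties and the dt_* procs (via sp_MS_marksystemobject) even though they are not true system objects:
Code Snippet
-- user tables and procs, excluding the quasi-system objects
SELECT name, type
FROM sysobjects
WHERE type IN ('U', 'P')
  AND OBJECTPROPERTY(id, 'IsMSShipped') = 0
ORDER BY type, name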
Is there a quick and easy way to figure out if a server is 64-bit or 32-bit? I have been looking and cannot find an easy way. If there is a script that will determine it, could you please tell me? I am a DBA and manage over 100 servers and need a fast way to check this. I need to know it to pick the correct upgrade version.
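Either of these reports it; 64-bit builds carry (X64) or (IA64) in the strings:
Code Snippet
SELECT @@VERSION;                  -- e.g. '... (X64) ...' on 64-bit
SELECT SERVERPROPERTY('Edition');  -- e.g. 'Enterprise Edition (64-bit)'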
I'm trying to connect to a SQL database, but I don't know what myserver is in the following code:
Dim strConn As String = "server=myserver;database=Northwind"
I can't get the code to link up with my Northwind database. I'm running everything locally if that helps. Thanks! Jon
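Assuming a local default instance with Windows authentication (and the Northwind sample installed), a connection string along these lines usually works:
Code Snippet
server=(local);database=Northwind;Integrated Security=SSPI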
Is there a system stored procedure that I can execute that will return the actual size of the database you are working with? Any information is appreciated.
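sp_spaceused with no arguments reports the current database's size and space breakdown; sp_helpdb takes a database name:
Code Snippet
EXEC sp_spaceused;           -- size of the database you are working in
EXEC sp_helpdb 'Northwind';  -- hypothetical name; size plus file details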
How do I find out if a temporary table named '##test' exists? I have a stored procedure that creates this table and if it exists another stored procedure should do one thing, if it does not exist I want the SP to do something else. Any help as to how I can determine if this table exists at the current time would be greatly appreciated.
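Global temp tables live in tempdb, so an existence check can look there:
Code Snippet
IF OBJECT_ID('tempdb..##test') IS NOT NULL
    PRINT '##test exists'          -- the first SP has run; do one thing
ELSE
    PRINT '##test does not exist'  -- do the other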
Can anyone tell me what command/utility I can use to determine the database type (non-Unicode or Unicode) as well as the supported character set? Any help will be greatly appreciated. Thanks a lot!!
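A sketch of one way to read it: the database collation implies the code page used for non-Unicode columns, while Unicode support itself is a per-column choice (nchar/nvarchar vs char/varchar) rather than a database type:
Code Snippet
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS db_collation,
       COLLATIONPROPERTY(CAST(DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS nvarchar(128)),
                         'CodePage') AS non_unicode_code_page;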
Forgive the easy question, but I'm afraid it might also be a trick question and I'd like to hear the experts' opinion. I am using SQL Server 2005 Express Edition and I know the limitation is 4GB per database. So far none of my users is anywhere near the limit, but I have to be prepared for when that day finally comes. As it stands, they use a single database through a program, so I have full control over it. There are no fancy backup programs on the system, so no fancy recovery models or automatic shrinking - data is only inserted into that database.
My question is simply: how can I determine programmatically (I use ADO.NET, but it can execute SQL commands just fine) the size of the database as it relates to the limitation? That is, I don't know whether it is the amount of data stored (with or without overhead), or simply the size of the *.mdf file (maybe together with the *.ldf file), or whether the 4GB is 4 billion bytes or 2^32 bytes. I just want the same method that SQL Server is using so that, for example, I can bring up a warning at 90% full and lock out the user at 99% full.
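My understanding, worth verifying against the Express documentation, is that the 4GB cap applies to the data file(s) of each database and excludes the log, so measuring the data files should track what the engine enforces:
Code Snippet
-- size is stored in 8 KB pages; type 0 = data, 1 = log
SELECT SUM(size) * 8 / 1024.0 AS data_file_mb
FROM sys.database_files
WHERE type = 0;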
create table tick (
    ID bigint identity(1,1) primary key not null,
    price money not null
)
and I want to know 3 things
Starting with ID = 1 through ID = (last), give me the low and high price (that satisfy the WHERE clause below) and the last ID, where high price - low price = 0.10 and the last ID is the minimum ID that satisfies high price - low price = 0.10.
So the last ID will coincide with the record containing either the low or the high price; the problem is you don't know which record in that range has the corresponding high/low price - it could be the first record or the 10,000th record.
I am thinking I need to create two summary tables: maybe calculate the min(ID) that goes down 0.01, then the min(ID) that goes down 0.02, etc., then calculate the min(ID) that goes up 0.01, then up 0.02, etc., and finally join against these two summary tables to figure out which combination of downSummary and upSummary has a difference of 0.10.
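For comparison, a brute-force sketch of the same idea with no summary tables; it is correct but O(n^2) in rows, so the summary-table route is probably the one that scales:
Code Snippet
-- smallest ID at which the running high-low spread first reaches 0.10
SELECT MIN(t.ID) AS last_id
FROM tick t
WHERE (SELECT MAX(price) FROM tick WHERE ID <= t.ID)
    - (SELECT MIN(price) FROM tick WHERE ID <= t.ID) >= 0.10;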
I have a C# server application which clients can send arbitrary SQL statements to. These can be absolutely anything - creating tables/views, selecting from tables/views, inserts, updates, deletes, you name it.
There are two return parameters from the server method which executes the SQL - a results set containing the data, and a count of the rows that were updated - (either one or the other should be populated depending on the type of command sent to it). To deal with this, what I planned on doing was (pseudocode follows..):
Unfortunately this doesn't really work: OpenReaderCursor is able to execute non-queries (e.g. UPDATE/INSERT/DELETE etc.) but doesn't give me a row count, and trying the other way round, ExecuteNonQueryCommand is happy to execute SELECT statements, but then of course I can't return a result set as I don't have access to it.
My question then, if you will excuse the waffle above, is "Is there a simple way of determining if a string containing an SQL statement is a query?" - or will I have to come up with some way of dealing with this in my application code?
Please don't slate the design (ideally I would have two methods on the server, one for queries which returns results and the other for nonqueries which returns an updatecount) but there's nothing I can do, this is how it must be done (the interface was defined long long ago)
Not sure if this is the right place to post this, but hopefully someone can help me. I would like to determine the last automatically incremented ID in a table and return it as a variable to a VB.NET program, but I'm not sure what the SELECT statement would look like for this. Any help would be greatly appreciated.
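Assuming a hypothetical table name: IDENT_CURRENT reports the last identity value generated for a table in any session, and SCOPE_IDENTITY covers the row you just inserted:
Code Snippet
SELECT IDENT_CURRENT('dbo.MyTable') AS last_id;  -- any session, any scope
-- or immediately after an INSERT in the same scope:
SELECT SCOPE_IDENTITY();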
Can anyone tell me if there is a way to determine with SMO or RMO whether a database is a subscriber when using merge replication? I only have the server and database at this point, too!
I want to provide a small app that creates a merge publication but only if the database isn't a subscriber.
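A hedged T-SQL heuristic, in case SMO/RMO comes up short: databases participating in merge replication carry system tables such as sysmergesubscriptions, though telling publisher and subscriber apart needs a look at the rows themselves:
Code Snippet
IF OBJECT_ID('dbo.sysmergesubscriptions') IS NOT NULL
    PRINT 'database participates in merge replication'
ELSE
    PRINT 'no merge replication metadata found'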
Hi Peter... I have a question about how I can see if my SQL Server is version 2005 SP2, and what the difference is between the server and the server agent... I've checked the updates and the machine says I am up to date... but I don't know which version it is.
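These report the version and service pack level of the instance you are connected to (the Agent is the separate job-scheduling service; it is patched along with the server):
Code Snippet
SELECT SERVERPROPERTY('ProductVersion') AS version,  -- 9.00.3042 or higher means 2005 SP2
       SERVERPROPERTY('ProductLevel')   AS level;    -- returns 'SP2' once SP2 is applied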
I am writing a client application that offers a UI that allows an administrator to remotely add/delete/update user accounts across many different SQL Servers running on XP and up.
When the operating system is W2K3 or higher I want to take advantage of the check_expiration, check_policy and must_change arguments to CREATE LOGIN, and exclude those features when the host OS does not support them.
Is there an easy way to determine if those arguments are supported?
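One rough server-side option is to read the host OS version, since those CREATE LOGIN options rely on the policy support that arrived with Windows Server 2003 (NT 5.2):
Code Snippet
EXEC master..xp_msver 'WindowsVersion';  -- 5.2 or higher suggests the options apply
SELECT @@VERSION;                        -- the version string also names the host OS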
How do I find out that there is a null value in a column rather than a valid integer, DateTime or bool value? For strings I use the 'as' operator to cast the column value and it returns null when the column value is null, but for value types the 'as' operator causes a compile error and simple casting causes a runtime error. For example:
int count = (int)row["Count"];
and
int count = row["Count"] as int;
The first one throws an exception when Count is null and the second doesn't compile at all, since 'as' applies only to reference types. So what is the way, other than exception handling, to determine a null value in a column?
I am trying to find a global way of determining when a row in one of my tables was last updated or inserted. I say global because I don't want to drill down through each table looking for modified rows.
I am a DBA of several hundred databases and want to retire those no longer being used.
Is there a column of a system table that has this info?
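On SQL 2005 and later, sys.dm_db_index_usage_stats records the last write per object; the caveat is that it is cleared at every service restart, so it shows activity since the last restart rather than true history. Watching it over a few weeks should be enough to spot dead databases.
Code Snippet
SELECT DB_NAME(database_id) AS database_name,
       object_id,
       MAX(last_user_update) AS last_write
FROM sys.dm_db_index_usage_stats
WHERE last_user_update IS NOT NULL
GROUP BY database_id, object_id
ORDER BY last_write DESC;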