SQL Server 2014 :: Correct Index Strategy For BETWEEN
Mar 23, 2015
Two tables:
GeoIpLocation (Id, City, Country, Latitude, Longitude; PK: Id; 2m rows)
GeoIpBlock (IpFrom, IpTo, GeoIpLocation_Id; PK: IpFrom, IpTo; 100k rows)
Simple concept: get location for given IP. So:
SELECT TOP 1
    GeoIpLocation.Country,
    GeoIpLocation.City
FROM GeoIpLocation
INNER JOIN GeoIpBlock
    ON GeoIpBlock.GeoIpLocation_Id = GeoIpLocation.Id
WHERE @nIpNumber BETWEEN GeoIpBlock.IpFrom AND GeoIpBlock.IpTo
Result: disaster.
The BETWEEN predicate uses the index on GeoIpBlock's primary key, which matches roughly half the table for the first part of the BETWEEN and half the table for the second part. The query is a bit better with FORCESCAN on GeoIpBlock, but it still runs slowly, as it scans a lot of records.
Question: Is there a better way to index this kind of data?
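One common workaround, sketched below under the assumption that the IP blocks do not overlap: index IpFrom alone (covering IpTo and the location id), seek backwards to the last block starting at or below the address, and verify IpTo. The index name is illustrative.
-- Hedged sketch: with non-overlapping ranges, a single descending seek on IpFrom
-- replaces the half-table range scan the BETWEEN produces.
CREATE NONCLUSTERED INDEX IX_GeoIpBlock_IpFrom
    ON GeoIpBlock (IpFrom) INCLUDE (IpTo, GeoIpLocation_Id);

SELECT TOP 1
    l.Country,
    l.City
FROM GeoIpBlock AS b
INNER JOIN GeoIpLocation AS l
    ON l.Id = b.GeoIpLocation_Id
WHERE b.IpFrom <= @nIpNumber
  AND b.IpTo   >= @nIpNumber
ORDER BY b.IpFrom DESC;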
Nov 20, 2007
I need to manage the negative performance implications of defragmenting a 1TB+ DB. I want to perform an index reorganize if fragmentation is no higher than 30%, and an index rebuild if the fragmentation exceeds 30%.
Firstly, can anyone recommend a script which uses the sys.dm_db_index_physical_stats DMV to ascertain the fragmentation level? Secondly, is there a technique I can employ to prevent the ONLINE operation from completely killing performance on a 24/7 production system?
ALTER INDEX REORGANIZE/REBUILD WITH (ONLINE=ON)
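A minimal sketch along those lines, assuming the 30% threshold above; it only generates the ALTER INDEX commands so they can be reviewed and throttled rather than run blindly against a 24/7 system:
SELECT
    OBJECT_SCHEMA_NAME(ips.object_id) + '.' + OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent,
    CASE WHEN ips.avg_fragmentation_in_percent > 30
         THEN 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
              + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + '.' + QUOTENAME(OBJECT_NAME(ips.object_id))
              + ' REBUILD WITH (ONLINE = ON);'
         ELSE 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
              + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + '.' + QUOTENAME(OBJECT_NAME(ips.object_id))
              + ' REORGANIZE;'
    END AS MaintenanceCommand
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.index_id > 0                          -- skip heaps
  AND ips.avg_fragmentation_in_percent >= 5     -- ignore trivially fragmented indexes
  AND ips.page_count > 1000;                    -- very small indexes rarely benefit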
Jun 11, 2015
A deadlock is occurring on a SELECT query in the application because of an index. It was identified after enabling a trace in the database and reading the deadlock XML file. After removing the index, the deadlock no longer occurs for the same query, but the query's performance is slightly affected. Is removing the index the correct way to resolve a deadlock that is caused by it?
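For comparison, a hedged sketch of pulling recent deadlock graphs from the built-in system_health Extended Events session; inspecting the graph sometimes points to a query or covering-index change rather than dropping the index:
-- Hedged sketch: read xml_deadlock_report events from the system_health ring buffer.
SELECT CAST(xet.target_data AS xml)
           .query('//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlock_graphs
FROM sys.dm_xe_session_targets AS xet
INNER JOIN sys.dm_xe_sessions AS xes
    ON xes.address = xet.event_session_address
WHERE xes.name = 'system_health'
  AND xet.target_name = 'ring_buffer';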
Aug 7, 2015
I have the following simple code in my stored proc. Even when I have hard-coded OFFSET to a non-zero value, it always returns results from starting point 0. The end limit, FETCH NEXT, works perfectly; the only problem is with the start.
SELECT *
FROM #invoices
ORDER BY #invoices.InvoiceDateTime ASC
OFFSET @StartRow ROWS FETCH NEXT @EndRow ROWS ONLY;
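One thing worth checking, sketched below with illustrative values: OFFSET takes a starting row count and FETCH NEXT takes a page size, not an end-row position, and @StartRow must actually reach this statement as a non-zero integer (i.e. not get reset or mapped to the wrong parameter earlier in the proc).
-- Hedged sketch: if @EndRow is meant to be the last row rather than the page size,
-- the page size has to be computed.
DECLARE @StartRow int = 20, @EndRow int = 30;

SELECT *
FROM #invoices
ORDER BY InvoiceDateTime ASC
OFFSET @StartRow ROWS
FETCH NEXT (@EndRow - @StartRow) ROWS ONLY;   -- rows 21 to 30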
May 13, 2015
I have added a maintenance plan that rebuilds indexes in SQL Server 2014, and the log file has grown by a tremendous amount of space.
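A hedged sketch of the usual mitigations, assuming the database is in FULL recovery: index rebuilds are fully logged there, so more frequent log backups around the maintenance window (or switching to BULK_LOGGED, if losing point-in-time restore across that window is acceptable) keep the log file from growing. The database name and path are illustrative.
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';   -- run frequently during maintenance
-- ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;              -- optional, only if acceptable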
Apr 26, 2015
We have a database with a table that contains around 180m records. Each day a further 70k are inserted. No records are ever deleted, as this table is used for archiving only. Users are required to perform SELECTs on this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid the daily index rebuilds that our software house is telling us we have to do.
This is the DDL for the table:
CREATE TABLE [dbo].[Inventory](
[EAN] [bigint] NOT NULL,
[Day] [smalldatetime] NOT NULL,
[State] [int] NOT NULL,
[Quantity] [int] NULL,
[StockValue] [float] NULL,
CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
[code]...
There are also three non-clustered indexes on this table, each referencing a single column. The problem from my side is that I cannot understand why the three columns in the primary key would also be configured as non-clustered indexes. My solution would be one of the following:
1. Accept the tables are going to be fragmented and require a daily rebuild (don't like this one!)
2. Partition the table (see the sketch after this list)
3. Remove the non-clustered Indexes and let the clustered index for the primary key do the work.
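If option 2 were chosen, a minimal partitioning sketch might look like the following, assuming monthly partitioning on [Day]; the boundary values and filegroup are illustrative, and partitioning requires Enterprise Edition on SQL Server 2014.
CREATE PARTITION FUNCTION pf_InventoryByMonth (smalldatetime)
    AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME ps_InventoryByMonth
    AS PARTITION pf_InventoryByMonth ALL TO ([PRIMARY]);

-- Maintenance can then target only the partition receiving the daily inserts, e.g.:
-- ALTER INDEX PK_Inventory ON dbo.Inventory REBUILD PARTITION = 4;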
May 18, 2015
I would like to put a clustered index on a date column in a current heap, but I have one question/concern. Every month this heap has thousands of rows deleted and even more added later. How much of an issue will this cause the clustered index as far as page splits? I was thinking of a fill factor of 70%. I would normally just test (and still will on the dev box), but my dev box is much smaller than production as far as power.
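A minimal sketch of that build, assuming the 70% fill factor suggested above; table and column names are illustrative, and ONLINE = ON requires Enterprise Edition.
CREATE CLUSTERED INDEX CIX_MyHeap_DateCol
    ON dbo.MyHeap (DateCol)
    WITH (FILLFACTOR = 70, SORT_IN_TEMPDB = ON, ONLINE = ON);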
Jun 16, 2015
I am trying to index dates to numbers with a large data set.
The first column is Index, the next is FactorS, the next is Value, the next is Date and the last is Lag.
Would it be difficult to write code that would determine the lag values? The lag value is based on the date value.
Index FactorS Value Date Lag
1 XYZ 2.3 12/31/2014 1
2 XYZ 1.4 12/30/2014 2
3 XYZ 3.3 12/29/2014 3
4 ABC 1.8 12/31/2014 1
5 ABC 2.2 12/30/2014 2
6 CBA 1.7 12/31/2014 1
7 CBA 1.8 12/30/2014 2
8 CBA 1.9 12/29/2014 3
9 CBA 2.1 12/28/2014 4
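From the sample rows, Lag looks like a per-factor ranking of dates with the newest date as 1, which a window function can compute directly. A sketch under that assumption (the table name is illustrative):
SELECT
    [Index], FactorS, Value, [Date],
    DENSE_RANK() OVER (PARTITION BY FactorS ORDER BY [Date] DESC) AS Lag
FROM dbo.FactorValues;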
Jul 1, 2015
I created a columnstore index on a table with 20 columns and about 1,000,000,000 rows;
about 5M rows are added every day.
SELECT queries became faster because of batch mode, and the table demands less disk space than before.
I also have 6 similar tables with 5,000,000,000 rows and plan to move them to columnstore indexes.
The server has 128 GB of RAM.
What pitfalls could I face if I have so many columnstore indexes on one server?
How could I see problems in the DMVs?
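A hedged starting point for the DMV question: rowgroup health is usually the first thing to watch, since many open/closed delta rowgroups or undersized compressed rowgroups point at memory pressure or small batch loads. On SQL Server 2014, sys.column_store_row_groups exposes this per table:
SELECT OBJECT_NAME(object_id) AS TableName,
       state_description,
       COUNT(*)        AS rowgroups,
       SUM(total_rows) AS total_rows
FROM sys.column_store_row_groups
GROUP BY object_id, state_description
ORDER BY TableName, state_description;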
Jul 9, 2015
I have a situation where I need to rebuild indexes on a large DB (500G).
When I do a test run of the rebuilds in my test environment it uses 100G of space - which is fine with me.
When I do a rebuild in my High Availability environment - same DB, same script - it eats up over 600G of space and fills the volume.
What can I do without removing my DB from H/A to rebuild the indexes?
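A hedged sketch of the levers that remain, assuming the extra consumption is transaction log: an availability-group database must stay in FULL recovery, so rebuilds are fully logged, and frequent log backups during the maintenance window plus SORT_IN_TEMPDB (and rebuilding fewer indexes per run, or reorganizing instead) are the usual mitigations without removing the DB from H/A. Names and paths are illustrative.
BACKUP LOG MyAGDatabase TO DISK = N'\\backupshare\MyAGDatabase_log.trn';   -- run often during maintenance
ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);
-- Reorganizing a few indexes per night instead of rebuilding all caps per-run log use.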
Aug 18, 2015
I have created a fact table which has a unique clustered index, as below:
CREATE UNIQUE CLUSTERED INDEX [FactSales_SalesID] ON [dbo].[FactSales] (salesid ASC)
WITH (DATA_COMPRESSION = PAGE)
GO
However, later when I add a clustered columnstore index:
CREATE CLUSTERED COLUMNSTORE INDEX CSI_FactSales
ON dbo.FactSales WITH (DATA_COMPRESSION = COLUMNSTORE)
GO
it prompts an error:
Msg 35372, Level 16, State 3, Line 167 You cannot create more than one clustered index on table 'dbo.FactSales'. Consider creating a new clustered index using 'with (drop_existing = on)' option.
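Two hedged ways around it: replace the existing clustered index in place with DROP_EXISTING (the name must match the current clustered index), or drop it first and then create the columnstore. Note that on SQL Server 2014 a clustered columnstore cannot coexist with any other index and does not enforce uniqueness, so the unique guarantee on salesid is lost either way.
-- Option 1: convert the existing rowstore clustered index in place
CREATE CLUSTERED COLUMNSTORE INDEX [FactSales_SalesID]
    ON dbo.FactSales
    WITH (DROP_EXISTING = ON, DATA_COMPRESSION = COLUMNSTORE);

-- Option 2: drop the rowstore clustered index and create the columnstore under the new name
-- DROP INDEX [FactSales_SalesID] ON dbo.FactSales;
-- CREATE CLUSTERED COLUMNSTORE INDEX CSI_FactSales ON dbo.FactSales;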
Oct 4, 2015
I want to create a lot of indexes on my database for performance,
but I need to find the memory usage of those indexes.
How do I find the memory used by an index in SQL Server?
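A hedged sketch of one way to measure it: count buffer pool pages per index via sys.dm_os_buffer_descriptors, which shows how much of each index is currently held in memory for the current database.
SELECT OBJECT_NAME(p.object_id)        AS TableName,
       i.name                          AS IndexName,
       COUNT(*) * 8 / 1024             AS BufferPoolMB
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN sys.allocation_units AS au
    ON au.allocation_unit_id = bd.allocation_unit_id
   AND au.type IN (1, 3)               -- IN_ROW_DATA, ROW_OVERFLOW_DATA
INNER JOIN sys.partitions AS p
    ON p.hobt_id = au.container_id
INNER JOIN sys.indexes AS i
    ON i.object_id = p.object_id
   AND i.index_id  = p.index_id
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id, i.name
ORDER BY BufferPoolMB DESC;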
Oct 31, 2015
We use the SQL Server AlwaysOn feature on my database, and we distribute statements between the main database server and the mirror database server to raise performance.
My policy for splitting statements is: DML (insert, update and delete) statements go to the main DB, and read (SELECT) statements go to the mirror DB.
I want to know whether I can use different indexes on the main DB and the mirror database,
because some indexes that are used on the mirror DB are not used on the main database.
Aug 4, 2014
I have an issue where I am getting an error on an unique index.
I know why I am getting the error but not sure how to get around it.
The query does a check on whether a unique value exists in the Insert/Select. If I run it one record at a time (SELECT TOP 1...) it works fine and just won't update it if the record exists.
But if I do it in a batch, I get the error. I assume this is because it does the checking on the file before records are written out and then writes out the records one at a time from a temporary table.
It thinks all the records are unique because it compares the records one at a time to the original table (where there would be no duplicates). But it doesn't check the records against each other. Then when it actually writes out the record, the duplicate is there.
How do I run this as a batch so that the Insert/Select writes out the records without the duplicates, as it does when I do it one record at a time?
CREATE TABLE #TestTable
(
Name varchar(50),
Email varchar (40)
)
Insert #TestTable (Name,Email) Values('Tom', 'tom@aol.com')
[Code] .....
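A hedged sketch of one way to do it in a single statement: de-duplicate within the batch using ROW_NUMBER and screen against the target with NOT EXISTS. The target table name and the assumption that Email is the unique key are illustrative.
INSERT INTO dbo.TargetTable (Name, Email)
SELECT s.Name, s.Email
FROM (
    SELECT Name, Email,
           ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Name) AS rn
    FROM #TestTable
) AS s
WHERE s.rn = 1                                                       -- one row per Email within the batch
  AND NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Email = s.Email);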
Jun 26, 2015
We face a slow performance issue: the same query takes a long time to execute after we apply an index rebuild and reorganize. But after executing the query or procedure 2-3 times, performance becomes faster. I have the following questions:
1. Do we need to update stats after we rebuild and reorganize indexes?
2. Will every query and stored procedure be slow for the first 1-2 executions after we rebuild and reorganize indexes?
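On question 1, a hedged note: REBUILD refreshes the statistics of the rebuilt index with the equivalent of a full scan, but REORGANIZE does not touch statistics at all, so updating them after a reorganize pass is reasonable. The table name below is illustrative.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;   -- per table
-- or, sampled across the whole database:
EXEC sp_updatestats;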
Aug 6, 2015
I need to restore a SQL 2008 DB on a SQL 2014 instance without upgrading the database (changing its compatibility level).
This database has full-text enabled; I see one full-text catalog and 2 full-text indexes.
Do I need to worry about anything, or can I perform a clean restore from the backup with full-text import?
Do I need to rebuild the catalog in this case?
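For reference, a hedged sketch: the instance-wide upgrade_option controls whether restored full-text catalogs are imported, reset, or rebuilt, and a manual per-catalog rebuild is available if needed. The catalog name is illustrative.
EXEC sp_fulltext_service @action = 'upgrade_option', @value = 2;   -- 0 = rebuild, 1 = reset, 2 = import
ALTER FULLTEXT CATALOG MyCatalog REBUILD;                          -- manual rebuild, if required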
Sep 15, 2015
I'm trying to determine how much space I would need on my data drive and log file drive to do an index rebuild. I have a database which is 100 GB, and it is in simple recovery mode. What should I look at to determine how much space is needed?
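A hedged starting point: the largest index being rebuilt roughly dictates the extra data-file space, since a complete copy is built before the old structure is dropped; in SIMPLE recovery the rebuild is minimally logged but still runs as a single transaction, so the log must accommodate it. Per-index page counts give those sizes:
SELECT OBJECT_NAME(ps.object_id)          AS TableName,
       i.name                             AS IndexName,
       ps.used_page_count * 8 / 1024      AS SizeMB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id
ORDER BY ps.used_page_count DESC;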
Oct 27, 2015
We are on SQL 2014. We have a bunch of views in a database, and we are trying to find the views that would need more than the 16-column maximum for a unique index/constraint key; this is needed so we can convert them to indexed views.
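A hedged first pass, assuming the intent is to flag views that expose more than 16 columns (the key of a unique clustered index on a view is limited to 16 columns and 900 bytes on SQL Server 2014):
SELECT OBJECT_SCHEMA_NAME(v.object_id) AS SchemaName,
       v.name                          AS ViewName,
       COUNT(c.column_id)              AS ColumnCount
FROM sys.views AS v
INNER JOIN sys.columns AS c
    ON c.object_id = v.object_id
GROUP BY v.object_id, v.name
HAVING COUNT(c.column_id) > 16;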
Sep 13, 2014
I've been fixing some issues lately where weekly maintenance has been causing logs to grow and filling disks.
Is there any rule of thumb for allocating log space for doing reorgs and rebuilds in a worst case scenario? I'm thinking 3x the largest database size?
I've been watching them run on databases in the range of 50GB where the logs are growing well over that for rebuilds or even reorgs. Once you have a few databases like this on a server, you can suddenly eat through a lot of disk space just for holding logs during maintenance.
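Rather than a blanket multiplier, a hedged approach is to sample actual log use while a maintenance run executes and size against that empirical worst case:
DBCC SQLPERF (LOGSPACE);                                 -- % of each log file currently in use
SELECT name, log_reuse_wait_desc FROM sys.databases;     -- what is preventing log reuse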
Mar 2, 2015
I have a 10 GB index and only 5 GB of disk space left.
How can I rebuild the index?
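A hedged sketch of the usual options: REORGANIZE works in place and needs almost no free space, unlike REBUILD, which builds a complete copy before dropping the old index; if a rebuild is unavoidable, SORT_IN_TEMPDB moves the sort work to tempdb, though the new copy of the index still needs room in the data file. Index and table names are illustrative.
ALTER INDEX IX_BigIndex ON dbo.BigTable REORGANIZE;
-- ALTER INDEX IX_BigIndex ON dbo.BigTable REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);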
Oct 9, 2015
I am trying to use an indexed view to allow aggregations to be generated more quickly in my test data warehouse. The fact table I am creating the indexed view on has a partitioned clustered columnstore index.
I have created a view with the following code:
ALTER view dbo.FactView
with schemabinding
as
select local_date_key, meter_key, unit_key, read_type_key, sum(isnull(read_value,0)) as [s_read_value], sum(isnull(cost,0)) as [s_cost]
, sum(isnull(easy_target_value,0)) as [s_easy_target_value], sum(isnull(hard_target_value,0)) as [s_hard_target_value]
, sum(isnull(read_value,0)) as [a_read_value], sum(isnull(temperature,0)) as [a_temp], sum(isnull(co2,0)) as [s_co2]
, sum(isnull(easy_target_co2,0)) as [s_easy_target_co2]
, sum(isnull(hard_target_co2,0)) as [s_hard_target_co2], sum(isnull(temp1,0)) as [a_temp1], sum(isnull(temp2,0)) as [a_temp2]
, sum(isnull(volume,0)) as [s_volume], count_big(*) as [freq]
from dbo.FactConsumptionPart
group by local_date_key, read_type_key, meter_key, unit_key
I then created an index on the view as follows:
create unique clustered index IDX_FV on factview (local_date_key, read_type_key, meter_key, unit_key)
I then followed this up by running some large calculations that required the aggregation functionality on the main fact table, grouping by the clustered index columns and only returning averages and sums that are available in the view, but it still uses the underlying table to perform the aggregations rather than the view I have created. Querying the indexed view directly takes 75% less time than using the fact table. I thought the expected behaviour in SQL Server Enterprise or Developer edition (I am using Developer edition) was that the query against the fact table should have used the indexed view. What might I be missing, for the query not to be using the indexed view?
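A hedged diagnostic step: querying the view with the NOEXPAND hint forces the indexed view to be used; if that is fast and returns the expected results, the remaining question is only why automatic view matching isn't firing for the base-table query.
SELECT local_date_key,
       SUM(s_read_value) AS total_read,
       SUM(s_cost)       AS total_cost
FROM dbo.FactView WITH (NOEXPAND)
GROUP BY local_date_key;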
Nov 3, 2015
I am getting an intermittent problem executing queries on my workstation. Whenever I execute a couple of stored procedures that return a large number of rows (200,000 to 500,000), I receive an intermittent error: running the same SP with the same parameters, sometimes the procedure runs fine and returns the result, and sometimes I get an error message.
Sometimes the error message is: Index was outside the bounds of the array.
Sometimes the error message is: Internal connection fatal error.
Sometimes I get a datetime overflow.
I also occasionally get these intermittent messages when I execute the native query that the SP wraps.
I have SSMS 2015 SP1 installed on my workstation; the servers range from 2008 R2 SP2 to 2014 SP1. It feels like a client problem, as there are applications connecting to these servers and running the stored procedures without issue...
Sep 17, 2015
How can we get the list of clustered columnstore indexes in a database in SQL Server 2014?
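A minimal sketch: clustered columnstore indexes are type 5 in sys.indexes.
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
       OBJECT_NAME(i.object_id)        AS TableName,
       i.name                          AS IndexName
FROM sys.indexes AS i
WHERE i.type = 5;          -- type_desc = 'CLUSTERED COLUMNSTORE'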
Jun 22, 2006
This question probably has been asked many a time, and yet I feel it is still relevant, for one thing because a search on this NG does not produce a desirable answer. It is kind of disappointing that MS would not be able to transfer ER relationships from an Access db to a SQL Server 7/2000-based one; the upgraded db/imported tables sitting on the SQL Server would not have PKs. Say you have 100 user tables: you have to first recreate PKs for each of them, then set up relationships between/among them, which is quite time consuming. Do you have a better way?
Along the same line, what options are out there for converting Access modules into SQL Server-based stored procedures and/or UDFs? The manual option is certainly there; third-party tools? I wouldn't trust them that much, though.
TIA.
Sep 29, 2006
Hello there,
We have about a dozen SQL Server 2000 Enterprise Edition servers in house. Our goal is to set up a clustered SQL Server 2005 and consolidate the existing dozen servers down to a few servers for easier management and maintenance. So there are 3 things that we want to accomplish:
1. upgrade to SQL server 2005,
2. Consolidate existing servers
3. Make a cluster server to get high availability
But I'm not sure what the right order is to achieve them. Upgrade each server to 2005 and then move them to the cluster server? Or set up the cluster server on 2005 and restore the existing DBs to it? Upgrade first or cluster first? Upgrade first or consolidate first? Pros and cons? Upgrade, or backup/restore? What do you recommend? We have lots of stored procedures, views and triggers, DTS packages and some replication. Any insight will be greatly appreciated.
-Jessie
Apr 16, 2015
I am using SQL Server 2012 SE. I am trying to delete rows from a couple of tables (GetPersonValue has 250 million rows and I am trying to delete 50 million; GetPerson has 35 million rows and I am trying to delete 20 million). These tables are in transactional replication. The plan is to delete data older than 400 days.
I tried moving the last 400 days of data to new tables, and it took about 11 hours. If I delete data in chunks of 500,000 then it takes a long time to rebuild the indexes (delete plus rebuild indexes: 13 hours). Since I am using Standard Edition, partitioning won't work.
Find the DDL below:
GO
CREATE TABLE [dbo].[GetPerson](
[GetPersonId] [uniqueidentifier] NOT NULL,
[LinedActivityPersonId] [uniqueidentifier] NOT NULL,
[CTName] [nvarchar](100) NULL,
[SNum] [nvarchar](50) NULL,
[PHPrimary] [nvarchar](50) NULL,
[code]....
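A hedged sketch of a batched delete that keeps each transaction (and replication latency) small; the date column name is illustrative, since the posted DDL is truncated, and log backups between batches keep the log in check under FULL recovery.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000)
    FROM dbo.GetPersonValue
    WHERE CreatedDate < DATEADD(DAY, -400, GETDATE());   -- hypothetical date column

    SET @rows = @@ROWCOUNT;
END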
Jul 28, 2015
I'm looking for clarity on partition switching. The idea is to run many BULK INSERT statements into tables dbo.X_n in parallel and, when the BULK INSERT for table dbo.X_n is completed, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.
In learning about partition switching (in part) from The Data Loading Performance Guide, under Partition SWITCH, I understand the instructions to say: copy the main table exactly to become a target. But in that same step (#1), I read that we need to change the default filegroup of the target (dbo.X_n) from the default filegroup. Then it says I need to match indexes, and lists the filegroup as something we need to match with the main table.
As an overview of the partition switching strategy, I think the whole point of BULK INSERT with partitioning is to have seperate files (in same group) to enable concurrent uploading where each table has its own file. Once the upload is completed to a table (dbo.X_n) then we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn’t actually move, just the metadata for it.
“Don’t have the same filegroup on your target as the main table. You must have the same filegroup on your target as the main table.”
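For reference, a hedged sketch of the switch itself; the partition function name and boundary value are illustrative, and the staging table must match the target's columns and indexes and sit on the same filegroup as the destination partition for the switch to succeed.
ALTER TABLE dbo.X_1
    SWITCH TO dbo.bigdaddy PARTITION $PARTITION.pf_bigdaddy('2015-07-28');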
Mar 29, 2007
I created an SSIS package and several data flow components for this package.
What strategy exists to deploy the SSIS package and data flow components onto an enterprise server?
Thanks in advance.
Oct 20, 2006
Please explain the differences between the logical and physical operations whose graphical icons we can see in the execution plan tab in Management Studio.
Thank you in advance.
Sep 5, 2007
I am hoping someone can give some advice on the following things:
I have read a few times about a data access layer in an n-tier application. I am assuming that this should be done using sprocs. Is there an advantage to using sprocs instead of views (in situations where the same thing could be accomplished using either)? Will a sproc run faster than a view? Can anyone share any info?
Are sprocs best suited for data access and for enforcing business rules?
I know SQL Server has reserved words that shouldn't be used, and I am wondering what the best thing to do is in the following situation. What is the best way to handle storing a customer or client's address? I am working from a book that shows the name of a column as "Address". I have found that with SQL Server 2005 Express this appears to be a reserved word (it is shown in blue in the query window). I want to keep my names short and am trying to avoid a name like "StreetAddress". Is my book teaching bad habits?
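On the naming question, a hedged illustration: Address is not on the T-SQL reserved keyword list (SSMS colours many non-reserved words blue), and delimiting the identifier with brackets removes any ambiguity either way, so the short name can be kept. The table definition below is purely illustrative.
CREATE TABLE dbo.Customer (
    CustomerId int IDENTITY(1,1) PRIMARY KEY,
    [Address]  nvarchar(100) NULL   -- brackets make the highlighted word unambiguous
);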
...........................................thanks...........................................................
Jun 22, 2007
This (demo) statement is fine in Access, and so far as I can see, should be OK in SQL Server. But Enterprise Manager barfs at the final bracket. Can anyone help please?
select sum(field1) as sum1, sum(field2) as sum2 from
(SELECT * from test where id < 3
union
SELECT * from test where id > 2)
In fact, I can reduce it to:
select * from
(SELECT * from test)
with the same effect - clearly I just need telling :-)
cheers,
Jim
--
Jim
a Yorkshire polymoth
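A hedged sketch of the likely fix: SQL Server requires an alias on a derived table, which Access tolerates omitting, so adding one after the closing bracket usually resolves this error.
select sum(field1) as sum1, sum(field2) as sum2
from
(
    SELECT * from test where id < 3
    union
    SELECT * from test where id > 2
) as t    -- the missing alias is what the parser complains about at the final bracket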
Oct 11, 2007
I've been trying to find some type of blog post, tutorial, etc. on how to set up SQL Server 2005 to serve data to a website.
1. How do I set up users?
a) Can I have 3 roles?
1a) Owner of DB: can read/write
2a) Reader: can only read from the database
3a) Writer: can only write to the database
How would I set this up? How can I call all of these from ASP.NET depending on what the user is currently doing on the website? E.g. just serving pages with content (reader), forms (writer), admin (owner). I also need SQL to keep sessions (I've already run aspnet_regsql.exe and created all that); I'm just unsure what type of user can access all this. Any tutorials on how to set up a whole web application project from DB to VS 2005? Thanks
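A hedged sketch of the three-role layout on SQL Server 2005; role and user names are illustrative, and each ASP.NET connection string would use the login mapped to the appropriate database user.
CREATE ROLE web_reader;
CREATE ROLE web_writer;
GRANT SELECT ON SCHEMA::dbo TO web_reader;
GRANT INSERT, UPDATE, DELETE ON SCHEMA::dbo TO web_writer;
EXEC sp_addrolemember 'web_reader', 'WebReaderUser';
EXEC sp_addrolemember 'web_writer', 'WebWriterUser';
EXEC sp_addrolemember 'db_owner',  'WebOwnerUser';    -- the owner role can read and write everything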
Jul 7, 2005
Hi all,
I have some C# code that is pulling data from a database where a majority of the values being retrieved are NULL, yet their underlying column data types are string and int, which means that I have to temporarily store these NULLs in int and string variables in C#. Later on in my code I have to test against these values, and I was wondering if I am doing it correctly with the following code.
In the following statement the variable or_team_home_id is of a string data type, but may have had a NULL value assigned to it from the database:
if (!or_team_home_id.Equals(DBNull.Value)) {}
In the following statement the variable or_manager_id is of an int data type, but may also have had a NULL value assigned to it from the database:
if (!Convert.IsDBNull(or_manager_id)){}
Are these the correct ways to test against NULL values retrieved from
the database and stored in their respective data types?
Tryst