Maintaining SQL, Defragmenting Index Or Hard Drive?

Apr 4, 2007

Hello All!

I have an ASP.NET website with a SQL 2005 DB.
The DB is 1.5GB with ~10 tables in it. The largest table (the website users table) has 200k records, with 500 new records every day.

I set up this database 4 months ago and haven't touched it since.
I really have no knowledge of what SQL Server needs in terms of index maintenance / hard drive maintenance.

Lately, the website searches have become really slow, and I have started to get timeout errors and deadlock errors.
I have a few indexes on each table, based on the recommendations the Database Engine Tuning Advisor gave me.

Some of the indexes show:
Page fullness: 99%
Total fragmentation: 24%

Others show:
Page fullness: 65%
Total fragmentation: 99%


I guess I need to start maintaining the DB, defragmenting the indexes or the hard drive?
Can anyone help me and provide a guide/information on what needs to be done in order to keep SQL running fast and happily?
Or a guide on defragmenting indexes, and how often do I need to defrag?
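For reference, the kind of statements I have since found for checking and fixing this on SQL 2005 look roughly like the following (table and index names here are placeholders, and the 30% threshold is only a common rule of thumb):

-- how fragmented is everything in the current database?
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- lightly fragmented (roughly 5-30%): reorganize
ALTER INDEX IX_Users_LastName ON dbo.Users REORGANIZE;

-- heavily fragmented (roughly over 30%): rebuild
ALTER INDEX IX_Users_LastName ON dbo.Users REBUILD;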

Thanks,
Shar

View 15 Replies



Maintaining Index

Oct 22, 2000

What is the best way to perform index maintenance? I use SQL Server 7.0.
We have been having slow server performance, and one of the options is to do index maintenance. I have researched it but could not get a clear picture of what I should do. Has anybody performed the same task before? Thanks for your help!
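For what it's worth, on 7.0 the commands usually mentioned are DBCC SHOWCONTIG to measure fragmentation and DBCC DBREINDEX to rebuild; roughly like this (the table name is a placeholder):

-- measure fragmentation (7.0 expects the object id; 2000 also accepts names)
DECLARE @id int
SELECT @id = OBJECT_ID('Orders')
DBCC SHOWCONTIG (@id)

-- rebuild all indexes on the table with a 90 percent fill factor
DBCC DBREINDEX ('Orders', '', 90)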

View 1 Replies View Related

Defragmenting SQL 2000

Jan 21, 2003

I'm new to SQL, and I am wondering whether I should get a defragmenting program to run on the server. If so, what are some dos and don'ts to look out for?

Thanks

View 2 Replies View Related

Saving Image In SQL Server Or On Hard Drive

Sep 24, 2003

I am trying to store millions of pictures on a web server with lots of traffic. Some people told me SQL Server cannot handle it and may freeze quickly. Can anybody tell me which way is better and more efficient: storing the images in a SQL Server image field, or storing them on the server's hard drive?

View 13 Replies View Related

Maintaining History

Jun 11, 2008

Hi,
I am working on an application using C#, Visual Studio 2005, and SQL Server 2005.
I have a few tables in SQL Server 2005.
I need to save the history, i.e. all the inserts, updates, and deletes performed on the tables.
Can anyone suggest how I can achieve that?
Should I use triggers and save the changes in another table?
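For reference, the rough shape of trigger I have in mind is below (the table and column names are just placeholders):

CREATE TRIGGER trg_Customers_Audit ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- rows as they look after an insert/update
    INSERT INTO dbo.Customers_History (CustomerID, Name, Action, ChangedAt)
    SELECT CustomerID, Name, 'INSERT/UPDATE', GETDATE()
    FROM inserted;

    -- rows as they looked before an update/delete
    INSERT INTO dbo.Customers_History (CustomerID, Name, Action, ChangedAt)
    SELECT CustomerID, Name, 'UPDATE/DELETE', GETDATE()
    FROM deleted;
END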
Waiting for your suggestions.
Thank you

View 13 Replies View Related

Maintaining Atomicity

Apr 5, 2004

Hello Friends,
I am new to this SQL Server arena. I have implemented a procedure which does a series of insert and update statements, and all of these statements must be executed all at once or not at all. But if I get an error in some statement, the rest of the statements still get executed. Please suggest a way or a code snippet to achieve atomicity in a SQL Server procedure.
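To show what I mean, here is a sketch of the pattern I am trying for, using an explicit transaction and @@ERROR checks (table and procedure names are placeholders; on SQL Server 2005 a TRY/CATCH block would be cleaner):

CREATE PROCEDURE dbo.DoBatch
AS
BEGIN
    SET XACT_ABORT ON   -- abort and roll back the whole transaction on most run-time errors

    BEGIN TRANSACTION

    INSERT INTO dbo.TableA (Col1) VALUES ('x')
    IF @@ERROR <> 0 GOTO ErrHandler

    UPDATE dbo.TableB SET Col1 = 'y' WHERE Id = 1
    IF @@ERROR <> 0 GOTO ErrHandler

    COMMIT TRANSACTION
    RETURN 0

ErrHandler:
    ROLLBACK TRANSACTION
    RETURN 1
END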

regards,
Ch.Praveen Kumar.

View 4 Replies View Related

Maintaining Statistics

Jan 29, 2008

Scenario:
For the most part we let SQL Server (2005) maintain our statistics for us. However, we do have several large processes written in stored procedures. There is one main controller procedure that can call any number of other procedures. This process can take anywhere from 5 minutes to an hour+ to run (based on the size of the client). Back in the days of SQL Server 2000 we found that the performance of this procedure would diminish over time (while it was running). We implemented a queued concept of issuing UPDATE STATISTICS commands. This was done by adding a SQL Server job that ran every 10 minutes looking for new records in a table. Records were inserted at key points in these stored procedures (after large deletes, updates, inserts).

Goal:
Now, with all that background and with 2005, I'd like to review this concept and remove this implementation if possible, or at least decouple the statistics maintenance from the business jobs. In 2005, are there better ways to monitor and maintain statistics in a more administrative (but automated) way?
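To make the question concrete, the sort of statements the job currently issues, or that I assume an administrative replacement would issue, are along these lines (the table name is a placeholder):

-- refresh statistics on one table (FULLSCAN is heavier but thorough)
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- or refresh everything in the database that needs it
EXEC sp_updatestats;

-- see how stale the statistics on a table are (2005)
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.BigTable');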

View 15 Replies View Related

Maintaining A Database?

Feb 27, 2007

Our database(s) are all over the place - no documentation - lots of rubbish and unused stuff. I'm managing a project focusing on data quality that covers code changes, alterations to DTS packages, schema changes, etc. What I'd like to do is see where the bit I want to change is being used. That might mean which stored procs use a field, and which sprocs use that sproc. Maybe it's which DTS packages use a sproc (and again up the hierarchy). The list is a long one, but basically I need to know what the effects of changes are. Is there a tool out there that lets me navigate a database to that level of detail? I understand something along the same lines is available for MS Access, but I can't find it for SQL Server. Thanks
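The closest built-in things I have found so far are sp_depends and searching the object source text directly, though neither is complete (sp_depends is known to miss dependencies); for example:

-- which procedures/views reference a given object (dependency info can be incomplete)
EXEC sp_depends 'dbo.SomeTable';

-- brute-force: search procedure/view/trigger source for a column or object name
SELECT DISTINCT OBJECT_NAME(id)
FROM syscomments
WHERE [text] LIKE '%SomeColumnName%';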

View 1 Replies View Related

Maintaining Security

May 24, 2006

I am a beginner in SQL Server. I have developed a simple accounting application in VB and SQL. Now I have successfully completed my application and want to deploy it to my client, so I installed SQL Server and the required VB components on the client's computer. I also created an 'sa' login with a secret password known only to me. I thought my data on that client's computer was fully safe, but later on I found that one can also connect to SQL Server using the NT administrative account and easily change the data in the database. So now I am worried that if someone accesses the client's computer with the administrator's password, he/she can change my data, resulting in corruption of the data. Is there any way I can prevent access to the database through the NT administrative account, or any way to track how the data was changed?

View 5 Replies View Related

Urgent -- Maintaining Database

Apr 25, 2008

Hi all,
I am working on a portal site where I have created 18 tables in one database; I don't know whether I am right or wrong. Should I continue with the same design, or create two tables, one as a master and another containing the common fields?
And if I create one table for everything, what will happen?
Please tell me what to do and why, as soon as possible.
Thanks for spending your valuable time on me.
 

View 4 Replies View Related

Maintaining Variable After EXEC

Jul 20, 2005

Hello, I am fairly new at stored procedures. I have created some that will go through a table and return a start date and an end date that is dependent upon the fiscal period you want, but I then need to use those dates in another stored procedure to retrieve the information I need. My stored procedure looks like this:

CREATE PROCEDURE dbo.R920ExtTotal
@MthsBack Decimal OUTPUT
AS
DECLARE @sSQL AS NVARCHAR(255), @StartDate as SMALLDATETIME, @EndDate as SMALLDATETIME

Exec @StartDate = GetMthStart @MthsBack
Exec @EndDate = GetMthEnd @MthsBack

SET @sSQL = 'Select count(extension) as Total From r920f00 Where ([date] BETWEEN "' +
CONVERT(nvarchar, @StartDate) + '" and "' +
CONVERT(nvarchar, @EndDate) + '")'

Select @sSQL
EXEC (@sSQL)
Return
GO

The problem is my variables @StartDate and @EndDate do not retain their values after the EXEC statement and revert to 01/01/1900. How can I get around this problem? Thanks!!!! Chip
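I think what is happening is that EXEC @var = proc only captures the procedure's integer return code (0), and 0 converted to smalldatetime is 1900-01-01, so the dates themselves never come back. A sketch of the OUTPUT-parameter approach (the DATEADD body is just placeholder logic, and GetMthEnd would need the same change):

CREATE PROCEDURE dbo.GetMthStart
@MthsBack decimal,
@StartDate smalldatetime OUTPUT
AS
-- placeholder logic: the real proc would compute the fiscal period start
SET @StartDate = DATEADD(month, -CAST(@MthsBack AS int), GETDATE())
GO

-- in R920ExtTotal:
DECLARE @StartDate smalldatetime, @EndDate smalldatetime
EXEC dbo.GetMthStart @MthsBack, @StartDate OUTPUT
EXEC dbo.GetMthEnd @MthsBack, @EndDate OUTPUT

SELECT COUNT(extension) AS Total
FROM r920f00
WHERE [date] BETWEEN @StartDate AND @EndDate   -- no dynamic SQL needed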

View 2 Replies View Related

Maintaining Partitioned Views

Jul 20, 2005

Hello, I have a large set of data that I have set up as a partitioned view. The view is partitioned by a datetime column and the individual tables each represent one month's worth of data. I need to keep at least two years' worth of data at all times, but after two years I can archive the data. A sample of the code used is below; it is simplified for space reasons.

My question is, how do other people maintain the database in this type of scenario? I could create all of the tables necessary for the next year and then go through that at the end of each year (archive tables over two years, add new tables, and change the view), but I was also thinking that I might be able to write a stored procedure that runs once a month and does all three of those tasks automatically. It seems like a lot of dynamic SQL code, though, for something like that. Alternatively, I could write VB code to handle it in a DTS package.

So, my question again is, how are others doing it? Any suggestions? Thanks! -Tom.

CREATE TABLE [dbo].[Station_Events_200401] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL )
GO

CREATE TABLE [dbo].[Station_Events_200402] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL )
GO

CREATE VIEW Station_Events
AS
SELECT event_time, another_column
FROM Station_Events_200401
UNION ALL
SELECT event_time, another_column
FROM Station_Events_200402
GO
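As a starting point for the once-a-month stored procedure idea, the dynamic SQL seems manageable; an untested sketch, using the YYYYMM naming convention from the tables above:

-- monthly job sketch: create next month's table, then rebuild the view
DECLARE @suffix char(6), @sql nvarchar(4000)
SET @suffix = CONVERT(char(6), DATEADD(month, 1, GETDATE()), 112)   -- next month as YYYYMM

SET @sql = N'CREATE TABLE dbo.Station_Events_' + @suffix + N' (
    [event_time] [datetime] NOT NULL,
    [another_column] [char] (8) NOT NULL)'
EXEC (@sql)

-- a similar loop over the table names found in sysobjects (keeping only the last
-- 24 months) can concatenate the SELECT ... UNION ALL list and EXEC an ALTER VIEW;
-- tables older than 24 months get archived/dropped in the same job.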

View 3 Replies View Related

Maintaining SQL Data On A Remote Host.

Oct 29, 2007

Hi, what is the preferred way to maintain SQL tables on a remote host? I am a newbie to building ASP.NET websites on a remote host. A stumbling point has been the maintenance of SQL tables on the remote host. I understand about doing complete backups and restores, but I am seeking a quicker way to maintain individual tables. I would like to click and edit, but instead am going through the following 30+ clicks. Is there an easier way? Thanks.

For example, what I do now to build a new data table for a hosted website:

1) Design table
 1a) Name
 1b) Fields & types

2) SQL Server Management Studio Express (assuming an existing database)
 2a) Select database & tables
 2b) Add new table
 2c) Add fields; key must be INT for Access
 2d) Save as (Name_Table)

3) MS Access (requires ODBC to be set up first through the Windows Control Panel)
 3a) Tables / New / Link / ODBC / Machine_Data_Source
 3b) Pick table
 3c) Edit data, as needed

4) To transfer data, first select the database in the VWD Solution Explorer, then right-click and select the new "Publish to Provider"
 4a) Database Publishing Wizard
 4b) Choose table to script a backup from
 4c) Build script & copy

5) Start Ipswitch FTP (this step can be replaced by 6e below)
 5a) Locate folder & SQL script file and choose destination directory
 5b) Transfer file

6) Log in to remote host (1and1)
 6a) MS SQL Administration
 6b) Admin (MyLittleTools Admin)
 6c) Tools
 6d) Query Analyser
 6e) Paste script (from step 4)
 6f) Submit (Run)
 6g) Verify table built

FYI: Script to build and populate the new table "Name_Table", built by step 4c above and pasted into the remote host's Query Analyser in step 6e above:

/****** Object:  Table [dbo].[Name_Table]    Script Date: 10/28/2007 18:03:58 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Name_Table]') AND type in (N'U'))
DROP TABLE [dbo].[Name_Table]
GO
/****** Object:  Table [dbo].[Name_Table]    Script Date: 10/28/2007 18:03:58 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Name_Table]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[Name_Table](
 [ID] [int] NOT NULL,
 [Name] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Address] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [City] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [State] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Zip] [nchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Acsz] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Phone] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Fax] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 CONSTRAINT [PK_Name_Table_1] PRIMARY KEY CLUSTERED ([ID] ASC)
 WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
END
GO
INSERT [dbo].[Name_Table] ([ID], [Name], [Address], [City], [State], [Zip], [Acsz], [Phone], [Fax]) VALUES (1, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)
INSERT [dbo].[Name_Table] ([ID], [Name], [Address], [City], [State], [Zip], [Acsz], [Phone], [Fax]) VALUES (2, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)
INSERT [dbo].[Name_Table] ([ID], [Name], [Address], [City], [State], [Zip], [Acsz], [Phone], [Fax]) VALUES (3, N'Third name', NULL, NULL, NULL, NULL, NULL, NULL, NULL)
 

View 1 Replies View Related

Maintaining SQL Server At Customer Sites

Apr 18, 2003

I am wondering how people maintain their SQL Servers which run at several customers' sites where disk space is getting smaller and smaller. What I mean is that we have tables in SQL databases which hold a lot of data consisting of statistics, errors, logs, etc.
They grow and grow, and the existing data is not needed anymore as soon as it gets older than, let's say, one year. How do you go about reducing the tables without loading the system too much, given that the major application also runs on the same server?
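One pattern I have seen suggested is purging old rows in small batches from a nightly job, so the delete doesn't hold long locks while the application is running; a rough sketch (table and column names are placeholders):

SET ROWCOUNT 5000          -- SQL 2000 style batch size; on 2005+ you can use DELETE TOP (5000)
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.ErrorLog
    WHERE LoggedAt < DATEADD(year, -1, GETDATE())

    IF @@ROWCOUNT = 0 BREAK   -- nothing older than a year left
END
SET ROWCOUNT 0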

Thanks for any input

mipo

View 1 Replies View Related

Maintaining A Log For The Users Connected To Sql Server Db.

Dec 8, 2005

Hi !

I need to maintain a record of how many times any user (e.g. sa) connects to SQL Server. That is, whenever any person connects to the database, through an application or directly, I need to know which SQL login (e.g. sa) they connected with.
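In case it is useful, on SQL Server 2005 a scheduled job could snapshot the current connections into a log table; a rough sketch (the AuditLogins table is something you would create yourself):

-- run from a SQL Agent job every few minutes, for example
INSERT INTO dbo.AuditLogins (login_name, host_name, program_name, login_time, captured_at)
SELECT login_name, host_name, program_name, login_time, GETDATE()
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

-- note: this only samples who is connected at that moment; for a true
-- per-connection record, a server-side trace of the Audit Login event is the usual approach.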

Regards,
Shabber Abbas Rizvi.

View 1 Replies View Related

Suggestions On Maintaining Audit Fields

Apr 14, 2008

Currently all of our tables in several databases have the following columns:

user_added (this is nvarchar)
host_added (this is nvarchar)
date_added (this is datetime)
user_modified (this is nvarchar)
host_modified (this is nvarchar)
date_modified (this is datetime)

Right now our policy is that (a) the _added columns use defaults to populate the data on INSERTs, and (b) triggers are generated to update the _modified fields upon an UPDATE of the table.

Our practice has been (a) to manually create these fields in our scripts as we create new tables in our system and (b) create triggers to perform the update anytime we create a new table.

This practice has been fine until recently where we have been outsourcing some of our development and not all of our standards have been adhered to, including this one. I'd like to look at alternatives for somehow maintaining these concepts outside of our development workflows.

The first thing I'd like to inquire about is options to eliminate having developers include these columns in their CREATE TABLE statements. Is it possible in SQL Server 2005 to capture when a CREATE TABLE statement is executed and override/append to the initial CREATE TABLE statement?

The second thing I'd like to inquire about is regarding options to eliminate having developers write the initial trigger that maintains the _modified fields. I guess if there are options to capture when a CREATE TABLE statement is executed, we could possibly generate a CREATE TRIGGER statement against that object as well?
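For the first inquiry, a DDL trigger might be the mechanism; something like this sketch is what I am imagining (untested, and the ALTER TABLE it issues is only an illustration of appending the audit columns):

CREATE TRIGGER trg_AppendAuditColumns
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    DECLARE @ed xml, @schemaName nvarchar(128), @tableName nvarchar(128)
    SET @ed = EVENTDATA()

    SELECT @schemaName = @ed.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)'),
           @tableName  = @ed.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)')

    -- append (some of) the standard audit columns to the table that was just created
    EXEC ('ALTER TABLE [' + @schemaName + '].[' + @tableName + ']
           ADD date_added datetime NOT NULL DEFAULT (GETDATE()),
               user_added nvarchar(128) NOT NULL DEFAULT (SUSER_SNAME())')
END

The same trigger body could presumably also generate the _modified trigger for the new table, which would cover the second inquiry.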

Another idea I would like thoughts on are using some sort of 'table inheritence' to store this information for all objects in our database? This idea come up when I saw this article - http://www.sqlteam.com/article/implementing-table-inheritance-in-sql-server. Do you think the situation I explained here would fall into this concept?

I'm also open to any other thoughts and/suggestions.

View 2 Replies View Related

Maintaining Unique Keys When Offline

Aug 7, 2007

If you have an "Orders" table that is being synced to subscribers that are occasionally offline, and the subscribers add rows to their local Orders table, then when they go back online to sync with the published "Orders" table, how do you handle keeping the "OrderId" field unique?

Example:
Both salespeople sync the following data down:
OrderId Desc
1 Order 1
2 Test Order



Both salespeople go offline and add orders
Salesperson 1 adds:
OrderId Desc
3 Joes Order

Salesperson 2 adds:
OrderId Desc
3 Kathys Order


Now, when they go back online, they both will sync their orders up to the main database and they both have the OrderId of 3.
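One idea, assuming the schema can change: stop treating the plain incrementing OrderId as the real key and add something that cannot collide across offline subscribers (merge replication will also add its own rowguid column, which could serve the same purpose):

-- Option A: a GUID key generated locally on each subscriber
ALTER TABLE dbo.Orders
    ADD OrderGuid uniqueidentifier NOT NULL
        CONSTRAINT DF_Orders_OrderGuid DEFAULT NEWID()

-- Option B: make the business key composite, e.g. (SalespersonId, OrderId),
-- so salesperson 1's order 3 and salesperson 2's order 3 are different rows.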

View 3 Replies View Related

Maintaining 2000 Maintenance Plans Using SQL 2005

May 4, 2007

Calling all those that use Maintenance Plans.

There are some peculiar goings-on happening on my servers. Plans across servers have been doing funny things like dumping backups in different folders, some jobs have been hanging, and so on and so forth.

And I think it's occurring after modifying SQL Server 2000 plans with SSMS. It's so intermittent that it's hard to put my finger on exactly when it's happening. It may not even have been reported yet. Fixing the anomalies is achieved by recreating the jobs (a matter of unticking and ticking the boxes in the Maintenance Plans).

In SQL 2000, if you create more than 1 schedule on a job that was itself created using a maintenance plan, SQL Server returns a message along the lines of "Dude, do this and weird things might happen". The message doesn't say what exactly, just that it can no longer guarantee the integrity of the plan. And indeed, weird things do happen; I tested it once.

I'm guessing this is a similar problem, but it has not yet been pinned down. Has anyone come across this when using SSMS to maintain 2000 plans?

At this stage, I would like to point out that this is not a slagging-off-Maintenance-Plans thread... I'm looking in the direction of anyone whose name starts with the letter Kristen or Tara.



Drew

"It's Saturday night; I've got no date, a two litre bottle of Shasta, and my all-Rush mix tape... LET'S ROCK."

View 1 Replies View Related

Maintaining Custom Colors When Exporting To Excel

May 28, 2007

I am using RGB hex #s in my report for some colors. They show up fine when rendering to a browser; however, when exporting to Excel my custom colors don't come over. I've tried creating a report.xlt file with my custom colors in it, but when I export to Excel it uses the standard Excel palette instead of the custom palette in my report.xlt.



Any ideas on how to retain my custom colors when exporting to Excel?

View 1 Replies View Related

SQL Newbie - Maintaining An Off-site Database Copy

May 24, 2007

Hi,



I am very new to SQL Server and am just starting to look into replication as a possible solution to a problem I have. I have a local database which will be running on either MSDE or SQL Express 2005. I have a database on a webserver running SQL Server 2005 Standard which I wish to keep as an exact copy of the MSDE/Express one. If data is changed in the local db then I want the webserver db to be updated with this data, preferably as the change occurs, but it could be on an hourly schedule if required.



If data in the webserver db is altered then I do not wish for it to update the local db; instead I would like it to be assumed that the change is erroneous and the item reset to whatever exists in the local db. In other words, should a table be erroneously dropped from the webserver db, the table would be recreated when it syncs with the local db (rather than propagating the change back to the local db and thus losing local data).



Basically I want the web db to be an exact mirror of the local db as often as possible. Given that the webserver is running SQL Server 2005 Standard and the local machine will be running either MSDE or SQL EXPRESS, is this achievable and what would be the best way of achieving it? Should I use merge replication or database mirroring? I have succeeded in getting 1-directional merge replication working by setting the webserver up as a publisher and distributor and setting up a push merge subscription on the local machine with an ExchangeType of 1. However, this won't correct the data in the web db if it is compromised until it is altered locally.



It crossed my mind that I could write a small application to utilise tablediff to compare each table in the DBs and get it to correct the web db if required on a regular basis. Would this be a silly approach?



I would appreciate expert advice!



Max

View 1 Replies View Related

Difference Between Index Seek && Index Scan && Index Lookup Operations?

Oct 20, 2006

Please explain the differences between these logical and physical operations, whose graphical icons we can see on the execution plan tab in Management Studio.

thank you in advance

View 3 Replies View Related

What Are The Basic Processes For Maintaining An SQL SERVER 2005 Database

Mar 24, 2008

What are the basic processes for maintaining an SQL SERVER 2005 Database?
Is there a top ten list of things you should do periodically to all SQL SERVER 2005 databases?
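Not an authoritative list, but the recurring items I keep seeing boil down to backups, integrity checks, index maintenance and statistics; roughly like this (database, path, table and index names are placeholders):

BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak' WITH INIT;      -- full backup
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';                 -- plus frequent log backups if using FULL recovery
DBCC CHECKDB (MyDb) WITH NO_INFOMSGS;                                -- integrity check
ALTER INDEX ALL ON dbo.BigTable REBUILD;                             -- or REORGANIZE for lightly fragmented indexes
EXEC sp_updatestats;                                                 -- refresh statistics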

View 2 Replies View Related

Problem In Maintaining Data Precision During Import From Flat Files.

Nov 19, 2007



In my scenario I have about a dozen flat files (text files) that I have to import into SQL Server 2005.
I am using the Flat File connection manager to carry out the task. The flat files contain data generated from Oracle.
When I import data from these text files into SQL, the main problem lies in converting the number(p,s) data type column from Oracle (in the text file) to the numeric(p,s) data type of SQL Server 2005.

The number(p,s) data type loses all of its digits after the decimal point (they become zero) during the import process.
For example

1.2434234390 (from text file,number(p,s) type of oracle) converts to 12.0000000000 (numeric(p,s)) of sql server 2005.

Is there any workaround to this problem? I urgently need help.



View 7 Replies View Related

Master Data Services :: Maintaining Contiguous Code IDs By Applying Business Rule?

Feb 29, 2012

I deleted some records out of an entity, and I'd like to keep the Codes contiguous and incremental, meaning no breaks between the code numbers. I created a business rule and applied it, but the codes remain the same.

I used the "Default to a generated Value" action, then selected the Code attribute, and saved.

Then, back in the entity, I applied business rules. But nothing seems to have happened, as there was no change in the codes.

There are no validation errors either.

View 2 Replies View Related

SQL - Cascading Delete, Or Delete Trigger, Maintaining Referential Integrity - PLEASE HELP ME!!!

Nov 13, 2006

I am having great difficulty with cascading deletes, delete triggers and referential integrity.

The database is in First Normal Form.

I have some tables that are child tables with two foreign keys to two different parent tables, for example:

        Table A
       /       \
  Table B     Table C
       \       /
        Table D

So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.

SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.

Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.

When I try to delete a row in table A, B or C, I get the error "Delete Statement Conflicted with COLUMN REFERENCE". This does not make sense to me, can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row...isn't that the whole point of delete triggers?????

This is an example of my delete trigger:

CREATE TRIGGER [DeleteA] ON A
FOR DELETE
AS
-- the deleted pseudo-table holds the parent rows being removed
Delete from B where MeetingID IN (SELECT ID FROM deleted);
Delete from C where MeetingID IN (SELECT ID FROM deleted);

And then Tables B and C both have delete triggers to delete child rows in Table D. But it never gets to that point; none of the triggers execute because the above error happens first.

So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.

So the question is, how do I maintain referential integrity and also have the database delete child rows, keeping in mind that the cascading deletes will not work because of the multiple cascade paths (which are certainly required).
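One possibility that might sidestep the problem (I have not tried it yet): replace the AFTER trigger on Table A with an INSTEAD OF DELETE trigger that deletes bottom-up, so the child rows are gone before the parent delete runs and the foreign keys stay enabled. Rough sketch, with the foreign key column names guessed:

CREATE TRIGGER [DeleteA_Instead] ON A
INSTEAD OF DELETE
AS
BEGIN
    -- grandchildren first (D references both B and C)
    DELETE FROM D WHERE B_ID IN (SELECT ID FROM B WHERE MeetingID IN (SELECT ID FROM deleted));
    DELETE FROM D WHERE C_ID IN (SELECT ID FROM C WHERE MeetingID IN (SELECT ID FROM deleted));

    -- then the children
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);

    -- finally the rows the user actually asked to delete
    DELETE FROM A WHERE ID IN (SELECT ID FROM deleted);
END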

Hope this makes sense...
Thanks,
Josh


View 6 Replies View Related

SQL 2012 :: Full Text Index How To Make It NOT To Index Embedded Or Attached Documents

Sep 30, 2015

I am using Full Text Index to index emails stored in a BLOB column in a table. The index process parses the stored emails and, if there are one or more files attached to the email, those documents get indexed too. As a result, when I query the full text index for a word or phrase, I get a reference to the email containing the word or phrase of interest whether the word was used in the email body OR in any document attached to the email.

How can I distinguish in a Full Text query that the result came from an embedded document rather than from the "main" document? Or, if that's not possible, how can I disable indexing of embedded documents?

My goal is either to give a user an option if he or she wants to search emails (email bodies only) OR emails AND documents attached to them, or at least clearly indicate in the returned result the real source where the word or phrase has been found.

View 0 Replies View Related

Clustered Index On Client_ID+OrderNo+OrderSubNo, If I Create 3 Nonclustered Indexes On Said Columns Will It Improve Performance

Dec 5, 2007



Dear All.

We had a Teradata 4700 SMP. We have moved data from TD to MS SQL Server 2003. There are 19.65 million records.

table is >> Order_Dtl

Columns are:-

Client_ID varchar 10
Order_ID varchar 50
Order_Sub_ID decimal
.....
...
..
.
Pk is (ClientID+OrderId+OrderSubID)

Web-based applications and PDA devices are used to initiate orders from all over the country. The issue is that this table is not partitioned, but a good HP server with 30 GB RAM is installed. This is the main table and it receives 18,0000 hits or more. All brokers and users are using this table to see the status of their orders.

They always search by OrderID, ClientID or Order_Sub_ID, or enter any two, like (Client_ID + Order_Sub_ID), or any combination.

Queries take too much time whenever the server receives more queries. Some other indexes are also created on the same table, like (OrderDate, OrdCreateDate and Status).

My Question are:-


Q1. If person "A" queries the DB on Client_ID, which index will be used? (If anyone queries on any two-column combination like Client_ID+Order_ID, which index will be used? How does MS SQL Server deal with these kinds of issues?)

Q2. If I create 3 more indexes on ClientID, OrderID and OrderSubID, will this improve query performance? If person "A" searches for a record on OrderNo, which index will be used? (Mind you, there would be 3 separate indexes, one for each PK column, and the composite clustered index is also available.)

Q3. I want to check which indexes have been used, and for which searches.

Q4. How can I check when a table was last populated, or the last date of update (DML)?
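For Q3 and Q4, assuming this is actually SQL Server 2005 or later (these views do not exist in 2000), something like this against the index usage DMV might answer both:

-- Q3: which indexes are actually being used (seeks/scans/lookups)
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name                    AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
       us.last_user_update        -- Q4: last DML touch seen through this index
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
  AND OBJECT_NAME(us.object_id) = 'Order_Dtl';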

My limitation is that I can't create a partitioned table. I don't have permission to do it.



In Teradata we had more than 4 TB of CRM data with no issue. I am not a newbie in the database world, but I'm not an expert in SQL Server 2003.


Thank you to all who read or reply.

Arshad

Manager Database
Esoulconsultancy.com

(Teradata Master)
10g OCP










View 3 Replies View Related

Integration Services :: Rebuild Index / Refresh Index And Stats Improves Ssis Package Performance

Oct 28, 2015

My SSIS package is running very slowly and taking a long time to execute. One task is taking 2 hours to insert 100k records. I have disabled unused indexes, but it is still taking time. I am rebuilding/refreshing indexes and stats once a month; if I run that on a daily basis, will it improve my SSIS package performance?
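If it is relevant, the pattern I was going to experiment with around the slow insert task is roughly this (index and table names are placeholders; only nonclustered indexes should be disabled, never the clustered one):

-- before the data flow runs
ALTER INDEX IX_Fact_SomeColumn ON dbo.FactTable DISABLE;

-- ... SSIS inserts the 100k rows ...

-- after the load: REBUILD re-enables the index and refreshes its statistics
ALTER INDEX ALL ON dbo.FactTable REBUILD;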

View 2 Replies View Related

Index Was Out Of Range. Must Be Non-negative And Less Than The Size Of The Collection. Parameter Name: Index

Jan 22, 2006

Keep getting this error when positioning to the last page of a report.

Using Server 2003...SqlRpt Svcs 2000 sp2

Detail error msg:

Exception of type Microsoft.ReportingServices.ReportRendering.ReportRenderingException was thrown. (rrRenderingError) Get Online Help

Exception of type Microsoft.ReportingServices.ReportRendering.ReportRenderingException was thrown.

Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index

Anyone have any suggestions?  Any way to find out what collection is blowing?...or where parameter name: index comes from?

View 47 Replies View Related

Index Table1 And Select For 647.600 Records.. It Is So Slow.. But I Have No Index :)???

Jun 20, 2008

Hello friends,
I have table1 with 200 columns, and I have 647,600 records. I entered my records into table1 with looping code, in one day :)
I select category1, category2 and category3 with a SELECT statement, but I have just one index: it is ProductNumber and it is the primary key. So my SELECT is very slow, 7-9 seconds. How can I select in 0.1 seconds? Should I create indexes for category1, category2 and category3? But I don't know how to create an index. My select code is below. Could you show me an index for it, or show me faster SELECT code and an index, etc.? Also, my search code has a danger related to hackers attacking table1 :)
Cheers. I send 3 values from TreeView1 (node, childnode and child.childnode) to the page.aspx below :)
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    If Not Me.IsPostBack Then
        If Request("TextBox1") IsNot Nothing Then
            TextBox1.Text = Request("TextBox1")
        End If
        If Request("TextBox2") IsNot Nothing Then
            TextBox2.Text = Request("TextBox2")
        End If
        If Request("TextBox3") IsNot Nothing Then
            TextBox3.Text = Request("TextBox3")
        End If
    End If

    Dim searchword As String
    If Request("TextBox3") = "" And Request("TextBox2") = "" Then
        searchword = "Select * from urunlistesi where kategori= '" & Request("TextBox1") & "'"
    End If
    If Request("TextBox3") = "" Then
        searchword = "Select * from urunlistesi where kategori= '" & Request("TextBox1") & "' and kategori1= '" & Request("TextBox2") & "'"
    End If
    If Request("TextBox3") <> "" And Request("TextBox2") <> "" And Request("TextBox1") <> "" Then
        searchword = "Select * from urunlistesi where kategori= '" & Request("TextBox1") & "' and kategori1= '" & Request("TextBox2") & "' and kategori2= '" & Request("TextBox3") & "'"
    End If
    SqlDataSource1.SelectCommand = searchword
End Sub
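A guess at the kind of index being asked for, based on the column names in the query above (untested; a parameterized query would also remove the injection risk mentioned):

CREATE INDEX IX_urunlistesi_kategori
ON urunlistesi (kategori, kategori1, kategori2)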

View 11 Replies View Related

The Index Entry For Row ID Was Not Found In Index ID 3, Of Table 357576312

Jul 9, 2004

Hi,

I'm running a merge replication on a sql2k machine to 6 sql2k subscribers.
For a few days now, only one of the merge agents has been failing with the following error:

The merge process could not retrieve generation information at the 'Subscriber'.
The index entry for row ID was not found in index ID 3, of table 357576312, in database 'PBB006'.

All DBCC CHECKDB commands return 0 errors :confused:
I'm not sure if the table that's referred to in the message is on the distribution side or the subscribers' side. A select * from sysobjects where id=357576312 gives different results on both sides.

Any ideas as to what is causing this error?

View 3 Replies View Related

Advantages Of Using Nonclustered Index After Using Clustered Index On One Table

Jul 3, 2006

Hi everyone,
When we create a clustered index first, is it then advantageous to create another index which is nonclustered?
In my opinion, yes it is. Because, since we created the clustered index first, our rows are sorted, and so when using a nonclustered index on this data file, finding the address of a record in this sorted data is easier than finding the address of a record in unsorted data, isn't it?

Thanks

View 4 Replies View Related

SQL 2012 :: Clustered Index Key Order In NC Index

Mar 5, 2015

I have a clustered index that consists of 3 int columns in this order: DateKey, LocationKey, ItemKey (there are many other columns in this data warehouse table such as quantities, prices, etc.).

Now I want to add a non-clustered index on just one of the other columns, say LocationKey, like this:
CREATE INDEX IX_test on TableName (LocationKey)

I understand that the clustered index keys will also be added as key columns to any NC indexes. So, in this case the NC index will also get the other two columns from the clustered index added as key columns. But, in what order will they be added?

Will the resulting index keys on this new NC index effectively be:

LocationKey, DateKey, ItemKey
OR
LocationKey, ItemKey, DateKey

Do the clustering keys get added to a NC index in the same order as they are defined in the clustered index?

View 1 Replies View Related






