I have a table with 5 million rows (6 months of data), and roughly 1 million rows are inserted each month (not bulk inserted). The table is properly indexed. A stored procedure generates a report based on this table alone (no joins) and does a lot of permutation work along the way (many temporary tables, GROUP BY ... HAVING, COUNT(), etc.).
It takes 2 minutes to generate the report, and I want the report generated in under a second.
Should I partition this table? Its size is 2 GB. I don't know whether this table is a good candidate for partitioning or not. Please help me.
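For what it's worth, partitioning in SQL Server 2005 and later mostly buys manageability (fast switching in and purging of old months), not raw query speed; a 2 GB table is small, and a 2-minute report is more likely bounded by the work inside the procedure. Still, a minimal monthly-partitioning sketch, assuming SQL Server 2005+ and a date column I'll call ReportDate (a hypothetical name; the boundary dates are placeholders too):

    CREATE PARTITION FUNCTION pfMonthly (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-04-01', '2007-05-01', '2007-06-01');

    CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);

    -- Rebuilding the clustered index on the scheme partitions the table.
    CREATE CLUSTERED INDEX IX_ReportTable_ReportDate
    ON dbo.ReportTable (ReportDate) ON psMonthly (ReportDate);

Before partitioning, it is usually more productive to profile the stored procedure and index for its specific GROUP BY patterns.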
I have a table with almost 2 million records and two columns (Col1 int, Col2 int) whose combination is unique. When I run a simple SELECT such as SELECT * FROM tblTableName WHERE Col1 = 5 with the execution plan on, the cost is extremely high, even with a primary key on (Col1, Col2). I tried to put a clustered index on those columns, but the execution plan still shows "table scan" instead of "index seek". How can I speed this up? I use that table mostly for searching.
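For reference, a sketch of the setup that should yield a seek, using the names from the question. A seek on WHERE Col1 = 5 needs an index whose leading column is Col1; if the existing PK is nonclustered or keyed as (Col2, Col1), that would explain the scan. Assuming no clustered index exists yet:

    CREATE UNIQUE CLUSTERED INDEX IX_tblTableName_Col1_Col2
    ON dbo.tblTableName (Col1, Col2);

    SELECT Col1, Col2   -- selecting only indexed columns lets the index cover the query
    FROM dbo.tblTableName
    WHERE Col1 = 5;

Also note that the optimizer may still judge a scan cheaper for a low-selectivity value; check how many rows Col1 = 5 actually returns.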
Dear friends, we have a problem in our existing system and are hoping for some expert comments. We run a core-banking system with an MS SQL Server back end behind an IIS server. The system is always very slow at peak transaction times. We are planning to optimize this on a short timeline, so please give some suggestions that our DBA team can implement quickly on SQL Server 2000.
I look after a database which is part of a third-party CRM product. The users of the product complain of intermittent poor performance; the suspicion is that some more senior users are running their own queries (the product allows users to do this). I've been asked by the development team to try to capture the details of long-running queries. I've looked at the events listed in Profiler and can't see one that would be useful. Ideally I want to know who is running which query that is taking longer than x seconds. Any suggestions? TIA, Laurence
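If the server is SQL Server 2005 or later, the dynamic management views can answer "who is running what, for how long" directly; a sketch (the 10-second threshold stands in for your x):

    SELECT s.login_name,
           r.session_id,
           r.start_time,
           DATEDIFF(second, r.start_time, GETDATE()) AS running_seconds,
           t.text AS query_text
    FROM sys.dm_exec_requests r
    JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
    WHERE s.is_user_process = 1
      AND DATEDIFF(second, r.start_time, GETDATE()) > 10;

On SQL Server 2000, the usual equivalent is a server-side Profiler trace on SQL:BatchCompleted with a filter on the Duration column.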
;with cte as (
    select rank() over (partition by username order by guid) as rank
    from MyTable
    where siteurl = 'myurl'
      and VisitedDateTime between '2007.02.05' and '2007.09.30'
      and IsFiltered = 0
    group by username, guid
)
select count(rank) from cte where rank = 2
This query takes 6 seconds to execute. The MyTable table is properly indexed, and 1 million rows are returned by the common table expression. I want to reduce the execution time of this query to a few milliseconds. Please help me.
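One avenue is making sure an index actually matches this query's shape. A covering-index sketch, using the column names from the query (verify against the real schema):

    CREATE NONCLUSTERED INDEX IX_MyTable_Site_Filtered_Date
    ON MyTable (siteurl, IsFiltered, VisitedDateTime)
    INCLUDE (username, guid);

With the equality columns first and the range column last, a seek can satisfy the whole WHERE clause, and the INCLUDEd columns let the grouping run without touching the base table. With a million qualifying rows, though, some cost is irreducible; milliseconds may not be attainable.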
I am using the query below to build a key from a combination of columns, with "%" as a separator, and then insert it into a table. My issue is the performance of this query: it returns around 6,000 records and takes 11 seconds. Is there any way I can optimize it to improve performance?

    Select (Item.ItemCode + '%' + Product.Name + '%' + Quantity.ID) as Key1
    from Items, Products, Quality

Please advise.
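Two observations. First, the FROM clause lists Quality while the select list references Quantity, so one of those is presumably a typo. Second, and more important for performance, comma-separated tables with no join predicates form a Cartesian product, which would explain 11 seconds for 6,000 output rows. A hedged rewrite with explicit joins; the join columns here are invented placeholders, so substitute the real relating keys:

    SELECT i.ItemCode + '%' + p.Name + '%' + CAST(q.ID AS varchar(10)) AS Key1
    FROM Items i
    JOIN Products p ON p.ItemID = i.ItemID    -- hypothetical key
    JOIN Quantity q ON q.ItemID = i.ItemID;   -- hypothetical key

The CAST is needed if Quantity.ID is numeric, since + would otherwise attempt addition.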
In a database here we have a table that is a list of codes. Many other tables in our database have foreign keys to this table (T_TYPE_CODE). The joined column is an integer that is a clustered index on the table.
One of the developers here says that in one of his queries he has to OUTER JOIN to this table 15 times and that the OUTER JOIN is killing performance. He wants to add a record to T_TYPE_CODE that will represent NULL so that any NULL values in the tables that foreign key to this table will use that ID instead of NULL. In this way he could use all INNER JOINs.
To me this seems like a bad idea - NULL is NULL and creating a value to represent NULL will open a whole can of worms.
My question: Is there a performance hit for using OUTER JOINs against this table, considering that the join is on a single column and is a clustered, unique index?
Also, what problems can we expect to run into if we use a placeholder row to represent NULL?
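For reference, the pattern in question looks like this (table and column names are made up for illustration):

    SELECT o.order_id, tc.code_description
    FROM T_ORDER o
    LEFT OUTER JOIN T_TYPE_CODE tc ON tc.type_code_id = o.status_code_id;

Against a unique clustered key, each probe of a LEFT JOIN costs essentially the same as an INNER JOIN; the main difference is that outer joins constrain the optimizer's freedom to reorder joins, which can matter when there are 15 of them.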
Can someone tell me if it is possible to see the drives on the server using Performance Monitor? If so, where are they hiding? I struggled the whole day!
We recently upgraded from SQL Server 2000 to SQL Server 2005. Our system was developed using Microsoft Visual Basic. Since we upgraded to SQL Server 2005, our system has been very slow; I suspect it is because of the new SQL Server 2005 installation that I made. Does anyone know how to solve this problem so I can bring the system's performance back up? It has been stressing me for a while now.
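One commonly suggested first step after a 2000-to-2005 upgrade is refreshing statistics and usage counters; this is an assumption about the cause, not a confirmed diagnosis for your case (the database name is a placeholder):

    USE YourDatabase;
    EXEC sp_updatestats;
    DBCC UPDATEUSAGE (0);

Rebuilding the most-used indexes and checking the database compatibility level are typical follow-ups.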
In my cube, I used the Time Intelligence wizard to create calculated members such as YearToYearGrowth and YearToYearGrowth%. But when I try to pull those two items into RS 2005 at more detailed levels, the performance of running the MDX dataset is terrible, even though my fact table has only 36,234 rows of data and I have 6 dimensions (the largest dimension has 37,801 rows).
One thing that I did was remove the ALLMEMBERS keywords in the MDX designer, but that didn't improve the execution time much. Right now, it takes 1.5 minutes to execute my MDX query in RS 2005. Is there a way to improve the performance of the MDX query (generated by RS) in RS 2005?
Dear Reader,

I am trying to design a database. How can I best judge that the indexing I am setting up during the diagram-design process is OK? I am able to identify the best candidates for indexing. Below are the details; I want to optimize data retrieval through indexes (you can suggest another idea, too). The following entities each have an independent table:

Area, ZIP, City, County, District, State/Province, Country

Example:

    Area_Table
        AreaID (PK)
        Area

They have one-to-many relationships as you go from Country down to Area. There is one more table:

    Location_Table
        LocationID (PK)
        AreaID
        ZIPID
        CityID
        CountyID
        DistrictID
        State/ProvinceID
        CountryID

(LocationID is further related to the address of the contact.) The GUI has a single form to enter these details. On a save command, the details are inserted into all the tables from Area to Country individually, and Location_Table is inserted with the details at the same time.

These tables are queried in the following situations:

(1) When the GUI user selects an Area, the related details of ZIP, ..., up to Country should be loaded automatically (if previously stored by user entry in the database).

(2) Contacts have to be retrieved on the basis of Area, ZIP, ..., County (the necessary groupings are required). For example, if contacts are queried country-wise, the display should be:

    Country1
        State1
            District1
                County1
                    City1
                        ZIP1
                            Area1
                            Area2
                        ZIP2
                    City2
                County2
            District2
    Country2

Please guide.
SuryaPrakash
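A minimal sketch of the lookup chain described above, using the names from the post (data types and lengths are assumptions):

    CREATE TABLE Area_Table (
        AreaID int IDENTITY PRIMARY KEY,
        Area   varchar(100) NOT NULL
    );

    CREATE TABLE Location_Table (
        LocationID      int IDENTITY PRIMARY KEY,
        AreaID          int REFERENCES Area_Table (AreaID),
        ZIPID           int,
        CityID          int,
        CountyID        int,
        DistrictID      int,
        StateProvinceID int,
        CountryID       int
    );

    -- Supports requirement (1): find the stored combination for a chosen Area.
    CREATE NONCLUSTERED INDEX IX_Location_Area ON Location_Table (AreaID);

    -- Supports requirement (2): country-wise grouping drills down the hierarchy.
    CREATE NONCLUSTERED INDEX IX_Location_Country
    ON Location_Table (CountryID, StateProvinceID, DistrictID, CountyID, CityID, ZIPID, AreaID);

The composite index mirrors the drill-down order, so the grouped report can read it in sequence.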
(From an exchange originally posted on SQLServer.com, which wasn't resolved...)
To return views tailored to the user, I have a simple users table that holds user IDs, view names, parameter names, and parameter values that are fetched based on SUSER_SNAME(). The UDF is called MyParam and takes as string arguments the name of the view in use and a parameter name. (The view the user sees is really a call to a corresponding table-returning UDF, which accepts some parameters corresponding to the user.)
But the performance is very dependent on the nature of the function call. Here are two samples and the numbers reported by (my first use of) the performance monitor: Call to table returning UDF, using local variables:
declare @orgauth varchar(50)
set @orgauth = dbo.MyParam('DeptAwards', 'OrgAuth')

declare @since datetime
set @since = DATEADD(DAY, -1 * dbo.MyParam('DeptAwards', 'DaysAgo'), CURRENT_TIMESTAMP)

select * from deptAwardsfn(@orgauth, @since)
[187 CPU, 16103 Reads, 187 Duration]
Call to same table returning UDF, using scalar UDFs in parameters:
SELECT *
from deptAwardsFn (
    dbo.MyParam('DeptAwards', 'OrgAuth'),
    DATEADD(DAY, -1 * dbo.MyParam('DeptAwards', 'DaysAgo'), CURRENT_TIMESTAMP)
)

[20625 CPU, 1709010 Reads, 20632 Duration]

(My BOL documentation claims the CPU is in milliseconds and the Duration is in microseconds, which I question.) Regardless of the unit of measure, it takes a whole bunch longer in the second case.
My only guess is that T-SQL is deciding that the parameter values (returned by dbo.MyParam) are nondeterministic and continually re-evaluates them somehow or other. (Whatever happened to call by value?)
Can anyone shed some light on this strange (to me) behavior?
----- (and later, from me) -----
I have since discovered that the reference to CURRENT_TIMESTAMP in the function argument is the cause, but I suspect that is an error; it should only capture the value of CURRENT_TIMESTAMP once, when making the function call, IMHO.
This morning I cannot connect to our SQL Server 7.0, whether from a client or from the server itself. The error message is listed below:

    A connection could not be established to server: Timeout expired.
    Please verify SQL Server is running and check your SQL Server
    registration properties and try again.

We use Windows NT authentication and did not make any changes on NT. The SQL Server daily scheduled job usually stops at 10:00 AM, but today, from the Windows NT Task Manager, we can see that SQL Server is still running even now.
Hi, I have set up SQL Mail and did the following:
1. Created an e-mail account and configured Outlook with a POP3 mail profile. Tested it by sending and receiving mail; that was OK.
2. Created one domain account for the MSSQLServer and SQL Agent services. Both services use the same account and start automatically in Control Panel > Services.
3. Used the profile that I created in Outlook to test SQL Mail, but got an error: Error 22030: A MAPI error (error number: 273) occurred: MapiLogonEx failed due to MAPI error 273: MAPI logon failed.
I really do not know what went wrong. I followed the steps from BOL and still have the problem. Am I missing something?
I have a valid e-mail account, I have a valid domain account, and I tested Outlook using the e-mail account and it worked. So why does SQL Server not recognise MAPI?
My next question: how do I configure MAPI in SQL Server if what I did was wrong?
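Once the profile is in place, SQL Mail can be exercised directly from Query Analyzer on SQL Server 2000; the profile name and address below are placeholders:

    EXEC master.dbo.xp_startmail @user = 'SQLMailProfile';
    EXEC master.dbo.xp_sendmail @recipients = 'dba@example.com', @message = 'SQL Mail test';

MAPI error 273 at logon usually means the account starting the MSSQLServer service cannot see the Outlook profile, since MAPI profiles are per Windows user; the profile generally needs to be created while logged on as the service account.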
Hi, I have two Windows 2000 servers in a cluster with SQL Server 2000 Enterprise Edition installed. I activated server-requested encryption using the SQL Server Network Utility (Force Protocol Encryption). After this, I stopped the SQL Server service, but now I can't start it again. The error is: 19015: Encryption is required, but no available certificate was found.
Hello, I am facing a huge problem in my SQL Server database, which uses Access as a front end. The main problem is executing queries ("views", since they now reside on SQL Server) and using variables or parameters in reports and forms to filter them. For example, how can the following be implemented against the same query, but in SQL Server?

Access:

    SELECT MAT_Charts.YYYYMM
    FROM MAT_Charts
    WHERE ((([Area_Code] & "-" & [GROUP_CODE]) = [Reports]![MAT_Chart_C1].[MAT_Key]))
    GROUP BY MAT_Charts.YYYYMM;

It is specifically this statement in which I am interested:

    [GROUP_CODE]) = [Reports]![MAT_Chart_C1].[MAT_Key]))
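One way to port this is to move the form reference into a parameter: on SQL Server, the & concatenation becomes +, and the [Reports]![MAT_Chart_C1].[MAT_Key] reference becomes a parameter supplied by Access. A sketch as a stored procedure (the procedure and parameter names, and the varchar length, are assumptions):

    CREATE PROCEDURE dbo.GetMatCharts
        @MatKey varchar(50)
    AS
    SELECT YYYYMM
    FROM dbo.MAT_Charts
    WHERE Area_Code + '-' + GROUP_CODE = @MatKey
    GROUP BY YYYYMM;

The Access form or report then calls the procedure (for example, through a pass-through query) with the current MAT_Key value.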
I am getting an error in sysindexes when I run DBCC CHECKDB on a production database: Server: Msg 8928, Level 16, State 1, Line 1. Please help me remove this; none of the DBCC CHECKDB options, nor DBCC CHECKTABLE, are helping.
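Msg 8928 indicates a page that could not be processed, i.e. corruption; when it appears under sysindexes, restoring from a known-good backup is the safe route, since repair of system tables is limited. If no clean backup exists, the documented last resort looks like this (it can discard data; the database name is a placeholder):

    ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB ('YourDb', REPAIR_ALLOW_DATA_LOSS);
    ALTER DATABASE YourDb SET MULTI_USER;

Run DBCC CHECKDB again afterwards to confirm the errors are gone.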
Hi guys, I wrote a thread before asking for help with reading an uploaded CSV file. I have my code; it reads the CSV file and currently displays it in a DataGrid, but what I actually want is to take the information that was read and import it into my SQL Express DB. Here's the code:

<script runat="server">

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs)
        If (IsPostBack) Then
            Grid1.Visible = True
        Else
            Grid1.Visible = False
        End If
    End Sub

    Sub UploadButton_Click(ByVal sender As Object, ByVal e As System.EventArgs)
        ' Save the uploaded file to an "Uploads" directory that already exists
        ' in the file system of the currently executing ASP.NET application.
        ' Creating an "Uploads" directory isolates uploaded files in a separate
        ' directory. This helps prevent users from overwriting existing
        ' application files by uploading files with names like "Web.config".
        Dim saveDir As String = "Data"
        ' Get the physical file system path for the currently
        ' executing application.
        Dim appPath As String = Request.PhysicalApplicationPath

        ' Before attempting to save the file, verify
        ' that the FileUpload control contains a file.
        If (FileUpload1.HasFile) Then
            ' Note the added "\" so the file lands inside the Data folder.
            Dim savePath As String = appPath + saveDir + "\" + FileUpload1.FileName

            ' Call the SaveAs method to save the uploaded file to the
            ' specified path. This example does not perform all the
            ' necessary error checking. If a file with the same name
            ' already exists in the specified path, the uploaded file
            ' overwrites it.
            FileUpload1.SaveAs(savePath)

            ' Notify the user that the file was uploaded successfully.
            UploadStatusLabel.Text = "Your file was uploaded successfully."
        Else
            ' Notify the user that a file was not uploaded.
            UploadStatusLabel.Text = "You did not specify a file to upload."
        End If

    End Sub

    Sub DisplayButton_Click(ByVal sender As Object, ByVal e As System.EventArgs)
        Dim sConnectionString As String = _
            "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=C:\inetpub\wwwroot\MerlinLocalPostOffice\AppData;" & _
            "Extended Properties=""text;HDR=NO;FMT=Delimited"""
        Dim objConn As New OleDbConnection(sConnectionString)
        objConn.Open()
        Dim objCmdSelect As New OleDbCommand("SELECT * FROM csv.txt", objConn)
        Dim objAdapter1 As New OleDbDataAdapter()
        objAdapter1.SelectCommand = objCmdSelect
        Dim objDataset1 As New DataSet()
        objAdapter1.Fill(objDataset1, "csv.txt")
        Grid1.DataSource = objDataset1.Tables(0).DefaultView
        Grid1.DataBind()
        objConn.Close()
    End Sub
</script>

My CSV file does not have headers, so the default column names are F1, F2, F3, F4, F5, F6, F7. My database has the following columns: ID, AddressLine1, AddressLine2, AddressLine3, AddressLine4, AddressLine5, AddressLine6, Postcode. I need to know how to import the information directly into the DB rather than displaying it on the page. I've tried, but I'm really new to this and can't get it to work. I can't use DTS or BULK INSERT because the server this will run on doesn't have SQL Server on it; the DB is an MDF file, so it is transportable with the app. Thanks for your help.
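One direction, sketched under assumptions: loop over objDataset1.Tables(0).Rows in the handler and, for each row, execute a parameterized INSERT through a System.Data.SqlClient connection to the MDF (an AttachDbFilename-style connection string against the SQL Express instance is the usual fit for an app-local MDF). The statement the code would issue might look like this; the table name Addresses is hypothetical, the columns are the ones you listed, and ID is assumed to be an identity column that SQL Server fills in:

    INSERT INTO Addresses
        (AddressLine1, AddressLine2, AddressLine3,
         AddressLine4, AddressLine5, AddressLine6, Postcode)
    VALUES
        (@Line1, @Line2, @Line3, @Line4, @Line5, @Line6, @Postcode);

Map F1 through F7 from each DataRow onto the seven parameters.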
Hello, I'm trying to execute the following:

    INSERT INTO @dbName.dbo.AP_TransMain
    SELECT * FROM Inserted WHERE Pay_Id = Inserted.Pay_Id

but it gives me an error because of @dbName, which is a variable declared as follows:

    DECLARE @dbName as varchar(100)

The database name is assigned to @dbName, but I can't take the value of @dbName and combine it with the rest of the statement.
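The object name in INSERT INTO has to be fixed at parse time, so a variable there is a syntax error; the usual workaround is dynamic SQL. One wrinkle in a trigger: dynamic SQL runs in a child scope that cannot see the Inserted pseudo-table, so stage the rows in a temp table first (temp tables are visible to child scopes). A sketch:

    SELECT * INTO #staged FROM Inserted;

    DECLARE @sql nvarchar(4000)
    SET @sql = N'INSERT INTO ' + QUOTENAME(@dbName)
             + N'.dbo.AP_TransMain SELECT * FROM #staged'
    EXEC sp_executesql @sql

As an aside, the predicate WHERE Pay_Id = Inserted.Pay_Id compares a column with itself and filters nothing (except NULLs), so it can likely be dropped.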
Hi, I am trying to make a backup of my database using SQL-DMO.
When I do the backup on my server, my code runs fine. But when I back up across the network from another computer, the database is saved on the server, not on the computer from which I am executing my application. How do I set SQL-DMO to back up to the computer I am running from?
Here is my code (found on the net); any suggestions greatly appreciated!
Cursor = Cursors.WaitCursor

'create an instance of a server class
Dim my_srv As SQLDMO._SQLServer = New SQLDMO.SQLServerClass

'connect to the server
my_srv.Connect("servername", "userid", "password")

'create a backup class instance
Dim my_backup As SQLDMO.Backup = New SQLDMO.BackupClass

'set the backup device = files property
my_backup.Devices = my_backup.Files

'set the files property to the File Name text box
my_backup.Files = Me.txtFilePath.Text

'set the database to the chosen database
my_backup.Database = "MYDB"

'perform the backup
my_backup.SQLBackup(my_srv)

MsgBox("Database successfully backed up.", MsgBoxStyle.Information)
Cursor = Cursors.Default
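SQL-DMO's Backup object just drives the T-SQL BACKUP command, and BACKUP always executes on the server, so a local drive path is interpreted on the server's file system. To land the file on your workstation, point my_backup.Files at a UNC share on that workstation (the path below is a placeholder, and the SQL Server service account needs write permission on the share). In T-SQL terms, the server ends up running:

    BACKUP DATABASE MYDB TO DISK = '\\myworkstation\backups\MYDB.bak'

There is no way to make the server write through the client connection itself.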
I have a batch file that is run by the user. It looks for files; if a file does not exist it waits for it, and when the file exists it truncates the table through isql and BCPs the file in. The login name used in isql has dbo rights to truncate the table, and the batch file also contains the password. The user who runs the script can view the batch file and learn the password. Is there any way I could rewrite this so that the user can't see the password, or another approach such as SQL-DMO or a DTS package?
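One option that removes the password entirely, assuming the users run the batch under Windows accounts that can be granted the needed rights: use a trusted connection, which isql supports via -E, so no password is stored in the file (server, database, and table names below are placeholders):

    isql -E -S MyServer -d MyDatabase -Q "TRUNCATE TABLE dbo.MyTable"

The trade-off is that the right to truncate then has to be granted to the users (or a group) rather than hidden behind an embedded SQL login.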
Can anyone tell me what the database log size should be for 500 MB of data files (tables and indexes)? On what basis is the log size estimated when creating a 500 MB database? And what percentage of the database file size will the dump file and log file sizes be? Can anybody help? Thanks in advance.
I have a table with a clustered index on one column. I ran BEGIN TRANSACTION, then 2,000 INSERT statements, then COMMIT TRANSACTION. The transaction took 15 minutes and then hung the table; I could not access it and had to stop the process. The only information I got was that the table had 4,517 locks, and I have no clue how it acquired so many. A database backup was also running, as I had scheduled backups every 15 minutes. Could anyone tell me how many INSERT statements are advisable inside one open transaction, what the maximum number of locks SQL Server will take on a table is, and what could have gone wrong in my situation?
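For a live look at where the locks come from next time, SQL Server ships two procedures you can run from a second connection while the transaction is open:

    EXEC sp_lock    -- one row per lock, per spid: type, resource, mode, status
    EXEC sp_who2    -- the BlkBy column shows who is blocking whom

4,517 locks for 2,000 inserts is plausible once row, page, and intent locks are counted together. A single transaction holds all of them until commit, blocking anything that needs a conflicting lock, so committing in smaller batches (say, a few hundred inserts each) is the usual advice.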
Hi, I am using DTS to transfer data from one server to another; both are on a T1 link. After some time, DTS failed with the error message: "Error at destination for row number 3357844. Errors encountered so far in this task: 1. Invalid character value for cast specification." That table has 24 million rows.
How can I solve this problem? Please help me; this is urgent.
I have a field, slnumber, defined as int, with values like 70000, 70001, 70002, etc. I need to export this field to a text file using bcp, and the corresponding field in the text file should be 10 characters wide and read 0000070000, 0000070001, 0000070002, etc. How do we achieve this format? Any help is greatly appreciated. Thanks, Reddy.
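bcp can export the result of a query instead of a table, so the padding can be done in T-SQL. A sketch with placeholder database, table, and server names:

    SELECT RIGHT(REPLICATE('0', 10) + CONVERT(varchar(10), slnumber), 10)
    FROM dbo.MyTable

and on the command line:

    bcp "SELECT RIGHT(REPLICATE('0',10) + CONVERT(varchar(10), slnumber), 10) FROM MyDb.dbo.MyTable" queryout slnumbers.txt -c -T -S MyServer

Here -c writes character data, -T uses a trusted connection, and -S names the server; swap -T for -U/-P if you use SQL logins.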