We see poor performance spikes on the drive containing our log file, but only for reads, and they seem to occur while a re-index job is running. Is this a likely explanation for the poor read performance on the log file, and what reads are actually done from a log file?
Using SSMS 2012, we are experiencing extremely slow response times when opening SQL job steps to edit and also when deploying SSIS packages. Sysadmins have no problem. Users in the ssis_admin role have no problem. It's the rest of the users who have issues.
OBJECTIVE: I would like to read a text file from SQL Server 2000, read the text file content, and load its contents into a RichTextBox.

THINGS I'VE DONE AND HAVE WORKING:
1) I've successfully loaded a text file (ex: textFile.txt) into a SQL Server database table column (with datatype Image).
2) I've also been able to serve the file using a handler, as below:

using System;
using System.Web;
using System.Data.SqlClient;

public class HandlerImage : IHttpHandler
{
    string connectionString;

    public void ProcessRequest(HttpContext context)
    {
        connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["NWS_ScheduleSQL2000"].ConnectionString;
        int ImageID = Convert.ToInt32(context.Request.QueryString["id"]);
        SqlConnection myConnection = new SqlConnection(connectionString);
        string Command = "SELECT [Image], Image_Type FROM Images WHERE Image_Id=@Image_Id";
        SqlCommand cmd = new SqlCommand(Command, myConnection);
        cmd.Parameters.Add("@Image_Id", System.Data.SqlDbType.Int).Value = ImageID;
        SqlDataReader dr;
        myConnection.Open();
        cmd.Prepare();
        dr = cmd.ExecuteReader();
        if (dr.Read())
        {
            // Write the stored file back to the browser
            context.Response.ContentType = dr["Image_Type"].ToString();
            context.Response.BinaryWrite((byte[])dr["Image"]);
        }
        myConnection.Close();
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

<a href='<%# "HandlerDocument.ashx?id=" + Eval("Doc_ID") %>'>File</a>
Clicking on this link, I'm able to download or view the file.

WHAT I WANT TO DO, BUT HAVE A PROBLEM WITH:
I would like to be able to read the CONTENT of this file and load it into a string, as below:

StreamReader SR = new StreamReader("File.txt");
string contentText = SR.ReadToEnd();
txtBox.Text = contentText;
SR.Close();

But this only works for files on the server; I would like to be able to read the file contents from SQL Server. Please help, I really appreciate it.
I have a problem: I tried to test my statement in Query Analyzer, and instead of a result set it returns only empty columns. I thought it would return all the columns and some records for 'lmeyer', the current user logged in with Windows authentication, but it returns only the column headers. Does that mean Query Analyzer cannot read the login1 column, which holds the username 'lmeyer'?
select * from tstudents where (login1='@param1')
If not, what could be wrong in my code? I get no response from the SQL server.
Public Function login(ByVal login1 As String) As DataSet
    Dim myconnection As New SqlConnection("server=G103-TT03;database=CampusLANDB;Trusted_Connection=yes")
    Dim mycommand As New SqlCommand("SELECT * FROM tStudents WHERE login1 = @param1", myconnection)
    mycommand.Parameters.Add(New SqlClient.SqlParameter("@param1", login1))

    Dim DS As New DataSet()

    Try
        myconnection.Open()
        Dim adpt As New SqlDataAdapter()
        adpt.SelectCommand = mycommand
        adpt.Fill(DS, "tStudents")
    Catch ex As Exception
        Throw New Exception(ex.Message)
    Finally
        myconnection.Close()
    End Try

    Return DS ' return the filled DataSet to the caller
End Function

' Calling code that binds the result to the grid:
Try
    mydatagrid.DataSource = DS.Tables("tStudents")
    mydatagrid.DataBind()
Catch ex As Exception
    errorlabel.Text = ex.Message
End Try
Regarding SSRS, what is considered a good response time? We have some reports running two minutes, and the users think that is too long. Is there a guideline for what a user should reasonably expect, and if so, what is it?
Hi, I have a problem: my response time is too slow. Does anyone know how to improve it? My database size is 11 GB. I didn't change any configuration parameters after installing SQL Server, so the server still has the default configuration. Do I have to change any parameters or not? My queries generate a lot of temporary tables.
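If the temporary tables make tempdb the bottleneck, a first step is often to review the (still default) instance settings and pre-size tempdb so it is not constantly autogrowing. A minimal sketch; the logical file names below are the installation defaults and the sizes are assumptions, not recommendations for this workload:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure   -- review the current instance-level settings

-- Pre-size tempdb so heavy temp-table use doesn't trigger constant autogrow;
-- verify the logical file names first with: SELECT name, size FROM tempdb..sysfiles
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB)
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 256MB)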
When I try to connect to a SQL Server instance from Enterprise Manager, I get a connection timeout error. I have to change the timeout parameter from 4 (the default) to 30 in order for it to work. I've also noticed that some applications (like SharePoint) have the same problem connecting to that server.
My question is:
Why is that happening?
It used to work fine; I only started getting this issue a couple of days ago.
I am currently migrating a DB from Oracle to SQL Server (Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86))
I've used SSMA to do the migration, and I'm reviewing the procedures to check them. I have found a performance problem in one of them, which worked perfectly under Oracle, and I have tried lots of things with no luck, so I guess I need some help.
I insert a row in a table, and the time this takes is fine, but seconds later I need to read the row back, and this SELECT takes 1-2 ms longer every time. This process is repeated many times.
Every insert-select takes 200 ms when it receives the first data (including some other operations that are not increasing the response time), and 200 insertions later it takes about 500 ms, which is really too much, considering it keeps increasing.
The table has 25+ columns, and some of them contain varchar of 3000+ characters.
I make the select using 4 columns in the WHERE clause. One of them is numeric, and the rest are varchar (none of them is the primary key).
I've got a clustered index on the primary key, and two more non-clustered indexes. One of them covers the columns I use in the SELECT, and its settings are Fill Factor 90 and Recompute Statistics Automatically.
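Two things worth ruling out before anything else are stale statistics and a non-covering index: as the table grows, each lookup drags in more of those wide varchar(3000+) rows. A minimal sketch with placeholder names, since the actual schema isn't shown:

-- Refresh statistics so the optimizer sees the current row count
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN

-- SQL Server 2005: an index keyed on the four search columns lets the SELECT
-- locate rows without scanning (table and column names are placeholders)
CREATE NONCLUSTERED INDEX IX_MyTable_Search
ON dbo.MyTable (NumCol, VarcharCol1, VarcharCol2, VarcharCol3)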
We did an in-place conversion of our database from MS SQL Server 6.5 to 7.0. Our application is much slower now on SQL 7.0. Any idea why? The following is a sample SQL statement that runs quickly on SQL 6.5 and takes a long time on SQL 7.0. I also attached the query plans from SQL 6.5 and 7.0.
SELECT Person_Name.PerNam_Person_Name_PK,
       Person_Name.PerNam_Row_Status,
       Person_Name.PerNam_Last_Name_Sndx,
       Person_Name.PerNam_Last_Name,
       Person_Name.PerNam_Name_Suffix,
       Person_Name.PerNam_First_Name,
       Person_Name.PerNam_Name_Prefix,
       Person_Name.PerNam_Middle_Name,
       Person_Name.PerNam_Event_Person_FK,
       Event.Evn_Event_Nbr,
       Event.Evn_Event_Type,
       Event_Person.EvnPer_Last_Name,
       Event_Person.EvnPer_First_Name,
       Event_Person.EvnPer_Middle_Name,
       Event_Person.EvnPer_Name_Prefix,
       Event_Person.EvnPer_Name_Suffix
FROM Person_Name, Event, Event_Person
WHERE (Person_Name.PerNam_Agency_ID = "CL")
  AND (Person_Name.PerNam_Event_Person_FK = Event_Person.EvnPer_Event_Person_PK)
  AND (Event_Person.EvnPer_Event_FK = Event.Evn_Event_PK)
  AND (Person_Name.PerNam_Person_Name_PK = 0 OR (Person_Name.PerNam_Event_Person_FK = 581541))
  AND (Person_Name.PerNam_Row_Status <> "D")
Query plan in SQL 6.5
SQL Server Execution Times: cpu time = 0 ms. elapsed time = 31250 ms.

STEP 1
The type of query is INSERT
The update mode is direct
Worktable created for REFORMATTING
FROM TABLE Person_Name
Nested iteration
Index : PK_Person_Name
FROM TABLE Person_Name
Nested iteration
Index : PerNam_Event_Person_FK
FROM TABLE Person_Name
Nested iteration
Using Dynamic Index
FROM TABLE Event_Person
Nested iteration
Index : PK_Event_Person
TO TABLE Worktable 1

STEP 2
The type of query is SELECT
FROM TABLE Worktable 1
Nested iteration
Table Scan
FROM TABLE Event
Nested iteration
Index : PK_Event

SQL Server Parse and Compile Time: cpu time = 0 ms.
Table: Person_Name scan count 2, logical reads: 6, physical reads: 5, read ahead reads: 0
Table: Event scan count 0, logical reads: 0, physical reads: 0, read ahead reads: 0
Table: Event_Person scan count 0, logical reads: 0, physical reads: 0, read ahead reads: 0
Table: Worktable scan count 0, logical reads: 0, physical reads: 0, read ahead reads: 0
Table: Worktable scan count 1, logical reads: 1, physical reads: 0, read ahead reads: 0
SQL Server Execution Times: cpu time = 0 ms. elapsed time = 62 ms.
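After an in-place upgrade, statistics and usage counters carried over from the old engine are a common cause of regressions like this, so the usual first step is to rebuild them. A minimal sketch; both commands exist in SQL Server 7.0:

-- Rebuild distribution statistics for every table in the current database
EXEC sp_updatestats

-- Correct row and page counts carried over from the 6.5 catalog
DBCC UPDATEUSAGE (0)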
Is there a global variable or something of the sort that would tell me how long it took to execute a query?
I need to monitor my DB response times and we have a query that runs in under 2 seconds. So we want to run this query every couple of minutes and if it takes more than 12 sec to run, we want to send an email to our DB staff...
I know that I could take a timestamp before and after and then subtract, but I wanted to know if there is an easier way to do it.
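A minimal sketch of the timestamp approach, wrapped so it can run from a scheduled job; the probe procedure, mail profile, and recipient address are placeholders, and sp_send_dbmail requires SQL Server 2005+ with Database Mail configured (on SQL Server 2000, xp_sendmail is the rough equivalent):

DECLARE @start datetime, @elapsed_ms int
SET @start = GETDATE()

EXEC dbo.MyProbeQuery   -- placeholder for the query being monitored

SET @elapsed_ms = DATEDIFF(ms, @start, GETDATE())

IF @elapsed_ms > 12000  -- more than 12 seconds
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA',                    -- assumed Database Mail profile
        @recipients   = 'dbstaff@example.com',    -- assumed distribution address
        @subject      = 'Probe query exceeded 12 seconds',
        @body         = 'Database response time needs investigation.'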
I have a problem querying system job history data. Response time is slow and varies from time to time: sometimes it takes a few seconds and sometimes more than 2 minutes. I understand there are quite a number of jobs on the DB server, which might account for the slow response time.
Is it possible to shorten the response time, for example by using an index? My application always looks hung when the query takes a very long time to run.
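Job history accumulates in msdb and is not aggressively purged by default, so trimming it is often the quickest win. A minimal sketch; the 30-day cutoff is an assumption, and the @oldest_date parameter of sp_purge_jobhistory is available from SQL Server 2005:

-- Remove job history older than 30 days so queries against
-- msdb.dbo.sysjobhistory have far fewer rows to sift through
DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -30, GETDATE())
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff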
I have a database that gets refreshed every day from a client's data, and we pull heavy reports from it every day, so only SELECTs run against that database.
We also populate some tables in other databases daily from this refreshed DB.
Will READ UNCOMMITTED or NOLOCK on the SELECT queries retrieve data faster?
There should be no dirty reads, since there are no DML operations on that database. So will NOLOCK work for the SELECTs that run concurrently on these tables?
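Both forms below read without taking shared locks and are equivalent for this purpose; the table and column names are placeholders (a sketch, not this schema):

-- Per-query table hint
SELECT col1, col2
FROM dbo.ReportTable WITH (NOLOCK)

-- Or session-wide, which covers every table the batch touches
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT col1, col2
FROM dbo.ReportTable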
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect the insert/update datetime to reflect the commit; however, it seems the insert/update statement is evaluated when it executes, so getdate() runs at statement time instead of at commit time. This causes problems: we select rows based on LAST_UPDATE, and when a commit happens later, the earlier statement-time timestamp is what gets saved to the database, so we miss that update.
We would like to know if there is any way to tell SQL Server not to evaluate getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using the default isolation level. We have tried getdate(), current_timestamp, and even {fn Now()}, all with the same results. SQL queries that reproduce the problem are provided below:
/* Different functions to get the current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/

/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/

/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp) /* ROW_VERSION rowversion */
GO

/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/

/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
    declare @time datetime
    select @time = getdate()

    insert into COMMIT_TEST_UPDATE (PKEY, LAST_UPDATE)
    select PKEY, getdate() from inserted
end
GO

create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
    declare @time datetime
    select @time = getdate()

    update COMMIT_TEST_UPDATE
    set LAST_UPDATE = getdate()
    from COMMIT_TEST_UPDATE, deleted, inserted
    where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO

/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
    if (select count(*) from deleted) > 0
    begin
        delete COMMIT_TEST_UPDATE
        from COMMIT_TEST_UPDATE, deleted
        where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
    end
end
GO

/* Delete any previously inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO

/* What is the current date/time */
SELECT GETDATE()
GO

BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO

/* Get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO

/* Select results from the table - we see that the timestamp is 10 seconds older than the commit; in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues or could still lead to missed deals (assuming the commit time is larger than some artificial time window).
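One direction worth exploring, hinted at by the commented-out ROW_VERSION column in the script above (a sketch, not a guaranteed fix): track changes with a rowversion column and bound each poll with MIN_ACTIVE_ROWVERSION(), available from SQL Server 2005 SP2. rowversion values are still assigned at statement time, but MIN_ACTIVE_ROWVERSION() returns the lowest value that could still belong to an open transaction, so a reader that only consumes rows below that bound cannot skip past a late commit:

/* Change tracking via rowversion; the engine assigns ROW_VERSION automatically */
CREATE TABLE dbo.COMMIT_TEST_RV (PKEY int PRIMARY KEY, ROW_VERSION rowversion)
GO

/* Reader: fetch everything committed since the last poll, but stop short of
   any value an open transaction might still be about to commit */
DECLARE @last_rv binary(8)
SET @last_rv = 0x0000000000000000   /* high-water mark kept from the previous poll */
SELECT PKEY, ROW_VERSION
FROM dbo.COMMIT_TEST_RV
WHERE ROW_VERSION > @last_rv
  AND ROW_VERSION < CONVERT(binary(8), MIN_ACTIVE_ROWVERSION())
/* persist MAX(ROW_VERSION) read here as the next @last_rv */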
We are using SQL Server 2008 as our database and Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screen shot for the form. I pretty much know how to do it all; the only complication is figuring out the easiest way to get the punch records in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screen shot.
I am not super strong in SQL, but I figure I can do it using a formatting table of some sort. Is there a quick and easy way to pivot transaction records into a more horizontally oriented record?
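A PIVOT usually handles exactly this rotation; a minimal sketch with assumed column names (employee_id, punch_date, punch_type, punch_time) and assumed punch types, so adjust to the real employee_punch_record schema:

-- Rotate one row per punch into one row per employee per day,
-- with one column per punch type (all names here are assumptions)
SELECT employee_id, punch_date, [ClockIn], [LunchOut], [LunchIn], [ClockOut]
FROM (
    SELECT employee_id, punch_date, punch_type, punch_time
    FROM dbo.employee_punch_record
) AS src
PIVOT (
    MIN(punch_time) FOR punch_type IN ([ClockIn], [LunchOut], [LunchIn], [ClockOut])
) AS p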
Hi, I am looking for a way to read a transaction log in an Excel format; does anyone know how I can do this? I have already looked at DBCC LOG(mydb), but it's not documented and doesn't give enough info. I am really looking for an open-source/free tool, or a way to do it in SQL Server.
Hi, I need to access and read the transaction log and run an update according to the log. Is this possible through SQL-DMO? Is there any other method through which I can do the same?
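One undocumented but widely used alternative to DBCC LOG is fn_dblog; it is unsupported and its column set varies by version, but the result is a plain rowset that can be saved to CSV and opened in Excel. A minimal sketch:

-- Undocumented: dump log records for the current database.
-- Both arguments are start/end LSNs; NULL, NULL means the whole active log.
SELECT [Current LSN], Operation, [Transaction ID]
FROM ::fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS')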
Hello, I have the following situation: a 1st thread starts a transaction and updates some values in a table (WITH (ROWLOCK)); a 2nd thread starts a SELECT for this record.
I have tried some possibilities with SET TRANSACTION ISOLATION LEVEL, but none of them is suitable for me. While such a transaction is running, I would like to be able to read the OLD values (the last committed ones).
Can I reach this behaviour with some setting of locks or a transaction isolation level?
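On SQL Server 2005 and later, row versioning gives exactly this behaviour: readers see the last committed values instead of blocking on the writer's locks. A minimal sketch (MyDb, MyTable, and the WHERE clause are placeholders; switching READ_COMMITTED_SNAPSHOT on needs no other active connections in the database):

-- Statement-level row versioning for the whole database:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON
-- From then on, ordinary READ COMMITTED selects in the 2nd thread return
-- the last committed row instead of waiting on the 1st thread's rowlock.

-- Alternatively, opt in per session with snapshot isolation:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON
-- in the reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT * FROM dbo.MyTable WHERE id = 42   -- sees the old (committed) values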
I still have transactions in use, mainly to set isolation levels. There are obviously no update/delete statements in them. I'm pretty sure this will still add rows to the transaction log even though there won't be any writes to the DB data files. Is it still a critical best practice to use RAID 1/0 instead of RAID 5 for the logs? I'm looking to re-use some existing storage rather than buy new hardware. Thoughts?
Our users believe that we lost some valid data, but no one knows who did it. I thought I could find out from the transaction dumps I take every hour, so: can I read transaction dump (*.TRN) files in SQL Server 7.0, or can I get this information through other means?
I am actually just looking for some supporting documentation on some facets of SQL Server. As far as I have always known, when anyone does a READ from a SQL Server database (SELECT * FROM <TABLE>), SQL Server does not create a log record, since no data or database structure is being modified. A colleague is under the impression that READs are logged operations.
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique, and I guess even if I use datetime2 and add nanoseconds, from what I have read there is a chance I could have a duplicate datetime, since the dates are imported via XML from multiple sources.
BEGIN TRAN
INSERT Z_Test SELECT STATE_CODE FROM View_STATE_CODE
COMMIT
View_STATE_CODE points to a remote SQL Server named PROD. There is an error when I run this query:
Server: Msg 8501, Level 16, State 1, Line 12
MSDTC on server 'PROD' is unavailable.
Server: Msg 7391, Level 16, State 1, Line 12
The operation could not be performed because the OLE DB provider 'SQLOLEDB' was unable to begin a distributed transaction.
OLE DB error trace [OLE/DB Provider 'SQLOLEDB' ITransactionJoin::JoinTransaction returned 0x8004d01c].
It looks like the remote server is not available inside the local transaction. How do I handle that?
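The explicit BEGIN TRAN is what forces the INSERT...SELECT over the linked server to escalate to a distributed transaction, which requires MSDTC to be running and reachable on both machines. A sketch of two ways out; the temp-table variant assumes the remote read itself is allowed to run outside the transaction:

-- Option 1: start the MSDTC service on both servers (OS command, not T-SQL):
--   net start msdtc

-- Option 2: keep the remote read outside the explicit transaction
SELECT STATE_CODE INTO #state FROM View_STATE_CODE   -- remote read, autocommit
BEGIN TRAN
INSERT Z_Test SELECT STATE_CODE FROM #state           -- purely local transaction
COMMIT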