I'm struggling to get anything out of this fn_get_sql() function, as included below.
First, running this in Query Analyzer still doesn't return the full SQL string, just the first N characters. Second, for most processes the sql_handle in sysprocesses is just zeroes. Does this info get lost after a while? All the processes where I need to know what happened are old ones that are waiting on NETWORKIO.
drop proc dbo.getsql
go
create proc dbo.getsql as
begin
    -- sql_handle in sysprocesses is binary(20), not binary(8000)
    declare @sql_handle binary(20)
    select @sql_handle = sql_handle from master..sysprocesses where spid = 112
    -- select spid, sql_handle from master..sysprocesses where spid = 79
    -- print @sql_handle
    select * from ::fn_get_sql(@sql_handle)
end
go
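In case it helps to rule out the client, two hedged notes: Query Analyzer truncates output at its "Maximum characters per column" setting (Tools > Options > Results), so the cut-off may be display-only; and sysprocesses only carries a non-zero sql_handle while the connection has a batch to point at, so idle or waiting spids can legitimately show zeroes. A sketch for pulling a fixed slice server-side, bypassing the grid setting:

declare @sql_handle binary(20)
select @sql_handle = sql_handle from master..sysprocesses where spid = 112
-- SUBSTRING works on the text column fn_get_sql returns,
-- so this yields up to 8000 characters regardless of client settings
select substring([text], 1, 8000) as sqltext
from ::fn_get_sql(@sql_handle)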
OK, I have a requirement where I need to get a list of the SQL commands currently being blocked. This is very easy to do via a stored procedure, and I have it working well using a VB.NET console app to fire it off.
Trouble is, I need to ship it to different offices on an ad hoc basis. I don't want to install a stored procedure on each site because it'll be a one-off job and there is not likely to be anyone available who would know how to install a new sproc. So I thought I'd try to pull back the SQL commands via a plain SELECT statement, joining sysprocesses to the fn_get_sql UDF. The function returns a table, so I presumed I could join the two together using a subquery on the sql_handle, with something like this:
SELECT sql_handle, ( SELECT top 1 [text] FROM ::fn_get_sql(sysprocesses.sql_handle) ) as sqlcommand
FROM master..sysprocesses
The error that comes back is: incorrect syntax near 'sysprocesses'. I can't see that I'm doing anything obviously wrong.
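As far as I know, SQL Server 2000 simply doesn't allow a column to be fed as an argument to a table-valued function, which is what the parser is objecting to; on 2005 you'd reach for CROSS APPLY instead. A cursor-based workaround sketch that still needs no installed objects (the temp table name is my own invention):

create table #blocked (spid int, sqlcommand nvarchar(4000))
declare @spid int, @handle binary(20)
declare spids cursor local fast_forward for
    select spid, sql_handle from master..sysprocesses
    where blocked <> 0      -- only processes that are being blocked
open spids
fetch next from spids into @spid, @handle
while @@fetch_status = 0
begin
    insert #blocked (spid, sqlcommand)
        select @spid, substring([text], 1, 4000)
        from ::fn_get_sql(@handle)
    fetch next from spids into @spid, @handle
end
close spids
deallocate spids
select * from #blocked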
A full DB backup of one of my databases takes around 3 GB. Half of this is index. All the indexes are in a different filegroup. I am wondering if I can back up only the data; for the indexes I can always run the script to regenerate them. By doing this I can reduce the size of the backup file. Thanks ~Shiju
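File/filegroup backups can do roughly this; a sketch, assuming the data lives in PRIMARY and using placeholder names. Note that under the full recovery model a filegroup backup is only restorable together with log backups, so test the restore path before relying on it:

-- back up only the data filegroup, skipping the index filegroup
BACKUP DATABASE MyDatabase
    FILEGROUP = 'PRIMARY'
    TO DISK = 'D:\Backups\MyDatabase_data.bak'

-- the log backups that keep the filegroup backup restorable
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'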
The tempdb data file is 8 MB and the log file is 1 MB, but I'm getting a message that the log is full.
Given that tempdb is shrunk and expanded by the system (we don't even see it in the database folder!), what can be done (other than reinstalling from scratch and restoring the DBs) to make tempdb less vulnerable to very frequent expanding/shrinking? I guess this can be one root of the problem.
I need to find out when the data file and transaction log file are full. Is there any stored procedure that will tell me how much space is left? We don't want to set autogrow on the files.
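A sketch of the usual built-ins for this (no autogrow required, just run or schedule them):

-- allocated vs. used space per data file (128 pages = 1 MB)
SELECT name,
       size / 128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sysfiles

-- log size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE)

-- summary for the current database
EXEC sp_spaceused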
A developer created a stored procedure that searches a huge table on one column using a 'like' predicate. I know that the best solution in most cases is to use full-text search.
But the content of this specific column is XML data, and full-text doesn't find the words as desired.
For example, with table content ID = 1, DsColumn = '<Name>BETH</Name>':

select * from tbResp where DsColumn like '%BETH%'

Results:
ID   DsColumn
---  ---------------------------------
1    <Name>BETH</Name>
(1 row(s) affected)

But the full-text version finds nothing:
select * from tbResp where contains(DsColumn, ' BETH ')
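One hedged thing to try: full-text matches whole words as the word breaker tokenized them, and markup next to a value can end up glued into the token. A prefix term sometimes catches those cases, though it depends entirely on how the XML was broken into words:

-- prefix term: matches any indexed token starting with BETH
select * from tbResp where contains(DsColumn, '"BETH*"')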
I'm new to the DBA world, and have no one else in the company to look up to. Does anyone know what I might need to check out or do when the Data File Size is 204% full? Or is this not necessarily a bad thing?
I'm getting this from a Diagnostic tool I have.
The number of tables is 148.
Data file size: 35,941 MB
Data size: 26,549.92 MB
Index size: 177,130.02 MB
Log file size: 5.05 MB
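An index size five times the data file size smells like stale space statistics rather than a real emergency; a sketch of how I'd double-check before doing anything drastic (assuming the tool reads the same metadata sp_spaceused does):

-- correct the page/row counts the space reports are built from
DBCC UPDATEUSAGE(0)   -- 0 = current database

-- then re-read the totals
EXEC sp_spaceused @updateusage = 'true'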
I have a 2005 DB in full recovery mode. Daily full backups, diff backups, and log backups are done through SQL Agent. I wanted to make a copy of it on another instance using the restore method with the latest full backup. After I created the new DB, I noticed that a few tables were missing, and columns were missing from existing tables as well. Furthermore, the records in these tables were not up to date either. I did a fresh full backup and tried again, and the problem persisted. I also tried to restore on the same SQL Server instance under a different DB name, and that reproduced the problem.
The database schema was changed a few weeks ago, and it seems that I am only seeing a snapshot of the database from before the schema change. DBCC CHECKDB returns no errors. The size of the backup file looks reasonable, and I've seen an increase in size since the schema change, which is expected. There are no active transactions in the DB, and if I generate a create script, it contains proper T-SQL that matches the current schema.
What am I missing here? What could I be doing wrong? I am lost, and any help or advice will be greatly appreciated!
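One thing worth ruling out: if the backup jobs append to a file or device that already held older backup sets (the NOINIT default), RESTORE picks the first set in the file unless told otherwise, which would look exactly like a pre-schema-change snapshot. A sketch, with placeholder paths and names:

-- list every backup set stored in the file; check the dates
RESTORE HEADERONLY FROM DISK = 'D:\Backups\mydb.bak'

-- restore a specific set (e.g., the newest) by its position
RESTORE DATABASE mydb_copy
    FROM DISK = 'D:\Backups\mydb.bak'
    WITH FILE = 3,   -- position reported by RESTORE HEADERONLY
         MOVE 'mydb_data' TO 'D:\Data\mydb_copy.mdf',
         MOVE 'mydb_log'  TO 'D:\Data\mydb_copy_log.ldf'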
I am getting the error below while importing data into SQL 2005 Express:
"Error 0xc0202009: Data Flow Task: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Could not allocate space for object 'dbo.HistoryLog'.'PK_HistoryLog' in database 'HistoryData' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.""
Hi, I'm using SQL Server 2005 Standard, and I want to be able to move my local database to another server, but I can't figure out how to script the database and the data so that I can just run one script to move the whole database. This can be done, right? I can't imagine that such an obviously necessary tool would be intentionally left out, so I'm figuring I'm just a doofus and don't know where the option is...
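As far as I recall, the 2005 Generate Scripts wizard only scripts schema, not data (the separate Database Publishing Wizard added that). If a single script isn't a hard requirement, backup/restore moves schema and data in one file; a sketch with placeholder names and paths:

-- on the source server
BACKUP DATABASE MyDb TO DISK = 'C:\Temp\MyDb.bak'

-- on the destination server, adjusting paths to its layout
RESTORE DATABASE MyDb
    FROM DISK = 'C:\Temp\MyDb.bak'
    WITH MOVE 'MyDb'     TO 'D:\Data\MyDb.mdf',
         MOVE 'MyDb_log' TO 'D:\Data\MyDb_log.ldf'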
I am using 6.5. Here is the error that I get. I think tempdb is too small; how do I change that, and what is this error about?
-------------------------- AIMSMan --------------------------- Application-defined or object-defined error 40002
37000: [Microsoft][ODBC SQL Server Driver][SQL Server]Can't allocate space for object '##RevByNetSALIMJUMMA' in database 'tempdb' because the 'default' segment is full. If you ran out of space in Syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment.( 1105)
ODBC
I only have a data and a log device; how do I increase the tempdb device?
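Strictly from memory of the 6.5 tooling, and offered as a hedged sketch only (verify against the 6.5 Books Online before running anything): you enlarge the device first, then extend tempdb onto it. DISK RESIZE sizes are in 2 KB pages, ALTER DATABASE sizes in MB, and 'tempdb_device' below is a placeholder for whatever device tempdb actually sits on:

-- grow the device tempdb lives on (25600 pages = 50 MB)
DISK RESIZE
    NAME = 'tempdb_device',
    SIZE = 25600

-- then extend tempdb onto the enlarged device
ALTER DATABASE tempdb ON tempdb_device = 50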
I created a procedure called 'Longtextprocedure'. The body of this procedure is 650,000 characters long. When I ran the following query: select len(routine_definition) from INFORMATION_SCHEMA.routines where ROUTINE_NAME = 'Longtextprocedure'; the length came back as 4000 characters. It is not showing the remaining part of the procedure, only the first 4000 characters of the procedure code. But when you execute the procedure (exec Longtextprocedure) it produces the correct result.
My question is that I want to read all 650,000 characters of the procedure code {select routine_definition from INFORMATION_SCHEMA.routines where ROUTINE_NAME = 'Longtextprocedure'} into a variable of type varchar(max).
When I try to read the whole procedure code, only 4000 characters of data end up in the variable, not the whole thing. Is there a way to read the full procedure code into the variable?
Code:

CREATE PROCEDURE FAKEPROCEDURE
    @procName VARCHAR(50)   -- @procName is 'Longtextprocedure'
AS
BEGIN
    DECLARE @routineDefinition VARCHAR(MAX);
    DECLARE @replaceToChar VARCHAR(MAX);
[Code] ....
When I try to execute the line EXEC(@dupliacteRouteDef), it reports that there is no such procedure defined. This is because the '@dupliacteRouteDef' variable holds only 4000 characters of data, not the whole definition. Is there a way to read the whole procedure body into the variable irrespective of the length of the string?
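For reference, INFORMATION_SCHEMA.routines declares ROUTINE_DEFINITION as nvarchar(4000), so the truncation happens in the view itself, before the varchar(max) variable is ever involved. A sketch of two SQL 2005 alternatives that return the complete source:

DECLARE @routineDefinition VARCHAR(MAX);

-- OBJECT_DEFINITION returns the full body as nvarchar(max)
SELECT @routineDefinition = OBJECT_DEFINITION(OBJECT_ID('dbo.Longtextprocedure'));

-- equivalently, via the catalog view
SELECT @routineDefinition = [definition]
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.Longtextprocedure');

SELECT LEN(@routineDefinition) AS definition_length;  -- should show ~650000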
I have an Execute SQL Task that may return a result set. If it returns a result set, I'd like to log a failure in my package with the results visible.
I have logging turned on and that's working great. I've read about assigning results to a user variable of type Object and that's great. I can shred my results, thanks Jamie, with a Foreach loop no problem. Within that loop, I've got some VB that manipulates the values and will call Dts.Events.FireError as appropriate. However, VB is frowned upon here so my boss has asked that I push the VB logic into a Control Flow item.
I've built custom components already, so I've got some familiarity with the process. Where I'm stuck is figuring out what the actual object type is in my code. The connection manager is Native OLE DB\SQL Native Client. My Execute SQL Task uses a connection type of OLE DB with a Full result set. Results are stored in a variable named ErrorResultSet. Within the Execute method, I currently have this code set up in an attempt to pick apart the object and discover the available methods.
// Iterate through the variables that we were
// able to lock, assigning values to entities as
// available.
foreach (Variable _en in _variableCollection)
{
    switch (_en.Name)
    {
        case "ErrorResultSet":
            Object _rs = _en.Value;
            System.Type _type = _rs.GetType();
            System.Data.DataSet _realResults = _rs as System.Data.DataSet;
            // My expectation is that the cast of _realResults would
            // not fail.
            break;
    }
}
// Unlock before we go.
_variableCollection.Unlock();
return DTSExecResult.Success;
At this point, my assumption is that the unboxed type of the recordset is not in the System.Data.DataSet inheritance chain as the cast failed. Anyone have insight into what it is? I can't seem to get any hits on google for what it's using behind the scenes in the Foreach ADO Enumerator.
Beyond the immediate question, anyone have thoughts on how else I can solve the problem? I had thought the task could raise an event if it returned rows, but it didn't seem to have that functionality. Even if that had worked, telling the logging provider to capture the result set into the log might have been too much for the native functionality. Another option I was considering is to keep using the enumerator, where my custom component would be a pure rewrite of the current Script task, with the obvious downside that I'd lose the generic-ness I was hoping for in being able to work with my dataset.
I have a series of tasks that end up with two record sets that are unrelated which I would like to join. The first record set contains a list of expense accounts and the second record set contains a list of offices. I would like to create a join between the two sets where the resulting record set is a list of every office having every expense account.
If the data were in tables, I'd create a SQL statement something like this:
Select t1.Account, t2.Office from Table1 t1 Full Outer Join Table2 t2 on 1 = 1
That would give me the results I'm looking for; however, I can't find how to do this when these data sets come from two different data flow task paths.
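For reference, the 1 = 1 full outer join is just a cross join once both sides have rows; in T-SQL that reads:

-- every expense account paired with every office
SELECT t1.Account, t2.Office
FROM Table1 t1
CROSS JOIN Table2 t2

Inside a data flow, one workaround I've seen is to add a Derived Column with a constant (say 1) to both paths and feed them into a Merge Join on that column; the Merge Join does demand sorted inputs, so the constant column has to be marked as the sort key on each path.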
I have SQL Server 2000. I copied a database from one server to another. One table has a full-text index. When I transferred the database, the index still existed but was not populated. I made sure the path for the file points to the new, correct location. I ran "start full population", and it only populated one entry at about 1 MB. On the old server the index is 100 MB with more than 3 million records.
I tried rebuilding and re-creating, and it all works, but when I run "start full population", it only populates 1 record. I double-checked the table in question: it has over 3 million records and a proper primary key.
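In case the GUI is hiding a failed crawl, a sketch of driving the same population from T-SQL, with the catalog and table names as placeholders; population errors tend to land in the SQL Server error log and the full-text (SQLFT*) logs rather than in the UI:

-- rebuild the catalog and kick off a full population by hand
EXEC sp_fulltext_catalog 'MyCatalog', 'rebuild'
EXEC sp_fulltext_table 'dbo.MyTable', 'start_full'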
When I execute this, it works OK, but only the first character of Request.Form["d"] is stored to the DB. I checked the sproc with another routine and it adds the full data, and I've verified that the value of Request.Form["d"] is longer than one character by printing it to the page. Anyone got any ideas why only the first char is getting added to the DB?

SqlConnection SqlConnection1 = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString1"].ConnectionString);
SqlCommand SqlCommand1 = new SqlCommand("addRoute", SqlConnection1);
SqlCommand1.CommandType = CommandType.StoredProcedure;
SqlParameter SqlParameter1 = SqlCommand1.Parameters.Add("@ReturnValue", SqlDbType.Int);
SqlParameter1.Direction = ParameterDirection.ReturnValue;
SqlCommand1.Parameters.Add("@xmlData", Request.Form["d"]);
SqlConnection1.Open();
SqlCommand1.ExecuteNonQuery();
Response.Write(SqlCommand1.Parameters["@ReturnValue"].Value);
//Response.Write(Request.Form["d"]);
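One classic cause of exactly this symptom, offered as a guess since the proc body isn't shown: in T-SQL, a parameter declared as varchar with no length is varchar(1), so everything past the first character is silently truncated on the way in. A minimal repro sketch (hypothetical proc name):

-- hypothetical repro: parameter declared without a length
CREATE PROCEDURE dbo.addRoute_repro
    @xmlData varchar      -- no (n) given: this is varchar(1)!
AS
BEGIN
    SELECT DATALENGTH(@xmlData) AS stored_length;  -- never more than 1
END
GO
EXEC dbo.addRoute_repro @xmlData = 'a long string';  -- reports 1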
If data is modified (by an insert, update, or delete) while the backup is running, will the backup contain those changes, or will they be added to the database afterwards?
I found that the database log file holds more records after a BACKUP DATABASE statement has been performed.
For example:
I create a database and limit the log file to 2 MB, then create a table and insert data.
If I back up the database before inserting the data, the table can hold 192 records until the log file is full.
If I don't perform the BACKUP DATABASE statement, DBCC SQLPERF(LOGSPACE) indicates the utilization ratio is less than 40% after inserting the same 192 records.
Why?
Here is my code:
Code Snippet
create database db_test
on primary
(
    name = db_test,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test.mdf'
)
log on
(
    name = db_test_log,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test_log.ldf',
    maxsize = 2mb
)
go
backup database db_test to disk = 'db_test.bak'  -- if I don't execute this line, the log file can hold a lot of records
go
create table db_test..table1 (col char(8000))
-- insert data to fill up the database log
declare @n int
set @n = 0
while @n < 192
begin
    insert into db_test..table1 values (replicate('a', 8000))
    set @n = @n + 1
end
Question 2: I create a database and limit the log file to 2 MB, then create a table and insert data in an endless loop.
After the insert loop has been running for a while, the 9002 error occurs, indicating the log file for the database is full. But DBCC SQLPERF(LOGSPACE) indicates the utilization ratio is low, and log_reuse_wait_desc in sys.databases is 'CHECKPOINT'. And I can insert data, and I'm sure the state of log_reuse_wait_desc stays 'CHECKPOINT'.
As I understand it, a checkpoint can't truncate the log under the full recovery model; only a log backup can truncate the transaction log. So if the log is not full, why is the 9002 error encountered, and why does log_reuse_wait_desc return 'CHECKPOINT'?
Here is my code:
Code Snippet
create database db_test
on primary
(
    name = db_test,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test.mdf'
)
log on
(
    name = db_test_log,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test_log.ldf',
    maxsize = 2mb
)
go
create table db_test..table1 (col char(8000))
-- insert data to fill up the database log
declare @n int
set @n = 0
while @n <> -1
begin
    insert into db_test..table1 values (replicate('a', 8000))
end
Thanks in advance for taking the time to read this post. I am using MSSQL 2005 and have created a function that lets me use regular expressions in my SQL queries. I have a pattern buried in a field of miscellaneous data, and I need to pull out just that pattern and discard the rest of the data. Here is the regular expression I am using:

select field1 from table1 where dbo.RegExMatch(field1, '[a-zA-Z]{4}[0-9]{6}[a-zA-Z]{2,4}') = 1

This returns all values in the field that match the expression. What I want to do now is remove all data in the field to the left and right of the expression that does not match it. How would I accomplish this without reading through the 200k+ records and writing rules for every exception I run across? So I could have: Gar b/a ge 'THE GOOD DATA' m/or1 ba4d da....ta. All I want returned is 'THE GOOD DATA'.
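PATINDEX can locate the pattern without the CLR function, provided the quantifiers are expanded by hand (PATINDEX has no {n} syntax). A sketch assuming the trailing letters are exactly 2 characters; the 2-4 range would need a couple of extra cases:

-- fixed-width version of [a-zA-Z]{4}[0-9]{6}[a-zA-Z]{2}
DECLARE @pat varchar(100);
SET @pat = '%[a-zA-Z][a-zA-Z][a-zA-Z][a-zA-Z][0-9][0-9][0-9][0-9][0-9][0-9][a-zA-Z][a-zA-Z]%';

SELECT SUBSTRING(field1, PATINDEX(@pat, field1), 12) AS good_data  -- 4+6+2 = 12 chars
FROM table1
WHERE PATINDEX(@pat, field1) > 0;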
Hi - I'm short on SQL experience and hacking my way through creating a simple search feature for a personal project. I would be very grateful if anyone could help me out with writing a stored procedure. Problem: I have two tables with three columns indexed for full-text search. So far I have been able to successfully execute the following, returning matching row IDs:

CREATE PROCEDURE dbo.Search_Articles
    @searchText varchar(150)
AS
SELECT ArticleID FROM articles
WHERE CONTAINS(Description, @searchText) OR CONTAINS(Title, @searchText)
UNION
SELECT ArticleID FROM article_pages
WHERE CONTAINS([Text], @searchText);
RETURN

This returns the ArticleID for any articles or article_pages records where there is a text match. I ultimately need the stored procedure to return all columns from the articles table for matches, not just the ArticleID. Seems like maybe I should try using some kind of JOIN between the result of the UNION above and the articles table? But I have so far been unable to figure out how to do this, as I can't seem to declare a name for the result table of the UNION. Perhaps there is a more eloquent solution? Thanks! Peter
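A sketch of the join being reached for: wrapping the UNION in a derived table gives it a name that the articles table can join to:

CREATE PROCEDURE dbo.Search_Articles
    @searchText varchar(150)
AS
SELECT a.*
FROM articles a
INNER JOIN
(   -- the UNION gets a name by living in a derived table
    SELECT ArticleID FROM articles
    WHERE CONTAINS(Description, @searchText) OR CONTAINS(Title, @searchText)
    UNION
    SELECT ArticleID FROM article_pages
    WHERE CONTAINS([Text], @searchText)
) AS hits ON a.ArticleID = hits.ArticleID;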
If my backup starts at 8 PM and takes an hour to complete, will the changes made to the database during that hour be captured in the full backup?
Stated another way, will my backup be a snapshot of: a) 8 PM, when the backup started; b) 8 PM plus some of the changes made during the hour; or c) 9 PM, when the backup finished?
Anybody know the exact way SQL Server handles that logic?
Hi - I am pretty new to Reporting Services. I need to create a report where a single result row from the SELECT statement populates an entire page of data. The regular grid or matrix reports don't fit this need. Is there a simple way to do this?
I am wondering if it is possible to use SSIS to split a data set into a training set and a test set and feed them directly into my data mining models, without saving them anywhere, since they occupy too much space. I really need guidance on that.
Hi! Please help with the following: our disk space is limited. I set the database recovery model to FULL, and back up the transaction log every hour during business hours. But the disk keeps filling up because of transaction log growth. Do you think increasing the frequency of the transaction log backups, or backing the log up 24 hours a day instead of only during business hours, will solve the problem? Thanks!!
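For what it's worth: under the full recovery model the log is only truncated by a log backup, so anything that runs outside business hours writes log that nothing truncates until the next morning; backing up around the clock may matter more than backing up more often during the day. A sketch of a one-time trim after the next log backup (logical file name is a placeholder):

-- back up (and thereby truncate) the log, then reclaim the disk space
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn'
DBCC SHRINKFILE ('MyDb_log', 100)   -- target size in MB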
I regularly get the message: "application log is full". I cleared the Event Viewer's log file, but it didn't help. What is this message about? Thanks in advance.
I have a transaction log which is 1 GB with only about 40 MB free. When I run DBCC LOGINFO, I find the first active log entry dates back to the middle of August. Does anyone have suggestions on the best way to approach this situation? How can I query the transaction log to find out what the old transactions are? I was going to use detach database / rename log / attach database to shrink the log, but I don't want to do this in case there is active data in the log. We are doing hourly log dumps. Thanks, Grant
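DBCC OPENTRAN will name the oldest active transaction pinning the log, which is a safer first step than the detach/rename route; a sketch:

-- report the oldest active transaction in the database
DBCC OPENTRAN ('MyDb')

-- then cross-reference the SPID it reports
EXEC sp_who2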