SQL 2012 :: How To Get Previous Statements In A Batch
Feb 25, 2015
I have an interesting problem. A number of spids are being blocked by a single select statement. The select statement text is the same whether returned from sp_who2, sysprocesses, sp_whoisactive, or DBCC INPUTBUFFER. It is not waiting on anything and its status is sleeping.
Clearly it is not 'just' a sleeping select statement, as I can see over a thousand locks held by the spid across 2 user databases and tempdb. I'm working on the theory that there is a BEGIN TRANSACTION with a bunch of statements and no closing COMMIT, but I want to be able to prove that. How can I show what statements were previously executed as part of this transaction?
Additional Info: SQL 2012 Enterprise Edition. This is a test server but is a reproduction of a live issue. At this point the application team cannot isolate the code causing the problem, only the set of processes the code resides in.
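For what it's worth, SQL Server only keeps the most recent batch per connection, so there is no built-in history of earlier statements in a transaction unless a trace or Extended Events session was already capturing them. A minimal sketch for finding the culprit (DMV and column names as in SQL 2012; the filter assumes the suspect session is one of the few holding a transaction open):

Code Snippet
-- Sessions with an open transaction, plus the text of their most recent batch.
SELECT s.session_id,
       s.status,
       s.open_transaction_count,
       t.text AS most_recent_batch
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.open_transaction_count > 0;

To see the statements that ran earlier in the transaction, you would need a server-side trace or an Extended Events session running before the repro, filtered to the application's processes.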
Hi, I want to know if I can execute a set of batch statements (basically CREATE statements) during the installation of our product. I have used Access with an ADP connected to MS SQL Server 7.0, and I am trying to merge the database and table creation with the installation procedure. Please tell me how I can go ahead with this. (I have tried InstallShield and some other similar products, but without much success.) Thanks in advance, Mangala
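A minimal sketch of the usual approach for that era (assuming the DDL lives in a script file and the installer can shell out to a command line; the server and file names here are placeholders): have the installer run the script through osql, the command-line tool that ships with SQL Server 7.0.

Code Snippet
rem Run the creation script against the target server over a trusted connection.
osql -S MyServer -E -i create_objects.sql -o create_objects.log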
I have an Oracle Connection Manager configured to source off of an Oracle 9i database. I have tried both the Microsoft OLEDB provider for Oracle and the native Oracle provider.
When I run the TRUNCATE statements in SQL*Plus or SQL Workbench, they work just fine.
In fact, they work in SQL*Plus and SQL Workbench like this as well:
Code Snippet
TRUNCATE TABLE FactTable1
/
TRUNCATE TABLE FactTable2
/
TRUNCATE TABLE FactTable3
/
TRUNCATE TABLE FactTable4
/
TRUNCATE TABLE FactTable5
Although both of these would probably require expressions to add the ";" or "/" conditionally, only when the target is Oracle (we are *trying* to have one package support both SQL Server and Oracle), neither works. I get the following error message in the SSIS command window:
...failed with the following error: "ORA-00911: invalid character". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Can anyone please provide some guidance on how to accomplish this with the Execute SQL Task, taking into consideration that I am generalizing quite a bit on the Oracle side?
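One workaround I believe works for the Oracle side (table names taken from the snippet above): send a single PL/SQL anonymous block, so the provider sees exactly one statement and no ";" or "/" delimiters are needed at the top level. TRUNCATE is DDL, so inside PL/SQL it has to go through EXECUTE IMMEDIATE:

Code Snippet
BEGIN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE FactTable1';
    EXECUTE IMMEDIATE 'TRUNCATE TABLE FactTable2';
    EXECUTE IMMEDIATE 'TRUNCATE TABLE FactTable3';
    EXECUTE IMMEDIATE 'TRUNCATE TABLE FactTable4';
    EXECUTE IMMEDIATE 'TRUNCATE TABLE FactTable5';
END;

For the SQL Server branch you would still need separate statement text (or an expression choosing between the two), since this block is Oracle-only syntax.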
I have written a stored procedure to create a database, with the database name as an input parameter.
I need to create a batch file to run the stored procedure; the input value given at the command prompt along with the batch file will be my database name, which the stored procedure will use to create the database.
There are also a few SQL statements to insert data into the tables after creating them.
Please could anyone help me out with the code for such a batch file?
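A minimal sketch (assuming a stored procedure named dbo.usp_CreateDatabase in master, a trusted connection, and an insert script sitting next to the batch file; all of these names are placeholders):

Code Snippet
@echo off
rem %1 = database name passed on the command line, e.g.  createdb.bat MyNewDb
sqlcmd -S MyServer -E -d master -Q "EXEC dbo.usp_CreateDatabase @dbname = N'%1'"
rem Run the table inserts against the database that was just created.
sqlcmd -S MyServer -E -d %1 -i insert_data.sql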
When I run the following T-SQL statement using SQLExecDirect with the SQL Server ODBC driver, it seems to indicate to me that the driver is somehow treating the SQL as a batch rather than a single statement.
Code Snippet
IF (1=1)
    UPDATE MyTable
    SET [Guid] = 'A663EC5D-A3AF-4391-8C45-9EBE87C9FD6C',
        [Id] = 1,
        [FieldA] = 'Always run'
    WHERE ([Id] = 1)
ELSE
    INSERT INTO MyTable ([Guid],[Id],[FieldA]) VALUES (NEWID(), 2, 'Never run')
Now the above statement is only a simplified statement to illustrate my problem, but I've also tested this stripped down version to ensure that I could replicate the problem with it.
Assuming I have the following table definition and data, I would expect SQLExecDirect to return SQL_ERROR due to the unique key constraint on the [Guid] column being violated. However, I'm getting SQL_SUCCESS_WITH_INFO instead.
Code Snippet
CREATE TABLE MyTable (
    [Guid] uniqueidentifier UNIQUE,  -- the unique key constraint referred to above
    [Id] int,
    [FieldA] varchar(100)
)
INSERT INTO MyTable ([Guid],[Id],[FieldA]) VALUES (NEWID(), 1, 'A')
INSERT INTO MyTable ([Guid],[Id],[FieldA]) VALUES (NEWID(), 1, 'B')
According to MS knowledge base article 195297:
"In the version 3.6 SQL Server ODBC driver, SQLExecute, SQLExecDirect, or SQLParamData returns SQL_ERROR only if no other statements are executed after the first statement. If any other statements are executed after the first, even a simple RETURN statement with no return value, SQLExecute or SQLExecDirect returns SQL_SUCCESS_WITH_INFO."
This makes me think that somehow the SQL Server driver is treating the T-SQL (with IF-ELSE) as a batch of statements (normally delimited with a semicolon, e.g. "statement1; statement2;").
I'm running MSDE, with SQL Server driver (SQLSRV32.DLL) v2000.85.1117.00 on Windows XP Professional SP2.
However, running the same SQL but changing the driver to the SQL Native Client (SQLNCLI.DLL) v2005.90.1399.00 returns SQL_ERROR as I expect.
Also the same SQL returns the desired error using both SQL Server and SQL Native Client drivers when connecting to SQL Server 2005.
I face a task to edit the definitions of a bunch of stored procedures: cleaning out the <IF EXISTS THEN DROP> preamble and changing CREATE to ALTER.
Do you think there is a good way to do it in batch, taking the original source from the system tables (even though we have the definitions as physical files in VSS)?
Sure enough, <IF EXISTS THEN DROP> is not uniform; there are variations in syntax spread over a different number of lines, IF EXISTS ... vs OBJECT_ID IS NOT NULL, etc., and the CREATE PROCEDURE line itself varies in spacing and capitalization.
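A minimal sketch of the batch half (assuming CREATE is the first occurrence of that word in each definition, i.e. it is not preceded by a header comment containing "CREATE"; adjust the PATINDEX pattern if it is): read the stored source from sys.sql_modules and swap only the CREATE token, which sidesteps the spacing variations between CREATE and PROCEDURE.

Code Snippet
SELECT p.name,
       STUFF(m.definition,
             PATINDEX('%CREATE%', m.definition),  -- position of the first CREATE
             LEN('CREATE'),
             'ALTER') AS alter_definition
FROM sys.procedures AS p
JOIN sys.sql_modules AS m
    ON m.object_id = p.object_id;

Stripping the <IF EXISTS THEN DROP> preamble is harder to do reliably in T-SQL alone, given the variations you mention; that part may be easier with a small script run over the VSS files.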
I am working with a stored procedure that needs to roll up a week-number column once a week. Columns are numbered 1-10: 1 being this week, 2 being last week, and so forth.
Once a week the rows with week number 10 are deleted, the 9th becomes the 10th, the 8th becomes the 9th, and so forth, and the 1st is recalculated. The week numbers are getting all screwed up, and we think it's because one statement starts before the one before it completes. The statements go like this:
delete theTable where week_num=10;
update theTable set week_num=10 where week_num=9;
update theTable set week_num=9 where week_num=8;
and so forth
is that the reason? is there any way not to start one statement until the one before it finishes?
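Statements in a T-SQL batch do run one at a time, in order, so the second statement cannot start before the first finishes; the ordering of the statements themselves is a more likely culprit. A hedged alternative that avoids the cascade entirely (assuming week_num runs 1-10 and week 10 simply drops off):

Code Snippet
BEGIN TRANSACTION;
DELETE FROM theTable WHERE week_num = 10;
UPDATE theTable SET week_num = week_num + 1;  -- 9->10, 8->9, ..., 1->2 in one pass
COMMIT;

Because the single UPDATE touches every row exactly once, no row can be bumped twice the way a mis-ordered chain of per-week updates can bump it.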
I am looking to update a record from a previous row. So if there is a value of total goods in week 1, i want that value to carry forward to the value of goods in week 2. Is there any SQL as an example of the best way to accomplish this? I can query it using lag() which works great but i need the source data itself to update as the end-users are accessing the data via lightswitch, so when they save a change, i want the trigger (or whatever you recommend) to update the source table.
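A hedged sketch of the update itself (table and column names are guesses: dbo.Goods keyed by ItemId and WeekNo, with a TotalGoods column): SQL Server allows an UPDATE through a CTE, so LAG can feed the in-place carry-forward directly.

Code Snippet
WITH G AS (
    SELECT TotalGoods,
           LAG(TotalGoods) OVER (PARTITION BY ItemId ORDER BY WeekNo) AS PrevGoods
    FROM dbo.Goods
)
UPDATE G
SET TotalGoods = PrevGoods
WHERE TotalGoods IS NULL
  AND PrevGoods IS NOT NULL;

The same statement could sit inside an AFTER UPDATE trigger on the table if the carry-forward must happen whenever LightSwitch saves a change.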
I can't get the desired results for the following sample data set. I was able to come up with a query that returns the expected results, however only for a given day, so I'd need to UNION several SELECT statements to get the desired results, which is definitely not ideal. I'd like to pass a parameter in (number of days) instead of doing a UNION for each SELECT.
DECLARE @T TABLE (Id INT, Category VARCHAR(1), [Date] DATE)
INSERT INTO @T
SELECT 1 AS Id, 'A' AS Category, '2015-5-13' AS ActivationDate UNION ALL
SELECT 1, 'A', NULL UNION ALL
SELECT 1, 'A', '2015-5-13' UNION ALL
SELECT 1, 'A', NULL UNION ALL
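A hedged sketch of the parameterized shape (assuming the per-day logic is a count by Category for one calendar day; swap in the real single-day query): generate the last @Days dates with a TOP(@Days) row generator and join to them, instead of one SELECT per day.

Code Snippet
DECLARE @Days int = 7;

WITH D AS (
    SELECT TOP (@Days)
           DATEADD(DAY, 1 - ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
                   CAST(GETDATE() AS date)) AS TheDate  -- today, yesterday, ...
    FROM sys.all_objects
)
SELECT d.TheDate, t.Category, COUNT(t.Id) AS Cnt
FROM D AS d
LEFT JOIN @T AS t
    ON t.[Date] = d.TheDate
GROUP BY d.TheDate, t.Category;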
I have backed up databases from a 2008 server and now I need to restore them to a 2012 server. The only issue is that I need a script, because I have over a hundred databases.
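A minimal sketch, run on the source 2008 server (assuming every backup is written as <DatabaseName>.bak to one folder and the data/log paths match on the 2012 box; add WITH MOVE clauses if they don't): generate one RESTORE per user database, then copy the output to the new server and run it.

Code Snippet
SELECT 'RESTORE DATABASE [' + name + '] '
     + 'FROM DISK = N''D:\Backups\' + name + '.bak'' '
     + 'WITH RECOVERY;' AS restore_cmd
FROM sys.databases
WHERE database_id > 4;  -- skip master, tempdb, model, msdb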
I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure which days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the log growth.
So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use the temporary table to load a sub temporary table with 100,000 rows and join this temporary table to the table I want to delete from, looping through 10 times to get 1 million. The Logging.EventLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this in a scheduled job with enough time between executions for log backups to run.
DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime

CREATE TABLE #BatchProcess (
    EventDate datetime,
    ApplicationID int,
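A hedged alternative to the nested temp tables (assuming the retention rule is EventDate plus WeeksToRetain weeks being in the past): delete straight from the base table in fixed-size chunks, so each transaction stays small and the scheduled log backups can reclaim space between chunks.

Code Snippet
DECLARE @BatchSize int = 100000,
        @Deleted   int = 1;

WHILE @Deleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM Logging.EventLog
    WHERE DATEADD(WEEK, WeeksToRetain, EventDate) < GETDATE();

    SET @Deleted = @@ROWCOUNT;

    WAITFOR DELAY '00:00:05';  -- give log backups room between chunks
END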
I have a 12-month report and I need to show volume and the difference between the current and previous month's volume. What is the smart way to do this? Do I need to put the previous month's value onto the same row horizontally? I think there should be some other, smarter way; I heard about the LEAD function?
This is what I think for now. It should be listed per ClientID also; in my example I have a single ClientID for simplicity.
I tried to use LEAD but without success...
/*
IF OBJECT_ID('tempdb..#t') IS NOT NULL DROP TABLE #T;
WITH R(N) AS (SELECT 1 UNION ALL SELECT N+1 FROM R WHERE N <= 12)
SELECT N AS Rn, 10001 ClientID,
       DATENAME(MONTH, DATEADD(MONTH, -N, GETDATE())) AS [Month],
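A hedged sketch of the shape that usually fits here (LAG looks back a row, LEAD looks forward, which may be why LEAD didn't cooperate; the table and column names are invented: dbo.Volumes with ClientID, MonthStart, Volume): the difference lands on the current month's row with no self-join and no horizontal copy of the previous value.

Code Snippet
SELECT ClientID,
       MonthStart,
       Volume,
       Volume - LAG(Volume) OVER (PARTITION BY ClientID
                                  ORDER BY MonthStart) AS DiffFromPrevMonth
FROM dbo.Volumes
ORDER BY ClientID, MonthStart;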
I have @Year and @Month as parameters, both integers, for example @Year = 2013, @Month = 11.
SELECT T.[Year], T.[Month]
    -- select the sum for each year/month combination using a correlated subquery
    -- (each result from the main query causes another data retrieval operation to be run)
    , (SELECT SUM(Stores) FROM #ABC
       WHERE [Year] = T.[Year] AND [Month] = T.[Month]) AS [Sum_Stores]
    , (SELECT SUM(SalesStores)
[code]....
What I want to do is add more columns to the query which show the difference from the previous month, as shown below. Example: the Diff beside Sum_Stores shows the difference in Sum_Stores from last month to this month.
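A hedged sketch against the #ABC table from the snippet: aggregate once per year/month, then LAG over the ordered months, which replaces both the correlated subqueries and any need to place last month's value on the row by hand.

Code Snippet
WITH M AS (
    SELECT [Year], [Month],
           SUM(Stores)      AS Sum_Stores,
           SUM(SalesStores) AS Sum_SalesStores
    FROM #ABC
    GROUP BY [Year], [Month]
)
SELECT [Year], [Month],
       Sum_Stores,
       Sum_Stores      - LAG(Sum_Stores)      OVER (ORDER BY [Year], [Month]) AS Diff_Stores,
       Sum_SalesStores,
       Sum_SalesStores - LAG(Sum_SalesStores) OVER (ORDER BY [Year], [Month]) AS Diff_SalesStores
FROM M;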
In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.
In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).
I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.
Does anyone have good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) as to how many records should be deleted in a single transaction/statement for optimum speed? 50,000? 100,000? 1,000,000? Unlimited? Are there significant differences between 2008, 2012, and 2014?
(0 row(s) affected) Msg 208, Level 16, State 1, Line 41 Invalid object name 'X_SET_PREOP'.
The error above is raised for the following code segment, where I am trying to do 2 updates with just one WITH block.

Code Snippet
CREATE TABLE #temp (
    MPOG_CASE_ID uniqueidentifier,
    lab_name varchar(100),
    lab_date datetime,
    lab_value decimal(19,2)
);
with X_SET_PREOP as (
    SELECT xx = ROW_NUMBER() OVER (PARTITION BY lab.MPOG_Case_ID, lab.lab_name
                                   ORDER BY lab.lab_date DESC),
           lab.MPOG_Case_ID, lab.lab_name, lab.lab_value, lab.lab_date
    FROM MPOG_Research..ACRC_427_lab_data lab
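The error happens because a CTE is scoped to the single statement that immediately follows it; the second UPDATE cannot see X_SET_PREOP. A hedged sketch (keeping the post's names): materialize the CTE's rows into the #temp table already created above, then run both updates against #temp.

Code Snippet
WITH X_SET_PREOP AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY lab.MPOG_Case_ID, lab.lab_name
                              ORDER BY lab.lab_date DESC) AS xx,
           lab.MPOG_Case_ID, lab.lab_name, lab.lab_value, lab.lab_date
    FROM MPOG_Research..ACRC_427_lab_data AS lab
)
INSERT INTO #temp (MPOG_CASE_ID, lab_name, lab_date, lab_value)
SELECT MPOG_Case_ID, lab_name, lab_date, lab_value
FROM X_SET_PREOP
WHERE xx = 1;  -- most recent value per case and lab name

-- Both UPDATE statements can now reference #temp as many times as needed.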
Curious: if I have the code below as an example and I execute it, does SQL execute from top to bottom? Does the UPDATE run and complete before the DELETE occurs? Or does SQL execute the UPDATE and DELETE in parallel?
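Within a single batch on a single session, T-SQL statements run strictly top to bottom; the DELETE does not begin until the UPDATE has completed, and it sees the UPDATE's changes. A hedged illustration with invented names:

Code Snippet
UPDATE dbo.Orders SET Status = 'archived' WHERE OrderDate < '20140101';
-- Runs only after the UPDATE above has finished; it sees the new Status values.
DELETE FROM dbo.Orders WHERE Status = 'archived';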
I would like to know how I can add the following sample code to my source data in the Data Flow in SSIS, or what other options there are. The main issue is time, as we are talking about hundreds of millions of rows.
SELECT Sample,
       CASE WHEN Sample IS NULL THEN NULL
            WHEN SUBSTRING(Sample, 1, 6) IS NULL THEN ' '
            ELSE RTRIM(SUBSTRING(Sample, 1, 6))
       END AS [Sample_1_6]
FROM TestTable
What I have done at this stage is just to create an Execute SQL Task with an INSERT INTO:
INSERT INTO [dbo].[TestTable1] ([Sample], [Sample_1_6])
SELECT Sample,
       CASE WHEN Sample IS NULL THEN NULL
            WHEN SUBSTRING(Sample, 1, 6) IS NULL THEN ' '
            ELSE RTRIM(SUBSTRING(Sample, 1, 6))
       END AS [Sample_1_6]
FROM TestTable
If there is a way of adding this to a data flow so I can use fast load, that would really be the best solution. I know there are derived columns, but would this really be faster than the straight INSERT INTO in a SQL Task? If this is the way to go, what is the code I would use in the derived column, or any other option?
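One hedged option that keeps fast load without a Derived Column at all: put the CASE expression in the OLE DB Source's SQL command, so the value is computed as rows stream into the data flow and the destination can still use fast load. A sketch using the names from the post:

Code Snippet
SELECT Sample,
       CASE WHEN Sample IS NULL THEN NULL
            ELSE RTRIM(SUBSTRING(Sample, 1, 6))
       END AS [Sample_1_6]
FROM TestTable;

(Note the middle branch of the original CASE can never fire: SUBSTRING only returns NULL when its input is NULL, and that case is already handled.) If both tables live in the same database, though, the straight INSERT INTO ... SELECT in a SQL Task is hard to beat, since the rows never leave the server.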
Need to create a user-defined role with grant permissions for the below:
View Definition
Execute all Function
Grant View
Grant Synonym
dbo View Definition
I am not getting the grant statements for the above permissions. I mean like below.
-----------------------------------------------------------------
CREATE ROLE [Role1]
GRANT EXECUTE ON SCHEMA::dbo TO [Role1]
-----------------------------------------------------------------
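A hedged attempt at the missing statements (assuming all the objects live in schema dbo; adjust the schema as needed). VIEW DEFINITION is its own grantable permission, EXECUTE at schema scope covers the functions and procedures in it, and SELECT at schema scope covers views and synonyms that point at readable objects:

Code Snippet
CREATE ROLE [Role1];
GRANT VIEW DEFINITION ON SCHEMA::dbo TO [Role1];
GRANT EXECUTE ON SCHEMA::dbo TO [Role1];
GRANT SELECT ON SCHEMA::dbo TO [Role1];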
There is a valuable script out there that will take the rows from a table and display INSERT STATEMENTS.
The good thing is this script converts the data to hexadecimal (or some other format) and we don't have to worry about dealing with apostrophes embedded in varchar fields.
How do I create INSERT statements of the data from a table for the top 'n' rows? I know we can create them using the Generate Scripts wizard, but we can't customize the number of rows. In my scenario I've got a database with 10 tables where every table has millions of records, but the requirement is to sample out only the top 10000 records from each table.
I am planning to generate the table CREATE statements from the Generate Scripts wizard and add these INSERT statements at the bottom.
EX: INSERT [dbo].[table] ([SERIALID], [BATCHID]) VALUES (126751, '9100278GC4PM1')
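A hedged sketch of the top-n INSERT generation for the two columns in the example (the REPLACE doubles any embedded apostrophes, the concern raised above; extend the pattern per table, and pick an ORDER BY that defines "top"):

Code Snippet
SELECT TOP (10000)
       'INSERT [dbo].[table] ([SERIALID], [BATCHID]) VALUES ('
     + CAST([SERIALID] AS varchar(20)) + ', N'''
     + REPLACE([BATCHID], '''', '''''') + ''');'
FROM [dbo].[table]
ORDER BY [SERIALID];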
SELECT COUNT(id) AS viewcount FROM location_views
WHERE createdate > DATEADD(dd,-30,GETDATE()) AND objectid = 357

SELECT COUNT(id)*2 AS clickcount FROM extlinks
WHERE createdate > DATEADD(dd,-30,GETDATE()) AND objectid = 357
But I want to add the COUNT statements, so this is what I did:
SELECT COUNT(vws.id) + COUNT(lnks.id)*2 AS totalcount
FROM location_views vws, extlinks lnks
WHERE (vws.createdate > DATEADD(dd,-30,GETDATE()) AND vws.objectid = 357)
   OR (lnks.createdate > DATEADD(dd,-30,GETDATE()) AND lnks.objectid = 357)
Turns out the query becomes immensely slow. There must be something I'm doing wrong here which results in such bad performance, but what is it?
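The comma join builds a cartesian product of location_views and extlinks before the OR filter is applied, which is why it crawls. A sketch that adds the two counts without any join at all:

Code Snippet
SELECT (SELECT COUNT(id) FROM location_views
        WHERE createdate > DATEADD(dd,-30,GETDATE()) AND objectid = 357)
     + (SELECT COUNT(id) FROM extlinks
        WHERE createdate > DATEADD(dd,-30,GETDATE()) AND objectid = 357) * 2
       AS totalcount;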