I thought this was a neat solution I came up with, but I'm sure it's
been thought of before. Anyway, it's my first post here.
We have a process for importing data which generates a SELECT statement based on a user's stored configuration. Since the resulting SELECT statement can be massive, it's created and stored in a text field in a temp table.
So how do I run this huge query after creating it? In my tests I was getting a datalength > 20000, which would require three varchar(8000) variables in order to use the EXECUTE command. The thing is, I don't know how big it could possibly get; I want to be able to execute it regardless of size.
Here's what I came up with; it's very simple. The table is named #IMPORTQUERY, with one field SQLTEXT of type TEXT.
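On SQL Server 2000 this takes the EXEC(@a + @b + @c) concatenation trick alluded to above; a minimal sketch of the same idea on SQL Server 2005 or later, where a single nvarchar(max) variable removes the 8000-character ceiling:

DECLARE @sql nvarchar(max)
SELECT @sql = CAST(SQLTEXT AS nvarchar(max)) FROM #IMPORTQUERY
EXEC (@sql)  -- executes the whole stored query, whatever its length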
Dear Friends, I am writing an SP and storing a query in a variable, but at the time of execution it generates an error. Example:
===================
Declare @query varchar(500)
Set @query = 'Select * from table'
if exists (exec (@query))
print 'Hi'
====================
But the "if exists" line gives an error. How do I solve this? Please help me out.
Regards, Arijit Chatterjee
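EXEC cannot appear inside IF EXISTS; one common workaround is to have the dynamic query hand its result back through an sp_executesql output parameter. A sketch, using the placeholder table name from the post:

Declare @query nvarchar(500), @cnt int
Set @query = N'Select @cnt = count(*) from [table]'
exec sp_executesql @query, N'@cnt int output', @cnt = @cnt output
if @cnt > 0
print 'Hi'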
Currently I have huge amounts of data going into a table. I'm sending an XML doc and using OPENXML with a cursor to seed them.
The question I have is whether to let duplicate-keyed data rows bounce, check @@ERROR, and then do an update on the non-keyed fields, or to do a select on the keyed field and then do an insert or update based on the select's results.
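A sketch of the second, select-then-insert-or-update pattern; the table and column names are placeholders:

IF EXISTS (SELECT 1 FROM dbo.Target WHERE KeyCol = @key)
    UPDATE dbo.Target SET NonKeyCol = @value WHERE KeyCol = @key
ELSE
    INSERT INTO dbo.Target (KeyCol, NonKeyCol) VALUES (@key, @value)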
Rather than posting twice, I thought I would put both issues I'm having in one. Our server is Windows Server 2003 and we're running SQL Server 2005.
The first issue is this: We have several databases and I have scheduled their backups to run nightly, which works just fine. A couple of weeks ago, one database's .bak file grew from about 500 MB to 2 GB overnight. Then, just a few days ago, it went from 2 GB to 3.5 GB. There is nothing unusual going on in the live db that would warrant such an increase in the .bak file. All the dbs are in the same backup job schedule, but this is the only one affected. Additionally, I had autogrowth enabled on all the dbs but today disabled it for this particular db. Any ideas?
The second issue is my tempdb.mdf file on my C drive. It will go from just a few hundred KB to 4.5 GB overnight, consuming most of what is left on my C drive. I'm afraid I'm in for a system crash if it continues. I have to stop SQL Server and restart it to clear the size. Is there a way to move the location of the tempdb.mdf file to my F drive?
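For reference, tempdb can be relocated with ALTER DATABASE; the change takes effect after the SQL Server service restarts. A sketch assuming the default logical file names tempdev and templog, and a hypothetical F:\SQLData folder:

USE master
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLData\templog.ldf')
-- restart the SQL Server service; tempdb is re-created at the new location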
I don't know if these two issues are related or not but certainly would like to hear from someone.
Hello, I have a huge database (2 GB / month) and after a while it becomes non-operational (time-outs, etc.). So I have written a SQL statement (a delete) that can reduce the db size by around 60% without compromising the application's data needs. The problem is that when I execute it, the db does reduce its size by 60%, but the transaction log grows at the same rate. Can I execute the statement in some "commit" or "transaction" mode so as to keep SQL Server from writing to the log? Thanks for the help! Antonio
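Logging can't be switched off for a DELETE, but the log stays small if the database uses the SIMPLE recovery model and the delete runs in batches, so each batch's log space can be reused. A sketch assuming SQL Server 2005 or later; the table, column, and cutoff value are placeholders:

DECLARE @cutoff datetime
SET @cutoff = '20060101'  -- hypothetical retention boundary
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable WHERE CreatedDate < @cutoff
    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT  -- under SIMPLE recovery, lets the used log space be recycled
END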
Hi everyone, my brain refuses to remember the (undocumented?) stored procedure I'm thinking of. It takes at least two parameters: a SQL statement to execute, and a table name (or something of that nature). Then, for each value in the table, it executes the SQL statement and passes the value as a parameter. Can anybody refresh my memory? The functionality may be slightly different than described, but the principle is the same. Thanks very much... -Joe
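This sounds like it may be the undocumented sp_MSforeachtable (or its sibling sp_MSforeachdb), which runs a command once per table or database, substituting ? with each name:

EXEC sp_MSforeachtable 'PRINT ''?'''  -- prints every table name in the current database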
I have set up a package that copies data from one server to another server, then deletes the data from the source tables. Now I want to add a task that checks whether the copy was successful: if it was, delete the data; ELSE stop the package and give an error message, or do some kind of rollback so I don't delete the data without having copied it to the destination server.
So what I want to ask is: is that possible using an Execute SQL task to write the script? If not, how do I approach it? And I need some help with the rollback script as well (IF the previous task fails ..... ELSE go on to the next task).
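One way, assuming the destination is reachable as a linked server: do the copy and the delete in a single transaction inside one Execute SQL task, so the delete only commits if the copy succeeded. A sketch with placeholder server and table names, using SQL Server 2005 TRY/CATCH; note that a cross-server transaction like this may require MSDTC:

BEGIN TRY
    BEGIN TRANSACTION
    INSERT INTO DESTSERVER.DestDb.dbo.TargetTable
    SELECT * FROM dbo.SourceTable
    DELETE FROM dbo.SourceTable
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
    RAISERROR('Copy failed; source rows were not deleted.', 16, 1)
END CATCH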
When updating large sets a row at a time, performance is lacking in comparison to 6.5. PeopleSoft uses cursors with a BEGIN TRANSACTION, a loop inside, and a COMMIT after the loop completes; SQL 6.5 with page locking could handle a 300,000-row transaction in 3-4 hours, while 7.0 took 17.5 hours. The difference is that 6.5 used 50,000 locks and 7.0 used 300,000 locks.
Does anybody have a solution short of rewriting PeopleSoft?
We have a large and active MSSQL 2000 database. Recently, after a rebuild of the server, we had a problem with the SQL service SQLSERVERAGENT. The service could not start, as the service account had lost local permission to the registry. During this time, all of the data being sent to the database from our application accumulated in the database .ldf file. By the time we were able to get the service restarted, our .ldf file was approx. 28 GB. When the service restarted, the .ldf file shrank down to its regular size, about 40 MB, and the .trn t-log backup files grew to 28 GB for that specific period (new file every hour).
The problem is, the database file (database.mdf) stayed about the same size as before the service was restarted. When the .ldf was transferred to the .trn files, none of the 28 GB of data got stored in the database. What does this mean? Perhaps with the service stopped, the application using the db saw problems and did not commit the data, making it all useless? Or is it possible that the data in the .trn log just needs to be forced to commit to the .mdf?
Is there any way to verify the data in the 28 GB of .trn files and figure out whether we should get it stored to the database? If yes, how would we go about verifying it, and after that how would we force it to commit to the .mdf file? Am I on the right track here, or is it not as I see it?
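A .trn file is a transaction log backup, so it can be inspected and rolled forward with RESTORE; note that a restore only replays committed transactions, which would answer whether the application actually committed the data. A sketch with placeholder paths and logical file names, restoring to a scratch database on top of the matching full backup:

RESTORE HEADERONLY FROM DISK = 'D:\Backups\suspect_period.trn'  -- inspect what the backup contains
RESTORE DATABASE MyDb_Check FROM DISK = 'D:\Backups\last_full.bak'
    WITH NORECOVERY,
    MOVE 'MyDb_Data' TO 'D:\Scratch\MyDb_Check.mdf',
    MOVE 'MyDb_Log' TO 'D:\Scratch\MyDb_Check.ldf'
RESTORE LOG MyDb_Check FROM DISK = 'D:\Backups\suspect_period.trn' WITH RECOVERY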
Would you say that it's OK for a web site's code to make ALL of its access to a db through SPs and views? And I mean everything, including inserting new records and updating others, with no SQL in the code at all.
The advantage would be very strict control over the access, but achieving this would take many, many SPs and views to cover all types of actions. Can you think of a disadvantage besides all the work of creating those SPs? What about server resources and performance? How demanding would it be?
Our database server has started acting weird, and at this point I'm either too sleep-deprived or too close to the problem to adequately diagnose the issue. Basically, to put it simply: when I look at the read disk queue length, the disk queues are astronomical. Normally we're seeing a disk queue length of 0-1 on the disks that contain the DB data and indexes (i.e. non-clustered indexes are on a disk of their own). Writes are just fine.
The problem is, all our databases are on the same drive, and I can't seem to nail down which DB, let alone which table, is the source of all our reads. Now, to really make things weirder: during the busier times of the day today (say 1:00 PM to 4:00 PM) things were fine. At 4:20 PM or so it was like someone hit a switch, and the read disk queue length jumped from 0-1 up to 100-200+, with spikes up to 1500 for a split second or so. What's the best way folks know to nail this down? Thanks.
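One way to at least see which database and file the reads are hitting, assuming SQL Server 2005 or later where this DMV exists:

SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_bytes_read
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.num_of_reads DESC  -- heaviest readers first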
If I remove the TOP 200, this query returns about 2.5 million rows. It combines a lot of records and turns them into much more programmer-friendly results. The query has slowed from 2 seconds to about 13 seconds as the data has grown from about 10k rows to the current couple of million.
Code Block
SELECT TOP 200 *
FROM (
    SELECT [UserProfile].[UserId]
         , [aspnet_Users].[UserName]
         , [City]
         , [State]
         , [RoleName]
         , [ProfileItemType].[Name] AS pt_name
         , [ProfileItem].[Value]
    FROM [UserCriteria]
       , [aspnet_Users]
       , [aspnet_Roles]
       , [aspnet_UsersInRoles]
       , [Location]
       , [ProfileType]
       , [ProfileTypeItem]
       , [ProfileItem]
    INNER JOIN [UserProfile]
        ON [ProfileItem].[ProfileId] = [UserProfile].[ProfileId]
    INNER JOIN [ProfileItemType]
        ON [ProfileItem].[ProfileItemTypeId] = [ProfileItemType].[ProfileItemTypeId]
    WHERE [UserProfile].[UserId] IN (
            SELECT [UserCriteria].[UserId]
            FROM [UserCriteria]
            WHERE Zipcode IN (SELECT [Zipcode] FROM [ZipcodeProximitySQR]('89108', 150)))
      AND [UserProfile].[UserId] = [aspnet_Users].[UserId]
      AND [UserCriteria].[UserId] = [UserProfile].[UserId]
      AND [Location].[Zipcode] = [UserCriteria].[Zipcode]
      AND [aspnet_UsersInRoles].[UserId] = [aspnet_Users].[UserId]
      AND [aspnet_UsersInRoles].[RoleId] = [aspnet_Roles].[RoleId]
) AS t
PIVOT (MIN([Value]) FOR pt_name IN ([field1], [field2], [field3], [field4])) AS pvt
ORDER BY RoleName DESC, NEWID()
In the line FOR pt_name IN ([field1],[field2],[field3],[field4]) I changed the values from the long names to read field1, field2, ... because the real names were irrelevant here, just confusing.
I have two machines. I was working in an "Execute SQL Task" object's SQL window on a rather long SQL task on one machine and reached some kind of limit on the length of the SQL statements: I could not add another line of code. I cut and pasted this same code into the exact same "Execute SQL Task" object's SQL window on the second machine, and it does not have this limit. Does anyone know what causes this? (In fact, I could paste it in twice there, doubling the length.)
I am trying to copy data from DB1.VersionNumber to DB2.VersionNumber (where VersionNumber has the same schema in both databases), and the table contains an IDENTITY column, so I want to keep the same IDENTITY values as the source db when copying to DB2.
SELECT @ConstructString1 = 'INSERT INTO VersionNumber ([VersionNumberEndDate],[VersionNumberID],[VersionNumberName],[FiscalYearNumber],[RowGUID])
SELECT [VersionNumberEndDate],[VersionNumberID],[VersionNumberName],[FiscalYearNumber],[RowGUID]
FROM MapssR14SR2.dbo.VersionNumber'
EXECUTE (@CommandString)
EXECUTE (@ConstructString1)
I am getting an error saying: Cannot insert explicit value for identity column in table 'ExchangeRate' when IDENTITY_INSERT is set to OFF.
But if I try to execute it without EXECUTE:
USE DB2
SET IDENTITY_INSERT dbo.VersionNumber ON
INSERT INTO VersionNumber([VersionNumberEndDate],[VersionNumberID],[VersionNumberName],[FiscalYearNumber])
SELECT [VersionNumberEndDate],[VersionNumberID],[VersionNumberName],[FiscalYearNumber] FROM DB1.dbo.VersionNumber
SET IDENTITY_INSERT dbo.VersionNumber OFF
I am able to execute it successfully.
Can anybody please explain what the problem in my code is? Why does executing through EXECUTE statements throw an error, while executing directly does not? Is there any restriction on statements when using EXECUTE? I tried using
EXECUTE sp_executesql @CommandString
EXECUTE sp_executesql @ConstructString1
but the same error comes up.
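A likely explanation, offered as an assumption: SET options issued inside an EXECUTE go out of scope when that dynamic batch ends, so the SET IDENTITY_INSERT from @CommandString is no longer in effect when the second EXECUTE runs the INSERT. Putting both statements inside a single EXECUTE keeps the setting alive for the insert:

EXECUTE ('SET IDENTITY_INSERT dbo.VersionNumber ON;
INSERT INTO VersionNumber ([VersionNumberEndDate],[VersionNumberID],[VersionNumberName],[FiscalYearNumber])
SELECT [VersionNumberEndDate],[VersionNumberID],[VersionNumberName],[FiscalYearNumber]
FROM DB1.dbo.VersionNumber;
SET IDENTITY_INSERT dbo.VersionNumber OFF;')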
I am running an update statement in an Execute SQL task. It will run one time but then fails after that. What's going on?
Here is the query I'm running.
UPDATE encounter
SET mrn = r.mrn,
    resourcecode = r.resourcecode,
    resgroup = r.resgroup,
    apptdate = r.apptdate,
    appttime = r.appttime
FROM SCFDBWH.SigSched.dbo.EncounterNARaw AS r
INNER JOIN encounter ON encounter.encounter_number = r.Encounter
What I'm doing is pulling info from a flat file and inserting it into the encounter table; I then run this SQL statement to pull additional info from another table on another server and update the encounter table with the corresponding info. Pretty straightforward. But what is really getting me is that the package will run once, yet if I wait ten minutes and try running it again, it bombs. Any ideas?
I have to delete a ton of data from a SQL table. I have a unique identifier called the version. I would like to say: if the version is not in this list, then delete the row. I tried the statement below, but learned the hard way that it raises an error. This is the message I got:
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about TRUNCATE, but I am not sure how I would do this or how I would set up the statement.
Delete Products where [version] not in (
'48459CED-871F-4971-B888-5083990332BC',
'D550C8D3-58C7-4C74-841D-1C1675F19AE3',
'C77C7817-3F04-4145-98D3-37BB1610DB35',
'21FE83FA-476D-4604-80EF-2ED57DEE2C16',
'F3B50B81-191A-4D71-A406-011127AEFBE1',
'EFBD48E7-E30F-4047-909E-F14DCAEA4181',
'BD9CCC41-D696-406B-
'C8BEBFBC-D362-4D0F-A555-B281FC2B3023',
'EFA64956-C2CF-41FC-8E21-F060597DAFCB',
'77A8DE56-6F7F-4490-8BED-AA6809B947EF',
'0F4C1E5F-B689-4DCB-
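A sketch of the TRUNCATE approach, offered as an untested assumption about this schema: TRUNCATE is only minimally logged but removes every row, so the rows to keep have to be copied out first. If Products has an identity column, the re-insert also needs a column list and SET IDENTITY_INSERT:

SELECT * INTO #keep
FROM Products
WHERE [version] IN ('48459CED-871F-4971-B888-5083990332BC')  -- the full keep-list goes here

TRUNCATE TABLE Products

INSERT INTO Products
SELECT * FROM #keep

DROP TABLE #keep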
I have a series of SSIS packages, all of which are ultimately executed by a parent package.
I'm consistently getting "OutOfMemory" errors when working with the packages, which is temporarily solved by closing Visual Studio and re-opening the package(s). This solution is short-lived, however, as the OutOfMemory error occurs again quite quickly after re-opening, often after doing nothing other than altering a variable's default value and attempting to save the package.
The average size of the packages in question (.dtsx files) is around 7,000 KB, with the largest being 12,500 KB. The total size of all the solution's packages is ~75,000 KB.
The Processes tab in Task Manager shows a Mem Usage counter for devenv.exe *32 of around 20,000 KB when Visual Studio is first opened. However, when a single ~6,000 KB dtsx file is opened, this counter jumps to 300,000+ KB, and when the entire solution is opened (when the parent package is executed), the Mem Usage counter for devenv.exe *32 is a massive 800,000+ KB!!!
Is this normal SSIS behaviour or do I have a major problem? Any tips or suggestions as to how to resolve this issue would be gratefully received.
FYI, "SELECT @@VERSION" gives me "Microsoft SQL Server 2005 - 9.00.3042.00 (X64) Feb 10 2007 00:59:02 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2) "
My Server is Windows Server 2003 R2 Enterprise x64 SP2 with 8GB of RAM.
select case
    when class_desc = 'OBJECT_OR_COLUMN'
        then 'GRANT ' + permission_name + ' ON ' + '[' + left(object_name, 3) + '].' + '[' + substring(object_name, 5, len(object_name)) + '] TO ' + username
    when class_desc = 'DATABASE_ROLE'
        then EXEC sp_addrolemember N'object_name', N'MC'
end
from dba.dbhakyedek
where username = 'MC'
This statement was running successfully until the exec sp_addrolemember part. I just learned that I can't call an SP in a SELECT CASE, but I couldn't figure out how to do it instead.
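One common workaround, sketched here assuming SQL Server 2005+ for nvarchar(max): have the CASE build each command as a string instead of running it, concatenate the rows into one script, and EXEC that. The ELSE '' guards against a NULL wiping out the whole string:

DECLARE @sql nvarchar(max)
SET @sql = N''

SELECT @sql = @sql +
    CASE
        WHEN class_desc = 'OBJECT_OR_COLUMN'
            THEN 'GRANT ' + permission_name + ' ON [' + LEFT(object_name, 3) + '].[' + SUBSTRING(object_name, 5, LEN(object_name)) + '] TO ' + username
        WHEN class_desc = 'DATABASE_ROLE'
            THEN 'EXEC sp_addrolemember N''' + object_name + ''', N''MC'''
        ELSE ''
    END + ';' + CHAR(10)
FROM dba.dbhakyedek
WHERE username = 'MC'

EXEC (@sql)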
The following exception is thrown with sqljdbc.jar (not with jtds0.9.jar)
com.microsoft.sqlserver.jdbc.SQLServerException: New request is not allowed to start because it should come with valid transaction descriptor.
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source)
    at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(Unknown Source)
    at com.microsoft.sqlserver.jdbc.TDSParser.parse(Unknown Source)
    at com.microsoft.sqlserver.jdbc.TDSParser.parse(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection$1ConnectionCommand.doExecute(Unknown Source)
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectionCommand(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(Unknown Source)
    at org.apache.commons.dbcp.DelegatingConnection.rollback(DelegatingConnection.java:265)
    at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.rollback(PoolingDataSource.java:288)
This happens while using the query:

if not exists (select 1 from sysindexes where id = object_id('aaa') and name = 'aaa_pk')
    create unique nonclustered index aaa_pk on aaa(id)

via Statement.executeBatch() and Connection.commit().
Sample Code
conn = getConnection();

// create the statement and execute the query
Statement stmt = null;
try {
    conn.setAutoCommit(false);
    stmt = conn.createStatement();
    for (String sql : sqlList) {
        stmt.addBatch(sql);
    }
    results = stmt.executeBatch();
    stmt.clearBatch();
    conn.commit(); // throws the exception
} catch (SQLException e) {
    try {
        conn.rollback();
    } catch (SQLException e1) {
        throw new DataSourceException(e1);
    }
    throw new DataSourceException("Error executing sql: %1", e, sqlList.toString());
}
We are using SQL Server 7 on Win 2K, and there are some DTS packages set up which empty some large tables (DELETE FROM) and then import some data files. The imported files are about 13 GB, and during the process the log file gets to about 10 GB and then runs out of disk space.
Is there a trick to empty a table without logging it (a la LOAD REPLACE FROM NULL in DB2)? How can I go about keeping the log file size down during this operation?
I think the DB is set to autocommit; the "trunc. log on chkpt." option is on, as is "select into/bulk copy" (although I'm reasonably sure we aren't availing of bulk copy for the import).
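The usual trick here is TRUNCATE TABLE: it logs only the page deallocations rather than each deleted row, so it is fast and keeps the log small. It does require that no foreign keys reference the table. The table name below is a placeholder:

TRUNCATE TABLE dbo.BigImportTable  -- instead of DELETE FROM dbo.BigImportTable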
I am currently working on a project where I have to import a huge amount of data from CSV files into a database. I don't want to have duplicate keys in my table, but my CSV files contain them, and the later a line appears in the file, the more up-to-date the information it contains, so that is the version I have to store.
I have been trying to fix this problem for several weeks, but my algorithm is very slow and blocks all other processes on the server. At the moment I am copying all records that occur more than once in the CSV file into a temp table. After that I run through this table line by line and check if the key already exists in the target table, and then do either an insert or an update.
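A set-based alternative that avoids the row-by-row loop, sketched under assumptions: SQL Server 2005 or later, a staging table holding the raw CSV rows with a RowNum column that preserves file order, and placeholder key/value column names. ROW_NUMBER picks the last occurrence of each key:

WITH latest AS (
    SELECT KeyCol, Payload,
           ROW_NUMBER() OVER (PARTITION BY KeyCol ORDER BY RowNum DESC) AS rn
    FROM dbo.Staging
)
UPDATE t
SET t.Payload = l.Payload
FROM dbo.Target AS t
JOIN latest AS l ON l.KeyCol = t.KeyCol AND l.rn = 1;

WITH latest AS (
    SELECT KeyCol, Payload,
           ROW_NUMBER() OVER (PARTITION BY KeyCol ORDER BY RowNum DESC) AS rn
    FROM dbo.Staging
)
INSERT INTO dbo.Target (KeyCol, Payload)
SELECT l.KeyCol, l.Payload
FROM latest AS l
WHERE l.rn = 1
  AND NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.KeyCol = l.KeyCol);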
Hi you all. In abc.aspx, I use a GridView and a SqlDataSource with a SelectCommand; the GridView's DataSourceID is the SqlDataSource. In abc.aspx.cs, I would like to use an IF statement in which, if a criterion is not satisfied, I use the SqlDataSource with another SelectCommand string. Unfortunately, I have yet to learn how to write the code lines to do that with the SqlDataSource. Please help me out!
I have the following requirement. From an OLE DB source I am getting IDs, then doing a lookup against some master data, so now I have only the matching IDs. Next I need to find some field (say Frequency) from some table for each of the above IDs. I have already written a stored procedure for this, taking the ID as a parameter, and it works fine when I run it in SQL Server Management Studio.
The query is sort of:
Select field1, field2... from table1 where id = @id
where @id is each ID from the lookup.
Now I want to call this stored procedure in the data flow. I tried it using an OLE DB Command, but it did not return the output of the stored procedure; the output I get is just the same as the input I pass in.
Is there a way to do this? In short, my requirement is to execute a parameterized SELECT statement using a data flow transformation component.
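For what it's worth, an OLE DB Command is parameterized with ? markers mapped to input columns, e.g. EXEC dbo.GetFrequency ? (a hypothetical procedure name); but as far as I know it only maps output parameters, not a procedure's result set, which would explain the rows coming out unchanged. A Lookup transformation over the underlying query can add the fields as new columns instead:

SELECT id, field1, field2 FROM table1
-- used as the Lookup's query, joined on the id column flowing through the pipeline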
CREATE FUNCTION [dbo].[UDF_GetCode] ( @TableName NVARCHAR(50) ) RETURNS NVARCHAR(50)
[code]...
This function is called in an insert statement like the one below:

exec sp_executesql N'INSERT INTO Table ([Code], [Name]) VALUES (dbo.UDF_GetGlobalConfigCode(''TableName''), @Name)'

I am getting the following error: "Only functions and some extended stored procedures can be executed from within a function."
I have some "Execute T-SQL Statement Tasks" in a package. I would like to run this same package on another SQL Server without having to change it on the other server. Since the server name can be given when setting up the connection, I think if I leave the server name out then the package could run on any server? Is my assumption correct?