I have several SSIS packages which are run in sequence, but for some strange reason a SQL dump is sometimes created. If I restore the database and restart the same ETL, it either works or creates another SQL dump in a different place. Any idea how to debug this problem?
SQL Server 2005 Developer Edition SP1 (should I upgrade to SP2?)
Windows 2003/R2 64-bit (latest patches)
Microsoft Visual Studio 2005 Version 8.0.50727.762 (SP.050727-7600)
Microsoft .NET Framework Version 2.0.50727
Installed Edition: IDE Standard
Microsoft Visual Studio 2005 Premier Partner Edition - ENU Service Pack 1 (KB926601)
This service pack is for Microsoft Visual Studio 2005 Premier Partner Edition - ENU.
If you later install a more recent service pack, this service pack will be uninstalled automatically.
For more information, visit http://support.microsoft.com/kb/926601
SQL Server Analysis Services Microsoft SQL Server Analysis Services Designer Version 9.00.2047.00
SQL Server Integration Services Microsoft SQL Server Integration Services Designer Version 9.00.2047.00
SQL Server Reporting Services Microsoft SQL Server Reporting Services Designers Version 9.00.2047.00
I am creating a database for an application through script. After the tables, views, and sp's are created, the database is populated with data. After all of this (and before the application is even run), the log file is about 700MB. If I shrink the database, it takes the log down to 1MB. The mdf file is about 165 MB before and after it has been shrunk.
I have two questions:
1. Is there something I should look for in my database scripts, or is there a setting that could prevent the log from growing so large?
2. Is there a script I can run in my SQL code after the database has been created and populated to shrink it?
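A sketch of the kind of post-population script that should do this (the database and logical log file names are placeholders, and switching to SIMPLE recovery is an assumption that may not suit every environment):

-- stop logging full transaction detail for this database, then shrink the log file
ALTER DATABASE MyAppDb SET RECOVERY SIMPLE
GO
USE MyAppDb
GO
-- the logical log file name is usually <dbname>_log; check sys.database_files to be sure
DBCC SHRINKFILE (MyAppDb_log, 1)  -- second argument is the target size in MB
GO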
After much work, and thanks to all of you who helped on this, here is a code sample that can be adapted. From the Data Flow task, add an OLE DB source component, a Row Count component, and finally a Script Destination component.
On the Script Destination component, rename the Input node of the inputs and outputs tree view to "ParsedInput".
The read-only User variables that start with gs can be read in the PreExecute method.
The read-write User variable giSuccessCount can only be used in the PostExecute method, because it is populated by the Row Count component, which is the previous object in the data flow.
The XML code is adapted from an idea in Donald Farmer's book.
enjoy
Dave
Now if someone can make a Script Source component that can read a file with a header, data body and trailer, that would be great!

' Microsoft SQL Server Integration Services user script component
' This is your new script component in Microsoft Visual Basic .NET
' ScriptMain is the entrypoint class for script components
Imports System
Imports System.IO  'required for StreamWriter
'In addition to the Imports System.Xml statement, a reference must be added to the
'System.Xml assembly (select Project - Add Reference from the IDE)
Imports System.Xml

Public Class ScriptMain
    Inherits UserComponent

    Dim sw As StreamWriter
    Dim xWriter As XmlTextWriter
    Dim OutputFileType As String '.csv or .xml
Public Overrides Sub PreExecute()
    'Read-only variables
    Dim gsPickUp As String = Me.Variables.gsPickUp                  'D:\ftproot\Out\Avid
    Dim gsPickUpFilename As String = Me.Variables.gsPickUpFilename  '1_AVID_
    Dim gsPickUpFileExtn As String = Me.Variables.gsPickUpFileExtn  '.csv
    Dim gsMemoText As String = Me.Variables.gsMemoText              'Memo Text : credit adjustment
    Dim gsStatementText As String = Me.Variables.gsStatementText    'Statement Text : credit adjustment
    Dim gsRunMode As String = Me.Variables.gsRunMode                'UPDATE
    'the path separator below was stripped when the post was formatted; "\" is assumed
    Dim fileName As String = gsPickUp & "\" & gsPickUpFilename
    fileName = fileName & (Format(Now(), "yyMMdd").ToString)
    'MsgBox(fileName)
    OutputFileType = gsPickUpFileExtn
    If OutputFileType = ".csv" Then
        fileName = fileName & gsPickUpFileExtn
        sw = New StreamWriter(fileName) 'connection to dest file
        'Header records
        sw.Write(gsRunMode)
        sw.Write(Environment.NewLine) 'end of line
        sw.Write(gsMemoText)
        sw.Write(Environment.NewLine)
        sw.Write(gsStatementText)
        sw.Write(Environment.NewLine)
        sw.Write(Environment.NewLine) 'Spacer
    End If
If OutputFileType = ".xml" Then fileName = fileName & gsPickUpFileExtn 'xWriter = New XmlTextWriter(Me.Connections.XMLConnection.ConnectionString, Nothing) 'xWriter.WriteStartDocument() 'xWriter.WriteComment("Customer file parsed using script") 'xWriter.WriteStartElement("x", "customer", "http://some.org/name") 'xWriter.WriteAttributeString("FileName", Me.Connections.XMLConnection.ConnectionString) xWriter = New XmlTextWriter(fileName, Nothing) xWriter.WriteStartDocument() xWriter.WriteComment("Customer file parsed using script") xWriter.WriteStartElement("x", "customer", "http://some.org/name") xWriter.WriteAttributeString("FileName", fileName) End If
End Sub
Public Overrides Sub ParsedInput_ProcessInputRow(ByVal Row As ParsedInputBuffer)
If OutputFileType = ".csv" Then Dim delim As String = ","
If OutputFileType = ".csv" Then 'Create the trailer sw.Write(Environment.NewLine) ' blank line sw.Write("RECORD_COUNT: " & Me.Variables.giSuccessCount.ToString) 'ReadWrite Varible sw.Write(Environment.NewLine) sw.Flush() 'send the stream to file 'Close file sw.Close() End If
If OutputFileType = ".xml" Then xWriter.WriteStartElement("RecordCount") xWriter.WriteString(Me.Variables.giSuccessCount.ToString) xWriter.WriteEndElement() xWriter.WriteEndElement()
I have a solution with a couple of SSIS projects in it. Every time I open the solution, Visual Studio creates an extra .database file alongside the project's existing xxx.database file. The solution is under VSS control, and VS2005 checks out the project and shows the file as a newly added file.
What causes this and how can I prevent this from occurring?
There are 2 of us working on the solution and the other fellow does not see this behavior.
The files do not show up in the VS2005 solution explorer. If they are user specific as I suspect (impersonation info?), then they should not be added to the .dtproj project file.
I'm not at all comfortable with SSIS so please forgive me if I overload you all with information here:
I need to create a data table using SSIS which does not delete the previous day's data. So far, all the data tables we use to write reports in Visual Studio are constructed in SSIS as follows.
1 - Execute SQL Task - DELETE FROM STOCK
2 - Data Flow Task
3 - Data Reader Source - SELECT * FROM ODBCDATASOURCE
4 - OLE DB Destination (creates table STOCK)
The data tables which are created this way are stored in a data warehouse and scheduled to refresh once a day, which means that any data from yesterday is lost when the updates run. So, I tried to create a table which never has its previous days' data deleted by using just the last three steps above - and it worked great in Visual Studio, no problem at all. However, when I added this SSIS package to the update job in SQL Server Management Studio, the job totally rejected the package with the message: "The command line parameters are invalid. The step failed."
I thought I could work around this problem by asking the job step to execute a simple SQL query to insert the data from table1 into table2 (which would negate the need for an SSIS package at all), but it threw me a curve ball with some message about not being able to use proxy accounts to run T-SQL scripts.
If anyone knows how to create an SSIS package in which the data never expires, please could you impart some wisdom my way. I only need to do this once for a specific report.
Please, when answering, bear in mind that I'm a simple fellow with little understanding of the inner workings of SQL Server and its various components, so please use short sentences and simple words.
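One pattern that fits what is being asked, sketched with made-up table and column names: keep the daily DELETE/reload for STOCK, and add an Execute SQL Task after the data flow that appends each day's rows to a history table that is never truncated.

-- append today's snapshot to a table whose rows are never deleted
INSERT INTO dbo.STOCK_HISTORY (SnapshotDate, ItemCode, Quantity)
SELECT GETDATE(), ItemCode, Quantity
FROM dbo.STOCK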
I want to compare the schemas of 2 different databases. I heard that there was a way to dump the contents of a database (tables, stored procedures, objects, etc.) to a text file in SQL to use for an easy compare. Does anyone know the process for doing this? Any help is greatly appreciated.
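One rough way to do this without extra tools is to query the INFORMATION_SCHEMA views in each database, save each result as text, and diff the two files; a sketch:

SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION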
I would like a trigger to fire after an SSIS job finishes.
My understanding is that I would use an AFTER trigger.
How would my UPDATE and INSERT INTO fire and only affect the new rows?
SET ARITHABORT OFF
SET ANSI_WARNINGS OFF
UPDATE [GDev].[dbo].[tblCIDetailsTest] SET dFRate = (dFCharge/(dSCharge+dACharge))
I also need to INSERT INTO 3 columns from a table called tblFinanceP by looking up / UNION or JOIN (not sure which to use) on a column called vcTNum that is in both tblFinanceP and tblCIDetailsTest.
-- (corrected syntax: an INSERT target cannot take an alias, and the lookup needs a join or subquery)
INSERT INTO [GDev].[dbo].[tblCIDetailsTest] (iPNum, iPCount, iZone)
SELECT EDI.iPNum, EDI.iPCount, EDI.iZone
FROM [GrEDI].[dbo].[tblFinanceP] AS EDI
WHERE EDI.vcTNum IN (SELECT vcTNum FROM [GDev].[dbo].[tblCIDetailsTest])
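If the intent is actually to fill in the three columns on existing rows (rather than add new rows), an UPDATE ... FROM with a join on vcTNum may be closer to what is wanted; a sketch, assuming vcTNum matches between the two tables:

UPDATE Details
SET Details.iPNum = EDI.iPNum,
    Details.iPCount = EDI.iPCount,
    Details.iZone = EDI.iZone
FROM [GDev].[dbo].[tblCIDetailsTest] AS Details
INNER JOIN [GrEDI].[dbo].[tblFinanceP] AS EDI ON EDI.vcTNum = Details.vcTNum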
I have SQL Server 2000 running on Server 2003 Enterprise Edition, and a dump file which I have no idea how it was created. All I know is that the data is from some relational database.
I need to import this dump file into the SQL server database.
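If the file turns out to be a native SQL Server backup, something along these lines will inspect it and restore it (paths and names are placeholders):

RESTORE HEADERONLY FROM DISK = 'C:\dumps\unknown.bak'    -- what kind of backup is it?
RESTORE FILELISTONLY FROM DISK = 'C:\dumps\unknown.bak'  -- logical file names inside it
RESTORE DATABASE ImportedDb FROM DISK = 'C:\dumps\unknown.bak'
    WITH MOVE 'LogicalDataName' TO 'C:\data\ImportedDb.mdf',
         MOVE 'LogicalLogName'  TO 'C:\data\ImportedDb.ldf'

If it came from another product (Oracle, MySQL, ...), it will need that product's own import tooling instead.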
Hi, I just started using SQL Server 2005 and I'm trying to find out how to do a SQL dump on a table, but this is proving more challenging than it should be.
I usually use MySQL with a program called Navicat, and all you do is right-click on the table and select dump... Inserting the data back in is just as simple. I also used SQL Server 2000 a while back and I know there was a dumping utility for it.
Can someone point me in the right direction on how to dump data to a .sql file and reinsert that data? Thanks!
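For plain data-out/data-in (not an INSERT-statement dump), the bcp command-line utility still exists in SQL Server 2005; a sketch with placeholder names, assuming a trusted connection:

bcp MyDb.dbo.MyTable out C:\dump\MyTable.dat -c -T -S MYSERVER
bcp MyDb.dbo.MyTable in  C:\dump\MyTable.dat -c -T -S MYSERVER

-c writes character data, -T uses Windows authentication, and -S names the server.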
I need to dump a table with more than 200 columns to a flat file. I tried SSIS with the flat file destination, but it only allows 84 columns. What should I do if I want to dump all the columns to a text or csv file? Thanks in advance.
I'm using SQL Server 2000 Enterprise Edition. I am an Oracle DBA and we have some tables in SQL Server 2000 that we need to write out to a flat file. I have a procedure in Oracle to do this for Oracle tables, but how would I do this in SQL Server 2000? I have 10 columns on this table and I only want 3 columns' data to be dumped to the flat file. We are on NT Server 4.0.
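Since only a column subset is needed, bcp's queryout mode takes an arbitrary SELECT; a sketch with placeholder names:

bcp "SELECT col1, col2, col3 FROM MyDb.dbo.MyTable" queryout C:\out\MyTable.txt -c -t, -S MYSERVER -T

-t, makes the output comma-delimited; drop it for the default tab delimiter.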
Hi, I would appreciate it if someone could help me with this problem. I have used a MySQL database and now I have to change my database to SQL Server. I have made a dump file from the MySQL db and I would like to insert that dump file into SQL Server. Any idea how it is possible?
I am trying to export a table with ~10 million rows to a flat file, and it is taking forever with the SQL 2005 export functionality. I have tried creating an SSIS package with a flat-file destination and the results are the same. In each case it does the operation in chunks of about 9900+ rows, and each chunk takes ~1-2 minutes, which sounds unreasonable.
I tried bcp, and it fails after a few thousand rows. I tried moving the data to SQL2000 first then to flat file from SQL2K, but the move from SQL2005->SQL2000 was going at the same rate as above.
So the bottleneck seems to be data going out of SQL 2005 no matter what the destination is. I'm wondering if there is some setting that I am missing that would make this run in a reasonable amount of time?
Does anyone know if it is possible (and if yes, how) to create a trigger that creates a folder for every inserted record? In a specific root folder, I want a subfolder for every record in the table. What does the trigger have to look like to make the folder automatically with every insert of a new record?
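A sketch of the usual approach (table, column, and root folder are made up; it assumes xp_cmdshell is enabled and the SQL Server service account is allowed to create folders):

CREATE TRIGGER trg_MakeFolder ON dbo.Records
AFTER INSERT
AS
BEGIN
    DECLARE @cmd varchar(500)
    -- note: this handles single-row inserts; a multi-row INSERT would need a cursor over "inserted"
    SELECT @cmd = 'mkdir "C:\RootFolder\' + RecordName + '"' FROM inserted
    EXEC master..xp_cmdshell @cmd
END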
I have an "insert into" statement that creates two identical rows in a table, with this statement: delete from [table] where [column] = @parameterINSERT INTO [table]([fields]) VALUES ([parameter values]) This is the code-behind that performs the insert: Dim dbConn As New SqlConnection(strConn)Dim cmd As New SqlCommand("sp_CreateUser", dbConn)cmd.CommandType = Data.CommandType.StoredProcedurecmd.Parameters.AddWithValue("@UserID", strUserID)cmd.Parameters.AddWithValue("@UserName", strUserName)cmd.Parameters.AddWithValue("@Email", strEmail)cmd.Parameters.AddWithValue("@FirstName", strFirstName)cmd.Parameters.AddWithValue("@LastName", strLastName)cmd.Parameters.AddWithValue("@Teacher", strTeacher)cmd.Parameters.AddWithValue("@GradYr", lngGradYr)Using dbConndbConn.Open()cmd.ExecuteNonQuery()dbConn.Close()cmd.Dispose()dbConn.Dispose()End Using I wonder if it inserts twice due to a postback issue. Is there a way to stop two rows from being created in the first place with the same "insert into" statement? I'd appreciate any advice.
I am upgrading a .mdb to MSSQL. The .mdb is 17MB, but the resulting MSSQL database is 72MB. I tried using both the Access Upsizing Wizard and Enterprise Manager DTS. I have done this a number of times before, but never ran into this problem. Any ideas what could be going on, and how to fix it?
I'm very new to SQL Server. Please help. I need to create a FUNCTION that creates a view, then call this function in SQL which is passed as a parameter to BCP. In Oracle, it would be something like:
create function CREATEVIEW
return number as
begin
create view SampleView as SELECT a,b,c from Mytable;
return 1;
when others then return 0; -- for exception handling
end;
create function DROPVIEW
return number as
begin
Drop view SampleView;
return 1;
when others then return 0; -- for exception handling
end;
Then my BCP will have something like:
BCP "select CREATEVIEW from dual"... QUERYOUT ..
then
BCP "select * from SampleView"... QUERYOUT ..
then drop the view again:
BCP "select DROPVIEW from dual"... QUERYOUT ..
I know there is no DUAL table in SQL SERVER. I just want to know how to code this in SQL Server.
The origin of my problem is that my SQL statement is too long to fit as a BCP parameter; hence I am creating a view, reading from it, and dropping it again. If you can provide me with a better workaround, that would be great. Thanks in advance.
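For what it's worth, SQL Server does not allow DDL inside a function at all, so the Oracle pattern cannot be ported directly; the usual equivalent is a pair of stored procedures running dynamic SQL (a sketch using the names from the example above):

CREATE PROCEDURE dbo.usp_CreateSampleView AS
    EXEC('CREATE VIEW SampleView AS SELECT a, b, c FROM Mytable')
GO
CREATE PROCEDURE dbo.usp_DropSampleView AS
    EXEC('DROP VIEW SampleView')
GO

Run the create proc (e.g. via osql/sqlcmd), then BCP "select * from SampleView" ... QUERYOUT ..., then run the drop proc. Stored procedures cannot be called from inside a SELECT, so the "select CREATEVIEW from dual" trick has no direct equivalent.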
Is it possible to pass 5 variables to a proc and have IT do the thinking and query structuring? An example of what I'm trying to do is have one proc for getting vehicles by make, model, and years. An example of what I'd like to accomplish:

CREATE PROC veh_list_vehicleInfo_byDetails
    @TypeID int, @MakeID int, @ModelID int, @begYear int, @endYear int
AS
BEGIN
    declare @SQL as nvarchar(500)
    set @SQL = 'SELECT a.ID, b.Model, c.Make, d.Name, a.Year, a.Mileage, a.Price, a.Sale, a.Certified_Pre_Owned
        FROM veh_vehicles a
        INNER JOIN veh_model b ON a.ModelID = b.ID
        INNER JOIN veh_make c ON a.MakeID = c.ID
        INNER JOIN veh_location d ON a.LocationID = d.ID'

    declare @ATTRIBUTES as nvarchar(500)
    set @ATTRIBUTES = ''    -- must be initialised: concatenating onto NULL yields NULL

    if @TypeID is not null AND @TypeID > 0
    begin
        -- CAST is the fix for the conversion error: with an int operand, + means addition, not concatenation
        set @ATTRIBUTES = @ATTRIBUTES + ' a.TypeID = ' + CAST(@TypeID AS nvarchar(10))
    end
    if @MakeID is not null AND @MakeID > 0
    begin
        if Len(@ATTRIBUTES) > 0 set @ATTRIBUTES = @ATTRIBUTES + ' AND '  -- separator between conditions
        set @ATTRIBUTES = @ATTRIBUTES + ' a.MakeID = ' + CAST(@MakeID AS nvarchar(10))
    end

    ....etc etc.......

    if Len(@ATTRIBUTES) > 0
    begin
        EXEC(@SQL + ' WHERE ' + @ATTRIBUTES)
    end
    else
    begin
        EXEC(@SQL)
    end
END

But I keep getting some errors regarding converting 'a.TypeID = ' to int. Please help! I figured this would be easier than writing stored procs for EACH situation.
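A variant that avoids the conversion problem entirely: sp_executesql keeps the values as true parameters, so nothing is concatenated and no CAST is needed (a sketch reusing the same names):

declare @SQL nvarchar(1000)
set @SQL = N'SELECT a.ID, b.Model, c.Make, d.Name, a.Year, a.Mileage, a.Price, a.Sale, a.Certified_Pre_Owned
    FROM veh_vehicles a
    INNER JOIN veh_model b ON a.ModelID = b.ID
    INNER JOIN veh_make c ON a.MakeID = c.ID
    INNER JOIN veh_location d ON a.LocationID = d.ID
    WHERE (@TypeID IS NULL OR a.TypeID = @TypeID)
      AND (@MakeID IS NULL OR a.MakeID = @MakeID)'
exec sp_executesql @SQL, N'@TypeID int, @MakeID int', @TypeID = @TypeID, @MakeID = @MakeID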
Hey out there, I just started playing with this, so bear with me... I've used the data export wizard to create a DTS package in SQL Server 2000. The package takes selected tables, creates an Access database and then dumps all the data into the tables. It works fine when I run the package in Enterprise Manager, but when I call it from ASP.NET (VB.NET) the Access database gets created and the tables are created, but it fails to dump the data! This is my code:

Public Sub executeDts()

    Dim oPkg As DTS.Package2
    oPkg = New DTS.Package2
    Dim oStep As DTS.Step
    Dim sMessage As New StringBuilder

    oPkg.LoadFromSQLServer("serverNameHere", "userNameHere", "passwordHere", _
        DTSSQLServerStorageFlags.DTSSQLStgFlag_Default, , , , "packageNameHere")

    'Execute() must run before the step results below are inspected,
    'otherwise the success/failure report is meaningless
    oPkg.Execute()

    For Each oStep In oPkg.Steps

        sMessage.Append("<p> Step [" & oStep.Name & "] ")

        If oStep.ExecutionResult = DTSStepExecResult.DTSStepExecResult_Failure Then
            sMessage.Append(" failed<br>")
        Else
            sMessage.Append(" succeeded<br>")
        End If

        sMessage.Append("Task """ & oPkg.Tasks.Item(oStep.TaskName).Description & """</p>")

    Next

    Response.Write("sMessage = " & sMessage.ToString & "<br/>")

    oPkg.UnInitialize()
    oPkg = Nothing

End Sub

Am I missing a step?! It seems very odd that the tables are created but the insert fails... Any advice anyone can offer would be great! Cheers, Jake
Hi. I've installed and configured a SQL Server 2000 box on our local network. Now that I've moved it up to the live network, I've had to change its name and IP. Everything is fine apart from the SQL Server Agent, specifically the maintenance plans and jobs. When I go to change or delete the existing plans or jobs I get the following message:

Microsoft SQL-DMO (ODBC SQL State: 42000) Error 14274: Cannot add, update or delete a job (or its steps or schedule) that originated from an MSX Server.
In typical MS style, I have actually been able to delete the maintenance plans from the "Maintenance Plans" area and create new ones. However I am unable to delete the related jobs under the SQL Server Agent - Jobs menu.
As far as I'm aware, I have never added a Master SQL Agent Server (MSX), and looking under Agent properties - Job System it does say (none).
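The workaround usually cited for error 14274 after a server rename (Microsoft KB 281642) is that each job's originating_server in msdb still holds the old machine name; pointing it at the new name lets the jobs be edited and deleted again. A sketch, worth verifying against the KB first:

UPDATE msdb.dbo.sysjobs
SET originating_server = @@SERVERNAME      -- the new server name
WHERE originating_server <> @@SERVERNAME   -- rows still carrying the old name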
Hi guys, I have made several Access-based CMSs but now I am using SQL Server. I can read the records, but my first attempts at writing are resulting in new records (with new IDs) in which all the fields are null. I am posting the data from a form to the same page, and an if/then statement catches the flag in the URL and runs the update script below. All the field names are correct.

if request.QueryString("add") <> "" then
    Dim rsUpdateEntry
    Set rsUpdateEntry = Server.CreateObject("ADODB.Recordset")
    rsUpdateEntry.Open "SELECT * from generic_country_info", oConn, 2, 3
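The posted code stops right after the Open; for reference, the write that should follow would typically look like this (field and form names are guesses). If the Request.Form names do not match the form's input names, the fields will indeed arrive as null:

rsUpdateEntry.AddNew
rsUpdateEntry("country_name") = Request.Form("country_name")  ' guessed field/form names
rsUpdateEntry("info_text") = Request.Form("info_text")
rsUpdateEntry.Update
rsUpdateEntry.Close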
We noticed SQL Server 2005 is creating Program Files\Common Files\Microsoft Shared\DW on our largest drive for each 64-bit installation. Does anyone know what this is? It appears there is no Microsoft documentation regarding this installation and if we need to keep it. It may be .NET related, but I have no idea why it is needed.
The following are the output lines from my code, which constructs some T-SQL queries on the fly. The problem query is the UPDATE in Step 3 below. The code up to the problem query works correctly and I am able to see the output tables in Query Analyzer.
STEP 1: Create 3 tables in dynamic T-SQL (showing the 3 strings from the Exec statement):
Create Table ##Test_word28July2007185548990201 (t float, e float, s float, word varchar(80))
Create Table ##OUT_Test_word28July2007185548990201 (t float, e float, s float, word varchar(80), KeywordID int, rank float)
Create Table ##STVR_FLOAT_Test_word28July2007185548990201 (var_val float)
Step 2: Retrieve a value into another global table (the table has only 1 row and 1 column):

Insert Into ##STVR_FLOAT_Test_word28July2007185548990201 (var_val)
Select IsNull(t, 0) from ##Test_word28July2007185548990201

(1 row(s) affected)
Step 3: I need the value (var_val) in ##STVR_FLOAT_Test_word28July2007185548990201:

Update ##Out_Test_word28July2007194827580759
Set t = Select var_val from ##STVR_FLOAT_Test_word28July2007194827580759
where t > Select var_val from ##STVR_FLOAT_Test_word28July2007194827580759

Server: Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'Select'.
Server: Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'Select'.
(1 row(s) affected)
Problem Definition
Part 1
The update query is trying to retrieve a value from a dynamically constructed table, ##STVR_FLOAT_Test_word28July2007185548990201 (1 column, var_val, and 1 row).
Update <Table_Name> set t = @var_val where t > @var_val

The update query is simple, except that I need to "SELECT" @var_val from the dynamic table.
The @var_val is a float value.
I have outputted the contents of the table that is holding the @var_val value. It has the correct value, and the table has only 1 row and 1 column.
What is the syntax of a query to select a single value from a temp table with only 1 row and 1 column, and use the selected value as a variable in an UPDATE statement?
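Wrapping each scalar subquery in parentheses is the missing piece; applied to the failing statement in Step 3:

Update ##OUT_Test_word28July2007194827580759
Set t = (Select var_val from ##STVR_FLOAT_Test_word28July2007194827580759)
where t > (Select var_val from ##STVR_FLOAT_Test_word28July2007194827580759)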
I have created a few reports and linked them to a URL on a web page. (sample link is : http://servername/ReportServer/Pages/ReportViewer.aspx?%2fMy_Reports%2fBacklog+Report&rs:Command=Render )
When the user clicks on the link above it renders correctly but when the user tries to export the report to any format on the list, it launches another window with the following URL : http://servername/ReportServer/Reserved.ReportViewerWebControl.axd?ExecutionID=czq4c355dmsxdy55dif1nm55&ControlID=ad74d68e-2a9c-430f-8655-dd0e6c46f831&Culture=1033&UICulture=9&ReportStack=1&OpType=Export&FileName=Backlog+Report&ContentDisposition=OnlyHtmlInline&Format=EXCEL
which then prompts the user to Open or Save the report.
How do I stop this window from opening? Or how can I make it close automatically?
Please advise.
I know if I can change the ContentDisposition somehow to AlwaysInline then this extra window will not show during the export but it keeps defaulting to OnlyHTMLInline for ContentDisposition.
I am using SQL Server for the Turkish language and I have a problem with the Turkish character "i". Whenever "i" appears in any word of a query it gives the wrong result. It is also not able to recognize the lowercase and capital "i" of Turkish.
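One way to check whether collation is the culprit is to force a Turkish collation on the comparison (table and column names here are made up). Under Turkish casing rules the upper case of 'i' is 'İ' and the upper case of 'ı' is 'I', which is exactly what trips up non-Turkish collations:

SELECT *
FROM dbo.Words
WHERE Word COLLATE Turkish_CI_AS = N'istanbul'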
I am creating tables in an Excel workbook using CREATE TABLE DDL and then writing the data using INSERT INTO statements.
There are three strange things that I've encountered, and I will be glad to get help from anyone:

1. If I use a table name containing spaces, the name is changed to one with underscore '_' symbols instead of the spaces.
2. I cannot set a trailing '$' when creating a table, but when inserting data I have to use the name with the trailing '$', otherwise an exception is thrown. Moreover, if I get the schema of the Excel file later, I get TWO tables instead of one: the first with the name without '$' and another, identical, with the trailing '$'. Needless to say, Excel visually shows only the name without '$'.
3. If I try to do the same operations using OLEDB 12.0 (Office 2007), I get an invalid file.

If anyone knows how I can overcome the above issues, please write me the code. I am coding in C#, but VB examples are as good as any other.
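For reference, the statement pattern being described, with example sheet and column names; the asymmetry between create and insert matches point 2 above, and as far as I can tell it lives in the Jet provider itself rather than in the calling code:

CREATE TABLE [My Sheet] (ID INT, ItemName VARCHAR(50))     -- stored as [My_Sheet]
INSERT INTO [My_Sheet$] (ID, ItemName) VALUES (1, 'test')  -- insert needs the trailing '$'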
So I am trying to make a simple database with one table and use a GUI to make rows. When I run the program in debug mode, it creates a new copy of my DB in the debug folder and saves only there; I cannot find a way to save back to the original DB no matter what I do, including changing the Copy to Output Directory setting.
Now the more I try to fix this, the more new DBs it creates: one in the Release folder, one in bin/Debug/bin/Debug, one in bin/Release/bin/Release.
I am going crazy scouring the entire project looking for someplace I can fix this.
All I want is to create a DB and be able to make changes to it and only it.
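One thing worth checking (a sketch, assuming SQL Server Express with a file-attached database): if the app.config connection string uses |DataDirectory|, every build points at the freshly copied .mdf in the output folder, and hard-coding the path instead pins all changes to a single file.

<connectionStrings>
  <!-- hypothetical name and path: AttachDbFilename with an absolute path bypasses |DataDirectory| -->
  <add name="MyDb"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Data\MyDatabase.mdf;Integrated Security=True;User Instance=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>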