We have about 150 SQL Servers, and we're considering the pros and cons of installing SSIS on one central SSIS server - responsible for all DTS jobs - as opposed to installing SSIS on each local SQL instance.
On the plus side so far:
1./ Central administration, alerting, change management etc
2./ Possible performance gain on the local instance not having SSIS installed?
On the negative side:
1./ Central point of failure
2./ Possibility that it would need to be clustered...
3./ Compatibility issues may mean having to make the central SSIS server 32-bit?
4./ Possible performance cost of remote SSIS?
5./ With multiple DTS packages running at different times, when would we take the server down for maintenance...?
Hello friends. I managed to design an Integration Services package, but the desired level of performance has not been achieved (i.e. it is performing slowly), so I want to know the best practices for an optimized solution. In my package I'm extracting data from an XML file and storing it in a SQL Server database, with some processing during the data flow.
I'm using:
1) Two Script Task controls - in these controls I'm opening the connection to the XML file through VB.NET code and iterating one record at a time.
2) Two OLE DB Command controls - each record fetched by the Script Task component is processed in the OLE DB Command through a stored procedure and then inserted into the database.
3) One For Loop - this loop contains the two Script Task controls and two OLE DB Command controls mentioned above, for fetching a single record and inserting it into the database.
4) One Derived Column
5) One Multicast
6) One Character Map
7) One OLE DB Source
With my current performance I'm able to insert only one record every 0.5 seconds (which is well below acceptable limits). Also, does a control that is disabled on the SSIS designer pane affect execution performance?
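For comparison, one set-based approach would be to shred the whole XML file in a single INSERT ... SELECT instead of looping one record at a time. This is only a minimal sketch - the file path, staging table, and element names are illustrative, not taken from the original package:

-- Load the whole file into an XML variable, then shred it in one statement.
DECLARE @xml xml;

SELECT @xml = CONVERT(xml, BulkColumn)
FROM OPENROWSET(BULK 'C:\Data\input.xml', SINGLE_BLOB) AS src;

INSERT INTO dbo.StagingRecords (RecordId, RecordValue)
SELECT r.value('(Id)[1]',    'int'),
       r.value('(Value)[1]', 'nvarchar(100)')
FROM @xml.nodes('/Records/Record') AS t(r);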
Hi, I have just installed SQL 2005 SP2 and am trying to get Windows SharePoint Services v3 integrated with SQL 2005 SP2 Reporting Services. In SharePoint Central Administration, I select the Reporting Services Integration page and have set up the Report Server Web Service URL and Authentication Mode. I then go to Grant database access, specify the SQL Server name, get prompted for a username and password that has access to the SQL ReportServer database, and get the following error: "The group name could not be found". Does anyone have any ideas? Thanks
Hello, I have a problem when trying to fully process an SSAS database using Integration Services "Analysis Services Processing Task" task. I have 2 of these tasks which are responsible for processing the Dimensions then the Cubes. When I run the package either via the BIDS environment or on the local server from the Integration Services engine, I will get an error after about 20 minutes stating:
"Error: Memory Error: Allocation failure. Not enough storage is available to process this command""Error: Errors in the metadata manager. An error occurred when loading the <cube name> cube from the file \?D:Program FilesMicrosoft SQL ServerMSSQL.2OLAPDataMyWarehouse<cube file>.xml"
The cube name is not specific - it can fail on any of my cubes, and any of them could appear in the error log.
If I fully process the AS database using the AS engine (log on to the local AS server, right-click the AS database and click Process), I get no errors at all; it processes and completes fine. The processing options are identical whether I run in AS or via the SSIS "Analysis Services Processing Task" task.
I've searched quite a lot online but no joy, the information I have gleaned from various sites does not directly link SSIS with SSAS processing problems.
When the AS processing starts, whether via SSAS or SSIS, the memory usage of MSMDSRV.exe increases to around 1.4-1.5 GB but never reaches 2 GB, even when the error appears.
I've done the following, with no effect:
- Ran via AS - works fine
- No specific cube it fails on
- Created a Dimension-only package - same problem
- Changed the maxmemorylimit
- Changed the connections to localhost
- Memory DOES NOT max out on the server
Server specs: Windows Server 2003 Standard + Service Pack 2, 4 GB RAM, 2 GB paging file
I have a table with about 200 million rows of data. I add a couple million rows of data each week to the table in a single load process. The table is used for reporting purposes only and there are never (not intentionally at least) any updates or deletes to the table. The data is always being added to the "end" of the table with the new AsOfDate being the main factor in the clustered index.
My question is this: Since I'm not "inserting" rows that would split pages, should I have my FILLFACTOR for the table set to 100, or am I missing something? I obviously want to save physical hard drive space, but I also don't want to slow down the import process.
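For what it's worth, with an ever-increasing AsOfDate as the leading clustered key, new rows land on new pages at the end of the table, so a fill factor of 100 avoids leaving empty space on existing pages. A minimal sketch, with hypothetical table and index names:

-- Rebuild the clustered index completely full; weekly appends won't split
-- existing pages because new AsOfDate values sort after everything else.
ALTER INDEX IX_BigTable_AsOfDate ON dbo.BigTable
REBUILD WITH (FILLFACTOR = 100);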
I would like to know people's thoughts on any special network considerations to take for mirroring and the logic behind them. Is it best to segregate mirroring traffic from other network traffic? Use a VLAN? Dedicate one NIC for mirroring and the other for general network traffic or just aggregate the two and let both types of traffic share the bandwidth?
I haven't seen much in this area from Microsoft's best practices and wanted to know what those who have implemented it have done and why. There are pros and cons for each method: Letting everything share one massive pipe with load balancing vs. trying to segregate traffic in some way so that general network connections etc. do not impact the mirroring capability.
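One option for segregating the traffic at the SQL Server level, rather than relying only on the network team, is to bind the mirroring endpoint to the IP of a dedicated NIC. A minimal sketch - the endpoint name, port, and address are illustrative:

-- Create the mirroring endpoint on a specific NIC's address so mirroring
-- traffic stays off the general-purpose interface.
CREATE ENDPOINT MirroringEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (192.168.10.5))
    FOR DATABASE_MIRRORING (ROLE = PARTNER);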
Can a publisher be mirrored? What are the implications, issues, gotchas? Transactional, Merge or Transactional w/ Updating Subscribers is what I'm considering.
Bottom line is I would like to use mirroring, but only one mirror will not suffice.
Hello, I'm developing an application which monitors network packets. The monitoring data are saved into a table. The monitoring table maintains the data for a fixed quantum of time, for example one hour. So, every minute, before or after inserting new data, I delete the time-expired data. I'm wondering whether this endless delete operation could result in some problems (index growth, etc.).
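If it helps, one common pattern is to delete the expired range in small batches against an indexed time column, so each transaction stays short and the log doesn't balloon. A minimal sketch with hypothetical table and column names:

-- Assumes an index whose leading column is CaptureTime, so the expired
-- range is found by a seek rather than a scan.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.PacketLog
    WHERE CaptureTime < DATEADD(hour, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to purge this minute
END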
I would like to know how, if at all possible, to reconstruct the following trigger so that it can handle a multi-row insert when a single INSERT command is used - because the trigger will only be called once. I'm not familiar with cursors and don't know anything about them, and I've read that they're not the best way to go anyway.
TRIGGER ON childtable
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @customkey char(16);
    DECLARE @nextchild int;
    DECLARE @parent int;
    DECLARE @date datetime;

    SET @date = getdate();

    SELECT @parent = parenttable FROM inserted;

    SELECT @nextchild = count(*) + 1
    FROM childtable
    WHERE parenttable = @parent;
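A minimal sketch of a set-based alternative that handles every row in inserted without a cursor. The remaining columns and key-generation logic are not shown in the original snippet, so the column names below (childnumber, createddate) are hypothetical placeholders:

CREATE TRIGGER tr_childtable_insert ON childtable
INSTEAD OF INSERT
AS
BEGIN
    -- Number the incoming rows per parent, then offset by how many children
    -- that parent already has, so a multi-row INSERT is handled in one pass.
    INSERT INTO childtable (parenttable, childnumber, createddate)
    SELECT i.parenttable,
           ISNULL(c.existingcount, 0)
             + ROW_NUMBER() OVER (PARTITION BY i.parenttable ORDER BY (SELECT NULL)),
           GETDATE()
    FROM inserted AS i
    LEFT JOIN (SELECT parenttable, COUNT(*) AS existingcount
               FROM childtable
               GROUP BY parenttable) AS c
           ON c.parenttable = i.parenttable;
END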
Can someone point me to a resource for table design considerations for merge replication? I have an ASP.NET/SQL2K5 app that I need to run on disconnected machines, then allow for data sync through merge replication. I assume that the first step is getting my tables indexed in a replication-friendly manner?
Many Thanks to anyone who can point me in the right direction!
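One design point that always comes up with merge replication: every published table needs a uniqueidentifier column marked ROWGUIDCOL (the snapshot agent adds one if it's missing, which can surprise the application). A minimal sketch of adding it yourself, with a hypothetical table name:

-- Add the rowguid column merge replication requires, using a sequential
-- GUID default to reduce index fragmentation.
ALTER TABLE dbo.Orders
ADD rowguid uniqueidentifier ROWGUIDCOL NOT NULL
    CONSTRAINT DF_Orders_rowguid DEFAULT NEWSEQUENTIALID();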
I may be overthinking this, but I want to make sure this is right. If you have a processor license of SQL Server Standard running both the Reporting Services databases and the IIS interface, isn't it true that the underlying licenses of other servers containing your data are irrelevant in the context of serving the reports over the web? Example: Server 1 has SSRS as described above, with a processor license of Standard. Server 2 has a user license of SQL Enterprise and serves data to a couple of reports on Server 1. This does not violate a license, correct? Doesn't Server 1 just take one of the CALs from Server 2?
in a prior "legacy" life we couldn't imagine 24x7 implementations because it was important to 1) reorganize databases periodically to remove fragmentation that adversely affected performance and 2) back up databases just in case.
In a 24x7 SQL Server 2005 implementation, high level only, how are these and other maintenance related things accomplished with confidence?
I don't think SQL Server cleans up its own page splits unsolicited. Are DBAs totally reliant on logs in full-recovery installations where the db must be up 24x7? What if the devices those logs sit on fail? What if the logs become too large? Is it likely that if you want 24x7 you're looking at Enterprise Edition only?
I'm totally aware of and confident in the sliding window partitioning thing but it seems to me there must be more out there in terms of periodic, more frequent maintenance activity.
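By way of example, the kind of periodic index maintenance that can run while the database stays available looks something like this (index and table names are illustrative; ONLINE = ON does require Enterprise Edition in 2005):

-- Enterprise Edition: rebuild a fragmented index without taking the table offline.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD WITH (ONLINE = ON);

-- Any edition: reorganize in place - always an online operation, just slower
-- at removing heavy fragmentation.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REORGANIZE;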
Can someone point me to some good articles, or perhaps directly supply some words of wisdom, with regard to wise utilization of variables within a T-SQL script from the standpoint of conserving memory and reducing execution cost?
For example:
(1) Is it better to use varchars, nvarchars, etc. defined with minimal lengths to support the needs of the script or is it just as efficient to declare all with a length of say 4,000?
(2) I've seen behavior that leads me to believe that when passing a variable as a parameter in a nested procedure call, if the declared types of the parameter and the variable being passed in don't match (i.e. one is numeric(38,10) and the other is int), then implicit type conversions hurt performance. Is this true and how broadly does it apply?
(3) Does the number of variables declared in a script materially impact performance and / or resource utilization?
(4) Is it more efficient to have a series of variable value assignments in a single SELECT statement versus a series of SET statements? Should I always prefer one to the other? Only within a looping construct? (See the sketch after this list.)
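A minimal sketch of the two assignment styles from item (4), with illustrative variable names - a single SELECT can assign several variables in one statement, while SET assigns one at a time:

DECLARE @name  varchar(50),
        @total numeric(38,10);

-- One statement, multiple assignments:
SELECT @name  = 'example',
       @total = 0;

-- The same assignments as separate SET statements:
SET @name  = 'example';
SET @total = 0;

-- Re item (2): passing @total into a parameter declared as int forces an
-- implicit conversion; declaring matching types avoids that cost.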
Anyone know of a good "free" way to back up web files and a SQL Server 2005 Express database? I was able to use the Windows Server 2003 Backup utility to back up the folder where the databases were stored, as well as the web files, with no errors. But I have heard a lot of discussion that you can't just simply back up the SQL Server data files? I'm wondering how sound the backup I've created is... Any suggestions?
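For what it's worth, a plain file copy of the .mdf/.ldf files is generally only reliable if the service is stopped or the database is detached, so a native backup is usually the safer route; Express supports BACKUP DATABASE, it just has no Agent, so the script has to be scheduled externally (e.g. with Windows Task Scheduler). A minimal sketch with an illustrative database name and path:

-- Full backup to disk; INIT overwrites the previous file, CHECKSUM verifies pages.
BACKUP DATABASE MyWebDb
TO DISK = 'D:\Backups\MyWebDb.bak'
WITH INIT, CHECKSUM;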
I need to copy all the data from all the tables in a database to a copy of this database on another server. What feature of SSIS should I take advantage of to accomplish this?
We have an SLA for 8am; most times the data warehousing jobs complete at 8:05am. Adding an additional process/set of tasks to this package would obviously make matters worse, so I'm trying to update/copy/replicate the data in the fastest manner. Typically we're talking 2 marts (10-20 GB) with 2 large tables (5-10 million records) and 20 marts (0.5-5 GB) with many more smaller tables (~40 tables with record counts ranging from 1 to a million).
Additionally, please indicate whether the design/feature you suggest can handle pushing schema changes and additions to the target server - i.e. schema changes or new tables/views added to the source database.
My only idea so far... is using the Import Wizard (in Management Studio) to create an SSIS package (to copy all the tables from one server to another) and saving it to the server, then executing this package after the job is complete. However this would not work if the schema of a table changed, or if a table is added. Moreover I don't think I can edit this package in Visual Studio.
Is there a way to give customers access to SSIS? They need to be able to create their own SSIS packages. Of course we have more than one customer, so it would be nice to have granular security in place where customer abc doesn't get to see customer xyz's packages - only their own.
I have created an Integration Services project (attached is a screenshot) that works against a flat file (.DAT extension); it does some manipulation of the data and then loads it into the table. Everything works fine. Now I want to get the name of my flat file source file (which is a .DAT file) and insert it into the table. I am running the Integration Services package against different .DAT files (only one file at a time) which are located in different locations... so what I want is that, whenever I run the package, it does the usual processing and then, while loading the data into the destination table, it also loads the name of the file into the destination table (let's call it a field "FileName" of nvarchar type in the table "Comphistory").
Problem: When you have an SSIS package that contains a connection from a Data Source, this connection is not updated when the Data Source changes based on a configuration change.
Situation: An SSIS solution contains 3 configurations: Development, Test, and Production. You can create those configurations in the solution's Configuration Manager.
The SSIS project contains one Data Source. It doesn't really matter what type, but I'll take SQL Server. The database server in development is SQL_DEV, in test SQL_TEST, and in production SQL_PROD. Initially they are the same for all configurations. You can specify those values by changing the active configuration and then editing the Data Source.
In the SSIS package (DTSX), you can create a connection manager based on a Data source. If you change the Data source, the connection manager is also changed. If you change the Data source by changing the active configuration, the connection manager is not being updated.
If you think this isn't a big issue, think big. We have 4 configurations, 10 shared Data Sources and 25 DTSX packages. That would give a maximum of 1000 settings (4 x 10 x 25). Using this method it can be reduced to 40 (4 x 10). Of course this is theoretical, but it is very common to have the destination data source re-used in all packages, which would still be 100 settings (4 x 25).
Steps to reproduce:
- Create a new SSIS project.
- In the Solution Explorer, create a new Data Source named TestSource.
- In the Connection Managers window of Package.dtsx, create a new connection from a Data Source.
- Make some changes to TestSource.ds under the Data Sources, for example change the server or the database.
- Verify that those changes are also in the package.
- In the Solution Explorer, right-click the solution and select Configuration Manager.
- Under the active solution configuration, create a new configuration named Test.
- Set "Copy settings from" to Development.
- Verify that "Create new project configuration" is checked.
- Click OK and close.
- Notice that the active configuration is now Test.
- Make some changes to TestSource.ds, like a different server.
- Verify that those changes are also in the package.
- Make the Development configuration active.
- Notice that TestSource.ds now contains the original settings.
- You will notice that the connection manager still contains the "Test" settings and not the Development settings.
- If you create a deployment utility it will still contain the wrong values.
Can someone explain to me why I am getting this kind of error, even though I am able to integrate all the data successfully to the next destination?
I am trying to get the Prescription table from Access into a SQL Server 2005 database.
Ronald
SSIS package "Prescription.dtsx" starting.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning.
Error: 0xC0202009 at Data Flow Task, SQL Server Destination [521]: An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 58, column 1. The destination column (PatientId) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 27, column 12. The destination column (ServiceId) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 26, column 12. The destination column (ServiceId) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 25, column 7. The destination column (AllergyCode) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 24, column 7. The destination column (AllergyCode) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 23, column 7. The destination column (AllergyCode) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 22, column 7. The destination column (AllergyCode) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 21, column 7. The destination column (AllergyCode) is defined as NOT NULL.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The bulk load failed. Unexpected NULL value in data file row 20, column 7. The destination column (AllergyCode) is defined as NOT NULL.".
Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning.
I'm trying to use the new Integration Services and have found all kinds of help for it, but where is it? I can't find it. Is it in SQL Server Management Studio?
I am trying to find out how to read many XML files within one data flow task. I have many, many files that I would like to process with the same data flow, so I need to know how to read them in and what task I should use under SQL 2005 DTS/Integration Services. Thanks for your help! JON
I have to do a mining project and I intend to use SSIS.
I built a clustering plugin last year for Analysis Services and I also want to use it.
Let me try to explain the architecture of the process:
1) Receive data (read data from the database - these data are texts, actually)
2) Pre-process the data (transform the texts into a sparse matrix) using a new plugin
3) Call my clustering plugin and assign it to read the table created on the previous step
4) Call my KNN plugin to classify other pre-processed texts using the clusters found on the previous step as classes.
5) Show results.
Alright... it's all running as a workflow in Integration Services.
Here are my doubts:
A) How do I view and use my plugin made for Analysis Services in Integration Services? (Is it possible, or will I have to create new plugins from scratch just to run on Integration Services?)
B) Assuming the previous step is possible, how do I modify my plugins to define inputs and outputs for the correct communication between each plugin? I think this is the most important question. Is it simple to do? Are there any documented examples?
When I try to connect to SQL 2005 Integration Services from Object Explorer I get connected (in the sense that Object Explorer shows Running Packages, Stored Packages, etc.), but when I try to expand any of these objects I get the following error:
Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum)
Additional information:
The SQL Server specified in SSIS service configuration is not present or is not available. This might occur when there is no default instance of SQL Server on the computer. For more information, see "Configuring the Integration Services Service" in SQL Server 2005 Books Online.
Login timeout expired
An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
Named Pipes Provider: Could not open a connection to SQL Server [2]. (MsDtsSrvr)
We developed a data warehouse using DTS in SQL Server 2000. Now we have started using 2005, and since it supports DTS there is no issue with running our data warehouse.
But every six months there are new requirements and we change and add to the existing DTS packages. Now that DTS development is not supported in 2005, can we use Integration Services for the new work and embed it with the existing DTS, or do I need to redevelop the data warehouse using SSIS?
We use Windows Authentication to connect to SQL Server; are there any special permissions required to connect to Integration Services in SSMS?
Whenever I try to browse the servers available with Integration Services (from Object Browser), none of the servers gets listed. If I directly give the server name and try to connect to Integration Services I get the following error. But I'm able to connect to the database engine.
TITLE: Connect to Server ------------------------------
I don't need to have the SSIS service installed on my SQL Server to run SSIS packages as jobs. So I've now deployed my packages to our live clustered SQL Server, and I even have a package that runs up to a point. Basically the package imports some data then reprocesses a dimension on the cube; it imports the data OK, then fails to process the cube with the following message:
Description: The task "Process Cube Dimension DimFiscalPeriod" cannot run on this edition of Integration Services. It requires a higher level edition.
Now, the SQL Server is Enterprise Edition, so do I really need to install SSIS on this server or not?