A new developer tried to import a data file using the Import/Export Wizard. Unfortunately, the file had several format errors and the process aborted after reading one bad line. (I imported it using a BIDS package with error handling; the file had 454 bad rows.)
Are there any options in the I/E Wizard to log and skip errors but continue processing? After all, the I/E Wizard is designed for quick-and-dirty use, and if it can't "handle" erroneous data "gracefully", it is not a very practical tool.
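If the wizard itself can't be coaxed into skipping bad rows, one workaround on the server side is BULK INSERT, which can tolerate and log a set number of errors. A minimal sketch, where the file path, target table, and delimiters are all assumptions for illustration:

-- Load a delimited file, tolerating up to 1000 bad rows.
-- Table name, path, and delimiters are hypothetical.
BULK INSERT dbo.ImportTarget
FROM 'C:\data\import.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    MAXERRORS = 1000,                    -- keep going past bad rows
    ERRORFILE = 'C:\data\import_errors'  -- rejected rows are written here
)
GO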
I'm using SQL Server 2012. I have two databases: one is CopyDatabase and the other is PasteDatabase.
CopyDatabase has two tables with a relationship between them, and PasteDatabase is empty. I want to export data from CopyDatabase to PasteDatabase. When I do that, it works, but I came to realize that it destroys the relationship between the two tables.
The same happens with Import into PasteDatabase. How can I maintain the relationship between the tables while moving them?
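The Export Data wizard copies rows but not constraints, so the usual approach is to create the schema (including keys) on the destination first, e.g. via SSMS's Tasks > Generate Scripts, and then let the wizard move only the data. If the tables are already copied, the foreign key can be re-created afterwards; a minimal sketch with hypothetical table and column names:

USE PasteDatabase
GO
-- Hypothetical parent/child tables and key column; substitute your own.
ALTER TABLE dbo.OrderDetail
ADD CONSTRAINT FK_OrderDetail_OrderHeader
    FOREIGN KEY (OrderID) REFERENCES dbo.OrderHeader (OrderID)
GO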
Hi guys! I'm trying to find a way to either import/export or synchronise a SQL 2005 database between a local and a remote server. Is there any way to do this programmatically or otherwise? It doesn't have to be the entire database (just aspnet_users and related tables will do). Thanks for the help all!
Yes, I created and saved a package, and found it in the MSDB database (after connecting to Integration Services), but I can't figure out how to open it to see the Control Flow and Data Flow tabs. (BIDS can open a File or Analysis Services project, so I don't see how to access the package from there.)
I have become frustrated and I am not finding the answers I expect.
Here's the gist: we support both Oracle and SQL Server for our product, and we would like to migrate our clients who are willing/requesting to go from Oracle to SQL Server. Seems easy enough.
So, I create a database in SQL 2005, right-click and select "Import Data". The source is Microsoft OLE DB Provider for Oracle, and I set up my connection. So far so good.
I create my destination with SQL Native Client to the database that I plan on importing into. Still good.
Next, I select "Copy data from one or more tables or views". I move on to the next screen and select all of the objects from a schema. These are tables that relate only to our application; in other words, nothing Oracle system-wise.
When I get to the end, it progresses to about 20% and then throws this error about 300 or so times:
Could not connect source component. Warning 0x80202066: Source - AM_ALERTS [1]: Cannot retrieve the column code page info from the OLE DB provider. If the component supports the "DefaultCodePage" property, the code page from that property will be used. Change the value of the property if the current string code page values are incorrect. If the component does not support the property, the code page from the component's locale ID will be used.
So I'm thinking, "Alright, we can search on this error and I'm sure there's an easy fix." I do some checking and indeed find out that there is a property called "AlwaysUseDefaultCodePage" in the OLE DB Source properties. Great! I go back and look at the connection in the Import wizard and... there's nothing with that property!
Back to the drawing board. I create a new SSIS package and quickly figure out that AlwaysUseDefaultCodePage is in there. I can transfer information from the Oracle source table to the SQL Server 2005 destination table, but it appears to be a one-to-one thing. Programming this, if I get it to work at all, will take me about 150 hours or so.
This makes perfect sense if all you are doing is copying a few columns or maybe one or two objects, but I am talking about 600+ objects with upwards of 2 million rows of data in each!
This raises two questions: 1. If the Import Data Wizard cannot handle this operation on the fly, why isn't the AlwaysUseDefaultCodePage property exposed as part of the connection? 2. How do I create an SSIS package that will copy all of the data from Oracle to SQL Server? The destination tables have already been created with the same schema and object names as the source. I don't want to create a Data Flow Task 600 times.
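For question 2, short of hand-building 600 data flows, one alternative is to drive the copy from T-SQL over a linked server to Oracle, generating one INSERT ... SELECT per table from the catalog. A rough sketch, assuming a hypothetical linked server named ORA, a hypothetical Oracle schema MYSCHEMA, and identical table names and column order on both sides:

-- Hypothetical: linked server ORA, Oracle schema MYSCHEMA.
-- Generates and runs one INSERT ... SELECT per destination table.
DECLARE @tbl sysname, @sql nvarchar(4000)

DECLARE tbl_cur CURSOR FOR
    SELECT name FROM sys.tables   -- destination tables match source names

OPEN tbl_cur
FETCH NEXT FROM tbl_cur INTO @tbl
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'INSERT INTO dbo.' + QUOTENAME(@tbl)
             + N' SELECT * FROM ORA..MYSCHEMA.' + QUOTENAME(@tbl)
    EXEC (@sql)
    FETCH NEXT FROM tbl_cur INTO @tbl
END
CLOSE tbl_cur
DEALLOCATE tbl_cur

Generated SQL like this won't match SSIS's bulk-load performance for millions of rows per table, but it avoids building 600 data flows by hand.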
Service Broker will let you create two services with the same name (one with a contract and another without):
CREATE SERVICE [Order Msg Recieve] AUTHORIZATION [dbo] ON QUEUE [dbo].[Order Return Msg Queue]
CREATE SERVICE [Order Msg Receive] AUTHORIZATION [dbo] ON QUEUE [ODS].[Order Return Msg Queue] ([OrderSubmission])
When you delete the service....
drop service [Order Msg Recieve]
It will only drop the first one. In BOL there is no syntax for telling it to drop the second one; however, you can drop it from SQL Server Management Studio.
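One thing worth checking is what the service catalog actually contains; note that the two CREATE SERVICE statements above spell the name differently ('Recieve' vs 'Receive'), which may be why DROP SERVICE only ever hits the first one. A quick way to list the services with their queues and then drop by the exact name:

-- List Service Broker services and their queues to see the exact names.
SELECT s.name AS service_name, q.name AS queue_name
FROM sys.services AS s
JOIN sys.service_queues AS q ON s.service_queue_id = q.object_id

-- Then drop by the exact (correctly spelled) name:
DROP SERVICE [Order Msg Receive]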
When upgrading a database from MSDE to Visual Studio 2005 Express, I've noticed that the system tables in the MSDB database don't get upgraded. For example, the SYSSCHEDULES table doesn't get added. Is this by design or a bug?
Try this in SQL Server 2005:

SELECT COALESCE(a1, CHAR(254)) AS c1
FROM (SELECT 'Z' AS a1 UNION SELECT 'Ya' AS a1 UNION SELECT 'Y' AS a1 UNION SELECT 'W' AS a1) AS b1
GROUP BY a1 WITH ROLLUP
ORDER BY c1

SELECT COALESCE(a1, CHAR(255)) AS c1
FROM (SELECT 'Z' AS a1 UNION SELECT 'Ya' AS a1 UNION SELECT 'Y' AS a1 UNION SELECT 'W' AS a1) AS b1
GROUP BY a1 WITH ROLLUP
ORDER BY c1
The only difference is that the first one uses 254 and the second one uses 255. The first sorts like this: W, Y, Ya, Z, þ. The second one sorts like this: W, Y, ÿ, Ya, Z. Is this expected behavior?
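If it helps frame the question: under the default Windows collations, CHAR(255) is ÿ (y with diaeresis) and carries the primary sort weight of 'y', so it lands between Y and Ya, while CHAR(254) is þ (thorn) and sorts after Z, which would explain the output above. With a binary collation both characters sort purely by code point; a quick check against the same sample data:

-- With a binary collation, both CHAR(254) and CHAR(255)
-- sort strictly by code point and land after Z.
SELECT COALESCE(a1, CHAR(255)) AS c1
FROM (SELECT 'Z' AS a1 UNION SELECT 'Ya' AS a1 UNION SELECT 'Y' AS a1 UNION SELECT 'W' AS a1) AS b1
GROUP BY a1 WITH ROLLUP
ORDER BY COALESCE(a1, CHAR(255)) COLLATE Latin1_General_BIN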
Has anyone noticed that if you create a DTS package and try to change the connection properties (i.e., change the DSN or redirect the DTS package to a different server or database), as soon as you click OK it does not save the new password and hence no longer works? In my case, to move a database from the development to the production server I would have to recreate all the DTS packages. Is there any way around it?
CREATE TABLE ItemInformation([Description] varchar(80))
GO

INSERT INTO ItemInformation([Description])
SELECT 'CHOCOLATE CHIP‚' UNION ALL
SELECT '‚COOKIES‚' UNION ALL
SELECT '‚CROISSANTS *PLAIN*‚' UNION ALL
SELECT '‚DONUTS‚' UNION ALL
SELECT '‚DONUTS *DOZEN*‚' UNION ALL
SELECT '‚MUFFINS‚' UNION ALL
SELECT '‚BAGELS‚' UNION ALL
SELECT '‚ROLLS‚' UNION ALL
SELECT '‚CUPCAKES‚' UNION ALL
SELECT '‚CRISPIES‚' UNION ALL
SELECT '‚DANISH/SWEET ROLLS‚' UNION ALL
SELECT '‚FUDGE BROWNIES‚' UNION ALL
SELECT '‚PUFF PASTRIES/ECCLES‚' UNION ALL
SELECT '‚STICKY BUNS‚' UNION ALL
SELECT '‚TURNOVERS‚' UNION ALL
SELECT '‚BLACK & WHITE COOKIES‚' UNION ALL
SELECT '‚LINZER TARTS‚' UNION ALL
SELECT '‚SCONES/BISCUITS‚' UNION ALL
SELECT '‚SCUFFINS‚' UNION ALL
SELECT '‚SINFULL BITS‚'
GO

SELECT * FROM ItemInformation
GO

UPDATE ItemInformation SET [Description] = REPLACE([Description],',','')
GO

SELECT [Description], LEN([Description]) FROM ItemInformation
GO

SELECT REPLACE([Description],',','') FROM ItemInformation
SELECT REPLACE([Description],'C','') FROM ItemInformation
SELECT CHARINDEX(',',[Description]) FROM ItemInformation
GO

INSERT INTO ItemInformation([Description])
SELECT 'CHOCOLATE, CHIP‚' UNION ALL
SELECT 'CHOCOLATE, CHIP‚' UNION ALL
SELECT ',CHOCOLATE, CHIP‚' UNION ALL
SELECT ',CHOCOLATE, CHIP‚ ' UNION ALL
SELECT ',CHOCOLATE, CHIP‚ A' UNION ALL
SELECT ',CHOCOLATE, CHIP‚ , '
GO

SELECT REPLACE([Description],',','') FROM ItemInformation
GO
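A note on why REPLACE(',','') and CHARINDEX(',',...) leave the original twenty rows untouched: the character wrapping most of these descriptions is '‚' (U+201A, a low-9 quotation mark), not the ASCII comma ',' being searched for. One way to verify which character is actually stored:

-- Show the code point of the trailing character of each row;
-- a real comma is 44, while '‚' (U+201A) shows up as 8218.
SELECT [Description],
       UNICODE(RIGHT([Description], 1)) AS last_char_code
FROM ItemInformation
GO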
Currently, our Reporting Services site is set up so that all domain users can access it. We are starting to use Report Builder now. I have enabled the My Reports folder feature, which creates a user folder for anyone who logs onto the site. Is there a way to create the user folders only for certain people?
I just installed SQL Express with Reporting Services on my own laptop.
However, when I go to http://localhost/Reports/, I can see the reports I uploaded, but the subscription links are not available. In Properties, I cannot see "Execution" either.
Can someone help me get the Subscriptions feature enabled again?
We desperately need RetainSameConnection set to True on our ADO.NET - ODBC connection manager. Unfortunately, RetainSameConnection always defaults back to False when you open the package.
* Is RetainSameConnection supposed to work for the ADO.NET - ODBC combination?
* Is it a bug that it defaults back to false for the ADO.NET - ODBC combination?
Hi! I am having a terrible time with the report feature. First of all, my queries through it do not match my queries through SQL Server 2005 Express Management Studio. When I enter my fields through the auto-query function, it moves them. When I try to put them where I think they belong (by editing the SQL code), my queries get all messed up.
Is there a way to import my SQL query and run it through the report feature?
I am using Windows XP Pro, Microsoft Visual Studio 2005 version 8.0.50727.42, and Microsoft SQL Server Reporting Services Designers version 9.00.2047.00.
There's a feature in Oracle that allows you to modify tables, columns, values, and data from its Enterprise Console, the same way that you can in SQL Server. In Oracle, however, there's a button called 'Show SQL' that allows you to see and copy/paste the resulting SQL for the changes made via the console.
I would imagine that SQL Server has a similar option. The reason I ask is that I would like to learn more fully how to do this through Query Analyzer and get more familiar with the SQL involved, and I would be able to do that if I could see the resulting SQL from Enterprise Manager.
Hope this makes sense.
I did find something in SQL Server called 'Generate SQL', but this doesn't update automatically as you make changes.
I installed the full-text feature on a previously installed SQL Server. It seemed to install and did not give me an error, but when I run EXEC sp_fulltext_database 'enable' it gives me a "no full-text search feature installed" message. Any idea?
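One quick sanity check is to ask the server itself whether it thinks full-text is installed; if this returns 0, the engine genuinely doesn't see the component, regardless of what Setup reported:

-- Returns 1 if the full-text component is installed, 0 if not.
SELECT FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') AS fulltext_installed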
I need some help... I'm trying to execute a stored procedure and I'm getting this message:
Run-Time Error '-2147217887 (80040e21)':
[Microsoft][ODBC SQL Server Driver]Optional feature not implemented
Here is the code:

Public Function D2L(sconnect As Variant, dDate As Variant) As Variant
    Dim rsDate As ADODB.Recordset
    Dim cmdDate As ADODB.Command
    Dim prmDate As ADODB.Parameter

    Set cmdDate = New ADODB.Command
    Set ADOConn = New ADODB.Connection
    ADOConn.Open sconnect

    Set cmdDate.ActiveConnection = ADOConn
    cmdDate.CommandText = "dbo.UP_CVRT_DATE_TO_LONG"
    cmdDate.CommandType = adCmdStoredProc

    Set prmDate = New ADODB.Parameter
    prmDate.Type = adDate
    'prmDate.Size = 32
    prmDate.Direction = adParamInput
    prmDate.Value = dDate
    cmdDate.Parameters.Append prmDate
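To narrow down whether the failure is in the procedure or in the ADO parameter binding, it may help to call the procedure directly from Query Analyzer, bypassing ADO entirely; a minimal test, assuming the procedure takes a single datetime parameter:

-- Direct call with a sample date; the value is just an example.
EXEC dbo.UP_CVRT_DATE_TO_LONG '2007-06-15'

If the direct call succeeds, the problem is in the client binding. For what it's worth, this ODBC error is often reported when a parameter's Type is adDate; adDBTimeStamp is the value usually suggested for datetime parameters, though whether that applies here is an assumption.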
Microsoft Internet Information Services (IIS) is either not installed or is disabled. IIS is required by some SQL Server features. Without IIS, some SQL Server features will not be available for installation. To install all SQL Server features, install IIS from Add or Remove Programs in Control Panel or enable the IIS service through the Control Panel if it is already installed, and then run SQL Server Setup again. For a list of features that depend on IIS, see Features Supported by Editions of SQL Server in Books Online.
How do I get hold of Internet Information Services? I installed it on Vista Premium.
I have a package which loads fact data from Stage into the Warehouse database. This package normally handles early-arriving facts: I use a lookup to check which dimension members exist, and where they don't, I populate the dimension and use the surrogate key to load the facts. This works fine.
I had a request to load 7 years' worth of historical data. Instead of re-writing the package, I took the package which handles early-arriving facts and deleted the section which handles them. I knew all the dimensions already existed, and I didn't want to hinder performance when loading millions of rows. During testing I found something very interesting.
If you have configured the error path in the lookup component and removed the error path later, the package will NOT fail (won't produce an error) even if the lookup can't find matching values.
Correct Behaviour Example 1: [1] Stage fact table has 2 records, with product code 1 and 2. [2] Warehouse Product table has only product code 1. [3] Source - Lookup - Destination in the data flow task. Error port on lookup is not configured. [4] From source we read 2 records, and the package will fail at lookup as it can't find Product Code 2.
Correct Behaviour Example 2: [1] Stage fact table has 2 records, with product code 1 and 2. [2] Warehouse Product table has only product code 1. [3] Source - Lookup - Destination in the data flow task. Error port on lookup is configured to go to RowCount. [4] From source we read 2 records, and the package will run successfully. It will put one record into warehouse table and send the invalid record into RowCount.
Incorrect Behaviour Example 3: [1] Stage fact table has 2 records, with product code 1 and 2. [2] Warehouse Product table has only product code 1. [3] Source - Lookup - Destination in the data flow task. Delete the configured error port from lookup. [4] From source we read 2 records, and the package will run successfully. It will put one record into warehouse table and discard the other.
My understanding is that if the error port is not configured (i.e., not set up as in Example 2), it should fail as shown in Example 1.
Am I missing a point, or is this supposed to be correct behaviour, or is it a bug?
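For anyone who wants to try this, a minimal version of the stage and warehouse tables described in the examples might look like the following (table and column names are hypothetical):

-- Stage fact table with product codes 1 and 2.
CREATE TABLE StageFact (ProductCode int, Amount money)
INSERT INTO StageFact VALUES (1, 10.00)
INSERT INTO StageFact VALUES (2, 20.00)

-- Warehouse dimension containing only product code 1,
-- so a lookup on ProductCode 2 finds no match.
CREATE TABLE WarehouseProduct (ProductKey int IDENTITY(1,1), ProductCode int)
INSERT INTO WarehouseProduct (ProductCode) VALUES (1)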
I tried SQL mirroring in Beta 1; then it was gone until SP1.
Now I cannot set up mirroring. It would be fine if it were merely hard to set up, but it seems to be full of bugs! Mirroring has to be stable, since I am trying to mirror a production DB; what a disaster if something goes wrong.
I am trying two servers, both on build 9.0.3042. First I tried to set it up from my home machine, connecting to my network over VPN. After I configured security, I see two connection strings:
and it doesn't show that error anymore. I am not sure why the connection string should be in the latter format, but in any case, how come SMO cannot get it right?
Then I get another error: SQL Server doesn't exist or access denied. I searched the Internet, and it seems that error could mean almost anything, including that the mirror DB is not in restoring mode.
But I did set the mirror DB in restoring mode, and both sql5 and sql8 are in the same domain, physically close.
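For reference, once the endpoints exist and the mirror copy has been restored WITH NORECOVERY, starting the session is just two ALTER DATABASE statements, run mirror-side first. A sketch, assuming hypothetical fully-qualified names for sql5 (principal) and sql8 (mirror), a hypothetical database MyDB, and the default port 5022:

-- On the mirror server (sql8), point at the principal:
ALTER DATABASE MyDB SET PARTNER = 'TCP://sql5.mydomain.com:5022'

-- Then on the principal server (sql5), point at the mirror:
ALTER DATABASE MyDB SET PARTNER = 'TCP://sql8.mydomain.com:5022'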
I am trying to get a setup going that includes both transactions and checkpoints, to make sure that when something goes wrong (a) I don't get data corruption (hence the transactions) and (b) I don't have to completely restart my 2-hour run (hence the checkpoints). However, I ran into something where I cannot tell whether it is intended behaviour or simply a bug.
Here's the deal: I have an SSIS package in which I enable checkpoints (CheckpointUsage: IfExists and SaveCheckpoints: True). I have two data flows which follow each other (the first data flow prepares data for the second data flow to edit). Because I want to make sure that my data is secure, I put a separate transaction on each of the data flows.
And here my problem arises. If I run my package now and the second data flow breaks, my checkpoint sends me back to the first data flow, and my initial insert is executed again, which isn't meant to happen (I enabled checkpoints precisely to prevent re-running items). Somehow my checkpoint does not register the fact that the first data flow has already been executed, and it executes it again upon rerun.
However: if I put a random task between the two transacted data flows (for example, an empty Script Task), it works as intended, just as long as this inserted item doesn't have a transaction; if it does, the problem comes back. If I then execute the package, the checkpoint shows that the first data flow has already been executed, so it does not execute it again and starts at the second data flow upon re-execution.
I can work around it (with the empty Script Task), but I am still wondering why this is happening. I am very interested to hear whether this is really a bug or intended behaviour (and if it is intended, why?).
I have two input columns (both DT_I4) in the column collection of an Aggregate transform. I am doing a Group By on one column and a Count on the other.
To my surprise, the output column's data type is changed by the Count operation (to DT_UI8), and I have to add an extra Data Conversion transform to get my DT_I4 data type back.
I am currently using the SSIS logging feature in my SSIS package. Currently I have defined a destination log file, and each time the package is executed the log file gets appended with that day's log.
I'm trying to figure out how best to keep the log file name static (it gets emailed out, and my email client looks for a particular log file name) yet include only today's log information, appending the rest of the log information to a history log file or something like that.
Has anyone tried doing something similar, or have any ideas on how best this can be accomplished?
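One way to get this effect is to log to the SQL Server log provider instead of (or in addition to) the flat file, and then regenerate the static-named file from the log table on each run; the table keeps the full history while the emailed file holds only today's rows. A sketch of the query side, assuming the default SQL Server 2005 log table:

-- SSIS's SQL Server log provider writes to sysdtslog90 (SQL 2005).
-- Pull only today's entries; bcp or a simple export can then write
-- the result to the fixed file name that gets emailed.
SELECT starttime, source, event, message
FROM msdb.dbo.sysdtslog90
WHERE starttime >= CONVERT(char(8), GETDATE(), 112)
ORDER BY starttime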
Good day all. I'm looking for suggestions on how to handle calculating scores for search results. Primarily: would it be best to calculate the score on the SQL Server side, or in the application's logic after the results have been retrieved? I already have an idea of the calculations I want to do, which will be pretty simple: just a basic point system for containing all the queried terms, plus additional points for the number of times those words appear on the page. Feedback or links to articles on this would be appreciated. I'm leaning towards doing it on the SQL side myself, but my SQL skills are not as polished as I would like for jumping into that, so examples would be good.
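Since examples were requested, here is a minimal T-SQL sketch of that kind of point system, assuming a hypothetical Pages table with a PageID column and a varchar Content column, and two search terms; occurrences are counted with the LEN/REPLACE trick:

-- Hypothetical table and terms: 10 points per term present,
-- plus 1 point per occurrence of each term.
DECLARE @t1 varchar(50), @t2 varchar(50)
SET @t1 = 'chocolate'
SET @t2 = 'cookies'

SELECT PageID,
       (CASE WHEN Content LIKE '%' + @t1 + '%' THEN 10 ELSE 0 END)
     + (CASE WHEN Content LIKE '%' + @t2 + '%' THEN 10 ELSE 0 END)
     + (LEN(Content) - LEN(REPLACE(Content, @t1, ''))) / LEN(@t1)
     + (LEN(Content) - LEN(REPLACE(Content, @t2, ''))) / LEN(@t2) AS Score
FROM Pages
ORDER BY Score DESC

Whether this belongs in SQL or in the application mostly comes down to how much data would otherwise cross the wire; scoring in SQL lets the server sort and TOP the results before they leave the database.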
I am looking for a solution to allow users to generate data by selecting tables-->columns-->where clauses on the fly.
I am thinking of maintaining relationships, joins, etc. in some configuration tables. Based on the fields selected by the user, I can look up these conditions, generate a query, execute it, and export to Excel. Any ideas on a good way of storing relationships, etc. in a configuration table?
Also, please suggest a good Excel add-in for ad-hoc reporting, or point me to any other ways of doing this.
If I have to do it from scratch, I'll probably use VB.NET.
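For the configuration-table idea, one possible shape is a join-metadata table that the query generator walks for whichever tables the user picked; a minimal sketch with hypothetical names:

-- Hypothetical metadata describing how tables join; the generator
-- picks rows for whichever tables the user selected.
CREATE TABLE ReportJoinConfig (
    JoinID      int IDENTITY(1,1) PRIMARY KEY,
    LeftTable   sysname,
    LeftColumn  sysname,
    RightTable  sysname,
    RightColumn sysname,
    JoinType    varchar(10)   -- 'INNER' or 'LEFT'
)

INSERT INTO ReportJoinConfig (LeftTable, LeftColumn, RightTable, RightColumn, JoinType)
VALUES ('OrderHeader', 'OrderID', 'OrderDetail', 'OrderID', 'INNER')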
I'm using SQL 2000 with SP4 (build 2187). I have created a DB maintenance plan with the wizard, with purging of files older than 1 day set.
However, this feature seems not to be working, even though I tried two things: 1) deleting the scheduled job and recreating it (not successful); 2) deleting the maintenance plan and recreating it (still not successful).
After installing the Microsoft SQL Server 2000 DTS Designer Components feature pack and then restarting my Management Studio, I still cannot see or edit my DTS packages. I also tried editing them in the Integration Services console, but no luck there. I see them under msdb but have no option to edit them.
Hi, I was trying to install the SQL Server 2005 trial version, and the only warning I got was for the IIS feature requirement. I am using MS Windows XP Home Edition SP2 with 1 GB RAM and a 2.80 GHz Pentium processor. How can I get this IIS feature on my machine with these specifications? Am I forced to use a different operating system to support IIS? Is this problem caused by the operating system I am using, or what else could have caused it?
After installing SQL Server 2005 by ignoring the IIS warning, I realised I could not get the Reporting Services feature installed. How can I get Reporting Services? Will I be able to get Reporting Services installed if I do not change the operating system? What should I do if I want to have the Reporting Services components installed on my machine?
Thanks
Below is the warning message I got. I tried to follow the instructions, but I could not see IIS anywhere on my machine: "IIS Feature Requirement (Warning): Microsoft Internet Information Services (IIS) is either not installed or is disabled. IIS is required by some SQL Server features. Without IIS, some SQL Server features will not be available for installation. To install all SQL Server features, install IIS from Add or Remove Programs in Control Panel or enable the IIS service through the Control Panel if it is already installed, and then run SQL Server Setup again. For a list of features that depend on IIS, see Features Supported by Editions of SQL Server in Books Online."