Hello. Here are two different problems that occur once in a while when I try to import a text file into SQL Server 2005.
I have a flat file connection to a CSV file that was originally exported from an AS/400 DB2 database.
This CSV file is defined as a variable-length file.
Why does SSIS automatically interpret the length of each column as varchar(50)? It does not matter whether I define the same file as a fixed-length file. The problem is that I get a warning that the information in some columns will be truncated. I would like to do a direct export to the SQL Server 2005 table with shorter varchar fields. I can solve this by using the Data Conversion task, but that only works on the text fields; it cannot transform a string into a decimal or an integer column in the SQL Server 2005 table. Is there no other way than having a staging table between the text file and the SSIS data pipe?
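If a staging table does turn out to be the pragmatic answer, the second step is just explicit conversions in T-SQL; a minimal sketch, with invented table and column names:

-- Staging table loaded 1:1 from the flat file (everything arrives as varchar).
CREATE TABLE dbo.stgOrders
(
    OrderId   varchar(50),
    Amount    varchar(50),
    OrderDate varchar(50)
);

-- Explicit casts into the real table with properly typed, shorter columns.
INSERT INTO dbo.Orders (OrderId, Amount, OrderDate)
SELECT CAST(OrderId   AS int),
       CAST(Amount    AS decimal(10, 2)),
       CAST(OrderDate AS datetime)
FROM dbo.stgOrders;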
I also get a lot of collation or code page errors, even when we set the receiving columns to nvarchar and nchar. Is there any good article on this subject?
Last question: is there parameter support in the DataReader source connection?
Boy, do I need HELP! I have a simple CSV file that I need to import. It worked fine in SQL 2000; I put it into DTS to execute on a monthly basis. It makes the connection, the DB connection, and the table creation fine, but stops at validation of the flat file.
Basically, I want to go out and get a flat file, drop the existing table, create the table, and import the information from the flat file. It is not a complicated table, about 30,000 records.
Create table [db].[dbo].[tblPatient] (
    [patientID] int not null,
    [chartID] varchar(15) null,
    [doctorID] int null,
    [birthdate] datetime null,
    [sex] varchar(1) null,
    [raceID] int null,
    [city] varchar(100) null,
    [state] varchar(2) null,
    [zip9] varchar(9) null,
    [patientTypeID] int null,
    [patName] varchar(100) null)
Below is the error report that tells me NOTHING!
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Success)
- Setting Destination Connection (Success)
- Validating (Error)
Messages
* Error 0xc00470fe: Data Flow Task: The product level is insufficient for component "Source - pmPatientInfo_csv" (1). (SQL Server Import and Export Wizard)
* Error 0xc00470fe: Data Flow Task: The product level is insufficient for component "Data Conversion 1" (71). (SQL Server Import and Export Wizard)
- Prepare for Execute (Stopped)
- Pre-execute (Stopped)
- Executing (Success)
- Copying to [fhc].[dbo].[tblpatient3] (Stopped)
- Post-execute (Stopped)
- Cleanup (Stopped)
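Until the product-level issue is sorted out (it usually means the full SSIS/Integration Services components are not installed on the machine running the wizard), the file can be loaded without the wizard; a minimal sketch, assuming the tblPatient table above already exists and using a hypothetical file path:

BULK INSERT [db].[dbo].[tblPatient]
FROM 'C:\import\pmPatientInfo.csv'      -- hypothetical path
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2                  -- skip the header row, if the file has one
);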
I am trying to copy data from a SQL Server 2000 DB to a SQL Server 2005 DB using the import/export wizard in SQL Server Management Studio. The two databases are not identical, with different table names and different columns, but I thought I had set up all the right mappings and had set the 'Enable identity insert' option. When I ran the wizard it errored at the Pre-execute phase. I simplified the wizard down to one table and only a couple of varchar columns, and this also errored in the same way. The error report is detailed below.
For reference, the SQL 2000 (Enterprise Edition) DB is on a Windows 2000 server, and the SQL 2005 (Developer Edition) DB and Management Studio are both on my Windows XP SP2 workstation.
Could somebody explain why these errors have occurred and, more importantly, how to rectify the problem?
Many thanks, Michael.
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Success)
- Setting Destination Connection (Success)
- Validating (Warning)
Messages
Warning 0x80047076: Data Flow Task: The output column "DateAdd" (23) on output "OLE DB Source Output" (11) and component "Source - tccNewsArticles" (1) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance. (SQL Server Import and Export Wizard)
Warning 0x80047076: Data Flow Task: The output column "DateChg" (26) on output "OLE DB Source Output" (11) and component "Source - tccNewsArticles" (1) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance. (SQL Server Import and Export Wizard)
... NOTE: I have removed the rest of the warnings as they were the same as above (many of them).
- Pre-execute (Error)
Messages
Error 0xc0202009: Data Flow Task: An OLE DB error has occurred. Error code: 0x80040E21. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.". (SQL Server Import and Export Wizard)
Error 0xc0202025: Data Flow Task: Cannot create an OLE DB accessor. Verify that the column metadata is valid. (SQL Server Import and Export Wizard)
Error 0xc004701a: Data Flow Task: component "Destination - Nrs_NewsArticles" (112) failed the pre-execute phase and returned error code 0xC0202025. (SQL Server Import and Export Wizard)
- Executing (Success)
- Copying to [NereusV2_1].[dbo].[Nrs_NewsArticles] (Stopped)
- Post-execute (Stopped)
- Cleanup (Success)
Messages
Information 0x4004300b: Data Flow Task: "component "Destination - Nrs_NewsArticles" (112)" wrote 0 rows. (SQL Server Import and Export Wizard)
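Error 0x80040E21 at pre-execute is typically a column metadata/mapping mismatch, so it can help to compare both sides before re-running the wizard; a sketch (run it on the source and on the destination and diff the output; the table names are the ones from the report above):

-- On the SQL 2000 source:
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'tccNewsArticles'
ORDER BY ORDINAL_POSITION;

-- On the SQL 2005 destination:
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Nrs_NewsArticles'
ORDER BY ORDINAL_POSITION;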
When trying to import files to our database server from a client, I keep getting an error:
- Validating (Error)
Messages
Error 0xc00470fe: Data Flow Task: The product level is insufficient for component "Source_txt" (1). (SQL Server Import and Export Wizard)
Error 0xc00470fe: Data Flow Task: The product level is insufficient for component "Data Conversion 1" (175). (SQL Server Import and Export Wizard)
... Doing the same import when logged on to the server hasn't given me any errors; how come? From my client I can import tables from other DB servers without trouble, but whenever it is files, it won't do it.
I tried, as mentioned in other threads, rerunning setup to re-install SSIS, but as it was already installed it wouldn't re-install. My next move would be to make a clean install, but I'm not sure it would help, as I think this is a bug.
We have a SQL 2005 x64 database (data warehouse related), essentially a work area for us, that we truncate and re-populate via BCP weekly. (We don't back up the database at all.) From the perspective of data-import speed, what is the best recovery model to use: Bulk-Logged or Simple? (I have read the SQL 2005 BOL and don't find it particularly clear on this point.)
Barkingdog
P.S. Anyone know of an article listing "best practices" for high-speed data import?
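For what it's worth, a database that is never backed up gains nothing from Bulk-Logged; both Simple and Bulk-Logged allow minimally logged bulk loads as long as the usual requirements (e.g. a TABLOCK hint, no replication on the table) are met. A rough sketch of the weekly cycle, with made-up database, table and file names:

-- One-time: keep the work database in the SIMPLE recovery model.
ALTER DATABASE WorkDW SET RECOVERY SIMPLE;

-- Weekly reload: truncate, then bulk load with a table lock so the load
-- qualifies for minimal logging.
TRUNCATE TABLE dbo.FactStaging;

EXEC master..xp_cmdshell
    'bcp WorkDW.dbo.FactStaging in "C:\loads\fact.dat" -c -T -h "TABLOCK"';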
Hey everyone, I truly hope this post is in the right place. Apologies if that's not the case!
I know there has to be a simple solution to this, I just haven't managed to find it yet. Essentially what I'm trying to do is merge the contents of a datafile (for sake of argument let's say it contains three columns: ID, RefID and Content) with an existing table, but I need to alter the ID values contained within the datafile prior to doing the import.
The existing table in every client has a baseline of records ending at ID=100. The datafile contains a collection of records where ID>100 (let's say 3 rows worth). Clients for this "replication" model may have created custom content within this table, so the MAX(ID) value on client database tables can be greater than or equal to 100 (anything above 100 being custom content that has to be preserved).
Additionally, the RefID column can refer to any existing ID within this table.
So...
What I want to do is take the MAX(ID) value of the table I intend to import into, subtract 100 from it to establish my custom content differential, and then increment the ID values within the datafile by that differential. I'll also increase the RefID values by the differential, provided the RefID is greater than 100 (I don't want to change RefIDs that refer to baseline records).
I hope I explained that sufficiently! Any ideas about the smartest way to go about doing this? I'm just much more of a web/dbe than a dba/dbe lol...
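One way to do it, assuming the datafile is first loaded into a staging table; every name below is invented for illustration:

DECLARE @diff int;

-- Custom-content differential on the client's existing table.
SELECT @diff = MAX(ID) - 100 FROM dbo.ClientContent;

-- Shift incoming IDs; shift RefIDs only when they point above the baseline.
UPDATE dbo.StagedContent
SET ID    = ID + @diff,
    RefID = CASE WHEN RefID > 100 THEN RefID + @diff ELSE RefID END;

-- Merge the shifted rows into the client table.
-- (If ID is an identity column, wrap this in SET IDENTITY_INSERT ... ON/OFF.)
INSERT INTO dbo.ClientContent (ID, RefID, Content)
SELECT ID, RefID, Content
FROM dbo.StagedContent;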
Hi there. I'm trying to create a copy of a remote database for development purposes. I am using MySQL; the remote database is in MSSQL. The CREATE TABLE commands contain a collation titled
SQL_Latin1_General_CP1_CI_AS
which MySQL doesn't recognise. The three most similar collations I can find in MySQL are 'latin1_general_ci', 'latin2_general_ci' and 'latin7_general_ci'.
Does anyone know the difference between these collations? And whether any of them is equivalent to the MSSQL code I have to implement?
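For reference, SQL_Latin1_General_CP1_CI_AS is code page 1252, case-insensitive and accent-sensitive; of the three, latin1_general_ci is usually the closest fit (MySQL's latin1 character set is also cp1252-based), though the exact sort order of accented characters will not match. A sketch of applying it in MySQL, with invented names:

CREATE TABLE articles (
    id    INT NOT NULL PRIMARY KEY,
    title VARCHAR(200)
) CHARACTER SET latin1 COLLATE latin1_general_ci;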
I want to create a database where the table names / column names / SP names are NOT case-sensitive, but where the data in the tables is, so that I can build a unique index where 'test' and 'TEST' are accepted as different. I have tried installing SQL with a collation designator with the Case Sensitive option checked; this caused all SP names / column names / table names to be case-sensitive, which is not what I want. I have also tried installing SQL, selecting a SQL collation and picking an option from the drop-down list; again this caused everything to be case-sensitive, which is not desired. Do I have to install SQL with a case-insensitive collation and then set each column in the table to be case-sensitive? What, if any, are the problems I am likely to come across? Thanks, Steve
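The last approach mentioned is the usual answer: install with a case-insensitive collation (so object names stay case-insensitive) and put a case-sensitive collation only on the data columns that need it. A minimal sketch:

-- Database (and therefore identifiers) case-insensitive.
CREATE DATABASE MyDb COLLATE SQL_Latin1_General_CP1_CI_AS;
GO
USE MyDb;
GO
-- Data column case-sensitive, so 'test' and 'TEST' are distinct in the unique index.
CREATE TABLE dbo.Codes
(
    Code varchar(20) COLLATE SQL_Latin1_General_CP1_CS_AS NOT NULL
);
CREATE UNIQUE INDEX UX_Codes_Code ON dbo.Codes (Code);

The main problem to expect is collation conflicts when such a column is compared or joined to columns (including temp tables) that use the default collation; an explicit COLLATE clause in the comparison resolves those.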
Hi, I noticed that when I installed SQL 2000 with the default "typical install", the following is used: SQL collation (dictionary order, case-insensitive, accent-sensitive), and sp_helpsort gives SQL_Latin1_General_CP1_CI_AI.
Is there any difference if I choose the "custom install" and pick the Windows locale, which gives the sp_helpsort result Latin1_General_CI_AI?
SQL_Latin1_General_CP1_CI_AI is supposedly there for backward compatibility, so are they actually equivalent? Is there any impact or difference we have to be aware of? Anyone know? Thanks.
I have looked through the forum but have a couple of questions:
1) The database was created with a case-insensitive collation.
2) All the tables were then created (72 tables) and by default got the CI collation on all columns.
3) Lots of data was added (2 GB).
4) We discovered the mistake and altered the database to have a case-sensitive collation.
5) ... How do I change the collations for all the columns without doing them all manually? Can I back up the database, change some settings and restore it? Export all the data, drop and recreate the tables and import the data?
Hi, we have around 150 databases that are case-sensitive, and we are planning to change them to case-insensitive. Each database has around 180 tables. I have changed the collation on the DB, but changing the collation manually on each column is a daunting process. Is there any script or tool which can assist in doing this? Appreciate your help. Thanks, SAI
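A common approach is to generate the ALTER statements from the metadata instead of writing them by hand; a rough sketch (the target collation is an example, and any constraints/indexes on the affected columns still have to be dropped and recreated around the generated statements):

SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] '
     + 'ALTER COLUMN [' + COLUMN_NAME + '] '
     + DATA_TYPE + '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(10)) + ') '
     + 'COLLATE SQL_Latin1_General_CP1_CI_AS '
     + CASE IS_NULLABLE WHEN 'YES' THEN 'NULL' ELSE 'NOT NULL' END + ';'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar')
  AND COLLATION_NAME IS NOT NULL;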
I have developed a tool to allow project developers to easily re-create the entire schema for our base product. The current issue involves setting the correct collation for the customers' region. Our brother company in Germany uses the same db creation tool and scripts, and we here in the US also have customers in South America. My ultimate question is "what subset of collation names would be necessary to provide the project developer?" I could query the database to get all the collation names, but I think it was around 1000 names. Can I query to get a smaller subset of the most relevant collation names?
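Rather than offering the full list, you can filter it down to the collation families the projects actually deploy to; a sketch (the LIKE patterns are just examples to adjust):

SELECT name, description
FROM ::fn_helpcollations()
WHERE name LIKE 'Latin1_General%'       -- US / Western Europe
   OR name LIKE 'Modern_Spanish%'       -- South America
ORDER BY name;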
When I try to execute a package I get the following error. I have a bunch of similar packages which run fine on the same source (Sybase) and destination (SQL Server 2005), just different tables. Only a few of them fail, and all of them fail with the same error of "Unable to resolve column level collations. Bulk-copy cannot continue". I checked the datatype and length between source and destination; both are the same. The user has all the required rights on the objects in both source and destination.
Error at Data Flow Task For Test1 - test_tbl_job [OLE DB Destination [16]]: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Unable to resolve column level collations. Bulk-copy cannot continue.".
Error at Data Flow Task For Test1 - test_tbl_job [OLE DB Destination [16]]: Failed to open a fastload rowset for "testdb..tbl2". Check that the object exists in the database.
On further trial and error I found that if I remove the fast load option, it works without a glitch.
Is there any way to update the system tables directly, to alter the collations of the columns in the user db's?
I've tried the script below:
UPDATE Syscolumns SET collation = 'SQL_Latin1_General_CP1_CS_AS' WHERE name = '<AddrCode>' AND id = object_id('<Compliance>')
but, I get the following error message:
Server: Msg 271, Level 16, State 1, Line 1 Column 'collation' cannot be modified because it is a computed column.
Can you please help me! I need to do thousands of these, and most of them have constraints on them, so the script I generated to do the ALTER TABLE ... ALTER COLUMN does not suffice.
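Updating syscolumns directly isn't supported (the error above is SQL Server blocking exactly that), so the route is still ALTER COLUMN, which means scripting the drop and recreate of whatever sits on each column. A sketch of the pattern for one column, with hypothetical constraint/index names and an assumed datatype:

-- 1. Drop whatever references the column (constraints, indexes).
ALTER TABLE dbo.Compliance DROP CONSTRAINT DF_Compliance_AddrCode;  -- hypothetical
DROP INDEX Compliance.IX_Compliance_AddrCode;                       -- hypothetical

-- 2. Change the collation (datatype/length must match the existing definition).
ALTER TABLE dbo.Compliance
    ALTER COLUMN AddrCode varchar(20) COLLATE SQL_Latin1_General_CP1_CS_AS NOT NULL;

-- 3. Recreate what was dropped.
CREATE INDEX IX_Compliance_AddrCode ON dbo.Compliance (AddrCode);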
I'm recreating many of my DBA scripts that no longer work in 2005 due to the rework of system tables. It's a risk I lived with knowing that someday the system tables would change. I'm now encountering collation problems, which I do not understand. I know how to fix the problem, but I don't know why the collation issues exist in the first place.
Run the following command.
Select * From sys.all_objects a JOIN master..spt_values b on a.type = b.type
You will receive the following error.
Msg 468, Level 16, State 9, Line 1
Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS_KS_WS" in the equal to operation.
Now run sp_help 'sys.all_objects' and look at the collation definition for the columns "type" and "type_desc". In my environment they have a collation of Latin1_General_CI_AS_KS_WS. This is different from the overall default collation of SQL_Latin1_General_CP1_CI_AS, thus causing the error.
My question is why did Microsoft need to make this collation different for these columns?
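For anyone hitting the same error, the usual workaround is an explicit COLLATE on one side of the comparison (using the default collation from the error message above), for example:

SELECT *
FROM sys.all_objects a
JOIN master..spt_values b
    ON a.type COLLATE SQL_Latin1_General_CP1_CI_AS = b.type;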
Hi, I have set up my database to use the "Simple" recovery mode. This will not log any transactions? I have got the error message saying the log file is full and I need to do a transaction log backup. I do not understand why; could anyone kindly advise? Thanks & regards
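Even in Simple mode the log still records every active transaction; it is only truncated at checkpoints, so one very large or long-running open transaction can fill it. A quick sketch for checking that, with a hypothetical database name:

-- How full is each transaction log right now?
DBCC SQLPERF(LOGSPACE);

-- Is an old open transaction keeping the log from truncating?
USE MyDatabase;   -- hypothetical name
DBCC OPENTRAN;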
Hi, I am designing a site using Visual Web Developer, C#, and SQL Server Express. On the contact page I want to put a form that allows users to enter details about themselves. On clicking the button, this will be stored in the database in a table called Subscribers. The form will have name, address, telephone, email fields, etc. With the email addresses from the visitors I want to be able to keep them in a newsletter section or similar which is automated, so they receive emails from time to time. Could somebody suggest a tutorial which shows how to complete this process using C# and SQL?
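While a full tutorial is the better answer, the table behind the form could start out something like this (column names and sizes are only guesses at what the form needs):

CREATE TABLE dbo.Subscribers
(
    SubscriberId int IDENTITY(1, 1) PRIMARY KEY,
    Name         nvarchar(100) NOT NULL,
    Address      nvarchar(200) NULL,
    Telephone    varchar(30)   NULL,
    Email        nvarchar(200) NOT NULL,
    CreatedOn    datetime      NOT NULL DEFAULT GETDATE()
);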
I can't believe it's been a few days and I can't figure this out. We have a flat file (purchaseOrder.txt) that has header and detail lines. It gets dropped in a folder. I need to pick it up and insert it into normalized tables and/or transform it into another file structure or .NET class.
10001,2005/01/01,some more data
SOME PRODUCT 1, 10
SOME PRODUCT 2, 5
Can somebody please give me some guidance on how to do this in SSIS?
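In SSIS the typical pattern is a Flat File source reading each line as one wide column, followed by a Conditional Split on the leading characters. The same idea expressed in plain T-SQL, as a sketch with invented names and assuming the layout shown in the sample above (header lines start with the numeric PO number, detail lines with the product name):

CREATE TABLE dbo.stgPurchaseOrder (RawLine varchar(500));

BULK INSERT dbo.stgPurchaseOrder
FROM 'C:\drop\purchaseOrder.txt'          -- hypothetical path
WITH (ROWTERMINATOR = '\n');

-- Header rows start with a digit; detail rows do not.
SELECT RawLine AS HeaderLine
FROM dbo.stgPurchaseOrder
WHERE RawLine LIKE '[0-9]%';

SELECT RawLine AS DetailLine
FROM dbo.stgPurchaseOrder
WHERE RawLine NOT LIKE '[0-9]%';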
Have a database that's in "Simple" recovery mode whose .ldf has grown to 270 GB. This database is a data warehouse, so "Full" is not required. I put it in Simple mode a month ago and shrank the log down, and now it has filled up the disk.
What steps can I take to mitigate this in future? I've read that this is caused by long-running transactions which fill the log for DR purposes. Should I put the database back into Full mode and back up/truncate daily?
The auto-growth is set to 128 MB, which is very low.
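Before changing recovery models, it is worth checking what is actually keeping the log from being reused; a sketch (sys.databases requires SQL 2005 or later; the database name is made up):

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyWarehouse';   -- hypothetical name

-- Current log size and usage per database.
DBCC SQLPERF(LOGSPACE);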
My understanding is that the log file is not supposed to grow if the database is under the simple recovery model. I am in a situation where the log grows if I do any inserts that involve millions of rows. How do I make sure that it does not grow?
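Simple recovery cannot truncate any log written by a transaction that is still open, so a single multi-million-row insert holds the whole thing until it commits. Splitting the load into batches lets checkpoints truncate the log as you go; a rough sketch with made-up table names:

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    -- Move rows in chunks so no single transaction pins the whole log.
    INSERT INTO dbo.Target (Id, Payload)
    SELECT TOP (100000) s.Id, s.Payload
    FROM dbo.Source s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;
    CHECKPOINT;   -- allow the inactive portion of the log to be reused
END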
One of our databases is in the simple recovery model, and it usually generates a log file (.ldf) of more than 220 GB every week. We are shrinking the log file many times to release the space.
But as that's not advisable, I am looking for other options. I suggested changing the recovery model to Full and starting T-log backups, but the client doesn't want to change the recovery model.
Is there any way to manage the log file under the simple recovery model to maintain disk space?
I want to make a very simple package: Export all rows in a table to a flat file. This package I can create pretty much by only using the wizards. Now to my problems:
1) The file needs header and footer records: H is a header record, in this case followed by the date and time. D is a detail record, that is, all the rows that were exported. E is an end record, containing only the number of rows in the file, including the H and E records.
2) I need to set the file name dynamically, preferably using date and time to name the file.
I've done this very same thing in T-SQL, like so:
Code Snippet
USE AVK
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO
SELECT * FROM tempProducts
GO
CREATE VIEW EXPORT_ORDERS AS
SELECT 1 AS ROW_ORDER,
       'H' + REPLACE(CONVERT(char(8), GETDATE(), 112) + CONVERT(char(8), GETDATE(), 108), ':', '') AS Data_Line
UNION ALL
SELECT 2 AS ROW_ORDER,
       'D' + COALESCE(CONVERT(char(10), LBTyp), '')
           + COALESCE(CONVERT(char(50), Description), '')
           + COALESCE(CONVERT(char(5), Volume), '') AS Data_Line
FROM dbo.tempProducts
UNION ALL
SELECT 3 AS ROW_ORDER,
       'E' + RIGHT('0000000000' + RTRIM(CONVERT(char(13), COUNT(*) + 2)), 11) AS Data_Line
FROM dbo.tempProducts AS tempProducts_1
GO
IF @@ROWCOUNT > 0
BEGIN
    BEGIN TRANSACTION

    SELECT * FROM tempProducts

    DECLARE @date char(8)
    DECLARE @time char(8)
    DECLARE @sql VARCHAR(150)
    SELECT @date = CONVERT(char(8), GETDATE(), 112)
    SELECT @time = CONVERT(char(8), GETDATE(), 108)
    SELECT @time = REPLACE(@time, ':', '')

    DECLARE @dt char(14)
    SELECT @dt = @date + '_' + @time

    SELECT @sql = 'bcp "SELECT Data_Line FROM avk..EXPORT_ORDERS ORDER BY ROW_ORDER" queryout "c:\AVK_' + @dt + '.txt" -c -t -U sa -P dalla'
    EXEC master..xp_cmdshell @sql

    --WAITFOR DELAY '0:00:10';
    DELETE FROM tempProducts

    COMMIT TRANSACTION
END

DROP VIEW EXPORT_ORDERS
GO
But I'm sure it can be done in SSIS as well, giving me some nice options for error handling, for example. Pointers, please!