I'm importing data from an Access table and, of course, one field is a primary key. This field needs to be an autonumber. The problem is the data I'm importing isn't sequential.
Can I import the data and then alter the table so the column auto-increments, or is there a way to create the table with an auto-incrementing field and still import the non-sequential data?
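One possible approach, sketched in T-SQL with hypothetical table and column names: create the destination with an IDENTITY column up front, switch IDENTITY_INSERT on for the load so the existing non-sequential keys are preserved, then reseed so new rows continue after the highest imported value.
Code:
CREATE TABLE dbo.ImportedOrders (
    OrderID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OrderDesc varchar(100) NULL
);

-- allow explicit values in the identity column during the load
SET IDENTITY_INSERT dbo.ImportedOrders ON;

INSERT INTO dbo.ImportedOrders (OrderID, OrderDesc)
SELECT OrderID, OrderDesc FROM dbo.AccessStaging;

SET IDENTITY_INSERT dbo.ImportedOrders OFF;

-- continue numbering after the highest imported key
DBCC CHECKIDENT ('dbo.ImportedOrders', RESEED);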
I'm using DTS to import data from an Access memo field into a SQL Server ntext field. DTS is only importing the first 255 characters of the memo field and truncating the rest. I'd appreciate any insights into what may be causing this problem, and what I can do about it. Thanks in advance for any help!
Hi, I'm trying to import data to my database on the live server from my local server. However, when I do this it doesn't seem to import the properties of the fields in the tables. How can I import the field properties too? Thanks, Curt.
I know this may be in the wrong forum, but I have a question. I am working on a system for a video store. I have rentals and sales for videos. I have set the ID fields for rentals and sales to be autonumbers and increment by 1, but I would like to have an S in front of sales IDs and R in front of rental IDs at all times. It is kind of like a constant in all autonumber ID fields. I want the S and R to be in every ID field, but the number to change. Thanks.
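One way to get this effect is a plain identity number plus a computed column that layers the letter on top, so the number increments while the prefix stays fixed. A minimal T-SQL sketch with hypothetical names (a Rentals table would be identical with 'R'):
Code:
CREATE TABLE dbo.Sales (
    SaleNum int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    -- computed column: always 'S' followed by the current number, e.g. S1, S2, ...
    SaleID  AS 'S' + CAST(SaleNum AS varchar(10)),
    Title   varchar(100) NULL
);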
Problem importing data from a flat file into a decimal(9,2) field. The data in the flat file is 000001453 and I am copying it to a decimal(10,2) field, but instead of showing up as 0000014.53 it comes across as 0001453.00. I tried defining the input columns a few different ways, but none seemed to work. How do I do this with SSIS, or do I need to write a stored procedure and use CONVERT? Thanks.
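If the file carries an implied two-digit decimal, one workaround is to land the column as character data and scale it during the load. A T-SQL sketch with hypothetical names (the same arithmetic works as an SSIS Derived Column expression):
Code:
-- '000001453' -> 1453 -> 14.53
SELECT CAST(CAST(RawAmount AS int) / 100.0 AS decimal(10,2)) AS Amount
FROM dbo.FlatFileStaging;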
I am using a BCP format file to import a CSV file. The file looks like the following:
"01","02"
The format file looks like the following:
6.0
3
1   SQLCHAR   0   0   "\""       0   ""
2   SQLINT    0   0   "\",\""    1   MROS
3   SQLINT    0   0   "\"\r\n"   2   MROF
When both fields are set to the SQLCHAR data type, the data imports successfully without the quotes, as 01 and 02. These fields will always be numbers and I want them as integers, so I set the data type to int in the database and SQLINT in the format file. The result was that the 01 became 12592 and the 02 became 12848. Where are these numbers coming from?
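Those numbers fall out of how bcp reads the file: SQLINT declares that the host file holds native 4-byte binary integers, so the ASCII bytes of "01" (0x30, 0x31) are read as a little-endian integer, 0x3130 = 12592, and "02" (0x30, 0x32) becomes 0x3230 = 12848. The host data type describes the file, not the table, so for a character file the fields should stay SQLCHAR; SQL Server converts the characters into the int columns on insert. A corrected sketch of the same format file:
Code:
6.0
3
1   SQLCHAR   0   0   "\""       0   ""
2   SQLCHAR   0   0   "\",\""    1   MROS
3   SQLCHAR   0   0   "\"\r\n"   2   MROF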
Hi, I'm new to MS SQL Server, having previously used MySQL. How do I make an auto-number field, and what data type should I use for it, like AUTO_INCREMENT in MySQL? I've tried setting my primary key field to the uniqueidentifier data type, but then I still need to manually add a GUID key. I want it to automatically generate a unique key every time I add a new row. Is this possible?! Hope someone can help! Thanks
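The usual SQL Server equivalent is an int column with the IDENTITY property; a minimal sketch with hypothetical names:
Code:
CREATE TABLE dbo.Members (
    MemberID int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- auto-number: seed 1, step 1
    UserName varchar(50) NOT NULL
);

-- MemberID is generated automatically; never list it in the INSERT
INSERT INTO dbo.Members (UserName) VALUES ('alice');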
I have developed a web database application using ASP and MS Access; however, the requirement for hosting the application is that it must use an MS SQL Server database. I converted the database to SQL Server without any problems, and many features of the application work under SQL Server except the 'add record' function. I realised there isn't an 'autonumber' field in SQL Server (which I use as the primary key for many tables), but an 'int' field. I considered pulling the latest int out of the database, incrementing it manually and adding the new record with that number... I also noticed there is a 'uniqueidentifier' field.
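The closest match for an Access autonumber is an int primary key with the IDENTITY property, which removes the need to pull and increment anything by hand. If the uniqueidentifier route is preferred instead, a DEFAULT of NEWID() makes the server generate the GUID itself; a sketch with hypothetical names:
Code:
CREATE TABLE dbo.Records (
    RecordID uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    Payload  varchar(200) NULL
);

-- the GUID is filled in by the default
INSERT INTO dbo.Records (Payload) VALUES ('first row');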
I am building a simple table, populated by an ASP form, where every record should be assigned a unique ID. When working with Access I used the `autonumber` data type to keep track of every record. Can something like this be done with MS SQL Server, and if not, what do you think is a good way to solve the problem?
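It can: an IDENTITY column gives the same behaviour, and the ASP page can read back the value that was just generated. A hedged sketch, assuming a table with an int IDENTITY primary key (names hypothetical):
Code:
INSERT INTO dbo.FormEntries (FullName, Email)
VALUES ('Jane Doe', 'jane@example.com');

-- identity value generated by the INSERT in this scope
SELECT SCOPE_IDENTITY() AS NewEntryID;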
I am attempting to import a text file into a SQL Server table. The file contains a time field, and the column in the table that I am trying to import into is a smalldatetime field. The data looks like this: "10:30". I keep getting errors on the import of the time field. Any suggestions?
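A bare time does convert when cast explicitly (the date part defaults to 1900-01-01), so one common workaround is to land the file column as varchar and convert afterwards. A sketch with hypothetical names:
Code:
SELECT CAST('10:30' AS smalldatetime);   -- 1900-01-01 10:30:00

-- staging approach: RawTime is the varchar column the file was loaded into
SELECT CAST(RawTime AS smalldatetime) AS ArrivalTime
FROM dbo.TextFileStaging;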
I'm trying to import a text file into a table. The table has a nullable bit field. The corresponding field in the file has Y/N rather than 1/0. I'm getting an error on that column: "The value could not be converted because of a potential loss of data." So I'm assuming I need to convert the Y/N to a 1/0 under the "derived columns" step. Is that correct, and can someone tell me how to do that exactly?
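That is the usual place for it. A Derived Column expression along these lines should work (the column name is hypothetical; extend the test if lowercase values or NULLs can occur):
Code:
[YNFlag] == "Y" ? (DT_BOOL)1 : (DT_BOOL)0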
I'm importing a large text field from an Excel spreadsheet into my SQL database using Enterprise Manager, and I'm getting the error message "Data for source column 31 'fieldname' is too large for the specified buffer size." How do I go about changing the buffer size to allow for larger text fields? Thank you.
I'm having a problem importing a table using DTS from DB2. It seems that SQL Server can't recognize the DB2 "Date" format (eg 07/25/2001), which includes no time element. I assumed that SQL Server would simply throw a 12:00 AM time on each date it encountered, but instead I get errors, and the fields aren't imported at all. Any ideas on how I can get SQL Server and DB2 to play nice while importing the date fields? Any and all suggestions would be greatly appreciated!
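One workaround is to bring the DB2 date across as character data and convert it on the SQL Server side; CONVERT style 101 matches the mm/dd/yyyy form, and the time portion defaults to midnight:
Code:
SELECT CONVERT(datetime, '07/25/2001', 101);   -- 2001-07-25 00:00:00.000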
I'm importing a csv file into SQL Server and everything is working fine; however, here are my problems.
The csv file fields contain some money values, and even though my DB column type is money, the DTS package console complains that my data from the csv is of type string (for the money value) and cannot be inserted into the DB field because they don't match. I changed the DB field to nvarchar and hey presto, it works fine. The problem is that this is no good, because the data in the DB is no longer a money value and therefore I cannot query it the way I should.
So basically, how do I map my csv to pass its values as the right types instead of just string values? If this were C# I could just assign the parameter types accordingly, but I'm not sure how to do that with SQL Server's DTS package.
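One way that sidesteps the DTS typing is to land the csv column as varchar and convert in a second step, stripping any currency formatting first. A T-SQL sketch with hypothetical names:
Code:
INSERT INTO dbo.Invoices (Amount)
SELECT CAST(REPLACE(REPLACE(RawAmount, '$', ''), ',', '') AS money)
FROM dbo.CsvStaging;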
I set up this package to import data from a SharePoint list to a SQL Server data table. The primary key of my SQL table is mapped to the Title column of my SharePoint list. There is a possibility that duplicate values will be entered in the Title field of the SharePoint list, so when importing data into my table via SSIS, my package always errors out when it comes across duplicate values. How have others managed data integrity when importing from a SharePoint list with the Title column mapped to the primary key of a table?
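One common pattern is to land the list in a staging table first and only insert titles the target has not seen; a sketch with hypothetical names (duplicates inside the staging batch itself would still need a rule, e.g. keeping one row per Title):
Code:
INSERT INTO dbo.ListArchive (Title, Detail)
SELECT s.Title, s.Detail
FROM dbo.SharePointStaging s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.ListArchive t
                  WHERE t.Title = s.Title);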
I have one column in SQL Server 2005 of data type VARCHAR(4000).
I exported SQL Server 2005 data into an .mdb file. After the export, the column above was converted to the Memo data type in the Access database.
Now, when I try to import the data from this Access file (db1.mdb) into another SQL Server 2005 database, I get a Unicode conversion error on the Memo data type in the Export/Import Data wizard.
Could you please let me know the reason? I know the Memo data type is not supported in SQL Server 2005.
I am on SQL Server 2005 Standard Edition with SP2. Please help me understand this issue correctly.
We have a daily process which copies millions of rows of data from one DB to another over a linked server. Just checking on best practice: are there more efficient ways than the linked server to copy millions of rows from one DB to another? I checked BULK INSERT, but that transfers data from a file to the DB, not DB to DB.
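One well-worn alternative is to go DB-to-file-to-DB with bcp, which keeps the transfer in bulk-copy mode at both ends; a sketch with hypothetical server, database and table names:
Code:
bcp SourceDB.dbo.BigTable out bigtable.dat -n -S SourceServer -T
bcp TargetDB.dbo.BigTable in  bigtable.dat -n -S TargetServer -T -b 10000

The -n switch keeps the file in native format, -T uses a trusted connection, and -b commits in batches of 10,000 rows so each batch is its own transaction.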
I have created a simple package that uses a SQL command to pull data from an Oracle database and inserts the data into a SQL 2005 table. Some of the fields that I am pulling contain two digits after the decimal point; however, these digits are lost when the data gets into SQL. I have even tried putting the data into a flat file, and still the digits are lost.
In the package I have an OLE DB source connection, which is the Oracle database, and when I do the preview I see all the data I need. I am very confused and have tried a number of things to get the data into SQL, but none work. Any ideas would be very helpful.
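This often comes down to the Oracle column being NUMBER with no declared scale, which the source can map to an integer type. One hedged fix is to pin the scale in the source's SQL command so the external column metadata carries the two decimals (names hypothetical); the output column should then come through as DT_NUMERIC with scale 2:
Code:
SELECT INVOICE_ID,
       CAST(AMOUNT AS NUMBER(18,2)) AS AMOUNT
FROM   APP_SCHEMA.INVOICES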
When I import data from Excel to SQL using DTS, a column containing text is not imported the same as it appears in the Excel sheet: a special character appears between the lines. The text field contains multiple lines, but the content is imported as a single line.
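Excel uses a bare line feed, CHAR(10), for in-cell line breaks, which many Windows tools render as a box on a single line. If that is what is happening here, converting to CR+LF after the load should restore the breaks; a sketch with hypothetical names:
Code:
UPDATE dbo.ImportedSheet
SET    Notes = REPLACE(Notes, CHAR(10), CHAR(13) + CHAR(10))
WHERE  Notes LIKE '%' + CHAR(10) + '%';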
I'm wondering if SSIS will be the solution to the problem I'm working on.
Some of our customers give us an Excel sheet with data they want to insert or update in the database.
I've created a package that will take an Excel sheet and do some data conversion so the data types match up, and after that I use a Slowly Changing Dimension component to create the insert/update commands.
This works great. If a customer adds a new row to the Excel sheet or updates an existing row, the changes are nicely reflected in the database.
But now I've got the following problem. The column names and the order of the columns in the Excel sheet are not standard, and in the future a customer might not even use an Excel sheet but something totally different.
Can I use SSIS for this? Is it possible to let the user set the mappings through some sort of user interface? I've looked at programmatically creating the package, but I've got to say that's quite hard to do… It would be easier to write the whole thing myself than to create the package through code ;)
If not I thought about transforming the data in code before I pass it on to the SSIS package in something like XML. That way I can use standard column names and data types.
So how should I solve this problem? Use SSIS or not?
Hopefully I am posting this question in the correct forum. I am still learning about SQL 2005. Here is my issue: I have an Access DB that I archive weekly into a SQL Server table. I used the DTS wizard to create an import job, and initially that worked fine. The field I have as the primary key in the Access DB cannot be the primary key in the SQL table, since I archive weekly and that primary key value will be imported several times over. I overcame this initially by not having a primary key in the SQL table; the table is strictly for reference. However, now I need to set up a unique field for each of the records in the SQL table. What I have done so far is create a recordID field in the SQL table that is an int with Identity (autonumber) set to yes. That worked great and created unique IDs for all existing records. The problem now is on the import: when I try to import the Access table, I get an error because of the extra field in the SQL table, saying a null value cannot be imported into this field. So... my final question is, how can I import the Access table into the SQL table with the one extra field, the autonumber unique field? Thanks a bunch for any assistance.
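One way around this is to keep the identity column out of the mapping entirely, either by setting its destination to <ignore> in the wizard's column mappings or by loading with an explicit column list. A T-SQL sketch against a staged copy of the Access data (names hypothetical):
Code:
-- recordID is omitted from the column list, so the IDENTITY generates it
INSERT INTO dbo.WeeklyArchive (AccessKey, Field1, Field2)
SELECT AccessKey, Field1, Field2
FROM   dbo.AccessStaging;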
We have a stock code table with a description field and a brand field - when the data was entered, some of the records were entered with the brand field in the description field.
ie.
Code   Description      Brand
ABC1   BLANK DVD        SONY
ABC2   SONY BLANK DVD   SONY
What I need to do is identify where the Brand is in the Description field ...
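CHARINDEX can flag the rows where the brand text appears inside the description; a sketch assuming a hypothetical dbo.StockCode table (from the sample above it would return ABC2 but not ABC1):
Code:
SELECT Code, Description, Brand
FROM   dbo.StockCode
WHERE  CHARINDEX(Brand, Description) > 0;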
I'm new to SQL and DTS packages. I am trying to import data from an Excel spreadsheet to a SQL Server table via a DTS package. It seems that the Excel task looks at the first few records in a column to determine the data type for that column. If the first few records are text, the entire column is imported as text; if numeric, the entire column is imported as numeric. There are about 25,000 records. In one field, the most important one, about half of the records begin with letters and the rest are all numbers. It is the subscriber ID field, and some subscriber IDs are all numbers, some are letters and numbers. The entire column should be imported as text. However, when I run the transform data task from the Excel connection, none of the records that are all numbers are imported. I end up correctly importing only 13,000 of the 25,000 records. The rest are imported with the subscriberID field as <NULL>. I tried using the CAST or CONVERT function in the SQL query, but get the error message "Undefined Function."
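The Jet provider only samples the first few rows (the TypeGuessRows registry setting, 8 by default) to guess a column's type, and CAST/CONVERT are not part of its SQL dialect, hence the "Undefined Function" error. Adding IMEX=1 to the Extended Properties of the Excel connection string usually makes mixed columns come through as text; a sketch with a hypothetical path:
Code:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\subscribers.xls;Extended Properties="Excel 8.0;HDR=YES;IMEX=1"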
Hello, I create a txt file with a bash script, and I need to use it in a DTS package. But I don't know how to specify the types of my columns, so in the transformations task I get an error due to an incompatible type. What can I do to fix this error? Thanks.
I am creating a DTS package that combines several tables, converts one column of data to a new column with all special characters removed, then exports to a new table the data that is unique based on this column and one other, taking the MAX of the other duplicate columns.
Now that I have the data in this table, I want to import any data that is not in my main table.
This "CLEANED" table does not have a designated "key" column, but the table I want to import the unique items does have an ID column that is also a primary key column.
DTS seems to want me to have a Key column to reference when importing from the CLEANED table to the MAIN table.
How would I go about checking the MAIN table against the CLEANED table, having DTS import only the unique items from the CLEANED table that are not present in the MAIN table based on three columns? The rest of the columns I want to just extract the MAX data from the duplicates.
Now here is the query I use to extract the unique values from the "CLEANING" table and get the data into the "CLEANED" table, but I do not know how to use something similar to import into the MAIN table.
Code:
SELECT partno2,
       MAX(partno)    AS partno,
       alt,
       MAX(C_alt)     AS C_alt,
       MAX(cmpycd)    AS cmpycd,
       MAX(type)      AS type,
       compFN,
       MAX(pndesc)    AS pndesc,
       MAX(equipment) AS equipment
INTO   tbl_CLEANED
FROM   tbl_CLEANING
GROUP  BY partno2, alt, compFN
ORDER  BY partno, compFN
The three main columns I need to check against are partno2, alt and compFN. I have named the columns the same in both tables.
partno2 is the column that has been copied from partno with all special characters & spaces removed. This is the main column I am using as a reference for unique values; if there is no match, I have it check against the alt column, then the compFN column. If there are no matches in any of these columns, then I want to extract the data to the MAIN table.
How can I compare these tables and import only unique info to the MAIN table?
In addition, how can I check the items that are the same in both tables and update the MAX info for the other columns (not the three I use for reference; those I need to leave alone) when there is more data in the CLEANED table than in the MAIN table?
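A possible shape for both steps, offered as a sketch (it assumes a MAIN table named tbl_MAIN with the same column names, and that a match on any one of the three reference columns counts as a duplicate; the column lists would need extending to the real tables):
Code:
-- 1) import CLEANED rows with no match on any of the three reference columns
INSERT INTO tbl_MAIN (partno2, partno, alt, C_alt, cmpycd, type, compFN, pndesc, equipment)
SELECT c.partno2, c.partno, c.alt, c.C_alt, c.cmpycd, c.type, c.compFN, c.pndesc, c.equipment
FROM   tbl_CLEANED c
WHERE NOT EXISTS (SELECT 1
                  FROM tbl_MAIN m
                  WHERE m.partno2 = c.partno2
                     OR m.alt     = c.alt
                     OR m.compFN  = c.compFN);

-- 2) for rows present in both, refresh the non-reference columns
UPDATE m
SET    m.pndesc    = c.pndesc,
       m.equipment = c.equipment
FROM   tbl_MAIN m
JOIN   tbl_CLEANED c
  ON   m.partno2 = c.partno2
 AND   m.alt     = c.alt
 AND   m.compFN  = c.compFN;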