Hello,
I have a Data Flow Source that uses a SQL command to pull data. In the SQL statement, I used CAST to change all varchar types to nvarchar to suit MS Access. I can preview the data from the source. In testing, the SQL statement only pulls about ten records.
I have a Microsoft Access 2000 database table as a destination. Data in each column of the table is required, and all columns have defaults.
I also have a grid data viewer set up, and I have DefaultBufferMaxRows set to 2 so that I can see data going across. When I execute this data flow, no data is transferred to the Access database table and no data shows up in the data viewer. There are no errors and no warnings; the 'Execution Results' tab simply indicates that zero rows were transferred.
How do I begin to isolate the problem? The following is the SQL Statement in the Data Flow Source. Thank you for your help! - cdun2
DECLARE @CategoryTable TABLE
(ColID Int,
ColCategory varchar(60),
ColValue varchar(500)
)
--and fill it
INSERT INTO @CategoryTable
(ColID, ColCategory, ColValue)
SELECT
0,
LEFT(RawCollectionData,CHARINDEX(':',RawCollectionData)),
LTRIM(SUBSTRING(RawCollectionData,CHARINDEX(':',RawCollectionData)+1,255))
FROM Collections_Staging
--Assign an ID to each block of data for each occurrence of 'Reason:'
DECLARE @ID int
SET @ID = 1
UPDATE @CategoryTable
SET [ColID] = CASE WHEN ColCategory = 'Reason:' THEN @ID - 1 ELSE @ID END,
@ID = CASE WHEN ColCategory = 'Reason:' THEN @ID + 1 ELSE @ID END
--Then put the data together
SELECT --cast to Nvarchar for MSAccess
a.ColID,
CAST(a.ColValue as Nvarchar(30)) AS OrderID,
COALESCE(CAST(b.ColValue as Nvarchar(30)),'') AS SellerUserID,
COALESCE(CAST(c.ColValue as Nvarchar(100)),'') AS BusinessName,
COALESCE(CAST(d.ColValue as Nvarchar(15)),'') AS BankID,
COALESCE(CAST(e.ColValue as Nvarchar(15)),'') AS AccountID,
COALESCE(CAST(SUBSTRING(f.ColValue,CHARINDEX('$',f.ColValue)+1,500) AS DECIMAL(18,2)),0) AS CollectionAmount,
COALESCE(CAST(g.ColValue as Nvarchar(10)),'') AS TransactionType,
CASE
WHEN h.ColValue LIKE '%Matching Disbursement%' THEN NULL
ELSE CAST(h.ColValue AS SmallDateTime)
END AS DisbursementDate,
--COALESCE(h.ColValue,'') AS DisbursementDate,
CASE
WHEN i.ColValue LIKE '%Matching Disbursements%' THEN NULL
WHEN CAST(LEFT(REVERSE(i.ColValue),4) AS INT) > 1000 THEN CAST(i.ColValue AS SmallDateTime)
WHEN LEFT(REVERSE(i.ColValue),4) = '1000' THEN NULL
END AS ReturnDate,
--COALESCE(i.ColValue,'') AS ReturnDate,
COALESCE(CAST(j.ColValue as Nvarchar(4)),'') AS Code,
COALESCE(CAST(k.ColValue as Nvarchar(255)),'') AS CollectionReason
FROM @CategoryTable a
LEFT JOIN @CategoryTable b ON b.ColID = a.ColID AND b.ColCategory = 'Seller UserId:'
LEFT JOIN @CategoryTable c ON c.ColID = a.ColID AND c.ColCategory = 'Business Name:'
LEFT JOIN @CategoryTable d ON d.ColID = a.ColID AND d.ColCategory = 'Bank ID:'
LEFT JOIN @CategoryTable e ON e.ColID = a.ColID AND e.ColCategory = 'Account ID:'
LEFT JOIN @CategoryTable f ON f.ColID = a.ColID AND f.ColCategory = 'Amount:'
LEFT JOIN @CategoryTable g ON g.ColID = a.ColID AND g.ColCategory = 'Transaction Type:'
LEFT JOIN @CategoryTable h ON h.ColID = a.ColID AND h.ColCategory = 'Disbursement Date:'
LEFT JOIN @CategoryTable i ON i.ColID = a.ColID AND i.ColCategory = 'Return Date:'
LEFT JOIN @CategoryTable j ON j.ColID = a.ColID AND j.ColCategory = 'Code:'
LEFT JOIN @CategoryTable k ON k.ColID = a.ColID AND k.ColCategory = 'Reason:'
WHERE a.ColCategory = 'Order ID:'
Hello, I am developing a web application that will allow users to upload a .mdb file, and from that file I need to populate a SQL Server database. I know the table name in the .mdb file, but I am unclear how to structure my data access layer correctly. How do I pull data from the .mdb file, and once I have it, how do I populate the SQL database? Any advice would be greatly appreciated. Thanks!
I have a table in MS Access that I would like to copy/import into SQL Server 2005. I would like to automate this process so that it copies the table over on a daily basis. Can someone tell me how this can be done?
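One server-side option (a sketch rather than the only way to do it, assuming the Jet OLE DB provider is installed on the SQL Server machine, 'Ad Hoc Distributed Queries' is enabled, and the file, table, and destination names shown are placeholders) is to pull the Access table in with OPENROWSET and schedule the statement as a daily SQL Server Agent job step:

-- Placeholders: dbo.MyDailyCopy, C:\Data\Source.mdb, AccessTable.
TRUNCATE TABLE dbo.MyDailyCopy;
INSERT INTO dbo.MyDailyCopy
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\Data\Source.mdb';'Admin';'',
                'SELECT * FROM AccessTable');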
I am not sure I understand the problem I am causing, but I am a beginner!
Here's the situation: I have a table located on MS SQL server database number 1. Said table, which we'll call WIDGET_PRICES, is accessed regularly by my existing source code and has no problems.
At some point, I decide to move operations to MS SQL database number 2 and do a very simple database copy of WIDGET_PRICES from database 1 to database 2 using the Microsoft SQL Server Management Studio.
The end result, inevitably, is that my source code can no longer access the very same table now that it is located on the new database server. The code hasn't changed; it's still trying to access WIDGET_PRICES as always. And, from what I see on my screen through Management Studio, WIDGET_PRICES appears just fine.
An example error is the one I just got:
Microsoft OLE DB Provider for SQL Server error '80040e37'
Invalid object name 'YB_ITEMS'.
/yardbark/tampabay/header.asp, line 27
The only clue is that while my tables were named like "database1.WIDGET_PRICES" on database 1, they wind up looking like "database2.WIDGET_PRICES" on database 2.
I include a little more detail and screenshots of the tables in question at this web page.
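One thing worth checking, though this is only a guess based on the prefixes described above: the copy may have landed under a different owner/schema than the one the code expects, so an unqualified table name no longer resolves. On SQL Server 2005 you can see which schema the copied table ended up in and, if necessary, move it back to dbo (the owner name in the ALTER is a placeholder for whatever the first query returns):

-- Which schema owns the table on the new server?
SELECT SCHEMA_NAME(schema_id) AS SchemaName, name
FROM sys.tables
WHERE name = 'WIDGET_PRICES';

-- If it is not under dbo, move it there ('someowner' is a placeholder).
ALTER SCHEMA dbo TRANSFER someowner.WIDGET_PRICES;

Alternatively, the code or connection could schema-qualify the table name explicitly.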
I am having some trouble copying data over my workgroup network from my Windows 2003 Server machine (MachineA, with SQL Server 2005) to a drive on one of my network machines (MachineB). Here is the T-SQL code that I am trying to execute:
EXEC xp_cmdshell 'copy D:\Datafile.txt \\MachineB\Documents'
Whenever I try to execute the above piece of code, I get the error message "Access is denied", but if I copy the file from the command prompt (cmd.exe) with the copy command, the file copies fine over the network.
I have already searched the internet and found that lots of people have the same issue, and they were given suggestions like this: "Check in Services and make sure that the MSSQLServer service is run as a domain user and that domain user has rights to these network resources."
Well, it sounds plausible, but I don't know the exact steps to do this. How do I know which user is running the MSSQLServer service? Are they referring to the user I use to connect to my SQL Server database engine through SQL Server Management Studio? Also, they suggest a 'domain user', and as I said before, I do not have a domain, just a regular workgroup network.
Here are some details of the user that I use to log in. I generally log in to my Windows 2003 Server machine with a user called 'User1', and I use the same 'User1' to connect to SQL Server through the Management Studio screen. Should I create a user called 'User1' on MachineB (the destination machine)?
I would really appreciate it if someone could give me detailed steps explaining how to solve this problem. Thank you very much once again.
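A quick way to see which Windows account xp_cmdshell (and therefore the copy command) actually runs under is to ask Windows directly; this is only a sketch, and it assumes xp_cmdshell is already enabled, which it clearly is here:

-- whoami is a standard Windows command; the output is the SQL Server service account.
EXEC xp_cmdshell 'whoami';

In a workgroup (no domain), a common workaround is to create a local user with the same name and password on both machines, run the SQL Server service as that user (change it in the Services applet or SQL Server Configuration Manager, then restart the service), and grant that user access to the shared folder on MachineB.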
If the source has a new column, the script generated by SqlPackage.exe recreates the table in the background, moving the data through a temporary table. If the table is big, this approach can cause issues.
An example of the script is below: in the source project I added the columns [MyColumn_LINE_1] and [MyColumn_LINE_5].
Is there any way I can make it generate an ALTER statement instead?
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
CREATE TABLE [dbo].[tmp_ms_xx_MyTable] (
    [MyColumn_TYPE_CODE] CHAR (3) NOT NULL,
[Code] ....
The same script is generated regardless of whether the table has data, and regardless of whether it has a clustered or nonclustered primary key.
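For comparison, the incremental change I would hope for is just a pair of ALTER statements; the column types below are placeholders, since the real definitions are not shown in the question:

-- Hypothetical types; the real ones come from the source project.
ALTER TABLE [dbo].[MyTable] ADD [MyColumn_LINE_1] NVARCHAR(50) NULL;
ALTER TABLE [dbo].[MyTable] ADD [MyColumn_LINE_5] NVARCHAR(50) NULL;

One thing worth checking: if the new columns sit in the middle of the table definition rather than at the end, column-order preservation alone can trigger a rebuild, and depending on the SqlPackage version the IgnoreColumnOrder publish option may avoid it.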
I have 2 tables (both containing the same column names/datatypes), say table1 and table2. table1 is the most recent, but some rows were deleted by accident. table2 is a backup that has all the data we need, but some of it is old. What I want to do is overwrite the rows in table2 that also exist in table1 with the table1 rows, but leave the rows in table2 that do not exist in table1 as they are. Both tables have a primary key, user_id.
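A minimal sketch of that update, assuming the shared columns are called col1, col2, and so on (placeholders for the real column names):

-- Overwrite matching rows in table2 with the newer values from table1.
UPDATE t2
SET t2.col1 = t1.col1,
    t2.col2 = t1.col2       -- repeat for the remaining columns
FROM table2 AS t2
INNER JOIN table1 AS t1
        ON t1.user_id = t2.user_id;

Because it is an inner join, rows that exist only in table2 are left untouched.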
Probably a silly question, but as a .NET developer I'm trying to simulate DTS when, for example, I don't have permission to run DTS on a production server.
In particular, I'm interested in how rows are cached before the service decides to flush the buffer and write to the target table. Is it safe to assume DTS is cursor based?
This table stores common information used in resolving technical problems based on Subject and Topic. However, I've now created a Subject/Topic where I want to copy all the data that corresponds to another Subject/topic.
Example:
There are 35 rows that correspond to Subject = 'Publisher01' and Topic = 'Subcategory03'. I want to create 35 new rows that contain the same RD and RR data, but have Subject = 'Publisher02' and Topic = 'Subcategory07'. Highest current RRID = 5008
I cannot figure out how to write that query. I apologize in advance for the fact that this is, no doubt, a seriously beginner question.
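A sketch of the copy, assuming the table is called something like dbo.Resolutions (a placeholder name) with columns RRID, Subject, Topic, RD, and RR, and that RRID is an IDENTITY column so the new IDs (5009 and up) are assigned automatically:

INSERT INTO dbo.Resolutions (Subject, Topic, RD, RR)
SELECT 'Publisher02', 'Subcategory07', RD, RR
FROM dbo.Resolutions
WHERE Subject = 'Publisher01'
  AND Topic   = 'Subcategory03';

If RRID is not an identity column, you would have to generate the 35 new IDs yourself, for example as MAX(RRID) plus a running number.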
I need to copy the following columns from my Employee table in my Performance DB to my Employee table in my VacationRequest DB: CompanyID, FacilityID, EmployeeID, FirstName, LastName, plus the constants [Password] = 'nippert' and Role = 'Employee'. I tried the advice on this website, but to no avail: http://www.w3schools.com/sql/sql_select_into.asp
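If both databases are on the same instance and the destination Employee table already exists, a sketch of the copy (database names from the question; the destination column list is an assumption) is:

INSERT INTO VacationRequest.dbo.Employee
    (CompanyID, FacilityID, EmployeeID, FirstName, LastName, [Password], [Role])
SELECT CompanyID, FacilityID, EmployeeID, FirstName, LastName,
       'nippert'  AS [Password],
       'Employee' AS [Role]
FROM Performance.dbo.Employee;

Note that SELECT INTO (the w3schools approach) creates a brand new table, which is probably why it did not help when the destination table already exists.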
select * into dbo.ashutosh from attribute where 1=2
The WHERE 1=2 clause is used to avoid copying any data: here "ashutosh" is the new table name and "attribute" is the table whose definition is used as the reference. Because 1=2 is never true, no rows are returned, so only the column structure is created in the new table.
I have "inherited" a project working on a SQL 2000 database. The project calculates commissions based on data from an invoice header table and an invoice details table. The goal is to extract data from the primary database tables, perform limited manipulations, and store the resultant data into a table in a different database for reference and reporting. I have the query complete that extracts and manipulates the data, but I am stuck in trying to add this data to the final storage/reporting table. I would also like to do error checking to be certain that a record is not "re-inserted" from the source data to the destination table. Thanks in advance, Barry
I have a table, dbo.Table1 (Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8), that I need to split between two tables, dbo.Table2 (Id, Col1, Col3, Col4) and dbo.Table3 (Id, Table2_Id, Col5, Col6, Col7, Col8). But in dbo.Table3 I need to have the Id column from dbo.Table2 populated, since it is a foreign key constraint in dbo.Table3. How do I go about doing this?
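A minimal sketch, assuming dbo.Table2 can simply reuse the Id values from dbo.Table1 and that dbo.Table3.Id is an IDENTITY (if dbo.Table2.Id is generated instead, you would need to capture the new values, for example with an OUTPUT clause, and map them back):

-- Step 1: populate Table2 from Table1, carrying over the original Id.
INSERT INTO dbo.Table2 (Id, Col1, Col3, Col4)
SELECT Id, Col1, Col3, Col4
FROM dbo.Table1;

-- Step 2: populate Table3, using that same Id as the foreign key to Table2.
INSERT INTO dbo.Table3 (Table2_Id, Col5, Col6, Col7, Col8)
SELECT Id, Col5, Col6, Col7, Col8
FROM dbo.Table1;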
I have 3 tables, all with the following schema:

Table <Category> {
    UniqueID,
    LastDate DateTime
}

Assume the following tables with data following the above schema:

Table Cat1 { (1, D1), (2, D2), (3, D3) }
Table Cat2 { (2, D4), (3, D5), (4, D6) }
Table Cat3 { (1, D7), (3, D8), (5, D9) }

I also have a Master table, and its schema is as follows:

Table Master {
    UniqueId,
    Cat1 DateTime,   -- this is the same as the table name
    Cat2 DateTime,   -- this is the same as the table name
    Cat3 DateTime    -- this is the same as the table name
}

After inserting the data from all these 3 tables, I want my Master table to look like this: Table Master {
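Although the desired output is cut off above, it sounds like one row per UniqueId with each category's LastDate in its matching column. A sketch of that insert, using the table and column names described above:

INSERT INTO Master (UniqueId, Cat1, Cat2, Cat3)
SELECT ids.UniqueID,
       c1.LastDate,
       c2.LastDate,
       c3.LastDate
FROM (SELECT UniqueID FROM Cat1
      UNION
      SELECT UniqueID FROM Cat2
      UNION
      SELECT UniqueID FROM Cat3) AS ids
LEFT JOIN Cat1 AS c1 ON c1.UniqueID = ids.UniqueID
LEFT JOIN Cat2 AS c2 ON c2.UniqueID = ids.UniqueID
LEFT JOIN Cat3 AS c3 ON c3.UniqueID = ids.UniqueID;

Ids that are missing from a category simply get NULL in that column.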
Is it possible to easily copy data from one table to another if the data types don't match? I know you can do INSERT INTO table1 (col1, col2) SELECT col2, col7 FROM table2 if the data types match, but is there a way to do this if they don't? I'm not trying to copy datetimes into bit fields or anything. I just have an old table that I built when I really didn't know what I was doing; now I at least think I have a better understanding of what data types to use, so I want to move the data from the original table to my new one. Most of the fields in the old table are text datatypes and the new table uses nvarchar(50). Thanks for any suggestions.
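For text columns going into nvarchar(50), you can usually just CAST in the SELECT; a sketch with placeholder table and column names:

-- OldTable/NewTable and the columns are placeholders.
-- Values longer than 50 characters are truncated by the CAST, so check max lengths first.
INSERT INTO dbo.NewTable (Name, Description)
SELECT CAST(Name AS nvarchar(50)),
       CAST(Description AS nvarchar(50))
FROM dbo.OldTable;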
From a source table (in SERVER1) I get ids of candidates and from another source (in SERVER2) I get their CVs (text files stored in various Folders). My destination table (in SERVER3) has two fields, CandidateId & CandidateCV.
I have to transfer the data in above fashion for nearly 1 million records. How can I write a DTS package which picks up the text file from SERVER2 based on the CandidateId which comes from SERVER1? Probably I need some kind of looping mechanism which changes the candidate id & his CV file.
An SSIS package to transfer data from a DB instance on SQL Server 2005 to SQL Server 2000 is extremely slow. The package uses an OLEDB Source to OLEDB Destination for data transfer which is basically one table from sql server 2005 to sql server 2000. The job takes 5 minutes to transfer about 400 rows at night when there is very little activity on the server. During the day the job almost always times out.
On SQL Server 2000 instances the job ran in minutes as the old 2000 DTS package.
Is there an alternative to this? The Transfer SQL Server Objects task does not work, as there is apparently a defect according to Microsoft. Please let me know if there is any other option, other than using an Execute DTS 2000 Package task or an ActiveX Script to read records from one source and insert them into the destination, since I am not certain how long that might take or how viable it would be.
I have created a stored procedure that dumps the database from server 'A' and copies the dump file to server 'B'. I get an 'ACCESS DENIED' error in the following portion when it shells out to DOS:
I am trying to export a database from Access into SQL Server Express. The Access database is on the network and SQL Server Express is on my local machine.
Could someone please give me step-by-step instructions for exporting the data from the tables into my SQL Server Express instance?
Hi all, is there any query to move certain data from a SQL table to an Access table? I have a requirement where I have to fetch the records from a SQL table that fall within a specified range into an MS Access table. Is this possible through a query?
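It can be done from the SQL Server side by writing into the Access file through the Jet provider; a sketch assuming the provider is installed, ad hoc distributed queries are enabled, and the file, table, and column names below are placeholders:

-- C:\Data\Target.mdb, AccessTable, dbo.SourceTable, and the columns are placeholders.
INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                       'C:\Data\Target.mdb';'Admin';'',
                       'SELECT ID, Amount, CreatedDate FROM AccessTable')
SELECT ID, Amount, CreatedDate
FROM dbo.SourceTable
WHERE CreatedDate BETWEEN '20070101' AND '20071231';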
I have two databases (MYDB1 and MYDB2) on two different servers (SERVER1 and SERVER2). I want to create a stored procedure in MYDB1 on SERVER1 that gets some data from a table in MYDB2 on SERVER2. How can I do this?
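The usual approach is a linked server plus a four-part name; a sketch in which dbo.SomeTable and the procedure name are placeholders:

-- One-time setup on SERVER1: register SERVER2 as a linked server
-- (configure its security afterwards, e.g. with sp_addlinkedsrvlogin).
EXEC sp_addlinkedserver @server = 'SERVER2', @srvproduct = N'SQL Server';
GO

-- The procedure in MYDB1 on SERVER1 can then use a four-part name.
CREATE PROCEDURE dbo.GetRemoteData
AS
BEGIN
    SELECT *
    FROM SERVER2.MYDB2.dbo.SomeTable;   -- placeholder table
END
GO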
How can I load CSV file data into a SQL Server table? I know there are ways like BULK INSERT and others to load CSV data into a table, but in my case the table doesn't exist and has to be created at run time. With a simple insert into a temp table we do something like SELECT * INTO #temp FROM tablename, and that creates the temp table. So I need something like that which creates the table and loads the data into it, because the CSV file will have a different number of columns and different names each time, so I cannot create the table structure in advance; I have to create the table at run time.
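One way to get SELECT ... INTO behaviour for a CSV whose columns are not known in advance (a sketch, assuming the Jet text driver is available on the server, ad hoc distributed queries are enabled, and the folder/file names are placeholders):

-- C:\Imports\ and data.csv are placeholders; HDR=YES means the first row holds the column names.
SELECT *
INTO dbo.ImportedCsv            -- the table is created at run time with the CSV's own columns
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Text;Database=C:\Imports\;HDR=YES',
                'SELECT * FROM [data.csv]');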
Hi all, I copied the following code from the MSDN2 bcp Utility page: bcp AdventureWorks.Sales.Currency2 in Currency.dat -T -c and executed it in C:\Program Files\Microsoft SQL Server\90\Tools\Binn>. I got the following errors:
SQLState = 08001, NativeError = 2
Error = [Microsoft][SQL Native Client]Named Pipes Provider: Could not open a connection to SQL Server [2].
SQLState = HYT00, NativeError = 0
Error = [Microsoft][SQL Native Client]Login timeout expired
SQLState = 08001, NativeError = 2
Error = [Microsoft][SQL Native Client]An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
Please help and tell me what is wrong with this bcp execution or with the database, and how to get this bcp command working.
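Because no server is specified, bcp tries to connect to the default local instance; if yours is a named instance (SQL Server Express, for example, installs as .\SQLEXPRESS by default), passing it explicitly with -S is the first thing to try. The instance name below is an assumption:

bcp AdventureWorks.Sales.Currency2 in Currency.dat -T -c -S .\SQLEXPRESS

If it still fails, check that the SQL Server service is running and that the protocol bcp is using (Named Pipes or TCP/IP) is enabled in SQL Server Configuration Manager.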
I am trying to find a way to look at the size of each table in a database and the last time each table was updated/accessed by a user. I was just given control over a DB that is very badly maintained, and I want to look at what I can get rid of. I want to start deleting by size and last use. I can find creation dates for the tables and row counts, but not the total size or the last update/access of the tables. Does anyone know how to get this information? Thanks, Nathan
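Two building blocks that may help (a sketch: sp_spaceused works on any recent version, while the usage DMV needs SQL Server 2005 or later and only tracks activity since the last service restart):

-- Size of one table; dbo.SomeTable is a placeholder (loop over sys.tables to cover them all).
EXEC sp_spaceused 'dbo.SomeTable';

-- Last read/write activity per table since the instance last restarted (SQL Server 2005+).
SELECT OBJECT_NAME(object_id) AS TableName,
       last_user_seek, last_user_scan, last_user_lookup, last_user_update
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();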
I have a problem when copying data from one server to another in Management Studio. I need to create an exact copy of the original because of primary key relationships.
Currently when I export the data, it runs through an insert-type statement, which means that all the primary keys are reissued rather than being duplicated from the original. How can I be sure that the data will be copied exactly as it is from one server to the other?
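If the primary keys are IDENTITY columns, the usual fix is to turn on IDENTITY_INSERT for the destination table so the original key values are preserved; a sketch with a placeholder table name and a linked-server source:

-- dbo.Orders and SOURCESERVER.SourceDb are placeholders.
SET IDENTITY_INSERT dbo.Orders ON;

INSERT INTO dbo.Orders (OrderID, CustomerID, OrderDate)   -- an explicit column list is required
SELECT OrderID, CustomerID, OrderDate
FROM SOURCESERVER.SourceDb.dbo.Orders;

SET IDENTITY_INSERT dbo.Orders OFF;

The Import and Export Wizard exposes the same behaviour as the 'Enable identity insert' checkbox under Edit Mappings.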