I used code that imports data from an Excel file into a DataSet; now I want to insert the DataSet into a table in my database (a SQL Server database) using VB.NET code.
Could you help me?
Thanks in advance.
Here is my code:
Imports System.Data.OleDb

Partial Class _Default
    Inherits System.Web.UI.Page

    Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
        Dim connString As String = ConfigurationManager.ConnectionStrings("xls").ConnectionString
        ' Create the connection object
        Dim oledbConn As OleDbConnection = New OleDbConnection(connString)
        Try
            ' Open connection
            oledbConn.Open()
            ' Create OleDbCommand object and select data from worksheet Sheet1
            Dim cmd As OleDbCommand = New OleDbCommand("SELECT * FROM [Sheet1$]", oledbConn)
            ' Create new OleDbDataAdapter
            Dim oleda As OleDbDataAdapter = New OleDbDataAdapter()
            oleda.SelectCommand = cmd
            ' Create a DataSet which will hold the data extracted from the worksheet.
            Dim ds As Data.DataSet = New Data.DataSet()
            ' Fill the DataSet from the data extracted from the worksheet.
            oleda.Fill(ds, "Sheet1")
        Catch ex As Exception
            ' Handle or log the error as appropriate.
        Finally
            oledbConn.Close()
        End Try
    End Sub
End Class
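To get the filled DataSet into a SQL Server table, one option is SqlBulkCopy. This is a minimal sketch of one approach, not the only one: it assumes a second connection string named "sqlserver" in web.config and an existing destination table dbo.ImportedSheet1 whose columns match the worksheet (both names are placeholders). Add Imports System.Data.SqlClient at the top of the page and run this right after the oleda.Fill call:

        ' Sketch: bulk-copy the worksheet rows into SQL Server.
        ' Connection string name and destination table name are assumptions.
        Dim sqlConnString As String = ConfigurationManager.ConnectionStrings("sqlserver").ConnectionString
        Using sqlConn As New SqlConnection(sqlConnString)
            sqlConn.Open()
            Using bulkCopy As New SqlBulkCopy(sqlConn)
                ' The destination table must already exist with compatible columns.
                bulkCopy.DestinationTableName = "dbo.ImportedSheet1"
                ' Add bulkCopy.ColumnMappings entries here if the Excel headers
                ' differ from the table's column names.
                bulkCopy.WriteToServer(ds.Tables("Sheet1"))
            End Using
        End Using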
I have a small number of rows in a dataset, Table 1. There is a CLOB on a large dataset, Table 2. They join on a PK. I would like to retrieve this CLOB and add it to the data flow for Table1. In short I want to emulate the following:
Table 1: Small table without CLOB, 10 rows. Table 2: Large table with CLOB, 10,000,000 rows
select CLOB from table2 where pk in (select pk from table1)
I want this to return the CLOBs for the small number of rows in Table 1. The PK is indexed obviously so it should be a fast look up.
Table 1 and Table 2 live on different Oracle databases. How do I perform this operation efficiently in SSIS? It seems the Lookup and Merge Join won't do this.
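One approach worth trying (my assumption, not something stated in the post) is a Lookup component in no-cache or partial-cache mode, or an OLE DB Command, pointed at the Table 2 connection with a parameterized query, so only the handful of Table 1 rows flowing through the pipeline trigger indexed lookups on Table 2:

-- Sketch of the per-row lookup query for a no-cache Lookup or OLE DB Command;
-- the ? parameter is mapped to the PK column coming from Table 1 in the data flow.
-- clob_col is a placeholder for the actual CLOB column name.
SELECT clob_col
FROM   table2
WHERE  pk = ?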
I have a report with multiple datasets, the first of which pulls in data based on user-entered parameters (sales date range and property use codes). Dataset1 pulls property IDs and other sales data from a table (2014_COST) based on the user's parameters.
I have set up another table (AUDITS) that I would like to use in dataset6. This table has 3 columns (Property ID, Sales Price and Sales Date). I would like dataset6 to pull the property IDs that are NOT contained in the results from dataset1. In other words, I'd like the results of dataset6 to show me the property IDs that are contained in the AUDITS table but which are not being pulled into dataset1. Both tables are in the same database.
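A sketch of a query dataset6 could use, assuming it is driven by the same report parameters as Dataset1; the parameter names (@StartDate, @EndDate, @UseCodes) and the 2014_COST column names are placeholders for whatever the actual report uses:

-- Property IDs present in AUDITS but not returned by Dataset1's query.
SELECT a.PropertyID, a.SalesPrice, a.SalesDate
FROM   AUDITS a
WHERE  NOT EXISTS (
           SELECT 1
           FROM   [2014_COST] c
           WHERE  c.PropertyID = a.PropertyID
             AND  c.SalesDate BETWEEN @StartDate AND @EndDate
             AND  c.UseCode IN (@UseCodes)
       )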
I found out that the data I need for my SQL report is already defined in a dynamic dataset on another web service. Is there a way to use web services to call another web service to get the dataset I need to generate a report? Examples would help if you have any; thanks for looking.
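If the remote service exposes a method that returns a DataSet, one option (a sketch under that assumption; the service and method names are hypothetical) is to call it through a generated web reference from your own service or report-hosting code:

' Hypothetical proxy call: RemoteDataService and GetReportDataSet are placeholders
' for whatever the generated web reference to the other service exposes.
Dim svc As New RemoteDataService()
svc.Credentials = System.Net.CredentialCache.DefaultCredentials
Dim ds As DataSet = svc.GetReportDataSet(reportParameters)
' ds can now feed the report, or be returned from your own web method.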
Hi, I have a stored procedure attached below. It returns 2 rows in SQL Management Studio when I execute MyStorProc 0,28. But in my program, which uses ADOHelper, it returns a dataset with Tables.Count = 0. If I comment out the line If @Status = 0, then it returns the rows. Obviously it does not enter the If @Status = 0 branch even though I pass @Status = 0. What am I doing wrong? Any help is appreciated.
ALTER PROCEDURE [dbo].[MyStorProc]
(
@Status smallint,
@RowCount int = NULL,
@FacilityId numeric(10,0) = NULL,
@QueueID numeric (10,0)= NULL,
@VendorId numeric(10, 0) = NULL
)
AS
SET NOCOUNT ON
SET CONCAT_NULL_YIELDS_NULL OFF
If @Status = 0
BEGIN
    SELECT ......
END
If @Status = 1
BEGIN
    SELECT ......
END
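Not from the original post, but one thing worth ruling out is how ADOHelper builds and orders the parameters; calling the procedure directly with explicitly typed parameters shows whether the branch is taken. A minimal sketch (requires Imports System.Data and System.Data.SqlClient; the connection string is a placeholder):

' Sketch: call the procedure with @Status explicitly typed as smallint
' so there is no chance of an implicit-conversion or parameter-order issue.
Using conn As New SqlConnection(connectionString)
    Using cmd As New SqlCommand("dbo.MyStorProc", conn)
        cmd.CommandType = CommandType.StoredProcedure
        cmd.Parameters.Add("@Status", SqlDbType.SmallInt).Value = 0
        cmd.Parameters.Add("@RowCount", SqlDbType.Int).Value = 28
        Dim da As New SqlDataAdapter(cmd)
        Dim ds As New DataSet()
        da.Fill(ds)   ' ds.Tables.Count should be 1 if the If @Status = 0 branch runs
    End Using
End Using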
I'm trying to replicate two very big databases, each with about 10 million rows of 4,000 characters each. The publisher is SQL 2000, the subscriber is SQL 7.0.
The subscriber will also perform full text searches.
I'm trying to decide whether I should use PULL or PUSH. The publisher is operating on a very low-quality/low-speed internet connection, while the subscriber is enjoying a T1.
While trying to push a tracked table using RDA.push, I get the following error:
Error Code: 80004005
The message cannot be built. The make message failed.
Minor Err: 28581
Source: Microsoft SQL server 2005 Mobile Edition.
All other tables in the database are getting pulled and pushed correctly. This table is different only in the larger number of columns, around 150. It has a primary key, no other constraints.
Any help to find the reason for this error will be greatly appreciated.
I am developing an application in which I have to send data from a local SQL Server Compact Edition database (which is on a Windows Mobile device) to a central server (SQL Server 2005). I am using the RDA method for communication.
Can I use the Push method to send data from the local DB to the central DB?
Is it a must to use the Pull method before using the Push method?
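For what it's worth, Push only works on a local table that was first created by a tracked Pull, so yes, a Pull with tracking has to come before Push. A minimal sketch with SqlCeRemoteDataAccess (requires Imports System.Data.SqlServerCe; the URL, table name and connection strings are placeholders):

' Sketch: tracked Pull, local changes, then Push back to SQL Server 2005.
Dim rda As New SqlCeRemoteDataAccess()
rda.InternetUrl = "http://centralserver/sqlce/sqlcesa30.dll"
rda.LocalConnectionString = "Data Source=\My Documents\local.sdf"

' Pull with tracking so local inserts/updates/deletes can be pushed later.
rda.Pull("Orders", "SELECT * FROM Orders", _
         "Provider=SQLOLEDB;Data Source=CentralServer;Initial Catalog=MyDb;Integrated Security=SSPI", _
         RdaTrackOption.TrackingOn)

' ... the application makes changes to the local Orders table ...

' Push sends only the tracked changes back to the central database.
rda.Push("Orders", _
         "Provider=SQLOLEDB;Data Source=CentralServer;Initial Catalog=MyDb;Integrated Security=SSPI")

rda.Dispose()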
Hello all. Please excuse my ignorance, as this is not my territory. I administer a website which is hosted remotely. This site has SQL 7 running the data to dynamically build the site. Every Sunday our hosting service runs a DTS package to push the data they have down to us, so we can run reports and analyze it. We recently upgraded to SQL 2000, while our host has stayed with SQL 7. Now our DTS is failing. They say it is because 7 cannot push to 2000, but they think that we could pull from them. How do I go about setting that up? Will the DTS wizard walk me through most of it?
I need to copy a large amount of data from one table and insert it into another table.
The design of the destination table is exactly the same as the source table except for the fact that it has one extra field. Can I copy, in a single SQL statement, all rows in one table (that match given criteria) into another table, allowing for the extra field?
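Yes; a single INSERT ... SELECT can do it, with a constant or expression supplying the extra column. A sketch with placeholder table and column names:

-- Copy the matching rows and supply a value for the destination's extra column.
INSERT INTO dbo.DestinationTable (Col1, Col2, Col3, ExtraCol)
SELECT Col1, Col2, Col3, 'some value'   -- constant or expression for the extra field
FROM   dbo.SourceTable
WHERE  SomeCriteria = 1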
I have a production database that I would like to have copied over to a backup database on a separate server every evening. I don't want to mirror, I just want the databases synced up every evening.
The servers are physically attached through a gigabit switch and the database is relatively small, so I don't think that speed will be an issue.
Could someone point me to an article about the best way to accomplish this?
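One low-maintenance option is a nightly backup/restore pair run by SQL Agent jobs (log shipping or snapshot replication are the other usual candidates). A sketch of the T-SQL, with database names, file paths and logical file names as placeholders:

-- On the production server: full backup to a share the second server can read.
BACKUP DATABASE MyDb
TO DISK = N'\\BackupServer\Backups\MyDb.bak'
WITH INIT

-- On the backup server: restore over last night's copy.
RESTORE DATABASE MyDb
FROM DISK = N'\\BackupServer\Backups\MyDb.bak'
WITH REPLACE,
     MOVE 'MyDb_Data' TO N'D:\Data\MyDb.mdf',
     MOVE 'MyDb_Log'  TO N'D:\Data\MyDb_log.ldf'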
I did not see this one coming, and I am not sure if I did something wrong.
How do you push data from sql05 to sql2k?
I set up a data flow task, with one SQL05 connection manager and another SQL2K connection manager. Then when I tried to map them, I couldn't!
The message on the box said: The connection manager uses an earlier version of sql server provider. Bulk insert operations require a connection that uses a sql server 2005 provider.
I have been trying different sources, destinations and transformations, but it seems like I am missing something.
I am using the Pull command to pull two fields: one is the primary ID (int, non-identity) and the other is Description, which comes down as an ntext type. This works fine, but if I change the description and use the Push command I get the following error:
The Query processor could not produce a query from the optimizer because a query cannot update a text, ntext, or image column and a clustering key at the same time.
I am really stuck with this one, so if anyone can shed some light on it, it would be much appreciated.
        Catch RDAConnectionException As Exception
            MessageBox.Show("Can not push Header Data: " & RDAConnectionException.ToString, "Loading Key Tracker")
        Finally
            RemoteAccess.Dispose()
        End Try
        Cursor.Current = Cursors.Default
    End Sub
TABLE DEFINITION:

USE [MobileKeyDB]
GO
/****** Object: Table [dbo].[KeyHeader] Script Date: 07/11/2007 09:48:24 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[KeyHeader](
    [trans_id] [int] IDENTITY(1,1) NOT NULL,
    [user_id] [int] NOT NULL,
    [date_stamp] [datetime] NOT NULL,
    [signature] [nvarchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [status] [int] NOT NULL,
    [id] [int] NOT NULL,
    [date_stamp2] [datetime] NOT NULL,
    CONSTRAINT [PK_KeyHeader] PRIMARY KEY CLUSTERED
    (
        [trans_id] ASC
    ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
Does anyone know if you can add columns to a locally pulled table, and if so, can you use a select command to push the table back and exclude the added columns?
Basically I need to know if a record has been added on the PDA, so I can get an ID from the server before pushing it back. I cannot alter the SQL Server database because it is part of another application.
As I understand replication in Sql2K the only difference in push or pull subscriptions is where the agent runs. If I wanted changes in the publisher to be sent to the subscribers immediately after the change then I thought push would be better. But, if I am equally interested in changes made at the subscriber then what should I use? Or does the agent monitor both the publisher and subscriber at the same time?
I'm able to pull the metadata down from my SQL Server to my handheld for a table, and I can then add data to that table on the handheld. When I try to push it back to the SQL Server table I pulled from, I'm getting the following error message:
The Push method returned one or more error rows. See the specified error table. [ Error table name = ErrorTable ]
What does this mean, and how can I push my table back to the table I pulled from? I have TrackingOn set on the pull process.
After doing some research it seems like you can only push the same table once using rda.push -- is this correct? If yes, are there any other alternatives for saving changes to the table back to SQL Server aside from merge replication?
One idea I am toying with is to pull the tracked table with 0 records, save changes to the tracked table, push, then drop the table and pull again, repeating this process every time I push the data. Wondering if somebody has any advice?
I have two DataSets. One DataSet has old data from some other database; the second DataSet has the original data from a SQL Server 2005 database. Both have the same fields, with ID as a primary key. I want to transfer all the data from the first DataSet to the new DataSet while retaining the previous data, but if the old DataSet has the same ID (primary key) as the new one, then that row should not transfer; however, if the row for that ID has changed values, the fields should be updated with that data. How can I do that?
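A sketch of one way to do this in VB.NET, assuming both tables share the same name and column layout and that "id" is the key column (all names are placeholders). The requirement is a little ambiguous about which side should win on a matching ID, so adjust the Else branch to suit:

' Sketch: fold the old DataSet's rows into the new one.
' Rows whose ID is missing from the new table are added; rows whose ID
' already exists have their fields overwritten with the old row's values.
Dim newTable As DataTable = newDs.Tables("MyTable")
Dim oldTable As DataTable = oldDs.Tables("MyTable")
newTable.PrimaryKey = New DataColumn() {newTable.Columns("id")}

For Each oldRow As DataRow In oldTable.Rows
    Dim match As DataRow = newTable.Rows.Find(oldRow("id"))
    If match Is Nothing Then
        ' ID not present in the new DataSet: bring the old row across.
        newTable.ImportRow(oldRow)
    Else
        ' ID already exists: update the fields with the old row's values
        ' (swap the direction if the newer values should win instead).
        match.ItemArray = oldRow.ItemArray
    End If
Next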
D2 is a list of data. Each row in D2 has a classid. D2 may or may not have all the classids in D1, but all classids in D2 must be in D1.
I want to show fields from D2, group the data by the classids in D1, and show every group as a separate table. If no data in D2 is available for a classid, it should show an empty table.
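One way to get an empty group for classids with no data is to drive the dataset from D1 and LEFT JOIN to D2, so every classid appears even when D2 has nothing for it; a sketch with placeholder table and column names:

-- Every classid from D1 appears once per matching D2 row, or once with NULLs
-- when D2 has no rows for it, which renders as an empty table for that group.
SELECT d1.classid, d2.*
FROM   D1_Table d1
LEFT JOIN D2_Table d2
       ON d2.classid = d1.classid
ORDER BY d1.classid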
I am using merge replication with a push subscription type. I am wondering whether the updates of the tables on the subscriber side are pushed to the publisher by the subscriber or pulled from the subscriber by the publisher when the synchronization takes place. This makes a big difference for me and I can't find the answer to this question anywhere.
If anyone could answer, it would be really appreciated.
Looking for a faster method of moving data from SQL to Oracle.
I'm attempting to push a SQL table into an Oracle table (SQL Server 2000; Oracle 7, 8, and 9). I have no problem doing this with either the 'Oracle Provider for OLE DB' or the 'Microsoft OLE DB Provider for Oracle'. None of my data is being transformed, so it's a straight import. With the hardware I'm using it takes nearly 3 seconds to import 1,000 rows. While this isn't too bad, I need to import upwards of 4 million rows, and this results in unacceptable run times.
I do have an Oracle script that imports the CSV files of the tables, but I'm looking for an all-inclusive SQL solution.
Does anyone know of another method in SQL that I can use to push the data faster?
I have a report which is scheduled to run every Monday morning; it generates a PDF and writes it to a shared location. The shared location and schedule are defined through a subscription. All this is working fine so far. Now I need to provide a facility from the webpage to execute this scheduled task on demand. So there would be a button on the website which would actually run this scheduled task and update the PDF at the shared location. How do I run this scheduled task/subscription using VB.NET code? Is there anything in the ReportService2005 or ReportExecution2005 web service? Please advise.
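The ReportService2005 endpoint does have a FireEvent method that triggers an existing timed subscription on demand. A sketch in VB.NET, where ReportingService2005 is the proxy generated from a web reference to ReportService2005.asmx, subscriptionId is the ID of the existing subscription, and the URL is a placeholder:

' Sketch: fire the existing timed subscription from the button's click handler.
Dim rs As New ReportingService2005()
rs.Url = "http://yourreportserver/ReportServer/ReportService2005.asmx"
rs.Credentials = System.Net.CredentialCache.DefaultCredentials

' "TimedSubscription" is the event type; the event data is the subscription ID,
' which can be looked up once with rs.ListSubscriptions(...) and stored.
rs.FireEvent("TimedSubscription", subscriptionId)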