I have a database with a dozen or so tables. No table constraints. Logic is all in stored procedures.
I have several Excel spreadsheets of data to import into the database, one spreadsheet per table. Each spreadsheet has additional data (columns) that the target table has no interest in and that should be ignored.
I would appreciate your thoughts on methods and best practices for loading this data to the database.
I am about to investigate SQL Server 2005 Express's handling of XML. I am familiar with XML and XSL transformations, and it seems to me that XSL transformation of Excel data to XML gives me a lot of flexibility for shaping the data prior to database import.
In short, importing data to the database from an XML source.
I am not familiar with SQL Server's XML capability and would appreciate thoughts on this while I look into it.
And, of course, any alternate approaches I am overlooking.
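For reference, a minimal sketch of how SQL Server 2005 can shred an XML file into rows using the xml type's nodes()/value() methods; the file path, element names, and target table here are all hypothetical:

DECLARE @xml XML
SELECT @xml = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\import\catalogue.xml', SINGLE_BLOB) AS src

-- Shred one row per /Products/Product element (names are invented)
INSERT INTO dbo.Products (ProductName, Price)
SELECT x.item.value('(Name)[1]', 'varchar(100)'),
       x.item.value('(Price)[1]', 'money')
FROM @xml.nodes('/Products/Product') AS x(item)

OPENXML is the older alternative; the nodes() route avoids the sp_xml_preparedocument bookkeeping.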
Hi everybody. I'm having difficulties with a button handler I put together from a few examples around the internet. Here's my code:

<script runat="server">
    Sub Add_To_Cart(ByVal Src As Object, ByVal Args As EventArgs)
        Dim FVArtikel_id As Label = FormViewDisplay.FindControl("Artikel_idLabel")
        Dim FVArtikel_naam As Label = FormViewDisplay.FindControl("Artikel_naamLabel")
        Dim FVArtikel_prijs As Label = FormViewDisplay.FindControl("Artikel_prijsLabel")
        Dim DBConnection As OleDbConnection
        Dim DBCommand As OleDbCommand
        Dim SQLString As String
        Dim SQLAddString As String

        If Not Session("sid") Is Nothing Then
            DBConnection = New OleDbConnection( _
                "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                "Data Source=C:\Documents and Settings\Administrator\Bureaublad\2ehandslego.nl\data\db.mdb")
            DBConnection.Open()

            ' Use ? placeholders so the parameters below actually bind;
            ' OleDb ignores AddWithValue calls when values are concatenated into the SQL string.
            SQLString = "SELECT Count(*) FROM Orders WHERE sid = ? AND Artikel_id = ?"
            DBCommand = New OleDbCommand(SQLString, DBConnection)
            DBCommand.Parameters.AddWithValue("@sid", CType(Session("sid"), String))
            DBCommand.Parameters.AddWithValue("@Artikel_id", FVArtikel_id.Text)

            If CInt(DBCommand.ExecuteScalar()) = 0 Then
                SQLAddString = "INSERT INTO Orders (sid, Order_datum, " & _
                    "Artikel_id, Order_artikel, Order_prijs, Order_hoeveelheid) " & _
                    "VALUES (?, ?, ?, ?, ?, 1)"
                DBCommand = New OleDbCommand(SQLAddString, DBConnection)
                DBCommand.Parameters.AddWithValue("@sid", CType(Session("sid"), String))
                DBCommand.Parameters.AddWithValue("@Order_datum", Today)
                DBCommand.Parameters.AddWithValue("@Artikel_id", FVArtikel_id.Text)
                DBCommand.Parameters.AddWithValue("@Artikel_naam", FVArtikel_naam.Text)
                DBCommand.Parameters.AddWithValue("@Artikel_prijs", FVArtikel_prijs.Text)
                DBCommand.ExecuteNonQuery()
            End If

            DBConnection.Close()
            Src.Text = "Item Added"
            Src.ForeColor = Color.FromName("#990000")
            Src.BackColor = Color.FromName("#E0E0E0")
            Src.Font.Bold = True
        End If
    End Sub
</script>

I'm not getting any errors; it seems to me that I'm not getting a 'sid' value passed along, but I don't know what to do about it. I've also already tried step debugging. This is my last resort. I hope you can help me.
Hi -- I am fairly new to SSIS development, although I am starting to appreciate it more and more, especially since I have started getting into extending the object model. Here's my question:
I have a data flow that pulls data from any number of different delimited files with different numbers of columns. I have had no problem setting up run-time file locations and file names by using expressions on a flat file data source, and I have been able to deal with varying file delimiters fairly easily by standardizing the files before they enter the data flow. However, I have not been able to come up with a solution that allows my data source to discover its column information at run time and then pass that information on to the data flow task. All I really care about is having the flat file data source properly parse the individual rows into individual column data, because the data flow itself is able to discover the actual data that the columns hold at run time.
I would very much appreciate any feedback from anyone on possible solutions for this.
CREATE TRIGGER [tr_update_doc_count] ON [Documents] FOR INSERT AS
-- (variable declarations omitted)

-- I want to get the table reference from the inserted information:
SELECT @tableref = Table_id, @Account_id = Account_id FROM inserted

-- Then I get the actual table name from the UsercreatedTables table where it is stored:
SELECT @tablename = TableName FROM UsercreatedTables WHERE Table_id = @tableref

-- Now I want to use the @tablename I got from UsercreatedTables in the query:
SELECT @Current_Doc_Count = Account_Docs FROM @tablename WHERE Account_id = @Account_id
I am not sure how to get it to use the variable @tablename for the table name in the last query. It gives me an error stating that @tablename is invalid syntax.
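For reference, the usual workaround is dynamic SQL, since a table name cannot be a variable in a plain SELECT. A minimal sketch using sp_executesql, assuming the variable names from the question and INT types for the two values (an assumption):

-- Build the statement with the resolved table name; QUOTENAME guards against injection
DECLARE @sql NVARCHAR(4000)
SET @sql = N'SELECT @cnt = Account_Docs FROM ' + QUOTENAME(@tablename) +
           N' WHERE Account_id = @acct'

EXEC sp_executesql @sql,
     N'@cnt INT OUTPUT, @acct INT',
     @cnt  = @Current_Doc_Count OUTPUT,
     @acct = @Account_id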
For intranet development, our DBAs are asking web developers to use fixed domain NT ID accounts instead of SQL accounts to connect to backend databases in all web applications. We are running Windows XP workstations in a 2003 Active Directory topology.
I don't think this is a good idea (I could say more), but I have found very little to no information on this subject. So I ask you: what are your thoughts? Why or why not?
G'day gents. I run a select query against my MySQL database using ASP.NET. The query looks like this:

SELECT DISTINCT(rubbet.ortnamn) AS Bebyggelsenamn,
       COUNT(rubbet.ortnamn) AS 'Antal',
       MIN(rubbet.artal) AS Beläggsår,
       rubbet.belaggsform AS Beläggsform,
       rubbet.harad AS Härad,
       rubbet.socken AS Socken,
       rubbet.ovrigt AS Övrigt,
       rubbet.filnamn AS Filnamn,
       rubbet.path AS Sökväg
FROM rubbet
WHERE rubbet.ortnamn LIKE '%" + searchTextBox.Text + "%'
  AND rubbet.harad LIKE '" + DropDownList1.SelectedValue + "'
  AND rubbet.socken LIKE '" + DropDownList2.SelectedValue + "'
  AND rubbet.ovrigt LIKE '" + ovrigtRadioButtonList.SelectedValue + "'
GROUP BY Bebyggelsenamn
ORDER BY '" + sorteraDropDownList.SelectedValue + "' " + ordning + ";

The query works just fine and runs smoothly. I only have one problem. If a user types, for example, "monkey" in the searchTextBox, the database finds all the instances of monkey, counts them, and presents one line that states: Monkey, 138 instances, and so on.

My problem is with the MIN aggregate function. It selects the lowest rubbet.artal for Monkey, but I want rubbet.belaggsform to get its data from the same row that MIN(rubbet.artal) gets its data from. As it is now, rubbet.belaggsform selects its data from the row with the lowest primary key. For example, the result could look like: Holmagården, 127, , 1873 when it should say: Holmagården, 127, Holmagaard, 1873.

I'm only presenting the columns above; the rest of them are for creating links and whatnot. I guess I should write a subquery of some kind to pick the correct rubbet.belaggsform that corresponds to MIN(rubbet.artal) together with the other factors, but I just can't get it to work. Help me out here, please.
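For what it's worth, a minimal sketch of the correlated-subquery approach, with table and column names taken from the question (untested, and it assumes a MySQL version with subquery support):

SELECT r.ortnamn AS Bebyggelsenamn,
       COUNT(r.ortnamn) AS Antal,
       MIN(r.artal) AS Belaggsar,
       -- pull belaggsform from the row holding the earliest artal for this ortnamn
       (SELECT r2.belaggsform
          FROM rubbet r2
         WHERE r2.ortnamn = r.ortnamn
         ORDER BY r2.artal ASC
         LIMIT 1) AS Belaggsform
FROM rubbet r
GROUP BY r.ortnamn;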
A view named "Viw_Labour_Cost_By_Service_Order_No" has been created and runs successfully on the server. I want to import the data the view returns into a table using the SQL Server Import and Export Wizard. However, when I run the wizard on the server, it gives me the following error message and stops at the step Setting Source Connection:
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Error)
Messages:
Error 0xc020801c: Source - Viw_Labour_Cost_By_Service_Order_No [1]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0014019. There may be error messages posted before this with more information on why the AcquireConnection method call failed. (SQL Server Import and Export Wizard)
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)
- Setting Destination Connection (Stopped)
- Validating (Stopped)
- Prepare for Execute (Stopped)
- Pre-execute (Stopped)
- Executing (Stopped)
- Copying to [NAV_CSG].[dbo].[Report_Labour_Cost_By_Service_Order_No] (Stopped)
- Post-execute (Stopped)
Has anyone encountered this problem before, and does anyone know what is happening?
I am attempting to import data from Microsoft Access databases to SQL Server 2000 using the DTS Import/Export Wizard. I am getting a few errors.
Error at Destination for Row number 1. Errors encountered so far in this task: 1.
Insert error column 152 ('ViewMentalTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 150 ('VRptTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 147 ('ViewAppTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 144 ('VPreTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 15 ('Time', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Invalid character value for cast specification.
Invalid character value for cast specification.
Invalid character value for cast specification.
Invalid character value for cast specification.
Invalid character value for cast specification.
Could you please look into this and guide me? Thanks in advance. Venkatesh (imtesh@gmail.com)
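A pattern that often sidesteps DBTYPE_DBTIMESTAMP overflows (Access dates falling outside SQL Server's datetime range, e.g. years before 1753) is to land the suspect columns in a varchar staging table and convert defensively. A rough sketch, with table and column names invented for illustration:

-- Stage the suspect column as text first
CREATE TABLE dbo.Stage_Visits (VisitId INT, ViewMentalTime VARCHAR(30))

-- Convert only the values SQL Server can actually hold
INSERT INTO dbo.Visits (VisitId, ViewMentalTime)
SELECT VisitId,
       CASE WHEN ISDATE(ViewMentalTime) = 1
            THEN CONVERT(DATETIME, ViewMentalTime)
            ELSE NULL END
FROM dbo.Stage_Visits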
Hi, I'm in the process of designing a DB (a typical management-system DB: 2 transaction tables and about 5 look-up ones) for one of the departments in our company. The user wants this DB, and thus the client (forms, reports, etc.), solely for his department. However, I do expect that soon after deployment other users will want similar DBs and clients, so we will face the problem of integrating such DBs if company management wants to implement this as an enterprise DB. My question (maybe you could also tell me about other groups specialized in these kinds of issues): how should I create such a DB? Should I create an extra look-up table for departments, give each department its own ID, and link it to the main transaction table?

MTIA,
Grawsha
I'm still a database newbie, so I would like to solicit thoughts about the smartest way to do something in SQL Server.

My company has a web application that we customize for each client. We can do this because everything is database driven. We have database tables that contain our HTML, as well as some standard tables in each database. We have an in-house app that lets us tweak both of these things and creates a new web site and database tailored to each project. Each of these sites has a table that stores a schedule our clients use. The records in this schedule table change when information in other custom-generated tables changes.

My company currently uses a legacy FoxPro app to update the schedule table: the FoxPro app contacts SQL Server, reads a table with a list of tables and scheduling information to check, checks each of those items, and updates the schedule table. I would like to lose the FoxPro app.

At first thought, as a database newbie, putting triggers in each of the tables to update the schedule when something changes seems the way to go. However, since we change a part of the schema for each client (we have an app that generates the database tables unique to each client), I would like a scheme that does not involve having to create a different trigger for each new table. I would also like something that updates in real time; right now the FoxPro app is executed once a day.

I was thinking of making a large stored procedure and putting an identical call to that procedure in each table. Each table would have the same trigger in it that would get fired when a record was altered; it would call the stored procedure with relevant arguments to update the schedule.

Does this sound like a smart way to solve this problem, or am I not thinking "database enough"? Any thoughts are welcome. I would like to build a better solution.

Steve
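For what it's worth, a minimal sketch of the trigger-plus-shared-procedure idea described above, with invented names; each generated table gets the same thin trigger, and all the real logic stays in one procedure:

-- Shared procedure: the one place that knows how to refresh the schedule
CREATE PROCEDURE dbo.UpdateSchedule
    @SourceTable SYSNAME
AS
BEGIN
    -- (real logic would use @SourceTable to decide which schedule rows to refresh)
    UPDATE dbo.Schedule
       SET LastChanged = GETDATE()
     WHERE SourceTable = @SourceTable
END
GO

-- Identical thin trigger stamped onto each generated table (name hypothetical)
CREATE TRIGGER tr_ClientTable1_Touch ON dbo.ClientTable1
AFTER INSERT, UPDATE, DELETE
AS
    EXEC dbo.UpdateSchedule @SourceTable = 'ClientTable1'
GO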
I noticed that the current SQL CE driver does not offer support for the APM (Asynchronous Programming Model). Are there any plans to do this in the future? In light of the lack of APM functionality, does anyone have any ideas or thoughts on how async operations could be done, or if they are even needed in the context of applications that use SQL CE?
Hi. Situation: I have a web application running on a server. IT has told me that the application I developed is taking up an enormous amount of RAM, and that the amount of RAM taken up is slowly increasing, about 10 MB per hour (a memory leak). Furthermore, he has informed me that the RAM is being taken up by SQL. My senior developer told me that I probably have SqlConnections open that are not closed. I have checked through the application and it seems they are all closed properly, but I may have missed some, as the application is pretty big.

Questions:
1. Is there an application or piece of code that can monitor how many SQL connections I have open during runtime?
2. Could there be any other common causes of such a memory leak?

Any help with this matter would be very much appreciated. Thank you for your time. Sincerely, Jeff
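On question 1, one quick server-side check is to count open connections per database on the server itself; a sketch (sysprocesses is the SQL Server 2000/2005-era view):

SELECT DB_NAME(dbid) AS database_name,
       COUNT(*) AS open_connections
FROM master..sysprocesses
WHERE dbid > 0
GROUP BY dbid
ORDER BY open_connections DESC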
Hi, sorry if this is not the right place to post; any help or direction would be appreciated. I'm using SQL Server 2005.
I'm having trouble writing a query and could do with some help. I'll illustrate the problem with an example.
Say I have 2 tables.
Table 1 - tbl_news:

ID  NewsTitle      linkedDoc1  linkedDoc2
1   my news title  5           3

Table 2 - tbl_docs:
ID docTitle
1 doc1
2 doc2
3 doc3
4 doc4
5 doc5
Now, I'd like to write a query which returns basically the first table, but which gets the doc titles from table 2 using the IDs. How would I write this?
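A minimal sketch of one way to write it, joining tbl_docs twice under two aliases (LEFT JOINs so news rows survive a missing document):

SELECT n.ID,
       n.NewsTitle,
       d1.docTitle AS linkedDoc1Title,
       d2.docTitle AS linkedDoc2Title
FROM tbl_news AS n
LEFT JOIN tbl_docs AS d1 ON d1.ID = n.linkedDoc1
LEFT JOIN tbl_docs AS d2 ON d2.ID = n.linkedDoc2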
I was hoping to elicit some feedback on a trend I am seeing in the Portal market, and specifically with SharePoint development.
If you are not familiar with SharePoint, there is a data table abstraction within SharePoint called a "List". Lists are used for storing data (duh!). However, they are built using the SharePoint front end, and the data entered into all lists is stored in a few tables in the SharePoint content database.
What I am seeing is SharePoint gurus recommending AGAINST storing your relational data in database tables, and in SharePoint lists instead. I am not sold on this approach; it actually makes me think we are taking a step backwards with regard to persistent data storage and best practices.
- Lists cannot be natively related to one another; however, they support "lookups".
- Anyone can create a list... and repeat the same data all over the enterprise.
- Lists are maintained in two tables within the SharePoint content database using meta-data patterns.
- Portals contain a multitude of sites. Users and portal admins can create lists all over the place, thus spreading related data over a wide swath of the enterprise.
Is it just me, or are SharePoint pundits absolutely CRAZY to be recommending persistent data storage using lists? I see nothing but problems arising from this approach.
I apologize beforehand if you have not worked with SharePoint and Lists, as this post may not make much sense to you. ;)
Thought I should post in the newbie forum for a while, instead. :-)
I have a couple of scripts that I've generated that drop a couple of system stored procedures and recreate them. I'm not sure why I did it in the first place, but I think it was that it wouldn't let me run an ALTER statement on them. Specifically, I'm now looking at sp_add_operator. I changed it to a 500 character email field instead of whatever it was (100, I think.)
/* Explanation: Why did I do that? SQL Mail is prohibited here, so I'm using CDO_Sysmail to email myself and the developers if a job fails. The list of people to email is determined by the owner of the database, who is also an operator in SQL. I get the list of emails from the email field of the operator properties. Hence, I need a bigger email field. Yes, I now know it would most likely be better to create an ADMIN database on each server for this kind of stuff. (Thanks to Tara for that blogged suggestion.) */
While I will probably go back to the default stored procedure, this got me to thinking: when would it be better to use an ALTER statement on a SProc rather than to do a DROP and CREATE?
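For context, a minimal sketch of the practical difference: ALTER PROCEDURE keeps the object's existing permissions (and object id), while DROP and CREATE discard them, so every GRANT has to be reissued afterwards. Procedure, column, and role names here are hypothetical:

-- Permissions previously granted on the procedure survive this
ALTER PROCEDURE dbo.usp_GetOperators
AS
    SELECT name, email_address FROM dbo.Operators
GO

-- After a DROP/CREATE you would have to re-grant:
-- GRANT EXECUTE ON dbo.usp_GetOperators TO SomeRole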
Folks, I have a quick question that I would very much appreciate some feedback on. We are a not-for-profit charity organization that has decided to develop software in-house to manage our volunteers. We have SQL Server, and that makes the most sense as a database solution, but we have some issues surrounding the choice of the development language. Some have suggested 100% Java while others say Visual Basic. The head of our team has suggested we do it in ColdFusion, since this will be an internet-based application, and I would very much like some feedback on that choice. We have about 5 organizations that we will tie into this system, with about 5000 users logging in once per month. Any suggestions or comments would be greatly appreciated.

Cheers,
Wade
I just learned I can deploy and schedule jobs to run SSIS packages (via job/sqlagent) without the Integration Service (agent) itself actually running alongside (or on) the server. (Double-click on manifest, deploy IS package to server, create job/job step to run IS package, watch it run even when integration service is completely disabled)
Other than convenient viewing, configuring, and RMC running w/in SQL Server Mgmt Studio 2005, why then do I need the integration service running on a production box at all? When do I really need the IS service itself?
In our (finance) world only (a) an act of God or (b) a DBA can touch production databases/servers. Allowing anyone to connect to yet another service - in this case, an integration service - to meddle w/ a package would be a no no, so...
1) Could I trouble someone for a concrete, critical reason why the DBA should enable it on a production server? Speed? Caching? Peace of mind knowing everything is piled onto and neatly running on the server?
2) On a more minor note, if I'm deploying a package to be housed solely w/in MSDB, is there any way to prevent the prompt for a file location during deployment, i.e. the creation of an empty directory that would otherwise hold package dependencies if I were running it as a file?
We'd like to deploy only to MSDB (I know all the pros/cons w/r/t saving dtsxs to files v. msdb) and keep deployments clean (read: all in one place). DR is via SAN-to-SAN replication with, among other things, msdb cleanly getting replicated. We would very much like to avoid having to worry about (more) file/directories sitting out on a server share to be replicated to DR (it seems the default is to allow deployments to directories on the SQL server instance itself..ugh) Any architectural insights on this would be appreciated.
Hi, I'm a Jr. DBA and have been given an assignment by my lead to find information on the following. We are to migrate an existing db of size 4 TB to a DELL PowerEdge 2950 (Mem: up to 32 GB). OS: Windows Server 2003 Std Edition x64 SP2. DB: SQL Server Enterprise Edition x64.
I am to find out how to design the db to provide optimum performance and failover, and to take the db's growth into account.
1) What would be the recommended RAID settings?
2) Placement of the tempdb?
3) Should we do clustering, and why?
4) What would data partitioning do to help?
5) Any other aspects to be considered for sizing the db?
6) Placement of data files and log file on separate physical disks?
7) Indexing?
I have read many sites. I would appreciate it if someone could write suggestions and opinions based on their current db design spec or previous experience, selecting the best db design points (a small sketch for points 2 and 6 follows below). Thank you.
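On points 2 and 6, a minimal sketch of relocating tempdb and separating data and log files (drive letters and sizes are hypothetical; the tempdb move takes effect after a service restart):

-- Move tempdb to its own spindle (applies on next restart)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\templog.ldf');

-- New database with data and log on separate physical disks
CREATE DATABASE BigDb
ON PRIMARY (NAME = BigDb_data, FILENAME = 'D:\SQLData\BigDb.mdf', SIZE = 100GB)
LOG ON     (NAME = BigDb_log,  FILENAME = 'L:\SQLLogs\BigDb.ldf', SIZE = 20GB);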
I am not sure how to implement the following, but I believe it entails using DTS, and hopefully it is fine that I post it here b/c ultimately I will need this backend data for my frontend .aspx pages:
On a weekly basis, I need to IMPORT some data located on a remote Oracle DB into SQL Server 2000. Since there is so much data to transfer, I would like to transfer only the data that is new to the table since the last IMPORT, i.e. a week ago, and leave behind the OLD data.
Is DTS the correct way to go or do I have more control via DTS with STORED PROCEDURES? Does anyone have any good references for me?
On a similar note, once this Oracle data is IMPORTED into a certain table, I would like to EXPORT some of these NEWLY acquired rows matching certain criteria into another table for auditing purposes. For this scenario, should I implement a trigger on the first table?
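On the auditing part, a rough sketch of an AFTER INSERT trigger that copies newly arrived rows matching some criterion into an audit table; all names and the criterion are invented for illustration:

CREATE TRIGGER tr_ImportedOrders_Audit ON dbo.ImportedOrders
AFTER INSERT
AS
BEGIN
    -- Copy only the freshly inserted rows that meet the audit criterion
    INSERT INTO dbo.ImportedOrders_Audit (OrderId, Amount, LoadedAt)
    SELECT i.OrderId, i.Amount, GETDATE()
    FROM inserted AS i
    WHERE i.Amount > 10000
END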
I like the new gig a lot. Real busy, smart folks and I have been in high demand since 5 minutes after my butt hit the chair. I already have code in production.
Anyhow, we have a security situation on the SQL servers that I pointed out on my first day. So they want me to roll everything over to Windows Authentication and give the developers and report writers more restricted rights inside SQL Server. So they have NT groups for different kinds of users and all of that jazz, and I laid out the typical stuff about using NT groups vs. individual accounts and ease of admin vs. granularity of control. Well, the boss came back and said he wants ease of admin AND granularity of control over security. So, does anyone have any fresh thinking on turning my either/or into an AND?
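For what it's worth, a sketch of the usual compromise: NT groups keep login administration cheap, while database roles carry the granularity (SQL Server 2005 syntax; all names hypothetical):

-- One login per NT group: easy to administer
CREATE LOGIN [DOMAIN\Developers] FROM WINDOWS;

USE AppDb;
CREATE USER [DOMAIN\Developers] FOR LOGIN [DOMAIN\Developers];

-- Granularity lives in database roles, not in the logins
CREATE ROLE db_dev_reader;
GRANT SELECT ON SCHEMA::dbo TO db_dev_reader;
EXEC sp_addrolemember 'db_dev_reader', 'DOMAIN\Developers';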
I have an MS SQL 2000 database that failed to recover at computer restart. Now the database is marked suspect. How can I manage to recover the data? Thank you.
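One commonly cited sequence for a suspect SQL Server 2000 database, offered as a sketch only: copy the .mdf/.ldf files somewhere safe first, and restore from a known-good backup if one exists before attempting any repair.

-- Clear the suspect flag so the database can be inspected (SQL Server 2000)
EXEC sp_resetstatus 'MyDatabase'

-- After restarting SQL Server, assess the damage
DBCC CHECKDB ('MyDatabase')

-- Last resort only; REPAIR_ALLOW_DATA_LOSS can discard data
-- ALTER DATABASE MyDatabase SET SINGLE_USER
-- DBCC CHECKDB ('MyDatabase', REPAIR_ALLOW_DATA_LOSS)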
It's arriving from ebuyer tomorrow for 32 quid. I already have an instance of MSDE running on the laptop, and ideally I would like SQL Server to be installed to use the current instance and not install any additional services (hope I got my terminology correct here). Basically, all I would like is to use Enterprise Manager with my existing MSDE database. I was wondering if there is anything I should be doing before I install. Should I shut down the service or leave it running, for instance?

Thanks for any tips,
AndyB
SQL 2000 to SQL 2005 Upgrade Error - Database Down - Help Appreciated
I am upgrading a SQL 2000 Standard database server to SQL 2005 Standard on a Windows 2003 server.
I am logged in as domain admin and started the upgrade as 'sa'
The upgrade stops with the error:
Service MSSQLSERVICE cannot be started. Verify that you have sufficient privileges to start system services. The error code is (17185).
The error log shows:
2007-01-04 15:59:38.77 Server Authentication mode is MIXED.
2007-01-04 15:59:38.79 Server Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86)
Oct 14 2005 00:33:37
Copyright (c) 1988-2005 Microsoft Corporation
Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 1)
2007-01-04 15:59:38.79 Server (c) 2005 Microsoft Corporation.
2007-01-04 15:59:38.79 Server All rights reserved.
2007-01-04 15:59:38.79 Server Server process ID is 4188.
2007-01-04 15:59:38.79 Server Logging SQL Server messages in file 'F:\SQLData\MSSQL\log\ERRORLOG'.
2007-01-04 15:59:38.79 Server This instance of SQL Server last reported using a process ID of 2980 at 1/4/2007 3:56:58 PM (local) 1/4/2007 2:56:58 AM (UTC). This is an informational message only; no user action is required.
2007-01-04 15:59:38.79 Server Registry startup parameters:
2007-01-04 15:59:38.79 Server -d F:\SQLData\MSSQL\data\master.mdf
2007-01-04 15:59:38.79 Server -e F:\SQLData\MSSQL\log\ERRORLOG
2007-01-04 15:59:38.79 Server -l F:\SQLData\MSSQL\data\mastlog.ldf
2007-01-04 15:59:38.79 Server Command Line Startup Parameters:
2007-01-04 15:59:38.79 Server -s MSSQLSERVER
2007-01-04 15:59:38.79 Server SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2007-01-04 15:59:38.79 Server Detected 4 CPUs. This is an informational message; no user action is required.
2007-01-04 15:59:38.83 Server Set AWE Enabled to 1 in the configuration parameters to allow use of more memory.
2007-01-04 15:59:39.02 Server Using the static lock allocation specified in the locks configuration option. Allocated 20000 Lock blocks and 20000 Lock Owner blocks per node. This is an informational message only. No user action is required.
2007-01-04 15:59:39.02 Server Multinode configuration: node 0: CPU mask: 0x0000000a Active CPU mask: 0x0000000a. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2007-01-04 15:59:39.04 Server Multinode configuration: node 1: CPU mask: 0x00000005 Active CPU mask: 0x00000005. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2007-01-04 15:59:39.04 Server Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
2007-01-04 15:59:41.04 Server Attempting to recover in-doubt distributed transactions involving Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
2007-01-04 15:59:41.04 Server Database Mirroring Transport is disabled in the endpoint configuration.
2007-01-04 15:59:41.04 spid7s Starting up database 'master'.
2007-01-04 15:59:41.05 spid7s 1 transactions rolled forward in database 'master' (1). This is an informational message only. No user action is required.
2007-01-04 15:59:41.07 spid7s 0 transactions rolled back in database 'master' (1). This is an informational message only. No user action is required.
2007-01-04 15:59:41.07 spid7s Recovery is writing a checkpoint in database 'master' (1). This is an informational message only. No user action is required.
2007-01-04 15:59:41.08 spid7s SQL Trace ID 1 was started by login "sa".
2007-01-04 15:59:41.08 spid7s Starting up database 'mssqlsystemresource'.
2007-01-04 15:59:41.11 spid7s Using 'dbghelp.dll' version '4.0.5'
2007-01-04 15:59:41.11 spid7s ***Stack Dump being sent to F:\SQLData\MSSQL\LOG\SQLDump0035.txt
2007-01-04 15:59:41.11 spid7s SqlDumpExceptionHandler: Process 7 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
2007-01-04 15:59:41.27 spid7s Unable to update password policy.
2007-01-04 15:59:41.27 spid7s SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.
I would appreciate any thoughts/ideas on the following use case for the distributed Service Broker application we plan to migrate from our existing proprietary TCP-based message protocol, which uses database tables for reliability.
There are two SSB services running in separate SQL Server instances, each on a different server machine. For simplicity, let us assume the SSB endpoint names are SSBA and SSBB. SSBB is the Initiator of the dialog, while SSBA is the Target. Now the requirement is that if the underlying network communication between the two SSB endpoints (SSBA and SSBB) is broken, or if the critical service SSBB is down, then processing of any incoming message into SSBA's queue from a third Service Broker service (say SSBEXPR) running within a SqlExpress instance should be delayed until SSBB is alive and network communication between SSBA and SSBB is established. In our existing implementation (wherein SSBA, SSBB and SSBEXPR are Windows services) we use a combination of TCP socket disconnects and heartbeat messages between SSBA and SSBB to determine the health of the network connection and of the SSBB service.
Now, my understanding of how the underlying network connection for an SSB dialog works is that if there is no activity on a dialog for a certain amount of time, the underlying network connection is closed. Is there a way to specify that amount of time, say as an infinite value, and thus change this behavior? My other question is: how can one query the underlying network connection (i.e. a row from sys.dm_broker_connections) associated with a particular conversation? If none of this is possible, then any other patterns/ideas/approaches are welcome.
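For reference, a sketch of the two pieces mentioned above, hedged: a dialog can be given an explicit LIFETIME when begun (whether that influences the idle network connection is exactly the open question), and sys.dm_broker_connections can be inspected directly, though mapping one conversation to its connection row is not straightforward. Service and contract names are hypothetical:

-- Dialog with an explicit lifetime, in seconds (max is 2147483647)
DECLARE @h UNIQUEIDENTIFIER
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [SSBB_Service]
    TO SERVICE 'SSBA_Service'
    ON CONTRACT [SSB_Contract]
    WITH LIFETIME = 2147483647, ENCRYPTION = OFF

-- Inspect current Service Broker network connections
SELECT connection_id, state_desc, last_activity_time
FROM sys.dm_broker_connections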
I want to import data from an Excel sheet into a database. While reading from the Excel sheet, OleDb automatically guesses the data type of each column. My problem is the first column (A), which contains ~240 lines: 210 lines are numbers, the last 30 contain strings. When I use this code:
Dim sConn As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & conf_path_current & file_to_import & ";Extended Properties=""Excel 8.0;HDR=NO"""
Dim oConn As New OleDb.OleDbConnection(sConn)
Dim cmd1 As New System.Data.OleDb.OleDbCommand("Select * From [Table$]", oConn)
oConn.Open() ' the connection must be open before ExecuteReader
Dim rdr As OleDb.OleDbDataReader = cmd1.ExecuteReader()
Do While rdr.Read()
    Console.WriteLine(rdr.Item(0)) ' or rdr(0).ToString()
Loop
It reads the data fine until the string lines come. When using Item(0), it crashes trying to convert a DBNull to a String; when using rdr(0).ToString(), it just gives me no value.
So my question is: how do I tell OleDb that I want that column to be read completely as String/Varchar?
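For what it's worth, the usual answer is the IMEX=1 extended property, which tells the Jet driver to treat columns with intermixed types as text (subject to its registry-configured row sampling, TypeGuessRows). Shown here via T-SQL OPENROWSET for brevity; the same "Excel 8.0;HDR=NO;IMEX=1" string applies to the OleDb connection above (file path hypothetical):

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\import\Book1.xls;HDR=NO;IMEX=1',
                'SELECT * FROM [Table$]')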
Thanks for Reading
- Pierre from Berlin
[seems i got redirected into the wrong forum, please move into the correct one]
I'm moving data between identical tables and have to use a flat file as an intermediary. I thought: "No problem, SSIS can do a quick export to a file, then move the file to another server, then use SSIS to import the data to the new server."
Seems simple, right?
I'm hitting all sorts of surprising data conversion errors. I used the export wizard to create the export package. This works fine. However, using the same flat file definition, the import package fails, even when I have no destination; that is, I have just one data flow task that contains only one control: the Flat File source. When I run the package, the flat file definition fails with data type conversion and truncation errors. One of the obvious errors is for boolean types: the SQL field is a bit, SSIS defined the column as DT_BOOL, and the output values are the literal text "TRUE" and "FALSE". So SSIS converts a SQL data type of bit to "TRUE" and "FALSE" on export, but can't make the reverse conversion on import?
Does anyone else find this surprising? I would expect that what SSIS exports, it can import given all the same table and flat file definitions. Is SSIS the wrong tool to do such simple bulk copies? I'd like to avoid using BCP because this process will need to run automatically within SQL Agent so we can leverage all the error tracking and system monitoring.
I need to periodically import a (HUGE) table of data from an external data source (not SQL Server) into SQL Server, with the following scenarios:
1. Some of the records in the external data source do not yet exist in SQL Server.
2. Some of the records in the external data source may have a different value at different imports, but these records are identified unequivocally by the same primary key in the external data source and in SQL Server.
3. Some of the records in the external data source are the same as in SQL Server.
Due to the massive volume of the import, I would like to import only the records which are different from what I have in SQL Server (cases 1 and 2 above). In fact case 2 is the most critical.
I thought of making a query with a left outer join between the data in the external data source table (SOURCE) and the data in the SQL Server table (DESTIN). The join is done on the respective primary keys (composed keys of up to 10 columns) and one of the WHERE conditions will be that the value in SOURCE is different from the value in DESTIN.
The result of this query would be exactly what I need to import. How to do this in SSIS??? I couldn't figure out how to join tables in different data sources yet.
In fact, I cannot write a stored procedure to do that, since one of the sources is in a data source that is not SQL Server. I have seen the Lookup transformation in this article http://www.sqlis.com/default.aspx?311 but this is not exactly what I want to do. Another possibility is to use the Merge Join, but due to the sorting required I believe its performance would be terrible!
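Assuming the external rows can first be landed in a SQL Server staging table (an assumption; e.g. a plain SSIS load with no transformation), the delta described above could be written as a sketch like this, with invented names and a single-column key for brevity:

SELECT s.*
FROM dbo.Stage_Source AS s
LEFT OUTER JOIN dbo.Destin AS d
       ON d.KeyCol = s.KeyCol
WHERE d.KeyCol IS NULL           -- case 1: rows new to SQL Server
   OR d.ValueCol <> s.ValueCol   -- case 2: rows whose value changed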
It looks like these options are only available in SQL Server Management Studio? I installed SQL Server Management Studio Express and I can't even find DTSWizard.exe on my machine.
Can you please advise how I can import data from Excel, or where I can download SQL Server Management Studio?
I have one column in SQL Server 2005 of data type VARCHAR(4000).
I have imported SQL Server 2005 database data into an .mdb file. After importing the data into the .mdb file, the above column's data type was converted into the Memo type in the Access database.
Now, when I try to import the data from this MS Access file (db1.mdb) into another SQL Server 2005 database, I get a Unicode conversion error for the Memo data type in the Export/Import Data Wizard.
Could you please let me know the reason?
I know that the Memo data type is not supported in SQL Server 2005.
I am with SQL Server 2005 Standard Edition with SP2.
Please help me to understand this issue correctly.
I am trying to import data from an Excel sheet to a SQL database using OPENROWSET. After the import I found that all the cells containing data of more than 2000 characters got truncated to 255 characters. I looked for a solution and found that the data longer than 255 characters needs to appear in the first 8 rows of the Excel sheet; that worked for me too. But in the real scenario I cannot do that manual work on the Excel file. I tried a .NET utility and an SSIS package as well, but the truncation is still the issue.
INSERT INTO tmp_Test
SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=D:\Book1.xlsx', [Sheet1$])
I have data for an online catalogue in SQL 7.0. The web programmer asked me to add a unique key for reference. I used the int data type with an identity seed of 1 and an increment of 1. This works fine, BUT when I try to import new data I get an error, because the csv file has no such column and therefore no value for the unique field, which will not allow NULL by definition.
How can I maintain a unique field to act as primary key in my data when I want to add (and delete) data that doesn't have this field?
I tried adding a uniqueidentifier field, but this gives an error message.
The only workaround is to delete the unique field altogether, then add the new data, and afterwards create a new unique field. At 600,000+ lines of data, this is time- and memory-consuming.
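A common pattern, sketched with hypothetical names: keep the IDENTITY column on the target table, bulk load the csv into a staging table that has no identity column, then INSERT ... SELECT the named columns so the identity assigns itself:

-- Staging table mirrors the csv exactly (no identity column)
CREATE TABLE dbo.Catalogue_Stage (ItemName VARCHAR(100), Price MONEY)

BULK INSERT dbo.Catalogue_Stage
FROM 'C:\import\catalogue.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

-- The identity on dbo.Catalogue fills in automatically
INSERT INTO dbo.Catalogue (ItemName, Price)
SELECT ItemName, Price FROM dbo.Catalogue_Stage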