I have an existing SQL table that I want to import 300,000 rows into. I have copied the table headers into Excel and added all the rows, but I constantly get multiple errors no matter how I try to import it.
So far I have tried the Import wizard in SQL Server Management Studio, BULK INSERT, and a SQL script. What is the easiest way?
If I try this:

SELECT *
INTO Coupon
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'Excel 8.0;Database=D:\coupon.xls;HDR=YES',
    'SELECT * FROM [coupon$]')
I get
OLE DB provider 'Microsoft.Jet.OLEDB.4.0' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode.
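A commonly cited workaround for this particular error is to enable the Jet provider's "Allow inprocess" option so the provider runs inside SQL Server rather than as an out-of-process STA server. The sketch below uses sp_MSset_oledb_prop, which is undocumented, so treat it as an assumption to verify on your build (the same option can be set in SSMS under Server Objects > Linked Servers > Providers).

--Sketch: allow the Jet provider to run in-process
--(sp_MSset_oledb_prop is undocumented; verify before relying on it)
EXEC master.dbo.sp_MSset_oledb_prop
    N'Microsoft.Jet.OLEDB.4.0',
    N'AllowInProcess', 1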
I want to make a CSS-style solution for my reports. I have a few ideas about changing the report properties on the fly, but I am not sure which one I should go with.
- We know that we can put expressions on reports almost everywhere, and can also call custom code to set up those expressions, e.g.:
Public Shared Function GetValue(ByVal Key As String) As String
    'Requires a reference to System.Data (SqlClient)
    Dim myDataReader As SqlDataReader
    Dim mySqlConnection As SqlConnection
    Dim mySqlCommand As SqlCommand

    mySqlConnection = New SqlConnection("server=localhost;Trusted_Connection=yes;database=tempMIQB")
    'Parameterized to avoid injecting Key into the SQL string
    mySqlCommand = New SqlCommand("SELECT * FROM [Properties] WHERE PropertyName = @Key", mySqlConnection)
    mySqlCommand.Parameters.AddWithValue("@Key", Key)
    mySqlConnection.Open()
    myDataReader = mySqlCommand.ExecuteReader(CommandBehavior.CloseConnection)

    If myDataReader.Read() Then
        GetValue = myDataReader.Item("PropertyValue").ToString()
    Else
        GetValue = "None"
    End If

    'CommandBehavior.CloseConnection closes the connection with the reader
    myDataReader.Close()
End Function
and in a textbox, or anywhere else, I can say =Code.GetValue("BGColor"), or call the same thing from a .NET DLL.
To take this idea further I can create a .NET DLL and get the list from a database or an XML file. (Please give suggestions on the performance/scalability issues.)
- The second idea is to write a custom application (script) which goes through all the reports and changes the colors, fonts, and everything else.
- The third is to use parameters and the Array.IndexOf method to search for the value, but in this case I need to add a dataset to all reports, and I think we cannot access report properties (i.e. Parameters) from custom code just to make a generic function that reads a parameter value.
Does anyone know the best way to do mass updates in SQL Server? I am currently using the methodology suggested in this article:
http://www.tek-tips.com/faqs.cfm?fid=3141
But the article assumes that once I update a field it will have a value that is NOT NULL, so I can loop through and update the rows that have a NOT NULL value. My updated rows, however, do contain NULL values; in this case, what is the best way to go about it?
***************************************
Here is my code. I want to avoid using Upd_flag because after the following code runs I need to reset that flag before I run my next query.
***************************************
--Set rowcount to 50000 to limit the number of updates per batch
SET ROWCOUNT 50000

--Declare variable for row count
DECLARE @rc int
SET @rc = 50000

WHILE @rc = 50000
BEGIN
    BEGIN TRANSACTION

    --Use tablockx and holdlock to obtain and hold
    --an immediate exclusive table lock. This usually
    --speeds the update because only one lock is needed.
    UPDATE t_PGBA_DTL WITH (TABLOCKX, HOLDLOCK)
    SET t_PGBA_DTL.procedur = A.[Proc code],
        t_PGBA_DTL.Upd_flag = 1
    FROM t_PGBA_DTL
    INNER JOIN CPT_HCPCS_I9_PROC_CODES A
        ON t_PGBA_DTL.PROC_CD = A.[Proc code]
    WHERE t_PGBA_DTL.Upd_flag = 0

    --Get number of rows updated
    --Process will continue until fewer than 50000
    SELECT @rc = @@ROWCOUNT

    COMMIT TRANSACTION
END
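A possible alternative that avoids Upd_flag entirely is to batch on a key range instead of a flag. This is only a sketch: it assumes t_PGBA_DTL has a unique integer key, given the hypothetical name ID here. Because each row is visited exactly once by key range, it does not matter whether the value being written is NULL.

--Sketch: batch by key range so no flag column is needed
--(ID is a hypothetical unique integer key on t_PGBA_DTL)
DECLARE @minId int, @maxId int, @batchSize int
SET @batchSize = 50000
SELECT @minId = MIN(ID), @maxId = MAX(ID) FROM t_PGBA_DTL

WHILE @minId <= @maxId
BEGIN
    BEGIN TRANSACTION

    UPDATE t_PGBA_DTL
    SET procedur = A.[Proc code]
    FROM t_PGBA_DTL
    INNER JOIN CPT_HCPCS_I9_PROC_CODES A
        ON t_PGBA_DTL.PROC_CD = A.[Proc code]
    WHERE t_PGBA_DTL.ID >= @minId
      AND t_PGBA_DTL.ID < @minId + @batchSize

    COMMIT TRANSACTION

    SET @minId = @minId + @batchSize
END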
I am trying to create a calendar-style report that will have 12 months (as columns) and the store-opening listing in rows. I have created a matrix, but the problem I have is that the store openings display in the right period, yet they are not in any order. I would like to have the openings always on top, right under the header of the matrix; right now they are scattered randomly all over it. I have tried numerous ways of sorting, and none of them work.
I am attaching a sample of what I would like to accomplish (months are columns).
I have a Hits table that tracks the hits on the id of a Link table:

Hits:
    int linkId (foreign key to Link)
    datetime dateCreated
    varchar(50) ip

We recently had to merge Links from different systems that were implemented similarly. As a result, all the linkIds in the Hits table are now wrong because the ids all changed. I managed to track down all the old ids and their associated new ids, and I have them in a table that I call joined_links:

joined_links:
    int oldId
    int newId

So, how do I do a mass update of these linkIds in the Hits table in SQL? I know I could do it in .NET, but I'd rather not write an app that runs thousands of update statements. There's got to be a way to do something like this:

UPDATE Hits h
SET h.linkId = (SELECT newId FROM joined_links WHERE oldId = h.linkId)

but obviously I don't have visibility of that linkId in the subselect... a loop, maybe?
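For the record, SQL Server's UPDATE ... FROM syntax does this in one set-based statement; a minimal sketch using the tables described above (worth checking first that no oldId maps to more than one newId, since the join would then pick one arbitrarily):

UPDATE h
SET h.linkId = j.newId
FROM Hits h
INNER JOIN joined_links j
    ON h.linkId = j.oldId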
I have a series of DTS packages. Each package has 20 queries, and each query has a server name. Is there a way to change the server name without editing each query in each DTS package? I'd like to copy the template DTS package and then perform the modification.
This is a run-of-the-mill application that moves orders from one table to another. There are two tables, Ordersummary and HstOrders.

Ordersummary has the following columns:
    Identifier
    FollowupId
    OrderNumber
    OrderReference
    OrderReferenceOrigin
    ......

HstOrders has the following columns:
    Identifier
    OrderNumber
    OrderReference
    OrderType
    ......

The two tables are bound by Identifier. After each month end, orders are moved from Ordersummary to HstOrders.
Now my task is to update all rows in Ordersummary with the order details as seen in HstOrders for OrderType = 'CREDIT': OrderReferenceOrigin (in Ordersummary) should be updated with the value of OrderReference (from HstOrders).
I have to update each row at a time, and I need to write a cursor for mass updates where Ordersummary.Identifier = HstOrders.Identifier.
Can someone please help with writing this update statement, as I have never written a cursor?
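For what it's worth, this update does not actually need a cursor; a set-based sketch using the columns described above (assuming Identifier matches the two tables one-to-one):

UPDATE os
SET os.OrderReferenceOrigin = h.OrderReference
FROM Ordersummary os
INNER JOIN HstOrders h
    ON os.Identifier = h.Identifier
WHERE h.OrderType = 'CREDIT'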
I'm fairly new to SQL Server and I'm just wondering if it's possible to update statistics for all indexes somehow? I'm looking at the UPDATE STATISTICS command and it doesn't seem to be possible.
The situation we have is a reporting DB that basically has all its tables truncated and remade every night by some DTS jobs that import from another data source, change the data, build some denormalized tables, etc. Some of the large insert operations go from taking 8 minutes to taking several hours sometimes, and updating the stats seems to fix the problem for a while. So I'd like to make sure the optimizer has all the latest stats for all tables.
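Two common approaches, sketched below: sp_updatestats refreshes statistics for every table in the current database in one call, while the undocumented sp_MSforeachtable lets you run UPDATE STATISTICS per table with your own options (verify it on your version before relying on it).

--Refresh statistics for all tables in the current database
EXEC sp_updatestats

--Or per table, with a full scan (sp_MSforeachtable is undocumented)
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN'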
I have a user table with a single integer column: no indexes, no identity, nothing. I have to insert 600,000 rows via a client app. In tests, BCP, BULK INSERT, and DTS all run OK (sub 3 seconds). However, the app inserts 5,000 rows a second (considerably slower). I can mimic this slow performance within DTS by removing the 'Fast Load' and 'Batch' options.
Question: why would the SQL insert run slower on one server and as fast as BCP/BULK INSERT/DTS on another? What can I check on the 'slow' running server? Could there be a file version anomaly?
I have the following problem: I need to insert 100,000 records (50 KB each) in one operation from a VB program into a SQL Server 2005 database. All of these records will be ready for inserting at the same time. How do I make this insert one big transaction instead of 100,000 small ones?
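If the rows can be staged to a file first, BULK INSERT loads the whole file as a single batch, and therefore a single transaction, when no BATCHSIZE is specified; the table and file names below are hypothetical. From the VB side, System.Data.SqlClient.SqlBulkCopy with its default BatchSize of 0 behaves the same way from in-memory data.

--Sketch: with no BATCHSIZE option, the entire file is one batch,
--and therefore one transaction (names are hypothetical)
BULK INSERT dbo.TargetTable
FROM 'C:\data\records.dat'
WITH (TABLOCK)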
The company I work for changed names, and all email addresses within the company have changed. While it was OK for a while, they are no longer going to forward email from the old addresses to the new ones. There are tons of subscriptions and tons of email addresses that need to be changed to the new names.
If I could find the table with the TO: part of the subscription held in it, I could just run an update on that field and it would be solved... however, I cannot find that field...
So, without going into every subscription in Report Manager, how can I change the email addresses? Any suggestions?
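If memory serves, the TO: addresses live inside the ExtensionSettings XML of dbo.Subscriptions in the ReportServer catalog database. A sketch of a find-then-replace pass follows; the domain names are placeholders, and since editing the catalog directly is unsupported, back up ReportServer and inspect the SELECT results before running the UPDATE.

--Find subscriptions that still reference the old domain
SELECT SubscriptionID, ExtensionSettings
FROM ReportServer.dbo.Subscriptions
WHERE CAST(ExtensionSettings AS nvarchar(max)) LIKE '%@oldname.com%'

--Swap the old domain for the new one (placeholder domains)
UPDATE ReportServer.dbo.Subscriptions
SET ExtensionSettings = CAST(REPLACE(CAST(ExtensionSettings AS nvarchar(max)),
        '@oldname.com', '@newname.com') AS ntext)
WHERE CAST(ExtensionSettings AS nvarchar(max)) LIKE '%@oldname.com%'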
Most of the time my solutions consist of one or two packages, and config files work well.
Now I have a solution with about 50 packages that I have to move to QA.
However, a config file has the package ID, so even though they use the same data source connection, I would have to create and change 50 config files.
Data sources are kept in the package XML, so if I copy all the packages to QA and then change the global data source connection, I still have to open each package manually and save it again.
Surely there must be an easy way to move all 50 packages and have the connection point to QA. But config files and global data sources don't do the trick; what am I missing here?
A view named "Viw_Labour_Cost_By_Service_Order_No" has been created and can be run successfully on the server. I want to import the data the view draws into a table using the SQL Server Import and Export Wizard. However, when I run the wizard on the server, it gives me the following error message and stops on the step Setting Source Connection:
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Error)
Messages
Error 0xc020801c: Source - Viw_Labour_Cost_By_Service_Order_No [1]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0014019. There may be error messages posted before this with more information on why the AcquireConnection method call failed. (SQL Server Import and Export Wizard)
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)
- Setting Destination Connection (Stopped)
- Validating (Stopped)
- Prepare for Execute (Stopped)
- Pre-execute (Stopped)
- Executing (Stopped)
- Copying to [NAV_CSG].[dbo].[Report_Labour_Cost_By_Service_Order_No] (Stopped)
- Post-execute (Stopped)
Has anyone encountered this problem before, and do you know what is happening?
I want to back up my hourly transaction log backups directly to a mass storage unit as opposed to the local server. However, when trying to set this up, it only gives me the option of backing up to local drives, even though I have a drive mapping to the mass storage unit. I'm sure there is a simple way around this. I would appreciate any advice. Many thanks.
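One thing worth checking: the SQL Server service typically cannot see drive letters mapped in a user session, but it can back up straight to a UNC path, provided its service account has write permission on the share. A sketch with hypothetical server, share, and database names:

--Back up the log over UNC instead of a mapped drive
BACKUP LOG MyDatabase
TO DISK = '\\massstorage\sqlbackups\MyDatabase_log.trn'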
I was wondering if anyone knows of any way, including third-party tools, to replicate a design change on a table across many different databases. I have written an ASP script that allows me to copy multiple tables to multiple databases in one go, but I need something that will allow me to replicate the design of a table by comparing source and destination tables. So far I have scripted most of it, but I have no idea which system table holds the identity information, and it seems there must be an easier way!
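On the identity question specifically, COLUMNPROPERTY exposes it without querying system tables directly; a sketch where 'MyTable' is a placeholder:

--List the identity column of a table, if it has one
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
  AND COLUMNPROPERTY(OBJECT_ID(TABLE_SCHEMA + '.' + TABLE_NAME),
                     COLUMN_NAME, 'IsIdentity') = 1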
If I have a trigger on a field in a table and I update one record, the trigger fires properly. If I do an update to that same field on all records in the table, the trigger does not fire. Is the error in the trigger, or do I need to change my update statement?
I've been having problems with my tempdb filling up and causing all databases on the server to stop functioning properly. I've been removing a lot of data lately (millions of rows), and I think this is the reason why my tempdb log is going through an unusual load.
What's the best way to make sure the tempdb doesn't fill up and cause me major problems? I had temporarily turned off backups while I was having a new HD put in. Am I right in thinking that when a DB is backed up, the tempdb log is reduced in size? Should maintaining a daily backup solution help keep things under control?
Hi, I need to update a field in about 20 records in a table. The table has an update trigger (which updates the [lastedited] field whenever a record is updated). As a result I'm getting an error, "Subquery returned more than 1 value.", and the update fails. Is there a way in the stored procedure to handle this issue? Thanks for your help. Paul
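The usual culprit is a trigger body written for exactly one row (for example, assigning a SELECT against the inserted pseudo-table to a scalar). A set-based sketch that survives multi-row updates; MyTable, ID, and the trigger name are hypothetical, while [lastedited] is the column from the question:

--Join to the inserted pseudo-table instead of assuming one row
ALTER TRIGGER trg_MyTable_Update ON MyTable
AFTER UPDATE
AS
BEGIN
    UPDATE t
    SET t.lastedited = GETDATE()
    FROM MyTable t
    INNER JOIN inserted i ON t.ID = i.ID
END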
I am attempting to import data from Microsoft Access databases into SQL Server 2000 using the DTS Import/Export Wizard, and I am getting a few errors:

Error at Destination for Row number 1. Errors encountered so far in this task: 1.
Insert error column 152 ('ViewMentalTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 150 ('VRptTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 147 ('ViewAppTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 144 ('VPreTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 15 ('Time', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Invalid character value for cast specification. (this message appears five times)

Could you please look into this and guide me? Thanks in advance. Venkatesh (imtesh@gmail.com)
I am trying to simplify a query given to me by one of my colleagues, written using the Access query designer. Looking at the query there seem to be some syntax differences, so to check whether this was the case I thought I would import the database into my SQL Server Developer Edition.
I tried to start the wizard from within SQL Server Management Studio Express as shown in one of the articles on MSDN, which did not work, but the manual method that was also suggested did work.
The trouble is that it gets most of the way through the import before it produces the following error messages:

- Prepare for Execute (Error)
Messages
Error 0xc0202009: {332B4EB1-AF51-4FFF-A3C9-3AEE594FCB11}: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft JET Database Engine" Hresult: 0x80004005 Description: "Could not start session. Too many sessions already active.". (SQL Server Import and Export Wizard)
Error 0xc020801c: Data Flow Task: The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0202009. (SQL Server Import and Export Wizard)
Error 0xc004701a: Data Flow Task: component "Source 33 - ATable" (2065) failed the pre-execute phase and returned error code 0xC020801C. (SQL Server Import and Export Wizard).
There does not seem to be any way of specifying the number of sessions, so I don't see how to get around the problem.
Does anyone know how I can get the import to work?
I've deleted about 3-4 million rows from one of my tables, as the data was old and no longer needed. The problem is that queries now run extra slow. I am in the process of running Tara's isp_ALTER_INDEX; however, it's taking quite a long time and, as expected, seems to be slowing things down even further while it runs. (It had been running for 4 hours already; I have stopped it and will rerun it during a slower-traffic period for the DB server.)
Just wondering if I have the right approach here, or if anyone else has any suggestions.
Hello, I need to alter fields in all the tables of a given database, and I would like to do this via a T-SQL script. For example, I want to change all fields called SESSION_ID to char(6). The field is usually varchar(10), but the data is always 6 characters in length. I have several fixed-length fields that I want to move from varchar to char. I believe I can find all the tables where the field exists using

select * from INFORMATION_SCHEMA.columns where column_name = 'SESSION_Id'

but I don't know how to turn this into an ALTER TABLE ... ALTER COLUMN that can be automated. TIA, Rob
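One way to automate it is to have INFORMATION_SCHEMA generate the ALTER statements and then run the generated output. A sketch (the CASE keeps the original NULL/NOT NULL setting, which ALTER COLUMN would otherwise not preserve):

--Generate an ALTER TABLE statement for every SESSION_ID column
SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] ' +
       'ALTER COLUMN [' + COLUMN_NAME + '] char(6) ' +
       CASE IS_NULLABLE WHEN 'NO' THEN 'NOT NULL' ELSE 'NULL' END
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'SESSION_ID'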
Say you have an existing populated SQL Server 2005 database with 700+ tables, and you want to change the order of the columns inside every table. Short of manually building conversion scripts, does anyone know an automated way to do this? I was thinking through ways to do them all in one shot, and I have tools like Erwin and DbGhost that could also be used. Basically I am moving some standard audit columns from the end of the tables to just after the PK columns.
Hola! I'm currently building a site that uses an external database to store all the product details, and an internal database that acts as a cache so that we don't have to keep hitting the external database to retrieve the products every time a customer requests a list.

What I need to do is retrieve all these products from External and insert them into Internal if they don't exist; if they do already exist then I have to update Internal with new prices, number in stock, etc.

I was wondering if there was a way to insert/update these products en masse without looping through and building a new insert/update query for every product - there could be thousands at a time! Does anyone have any ideas, or could you point me in the right direction? I'm thinking that because I need to check whether the products exist in a different data store than the original source, I don't have a choice but to loop through them all.

Cheers, G.
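If the external rows can be staged into a table on the internal server first (via DTS/SSIS or bulk copy), the whole pass collapses into two set-based statements. A sketch with hypothetical table and column names:

--Update products that already exist in the internal cache
UPDATE p
SET p.Price = s.Price,
    p.NumberInStock = s.NumberInStock
FROM Products p
INNER JOIN StagedProducts s ON s.ProductId = p.ProductId

--Insert products that are not cached yet
INSERT INTO Products (ProductId, Price, NumberInStock)
SELECT s.ProductId, s.Price, s.NumberInStock
FROM StagedProducts s
WHERE NOT EXISTS (SELECT 1 FROM Products p WHERE p.ProductId = s.ProductId)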
" When connecting to the server with a pre-4.1 client program, use an account that still has a pre-4.1-style password. Reset the password to a pre-4.1 style for each who must use a pre-4.1 client program."
What is "pre-4.1-style password"? Can you give an example for a "pre-4.1-style password"?
MySQL allowed INSERT statements that looked like this:

INSERT INTO Table SET col1 = value1, col2 = value2, col3 = value3

Basically this closely matched the format of the UPDATE statement. Here is why this was good; I could do something like this:

if (RecordExists == false) {
    SQLquery = "INSERT INTO Table SET ";
} else {
    SQLquery = "UPDATE Table SET ";
}
SQLquery += "col1 = value1, ";
SQLquery += "col2 = value2, ";
SQLquery += "col3 = value3, ";
SQLquery += "col4 = value4 ";
if (RecordExists == true) {
    SQLquery += "WHERE id = " + ourID;
}

If I had a good 50 columns, this means I can reuse a sizable chunk of code for both INSERT and UPDATE statements (since both use the same general format), whereas if I have to use "INSERT INTO Table (columns) VALUES (values)" I'm looking at duplicating a lot of code. Does MS-SQL support something similar to "INSERT INTO ... SET"? How are others dealing with this? Thanks, Gabe
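As far as I know, T-SQL has no INSERT ... SET form. A common workaround keeps one code path per statement type and branches on existence first; a sketch with hypothetical table, column, and parameter names (parameters also avoid concatenating raw values into the SQL string):

IF EXISTS (SELECT 1 FROM MyTable WHERE id = @id)
    UPDATE MyTable
    SET col1 = @v1, col2 = @v2, col3 = @v3
    WHERE id = @id
ELSE
    INSERT INTO MyTable (id, col1, col2, col3)
    VALUES (@id, @v1, @v2, @v3)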
There is no style listed for dd.mm.yyyy in the SQL Server documentation. Is there any way to do such a conversion with this constant format of dd.mm.yyyy?
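Unless I'm misreading the CONVERT style table, style 104 (German) is exactly dd.mm.yyyy, in both directions:

SELECT CONVERT(varchar(10), GETDATE(), 104)    --datetime to 'dd.mm.yyyy'
SELECT CONVERT(datetime, '24.12.2006', 104)    --'dd.mm.yyyy' to datetime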
Hi! If you would like to spend a few moments on my code examples, please take a look - usage is free (-:

http://www.codeproject.com/useritems/TSQL_coding.asp

Feedback welcome. Greetings, Bjorn
Hello, I'm looking for a little input on this situation.I'm working on an inventory system and was thinking that I'd like tobuild it similiar to the way a bank keeps track of your funds.If you look at at bank statement, for each line item, you'll normallysee at least date/time, description, amount (plus or minus value), andbalance.In my inventory system, the amount and balance would be a count of theitems used or transferred instead of money.Does this sound reasonable? Or is there a better design? I am planningto connect the inventory system to a few different ordering interfaces,and I think it would work well if I can dump transactions into theinventory instead of incrementing and decrementing counts.The only problem I see with this, is if I want to implement the balancepart of it with each record....I can't think of a way to do that currently.I can certainly use a trigger in the database to retrieve the mostrecent transaction and update the balance field for one record insertedat a time, but if a list of transactions is dumped into the inventorysystem at once, I'd have to compensate for that in the trigger.Ideas?*** Sent via Developersdex http://www.developersdex.com ***Don't just participate in USENET...get rewarded for it!