I am new to the Windows world. We use Informatica on UNIX for our ETL process. We have a requirement to load approx. 200,000 rows into a MS SQL Server table. The table is not that big and it is a heap table (no indexes). Inserts are running at 69 rows per minute. We are using the DataDirect Connect 4.10 SQL Server ODBC driver.
SQL Profiler tells us that it is doing row-by-row processing using the sp_execute procedure.
Is there a way we can speed up the ODBC process?
-Thanks in advance
srv
SQL Server Version:
Microsoft SQL Server 2000 - 8.00.818 (Intel X86) May 31 2003 16:08:15 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
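A note on options here: if the extract can be landed as a flat file on the database server, a server-side bulk load bypasses the row-by-row sp_execute path entirely (it may also be worth checking whether the Informatica target can run in a bulk mode rather than normal, row-by-row mode). A minimal sketch; the table name, file path, and delimiters are assumptions:

BULK INSERT dbo.TargetTable
FROM 'C:\loads\extract.dat'
WITH (
    FIELDTERMINATOR = '|',  -- assumed column delimiter
    ROWTERMINATOR = '\n',
    TABLOCK,                -- table lock helps bulk loads into a heap
    BATCHSIZE = 50000       -- commit in chunks to keep the log manageable
)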
Hi list, I'm a long time lurker on this list and really enjoy the discussions, although I rarely get a chance to participate.
Here is my situation: we are importing chunks of data (500 records at a time) from a C++ interface. The records have to be transformed before being inserted into the target table, which I am doing with a stored proc that is working fine. The records are in memory in C++, and the programmer is looping through them, building inserts into a temp table through ADO (which my proc picks up). The server business object is using the Connection.Execute method, which inserts one record at a time, and that part of the process is taking over 15 seconds for 500 records, which is the bulk of the total time.
My question is: using ADO, is there a better way to insert these records into the temp table? I see mention of a Recordset interface, but my programmers are new to ADO, and since I am the DBA and have never used ADO, I am not sure what to tell them.
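One low-effort option, since the business object already calls Connection.Execute: have the C++ loop build all 500 inserts into a single command string (for example one INSERT ... SELECT ... UNION ALL statement) and execute it once, so the load makes one round trip instead of 500. A sketch of the batch shape, with hypothetical staging columns:

-- one statement inserting many rows (SQL Server 2000 era syntax)
INSERT INTO #Staging (Col1, Col2)
SELECT 'value1a', 'value1b' UNION ALL
SELECT 'value2a', 'value2b' UNION ALL
SELECT 'value3a', 'value3b'
-- ...one SELECT per record, appended in the C++ loop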
Hello all. I've got a problem with really slow INSERTs on one (and only one) of the tables in a database. For example, using SQL Management Studio, it takes 4 minutes and 48 seconds to insert 25 rows. There are only about 8 columns in the table and only about 1500 records. All the other tables in the database are very fast for inserts.
Another odd thing uniquely associated with INSERTs on this table: prior to inserting the 25 new rows of data, SQL Management Studio tells me that it inserted 463 rows of data, which I know did not happen. Here's the INSERT statement:
INSERT INTO FieldOps (StudySiteID, QA_StructureID, Notes, PersonID)
SELECT DISTINCT StudySiteKey, QA_StructureKey, SampleComments1, '25'
FROM ScriptOutput_Nitrate
WHERE (ScriptOutput_Nitrate.StudySiteKey IS NOT NULL)
and SQL Management Studio (eventually) says:

(463 row(s) affected)
(463 row(s) affected)
(25 row(s) affected)
The table has an index on the primary key (INT data type with auto increment). I tried running the following code to fix things but it made no difference:
USE [master]
GO
ALTER DATABASE [FieldData] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

USE FieldData
GO
DBCC CHECKTABLE ('FieldOps', REPAIR_REBUILD) WITH ALL_ERRORMSGS
GO

USE [master]
GO
ALTER DATABASE [FieldData] SET MULTI_USER WITH ROLLBACK IMMEDIATE
GO
I'm guessing that the problem might be related to the index (??). I don't know... Does anyone here have a suggestion as to what I should do to fix this problem?
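One thing worth checking before the index: extra "(463 row(s) affected)" messages from a plain INSERT usually mean a trigger on the table is doing additional work, and a slow trigger would also explain the 4 minutes 48 seconds. Assuming SQL Server 2005 or later (which Management Studio implies), this lists any triggers defined on the table:

SELECT t.name AS trigger_name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.FieldOps')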
I'm inserting 2 million+ records from a C# routine which starts out very fast and gradually slows down. Each insert is through a stored procedure with no transactions involved. If I stop and restart the process it immediately speeds up and then gradually slows down again. But closing and re-opening the connection every 10000 records didn't help. Stopping and restarting the process is obviously clearing up some resource on SQL Server (or DTC??), but what? How can I clean up that resource manually?
We are inserting into a table which includes an identity primary key column. When the table gets really large (i.e. 1.5 million records), the performance of the inserts degrades.
I noticed that when we insert into the table an exclusive lock on the table is obtained. Do inserts into tables with identities always lock the table?
Given the table size is unavoidable, does anyone have a suggestion to improve the performance?
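For what it's worth, an insert normally takes an intent-exclusive (IX) lock at the table level and exclusive locks only on the new rows/pages; a genuine exclusive table lock usually means lock escalation has kicked in. One way to see what a single insert actually holds, sketched against a hypothetical table:

BEGIN TRAN
INSERT INTO dbo.BigTable (SomeCol) VALUES ('x')
EXEC sp_lock @@SPID  -- a TAB lock in mode X suggests escalation; IX is the normal case
ROLLBACK TRAN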
The query below is being executed on a SQL 2000 box with 4 CPUs and 2GB RAM. testXX.DB_GRP.dbo.group1 is on a SQL 7 box with a single CPU and 512MB RAM. The result set is about 30,000 rows, and the whole process takes about 5 minutes to do the insert. Is there a way to optimize the query and bring down the execution time?
insert into testXX.DB_GRP.dbo.group1 select num, group_num,group_desc from group2 where id = 20
If we just run the select on its own:

select num, group_num, group_desc from group2 where id = 20

it takes 10 secs to execute, so I was wondering why it takes 5 minutes to do the insert across the network through the linked server query.
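One pattern that reportedly helps with four-part-name inserts, which the server can process row by row against the remote side, is to push the rows through OPENQUERY so the remote server receives them as a set. A sketch, assuming the same linked server and tables as above:

INSERT INTO OPENQUERY(testXX, 'SELECT num, group_num, group_desc FROM DB_GRP.dbo.group1')
SELECT num, group_num, group_desc FROM group2 WHERE id = 20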
I'm relatively new to compact framework and SQL Server Compact so bear with me if I've got an obvious thing I've forgotten.
I've written my own database helper layer. My idea is to generate SQL INSERT statements dynamically based on what is in the contents of each object. Performance, however, is horrible: I try to do 1800 inserts and it takes about 50 seconds on the device (Release build, outside of the IDE).
I pass in a list of objects to be inserted which derive from a ModelBase class. ModelBase includes some ORM information (which table the object goes into, and which fields are mapped to which properties). I generate one SqlCeCommand object and one SQL string (new params for each insert), and am using SqlCeResultSet.
What can I do to make this run faster? Thank you
Code Snippet

public bool Insert(List<ModelBase> recs)
{
    SqlCeConnection con = new SqlCeConnection(connectionString);
Our company records live sales into an SQL Server database. The same tables that store the sale information are also used by a reporting interface to query sales figures, but occasionally a SELECT generated by the reports is so slow that it causes the INSERT operations to time out and fail.
We've tried increasing the CommandTimeout property in .NET in the application performing the inserts, but despite it being set to 90 seconds, it's still not enough to prevent the occasional sale from failing to record, which is a big no-no for us. We've run the Database Tuning Advisor on the stored procedure that generates the SELECT for the reports, and it is now fully optimized, but still this isn't enough. The tables are quite massive (millions of rows) and the SELECT requires JOINs to other tables, some of which are in a separate database on the same server.
Is there a solution to this problem, aside from increasing the CommandTimeout property to the point where no timeout errors occur? My concern is that doing this could increase the number of concurrent connections and we'd hit another limit, so it's not really solving the problem. Is there a way to configure SQL Server to always favour INSERTs over SELECTs? The reporting users won't really care if the reports are slower but it's critical to get these INSERTs up to 100% reliability.
I'm not a DBA (just a developer) so I don't have an intricate knowledge of databases and this problem is a bit beyond my level of expertise. Obviously we don't want to re-design our entire system, but we'll do whatever necessary to ensure we aren't failing to record sales.
One idea I had, which may be awful (I don't know), is to submit these sales to an MSMQ and write some software that will read from the queue and insert the records from there instead. We could then deal with the timeout issue by just re-submitting the sale until it is accepted, then removing it from the queue.
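Before building the queue, it may be worth asking whether the timeouts come from the report SELECTs holding shared locks that block the INSERTs. If so, the READ_COMMITTED_SNAPSHOT option (SQL Server 2005 and later) lets readers work from row versions instead of shared locks, so reports stop blocking sales, at the cost of extra tempdb activity. The switch needs exclusive access to the database, so a quiet window is assumed; the database name here is hypothetical:

ALTER DATABASE SalesDB
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE  -- kicks out in-flight sessions; run during a maintenance window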
This managed application was written to run on a Symbol 3090 Win CE 5.0 scanning device. We are using the Symbol-provided classes to access the scanning interface and a SQL Compact database on the device to collect the scanned data, then using merge replication to synchronize the scanned data when the device is docked. The problem we have experienced seems to be related to performance when inserting and updating records in the database.
We have tested inserting/updating 1000 randomly generated records into a database. At first, the time to commit a record increases while the database is flushing to memory (the flush interval in the connection string property is 10 seconds by default), and then as the database grows, the time to commit every single record increases, which causes the application to perform slowly as items are scanned into the database. However, the device program memory remains constant as items are scanned. From our tests, I found that the time to execute either an update or insert command on a 2MB SQL Mobile database (up to 10000 records, depending on the size of the columns) is taking nearly 2 to 2.5 seconds to complete. Below is the only code I am executing.
We have installed the SQL 7.0 client to connect to SQL Server 7.0. After installing the client (which updated all the ODBC drivers), database connections, even ODBC calls using the MS Access drivers, became extremely slow. Any hints on where to look?
I am using SSIS 2014, with the .NET Framework version below, installed on Windows Server 2012 R2. I have installed my client's ODBC drivers (both 32-bit and 64-bit) on my production server and created ODBC system DSNs for 32-bit and 64-bit.
When I open SSIS 2014 and try to create the ODBC connection, I can see only the 32-bit system DSN connection; I can't see my 64-bit ODBC system DSN connection.
Microsoft Visual Studio 2012 Shell (Integrated) Version 11.0.50727.1 RTMREL
Microsoft .NET Framework Version 4.5.51650
SQL Server Integration Services
Microsoft SQL Server Integration Services Designer Version 12.0.1524.0
I also installed my client's ODBC drivers (32-bit and 64-bit) and created ODBC system DSNs on my local system, and when I open SSIS 2014 there I can see both ODBC system DSN (32-bit and 64-bit) connections from the SSIS ODBC connection.
I am using the .NET Framework version below on my local system, which runs Windows 7. I also have SSIS 2012 installed, and I can see both ODBC connections using 2012 as well on my local system.
Microsoft Visual Studio 2012 Shell (Integrated) Version 11.0.50727.1 RTMREL
Microsoft .NET Framework Version 4.5.50938
SQL Server Integration Services
Microsoft SQL Server Integration Services Designer Version 12.0.1524.0
Why can I not see the 64-bit ODBC system DSN connection from SSIS on my production server?
I am using VB.NET 2005 and set up an ODBC connection via ODBC.ODBCConnection to an MDB database. For that, I use the "Microsoft Access ODBC Driver (*.mdb)".
When I set up an ODBCCommand like "ALTER DATABASE..." or "CREATE TABLE..." and issue it with the com.ExecuteNonQuery() command, I get an error from the ODBC driver that a SQL statement has to begin with SELECT, INSERT, UPDATE or DELETE.
How can I use DDL statements via ODBC?
I would appreciate it if you could help me use ODBC for this - no OLE, no ADO.
I apologize if this is not the correct forum for this posting. Looking at the descriptions, it appeared to be the best choice.
I am running Windows XP Pro SP2. I have installed the SQL Native Client for XP. However, when I try to add a new data source through ODBC Connection Manager, SQL Native Client is not listed as an option. I have followed this procedure on three other systems with no problems. What would be causing the SQL Native Client to not show up in the list of available ODBC data sources?
Hi all, I am having trouble getting a linked Oracle 9 server in MS SQL Server 2005 Express to work properly. My machine is running Windows XP. The Microsoft and Oracle OLE DB providers have problems dealing with Oracle's numeric data type, so I decided to use Microsoft's OLE DB for ODBC provider and an Oracle ODBC source. When using the Microsoft ODBC for Oracle driver in my ODBC source I have inconsistent behavior. Sometimes my queries are processed properly, then other times I get the following error:

OLE DB provider "MSDASQL" for linked server "ODBCBEAST" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
OLE DB provider "MSDASQL" for linked server "ODBCBEAST" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
OLE DB provider "MSDASQL" for linked server "ODBCBEAST" returned message "[Microsoft][ODBC driver for Oracle][Oracle]".
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "ODBCBEAST".

I have no idea why sometimes I can connect to the linked server with no problems and why other times it performs like this. I'm not changing anything about the system that I can think of. When I use an Oracle client (PL/SQL) I have absolutely no problems connecting. TNSPING returns that the connection is good.

This is unacceptable, so I decided to try my luck with the Oracle 10g ODBC driver. However, when I use this and perform an OPENQUERY select against the linked server I get back only 11 rows, when I know that the database has over 100 rows (in fact, when using the Microsoft ODBC driver and it works, that's what I get). I figured maybe the buffer setting needed to be raised in the ODBC configuration, so I took it from 64000 to 600000 (a magnitude of 10), but I still get back only 11 rows. I'm at my wit's end. Any suggestions on resolving one or the other problem would be much appreciated. Thanks much.
Please share with me, if you know it, the version compatibility matrix of MS SQL Server, the ODBC driver (sqlsrv32.dll), the Driver Manager (odbc32.dll) and the ODBC API spec. For instance, how can I know which version of sqlsrv32.dll MS SQL Server 2000 can work with, which version of odbc32.dll a particular version of sqlsrv32.dll can work with, and which version of the ODBC API spec (e.g. 3.5) a certain version of sqlsrv32.dll/odbc32.dll conforms to?
Disk specs: IO/second = 130 per disk; speed = 15K RPM.
When I did a load test of inserting data into a table with four columns:

Col1 INT
Col2 VARCHAR(32)
Col3 VARCHAR(4000)
Col4 DATETIME

I could insert around 1044 rows per second, whereas I thought I could do a max of 520 inserts per second (130 * 4), because each disk can only take 130 IOs, which multiplied by 4 disks gives a theoretical limit of 520.
Also, how does Query Analyzer connect to the database server? Does it use ODBC?
I am doing a simple IO Test with the below script ...
Just wanted to keep things simple and check how many inserts I can do on a given SQL Server. I am running the script below from QA for 1 minute and then dividing the number of rows inserted by 60.
Will doing this give me approximate results?
Actually, the data files (.MDF) are sitting on a single drive whose manufacturer specs say it will handle 130 IOs per disk. With the script below I am getting around 147 inserts per second.
But my boss says that he is getting 2000 inserts per second on his laptop from a ... Am I missing something?
SET NOCOUNT ON
DECLARE @lnRowCnt INT
SELECT @lnRowCnt = 100000

WHILE @lnRowCnt > 0
BEGIN
    INSERT INTO CTMessages..Iotest
    SELECT @lnRowCnt, 'VENU', REPLICATE('V', 4000), 1000000

    SELECT @lnRowCnt = @lnRowCnt - 1
END
I'm using a SQLDataSource and trying to do two inserts into two different tables with one InsertCommand, but it's not working. Here's the code I'm trying to use. Do you see anything wrong with the syntax? I keep getting an error that says error near ',' but I can't figure out why. Thanks
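For what it's worth, SQL Server will run two inserts in one InsertCommand if they are written as one batch separated by semicolons; a comma between two complete statements is exactly the kind of thing that produces an "error near ','". A sketch with hypothetical tables and parameters:

INSERT INTO Table1 (ColA) VALUES (@ColA);
INSERT INTO Table2 (ColB) VALUES (@ColB);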
I need to add a check in the database to ensure that a user can only enter up to 20 entries to the database in a period of 10 minutes. Basically, to guard against people using scripts to add data to the database (instead of using a CAPTCHA on the front end), what we want to do is restrict a user to entering at most 20 transactions in 10 minutes. How do I handle this in SQL Server 2005?
What I figure to do is: right after I do an INSERT into the table, select the last 20 entries from that same table, calculate the total time it took to add those 20 transactions, and set the right flag. 1) How do I select the last 20 entries in a table? 2) How do I calculate the total time that elapsed between adding the first of those 20 records and the last? Thanks in advance.
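Sketches of both pieces, assuming a hypothetical Entries table with an identity column EntryID, a UserID, and a CreatedAt column defaulting to GETDATE():

-- 1) the user's last 20 entries
SELECT TOP 20 EntryID, CreatedAt
FROM dbo.Entries
WHERE UserID = @UserID
ORDER BY EntryID DESC

-- 2) seconds elapsed between the oldest and newest of those 20
SELECT DATEDIFF(second, MIN(CreatedAt), MAX(CreatedAt)) AS ElapsedSeconds
FROM (SELECT TOP 20 CreatedAt
      FROM dbo.Entries
      WHERE UserID = @UserID
      ORDER BY EntryID DESC) AS Last20

-- or flip the check around: refuse the insert when 20 or more rows
-- already fall inside the last 10 minutes
IF (SELECT COUNT(*)
    FROM dbo.Entries
    WHERE UserID = @UserID
      AND CreatedAt > DATEADD(minute, -10, GETDATE())) >= 20
    RAISERROR('Limit of 20 entries per 10 minutes reached.', 16, 1)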
G'day, I have a table with a primary key being a bigint and it's set to auto increment (or identity or whatever MS calls it). Is there any way I can get the ID number that will be assigned to the next insert before I insert it? I want to use that ID number within another field when inserted.
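There's no reliable way to know the value before the insert, because another connection can take it first. The usual pattern is to insert, capture SCOPE_IDENTITY(), and then fill the dependent field inside the same transaction; a sketch with hypothetical names:

BEGIN TRAN

INSERT INTO dbo.MyTable (SomeCol) VALUES ('x')

DECLARE @NewId BIGINT
SET @NewId = SCOPE_IDENTITY()  -- identity generated by the insert above, in this scope only

-- use the new id in the other field
UPDATE dbo.MyTable SET OtherField = @NewId WHERE Id = @NewId

COMMIT TRAN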
Hi, I'm trying to create a form where new names can be added to a database. The webform looks like this:

<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
Name: <asp:TextBox ID="newName" runat="server" />
<INPUT id="NewUserBtn" type="button" value="Create New User" name="NewUserBtn" runat="server" onServerClick="NewBtn_Click">
</form>

And the code behind looks like this:

Public Sub NewBtn_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles NewUserBtn.ServerClick
    Dim DS As DataSet
    Dim MyConnection As SqlConnection
    Dim MyCommand As SqlDataAdapter
    MyConnection = New SqlConnection("server=databaseserver;database=db;uid=uid;pwd=pwd")
    MyCommand = New SqlDataAdapter("insert into certifications (name) values ('" & newName.Text & "'); select * from certifications", MyConnection)
    DS = New DataSet
    MyCommand.Fill(DS, "Titles")
    Response.Redirect("WebForm1.aspx", True)
End Sub

When I try to insert one name it works. When I try to insert a second name, it overwrites the old one. Why is that? Thanks. James
Hey All, I was trying to use a typed dataset to create a very simple DAL. I found that the code generated for the INSERT statement includes an identity field the table has. That can obviously never work (unless identity_insert is set, which it is not). My question is whether it is possible to control this INSERT statement generation. Is there a property I am missing somewhere? My solution was to change the INSERT statement on the DataTableAdapter, but it seems awkward to have to do that. Thanks, Yuval
I have a number of columns with predefined character lengths, but users can input more from the GUI. I want to automatically truncate the input to the desired length and then insert or update the database. Right now it does not allow me to update or insert the values. Can I do this, and how? This is urgent.
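If silent truncation is acceptable, clipping the value with LEFT() to the column's defined length at write time does this; a sketch assuming a hypothetical varchar(32) column:

-- truncate to the column's length before the write
INSERT INTO dbo.MyTable (ShortCol)
VALUES (LEFT(@UserInput, 32))

UPDATE dbo.MyTable
SET ShortCol = LEFT(@UserInput, 32)
WHERE Id = @Id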
We have a 4 processor 350 MHz NT 4.0 SQL server. Currently we have an application that is inserting rows one at a time; each row insert is a separate transaction. We are averaging 2500 rows a second, with each row 56 bytes wide. The data and the log are on one string of RAID disk. We plan to get another controller and RAID string to separate the data and the log onto separate controllers. The developer is modifying the application to insert the data in blocks. What is the impact on the transaction log? He seems to think that by inserting rows in blocks there would be less data going into the transaction log. Why would this be so? Does anyone have any information on practical limits for inserts and log truncation with similar machine configurations? He would like to get to around 150,000 rows a second. Has anyone accomplished inserts at this rate? With what type of machine configuration?
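On the log question: each inserted row is logged either way, so blocks don't shrink the log volume much. The bigger win is that one transaction per row forces a synchronous log flush per row, while wrapping a block of inserts in a single transaction amortizes that flush across the block. A sketch of the shape, with a hypothetical table:

BEGIN TRAN
INSERT INTO dbo.Feed (Payload) VALUES ('row 1')
INSERT INTO dbo.Feed (Payload) VALUES ('row 2')
-- ...a few hundred rows per transaction...
COMMIT TRAN  -- one log flush for the whole block instead of one per row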
Hi, I have a small web application managing complaints. During multiuser testing we noticed that when complaints were added at "exactly" the same time, one complaint text seemed to be overwriting the other, and returning the current max value for the table ID as the current complaint number.
I tested in my development environment and was able to recreate it reasonably easily (1 go out of 3 recreated the issue). The ID column itself is an auto increment (primary key), so I can't think of a conceivable reason why one record should overwrite another. I should say that I am assuming the record is overwritten; perhaps there is a clash and one complaint is ignored by the database.
Hi, I have a procedure that I call on one database, and one of its steps is to write to a table on another database on the same server. The user exists in both databases, but I keep getting errors when I try to write to this second database. I know I can fix this by giving the user insert permissions on the table in the second database, but I do not want to do that for security purposes. Any other ideas on how to accomplish this?
I have a {date value} and a {frequency value}: 1 = yearly, 4 = quarterly, 12 = monthly.
I need to select an item, then check the frequency, and then do a loop insert based on the frequency:
if frequency = 1
    insert item, date into table where date = {date value}
elseif frequency = 4
    -- per item, insert 4 new entries
    insert item, date into table where date = {date value}
    insert item, date into table where date = {date value + quarter}
    insert item, date into table where date = {date value + 2 quarters}
    insert item, date into table where date = {date value + 3 quarters}
' below is how I can calculate quarterly values from a date in VB.NET;
' I just need to do the same within SQL
Dim QuarterLoop As Integer
For QuarterLoop = 0 To 3
    FormatDateTime(DateAdd("q", QuarterLoop, MyDate), DateFormat.LongDate)
Next
elseif frequency = 12
    -- per item, insert 12 new entries: loop from the date in 12 increments of 1 month
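The same shape ports to T-SQL with a WHILE loop and DATEADD, which is the equivalent of VB.NET's DateAdd("q", ...). A minimal sketch covering all three frequencies; the table, column, and variable names are hypothetical:

DECLARE @dateValue DATETIME, @frequency INT, @itemId INT, @i INT
SELECT @dateValue = '20070101', @frequency = 4, @itemId = 1, @i = 0

WHILE @i < @frequency
BEGIN
    INSERT INTO dbo.ItemSchedule (ItemId, ScheduleDate)  -- hypothetical table
    VALUES (@itemId,
            CASE @frequency
                WHEN 4  THEN DATEADD(quarter, @i, @dateValue)  -- quarterly
                WHEN 12 THEN DATEADD(month, @i, @dateValue)    -- monthly
                ELSE @dateValue                                -- yearly: loop runs once
            END)
    SET @i = @i + 1
END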
Currently I have huge amounts of data going into a table. I'm sending an XML doc and using OPENXML with a cursor to seed them.
The question I have is whether to let duplicate keyed rows bounce, check @@ERROR, and then do an update on the non-keyed field, or to do a select on the keyed field first and then do an insert or update based on the select's results.
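Between those two, a common third shape avoids both the error trapping and the extra select: try the UPDATE first and only INSERT when nothing was updated. A sketch with hypothetical names:

UPDATE dbo.Target
SET NonKeyCol = @NonKeyCol
WHERE KeyCol = @KeyCol

IF @@ROWCOUNT = 0
    INSERT INTO dbo.Target (KeyCol, NonKeyCol)
    VALUES (@KeyCol, @NonKeyCol)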
I have to create dynamic insert statements for a table. For example, there are DevTableA and ProdTableA tables. I wrote a SQL query to get the new records added to DevTableA that are not in ProdTableA. The result gives me a list of rows. These tables have columns 'LanguageID' and 'LText'.
The compare result has records only for LanguageID = 0. Once I see the compare result, I am supposed to create insert statements for LanguageID = 1, 2, 5 and 6 and update the LText for those languages. The LText for the other languages is in a spreadsheet.
Can anyone advise me how to create the insert statements from the compare result and add 4 more insert statements for LanguageID = 1, 2, 5 and 6 with their respective LText?
So far I thought I could create a #table. Looks like I need more than one # table.
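It may be possible to skip hand-writing the per-language statements altogether: stage the spreadsheet text in a second temp table and cross join it with the compare result, so the LanguageID 1, 2, 5 and 6 rows come out of one INSERT ... SELECT. A rough sketch only; the key column and table shapes are guesses since the real schema isn't shown:

CREATE TABLE #CompareResult (RecordKey INT, LanguageID INT, LText NVARCHAR(4000))
CREATE TABLE #LangText (LanguageID INT, LText NVARCHAR(4000))

-- load #CompareResult with the LanguageID = 0 rows from the compare query,
-- and #LangText with the LanguageID 1, 2, 5, 6 text from the spreadsheet

INSERT INTO ProdTableA (RecordKey, LanguageID, LText)
SELECT c.RecordKey, l.LanguageID, l.LText
FROM #CompareResult AS c
CROSS JOIN #LangText AS l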
I'm trying to perform a bulk insert as shown below. It's problematic because it's not updating the identity fields correctly and we're getting dups. I think, but I'm not sure, that ON UPDATE CASCADE would solve all this, because we wouldn't have to concern ourselves with even touching the identity fields, since they would be autogenerated. Can someone shed some light? I'm pretty confused.
CREATE PROCEDURE AddMiamirecords
AS
BEGIN TRANSACTION

--USERS
INSERT INTO [Undex_Production].[dbo].[USERS]
    ([LastName], [UserName], [EmailAddress], [Address1], [WorkPhone], [Company],
     [CompanyWebsite], [pword], [IsAdmin], [IsRestricted], [AdvertiserAccountID])
SELECT dbo.fn_ReplaceTags(convert(varchar(8000), Advertisername)),
    [AdvertiserEmail], [AdvertiserEmail], [AdvertiserAddress], [AdvertiserPhone],
    [AdvertiserCompany], [AdvertiserURL], [AccountNumber], '3', 0, [AccountNumber]
FROM Miami
WHERE not exists (select * from users where users.Username = miami.AdvertiserEmail)
    AND validAD = 1

--PROPERTY
INSERT INTO [Undex_Production].[dbo].[Property]
    ([ListDate], [CommunityName], [TowerName], [PhaseName], [Unit], [Address1],
     [City], [State], [Zip], [IsActive], [AdPrintId])
SELECT [FirstInsertDate], [PropertyBuilding], [PropertyStreetAddress],
    PropertyCity + ' ' + PropertyState + ' ' + PropertyZipCode as PhaseName,
    [PropertyUnitNumber], [PropertyStreetAddress], [PropertyCity], [PropertyState],
    [PropertyZipCode], '0', [AdPrintId]
FROM [Undex_Production].[dbo].[miami]
WHERE miami.AdvertiserEmail IS NOT NULL AND validAD = 1

--ITEM
INSERT INTO [Undex_Production].[dbo].[ITEM]
    ([SellerID], [Price], [StartDate], [EndDate], [HomePageFeatured], [Classified], [IsClosed])
SELECT USERS.UserID, miami.PropertyPrice, convert(datetime, miami.FirstInsertDate),
    dateadd(day, 30, miami.FirstInsertDate) as EndDate, 1,
    convert(int, AdNumber) as Classified, 0
FROM USERS RIGHT OUTER JOIN miami ON USERS.UserName = miami.AdvertiserEmail
WHERE validAD = 1

--PROPERTYITEM
INSERT INTO [Undex_Production].[dbo].[propertyItem] ([propertyId], [ItemId])
SELECT Property.propertyId, ITEM.ItemID
FROM ITEM RIGHT OUTER JOIN miami
    ON ITEM.StartDate = miami.FirstInsertDate
    AND ITEM.Price = miami.PropertyPrice
    AND ITEM.Classified = convert(int, miami.AdNumber)
LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
    AND miami.PropertyZipCode = Property.Zip
    AND miami.PropertyCity = Property.City
    AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--CONDOFEATURES
INSERT INTO [Undex_Production].[dbo].[CondoFeatures]
    (PropertyId, [Bedrooms], [Area], [PropertyDescription], [Bathrooms], [NumOfFloors])
SELECT Property.propertyId, [PropertyBedrooms], [PropertySquareFeet],
    dbo.fn_ReplaceTags(convert(varchar(8000), PropertyDescription)),
    [PropertyBathrooms], [PropertyTotalFloors]
FROM miami LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
    AND miami.PropertyZipCode = Property.Zip
    AND miami.PropertyCity = Property.City
    AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--COMMUNITY FEATURES
INSERT INTO [Undex_Production].[dbo].[CommunityFeatures] (PropertyId, [totalFloors], isComplete1)
SELECT Property.propertyId, miami.propertyTotalFloors, '0' as IsComplete
FROM miami LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
    AND miami.PropertyZipCode = Property.Zip
    AND miami.PropertyCity = Property.City
    AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--UNITDISCLOSURES
INSERT INTO [Undex_Production].[dbo].[UnitDisclosures] ([propertyId], [monthcondoasso])
SELECT Property.propertyId, [propertyassocfee]
FROM miami LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
    AND miami.PropertyZipCode = Property.Zip
    AND miami.PropertyCity = Property.City
    AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--BROKERDEVELOPER
INSERT INTO [Undex_Production].[dbo].[BrokerDeveloper]
    ([IsFSBO], [FSBOName], [FSBOEmail], [FSBOWebsite], [IsDeveloper], [DeveloperName],
     [DeveloperWebsite], [IsBroker], [BrokerName], [BrokerageWebsite], [propertyId],
     [brokercommission], [isComplete])
SELECT CASE AdvertiserType when 'FSBO' THEN 1 else 0 end,
    CASE AdvertiserType when 'FSBO' THEN [AdvertiserName] else NULL end,
    CASE AdvertiserType when 'FSBO' THEN [AdvertiserEmail] else NULL end,
    CASE AdvertiserType when 'FSBO' THEN [AdvertiserURL] else NULL end,
    CASE AdvertiserType when 'Developer' THEN 1 else 0 end,
    CASE AdvertiserType when 'Developer' THEN [AdvertiserName] else NULL end,
    CASE AdvertiserType when 'Developer' THEN [AdvertiserURL] else NULL end,
    CASE AdvertiserType when 'Realtor' THEN 1 when 'Broker' THEN 1 else 0 end,
    CASE AdvertiserType when 'Realtor' THEN [AdvertiserName] when 'Broker' THEN [AdvertiserName] else NULL end,
    CASE AdvertiserType when 'Realtor' THEN [AdvertiserURL] when 'Broker' THEN [AdvertiserName] else NULL end,
    Property.propertyId, [PropertyCommBroker], '0' as IsComplete
FROM miami LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
    AND miami.PropertyZipCode = Property.Zip
    AND miami.PropertyCity = Property.City
    AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
COMMIT TRANSACTION
GO
Is there any easy way I can take a select statement (such as select * from payments where datetime > '20071122') and output a SQL INSERT statement for those records?
I basically need to move a specific set of records from one SQL server to another (both SQL Server 2005). Any suggestions for the best way to do this?
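Two common ways to do this, sketched with a hypothetical linked server name and made-up payments columns:

-- 1) pull the rows across a linked server (run on the destination server)
INSERT INTO dbo.payments (paymentid, amount, paymentdate)
SELECT paymentid, amount, paymentdate
FROM SourceServer.SourceDB.dbo.payments
WHERE paymentdate > '20071122'

-- 2) or generate the INSERT statements as text and run the output on the other server
SELECT 'INSERT INTO payments (paymentid, amount, paymentdate) VALUES ('
     + CAST(paymentid AS VARCHAR(20)) + ', '
     + CAST(amount AS VARCHAR(20)) + ', '''
     + CONVERT(VARCHAR(23), paymentdate, 121) + ''')'
FROM dbo.payments
WHERE paymentdate > '20071122'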
Hello everybody,
Just a short question: I have tables which are only log tables (used very little for selects), but there is a lot of writing. I would like to have as much speed as possible when writing data into these tables.

create table [tbl] (
    [IDX] [numeric](18, 0) IDENTITY (1, 1) NOT NULL,
    [Time_Stamp] [datetime] NOT NULL,
    [Source] [varchar] (64) COLLATE Latin1_General_CI_AS NULL,
    [Type] [varchar] (16) COLLATE Latin1_General_CI_AS NULL,
    [MsgText] [varchar] (512) COLLATE Latin1_General_CI_AS NULL,
    CONSTRAINT [tbl] PRIMARY KEY NONCLUSTERED ([IDX]) ON [PRIMARY]
) ON [PRIMARY]
GO

Questions: Is it better for inserts to remove the PK but leave the identity? How do I make this table optimized for writing? If I set the fill level of the table to 0%, will I win much? One more piece of information: old data will be deleted from this table each night, depending on row count (the oldest IDs will be deleted).
Thank You in advance
Mateusz
According to the MySQL manual, multiple inserts can be sped up by locking the table before doing them and unlocking the table afterwards. Is the same true of multiple inserts in MS SQL Server? If so, how would the table be locked? Any insights would be appreciated - thanks!
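In SQL Server the closest equivalent is a table lock hint on the INSERT itself rather than separate lock/unlock statements: WITH (TABLOCK) holds the table lock for the duration of the statement, cutting per-row locking overhead on big batch inserts (and on a heap it can enable minimal logging in some versions and recovery models). A sketch with hypothetical tables:

INSERT INTO dbo.LogTarget WITH (TABLOCK) (Col1, Col2)
SELECT Col1, Col2
FROM dbo.Staging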