Hello everybody, just a short question: I have tables which are only log tables (very rarely used for selects), but there is a lot of writing. I would like to get as much speed as possible when writing data into these tables.

create table [tbl] (
    [IDX] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
    [Time_Stamp] [datetime] NOT NULL ,
    [Source] [varchar] (64) COLLATE Latin1_General_CI_AS NULL ,
    [Type] [varchar] (16) COLLATE Latin1_General_CI_AS NULL ,
    [MsgText] [varchar] (512) COLLATE Latin1_General_CI_AS NULL ,
    CONSTRAINT [tbl] PRIMARY KEY NONCLUSTERED ([IDX]) ON [PRIMARY]
) ON [PRIMARY]
GO

Questions: Is it better for inserts to remove the PK but keep the identity column? How can I optimize this table for writing? If I set the fill factor of the table to 0%, will I win much? One more piece of information: old data is deleted from this table depending on row count (the oldest IDs are deleted each night).

Thank you in advance,
Mateusz
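A minimal sketch of one common approach, assuming the table really is append-only: make the identity column the clustered primary key so new rows always land at the end of the table, avoiding page splits and the upkeep of a separate nonclustered index (fill factor matters little for a purely appending table). Names mirror the post:

-- Hedged sketch: a clustered PK on the identity column keeps inserts
-- appending to the end of the table, which is usually fastest.
create table [tbl_log] (
    [IDX] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
    [Time_Stamp] [datetime] NOT NULL ,
    [Source] [varchar] (64) COLLATE Latin1_General_CI_AS NULL ,
    [Type] [varchar] (16) COLLATE Latin1_General_CI_AS NULL ,
    [MsgText] [varchar] (512) COLLATE Latin1_General_CI_AS NULL ,
    CONSTRAINT [PK_tbl_log] PRIMARY KEY CLUSTERED ([IDX]) ON [PRIMARY]
) ON [PRIMARY]
GO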
We have a dynamic SP that, depending on the user name, will run a select tailored to them. One of the criteria is the number of rows retrieved, which we include using 'top @varNoOfRows' in the dynamically created select. As long as the number of rows is less than 130 the SP runs in less than a second, but if you increase that value to over 150 the SP times out. It is being run from ASP in this way: DBCon.execute(SQLQuery). The main table that we are querying contains about 1.5 million records and is un-indexed (eek - didn't realise that until I just checked) on SQL Server 2000. Does anyone have any pointers towards streamlining the SP - I can post it if you wish. Or can anyone explain how to use the execution plan to our advantage? I've already used it to change

SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like (SELECT LoginTable_T1ID from LoginTable where @username = LT_UserName))

to

SET @T1ID = (SELECT LT_T1ID from LoginTable where @username = LT_UserName)
SET @statement2 = (select T1_QueryPart from Table1 where T1_ID like @T1ID)

But would, say, a join be more time efficient? Any help would be appreciated.
John
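On the join question: a hedged sketch of what that lookup could look like as a join, assuming T1_ID and LT_T1ID match exactly (the post uses LIKE, so if the stored value is really a pattern, keep LIKE in the ON clause instead):

SET @statement2 = (SELECT t1.T1_QueryPart
                   FROM Table1 t1
                   INNER JOIN LoginTable lt ON t1.T1_ID = lt.LT_T1ID
                   WHERE lt.LT_UserName = @username)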
I have a forum topic that has comments. On the homepage, a widget shows the most recent 6 comments across all topics. Some of these topics have 7000+ comments. On the actual topic page, the comments are paged, 10 records per page. In the widget, if the user clicks on a comment, it should take them directly to the comment and the page it is on (the most recent comment is on the last page). So the link would be e.g. linktoforumtopic.aspx?p=177#commentID=999. To get the page number the comment is on, I would have to return all the comments (7000+), get the row index of the comment, and figure out what page it is on depending on the page size. This all works, however it is extremely slow. Can't think of a better way...

DECLARE @RowIndex decimal
DECLARE @PageIndex int
SET @RowIndex = (SELECT [RowIndex] FROM @Results WHERE CommonID = @BlogCommentID)
SET @PageIndex = 1
IF (@RowIndex > 10)
BEGIN
    SET @PageIndex = CEILING(@RowIndex / @PageSize)
END
SELECT @PageIndex AS PageIndex
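A hedged sketch of a cheaper approach on SQL 2005, assuming a hypothetical Comments table with TopicID, CommentID, and CreateDate columns: ROW_NUMBER() finds the one comment's position without materializing the 7000+ rows.

DECLARE @TopicID int, @BlogCommentID int, @PageSize int
SELECT @TopicID = 177, @BlogCommentID = 999, @PageSize = 10   -- placeholders

DECLARE @RowIndex int
SELECT @RowIndex = rn
FROM (SELECT CommentID,
             ROW_NUMBER() OVER (ORDER BY CreateDate ASC) AS rn
      FROM dbo.Comments
      WHERE TopicID = @TopicID) AS numbered
WHERE CommentID = @BlogCommentID

SELECT CEILING(@RowIndex * 1.0 / @PageSize) AS PageIndex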
I have a table with over 60 million rows (approx 20GB) which has an indexed column. I have tried using DBCC DBREINDEX to rebuild the index, but after kicking it off on a Friday, it is still running the following Wednesday. Since managers and other finicky types access this database, that's not acceptable (it slows down their reporting).
Is there a way to speed up the reindexing process? Perhaps by adding space to the tempdb (it's 500MB) or putting it in RAM temporarily? I haven't seen any articles that specifically state that TEMPDB is used during an index rebuild, but it seems logical that it would be.
Any suggestions to speed up the process would be most appreciated!
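One hedged avenue on SQL 2000: DBCC INDEXDEFRAG runs online in many small transactions, so it can be stopped and restarted without losing the work done so far, and it is gentler on reporting users than DBCC DBREINDEX, though slower on heavily fragmented indexes. Database, table, and index names here are placeholders:

-- Hedged sketch; substitute the real database, table, and index names.
DBCC INDEXDEFRAG (MyDatabase, MyBigTable, IX_MyBigTable_Col)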
Consider this SQL query:
-----------------------------------------------------------------
SELECT c.CASE_NBR, DATEDIFF(d, c.CREATE_DT, GETDATE()) AS Age,
    c.AFFD_RCVD, c.PRV_CRD_ISS, x.RegE, x.Type, x.Fraud,
    c.CUST_FN + ' ' + c.CUST_LN AS CustFullName,
    c.ATM_CKCD_NBR, x.TotalLoss, x.Queue, x.Status,
    c.QUEUE AS Expr1, x.CHECK_ACT_NBR, c.CUST_LN, c.SSN,
    c.CREATE_DT
FROM (SELECT TOP 9999999 cl.CASE_NBR, cl.SSN, cl.CREATE_DT,
        SUM(cast(TRANS_AMNT AS float)) AS TotalLoss,
        glQueue.REFN_NM AS Queue,
        glStatus.REFN_NM AS Status,
        grRegE.REFN_NM AS RegE, grType.REFN_NM AS Type,
        grFraud.REFN_NM AS Fraud, cl.CHECK_ACT_NBR
    FROM (((((T_CASE_LST AS cl LEFT JOIN
        T_INCIDENT_LST AS il ON cl.CASE_NBR = il.CASE_NBR)
        INNER JOIN T_GNRL_REFN AS glQueue
            ON cl.QUEUE = glQueue.REFN_NBR)
        INNER JOIN T_GNRL_REFN AS glStatus
            ON cl.STATUS_CD = glStatus.REFN_NBR)
        INNER JOIN T_GNRL_REFN AS grRegE
            ON cl.REGE_CD = grRegE.REFN_NBR)
        INNER JOIN T_GNRL_REFN AS grType
            ON cl.CASE_TYPE_CD = grType.REFN_NBR)
        INNER JOIN T_GNRL_REFN AS grFraud
            ON cl.FRAUD_CD = grFraud.REFN_NBR
    WHERE (((glQueue.REFN_DESC) = 'Queue')
        AND ((glStatus.REFN_DESC) = 'STATUS_CD')
        AND ((grRegE.REFN_DESC) = 'YesNo')
        AND ((grType.REFN_DESC) = 'Fraud_Code')
        AND ((cl.STATUS_CD) = 0))
    GROUP BY cl.CASE_NBR, glQueue.REFN_NM, glStatus.REFN_NM,
        grRegE.REFN_NM, grType.REFN_NM, grFraud.REFN_NM,
        grFraud.REFN_DESC, cl.CHECK_ACT_NBR,
        cl.SSN, cl.CREATE_DT
    HAVING (((grFraud.REFN_DESC) = 'YesNo'))) x
LEFT OUTER JOIN T_CASE_LST c ON x.CASE_NBR = c.CASE_NBR
-----------------------------------------------------------------
1. Is there anything that can be done to speed up the query?
2. In this part of the query: ... AND ((cl.STATUS_CD) = 0) ... the 0 is actually a variable passed in via a VB application. 0 means new cases, and normally returns around 4000 - 5000 records.
3. The SQL server, web server, and users are all in different states.
4. The time to return this query where cl.STATUS_CD = 0 is about 7 - 12 seconds.
5. Is this a reasonable time for this query? What can be done to improve it?
6. The SQL server is indexed on T_CASE_LST.STATUS_CD and T_INCIDENT_LST.CASE_NBR, but not on any field from T_GNRL_REFN, since T_GNRL_REFN is only a general lookup table and contains less than 50 records.
7. I've built the query as a stored procedure, and it works, though no measurable speed increase was obtained.
8. I have not attempted building a view to aid this, as I don't see that helping... or will it?
9. Well: any ideas?
10. I would gladly rewrite the SQL query if it could return the same data faster using another method.
11. Is there a way to accomplish the joins involving T_GNRL_REFN in another manner to make it quicker?
12. Is there a better way to add the values in T_INCIDENT_LST than ... SUM(cast(TRANS_AMNT AS float)) AS TotalLoss ... ?
13. I don't care if it's pretty, I just need it faster.
14. How can I get the summing of T_INCIDENT_LST.TRANS_AMNT without a derived table? I know that using the derived table is slowing it down some.
**** Any ideas ****
David
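For points 12 and 14, a hedged alternative is a correlated subquery for the sum, so the derived table and its GROUP BY can go away; this assumes TRANS_AMNT's declared type can be summed directly (if it is stored as a character type, the CAST is still needed). Skeleton only, with the lookup joins omitted:

SELECT cl.CASE_NBR,
       (SELECT SUM(il.TRANS_AMNT)
        FROM T_INCIDENT_LST AS il
        WHERE il.CASE_NBR = cl.CASE_NBR) AS TotalLoss
FROM T_CASE_LST AS cl
WHERE cl.STATUS_CD = 0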
Hello, can anyone offer any advice on this problem related to stored procedures?
The following 2 chunks of SQL illustrate the problem
--1
declare @lsFilt varchar(16)
select @lsFilt = 'fil%ter'
select * from sysobjects where name like @lsFilt
--2
declare @lsQuery varchar(128)
select @lsQuery = 'select * from sysobjects where name like ''fil%ter'''
exec (@lsQuery)
When I view the execution plan, the cost % breakdown is approx 82% / 18%. The second query does a bookmark lookup and an index seek, while the first, slow query does a clustered index seek and takes approx 5 times longer.
Now my real question: suppose I have a stored procedure to run a similar query. Should I be writing my SPs along the lines of
create proc SP2Style @psFilter varchar(16)
AS
declare @lsQuery varchar(128)
select @lsQuery = 'select * from sysobjects where name like ''' + @psFilter + ''''
exec (@lsQuery)
GO
instead of
create proc SP1Style @psFilter varchar(16)
AS
select * from sysobjects where name like @psFilter
GO
Is there another way to write similar stored procedures without using dynamic query building or EXEC, but keep the faster execution speed?
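One hedged option that keeps the query static: create the procedure WITH RECOMPILE, so SQL Server builds a plan for the actual @psFilter value on every call instead of reusing a generic plan built for an unknown parameter. The trade-off is paying the compilation cost on each execution.

create proc SP1StyleRecompile @psFilter varchar(16)
WITH RECOMPILE
AS
select * from sysobjects where name like @psFilter
GO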
Hello, We have some queries that are long and intensive. We have thought about running the queries and storing the data in a text file for lookup from our website.
Example: our online store only displays items that are in stock, so when a user selects a category a query runs and grabs only items that are in stock and then displays them. There could be thousands of items the query needs to sort through before displaying the items that are in stock. What if we ran this query once every hour and stored the results in a txt file? The ASP page would then go to the text file to grab the results instead of having to run the query every time a user selects a category. Will this speed up the site by not having to query every time? Would this be a correct way to eliminate queries that run thousands of times a day?
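A hedged variation that stays inside SQL Server rather than a text file: refresh a narrow cache table on a schedule (a SQL Agent job, say hourly) and point the ASP page at the cache. Table and column names here are made up for illustration:

-- Hedged sketch: rebuild the cache of in-stock items each hour.
TRUNCATE TABLE dbo.InStockCache

INSERT INTO dbo.InStockCache (ItemID, CategoryID, ItemName, Price)
SELECT i.ItemID, i.CategoryID, i.ItemName, i.Price
FROM dbo.Items i
WHERE i.QtyOnHand > 0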
Disk specs:
IO/second = 130 per disk
Speed = 15K RPM
When I did a load test of inserting data into a table with four Columns
Col1 INT
Col2 VARCHAR(32)
Col3 VARCHAR(4000)
Col4 DATETIME
I could do around 1044 inserts per second, whereas I thought I could do a max of 520 inserts (130 * 4), because each disk can only take 130 inserts, and multiplied by 4 disks that gives me a theoretical limit of 520.
Also, how does Query Analyzer connect to the database server? Does it use ODBC?
I am doing a simple IO Test with the below script ...
Just wanted to keep things simple and to check how many inserts I can do on a given SQL Server. I am running the below script from QA for 1 minute and then dividing the number of rows inserted by 60.
Will doing this give me approximate results?
Actually the data files (.MDF) are sitting on a single drive, where the manufacturer specs show that it will handle 130 IOs per disk. With the below script I am getting around 147 inserts per second.
But my boss says that he is getting 2000 inserts per second on his laptop from a ... Am I missing something?
DECLARE @lnRowCnt INT
SELECT @lnRowCnt = 100000

SET NOCOUNT ON
WHILE @lnRowCnt > 0
BEGIN
    INSERT INTO CTMessages..Iotest
    SELECT @lnRowCnt , 'VENU' , REPLICATE( 'V' , 4000 ) , 1000000

    SELECT @lnRowCnt = @lnRowCnt - 1
END
I'm using a SQLDataSource and trying to do two inserts into two different tables with one InsertCommand, but it's not working. Here's the code I'm trying to use. Do you see anything wrong with the syntax? I keep getting an error that says error near ',' but I can't figure out why. Thanks
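Since the markup itself isn't shown, here is a hedged guess at the usual cause: a single InsertCommand can hold two statements, but they must be separated by semicolons rather than a comma. A sketch of the command text, with hypothetical tables and columns:

INSERT INTO dbo.Orders (CustomerID, OrderDate) VALUES (@CustomerID, GETDATE());
INSERT INTO dbo.OrderAudit (CustomerID, Action) VALUES (@CustomerID, 'insert');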
I need to add a check in the database to ensure that a user can only enter up to 20 entries to the database in a period of 10 minutes. Basically, to guard against people using scripts to add data to the database (instead of using CAPTCHA on the front end), what we want to do is restrict a user to entering at most 20 transactions in 10 minutes. How do I handle this in SQL Server 2005?
What I figure to do is: right after I do an INSERT into the table, I select the last 20 entries from that same table, calculate the total time it took to add those 20 transactions, and set the right flag. 1) How do I select the last 20 entries from a table? 2) How do I calculate the total time that elapsed between adding the first of those 20 records and the last? Thanks in advance.
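A hedged sketch of both steps against a hypothetical Entries table with a CreatedAt datetime column. TOP 20 with ORDER BY ... DESC answers question 1; DATEDIFF over the min and max of those rows answers question 2. A simple COUNT over the last 10 minutes is often enough on its own:

DECLARE @UserID int
SET @UserID = 1   -- placeholder

-- 1) + 2): elapsed seconds across the user's last 20 entries
SELECT DATEDIFF(second, MIN(last20.CreatedAt), MAX(last20.CreatedAt)) AS ElapsedSeconds
FROM (SELECT TOP 20 CreatedAt
      FROM dbo.Entries
      WHERE UserID = @UserID
      ORDER BY CreatedAt DESC) AS last20

-- Simpler gate: refuse the insert when 20+ rows exist in the last 10 minutes.
IF (SELECT COUNT(*)
    FROM dbo.Entries
    WHERE UserID = @UserID
      AND CreatedAt > DATEADD(minute, -10, GETDATE())) >= 20
    RAISERROR('Limit of 20 entries per 10 minutes reached.', 16, 1)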
G'day, I have a table with a primary key being a bigint, and it's set to auto increment (or identity or whatever MS calls it). Is there any way I can get the ID number that will be assigned to the next insert, before I insert it? I want to use that ID number within another field when inserted.
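Reading the next identity value before inserting is race-prone (another connection can take it first). A hedged sketch of the usual pattern instead, with a hypothetical table: insert, capture SCOPE_IDENTITY(), then fill the other field in the same transaction.

BEGIN TRAN
    INSERT INTO dbo.MyTable (SomeCol) VALUES ('value')

    DECLARE @NewID bigint
    SET @NewID = SCOPE_IDENTITY()

    UPDATE dbo.MyTable SET OtherCol = @NewID WHERE ID = @NewID
COMMIT TRAN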
Hi, I'm trying to create a form where new names can be added to a database. The webform looks like this:

<body MS_POSITIONING="GridLayout">
  <form id="Form1" method="post" runat="server">
    Name: <asp:TextBox ID="newName" runat="server" />
    <INPUT id="NewUserBtn" type="button" value="Create New User" name="NewUserBtn" runat="server" onServerClick="NewBtn_Click">
  </form>

And the code behind looks like this:

Public Sub NewBtn_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles NewUserBtn.ServerClick
    Dim DS As DataSet
    Dim MyConnection As SqlConnection
    Dim MyCommand As SqlDataAdapter
    MyConnection = New SqlConnection("server=databaseserver;database=db;uid=uid;pwd=pwd")
    MyCommand = New SqlDataAdapter("insert into certifications (name) values ('" & newName.Text & "'); select * from certifications", MyConnection)
    DS = New DataSet
    MyCommand.Fill(DS, "Titles")
    Response.Redirect("WebForm1.aspx", True)
End Sub

When I try to insert one name it works. When I try to insert a second name, it overwrites the old one. Why is that? Thanks.
James
Hey all, I was trying to use a typed dataset to create a very simple DAL. I found that the code generated for the INSERT statement includes an identity field the table has. That can obviously never work (unless IDENTITY_INSERT is set, which it is not). My question is whether it is possible to control this INSERT statement generation. Is there a property I am missing somewhere? My solution was to change the INSERT statement on the DataTableAdapter, but it seems awkward to have to do that. Thanks, Yuval
I have a number of columns with predefined character lengths, but the user can input more from the GUI. I want to truncate automatically to the defined length and then insert or update the database. Right now it does not allow me to update or insert the values. Can I do it, and how? This is urgent.
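A hedged sketch of one way, assuming the write happens in a stored procedure: LEFT() trims the input to the column's defined width before the statement runs (the names and the width of 64 are placeholders):

INSERT INTO dbo.MyTable (ShortCol)
VALUES (LEFT(@UserInput, 64))

UPDATE dbo.MyTable
SET ShortCol = LEFT(@UserInput, 64)
WHERE ID = @ID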
We have a 4-processor 350 MHz NT 4.0 SQL Server. Currently we have an application that is inserting rows one at a time; each row insert is a separate transaction. Currently we are averaging 2500 rows a second, with each row 56 bytes wide. The data and the log are on one string of RAID disk. We plan to get another controller and RAID string to separate the data and the log onto separate controllers. The developer is modifying the application to insert the data in blocks. What is the impact on the transaction log? He seems to think that by inserting blocks of rows there would be less data going into the transaction log. Why would this be so? Does anyone have any information on practical limits for inserts and log truncation with similar machine configurations? He would like to get around 150,000 rows a second. Has anyone accomplished inserts at this rate? With what type of machine configuration?
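A hedged sketch of the block idea: each COMMIT forces a synchronous log flush, so committing once per block of rows replaces thousands of small log writes with one larger one (the rows themselves are still fully logged). Table and sizes are placeholders:

DECLARE @i int
SET @i = 0
BEGIN TRAN
WHILE @i < 1000
BEGIN
    INSERT INTO dbo.Feed (Payload) VALUES (REPLICATE('x', 56))
    SET @i = @i + 1
END
COMMIT TRAN   -- one log flush for the whole block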
Hi, I have a small web application managing complaints. During multiuser testing we noticed that when complaints were added at "exactly" the same time, one complaint text seemed to be overwriting the other, and the current max value of the table ID was returned as the current complaint number.
I tested in my development environment and was able to recreate it reasonably easily (1 go out of 3 recreated the issue). The ID column itself is an auto increment (primary key), so I can't think of a conceivable reason why one record should overwrite another. I should say that I am assuming the record is overwritten; perhaps there is a clash and one complaint is ignored by the database.
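One hedged guess worth checking: if the application derives the complaint number with something like SELECT MAX(ID) after inserting, two concurrent inserts can read the same value, and the second save then lands on the first complaint's number. Letting the identity assign the number and reading it back avoids the race (hypothetical names; @ComplaintText assumed supplied):

-- race-prone: SELECT @ComplaintNo = MAX(ID) FROM dbo.Complaints
-- safer:
INSERT INTO dbo.Complaints (ComplaintText) VALUES (@ComplaintText)
SELECT SCOPE_IDENTITY() AS ComplaintNumber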
Hi, I have a procedure that I call on one database, and one of its steps is to write to a table on another database on the same server. The user exists in both databases, but I keep getting errors when I try to write to this second database. I know I can fix this by giving the user INSERT permissions on the table in the second database, but I do not want this for security purposes. Any other ideas on how to accomplish this?
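If this is SQL Server 2005 or later, one hedged option is to run the procedure under its owner's security context so callers never need direct INSERT rights; note that crossing databases this way typically also requires the database to be marked TRUSTWORTHY or the module to be signed. (On SQL 2000, cross-database ownership chaining is the analogous mechanism.) Names are placeholders:

CREATE PROCEDURE dbo.WriteToOtherDb
WITH EXECUTE AS OWNER
AS
    INSERT INTO OtherDb.dbo.TargetTable (Col1) VALUES ('x')
GO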
I have a {date value} and a {frequency value}:
1 = yearly
4 = quarterly
12 = monthly

I need to select an item, check the frequency, and then do a loop insert based on the frequency.
if frequency = 1
insert item, date into table where date = {date value}
elseif frequency = 4
Per item -- insert 4 new entries:
insert item, date into table where date = {date value}
insert item, date into table where date = {date value + quarter}
insert item, date into table where date = {date value + 2 quarters}
insert item, date into table where date = {date value + 3 quarters}
' Below is how I calculate quarterly values from a date in VB .NET -
' I just need to do the same within SQL:
Dim QuarterLoop As Integer
For QuarterLoop = 0 To 3
    FormatDateTime(DateAdd("q", QuarterLoop, MyDate), DateFormat.LongDate)
Next
elseif frequency = 12 -- per item, insert 12 new entries:
insert 12 items into the table, looping from the date in 12 increments of 1 month
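A hedged T-SQL sketch of the whole loop, assuming a hypothetical Schedule table and that @ItemID, @dateValue, and @frequency arrive from the select described above; stepping 12, 3, or 1 months with DATEADD plays the role of the VB DateAdd("q", ...) call:

DECLARE @frequency int, @ItemID int, @dateValue datetime
-- @frequency (1, 4, or 12), @ItemID, and @dateValue assumed set by the caller

DECLARE @i int, @stepMonths int
SET @i = 0
SET @stepMonths = 12 / @frequency   -- months between entries: 12, 3, or 1

WHILE @i < @frequency
BEGIN
    INSERT INTO dbo.Schedule (ItemID, DueDate)   -- hypothetical table
    VALUES (@ItemID, DATEADD(month, @i * @stepMonths, @dateValue))
    SET @i = @i + 1
END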
Currently I have huge amounts of data going into a table. I'm sending an XML doc and using OPENXML with a cursor to seed them.
The question I have is whether to let duplicate-keyed data rows bounce, check @@ERROR, and then do an update on the non-keyed field, or to do a select on the keyed field and then do an insert or update based on the select's results.
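A hedged sketch of the select-first form, with hypothetical names; the UPDLOCK/HOLDLOCK hints keep two concurrent loaders from both deciding to insert between the check and the write:

BEGIN TRAN
    IF EXISTS (SELECT 1 FROM dbo.Target WITH (UPDLOCK, HOLDLOCK)
               WHERE KeyCol = @KeyValue)
        UPDATE dbo.Target SET NonKeyCol = @NewValue WHERE KeyCol = @KeyValue
    ELSE
        INSERT INTO dbo.Target (KeyCol, NonKeyCol) VALUES (@KeyValue, @NewValue)
COMMIT TRAN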
I have to create dynamic INSERT statements for a table. For example, there are DevTableA and ProdTableA tables. I wrote a SQL query to get the new records added to DevTableA that are not in ProdTableA. The result gives me a list of rows. These tables have columns 'LanguageID' and 'LText'.
The compare result has records only for LanguageID = 0. Once I see the compare result, I am supposed to create INSERT statements for LanguageID = 1, 2, 5 and 6 and update the LText for those languages. The LText for the other languages is in a spreadsheet.
Can anyone advise me how to create the INSERT statements from the compare result and add 4 more INSERT statements for LanguageID = 1, 2, 5 and 6 with their respective LText?
So far I thought I could create a #table. Looks like I need more than one # table.
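A hedged sketch of generating the INSERT text from a compare result assumed to sit in a temp table #CompareResult; the LanguageID 0 text is used as a placeholder until the spreadsheet values are loaded (for example, into a second temp table joined in instead of x.LText):

SELECT 'INSERT INTO ProdTableA (LanguageID, LText) VALUES ('
       + CAST(lang.LanguageID AS varchar(10)) + ', '''
       + REPLACE(x.LText, '''', '''''') + ''')'
FROM #CompareResult AS x
CROSS JOIN (SELECT 1 AS LanguageID UNION ALL SELECT 2
            UNION ALL SELECT 5 UNION ALL SELECT 6) AS lang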
I'm trying to perform a bulk insert as shown below. It's problematic b/c it's not updating the identity fields correctly and we're getting dups. I think, but I'm not sure, that ON UPDATE CASCADE would solve all this, b/c we wouldn't have to concern ourselves with even touching the identity fields, b/c they would be autogenerated. Can someone shed some light? I'm pretty confused.
CREATE PROCEDURE AddMiamirecords AS
BEGIN TRANSACTION
--USERS
INSERT INTO [Undex_Production].[dbo].[USERS]
    ([LastName], [UserName], [EmailAddress], [Address1], [WorkPhone],
     [Company], [CompanyWebsite], [pword], [IsAdmin], [IsRestricted], [AdvertiserAccountID])
SELECT dbo.fn_ReplaceTags(convert(varchar(8000), Advertisername)),
    [AdvertiserEmail], [AdvertiserEmail], [AdvertiserAddress], [AdvertiserPhone],
    [AdvertiserCompany], [AdvertiserURL], [AccountNumber], '3', 0, [AccountNumber]
FROM Miami
WHERE not exists (select * from users where users.Username = miami.AdvertiserEmail)
    AND validAD = 1
--PROPERTY
INSERT INTO [Undex_Production].[dbo].[Property]
    ([ListDate], [CommunityName], [TowerName], [PhaseName], [Unit],
     [Address1], [City], [State], [Zip], [IsActive], [AdPrintId])
SELECT [FirstInsertDate], [PropertyBuilding], [PropertyStreetAddress],
    PropertyCity + ' ' + PropertyState + ' ' + PropertyZipCode as PhaseName,
    [PropertyUnitNumber], [PropertyStreetAddress], [PropertyCity],
    [PropertyState], [PropertyZipCode], '0', [AdPrintId]
FROM [Undex_Production].[dbo].[miami]
WHERE miami.AdvertiserEmail IS NOT NULL AND validAD = 1
--ITEM
INSERT INTO [Undex_Production].[dbo].[ITEM]
    ([SellerID], [Price], [StartDate], [EndDate],
     [HomePageFeatured], [Classified], [IsClosed])
SELECT USERS.UserID, miami.PropertyPrice,
    convert(datetime, miami.FirstInsertDate),
    dateadd(day, 30, miami.FirstInsertDate) as EndDate,
    1, convert(int, AdNumber) as Classified, 0
FROM USERS RIGHT OUTER JOIN miami ON USERS.UserName = miami.AdvertiserEmail
WHERE validAD = 1
--PROPERTYITEM
INSERT INTO [Undex_Production].[dbo].[propertyItem] ([propertyId], [ItemId])
SELECT Property.propertyId, ITEM.ItemID
FROM ITEM RIGHT OUTER JOIN miami
        ON ITEM.StartDate = miami.FirstInsertDate
       AND ITEM.Price = miami.PropertyPrice
       AND ITEM.Classified = convert(int, miami.AdNumber)
    LEFT OUTER JOIN Property
        ON miami.PropertyUnitNumber = Property.Unit
       AND miami.PropertyZipCode = Property.Zip
       AND miami.PropertyCity = Property.City
       AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1
--CONDOFEATURES
INSERT INTO [Undex_Production].[dbo].[CondoFeatures]
    (PropertyId, [Bedrooms], [Area], [PropertyDescription], [Bathrooms], [NumOfFloors])
SELECT Property.propertyId, [PropertyBedrooms], [PropertySquareFeet],
    dbo.fn_ReplaceTags(convert(varchar(8000), PropertyDescription)),
    [PropertyBathrooms], [PropertyTotalFloors]
FROM miami LEFT OUTER JOIN Property
        ON miami.PropertyUnitNumber = Property.Unit
       AND miami.PropertyZipCode = Property.Zip
       AND miami.PropertyCity = Property.City
       AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1
--COMMUNITY FEATURES
INSERT INTO [Undex_Production].[dbo].[CommunityFeatures]
    (PropertyId, [totalFloors], isComplete1)
SELECT Property.propertyId, miami.propertyTotalFloors, '0' as IsComplete
FROM miami LEFT OUTER JOIN Property
        ON miami.PropertyUnitNumber = Property.Unit
       AND miami.PropertyZipCode = Property.Zip
       AND miami.PropertyCity = Property.City
       AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1
--UNITDISCLOSURES
INSERT INTO [Undex_Production].[dbo].[UnitDisclosures] ([propertyId], [monthcondoasso])
SELECT Property.propertyId, [propertyassocfee]
FROM miami LEFT OUTER JOIN Property
        ON miami.PropertyUnitNumber = Property.Unit
       AND miami.PropertyZipCode = Property.Zip
       AND miami.PropertyCity = Property.City
       AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1
--BROKERDEVELOPER
INSERT INTO [Undex_Production].[dbo].[BrokerDeveloper]
    ([IsFSBO], [FSBOName], [FSBOEmail], [FSBOWebsite], [IsDeveloper],
     [DeveloperName], [DeveloperWebsite], [IsBroker], [BrokerName],
     [BrokerageWebsite], [propertyId], [brokercommission], [isComplete])
SELECT CASE AdvertiserType when 'FSBO' THEN 1 else 0 end,
    CASE AdvertiserType when 'FSBO' THEN [AdvertiserName] else NULL end,
    CASE AdvertiserType when 'FSBO' THEN [AdvertiserEmail] else NULL end,
    CASE AdvertiserType when 'FSBO' THEN [AdvertiserURL] else NULL end,
    CASE AdvertiserType when 'Developer' THEN 1 else 0 end,
    CASE AdvertiserType when 'Developer' THEN [AdvertiserName] else NULL end,
    CASE AdvertiserType when 'Developer' THEN [AdvertiserURL] else NULL end,
    CASE AdvertiserType when 'Realtor' THEN 1 when 'Broker' THEN 1 else 0 end,
    CASE AdvertiserType when 'Realtor' THEN [AdvertiserName] when 'Broker' THEN [AdvertiserName] else NULL end,
    CASE AdvertiserType when 'Realtor' THEN [AdvertiserURL] when 'Broker' THEN [AdvertiserName] else NULL end,
    Property.propertyId, [PropertyCommBroker], '0' as IsComplete
FROM miami LEFT OUTER JOIN Property
        ON miami.PropertyUnitNumber = Property.Unit
       AND miami.PropertyZipCode = Property.Zip
       AND miami.PropertyCity = Property.City
       AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
COMMIT TRANSACTION
GO
Is there any easy way I can take a select statement (such as SELECT * FROM payments WHERE datetime > '20071122') and output a SQL INSERT statement for these records?
I basically need to move a specific set of records from one SQL Server to another (both SQL Server 2005). Any suggestions for the best way to do this?
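If a linked server can be set up between the two instances, a hedged sketch of copying the rows directly, with placeholder server and database names; SELECT * assumes identical column lists on both sides, otherwise spell the columns out:

INSERT INTO [RemoteServer].[TargetDb].[dbo].[payments]
SELECT *
FROM dbo.payments
WHERE [datetime] > '20071122'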
Dear Experts, what is the best way to do a large insert WITHOUT having direct access to the machine SQL Server is running on? For example, imagine I want to insert something like 20,000 records. If I had access to the server, I could BULK INSERT into a temp table and then insert into the destination table. But if I can't create a file on the server to use for BULK INSERT, what is the next best alternative to doing lots of 1-record insert statements? Thanks, -Emin
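One hedged middle ground over an ordinary connection: batch many rows per statement with UNION ALL (a few hundred per batch keeps each statement a manageable size), committing per batch. Destination table and values are placeholders:

INSERT INTO dbo.Destination (Col1, Col2)
SELECT 1, 'first'
UNION ALL SELECT 2, 'second'
UNION ALL SELECT 3, 'third'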
Is there any known SQL Server bug whereby a record can be successfully inserted and committed, but then later be found not to be in the database? For example, if there was a server crash just after the commit, could committed data be lost? I'm sure the answer must be "no", but a client is telling me this is happening, and I said I'd enquire.
We have installed a package developed by another company, and sometimes (when the server has a high rate of transactions) we are seeing the following messages in the package debug file:

2004-04-22 14:23:27 3056: ------------------------------------------------------------
2004-04-22 14:23:27 3056: rlm03000: ls_PutOneRecord: Failed inserting [04113SO07236054]
2004-04-22 14:23:27 3056: ODBCSetCursorName[3056]: Failed specifying cursor concurrency to statement handle 1 for 'ls_rep_add'
2004-04-22 14:23:27 3056: dbc01003: ODBCSetCursorName[3056]: ls_rep_add generated SQL error state: 24000
2004-04-22 14:23:27 3056: dbc01002: [Microsoft][ODBC Driver Manager] Invalid cursor state
2004-04-22 14:23:27 3056: odbcFilterConnectErrors[3056]: set the iState to ODBC_DISCONNECT for <24000>
2004-04-22 14:23:27 3056: Function Return Code [00001] ODBC_ERROR
2004-04-22 14:23:27 3056: Operation [00000] ls_PutOneRecord
2004-04-22 14:23:27 3056: Primary Return Code [00000] 00000
2004-04-22 14:23:27 3056: Secondary Return Code [0000000001] 00000
2004-04-22 14:23:27 3056: ------------------------------------------------------------

When this message appears in the app debug file, we are losing some inserts that the application does in the database. The MS SQL Server configuration is:

4 x Pentium 760 MHz, 4 GB RAM
SQL 2000 Standard Edition 8.00.760 SP 4
MDAC 2.7, ODBC driver SQLSRV32.dll 2000.81.9031.14 15/11/2002

Any tip will be welcome. Thanks in advance,
Reis
I've got a package that needs to do various single-row inserts/updates throughout the flow of the package. Each insert/update will use values from variables.
What is the best practice for doing this? I was going to use a DFT with my source being a derived column transformation, but it expects a true data source.
I've also thought about just using the obvious - a SQL command task - but my experience with doing commands using single values is that it's very quirky and hard to get the data types and mappings to cooperate.
I'm betting the SQL command task is my only option, but I wanted to check here first to see if others had other ideas.
This is the error I get, but I did the same thing in a select statement and it works fine. Do I need to add something to the string or what? I am kinda confused, and help would be great.
Operator '+' is not defined for types 'String' and 'System.Windows.Forms.TextBox'.
Hi, I've got a program that has a form. On the form there are several fields which are optional that I would prefer to remain as NULL in the database (rather than a default value) if the user doesn't fill them out. I'm using stored procedures for my inserts (SQL 2000 + C#). How do most people handle these situations? Right now I have a separate function which receives the parameter values as params and then handles creating the parameters and executing the stored procedure. I'm not sure how to handle indicating that I want one of those to be null, at least not without having multiple functions and possibly various stored procedures for different possibilities. Thanks.
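A hedged sketch of the common pattern: give the optional parameters NULL defaults in the procedure, so the C# side can simply not add a SqlParameter (or pass DBNull.Value) for anything the user left blank. One procedure then covers every combination; names are placeholders:

CREATE PROCEDURE dbo.InsertComplaint
    @RequiredField varchar(100),
    @OptionalField varchar(100) = NULL
AS
    INSERT INTO dbo.Complaints (RequiredField, OptionalField)
    VALUES (@RequiredField, @OptionalField)
GO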
I'm trying to do a multiple-row insert from one database to another by using this code:

insert into [mpis].[dbo].[Residents] (acno, surname, name, ID, type)
(select top 30 acno, surname, name, id, type from [PretoriaDB].[dbo].[WorkingDB])

but I keep on getting this error:

Msg 8152, Level 16, State 9, Line 1
String or binary data would be truncated.
The statement has been terminated.

Can anyone help?
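A hedged diagnostic sketch: compare the source's actual maximum lengths against the destination columns' declared widths to find which column is too long (column names taken from the post's select list; the numeric-looking ID column is skipped):

SELECT MAX(LEN(acno)) AS acno_max,
       MAX(LEN(surname)) AS surname_max,
       MAX(LEN(name)) AS name_max,
       MAX(LEN(type)) AS type_max
FROM [PretoriaDB].[dbo].[WorkingDB]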
However, the length of each is not known, so we could have the reverse of the above.
I need to insert x number of notes for one UserID, and vice versa, but I'm trying to figure out how to do both, so the insert would resemble the following.
UserID  NoteID
1       4
1       9
1       18
1       21
1       23
1       27
2       4
2       9
etc, etc
This is the SP that does it, and it works fine for just one or the other; I just don't have much experience in this kind of thing. The spInsertAssignedNoteDetail at the end simply makes the insert when I have both numbers.
Here's my attempt, but I'm just stuck as to where to go from here.
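Since the attempt itself isn't shown, here is a hedged sketch of one shape it could take: loop over the note IDs for a user with a cursor and call the existing detail procedure per pair. The NotesToAssign table and the parameter order of spInsertAssignedNoteDetail are assumptions:

DECLARE @NoteID int
DECLARE noteCur CURSOR LOCAL FAST_FORWARD FOR
    SELECT NoteID FROM dbo.NotesToAssign WHERE UserID = @UserID
OPEN noteCur
FETCH NEXT FROM noteCur INTO @NoteID
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.spInsertAssignedNoteDetail @UserID, @NoteID
    FETCH NEXT FROM noteCur INTO @NoteID
END
CLOSE noteCur
DEALLOCATE noteCur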