SELECT phase, stat, subject,
       CASE WHEN phase = 'Initial/Data Collection' THEN 1
            WHEN phase = 'Screening' THEN 2
            WHEN phase = 'Assessment and Selection' THEN 3
            WHEN phase = 'Placement' THEN 4
       END AS PhaseSort
FROM (SELECT subject, stat,
             CASE WHEN stat = 'Application Received' THEN 'Initial/Data Collection'
                  WHEN stat IN ('Shortlisted', 'For Screening') THEN 'Screening'
                  WHEN stat IN ('For Assessment', 'Passed Initial Evaluation',
                                'Passed Profiles Exam', 'Passed Technical Exam')
                       THEN 'Assessment and Selection'
                  WHEN stat = 'For Placement' THEN 'Placement'
             END AS phase
      FROM (SELECT subject,
                   CASE WHEN subject IN ('Process Application', 'Application Received')
                             THEN 'Application Received'
                        WHEN subject = 'Screen Application' THEN 'For Screening'
                        WHEN subject = 'Phone Interview' THEN 'Shortlisted'
                        WHEN subject = 'Initial Interview' THEN 'For Assessment'
                        WHEN subject = 'Profiles Assessment' THEN 'Passed Initial Evaluation'
                        WHEN subject = 'Technical Exam and Interview' THEN 'Passed Profiles Exam'
                        WHEN subject = 'Background and Reference Check' THEN 'Passed Technical Exam'
                        WHEN subject IN ('Job Offer', 'Contract Signing') THEN 'For Placement'
                   END AS stat
            FROM dbo.filteredtask
            WHERE subject IN ('Application Received', 'Process Application',
                              'Screen Application', 'Initial Interview',
                              'Profiles Assessment', 'Technical Exam and Interview',
                              'Background and Reference Check', 'Phone Interview',
                              'Shortlisted', 'For Placement', 'Job Offer',
                              'Contract Signing')) Phases) stats
ORDER BY PhaseSort
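If all that's needed is the sorted phase/status listing, the three nested CASE layers can collapse into a single lookup. A hedged sketch using a VALUES table constructor (requires SQL Server 2008 or later; the subject/status/phase mappings are copied from the query above):

SELECT m.phase, m.stat, t.subject, m.PhaseSort
FROM dbo.filteredtask t
JOIN (VALUES
    ('Process Application',            'Application Received',      'Initial/Data Collection',  1),
    ('Application Received',           'Application Received',      'Initial/Data Collection',  1),
    ('Screen Application',             'For Screening',             'Screening',                2),
    ('Phone Interview',                'Shortlisted',               'Screening',                2),
    ('Initial Interview',              'For Assessment',            'Assessment and Selection', 3),
    ('Profiles Assessment',            'Passed Initial Evaluation', 'Assessment and Selection', 3),
    ('Technical Exam and Interview',   'Passed Profiles Exam',      'Assessment and Selection', 3),
    ('Background and Reference Check', 'Passed Technical Exam',     'Assessment and Selection', 3),
    ('Job Offer',                      'For Placement',             'Placement',                4),
    ('Contract Signing',               'For Placement',             'Placement',                4)
) AS m(subject, stat, phase, PhaseSort)   -- one row per subject-to-phase mapping
  ON m.subject = t.subject
ORDER BY m.PhaseSort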
Can the following stored procedure somehow be optimized?

IF @GroupId = '812846'
BEGIN
    IF (SELECT COUNT(*) FROM Employee WHERE SSN = @SSN AND GroupId = @GroupId) > 0
    BEGIN
        UPDATE Employee
        SET NameLast = @LastName, NameFirst = @FirstName, NameMiddle = @MI
        WHERE SSN = @SSN AND GroupId = @GroupId

        SELECT @EmpId = EmpId FROM Employee WHERE SSN = @SSN AND GroupId = @GroupId
    END
    ELSE
    BEGIN
        INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
        VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)
        SELECT @EmpId = @@IDENTITY
    END
END
ELSE
BEGIN
    INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
    VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN)
    SELECT @EmpId = @@IDENTITY
END
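A hedged rewrite of the same logic: the two identical insert branches collapse into one, EXISTS replaces the COUNT(*) probe, and SCOPE_IDENTITY() replaces @@IDENTITY (which can return an identity generated by a trigger):

IF @GroupId = '812846'
   AND EXISTS (SELECT 1 FROM Employee WHERE SSN = @SSN AND GroupId = @GroupId)
BEGIN
    UPDATE Employee
    SET NameLast = @LastName, NameFirst = @FirstName, NameMiddle = @MI
    WHERE SSN = @SSN AND GroupId = @GroupId;

    SELECT @EmpId = EmpId FROM Employee
    WHERE SSN = @SSN AND GroupId = @GroupId;
END
ELSE
BEGIN
    -- Both original insert branches were identical, so they collapse into one.
    INSERT INTO Employee (GroupId, NameLast, NameFirst, NameMiddle, SSN)
    VALUES (@GroupId, @LastName, @FirstName, @MI, @SSN);

    SELECT @EmpId = SCOPE_IDENTITY();
END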
SELECT procs.name AS ProcName, params.name AS ParameterName, types.name AS ParamType,
       params.max_length, params.precision, params.scale, params.is_output,
       params.has_default_value
FROM sys.procedures procs
LEFT OUTER JOIN sys.all_parameters params
       ON procs.object_id = params.object_id
LEFT OUTER JOIN sys.types types
       ON params.system_type_id = types.system_type_id
      AND params.user_type_id = types.user_type_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'
ORDER BY ProcName, params.parameter_id
Now, all I need from it is the column params.is_output.
I have modified it down to what I need, but I'm wondering if I can remove some of the joins or anything else for better performance without losing the proper results:
SELECT params.is_output
FROM sys.procedures procs
LEFT OUTER JOIN sys.all_parameters params
       ON procs.object_id = params.object_id
LEFT OUTER JOIN sys.types types
       ON params.system_type_id = types.system_type_id
      AND params.user_type_id = types.user_type_id
WHERE procs.is_ms_shipped = 0
  AND params.name = '@DISPOSAL_AREA_NAME'
  AND procs.name = 'webservices_BENEFICIAL_USES_DM_SELECT'
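Since only is_output is wanted and both filters sit on params columns (which makes the LEFT JOINs behave as inner joins anyway), the query can likely be trimmed to a single catalog view. A sketch, assuming the procedure lives in the dbo schema:

SELECT params.is_output
FROM sys.all_parameters AS params
WHERE params.object_id = OBJECT_ID('dbo.webservices_BENEFICIAL_USES_DM_SELECT')
  AND params.name = '@DISPOSAL_AREA_NAME'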
For example, populating the SiteName and SLAClass fields with a separate SELECT statement each time may bog down the system.
Also, I'd like to feed the CustID and Subject fields from another table called Profile instead of typing the CustID value each time.
The intent of this statement is to search for customers in the subject line and, if a customer is found, add that customer's information to the Detail table. The Profile table contains all customer information.
UPDATE [TEST3].[dbo].[Detail]
SET [CustID] = 'Book Fairs', /* fill in with field from the Profile table automatically */
    [SiteName] = (SELECT DISTINCT [Profile].[SiteName]
                  FROM [TEST3].[dbo].[Profile], [TEST3].[dbo].[Detail]
                  WHERE [Profile].[CustID] = [Detail].[CustID]),
    [SLAClass] = (SELECT DISTINCT [Profile].[SLAClass]
                  FROM [TEST3].[dbo].[Profile], [TEST3].[dbo].[Detail]
                  WHERE [Profile].[CustID] = [Detail].[CustID])
WHERE [Detail].[CallID] IN (SELECT [CallLog].[CallID]
                            FROM [TEST3].[dbo].[CallLog], [TEST3].[dbo].[Subset], [TEST3].[dbo].[Asgnmnt]
                            WHERE [CallLog].[CallType] = 'DREAM'
                              AND [CallLog].[Subject] LIKE '%Book Fairs%') /* fill in with field from the Profile table automatically */
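A hedged sketch of the same update as a single UPDATE ... FROM join, so that CustID, SiteName, and SLAClass all come from Profile in one pass. It assumes Profile has one row per customer and that matching Profile.CustID against the call subject is an acceptable way to identify the customer:

UPDATE d
SET d.CustID   = p.CustID,
    d.SiteName = p.SiteName,
    d.SLAClass = p.SLAClass
FROM [TEST3].[dbo].[Detail] AS d
JOIN [TEST3].[dbo].[CallLog] AS c
      ON c.CallID = d.CallID
JOIN [TEST3].[dbo].[Profile] AS p
      ON c.Subject LIKE '%' + p.CustID + '%'   -- customer identifier found in the subject line
WHERE c.CallType = 'DREAM'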
I have two tables in a SQL 6.5 database with identical fields and indexes. One contains the data of August 2003 and the other July 2003. Now the August table is larger (about 40,000 more rows) than the July table, but I've noticed that the same queries perform much faster on the August table than on the July table. I've tried this with many different queries, so I'm wondering what the reason behind this is. Is there a way to optimize a table? Remember, I'm using SQL 6.5. Thanks.
I have this query that is taking more than 5 minutes to run; granted, it involves 7 tables, 4 of which have over 100,000 rows, but there must be a quicker way of executing this.
SELECT ACP.COMPANY_NAME, WOD.WO, WOH.SCHEDULED_DATE, WOH.JOB_ADDRESS_1, WOH.JOB_ADDRESS_2,
       WOH.CUSTOMER_CODE, ARC.CUSTOMER_NAME, ARC.BILL_TO_CUSTOMER_CODE, APS.SUPPLIER_NAME,
       APC.INVOICE_NUMBER AS AP_INVOICE_NUMBER, APC.INVOICE_DATE AS AP_INVOICE_DATE,
       APC.DATE_OF_RECORD AS AP_DATE_OF_RECORD, WOD.AMOUNT, APC.CHEQUE_NUMBER,
       WOH.INVOICE_NUMBER AS AR_INVOICE_NUMBER, ARI.DATE_OF_RECORD AS AR_DATE_OF_RECORD
FROM WO_WODDescription_tbl AS WOD
LEFT OUTER JOIN WO_Headers_tbl AS WOH
       ON WOD.COMPANY_CODE = WOH.COMPANY_CODE AND WOD.WO = WOH.WORK_ORDER_NUMBER
LEFT OUTER JOIN AP_CurrentDetails_tbl AS APC
       ON WOD.COMPANY_CODE = APC.COMPANY_CODE AND WOD.DRILL_DOWN_NUMBER = APC.DRILL_DOWN
      AND WOD.AUDIT_NUMBER = APC.AUDIT_NUMBER
LEFT OUTER JOIN AR_CustomerMaster_tbl AS ARC
       ON WOD.COMPANY_CODE = ARC.COMPANY_CODE AND WOH.CUSTOMER_CODE = ARC.CUSTOMER_CODE
LEFT OUTER JOIN AP_Suppliers_tbl AS APS
       ON APC.COMPANY_CODE = APS.COMPANY AND APC.SUPPLIER_CODE = APS.SUPPLIER_CODE
LEFT OUTER JOIN ADM_CompanyProfile_tbl AS ACP
       ON WOD.COMPANY_CODE = ACP.COMPANY_CODE
LEFT OUTER JOIN AR_InvoiceDetailCurrent_tbl AS ARI
       ON WOD.COMPANY_CODE = ARI.COMPANY_CODE AND WOH.INVOICE_NUMBER = ARI.INVOICE_NUMBER
WHERE (WOD.COMPANY_CODE = '01' OR WOD.COMPANY_CODE = '03')
  AND APC.CHEQUE_NUMBER <> 'X%'
  AND (APC.DATE_OF_RECORD < '20061101' AND ARI.DATE_OF_RECORD > '20061031')
ORDER BY WOD.COMPANY_CODE, WOD.WO
Can anyone give me any suggestions on how I could speed this up? Also, I have noticed that sqlservr.exe uses more than 1.5 GB of the 2 GB in the machine while doing conversions from flat files into the database, while the CPU is under 3% load. Is this behavior typical of MSSQL 2005?
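Two things in the WHERE clause stand out. APC.CHEQUE_NUMBER <> 'X%' compares against the literal string 'X%' (NOT LIKE 'X%' is probably what was meant), and filtering APC and ARI columns in the WHERE clause silently turns those LEFT OUTER JOINs into inner joins. A minimal, self-contained illustration of the second point, using hypothetical table variables:

DECLARE @A table (id int);
DECLARE @B table (a_id int, val varchar(10));
INSERT @A VALUES (1), (2);
INSERT @B VALUES (1, 'X123');

SELECT a.id, b.val
FROM @A AS a
LEFT JOIN @B AS b
       ON b.a_id = a.id
      AND b.val NOT LIKE 'X%'   -- filter in ON: both rows of @A survive, val is NULL
-- WHERE b.val NOT LIKE 'X%'    -- filter in WHERE: NULLs fail the test, both rows vanish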
Hi, I created a page that lists the total hours, lunch time, and expenses for the employees of the company. I am trying to optimize this stored procedure, but it still takes more than 40 seconds for 50 employees.

SELECT @StartDate AS DateLigne, TPerson.Name, TPerson.idperson,
       (SELECT SUM(COALESCE(hours, 0) - COALESCE(lunch, 0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DATEDIFF(day, @StartDate, datereport) >= 0
          AND DATEDIFF(day, datereport, @endDate) >= 0) AS hours,
       (SELECT SUM(COALESCE(nonbillable, 0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DATEDIFF(day, @StartDate, datereport) >= 0
          AND DATEDIFF(day, datereport, @endDate) >= 0) AS nonbillable,
       (SELECT SUM((COALESCE(miles, 0) * @mil) + COALESCE(perdiem, 0) + COALESCE(supplies, 0)
                   + COALESCE(airfare, 0) + COALESCE(gas, 0) + COALESCE(autorental, 0)
                   + COALESCE(other, 0))
        FROM Thereport
        WHERE etridperson = TPerson.idperson
          AND etridproject = TUserProject.etridproject
          AND DATEDIFF(day, @StartDate, datereport) >= 0
          AND DATEDIFF(day, datereport, @endDate) >= 0) AS Expenses
FROM TUserProject, TPerson
WHERE TUserProject.etridperson = TPerson.idperson
  AND etridproject = 89
Do you have any idea of how I could optimize this stored procedure?
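One hedged approach: aggregate Thereport once with a LEFT JOIN and a GROUP BY instead of three correlated subqueries, and rewrite the DATEDIFF predicates as a plain range on datereport so an index on that column can be used. Table and column names are taken from the query above; verify that the date boundary handling matches your data:

SELECT @StartDate AS DateLigne,
       p.Name,
       p.idperson,
       SUM(COALESCE(r.hours, 0) - COALESCE(r.lunch, 0))        AS hours,
       SUM(COALESCE(r.nonbillable, 0))                         AS nonbillable,
       SUM(COALESCE(r.miles, 0) * @mil + COALESCE(r.perdiem, 0) + COALESCE(r.supplies, 0)
           + COALESCE(r.airfare, 0) + COALESCE(r.gas, 0)
           + COALESCE(r.autorental, 0) + COALESCE(r.other, 0)) AS Expenses
FROM TPerson AS p
JOIN TUserProject AS up
      ON up.etridperson = p.idperson
LEFT JOIN Thereport AS r
      ON r.etridperson = p.idperson
     AND r.etridproject = up.etridproject
     AND r.datereport >= @StartDate
     AND r.datereport < DATEADD(day, 1, @endDate)   -- sargable equivalent of the DATEDIFF pair
WHERE up.etridproject = 89
GROUP BY p.Name, p.idperson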
Hi, I am used to writing correlated subqueries within my main queries. Although they work fine, I have read a lot that they carry performance hits. Also, as our data has grown over time, a simple SELECT statement with a few subqueries tends to run slower, sometimes taking 10-15 seconds. The following is a simple example of what I mostly do:

SELECT DISTINCT C.CusID, C.Name, C.Age,
       (SELECT SUM(Price) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) AS Total_Order_Price,
       (SELECT SUM(Concession) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) AS Total_Order_Concession,
       (SELECT SUM(Price) - SUM(Concession) FROM CustomerOrder WHERE CusID_fk = CO.CusID_fk) AS Total_Difference
FROM Customer C
INNER JOIN CustomerOrder CO ON C.CusID = CO.CusID_fk
......
WHERE (conditions...)

My question is: what would be a better way to handle the above query? How can I write a simpler query with optimized performance? I would also mention that in some of my ASP.NET applications, I use inline queries assigned to a SqlCommand object. I mention this because these queries are written in class files, so how would we still accomplish what I have described above? Could any query guru kindly guide me in writing better queries? I would be much obliged.
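A common restructuring for this pattern: aggregate CustomerOrder once in a derived table and join to it, rather than re-scanning it in three correlated subqueries (which also makes the DISTINCT unnecessary). A hedged sketch against the example above:

SELECT C.CusID, C.Name, C.Age,
       CO.Total_Order_Price,
       CO.Total_Order_Concession,
       CO.Total_Order_Price - CO.Total_Order_Concession AS Total_Difference
FROM Customer AS C
INNER JOIN (
    SELECT CusID_fk,
           SUM(Price)      AS Total_Order_Price,
           SUM(Concession) AS Total_Order_Concession
    FROM CustomerOrder
    GROUP BY CusID_fk
) AS CO
      ON CO.CusID_fk = C.CusID
-- WHERE (conditions...)

The same text works unchanged as an inline query assigned to a SqlCommand, since it is still a single SELECT statement.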
Hello :-) My question is: if I query a partitioned view but don't know the values in the "where x in (<expression>)" clause, i.e. SELECT * FROM viewA WHERE intVal IN (SELECT intVal FROM tbl1), compared to SELECT * FROM viewA WHERE intVal IN (5, 6)? Of course, intVal is the partitioning column. Will this result in an optimized query that searches only the relevant tables?
I have a question concerning where to put certain database files for the following RAID configurations. The server has 2 RAID configs: 2 HDs in a RAID 1 and 4 HDs in a RAID 10. The server will host 4 database instances: a replicated DB, a Reporting Services DB (which technically constitutes 2 DB instances), and an application DB. In order to get the best performance, should I put the OS, SQL binaries, and log files on the RAID 1 config, with the data and tempdb on the RAID 10? If not, please explain the best solution. Thank you!
We have a situation where queries against a partitioned view ignore a suitable index and perform a table scan (against 200+ MB of data), where the same query on the underlying table(s) results in a 4-page index seek. I can't find any mention of the situation, so I'm trying a post here.
We're running SQL Server 2005 Enterprise Edition SP2 on Windows 2003 Enterprise Edition SP1 on a two-node cluster, and it also occurs on a stand-alone development box with Developer Edition. We have four tables, named Options#0, Options#1, Options#2, and Options#3. All are almost identical (script generated by SSMS and edited down a bit):
SET ANSI_NULLS OFF
SET QUOTED_IDENTIFIER ON

CREATE TABLE [dbo].[Options#0](
    [ControlID] [tinyint] NOT NULL CONSTRAINT [DF_Options#0__ControlID] DEFAULT ((0)),
    [ModelCode] [char](8) NOT NULL,
    [EquipmentID] [int] NOT NULL,
    [AdjustmentContextID] [int] NOT NULL,
    [EquipmentCode] [char](2) NOT NULL,
    [EquipmentTypeCode] [char](1) NOT NULL,
    [Description] [varchar](50) NOT NULL,
    [DisplayOrder] [smallint] NOT NULL,
    [IsStandard] [bit] NOT NULL,
    [Priority] [tinyint] NOT NULL,
    [Status] [bit] NOT NULL,
    [Adjustment] [int] NOT NULL,
    CONSTRAINT [PK_Options#0] PRIMARY KEY CLUSTERED (
        [ModelCode] ASC, [EquipmentID] ASC, [AdjustmentContextID] ASC, [ControlID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [dbo].[Options#0] WITH CHECK ADD CONSTRAINT [CK_Options#0__ControlID] CHECK (([ControlID]=(0)))
ALTER TABLE [dbo].[Options#0] CHECK CONSTRAINT [CK_Options#0__ControlID]
The only differences between the tables are in the names and in the value defaulted to and CHECKed, which matches the table name (to support the partitioned view, of course).
We receive and load data every week and every two months, and use an unlikely algorithm to load it and manage its availability by running an ALTER on the view (to maintain the access rights defined for the hosting environment). Scripted out via SSMS, the view looks like:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
CREATE VIEW [dbo].[Options] AS
select * from Options#1
union all
select * from Options#3
The problem is that when we issue a query like
SELECT count(*) from Options where ControlID = 1 and ModelCode = '2004NIC9'
The resulting query (as checked via the query plan and SET STATISTICS IO ON) will get "partitioned", running against the proper table, but it will ignore the index, perform a table scan, and churn through 200+ MB of data. A similar query run against the underlying table
SELECT count(*) from Options#1 where ControlID = 1 and ModelCode = '2004NIC9'
(with or without the ControlID = 1 clause) will perform a Clustered Index Seek and read maybe 4 pages.
Analyzing the execution plan shows that the query against the table works like you'd think, but for the query against the view we get a Clustered Index Scan, with predicate:
[DBName].[dbo].[Options#1].[ControlID]=(1) AND CONVERT_IMPLICIT(char(8),[DBName].[dbo].[Options#1].[ModelCode],0)='2004NIC9'
I get the same results when explicitly listing all columns in the view. The code page on the view and tables is the same (as determined by checking properties via SSMS).
Why is the table data column being implicitly converted to the data type that it already is? Why does this occur when working with the partitioned view but not with the actual table? Can this behavior be controlled or modified without losing the (incredibly useful) data-loading management benefits of the partitioned view? I'm guessing (and hoping) it's some subtle quirk or mis-setting; please set me on the right path!
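One hedged workaround while the root cause is tracked down: cast the literal to the column's exact type, so that any conversion lands on the constant side of the comparison rather than on the indexed column:

SELECT count(*)
FROM Options
WHERE ControlID = 1
  AND ModelCode = CAST('2004NIC9' AS char(8))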
I'm just beginning to experiment with memory optimised tables.
I have two sets of near identical tables - one set normal, the other set memory optimised with DURABILITY=SCHEMA_ONLY - and am running test queries against these. When I say that the two sets are "near identical", I mean that they are the same except for the primary keys: for the normal tables these are defined as PRIMARY KEY CLUSTERED, whereas for the memory-optimised ones they are defined as PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=nnnn) as per the requirements for such tables.
I then run a pair of test queries, again identical but one referencing the normal tables and the other referencing the memory optimised ones.
(The query uses inner joins on three tables with row counts of approximately 3 million, 100,000, and 5,000 rows.)
The query against the normal tables runs noticeably faster than the one against the memory-optimised tables. To try to find out why, I examined the execution plans. The plan for the memory-optimised query suggests that I have a missing index: but of course I can't create one against a memory-optimised table after the fact. Is this a bug, or am I missing something? Why should the performance of the two differ so much?
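For what it's worth, indexes on memory-optimised tables (SQL Server 2014) must be declared inline in CREATE TABLE; the table has to be dropped and recreated to add the index the plan asks for. A sketch with hypothetical names, showing a NONCLUSTERED range index alongside the hash primary key:

CREATE TABLE dbo.Orders_mem (
    OrderID    int       NOT NULL,
    CustomerID int       NOT NULL,
    OrderDate  datetime2 NOT NULL,
    CONSTRAINT PK_Orders_mem PRIMARY KEY NONCLUSTERED HASH (OrderID)
        WITH (BUCKET_COUNT = 1048576),
    INDEX IX_Orders_mem_CustomerID NONCLUSTERED (CustomerID)   -- range index to support joins
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)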
We are planning to upgrade. We are using SQL 2008 R2 now. Which is the better option: migrating to SQL 2012 or migrating to 2014? I am thinking 2014 has memory-optimized tables and updatable columnstore indexes, so it is the better option.
CREATE TABLE [Sales].[Test_inmem] (
    [c1] [int] NOT NULL,
    [c2] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [ModifiedDate] [datetime2](7) NOT NULL
        CONSTRAINT [IMDF_Test_ModifiedDate] DEFAULT (sysdatetime()),
[Code] ....
I have to generate 1,000,000 random records into it. I tried various ways to insert records but, not being a developer, could not do it. I want c1 to be a serial number, c2 can be anything, and the third column (ModifiedDate) should be the timestamp.
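A minimal sketch for the load, assuming the table above: a numbers CTE built from a cross join of sys.all_objects supplies one million sequential values, c2 gets an arbitrary string, and ModifiedDate is filled explicitly (its default would also do):

;WITH N AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n   -- sequential values 1..1000000
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO Sales.Test_inmem (c1, c2, ModifiedDate)
SELECT n,
       N'Row ' + CAST(n AS nvarchar(10)),   -- arbitrary text, fits nvarchar(20)
       SYSDATETIME()
FROM N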
How do I find the total allocated space and used space of a memory-optimized filegroup?
use memory_optimized_db
GO
SELECT (SUM(size) * 8.0) / 1024.0 AS Space,
       FILEGROUP_NAME(data_space_id),
       type_desc
FROM sys.database_files
GROUP BY data_space_id, type_desc;
The query above gives the "current used size of the container" of the memory-optimized filegroup, but doesn't give the total space.
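The allocated-versus-used picture for the checkpoint file pairs lives in a DMV. A hedged sketch against SQL Server 2014's column names (the view's columns changed in later versions, so verify against your build):

SELECT SUM(file_size_in_bytes) / 1024.0 / 1024.0       AS AllocatedMB,
       SUM(file_size_used_in_bytes) / 1024.0 / 1024.0  AS UsedMB
FROM sys.dm_db_xtp_checkpoint_files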
I've been having some trouble getting a single-column varchar(5) field to reliably use an index seek instead of a table scan. The production table in this case contains 25 million rows. As impressive as it is to scan 25 million rows in 35 seconds, the query should run much faster.
Typically, this table is accessed with a query that includes:
SELECT ... FROM SummaryTable WHERE ixZIP IN (SELECT ZipCode FROM @ZipCodesForMO)
This query insists on using a table scan. I've tried WITH (FORCESEEK) for example, but that just makes the query fail.
As I've investigated this issue I also tried:
SELECT * FROM Summaries WHERE ZipCode IN ('xxxxx', 'xxxxx', 'xxxxx')
When I run this query with 64 or fewer (actual, valid) ZIP codes, the query uses an index seek. But when I give it 65 or more ZIP codes, it uses a table scan.
To summarize, the production query always uses a table scan, and when I specify 65 or more ZIP codes the literal-list query also uses a table scan. I'm wondering if the collation of the indexed column (Latin1_General_100_BIN2) is somehow the problem. I'll likely try converting the ZIP codes to an integer to see what happens.
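One avenue worth trying: table variables carry no statistics, which often pushes the optimizer toward a scan once the row estimate matters. A hedged sketch that stages the codes in an indexed temp table with the same binary collation (names taken from the post; the index definition is an assumption):

CREATE TABLE #ZipCodesForMO (
    ZipCode varchar(5) COLLATE Latin1_General_100_BIN2 NOT NULL PRIMARY KEY
);
INSERT INTO #ZipCodesForMO (ZipCode)
SELECT ZipCode FROM @ZipCodesForMO;

SELECT s.*   -- the original projection is elided in the post
FROM SummaryTable AS s
WHERE s.ixZIP IN (SELECT z.ZipCode FROM #ZipCodesForMO AS z)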
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Therefore, answers like "don't have so many small databases that are frequently dropped and restored" would be somewhat unhelpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for Memory-Optimized Tables (a new and wonderful feature of SQL 2014, for sure, but I'm not using it, at least yet). Naturally, when you have to loop through over a hundred DBs, it takes time. Worse yet, if one of these DBs is in the process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
use [MyDatabase] SELECT ISNULL((select top 1 1 from sys.filegroups FG where FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message (Index out of bounds or somesuch), but SSMS 2008 does connect, and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
I am trying to load data into a memory-optimized table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB in size and has 13 million rows. During this load, the LDF file grows to 350 GB (until the disk runs out of space). The server has about 110 GB of memory reserved for SQL Server. tempdb doesn't grow. The bucket count in the CREATE statement is 262144. The hash key has 4 fields (two of type int, one smallint, one varchar(200)). The disk for the data files still has space for them (including the Hekaton files).
How can I reduce the size of the LDF file during the data load?
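Since a single INSERT ... SELECT of 13 million rows is one transaction, the whole thing must fit in the log at once. A hedged sketch of a batched load (hypothetical source/target/column names, and it assumes the SIMPLE recovery model plus an increasing positive int key to batch on):

DECLARE @batch int = 500000, @minID int = 0, @maxID int;
SELECT @maxID = MAX(ID) FROM dbo.SourceTable;

WHILE @minID < @maxID
BEGIN
    INSERT INTO dbo.MemOptTable (ID, Payload)
    SELECT ID, Payload
    FROM dbo.SourceTable
    WHERE ID > @minID AND ID <= @minID + @batch;

    SET @minID += @batch;
    CHECKPOINT;   -- lets the log truncate between batches under SIMPLE recovery
END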
Hi all--I'm trying to convert a function which I inherited from a SQL Server 2000 DTS package into something usable in an SSIS package in SQL Server 2005. Given the original code here:

Function Main()
    On Error Resume Next
    Dim cn, i, rs, sSQL
    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=sqloledb;Server=<server_name>;Database=<db_name>;User ID=<sysadmin_user>;Password=<password>"
    Set rs = CreateObject("ADODB.Recordset")
    Set rs = DTSGlobalVariables("SQLstring").Value
    For i = 1 To rs.RecordCount
        sSQL = rs.Fields(0).Value
        cn.Execute sSQL, , 128 'adExecuteNoRecords option for faster execution
        rs.MoveNext
    Next
Main = DTSTaskExecResult_Success
End Function
This code was originally programmed in the SQL Server ActiveX Task type in a DTS package designed to take an open-ended number of SQL statements generated by another task as input, then execute each SQL statement sequentially. Upon this code's success, move on to the next step. (Of course, there was no additional documentation with this code. :-)
Based on other postings, I attempted to push this code into a Visual Studio BI 2005 Script Task with the following change:
public Sub Main()
...
Dts.TaskResult = Dts.Results.Success
End Class
I get the following error when I attempt to compile this:
Error 30209: Option Strict On requires all variable declarations to have an 'As' clause.
I am new to Visual Basic, so I'm on a learning curve here. From what I know of this script:
- The variables here violate the new Option Strict On requirement in VS 2005 to declare what type of object your variable is supposed to use.
- I need to explicitly declare each object, unless I turn off Option Strict On (which didn't seem recommended, based on what I read).
Given this statement:
dim cn, i, rs, sSQL
I'm looking at "i" as type Integer; rs and sSQL are open-ended arrays, but I can't quite figure out how to read the code here:
This code seems to create an instance of a COM component, then pass provider information and create the recordset being passed in by the previous task, but am not sure whether this syntax is correct for VS 2005 or what data type declaration to make here. Any ideas/help on how to rewrite this code would be greatly appreciated!
Table 1: AddressBook Fields --> User Name, Address, CountryCode
Table 2: Country Fields --> Country Code, Country Name
Step 1 : I have created a Cube with these two tables using SSAS.
Step 2 : I have created a report in SSRS showing Address list.
The Column in the report are User Name, Address, Country Name
But I have no idea how to convert this Country Code to a Country Name.
I am generating the report using the Layout tab. ( Data | Layout | Preview ) Report1.rdl [Design]
Can anyone help me solve this issue? In our project, most of the transaction tables have a code, with the code's description in a master table. I need to convert every code into its corresponding description in all my reports.
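The usual approach is to resolve the code in the report's dataset query rather than in the layout, by joining the master table. A hedged sketch using the field names from the post (adjust to the real column names):

SELECT ab.[User Name],
       ab.[Address],
       c.[Country Name]
FROM AddressBook AS ab
JOIN Country AS c
      ON c.[Country Code] = ab.CountryCode   -- code resolved to its description here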
I've a database with a memory-optimized filegroup on it. How can I remove it? I have removed the memory-optimized table I had on it, but when I try to remove the filegroup I receive an error.
Hello, I'm using ASP.NET to update a table which includes a lot of fields, maybe around 30; I used a stored procedure to update these fields. Unfortunately I had to use a FormView to handle some TextBoxes and RadioButtonLists, which amount to about 30 web controls. I've built and tested my stored procedure, and it worked successfully through the SQL Builder. The problem I face is that I have to define each variable in the stored procedure and then define it again in the code-behind:

ALTER PROCEDURE dbo.UpdateItems
(
    @eName nvarchar, @ePRN nvarchar, @cID nvarchar, @eCC nvarchar, @sDate nvarchar,
    @eLOC nvarchar, @eTEL nvarchar, @ePhone nvarchar, @eMobile nvarchar, @q1 bit,
    @inMDDmn nvarchar, @inMDDyr nvarchar, @inMDDRetIns nvarchar, @outMDDmn nvarchar,
    @outMDDyr nvarchar, @outMDDRetIns nvarchar, @insNo nvarchar, @q2 bit, @qper2 nvarchar,
    @qplc2 nvarchar, @q3 bit, @qper3 nvarchar, @qplc3 nvarchar, @q4 bit, @qper4 nvarchar,
    @pic1 nvarchar, @pic2 nvarchar, @pic3 nvarchar, @esigdt nvarchar, @CCHName nvarchar,
    @CCHTitle nvarchar, @CCHsigdt nvarchar, @username nvarchar, @levent nvarchar,
    @eventdate nvarchar, @eventtime nvarchar
)
AS
UPDATE iTrns
SET eName = @eName, cID = @cID, eCC = @eCC, sDate = @sDate, eLOC = @eLOC, eTel = @eTEL,
    ePhone = @ePhone, eMobile = @eMobile, q1 = @q1, inMDDmn = @inMDDmn, inMDDyr = @inMDDyr,
    inMDDRetIns = @inMDDRetIns, outMDDmn = @outMDDmn, outMDDyr = @outMDDyr,
    outMDDRetIns = @outMDDRetIns, insNo = @insNo, q2 = @q2, qper2 = @qper2, qplc2 = @qplc2,
    q3 = @q3, qper3 = @qper3, qplc3 = @qplc3, q4 = @q4, qper4 = @qper4, pic1 = @pic1,
    pic2 = @pic2, pic3 = @pic3, esigdt = @esigdt, CCHName = @CCHName, CCHTitle = @CCHTitle,
    CCHsigdt = @CCHsigdt, username = @username, levent = @levent, eventdate = @eventdate,
    eventtime = @eventtime
WHERE (ePRN = @ePRN)

And the code-behind which I have to write will be something like this:

cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@eName", ((TextBox)FormView1.FindControl("TextBox1")).Text);
cmd.Parameters.AddWithValue("@ePRN", ((TextBox)FormView1.FindControl("TextBox2")).Text);
cmd.Parameters.AddWithValue("@cID", ((TextBox)FormView1.FindControl("TextBox3")).Text);
cmd.Parameters.AddWithValue("@eCC", ((TextBox)FormView1.FindControl("TextBox4")).Text);
((TextBox)FormView1.FindControl("TextBox7")).Text = ((TextBox)FormView1.FindControl("TextBox7")).Text + ((TextBox)FormView1.FindControl("TextBox6")).Text + ((TextBox)FormView1.FindControl("TextBox5")).Text;
cmd.Parameters.AddWithValue("@sDate", ((TextBox)FormView1.FindControl("TextBox7")).Text);
cmd.Parameters.AddWithValue("@eLOC", ((TextBox)FormView1.FindControl("TextBox8")).Text);
cmd.Parameters.AddWithValue("@eTel", ((TextBox)FormView1.FindControl("TextBox9")).Text);
cmd.Parameters.AddWithValue("@ePhone", ((TextBox)FormView1.FindControl("TextBox10")).Text);
cmd.Parameters.AddWithValue("@eMobile", ((TextBox)FormView1.FindControl("TextBox11")).Text);

So is there any way to do it better than this? Thank you
Hi, I need some help here. I have a SELECT SQL statement that queries a table. How do I get the return value from the SQL statement assigned to a label? Is there an article that talks about this? Thanks, geniuses.
My Service Broker setup works between 2 different instances on the local server, but I could not get it working between 2 different servers because of the "Conversation ID cannot be associated with an active conversation" error, which I have posted about.
After I receive the message successfully... in the end I get this message sent...
Recently in an SSIS package I am getting the following error for a particular Data flow task.
Error: 2008-01-25 12:01:48.58
Code: 0xC0202009
Source: Import Datasynapse Data User Events Source [3017]
Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x8000FFFF.
End Error
Error: 2008-01-25 12:01:48.73
Code: 0xC004701A
Source: Import Datasynapse Data DTS.Pipeline
Description: component "User Events Source" (3017) failed the pre-execute phase and returned error code 0xC0202009.
End Error
Our guess is that when the data volume of the User Events table is large, it throws this error. If we try to transfer a small subset of the data, it succeeds. What could be the reason for this error?
Since this is very urgent, an immediate response would be very much appreciated.
Right, the problem is that I have an SQL query that returns multiple rows and I want to be able to join all these rows into one so that I can use it. So, for example:

Rows returned:
Row1
Row2
Row3

Rolled-up rows:
Row1 Row2 Row3
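On SQL Server 2005 and later, the usual rollup idiom is FOR XML PATH with STUFF. A minimal sketch with a hypothetical table dbo.MyRows(RowValue varchar(50)):

SELECT STUFF(
    (SELECT ' ' + t.RowValue            -- space-separated; swap in ', ' etc. as needed
     FROM dbo.MyRows AS t
     FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
    1, 1, '') AS RolledUpRow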
Server: Msg 170, Level 15, State 1, Procedure sp_blocker_pss70, Line 146
Line 146: Incorrect syntax near 'print 'DBCC INPUTBUFFER FOR SPID '.
Server: Msg 170, Level 15, State 1, Procedure sp_blocker_pss70, Line 147
Line 147: Incorrect syntax near 'dbcc inputbuffer ('.