Large Log Table: SELECT * FROM Statement Killing The Performance Of The Server - Help Me Out
Mar 12, 2008
Hi all
I have a large log table with a lot of data (one month only). If I run a query like SELECT * FROM <table_name>, the server goes very, very slow.
Because of the large amount of data the system slows down.
Can somebody help me with suggestions on how to get good performance?
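One common sketch, assuming the goal is ad-hoc inspection rather than a full export: read only the columns and the time window you actually need, backed by an index on the timestamp column. Table and column names below are placeholders of my own, not from the post.
-- pull only the needed columns for a bounded window instead of SELECT *
SELECT LogID, LogTime, Severity, Message
FROM dbo.AppLog
WHERE LogTime >= DATEADD(DAY, -1, GETDATE());   -- e.g. only the last day
-- an index like this lets the date filter seek instead of scanning the whole month
CREATE NONCLUSTERED INDEX IX_AppLog_LogTime
ON dbo.AppLog (LogTime)
INCLUDE (Severity, Message);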
View 4 Replies
Jul 31, 2014
I need to write a SELECT statement that takes the upper table and produces the lower table.
View 3 Replies
View Related
Aug 15, 2007
We have a table that is 800GB. We are planning to re-build the clustered index on this table to a different filegroup. The new filegroup and files associated with it will sit on a SAN which will have a 1.5TB allocation. Does anyone have any suggestions in regard to how many files to have associated with the filegroup to provide optimal performance? Apparently we could have 3 LUNs (500GB each), so would 1 file on each LUN provide additional performance as opposed to one file on 1 LUN?
View 1 Replies
View Related
Jun 15, 2007
I have recently started working with a new group of people and I find myself doing a lot of reporting. While doing this reporting I have been writing a TON of sql. Some of my queries were not performing up to par and another developer in the shop recommended that I stay away from the "GROUP BY" clause.
Backing away from the "GROUP BY" clause and using "INNER SELECTS" instead has been more effective, and some queries have gone from over 1 minute to less than 1 second.
Obviously if it works then it works and there is no arguing that point. My question to the forum is more about gathering some opinions so that I can build an opinion of my own.
If I cannot do a reasonable query of a couple of million records using a group by clause what is the problem and what is the best fix?
Is the best fix to remove the "GROUP BY" and write a query that is a little more complex or should I be looking at tuning the database with more indexes and statistics?
I want to make sure that this one point is crystal clear. I am not against following the advice of my coworker and avoiding the "GROUP BY" clause. I am only interested in listening to a few others talk about why they agree or disagree with my coworker so that I can gain a broader understanding.
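For reference, here is a minimal sketch of the two shapes being compared, against a hypothetical Orders(OrderID, CustomerID, Amount) table (names are mine, not from the post). The first form aggregates with GROUP BY; the second pushes the aggregate into a correlated inner select, which is roughly what the coworker's rewrite describes.
-- Form 1: straight GROUP BY aggregate
SELECT o.CustomerID, SUM(o.Amount) AS TotalAmount
FROM Orders AS o
GROUP BY o.CustomerID;
-- Form 2: the same totals via a correlated inner select,
-- driven from a distinct list of customers
SELECT c.CustomerID,
       (SELECT SUM(o.Amount)
        FROM Orders AS o
        WHERE o.CustomerID = c.CustomerID) AS TotalAmount
FROM (SELECT DISTINCT CustomerID FROM Orders) AS c;
With an index on (CustomerID, Amount) both shapes can usually be satisfied from a single index, so which one wins tends to say more about indexing and statistics than about GROUP BY itself.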
View 6 Replies
View Related
Jan 15, 2008
We have an issue where a cube hasn't been designed properly - when someone queries it with Excel, it is doing a mega-crossjoin. When anyone else tries to do *anything* on the AS server (connect with management studio, etc.) it just hangs. We have to either track down the person running the query (via the flight recorder), or restart the service. Obviously the correct fix is to change the design of the cube - I plan on doing it asap. But it brings up this important question - is there a setting I can change to allow others to use the box while this is going on? Maybe some thread isolation, or parallelism? I'm just throwing out ideas, as I haven't experienced this part of AS administration yet.
Thanks in advance,
John
View 7 Replies
View Related
Jul 20, 2005
I am having performance issues with a SQL query in Access. My query is accessing and joining several tables (one very large one). The tables are linked via ODBC. The client submits the query to the server, separated by several states. It appears the query is retrieving gigs of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?
View 3 Replies
View Related
Mar 5, 2008
Dear all,
I'm using SQL Server 2005 Standard Edition.
I have the following stored procedure that is executed against two tables (RecordedCalls) and (RecordedCallsTags).
The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags has about 7,500,000 records.
Now the lines marked in baby blue are dynamic (a dynamic WHERE clause) and vary every time this stored procedure is executed; the condition may contain 7 columns, or 10 columns, or 2 columns, etc.
Now I want to create non-clustered indexes on the columns used in the WHERE clause, but the DTA suggests different indexing whenever the WHERE clause changes.
So what is the right way to create indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together at once if I'm using 5 conditions. I can't accumulate all the possible indexes, since the WHERE clause always varies from situation to situation. Below is the SP:
CREATE TABLE #tempLookups (ID int identity(0,1),Code NVARCHAR(100),NameE NVARCHAR(500),NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID int identity(0,1),TypesCount INT,CallsType NVARCHAR(50))
INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC
INSERT INTO #tempTable SELECT COUNT(DISTINCT(RecordedCalls.ID)) As TypesCount,RecordedCalls.CallType as CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN cast ('01 Jan 1910 00:00:00:000' as datetime ) AND cast ( '01 Jan 2210 00:00:00:000' as datetime ))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN('2')
AND RecordedCalls.AgentID NOT IN('1000010000')
AND (RecordedCallsTags.TagID is null OR RecordedCallsTags.TagID NOT IN('100','200'))
AND RecordedCalls.IsDeleted='false'
GROUP BY RecordedCalls.CallType
SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount, CASE('English')
WHEN 'Arabic' THEN #tempLookups.NameA
ELSE #tempLookups.NameE
END AS CallsType FROM
#tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code
DROP TABLE #tempLookups
DROP TABLE #tempTable
Any suggestions on how to create efficient indexes?
Thanks all,
Tayseer
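One possible sketch, not a definitive answer: a single composite index keyed on the predicates that appear in (almost) every variation of the dynamic WHERE, with the remaining filter columns carried as INCLUDE columns, often serves many variations better than one index per column would. The column choice below is purely illustrative, based on the predicates shown in the SP.
CREATE NONCLUSTERED INDEX IX_RecordedCalls_Dynamic
ON RecordedCalls (IsDeleted, CallDate, CallType)
INCLUDE (Duration, ChannelID, ServerID, AgentID);
-- the join column on the child table usually deserves its own index
CREATE NONCLUSTERED INDEX IX_RecordedCallsTags_CallID
ON RecordedCallsTags (CallID, TagID);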
View 2 Replies
View Related
Jul 23, 2005
I've just inherited a system and have some concerns about the speed of connections to a remote server (SQL 2000). If I do a simple select statement on the table below, it takes 14 minutes to retrieve 6 million rows across a 2Mb line. Obviously it's a reasonable amount of data to retrieve, but I would have thought this would be quicker if I'm honest. Run locally, this is 50 seconds.
My thoughts are that there may be some issues with our connection (we get general network errors sporadically, which are being looked at), but I wanted some thoughts on whether the performance is acceptable for what it is doing with what is available. I don't think there is a SQL issue, but want to check if this sounds about right.
It's early days, so I'm after a general impression of the speed of retrieval for the amount of data on the available bandwidth. Assuming a best-performance scenario, what is the minimum time it should take as a best guess?
Thanks
Ryan
CREATE TABLE [FIELD_VALUES] (
[DEALER_DATA_ID] [int] NOT NULL,
[FIELD_CODE] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[FIELD_VALUE] [numeric](15, 5) NULL,
[CHANGED_TYPE] [int] NULL,
CONSTRAINT [PK_FIELD_VALUES] PRIMARY KEY CLUSTERED ([DEALER_DATA_ID], [FIELD_CODE]) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]
GO
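As a rough back-of-envelope check (my own estimate, assuming the "2Mb line" means roughly 2 megabits per second usable and an average of about 35 bytes per row on the wire for this schema):
6{,}000{,}000 \text{ rows} \times 35 \text{ bytes/row} \approx 210 \text{ MB}
\frac{210 \text{ MB}}{0.25 \text{ MB/s}} \approx 840 \text{ s} \approx 14 \text{ minutes}
On those assumptions, the observed 14 minutes is about what a saturated 2 Mb/s link can deliver for this volume, which points at bandwidth rather than SQL.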
View 1 Replies
View Related
Jul 23, 2005
Greetings All, I was wondering what would happen if I were to do a "select * from table" on a table that has about 5 million rows. Would my read block other writers to the same table? Would it block other readers? I know SQL uses optimistic locking by default, but I am not sure what this means to other users trying to access the same table. Any advice would be greatly appreciated. TFD
View 3 Replies
View Related
May 21, 2013
What is the best way to select summary data from a big table of detail records and insert it into another table?
The source table contains approximately 100 million detail records covering a couple of months. I have a select statement that selects a summary of the latest month's data, which averages about 15 million records for the month, and I want to insert that into another table.
In the past I just used a standard INSERT INTO statement, but that is not the best way of doing it.
If a view is created with just the last month's summary data and I select from the view, will the performance be better or will it just add more overhead?
Would an SSIS package work better for inserting the summary data?
View 1 Replies
View Related
Sep 22, 2014
I am trying to use SQL to pull unique records from a large table. The table consists of people with in and out dates. Some people have duplicate entries with the same IN and OUT dates, others have duplicate IN dates but sometimes are missing an OUT date, and some don’t have an IN date but have an OUT date.
What I need to do is pull a report of all Unique Names with Unique IN and OUT dates (and not pull duplicate IN and OUT dates based on the Name).
I have tried 2 statements:
#1:
SELECT DISTINCT tblTable1.Name, tblTable1.INDate
FROM tblTable1
WHERE (((tblTable1.Priority)="high") AND ((tblTable1.ReportDate)>#12/27/2013#))
GROUP BY tblTable1.Name, tblTable1.INDate
ORDER BY tblTable1.Name;
#2:
SELECT DISTINCT tblTable1.Name, tblTable1.INDate
FROM tblTable1
WHERE (((tblTable1.Priority)="high") AND ((tblTable1.ReportDate)>#12/27/2013#))
UNION SELECT DISTINCT tblTable1.Name, tblTable1.INDate
FROM tblTable1
WHERE (((tblTable1.Priority)="high") AND ((tblTable1.ReportDate)>#12/27/2013#));
Both of these work great… until I add the OUT date. Once it starts to pull the OUT date, it also pulls all those who have a duplicate IN date but a missing OUT date.
Example:
Name          IN          OUT
John Smith    1/1/2014    1/2/2014
John Smith    1/1/2014    (blank)
I am very new to SQL and I am pretty sure I am missing something very simple… Is there a statement that can filter to ensure no duplicates appear on the query?
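A hedged sketch of one way to collapse the duplicates: group on Name and IN date and take the MAX of the OUT date, so a row with a real OUT date wins over its blank twin. This assumes the missing OUT date is stored as NULL and that the column is called OUTDate (my guess); the syntax is SQL Server style rather than Access.
SELECT t.Name, t.INDate, MAX(t.OUTDate) AS OUTDate   -- MAX() ignores NULLs, so the blank duplicate drops out
FROM tblTable1 AS t
WHERE t.Priority = 'high'
  AND t.ReportDate > '2013-12-27'
GROUP BY t.Name, t.INDate
ORDER BY t.Name;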
View 1 Replies
View Related
Jan 7, 2014
I have a stored procedure that I have written that manipulates date fields in order to produce certain reports. I would like to add a column in the dataset that will be a join from another table (the table name is Periods).
The structure of the periods table is as follows:
[ID] [int] NOT NULL,
[Period] [int] NULL,
[Quarter] [int] NULL,
[Year] [int] NULL,
[PeriodStarts] [date] NULL,
[PeriodEnds] [date] NULL
The stored procedure is currently:
USE [International_Forecast_New]
GO
/****** Object: StoredProcedure [dbo].[GetOpenResult] Script Date: 01/07/2014 11:41:35 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
[Code] ....
What I need is to add the period, quarter and year to the dataset based on the "Store_Open" value.
For example Period 2 looks like this
Period  Quarter  Year  Period Start  Period End
2       1        2014  2014-01-27    2014-02-23
So if the store_open value is 02/05/2014, it would populate Period 2, Quarter 1, Year 2014.
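A minimal sketch of the kind of join that maps the Store_Open date onto its period row; dbo.StoreData is a placeholder for whatever table or derived result holds Store_Open, and the Periods column names follow the structure given above.
SELECT d.Store_Open, p.[Period], p.[Quarter], p.[Year]
FROM dbo.StoreData AS d
LEFT JOIN dbo.Periods AS p
  ON d.Store_Open BETWEEN p.PeriodStarts AND p.PeriodEnds;
-- e.g. 2014-02-05 falls between 2014-01-27 and 2014-02-23, so it picks up Period 2, Quarter 1, Year 2014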
View 1 Replies
View Related
Dec 12, 2007
Hi
We are using the SQL Server 2005 Full Text Service. The data is not huge, but it is the kind of data where each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table. This is expected to grow to at least 10 times this size. The issue is that FTS takes a very long time to return results when the number of hits (rows) returned from FTS is large for some searches. With the same data and catalog, full text queries for less common words return in a timely manner. The nature of the problem doesn't allow us to return only the top results; we need all the results. So it's not about the size of the data but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
I am considering splitting the table and hence the catalog and using multiple servers to do full text searches in smaller catalogs. Is there any other way this issue can be solved ?
If splitting is the only way, can you give me an idea of a statistical/standard limit to the number of search results / catalog size at which FTS still gives good results?
Thanks in advance
View 1 Replies
View Related
Feb 12, 2014
I have created a trigger that is set off every time a new item has been added to TableA. The trigger then inserts 4 rows into TableB, which contains two columns (item, task type).
Each row will have the same item, but with a different task type, i.e.:
TableA.item, 'Planning'
TableA.item, 'Design'
TableA.item, 'Program'
TableA.item, 'Production'
How can I do this with tSQL using a single select statement?
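A sketch of one way to do it with a single INSERT ... SELECT inside the trigger; the table and column names are assumptions based on the description, and the inserted pseudo-table means multi-row inserts are handled too.
CREATE TRIGGER trg_TableA_AfterInsert ON dbo.TableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- one row per newly inserted item, crossed with the four fixed task types
    INSERT INTO dbo.TableB (item, task_type)
    SELECT i.item, t.task_type
    FROM inserted AS i
    CROSS JOIN (VALUES ('Planning'), ('Design'), ('Program'), ('Production')) AS t(task_type);
END;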
View 6 Replies
View Related
May 12, 2015
I am using SQL SERVER 2008R2, not Denali, so I cannot use OFFSET FETCH Clause.
In my stored procedure, I am doing a SELECT INTO #tblTemp FROM... Working fine. This resultset is going to be used in an SSIS package which will generate a pipe-delimited .txt file... Working fine.
For recoverability's sake, I am trying to throttle back to commit chunks of 1000 rows per commit until there are no more rows. I am trying to avoid large rollbacks.
Q: Am I supposed to handle the transactions (begin/commit/rollback/end trans) when the records are being inserted into the temp table? Or when they are being selected from the temp table?
Q: Or can I handle this in my SSIS package for a flat file destination? I don't see option for a flat file destination like I do for an OLE DB Destination (like Rows per batch, Maximum insert commit size).
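For the throttling itself, here is a minimal sketch of a batching loop on the T-SQL side (dbo.TargetTable is a placeholder for wherever the rows ultimately land); each chunk of 1000 rows commits on its own, so a failure only rolls back the current chunk. As far as I know the Flat File Destination has no commit-size settings, since a file has no transactions, so any batching has to happen before the data reaches SSIS.
DECLARE @BatchSize int = 1000, @Rows int = 1;
WHILE @Rows > 0
BEGIN
    BEGIN TRAN;
    -- move the next chunk; OUTPUT keeps the source and destination in step
    DELETE TOP (@BatchSize) FROM #tblTemp
    OUTPUT deleted.* INTO dbo.TargetTable;
    SET @Rows = @@ROWCOUNT;
    COMMIT TRAN;
END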
View 6 Replies
View Related
Oct 16, 2013
I need to figure out the correct update statement syntax for the following integration.
I have a "Performance Table" which i insert weekly performance numbers into for each store. The table is constructed w/ columns such as Store, Weekenddate, Sales, Refunds, #ofPatients
In a "Averages Table" i have every weekenddate for each store populated. So 52 Weeks for 10 stores = 520 Rows of Store numbers & WeekendDates.
What i would like to do is run a loop or update statement which would update the store average for each weekendate based on the last 13 weeks.
This is my query
update performancestore_avgs set SalesAvg =
(select sum(SalesHit)/Count(Store) from performance_store where performance_store.weekenddate >= performancestore_avgs.weekenddate-84 and performancestore_Avgs.store = performance_store.store)
The update statement runs but the averages are completely wrong.
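One likely culprit (my reading, not confirmed): the correlated subquery has no upper bound on weekenddate, so it averages every week from 84 days back through the end of the table instead of just the trailing 13. A hedged sketch of a bounded version, using the column names from the post:
UPDATE a
SET SalesAvg = (
    SELECT SUM(p.SalesHit) * 1.0 / COUNT(p.Store)               -- * 1.0 avoids integer division
    FROM performance_store AS p
    WHERE p.store = a.store
      AND p.weekenddate >= DATEADD(DAY, -84, a.weekenddate)     -- 13 weekly snapshots back...
      AND p.weekenddate <= a.weekenddate                        -- ...up to and including this week
)
FROM performancestore_avgs AS a;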
View 3 Replies
View Related
Jul 11, 2007
Hello -- thank you for taking the time to read this.
I have a very large table that is used both for archives and new information. To get the current information, the table is queried by many different users at various polling periods. The SELECT required includes about fifteen JOINS, and only returns about 200 rows at any given time.
So I got to thinking it might be faster to periodically run the big query as a SELECT INTO into a smaller table and let the polling clients query the smaller table with SELECT *. Periodically, the smaller table would be dropped and refreshed with another SELECT INTO.
Trouble is, the data would have to be updated once every 30 seconds, and there are inbound polls coming at the rate of about 200 per minute. It got me thinking what might happen if a client attempted to query the smaller table while it was in the process of being dropped and refilled.
So my question is three-part:
1) assuming a larger table of about 500,000 records and only 500 pertinent at any given time, is there any real potential of performance enhancement by switching to a SELECT INTO table?
2) if so, is there a chance of a client failing a query if the inbound query somehow collides with the DROP/SELECT INTO procedure?
3) if so, is there any way to prevent it or a better way of doing this?
Thanks again for reading, and in advance for any help you can provide. I apologize if I sound like a dummy - it's hard to fake intelligence!
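On question 3, one common pattern is to build the new snapshot under a temporary name and swap it in with a quick rename, so readers only ever see a complete table. A rough sketch under assumed object names (dbo.vw_CurrentRows stands in for the big fifteen-join query):
-- build the fresh snapshot off to the side
IF OBJECT_ID('dbo.Snapshot_New') IS NOT NULL DROP TABLE dbo.Snapshot_New;
SELECT * INTO dbo.Snapshot_New FROM dbo.vw_CurrentRows;
-- swap it in; the renames are metadata-only, so a colliding poll should just wait briefly rather than fail
BEGIN TRAN;
    IF OBJECT_ID('dbo.Snapshot') IS NOT NULL
        EXEC sp_rename 'dbo.Snapshot', 'Snapshot_Old';
    EXEC sp_rename 'dbo.Snapshot_New', 'Snapshot';
COMMIT TRAN;
IF OBJECT_ID('dbo.Snapshot_Old') IS NOT NULL DROP TABLE dbo.Snapshot_Old;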
View 3 Replies
View Related
Mar 24, 2004
Hi, what is a good way to kill the duplicates in a table? When I say killing duplicates, I mean removing every row for a repeated value, not just the extra copies.
WorkTempID ItemNo Seq
100196 RTP-22 1
100197 RTP-22 2
100198 RTP-22 3
100199 RTP-22 3
100200 RTP-22 4
100201 RTP-22 4
100202 RTP-22 5
100203 RTP-22 5
********************************************************
See how Seq 3, 4 and 5 are repeated? So for the output I want:
WorkTempID  ItemNo  Seq
100196      RTP-22  1
100197      RTP-22  2
********************************************************
I DO NOT want this as the output; I already know how to achieve this using the DISTINCT keyword:
WorkTempID  ItemNo  Seq
100196      RTP-22  1
100197      RTP-22  2
100198      RTP-22  3
100200      RTP-22  4
100203      RTP-22  5
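A sketch that keeps only the rows whose Seq occurs exactly once for the item; WorkTemp is a placeholder name for the table shown above, and the same predicate works in a DELETE if the duplicates really should be removed rather than filtered out.
SELECT w.WorkTempID, w.ItemNo, w.Seq
FROM WorkTemp AS w
WHERE NOT EXISTS (
    SELECT 1
    FROM WorkTemp AS d
    WHERE d.ItemNo = w.ItemNo
      AND d.Seq = w.Seq
      AND d.WorkTempID <> w.WorkTempID   -- some other row shares this ItemNo/Seq
);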
View 12 Replies
View Related
Jan 9, 2015
OK, I have a query "SELECT ColumnNames FROM tbl1"; let's say the values returned are "age,sex,race".
Now I want to be able to create an "update" statement like "UPDATE tbl2 SET Col2 = age + sex + race" dynamically and execute this UPDATE statement. So, if the next select statement returns "age, sex, race, gender", then the script should create "UPDATE tbl2 SET Col2 = age + sex + race + gender" and execute it.
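A rough sketch of building and executing that statement dynamically. It assumes tbl1 exposes one column name per row in a column called ColumnNames, as the SELECT implies, and uses the 2005-era FOR XML PATH trick for the concatenation.
DECLARE @cols nvarchar(max), @sql nvarchar(max);
-- concatenate the returned names into "[age] + [sex] + [race] + ..."
SELECT @cols = STUFF((
    SELECT ' + ' + QUOTENAME(ColumnNames)
    FROM tbl1
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 3, '');
SET @sql = N'UPDATE tbl2 SET Col2 = ' + @cols + N';';
EXEC sys.sp_executesql @sql;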
View 4 Replies
View Related
Mar 17, 2008
I have two tables with some identical fields, and I am trying to populate one of the tables with a row that has been selected from the other table. Is there some standard code I can use to take the selected row and insert the data into the appropriate fields in the other table?
View 3 Replies
View Related
May 10, 2014
In a Library Management database we have these tables
1) Document ( DocNo , Doc_type , permalink,inDate)
2)Title(id, DocNo,Main_Title, Other_Title)
3)Author(id , Author_Name , Author_Family,Type--Like:main author , translator ,....)
4)Publisher(id,DocNo , Name,Publisedate,address)
5)Subject(id,DocNo,Subject)
6)Description(id,DocNo,ISBN,description)--one document may have some ISBN,etc
In document table I have 500,000 records.
I want to search for a word in these tables. For example, I want to search for 'Computer'; this word may be in Subject, Title, Description, etc. How can I do this with the best performance?
View 3 Replies
View Related
Sep 10, 2007
Hi All,
I'm working to improve some SQL performance.
One of the major pieces of syntax inside the SELECT statement is:
WHERE FIELDA IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = "XXXXX" AND PARATYPE='A')
AND FIELDB IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = "XXXXX" AND PARATYPE='B')
AND FIELDC IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = "XXXXX" AND PARATYPE='C')
AND FIELDD IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = "XXXXX" AND PARATYPE='D')
AND FIELDE IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = "XXXXX" AND PARATYPE='E')
AND FIELDF IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = "XXXXX" AND PARATYPE='F')
(It's to compare the field content with some user input parameter inside a parameter table... )
I think the problem is that the SELECT ... IN is causing much of the slowness in the SQL statement. I have indexed FIELDA, FIELDB, FIELDC etc., and PARAVALUE and PARATYPE in the PARATABLE table. But performance is still slow and execution takes more than 20 seconds for 200,000 rows of records.
Does anyone know if there is still any chance to improve performance here?
Much Thanks,
Andy
View 14 Replies
View Related
May 6, 2008
I'm trying to get the <Title> node value returned in the select statement below, but can't quite get it to work. I have the <ID> node being returned in the statement, but not the title. Any help is appreciated.
param ids = '<Collection><Content><ID>1</ID><Title>Document 1</Title></Content><Content><ID>2</ID><Title>Document 2</Title></Content></Collection>'
CREATE PROCEDURE [GetDownloads](@Ids xml) AS
DECLARE @ResearchDocuments TABLE (ID int)
INSERT INTO @ResearchDocuments (ID) SELECT ParamValues.ID.value('.','VARCHAR(20)')
FROM @Ids.nodes('/Collection/Content/ID') as ParamValues(ID)
SELECT * FROM research_downloads INNER JOIN @ResearchDocuments rd ON research_downloads.doc_id = rd.ID Where research_downloads.post_sf = 0
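A hedged sketch of pulling the Title alongside the ID by shredding at the Content level instead of the ID level, using the same nodes() approach the procedure already uses:
DECLARE @ResearchDocuments TABLE (ID int, Title varchar(200));
INSERT INTO @ResearchDocuments (ID, Title)
SELECT c.value('(ID)[1]', 'int'),
       c.value('(Title)[1]', 'varchar(200)')
FROM @Ids.nodes('/Collection/Content') AS T(c);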
View 5 Replies
View Related
Apr 23, 2007
Hi, I'm trying to dynamically assign the table name for a SELECT statement but can't get it to work. Given below is my code:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE GetLastProjectNumber (@DeptCode varchar(20))
AS
BEGIN TRANSACTION
SET NOCOUNT ON
DECLARE @ProjectNumber int
SET @ProjectNumber = 'ProjectNumber' + REPLACE(CONVERT(char,@DeptCode),'.','')
SELECT MAX(@ProjectNumber)
FROM 'tbl_ProjectNumber' + REPLACE(CONVERT(char,@DeptCode),'.','');
END TRANSACTION
Basically, I have a bunch of tables which were created dynamically using the code from this post, and now I need to access the last row in the table that matches the supplied DeptCode. This is the error I get:
Msg 102, Level 15, State 1, Procedure GetLastProjectNumber, Line 29
Incorrect syntax near 'tbl_ProjectNumber'.
Any help would be appreciated. Thanks.
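The error comes from the fact that a table name cannot be supplied as a string literal or a variable directly in the FROM clause; it has to go through dynamic SQL. A minimal hedged sketch of what the body of GetLastProjectNumber could do instead (it assumes each table carries a column named ProjectNumber<DeptCode>, which is how I read the original code):
DECLARE @Suffix varchar(20), @sql nvarchar(max);
SET @Suffix = REPLACE(@DeptCode, '.', '');
SET @sql = N'SELECT MAX(' + QUOTENAME('ProjectNumber' + @Suffix) + N') AS LastProjectNumber'
         + N' FROM ' + QUOTENAME('tbl_ProjectNumber' + @Suffix) + N';';
EXEC sys.sp_executesql @sql;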
View 3 Replies
View Related
Feb 4, 2005
I have a table that contains the following structure and information:
Code:
UID  Member_Number  Time_Stamp            Status  Reason  Event  Reg   F_Name  L_Name
1    877205406      2/3/2005 11:48:27 AM  In      None    none   None  Wendy   Boud
8    693008252      2/3/2005 12:39:35 PM  In      None    none   None  Jeremy  Ahlman
9    877205406      2/3/2005 12:40:20 PM  Out     None    none   None  Wendy   Boud
10   693008252      2/3/2005 12:40:45 PM  Out     None    none   None  Jeremy  Ahlman
11   877205406      2/3/2005 12:40:50 PM  In      None    none   None  Wendy   Boud
12   877205406      2/3/2005 12:46:25 PM  Out     None    none   None  Wendy   Boud
I need to be able to take this information and display it in a data grid so that on each row I see the Member Number, First and Last Name and the In and Out Time.
I am not sure how to group the In and Out times together so that the query knows which Time Out corresponds to which Time In for the member.
Any help is greatly appreciated!
View 9 Replies
View Related
Aug 27, 2014
I would like to know if it is possible to do a SELECT statement where I can denormalize my table. I am not sure if it is needed to create a new table at first and insert denormalized values into the created table.
Example
Original table:
ID XXX VALUES
1 a 10
1 b 15
1 c 8
And I want to use SELECT to get this table:
ID a b c
1 10 15 8
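A hedged sketch using PIVOT (available from SQL Server 2005 on); the column list a, b, c has to be known in advance, otherwise dynamic SQL is needed, and dbo.OriginalTable is a placeholder name.
SELECT ID, [a], [b], [c]
FROM (SELECT ID, XXX, [VALUES] FROM dbo.OriginalTable) AS src
PIVOT (MAX([VALUES]) FOR XXX IN ([a], [b], [c])) AS p;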
View 1 Replies
View Related
Nov 27, 2006
Dear folks,
create table temptable(eno, ename) as select eno, ename from emp.
Here the problem is that it is asking for the datatypes for the temporary table.
Is it not possible to create the temp table without providing the datatypes?
thank you very much.
Vinod
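In SQL Server the form that infers the datatypes from the source is SELECT ... INTO; a minimal sketch:
-- column datatypes are copied from emp automatically
SELECT eno, ename
INTO #temptable      -- drop the # for a permanent table instead of a temp one
FROM emp;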
View 8 Replies
View Related
Oct 9, 2007
I'm wondering if there is a single statement I can write to pull my data. Let's say my Order table has one field for userId and one field for supervisorId (among other fields) both of which are foreign keys into the Users table where their name, address, etc. are stored. What I'd like to do is to pull all the rows from Order and have a join that pulls the user name and supervisor name from the User table all in one go. Right now I pull all the Orders with just user name joined, and then go back over the objects to add the supervisor name as a separate query.
The reason I'd like to do this is to simplify the objects I'm passing to the GridView by doing a single fetch instead of multiples. I'm using SQL Server, .NET 2.0 and VS.NET 2005.
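A sketch of joining the Users table twice under different aliases, which collapses the two fetches into one; the column names are assumptions based on the description.
SELECT o.OrderId,
       u.Name AS UserName,
       s.Name AS SupervisorName
FROM Orders AS o
JOIN Users AS u ON u.UserId = o.UserId
LEFT JOIN Users AS s ON s.UserId = o.SupervisorId;   -- LEFT JOIN in case an order has no supervisor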
Thanks
View 1 Replies
View Related
Jul 12, 2004
Hello fellow .net developers,
In a website I'm working on I need to be able to put all of the user tables in a database in a dropdownlist.
Another dropdownlist then will autopopulate itself with the names of all the columns from the table selected in the first dropdownlist.
So, what I need to know is: is there a sql statement that can return this type of information?
Example:
Table Names in Database: Customers, Suppliers
Columns in Customers Table: Name, Phone, Email, Address
I click on the word "Customers" in the first dropdownlist.
I then see the words "Name", "Phone", "Email", "Address" in the second dropdownlist.
I'm sure you all know this (but I'll say it anyways): I could hardcode this stuff in my code behind file, but that would be really annoying and if the table structure changes I would have to revise my code on the webpage. So any ideas on how to do this the right way would be really cool.
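A minimal sketch using the INFORMATION_SCHEMA views, which exist in SQL Server 2000 and later; the table name in the second query would be parameterized from the first dropdownlist's selection.
-- user tables for the first dropdownlist
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_NAME;
-- columns of the selected table for the second dropdownlist
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Customers'
ORDER BY ORDINAL_POSITION;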
Thanks in advance,
Robert
View 5 Replies
View Related
Jul 27, 2004
In SQL Server you can do a SELECT INTO to create a new table, much like CREATE TABLE AS in Oracle. I'm putting together a dynamic script that will create a table where the number of columns is the dynamic part of my script. Got any suggestions that come to mind?
Example:
I need to count the number of weeks between two dates; my table needs at least one column for every week returned in my query.
I'm thinking of getting a count of the number of weeks, then building my column string comma-separated, then doing my CREATE TABLE statement rather than the SELECT INTO... But I'm not sure I'll be able to do that using a variable that holds the string of column names. I'm guessing the only way I can do this is via either VBScript or VB rather than from within the database.
BTW - this would be a stored procedure...
Any suggestions would be greatly appreciated.
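It can stay inside T-SQL by building the CREATE TABLE statement as a string and EXECuting it; a hedged sketch that makes one int column per week between two example dates (all object names are placeholders):
DECLARE @Weeks int, @i int, @cols varchar(8000), @sql varchar(8000);
SET @Weeks = DATEDIFF(WEEK, '2004-01-01', '2004-03-31');
SET @cols = '';
SET @i = 1;
WHILE @i <= @Weeks
BEGIN
    SET @cols = @cols + ', Week' + CAST(@i AS varchar(10)) + ' int NULL';
    SET @i = @i + 1;
END
SET @sql = 'CREATE TABLE WeeklyBuckets (RowID int NOT NULL' + @cols + ')';
EXEC (@sql);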
View 1 Replies
View Related
Jan 26, 2015
Project has 2 tables process(parent) and processchild(child).
The project workflow records any changes to these tables as a history.
I want to find out all the processes that are in status = saved (1) where the processchild is at status = started (1).
Here is example.
Process table
PK, processid, status , other data
1, 1, 1,...
2, 1, 2,...
3, 2, 1,...
4, 3, 1,...
ProcessChild table
PK, processid, processchildid status, other data
1, 1, 1, 1,..
2, 1, 1, 2,..
3, 1, 2, 1,...
4, 1, 2, 2,...
5, 2, 1, 1,..
6, 2, 1, 2,...
7, 2, 2, 1,...
8, 3, 1, 1,..
I want to find out all the processes where processchildid=2 and processchild.status =1
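A sketch of one way to express that filter with EXISTS, using the column names from the sample data; note it reads the tables literally and ignores the history aspect (if only the latest row per process should count, a MAX(PK) filter would be needed on top).
SELECT p.*
FROM Process AS p
WHERE p.status = 1                          -- process is "saved"
  AND EXISTS (SELECT 1
              FROM ProcessChild AS c
              WHERE c.processid = p.processid
                AND c.processchildid = 2
                AND c.status = 1);          -- child 2 is "started"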
View 3 Replies
View Related
Jul 20, 2005
Hi, I want to create a temporary table and store the log details from the a.logdetail column.
select a.logdetail , b.shmacno
case when b.shmacno is null then
select
cast(substring(a.logdetail, 1, charindex('·', a.logdetail) - 1) as char(2)) as ShmCoy,
cast(substring(a.logdetail, charindex('·', a.logdetail) + 1, charindex('·', a.logdetail, charindex('·', a.logdetail) + 1) - (charindex('·', a.logdetail) + 1)) as char(10)) as ShmAcno
into ##tblabc
end
from shractivitylog a
left outer join shrsharemaster b
on a.logkey = b.shmrecid
This statement is giving me a syntax error. Please help me.
Server: Msg 156, Level 15, State 1, Line 2
Incorrect syntax near the keyword 'case'.
Server: Msg 156, Level 15, State 1, Line 7
Incorrect syntax near the keyword 'end'.
Sample data in a.logdetail:
BR··Light Blue Duck··Toon Town Central·Silly Street···02 Sep 2003·1·SGL·SGL··01 Jan 1900·0·0·0·0·0.00······0234578·····UB··Aqua Duck··Toon Town Central·Punchline Place···02 Sep 2003·1·SGL·SGL··01 Jan 1900·0·0·0·0·0.00·····
Regards.
View 2 Replies
View Related