I am new to SQL database design. I have a table named customerletter_master in which all details of customer letters are saved. At peak, around 200-400 rows per second are inserted into customerletter_master, so the table receives a lot of hits.
The following fields are used in customerletter_master: CID (autogenerated number), letterno (primary key), consignee, consignor, letterstatus1, letterstatus2, letterstatus3, POD, and some more fields. As a letter passes through its stages, letterstatus1 is set to yes, and then letterstatus2 and letterstatus3 are filled in according to the letter's progress.
At the same time, many users can access customerletter_master to search for any letter by letterno. So customerletter_master is used for entering data and searching data simultaneously, and there can be 200-400 users adding and searching records at the same time.
How should I design this table? What should I use to enter and search records while minimizing concurrent hits on the table? Should I use stored procedures or some other method to add and search records? Please help me out with an example.
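Not knowing the real column definitions, here is a minimal sketch of one common shape for this, with hypothetical types and lengths: a clustered key on the autogenerated CID, a unique index on letterno so searches are single-row seeks, and stored procedures for both paths so plans are cached and reused:

CREATE TABLE dbo.customerletter_master (
    CID           INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,  -- autogen number
    letterno      VARCHAR(20) NOT NULL UNIQUE,              -- hypothetical type/length
    consignee     VARCHAR(100),
    consignor     VARCHAR(100),
    letterstatus1 CHAR(3),
    letterstatus2 CHAR(3),
    letterstatus3 CHAR(3),
    POD           VARCHAR(50)
    -- ... more fields
);
GO

-- Insert through a procedure so the plan is reused (hypothetical proc name)
CREATE PROCEDURE dbo.usp_AddLetter
    @letterno VARCHAR(20), @consignee VARCHAR(100), @consignor VARCHAR(100)
AS
    INSERT INTO dbo.customerletter_master (letterno, consignee, consignor)
    VALUES (@letterno, @consignee, @consignor);
GO

-- Search by letterno; the unique index makes this a single-row seek
CREATE PROCEDURE dbo.usp_FindLetter
    @letterno VARCHAR(20)
AS
    SELECT CID, letterno, consignee, consignor,
           letterstatus1, letterstatus2, letterstatus3, POD
    FROM dbo.customerletter_master
    WHERE letterno = @letterno;
GO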
Basically, is it inefficient to open and close a data connection every time data needs to be retrieved, or is there a way for a user to reuse the same connection over multiple pages, or for multiple users to share one global data connection? I just wanted to know this before I build a site where data could constantly be pulled from or put into the database. It would be easier to just keep opening and closing the connection, since that only means pasting in the small chunk of code I use to do that wherever I need it. I hear talk of connections sometimes not closing, and wonder whether that becomes an issue if you are opening and closing too many of them. I am using SqlConnection, SqlCommand and SqlDataReader; my code works, I'm just wondering whether my approach lacks scalability before I find out too late and have to rewrite everything. Jim
Hi everyone. I hope that my question isn't too broad, but here it goes...
I am trying to figure out the best way to scale a SQL Server database so that it can handle a billion simultaneous users querying the same tables, and can easily scale to handle many billions of simultaneous users. The database must also have 99.999% availability. The number of licences and the amount of hardware needed are not an issue.
I have a replicated table with a trigger attached to it. The trigger fires off a Service Broker message for inserts. Originally, for every insert I would begin a conversation, send, and end the conversation when the target sent an end conversation message. Since the replication process uses only a single SPID, I would like to reuse one conversation. The following is what I have for the send procedure on the initiator. I check sys.conversation_endpoints for any open conversation; if the result is NULL I start a new conversation and send, otherwise I just send on the existing conversation. Is there anything wrong with this code? What could cause the conversation on the initiator to be NULL if I never end the conversation on the initiator side? Thanks.
DECLARE @dialog_handle uniqueidentifier
select @dialog_handle = conversation_handle from sys.conversation_endpoints where state = 'CO'
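For reference, a sketch of the full send pattern described above, with hypothetical service, contract, and message type names; the lookup is also narrowed by far_service, since filtering on state alone could pick up an unrelated open conversation:

DECLARE @dialog_handle uniqueidentifier;
DECLARE @message nvarchar(max);
SET @message = N'<Insert/>';  -- hypothetical payload

-- Reuse an open conversation to the target service if one exists
SELECT @dialog_handle = conversation_handle
FROM sys.conversation_endpoints
WHERE state = 'CO'
  AND far_service = 'TargetService';  -- hypothetical service name

IF @dialog_handle IS NULL
BEGIN
    BEGIN DIALOG CONVERSATION @dialog_handle
        FROM SERVICE [InitiatorService]   -- hypothetical
        TO SERVICE 'TargetService'        -- hypothetical
        ON CONTRACT [SomeContract]        -- hypothetical
        WITH ENCRYPTION = OFF;
END

SEND ON CONVERSATION @dialog_handle
    MESSAGE TYPE [SomeMessageType] (@message);  -- hypothetical

On the NULL question: conversations can leave the 'CO' state without the initiator ending them, for example when the target ends the dialog or an error arrives, so the SELECT returning NULL does not necessarily mean the initiator side ended the conversation.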
Using MSSQL (pre-2005): I have a Link table:

int id (primary key/identity)
varchar(50) linkName
varchar(255) linkHref
//some other stuff

and my Hits table:

int id (primary key/identity)
int linkId (foreign key to Link)
datetime dateCreated
//some other stuff

Hits gets an insert whenever a link is clicked. (All this works just fine.) I'm trying to create a report that shows each link by its name and href, a counter, and the last date each link was visited. What I currently have is an accurate listing for links that have been clicked on, but it shows nothing for links that haven't been clicked. Any suggestions as to how I can modify the following SQL to return 0 and "never" (or DBNULL) if no entries are found in Hits with the same id? Or do I have to do this in a couple of queries?

SELECT COUNT(h.id) as counter, MAX(h.dateCreated) as lastVisited,
       l.id as id, l.linkName as linkName, l.linkHref as linkHref
FROM Link as l
INNER JOIN Hits as h ON l.id = h.moduleId
GROUP BY l.linkName, l.id, l.linkHref
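An outer join keeps every Link row even when there are no Hits; a sketch, assuming the join column is Hits.linkId per the schema above (swap in moduleId if that is the real column name):

SELECT COUNT(h.id)        AS counter,     -- counts only matched Hits rows, so 0 when none
       MAX(h.dateCreated) AS lastVisited, -- NULL when never clicked
       l.id, l.linkName, l.linkHref
FROM Link AS l
LEFT JOIN Hits AS h ON l.id = h.linkId
GROUP BY l.id, l.linkName, l.linkHref

COUNT(h.id) ignores NULLs, so unclicked links report 0, and MAX(h.dateCreated) comes back NULL, which the page can render as "never".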
Hi. I would greatly appreciate a clever way to make an internal "counter" on a SQL table that contains 6000 articles (one in each row). These are being accessed from a web page, and I would like to know which are the most read. So I thought of adding a column that would somehow count each time the row is accessed. Is there a way for this to be done? Please be detailed, since I am quite new to all this. Thanks beforehand.
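One simple pattern, sketched with hypothetical table and column names: add an integer counter column and increment it on every page view.

-- One-time schema change (hypothetical names)
ALTER TABLE dbo.Articles
ADD ReadCount int NOT NULL CONSTRAINT DF_Articles_ReadCount DEFAULT 0;

-- Run whenever an article page is served
UPDATE dbo.Articles
SET ReadCount = ReadCount + 1
WHERE ArticleID = @ArticleID;

-- Most-read report
SELECT TOP 10 ArticleID, Title, ReadCount
FROM dbo.Articles
ORDER BY ReadCount DESC;

The UPDATE is a single-row seek on the primary key, so at 6000 articles the overhead per page view is negligible.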
Hi there. I am trying to create an application that requires n columns in a table at run time. For example, I want to create a table of products and their properties. For that I can use master and detail tables, but my problem starts when different products have different numbers of characteristics. Let's say one product has a colour and a size, but another product needs an expiry date as well, and it continues like this. Now I want to know what I can do to handle this problem. Please help me with this.
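This is usually modelled with an attribute (key/value) detail table rather than by adding columns at run time; a sketch with hypothetical names:

CREATE TABLE dbo.Product (
    ProductID   int IDENTITY(1,1) PRIMARY KEY,
    ProductName varchar(100) NOT NULL
);

-- One row per property per product, so each product can have any number of properties
CREATE TABLE dbo.ProductAttribute (
    ProductID      int NOT NULL REFERENCES dbo.Product(ProductID),
    AttributeName  varchar(50)  NOT NULL,  -- e.g. 'Colour', 'Size', 'ExpiryDate'
    AttributeValue varchar(255) NOT NULL,  -- stored as text; cast in queries as needed
    PRIMARY KEY (ProductID, AttributeName)
);

INSERT INTO dbo.Product (ProductName) VALUES ('T-Shirt');
INSERT INTO dbo.Product (ProductName) VALUES ('Milk');
INSERT INTO dbo.ProductAttribute VALUES (1, 'Colour', 'Red');
INSERT INTO dbo.ProductAttribute VALUES (1, 'Size', 'XL');
INSERT INTO dbo.ProductAttribute VALUES (2, 'ExpiryDate', '2025-01-31');

The trade-off is that every value is stored as text and typed queries need casts, so this suits genuinely variable attributes rather than columns you could fix up front.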
I'm new to Reporting Services and have just started working on a report model for ad hoc reporting.
I have around 50 tables, many of them related through FKs. How can I handle this when designing the model?
E.g., I have a table called Posts, which has a Hospital ID that is an FK to the Hospitals table. A relationship exists between the two in my Data Source View; however, I cannot access the Hospital Name (stored in the Hospitals table) using Report Builder.
This routine works in most cases, but fails when a bad date is entered, such as 19910631 -- there is no June 31st. Instead of ignoring the bad date, the entire DTS job fails. Obviously this is something that should be validated at data entry, but unfortunately the only control I have is when appending to the table with these data quirks. Any suggestions appreciated!

'**********************************************************************
' Visual Basic Transformation Script
' Copy each source column to the destination column
'**********************************************************************
Function Main()
    'DTSDestination("Col002") = DTSSource("Col002")
    If DTSSource("Col002") = "99999999" Or IsNull(DTSSource("Col002")) Then
        Main = DTSTransformStat_SkipRow
    Else
        DTSDestination("Col002") = Mid(DTSSource("Col002"), 1, 4) & "/" & _
                                   Mid(DTSSource("Col002"), 5, 2) & "/" & _
                                   Mid(DTSSource("Col002"), 7, 2)
        Main = DTSTransformStat_OK
    End If
End Function

RBollinger
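If the rows can land in a holding table first, a T-SQL pass with ISDATE() can filter out values like 19910631 before the append; a sketch with hypothetical table names (this moves the validation from the DTS script into SQL):

-- Hypothetical staging table holding the raw 8-digit strings
INSERT INTO dbo.TargetTable (DateCol)
SELECT SUBSTRING(Col002, 1, 4) + '/' + SUBSTRING(Col002, 5, 2) + '/' + SUBSTRING(Col002, 7, 2)
FROM dbo.StagingTable
WHERE Col002 IS NOT NULL
  AND Col002 <> '99999999'
  AND ISDATE(SUBSTRING(Col002, 1, 4) + '/' +
             SUBSTRING(Col002, 5, 2) + '/' +
             SUBSTRING(Col002, 7, 2)) = 1;  -- rejects impossible dates like 1991/06/31

ISDATE() returns 0 for calendar-impossible values, so the bad rows are simply skipped instead of failing the whole load.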
Hi. This is my first attempt at using stored procedures and I'm a bit confused. I'm trying to follow as many best practices as I can to improve speed, security and scalability. However, I can't find a solution to what I think should be a simple problem. I have a search page where users enter the criteria of properties they are interested in (bedrooms, price etc.). That takes them to a results page where the properties are displayed. The problem is that I want the number of times each property has been shown on the results page to be tracked, so the property owner gets statistics. The property details are all held in a single table, along with the number of times each property has been shown: Table Name: zk_Property_USA
ID INT
User_ID INT
Property_Type TINYINT
Market_Status TINYINT
Price INT
Bedrooms TINYINT
Address_State VARCHAR
Address_Location VARCHAR
Property_Description VARCHAR
Searched INT 0
Contacted INT 0

I'm trying to find a way to SELECT all the property details to be returned to my results page and UPDATE the "Searched" field by 1, without re-scanning the table for the UPDATE. Is there a way to update "Searched" at the time the record is chosen as a result? I am using SQL Server 2005 and Visual Basic ASP.NET 2.0. Many thanks.
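On SQL Server 2005 the OUTPUT clause can do both in one statement: the UPDATE increments the counter and returns the updated rows, so the table is touched only once. A sketch, with hypothetical search parameters:

UPDATE dbo.zk_Property_USA
SET Searched = Searched + 1
OUTPUT inserted.ID, inserted.User_ID, inserted.Property_Type,
       inserted.Price, inserted.Bedrooms,
       inserted.Address_State, inserted.Address_Location,
       inserted.Property_Description
WHERE Address_State = @State        -- hypothetical criteria
  AND Bedrooms >= @MinBedrooms
  AND Price <= @MaxPrice;

The OUTPUT rows feed the results page directly, and Searched has already been incremented for exactly those rows.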
In a Lookup component I've defined a SQL query which returns a sorted result set. For each Lookup input row I want a single output row. The problem is that for each input row there may be multiple matches in the query's result set. Of all the possible matches I want only the first one returned, and if no match is found, then no output row. How do I implement this?
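One way, assuming the lookup source is SQL Server 2005 or later: deduplicate the reference query itself with ROW_NUMBER() so the Lookup only ever sees one candidate per key (hypothetical table and column names), and configure no-match rows to be ignored or redirected rather than failing the component:

SELECT LookupKey, Payload
FROM (
    SELECT LookupKey, Payload,
           ROW_NUMBER() OVER (PARTITION BY LookupKey
                              ORDER BY SortColumn) AS rn  -- "first" by your sort order
    FROM dbo.ReferenceTable
) AS t
WHERE rn = 1;

With exactly one row per key in the reference set, each input row produces at most one output row by construction.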
We are designing a staging layer to handle incremental load. I want to start with a simple scenario to design the staging.
In the source database there are two tables, e.g. tbl_Department and tbl_Employee. Both tables load a single table at the destination database, tbl_EmployeRecord.
The query which loads tbl_EmployeRecord is: SELECT EMPID, EMPNAME, DEPTNAME FROM tbl_Department D INNER JOIN tbl_Employee E ON D.DEPARTMENTID = E.DEPARTMENTID.
Now we need to identify the incremental load in tbl_Department and tbl_Employee, store it in staging, and load only the incremental rows to the destination.
The columns of the tables are,
tbl_Department : DEPARTMENTID,DEPTNAME
tbl_Employee : EMPID,EMPNAME,DEPARTMENTID
tbl_EmployeRecord : EMPID,EMPNAME,DEPTNAME
How do we design the staging for this to handle inserts, updates and deletes?
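One common sketch: keep a previous snapshot of each source table in staging and diff it against the current extract. Staging table names here are hypothetical, shown for tbl_Employee (tbl_Department is handled the same way):

-- New or changed rows (inserts/updates)
SELECT cur.EMPID, cur.EMPNAME, cur.DEPARTMENTID,
       CASE WHEN prev.EMPID IS NULL THEN 'I' ELSE 'U' END AS ChangeType
FROM stg_Employee_Current AS cur
LEFT JOIN stg_Employee_Previous AS prev ON prev.EMPID = cur.EMPID
WHERE prev.EMPID IS NULL
   OR prev.EMPNAME <> cur.EMPNAME
   OR prev.DEPARTMENTID <> cur.DEPARTMENTID;

-- Deleted rows
SELECT prev.EMPID, 'D' AS ChangeType
FROM stg_Employee_Previous AS prev
LEFT JOIN stg_Employee_Current AS cur ON cur.EMPID = prev.EMPID
WHERE cur.EMPID IS NULL;

-- After applying the changes to the destination, roll the snapshot forward
TRUNCATE TABLE stg_Employee_Previous;
INSERT INTO stg_Employee_Previous SELECT * FROM stg_Employee_Current;

The detected EMPIDs (and, for department name changes, DEPARTMENTIDs) then drive the insert/update/delete against tbl_EmployeRecord through the same join as the full load.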
I have implemented Remus Resanu's implementation from the Recycling Conversations article, and I am experiencing a locking issue when I try to insert new conversation handles into the SessionConversations table. I have copied the code in the article exactly, including the activation procedure. Any ideas why I may be locking? I am thinking it is related to the HOLDLOCK hint on the table.
The specific line where I see locking is directly from the article:
INSERT INTO [SessionConversations] (SPID, FromService, ToService, OnContract, Handle) VALUES (...etc)
Hello, I am doing a full-text search on MS SQL Server 2005 (indexed, with CONTAINS). For performance reasons I am only showing the first 200 rows SQL Server finds ("SELECT TOP 200 ..."). Is there any possibility of getting the estimated total number of matching rows? I have heard it is possible to get this from SQL Server: the server estimates how many rows containing that search word could be in the whole database. Google, for example, does the same thing. Is that true, and what do I have to do to get it? Greetings and thanks, cpt.oneeye
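I can't vouch for a built-in estimate facility here, but an exact total can be fetched with a second aggregate query against the same full-text predicate; a sketch with hypothetical table and column names:

SELECT COUNT(*) AS TotalMatches
FROM dbo.Articles                 -- hypothetical
WHERE CONTAINS(Body, @SearchTerm); -- same predicate as the TOP 200 query

Because it returns a single aggregate instead of 200 full rows, this count is usually much cheaper than the result query itself.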
I'd like to capture the average number of user logins and database hits per 5-minute interval over a week's time. I'm guessing there are system tables containing this information, and that by using temporary tables and/or creating/modifying SPROCs it can be retrieved. If I'm on the right track, a little direction would be much appreciated. If I'm not, please assist this rookie DBA.
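One low-tech sketch: sample master..sysprocesses into a history table on a 5-minute SQL Server Agent schedule, then aggregate over the week. Table names are hypothetical, and "hits" is approximated here as connection count, so adjust the filter to whatever you actually want to measure:

CREATE TABLE dbo.ConnectionSnapshots (
    SampleTime   datetime NOT NULL DEFAULT GETDATE(),
    SessionCount int      NOT NULL
);

-- Scheduled job step, every 5 minutes
INSERT INTO dbo.ConnectionSnapshots (SessionCount)
SELECT COUNT(*)
FROM master.dbo.sysprocesses
WHERE spid > 50;  -- skip system SPIDs

-- Weekly average per 5-minute sample
SELECT AVG(SessionCount) AS AvgSessions
FROM dbo.ConnectionSnapshots
WHERE SampleTime >= DATEADD(day, -7, GETDATE());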
I am serving ad units. In each ad unit I show somewhere between 3 and 10 article headlines. I track the headline impressions to get an idea of the headline click-through rate. I save the output from the stats process in another table. I am currently evaluating the stats every hour and then truncating the table every night at midnight. The problem is that I get lots of impressions, and the database gets bogged down evaluating the data with queries such as:

SELECT COUNT(*) FROM articleimpressions WHERE articleid = x
But the issue isn't the reporting of the data so much as the capture. I had to add caching on the ad server because the database couldn't handle the number of inserts.
I thought about parsing the web server log files the next day, but the file sizes seem too large and I can't process them all in one day (at least not on the hardware I am using).
I've thought about splitting the log files by hour, but was wondering if there is a more "native" solution within SQL Server -- maybe a trigger, and/or a multi-threaded stored procedure that fires and forgets an insert statement to a linked server. But performance is the key here.
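One pattern that avoids both triggers and linked servers: have the ad server accumulate counts in memory (which the caching layer already suggests) and flush pre-aggregated batches, so the database sees one row per article per interval instead of one row per impression. A sketch with hypothetical names:

-- One row per article per hour instead of per impression
CREATE TABLE dbo.ArticleImpressionStats (
    articleid int      NOT NULL,
    statHour  datetime NOT NULL,
    hitCount  int      NOT NULL,
    PRIMARY KEY (articleid, statHour)
);

-- Periodic flush from the ad server's in-memory counters
UPDATE dbo.ArticleImpressionStats
SET hitCount = hitCount + @BatchCount
WHERE articleid = @ArticleID AND statHour = @StatHour;

IF @@ROWCOUNT = 0
    INSERT INTO dbo.ArticleImpressionStats (articleid, statHour, hitCount)
    VALUES (@ArticleID, @StatHour, @BatchCount);

Reporting then sums hitCount directly instead of counting raw rows, and the nightly truncate becomes a cheap DELETE of old intervals.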
In MSSQL, is there a way to count the number of instances of a substring within a string, so that I can sort by that? For example, table tst contains one column, tst_data. If tst_data = "the man with the plan", I'd want a function that counts the occurrences of "the": count_substring(tst_data, 'the') = 2. Basically, I'm making a search engine and I'd like to put the most relevant hits at the top of the page.
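There is no built-in count_substring, but the usual REPLACE/LEN trick gives the same result; a sketch (note it matches raw substrings, so 'the' also counts inside words like 'other'):

SELECT tst_data,
       (LEN(tst_data) - LEN(REPLACE(tst_data, 'the', ''))) / LEN('the') AS occurrences
FROM tst
ORDER BY occurrences DESC;

The expression removes every occurrence of the substring, measures how much shorter the string got, and divides by the substring's length to recover the count.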
I'm using Service Broker and keep getting errors in the log, even though everything is working as expected.
SQL Server 2005. Two databases. Two endpoints, one in each database. Two stored procedures: SP1 is activated when a message enters the sending queue; it inserts a new row in a table. SP2 is activated when a response is sent from the receiving queue; it cleans up the sending queue.
I have a table with an update trigger. In that trigger, if the updated row meets a certain condition, a dialog is created and a message is sent to the sending queue. I know that SP1 and SP2 are behaving properly because I get the expected results: SP1 is inserting the expected data into the table, and SP2 is cleaning up the sending queue.
In the SQL Server log, however, I'm getting errors from both of the stored procs. Error #1: The activated proc <SP 1 Name> running on queue Applications.dbo.ffreceiverQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
Error #2: The activated proc <SP 2 Name> running on queue ADAPT_APP.dbo.ffsenderQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
I would appreciate anybody's insight into why I'm getting this. Have I set up the stored procs incorrectly?
I can provide the code of the stored procs if that helps.
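Without seeing the procs this is only a guess, but that particular message often appears when an activated proc calls END CONVERSATION (or SEND) with a NULL handle, e.g. after a RECEIVE that timed out and returned no rows. A defensive sketch of the receive loop, using one of the queue names from the post:

DECLARE @handle uniqueidentifier, @msgType sysname;

WAITFOR (
    RECEIVE TOP (1)
        @handle  = conversation_handle,
        @msgType = message_type_name
    FROM dbo.ffsenderQueue
), TIMEOUT 1000;

-- Guard: only act when a message was actually received
IF @handle IS NOT NULL
BEGIN
    IF @msgType = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
       OR @msgType = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
        END CONVERSATION @handle;
    -- else: process the application message here
END

The NULL-handle guard is cheap insurance even if the real cause turns out to be elsewhere.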
I was curious to know whether the amount of data sent to the SQL Server matters.
I am working on a web application, and I have three stored procedures that will most likely be called one after the other. Each procedure accepts at least 4 parameters. If instead I create one stored procedure, I will be passing at least 12 parameters, and some of the parameters could be quite bulky (at least 1000 characters).
So which is better: one stored procedure with 12 parameters, or three stored procedures with 4 parameters each, called one after the other?
I'm trying to insert data from two tables into a single table, along with a hard-coded value.
insert into TABLE1 (THING, PERSONORGROUP, ACCESSRIGHTS)
VALUES (
    (select SYSTEM_ID from TABLE2
     where AUTHOR IN (select SYSTEM_ID from TABLE2 where USER_ID = 'USER1')),
    (select SYSTEM_ID from TABLE2 where USER_ID = 'USER2'),
    255
)
I get the following error:
Msg 512, Level 16, State 1, Line 1
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
Can we push the data for the above query into a physical table and create an index to make the query fast, rather than using the same set of tables multiple times?
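The Msg 512 error occurs because the first subquery returns several SYSTEM_IDs while VALUES needs exactly one value per column. Rewriting as INSERT ... SELECT inserts one row per matching author instead; a sketch under that reading of the intent:

INSERT INTO TABLE1 (THING, PERSONORGROUP, ACCESSRIGHTS)
SELECT a.SYSTEM_ID,   -- one row per matching author
       u2.SYSTEM_ID,  -- USER2's id repeated on each row (assumes USER2 resolves to one row)
       255            -- hard-coded access rights
FROM TABLE2 AS a
CROSS JOIN (SELECT SYSTEM_ID FROM TABLE2 WHERE USER_ID = 'USER2') AS u2
WHERE a.AUTHOR IN (SELECT SYSTEM_ID FROM TABLE2 WHERE USER_ID = 'USER1');

This also materializes naturally: the same SELECT with an INTO clause would populate a work table that you could index, covering the physical-table idea in the last question.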
We have implemented our Service Broker architecture using conversation handle reuse per Microsoft/Remus's recommendations. We have all of a sudden started receiving "conversation handle not found" errors in the SQL log every hour or so (which makes perfect sense, considering the dialog timer is set for one hour). My question is: is this expected behavior when you have employed conversation recycling? Should you expect to see these messages pop up every hour, while the logic in the queuing proc retries after deleting from the conversation handle table, so the message is enqueued as expected?
Second question: I think I know why we were not receiving these errors before and wanted to confirm this theory as well. In the queuing proc I was not initializing the variable @Counter to 0, so when it came down to the retry logic it could not add 1 to NULL and never entered that part of the code. I am guessing that with that setup it would actually output the error to the application calling the queuing proc and NOT into the SQL error logs. Is this a correct assumption?
I have attached an example of one of the queuing procs below:
DECLARE @conversationHandle UNIQUEIDENTIFIER,
        @err int,
        @counter int,
        @DialogTimeOut int,
        @Message nvarchar(max),
        @SendType int,
        @ConversationID uniqueidentifier

SELECT @Counter = 0 -- THIS PART VERY IMPORTANT LOL :)

SELECT @DialogTimeOut = Value
FROM dbo.tConfiguration WITH (NOLOCK)
WHERE keyvalue = 'ConversationEndpoints' AND subvalue = 'DeleteAfterSec'

WHILE (1=1)
BEGIN
    -- Look up the current SPID's handle
    SELECT @conversationHandle = [handle]
    FROM tConversationSPID WITH (NOLOCK)
    WHERE spid = @@SPID AND messagetype = 'TestQueueMsg';

    IF @conversationHandle IS NULL
    BEGIN
        BEGIN DIALOG CONVERSATION @conversationHandle
            FROM SERVICE [InitiatorQueue_SER]
            TO SERVICE 'ReceiveTestQueue_SER'
            ON CONTRACT [TestQueueMsg_CON]
            WITH ENCRYPTION = OFF;

        BEGIN CONVERSATION TIMER (@conversationHandle) TIMEOUT = @DialogTimeOut

        -- insert the conversation in the association table
        INSERT INTO tConversationSPID ([spid], MessageType, [handle])
        VALUES (@@SPID, 'TestQueueMsg', @conversationHandle);

        SEND ON CONVERSATION @conversationHandle MESSAGE TYPE [TestQueueMsg] (@Message)
    END
    ELSE IF @conversationHandle IS NOT NULL
    BEGIN
        SEND ON CONVERSATION @conversationHandle MESSAGE TYPE [TestQueueMsg] (@Message)
    END

    SELECT @err = @@ERROR;

    -- if succeeded, exit the loop now
    IF (@err = 0)
        BREAK;

    SELECT @counter = @counter + 1;
    IF @counter > 10
    BEGIN
        -- Refer to http://msdn2.microsoft.com/en-us/library/ms164086.aspx for severity levels
        EXEC spLogMessageQueue 20002, 8, 'Failed to SEND on a conversation for more than 10 times. Error %i.'
        BREAK;
    END

    -- We tried on the said conversation, but failed;
    -- remove the record from the association table, then let the loop try again
    DELETE FROM tConversationSPID WHERE [spid] = @@SPID;
    SELECT @conversationHandle = NULL;
END;
I am trying to script a CASE WHEN to achieve the following.
I have a table of measures, each with a certain threshold. The threshold direction can be either > or <, so I want to create a field that shows whether the measure hits that threshold, to be picked up later in SSRS. So a nested CASE WHEN?

CASE WHEN M.[Threshold Direction] = '>'
     THEN CASE WHEN A.[Value] > M.[Threshold] THEN 'GREEN'
          CASE WHEN A.[Value] < M.[Threshold] THEN 'RED'
          ELSE '' END
     END
END AS 'Condition'

Is this at all possible?
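Nested CASE expressions are possible; the syntax issue above is that additional branches inside a CASE use WHEN, not a fresh CASE keyword. A corrected sketch (the '<' branch is an assumption about how the opposite direction should grade, so adjust the boundary cases to the business rule):

CASE WHEN M.[Threshold Direction] = '>'
     THEN CASE WHEN A.[Value] > M.[Threshold] THEN 'GREEN'
               WHEN A.[Value] < M.[Threshold] THEN 'RED'
               ELSE '' END
     WHEN M.[Threshold Direction] = '<'
     THEN CASE WHEN A.[Value] < M.[Threshold] THEN 'GREEN'
               WHEN A.[Value] > M.[Threshold] THEN 'RED'
               ELSE '' END
     ELSE ''
END AS [Condition]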
SQL Server 2000, MSDE 2000. I have a procedure in my application that I would like only one user at a time to be able to run. Is there a T-SQL command I can run that will essentially lock a set of tables so others cannot access them until the user is done with the procedure, or until the user disconnects from that session (in case of a hung app, I would like the lock released)? TIA
--Tim Morrison
Vehicle Web Studio - The easiest way to create and maintain your vehicle-related website.
http://www.vehiclewebstudio.com
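Rather than locking the tables themselves, sp_getapplock (available in SQL Server 2000) can serialize the procedure with a named application lock; when owned by the session, it is released automatically on disconnect, which covers the hung-app case. A sketch with a hypothetical lock name:

CREATE PROCEDURE dbo.usp_SingleUserProc
AS
    DECLARE @rc int
    EXEC @rc = sp_getapplock
        @Resource    = 'usp_SingleUserProc_lock',  -- hypothetical lock name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Session',   -- released automatically on disconnect
        @LockTimeout = 0            -- fail immediately if someone else holds it
    IF @rc < 0
    BEGIN
        RAISERROR('Another user is currently running this procedure.', 16, 1)
        RETURN
    END

    -- ... the work that must be single-user goes here ...

    EXEC sp_releaseapplock @Resource = 'usp_SingleUserProc_lock', @LockOwner = 'Session'
GO

A return code of 0 or 1 means the lock was granted; negative values mean it was not, so the procedure bails out instead of running concurrently.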
I have six tables in SQL Server, and I have created one view that links all the tables into a single result. Now I want to join two columns, like Column A and Column B = Column C. E.g.:

A      B       C
Atul   Jadhav  Atuljadhav
Vijay          vijayvijay

In the above example, Column A holds the first name and Column B holds the second name, and I want to concatenate these two columns into Column C ("Atuljadhav"); if Column B is blank, then it should join A's value twice. I'd like trigger code so the column updates automatically every time (on update, append, modify, delete).
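A computed column keeps C in sync automatically, with no trigger to write or maintain; a sketch on a hypothetical base table (the rule shown is A + B, or A twice when B is blank or NULL, with letter case left as stored):

ALTER TABLE dbo.Persons  -- hypothetical base table behind the view
ADD C AS (A + ISNULL(NULLIF(B, ''), A));

NULLIF(B, '') turns a blank B into NULL, and ISNULL then falls back to A, producing 'vijayvijay' for a missing second name. Being computed, the column reflects every insert and update without any trigger code.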
I am relatively new to this platform and am trying to update a table of user information and passwords through a series of SQL commands. We import a new file of users every two hours and I would like to have three SQL statements:
- One to add new users not currently in the Users table
- One to delete users no longer in our source file
- One to update every field (but one or two) for already-existing users
Any help on creating the SQL (as well as where to store it and how to automate it) is greatly appreciated!
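A sketch of the three statements, assuming the file is loaded into a staging table first and matched on a UserID key (all names hypothetical):

-- 1) Add new users not currently in Users
INSERT INTO Users (UserID, UserName, Password)
SELECT i.UserID, i.UserName, i.Password
FROM ImportUsers AS i
WHERE NOT EXISTS (SELECT 1 FROM Users AS u WHERE u.UserID = i.UserID);

-- 2) Delete users no longer in the source file
DELETE u
FROM Users AS u
WHERE NOT EXISTS (SELECT 1 FROM ImportUsers AS i WHERE i.UserID = u.UserID);

-- 3) Update existing users (leave the one or two protected columns out of the SET list)
UPDATE u
SET u.UserName = i.UserName,
    u.Password = i.Password
FROM Users AS u
JOIN ImportUsers AS i ON i.UserID = u.UserID;

Wrapped in a stored procedure, the three statements can be scheduled with a SQL Server Agent job to run after each two-hourly import.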