(Location_ID is further related to the Address of the contact.)
The GUI has a single form to enter these details. On a save command, the details are inserted into each of the tables individually, from Area up to Country,
and simultaneously a row with the details is inserted into Location_Table.
These tables are queried in the following situations:
(1) A GUI user can select an Area, and then the related details (ZIP, and so on, up to Country) should be loaded automatically (if they were previously stored in the database by user entry).
(2) Contacts have to be retrieved on the basis of Area, ZIP, ... Country, with the necessary groupings (see the sketch after the example below).
Example:
If contacts are queried country-wise, then the display should be:
Country1
    State1
        District1
            County1
                City1
                    ZIP1
                        Area1
                        Area2
                    ZIP2
                City2
            County2
        District2
Country2
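A minimal sketch of the grouped retrieval in requirement (2), assuming hypothetical table and column names: one lookup table per level, Location_Table holding one row per stored combination, and a Contacts table referencing Location_ID. Ordering by the hierarchy columns lets the front end indent the levels as in the display above:

SELECT c.CountryName, s.StateName, d.DistrictName, co.CountyName,
       ci.CityName, z.ZIPCode, a.AreaName, ct.ContactName
FROM Contacts ct
JOIN Location_Table l ON l.Location_ID = ct.Location_ID
JOIN Country  c  ON c.CountryID  = l.CountryID
JOIN State    s  ON s.StateID    = l.StateID
JOIN District d  ON d.DistrictID = l.DistrictID
JOIN County   co ON co.CountyID  = l.CountyID
JOIN City     ci ON ci.CityID    = l.CityID
JOIN ZIP      z  ON z.ZIPID      = l.ZIPID
JOIN Area     a  ON a.AreaID     = l.AreaID
ORDER BY c.CountryName, s.StateName, d.DistrictName, co.CountyName,
         ci.CityName, z.ZIPCode, a.AreaName

For requirement (1), the same joins filtered by the selected area (WHERE a.AreaID = @AreaID) return the stored ZIP-through-Country details for automatic loading.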
Please Guide.
SuryaPrakash
I have a system that basically stores a database within a database (I'm sure lots of you have done this before in some form or another). At the end of the day, I'm storing the actual data generically in a column of type nvarchar(4000), but I want to add support for unlimited text. I want to do this in a smart fashion. Right now I am leaning towards putting in two nullable Value fields:

ValueLong ntext nullable
ValueShort nvarchar(4000) nullable

and dynamically storing the info in one or the other depending on the size. ASP.NET does this very thing in its Session State model; look at the ASPStateTempSessions table. This table has both a SessionItemShort of type varbinary(7000) and a SessionItemLong of type image.

My question is: is it better to use varbinary(7000) and image? I'm thinking maybe I should go down this path simply because ASP.NET does, but I don't really know why. Does anyone know what the benefit of using the varbinary and image datatypes would be? If it's just to allow saving of binary data, then I don't really need that right now (and I don't think ASP.NET does either). Are there any other reasons?

thanks,
dave
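A minimal sketch of the short/long split described above, with a hypothetical Settings table (on SQL Server 2005 and later a single nvarchar(max) column would replace the pair, but this mirrors the two-column scheme):

CREATE TABLE Settings (
    SettingID  int IDENTITY PRIMARY KEY,
    ValueShort nvarchar(4000) NULL,  -- values of 4000 characters or fewer
    ValueLong  ntext NULL            -- overflow for longer text
)

-- The application routes each value by its length; exactly one column is non-null per row:
INSERT INTO Settings (ValueShort, ValueLong) VALUES (N'a short value', NULL)
INSERT INTO Settings (ValueShort, ValueLong) VALUES (NULL, N'...text longer than 4000 characters...')

As to varbinary versus nvarchar: varbinary/image store raw bytes with no collation or code-page handling, which suits ASP.NET's serialized session blobs; for data that really is text, nvarchar/ntext keep it searchable and comparable.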
Friends, who is responsible for the design of a database? The system analyst, DBA, database designer, or project leader? I am working as a system analyst, but am now designing the database for an ERP package, which I feel is another man's work. Confused. Please help me.
Hi all, does anyone know how I can design the database schema? I mean, what tools can be used to design the database and view the table relationships, etc.? TIA.
One interviewer asked me the following question: what things do you consider while designing a database? I talked about integrity constraints and normal forms, but he added 15 more concepts, like: 1. indexes, 2. table columns, 3. table rows, 4. search facilities, ... Can anyone give a full answer to this question? Thanking you, Ashok Kumar.
My current project requires me to convert MySQL-based software to a more generic one. I started by designing separate DB class files and separated the lower-level connection queries from the business logic. By doing this, I now have mssql.class, mysql.class, sqllite.class, etc.
But I am not sure how to handle SQL functions in queries. For instance, one of my queries needs a date function to add minutes to a DB field.
In MySQL, I accomplish this using:
dbfield + INTERVAL '$arg' MINUTE BETWEEN date1 AND date2
But in MSSQL I cannot use this type of query; it seems I'll have to use the DATEADD() function. How do I handle this situation?
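For reference, a hedged sketch of the T-SQL equivalent, with hypothetical column names; MySQL's INTERVAL arithmetic maps onto DATEADD:

-- MySQL:  dbfield + INTERVAL '$arg' MINUTE BETWEEN date1 AND date2
-- T-SQL:
SELECT *
FROM MyTable
WHERE DATEADD(MINUTE, @arg, dbfield) BETWEEN date1 AND date2

One way to keep the business logic generic is to give each driver class a small method (a hypothetical addMinutesExpr(field, minutes)) that returns the dialect-specific fragment, so queries never embed INTERVAL or DATEADD directly.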
Dear All, how do I reach the highest level of normalization for database design? Guidelines needed. What will be the characteristics of a completely normalized database? A checklist is needed. Thanks, SuryaPrakash Patel
I am attempting to create a Visual C++ application based on displaying financial charts, and am using SQL Express to store stock information such as the exchanges the stocks are traded on, the indices and sectors they belong to, and the closing prices for as long as I can download data for. I am not proficient in C++ nor SQL, and am using this project to learn both languages as well as to make myself rich beyond my wildest dreams.
I have "designed" a database with the following tables:
tblDate_ (1 column): clmDate (PK, smalldatetime, NOT NULL)
tblStockExchange_ (4 columns): clmStockExchangeID (PK, int, NOT NULL), clmParentID (int, NULL), clmStockExchange (nvarchar(50), NOT NULL), clmMarkets_ (FK, nchar(20), NOT NULL)
tblMarkets_ (1 column): clmMarkets (PK, nchar(20), NOT NULL)
tblIndices_ (1 column): clmIndices (PK, nchar(50), NOT NULL)
tblSectors_ (1 column): clmSectors (PK, nchar(50), NOT NULL)
tblMarkets_Sectors (3 columns): clmMarkets_SectorsID (PK, int, NOT NULL), clmMarkets_ (FK, nchar(20), NOT NULL), clmSectors_ (FK, nchar(50), NOT NULL)
tblSecurities_ (4 columns): clmEPIC (PK, nchar(10), NOT NULL), clmSecurity_Type (nchar(5), NOT NULL), clmSecurty_Name (nchar(50), NOT NULL), clmSectors_ (FK, nchar(50), NOT NULL)
tblSecurities_Indices (3 columns): clmSecurities_IndicesID (PK, int, NOT NULL), clmEPIC_ (FK, nchar(10), NOT NULL), clmIndices_ (FK, nchar(50), NOT NULL)
tblSecurities_Date_OHLCV (8 columns): clmOHLCVID (PK, int, NOT NULL), clmEPIC_ (FK, nchar(10), NOT NULL), clmDate_ (FK, smalldatetime, NOT NULL), clmOpen (float, NOT NULL), clmHigh (float, NOT NULL), clmLow (float, NOT NULL), clmClose (float, NOT NULL), clmVolume (float, NOT NULL)
Why so many tables? Perhaps you should put some more in...
This was the only way I could work out how to store one-to-one and one-to-many relationships required for:
- Many closing prices for many stocks
- Stocks belonging to many indices
- Stocks belonging to only one sector
- Stocks belonging to only one market (MainMarket or AIM for the LSE)
- Stocks belonging to only one exchange (I am aware of dual-listed stocks, but one thing at a time)
Why nchar's and not nvarchar's?
Because I didn't realise the benefits of nvarchar's until recently. How can I change this and lose the extra spaces in the cells?
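A minimal sketch of the conversion for one column, assuming no index or constraint depends on it (key columns such as clmEPIC would need their keys dropped and recreated around the change); ALTER COLUMN changes the type, and RTRIM removes the padding that nchar left behind:

ALTER TABLE tblSecurities_ ALTER COLUMN clmSecurty_Name nvarchar(50) NOT NULL
UPDATE tblSecurities_ SET clmSecurty_Name = RTRIM(clmSecurty_Name)

Repeating this per nchar column removes the trailing spaces; the FK columns are the fiddly ones, because both ends of each relationship must change together.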
Why do some tables have IDs and others don't?
I decided to put ID columns in for tables that didn't have obvious primary keys. If someone could explain the advantages of ID columns, I would be grateful.
To the SQL professional's eye there will be some obvious things wrong with this design, and your criticism is welcome. The database I have is achieving what I would like it to do; I can plot charts using the data, but I have run into problems when trying to create a TreeView control, which is what I would like to use as a navigational tool in my application.
It would seem that pulling hierarchical data from a relational database, to pass to the TreeView control, is a tricky task to say the least. I have found many articles online which discuss how to do this (using an adjacency list model or nested set model), but they define a fairly simple example at the beginning (based on fruit or electrical goods) and don't appear to talk about gathering data from an existing relational database, or changing an existing relational database so that it is more suited to storing hierarchical information. I have Joe Celko's Trees and Hierarchies in SQL for Smarties, but sadly this fine material is a little beyond me!
I would like the hierarchy to look like this:
StockExchange
    Market
        Sector
            Stock
Indices
    Sector
        Stock
I have written three queries to get the StockExchange, Market, Sector, and Stock information individually from each table, but am struggling with ways to put all the rows together, add left and right values (nested set model), and then run queries against this to get individual nodes to pass to the TreeView control. Therefore, is there something I need to add to the original design?
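A minimal sketch of one way to feed the first tree to the TreeView without restructuring the schema, using the existing tables; each query supplies a node's key plus its parent's key, and the application appends nodes level by level:

-- Level 1: exchanges (root nodes)
SELECT clmStockExchangeID, clmStockExchange FROM tblStockExchange_

-- Level 2: markets under each exchange
SELECT se.clmStockExchangeID, m.clmMarkets
FROM tblStockExchange_ se JOIN tblMarkets_ m ON m.clmMarkets = se.clmMarkets_

-- Level 3: sectors under each market
SELECT clmMarkets_, clmSectors_ FROM tblMarkets_Sectors

-- Level 4: stocks under each sector
SELECT clmSectors_, clmEPIC, clmSecurty_Name FROM tblSecurities_

Populating the control then means inserting the level-1 rows as roots and, for each later result set, finding the parent node by its key and appending the child. The nested set model mainly pays off when a hierarchy has arbitrary depth; this one has fixed levels, so plain joins may be all that is needed.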
So I started an SQL CE database for use inside a mobile application. I used SSMS to create a .sdf file (because if I let Visual Studio do it, SSMS can't open it because it will be version 3.5).
The reason I wanted to use SSMS is because I wanted to be able to design the database in design view, and populate it with initial information for the application to use. That means adding tables and rows.
The problem is, SSMS doesn't seem to have a design view available when working with CE databases. It also doesn't seem to have a feature for adding rows into tables. It's essentially as useless as VS2008 for designing my mobile database. It only lets me add tables, and I can't even do that visually in design view. I have to use those cumbersome forms.
Is there any way to design a CE database in design view? And add rows of data into tables? It feels like I'm overlooking something.
Note: I don't need to subscribe or publish the database. The mobile application just needs to use the database as a repository.
The following dynamic query returns a list of tables, some of which do not have records in them. Can someone help me out? I am trying to exclude the tables where no records are returned. Also, I want to exclude tables that do not have an rn_create_user or rn_edit_user column defined.
DECLARE @TableName sysname, @Sql varchar(8000)
SELECT @TableName = MIN(TABLE_NAME) FROM INFORMATION_SCHEMA.TABLES
WHILE @TableName IS NOT NULL
BEGIN
    SELECT @Sql = 'select o.* from ' + @TableName
        + ' o where not exists (select * from users u where'
        + ' (u.users_id = o.rn_create_user) or (u.users_id = o.rn_edit_user))'
    EXEC (@Sql)
    SELECT @TableName = MIN(TABLE_NAME) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME > @TableName
END
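A hedged sketch of the column filter: restricting the driving SELECT to tables that actually contain both audit columns stops the loop from generating SQL against tables where rn_create_user or rn_edit_user is not defined (for skipping empty result sets, the dynamic SQL could test IF EXISTS before selecting):

SELECT @TableName = MIN(t.TABLE_NAME)
FROM INFORMATION_SCHEMA.TABLES t
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS c
              WHERE c.TABLE_NAME = t.TABLE_NAME AND c.COLUMN_NAME = 'rn_create_user')
  AND EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS c
              WHERE c.TABLE_NAME = t.TABLE_NAME AND c.COLUMN_NAME = 'rn_edit_user')

The same conditions go on the SELECT at the bottom of the loop, alongside TABLE_NAME > @TableName.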
I am pretty new to database administration and was wondering if I could get some advice here so I have a head start.
Unfortunately, I have come to the conclusion, the hard way, that the SQL Agent on MS SQL 2000 doesn't work properly when it comes to database backups. I discovered that the backups it creates, and claims were performed successfully, do not work.
So I was wondering what would be the proper way to create a backup of a highly critical database. The database is on MS SQL 2000. I am not required to have the application roll over to the backed-up data in case the main database crashes, but it is absolutely necessary to be able to restore the database to the most recent working copy. So I guess performing a backup once a day would do. Losing a day's data would not be such a big deal, since I have other ways of restoring that day's data, as long as I have the full database up to the previous day.
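A minimal sketch of a daily full backup on SQL Server 2000, with a verification pass so a bad backup is caught at backup time rather than at restore time (database name and path are hypothetical):

BACKUP DATABASE CriticalDB
TO DISK = 'D:\Backups\CriticalDB_Full.bak'
WITH INIT   -- overwrite yesterday's backup set

RESTORE VERIFYONLY
FROM DISK = 'D:\Backups\CriticalDB_Full.bak'

RESTORE VERIFYONLY checks that the backup set is complete and readable; the only full test is a periodic trial restore on another server, which is worth scheduling for a highly critical database.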
I was looking at database mirroring, but that is not available with SQL 2000.
I also thought of running replication from the SQL 2000 database to a SQL 2005 database, which apparently has the SQL Agent working properly, and then running the agent on the SQL 2005 copy to back it up daily. Would that work?
Any advice is appreciated, and if there are any white papers or books I could look at, that would be great.
I looked around the Microsoft web sites but could not find anything on how to submit an enhancement request for Microsoft Reporting Services. I would like to request the ability to prevent exporting of a report without having to disable exports for all reports on the reporting server. In other words I would like certain reports to be exportable but others not.
For developers, we often have a need to backup a production database and restore it on local or integration machines. This production database is enabled for service broker and operates at a relatively high traffic level. When the database is backed up, the size is nearly 12GB; when SET NEW_BROKER is subsequently executed on the restored database, the size goes down to about 800MB. It appears that most of this is residing in the xmit queue. So, my question is: how best to backup a production database with queues activated, etc. without ending up with a 12GB backup?
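A small diagnostic sketch, assuming SQL Server 2005: counting the rows waiting in the transmission queue before taking the backup shows whether the xmit queue really is what is inflating the 12GB backup:

SELECT COUNT(*) AS queued_messages FROM sys.transmission_queue

If the count is large, letting the queue drain (or ending orphaned conversations) before the backup keeps the backup small without needing SET NEW_BROKER on the restored copy, which invalidates existing conversation handles.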
I have a table with almost 2 million records. There are two columns (Col1 int, Col2 int), and the combination of them is unique. When I run a simple select statement like (select * from tblTableName where Col1 = 5) with the execution plan on, the cost is extremely high, even with a PK on (Col1, Col2). I tried to put a clustered index on those columns, but the execution plan still shows "table scan" instead of "index seek". How can I speed this up? I use that table mostly for searches.
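A sketch of the index definition, assuming the existing primary key is nonclustered (a table can have only one clustered index, so if the PK is already clustered it would have to be dropped and recreated); with Col1 leading, WHERE Col1 = 5 can seek:

CREATE UNIQUE CLUSTERED INDEX IX_tblTableName_Col1_Col2
ON tblTableName (Col1, Col2)

If the plan still shows a scan, check that the index exists with Col1 first, and update statistics. A scan can also simply be the cheaper plan when Col1 = 5 matches a large share of the rows and the only index is nonclustered, because SELECT * then needs a lookup per matching row.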
Dear friends, we have a problem in our existing system and are expecting some expert comment on it. We have a core banking system with MS SQL Server as the back end and IIS as the application server. Our system is always very slow at peak transaction times. We are planning to optimize this over a short time frame, so please give some suggestions that our DBA team can implement quickly with SQL Server 2000.
I look after a database which is part of a third-party CRM product. The users of the product complain of intermittent poor performance; the suspicion is that some more senior users are running their own queries (the product allows users to do this). I've been asked by the development team to try to capture the details of long-running queries. I've looked at the events listed in Profiler and can't see one that would be useful. Ideally I want to know who is running which query that is taking longer than x seconds. Any suggestions? TIA, Laurence
;WITH cte AS
(
    SELECT RANK() OVER (PARTITION BY username ORDER BY guid) AS rank
    FROM MyTable
    WHERE siteurl = 'myurl'
      AND VisitedDateTime BETWEEN '2007.02.05' AND '2007.09.30'
      AND IsFiltered = 0
    GROUP BY username, guid
)
SELECT COUNT(rank) FROM cte WHERE rank = 2
This query is taking 6 seconds to execute. The MyTable table is properly indexed, and 1 million rows are returned by the common table expression. I want to reduce the execution time of this query to a few milliseconds. Please help me.
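A hedged sketch of a covering index for this query, assuming SQL Server 2005 and that no equivalent index already exists; the key columns match the equality and range predicates, and the INCLUDE columns let the grouping be satisfied from the index alone, avoiding lookups into the base table:

CREATE INDEX IX_MyTable_siteurl_filtered_date
ON MyTable (siteurl, IsFiltered, VisitedDateTime)
INCLUDE (username, guid)

With a million qualifying rows the query still has to aggregate them all, so milliseconds may be out of reach; a covering index removes the per-row lookup cost, which is usually the largest component.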
I am using the query below to create a key from a combination of columns and a separator "%", to insert into a table. My issue is with the performance of this query: it returns around 6,000 records and takes 11 seconds to return the result. Is there any way I can optimize this query to improve performance?
Select (Item.ItemCode +'%'+ Product.Name + '%' + Quantity.ID) as Key1 from Items,Products,Quality
Please advise.
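One observation on the posted query, offered tentatively: the FROM clause lists Items, Products, Quality with no join conditions (and the aliases Item, Product, Quantity do not match those names), so as written it produces a cross join, which grows multiplicatively with table sizes. A hedged sketch with explicit joins, assuming hypothetical foreign keys ProductID and ItemID:

SELECT i.ItemCode + '%' + p.Name + '%' + CAST(q.ID AS varchar(20)) AS Key1
FROM Items i
JOIN Products p ON p.ProductID = i.ProductID   -- hypothetical FK
JOIN Quantity q ON q.ItemID    = i.ItemID      -- hypothetical FK

With real join predicates, 6,000 output rows should return in well under a second; the CAST also guards against a conversion error if ID is numeric.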
In a database here we have a table that is a list of codes. Many other tables in our database have foreign keys to this table (T_TYPE_CODE). The joined column is an integer that is a clustered index on the table.
One of the developers here says that in one of his queries he has to OUTER JOIN to this table 15 times and that the OUTER JOIN is killing performance. He wants to add a record to T_TYPE_CODE that will represent NULL so that any NULL values in the tables that foreign key to this table will use that ID instead of NULL. In this way he could use all INNER JOINs.
To me this seems like a bad idea - NULL is NULL and creating a value to represent NULL will open a whole can of worms.
My question: Is there a performance hit for using OUTER JOINs against this table, considering that the join is on a single column and is a clustered, unique index?
Also, what problems can we expect to run into if we use a dummy record to represent NULL?
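For concreteness, a sketch of the pattern under discussion, with hypothetical table and column names; the LEFT OUTER JOIN preserves rows whose code is NULL, and ISNULL supplies display text without adding a dummy row to T_TYPE_CODE:

SELECT o.order_id,
       ISNULL(tc.code_desc, 'N/A') AS code_desc
FROM T_ORDER o                                          -- hypothetical table
LEFT JOIN T_TYPE_CODE tc ON tc.type_code_id = o.type_code_id

Against a single-column clustered unique integer key, each outer join is a cheap seek; it is worth confirming from the actual plans that the outer joins, and not something else in the query, are the cost before redefining NULL semantics.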
Can someone tell me if it is possible to see the drives on the server using Performance Monitor? If so, where are they hiding? Because I struggled the whole day!
We recently upgraded from SQL 2000 to SQL Server 2005. Our system was developed using Microsoft Visual Basic. Since we upgraded to SQL Server 2005, our system has been very slow. I suspect that it is because of the new SQL 2005 installation that I made. Does anyone know how I can solve this problem and bring the performance of our system back up? It has been stressing me for a while now.
In my cube, I used the Time Intelligence wizard to create calculated members such as YearToYearGrowth and YearToYearGrowth%. But when I try to pull those two items into RS 2005 at more detailed levels, the performance of running the MDX dataset is terrible, even though my fact table has only 36,234 rows and I have 6 dimensions (the largest dimension has 37,801 rows).
One thing that I did was remove the ALLMEMBERS keyword in the MDX designer, but that didn't improve the execution time much. Right now it takes 1.5 minutes to execute my MDX query in RS 2005. Is there a way to improve the performance of an MDX query (generated in RS) in RS 2005?
I have a table with 5 million rows (6 months of data); every month approximately 1 million rows are inserted (not bulk insert). The table is properly indexed. There is a stored procedure that generates a report based on this table (only this table, no joins). The stored procedure does a lot of permutations (lots of temporary tables, GROUP BY ... HAVING, COUNT(), etc.). It takes 2 minutes to generate the report, and I want the report to be generated in under a second.
Should I partition this table? The size of the table is 2 GB. I don't know whether this table is a right candidate for partitioning or not. Please help me.
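For reference, a minimal sketch of SQL Server 2005 partitioning by month, with hypothetical boundary dates and column names; note that partitioning mainly speeds date-range elimination and maintenance, not arbitrary aggregation, so it may not help a report that scans all 6 months anyway:

CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2007-05-01', '2007-06-01', '2007-07-01',
                           '2007-08-01', '2007-09-01', '2007-10-01')

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY])

-- Then rebuild the clustered index on the scheme, e.g.:
-- CREATE CLUSTERED INDEX IX_BigTable_Date ON BigTable (ReportDate) ON psMonthly (ReportDate)

At 2 GB the table is modest; for a sub-second target, maintaining a pre-aggregated summary table (updated on insert or on a schedule) is often a bigger win than partitioning.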
(From an exchange originally posted on SQLServer.com, which wasn't resolved...)
To return views tailored to the user, I have a simple users table that holds user IDs, view names, parameter names, and parameter values that are fetched based on SUSER_SNAME(). The UDF is called MyParam, and takes as string arguments, the name of the view in use, and a parameter name. (The view the user sees is really a call to a corresponding table returning UDF, which accepts some parameters corresponding to the user.)
But the performance is very dependent on the nature of the function call. Here are two samples and the numbers reported by (my first use of) the performance monitor.

Call to the table-returning UDF, using local variables:
declare @orgauth varchar(50)
set @orgauth = dbo.MyParam('DeptAwards', 'OrgAuth')
declare @since datetime
set @since = DATEADD(DAY, -1 * dbo.MyParam('DeptAwards', 'DaysAgo'), CURRENT_TIMESTAMP)
select * from deptAwardsfn(@orgauth, @since)
[187 CPU, 16103 Reads, 187 Duration]
Call to the same table-returning UDF, using scalar UDFs in the parameters:
SELECT * from deptAwardsFn (
    dbo.MyParam('DeptAwards', 'OrgAuth'),
    DATEADD(DAY, -1 * dbo.MyParam('DeptAwards', 'DaysAgo'), CURRENT_TIMESTAMP)
)

[20625 CPU, 1709010 Reads, 20632 Duration]

(My BOL documentation claims the CPU is in milliseconds and the Duration is in microseconds -- which I question.) Regardless of the unit of measure, it takes a whole bunch longer in the second case.
My only guess is that T-SQL is deciding that the parameter values (returned by dbo.MyParam) are nondeterministic, and continually re-evaluates them somehow or other. (Whatever happened to call by value?)
Can anyone shed some light on this strange (to me) behavior?
----- (and later, from me) -----
(I have since discovered that the reference to CURRENT_TIMESTAMP in the function argument is the cause, but I suspect that is an error -- it should only capture the value of CURRENT_TIMESTAMP once, when making the function call, IMHO.)
I am trying to build something similar to www.alienware.com, where it lets you build your own computer. I was wondering if someone could help me design the structure to do it on my own. I am a zero in DB and know a little ASP. I am trying to do it for my own site.
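A minimal starting-point sketch, with all table and column names hypothetical: a configurable product has component categories (CPU, memory, and so on), each category has selectable options with prices, and a saved build is the set of chosen options:

CREATE TABLE Category (
    CategoryID int IDENTITY PRIMARY KEY,
    Name       varchar(50) NOT NULL            -- e.g. 'CPU', 'Memory'
)
CREATE TABLE ComponentOption (
    OptionID   int IDENTITY PRIMARY KEY,
    CategoryID int NOT NULL REFERENCES Category(CategoryID),
    Name       varchar(100) NOT NULL,
    Price      money NOT NULL
)
CREATE TABLE Build (
    BuildID    int IDENTITY PRIMARY KEY,
    CreatedAt  datetime NOT NULL DEFAULT GETDATE()
)
CREATE TABLE BuildOption (                     -- one row per chosen option
    BuildID    int NOT NULL REFERENCES Build(BuildID),
    OptionID   int NOT NULL REFERENCES ComponentOption(OptionID),
    PRIMARY KEY (BuildID, OptionID)
)

The ASP pages then list categories, show the options in each, and price a build with SUM(Price) over its chosen options.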
How can I design reports for Reporting Services without using Visual Studio .NET? I have SQL Server 2000 with RS, but I don't have a tool for designing reports. Thx for help.
Dear friends, I'm a junior DBA, and I have to prepare an online examination. For this, I have three categories: a) beginner level, b) intermediate level, c) expert level.
There are also six subjects: SQL Server, Oracle, C#, VB.NET, HTML, and JavaScript. Within each subject I have to select questions of these three types. Now, how can I design for this requirement? Shall I create three tables for beginner, intermediate, and expert, or shall I create six tables, one per subject, and write it accordingly?
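A hedged sketch of a single-table alternative, with hypothetical names: levels and subjects become lookup tables, and each question carries a foreign key to both, so adding a seventh subject or a fourth level is a row insert rather than a schema change:

CREATE TABLE Level (
    LevelID int PRIMARY KEY,
    Name    varchar(20) NOT NULL    -- 'beginner', 'intermediate', 'expert'
)
CREATE TABLE Subject (
    SubjectID int PRIMARY KEY,
    Name      varchar(30) NOT NULL  -- 'SQL Server', 'Oracle', ...
)
CREATE TABLE Question (
    QuestionID   int IDENTITY PRIMARY KEY,
    LevelID      int NOT NULL REFERENCES Level(LevelID),
    SubjectID    int NOT NULL REFERENCES Subject(SubjectID),
    QuestionText varchar(1000) NOT NULL
)

-- e.g. all beginner-level SQL Server questions:
SELECT q.QuestionText
FROM Question q
JOIN Level l   ON l.LevelID = q.LevelID     AND l.Name = 'beginner'
JOIN Subject s ON s.SubjectID = q.SubjectID AND s.Name = 'SQL Server'

One Question table avoids duplicating the same structure three or six times over.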