&"Interesting&" Table Design...need Opinions

Mar 6, 2008

Need some opinions on how to query a set of tables designed like this:

Main table: Subscriber

The table has 80 columns, and each column has a companion table which is written to via an update trigger every time the column changes. The Subscriber table is a Type 1 dimension but the "change" tables are Type 2. Each "change" table has 3 columns: SubscriberID, ChangeDate, NewValue.

The requirement is to push a Type 2'd version of the Subscriber table into a data warehouse for historical reporting, meaning the users need to be able to see what a Subscriber looked like on an exact date.

Has anyone had any experience with this type of structure? If so, how did you join the Subscriber table to all the change tables to extrapolate a "view" of the subscriber on each given day? You can assume that a record can only change once in a day, but not every column will change on the record.

I was thinking of something like this:


SELECT Subscriber.ID,
       (SELECT TOP 1 b.NewValue
        FROM Subscriber_FirstNameChanges b
        WHERE b.SubscriberID = Subscriber.ID
          AND b.ChangeDate <= @GivenDate
        ORDER BY b.ChangeDate DESC) AS FirstName
FROM Subscriber

This would pull the most recent version of the subscriber's FirstName as of a given date (I'd have to add in the 79 other change columns with almost identical subqueries). It would also handle the fact that there may be no change record for a particular column on a particular date (not all columns change at the same time/day) by pulling the most recent "version" of the sub's FirstName as of @GivenDate.
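For comparison, the same point-in-time lookup can be phrased with OUTER APPLY (SQL Server 2005 and later): one apply per change table instead of one subquery per column in the select list. A sketch; the second change-table name is my invention for illustration:

SELECT s.ID,
       fn.NewValue AS FirstName,
       ln.NewValue AS LastName
FROM Subscriber s
OUTER APPLY (SELECT TOP 1 NewValue
             FROM Subscriber_FirstNameChanges c
             WHERE c.SubscriberID = s.ID AND c.ChangeDate <= @GivenDate
             ORDER BY c.ChangeDate DESC) fn
OUTER APPLY (SELECT TOP 1 NewValue
             FROM Subscriber_LastNameChanges c   -- hypothetical second change table
             WHERE c.SubscriberID = s.ID AND c.ChangeDate <= @GivenDate
             ORDER BY c.ChangeDate DESC) ln
-- ...and so on for the remaining change tables. An index on
-- (SubscriberID, ChangeDate) in each change table keeps these seeks cheap.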




Interesting Large Table Design Recommendation

Sep 25, 2007

Hi,

What's the most efficient way to store the following information:

* Table contains 1 million listings
* Each listing can be geo-targeted to any of the 200+ countries
* Searches return listings based on location

Storage options:

Option #1 (normalized)
* Listings (PK listingID int) [1 million rows]
* ListingLocations (listingID, locationID) [could be up to 200 million rows]

Option #2 (denormalized)
* Listings (PK listingID int, plus a binary(32) column holding a bit-mask of 200 bits, one for each location)

Usage: Usually the query will simply look up listings based on some keywords. It will get back 50-200 listings. Then the application (C#) will filter the listings based on location.

Did anyone have experience with similar structures? Which option is more efficient?

I know that using the intersection table in Option #1 is the "proper" relational-DB way of doing things. However, I do not like the idea of storing the listingID so many times (once for each locationID).
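For reference, a minimal sketch of Option #1 using the poster's table names (the key order and the example variable are my assumptions):

DECLARE @locationID smallint
SET @locationID = 42   -- example location

-- location-first key keeps each location's listings clustered together
CREATE TABLE ListingLocations (
    listingID  int      NOT NULL REFERENCES Listings (listingID),
    locationID smallint NOT NULL,
    PRIMARY KEY (locationID, listingID)
)

SELECT l.listingID
FROM Listings AS l
JOIN ListingLocations AS ll ON ll.listingID = l.listingID
WHERE ll.locationID = @locationID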

Thanks,
Av


Date Table In Data Warehouse? Opinions...

Jun 1, 2004

I'm reviewing a data warehouse design schema for a client that is following Kimball's data warehousing principles. One of the first things I noticed was a table of dates with expanded columns giving such information as the year, month, month name, fiscal year, quarter, etc. for each date. They also have a surrogate key (int) for the date value, and the fact tables store the surrogate key rather than the date value itself.
They were very surprised when I questioned the purpose of this table, assuring me that Kimball is very strong on the concept of having a date dimension for each fact table.
I don't see the purpose of a table containing nothing but derived date formats. I think they will take a bigger performance hit from having to join through the surrogate key than they would from having to convert date values stored in the fact tables.
Has anybody else ever seen this before? Does Kimball really advise this?
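For context, the kind of table being questioned typically looks something like this (a sketch; the column list is illustrative, not the client's actual schema):

CREATE TABLE DimDate (
    DateKey       int        NOT NULL PRIMARY KEY,  -- surrogate key, e.g. 20040601
    FullDate      datetime   NOT NULL,
    CalendarYear  smallint   NOT NULL,
    MonthNumber   tinyint    NOT NULL,
    MonthName     varchar(9) NOT NULL,
    FiscalYear    smallint   NOT NULL,
    QuarterNumber tinyint    NOT NULL
)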


Interesting Scenario - Multiple Files Updating One Table

Sep 6, 2007

Hi,

I have a scenario where I have to read 2 files that update the same table (a temp staging table). This comes from the source system's limitation on the number of columns that it can export. What we have done as a workaround is split the data into 2 files, where the 2nd file contains the first file's primary key so we know which record to update...

Here is my problem...

The table that needs to be updated contains 9 columns. File one contains 5 of them and file2 contains 4 of them.

File 1 inserts 100 rows and leaves the other 4 columns as nulls and ready for file 2 to do an update into them.
File 2 inserts 10 rows but fails on 90 rows due to incorrect data.
Thus only 10 rows are successfully updated and ready to be processed, but 90 are incorrect. I want to still do processing on the existing 10 but can't afford to try to do processing on the broken ones...

The easy solution would be to remove the incorrect rows from the temp table whenever an error occurs in the 2nd file's package, by running a SQL query on the table using the primary keys that exist in both files; but when the error occurs on the Flat File source, I can't get the primary key.
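One workaround sketch that avoids needing the keys from the failed rows (table and column names are hypothetical, and it assumes the four file-2 columns are never all legitimately NULL): since file 1 leaves the file-2 columns NULL, any row still all-NULL after file 2 has run is one of the casualties.

DELETE FROM StagingTable
WHERE File2ColA IS NULL
  AND File2ColB IS NULL
  AND File2ColC IS NULL
  AND File2ColD IS NULL   -- file 2 never updated these rows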

What would be the best suggestion? Should I rather fail the whole package if 1 row bombs out? I can't put any logic in the following package (the one that does the master file update/insert from the temp table) because of the nature of the data.

Regards
Mike


DB Design :: Table Design For Packages

Aug 18, 2015

I would like to create a table called Product. My objective is to get the list of packages available for each product in a data grid view column while selecting each product. Each product may have different package types (e.g. Nos, CTN, OTR etc.). Some products may have two package types and some three, etc. The quantity in each package may also differ (e.g. for some products a CTN may contain 12 nos, in other cases 8 nos). Prices for each package will also be different and need to be shown as well. How do I design the table?

Product name : Nestle milk               | Rainbow milk
Packages     : CTN, OTR, NOs             | CTN, NOs
Price        : 50, 20, 5                 | 40, 6
Remarks      : CTN = 10 nos, OTR = 4 nos | CTN = 8 nos
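One normalized way to model this (a sketch; all names and types are my assumptions): push the pack definitions into a junction table, so each product can carry any number of package types, each with its own units-per-pack and price.

CREATE TABLE Product (
    ProductID   int IDENTITY(1,1) PRIMARY KEY,
    ProductName varchar(100) NOT NULL
)

CREATE TABLE PackageType (
    PackageTypeID int IDENTITY(1,1) PRIMARY KEY,
    Code          varchar(10) NOT NULL       -- e.g. 'CTN', 'OTR', 'NOS'
)

CREATE TABLE ProductPackage (
    ProductID     int NOT NULL REFERENCES Product (ProductID),
    PackageTypeID int NOT NULL REFERENCES PackageType (PackageTypeID),
    UnitsPerPack  int NOT NULL,              -- CTN = 10 nos for one product, 8 for another
    Price         decimal(10, 2) NOT NULL,
    PRIMARY KEY (ProductID, PackageTypeID)
)

The grid view column can then be built by joining the three tables and concatenating one row per package type.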


Opinions On This Box, Please.

Nov 7, 2002

Hi all.
We are currently running SQL7 on an NT4 server (dual 800MHz, 1GB RAM) and it is being pounded mercilessly 24/7!

We are currently in the market to upgrade, and I would like to get your opinions on this setup. Maybe some has experience with this box, or other issues in upgrading to a new OS and new version of SQL...

Box:
Compaq Proliant ML530

Processors:
2 Xeon 2.8GHz/512KB with 400MHz System Bus

Memory:
2GB (4x512)

Drives:
2 18.2GB U3 SCSI in Raid 1 (for Operating System)
3 72.8GB U3 SCSI in Raid 5 (for Backups/TLogs/OS Swap file)
4 36.4GB U3 SCSI in Raid 5 (for SQL data)

Operating System:
Windows 2000 Server

SQL Server:
MS SQL 2000 Standard Edition

Any thoughts/advice are appreciated. :)


Opinions Please

Apr 21, 2004

Hi, I have probably exhausted the topic of shapes etc... but I am still having a hard time determining the best solution for my problem:

I have several products, each with several specific properties:

Double Tee
-----------------------------------------
Width | Height |Flange | Leg | Count

Column
------------------------
Width | Height

Round Column
-----------------
Radius


Now originally I wanted to create a scalable table structure, so with the help of several people on this site (and SQL Team) I have developed the following:
tbShape
------------------
ShapeID | Shape | XSectionFormula
-------------------------------------------
1 | Rect | Length X Width

tbShapeAttributes
---------------------------------------
fkShapeID | AttributeID | Attribute
----------------------------------------
1 | 1 | Length
1 | 2 | Width

tbProduct
---------------------------------------
ProductID | fkShapeID | Product
--------------------------------------
1 | 1 | Column

tbProductAttributeValues
--------------------------------------------
fkProductID | fkAttributeID | Value
---------------------------------------------
1 | 1 | 10
1 | 2 | 10

From the above table structure I was able to select a product and determine its cross section: obtaining the formula from the tbShape table, using a cursor to replace the attribute names in the formula with the attribute values from the tbProductAttributeValues table, and executing the result as dynamic SQL.
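A sketch of that evaluation without the cursor, using the poster's tables (the row-by-row variable assignment below is a common T-SQL idiom, though its ordering is not formally guaranteed; it is safe here because each attribute name appears once in the formula):

DECLARE @ProductID int, @formula nvarchar(400), @sql nvarchar(500)
SET @ProductID = 1   -- example product

SELECT @formula = s.XSectionFormula
FROM tbProduct AS p
JOIN tbShape AS s ON s.ShapeID = p.fkShapeID
WHERE p.ProductID = @ProductID

-- swap each attribute name in the formula for its stored value
SELECT @formula = REPLACE(@formula, a.Attribute, CONVERT(varchar(20), v.Value))
FROM tbShapeAttributes AS a
JOIN tbProductAttributeValues AS v ON v.fkAttributeID = a.AttributeID
WHERE v.fkProductID = @ProductID

-- 'Length X Width' has become '10 X 10'; turn the X into * and evaluate
SET @sql = N'SELECT ' + REPLACE(@formula, N' X ', N' * ') + N' AS CrossSection'
EXEC sp_executesql @sql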

The problem now is: what if I need to apply different functions to the data for any given product? This proves to be very difficult because the attributes for the product are not necessarily consistent.

For example, let's say the above was a slab 10 feet by 1 foot, giving a cross section of 10 square feet. Because it is simple to get the cross sectional area, I can easily figure out the cubic feet of concrete used by multiplying the cross section by a length. But let's say the user wants to get the cost / square foot? How is the application sure which attribute is the width of the product?

I guess what I am getting at is: why is the structure below not any better than the one above?


tbTemplateCategories
---------------------------------------
CategoryID | Category

tbTemplates
----------------------------------------
TemplateID | fkCategoryID | Template |
-----------------------------------------

tbDoubleTeeTemplates
------------------------------------------
fkTemplateID | Width | Height | Flange | Avg. Leg Width | Leg Count

tbWallTemplates
-----------------------------------------
fkTemplateID | Width | Height


Now there would be a 1 - 1 relationship between tbTemplates and tbDoubleTeeTemplates ON TemplateID - fkTemplateID. To add a new product, simply add the category and the new table, and then alter the stored procs, which would use IF / ELSE IF statements based on the category to go to the appropriate template table.

Also, now I can write any customized functions for any product without the worry of a user misspelling an attribute between the formula and attributes, etc...

Any opinions, thoughts on this would be appreciated!

Mike B


Opinions On Unique IDs

May 13, 2004

I've always used the identity field in SQL Server to maintain the unique id for a table. With the new DB design at work we brought in a DBA and she made us move away from letting SQL maintain the unique field, having us maintain it in code instead. To do that we had to start a transaction, do a SELECT MAX(id) + 1, insert into the table, and commit the transaction. Doing it this way, I'm starting to see deadlocks due to the transactions locking the table.
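For reference, a sketch of that pattern (table and column names assumed). Without locking hints, two concurrent callers can both read the same MAX under shared locks and then deadlock, or collide, when they try to insert; UPDLOCK plus HOLDLOCK serializes the read that feeds the insert:

BEGIN TRANSACTION

DECLARE @NewID int
SELECT @NewID = ISNULL(MAX(ID), 0) + 1
FROM MyTable WITH (UPDLOCK, HOLDLOCK)   -- hold the range until commit

INSERT INTO MyTable (ID, SomeColumn) VALUES (@NewID, 'value')

COMMIT TRANSACTION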

Getting down to what I wanted to know: what are the pros/cons you guys see in maintaining the unique ID this way, and is there a better way of creating a unique id in T-SQL code?

Thanks


Getting The Word Out...opinions?

Nov 9, 2004

Hi all,

I took a search through the archives for related topics (and got Des in trouble along the way :( ) but couldn't find a directly related thread. If I missed one, feel free to tell me where to go (hey...watch that...only if I MISSED one!)

I wrote what is, essentially, a data verification stored proc that goes out to each of FOUR servers we have - each one running a mirror database. In a nutshell, there is one table that contains a row with a column in it that, if everything has gone well in the daily processing in all 4 databases, will match identically between all 4 DBs.

So, that said, here is the output:

Job 'Index - Verify PortfolioIndex Across Servers' : Step 1, 'PortfolioIndex Check across all servers and portfolios' : Began Executing 2004-11-09 15:30:00

------------------- BEGINNING PortfolioIndex VERIFICATION -------------------- [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 2 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 3 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 11 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 67 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 72 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 84 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 90 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 92 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 100 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 105 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 110 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 115 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 120 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 125 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 130 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 135 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 140 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 145 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 150 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 155 on 11/09/2004! [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 160 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 110.582 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 110.582 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1000 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 189.623 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 189.623 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1001 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 164.058 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 164.058 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1002 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 255.978 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 255.978 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1003 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 159.009 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 159.009 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1004 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 318.981 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 318.981 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1005 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 145.921 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 145.921 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1006 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 141.035 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 141.035 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1007 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of NULL [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1008 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 123.179 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 123.179 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
------------------- COMPLETE -------------------- [SQLSTATE 01000]


This was cut-n-pasted here from a log file created by the actual SQL SERVER 2000 job created to run the afore-mentioned stored procedure.

After all that...my quandry is this:

What is the best way to send this info out in an email format to interested parties? Currently I have the job send out an email notification on completion, but that still requires my lazy buttocks to go look at the log file in the job (or, more accurately, on the server in the logfile directory).

I want to get the actual DATA as shown above into the email.

As I see it, my options are:
(1) write the data out to a flat file during the run (or, as is done now, into a log file by the SQL Server scheduled job) and then attach that FILE to the email - this still requires my lazy buttocks to OPEN the attachment that comes with the email.
or (2) write the message out a line at a time to a table with an IDENTITY column (used to order them on the select) and a VARCHAR(128) column that each line in the log would be written to. This option allows me to just do a SELECT in the call to xp_sendmail to get the data into the actual email... but I just really hate the idea of creating a permanent table for this cheesy solution.

I tried it with a temp table within my stored proc, but of course, when I made the call to xp_sendmail, it couldn't see my temp table to select from (mind you, it's not that I mind USING a cheesy table, just that I don't want it to have a lifespan longer than the time I need to use it and toss it aside).
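One workaround sketch (my suggestion, not from the thread): xp_sendmail runs in its own session, so it cannot see a local #temp table, but it can see a global ##temp table, which still disappears once every session referencing it is done:

CREATE TABLE ##StatusLines (LineID int IDENTITY(1,1), LineText varchar(128))

-- the proc INSERTs each status line here instead of PRINTing it, then:
EXEC master.dbo.xp_sendmail
     @recipients = 'interested.parties@example.com',
     @subject    = 'PortfolioIndex verification',
     @query      = 'SELECT LineText FROM ##StatusLines ORDER BY LineID',
     @no_header  = 'true'

DROP TABLE ##StatusLines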

I know the common denominator here is "My Lazy Buttocks", but I really can't understate the laziness of my buttocks, so this is a valid concern ;)

Any thoughts? How do people get status messages like this into an email without using an attachment or a cheesy middleman table?

Sorry, as always, about the miniseries...just trying to set the mood before popping the question ;)


I'd Like To Hear Your Opinions

Jul 30, 2007

I've a core component that is a Win32 DLL; this DLL implements some basic math calculations and conversions between several video systems, PAL, NTSC and so on. Plus this DLL has a memory mapped file that stores a system value that is the current time of day (House Clock); it is written by a service app and read by external apps that need this frame-accurate value.

I have full control of the DLL and now I have to decide whether to use this DLL from SQL Server with P/Invoke or whether it would be better to port it to C#. Both solutions have pros and cons. (The DLL is written in Delphi32 and cannot be easily ported.)

- The most important pro is "code reuse"... SQL implements the same math as the applications, and once the DLL is bug free, SQL math behaves the same as the apps. No need to write code twice and so on.

- The most important con is code security... I'm not sure whether a hard fault in the DLL could take down the whole server; the situation is very unlikely to happen given the type of the DLL... but you never know...
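For what it's worth, the C# port would be exposed to T-SQL along these lines (a sketch with invented names; SQLCLR keeps a fault inside the hosted CLR rather than in in-process native code, which speaks to the con above):

CREATE ASSEMBLY VideoMath
FROM 'C:\assemblies\VideoMath.dll'          -- the ported .NET build of the Delphi DLL
WITH PERMISSION_SET = SAFE
GO
CREATE FUNCTION dbo.FramesToTimecode (@frames int)
RETURNS varchar(11)
AS EXTERNAL NAME VideoMath.[VideoMath.Conversions].FramesToTimecode
GO
-- note: reading the memory-mapped House Clock would need PERMISSION_SET = UNSAFE instead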


I'd like to hear some comment from you...


_________________________________

« www.carlop.com × carlop-dev.blogspot.com »


Opinions Needed

Jan 24, 2007

I would like some opinions on how you would deal with the following scenario:

We probably have somewhere from 500 to 1000 reports (written in Crystal). We also have about 120 clients; each client has their own database. One of the reasons for so many reports is that a lot of our customers want report A but with this or that extra column added, so we end up with a lot of custom versions of one report for a particular client. My question is: in converting over to Reporting Services, how would you set up the file structure?

Right now, we are thinking that every client is going to have their own folder which would contain all Reports, Models, and Datasources. What do you guys think?


Duration Too Long (opinions)?

Sep 15, 2004

Hearing complaints from users about speed on the db server (I have almost no control over the design; it just has to work), I ran Profiler looking for all SQL statements over 4000 milliseconds, and in one hour it returned over 715 T-SQL statements. Over 300 of these were over 10000 milliseconds. This is on an 8-way Dell with 8 gig of RAM. Looking for opinions: how bad does this look compared to other servers you are taking care of? Cache hit ratio is at 99% and system queue length is still under 1, but this does not look good.


Opinions On SQL Server Hardware

Oct 8, 2007

Hi all,

I was wondering if I could get some experienced opinions on SQL hardware to run an ERP app on SQL 2000. The app does not yet support SQL 2005. The ERP app has 25 users and likely won't exceed 30 users for several years. All traffic is on the LAN. The ERP clients basically submit SQL requests for reads and writes. The app makes heavy use of temp tables and temp views but not many stored procedures. The current size of the db is 6GB and will likely double in 4 years.

Planned server:
Windows Server 2003
4 GB RAM
SQL 2000 Standard (ERP app does not yet support SQL 2005)
RAID1 for OS
RAID 10 for SQL data
RAID1 for SQL logs
RAID1 for temp db
Dual, teamed NICs

I would try to get 15K SCSI drives. Any thoughts on SATA instead of SCSI? Could I expect much of an improvement by using SQL 2000 Enterprise since it can use more RAM? I would rather wait for SQL 2005 to be supported. Does anyone have a Dell or HP server configured in an email-able cart that they would care to share?

Thank you.


Opinions On ListCleaner ( WinPure ) ??

Jul 20, 2005

Anyone here tried ListCleaner by a company called WinPure (http://www.winpure.co.uk/lists), a data deduping software, in comparison to other products that may be out there? Looks like some companies like Hewlett Packard use this.

I am looking for a good (and inexpensive) data cleansing tool to dedupe and standardise lists; I'm happy to extract the data from the database and re-import it (which it seems WinPure does) before incorporating it into other BI tools. Tried matchIT from helpIT systems but it is a bit cumbersome.

Particularly interested in a tool with a good phonetic matching engine that handles multiple lists. Mostly work with Oracle, SQL and Access.

Recommendations appreciated.

dbdb


Opinions About Insertion Technique

Jul 20, 2005

Looking for some insight from the professionals about how they handle row inserts; specifically, single row inserts through a stored procedure versus bulk inserts.

One argument comes from people who say all inserts (and updates and deletions, I guess) should go through stored procedures. The reasoning is that the developers that code the client side have no reason to understand HOW the data is stored, just that it is. Another problem is an insert that deals with multiple tables: it would be very easy for the developer to forget a step. That last point also applies to business logic. In my case, adding a security to our SecurityMaster can touch 1 to 4 tables depending on the type of security. Also, certain fields are required while others are set to null depending on the type. Because a stored procedure cannot be passed datasets but only scalar values, when you need to deal with multiple (i.e. bulk) rows you are stuck using cursors. This post is NOT about the pros and cons of cursors; there are plenty of those on the boards (some of them probably started by me and showing my understanding, or more correctly lack of it, of the way to do things). Stored procedures also give you the ability to abort and/or log inserts that cannot happen because of constraint and/or business rule failures.

Another approach is to write code (not accessible from outside the database) that handles bulk inserts. You would need to write in rules to "extract" or "exclude" rows that do not match constraints or business rules, otherwise ALL the inserts would fail because of one bad row. I guess you could put the "potential" rows into a temp table, apply your rules to the temp table, and delete/move rows that would fail. Any rows left can then be bulk inserted. (You could also use the rows that were moved to another temp table for logging why they failed.)

So that leaves us with two possible ways to get data into the system: a single-row-based approach for client apps and a bulk-based one for internal use. But that leaves us with another problem. You now have business logic in TWO separate areas. You have to remember to modify code or fix bugs in multiple locations.

For those that are still reading my post, my question is... How do you handle this? What is the approach you take?
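A sketch of that staged-bulk pattern (all names and rules invented for illustration):

CREATE TABLE #Incoming (SecurityID int, SecurityType char(1), Coupon decimal(9, 4) NULL)

-- ...bulk load #Incoming here (BULK INSERT, DTS, etc.)...

-- siphon off rule violations for logging
SELECT *
INTO #Rejects
FROM #Incoming
WHERE SecurityType NOT IN ('B', 'E')
   OR (SecurityType = 'B' AND Coupon IS NULL)   -- e.g. bonds require a coupon

DELETE i
FROM #Incoming AS i
JOIN #Rejects AS r ON r.SecurityID = i.SecurityID

-- whatever survives goes in as one set-based insert
INSERT INTO SecurityMaster (SecurityID, SecurityType, Coupon)
SELECT SecurityID, SecurityType, Coupon
FROM #Incoming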


Looking For Opinions.....want To Use SQL Server To Store Images

Apr 25, 2006

I have a client who wants to be able to upload images to his website for his customers to access.  It will probably max out at 100 images a month...so not a huge amount of data.  I am using asp.net 2.0 and SQL Server 2005. 
Does anyone have thoughts or opinions on why I should or should not take this approach?


Opinions About Using Database Backup Agents?

Apr 14, 2000

Before I started with this employer the server support staff had planned a backup strategy that included using database agents for Oracle and SQL Server. Nothing is really set up yet for SQL Server so I can still change this direction. Has anyone seen a definite benefit to using database backup agents? If so, what benefits have you seen? There doesn't seem to be much value added by paying for and using an additional product when the database's own utilities are so easy to use, and all the backup files can be backed up to tape with the basic backup software. I've not worked with it, though, so perhaps I am missing something. These are small databases so space is not an issue. Any opinions/comments are appreciated.


Hard Drive Defragmentation Opinions

May 9, 2008

I'm wondering what other people do in regards to running hard drive defragmentation programs on SQL Server 2005 servers (assume 64-bit and Windows 2003). From what I can tell the most common opinions are:

1. Don't defragment because it doesn't help and it can cause problems.
2. Use Diskeeper
3. Use the built-in Windows defragmenter

Other respected defragmentation programs are PerfectDisk, O&O Defrag, JkDefrag, and Contig.

What is your hard drive defragmentation strategy?


Importing Excel File To SQL Server (Opinions Please)

May 18, 2005

Dear All,
I am writing a procedure to import the customer's daily Excel file into SQL Server 2000. I managed to do that: the Excel file is imported directly into SQL Server after creating a new data table, and then I read the created table and import it row by row into my original data table.

The problem:
I. The original Excel file has the following:
a. a protection password
b. contents with two merged headers (which affects the import procedure)
c. and a last line that is a totals line
Before importing the file I have to manually remove (a, b and c)!!

The solution:
II. I am trying to find a way to handle the above points automatically inside the project.
III. Also I thought of importing the Excel file into a DataGrid first, then:
a. let the user approve the file contents, and
b. remove point (I.b) above manually (I don't know how yet, need to try it), then
c. import the DataGrid to the SQL server.

I think I prefer solution (III); any suggestions are highly appreciated.
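On point (II), one avenue worth testing (an assumption on my part, not something from the thread): OPENROWSET with the Jet provider can read a specific sheet range, which sidesteps the merged-header and totals rows; the workbook password, though, still has to be removed first, since Jet cannot open a protected file.

SELECT *
INTO   StagingCustomer
FROM   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'Excel 8.0;Database=C:\Imports\customer.xls;HDR=NO',
                  'SELECT * FROM [Sheet1$A3:G5000]')   -- start below the merged headers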
BR


Server Hardware Opinions Please (Separate Db/iis Vs Same Machine)

Dec 16, 2005

Please help me decide what to do about my current hardware configuration.
I have an ASP.NET app that uses SQL Server for the database.  Currently both IIS and SQL are running on the same machine (see machine 1 below).  I want to separate it so that IIS and SQL each have their own machine but I have a very limited hardware budget right now.  I am trying to decide if it would be worth moving either IIS or SQL to another machine that we have, or if I would actually lose performance by doing so considering the extra machine I have is a bit outdated (see machine 2 below).
Should I leave well enough alone or try to split it across these 2 machines I have? (Buying new machines isn't an option right now, although that's what I'd like to do.) I could probably afford a memory upgrade on one or both computers if necessary.
Machine 1: Dual Xeon 1.8 GHz w/ 1 GB RAM
Machine 2: P3 1.13 GHz w/ 512 MB RAM
Thanks


Opinions Needed On AutoMate To Replace SQLAgent

Jan 25, 2005

We have been told by the director over the DBAs that we may be standardizing ALL scheduled jobs and tasks (including SQL jobs) onto 1 tool called AutoMate (by NetworkAutomation), although I suspect the decision has already been made. I've argued that a standard for batch jobs is good, but SQL has a job scheduler designed for SQL and integrated with SQL that works extremely well; I don't think I'm getting through, though.
Has anyone used AutoMate as a replacement for SQLAgent? I am open to hearing both pros and cons please. Thank you.

Signed, Frustrated DBA


Need Opinions On Creating A Reporting Database More Efficiently

May 27, 2006

Situation:
SQL Server 2000.
At my new employer they have a production database on one server and a copy of it that is set to read only on another server which is used for reporting.

#1
They have an SQL Server Agent job on the production server that: (2 times a day)

Backs up the production database
Copies the backup file to a directory on the reporting server. (It's pretty big and can take time if there are problems with the LAN)

#2
They have an SQL Server Agent job on the reporting server that: (scheduled to run 2 hours or so after the job on server 1 has run... they figured that it would be a safe bet that the backup and copy process of the first job would be done by then)

Breaks the user connections to the reporting database
Performs a restore on the reporting database using the backup file that was copied to the holding directory by the production job.
Sets some permissions for various users.
Sets the reporting database to READ ONLY.
What I would like to do is find a more efficient way to create this reporting database. I have started doing research into DTS methods but would like some opinions from more experienced users.
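For reference, the restore job's core, as described, boils down to something like this (a sketch run from master; paths and names are assumptions):

ALTER DATABASE Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE   -- break user connections

RESTORE DATABASE Reporting
FROM DISK = 'D:\Holding\Production.bak'
WITH REPLACE, RECOVERY   -- the db comes back in the backup's normal multi-user state

EXEC Reporting.dbo.sp_grantdbaccess 'DOMAIN\ReportUsers'           -- re-apply permissions
ALTER DATABASE Reporting SET READ_ONLY

Log shipping would automate roughly the same backup/copy/restore pipeline, with built-in scheduling and monitoring.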

Thank You,
Wade


DB Design :: Insert / Update FACT Table From Staging Table

May 6, 2015

We need to insert/update a fact table from a staging table. Currently we are using an SP which updates the fact table for each region. This process is scheduled: a job runs every 5 minutes and updates the fact table. But the insert and update from staging to fact take too long. We are currently using a MERGE statement for the insert and update; in the SP we loop over however many regions need updating, handling a single region at a time with a WHILE loop.
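For comparison, a single set-based MERGE over all regions at once (a sketch; the fact and staging names are assumptions) often beats a WHILE loop issuing one MERGE per region:

MERGE dbo.FactSales AS f
USING dbo.StagingSales AS s
   ON  f.RegionID = s.RegionID
   AND f.DateKey  = s.DateKey
WHEN MATCHED THEN
    UPDATE SET f.Amount = s.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (RegionID, DateKey, Amount)
    VALUES (s.RegionID, s.DateKey, s.Amount);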


DB Design :: Table Partitioning Using Reference Table Data Column

Oct 7, 2015

I have a table partitioning requirement. We have 10 years of data in a table of 30 billion plus rows on a 2005 server, and we are upgrading it to 2014. We have to keep 7 years of data. There are no keys on the table and no date column. Since it's a huge amount of data with many users, processing is slow. We are thinking of partitioning the 7 years quarterly, but as I said there is no date column on the table; we have to use a reference table to get the date. Is there a way I can do the partitioning without adding a date column to the table? Also, will partitioning make queries faster?

I have thought of three ways to do it (a sketch of option 2 follows the list):
1. leave as it is.
2. 7 years partition on one server
3. 3 years partition on server1 and 4 years partition on server2 (for 4 years is snapshot better?)
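On option 2, the mechanics would look roughly like this (a sketch with abbreviated boundaries). The catch is that the partitioning column must exist in the table itself, which is why the usual answer is to add a date or quarter-key column, derived from the reference table, during the migration:

CREATE PARTITION FUNCTION pfQuarter (date)
AS RANGE RIGHT FOR VALUES ('2009-01-01', '2009-04-01', '2009-07-01' /* ...one per quarter... */)

CREATE PARTITION SCHEME psQuarter
AS PARTITION pfQuarter ALL TO ([PRIMARY])

-- the table is then rebuilt on the scheme during the 2014 upgrade:
-- CREATE TABLE dbo.BigTable (..., QuarterDate date NOT NULL) ON psQuarter (QuarterDate)

Partitioning mainly speeds maintenance (switching out old quarters, piecemeal rebuilds); queries only get faster when they filter on the partitioning column so partitions can be eliminated.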


DB Design :: Insert Data From One Table To Another Table

Jul 30, 2015

I have two tables, as given below: landing table "A" (data loads happen here; no primary keys) and table "B". Now I want to move the data from A to B. I have made use of the query: insert into B select * from A. Landing table "A" has a huge number of records and MS SQL Server is taking a huge amount of time. Is there an alternative way to make this insertion process faster?
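One batching sketch (column names assumed): because the requirement is to move the rows, DELETE ... OUTPUT can shift them in modest chunks, keeping the log and locks small compared with one giant INSERT ... SELECT. Note it empties A as it goes:

WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM A
    OUTPUT deleted.Col1, deleted.Col2 INTO B (Col1, Col2)

    IF @@ROWCOUNT = 0 BREAK
END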


Very Interesting !

Jun 12, 2004

Below is a statement given in the Microsoft SQL Server migration documentation:

"When a view is defined with an outer join and is queried with a qualification on a column from the inner table of the outer join, the results from SQL Server and Oracle can differ. In most cases, Oracle views are easily translated into SQL Server views"

Please can anyone explain the above with some examples? I can't find such a 'CREATE VIEW' statement that exists in the two DBs and results in different result sets. Am I wrong?
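A hedged illustration of the caveat (my own example tables, not from the documentation). The view outer-joins Orders, the inner table, to Customers:

CREATE VIEW vCustomerOrders AS
SELECT c.CustID, c.Name, o.OrderID, o.Amount
FROM Customers AS c
LEFT OUTER JOIN Orders AS o ON o.CustID = c.CustID
GO

-- qualifying the inner table's column after the join discards the NULL-extended rows:
SELECT * FROM vCustomerOrders WHERE Amount > 100

In SQL Server the WHERE is always applied after the outer join, so customers with no orders drop out. Under Oracle's legacy (+) syntax, a comparable predicate written with (+) is evaluated as part of the join instead, keeping those customers with NULLs, hence the potentially different results when a view is translated mechanically.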

Thanks,
Sam


Interesting

Aug 26, 2004

Another tech was having the same problem with his SP, which he also made into a macro. He was having the same problem as I was: he thought that since he made the user the dbo owner he would have no problems... but since he's the one who created the db and the stored procedures, when the user tried to access it he got that same error message. It went away after he went into Enterprise Manager and, in the properties of the stored procedure under Permissions, gave the user EXEC permissions... And WAAALLLAAA, problem solved :):):):)
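The same fix in T-SQL, rather than Enterprise Manager clicks (proc and user names invented):

GRANT EXECUTE ON dbo.MyProcedure TO SomeUser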


Interesting SQL...

Oct 22, 2004

Anyone else experience this? A developer just finished complaining about the performance of one of our databases. Well, he sent me the query and I couldn't understand why it was such a dog. Anyways, I rewrote it. The execution plan is totally different between the two. I had no idea specifying the join made such a difference. The first SQL executed in 7 minutes; the 2nd took 1 second.

SELECT dbo.contract_co.producer_num_id, contract_co_status
FROM dbo.contract_co, dbo.v_contract_co_status
WHERE (dbo.v_contract_co_status.contract_co_id = dbo.contract_co.contract_co_id)
  AND contract_co_status = 'Pending'
   OR (contract_co_status = 'Active' AND effective_date > '1/1/2004')

SELECT dbo.contract_co.producer_num_id, contract_co_status
FROM dbo.contract_co
INNER JOIN dbo.v_contract_co_status
   ON dbo.contract_co.contract_co_id = dbo.v_contract_co_status.contract_co_id
WHERE contract_co_status = 'Pending'
   OR (contract_co_status = 'Active' AND effective_date > '1/1/2004')


Table Design

Apr 30, 2008

Hi, I am developing an application for a garment factory. I have a doubt in designing a table.

Basic tables: Jobs, JobColors, Material, Units, Currencies... These tables are designed with normalization rules.

I got a problem at PurchaseOrderDetails. The main table is JobMaterial. It has materialid, jobid, supplierid, description and TypeFactor (which represents the type of order), meaning that the material is ordered based on sizes, colors, or total qty: 1 for ByColor, 2 for BySize, 3 for ByQty, 4 for ByContrastColors.

The main problem is at the details of the sub table, JobMaterialDetails. If TypeFactor is by size, I need to store the details based on size, e.g. S - 2000pcs, M - 4000pcs, L - 4000pcs, XL - 2000pcs, so I will have 4 records, one per size. If it is by color: White - 3000pcs, Portabella - 5000pcs, Black - 2000pcs. If it is by general: total qty 10000pcs.

How can I design this table? If I add a ColorOrSize column, it will refer to different values for different TypeFactors: when by size it will hold a size, and when by color it will refer to a colorcode. But colors have referential integrity, so it is violated for any TypeFactor other than by color. What is the best way to design this table? Can anybody suggest?
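One common way out, sketched with invented names: give the detail table one nullable column per dimension, constrain rows to a single dimension, and keep the color FK intact (it is simply NULL on size- or qty-based rows):

CREATE TABLE JobMaterialDetails (
    JobMaterialID int NOT NULL REFERENCES JobMaterial (JobMaterialID),
    SizeCode      varchar(5) NULL,                        -- populated when TypeFactor = 2
    ColorCode     int NULL REFERENCES Colors (ColorCode), -- populated when TypeFactor = 1 or 4
    Qty           int NOT NULL,
    CHECK (SizeCode IS NULL OR ColorCode IS NULL)         -- at most one dimension per row
)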


Table Design

Jul 24, 2000

I am designing a table and I have a column OrderID and another column called Order. Is it necessary to use a primary key, given that one OrderID may have many Orders?
Thanks.


Table Design

Aug 21, 2000

Coming from a support background and having to design my first database, I have a couple of questions re table design. Firstly, I have set up several tables and included one field (of the same name) in each. This is a primary key in one table with an incremental seed. I would like this info to appear in the other tables, although there it can be duplicated. How is it best to achieve this relationship? From my reading, a foreign key relationship suggests itself, but looking at other databases this seems to have been achieved by some other means. Is it more common to use stored procedures to enforce this? If so, please add pointers. Secondly, I have set up a couple of master tables to act as look-ups for fields in other tables. Again, how do I get this to look up the table: is it done through stored procedures, or at the time of writing the front end application?? Sorry if this is all basic stuff but it is new to me.
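On the first question, the declarative answer is a foreign key rather than stored-procedure enforcement (a minimal sketch with invented names):

CREATE TABLE Parent (
    RecordID int IDENTITY(1,1) PRIMARY KEY             -- the incrementally seeded key
)

CREATE TABLE Child (
    ChildID  int IDENTITY(1,1) PRIMARY KEY,
    RecordID int NOT NULL REFERENCES Parent (RecordID) -- duplicates allowed here
)

The same REFERENCES pattern covers the second question: a column in the main table points at the master/look-up table's key, and the front end simply joins or queries the master table to populate its pick lists.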

Thanks for any help


Table Design

Oct 26, 2004

CREATE TABLE [dbo].[table1] (
[aaa] [bigint] IDENTITY (10000, 1) NOT NULL ,
[bbb] [int] NOT NULL ,
[ccc] [int] NOT NULL ,
[ddd] [bigint] NOT NULL ,
[eee] [int] NOT NULL ,
[ffff] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[gggg] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[hhh] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[iii] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[jjj] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[kkk] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[lll] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[mmm] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[nnnn] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ooo] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ppp] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[qqqq] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[rrrr] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ssss] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[tttt] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[uuuuu] [varchar] (2000) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[vvvvv] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[wwwww] [varchar] (150) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[xxxxx] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[yyyyy] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[zzzzz] [int] NULL ,
[abc] [varchar] (500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[def] [datetime] NULL ,
[ghi] [datetime] NOT NULL ,
[jkl] [varchar] (1000) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[mno] [bigint] NULL
) ON [PRIMARY]


I have created a table with the above column widths. The row size is more than 8KB, and the table holds millions of rows of data. So is this a correct way of designing the table?
Or how can I redesign it?

Thanks.


Table Design Help

Mar 16, 2004

I'm currently developing a real estate system to manage order processing and work flow. I'm a little uncertain as to how to design the tables because an order can have N number of applicants, owners, buyers, and properties. There are cases where there are 9 different buyers and some where the number of properties exceeds 20. It seems that normalization might make the situation crazy, but I'm a touch rusty. Thanks.







