Large Volumes Of Varchar Data - Design Advice

Jul 6, 2006

Hello all,

I have recently been tasked with rewriting a database that holds large volumes of data, whilst ensuring that queries can be run in optimal time. Having never really delved into this sort of thing before, I hoped you guys might be able to offer some advice and guidance.

The design I have inherited is based around 2 main tables:


[captured_traps]
[id] [int] IDENTITY (1, 1) NOT NULL
[snmp_version] [int] NULL
[community_name] [varchar] (255)
[packet_type] [varchar] (50)
[oid] [varchar] (500)
[source_ip] [varchar] (15)
[generic] [int] NULL
[specific] [int] NULL
[time_stamp] [varchar] (15)
[trap_entered] [datetime] NULL
[status] [int] NULL


[captured_varbinds]
[id] [int] IDENTITY (1, 1) NOT NULL
[captured_trap_id] [int] NOT NULL
[varbind_oid] [varchar] (500)
[varbind_text] [varchar] (500)


The relationship between the two tables is "captured_traps (id)" to "captured_varbinds (captured_trap_id)". Currently the "captured_traps" table contains around 350 million rows, and the "captured_varbinds" table contains around 900 million rows.

Now as you can probably gather this model runs like a....well it sort of hobbles more than runs hence the need to redesign.

My current thoughts on this are:

- Normalising all varchars - there are a lot of duplicate values in most of the varchar fields.
- Full Text Indexing

However, beyond that I am not sure which route to go down. After googling for most of today I have come across a number of "solutions", but I do not want to go steaming down the track of one of these only to discover that it is fatally flawed somewhere.
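[Editor's note: to make the normalisation idea concrete, here is a minimal sketch; captured_traps and its oid column are from the post, while the lookup table and its names are hypothetical.]

-- Hypothetical lookup table for the heavily duplicated OID strings.
CREATE TABLE oids
(
    oid_id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    oid varchar(500) NOT NULL UNIQUE
)

-- Populate it once from the existing data.
INSERT INTO oids (oid)
SELECT DISTINCT oid FROM captured_traps

-- captured_traps can then carry a 4-byte oid_id instead of up to 500 bytes
-- per row, which shrinks the table and makes equality filters cheap to index.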

View 6 Replies



How To Optimize Data Import With Huge Volumes And Joins Across Data Sources Not All SQL Server Based?

Jun 7, 2006

I need to periodically import a (HUGE) table of data from an external data source (not SQL Server) into SQL Server, with the following scenarios:

1. Some of the records in the external data source may not exist in SQL Server.
2. Some of the records in the external data source may have a different value at different imports, but these records are identified uniquely by the same primary key in the external data source and in SQL Server.
3. Some of the records in the external data source may be the same as in SQL Server.

Due to the massive volume of the import, I would like to import only the records which are different from what I have in SQL Server (cases 1 and 2 above). In fact case 2 is the most critical.

I thought of making a query with a left outer join between the data in the external data source table (SOURCE) and the data in the SQL Server table (DESTIN). The join is done on the respective primary keys (composed keys of up to 10 columns) and one of the WHERE conditions will be that the value in SOURCE is different from the value in DESTIN.

The result of this query would be exactly what I need to import.
How can I do this in SSIS? I couldn't figure out how to join tables in different data sources yet.

In fact I cannot write a stored procedure to do that, since one of the sources is not a SQL Server data source.
I have seen the Lookup transformation in this article http://www.sqlis.com/default.aspx?311 but this is not exactly what I want to do.
Another possibility is to use the merge join, but due to the sorting I believe its performance would be terrible!
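[Editor's note: one possible workaround, sketched here under the assumption that the external extract can first be staged into a SQL Server table with a plain data flow; all table and column names are hypothetical. Once both sides are in SQL Server, the delta detection becomes set-based T-SQL.]

-- SOURCE_STAGING holds the raw external extract; DESTIN is the live table.
-- Rows that are new (case 1) or changed (case 2) are the ones to import.
SELECT s.*
FROM SOURCE_STAGING s
LEFT OUTER JOIN DESTIN d
    ON d.Key1 = s.Key1
   AND d.Key2 = s.Key2             -- extend the ON clause to all key columns
WHERE d.Key1 IS NULL               -- case 1: the row does not exist in DESTIN
   OR d.SomeValue <> s.SomeValue   -- case 2: the row exists but has changed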

Thanks in advance for your suggestions!

View 9 Replies View Related

DB Design Advice

Jan 12, 2007

I'm creating a DB to track clients, programs, and client participation in the programs. They are service programs. A client can be in more than one program and a program can have more than one client.
Can someone give me an example of how they would layout the tables?
My guess is:
tblClient, ClientID
tblClientProgramLog, ProLogID, ClientID
tblProgramDetails, ProDetailID, ProLogID
tblPrograms, ProgramID, ProDetailID
I appreciate any suggestions,
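[Editor's note: the classic resolution of a many-to-many like this is a junction table; a minimal sketch follows, reusing tblClient and tblPrograms from the post, with hypothetical column names.]

CREATE TABLE tblClient
(
    ClientID int IDENTITY (1, 1) PRIMARY KEY,
    ClientName varchar(100) NOT NULL
)

CREATE TABLE tblPrograms
(
    ProgramID int IDENTITY (1, 1) PRIMARY KEY,
    ProgramName varchar(100) NOT NULL
)

-- Junction table: one row per client per program, so a client can be in many
-- programs and a program can have many clients.
CREATE TABLE tblClientProgram
(
    ClientID int NOT NULL REFERENCES tblClient (ClientID),
    ProgramID int NOT NULL REFERENCES tblPrograms (ProgramID),
    EnrolledDate datetime NOT NULL DEFAULT GETDATE(),
    PRIMARY KEY (ClientID, ProgramID)
)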

View 4 Replies View Related

Design Advice

Jul 9, 2004

I'm trying to design a database that handles Clients, Cases, Individual and Group Sessions. My problem is that a client can have individual sessions and belong to more than one group at the same time, so I have a many-to-many relationship to deal with. Also I'm trying to design it so that I can have a form that when a group is selected from a drop down it shows all clients assigned to that group and will let me enter new session data for them.

Just looking for some advice on how to handle the relationships.
Maybe someone could show me how they see the relationships working.

My take is that the session is linked to the case, not the client - could I be thinking incorrectly?

Thank you,

tblClient
tblClientCase
tblCaseSessionLog
tblClientCaseGroupLink
tblGroups

View 3 Replies View Related

Design Advice?

Jun 3, 2004

I have a construction estimation system, and I want to develop a project management system. I will be using the same database because there are shared tables. My question is this: critical data tables are considered tables with dollar values, and these tables should not be shared across the whole company. I do however need information from these tables, such as product and quantity of the product for a given project. When an estimate becomes a project it is assigned a project number. At this point I thought of copying the required data from the estimate side to the project side. This would result in duplicate data in a sense, but the tables will be referenced from two standalone front end applications. Should I copy the data from one table to another, or create new "views" to the estimate tables for the project management portion?

What would be the best solution to this problem? I find in some circumstances, a new table is required because additional data will be saved on the "Project Management" side, but not in all cases.

Mike B
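[Editor's note: if the view route is taken, a minimal sketch might look like the following; all names are hypothetical. A view avoids the synchronisation headaches of copied data, at the cost of the project side depending on the estimate schema.]

-- The project management front end queries this view instead of the estimate
-- table, so nothing is duplicated and the dollar columns stay out of reach.
CREATE VIEW dbo.vwProjectEstimateItems
AS
SELECT EstimateID, ProductID, Quantity
FROM dbo.EstimateItems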

View 2 Replies View Related

Database Design. Need Advice. Thank You.

Oct 19, 2007

Hello,

I am creating a database where:
- I have a Blogs and Folders system.
- Use a common design so I can implement new systems in the future.

Users, Comments, Ratings, View, Tags and Categories are tables common to all systems, i.e., used by Posts and Files in Blogs and Folders.

- One Tag or Category can be associated to many Posts or Files.
- One Comment, View or Rating should be associated to only one Post or one File. I am missing this ... (1)

Relations between a File / Folder and Comments / Ratings / View / Tags / Categories are done using FilesRatings, FoldersViews, etc.

I am using UniqueIdentifier as Primary Keys.
I checked the ASP.NET Membership tables, a few articles and a few features in my project, such as renaming files with the GUID of their records.
I haven't decided yet between INT and UNIQUEIDENTIFIER.

I am looking for some feedback on the design of my database.
One thing I need to improve is mentioned in (1)

Thank You,
Miguel

My Database Script:

-- Users ...
create table dbo.Users
(
UserID uniqueidentifier not null
constraint PK_User primary key clustered,
[Name] nvarchar(200) not null,
Email nvarchar(200) null,
UpdatedDate datetime not null
)

-- Categories ...
create table dbo.Categories
(
CategoryID uniqueidentifier not null
constraint PK_Category primary key clustered,
[Name] nvarchar(100) not null
)

-- Comments ...
create table dbo.Comments
(
CommentID uniqueidentifier not null
constraint PK_Comment primary key clustered,
AuthorID uniqueidentifier not null,
Title nvarchar(400) null,
Body nvarchar(max) null,
UpdatedDate datetime not null,
constraint FK_Comments_Users
foreign key(AuthorID)
references dbo.Users(UserID)
)

-- Ratings ...
create table dbo.Ratings
(
RatingID uniqueidentifier not null
constraint PK_Rating primary key clustered,
AuthorID uniqueidentifier not null,
Value float not null,
constraint FK_Ratings_Users
foreign key(AuthorID)
references dbo.Users(UserID)
)

-- Tags ...
create table dbo.Tags
(
TagID uniqueidentifier not null
constraint PK_Tag primary key clustered,
[Name] nvarchar(100) not null
)

-- Views ...
create table dbo.Views
(
ViewID uniqueidentifier not null
constraint PK_View primary key clustered,
Ticket [datetime] not null
)

-- Blogs ...
create table dbo.Blogs
(
BlogID uniqueidentifier not null
constraint PK_Blog primary key clustered,
Title nvarchar(400) null,
Description nvarchar(2000) null,
CreatedDate datetime null
)

-- Posts ...
create table dbo.Posts
(
PostID uniqueidentifier not null
constraint PK_Post primary key clustered,
BlogID uniqueidentifier not null,
AuthorID uniqueidentifier not null,
Title nchar(1000) null,
Body nvarchar(max) null,
UpdatedDate datetime not null,
IsPublished bit not null,
constraint FK_Posts_Blogs
foreign key(BlogID)
references dbo.Blogs(BlogID)
on delete cascade,
constraint FK_Posts_Users
foreign key(AuthorID)
references dbo.Users(UserID)
on delete cascade
)

-- PostsCategories ...
create table dbo.PostsCategories
(
PostID uniqueidentifier not null,
CategoryID uniqueidentifier not null,
constraint PK_PostsCategories
primary key clustered (PostID, CategoryID),
constraint FK_PostsCategories_Posts
foreign key(PostID)
references dbo.Posts(PostID)
on delete cascade,
constraint FK_PostsCategories_Categories
foreign key(CategoryID)
references dbo.Categories(CategoryID)
)

-- PostsComments ...
create table dbo.PostsComments
(
PostID uniqueidentifier not null,
CommentID uniqueidentifier not null,
constraint PK_PostsComments
primary key clustered (PostID, CommentID),
constraint FK_PostsComments_Posts
foreign key(PostID)
references dbo.Posts(PostID)
on delete cascade,
constraint FK_PostsComments_Comments
foreign key(CommentID)
references dbo.Comments(CommentID)
on delete cascade
)

-- PostsRatings ...
create table dbo.PostsRatings
(
PostID uniqueidentifier not null,
RatingID uniqueidentifier not null,
constraint PK_PostsRatings
primary key clustered (PostID, RatingID),
constraint FK_PostsRatings_Posts
foreign key(PostID)
references dbo.Posts(PostID)
on delete cascade,
constraint FK_PostsRatings_Ratings
foreign key(RatingID)
references dbo.Ratings(RatingID)
on delete cascade
)

-- PostsTags ...
create table dbo.PostsTags
(
PostID uniqueidentifier not null,
TagID uniqueidentifier not null,
constraint PK_PostsTags
primary key clustered (PostID, TagID),
constraint FK_PostsTags_Posts
foreign key(PostID)
references dbo.Posts(PostID)
on delete cascade,
constraint FK_PostsTags_Tags
foreign key(TagID)
references dbo.Tags(TagID)
)

-- PostsViews ...
create table dbo.PostsViews
(
PostID uniqueidentifier not null,
ViewID uniqueidentifier not null,
constraint PK_PostsViews
primary key clustered (PostID, ViewID),
constraint FK_PostsViews_Posts
foreign key(PostID)
references dbo.Posts(PostID)
on delete cascade,
constraint FK_PostsViews_Views
foreign key(ViewID)
references dbo.Views(ViewID)
on delete cascade
)

-- Folders ...
create table dbo.Folders
(
FolderID uniqueidentifier not null
constraint PK_Folder primary key clustered,
[Name] nvarchar(100) null,
Description nvarchar(2000) null,
CreatedDate datetime not null,
URL nvarchar(400) not null
)

-- Files ...
create table dbo.Files
(
FileID uniqueidentifier not null
constraint PK_File primary key clustered,
FolderID uniqueidentifier not null,
AuthorID uniqueidentifier not null,
Title nvarchar(400) null,
Description nvarchar(2000) null,
[Name] nvarchar(100) not null,
URL nvarchar(400) not null,
UpdatedDate datetime not null,
IsPublished bit not null,
Type nvarchar(50) null,
constraint FK_Files_Folders
foreign key(FolderID)
references dbo.Folders(FolderID)
on delete cascade,
constraint FK_Files_Users
foreign key(AuthorID)
references dbo.Users(UserID)
on delete cascade
)

-- FilesCategories ...
create table dbo.FilesCategories
(
FileID uniqueidentifier not null,
CategoryID uniqueidentifier not null,
constraint PK_FilesCategories
primary key clustered (FileID, CategoryID),
constraint FK_FilesCategories_Files
foreign key(FileID)
references dbo.Files(FileID)
on delete cascade,
constraint FK_FilesCategories_Categories
foreign key(CategoryID)
references dbo.Categories(CategoryID)
)

-- FilesComments ...
create table dbo.FilesComments
(
FileID uniqueidentifier not null,
CommentID uniqueidentifier not null,
constraint PK_FilesComments
primary key clustered (FileID, CommentID),
constraint FK_FilesComments_Files
foreign key(FileID)
references dbo.Files(FileID)
on delete cascade,
constraint FK_FilesComments_Comments
foreign key(CommentID)
references dbo.Comments(CommentID)
on delete cascade
)

-- FilesRatings ...
create table dbo.FilesRatings
(
FileID uniqueidentifier not null,
RatingID uniqueidentifier not null,
constraint PK_FilesRatings
primary key clustered (FileID, RatingID),
constraint FK_FilesRatings_Files
foreign key(FileID)
references dbo.Files(FileID)
on delete cascade,
constraint FK_FilesRatings_Ratings
foreign key(RatingID)
references dbo.Ratings(RatingID)
on delete cascade
)

-- FilesTags ...
create table dbo.FilesTags
(
FileID uniqueidentifier not null,
TagID uniqueidentifier not null,
constraint PK_FilesTags
primary key clustered (FileID, TagID),
constraint FK_FilesTags_Files
foreign key(FileID)
references dbo.Files(FileID)
on delete cascade,
constraint FK_FilesTags_Tags
foreign key(TagID)
references dbo.Tags(TagID)
)

-- FilesViews ...
create table dbo.FilesViews
(
FileID uniqueidentifier not null,
ViewID uniqueidentifier not null,
constraint PK_FilesViews
primary key clustered (FileID, ViewID),
constraint FK_FilesViews_Files
foreign key(FileID)
references dbo.Files(FileID)
on delete cascade,
constraint FK_FilesViews_Views
foreign key(ViewID)
references dbo.Views(ViewID)
on delete cascade
)


-- Run script
go
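[Editor's note: on point (1), one possible fix - a sketch, not part of the original script - is to drop the PostsComments / FilesComments junction tables and give Comments two nullable, mutually exclusive foreign keys, enforced with a CHECK constraint. The same pattern would work for Ratings and Views.]

-- Alternative Comments table: the CHECK forces exactly one parent, so a
-- comment belongs to one Post or one File, which junction tables cannot enforce.
create table dbo.Comments
(
CommentID uniqueidentifier not null
constraint PK_Comment primary key clustered,
AuthorID uniqueidentifier not null,
PostID uniqueidentifier null
constraint FK_Comments_Posts references dbo.Posts(PostID),
FileID uniqueidentifier null
constraint FK_Comments_Files references dbo.Files(FileID),
Title nvarchar(400) null,
Body nvarchar(max) null,
UpdatedDate datetime not null,
constraint FK_Comments_Users
foreign key(AuthorID)
references dbo.Users(UserID),
constraint CK_Comments_OneParent
check ((PostID is not null and FileID is null)
    or (PostID is null and FileID is not null))
)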

View 2 Replies View Related

Design Advice - Need ASAP

Nov 27, 2007

Hello everyone. I'm sorry for this urgent post, but I have a critical issue that needs a solution quickly. Now, for my issue: I am adjusting our sales order tables to handle a couple of different scenarios. Currently we have 2 tables for sales orders:

SALESORDERS
------------
SORDERNBR int PK,
{ Addtl Header Columns... }

SALESORDERDETAILS
-------------------
SODETAILID int,
SORDERNBR int FK,
PN varchar,
SN varchar(25),
{ Addtl Detail Columns ... }


Currently the sales order line item is serial number specific. I need to change the tables to be able to handle different requests like :

Line Item Request ( PN, QTY )
Line Item Request ( SN )
Line Item Request ( PN, GRADE, QTY )
ETC.

I am thinking I need to create a new table to hold the specifics for a particular line item. Maybe like this:

SALESORDERSPECS
----------------
SOSPECID int,
SODETAILID int FK,
SPECTYPE varchar, IE : SN, PN, GRADE. { one value per row }
SPECVALUE varchar IE : GRADE A

I'm thinking I would need to rename the SALESORDERDETAILS table to SALESORDERITEMS. SALESORDERITEMS would just contain header info like
SalePrice,
Warranty,
Etc...

Then rename SALESORDERSPECS to SALESORDERDETAILS...
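[Editor's note: a rough DDL sketch of that spec table with the keys wired up; types and lengths are guessed, and it assumes SODETAILID is unique in the details table.]

CREATE TABLE SALESORDERSPECS
(
    SOSPECID int IDENTITY (1, 1) PRIMARY KEY,
    SODETAILID int NOT NULL REFERENCES SALESORDERDETAILS (SODETAILID),
    SPECTYPE varchar(25) NOT NULL,   -- IE : SN, PN, GRADE { one value per row }
    SPECVALUE varchar(50) NOT NULL   -- IE : GRADE A
)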

Does anyone understand what I'm trying to do? If you need more info please ask. You can also get a hold of me through IM.

Thanks!

JayDial

JP Boodhoo is my idol.

View 3 Replies View Related

Your Professional Advice Please - Design

May 18, 2006

Hi All,

I have read MANY posts on how to track changes to data over time. It appears there are two points of view:

1. Each record supports a Change Indicator flag to indicate the current record (would this be EVERY table?)
2. Each table is duplicated as an archive table, and triggers are used to update the archive.

Can someone give me some guidance, based on REAL world experience, on which works best for them?

My scenario - I have insurance policies and must track history as policies are updated by customer service reps. Imagine many tables:

Policy
> LifePol
> LifePolRiders
> AccidentPol
> etc...
> DIPol
> DIPolRiders

To me the archive table scenario does not seem scalable at all.... some guidance on design would be appreciated... Thanks!!!
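[Editor's note: for reference, a minimal sketch of approach 2 on a single table; all names and columns are hypothetical.]

-- Archive copy of the policy row plus a timestamp column.
CREATE TABLE PolicyHistory
(
    PolicyID int NOT NULL,
    PolicyName varchar(100) NOT NULL,
    ArchivedDate datetime NOT NULL DEFAULT GETDATE()
)
GO

-- Snapshot the old version of the row whenever it is updated or deleted.
CREATE TRIGGER trgPolicyAudit ON Policy
AFTER UPDATE, DELETE
AS
BEGIN
    INSERT INTO PolicyHistory (PolicyID, PolicyName)
    SELECT PolicyID, PolicyName FROM deleted
END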

View 2 Replies View Related


Design/Optimization Advice?

Oct 22, 2007

Hi all-

I have a schema that is mostly working, but I was wondering if some of you with more experience than I might give me some constructive criticism on my methodology.

Basically, I have a single table that stores data for many records. Each record has a variable number of fields, each of which can be a different data type. Later, queries will pull filtered subsets of data from the table, and do calculations on specific fields. In my implementation, the fields for a record are bound together by the datagroup (uniqueidentifier) column in the LotsOData table, the field name is defined by the dataname column, and the field value is stored in the datavalue column, which is type sql_variant.

One problem I had, and I'm not able to reliably replicate, is that the more complicated queries sometimes raise casting errors on the sql_variant column, even when the data is absolutely correct. I've been able to avoid this case by pre-selecting some of the subqueries into temporary tables first, then joining on the temp tables in the main query, but that seems horribly inefficient.

I've included a sample table, data, and query to demonstrate my basic solution. I was wondering if anybody could provide some insight on a better way of designing a solution for this scenario.

Thanks!
-Eric.

PS: bonus points if you have any insight at all on the casting error I mentioned!!



-- create table
create table LotsOData
(
    pk int identity,
    dataname nvarchar(16) not null,
    datagroup uniqueidentifier,
    datavalue sql_variant
);

-- lot of inserts
declare @group_a uniqueidentifier,
        @group_b uniqueidentifier,
        @group_c uniqueidentifier;
set @group_a = newid();
set @group_b = newid();
set @group_c = newid();

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_int', @group_a, 1
union all select 'some_int', @group_b, 2
union all select 'some_int', @group_c, 3

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_char', @group_a, 'a'
union all select 'some_char', @group_b, 'b'
union all select 'some_char', @group_c, 'c'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_string', @group_a, 'abc'
union all select 'some_string', @group_b, '!@#'
union all select 'some_string', @group_c, 'xyz'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_float', @group_a, 1.23
union all select 'some_float', @group_b, 2.34
union all select 'some_float', @group_c, 3.45

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_datetime', @group_a, cast('01/01/2001 01:00:00' as datetime)
union all select 'some_datetime', @group_b, getdate()
union all select 'some_datetime', @group_c, cast('01/01/2009 01:00:00' as datetime)

-- do some big ugly query:
select
    cast(a.datavalue as datetime) as datatime_data,
    cast(b.datavalue as int) as int_data,
    cast(c.datavalue as char(1)) as char_data,
    cast(d.datavalue as nvarchar(max)) as string_data,
    cast(e.datavalue as float) as float_data,
    cast(b.datavalue as int) * cast(e.datavalue as float) as calc_data
from
    ( select datavalue, datagroup from LotsOData where dataname = 'some_datetime' ) a
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_int' ) b
        on b.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_char' ) c
        on c.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_string' ) d
        on d.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_float' ) e
        on e.datagroup = a.datagroup
where cast(a.datavalue as datetime) between '01/01/2006' and '01/01/2008';

View 6 Replies View Related

Design Advice For Report Architecture

Jan 24, 2008

Any design suggestions for the best way to architect this report using SQL Reporting Services 2005 are appreciated!

My website features a catalog of roughly 50,000 items, each of which may be appear in a list of search results or in a detailed view. There are counters on the pages that update totals for such appearances and track other item-specific information in several tables in a SQL database. The catalog of items changes frequently, so the list of item IDs is never exactly the same from month to month.

I've been asked to produce a monthly report of this data for each of the items in the catalog, with reports for the current and previous months (for many years) accessible at all times. Some -- but not all -- items are useful for one purpose or another and so can be considered as belonging to a group of items. Although I have not yet been asked to create a report that aggregates the values for all Group members into a single report for that Group, I can clearly see it would be valuable and will be requested soon.

To ensure the report captures the data for an entire month, it must be run at the very end of each month. That means I will need to run the report using a Schedule that kicks off the process at 12:01am every 1st of the month. The report must be processed and stored for later retrieval and rendering on demand.


Considering the number of items and the indefinite length of time the report data must be retained, my question is really what's the best way to set all this up?

Should I create a report for each item separately? That would mean the scheduled task would have to somehow discover the current list of item IDs (which is available via query from the database) and create and process (but not render) a report for each (passing the item ID as a report parameter?), adding it to the report history. Although each report would be small and take only a short time to run, overall that seems like it would take a long time and create a huge number of reports to store each month.

Or should I create a single 'master' report that contains all the data for every item for the month, and then use the item ID as a filter on the data when it is rendered? While that means only one report is created each month and added to the history, it would be a much larger report and take much longer to run (with more potential for timeouts and errors to scuttle the whole report). It also means all the data for the entire report has to be loaded every time the report is rendered, even though only 1/50,000 of the data (the data for 1 of the 50,000 items) will actually be viewed with any given rendering. But that would seem overly cumbersome, slow, and wastefully band-width intensive.


Any alternatives, suggestions, considerations, etc. -- all welcome!

Thanks

BillB

View 2 Replies View Related

Design Advice...writing A Text File

Jul 23, 2004

I need to create a text file using information from SQL tables/views in the following format... Can anyone recommend a direction or procedure to look into, i.e., SQL script, custom DTS, etc.? The items in parentheses identify specific portions of the text file.

(01)
101081,84423,customer ,072304,customer ,11310 Via Playa De Cortes , ,San Diego ,CA,92124,
(02) 6 ,1 , , , , ,22 ,1 ,0.00 ,160.46 ,160.46 ,0.00 , , , , , , , , ,1,1
(03)B130907540,5.41 ,1
(03)B130907550,5.41 ,1
(03)B130907560,5.41 ,1
(03)B130907570,6.04 ,1
(03)B065007550,1.72 ,2
(03)B065007560,1.72 ,6
(03)B519926530,4.66 ,13
(03)B519926550,4.66 ,12
(03)B560911200,2.14 ,1
(03)B560912500,2.14 ,1
(03)B095305750,3.65 ,1
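[Editor's note: one possible direction - a sketch, with a hypothetical source table and columns - is to have a query build each output line as one pre-formatted string, which a DTS Transform Data task or bcp can then write to the file verbatim. For the (03) detail records:]

-- Each row of the result set is one finished '(03)' detail line.
SELECT '(03)' + PartNumber + ','
     + CAST(Price AS varchar(10)) + ' ,'
     + CAST(Quantity AS varchar(10)) AS DetailLine
FROM dbo.OrderDetails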

View 5 Replies View Related

Advice On Table Design Which Will Allow Me To Enforce Integrity

Jul 23, 2005

Hi,

I have two tables, Table A and B, below with some dummy data...

Table A (contains specific unique settings that can be requested)
Id, SettingName
1, weight
2, length

Table B (contains the setting values; here 3 values relate to weight and 1 to length)
Id, Brand, SettingValue
1, A, 100
1, B, 200
1, null, 300
2, null, 5.3

(There is also a list of Brands available in another table). No primary keys / referential integrity has been set up yet.

Basically, depending upon the Brand requested, a different setting value will be present. If a particular brand is not present (signified by a null in the Brand column in table B), then a default value will be used.

Therefore if I request the weight and pass through a Brand of A, I will get 100. If I request the weight but do not pass through a brand (i.e. null) I will get 300.

My question is, what kind of integrity can I apply to avoid the users specifying duplicate Ids and Brands in table B? I cannot apply a composite key on these two fields as a null is present. Table B will probably contain about 50 rows and probably 10 of them will be brand specific. The reason it's done like this is that in the calling client code I want to call some function, e.g.

getsetting(weight) .... result = 300

Or if it is brand specific

getsetting(weight, A) ..... result = 100

Any advice on integrity or table restructuring would be greatly appreciated. It's SQL 2000 SP3.

Thanks
brad
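[Editor's note: one detail worth checking here - a composite PRIMARY KEY is indeed out because of the NULLs, but a unique index is not: SQL Server treats NULLs as equal for uniqueness, so a second (1, NULL) row would be rejected. A sketch against table B:]

-- Unlike a PRIMARY KEY, a unique index allows NULLs, and SQL Server rejects
-- a duplicate (Id, NULL) pair, which is exactly the integrity wanted here.
CREATE UNIQUE NONCLUSTERED INDEX UX_TableB_Id_Brand ON TableB (Id, Brand)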

View 9 Replies View Related

Advice Needed : Nasty Problem PHP/MS SQL Server And Varchar Fields &> 255 In Length

Jul 20, 2005

I am currently working on a PHP based website that needs to be able to draw from Oracle, MS SQL Server, MySQL and, given time and demand, other RDBMS. I took a lot of time and care creating a flexible and solid wrapper and am deep into coding. The only problem is I noticed VARCHAR fields being drawn from SQL Server 2000 are being truncated to 255 characters.

I searched around php.net and found the following:

"Note to Win32 Users: Due to a limitation in the underlying API used by PHP (MS DbLib C API), the length of VARCHAR fields is limited to 255. If you need to store more data, use a TEXT field instead." (http://www.php.net/manual/en/functi...ield-length.php)

The only problem with this advice is TEXT fields seem to be limited to 16 characters in length, and I am having similar results in terms of truncation with other character based fields that can store more than 255 characters.

I am using PHP 4.3.3 running on IIS, using the php_mssql.dll extensions and the functions referenced here: http://www.php.net/manual/en/ref.mssql.php

What are my options here? Has anybody worked around this or am I missing something obvious?

James
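[Editor's note: a workaround sometimes suggested for the DB-Library 255-character cap is to cast the column to TEXT on the server side, so the client fetches it through the text path; raising the mssql.textlimit / mssql.textsize settings in php.ini is the usual companion fix. A sketch, with hypothetical table and column names:]

-- CAST to TEXT so the DB-Library client reads the value as a text column
-- instead of a varchar capped at 255 characters.
SELECT CAST(LongDescription AS TEXT) AS LongDescription
FROM dbo.Products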

View 4 Replies View Related

Cannot Export Large Varchar To Text File

Jun 15, 2001

When using DTS (in SQL 7) to export a large varchar via OLE DB to a text file, it clips it at 255 chars. No other data access drivers seem to work, either. This is lame! I cannot use bcp as a workaround, because I want to use quoted comma-delimited output, which it doesn't support, and I am using a query-based export, where the query calls a stored proc, which bcp also doesn't support.

Are there any new versions of MDAC that fix this? Anyone know a workaround? My current hack fix is to split my field into 2, but this is a grubby fix that hassles my recipients.

This is a pretty fundamental limitation to a major product!

dn

View 1 Replies View Related

Design Question For Large Table

Sep 20, 2007

Hi,

What's the most efficient way to store the following information:

* Table contains 1 million listings
* Each listing can be geo-targeted to any of the 200+ countries
* Searches return listings based on geo-location

Storage options:

Option #1 (normalized)
* ListingsTable (PK listingID int) [1 million rows]
* ListingGeoLocations (listingID, geoLocationID) [could be up to 200 million rows]

Option #2 (denormalized)
* ListingsTable (PK listingID int, binary(32) with bit-mask consisting of 200 bits one for each location)

Did anyone have experience with similar structures? Which option is more efficient?

Thanks,
Av
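[Editor's note: for option #1, a minimal sketch using the names from the post, showing the detail that matters most at 200 million rows - the clustered key order.]

-- Location-first clustered key: a search for one location is a single range
-- seek; add the reversed index only if listing-first lookups are also needed.
CREATE TABLE ListingGeoLocations
(
    listingID int NOT NULL REFERENCES ListingsTable (listingID),
    geoLocationID int NOT NULL,
    PRIMARY KEY CLUSTERED (geoLocationID, listingID)
)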

View 8 Replies View Related

DB Design :: How Many Bytes Use A Varchar

Nov 3, 2015

I have a varchar(900), which means that I can use 900 bytes, so if I am not wrong, if the characters are Unicode I can only use 450 because each character needs two bytes. I have a database with a column that uses the collation general_latin_CI_AI, but I don't know if this collation uses 1 byte per character or 2 bytes per character. How can I know how many bytes a character of a varchar column needs?
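[Editor's note: a quick way to check empirically - a sketch with hypothetical table and column names. DATALENGTH returns bytes while LEN returns characters, so comparing the two shows the bytes per character actually stored.]

-- varchar stores 1 byte per character under most single-byte collations;
-- nvarchar always stores 2. The bytes/chars ratio makes this visible.
SELECT LEN(SomeColumn) AS chars,
       DATALENGTH(SomeColumn) AS bytes
FROM dbo.SomeTable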

View 3 Replies View Related

Interesting Large Table Design Recommendation

Sep 25, 2007

Hi,

What's the most efficient way to store the following information:

* Table contains 1 million listings
* Each listing can be geo-targeted to any of the 200+ countries
* Searches return listings based on location

Storage options:

Option #1 (normalized)
* Listings (PK listingID int) [1 million rows]
* ListingLocations (listingID, locationID) [could be up to 200 million rows]

Option #2 (denormalized)
* Listings (PK listingID int, binary(32) with bit-mask consisting of 200 bits one for each location)

Usage: Usually the query will simply lookup listings based on some keywords. It will get back 50-200 listings. Then the application (C#) will filter the listings based on location.

Did anyone have experience with similar structures? Which option is more efficient?

I know that using the intersection-table in Option #1 is the "proper" relational-DB way of doing things. However, I do not like the idea of storing the listingID so many times (once for each locationID).

Thanks,
Av

View 1 Replies View Related

Design Of Tables With Large Optional Fields?

Jan 4, 2006

I have a general SQL design-type question.

I want to log errors to a table. If the error is with a URL, I want to store the URL. These URLs can be very large, hundreds of characters, but I only need to store it if it causes the error, which should be very infrequent. Which is the better design:

1. Create a large varchar field in the log table to hold the URL, or null if the error wasn't with the URL.
2. Create a foreign key field in the log table to a second URL table, which has a unique ID and a large varchar, and only create a record in this table if the error is with the URL.

One concern I have with design 2 is that there could be many other fields that are infrequent. Do I create a separate table for every one?

Richard

View 3 Replies View Related

Problems Moving Data Over 8000k In DB2 Varchar Column Into SQL Server Varchar(max) Using SSIS

Nov 20, 2007



I have looked far and wide and have not found anything that works to allow me to resolve this issue.

I am moving data from DB2 using the MS OLEDB Provider for DB2. The OLEDB source sees the column of data as DT_TEXT. I setup a destination to SQL Server 2005 and everything looks good until I try and run the package.

I get the error:
[OLE DB Source [277]] Error: An OLE DB error has occurred. Error code: 0x80040E21. An OLE DB record is available. Source: "Microsoft DB2 OLE DB Provider" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".

[OLE DB Source [277]] Error: Failed to retrieve long data for column "LIST_DATA_RCVD".

[OLE DB Source [277]] Error: There was an error with output column "LIST_DATA_RCVD" (324) on output "OLE DB Source Output" (287). The column status returned was: "DBSTATUS_UNAVAILABLE".

[OLE DB Source [277]] Error: The "output column "LIST_DATA_RCVD" (324)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "LIST_DATA_RCVD" (324)" specifies failure on error. An error occurred on the specified object of the specified component.

[DTS.Pipeline] Error: The PrimeOutput method on component "OLE DB Source" (277) returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.

Any suggestions on how I can get the large string data in the varchar column in DB2 into the varchar(max) column in SQL Server 2005?

View 10 Replies View Related

The Data Types Varchar And Varchar Are Incompatible In The Modulo Operator

Jan 4, 2008

I am trying to create a stored procedure inside the SQL Management Studio console and I keep getting errors. Here's my stored procedure.




Code Block
CREATE PROCEDURE [dbo].[sqlOutlookSearch]
-- Add the parameters for the stored procedure here
@OLIssueID int = NULL,
@searchString varchar(1000) = NULL
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
IF @OLIssueID <> 11111
SELECT * FROM [OLissue], [Outlook]
WHERE [OLissue].[issueID] = @OLIssueID AND [OLissue].[issueID] = [Outlook].[issueID] AND [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
ELSE
SELECT * FROM [Outlook]
WHERE [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
END




And the error I kept getting is:

Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 18

The data types varchar and varchar are incompatible in the modulo operator.

Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 21

The data types varchar and varchar are incompatible in the modulo operator.

Any help is appreciated.
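[Editor's note: for anyone hitting the same message - each '' in the posted procedure is an empty string literal, so the % between them is parsed as the modulo operator on two varchars, which is exactly the error reported. Writing the wildcard inside one pair of single quotes (and dropping the stray + after LIKE) avoids it:]

-- '%' is now part of an ordinary string literal, so + is plain concatenation.
WHERE [Outlook].[contents] LIKE '%' + @searchString + '%'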

View 5 Replies View Related

DB Design :: Optimize A Query That Uses A Varchar Column That Is Used In Order By Clause

May 5, 2015

I am querying a tableA with 1.8 million rows; it has id as its primary key and is a clustered index. I need to select all rows, ordered by lastname, and it's taking me 45 seconds. Is there anything I can do to optimize the query? Will creating a full-text index on lastname help? If so, how would I create a full-text index on lastname?

[Project1].[Id] AS [Id], 
[Project1].[DirectoryId] AS [DirectoryId], 
[Project1].[SPI] AS [SPI], 
[Project1].[FirstName] AS [FirstName], 
[Project1].[LastName] AS [LastName], 
[Project1].[NPI] AS [NPI], 
[Project1].[AddressLine1] AS [AddressLine1], 
[Project1].[AddressLine2] AS [AddressLine2], 

[code]...
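[Editor's note: a full-text index targets word searches (CONTAINS/FREETEXT), not ordering, so it would not speed up an ORDER BY; a plain nonclustered index on the sort column is the usual tool. A sketch against the post's tableA:]

-- Rows can be read straight from this index in LastName order, avoiding a
-- 1.8-million-row sort; returning every column for every row will still be
-- dominated by the sheer volume of data transferred, though.
CREATE NONCLUSTERED INDEX IX_tableA_LastName ON dbo.tableA (LastName)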

View 5 Replies View Related

One Database Spanning Multiple Volumes

Dec 4, 1998

Hello Everyone,
I have a SQL 6.5 database that is about to grow beyond the size of its current volume. I have 3 volumes of 20GB each, 2 of which aren't being used. What do I need to do to ensure that I can expand the device across multiple volumes?

Thanks in advance for your help,
Terry

View 1 Replies View Related

Attaching Volumes To A SQL Cluster 2005

Mar 12, 2008

We have our production SQL 2005 server (64 bit Standard Ed.) attached to an iSCSI EqualLogic SAN. We have set up 2 new servers and installed the cluster service. My question is: can I install SQL 2005 in this cluster environment and later on disconnect the data and log volumes from the production server, attach those volumes to the cluster and reattach the DBs? The reason we need to do it like that is that we don't have enough spare space in the SAN to initially create these 2 volumes in the cluster.

Any ideas/suggestions would be greatly appreciated.

View 5 Replies View Related

Migrate SQL 2005 Cluster Volumes

May 15, 2008

Soon, I will be migrating SQL cluster volumes from our old SAN to the new one. I have an idea of how to do it, but I just wanted some feedback. Yes, I know the best way would be to set up a new cluster using the new SAN and migrate the DB, but unfortunately I don't have that luxury. Here's my plan...

Add new storage to cluster, ensuring the drives are active cluster resources and dependencies match old resources
Back up DB
Shut down SQL Server
Copy all files & folders from old storage to new storage
Reassign drive letters to make new storage match old configuration
Start SQL Server


In theory, I think this will be fine because as long as SQL sees the correct drive letters, it should function properly. Just concerned about the quorum portion of the cluster.

Thanks!
Kolby

View 3 Replies View Related


Cluster Shared Volumes And Availability Groups

Aug 12, 2015

I'm looking at using Cluster Shared Volumes on a new Windows Server 2012/SQL Server 2014 cluster. Each instance is going to be configured to use cluster shared volumes. Is there any reason why Availability Groups couldn't be used in conjunction with Cluster Shared Volumes?

View 4 Replies View Related

Separate Databases For High/low Transaction Volumes?

Jun 23, 2006

I have an existing database with approx 500,000 rows, accessed by a few hundred users per day creating approx 1,000 new records per day, plus typical reporting - relatively low volume stuff for SQL Server.

I'm about to add a process that will be importing data daily from legacy databases and summarizing it for reporting purposes, integrating it with the existing database. This volume of data will be considerably higher, perhaps 100,000+ rows per day, which will be deleted once it has been summarized and the results written to some intermediate tables.

Is there any concern about mixing different levels of volume within one database? As I'll be creating lots of rows daily and then deleting them, I was wondering about fragmentation, transaction logging etc., and whether having this processing in a separate database from the main application would be 'better'.

View 3 Replies View Related

Need Help Setup Volumes On Home Computer For Exam 70-431 Study

May 6, 2007

Newbie here. I need help setting up volumes so I can do practice exercises for exam 70-431.

What I have on an XP Home edition personal computer is my C volume, which is 300+ GB NTFS basic, and my CD/DVD ROM on D.

What is the code I use at the command prompt to do this, or can I do it from Disk Management?
View 2 Replies View Related

Data Access Layer Advice

Jun 19, 2007

I've been following Scott Mitchell's tutorials on Data Access, and in Tutorial 1 (Step 5) he suggests using SQL subqueries in TableAdapters in order to pick up extra information for display using a datasource.

I have two tables for a gallery system I'm building: one called Photographs and one called MS_Photographs, which has extra information about certain images. When reading the MS_Photograph data I also want to include a couple of fields from the related Photographs table. Rather than creating a table adapter just to pull this data I wanted to use the existing MS_Photographs adapter with a query such as...

SELECT CAR_MAKE, CAR_MODEL,
    (SELECT DATE_TAKEN
     FROM PHOTOGRAPHS
     WHERE (PHOTOGRAPH_ID = MS_PHOTOGRAPHS.PHOTOGRAPH_ID)) AS DATE_TAKEN,
    (SELECT FORMAT
     FROM PHOTOGRAPHS
     WHERE (PHOTOGRAPH_ID = MS_PHOTOGRAPHS.PHOTOGRAPH_ID)) AS FORMAT,
    (SELECT REFERENCE
     FROM PHOTOGRAPHS
     WHERE (PHOTOGRAPH_ID = MS_PHOTOGRAPHS.PHOTOGRAPH_ID)) AS REFERENCE,
    DRIVER1, TEAM, GALLERY_ID, PHOTOGRAPH_ID
FROM MS_PHOTOGRAPHS
WHERE (GALLERY_ID = @GalleryID)

This works, but I wanted to know if there's a way to get all of the fields using one subquery instead of three? I did try it but it gave me errors for everything I could think of. Is using a subquery like the above the best way when you want this many fields from a secondary table, or should I be using another approach? I'm using classes for the BLL as well and wondered if there's a way to do it at this stage instead?
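[Editor's note: a single LEFT JOIN - a sketch using the same tables as the post - picks up all three columns in one pass instead of three correlated subqueries.]

SELECT m.CAR_MAKE, m.CAR_MODEL,
       p.DATE_TAKEN, p.FORMAT, p.REFERENCE,
       m.DRIVER1, m.TEAM, m.GALLERY_ID, m.PHOTOGRAPH_ID
FROM MS_PHOTOGRAPHS m
LEFT JOIN PHOTOGRAPHS p
    ON p.PHOTOGRAPH_ID = m.PHOTOGRAPH_ID
WHERE m.GALLERY_ID = @GalleryID

-- The LEFT JOIN keeps the subqueries' semantics: an MS_PHOTOGRAPHS row with
-- no match still comes back, with NULLs in the Photographs columns.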

View 7 Replies View Related

Advice In Loading Data Through SSIS

Dec 27, 2007

I have 2 flat files to load into a datamart via SSIS.
I need to implement:
1. How can I prevent loading of the same file again?
2. If by chance wrong data has been loaded, how can I roll back?
Kindly guide ASAP as I have to implement these.
JigJan
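[Editor's note: for point 1, one common pattern - a sketch, all names hypothetical - is a load-audit table checked at the start of the package and written as its last step. For point 2, loading into a staging table first and merging into the datamart inside a transaction gives a natural rollback point.]

-- A file is processed only if its name is not already recorded here; the
-- insert happens last, after the load has succeeded.
CREATE TABLE dbo.FileLoadLog
(
    FileName varchar(260) NOT NULL PRIMARY KEY,
    LoadedDate datetime NOT NULL DEFAULT GETDATE()
)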

View 1 Replies View Related

KSAM Data Integration Advice

Aug 1, 2007

Hi there,

I have to retrieve data from a KSAM (kerridge) database and can only use a file dsn ODBC connection to connect to the database.

I can get access to the database through Excel and can thereby see the tables, but when I try and open a connection through the development environment, it keeps my machine busy for what seems to be an eternity with no result.

I want to use SSIS to extract the data to a SQL 2005 database but will need to get my connections/connection managers to work.

Is there any advice that anyone can give me on perhaps the best approach on the data extract?
Regards
Mike

View 2 Replies View Related

Web App To Export Sql Data To XLS, Etc. -- Beginner Advice Needed

Feb 1, 2007

Hi, is there a programmatic way to convert the results of a SQL data set to XLS, CSV, etc.? Ideally a user would be able to make a selection to view the data (a result set with, e.g., make, model, year, condition viewed in a datagrid or similar control) and then be able to export the file to the format they choose, and have a download box pop up from the browser to download the file. E.g. Export this data to: __ XLS __ CSV __ TXT. I know DTS can do this, but any advice on how to encapsulate this in a C# web app would be greatly appreciated! Thanks!

View 1 Replies View Related






