Strategy For Data Storage/Searching

Dec 16, 2007

Hello there,

Don't know if this is the right forum to be asking this, but I'll give it a try...

I'm relatively new to SQL Server and to T-SQL in general. The problem I'm trying to solve is the following:

The big picture is that I have data coming from different data sources which I need to store on a database for later reference. Each data source might have a different set of measurements. For example, data source 1 might log Pressure and Humidity while data source 2 logs Pressure and Temperature. Once the data is present on the DB, the users can go ahead and retrieve data for a given [datasource/measurement/time interval] to generate reports or charts.

My implementation so far consists of two tables: series_info and series_data. series_info holds general information for a given series of measurements for a given data source (Pressure for data source 1, Pressure for data source 2, Humidity for data source 1 and Temperature for data source 2, in our example). Each series has a bigint index as primary key.

The table series_data contains all data belonging to the series in series_info. Each piece of data has a bigint primary key, an associated time (which is always increasing) and a foreign key to the series it represents (in series_info).
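For reference, a minimal sketch of the two tables as described (the column names other than the keys are my assumptions):

CREATE TABLE series_info (
    series_id    bigint IDENTITY PRIMARY KEY,
    data_source  varchar(50) NOT NULL,   -- e.g. 'data source 1'
    measurement  varchar(50) NOT NULL    -- e.g. 'Pressure'
);

CREATE TABLE series_data (
    data_id      bigint IDENTITY PRIMARY KEY,
    series_id    bigint   NOT NULL REFERENCES series_info (series_id),
    sample_time  datetime NOT NULL,      -- always increasing within a series
    sample_value float    NOT NULL
);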

Alright, everything is cool so far. However, whenever a user wants to retrieve data for a given [data source/measurement/time interval], it takes very long, since all the series are interleaved in series_data and for every search it's necessary to find where the desired data actually lies.

One obvious solution for this would be to dynamically create a new table to hold the data for each series, but that would just make my database disorganized, since there would be thousands and thousands of tables.

Another thing that comes to mind is to create a table recording where the data for a given [data source/measurement] lies for given dates. So when the user requested data for a given [data source/measurement] between, say, January and February, we would first look at this intermediate table and find out that the data lies between indexes 1000 and 2000 in the series_data table, so the next SELECT against series_data would already contain a restriction like WHERE index >= 1000 AND index <= 2000. This should probably improve the speed of retrieval.
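A sketch of that intermediate "bookmark" table and the two-step lookup it enables, using the column names assumed above; an alternative worth weighing is simply a composite index on series_data (series_id, sample_time), which would let a single range query seek straight to the requested rows:

CREATE TABLE series_bookmark (
    series_id   bigint   NOT NULL REFERENCES series_info (series_id),
    month_start datetime NOT NULL,
    first_id    bigint   NOT NULL,   -- lowest series_data key in that month
    last_id     bigint   NOT NULL,   -- highest series_data key in that month
    PRIMARY KEY (series_id, month_start)
);

-- Step 1: find the key range for the requested series and period.
DECLARE @first bigint, @last bigint;
SELECT @first = MIN(first_id), @last = MAX(last_id)
FROM   series_bookmark
WHERE  series_id = 42
  AND  month_start BETWEEN '20070101' AND '20070201';

-- Step 2: the key-range restriction keeps the scan of series_data narrow.
SELECT sample_time, sample_value
FROM   series_data
WHERE  data_id BETWEEN @first AND @last
  AND  series_id = 42;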

What do you guys (or girls) think? Maybe there's simply a classical solution for such a case.


Thanks in advance!


Storage Strategy

May 30, 2006

Hi guys.

I am currently developing a system that stores exchange stats in a DB. Since our customers are companies with anywhere from 20 employees up to 5,000, there is a big difference in the volume of data that needs to be stored.

We are currently thinking of supplying a SQL Server Express DB to the small customers and suggesting full SQL Server to the bigger ones.

But since I would like to use the same structure for both types of customers, I wonder how I should design the storage.

There could be from 500 records a day up to 20,000. They are quite simple records with only simple data types: about 15 fields with no more than 10 chars each, mostly 2.

Should I separate the data into different tables per week or per day, etc.?
Since I am only going to filter data on 1 or 2 fields, the data will be easily indexed.

The reports generated will almost always only use 1-3 months of data, but historical reports have to be possible.

My question is, of course:
What's the best solution for me?

Thanks in advance:)

/Johan Wendelstam
Sweden


What Is My Best Strategy For Loading Data.

Apr 22, 2007

I have been developing a genealogy application using a SQL Server 2000 database and ASP .NET 2.0.  In this application a process, Ged.Parse, converts data from the GEDCOM standard format (a hierarchical file format that looks as if it was designed for 80-column cards) into my SQL Server database.
As we started to load reasonable quantities of data into the system we found that the on-line response became abysmal.  This problem was fixed by defining a number of secondary indexes (response times dropped to under a second, from previously exceeding 2 minutes and often timing out).  Unfortunately however the processing time of Ged.Parse then tripled, and it may now take up to an hour to process a GEDCOM. I believe that this is a byproduct of defining several indexes that are not needed by Ged.Parse itself, but which are of course maintained as Ged.Parse inserts new records into the database.  
I am wondering what my best strategy is, apart from putting Ged.Parse into a background task and just letting it trickle away.  (I will probably do this anyway). What I'd like to be able to do is to have Ged.Parse load records without creating the secondary indexes, and then create the indexes for the newly-added records as a penultimate step just before it makes them available for general use.  Of course there is no way that you can do this:  records in a table are either indexed or they are not.
Proposed change: recode Ged.Parse to load data into temporary tables, say NewPeople, NewFacts, etc., with these tables having only the indexes required by Ged.Parse. Then, as the last process in Ged.Parse, run a SQL procedure with code like:

Insert Into People Select * From NewPeople
Delete From NewPeople
etc.
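A minimal sketch of what that final hand-off procedure might look like; the procedure name and the transaction wrapper are my additions, not part of the original design:

CREATE PROCEDURE PublishNewGedcom
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Rows already carry their final GUID keys, so a straight copy works.
    INSERT INTO People SELECT * FROM NewPeople;
    INSERT INTO Facts  SELECT * FROM NewFacts;

    -- Empty the staging tables ready for the next GEDCOM.
    DELETE FROM NewPeople;
    DELETE FROM NewFacts;

    COMMIT TRANSACTION;
END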
This is a reasonable amount of programming, so before I make this change could somebody tell me:  will this be significantly faster overall, or is this likely to make little or no improvement compared to the present process in which Ged.Parse loads data directly into People, Facts, etc?   Two facts that may influence the answer.  First, all record relationships are through GUIDs, so records in NewPeople, NewFacts, etc would already have their final key values.  Second: although Ged.Parse needs to form relationships between records, these relationships are only within the new records (created from the same GEDCOM), and Ged.Parse does not need to relate any of these new records to earlier records.
Thank you,
Robert Barnes.
 


Data-archiving And Purging Strategy

Mar 31, 2008



Regarding SQL Server data, I am looking to implement the best data-archive and purge policy. Normally, we do SQL backups and keep the history for some period, for example 8 weeks, so we can go back and restore to any point in time within that window, and we also do tape backups.

The question is: where can I get a nice article or documentation on how to best design such a policy, so that I am covered for point-in-time recovery of the database (via SQL backups) and for point-in-time recovery in the far past, say 3 years ago, using tape backups, while making sure that I don't duplicate the same efforts?

Any advice or suggestions on this topic?

Thanks,


SQL 2012 :: Distinct Storage Tier Of Remote BLOB Storage (RBS)

Oct 27, 2014

How to implement distinct storage tiers on SQL Remote BLOB Storage (RBS)?

I want to use this SQL feature to move files (images, videos, PDF files) from a database to a distinct database dedicated to RBS. Then I want to have several storage tiers, where objects will be saved and moved according to access frequency. Old data will be archived in cheap storage, but it must always be accessible if needed.

Description:
- 1st and main tier: new and frequently accessed objects stored in high-performance storage;
- 2nd tier: automatically move older or less-accessed objects to an inexpensive, different storage tier;
- in all cases, all objects must be accessible to all users, but access to archived objects (2nd tier) will be much slower.


Transact SQL :: Fast Data Loading With Partition Switching Strategy

Jul 28, 2015

I'm looking for clarity on partition switching. The idea is to run many BULK INSERT statements into tables dbo.X_n in parallel and, when the BULK INSERT for table dbo.X_n is completed, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.

In learning about partition switching (in part) from The Data Loading Performance Guide, under Partition SWITCH, I hear the instructions say to copy the main table exactly to become a target. But in that same step (#1), I read that we need to change the filegroup of the target (dbo.X_n) away from the default filegroup. Then it says I need to match indexes, and it lists the filegroup as something we need to match with the main table.

As an overview of the partition switching strategy, I think the whole point of BULK INSERT with partitioning is to have separate files (in the same filegroup) to enable concurrent uploading, where each table has its own file. Once the upload to a table (dbo.X_n) is completed, we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn't actually move; just the metadata for it does.
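For what it's worth, a minimal sketch of the flow being described, using the table names above; the partition column, its value, the column list, and the filegroup name are assumptions, and the staging table has to sit on the same filegroup as the target partition with matching indexes and a check constraint covering that partition's range:

CREATE TABLE dbo.X_1
(
    LoadID  int          NOT NULL,
    Payload varchar(100) NULL,
    CONSTRAINT CK_X_1 CHECK (LoadID = 1)   -- matches partition 1's boundary
) ON FG_Loads;                             -- same filegroup as partition 1 of dbo.bigdaddy

CREATE CLUSTERED INDEX CIX_X_1 ON dbo.X_1 (LoadID) ON FG_Loads;   -- mirror the main table's indexes

BULK INSERT dbo.X_1 FROM 'C:\loads\chunk_1.dat' WITH (TABLOCK);

-- Metadata-only operation: the staged rows become partition 1 of the main table.
ALTER TABLE dbo.X_1 SWITCH TO dbo.bigdaddy PARTITION 1;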

When I read the instructions linked above, I hear “Don’t have the same filegroup on your target as the main table. You must have the same filegroup on your target as the main table.”

Where am I disconnected?


Transact SQL :: Strategy To Translate Column Data Into Distinct Rows

Aug 27, 2015

I am writing a query where I am identifying different scenarios where data changes between one week and the next. I've set up my result set in the following manner:

PrimaryID       SKUChange              DateChange         LocationIdChange        StateChange
10003             TRUE                       FALSE                  TRUE                          FALSE
etc...

The output I'd like to see would be like this:

PrimaryID        Field Changed          Previous Value      New Value
10003             SKUName                 SKU12345           SKU56789
10003             LocationId                 Den123               NYC987
etc...

The key here is that in the initial result set ID 10003 is represented by one row but indicates two changes, and in the final output those two changes are represented by two distinct rows. Obviously, I will bring in the previous and new values from a source.
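One shape this query can take is an unpivot via CROSS APPLY (VALUES ...), available in SQL Server 2008 and later. A rough sketch, where the table name, the previous/new value columns, and their varchar types are all assumptions based on the sample output above:

SELECT  c.PrimaryID,
        x.FieldChanged,
        x.PreviousValue,
        x.NewValue
FROM    dbo.WeeklyChanges AS c
CROSS APPLY (VALUES
        ('SKUName',    c.PrevSKU,        c.NewSKU,        c.SKUChange),
        ('DateField',  c.PrevDate,       c.NewDate,       c.DateChange),
        ('LocationId', c.PrevLocationId, c.NewLocationId, c.LocationIdChange),
        ('State',      c.PrevState,      c.NewState,      c.StateChange)
        ) AS x (FieldChanged, PreviousValue, NewValue, HasChanged)
WHERE   x.HasChanged = 'TRUE';   -- one output row per TRUE flag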



What Strategy Exists To Deploy An SSIS Package And My Own Data Flow Components Onto An Enterprise Server?

Mar 29, 2007



I created an SSIS package and several data flow components for this package.



What strategy exists to deploy an SSIS package and custom data flow components onto an enterprise server?



Thanks in advance.


How Much Data Can Be Stored In SQL CE

Mar 16, 2007

Hi,

I want to know how much data can be stored in SQL Server Compact Edition. I've got a DB on a Pocket PC that has a table with about 2,000 records in it; are those too many records?


Unlimited Data Storage

Aug 29, 2005

hi all,
I have a field whose name is Information and whose type is varchar(8000), but sometimes the data exceeds 8,000 characters. My client told me to make this field store unlimited data.
So how can I achieve this? I am using VS 2003 (ASP.NET with VB.NET) with SQL 2000.
Thanks
Shally


XML Data Type Storage

Nov 22, 2006

Hi All,

As per BOL, the XML data type can store up to 2 GB of data. My question is: when a row is inserted into a table, will 2 GB of space be reserved for its xml column? In other words, how is xml stored internally? Is the storage allocation similar to the varchar(max) data type?

Thanks in advance for everything.


Storage For Data And Logs

Apr 2, 2007

I am planning on doing database mirroring using two (2) servers for each instance and a SAN to store the data and log files for both the primary server and the mirrored server. How do I arrange the SAN's 4 physical drives?
My options are:
- 2 RAID 1 mirrors, giving 250 GB to each SQL engine. This, though, puts both the transaction logs and the data on the same physical drive, even if we split it up further into logical drives.
- A RAID 10, where the transaction logs and data can be on separate drives.
- A RAID 5 using the 4 drives. (How SQL will see these drives I'm not sure, given that it's 2 SQL engines.)
- Or I could get a 5th drive and have a mirror set for the transaction logs and a RAID 5 configured for the data.


Dumb Data Storage Question

Dec 13, 2006

Hello. So, here's my dumb question: if I wanted to store some *.gif images in some database field (SQL2K, possibly 2K5) and wanted to pull the information from there to display on the web form, am I actually storing the image in the database or am I storing the location of the image in the database? I ask this because I was under the impression that the location of the image file is what was being stored, but another person was saying that it was the actual image. I guess I'm confused... Thanks in advance.
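Both designs exist in practice. A small sketch of each, with made-up table and column names: either the bytes themselves go in the column, or only a path/URL does and the file stays on disk.

CREATE TABLE dbo.Pictures_Blob (
    PictureID int IDENTITY PRIMARY KEY,
    ImageData image NOT NULL              -- the actual .gif bytes (varbinary(max) in 2005)
);

CREATE TABLE dbo.Pictures_Path (
    PictureID int IDENTITY PRIMARY KEY,
    ImagePath varchar(260) NOT NULL       -- e.g. '/images/logo.gif'; the file itself lives on the web server
);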


Storage Of Varying Data Types In SQL

Jun 2, 2001

re: [Windows 2000 SP1, SQL Server 7.0 SP2]

I am developing an online web-based address book for multiple users. There are STANDARD FIELDS and CUSTOM FIELDS.

Standard fields include: Name,Street,City,State,Zip.
Custom fields are those defined by a specific user. For example:

User-A Custom fields:
Interest Rate <real>
Loan Amount <currency>
Start date <date>

User-B Custom fields:
Blood type <char 3>
Date of birth <date>
Referred by <varchar 50>

Different users can have different custom fields in their address book. As you can see, the standard fields for each user can be stored in a single table. However, I have several methods by which I can store the CUSTOM fields.

------------------------------------------------
Method 1: Create 2 separate tables called CustomField and CustomValue:

CustomField has fields:
FieldID <int>
FieldName <varchar 25>
UserID <int>

CustomValue has fields:
ValueID <int>
Value <varchar 50>
FieldID <int>

------------------------------------------------
Method 2: Create a separate Field and multiple Value tables for each data type:
CustomField, CustomCharValue, CustomIntValue, CustomMoneyValue, etc...

CustomField has fields:
FieldID <int>
FieldName <varchar 25>
FieldType <smallint> (determines which TABLE, below, contains the data)
UserID <int>

CustomCharValue
CharValueID <int>
CharValue <varchar 50>
FieldID <int>

CustomIntValue
IntValueID <int>
IntValue <int>
FieldID <int>

etc....etc...


The structures of those tables would be similar to Method 1, but the data would be segregated based on their data type.

--------------------------------------------------

I'm thinking that while Method 1 will be easier to implement, Method 2 may offer me better performance if coded correctly. I'm going to assume that I'll have at least 1-5 million records to work with over the course of my first year, and I will need the ability to sort records based on values in the custom fields as well.

My first question is: Which method should I be considering and is there an alternative or hybrid that I should be considering?

My second question is: What statements should I use in my stored procedure that will enable me to retrieve a list of USERID, CustomFieldIDs and their values as one resulting table that I can query at will and with solid performance?
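For reference, the second question boils down to a join across the Method 1 tables; a minimal sketch using only the columns already defined above:

SELECT  cf.UserID,
        cf.FieldID,
        cf.FieldName,
        cv.Value
FROM    CustomField cf
        JOIN CustomValue cv ON cv.FieldID = cf.FieldID
ORDER BY cf.UserID, cf.FieldName;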

Gregory
email: sqlGuy@clubtel.com


Indexing And Physical Storage Of Data

Aug 28, 2007

1> How is the data stored physically when there is no primary key and no index defined on the table?

2> How is the data stored physically when there is just a primary key defined on one of the columns of the table, with no other index defined?




Thanks,
Rahul Jha


Storage Of Text Data Types

Jan 2, 2014

I'm trying to fully understand when to use the different data types in SQL Server. I want to know what Microsoft means when they say "varchar is the actual length of the data entered plus 2 bytes". For example, what would the storage of varchar(50) be?
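A quick way to see the "actual length" part for yourself (just a sketch; the 2-byte length prefix is row overhead that DATALENGTH does not show):

DECLARE @v varchar(50);
SET @v = 'hello';
SELECT DATALENGTH(@v) AS BytesStoredForData;   -- returns 5, not 50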


SQL Express 2005 Data Storage Limitation

Sep 5, 2007

Hello,
I am designing a program to work with SQL Server Express 2005, but I don't know what the data storage limit is in this version of SQL Server.
What I want is to store about 30,000 records in a table of the database.
Does SQL Server Express 2005 have any problems or restrictions with storing that much data?
Please advice in this regards,
Thank you,
Mona 
 


Different Language OS And Data Storage In SQL Server

Oct 11, 2001

Dear Friends,

I am using SQL Server 7 with ASP. I have two working environments, one Korean and one English.
- One Korean OS server has SQL Server 7.0 and is my database server.
- A second Korean OS server is only a web server.
- The English OS server is Win2K and is only a web server.


1) When I used both Korean servers as my web server + database server, there was no problem adding Korean data to SQL Server on the Korean OS.

2) But when I try to use the English OS server as my web server and the Korean OS server as my database server, I am not able to store Korean data in the database server; instead it stores junk/ASCII characters in the database.

-- I already tried the Korean version of MDAC on the English OS.
-- I also tried the OEM feature in the SQL Server Client Network Utility.
-- When I use CODEPAGE in my .ASP page, data storage works fine, but there is a problem when getting the data back.



If you need any more information about the problem, let me know.

So please help me in this regard.

Thanx in advance
Anis Vora
Partner
Global SoftWeb Solutions
www.globalsoftweb.com



Can't Install IBM Tivoli Storage Manager Server On Windows 2003 X64 Storage Server, How Can I Fix The Pkg?

Jan 14, 2008

I am a Windows developer for the IBM Tivoli Storage Manager Server (TSMS) product.
Our product installation is built with InstallShield and uses the Windows Installer.

On a new installation of Windows 2003 x64 Storage Server R2, at a customer's site, the TSMS product fails to install.
The install of the OS has version 3.01.400.3959 of the Windows Installer and I see no newer version that installs.

Part of our product is 32 bit (console) and another part is x64 (server).
When installing, I can see that the install's default directory is being redirected/reset to C:\Program Files (x86)\Tivoli\TSM after it is explicitly set by a custom action to ..\Program Files\.. . I further observe that our custom actions to write 64-bit registry entries are being refused.

REGSAM samMask = KEY_ALL_ACCESS;
if ( regIsWow64Process() )
    samMask = samMask | KEY_WOW64_64KEY;          // request the 64-bit view of the registry

lStatus = RegCreateKeyEx( hLocalConnectKeyRoot,   // parent key
                          szSubkey,               // subkey to create
                          0L,
                          NULL,
                          REG_OPTION_NON_VOLATILE,
                          samMask,                // includes KEY_WOW64_64KEY on x64
                          NULL,
                          hKey,
                          &dw );
The above fails to create the key.

We have tried four versions of our TSMS spanning many changes but the install acts the same.
This does not happen on any other Windows OS we test on but we do not test on Windows 2003 Storage Server R2 being that it is an OEM product. We did test on Windows server 2003 R2 x64 and do not see this problem.

Do you have any suggestions on how to tackle this problem?
I have full installation traces but can only see that the registry work is being refused. I can't see why.


Searching Encrypted Data; Using MAC Secret Data

Aug 10, 2006

I just finished reading an article on how to search encrypted data efficiently, and they suggested creating a new column with a Message Authentication Code (MAC). To be honest, reading the article makes my head hurt. I can hardly understand what they were doing myself, and I can't begin to explain it to a developer.

Are there any easier ways to search encrypted columns for a specific match? If not, does anyone have some stored procs that implement this messy MAC stuff?
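For what it's worth, the core of the MAC idea is small. Here is a rough sketch assuming SQL Server 2005's HashBytes and EncryptByPassPhrase, with entirely hypothetical table, column, and secret names; a keyed HMAC is stronger than this bare salted hash, so treat it as the shape of the technique only:

CREATE TABLE dbo.Customers (
    CustomerID    int IDENTITY PRIMARY KEY,
    SSN_Encrypted varbinary(128) NOT NULL,
    SSN_Mac       varbinary(20)  NOT NULL   -- searchable fingerprint of the plaintext
);

DECLARE @secret varchar(32), @ssn varchar(11);
SELECT @secret = 'server-side-secret', @ssn = '123-45-6789';

-- Write: store the ciphertext plus a deterministic, salted hash of the plaintext.
INSERT INTO dbo.Customers (SSN_Encrypted, SSN_Mac)
VALUES (EncryptByPassPhrase(@secret, @ssn), HashBytes('SHA1', @ssn + @secret));

-- Search: hash the value being looked up the same way and compare exactly.
SELECT CustomerID
FROM   dbo.Customers
WHERE  SSN_Mac = HashBytes('SHA1', @ssn + @secret);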



TIA,



Barkingdog


Design Data Storage For Feature Similar To Facebook Groups

Mar 13, 2008

OK, so Facebook groups have hundreds of thousands of members. Members can be part of an unlimited number of groups, and a group can have an unlimited number of members.

A comma-delimited string seems absurd. A many-to-many database relationship seems like it won't scale well to the tens of thousands and hundreds of thousands of members (especially if you have 1,000-5,000 groups). A table for each group would work, but that's a bit over the top in my opinion. An XML file doesn't seem to be any better than the above options.

I am no database guru, but I can't figure out a scalable method of doing this, be it with or without a database. I need something that can support 10 groups that have 20 members each OR 1000 groups with 100,000 members each.
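For reference, the standard many-to-many shape is a single narrow junction table indexed both ways, which scales to many millions of rows when indexed properly; a minimal sketch with assumed names:

CREATE TABLE dbo.GroupMembers (
    GroupID  int NOT NULL,
    MemberID int NOT NULL,
    CONSTRAINT PK_GroupMembers PRIMARY KEY CLUSTERED (GroupID, MemberID)
);

-- Covers the "all groups for this member" lookup;
-- the clustered key already covers "all members of this group".
CREATE NONCLUSTERED INDEX IX_GroupMembers_Member ON dbo.GroupMembers (MemberID, GroupID);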

Any help, suggestions, or kicks in the right direction would be most appreciated.


SQL 2000 DTS Package Data Storage -- What Table(s) Is This Information Stored In?

Aug 22, 2007

I need to generate a report of DTS package results, i.e. succeeded, failed, error, etc. In what tables is this type of information stored for SQL 2000?

winniemax


Searching HTML Data

Mar 29, 2004

Hi,

I have question, sorry if it is very basic, as SQL is not my thing!

I am allowing visitors on my IBS site to (let's say) create HTML posts. This is enabled by letting the user use a WYSIWYG text editor component. This means that users can create all sorts of HTML tags.

Before storing this HTML in the SQL Server, I encode it.

I also need to provide users with search ability. So what is the best way of achieving this? Can I write search SQL normally, as in with LIKE operators, or do I need to do something special?

Thanks


Searching Character Data Using Like

Jul 20, 2005

Hello All,

SQL 2000, case-insensitive database.

I have a situation where I need to find abbreviations in the rows of a table. The rule I came up with is: get all the rows from the table where more than one character in a row is capitalized consecutively, e.g. "USA", "TIMS", "AIR".

Here is the sample data:

create table test (mystring varchar(100))
go
insert into test (mystring) values ('I live in USA')
insert into test (mystring) values ('this is a test row. usa(abbreviated wrongly).')
go

--expected result set
mystring
----------------------
I live in USA

Here is the query which I tried:

select * from test
where mystring collate SQL_Latin1_General_CP1_CS_AS like '%[A-Z][A-Z]%'

But the above query returns both the records. Any help?

Thanks
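One detail that may explain the behaviour, offered as a sketch against the sample table above rather than a definitive answer: in a case-sensitive SQL collation, [A-Z] is a dictionary range that still spans most lowercase letters, whereas a binary collation makes it a strict uppercase-only range:

select * from test
where mystring collate Latin1_General_BIN like '%[A-Z][A-Z]%'
-- with the binary collation only 'I live in USA' should match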


Searching For Any Data String In Database

Jul 23, 2005

Hi, here is a problem: I have a database with many, many tables. The problem is that I don't know where the 'abcd' string is (but it is for sure in one of those tables). Is there any SELECT that could help me find this string in the database?

--greets
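There is no single SELECT that spans every table, but a small dynamic-SQL loop over the character columns can do it. A rough sketch, assuming dbo-owned tables and searching char/varchar columns only:

DECLARE @tbl sysname, @col sysname, @sql nvarchar(4000), @hits int;

DECLARE col_cur CURSOR FAST_FORWARD FOR
    SELECT TABLE_NAME, COLUMN_NAME
    FROM   INFORMATION_SCHEMA.COLUMNS
    WHERE  DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar');

OPEN col_cur;
FETCH NEXT FROM col_cur INTO @tbl, @col;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT @c = COUNT(*) FROM [dbo].[' + @tbl + N'] WHERE [' + @col + N'] LIKE ''%abcd%''';
    EXEC sp_executesql @sql, N'@c int OUTPUT', @c = @hits OUTPUT;
    IF @hits > 0 PRINT @tbl + '.' + @col;   -- the string was found in this column
    FETCH NEXT FROM col_cur INTO @tbl, @col;
END;
CLOSE col_cur;
DEALLOCATE col_cur;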


Searching/Extracting Numerical Data

Feb 6, 2006

I am new to the SQL world, and I am trying to come up with a script that will extract only the numerical data from a column of varchar data type. There is no pattern to the data entered, except that the data I am looking to extract is a three-digit number. If someone could point me in the right direction that would be great.

Thanks in advance
KR
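A starting point in T-SQL is PATINDEX with a digit pattern; a sketch with assumed table and column names (note it grabs the first three consecutive digits, so rows containing longer numbers would need an extra check):

SELECT  SUBSTRING(SomeColumn, PATINDEX('%[0-9][0-9][0-9]%', SomeColumn), 3) AS ThreeDigitValue
FROM    dbo.SomeTable
WHERE   PATINDEX('%[0-9][0-9][0-9]%', SomeColumn) > 0;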


Searching Historical Data For Patterns

Feb 17, 2008



I have a database which contains time-series data (historical stock prices) which I have to search for patterns on a day-to-day basis. But searching this historical data for patterns is very time consuming, not only in writing the complex T-SQL scripts but also in executing them.

Table structure for one min data:
[Date] [Time] [Open] [High], [Low], [Close], [Adjusted_Close], [MA], [DI].....
Tick Data:
[Date] [Time] [Trade]
The most time-consuming queries are the ones with lots of inner joins. So, for example, if I have to compare the first few minutes of data then I have to do inner joins like:
With IntervalData AS
(
SELECT [Date], Sum(CASE WHEN 1430 = [Time] THEN [PriceRange] END) AS '1430',
Sum(CASE WHEN 1431 = [Time] THEN [PriceRange] END) AS '1431',
Sum(CASE WHEN 1432 = [Time] THEN [PriceRange] END) AS '1432'
FROM [INDU_1] GROUP BY [Date]
)
SELECT [Date] ,[1430], [1431], [1432], [1431] - [1430] As 'Range' from IntervalData
WHERE ([1430] > 0 AND [1431] < 0 AND [1432] < 0) OR ([1430] < 0 AND [1431] > 0 AND [1432] > 0)
------------------------------------------------------------------------
select ind1.[Time], ind1.PriceRange,ind2.[Time], ind2.PriceRange from INDU_1 ind1
INNER JOIN INDU_1 ind2 ON ind1.[Time] = ind2.[Time] - 1 AND ind1.[Date] = ind2.[Date]
where (ind1.[Time] = 2058) AND ((ind1.PriceRange > 0 AND ind2.PriceRange >0) OR (ind2.PriceRange < 0 AND ind1.PriceRange < 0))
ORDER BY ind1.[Date] DESC;
Is there any way I can use SQL 2005 data mining models to make this searching faster?


Transact SQL :: Manage Max Table Storage Space In Case Of Excess Data (size In GB)

Apr 23, 2015

I am using SQL Server 2008 R2 on my end. I have created a database named testDB. I have a lot of tables in it, including some log tables, and some of those log tables contain a very large number of records.

My purpose is to cap the size of those log tables and move older records into tables in another database placed in another location, so that my main database has no problem.

Is there any way to implement the above steps for my database?

Is any such functionality already built into SQL Server?
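As far as I know there is no built-in per-table size cap; the usual approach is a scheduled archive-and-purge pass (for example as a SQL Agent job). A rough sketch, with the archive database, table, and column names assumed:

-- Copy rows older than three months into the archive database, then remove them here.
INSERT INTO ArchiveDB.dbo.AppLog_Archive (LogID, LogDate, Message)
SELECT LogID, LogDate, Message
FROM   testDB.dbo.AppLog
WHERE  LogDate < DATEADD(MONTH, -3, GETDATE());

DELETE FROM testDB.dbo.AppLog
WHERE  LogDate < DATEADD(MONTH, -3, GETDATE());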


Searching For Encrypted Fields In Data Columns

Jul 20, 2005

I am new to database programming and was curious how others solve the problem of storing encrypted data in DB table columns and then subsequently searching for those records.

The particular problem that I am facing is in dealing with privacy-critical information like credit-card numbers and SSNs, or business-critical information like sales opportunity size or revenue, in the database. The requirement is that this data be stored encrypted (and not in the clear). Just limiting access to the tables with this data isn't sufficient.

Does any database provide native facilities to store specific columns as encrypted data? The other option I have is to use something like RC4 to encrypt the data before storing it in the database.

However, the subsequent problem is: how do I search/sort on these columns? It's not a big deal if I have a few hundred records; I could potentially retrieve all the records, decrypt the specific fields and then do in-process searches/sorts. But what happens when I have (say) a million records? I really don't want to suck in all that data and work on it, but instead use the native DB search/sort capabilities.

Any suggestions and past experiences would be greatly appreciated.

much thanks,
~s


Backup Strategy

Jul 18, 2000

Hi all,

Pardon me for asking a question that I know has been asked before. I need to develop a backup strategy for our SQL Server and I am looking for any help that anyone can offer, including recommendations for good books to read.


Thanks in advance,
Faustina


Backup Strategy

Oct 18, 2000

In SQL Server 6.5, is it generally better to dump the transaction log first and then the database, or to dump the database and then run a dump of the transaction log with the truncate-only option?

Or, is this more a matter of personal choice?

Toni
