Big Table(?) Or Split Between Tables?
Nov 20, 2007
Hi Guys
I have an application that runs on several sites that has a table with 36 columns, mostly ints and small varchars.
I currently have only one table that stores the data, plus five indexes. Since the table at one location (and others soon) has about 18 million rows, I have been trying to come up with a better solution (but only if needed; I don't think I have to tell you that I am a programmer and not a DBA).
The db file size with all the indexes is more than 10 GB, which in itself is not a problem, but is it a bad solution to have it that way?
The questions are:
Are there any big benefits if I split it into several smaller tables, or even smaller databases, and make the SPs that get the data aware that, say, 2006's data is in table A and so on?
It's quite important that SELECTs are fast, and that need is far more important than decreasing the size of the database file and so on.
How many rows is it okay to have in one table (with 25 columns) before it's too big?
Thanks in advance.
Best regards
Johan, Sweden.
CREATE TABLE [dbo].[Cdr](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Abandon] [varchar](7) NULL,
[Bcap] [varchar](2) NULL,
[BlId] [varchar](16) NULL,
[CallChg] [varchar](6) NULL,
[CallIdentifier] [uniqueidentifier] NULL,
[ChgInfo] [varchar](5) NULL,
[ClId] [varchar](16) NULL,
[CustNo] [smallint] NULL,
[Digits] [varchar](32) NULL,
[DigitType] [varchar](1) NULL,
[Dnis1] [varchar](6) NULL,
[Dnis2] [varchar](6) NULL,
[Duration] [int] NULL,
[FgDani] [varchar](13) NULL,
[HoundredHourDuration] [varchar](3) NULL,
[Name] [varchar](40) NULL,
[NameId] [int] NOT NULL,
[Npi] [varchar](2) NULL,
[OrigAuxId] [varchar](11) NULL,
[OrigId] [varchar](7) NULL,
[OrigMin] [varchar](16) NULL,
[Origten0] [varchar](3) NULL,
[RecNo] [int] NULL,
[RecType] [varchar](1) NOT NULL,
[Redir] [varchar](1) NULL,
[TerId] [varchar](7) NOT NULL,
[TermAuxId] [varchar](11) NULL,
[TermMin] [varchar](16) NULL,
[Termten0] [varchar](3) NULL,
[Timestamp] [datetime] NOT NULL,
[Ton] [varchar](1) NULL,
[Tta] [int] NULL,
[Twt] [int] NULL,
[DateValue] [int] NULL,
[TimeValue] [int] NULL,
[Level] [varchar](50) NOT NULL CONSTRAINT [DF_Cdr_Level] DEFAULT ('x:'),
CONSTRAINT [PK_Cdr] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 10) ON [PRIMARY]
) ON [PRIMARY]
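For reference, a minimal sketch of what the year-based split could look like with SQL Server 2005 table partitioning instead of separate tables (Enterprise Edition only; the boundary dates and object names are illustrative, and the SPs would not need to know where each year lives):
CREATE PARTITION FUNCTION pfCdrYear (datetime)
AS RANGE RIGHT FOR VALUES ('20060101', '20070101', '20080101');
CREATE PARTITION SCHEME psCdrYear
AS PARTITION pfCdrYear ALL TO ([PRIMARY]);
-- The table (or its clustered index) would then be created ON psCdrYear([Timestamp])
-- instead of ON [PRIMARY]; note that a unique clustered key would have to include [Timestamp].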
View 14 Replies
Feb 21, 2005
What's the best way to convert a large set of records from a simple schema where all fields are in one table to a schema where fields are split across two tables? The two table setup is necessary for reasons not worth getting into here.
Doing this via cursor is pretty straightforward, but is there a comparable set-based solution?
Here are sample create table commands. Obviously, the example below is simplified for discussion purposes.
-- One record from here will produce a record in TargetParentRecords and a record in TargetChildRecords for a total of two records.
CREATE TABLE OriginalSingleTableRecords (
ID INT IDENTITY (1, 1) NOT NULL,
ColumnA VARCHAR(100) NOT NULL,
ColumnB VARCHAR(100) NOT NULL,
CONSTRAINT PK_OriginalSingleTableRecords PRIMARY KEY CLUSTERED (ID)
)
CREATE TABLE TargetParentRecords (
ParentID INT IDENTITY (1, 1) NOT NULL,
ColumnA VARCHAR(100) NOT NULL,
CONSTRAINT PK_TargetParentRecords PRIMARY KEY CLUSTERED (ParentID)
)
-- Each row in this table must link to a TargetParentRecords row
CREATE TABLE TargetChildRecords (
ID INT IDENTITY (1, 1) NOT NULL,
ParentID INT NOT NULL, -- References TargetParentRecords.ParentID
ColumnB VARCHAR(100) NOT NULL,
CONSTRAINT PK_TargetChildRecords PRIMARY KEY CLUSTERED (ID)
)
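A rough set-based sketch of the split, under the assumption that ColumnA uniquely identifies each parent row so the child insert can join back on it after the parent insert (if ColumnA is not unique, the join-back key would need to change):
INSERT INTO TargetParentRecords (ColumnA)
SELECT DISTINCT ColumnA
FROM OriginalSingleTableRecords;
INSERT INTO TargetChildRecords (ParentID, ColumnB)
SELECT p.ParentID, o.ColumnB
FROM OriginalSingleTableRecords AS o
JOIN TargetParentRecords AS p ON p.ColumnA = o.ColumnA;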
View 5 Replies
View Related
Mar 14, 2006
I imported all rows of my txt file using SSIS 2005 into a table. I am now trying to figure out how to split out the header, payment rows, and maintenance rows. First, some information.
An example of table results is here:
http://www.webfound.net/split.txt
The table has just one field of type varchar(100) because the incoming file is a fixed-length file at 100 bytes per row.
The header rows are the rows with HD in them...then followed by detail rows for that header (see here http://www.webfound.net/rows.jpg).
I need to
1) Split out the header into a header table
2) Split out the maintenance rows (related to the header) into a maint table
3) Split out the payment rows (related to the header) into a payment table
I'll need to maintain a PK/FK relationship between each Header and its corresponding maint and payment rows in the other 2 tables.
To determine if it's a payment vs. maintenance row, I need to compare chars 30 - 31. If it contains 'MT' then you know it's a maintenance row, else it's a payment row.
How in the hell do I do this???
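One way to see the classification in plain T-SQL (a rough sketch; dbo.StagedRows and RawLine are stand-ins for the real staging table and column, and the header test assumes 'HD' sits at a fixed position):
SELECT RawLine,
       CASE
           WHEN SUBSTRING(RawLine, 1, 2) = 'HD' THEN 'Header'
           WHEN SUBSTRING(RawLine, 30, 2) = 'MT' THEN 'Maintenance'
           ELSE 'Payment'
       END AS RowType
FROM dbo.StagedRows;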
View 4 Replies
View Related
Sep 5, 2007
Hi ,
I have two tables within a SQL database. The 1st table has an ID column and a column which lists one or more email identifiers for a second table,
e.g.
ID Email
-- ----------
1 AS1 AS11
2 AS2 AS3 AS4 AS5
3 AS6 AS7
The second table has a column which has an email identifier and another column which lists one email address for that particular identifier, e.g.
ID EmailAddress
--- ------------------
AS1 abcstu@emc.com
AS2 abcstu2@emc.com
AS3 abcstu3@emc.com
AS4 abcstu4@em.com
AS5 abcstu5@emc.com
AS6 abcstu6@emc.com
AS7 abcstu7@emc.com
AS11 abcstu8@emc.com
I need to create a stored procedure or function that:
1. Selects an Email from the first table, based on a valid ID,
2. Splits the Email field of the first table (using the space separator) so that there is an array of Emails and then,
3. Selects the relevant EmailAddress value from the second table, based on a valid Email stored in the array
Is there any way that this can be done directly within SQL Server using a stored procedure/function without having to use cursors?
Many Thanks,
probetatester@yahoo.com
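For what it's worth, a cursor-free sketch on SQL Server 2005 or later, assuming EmailGroups(ID, Email) and EmailAddresses(ID, EmailAddress) stand in for the two tables above; the space-separated list is turned into XML nodes and joined to the address table:
DECLARE @ID int
SET @ID = 2
SELECT a.EmailAddress
FROM EmailGroups AS g
CROSS APPLY (SELECT CAST('<t>' + REPLACE(g.Email, ' ', '</t><t>') + '</t>' AS xml) AS x) AS s
CROSS APPLY s.x.nodes('/t') AS n(t)
JOIN EmailAddresses AS a
  ON a.ID = n.t.value('.', 'varchar(20)')
WHERE g.ID = @ID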
View 7 Replies
View Related
Sep 27, 2005
I have a large table that I'm planning on splitting out into 5 smaller ones. What I need to do is maintain some central repository for auto-numbering new records to make sure that no 2 records in different tables have the same unique ID. Thanks in advance!
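A minimal sketch of one common approach, a central key table plus a procedure that reserves the next value atomically (all names are illustrative):
CREATE TABLE dbo.KeyCounter (NextID int NOT NULL)
INSERT INTO dbo.KeyCounter (NextID) VALUES (0)
GO
CREATE PROCEDURE dbo.GetNextID
    @NewID int OUTPUT
AS
BEGIN
    -- the compound assignment increments and captures the value in one atomic update
    UPDATE dbo.KeyCounter
    SET @NewID = NextID = NextID + 1
END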
View 14 Replies
View Related
Feb 5, 2005
I'm grappling with this design problem right now:
I have a table of users. Every user has an e-mail address and (hashed) password. Some of those users work for a company, and some of them do not. Of those who do not work for a company, some are salespeople who sell to one or more companies. Some users are simply administrators who don't work for a specific company. So here's what my users table looks like right now: "UserID, Email, Password, CompanyID (Nullable), IsAdmin"
And here's my companies table: "CompanyID, CompanyName, SalespersonID"
Of course, I could separate it out and make a Users table, an Employees table, and a Salespeople table. The way the relationship works out, though, I could use the same ID number for all three tables, and that indicates to me that perhaps they all belong in the same table. It seems silly, after all to have a Salespeople table whose only field is "UserID."
Two factors of the first design concern me: First is the fact that a salesperson could also have a company. I guess I could write a check constraint to prevent this, but doesn't having the companyID in the Users table violate a normalization rule? Maybe? The second is the fact that the Companies table relies upon Users, which in turn relies upon Companies. In OOP, this usually isn't a good thing, but I'm not sure whether it's cause for concern in a relational database.
Anyway, I really don't know what I should be doing with this design. Any suggestions?
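For comparison, a rough sketch of the split-out (supertype/subtype) layout mentioned above, with the salesperson-to-company link moved into a junction table so one salesperson can sell to several companies; all names and types are illustrative:
CREATE TABLE Companies (
    CompanyID int IDENTITY(1,1) PRIMARY KEY,
    CompanyName varchar(100) NOT NULL
)
CREATE TABLE Users (
    UserID int IDENTITY(1,1) PRIMARY KEY,
    Email varchar(255) NOT NULL,
    Password varchar(64) NOT NULL,   -- hashed
    IsAdmin bit NOT NULL DEFAULT 0
)
CREATE TABLE Employees (
    UserID int PRIMARY KEY REFERENCES Users(UserID),
    CompanyID int NOT NULL REFERENCES Companies(CompanyID)
)
CREATE TABLE Salespeople (
    UserID int PRIMARY KEY REFERENCES Users(UserID)
)
CREATE TABLE SalespersonCompanies (
    UserID int REFERENCES Salespeople(UserID),
    CompanyID int REFERENCES Companies(CompanyID),
    PRIMARY KEY (UserID, CompanyID)
)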
Thanks in advance,
-Starwiz
View 1 Replies
View Related
Jul 18, 2006
I have an input file with fixed-width columns that I want to import into two tables.. 5 of the input columns go to 1 table and the remaining 15 go to another table. What's a good way to do this in SSIS?
TIA,
Barkingdog
View 3 Replies
View Related
Feb 7, 2006
Hello,
Hoping someone here can help. Perhaps I'm missing something obvious, but I'm surprised not to see a data flow task in SSIS for splitting *columns* to different destinations. I see the Conditional Split task can be used to route a *row* one way or another, but what about columns of a single row?
As a simple and somewhat contrived example, let's say I have a row with twelve fields and I'm importing the row into a normalized data structure. There are three target tables with a 1-to-1 relationship (that is, logically they are one table, but physically they are three tables, with one of them considered the "primary" table), and the twelve input fields can be mapped to four columns in each of the three tables.
How do I "split" the columns? The best way I can see is to Multicast the row to three different OLE-DB Destinations, each of which inserts to one of the three target tables, only grabbing the four fields needed from the input row.
Or should I feed the row through three successive OLE-DB Command tasks, each one inserting into the appropriate table? This would offer the advantage, theoretically, of allowing me to grab the identity-based surrogate primary key from the first of the three inserts in order to enable the two subsequent inserts.
Thoughts?
Thanks in advance,
Dan
View 5 Replies
View Related
Mar 14, 2006
SSIS 2005
Ok, I have a task in SSIS that does the following and works:
1) Brings in a txt file
2) Using a conditional component, checks for a value in the row.
3) Based on the value, splits the row into one of 3 tables (Header, Maintenance, or Payment)
Here is a print screen of what I have so far, which splits Header rows into their own table, Maintenance rows into their own table, and Payment rows into their own table:
http://www.webfound.net/qst_how_to_add_header_PK_and_FKs.JPG
Here is a print screen of the conditional split:
http://www.webfound.net/conditional_split.jpg
Please take a look at the txt file here before it's processed:
http://www.webfound.net/split.txt
http://www.webfound.net/rows.jpg
Notice that the pattern is a header row, followed by its corresponding detail rows. The detail rows are either Maintenance or Payment rows.
I need to somehow, in the Script component or some other way, assign a unique HeaderID (PK) to each of the header rows and add that ID to its corresponding Maintenance and Payment detail rows in their respective tables as an FK. The problem is
1) I don't know how to do this in the flow of the components as I have it now
2) How do I tell it to create a new Header ID and Header FKs for the detail rows based off of each new Header row?
In the end (much later on in my entire package), the goal is to be able to run a stored proc to join and select the Header and Detail rows back into a final table so I can then do more processing, such as splitting each header and its detail rows into their own txt files, etc. I don't need to go into details why, but just know that this is the goal; therefore I need to relate each header row with its corresponding detail rows that are split off into the MaintenanceRow and PaymentRow tables.
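If the assignment ends up being easier on the SQL side than in the data flow, a rough sketch: stage every line with its position in the file, then give each detail row the row number of the most recent header above it (dbo.StagedRows, RowNo and RawLine are assumed names, and the test assumes 'HD' sits at a fixed position):
SELECT d.RowNo,
       d.RawLine,
       (SELECT MAX(h.RowNo)
        FROM dbo.StagedRows AS h
        WHERE SUBSTRING(h.RawLine, 1, 2) = 'HD'
          AND h.RowNo <= d.RowNo) AS HeaderID
FROM dbo.StagedRows AS d;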
View 2 Replies
View Related
Oct 19, 2012
I have an empty employee table and employee_details table. The temp table which I created has, say, 10 columns, of which 6 are from employee and 4 from employee_details. I have loaded some data into the temp table, say 10 rows.
Now the stored procedure using a cursor should be created such that it fetches the rows one by one from the temp table and inserts the values into the employee table (6 columns) and the rest into the employee_details table (4 columns).
This is the scenario.
Here is the column names of my temp table
CREATE TABLE [dbo].[temp](
[employee_id] [char](7) NOT NULL,
[first_name] [char](50) NOT NULL,
[middle_name] [char](50) NOT NULL,
[last_name] [char](50) NOT NULL,
[title] [char](5) NOT NULL,
[Code] ....
Here the last 4 columns belong to the employee_details table. The stored procedure should fetch record by record from temp, split each record, and insert into the employee and employee_details tables.
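If a cursor turns out not to be required, a set-based sketch using only the columns shown above; the four employee_details columns are not listed in the post, so they stay as comments rather than guesses, and employee_id is assumed to be the column that links the two tables:
INSERT INTO dbo.employee (employee_id, first_name, middle_name, last_name, title)
SELECT employee_id, first_name, middle_name, last_name, title
FROM dbo.temp;
INSERT INTO dbo.employee_details (employee_id /*, ...the four detail columns... */)
SELECT employee_id /*, ...the four detail columns... */
FROM dbo.temp;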
View 1 Replies
View Related
Sep 29, 2015
I am trying to join two tables and it looks like the data is messed up. I want to split the rows apart, as there is more than one value in a row, but somehow I don't see a pattern here to split the rows on.
This how the data is
Create Table #Sample (Numbers Varchar(MAX))
Insert INTO #Sample Values('1000')
Insert INTO #Sample Values ('1024 AND 1025')
Insert INTO #Sample Values ('109 ,110,111')
Insert INTO #Sample Values ('Old # 1033 replaced with new Invoice # 1544')
Insert INTO #Sample Values ('1355 Cancelled and Invoice 1922 added')
Select * from #Sample
This is what is expected...
Create Table #Result (Numbers Varchar(MAX))
Insert INTO #Result Values('1000')
Insert INTO #Result Values ('1024')
Insert INTO #Result Values ('1025')
Insert INTO #Result Values ('109')
Insert INTO #Result Values ('110')
[Code] ....
How can I implement this? I believe wherever there are numbers I need to split them out.
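For the digit runs themselves, a rough sketch that works against the #Sample data above on SQL Server 2005 or later (an ad hoc tally of character positions; a persisted numbers table would scale better for long strings):
SELECT s.Numbers AS Original,
       SUBSTRING(s.Numbers, n.n,
                 PATINDEX('%[^0-9]%', SUBSTRING(s.Numbers, n.n, LEN(s.Numbers)) + 'X') - 1) AS Extracted
FROM #Sample AS s
JOIN (SELECT TOP 200 ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS n
  ON n.n <= LEN(s.Numbers)
WHERE SUBSTRING(s.Numbers, n.n, 1) LIKE '[0-9]'
  AND (n.n = 1 OR SUBSTRING(s.Numbers, n.n - 1, 1) NOT LIKE '[0-9]');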
View 2 Replies
View Related
Jan 22, 2007
Could you tell me what's wrong when I split the table to the target partition table?
USE TEST
--ADD FILEGROUP
ALTER DATABASE TEST ADD FILEGROUP FG_01
ALTER DATABASE TEST ADD FILEGROUP FG_02
ALTER DATABASE TEST ADD FILEGROUP FG_03
--ADD FILE
ALTER DATABASE TEST ADD FILE (NAME = DF_01, FILENAME = 'D:\TEST\DF_01.ndf', SIZE = 10MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB) TO FILEGROUP FG_01
ALTER DATABASE TEST ADD FILE (NAME = DF_02, FILENAME = 'D:\TEST\DF_02.ndf', SIZE = 10MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB) TO FILEGROUP FG_02
ALTER DATABASE TEST ADD FILE (NAME = DF_03, FILENAME = 'D:\TEST\DF_03.ndf', SIZE = 10MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB) TO FILEGROUP FG_03
--CREATE PARTITION FUNCTION
CREATE PARTITION FUNCTION PF_HIS_HTTP_LOG (datetime)
AS RANGE LEFT FOR VALUES ('20070101 23:59:59.997', '20070102 23:59:59.997')
--CREATE PARTITION SCHEME
CREATE PARTITION SCHEME PS_HIS_HTTP_LOG
AS PARTITION PF_HIS_HTTP_LOG TO (FG_01, FG_02, [PRIMARY])
--CREATE PARTITION TABLE
CREATE TABLE HIS_HTTP_LOG (
USERID varchar(32),
USERIP varchar(15),
USERPORT numeric(5,0),
OBJECTIP varchar(15),
OBJECTPORT numeric(5,0),
URL varchar(256),
HOST varchar(64),
DN varchar(64),
VISITIME numeric(5,0),
STARTIME datetime,
ENDTIME datetime
) ON PS_HIS_HTTP_LOG (STARTIME)
--INSERT DATA, PARTITION 1, 20070101
DECLARE @i int
SET @i = 1
WHILE @i <= 100
BEGIN
INSERT INTO HIS_HTTP_LOG VALUES (CAST(@i AS varchar(32)), '192.168.1.1', 5, '202.103.1.57', 6, 'http://sina.com.cn', '', 'www.sohu.com', 11, CONVERT(datetime, '20070101 13:25:26.100', 121), GETDATE())
SET @i = @i + 1
END
--INSERT DATA, PARTITION 2, 20070102
SET @i = 1
WHILE @i <= 200
BEGIN
INSERT INTO HIS_HTTP_LOG VALUES (CAST(@i AS varchar(32)), '192.168.1.1', 5, '202.103.1.57', 6, 'http://sina.com.cn', '', 'www.sohu.com', 11, CONVERT(datetime, '20070102 11:25:26.100', 121), GETDATE())
SET @i = @i + 1
END
--CREATE A TABLE
CREATE TABLE TMP_HTTP_LOG (
USERID varchar(32),
USERIP varchar(15),
USERPORT numeric(5,0),
OBJECTIP varchar(15),
OBJECTPORT numeric(5,0),
URL varchar(256),
HOST varchar(64),
DN varchar(64),
VISITIME numeric(5,0),
STARTIME datetime,
ENDTIME datetime
) ON FG_03
--INSERT DATA TO TMP_HTTP_LOG, 20070103
DECLARE @i int
SET @i = 1
WHILE @i <= 400
BEGIN
INSERT INTO TMP_HTTP_LOG VALUES (CAST(@i AS varchar(32)), '192.168.1.1', 5, '202.103.1.57', 6, 'http://sina.com.cn', '', 'www.sohu.com', 11, CONVERT(datetime, '20070103 09:25:26.100', 121), GETDATE())
SET @i = @i + 1
END
--ADD CONSTRAINT
ALTER TABLE TMP_HTTP_LOG WITH CHECK
ADD CONSTRAINT CK001 CHECK (STARTIME >= '20070103 00:00:00.000' AND STARTIME <= '20070103 23:59:59.997')
--SPLIT RANGE, SWITCH DATA
ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED FG_03
ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070103 23:59:59.997')
ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3
Why is there an error at the step "ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3"?
Error information: message_id 4972, level 16, severity 1
ALTER TABLE SWITCH statement failed. Check constraints or partition function of source table 'TEST.dbo.TMP_HTTP_LOG' allows values that are not allowed by check constraints or partition function on target table 'TEST.dbo.HIS_HTTP_LOG'.
Please tell me why? Check constraints? Thank you very much!
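One thing to check (a guess based on the error text, not a confirmed diagnosis): STARTIME is nullable in TMP_HTTP_LOG and the CHECK constraint above does not reject NULLs, so SQL Server cannot prove that every row belongs in partition 3. A constraint along these lines usually satisfies the SWITCH requirement:
ALTER TABLE TMP_HTTP_LOG DROP CONSTRAINT CK001
ALTER TABLE TMP_HTTP_LOG WITH CHECK
ADD CONSTRAINT CK001 CHECK (STARTIME IS NOT NULL
    AND STARTIME > '20070102 23:59:59.997'
    AND STARTIME <= '20070103 23:59:59.997')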
View 1 Replies
View Related
Sep 30, 2015
I have a delimited text file with 650+ columns. The sum of the column lengths of a single row, if fully populated, exceeds 30K bytes. The "killer" fields lengthwise are the "Description" fields. If they were removed from the input file, the remaining columns would occupy about 5000 bytes, which is within the SQL max row length.
Can SSIS be used to create these two tables (one without the description fields, the other with those fields but arranged vertically in the table rows)?
The fundamental issue is I cannot import a single file row into a SQL table because that row length could exceed the max byte count for a row.
View 8 Replies
View Related
Jul 10, 2006
Hello,
I am using SQL 2005 and Cognos' Data Manager. It is an ETL tool for data warehousing. I have a problem with the time it takes to load new changes, and I am seeking advice on a better way to manage the data.
I have a table that tracks student attendance and it contains about 13 million records. On a daily basis, there are 5,000 - 20,000 inserts and 10,000 - 50,000 updates.
The daily data comes from two different text files from my operational system: current and historical (CLSFIL and CLSHIS). The data is loaded into a staging area from the operational system, where data cleansing happens and other fields are added to the table. The final step is delivering the table to my target database, which is used for reporting.
Here's the situation: I find it takes 45 minutes to do a relational update, where only the records that changed in the last day will be loaded. However, if I choose the native API load instead of a relational load, it can load all 13M records in 7 minutes. The table is heavily indexed.
At some point, the API load will take more time than the relational load (the changes and new records will remain a constant, but the file will continue to grow).
I'm seeking another solution that is more efficient. I'm considering two tables for history and current, and creating a view for reporting via a union. Is this a good idea? How can I make the view efficient with the WHERE clause? Looking to bounce around ideas. Other ideas?
Thanks in advance,
Rob
(I maintain the key relationships in the tool, not the tables. I know I have lots to learn and improvements to make.)
CREATE TABLE "dbo"."F_BI_Class_Attendance_Detail" (
"CLASS_ATTENDANCE_ID" VARCHAR(50) NULL,
"CLASSES_OFFERED_ID" VARCHAR(26) NULL,
"CLASS_CAMPUS_ID" VARCHAR(10) NULL,
"STUDENT_ID" CHAR(20) NULL,
"FULL_CLASS_ID" CHAR(15) NOT NULL,
"SESSION_ID" CHAR(10) NULL,
"SECTION_ID" VARCHAR(5) NULL,
"MEET_DT" DATETIME NULL,
"MEETING" SMALLINT NULL,
"PRESENT" CHAR(2) NOT NULL,
"SESSION_SKEY" BIGINT NULL,
"STUDENT_SKEY" BIGINT NULL,
"CLASS_CAMPUS_SKEY" BIGINT NULL,
"CLASSES_OFFERED_SKEY" BIGINT NULL,
"LOAD_DT" DATETIME NULL,
"COMPUTED_DT" DATETIME NULL
);
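On the union-view idea, a rough sketch: if the current/history split is on a date column such as LOAD_DT and each table carries a mutually exclusive CHECK constraint on that column, the optimizer can skip whichever table a WHERE clause rules out (a local partitioned view). The table and constraint names and the boundary date are illustrative:
ALTER TABLE dbo.F_BI_Class_Attendance_Current
    ADD CONSTRAINT CK_Current CHECK (LOAD_DT >= '20060101')
ALTER TABLE dbo.F_BI_Class_Attendance_History
    ADD CONSTRAINT CK_History CHECK (LOAD_DT < '20060101')
GO
CREATE VIEW dbo.F_BI_Class_Attendance_All
AS
SELECT * FROM dbo.F_BI_Class_Attendance_Current
UNION ALL
SELECT * FROM dbo.F_BI_Class_Attendance_History
GO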
View 3 Replies
View Related
Jul 10, 2006
Table :: master_Staff
StaffID -Number
StaffName-String
Table :: master_Class
ClassID-Number
ClassTeacher-Number ( Foreign Key of StaffID )
I have a set of StaffIDs saved as a comma-separated list in the class table -------- 10,20,30
Now I want a query to get the staff names -- hari, gopal, sekar -- by using the ClassID,
i.e.:
select staffname from master_staff where staffid in ( select classteacher from master_class where classID = 1)
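The IN above won't match a comma-separated value, but a string comparison can stand in for it (a rough sketch; it assumes ClassTeacher is actually stored as a character column, since it holds a list, and that the list has no spaces around the commas):
SELECT s.StaffName
FROM master_Staff AS s
JOIN master_Class AS c
  ON ',' + c.ClassTeacher + ',' LIKE '%,' + CAST(s.StaffID AS varchar(10)) + ',%'
WHERE c.ClassID = 1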
View 9 Replies
View Related
Dec 18, 2007
I'm having a bit of confusion here. Can you please help me?
I have about 5000 records already in one table. Everything that I query is related to that table one way or the other. Now I have 2000 - 3000 more records to store in the database. In terms of relational database design, I could store the new data in a different table so I can query it. Most of my queries are searches.
So the question is: is it better to store the data in another table, or should I store everything in the old table? Thanks a lot in advance for your help. I really do appreciate it.
Maverick
View 2 Replies
View Related
Apr 4, 2006
Hi,
I have data split across 3 text files, with 3 fields repeated in each to link them (the key). I want to import this data into one table.
I used DTS to create 3 tables with the data. Now I want to combine the 3 tables into only one (which I have already created). How can I do this? Note: the field names in the source tables are different from those in the destination table.
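Something along these lines might do it once the three tables exist (a rough sketch; every name is a placeholder because the real column names aren't given, and the three shared key fields drive the joins):
INSERT INTO dbo.Destination (Key1, Key2, Key3, ColA, ColB, ColC)
SELECT t1.k1, t1.k2, t1.k3, t1.field_a, t2.field_b, t3.field_c
FROM dbo.Import1 AS t1
JOIN dbo.Import2 AS t2 ON t2.k1 = t1.k1 AND t2.k2 = t1.k2 AND t2.k3 = t1.k3
JOIN dbo.Import3 AS t3 ON t3.k1 = t1.k1 AND t3.k2 = t1.k2 AND t3.k3 = t1.k3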
Thanks
Guy
View 6 Replies
View Related
Jul 20, 2005
Hello,
We maintain a 175 million record database table for our customer. This is an extract of some data collected for them by a third party vendor, who sends us regular updates to that data (monthly). The original data for the table came in the form of a single, large text file, which we imported. This table contains name and address information on potential customers.
It is a maintenance nightmare for us, as prior to this the largest table we maintained was about 10 million records, with less complicated updates required.
Here is the problem:
* In order to do the searching we need to do on the table, it has 8 of its 20 columns indexed.
* It takes hours and hours to do anything to the table.
* I'd like to cut down as much as possible the time required to update the file.
We receive monthly one file containing 10 million records that are new, and can just be appended to the table (no problem, simple import into SQL Server).
We also receive monthly one file containing 10 million records that are updates of information in the table. This is the tricky one. The only way to uniquely pair up a record in the update file with a record in the full database table is by a combination of individual_id, zip, and zip_plus4. There can be multiple records in the database for any given individual, because that individual could have a history that includes multiple addresses.
How would you recommend handling this update? So far I have mostly tried a number of execution plans involving deleting out the records in the table that match those in the text file, so I can then import the text file, but the best of those plans takes well over 6 hours to run.
My latest thought: would it help in any way to partition the table into a number of smaller tables, with a view used to reference them? We have no performance issues querying the table, but I need some thoughts on how to better maintain it.
One more thing: we do have 2 copies of the table on the server at all times so that one can be actively used in production while we run updates on the other one, so I can certainly try out some suggestions over the next week.
Regards,
Warren Wright
Dallas
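A rough sketch of an alternative to delete-then-reimport: bulk load the monthly update file into a staging table, then update the big table in place with a join on the three matching columns (table and column names other than the three keys are placeholders):
UPDATE t
SET t.address_line1 = s.address_line1,
    t.city          = s.city
    -- ...remaining updatable columns...
FROM dbo.Prospects AS t
JOIN dbo.Prospects_UpdateStage AS s
  ON  s.individual_id = t.individual_id
  AND s.zip           = t.zip
  AND s.zip_plus4     = t.zip_plus4;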
View 7 Replies
View Related
Sep 21, 2006
Hello
I'm working on a package which loops through each xml file in a folder.
The name of each xml file is put in a variable.
The format of the filename is something like "part1_part2_part3.xml"
I need to store the 3 parts in three different columns of table A
The content of the xml file needs to be manipulated ("." needs to be replaced with ",", etc.) and put in several columns of table B.
It's not clear to me yet how to start this, but my main concern is reading the three parts of the filename. I don't find any task in SSIS which could help me with that.
Could someone give me some pointers?
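One way to see the parsing itself, shown here in T-SQL for clarity (in the package the same idea can be written with SUBSTRING/FINDSTRING expressions in a Derived Column, or in a Script Task); this rough sketch assumes the filename always has exactly three underscore-separated parts and a .xml extension:
DECLARE @filename varchar(100)
SET @filename = 'part1_part2_part3.xml'
DECLARE @base varchar(100)
-- drop the ".xml" extension, then let PARSENAME pick the underscore-separated parts
SET @base = REPLACE(LEFT(@filename, LEN(@filename) - 4), '_', '.')
SELECT PARSENAME(@base, 3) AS Part1,
       PARSENAME(@base, 2) AS Part2,
       PARSENAME(@base, 1) AS Part3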
Many thanks!
Worf
View 3 Replies
View Related
Mar 13, 2007
Hello all,
Little layout question. Assume my dataset returns the following data:
A
B
C
D
E
How can I present this data in a table (or list, or matrix), split into two columns:
A B
C D
E
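One way to drive that layout is to compute a row and column index in the dataset query and group a matrix on them (a rough sketch; the table and column names are placeholders):
SELECT Val,
       (ROW_NUMBER() OVER (ORDER BY Val) - 1) / 2 AS RowIndex,
       (ROW_NUMBER() OVER (ORDER BY Val) - 1) % 2 AS ColIndex
FROM dbo.MyDataset;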
Any idea will be much appreciated! Thanks a lot!
TG
View 4 Replies
View Related
May 29, 2008
help please
i have this table
number_row fld_index vtext
----------------------------------------
1 101 a
2 101 b
3 101 c
4 102 d
5 102 e
6 102 f
7 103 g
8 103 h
9 103 i
......
......
....
Now I need to do this:
SELECT fld_index
FROM table_index
GROUP BY fld_index
----------------------------
and I get
101
102
103
104
How do I split this select and turn it into this:
in('101' ,'102','103','104')
------------------------------
Code Snippet
DECLARE @aaa
set @aaa =(SELECT fld_index FROM table_index GROUP BY fld_index)
set fld1 = CASE WHEN fld1 in(@aaa ) then '*' else fld1 end ---- need to split the @aaa ,'101' ,'102' , '103','104' ,
----------------------------------------------------------------------------------------------------------------------------
instead of this update
update [dbo].[tb_pivot_big]
set fld1 = CASE WHEN fld1 in('101' ,'102','103','104') then '*' else fld1 end
, fld2 = CASE WHEN fld2 in('101' ,'102','103','104') then '*' else fld2 end
, fld3 = CASE WHEN fld3 in('101' ,'102','103','104') then '*' else fld3 end
, fld4 = CASE WHEN fld4 in('101' ,'102','103','104') then '*' else fld4 end
, fld5 = CASE WHEN fld5 in('101' ,'102','103','104') then '*' else fld5 end
from [dbo].[tb_pivot_big]
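Instead of packing the values into @aaa, the grouped select can be used directly as a subquery inside each CASE. This is a rough sketch; the CAST is only there on the assumption that fld_index is numeric while fld1-fld5 hold character data such as '*':
UPDATE [dbo].[tb_pivot_big]
SET fld1 = CASE WHEN fld1 IN (SELECT CAST(fld_index AS varchar(10)) FROM table_index) THEN '*' ELSE fld1 END,
    fld2 = CASE WHEN fld2 IN (SELECT CAST(fld_index AS varchar(10)) FROM table_index) THEN '*' ELSE fld2 END,
    fld3 = CASE WHEN fld3 IN (SELECT CAST(fld_index AS varchar(10)) FROM table_index) THEN '*' ELSE fld3 END,
    fld4 = CASE WHEN fld4 IN (SELECT CAST(fld_index AS varchar(10)) FROM table_index) THEN '*' ELSE fld4 END,
    fld5 = CASE WHEN fld5 IN (SELECT CAST(fld_index AS varchar(10)) FROM table_index) THEN '*' ELSE fld5 END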
View 23 Replies
View Related
Apr 5, 2008
Hi Guys,
I have this table formatting problem: I have a table report (with rows expanding), say a sales report for items. In this report each item has 2 rows. I want to split the second row as shown below. So the results of the report should have the format below.
rownumber  ITEMNUMBER  COST            Sales           On Hand
1          A           CY4567          CY6780          CY567
2          A           LY4678  LM678   LY7999  LM789   LY500  LM10
3          B
4          B
5          C
6          C
In this example, for each item we show the current year, last year and last month cost, sales and on hand.
This is not a cross tab, and the number of items on the report varies. In this example it's 3 items, but next time you run it, it could be 100 items.
Any help appreciated
Sonny
View 4 Replies
View Related
Jan 21, 2003
Is there a way to measure page splits for a table during data insertion/updates (not based on the dbcc showcontig result)?
thanks...byyu
View 3 Replies
View Related
May 24, 2007
I've been searching the forums and Google for some help splitting up rows in a table, without any luck. I'm not quite sure what to even look for.
I have a table in MSSQL 2000 that looks as follows:
Code:
id custnum b1_email b2_email b1_sub b2_sub
------------------------------------------------------------------------
1 123456 b1@host1.com b2@host1.com 0 0
2 654321 b1@host2.com b2@host2.com 1 0
3 321654 b1@host3.com b2@host3.com 0 1
Now... I am hoping to create a view that splits these rows up so that only a single email address is on each row. I'd like to split it up as follows:
Code:
custnum email sub
----------------------------------
123456 b1@host1.com 0
123456 b2@host1.com 0
654321 b1@host2.com 1
654321 b2@host2.com 0
321654 b1@host3.com 0
321654 b2@host3.com 1
Any help would be great! I imagine some sort of join command can be constructed using a single table?
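A plain UNION ALL view would give that shape, no join required (a rough sketch; dbo.Customers stands in for the real table name):
CREATE VIEW dbo.vw_SingleEmail
AS
SELECT custnum, b1_email AS email, b1_sub AS sub FROM dbo.Customers
UNION ALL
SELECT custnum, b2_email AS email, b2_sub AS sub FROM dbo.Customers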
View 2 Replies
View Related
Apr 17, 2014
I have a string which is delimited by commas:
Green,Red,Blue,Orange,Black,White
I need to create a SQL statement to read the string, split it at the "," character, and insert it into individual rows. I also need to insert an ID (the same for all split values).
For instance:
INSERT INTO tbl_Temp (unique_id, ID, color) VALUES (1, 999, 'Green')
INSERT INTO tbl_Temp (unique_id, ID, color) VALUES (2, 999, 'Red')
INSERT INTO tbl_Temp (unique_id, ID, color) VALUES (3, 999, 'Blue')
etc...
I would like to be able to do this all in one statement so I don't have to break it out into individual SQL statements.
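A single-statement sketch on SQL Server 2005 or later (ROW_NUMBER only stands in for however unique_id is really generated):
DECLARE @list varchar(max), @ID int
SET @list = 'Green,Red,Blue,Orange,Black,White'
SET @ID = 999
DECLARE @x xml
SET @x = CAST('<c>' + REPLACE(@list, ',', '</c><c>') + '</c>' AS xml)
INSERT INTO tbl_Temp (unique_id, ID, color)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), @ID, n.c.value('.', 'varchar(50)')
FROM @x.nodes('/c') AS n(c)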
View 4 Replies
View Related
Jul 23, 2005
Let me see if I can explain my situation clearly. I have a table with the columns: answer_id, question_id, member_id, answer
- answer_id is the primary key for the table.
- question_id relates to another table with questions for a user. That table holds the question and the possible choices in a varchar field separated by a delimiter.
- member_id is self-explanatory.
- answer is a varchar field of all the choices the user selected, separated by a delimiter.
Here is my problem. I am trying to search all members that have answered, say, question_id = 2 where they selected 'brown' as one of their choices. I can do this if they selected ONLY that item, but not multiple items. The problem is this portion:
answer in (select valu from dbo.iter_intlist.....
I need this to be something like:
function_to_return_all_separated_answers(answer) in (select valu from dbo.iter_intlist
The current way, it is only returning members that have an answer 'Brown', not 'Brown, Blue', in their answer field. Make any sense? So, what I need to do is separate the list of answers and say:
select member_id from profile_answers where ANY ANSWER in function_to_split(answer) MATCHES ANY OF THESE (select valu from dbo.iter_intlist...
It seems I might have to join or something, I am just a little lost right now. Here is my proc.
ALTER PROCEDURE search_detailed_get_ids
@question_id as integer,
@answers as varchar(8000),
@member_ids ntext
AS
DECLARE @v as varchar(8000)
--get the delimited string of all possible answers
SET @v = (select bind_data from profiles_questions where question_id = @question_id)
--prepare it for the function only accepting 1 char
SET @v = replace(@v, '||', '|')
--gimme all members that match
select member_id from profiles_answers where question_id = @question_id
and answer in (select valu from dbo.iter_intlist_to_table(@v, '|') where listpos in
(select valu from dbo.iter_intlist_to_table(@answers, ',')))
and member_id in (select valu from dbo.iter_intlist_to_table(@member_ids, ','))
return
go
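On SQL Server 2005, where CROSS APPLY is available, the IN-based filter inside the proc could become a join against the split of each row's answer column. This is a rough sketch: it assumes dbo.iter_intlist_to_table returns listpos/valu as used above, and the '|' passed for the answer column should be whatever single-character delimiter that column really uses:
SELECT DISTINCT pa.member_id
FROM profiles_answers AS pa
CROSS APPLY dbo.iter_intlist_to_table(pa.answer, '|') AS a
WHERE pa.question_id = @question_id
  AND a.valu IN (SELECT b.valu
                 FROM dbo.iter_intlist_to_table(@v, '|') AS b
                 WHERE b.listpos IN (SELECT valu FROM dbo.iter_intlist_to_table(@answers, ',')))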
View 3 Replies
View Related
Apr 23, 2007
Hi everybody,
I want to load the conditions for a conditional split from a table, so that the business logic can be changed through a simple user interface.
How can I do this?
Best regards
John
View 5 Replies
View Related
Jul 22, 2015
So I have been trying to get my SQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has 2 columns: Test_Name and TestID. An example with values is below:
**TABLE_ONE**
Type Animal TestID
-----------------------------------------
Mammal Goat 1
Fish Cod 1
Bird Chicken 1
Reptile Snake 1
Bird Crow 2
Mammal Cow 2
Bird Ostrich 3
**Table_Two**
Test_name TestID
-------------------------
Test_1 1
Test_1 1
Test_1 1
Test_1 1
Test_2 2
Test_2 2
Test_3 3
In Table_One all types come under one column and the values of all types (Mammal, Fish, Bird, Reptile) come under another column (Animal). Table_One and Table_Two can be linked by TestID.
I am trying to create a table such as shown below:
Test_Name Bird Reptile Mammal Fish
-----------------------------------------------------------------
Test_1 Chicken Snake Goat Cod
Test_2 Crow Cow
Test_3 Ostrich
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form this final table, so the columns Bird, Reptile, Mammal and Fish each come from a different copy of Table_One.
For example:
Select
Test_Name AS 'Test_Name',
Table_Bird.Animal AS 'Birds',
Table_Mammal.Animal AS 'Mammal',
Table_Reptile.Animal AS 'Reptile',
Table_Fish.Animal AS 'Fish'
From Table_One
[Code] .....
The problem with this query is that it only works when all entries for Birds, Mammals, Reptiles and Fish have some value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I used OR instead of AND in the WHERE clause, but that didn't work either.
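A conditional-aggregation sketch of the same pivot, which still returns Test_2 and Test_3 because the animals are folded in with a LEFT JOIN rather than required by the WHERE clause (if a test ever had two animals of the same type, MAX would keep only one of them):
SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Bird,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM Table_Two AS t2
LEFT JOIN Table_One AS t1 ON t1.TestID = t2.TestID
GROUP BY t2.Test_Name;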
View 4 Replies
View Related
Oct 13, 2004
I am passing in a string of dates, delimited by a comma.
So
01/01/04, 02/01/04, 03/01/04
etc
I would like to enter each of these values into a table via an INSERT stored procedure.
Does anyone have any code for this?
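A rough sketch of such a procedure, walking the string with CHARINDEX; the table name dbo.DateList and its DateValue column are placeholders, and the CAST assumes the server's date settings accept mm/dd/yy strings:
CREATE PROCEDURE dbo.InsertDateList
    @dates varchar(8000)
AS
BEGIN
    DECLARE @pos int, @item varchar(20)
    WHILE LEN(@dates) > 0
    BEGIN
        -- find the next comma; if there is none, take the rest of the string
        SET @pos = CHARINDEX(',', @dates)
        IF @pos = 0
            SET @pos = LEN(@dates) + 1
        SET @item = LTRIM(RTRIM(LEFT(@dates, @pos - 1)))
        INSERT INTO dbo.DateList (DateValue) VALUES (CAST(@item AS datetime))
        SET @dates = SUBSTRING(@dates, @pos + 1, LEN(@dates))
    END
END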
View 1 Replies
View Related
Aug 27, 2002
Thanks for reading.
This is pretty long, hopefully it isn't rambling.
I'm building a system that imports data from several sources (Excel files, text files, Access databases, etc.) using DTS. The entire process revolves around MS SQL Server, by the way.
I figured I would create denormalized tables that mirror the Excel and flat files, for example, in structure, import data to those, clean up and remove duplicates there, then break those out into my normalized table structure later.
Now I've finished the importing part (though this is going to happen once a week) and I'm onto breaking up the denormalized tables.
I'm hesitating because I'm not sure I've made the best decisions in terms of process, etc.
I've decided to use cursors to loop over the denormalized tables and use batch insert statements to push data out to the appropriate tables.
Any comments? Suggestions? All is welcome.
I'm specifically interested in hearing back on the way I've set up the intermediate, denormalized tables and how I'm breaking them up using cursors (step 2 of the process below). Still, all comments are welcome. As are suggestions for further reading.
Thanks again...
simplified example
(my denormalized tables are 20 - 30 columns wide)
denormalized table:
===================
name, address, city, state, cellphone, homephone
normalized tables:
==================
tblPerson [PK_person, name, age, height, weight]
tblAddress [PK_address, FK_person, street, city, state, zip, addressType]
tblContact [PK_contact, FK_person, data, contactType]
I'm breaking up the denormalized tables like this (*UNTESTED*):
=================================================
-- one local variable for each column pulled from the wide table (sizes/types here are illustrative),
-- plus one for the new person ID
DECLARE @name varchar(100), @address varchar(100), @city varchar(50),
        @state varchar(20), @cellphone varchar(20), @homephone varchar(20),
        @personID int
DECLARE myCursor CURSOR
FAST_FORWARD FOR
SELECT name, address, city, state, cellphone, homephone
FROM _DNT_myWideTable
OPEN myCursor
-- grab the first row from the wide table
FETCH NEXT FROM myCursor
INTO @name, @address, @city, @state, @cellphone, @homephone
WHILE @@Fetch_Status = 0
BEGIN
-- create the person first and get the ID with @@IDENTITY
INSERT INTO tblPerson (name) VALUES (@name)
SET @personID = @@IDENTITY
-- use that ID to coordinate inserts across other tables
INSERT INTO tblAddress (FK_person, street, city, state, addressType)
VALUES(@personID, @address, @city, @state, 'HOME')
INSERT INTO tblContact (FK_person, data, contactType)
VALUES(@personID, @cellphone, 'CELLPHONE')
INSERT INTO tblContact (FK_person, data, contactType)
VALUES(@personID, @homephone, 'HOMEPHONE')
-- grab the next row from the wide table
FETCH NEXT FROM myCursor
INTO @name, @address, @city, @state, @cellphone, @homephone
END
CLOSE myCursor
DEALLOCATE myCursor
View 1 Replies
View Related
Sep 30, 2006
Hi all,
I have a transaction table that has a composite key made up of transaction id and product id. Where multiple products were purchased under the same transaction, the transaction id is repeated.
I would like to split the table randomly in a 70/30 ratio to create a training set and a testing set respectively, in such a way that a transaction under which multiple products were purchased is not split up (rows with the same transaction id should not end up in different sets).
Is it possible? If possible, what is the idea?
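One rough idea: draw a single random number per transaction id rather than per row, so all rows of a transaction land on the same side of the 70/30 line (table and column names are placeholders):
SELECT t.*,
       CASE WHEN r.bucket < 70 THEN 'TRAIN' ELSE 'TEST' END AS split_group
FROM dbo.Transactions AS t
JOIN (SELECT transaction_id,
             ABS(CHECKSUM(NEWID())) % 100 AS bucket
      FROM dbo.Transactions
      GROUP BY transaction_id) AS r
  ON r.transaction_id = t.transaction_id;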
It would be of great help.
Thanks.
Fakhrul
View 9 Replies
View Related
Aug 20, 2015
I have a string, and I want to split the values at every space into column values and insert them into a table.
1306453 0 0 0 0 0
col1     col2  col3  col4  col5  col6
1306453  0     0     0     0     0
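A rough sketch on SQL Server 2005 or later: turn the space-separated string into XML and read the six positions out as columns (it assumes exactly one space between values; the target table and column names are placeholders):
DECLARE @s varchar(200)
SET @s = '1306453 0 0 0 0 0'
DECLARE @x xml
SET @x = CAST('<v>' + REPLACE(@s, ' ', '</v><v>') + '</v>' AS xml)
INSERT INTO dbo.TargetTable (col1, col2, col3, col4, col5, col6)
SELECT @x.value('(/v)[1]', 'varchar(50)'),
       @x.value('(/v)[2]', 'varchar(50)'),
       @x.value('(/v)[3]', 'varchar(50)'),
       @x.value('(/v)[4]', 'varchar(50)'),
       @x.value('(/v)[5]', 'varchar(50)'),
       @x.value('(/v)[6]', 'varchar(50)')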
View 7 Replies
View Related
Aug 14, 2012
I have a table with a string value, where all values are separated by a space/blank. I now want to use SQL to split all the values and insert them into a different table; once I have all the values out of the old table, it will be deleted.
Old Table:
Code:
ID, StringValue
New Table:
Code:
ID, Value1, Value2
Do note: Value1 is INT, Value2 is nvarchar, hence Value2 can contain spaces... I just need to split on the FIRST space, then convert index[0] to int and store index[1] as it is.
I can split on all spaces and just select them all and concatenate them like so: SELECT t.val1 + ' ' + t.val2... that is, if I can't find the first space... I mean, the first 2-10 characters in the string can be an integer, but they don't have to be. Should I perhaps do it in code instead of SQL? Now I want to run a query that selects the StringValue from OldTable, splits the string on ' ' (a blank) and then inserts the parts into the new table.
Code:
SELECT CASE CHARINDEX(' ', OldTable.stringvalue, 1)
WHEN 0 THEN OldTable.stringvalue
ELSE SUBSTRING(OldTable.stringvalue, 1, CHARINDEX(' ', OldTable.stringvalue, 1) - 1)
END
AS FirstWord
FROM OldTable
I found an example using things like CHARINDEX... but the issue still remains, because the first word may be an integer, or it may not be... If it isn't, there is no "first value", and the whole string should be passed into "value2". How do I detect whether the very first token is an integer?
Code:
DECLARE @firstDigit int
IF ISNUMERIC(SUBSTRING(@postal, 2, 1)) = 1
    set @firstDigit = CAST(SUBSTRING(@postal, 2, 1) AS int)
ELSE
    set @firstDigit = -1
[code]....
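For the split itself, a rough sketch that only breaks on the first space and only treats the leading token as Value1 when it is all digits; otherwise Value1 stays NULL and the whole string goes to Value2 (it assumes ID is simply carried over from OldTable):
INSERT INTO NewTable (ID, Value1, Value2)
SELECT o.ID,
       CASE WHEN CHARINDEX(' ', o.StringValue) > 1
             AND LEFT(o.StringValue, NULLIF(CHARINDEX(' ', o.StringValue), 0) - 1) NOT LIKE '%[^0-9]%'
            THEN CAST(LEFT(o.StringValue, CHARINDEX(' ', o.StringValue) - 1) AS int)
       END,
       CASE WHEN CHARINDEX(' ', o.StringValue) > 1
             AND LEFT(o.StringValue, NULLIF(CHARINDEX(' ', o.StringValue), 0) - 1) NOT LIKE '%[^0-9]%'
            THEN SUBSTRING(o.StringValue, CHARINDEX(' ', o.StringValue) + 1, LEN(o.StringValue))
            ELSE o.StringValue
       END
FROM OldTable AS o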
View 2 Replies
View Related