Spreading Data: Choosing A Specific NDF?
Jan 12, 2007
Hi everyone
Primary platform is 2005 on 64-bit.
I've got a couple of questions related to partitioning tables.
-What criteria does the Database Engine follow when you have two NDF files assigned to one filegroup and that filegroup is part of a partition scheme?
What's more: could I force SQL Server to use one of them by default?
I mean, my first partition covers 20020101 through 20030101. When I add data for, say, March or June, could I decide that those months go to NDF1 rather than NDF2?
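For reference, this is roughly the setup in question (just a sketch; database, filegroup, and file names are made up). As far as I understand, rows placed on a filegroup are spread across its files by proportional fill, so there is no per-file control within a filegroup:

-- add a filegroup with two data files (names made up)
ALTER DATABASE MyDb ADD FILEGROUP FG2002
ALTER DATABASE MyDb ADD FILE
    (NAME = 'FG2002_NDF1', FILENAME = 'D:\Data\FG2002_1.ndf'),
    (NAME = 'FG2002_NDF2', FILENAME = 'E:\Data\FG2002_2.ndf')
TO FILEGROUP FG2002

-- map the 2002 partition to that filegroup
CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('20020101', '20030101')

CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear TO ([PRIMARY], FG2002, [PRIMARY])

-- within FG2002 the engine fills FG2002_NDF1 and FG2002_NDF2 proportionally;
-- rows cannot be pinned to a specific file, only to a filegroup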
Let me know if you need further details.
Thanks in advance for your time,
View 4 Replies
Oct 18, 2015
We have a single generic SSIS package that is used to import several hundred iSeries tables into SQL Server. I am not looking to rewrite the process, but I am looking for ways to improve performance.
I have tried RetainSameConnection, Maximum Insert Commit Size, table locking (TABLOCK), removed some large columns, played with the log file location and size, and now I am working to tweak DefaultBufferMaxRows.
To describe the data flow: there are six data flow tasks (DFTs) working at the same time. Each DFT has its own list of iSeries tables and columns and the corresponding generic SQL table names. Each DFT determines its list of tables based on the number of columns to import, so there is DFT30 (iSeries tables with 1-30 columns to import), DFT60 (iSeries tables with 31-60 columns to import), etc. The destination SQL tables are generically called Staging30, Staging60, etc. Each column in the generic Staging tables is varchar(100). Each DFT consists of an OLE DB Source and an OLE DB Destination.
The OLE DB Source uses a SQL Command from Variable to build a SELECT statement. The OLE DB Source uses a connection manager that uses an IBM iAccess IBMDA400 provider. The SQL Command ends up looking like this for DFT30. This specific example imports from the iSeries table TDACLR, which has only two columns, so it will be copied to the Staging30 table.
select TCREAS AS C1,TCDESC AS C2,0 AS C3,0 AS C4,0 AS C5,0 AS C6,0 AS C7,0 AS C8,0 AS C9,0 AS C10,0 AS C11,0 AS C12,0 AS C13,0 AS C14,0 AS C15,0 AS C16,0 AS C17,0 AS C18,0 AS C19,0 AS C20,0 AS C21,0 AS C22,0 AS C23,0 AS C24,0 AS C25,0 AS C26,0 AS C27,0 AS
C28,0 AS C29,0 AS C30,''TDACLR'' AS T0 from Store01.TDACLR
The OLE DB Source variable value looks like the following (not showing the full 30 columns):
select cast(0 AS varchar(100)) AS C1,cast(0 AS varchar(100)) AS C2,cast(0 AS varchar(100)) AS C3,cast(0 AS varchar(100)) AS C4,cast(0 AS varchar(100)) AS C5, ... cast(0 AS varchar(100)) AS C30.
The OLE DB Destination uses OpenRowSet Using FastLoad From Variable. The insert into Staging30 ends up looking like this.
insert bulk STAGE30([C1] varchar(100) ,[C2] varchar(100) ,[C3] varchar(100) ,[C4] varchar(100) ,[C5] varchar(100) , ... ,[C30] varchar(100) ,[T0] varchar(20)
Of course we then copy and transform the Staging30 data to the SQL table that equals T0.
But back to DefaultBufferMaxRows. Previously the DFTs had the default values of 10000 for DefaultBufferMaxRows and 10485760 for DefaultBufferSize. I added a SQL task to SUM the iSeries column sizes (TCREAS and TCDESC in this example) and set DefaultBufferMaxRows by dividing that SUM of the columns' max_length into 10485760. But I did not see a performance improvement. Do you think that redefining the columns as varchar(100) for the insert is significant? Should I SUM the actual columns (2, so 2x100) or SUM the full 30x100?
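For what it's worth, a back-of-the-envelope sketch of the second option (numbers taken from the post; my understanding is that the pipeline buffer is sized from the declared column metadata, i.e. the 30 varchar(100) columns plus T0, not from the two real source columns):

-- estimated bytes per pipeline row: 30 varchar(100) columns plus T0 varchar(20)
SELECT (30 * 100) + 20               AS EstimatedRowBytes,      -- 3020
       10485760 / ((30 * 100) + 20)  AS RowsPerDefaultBuffer    -- roughly 3472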
View 4 Replies
Apr 15, 2006
Hi ... I have a question on data types in SQL Server 2005 EE.
What are good data types for email, password, phone number, and ISBN?
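For illustration only, one commonly suggested mapping (a sketch; the lengths are guesses and the table/column names are made up):

CREATE TABLE Users (
    Email        varchar(254)  NOT NULL,  -- email addresses are ASCII; 254 covers the practical limit
    PasswordHash varbinary(64) NOT NULL,  -- store a salted hash, not the plain-text password
    PhoneNumber  varchar(20)   NULL,      -- text, so leading zeros and '+' survive
    ISBN         varchar(13)   NULL       -- ISBN-13 is 13 digits; ISBN-10 can end in 'X'
)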
Thanks!
View 3 Replies
Sep 18, 2004
Hello, I am really dripping wet behind the ears on this and would really appreciate some help. I am setting up my first SQL table and am lost trying to choose data types for my fields. Basically, all I am doing is setting up a contact form. It is going to ask for phone number, name, address, city, state, zip, etc. I will also have two fields which, if I were using an Access db, would be "memo" with, say, 500 characters. So in researching SQL data types, I came across the following:
char
Fixed-length non-Unicode character data with a maximum length of 8,000 characters.
varchar
Variable-length non-Unicode data with a maximum of 8,000 characters.
text
Variable-length non-Unicode data with a maximum length of 2^31 - 1 (2,147,483,647) characters.
nchar
Fixed-length Unicode data with a maximum length of 4,000 characters.
Can someone shed some light on what I need for simple fields like street, name, city, and more importantly, description? I will also have a "premium" field which should be a "yes" or "no". I am thinking a data type of bit, which is set to 1 or 0? Thanks for any help, I appreciate it so much.
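For illustration, a rough sketch of what such a table might look like (column names and sizes are guesses, adjust as needed):

CREATE TABLE Contacts (
    ContactID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ContactName nvarchar(100) NOT NULL,
    Phone       varchar(20)   NULL,
    Street      nvarchar(100) NULL,
    City        nvarchar(50)  NULL,
    State       char(2)       NULL,
    Zip         varchar(10)   NULL,
    Description nvarchar(500) NULL,        -- the memo-style field; use text/ntext if 500 is not enough
    Premium     bit NOT NULL DEFAULT 0     -- 1 = yes, 0 = no
)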
Tom
View 1 Replies
Jul 19, 2007
I have a select statement that returns data on a construction project.
I have a start date, end date, and forecasted cost for each task in the project. I need to create a table that spreads the dollars in a linear fashion broken down by fiscal period.
I have:
task_id, start_date, end_date, cost
1, 9/15/2008, 12/15/2008, 3000
2, 7/1/2008, 12/15/2008, 550
I need
task_id, fiscal_period, cost
1, 200803, 500
1, 200804, 1000
1, 200805, 1000
1, 200806, 500
2, 200801, 100
2, 200802, 100
2, 200803, 100
2, 200804, 100
2, 200805, 100
2, 200806, 50
I can do the math to properly calculate the dollar amounts; what I am having trouble with is creating the statement that will process each row of my select statement and insert multiple rows into the new table.
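For illustration, a minimal sketch of one way to explode each task across the fiscal periods it overlaps, weighting the cost by days (untested; it assumes a FiscalPeriods calendar table with PeriodID, PeriodStart, and PeriodEnd columns, a tasks source, and a TaskCostByPeriod target - all names made up):

INSERT INTO TaskCostByPeriod (task_id, fiscal_period, cost)
SELECT t.task_id,
       fp.PeriodID,
       t.cost
       * (DATEDIFF(day,
            CASE WHEN t.start_date > fp.PeriodStart THEN t.start_date ELSE fp.PeriodStart END,
            CASE WHEN t.end_date   < fp.PeriodEnd   THEN t.end_date   ELSE fp.PeriodEnd   END) + 1)
       / (DATEDIFF(day, t.start_date, t.end_date) + 1.0)   -- fraction of the task's days falling in this period
FROM tasks t
JOIN FiscalPeriods fp
  ON fp.PeriodStart <= t.end_date
 AND fp.PeriodEnd   >= t.start_date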
View 4 Replies
Feb 28, 2015
I have a table which can be downloaded from the link below. The table contains the property Market Rent and the period the new rent applies to. I need to generate a report with parameters (year and month) so that when the user inputs a year and month, the associated market rent amount for that month is listed.
For example, if the 1st market rent update was done in June 2014 ($300) and the 2nd in Dec 2014 ($350), the property market rent from June to Nov should be $300 and from Dec 2014 till the next rent update $350. So if the user inputs year 2014 and month August, the amount is $300, and if the user enters year 2015 and month March, the amount is $350.
[URL] ....
Below is the table with sample data
DECLARE @table TABLE (
    PropCode INT,
    PropStartDate DATE,
    PropEndDate char(10),
    PropRentStartDate DATE,
    PropRentEndDate DATE,
    MarketRent decimal(10,2)   -- decimal rather than INT so values like 289.20 are not truncated
)
INSERT INTO @table (PropCode, PropStartDate, PropEndDate, PropRentStartDate, PropRentEndDate, MarketRent) VALUES
 (2718, '2013-01-30', 'NULL', '2012-11-29', '2013-07-21', 289.20)
,(2718, '2013-01-30', 'NULL', '2013-07-22', '2013-11-24', 289.20)
,(2718, '2013-01-30', 'NULL', '2013-11-25', '2014-06-14', 289.20)
,(2718, '2013-01-30', 'NULL', '2014-06-15', '2014-11-30', 299.18)
,(2718, '2013-01-30', 'NULL', '2014-12-01', '2015-01-02', 299.18)
,(2718, '2013-01-30', 'NULL', '2015-01-03', '2050-01-01', 310.00)
,(3901, '2014-05-27', 'NULL', '2014-06-09', '2014-11-30', 400.00)
,(3901, '2014-05-27', 'NULL', '2014-12-01', '2050-01-01', 400.00)
,(3960, '2014-10-31', 'NULL', '2014-11-05', '2016-11-05', 470.00)
Select * from @table
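A minimal sketch of the lookup I have in mind, assuming the report parameters are a year and a month and checking which rent period the first of that month falls into (untested; DATEFROMPARTS needs SQL Server 2012 or later):

DECLARE @Year int, @Month int
SELECT @Year = 2014, @Month = 8

SELECT PropCode, MarketRent
FROM @table
WHERE DATEFROMPARTS(@Year, @Month, 1) BETWEEN PropRentStartDate AND PropRentEndDate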
View 3 Replies
Dec 20, 2006
Hello. Let's say that there is a table with N rows. Now, I want to display the table's data on a web page. One way is to select the whole table and add each row's data to the web page (something like SELECT * FROM TABLE1). Going this way will create a huge page. I want to spread the results over multiple pages - exactly as this forum spreads its messages over multiple pages.
For this purpose, I need to query Y rows each time. For example, if my table has 20 rows, and say that I want each page to display 5 rows, then I need to query 5 rows each time: the first 5 rows for the first page, the next 5 rows for the second page, and so on...
Is there any way to achieve it using an SQL query?
The simplest way is to select the whole table and manually filter the results, but this way will become slow as the table grows with data... and I don't want to select rows which I won't display anyway.
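For illustration, one possible approach if you are on SQL Server 2005 or later (a sketch; table and key column names are made up - on SQL Server 2000 a nested TOP query or a temp table with an IDENTITY column is the usual workaround):

DECLARE @PageSize int, @PageNumber int
SELECT @PageSize = 5, @PageNumber = 2

SELECT *
FROM (SELECT ROW_NUMBER() OVER (ORDER BY Id) AS RowNum, *
      FROM Table1) AS numbered
WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                 AND @PageNumber * @PageSize
ORDER BY RowNum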
Any suggestions?
Thanks.
View 5 Replies
Apr 4, 2007
Hello - I am using report builder against models.
Which attributes or properties determine what is shown when you click/drill into the details? I'm using the AdventureWorks database as my guide and expected the (default, identifying, or aggregate) attributes to control it. Can you summarize how the drill details are determined? My test from the AdventureWorks sample:
Pull Territory Name and #customers into the report
Run it and drill into the details on one of the #customers
It returns Acct#, CustType, CustName, #SalesOrders, Sum of a bunch of sales order fields and # customer addresses.
The first three are the default attributes for a customer and the last sets look to be the default aggregates for the other relations to customer - Sales Order & Address.
Is it safe to say that it pulls in the default attributes of the immediate child and then the aggregates of its children? Does it go recursively?
Thanks in advance,
Toni
View 1 Replies
Feb 1, 2007
Hi, I am sending a message to an invalid target name. The message eventually gets back to the initiator as an error-type message. How can I determine the exact cause of the error - and determine that the target service name is invalid? I am using the ServiceBrokerInterface and the Message does not tell much, it seems. Also, in the sys.conversation_endpoints table, the record associated with the message only says 'Error', but gives no other indicator.
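For illustration, a sketch of pulling the detail out of the error message body on the initiator side (untested, queue name made up; my understanding is that error messages arrive with the message type http://schemas.microsoft.com/SQL/ServiceBroker/Error and an XML body containing Code and Description elements):

DECLARE @msg_type sysname, @body varbinary(max)

RECEIVE TOP (1) @msg_type = message_type_name, @body = message_body
FROM InitiatorQueue

IF @msg_type = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
    SELECT CAST(@body AS xml).value('declare namespace e="http://schemas.microsoft.com/SQL/ServiceBroker/Error";
                                     (/e:Error/e:Code)[1]', 'int') AS ErrorCode,
           CAST(@body AS xml).value('declare namespace e="http://schemas.microsoft.com/SQL/ServiceBroker/Error";
                                     (/e:Error/e:Description)[1]', 'nvarchar(4000)') AS ErrorDescription

-- sys.transmission_queue.transmission_status can also show why an outgoing message could not be delivered
SELECT transmission_status, * FROM sys.transmission_queue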
View 3 Replies
Jun 5, 2007
Hi, people! I'm Brazilian, so please forgive my poor English...
I have a stored procedure in SQL Server 2000 which returns the following data:
REGION | TYPE | VALUE_TOT
Reg 1 | TP 1 | 10
Reg 1 | TP 2 | 15
Reg 1 | TP 3 | 20
Reg 2 | TP 1 | 8
Reg 2 | TP 2 | 27
Reg 2 | TP 3 | 11
Reg 3 | TP 1 | 3
Reg 3 | TP 2 | 6
Reg 3 | TP 3 | 50
In the Reporting Services I make a grouping for REGION AND TYPE, and return me this:
REGION TYPE VALUE_TOT
Reg 1
TP 1 10
TP 2 15
TP 3 20
Reg 2
TP 1 8
TP 2 27
TP 3 11
I need to place a new column on the right, with a value only on the line where TYPE = TP 2. The value is: (VALUE_TOT of TP 2 * 100) / VALUE_TOT of TP 1.
It would be something like this:
REGION TYPE VALUE_TOT CALC
Reg 1
TP 1 10
TP 2 15 150
TP 3 20
Reg 2
TP 1 8
TP 2 27 337,5
TP 3 11
This is a detail line; how would I make this calculation? How would I capture the values of TP 1 and TP 2? By row number? Does somebody have an idea?
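One possibility, as a sketch only, is to compute the extra column in SQL before the data reaches the report, by joining each region's rows back to its TP 1 row (it assumes the procedure's output has been captured in a table or derived table called Results - the name is made up):

SELECT r.REGION,
       r.TYPE,
       r.VALUE_TOT,
       CASE WHEN r.TYPE = 'TP 2'
            THEN (r.VALUE_TOT * 100.0) / tp1.VALUE_TOT    -- e.g. 15 * 100 / 10 = 150
       END AS CALC
FROM Results r
LEFT JOIN Results tp1
       ON tp1.REGION = r.REGION
      AND tp1.TYPE   = 'TP 1'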
Thanks! The MSDN forum in my language is small, it has only 200 posts...
View 5 Replies
Mar 19, 2007
I'm using the sample on custom authentication from MSDN, but whenever I try to log in I get this message. What URI are they complaining about? Has it anything to do with the custom cookie handling? I can't for the world understand what's wrong.
View 11 Replies
May 23, 2006
Hi,
A colleague and I have just found a slightly strange situation that we don't understand.
We had (effectively) the following query:
select cast(1 as decimal(38,10))
union all
select cast(1 as decimal(38,4))
And the result contained 2 rows, each with a scale of 4. This surprised us; we expected that the metadata of the result would be determined by the topmost query.
So we reversed them and tried this:
select cast(1 as decimal(38,4))
union all
select cast(1 as decimal(38,10))
and got exactly the same result. 2 rows with a scale of 4.
We can't understand why the scale always gets determined to be 4 regardless of the order of the queries.
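For reference, the documented precision/scale rules for UNION appear to account for this (worth double-checking against Books Online - this is my reading of them, not a definitive answer):

-- for a UNION: result scale = max(s1, s2), result precision = max(s1, s2) + max(p1 - s1, p2 - s2)
-- here: scale = max(10, 4) = 10, precision = 10 + max(38 - 10, 38 - 4) = 10 + 34 = 44
-- 44 exceeds the absolute maximum of 38, so precision is capped at 38 and the scale is
-- reduced by the same amount to protect the integer part: 10 - (44 - 38) = 4
SELECT CAST(1 AS decimal(38,10))
UNION ALL
SELECT CAST(1 AS decimal(38,4))
-- either order yields decimal(38,4)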
Any explanation would be much appreciated!
Thanks
Jamie
View 4 Replies
Mar 27, 2014
In Profiler a StmtCompleted Event Class has identified a query to be:
SELECT TOP 1 * FROM [WINSVR2008R2].[001].DBO.[OECTLFIL_SQL] WHERE ( ( OE_CTL_KEY_1 = @P1 ) ) order by OE_CTL_KEY_1 asc
Is there any way to determine the value of @P1? If so which Event class and Column should I examine?
View 5 Replies
Feb 21, 2006
Hi,
log backups are done every 5 min,
so the SQL Server error log is full of entries like
"Log backed up: Database: Prices, creation date(time): ...."
Could logging of 'Log backed up' entries for the db Prices be disabled?
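If I remember right, trace flag 3226 suppresses successful-backup entries in the error log (I am not certain which versions support it, so treat this as a suggestion to verify):

-- enable for the current instance until restart
DBCC TRACEON (3226, -1)
-- or add -T3226 to the SQL Server startup parameters to make it permanent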
Thanks
Alex
View 4 Replies
Jun 12, 2002
Apologies for the way in which I describe the tables and data, I know I'm not using a very proper way to get my point across:
Table A: "tblJobs" Contains the following:
--------------------------------------------
COLUMNS:
1. JobPK (char(35))
2. LocationName (varchar(50))
DATA (csv):
6643C9C9-7618-472F-9859844AA6C0F47B, Jonesport ME
08563708-3830-4507-B3154E9C4D49C6F2, Garden City NY
Table B: "tblJobDates" contains the following data, related to the two rows above):
--------------------------------------------
COLUMNS:
1. JobPK (char(35))
2. DateData (datetime)
3. CRD (datetime, "Created Date" the date and time that the date was entered)
DATA (csv):
6643C9C9-7618-472F-9859844AA6C0F47B, 6/8/2002, 6/10/2002 12:44:58 PM
6643C9C9-7618-472F-9859844AA6C0F47B, 6/17/2002, 4/22/2002 2:07:31 PM
08563708-3830-4507-B3154E9C4D49C6F2, 6/12/2002, 6/7/2002 4:05:06 PM
08563708-3830-4507-B3154E9C4D49C6F2, 6/13/2002, 6/12/2002 11:38:22 AM
tblJobDates serves two purposes: to give us the most recently entered due date for a job, and to serve as a "repository" to track changes to the due date.
Report C: The report I want to generate does NOT provide historical information... it only serves to show the CURRENT due date for each job in the tblJobs table:
--------------------------------------------
COLUMNS:
LocationName
Due Date (alias of DateData)
OUTPUT (csv):
Jonesport ME, 6/8/2002
Garden City NY, 6/13/2002
Note that for Jonesport, an initial due date of 6/17/2002 was entered (based on the CRD). Then someone changed it so that the job was due EARLIER.
Note that for Garden City, an initial due date of 6/12/2002 was entered (based again on the CRD). Then someone changed it so that the job was due LATER.
The "most recently entered due date" is what should be reflected in my report -- just as it does above ("C")
Other Notes:
-- There are other columns of information from both tables that I would like to return, but above is the most basic form of my request. Most notably, we would need to return the JobPK in report (C).
-- A job should only appear ONCE in report (C), with its "current" due date, regardless of the other due dates that may have been entered for that job.
-- If a job has no due date, it should not appear on the report.
-- Although not shown here, each row in (B) DOES have a unique identifier (DatePK) as well... if that helps in your solution.
-- Note that the job that is "due first" appears at the top of report (C). This allows a person looking at the report to quickly determine which job "gets priority" -- the one on top!
Okay gurus -- how should the query look that would generate the desired output in Report C?
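For illustration, a minimal sketch of one way it could look, treating the row with the latest CRD per job as the "current" due date (untested, but it only uses the tables and columns described above):

SELECT j.JobPK,
       j.LocationName,
       d.DateData AS DueDate
FROM tblJobs j
JOIN tblJobDates d
  ON d.JobPK = j.JobPK
WHERE d.CRD = (SELECT MAX(d2.CRD)          -- the most recently entered due date wins
               FROM tblJobDates d2
               WHERE d2.JobPK = j.JobPK)
ORDER BY d.DateData                        -- the job due first appears at the top

-- jobs with no rows in tblJobDates drop out automatically because of the inner join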
THANKS IN ADVANCE if you even can point me in the right direction!!
View 1 Replies
Jul 20, 2005
I need to decide between Standard and Enterprise Edition (cost is a criteria - but it's secondary to performance - and I am not paying for it myself).
The server spec under consideration: Dual Xeon, 1GB RAM, 36GB - RAID 1 (Dell PowerEdge 1850).
Application: Windows 2003 Std Server, ASP.NET, MS SQL Server 2000 based data driven web application. Approximately 25 simultaneous clients. Peak activity would probably be 50 transactions/activities per second (2 per second per client). I expect the database size to grow up to 4GB in 1 year.
The application would use only basic OLAP features (if at all)... so feature set wise I believe that Standard Edition is good enough. What I am concerned about is when MS documentation says that Standard Edition is for "organizations that do not require the advanced scalability, availability, performance, or analysis features of the SQL Server 2000 Enterprise Edition".
Is there a difference in performance between Std and Ent editions? In terms of the number of transactions per second that can be serviced? What other criteria should I be aware of before deciding to go one way or the other?
Any ideas?
View 4 Replies
Jul 14, 2006
There must be a way to do this simply. We're running SQL Server 2000. I'm looking for some generic SQL statement that I can apply.
If I have a table with a person column and a location column and multiple records for the same person/location combination, how do I select each person with the location they most frequently visited? Say George visits Mexico 5 times, the Bahamas twice, and Costa Rica once. I would have 8 records in my table for George. The data looks something like this:
George/Mexico
George/Mexico
George/Mexico
George/Mexico
George/Mexico
George/Bahamas
George/Bahamas
George/Costa Rica
Ben/Brazil
Ben/Brazil
Ben/Peru
The results would be:
George/Mexico
Ben/Brazil
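For illustration, a sketch that should work on SQL Server 2000 (table and column names are made up; note that a tie for the top count would return more than one row per person):

-- count visits per person/location into a work table
SELECT Person, Location, COUNT(*) AS Visits
INTO #Counts
FROM Trips
GROUP BY Person, Location

-- keep only each person's most-visited location(s)
SELECT c.Person, c.Location
FROM #Counts c
WHERE c.Visits = (SELECT MAX(c2.Visits)
                  FROM #Counts c2
                  WHERE c2.Person = c.Person)
ORDER BY c.Person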
Thanks!
Myles
View 4 Replies
Aug 18, 2006
Please help me out:
I have some records in a SqlDataSource and want to show them column-wise. Right now I do it with a DataList because it's easy, but other options are open.
Every item/record should have a radio button (in a group, so that you can only choose one from all). People advised me to do this with an HTML radio button inside the template.
After the user has selected an item and chooses the next button, I need to know which item the user has chosen.
Furthermore, when the user steps back, the same radio button should already be selected.
Please help, this is bothering me for a while,
best regards from The Netherlands,
Gert
View 1 Replies
Jun 12, 2007
My company has a website that connects to a sql server (on a different box). I am trying to convince them to get sql server 2005. However, I do not know if SQL Server 2005 Workgroup edition is okay for our needs. Can someone please tell me if it is.
Basically, our setup is the following:
The SQL Server will only have one/two clients - the web server
View 7 Replies
Jul 26, 2007
I have to store some data on a remote server (MS SQL Server 2000). The scenario is like this:
1. The web application runs on a local machine. Users (who enter the data) use it through the LAN.
2. The input should be stored in the remote server if the remote connection is OK; otherwise it should be saved in the local server's database (MS SQL 2000).
3. In the application's web.config there is a connection string pointing to the remote server and another one (the alternate) pointing to the local server's database.
In a scenario like this, I first test the remote connection; if it is not OK then I initialize the local server's connection, like this:
private MyConnection()
{
    try
    {
        // try the remote server first
        connectionSql = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnForRemote"].ToString());
        connectionSql.Open();
    }
    catch (Exception ex)
    {
        // fall back to the local database
        connectionSql = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnForLocal"].ToString());
    }
    finally
    {
        connectionSql.Close();
    }
    connectionSql2 = new SqlConnection(ConfigurationManager.ConnectionStrings["Temp"].ToString());
}
My problem is that when the remote connection is lost it takes almost 1 minute to store to the local database. How can I make it more time efficient? Thanks....
View 5 Replies
Jun 8, 2002
Hello,
I have a table with some data in it.
What I want to do is to create a query that randomly returns one of the records of the table. Can this be done?
If this is not possible from SQL Server, I have thought of an alternative way. This is: I want to return all rows of the table with SELECT *, but I want the select to return in the first column an autoincrement number for each row, without the need to add an autoincrement field to the table. E.g.
Table
------
Banana
Tomatoe
Aple
...
...
Orange
Result from select
------------------
1 Banana
2 Tomatoe
3 Aple
. ....
. ....
23 Orange
Can this be done?
At least this way
1) I can travel to the end of the results (from ASP),
2) read the ID of the last row
3) Create a random integer number from 1 to last ID,
4) and finally select the appropriate random row from that integer.
Can anybody help me please?
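For illustration, two sketches (table and column names are made up). The first returns one random row directly; the second numbers the rows without altering the table, along the lines described above:

-- option 1: pick one random row directly
SELECT TOP 1 *
FROM Products
ORDER BY NEWID()

-- option 2: number the rows on the fly (no autoincrement column needed in the table itself)
SELECT IDENTITY(int, 1, 1) AS RowId, Name
INTO #Numbered
FROM Products

DECLARE @pick int
SELECT @pick = CAST(RAND() * COUNT(*) AS int) + 1 FROM #Numbered

SELECT RowId, Name FROM #Numbered WHERE RowId = @pick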
Thanks for any help in advance!
Yours, sincerely
Efthymios Kalyviotis
ekalyviotis@comerclub.gr
View 1 Replies
Jun 4, 2006
Greetings!
I am purchasing a new/first server and could use some help with the details.
I am purchasing the server with the intent of managing a large database that will be quite extensive and requires a good amount of processing power. I have decided to go with Windows Server 2003 and SQL Server 2000 as the database. Within the next year I hope to have this database directly feeding a website that I could possibly be hosting as well, plus 2-3 offsite employees logging into the system remotely.
I would say my biggest question is whether to choose the RAID 1 configuration or RAID 5. I want the hard drives to mirror each other. I was thinking of going with three hard drives, but I'm not really sure if I would even need that setup. With that, I will just show my current system:
Dell poweredge 1800
3.0 ghz xeon
2 gb memory
sata 1 raid
cerc 6-Channel sata raid controller
160 gb hd x 2
onboard NIC network adapter
I'm going price-savvy on this one, so no UPS, redundant power supplies, or tape backup. Although I am open to any suggestions.
Definitely appreciate any help with this, as I have been hard pressed to find some quality reseller help. They just want to throw the biggest and baddest thing at me.
Thanks!
-Shawn
View 4 Replies
Jan 20, 2008
Hi All,
I would like to know the experts views on the following I have listed below.
1. Is there any significant performance gain from choosing the native SQL Server driver rather than OLE DB, for example? I know there are a lot of specific features in the native SQL driver, but I am thinking in terms of performance.
2. Why not develop for a generic database rather than a specific database?
3. Does more generic mean less work when migrating to a different database?
Appreciate your valuable thoughts and any recommendations.
Cheers,
Amal
View 1 Replies
Dec 12, 2007
I have an SQL as follows
UPDATE TB
SET [Deleted] = 1
WHERE TB.[QuestionId] = @QuestionId
I have an index in this table as follows
CREATE NONCLUSTERED INDEX [IX_AssessedAnswers1] ON [TB]
(
[Id] ASC,
[QuestionId] ASC,
[Deleted] ASC
)WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]
Will this index be considered by the query optimiser to locate the records to lock? If I created another index with only the QuestionId field, would it boost performance? And how does the optimiser actually choose the right index for an update?
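For comparison, a sketch of the kind of index I would test here (name made up): because the existing index leads on [Id], a WHERE clause on QuestionId alone cannot seek on it, whereas an index that leads on QuestionId could.

CREATE NONCLUSTERED INDEX [IX_TB_QuestionId] ON [TB]
(
    [QuestionId] ASC
)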
View 2 Replies
Oct 13, 2004
hi,
I need to choose a database based on the following criteria (using .NET app):
1) a light but fully functional database, preferably with support for stored procedures and constraints; fewer than 8000 transactions a day.
2) portable or the database can be export/import very easily
3) reliable and stable
4) least maintenance
I have two DBs in mind: Access and MSDE.
Does anyone have hands-on experience with the above two? Or any other, better suggestions?
Any advice is appreciated.
thanks,
bryan
View 1 Replies
Apr 28, 2008
Hi all
I'm a newbie in SQL Server, and please excuse me for this silly question. Could anyone tell me when I should use which of the following types:
Decimal
Float
Real
I'm mixed up!!! All of them can hold fractional values, BUT what's the difference? Some advice please!
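A quick illustration of the main difference, as a sketch: decimal is an exact fixed-precision type, while float and real are approximate binary floating-point types, so small rounding artifacts can show up in the latter two.

DECLARE @d decimal(10, 2), @f float, @r real
SELECT @d = 0.1, @f = 0.1, @r = 0.1

-- the decimal result stays exact; float and real may show binary rounding noise
SELECT @d * 3 AS decimal_result,
       @f * 3 AS float_result,
       @r * 3 AS real_result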
Thanks in advance.
Kind Regards.
View 6 Replies
Jul 20, 2005
Hello group: I've done a lot of reading on this subject and have found that many people have many different opinions on it. My question centers mainly around using a lookup table to enable users to select from a pre-defined list of values.
I have developed a practice myself of avoiding AutoNumber type data fields for primary keys where the primary key will be related to a child table. Nevertheless, what do most users do with lookup tables? My thoughts are to create a small key value for each value in the lookup table. For example:
I might have a Carriers table which shows a list of carriers that I might ship an order by. One of the entries may be 'Air Freight - Overnight', or 'Air Freight - 2nd Day Air'. I've seen a few examples where the primary key field for each entry like these would be autonumber, or at least a numeric value. What I like to do is create my own key, like for 'Air Freight - Overnight' I might use 'AFO' for the key, and for 'Air Freight - 2nd Day Air' I might use 'AF2'. Any thoughts on this? Mine are that even though the users may never see this value, I, as the developer, will see it, and I tend to prefer a key value based on real data that means something other than an auto-incremented number. In referencing the well-known Northwind.mdb database, I noticed their Categories table used a number field value, like 1, 2, 3... etc., but their Customers table used values like 'ALFKI' to represent their key values.
What are some other thoughts out there? I'm working with Access currently, but this project is about to move to SQL Server.
James
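For illustration, the kind of lookup table being described, as a sketch:

CREATE TABLE Carriers (
    CarrierCode char(3)     NOT NULL PRIMARY KEY,   -- 'AFO', 'AF2', ... meaningful, developer-chosen keys
    CarrierName varchar(50) NOT NULL
)

INSERT INTO Carriers (CarrierCode, CarrierName) VALUES ('AFO', 'Air Freight - Overnight')
INSERT INTO Carriers (CarrierCode, CarrierName) VALUES ('AF2', 'Air Freight - 2nd Day Air')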
View 3 Replies
Dec 16, 2007
Lets assume database A is production, B is copy. SQL Server 2005 sp2, SQL CE 3.5
Database A has a variety of transactions against it 24x7
Database B (the copy) is for reporting and as a source of merge replication for SQL CE instances
Merge replication and reporting are used 24x7 as well
I have the following requirements:
Maintain an up to date copy of the production database (need not be up to the minute, could be hourly, even daily update)
Database B is read-only. The merge replication is NOT bi-directional.
Here is the caveat (which I think prohibits using some solutions to this problem):
The production application accomplishes much of its functionality with in-memory copies of records. I have no control over the production application. When it works against the database, it sort of does a 'withdrawal-deposit' scenario (to the best of my knowledge it's not using SQL Server transactions). So, for every record it works with, a copy is made out of the database, changes are made in memory, a delete of the database record is done, and then the record is re-inserted.
With this kind of behavior in db A, I'm not sure what it would do to log-shipping or transactional replication. I do know that I want to minimize the changes required at the SQL CE instances to keep the sync operation to a minimal cost.
Any suggestions?
View 1 Replies
Apr 30, 2014
I am choosing a primary key for the database which I am designing.
I have a few tables which contain 5-15 fields; of these, 3-9 columns combined form the uniqueness of the row.
All are unrelated tables. Three parent tables connect to 20 unrelated child tables.
I believe it would not be a wise choice to choose 3 to 9 fields for the primary key. But if I use an auto-increment as the key, will it be of any use, as it might never be used to fetch rows? Then why do I still have to go with that?
Or is it OK to create a primary key of up to 5 attributes?
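For illustration, one common compromise, sketched with made-up names: a small surrogate key for joins and foreign keys, plus a unique constraint that still enforces the natural multi-column business key.

CREATE TABLE dbo.SomeEntity (
    SomeEntityID int IDENTITY(1,1) NOT NULL PRIMARY KEY,   -- surrogate key used for joins
    NaturalCol1  varchar(20) NOT NULL,
    NaturalCol2  varchar(20) NOT NULL,
    NaturalCol3  varchar(20) NOT NULL,
    CONSTRAINT UQ_SomeEntity_Natural UNIQUE (NaturalCol1, NaturalCol2, NaturalCol3)   -- business key stays enforced
)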
View 9 Replies
Jul 20, 2005
Hi to all,
I have to choose a DBMS and a database architecture for an eBay-like website about to be launched. The company wants to use a web hosting service and not host the database on dedicated servers at the office. The database will contain web-only information and lots of back end information that is not really needed to be stored on the web host.
I'm wondering how to design that part. Should I store all information on the web host only? Mirror that DB every evening on some local DB server to be able to use the data without eating up lots of bandwidth? Separate the database in 2 parts? How to sync and assure integrity then? Having a local DB will also mean the company will have to pay a licence for the DBMS...
What DBMS should I pick considering that the database will have to hold at least 1 million products for sale (eBay like) and all the information that goes with it? I thought any DBMS weaker than SQL Server or Sybase or Oracle will not be enough. What do you think?
Thanks a lot, hope I have made myself clear enough.
P.S. I would really like to get lots of different points of view. I think I'll use Sybase after all, so I wonder if that's a good choice, and I still want to know your thoughts about the 2 or 1 DB design (separate Web & Billing information for example, or leave all the info in the hosted database, what techniques to use to keep the integrity and to have the latest information in-house)... Thanks a lot
View 2 Replies
Nov 30, 2007
Hi--
I am a newbie to datamining, but have nearly a decade of solid database experience with the last 6 years in SQL Server 2000. We are moving our accounting system to SQL Server 2005 and I have been asked to explore the possibilities of mining an inventory table. I'd like to get some opinions prior to spending too much time potentially barking up the wrong tree!
We have an inventory table with approximately 10 million serialized records. Each row contains the serial number of the individual unit and its manufacturer/model designation. We have no control over the assigning of the serial numbers as they come from multiple manufacturers and some of the manufacturers correlate serial numbers to model and some don't.
My thought was to use a cluster model to try to predict the model of a new serial number as it is entered into the database. Is this thought feasible? Is the mining model choice appropriate? If pointed in the right direction, I'm sure that I can run with this.
Thanks in advance-- Jim
View 3 Replies
Jan 15, 2007
Hi
I have a query:
SELECT Dur1.rootId
FROM DurableEventTab Dur1
WHERE (Dur1.dev_ReferenceClusterRoot = 'iyrwd.52' )
AND (Dur1.dev_Action = 'Order:Ordered')
AND (Dur1.dev_Active = 1) AND (Dur1.dev_PurgeState = 0)
AND (Dur1.dev_PartitionNumber = 0)
This table has a primary key, aribapk11, and indexes on dev_ReferenceClusterRoot, dev_Action, and dev_PurgeState.
Now when I fire this query, the query execution plan actually does a Clustered Index Scan on the PK aribaPK11. What I was expecting was an index seek on the key defined on dev_ReferenceClusterRoot. Please note that an index seek is the behaviour in SQL Server 2000.
Any idea what is going wrong ?
Clustered Index Scan(OBJECT:([typhoon1902].[dbo].[DurableEventTab].[AribaPK7] AS [Dur1]), WHERE:([typhoon1902].[dbo].[DurableEventTab].[dev_Active] as [Dur1].[dev_Active]=(1.) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PurgeState] as [Dur1].[dev_PurgeState]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PartitionNumber] as [Dur1].[dev_PartitionNumber]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_ReferenceClusterRoot] as [Dur1].[dev_ReferenceClusterRoot]='iyrwd.52' AND [typhoon1902].[dbo].[DurableEventTab].[dev_Action] as [Dur1].[dev_Action]=N'Order:Ordered')) 0 0 Clustered Index Scan Clustered Index Scan OBJECT:([typhoon1902].[dbo].[DurableEventTab].[AribaPK7] AS [Dur1]), WHERE:([typhoon1902].[dbo].[DurableEventTab].[dev_Active] as [Dur1].[dev_Active]=(1.) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PurgeState] as [Dur1].[dev_PurgeState]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PartitionNumber] as [Dur1].[dev_PartitionNumber]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_ReferenceClusterRoot] as [Dur1].[dev_ReferenceClusterRoot]='iyrwd.52' AND [typhoon1902].[dbo].[DurableEventTab].[dev_Action] as [Dur1].[dev_Action]=N'Order:Ordered') [Dur1].[rootId] 1 0.00386574 0.0002263 71 0.00409204 [Dur1].[rootId] PLAN_ROW 0 1
View 3 Replies
Oct 29, 2015
Is there a recommended file format (csv, xml, txt) when choosing a file destination for SSIS? Does the file format impact performance in terms of loading? Let's say I have chosen to use a .csv as my file destination (it has 15 million rows and 50 columns, with 2 bigint columns and the rest binary(32)), and later on I would need to reload it back into a table using SSIS. Is using csv faster than, e.g., xml when reloading? Does it have a performance impact at all?
View 3 Replies