I’m looking for clarity on partition switching. The idea is to run many BULK INSERT statements into tables dbo.X_n in parallel, and when the BULK INSERT for table dbo.X_n completes, switch dbo.X_n into dbo.bigdaddy. I think this is the fastest way to upload a couple hundred GB of data.
In learning about partition switching (in part) from The Data Loading Performance Guide, under Partition SWITCH, I read the instructions as saying to copy the main table exactly to create the switch target. But in that same step (#1), I read that we need to move the target (dbo.X_n) off the default filegroup. Then it says I need to match the indexes, and it lists the filegroup as something we need to match with the main table.
As an overview of the partition-switching strategy: I think the whole point of BULK INSERT with partitioning is to have separate files (in the same group) to enable concurrent uploading, where each table has its own file. Once the upload to a table (dbo.X_n) is completed, we do the partition switch into the main table (dbo.bigdaddy). The data we just uploaded doesn’t actually move, just the metadata for it.
When I read the instructions linked above, I hear “Don’t have the same filegroup on your target as the main table. You must have the same filegroup on your target as the main table.”
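The two statements reconcile as: not the database's default filegroup, but the same filegroup as the partition you are switching into. A minimal sketch of the load-then-switch pattern described above, assuming dbo.bigdaddy is partitioned on a LoadID column and partition 1 lives on filegroup FG1 (the column, file path, and filegroup names are illustrative):

-- The staging table must match dbo.bigdaddy exactly (columns, indexes) and sit
-- on the same filegroup as the partition it will switch into
CREATE TABLE dbo.X_1
(
    LoadID  int          NOT NULL,
    Payload varchar(100) NOT NULL,
    -- proves to SQL Server that every row belongs in partition 1
    CONSTRAINT CK_X_1_LoadID CHECK (LoadID = 1)
) ON [FG1];

-- Bulk load into the staging table; TABLOCK enables minimal logging
BULK INSERT dbo.X_1
FROM 'C:\loads\load_1.dat'
WITH (TABLOCK);

-- Metadata-only operation: the uploaded data does not physically move
ALTER TABLE dbo.X_1 SWITCH TO dbo.bigdaddy PARTITION 1;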
Hi all, I am trying to create a diagram for our database. While creating it, I added some relationships which were not there (basically our original database is not a relational database; that's why I am doing this). So sometimes I have to change a data type in order to create a relationship between columns in different tables, i.e. change char(16) to varchar(7) (I checked the field to make sure all the data in it is <= 7 characters).
But when I saved the diagram, there was an error message that stated: Errors were encountered during the save process. Some of your database objects are not saved on your diagram.
'agent' table saved successfully
'VisitUSA' table - Unable to create relationship 'FK_VisitUSA_agent'. ODBC error: [Microsoft][ODBC SQL Server Driver][SQL Server] ALTER TABLE statement conflicted with COLUMN FOREIGN KEY constraint 'FK_VisitUSA_agent'. The conflict occurred in database 'CMC', table 'agent', column 'AgentCode'.
What does that mean? Is it caused by some AgentCode data in the VisitUSA table that is not in the agent table? Thanks! Betty
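Most likely yes: that error when creating a foreign key means existing rows violate it. A quick way to confirm is to look for the orphans (a sketch, assuming the referencing column in VisitUSA is also named AgentCode):

-- Any rows returned here are what make the FK creation fail
SELECT v.AgentCode
FROM   dbo.VisitUSA AS v
LEFT JOIN dbo.agent AS a
       ON a.AgentCode = v.AgentCode
WHERE  a.AgentCode IS NULL;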
I have a few questions for you guys. I have a client application that can be offline or online. While offline, records can be added and need to be later synced to production.
I will use RDA to pull the table down, and this is working fine. Now what if I have multiple tables where I want a foreign key relationship?
From everything I've read, with RDA I can only pull down one table at a time. Now say I create a constraint after pulling the two or more tables down. While in offline mode, I create new records in two separate tables with a foreign key/primary key relationship.
When I do the push to the server, will it automatically update the (local) foreign key reference to the right one on the production server? Or will I get a duplicate primary key error? On the production server the primary key will be different because of the identity column. This is very important because I will have multiple clients.
I am just now starting the switch from .NET 1.1 to .NET 2.0. I really like the new way of using the SqlDataSource and setting up views declaratively, as opposed to doing it all in code, which brings me to my question.
In some of my applications I have a single stored procedure return multiple result sets to a single DataSet in which I have a DataRelation set up. Then I can have nested DataGrids that use the GetChildRows() method to filter the results and display the hierarchical data. I would like to do something similar with the SqlDataSource and GridViews, but I haven't found a way to get multiple result sets.
One thought I had was to create a strongly typed DataSet and then use the ObjectDataSource object, but I still didn't see a way to get child rows out of the data source. I've seen an example that uses a <FilterParameter> to get nested data, but an extra trip is made to the server for each parent item, as it just puts an extra parameter in the WHERE clause of the query.
Hi, please help with this simple problem. DTS transfer, or any other method? I have a Customer_Order table and a Customer_Order_Details table. For OrderID = 1, I have 3 rows of order details. When I transfer the Customer_Order table row for OrderID = 1 in DTS, the system should transfer the Customer_Order row as well as the Customer_Order_Details rows for OrderID = 1. How do I customize this in DTS, or is there any other way to get this data transferred from the source DB to the destination DB? KAMAL KUMAR V
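If both databases are reachable from one connection (same server or a linked server), plain T-SQL is an alternative to customizing DTS. A sketch; the database names are placeholders:

-- Copy the parent row first so the child rows' foreign key has something to point at
INSERT INTO DestDB.dbo.Customer_Order
SELECT * FROM SourceDB.dbo.Customer_Order WHERE OrderID = 1;

INSERT INTO DestDB.dbo.Customer_Order_Details
SELECT * FROM SourceDB.dbo.Customer_Order_Details WHERE OrderID = 1;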
Hi, I am trying to use this query to pull all the publications stored in the database and all the authors contributing to each publication (a 1-to-many relationship). I am trying to use a subquery so that I can display the results on one row of a GridView (including a single list of all the authors). However, I am receiving this error: Incorrect keyword near the word SET. ?
Maybe I need to add a temp column in the subquery to pull all the related authors for a single publication, but I don't know the SQL for this? Can anyone help?
Thanks

SELECT ISNULL(Publication.month, '') + ' ' + ISNULL(CONVERT(nvarchar, Publication.year), '') AS SingleColumn,
       Publication.publicationID, Publication.title
FROM Publication
WHERE Publication.publicationID IN
      (SELECT CONVERT(nvarchar, Authors.authorName)
       FROM Authors
       INNER JOIN PublicationAuthors ON Authors.authorID = PublicationAuthors.authorID)
  AND Publication.typeID IN
      (SELECT PublicationType.typeName
       FROM PublicationType
       INNER JOIN PublicationType ON Publication.typeID = PublicationType.typeID
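For reference, one common SQL 2005 pattern for putting all the authors on one row is a correlated FOR XML PATH('') subquery. A sketch reusing the table names above; it assumes PublicationAuthors also carries a publicationID column:

SELECT  p.publicationID,
        p.title,
        ISNULL(p.[month], '') + ' ' + ISNULL(CONVERT(nvarchar(10), p.[year]), '') AS SingleColumn,
        -- STUFF strips the leading ', ' from the concatenated list
        STUFF((SELECT ', ' + a.authorName
               FROM   Authors AS a
               INNER JOIN PublicationAuthors AS pa
                       ON pa.authorID = a.authorID
               WHERE  pa.publicationID = p.publicationID
               FOR XML PATH('')), 1, 2, '') AS AuthorList
FROM    Publication AS p;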
Hello everyone, this is my first post here so hopefully I am not asking a common question.
I am trying to create a flat dataset in SQL 2005. Basically, I run a query and I get multiple rows for the same primary key. The query I am running is quite large and has several different tables joined to it; here is a small sample of what the output looks like...
TypeID (Primary Key)   Individual   Address
1                      Sam          912 Ave. J
1                      Sam          913 Ave. Q
1                      Sam          914 Ave. R
2                      Mike         1000 Ave. O
3                      Jill         1001 Ave. O
I want it to kind of look like this
TypeID   Individual   Address_1     Address_2     Address_3
1        Sam          912 Ave. J    913 Ave. Q    914 Ave. R
2        Mike         1000 Ave. O
3        Jill         1001 Ave. O
As I said before, this query is pretty big and has several columns like Address where multiple rows are produced for one primary key.
If it is not possible to do this in SQL 2005, is there a program that may be able to? Right now we are using SPSS as a sort of band-aid: we run the query in small portions like the one in the sample and then restructure it in sections, but this takes several hours.
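It is possible in SQL 2005: number the addresses per key with ROW_NUMBER() and then PIVOT them into columns. A sketch, assuming at most three addresses per TypeID and using a placeholder name for the big query:

SELECT TypeID, Individual,
       [1] AS Address_1, [2] AS Address_2, [3] AS Address_3
FROM (
    SELECT TypeID, Individual, Address,
           ROW_NUMBER() OVER (PARTITION BY TypeID ORDER BY Address) AS rn
    FROM dbo.YourBigQuery   -- stand-in for the large multi-table query
) AS src
PIVOT (MAX(Address) FOR rn IN ([1], [2], [3])) AS p;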
Anyways, thanks for any help that you may be able to provide.
I have played around with SSIS, in addition to reading an SSIS book front to back, but I am still a little confused as to how to import an XML file with relational data.
Basically, I want to import the XML data into three tables: categories, products and fields. A product can belong to one or more categories and has one or more fields, which store information about the product.
Using the XML Source component I can load the XML from the file, but I can only output one section (category, product or field) at a time. Since the relationship is inferred from the hierarchical structure of the XML (e.g. the fields don't store an ID of the product they belong to), I am not sure how to import the relationships into my tables.
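A hedged alternative to the XML Source component: land the whole file in an xml value and shred it with the nodes() method (available in SQL 2005), which preserves the hierarchy. The element names below are guesses at the file's layout, and the path is illustrative:

DECLARE @x xml;

-- Read the whole file as one blob and convert it to the xml type
SELECT @x = CONVERT(xml, BulkColumn)
FROM   OPENROWSET(BULK 'C:\data\products.xml', SINGLE_BLOB) AS f;

-- Each CROSS APPLY steps one level down, so the parent/child links survive
SELECT  c.value('@name', 'nvarchar(100)') AS Category,
        p.value('@name', 'nvarchar(100)') AS Product,
        fld.value('.', 'nvarchar(200)')   AS FieldValue
FROM    @x.nodes('/categories/category') AS tc(c)
CROSS APPLY c.nodes('product')           AS tp(p)
CROSS APPLY p.nodes('field')             AS tf(fld);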
If anyone has any tips on how I can go about that, then it would be most appreciated :)
Ok, I have a page on my website where we can add products to our database. We are a music store, and most products have different versions or colors. I've created 2 tables, Products and SubProducts. The Products table may hold info like Fender Stratocaster, and the SubProducts table would hold colors (Blue, Sunburst, etc). The SubProducts table has an integer field called MainProductID, which is linked to the main Products table field RecordID.
So far the page uses a wizard which first creates the main product using a SqlDataSource. After the data has been added to the main Products table, my page gives you the opportunity to add different sub products. The problem I am having is actually feeding the RecordID from the main Products table into my insert parameter on the sub products data source. This is what I have tried so far (there is a FormView on the page that is bound to the main Products table; after the entry is created I can physically see the info on my screen, so I know the data is there at my disposal):

SubProductsDataSource.InsertParameters.Add("@MainProductID", FormView1.DataKey.Item("RecordID"))
SubProductsDataSource.Insert()

Using this adds the data to the table, but the MainProductID is null.
Also, is there a cheap little way to refresh a page? When I upload the product images I have it go to the next step, where you are supposed to be able to see the images you uploaded. I don't see them, which makes me think that the page is loading faster than the images are uploading. Thanks
Problem: I am working on a price comparison system which matches the best prices for a purchase (or an order) from existing purchase data. The order is stored in multiple tables, including order details (stores the major items purchased, e.g. a PC) and order sub-details (optional items purchased with the major items, e.g. speakers, backup device, webcam etc.). There could be a number of major items in an order, and each major item could have multiple related sub-items. The other variables that affect the price include trade-ins if any, sales going on at the time of order, number of units, etc.
Now, for any new configuration (major items/related sub-items), the system should be able to return a list of previous purchases made with similar configurations and similar variables (quantities, trade-ins, etc.). Even if the same model is not present, similar PCs by the same vendor should be considered, and so on.
Questions: Is this possible using Data mining? If yes, which algorithm is recommended?
Also, can I assign/modify weights for certain variables (if same model: 0.6; if same model not available but PCs made by the same manufacturer are: 0.3; by other manufacturers: 0.1)?
Hello everybody. We need to move table T1 from database A to database B on the same server.
The size of table T1 is 15 GB, about 40,000,000 rows.
Database B was just created and will act as a warehouse.
Could it be done simply by:
1. creating table T1 in db B, then
2. setting db B to simple recovery,
3. INSERT INTO B.dbo.T1 SELECT * FROM A.dbo.T1,
4. creating all the indexes on table T1 in db B?
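Roughly, yes. A hedged sketch of the same steps: under the SIMPLE (or BULK_LOGGED) recovery model, SELECT ... INTO is minimally logged, which is why it is often preferred over a plain INSERT ... SELECT, and building the indexes after the load keeps the load itself fast. The index and key column are placeholders:

ALTER DATABASE B SET RECOVERY SIMPLE;

-- Creates B.dbo.T1 and loads it in one minimally logged operation
SELECT *
INTO   B.dbo.T1
FROM   A.dbo.T1;

-- Build the indexes only after the 40,000,000 rows are in
CREATE CLUSTERED INDEX IX_T1_Key ON B.dbo.T1 (KeyColumn);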
Hi all, I tried the following piece of code in SQL 2005 and it works fine: SELECT * FROM Table FOR XML RAW('RECORDS'), ELEMENTS, ROOT('MyTable'). But when I tried the same thing in SQL 2000, it did not work. Please suggest a way to get XML output with a custom root node name in 2000, like in 2005. Thanks in advance. Mohit
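For reference: SQL 2000's FOR XML has no ROOT() option, so the usual workarounds were to wrap the root element in client code or to build the whole document with FOR XML EXPLICIT. A hedged sketch of the EXPLICIT route, assuming a table dbo.MyTable with columns Col1 and Col2:

-- Tag 1 produces the <MyTable> root; Tag 2 produces one <RECORDS> element per row
SELECT 1    AS Tag,
       NULL AS Parent,
       NULL AS [MyTable!1],
       NULL AS [RECORDS!2!Col1!element],
       NULL AS [RECORDS!2!Col2!element]
UNION ALL
SELECT 2, 1, NULL, Col1, Col2
FROM   dbo.MyTable
-- in real use, add an ORDER BY that keeps the Tag 1 (root) row first
FOR XML EXPLICIT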
As people say, Microsoft was the first major database vendor to include data mining features in a relational database. What does this exactly mean? Thanks a lot for any guidance.
Hello, I am a beginner with ASP.NET and SQL Server. I used the full version of SQL Server Management Studio and exported my aspnetdb (which was created by VS2005) to my host's SQL Server. I have a question: the relational tables are no longer relational; I noticed that when I created a database diagram. What went wrong in the export? Thanks for your help...
My input is a flat file source, and it has spaces in a few columns in the data. These columns are linked to another table as a foreign key, and when I try loading them into a relational structure a foreign key violation occurs. Is there a standard method to replace these spaces?
What approach should I take so that the data gets loaded into the relational structure?
For example:

Name   Age   Salary   Address
dsds   23             fghghgh

Salary   Description   Level
2345     nnncncn       4

Salary is just used in this example; the data type is char in the real scenario.
What approach should I take in SSIS to cleanse the spaces while loading the data?
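One standard approach is to land the file in a staging table and turn the all-space values into NULLs before loading the relational tables; inside SSIS itself, a Derived Column transform with the same trim logic does the equivalent. A sketch with a placeholder staging table name:

-- All-space values become NULL, which a nullable foreign key column accepts
UPDATE dbo.StageInput
SET    Salary = NULLIF(LTRIM(RTRIM(Salary)), '');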
I'm currently working with a 10 million plus row database, with the data residing on a Unix box running Cache 5.0. The problem is that it can take five days to pull one table from Cache to SQL 2000 using the ODBC connection provided by Cache in a SQL 2000 DTS package. I think the real problem is converting the data from the post-relational format (Cache) to a relational format (SQL 2000)??? Does anyone have any ideas/suggestions on how to speed up this transfer of data? I'm very new to Cache and any help would be greatly appreciated. Thanks, -p
For this scenario, what is the best method of exporting data to SQL 2005?
I want to export data from a desktop app across the internet to SQL Server. I can do it on a row-by-row basis, but this is very slow, and if the connection goes down halfway through, I'm pretty much buggered.
What is the best, most reliable and fastest way to copy data across the internet (several thousand rows)? I have read about BULK INSERT etc., but also, how would I get around an upload that crashes halfway through? Is there a way of uploading so that the data is only inserted into the database once the whole upload goes through?
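One pattern for the crash-halfway concern: upload into a staging table (which can be retried safely), and only move the rows into the real table in a single transaction once the whole upload has arrived. A sketch with placeholder table and column names:

BEGIN TRANSACTION;

INSERT INTO dbo.Orders (OrderID, CustomerID, Amount)
SELECT OrderID, CustomerID, Amount
FROM   dbo.Orders_Staging;

-- Clear the staging table only after the copy succeeds
TRUNCATE TABLE dbo.Orders_Staging;

COMMIT TRANSACTION;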
Does anyone know how to upload (bulk) data from a client (written in Excel VBA) to a remote SQL 2000 database? Of course I tried "INSERT INTO" and rst.AddNew, but I noticed this is much, much slower than downloading from the same remote database.
I have the below DB structure in MS SQL for a small application which follows a relational approach. Data retrieval (for Hostels) will need several joins; maybe a key-value approach, where data retrieval would be fast, is an alternative.
When I deploy the cube, which is sitting on my PC (local), the following 4 errors come up:
Error 1: The datasource, 'AdventureWorksDW', contains an ImpersonationMode that is not supported for processing operations.
Error 2: Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'AdventureWorksDW'.
Error 3: Errors in the OLAP storage engine: An error occurred while the dimension with the ID of 'Customer', Name of 'Customer' was being processed.
Error 4: Errors in the OLAP storage engine: An error occurred while the 'Customer Alternate Key' attribute of the 'Customer' dimension from the 'Analysis Services Tutorial' database was being processed.
I'm using a Script Component to load data into an Oracle DB because of a poor performance issue. Now I have found it is missing some data during the transmission. Please see the screenshot below.
I am getting ErrorCode 8 while loading the data from stage to model. I have checked my error view; it states "Member Code is Inactive".
Initially I loaded the same set of data into the model from the MDS stage table, but then deleted it with ImportType = 5, which removed all the data from the MDM model.
Now I want to load it back, but it's giving Error Code 8. Before loading the same data I changed the stage table ImportType to 2 and ImportStatusID to 0.
Hi, all experts here. Do we always have to use the SCD component when loading data into a data warehouse to handle changes to rows? I look forward to hearing from you, and thank you very much in advance for your help. With best regards,
Hi, I am trying to do a straightforward load from a flat file source. I defined the columns according to the lengths given in the data dictionary provided, but when I try to run the task I encounter this error:
The column data for column "Column 20" overflowed the disk I/O buffer.
I tried to add another column, Column 21, at the end and truncate it or leave that column unmapped to the destination, but the same problem occurs for Column 21. What should I do to overcome this?
In case of bad data, how do I clean up the source? Please help me with this.
Hello!! Searching for information about how to migrate some data from an old database (of any type) to SQL, I found this:

LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name.txt'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [FIELDS
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number LINES]
    [(col_name_or_user_var,...)]
    [SET col_name = expr,...]

Does anybody know how it works and how to use it???? I'd like to know because I have to load data from a text file into a SQL database, and this seems to be the fastest and easiest way to do it... Thanks!!!! Bye!
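Note that LOAD DATA INFILE is MySQL syntax, so it won't run on SQL Server. If the destination is SQL Server, the closest equivalent is BULK INSERT; a hedged sketch, with the table name and delimiters as assumptions:

BULK INSERT dbo.TargetTable
FROM 'C:\data\file_name.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');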
The project is a C/S data analysis system built with .NET 2.0 in a Windows environment. OS: Microsoft Windows 2003 R2 Standard Edition Service Pack 2; the database used in this project is SQL Server 2005. As a data analysis system, we need to load large amounts of data from files into the database; we do it by creating a DTS package and then running the data load by executing "m_Package.Execute(null, variables, m_PackageEvents, null, null)".
The problem is, we fount that DTS miss some data randomly sometimes, we can't find the rule till now. for example we've data as follows in data file, all data field splited by '|' 11234|26341|2007-09-03 00:00|0|0|0.0|0|0.011470833793282509|1|0.045497223734855652|0|0|1|0|3|2929|13130|43|0|2|0|0|40|1|0|0|0|0|0|1||0|0|3|0|0|0|0|0||0|3|0|0|43|43|0|41270|0|3|0|0|10|3|0|0|0|0|0||0|1912|0|0|3|0|0|0|0|0|0|0|3|0|0|5|0|40|0|9|0|0|0|0|0|0|0|0|29|1|1|24|24.0|16|16.0|0|0|0.0|0|0|24|23.980693817138672|0|0.0|0|0.0|0|0.0|0|0.0|11|2.0764460563659668|43|2|0|0|30|11|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|3|3|0|0|0|0|0|0|0|0|0|6|0|0|0|0|0|6|0|0|45|1|0|0|0|2|42|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|0|0|0|2|0|0|0|0|0|0|51|47|85|0|0|||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|||||||||||||0|0|0|0|0|97.117401123046875|0|0|83|57|||0.011738888919353485|0|1|0.065955556929111481|0|4|||0.00026658669230528176|1|0.00014440112863667309|1|68|53|12|2|1|2.0562667846679688|10|94|2|0|0|30|11|47|4|13902|7024|6878|18|85|4.9072666168212891|5|0.0|0|0.0|0|0.0|0|0.0|0|358|6448|6324|0|0|0|0||0||462|967|0|41|39|2|0|0|0|1|0|0|0|0|0|0|0|0|3|0|0|3|0|0|0|0|0|0|0|0|0|3|0|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|46|0|1|0|1|37|0|0|46|0|1|0|1|37|0|0|0|0|0|0|0|0|0.0|0|0|6|4|2|0|0|2|1|0|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|1|0.012290795333683491|0|44|44.0|0|0.0|0|0|0|30|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|0|2|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|2|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|0|0|0|0|0|0|0|0|0|0|0|0|27|0|0|2112|411|411|45|437|2|0|2|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|4|0|4|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0|1|0|0|0|0|0|0|0|0|0|0|0|6|6|0|3|2|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|5|5.0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|600|600|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|6|0|0|0|0|0|0|6|0|9|1|2|2|3|0|1|0|0|0|0|0|0|0|0|0|0|0|13|3|2|5|1|1|1|0|0|0|102|0|1|1|0|0|0|3|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||0|0|0|0|0|0|0|0|0|0|0||0|0|0|0|0|0|0|0|0||||||||||0|0|0|0|0|0|0||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|46.0|46|0.0|0|0.0|0|0.011469460092484951|1|0.0|0|0.0|0|3|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0.0|0|0.0|0|0|0|0|0|0|0|0|0|0|0|0|0|||0|100.0|100.0|0|1|0|1|0|0|0.02481505274772644|1|0.014951236546039581|1|0|0|0|0|0|0|0|0|0|0|0|0|0|||||||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|||0|||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|4695.5556640625|42260|7126.66650390625|62040|||||||||||||||||||||||||||||||||||||||||||||||||||||||0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0||||||||||0|0||||||||||
We found that some of the data fields become 'null' after the load action finishes. If we load the same data again, the problem disappears; we can't reproduce this issue 100% of the time and we don't know why. Can anybody here help us solve this issue or give us a clue?