I am creating a SQL table to record the number of times a webpage was clicked at different times.
The table structure is something like:
click_number_1, time_1, click_number_2, time_2, ... ..., click_number_3000, time_3000
Because there are so many columns, I feel it is impossible to add them by hand in a CREATE TABLE statement. Does SQL have anything like the C++ "for" statement that could add those columns in a loop?
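For example, this is the kind of dynamic SQL I am imagining (just a sketch, assuming SQL Server; the table name is made up, and note that SQL Server caps an ordinary table at 1,024 columns, so 3,000 click/time pairs would not actually fit in one table):

-- Sketch only: builds a CREATE TABLE statement in a loop and executes it.
-- Assumes SQL Server 2005 or later for NVARCHAR(MAX); dbo.PageClicks is a made-up name.
DECLARE @sql NVARCHAR(MAX);
DECLARE @i INT;

SET @sql = N'CREATE TABLE dbo.PageClicks (';
SET @i = 1;

WHILE @i <= 3000   -- 3,000 pairs = 6,000 columns, which exceeds the 1,024-column limit
BEGIN
    SET @sql = @sql
        + N'click_number_' + CAST(@i AS NVARCHAR(10)) + N' INT, '
        + N'time_' + CAST(@i AS NVARCHAR(10)) + N' DATETIME, ';
    SET @i = @i + 1;
END;

SET @sql = LEFT(@sql, LEN(@sql) - 1) + N')';   -- trim the trailing comma
EXEC (@sql);

Given that column limit, I suspect the real fix is a normalized design with one row per click sample rather than one column pair per sample, but the looping idea above is what I was asking about.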
Thanks!
I am currently planning an ASP.NET and MS SQL project to distribute customer codes to my customers. All the codes will be stored in the database, and when a customer registers, the next available code is linked to their account. Does this sound like a feasible solution? The number of codes (i.e. rows) could be anywhere from 3,000 to 100,000, but each row will only contain a couple of fields.
Also, a separate company is producing the codes (we're offering them as an incentive), and I was wondering about the best way for them to get the codes to me so I can easily import them into MS SQL. CSV or Excel?
This is the main problem I need help with.
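What I am picturing (a sketch only; the staging table, file path, and delimiter are my own assumptions) is having the supplier send a CSV and loading it with BULK INSERT:

-- Staging table matching the layout of the supplier's file: one code per line.
CREATE TABLE dbo.CustomerCodeStaging
(
    Code VARCHAR(50) NOT NULL
);

-- Load the CSV; path, delimiters and the header row are assumptions about the file.
BULK INSERT dbo.CustomerCodeStaging
FROM 'C:\import\codes.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2      -- skip a header row, if the file has one
);

From the staging table the codes could then be inserted into the real table that also holds the customer link. Is that a sensible way to do it?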
Thank you all for your continued support, and all the work you kind guys do here!
We are developing a system which will have to support more than 3000 subscribers. We will have to support both Transactional replication and Merge replication.
I checked the following document about SQL 2005 replication: <http://www.microsoft.com/technet/prodtechnol/sql/2005/mergrepl.mspx>. The document does not clearly specify the maximum number of subscribers supported without a significant performance degradation.
The questions I have are: 1. Given that there will be more than 3000 subscribers, there will be more than 500-1000 subscribers trying to replicate at the same time. Will there be a performance degradation in such a scenario?
2. Has anyone used SQL Server 2005 in a scenario involving more than 3000 subscribers?
3. Will it be better if we develop our own system to perform replication activity instead of relying on SQL Server 2005?
- Ngm
Mail me at narasimha (DOT) gm (AT) gmail (DOT) com
So I have been trying to get a MySQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has two columns: Test_Name and Test_ID. An example with values is below:
In Table_One, all the types (Mammal, Fish, Bird, Reptile) come under one column (Type) and the corresponding values come under another column (Animal). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form this final table, so the columns Bird, Reptile, Mammal and Fish all come from a different copy of Table_One.
For example:
Select Test_Name AS 'Test_Name', Table_Bird.Animal AS 'Birds', Table_Mammal.Animal AS 'Mammal', Table_Reptile.Animal AS 'Reptile', Table_Fish.Animal AS 'Fish' From Table_One
[Code] .....
The problem with this query is that it only works when all entries for Birds, Mammals, Reptiles and Fish have some value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I used OR instead of AND in the WHERE clause, but that didn't work either.
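The direction I am now wondering about is replacing the inner joins with LEFT JOINs against a filtered copy of Table_One for each type, roughly like this sketch (the alias names are mine):

-- One LEFT JOIN per animal type; missing types should come back as NULL
-- instead of dropping the whole record.
SELECT t2.Test_Name,
       b.Animal AS Bird,
       r.Animal AS Reptile,
       m.Animal AS Mammal,
       f.Animal AS Fish
FROM Table_Two AS t2
LEFT JOIN Table_One AS b ON b.TestID = t2.Test_ID AND b.Type = 'Bird'
LEFT JOIN Table_One AS r ON r.TestID = t2.Test_ID AND r.Type = 'Reptile'
LEFT JOIN Table_One AS m ON m.TestID = t2.Test_ID AND m.Type = 'Mammal'
LEFT JOIN Table_One AS f ON f.TestID = t2.Test_ID AND f.Type = 'Fish'
ORDER BY t2.Test_Name;

Would that return a row for Test_Two and Test_Three with NULLs in the missing animal columns?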
I have to copy a large number (3,000) of different tables from an Oracle machine into a SQL Server machine. I am able to do this using a (VB) script. I currently use several methods:
1) INSERT INTO TABLE1 SELECT * FROM SID1..DB.TABLE1 (SID1 is a linked server)
2) INSERT INTO TABLE1 SELECT * FROM OPENQUERY(SID1,'SELECT * FROM DB.TABLE1')
3) Also used OPENROWSET method (similar to 2)
For small tables this is fine; however, for big tables (15M rows / 150 columns) the methods above are too slow. If I compare the same copy action with a simple DTS package, the DTS is three times faster. Also, DTS seems to bulk copy the data directly into the target database, while the methods above first fill tempdb, then the transaction log of the target database, and only then the target table (which needs a great deal of extra space on the file system). The total size of the data is about 300 GB.
Can anyone supply me with a simple example of how to copy data from an Oracle table into a SQL Server table in a script (or SQL) that is as fast as DTS and does not fill my log files? I have read about bcp (which I use for importing/exporting files) and the BULK INSERT command, but I do not understand how to use them for this problem.
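To make the question concrete, this is how I understand the BULK INSERT side would look, assuming each Oracle table is first exported to a delimited flat file by an Oracle-side tool (the file path, delimiters and batch size are all placeholders I made up):

-- Sketch: load a delimited flat file straight into the target table.
-- TABLOCK allows a minimally logged bulk load when the recovery model permits it;
-- BATCHSIZE commits in chunks instead of one huge transaction.
BULK INSERT dbo.TABLE1
FROM 'D:\export\TABLE1.dat'
WITH
(
    FIELDTERMINATOR = '|',
    ROWTERMINATOR   = '\n',
    TABLOCK,
    BATCHSIZE       = 100000
);

Is that roughly what DTS is doing under the covers, and is this the right way to use it for 3,000 tables in a script?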
Please note that the number of columns is different in each table. I want to dump the data from a source table into a destination table; specifically, two columns of the source table go into the last two columns of the destination table. The order of the columns in the destination table will also vary, so I need a way to insert the data in bulk dynamically, although I will know the column names for sure before inserting.
Is there any way to bulk insert into just these columns?
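A trivial version of what I mean, with made-up table and column names, would be:

-- Only the two named columns are populated; the destination's other columns
-- are left to their defaults or NULL.
INSERT INTO Destination (ColA, ColB)
SELECT ColX, ColY
FROM Source;

What I am looking for is how to do this in bulk when the column positions differ between the two tables.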
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 out of the 6 contain JSON data, so the users have requested that those 2 JSON columns be parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7 key/value pairs). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first JSON column in my query (with CROSS APPLY) I get the right data back, but I also get 8 rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B
I updated my query (see below) to call the function twice within the CROSS APPLY clause, and I got this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. Second question: how do I get around this error?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B, fnSplitJson2(A.ITEM6,NULL) C
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
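For reference, the variant I was planning to try next chains the two CROSS APPLY clauses instead of separating them with a comma, since I suspect the comma is what breaks the binding of A.ITEM6:

-- Each CROSS APPLY sees the columns of PRODUCT (alias A), so A.ITEM6 should bind here.
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT AS A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) AS B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) AS C;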
Here is my requirement; I'm not sure if this is possible. I am creating a table called Master with columns Col1, Col2, Col3, Col4, Col5, ... where Col1 and Col2 are updatable (this can be done easily).
Col3 and Col4 are columns in another table, but these should be read-only. Is this possible? It is possible with a view, but a view is not friendly with SharePoint CRUD. Col5 is a computed column derived from Col2 and one of the other columns; if the step above can be done, then I guess this can be done as well.
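To make the requirement concrete, this is the layout I keep coming back to, even though a plain view may not play well with SharePoint CRUD (all names, data types and the computed expression are invented):

-- Base table: Col1/Col2 are the directly updatable columns,
-- Col5 is a computed column (the expression is a placeholder).
CREATE TABLE dbo.MasterBase
(
    Col1    INT NOT NULL PRIMARY KEY,
    Col2    DECIMAL(10, 2) NULL,
    OtherID INT NULL,                 -- link to the table that owns Col3/Col4
    Col5    AS (Col2 * 2)
);

-- The table that actually owns Col3/Col4.
CREATE TABLE dbo.OtherTable
(
    OtherID INT NOT NULL PRIMARY KEY,
    Col3    VARCHAR(50) NULL,
    Col4    VARCHAR(50) NULL
);
GO

-- A view that presents all five columns together; whether Col3/Col4 stay
-- read-only would depend on permissions or an INSTEAD OF trigger.
CREATE VIEW dbo.MasterView
AS
SELECT m.Col1, m.Col2, o.Col3, o.Col4, m.Col5
FROM dbo.MasterBase AS m
INNER JOIN dbo.OtherTable AS o
    ON o.OtherID = m.OtherID;
GO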
Hello,
Using SQL Server 2000, I'm trying to put together a query that will tell me the following information about a view:
The view name
The names of the view's columns
The names of the source tables used in the view
The names of the columns that are used from the source tables
Borrowing code from the VIEW_COLUMN_USAGE view, I've got the code below, which gives me the view name, source table name, and source column name. And I can easily enough get the view columns from the syscolumns table. The problem is that I haven't figured out how to link a source column name to a view column name. Any help would be appreciated.
Gary

select
    v_obj.name as ViewName,
    t_obj.name as SourceTable,
    t_col.name as SourceColumn
from
    sysobjects t_obj,
    sysobjects v_obj,
    sysdepends dep,
    syscolumns t_col
where
    v_obj.xtype = 'V'
    and dep.id = v_obj.id
    and dep.depid = t_obj.id
    and t_obj.id = t_col.id
    and dep.depnumber = t_col.colid
order by
    v_obj.name,
    t_obj.name,
    t_col.name
I'm in the process of converting legacy DTS packages to SSIS. I need to populate a table that has more fields than the source file. In DTS I did this with an ActiveX script. How do I go about doing this within SSIS?
In the ActiveX script most of the fields were defaulted with either spaces or zeroes.
One of the Destination fields needs to be incremented by 1 for each new record inserted.
I just created a new table with over 100 columns and I need to populate just the first 2 columns.
The first column to populate is an identity column that is the primary key. The second column is a foreign key to another table, and I am trying to populate it with all of the values from the referenced column. This is what I am trying to do:
Column 1 = ID, Column 2 = P_CLIENT_ID
SET IDENTITY_INSERT PIM1 ON
INSERT INTO PIM1 (P_CLIENT_ID) SELECT ID FROM P_Client
So I am trying to insert both an identity value and a value from another table while leaving the other columns blank. How do I go about doing this?
Hello,
I have a query that I need help with. There are two tables:
Product
- ProductId
- Property1
- Property2
- Property3
PropertyType
- PropertyTypeId
- PropertyType
There are many columns in Product that reference one lookup table (PropertyType). In the Product table, the columns Property1, Property2 and Property3 all contain a numerical value that references PropertyType.PropertyTypeId.
How do I select a Product so that I get all rows from Product and also the PropertyType that corresponds to Product.Property1, Product.Property2 and Product.Property3?
ProductId | Property1 | Property2 | Property3 | PropertyType1 | PropertyType2 | PropertyType3
PropertyType(1) = PropertyType for Property1
PropertyType(2) = PropertyType for Property2
PropertyType(3) = PropertyType for Property3
I hope this makes sense; the sketch below is the direction I have been trying.
Thanks in advance.
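The sketch, joining the lookup table once per property column (the pt1/pt2/pt3 aliases are mine):

SELECT
    p.ProductId,
    p.Property1,
    p.Property2,
    p.Property3,
    pt1.PropertyType AS PropertyType1,
    pt2.PropertyType AS PropertyType2,
    pt3.PropertyType AS PropertyType3
FROM Product AS p
LEFT JOIN PropertyType AS pt1 ON pt1.PropertyTypeId = p.Property1
LEFT JOIN PropertyType AS pt2 ON pt2.PropertyTypeId = p.Property2
LEFT JOIN PropertyType AS pt3 ON pt3.PropertyTypeId = p.Property3;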
I have two tables with a common column, name. The first one includes columns such as the name of a football team, its country, and its budget. The second one includes the team name, the number of victories, lost games, and the season. My task is to select the name, victories, and lost games from the second table, but only for teams with a budget over 1 million. How do I build the SELECT statement? Something like SELECT name, victories, lost games FROM TABLE2 WHERE budget > 1000000 does not work, because budget is in TABLE1. Should I equate table1.name = table2.name?
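In other words, I suspect I need a join between the two tables rather than two FROM clauses; a sketch with the columns as I described them (the exact column names, such as lost_games, are guesses):

SELECT t2.name,
       t2.victories,
       t2.lost_games
FROM TABLE2 AS t2
INNER JOIN TABLE1 AS t1
    ON t1.name = t2.name
WHERE t1.budget > 1000000;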
Case: exporting a report to PDF / printing / TIFF. Report: contains 1 table with 19 columns. One column is static; the other 18 are visible at the user's discretion. When printed or exported to PDF the report naturally spans 2 pages, 16 columns on the first page and 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report renders, the viewable version is perfect, and the Excel export is perfect, but in the PDF export, on the first page, every fifth column, starting with the last column that was hidden, is expanded to take up additional width. On the spanned page, the first column on that page renders correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show, with the last column expanded to take up the same width that the original 2 columns would have taken up, plus its own width.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page-size selection available for the client. There are far too many combinations of what the user can elect to show or hide to put together different tables to show and hide on the same report to remove this effect.
Any help or suggestions on this issue would be appreciated.
I have a table A, with two ID columns. In a report both ID columns should be shown with the actual value stored in a second table, B. The problem is, both IDs need to be looked up in B, but are not in the same row. How do I do this in an efficient way? A sub-select?
Thanks,
--
John
MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
I have a table Orders, with OrderID as primary key, a code for the client, and a code for the place of delivery (the recipient).
Both the client and place of delivery should be linked to the table:
Relations, where each relation has its own PrimaryID, which is an auto-numbered ID. Now I want to extract my orders with both the client code and the place-of-delivery code linked to the Relations table, so that the name and address are shown for both.
I can link one of them by:
INNER JOIN ... ON Orders.ClientID = Relations.ClientID, but it's not possible to also link the ReceipantsID the same way. Is there a way to solve this?
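What I am hoping for is something along these lines, joining Relations twice under two aliases (the Name and Address columns are guesses at what Relations contains, and the join columns follow the ClientID join above):

SELECT o.OrderID,
       cli.Name    AS ClientName,
       cli.Address AS ClientAddress,
       rec.Name    AS RecipientName,
       rec.Address AS RecipientAddress
FROM Orders AS o
INNER JOIN Relations AS cli ON o.ClientID = cli.ClientID      -- the client
INNER JOIN Relations AS rec ON o.ReceipantsID = rec.ClientID; -- the place of delivery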
Say you have a fact table with a few columns that all reference the same key column in a dimension table, and you want to write a view to return the information for those keys.
USE MyTestDB;
GO
SET NOCOUNT ON;
IF OBJECT_ID('dbo.FactTemp', 'U') IS NOT NULL
    DROP TABLE dbo.FactTemp;
[Code] ....
I'm using very small data at the moment, and the query plan and statistics don't really show which approach is better.
What is the easiest way to determine the number of columns in a table (in a particular database)? Also, is there any way to get the column ID associated with each column (is there such a thing)?
NOTE: this needs to be done knowing only the name of the table and the database, and nothing else.
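Something like this is what I am hoping exists (a sketch assuming SQL Server 2005 or later and the dbo schema; MyDatabase and MyTable are placeholders for the names I would be given):

-- Column names plus their column_id, using only the database and table name.
SELECT c.name AS ColumnName,
       c.column_id
FROM MyDatabase.sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'MyDatabase.dbo.MyTable');

-- Just the count of columns.
SELECT COUNT(*) AS ColumnCount
FROM MyDatabase.sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'MyDatabase.dbo.MyTable');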
I need to find out the number of columns per table for ALL of the tables in my database, so that I can compare my old copy of the database to the new one and make sure that any new columns are there.
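The kind of query I have in mind is a sketch over the INFORMATION_SCHEMA views, run against both the old and the new copy:

SELECT TABLE_SCHEMA,
       TABLE_NAME,
       COUNT(*) AS ColumnCount
FROM INFORMATION_SCHEMA.COLUMNS
GROUP BY TABLE_SCHEMA, TABLE_NAME
ORDER BY TABLE_SCHEMA, TABLE_NAME;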
I have a table (that I get as a result of pivots and joins in a stored procedure) that has two columns with the same name (each time the stored procedure is called, different columns will be in this situation). For every row, only one of these columns has a value and the other is NULL.
Does anyone know how to prevent this situation during the JOIN that creates this table, or a way to merge these columns once they are there?
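For the merging option, the sort of thing I have been trying is COALESCE over aliased copies of the duplicate column; a sketch with made-up names, since the real column names change on every call:

-- Each source query carries a column called Amount; aliasing the sources at the
-- join lets COALESCE pick whichever value is not NULL.
SELECT a.KeyCol,
       COALESCE(a.Amount, b.Amount) AS Amount
FROM PivotA AS a
INNER JOIN PivotB AS b
    ON b.KeyCol = a.KeyCol;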
I have a procedure where I need to get the columns of a table, store them in a temp table, and process them further. For this I am planning to use the sys.columns catalog view.
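A minimal sketch of what I have in mind (the temp-table and target-table names are placeholders):

-- Capture the column names of one table into a temp table for further processing.
SELECT c.name AS ColumnName,
       c.column_id
INTO #TableColumns
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.MyTable');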
Hi, I have 3 tables which are part of my query and I need to total one column from each of them, but I cannot get it to work. I had this working within Access using the Nz function (below), but when I tried to use it within VS to produce my ASP.NET code it did not work, saying Nz was not a valid function.
nz(HW.HGF) + nz(HD.HGF) + nz(HL.HGF) AS F
where HW, HD and HL are the table names and HGF is the same column name in each, which I want to total. Any help would be greatly appreciated; what I have sketched below is my guess at the SQL Server equivalent. Thanks
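The guess, replacing Nz with COALESCE (the join conditions between HW, HD and HL are placeholders, since I left out how the three tables relate):

-- COALESCE(..., 0) plays the role Nz played in Access: treat NULL as 0.
SELECT COALESCE(HW.HGF, 0) + COALESCE(HD.HGF, 0) + COALESCE(HL.HGF, 0) AS F
FROM HW
INNER JOIN HD ON HD.MatchID = HW.MatchID   -- placeholder join condition
INNER JOIN HL ON HL.MatchID = HW.MatchID;  -- placeholder join condition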
I want to get the primary key columns into an array by sending a table name. I am using SQL Server 2000 and I want to make a find utility in VB.NET which will work for all the forms; I have tables with one primary key and some tables with composite primary keys. I used to do this in VB 6 with a function that fills the primary keys into a list box (I need to fill a list box); now I need to get them into an array. Can someone tell me how to migrate the following VB 6 code? It was written for MS Access; I need the same for SQL Server, and I cannot find the TableDef and Index objects in VB.NET 2003.

Public Sub GetFieldsFromDatabase(ldbDatabase As Database, lsTableName As String)
    Dim lttabDef As TableDef
    Dim liCounter As Integer
    Dim liLoop As Integer
    Dim idxLoop As Index
    Dim fldLoop As Field
    With ldbDatabase
        For Each lttabDef In .TableDefs
            If lttabDef.Name = lsTableName Then
                liCounter = lttabDef.Fields.Count
                For liLoop = 0 To liCounter - 1
                    cboFieldLists.List(liLoop) = lttabDef.Fields(liLoop).Name
                Next liLoop
                For Each idxLoop In lttabDef.Indexes
                    With idxLoop
                        lblIndexName = .Name
                        If .Primary Then
                            liCounter = 0
                            For Each fldLoop In .Fields
                                cboPrimaryKeys.List(liCounter) = fldLoop.Name
                                liCounter = liCounter + 1
                            Next fldLoop
                        End If
                    End With
                Next
                cboFieldLists.ListIndex = 0
                If cboPrimaryKeys.ListCount > 0 Then
                    cboPrimaryKeys.ListIndex = 0
                End If
                Exit For
            End If
        Next
    End With
End Sub
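On the SQL Server side, the query I believe the VB.NET version would need to run is something like this (the INFORMATION_SCHEMA views do exist in SQL Server 2000; the table name is a placeholder):

-- Primary key columns of one table, in key order.
SELECT kcu.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS tc
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS kcu
    ON  kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
    AND kcu.TABLE_NAME      = tc.TABLE_NAME
WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
  AND tc.TABLE_NAME = 'MyTable'
ORDER BY kcu.ORDINAL_POSITION;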
In SQL Server 2000, I have a table for which I have to write a script to delete several columns. I am finding that I have to use the ALTER or DROP keywords, or a combination of the two, but I am not sure because I have not done this before. I am googling this but finding all kinds of other information that I don't need to know. I don't have rights on this table, so I cannot do this manually; I have to create the script and send it on to someone else. If anyone can provide a good script example that I can use to delete unwanted columns (is the sketch below on the right track?), it would be a great thing. Thanks.
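Here is what I have pieced together so far (the table and column names are placeholders):

-- Drop several unwanted columns in one statement.
ALTER TABLE dbo.MyTable
DROP COLUMN UnwantedColumn1, UnwantedColumn2, UnwantedColumn3;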
At first I have to apologize that I am writing here; this is not the right place for my problem, but I could not find any better place to ask for help. I'm creating a new table and I have to copy some columns from the old one to the new one. The description of the old table is something like this: Table1: name char(32) not null primary key, variable1 char(32), variable2 char(32). Now I would like to copy columns variable1 and variable2 to the new table (called Table2). I need to change the names to ID numbers. (Don't ask me why, I just have to :) I have created Table2. Table2: id int not null primary key, newvariable1 char(32), newvariable2 char(32).
Now the problem is: how can I copy just columns variable1 and variable2 without the name column (which is the primary key)? I just get an error message: DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0407N Assignment of a NULL value to a NOT NULL column "TBSPACEID=2, TABLEID=578, COLNO=0" is not allowed. SQLSTATE=23502. It just says (if I understood the message) that I can't leave the id column blank. How can I generate the keys for Table2? It would be nice to get keys 1, 2, 3, ...; something like the sketch below is what I am hoping for. Please help me. :)
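The kind of statement I am hoping will work (assuming a DB2 version that supports the ROW_NUMBER() OLAP function; otherwise I would redefine id as a generated identity column) is:

-- Copy the two columns and generate sequential ids 1, 2, 3, ... at the same time.
INSERT INTO Table2 (id, newvariable1, newvariable2)
SELECT ROW_NUMBER() OVER (ORDER BY name),
       variable1,
       variable2
FROM Table1;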
Hi, I need to get the column names from a table. How do I do this in pure SQL? I thought about creating a stored procedure or user function that returns a string (col1name, col2name, ...), but I do not know how to count the number of columns in a specified table; a sketch of what I am picturing is below. Any help would be appreciated. Thanks in advance. magicxxxx
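The sketch (assuming a database that exposes the INFORMATION_SCHEMA views, such as SQL Server or MySQL; the table name is a placeholder):

-- All column names of one table, in their defined order.
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION;

-- How many columns the table has.
SELECT COUNT(*) AS ColumnCount
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable';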