Hi, I have a denormalized table (done so with reason) with around 40 columns. I would never have to retrieve data from all of those columns together. I haven't done any performance measurements yet, but I'm wondering if anyone has a ready answer to this: will there be performance degradation if I retrieve data from a table with many columns, even if not all columns are referenced in the query? (To keep it simple, let's assume all columns are of varchar type; I just want to find out whether performance degrades if there are too many columns in a table.)
I have a database hosted by GoDaddy. Recently they made some changes to the interface and upgraded to SQL Server 2008. One or the other has made it impossible to access my data in one table.
The table is quite large in terms of the number of elements. Each row describes a dog, and all the elements are components of the description. There are (I would guess) more than 50 elements altogether.
When I try to search the database, the query form goes beyond the top and bottom of the page. I can scroll the database but the search tool (which lies atop the data) does not scroll. The result is that I can't activate the search.
I've tried about 10 machines. All with IE6 display this fault. Machines with IE7 do not. I've tried various screen resolutions on the machines with IE6. That doesn't help.
I've checked other tables in the database. No problem.
In short, there's nothing I can do. I can't edit my data and GoDaddy says, "Tough."
Is there a limit on the number of columns (elements) in a table in SQL Server 2008?
I am creating a report that uses the Matrix control. I need to display a fixed number of columns (5). In my query, I am returning the top 5 rows of data. However, in some cases fewer than 5 rows are returned from the dataset. Is there a way to force the number of columns displayed in the matrix control and to populate them with some text (such as "n/a") if no data is available?
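One idea I've been toying with is to do the padding in the dataset query itself: union filler rows onto the real data and take the top 5, so the matrix column group always has 5 members to pivot. A rough sketch, assuming a single varchar column MyValue in a table MyData (both names made up):

SELECT TOP 5 v.MyValue
FROM (
    SELECT MyValue, 1 AS sortkey FROM MyData
    UNION ALL
    SELECT 'n/a', 2 UNION ALL SELECT 'n/a', 2 UNION ALL
    SELECT 'n/a', 2 UNION ALL SELECT 'n/a', 2 UNION ALL SELECT 'n/a', 2
) v
ORDER BY v.sortkey

Real rows sort first, so the 'n/a' fillers only appear when fewer than 5 real rows exist; if the real query has its own ORDER BY, it would go in as a secondary sort key.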
I have a table (SQL Server 2000) which has 14 cost columns for each record. Now, due to a new requirement, I have 2 taxes which need to be applied via two more fields called Share1 and Share2, e.g. Sales Tax = 10%, Use Tax = 10%, Share1 = 60%, Share2 = 40%.

So Sales Tax Amt (A) = Cost1 * Share1 * Sales Tax, and Use Tax Amt (B) = Cost1 * Share2 * Use Tax.

The same calculation applies to all the costs, and then total cost with Sales Tax = Cost1 + A, Cost2 + A and so on, and total cost with Use Tax = Cost1 + B, Cost2 + B, etc.

So around 14 new fields are required to save the Sales Tax amount for each cost, and more new fields again to store Cost with Sales Tax and Cost with Use Tax. That increases the table size. Some of these fields might be used for making reports.

I was wondering which is the better approach out of the four below: 1) Calculate these fields dynamically while displaying them on the user interface and do not save them in the DB (when making reports, again calculate these fields dynamically and show them), or 2) Add new formula field columns to the database table to save each field, which would make the table bigger, but reporting becomes easier, or 3) Add only those columns to the database on which reports need to be made, and calculate the rest of the fields dynamically on screen, or 4) Create a view just for reports, calculate values dynamically in the UI, and do not add any computed values to the table.
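For what it's worth, if option 2 is chosen, SQL Server 2000 can express these as computed columns, which are evaluated at query time rather than stored (the PERSISTED option only arrived in SQL Server 2005), so they don't actually grow the row. A minimal sketch for the first cost column, with hypothetical table/column names and the rates hard-coded:

ALTER TABLE dbo.Costs ADD
    SalesTaxAmt1      AS (Cost1 * Share1 * 0.10),         -- A = Cost1 * Share1 * Sales Tax
    UseTaxAmt1        AS (Cost1 * Share2 * 0.10),         -- B = Cost1 * Share2 * Use Tax
    Cost1WithSalesTax AS (Cost1 + Cost1 * Share1 * 0.10), -- Cost1 + A
    Cost1WithUseTax   AS (Cost1 + Cost1 * Share2 * 0.10)  -- Cost1 + B

The same pattern would repeat for Cost2 through Cost14, and reports can then select the computed columns as if they were real ones.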
I have a table called Employees which has lots of columns but I only want to count some specific columns of this table.
i.e. EmployeeID: 001
week1: 40 week2: 24 week3: 24 week4: 39
This employee (001) has two weeks below 32. How do I use the COUNT statement to calculate how many of these four weeks (columns) have a value below 32?
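Since COUNT counts rows rather than columns, a sketch of one way to count across columns is adding up CASE expressions instead, using the column names from the example above:

SELECT EmployeeID,
       CASE WHEN week1 < 32 THEN 1 ELSE 0 END
     + CASE WHEN week2 < 32 THEN 1 ELSE 0 END
     + CASE WHEN week3 < 32 THEN 1 ELSE 0 END
     + CASE WHEN week4 < 32 THEN 1 ELSE 0 END AS WeeksBelow32
FROM Employees

For employee 001 this returns 2 (week2 and week3).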
I have 3 columns. I would like to update a table based on the job_cd and permit_nbr columns. If we have the same job_cd and permit_nbr, the reference number should be the same; otherwise it should take MAX(reference number) from the table + 1, for all rows where the reference_nbr column is NULL.
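A rough sketch of what I mean, assuming a single table dbo.Permits holding the three columns (the table name is made up): reuse the group's existing reference_nbr when one exists, otherwise take the current maximum plus one:

UPDATE t
SET    reference_nbr = COALESCE(
           (SELECT MAX(s.reference_nbr)
            FROM   dbo.Permits s
            WHERE  s.job_cd = t.job_cd
              AND  s.permit_nbr = t.permit_nbr),             -- same job_cd/permit_nbr: reuse it
           (SELECT MAX(reference_nbr) FROM dbo.Permits) + 1) -- otherwise: max + 1
FROM   dbo.Permits t
WHERE  t.reference_nbr IS NULL

One caveat: if several brand-new job_cd/permit_nbr groups are NULL at the same time, this gives them all the same max + 1 value, so it may need to run per group.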
I have a dataset with 2 columns, a rownumber and a servername - eg
rownumber servername
1 server1
2 server2
....
15 server15
I want to display the servernames in a report so that you get 3 columns - eg
server1 | server2 | server3
server4 | server5 | server6
...
server13 | server14 | server15
I have tried using multiple tables and lists and filtering the data on each one, but this makes formatting very hard - I either end up with a huge gap between the columns or the columns overlap.

I have also tried using a matrix control, but can't find a way to do this.

Does anybody know an easy way to do this? The data comes from SQL 2005, so I can use a PIVOT clause on the dataset if somebody knows a way to do it that way. The reporting service is also RS2005.
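One trick that might work with the matrix control: compute a row group and a column group from rownumber in the dataset, then use rowgroup as the matrix row grouping, colgroup as the column grouping, and servername as the cell value. A sketch against a hypothetical dbo.Servers table holding the two columns:

SELECT servername,
       (rownumber - 1) / 3 AS rowgroup,   -- 0,0,0,1,1,1,... a new row every 3 servers
       (rownumber - 1) % 3 AS colgroup    -- 0,1,2,0,1,2,... cycles across the 3 columns
FROM   dbo.Servers

Integer division and modulo do the wrapping, so no PIVOT is needed and the matrix handles the layout.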
I am currently designing a SSIS package to integrate data into a data warehouse fact table. This fact table has about 70 columns among which 17 are foreign keys for dimension tables.
To insert data in that table, I have to make several transformations and lookups. Given the fact that the lookups I have to make are a little complicated, I have about 70 tasks in my Data Flow. I know it's a lot, but I can't find a way to make it simpler. It seems I really need all these tasks.
Now, the problem is that every new action I take on the package takes a lot of time. At design time, everything is very slow. My processor is heavily loaded each time I change a single setting in one of the tasks, and executing the package in debug mode takes ages. If I look at the size of my package file on disk, it's more than 3 MB.

Hence my question: are there any limitations in terms of the number of columns or the number of tasks that can be processed within a Data Flow?
If not, then do you have any idea why it's so slow ?
I am working on a Statistical Reporting system where:
Data Repository: SQL Server 2005
Business Logic Tier: views, user-defined functions, stored procedures
Data Access Tier: stored procedures
Presentation Tier: Reporting Services

The end user will be able to slice & dice the data for the report by:

- different organizational hierarchies
- different numbers of layers within a hierarchy
- selecting one organization, or selecting "All" of the organizations within the organizational hierarchy
- combinations of these selection criteria, where the criteria are independent of each other

Below is an example of 2 organizational hierarchies:

Hierarchy 1: Country -> Work Group -> Project Team (Project Team within Work Group within Country)

Hierarchy 2: Client -> Contract -> Project (Project within Contract within Client)

Based on the 2 different hierarchies above, here are a couple of use cases:

- Country = "USA", Work Group = "Network Infrastructure", Project Team = all teams
- Country = "USA", Work Group = all work groups

My questions are how to implement the data interface (stored procs) for the reports, and how to implement the business logic to handle the different hierarchies and different numbers of levels. I did get help earlier in this forum on how to handle a parameter having either a specific value or a NULL value (to select "all"): (WorkGroup = @argWorkGroup OR @argWorkGroup IS NULL)
Any ideas? Should I be doing this in SQL statements, or should I be looking to use Analysis Services?
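For the "specific value or All" part, the pattern above extends naturally to every level of a hierarchy within one stored procedure. A sketch with made-up table and column names:

CREATE PROCEDURE dbo.GetProjectStats
    @Country     varchar(50) = NULL,   -- NULL means "all"
    @WorkGroup   varchar(50) = NULL,
    @ProjectTeam varchar(50) = NULL
AS
BEGIN
    SELECT Country, WorkGroup, ProjectTeam, SUM(HoursWorked) AS TotalHours
    FROM   dbo.ProjectStats
    WHERE  (Country     = @Country     OR @Country     IS NULL)
      AND  (WorkGroup   = @WorkGroup   OR @WorkGroup   IS NULL)
      AND  (ProjectTeam = @ProjectTeam OR @ProjectTeam IS NULL)
    GROUP BY Country, WorkGroup, ProjectTeam
END

Whether this beats Analysis Services depends on how far the slicing and dicing needs to go, but it keeps everything in plain stored procedures.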
Say I have a column in the database that has number values. I want to display these numbers with comma separators (e.g. if the column value is 1654, then I want to display it as 1,654).
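Formatting is usually better done in the presentation layer, but if it has to happen in T-SQL, one common trick is converting through money with style 1, which inserts the commas, and then trimming the trailing ".00" (this assumes whole numbers):

SELECT LEFT(CONVERT(varchar(30), CAST(1654 AS money), 1),
            LEN(CONVERT(varchar(30), CAST(1654 AS money), 1)) - 3)
-- returns: 1,654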
So I have been trying to get a MySQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has 2 columns: Test_Name and Test_ID. An example with values is below:
In Table_One all types come under one column, and the values for all Types (Mammal, Fish, Bird, Reptile) come under another column (Animal). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form this final table, so the columns Bird, Reptile, Mammal and Fish all come from different copies of Table_One.

E.g.:

Select Test_Name AS 'Test_Name', Table_Bird.Animal AS 'Birds', Table_Mammal.Animal AS 'Mammal', Table_Reptile.Animal AS 'Reptile', Table_Fish.Animal AS 'Fish' From Table_One
[Code] .....
The problem with this query is that it only works when all entries for Birds, Mammals, Reptiles and Fish have some value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I used OR instead of AND in the WHERE clause, but that didn't work either.
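An alternative sketch that avoids the multiple copies of Table_One entirely: LEFT JOIN once and pivot with conditional aggregation, so tests with missing animal types still come back (column names taken from the description above):

SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Bird,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM Table_Two t2
LEFT JOIN Table_One t1 ON t1.TestID = t2.Test_ID
GROUP BY t2.Test_Name

A CASE with no ELSE returns NULL, so missing types show up as empty cells instead of dropping the whole row.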
Summary: maximum number of records per second that can be inserted into SQL Server 2000.

I am trying to insert hundreds (preferably even thousands) of records per second into a SQL Server table (see below), but I am getting the following error in the Windows Event Viewer application log: "Insufficient memory...". Very few records were inserted, and no errors were sent back via JDBC. By removing the indexes on the table we have stopped getting the error message and have managed to load the table at 300 records per second. However, I have a couple of questions:

1) Are the indexes definitely to blame for this error, and is there any way of getting around this problem, i.e. keeping the indexes in place when inserting?
2) How should I configure SQL Server to maximise the speed of inserts?
3) What is the limiting factor for inserting into SQL Server?
4) Does anyone know of any metrics for inserting records? At what point should we consider load balancing across DBs?

I currently populate 1.6 million records into this table. Once again, thanks for the help!!

CREATE TABLE [result] (
    [id] numeric(20,0) NOT NULL,
    [iid] numeric(20,0) NOT NULL,
    [sid] numeric(20,0) NOT NULL,
    [pn] varchar(30) NOT NULL,
    [tid] numeric(20,0) NOT NULL,
    [stid] numeric(6,0) NOT NULL,
    [cid] numeric(20,0) NOT NULL,
    [start] datetime NOT NULL,
    [ec] numeric(5,0) NOT NULL,
)
GO
CREATE INDEX [ix_resultstart] ON [dbo].[result]([start])
GO
CREATE INDEX [indx_result_1] ON [dbo].[result]([id], [sid], [start], [ec])
GO
CREATE INDEX [indx_result_3] ON [dbo].[result]([id], [sid], [stid], [start])
GO
CREATE INDEX [indx_result_2] ON [dbo].[result]([id], [sid], [start])
GO
I want to update the employee table by selecting columns from employee_temp.

I can do that in Oracle, but I want to do it in SQL Server 2000.

Sample syntax below, for Oracle:

update employee e
set (col1, col2, col3, col4) = (select t1.col1, t1.col2, t1.col3, t1.col4
                                from employee_temp t1
                                where t1.col1 = :new.col1)
where e.col1 = :new.col1

That syntax is for Oracle. Please provide the SQL 2000 syntax, and the SQL 2005 syntax too, please.
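For reference, a sketch of the kind of syntax I'm after, if I understand the SQL Server UPDATE ... FROM extension correctly (it works on both SQL 2000 and SQL 2005, replacing Oracle's multi-column SET with a join):

UPDATE e
SET    e.col2 = t1.col2,
       e.col3 = t1.col3,
       e.col4 = t1.col4
FROM   employee e
JOIN   employee_temp t1 ON t1.col1 = e.col1

(col1 serves as the join key here, so it is not itself updated.)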
Does anyone know what the maximum number of columns is that a SQL table can contain?

I seem to remember that in the past it was something like 200 columns max, but I don't know whether that has changed with newer versions of SQL Server.

Looking at the spec for SQL Server 2000, I see a maximum of 1024 columns per base table - am I right in interpreting this to mean the maximum is now 1024 columns, or is a base table different from a SQL table?
While running an SSIS package after migrating it from DTS to SSIS, in MS SQL Server 2005, it gives this error during execution:

DTS_DTSTASK_DATAPUMPTASK_2: The number of columns is incorrect. Verify that the column metadata is valid. "OLEDB Destination" (22) failed the pre-execute phase and returned error code 0xC0202025.

Thanks for the response.
Hi, I'm designing a new database and I have a doubt with which surely you can help me.

I'm storing historical data of some measurements in this database, and the system is constantly growing; new measurements are added every day. So I have to set up some extra columns in advance, so that space is available whenever it is needed and the client doesn't have to modify the structure in SQL Server.

The question is: the more columns I add "just in case", the slower SQL reads the table? Of course the "empty" columns are not included in any query until they have some valid data inside. Will I have better performance if I configure only the columns being used at the moment, without any empty columns?

Thanks in advance.
Ignacio
The returned recordset is empty, with all the 256 column names. Could anyone shed light on why it is returning an empty recordset? Is there any limitation on the number of columns that a recordset can hold?
Hi. I am using MS SQL Server 2000. Can somebody tell me what the code would be to remove all the values in a given column and replace them with the associated row number on each execution? So, if I have a column:

nums
|1|
|2|
|3|
|4|

and somebody deletes record |2|, I would like the nums column to update to

|1|
|2|
|3|

not

|1|
|3|
|4|

It seems simple, but I am having a hard time with this. How is it done? Thanks.
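A sketch of one way to renumber on SQL Server 2000 (which has no ROW_NUMBER): set each value to the count of values at or below it, assuming the column lives in a table dbo.T (a made-up name) and the nums values are unique:

UPDATE t
SET    nums = (SELECT COUNT(*) FROM dbo.T s WHERE s.nums <= t.nums)
FROM   dbo.T t

After |2| is deleted, the remaining 1, 3, 4 become 1, 2, 3. This would typically run right after the delete (or from a trigger).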
In short: I want the following to work with a LIKE instead of an exact match and, if possible, be faster (currently 4 s).

Problem: I have a set of rows with varchar(50) columns, spread out over multiple tables. All those tables relate to a central Colour table. For each of the columns, I want to match the values against a set of strings I pass in, and then return a set of Colour.Id values.

E.g. input: 'BLACK', 'MERCEDES', '1984' would return colour ids "025864", "45987632", "65489" and "63249", because they have a colour name containing 'BLACK', or are on a car from 'MERCEDES', or were used in '1984'.
Current Situation:

I) Create a table containing all possible values, and fill it with all the values (671694 rows):

CREATE TABLE dbo.CommonSearch(
    id int IDENTITY (1, 1) NOT NULL,
    clr_id int,
    keyWord varchar(40),
    fieldType varchar(25)
)

II) A function to cut a string up into a table:

CREATE FUNCTION dbo.SplitString
(
    @param varchar(50),
    @splitChar char = ''
)
RETURNS @T TABLE (keyWord varchar(50))
AS
BEGIN
    DECLARE @val varchar(50)
    WHILE LEN(@param) > 0
    BEGIN
        IF CHARINDEX(@splitChar, @param) > 0
            SELECT @val = LEFT(@param, CHARINDEX(@splitChar, @param) - 1),
                   @param = RIGHT(@param, LEN(@param) - CHARINDEX(@splitChar, @param))
        ELSE
            SELECT @val = @param, @param = SPACE(0)
        INSERT INTO @T VALUES (@val)
    END
    RETURN
END

III) A stored procedure to query the first table with the second one:

CREATE PROCEDURE [dbo].[GetCommonSearchResultForTabDelimitedStrings]
    @keyWords varchar(255) = ''
AS
BEGIN
    SET NOCOUNT ON;
    SELECT clr_id, keyWord, fieldType
    FROM dbo.CommonSearch
    WHERE keyWord IN (SELECT * FROM dbo.SplitString(@keyWords, ''))
END
So, how can I use a LIKE statement in the IN clause of the last query? Furthermore, I was wondering if this is the best solution to go for. Are there any better methods? Got any tuning tips to squeeze out an extra second?
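On the LIKE part: one option is to turn the IN into a JOIN against the split results, which allows a LIKE pattern per keyword. A sketch reusing the objects above (inside the procedure, where @keyWords is declared):

SELECT cs.clr_id, cs.keyWord, cs.fieldType
FROM   dbo.CommonSearch cs
JOIN   dbo.SplitString(@keyWords, '') kw
  ON   cs.keyWord LIKE '%' + kw.keyWord + '%'

Be warned that the leading % makes the predicate non-sargable, so it will scan rather than seek on any index on keyWord; that will probably cost time rather than save it.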
Keshka writes "I'm using function fnParseString form http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=76033 in some of my sp.
it's very helpfull, but my question is if there is a way to split variable into columns if I don't know how many columns I'll have? It could be 1 or 2 or 3 and etc.
I have table1 and table2. In table1 I have a column of numbers, numbers1. In table2 I have a column of numbers, numbers2. I'd like to select the highest number represented in either column.

Example:

table1:
column1
-------
345
565
643
656
555
676

table2:
column2
-------
3456
5645
56456
4564
56456

The number I would want would be 56456, since it's the largest number out of all combined. How can I get that number with one select statement?
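A sketch with one select, using UNION ALL in a derived table so both columns are treated as one list:

SELECT MAX(n) AS highest
FROM (SELECT column1 AS n FROM table1
      UNION ALL
      SELECT column2 FROM table2) AS x

For the data above this returns 56456.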
We are trying to use the Import/export wizard to load a text file to a SQL Server 2005 database. The input file has a variable number of columns per row. For example, the first row has 3 columns, the second has 7, the third has 3, etc. The number of columns varies from 2 to 9 in the input file. The columns are separated by an uptick (`) and the rows are terminated by {CR}{LF}. We are using code page 1252. On processing, the wizard reads the first row (with 3 columns) ok, but then assumes all the other rows have 3 columns and parses the rows accordingly, ignoring the field and row terminators.
The process worked fine with SQL Server 2000. Is there some setting that we are missing, or some configuration on the database that we should be checking?
Hi, I need to update a number of columns in a number of tables - I just don't know how many. In this case, I am converting all varchar and nvarchar fields to lower case. The problem is that the table structure is amended over time as columns are added programmatically, so I do not know which tables have which columns, and which of those columns are varchars.
I can get a table of which fields I need to update using:
SELECT TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION, DATA_TYPE INTO tblTempLCase FROM information_schema.columns WHERE DATA_TYPE LIKE '%varchar'
and I can do the update with UPDATE tblxxx SET column = LOWER(column)
But what I don't know is how to step through my temporary table and do the updates. I could do it in ASP.NET, but that involves pushing commands and data between ASP and SQL, and would be too slow. How do I do it in SQL?
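A sketch of the kind of thing I have in mind, staying entirely inside SQL: a cursor over the temporary table plus dynamic SQL, building one UPDATE per column (this assumes tblTempLCase was populated by the SELECT above):

DECLARE @tbl sysname, @col sysname, @sql nvarchar(4000)

DECLARE c CURSOR FOR
    SELECT TABLE_NAME, COLUMN_NAME FROM tblTempLCase
OPEN c
FETCH NEXT FROM c INTO @tbl, @col
WHILE @@FETCH_STATUS = 0
BEGIN
    -- build and run: UPDATE [table] SET [col] = LOWER([col])
    SET @sql = N'UPDATE ' + QUOTENAME(@tbl)
             + N' SET ' + QUOTENAME(@col) + N' = LOWER(' + QUOTENAME(@col) + N')'
    EXEC sp_executesql @sql
    FETCH NEXT FROM c INTO @tbl, @col
END
CLOSE c
DEALLOCATE c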
I need to update a number of columns within a number of tables - I just don't know how many. In this case, I want to convert all varchar and nvarchar columns to lower-case versions of themselves. The problem is that the table structure is changed programmatically, and so at any point in time I cannot be certain which fields are in which table, or what data type they are.
I know that I can get a list of columns using:
SELECT TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION, DATA_TYPE INTO tblTempLCase FROM information_schema.columns WHERE DATA_TYPE LIKE '%varchar'
and do the update using:
UPDATE tblABCDE SET column = LOWER(column).
In ASP.NET I can pull in this temporary table using a SQL Data Adapter, and then step through the records to formulate the UPDATE statements and execute them all. However, I hope that this is possible in SQL too, so that I do not have to keep firing data/commands between ASP and SQL, as it should be quicker, and is also neater.