Column Names
Mar 19, 2004
I need a query to get all column names.
Thanks
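One simple way to do that (a minimal sketch; 'MyTable' is a placeholder for the actual table name, and INFORMATION_SCHEMA works on SQL Server 2000 and later):

-- Column names for one table, in column order.
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION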
Hey,
I'm creating a registration form.
To show the field labels I thought I would read the column names.
That's fine if a column is named "Name", "Age", etc.
But if a column is named something like [Country, Address, PostCode] then, I think, it can cause some problems. Am I right?
The first problem I thought about is changing the database in the future (from MS SQL 2000 now to MySQL, etc.).
Is this the only problem?
To solve this I'm thinking of using a table which stores the syscolumns names mapped to user-defined column names.
My system is speed critical, and doing this lookup would cost some performance.
Which way should I go?
In the case of saving column names in a table, how do I generate a safe column name from a user-specified name, which can contain special characters?
Thanks
Hi, I need to get the column names from a table. How do I do this in pure SQL? I thought about creating a stored procedure or user function that returns a string (col1name, col2name, ...), but I do not know how to count the number of columns in a specified table. Any help would be appreciated. Thanks in advance. magicxxxx
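A minimal sketch of both pieces in plain T-SQL (assuming a table named MyTable; swap in the real name):

DECLARE @cols varchar(8000)

-- How many columns the table has
SELECT COUNT(*) AS ColumnCount
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'

-- Build 'col1name,col2name,...' by concatenating into a variable
SELECT @cols = COALESCE(@cols + ',', '') + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION

SELECT @cols AS ColumnList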
Hi,
I am looking for some help with MS SQL Server. I want to access the columns of a table without using the names of the columns.
Example - SELECT table1.field[1], table1.field[2] FROM table1;
Any information to this effect is much appreciated.
cheers/- Pradeep
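T-SQL has no positional column syntax, so one workaround (a sketch for SQL Server 2005 or later; table1 and the ordinal positions are just placeholders) is to look the names up by ordinal position and build the statement dynamically:

DECLARE @col1 sysname, @col2 sysname, @sql nvarchar(max)

-- Look up the column names by their position in the table
SELECT @col1 = COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'table1' AND ORDINAL_POSITION = 1
SELECT @col2 = COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'table1' AND ORDINAL_POSITION = 2

SET @sql = N'SELECT ' + QUOTENAME(@col1) + N', ' + QUOTENAME(@col2) + N' FROM table1'
EXEC sp_executesql @sql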
I have one column name that is: description
When I write a query the word lights up in blue. I think I saw someone using [ ] around the word, but I no longer remember if this is the way to handle reserved words that have been used as column names.
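Square brackets do exactly that; for example (dbo.MyTable is just a placeholder table name):

-- Brackets tell the parser this is an identifier, not a keyword.
SELECT [description]
FROM dbo.MyTable
WHERE [description] IS NOT NULL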
I've called a result set from SQL Server using a SQL selection. I need to iterate over that entire result set (200+ columns/fields), and all I need are the random numbers contained in any of the rows/columns. I don't want to have to name each field/column and then use an "if > 0" statement. Isn't there some way to generically loop through the columns by index or something, instead of by their field names, so I can just use an integer loop to walk the dataset? I know there is; I've done it about 5 years ago. The question is: how do you do it in C#?

SqlConnection thisConn = new SqlConnection(ConfigurationManager.ConnectionStrings["SQLQuery"].ConnectionString);
SqlCommand thisCmd = new SqlCommand("Command String", thisConn);
thisCmd.CommandText = "Select * from SelectionsByCountry where [" + DropDownList1.SelectedItem.ToString() + "] > '0'";
thisConn.Open();
SqlDataReader thisReader = thisCmd.ExecuteReader(CommandBehavior.CloseConnection);
while (thisReader.Read())
{
    DropDownList2.Items.Add(thisReader["System"].ToString().Trim());
    /* There are 200+ columns left I want to walk over using a loop structure of some sort. How do I do that? */
}

- Rex
I'm looking for a query that will return all index names, the table the index is on and the columns in the index...
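One way to get that (a sketch for SQL Server 2005 and later, using the catalog views):

SELECT
    i.name AS IndexName,
    t.name AS TableName,
    c.name AS ColumnName,
    ic.key_ordinal,
    ic.is_included_column
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.name IS NOT NULL   -- skip heaps, which have no index name
ORDER BY t.name, i.name, ic.key_ordinal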
I've got a database with an unknown number of columns. Hence, the column names are also unknown. What's the easiest SQL to present the values in each column and the column headings?
Hi,
I am sure this question must have been answered somewhere, but after a lot of searching I still have not found the answer.
I have flat files without column headers (267 columns in total).
Since I have the file's description, I have created a table to house these extracts, with the columns in the same order as in the flat files.
Additionally, I have an Excel sheet containing a list of the column names, their data types and lengths, as well as their positions in the flat files.
In the old DTS, columns without headers would be mapped to the destination table's columns by their order, which worked like a breeze for me, but I cannot find a way of doing that in SSIS.
I would very much appreciate someone's assistance on this one, since I am sure there must be a better way than mapping all those columns manually (which is tedious and error prone).
Thanks in advance
I am able to get a list of columns in a table from the query I have written shown below:
select sc.name ColumnNames,st.name TableName from sys.columns sc inner join sys.tables st on sc.object_id=st.object_id
order by st.name
But I am looking for the resultset with the format below:
TableName  Columns
employee    employeeid,employeename,employeesalary
order       orderid,address,price
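One way to collapse the columns into a single comma-separated value per table (a sketch; on SQL Server 2005 and later the usual trick is FOR XML PATH, and on 2017+ STRING_AGG would do the same job):

SELECT
    st.name AS TableName,
    STUFF((SELECT ',' + sc.name
           FROM sys.columns sc
           WHERE sc.object_id = st.object_id
           ORDER BY sc.column_id
           FOR XML PATH('')), 1, 1, '') AS Columns   -- STUFF strips the leading comma
FROM sys.tables st
ORDER BY st.name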
I am trying to do a PIVOT on a query result, but the column names created by the PIVOT function are dynamic.
For example (modified from the SQL Server 2005 Books Online documentation on the PIVOT operator) :
SELECT
Division,
[2] AS CurrentPeriod,
[1] AS PreviousPeriod
FROM
(
SELECT
Period,
Division,
Sales_Amount
FROM
Sales.SalesOrderHeader
WHERE
(
Period = @period
OR Period = @period - 1
)
) p
PIVOT
(
SUM (Sales_Amount)
FOR Period IN ( [2], [1] )
) AS pvt
Let's assume the value 2 is selected for the @period parameter; the query then returns the sales by division for periods 2 and 1 (2 minus 1).
Division  CurrentPeriod  PreviousPeriod
A         400            3000
B         400            100
C         470            300
D         800            2500
E         1000           1900
What if the value of @period were changed to, say, period 4, so that it should return the sales for periods 4 and 3? Is there a way I can change the code above to still perform the PIVOT while dynamically accepting the period values 4 and 3, applying them to the column names in the first SELECT statement and to the FOR ... IN clause in the PIVOT statement?
I need a way to represent the following [2] and [1] column names dynamically, depending on the value of the @period parameter:
[2] AS CurrentPeriod,
[1] AS PreviousPeriod
FOR Period IN ( [2], [1] )
I have tried to use the @period but it doesn't work.
Thanks in advance.
Kenny
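The column names in PIVOT's IN list cannot be parameterized directly, so the usual workaround is to build the statement as dynamic SQL. A minimal sketch along the lines of the query above (same assumed table and columns as in the question; QUOTENAME builds the [4]/[3] style names):

DECLARE @period int
DECLARE @cur nvarchar(258), @prev nvarchar(258), @sql nvarchar(max)

SET @period = 4
SET @cur  = QUOTENAME(CONVERT(nvarchar(10), @period))
SET @prev = QUOTENAME(CONVERT(nvarchar(10), @period - 1))

SET @sql = N'SELECT Division,
       ' + @cur + N' AS CurrentPeriod,
       ' + @prev + N' AS PreviousPeriod
FROM (SELECT Period, Division, Sales_Amount
      FROM Sales.SalesOrderHeader
      WHERE Period IN (@p, @p - 1)) AS src
PIVOT (SUM(Sales_Amount) FOR Period IN (' + @cur + N', ' + @prev + N')) AS pvt;'

-- The period value itself is still passed as a real parameter.
EXEC sp_executesql @sql, N'@p int', @p = @period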
I am imagining something where you might pass the names of 2 stored procs (an old version and a new one), and a query to produce valid parameters. It would then fire off each proc for a set number of executions, while storing off the results in temp tables, and at the end it would do a data compare and store off performance data from the dynamic management views.
Now I know how to get the parameters for a stored procedure out of the catalogue views, but is SQL Server aware at all of the schema of the results of stored procedures that return result sets? Because I was thinking of doing something like...
INSERT INTO #datacompare(col1,col2)
EXEC mystoredprocedure
... but I can not seem to figure out how to dynamically gather the schema of the result set.
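The catalog views alone will not tell you a procedure's result-set shape. Historically people round-tripped with SET FMTONLY ON; on SQL Server 2012 and later there is a DMF that describes the first result set. A sketch, using the procedure name from the question:

-- Column names and types of the proc's first result set (SQL Server 2012+).
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(N'EXEC dbo.mystoredprocedure', NULL, 0)

That output could then be used to build the #datacompare table dynamically before the INSERT ... EXEC.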
I am trying to figure out how to pivot a temporary table. I have a table which starts with a date, but the number of columns and the column names will vary; the remaining columns are all of type INT (Date, col2, col3, col4, ..., n).
So it could look like
Date , TS-Sales, Budget , Toms sales
01-Jan-14,100,120,300
02-Jan-14,80,150,300
03-Jan-14,100,20,180
Turned to this
01-jan-14, 02-jan-14, 03-jan-14
100,80,100
120,150,20
300,300,180
Or even just the date and a SUM
What I want is to be able to sum all the columns, but without knowing the names or the number of columns up front this is a manual process. How could I automate this?
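One way to automate it is to read the temp table's column list from tempdb's catalog and build the SUM list dynamically. A sketch, assuming the temp table is called #Sales and the first column is named Date (both names are placeholders):

DECLARE @cols nvarchar(max), @sql nvarchar(max)

-- Build "SUM([col]) AS [col]" for every column except the date column
SELECT @cols = STUFF((SELECT ', SUM(' + QUOTENAME(c.name) + ') AS ' + QUOTENAME(c.name)
                      FROM tempdb.sys.columns c
                      WHERE c.object_id = OBJECT_ID('tempdb..#Sales')
                        AND c.name <> 'Date'
                      ORDER BY c.column_id
                      FOR XML PATH('')), 1, 2, '')

SET @sql = N'SELECT ' + @cols + N' FROM #Sales;'
EXEC sp_executesql @sql   -- runs in the same session, so it can see the temp table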
While looking to design a multi-column cross-tab query, I am anxious to know if there is a way to change the default names of the pivot columns. In other words, for a query like the following, is there a way to apply an AS-type alias to show some other names instead of having the four dates in the heading? Something like Month_A, Month_B?
SELECT * FROM
(SELECT
X.REP_DT,
X.CUST_ID,
X.AMOUNT_1
FROM
X) P
PIVOT (SUM(AMOUNT_1) FOR REP_DT IN ([2014-12-31], [2015-01-31], [2015-02-28], [2015-03-31])) PVT_01
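Yes: instead of SELECT *, list the pivoted columns explicitly in the outer SELECT and alias them there. A sketch using the same query:

SELECT
    CUST_ID,
    [2014-12-31] AS Month_A,
    [2015-01-31] AS Month_B,
    [2015-02-28] AS Month_C,
    [2015-03-31] AS Month_D
FROM
    (SELECT X.REP_DT, X.CUST_ID, X.AMOUNT_1 FROM X) P
PIVOT (SUM(AMOUNT_1) FOR REP_DT IN ([2014-12-31], [2015-01-31], [2015-02-28], [2015-03-31])) PVT_01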
I need to find column names, if any, that come after two "check" columns.
Scenario: I have a database with approximately 400-1500 tables, depending on the installation of the software. The software is structured so that, when it synchronizes the SQL database, it creates all the columns, e.g. custacc, custname etc., and then it always puts in two check columns, "CheckOne" and "CheckTwo"; these two columns have to be the last two. In 99.9% of cases this works fine, but sometimes, if the users create a new field in the software, the new field "lands" behind the two check fields when it synchronizes, which is not good.
So what I am after is a script which can run through all user tables, tell me if there are columns after the two check fields, and list those tables, if any.
Any help would be greatly appreciated.
Cheers
Henrik.
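A sketch of such a script using the catalog views (it assumes the check columns are literally named CheckOne and CheckTwo, and uses column_id as the column order):

SELECT t.name AS TableName, c.name AS ColumnName, c.column_id
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
WHERE c.column_id > (SELECT MAX(c2.column_id)
                     FROM sys.columns c2
                     WHERE c2.object_id = t.object_id
                       AND c2.name IN ('CheckOne', 'CheckTwo'))
ORDER BY t.name, c.column_id

Tables that do not contain the two check columns simply drop out, because the subquery returns NULL for them.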
Hello,
I've been able to set up transactional replication between 2 databases for some tables that only have different table names. Now there are still some tables that not only have different names but also different column names; can this be done in the wizard?
Thx
I'm using the Import/Export wizard to export the top 5 lines from a MS SQL table into a fixed-format ("ragged") file. But I want the first record to contain the column names of the exported fields, so I selected the "Column names in the first data row" option of the "Choose a Destination" box. When I run the package I get:
>>>
· Information 0x402090dc: Data Flow Task: The processing of file "C:arkingdogExportWithheader.txt" has started (SQL Server Import and Export Wizard)
· Error 0xc0202095: Data Flow Task: Failed to write out column name for column "CustomerID".
(SQL Server Import and Export Wizard)
Error 0xc004701a: Data Flow Task: component "Destination - ExportWithheader_txt" (49) failed the pre-execute phase and returned error code 0xC0202095.
(SQL Server Import and Export Wizard)
>>>
When I de-select the "Column names" option, the package works fine. Other than doing it by hand, how can I get the column names into the output file?
TIA,
Barkingdog
I am using CROSS APPLY instead of UNPIVOT to unpivot more than one column. I am wondering if I can dynamically replace the column names based on different tables. The example code that I have working is based on the "Allergy" table. I have thirty more specialty tables to go. I'll show the working code first, then an example of another table's columns to show the differences:
select [uplift specialty], [member po],[practice unit name], [final nomination status]
,[final uplift status], [final rank], [final uplift percentage]
,practiceID=row_number() over (partition by [practice unit name] order by Metricname)
,metricname,Metricvalue, metricpercentilerank
[code]....
Rheumatology Table:The columns that vary start with "GDR" and [GDR Percentile Rank] so I'm just showing those:
GDR (nvarchar(255), null)
GDR Percentile Rank (nvarchar(255), null)
GDR PGS (nvarchar(255), null)
GDR Rank Number (nvarchar(255), null)
PMPM (nvarchar(255), null)
[Code] ....
These are imported from an Excel workbook, which is why all the column names have spaces for now.
I have a situation where I want to load an Excel file dynamically, and the Excel files have different columns or even different worksheet names. How could I approach this? I believe there's no way to modify the metadata (specifically the mapping) in the data flow.
I am designing a package to export staging tables into a flat file. The names of the tables will be TableAStaging_YYYYMM and TableBStaging_YYYYMM. As you can see, the names of the tables will change each month.
The flat files will have similar naming: C:MyPathFlatFileTableAStaging__YYYYMM and C:MyPathFlatFileTableAStaging__YYYYMM. I want to run the package as a SQL job in two steps, one for each table. I need to dynamically pass the table names and file names (together with the path) to the IS package.
As part of a recent requirement I have to export Chinese/Singaporean names in a CSV file. The data in the tables is NVARCHAR(256).
I am using a Flat File connection manager where all the columns from the table are exported as NVARCHARs. My understanding was that the Chinese/Singaporean names would blend seamlessly with NVARCHARs in place, but they get garbled when pushed to the CSV.
Here is the connection manager setup
There are a lot of suggestions to fix this by copying/pasting to a Notepad file and changing the formatting, but I can't do that since the file is generated by a scheduled SSIS package. How can I tweak the process to fix the issue?
Hi
I was wondering if anyone has an idea of how we could find the table names and column names of the tables in our SQL Server database dynamically at runtime, given our connection string? Please let me know.
Thanks.
I'm going to be setting up DB mirroring between two SQL Server 2005 boxes. Since these are on two different servers, can the instance names be the same? Is there any reason NOT to do so if the mirror server is going to be used exclusively for DB mirroring?
For example: if my primary DB is located on SERVER1\INSTANCE1, can the mirror be SERVER2\INSTANCE1, or do the instance names have to be different even though they're on different boxes?
Thanks!
I'm trying to do an update query that looks like this:
UPDATE
PAEMPLOYEE
SET PAEMPLOYEE.LOCAT_CODE = EMPLOYEE.PROCESS_LEVEL
FROM
PAEMPLOYEE A
JOIN EMPLOYEE B ON A.EMPLOYEE = B.EMPLOYEE
It's erroring out on the Employee prefix B.EMPLOYEE saying:
..."does not match with a table name or alias name used in the query"
Is it wrong or will it cause problems to have a field name the same as the table name?
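Having a column named the same as its table is legal, just confusing. The usual cause of that error is mixing the real table name with an alias: once PAEMPLOYEE is aliased as A in the FROM clause, the UPDATE and SET have to use the alias too. A sketch of the corrected statement, using the same tables and columns as above:

UPDATE A
SET A.LOCAT_CODE = B.PROCESS_LEVEL
FROM PAEMPLOYEE A
JOIN EMPLOYEE B ON A.EMPLOYEE = B.EMPLOYEE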
Hello there. I'm trying to populate a drop down box with a list of all available instances of SQL server (express or not) available on a network. To list all instances I'm using EnumAvailableServers() in SmoApplication.
The issue is that the Instance column in the returned data table is always empty, and both the Name and the Server columns contain the same string: the name of the computer the SQL server is installed on. Locally I have SSE installed, and the instance name is <computer_name>\SQLEXPRESS; however, using SMO I can't seem to get either the full instance name or just the instance name. I can't assume SQLEXPRESS, since there may be more than one installed.
Alternatively, I used EnumRegisteredServers() in SmoApplication.SqlServerRegistrations and that, while returning instance information (which can be retrieved from the RegisteredServer objects in the returned collection), doesn't always return the remote SQL servers in the list.
Any idea how can this be done? Maybe I'm going at it the wrong way, maybe there's another class / method I need to use to retrieve all instances?
Thank you
Hello:
I am running into an issue with RS2k PDF export.
Case: Exporting Report to PDF/Printing/TIFF
Report: contains 1 table with 19 columns. 1 column is static; the other 18 are visible at the user's discretion. The report, when printed/exported to PDF, naturally spans 2 pages, 16 columns on the first page and 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report compiles and the viewable version is perfect, and the Excel export is perfect. The PDF export, however, on the first page causes every fifth column, starting with the last column that was hidden, to be expanded to take up additional width. On the spanned page, it renders the first column on that page correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show with the last column expanded to take up the same width the original 2 columns would have taken up, plus its own width.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page-size selection available to the client. There are far too many combinations of what the user can elect to show or hide to put together different tables to show and hide on the same report to remove this effect.
Any help or suggestion on this issue would be appreciated
Hi all, can anyone help me get this information?
I am using SQL Server 2000 and I want to know:
1 - all names of servers
2 - all database names (catalogs) on a specific server, and only those which the user created
3 - the user name and password for each server
thanks.
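For the first two, a rough sketch using SQL Server 2000-era commands (note that passwords cannot be read back; the server only stores hashes):

-- Servers broadcasting on the network (run from a command prompt, not T-SQL):
--   osql -L
-- Linked/remote servers known to this instance:
EXEC sp_helpserver
-- User-created databases on this instance (dbid <= 4 are the system databases):
SELECT name FROM master.dbo.sysdatabases WHERE dbid > 4
-- Logins on this instance (names only, no passwords):
SELECT name FROM master.dbo.syslogins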
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data and 2 out of 6 have JSON data, so the users requested to have those 2 JSON columns parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second JSON column has 7 key/value pairs). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first column (with JSON data) in my query (with CROSS APPLY), I get the right data back, but I also get 8 additional rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: How do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B
If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. My second question: How do I get around this error?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B, fnSplitJson2(A.ITEM6,NULL) C
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
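For the second question, a sketch of the usual fix: use a second CROSS APPLY instead of the comma, since an old-style comma join cannot see A's columns, which is what the binding error is complaining about:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C

For the first question, getting one row with 19 columns would additionally require pivoting the key/value rows returned by the function (or a version of the function that returns one row per JSON string).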
First, I'd like to figure out the count of how many rows are not in the Current Edition and have the following.
Second, I'd like to be able to select the primary keys of all the rows involved.
Third, I'd like to select all the primary keys of just the rows not in the current edition.
I'm not really sure how to describe this without making a dataset:
CREATE TABLE [Project].[TestTable1](
[TestTable1_pk] [int] IDENTITY(1,1) NOT NULL,
[Source_ID] [int] NOT NULL,
[Edition_fk] [int] NOT NULL,
[Key1_fk] [int] NOT NULL,
[Key2_fk] [int] NOT NULL,
[Code] .....
GROUP BY fails me because I only want the groups where the Edition_fk values don't match...
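A hedged sketch of one way to read the requirement, assuming a @CurrentEdition parameter and that rows are grouped by Source_ID (both are assumptions, since the full table script is elided above):

DECLARE @CurrentEdition int
SET @CurrentEdition = 1   -- placeholder value

-- 1) count of rows that are not in the current edition
SELECT COUNT(*) AS NotCurrentCount
FROM Project.TestTable1
WHERE Edition_fk <> @CurrentEdition

-- 2) primary keys of every row in a group that contains at least one non-current row
SELECT t.TestTable1_pk
FROM Project.TestTable1 t
WHERE EXISTS (SELECT 1 FROM Project.TestTable1 x
              WHERE x.Source_ID = t.Source_ID
                AND x.Edition_fk <> @CurrentEdition)

-- 3) primary keys of just the rows not in the current edition
SELECT TestTable1_pk
FROM Project.TestTable1
WHERE Edition_fk <> @CurrentEdition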
Here is my requirement; I'm not sure if this is possible. I'm creating a table called master like col1, col2, col3, col4, col5... where col1 and col2 are updatable (this can be done easily).
Col3 and col4 are columns in another table, but can these be just read-only? Is this possible? It is possible with a view, but that's not friendly with SharePoint CRUD... Col5 is a computed column of col2 and col5? If the above step can be done then this can surely be done too, I guess.
I have a query which retrieves a varying number of columns, from 5 to 15, based on the input parameter passed. I am using a table to map all these columns. If a column is not retrieved in the dataset (I am not talking about NULL data, but the column being completely missing), then I want to hide it in my report.
Can I do that?
Any reply showing me the right way is appreciated.
-Thanks,
Digs
Hi, all experts here,
Thanks a lot for your kind attention.
As I am creating the non-clustered indexes for the tables, I don't quite understand how it really matters whether I put the columns in the index key columns or put them into the included columns of the index.
I am really confused about that, and I am looking forward to hearing from you. Thank you very much again for your advice and help.
With best regards,
Yours sincerely,
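For reference, the syntactic difference (a sketch on a hypothetical dbo.Orders table): key columns drive the sort order of the index and can be used for seeks, while included columns are only carried at the leaf level to make the index covering.

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)      -- key columns: searchable/sortable
INCLUDE (OrderTotal, ShipCity)             -- included columns: stored at the leaf level only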
Here's another one of my bitchfests about stuff which annoys the *** out of me in SSIS (and no such problems in DTS):
Do you ever wonder how easy it was to set up a text-file-to-db transform in DTS? I had no problems at all. In SSIS I spent half a day trying to figure out how to get proper column data types for a text file. Of course MS was brilliant enough to add a "Suggest Types" feature to the text file connection manager, BUT guess what: it samples ONLY 1000 rows. So I tried to change that number to 50000 and clicked OK, BUT MS changed it back to 1000 without me noticing, SO NO WONDER later on some of the data types did not match. And boy, what fun it is to change the source columns after you have created a few transforms.
This s**t just breaks... So a word about Derived Columns: pretty useful feature, heh? It's not f***ing useful if it DELETES SOME of the code itself after there have been changes in the dataflow. I can't say how pissed off I am that SSIS went ahead and deleted columns from the flow and messed up derived columns just because the lineage IDs don't match.
Metadata: it would be useful if you could change it and refresh it. I'm just sick and tired of it showing warnings and errors when there's nothing wrong, so after a change I need to double-click all my transforms so that those red and yellow boxes disappear.
Oh, and why I passionately dislike Derived Columns: so you create new fields based on some data, you do some stuff, combine multiple columns into one, but you have no way of saying "remove the columns from the pipeline". Why would you need it? Well, if you have 50K+ rows with 30+ columns then it's EXTRA useless memory overhead for your package.
Hopefully one day I will understand how SSIS works (not an easy task, I say). I might then be able to spend more time on development and less time on my bitchfest. UNTIL then --> Another Day, Another Hassle with SSIS