SQL Server 2012 :: Store Tabular Data Having Parenthesis As Column Name Into XML
Nov 9, 2014
We are storing changed table data in XML format for auditing purposes; the functionality is already in place. We use the FOR XML PATH clause to convert the relational table data into XML.
Now one table has a column whose name contains '(' , for example ColumnName(). In this case the conversion with the FOR XML clause fails with the error:
Column name 'columnName()' contains an invalid XML identifier as required by FOR XML; '(' (0x0028) is the first character at fault.
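One workaround (a sketch only; the table and other column names here are hypothetical) is to alias the offending column in the SELECT list so that FOR XML gets a valid XML identifier to use as the element name:

-- Sketch: dbo.SourceTable and [OtherColumn] are placeholder names.
-- The alias removes the parentheses, giving FOR XML a valid element name.
SELECT [OtherColumn],
       [ColumnName()] AS [ColumnName]
FROM dbo.SourceTable
FOR XML PATH('Row'), ROOT('Rows');

The element name in the stored audit XML then differs slightly from the physical column name, which may need to be accounted for when the audit data is read back.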
We have a recurring issue with a columnstore index. We have a procedure that performs two activities - Switch and Reverse Switch:
Switch:
1. Fetch the partitions that need to be switched
2. Switch the data from the main table to the switch table
3. Disable the columnstore index on the switch table
SSIS package:
4. Load data into the switch table (insert/update)
Reverse Switch:
5. Re-enable (rebuild) the columnstore index on the switch table
6. Switch the data back from the switch table to the main table
Issue: sometimes the columnstore index does not get disabled, and the package fails with an error telling us to disable the columnstore index before loading data.
If we re-run the procedure, the column store gets disabled.
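One defensive option (a sketch, not the actual procedure; the table and index names are placeholders) is to disable the columnstore index and then verify in sys.indexes that the disable actually took effect before the SSIS load starts, failing fast if it did not:

-- Hypothetical names: dbo.SwitchTable / NCCI_SwitchTable stand in for the real objects.
ALTER INDEX NCCI_SwitchTable ON dbo.SwitchTable DISABLE;

-- Verify the disable took effect before the SSIS package loads data.
IF EXISTS (SELECT 1
           FROM sys.indexes
           WHERE object_id = OBJECT_ID('dbo.SwitchTable')
             AND name = 'NCCI_SwitchTable'
             AND is_disabled = 0)
BEGIN
    RAISERROR('Columnstore index is still enabled on the switch table.', 16, 1);
END;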
I am trying to learn how to build an SSAS tabular model. While following a tutorial I need to add a column to an existing table, but for some reason the Add Column option is greyed out (Insert Column does not appear in the other menu either).
I created a NONCLUSTERED index on a table, but my report was still slow, so I also created a nonclustered columnstore index on the same table. My question is: if a table has both a rowstore index and a columnstore index on the same columns, which index will a query use?
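The optimizer makes a cost-based choice between the two, so the only reliable answer comes from the execution plan. A rough sketch for comparing them (table and index names are assumptions):

-- Hypothetical names: dbo.FactSales with rowstore index IX_Row and
-- nonclustered columnstore index NCCI_Col on the same columns.
-- Run both with "Include Actual Execution Plan" to compare costs and the optimizer's pick.
SELECT SUM(Amount)
FROM dbo.FactSales WITH (INDEX (NCCI_Col))   -- force the columnstore index
WHERE SaleDate >= '20140101';

SELECT SUM(Amount)
FROM dbo.FactSales WITH (INDEX (IX_Row))     -- force the rowstore index
WHERE SaleDate >= '20140101';

As a rule of thumb the columnstore index tends to win for large scans and aggregations, while the rowstore index wins for highly selective seeks, but the plan for the actual report query is what settles it.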
I have created a tabular model with VS 2013 Ultimate. I deployed the model several times and everything works great with Power Pivot.
One of the dimension tables, "Age", has an integer field that is null. I updated that column in SSMS to have values, processed the table in VS and the "Age" column now has integer values.
I deployed the model again, connected with Power Pivot, and the "Age" column still has no data. I refreshed the Excel spreadsheet several times, started a new one, and deployed several more times, but there is still no data in the "Age" column even though it clearly exists in VS.
I opened SSMS against the Analysis Services instance and queried the table in question; it also shows no data in the Age column, even though the data is there in the source table and in the VS model.
I can't find anything different about this column than others that contain data. How can this be happening?
A DB2 stored procedure returns two result sets when executed from SSMS over a linked server. Is there a simple way to save the two result sets into two different tables?
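If both result sets happen to share the same column layout, INSERT ... EXEC over the linked server captures them into one table in a single call; otherwise the procedure usually has to be called once per result set (or split on the DB2 side). A sketch, assuming a linked server named DB2LINK with RPC Out enabled and a hypothetical procedure name:

-- INSERT ... EXEC appends every result set the call returns, so this is only clean
-- when the sets have the same structure as the target table.
INSERT INTO dbo.ResultTable1 (Col1, Col2)
EXEC ('CALL MYSCHEMA.MYPROC(?)', 123) AT DB2LINK;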
We have a table with 10 columns, each of datatype int and nullable. When saving a row, is it better to insert 0 into a column or just insert NULL?
When we use partition switching to load data into a table, can we rebuild the indexes for a specific partition only, so that we don't need to rebuild them for the complete table? Is this possible?
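Yes, for aligned partitioned indexes a single partition can be rebuilt on its own. A minimal sketch (object names and the partition number are placeholders):

-- Rebuild only the partition that just received the switched-in data.
ALTER INDEX IX_MainTable_Key
ON dbo.MainTable
REBUILD PARTITION = 5;   -- 5 = the partition number that was loaded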
I'd like to create a table that will store different order items. Several order items make up a single order. Order items can have 0 or more children (the depth will never be greater than one). Order items can have up to 150 attributes/values. I think this should be done with an XML column rather than an EAV-style model. My table structure currently looks like this:
* child_order_item_id (PK)
* parent_order_item_id (FK to child_order_item_id)
* order_id (FK to Order table)
* product_id (FK to Product table)
* price
* attribute_XML
How should my attribute_XML look, and how do I validate the XML?
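One way to handle the validation part (a sketch; the element and attribute names below are a guessed shape, not a specification, and the table name is hypothetical) is to keep attribute_XML as a flat list of name/value elements and bind the column to an XML schema collection so SQL Server rejects malformed documents on insert:

-- Sketch only: validates documents of the form
-- <attributes><attribute name="colour">red</attribute>...</attributes>
CREATE XML SCHEMA COLLECTION dbo.OrderItemAttributesSchema AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="attributes">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="attribute" maxOccurs="unbounded">
            <xs:complexType>
              <xs:simpleContent>
                <xs:extension base="xs:string">
                  <xs:attribute name="name" type="xs:string" use="required"/>
                </xs:extension>
              </xs:simpleContent>
            </xs:complexType>
          </xs:element>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>';
GO
-- Bind the (hypothetical) column to the schema collection; existing rows must already validate.
ALTER TABLE dbo.OrderItem
    ALTER COLUMN attribute_XML XML (dbo.OrderItemAttributesSchema);

A lighter-weight alternative is to leave the column untyped and validate in the application or in a trigger, if full schema validation feels too rigid.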
I just joined a bank-related IT department where the existing team has been building a new application for about a year, with the database designed on SQL Server 2012. The datatype of the amount (currency-related) columns differs between tables: in some it is decimal, in others something else. I want to suggest they use the Money datatype for the currency-related columns. My question is: is Money the correct choice, or should I leave this alone?
I have system ID information in the table system_ids, along with product IDs. The system_id_information column contains a lot of data, but I am looking for two strings in that data to pull into two separate columns. Details below.
Database versions: MS SQL 2008/2012
Table name: system_ids
Column: system_id_information
sample data from system_id_information column
######################################## <obj xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:vim25" versionId="5.5" xsi:type="ArrayOfHostSystemIdentificationInfo"><HostSystemIdentificationInfo xsi:type="HostSystemIdentificationInfo"><identifierValue> unknown</identifierValue><identifierType><label>Asset Tag</label><summary>Asset tag of the system</summary><key>AssetTag</key></identifierType>
[Code] .....
I am looking for the output of two columns, shown below:
product_id   snumber
654081-B21   MXQ43905SW
For the serial number these markers are common to every row:
before string: HostSystemIdentificationInfo"><identifierValue>
after string: </identifierValue><identifierType><label>Service tag
The snumber always sits between the before and after strings; the number of characters in snumber varies, and the overall length of a row's data varies as well.
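A rough sketch using only string functions (table and column names follow the post; everything else is illustrative): locate the 'Service tag' marker first, then take everything back to the nearest '<identifierValue>' tag, which copes with the varying length of snumber:

-- Find the after-marker, cut the text off there, then read backwards to the last
-- '<identifierValue>' tag; whatever is left between the two is the serial number.
SELECT
    s.product_id,
    LTRIM(RTRIM(RIGHT(p.prefix,
        CASE WHEN p.pos_rev > 0 THEN p.pos_rev - 1 ELSE 0 END))) AS snumber
FROM dbo.system_ids AS s
CROSS APPLY (SELECT CHARINDEX('</identifierValue><identifierType><label>Service tag',
                              s.system_id_information) AS end_pos) AS e
CROSS APPLY (SELECT LEFT(s.system_id_information,
                         CASE WHEN e.end_pos > 0 THEN e.end_pos - 1 ELSE 0 END) AS prefix) AS c
CROSS APPLY (SELECT c.prefix AS prefix,
                    CHARINDEX(REVERSE('<identifierValue>'), REVERSE(c.prefix)) AS pos_rev) AS p;

Rows that do not contain the 'Service tag' marker simply come back with an empty snumber.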
All the column names in upper case are actually symptom names, and those columns hold the values {NULL, 1, 2, 3, 4, 5}. Both pieces of information belong in columns of their own, so the normalized structure should look like this:
CREATE TABLE Symptom (
    PatientID   INT NOT NULL,
    Cycle       TINYINT NOT NULL,
    SymptomName VARCHAR(20) NOT NULL,  -- from the source column *name*
    Grade       TINYINT NOT NULL,      -- from the value in the column with the uppercase name
    PRIMARY KEY (PatientID, Cycle, SymptomName)
);
I can untwist the repeating groups with the code I borrowed from Kenneth Fisher's article [ here ], but the part I'm having a harder time with is grabbing the information that's still left in the column name and integrating it into the solution...
I can retrieve all the column names that are in uppercase using this:
DECLARE @db_id int;
DECLARE @object_id int;
SET @db_id = DB_ID(N'SCRIDB');
SET @object_id = OBJECT_ID(N'SCRIDB.dbo.BadTox');

SELECT name AS column_name
     , column_id AS col_order
FROM sys.all_columns
WHERE name = UPPER(name) COLLATE SQL_Latin1_General_CP1_CS_AS
  AND object_id = @object_id;
but I can't figure out how to work it into this procedure (which I built by mimicking Kenneth Fisher's article):
ALTER PROC [dbo].[UnpivotMaxGradeUsingCrossApply]
AS
SELECT PatientID
     , Toxicity
     , MAX(Grade) AS MaxGrade
[code]....
The problem is that I need to extract the column names (where ToxicityName[n] would be). I can do that by querying the sys.all_columns view, but I can't figure out how to integrate the two pieces. About the only thing I have even dreamed up is to build the VALUES(...) statements dynamically from the values returned by the system view.
So how do I get both the value from the ToxicityName[n] column and the column name into my final data query?
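One way to glue the two pieces together (a sketch that mimics the same CROSS APPLY ... VALUES pattern; object names come from the post and the grouping columns follow the normalized structure above, so they may need adjusting) is to build the VALUES list from sys.all_columns as dynamic SQL and execute it:

-- Build "('COLNAME', [COLNAME]), ..." from the uppercase column names, then run the unpivot.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STUFF((
        SELECT ', (''' + name + ''', [' + name + '])'
        FROM sys.all_columns
        WHERE object_id = OBJECT_ID(N'dbo.BadTox')
          AND name = UPPER(name) COLLATE SQL_Latin1_General_CP1_CS_AS
        ORDER BY column_id
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '');

SET @sql = N'
SELECT PatientID, Cycle, ca.Toxicity, MAX(ca.Grade) AS MaxGrade
FROM dbo.BadTox
CROSS APPLY (VALUES ' + @cols + N') AS ca (Toxicity, Grade)
WHERE ca.Grade IS NOT NULL
GROUP BY PatientID, Cycle, ca.Toxicity;';

EXEC sys.sp_executesql @sql;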
I am looking to store different forms and their data in our database. We have several different forms, each containing different information. I am looking for approaches to modelling this table structure.
Also, I need to make sure the table structure will allow for new forms.
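One commonly used shape for this kind of requirement (a sketch only; every name below is hypothetical) is a form-definition pair of tables plus a submission pair, so a brand-new form is just new rows in Form and FormField rather than a schema change:

-- Form/FormField describe each form; FormSubmission/FormSubmissionValue hold the filled-in data.
CREATE TABLE dbo.Form (
    FormID   int IDENTITY(1,1) PRIMARY KEY,
    FormName varchar(100) NOT NULL
);

CREATE TABLE dbo.FormField (
    FormFieldID int IDENTITY(1,1) PRIMARY KEY,
    FormID      int NOT NULL REFERENCES dbo.Form (FormID),
    FieldName   varchar(100) NOT NULL,
    DataType    varchar(20)  NOT NULL   -- e.g. 'text', 'int', 'date'
);

CREATE TABLE dbo.FormSubmission (
    FormSubmissionID int IDENTITY(1,1) PRIMARY KEY,
    FormID           int NOT NULL REFERENCES dbo.Form (FormID),
    SubmittedAt      datetime2 NOT NULL DEFAULT SYSDATETIME()
);

CREATE TABLE dbo.FormSubmissionValue (
    FormSubmissionID int NOT NULL REFERENCES dbo.FormSubmission (FormSubmissionID),
    FormFieldID      int NOT NULL REFERENCES dbo.FormField (FormFieldID),
    FieldValue       nvarchar(max) NULL,
    PRIMARY KEY (FormSubmissionID, FormFieldID)
);

Storing each submission as an XML document is an alternative worth weighing when the per-form fields vary a lot and are rarely queried individually.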
I have created some dynamic SQL to check a temporary table (which is itself created on the fly) for any columns that contain data. If a column does, its name is added to the dynamic SQL string; if not, it is excluded. It looks like this:
IF (SELECT SUM(CASE WHEN [Sat] IS NULL THEN 0 ELSE 1 END) FROM #TABLE) >= 1
BEGIN
    SET @OIL_BULK = @OIL_BULK + '[Sat]' + ',';
END
However, I am currently running this over 230+ columns on large tables (1.3 million rows) and it is quite slow. How can I dynamically build a SQL script that selects only the columns containing data, in a faster way? Unfortunately it has to be done on the fly because the temporary table is created on the fly.
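The slow part is that each IF statement scans the table again, so 230 columns means 230 scans. A sketch of a single-scan alternative (the temp table name follows the post; everything else is illustrative): build one SELECT that counts the non-NULL values for every column at once, then use that single result row to decide which column names to append:

-- One pass over the temp table: COUNT(col) ignores NULLs, so a 0 means "column is empty".
DECLARE @check nvarchar(max);

SELECT @check = N'SELECT ' + STUFF((
        SELECT N', COUNT([' + name + N']) AS [' + name + N']'
        FROM tempdb.sys.columns
        WHERE object_id = OBJECT_ID('tempdb..#TABLE')
        ORDER BY column_id
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, N'')
    + N' FROM #TABLE;';

EXEC sys.sp_executesql @check;

The single result row can be captured with INSERT ... EXEC into a small work table and inspected there to build up @OIL_BULK.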
I need to change the datatype of an existing column that holds a huge amount of data.
I'm performing the steps below:
1. Create a new column with the correct datatype in the same table
2. Copy the data into the new column
3. Drop the indexes on the old column
4. <<<>>> (the existing column also has many dependent stored procedures, and I do not wish to drop them)
5. Rename the existing column to xxx
6. Rename the new column to the correct column name
7. Drop the old column
8. Recreate the required indexes
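A sketch of how the rename steps might look (all names and the target type are placeholders): because the final column ends up with the original name, the dependent stored procedures do not have to be dropped, though they should still be recompiled and tested afterwards:

ALTER TABLE dbo.BigTable ADD NewCol bigint NULL;          -- step 1: new column, correct type
GO
-- step 2: copy in batches to keep the log small; re-run (or loop) until 0 rows are updated
UPDATE TOP (50000) t
SET    NewCol = OldCol
FROM   dbo.BigTable AS t
WHERE  NewCol IS NULL AND OldCol IS NOT NULL;
GO
EXEC sp_rename 'dbo.BigTable.OldCol', 'OldCol_bak', 'COLUMN';   -- step 5: park the old column
EXEC sp_rename 'dbo.BigTable.NewCol', 'OldCol', 'COLUMN';       -- step 6: take over the name
GO
-- steps 7-8: drop dbo.BigTable.OldCol_bak and recreate the required indexes afterwards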
Currently I have a column that holds multiple postcodes in one value, separated by the "/" character, alongside the corresponding location data. What I need to do is split these postcode values into separate rows while keeping their corresponding location data.
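On SQL Server 2012 (which has no STRING_SPLIT) one common approach is an XML-based split. A sketch, with assumed table and column names:

-- Turn 'AB1 2CD/EF3 4GH' into <p>AB1 2CD</p><p>EF3 4GH</p>, then shred it into rows.
SELECT
    loc.LocationID,
    loc.LocationName,
    LTRIM(RTRIM(x.part.value('.', 'varchar(20)'))) AS Postcode
FROM dbo.Locations AS loc
CROSS APPLY (SELECT CAST('<p>' + REPLACE(loc.Postcodes, '/', '</p><p>') + '</p>' AS xml) AS doc) AS c
CROSS APPLY c.doc.nodes('/p') AS x(part);

This assumes the postcode values never contain XML-special characters such as & or <, which is safe for postcodes.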
I recently set up a SQL 2012 FCI with a NetApp fileshare to store the data files. The install worked just fine, but I can't run an integrity check for any of my databases. Whenever I try, I get these error messages:
Msg 1823, Level 16, State 2, Line 1
A database snapshot cannot be created because it failed to start.
Msg 1823, Level 16, State 8, Line 1
A database snapshot cannot be created because it failed to start.
Msg 5120, Level 16, State 104, Line 1
Unable to open the physical file "path-to-fileshare\MSSQL11.MSSQLSERVER\MSSQL\DATA\model.mdf:MSSQL_DBCC12". Operating system error 1: "1(Incorrect function.)".
Msg 7928, Level 16, State 1, Line 1
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
The error message suggests SQL had a problem creating the snapshot; I have already checked through some NetApp documentation on configuring SMB 3.0 for SQL Server.
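One workaround while the snapshot/SMB side is being investigated (the database name is a placeholder) is to skip the internal snapshot altogether and let DBCC take short-term locks instead, accepting that it blocks concurrent modifications while it runs:

-- TABLOCK makes CHECKDB use locks rather than an internal database snapshot.
DBCC CHECKDB (N'YourDatabase') WITH TABLOCK, NO_INFOMSGS;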
I have a question about partitions, in both a SQL Server table and a Tabular Model. I started using Tabular Models recently.
I need to partition a table that collects daily rows for different clients.
The natural partition key is a combination of clientID+dateID (something like CL-YYYYMMDD)
I created a configuration table with a primary key PartitionID IDENTITY(1,1) that also contains the fields clientID and dateID.
Every day I add a new row to it and get a PartitionID for the new client and date.
Then I created a partitioned fact table using PartitionID as the partitioning column, with the corresponding partition function and partition scheme.
The daily client data is inserted in the partitioned table using the partitionID
Everything works fine, and the data are loaded correctly into the partitioned fact table.
Then I created a Tabular Model where the fact table is the partitioned table, and I created tabular model partitions using something like "select <field list> from PartitionedTable where partitionID = <partitionID>"
In this way, every day I load partitioned data in both sql server and tabular model. I have two dimensions, client and calendar
Now my question is: when I browse the Tabular Model and select a specific date and client from the dimensions, is the partitioning on PartitionID actually being used?
Or should the tabular model partition query be something like "select <field list> from PartitionedTable where clientID = <clientID> and dateID = <dateID>"? In that case, would the partitioning on PartitionID still be used? How can I check this?
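Partition elimination only happens when the predicate is on the partitioning column itself, so a filter on clientID/dateID alone will not use it. A sketch of how to keep the elimination and how to check it (object names and values are placeholders):

-- Resolve clientID/dateID to a PartitionID first, then filter on the partition column.
DECLARE @clientID int = 42, @dateID int = 20141109;   -- placeholder values
DECLARE @pid int;

SELECT @pid = PartitionID
FROM dbo.PartitionConfig
WHERE clientID = @clientID AND dateID = @dateID;

SELECT *                                  -- the real field list goes here
FROM dbo.PartitionedTable
WHERE PartitionID = @pid;                 -- predicate on the partition column -> elimination

-- Verify via "Actual Partition Count" in the actual execution plan, or see how rows map:
SELECT $PARTITION.YourPartitionFunction(PartitionID) AS partition_number,
       COUNT(*) AS row_count
FROM dbo.PartitionedTable
GROUP BY $PARTITION.YourPartitionFunction(PartitionID);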
I am trying to create a sample table in Azure SQL Data Warehouse, but it gives me a syntax error: Incorrect syntax near the keyword 'CLUSTERED'.
CREATE TABLE [dbo].[FactInternetSales]
(
    [ProductKey] int NOT NULL
  , [OrderDateKey] int NOT NULL
  , [CustomerKey] int NOT NULL
  , [PromotionKey] int NOT NULL
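In Azure SQL Data Warehouse the table geometry goes into a WITH clause, and inline PRIMARY KEY ... CLUSTERED constraints (as used in the on-premises AdventureWorks script this appears to come from) are not supported there, which is a likely source of the 'CLUSTERED' error. A sketch of the same table in the Azure SQL DW form (the distribution column is just an example):

CREATE TABLE [dbo].[FactInternetSales]
(
    [ProductKey]   int NOT NULL,
    [OrderDateKey] int NOT NULL,
    [CustomerKey]  int NOT NULL,
    [PromotionKey] int NOT NULL
)
WITH
(
    DISTRIBUTION = HASH([ProductKey]),
    CLUSTERED COLUMNSTORE INDEX
);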
Is there a way to dynamically feed the Text of the Column titles into a report at run-time in Reporting Services?
In Access I could use DLookup to pull YTD dates (e.g. "05/2008") from another view that kept track of the settings for the year being processed (Curr Year, Prior Year, Prior Year II, YTD I, YTD II, etc. for each year's data). YTD I and YTD II would read like '07/2008' and '09/2008'. The YTD II value could change as the next month's data became available ('10/2008'). As we switched from year to year (2007, 2008, etc.), all I had to do was update the [tbl Status Flags] table for the next year and all my many reports would reflect the correct headers for the periods in the data. The period could be switched back to prior years at any time by the user in order to view past data; when the year was changed, the report headers had to reflect that. The status flags table provided the setting for each year. Hard-coded titles were too confusing for the user, and changing the titles manually was not feasible.
We have moved to SQL Server 2005 and Reporting Services 2005. Besides losing the ability to change the data at the top of every page as the Dept, VP Name, Director, etc. changed, I also seem to have lost this dynamic column header ability. I have found a cumbersome workaround for the data at the top of the page, but the only thing I have come up with for the columns is to create queried parameters to feed those titles (they will be the same throughout the report) and to fill those parameters with queried defaults. I haven't tried it successfully yet, but so far it's looking like that is not how the defaults work.
Hopefully someone out there has a solution. I'd appreciate any help I can get.
I'm moving data from one database to another (INSERT INTO ... SELECT ... FROM ....) and am encountering this error:
Msg 8114, Level 16, State 5, Line 6 Error converting data type varchar to numeric.
My problem is that Line 6 is:
set @brn_pk = '0D4BDE66347C440F'
so that line is obviously not the problem, and my query has almost 200 columns. I could go through them one by one and compare which columns are int in my destination table and varchar in my source tables, but that could take quite a while. How can I work out which column is causing the problem?
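Since this is SQL Server 2012, TRY_CONVERT can do the detective work: it returns NULL instead of raising the error, so a probe per column shows exactly which values will not convert. A sketch (table names, column names, and the target precision are placeholders), including a generator that writes one probe per varchar column so the 200 columns do not have to be checked by hand:

-- Single-column probe: rows whose value cannot be converted to numeric.
SELECT TOP (100) *
FROM dbo.SourceTable
WHERE SomeVarcharCol IS NOT NULL
  AND TRY_CONVERT(numeric(18, 2), SomeVarcharCol) IS NULL;

-- Generate one probe per varchar column; run the generated statements to find the culprit.
SELECT 'SELECT ''' + c.COLUMN_NAME + ''' AS bad_column, COUNT(*) AS bad_rows '
     + 'FROM dbo.SourceTable '
     + 'WHERE [' + c.COLUMN_NAME + '] IS NOT NULL '
     + 'AND TRY_CONVERT(numeric(18, 2), [' + c.COLUMN_NAME + ']) IS NULL;'
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.TABLE_NAME = 'SourceTable'
  AND c.DATA_TYPE IN ('varchar', 'nvarchar');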
I have a table with 5 columns (col1, col2, col3, col4, col5), very simple, and some data.
I would like to create datasets for a tabular report laid out as shown below, with grouping on col1:
col1 - some static text and dynamic text
------------------------
col1   col2   col3   col4
data   data   data   data
------------------------
col1 - some static & dynamic text
------------------------
col1   col2   col3   col4
data   data   data   data
------------------------
Note: I need to keep all the tables on the same page, and when exported to Excel they should end up on the same sheet.
I am designing a stored procedure that inserts data into 2 tables, and I need to pass a list (array) of values in one parameter to complete my query. I tried a table-valued parameter, but it doesn't seem to work for me because a table-valued parameter is read-only, and I need to insert the data from that list of values.
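The READONLY restriction only means the table-valued parameter cannot be modified inside the procedure; reading from it and inserting its rows into the real tables is exactly what it is for. A sketch with hypothetical type, table, and procedure names:

CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.InsertFromList
    @Ids dbo.IdList READONLY
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.TargetTable (Id)
    SELECT Id FROM @Ids;       -- reading the TVP is allowed; only writes to it are not
END;
GO
-- Caller side: fill a table variable of the same type and pass it in.
DECLARE @list dbo.IdList;
INSERT INTO @list (Id) VALUES (1), (2), (3);
EXEC dbo.InsertFromList @Ids = @list;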
The staging_temp data then gets inserted into the main table. My problem is handling a file where the number of columns is greater than in the actual table.
If you look at the sample rows, there are 4 columns separated by "¯", but I actually have only 3 columns in my main table. So how can I get only the first 3 columns from the staging_temp table?
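If staging_temp already has four named columns, listing just the first three in the SELECT of the INSERT is enough. If instead each file line sits in a single raw column, a sketch like this (table and column names are assumptions, and it presumes every row has at least three "¯" delimiters) picks out the first three fields and ignores the rest:

-- Find the first three delimiters, then slice out the three leading fields;
-- whatever follows the third delimiter (the extra column) is simply ignored.
INSERT INTO dbo.main_table (col1, col2, col3)
SELECT
    LEFT(s.raw_line, d1.p - 1),
    SUBSTRING(s.raw_line, d1.p + 1, d2.p - d1.p - 1),
    SUBSTRING(s.raw_line, d2.p + 1, d3.p - d2.p - 1)
FROM dbo.staging_temp AS s
CROSS APPLY (SELECT CHARINDEX('¯', s.raw_line) AS p) AS d1
CROSS APPLY (SELECT CHARINDEX('¯', s.raw_line, d1.p + 1) AS p) AS d2
CROSS APPLY (SELECT CHARINDEX('¯', s.raw_line, d2.p + 1) AS p) AS d3
WHERE d1.p > 0 AND d2.p > d1.p AND d3.p > d2.p;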