SQL 2012 :: Parse XML Files Into Relational Table
Sep 24, 2014
We have a project to parse an XML file into relational SQL tables. The XML file is a complex type with multiple levels of nesting. We are resorting to XQuery to shred it into SQL tables; for one reason or another, the other options on the table were not viable. I know that we could use C# to do the same thing, but we are sticking to T-SQL with XQuery. Has anybody used this route for processing large, complex XML files?
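For anyone going the same route, the basic shredding pattern in T-SQL is nodes() to walk the repeating elements and value() to pull out columns. A minimal sketch against a made-up document (the element names here are illustrative, not from the real file):

DECLARE @x xml = N'
<Orders>
  <Order id="1"><Customer>Acme</Customer><Total>100.50</Total></Order>
  <Order id="2"><Customer>Beta</Customer><Total>75.25</Total></Order>
</Orders>';

-- One relational row per <Order> element
SELECT o.value('@id', 'int')                          AS OrderID,
       o.value('(Customer/text())[1]', 'varchar(50)') AS Customer,
       o.value('(Total/text())[1]', 'decimal(10,2)')  AS Total
FROM @x.nodes('/Orders/Order') AS t(o);

For deeply nested documents, each extra level becomes another CROSS APPLY o.nodes('...') hop; the /text() form above is the usual trick to keep the XQuery plan cheap on large documents.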
View 1 Replies
May 5, 2014
declare @xml table (xmldata xml)
insert @xml select
N'<parseObject name="Motel">
<fields>
<field name="vehicleno" fieldType="int" fieldSize="">
[Code] ....
I want to extract the data in table format:
ParseObjectName  FieldName  FieldType  FieldSize
Motel            vehicleno  int        NULL
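A hedged sketch of the attribute extraction, assuming the truncated XML closes out in the obvious way (a fields collection under each parseObject):

SELECT p.value('@name', 'varchar(50)')                  AS ParseObjectName,
       f.value('@name', 'varchar(50)')                  AS FieldName,
       f.value('@fieldType', 'varchar(20)')             AS FieldType,
       NULLIF(f.value('@fieldSize', 'varchar(20)'), '') AS FieldSize
FROM @xml AS x
CROSS APPLY x.xmldata.nodes('/parseObject') AS a(p)
CROSS APPLY p.nodes('fields/field')         AS b(f);

NULLIF turns the empty fieldSize="" attribute into the NULL shown in the desired output.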
View 2 Replies
View Related
Oct 22, 2014
I have a table structure where there are multiple "/" separated values in two columns that I need to parse out into single records.
CREATE TABLE CONFIGNEW(PlanID VARCHAR(100), GroupID VARCHAR(6), SubGroupID VARCHAR(255), AddOnCode VARCHAR(2), ExternalCode VARCHAR(20))
INSERT INTO CONFIGNEW(PlanID, GroupID, SubGroupID, AddOnCode, ExternalCode) VALUES('101/201', '000005', 'LAA/OCA/UCA/XCA', '1', 'M231_1')
[Code] .....
The results I am looking to achieve are:
PlanID  GroupID  SubGroupID  AddOnCode  ExternalCode
101     000005   LAA         1          M231_1
101     000005   OCA         2          M231_2
101     000005   UCA         3          M231_3
101     000005   XCA         4          M231_4
201     000005   LAA         1          M231_1
201     000005   OCA         2          M231_2
201     000005   UCA         3          M231_3
201     000005   XCA         4          M231_4
Is there an SQL statement that can be used to accomplish this?
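Yes; pre-2016 (no STRING_SPLIT) a common approach is to convert each delimited value into XML and shred it, letting the two CROSS APPLY splits cross-join the plan IDs with the subgroups. A sketch:

SELECT pl.v.value('.', 'varchar(10)') AS PlanID,
       c.GroupID,
       sg.v.value('.', 'varchar(10)') AS SubGroupID
FROM CONFIGNEW AS c
CROSS APPLY (SELECT CAST('<i>' + REPLACE(c.PlanID,     '/', '</i><i>') + '</i>' AS xml)) AS px(x)
CROSS APPLY px.x.nodes('/i') AS pl(v)
CROSS APPLY (SELECT CAST('<i>' + REPLACE(c.SubGroupID, '/', '</i><i>') + '</i>' AS xml)) AS sx(x)
CROSS APPLY sx.x.nodes('/i') AS sg(v);

The incrementing AddOnCode (1-4) and ExternalCode suffix (_1 to _4) in the desired output look positional, so a ROW_NUMBER() over the subgroup split can generate them.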
View 1 Replies
View Related
Apr 3, 2007
Hi all,
is there a way we can parse an XML (file name lister) file that has a structure like this:
<Node>
<Root>
<FullName>/My Documents/Documents/Feature/17470_652</FullName>
<ShortName>17470</ShortName>
<XMLFileName>17470_652.xml</XMLFileName>
<LastModifiedOn>12 December 2006</LastModifiedOn>
</Root>
...
...
...
</Node>
and get the values of all the <XMLFileName> elements and load them into variables? Here, for example, the file name is "17470_652.xml"; I have to look for this file in a specified location, open the XML file, and load its contents into a table. It can be presumed that all the XML files are stored in the same location.
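One T-SQL route (a sketch; the path is a placeholder) is to bulk-load the lister file as a blob, cast it to xml, and shred out the file names:

DECLARE @doc xml;
SELECT @doc = CAST(BulkColumn AS xml)
FROM OPENROWSET(BULK 'C:\Files\FileLister.xml', SINGLE_BLOB) AS b;

-- One row per <XMLFileName> element
SELECT n.value('.', 'varchar(260)') AS XMLFileName
FROM @doc.nodes('/Node/Root/XMLFileName') AS t(n);

From there, a cursor (or an SSIS ForEach loop) over that result can OPENROWSET each named file from the known folder and load its contents in turn.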
View 11 Replies
View Related
Nov 10, 2014
I am trying to parse data separated by text markers (ie abc1, abc2, abc3, abc4, etc).
ID ParseData
1 [abc1.Pants/abc2.Orange hat /abc3.Purple shirt /abc4./abc5./abc6./abc7./abc8.]
2 [abc1.Gray Shoes/abc2.Striped jacket /abc3./abc4./abc5./abc6./abc7./abc8.]
3 [abc1.Blue jeans/abc2./abc3./abc4./abc5./abc6./abc7./abc8.]
New Data (abc1, abc2, abc3, etc each have a field in the new data set)
ID ParseData abc1 abc2 abc3 abc4 abc5 abc6 abc7 abc8
1 [abc1.Pants...abc8.] Pants Orange hat Purple shirt
2 [abc1.Gray...abc8.] Gray Shoes Striped jacket
3 [abc1.Blue...abc8.] Blue Jeans
If I only want the data in between abc1 and abc2, between abc2 and abc3, etc, what would be the best way to do that?
My code so far looks like:
DECLARE
@string varchar(100) = '[abc1.Pants/abc2.Orange hat /abc3.Purple shirt /abc4./abc5./abc6./abc7./abc8.]',
@searchString1 varchar(20) = 'abc1',
@searchString2 varchar(20) = 'abc2';
SELECT newstring
FROM dbo.SubstringBetween(@string,@searchString1,@searchString2);
This returns 'Pants'. How do I continue to parse between abc2 and abc3? Between abc3 and abc4? And then continue to ID 2? Should I be referencing the ParseData field instead of the string of data that I want to parse?
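One way to avoid calling the function once per marker pair is to generate the marker numbers and cut each slice with CHARINDEX/SUBSTRING; and yes, referencing the ParseData column directly (via CROSS APPLY of the same logic) is the way to cover every ID set-based. A sketch against the sample string:

DECLARE @s varchar(200) = '[abc1.Pants/abc2.Orange hat /abc3.Purple shirt /abc4./abc5./abc6./abc7./abc8.]';

SELECT t.n AS MarkerNo,
       RTRIM(SUBSTRING(@s, p1.pos, p2.pos - p1.pos - 1)) AS Value
FROM (VALUES (1),(2),(3),(4),(5),(6),(7)) AS t(n)
CROSS APPLY (SELECT CHARINDEX('abc' + CAST(t.n AS varchar(2)) + '.', @s)
                  + LEN('abc' + CAST(t.n AS varchar(2)) + '.')) AS p1(pos)
CROSS APPLY (SELECT CHARINDEX('abc' + CAST(t.n + 1 AS varchar(2)) + '.', @s)) AS p2(pos)
WHERE p2.pos > 0;

The '- 1' drops the '/' separator that sits just before the next marker; RTRIM handles the trailing spaces in values like 'Orange hat '.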
View 1 Replies
View Related
Jan 28, 2015
How do I parse a string into equal-length substrings in SQL?
I am getting a long concatenated string from a query (CTVALUE1) and have to use the string in a where clause by parsing it every 6 characters.
CREATE TABLE [dbo].[PTEMP](
[ID] [char](10) NULL,
[name] [char](10) NULL,
[CTVALUE1] [char](80) NULL
)
INSERT INTO PTEMP
VALUES('11','ABC','0000010T00010L0001000T010C0001')
select * from ptemp
After parsing I have to use these values in a where clause like this
IN('000001','0T0001','0L0001','000T01','0C0001')
Now, the values can change; I mean the string may give 5 values (6 characters each) today and 10 tomorrow, so the parsing should be dynamic.
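A sketch of a dynamic fixed-width split: a tally (numbers) source generates as many 6-character slices as the string holds, so 5 values today and 10 tomorrow both work:

SELECT p.ID,
       SUBSTRING(RTRIM(p.CTVALUE1), (t.n - 1) * 6 + 1, 6) AS Piece
FROM PTEMP AS p
JOIN (SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS t
  ON t.n <= LEN(RTRIM(p.CTVALUE1)) / 6;

RTRIM matters because CTVALUE1 is char(80) and pads with spaces. The result set can then feed the IN (...) predicate as a subquery instead of a hard-coded list.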
View 2 Replies
View Related
Aug 4, 2005
Hi all,
I am trying to create a diagram for our database. During the creation, I created some relationships which were not there (basically our original database is not a relational database; that's why I am doing it).
So sometimes I have to change a data type in order to create a relationship for the columns in different tables, i.e. change char(16) to varchar(7) (I checked the field to make sure all the data in this field is <= 7 characters).
But when I saved the diagram, there is an error message that state:
Errors were encountered during the save process. Some of your database objects are not saved on your diagram.
'agent' table saved successfully
'VisitUSA' table
- Unable to create relationship 'FK_VisitUSA_agent'.
ODBC error: [Microsoft][ODBC SQL Server Driver][SQL Server]ALTER TABLE statement conflicted with COLUMN FOREIGN KEY constraint 'FK_VisitUSA_agent'. The conflict occurred in database 'CMC', table 'agent', column 'AgentCode'.
What does that mean? Is it caused by some AgentCode data in the VisitUSA table which is not in the agent table?
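That is exactly what the message usually means: the FK cannot be created while orphaned child rows exist. A hedged check, assuming the column is AgentCode on both sides as the error text suggests:

SELECT DISTINCT v.AgentCode
FROM VisitUSA AS v
LEFT JOIN agent AS a ON a.AgentCode = v.AgentCode
WHERE a.AgentCode IS NULL;

Any rows returned have to be fixed or removed (or the constraint created WITH NOCHECK) before the relationship will save.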
Thanks!
Betty
View 3 Replies
View Related
Feb 18, 2014
I am having trouble trying to import XLSX files into SQL 2012 64-bit.
I have installed the Access driver (AccessDatabaseEngine_x64.exe)
I have configured the script to run the following SP
sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OverRide
GO
sp_configure 'Ad Hoc Distributed Queries', 1
[Code] ....
So I first create my temp table, then run the SP above, then run the insert into the temp table defined:
INSERT INTO tempdb.dbo.TempTRBZ (IsNew,CoID, Zip, City, County,StateCode,Rate,Taxable,TaxShip,TaxLab,CountryID,StateID)
SELECT * FROM OPENROWSET( 'Microsoft.ACE.OLEDB.12.0','EXCEL 12.0;Database=C:\Temp\NotInTrbzJan.xlsx;HDR=YES','SELECT * FROM [Data$]')
[Code] ....
The error message I get back is
Msg 7303, Level 16, State 1, Line 4
Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".
What have I set wrong on the import? Using SSIS at this point is not a real option.
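One thing worth trying: Msg 7303 with the ACE provider is often cleared by enabling in-process loading for the provider (these are the commonly used property names; it's also worth confirming the 64-bit AccessDatabaseEngine matches the SQL Server bitness):

EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1;

Also check that the SQL Server service account can actually read the .xlsx path, since OPENROWSET opens the file under that account.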
View 0 Replies
View Related
Sep 14, 2007
Hi
I have a huge XML file with nested sections.
I also have an XSD file for that XML.
I have a destination table where the data from the XML should be loaded.
I am using the XML source transformation, but to get all the data I need to use multiple merge joins to combine it into a single row that I can insert into the destination. I was not quite convinced by using so many joins.
So I tried using the script source transformation, where I use XML objects to get the nodes and dynamically construct the data row, and the output is then inserted into the destination.
On comparing the two approaches, the one using the script source works much faster than the XML source transformation.
I wanted to know: is there any limitation to using the script source to parse through XML files?
I would also like to know of any better way of getting the data from an XML source without using the joins.
Hari
View 7 Replies
View Related
Dec 10, 2006
Hi! I'm working on a web application and have serious thoughts about how to optimize my table structure. To explain:
My table structure today (simplified):
tbl_customers: cust_id, name, ...
tbl_contacts: con_id, name, ...
tbl_groups: grp_id, name, ...
My subtables look like this (alternative 1):
tbl_sub_phone: phone_id, parent_type, parent_id, phone_area, phone_nr, ...
tbl_sub_email: mail_id, parent_type, parent_id, email, ...
As seen above, every contact, group and customer can be assigned an unlimited number of phone numbers or email addresses. For example, when entering a new email for a customer, the following is inserted in tbl_sub_email: parent_type = 'cst', parent_id = '2' (the cust_id from tbl_customers), email = 'gwerg@fe.com'. The problem is I am uncertain whether this is a very inefficient way of handling it. I see two alternatives.
Alternative 2: I create separate subtables for each table; for example, tbl_customers would get its mail addresses and phone numbers in tbl_customers_phone and tbl_customers_email. What I am uncertain of here is whether this would make things a lot more troublesome when searching, for example, for a specific phone number.
Alternative 3 (simplified):
Main tables:
tbl_customers: cust_id, name, ...
tbl_contacts: con_id, name, ...
tbl_groups: grp_id, name, ...
Junction tables connecting objects to subobjects:
tbl_customers_phone: id, cust_id, phone_id
tbl_contacts_phone: id, con_id, phone_id
tbl_customers_mail: id, cust_id, mail_id
Subtables:
tbl_sub_phone: phone_id, phone_area, phone_nr, ...
tbl_sub_email: mail_id, email, ...
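A minimal DDL sketch of alternative 3 for the phone case (types are assumptions; the same shape repeats for mail and for contacts/groups):

CREATE TABLE tbl_sub_phone (
    phone_id   int IDENTITY(1,1) PRIMARY KEY,
    phone_area varchar(10),
    phone_nr   varchar(20)
);

CREATE TABLE tbl_customers_phone (
    id       int IDENTITY(1,1) PRIMARY KEY,
    cust_id  int NOT NULL REFERENCES tbl_customers (cust_id),
    phone_id int NOT NULL REFERENCES tbl_sub_phone (phone_id)
);

-- Supports the "find by phone number" search worried about in alternative 2
CREATE INDEX IX_sub_phone_nr ON tbl_sub_phone (phone_nr);

Compared with alternative 1, this keeps real foreign keys (a parent_type/parent_id pair cannot be enforced by the engine), which is usually worth the extra junction tables.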
Ranking these three models, which would be the most and which the least efficient performance-wise? What I want to avoid is performance problems when listing the objects; my indexing skills are a bit limited, although I'm doing a lot of reading and testing regarding this. That's why I'm asking for advice, so that I can minimize the need to rebuild the table structure once the application has already started to be used.
I also have another general question. I have a lot of select queries where I need to fetch data from several different tables. For example, I get an application from the tbl_applications table, and that table contains the columns cat1, cat2 and cat3 (which are categories and contain the primary key integer of the tbl_sub_categorys table). With 3 joins I retrieve these 3 category names, returning 1 result with all the info I need. I've been getting some strange results from the query analyzer (results suggesting that using a clustered index for the primary key produced a slower query, with a higher cost), so I actually have another question: can it generally be said that a single query (join or subquery) is faster than getting the data in separate selects? In the example above I have the option of using joins (1 query), or doing 2 queries and sorting the categories in code in the aspx pages, or doing 4 queries: one for the app followed by 1 for every category. Any input regarding this?
As I said earlier, I'm looking for the most efficient way of doing the things above, and would greatly appreciate any input!
View 3 Replies
View Related
Sep 22, 2007
Hello, I have a stored procedure which inserts into the Orders table. Orders is the parent table, with primary key OrdersID. I also have a child table, Client, with foreign key OrdersID. I want it to insert the data into the Orders table and, at the same time, insert the OrdersID into the FK of the child table. Any info would be appreciated; I have no idea how to do it.
My SP is as follows:
ALTER PROCEDURE dbo.jobInsert
    @ClientFileNumber varchar(50),
    @Identity int OUT
AS
INSERT Orders(ClientFileNumber, DateTimeReceived) VALUES(@ClientFileNumber, GetDate())
SET @Identity = SCOPE_IDENTITY()
RETURN
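One way to finish the pattern is to do the child insert inside the same procedure, right after capturing SCOPE_IDENTITY() (the Client column list here is assumed):

ALTER PROCEDURE dbo.jobInsert
    @ClientFileNumber varchar(50),
    @Identity int OUT
AS
    INSERT Orders (ClientFileNumber, DateTimeReceived)
    VALUES (@ClientFileNumber, GETDATE());

    SET @Identity = SCOPE_IDENTITY();

    -- Child row keyed by the new OrdersID (assumed Client columns)
    INSERT Client (OrdersID, ClientFileNumber)
    VALUES (@Identity, @ClientFileNumber);
RETURN

SCOPE_IDENTITY() is the right call here, since it is not fooled by identities generated in triggers the way @@IDENTITY can be.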
View 4 Replies
View Related
Feb 24, 2015
I have the need to delete old backup files via TSQL job. Found this solution online:
PushD "
emoteservershareDIFF" &&(
forfiles -m *DIFF*.sqb -d -1 -c "cmd /c del /q @path"
) & PopD
It works remotely if I run it via command prompt. But when I add this to a TSQL job on my remote SQL instance, it runs without deleting anything. What I'm missing?
View 6 Replies
View Related
Nov 15, 2007
I have cubes that hold quite a few calculations, and so creating Excel pivot table views from them takes a long time. This is even true for Excel 2007.
Now I wonder if it would be possible to write back all the calculation results to a relational table - maybe one that exactly matches the report format - so creating another report would be much faster?
SSRS seems to be a way to go but it does not speed up my Excel case.
I read about write-back in ROLAP and MOLAP but I don't think any of these concepts help me to really speed up my reports.
The closest thing I have been able to find so far, which also seems to do exactly what I want, is Microsoft's new PerformancePoint 2007. It just seems overkill for my projects, and the price is around $20K.
Any suggestions are appreciated.
Dirk
View 1 Replies
View Related
Apr 10, 2008
Sharepoint is a pretty darn dynamic service, and that got me thinking about how databases are created.
Just wondering out loud; surely someone has thought of creating databases in this manner, but I don't know if it's a thought that has been struck down.
It would look something like this:
Single Table
ID uniqueidentifier PK
ParentID uniqueidentifier FK to ID
Name nvarchar(MAX)
Value nvarchar(MAX)
In this manner, you would create your database "columns" as needed in the data-layer.
If that is too strict (every datatype would be encoded to base64), you could create a value option for basically every data type, e.g. nvarchar(max), varbinary(max), ..., and add another field that describes the data type to be used for that "column":
ID uniqueidentifier PK
ParentID uniqueidentifier FK to ID
Name nvarchar(MAX)
DataType nvarchar(50) //constrained to allowable datatypes
MaxLen nvarchar(31) //ahh, what the hey, let's add this for sniggles.
ValueNvarchar nvarchar(MAX) nullable
ValueNvarbin varbinary(MAX) nullable
....
Now, to allow the "values" for those "columns", you may be able to still use the single-table approach, but it may be better to have an extra table for that (probably even a different partition and drive).
Things to consider, indexing.....
In development, the data-layer would handle the creation/reading of table columns.
The business-layer, which could be many different ones for different parts of a company, would make its requests to the DL. It may need a username, and the DL would either just create it or suggest an already existing username column. The path to that username may not be where a particular biz element wants it, so they will ask the DL to create another under a different path.
The business-layer is probably the most important reason for wanting a single dynamic table.
In the end, the relation structure could be like:
Human //dir, no value, no parent (root)
FirstName //value - text
LastName //value - text
Parent1 //value - Me Id
Sibling1 //value - Me Id
SiblingN //value - Me Id
School //dir, no val
ParentId //value - Id (perhaps category would be Building)
Name //value - text
The sibling N is an example of how a table may need to be dynamic. A positive of the single-table approach is no limitation on the number of "columns". Another is the ease of moving a hierarchical structure if needed: School may want to be University instead of just "school" and be placed as a child of "school".
Looking for any thoughts on the subject,
Nathan
View 1 Replies
View Related
Sep 3, 2015
I want to use column names to join to another table.
I have an invoice table that stores the detail of each invoice, and I post some of its columns' values to another table.
Invoice table
Invoice_Name | Invoice_Amount | Invoice_Vat | Invoice_Total
Inv001 | 1000 | 70 | 1070
Account_table
Account_No | Account_Number | Data_Source
JV001 | 1111 | Invoice_Amount ---->1000
JV001 | 1112 | Invoice_Vat ----> 70
JV001 | 1113 | Invoice_Total ----> 1070
I want to join Invoice table to Account_table
ON Invoice_Amount , Invoice_Vat , Invoice_Total with Data_Source
The way I've got so far: I unpivot the Invoice table columns into rows and join with Account_table.
The problem is, if new columns are created in Invoice_table, I must use dynamic columns to do this in the SQL query.
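For the static-column case, the shape is roughly this (a sketch; Amount is a value-column name I've chosen for the unpivot):

SELECT a.Account_No, a.Account_Number, u.Data_Source, u.Amount
FROM (SELECT Invoice_Name, Invoice_Amount, Invoice_Vat, Invoice_Total
      FROM Invoice) AS p
UNPIVOT (Amount FOR Data_Source IN (Invoice_Amount, Invoice_Vat, Invoice_Total)) AS u
JOIN Account_table AS a
  ON a.Data_Source = u.Data_Source;

For the dynamic case, the IN (...) list can be built from sys.columns for the Invoice table and the statement executed with sp_executesql; there is no way around dynamic SQL once the column set itself is data-driven.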
View 7 Replies
View Related
May 29, 2006
I believe saving prediction query results to relational tables is possible (the BI studio does it!). I am not clear on how to do this without the BI studio; that is, if I write a DMX query and want to store its output in a relational table, how do I do it?
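One way that works without BI studio (a sketch; DMSRV is an assumed linked-server name pointing at the Analysis Services instance, and the model/column names are placeholders):

INSERT INTO dbo.PredictionResults (PredictedValue)   -- assumed target table
SELECT *
FROM OPENQUERY(DMSRV, 'SELECT Predict([Bike Buyer]) FROM [MyModel]');

The DMX text inside OPENQUERY can be any prediction query, including one with a PREDICTION JOIN; SQL Server just treats the rowset it gets back as relational data.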
Tips, anyone?
Thanks!
View 6 Replies
View Related
Nov 19, 2014
I am debugging a DB maintenance script which creates a table of index maintenance commands, created separately for each index according to its level of fragmentation and other factors.
For the debugging process, I'm looking for a way to parse each command in the table without actually running it, to locate any syntax errors. In other words, as if you clicked the blue check mark on each one.
Does such a function exist in SQL 2008 (the version I'm doing this on) or other versions?
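Yes, in effect: prefixing a batch with SET PARSEONLY ON makes the engine syntax-check it without executing, which is what the blue check does. A sketch over an assumed commands table:

DECLARE @cmd nvarchar(max);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT MaintCommand FROM dbo.IndexCommands;   -- assumed names
OPEN cur;
FETCH NEXT FROM cur INTO @cmd;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC (N'SET PARSEONLY ON; ' + @cmd);      -- parsed, never run
    END TRY
    BEGIN CATCH
        PRINT 'Syntax error: ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM cur INTO @cmd;
END
CLOSE cur;
DEALLOCATE cur;

SET NOEXEC ON is the stricter cousin: it also compiles (catching missing objects, not just bad syntax), and both exist in SQL 2008.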
View 3 Replies
View Related
Dec 25, 2007
hello,
I am a beginner with ASP.NET and SQL Server. I used SQL Server Management Studio (full version) and exported my aspnetdb, which was created by VS2005, to my host's SQL server. I have a question:
the relational tables are no longer relational. I noticed this when I created a database diagram. What went wrong in the export?
thanks for your helps...
View 3 Replies
View Related
Apr 10, 2015
I need an SP to parse a delimited string and insert the result in a table. I am using SQL Server 2008 R2. I have 2 tables - RawData & WIP. I have robots on a manufacturing line capable of moving data to a DB. I move the raw data to RawData. On insert into [RawData], I want to parse the string and move the contents to WIP as indicated below. I will run reports against the WIP table.
Also, after the string is parsed, I'd like to change the Archive flag (the _0 at the end of the raw string) to 1 in the WIP table to indicate a successful parse.
Sample Strings - [RawData Table]
04102015_114830_10_013_9_8_6_99999_Test 1_1_0
04102015_115030_10_013_9_8_6_99999_Test 2_1_0
Desired Output - [WIP Table]
Date Time Plant Program Line Zone Station BadgeID Message Alarm Archive
-----------------------------------------------------------------------------------
04102015 114830 10 13 9 8 6 99999 Test 1 1 1
04102015 115030 10 13 9 8 6 99999 Test 2 1 1
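A hedged sketch of an AFTER INSERT trigger that does the parse; the raw-string column name (RawString) is an assumption, and the XML-based split assumes the message text never contains '_' or characters needing XML escaping:

CREATE TRIGGER trg_RawData_ToWIP ON dbo.RawData
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.WIP ([Date], [Time], Plant, Program, Line, Zone,
                         Station, BadgeID, [Message], Alarm, Archive)
    SELECT x.v.value('s[1]',  'char(8)'),
           x.v.value('s[2]',  'char(6)'),
           x.v.value('s[3]',  'int'),
           x.v.value('s[4]',  'int'),
           x.v.value('s[5]',  'int'),
           x.v.value('s[6]',  'int'),
           x.v.value('s[7]',  'int'),
           x.v.value('s[8]',  'int'),
           x.v.value('s[9]',  'varchar(50)'),
           x.v.value('s[10]', 'int'),
           1                             -- flip the trailing _0 to 1 on success
    FROM inserted AS i
    CROSS APPLY (SELECT CAST('<r><s>' + REPLACE(i.RawString, '_', '</s><s>')
                             + '</s></r>' AS xml)) AS t(doc)
    CROSS APPLY t.doc.nodes('/r') AS x(v);
END

Each underscore-delimited part maps positionally onto the WIP columns, so '013' lands in Program as 13 and 'Test 1' survives intact because spaces are not delimiters.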
View 16 Replies
View Related
Sep 13, 2014
Ok I am faced with working with XML on a regular basis, which is fine.
DECLARE @ViewSN INT
IF NOT EXISTS (select null from tblviews where viewcode = 'loadAtTerm') --<workflowEventType>loadAtTerminal</workflowEventType>
insert into tblviews (ViewName,Description,OutBoundForm,StoredProcSN,TriggersReply,ViewCode,DispXactLayer,DispXactViewType,DispXfcTag,Comments)
select 'QC:WF-LoadAtTerminal','This View Corresponds to the XML for loadAtTerminal in Omnitracs Workflow','0',NULL,'0', 'loadAtTerm','MCOM','MCOM',NULL,NULL
[code]...
What would be really useful is to be able to take any XML file and automatically parse the node names into a memory (table) variable, and then the fields of each node into another.
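The node-name half of that is actually straightforward with the //* wildcard and local-name(). A sketch:

DECLARE @x xml = N'<workflow><workflowEventType>loadAtTerminal</workflowEventType>
                   <stop><stopType>P</stopType></stop></workflow>';

DECLARE @nodeNames TABLE (NodeName sysname);

INSERT INTO @nodeNames (NodeName)
SELECT DISTINCT n.value('local-name(.)', 'sysname')
FROM @x.nodes('//*') AS t(n);

SELECT NodeName FROM @nodeNames;

The same idea with nodes('//@*') pulls every attribute name, which covers the "fields of each node" half.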
View 7 Replies
View Related
Oct 25, 2007
I am looking for the best way in SSIS to do the following: I have a SQL table, and for each row in the table I want to take an element, do a lookup in a Teradata table, and return information from the Teradata source. I then want to use that returned data to do some calculations, create a derived column from the calculations, and place the result back into the same SQL table I am parsing through.
Ideas?
View 1 Replies
View Related
Apr 29, 2015
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 of the 6 contain JSON data, so the users have requested that those 2 JSON columns be parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first JSON column in my query (with CROSS APPLY), I get the right data back, but I also get 8 rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B
If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier 'A.ITEM6' could not be bound."
2. My second question: how do I get around this error?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B, fnSplitJson2(A.ITEM6,NULL) C
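The comma makes the second function call an old-style cross join, which cannot see A's columns; repeating CROSS APPLY is the fix for question 2:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT AS A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) AS B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) AS C;

For question 1, the extra-rows side effect collapses with conditional aggregation over the function's key/value output (the output column names here are assumed), e.g. MAX(CASE WHEN B.propertyName = 'key1' THEN B.propertyValue END) AS key1, grouped by the PRODUCT columns.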
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
View 14 Replies
View Related
Sep 18, 2007
Hi,
My input is a flat file source, and it has spaces in a few columns of the data. These columns are linked to another table as a foreign key, and when I try loading them into a relational structure, a foreign key violation occurs. Is there a standard method to replace these spaces?
What approach should I take so that the data gets loaded into the relational structure?
For example:
Name  Age  Salary  Address
dsds  23           fghghgh
Salary  description  level
2345    nnncncn      4
Here Salary is used as the example; the datatype is char in the real scenario.
What approach should I take to load the data while cleansing the spaces in SSIS?
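A hedged T-SQL cleanup, assuming the flat file is landed in a staging table first: all-space keys become NULL, which loads cleanly if the FK column is nullable (in pure SSIS the same rule can live in a Derived Column expression instead):

UPDATE stg
SET Salary = NULLIF(LTRIM(RTRIM(Salary)), '')   -- blank/space-only values become NULL
FROM dbo.StagingEmployee AS stg;                -- assumed staging table name

If the business rule is that blank keys must map to a real parent row instead, replace the NULLIF with a lookup against the reference table and route the misses to an error table.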
View 4 Replies
View Related
Jan 28, 2015
Looking at the documentation, it suggests that, as well as data files, a backup file will also be zeroed out on creation unless the service account has been given Perform Volume Maintenance Tasks.
We take our backups to dedicated backup servers, meaning backup performance should improve significantly if instant file initialization is granted to the service account logins for the source boxes, if I'm right.
View 4 Replies
View Related
Feb 11, 2014
Trying to find out if this is the best way to move log files in databases that are in an availability group:
remove the DB from the AG
run ALTER DATABASE commands as you would normally to take it offline, move the file, bring it online, etc. (see the sketch below)
drop the DB from the secondary node
then rejoin the DB to the AG
Is that the only option for moving them when the database is in an availability group? I can't find any other info on moving files in mirrors or HA groups.
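For reference, the file-move core of that second step looks like this (a sketch with assumed names):

ALTER DATABASE MyAGDb
MODIFY FILE (NAME = MyAGDb_log,
             FILENAME = N'L:\NewPath\MyAGDb_log.ldf');
-- Take the DB offline, move the .ldf in the OS, then:
ALTER DATABASE MyAGDb SET ONLINE;

The remove-and-rejoin dance is the commonly documented route, since a database cannot be taken offline while it is still part of an availability group.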
View 2 Replies
View Related
Nov 5, 2014
We have a cluster with two nodes and two instances of SQL Server 2012 Standard Edition running on them. Volume W: is a Fusion-IO card.
On one of these nodes, a lot of database names are showing up in Resource Monitor as *.mdf files (W:0MSSQL1…).
How and why is SQL Server using these files? They only show up on the node carrying more load.
Volume I: is the volume where the transaction log is written, so those files we can explain.
View 3 Replies
View Related
Feb 24, 2015
I run this code in several environments successfully, but when I run it in a different environment it fails.
It's very strange, as the user has SQL sysadmin and OS administrator permissions.
Code...
declare @Command varchar (250)
set @Command = 'bcp "SELECT * FROM QAProcess.dbo.ReadMe_Table" queryout "\\THOMSONS-SQL01\Temp\ReadMe.txt" -c -T'
EXEC master..xp_cmdshell @Command
Error....
SQLState = 28000, NativeError = 18456
Error = [Microsoft][SQL Server Native Client 11.0][SQL Server]Login failed for user 'T-BXSQLServerAccount'.
NULL
View 2 Replies
View Related
Mar 3, 2015
I'm having an argument with our infrastructure architect, who has just gone and bought lots of SSD drives to use for our tempdb data and log files. Sounds great, doesn't it? There is a catch though: his plan is to add the disks to the two available slots in each blade in a RAID 0+1 configuration, effectively giving you one usable drive, and to place both data and log files on that one disk.
I then pointed out that SQL Server best practice is to host tempdb data and log files on two separate drives to reduce contention. The architect then basically said that because this isn't spinning disk, the issue of drive r/w contention doesn't apply. I don't agree with this and wanted to get some opinions from the community. I'm still advising that two separate disks should be used, but someone just went and spent £80k ($150k) on SSDs and doesn't want to back down...
View 4 Replies
View Related
Jun 16, 2015
Convert 100 XML files individually to PDFs and zip them in a folder along with the source files.
Is this possible in the SQL Server BI world?
If possible, make this an automated process for every 100 files.
View 3 Replies
View Related
Jul 21, 2014
I've stepped into a new environment and have never dealt with multiple data files on user databases, only with tempdb. What would be the best way to get all my data files in sync? I have done this on databases that aren't that big in size, or not off in size by a lot. Here is what I have:
Mdf -- 69 GB
ndf -- 3 GB
ndf -- 3 GB
ndf -- 3 GB
ndf -- 3 GB
ndf -- 4 GB
ndf -- 4 GB
ndf -- 2 GB
View 7 Replies
View Related
Sep 23, 2014
I have an emergency requirement to copy source server database backup files to a destination server through the xcopy command. The backup job on the source server runs daily, so once this job completes, all database backups need to be moved to the destination server. But the main concern is that the backup files on the destination server shouldn't be overwritten; they should be placed separately, since the source server job runs daily.
We have a command which overwrites backups on the destination server, but we need to keep backups on the destination for at least 4 weeks (meaning retention should be 4 weeks).
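A hedged sketch of a date-stamped copy from the source side (the paths and xp_cmdshell availability are assumptions; robocopy creates the dated folder if it does not exist):

DECLARE @cmd varchar(500) =
    'robocopy "D:\Backups" "\\DestServer\Backups\' +
    CONVERT(char(8), GETDATE(), 112) + '" *.bak';   -- YYYYMMDD folder per day
EXEC master..xp_cmdshell @cmd;

Because each day lands in its own YYYYMMDD folder, nothing is overwritten, and the 4-week retention becomes a separate cleanup step (a forfiles -d -28 pass over the destination works for that).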
View 5 Replies
View Related
Feb 8, 2015
I have three FileStreams (FS1 on F drive, FS2 on H drive, FS3 on E drive) belonging to the same FileStream group of one particular database (DB), which is in Simple recovery mode on SQL Server 2012.
FS1 contains a huge number of files, due to which the F drive is completely full.
So I am trying to move some of the extra files from one FileStream (FS1 on F drive) to the other FileStreams (FS2 on H drive and FS3 on E drive) using the command:
dbcc shrinkfile('FS1', emptyfile)
Then I take Full and Differential backups of the database, issue a CHECKPOINT, and try to delete the already-duplicated files from the FileStream FS1 to free some space on the F drive, using the command:
sp_filestream_force_garbage_collection @dbname = 'DB' , @filename = 'FS1'
But still no files get deleted, and I receive output such as:
file_name num_collected_items num_marked_for_collection_items num_unprocessed_items last_collected_lsn
DB_FS1 0 0 0 25000001749500000
How do I delete these already-duplicated files?
View 0 Replies
View Related
Apr 6, 2015
Currently we are trying to load XML files into SQL Server tables using SSIS 2012. We receive the XML files as a column in a source table, so we have to push these XML documents into destination tables.
I'm following the approach below to perform this activity:
[URL]
But we have a standard XSD structure for all the XML files, and an XML file should be loaded only if it matches the XSD structure; otherwise we should skip to the next XML file.
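One T-SQL way to express "load only if it validates" (a sketch; the schema-collection and table names are assumptions) is to register the XSD once and let a TRY/CATCH cast do the gatekeeping:

-- One-time setup, with the XSD text held in @xsd:
-- CREATE XML SCHEMA COLLECTION dbo.StdSchema AS @xsd;

BEGIN TRY
    DECLARE @typed xml(dbo.StdSchema) = (SELECT TOP (1) XmlCol
                                         FROM dbo.SourceTable);  -- assumed names
    -- Assignment above validates against the XSD; failures jump to CATCH
    INSERT INTO dbo.Destination (Payload) VALUES (@typed);
END TRY
BEGIN CATCH
    PRINT 'Skipped: document does not match the XSD';
END CATCH;

Inside SSIS, the equivalent gate is an XML Task set to the Validate operation inside the ForEach loop, routing validation failures around the load step.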
View 1 Replies
View Related