SQL Server 2008 :: Creation Of XML File With Data In Table
Jun 1, 2015
I want to create an XML file from the data in my table. I have a question about the tags.
SELECT
    -- Root element attributes
    'http://tempuri.org/Form.xsd' AS 'xmlns',
    'http://www.w3.org/2001/XMLSchema-instance' AS 'xmlns:xsd',
    (
        SELECT
        -- Creating a default element
[Code] ....
This is my query. When I use the 'xmlns' namespace, the result is as below:
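In case a concrete pattern helps, here is a minimal sketch (table and column names are hypothetical) using WITH XMLNAMESPACES, which declares the namespaces once instead of selecting them as pseudo-columns:

-- A sketch, not the actual query: WITH XMLNAMESPACES binds the default and
-- prefixed namespaces, and FOR XML PATH/ROOT builds the document around them.
WITH XMLNAMESPACES (
    DEFAULT 'http://tempuri.org/Form.xsd',
    'http://www.w3.org/2001/XMLSchema-instance' AS xsd
)
SELECT t.Code, t.Description
FROM dbo.MyTable AS t
FOR XML PATH('Row'), ROOT('Form');

One known quirk: FOR XML PATH repeats the namespace declarations on every row element, which is verbose but harmless.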
We are planning to create partitioning on existing tables. The partitioning is on a date column, and there should be one partition for each year.
Creation of new partitions should be automated. We also don't have any plans to archive old data; all we want is for new partition creation to be automated.
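A minimal sketch of the yearly step such a job could run, assuming a RANGE RIGHT partition function pfYearly and a partition scheme psYearly (both names are hypothetical), scheduled via SQL Agent:

-- Add the boundary for the coming year before any data for it arrives;
-- splitting an empty range at the end of the table is metadata-only.
DECLARE @NextBoundary datetime =
    DATEADD(YEAR, DATEDIFF(YEAR, 0, GETDATE()) + 1, 0);  -- Jan 1 of next year

ALTER PARTITION SCHEME psYearly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfYearly() SPLIT RANGE (@NextBoundary);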
Someone created a Word input file (15 pages, including check boxes, text boxes, drop-down lists...). Is it possible to save the data in the Word input file to a SQL table?
One of my database's data files is 100 GB and the log file is 500 GB. The DB is in the full recovery model and transaction log backups happen once every 6 hours. Even then, the database log file isn't reducing in size.
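A minimal sketch of the usual check-then-fix sequence, with hypothetical database and logical file names (query sys.database_files for the real ones):

-- See why the log space cannot be reused; LOG_BACKUP means the backups are
-- not frequent enough for the churn, other values point elsewhere.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDb';

-- After the next log backup, a one-off shrink returns space to the OS.
BACKUP LOG MyDb TO DISK = N'X:\Backups\MyDb_log.trn';
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 10240);  -- target size in MB

Shrinking is a corrective step, not maintenance; if the 6-hour backup cadence can't keep up with the write volume, the file will simply grow again.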
A recent SharePoint upgrade has rendered several views obsolete. I am redefining them so that our upper-level executive reports show valid data. (Yes, I know that doing anything to SharePoint could cause MS to deny support; having said that, this is something I've inherited and need to fix, pronto.) The old view was created like so:
USE [AHMC]
GO
/****** Object: View [dbo].[vwSurgicalVolumes] Script Date: 09/04/2015 09:28:03 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [dbo].[vwSurgicalVolumes]
AS
SELECT
[code]....
As I said, this view is used in a report showing surgical minutes. SharePoint is now on a new server, which is linked differently (distributed?). I've used OPENQUERY to get my 'new' query to work:
SELECT *
FROM OPENQUERY ([PORTALWEBDB], 'SELECT
    --AllLists
    AL.tp_ID AS ALtpID
    ,AL.tp_WebID AS altpwebid
    ,AL.tp_Title AS ALTitle
[code]....
My data (i.e. surgical minutes, etc.) seems to be in the XML column, AUD.tp_ColumnSet. So I need to parse it out and convert it to INT to maintain consistency with the previous view. How do I do this within the context of the view definition? Here is a representation of the new and old view data copied to Excel:
I can't format it to make it look decent. InHouseCases = 2, InHouseMinutes = 419, OutPatientCases = 16, OutPatientMinutes = 1230. This corresponds to the new data I can see in the XML column: 2.000000000000000e+000 is indeed 2, and 4.190000000000000e+002 is indeed 419.
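A minimal sketch of the parsing step for the new view definition. The element name (float1) and the AllUserData table name are assumptions based on the AUD alias; inspect the actual XML to find the node carrying each measure. Linked servers often return XML columns as nvarchar, hence the CONVERT; reading the node as float copes with the 4.190000000000000e+002 notation, and the outer CAST restores the INT type the old view exposed:

-- Hypothetical node name float1; replace with the real element in tp_ColumnSet.
SELECT
    CAST(CONVERT(xml, AUD.tp_ColumnSet).value('(/float1)[1]', 'float') AS int)
        AS InHouseMinutes
FROM OPENQUERY([PORTALWEBDB],
    'SELECT AUD.tp_ColumnSet FROM dbo.AllUserData AS AUD') AS AUD;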
Have you designed a solution for loading data into a SQL destination from a single 5-10 GB flat file? If so, what kind of performance measures did you take while designing the solution?
I am trying to insert bulk data into a main table from a staging table in SQL Server 2012. If any error occurs, the whole operation is rolled back. I don't want that to happen: I want to know which records have a problem, and the rest have to be inserted.
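A minimal sketch of one way to split the good rows from the bad ones up front (table and column names are hypothetical), so no single failure rolls back the batch:

-- Rows whose Amount converts cleanly go to the main table...
INSERT INTO dbo.MainTable (Id, Amount)
SELECT Id, TRY_CONVERT(decimal(10, 2), Amount)
FROM dbo.Staging
WHERE TRY_CONVERT(decimal(10, 2), Amount) IS NOT NULL;

-- ...and the rest land in an error table for inspection.
INSERT INTO dbo.StagingErrors (Id, Amount, ErrorNote)
SELECT Id, Amount, 'Amount failed conversion'
FROM dbo.Staging
WHERE TRY_CONVERT(decimal(10, 2), Amount) IS NULL;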
We have installed a SQL Server 2008 R2 SP1 instance and it hosts SharePoint 2010 databases.
We have 2 dedicated drives for tempdb on the SAN with 50 GB of space. Both the tempdb data and log files were created at the default size. I would like to presize them.
What are the best values to start with?
U: Tempdbdata, holding tempdb.mdf
V: Tempdblog, holding templog.ldf
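A minimal sketch; the sizes are assumptions chosen to leave headroom on 50 GB drives, and tempdev/templog are the default logical names (check sys.master_files):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 20480MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10240MB, FILEGROWTH = 1024MB);

The new sizes take full effect at the next service restart, when tempdb is re-created. On 2008 R2 it is also common to split tempdb into several equally sized data files (often one per core, up to 8) to reduce allocation contention.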
I'm trying to create an import package using BIDS. I'm using SQL Server 2008. The data is saved as a .csv file so that I can use the flat file option for the data source. The issue I am having is that when I preview the flat file after selecting it as the data source, some of the data in numeric format show up as non-numeric; for instance, the value -1,809,575,682,700 is being read as ""1, and the package is giving a conversion error.
I am importing different Excel files into a table. My requirement is that after the import completes, I need to insert all the file names and file creation dates into a table (for auditing).
Can we bulk insert only the desired columns from a flat file into a table?
I am using SSIS to bulk insert from a file with more than 200 columns, and I am trying to find a way to bulk insert them into multiple tables through SSIS.
The one way I can think of is to pre-map the columns from the file to the destination tables and build numerous Bulk Insert tasks to achieve that, but I'm not sure SSIS will let me do that.
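Outside SSIS, a format file can skip unwanted columns, and OPENROWSET(BULK) turns the file into a row source you can INSERT...SELECT from, one statement per destination table. A minimal sketch with hypothetical paths and names:

-- The .fmt file describes all 200+ file columns; the SELECT picks the few
-- that this destination table needs.
INSERT INTO dbo.TargetA (Col1, Col5)
SELECT src.Col1, src.Col5
FROM OPENROWSET(
    BULK 'C:\Data\wide_file.txt',
    FORMATFILE = 'C:\Data\wide_file.fmt'
) AS src;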
I have a few tables which are replicated and partitioned. They also have an archival process. I want to avoid having to run that same process on the subscriber.
Replication of partition switching is easy. However, I am not sure how to replicate MERGE RANGE and empty filegroup/file drops.
There are the following article options:
Copy file group associations
Copy table partitioning schemes
Copy index partitioning schemes
I am not sure if these are enough to implement the replication of merge range and empty filegroup/file drops.
I could not find an option to copy partition functions.
I have a report that summarizes hospital readmissions. Some months may have only a female or only a male patient that is readmitted, but I want to show both genders for every month either way.
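A minimal sketch of the usual fix (table and column names are hypothetical): generate every month/gender combination first, then LEFT JOIN the actual readmissions so empty combinations still appear with a zero count:

SELECT m.MonthStart,
       g.Gender,
       COUNT(r.PatientID) AS Readmissions
FROM dbo.ReportMonths AS m                        -- one row per reporting month
CROSS JOIN (VALUES ('F'), ('M')) AS g (Gender)    -- force both genders
LEFT JOIN dbo.Readmissions AS r
       ON r.ReadmitMonth = m.MonthStart
      AND r.Gender = g.Gender
GROUP BY m.MonthStart, g.Gender;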
parent | Name       | Checked | ContactMethod | Check2 | Other
974198 | Employment | true    | Face to Face  | true   | null
974224 | Other      | true    | Face to Face  | true   | skills
974224 | Other      | true    | Contact       | true   | skills
I'd like to pivot on "parent"
In a perfect world I'd like to see output like
974198 | Employment | true | Face to Face | true | null
974224 | Other | true | Face to Face, Collateral Contact | true | skills
If there is more than one Name or ContactMethod for the same parent, they would be strung together with commas.
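A minimal sketch (the table name is hypothetical): on SQL Server 2008 there is no STRING_AGG, so the usual trick is FOR XML PATH('') plus STUFF to build the comma-separated list per parent:

SELECT p.parent,
       p.Name,
       STUFF((
           SELECT ', ' + c.ContactMethod
           FROM dbo.ContactRows AS c
           WHERE c.parent = p.parent
           FOR XML PATH(''), TYPE
       ).value('.', 'varchar(max)'), 1, 2, '') AS ContactMethods
FROM (SELECT DISTINCT parent, Name FROM dbo.ContactRows) AS p;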
I need to update the Denominator column in one row with the value from the Numerator column in a different row. For example the last row in the table is
MeasureID = c010, GroupID = A, Numerator = 92, Denominator = NULL
I need to update the Denominator, which is currently NULL, with the value from the Numerator where the MeasureID=c001 and GroupID=A.
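A minimal sketch (the table name is hypothetical): a scalar subquery pulls the Numerator from the c001/A row into the Denominator of the c010/A row:

UPDATE t
SET t.Denominator = (
        SELECT src.Numerator
        FROM dbo.Measures AS src
        WHERE src.MeasureID = 'c001'
          AND src.GroupID = 'A'
    )
FROM dbo.Measures AS t
WHERE t.MeasureID = 'c010'
  AND t.GroupID = 'A';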
I have XML data passed to the stored proc in the following format, and within the stored proc I am accessing the data of the XML using the nodes() method.
Here is an example of what I am doing:
DECLARE @Participants XML
SET @Participants = '<ArrayOfEmployees xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <Employees EmpID="1" EmpName="abcd" />
    <Employees EmpID="2" EmpName="efgh" />
</ArrayOfEmployees>'

SELECT Participants.Node.value('@EmpID', 'INT') AS EmployeeID,
       Participants.Node.value('@EmpName', 'VARCHAR(50)') AS EmployeeName
FROM @Participants.nodes('/ArrayOfEmployees/Employees') AS Participants (Node)
I saved the result into a CSV file and then truncated the table. Now I am trying to bulk insert the data back into the table, so I used:
BULK INSERT rdb.dbo.scd_event_tab
FROM 'C:\users\sluintel.ctr\desktop\eventtab.csv'
WITH (
    CODEPAGE = 'RAW',
    DATAFILETYPE = 'native',
    FIELDTERMINATOR = ' ',
    KEEPIDENTITY,
    KEEPNULLS
);
GO
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
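For what it's worth, DATAFILETYPE = 'native' expects binary bcp output, not a character CSV, which by itself can produce exactly these conversion and terminator errors. A minimal sketch of a character-mode version; the field and row terminators here are assumptions that must match the actual file:

BULK INSERT rdb.dbo.scd_event_tab
FROM 'C:\users\sluintel.ctr\desktop\eventtab.csv'
WITH (
    DATAFILETYPE = 'char',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    KEEPIDENTITY,
    KEEPNULLS
);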
I have a query that needs to look for 5 specific records in a table. Basically, I need to hardcode them. Below is my query, which didn't work out:
SELECT BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD, data
FROM BF_DATA
WHERE (BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD) IN   ***** i guess this is the wrong query ****
    ('A1', 'B1', 'C1')
    ('A2', 'B2', 'C2')
    ('A3', 'B3', 'C3')
    ('A4', 'B4', 'C4')
    ('A5', 'B5', 'C5')
but if I use the query below, it will generate more records than these 5:
SELECT BF_ORGN_CD, BF_BDOB_CD, BF_TM_PERD_CD, data
FROM BF_DATA
WHERE BF_ORGN_CD IN ('A1', 'A2', 'A3', 'A4', 'A5')
  AND BF_BDOB_CD IN ('B1', 'B2', 'B3', 'B4', 'B5')
  AND BF_TM_PERD_CD IN ('C1', 'C2', 'C3', 'C4', 'C5')
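T-SQL has no multi-column IN, but joining to a VALUES table constructor matches exactly those 5 triples and nothing else. A minimal sketch:

SELECT d.BF_ORGN_CD, d.BF_BDOB_CD, d.BF_TM_PERD_CD, d.data
FROM BF_DATA AS d
JOIN (VALUES
        ('A1', 'B1', 'C1'),
        ('A2', 'B2', 'C2'),
        ('A3', 'B3', 'C3'),
        ('A4', 'B4', 'C4'),
        ('A5', 'B5', 'C5')
     ) AS k (orgn, bdob, perd)
  ON  d.BF_ORGN_CD    = k.orgn
  AND d.BF_BDOB_CD    = k.bdob
  AND d.BF_TM_PERD_CD = k.perd;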
CREATE TABLE [MailBox].[Message](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [SenderId] [bigint] NOT NULL,
    [Message] [nvarchar](max) NOT NULL,
    [SentDate] [datetime] NOT NULL,
    CONSTRAINT [PK_MailBox.Message] PRIMARY KEY CLUSTERED
[Code] ....
I'm building messaging functionality into my application. I'm able to insert a message into the database, and this message then appears inside the other user's inbox. The issue I have is that when I click on a message to view the conversation, I make a call to the following sp, as shown here:
@UserId bigint,
@SenderId bigint
AS
BEGIN
    SET NOCOUNT ON;
[Code] .....
The problem with this is that I'm trying to join to the user photos table to return their profile pictures, but for some reason, even though I have specified IsProfilePic, I get all the photos returned. Instead it should be two photos, one for @UserId and the other for @SenderId; it's equivalent to me doing this:
Select * From [User].[User_Photos] where (UserId = 1 or UserId = 2) and IsProfilePic = 1
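One likely culprit inside the sp is operator precedence: AND binds more tightly than OR, so without parentheses the IsProfilePic = 1 test applies to only one branch of the OR. A minimal sketch of the filter as it probably needs to be written (column names are hypothetical):

SELECT up.UserId, up.PhotoPath
FROM [User].[User_Photos] AS up
WHERE (up.UserId = @UserId OR up.UserId = @SenderId)  -- parentheses are essential
  AND up.IsProfilePic = 1;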
I have records in Excel format (Excel 2010) and I would like to bulk import them into SQL Server 2008 and, while importing, have SQL Server automatically create a new table based on the header row of the source file.
I am not sure if SQL Server 2008 has this capability.
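It can, via a distributed query: SELECT ... INTO creates the table from the sheet's header row. A minimal sketch, assuming the ACE OLE DB provider is installed and 'Ad Hoc Distributed Queries' is enabled (the path and sheet name are hypothetical):

SELECT *
INTO dbo.ImportedSheet
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\Data\source.xlsx;HDR=YES',
    'SELECT * FROM [Sheet1$]'
);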
Production and development servers are on different domains and they do not trust each other. How do I export the data from table t1 in database db1 in production and load it into table t1 in database db1 in development?
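One route that sidesteps the missing domain trust is a distributed query with SQL authentication. A minimal sketch run from the development server (server name and login are hypothetical; 'Ad Hoc Distributed Queries' must be enabled), assuming dev's t1 already exists:

INSERT INTO db1.dbo.t1
SELECT src.*
FROM OPENROWSET('SQLNCLI',
     'Server=PRODSERVER;UID=bulk_user;PWD=*****;',
     'SELECT * FROM db1.dbo.t1') AS src;

For very large tables, bcp out on production and bcp in on development with SQL logins works equally well.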
I want to store images as binary data in a SQL table and compare each image file I receive against it. I've tried the approach below but am getting an error:
DROP TABLE #BLOBTest
CREATE TABLE #BLOBTest
(
    TestID int IDENTITY(1,1),
    BLOBName varchar(50),
    BLOBData varbinary(MAX)
);
[Code] ....
Error:
Msg 4861, Level 16, State 1, Line 10
Cannot bulk load because the file "C:\Files\12656.jpg" could not be opened. Operating system error code 3 (failed to retrieve text for this error. Reason: 15105).
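Operating system error 3 is "path not found", and the path is resolved on the server by the SQL Server service account, not by the client, so a server-local path or a UNC share the service account can read is needed. A minimal sketch of the load itself (the path is hypothetical):

INSERT INTO #BLOBTest (BLOBName, BLOBData)
SELECT '12656.jpg', img.BulkColumn
FROM OPENROWSET(BULK N'\\FileServer\Files\12656.jpg', SINGLE_BLOB) AS img;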
When you load data into a new partitioned table, can it be done online without any downtime? I ask because I have a few tables that are around 250 gigs and more.
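One pattern that keeps the partitioned table available is to load a staging table separately and then switch it in, which is a metadata-only operation. A minimal sketch with hypothetical names; the staging table must match the schema exactly, sit on the target partition's filegroup, and carry a CHECK constraint covering that partition's range:

ALTER TABLE dbo.Staging_2014 SWITCH TO dbo.BigTable PARTITION 3;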
What I am trying to do is count persons in the buckets "non-recidivists" and "recidivists" based on how many bkg_nbr they have per STATE_NBR. If they have more than 1 bkg_nbr per STATE_NBR, then put them in the "recidivists" bucket. If they only have a 1-to-1, then put them in the "non-recidivists" bucket.
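A minimal sketch (the table name is hypothetical; column names follow the description): count distinct bookings per person first, then bucket on the count:

WITH PerPerson AS (
    SELECT STATE_NBR,
           COUNT(DISTINCT bkg_nbr) AS Bookings
    FROM dbo.Bookings
    GROUP BY STATE_NBR
)
SELECT CASE WHEN Bookings > 1 THEN 'recidivists' ELSE 'non-recidivists' END AS Bucket,
       COUNT(*) AS Persons
FROM PerPerson
GROUP BY CASE WHEN Bookings > 1 THEN 'recidivists' ELSE 'non-recidivists' END;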
I am trying to create a trigger on a table. Let's call it table ABC. The table looks like this:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABC](
    [id] [uniqueidentifier] NOT NULL,
[Code] ....
When someone updates a row in table ABC, I want to insert the original values, along with the current date and time from getdate(), into table ABCD, with the current date and time going into the updateDate field as defined below:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ABCD](
    [id] [uniqueidentifier] NOT NULL,
[Code] .....
The trigger I've currently written looks like this:
/****** Object: Trigger [dbo].[ABC_trigger] Script Date: 4/10/2015 1:32:33 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[ABC_trigger] ON [dbo].[ABC]
[Code] ...
This trigger works, but it inserts all of the rows every time. My question is: how can I get the trigger to insert only the rows that were just updated?
I can't sort by uniqueidentifier descending, as those can be random.
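No sorting is needed: inside an AFTER UPDATE trigger, the deleted pseudo-table already holds exactly the pre-update values of the affected rows. A minimal sketch (columns other than id are hypothetical):

CREATE TRIGGER [dbo].[ABC_trigger] ON [dbo].[ABC]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- deleted contains only the rows touched by the UPDATE, with their old values.
    INSERT INTO dbo.ABCD (id, SomeColumn, updateDate)
    SELECT d.id, d.SomeColumn, GETDATE()
    FROM deleted AS d;
END;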
Is there a way to see the data of a table variable in the SSMS debugger? For example, if I set a breakpoint in SSMS and look at a populated table variable named @MyTable in the Locals tab at the bottom of the IDE, a value of "(table)" is displayed. There does not appear to be a way to expand or drill into this variable in the debugger to see the data. Is there a way to do this through the debugger, or do you use an alternate approach when using the SSMS debugger?
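A common workaround, since the debugger shows only "(table)": copy the contents into an XML local, which the Locals window can display in full. A minimal sketch:

-- Place before the breakpoint; inspect @MyTableXml in the Locals tab.
DECLARE @MyTableXml xml = (SELECT * FROM @MyTable AS t FOR XML RAW, TYPE);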
Or can it record before and after column changes based on the LSN only?
An extract from a file based legacy accounting system is performed every night. The system does not have a primary key because transactions are managed through program code. (the more things change...). The extract is copied to text in Unix and FTP'd to Windows, where the file is loaded into SQL Server by kill & fill. Because of the expense of modifying the source system, there is enormous inertia/resistance to injecting a primary key at the source, so kill & fill it stays.
In reading about Change Data Capture, it seemed to me that column-level inserts, updates, and deletes are stored in tables that remember the before and after content of each column tracked. In my reading I have seen many references to the LSN to decide when and what to record as changed, but I have not seen any reference to the necessity of a primary key for Change Data Capture to work. This is in contrast to replication, where the requirement for the existence of a primary key is made plain.
Is it possible to use Change Data Capture against a table without a primary key? And how would I use it to change the extract from kill-and-fill to incremental?
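For what it's worth, sys.sp_cdc_enable_table does not require a primary key; only net-changes mode needs a unique index. A minimal sketch (the table name is hypothetical):

EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'LegacyExtract',
    @role_name     = NULL,
    @supports_net_changes = 0;  -- all-changes mode; net changes would need a unique index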
I have a table that is 300+ GB and holds 10 years of data. I need to delete 5 years of data and put it on another server so I can have more space.
If I delete 5 years of data, the transaction log gets huge, and the database even gets bigger because of the .ldf file, which keeps growing. I think I can shrink the log file and the data file afterwards. Is this the best way to do it?
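A minimal sketch of the usual alternative to one giant DELETE (table and column names are hypothetical): delete in small batches so each transaction stays short, and let the regular log backups run between batches to keep the log from ballooning:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.BigTable
    WHERE EventDate < DATEADD(YEAR, -5, GETDATE());

    SET @rows = @@ROWCOUNT;
END;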