I am using a file path as a package parameter in an SSIS 2012 package. The parameter's value is a file path that contains spaces, because some of the folder names have spaces in them. When I execute the package from a SQL Agent job, I get the following error:
Executed as user: xxxxx Microsoft (R) SQL Server Execute Package Utility Version 11.0.5058.0 for 64-bit Copyright (C) Microsoft Corporation. All rights reserved. The argument ""p_test";""xxxxxxx.comfilesCommon2015 "TEST" "Event" "ZONEDataCust" FilesOut" /Par "$ServerOption::LOGGING_LEVEL(Int16)";1 /Par "$ServerOption::SYNCHRONIZED(Boolean)";True /CALLERINFO "SQLAGENT" /REPORTING "E" " has mismatched quotes. The command line parameters are invalid. The step failed.
Parameter name: p_test Parameter value: xxxxxxx.comfilesCommon2015 TEST Event ZONEDataCust FilesOut
I am specifying this value in the SQL Agent job. The package works fine locally with the same parameter value. I have tried wrapping the entire path in double quotes, but that did not work. How can I get this resolved?
I think I am definitely thrashing and not getting anywhere on something that should be pretty simple to accomplish: I need to pull the total amounts for compartments with different products that fall under the same manifest and the same document number, conditionally based on whether the document types are "Starting" or "Ending", while the values themselves come from the "Adjust" records.
So here is the DDL, sample data, and the ideal return rows
CREATE TABLE #InvLogData
(
    Id BIGINT,          -- is actually an identity column
    Manifest_Id BIGINT,
    Doc_Num BIGINT,
    Doc_Type CHAR(1),   -- S = Starting, E = Ending, A = Adjust
    Compart_Id TINYINT,
[Code] ....
I have tried combinations of statements like the one below, but I keep coming back to not being able to actually grab the correct rows.
SELECT DISTINCT X FROM #InvLogData GROUP BY X HAVING COUNT(DISTINCT X) > 1
One further minor problem: I need this to be a set-based solution. This table grows by a couple hundred thousand rows a week; a co-worker suggested using a <shudder/> cursor to do the work, but that would never be performant.
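To be concrete, here is a minimal set-based sketch of the shape I think I need, assuming the elided part of the DDL includes an Amount column (that name is my guess, since the full DDL is cut off above):

-- Sum the Adjust amounts per manifest/doc/compartment, but only where the
-- same manifest and doc number also carry a Starting or Ending record.
SELECT  a.Manifest_Id,
        a.Doc_Num,
        a.Compart_Id,
        SUM(a.Amount) AS TotalAmount
FROM    #InvLogData AS a
WHERE   a.Doc_Type = 'A'
  AND   EXISTS (SELECT 1
                FROM #InvLogData AS s
                WHERE s.Manifest_Id = a.Manifest_Id
                  AND s.Doc_Num = a.Doc_Num
                  AND s.Doc_Type IN ('S', 'E'))
GROUP BY a.Manifest_Id, a.Doc_Num, a.Compart_Id;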
I want to write a query where I can see all races, with each age range as a column.
TblRace
ID, RaceName
TblAgeRange
ID, AgeRange
There is no connection between these two tables. I need to display the result like below.
Race 17-20 21-30 31-40
A
B
I
W
How do I get this kind of empty data set so that I can fill it out in the front end, or is there a better solution? The age ranges will be displayed as however many rows they have; the list is not static. The above is just an example.
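One way I can picture building the empty grid is a plain CROSS JOIN (a minimal sketch; the NULL Value column is my own placeholder):

-- Every race paired with every age range, with an empty cell to fill in.
SELECT  r.RaceName,
        a.AgeRange,
        CAST(NULL AS int) AS Value
FROM    TblRace AS r
CROSS JOIN TblAgeRange AS a
ORDER BY r.RaceName, a.AgeRange;

Turning the age ranges into real columns would take dynamic PIVOT, since the ranges are not static; pivoting this flat result in the front end may be simpler.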
Our business gets orders through the week, with weekend (Fri & Sat) orders being higher than weekdays. I want to graph this year's data against last year's, and possibly the years before, but compare days in such a way that all the weekdays line up: comparing 2015 week 1 with 2014 week 1, but with 03/01/2015 (Sat) lining up with 04/01/2014 (Sat), etc.
I'm looking for alternatives to adding or removing days from the dates to solve this. I have a date dimension table for the past 5 years that I can use to compare calendar week 201401 with calendar week 201501, but I am finding that a bit inflexible.
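A minimal sketch of the kind of self-join on the date dimension I have in mind, assuming it carries DateKey, ISOWeekOfYear, ISODayOfWeek, and CalendarYear columns (those names are my assumption):

-- Align dates by ISO week and day-of-week so Saturdays line up with
-- Saturdays across years.
SELECT  cur.DateKey  AS CurrentDate,
        prev.DateKey AS SameWeekdayLastYear
FROM    dbo.DimDate AS cur
JOIN    dbo.DimDate AS prev
    ON  prev.ISOWeekOfYear = cur.ISOWeekOfYear
    AND prev.ISODayOfWeek  = cur.ISODayOfWeek
    AND prev.CalendarYear  = cur.CalendarYear - 1
WHERE   cur.CalendarYear = 2015;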
I was wondering if there is a best-practice minimum set of permissions for creating a SQL login to use when setting up a new shared data source for SSRS Report Manager.
Something along the lines of the login being a data reader for the DB, plus permission to update tempdb?
I would have thought it inadvisable for the login to be able to update the main DB...
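A minimal sketch of what I mean (login, password, and database names are all placeholders):

-- Read-only login for the SSRS shared data source (SQL 2012 syntax).
CREATE LOGIN ssrs_reader WITH PASSWORD = 'StrongPasswordHere!1';

USE ReportSourceDB;   -- placeholder report source database
CREATE USER ssrs_reader FOR LOGIN ssrs_reader;
ALTER ROLE db_datareader ADD MEMBER ssrs_reader;

No explicit tempdb grant should be needed: any login can create #temp tables in tempdb by default.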
I am trying to delete rows whose ModifiedDate is older than 9 years from tables in the AdventureWorks2012 database. The console notifies me that the foreign keys are dropped, but the DELETE statement is still throwing errors. I am sure that somewhere the key constraints are not getting altered, but I'm not able to figure it out, as I'm a relative beginner with T-SQL. The error and code:
The DELETE statement conflicted with the REFERENCE constraint "FK_SalesOrderHeaderSalesReason_SalesReason_SalesReasonID". The conflict occurred in database "AdventureWorks2012", table "Sales.SalesOrderHeader

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$option_drop = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions;
$option_drop.ScriptDrops = $true;
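The constraint name points at the usual cause: the referencing rows have to go first. A minimal sketch of the ordering for this one constraint (SalesOrderHeader has other children, such as SalesOrderDetail, that need the same child-first treatment before the second DELETE will succeed):

-- Delete from the child table before the parent, so the REFERENCE
-- constraint is never violated.
DELETE sr
FROM  Sales.SalesOrderHeaderSalesReason AS sr
JOIN  Sales.SalesOrderHeader AS soh
  ON  soh.SalesOrderID = sr.SalesOrderID
WHERE soh.ModifiedDate < DATEADD(YEAR, -9, GETDATE());

DELETE FROM Sales.SalesOrderHeader
WHERE ModifiedDate < DATEADD(YEAR, -9, GETDATE());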
Inside an SSIS 2012 project I need to see a newly installed SSIS component, but in SSDT 2010 I cannot see the SSIS Data Flow Items tab for adding a data source/data destination in the Choose Toolbox Items pane.
Background: in my current company the business users maintain a huge quantity of master data using Excel. A series of SSIS jobs is then edited and manually executed.
Goal: the challenge is to replace this process using MDS. One of the requested features is the possibility for the users to edit or insert new master data using the Web UI or the Excel Add-in and, when they are done, to perform a merge of the master data into the target, in this case the reporting DB.
The perfect solution for me would be something like triggering the execution of an SSIS package that exports the data from the subscription views to the reporting DB after the business rules have been applied to a specific entity.
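The export step itself could be as simple as the sketch below (the view and table names are placeholders for whatever the package would actually run):

-- Refresh the reporting copy of one entity from its MDS subscription view.
TRUNCATE TABLE Reporting.dbo.Customer;

INSERT INTO Reporting.dbo.Customer (CustomerCode, CustomerName)
SELECT Code, Name
FROM   MDS.mdm.Customer_SubscriptionView;   -- placeholder view name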
I am using SQL Server 2012, and part of the data captured by CDC is not making sense to me.
I have a table called 'Schema.Table1', and I enabled CDC on it by running 'sys.sp_cdc_enable_table'. I see that a table called 'cdc.Schema_Table1_CT' got created, which now gets an entry whenever I insert, update, or delete a record in the original table.
Up to this point everything works fine.
My original table has a NOT NULL INT column called 'AuditTrackerUserID' with a default value of 1996. My application does not provide a value for this column, but because the column itself has a default value, records get inserted without error.
When I execute the following query, I see multiple records with an __$operation of 3 and 1.
SELECT * from cdc.Schema_Table1_CT where AuditTrackerUserID IS NULL
My expectation is that this query should never return any records, because AuditTrackerUserID is a NOT NULL column, but it does.
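For reference, the __$operation codes in a CDC change table are 1 = delete, 2 = insert, 3 = update (before image), and 4 = update (after image), so what I am seeing is NULLs on deletes and update before-images. A small sketch of how I am counting them:

-- Break the unexpected NULLs down by operation type.
SELECT __$operation, COUNT(*) AS RowsCaptured
FROM   cdc.Schema_Table1_CT
WHERE  AuditTrackerUserID IS NULL
GROUP BY __$operation;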
I am currently in the process of migrating data from Sybase to SQL Server and would like to know how to test the migrated data.
As of now, we took one table's data from both source and destination and compared them in Excel to check whether the migrated data looks good (note: we used SSIS to migrate the data). However, I would like to check whether there are any other good and easy ways to approach data validation post-migration.
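Beyond eyeballing it in Excel, a minimal set-based sketch once both copies are reachable from the SQL Server side, e.g. via a linked server for the Sybase end (database and table names are placeholders, and it assumes matching schemas):

-- Rows present in the source but missing or different in the target.
SELECT * FROM SourceDB.dbo.Customer
EXCEPT
SELECT * FROM TargetDB.dbo.Customer;

-- Quick sanity check on volumes.
SELECT (SELECT COUNT(*) FROM SourceDB.dbo.Customer) AS SourceRows,
       (SELECT COUNT(*) FROM TargetDB.dbo.Customer) AS TargetRows;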
WHERE interid IN ('comp1', 'comp2', 'comp4', 'comp5')
What would be the best way, using these scripts, to pull the data into my testDW and not have duplicate data issues?
I was thinking of using a staging DB on the GP cluster and then building an import data package to run nightly. The issue I had was: how do I avoid duplicate data?
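A minimal sketch of the de-duplicating load from staging into the warehouse, assuming some business key identifies a row (all names here are placeholders):

-- Insert only the staging rows the warehouse has not already loaded.
INSERT INTO testDW.dbo.Orders (OrderId, InterId, Amount)
SELECT s.OrderId, s.InterId, s.Amount
FROM   Staging.dbo.Orders AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM testDW.dbo.Orders AS d
                   WHERE d.OrderId = s.OrderId
                     AND d.InterId = s.InterId);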
I need a recommendation on a data modeling tool that can be used with a data warehouse. My warehouse is running SQL 2012.
Here is my challenge: most of the tables in the warehouse do not have primary keys, and none of the tables have foreign keys on them. However, there are indexes and unique keys/indexes on the tables. I am looking for a tool in which I can create virtual relationships describing how the data is related, so it is visually easier for the ETL developers to write the code.
I have looked at both ER/Studio 11 and ERwin 9.6. Neither of them does it exactly the way I want it to. However, ER/Studio is pretty close.
I am trying to configure the data collector on my server. I configured the data collector on server A, with the setup on server B, but the "Query Statistics" collection set does not show me any data.
I right-click and select the "Collect and Upload Now" item and get a success result, but in the report I can't see any data...
Also, on the log page of the data collection I see many errors with messages like this:
"Failed to create kernel event for collection set: {2DC02BD6-E230-4C05-8516-4E8C0EF21F95}. Inner Error ------------------> Cannot create a file when that file already exists."
I tried some solutions, like disabling and enabling it again, re-configuring, removing and configuring again... but none of them worked.
I am not sure how to tackle the following. The environment has SQL Server 2005 and above, hosting one database (table1 & table2) per server. I have created a new SQL 2012 server with a database called Collection (table1 & table2), which will be the central point. I need to do the following:
1. Connect to each server and get the data from a specific table.
2. Add an additional column called "datasourceserver" holding the name of the server the data came from.
3. Be able to schedule this task so that new data is synced to the SQL 2012 server.
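A minimal sketch of steps 1 and 2 using a linked server per source (server, database, and column names are placeholders); step 3 would be this wrapped in an Agent job:

-- Pull rows from one remote server and stamp where they came from.
INSERT INTO Collection.dbo.table1 (Col1, Col2, datasourceserver)
SELECT r.Col1, r.Col2, 'ServerA'
FROM   [ServerA].SourceDb.dbo.table1 AS r
WHERE  NOT EXISTS (SELECT 1
                   FROM Collection.dbo.table1 AS c
                   WHERE c.Col1 = r.Col1
                     AND c.datasourceserver = 'ServerA');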
I'm trying to get the PSI Outcome, Expected, and PSIIndex for every month, whether it has data or not. I created a CTE and left outer joined it with the PSI table, but it's still not pulling every month for every PSIKey.
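The usual gotcha here is that the month list has to be cross-joined with the key list before the left join; otherwise months with no rows for a given key drop out. A sketch with assumed table and column names:

-- Build the full Months x PSIKeys grid first, then hang the facts off it.
;WITH Months AS (
    SELECT DISTINCT MonthKey FROM dbo.DimDate        -- assumed month source
), Keys AS (
    SELECT DISTINCT PSIKey FROM dbo.PSI
)
SELECT  k.PSIKey, m.MonthKey, p.Outcome, p.Expected, p.PSIIndex
FROM    Months AS m
CROSS JOIN Keys AS k
LEFT JOIN dbo.PSI AS p
    ON  p.PSIKey = k.PSIKey
    AND p.MonthKey = m.MonthKey;

Also worth checking: any filter on the PSI columns has to live in the ON clause, or the left join silently turns back into an inner join.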
1. Take a subset of data from about 100 tables that have multiple references to other tables in this group of 100, from a first DB.
2. Insert the above data into a second DB, a database that already has data in the 100 tables, while maintaining the correct references.
As a general approach, the best way I can think of doing this is as follows:
1. Create mapping tables (OldID, NewID) for every ID that is referenced in a different table.
2. Insert the old data into the new table and output the OldID and NewID into the mapping table.
3. Use that mapping data to make sure all tables that use those IDs get the new IDs in DB2.
This approach is extremely labor-intensive, both in the initial implementation and in the fairly substantial amount of work needed to maintain it going forward.
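For what it's worth, step 2 can at least be done in one statement per table (a sketch; database, table, and column names are placeholders). A plain INSERT ... OUTPUT cannot reference the source's old ID, which is why the MERGE with an always-false match condition is the usual trick:

DECLARE @CustomerMap TABLE (OldID int, NewID int);

-- Copy rows and capture old -> new identity mappings in one pass.
MERGE INTO DB2.dbo.Customer AS tgt
USING (SELECT CustomerID, Name FROM DB1.dbo.Customer) AS src
    ON 1 = 0                             -- never matches, so always insert
WHEN NOT MATCHED THEN
    INSERT (Name) VALUES (src.Name)
OUTPUT src.CustomerID, inserted.CustomerID INTO @CustomerMap (OldID, NewID);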
Here is my query; I don't know whether I'm getting it right:
--Quarter 1
SELECT  D.MerchantName,
        A.MID,
        A.TID,
        ISNULL(SUM(A.SumTrxnMon), 0) AS SumTrxnMon,
        E.FullName,
        E.DxBEmail
INTO    #Quarter1
FROM    dbo.tblRPT_Spend AS A
INNER JOIN dbo.tblMer_DeployORetrieveTerm AS B
    ON  A.MID = B.MID AND A.TID = B.TID
INNER JOIN
We have a query that joins column A, which is an int, onto column B, which contains only ints but was created as a varchar and can't be changed to an int at the moment.
Casting column A as a varchar in the ON clause of the join seems to void the index altogether, and the query just runs forever.
We are talking a few hundred million rows of data in each table.
The temporary solution is to SELECT INTO a #Hash table with the correct data type, index it, and then use the #Hash table in the join.
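An option that might avoid rebuilding the #Hash table every time is a persisted computed column on the varchar side (a sketch with placeholder names; it assumes every value in B really does convert to int, since the ALTER fails otherwise):

-- Materialize the int version of the varchar column and index it, so the
-- join stays int = int and can still seek.
ALTER TABLE dbo.TableB
    ADD B_AsInt AS CAST(B AS int) PERSISTED;

CREATE INDEX IX_TableB_B_AsInt ON dbo.TableB (B_AsInt);

-- Then join on the computed column:
-- ... ON a.A = b.B_AsInt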
Today I have a very similar situation, only this time I am dealing with missing text data, not numeric data.
DECLARE @MissingTextData TABLE (
    RowID int,
    UserID int,
    EmailAddress varchar(20),
    StreetAddress varchar(20)
[code]...
I would like to fill in the NULL columns with data from the other row, and then select the one row that is filled with all the data. I was able to use MAX() for a numeric value, but I am really stumped on the text data. Everything that I have tried is not working.
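For what it's worth, MAX() is not limited to numeric values; it works on varchar too (it returns the greatest non-NULL value by collation order, which collapses one-value-plus-NULLs exactly like the numeric case). A sketch against the table above:

-- Collapse the split rows into one filled-in row per user.
SELECT  UserID,
        MAX(EmailAddress)  AS EmailAddress,
        MAX(StreetAddress) AS StreetAddress
FROM    @MissingTextData
GROUP BY UserID;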
I was assigned a project to read a 2 GB binary data file and load three of its roughly 50 columns (OrderID, OrderDate, and Price) into a SQL table.
Where do I start? Do I need to convert the entire binary data file into a text file?
I know that SQL Server runs on Windows Server. Is it possible to download data on UNIX using a shell script? Does SQL Server have any binary or command-line client for any operating system other than Windows?
What I need is to split the data into two columns: if the data in column Main starts with 'PR-' then output the value to column P, and if it starts with 'CC-' then to column C (the output needs to be in one table).
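A minimal sketch of the split (the table name is a placeholder):

-- Route each Main value into the P or C column based on its prefix.
SELECT  Main,
        CASE WHEN Main LIKE 'PR-%' THEN Main END AS P,
        CASE WHEN Main LIKE 'CC-%' THEN Main END AS C
FROM    dbo.SourceTable;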
I am using SQL 2012 SE. I have two databases, say A and B, with the same structure and relationships. There are 65 tables in each database. A is already replicating data to database C for 35 of the tables. Now, every day, I need to move the data newer than getdate()-1 from A to B for all the tables, and once the move is done I need to delete that data from A. Then the same thing the next day, and every day. Since this is for 65 tables, it's challenging to identify the insert order. Once the insert order is identified, the delete order will be the reverse of it.
Is there a tool or any SP that could generate the insert-order script? Generate Scripts with data only scripts out the entire data set, and these databases are almost 400 GB; some tables have 200M+ rows, so it takes forever.
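I don't know of a stock SP, but the dependency order itself can be derived from the catalog views. A sketch of the starting point (recursion over sys.foreign_keys; it skips self-references but assumes no multi-table circular references):

-- Rank tables by foreign-key depth: level 0 has no parents, level 1
-- references only level 0, and so on. Insert in ascending level order;
-- delete in descending order.
;WITH deps AS (
    SELECT  t.object_id, 0 AS lvl
    FROM    sys.tables AS t
    WHERE   NOT EXISTS (SELECT 1 FROM sys.foreign_keys AS fk
                        WHERE fk.parent_object_id = t.object_id)
    UNION ALL
    SELECT  fk.parent_object_id, d.lvl + 1
    FROM    sys.foreign_keys AS fk
    JOIN    deps AS d ON d.object_id = fk.referenced_object_id
    WHERE   fk.parent_object_id <> fk.referenced_object_id  -- skip self-refs
)
SELECT  OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
        OBJECT_NAME(object_id)        AS TableName,
        MAX(lvl)                      AS InsertOrder
FROM    deps
GROUP BY object_id
ORDER BY InsertOrder;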
I have system ID information in a table system_ids, along with product IDs. The system ID information column has a lot of data, but I am looking for two strings in that data to pull into two separate columns. Details below.
Database versions: MS SQL 2008/2012. Table name: system_ids. Column: system_id_information.
Sample data from the system_id_information column:
######################################## <obj xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:vim25" versionId="5.5" xsi:type="ArrayOfHostSystemIdentificationInfo"><HostSystemIdentificationInfo xsi:type="HostSystemIdentificationInfo"><identifierValue> unknown</identifierValue><identifierType><label>Asset Tag</label><summary>Asset tag of the system</summary><key>AssetTag</key></identifierType>
[Code] .....
I am looking for the output of two columns, shown below:
product_id    snumber
654081-B21    MXQ43905SW
For the serial number, this part is common:
before string: HostSystemIdentificationInfo"><identifierValue>
after string: </identifierValue><identifierType><label>Service tag
The snumber always sits between the before and after strings; the number of characters in the snumber varies, and the entire data for a row varies as well.
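Since the column holds XML (per the sample above), one option is to parse it rather than string-slice it. A sketch, assuming every row casts cleanly to XML (the namespace is taken from the sample, the table/column names from above):

-- Pick the identifierValue whose label is 'Service tag'.
;WITH XMLNAMESPACES (DEFAULT 'urn:vim25')
SELECT  s.product_id,
        LTRIM(x.doc.value('(/obj/HostSystemIdentificationInfo
                   [identifierType/label="Service tag"]/identifierValue)[1]',
                   'varchar(64)')) AS snumber
FROM    dbo.system_ids AS s
CROSS APPLY (SELECT CAST(s.system_id_information AS XML) AS doc) AS x;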
I am fetching a large amount of data from Teradata to SQL Server using a linked server. I am getting the below error:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)