I'm hoping someone can help me out with this problem.
I have a database called Tracking.mdf and its associated log file Tracking_log.ldf.
I need to build another database based on the above, with more tables, and keep the data separate, hence the requirement to make a copy of the Tracking data file. I've tried changing its name, but then it will not attach. I can change the database name to whatever I want, but I can't find out how to change the actual file names to, say, Tracker.mdf and Tracker_log.ldf.
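For reference, one common approach is to detach, copy and rename the files, then attach the copy under the new name. A minimal sketch, assuming SQL Server 2005+; the paths and the logical file names Tracking/Tracking_log are assumptions:

EXEC sp_detach_db 'Tracking';
-- Copy Tracking.mdf / Tracking_log.ldf to Tracker.mdf / Tracker_log.ldf in the
-- file system, re-attach the original, then attach the copy as a new database:
CREATE DATABASE Tracker
ON (FILENAME = 'C:\Data\Tracker.mdf'),
   (FILENAME = 'C:\Data\Tracker_log.ldf')
FOR ATTACH;
-- Optionally align the logical names with the new physical names:
ALTER DATABASE Tracker MODIFY FILE (NAME = Tracking,     NEWNAME = Tracker);
ALTER DATABASE Tracker MODIFY FILE (NAME = Tracking_log, NEWNAME = Tracker_log);

A BACKUP followed by RESTORE ... WITH MOVE to the new file names achieves the same result without taking the source database offline.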
Hi, I am using SQL Server Business Intelligence Development Studio for SSIS. I want to change the Excel file name and sheet name for an Excel source at run time. Please suggest how this is possible.
Hi, I would like to use something like this:
=IIF(table1_group2 is Collapsed,True,False)
I want to generate an event based on the actual state of, e.g., a report group: if the user expands the group, fire one event; otherwise (the group is collapsed), fire another.
I have a view in SQL Server 2005. It took 30 seconds to finish. Then I deleted 4,500 records from one table that is used in the view, and now it takes 90 seconds. I compared the Actual Execution Plan from before and after the delete; they are almost the same, the only difference being that the actual number of rows is lower after the delete. So I wonder why less data takes more time. Looking closely at the Actual Execution Plan, the odd thing is that there are only estimated operator costs on each step, no actual operator costs. I suspect the optimizer went wrong by reusing the same execution plan, but how can I tell which step is wrong without actual operator costs?
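One note: actual execution plans never show actual operator costs; the cost numbers are always estimates, and only the row and execution counts are actuals. A slowdown right after a large delete often points to stale statistics. A hedged sketch, with dbo.MyTable as a placeholder for the table the rows were deleted from:

UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;   -- rebuild statistics after the big delete
EXEC sp_recompile 'dbo.MyTable';               -- invalidate cached plans that reference it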
I am studying indexes and keys. I have a table whose first column is loaded with fixed-width data, which is parsed in a view based on the data types within the fixed-width specification.
Example: column A holds the packed string (name, phone, house, cost of house, zip code, county, state, country); a view will later split this large varchar string. Column B is the source file name of the data load (varchar(256)).
a. Would there be a benefit to adding a clustered or nonclustered index (if so, which one, and why)?
b. Is there a benefit to making one of these two columns a primary key (there are millions of records), or to adding a third new column as a PK?
c. The view parses the data in column A so it ends up looking more like name, phone, house, cost of house, zip code, county, state, country, each in its own column.
- Are there any pros/cons of adding indexes (and if so, which) to the view instead of the tables, or to both, once the data is parsed? (See the sketch after this list.)
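A minimal sketch of one common pattern for questions a-c, with all object names assumed for illustration: a narrow surrogate clustered primary key, a nonclustered index on the file-name column, and optionally an indexed view over the parsed columns.

CREATE TABLE dbo.staging_raw (
    raw_id      BIGINT IDENTITY(1,1) NOT NULL,   -- narrow, ever-increasing surrogate key
    raw_line    VARCHAR(8000) NOT NULL,          -- column A: the fixed-width payload
    source_file VARCHAR(256)  NOT NULL,          -- column B: source file name of the load
    CONSTRAINT PK_staging_raw PRIMARY KEY CLUSTERED (raw_id)
);

-- Helps queries that filter or group by load file:
CREATE NONCLUSTERED INDEX IX_staging_raw_source_file
    ON dbo.staging_raw (source_file);

-- Indexing the view itself requires SCHEMABINDING plus a unique clustered index,
-- and trades faster reads of the parsed columns for extra cost on every load:
-- CREATE VIEW dbo.v_parsed WITH SCHEMABINDING AS
--     SELECT raw_id, SUBSTRING(raw_line, 1, 30) AS name, ... FROM dbo.staging_raw;
-- CREATE UNIQUE CLUSTERED INDEX IX_v_parsed ON dbo.v_parsed (raw_id);

Clustering on a huge varchar like column A is rarely useful; at millions of rows, a new narrow key (question b) is usually the better choice.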
Let me preface by saying I am not very familiar with SSIS.
Ideally, since the Transfer SQL Server Objects task can do all tables, I would like to use it to copy only data from one server to a new server that has the tables pre-created. When I encounter any kind of error, in addition to the error information provided by SSIS, I also need the actual row data.
If the Transfer SQL Server Objects task can't do that, how would I loop through all the tables on an OLE DB source and capture the same error information on the destination? I figured out how to build a Data Flow for one table with a redirected error output, but that does not give me the actual row data.
I have a data flow task that moves all the rows from 18 tables on a production server to a Reporting Services server. One table, which does not contain the most rows (about 650K), reports that all rows were transferred. However, if I go into SQL Server Management Studio and do a SELECT COUNT(*) on the table, there are only 110K rows.
Scenario configuration: VB.NET 2005 code, a web service for executing SSIS on the server, SSIS packages deployed on the database server.
Problem description: we are developing a Windows application which calls a web service deployed on the same server as the SSIS packages. From the code we pass the file path in a variable and execute the package. But the SSIS result says that the file name "\\computername\fol1\fol2\fol3\fol4\abc.txt" specified in the connection was not valid.
More Information:
1. Full permissions are granted on this network folder.
2. The package executes successfully from the SSIS development solution (BIDS).
3. The deployed package executes successfully from the database server.
4. From the development PC the package executes successfully.
5. Other packages deployed on the same server execute successfully with the same configuration and scenario.
Only this package is not executing.
The only difference between this package and the others is that it uses a ".txt" extension in the Flat File connection and a Script task (VB).
Can anyone suggest an appropriate solution for this problem?
I've developed a data-driven subscription report. I have a parameter (named "Data") that is the result of a query returning the current date. It is working fine.
Now I would like to make one change: in step 4 of creating the data-driven subscription, we have the option to give the file a name. The name I gave was TestFile. In this option I'm also able to select a parameter instead of typing the name. Can I make something like TestFile_ & @Data, which would result in TestFile_01-02-2007? Or is the only way to add, in the parameter query, another field with this result?
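One way that sidesteps the limitation: since a data-driven subscription can map the "File name" delivery setting to any column of the subscription query, return the composed name as an extra field there. A sketch; the column aliases are assumptions:

SELECT GETDATE() AS Data,
       'TestFile_' + CONVERT(VARCHAR(10), GETDATE(), 105) AS FileName  -- style 105 = dd-mm-yyyy

In step 4, point "File name" at the FileName column instead of a static value, giving e.g. TestFile_01-02-2007.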
How do I use the Foreach Loop container to pass each file found according to a specified pattern to a Flat File source in a Data Flow task, so I can operate on each file the loop finds instead of having to specify a static file name?
Hi. If I need to delete some items with id = 10000 and then also update the remaining items with the same id, I will need to go through all the records to fetch the items with that id, right? Is there something I can use to hold those records so that I can do the delete and the update just on those records and don't need to query twice? Or is there a way to do it in one go? Thanks in advance!
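On SQL Server 2005 and later, the OUTPUT clause can capture the affected rows in one statement, so the id set is fetched only once. A minimal sketch, with dbo.Items and its columns assumed for illustration:

DECLARE @affected TABLE (id INT, qty INT);

-- Delete the unwanted rows and keep a copy of what was removed:
DELETE FROM dbo.Items
OUTPUT deleted.id, deleted.qty INTO @affected
WHERE id = 10000
  AND qty = 0;                        -- whatever marks the rows to remove

-- Update the remaining rows that share the same id, without re-querying:
UPDATE i
SET    i.last_modified = GETDATE()    -- your update logic here
FROM   dbo.Items AS i
WHERE  i.id IN (SELECT id FROM @affected);

Wrapping both statements in one transaction keeps the pair atomic if other sessions touch the same rows.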
I have just started using SQL Server Reporting Services and am stuck creating subreports.
I have added a subreport to the main report. When I right-click on the subreport, go to Properties -> Parameters, and click on the dropdown for Parameter Value, I see all the Sum and Count fields but not the plain data fields.
For example, in the dropdown list for the parameter value I see Sum(Fields!TASK_ID.Value, "AppTest") and Count(Fields!TASK_NAME.Value, "CammpTest"), but not Fields!TASK_NAME.Value or Fields!TASK_ID.Value, which are the fields retrieved from the dataset assigned to the subreport.
When I manually change the parameter value to Fields!TASK_ID.Value and try to preview the report, I get "Error: Subreport could not be shown." I have no idea what the underlying issue is, but I'm guessing it's because Fields!TASK_ID.Value is not in the dropdown, and I'm trying to link the main report and subreport on this field.
Am I missing something here? Any help is appreciated.
I have a requirement to implement CDC for 50+ tables, to feed incremental data changes into a warehouse/reporting system rather than exporting whole tables. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP db (daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of taking incremental changes on the tables?
Is there any performance impact if we enable CDC on the OLTP db?
Can we make use of the CDC tables in the environment where we do the daily db refresh, so that the queries don't hit the OLTP database?
What is the best way to implement CDC to take incremental changes for reporting?
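For reference, a minimal sketch of enabling CDC and pulling only the delta (SQL Server 2008+; the database, schema, table, and capture-instance names are placeholders):

USE OLTPDb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
     @source_schema        = N'dbo',
     @source_name          = N'BigTable',
     @role_name            = NULL,     -- no gating role
     @supports_net_changes = 1;        -- requires a PK or unique index

-- Extract only the changes within an LSN window, e.g. since the last pull:
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_BigTable');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_BigTable(@from_lsn, @to_lsn, N'all');

CDC does add overhead: the capture job reads the transaction log and writes every change into the _CT tables, so it is worth load-testing on the OLTP box. Also note that restoring a CDC-enabled database to another server drops the capture configuration unless the restore uses WITH KEEP_CDC.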
I want to make data changes in a read_only database, which is why I must set the database to read_write. While the database is in read_write mode, I want to be sure that no one else makes changes.
To that end, I wrote the code below, but I suspect that after setting the database to read_write, and until it is set to single_user, another user could still run DML. Is the code below enough for this operation, or is there another way?
Reminder: a read_only database cannot be set to single_user mode; that's why you must first set the database to read_write.
The code:
USE master;
ALTER DATABASE xxx SET READ_WRITE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE xxx SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

USE xxx;
UPDATE tablexxx SET columnxxx = yyy;

USE master;
ALTER DATABASE xxx SET READ_ONLY WITH ROLLBACK IMMEDIATE;
ALTER DATABASE xxx SET MULTI_USER WITH ROLLBACK IMMEDIATE;
I am using SQL Server 2012, and part of the data captured by CDC does not make sense to me.
I have a table called Schema.Table1, and I enabled CDC on it by running sys.sp_cdc_enable_table. I see that a table called cdc.Schema_Table1_CT got created, which now gets an entry whenever I insert, update, or delete a record in the original table.
Up to this point everything works fine.
My original table has a NOT NULL INT column called AuditTrackerUserID with a default value of 1996. My application does not provide a value for this column, but because the column has a default, records get inserted without error.
When I execute the following query, I see multiple records with an __$operation of 3 and 1.
SELECT * from cdc.Schema_Table1_CT where AuditTrackerUserID IS NULL
My expectation is that I should never see any record returned by this query, because AuditTrackerUserID is a NOT NULL column, but I do.
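To narrow it down, it may help to decode which operations the NULL rows carry (__$operation: 1 = delete, 2 = insert, 3 = update before-image, 4 = update after-image):

SELECT CASE __$operation
            WHEN 1 THEN 'delete'
            WHEN 2 THEN 'insert'
            WHEN 3 THEN 'update (before image)'
            WHEN 4 THEN 'update (after image)'
       END AS operation_name,
       *
FROM cdc.Schema_Table1_CT
WHERE AuditTrackerUserID IS NULL;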
Someone once showed me how to write a query against a DataTable; however, I need to get the data out of my database rather than the DataTable. Can someone show me how I should do it, especially the first line, DataTable dt = GetFilledTable()? Since I already have a set of data in my preset table, I should be getting the data from SqlDataSource1, right? (I am writing this in my code-behind, within <script></script> tags.) Can anyone help me?

protected void lnkRadius_Click(object sender, EventArgs e)
{
    DataTable dt = GetFilledTable();
    double radius = Convert.ToDouble(txtRadius.Text);

    // Row 0 is the check point; every other row is measured against it.
    decimal checkX = (decimal)dt.Rows[0]["Latitude"];
    decimal checkY = (decimal)dt.Rows[0]["Longitude"];

    // We use "for" rather than "foreach" because the latter does not allow Delete during loop execution.
    for (int index = 0; index < dt.Rows.Count; index++)
    {
        DataRow dr = dt.Rows[index];
        decimal testX = (decimal)dr["Latitude"];
        decimal testY = (decimal)dr["Longitude"];

        // Shift the point so the check point is the origin, then take the straight-line distance.
        double testXzeroed = Convert.ToDouble(testX - checkX);
        double testYzeroed = Convert.ToDouble(testY - checkY);
        double distance = Math.Sqrt((testXzeroed * testXzeroed) + (testYzeroed * testYzeroed));

        // Mark rows outside the radius for deletion.
        if (distance > radius)
            dr.Delete();
    }

    dt.AcceptChanges(); // accept the deletes
    GridView1.DataSource = dt.DefaultView;
    GridView1.DataBind();
}
I'm getting a "Input string was not in a correct format." when I'm running a insert statement against my SQL Server 2005 db table. This helps me zilch as I cant see the actual SQL statement to see which one wasnt right. Using a SQLDatasource and a Formview btw. Datasource is called xSqlIB and formview is called fmvIB. Any ideas?
I have an Excel source file whose first row contains the headings: Name, 199001, 199002, ..., 199012. In the SSIS package, using an Excel source, if I tick "first row has column names" it automatically converts my headings to Name, F2, F3, ..., F13.
But I need the headings as they are, i.e. Name, 199001, 199002, ..., 199012, because I then use an Unpivot transformation to convert the columns into rows for my staging table.
Please help me sort out this problem; I need the Excel columns exactly as mentioned above (Name, 199001, 199002, ..., 199012).
Normally, to find the actual SQL that executes, I go up in the trace until I find the relevant cursor prepare, in this case the prepare for cursor 197.
But this time around I cannot find it. App servers are connected, so for all I know the cursor could be prepared at 5 AM and re-used all day long.
Is there any way (adding certain Profiler events or something) to see the actual statement held in the cursor?
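On SQL Server 2005 and later, a different route than scrolling the trace is the cursor DMV, which exposes the statement behind every cursor that is currently open. A sketch:

SELECT c.session_id,
       c.cursor_id,
       c.creation_time,
       t.text AS cursor_statement
FROM sys.dm_exec_cursors(0) AS c                 -- 0 = cursors in all sessions
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS t;

This catches cursors prepared long before the trace started, which is exactly the 5 AM case described above.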
Hi, how do I check the actual DB size that is currently being used? E.g., I set 10 GB as the initial size. A few days and a few transactions later, how can I know how much has been used, since it is still under 10 GB and checking the physical DB file will not tell me anything?
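A minimal sketch, assuming SQL Server 2005 or later (on older versions, sp_spaceused alone still works); run it in the database in question:

EXEC sp_spaceused;   -- database-level totals: allocated vs. unallocated

SELECT name,
       size / 128.0                            AS file_size_mb,   -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS space_used_mb
FROM sys.database_files;

The difference between file_size_mb and space_used_mb is the free space still inside the pre-allocated 10 GB file.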
I am running SQL Server 6.5 and work with a traffic and billing package (called NOvar) from another company (Encoda Systems) which does a lot of scheduling, reporting, etc.
I don't know the contents of the tables (100 tables), their columns, or which tables it queries to produce reports.
Can I create a trace to capture the syntax each time something is executed?
I also need to create customized reports. Can this be done with SQL reporting, or do I need to go with Crystal Reports or something else? I don't know any language except SQL and HTML.
I have one table that stores the information below, and another table that stores staff names and phone numbers. How can I display table C's data in new columns, using the key staff_code1 = staff_id or staff_code2 = staff_id or staff_code3 = staff_id?

Table A: staff_code1, staff_code2, staff_code3

Table B: staff_name, staff_id, staff_phone

Table C (the desired display, in new columns):

staff name-1  code1   staff name-2  code2   staff name-3  code3
peter         id-01   susan         id-03   david         id-05
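A hedged sketch of the query (table and column names are taken from the description above and may need adjusting): join Table B once per code column.

SELECT b1.staff_name AS [staff name-1], a.staff_code1 AS code1,
       b2.staff_name AS [staff name-2], a.staff_code2 AS code2,
       b3.staff_name AS [staff name-3], a.staff_code3 AS code3
FROM   TableA AS a
LEFT JOIN TableB AS b1 ON b1.staff_id = a.staff_code1
LEFT JOIN TableB AS b2 ON b2.staff_id = a.staff_code2
LEFT JOIN TableB AS b3 ON b3.staff_id = a.staff_code3;

LEFT JOIN keeps rows whose code columns have no match; use INNER JOIN if all three codes are always populated.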
I have a stored procedure that executes with fewer than 1,000 reads one time (with a specified set of parameters), then with a different set of parameters executes with close to 500,000 reads (according to Profiler).

Comparing the execution plans, they are the same except for the actual and estimated number of rows. When the proc runs with parameters that produce fewer than 1,000 reads, the actual and estimated number of rows both equal 1. When it runs with parameters that produce nearly 500,000 reads, the actual rows are approximately 85,000 and the estimated rows equal 1.

Then I run:

DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

If I then reverse the order of execution, running the procedure that initially took close to 500,000 reads first, its reads drop to fewer than 2,000. The execution plan shows the actual number of rows equal to 1 and the estimated rows equal to 2.27. Then when I run the procedure that initially executed with fewer than 1,000 reads, it continues to run at fewer than 1,000 reads, with the actual number of rows equal to 1 and the estimated rows 2.27. When run in this order, the actual and estimated row counts are consistent, and the reads for both executions with differing parameters are within reason.

Do I need to run DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE on production, and then ensure that the procedure that ran close to 500,000 reads is run first to get the proper plan, as well as using a KEEP PLAN option? Or what other options might you recommend? I am running SQL 2000 SP4.
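The symptom (estimated rows of 1 against actuals of 85,000, with behavior depending on which parameter set compiles the plan) is classic parameter sniffing. On SQL 2000 SP4, the usual alternatives to flushing production caches are per-procedure recompilation; a sketch with placeholder names:

-- Compile a fresh plan on every call, so neither parameter set inherits the other's plan:
ALTER PROCEDURE dbo.MyProc
    @CustomerId INT
WITH RECOMPILE
AS
BEGIN
    SELECT OrderId, Total
    FROM   dbo.Orders
    WHERE  CustomerId = @CustomerId;
END;

-- Or leave the proc alone and just invalidate its cached plan after atypical runs:
EXEC sp_recompile 'dbo.MyProc';

Updating statistics on the underlying tables is also worth ruling out, since an estimate of 1 against an actual of 85,000 suggests the optimizer is working from a poor picture of the data.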