I have a SQL data source and a GridView control in a user control. I used to have them on a specific page that is not the default. When I clicked on a link on the default page to get to the page containing them, they worked as desired.
Now I've moved them to a user control so I can include them on multiple pages, including the default page. Included in the code for the SQL data source is the Selecting event that defines what the SQL query should be. There is a default query defined in the source code of the ascx control, but that's not the query that I want to run when the control is displayed.
However, when my default page is displayed, it seems to ignore the Selecting event and runs the default query instead. When I click through to another page, that page sets up the user control properly.
I tried fixing it in the Page_Load event for the default page, but now I'm getting an error that CommandText is not initialized. How do I initialize the query for the SQL data source for the default page?
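A minimal sketch of one way to wire this up, assuming hypothetical control and property names: let each hosting page hand the query to the user control, and apply it in the Selecting handler so no page depends on the markup default.

// User control code-behind (control and property names are hypothetical).
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class GridPanel : UserControl
{
    // Hosting pages set this (e.g. in their Page_Load) before data binding.
    public string Query { get; set; }

    protected void SqlDataSource1_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
    {
        // Override the default query from the .ascx markup only when the
        // hosting page supplied one; otherwise the markup default runs.
        if (!string.IsNullOrEmpty(Query))
        {
            e.Command.CommandText = Query;
        }
    }
}

With this shape the CommandText error should go away, because the markup default is always present as a fallback and the page-specific query is applied at select time rather than in the page's Page_Load.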
Hi, in this code how can I create a new data source, data source view, mining model, and mining structure so that it runs dynamically? I am getting a lot of errors about the server and database not being defined in the current code. Should I define the server first?
How can I create the data source, data source view, model, and structure? Please show me the code and guide me; databasename and srv are unknown in my code. Do I need to add another reference for Analysis Services? Please explain this code:

1) RelationalDataSource dsNew = new RelationalDataSource(
       datasourceName,
       Utils.GetSyntacticallyValidID(datasourceName, typeof(RelationalDataSource)));
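For what it's worth, a minimal AMO sketch of the kind of setup this code belongs to; the server name, database name, and connection string below are assumptions, and the project needs a reference to Microsoft.AnalysisServices.dll (AMO).

using System;
using Microsoft.AnalysisServices;

class CreateDataSourceSketch
{
    static void Main()
    {
        // Connect to the Analysis Services instance first
        // (instance name is an assumption).
        Server srv = new Server();
        srv.Connect("localhost");

        // The target database must already exist on the server
        // (database name is an assumption).
        Database db = srv.Databases.GetByName("MyMiningDb");

        string datasourceName = "MyRelationalSource";
        RelationalDataSource dsNew = new RelationalDataSource(
            datasourceName,
            Utils.GetSyntacticallyValidID(datasourceName, typeof(RelationalDataSource)));

        // Point the data source at the relational database
        // (connection string values are placeholders).
        dsNew.ConnectionString =
            "Provider=SQLNCLI10;Data Source=localhost;" +
            "Initial Catalog=AdventureWorksDW;Integrated Security=SSPI;";

        db.DataSources.Add(dsNew);

        // Send the new object definition to the server.
        db.Update(UpdateOptions.ExpandFull);

        srv.Disconnect();
        Console.WriteLine("Data source created.");
    }
}

So srv and databasename have to be defined by connecting to a server and picking an existing database before the RelationalDataSource constructor shown above is called; a data source view is added the same way through the db.DataSourceViews collection.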
I have set up a new connection as a Data Source connection, but I cannot see how to use this connection to create my Data Flow source. I have tried using an OLE DB connection, but this is painfully slow! The process of loading 10,000 rows takes 14-15 minutes. The same process in Access, using SQL on a linked table via DSN, takes 45 seconds.
Have I missed something in my setup of the OLE DB source/connection? Would a DSN source be faster?
Hi everybody, I want to create a data source and data source view for data mining using C#. I have created the data source and data source view and exported them to an XML file, but when I move to another computer and run that XML file, it returns an error when I run the statement to create and build the mining model. What do I need to change in the XML, or how can I run the XML successfully on another computer? And if I have to build the data source and data source view there, how do I do it?
Today I was making a few reports. When I tested the reports in Visual Studio, they worked great: I got the expected results. But when I deployed the reports to our report server, the problems started. When I click on the directory in which my reports are deployed, I see my 4 reports. Up to that point everything works correctly. But when I click on a report to view the results, it goes wrong. I get an error: "Cannot create a connection to data source 'Live'" (Live is the name of our data source).
We are using Windows logins and I am sure that I have all the rights on the server; I gave myself 'sysadmin' rights, so it should work. I have also tried it with all the roles assigned to my account, but it still won't work.
When I modify the data source and point it to another server and database, it works. The data source 'Live' is on an x64 SQL Server, and the other data source is on an x86 SQL Server. Maybe that is the problem?
I was trying to load data using an SSIS Data Flow Task with an OLE DB source (a view) and an OLE DB destination (SQL Server). The view returns 420,591 rows from Query Analyzer in 21 seconds; the row length is 925. When I executed the Data Flow Task from SSIS, I had to stop the process after 30 minutes because only 2,000 rows had been retrieved. I modified the view to return TOP 440,000 and reran; this time all 420,591 rows were retrieved and written in 22 seconds. Next, I tried TOP 100 PERCENT; again, only 2,000 rows were returned after 30 minutes. TempDB is on a separate SAN RAID group with 200 GB free, and the databases are on a separate drive with 200 GB free. The server has 13 GB of memory and no other processes were executing.
The only way I could populate the table from SSIS was with an Execute SQL Task and a hard-coded INSERT INTO the table selecting from the view (35 seconds).
Has anyone else experienced this or a similar issue? Does anyone have a solution or explanation?
I'm running SQLMaint.exe for DBCC and backups overnight, because I like the -BkUpOnlyIfClean switch, but I'm having a problem. Due to space constraints, I need to overwrite the backup every night. The -DelBkUps switch is nice, but it accepts a parameter in weeks, not days. Is there some syntax I can use with SQLMaint.exe to get what I want, or am I going to have to use an NT 'at' command to get rid of the file just before the next dump, or what? Any suggestions would be intensely appreciated.
Can anyone here tell me about initializing high-values? I want to create a generic stored procedure that will perform the following:

SELECT * FROM ? WHERE ? BETWEEN parm1 AND parm2

No problem, but when no parms are passed I want to perform a plain SELECT *. So, in the declare section I set parm1 = NULL, and I want to initialize parm2 to HIGH-VALUES, but I cannot find the option. The last version I used was SQL Server 7.0.
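SQL Server has no HIGH-VALUES constant; a common workaround is to treat a NULL parameter as an unbounded range end, so no sentinel value is needed. A minimal sketch, with a hypothetical table and column:

using System;
using System.Data;
using System.Data.SqlClient;

class RangeQuerySketch
{
    static void Run(DateTime? parm1, DateTime? parm2)
    {
        // NULL parameters drop out of the predicate, emulating
        // LOW-VALUES / HIGH-VALUES as unbounded range ends.
        const string sql = @"
            SELECT *
            FROM dbo.Orders
            WHERE (@parm1 IS NULL OR OrderDate >= @parm1)
              AND (@parm2 IS NULL OR OrderDate <= @parm2);";

        using (var cn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand(sql, cn))
        {
            cmd.Parameters.Add("@parm1", SqlDbType.DateTime).Value = (object)parm1 ?? DBNull.Value;
            cmd.Parameters.Add("@parm2", SqlDbType.DateTime).Value = (object)parm2 ?? DBNull.Value;
            cn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read()) { /* consume rows */ }
            }
        }
    }
}

The same WHERE shape works inside a stored procedure with parameters that default to NULL.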
SQL Server 2005 Books Online provides an article entitled, "Initializing a Transactional Subscription without a Snapshot". Is it possible in SQL Server 2000 to initialize transactional replication without a snapshot?
So far, I have been unable to find a similar procedure mentioned in the SQL 2000 Books Online. I was able to follow the 2005 procedure using SQL 2000 until I got to the step that says to enable the "Allow initialization from backup files" option on the "Subscription Options" tab of the "Publication Properties" dialog. But that option does not appear in the SQL 2000 version of the specified dialog box.
We have customers that are using web synchronization. Could you please tell us when the service pack for the bugs discussed in the thread http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=689428&SiteID=1 will be released?
Which replication participants should apply the service pack in a configuration where a Subscriber (MS SQL 2005 Express) is synchronized with a Publisher/Distributor (MS SQL 2005 Standard)?
I'm trying to force an anonymous subscription to re-initialize on its next sync attempt. I can do this from the subscriber with no problems, and from the publisher using 'Reinitialize All Subscriptions', but I can't seem to re-initialize only a single subscription from the publisher.
To do this I'm trying to execute sp_reinitmergesubscription using the subscriber details found in the sysmergesubscriptions table. This executes OK, and when the subscriber starts to sync it does try to re-initialize, starting with generating a new snapshot, but after processing for a while it throws the following error messages:
Error messages:
The merge process could not allocate memory for an operation; your system may be running low on virtual memory. Restart the Merge Agent. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147198720) Get help: http://help/MSSQL_REPL-2147198720
An error occurred while reading the .bcp data file for the 'CDP_TableDates' article. If the .bcp file is corrupt, you must regenerate the snapshot before initializing the Subscriber. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147199428) Get help: http://help/MSSQL_REPL-2147199428
The merge process was unable to deliver the snapshot to the Subscriber. If using Web synchronization, the merge process may have been unable to create or write to the message file. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
There doesn't appear to be any shortage of memory on the server. There is no problem regenerating the snapshot from the publisher, and there's no problem with syncing through the web using delta syncs or re-initializing from the subscriber, so I can't see any obvious cause from those error messages.
I'm using SQL Server 2005 SP2 on the publisher, and SQL Server CE 3.1 on the subscriber.
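For reference, a sketch of the reinitialization call described above, run at the publisher in the publication database; the publication, subscriber, and database names are placeholders, and @upload_first = 'true' uploads pending subscriber changes before the snapshot is reapplied.

using System.Data.SqlClient;

class ReinitSketch
{
    static void Main()
    {
        // Run at the Publisher, in the publication database (names are placeholders).
        const string sql = @"
            EXEC sp_reinitmergesubscription
                @publication   = N'MyPublication',
                @subscriber    = N'SUBSCRIBER01',
                @subscriber_db = N'MySubscriberDb',
                @upload_first  = 'true';";

        using (var cn = new SqlConnection("Server=PUBLISHER01;Database=MyPublishedDb;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand(sql, cn))
        {
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}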
SQL Server 2008 Standard as the publisher, SQL Server 2012 Express as the subscriber,
and I tend to use web synchronization; there is no domain trust.
After following the instructions at MSDN for configuring web synchronization, I have an error that I can't get past - after creating the initial snapshot on the publisher, I try to run replmerg.exe at the subscriber and I always get this error:
"The subscription to publication 'TestReplication' has expired or does not exist."
If I refresh the publisher's "Local Publications" and look within "TestReplication", it does show the subscriber as a known subscriber. Likewise, if I refresh the subscriber's "Local Subscriptions", it has an entry for the TestReplication publication.
I have already verified that the user used by Replisapi.dll has read permission on the snapshot folder, is a member of the PAL, and is db_owner of the publishing database and the distribution database. I am using a self-signed certificate for this test, and I have already installed the certificate on the subscriber machine so that HTTPS is trusted. I can run the diagnostics from the subscriber, so I know the subscriber can reach the publisher, and logs are being left in the publisher's IIS.
I have seen several posts on the subject, and have tried various configs. At this point, I am simply trying to connect to SQL Express, that's all.
I have an app in MS Access that uses SQL Express as a back-end database (this is the simple desktop version of our software). I don't remember having so many issues with just connecting to a small db engine on a single computer..! Just to get it off my chest: there was a post about this subject where someone listed more than 15 items to check... If anyone at Microsoft is reading: connections should not require a 15-item checklist. People running SQL Express are not creating a sophisticated database system; they are probably running it because they want a free, easy solution, and they do not have an IT person, nor are they interested in reading any 1 of the 15 items... Just trying to connect to the instance of the database that runs on the same computer... that should not require more than possibly 3 different settings.
My objective at this point, is simply to connect! Nothing else.
My environment settings are:
OS: Win XP Professional
MS Access (2003) and SQL Express both installed on the same computer; all connection attempts are on the same computer
SQL Express is installed in mixed mode
SQL Browser running, Network DDE running (if needed)
All network protocols are enabled (tried disabling and enabling each one at a time as well)
All client protocols are enabled
I tried setting the network library to "dbmssocn", and left it blank for the default (uses named pipes)
I tried using the host name, localhost, and the IP address for the server name (with the \SQLEXPRESS suffix, of course); also tried leaving it blank
I tried with integrated security, and also with the user "sa" and a "test" user (both enabled and capable of logging in and accessing the database)
I tested the connection from SQL Express Manager, successfully connecting in all modes (secure, and with SQL users) to make sure there are no issues with the users or security
Tried turning the SQL Browser off (this should not be needed in local mode)
Tried using an alias
Tried using attach database as a filename
And the firewall is turned off.
I always get the error "....connection failed because of an error in initializing provider" (whether using named pipes, TCP, or shared memory).
Both SQL Express and Access are installed on the computer; that should include the providers...
Please let me know if there is anything left for checking and/or trying... Appreciate your help...
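When the checklist is exhausted, it can help to take Access out of the picture and test the same local instance from a few lines of code. A minimal sketch, assuming the default SQLEXPRESS instance name:

using System;
using System.Data.SqlClient;

class ConnectSketch
{
    static void Main()
    {
        // Local named instance; adjust if SQL Express was installed
        // under a different instance name.
        const string cs = @"Data Source=.\SQLEXPRESS;Integrated Security=SSPI;";

        try
        {
            using (var cn = new SqlConnection(cs))
            {
                cn.Open();
                Console.WriteLine("Connected to: " + cn.ServerVersion);
            }
        }
        catch (SqlException ex)
        {
            // The Number/Message pair is usually more specific than the
            // generic "error in initializing provider" text Access shows.
            Console.WriteLine(ex.Number + ": " + ex.Message);
        }
    }
}

If this connects but Access does not, the problem is in the Access connection settings rather than in the instance itself.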
I am creating my first SSIS package, a simple one: I read a file, do a computation, and then write the result to an Excel file.
I am getting the message: "Test connection failed because of an error in initializing provider." The database is NOT remote. My computer is slow, though. About 10 or 15 seconds after clicking the button to test my connection, I get the message and much more. The message asks me if the instance name is correct; I don't know what this would be. This message is on the first task, reading the file (finding the file).
The last sentence of the long message is: "Named Pipes Provider: Could not open a connection to SQL Server (2)."
My question is: Why am I getting this message and what do I need to do to resolve this problem?
We had to fail over our primary db server to our secondary replica for maintenance. The primary was rebooted during maintenance. We failed back after the maintenance, and one of the databases is not synchronizing.
I checked sys.dm_hadr_database_replica_states, and it is showing that it is INITIALIZING.
It has been in this state for more than 45 minutes now. The last_sent_time, last_received_time, last_hardened_time, and last_redone_time are all stuck at a timestamp from 45 minutes ago.
They haven't changed. How do I resume this database and bring it back in sync?
I tried suspending and resuming the data movement, but that hasn't worked.
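For reference, a sketch of the resume attempt and the state check described above; the database name is a placeholder.

using System;
using System.Data.SqlClient;

class HadrResumeSketch
{
    static void Main()
    {
        using (var cn = new SqlConnection("Server=.;Integrated Security=SSPI"))
        {
            cn.Open();

            // Ask the replica to resume data movement
            // (database name is a placeholder).
            using (var resume = new SqlCommand(
                "ALTER DATABASE [MyAGDatabase] SET HADR RESUME;", cn))
            {
                resume.ExecuteNonQuery();
            }

            // Re-check the replica state afterwards.
            const string check = @"
                SELECT synchronization_state_desc, last_hardened_time
                FROM sys.dm_hadr_database_replica_states
                WHERE database_id = DB_ID('MyAGDatabase');";
            using (var cmd = new SqlCommand(check, cn))
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    Console.WriteLine(rdr[0] + "  " + rdr[1]);
            }
        }
    }
}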
I'm having a problem sending a query result set as an email text attachment. Test transmissions from Database Mail work fine.
Sending simple messages with the sp_send_dbmail sproc works fine as well.
It is only when I try to send a query result that things blow up. The query itself is working fine too, so I'm now inclined to think there is some esoteric problem with the sproc itself.
The Surface Area Configuration features have Database Mail on and SQL Mail off.
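For reference, a sketch of the kind of query-attachment call that blows up; the profile, recipient, and query are placeholders.

using System.Data.SqlClient;

class DbMailSketch
{
    static void Main()
    {
        // Placeholder profile, recipient, and query.
        const string sql = @"
            EXEC msdb.dbo.sp_send_dbmail
                @profile_name                = N'MyMailProfile',
                @recipients                  = N'someone@example.com',
                @subject                     = N'Query results',
                @query                       = N'SELECT TOP 10 name FROM sys.objects;',
                @attach_query_result_as_file = 1,
                @query_attachment_filename   = N'results.txt';";

        using (var cn = new SqlConnection("Server=.;Database=msdb;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand(sql, cn))
        {
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Plain sends omit the @query parameters entirely, which matches the symptom: the failure only appears on the query-execution path.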
Msg 14661, Level 16, State 1, Procedure sp_send_dbmail, Line 476
Query execution failed: Error initializing COM
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
Investigating the sproc itself shows...
Line 476 in the sproc is the beginning of a 'trap':

-- Raise an error if the query execution fails
-- This will only be the case when @append_query_error is set to 0 (false)
IF ((@RetErrorMsg IS NOT NULL) AND (@exclude_query_output = 0))
BEGIN
    RAISERROR(14661, -1, -1, @RetErrorMsg)
END

RETURN (@rc)

This is the last section of code in the sproc.
We have 6 SQL Server 2012 failover cluster environments on Windows 2012 R2 Standard Edition. We have intermittent connectivity issues when using Windows Authentication, with the error "test connection failed because of an error in initializing provider. login timeout expired". I am checking by using a UDL file and have tried the below.
1) Made port changes to use the static port 1433.
2) Enabled shared memory, and tried Named Pipes and TCP/IP using cliconfg.
3) Turned off the firewall.
4) Loopback is disabled.
5) SQL Browser is running; changed its 'Built-in account' setting from 'Local Service' to 'Network Service', but with no effect.
Still I am getting intermittent connectivity issues.
[DTS.Pipeline] Error: "component "Excel Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
and also this:
[Excel Source [1]] Warning: The external metadata column collection is out of synchronization with the data source columns.
The column "Fiscal Week" needs to be updated in the external metadata column collection.
The column "Fiscal Year" needs to be updated in the external metadata column collection.
The column "1st level" needs to be added to the external metadata column collection.
The column "2nd level" needs to be added to the external metadata column collection.
The column "3rd level" needs to be added to the external metadata column collection.
The "external metadata column "1st Level" (16745)" needs to be removed from the external metadata column collection.
The "external metadata column "3rd Level" (16609)" needs to be removed from the external metadata column collection.
The "external metadata column "2nd Level" (16272)" needs to be removed from the external metadata column collection.
I tried going to data flow -> Excel connection -> advanced editor for the Excel source -> input and output properties, and tried to refresh the affected columns. It seems that somehow the 3 columns are not read in from the source file, and also that Fiscal Year and Fiscal Week are not set up properly in my data destination? Has anyone faced such errors before?
RE: XML Data source .. Expression? Variable? Connection? Error: unable to read the XML data.
I want my XML data source to be an expression, as I will be looping through a directory of XML files.
I don't see the expression property or the connection property??
I tried setting the XMLData property to @[User::filename], but that results in:
Information: 0x40043006 at Load XML Files, DTS.Pipeline: Prepare for Execute phase is beginning.
Error: 0xC02090D0 at Load XML Files, XML Source [108]: The component "XML Source" (108) was unable to read the XML data.
Error: 0xC0047019 at Load XML Files, DTS.Pipeline: component "XML Source" (108) failed the prepare phase and returned error code 0xC02090D0.
Information: 0x4004300B at Load XML Files, DTS.Pipeline: "component "OLE DB Destination" (341)" wrote 0 rows.
Task failed: Load XML Files
Information: 0xC002F30E at Bad, File System Task: File or directory "d:jcpxmlLoadjcp2.xml.bad" was deleted.
Warning: 0x80019002 at Package: The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "Package.dtsx" finished: Failure.
The program '[3312] Package.dtsx: DTS' has exited with code 0 (0x0).
I have a question about how to handle structural data model changes in a PowerPivot data source. Suppose I'm developing a star model in SQL Server, and sometimes a datatype or a field name changes in a table. It seems to me that PowerPivot doesn't handle this as gracefully as Analysis Services Multidimensional (mostly) does. I received an error because of a wrong field name, and no error at all when a datatype changed in PowerPivot. Is this common, or am I doing something wrong here? Does this mean that every time the data model changes the PowerPivot model should be recreated? Or am I missing the clue here?
I was wondering if there is a best-practice minimum set of permissions for a SQL login to use when setting up a new shared data source for SSRS Report Manager.
Something along the lines of it being a data reader for the DB, plus permissions to update tempdb?
Would have thought it not advisable to have the login be able to update the main db...
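A minimal sketch of the kind of grant involved, with placeholder login and database names; read access normally suffices, since temporary tables in tempdb work without an explicit grant there.

using System.Data.SqlClient;

class ReportLoginSketch
{
    static void Main()
    {
        // Read-only login for a shared SSRS data source
        // (login, password, and database names are placeholders).
        const string sql = @"
            CREATE LOGIN ssrs_reader WITH PASSWORD = N'UseAStrongPasswordHere!1';
            USE MyReportDb;
            CREATE USER ssrs_reader FOR LOGIN ssrs_reader;
            EXEC sp_addrolemember N'db_datareader', N'ssrs_reader';";

        using (var cn = new SqlConnection("Server=.;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand(sql, cn))
        {
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

db_datareader covers the SELECTs a report runs; if a report genuinely needs to write to the main DB, a stored procedure with an explicit EXECUTE grant is usually a safer route than broader role membership.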
I have 4 tablixes; 2 of them get data from Server 1 and the other 2 get data from Server 2. I have set NoRowsMessage = "Data Not Available for the Selected Values" for all 4 tablixes. Now, if data is not available from Server 1, I must show "Data Not Available for the Selected Values" only once in the output, but it currently appears twice because of the 2 tablixes that had no rows. Similarly, if data is not available from Server 2, it should show "Data Not Available for the Selected Values" only once in my output. If data is not available from any of the tablixes, it should likewise appear only once in the report output.
A data reader is using a connection manager to connect to an ODBC System DSN. A query is provided in the SqlCommand property. Data is being truncated in the only string column. The data type in the data reader's output --> external columns shows as Unicode string [DT_WSTR], length 7.
The truncated output in a text file is the first 3 characters, reading left to right. Changing the column order has no effect.
A linked server was created in SQL Server Management Studio to test the ODBC System DSN.
Data returned using OPENQUERY is not truncated in the string column, indicating that the ODBC driver returns data as expected with SQL 2005, but not with the data reader.
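A sketch of that kind of linked server test; the linked server and DSN names are hypothetical, as are the queried table and column.

using System;
using System.Data.SqlClient;

class LinkedServerSketch
{
    static void Main()
    {
        using (var cn = new SqlConnection("Server=.;Integrated Security=SSPI"))
        {
            cn.Open();

            // Create a linked server over the ODBC System DSN
            // (server and DSN names are hypothetical).
            const string create = @"
                EXEC sp_addlinkedserver
                    @server     = N'ODBC_TEST',
                    @srvproduct = N'',
                    @provider   = N'MSDASQL',
                    @datasrc    = N'MySystemDsn';";
            using (var cmd = new SqlCommand(create, cn))
            {
                cmd.ExecuteNonQuery();
            }

            // Pass the same query through OPENQUERY and inspect the string column.
            const string test =
                "SELECT * FROM OPENQUERY(ODBC_TEST, 'SELECT StringCol FROM SomeTable');";
            using (var cmd = new SqlCommand(test, cn))
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    Console.WriteLine(rdr.GetString(0));
            }
        }
    }
}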
I need to see a newly installed SSIS component inside an SSIS 2012 project, but in SSDT 2010 I cannot see the SSIS Data Flow Items tab for adding a data source/data destination in the Choose Toolbox Items pane.
Hi, I am trying to do a straightforward load from a flat file source. I have defined the columns according to the lengths defined in the data dictionary provided, but when I try to run the task I encounter this error:
The column data for column "Column 20" overflowed the disk I/O buffer.
I tried adding another Column 21 at the end and truncating it, or leaving that column unmapped to the destination, but the same problem occurs for Column 21. What should I do to overcome this?
In case of bad data, how do I clean up the source? Please help me with this.
I've got a report that is using a cube as a data source and I can't get the report to show all the data. Only data at the lowest level of the cube is displayed. The problem is that most of the data I'm concerned with is at higher levels. There's no problem with the MDX. I get the correct results when I run the query.
I'm using a table to show the results. I've also tried a matrix, but I get the same results. I'm using SSRS 2005 and SSAS 2000.
Anyone have experience with this? Am I missing something simple?
I am pretty new to SSIS. I am trying to create a package which can accept data in any of several formats (e.g. CSV, Excel, or a SQL Server database/table) and import the data into my destination database.
So far I've managed to get this working OK. However, I am now TOTALLY stuck. I'm currently trying to just concentrate on the data sources being a CSV file (using a Flat File data source) and/or an Excel spreadsheet.
I can get the data in and to my destination using a UNION ALL component and mapping the data sources to it so long as both the CSV file and the Excel spreadsheet exist.
My problem is that I need my package to handle the possibility that only the CSV file exists and there is no Excel spreadsheet, in which case I'd like the package to ignore the Excel data source completely. Currently, if either of my data sources does not exist, I get errors and the package terminates.
Is there any way in SSIS that I can check all my data sources to see which ones exist (i.e. are valid)? If one exists I want to use it; if it doesn't exist I'd like to discard it without error (as long as there is a single data source, the package should run).
I've tried using the AcquireConnection method in a script task on each of my connections, hoping that it would error if the file/data source did not exist. It doesn't, though (in the case of an Excel data source it just creates an empty Excel file for me).
The only other option I can come up with is to have separate packages depending on the type of data we want to import, and then run a particular package depending on the format of the source data. This seems a bit long-winded. I am pretty sure I must be able to do what I want to achieve, but I can't work out how.
I'll be grateful to anyone who can send me any tips/hints/links on how I can achieve this.
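One pattern that may help, sketched for a C# Script Task (SSIS 2008 or later) under assumed variable names: check each source file with File.Exists, store the result in a package variable, and use precedence constraint expressions (or an expression on the data flow task's Disable property) to route around the missing source.

// Script Task body. Assumes package variables User::ExcelPath (String)
// and User::ExcelExists (Boolean) are listed in ReadOnlyVariables /
// ReadWriteVariables respectively. All names here are hypothetical.
using System.IO;

public void Main()
{
    string excelPath = (string)Dts.Variables["User::ExcelPath"].Value;

    // Unlike AcquireConnection, File.Exists never creates anything;
    // it simply reports whether the workbook is there.
    Dts.Variables["User::ExcelExists"].Value = File.Exists(excelPath);

    Dts.TaskResult = (int)ScriptResults.Success;
}

A precedence constraint with the expression @[User::ExcelExists] then lets the Excel branch be skipped without raising an error, so a single package can still run when only the CSV is present.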