For my website, I need to grab information from sources located on different servers. This gives me several datasets that I need to combine into one main dataset to then bind to my grid. Each dataset has the following fields:
Name || Blocked
I need the main dataset that is to be used on the grid to have the following structure:
The Name field I can take from any one of the datasets, as the list of names will be the same in each. However, I need to take the data from each of the Blocked fields in the various datasets that are returned and combine them to be the Blocked 1, etc. columns described above.
I'm making this post to ask for any advice on the best way to do this. Any info would be most appreciated.
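One way to picture the combine step: key the merged rows on Name and add one BlockedN column per source dataset. Here is a minimal sketch in plain Python (the dataset contents and the Blocked1/Blocked2 column names are made up for illustration; in .NET the same idea would apply to DataTable rows):

```python
def combine_datasets(datasets):
    """Merge several (Name, Blocked) datasets into one row per Name,
    with one BlockedN column per source dataset."""
    combined = {}
    for i, dataset in enumerate(datasets, start=1):
        for name, blocked in dataset:
            row = combined.setdefault(name, {})
            row["Name"] = name
            row[f"Blocked{i}"] = blocked
    return list(combined.values())

# Hypothetical results from two servers:
server_a = [("Alice", True), ("Bob", False)]
server_b = [("Alice", False), ("Bob", True)]
rows = combine_datasets([server_a, server_b])
```

Each merged row then carries Name plus Blocked1, Blocked2, ... ready to bind to the grid.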
Filling a DataTable from a SQL query: if the query returns some null values, a problem occurs with the DataTable. Is it possible to use a DataTable with some null values in it? Thanks
SQL Server 2000. I need to compare 2 tables as follows:

Table 1 - Accounts: Acct#, AccountName
Table 2 - Ledger (among others): Acct#, AccountName

I would like to create a view where the account number matches in both tables but the account name does not. I.e.:

Table 1:
5000 Maintenance

Table 2:
5000 Maintenance
5000 Maintenance
5000 Building Maintenance

My query view would display:
5000 Building Maintenance

indicating there is a "Bad Record", and I would do some processing from there. FYI - I know I could prevent this easily, but I am actually comparing several databases of Table 2 data with a central Table 1 database.

TIA
-- Tim Morrison
Vehicle Web Studio - The easiest way to create and maintain your vehicle related website.
http://www.vehiclewebstudio.com
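The view boils down to a join on the account number with an inequality on the name. Here is the same logic sketched against SQLite from Python (table and column names simplified; in SQL Server 2000 the SELECT inside would be the body of the CREATE VIEW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (acct INTEGER, name TEXT)")
cur.execute("CREATE TABLE ledger (acct INTEGER, name TEXT)")
cur.execute("INSERT INTO accounts VALUES (5000, 'Maintenance')")
cur.executemany("INSERT INTO ledger VALUES (?, ?)",
                [(5000, 'Maintenance'), (5000, 'Maintenance'),
                 (5000, 'Building Maintenance')])

# Rows whose account number matches but whose name does not:
bad = cur.execute("""
    SELECT DISTINCT l.acct, l.name
    FROM ledger l
    JOIN accounts a ON a.acct = l.acct
    WHERE l.name <> a.name
""").fetchall()
```

DISTINCT keeps the "bad record" list to one row per mismatched name, which matches the expected output in the example above.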
First, I cannot change the data structure. I have a table called CodeSubSections that has an identity as the primary key. When a new code subsection is entered, the user can mark a former code subsection as its parent. The problem is that I need to be able to evaluate the table and identify the lowest CodeSubSectionID for a given row. So, for instance, when I look at row 7 its text is '17 B p2', but because the CodeSubSectionID is not null I need to look at 6, then 5, and since 5's is null I need the text for 5, 6, and 7 all to be '16:1 B'. The only solution I have had any kind of luck with is to do a self join 3 times and coalesce the values up. See the code at the end. I am at my wits' end; I had it working the other way, assuming that the CodeSubSectionID was the parent of the record. This is the final piece of code on a project that is supposed to end tomorrow.
Any help is appreciated. I would normally post the procedures, but I think that they are more confusing than the data explanation. This type of relationship is a first for me after 20-some years.
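The triple self-join caps the chain at three levels; the underlying operation is just "walk the parent pointers until you hit a row whose parent is NULL, and take that root's text". A minimal sketch of that walk in Python (the id values and text mirror the example above; on SQL Server 2005 a recursive CTE could do the same without a depth limit):

```python
def root_text(rows, row_id):
    """Follow parent pointers until a row with a NULL parent is found,
    and return that root row's text.
    rows maps id -> (parent_id_or_None, text)."""
    parent, text = rows[row_id]
    while parent is not None:
        parent, text = rows[parent]
    return text

# ids 5..7: 5 is the root (parent NULL), 6 points at 5, 7 at 6.
rows = {5: (None, "16:1 B"), 6: (5, "17 B p1"), 7: (6, "17 B p2")}
```

Calling root_text(rows, 7) walks 7 -> 6 -> 5 and returns the root's text, which is what the coalesce-up join is approximating.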
We're trying to construct a query that deletes records containing 48 particular phone numbers from a large db. The 48 numbers are the entire contents of a smaller db, in a field of the same name as in the larger db (home_phone). We're using SQL Server 7.0 and Access 97. The dbs are in Access now. We failed in Access and now would like to import into SQL and try it there. Thanks, Tad McArdle
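Once both tables are imported, this is a DELETE with a subquery on the smaller table. Sketched against SQLite from Python (table names "big" and "small" are placeholders; the same DELETE ... WHERE home_phone IN (SELECT ...) shape works in SQL Server 7.0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE big (home_phone TEXT, other TEXT)")
cur.execute("CREATE TABLE small (home_phone TEXT)")
cur.executemany("INSERT INTO big VALUES (?, ?)",
                [("555-0001", "keep"), ("555-0002", "delete me")])
cur.execute("INSERT INTO small VALUES ('555-0002')")

# Delete every large-table row whose phone number appears in the small table.
cur.execute("DELETE FROM big WHERE home_phone IN (SELECT home_phone FROM small)")
remaining = cur.execute("SELECT home_phone FROM big").fetchall()
```

With only 48 numbers in the small table the IN subquery stays cheap even against a large target table.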
I'm running SQL Server 6.5 (SP5a) on an NT4 server (SP3). I am using the Outlook Express account for SQLExecutiveCmdExec which has all the requisite permissions. SQL Executive as well as SQL Server services are logging in under this account. The server seems to be functioning normally in every other way (there's no production data on it yet).
When attempting to start SQL Mail, the following lines appear in the errorlog:
1999/06/09 16:15:16.62 ods Starting SQL Mail session...
1999/06/09 16:15:16.64 ods Error : 17951, Severity: 18, State: 1
1999/06/09 16:15:16.64 ods Failed to start SQL Mail session.
1999/06/09 16:15:16.64 ods OS Error : 126, The specified module could not be found.
The MSKB does not have any information on this error message. Can anyone provide some insight as to what's going on, please?
P.S. I have another server running SQL Mail with the same configuration with no problem.
I recently renamed a database with sp_renamedb, created a new db with the same name as the old one, and restored the data into the new db. I put the old renamed db in single user mode to prevent the data from becoming violated. After cycling SQL Server to flush the cache, I opened the floodgates for users. Our support line was full of calls from people saying "I'm getting the error 'Single User Mode in database DBORGINALDBNAME'". Any ideas why the application would be hitting the old database after I renamed it and recycled the server?
I'm having a bit of trouble with dates/times. All I want to do is extract the contents of a table in an old Informix database and load it into a table in a SQL 2005 database.
The problem is that when I try to extract from the source table, I get the following error:
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E07 Description: "Error converting data type DBTYPE_DBTIMESTAMP to datetime."
Error: 0xC004701A at Dump Dacom Tables, DTS.Pipeline: component "mac_header" (181) failed the pre-execute phase and returned error code 0xC0202009.
It would have been nice if it could tell me what columns it was having trouble with. Anyway, I figured it out. There are two columns called time_open and time_close that contain times but no dates. This is how the source columns are configured:
time_open datetime hour to minute
time_close datetime hour to minute
This is a really old database. We started using it about 14 years ago. I can get an ODBC connection to it fine with any application except SQL Server. So, I have to do an OPENROWSET to get access to the data.
SELECT * FROM OPENROWSET('MSDASQL', 'Connection String Stuff', 'select * from dacom.mac_header')

Question 1: Why can't I get an ODBC connection to this Informix 5.10 database?
Question 2: How can I extract tables with this sort of data in it? I'm happy to store it as a normal date/time field in the destination SQL 2005 database, but I can't convert something that I can't extract.
I'm trying to replicate the creation of an "XML" file that is currently created by a C++ application. I want to take that application out of the picture, but I need to create the same XML file format, because a later step in the production process uses this file and I cannot change it.
The output format I am looking for is:

<?xml version="1.0" encoding="utf-8"?>
<FUNDS>
  <AMRGX>
    <NAME>AMERICAN GROWTH D</NAME>
  </AMRGX>
  <AHERX>
    <NAME>AMERICAN HERITAGE FUND</NAME>
  </AHERX>
  <AMRVX>
    <NAME>AMERICAN INVESTORS GROWTH FUND</NAME>
  </AMRVX>
  . . .
</FUNDS>

The problem I am having is that I cannot seem to get the level/node of the fund symbol (AMRGX, AHERX, and AMRVX in the example above) as it needs to be. I think this must be some non-standard use of XML, since the tag is really the data itself (?)
The closest I have been able to get so far is:

<FUNDS>
  <SYMBOL>AMRGX</SYMBOL>
  <NAME>AMERICAN GROWTH D</NAME>
</FUNDS>
<FUNDS>
  <SYMBOL>AHERX</SYMBOL>
  <NAME>AMERICAN HERITAGE FUND</NAME>
</FUNDS>
. . .

As you can see (hopefully), I am able to get the data I need but cannot get (1) the FUNDS tag(s) to be the very highest level/root, nor (2) the SYMBOL part (tag label?) to be the actual variable stock fund.
Am I 'splaining this well enough? I don't necessarily need all the code, since I know I haven't given enough info for that, but my basic question is: is it possible to get a variable TAG based on the table DATA?
I want my SYMBOL tag to be the actual SYMBOL for the stock fund.
Confused? Not as much as I am *LOL* I am new to the use of all but XML EXPLICIT use, so any help would be appreciated - at least regarding my two formatting questions.
Yes, I have been (and still am) searching around BOL for my answers, but so far I have found nothing that helps me out. Meanwhile, suggestions are welcome!
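Data-driven element names are exactly what makes this awkward: none of the FOR XML modes emit an element whose name comes from a column value, so people usually fall back on string building or generate the file outside the query. To show the target shape is achievable once you treat the symbol as a tag name, here is a small sketch with Python's standard ElementTree (the fund list is the sample data from above):

```python
import xml.etree.ElementTree as ET

funds = [("AMRGX", "AMERICAN GROWTH D"),
         ("AHERX", "AMERICAN HERITAGE FUND")]

root = ET.Element("FUNDS")
for symbol, name in funds:
    sym_el = ET.SubElement(root, symbol)   # the symbol itself becomes the tag
    ET.SubElement(sym_el, "NAME").text = name

xml_text = ET.tostring(root, encoding="unicode")
```

On the SQL side, a common workaround is concatenating '<' + symbol + '>...' strings yourself, since a variable tag name is outside what FOR XML EXPLICIT's fixed element definitions can produce directly.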
This is regarding one package where we are trying to deploy the package through "SQL Server deployment" using .dtsx, .dtsConfig and manifest files, but after deployment the package is not found in "msdb". Instead it appears in the "File System" folder. The same behavior is observed repeatedly when we try to deploy the package.
We have seen such behavior only with this package. Please help us solve the above scenario.
I am creating a custom transformation component, and a custom user interface for that component.
In my custom UI, I want to show the custom properties, and allow users to edit these properties similar to how the advanced editor shows the properties.
I know in my UI I need to create a "Property Grid". In the properties of this grid, I can select the object I want to display data for, however, the only objects that appear are the objects that I have already created within this UI, and not the actual component object with the custom properties.
How do I go about getting the properties for my transformation component listed in this property grid?
I have 2 web servers (Windows 2003) using NLB. I want some kind of common data store (SQL Server), and I need this data store to be fault tolerant. I don't have any more machines, and I don't have an Enterprise license for Windows.
I have a company with 5 offices connected with a dedicated line (500+ employees).
They have five databases, one in each office.
Now they want a new application which will replace the old one (VB6 on MySQL will become VB.NET 2.0 with MS SQL Server 2005).
They want all the data available in all the offices. We will still have 5 databases in 5 offices, but all with the same data.
Sometimes the dedicated line is lost.
Now I have a few questions:
Will the database still be running while the line is down?
Will the database still be running if the replication is of type Merge (instead of continuous)?
Is it good practice to use unique identifiers rather than numeric keys?
Is it good practice for the application to use the "master" database to save global things like "Suppliers" and the local database to add and change Orders?
Do you have some recommended reads?
For High Protection mode (2 servers, no witness, FULL safety), in the event of a catastrophic failure of the principal, can a remote (but well connected) mirror be forced to assume the principal role? The MS manual "Database Mirroring in SQL Server 2005" on p2 says the failover type for HP mode is manual only (as opposed to forced). Elsewhere it seems to suggest otherwise, but there is no clear and unambiguous discussion describing the exact procedure.
I currently have a workflow "like" application which uses msmq to pass messages to each step transactionally. I'd like to take advantage to something like SSB to make the app scale. Here's how it works today:
Start Workflow: the web service puts a message on a queue. A Windows service has threaded listeners on each queue for the workflow. The listeners do a transactional Receive off the queue; by doing this in a transaction, we can enforce that only one listener will pick up and process the message. Once they have a message, they call a .NET assembly to perform some work. Once the work is performed successfully, the listeners put the work context on the next queue using the same transaction as the receive and commit the transaction, effectively removing the message from the receive queue permanently. If a fatal error occurred during processing, the work context is moved to an error queue transactionally. If the process blew up for other reasons, the transaction is rolled back and the message reappears on the original queue so that another instance of the service can pick it up. Currently all this is running on one instance of the Windows service and cannot span more than one host machine, since MSMQ does not support transactional reads off public queues without using DTC.
I saw the External Activator sample and it looks great, but I have a few questions. First, I'd have to modify it to call assemblies instead of external processes. No biggie there. Another requirement is that one and only one receiver should process the message, and that when the next message is sent to the next queue, it's done within the same transaction as the receive. I hope that this can be done without any deadlocks. I also saw mentioned somewhere that this messaging could take advantage of the local SQL Express instances, so that even if the main SQL Server instance is down, the messages can go out. The last requirement is that our message is a blob (a serialized .NET object), so how do we define that kind of message?
Table1: WriterID, WriterName, WriterSurname
Table2: BookID, WriterID, BookName, Category
Table3: CategoryID, CategoryName

There will be one form. Which way should I use to enter data from one form into multiple tables with the same WriterID?
1) Should I use the SqlCommand class and a stored procedure? (Like this: http://www.aspnet101.com/aspnet101/tutorials.aspx?id=13 )
2) Is there another way without a stored procedure?
I can't imagine how to insert the same form into multiple tables. In the real scenario there are lots of tables, and all contain a column that holds the same value, such as WriterID.
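Whichever route is chosen, the core pattern is: insert the parent row, capture its new identity value, and reuse it for the child inserts, all inside one transaction. A sketch against SQLite from Python (table and sample names simplified; in a T-SQL stored procedure the lastrowid step would be SCOPE_IDENTITY()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE writer (WriterID INTEGER PRIMARY KEY,"
            " WriterName TEXT, WriterSurname TEXT)")
cur.execute("CREATE TABLE book (BookID INTEGER PRIMARY KEY,"
            " WriterID INTEGER, BookName TEXT)")

def add_writer_with_book(name, surname, book_name):
    """Insert into both tables inside one transaction,
    reusing the freshly generated WriterID."""
    with conn:  # commits on success, rolls back on error
        cur.execute("INSERT INTO writer (WriterName, WriterSurname)"
                    " VALUES (?, ?)", (name, surname))
        writer_id = cur.lastrowid          # the new identity value
        cur.execute("INSERT INTO book (WriterID, BookName) VALUES (?, ?)",
                    (writer_id, book_name))
        return writer_id

wid = add_writer_with_book("Orhan", "Pamuk", "Snow")
```

The transaction is what keeps the tables consistent if any one insert fails, which is also why a single stored procedure tends to be the cleaner option here.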
Hi, I want to know a solution for my synchronization scenario.
I have several client databases, which are SQL Server 2005 Express, and I have a master database, which is SQL Server 2000, containing all the individual client databases. All the individual client databases are kept separately at the master location. I need to synchronize each client database with its copy at the master database (something like merge replication). Both the client copy and the master copy could be publishers and subscribers.
Now the problem: because of security and firewall issues, only the client should have the ability to schedule and initiate the synchronization process with the master copy. Unfortunately, SQL Server 2005 Express has only a subscriber agent and not a publisher agent.
Any help on how to achieve this would be appreciated. Thank you.
Hello all. This is the scenario I'm having -- I'm a beginner, so bear with the way I'm putting it ... sorry!
* I have a database with tables:
- Company: CompanyID, CompanyName
- Person: PersonID, PersonName, CompanyID (FK)
- Supplier: SupplierID, SupplierCode, SupplierName, CompanyID (FK)
In the associated stored procedures (insertCompany, insertPerson, insertSupplier), I want to check the existence of the SupplierID, which should be the 'output'.
There could be different ways to do it, like:
1) In the supplier stored procedure I can read the ID (SELECT) and:
if it exists, I save the existing SupplierID - to 'return' it at the end; if it doesn't, I insert the Company and the Person and save the new SupplierID - to 'return' it at the end.
2) Another way is by doing multiple stored procedures: one SP that checks, another SP that does the inserts, and a main SP that calls the check SP, gets the values, and bases the results on conditions (if - else).
3) It could (maybe) be done using functions in SQL Server.
There should be some reasons to go for one of these methods rather than another! I want to know the best practice for this scenario in terms of performance and other issues - consider a similarly big scenario!
I'll appreciate your help. Thanks in advance!
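Option 1 (check, then insert only when missing, returning the id either way) is the most common shape for this. A minimal sketch against SQLite from Python - the get-or-insert routine stands in for the single stored procedure; in T-SQL the returned id would typically come back through an OUTPUT parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE supplier (SupplierID INTEGER PRIMARY KEY,"
            " SupplierCode TEXT UNIQUE)")

def get_or_insert_supplier(code):
    """Option 1 from the post: SELECT first, INSERT only when the
    code is missing, and return the SupplierID either way."""
    row = cur.execute("SELECT SupplierID FROM supplier"
                      " WHERE SupplierCode = ?", (code,)).fetchone()
    if row:
        return row[0]          # existing id
    cur.execute("INSERT INTO supplier (SupplierCode) VALUES (?)", (code,))
    return cur.lastrowid       # freshly inserted id

first = get_or_insert_supplier("ACME")
second = get_or_insert_supplier("ACME")  # same id comes back
```

Keeping the check and the insert in one procedure (rather than separate check/insert SPs) means one round trip and one place to wrap the pair in a transaction, which is usually the performance argument for option 1.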
How can I use "NewReceivedTime" as a variable? Creating a new column with a CASE statement in SQL means the user cannot then refer to this new column name; you receive an error such as "Invalid Column Name: NewReceivedTime".
(case when <value> else <value> end) as NewReceivedTime
I'm asking this because I want to use the "NewReceivedTime" I've created to compare against another time column, like NewReceivedTime = LogDateTime, something like that.
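A column alias defined in a SELECT list can't be referenced in that same SELECT's WHERE clause; the usual fix is to wrap the CASE in a derived table (or, on SQL 2005, a CTE), after which the outer query can treat NewReceivedTime as an ordinary column. Sketched against SQLite from Python (table and column names are made up to mirror the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE log (ReceivedTime TEXT, LogDateTime TEXT)")
cur.execute("INSERT INTO log VALUES (NULL, '2006-01-01')")

# Wrap the CASE in a derived table; the outer query can then
# refer to NewReceivedTime like a real column.
rows = cur.execute("""
    SELECT *
    FROM (SELECT LogDateTime,
                 CASE WHEN ReceivedTime IS NULL THEN LogDateTime
                      ELSE ReceivedTime END AS NewReceivedTime
          FROM log) AS t
    WHERE t.NewReceivedTime = t.LogDateTime
""").fetchall()
```

The exact CASE expression here is a placeholder; the point is only that the alias becomes usable one query level up.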
I have a text file in this format:

currency,exchangerate(INR),date
dollar,45,20/04/2006
dollar,46,22/04/2006
britishpound,65,20/04/06
dirham,12,20/04/06
... etc.
Now, using this as the source, I want 2 tables to be created and filled with the appropriate data:

CurrencyMaster: CurrencyID (PK), CurrencyName
CurrencyDailyRate: ID, CurrencyID (FK), rate, date
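The load splits into two steps per line: upsert the currency name into the master table, then insert the rate row with the master's id as the foreign key. A sketch of that flow against SQLite from Python (sample lines taken from the post; in SSIS/DTS the same logic would live in a lookup-plus-insert data flow):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE CurrencyMaster (CurrencyID INTEGER PRIMARY KEY,"
            " CurrencyName TEXT UNIQUE)")
cur.execute("CREATE TABLE CurrencyDailyRate (ID INTEGER PRIMARY KEY,"
            " CurrencyID INTEGER, rate REAL, date TEXT)")

lines = ["dollar,45,20/04/2006",
         "dollar,46,22/04/2006",
         "britishpound,65,20/04/2006"]
for line in lines:
    name, rate, date = line.split(",")
    # Insert the currency once; duplicates are ignored thanks to UNIQUE.
    cur.execute("INSERT OR IGNORE INTO CurrencyMaster (CurrencyName)"
                " VALUES (?)", (name,))
    cid = cur.execute("SELECT CurrencyID FROM CurrencyMaster"
                      " WHERE CurrencyName = ?", (name,)).fetchone()[0]
    cur.execute("INSERT INTO CurrencyDailyRate (CurrencyID, rate, date)"
                " VALUES (?, ?, ?)", (cid, float(rate), date))

masters = cur.execute("SELECT COUNT(*) FROM CurrencyMaster").fetchone()[0]
rates = cur.execute("SELECT COUNT(*) FROM CurrencyDailyRate").fetchone()[0]
```

Note the mixed date formats in the real file (20/04/2006 vs 20/04/06) would need normalizing before loading into a real date column.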
What I'd like to do is use SQL Express as a back-end for an existing Access 2003 application. The application is currently a single-file Access solution. I'm just investigating options to separate the front-end from the back-end so that the users can access a single data store from multiple PCs. We're only talking about maybe 5 users total, and really no more than 2 accessing it simultaneously at any given time (although the same setup exists in several offices). Right now they are just opening the Access db from a fileshare. This has resulted in corruption on a few occasions, which is part of the reason for wanting to replace the current solution with something that will be a little more robust. I'm wondering if there is a way to deploy a SQL Express db on a fileshare so that it can be connected to by the Access front-end. While we can install things on their desktops, we can't install anything on the file server, we can just put files there. Is there any way that I could make that solution work, or should I just stick with separating the Access front-end and back-end?
I have a huge replication task I need to perform. The source table has over 250,000,000 records, and every day approximately 400,000 records are added to the system.
Currently I am running snapshot replication, and it's taking 10 to 11 hours to complete (the internet connection between the production and the report server is slow). The reason I am using this method is that the source table does not have a timestamp column (with which I could run an incremental replication), and I cannot modify that table because it belongs to third-party software. It does have a field which is linked to another table holding the timestamp.
Here is the source table and the other table's structure for reference:
DataLog
Name Datatype Size
DataLogStampID int 4
Value float 8
QuantityID smallint 2
DataLogStamp
ID (Which is foreign key to DataLogStampID field in the above table) int 4
SourceID smallint 2
TimestampSoruceLT datetime 8
What I would like to know is, what is the best practice to handle such replication with MSSQL Server 2000.
I am thinking of developing a procedure which follows these steps, using a temp table:
This procedure will run (say) every night.
The temp table will contain a timestamp field.
For one time only, I will run a snapshot replication, so that I can have the existing data into the destination db.
I will record this timestamp into a repl_status table which holds the timestamp of the last replication.
The procedure will read the record from repl_status, select all records from the source table which were added since this date, and put them into a new temp table (this table will have the same structure as the source and destination tables); then a snapshot replication will "add" only these new records to the destination table every night. By using this method, I will only transfer the records which have been added since the last replication (last night - fewer records).
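The heart of the nightly step is a simple filter: keep only rows whose linked timestamp is newer than the value saved in repl_status. A toy sketch in Python of that selection (row shape and the ISO date strings are illustrative only; in T-SQL this would be the WHERE clause that fills the temp table):

```python
def rows_to_transfer(source_rows, last_repl_time):
    """Keep only the rows stamped after the last successful replication."""
    return [r for r in source_rows if r["stamp"] > last_repl_time]

# Hypothetical source rows with their joined-in timestamps:
source = [{"id": 1, "stamp": "2006-01-01"},
          {"id": 2, "stamp": "2006-03-01"}]
new_rows = rows_to_transfer(source, "2006-02-01")
```

After the transfer succeeds, repl_status is updated to the newest stamp actually copied, so a failed night simply re-selects the same rows the next time.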
I am faced with a scenario where I have to predict the time that will be required to solve a particular problem, depending on various factors like problem type, severity level, resource availability, etc.
I would like to know which algorithm is best suited for the above scenario and the reasons for the same.
I'm about to implement DB mirroring on my production server, and I'm wondering about the what-if scenarios:
1. If my primary server suddenly dies, how can I operate the mirror DB without losing data?
2. When the primary server comes back to life, how do I make the primary server lead again with DB mirroring? (Do I need to recreate all the DB mirroring procedures again?)
3. How can I be sure that the DB is indeed mirrored on the DRP server?
4. What about alerting - can I be notified via email if the DB mirroring isn't in sync, or if the DRP server is down, and other issues that can occur?
Hi, I have a product basket scenario where I have to recommend contracts to a customer based on the product and the quantity he/she buys. Product quantity is an important factor that guides the user in the purchase of a particular contract. I have the following tables: a customer-product transaction table and a customer-contract transaction table, but there is no direct relationship between contract and product in the database. The only way the two can be linked is through the customer. If I create a mining structure with the customer-product information as one nested table and the customer-contract information as another nested table, with the customer being the link between the two, the model shows some irrelevant contract recommendations. What is the solution to this problem? Is it because there is no direct relationship between the product and the contract? How can I overcome this?
We have Asynchronous DB Mirroring established for our Production database
which runs on SQL Server 2005.
For the DR test we plan to do the following during a period of low activity:
1. Pause the mirroring on the Principal Server.
2. Break the mirror on the Mirror Server
3. Take an offline backup on the Mirror.
4. Bring the Mirror Database up.
5. Run the DR Tests on the Mirrored Database
6. Restore the Offline Backup taken in step 3.
7. Reestablish the mirror from the Principal to Mirror.
8. Resume Mirroring on the Principal Server.
9. Verify Mirroring is happening on the Mirror Server
Can you please let me know if this plan is feasible and whether any modifications to the plan are required? Any other suggestions/input is appreciated.
We have a pretty vast T-SQL statement that queries our production db and inserts the results into a reporting table on the same db. The problem we're having is that during this process several kinds of locks are taken (key, table, schema, etc.). I thought that by moving this entire process to a separate (reporting) database I could bypass this locking problem; however, the same problem exists in the reporting database. Is there a way to disable all locking? (This is just a reporting database, so I'm not worried about data integrity.) Any suggestions on disabling all locking? By the way, it's a SQL 2000 server.
example: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED //Select cnttrtydayspast insert into tampa_pmr ( hprop, sCategory, sSubCategory1, sSubCategory2, sSubCategory3, dtMonth, sMonth, sYear, iValue, iSort1, iSort2, iSort3, iSort4, iLevel) (select d.hprop, d.sCategory, d.sSubCategory1, d.sSubCategory2, d.sSubCategory3, d.dtMonth, d.sMonth, d.sYear, d.iValue, d.iSort1, d.iSort2, d.iSort3, d.iSort4, d.iLevel from (select p.hmy hProp, 'Avg Days For Outstanding Receivable' sCategory, 'Number of Households' SSUBCATEGORY1, 'z' SSUBCATEGORY2, 'z' SSUBCATEGORY3, '#begMonth#' DTMONTH, month('#begMonth#') SMONTH, year('#begMonth#') SYEAR, isnull(SUM(isnull(cnt.cntHmy,0)), 0) IVALUE, 9 ISORT1, 80 ISORT2, 0 ISORT3, 0 ISORT4, 2 ILEVEL from property p left join (select amt.property_hmy phmy, count (distinct amt.thmy) cntHmy from ( select p.hmy property_hmy ,t.hmyperson thmy ,tr.stotalamount - (sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))-trr.upostdate))when 1 then d.samount when 0 then d.samount else 0 end)) remain_amount from property p inner join trans tr on(p.hmy = tr.hprop and tr.hAccrualAcct is not null) inner join tenant t on(t.hmyperson = tr.hperson and tr.upostdate < DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))) left outer join detail d on (d.hchkorchg = tr.hmy )
left outer join trans trr on (trr.hmy = d.hinvorrec ) where tr.itype = 7
and datediff("dd",tr.sdateoccurred,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))) <=30 #Conditions# group by p.hmy ,t.hmyperson ,tr.hmy-700000000 ,tr.stotalamount having tr.stotalamount <> sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))-trr.upostdate)) when 1 then d.samount when 0 then d.samount else 0 end ) AND SIGN (tr.stotalamount - (sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))-trr.upostdate))when 1 then d.samount when 0 then d.samount else 0 end)) - ISNULL(( SELECT /* available prepays */ sum(isnull(d.samount,0)) tran_amountpaid from property p inner join tenant t1 on t1.hproperty = p.hmy inner join unit u on u.hmy = t1.hUnit inner join trans tr on tr.hperson = t1.hmyperson and tr.hmy between 600000000 and 699999999 and tr.upostDate <= '#begmonth#' inner join detail d on d.hinvorrec = tr.hmy inner join acct a on d.hacct = a.hmy and d.hAcct IS NOT NULL left outer join trans trc on trc.hmy = d.hchkorchg Where 1=1 and exists (select detail.hmy from detail left outer join trans trc on trc.hmy = hchkorchg where hinvorrec = tr.hmy and (hchkorchg is null or trc.upostdate > '#begmonth#')) and (d.hchkorchg is null or trc.upostdate > '#begmonth#') AND T1.HMYPERSON = T.HMYPERSON),0)) = 1 ) amt where amt.thmy not in ( select isnull(t.hmyperson,0) thmy from property p inner join trans tr on(p.hmy = tr.hprop and tr.hAccrualAcct is not null) inner join tenant t on(t.hmyperson = tr.hperson and tr.upostdate < DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))) left outer join detail d on (d.hchkorchg = tr.hmy )
left outer join trans trr on (trr.hmy = d.hinvorrec ) where tr.itype = 7
and datediff("dd",tr.sdateoccurred,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))) > 30 #Conditions# group by p.hmy ,t.hmyperson ,tr.hmy-700000000 ,tr.stotalamount having tr.stotalamount <> sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begMonth#')+1, 0))-trr.upostdate)) when 1 then d.samount when 0 then d.samount else 0 end ) AND SIGN (tr.stotalamount - (sum(case sign(convert(INTEGER,DATEAdd(day, -1, DATEAdd(month, DATEDiff(month, 0, '#begmonth#')+1, 0))-trr.upostdate))when 1 then d.samount when 0 then d.samount else 0 end)) - ISNULL(( SELECT /* available prepays */ sum(isnull(d.samount,0)) tran_amountpaid from property p inner join tenant t1 on t1.hproperty = p.hmy inner join unit u on u.hmy = t1.hUnit inner join trans tr on tr.hperson = t1.hmyperson and tr.hmy between 600000000 and 699999999 and tr.upostDate <= '#begmonth#' inner join detail d on d.hinvorrec = tr.hmy inner join acct a on d.hacct = a.hmy and d.hAcct IS NOT NULL left outer join trans trc on trc.hmy = d.hchkorchg Where 1=1 and exists (select detail.hmy from detail left outer join trans trc on trc.hmy = hchkorchg where hinvorrec = tr.hmy and (hchkorchg is null or trc.upostdate > '#begmonth#')) and (d.hchkorchg is null or trc.upostdate > '#begmonth#') AND T1.HMYPERSON = T.HMYPERSON),0)) = 1 ) and amt.remain_amount > 0 group by amt.property_hmy )cnt on cnt.pHmy = p.hmy where 1=1 and p.itype = 3 #Conditions# Group By p.hmy ) d where 1=1) //End cnttrtydayspast
We are about to start a project that will involve three websites, each of which will have its own public domain name and two of which will act as e-commerce applications. Both e-commerce applications will only contain relatively small amounts of data (few thousand products, up to 15000 customer records with basic order history) and will be synchronised with a central data server at another site that will be responsible for handling payments, order processing, invoicing, etc.
The sites will take approx 500 hits every 24 hours and we will need to manage the DNS on the hosting server.
My question is, would we be ok to host these sites on a Web Edition of Windows 2003 Server with SQL Express, or is there any clear reason that we should go with the Standard editions of Windows Server and/or SQL?
Hi, I am trying to use the results of a DataTable as an input to a new SQL statement. The DataTable is created and populated by the code below, after previously creating MyLookupCommand:

MyLookupReader = MyLookupCommand.ExecuteReader(CommandBehavior.CloseConnection)
Dim Mydatatable = New DataTable()
Mydatatable.Load(MyLookupReader)
I now create a new SQL string along the lines of:

SELECT TableA.field1 FROM TableA INNER JOIN Mydatatable ON TableA.field2 = Mydatatable.Field9

However, I get the following error: Invalid object name 'Mydatatable'. Any suggestions on how to resolve this?
Possibly of interest: I would have liked to have done this in one SQL statement, but the results in Mydatatable come from a SELECT DISTINCT, and field1 is a text field that can't be used in a DISTINCT statement, hence the two statements. Many thanks in advance.
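The error comes from the server never seeing the client-side DataTable: SQL only knows about objects in the database, so the lookup values have to be pushed back, typically into a temp table, before they can be joined. A sketch of that round trip against SQLite from Python (TableA/field names follow the post; lookup_rows stands in for the DataTable contents):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TableA (field1 TEXT, field2 INTEGER)")
cur.executemany("INSERT INTO TableA VALUES (?, ?)", [("a", 1), ("b", 2)])

# Stand-in for the DataTable produced by the first query:
lookup_rows = [(1,), (1,), (2,)]

# Push the client-side values into a temp table the server can join against.
cur.execute("CREATE TEMP TABLE lookup (field9 INTEGER)")
cur.executemany("INSERT INTO lookup VALUES (?)", lookup_rows)

rows = cur.execute("""
    SELECT DISTINCT a.field1
    FROM TableA a
    INNER JOIN lookup l ON a.field2 = l.field9
    ORDER BY a.field1
""").fetchall()
```

An alternative worth checking is folding both queries into one, with the original SELECT DISTINCT as a derived table in the FROM clause, which sidesteps the temp table entirely.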
What I want to accomplish is that at design time the designer can enter a value for some custom property on my custom task, and that this value is accessed at execution time.