I hope this is the right place to ask this: I am using the Microsoft Enterprise Library for accessing a SQL Server database. When I use the following code, I receive the error 'ExecuteScalar: Connection property has not been initialized.':
---------------------------------------------------------------------------
Database db = new SqlDatabase(p_ConnectionString);
DbCommand cmd = db.GetSqlStringCommand(p_Sql);
output = cmd.ExecuteScalar();
----------------------------------------------------------------------------
But when I use this code, everything works correctly:
---------------------------------------------------------------------------
Database db = new SqlDatabase(p_ConnectionString);
DbCommand cmd = db.GetSqlStringCommand(p_Sql);
output = db.ExecuteScalar(cmd);
----------------------------------------------------------------------------
I was assured by my supervisor that both snippets' syntax is correct. Can anyone shed some light on why the first snippet throws an exception? Thanks in advance, Drew
I created a wrapper class for a function and exposed it through SQL CLR. However, if I call this function from SQL it blows up, but if I call it directly from a test Windows Form the call works fine.
The blow-up is related to EnterpriseLibrary.Data; my Queue class uses that library for all of its data access operations.
Here's my wrapper class:
namespace inlineCLRsql{
    public static class Wrapper{
        public static void CallQueueEntry(int queueId, int deskNo, int missed){
            // body omitted in the original post; it calls into inLineLib.Queue
        }
    }
}
and the SQL side:
CREATE PROCEDURE sp_CallQueueEntry @queueId int, @deskNo int, @missed int -- parameter list reconstructed from the C# signature
AS EXTERNAL NAME inLineLib.[inlineCLRsql.Wrapper].CallQueueEntry
GO
sp_CallQueueEntry 4,2,0
Here is what I get as a result
System.NullReferenceException: Object reference not set to an instance of an object.
System.NullReferenceException:
at Microsoft.Practices.EnterpriseLibrary.Data.DatabaseConfigurationView.get_DefaultName()
at Microsoft.Practices.EnterpriseLibrary.Data.DatabaseMapper.MapName(String name, IConfigurationSource configSource)
at Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.ConfigurationNameMappingStrategy.BuildUp(IBuilderContext context, Type t, Object existing, String id)
at Microsoft.Practices.ObjectBuilder.BuilderBase`1.DoBuildUp(IReadWriteLocator locator, Type typeToBuild, String idToBuild, Object existing, PolicyList[] transientPolicies)
at Microsoft.Practices.ObjectBuilder.BuilderBase`1.BuildUp(IReadWriteLocator locator, Type typeToBuild, String idToBuild, Object existing, PolicyList[] transientPolicies)
at Microsoft.Practices.ObjectBuilder.BuilderBase`1.BuildUp[TTypeToBuild](IReadWriteLocator locator, String idToBuild, Object existing, PolicyList[] transientPolicies)
at Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.EnterpriseLibraryFactory.BuildUp[T](IReadWriteLocator locator, IConfigurationSource configurationSource)
at Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.EnterpriseLibraryFactory.BuildUp[T](IConfigurationSource configurationSource)
at Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.NameTypeFactoryBase`1.CreateDefault()
at Microsoft.Practices.EnterpriseLibrary.Data.DatabaseFactory.CreateDatabase()
at inLineLib.Queue.getNextQueueEntry(Int32 servedBy)
at inLineLib.Queue.callQueueEntry(Int32 servedBy, Boolean callMissed)
at inlineCLRsql.Wrapper.CallQueueEntry(Int32 queueId, Int32 deskNo, Int32 missed)
                objEntites.inTest = obj.inTest; // <----- ERROR LINE
                // objEntites.Add(obj);
            }
            return objEntites;
        }
    }
}
Error 2 foreach statement cannot operate on variables of type 'System.Data.IDataReader' because 'System.Data.IDataReader' does not contain a public definition for 'GetEnumerator' D:KOTI_PRJSEnterpriseCustomerClass1.cs 34 13 Customer
I am having difficulty with multi-owner stored procs and the Enterprise Library DeriveParameters call.
Eg:
Create proc dbo.test @p1 int as select 'hi'
create proc bob.test @p1 int as select 'hi'
DeriveParameters appears to report the wrong number of parameters. I gather it is returning ALL the parameters for every proc called 'test', regardless of the owner.
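For reference, a quick way to see how the parameter metadata breaks down per owner is to query INFORMATION_SCHEMA.PARAMETERS directly and compare it against what DeriveParameters reports (a sketch; it assumes the two 'test' procs above):

-- List the parameters of every proc named 'test', grouped by owner/schema.
SELECT SPECIFIC_SCHEMA, SPECIFIC_NAME, PARAMETER_NAME, DATA_TYPE
FROM   INFORMATION_SCHEMA.PARAMETERS
WHERE  SPECIFIC_NAME = 'test'
ORDER BY SPECIFIC_SCHEMA, ORDINAL_POSITION;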
If anyone can give me some guidance on this I would appreciate it!
Hi! I have 6-7 tables in total containing sub-objects for different objects, like phone numbers and emails for contacts. This means I have to run several queries on each detail page. I will use stored procedures for fetching the sub-objects. My question is: if I merged the sub-objects into the same tables, letting me run perhaps 2 queries instead of 4 but perhaps doubling the size of the tables, could this give me any performance difference whatsoever? As I see it, the pros are fewer queries, and the cons are larger tables plus the need for another field separating the types of objects in the table. Anyone have insight into this?
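For what it's worth, the merged detail table usually ends up looking something like this (a sketch; the table and column names are made up):

-- One detail table for several kinds of sub-objects, with a type discriminator column.
CREATE TABLE ContactDetail (
    ContactId   int          NOT NULL,
    DetailType  varchar(20)  NOT NULL,  -- e.g. 'Phone' or 'Email'
    DetailValue varchar(255) NOT NULL
);

-- One query then fetches all sub-objects for a contact:
SELECT DetailType, DetailValue FROM ContactDetail WHERE ContactId = 42;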
I want to log all changes made to a table (only updates, since there will be no deletes or inserts).
I would like to see the user who changed it, the date and time, the field name, the old value and the new value. If more fields are changed during the update, then add more records to the logging table.
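A minimal sketch of how this is commonly done with an update trigger; the audited table, its key column and the audited column are placeholders:

-- Logging table: who, when, which field, old and new value.
CREATE TABLE AuditLog (
    ChangedBy sysname      NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt datetime     NOT NULL DEFAULT GETDATE(),
    FieldName sysname      NOT NULL,
    OldValue  varchar(200) NULL,
    NewValue  varchar(200) NULL
);
GO
CREATE TRIGGER trg_MyTable_Audit ON MyTable AFTER UPDATE
AS
    -- One INSERT like this per audited column; only rows whose value really changed are logged.
    INSERT INTO AuditLog (FieldName, OldValue, NewValue)
    SELECT 'SomeColumn', d.SomeColumn, i.SomeColumn
    FROM   inserted i
    JOIN   deleted  d ON d.PKColumn = i.PKColumn
    WHERE  ISNULL(d.SomeColumn, '') <> ISNULL(i.SomeColumn, '');
GO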
I have some data that is updated every day, but I don't know when. I'm trying to build a solution that runs a SQL query to check whether this data has been updated. If it has, I'll send the updated data over FTP as a text file.
How would you solve this?
My idea is to have 2 SSIS packages:
- Package1: runs at the same time every day (inserts any missing updates into a table).
- Package2: runs every hour to check the missing-updates table, and runs Package1 if any update for missing data is found.
My only worry is that if Package1 is running and Package2 decides to run Package1 at the same time, I could get into trouble if I'm using temp tables with the same name for the text-file updates, etc. Thank you.
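For the "has the data been updated since the last export" check itself, something along these lines is typical (a sketch; the source table, its LastModified column and the ExportLog table are assumptions):

IF EXISTS (SELECT 1
           FROM   SourceTable
           WHERE  LastModified > (SELECT MAX(ExportedThrough) FROM ExportLog))
BEGIN
    -- New or changed rows exist: kick off the export (e.g. run Package1).
    PRINT 'Updated data found';
END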
I need to load a lot of Excel, CSV, etc. files. These files have hundreds of columns and I need to validate the data. Some checks are simple range checks; some are more complex checks involving multiple columns.
There may be several hundred such rules, and I may need to let the program automatically correct some invalid data in the future.
Where should I implement this in SSIS? Or should I just load the files without any checking (everything as text) and do the checking in T-SQL?
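If you go the "load as text, validate in T-SQL" route, each rule becomes a set-based insert into an error table, which scales to hundreds of rules reasonably well. A sketch with made-up staging/error table and column names (range checks work the same way once the text values are converted):

-- Rule 1 (single column): CustomerCode is required.
INSERT INTO ValidationError (StagingRowId, RuleName)
SELECT RowId, 'CustomerCodeMissing'
FROM   Staging_Customer
WHERE  LTRIM(RTRIM(ISNULL(CustomerCode, ''))) = '';

-- Rule 2 (multiple columns): if Country is 'US', State must be filled in.
INSERT INTO ValidationError (StagingRowId, RuleName)
SELECT RowId, 'StateMissingForUS'
FROM   Staging_Customer
WHERE  Country = 'US'
  AND  LTRIM(RTRIM(ISNULL(State, ''))) = '';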
I am about to upgrade my main database server (5 DBs, the largest 16 GB) from NT 4 SP6a / SQL Server 7 SP3 to Windows 2000 SP2 / SQL Server 2000 SP2.
I am planning to detach the DBs, back up to tape a few times, totally rebuild the server with the new software, restore the DBs from tape and then reattach them.
Any reason I should not use this method, and can anyone advise on the best-practice way of achieving this?
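For reference, the detach/attach steps themselves are just a couple of system procedure calls (a sketch; database and file names are examples, and attaching a 7.0 database to SQL Server 2000 upgrades it in place):

EXEC sp_detach_db 'MyDb';
-- ... back up / copy MyDb.mdf and MyDb_log.ldf, rebuild the server, restore the files ...
EXEC sp_attach_db 'MyDb',
     'D:\Data\MyDb.mdf',
     'E:\Logs\MyDb_log.ldf';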
Hi, hoping I can get a few views on a question I have relating to the above.
I am new to stored procedures and triggers and I am trying to understand 'best practice' a little better. Here is my question: I have a table that stores information, and when any field in that table is updated (and actually changes), I would like to inactivate the row as it was prior to the change and then add the change as a new, active row. This way I can see what the value was before (and that it's inactive) as well as what the current active value is.
Hope this makes sense; if this is the wrong way to manage change history, any suggestions would be appreciated.
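One way to express the "deactivate the old row, insert the changed row" idea is to wrap the change in a stored procedure rather than letting callers update the table directly. A rough sketch with made-up table and column names:

CREATE PROCEDURE UpdateItemValue
    @ItemKey  int,
    @NewValue varchar(100)
AS
BEGIN
    BEGIN TRAN;

    -- Deactivate the current row only if the value really changed.
    UPDATE Item
    SET    IsActive = 0
    WHERE  ItemKey = @ItemKey AND IsActive = 1 AND Value <> @NewValue;

    -- Add the new, active version of the row.
    IF @@ROWCOUNT > 0
        INSERT INTO Item (ItemKey, Value, IsActive)
        VALUES (@ItemKey, @NewValue, 1);

    COMMIT TRAN;
END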
A second question I have is as follows: if I have a table that stores a number, what would be the best way to create new records in a different table based on that number, where the number stored in table 1 represents how many times the record is to be created in the second table?
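One set-based way to do this is with an auxiliary Numbers table (a table holding the integers 1..N); the names below are illustrative:

-- Creates Quantity rows in Table2 for every row in Table1.
INSERT INTO Table2 (Table1Key)
SELECT t1.Table1Key
FROM   Table1  t1
JOIN   Numbers n ON n.Number <= t1.Quantity;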
Thanks. If anyone needs more data, please feel free to ask, I will help as best as I can and appreciate any advice & comments that you can give.
I have master tables that I will be updating from our ERP system. Some examples I have seen take the approach of dropping a table in SQL Server and then recreating it before importing; some, and probably my choice, append and update; I have not seen an example where all records are deleted and the data appended afterwards. Of the three approaches, which is generally regarded as best practice / most efficient?
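For the append-and-update option, the usual pattern is an update of changed rows followed by an insert of missing rows from a staging copy of the ERP extract (a sketch; table and column names are placeholders):

-- Update existing master rows whose values differ from the staged ERP data.
UPDATE m
SET    m.Description = s.Description
FROM   MasterItem m
JOIN   Staging_Item s ON s.ItemCode = m.ItemCode
WHERE  ISNULL(m.Description, '') <> ISNULL(s.Description, '');

-- Insert staged rows that do not exist in the master table yet.
INSERT INTO MasterItem (ItemCode, Description)
SELECT s.ItemCode, s.Description
FROM   Staging_Item s
WHERE  NOT EXISTS (SELECT 1 FROM MasterItem m WHERE m.ItemCode = s.ItemCode);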
Hi, I work with a large team developing an ASP.NET application that has a large database with over 50 complex stored procedures. It is proving more and more difficult and time-consuming to centralise the development and deployment of database changes, and I was wondering if there were any best practices/tools that could be recommended. I have looked on the web for good articles and haven't found anything definitive (except that Team Foundation Server is the way forward).
A brief background to the current process: everyone develops against the same database and then updates the stored procedure scripts in SourceSafe (manually). When we do a new release, someone builds a script of all the database updates and runs it. There are issues with developers overwriting each other's stored procedures and other concurrency problems.
I am looking to move all the developers onto local databases so that their work only affects them, but that brings up the problem of keeping all the local databases up to date whenever they get the latest source code. The only way I currently see is to build a database update program that will run and update to the latest version. Surely this must be a common issue? Anyone have any good ideas/concepts? Our setup is Visual Studio 2005, SQL Server 2005 and SourceSafe 2005. Cheers, Andrew Thomas
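One common home-grown approach for the "database update program" is a schema-version table plus numbered, idempotent change scripts that each developer (and the release build) runs in order. A rough sketch of what one such script might look like; the SchemaVersion table and version numbers are made up:

-- Apply change 42 only if it has not been applied to this database yet.
IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE VersionNumber = 42)
BEGIN
    -- ... the actual change goes here, e.g. ALTER TABLE / ALTER PROCEDURE ...
    INSERT INTO SchemaVersion (VersionNumber, AppliedOn)
    VALUES (42, GETDATE());
END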
Hi. Sorry if I am asking a stupid question, since I am an absolute beginner in SQL Server. Here is the question: about 13 hours ago I started SQL Server 2000 indexing a table which has 104 million records. At first the CPU usage was high, but after an hour or two the process seemed dead and Enterprise Manager stopped responding. The CPU usage dropped and has been jumping between 0 and 5%. The hard disk indicator has been blinking at a rate of roughly three times per two seconds. Is this normal? Has anyone got any idea how long the process should take? I have assigned 1.8 GB of RAM to the SQL service and it is currently using about 1 GB.
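One way to check whether the index build is still alive rather than blocked is to look at its session in sysprocesses and see whether physical_io keeps climbing (this only reads standard SQL Server 2000 system tables; substitute your database name):

SELECT spid, status, blocked, cpu, physical_io, lastwaittype
FROM   master..sysprocesses
WHERE  dbid = DB_ID('MyDatabase');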
Right now I have a stored procedure that goes through each of the Line and Body fields using a cursor. The problem is that this method is very slow. How would you experts solve this problem? Any hints or suggestions?
BEFORE EXAMPLE
Part Line Body Series Engine Year
11234A,BWETC1998
25678991,93,94,95WET01997
3345656S,R5,6,12WENC1995
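A set-based alternative to the cursor is to split the comma-separated Line/Body values with a Numbers table (integers 1..N, at least as long as the longest list). A sketch that splits the Line column; the Parts table and column names loosely follow the example above:

-- Each comma position in ','+Line+',' marks the start of one value.
SELECT p.Part,
       SUBSTRING(',' + p.Line + ',', n.Number + 1,
                 CHARINDEX(',', ',' + p.Line + ',', n.Number + 1) - n.Number - 1) AS LineValue
FROM   Parts p
JOIN   Numbers n
  ON   n.Number < LEN(',' + p.Line + ',')
 AND   SUBSTRING(',' + p.Line + ',', n.Number, 1) = ',';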
Public Sub OpenConnexionSQL()
    ConnexionSQL = New SqlConnection(ConfigurationSettings.AppSettings("DataSourceSql").ToString)
    ConnexionSQL.Open()
End Sub
Hi guys, I've been thinking about this problem for some time now, but somehow I don't know if my "solution" for it is right. I'd like to read your opinions.
There is a Capital table with Capital_Nr, Capital_Name, Capital_Population, Country_Nr and Country_Name as attributes.
I know the table is chaotic, so I brought it to 3NF:
Capital table : Capital_Nr, Capital_Name, Capital_Population, Country_Nr(foreign key)
Country table : Country_Nr and Country_Name
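Expressed as DDL, the 3NF split would look roughly like this (data types are assumptions):

CREATE TABLE Country (
    Country_Nr   int          NOT NULL PRIMARY KEY,
    Country_Name varchar(100) NOT NULL
);

CREATE TABLE Capital (
    Capital_Nr         int          NOT NULL PRIMARY KEY,
    Capital_Name       varchar(100) NOT NULL,
    Capital_Population int          NULL,
    Country_Nr         int          NOT NULL REFERENCES Country (Country_Nr)
);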
OK, so I guess the table should now be in 3NF, but what intrigues me is which NF the original table was in. I tried to apply Codd's definition of 2NF: "a 1NF table is in 2NF if and only if none of its non-prime attributes are functionally dependent on a part (proper subset) of a candidate key". In my opinion the original candidate keys could only be {Capital_Nr}, {Country_Nr} and {Country_Name}, each of them a single attribute. So, as there is no composite candidate key, I can affirm that the original table was in 2NF. Am I right?
I am wondering what normal disk I/O should be. I know it varies depending on use, but I'm looking for an average.
Here is an idea of what we have:
There are about 10 centers replicating to our primary server. We have about 80 users connecting directly to the primary server using MS Dynamics through Citrix. A few other apps use the database as well, but I am fairly certain it is Dynamics generating our disk I/O. Hardware-wise we have a powerful blade connected to a RAID 5 SAN with 15,000 rpm disks. Normally the disk I/O stays fairly low, but every so often it goes crazy, and I'm thinking it shouldn't.
Below is a sample of our disk I/O from perfmon over 2 minutes or so. As you can see, everything looks OK until 04/15/2008 10:12:49.470, when the Disk I/O % goes above 100%.
Hi, in my SQL Server 7.0 I have about 250 stored procedures in each database. Before using them for my application, I want to encrypt them all. I would have to add "WITH ENCRYPTION" to each SP in every database, which would take a long time. Is there a faster way to encrypt all SPs in all DBs? Does anyone have a utility SP (or any other way) to do this? Thanks in advance.
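There is no single command that encrypts existing procedures in place; each one has to be re-created with WITH ENCRYPTION, so keep your original scripts, because encrypted procedures cannot be scripted back out. As a starting point, a list of the procedures to process in the current database can be generated like this (sysobjects is the SQL 7.0-era catalog; the dt_% filter skips the SourceSafe support procs):

SELECT name
FROM   sysobjects
WHERE  type = 'P'
  AND  name NOT LIKE 'dt[_]%'
ORDER BY name;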
What is the fastest way a stored procedure can copy a table from a linked server?
I would like to tune this statement, possibly with hints or other logging options. Assume that table_A and table_B have exactly the same structure and that I want to preserve table_A and all its indexes and constraints. The table will be truncated before this load, if that helps in any way.
insert into table_A select * from OpenQuery(Server,'select * from Table_B')
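One commonly tried variation is a TABLOCK hint on the target so the insert takes a table lock up front (and, on versions and recovery models that support it, can be minimally logged); whether it actually helps here would need testing:

INSERT INTO table_A WITH (TABLOCK)
SELECT * FROM OPENQUERY(Server, 'select * from Table_B');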
In relation to my last post, I have a question for the SQL gurus. I need to update 70k records and mark all those updated in a special column for further processing by another system.
So, if the record was
Key1, foo, foo, ""
it needs to become
Key1, fap, fap, "U"
if and only if the data values are actually different (as above, foo becomes fap); otherwise it must stay
Key1, foo, foo, ""
Is it quicker to:
1) get the row of the destination table, inspect all values programmatically, and determine whether an update query is needed, OR
2) just run an update on all rows, but adding "and (field1 <> value1 or field2 <> value2)" to the update query, that is:
update myTable
set field1 = "foo", markField = "u"
where key = "mykey" and (field1 <> foo)
The first one will not generate new update queries if the record has not changed, on account of doing a select, whereas the second version always runs an update, but some of them will not affect any rows. Will I need a full index on the second version? Thanks in advance, Asger Henriksen
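For what it's worth, option 2 written out in full for the example above would look something like this (illustrative field names; the WHERE clause keeps the statement from touching rows that are already correct):

UPDATE myTable
SET    field1    = 'fap',
       field2    = 'fap',
       markField = 'U'
WHERE  [key] = 'mykey'
  AND  (field1 <> 'fap' OR field2 <> 'fap');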
Hey all, I'm trying to decide what's "best" to use. I've been designing and creating databases for a while and have pretty much always used a surrogate key rather than a natural one. I've finally had some free time to study more, and I have come across a lot of guides, articles and stories that say natural keys should be used whenever possible because they're a better identifier, and that surrogate keys should only be used when there is no readily available natural key. Perhaps I'd be open to accepting that, but absolutely every database I come across tends to use only surrogate keys.
For example, I'm doing an authentication system from scratch and am looking at the User table. Of course the user name has to be unique; should that be the primary key, or should I have a separate column with a GUID or an incrementing int or the like as the primary key? I can certainly see that username could be used. I can also see how it may be easier, when looking through the data tables, to identify who/what a row refers to with a surrogate key. However it still seems sort of sloppy, for lack of a better word, to me, since now somebody's username (or any other piece of data used for this purpose) would be spread across a lot of other tables. And while writing this I thought of the scenario where somebody needs their username changed: with that method the IDs need to be changed on all the related rows of all the other tables, whereas with a surrogate key it wouldn't matter.
Anyway, I'm mostly looking for opinions on which way to go (not just with the user example, but in general). Thanks.
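The two designs in question, side by side, just to make the trade-off concrete (illustrative DDL):

-- Natural key: the username itself is the primary key and is repeated in every child table,
-- so renaming a user means updating all referencing rows.
CREATE TABLE UserAccountNatural (
    UserName varchar(50) NOT NULL PRIMARY KEY
);

-- Surrogate key: an identity column is the primary key; the username is just a unique column,
-- so renaming a user touches a single row.
CREATE TABLE UserAccountSurrogate (
    UserId   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    UserName varchar(50)       NOT NULL UNIQUE
);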
I've been running a long query which takes almost 39 seconds in Query Analyzer. After creating a stored procedure (with the same query) I expected it to run faster, because I had heard that a stored procedure has a cached plan and is a faster technique, but I didn't gain any performance improvement.
Can somebody clear up my confusion about what I'm doing wrong?
We have a payroll database that needs to be backed up just before completing the payroll for that period. I need to create a batch file that a normal user can run that will tell the database to back up, and then tell the user when it is done so they can continue working. Is there an easy way to do this without giving the users special permissions? I don't want to give them backup operator status. Any help would be appreciated.
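Leaving the permissions question aside, the T-SQL the batch file would run is just a BACKUP DATABASE statement (a sketch; the database name, path and osql invocation are examples):

-- Run from the batch file with something like:  osql -E -d Payroll -i PrePayrollBackup.sql
BACKUP DATABASE Payroll
TO DISK = 'D:\Backups\Payroll_PrePayrollRun.bak'
WITH INIT, NAME = 'Payroll pre-payroll-run backup';

PRINT 'Backup complete - you can continue working.';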
I will be taking over a database that has almost no PKs or relations (this is not my choice, but a vendor's). Management is looking at stored procs to improve performance, but I am wondering, if the DB is in this state, will there really be a gain? I am pushing for normalization first, but if anybody has any ideas or opinions I would appreciate them.
Hi friends, what type of join is the query below actually using: an inner join, or is it just a normal query?
if not exists(
    select 'x'
    from   cobi_invoice_hdr h (nolock),
           fin_quick_code_met q (nolock),
           ci_adjustment_drdoc_vw z (nolock)
    where  h.tran_ou = @ctxt_ouinstance
      and  h.invoce_cat = @category_tmp
      and  d.so_no between @sonumberfrom and @sonumberto
      and  isnull(h.tran_amount,0) between @totalinvoiceamountfrom and @totalinvoiceamountto
      and  h.tran_date between convert(varchar(10),@invoicedatefrom,120) and convert(varchar(10),@fininvoicedateto,120)
      and  h.tran_no between @invoicenumberfrom and @invoicenumberto
      and  h.bill_to_cust between @billtocodefrom and @customerto
      and  h.fb_id = isnull(@fb, h.fb_id)
      and  h.tran_currency = isnull(@currency, h.tran_currency)
      and  h.createdby = isnull(@useridentity, h.createdby)
      and  EXISTS (select '*'
                   from   cobi_cust_custinfo_vw c (nolock)
                   where  h.bill_to_cust = c.custcode
                     and  c.ouid = @ou_tmp)
      and  z.status = q.parameter_text
      and  q.parameter_type = 'STATUS'
      and  q.parameter_category = 'STATUS'
      and  q.component_id = 'COBI'
      and  q.parameter_code = @status_tmp
      and  h.tran_no = z.documentno
      and  q.language_id = @ctxt_language
      and  z.language_id = @ctxt_language)
begin
    -- 'No matching invoices found.'
    select @m_errorid = 514 -- Porselvi.J - COBIDMS412AT_000255
    return
end
End
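For comparison, the comma-separated FROM list with join conditions in the WHERE clause is the old-style syntax for an inner join; the pair below shows the equivalence on a simplified slice of the query above:

-- Old-style (comma) join:
SELECT h.tran_no
FROM   cobi_invoice_hdr h (nolock), ci_adjustment_drdoc_vw z (nolock)
WHERE  h.tran_no = z.documentno;

-- Equivalent explicit INNER JOIN:
SELECT h.tran_no
FROM   cobi_invoice_hdr h (nolock)
INNER JOIN ci_adjustment_drdoc_vw z (nolock) ON h.tran_no = z.documentno;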
Hi,
1) I need to transfer 500 GB of data from one server to another; which is faster: DTS, BCP, or backup/restore?
2) What are the best methods for checking blocking, deadlocks and indexes?
I have a master table which has demographic data such as name, DOB and location, along with a primary key ID. It will have about 10-12,000 records. We get a refresh file every hour which may or may not have corrections for these records, with about 3,000 records each time. I put this data into a table, and it should always be considered correct. To handle the update to the master table I need to create an update process. I can take one of two approaches: just update all the records in the master table regardless of whether they are correct or not, or do some kind of join on those that do not match (in other words, only update the rows where the name or DOB don't match). There is an underlying update trigger on the patient master which will also fire if these values are changed. Any opinions on the best approach?
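If you go with "only update the rows that actually differ", one way to avoid listing every column in the comparison is BINARY_CHECKSUM over the demographic columns (a sketch with placeholder names; checksum collisions are theoretically possible, so listing the columns explicitly is the belt-and-braces version):

UPDATE m
SET    m.Name = r.Name, m.DOB = r.DOB, m.Location = r.Location
FROM   PatientMaster m
JOIN   RefreshFile r ON r.PatientId = m.PatientId
WHERE  BINARY_CHECKSUM(m.Name, m.DOB, m.Location)
    <> BINARY_CHECKSUM(r.Name, r.DOB, r.Location);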
I have a production server that has an 8Gb db. It is dual Xeon with 5x HDD - 2 mirrored and 3 striped. db on stripe, log and OS on mirror. 2x Gb network cards.
The application goes slow (i.e. users notice) when a backup is running, so I have placed a crossover cable from one NIC to a test server so that it can back up to a HDD on that server and then to tape. The test server has 2x Gb NICs and the link between the two servers is on a separate subnet. However, in the first trial of this, the backup and verify took 3 minutes longer.
Is this because the target server doesn't have a disk stripe?
What is the best configuration for the production server (i.e. would a slower backup that goes to another server put less load in contention with the application)?
I've got a view that is driven from an 80 million record table in a data warehouse. I am trying to populate an aggregate table in a datamart, but am running into performance problems. The datamart table needs to be updated daily. I understand there are many factors that affect performance, but in general would the fastest approach be: 1) truncate the datamart table, 2) perform a bcp of the view to a text file, 3) bulk insert into the datamart table?
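The three steps might look like this (paths, server and object names are examples; the bcp line runs from a command prompt rather than in T-SQL):

TRUNCATE TABLE datamart.dbo.AggSales;

-- bcp the view out to a tab-delimited text file (command prompt):
--   bcp "warehouse.dbo.vAggSales" out "D:\extract\AggSales.txt" -c -T -S WAREHOUSESRV

BULK INSERT datamart.dbo.AggSales
FROM 'D:\extract\AggSales.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);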
If you need more information to answer this please let me know.
Hi! We have SQL Server 2000 on our server (NT 4). Our database now has about 350,000+ rows of information about images. The table has a lot of columns, including the image name, keywords, location, price, colour mode etc. So our database doesn't include the images themselves, just a path to the location of every image. The Keywords field holds data like this: cat,animal,pet,home,child with pet,child.
Our search currently uses Full-Text Search, which sounded like a good idea in the beginning, but it has problems that really reduce our search engine's performance, and the results are not exact enough. Some of our images also have the photographer's name in the Keywords column, so if the photographer's name is, for example, Peter Moss, his pictures appear when a customer searches for "moss" (nature-like) pictures.
Another problem is that Full-Text Search becomes very slow when the query result contains thousands of rows. When a search term returns at most 3,000 rows the search is fast, but larger searches take 6 to 20 seconds to finish, which is not good. I have also noticed that the first search is always very slow, but the next ones are faster; it seems the engine is just "starting up" when the first query runs.
Is there a better and faster way to handle the queries? Would it be better to rebuild the database somehow and use another search method than Full-Text Search? I don't know how else to handle the database when every image has about 10 to even 50 different keywords to search. We have built the web interface and search code with ColdFusion; ColdFusion Server then sends all queries to SQL Server. I hope somebody has some idea how to speed up our picture search.
--
Message posted via http://www.sqlmonster.com
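One alternative to searching a comma-separated Keywords column with Full-Text Search is to normalise the keywords into their own table, so an exact keyword match becomes an indexed join rather than a text search (a sketch; table and column names are made up):

CREATE TABLE ImageKeyword (
    ImageId int          NOT NULL,
    Keyword varchar(100) NOT NULL,
    CONSTRAINT PK_ImageKeyword PRIMARY KEY (ImageId, Keyword)
);

-- Exact keyword search: "moss" no longer matches "Peter Moss" unless that exact keyword exists.
SELECT i.ImageId, i.ImageName
FROM   Images i
JOIN   ImageKeyword k ON k.ImageId = i.ImageId
WHERE  k.Keyword = 'moss';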