I am a newbie in the ASP.NET world. I am using a 3-tier architecture for my web-based application; the database is SQL Server 2000. I have looked at the ObjectDataSource and SqlDataSource controls, but I think they are not suitable for my requirements, so I am planning to use ADO.NET directly to access data from the database (i.e. creating a connection, then creating commands and executing them).
Now, what I am looking for are the best-known practices for the above task. I have the following solution in mind; please let me know if I am missing something, or which would be the best approach.
Create one class which will handle all the database requests, so that all the pages and business objects ask that class to do all the DB-related work (creating connections and commands, and executing them).
I was wondering if you guys know of a good site that talks about programmatically accessing and displaying data from a SQL Server 2005 database in ASP.NET 2.0. I want to have a data adapter in a dataset, but I would like to create my own class file and pull the data from the adapter through code into the class. Is this the best way? I'm wondering about the best practices while learning this new technology. Any articles provided would be appreciated. Thanks!
Hello all,

I just started a new job this week and they complain about the length of time it takes to load data into their data warehouse, which they do once a month.

From what I can gather, they rebuild the indexes before the insert with an 80% fill factor, then insert the data (with the indexes enabled), then rebuild the indexes with a 100% fill factor.

Most of my RDBMS experience is with a different product. We would have disabled the indexes and foreign keys, loaded the data, then re-enabled them, moving any records that violated the constraints into an appropriate audit table to be checked after.

Can someone share with me what the accepted "best practices" are for loading data efficiently into a data warehouse? Any thoughts would be deeply appreciated.

Steve
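For illustration, here is a hedged sketch of the disable/load/rebuild pattern Steve describes, assuming SQL Server 2005 or later; the table and index names are hypothetical. Note that only nonclustered indexes can be disabled this way; disabling a clustered index makes the table inaccessible until it is rebuilt.

-- Before the load: disable nonclustered indexes and suspend FK checking.
ALTER INDEX IX_FactSales_Date ON dbo.FactSales DISABLE;
ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT ALL;

-- ... bulk load the data here ...

-- After the load: rebuilding also re-enables the disabled indexes.
ALTER INDEX ALL ON dbo.FactSales REBUILD WITH (FILLFACTOR = 100);
-- WITH CHECK revalidates existing rows so the constraints stay trusted.
ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT ALL;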
I think these should be rather simple questions, yet I spent a number of hours last night digging through the forums here and MSDN and couldn't find any satisfactory answers. Basically, there tend to be types of information that are commonly saved in most databases, like names, addresses, phone numbers, email addresses, etc., and there are a variety of built-in data types in SQL Server. What are the best built-in datatypes for some of the common entries in a SQL database?

Also, there are a number of character-based types and I am curious why one would be more useful in certain situations than another. Why are there char(), nchar(), varchar(), nvarchar() and text datatypes? Why so many? Also, what is the "text" datatype and when is it most likely to be used? There is very little about the text type that I can find in MSDN or the SQL Server docs, aside from the fact that it's text. On top of all this, there are numerous binary types as well. I'm really not getting the reason behind all these different basic types and why I would want to use one over the other in any specific instance.
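For illustration, a hedged sketch of how the character types differ in practice (the table and columns are hypothetical): char(n)/nchar(n) are fixed-length and blank-padded, varchar(n)/nvarchar(n) are variable-length, and the n- variants store Unicode at two bytes per character.

CREATE TABLE dbo.Contact (
    CountryCode char(2)       NOT NULL, -- fixed-length: always exactly 2 characters
    Email       varchar(254)  NOT NULL, -- variable-length, single-byte character set
    FullName    nvarchar(100) NOT NULL, -- variable-length Unicode: safe for international names
    Notes       text          NULL      -- legacy large-text type; deprecated since SQL Server 2005 in favor of varchar(max)
)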
I am making a form that takes input for 1 to 5 students using VWD. With the help of previous posts I have been able to make the database insert query work properly. In my form I have a radio list that has the user select whether they are entering information for 1, 2, 3, 4, or 5 children. Depending on how many children are selected on the radio list, I display the proper number of textboxes and validate the data using the handy RequiredFieldValidator. Now I am at the point where I want to perform the insert to the database depending on the selected number of children in the family. What is the general rule for best practices? Please keep in mind that it is my understanding that ALL fields in a SQL insert statement must have data. Should I:

1) create alternative SQL statements depending on the textboxes displayed, or
2) is it more common to insert a standard string or integer, depending on the datatype, into the unused textboxes to populate the unused fields?

Sincerely,
Mike
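For what it's worth, the premise that every field in an INSERT must have data only holds for NOT NULL columns without defaults. If the columns for children 2-5 allow NULL, option 1 reduces to naming only the columns that were actually filled in; a minimal sketch with a hypothetical table:

-- Only the first child's column is required.
CREATE TABLE dbo.FamilyIntake (
    FamilyID   int IDENTITY(1,1) PRIMARY KEY,
    Child1Name nvarchar(50) NOT NULL,
    Child2Name nvarchar(50) NULL,
    Child3Name nvarchar(50) NULL,
    Child4Name nvarchar(50) NULL,
    Child5Name nvarchar(50) NULL
)
GO

-- A family with two children: the unused columns are simply omitted
-- and default to NULL, so no dummy values are needed.
INSERT INTO dbo.FamilyIntake (Child1Name, Child2Name)
VALUES (N'Alice', N'Ben')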
[This is one of those cases where I think I know the answer, but I hope I'm wrong!]
I have a data flow which is processing data from the XML Source. There are 16 outputs from the XML Source. I have to perform a variety of validations on these outputs: things like "column 1 is required if column 2 has value 'a' or 'b'", or "column 1 or column 2 may be present, but not both".
For lookups and such, I use the Lookup component and its error output, both to redirect rows that fail the lookup, and to capture the data, column number and error code.
But, how do I do the same for "normal" validations?
If I have to use the Conditional Split transform, then I'll have to have one output per validation, and use a Union All to combine the rows again for output to an error file. This will also cost an extra "Derived Column" transform per output, in order to get a column number and possible error code per failed validation.
Worse, it's a pain to have to maintain all the columns in such a large "Union All"!
If I had the time, I might write a "Conditional Error" transform. It might be fun. But I have to be done by the end of this month, and don't even have time to create the UI for evaluating expressions!
Any tips or tricks or pointers to such would be very welcome.
I apologize if this has been asked, but I can't find a complete answer.
We have a situation with parent/child tables which have an identity column as their PK. We need to be able to insert into the live tables from staging tables. The data in the staging tables are related via a surrogate key.
I have found the OUTPUT clause, but that can only refer to columns of the actual table (since there is no FROM clause in an INSERT). Our current best solution to this problem involves adding bogus "staging" columns to the destination tables, and removing them after we've inserted everything from staging. This is an unattractive solution to say the least.
I'll give an example that mirrors our actual solution, and ask if anyone has a better one.
CREATE TABLE [dbo].[TABLE_A](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [STAGING_COLUMN] [bigint] NULL,
    CONSTRAINT [PK_TABLE_A] PRIMARY KEY ([ID] ASC)
)
GO

CREATE TABLE [dbo].[TABLE_B](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [A_ID] [int] NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [STAGING_COLUMN] [bigint] NULL,
    CONSTRAINT [PK_TABLE_B] PRIMARY KEY ([ID] ASC)
)
GO

ALTER TABLE [dbo].[TABLE_B] ADD CONSTRAINT [FK_TABLE_A_TABLE_B]
    FOREIGN KEY([A_ID]) REFERENCES [dbo].[TABLE_A] ([ID])
GO

CREATE TABLE [dbo].[STAGE_TABLE_A](
    [A_Key] [bigint] NOT NULL,
    [DATA] [nchar](10) NOT NULL
)
GO

CREATE TABLE [dbo].[STAGE_TABLE_B](
    [B_Key] [bigint] NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [A_Key] [bigint] NOT NULL
)
GO
The STAGING_COLUMN columns are the ones that will be added before, and dropped after.
DECLARE @TABLE_A_MAP TABLE (
    A_ID INT,
    A_Key BIGINT
)

INSERT INTO TABLE_A (DATA, STAGING_COLUMN)
OUTPUT INSERTED.ID, INSERTED.STAGING_COLUMN INTO @TABLE_A_MAP
SELECT DATA, A_Key
FROM STAGE_TABLE_A

INSERT INTO TABLE_B (A_ID, DATA)
SELECT TAM.A_ID, STB.DATA
FROM STAGE_TABLE_B STB
INNER JOIN @TABLE_A_MAP TAM ON TAM.A_Key = STB.A_Key
This seems to work, but I'd really like another alternative. Even though this is happening when nobody else is using the database, I cringe at the thought of adding and removing columns just to make this work.
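One alternative worth noting, assuming SQL Server 2008 or later is available: unlike INSERT, the OUTPUT clause of MERGE may reference columns of the source table, so the identity-to-surrogate-key map can be captured without any staging column on the destination. A sketch against the tables above:

DECLARE @TABLE_A_MAP TABLE (
    A_ID INT,
    A_Key BIGINT
)

MERGE INTO TABLE_A AS T
USING STAGE_TABLE_A AS S
    ON 1 = 0  -- never matches, so every source row takes the INSERT branch
WHEN NOT MATCHED THEN
    INSERT (DATA) VALUES (S.DATA)
OUTPUT INSERTED.ID, S.A_Key INTO @TABLE_A_MAP (A_ID, A_Key);

-- TABLE_B is then loaded exactly as before, joining through @TABLE_A_MAP,
-- and TABLE_A no longer needs STAGING_COLUMN at all.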
Here are a few of my constraints:
The above is a simplification of the actual problem. The actual problem goes about five levels deep (hence the B_Key in STAGE_TABLE_B). At the top level, our larger customer will have 100,000 rows to insert. Each level will average 3 times as many rows as the next higher level, so we're talking about real volumes here.
This has to finish over the course of a weekend.
This has to be delivered to QA this Friday. Thanks for any help or insight.
I'm trying to rewrite my database to decouple the interface (MS Access) from the SQL backend. As a result, I'm going to write a number of stored procedures to replace the MS Access code. My first attempt worked on a small sample; however, trying to move this onto a real table hasn't worked (I've amended the SP and code to try to get it to work on 2 fields, rather than the full 20-plus). It works in the SQL management console (supply a client ID, it returns all the client details), but does not return anything (the recordset is closed) when accessed via VBA code. The stored procedure is:
USE [VMSProd]
GO
/****** Object: StoredProcedure [Clients].[vms_Get_Specified_Client] Script Date: 22/09/2015 16:29:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
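A common cause of a closed recordset when calling a stored procedure from VBA/ADO is the informational "n row(s) affected" messages, which classic ADO surfaces as extra, closed recordsets ahead of the real one. A hedged sketch of the usual fix follows; the body below is hypothetical, and SET NOCOUNT ON is the relevant line:

ALTER PROCEDURE [Clients].[vms_Get_Specified_Client]
    @ClientID int
AS
BEGIN
    -- Suppress "rows affected" messages so ADO sees only the result set.
    SET NOCOUNT ON;

    SELECT *
    FROM Clients.ClientDetails   -- hypothetical table name
    WHERE ClientID = @ClientID;
END
GO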
I have a client who has SSMS installed on her laptop. She is able to connect to the SQL server via SSMS in the office and query data on the server.
She is often out of the office and doesn't have internet access. She asks if the data tables can be "backed up" or saved on her laptop, so she can look at them without worrying about connecting to the server. I am not sure if this can be achieved, as SSMS is built for accessing a server, not a desktop. I myself have never had this need; if I really needed it, I would go to Microsoft Access and create an ODBC connection to the data tables. But this client thinks that Microsoft Access is beneath her.
I have recently upgraded to SQL 2014 on Windows 2012. The Access front-end program works fine.
But previously created Excel reports with built-in MS Queries now fail with the above error for users with Office 2013. The queries still work for users still on Office 2007.
I also cannot create any new queries, and I get the same error message. If I log on as myself on the domain to another PC with Office 2007 installed, it works fine, so I don't think it has anything to do with AD groups or permissions.
We need to insert data/rows from a SQL Server 2014 database into an MS Access database. The problem is, there are so many columns (100+) in the table, and there are so many insert transactions of this kind (from different tables), that it is not very easy to write VB.NET code that lists all the column names.
Both the Access and SQL Server tables have the same number of columns and equivalent data types, so inserting is not really the problem. It's just: is there a way to write an INSERT statement in T-SQL that does not name all the columns?
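T-SQL does let an INSERT omit the column list, in which case values are matched strictly by ordinal position. A minimal sketch with hypothetical table names, assuming identical column order and compatible types on both sides (and no identity column on the destination, which would additionally require SET IDENTITY_INSERT):

-- Columns are matched by position, so this is only safe when both
-- tables have the same columns in the same order.
INSERT INTO dbo.DestTable
SELECT *
FROM dbo.SourceTable;

Whether this helps across the SQL Server-to-Access boundary depends on where the statement runs; the positional-matching rule itself is standard T-SQL behavior.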
I've been developing desktop client-server and web apps and have used Access and SQL Server Standard most of the time. I'm looking into using SQL CE, and had a few questions that I can't seem to get a clear picture on:
- The documentation for CE says that it supports 256 simultaneous connections and offers isolation levels, transactions, locking, etc., with a 4 GB DB. But most people say that CE is strictly a single-user DB and should not be used as a DB server. Could CE be extended for use as a multi-user DB server by creating a custom server, such as a .NET Remoting server hosted in a Windows service (or any other custom host), whereby the CE DB would run in-process with this server on one machine and be accessed by multiple users from multiple machines? Client PCs -> server PC hosting the Remoting service -> ADO.NET -> SQL CE
- Furthermore, can we use Enterprise Services (serviced components) to connect to SQL CE and extend this model into a proper high-quality DB server? Client PCs -> server PC hosting the Remoting service -> Enterprise Services -> ADO.NET -> SQL CE
It seems quite doable to me, but I may be wrong. Please let me know either way.
When I try to start our hospital software, which is based on SQL Server 2000, it shows the following error while fetching data: "Search condition is not valid, (DBNETLIB) ConnectionOpen (Connect()). SQL Server does not exist or access denied." I run our software on Windows 8.1, while it runs smoothly on previous versions of Windows (XP and 7).
I wonder if somebody here could recommend a good article about MS Service Broker. I'm looking for advice and tips on designing applications using SQL Service Broker, mainly Query Notifications (QN): for instance, maintenance routines and common failure scenarios I might encounter later, once my solution is implemented. I have googled for a while, but all I can find are recopied examples of QN.
Hi, I use data presentation controls like GridView and FormView in my application. In many of the webforms I also use multiple datasource controls, mainly for the purpose of two-way data binding for controls within the data presentation controls. I am concerned about the performance issues this might cause as the number of users of these pages increases. What is the likely performance impact? Once the databind is done and values are populated in the respective controls, does the database connection of the datasource control get closed, or does it stay open? What are the best practices when implementing datasource controls?
I'm looking for some documentation on SQL 2K installation tips on a Windows 2000 member server platform, as well as best practices for ongoing maintenance.
Real world experience as well as Microsoft propaganda are all welcome.
I am looking for some examples of how to manage DDL scripts among various versions of a production DB, development, and testing. I have tried a few things in the past, and it always gets very muddled and cumbersome.

I need to be able to build any version of the database from scratch, BUT I also need to maintain an upgrade path from any version to any later version. So it is not enough to just maintain a master build script, but I don't want to maintain two different things (modify the master build scripts AND create a new "ALTER" script for each version change).

I thought I had seen an article somewhere that laid out a process for managing this, but I can't find it now (I thought it was in SQL Server Mag). Does anybody know of this article, or have a resource they could point me to that outlines best practices in this area?

Thanks,
Jason Wood, DBA in training.
Hi All,

My question is: what are the best practices for administering large DBs? (My coworker is the DB administrator; I'm more of the developer, but slowly being sucked in.) My main concern is that we have some DBs that take approx 3 hrs a night just to rebuild the indexes. I know that with MSSQL 2000 I can use partitioned views to break out the table(s) into smaller databases and tables, but we also have an older server that runs MSSQL 7. Lastly, how do you handle drive space issues? Do you spread out the DB across multiple MDF files on different drives? Thanks in advance.
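For context, a minimal sketch of the partitioned-view technique mentioned above, with hypothetical names: each member table carries a CHECK constraint on the partitioning column, and the view UNION ALLs the members so that queries filtered on that column touch only the relevant table.

-- Member tables, each constrained to one year.
CREATE TABLE dbo.Orders_2002 (
    OrderID   int   NOT NULL,
    OrderYear int   NOT NULL CHECK (OrderYear = 2002),
    Amount    money NOT NULL,
    CONSTRAINT PK_Orders_2002 PRIMARY KEY (OrderID, OrderYear)
)
CREATE TABLE dbo.Orders_2003 (
    OrderID   int   NOT NULL,
    OrderYear int   NOT NULL CHECK (OrderYear = 2003),
    Amount    money NOT NULL,
    CONSTRAINT PK_Orders_2003 PRIMARY KEY (OrderID, OrderYear)
)
GO
-- The partitioned view presents the members as one logical table.
CREATE VIEW dbo.Orders AS
SELECT OrderID, OrderYear, Amount FROM dbo.Orders_2002
UNION ALL
SELECT OrderID, OrderYear, Amount FROM dbo.Orders_2003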
Please forgive me if I have overlooked a thread that answers this question, but I assure you that I have looked.
I would really appreciate a guide of sorts that would tell me the correct steps to take to properly secure a column in my database. I don't need specifics on how to do each step; I either have those already or can find them myself. In fact, I have already successfully encrypted and decrypted some data. I just want to make sure that I create the right keys and certificates, and that I follow best practices as far as backups and the like are concerned.
Environment is SQL Server 2005 x64 Enterprise running under Windows Server 2003 x64 Enterprise with four processors and 16 GB of RAM.
I have 28 data copy routines I would like to add to an SSIS package. They use the DataReader Source against an ODBC database (InterSystems Caché) and copy the table contents to a SQL 2005 database for reporting needs. The data rows in these 28 routines range from only 100 rows to over 6 million rows, depending on the table. I have tested these individually and they work fine. My question is: is it good practice to have all of these routines in a single package, or can I expect performance degradation?
I've got a table that has frequent updates to it. I want 100% change tracking on this table though, so we can rollback to any previous version, or just see any changes people make.
Is there a best practice for things like this? Currently, I'm using an UPDATE trigger to take the previous values and store them in a history table. This keeps track of who changed what, and when; plus, the most recent data is separate and more performant to access.
I've also heard about putting an 'IsActive' flag on the main table, where any changed row just gets marked as inactive and a new record gets added.
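For reference, a minimal sketch of the trigger-to-history approach described above, with hypothetical table and column names; inside an UPDATE trigger, the deleted pseudo-table holds the pre-update values:

CREATE TRIGGER dbo.trg_Widget_History
ON dbo.Widget
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Copy the old version of each updated row into the history table,
    -- stamping who made the change and when.
    INSERT INTO dbo.Widget_History (WidgetID, Name, Price, ChangedBy, ChangedAt)
    SELECT d.WidgetID, d.Name, d.Price, SUSER_SNAME(), GETDATE()
    FROM deleted AS d;
END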
I am new to SSIS, but have done a lot of DTS 2000 development.
What is the consensus for developing SSIS packages? Do you just place objects and change the properties of each object, having multiple objects basically doing the same thing with different properties? Or do you set an object's properties and then change properties by code in scripts, e.g. an Execute SQL task whose connection and SQL statement are set by code in a script? Is this even possible? With Microsoft OOP I assume it is.
Script > set properties of Execute SQL > set flow to Execute SQL.
Is this the only way to do it in SSIS? http://sqljunkies.com/WebLog/ashvinis/archive/2005/06/15/15829.aspx For some reason I figured that SSIS would have this kind of stuff built into it; it seems like functionality many would use.
I wonder if anyone knows what would be the best case scenario for the property 'maxinsertcommitsize' for the sql destination task if I want to load 6m records into a target. Is the best setting 0 (try loading all in one batch) or should I choose a different value for example 1000000 per batch?
When I execute the stored procedure below, I get the error "Arithmetic overflow error converting expression to data type int".
USE [FileSharing]
GO
/****** Object: StoredProcedure [dbo].[xlaAFSsp_reports] Script Date: 24.07.2015 17:04:10 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
[Code] .....
Msg 8115, Level 16, State 2, Procedure xlaAFSsp_reports, Line 25
Arithmetic overflow error converting expression to data type int.
The statement has been terminated.
(1 row(s) affected)
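A frequent cause of Msg 8115 with int is an expression whose result exceeds the int range (about 2.14 billion); for example, SUM over an int column is itself typed int. A hedged sketch of the usual fix, with hypothetical names:

-- Widening the input to bigint makes the SUM accumulator bigint as well,
-- so large totals no longer overflow.
SELECT SUM(CAST(bytes_sent AS bigint)) AS total_bytes
FROM dbo.transfer_log;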
Is there a step-by-step paper to get there? Here is what I need to consider: I will have many customers that will need their own set of records and access pages "branded for their company", and each customer will have many clients. I am hosting this application on a Windows 2003 server with SQL Server 2005 Enterprise.
I am using Windows authentication. I have created a username in Windows, then added the Windows user in SQL Management Studio under Security, granted "DB read" and "DB write", and did the same again under the database's Security tab. Still, authentication from the web fails; I must be missing a step or two.
I expect to set up a username for each database as I set up new customers.
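For comparison, a hedged sketch of the usual grant sequence; the login and database names are hypothetical. One thing worth checking: with Windows authentication, the account the web application actually connects as is often the ASP.NET worker-process/app-pool identity rather than the browsing user, unless impersonation is configured.

-- Server level: allow the Windows account to connect.
CREATE LOGIN [MYDOMAIN\WebAppUser] FROM WINDOWS;
GO

USE CustomerDb;   -- hypothetical per-customer database
GO

-- Database level: map the login to a user and grant read/write
-- through the built-in roles.
CREATE USER [MYDOMAIN\WebAppUser] FOR LOGIN [MYDOMAIN\WebAppUser];
EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\WebAppUser';
EXEC sp_addrolemember 'db_datawriter', 'MYDOMAIN\WebAppUser';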
Hi,
I'm having problems following the tutorial on creating a data access layer - http://www.asp.net/learn/dataaccess/tutorial01cs.aspx?tabid=63 - when I try to compile in Visual Studio 2005, I get "namespace could not be found". I followed the tutorial exactly - I created a dataset and added this code in my aspx page:

<asp:GridView ID="GridView1" runat="server" CssClass="DataWebControlStyle">
    <HeaderStyle CssClass="HeaderStyle" />
    <AlternatingRowStyle CssClass="AlternatingRowStyle" />
</asp:GridView>

In my C# file I added these lines:

using NorthwindTableAdapters; // <<<<< this is the problem - where does this come from?

protected void Page_Load(object sender, EventArgs e)
{
    ProductsTableAdapter productsAdapter = new ProductsTableAdapter();
    GridView1.DataSource = productsAdapter.GetProducts();
    GridView1.DataBind();
}

Thanks in advance
I'm currently working on a BI architecture for a customer, and am considering proposing the Power BI data catalog as a data distribution layer. The customer will use Power BI, but also has other BI tools.
Are data sets in the data catalog available to clients other than Power Query alone? E.g., are there OData feed endpoints available? If not, what would be the best way to give other tools access to the data?
I have two databases (MYDB1, MYDB2) on two different servers (SERVER1, SERVER2). I want to create a stored procedure in MYDB1 on SERVER1 that gets some data from a table in MYDB2 on SERVER2. How can I do this?
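The usual approach is a linked server; a hedged sketch follows, in which everything except the server and database names above is hypothetical. Run the linked-server setup once on SERVER1, and the procedure can then reference the remote table by its four-part name:

-- One-time setup on SERVER1.
EXEC sp_addlinkedserver @server = N'SERVER2', @srvproduct = N'SQL Server';
GO

USE MYDB1;
GO

CREATE PROCEDURE dbo.GetRemoteData
AS
BEGIN
    SET NOCOUNT ON;

    -- Four-part name: server.database.schema.object
    SELECT t.*
    FROM SERVER2.MYDB2.dbo.SomeTable AS t;
END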