Which One Is More Efficient? Datafile Or Attached On Server?
May 8, 2008
Hello friends!
Which is the more efficient way to use a SQL .mdf file with SQL Server Express?
Attaching the .mdf file to SQL Server Express (and using Initial Catalog in the connection string), or pointing at the file directly with AttachDbFilename in the connection string?
Is there any difference in performance and speed ?
Thanks a lot
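For reference, a minimal sketch of the two approaches (the database name, server name, and file paths below are placeholders, not from the original post). Attaching once on the server and referencing the database by name generally avoids the per-connection attach overhead that AttachDbFilename-style user instances can incur:

    -- Attach once on the server, then reference by name:
    CREATE DATABASE MyDb
        ON (FILENAME = 'C:\Data\MyDb.mdf'),
           (FILENAME = 'C:\Data\MyDb_log.ldf')
        FOR ATTACH;

    -- Connection string against the attached database:
    --   Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True
    -- Connection string attaching the file on demand:
    --   Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Data\MyDb.mdf;Integrated Security=True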
Hi, is there any way to change the location of a datafile? I need to move it from, say, drive C to drive D because C is filling up. Is there any way to do this, or do I have to recreate the database from scratch? I already have a whole lot of data in the database.
I have a database -- MDB -- with a datafile for data and a transaction log under the folder d:\mssql\data. Now I want to move the data file from d: to e:, say to e:\mssql\data. Can someone let me know if this is possible under SQL Server 7.0, and if so, how?
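For both of the move questions above, the classic approach (available from SQL Server 7.0 onward) is detach, copy, re-attach. A sketch, with MyDb and the paths standing in for the real names:

    -- Detach (requires no open connections):
    EXEC sp_detach_db 'MyDb';

    -- ...copy the .mdf/.ldf files from the old drive to the new location...

    -- Re-attach from the new location:
    EXEC sp_attach_db 'MyDb',
        'e:\mssql\data\MyDb.mdf',
        'e:\mssql\data\MyDb_log.ldf';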
Hi. I am using VS 2005 and I want to create a database file (.mdf) and have it on my server, but not have it attached to an instance of SQL Server. Is this possible? The hosting I am using only allows for one SQL database, and obviously doesn't let you attach databases yourself. I would rather go with an .mdf file used like an Access file, but I'm not sure if this is possible. If anyone knows whether it is, and how to pull it off, that would be GREAT!!
I need some clarification about adding a file to a mirrored database on the primary server without downtime and without breaking the mirror.
In our environment we are using mount-point disks on both servers. On the primary, for example, we have an F drive for data files under mount disk 3; on the mirror server we have the same drive letter, but under mount drive 2.
As I understand it, if the drives are the same we can add .ndf files on the primary and that will be reflected on the mirror. But in the current situation I am confused about mount points with different names.
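For what it's worth, the redo thread on the mirror replays the ADD FILE using the exact path recorded on the principal, so that path must also be creatable on the mirror server; if it isn't, the mirroring session is suspended. A sketch of the command (logical name, path, and sizes are placeholders):

    -- Run on the principal; the mirror replays this with the identical path:
    ALTER DATABASE MyDb
    ADD FILE (
        NAME = MyDb_Data2,                   -- logical name (placeholder)
        FILENAME = 'F:\Data\MyDb_Data2.ndf', -- must be valid on BOTH servers
        SIZE = 1GB,
        FILEGROWTH = 512MB
    );

So the thing to verify in your environment is whether the F:\ path resolves on the mirror as well; if the drive letter and folder structure match, the differing mount-point names underneath shouldn't matter.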
Hi, I have the SQL code below, which gives interesttarget 78 as a result, which it shouldn't. I think it has something to do with the '(' and ')'.

    SELECT TOP 5 *
    FROM TestnetCampaign a, TestnetAds b
    WHERE a.campaignid = b.campaignid
      AND b.sizeid = 21
      AND ( a.interesttarget LIKE '10,%'
         OR a.interesttarget LIKE '%,10,%'
         OR a.interesttarget LIKE '%,10' )
      AND ( a.interesttarget NOT LIKE '78,%'
         OR a.interesttarget NOT LIKE '%,78,%'
         OR a.interesttarget NOT LIKE '%,78' )
    ORDER BY a.ecpc DESC

Any suggestions? Thanks! Roel
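The likely culprit is the OR between the NOT LIKE conditions: a row with 78 in the middle of the list still passes, because it only has to miss one of the three patterns. By De Morgan's laws, the negation of (A OR B OR C) is (NOT A AND NOT B AND NOT C), so the exclusion block should use AND:

      AND ( a.interesttarget NOT LIKE '78,%'
        AND a.interesttarget NOT LIKE '%,78,%'
        AND a.interesttarget NOT LIKE '%,78' )

(If the column can hold exactly '78' with no commas, add a.interesttarget <> '78' as well; the inclusion block for 10 has the same edge case.)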
I'm aware of the issues with setting your logfile growth size too low (causing too many VLFs, etc.), but I haven't seen much about the datafile side of it.
Are there any benchmarks specifically on setting datafile growth that low (on databases 1-100 GB in size)? Are there circumstances on well-utilized servers where that might be warranted?
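There don't seem to be widely cited benchmarks for this, but the mechanics are easy to reason about: every growth event on a data file has to allocate and (without instant file initialization) zero-fill the new region, so a database in the 1-100 GB range growing a few MB at a time pays that cost thousands of times. A sketch of setting a larger increment (names and size are placeholders):

    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 512MB);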
TABLE ITEMS
ItID(PK) | ItName
---------------------------------------------------------------
1          Bike
2          Car
3          Boat
TABLE DET
ItID(PK)(FK of ItID) | Detail
---------------------------------------------------------------
1                      Mountain
1                      With Suspension
2                      5 Airbag
2                      3500 c.c.
2                      215cv.
3                      with cabinet
And I want a result table like:
ItId   ItName   Detail
---------------------------------------------------------------
1      Bike     Mountain
2      Car      5 Airbag
3      Boat     with cabinet
WHAT "JOIN" I NEED TO USE OR WHAT QUERY I NEED TO WRITE. CAN ANYONE WRITE ME THE QUERY, that as i show in example, will return only the first Detail for each Item
Hi, pals. My server crashed recently. I didn't give it too much importance, because I knew that, having access to the partitions, I could recover my DB files and attach my database to my new server. And everything was fine until... I found that my DTS packages and jobs were not in my attached DB. Now, where can I find these? Can you help a tormented man? Thanks in advance.
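A likely explanation (worth verifying against the old server's files): jobs and DTS packages are not stored inside user databases at all; they live in the msdb system database, so attaching user databases alone never brings them across. If the old msdb data files or a backup of msdb are still recoverable from those partitions, restoring or attaching msdb recovers them. On SQL Server 2000 they live here:

    SELECT name FROM msdb.dbo.sysjobs;                   -- Agent jobs
    SELECT DISTINCT name FROM msdb.dbo.sysdtspackages;   -- DTS packages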
I attached the .mdf and .ldf from the App_Data folder of an ASP.NET web site, and set the connection to use SSPI. The trouble is, I get this error: "Cannot open database requested by the login. The login failed. Login failed for user 'IN-XYZ\mattaniah'."
If I create a database and connect to it using SSPI, I do not get this error. I assume there is something I am missing.
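A common cause (an assumption here, since the post doesn't show the server's security state): the Windows account has no user mapped inside the newly attached database. A sketch of mapping it in, with the database name a placeholder and the account taken from the error message:

    USE MyAttachedDb;
    -- Requires a matching server-level login to already exist:
    CREATE USER [IN-XYZ\mattaniah] FOR LOGIN [IN-XYZ\mattaniah];
    EXEC sp_addrolemember 'db_datareader', 'IN-XYZ\mattaniah';
    EXEC sp_addrolemember 'db_datawriter', 'IN-XYZ\mattaniah';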
I was wondering about the principles of SQL Server. All of my books show me attaching the DB, even creating the DB, through Visual Studio 2005, which creates an .mdf file and an .ldf file.
OK, but I cannot open those files through SQL Server Management Studio. The only DBs I am able to manage through SQL Server Management Studio are the ones created with it.
So my question is: for a multi-user type application, is there a problem with attaching a network-based file through VS2005, the same way as an Access file?
Again, my keyword here is multi-user. Sorry if this might be a stupid question, but I am used to MS Access. SQL Server is a lot more complex and I don't want to start off in the wrong direction.
I have an interesting situation here. I have a SQL Server 2000 database which is attached to SQL Server 2005. The database (SQL 2000), however, only has Service Pack 3 applied. I need to apply SP4 before I can move forward with the SQL 2000 -> 2005 upgrade. The question: can I somehow apply this service pack with my current configuration, or should I install a full SQL 2000 front end anew (which I don't have at the moment)? Whenever I try to run SQL 2000 SP4 it complains: "SQL Server 2000 is not installed on this machine."
The server I am trying to install the update on is Windows Server 2003 R2 Standard Edition x64. The SQL Server 2000 database itself is x86 (obviously).
From BOL, I see these remarks with respect to the MODIFY FILE subcommand (my underline added):
Initializing Files
By default, data and log files are initialized by filling the files with zeros when you perform one of the following operations:
Create a database
Add files to an existing database
Increase the size of an existing file
Restore a database or filegroup
Which leads me to believe that expanding the size of a datafile will also wipe out (my definition of 'initialize') any existing data within that file.
I may be misunderstanding 'initialize', because when I tested it out, I found this wasn't the case - my table data written to the file was still there after a resize.
I need to clarify to what degree I'd be taking a risk by increasing the file size on a datafile which already has data in it.
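The test result is the expected behavior: "initialize" applies only to the newly allocated region of the file. Growing a file zero-fills just the added bytes (or skips even that, if instant file initialization is enabled); existing pages are never touched, so increasing the size of a populated datafile does not put the existing data at risk. A sketch of the grow operation (names and size are placeholders):

    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Data, SIZE = 20GB);  -- only the added region is zero-filled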
I am new to the world of ASP.NET. Right now I am building an application that will import about 5,000 records from an Excel spreadsheet into a table in MS SQL Server. The code works correctly, but I feel it is not efficient and the import takes a bit too long. Could you guys throw some light on how I can make the code run faster? Someone suggested that I could use a DataAdapter and update the table in the database through the Update method available on it, but I don't know how to do that. Could anyone share a snippet of code that does this?
Here is my code:
Private Sub ProcessRecords()
    Dim ds2 As New DataSet
    ' readExcelSheet is a user-defined function that reads a spreadsheet
    ' and returns a DataSet object
    ds2 = readExcelSheet("C:\Inetpub\wwwroot\Project1\Book2.xls", _
                         "SELECT * FROM [Sheet1$]")

    ' Connection() is a user-defined function that returns a SqlConnection object
    Dim myConnection As SqlConnection = Connection()
    myConnection.Open()

    Dim strSQL As String = "insert_member" ' stored procedure that inserts records
    Dim myCommand As New SqlCommand(strSQL, myConnection)
    myCommand.CommandType = CommandType.StoredProcedure
    myCommand.Parameters.Add("@salutation", SqlDbType.NVarChar)
    myCommand.Parameters.Add("@firstname", SqlDbType.NVarChar)
    myCommand.Parameters.Add("@lastname", SqlDbType.NVarChar)
    myCommand.Parameters.Add("@company", SqlDbType.NVarChar)

    Dim i, j As Integer
    Response.Write(Date.Now() & "<br>")
    For i = 0 To ds2.Tables("Members").Rows.Count() - 1
        myCommand.Parameters("@salutation").Value = ds2.Tables("Members").Rows(i).Item("sal")
        myCommand.Parameters("@firstname").Value = ds2.Tables("Members").Rows(i).Item("firstname")
        myCommand.Parameters("@lastname").Value = ds2.Tables("Members").Rows(i).Item("lastname")
        myCommand.Parameters("@company").Value = ds2.Tables("Members").Rows(i).Item("company")
        j = myCommand.ExecuteNonQuery()
        If (j > 0) Then
            Response.Write("Record Inserted - " & i + 1 & "<br>")
        End If
    Next
    Response.Write(Date.Now() & "<br>")
    myConnection.Close()
End Sub
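One way to avoid the 5,000 round trips entirely is to let SQL Server read the spreadsheet itself and insert every row in one statement. This is a sketch, not a drop-in replacement: it assumes the Jet OLE DB provider is installed on the server, ad hoc distributed queries are enabled, the server can reach that file path, and the target table/columns (dbo.Members here is a guess) mirror what the insert_member procedure writes:

    INSERT INTO dbo.Members (salutation, firstname, lastname, company)
    SELECT sal, firstname, lastname, company
    FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                    'Excel 8.0;Database=C:\Inetpub\wwwroot\Project1\Book2.xls',
                    'SELECT * FROM [Sheet1$]');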
Hi all. I have a question concerning SQL database .mdf files. In the old days I would use an MS Access database. That file would be stored with the actual web files and would use a DSN connection. I have noted that when designing with VWD 2005 Express it allows you two methods of creating an .mdf database: you can either create it as an attached .mdf, or you can create it directly using SQL Manager. My question is: if you create the .mdf database as an attached file, can you store it in the same manner as if you were using an MS Access database? Meaning, can you store it with the web site's files so it uses the allocated file-storage size, and then create a connection similar to a DSN (but for SQL) to the ISP's SQL engine, or does it have to be uploaded to the ISP's SQL Server? The reason for this question is that some of my customers do not want to pay the extra cost for a SQL allocation; however, I do not want to go back to using old ASP methods to create advanced sites, as I prefer using stored procedures. Any help will be appreciated.
I have an instance with 4 datafiles for tempdb, each set to an initial size of 4 GB and a growth rate of 100 MB. After some time the initial file sizes seem to have changed by themselves; they now read 3962, 100, 3688 and 2847 respectively. Is this something done by SQL Server itself? I cannot imagine it was done manually.
I don't think there was a restart after the initial sizes of 4 GB were set; could this be related to the problem?
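A couple of general points that may explain it (not a diagnosis of this specific instance): tempdb is recreated at its configured sizes on every service restart, and operations such as DBCC SHRINKFILE can change the stored sizes. Comparing the configured startup size against the current size shows which value moved:

    -- Current size of the tempdb files (size is in 8 KB pages):
    SELECT name, size * 8 / 1024 AS current_size_mb
    FROM tempdb.sys.database_files;

    -- Size tempdb will be recreated with at the next restart:
    SELECT name, size * 8 / 1024 AS startup_size_mb
    FROM master.sys.master_files
    WHERE database_id = DB_ID('tempdb');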
Hi everybody. As you can see in the title, I would like to copy my tables from the database on the SQL Server into an .mdf file which can be attached to the project. Can someone give me a pointer, please?
Hi, I'd like to try Express 2005, but first I'd like to know what its limits are versus SQL Server. I searched Microsoft's site but haven't found them. Does anyone know? Thanks for the answer.
An SSIS task imports data from a flat file and inserts it into a staging table. The staging table holds the data in its raw form. A second process then selects the data from the staging table, looks up the foreign key IDs for the raw data values, and inserts the data into the live table.
SQL (only key columns shown for clarity):

    -- Staging Table
    CREATE TABLE Staging (
        Information VARCHAR(10),
        MachineName VARCHAR(10),
        Status      VARCHAR(10)
    )
[code]...
The insert into the live table should look up the ID for machine 1 and the ID of the 'success' status, and insert those foreign key values into the live table for the row. There could be thousands of rows for the output of machine 1, all with different statuses (all preset in the Status table, i.e. success, failure, rerun), and the same for lots of other machines held in the Machine table.
What is the best way to insert this data all in one go, rather than reading the staging table row by row, looking up the foreign key values for the machine and status, and then inserting the data?
I was thinking along the lines of:
    INSERT INTO dbo.LiveTable (Information, MachineID, StatusId)
    SELECT Staging.Information, dbo.Machine.MachineId, dbo.Status.StatusId
    FROM dbo.Staging
    JOIN Machine ON Machine.MachineName = Staging.MachineName
    JOIN Status  ON Status.Status = Staging.Status

But I notice the problem with this is that it doubles up the inserts!
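The set-based insert above is the right shape; doubled-up rows usually mean one of the lookup joins matches more than one row per key (e.g. duplicate MachineName values in Machine, or duplicate Status values in Status). A hedged guard, assuming each name/status should map to a single ID, is to collapse the lookups before joining:

    INSERT INTO dbo.LiveTable (Information, MachineID, StatusId)
    SELECT s.Information, m.MachineId, st.StatusId
    FROM dbo.Staging s
    JOIN (SELECT MachineName, MIN(MachineId) AS MachineId
          FROM dbo.Machine GROUP BY MachineName) m
         ON m.MachineName = s.MachineName
    JOIN (SELECT Status, MIN(StatusId) AS StatusId
          FROM dbo.Status GROUP BY Status) st
         ON st.Status = s.Status;

(The cleaner long-term fix is a unique constraint on Machine.MachineName and Status.Status so duplicates can't creep in.)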
Our clients have the flexibility to detach and attach databases (I know there are a lot of considerations around this, but there is no way of changing it). Once they attach a database, we need to run some code to update a bunch of values in it.
Other than creating a SQL Agent job, are there any other options available for automatically executing a script once the database has been attached?
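One option worth testing on SQL Server 2005+ (a sketch; an attach is processed as a form of CREATE DATABASE, so a server-scoped DDL trigger should fire for it, but verify this against your version and attach method):

    CREATE TRIGGER trg_OnAttach
    ON ALL SERVER
    FOR CREATE_DATABASE
    AS
    BEGIN
        DECLARE @data xml, @db sysname;
        SET @data = EVENTDATA();  -- describes the database that was created/attached
        SET @db = @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'sysname');
        -- Placeholder: run the update script against @db here,
        -- e.g. via dynamic SQL, or by queuing the work for a job.
    END;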
I know that for a solid comparison between using DataReaders and DataSets I will have to test that myself, but for now I will be using DataSets. What I am currently doing is using assemblies to create my DataSets ahead of time; I will eventually compile them as DLLs. I'm just using assemblies during my building/testing phase. My question is: is it faster to completely build the DataSets and all needed connections inside the assemblies/DLLs and fill them, or to build the DataSets and connections in a sub procedure that can be accessed, and then fill each one as its data is needed? I ask because I will have many different data connections, so I'm not sure whether it's faster to explicitly build and fill almost every one and have them compiled at runtime ready to be accessed, or to fill them when called from a sub, etc. As I understand it, the server should track and monitor which are used the most and cache them so as to operate faster; I wonder if it will still do this if the DataSets aren't pre-filled.
I have a table Customer which has the columns phone_number (char type) and ok_to_call (bit type). There is already data in the table, and the ok_to_call column contains the value false for every row.
Now I want to update the latter column. I have a text file with a list of phone numbers, and I want every row in the Customer table whose phone_number matches a number in the text file to have ok_to_call updated to true.
This is to be done in SSIS (Integration Services). I'm new at this, and I've looked around the tool, but there are a lot of items, packages and stuff, so I don't know where to begin.
I would appreciate help on how to solve this in SSIS: what control flows/data flows to use, which items and packages to use, and how to configure and link them together.
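A common pattern for this (a sketch under assumptions: a one-column staging table, loaded by a Data Flow Task with a Flat File Source feeding an OLE DB Destination) is to land the file in staging and then run an Execute SQL Task with a joined update:

    -- Staging table the data flow loads (assumed name and column):
    -- CREATE TABLE PhoneStaging (phone_number CHAR(20));

    UPDATE c
    SET c.ok_to_call = 1
    FROM Customer c
    JOIN PhoneStaging p
      ON RTRIM(c.phone_number) = RTRIM(p.phone_number);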
I'm a new user of SQL Server 2005. I have the full version installed. I also have SQL Server Business Intelligence Development Studio installed. My OS is Windows XP.
I'm importing a series of 5 flat files into a database on one of the SQL Servers we have. My goal is to get 5 different tables (though perhaps I should do one and add an extra field to distinguish each import) into the database for further analysis.
I tried doing an import via the DTS Wizard. There are no column names in the flat file, so I defined them during the import process (all 58 of them). When I got to the end, I had an option to save the import process as an SSIS (SQL Server Integration Services) Package on:
SQL SERVER (I don't have permission for this)
or
FILE SYSTEM (did this one)
I saved the Package locally in hopes of being able to go back in, change the source file and destination table of the package and quickly get the other 4 flat files imported.
My problems are:
1) I couldn't find out how to run the *.DTSX Package file in SQL Server Management Studio (basically reusing the Package with minor changes, saving me from having to redefine the same 58 columns on each flat file import)
2) I tried but didn't understand how to run it in SQL Server Business Intelligence Dev Studio (i.e. understanding the mapping and getting the data types right so it wouldn't error out)
3) I don't know how to make the necessary changes so that the Package handles the next source file and puts the data into a new destination table (do I need to do 5 CREATE TABLEs so this Package has somewhere to run to?)
4) Does the Package need to be part of a Project to run? (I haven't found how to take an existing Package and make it part of a Project/Solution.)
5) Is there a good book or online resource for just getting the basics of using SQL Server 2005 and SQL Server Business Intelligence Development Studio?
I'm really at a loss after spending a day on it fruitlessly, scouring the help files and forums and experimenting.
Hope somebody can point me in the right direction.
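For problem 1, one commonly used route is the dtexec command-line utility that ships with SQL Server 2005, which runs a saved package straight from a file (the path below is a placeholder):

    dtexec /FILE "C:\Packages\MyImport.dtsx"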
I just spent some time working out how to do a seemingly simple task. I'm sharing the steps I took in hopes it saves other SQL Server 2005 users (especially newbies like myself) some time.
My original question posed on several SQL newsgroups was based on this goal:
I'm importing a series of 5 flat files (all with the same file layout) into a database on one of our SQL Servers, using SQL Server 2005 (SQL Server Management Studio). My goal is to get 5 different tables. I want to do this without having to redo all the layout criteria 4 additional times.
Below are the steps I followed to get a solution (all done in Microsoft SQL Server Management Studio):
Create the Package (data import)
1) Use the SQL Server Import/Export Wizard (equivalent to the SQL Server 2000 Data Transfer Wizard) to import your first flat file. At the CHOOSE DATA SOURCE window, browse for your file.
2) Under the Advanced tab, you can set your column attributes ("output column width" or "data type", to name a few). I highlighted all the columns and selected "string [DT_STR]" for data type. To avoid truncation errors, I selected 255 for output column width. You can name the columns whose data you are most concerned with (I did import all the available fields).
3) After choosing a server destination, a "SELECT SOURCE TABLES AND VIEWS" window will pop up. Under the "Mapping" column you can choose to tweak your mapping further by editing the SQL (see the Edit SQL button). I didn't.
4) The "SAVE AND EXECUTE PACKAGE" window will pop up. The "Execute Immediately" box should be checked, and you should check "Save SSIS Package" (SQL Server Integration Services). When you do, select "File System" for where to save this import-file package.
5) Click OK for the Package Protection Level and the "SAVE SSIS PACKAGE" window will appear. Browse for a path on your local computer to save to.
Modify Package (data import) for Next Use
6) In SQL Server Management Studio, browse for the Package and open it.
Preparation for SQL Task - box
7) You should see a screen that shows two boxes: "Preparation for SQL Task" and "Data Flow Task".
8) Right click on the former and select "Edit".
9) On the "SQL Statement" row, click into the right column and select the "..." box.
10) Change the destination table (the table you will create with this package) to a meaningful name and click OK.
11) Click OK for the "SQL Task Editor".
Data Flow Task - box
12) Right click on the "Data Flow Task" box and select "Edit".
13) Three boxes will appear: "SourceConnectionFlatFile", "Data Conversion 1", and "Destination - <whatever table name your original data import went to>". Below them is a section that displays "Connection Managers".
SourceConnectionFlatFile - editing
14) The first thing you will want to do is change the import source to a new flat file. You do this by going below the boxes to the "Connection Managers" window, right clicking on "SourceConnectionFlatFile" and selecting "Edit".
15) Browse for the new "File Name" and select it.
16) A "Microsoft SQL Server Management Studio" window will pop up asking if you want to "keep or reset the existing metadata". The metadata is just your column definitions, and choosing "YES" to keep it makes sense if you are doing data imports on files with the same layout.
17) Still in the "Flat File Connection Manager Editor" window, change the "Connection Manager Name" to something meaningful (I add an underscore at the end and then the name of the table the flat file is going to) and click OK.
SourceConnectionFlatFile - box (editing)
18) Right click on the "SourceConnectionFlatFile" box and select "Edit".
19) Your newly named "Flat File Connection Manager" should appear in the select box.
20) Click OK, right click again on the "SourceConnectionFlatFile" box and select "Show Advanced Editor".
21) Under the "Connections Manager" tab, your newly named "Flat File Connection" should appear (the prior step is necessary for the advanced editor to recognize your change).
22) Under the "Component Properties" tab, on the "Name" row, click into the right column and rename it to something meaningful (notice the "Identification String" row description changes too once you click out of the "Name" row).
23) Under the "Column Mappings" tab, just confirm you are mapping your flat file fields ("Available External Columns") to the destination table's fields ("Available Output Columns").
24) Under the "Input and Output Properties" tab you can check in "Flat File Source Output" to make modifications to either your "External Columns" or your "Output Columns" - you shouldn't need to for a simple import. (NOTE: any changes you make here would likely need to be consistent with the column properties found under the "Connection Manager Window" for the "SourceConnectionFlatFile" as well as the "Data Conversion 1" box under the "Data Flow Tasks" window, so exercise caution.)
25) NOTE: This process has worked for me by making my source columns all "string [DT_STR]" data type and the output columns all "Unicode String [DT_WSTR]" data type.
Data Conversion 1 - box (editing)
26) There is nothing you need to do here. By right clicking on the "Data Conversion 1" box and selecting "Edit", you can see and change the data type of the output columns (the ones in the table you're importing the flat file to). There are probably more edits one can do, but they're beyond what I've learned.
Destination - <whatever table name your original data import went to> - box (editing)
27) Right click on the "Destination - <whatever table name your original data import went to>" box and select "Show Advanced Editor".
28) Select the "Component Properties" tab.
29) Select the right column at the "Name" row and change the name to something meaningful (i.e. related to the source file name or the table name you're importing to).
30) Select the right column at the "Identification String" row and it will update to this change.
31) Select the right column at "OpenRowSet" and change it to the name of the table you are importing your flat file to (this should be consistent with the table name under step 10).
32) Click OK.
33) Select FILE and "Save As...", then give your package a new name that's meaningful (this will be helpful if you have to rerun the import of the flat file later).
Run (execute) the Revised Package (data import)
34) Go back to SQL Server Management Studio and open the Object Explorer.
35) Connect to an "Integration Services" component. This should essentially be a local instance (not sure exactly where it lives on the local computer).
36) In "Object Explorer", go down to your "Integration Services" object and expand it.
37) Expand "Stored Packages".
38) Right click on "File System" and select "Import Package"; an "IMPORT PACKAGE" window will appear.
39) For "Package Location" choose "File System", then browse for the "Package Path".
40) Click into "Package Name" and it defaults to your Package's file name.
41) Click OK and the Package is imported.
42) Right click on the newly imported Package and select "Run Package".
43) An "Execute Package Utility" window appears.
44) Select "Execute" and the package runs.
Hi everyone, here I am again. I need to improve access performance on a table. The table has no primary index, nor any other indexes. I access the table with SELECTs of this type:

    SELECT @ExistInOAG = COUNT(*)
    FROM caprs05dev.dbo.OAG
    WHERE Air_Carrier = @Aircarrier
      AND CAST(LTRIM(RTRIM(Flt_nbr)) AS int) = CAST(LTRIM(RTRIM(@ComFltNbr)) AS int)
      AND Aipt_Dpt = @AIPTDep
      AND Aipt_Des = @AIPTDst
      AND DT_FLIGHT = @Date_Rif
      AND CONVERT(varchar, STD, 102) = CONVERT(varchar, @STD, 102)
      AND CONVERT(varchar, STA, 102) = CONVERT(varchar, @STA, 102)

I have tried creating several kinds of indexes, but I get the best performance if I leave everything as it was, with no index at all!!! Does that seem possible to you? The tests I ran were these: adding a Primary-type index on the columns Flt_nbr, Aipt_Dpt, Aipt_Des, STD, STA; Create Unique, type Index, with Ignore Duplicate Key; Create As Clustered checked and fill factor 100%. I access the table practically only with SELECTs. Do you have any advice for improving access performance on this table? Thanks, Stefano!
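One likely reason the indexes didn't help (an observation about the query's shape, not a guaranteed diagnosis): wrapping the columns in functions, as in CAST(LTRIM(RTRIM(Flt_nbr)) AS int) and CONVERT(varchar, STD, 102), makes those predicates non-sargable, so the optimizer cannot seek into an index on those columns and falls back to scanning. A hedged sketch of an index-friendly form, assuming flight numbers can be stored pre-trimmed and the CONVERTs are really date-only comparisons:

    SELECT @ExistInOAG = COUNT(*)
    FROM caprs05dev.dbo.OAG
    WHERE Air_Carrier = @Aircarrier
      AND Flt_nbr = @ComFltNbr        -- assumes values stored already trimmed
      AND Aipt_Dpt = @AIPTDep
      AND Aipt_Des = @AIPTDst
      AND DT_FLIGHT = @Date_Rif
      AND STD >= @STD_day AND STD < DATEADD(day, 1, @STD_day)  -- @STD_day: date-only value
      AND STA >= @STA_day AND STA < DATEADD(day, 1, @STA_day)

With the predicates in this form, an index on (Air_Carrier, Flt_nbr, Aipt_Dpt, Aipt_Des, DT_FLIGHT) becomes usable for seeks.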
Hi all (newbie @ ASP.NET, oldie @ ASP 3). What is the purpose of using an attached .mdf database file in the App_Data folder of a web site, as opposed to importing it into the SQL Server directly or creating it on the SQL Server? Does an attached .mdf database file purely use the SQL Server as a connection interface? Is it something similar to DSN (ODBC) connections for MS Access databases?