Hello,
I need some major help: I need to make a database using SQL Server for a forum. Right now I am using phpBB, but I need that database. I was thinking about it, and it doesn't need to be complicated or anything. I really have no idea where to start, so any help is appreciated.
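From reading around, I gather a minimal starting point might look something like this (every table and column name below is a placeholder I made up), but I'd appreciate confirmation:

-- Minimal forum schema sketch; all names are placeholders
CREATE TABLE Users (
    UserID   int IDENTITY(1,1) PRIMARY KEY,
    UserName nvarchar(50) NOT NULL,
    Email    nvarchar(100) NOT NULL
);

CREATE TABLE Topics (
    TopicID   int IDENTITY(1,1) PRIMARY KEY,
    Title     nvarchar(200) NOT NULL,
    CreatedBy int NOT NULL REFERENCES Users(UserID)
);

CREATE TABLE Posts (
    PostID   int IDENTITY(1,1) PRIMARY KEY,
    TopicID  int NOT NULL REFERENCES Topics(TopicID),
    UserID   int NOT NULL REFERENCES Users(UserID),
    Body     nvarchar(max) NOT NULL,
    PostedAt datetime NOT NULL DEFAULT GETDATE()
);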
Thank you in advance
I have a new business, and part of that business includes receiving large amounts of data from time to time. I just found out yesterday that I'm going to be receiving about 1TB of data from a new client! I'm not set up at all for a data set this large.
I want to use SQL Server as my database. Can I install SQL Server on a desktop PC without having to buy a server? How?
I don't have a clue as to how I need to get set up for this data... hardware or software. Any advice you can give will be outstanding!
I have a site that was supposed to go live yesterday. I am using M$ SQL Express 2005 and the Express Manager. I set up everything using Windows authentication on my local computer. I backed up the database through the manager and simply did a restore to the live database server. I copied my aspx files and everything else. I changed my connection string to use SQL authentication (because I was having trouble with Windows authentication).

For some reason, my SQL authenticated user can do whatever it wants within the SQL manager, but I am unable to log in to the site. I get no errors, just the usual failed login attempt text. Can someone please help? I don't know where to start on this one.

Thanks,
Joshua Foulk
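One guess I've seen for this kind of thing: after restoring to a different server, the database user can end up "orphaned" from the server-level login. A hedged sketch of how to check and fix that, assuming a SQL login named 'webuser' (the name is my invention):

-- List database users whose link to a server login is broken
EXEC sp_change_users_login 'Report';

-- Re-link an orphaned database user to the login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'webuser';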
I hope I haven't messed up! I was importing some data, and it started taking too long and seemed to have locked up, so I did a cold boot, and when I tried to open the db it would just load...
I have since detached it and tried to reattach the db, but it seems to just load forever... I let it sit there for an hour and still nothing...
The DB has a 17 GB LDF file and I can't attach without it...
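In case it helps anyone suggest a fix, this is roughly the attach I'm attempting (database name and file paths are placeholders I made up). My understanding, which may be wrong, is that attaching both files lets crash recovery roll back the interrupted import, which could legitimately take a long time with a 17 GB log:

-- Attach sketch with made-up paths; keeping the LDF lets recovery run
CREATE DATABASE MyDb
ON (FILENAME = 'C:\Data\MyDb.mdf'),
   (FILENAME = 'C:\Data\MyDb_log.ldf')
FOR ATTACH;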
For some reason, when I'm trying to restore a backup I'm encountering this problem. I've asked numerous people and been in and out of chatrooms all day and night. Has anyone got any idea what to do?
Should I just pass an array of column names and call AddWithValue on the SqlCommand's Parameters collection while looping through the array? Any comments are greatly welcomed.
I have created a database with three tables. The database has been up for a month now and contains about 20,000 records. In order to improve performance and resolve some issues I am attempting to change some of the table information, i.e. allow nulls in a few fields. When I use T-SQL or the GUI to make these changes I get the following error: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Source: .Net SQLClient Data Provider
I have SQL Server Express SP2 installed with .Net framework v3.0
Note: This issue appears when I attempt to delete a row from a table as well.
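For reference, the change I'm attempting boils down to this (table and column names are placeholders):

-- Essentially what I'm running when the timeout occurs
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn int NULL;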
I have a report with a table and three columns. Whenever the data in one of the columns cannot fit on a single page it continues on the next page BUT, there is no header on the next page and the data in the other two columns repeats itself on the next page. (This behavior happens when I export to PDF)
Is this a known issue? I have tried every setting I could think of.
I have a horizontal bar chart with integer values, both positive and negative. Major gridlines are shown and I have not specified the interval or the label format. Sometimes the gridline labels (Y axis) show decimal values. Is there any way to format these? I have tried to format the labels as "#0" but then I see labels occurring twice. I have also played with the intervals, but then sometimes, depending on the values, the zero line is not shown.
Hi, we have a product that is developed in ASP and works with SQL Server 2000 or 2005. Since it's an ERP, we also use Reporting Services 2000 or 2005. Our application needs 3 registered DLLs that were, a long time ago, developed to support our entire application. Since we are using Windows Server 2003 x64 editions at our clients with SQL Server 2005 x64 edition, we managed to register the 32-bit DLLs on the 64-bit system. We installed them as a COM+ component, ran the command "cscript.exe adsutil.vbs set W3SVC/AppPools/Enable32BitAppOnWin64 true" and our application worked fine. This command caused IIS to use the .NET 2.0 32-bit version so that our DLLs could be correctly invoked.

But now Reporting Services doesn't work because it needs the 64-bit version of the .NET Framework. When I try to connect to localhost/reports, I get the error "%1 is not a valid Win32 application". Is there any workaround to this problem so that I can deploy the application and the database on the same machine?
I'm trying to insert data into a locally stored database (SQL Server). The data I want inserted is presented in a Treeview control, and the data is fetched from a webservice. The data is returned in the form of a dataset. The treeview contains checkboxes allowing a user to select what to install in the locally stored database.
To sum up:
1. Get data from a webservice - not my problem
2. Present data in a Treeview control - not my problem
3. Allow the user to select which data to install - not my problem
4. Insert data that the user has selected into my db - MY PROBLEM!!!!
The Treeview is generated with DataRelations between Group and Rule.
My locally stored database is designed by a third party provider and therefore the database must not be altered. The table I want to store data in is called "Groups" and it looks like this:
GroupID uniqueidentifier - default (newid())
GroupName nvarchar(50)
ParentGroupID uniqueidentifier - if GroupType = 0 then ParentGroupID must have a value
GroupType tinyint - 0 = subgroup, 1 = "top" group
The third party also created a stored procedure called pr_AddGroup taking the following parameters:
@GroupName - can be both the RuleName and the GroupName
@GroupType - can be 0 for subgroup or 1 for "top" group
@ParentGroup - GUID
The problem with this stored procedure is that it does not have a return value, which is where my problem actually lies. If it returned @@IDENTITY I could use that as the @ParentGroup parameter. Instead I figure I must create two SqlCommands (one calling pr_AddGroup and another running SELECT @@IDENTITY to get the newly created record).
My SqlCommand setup looks like this:

Dim cmd As SqlCommand
Dim Conn As SqlConnection = New SqlConnection
Conn.ConnectionString = "Data Source=myServer;Initial Catalog=myTable;Integrated Security=SSPI"

cmd = New SqlCommand
cmd.CommandType = CommandType.StoredProcedure
cmd.Connection = Conn
cmd.CommandText = "pr_AddGroup"
' The parameters must be declared before the loop assigns their values
cmd.Parameters.Add("@GroupName", SqlDbType.NVarChar, 50)
cmd.Parameters.Add("@GroupType", SqlDbType.TinyInt)
cmd.Parameters.Add("@ParentGroup", SqlDbType.UniqueIdentifier)

Dim cmd2 As SqlCommand
cmd2 = New SqlCommand
cmd2.CommandType = CommandType.Text
cmd2.CommandText = "SELECT @@IDENTITY AS ID"
cmd2.Connection = Conn

Dim ParentGroupGUID As System.Guid
To get the data inserted into the Groups table I do something like the following, but the code is very ugly (and it doesn't work either):
Conn.Open() ' the connection must be open before ExecuteNonQuery

For Each Group In TreeView1.Nodes ' Loop through Groups
    If Group.Checked Then
        cmd.Parameters("@GroupName").Value = Group.Text
        cmd.Parameters("@GroupType").Value = 1
        cmd.ExecuteNonQuery() ' insert the "top" group itself

        For Each Rule In Group.Nodes ' Loop through Rules
            If Rule.Checked Then
                cmd.Parameters("@GroupName").Value = Rule.Text
                cmd.Parameters("@GroupType").Value = 0 ' subgroup
                ' ParentGroupGUID still needs to be obtained somehow - this is the problem
                cmd.Parameters("@ParentGroup").Value = ParentGroupGUID
                cmd.ExecuteNonQuery()
            End If
        Next
    End If
Next
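If the database weren't off-limits, I'd wrap pr_AddGroup so it hands the new GUID back; since GroupID is a uniqueidentifier filled by newid(), @@IDENTITY won't capture it anyway (it only tracks identity columns). This is purely a hypothetical sketch of what I mean (the procedure name and OUTPUT parameter are my invention), not something I can actually deploy:

-- Hypothetical wrapper; generates the GUID up front so the caller
-- can reuse it as @ParentGroup for the subgroup inserts
CREATE PROCEDURE pr_AddGroupWithOutput
    @GroupName   nvarchar(50),
    @GroupType   tinyint,
    @ParentGroup uniqueidentifier = NULL,
    @NewGroupID  uniqueidentifier OUTPUT
AS
BEGIN
    SET @NewGroupID = NEWID()
    INSERT INTO Groups (GroupID, GroupName, GroupType, ParentGroupID)
    VALUES (@NewGroupID, @GroupName, @GroupType, @ParentGroup)
END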
I've spent the last 5 hours figuring out this problem, so ANY help is appreciated :-)
I have a Perl program that is looping through a hash of hashes. I need to update any existing records but also insert any new records into the table, using the data collected in the hash.
Life would be very simple if it were possible to use a WHERE clause in an INSERT statement, but that does not work.
Here is some example code from my program:

sub Test {
    foreach my $table (keys %$HoH) {
        foreach my $field (keys %{ $HoH->{$table} }) {
            if ($table eq "CPU") {
                my $CPUstatement = "INSERT INTO CPU (CPUNumber, Name, MaxClockSpeed, SystemNetName) "
                    . "VALUES ('$field', '$HoH->{CPU}{$field}{Name}', "
                    . "'$HoH->{CPU}{$field}{MaxClockSpeed}', '$HoH->{Host}{SystemNetName}')";
                print "$CPUstatement";
                if ($db->Sql($CPUstatement)) {
                    print "Error on SQL Statement";
                    Win32::ODBC::DumpError();
                }
                else {
                    print "successful";
                }
            }
        }
    }
}
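What I'm really after is an upsert. In T-SQL I imagine it would look something like this sketch, assuming (and this is my assumption) that CPUNumber plus SystemNetName uniquely identify a row:

-- Hedged upsert sketch; the uniqueness assumption above is mine
IF EXISTS (SELECT 1 FROM CPU
           WHERE CPUNumber = @CPUNumber AND SystemNetName = @SystemNetName)
    UPDATE CPU
    SET Name = @Name, MaxClockSpeed = @MaxClockSpeed
    WHERE CPUNumber = @CPUNumber AND SystemNetName = @SystemNetName
ELSE
    INSERT INTO CPU (CPUNumber, Name, MaxClockSpeed, SystemNetName)
    VALUES (@CPUNumber, @Name, @MaxClockSpeed, @SystemNetName)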
I had a problem today where I could not see column names and all tables had a _1 after them when viewing a SQL Server view in Enterprise Manager. E.g. the table Company, when added to the view, would be named Company_1, and the only column available was * (All Columns). After looking through the newsgroups I saw several occurrences of this problem but no answers that gave a fix.

After some investigation I found that it is caused when the database name in SQL Server has a . in it!

Test1 >> fine, the view designer works fine
Test1.6 >> problems as listed above

I don't see why Enterprise Manager allows database names with .'s if it is going to create such problems.
There is something very strange going on here. Tested with ADO 2.7 and MSDE/2000. At first, things look quite sensible.

You have a simple SQL query, let's say

select * from mytab where col1 = 1234

Now, let's write a simple VB program to do this query back to an MSDE/2000 database on our local machine. Effectively, we'll

rs.open sSQL
rs.close

and do that 1,000 times. We won't bother fetching the result set, it isn't important in this example.

No problem. On my machine this takes around 1.6 seconds, and modifying the code so that the column value in the where clause changes each time (i.e. col1 = nnnn) doesn't make a substantial difference to this time. Well, that all seems reasonable, so moving right along...

Now we do it with a stored procedure:

create procedure proctest(@id int)
as
select * from mytab where col1 = @id

and we now find that executing

proctest nnnn

1,000 times takes around 1.6 seconds whether or not the argument changes. So far so good. No obvious saving, but then we wouldn't expect any. The query is very simple, after all.

Well, get to the point!

Now create a table-returning UDF:

create function functest(@id int) returns table as
return (select * from mytab where col1 = @id)

Try calling that 1,000 times as

select * from functest(nnnn)

and we get around 5.5 seconds on my machine if the argument changes, otherwise 1.6 seconds if it remains the same for each call.

Hmm, looks like the query plan is discarded if the argument changes. Well, that's fair enough I guess. UDFs might well be more expensive... gotta be careful about using them. It's odd that discarding the query plan seems to be SO expensive, but hey, waddya expect? (Perhaps the UDF is completely rebuilt, who knows.)

Last test, then. Create an SP that calls the UDF:

create procedure proctest1(@id int)
as
select * from functest(@id)

Ok, here's the $64,000 question. How long will this take if @id changes each time? The raw UDF took 5.5 seconds, remember, so this should be slightly slower.

But... IT IS NOT. It takes 1.6 seconds whether or not @id changes. Somehow, the UDF becomes FOUR TIMES more efficient when wrapped in an SP.

My theory, which I stress is not entirely scientific, goes something like this: I deduce that SQL Server decides to reuse the query plan in this circumstance but does NOT when the UDF is called directly. This is counter-intuitive, but it may be because SQL Server's query parser is tuned for conventional SQL, i.e. it can say: well, I've got

select * from mytab WHERE [something or other]

and now I've got

select * from mytab WHERE [something else]

so I can probably re-use the query plan from last time. (I don't know if it is this clever, but it does seem to know when two textually-different queries have some degree of commonality.)

Whereas with

select * from UDF(arg1)
select * from UDF(arg2)

it goes... hmm, mebbe not... I better not risk it.

But with

sp_something arg1
sp_something arg2

it goes... yup, I'll just go call it... and because the SP was already compiled, the internal call to the UDF already has a query plan.

Anyway, that's the theory. For more complex UDFs, by the way, the performance increase can be a lot more substantial. On a big complex UDF with a bunch of joins, I measured a tenfold increase in performance just by wrapping it in an SP, as above.

Obviously, wrapping a UDF in an SP isn't generally a good thing; the idea of UDFs is to allow the column list and where clause to filter the rowset of the UDF, but if you are repeatedly calling the UDF with the same where clause and column list, this will make it a *lot* faster.
SQL Server 2005 is installed on a brand new 64-bit server (Windows 2003 x64 Standard Edition, 2.4 GHz AMD Opteron, 2 CPUs, 8.8 GB of RAM). There are barely a few hundred rows of data scattered among a few tables in one database.
SQL Server and SSIS performance grossly degrades overnight, and in the morning everything is slow, including clicking toolbar selections. It takes 3 seconds to execute a simple select statement against an empty table.
It takes 15-20 seconds to execute an SSIS package that normally would take 2-3 seconds.
But once SQL Server is restarted, everything returns to normal and the performance is good all day and then the next day everything is slow again.
I feel like the SSIS encryption model has a serious flaw, especially when linked to SQL Agent jobs.
I have posted, and others have posted, messages about this. Something is plain wrong with SSIS encryption keys and password protection. Also, you do not have the choice not to protect the packages. In my case, protecting packages is completely useless.
I created config files for all my packages' connection passwords.
Now, per our IT policy, I had to change my password again, and of course all packages now return multiple errors when I open them.
Fortunately, the config files did their job and the packages are run anyway by SQL Agent; however, having to manually retype and resave all packages just to clear the errors is a plain hassle. Not to mention people not using the config files and the correct "Run As" SQL Agent account.
I stress the fact that in a real world production environment all packages are driven by SQL Agent jobs and MUST run automatically.
Here is the error I get after opening a package after changing my password:
Error 1: Error loading Constants05.dtsx: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. c:\projects\ssis packages\ssis constants\Constants05.dtsx 1 1
So why isn't this key automatically adjusted after a Windows NT domain password change?
How can I refresh the key so I don't have to retype all the packages' connection passwords and rebuild and check in all the stuff again?
I do not think the solution is "use an application account whose password never changes when you create your SSIS packages", but at this time this is the only solution I can think of.
How do you guys deal with this problem?
I still do not understand the SSIS security model; I feel it is disconnected from reality and impracticable in a production environment like mine.
I'm importing a csv file delimited with semicolons. First I LTRIM the columns "in place" and the data imports fine, all the numbers in the right columns in the target table. Then I add another Derived Column transformation to replace the decimal comma (,) with a dot (.) in order to convert the string/varchar value to numeric. But here I run into trouble. Running the task ends in success, but the result in the target table (same as above) is not right. All the commas are now dots as expected, but what is worse is that SSIS has added values in cells that should not be there. I get values in cells that should be empty!
In short: only LTRIM([Column1]) as the expression, with "Derived Column" set to Replace 'Column1', works OK.
But adding a REPLACE expression (i.e. REPLACE(LTRIM([Column1]), ",", ".")) to this breaks things.
I'm aware that I could do this with SQL but this is not the point...
Hi, I have SQL Server 2005 Standard Edition on my system and was wondering what the difference is between Standard Edition and Developer Edition, and which one is better. Most of what I do on SQL Server is write sprocs, create tables, etc. Any ideas will be appreciated. Regards, Karen
We are experiencing a major issue since upgrading from SQL 2000 to SQL 2005 over the weekend. Starting today, it appears that the performance of SQL Server reaches a limit every 15 minutes.
Our configuration is as follows:
Windows Server 2003 x64 Enterprise
SQL Server 2005 x64 Enterprise
HP DL585 with 4 dual core Opterons
32 GB of RAM
2 TB EMC SAN
At first, I thought there was a memory pressure problem, since I had the default max memory setting. After changing the max memory to only 25 GB (out of 32 available), the issue went away temporarily. However, after 15-20 minutes, the number of batches/sec dropped in half, and remained at half until I changed the max memory setting again. Over the course of the day, I was able to fix the issue each time by just changing the max memory by 1 MB (from 30,000 to 29,999 and back from 29,999 to 30,000). Each time, the batches/sec counter immediately doubles and remains there for about 15-20 minutes. None of the SQL statements have changed since upgrading.
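For reference, this is how I've been nudging the setting (sp_configure values are in MB):

-- Flip max server memory by 1 MB to trigger the temporary fix
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 29999;
RECONFIGURE;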
I have found this post, which talks about a similar issue at the end of the thread: