We have two databases on our DW server, and the autogrowth values for both the data and log files on both of these databases change about once a month. The data file autogrowth value changes from 10 MB to a percentage value between 3200 and 6400, and the log file value changes from 10 percent to a percentage value between 3200 and 6400, resulting in huge files that fill the drive.
Since prices obviously change over time, I was wondering what the best table schema is to reflect these changes while still retaining previous price values (e.g., for generating reports on previous sales...).
Is it better to include a "Price SMALLMONEY" field in the purchases table (which kind of de-normalizes it), or is it better to have a separate ProductPrice table that keeps track of changing prices, like so:
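Something like the following is what I had in mind for the separate table (a rough sketch; table and column names are just illustrative):

-- Hypothetical effective-dated price history table
CREATE TABLE ProductPrice (
    ProductID     INT        NOT NULL,
    Price         SMALLMONEY NOT NULL,
    EffectiveFrom DATETIME   NOT NULL,
    EffectiveTo   DATETIME   NULL,          -- NULL = current price
    CONSTRAINT PK_ProductPrice PRIMARY KEY (ProductID, EffectiveFrom)
);

-- Price in effect at the time of a given purchase
SELECT pu.PurchaseID, pp.Price
FROM Purchases AS pu
JOIN ProductPrice AS pp
  ON pp.ProductID = pu.ProductID
 AND pu.PurchaseDate >= pp.EffectiveFrom
 AND (pp.EffectiveTo IS NULL OR pu.PurchaseDate < pp.EffectiveTo);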
I have the problem of dynamically changing a field value in a row depending on the value of the same field in the previous row, assuming the table is sorted on that field. :( You can consider that field a key of sorts, since the other values in the table do not help in selecting rows uniquely. Example (before -> after): 130 -> 130, 130 -> 130A, 140 -> 140, 140 -> 140A.
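A sketch of the kind of update I have in mind, assuming the column is character data and using ROW_NUMBER to spot the duplicates (table and column names are made up, and I have not tested this):

;WITH numbered AS (
    SELECT KeyField,
           ROW_NUMBER() OVER (PARTITION BY KeyField ORDER BY (SELECT 0)) AS rn
    FROM dbo.MyTable
)
UPDATE numbered
SET KeyField = KeyField + CHAR(CAST(rn AS int) + 63)   -- rn = 2 -> 'A', rn = 3 -> 'B', ...
WHERE rn > 1;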
How can I change the default value of a column? I already have a column named DateOfRental, but I want to alter it so that it has a default value of getdate(). Thanks, David
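I think it is something along these lines, assuming the table is called Rentals (not sure about the exact syntax, and any existing default constraint on the column would have to be dropped first):

ALTER TABLE dbo.Rentals
    ADD CONSTRAINT DF_Rentals_DateOfRental DEFAULT (GETDATE()) FOR DateOfRental;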
I have had a look through the forum and I can't seem to find how to do this... I have a vague recollection of achieving this before, but it has been many years since I've done any SQL.
What I want to be able to do is change the status to something common if it is not 'Completed'. I don't want to update the database, though; I just want to return it in a SELECT statement. Is this possible?
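Something like this is what I'm after, if a CASE expression in the SELECT is the right way to do it (table and column names are invented):

SELECT OrderID,
       CASE WHEN Status <> 'Completed' THEN 'In Progress' ELSE Status END AS Status
FROM dbo.Orders;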
I import a flat file from a legacy system, and then convert it into a single table. That works simply enough.
Then I have an SP that queries that table using a parameter for an accountID. My business tier calls that SP and returns the results to the calling tier (my web application). Easy enough...
Now for the question. The people who created the flat file (written in COBOL) decided to use "codes" to represent data. So, for instance, if I'm looking for the account plan, I can expect to see characters like ], or [, or +, etc... These characters have a special meaning, like:
] = Plan A [ = Plan B + = Plan C, and so on.
Currently, the web application displays those characters, but I want it to display the actual plan name. Is there a way that when I execute the SP, the SP could pull the necessary records, and whenever it encounters a certain "plan" character, it could convert it into a "readable" name? Say that it sees that the plan_type field has a value of "]" for twenty records, so it converts those twenty records' plan_type value from "]" into "Plan A"? I'm not sure if I can do that, but I want to at least evaluate the option if I can.
I've evaluated other options, like using a CASE statement in my code, but I shot that down quickly...for obvious reasons. I don't wanna be changing my web application or business tier each time these guys update a plan name, or add a new one, delete an existing one, etc...
I've also thought about creating a dictionary table that contains the plan's code and its name, and then just INNER JOINing the first table with the dict table. This would keep my SP very simple (it's very straightforward right now, and I like that). That way, if a plan name is ever changed, or a new one is added, I simply update the dict table with a simple query. However, if my SP is doing the conversion, I could just as easily update the SP.
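A rough sketch of what I mean by the dict table approach (table and column names are illustrative only):

CREATE TABLE dbo.PlanCode (
    PlanCode CHAR(1)      NOT NULL PRIMARY KEY,   -- ']', '[', '+', ...
    PlanName NVARCHAR(50) NOT NULL                -- 'Plan A', 'Plan B', ...
);

SELECT a.AccountID,
       ISNULL(p.PlanName, a.plan_type) AS PlanName   -- fall back to the raw code if unmapped
FROM dbo.Accounts AS a
LEFT JOIN dbo.PlanCode AS p
    ON p.PlanCode = a.plan_type;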
Either of these methods would work for me, and I *do* know how to do the latter (dict table). However, there are quite a few other fields that I may have to do this for. I believe when I left for the day on Friday, my last count was 14 fields total that needed translation. That would mean 14 different dict tables! That could certainly affect my SP performance with all those INNER JOINS!
Therefore, I'm certainly interested in figuring out if it's possible to do the former method (SP), and then I shall decide which method is best for my situation.
Feel free to include your thoughts on which process you think is better as well. I'm really riding the fence with this one. However, if I can't find out how to change field values in my SP, then obviously I'll make a decision very quickly...
I am testing an MSDE 2000 SP4 database (I would like to use this against SQL Server 2005 later). I need to turn the autogrowth property on and check that the application alerts the user that it is on, and then turn it off and verify that the user is alerted that it was set off.
I am not sure if this will work for mydb and mydblog:

--turn off autogrowth
USE master
GO
ALTER DATABASE mydb MODIFY FILE (NAME = mydb, FILEGROWTH = 0MB)
GO

--turn on autogrowth
USE master
GO
ALTER DATABASE mydb MODIFY FILE (NAME = mydb, FILEGROWTH = 10MB)
GO

USE master
GO
ALTER DATABASE mydblog MODIFY FILE (NAME = mydblog, FILEGROWTH = 0MB)
GO

--turn on autogrowth
USE master
GO
ALTER DATABASE mydblog MODIFY FILE (NAME = mydblog, FILEGROWTH = 10MB)
GO
Also, I need to know the T-SQL for setting unrestricted growth on and off, but I cannot seem to find that.
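Something like the following is what I'm guessing for MAXSIZE, but I'd like confirmation (file names as above, untested):

USE master
GO
-- cap the data file (restricted growth)
ALTER DATABASE mydb MODIFY FILE (NAME = mydb, MAXSIZE = 500MB)
GO
-- remove the cap (unrestricted growth)
ALTER DATABASE mydb MODIFY FILE (NAME = mydb, MAXSIZE = UNLIMITED)
GO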
Any help would be great.
We cannot use Enterprise Manager, so I'm planning to use osql through a command prompt.
Hello, I have a table, and when a record is inserted I want to test a numeric value in this table. If this value is greater than 1 million, then a status column should be changed from 'A' to 'B'. Yes, and sorry, this is a newbie question. On Oracle this works simply:

create trigger myTrigger on tableX
as
begin
  if :old.x > 100000 then
    :new.y := 'B';
  end if;
end;

Thanks, Maik
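My rough attempt at the SQL Server equivalent, assuming a key column exists to join the inserted pseudo-table back to the base table (all names are guesses):

CREATE TRIGGER trg_tableX_SetStatus
ON dbo.tableX
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.y = 'B'
    FROM dbo.tableX AS t
    JOIN inserted AS i ON i.PrimaryKeyCol = t.PrimaryKeyCol   -- key column assumed
    WHERE i.x > 1000000;
END;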
Has anyone seen this issue before? We are running a SQL CE 3.5 database on a windows desktop. A couple of our tables have ntext fields. When we do an insert the statement updates the value for all rows, not just the one that was added. I can easily repro this with some of the online samples too. Try the following:
SqlCeConnection conn = new SqlCeConnection(_sConn);
After the second execution the blob column in both rows will have the value 'Name2 Memo'.
This is obviously a huge problem for us, and I would appreciate it if someone could explain what is happening. It seems like a bug, but I would like to be certain before I go the support route.
I'm working with merge replication between Sql Server 2005 and Sql Server 2005 Mobile. I'm using dynamic filtering by function HOST_NAME (... where Table.FilteredColumn = HOST_NAME())
When I try to change values in the filtered column on the client side (Table.FilteredColumn), I get an error message saying that the column is read-only.
How can I change data in filtered column? Is this possible?
I have a SQL 2005 DB whose MDF file is growing at a rate of 1 GB per day. I currently have it set to unrestricted growth in 500 MB increments. Should I increase that growth increment to 1 GB? What would the impact of this change be? What are best practices when it comes to setting up autogrowth for MDF and LDF files?
I have a report I want to modify and when I go to deploy it I get:
Changing the report parameters or data sources to the values you specified is not allowed. The report is configured to run unattended. Using the specified values would prevent the unattended processing of this report
Now I understand the error: it is because, in the report server's web management interface, I pointed the report at a custom data source and supplied stored credentials so that the report can run unattended.
However, I don't want to have to reconfigure that just to deploy a slightly modified version of the report.
How can I get around this? Or what is the proper workaround? I just want to change the background color from dark gray to light gray on a header column....
I have an Excel spreadsheet (Excel 2003 format) with a column of values. The column is formatted as General (although I've also tried it formatted as Text). The values are alphanumeric; however, there are a few values that are completely numeric.
Example:
_ITEMNUMBER_
1-528
K214-5
184PR
45678
As can be seen, the last value is completely numeric. I am importing into a table with one column that is formatted as nvarchar(50).
I use dtswizard.exe to do a simple import into the table. The alphanumeric values import just fine, but any time a value is completely numeric it comes in as NULL instead of the value that should be there.
I've tried changing the format of the column to nchar, char, etc. It always comes in NULL.
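One thing I'm considering trying, based on the idea that the Excel driver guesses the column type from the first rows and that IMEX=1 forces mixed columns to be read as text. The file path and sheet name below are placeholders, and 'Ad Hoc Distributed Queries' would need to be enabled for OPENROWSET:

-- IMEX=1 tells the driver to treat mixed-type columns as text instead of guessing numeric
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Data\Items.xls;HDR=YES;IMEX=1',
                'SELECT * FROM [Sheet1$]');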
Is it possible to change the Autogrowth option of a database if none is set? I received an alert saying that one of the databases has 39.9% free space. Having checked the properties of the database, I noticed that the Autogrowth option had not been used.
Whenever we restart the services on this one SQL Server 2005 instance, the database autogrowth changes to grow by 2500%. We have to manually change the autogrowth of the data file back to some sane number. Has anyone faced this issue? We tried applying SP2, but even that doesn't help. Any help would be appreciated.
So I need to write a script to look for available free space percentage in my databases, but I only want it to look at capped files. We consider a file with autogrowth off as capped for our purposes.
This is my problem: in sys.database_files and sys.master_files, if I have autogrowth off then max_size is -1, which is the same value as unlimited growth.
I cannot find another setting anywhere to determine how SQL Server recognizes that a particular file is set to not allow autogrowth. Any setting in a DMV anywhere where I can see whether autogrowth is disabled or not?
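The closest I've come is something like this, treating growth = 0 in the catalog view as "autogrow off" and a non-negative max_size as an explicit cap (max_size is in 8 KB pages); I'd appreciate confirmation that this reading is right:

SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       size * 8 / 1024      AS size_mb,
       CASE WHEN growth = 0   THEN 'autogrow off'
            WHEN max_size >= 0 THEN 'max size set'
       END                  AS cap_type
FROM sys.master_files
WHERE growth = 0
   OR max_size >= 0;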
SQL2K5 SP1. The autogrowth setting on my database's primary filegroup data file keeps getting set to 12800%, even though it was originally set to 100MB, every time the service is restarted. The same thing occurs whenever I restore a backup of this database in our development environment. WTF? This issue does not happen to the other filegroups, only on the primary data file. Whenever this happens the 4GB data file grows to more than 70GB (even the math is incorrect), with about 95% of the space unused. Has anyone else come across this BS, or does anyone know how to prevent it from happening? Is MS aware of this not-so-funny joke? Thanks
Today we received an issue on an application database: the internal free space on the DB is 0%. The database was designed as below:
name     fileid  filename                                    filegroup  size         maxsize        growth     usage
XX       1       I:\Data\MSSQL.1\MSSQL\Data\New XX.mdf       PRIMARY    68140032 KB  Unlimited      0 KB       data only
XX_log   2       I:\Data\MSSQL.1\MSSQL\Data\New XX_log.LDF   NULL       1050112 KB   2147483648 KB  102400 KB  log only
XX_2     3       I:\Data\MSSQL.1\MSSQL\Data\New XX_2.ndf     PRIMARY    15458304 KB  Unlimited      0 KB       data only
XX_3     4       I:\Data\MSSQL.1\MSSQL\Data\New XX_3.ndf     PRIMARY    13186048 KB  Unlimited      0 KB       data only
XX_4     5       I:\Data\MSSQL.1\MSSQL\Data\New XX_4.ndf     PRIMARY    19570688 KB  Unlimited      204800 KB  data only
XX_5     6       I:\Data\MSSQL.1\MSSQL\Data\New XX_5.ndf     PRIMARY    19591168 KB  Unlimited      204800 KB  data only
Two of the secondary data files had autogrowth enabled (unrestricted, in 200 MB increments), and three of the data files, including the primary, had autogrowth turned OFF. The application users are complaining that there is no internal free space on the DB.
What we fail to understand is this: when autogrowth was already turned OFF on three data files (one primary and two secondary), why was the application still trying to increase the space on those .mdf and .ndf files? And given that autogrowth is turned ON on two of the secondary data files, why was the DB not able to expand into those files instead?
What more data do I need to gather so I can submit an analysis of this?
How do I count the number of values that exist in a row based on the values from an array of numbers? Basically, the array of numbers I want to look for is in row 1 of table [test 1], and I want to search for those numbers and count how many of them are found ("x out of 10") in each row of table [test 2]. Excuse me for not using the easiest way to convey my question below. In short, I have 10 numbers and would like to find how many of those numbers exist in each row. Short example:
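Roughly what I'm trying to express, assuming [test 2] has numeric columns n1..n5 and the search numbers sit in their own table (all names are invented):

SELECT t.RowID, m.matches
FROM dbo.[test 2] AS t
CROSS APPLY (SELECT COUNT(*) AS matches
             FROM (VALUES (t.n1), (t.n2), (t.n3), (t.n4), (t.n5)) AS v(n)
             WHERE v.n IN (SELECT num FROM dbo.SearchNumbers)) AS m;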
I am trying to think my way through a solution which I believe others have probably come across... I am trying to implement a matching routine wherein I need to match an address against a high value and a low value (or, for that matter an input date vs. a start and end date) to return the desired row ... i.e. if I were to use a straight vb program I would just use the following lookup:
" WHERE zip_code = @zip_code AND addr_prim_lo <= @street_number AND addr_prim_hi >= @street_number " & _
" AND addr_prim_oe = @addr_prim_oe AND street_pre = @street_pre AND street_name = @street_name " & _
" AND street_suff = @street_suff AND street_post = @street_post " & _
" AND (expiry_date = '' OR expiry_date = '00000000' OR expiry_date > @expiry_date)" & _
" GROUP BY fire_ID, police_ID, fire_opt_in_out, police_opt_in_out"
My question, then, is how would you perform this type of query using a Lookup / Merge Join or script component? I have not found a way to set up the input columns for this. I can set the straight matches without a problem, i.e. lookup zip code = input zip code, but I can't think of the correct way to set up range comparisons, i.e. lookup value 1 <= input value AND lookup value 2 >= input value.
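If it helps, this is the plain T-SQL shape of the match I'm trying to reproduce in the data flow (table names are invented; the column names are carried over from the snippet above):

SELECT i.input_id, r.fire_ID, r.police_ID
FROM dbo.InputAddresses AS i
JOIN dbo.AddressRanges  AS r
  ON r.zip_code     = i.zip_code
 AND r.addr_prim_oe = i.addr_prim_oe
 AND r.street_name  = i.street_name
 AND i.street_number BETWEEN r.addr_prim_lo AND r.addr_prim_hi;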
I have a DTSX package which reads values from a fixed-length text file using a data reader and writes some of the column values from the file to an Oracle table. We have used this DTSX several times without incident but recently the process started inserting NULL values for some of the columns when there was a valid value in the source file. If we extract some of the rows from the source file into a smaller file (i.e 10 rows which incorrectly returned NULLs) and run them through the same package they write the correct values to the table, but running the complete file again results in the NULL values error. As well, if we rerun the same file multiple times the incidence of NULL values varies slightly and does not always seem to impact the same rows. I tried outputting data to a log file to see if I can determine what happens and no error messages are returned but it seems to be the case that the NULL values occur after pulling in the data via a Data Reader. Has anyone seen anything like this before or does anyone have a suggestion on how to try and get some additional debugging information around this error?
I am working with a data set containing several years' of monetary values. I have entries for past dates and the associated values, and I also have entries for future dates. I need to populate the values of the future date records with the values from the same date the previous year. Is there any way this can be done in Power Pivot?
I have to use the above comma-separated values in a SQL search query against a column whose datatype is integer. How would I do this search with the IN operator in SQL Server? My query is:
declare @id varchar(50)
set @id = '3,4,6,7'
set @id = (select replace(@id, '''', ''))
-- in below select query Id is of Integer datatype
select * from ehsservice where id in (@id)
But this query throws following error message:
Conversion failed when converting the varchar value '3,4,6,7' to data type int.
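One approach I'm looking at is splitting the string into rows first and using the result with IN; this assumes SQL Server 2016+ for STRING_SPLIT (older versions would need a user-defined split function instead):

DECLARE @id varchar(50) = '3,4,6,7';

SELECT *
FROM ehsservice
WHERE id IN (SELECT CAST(value AS int)
             FROM STRING_SPLIT(@id, ','));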
I have my stored procedure set to Territory_code IN (@Territory). Now, how do I enter more than one value? When I select the multi-value check box, it gives me more spaces, but then it doesn't recognize the values when I put in more than one. Am I doing something wrong?
I receive an input file with some 100 columns and some 20k+ rows, and I want to check whether each incoming input row already exists in the database based on 2 key columns. If the row exists, then I need to check whether all the column values (nearly 100 columns) in the input and the database are equal. If they are all equal I handle the row one way; if not, there is separate logic. How can I do that check for each row and for each column?
Basically, the algorithm is like this: if the input file row does not exist in the database, treat it as a new row; if the input row does exist in the database, check whether all the columns are equal. If all the columns are equal, treat it as an existing row and do nothing; if some columns are not equal, treat the row separately.
I found something to achieve this: 1. Take the input row and check it against the database. 2. If the row is not found in the database, treat it as a new row. 3. If the row is found in the database, then a) take the source row and prepare a concatenated string of all the columns, b) take the database row and prepare a concatenated string of all the columns, c) compute the hash code of the two strings and compare the hash codes for equality.
The disadvantage of this is running a loop 2*m*n times, where m is the number of rows and n is the number of columns; the concatenation has to be done twice, once for the input file row and once for the database row.
Can anybody suggest a good method to do this?
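One idea, assuming the file can be staged into a table first: let SQL Server hash the non-key columns with HASHBYTES and compare the hashes on the join. Note that HASHBYTES input is limited to 8000 bytes before SQL Server 2016, so with ~100 wide columns BINARY_CHECKSUM or chunked hashing might be needed instead. All names below are invented:

SELECT s.Key1, s.Key2,
       CASE
           WHEN d.Key1 IS NULL THEN 'new'
           WHEN HASHBYTES('SHA1', CONCAT(s.Col1, '|', s.Col2, '|', s.Col3))
              = HASHBYTES('SHA1', CONCAT(d.Col1, '|', d.Col2, '|', d.Col3)) THEN 'unchanged'
           ELSE 'changed'
       END AS row_status
FROM dbo.StagingInput AS s
LEFT JOIN dbo.TargetTable AS d
       ON d.Key1 = s.Key1 AND d.Key2 = s.Key2;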
What does the GetHashCode function on the InputBuffer do in the method "Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)"? Will it generate a hash code based on all the column values?
I have a situation in SSRS where I need to get the common values between two columns whose values are stored comma-separated, as below. Ex:
ColumnA : abc,cde,efg ColumnB : cde,xyz,abc
the result in
ColumnC : cde,abc
Similarly, Columns A and B will have n records. I need to write an expression or a custom Code function to get the required result in ColumnC. I am using SharePoint Lists as the data source, so I cannot write a SQL query to achieve this requirement.