I'm running a transformation script that takes decimal(18,10) data and tries to shoehorn it into a numeric(9,6). Generally this works, since most of the data in the original table doesn't use anywhere near the precision it's capable of, but once in a while I run into a value that does use it.
Is there any way to automagically reduce the precision so that I can cram the data into the destination table?
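A minimal sketch of one approach, assuming a source column named `val` of type decimal(18,10) (the names are illustrative, not from the original script). Rounding handles the scale reduction; values with more than 3 integer digits still overflow numeric(9,6), so those have to be excluded or handled separately:

Code:
SELECT CAST(ROUND(val, 6) AS numeric(9,6)) AS val
FROM source_table
-- numeric(9,6) holds at most 999.999999, so filter out what can't fit:
WHERE ABS(ROUND(val, 6)) < 1000;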
Hi - just found this site yesterday. Does anyone have experience converting a numeric(11,2) column to a 'mainframe' format with a 'trailing sign overpunch'? Any old mainframe programmers will recognize this: there is no punctuation (no -, +, comma, or period), and if the sign is negative, the last character is } for 0, J for 1, K for 2, etc.
I wrote the following expression to convert the 'openretl' column, but is there some other way?

Code:
oprtl = CASE
    WHEN openretl >= 0 THEN STR(ABS(openretl * 100), 11, 0)
    WHEN openretl < 0 THEN
        SUBSTRING(STR(ABS(openretl * 100), 11, 0), 1, 10) +
        CASE SUBSTRING(STR(ABS(openretl * 100), 11, 0), 11, 1)
            WHEN '0' THEN '}'
            WHEN '1' THEN 'J'
            WHEN '2' THEN 'K'
            WHEN '3' THEN 'L'
            WHEN '4' THEN 'M'
            WHEN '5' THEN 'N'
            WHEN '6' THEN 'O'
            WHEN '7' THEN 'P'
            WHEN '8' THEN 'Q'
            WHEN '9' THEN 'R'
        END
END,
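One shorter alternative (a sketch, using the same column): treat the last digit as a 1-based index into the string of overpunch characters, since '}JKLMNOPQR' lists them in order (position 1 = digit 0 → '}', position 10 = digit 9 → 'R'):

Code:
oprtl = CASE
    WHEN openretl >= 0 THEN STR(ABS(openretl * 100), 11, 0)
    ELSE SUBSTRING(STR(ABS(openretl * 100), 11, 0), 1, 10)
         + SUBSTRING('}JKLMNOPQR',
                     CAST(SUBSTRING(STR(ABS(openretl * 100), 11, 0), 11, 1) AS int) + 1,
                     1)
END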
I'm trying, through SQL 2005 Mgt Studio, to export a simple table in a delimited format. I'm selecting a double quote as the text qualifier. My expectation was that only text-type fields would be exported with the double quotes and numeric fields would not have any quotes around them. SQL 2000 does this just fine, but 2005 is exporting all my text-type and numeric fields with double quotes. Is this a change in SQL 2005, or am I doing something wrong?
Hi, I have the three tables below containing Resources, Categories, and a link table so each Resource can belong to one or more Categories. I would like to create a view (ResourceID, ResourceName, CategoryID, CategoryName) that includes one row for each Resource with just one of the Categories that it belongs to.

Resource table
- ResourceID
- ResourceName
- etc.

Category table
- CategoryID
- CategoryName
- etc.

ResourceCategory table
- ResourceID
- CategoryID

Can anyone help? Thanks.
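A sketch using the tables described above: pick one arbitrary-but-deterministic category per resource by taking the lowest CategoryID from the link table.

Code:
CREATE VIEW ResourceWithOneCategory AS
SELECT r.ResourceID, r.ResourceName, c.CategoryID, c.CategoryName
FROM Resource r
JOIN (SELECT ResourceID, MIN(CategoryID) AS CategoryID
      FROM ResourceCategory
      GROUP BY ResourceID) rc ON rc.ResourceID = r.ResourceID
JOIN Category c ON c.CategoryID = rc.CategoryID;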
Hi, I have a database field of type "money". But when I retrieve it into a text box, it shows 5.0000, and I want only 5.00 to be shown. How do I format that? Any reply will be much appreciated. :)
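A sketch of two server-side options, assuming a money column named `price` in a table named `products` (both names are illustrative):

Code:
-- Cast to a two-decimal exact type before returning it:
SELECT CAST(price AS decimal(19,2)) AS price FROM products;
-- Or build the display string directly (money style 0 = two decimals):
SELECT CONVERT(varchar(30), price, 0) AS price_text FROM products;

Formatting in the application layer (e.g. `.ToString("0.00")` in .NET) is equally valid; the cast just keeps the query result clean.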
I used DBCC SHRINKFILE to reduce the log file of a database. Query Analyzer says "successfully executed", but the log file doesn't seem to shrink. Am I missing something?
The transaction log of the database I am using has grown to 7 GB (previously the setting was unrestricted file growth; it is now changed to restrict file growth to approximately 7 GB). This 7 GB of space is no longer needed, and I would like to reduce the size to around 2 GB. How can I achieve this? I observed that in the properties I can only increase the size, not decrease it. Also, I am using transactional replication; this server is the publisher to four subscribers.
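A sketch of the usual sequence, assuming a database named MyDb whose log file's logical name is MyDb_log (check sys.database_files or DBCC SHRINKFILE's documentation for the real name). Backing up the log first marks the inactive portion as reusable so the shrink can release it:

Code:
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';
DBCC SHRINKFILE (MyDb_log, 2000);  -- target size in MB
-- Note: with transactional replication, log records are retained until
-- the Log Reader Agent has processed them, so make sure replication is
-- caught up or the shrink may not reclaim much space.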
What I want to do is: 1. insert a row in User; 2. insert a row in Car; 3. using SCOPE_IDENTITY(), insert a new row in user_car that relates the ID of the last added car with the last added user.
But I need that to be done in one single transaction. Once that is done, I have another question:
How can I do the same with a variable number of different cars? In other words, I add 1 user and then add 5 cars, then create 5 rows in user_car, each pairing the ID of the last added user with the ID of one of the last 5 added cars.
How can this be done in one single transaction? I can't do it as 3 separate transactions because that would cause me a lot of trouble. Any help?
My transaction should be something like this:
Code:
insert into user(username) values ('user1');
insert into car(carName) values ('car1');
insert into user_car(???, ???)
but I'm not sure how to do this in one single transaction.
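A sketch covering both cases, assuming identity primary keys and hypothetical column names (userId, carId); note `user` is a reserved word in SQL Server, hence the brackets. SCOPE_IDENTITY() captures the single user ID, and the OUTPUT clause captures all the car IDs at once, which handles the variable-number-of-cars case:

Code:
BEGIN TRANSACTION;

INSERT INTO [user] (username) VALUES ('user1');
DECLARE @userId int;
SET @userId = SCOPE_IDENTITY();

-- Collect the identity value of every inserted car:
DECLARE @newCars TABLE (carId int);
INSERT INTO car (carName)
OUTPUT inserted.carId INTO @newCars
SELECT carName FROM @carsToAdd;  -- hypothetical table variable of new cars

INSERT INTO user_car (userId, carId)
SELECT @userId, carId FROM @newCars;

COMMIT TRANSACTION;

The OUTPUT clause is available from SQL Server 2005 onward; on SQL 2000 you would loop, calling SCOPE_IDENTITY() after each single-row car insert instead.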
Hi, if I understand it correctly, you only need an LDF file to restore to a point in time after the last full backup? If this is so, could the LDF file not be reduced in size on performing a full backup? Most of the time it's not an issue as there is enough space on the HDD, but is it possible to reduce the LDF file size periodically (manually would be fine)? Is changing the recovery model from FULL to SIMPLE and then back to FULL an option? If so, is anyone able to tell me how, exactly, I can do this? ... I've sifted through the documentation to no avail ... :eek: Many thanks, Rob
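A sketch of the SIMPLE/FULL switch being asked about, assuming a database named MyDb with a log file logically named MyDb_log (both names are placeholders). The important caveat: switching to SIMPLE breaks the log backup chain, so a full (or differential) backup must be taken immediately after returning to FULL, or point-in-time restores won't be possible:

Code:
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDb_log, 500);   -- target size in MB
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_full.bak';  -- restart the chain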
Hi! I'm quite new to SQL Server. I need a float column to display something like 3.55. However, all values stored in the float column are being truncated to 4 or some other single digit. How can this be prevented?
I am sure this is a newbie question, as I am new to Microsoft SQL Server, but any help is greatly appreciated. I am populating a SQL database from an AS400. The decimal numbers from the AS400 are coming across with extra decimals (i.e., 63.02 is coming across as 63.0200001).
Is there a way to limit the number of decimals in a float or real field, or a SQL command I can put in a script to truncate each field to 2 decimal places after they are populated?
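A sketch of two options, assuming a table named `mytable` with a float column named `price` (names are illustrative). The 63.0200001 artifact happens because float cannot represent most decimal fractions exactly, so the robust fix is an exact numeric type rather than repeated rounding:

Code:
-- Option 1: change the column to an exact type matching the AS400 data:
ALTER TABLE mytable ALTER COLUMN price decimal(18,2);

-- Option 2: round at query time without changing the schema:
SELECT CAST(price AS decimal(18,2)) AS price FROM mytable;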
I have been trying to develop an automatic way of programmatically accessing data sources and performing some predefined (supported) processing on them.
The question I would like to ask you has to do with numeric fields. What exactly is precision? Is it the maximum length in digits of a field, or is there more to it? And what about a field's scale — what is it, and how does it affect how a field's values are handled?
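In short: precision is the total number of significant digits a numeric field can hold, and scale is how many of those digits fall to the right of the decimal point. A small illustrative sketch:

Code:
-- decimal(9,3): 9 digits total, 3 of them after the decimal point,
-- so the range is -999999.999 to 999999.999.
DECLARE @d decimal(9,3);
SET @d = 123456.789;   -- fits: 6 integer digits + 3 fractional digits
-- SET @d = 1000000.0; -- would fail with an arithmetic overflow error
SELECT @d;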
I have a table with a 'quantity' column (decimal 9,3) and a 'price' column (decimal 9,3). I have a third column, 'amount', with a formula of (price * quantity). The formula gives the correct answer, but the scale of the result is automatically set to 5. Is there any way to set the scale of the result to 2?
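A sketch: wrap the computed-column expression in a CAST so the result is stored with the scale you want (the table name `orders` is a placeholder for the poster's table):

Code:
ALTER TABLE orders
ADD amount AS CAST(price * quantity AS decimal(18,2));

SQL Server derives the result precision/scale of a multiplication from its operands (which is why decimal(9,3) * decimal(9,3) ends up with more fractional digits); an explicit CAST overrides that derived type.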
I am using ASP and SQL 2005 Express. I am inserting a timestamp from an ASP page using <%=now%> into a smalldatetime field. All of my timestamps are appearing without any seconds (e.g., 1/21/2008 4:02:00 PM or 1/18/2008 11:32:00 AM). When I view the source for my page it shows the date/time as 1/21/2008 4:27:31 PM, but for some reason the seconds are converted to 1/21/2008 4:27:00 PM. How do I get a more precise timestamp? Please help.
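The cause is the column type: smalldatetime is only accurate to the minute, so the seconds are rounded away on insert. A sketch of the fix, assuming a table named `log` with a column named `stamp` (placeholder names):

Code:
-- datetime keeps seconds (accurate to ~3.33 ms):
ALTER TABLE log ALTER COLUMN stamp datetime;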
Exception from HRESULT: 0xC0204018 (Microsoft.SqlServer.DTSPipelineWrap)
The only solution I found is to use the DataReader Source.
If I use the DataReader Source everything works fine — I mean, I am able to see the records and convert them to the desired data type (using the Data Conversion component).
My question is what component I should use as the destination, because if I use the OLE DB Destination I get a red cross on the component, although I can map all the columns.
Hi, I have a database that becomes too big after a few days. Is there a way to say: if a table reaches a certain number of rows or a certain size, delete (or better, archive) the oldest entries?
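There's no built-in trigger for "table too big", but a scheduled job can do it. A sketch, assuming a table named `events` with a `createdAt` column and an `events_archive` table of identical structure (all names hypothetical); the OUTPUT clause archives and deletes in one statement:

Code:
IF (SELECT COUNT(*) FROM events) > 1000000
BEGIN
    -- Move rows older than 30 days into the archive, in batches:
    DELETE TOP (10000) FROM events
    OUTPUT deleted.* INTO events_archive
    WHERE createdAt < DATEADD(day, -30, GETDATE());
END;

Scheduling this as a SQL Server Agent job (e.g. nightly) keeps the table bounded; the OUTPUT ... INTO syntax requires SQL Server 2005 or later.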
I have started to look at the way our production DB has been defined and set up, with the view to improving performance.
The DB is now 11 GB, and the original size was set at 3000 MB; the rest has been taken in 10% additional extents.
Now, back in my DB2 DBA days, it was a bad thing to have data spread across extents, as they may not be contiguous. I am assuming the same is true of SQL Server. Can someone confirm/deny this?
If this is the case, how can I get the DB back into one primary partition?
I have a transaction log of size 1 GB in a database (the space was allocated during creation of the database). Currently only 300 MB of it is used, i.e. nearly 700 MB is free. If I want to reduce the physical file size of the transaction log by 200 MB and release that space to the operating system, how can I do it?
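A sketch, assuming the log file's logical name is MyDb_log (a placeholder — the real name can be looked up in sysfiles/sys.database_files). Shrinking a 1 GB log by 200 MB means a target of roughly 800 MB:

Code:
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 800);  -- target size in MB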
I have inherited a number of databases which were substantially oversized when they were set up. I'd like to reduce both the log and database files to be smaller than their original sizes. What's the easiest way to do this? If anyone has experience of doing this, please reply.
I am new to SQL Server. I recently found that the transaction log of my database has reached 109 MB. How can I reduce it? A transaction log backup is scheduled daily at 12:00 noon and a full backup monthly.
I have a production database of a size of 70 GB. Half of the data was archived and deleted from the current database. What is the best way to reduce the size of the database, as we cannot shrink an entire database to be smaller than its original size? Thanks a lot!
I created a few jobs that archive the production DB and delete the archived data, but it looks like the DB size is not reducing! Sometimes it even looks like the size has increased!
I think this is because the log file size has increased from the DELETE operations. But what can I do about this?
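Two things help here. First, deleting in small batches lets the log space be reused between log backups instead of ballooning in one huge transaction; second, deleting rows never shrinks the physical files by itself, so a shrink is still needed afterwards. A sketch, with `bigtable`, its `archivedDate` column, and the @cutoff value all standing in for the real objects:

Code:
DECLARE @cutoff datetime;
SET @cutoff = '2008-01-01';
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM bigtable WHERE archivedDate < @cutoff;
    IF @@ROWCOUNT = 0 BREAK;  -- stop when nothing is left to delete
END;

-- Reclaim file space explicitly once the deletes are done:
DBCC SHRINKDATABASE (MyDb);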
Code:
<root>
  <timesheet empid="1" entryyear="2012" entryPeriod="1" adminnotes="These are the admin notes" empnotes="These are the user notes">
    <project projnum="TestProject" projname="The really big project for our best customer">
      <activity actcode="000103020200302302322" actname="Demolish the 55th story of the main tower">
[Code] ....
Notice how there is a tremendous amount of redundancy in the XML. I was hoping to come up with an XML result of the following, which transmits the same data, without the redundancies.
Code:
<root>
  <timesheet empid="1" entryyear="2012" entryPeriod="1" adminnotes="These are the admin notes" empnotes="These are the user notes">
    <project projnum="TestProject" projname="The really big project for our best customer">
      <activity actcode="000103020200302302322" actname="Demolish the 55th story of the main tower">
        <expenditure expcode="1" expname="Regular Hours">
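One way to get this shape is FOR XML AUTO, which nests each joined table inside the previous one so repeated parent values appear only once. A sketch modeled on the attribute names in the question (the tables, key columns, and join conditions are assumptions about the underlying schema):

Code:
SELECT timesheet.empid, timesheet.entryyear, timesheet.entryPeriod,
       timesheet.adminnotes, timesheet.empnotes,
       project.projnum, project.projname,
       activity.actcode, activity.actname
FROM timesheet
JOIN project  ON project.timesheetId = timesheet.timesheetId
JOIN activity ON activity.projectId  = project.projectId
ORDER BY timesheet.empid, project.projnum
FOR XML AUTO, ROOT('root');
-- Each table in the FROM list becomes one nesting level: <timesheet>
-- wraps its <project> elements, which wrap their <activity> elements,
-- and the columns become attributes.

FOR XML PATH with nested subqueries gives finer control over the shape, at the cost of more code.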
I'm trying to insert all the rows from one table into a new table (insert into A select * from AA). The reads in Profiler show a really high value (10,253,548).
First I created a unique clustered index, and the reads dropped to 3,258,445. Then I created a nonclustered index, expecting even lower reads; instead the reads went back up to 10,253,548.
I have read that creating indexes helps reduce reads, but that's not what is happening. Any ideas what is going on?
I am working on a personal project and am drawing a complete blank (too much celebrating last night?) on the SQL term that is used to eliminate multiples of like data when it is returned from the database. I.e., instead of:

red
blue
red
green

it would return:

red
blue
green

Sorry for the trouble, and thanks.
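The keyword being described is DISTINCT. A minimal sketch, with the table and column names (`shirts`, `color`) invented for illustration:

Code:
SELECT DISTINCT color FROM shirts;
-- one row per distinct value, so red/blue/red/green collapses to
-- red/blue/green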
We currently have an e-commerce app written in .NET with SQL Server backend and built-in CMS that works just fine. We are now implementing a service to remove the need for the CMS by automating the synchronisation of the e-commerce database with a back-office database (non SQL Server). The problem we have run into is that during some of the larger updates to the website (i.e. new product information), the e-commerce system is experiencing timeouts. The synchronisation service uses transactions while performing updates and so I am assuming that the timeouts are being caused by the transactions locking tables and data.
What steps can I take to try to reduce these locks? The transactions are as short as possible so I do not think we can reduce the amount of processing each transaction deals with. I was looking at different isolation modes, Snapshot in particular, to reduce the locks, but would like some advice from someone who may have dealt with this type of situation before I start messing around here. (The synchronisation service uses the default ReadCommitted level, BTW)
Any advice you have to offer will be much appreciated.
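For the snapshot route mentioned above, a sketch of the two database options involved, with `Shop` standing in for the real e-commerce database name:

Code:
-- Makes the default READ COMMITTED level use row versioning, so readers
-- no longer block behind the synchronisation transactions (requires
-- briefly having exclusive access to the database to switch on):
ALTER DATABASE Shop SET READ_COMMITTED_SNAPSHOT ON;

-- Or enable full SNAPSHOT isolation for transactions that request it:
ALTER DATABASE Shop SET ALLOW_SNAPSHOT_ISOLATION ON;

Both options keep row versions in tempdb, so it is worth testing tempdb load before deploying; READ_COMMITTED_SNAPSHOT is usually the lower-effort change since the application code (on the default level) needs no modification.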
I have a database that is in full recovery mode. I have four maintenance plans set up: database backup, log backup, optimization, and integrity check. The last two plans run weekly and the first two run daily. I found that the log often grows to a dramatic size in a very short period, almost the same size as the database file (4 GB). Furthermore, the size seems to increase a lot after the last two plans run.
My question is: does the optimization operation (rebuilding index pages) write records to the log file? Could this be the reason?
Right now the log file occupies too much disk space (90% of the space could be freed). What should I do? Shrink the database weekly?
Hi, I am in the process of creating a new DB in SQL. In my users table I wish to set the UserID as an integer datatype. It defaults to precision 4. Does this mean that when the column auto-increments (it's my primary key with a seed of one), the highest number allowed in the table would be 9999?
Also, if you were to store a phone number in your DB, what column type would you give it? I have used varchar, but it's all numbers I want to store. Would this suffice?
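On both points: the "4" shown for int is its storage size in bytes, not a digit limit — an int identity can grow to 2,147,483,647 — and varchar is the usual choice for phone numbers, since they aren't values you do arithmetic on. A sketch with hypothetical names:

Code:
CREATE TABLE users (
    UserId int IDENTITY(1,1) PRIMARY KEY,  -- max value 2,147,483,647
    -- varchar preserves leading zeros and characters like + or -,
    -- which an integer type would silently destroy or reject:
    Phone  varchar(20)
);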