Help Required For Right Precision And Scale

Dec 7, 2007



Hi, I am receiving data from a flat file.

These are amount fields, for example:

123456.89

I am selecting numeric(8,2) as the data type. Is this valid? Please let me know.
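
For what it's worth, a minimal T-SQL sketch (the wider alternative and the sample values are illustrative, not from the original post): numeric(8,2) holds 123456.89 exactly, but leaves no headroom.

-- numeric(8,2) = 8 digits in total, 2 of them after the decimal point,
-- so the largest value it can hold is 999999.99
DECLARE @amt numeric(8,2)
SET @amt = 123456.89        -- fits exactly, but leaves no headroom

DECLARE @amtWide numeric(10,2)
SET @amtWide = 12345678.89  -- numeric(10,2) allows amounts up to 99999999.99

SELECT @amt AS amount_8_2, @amtWide AS amount_10_2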

View 3 Replies



Precision And Scale?

Nov 8, 2004

Hello,

I have been trying to develop an automatic way of programmatically accessing data sources and performing some predefined (supported) processing on them.

The question I would like to ask you people has to do with numeric fields. What exactly is precision? Is it the maximum length in digits of a field, or is there more to it? And what is a field's scale, and how does it affect how a field's value is handled?

Thank you very much,

Dave
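
As a rough illustration (a sketch, not from the original thread): precision is the total number of digits a decimal/numeric value can hold, and scale is how many of those digits sit to the right of the decimal point.

-- decimal(7,3): precision 7 (total digits), scale 3 (digits after the decimal point)
DECLARE @d decimal(7,3)
SET @d = 9999.999       -- largest value that fits
SELECT @d AS max_value

-- SET @d = 12345.6     -- would fail: 5 integer digits + 3 decimal digits needs precision 8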

View 1 Replies View Related

Precision And Scale In A Calculated Column

Dec 19, 2006

How do I set precision and scale in a calculated column?

I'm trying to limit the decimal places returned in a calculated column but can't find where to set the scale. What am I missing, please?



Thanks,

Scott
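
One way this is often handled (a sketch with made-up table and column names, not necessarily Scott's setup) is to wrap the computed expression in a CAST so you choose the precision and scale yourself:

-- The CAST fixes the computed column's type at decimal(10,2)
CREATE TABLE dbo.OrderLine
(
    Quantity  decimal(9,3) NOT NULL,
    Price     decimal(9,3) NOT NULL,
    LineTotal AS CAST(Quantity * Price AS decimal(10,2))
)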

View 6 Replies View Related

Precision And Scale Of SqlDecimal Parameter To Stored Procedure

Feb 8, 2007

I am using SQL CLR Integration to create a series of stored procedures.

I am building and deploying from Visual Studio 2005 SP1 and everything is working well except for my stored procedures that have a SqlDecimal typed input argument. By default, the precision and scale of the SqlDecimal is deployed to SqlServer as (18,0).

How can I change this default?

This is an example of my stored procedure definition:



namespace Microsoft.Hurley.DataStore
{
    public partial class StoredProcedures
    {
        [Microsoft.SqlServer.Server.SqlProcedure(Name = DB.PROC.TEST.INSERT)]
        public static Int32 insertTest(out SqlInt16 cTestID, SqlInt16 cOrgID, SqlString cName, SqlDecimal cPassPercentage, SqlByte cNumberQuestionsToDisplay, SqlByte cMaxNumberAttempts, SqlBoolean cIsActive)
        {
            ...
            return returnValue;
        }
    }
}
From SQL Server after the procedure is deployed:

[dbo].[insertTest]
    @cTestID [smallint] OUTPUT,
    @cOrgID [smallint],
    @cName [nvarchar](4000),
    @cPassPercentage [numeric](18, 0),
    @cNumberQuestionsToDisplay [tinyint],
    @cMaxNumberAttempts [tinyint],
    @cIsActive [bit]



Thanks!
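
One workaround that is sometimes suggested (a hedged sketch; the assembly name below is assumed from the namespace and may differ) is to exclude this procedure from auto-deployment and register it by hand, specifying the parameter precision and scale you actually want:

-- Manually registered CLR procedure with an explicit decimal(5,2) parameter
CREATE PROCEDURE [dbo].[insertTest]
    @cTestID                   smallint OUTPUT,
    @cOrgID                    smallint,
    @cName                     nvarchar(4000),
    @cPassPercentage           decimal(5,2),   -- instead of the deployed default numeric(18,0)
    @cNumberQuestionsToDisplay tinyint,
    @cMaxNumberAttempts        tinyint,
    @cIsActive                 bit
AS EXTERNAL NAME [Microsoft.Hurley.DataStore].[Microsoft.Hurley.DataStore.StoredProcedures].[insertTest]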

View 4 Replies View Related

Scale Greater Than Precision - Not A Valid Instance Of Data Type Real

Feb 14, 2006

Our shop recently upgraded to MS SQL 2005 server from the prior SQL 2000 product.

I receive an error during processing related to inserting a value into a field of data type real. This worked for years under MS SQL 2000, and now this code throws an exception.

The exception states:

The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Parameter 15 ("@TEST"): The supplied value is not a valid instance of data type real. Check the source data for invalid values. An example of an invalid value is data of numeric type with scale greater than precision.

This error is caused by inserting several values that fall outside of a range that MS SQL 2005 documentation specifies.

The first value that fails is 6.61242e-039. SQL Server 2005 documentation seems to indicate that values for the datatype real must be -3.40E+38 to -1.18E-38, 0, or 1.18E-38 to 3.40E+38.

Why doesn't 6.61242e-039 just default to 0 like it used to?

I saw an article that might apply, even though I just use a C++ float type and use some ATL templates.

Is my question related to this post? http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=201636&SiteID=1
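
For illustration only (a sketch of one defensive option, not a diagnosis of the original code): flush values whose magnitude falls below the smallest normalised real to zero before they ever reach the real parameter.

-- Values with magnitude below about 1.18E-38 cannot be represented as a normalised real;
-- this clamps them to 0 before any conversion to real takes place
DECLARE @v float
SET @v = 6.61242E-39

SELECT CAST(CASE WHEN ABS(@v) < 1.18E-38 THEN 0 ELSE @v END AS real) AS clamped_value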

View 10 Replies View Related

DataReader Source Error - Cannot Change The Datatype, Precision Or Scale In The Output Columns

Oct 3, 2007

I have a data source that I access via ODBC in a DataReader Source component in SSIS. I can access the data fine. However, I am having problems with certain fields that are numeric (specifically home prices ranging from 100,000.00 to 99,999,999.00). In the Advanced Editor for my DataReader Source, under the Input and Output Properties tab, in the DataReader output under the external columns and output columns, these fields for some reason default to numeric data types with a precision of 4 and a scale of zero, which is not large enough to hold the data that is coming in. This causes errors that make the data come in as NULL (after I specify to ignore the errors).

I can change the precision and scale to 18 and 4 in the external columns, but when I try to change the datatype, precision or scale in the output columns I get the following message:

Property Value is not valid.

The details are:

Error at Import DataReader Source: The data type of output columns on the component "DataReader Source" cannot be changed.
Error at DataReader Source: System.Runtime.InteropServices.COMException (0xC020837D)
at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.SetOutputColumnDataTypeProperties(Int32 iOutputID, Int32 iOutputColumnID, DataType eDataType, Int32 iLength, Int32 iPrecision, Int32 iScale, Int32 iCodePage)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostSetOutputColumnDataTypeProperties(IDTSManagedComponentWrapper90 wrapper, Int32 iOutputID, Int32 iOutputColumnID, DataType eDataType, Int32 iLength, Int32 iPrecision, Int32 iScale, Int32 iCodePage)

Any help is greatly appreciated.
Dave
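
One commonly mentioned workaround (a hedged sketch; the table and column names are invented) is to push the conversion into the source query itself, so the DataReader's external and output columns are created with an adequate precision from the start:

-- Source query for the DataReader Source: cast the price so the column
-- arrives as DECIMAL(18,4) instead of the badly guessed NUMERIC(4,0)
SELECT CAST(home_price AS DECIMAL(18,4)) AS home_price
FROM   listings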

View 1 Replies View Related

Error At OutputColumn.SetDataTypeProperties(dataType, Length, Precision, Scale, TargetColumn.CodePage);

Feb 14, 2008



I have generated the package, but it gives me an error at:


outputColumn.SetDataTypeProperties(dataType, length, precision, scale, targetColumn.CodePage);

private void ConfigureDataConversionComponent(Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSComponentMetaData90 dataconversionComponent)
{
    PipeLineWrapper.CManagedComponentWrapper managedOleInstance = managedOleInstanceDataConversionComponent;

    // Get the component's default input and virtual input.
    PipeLineWrapper.IDTSInput90 input = dataconversionComponent.InputCollection[0];
    PipeLineWrapper.IDTSVirtualInput90 vInput = input.GetVirtualInput();

    // Iterate through the virtual input column collection and select every column as read-only.
    foreach (PipeLineWrapper.IDTSVirtualInputColumn90 vColumn in vInput.VirtualInputColumnCollection)
    {
        managedOleInstance.SetUsageType(
            input.ID, vInput, vColumn.LineageID, PipeLineWrapper.DTSUsageType.UT_READONLY);
    }

    // Set the truncation row disposition.
    dataconversionComponent.OutputCollection[0].TruncationRowDisposition =
        PipeLineWrapper.DTSRowDisposition.RD_NotUsed;

    // Set the error row disposition.
    dataconversionComponent.OutputCollection[0].ErrorRowDisposition =
        PipeLineWrapper.DTSRowDisposition.RD_NotUsed;

    // Get a reference to the output column collection.
    PipeLineWrapper.IDTSOutput90 output = dataconversionComponent.OutputCollection[0];

    foreach (PipeLineWrapper.IDTSInputColumn90 inColumn in
        dataconversionComponent.InputCollection[0].InputColumnCollection)
    {
        // Create the mapped output column.
        PipeLineWrapper.IDTSOutputColumn90 outputColumn =
            dataconversionComponent.OutputCollection[0].OutputColumnCollection.New();
        outputColumn.Name = inColumn.Name;

        // Get the target column from the mapping information.
        Column targetColumn = GetTargetColumnInfo(inColumn.Name);

        // Length, precision, scale and SSIS-compliant data type of the target column.
        int length = targetColumn.Length;
        int precision = targetColumn.Precision;
        int scale = targetColumn.Scale;
        Microsoft.SqlServer.Dts.Runtime.Wrapper.DataType dataType = targetColumn.DataType;

        // Set the data type properties.
        outputColumn.SetDataTypeProperties(dataType, length, precision, scale, targetColumn.CodePage);

        // Set the external metadata column ID to zero.
        outputColumn.ExternalMetadataColumnID = 0;

        outputColumn.ErrorRowDisposition = PipeLineWrapper.DTSRowDisposition.RD_RedirectRow;
        outputColumn.TruncationRowDisposition = PipeLineWrapper.DTSRowDisposition.RD_RedirectRow;

        PipeLineWrapper.IDTSCustomProperty90 property = outputColumn.CustomPropertyCollection.New();
        property.Name = "SourceInputColumnLineageID";
        property.Value = GetSourceColumnLineageID(targetColumn.Name);

        property = outputColumn.CustomPropertyCollection.New();
        property.Name = "FastParse";
        property.Value = false;

        // Preserve the lineage ID in a dictionary. Later, when the SQL destination component
        // is configured, we need to distinguish the inputs coming from the flat file from the
        // inputs coming from this component's converted output columns, and only the latter
        // are mapped into the destination. That is what this dictionary is for.
        derivedLineageIdentifiers[outputColumn.LineageID] = outputColumn.Name;
    }
}

View 9 Replies View Related

Set Precision

Jan 9, 2006

Hi, I have a database field of type "money". But when I retrieve it to a text box, it shows 5.0000, and I want only 5.00 to be shown. How do I format that?
Any reply will be much appreciated. :)
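
A small T-SQL sketch of one option (the table and column names are made up): cast the money value to decimal(10,2) in the query that feeds the text box, so only two decimal places come back.

-- money is always stored with four decimal places; the CAST trims it to two
SELECT CAST(Price AS decimal(10,2)) AS Price
FROM   dbo.Products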

View 2 Replies View Related

How To Set Float Precision

Feb 20, 2000

Hi!
I'm quite new to SQL Server. I need a float data type to display something like 3.55. However, all values that are stored in the float column are truncated to 4 or some other single digit. How can this be prevented?

Regards,
Sam
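
For illustration (a hedged sketch; the variable stands in for the column): STR() or a CAST to decimal both pin a float down to two decimal places at query time.

DECLARE @f float
SET @f = 3.55

SELECT STR(@f, 10, 2)             AS as_string,   -- '      3.55'
       CAST(@f AS decimal(10,2))  AS as_decimal   -- 3.55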

View 1 Replies View Related

Float Precision

Sep 5, 2002

Hello everyone,

I am sure this is a newbie question as I am new to Microsoft SQL Server, but any help is greatly appreciated. I am populating a SQL database from an AS400. The decimal numbers from the AS400 are coming across with extra decimals (i.e., 63.02 is coming across as 63.0200001).

Is there a way to limit the number of decimals in a float or real field, or a SQL command I can put in a script to truncate each field to 2 decimal places after they are populated?

Thanks,
Randy
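
A hedged sketch of the two usual approaches (the table and column names are invented): either round the values in place after the load, or switch the column to an exact type so the extra digits never appear.

-- One-off clean-up after the load from the AS400
UPDATE dbo.ImportedAmounts
SET    Amount = ROUND(Amount, 2)

-- Longer term: an exact type avoids the binary-float artefacts entirely
ALTER TABLE dbo.ImportedAmounts ALTER COLUMN Amount decimal(9,2)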

View 1 Replies View Related

Formulas & Precision

Sep 21, 2006

I have a table with a 'quantity' column (decimal(9,3)) and a 'price' column (decimal(9,3)). I have a third column, 'amount', with a formula of (price * quantity). The formula gives the correct answer, but the precision is automatically set to 5. Is there any way to set the precision of the result to 2?

Thanks.

View 4 Replies View Related

Timestamp Precision

Jan 21, 2008

I am using ASP and SQL 2005 Express. I am inserting a timestamp from an ASP page using <%=now%> into a smalldatetime field. All of my timestamps are appearing without any seconds (e.g., 1/21/2008 4:02:00 PM or 1/18/2008 11:32:00 AM). When I view the source for my page it shows the date/time as 1/21/2008 4:27:31 PM, but for some reason the seconds are converted, so it becomes 1/21/2008 4:27:00 PM. How do I get a more precise timestamp? Please help.
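
For reference, a small T-SQL sketch (the dates are illustrative): smalldatetime is only accurate to the minute, while datetime keeps seconds (to roughly 3 ms), so switching the column type preserves the seconds.

DECLARE @sd smalldatetime
DECLARE @dt datetime

SET @sd = '20080121 16:27:20'
SET @dt = '20080121 16:27:20'

SELECT @sd AS as_smalldatetime,   -- 2008-01-21 16:27:00 (seconds rounded away)
       @dt AS as_datetime         -- 2008-01-21 16:27:20.000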

View 1 Replies View Related

Error: The Precision Must Be Between 1 And 38.

Apr 18, 2007

Hi All

I am trying to pull data from Oracle to SQL Server, but if I use the OLE DB Source then I get this error:



Error at Data Flow Task [DTS.Pipeline]: The "output column "CUST_ID" (590)" has a precision that is not valid. The precision must be between 1 and 38.



------------------------------
ADDITIONAL INFORMATION:

Exception from HRESULT: 0xC0204018 (Microsoft.SqlServer.DTSPipelineWrap)


The only solution I found is to use the DataReader Source.

If I use the DataReader Source everything works fine; I mean, I am able to see the records and convert them to the desired data type (using the Data Conversion component).

My question is what component should I use as the destination, because if I use the OLE DB Destination I get a red cross on the component, although I can map all the columns.....
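
One workaround that is often suggested (a hedged sketch; CUST_ID's real Oracle type and the table name are unknown here) is to give the column an explicit precision in the source query, so the OLE DB provider no longer sees an undefined NUMBER:

-- Oracle-side source query for the OLE DB Source
SELECT CAST(CUST_ID AS NUMBER(18,0)) AS CUST_ID
FROM   customer_table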

View 9 Replies View Related

Can Someone Explain The Precision Of An Integer In A Sql Db Pls

Nov 3, 2003

Hi, I am in the process of creating a new DB in SQL. In my Users table I wish to set the UserIDs as the integer data type. It defaults to precision 4. Does this mean that when the column auto-increments (it's my primary key with a seed of one), the highest number allowed in the table would be row 9999?

Also, if you were to store a phone number in your DB, what column type would you give it? I have used varchar, but it's all numbers I want to store. Would this suffice?

Thanks
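
A short sketch for illustration (the table name is invented): the 4 shown in the designer is the storage size in bytes, not a four-digit cap, so an int identity is nowhere near limited to 9999 rows; and phone numbers are usually kept in a varchar, since they are not arithmetic values.

-- int occupies 4 bytes and ranges from -2,147,483,648 to 2,147,483,647
CREATE TABLE dbo.Users
(
    UserID int IDENTITY(1,1) PRIMARY KEY,
    Phone  varchar(20) NULL   -- keeps leading zeros and formatting, which an int would lose
)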

View 1 Replies View Related

Subtraction Of Different Precision Values

Sep 24, 2007

We have a field which is decimal(9,2) and another which is decimal(9,3). Is there any way to subtract the two and get a precision-3 value without changing the first field to (9,3)? For instance, retail value is (9,2), but our costs are at (9,3) due to being averaged. To calculate margin (retail - cost), we want that also to be (9,3), but a basic subtraction comes out (9,2). You can see we don't want to increase retail to be (9,3) (that would look funny), and it seems wasteful to store retail twice (one (9,2) for users and one (9,3) for the margin calc)... is there any other way?
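
A hedged sketch of one option (the column and table names are invented): cast the (9,2) value up inside the calculation only, so the stored column stays as it is but the margin comes back with three decimal places.

-- Retail is decimal(9,2), Cost is decimal(9,3); the CAST affects only this expression
SELECT CAST(Retail AS decimal(10,3)) - Cost AS Margin
FROM   dbo.Prices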

View 2 Replies View Related

How To Set Precision Of A Decimal Number

Oct 8, 2006

There is a column in a table whose type is float. I want to set the precision of its value; for example, if its value is 10.333888, I want to get it as 10.33. How do I do that in a SELECT statement?

Thanks
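
For illustration, a minimal sketch (the variable stands in for the float column): CAST rounds to two decimals, while ROUND with a non-zero third argument truncates.

DECLARE @v float
SET @v = 10.333888

SELECT CAST(@v AS decimal(10,2)) AS rounded_value,    -- 10.33
       ROUND(@v, 2, 1)           AS truncated_value   -- 10.33 (third argument <> 0 truncates)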

View 3 Replies View Related

Data Types And Precision

Jan 31, 2008

I receive patient demographic files from hospitals that are in several different formats. I have written translations for each format. I need to upload the files into our accounting software. I have the file layout to upload data and it looks like this.







Field                From  To   Length
Record Type Code     1     2    2A
Account Number       3     17   15A
Guarantor Name       18    47   30A
Guarantor Zip        125   129  5N
Guarantor Area Code  134   136  3N

In SQL Server I have not found a way to set precision on an int. I have to have the correct length and data type (A for alpha, which is left-justified, and N for numeric, which is right-justified) for a successful upload. Suggestions on what data types to use would be very helpful, and then suggestions on how to output the data in a text file as described in the example above would be a life saver.
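
A rough sketch of how the fixed-width line could be built in T-SQL (the table and column names are invented, and only three of the fields are shown): keep the data in varchar/char columns sized to the layout, right-pad the alpha fields with spaces and left-pad the numeric fields with zeros.

SELECT LEFT(AccountNumber + REPLICATE(' ', 15), 15)     -- Account Number, 15A, left-justified
     + LEFT(GuarantorName + REPLICATE(' ', 30), 30)     -- Guarantor Name, 30A, left-justified
     + RIGHT(REPLICATE('0', 5) + GuarantorZip, 5)       -- Guarantor Zip, 5N, right-justified
       AS FixedWidthLine
FROM   dbo.PatientDemographics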

View 1 Replies View Related

Precision Problem (rounding Values)

Jul 19, 2004

I am using SQL 2000 in a kind of electronic-wallet way. Users put money onto an account and spend it on various services on a system. The cost of those services is deducted from the value in their wallet, and everybody's happy. However, some very strange things have been happening to my transactions, seemingly at random.

Some transactions (such as purchasing time on the Internet) are returning values such as 0.10000000000000001 instead of 0.1. This minute difference affects the user's wallet balance because the rogue digit is subtracted from their account. So instead of a balance of, say, 3.4 they have 3.39999999999999999.

"So what?", I hear you say. Well, the problem comes when it's time to give them a refund. They walk over to a kiosk and the machine tells them they have 3.40 remaining in their account (it's nicely rounding up the value), but when they click Refund, it tells them they have insufficient funds to complete the refund! (Note: the refund amount is being compared with the wallet balance.) If I go into the database via Query Analyzer it tells me their balance is 3.3999999 etc., but in Enterprise Manager the value is 3.4. If I try to manipulate the data in any way it is treated as 3.4. However, if I add 0.000000000000001 then QA reads the value as 3.4 and the customer can get their refund.

My question is this. One, how the hell do I stop this from happening? I only need the two decimal places. Taking the value in a query and rounding it up or chopping off the remaining decimal places hasn't worked; it always picks up the value as 3.4 in a query. Two, why on Earth is this happening? Has anyone experienced this problem before?

Thanks in advance to anyone that's read this far down.
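
A hedged sketch of the usual remedy (the table and column names are invented): keep the balance in an exact type such as decimal or money rather than float, and round explicitly at the comparison point while old float values are still around.

-- Exact storage: 0.1 stays 0.1, so balances never drift to 3.3999...
ALTER TABLE dbo.Wallet ALTER COLUMN Balance decimal(19,4)

-- Defensive comparison during the transition
DECLARE @RefundAmount decimal(19,4)
SET @RefundAmount = 3.40

SELECT AccountID
FROM   dbo.Wallet
WHERE  ROUND(Balance, 2) >= @RefundAmount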

View 3 Replies View Related

The Output Column Has A Precision That's Not Valid

May 5, 2008

Hi,

I'm importing data from an Oracle database to a SQL Server one through an SSIS package, and I'm getting this error:
"The output column "earned_hours" has a precision that is not valid. The precision must be between 1 and 38".
The package runs but returns this column as NULL values.

earned_hours is of type "NUMBER" in Oracle (some of the values are decimals). I tried making it numeric(x,y), float, or decimal(x,y), but I'm still getting the same results.

Does anybody know why this is happening, or have a solution for this error?

Thanks

View 5 Replies View Related

Level Of Precision With Date Functions

Jun 4, 2008

I am getting some unexpected behaviour using datetimes (on SP2, though I haven't tried other builds), as below...

<code>
----------------------------------------------------------------------

---if i run...
select dateadd(day,datediff(day,0,getutcdate()),0) as today
,dateadd(ms,-1,dateadd(day,datediff(day,0,getutcdate()),1)) as end_of_today

/*
i'd expect the result to be...

today end_of_today
----------------------- -----------------------
2008-06-04 00:00:00.000 2008-06-04 23:59:59.999

what i actually get is...

today end_of_today
----------------------- -----------------------
2008-06-04 00:00:00.000 2008-06-05 00:00:00.000

*/
------------------------------------------------------------------------

--i run...
select dateadd(day,datediff(day,0,getutcdate()),0) as today
,dateadd(ms,-2,dateadd(day,datediff(day,0,getutcdate()),1)) as end_of_today

/*
i expect...
today end_of_today
----------------------- -----------------------
2008-06-04 00:00:00.000 2008-06-04 23:59:59.998

i get...
today end_of_today
----------------------- -----------------------
2008-06-04 00:00:00.000 2008-06-04 23:59:59.997

*/
------------------------------------------------------------------------
--even as simple as this, the result is the same...
select cast('23:59:59.999' as datetime) as end_of_day

/*
results in....
end_of_day
-----------------------
1900-01-02 00:00:00.000

*/

</code>

Can anyone shed any light, and also suggest how to reliably create a datetime like yyyymmdd hh:mi:59.999?




Em
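
For what it's worth, a sketch along the same lines as the code above (assuming the datetime type only): datetime is accurate to roughly 1/300 of a second, so milliseconds round to .000, .003 or .007; .999 rounds up to the next day, while .997 is the largest value that still falls inside the day.

-- largest datetime value that still falls on "today"
select dateadd(ms, -3, dateadd(day, datediff(day, 0, getutcdate()), 1)) as end_of_today

-- often safer: use an exclusive upper bound instead of chasing the last tick
select dateadd(day, datediff(day, 0, getutcdate()), 1) as start_of_tomorrow
-- ... WHERE some_date >= @today AND some_date < @start_of_tomorrow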

View 6 Replies View Related

Automatically Reducing Precision On Numerics

Mar 26, 2008

Hi all,

I'm running a transformation script that's taking decimal(18,10) data and trying to shoehorn it into a numeric(9,6). Generally this works, as most of the data in the original table is not using anywhere near the precision it's capable of, but once in a while I run into one that does use it.

Is there any way to automagically reduce the precision so that I can cram the data into the destination table?

___________________________
Geek At Large
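
A hedged sketch of one way to cope (the source names are invented, and whether to NULL, cap or redirect the offending rows is a business decision): the scale reduction from 10 to 6 happens automatically on the cast, so the only values that genuinely cannot be squeezed in are those with more than three integer digits.

SELECT CASE
           WHEN ABS(src_value) >= 1000 THEN NULL   -- cannot fit numeric(9,6); flag or redirect instead
           ELSE CAST(src_value AS numeric(9,6))    -- scale is simply rounded from 10 down to 6
       END AS dest_value
FROM   dbo.SourceTable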

View 3 Replies View Related

Set Default Precision On Decimal Type?

Dec 8, 2006

This one cost me a solid half hour yesterday. I'm wondering why on earth the default precision for a decimal type is 18,0. Maybe I'm mistaken. A decimal data type sort of implies that you'd want something after the decimal! The question is, can I set this database-wide? Like all new decimal data types have a precision of 12,6 or something like that? I haven't seen anything about this in the googling I have done...
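
As far as I know there is no database-wide default you can change, but an alias type gives you a reusable shorthand. A hedged sketch (the type name is invented; on SQL 2000 the equivalent is sp_addtype):

-- Reusable alias for decimal(12,6)
CREATE TYPE dbo.Amount6 FROM decimal(12,6) NOT NULL

-- then, for example: CREATE TABLE dbo.Example (Value dbo.Amount6)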

View 3 Replies View Related

Precision And User Defined Types ...

Oct 9, 2007

I have a UserDefinedType that is Decimal(21,6)

The fields fltValorPendente and fltTotal are of this type. The field intSinal is an integer.

I execute the following query:
SELECT
(intSinal * fltValorPendente) / fltTotal as Coef,
cast((intSinal * fltValorPendente) as decimal(21,6)) / fltTotal as CoefCast
into tmp
FROM tbl
Where fltTotal<>0

As a result, table tmp is created with the following structure:
CREATE TABLE [dbo].[tmp](
[Coef] [decimal](38, 6) NULL,
[CoefCast] [decimal](38, 17) NULL
) ON [PRIMARY]

How come both fields don't get the same precision?
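
A hedged sketch of what is probably going on: for e1 / e2 the result scale is max(6, s1 + p2 + 1) and the result precision is p1 - s1 + s2 + scale, capped at 38 with the scale reduced by the overflow. intSinal (int, i.e. precision 10, scale 0) times decimal(21,6) yields decimal(32,6); casting the product back to decimal(21,6) changes the dividend's type, so the two divisions get capped differently.

-- (32,6) / (21,6): scale = max(6, 6+21+1) = 28, precision = 32-6+6+28 = 60 -> capped to (38,6)
-- (21,6) / (21,6): scale = max(6, 6+21+1) = 28, precision = 21-6+6+28 = 49 -> capped to (38,17)
SELECT CAST(1 AS decimal(32,6)) / CAST(1 AS decimal(21,6)) AS coef_like,
       CAST(1 AS decimal(21,6)) / CAST(1 AS decimal(21,6)) AS coefcast_like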

View 2 Replies View Related

Precision Lost On Subtracting Two Floats

Nov 2, 2007

Hi, first of all the query; I'm on SQL Server 2000:

--------------------------------------------------------------------------------------------------------
declare @ordered float
declare @convertion float

set @ordered = 49.0
set @convertion = 24.0

SELECT
CAST((@ordered / @convertion) AS float) as colA,
CAST((@ordered / @convertion) AS INT) as colB,

cast(CAST((@ordered / @convertion) AS float) - CAST((@ordered / @convertion) AS INT) as float) as [colA - colB],

ABS(CAST((@ordered / @convertion) AS float) - CAST((@ordered / @convertion) AS INT)) * @convertion as [The reminder]

--------------------------------------------------------------------------------------------------------

If you run that you'll get this result:

colA colB colA - colB The reminder
2.04166666666667 2 0.0416666666666665 0.999999999999996

You'll notice the workaround to get the decimal part of a number; if you wonder why, it's because the % operator in SQL 2000 only supports integers. Anyway, try ( 2.04166666666667 - 2 ) on your calculator; it will probably answer 0.04166666666667.

BUT SQL DOES NOT! You can see the result is, amazingly, 0.0416666666666665.

Does anybody know a solution for this? What am I doing wrong?

If you try the query but use 25.0 instead of 49.0, it returns the correct value!

I really appreciate any comment about this.
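
A hedged sketch of the usual workaround: do the arithmetic in decimal instead of float, so the result has a fixed, predictable scale and no stray ...65 tail appears.

declare @ordered decimal(19,6)
declare @convertion decimal(19,6)

set @ordered = 49.0
set @convertion = 24.0

-- the fractional part and the remainder both come back at a fixed scale
select (@ordered / @convertion) - cast(@ordered / @convertion as int) as decimal_part,
       @ordered - (cast(@ordered / @convertion as int) * @convertion) as remainder_units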

View 1 Replies View Related

Bar Chart Scale

Sep 7, 2007



Can you have different scales on a bar chart? I want to chart sales and quantity. Oracle lets you label the top of the bar chart as money and the bottom as quantity.

I am dividing my sales by one million and quantity by one thousand to make them similar in size on the same chart. I am using the sum of the sales/1000000 as a point label. Is there any way to limit the number of decimal places displayed? Currently it is displaying something like 1.94889312043; 1.95M would be better.

Thank you.

View 2 Replies View Related

How Do I Get It To A Precision/Scale Of .00?

Aug 6, 2007



I have set the output columns to decimal with a data scale of 2, and have also set the field to be 0.00, but in the CSV destination file it always puts .000000. How can I get it to be 0.00?

Thank you for the help

View 4 Replies View Related

Ms Sql Server Accessing Oracle Loses Precision

Dec 1, 2004

We have a view in a 9205 Oracle database. We can query it fine and the decimal precision is there. When we query this same view from MS SQL Server we lose the precision, so 115.25 becomes 115. Does anyone know a workaround for this?
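
One possible workaround, as a hedged sketch (the linked server, schema and column names are all invented): push the query to Oracle with OPENQUERY and cast the column to an explicit precision there, so the value arrives as a decimal rather than an undefined NUMBER.

SELECT *
FROM   OPENQUERY(ORACLE_LINK,
       'SELECT CAST(amount AS NUMBER(12,2)) AS amount FROM app_schema.the_view')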

View 2 Replies View Related

What Is The Right Datatype To Store Hours Up To The Minute Precision?

Jan 28, 2006

Right now the database I am working with is storing time in an integer data type, and is storing the time value in seconds. The application does not allow entering seconds; it accepts minutes and hours.

I have a report where it is doing:

SELECT SUM(TIMEENTERED)

and the SUM is *blowing* up as the SUM is reaching the BIGINT range.

I can fix the problem by changing all the code to:

SELECT SUM(CAST(TIMEENTERED AS BIGINT))

But now that I ran into this problem I want to find out if storing the time in seconds using the INTEGER data type is the best solution. I've been searching this newsgroup and other places the whole day. I even ran into my own three-year-old post. Three years ago my problem was data-migration related, and now it is more performance related than anything else. http://groups.google.com/groups?as_...y=2006&safe=off

I could not find this specific topic in SQL books like SQL for Smarties 2005 by Joe Celko (very good stuff on temporal topics but nothing specific to my question), or Inside SQL Server 2000. Which data type would be ideal and why? smalldatetime? integer? decimal? float?

The types of operations that are being done in the database are:

1 - Entering time in hours on work done on a task. For the data entry part, the application accepts 2.5 as 2 and a half hours and stores 2.5 * 3600 = 9000 seconds. It also accepts entering 2:30 as 2 hours and 30 minutes, again storing 9000 seconds. I even saw a page where you can enter clock time: "I worked from 9:30AM to 12:45PM", as an example. When I checked the underlying table(s) I saw that the ENTEREDTIME is always the duration in seconds. So the data entry can either be 2.5 hours, where ENTEREDTIME = 9000 seconds, or 9:00AM to 11:30AM, where STARTDATE is today's date (for example stored as 1/27/2005 09:00AM) and where ENTEREDTIME = 9000 seconds.

2 - All kinds of reports showing total time in hours, for example: Project1 = 18.5 hours. The code in the SPs is all like: SUM(ENTEREDTIME) / CAST(3600 AS DECIMAL(6,2)) AS TOTALTIME

3 - I am sure a lot of other arithmetic calculations are being done with this ENTEREDTIME field.

What would be the best way to store hours/minutes based on how we are using time in the database? Either I will stick with integer but store the time in minutes instead of seconds, and most likely update all the SUM(ENTEREDTIME) to SUM(CAST(ENTEREDTIME AS BIGINT)); or I will switch to storing in decimal/float and maybe avoid doing SUM(ENTEREDTIME) / CAST(3600 AS DECIMAL(6,2)) AS TOTALTIME, since the ENTEREDTIME would already be stored in hours; or I will use DATETIME, since in cases like "I worked from 9:00AM to 11:30AM" I have to have a separate column to store the date as well.

I am a little confused. I am hoping I will get some help from you, and maybe if I can't find the best solution, at least eliminate the NOT-so-good ones I am thinking of. Thank you.
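
For illustration, one possible shape of the integer-minutes option mentioned above (the table and column names are invented): store whole minutes in an int and convert to hours only when reporting.

-- duration stored as whole minutes (int), converted to hours only for the report
SELECT ProjectID,
       SUM(CAST(EnteredMinutes AS bigint)) / 60.0 AS TotalHours
FROM   dbo.TimeEntries
GROUP  BY ProjectID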

View 1 Replies View Related

MSSQL Float Precision Problem By Using PHP Driver

Nov 13, 2007

I'm connecting to MS SQL Server 2005 Express via the MS Driver for PHP (CTP version October 2007), and sometimes I don't retrieve exact float values. For example, the database holds 0.7 and I get 0.69999999999999996, but for 1.0 in the database I get 1.0. The result is the same whether I use a prepared statement (sqlsrv_conn_prepare() and sqlsrv_stmt_execute()) or sqlsrv_conn_execute() directly.

The table definition is very simple:
CREATE TABLE [dbo].[test](
a float NOT NULL
);
insert into test values(0.7);
insert into test values(1.0);
insert into test values(1.1);
insert into test values(1.2);
insert into test values(1.3);
insert into test values(1.4);
insert into test values(1.5);
insert into test values(1.6);
insert into test values(1.7);
insert into test values(1.8);
insert into test values(1.9);
insert into test values(2.0);

And select command is:
select a from test;

PHP code:
$conn = sqlsrv_connect("localhost\sqlexpress", array("UID" => "sa", "PWD" => "password"));
sqlsrv_conn_execute($conn, "USE dbname");
$stmt = sqlsrv_conn_execute($conn, "select a from test");

while($row = sqlsrv_stmt_fetch_array($stmt, SQLSRV_FETCH_TYPE_ARRAY))
echo $row[1]."; ";

Result:
0.69999999999999996; 1.0; 1.1000000000000001; 1.2; 1.3; 1.3999999999999999; 1.5; 1.6000000000000001; 1.7; 1.8; 1.8999999999999999; 2.0

Expected result:
0.7; 1.0; 1.1; 1.2; 1.3; 1.4; 1.5; 1.6; 1.7; 1.8; 1.9; 2.0

Running configuration:
Windows 2003 Server SP2, SQL Server 2005 Express SP2, IIS6, PHP 5.2.5 ISAPI

Thank you very much for help.
Vlasta
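
A hedged sketch of the usual advice: float is a binary approximation, so either store the value as decimal or cast it in the query, and the driver will return the exact digits.

-- either change the column...
-- ALTER TABLE [dbo].[test] ALTER COLUMN a decimal(10,1) NOT NULL
-- ...or cast at query time:
select cast(a as decimal(10,1)) as a from test;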

View 4 Replies View Related

Data Flow: Decimal Precision Is Lost

Apr 17, 2008

Hello,

Data is being trnasferred from an Oracle view to a SQL Server 2005 table.

Decimals can be previewed in the "SQL Command Text Window", but the columns in the target table, which are defined as float, show the data being rounded to zero decimal places.

For the data source, the "always use default code page" option is selected.

Is there a way to retain the decimals?


Thank you,

Rod

View 3 Replies View Related

Losing Precision Converting To Decimal From String!

Mar 3, 2008



Hello!


I'm having some trouble converting values represented as strings to the decimal data type.
I have a flat file source, from which I read some currency rates represented without decimals.
Before sending those values to my SQL Server destination, I want to convert them so they represent the correct values.

An example to clarify:

If my source contains a column named "curr_rate" with the value 000092500, I want to send it
to my destination as 9,2500.

So I set up a Derived Column component, converting my value like so:

((DT_NUMERIC,9,4)curr_rate)/10000

My problem is that the precision is lost, and all that's sent to my destination table is 9,0000.

How should I go about converting my strings without losing precision in the process?

Thanks in advance!

View 4 Replies View Related

Decimal Scale Overflow

Jun 27, 2001

This is not a real big deal, because I worked around it, but I just tried to INCREASE the scale on a decimal column and got an arithmetic overflow error. I can understand why this would occur when trying to decrease the scale, but not when increasing it. This is not a computed column. Why does SQL give an overflow error if all it has to do is add a couple of zeros to the end? Yes, I know SQL considers them to be different data types, but this is still confusing me.
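
A small sketch of why it happens: the integer part can only use (precision - scale) digits, so raising the scale while keeping the precision actually shrinks the room for existing values.

-- decimal(10,2) leaves 8 digits for the integer part; decimal(10,4) leaves only 6
DECLARE @v decimal(10,2)
SET @v = 12345678.99

SELECT CAST(@v AS decimal(12,4)) AS widened_ok   -- fine: precision raised along with the scale
-- SELECT CAST(@v AS decimal(10,4))              -- arithmetic overflow: 8 integer digits won't fit in 6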

View 2 Replies View Related

How Large Can SQLServer Scale?

May 5, 2004

I'm considering options for a large-scale data warehouse. Even though SQL Server can theoretically scale to 10 terabytes plus, in practice will it be able to do it? Has anyone else actually done it? Or should Oracle be used?

View 4 Replies View Related






