Simple question from a simpleton. I'm working through Microsoft Press's SQL Server 2005 Reporting Services Step by Step. It references a database, rs2005sbsDW, which as far as I can tell was not included with either the sample databases on the SQL Server (Standard Edition) install disc or the Step by Step book. Where the heck is it? What am I missing?
Trying to reinstall the sample databases from the SQL Server install disc tells me I have everything installed already.
How does OR work when mixed with AND? I know this is elementary, but my loss of SQL intellect is approaching total eclipse.

WHERE (Name LIKE @Kwrd0)
   OR (Description LIKE '%' + @Kwrd0 + ' ' + @Kwrd1 + '%')
   OR (Keywords LIKE '%' + @Kwrd0 + '%')
  AND ((@Kwrd1 IS NULL OR Keywords LIKE '%' + @Kwrd1 + '%')
   AND (@Kwrd2 IS NULL OR Keywords LIKE '%' + @Kwrd2 + '%')
   AND (@Kwrd3 IS NULL OR Keywords LIKE '%' + @Kwrd3 + '%')
   AND (@Kwrd4 IS NULL OR Keywords LIKE '%' + @Kwrd4 + '%')
   AND (Enabled = 1) AND (Display = 1) AND (Rank < 20))
ORDER BY L_Rank

It qualifies on the first two words and ignores the rest of the statement. Can ORs and ANDs be mixed? The potential input is up to five words. Thank you.
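For reference, AND binds more tightly than OR in T-SQL, so the trailing AND block only attaches to the last OR branch. A minimal sketch of the intended grouping (the table name dbo.Items is just a placeholder; the columns are taken from the query above):

-- Group the keyword matches first, then apply the common filters to all of them.
SELECT *
FROM dbo.Items
WHERE (
        Name LIKE @Kwrd0
        OR Keywords LIKE '%' + @Kwrd0 + '%'
      )
  AND (@Kwrd1 IS NULL OR Keywords LIKE '%' + @Kwrd1 + '%')
  AND Enabled = 1
  AND Display = 1
  AND Rank < 20
ORDER BY L_Rank;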
Hi all, I have a question. I have a database which is really big. I went into Enterprise Manager, right-clicked on the database, and selected Properties; the size showed 7000 MB with about 6700 MB of space available. Does that mean that when the database was initially created it allocated around 7 GB of storage, and now even if I delete some data the database size will not shrink, it only makes the available space bigger? Is there any way to shrink the database size without destroying any data?
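For what it's worth, deleting rows only frees space inside the file; the file itself is shrunk separately, and doing so does not remove data. A minimal sketch, with the database name purely as a placeholder:

-- Show how the space is currently used
EXEC sp_spaceused;

-- Release unused space back to the OS, leaving 10% free space in the files
DBCC SHRINKDATABASE (N'MyBigDatabase', 10);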
I have a mixed environment of mostly SQL 2000 servers with a few (3?) SQL 6.5 servers (the ? is there because every now and again I find a new, undocumented server hiding behind a firewall that some developer just happens to be having an issue with).
Can I install SQL 6.5 client tools alongside SQL 2000 client tools on the same server?
Sorry about this, but I've worked primarily in Access for years. Does SQL Server have a boolean or a yes/no data type for its table columns? Thanks in advance, Bill
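For reference, the closest SQL Server equivalent to Access's Yes/No type is bit, which stores 0, 1, or NULL. A small sketch with made-up table and column names:

CREATE TABLE dbo.Customers
(
    CustomerID int IDENTITY(1,1) PRIMARY KEY,
    IsActive   bit NOT NULL DEFAULT (1)   -- 1 = yes, 0 = no
);

-- Filter on the flag much like True/False in Access
SELECT CustomerID FROM dbo.Customers WHERE IsActive = 1;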
I have 4 tables (AA, MAIN, CC, DD).
Everything that is not in AA but is in MAIN, I want to put into CC.
Everything that is not in MAIN but is in AA, I want to put into DD.
Then add everything in DD to MAIN.
Then clear AA, CC and DD.
Thanks in advance.
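A rough sketch of that sequence, assuming each table has a single matching column called ID (the column name and the NOT EXISTS approach are assumptions):

-- Rows in MAIN that are not in AA go to CC
INSERT INTO CC (ID)
SELECT m.ID FROM MAIN m
WHERE NOT EXISTS (SELECT 1 FROM AA a WHERE a.ID = m.ID);

-- Rows in AA that are not in MAIN go to DD
INSERT INTO DD (ID)
SELECT a.ID FROM AA a
WHERE NOT EXISTS (SELECT 1 FROM MAIN m WHERE m.ID = a.ID);

-- Add everything in DD to MAIN
INSERT INTO MAIN (ID)
SELECT d.ID FROM DD d;

-- Clear the working tables
TRUNCATE TABLE AA;
TRUNCATE TABLE CC;
TRUNCATE TABLE DD;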
Just bought a new computer, and want to install vb express. There is an icon in my system tray for Microsoft SQL Server Service Manager ver. 8.00.2039. Does this mean that I won't have to install SQL Server Express? I can't find anything in the Add/Remove programs that shows any version of SQL Server that was pre-installed.
Hi everyone, I have a dumb question. I want to import some MS Access tables into SQL Server 2005 Express... there is supposed to be an import wizard in the Binn directory, but it doesn't seem to be there... is this wizard only part of the full-blown SQL Server 2005 package? I've searched through the SQL docs, but can't make heads or tails of it.
I have a simple flow that loads a data table from some flat files. It works properly, but I can't figure out how to add only the rows that don't already exist (so I won't get an error from the duplicate ID). I added a lookup that redirects records that don't match any ID, but when I run it I get a timeout error (?). It seems to pick up the right number of records to add, but when it gets to the SQL Server Destination it seems to generate a timeout.
SSIS package "ImportAL3.dtsx" starting. Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning. Warning: 0x802092A7 at Data Flow Task, SQL Server Destination [872]: Truncation may occur due to inserting data from data flow column "SampleID" with a length of 4000 to database column "SampleID" with a length of 10. Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR. Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR. Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning. Warning: 0x802092A7 at Data Flow Task, SQL Server Destination [872]: Truncation may occur due to inserting data from data flow column "SampleID" with a length of 4000 to database column "SampleID" with a length of 10. Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR. Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR. Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning. Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning. Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002785.AL3" has started. Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR. Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR. Information: 0x400490F4 at Data Flow Task, LookupGrade [2832]: component "LookupGrade" (2832) has cached 11 rows. Information: 0x400490F4 at Data Flow Task, LookupTestID [5608]: component "LookupTestID" (5608) has cached 0 rows. Error: 0xC0202009 at Data Flow Task, SQL Server Destination [872]: An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. 
The provider did not give any information about the error.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Reading from DTS buffer timed out.". Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning. Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C: empLW002785.AL3" is 1. Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002785.AL3" has ended. Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002786.AL3" has started. Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C: empLW002786.AL3" is 1. Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002786.AL3" has ended. Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002787.AL3" has started. Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C: empLW002787.AL3" is 1. Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002787.AL3" has ended. Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002788.AL3" has started. Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C: empLW002788.AL3" is 1. Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002788.AL3" has ended. Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002789.AL3" has started. Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C: empLW002789.AL3" is 1. Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002789.AL3" has ended. Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002790.AL3" has started. Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C: empLW002790.AL3" is 1. Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C: empLW002790.AL3" has ended. Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning. Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning. Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "SQL Server Destination" (872)" wrote 6 rows. Warning: 0x80019002 at Data Flow Task: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. Task failed: Data Flow Task Warning: 0x80019002 at ImportAL3: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. SSIS package "ImportAL3.dtsx" finished: Failure.
If I install the Desktop Edition on an NT workstation, can I then fully administer 6.5 installations of SQL Server? Right now I have both SQL Server 6.5 and SQL Server 7.0 Enterprise Edition, but I can't access my 6.5 databases because the SQL-DMO version is too low.
Pardon me, my SQL knowledge is not advanced enough to know how to do this, though I'm sure it's pretty simple:
Let's say I have a table called products, with fields like SKU, name, price. And let's say I have a temporary table with changes to be made (updates) to the current products, with the same fields: SKU, name, price. I basically would like to be able to update currently existing entries in the products table with the changes shown in the temporary table. Example:
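A minimal sketch of the usual pattern, assuming SKU is the matching key and the temporary table is named #ProductChanges (both names are assumptions):

-- Only products that also appear in the temp table are touched
UPDATE p
SET    p.name  = c.name,
       p.price = c.price
FROM   products p
JOIN   #ProductChanges c ON c.SKU = p.SKU;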
I know this is a stupid question, but I just can't find the proper explanation. I often see the letter N preceding a parameter when executing a stored procedure (i.e. exec sp_xxx @parm = N'test'). What is the significance of the N, and when should it be used?
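For reference, the N marks the literal as Unicode (nvarchar) rather than varchar; it matters when the target column or parameter is nchar/nvarchar, or when the text contains characters outside the server's code page. A small illustration:

DECLARE @plain   varchar(20);
DECLARE @unicode nvarchar(20);

SET @plain   = 'test';     -- single-byte string, interpreted via the code page
SET @unicode = N'test';    -- Unicode string literal, safe for nvarchar targets

-- When a procedure parameter is declared nvarchar, passing N'...' avoids an
-- implicit conversion (and possible loss of non-ASCII characters):
-- EXEC sp_xxx @parm = N'test';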
I am running a cross-tab query on a table that has 15,000 records in it to check specific records (it basically runs a table-valued function about 10,000 times).
Here's the actual query:
SELECT a.EmployeeID, b.*
FROM #TmpActiveEmployeesWSeverance a
CROSS APPLY dbo.fn_Severance_AccountItemsTableBULKRUN(a.EmployeeID, a.BenefitTypeID, null, null, null, null, null) b
Within that function there is a sub-query on a table that is:
SELECT col3 FROM T_Mytable a WHERE col1 = @EmployeeID AND Col2 = @BenefitTypeID
I cannot figure out why there is not ANY performance increase from having a non-clustered index on T_Mytable(EmployeeID, BenefitTypeID).
Shouldn't SQL Server reference this index when determining the execution plan, or is it because the record count is only about 10,000 records, so there is no need for SQL Server to use the index?
Thanks for the clarification, I just would like to know why this is.
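For reference, an index matching that sub-query would be created along these lines; col1/col2/col3 come from the sub-query above, and the INCLUDE clause (SQL Server 2005 and later) is only a suggestion to make the index cover the query:

-- Nonclustered index on the two filter columns; INCLUDE (col3) lets the
-- sub-query be answered from the index without touching the base table.
CREATE NONCLUSTERED INDEX IX_T_Mytable_Emp_Benefit
ON T_Mytable (col1, col2)
INCLUDE (col3);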
I need to develop a web-based (intranet) database application. Would SQL Server be the best product for the data storage? And would it be available "real time" for statewide input? (In other words, not needing replication.)
Thanks in advance for your patience and assistance.
Hello. So, here's my dumb question: if I wanted to store some *.gif images in some database field (SQL2K, possibly 2K5) and wanted to pull the information from that to display on the web form, am I actually storing the image in the database or am I storing the location of the image in the database? I ask this because I was under the impression that the location of the image file is what was being stored, but another person was saying that it was the actual image. I guess I'm confused... Thanks in advance.
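Both designs are possible; which one you have depends on the column type. A minimal sketch of the two options (table and column names are made up; image is the SQL 2000 type, varbinary(max) the 2005 replacement):

-- Option 1: store only the location; the web form loads the file from disk
CREATE TABLE dbo.ProductImages_ByPath
(
    ProductID int PRIMARY KEY,
    ImagePath varchar(260)        -- e.g. '/images/widget.gif'
);

-- Option 2: store the actual bytes of the .gif inside the row
CREATE TABLE dbo.ProductImages_ByBlob
(
    ProductID int PRIMARY KEY,
    ImageData image               -- use varbinary(max) on SQL Server 2005
);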
OK, I know a LITTLE about SQL 2000. I am just starting to get my feet wet in the area. I know how to install SQL Server, back up DBs and restore them. I can even do them over the network now. YAY ME! Anyway, here is my question. I have a production server with 4 RAID arrays: one for my OS, one for the main SQL DBs (heavy transaction DBs), one for my log files, and one that is supposed to be for my tempdb. When I installed SQL Server it asked me for the default data path, which would be where my main SQL DBs go; I don't remember it asking me where I want tempdb to go. How do I change the location of tempdb.mdf and tempdb.ldf to the drive arrays I want them on, even though I have already installed SQL Server?
Thanks. And sorry ahead of time for the noobiness of the question. I did do a search first, but didn't really see anything that would help me.
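For reference, tempdb's files are relocated with ALTER DATABASE ... MODIFY FILE, and the change takes effect after the SQL Server service is restarted; the drive letters and folders below are only placeholders, and tempdev/templog are the default logical file names:

-- Point tempdb's data and log files at the desired arrays
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\SQLTempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\SQLTempLog\templog.ldf');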
Hello All, I'm trying to find out exactly what JOIN does, e.g.

SELECT A.Name
FROM Author A JOIN Publisher P
ON A.SomeID = P.SomeID
WHERE P.Country = 'X'

I know what inner, outer, right and left joins do, but what does just JOIN on its own do? (Can't find it in help either.)
Thanks, K Finegan
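For reference, JOIN with no qualifier is simply shorthand for INNER JOIN, so these two statements are equivalent (using the tables from the example above):

-- Plain JOIN...
SELECT A.Name
FROM Author A
JOIN Publisher P ON A.SomeID = P.SomeID
WHERE P.Country = 'X';

-- ...behaves exactly like an explicit INNER JOIN
SELECT A.Name
FROM Author A
INNER JOIN Publisher P ON A.SomeID = P.SomeID
WHERE P.Country = 'X';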
I've designed a very basic SQL Server stored procedure that I'm using via a Visual Basic 6.0 front-end file in retrieving records into a data entry form. I can't for the life of me get the records retrieved via the stored procedure to be edited. I can't even run the stored procedure in MSDE's T-SQL utility and then edit any of the records shown to me. Any ideas? Thanks! Brad McCollum
How do you tell the script task NOT to write a particular row?
This code currently writes 0 and blank when the type <> 4; I'm looking NOT to write the row at all.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Private iType, iRest As String
    Private rawAmount As Double

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        iType = Row.Type
        iRest = Row.rest

        ' Only type-4 rows carry an ANI and an amount; for every other type
        ' the output columns are left at their defaults (0 / blank).
        If iType = "4" Then
            Row.ani = iRest.Substring(112, 10)
            rawAmount = CInt(iRest.Substring(41, 4))
            If rawAmount <> 0 Then
                Row.amount = rawAmount / 100   ' raw value has two implied decimal places
            End If
        End If
    End Sub

End Class
Also, I'm writing these columns to a destination Excel file, and even though the spreadsheet cells are formatted for an nn.nn numeric, every cell has an error that my data is text and I get that green wedge asking me to convert it. If I manually enter what I'm sending (for example 45.5), it takes it just fine and turns it into 45.50 numeric. What can I do about this? Is there anything additional I can send Excel to tell it to treat numerics like numerics... maybe something like mso-number-format or something?
This is probably a no-brainer for most of you, but I'm really really new to sql.
I'm using MS SQL 2005 (I think), and I'm making a scheduling program in VB that uses a database hosted on the SQL server. Does SQL Server come with archiving tools, or will I need to make something? All the other DBs on the server are third-party apps that have a little archiving tool as part of the utilities package. Will I have to make something like that to remove and save the old schedule entries? If so, can anyone point me in the right direction?
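As far as I know there is no generic archiving wizard built into SQL Server 2005; the usual approach is a small script run by a SQL Agent job. A minimal sketch, where the table names and the 90-day cutoff are pure assumptions:

-- Copy schedule entries older than the cutoff into an archive table,
-- then remove them from the live table, all in one transaction.
BEGIN TRANSACTION;

INSERT INTO ScheduleArchive (EntryID, EntryDate, Details)
SELECT EntryID, EntryDate, Details
FROM   Schedule
WHERE  EntryDate < DATEADD(day, -90, GETDATE());

DELETE FROM Schedule
WHERE  EntryDate < DATEADD(day, -90, GETDATE());

COMMIT TRANSACTION;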
Right now, I'm working on the database for a new project. The project is basically a keyword-tracking and referral-tracking website directory. My plan is to give the website some JavaScript code that will allow me to grab the http_referer and request_uri per page load (every time they get traffic).

Right now, I've got some tables I think will work fine. One is the storage table, where I'll store what the JavaScript grabs:

table http_ref (columns will be)
uid (unique ID)
http_referer
request_uri

Then I've got the results page I want to parse the http_ref table into. For example, here we have a Google results page which is a search for baby shoes. This would be the http_referer:

http://www.google.com/search?sourceid=navclient-ff&ie=UTF-8&rlz=1B2GGGL_enUS177US177&q=baby+shoes

And if someone clicks on a link and gets this page, which let's say I'm tracking in my project:

http://www.thebabymarketplace.com/securestore/c54581.2.html

then in my http_ref table I will have:

uid: (whatever)
http_referer: http://www.google.com/search?sourceid=navclient-ff&ie=UTF-8&rlz=1B2GGGL_enUS177US177&q=baby+shoes
request_uri: http://www.thebabymarketplace.com/securestore/c54581.2.html

So I want a stored proc to grab the http_referer column and parse it for the referring search engine

http://www.google.com

and the keywords

search?sourceid=navclient-ff&ie=UTF-8&rlz=1B2GGGL_enUS177US177&q=baby+shoes

Then grab the request_uri, which should be the web page visited, and INSERT into another table where I will hold keywords, referer, and refered page. Any suggestions? I've been going nuts over this.
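A rough sketch of the kind of parsing that stored proc could do with CHARINDEX and SUBSTRING; http_ref, http_referer and request_uri follow the post above, while the target table keyword_log and its columns are assumptions:

CREATE PROCEDURE dbo.ParseHttpRef
AS
BEGIN
    INSERT INTO keyword_log (referer, keywords, refered_page)
    SELECT
        -- everything up to the first '/' after 'http://' (the search engine)
        LEFT(http_referer, CHARINDEX('/', http_referer + '/', 8) - 1),
        -- everything after that '/' (the search path and query string)
        SUBSTRING(http_referer, CHARINDEX('/', http_referer + '/', 8) + 1, 4000),
        -- the page that was visited
        request_uri
    FROM http_ref;
END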
I want to know if it is possible to set RS up so it will only work for reports that are non-dynamic, i.e. a month-to-date report which has an overnight update and so no variables, as opposed to a historic report where the viewer can select which historic dates they want to view.
I'm just a little unsure of something. If I have, for instance, 2 containers both containing control flow items, and container 1 flows into container 2 (thus ensuring that container 1's items are executed first), how do I set the control flow to NOT execute container 2 at all if any of the items of container 1 has failed, BUT ensure that container 1's items are ALL executed even though some in container 1 might fail?
Thus I want all items in container 1 to at least execute, but only if all are successful should the control flow execute container 2's items.
We have never built a data warehouse. We are not even sure what the required features are to do this. Nonetheless, our developers are exploring two ETL tools: SSIS and Sunopsis.
They tend to like Sunopsis because it has been around a few years, has the equivalent of source-code modules (libraries) in the tool, has a highly granular security system, and has 500 companies using it, and therefore must have survived several ETL projects. (We don't know whether, or in how many, real-world projects SSIS has been used.) Sunopsis is Java-based and uses its own authentication scheme (no support for Windows Active Directory yet. Bummer.)
To the best of your knowledge, is SSIS, out of the box, missing any of the standard functionalities needed to populate the datastore that will be used as the basis of our warehouse? (I don't even know precisely what all these functionalities are, but my boss is worried we will find out somewhere down the road. While he recognizes that we can write our own code in .NET, he prefers things "out of the box". They may not be fast, but they tend to work right.)
OK, I have a database on a server that I would like to get onto my hard drive, so I can work on it without being connected, etc. Does anyone know how to do this? If so, could you please help me. Thanks - cam
Hi, I'm trying to work on a database when I'm not connected. I can't figure out how to get a local copy of a database (.mdf) that I created on the server onto my hard drive, using SQL Server Management Studio Express. Does anyone have any suggestions? I'd be forever in your debt. Thanks.
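One common way to get a working copy is to back the database up on the server and restore it on the local instance (detach/attach of the .mdf is the other option). A minimal sketch, where the database name, logical file names and paths are all placeholders:

-- On the server: write a full backup to a file you can copy down
BACKUP DATABASE MyDatabase
TO DISK = 'C:\Backups\MyDatabase.bak';

-- On the local Express instance: restore from the copied file,
-- relocating the data and log files to local paths
RESTORE DATABASE MyDatabase
FROM DISK = 'C:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase'     TO 'C:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_log' TO 'C:\Data\MyDatabase_log.ldf';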
[This is one of those cases where I think I know the answer, but I hope I'm wrong!]
I have a data flow which is processing data from the XML Source. There are 16 outputs from the XML Source. I have to perform a variety of validations on these outputs: things like "column 1 is required if column 2 has value 'a' or 'b'", or "column 1 or column 2 may be present, but not both".
For lookups and such, I use the Lookup component and its error output, both to redirect rows that fail the lookup, and to capture the data, column number and error code.
But, how do I do the same for "normal" validations?
If I have to use the Conditional Split transform, then I'll have to have one output per validation, and use a Union All to combine the rows again for output to an error file. This will also cost an extra "Derived Column" transform per output, in order to get a column number and possible error code per failed validation.
Worse, it's a pain to have to maintain all the columns in such a large "Union All"!
If I had the time, I might write a "Conditional Error" transform. It might be fun. But I have to be done by the end of this month, and don't even have time to create the UI for evaluating expressions!
Any tips or tricks or pointers to such would be very welcome.